Wasabi Technologies, known for its Hot Cloud Storage, is doubling down on AI infrastructure. The company announced Wasabi Fire, a high-performance NVMe storage class designed for compute-heavy AI workloads, alongside a new San Jose storage region, co-located with IBM Cloud. Both milestones aim to help enterprises maximize GPU utilization, reduce latency, and accelerate AI development at predictable costs.
“Object storage is the backbone of AI, but customers shouldn’t have to choose between speed and cost,” said David Friend, co-founder and CEO of Wasabi. “With Wasabi Fire, we’re delivering NVMe performance at disruptive prices, allowing organizations to cost-effectively store the critical data needed to train AI.”
Why Storage Matters for AI
AI spending has traditionally focused on GPUs, but rapidly growing data lakes are shifting storage from a secondary concern to a strategic cost driver. Existing high-performance options are often expensive, forcing organizations to trade budget for speed. Wasabi Fire addresses this by combining NVMe flash performance with the pricing predictability Wasabi is known for, at a flat $19.99 per terabyte per month with no egress or hidden fees.
The platform is purpose-built for:
- Machine learning and AI model training
- Real-time inference
- High-frequency data logging
- Media and analytics pipelines
By providing high throughput and low latency at competitive pricing, Wasabi Fire helps organizations unlock full GPU potential and reduce idle compute cycles.
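The flat-rate model above is simple enough to sketch: the announced price is $19.99 per terabyte per month, with no egress or hidden fees, so the bill depends only on capacity stored. The sketch below is illustrative, not an official Wasabi calculator; the function name and the example workload figures are assumptions.

```python
# Illustrative cost sketch based on Wasabi Fire's announced flat rate:
# $19.99 per TB per month, no egress or hidden fees. Only the rate
# comes from the announcement; everything else here is hypothetical.

FIRE_RATE_PER_TB = 19.99  # USD per TB per month (announced flat rate)

def monthly_cost(tb_stored: float, egress_tb: float = 0.0) -> float:
    """Estimated monthly bill in USD. Egress is free under the
    announced model, so egress_tb documents traffic but adds nothing."""
    return round(tb_stored * FIRE_RATE_PER_TB, 2)

# Hypothetical example: a 500 TB training data lake with heavy reads.
# Under this model, 2,000 TB of monthly egress leaves the bill unchanged.
print(monthly_cost(500, egress_tb=2000))
```

The point of the example is the predictability the article emphasizes: read-heavy AI training traffic does not inflate the bill, which is what makes the cost "predictable" relative to providers that meter egress.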
Strategic Expansion in Silicon Valley
Wasabi’s 16th global storage region is now operational in San Jose, co-located with IBM Cloud infrastructure. This expansion positions Wasabi at the heart of the AI ecosystem, enabling enterprises in Silicon Valley to deploy high-performance, low-latency storage for complex AI workflows.
“We’re excited for Wasabi to expand into Silicon Valley with the IBM Cloud San Jose data center,” said Alan Peacock, General Manager, IBM Cloud. “Wasabi Fire on the IBM Cloud is designed to give clients the benefits of IBM’s secured enterprise-grade infrastructure.”
With over three exabytes of data under management, Wasabi demonstrates scale and reliability, reinforcing its role as a cost-efficient, enterprise-ready storage provider for AI and ML workloads.
Analyst Perspective: Timing Meets Market Need
Industry analysts see Wasabi’s move as aligning with broader AI infrastructure trends. High-performance storage is increasingly recognized as a bottleneck in GPU-heavy environments, and predictable, scalable storage is critical to controlling costs while accelerating AI adoption.
“Wasabi’s momentum reflects a clear demand for simple, predictable cloud storage,” said Dave McCarthy, Research VP at IDC. “By adding a new storage class and expanding into Silicon Valley, Wasabi positions itself as a storage provider aligned to the full lifecycle of AI development while maintaining the simplicity that has defined its growth to date.”
Power Tomorrow’s Intelligence — Build It with TechEdgeAI