Liqid Supercharges Enterprise AI with Composable Infrastructure Upgrades
In an AI-hungry enterprise world where performance and power bills climb in tandem, Liqid is pushing back with smarter, more efficient infrastructure. The composable infrastructure leader just rolled out its most ambitious hardware-software portfolio update yet, designed to give IT teams both the agility and the muscle to handle modern AI workloads at scale, all while cutting down on wasted watts and dollars.
With AI shifting from cloud novelty to on-prem necessity, especially for inference, large language models (LLMs), and edge use cases, Liqid's latest releases arrive not a moment too soon.
Why It Matters: AI Is Shifting the Infrastructure Equation
As enterprises rush to deploy generative AI and advanced analytics, they're hitting a wall with traditional, fixed infrastructure. Static provisioning leaves racks full of underutilized GPUs and overprovisioned memory, which is both inefficient and expensive. Liqid's composable architecture, by contrast, promises real-time hardware fluidity: GPUs, memory, and storage that can be dynamically assigned where and when they're needed.
Think of it as Kubernetes for physical infrastructure: instead of juggling containers, Liqid orchestrates bare-metal resources with surgical precision.
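To make the analogy concrete, here is a toy Python sketch of what device composition looks like conceptually: a fabric-attached pool of devices that software reassigns between servers on demand. Every name here (the Fabric class, compose, release) is a hypothetical stand-in for illustration, not Liqid's actual Matrix API.

```python
from dataclasses import dataclass, field

# Toy model of a composable fabric: a shared pool of devices that can be
# attached to or detached from bare-metal nodes without recabling.
# All names are hypothetical; this is not Liqid's Matrix API.

@dataclass
class Fabric:
    free_gpus: int = 30
    free_dram_tb: int = 100
    nodes: dict = field(default_factory=dict)

    def compose(self, node: str, gpus: int = 0, dram_tb: int = 0) -> None:
        """Attach pooled devices to a bare-metal node over the fabric."""
        if gpus > self.free_gpus or dram_tb > self.free_dram_tb:
            raise RuntimeError("not enough free devices in the pool")
        self.free_gpus -= gpus
        self.free_dram_tb -= dram_tb
        self.nodes[node] = {"gpus": gpus, "dram_tb": dram_tb}

    def release(self, node: str) -> None:
        """Return a node's devices to the pool for the next workload."""
        freed = self.nodes.pop(node)
        self.free_gpus += freed["gpus"]
        self.free_dram_tb += freed["dram_tb"]

fabric = Fabric()
fabric.compose("inference-01", gpus=8, dram_tb=4)  # burst for an LLM job
fabric.release("inference-01")                     # hand hardware back
```

The point of the model: reallocating hardware becomes a software call measured in seconds, not a procurement or recabling exercise measured in weeks.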
What’s New: A Full-Stack Assault on AI Inefficiency
At the heart of this release is Liqid Matrix® 3.6, an updated version of its core composable infrastructure software. Now delivering a unified interface for managing GPU, memory, and storage resources in real time, Matrix 3.6 is built to squeeze every ounce of performance from each component, no matter how chaotic the workload mix.
Also announced:
- Liqid EX-5410P: A PCIe Gen5, 10-slot GPU chassis supporting modern 600W GPUs like NVIDIA’s H200 and Intel Gaudi 3. It enables GPU composability over ultra-low-latency, high-bandwidth interconnects.
- Liqid EX-5410C: A memory composability chassis using CXL 2.0 to pool DRAM across systems, perfect for LLMs and in-memory databases. It’s also backed by Matrix software.
- LQD-5500 NVMe Drives: Updated PCIe Gen5 I/O accelerators offering up to 128TB of capacity and bandwidth up to 50GB/s, tailored for high-throughput AI and HPC storage needs.
The performance claims? Up to 2x more tokens per watt and 50% higher tokens per dollar, the metrics rapidly becoming the gold standard for evaluating AI infrastructure.
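Both metrics are simple ratios, which makes them easy to reproduce on your own stack. The sketch below shows the arithmetic; the throughput, power, and cost figures are illustrative placeholders, not Liqid's published numbers.

```python
# Illustrative math for the two efficiency metrics. The figures below are
# made-up example inputs, not vendor benchmark data.

tokens_per_sec = 12_000     # measured inference throughput
avg_power_watts = 4_800     # wall power drawn during the run
cost_per_hour_usd = 9.50    # amortized hardware + energy cost per hour

tokens_per_watt = tokens_per_sec / avg_power_watts
tokens_per_dollar = tokens_per_sec * 3600 / cost_per_hour_usd

print(f"{tokens_per_watt:.2f} tokens/s per watt")    # 2.50
print(f"{tokens_per_dollar:,.0f} tokens per dollar") # ~4.5 million
```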
Composable, Scalable, Configurable: Pick Your Stack
Liqid is offering two deployment models for both GPU and memory composability:
- UltraStack: Delivers raw horsepower by dedicating up to 30 GPUs or 100TB of memory to a single server.
- SmartStack: Pools the same resources across up to 20 (GPU) or 32 (memory) nodes for dynamic allocation and better infrastructure elasticity.
This flexibility lets enterprises scale horizontally or vertically depending on the workload, without re-architecting their environment or adding excessive overhead.
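Continuing the hypothetical Fabric sketch from earlier (again, illustrative names only, not Liqid's API), the two models differ mainly in where the pooled devices land:

```python
# Requires the hypothetical Fabric class from the earlier sketch.

# UltraStack: dedicate the entire pool to a single server (scale up).
ultra = Fabric()
ultra.compose("big-node", gpus=30, dram_tb=100)

# SmartStack: spread the same pool across many servers (scale out);
# per-node allocations can be resized as workloads come and go.
smart = Fabric()
for i in range(20):
    smart.compose(f"node-{i:02d}", gpus=1, dram_tb=5)
```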
The Bigger Picture: Redefining On-Prem AI Architecture
With the rise of AI factories and on-prem inference stacks, Liqid's timing is savvy. Rivals like NVIDIA's DGX platform and Intel's Gaudi systems are pushing performance boundaries, but often at the expense of flexibility or power efficiency. Liqid is betting that software-defined composability, built on open standards like PCIe Gen5 and CXL 2.0, can compete on raw performance while winning on efficiency and modularity.
Its real-time orchestration, support for multiple accelerators (FPGAs, DPUs, TPUs), and tight integrations with Kubernetes, VMware, and Slurm make it a serious contender for any enterprise building a scalable, AI-first datacenter.
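The announcement doesn't spell out how the Kubernetes integration is wired. One common pattern, assumed purely for illustration here, is that composed GPUs surface through the standard nvidia.com/gpu device-plugin resource, so existing schedulers treat them like local devices. A minimal sketch using the stock Kubernetes Python client (the image name is a placeholder):

```python
from kubernetes import client, config

# Assumption for illustration: composed GPUs appear as the standard
# nvidia.com/gpu resource, so an ordinary pod spec can request them.

config.load_kube_config()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="llm-inference"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="worker",
                image="registry.example.com/llm-server:latest",  # placeholder
                resources=client.V1ResourceRequirements(
                    limits={"nvidia.com/gpu": "4"}  # request four GPUs
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```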
Power Tomorrow’s Intelligence — Build It with TechEdgeAI.