Liquid AI has announced a deep partnership with Shopify to deploy its flagship Liquid Foundation Models (LFMs) across latency-sensitive workflows, including e-commerce search and recommendation systems. The move follows Shopify’s participation in Liquid AI’s $250 million Series A in December 2024 and formalizes ongoing co-development.
The first production deployment, a sub-20-millisecond text model, enhances Shopify search, delivering real-time, high-quality results for millions of merchants and shoppers. LFMs are engineered for multimodal, low-latency inference, matching or exceeding the quality of popular open-weights models such as Qwen3, Gemma 3, and Llama 3 with roughly 50% fewer parameters, while running 2–10× faster.
“Recommendation is the backbone of decision-making in finance, healthcare, and e-commerce,” said Ramin Hasani, CEO of Liquid AI. “Shopify has been an ideal partner to validate that at scale. We’re excited to bring Liquid Foundation Models to millions of shoppers and merchants and show how efficient ML translates into measurable value.”
Generative Recommender System
As part of the partnership, Liquid and Shopify co-developed a generative recommender system built on a novel HSTU (Hierarchical Sequential Transduction Unit) architecture, designed to maximize conversion rates while maintaining low-latency performance. In controlled testing it outperformed Shopify's prior recommendation stack, enabling a faster, more engaging shopping experience.
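The article does not disclose implementation details of the co-developed system, but the core idea behind generative recommendation can be illustrated: treat a user's interaction history as a sequence of item IDs and score every catalog item as the likely next interaction. The sketch below is a hypothetical, simplified illustration (random embeddings, recency-weighted pooling), not Liquid AI's or Shopify's actual HSTU model.

```python
import numpy as np

# Hypothetical sketch of generative sequential recommendation:
# a user's interaction history is a sequence of item IDs, and the
# model scores every catalog item as the possible next interaction.
rng = np.random.default_rng(0)
NUM_ITEMS, DIM = 1000, 32
item_emb = rng.normal(scale=0.1, size=(NUM_ITEMS, DIM))  # learned in a real system

def score_next_items(history):
    """Pool the history into a user state, then score all catalog items."""
    vecs = item_emb[history]                        # (seq_len, dim)
    # recency-weighted pooling: later events influence the state more
    w = np.exp(np.linspace(-1.0, 0.0, len(history)))
    state = (w[:, None] * vecs).sum(0) / w.sum()    # (dim,)
    return item_emb @ state                         # (num_items,) logits

def recommend(history, k=5):
    scores = score_next_items(history)
    scores[history] = -np.inf                       # never re-recommend seen items
    return np.argsort(scores)[::-1][:k].tolist()

recs = recommend([3, 17, 42, 7])
print(recs)
```

A production system replaces the pooling step with a learned sequence model (this is where an architecture like HSTU would sit) and serves the scoring pass under a strict latency budget, which is why parameter efficiency matters.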
“I’ve seen a lot of models,” said Shopify CTO Mikhail Parakhin. “No one else delivers sub-20ms inference on real workloads like this. Liquid’s architecture is efficient without sacrificing quality; a model with ~50% fewer parameters beats Alibaba Qwen and Google Gemma, while running 2–10× faster. That’s what it takes to power interactive commerce at scale.”
Liquid AI CTO Mathias Lechner added that LFMs are optimized for production robustness, including low-variance tail latency, safety, and drift monitoring, making them ideal for personalized ranking, retrieval-augmented generation, and session-aware recommendations under tight latency and cost budgets.
Expanding Use Cases
The partnership includes a multi-purpose license for LFMs across Shopify’s low-latency, quality-sensitive workflows, ongoing R&D collaboration, and a shared roadmap for future deployments. While the current rollout focuses on search, the companies are exploring multimodal models for additional use cases such as customer profiles, AI agents, and product classification.
The collaboration highlights a broader trend: high-performance foundation models are becoming central to delivering real-time, AI-driven experiences at enterprise scale, where latency, quality, and efficiency directly impact business outcomes.