Zerve today announced the launch of the industry’s first multi-agent system designed specifically for enterprise-grade data and AI development. Moving beyond lightweight code assistants, Zerve’s Agent actively participates throughout the entire development lifecycle — from planning and provisioning infrastructure to building and deploying AI products at scale. Built for enterprise environments, the Zerve Operating System seamlessly integrates with internal infrastructure, whether cloud-based or on-premise. It offers a visual collaboration canvas where human teams and AI agents work together. Featuring multi-agent orchestration, comprehensive compute control, and native access to code, data, and workflows, Zerve transforms AI agents from mere co-pilots into true teammates.
Reducing Infrastructure Complexity and Speeding Innovation
One of Zerve’s standout features is its automatic compute provisioning and management, which eliminates the usual infrastructure headaches of running AI workflows. This empowers teams to accelerate development cycles with confidence and security.
Phily Hayes, CEO and Co-founder of Zerve, explains:
“If you’ve tried ‘vibe-coding’ agents and wanted to apply them to real enterprise AI workflows, now you can. Hosted in your environment with your policies and connected to your data and LLMs, Zerve provides a secure, productive space for teams and AI agents to explore, build, and deploy faster than ever.”
How Zerve Agents Work
Users can activate multiple AI agents on any task using natural language prompts. The agents collaboratively develop a plan, generate canvases, create and link code blocks, write code, orchestrate infrastructure, and automate data workflows. Mirroring human iterative coding with data, the agents reevaluate and retry when experiments or code fail.
New Patent-Pending Features in Zerve 2.0
Originally released in 2024 and adopted by leading organizations like NASA, Canal+, and Hewlett Packard Enterprise, Zerve’s 2.0 release introduces several game-changing capabilities:
- The Fleet: A distributed, serverless computing engine enabling massively parallel code execution with a single command, ideal for scaling calls to large language models (LLMs) efficiently (the parallel-call pattern is illustrated generically after this list).
- App Builder: Empowers data and AI teams to build scalable, robust applications without front-end or DevOps expertise. The Zerve Agent can be embedded into apps, enabling natural language querying by end users.
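For readers unfamiliar with the pattern, the sketch below shows in plain Python what fanning many LLM calls out in parallel looks like in the simplest case: a prompt template distributed across a worker pool from a single entry point. It is a generic illustration only; the names (`call_llm`, `summarize_all`) are hypothetical placeholders and do not reflect Zerve's Fleet API, which is described as handling this distribution serverlessly.

```python
# Generic illustration of parallel LLM calls -- the kind of workload the Fleet
# is described as scaling. Not Zerve's API; `call_llm` is a placeholder for
# whatever model client a team already uses.
from concurrent.futures import ThreadPoolExecutor

def call_llm(prompt: str) -> str:
    # Placeholder: substitute a real model client (hosted or local LLM).
    return f"summary of: {prompt[:40]}"

def summarize_all(documents: list[str], max_workers: int = 32) -> list[str]:
    # Fan per-document prompts out across a worker pool and collect the
    # results in their original order.
    prompts = [f"Summarize the following document:\n{doc}" for doc in documents]
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(call_llm, prompts))

if __name__ == "__main__":
    docs = ["First call transcript ...", "Second call transcript ..."]
    print(summarize_all(docs))
```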
Richard Springer, Director of Data at Cubic, notes:
“Zerve’s OS gives us the foundation to scale AI and data standardization confidently and accelerate meaningful outcomes.”
Availability and Upcoming Events
Zerve 2.0 is now available to existing customers, and new users can explore a free, full-featured Community Tier SaaS. At the Open Data Science Conference (ODSC) in Boston, Zerve Co-founder & CPO Greg Michaelson will discuss scaling GenAI workflows on May 13th and present on scalable compute and LLMs for call center analytics on May 14th. Visit Zerve at booth 27.