At KubeCon, Mirantis, a leader in open-source cloud infrastructure and platform engineering, announced that Nebul, a Netherlands-based private cloud provider, has deployed k0rdent to deliver on-demand AI inference services. Running on NVIDIA-accelerated infrastructure, Nebul lets enterprises run production AI inference workloads with high performance while preserving privacy and data sovereignty.
Nebul’s AI Infrastructure Evolution
Pioneering AI & High-Performance Computing
- Nebul specializes in high-performance computing, artificial intelligence (AI), and machine learning.
- As an NVIDIA Elite Partner, Nebul integrates the NVIDIA GPU Operator and Gcore Everywhere Inference for scalable AI operations.
Optimized AI Inference with k0rdent
- k0rdent provides Kubernetes-native multi-cluster management, ensuring seamless AI inference deployment.
- Enables low-latency, high-performance AI processing on dynamically provisioned resources; a sketch of such a declarative request follows this list.
- Policy-driven automation optimizes GPU utilization, reducing costs and enhancing efficiency.
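To make the dynamic provisioning point concrete, here is a minimal sketch of what declaratively requesting a GPU-backed child cluster could look like via the Kubernetes Python client. The API group, kind, template name, and spec fields are illustrative assumptions, not the published k0rdent schema; consult the k0rdent documentation for the real resource definitions.

```python
# Hypothetical sketch: declaratively requesting a GPU-enabled child cluster
# through a k0rdent-style custom resource on the management cluster.
# The group/version/kind, template name, and spec fields are assumed
# placeholders, not the published k0rdent API.
from kubernetes import client, config

config.load_kube_config()  # connect to the management cluster
api = client.CustomObjectsApi()

cluster_request = {
    "apiVersion": "k0rdent.example.com/v1alpha1",  # assumed group/version
    "kind": "ClusterDeployment",                   # assumed kind
    "metadata": {"name": "inference-eu-west", "namespace": "default"},
    "spec": {
        "template": "gpu-inference",               # assumed cluster template
        "config": {
            "workers": {"count": 4, "instanceType": "gpu-a100"},  # assumed fields
        },
    },
}

# The management cluster's controllers reconcile this object into a running
# child cluster with GPU workers, which can later be scaled or torn down by
# editing or deleting the same object.
api.create_namespaced_custom_object(
    group="k0rdent.example.com",
    version="v1alpha1",
    namespace="default",
    plural="clusterdeployments",
    body=cluster_request,
)
```

The point of the declarative style is that capacity follows demand: adding or removing GPU workers is an edit to a single object, and the platform converges on it.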
The Power of Open Source in AI Infrastructure
- Mirantis CEO Alex Freedland highlights that open-source technology is crucial for scalable AI infrastructure.
- k0rdent helps platform engineers tackle infrastructure sprawl and operational complexity across cloud, on-premises, and edge environments.
- Key k0rdent capabilities (a Cluster API sketch follows this list):
  - Declarative automation for simplified infrastructure management.
  - Centralized policy enforcement for compliance and security.
  - Production-ready templates optimized for AI workloads.
  - Composable architecture leveraging Cluster API for multi-cloud and on-premises deployments.
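Because k0rdent composes over Cluster API, each child cluster is represented as a standard cluster.x-k8s.io Cluster object on the management cluster. A minimal sketch of enumerating them, assuming a reachable management cluster with the upstream Cluster API v1beta1 CRDs installed:

```python
# Minimal sketch: list Cluster API "Cluster" objects on a management cluster.
# cluster.x-k8s.io/v1beta1 is the upstream Cluster API group; the rest is
# generic kubernetes-client usage.
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

clusters = api.list_cluster_custom_object(
    group="cluster.x-k8s.io",
    version="v1beta1",
    plural="clusters",
)

for item in clusters.get("items", []):
    name = item["metadata"]["name"]
    phase = item.get("status", {}).get("phase", "Unknown")  # e.g. "Provisioned"
    print(f"{name}: {phase}")
```

This is what makes the architecture multi-cloud by construction: the same Cluster abstraction fronts AWS, OpenStack, bare-metal, and edge environments through interchangeable infrastructure providers.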
Nebul’s Transition to Inference-as-a-Service
Shifting from VMware to Open Source
- k0rdent lets Nebul unify its diverse infrastructure, integrating OpenStack, bare-metal Kubernetes, and cloud environments under one management plane.
- CEO Arnold Juffer emphasizes that k0rdent streamlines operations and accelerates Nebul’s transition to Inference-as-a-Service.
- Enterprises can now bring AI models to their data and run inference securely and in compliance with regulations.
Gcore’s Edge AI Integration
- Smart Routing directs AI inference tasks to the nearest GPUs, minimizing latency (a toy illustration follows below).
- The Everywhere Inference portal simplifies deployment and management of AI inference workloads.
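Conceptually, smart routing reduces to picking the lowest-latency healthy GPU endpoint for each request. The toy sketch below probes placeholder endpoints client-side purely to illustrate the idea; in Gcore Everywhere Inference the routing decision is made inside the platform, not by caller code.

```python
# Toy illustration of nearest-endpoint selection: probe each candidate GPU
# inference region and route to the one with the lowest round-trip time.
# The URLs are placeholders; real smart routing happens server-side in the
# provider's platform.
import time
import urllib.request

ENDPOINTS = [
    "https://inference-eu.example.com/healthz",
    "https://inference-us.example.com/healthz",
]

def rtt(url: str, timeout: float = 2.0) -> float:
    """Round-trip time to an endpoint, or infinity if it is unreachable."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout):
            pass
    except OSError:
        return float("inf")
    return time.monotonic() - start

# Send inference traffic to whichever region answers fastest right now.
best = min(ENDPOINTS, key=rtt)
print(f"routing inference requests to {best}")
```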
AI Inference at Scale: The Future of AI Computing
NVIDIA’s Perspective on Scaling AI
- AI models are growing in complexity and size, requiring full-stack infrastructure to meet enterprise demands.
- AI inference must be cost-efficient, high-performance, and scalable for real-world deployment.
By adopting Mirantis’ k0rdent, Nebul is rebuilding its AI inference infrastructure on open source, giving businesses a way to deploy scalable, high-performance AI services. As demand for AI services grows, k0rdent’s multi-cluster automation and GPU optimization position Nebul as a leader in Inference-as-a-Service.