The demand for real-time AI inferencing is driving the need for high-performance computing at the Edge. To meet this challenge, MEF, a global industry association accelerating enterprise digital transformation, has unveiled a groundbreaking GPU-as-a-Service initiative. In collaboration with Infosys, NVIDIA, and IronYun, MEF is demonstrating how service providers can monetize network infrastructure by delivering scalable, low-latency AI processing at the Edge. This initiative is being showcased at Mobile World Congress (MWC) in Barcelona, where attendees can witness how MEF’s Lifecycle Service Orchestration (LSO) APIs enable automated ordering and deployment of GPU resources. This marks a major milestone in AI-powered networks, setting the stage for a future where enterprises can seamlessly leverage on-demand AI processing at the Edge.
Unlocking AI at the Edge: A Game-Changer for Enterprises and Service Providers
AI-driven applications in video analytics, autonomous systems, and real-time decision-making increasingly require powerful GPU resources close to the data source. Traditional cloud-based AI processing introduces latency these workloads cannot tolerate, making Edge computing a critical part of the solution.
Innovations in MEF’s GPUaaS Initiative
- Edge Compute Infrastructure-as-a-Service (IaaS) Standardization – MEF defines a framework for comparing Edge IaaS offerings, enabling consistency across service providers.
- Expansion to GPU-as-a-Service (GPUaaS) – Standardized delivery of GPU resources at the Edge for AI inferencing, reducing latency and improving AI performance.
- New Revenue Opportunities – Service providers can offer GPU-based AI processing as a monetizable service, unlocking new business models.
“This initiative is a major leap forward in AI at the provider edge,” said Pascal Menezes, CTO of MEF. “By enabling service providers to offer GPU-as-a-Service, we empower enterprises to run AI inferencing at the Edge with greater scalability and efficiency.”
A Fully Standardized, On-Demand AI Ecosystem
At MWC, MEF and its partners are demonstrating how automated, standardized API-driven orchestration is transforming AI at the Edge.
Key Features of MEF’s GPUaaS Model
- On-Demand GPU Resources – Enterprises can access high-performance GPUs at the Edge without heavy upfront investments.
- Seamless Ordering & Deployment – MEF’s standardized LSO APIs automate pricing, ordering, and activation of GPU resources across service providers (see the illustrative sketch after this list).
- Optimized AI Performance – Low-latency Edge computing enhances AI-driven applications, such as:
  - Real-time video analytics
  - Intelligent traffic management
  - AI-enhanced security and safety applications
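To make the automated ordering flow concrete, here is a minimal sketch of submitting a GPUaaS product order through an LSO Sonata-style REST API. The base URL, product offering ID, configuration attributes, and payload shape below are illustrative assumptions for this article, not MEF-published endpoints or schemas; actual LSO API payloads are defined in MEF's standards and each provider's implementation.

```python
# Hypothetical sketch: placing a GPUaaS order via an LSO Sonata-style
# product ordering API. The URL, product offering ID, and configuration
# fields are illustrative placeholders, not values published by MEF.
import requests

BASE_URL = "https://api.example-provider.com/mefApi/sonata/productOrderingManagement"

order_payload = {
    "externalId": "gpuaas-demo-001",
    "productOrderItem": [
        {
            "id": "1",
            "action": "add",
            "product": {
                "productOffering": {"id": "edge-gpuaas-inference"},  # placeholder offering ID
                "productConfiguration": {
                    # Illustrative attributes an Edge GPUaaS offering might expose
                    "gpuModel": "NVIDIA L4",
                    "gpuCount": 2,
                    "edgeSite": "barcelona-metro-01",
                    "durationHours": 24,
                },
            },
        }
    ],
}

# Submit the order; the provider's orchestration then handles activation.
response = requests.post(
    f"{BASE_URL}/productOrder",
    json=order_payload,
    headers={"Authorization": "Bearer <token>"},
    timeout=30,
)
response.raise_for_status()
print("Order state:", response.json().get("state"))
```

Because the same standardized API shape is exposed by every participating provider, an enterprise or buyer platform can quote, order, and activate Edge GPU capacity from multiple operators without writing provider-specific integrations.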
This collaborative effort enables Cloud Service Providers and Subscribers to compare GPUaaS offerings within a common framework, driving consistency, interoperability, and efficiency in AI deployment.
Industry Leaders on AI at the Edge
Infosys: Driving Scalable AI Solutions
Balakrishna D. R. (Bali), Executive VP, Global Services Head, AI and Industry Verticals, Infosys, stated:
“Unlocking AI at the Edge is crucial for enterprises. By integrating GPU-as-a-Service, Infosys empowers organizations to run AI inferencing with lower latency and greater efficiency. Through our collaboration with MEF, we’re setting a new industry benchmark, enabling enterprises to harness AI for real-world impact.”
IronYun: Advancing AI-Powered Video Analytics
Marshall Tyler, CEO of IronYun, emphasized:
“We appreciate the opportunity to partner with MEF to showcase our advanced vision AI through this groundbreaking GPU-as-a-Service initiative. By combining deployment flexibility with real-time inferencing power at the Edge, Vaidio empowers providers to monetize their networks and enables enterprises to unlock new levels of security and operational efficiency.”
The Future of AI at the Edge
MEF’s GPUaaS initiative represents a transformative shift in how AI workloads are processed. By moving AI inferencing closer to the data source, enterprises can benefit from:
- Faster response times – Reducing latency for real-time applications.
- Cost-effective AI processing – On-demand GPU resources lower infrastructure costs.
- Improved AI model performance – Fresher data and lower-latency inferencing support more accurate, efficient AI-driven analytics.
With MEF, Infosys, NVIDIA, and IronYun driving the effort, this initiative paves the way for widespread adoption of AI at the Edge, making AI-powered solutions more accessible, scalable, and monetizable for enterprises worldwide.
The AI revolution is shifting from centralized cloud processing to intelligent, real-time Edge computing. MEF’s GPU-as-a-Service model is a pivotal step toward enabling AI-driven enterprises with standardized, automated access to scalable GPU resources.
As AI-powered applications continue to grow, service providers and enterprises must embrace Edge AI to stay ahead. MEF’s collaborative ecosystem is redefining AI service delivery, unlocking new possibilities for innovation, efficiency, and business growth.