Core42, a G42 company specializing in sovereign cloud and AI infrastructure, has made OpenAI’s latest open-weight models—gpt-oss-20B and gpt-oss-120B—available on its AI Cloud platform. The models are instantly accessible through the Core42 Compass API, giving enterprises, researchers, and developers a choice of leading silicon platforms for scalable, high-performance inference.
With inference speeds of up to 3,000 tokens per second per user, the deployment is tuned for real-time AI workloads, from advanced automation to decision-support systems. The platform matches each workload to the infrastructure that best balances cost and performance, whether that means running in-country for data-sovereignty compliance or scaling across global regions.
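For developers, access would typically look like a standard chat-completions call. The minimal sketch below assumes the Compass API exposes an OpenAI-compatible REST endpoint; the base URL, environment variable, and model identifiers are illustrative placeholders rather than confirmed details of the Compass API, so check Core42's documentation for the actual values.

```python
# Hypothetical example of calling gpt-oss-20B via an OpenAI-compatible
# chat-completions endpoint. URL, credential name, and model ID are placeholders.
import os
import requests

COMPASS_BASE_URL = "https://compass.example.core42.ai/v1"  # placeholder base URL
API_KEY = os.environ["COMPASS_API_KEY"]                    # placeholder credential

response = requests.post(
    f"{COMPASS_BASE_URL}/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "gpt-oss-20b",  # or "gpt-oss-120b" for the larger model
        "messages": [
            {"role": "user", "content": "Summarize this incident report in three bullets."}
        ],
        "max_tokens": 256,
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```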
Why It’s Different
The Compass API integration lets organizations:
- Run enterprise-scale AI with the throughput needed for demanding, low-latency workloads.
- Deploy sovereign AI in regulated sectors such as healthcare, finance, and national security.
- Optimize costs for agentic AI workloads without sacrificing compliance.
- Fine-tune models locally with full transparency and control over deployment.
Strategic Context
“This launch delivers the flexibility and performance needed for today’s AI workloads,” said Kiril Evtimov, CEO of Core42 and Group CTO of G42. “Organizations can now access the latest open-weight AI models, choose the optimal platform, and scale innovation securely.”
The release follows G42’s recent AI infrastructure push, including a 5GW US-UAE AI campus, the 1GW Stargate UAE facility, and Microsoft’s $1.5B investment in 2024—moves aimed at cementing the UAE’s role as a global AI hub.
Bottom Line
By pairing open-weight models with sovereign-ready infrastructure, Core42 is positioning its AI Cloud as a global-scale, compliance-friendly alternative to closed, centralized AI services—giving enterprises more control over speed, cost, and data sovereignty.