Telecom networks are no longer just pipes—they’re data goldmines. At least, that’s the bet GIGABYTE Technology is making at Mobile World Congress.
The company used the Barcelona show to roll out an expanded end-to-end AI infrastructure portfolio tailored specifically for telecom operators. The pitch is ambitious: give telcos the compute muscle to transform network and subscriber data into automation, operational intelligence, and entirely new revenue streams.
In GIGABYTE’s framing, operators aren’t just upgrading networks—they’re building AI factories.
The AI Factory: 72 Blackwell GPUs in a Single Rack
At the center of this strategy is the GB300 NVL72, a liquid-cooled, rack-scale system integrating:
- 72 NVIDIA Blackwell Ultra GPUs
- 36 NVIDIA Grace CPUs
- NVIDIA Quantum-X800 InfiniBand or Spectrum-X Ethernet
- ConnectX-8 SuperNICs
That’s a dense slab of compute power designed for large-scale AI training and inference.
For telecom operators, the idea is straightforward: consolidate massive volumes of network telemetry, subscriber behavior data, and operational metrics into centralized AI clusters capable of:
- Automating network operations
- Optimizing radio and core planning
- Powering predictive maintenance
- Commercializing AI-driven services
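To make the workloads above concrete, a predictive-maintenance pipeline often starts with something as simple as flagging telemetry that drifts outside its recent baseline. The sketch below is a toy illustration under assumed data (hourly dropped-call counts from a single cell site); it is not GIGABYTE's or any operator's actual method:

```python
import statistics

def flag_anomalies(samples, window=24, threshold=3.0):
    """Flag samples that deviate more than `threshold` standard
    deviations from the trailing window's mean."""
    anomalies = []
    for i in range(window, len(samples)):
        hist = samples[i - window:i]
        mean = statistics.fmean(hist)
        stdev = statistics.pstdev(hist)
        if stdev == 0:
            # Flat history: any change at all is anomalous.
            if samples[i] != mean:
                anomalies.append(i)
        elif abs(samples[i] - mean) > threshold * stdev:
            anomalies.append(i)
    return anomalies

# Hypothetical hourly dropped-call counts from one cell site:
# steady at 2, with a spike a maintenance alert should catch.
readings = [2] * 30 + [25] + [2] * 5
print(flag_anomalies(readings))  # → [30]
```

In production such rules run continuously across millions of KPI streams, which is exactly the sustained inference load centralized clusters like the NVL72 are pitched at.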
As operators look beyond connectivity margins, AI-based services—analytics-as-a-service, enterprise AI hosting, and internal automation—are becoming strategic priorities. Infrastructure like the NVL72 positions telcos to compete with hyperscale cloud providers in specific vertical domains.
Smoothing the Data-to-AI Pipeline
Raw compute is only part of the equation. GIGABYTE is also targeting bottlenecks across the data-to-AI pipeline with expanded AI and HPC platforms.
Among the highlights:
- G894-SD3-AAX7, powered by NVIDIA HGX B300, aimed at real-time traffic analytics and reasoning models.
- XN24-VC0-LA61, based on NVIDIA MGX architecture and GB200 Grace Blackwell NVL4 Superchips, featuring direct liquid cooling for dense, energy-efficient deployments.
- G893-ZX1-AAX4, pairing AMD EPYC 9005/9004 CPUs with AMD Instinct MI355X GPUs for high performance-per-watt inference and simulation workloads.
This multi-architecture approach reflects the competitive accelerator landscape. While NVIDIA remains dominant in AI training, AMD is gaining traction in inference and cost-sensitive environments.
For operators juggling performance, power budgets, and ROI targets, architectural flexibility matters.
Digital Twins for 24/7 Network Simulation
Digital twins are emerging as a powerful tool in telecom—virtual replicas of networks used to simulate performance, capacity expansion, and failure scenarios.
To support that shift, GIGABYTE introduced the XL44-SX2-AAS1, built on NVIDIA MGX architecture and equipped with eight NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs.
With 800 Gb/s of network bandwidth via the ConnectX-8 SuperNIC and PCIe Gen6 connectivity, the system is designed for high-fidelity, real-time simulation environments.
In practical terms, that means operators can model network behavior before making costly physical adjustments—reducing risk while accelerating innovation cycles.
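One classic example of modeling before spending is the Erlang B formula, which telecom planners have long used to estimate call-blocking probability for a given offered load and channel count. The what-if below is illustrative (the 2% service target and traffic figures are assumptions, not from the article):

```python
def erlang_b(offered_load, channels):
    """Blocking probability for `offered_load` Erlangs carried on
    `channels` circuits, via the standard Erlang B recursion."""
    b = 1.0  # zero channels block everything
    for m in range(1, channels + 1):
        b = offered_load * b / (m + offered_load * b)
    return b

# Hypothetical what-if: does adding 10 channels to a congested
# sector cut blocking below a 2% service target?
load = 45.0  # Erlangs of offered traffic
for channels in (50, 60):
    print(channels, round(erlang_b(load, channels), 4))
```

A digital twin generalizes this kind of analysis from closed-form formulas to full packet-level and RF simulation, which is where GPU horsepower comes in.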
AI Cloud and Neocloud at Telco Scale
As telecom providers expand into AI cloud hosting and so-called “neocloud” services, infrastructure density and energy efficiency become critical.
Enter the B683-Z80-LAS1, a 6U, 10-node blade system powered by AMD EPYC processors in a 1:1 CPU-to-NIC configuration. The system uses full direct liquid cooling, removing over 90% of system heat and improving power usage effectiveness (PUE).
Energy efficiency isn’t just an ESG talking point—it’s a margin lever. AI workloads are notoriously power-hungry, and cooling costs can erode profitability quickly. High-density liquid-cooled systems are increasingly becoming table stakes in large AI deployments.
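The margin argument follows directly from the definition of PUE: total facility power divided by IT power. A minimal sketch with illustrative (not measured) numbers shows how shrinking the cooling load moves the ratio:

```python
def pue(it_power_kw, cooling_power_kw, other_overhead_kw=0.0):
    """Power Usage Effectiveness = total facility power / IT power.
    A perfect facility scores 1.0; every point above it is overhead."""
    total = it_power_kw + cooling_power_kw + other_overhead_kw
    return total / it_power_kw

# Illustrative figures: a 1 MW IT load where air cooling consumes
# ~400 kW versus ~100 kW for direct liquid cooling.
print(round(pue(1000, 400), 2))  # → 1.4 (air-cooled)
print(round(pue(1000, 100), 2))  # → 1.1 (liquid-cooled)
```

At megawatt scale, that gap is hundreds of kilowatts billed around the clock, which is why liquid cooling reads as a margin lever rather than a green checkbox.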
Pushing AI to the Edge
While the AI factory narrative centers on core data centers, GIGABYTE is also extending AI infrastructure to the edge.
The W775-V10-L01 workstation, powered by the NVIDIA GB300 Grace Blackwell Ultra Desktop Superchip, supports up to 775 GB of coherent memory—enabling large AI models to run locally.
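For a sense of what 775 GB of coherent memory buys, a crude back-of-envelope estimate of an LLM's inference footprint helps; the 20% overhead factor and model sizes below are rough assumptions, not vendor figures:

```python
def model_memory_gb(params_billion, bytes_per_param, overhead=1.2):
    """Rough inference footprint: weights at the given precision,
    plus ~20% headroom for KV cache and activations (rule of thumb)."""
    return params_billion * bytes_per_param * overhead

# A hypothetical 400B-parameter model quantized to FP8 (1 byte/param):
print(model_memory_gb(400, 1))  # → 480.0 GB, inside a 775 GB budget
```

By this arithmetic, models that would otherwise demand a multi-GPU server can plausibly run on a single desk-side unit.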
Additional AMD EPYC– and Intel Xeon–based workstations offer flexibility for private network deployments and enterprise edge scenarios.
Then there’s AI TOP ATOM, a palm-sized system delivering up to one petaFLOP of AI compute. It’s designed for rapid prototyping and localized AI inference—use cases where latency, privacy, or bandwidth constraints make centralized processing impractical.
For telcos building private 5G and edge AI services for enterprises, compact high-performance nodes could open new vertical revenue opportunities.
The Bigger Picture: Telcos as AI Platforms
GIGABYTE’s MWC showcase underscores a larger industry shift: telecom operators are trying to reposition themselves as AI infrastructure providers rather than mere bandwidth suppliers.
To do that, they need:
- Hyperscale-class compute
- HPC integration
- Energy-efficient cooling
- Edge-ready AI hardware
- Flexible accelerator options
By delivering a portfolio that spans rack-scale Blackwell clusters to desk-side AI workstations, GIGABYTE is attempting to cover the entire spectrum.
Whether operators fully embrace the AI factory model remains to be seen. But one thing is clear: the infrastructure arms race has moved decisively beyond traditional base stations and core routers.
At MWC 2026, GIGABYTE isn’t just selling servers. It’s selling a blueprint for the AI-native telecom operator.