TechInsights has released preliminary findings from its teardown of NVIDIA’s Blackwell HGX B200 platform, a cutting-edge system designed for advanced artificial intelligence (AI) and high-performance computing (HPC) workloads. The analysis provides early insights into the GPU’s innovative architecture, confirming key suppliers and packaging technologies that position Blackwell as NVIDIA’s most advanced chipset to date for the generative AI era.
Subtopics and Pointers
1. SK hynix Powers GB100 with HBM3E
- The GB100 GPU integrates eight HBM3E packages supplied by SK hynix.
- Each HBM package contains:
  - Eight stacked memory dies in a 3D configuration.
  - A separate controller die beneath the stack.
- Delivers a total of 192 GB of high-bandwidth memory, with 3 GB per DRAM die, a 50% capacity increase per die over previous HBM generations.
- More technical analysis is underway to identify node-level specifications of these memory dies and controllers.
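The capacity figures above are internally consistent, as a quick arithmetic check shows (the 2 GB prior-generation die capacity is inferred from the stated 50% increase, not given directly in the source):

```python
# Sanity check of the teardown's HBM3E capacity figures.
PACKAGES_PER_GPU = 8   # HBM3E packages on the GB100 (from the article)
DIES_PER_STACK = 8     # stacked DRAM dies per package (from the article)
GB_PER_DIE = 3         # capacity per DRAM die (from the article)
PREV_GB_PER_DIE = 2    # assumed prior-generation die size implied by the "50%" claim

total_gb = PACKAGES_PER_GPU * DIES_PER_STACK * GB_PER_DIE
increase = (GB_PER_DIE - PREV_GB_PER_DIE) / PREV_GB_PER_DIE

print(f"Total HBM per GPU: {total_gb} GB")   # 192 GB, matching the stated total
print(f"Per-die capacity gain: {increase:.0%}")  # 50%, matching the stated increase
```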
2. Breakthrough in Advanced Packaging: CoWoS-L
- The GB100 features TSMC’s CoWoS-L (chip-on-wafer-on-substrate with local silicon interconnect bridges) packaging technology.
- This marks the first commercial use of CoWoS-L, improving die-to-die interconnection and packaging density.
- The GPU houses two dies on a 4nm TSMC process, nearly doubling the die area over the Hopper generation.
- TechInsights plans a deeper dive into the packaging interconnect architecture in upcoming reports.
3. HGX B200: AI-Centric Server Board
- NVIDIA’s HGX B200, launched in March 2024, links eight GB100 GPUs using NVLink for parallel GPU performance.
- Supports x86-based generative AI workloads.
- Enables networking up to 400Gb/s via NVIDIA Quantum-2 InfiniBand and Spectrum-X Ethernet platforms.
- Represents the first use of multi-die GPU packaging in NVIDIA’s product line.
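Scaling the per-GPU memory figure from the HBM3E section to the eight-GPU board gives a sense of the platform's aggregate capacity (a derived figure, not stated directly in the source):

```python
# Board-level memory total implied by the article's per-GPU figures.
GPUS_PER_BOARD = 8     # HGX B200 links eight GB100 GPUs via NVLink (from the article)
HBM_GB_PER_GPU = 192   # total HBM3E per GPU (from the article)

board_hbm_gb = GPUS_PER_BOARD * HBM_GB_PER_GPU
print(f"Aggregate HBM3E per HGX B200 board: {board_hbm_gb} GB")  # 1536 GB
```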
4. Analyst Insights and Market Implications
- Cameron McKnight-McNeil, process analyst at TechInsights, noted: “The Blackwell product line is the world’s most advanced chipset that NVIDIA developed for the ‘generative AI’ era.”
- The teardown underscores NVIDIA’s aggressive innovation in packaging and memory integration to support the explosive growth in AI compute demand.
The teardown of NVIDIA’s Blackwell GB100 GPU by TechInsights reveals a leap forward in GPU design and AI acceleration capabilities. With SK hynix’s HBM3E memory and TSMC’s advanced CoWoS-L packaging, NVIDIA’s latest platform delivers unmatched performance for generative AI and HPC workloads. As more technical insights emerge, the GB100 is set to redefine standards in AI infrastructure design and scalability.