NVIDIA Blackwell GB200 GB300 Server Details and GB300 Ultra AI Chip Specs Revealed

NVIDIA reveals technical details for its Blackwell GB200 and GB300 servers using the modular MGX architecture.
Published by mgtid

NVIDIA Presents Blackwell GB200/GB300 Servers and the Mighty GB300 Ultra Chip

NVIDIA has presented an in-depth technical look at its Blackwell GB200 and GB300 servers, which are built on the modular MGX architecture and contributed to Open Compute Project (OCP) standards. The company also confirmed that its improved Blackwell Ultra GB300 AI chip is now in full production.

MGX Architecture: A Modular Approach to AI Infrastructure

The MGX architecture was a key topic of NVIDIA's Hot Chips 2025 presentation: an open, modular platform designed to meet the scaling challenges of diverse AI workloads. MGX breaks a server down into interoperable building blocks, letting customers swap individual components (NICs, management controllers, the CPU/GPU mix, and more) without redesigning the system as a whole.

By contributing MGX to OCP, NVIDIA has made its specifications, 3D models, and drawings publicly available, allowing customers to work with their own supply chains to build tailored solutions.


Inside the Blackwell GB200/GB300 Rack

The GB200/GB300 system is an engineering feat built on the MGX infrastructure. A single rack draws roughly 120 kW and holds compute trays, switch trays, and power supplies, all interconnected through a high-speed NVLink spine.

  • System Layout: The rack houses 18 compute trays and nine switch trays, delivering up to 1.4 exaflops of FP4 AI performance.
  • Compute Trays: Each of the 18 compute trays is liquid-cooled, consumes around 7 kilowatts, and contains two CPUs and four GPUs. These trays connect to the NVLink spine, which uses low-latency copper interconnects running at 200 Gb/s per lane.
  • Power and Cooling: The system uses a high-capacity bus bar capable of supporting up to 1,400 amps. The entire system is 100% liquid-cooled, leveraging the OCP standard universal quick disconnect (UQD) for fluid connections.
  • Density and Standards: NVIDIA adapted the OCP rack to support the denser 1RU enterprise standard (44.5mm pitch) for more efficient deployment of hardware.
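The rack-level figures above can be cross-checked with simple arithmetic. The sketch below uses only the numbers quoted in this article (18 trays, ~7 kW and four GPUs per tray, 1.4 exaflops per rack); the per-GPU result is a derived estimate, not an NVIDIA-published spec.

```python
# Back-of-the-envelope rack arithmetic from the figures quoted above.
# Inputs are the article's numbers; per-GPU output is a derived estimate.

COMPUTE_TRAYS = 18          # compute trays per rack
GPUS_PER_TRAY = 4
TRAY_POWER_KW = 7           # approximate draw per tray, per the article
RACK_FP4_EXAFLOPS = 1.4     # total FP4 AI performance per rack

compute_power_kw = COMPUTE_TRAYS * TRAY_POWER_KW
total_gpus = COMPUTE_TRAYS * GPUS_PER_TRAY
fp4_pflops_per_gpu = RACK_FP4_EXAFLOPS * 1000 / total_gpus

print(f"Compute-tray power: ~{compute_power_kw} kW")      # ~126 kW
print(f"GPUs per rack: {total_gpus}")                     # 72
print(f"FP4 per GPU: ~{fp4_pflops_per_gpu:.1f} PFLOPS")   # ~19.4
```

Note that the compute trays alone sum to slightly more than the ~120 kW rack figure, which is why the per-tray draw is quoted as approximate.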

The Blackwell Ultra GB300: NVIDIA's Fastest AI Chip

The Blackwell Ultra GB300, built on the Blackwell architecture, is NVIDIA's most advanced AI chip yet, pairing higher performance with expanded memory.


Key Specifications of the GB300 Ultra

  • Architecture: The chipset uses two reticle-sized dies connected via a 10 TB/s NV-HBI interface to function like a single GPU with 208 billion transistors.
  • Compute Power: It packs 20,480 CUDA cores and 640 5th-generation Tensor Cores, delivering 15 PetaFLOPS of dense NVFP4 compute, a 50% improvement over the GB200.
  • Memory: The GB300 Ultra carries 288 GB of HBM3e memory with 8 TB/s of total bandwidth, enough capacity to hold multi-trillion-parameter AI models in memory.
  • Interconnect: It features NVLink 5, with 1.8 TB/s in bidirectional bandwidth per GPU, and PCIe Gen6 support for host connectivity.
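The "multi-trillion-parameter" claim can be illustrated with rough capacity math. The sketch below assumes 0.5 bytes per parameter for NVFP4 weights and a 72-GPU NVLink domain (one rack, per the system layout above); it ignores activations, KV cache, and framework overhead, so it is an upper bound on weight storage only.

```python
# Rough sizing: how many model parameters fit in GB300 HBM at low precision?
# Assumes 0.5 bytes/parameter (NVFP4 weights) and a 72-GPU NVLink domain.
# Ignores activation/KV-cache/runtime overhead -- illustrative only.

HBM_GB = 288                 # per-GPU HBM3e capacity from the spec list
BYTES_PER_PARAM_FP4 = 0.5
GPUS_PER_NVLINK_DOMAIN = 72  # one rack's worth of GPUs (assumption)

params_per_gpu_b = HBM_GB * 1e9 / BYTES_PER_PARAM_FP4 / 1e9          # billions
params_per_rack_t = params_per_gpu_b * GPUS_PER_NVLINK_DOMAIN / 1000  # trillions

print(f"~{params_per_gpu_b:.0f}B parameters per GPU")    # ~576B
print(f"~{params_per_rack_t:.1f}T parameters per rack")  # ~41.5T
```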

GB300 Ultra Performance Comparison

Feature                   Blackwell (GB200)   Blackwell Ultra (GB300)
NVFP4 Dense Performance   10 PetaFLOPS        15 PetaFLOPS
Max HBM3e Capacity        192 GB              288 GB
Max HBM Bandwidth         8 TB/s              8 TB/s
Max Power (TGP)           Up to 1,200W        Up to 1,400W
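The generation-over-generation deltas implied by the table can be computed directly. This short sketch uses only the table's figures; the dictionary keys are illustrative names, not NVIDIA terminology.

```python
# Generational deltas from the comparison table above (article figures only).
gb200 = {"fp4_dense_pflops": 10, "hbm3e_gb": 192, "max_tgp_w": 1200}
gb300 = {"fp4_dense_pflops": 15, "hbm3e_gb": 288, "max_tgp_w": 1400}

pct_change = {k: (gb300[k] - gb200[k]) / gb200[k] * 100 for k in gb200}
for key, pct in pct_change.items():
    print(f"{key}: +{pct:.0f}%")
```

Compute and memory capacity both rise 50%, while the power ceiling grows only about 17%, so most of the uplift comes from the chip rather than a bigger power budget.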

Production and Future Cadence

Both systems are in full production and are already being deployed at hyperscale data centers. NVIDIA reaffirmed its annual innovation cadence, pushing the limits of density, power, and cooling in its AI platforms.
