The AI boom reshaping the computing landscape is poised to scale even faster in 2026. As breakthroughs in model capability and computing power drive rapid growth, enterprise data centers are being pushed beyond the limits of conventional server and rack architectures. This is creating new pressures on power budgets, thermal envelopes, and facility space.
NVIDIA MGX modular reference architecture provides forward-looking designs that enable faster time-to-market (TTM) with standardized building blocks. MGX helps system partners integrate fast-evolving technologies and deliver the flexible, energy-efficient platforms modern AI data centers require.
This post explores the next evolution in the MGX modular reference architecture: a 6U (800 mm) chassis configuration designed specifically for the next generation of accelerated compute and networking platforms. This includes the new liquid-cooled variant of the NVIDIA RTX PRO 6000 Blackwell Server Edition GPU.
Flexible, future-proof design with enhanced serviceability
Forward-looking compatibility and flexibility are core design principles of the MGX 6U platform: a single chassis spans multiple computing generations and workload profiles. It supports today’s most powerful computing platforms while offering headroom for future ones, reducing the need for disruptive redesigns over time.
Partners can design these systems with multiple MGX-based host-processor modules (HPMs), including x86 platforms and the next-generation NVIDIA Vera CPU. This enables standardizing on a single server design while supporting multiple CPU architectures and workload requirements.
The larger chassis volume also creates accessible service pathways for maintenance. Key components such as network cards, power supplies, and other field-replaceable units are easy to reach, simplifying serviceability and reducing operational overhead when managing rack-scale infrastructure.
Sustainable, efficient computing with liquid-cooled NVIDIA RTX PRO Server
The MGX 6U design is the foundation for the next wave of accelerated computing platforms, starting with a new liquid-cooled NVIDIA RTX PRO Server. This new RTX PRO Server configuration will feature eight of the latest liquid-cooled RTX PRO 6000 Blackwell Server Edition GPUs, along with advanced AI networking capability delivered by NVIDIA BlueField-3 DPUs and NVIDIA ConnectX-8 SuperNICs with built-in PCIe Gen 6 switches (Figure 1).
Figure 1. The MGX 6U system topology with eight GPUs, NVIDIA BlueField-3 DPUs, and ConnectX-8 SuperNICs with built-in PCIe Gen 6 switches
With a compact, single-slot liquid-cooled form factor, RTX PRO 6000 Blackwell delivers breakthrough performance for powering AI factories and accelerating demanding enterprise AI workloads with improved thermal efficiency. It’s capable of running the full suite of NVIDIA enterprise software, including NVIDIA AI Enterprise, NVIDIA Omniverse, NVIDIA vGPU, and NVIDIA Run:ai. It provides a universal data center platform for building and deploying the next generation of AI-enabled applications, from agentic AI and physical AI to scientific computing, simulation, graphics, and video.
Additionally, the RTX PRO 6000 Blackwell Server Edition GPU is validated by more than 50 leading enterprise ISVs spanning engineering, scientific computing, and professional visualization applications, as well as the most widely adopted orchestration, management, and AI ops platforms.
Figure 2. Liquid-cooled NVIDIA RTX PRO 6000 Blackwell Server Edition GPU
High-performance AI networking with NVIDIA ConnectX
Network performance is essential to maximize the performance of AI workloads at scale. The MGX 6U reference design supports ConnectX-8 AI networking today and will support ConnectX-9 when it becomes available, delivering Ethernet and InfiniBand connectivity options to meet diverse data center and workload requirements.
The liquid-cooled RTX PRO Server, based on the MGX 6U configuration, features a streamlined system architecture that includes the latest-generation ConnectX-8 SuperNICs with integrated PCIe Gen 6 switches.
Built for AI workloads, ConnectX-8 with integrated PCIe Gen 6 switches supports up to 400 Gb/s of network bandwidth per RTX PRO 6000 Blackwell GPU (based on a 2:1 GPU-to-NIC ratio).
In addition to streamlining the design and reducing server complexity versus systems with dedicated PCIe switches, ConnectX-8 effectively doubles per‑GPU network bandwidth. This helps to remove I/O bottlenecks and speeds data movement between GPUs, NICs, and storage, resulting in up to 2x higher NCCL all‑to‑all performance and more scalable multi‑GPU, multi‑node workloads across AI factories.
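The per-GPU bandwidth figure above follows directly from the GPU count, the 2:1 GPU-to-NIC ratio, and the per-NIC line rate. A minimal sketch of the arithmetic, assuming 800 Gb/s per ConnectX-8 SuperNIC (the GPU count and ratio come from this configuration; the function name is illustrative):

```python
# Sketch: per-GPU network bandwidth in the MGX 6U RTX PRO Server topology.
# Assumes 800 Gb/s per ConnectX-8 SuperNIC; helper name is illustrative.

def per_gpu_bandwidth_gbps(num_gpus: int, gpu_to_nic_ratio: int,
                           nic_rate_gbps: int = 800) -> float:
    """Aggregate NIC bandwidth divided evenly across the GPUs."""
    num_nics = num_gpus // gpu_to_nic_ratio      # 8 GPUs at 2:1 -> 4 NICs
    return num_nics * nic_rate_gbps / num_gpus   # 4 * 800 / 8 = 400 Gb/s

print(per_gpu_bandwidth_gbps(num_gpus=8, gpu_to_nic_ratio=2))  # 400.0
```

Under these assumptions, each of the eight GPUs sees up to 400 Gb/s of network bandwidth, matching the figure quoted above.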
AI runtime security and infrastructure acceleration with NVIDIA BlueField
As accelerated infrastructure grows in scale and complexity, securing every layer of the system becomes essential. The MGX 6U design features NVIDIA BlueField data processing units (DPUs) to bring zero-trust security and infrastructure acceleration directly into the infrastructure layer. The BlueField processor offloads and accelerates functions such as line-rate encryption, micro-segmentation, and real-time threat detection, enforcing least-privilege access while freeing the host’s computing resources (GPU/CPU) to focus on AI and other modern workloads.
By isolating control and management planes in hardware, BlueField enables organizations to protect AI pipelines from emerging threats while accelerating networking, storage, and virtualization services. Enterprises can further extend these capabilities by deploying validated BlueField-accelerated applications from leading software providers, enhancing both infrastructure efficiency and cybersecurity coverage. This combination helps ensure that RTX PRO Server deployments can scale securely, with consistent performance and policy enforcement across every node in the AI factory.
Building future-ready AI factories
As NVIDIA Blackwell and future GPU generations continue to push beyond traditional computing boundaries, the NVIDIA MGX modular architecture ensures AI factories can evolve with silicon innovations. For ecosystem partners building the next generation of accelerated computing platforms, MGX reduces engineering costs, shortens time to market, and delivers multigenerational compatibility while ensuring optimal performance and efficiency for enterprises deploying AI workloads at scale.
Systems featuring the liquid-cooled NVIDIA RTX PRO 6000 Blackwell Server Edition GPU, along with liquid-cooled RTX PRO Servers based on the MGX 6U configuration, are expected to arrive from global system builders in the first half of 2026.