TL;DR: At GTC 2026, NVIDIA released the Vera Rubin DSX AI Factory reference design — a standardized blueprint for building enterprise-grade AI data centers — alongside the Omniverse DSX digital twin blueprint, which lets operators simulate thermal, power, and networking conditions before breaking ground. A broad ecosystem of partners including AWS, Siemens, Schneider Electric, and Dassault Systèmes is backing the initiative. AWS separately expanded its NVIDIA partnership to deploy more than one million NVIDIA GPUs spanning the full Blackwell and Rubin compute stack.
Table of contents
- What DSX is and why it matters
- The Omniverse DSX digital twin blueprint
- The partner ecosystem backing DSX
- AWS expands to one million NVIDIA GPUs
- Dassault Systèmes and AI-powered virtual twins
- How standardization changes data center builds
- Thermal and power simulation before physical build
- Competitive context: why blueprints are the new battleground
- Enterprise adoption: what changes for infrastructure teams
- What this means for the AI infrastructure market
- FAQ
What DSX is and why it matters
NVIDIA's Vera Rubin DSX AI Factory reference design is best understood as a construction playbook for AI data centers. Rather than leaving enterprises, hyperscalers, and colocation operators to design their own AI factory floor plans from scratch, NVIDIA is offering a standardized, validated architecture that specifies how Vera Rubin GPU clusters should be physically configured, cooled, powered, and networked.
The "DSX" designation — Data center Systems eXperience — signals that this is not a chip release or a software update. It is a system-level specification that reaches from the compute node all the way to the building envelope. The reference design covers rack density, power distribution units, cooling topology, network cabling pathways, and the software stack that ties everything together under NVIDIA's AI Enterprise umbrella.
Why does this matter? Because the biggest bottleneck in deploying next-generation AI infrastructure is no longer silicon availability. It is the time required to design, engineer, permit, and construct the physical environment those chips require. Vera Rubin clusters running at full density draw extraordinary amounts of power and generate heat at densities that earlier data center generations were never designed to handle. Custom-engineering every facility from first principles is slow, expensive, and error-prone.
A reference design compresses that timeline. It gives architecture, engineering, and construction (AEC) firms a validated starting point. It gives colocation operators a reproducible template. It gives hyperscalers a way to accelerate capacity additions without reinventing the wheel at each site. And it gives NVIDIA a mechanism to ensure that its hardware performs as specified in the field — because a cluster running in a thermally or electrically suboptimal environment will underperform regardless of how capable the silicon is.
The announcement came during GTC 2026, NVIDIA's annual technology showcase headlined by CEO Jensen Huang, at a moment when demand for AI compute is accelerating faster than the industry's ability to build the physical facilities to house it.
The Omniverse DSX digital twin blueprint
Running in parallel with the reference design is the Omniverse DSX digital twin blueprint — a framework for building a photorealistic, physically accurate simulation of an AI data center before any steel is erected or cable is pulled.
NVIDIA's Omniverse platform is built on Universal Scene Description (USD), Pixar's open interchange format that has become the lingua franca for complex 3D environments. The DSX digital twin blueprint extends Omniverse into the data center domain, creating a simulation environment that integrates:
- Computational fluid dynamics (CFD) for airflow and thermal modeling
- Power flow simulation for load distribution, UPS topology, and fault scenarios
- Network topology modeling for InfiniBand and Ethernet fabric planning
- Structural and spatial planning for rack placement, cable management, and aisle configuration
The practical outcome is that an operator can run hundreds of design variations in simulation before committing to a physical layout. Want to test how a hot-aisle containment change affects inlet temperatures at row 12? Run the simulation. Want to know how the power distribution topology holds up if two PDUs fail simultaneously? Run the simulation. Want to validate that your cooling plant can handle a 40 percent increase in GPU density before you upgrade? Run the simulation.
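The kind of what-if sweep described above can be sketched with a deliberately simplified thermal model. Everything below (the containment leakage fractions, the airflow figures, and the lumped energy-balance formula) is invented for illustration and bears no relation to the physics fidelity of an actual Omniverse CFD study.

```python
# Toy design-variation sweep: estimate worst-case rack inlet temperature
# for a few hot-aisle containment options. Illustrative only; a real
# Omniverse DSX study would use full CFD, not a lumped energy balance.

AIR_HEAT_CAPACITY = 1.006  # kJ/(kg*K), specific heat of air
AIR_DENSITY = 1.2          # kg/m^3 at room conditions

def inlet_temp_c(ambient_c, rack_kw, airflow_m3s, recirculation):
    """Lumped model: inlet temp rises with recirculated exhaust heat.

    `recirculation` is the fraction of rack exhaust heat that leaks
    back into the cold aisle (lower means better containment).
    """
    mass_flow = airflow_m3s * AIR_DENSITY            # kg/s of cooling air
    delta_t = (rack_kw * recirculation) / (mass_flow * AIR_HEAT_CAPACITY)
    return ambient_c + delta_t

# Hypothetical containment options and their leakage fractions.
designs = {
    "no_containment": 0.30,
    "partial_containment": 0.12,
    "full_containment": 0.04,
}

results = {
    name: round(inlet_temp_c(ambient_c=22.0, rack_kw=120.0,
                             airflow_m3s=4.0, recirculation=leak), 2)
    for name, leak in designs.items()
}

for name, temp in sorted(results.items(), key=lambda kv: kv[1]):
    print(f"{name:22s} worst-case inlet ~ {temp} C")
```

Sweeping a few dozen parameters this way is trivial in code; the point of the digital twin is to do the same exploration at full physical fidelity.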
Jensen Huang articulated the vision plainly at GTC: "Everything will be represented in a virtual twin." That statement is less a product pitch and more a design philosophy that NVIDIA is now systematically encoding into its enterprise offerings. The Omniverse DSX blueprint is the data center manifestation of that philosophy.
The digital twin is not a static model. Once a facility is operational, sensor telemetry can be fed back into the Omniverse simulation to keep the digital twin synchronized with physical reality. That creates a living operations dashboard where changes — adding a new pod of Rubin nodes, rebalancing cooling loads, rerouting network paths — can be previewed in simulation before being executed in the real world.
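One way to picture that feedback loop is a simple state-sync cycle: telemetry continuously updates the twin, and a proposed change is evaluated against the twin's state before anything is touched physically. The class and method names below are hypothetical; the actual Omniverse integration is sensor- and vendor-specific and far richer than this sketch.

```python
# Minimal sketch of a digital-twin sync loop: telemetry keeps the twin's
# state current, and proposed changes are previewed against the twin
# before being executed in the physical facility. All names here are
# hypothetical illustrations, not an actual Omniverse DSX API.

from dataclasses import dataclass

@dataclass
class RackTwin:
    """Digital stand-in for one rack: tracks observed thermal load."""
    name: str
    cooling_capacity_kw: float
    observed_load_kw: float = 0.0

    def ingest_telemetry(self, load_kw: float, alpha: float = 0.3) -> None:
        # Exponentially smooth noisy sensor readings into twin state.
        self.observed_load_kw = (alpha * load_kw
                                 + (1 - alpha) * self.observed_load_kw)

    def preview_change(self, added_load_kw: float) -> bool:
        # Would the rack stay within cooling capacity after the change?
        return self.observed_load_kw + added_load_kw <= self.cooling_capacity_kw

twin = RackTwin(name="row12-rack03", cooling_capacity_kw=140.0)

# Feed in a stream of (noisy) telemetry samples from the live facility.
for sample_kw in [95.0, 102.0, 98.0, 110.0, 104.0]:
    twin.ingest_telemetry(sample_kw)

# Preview adding a new pod in simulation before touching hardware.
ok_small = twin.preview_change(added_load_kw=30.0)
ok_large = twin.preview_change(added_load_kw=80.0)
print(f"smoothed load ~ {twin.observed_load_kw:.1f} kW; "
      f"+30 kW ok: {ok_small}; +80 kW ok: {ok_large}")
```

The design choice worth noting is the direction of authority: telemetry flows from the physical facility into the twin, but changes flow the other way, from simulated preview to physical execution.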
The partner ecosystem backing DSX
A reference design without industry adoption is a white paper. What distinguishes the Vera Rubin DSX announcement is the breadth and depth of the partner ecosystem that has aligned behind it at launch. NVIDIA announced support from more than a dozen industry partners spanning design software, construction, power, cooling, networking, and operations.
Cadence brings its system analysis tools into the DSX workflow, enabling more accurate signal integrity and power integrity modeling for high-density GPU clusters.
Dassault Systèmes is integrating its 3DEXPERIENCE platform with Omniverse DSX, extending virtual twin capabilities into biology, materials science, engineering, and manufacturing — the verticals where AI factories will increasingly be put to work.
Eaton contributes power management hardware and software, including its intelligent rack PDUs and energy monitoring solutions validated for Vera Rubin power profiles.
Jacobs is one of the world's largest AEC firms. Its involvement means that DSX reference designs can be directly handed off to a construction partner capable of executing them at scale globally.
NScale, one of the newer AI cloud operators, is building its infrastructure on the DSX blueprint — providing a proof of concept that the reference design is deployable at commercial scale by a hyperscale-adjacent operator.
Phaidra brings AI-driven autonomous operations software that can optimize cooling plant efficiency in real time, integrating with the digital twin for predictive management.
Procore connects construction project management to the DSX workflow, enabling better coordination between design, procurement, and construction phases.
PTC contributes industrial IoT and product lifecycle management (PLM) expertise, helping operators manage the physical asset lifecycle of data center hardware.
Schneider Electric is a natural anchor partner given its dominance in data center power and cooling infrastructure. Its EcoStruxure platform is being integrated with Omniverse DSX for power management and sustainability monitoring.
Siemens brings building management systems (BMS) expertise, enabling DSX data centers to integrate with facility-wide energy, HVAC, and safety systems.
Switch, the colocation operator, is adopting DSX as a framework for its next-generation AI-ready facilities.
Trane Technologies contributes HVAC and thermal management expertise, particularly relevant as liquid cooling becomes the default for high-density GPU clusters.
Vertiv rounds out the power and cooling partner roster with its power conversion, thermal management, and IT infrastructure solutions validated for DSX configurations.
The full partner list, as reported by GlobeNewswire, reflects a deliberate strategy to cover every layer of the data center stack — not just the compute and networking layers where NVIDIA already dominates, but the power, cooling, construction, and operations layers where the real-world bottlenecks live.
AWS expands to one million NVIDIA GPUs
Separate from DSX but unveiled in the same GTC window, Amazon Web Services revealed a significant expansion of its NVIDIA GPU deployment, targeting more than one million NVIDIA GPUs across the full compute stack: both Blackwell (current generation) and Rubin (next generation).
This is not a minor capacity addition. One million GPUs at the scale of modern NVLink-connected clusters represents a substantial fraction of global AI compute capacity. The expansion covers:
- Blackwell GB200 NVL72 rack-scale systems already in deployment
- Vera Rubin systems as they ramp through 2026 and 2027
- The full software stack: CUDA, NCCL, cuDNN, NVIDIA AI Enterprise, and the NIM microservices catalog
For AWS, the strategic logic is straightforward: enterprise AI workloads are increasingly sensitive to silicon specificity. Customers training frontier models or running large-scale inference workflows want GPU-native infrastructure, not abstracted compute. Offering NVIDIA's full stack — including the newest Rubin architecture — is a competitive necessity against Microsoft Azure and Google Cloud, both of which are simultaneously investing in proprietary AI silicon.
For NVIDIA, AWS deploying DSX-compliant infrastructure at this scale validates the reference design commercially before most enterprises have had a chance to evaluate it. It is the kind of proof point that accelerates adoption across the rest of the customer base.
Dassault Systèmes and AI-powered virtual twins
Among the DSX partners, Dassault Systèmes warrants particular attention because its involvement points to a use case that extends well beyond data center construction.
Dassault's 3DEXPERIENCE platform is used by engineers and scientists to model complex systems — aircraft, vehicles, biological processes, material microstructures, manufacturing workflows. The company is now integrating this capability with Omniverse DSX, creating a pathway where the same virtual twin technology used to design and operate AI data centers can also be used to run the scientific and industrial workloads that those data centers will ultimately serve.
In practical terms: a pharmaceutical company building an AI factory to accelerate drug discovery could use Omniverse DSX to design and operate the facility while simultaneously using Dassault's AI-powered virtual twins to model the biological systems their AI is being trained to understand. The data center and the scientific domain it serves become part of a continuous digital thread.
Dassault's CEO characterized the partnership as extending virtual twin capabilities to "biology, materials, engineering, and manufacturing" — four domains where physics simulation combined with AI is expected to produce the most dramatic acceleration in scientific productivity over the next decade. NVIDIA's hardware is increasingly the engine running those simulations, and DSX is the blueprint for building the facilities that house that engine.
How standardization changes data center builds
Before reference designs like DSX, every major AI data center build was, to varying degrees, a custom engineering project. Operators would engage AEC firms, MEP (mechanical, electrical, plumbing) engineers, and equipment vendors independently, producing designs that were often incompatible at the interfaces between systems — cooling that underdelivered for actual GPU thermal loads, power distribution that couldn't handle transient draw spikes, network pathways that created cable management nightmares at scale.
The cost of this custom-engineering approach is measured in two currencies: time and risk. Time, because each bespoke design requires engineering, review, revision, permitting, and procurement cycles that can add months to a project timeline. Risk, because a novel design has not been validated at scale, and discovering a thermal or power design flaw after construction is expensive to remediate.
A validated reference design addresses both. The thermal, power, and network designs have already been modeled, simulated, and validated by NVIDIA and its partners. AEC firms can implement a known-good baseline rather than engineering from scratch. Equipment vendors can pre-qualify products against a stable specification. Operators can deploy with higher confidence that the system will perform as designed from day one.
For the AI infrastructure market specifically, where data center construction cycles are measured in months and demand is measured in quarters, the ability to compress design-to-deployment time is a significant competitive advantage.
Thermal and power simulation before physical build
The Omniverse DSX digital twin's thermal and power simulation capabilities deserve focused attention because they address the two constraints that most frequently cause AI data center projects to underperform or fail.
Thermal simulation at the fidelity that Omniverse DSX enables goes well beyond traditional CFD analysis. The simulation integrates GPU-level thermal output profiles — which vary dynamically with workload — with rack-level airflow modeling, room-level cooling topology, and building-level HVAC system response. The result is a simulation that can predict hot spots under realistic mixed-workload conditions, not just peak theoretical load. Operators can identify cooling design inadequacies before they manifest as throttled GPU performance or premature hardware failures.
Power simulation is equally important given the extraordinary power density of Vera Rubin clusters. The reference design addresses not just steady-state power draw but transient load spikes — the sharp increases in power demand that occur when large training jobs launch simultaneously or when inference traffic surges. Power distribution infrastructure that is sized for average load but not transient peaks can trip breakers or damage UPS systems. The DSX simulation models these scenarios explicitly, enabling operators to right-size power infrastructure without over-provisioning excessively.
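The steady-state-versus-transient distinction is easy to illustrate with a toy load model. The figures and the simple additive spike shape below are invented for illustration; real transient profiles depend on workload schedulers, GPU power management, and facility-specific electrical design.

```python
# Toy transient-load check: a feeder sized comfortably for average draw
# can still exceed its rating when several training jobs ramp at once.
# All figures are invented for illustration.

BASE_DRAW_KW = 600.0        # steady-state draw on one feeder
JOB_SPIKE_KW = 150.0        # extra draw while one large job ramps up
FEEDER_RATING_KW = 1000.0   # continuous rating of the feeder
HEADROOM = 0.9              # keep draw below 90% of rating

def peak_draw_kw(simultaneous_job_launches: int) -> float:
    """Worst-case draw if N large jobs ramp in the same window."""
    return BASE_DRAW_KW + simultaneous_job_launches * JOB_SPIKE_KW

def within_limit(simultaneous_job_launches: int) -> bool:
    return peak_draw_kw(simultaneous_job_launches) <= FEEDER_RATING_KW * HEADROOM

for n in range(4):
    status = "ok" if within_limit(n) else "OVER LIMIT"
    print(f"{n} simultaneous launches -> peak {peak_draw_kw(n):.0f} kW ({status})")
```

The feeder here averages well under its rating, yet three coincident job launches push it past the headroom threshold; that is exactly the class of scenario the DSX simulation is meant to surface before procurement, not after a breaker trips.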
The combination of validated reference design plus simulation-first methodology represents a meaningful shift in how AI infrastructure is engineered. Rather than building, discovering problems, and remediating, operators can discover and resolve problems in simulation before construction begins.
Competitive context: why blueprints are the new battleground
NVIDIA's DSX announcement does not exist in isolation. It reflects a broader competition in the AI infrastructure market where the battleground has shifted from chips to systems to ecosystems.
AMD has been building out its ROCm software ecosystem and partnering with system integrators to offer competitive AI compute alternatives. Intel is positioning Gaudi 3 for inference workloads with its own reference architectures. But neither competitor has assembled a partner ecosystem of the breadth and domain depth that NVIDIA has rallied behind DSX.
The hyperscalers — AWS, Azure, Google Cloud — are simultaneously building proprietary AI silicon (Trainium, Maia, TPU) while continuing to deploy NVIDIA hardware at scale, because enterprise customer demand for NVIDIA-native workloads remains strong enough that they cannot afford to be GPU-agnostic.
The reference design strategy is NVIDIA's answer to the question of what happens after silicon leadership. Even if a competitor closes the gap on raw GPU performance, NVIDIA's integrated blueprint — covering chips, systems, facilities, simulation, and operations — creates a switching cost and an integration advantage that is difficult to replicate. Choosing DSX means choosing not just an NVIDIA GPU but an NVIDIA-certified facility design, an NVIDIA-validated cooling topology, and an NVIDIA-integrated operations platform.
Enterprise adoption: what changes for infrastructure teams
For enterprise infrastructure teams evaluating AI factory investments, the DSX announcement changes the calculus in several ways.
Procurement simplification. Rather than assembling a vendor matrix from scratch — GPU vendor, server OEM, cooling vendor, power vendor, network vendor, facility designer — teams can use DSX as a starting point with pre-validated vendor combinations. This does not eliminate the need for competitive procurement, but it dramatically reduces the scoping and qualification work.
Risk reduction. A validated reference design with simulation-first methodology reduces the probability of post-construction surprises. For enterprises that have experienced the pain of deploying GPU infrastructure into facilities that were not designed for it, this is a meaningful value proposition.
Talent efficiency. Specialized expertise in AI data center design is scarce. A reference design allows teams with more general data center experience to execute AI factory builds without requiring frontier-level expertise at every step of the process.
Timeline compression. For enterprises under pressure to deploy AI capacity quickly — and most large organizations are — the ability to shortcut the design phase by starting from a validated blueprint rather than blank paper can represent a difference of months in time-to-compute.
The caveat is that reference designs optimized for a specific vendor's hardware create lock-in. DSX is a Vera Rubin-centric blueprint. Enterprises that adopt it deeply are making a long-term bet on the NVIDIA platform. That bet may well be rational given NVIDIA's current market position, but infrastructure teams should enter with clear-eyed awareness of the commitment they are making.
What this means for the AI infrastructure market
The Vera Rubin DSX announcement, taken together with the Omniverse DSX digital twin blueprint and the AWS partnership expansion, signals a specific thesis about where value will concentrate in the AI infrastructure market over the next several years.
NVIDIA's thesis is that the constraint on AI deployment will increasingly be physical infrastructure, not silicon. Chips can be manufactured faster than data centers can be designed, permitted, and constructed. By solving the physical infrastructure problem — standardizing designs, integrating simulation, rallying a partner ecosystem — NVIDIA extends its relevance and its economic capture well beyond the GPU itself.
The Omniverse platform, which was sometimes characterized as a gaming or visualization curiosity in its early years, is emerging as a serious enterprise infrastructure tool. The DSX application is arguably the most industrially significant deployment of Omniverse to date: a physics simulation platform that can model facilities housing hundreds of millions of dollars of hardware before a single server rack is installed.
For the broader AI infrastructure ecosystem, the DSX announcement raises the strategic stakes for every vendor in the data center supply chain. Cooling vendors, power vendors, AEC firms, and colocation operators that align with DSX early gain certification credibility and reference customer access. Those that do not align risk being designed out of AI factory builds as DSX adoption scales.
The data center, long treated as a commodity infrastructure layer beneath the interesting AI compute stack, is becoming a strategic asset in its own right. NVIDIA, with Vera Rubin DSX, is making its bid to own the blueprint.
FAQ
What is the Vera Rubin DSX AI Factory reference design?
It is a standardized blueprint from NVIDIA that specifies how to design, build, and operate AI data centers optimized for Vera Rubin GPU clusters. It covers physical layout, power distribution, cooling topology, networking, and software integration.
What does DSX stand for?
DSX stands for Data center Systems eXperience — NVIDIA's framework for end-to-end AI data center specification and simulation.
What is Omniverse DSX?
Omniverse DSX is a digital twin blueprint built on NVIDIA's Omniverse platform. It allows operators to create a physically accurate simulation of a planned AI data center to model thermal, power, and networking behavior before construction begins.
Which partners are supporting DSX at launch?
The launch ecosystem includes Cadence, Dassault Systèmes, Eaton, Jacobs, NScale, Phaidra, Procore, PTC, Schneider Electric, Siemens, Switch, Trane Technologies, and Vertiv — covering compute, power, cooling, construction, and operations.
How many NVIDIA GPUs is AWS deploying?
AWS announced an expanded deployment targeting more than one million NVIDIA GPUs spanning both Blackwell and Rubin generations of hardware.
What does Dassault Systèmes contribute to the DSX ecosystem?
Dassault Systèmes integrates its 3DEXPERIENCE AI-powered virtual twin platform with Omniverse DSX, extending simulation capabilities into biology, materials science, engineering, and manufacturing workloads.
What did Jensen Huang say about digital twins?
Huang stated at GTC 2026: "Everything will be represented in a virtual twin" — articulating NVIDIA's philosophy that digital simulation should precede physical construction across all domains.
How does DSX reduce data center build time?
By providing a pre-validated reference design, DSX eliminates much of the custom engineering work in each new AI facility build. AEC firms and operators can start from a known-good baseline rather than designing from scratch, compressing the design phase by weeks to months.
What thermal simulation does Omniverse DSX provide?
It integrates GPU-level thermal profiles, rack-level airflow modeling, room-level cooling topology, and building-level HVAC response to predict hot spots under realistic mixed-workload conditions before construction.
Does DSX only work with Vera Rubin hardware?
The DSX reference design is optimized for Vera Rubin GPU clusters, though the broader Omniverse DSX simulation framework can model various hardware configurations. Enterprises adopting DSX are making a meaningful long-term commitment to the NVIDIA platform.
Who is NScale and why are they in the DSX ecosystem?
NScale is an emerging AI cloud operator building its infrastructure directly on the DSX blueprint, serving as an early commercial validation that the reference design is deployable at operational scale.
What does Schneider Electric bring to DSX?
Schneider Electric contributes its EcoStruxure platform for power management and sustainability monitoring, validated for the power density profiles of Vera Rubin cluster deployments.
How does Phaidra integrate with DSX?
Phaidra provides AI-driven autonomous operations software that optimizes cooling plant efficiency in real time. It integrates with the Omniverse DSX digital twin to enable predictive facility management.
What does the AWS GPU expansion mean competitively?
It reinforces NVIDIA's position as the default AI compute platform for major cloud providers even as AWS invests in proprietary silicon. It also validates DSX commercially at a scale that accelerates adoption across other enterprise customers.
Is DSX publicly available or only for large enterprises?
NVIDIA has not detailed specific access or licensing terms, but reference designs of this type are typically made available through NVIDIA's partner network and direct enterprise sales channels.
What is the relationship between DSX and NVIDIA AI Enterprise?
The DSX reference design integrates with NVIDIA AI Enterprise, the company's software platform for enterprise AI workloads, creating a continuous stack from physical facility design through AI model deployment and management.
How does power transient simulation work in Omniverse DSX?
The simulation models dynamic load spikes — such as those that occur when large training jobs launch simultaneously — allowing operators to size power infrastructure for peak transient demand rather than just steady-state average draw, preventing circuit breaker trips and UPS failures.
What is the competitive alternative to DSX?
AMD and Intel offer their own reference architectures for AI server deployments, but neither has assembled a partner ecosystem spanning AEC, construction, power, cooling, and operations with the depth of the DSX launch coalition. Hyperscalers building proprietary AI clouds use internal designs that are not available as public reference architectures.
How does Siemens contribute to the DSX ecosystem?
Siemens contributes building management system (BMS) expertise, enabling DSX data centers to integrate with facility-wide energy management, HVAC control, and safety systems.
What is the long-term strategic significance of DSX?
DSX represents NVIDIA's extension of its platform strategy from the chip and software layers into the physical infrastructure layer. By standardizing AI factory design and rallying a broad ecosystem behind that standard, NVIDIA creates integration advantages and switching costs that compound over the multi-year lifecycle of data center assets.