NVIDIA Corp. is investing up to $100 billion into OpenAI as part of an ambitious partnership to construct hundreds of billions of dollars’ worth of data centers powered by the chipmaker’s artificial intelligence (AI) processors.

Under the deal announced Monday, OpenAI will build and deploy NVIDIA systems requiring 10 gigawatts of power. The gigawatt has become a standard metric for describing the scale of the largest AI chip clusters as the industry rapidly expands its computing infrastructure.

The investment underscores growing demand for specialized computing power as AI applications proliferate. The partnership positions NVIDIA’s processors at the center of OpenAI’s planned infrastructure expansion while providing the AI lab with significant capital to scale its operations.

“This is a giant project,” NVIDIA CEO Jensen Huang told CNBC in an interview, flanked by OpenAI CEO Sam Altman and company president Greg Brockman. Huang said 10 gigawatts corresponds to between 4 million and 5 million GPUs, roughly the number the company will ship this year.
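Huang's figures imply a rough per-GPU power budget. A minimal sketch of that arithmetic, assuming the 10 gigawatts covers total facility power (GPUs plus networking, cooling, and other overhead), not just the chips themselves:

```python
# Back-of-the-envelope check of the quoted figures (assumptions only):
# 10 GW of total data-center power spread across 4-5 million GPUs.
TOTAL_POWER_GW = 10
GPU_COUNT_LOW = 4_000_000
GPU_COUNT_HIGH = 5_000_000

def watts_per_gpu(total_gw: float, gpu_count: int) -> float:
    """Average power budget per GPU in watts, including all facility overhead."""
    return total_gw * 1e9 / gpu_count

# More GPUs sharing the same 10 GW means a smaller budget per GPU.
low = watts_per_gpu(TOTAL_POWER_GW, GPU_COUNT_HIGH)
high = watts_per_gpu(TOTAL_POWER_GW, GPU_COUNT_LOW)
print(f"Implied budget: {low:,.0f}-{high:,.0f} W per GPU")
# prints "Implied budget: 2,000-2,500 W per GPU"
```

That 2,000–2,500 W figure is an all-in facility average, not a chip spec; it simply shows the quoted GPU count and power scale are mutually consistent.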

Neither company disclosed the timeline for the data center construction or provided additional details about the deployment schedule for the new systems.

The partnership builds on deep ties between NVIDIA and OpenAI, arguably AI’s most influential players. NVIDIA’s graphics processing units experienced a surge in demand following OpenAI’s ChatGPT launch in late 2022, and the AI company continues to depend on these GPUs for software development and user deployment.

The massive scale of NVIDIA hardware involved signals OpenAI’s ambition to develop next-generation AI systems with capabilities far exceeding current models. The company faces mounting pressure to expand its chip capacity as its user base has grown to 700 million weekly active users.

“You should expect a lot from us in the coming months,” Altman said in the CNBC interview. “There are three things that OpenAI has to do well: we have to do great AI research, we have to make these products people want to use, and we have to figure out how to do this unprecedented infrastructure challenge.”

The investment will be released incrementally as infrastructure development progresses, with NVIDIA serving as OpenAI’s preferred supplier for processors and networking equipment. While NVIDIA maintains its dominant position in the AI chip market, the company faces growing competition from Advanced Micro Devices and major cloud service providers developing proprietary chips and integrated systems.

For the broader AI infrastructure ecosystem, the deal highlights the importance of scale, access, and independence as demand accelerates, says Saurabh Giri, chief product and technology officer at Voltage Park.

“A $100 billion AI infrastructure program isn’t a vanity play — it’s a bet on compounding productivity,” Giri said. “At this scale, you’re not just buying GPUs; you’re building an industrial supply chain for intelligence: silicon, power, land, networking, and the software abstractions that make it usable. The immediate return comes from lower unit costs for training and high-volume inference, but the durable upside is new classes of agentic applications that hyperscalers can monetize across every workflow.”

Said Jim Piazza, vice president of AI/ML and predictive systems at Ensono: “One interesting callout is that the deal covers both compute and network, and I think we’ll see this continue as companies continue spending real money for bleeding-edge capabilities. Take the Vera Rubin project as an example. It also demanded bleeding-edge networking. That said, NVIDIA and OpenAI have to be careful not to fall into the ‘It only works with our hardware’ trap. We all remember what happened when IBM went down the microchannel bus path.”
