
The Open Compute Project (OCP) Foundation this week played host to multiple advances in open infrastructure for artificial intelligence (AI) applications, spanning everything from power and cooling to interconnects.
During the 2025 OCP EMEA Summit, OCP revealed that Google has joined Meta and Microsoft on the Mount Diablo project, an effort to create a rack design for AI data centers that separates power and compute into different cabinets. Initially launched last year, the project aims to enable 1MW rack densities capable of meeting the requirements of next-generation AI applications.
“Making a 1MW rack an open standard will enable the most advanced data center and cloud providers, and their ‘arms dealer’ equipment suppliers, to accelerate the deployment of bleeding-edge AI, HPC, analytics, and transaction processing services,” says Guy Currier, analyst with the Futurum Group.
Historically, power and server infrastructure have been integrated in a single rack, but the Mount Diablo project seeks to move all the required power conversion infrastructure into a separate, disaggregated power rack.
Google also announced it has contributed a fifth-generation liquid cooling distribution unit, dubbed Project Deschutes, to OCP, along with specifications for a +/-400 VDC power delivery architecture that can support up to 1MW per rack.
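A back-of-the-envelope calculation shows why higher-voltage distribution matters at these densities: for a fixed power draw, bus current scales inversely with voltage, so a +/-400 VDC (800V) bus carries a small fraction of the current that the 48V busbars common in today's OCP racks would require. The sketch below is illustrative only; the comparison voltages other than the 800V bus are assumptions, not part of Google's announcement.

```python
# Back-of-the-envelope: bus current needed to deliver a given rack power
# at different DC distribution voltages (I = P / V). Illustrative only;
# real designs must also account for conversion losses and safety margins.

def bus_current_amps(power_watts: float, voltage_volts: float) -> float:
    """Current drawn from a simple DC bus: I = P / V."""
    return power_watts / voltage_volts

RACK_POWER_W = 1_000_000  # the 1MW rack target cited for Mount Diablo

# 48V reflects today's common OCP busbar; 800V is the +/-400 VDC bus.
for volts in (48, 400, 800):
    amps = bus_current_amps(RACK_POWER_W, volts)
    print(f"{volts:>3} V bus -> {amps:>8,.0f} A")
```

Under those assumptions, delivering 1MW at 48V would take more than 20,000A of bus current, far beyond what practical copper busbars can carry, while the same power on an 800V bus needs roughly 1,250A.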
Madhusudan Iyengar, principal engineer at Google, told conference attendees that rack densities greater than 500kW will be needed by 2030. “There is a lot of competition for every millimeter cubed of space in the rack, which means you’re going to have to push out some of the supporting infrastructure from inside the rack to outside the rack,” he noted.
Meta, meanwhile, last year unveiled a liquid-cooled rack, designed to host NVIDIA Blackwell chips, that delivers up to 140kW of power.
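Some rough arithmetic illustrates why liquid cooling becomes necessary at this density. The sketch below estimates the coolant flow needed to carry away 140kW using the standard heat-transport relation Q = m·c_p·ΔT; the 10°C coolant temperature rise and the use of plain water are assumptions for illustration, not figures from Meta's specification.

```python
# Rough coolant flow required to carry away a rack's heat load, from the
# heat-transport relation Q = m_dot * c_p * dT  =>  m_dot = Q / (c_p * dT).
# The coolant properties and temperature rise below are assumed values.

HEAT_LOAD_W = 140_000   # 140kW rack, per the announcement
CP_WATER = 4186.0       # specific heat of water, J/(kg*K)
DELTA_T_K = 10.0        # assumed coolant temperature rise across the rack

mass_flow_kg_s = HEAT_LOAD_W / (CP_WATER * DELTA_T_K)
litres_per_min = mass_flow_kg_s * 60  # ~1 litre of water weighs ~1 kg

print(f"{mass_flow_kg_s:.1f} kg/s, about {litres_per_min:.0f} L/min of water")
```

That works out to roughly 3.3 kg/s, or about 200 liters of water per minute, circulating through a single rack under those assumptions.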
At the same time, OCP announced an alliance with the Ultra Accelerator Link (UALink) Consortium, which is developing an open interconnect specification for AI and high-performance computing (HPC) workloads.
The collaboration with OCP ensures UALink’s ongoing inclusion in, and compatibility with, a full set of open standards, from the shop floor to the service edge, for the advancement of AI and high-performance infrastructure, says Currier. Large numbers of accelerators, such as GPUs, and other processors can be connected in a way that makes them appear to the system as a single, colossal processing unit, he noted.
“It’s a clear open alternative to NVIDIA’s dominant proprietary NVLink interconnect technology and protocol,” he adds.
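UALink itself is a hardware-level, memory-semantic protocol rather than a software library, so there is no Python API to call; the toy model below merely illustrates Currier's point, mapping several accelerators' memory into one flat, load/store-addressable pool so that software sees a single large device. All class and method names here are hypothetical.

```python
# Toy model only: UALink is a hardware protocol with no Python API. This
# sketch just illustrates the idea of many accelerators' memory being
# presented as one flat, load/store-addressable pool.

class Accelerator:
    def __init__(self, mem_bytes: int):
        self.mem = bytearray(mem_bytes)  # each device's local memory

class ScaleUpPod:
    """Hypothetical pod mapping N identical accelerators into one address space."""
    def __init__(self, accels: list):
        self.accels = accels
        self.stride = len(accels[0].mem)  # bytes per accelerator

    def store(self, addr: int, value: int) -> None:
        # Route the write to whichever device owns this address range.
        self.accels[addr // self.stride].mem[addr % self.stride] = value

    def load(self, addr: int) -> int:
        return self.accels[addr // self.stride].mem[addr % self.stride]

pod = ScaleUpPod([Accelerator(1024) for _ in range(8)])
pod.store(5000, 42)    # lands on accelerator 4, offset 904
print(pod.load(5000))  # 42 -- one address space spanning all devices
```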
Finally, OCP is adding an AI portal to the OCP Marketplace to make it simpler for builders and designers of AI clusters to discover AI infrastructure products, white papers, best-practice documents, and related reference materials.
It’s not clear if or when these advances might mitigate an AI energy crisis that is already driving up demand for power, or whether they will prevent IT organizations from finding themselves locked into proprietary infrastructure. However, it is apparent that necessity is once again proving to be the mother of invention at a time when the number of AI workloads deployed in data centers continues to increase exponentially.