
Politics and AI semiconductor manufacturing make for strange bedfellows. If you follow tech, you’ve likely seen headlines about OpenAI and Broadcom teaming up to co-develop custom AI chips. But as with so much in our industry, the headlines don’t tell the whole story. So here’s Shimmy’s view.
First, let’s lay out the basics. OpenAI and Broadcom have inked a pact to build custom application-specific integrated circuits (ASICs), with mass production slated for 2026. These chips will be fabricated by TSMC using its bleeding-edge 3-nanometer process technology. OpenAI won’t be selling these chips to outside customers. Instead, the plan is to use them in-house to power its own data centers — initially for inference workloads, while continuing to rely on NVIDIA GPUs for model training.
If that setup sounds familiar, it should. Google has its TPUs, Amazon has Trainium and Inferentia, and Meta has its in-house silicon projects. All of them have one thing in common: An effort to wean themselves off NVIDIA’s expensive, supply-constrained GPUs. OpenAI is simply the latest — and perhaps the most visible — entrant into this game.
Wall Street, for its part, loves the move. Broadcom’s shares jumped after news of the partnership, while NVIDIA’s took a dip. Investors see Broadcom solidifying its role as a credible player in the AI hardware arms race. A $10 billion chip order — announced by Broadcom earlier this year but attributed only to an unnamed customer — suddenly makes a lot more sense when you connect the dots.
Now, let’s get to the Shimmy take. You’d be hard-pressed to find a major AI player today who wouldn’t like to get out from under NVIDIA’s yoke. Don’t get me wrong — NVIDIA isn’t the villain here. Jensen Huang and company have executed brilliantly, building not just great GPUs but also CUDA and an entire software ecosystem that’s become the default infrastructure for AI. But when one company controls the keys to the kingdom, the rest of the industry has little choice but to pay whatever toll is demanded. That’s not a desirable position for any business, especially one scaling as rapidly as OpenAI.
So, if not NVIDIA, who? AMD often gets mentioned as the alternative. And to be fair, AMD has been making real strides with its MI300 accelerators and ROCm software stack. But AMD still lags NVIDIA on software maturity and developer adoption. ROCm isn’t CUDA. And in AI, software ecosystems matter just as much as raw silicon, if not more. Betting your company’s compute destiny on AMD isn’t a slam dunk.
Then there’s China. On paper, Chinese manufacturers are making progress. But “on paper” is the key phrase here. They remain several steps behind NVIDIA on performance, and geopolitical realities add another layer of risk. Depending on Chinese fabs and suppliers for your most strategic compute workloads? That’s not just a supply chain gamble — that’s a sovereignty and national security issue. Compared to that, even NVIDIA’s market dominance feels tame.
Which brings us back to OpenAI’s move. Just like Apple with its A-series and M-series chips, OpenAI wants to control its own silicon destiny. By building custom chips, OpenAI can optimize specifically for the workloads it cares about: The massive inference tasks of running models like ChatGPT at scale. It can also reduce reliance on external suppliers, cut costs over time, and design chips fine-tuned to its unique needs.
There’s also a political wrinkle worth watching. If OpenAI and Broadcom really want to avoid tariffs and curry favor in Washington, they could tap Intel’s fabs here in the U.S. Intel is desperately trying to reassert itself in advanced manufacturing, and a Broadcom-Intel-OpenAI alliance would check a lot of boxes — for business, for politics, and for national industrial policy. Stranger things have happened.
Still, we should temper our expectations. Partnerships like these can sometimes fizzle into what I like to call “Barney deals”: all hugs and smiles, “I love you, you love me,” but not much to show in the end. This pact is supposed to get into gear in 2026. That’s an eternity in AI time, where product cycles and model releases are measured in quarters, not years. By then, NVIDIA will have released at least two new generations of GPUs. AMD will have iterated. Intel will either be back in the game or left for dead. And who knows what geopolitical shocks may hit the semiconductor supply chain in the meantime.
But for now, Broadcom and OpenAI have put a marker down. They’re signaling that NVIDIA may no longer have the AI world on lock. Whether this partnership reshapes the hardware landscape — or ends up as another well-intentioned press release — remains to be seen.
One thing is certain, though: The AI hardware wars are just getting started. And in this game, control of your own silicon may be the ultimate prize.