
OpenAI is considering building its own AI chips to offset what it says is a global shortage of the GPUs needed to run its software, and possibly to reduce the exorbitant cost of the hardware supporting its ChatGPT generative AI chatbot.

According to a Reuters report, the eight-year-old company has even evaluated acquiring an unnamed company, though it is also weighing other options, including teaming more closely with Nvidia and expanding its supply chain to include other GPU makers.

OpenAI and CEO Sam Altman have not yet decided on a plan, though concern about the availability of the expensive AI chips has haunted the company, Reuters reported, citing unnamed sources.

The company relies heavily on GPUs to run its expanding software portfolio. OpenAI runs all of its software (such as ChatGPT, its GPT models, Codex for code generation and DALL-E for image creation) and related operations – compute, storage, networking and databases – on an AI supercomputer in Microsoft’s Azure cloud. Microsoft is investing more than $10 billion in OpenAI – taking a 49% stake – and is the startup’s exclusive cloud partner.

Microsoft also is using OpenAI technologies throughout its product portfolio.

GPU Worries

The Microsoft system running OpenAI’s operations uses 10,000 Nvidia GPUs, according to Reuters. More are needed to support the company’s ongoing innovation.

Altman said the GPU shortage is delaying some short-term plans, including rolling out the 32K context window to more users, expanding access to its fine-tuning API and growing its dedicated capacity offering, according to a report.

The IT industry has been talking about the GPU shortage for months. Microsoft, in its fiscal year 2023 annual report, noted several times its ongoing worry about being able to acquire enough GPUs for its cloud operations.

That said, some are seeing improvement. Speaking at the Code Conference last week, Microsoft CTO Kevin Scott told the crowd that getting access to Nvidia GPUs to run AI workloads is easier than it was a few months ago.

“Demand was far exceeding the supply of GPU capacity that the whole ecosystem could produce,” Scott said. “That is resolving. It’s still tight, but it’s getting better every week, and we’ve got more good news ahead of us than bad on that front, which is great.”

Is It Worth the Cost?

OpenAI also would have to determine whether the cost of buying a company and running a chip-making business would be offset by savings on running ChatGPT and its other software. Running ChatGPT reportedly costs OpenAI $700,000 a day, and Reuters noted that each ChatGPT query costs about 4 cents. The news outlet pointed to research estimating that if such queries grow to a tenth the scale of Google search, it would take about $48 billion worth of GPUs upfront and $16 billion in chips a year to keep running.
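As a rough sanity check on those figures, here is a minimal back-of-envelope sketch in Python. It assumes Google handles roughly 8.5 billion searches a day – a commonly cited estimate that does not appear in the Reuters report – and uses the 4-cents-per-query figure above; the numbers are illustrative only.

```python
# Back-of-envelope estimate of ChatGPT inference costs at Google-like scale.
# ASSUMPTION (not from the article): Google handles ~8.5 billion searches/day.
GOOGLE_SEARCHES_PER_DAY = 8.5e9   # assumed, commonly cited estimate
COST_PER_QUERY = 0.04             # ~4 cents per ChatGPT query (Reuters)
SCALE_FRACTION = 0.1              # "a tenth the scale of Google search"

queries_per_day = GOOGLE_SEARCHES_PER_DAY * SCALE_FRACTION
daily_cost = queries_per_day * COST_PER_QUERY
annual_cost = daily_cost * 365

print(f"Queries/day: {queries_per_day:,.0f}")    # ~850 million
print(f"Daily cost:  ${daily_cost / 1e6:,.0f}M") # ~$34 million/day
print(f"Annual cost: ${annual_cost / 1e9:,.1f}B") # ~$12.4 billion/year
```

Under those assumptions, per-query costs alone land around $12 billion a year – the same order of magnitude as the $16 billion annual chip spending the cited research projects – which helps explain why OpenAI is looking for cheaper hardware.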

OpenAI is making other moves to bring in more revenue, such as rolling out ChatGPT Enterprise in late August in hopes that businesses will pay for such promises as better performance, enterprise-grade data analysis and no usage caps.

If OpenAI opts to make its own chips, it will be entering a crowded field. Nvidia, Intel and AMD already are building out their AI chip portfolios, and smaller startups, like SambaNova and Cerebras, are in the mix. Some large hyperscalers – including Microsoft, Meta, Google and Amazon – have developed or are developing their own AI processors.

In a blog post in May, Santosh Janardhan, vice president and head of infrastructure at Meta, wrote that the company expects its AI compute needs to “grow dramatically over the next decade,” which means its AI infrastructure will need to expand.

“This includes our first custom silicon chip for running AI models, a new AI-optimized data center design and the second phase of our 16,000 GPU supercomputer for AI research,” Janardhan wrote.

Not a Trivial Undertaking

There are pros and cons to OpenAI making its own processors, according to Rob Enderle, principal analyst with The Enderle Group. On the plus side, it would enable the company to better dictate its destiny by controlling not only its software but also the hardware it runs on.

“Software has a hardware requirement and if you control both you have a firm idea on the direction you’ll need to take to remain competitive, but if you only own one, a change in the other could immediately degrade your solution, potentially making you irrelevant,” Enderle told Techstrong.ai.

But there are risks.

“Creating chips is not trivial and foundries and fabs are already saturated, making it very likely that OpenAI will fail in this effort,” he said, adding that OpenAI would do better partnering with AMD, Qualcomm, Nvidia or Intel – which has its own foundries – to get the control it feels it needs.

Todd R. Weiss, senior analyst with The Futurum Group, said OpenAI creating its own chips so it doesn’t have to rely on anyone else “at first glance … is a cool idea.”

“But then you must think of the ramifications, and they are not small,” Weiss told Techstrong.ai. “First there are the immense costs. You think paying someone else for chips is expensive? Start looking at what it is going to cost you to design your own chips, build your own chip making facilities, develop a road map of new and better chips on a never-ending schedule, and then worry about your own supply chain issues to keep the chips flowing and selling.”

Established chip makers will always be steps ahead in innovating and bringing their chips to a rapidly changing market, though OpenAI would have a shot at pulling this off with the money Microsoft is giving the company and possible funding from other financial backers.

“But the bottom line is that I think it could be more posturing and ego than a smart strategy,” he said.
