AI chip

Leading hyperscalers including Microsoft, Google and Amazon are investing heavily in developing their own semiconductor technologies as demand for generative AI (GenAI) powered applications, and for the GPUs that run them, skyrockets.

This strategic shift aims to reduce their reliance on Nvidia, whose GPUs are highly sought after for AI-specific workloads, while also fostering innovation and facilitating global expansion, according to a GlobalData report.

Nvidia has become the dominant player in the burgeoning GenAI market, thanks to its GPUs’ ability to handle AI-specific workloads efficiently.

However, the high cost and limited supply of these chips have created a financial burden for major cloud computing companies that provide GenAI services.

Beatriz Valle, senior enterprise technology and services analyst at GlobalData, explained there is a significant imbalance between supply and demand for GPU processors.

“GenAI models, particularly multimodal systems that generate images and videos, benefit greatly from GPUs’ parallel processing capabilities, making these chips both expensive and scarce,” she said.

To address these challenges, GenAI companies are increasingly developing proprietary technologies to run their workloads.

For example, Google has introduced its tensor processing unit (TPU) semiconductors, while Amazon has developed its Inferentia and Trainium architectures.

Facebook’s parent company Meta has also ramped up production of AI chips, and it announced the next generation of custom-made chips designed for AI workloads just last month.

“The goal is to drive performance improvements and help power the ranking and recommendation ads models on social media platforms such as Facebook and Instagram,” Valle said.

GlobalData’s analysis also indicated a continuous flow of investments in the lucrative GenAI sector, as hyperscalers strive to maintain their competitive edge.

Microsoft, aiming to expand its global presence in AI services through its Azure infrastructure, has recently invested $1.5 billion in Abu Dhabi-based AI consortium G42.

The software giant also came to market with its first in-house microprocessor architecture late last year: the Azure Maia 100 and Cobalt 100, the first two custom silicon chips Microsoft has designed for its cloud infrastructure, built specifically for GenAI workloads.

“For Microsoft to take this step, it means that the company is investing a lot of money in innovation to drive its ambitious AI plans,” Valle said.

She explained that the GenAI landscape is still at an early stage, the point at which companies can establish an early competitive advantage.

“These companies know this,” Valle said. “They don’t want to be laggards at this stage of development.”

Meanwhile, Amazon Web Services (AWS) is finalizing a $4 billion investment in its partner Anthropic, following Microsoft’s substantial investment in OpenAI.

Valle noted Nvidia’s H100 chips have largely powered the generative AI revolution, pushing companies such as AMD to accelerate innovation with the Antares Instinct MI300X and MI300A accelerators and adding further pressure to the competitive landscape.

“Startups such as OpenAI are also planning to develop their own chips, perhaps partnering with third-party suppliers,” she said.

She pointed to reports that OpenAI’s CEO Sam Altman has been seeking U.S. government approval to raise billions of dollars to boost global manufacturing of AI chips.

“Microsoft is teaming up with AMD in the latest sign that over the long term, it won’t be so easy for Nvidia to maintain its competitive position and the competitive landscape may start to shift,” she said. “We have seen it already in China where they are manufacturing their own processor to run AI workloads. They are not as good as GPUs, but they do the job.”

Valle explained the semiconductor industry is experiencing unprecedented growth, with hyperscalers and startups planning to leverage their own chips for scalability, cost savings and strategic planning.

She predicted Nvidia would become less of a dominant force in the AI chip industry due to rising competition in the sector, but added this would take time.

“As companies of all sizes work to reduce their dependence on Nvidia, alternative chip architectures will grow stronger, with CPUs, ASIC chips, and other designs gaining more ground,” she said.

Startups in the AI chip market, including Cerebras, Groq, Mythic, Graphcore, Cambricon and Horizon Robotics, are also challenging the dominance of major players like Nvidia and AMD by developing specialized AI accelerators targeted at the wider market.

“These startups are focused on creating custom AI chips that operate faster, consume less power and can be optimized for training neural nets and making inferences more effectively,” Valle said.

However, chip startups face several challenges in penetrating a market dominated by established players – not least the incumbents’ strong market position and brand recognition.

“Additionally, major players often have extensive resources and established relationships with customers,” Valle said. “Startups may face barriers in terms of manufacturing capabilities and access to capital, which can limit their ability to scale and compete effectively.”