Amazon.com Inc.’s AWS is reportedly hammering out a five-year, $475 million deal to provide IBM Corp. with NVIDIA Corp. processors to train artificial intelligence (AI) models.
The purported deal would offer IBM access to Amazon Web Services’ EC2 instances equipped with NVIDIA GPUs, according to a report by Business Insider, citing an internal Amazon document.
AWS was not immediately available for comment.
If such a deal is consummated, industry analysts said, it would illustrate the insatiable demand for NVIDIA’s GPUs and represent a potential windfall for hyperscalers such as AWS that have invested heavily in data centers.
“The move strengthens the AWS-IBM partnership in fulfilling enterprise customer AI demands,” Ron Westfall, research director at The Futurum Group, said in an email.
Earlier this year, IBM announced plans to combine Amazon SageMaker with IBM’s watsonx AI and analytics platform to help broaden the availability of production-ready generative AI solutions. Moreover, the two companies expanded the availability of IBM’s software portfolio on the AWS Marketplace, a move designed to spur adoption of hybrid cloud and AI capabilities across global enterprises. Prior to that collaboration, IBM SaaS solutions had been available only in select countries.
“It’s a smart move aligned to meeting enterprise demands to have more multi-cloud flexibility in meeting their AI training/inferencing requirements,” Westfall said. “The five-year agreement gives IBM access to AWS’s EC2 servers, which use NVIDIA GPUs. This would swiftly bolster IBM’s AI model training capacity and flexibility including its new Granite 3.0 models that prioritize efficiency and performance, providing value at a lower cost than larger models.”
An accord could prove to be good news for enterprises in the proof-of-concept, pre-productized stage of implementing AI models based on right-sized language models and more intelligent data-management capabilities, Westfall added. “This is essential to easing new AI capabilities, including GenAI NLP interfaces, and directly linking AI investments to improved business outcomes,” Westfall said.
Tech analyst Jack Gold deemed the deal a “good move” for both AWS and IBM.
“AWS doing so likely means they have some excess capacity (otherwise they would not have done this with a competitor, although one with a much smaller competitive cloud position), and provides them with higher utilization of their assets,” Gold said in an email. “Good news for IBM since it’s probably pretty hard to get Nvidia GPUs given their relatively smaller buys compared with the big players like xAI, ChatGPT, AWS, Azure, etc., and the shortage of Nvidia GPUs, and so gives IBM increased GPU capacity to work with.”
Amazon, which has developed its own AI chips such as Trainium and Inferentia, last month signed a five-year deal with Databricks to let the data analytics company use Amazon’s Trainium AI chips. AWS racked up $27.45 billion in fiscal third-quarter revenue, up 19% year-over-year.
Amazon is also expected to introduce its latest AI chips in December, a move that leverages its deep investments in semiconductors and reduces its reliance on NVIDIA.