Amazon is investing up to $4 billion in AI startup Anthropic, becoming the latest IT giant to pour billions of dollars into generative AI in hopes of gaining a stronger foothold in what many see as the most significant technology trend since the cloud.

With the deal announced today, Amazon is looking to compete more strongly against the likes of Microsoft – which is investing as much as $11 billion in ChatGPT-maker OpenAI – as well as Google, Meta and IBM. The deal not only includes Anthropic’s expanded use of Amazon technologies and AWS cloud services, but also Amazon taking a minority ownership stake in the two-year-old company.

“AWS will now be Anthropic’s primary cloud provider and help build, train and deploy its future foundation models on [Amazon’s] Trainium and Inferentia chips,” Amazon CEO Andy Jassy wrote in a post on X (formerly Twitter).

Tech companies have been working on AI for years, but interest and adoption have rapidly accelerated in the past 10 months, since OpenAI introduced generative AI to the business world with the release of the ChatGPT chatbot in late November 2022. According to Bloomberg Intelligence, the global generative AI market could grow as much as 42% a year over the next decade, reaching $1.3 trillion by 2032, fueled by innovation around training and inferencing technologies, large language models (LLMs), digital ads and specialized software and services.

“This is a smart investment and a solid long-term collaboration move for Amazon and AWS as competing technology companies, including Microsoft, Google, Salesforce and others keep moving forward to incorporate their own broader generative AI assets and features into their offerings for customers,” Todd R. Weiss, senior analyst with The Futurum Group, told Techstrong.ai.

Riding the Generative AI Wave With Claude

Anthropic created Claude, a generative AI chatbot that rivals ChatGPT, Google’s Bard and similar AI tools. It also develops foundation models – large-scale deep learning models trained on broad datasets that can be adapted to suit an organization’s more specific purposes.

The company in July released Claude 2, promising improved performance and longer responses and letting users access it through an API rather than only from the LLM’s public-facing beta site. Anthropic also has been vocal about its push to develop AI in a responsible and safe manner.

As part of the deal with Amazon, Anthropic will use AWS’ Trainium chips – accelerators for training large AI models – and Inferentia chips – designed for AI inferencing workloads – to build, train and launch upcoming foundation models, and also will work with AWS on future Trainium and Inferentia development.

In addition, AWS will be Anthropic’s primary cloud provider for such tasks as safety research and foundation model development, with the startup running the majority of its workloads on the world’s largest cloud platform. AWS customers also will get access to those foundation models via Amazon Bedrock, a foundation model service introduced earlier this year that counts such companies as Salesforce and Coda AI among its customers.
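For developers, that Bedrock access comes down to a standard AWS API call. The snippet below is a minimal sketch of invoking Claude through Bedrock using the boto3 bedrock-runtime client; the region, the anthropic.claude-v2 model ID and the request format are assumptions based on Bedrock’s generally available API and may differ by account and model version.

import json

import boto3

# Assumes AWS credentials are configured and the account has been granted
# access to Anthropic's Claude models in Amazon Bedrock.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Claude on Bedrock expects the Human/Assistant prompt format.
request_body = json.dumps({
    "prompt": "\n\nHuman: Summarize what a foundation model is in one sentence.\n\nAssistant:",
    "max_tokens_to_sample": 200,
    "temperature": 0.5,
})

response = bedrock.invoke_model(
    modelId="anthropic.claude-v2",  # assumed model ID; check the Bedrock model list
    body=request_body,
    contentType="application/json",
    accept="application/json",
)

# The response body is a stream containing a JSON document with a "completion" field.
result = json.loads(response["body"].read())
print(result["completion"])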

Amazon’s developers and engineers also will be able to build generative AI capabilities into their software through access to Anthropic models available in Bedrock. The cloud giant’s customers also will get early access to some of Anthropic’s offerings.

“This will include secure model customization and fine-tuning on the service to enable enterprises to optimize Claude’s performance with their expert knowledge, while limiting the potential for harmful outcomes,” Anthropic officials wrote in a blog post.

They added that the deal gives them the scale and technology they need to grow the business.

“Training state-of-the-art models requires extensive resources, including compute power and research programs,” they wrote. “Amazon’s investment and supply of AWS Trainium and Inferentia technology will ensure we’re equipped to continue advancing the frontier of AI safety and research.”

Investments and Deals Aplenty

Amazon, like other tech companies, has been rapidly adding generative AI capabilities to its portfolio over the past year. In a letter to shareholders earlier this year, Jassy said the company is “investing heavily in [LLMs] and generative AI,” noting the rapid uptake of the technologies in the past year.

“We have been working on our own LLMs for a while now, believe it will transform and improve virtually every customer experience, and will continue to invest substantially in these models across all of our consumer, seller, brand, and creator experiences,” he wrote.

In July, AWS introduced new and enhanced tools for creating LLMs in Bedrock and pushed its efforts to ensure the responsible creation and use of AI, an initiative that dovetails with Anthropic’s vision.

All this will be needed as the company works to keep pace with others in the AI field. As noted, Microsoft is pouring money into OpenAI, expanding the use of the startup’s technologies as well as its own generative AI development throughout its product portfolio. Google is continuing to build out Bard and late last year invested $300 million of its own in Anthropic, which was co-founded by ex-OpenAI executives, including CEO Dario Amodei.

According to Reuters, Amodei said that the deal with Amazon won’t affect the agreement with Google, which included making Google its preferred cloud provider.

For its part, Meta, whose technology includes the open source Llama AI models, reportedly will invest as much as $33 billion to build out its AI capabilities, with executives saying AI is a business priority.

Competition at the Chip Level

Amazon, with Trainium and Inferentia, also is competing with the likes of Nvidia and Intel, as well as smaller AI chipmakers, in the growing AI accelerator field.

Amazon also was among a number of large tech firms – including Nvidia, Salesforce, Intel, IBM, AMD, Qualcomm, and Google – to invest $235 million last month in Hugging Face, which offers a platform similar to GitHub that gives AI developers tools and a place to share code and datasets to more easily develop open source AI models.

Deals like Amazon’s with Anthropic will be a long-term theme, with larger IT firms unwilling to be seen as falling behind in the generative AI space by customers or the fast-moving marketplace, Futurum’s Weiss said.

“At the same time, it is still early in the generative AI race, so we are just seeing the beginning of what is possible, without the full development of this technology at this point,” he said. “There will be big successes as well as some failures in these deals as the market boils, simmers, and morphs with more new advancements, but the market has made it very clear in the last year that the time for generative AI is here and sitting on one’s laurels is not the smart move today.”
