
Advanced Micro Devices (AMD) revealed that its next-generation artificial intelligence (AI) accelerator, dubbed the MI325X, will become available via its partners in 2025.
In addition, an MI350 series of processors, scheduled to become available in the second half of 2025, is based on a next-generation architecture that provides access to 256GB of memory and 6TBps of throughput, which is 1.8x more capacity and 1.3x more bandwidth than NVIDIA's H200 graphics processing unit (GPU) series.
That processor will be followed in 2026 by an even faster MI400 series, also based on AMD's Compute DNA (CDNA) family of architectures.
The MI350 series in particular represents a major potential leap forward, says Daniel Newman, CEO of The Futurum Group. AMD is now forecasting that the AI accelerator market will generate $500 billion in revenue by 2028, which is two times current forecasts, he notes.
The MI325X, in contrast, uses the same architecture as the MI300X that AMD launched late last year. AMD reports it has shipped half a million MI300 series processors in 2024, and from a price/performance perspective the MI325X is expected to compete favorably with the Blackwell series GPUs that NVIDIA is starting to deliver.
AMD also unveiled a processor to optimize AI networking. Dubbed AMD Pensando Pollara 400, this Ethernet accelerator provides AI teams with a fully programmable 400 Gigabit per second (Gbps) RDMA Ethernet Network Interface Card (NIC) for moving data between multiple GPU nodes.
AMD is also adding a 5th Gen EPYC class of processors, formerly codenamed Turin, that provides access to between eight and 192 cores and increases the number of instructions per clock by up to 17% compared to the previous generation of EPYC processors. Additionally, a Ryzen AI PRO 300 series based on the same architecture includes an integrated neural processing unit (NPU) optimized for running AI models on PCs and other devices.
AMD CEO Dr. Lisa Su told conference attendees that, in general, the company is now better positioned to use open source software to gain additional market share at the expense of CPU and GPU rivals, in a way that doesn't lock organizations into a specific processor architecture or platform. "When it comes to computing, there is no such thing as one size fits all," she says.
In fact, the CPU success AMD has been enjoying of late with its EPYC series bodes well for its AI accelerators, notes Newman. "AMD can take advantage of that strength to win more GPU deals," he says.
However, the company will need to build an ecosystem around the AMD ROCm software stack to achieve that goal. An open alternative to NVIDIA's CUDA framework, the latest edition of ROCm provides up to a 2.4x performance improvement on inference and a 1.8x improvement on training across a variety of large language models (LLMs) compared to previous versions. Making that shift in sentiment happen among developers who in many cases have already embraced CUDA will be critical, notes Newman.
The one thing that is certain is that, when it comes to IT infrastructure, developers of AI applications will ultimately determine whether AMD, NVIDIA or some other rival ends up carrying the day.