AMD has moved up the delivery date for its Instinct MI350 series to midyear, accelerating the launch of its first rack-scale AI accelerator aimed at data centers.
AMD announced the schedule change Tuesday while reporting fourth-quarter revenue of $7.7 billion, up 24% year-over-year. Sales of its Instinct GPUs and EPYC CPUs accounted for $3.9 billion of that total, with data center revenue climbing 69% in the December quarter.
“We’re thrilled with our data center GPU business’s performance in 2024,” said AMD CEO Lisa Su in an earnings call. “Looking ahead to 2025, we’re really happy with the strides we’re making in both hardware and software.”
AMD originally planned to start production of the Instinct MI350 later in the year, but testing went more quickly than expected. Su said the company will ship chip samples to key customers this quarter.
The MI350 is built on AMD’s CDNA 4 microarchitecture, designed for high-performance computing and AI workloads in data centers. It targets a range of customers, including cloud providers, pharmaceutical firms, and large retailers. AMD claims the new GPU delivers 35 times the performance of its CDNA 3 predecessor.
According to Su, AMD plans to roll out the MI400, the MI350’s successor, in 2026. This model will integrate networking, CPU, and GPU capabilities at the silicon level.
To optimize workloads on its Instinct AI accelerators, AMD developed the Radeon Open Compute (ROCm) software platform, which encompasses programming models, tools, compilers, libraries, and runtimes. However, ROCm is less mature than Nvidia’s CUDA software stack. In 2024, Nvidia dominated the AI and accelerated computing data center market with $43.6 billion in sales.
Nvidia’s powerful, high-priced GPUs are the go-to for cloud providers and hyperscalers training complex foundation models, which aim to enhance reasoning capabilities beyond just generating text or visuals.
Recently, the Chinese company DeepSeek introduced an open-source reasoning model trained on lower-end Nvidia GPUs. Their research outlines how to create a high-performing model without needing top-tier hardware.
If DeepSeek’s findings hold true, the AI landscape could shift in AMD’s favor, according to David Nicholson, chief research officer at The Futurum Group. AMD’s current offerings compete with Nvidia’s H100 GPUs, which are one generation behind Nvidia’s newest architecture, Blackwell.
“This is a wake-up call,” Nicholson remarked, suggesting it might encourage decision-makers who previously deemed AMD too risky to consider the company.
Su echoed this sentiment, viewing DeepSeek’s work as beneficial for AMD. Models that are cheaper to train and refine will open opportunities for more businesses, not just the largest players. “Innovation in models and algorithms supports AI adoption,” she said.
For the current quarter, AMD forecasts revenue of about $7.1 billion, plus or minus $300 million, a roughly 30% increase from a year earlier. The company expects solid growth in both its data center and client businesses. Its client segment, which includes Ryzen mobile and desktop CPUs, grew 58% year-over-year to $2.3 billion in the fourth quarter.
Antone Gonsalves is an experienced tech journalist and editor at large for Informa TechTarget, focusing on essential industry trends for enterprise tech buyers. He’s based in San Francisco and welcomes news tips via email.