Nvidia’s latest earnings show no sign of a slowdown in demand for AI computing. The company is gearing up to launch a more powerful version of its Blackwell GPU later this year, followed by a next-generation AI chip in 2026.
Blackwell, which hit the market in early December, generated $11 billion in revenue during the fiscal fourth quarter that ended on January 26. This marks the quickest product ramp-up in Nvidia’s history. Overall revenue for that quarter climbed 78% year-over-year to $39.3 billion, surpassing what analysts had predicted. The bulk of that revenue, about $35.6 billion, came from data center products.
Big cloud providers like AWS, Google Cloud, Microsoft Azure, and Oracle Cloud Infrastructure accounted for about half of Nvidia’s data center revenue. These providers rolled out Nvidia’s new GB200 systems across various regions.
The GB200 is a superchip that pairs Nvidia’s Grace CPU with two Blackwell B200 GPUs. The CPU is named after pioneering computer scientist Grace Hopper, while the GPU honors mathematician David Blackwell.
Analysts were watching Nvidia’s earnings closely for hints of a dip in AI spending, especially among cloud providers that plan to invest tens of billions of dollars in data centers this year. Some speculated a slowdown was possible after DeepSeek, a Chinese startup, built a competitive AI model using cheaper, less advanced chips than Nvidia’s latest hardware.
During the earnings call, CEO Jensen Huang said Nvidia plans to introduce Blackwell Ultra in the second half of this year, and he expects the transition from Blackwell to Blackwell Ultra to be smoother than the earlier shift from Hopper to Blackwell. Blackwell’s initial production yields were hurt by a design flaw, and the move from Hopper required an entirely new chassis and power system, which complicated the ramp.
“This transition was tough, but the next one should go much more smoothly,” Huang stated. Blackwell Ultra will introduce updated networking, memory, and processors.
Alexander Harrowell, an analyst at Omdia, noted that any friction in the Hopper-to-Blackwell transition was overshadowed by immense demand and by TSMC expanding capacity faster than its 2024 forecasts.
Looking ahead to 2026, Nvidia will roll out the Vera Rubin architecture, which pairs a Vera CPU with a Rubin GPU built on TSMC’s cutting-edge 3-nanometer process. The new platform will integrate CPU and GPU capabilities, replacing the current Grace-Blackwell setup.
“All our partners are ramping up for this change,” Huang said. He teased that Nvidia will unveil more about Vera Rubin and Blackwell Ultra at their annual GTC conference next month.
Meanwhile, China is unlikely to receive Nvidia’s most advanced chips because of U.S. export restrictions imposed in 2022. Nvidia’s revenue from China has fallen by roughly half since then, prompting the company to develop a less advanced processor for that market, the H20.
Additionally, potential tariffs on chips made outside the U.S., which the Trump administration has threatened, could weigh on Nvidia’s revenue. Most of Nvidia’s chips are produced by TSMC.
Nvidia’s CFO Colette Kress said, “It’s still unclear how this will play out until we have more information on the U.S. government’s plans.”
Huang anticipates that future AI advancements will emerge in areas like agentic AI, physical AI for robotics, and sovereign AI. He believes these sectors will define the next wave of AI, building on the current growth of generative AI in business and consumer applications.
“These developments are just getting started, but we can see them emerging around us,” Huang said. “We’re at the heart of this progress and it’s buzzing in various sectors.”