Nvidia Debuts New AI Chips: Blackwell Ultra & Rubin

Nvidia is printing money: the company now earns roughly $2,300 in profit every second, riding the explosive wave of AI adoption. What was once a gaming hardware company is now a titan of AI infrastructure, with its data center business far outpacing its gaming segment. Now Nvidia is raising the bar again with a new lineup of AI chips designed to secure its lead far into the future.
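To put that per-second figure in perspective, a quick back-of-the-envelope annualization (using only the $2,300/second number cited above):

```python
# Back-of-the-envelope: annualize the reported profit rate.
profit_per_second = 2_300  # USD, figure cited above
seconds_per_year = 365 * 24 * 60 * 60  # 31,536,000
annual_profit = profit_per_second * seconds_per_year
print(f"${annual_profit / 1e9:.1f}B per year")  # ~ $72.5B
```

That works out to roughly $72.5 billion in profit a year at the current run rate.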
Introducing the Blackwell Ultra GB300
The Blackwell Ultra, shipping in the second half of 2025, isn’t a radical leap from its predecessor—but it’s a serious upgrade where it counts:
- 20 petaflops of FP4 AI performance, the same as the original Blackwell
- 288GB of HBM3e memory, up from 192GB
- In cluster form (DGX GB300 Superpod), it boasts 11.5 exaflops FP4 and 300TB memory—a bump from 240TB
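The per-chip and per-cluster numbers above line up. A quick sanity check (note: the "eight racks of 72 GPUs" interpretation is an assumption based on Nvidia's NVL72 rack design, not stated above; the 300TB figure also includes CPU memory, not just GPU HBM):

```python
# Sanity check: how many 20-petaflop Blackwell Ultra GPUs does
# an 11.5-exaflop Superpod imply? (1 exaflop = 1,000 petaflops)
superpod_fp4_petaflops = 11.5 * 1_000
per_gpu_petaflops = 20
implied_gpus = superpod_fp4_petaflops / per_gpu_petaflops
print(implied_gpus)  # 575.0 -- consistent with ~eight 72-GPU racks
```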
The Blackwell Ultra may not redefine performance metrics, but it clearly serves as a bridge to something much bigger.
Vera Rubin and Rubin Ultra: The Future is Exponential
Nvidia surprised everyone at its GTC 2025 keynote by looking past Blackwell Ultra to reveal its next two architectures:
- Vera Rubin (2026): 50 petaflops of FP4 performance
- Rubin Ultra (2027): 100 petaflops of FP4, 1TB of memory, and 15 exaflops inference in full-rack form
These chips are Nvidia’s play for the next decade of AI infrastructure. Rubin Ultra alone is projected to deliver 14x the performance of today’s Blackwell Ultra racks.
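The "exponential" framing holds up against the per-chip FP4 figures quoted above (20 petaflops for Blackwell Ultra, 50 for Rubin, 100 for Rubin Ultra). A quick calculation of the year-over-year growth:

```python
# Per-chip FP4 figures cited above, in petaflops
roadmap = {2025: 20, 2026: 50, 2027: 100}

years = sorted(roadmap)
for prev, cur in zip(years, years[1:]):
    growth = roadmap[cur] / roadmap[prev]
    print(f"{prev} -> {cur}: {growth:.1f}x")
# 2025 -> 2026: 2.5x
# 2026 -> 2027: 2.0x
```

In other words, Nvidia's roadmap promises per-chip performance that more than doubles each year.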
AI for the Desktop
Nvidia's new silicon isn't just for server rooms; the company is also bringing high-end AI to the desktop:
- DGX Station with a single Blackwell Ultra chip
- 784GB unified memory
- 800Gbps built-in networking
- Partners: Asus, Dell, HP, Supermicro, and more
It’s the kind of workstation aimed at serious AI developers, researchers, and enterprise customers.
The Market Says Yes—Loudly
Nvidia’s AI chips are already moving fast:
- $11 billion in Blackwell revenue already booked
- 1.8 million Blackwell chips sold in 2025 to the top four buyers alone
Despite investor anxiety earlier this year (sparked by breakthroughs like DeepSeek suggesting AI could become more efficient), CEO Jensen Huang insists demand is still sky-high. In his words, “we need 100x more computing power than we thought we needed last year.”
What’s Next?
The roadmap doesn’t stop in 2027. Nvidia is already teasing its 2028 architecture: “Feynman,” named after the legendary physicist. The message is clear—Nvidia isn’t slowing down. If anything, it’s just getting started.
Bottom line: Nvidia is staking its future on the idea that the world will need massive amounts of compute power to fuel the AI revolution—and it's building the chips to deliver it.