Nvidia Set to Unleash Blackwell AI Powerhouses in 2025

In an industry where demand for AI computation is soaring, Nvidia is set to redefine the landscape once again with its cutting-edge Blackwell architecture. With production slated to begin in the latter half of 2024, Nvidia's Blackwell systems are expected to enter the market in force by 2025, with plans to roll out 40,000 units. The launch embodies Nvidia's strategic pivot toward selling complete systems rather than individual chips, a shift that may mean lower unit volumes than its predecessor, Hopper, achieved. Even so, the move underscores the company's commitment to delivering high-performance, specialized AI computing solutions.

Despite the shift in market approach, Nvidia has prepared an extensive product lineup to cater to diverse computing needs. The portfolio spans three core configurations: NVL72, NVL36, and HGX B200. The NVL72 is the flagship, a liquid-cooled cabinet housing 36 dual-GPU Grace Blackwell superchips for a total of 72 GPUs. It is built for maximum compute parallelism, with each GPU kept at peak efficiency by a 10 TB/s interconnect.
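The GPU counts above follow from simple arithmetic on the superchip layout. A minimal sketch, assuming each Grace Blackwell superchip pairs two GPUs (as the article states for the NVL72); treating the NVL36 as a half-size sibling with 18 superchips is an illustrative assumption, not a figure from the article:

```python
def gpu_count(superchips: int, gpus_per_superchip: int = 2) -> int:
    """Total GPUs in a cabinet built from dual-GPU superchips."""
    return superchips * gpus_per_superchip

# NVL72: 36 dual-GPU Grace Blackwell superchips -> 72 GPUs (per the article).
nvl72_gpus = gpu_count(36)

# NVL36: assumed here to be a half-size configuration (18 superchips).
nvl36_gpus = gpu_count(18)

print(nvl72_gpus, nvl36_gpus)
```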

