Samsung has announced the development of a 12-layer high-bandwidth memory (HBM3e) stack, a first for server memory. The design marks a clear step up from the previous generation, packing 36GB of capacity per stack and up to 1,280GB/s of bandwidth. By surpassing the eight-layer, 24GB HBM3 configurations that preceded it, the new stack represents a meaningful leap for AI and machine-learning applications.
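A quick back-of-the-envelope check shows how those headline numbers fit together. The sketch below assumes 24Gb (3GB) DRAM dies and the standard 1,024-bit HBM interface; only the 36GB and 1,280GB/s totals come from the announcement itself.

```python
# Back-of-the-envelope check of the headline figures. The 24Gb (3GB) die
# density is an assumption for illustration; only the 36GB and 1,280GB/s
# totals come from the announcement.

DIES_PER_STACK = 12            # 12-layer (12-high) HBM3e stack
GB_PER_DIE = 3                 # assumed 24Gb DRAM dies
print(f"Capacity: {DIES_PER_STACK * GB_PER_DIE} GB per stack")   # 36 GB

# HBM stacks expose a 1,024-bit interface, so the quoted bandwidth
# implies a per-pin data rate of roughly 10 Gbps.
INTERFACE_WIDTH_BITS = 1024
per_pin_gbps = 1280 * 8 / INTERFACE_WIDTH_BITS   # GB/s -> Gb/s, per pin
print(f"Implied per-pin rate: {per_pin_gbps:.1f} Gbps")          # 10.0 Gbps
```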
The advantages of the new HBM3e are substantial: Samsung cites a 34% average speed increase for AI training tasks, along with potential reductions in total cost of ownership. With these gains, Samsung places itself at the forefront of a rapidly advancing sector that is critical to AI service providers and their growing computational demands.
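To make the cost-of-ownership angle concrete, here is a purely hypothetical calculation. The baseline run time and hourly rate are placeholder figures, not from Samsung's announcement; only the 34% speedup claim is.

```python
# Illustrative only: how a 34% average speed-up translates into wall-clock
# time and cost. Baseline hours and hourly rate are hypothetical.

baseline_hours = 100.0          # hypothetical HBM3-based training run
speedup = 1.34                  # 34% faster on average (per the claim)
new_hours = baseline_hours / speedup
print(f"Run time: {baseline_hours:.0f}h -> {new_hours:.1f}h")    # ~74.6h

cost_per_hour = 40.0            # hypothetical accelerator-node $/hour
savings = (baseline_hours - new_hours) * cost_per_hour
print(f"Saved per run: ${savings:,.0f}")                         # ~$1,015
```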
Rivalry and Advancements
Samsung’s advance did not happen in isolation. Rival memory maker Micron has unveiled its own 12-layer, 36GB HBM3e product, with customer sampling set to begin in March 2024, intensifying the competition. SK Hynix is close behind, having announced a 12-layer version of HBM3 last year.
The key to Samsung’s breakthrough is its advanced thermal compression non-conductive film (TC NCF), which lets the 12-layer stack match the height of the eight-layer design while increasing vertical density by more than 20%. That packaging edge underscores Samsung’s position in the high-performance memory sector, where such innovation is paramount. As these companies vie for dominance, their pursuit of denser, faster stacks is set to redefine what’s possible in data centers, AI applications, and machine-learning platforms around the world. A toy calculation of why thinner bonding film matters follows below.
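Fitting 12 layers into the height budget previously used for eight means each die-plus-film layer gets roughly a third less vertical room, which is the constraint TC NCF helps solve. The 720µm height below is a placeholder, not a Samsung figure; only the layer counts come from the announcement.

```python
# Illustrative packaging arithmetic: the height budget is a hypothetical
# figure; only the 8-vs-12 layer counts come from the announcement.

height_um = 720.0              # assumed fixed package-height budget
pitch_8h = height_um / 8       # per-layer budget in the 8-high stack
pitch_12h = height_um / 12     # same height, now split 12 ways

shrink = (1 - pitch_12h / pitch_8h) * 100
print(f"Per-layer budget: {pitch_8h:.0f}um -> {pitch_12h:.0f}um "
      f"({shrink:.0f}% thinner)")   # 90um -> 60um (33% thinner)
```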