Open19 Publishes New Version of Rack Standard with Liquid Cooling and 48V DC Power Distribution

The standards group SSIA (formerly known as the Open19 group) has recently released the latest version of the Open19 rack standard. This updated specification aims to provide a standardized approach to data center racks, focusing on advanced features such as liquid cooling, 48V DC power distribution, and enhanced efficiency. With the new Open19 v2 specification, data centers can benefit from improved cooling capabilities and energy efficiency, enabling them to handle increasing power densities and meet evolving industry demands.

Overview of Key Features: Liquid Cooling and 48V DC Power Distribution

The Open19 v2 specification introduces two prominent features – pluggable liquid cooling and a 48V native power solution. Liquid cooling has gained popularity in data centers due to its ability to efficiently dissipate heat generated by high-performance servers. This updated standard provides a pluggable liquid cooling standard for rack-mounted servers, enabling data centers to adopt liquid cooling technologies while ensuring compatibility between different systems. Moreover, the new 48V DC power solution offers enhanced energy efficiency compared to traditional 12V power solutions, reducing power loss and enabling higher power densities.
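The efficiency claim for 48V distribution follows from basic Ohm's-law arithmetic: for a fixed load power, quadrupling the bus voltage cuts the current to a quarter, and resistive losses (I²R) fall by a factor of sixteen. The sketch below illustrates this with assumed, hypothetical numbers (a 1.2 kW shelf load and a 0.01 Ω distribution path); these values are not taken from the Open19 specification.

```python
# Illustrative comparison of resistive distribution losses at 12 V vs 48 V.
# The load power and bus resistance below are assumed for illustration only;
# they are not figures from the Open19 v2 specification.

def distribution_loss(load_watts: float, bus_volts: float, bus_ohms: float) -> float:
    """Return the I^2 * R loss in the distribution path for a given load power."""
    current = load_watts / bus_volts   # I = P / V
    return current ** 2 * bus_ohms     # P_loss = I^2 * R

LOAD_W = 1200.0   # hypothetical per-shelf load
BUS_OHMS = 0.01   # hypothetical busbar/cable resistance

loss_12v = distribution_loss(LOAD_W, 12.0, BUS_OHMS)  # 100 A of current
loss_48v = distribution_loss(LOAD_W, 48.0, BUS_OHMS)  # 25 A of current

print(f"12 V loss: {loss_12v:.2f} W, 48 V loss: {loss_48v:.2f} W, "
      f"ratio: {loss_12v / loss_48v:.0f}x")
```

The 16x reduction in conductor losses (the ratio (48/12)² = 16) is why higher distribution voltages also permit thinner busbars for the same power budget, which in turn supports the higher power densities the specification targets.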

System Architecture and “Brick” Mechanical Specifications

The Open19 v2 specification encompasses not only cooling and power distribution but also the system architecture and “brick” mechanical specifications of racks. An integral part of the standard, the system architecture aspect defines the structure and organization of the rack-mounted servers. It enables modularity and easy replacement of components, maximizing flexibility and scalability in data center deployments. The “brick” mechanical specifications ensure compatibility across different vendors’ equipment, promoting interoperability and reducing integration complexities.

Interoperability of Different Cooling Systems

One of the significant achievements of the Open19 v2 specification is defining interfaces for liquid cooling that enable interoperability between different cooling systems. This approach allows data centers the flexibility to combine various liquid cooling solutions, even if they are not inherently compatible. The specification ensures that valved connectors for liquids are capable of securely connecting different vendors’ equipment while accommodating different types of fluids. This interoperability promotes vendor diversity, allowing data centers to choose the most suitable hardware without being confined to a single supplier.

Background and Goals of SSIA (formerly Open19 group)

SSIA began as the Open19 group, which was established as an alternative to the Open Compute Project (OCP), launched by Facebook in 2011. With a focus on open-source hardware, Open19 aimed to develop a standardized rack design to improve data center efficiency and cost-effectiveness. The group merged into the Linux Foundation in 2021, expanding its objectives beyond rack standards to encompass other areas of data center efficiency and sustainability.

Merger with the Linux Foundation and Expansion of Goals

After merging with the Linux Foundation, Open19 transitioned into the SSIA, broadening its scope and goals. Partnering with industry leaders, the SSIA seeks to drive innovation and foster collaboration in optimizing data centers for improved energy efficiency, sustainability, and operational excellence. With this merger, the SSIA gained a stronger platform to promote and support the adoption of the Open19 v2 standard and advance the overall efficiency of data center infrastructure.

The latest version of the Open19 rack standard, Open19 v2, builds upon the success of its previous iteration. It introduces a pluggable liquid cooling standard, a 48V DC native power solution, and support for future generations of power density. These advancements enable data centers to keep pace with the increasing demands of high-performance servers and emerging technologies. By adopting the Open19 v2 specifications, data centers can enhance their cooling capabilities, improve energy efficiency, and achieve higher scalability for their infrastructure.

Benefits of Open19 V1: Flexibility and Cost Efficiency

The previous version of the Open19 platform, V1, offered significant benefits in terms of flexibility and cost efficiency. It allowed rack deployments to be tailored to customer requirements, even on a rack-by-rack basis, while retaining the advantages of an integrated hyperscale solution, giving data centers the best of both worlds. The Open19 V2 specification builds upon these advantages while introducing new features and advancements.

Case Study: Equinix Adopts Open19 v2 for Direct-to-Chip Liquid Cooling

Equinix, a leading data center provider, recently announced its adoption of the Open19 v2 specifications for direct-to-chip liquid cooling in over 100 of its data centers. By embracing the pluggable liquid cooling standard and native 48V power distribution, Equinix aims to improve cooling efficiency and reduce energy consumption in its infrastructure. This case study demonstrates the real-world implementation and benefits of the Open19 v2 specification in large-scale data center operations.

The release of the Open19 v2 specification marks a significant milestone in creating a standardized approach to data center racks with liquid cooling, 48V DC power distribution, and other advanced features. The SSIA, with its merger into the Linux Foundation, is driving the industry towards more energy-efficient and sustainable data center architectures. By embracing the Open19 v2 standard, data centers can enhance their cooling efficiency, optimize power distribution, and adapt to the evolving demands of modern server technologies.
