AI Workloads Drive Shift to Liquid Cooling in Data Centers

The rapid evolution of artificial intelligence is fundamentally rewriting the blueprint of the modern data center, shifting the focus from simple space management to complex thermal engineering. As high-density workloads push power requirements to unprecedented levels, cooling has evolved from a secondary support function into the primary driver of facility design. Dominic Jainy, a veteran IT professional with a deep specialization in the intersection of AI, machine learning, and infrastructure, joins us to discuss this massive paradigm shift. Our conversation explores how the industry is navigating the transition from air-cooled legacy systems to high-performance liquid solutions, the integration of real-time telemetry for proactive thermal management, and the emerging role of data centers as “heat factories” that contribute to a circular energy economy.

As rack densities move toward the megawatt scale, how does your approach to facility design change to ensure IT and cooling systems operate as a single unit? What specific integration steps are required for building management systems to handle these massive, concentrated heat loads?

In the past, we could treat the data hall and the cooling plant as two separate entities where the facility merely reacted to the heat produced by the IT gear. Today, as we move from tens of kilowatts per rack to hundreds of kilowatts—and even toward megawatt-scale racks—that reactive model is completely broken. We now have to design the entire building as a cohesive, tightly coupled system where the silicon and the cooling fluid are essentially part of the same circuit. This requires a much deeper integration of our Building Management Systems (BMS) and Data Center Infrastructure Management (DCIM) platforms, ensuring they have a granular, real-time view of what the chips are doing. We are moving away from just monitoring room temperature to integrating with workload schedulers so the facility can prepare for a heat spike before it even happens. It feels like moving from a slow-moving thermostat in a house to the fuel injection system of a high-performance race car where every millisecond of data matters.
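As a rough illustration of what that scheduler-to-facility hook might look like, here is a minimal Python sketch. The event fields, the CDU model, and the flow-rate rule of thumb are hypothetical stand-ins, not any particular BMS or DCIM API.

```python
# Minimal sketch of pre-staging cooling from workload-scheduler telemetry.
# All names (SchedulerEvent, Cdu, the flow constant) are illustrative assumptions;
# a real integration would sit behind the site's BMS/DCIM and safety interlocks.

from dataclasses import dataclass

@dataclass
class SchedulerEvent:
    rack_id: str
    projected_power_kw: float   # power the scheduled job is expected to draw
    starts_in_s: int            # lead time before the job lands on the rack

@dataclass
class Cdu:
    """Coolant distribution unit serving one rack (simplified)."""
    rack_id: str
    flow_lpm: float             # current coolant flow, litres per minute
    max_flow_lpm: float = 300.0

    def prestage(self, projected_power_kw: float) -> None:
        # Rough rule of thumb: ~1.5 L/min of coolant per kW at a ~10 K delta-T.
        # The constant is illustrative, not a vendor specification.
        target = min(projected_power_kw * 1.5, self.max_flow_lpm)
        if target > self.flow_lpm:
            print(f"{self.rack_id}: ramping flow {self.flow_lpm:.0f} -> {target:.0f} L/min")
            self.flow_lpm = target

def on_scheduler_event(event: SchedulerEvent, cdus: dict[str, Cdu]) -> None:
    """Pre-stage cooling before the heat spike instead of reacting to it."""
    if event.starts_in_s <= 120:            # act inside a two-minute window
        cdus[event.rack_id].prestage(event.projected_power_kw)

if __name__ == "__main__":
    cdus = {"rack-17": Cdu("rack-17", flow_lpm=120.0)}
    on_scheduler_event(SchedulerEvent("rack-17", projected_power_kw=140.0, starts_in_s=90), cdus)
```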

Hybrid air-liquid architectures are currently the standard for many high-density facilities. How do you determine the right time to transition a facility to full liquid cooling, and what role does modular, skid-based infrastructure play in maintaining flexibility during these technology shifts?

The decision to go full liquid is almost entirely dictated by the requirements of the silicon; when chip power reaches a point where air can no longer physically move the heat away fast enough, the choice is made for us. While single-phase liquid cooling can handle the current and next few generations of high-end chips, we are constantly monitoring the threshold where two-phase systems might become necessary despite their complexity. To manage this uncertainty, we are deploying modular, skid-based architectures that allow us to drop in two- to three-megawatt units as repeatable blocks. These skids allow us to scale power and cooling in tandem, providing a “plug-and-play” capability that keeps the facility agile as densities shift. It takes the guesswork out of the long-term build because we can adapt the cooling method within these modular blocks without having to overhaul the entire central plant.
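A back-of-the-envelope sizing of those repeatable blocks might look like the sketch below; the 2.5 MW skid capacity and the single redundant block are assumptions for illustration, not a design standard.

```python
# Back-of-the-envelope sizing for skid-based cooling blocks. Numbers are illustrative.

import math

def skids_required(it_load_mw: float, skid_capacity_mw: float = 2.5, redundancy: int = 1) -> int:
    """Number of repeatable cooling skids for a target IT load, plus spare blocks."""
    return math.ceil(it_load_mw / skid_capacity_mw) + redundancy

# Example: a 20 MW AI hall served by 2.5 MW skids with one spare block.
print(skids_required(20.0))  # -> 9
```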

Liquid-cooled environments offer a much smaller thermal buffer than traditional air-cooled rooms. What specific storage or power integration strategies do you use to prevent failure during mechanical disruptions, and how has this changed the way you define operational risk for your customers?

The most striking change when moving to liquid is the loss of the “thermal safety blanket” that air provided; while an air-cooled room might take several minutes to overheat during a pump failure, a liquid-cooled rack can hit critical limits in just a few seconds. This necessitates a shift toward immediate mechanical continuity, where we integrate thermal storage tanks and backup pumping systems that can kick in without even a momentary drop in flow. We are also tightening the integration between the mechanical systems and the power infrastructure to ensure that if power dips, the cooling doesn’t lag behind by even a fraction of a second. This has fundamentally redefined our risk model with customers, as we now have to be much more transparent about where our responsibility ends and where their rack-level management begins. It is no longer just about uptime; it is about “thermal continuity,” which is a much more demanding and high-stakes metric to maintain.
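The shrinking buffer can be put in numbers. The short sketch below estimates the ride-through a chilled-water storage tank provides, under assumed values for tank size, allowed temperature rise, and heat load.

```python
# Rough estimate of the ride-through a chilled-water buffer tank provides during
# a pump or chiller disruption. All figures are illustrative assumptions.

def ride_through_s(tank_volume_l: float, allowed_rise_c: float, heat_load_kw: float) -> float:
    """Seconds of buffering: stored thermal capacity (kJ) divided by heat load (kW = kJ/s)."""
    cp_kj_per_kg_c = 4.186          # specific heat of water
    mass_kg = tank_volume_l * 1.0   # ~1 kg per litre of water
    stored_kj = mass_kg * cp_kj_per_kg_c * allowed_rise_c
    return stored_kj / heat_load_kw

# Example: a 5,000 L tank, a 4 C allowed temperature rise, and a 1 MW rack row.
print(f"{ride_through_s(5000, 4.0, 1000.0):.0f} s of buffer")  # ~84 s
```

The same arithmetic explains why air-cooled rooms felt forgiving: the sheer mass of air and building fabric acted as a much larger, if less controllable, buffer.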

While heat reuse is common in certain regions, it remains a challenge elsewhere. What practical steps can be taken to transform data centers into “heat factories” for local use, and how do you calculate the long-term sustainability benefits against the initial infrastructure costs?

We are beginning to view data centers not just as consumers of energy, but as “heat generation factories” that produce a valuable byproduct. In regions like Europe, we see successful integration with district heating systems and greenhouses, but in the U.S., the sheer distance between data centers and the end-users of that heat remains a significant hurdle. To overcome this, the first practical step is designing for higher exit temperatures, as hotter fluid is much more efficient to transport and reuse. We calculate the sustainability benefit by looking at the “cascading effect”—using less energy to move heat away from the chip directly reduces the total power draw of the building, which in turn lowers the carbon footprint. There is no longer a trade-off between performance and sustainability; a more efficient cooling system is naturally a more sustainable one, making the initial infrastructure costs easier to justify over the ten- to fifteen-year life of the facility.
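The quantity a heat offtaker cares about follows from the familiar Q = m_dot * c_p * delta_T relation. The sketch below works that through for an assumed warm-water loop; the flow rate and temperatures are illustrative only.

```python
# Sketch of the recoverable heat available from a warm-water loop, the figure a
# district-heating offtaker would care about. Values are illustrative assumptions.

def recoverable_heat_kw(flow_lps: float, supply_c: float, return_c: float) -> float:
    """Q = m_dot * c_p * delta_T, with water at ~1 kg/L and c_p = 4.186 kJ/(kg*K)."""
    return flow_lps * 4.186 * (supply_c - return_c)

# Example: 50 L/s delivered at 45 C and returned at 35 C is roughly 2.1 MW of low-grade heat.
print(f"{recoverable_heat_kw(50.0, 45.0, 35.0):.0f} kW")  # ~2093 kW
```

This is also why higher exit temperatures matter: the hotter the supply side of that calculation, the more useful each litre of transported fluid becomes.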

There is growing interest in using workload telemetry to inform cooling systems in real time. What technical barriers must be overcome to enable this bidirectional control, and what are the specific implications for the operational boundaries between the facility operator and the client?

The primary barriers to bidirectional control aren’t actually technical—we already have the ability to aggregate telemetry into centralized platforms—but are instead rooted in design philosophy and risk tolerance. For a facility operator to “pre-stage” cooling based on a client’s workload, they need a level of visibility into the client’s internal operations that many companies are traditionally hesitant to share. This creates a new operational boundary where the client must trust the facility’s automation to handle their most sensitive compute cycles. We are seeing a slow shift where operators expose portions of the facility telemetry to the customer for better visibility, but true bidirectional control where the workload actually drives the pumps is still rare. As we move forward, the “fence” between the IT rack and the cooling plant will have to become much more porous to allow for the level of efficiency that megawatt-scale AI demands.
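One possible sketch of how that fence could become more porous without full bidirectional control: the operator publishes read-only loop telemetry while the client shares only a coarse heat forecast rather than raw workload data. The field names below are hypothetical, not any standard schema.

```python
# Hypothetical data exchange across the IT/facility boundary. The operator exposes
# read-only facility telemetry; the client exposes only an aggregate heat forecast.

import json
from dataclasses import dataclass, asdict

@dataclass
class FacilityTelemetry:
    """What the operator exposes to the client (read-only)."""
    rack_id: str
    supply_temp_c: float
    return_temp_c: float
    coolant_flow_lpm: float

@dataclass
class WorkloadForecast:
    """What the client shares with the facility: a heat forecast, not workload details."""
    rack_id: str
    projected_power_kw: float
    window_start_s: int
    window_end_s: int

if __name__ == "__main__":
    print(json.dumps(asdict(FacilityTelemetry("rack-17", 32.0, 44.0, 180.0)), indent=2))
    print(json.dumps(asdict(WorkloadForecast("rack-17", 140.0, 0, 900)), indent=2))
```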

What is your forecast for data center cooling?

I believe we are entering an era of radical standardization where the current “wild west” of bespoke cooling solutions will give way to a more unified, system-level approach across the industry. Because cooling accounts for roughly 20% of a data center’s energy use, reclaiming that energy will become the primary lever for expanding compute capacity, forcing a move toward a universal design language in which chip developers and facility engineers work in lockstep. We will eventually see the data center function as a living organism, using real-time AI to balance its own thermal loads, recycle its own energy, and adjust its cooling intensity chip by chip, making the rigid, legacy cooling models of the past completely obsolete.
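To make the arithmetic behind that roughly 20% figure concrete, a small illustrative calculation (assumed numbers only) shows how cooling efficiency translates directly into compute headroom.

```python
# Why cooling efficiency becomes the lever for compute capacity: within a fixed
# utility feed, every megawatt saved on cooling can go to IT load. Figures are
# illustrative assumptions, not measured data.

def reclaimable_it_mw(total_feed_mw: float, cooling_share: float, efficiency_gain: float) -> float:
    """Cooling power freed up when its share of the feed is cut by `efficiency_gain`."""
    return total_feed_mw * cooling_share * efficiency_gain

# Example: a 100 MW site where cooling is ~20% of the load and liquid cooling trims it by half.
print(f"{reclaimable_it_mw(100.0, 0.20, 0.5):.0f} MW freed for compute")  # -> 10 MW
```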
