With the rise of artificial intelligence driving computational demands to unprecedented levels, the data center industry is at a critical inflection point. Power densities that were once theoretical are now a reality, pushing traditional cooling methods to their limits. To navigate this new landscape, we sat down with Dominic Jainy, a distinguished IT professional whose work at the intersection of AI, machine learning, and infrastructure provides a unique perspective. We explored the high-stakes decision between building new “greenfield” data centers and retrofitting existing “brownfield” sites, the complex transition to liquid cooling, and the innovative strategies required to power the future of high-performance computing.
Brownfield retrofits can be significantly cheaper and faster, potentially avoiding multi-year revenue delays. How should an operator weigh these immediate cost and speed benefits against the long-term performance, sustainability, and scalability advantages of a new greenfield build? What specific metrics guide this decision?
That’s the central dilemma facing almost every operator today. It’s a classic battle between immediate tactical gains and long-term strategic positioning. On one hand, a brownfield retrofit is incredibly compelling. You’re looking at a project that can cost 30-50% less than a new build and, crucially, get you to market, and generating revenue, more than two years sooner. In the AI race, that speed is a massive competitive advantage. However, you’re often just kicking the can down the road. The metrics guiding this choice go beyond simple capex. We look at the projected Total Cost of Ownership (TCO), which includes operational efficiency. A greenfield site, designed from the ground up for 100 kW racks and heat reuse, will have a much lower TCO over a decade. We also model the “scalability ceiling”—at what point will the brownfield’s legacy constraints on power or floor loading prevent the next hardware refresh? It’s a decision based on the organization’s risk tolerance, its long-term AI roadmap, and whether it values speed to market over ultimate performance and sustainability.
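To make that trade-off concrete, here is a minimal back-of-envelope sketch of the kind of TCO comparison described above. Every figure in it (capex, PUE, tariff, IT load, revenue rate) is an illustrative assumption, not a number from the interview; the point is only the shape of the comparison, capex plus energy opex versus the value of an earlier revenue start.

```python
# Minimal 10-year TCO comparison sketch. All figures are illustrative
# placeholders; substitute your own capex, PUE, tariff, and time-to-revenue data.

def ten_year_tco(capex_usd, it_load_mw, pue, tariff_usd_per_kwh, years=10):
    """Capex plus energy opex over the horizon (ignores staffing, maintenance, refresh)."""
    hours = years * 8760
    energy_kwh = it_load_mw * 1000 * pue * hours
    return capex_usd + energy_kwh * tariff_usd_per_kwh

# Hypothetical inputs: a 20 MW IT load, a retrofit at roughly half the greenfield
# capex, but with a worse PUE and an earlier revenue start.
greenfield = ten_year_tco(capex_usd=600e6, it_load_mw=20, pue=1.15, tariff_usd_per_kwh=0.08)
brownfield = ten_year_tco(capex_usd=300e6, it_load_mw=20, pue=1.45, tariff_usd_per_kwh=0.08)

# Rough value of reaching market about two years sooner, at an assumed revenue rate.
revenue_per_year = 120e6
early_revenue_offset = 2 * revenue_per_year

print(f"Greenfield 10-yr TCO: ${greenfield / 1e6:,.0f}M")
print(f"Brownfield 10-yr TCO: ${brownfield / 1e6:,.0f}M")
print(f"Brownfield early-revenue offset: ${early_revenue_offset / 1e6:,.0f}M")
```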
With rack power densities now approaching 200 kW, traditional air systems are becoming obsolete. Can you describe the practical, step-by-step process of integrating hybrid, direct-to-chip liquid cooling into a facility originally designed entirely for air cooling? What are the most common unforeseen challenges?
It’s less of a total overhaul and more of a surgical insertion. You can’t just turn off the facility and start running pipes. The first step is always to identify and isolate a high-density zone. You partition a few rows to house the new AI clusters. Then, you bring in prefabricated liquid loops. These are self-contained units that manage the fluid distribution for the direct-to-chip (DTC) systems, which minimizes the invasive plumbing work in the live data hall. The DTC cold plates are then attached directly to the processors and GPUs in the new racks. The heat is captured in the liquid and moved to the external loop. The most common challenge we see isn’t the technology itself, but the building’s hidden limitations. You’ll suddenly discover the floor can’t handle the weight of fully populated, liquid-filled racks, or the ceiling height isn’t sufficient for the new overhead piping. Another big one is integrating the new liquid heat rejection system with the building’s old chiller plant. Getting those two disparate systems to talk to each other efficiently can be a major engineering headache.
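For a sense of the thermal side of that "surgical insertion," the sketch below sizes the coolant flow for a single direct-to-chip rack using the standard relationship Q = ṁ·c_p·ΔT. The rack heat load and the coolant temperature rise are assumptions chosen purely for illustration, not specifications from the interview.

```python
# Back-of-envelope coolant flow sizing for one direct-to-chip rack, using
# Q = m_dot * c_p * dT. All inputs are assumptions for illustration only.

RACK_HEAT_KW = 100.0    # heat captured by the cold plates (assumed)
DELTA_T_C = 10.0        # coolant temperature rise across the rack (assumed)
CP_WATER = 4186.0       # J/(kg*K), specific heat of water
DENSITY_WATER = 997.0   # kg/m^3

# Required mass flow in kg/s: Q [W] / (c_p * dT)
mass_flow = RACK_HEAT_KW * 1000 / (CP_WATER * DELTA_T_C)

# Convert to litres per minute for comparison against CDU and pump datasheets.
litres_per_min = mass_flow / DENSITY_WATER * 1000 * 60

print(f"Mass flow:  {mass_flow:.2f} kg/s")
print(f"Volumetric: {litres_per_min:.0f} L/min per 100 kW rack at a 10 C rise")
```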
Greenfield builds offer a “blank canvas” for optimal design. Beyond just the physical layout, what are the most critical, “Day One” decisions regarding heat-reuse systems, power integration, and water access that truly future-proof a data center for the next decade of AI infrastructure demands?
The physical layout is almost the easy part. The truly foundational decisions are about utility integration. For power, it’s not just about securing a large feed; it’s about designing for the massive step-down transformers that AI clusters demand and building in the modularity to scale that power infrastructure without disruption. For cooling, the Day One decision is committing to a “liquid-first” philosophy. This means planning the entire water and fluid distribution infrastructure not as an add-on, but as the core of the facility’s thermal design. Most importantly, it’s about heat reuse. A truly future-proofed facility asks, “Where can this waste heat go?” before the first server is ever installed. This means co-locating with potential heat customers like district heating networks or greenhouses. These decisions—power modularity, a liquid-first core, and an integrated heat-reuse plan—are what separate a merely new data center from one that will still be relevant and efficient in ten years.
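As a rough illustration of that heat-reuse question, the sketch below estimates how much recoverable heat such a facility might export and what that represents in district-heating terms. The IT load, liquid-capture fraction, and per-home heat demand are all assumed figures, not data from the interview.

```python
# Rough sketch of "where can this waste heat go?". Every figure here is an
# assumption for illustration, not a number from the interview.

IT_LOAD_MW = 20.0                # facility IT load (assumed)
DTC_CAPTURE_FRACTION = 0.75      # share of IT heat recoverable via the liquid loop (assumed)
HOME_DEMAND_MWH_PER_YEAR = 12.0  # annual heat demand of one home on district heating (assumed)

recoverable_mw = IT_LOAD_MW * DTC_CAPTURE_FRACTION
annual_heat_mwh = recoverable_mw * 8760  # assuming continuous operation
homes_served = annual_heat_mwh / HOME_DEMAND_MWH_PER_YEAR

print(f"Recoverable heat: {recoverable_mw:.1f} MW thermal")
print(f"Annual heat:      {annual_heat_mwh:,.0f} MWh")
print(f"Roughly {homes_served:,.0f} homes' worth of district heating demand")
```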
Legacy constraints in brownfield sites, like floor-loading limits or inadequate power infrastructure, present major hurdles for high-density AI clusters. What are the most innovative engineering solutions you’ve seen for overcoming these physical and electrical limitations without resorting to a complete, cost-prohibitive rebuild?
This is where real engineering creativity comes into play. For floor-loading issues, we’ve moved beyond simple reinforcement. I’ve seen teams use custom-designed weight-distributing platforms that spread the load of a heavy DTC rack across multiple floor tiles and underlying supports. It’s like giving the rack a set of snowshoes. For power, since you often can’t just pull a new utility feed, the innovation is in on-site power distribution. We are seeing more localized, high-efficiency power distribution units and even in-row busbars that can handle the massive amperage draws of AI racks without requiring a full-scale teardown of the existing electrical room. The key is to solve the problem at the local level—at the rack or row—instead of trying to re-engineer the entire building’s core infrastructure. It’s about being precise and targeted with your upgrades.
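Two quick arithmetic checks sit behind those fixes: whether a spreader platform brings the floor load under a legacy tile rating, and how much current an in-row busbar must carry per high-density rack. The sketch below runs both; the rack weight, tile rating, footprints, voltage, and power factor are hypothetical placeholders, so substitute real structural and electrical data before drawing conclusions.

```python
# Quick checks behind the two retrofit fixes described above. All ratings and
# weights are hypothetical placeholders for illustration.

import math

# --- Floor loading: does a spreader platform bring the load under the tile rating? ---
rack_weight_kg = 1800.0         # liquid-filled DTC rack (assumed)
tile_rating_kg_per_m2 = 1200.0  # legacy raised-floor rating (assumed)
rack_footprint_m2 = 0.60 * 1.20
platform_footprint_m2 = 1.20 * 2.40  # "snowshoe" platform spanning several tiles

for label, area in [("bare rack", rack_footprint_m2), ("with platform", platform_footprint_m2)]:
    load = rack_weight_kg / area
    status = "OK" if load <= tile_rating_kg_per_m2 else "OVER LIMIT"
    print(f"{label:14s}: {load:7.0f} kg/m^2  ({status})")

# --- In-row power: current draw of a 100 kW rack on a 415 V three-phase busbar ---
rack_power_w = 100_000.0
line_voltage = 415.0
power_factor = 0.95  # assumed
amps = rack_power_w / (math.sqrt(3) * line_voltage * power_factor)
print(f"Busbar current per 100 kW rack: {amps:.0f} A")
```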
A hybrid strategy using zoned cooling can upgrade a facility incrementally. Could you elaborate on how an operator can partition a data hall to run high-density, liquid-cooled racks alongside legacy air-cooled equipment? What are the keys to ensuring operational harmony and efficiency between these two zones?
The key to operational harmony is containment. You must create a clear physical and thermal boundary between the two environments. This can be as simple as using vinyl curtains or as robust as building a permanent wall to create a “pod” for the high-density liquid-cooled racks. Inside this pod, the DTC systems capture the majority of the heat directly from the chips. The remaining low-grade heat is easily managed by the pod’s dedicated air handlers. The rest of the data hall continues to operate with its existing air-cooling system, completely undisturbed. The secret to efficiency is ensuring the two systems don’t fight each other. You need separate control and monitoring systems. The liquid cooling loop for the HPC zone should have its own dedicated pumps and heat exchangers, so its operation is independent of the main building chillers that serve the air-cooled side. This allows you to run each zone at its optimal temperature and flow rate, extending the life of your legacy assets while fully enabling your high-performance gear.
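The design point about keeping the two zones’ controls independent can be shown with a toy sketch: each zone owns its own setpoint, gain, and actuator, and nothing is shared between them, so a disturbance in the liquid pod never moves the air hall’s equipment. The setpoints and gains below are illustrative assumptions, not recommended operating values.

```python
# Toy sketch of independent zone control: each zone has its own setpoint and
# actuator, with no shared state. Values are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class Zone:
    name: str
    setpoint_c: float    # target supply temperature for this zone
    actuator_pct: float  # pump speed (liquid pod) or fan speed (air hall)
    gain: float          # proportional gain, tuned per zone

    def step(self, measured_c: float) -> float:
        """One proportional control step; returns the new actuator command."""
        error = measured_c - self.setpoint_c
        self.actuator_pct = min(100.0, max(20.0, self.actuator_pct + self.gain * error))
        return self.actuator_pct

# The liquid pod runs warmer supply water than the legacy air hall's chilled loop.
liquid_pod = Zone("DTC pod", setpoint_c=32.0, actuator_pct=60.0, gain=4.0)
air_hall = Zone("air hall", setpoint_c=18.0, actuator_pct=50.0, gain=2.0)

# A temperature excursion in the pod only moves the pod's pumps.
print(liquid_pod.name, liquid_pod.step(measured_c=34.5))  # pod pumps speed up
print(air_hall.name, air_hall.step(measured_c=18.0))      # air side unchanged
```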
What is your forecast for the data center industry? Will the urgent demand for AI capacity lead to a surge in fast brownfield retrofits, or will the long-term efficiency gains of custom-built greenfield sites ultimately dominate the market?
In the short term—the next two to three years—we are going to see a massive surge in brownfield retrofits. The demand for AI capacity is so immediate and intense that operators simply cannot wait for multi-year greenfield projects to come online. Speed to market will be the primary driver, and leveraging existing facilities is the most direct path. However, looking further out, I believe the market will be dominated by greenfield builds. As the AI hardware lifecycle matures and organizations refine their long-term strategies, the operational inefficiencies and physical ceilings of retrofitted sites will become too costly to ignore. The hyperscalers are already leading this charge. The performance, sustainability, and scalability advantages of a purpose-built, liquid-first greenfield facility are simply too great. The most perceptive players will use brownfields as a tactical bridge to get them into the game now, while simultaneously planning their next-generation greenfield sites as the ultimate, long-term solution. The future is an adaptable one, and the best strategy is evolutionary, not revolutionary.
