With the rise of generative AI, the power demands of data centers are exploding, forcing a fundamental reinvention of the electrical infrastructure that underpins our digital world. We’re moving from a paradigm of incremental improvements to one of radical transformation. To shed light on this critical shift, we spoke with Dominic Jainy, an IT professional whose work at the intersection of artificial intelligence and high-performance computing gives him a unique perspective on the immense engineering challenges and opportunities ahead. He discusses the unsustainable nature of legacy power systems, the transformative potential of high-voltage DC architecture, and the collaborative effort required to build the AI factories of the future.
We’re seeing a projected leap from 25 MW data centers to 1 GW AI factories, with racks consuming as much power as 1,000 homes. What are the most pressing operational and architectural challenges this massive scaling creates for facilities built on traditional power delivery systems?
The sheer scale is forcing a reckoning. For years, we made incremental improvements, but that approach has hit a wall. When a single rack is projected to draw one megawatt—the equivalent of a small neighborhood—the traditional low-voltage AC power train simply breaks down. It’s no longer a matter of adding more circuits; the physical limitations and inefficiencies become unsustainable. You’re trying to push an immense amount of power through a system that was never designed for it, leading to colossal energy waste and heat generation. This isn’t just an upgrade; it’s a paradigm shift. We are being forced to completely rethink power distribution from the ground up because the demands of massive GPU deployments have made our old methods obsolete.
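As a quick sanity check on that “small neighborhood” comparison, here is a minimal back-of-the-envelope sketch; the average household draw of roughly 1.2 kW is an assumed illustrative figure, not one cited in the interview.

```python
# Rough scale check on the "rack as a small neighborhood" comparison.
# The ~1.2 kW average continuous household draw is an assumed illustrative figure.
RACK_POWER_W = 1_000_000        # projected draw of a single AI rack
AVG_HOME_DRAW_W = 1_200         # assumed average continuous household load

homes_equivalent = RACK_POWER_W / AVG_HOME_DRAW_W
print(f"A 1 MW rack draws roughly as much as {homes_equivalent:.0f} homes")
# ~830 homes on these assumptions -- the same order as the 1,000-homes figure
```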
Legacy power infrastructure often involves five or more AC/DC conversion steps before electricity reaches the server. Could you walk us through the specific points in this power chain where energy is lost and quantify how these inefficiencies financially impact a gigawatt-scale AI factory?
It’s death by a thousand cuts, electrically speaking. The electricity arrives from the utility as medium-voltage AC, but then the dance begins. It’s converted to DC for the uninterruptible power supply and batteries, then back to AC for facility distribution, and finally converted once more to low-voltage DC inside the rack for the actual servers. With a typical legacy system having five or more of these conversion steps, you lose precious energy at every single stage, dissipated as heat. For every watt you lose, you then have to spend even more energy to cool the system down. At the gigawatt scale, this becomes a devastating financial drain. Eliminating just three of those conversions can boost end-to-end efficiency by 3% to 5%, which might not sound like much, but for a 1 GW facility that translates into tens of millions of dollars in electricity savings annually.
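To make that “tens of millions” figure concrete, here is a rough back-of-the-envelope sketch; the electricity price, the load factor, and the choice to apply the efficiency gain to total facility energy are assumed illustrative values rather than figures from the interview.

```python
# Back-of-the-envelope: what a 3-5% efficiency gain is worth at 1 GW scale.
# The electricity price and average utilization below are assumed values
# for illustration only.
FACILITY_POWER_W = 1e9            # 1 GW AI factory
HOURS_PER_YEAR = 8760
AVG_UTILIZATION = 0.8             # assumed average load factor
PRICE_PER_KWH = 0.07              # assumed industrial rate, $/kWh

annual_kwh = FACILITY_POWER_W / 1000 * HOURS_PER_YEAR * AVG_UTILIZATION

for gain in (0.03, 0.05):
    saved_kwh = annual_kwh * gain
    print(f"{gain:.0%} efficiency gain -> ~${saved_kwh * PRICE_PER_KWH / 1e6:.0f}M per year")
# Roughly $15M-$25M per year with these assumptions -- i.e. tens of millions.
```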
Proposals for an 800 VDC architecture cite major benefits beyond energy efficiency, including a potential savings of half a million tons of copper in a 1 GW facility. Can you explain the physics behind these material savings and provide some metrics on the practical advantages?
It really comes down to the fundamental laws of physics. Power is the product of voltage and current. So, if you dramatically raise the voltage of your primary distribution bus—say, to 800 VDC as Nvidia has proposed—you can deliver the exact same amount of power with a drastically reduced current. This is a game-changer because lower current allows for significantly thinner conductors and busways. The practical effect is staggering. For a single one-megawatt rack, you could be looking at a reduction of about 200 kilograms of copper for the busbars alone. When you scale that up to a one-gigawatt data center, the numbers are almost unbelievable: you could save up to half a million tons of copper. It’s a transformative upgrade that saves money, conserves resources, and simplifies the physical build-out.
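The arithmetic behind those thinner conductors follows directly from power being the product of voltage and current. The rough sketch below works through it for a single 1 MW rack; the 48 V and 400 V comparison points and the fixed busbar current density are assumed illustrative values, not figures from the interview.

```python
# Why higher voltage means less copper: for a fixed power P, current I = P / V,
# and for a fixed allowable current density the conductor cross-section
# (and hence copper mass per metre of run) scales roughly with I.
# The comparison voltages and current density are assumed illustrative values.
RACK_POWER_W = 1_000_000
CURRENT_DENSITY_A_PER_MM2 = 2.0   # assumed conservative busbar current density

for bus_voltage in (48, 400, 800):
    current_a = RACK_POWER_W / bus_voltage
    cross_section_mm2 = current_a / CURRENT_DENSITY_A_PER_MM2
    print(f"{bus_voltage:>4} V bus: {current_a:>8.0f} A, "
          f"~{cross_section_mm2:>8.0f} mm^2 of copper per conductor run")
# A 48 V bus needs ~20,800 A; an 800 V bus needs ~1,250 A for the same power,
# so the conductor cross-section shrinks by roughly a factor of 17.
```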
Transitioning to HVDC requires new components, particularly for converting high-voltage AC at the perimeter and for protecting DC circuits from faults. Which of these presents the bigger engineering hurdle today, and what steps are needed to move from prototypes to commercially viable solutions?
Both are significant hurdles, but they are different in nature. Protecting the DC circuits is, surprisingly, the tougher fundamental challenge. A traditional AC breaker works because the AC waveform naturally crosses zero every few milliseconds, giving it a moment to interrupt the current. In a DC system, the current never crosses zero. Trying to break a fault in a high-voltage DC line is like trying to stop a freight train instantly. The solution requires advanced solid-state circuit breakers built with semiconductor technology that can act almost instantaneously without creating massive thermal losses. Converting the 13.8 kV grid power to 800 VDC at the perimeter is more of a scaling and commercialization challenge. The technology, using power semiconductor devices in advanced transformers, has been feasible for decades but has historically been bulky and expensive. Now, with intense R&D, companies have developed test prototypes, and the path to commercial viability is becoming clearer, but the DC protection piece requires a bit more disruptive innovation.
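To illustrate how little time a DC breaker has to act, here is a rough sketch of fault-current rise on a stiff DC bus; the loop inductance and trip threshold are assumed illustrative values, not figures from the interview.

```python
# Why DC faults demand solid-state breakers: a 50/60 Hz AC waveform crosses
# zero every 8-10 ms, giving a mechanical breaker a natural interruption
# window. On a stiff DC bus a bolted fault just keeps climbing at
# di/dt = V / L. The inductance and trip threshold below are assumed values.
BUS_VOLTAGE_V = 800.0
BUS_INDUCTANCE_H = 10e-6          # assumed ~10 uH of fault-loop inductance
TRIP_CURRENT_A = 10_000.0         # assumed breaker trip threshold

di_dt = BUS_VOLTAGE_V / BUS_INDUCTANCE_H        # fault current rise, A per second
time_to_trip_s = TRIP_CURRENT_A / di_dt

print(f"Fault current rises at {di_dt / 1e6:.0f} A/us")
print(f"Time to reach {TRIP_CURRENT_A:.0f} A: {time_to_trip_s * 1e6:.0f} us")
print(f"AC zero-crossing interval at 60 Hz: {1 / (2 * 60) * 1e3:.1f} ms for comparison")
# With these assumptions the breaker has on the order of 100 microseconds to
# act -- far faster than a mechanical AC breaker waiting for a zero crossing.
```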
For HVDC to become a universal standard, leading tech giants, chip makers, and operators must align on common voltages and safety practices. What does this collaboration process look like in practice, and what are the primary risks if the industry fails to standardize quickly?
This collaboration is happening at what feels like warp speed because the need is so urgent. It involves a torrent of intense discussions among AI chip makers, major tech companies, data center operators, and power system providers to align on everything from common voltage ranges and connector interfaces to universal safety protocols. The risk of failing to standardize is immense. You could end up with a fragmented market where different AI factories are built on incompatible, proprietary systems. This would stifle innovation, drive up costs, and slow down the very AI growth that is fueling this demand. AI has become one of the biggest stories in the energy world because it’s fundamentally an electrical challenge, and solving it requires a unified front. We simply can’t afford to rely on incremental or isolated improvements any longer.
What is your forecast for the widespread adoption of higher-voltage DC power in data centers?
I believe widespread adoption is inevitable and will happen faster than many anticipate. The conversation has shifted from “if” to “when.” We’re no longer in a situation where we can make small, incremental tweaks to existing AC systems. The raw power demands of AI factories are forcing the industry’s hand, making bold changes an absolute necessity for survival and growth. With major players like Nvidia putting a stake in the ground for 800 VDC, the momentum is building rapidly. The next few years will be critical as electrification partners race to develop foundational components such as advanced transformers and solid-state breakers quickly and reliably. This isn’t just a computing challenge; it is fundamentally an electrical one, and the industry is now mobilizing with the deep technical expertise needed to ensure both safety and performance at an unprecedented scale.
