Dominic Jainy works at the intersection of high-performance computing and sustainable infrastructure. With a deep background in artificial intelligence and blockchain—technologies notorious for their massive energy appetites—he has pivoted his focus toward the critical challenge of how we power the digital age when the traditional electrical grid simply cannot keep up. His recent work analyzing the PureDC and AVK-SEG microgrid project in Dublin provides a rare look into a future where data centers must function as independent islands of power. This conversation explores the shift from being a passive consumer of grid electricity to becoming a sophisticated, self-sustaining energy producer, navigating the complex web of environmental regulations, technical redundancy, and the looming surge of AI-driven demand.
The following discussion centers on the strategic transition toward self-powered microgrids, the intricate balancing act of multi-fuel energy systems, and the rigorous standards required to achieve 99.999% reliability without a utility safety net. We also delve into the logistical hurdles of sourcing renewable gas certificates, the mechanical evolution toward hydrogen-ready engines, and the role of battery storage in managing the unpredictable ramp rates of modern AI workloads.
In regions where grid connectivity is severely restricted, how do you evaluate the financial risks of building a self-powered microgrid? What specific technical hurdles must be cleared to ensure five-nines availability and 300 seconds or less of annual downtime without a utility fallback?
The financial risk profile changes entirely when you realize that waiting for a grid connection can take eight to ten years in major markets, which is effectively a death sentence for a fast-moving AI project. In Dublin, where the grid was so constrained that connection deposits were actually handed back to developers, the risk of “doing nothing” far outweighed the capital expenditure of building a private energy center. To protect a 14.2-acre site like Orion Business Park, we have to move away from the traditional mindset of standby power and instead build a primary power plant that operates with a “five-nines” mandate. That means achieving less than 300 seconds of downtime per year through an integrated engineering architecture with duplicated power feeds and duplicated control systems. We ensure reliability by deploying multiple gas connection feeds that link back into the UK and across Europe, effectively creating a “gas grid” redundancy that replaces the missing electrical one. It is a grueling seven-year journey from conception to reality, but the result is a 54MW facility that remains operational even when the local utility can offer nothing but a closed door.
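The availability targets quoted here are easy to sanity-check with arithmetic: 99.999% availability over a standard year leaves roughly 315 seconds of permitted downtime, so a sub-300-second budget is actually slightly tighter than the textbook five-nines figure. A minimal calculation:

```python
# Downtime budget implied by an availability target over a non-leap year.
SECONDS_PER_YEAR = 365 * 24 * 3600  # 31,536,000 s

def downtime_budget(availability: float) -> float:
    """Seconds of permitted downtime per year at a given availability."""
    return (1.0 - availability) * SECONDS_PER_YEAR

for label, a in [("three nines", 0.999), ("four nines", 0.9999), ("five nines", 0.99999)]:
    print(f"{label}: {downtime_budget(a):,.1f} s/year")
# five nines works out to about 315.4 s/year, so a 300-second
# commitment is marginally stricter than the standard figure.
```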
Managing a mix of natural gas, hydrotreated vegetable oil, and battery storage requires a complex control architecture. How do you coordinate these systems to handle fluctuating AI processing loads, and what are the practical steps for switching from gas to HVO during a sudden supply failure?
Coordination starts with a “tapestry of technologies” where the control system acts as the central nervous system, balancing 9.8MW Wärtsilä engines against high-voltage battery storage. AI workloads are notoriously spiky, so the battery energy storage system (BESS) is essential for absorbing sudden load changes while keeping the dual-fuel engines running at their most efficient points. If the Gas Networks Ireland (GNI) mains supply were to fail, the system is designed to switch immediately to hydrotreated vegetable oil (HVO) without a drop in service. We have custom-built and welded massive storage tanks on-site that hold 72 hours’ worth of HVO, providing a critical third layer of redundancy. This transition is seamless because the engines are specifically adapted for the data center setting, allowing the facility to toggle between gas and liquid fuel while the batteries maintain the frequency and voltage stability required for sensitive compute hardware.
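The dispatch logic described above can be sketched as a simple control loop: the engines hold an efficient setpoint, the battery absorbs the difference between the spiky load and that setpoint, and a fuel flag flips to HVO when the gas mains fail. This is an illustrative model only; the class, setpoint, and capacity figures are hypothetical stand-ins, not the facility's actual control software:

```python
# Hypothetical sketch of engine/BESS/fuel coordination. All names and
# numbers are illustrative, not the project's real control system.
ENGINE_SETPOINT_MW = 9.8   # one engine held at its efficient operating point
BESS_CAPACITY_MWH = 10.0

class Microgrid:
    def __init__(self) -> None:
        self.soc_mwh = 5.0     # battery state of charge
        self.fuel = "gas"

    def step(self, load_mw: float, gas_available: bool, dt_h: float = 1 / 3600):
        # Fuel fallback: engines toggle to HVO if the mains gas supply fails.
        self.fuel = "gas" if gas_available else "HVO"
        # Positive imbalance: battery discharges to cover a load spike;
        # negative: battery charges on the engines' surplus output.
        imbalance = load_mw - ENGINE_SETPOINT_MW
        self.soc_mwh = min(BESS_CAPACITY_MWH,
                           max(0.0, self.soc_mwh - imbalance * dt_h))
        return self.fuel, self.soc_mwh

grid = Microgrid()
fuel, soc = grid.step(load_mw=12.0, gas_available=False)
print(fuel, soc)  # engines now on HVO; battery slightly discharged
```

The key design point the interview makes is visible here: the engines never chase the spikes themselves; the battery does, which keeps combustion at its efficient point.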
Decarbonizing natural gas consumption through biomethane certificates is a significant commitment. What are the logistical challenges in sourcing certified renewable gas at scale, and how do you ensure these sustainability efforts remain audit-ready under tightening Irish and European environmental regulations?
The primary challenge is ensuring that every megawatt-hour consumed is matched with a verified renewable source, which we achieved through a 100% decarbonization proof of concept in 2025. We source Irish Renewable Gas Guarantees of Origin (RGGOs) and European Biomethane Guarantees of Origin (GOs), which must be retired systematically to match our actual consumption. Logistically, this requires a deep understanding of the chain of custody and the RE100 technical criteria to ensure the gas attributes are recognized under the EU Emissions Trading System. We work in lockstep with the Environmental Protection Agency (EPA) to ensure that our operations stay within very tight environmental constraints, which involves constant measurement, monitoring, and disclosure. It is a rigorous process where we analyze lifetime carbon up front, from the emissions associated with building the site to the zero-waste-to-landfill certification, ensuring that our green claims can withstand the most intense regulatory scrutiny.
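The matching-and-retirement process can be illustrated as a simple covering problem: retire certificates until the consumed megawatt-hours are fully accounted for. The registry format and IDs below are hypothetical; real RGGO and GO retirement happens in national registries with their own schemas:

```python
# Illustrative certificate-to-consumption matching: every MWh of gas burned
# must be covered by a retired guarantee of origin. Certificate structure
# and IDs are hypothetical examples, not real registry records.
def retire_certificates(consumption_mwh: float, certificates: list[dict]) -> list[str]:
    """Retire certificates (earliest expiry first) until consumption is covered."""
    retired, covered = [], 0.0
    for cert in sorted(certificates, key=lambda c: c["expiry"]):
        if covered >= consumption_mwh:
            break
        covered += cert["mwh"]
        retired.append(cert["id"])
    if covered < consumption_mwh:
        raise ValueError("insufficient certificates to cover consumption")
    return retired

certs = [
    {"id": "RGGO-001", "mwh": 500.0, "expiry": "2026-03"},
    {"id": "GO-EU-77", "mwh": 800.0, "expiry": "2026-01"},
]
print(retire_certificates(1000.0, certs))  # ['GO-EU-77', 'RGGO-001']
```

Retiring the earliest-expiring certificates first is one common bookkeeping choice; the audit-readiness the interview describes comes from the hard failure when coverage falls short, rather than letting a shortfall pass silently.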
Transitioning to zero-water consumption requires a total reliance on closed-loop cooling systems. How does this design impact the overall power usage effectiveness of the facility, and what maintenance protocols are necessary to prevent performance degradation over a multi-year operational cycle?
By moving to a closed-loop system, we effectively fill the system once and eliminate the continuous draw on municipal water, which is a major victory for local resource management. While this can theoretically put more pressure on the power usage effectiveness (PUE), because we aren’t using evaporative cooling to shed heat, we offset this through the high efficiency of our on-site energy center. We have also integrated rainwater harvesting readiness, which allows us to collect and treat water on-site for auxiliary engine needs rather than relying on the mains. To prevent performance degradation, maintenance involves strict fluid chemistry management and regular inspections of the heat exchangers to ensure there is no scaling or biological growth that could impede thermal transfer. It is about treating the cooling medium as a permanent asset rather than a disposable commodity, which is essential for the long-term sustainability of a 110MW expansion.
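The PUE trade-off mentioned here follows directly from the metric's standard definition: total facility energy divided by IT equipment energy. The numbers below are purely illustrative, showing how a less efficient (but zero-water) cooling plant shifts the ratio:

```python
# PUE = total facility energy / IT equipment energy (standard definition).
# All megawatt figures below are illustrative, not the Dublin facility's.
def pue(it_mw: float, cooling_mw: float, other_mw: float) -> float:
    return (it_mw + cooling_mw + other_mw) / it_mw

evaporative = pue(it_mw=50.0, cooling_mw=6.0, other_mw=2.0)  # water-consuming
closed_loop = pue(it_mw=50.0, cooling_mw=9.0, other_mw=2.0)  # zero-water
print(f"evaporative PUE = {evaporative:.2f}, closed-loop PUE = {closed_loop:.2f}")
# → evaporative PUE = 1.16, closed-loop PUE = 1.22
```

The point the interview makes is that the cooling penalty shows up in the numerator, so it can be offset elsewhere in facility overhead, for example by a more efficient on-site energy center.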
Integrating carbon capture and hydrogen-ready engines represents the next frontier for on-site power generation. What infrastructure modifications are required to make an existing engine “hydrogen-blend ready,” and how do you navigate the commercialization of captured CO2 for medical or beverage manufacturing sectors?
Making an engine hydrogen-ready involves “minor modifications” to the fuel injection and control systems to handle the different combustion characteristics of a hydrogen blend, a process we have already designed into our current energy center blocks. The real magic happens at the back end with carbon capture technology, which we aim to deploy by the end of 2026 through partnerships with specialists like ASCO. We are looking at a future where the data center doesn’t just produce bits, but also high-purity CO2 for the medical and beverage sectors, which frequently suffer from supply shortages. This involves installing specialized recovery units that scrub the exhaust, liquefy the CO2, and store it for transport. It turns an environmental liability into a commercial asset, helping us reach our net-zero operations goal by 2040 while supporting the local industrial supply chain.
Scaling a data center from an initial 14MW load to over 100MW creates significant “ramp rate” challenges. How do you utilize battery energy storage to keep engines running efficiently during low-load phases, and what are the triggers for deploying additional modular energy blocks?
The ramp rate is one of the most difficult variables to manage because while the end goal might be 110MW, the first six months of a data hall’s life might only see a 1MW draw. We use the BESS to act as a buffer, allowing our 2.5MW and 9.8MW engines to run at optimal capacity by charging the batteries with excess power when the data center load is low. As the compute load increases, the batteries discharge to assist with peaks, preventing the engines from having to “hunt” for the load. We deploy additional modular energy blocks in stages—such as our second capacity block coming online in September—triggered by specific utilization thresholds in the data halls. This modular approach, which will eventually see us relocate 10MW of battery storage and build another 10MW for a 20MW total, allows us to scale the power plant in lockstep with the actual demand of the servers, ensuring we never have “stranded” generation capacity.
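The staged deployment described above amounts to a simple trigger rule: bring the next modular block online when the data hall load crosses a utilization threshold of the installed capacity. The 80% trigger and the block sizes below are hypothetical, chosen only to illustrate the mechanism:

```python
# Hypothetical staging logic for modular energy blocks. The trigger
# threshold and block sizes are illustrative, not the project's values.
BLOCK_SIZES_MW = [14, 20, 20, 28, 28]  # staged capacity blocks
TRIGGER = 0.8                           # add a block above 80% utilization

def blocks_needed(load_mw: float) -> int:
    """Number of blocks to deploy so load stays under the trigger threshold."""
    installed, n = 0.0, 0
    while n < len(BLOCK_SIZES_MW) and (installed == 0 or load_mw > TRIGGER * installed):
        installed += BLOCK_SIZES_MW[n]
        n += 1
    return n

print(blocks_needed(1.0))   # early life: one block covers a 1MW draw
print(blocks_needed(60.0))  # growing compute load triggers more blocks
```

Scaling generation against measured utilization, rather than against the 110MW end-state, is what avoids the "stranded" capacity the interview warns about.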
What is your forecast for the future of microgrid-powered data centers as AI demand doubles by 2030?
We are entering an era where the data center is no longer just a building, but a sophisticated power utility in its own right. With Gartner predicting that global data center power demand will double to nearly 900TWh by 2030, and 40% of AI facilities already facing operational constraints, the traditional model of relying on a centralized grid is crumbling. My forecast is that the “islanded” microgrid will become the gold standard for Tier 1 markets, moving from a temporary “bridging solution” to a permanent, decentralized energy strategy. We will see these facilities not just taking from the grid, but actively participating in grid stability and providing decarbonized heat to local communities through district heating connectors. The successful data center of 2030 will be defined by its ability to generate its own clean power, capture its own carbon, and operate entirely independent of the increasingly fragile and overburdened public electrical infrastructure.
