Expanding Data Center Capacity to Meet Growing AI Demand

The accelerating adoption of artificial intelligence (AI), particularly since the advent of generative AI (gen AI), has driven an unprecedented surge in demand for data center capacity and a potential shortfall in supply. Meeting that demand requires a clear understanding of how data center requirements are evolving in the AI era: what is driving demand, and what strategies corporations and investors can employ to mitigate the impending capacity shortfall.

Exploding Demand and Supply Constraints

Global data center capacity demand is projected to climb at an exceptional annual rate of 19-22% from 2023 to 2030. That growth could bring total demand to between 171 and 219 gigawatts (GW) by 2030, a striking increase from roughly 60 GW today. This expansion, predominantly fueled by the requirements of AI-ready data centers, poses a considerable risk of an acute supply shortfall. By 2030, data centers equipped to handle sophisticated AI workloads are expected to represent approximately 70% of all data center demand, with generative AI alone accounting for around 40%.
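As a back-of-the-envelope check, the projected range implies demand roughly tripling to quadrupling over seven years. The sketch below uses only the endpoint figures quoted above; the implied average growth rates it prints land slightly below the headline 19-22%, which likely reflects rounding and differing scenario baselines in the underlying projections.

```python
# Back-of-the-envelope check on the projected capacity range.
# Figures from the text: ~60 GW today, 171-219 GW projected by 2030.
BASE_GW = 60
YEARS = 7  # 2023 to 2030

for target_gw in (171, 219):
    multiple = target_gw / BASE_GW
    implied_cagr = multiple ** (1 / YEARS) - 1
    print(f"{target_gw} GW by 2030 -> {multiple:.1f}x today's capacity, "
          f"~{implied_cagr:.0%} implied annual growth")
```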

The primary catalyst behind this surge is the intense compute power and density that AI workloads require. Demand for advanced-AI data centers is expected to grow by an average of 33% annually, signaling a pronounced imbalance between supply and demand. To bridge the looming gap, the data center capacity built since 2000 would need to be essentially duplicated in under a quarter of the time. The industry is therefore under enormous pressure to scale up infrastructure rapidly enough to keep pace with AI's escalating computational demands.

Hyperscalers’ Dominance

Demand for AI-ready data centers is dominated by the leading cloud service providers (CSPs) such as Amazon Web Services, Google Cloud, Microsoft Azure, and Baidu. These hyperscalers require vast capacity to develop and support expansive foundation models such as Google's Gemini and the GPT models behind OpenAI's ChatGPT. While many enterprises continue to rely on pre-configured models available on public cloud platforms, a growing number are expected to develop and train models customized for their proprietary data, potentially propelling demand for private hosting solutions. Even so, projections estimate that by 2030, 60-65% of AI workloads in Europe and the United States will still run on CSP infrastructure.

CSPs already control over half of the world's AI-ready data center capacity, and to keep expanding they are turning to partnerships with colocation providers ("colos"). These collaborations help scale infrastructure in response to the massive and accelerating processing and data-handling needs of advanced AI applications.

Emerging Providers and Market Pressures

In the midst of these acute supply constraints, GPU cloud providers such as CoreWeave have emerged to fulfill the need for AI-ready data center capacity. These providers offer high-performance GPUs as a service to AI model developers, frequently in collaboration with colocation providers to establish and maintain data center facilities.

Despite robust expansion efforts, the supply landscape shows pronounced signs of tightening. Colocation prices in the United States, after decreasing from 2014 to 2020, reversed sharply, rising an average of 35% between 2020 and 2023. Moreover, the new data center capacity projected to come online in the next few years has already been leased, leaving vacancy rates exceptionally low, especially in dense markets such as Northern Virginia. This pre-leasing underscores the urgent need for creative solutions to meet surging demand.

New Location, Design, and Operational Requirements

AI's transformative influence on data centers has necessitated substantial changes in where they are located and how their mechanical and electrical systems are designed and operated. A facility once considered large at 30 megawatts (MW) now seems modest next to the 200-MW facilities being built today. This shift is driven primarily by the extensive energy consumption intrinsic to AI workloads.

AI-ready data centers, characterized by elevated power densities, stand in stark contrast to traditional facilities. Ordinary data centers average around 8 kilowatts (kW) per rack; within two years, AI-ready centers have doubled that to 17 kW per rack, and densities are expected to reach 30 kW by 2027. Training workloads for models like ChatGPT can exceed 80 kW per rack, and cutting-edge hardware such as servers built on Nvidia's latest GB200 chips may require densities of up to 120 kW per rack.
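To make the density figures concrete, here is an illustrative sketch of how rack density constrains the number of racks a facility of a given size can power. The facility sizes and per-rack densities come from the figures above; the power-usage-effectiveness (PUE) value of 1.3, which accounts for cooling and other overhead, is an assumption for illustration, not a figure from this article.

```python
# Illustrative only: how rack density constrains rack count for a
# facility of a given size. PUE (total facility power / IT power)
# of 1.3 is an assumed value, not from the text.
PUE = 1.3

def racks_supported(facility_mw: float, kw_per_rack: float) -> int:
    """Racks a facility can power after cooling/overhead (PUE)."""
    it_load_kw = facility_mw * 1000 / PUE
    return int(it_load_kw / kw_per_rack)

for facility_mw, kw_per_rack, label in [
    (30, 8, "traditional 30 MW facility at 8 kW/rack"),
    (200, 17, "AI-ready 200 MW facility at 17 kW/rack"),
    (200, 120, "same facility at 120 kW/rack (GB200-class)"),
]:
    print(f"{label}: ~{racks_supported(facility_mw, kw_per_rack):,} racks")
```

The same 200-MW facility supports far fewer racks at AI-training densities, which is why capacity is increasingly discussed in megawatts rather than floor space.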

Data Center Location and Power Infrastructure

The escalating energy demands of expanding data centers have made power supply a critical issue, particularly in established hubs like Northern Virginia and Santa Clara. Data centers are expanding faster than utility companies can build the necessary transmission infrastructure. This mismatch has raised concerns over grid stability and prompted countries like Ireland to temporarily halt new grid connections for data centers in certain regions until 2028.

In response to these power constraints, data centers focused on AI model training are increasingly relocating to more remote locations where power is more readily available, including areas such as Indiana, Iowa, and Wyoming. Some operators are taking an innovative approach by building close to power plants and employing "behind the meter" solutions like fuel cells, batteries, or renewable energy sources to generate their own off-grid power. Looking beyond conventional sources, future possibilities include utilizing small modular reactors (SMRs) to meet growing power needs sustainably and efficiently.

Mechanical System Design

The immense power consumption of AI servers requires new approaches to cooling. Traditional air-based cooling becomes inadequate at power densities above roughly 50 kW per rack, driving a shift toward far more effective liquid cooling technologies. Three types of liquid cooling are gaining traction: rear-door heat exchangers (RDHX), direct-to-chip (DTC) cooling, and liquid immersion cooling.

Rear-door heat exchangers are a hybrid solution that combines cold-air induction with liquid-cooled heat exchangers installed at the rear of the rack, making them well suited to densities of 40-60 kW per rack. Direct-to-chip cooling circulates liquid through cold plates in direct contact with high-power electronic components, effectively handling up to 120 kW per rack. Liquid immersion cooling places servers in tanks of dielectric fluid, a viable option for exceptionally high densities exceeding 100 kW per rack.
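The density thresholds above can be summarized as a simple rule-of-thumb selector. The cutoffs below are the approximate figures quoted in this section; real deployments would also weigh cost, facility constraints, and hardware support, so treat this as a sketch rather than a sizing tool.

```python
def cooling_options(kw_per_rack: float) -> list[str]:
    """Rule-of-thumb cooling choices for a given rack density (kW),
    using the approximate thresholds discussed in this section."""
    options = []
    if kw_per_rack <= 50:
        options.append("traditional air cooling")
    if 40 <= kw_per_rack <= 60:
        options.append("rear-door heat exchanger (RDHX)")
    if kw_per_rack <= 120:
        options.append("direct-to-chip (DTC)")
    if kw_per_rack > 100:
        options.append("liquid immersion")
    return options

print(cooling_options(17))   # typical AI-ready rack today
print(cooling_options(55))   # RDHX sweet spot
print(cooling_options(120))  # GB200-class density
```

Note that the ranges overlap: a 55-kW rack could reasonably use RDHX or DTC, and above 100 kW both DTC and immersion remain in play.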

Opportunities for Stakeholders

The rapid expansion of AI-ready data centers opens a myriad of opportunities for stakeholders in this high-growth market. Colocation providers can retrofit existing facilities and build new ones to meet hyperscaler demand; those offering build-to-suit development services tailored to hyperscalers' specific needs can become particularly valuable partners. GPU cloud providers likewise present potentially lucrative investment opportunities.

Construction companies and equipment suppliers also stand to gain from the supply crunch. Growing demand for modularized construction methods is accelerating build-outs and encouraging sustainable practices. According to McKinsey, capital spending on mechanical and electrical systems for data centers is expected to exceed $250 billion by 2030.

Energy and power supply companies have a significant role to play, benefiting from the demand surge by generating and distributing more energy, particularly green energy. On-site sustainable power solutions, such as fuel cells, solar power, and small modular reactors, show great potential alongside efforts to reuse heat generated from data centers.

Alternative Approaches

With the rapid pace of change, companies and investors might need to adapt traditional approaches to stay ahead. Speed is of the essence for data center owners and operators seeking new sites with reliable power, cooling-system manufacturers developing solutions for increasingly high power densities, and equipment providers scaling up production.

Collaboration is equally crucial. Partnerships across the value chain and with other sectors can foster innovation and address capacity constraints. For example, joint efforts between utilities and hyperscalers can effectively coordinate grid investments and capacity expansions. Similarly, partnerships between chip or server manufacturers and cooling solution providers can expedite the development of efficient cooling designs.

Scaling data center infrastructure at this unprecedented pace is capital-intensive, potentially requiring more than a trillion dollars of investment across the ecosystem. While global investment funds are already fueling the sector's growth, further expansion remains both feasible and necessary.

Several companies are actively pursuing these opportunities. For example, Blackstone and Digital Realty entered a $7 billion deal in 2023 to construct new AI-ready data centers in prime locations such as Frankfurt, Paris, and Northern Virginia. Super Micro Computer is investing in new sites across the US and Asia, while HCL Technologies is partnering with Schneider Electric to manage energy consumption in Asia-Pacific data centers.

Conclusion

The rapid expansion of AI, especially generative AI, has sparked an extraordinary surge in demand for data center capacity that available supply may not meet. Developing an in-depth understanding of how data center requirements are evolving in this AI-driven age is therefore crucial.

Several factors contribute to this growing demand. First and foremost is the sheer volume of data that AI systems need to process and store. These systems require exceptional computational power and vast amounts of storage, pushing existing data centers to their limits. Furthermore, the complexity and sophistication of AI algorithms mean that more advanced and capable infrastructure is necessary to support their operations.

Corporations and investors are being compelled to think strategically to address the looming shortfall in data center capacity. One approach is to build new data centers equipped with the latest technology to handle the specific needs of AI workloads. Another is to upgrade existing facilities to improve efficiency and expand capacity. Companies are also exploring innovative cooling solutions, energy-saving technologies, and more sustainable practices to manage the operational costs and environmental impact of larger data centers.

As this trend continues, the role of data centers will only become more critical. Ensuring they can meet the demands of an AI-dominated future is not just a technical challenge but also a strategic imperative for businesses and investors alike.
