How Will AI and ML Surge Reshape Data Center Infrastructures by 2028?

The digital revolution, spearheaded by artificial intelligence (AI) and machine learning (ML), is catalyzing transformative change in how data centers are built and operated. As these technologies demand ever more processing power, data center physical infrastructure (DCPI) must evolve to meet the challenge. By 2028, the DCPI market is expected to surpass $46 billion in value, propelled by a compound annual growth rate of 11%. This surge underscores the industry’s push to expand data centers’ capacity for the intensive computational demands of AI and ML. Such a growth trajectory signals an evolution in both technology and infrastructure, ensuring the next generation of data centers is equipped to support these cutting-edge workloads.

Power and Cooling Innovations

To keep pace with the intense requirements of AI-driven applications, data center infrastructures are undergoing significant enhancements, particularly in power and cooling systems. Rack power densities, which have typically hovered around 15 kW/rack, are projected to climb sharply to 60–120 kW/rack. This upsurge is set to displace traditional air-cooled heat management and usher in the era of liquid cooling. These more efficient and effective cooling strategies, indispensable for managing the heat produced by high-density server racks, are expected to flourish, with revenue forecast to exceed $3 billion by 2028.

Market Dynamics and Regional Growth

Growth in the DCPI market is expected to be driven primarily by cloud and colocation providers, which can leverage their vast scale and operational efficiencies to expand. While the enterprise segment may grow more slowly, substantial progress is predicted in regions such as Asia Pacific (excluding China), North America, and EMEA. In contrast, China and Latin America are on track for more modest gains. Dell’Oro’s Lucas Beran refers to the current phase as a “calm before the storm,” indicating a period of strategic preparation across the industry. As the sector gears up for the incoming wave of AI, companies are bracing for the extensive infrastructure investments required to support the rapid development anticipated in the AI space. This forward-looking stance reflects the industry’s readiness to tackle the imminent challenges and seize the opportunities of the next technological frontier.
