Digital Edge Secures $1.6 Billion to Boost Asian Data Center Expansion

Digital Edge, a prominent data center developer and operator in Asia, has announced a substantial capital influx exceeding $1.6 billion to fuel its ambitious growth plans. This funding package comprises $640 million in equity investment from a mix of existing and new investors, alongside an additional $1 billion in debt financing aimed at supporting several campus expansions. Since its establishment in 2020, Digital Edge has successfully grown its portfolio to encompass 21 data centers with a combined critical IT load surpassing 500 megawatts (MW), and with further construction underway, the company has its sights set on an additional 300 MW for future development.

The strategically positioned data centers span key Asian markets including Japan, Korea, India, Malaysia, Indonesia, and the Philippines, positioning Digital Edge as a crucial player in the region’s data infrastructure landscape. In its latest developments, the company has officially opened its 23MW EDGE2 facility in Jakarta and is on track to launch the first facility within a 300MW campus in Navi Mumbai by the second quarter of 2025. Other upcoming projects include a hyperscale edge facility in downtown Tokyo and Digital Edge’s ninth data center in Japan, set to open their doors soon.

This infusion of capital is set to significantly accelerate Digital Edge’s expansion, enabling the company to meet the rapidly growing demand for cloud and artificial intelligence solutions among its customers in Asia. The announcement reinforces Digital Edge’s strong growth trajectory and underscores its focus on deepening its reach in key regional markets. By bolstering its presence and capabilities, Digital Edge aims to address the increasing need for sophisticated data infrastructure and maintain its competitive edge in the ever-evolving tech landscape.
