Cerebras Unveils 10MW AI Data Center in Oklahoma City

I’m thrilled to sit down with Dominic Jainy, a seasoned IT professional with deep expertise in artificial intelligence, machine learning, and blockchain. With a passion for exploring how these cutting-edge technologies transform industries, Dominic brings a wealth of insight into the evolving landscape of AI infrastructure. Today, we’re diving into the details of a new 10MW data center in Oklahoma City launched by Cerebras, the wafer-scale chip designer. Our conversation explores the innovative features of this facility, its focus on sustainability, the staggering computing power it offers, and the partnerships that made it possible.

What can you tell us about the new 10MW data center in Oklahoma City and what sets it apart in the realm of AI infrastructure?

This facility is a game-changer in the world of AI computing. It’s a 10MW data center designed to push the boundaries of what’s possible with AI workloads. What really sets it apart is its sheer scale and focus on cutting-edge technology. It’s built to handle some of the largest AI models ever created, with over 44 exaflops of compute power. That kind of capacity isn’t just impressive—it’s a leap forward for the industry, enabling breakthroughs in research and application development.

How did Oklahoma City become the chosen location for such an advanced facility?

Oklahoma City offers a strategic mix of factors that make it ideal for a data center of this magnitude. You’ve got access to robust infrastructure, a growing tech ecosystem, and favorable economic conditions. Additionally, the ability to tap into regional resources and collaborate with local partners played a big role. It’s also about positioning—being centrally located in the U.S. helps with connectivity and latency for various applications.

Can you explain the direct-to-chip liquid cooling system used at this facility in simple terms?

Sure, think of it as a highly efficient way to keep the hardware cool. Instead of blowing cold air over servers like traditional setups, this system pumps a special liquid directly to the chips, absorbing heat right at the source. It’s a closed-loop system, meaning the liquid circulates, cools down, and comes back to do it again. This method is much more targeted and prevents overheating in high-performance environments like this one.
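For a sense of scale, the standard heat-balance relation Q = ṁ · c_p · ΔT links a heat load to the coolant flow needed to carry it away. The sketch below is a rough illustration only: it assumes a water-like coolant and a 10 °C temperature rise across the loop, since the facility's actual coolant chemistry and design flow rates aren't public.

```python
# Rough heat-balance sketch: what coolant flow removes a 10 MW heat load?
# Assumptions (illustrative only): water-like coolant, 10 °C rise across the loop.

heat_load_w = 10e6          # facility-scale heat load in watts (10 MW)
specific_heat = 4186.0      # J/(kg*K) for water
delta_t = 10.0              # assumed coolant temperature rise in kelvin

mass_flow = heat_load_w / (specific_heat * delta_t)   # kg/s, from Q = m_dot * c_p * dT
volume_flow_lps = mass_flow                            # ~1 kg per litre for water

print(f"Required flow: ~{mass_flow:.0f} kg/s (~{volume_flow_lps:.0f} L/s)")
```

Even under these simplified assumptions, the loop has to move on the order of a couple of hundred litres of coolant every second, which is why the plumbing and pumping design matters as much as the chips themselves.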

How does this cooling approach impact energy efficiency or performance compared to older methods?

It’s a massive improvement. Direct-to-chip liquid cooling is far more energy-efficient because it doesn’t waste power on cooling large spaces—just the chips that need it. Performance-wise, it allows the hardware to run at optimal temperatures, reducing the risk of thermal throttling. That means the AI systems can operate at peak capacity for longer, which is critical when you’re dealing with intensive computations.

The facility matches every kilowatt-hour used with renewable energy. Can you walk us through how that’s achieved?

Essentially, for every unit of energy the data center consumes, an equivalent amount is sourced from renewable energy, often through partnerships or credits. This could mean purchasing renewable energy certificates or directly integrating with providers of wind, solar, or other green energy sources. It’s a commitment to offsetting the environmental impact, ensuring that the facility’s massive power needs don’t add to net carbon emissions.
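As a rough illustration of what "matching every kilowatt-hour" implies at this scale, the sketch below assumes the facility draws its full 10 MW around the clock and that one renewable energy certificate covers one megawatt-hour, which is the common convention; actual load factors and procurement details aren't public.

```python
# Rough annual energy-matching estimate for a 10 MW facility.
# Assumes continuous full-load operation and 1 REC = 1 MWh (typical convention).

power_mw = 10
hours_per_year = 8760

annual_mwh = power_mw * hours_per_year   # 87,600 MWh per year
recs_needed = annual_mwh                 # one certificate per MWh consumed

print(f"Annual consumption: ~{annual_mwh:,} MWh (~{annual_mwh / 1000:.1f} GWh)")
print(f"RECs to match it:  ~{recs_needed:,}")
```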

Why is sustainability such a priority for projects like this data center?

Sustainability isn’t just a buzzword—it’s a necessity. Data centers are notorious for their energy consumption, and as AI workloads grow, so does the demand for power. Prioritizing renewable energy helps reduce the carbon footprint and aligns with global efforts to combat climate change. Plus, it’s a signal to stakeholders and the public that the tech industry can innovate responsibly.

With over 44 exaflops of AI compute power, what does that actually mean for someone outside the tech world?

An exaflop represents a quintillion (a billion billion) calculations per second, so 44 exaflops means roughly 44 quintillion calculations every second. For perspective, that’s solving problems or processing data at a speed that would have been unimaginable a decade ago. In practical terms, it means this facility can train massive AI models, analyze enormous datasets, or run complex simulations in a fraction of the time, opening doors to advancements in healthcare, climate modeling, and more.
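To make that concrete, here is a minimal back-of-the-envelope sketch. The 10^24-operation training budget used below is a hypothetical figure chosen purely for illustration, not a number from the interview, and the calculation assumes perfect utilization, which real workloads never achieve.

```python
# Back-of-the-envelope: how long would a hypothetical 1e24-FLOP training run
# take at the quoted 44 exaflops, assuming (unrealistically) 100% utilization?

compute_rate = 44e18        # 44 exaflops = 44 * 10^18 operations per second
training_flops = 1e24       # hypothetical total operations for a large model

seconds = training_flops / compute_rate
print(f"~{seconds:,.0f} seconds, i.e. ~{seconds / 3600:.1f} hours")
# ~22,727 seconds, i.e. ~6.3 hours at full utilization
```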

What kinds of AI projects or models can benefit from this level of computing power?

This kind of power is ideal for training the next generation of large language models, drug discovery simulations, or even autonomous vehicle systems. It can handle AI projects that require processing petabytes of data or iterating through trillions of parameters. Essentially, any application that demands extreme computational resources—think personalized medicine or real-time global forecasting—can thrive here.

The facility houses an astonishing 1,400 trillion transistors and 315 million AI cores. How do those numbers translate to real-world impact?

Those figures are a testament to the raw processing capability packed into this data center. Transistors are the building blocks of chips, and having 1,400 trillion means an unprecedented density of computing power. The 315 million AI cores are specialized units designed for AI tasks, so together, they enable lightning-fast processing of complex algorithms. In the real world, this could mean drastically cutting down the time it takes to develop new AI solutions or analyze massive datasets.
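A quick ratio of the two figures quoted above gives a feel for the density involved. This is simple arithmetic on the numbers in the interview, not a published per-core specification.

```python
# Simple ratio derived from the figures quoted in the interview.
total_transistors = 1_400e12   # 1,400 trillion transistors
total_ai_cores = 315e6         # 315 million AI cores

transistors_per_core = total_transistors / total_ai_cores
print(f"~{transistors_per_core / 1e6:.1f} million transistors per AI core")
# ~4.4 million transistors per AI core
```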

Can you share an example of a specific task that this hardware setup excels at?

Absolutely. Take something like genomic sequencing for personalized medicine. This involves analyzing billions of data points to identify patterns or mutations. With this hardware, what might have taken weeks or months on a traditional system could be done in days or even hours. That speed can directly translate to faster medical breakthroughs or tailored treatments for patients.

What role did partnerships play in bringing this Oklahoma City facility to life?

Partnerships were crucial. Collaborating with a data center provider helped prepare the site and infrastructure, ensuring it met the specific needs of high-performance AI computing. These alliances bring together expertise in real estate, power management, and tech deployment, allowing the project to scale quickly and efficiently. It’s a synergy that combines the best of both worlds—AI innovation and data center operations.

What’s your forecast for the future of AI infrastructure as facilities like this continue to emerge?

I think we’re just scratching the surface. As AI models grow in complexity, the demand for specialized infrastructure will skyrocket. We’ll see more facilities with extreme compute power, advanced cooling, and sustainable energy practices becoming the norm. The focus will likely shift toward edge computing and global distribution of these centers to reduce latency and improve access. It’s an exciting time, and I believe we’re heading toward a world where AI infrastructure is as ubiquitous and critical as the internet itself.
