AMD to Produce AI Chips at TSMC’s Arizona Fab in 2025, Diversifying Supply

In a significant move that could reshape the semiconductor landscape, AMD is reportedly in discussions with TSMC to manufacture high-performance computing (HPC) chips at TSMC’s new Arizona facility starting in 2025. Following Apple’s recent production of 5nm chips at the same site, the initiative marks a notable shift: AMD would become the first major tech company to produce AI chips outside Taiwan. Independent journalist Tim Culpan, who previously broke the news of Apple’s work at TSMC Arizona, reports that AMD plans to tape out a new chip design for the Arizona fab next year, with production slated to begin shortly after. The target is HPC products based on older architectures such as Zen 4, which align with Fab 21’s 5-nanometer process capabilities.

Despite operational delays, TSMC’s Arizona expansion remains central to AMD’s supply chain strategy. The site is set to add two more advanced facilities in the coming years, producing 3nm and 2nm chips and demonstrating TSMC’s commitment to maintaining cutting-edge manufacturing in multiple locations. Industry analysts speculate that AMD might use Fab 21 to manufacture successors to its MI300 accelerator family, leveraging the site’s initial 5nm node, with future upgrades possibly incorporating 4nm processes and HBM3e memory.

A Strategic Shift in Semiconductor Manufacturing

The collaboration between AMD and TSMC in Arizona signifies a strategic effort to diversify advanced chip production away from geopolitical hotspots like Taiwan. For both companies, this diversification is a critical hedge amid rising global tensions, ensuring a more stable and resilient supply chain. By establishing a robust production presence in the United States, AMD and TSMC not only secure their operational capabilities but also address potential disruptions from conflicts or natural disasters affecting Taiwan.

Moreover, the shift is timely given surging demand for high-performance chips driven by artificial intelligence, cloud computing, and other advanced technologies. With AI applications becoming ubiquitous across industries, a secure and diversified supply chain is paramount. Tapping TSMC’s Arizona fab could give AMD a competitive edge over rivals that remain heavily reliant on Asian manufacturing hubs, and it aligns the company with a broader industry push to decentralize semiconductor production, a move that could set a precedent for other tech giants.
