The tech landscape is shifting beneath our feet as the era of borderless innovation gives way to a fragmented world of sovereign AI. Dominic Jainy, an expert who has spent years navigating the high-stakes intersections of machine learning and blockchain, joins us to discuss why the traditional Silicon Valley playbook—where capital and talent flowed freely to the highest bidder—is being rewritten by national security mandates. We explore the rise of regional power centers in Europe and Asia, the transition of AI from simple software to a strategic asset akin to nuclear technology, and the emerging “origin risk” that now haunts founders even after they relocate their headquarters. Our conversation dives deep into the strategic friction between superpowers and what it means for the next generation of global tech entrepreneurs.
The recent collapse of Meta’s attempt to acquire Manus illustrates a growing trend where governments intervene in the private market to protect domestic intellectual property. How should AI founders manage “origin risk” when their home country views their research as a strategic asset, and what specific steps can teams take to ensure they don’t become trapped by their own success?
Managing origin risk is no longer a peripheral concern for legal teams; it is a fundamental survival skill for the modern founder. In the case of Manus, a $2 billion acquisition was reportedly derailed by the Chinese government despite the company having operations in Singapore, proving that relocation does not automatically place a company beyond its home state's reach. Teams must realize that governments are now focused on who gets to carry tacit knowledge across borders, often treating senior researchers as strategic human capital that cannot be exported. To mitigate this, founders need to map their “national identity” long before an exit: be transparent with investors about where the core IP was developed and which regulators might claim jurisdiction over it. We are seeing founders summoned by regulators or barred from leaving their home countries during reviews, adding a visceral, personal dimension to corporate strategy that did not exist a decade ago.
Silicon Valley has long relied on a self-sustaining flywheel of talent and capital to maintain its dominance, yet we are seeing a massive surge in sovereign AI initiatives across the globe. How does this shift toward regional hubs alter the path for global commercialization, and what should investors prioritize when looking at companies outside the traditional California ecosystem?
The Silicon Valley flywheel is incredibly powerful—U.S. private AI investment reached a staggering $109.1 billion in 2024—but it is no longer the only game in town. While the U.S. produced forty notable AI models last year to China’s fifteen and Europe’s three, the performance gap between the best U.S. and Chinese models has narrowed to a point investors cannot ignore. We are moving away from a hub-and-spoke system where everyone eventually ends up in the Valley, toward separate power systems that connect only where governments allow it. Investors now need to prioritize “regulatory resilience” and local government alignment, recognizing that a startup in the Gulf or Southeast Asia might have access to state-backed compute and capital that compensates for the lack of U.S. venture dollars. The path to commercialization is becoming regionalized; a company’s success may soon depend more on its ability to navigate a specific “government stack” than on its ability to compete in a global, open market.
As AI is increasingly categorized alongside dual-use technologies like nuclear energy, the freedom of movement for capital and talent is being restricted. How can a company practically integrate a “geopolitical strategy” into its product roadmap, and what are the real-world risks of trying to operate across competing national power systems?
Integrating a geopolitical strategy means looking at your product roadmap through the lens of national security rather than just user experience. Founders must ask themselves if their AI agents, which can browse, write code, and handle sensitive workflows, could be perceived as tools of espionage or economic destabilization by a rival power. The practical risks are immense: a company could find its expansion blocked or its infrastructure choices forced by export controls, much like how some firms are being pushed toward domestic chip architectures. You have to anticipate that certain markets may close with no warning, which requires building a modular technical stack that can survive being disconnected from a specific cloud provider or chip supplier. It is a grueling exercise in contingency planning where you must assume that your “data provenance” and “talent origin” will eventually be used as political markers by regulators.
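To make the idea of a modular stack concrete, here is a minimal sketch of one common pattern: a vendor-neutral compute interface with ordered failover, so that losing access to one provider (through export controls or a sudden market exit) means registering a replacement backend rather than rewriting the product. All vendor names and classes below are hypothetical illustrations, not references to any real provider's API.

```python
from abc import ABC, abstractmethod


class ComputeBackend(ABC):
    """Vendor-neutral interface; each geopolitical bloc or region
    gets its own concrete implementation."""

    @abstractmethod
    def run_inference(self, prompt: str) -> str: ...


class VendorABackend(ComputeBackend):
    """Hypothetical stand-in for, e.g., a U.S. cloud provider."""

    def run_inference(self, prompt: str) -> str:
        return f"[vendor-a] {prompt}"


class VendorBBackend(ComputeBackend):
    """Hypothetical stand-in for, e.g., a domestic-chip stack."""

    def run_inference(self, prompt: str) -> str:
        return f"[vendor-b] {prompt}"


class BackendRouter:
    """Ordered failover: if the preferred backend is unavailable,
    fall through to the next registered one."""

    def __init__(self) -> None:
        self._backends: list[ComputeBackend] = []

    def register(self, backend: ComputeBackend) -> None:
        self._backends.append(backend)

    def run(self, prompt: str) -> str:
        errors: list[Exception] = []
        for backend in self._backends:
            try:
                return backend.run_inference(prompt)
            except RuntimeError as exc:  # backend cut off or blocked
                errors.append(exc)
        raise RuntimeError(f"all backends unavailable: {errors}")


router = BackendRouter()
router.register(VendorABackend())
router.register(VendorBBackend())
print(router.run("hello"))  # served by the first available backend
```

The design choice is the point: because product code only ever touches `ComputeBackend`, the political identity of the underlying hardware or cloud becomes a configuration decision rather than an architectural one.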
Singapore has emerged as a popular neutral ground for startups seeking to distance themselves from Sino-U.S. tensions, yet the Manus case suggests that relocation might not be a foolproof shield. What are the primary challenges of moving a company to bypass regulatory friction, and how do you determine if a move to a neutral hub will actually be respected by home governments?
The challenge with moving to a hub like Singapore is that while it offers a business-friendly environment and a neutral regulatory posture, it cannot erase the history of a company’s intellectual property. Governments are now asserting control based on the original location of the research and the nationality of the founders, meaning that moving your headquarters is often seen as a cosmetic change rather than a legal severance. For a move to be respected, it usually requires a total shift in the gravity of the company’s operations, including moving the “brain trust” of researchers and engineers, not just the C-suite. Even then, as we saw with the co-founders of Manus being barred from leaving China, the physical safety and mobility of the team remain a leverage point for the home state. A neutral hub only works if the founder can prove that the strategic value of the technology is no longer tied to its country of origin, which is an increasingly difficult bar to clear.
With the rise of domestic chip architectures like Huawei’s as alternatives to global standards like Nvidia, the hardware layer is becoming a political statement. How will the software layer adapt to this divergence, and what does this mean for the future of global technical collaboration?
The divergence in hardware is forcing the software layer to become more specialized and less portable, which fundamentally threatens the “write once, run anywhere” philosophy of the last twenty years. When a firm like DeepSeek reportedly shifts toward Huawei chips, it isn’t just changing a line in its procurement budget; it is optimizing its entire software stack for a specific, nationalized hardware environment. The long-term consequence is that hardware and data provenance become primary markers of a company’s political identity, making it nearly impossible for a firm to switch stacks without a massive technical and political overhaul. Global technical collaboration is being replaced by “walled garden” innovation, where researchers share breakthroughs only within their own geopolitical bloc. The tacit knowledge required to run these systems becomes a guarded secret, and the dream of a unified, global AI ecosystem begins to look like a relic of the pre-AI era.
What is your forecast for the future of global AI commercialization?
My forecast is that we are entering an era of “Technological Mercantilism,” where AI commercialization will be defined by state-to-state agreements rather than open market competition. We will see the world bifurcate into two or three primary technical stacks—led by the U.S., China, and potentially a sovereign European or Gulf coalition—each with its own chips, data standards, and regulatory gates. Companies will be forced to choose a side early in their lifecycle, as taking capital from one bloc may permanently bar them from exiting or even operating in another. While Silicon Valley will remain a massive engine of innovation with its $100 billion-plus investment capacity, it will no longer be the default destination for every ambitious founder. The most successful AI companies of the next decade will be those that master the “political stack,” navigating the friction of national security reviews as skillfully as they navigate the complexities of neural network architecture.
