Will AI Regulation Face Turmoil with Repeal of Biden’s Executive Order?

With the transition of power to the Trump administration, the anticipated revocation of President Joe Biden’s executive order (EO) on artificial intelligence (AI) could lead to significant upheaval in AI regulation. The order created governmental oversight functions and promoted the adoption of safety standards among AI model developers. Experts warn that revoking it could leave the AI industry with a far more chaotic regulatory environment.

Biden’s EO focused on safety and accountability in AI. It set up oversight offices that encouraged AI model developers to adhere to standards for the responsible and safe deployment of their technologies, fostered greater data sharing among developers, and increased government investment in AI research. The directive aimed to balance innovation with caution, ensuring AI’s benefits were harnessed responsibly.

With the Trump administration signaling its intent to repeal the order, however, enterprises invested in AI face significant challenges: the absence of federal oversight, the emergence of inconsistent state-level regulations, increased pressure on private corporations to self-regulate, and reduced government investment in AI innovation. Together, these factors leave the future of AI regulation in the United States deeply uncertain.

Patchwork of Local Rules

Before Biden’s EO took effect, listening tours and industry consultations explored the most appropriate means of regulating AI, and at the time there was optimism that federal AI regulation could advance under a Democratic-controlled Senate. Insiders now largely believe, however, that the federal appetite for comprehensive AI regulation has significantly diminished.

Gaurab Bansal, executive director of Responsible Innovation Labs, emphasized during the ScaleUp: AI conference that in the absence of federal oversight, individual states might develop their own AI regulations akin to California’s SB 1047. That bill, which included stringent controls such as a "kill switch" for models, was ultimately vetoed by Governor Gavin Newsom. Despite the veto, industry leaders fear that other states could pass similar legislation, creating a fragmented patchwork of state-level rules.

Dean Ball, a research fellow at George Mason University’s Mercatus Center, shared similar concerns. A state-by-state approach, he noted, would force AI developers and the companies that use their models to navigate a complex compliance landscape, with disjointed and sometimes contradictory regimes making it increasingly difficult to maintain consistent operations across state jurisdictions.

Voluntary Responsible AI

Industry-led efforts to promote responsible AI have always existed, but a repeal of the Biden EO would place an even greater burden on companies to proactively ensure accountability and fairness, particularly as customer demand for safety and ethical standards continues to rise. Sarah Bird, Microsoft’s Chief Product Officer for Responsible AI, highlighted the importance of preparing for incoming legislation such as the European Union’s AI Act. Even in the absence of stringent laws, Bird argued, integrating responsible AI practices from the outset is a prudent and necessary approach.

Jason Corso, a professor of robotics at the University of Michigan, raised a related concern: revoking Biden’s EO could reduce data transparency. The order emphasized openness about the data used to train AI models, a critical factor in identifying and mitigating bias. Without that emphasis, it becomes harder to understand and govern the data behind AI models, heightening the risk of biased outcomes and exposing enterprises to significant risks around data integrity and ethical AI deployment.

Fewer Research Dollars

Government funding has historically supported the early-stage, high-risk AI research that private investors tend to avoid. The anticipated policy shift under the Trump administration could sharply reduce government contributions to AI research, and Corso voiced concern that the loss of such support could stall essential, innovative projects.

Despite these uncertainties, the Biden administration secured funding for AI oversight, including the AI Safety Institute, until 2025. According to Matt Mittelsteadt of the Mercatus Center, this guaranteed funding suggests that many activities will continue, albeit perhaps in different forms, depending on how the next administration reorganizes AI policy.

Overarching Trends and Consensus Viewpoints

The overarching trend highlighted in these discussions is a likely pivot from federal to state-level AI regulation, a shift that introduces potential inconsistencies and complexities in applying AI standards uniformly across the United States. Industry experts collectively stress the importance of companies proactively adopting responsible AI practices and preparing for a range of regulatory frameworks, including international standards like the EU’s AI Act, as benchmarks for developing best practices.

The consensus among industry insiders is that while repealing Biden’s EO may create immediate challenges, it also underscores the necessity of industry self-regulation. Companies must develop and adhere to internal standards for AI deployment to mitigate the associated risks. Responsible AI and transparency about the data used for model training are viewed as essential practices to uphold independently of shifting governmental regulations.

Responsible AI practices not only address regulatory compliance but also foster trust and reliability in AI applications. Companies that integrate them can better navigate an unpredictable regulatory landscape and maintain a competitive edge, and by holding themselves to high standards for safety, fairness, and transparency, enterprises can position themselves as leaders in ethical AI deployment.

Conclusion

In short, the Trump administration’s expected repeal of Biden’s AI executive order would dismantle the federal oversight offices, safety standards, data-sharing provisions, and research funding the order put in place. The likely result is a patchwork of state regulations, greater reliance on corporate self-regulation, and lasting uncertainty over the future of AI regulation in the U.S. Enterprises that invest now in responsible AI, with high standards for safety, fairness, and data transparency, will be best positioned to navigate whatever regulatory landscape emerges.
