AI Governance: Shifting from Explainability to Outcome-Based Regulations

In a world driven by technological advancements, the rise of artificial intelligence (AI) has brought about numerous benefits and possibilities. However, the field of AI also presents unique challenges, particularly when it comes to explainability. The question arises: should we deprive the world of partially explainable technologies when we have the opportunity to ensure they bring benefit while minimizing potential harm? This article explores the need for a different approach to AI governance, one focused on measurement as the basis for assessing AI safety.

The Challenges of Regulating AI

As AI becomes more prevalent, the need for regulation has become increasingly apparent. US lawmakers who initially sought to regulate AI quickly ran into the complexities of explainability: it is hard to understand and define how AI systems make decisions, let alone the risks those decisions pose. There is therefore growing recognition that traditional regulatory approaches may not be sufficient for the unique nature of AI, and that a different approach to governance is needed to manage this complex technology effectively.

The Role of Randomized Controlled Trials in Assessing Risk

Randomized controlled trials (RCTs) have been widely used across fields to assess the risk of harm and reduce uncertainty, providing a framework for evaluating the effectiveness and safety of medical treatments, interventions, and policies. When it comes to AI, however, the classical RCT may not be fit for purpose in assessing these systems' specific risks. Still, the underlying principle of rigorous measurement can be carried over into a similar framework, such as A/B testing, that continuously measures the outcomes of an AI system.

Limitations of Randomized Controlled Trials for AI Risks

While RCTs have proven valuable in their original context, they are not an ideal fit for assessing AI risks. The fundamental mismatch is that AI systems evolve, learn, and adapt over time, so a one-off controlled experiment cannot capture their potential risks. A related framework, however, can: A/B testing, used extensively in product development, treats different user groups differently to measure the impact of specific features, and it can be adapted to assess the outcomes of AI systems continuously.

A/B Testing in Product Development

A/B testing has become a cornerstone technique in product development, enabling companies to evaluate how changes and features affect user experience and behavior. By dividing users into groups and exposing each group to a different variation, A/B testing yields a quantitative measure of a feature's effect. The same methodology can be adapted to assess the outcomes and potential harm of AI systems: by comparing the outputs of AI algorithms across different populations, a quantitative, tested framework for judging their safety can be established.
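To make this concrete, here is a minimal sketch in Python of how such a test might be structured. Everything in it is an illustrative assumption rather than a real system: the simulated observe_outcome function stands in for whatever harm signal a provider actually logs, and the harm rates and sample size are invented for the example.

```python
import random
from math import sqrt

random.seed(0)

# Simulated harm signal. The rates below are invented for illustration;
# in a real audit this would be a logged, human- or classifier-judged
# adverse outcome per user interaction.
def observe_outcome(group: str) -> int:
    """Return 1 if a harmful outcome is observed for one user, else 0."""
    harm_rate = {"control": 0.030, "variant": 0.036}[group]
    return 1 if random.random() < harm_rate else 0

def ab_test(n_users: int = 20_000) -> None:
    # Randomly assign each user to the incumbent system (control)
    # or the updated AI system (variant), then count harmful outcomes.
    counts = {"control": 0, "variant": 0}
    harms = {"control": 0, "variant": 0}
    for _ in range(n_users):
        group = random.choice(["control", "variant"])
        counts[group] += 1
        harms[group] += observe_outcome(group)

    p_c = harms["control"] / counts["control"]
    p_v = harms["variant"] / counts["variant"]

    # Two-proportion z-test: does the variant raise the harm rate?
    p_pool = (harms["control"] + harms["variant"]) / n_users
    se = sqrt(p_pool * (1 - p_pool) * (1 / counts["control"] + 1 / counts["variant"]))
    z = (p_v - p_c) / se
    print(f"control: {p_c:.4f}  variant: {p_v:.4f}  z = {z:.2f}")
    # z > 1.96 would indicate a statistically significant difference
    # at roughly the 5% level (two-sided).

ab_test()
```

In a governance setting the same structure applies, except that the outcome being counted is a regulator-relevant harm metric rather than a product conversion, and the test runs continuously rather than once.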

Effective Measurement of AI Safety

In the context of AI, the measurement of safety is crucial to ensure accountability. While explainability may often be subjective and poorly understood, evaluating an AI system based on its outputs on various populations offers a quantitative and tested approach to determine whether the AI algorithm is genuinely harmful. This approach shifts the focus from subjective explanations to objective measurements. Through effective measurement, the accountability of the AI system is established, allowing the AI provider to take responsibility for the system’s proper functioning and alignment with ethical principles.
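As a sketch of what evaluating outputs across populations could mean in practice, the following assumes the provider logs, for each output, a population label and a binary judgment of whether the output was harmful; the sample records and the 2% threshold are hypothetical.

```python
from collections import defaultdict

# Hypothetical logged outputs: (population label, was the output harmful?).
# The records and the 2% threshold are illustrative assumptions.
records = [
    ("population_a", False), ("population_a", True), ("population_a", False),
    ("population_b", False), ("population_b", False), ("population_b", False),
    # ... in practice, thousands of judged outputs from the live system
]

HARM_THRESHOLD = 0.02  # assumed maximum acceptable harm rate

def harm_rates(records):
    """Compute the observed harm rate for each population."""
    totals, harms = defaultdict(int), defaultdict(int)
    for population, harmful in records:
        totals[population] += 1
        harms[population] += int(harmful)
    return {p: harms[p] / totals[p] for p in totals}

for population, rate in harm_rates(records).items():
    status = "REVIEW" if rate > HARM_THRESHOLD else "ok"
    print(f"{population}: harm rate {rate:.3f} [{status}]")
```

The point of breaking the metric out by population is that an algorithm can look safe in aggregate while concentrating its harm on one group; per-population rates surface exactly the disparity an aggregate number hides.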

Establishing Accountability in AI Systems

Accountability requires the ability to attribute responsibility for the proper functioning and ethical alignment of AI algorithms; it is essential for preventing harm and maintaining trust. By adopting a measurement-based approach, AI providers can demonstrate their commitment to safety and ethical principles. A/B testing, or a similar framework, can provide ongoing measurements of AI system outcomes, allowing for timely adjustments and corrective action. Establishing accountability in this way fosters transparency, responsibility, and adherence to ethical guidelines.
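A corrective-action loop of the kind described here might be sketched as follows. The callables fetch_recent_harm_rate, roll_back_model, and notify are hypothetical hooks into a provider's own infrastructure, and the threshold, window, and hourly cadence are assumptions made for the example.

```python
import statistics
import time

ALERT_THRESHOLD = 0.02  # assumed acceptable harm rate
WINDOW = 7              # number of recent measurements to average
CADENCE_SECONDS = 3600  # assumed hourly measurement cycle

def monitor(fetch_recent_harm_rate, roll_back_model, notify):
    """Continuously measure outcomes and trigger corrective action.

    All three arguments are hypothetical hooks into the provider's
    own infrastructure: a metric source, a rollback mechanism, and
    an alerting channel.
    """
    history = []
    while True:
        history.append(fetch_recent_harm_rate())
        recent = history[-WINDOW:]
        if len(recent) == WINDOW and statistics.mean(recent) > ALERT_THRESHOLD:
            notify(f"mean harm rate {statistics.mean(recent):.3f} "
                   f"exceeded {ALERT_THRESHOLD} over the last {WINDOW} checks")
            roll_back_model()  # corrective action: revert to last safe version
            history.clear()    # restart measurement after the rollback
        time.sleep(CADENCE_SECONDS)
```

The design choice worth noting is that accountability is expressed as an automated response tied to a measured outcome, not as a promise about the model's internals.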

The Value of Measurement Over Subjective Explainability

While explainability remains a focus for AI providers and regulators across industries, the measurement techniques pioneered in healthcare and later adopted by the tech industry to address uncertainty can contribute significantly to the shared goal of safe, intended AI usage. By prioritizing objective assessment, AI systems can be evaluated on their actual outputs and impacts rather than on subjective explanations alone, allowing a more comprehensive, quantitative evaluation of their safety and ethical alignment.

Ensuring that AI is Working as Intended and is Safe

The ultimate goal of AI governance is to ensure that AI systems operate as intended and are safe for all stakeholders. By continuously measuring the outcomes of AI algorithms through techniques like A/B testing, the risks associated with these systems can be identified and mitigated more effectively. Ongoing measurement also enables early detection of potential harm, allowing AI providers to act promptly and ship updates that guard against unintended consequences. Measurement is thus a vital tool for guaranteeing the functionality and safety of AI systems in a rapidly evolving technological landscape.

As AI technology continues to advance, the regulation and governance of AI systems become increasingly critical. Balancing the potential benefits and risks of partially explainable technologies is a complex challenge, but a measurement-based approach offers a practical and effective way forward. By leveraging techniques like A/B testing, AI providers and regulators can continuously measure and assess the safety and ethical alignment of AI systems. Ultimately, the universal goal is to ensure that AI is working as intended and, most importantly, is safe for everyone involved.
