AI Governance: Shifting from Explainability to Outcome-Based Regulations

The rise of artificial intelligence (AI) has brought enormous benefits and possibilities, but it also presents a distinctive challenge: many AI systems are only partially explainable. Should we deprive the world of such technologies when we can instead ensure they deliver benefit while minimizing potential harm? This article argues for a different approach to AI governance, one centered on measurement as the basis for assessing AI safety.

The Challenges of Regulating AI

As AI becomes more prevalent, the need for regulation has become increasingly apparent. US lawmakers who set out to regulate AI quickly ran into the complexities of explainability: it is genuinely hard to define, let alone legislate, how AI systems make decisions and what risks those decisions pose. There is therefore growing recognition that traditional regulatory approaches, built around explaining a system’s inner workings, may not be sufficient, and that governing this complex technology demands a different foundation.

The Role of Randomized Controlled Trials in Assessing Risk

To assess the risk of harm and reduce uncertainty, randomized controlled trials (RCTs) have long been used across many fields, providing a rigorous framework for evaluating the effectiveness and safety of medical treatments, interventions, and policies. The classical RCT, however, may not be fit for purpose in assessing the specific risks that AI systems pose. Still, its underlying principle of rigorous measurement can be carried over into a related framework, such as A/B testing, that continuously measures an AI system’s outcomes.

Limitations of Randomized Controlled Trials for AI Risks

While RCTs have proven valuable in their original context, they may not be the ideal approach for assessing AI risks. The fundamental mismatch is that AI systems evolve, learn, and adapt over time, so a one-off controlled experiment struggles to capture risks that emerge after deployment. A related framework fits better: A/B testing, used extensively in product development to expose different user groups to different treatments and measure the impact of specific features, can be adapted to assess the outcomes of AI systems continuously.

A/B Testing in Product Development

A/B testing has become a cornerstone technique in product development, enabling companies to evaluate the impact of changes and features on user experience and behavior. By dividing users into groups and exposing each group to a different variant, A/B testing yields a quantitative measure of a feature’s effectiveness. The same methodology can be adapted to assess the outcomes, and potential harm, of AI systems: comparing the outputs of AI algorithms across different populations establishes a quantitative, tested framework for judging their harmfulness and safety, as the sketch below illustrates.
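
To make this concrete, here is a minimal sketch of how such a comparison might run: users are deterministically randomized into a control arm (the current model) and a treatment arm (a candidate model), one harm-related outcome is logged per user, and a two-proportion z-test checks whether the candidate’s harm rate differs from the baseline. The bucketing scheme, the simulated harm rates, and the single boolean “harm event” are illustrative assumptions, not a prescribed protocol.

```python
import hashlib
import math
import random
from statistics import NormalDist

def assign_group(user_id: str, treatment_share: float = 0.5) -> str:
    """Deterministically randomize a user into 'control' or 'treatment'."""
    # Hash-based bucketing keeps each user in a stable arm across sessions.
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 10_000
    return "treatment" if bucket / 10_000 < treatment_share else "control"

def two_proportion_z_test(harms_a: int, n_a: int, harms_b: int, n_b: int) -> float:
    """Two-sided p-value for H0: both arms have the same harm rate."""
    p_pool = (harms_a + harms_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (harms_b / n_b - harms_a / n_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Illustrative run: one harm/no-harm outcome logged per user in each arm.
random.seed(0)
control = [random.random() < 0.020 for _ in range(50_000)]    # baseline model
treatment = [random.random() < 0.024 for _ in range(50_000)]  # candidate model
p = two_proportion_z_test(sum(control), len(control),
                          sum(treatment), len(treatment))
print(f"control harm rate:   {sum(control) / len(control):.3%}")
print(f"treatment harm rate: {sum(treatment) / len(treatment):.3%}")
print(f"p-value: {p:.2g}")  # small p: the difference is unlikely to be chance
print(assign_group("user-42"))  # stable assignment for an example user
```

In practice the logged outcome would be a domain-specific harm metric, a wrongful denial, a moderation error, an unsafe recommendation, and the comparison would run continuously rather than once.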

Effective Measurement of AI Safety

In the context of AI, measuring safety is crucial to ensuring accountability. Explainability is often subjective and poorly understood, whereas evaluating an AI system by its outputs across various populations offers a quantitative, tested way to determine whether the algorithm is genuinely harmful. This shifts the focus from subjective explanations to objective measurements, and it is the measurement that establishes accountability: the AI provider can be held responsible for the system’s proper functioning and alignment with ethical principles.
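
As a rough illustration of output-based evaluation, the snippet below groups logged decisions by population segment and compares each segment’s adverse-outcome rate with the overall rate, flagging segments that deviate sharply. The record schema, the segment labels, and the flagging rule are assumptions made for this sketch; a real deployment would define harm metrics appropriate to its domain.

```python
from collections import defaultdict

# Each record notes the user's population segment and whether the AI
# system's output led to an adverse outcome for that user.
records = [
    {"segment": "18-25", "adverse": True},
    {"segment": "18-25", "adverse": False},
    {"segment": "26-40", "adverse": False},
    {"segment": "26-40", "adverse": False},
    {"segment": "41+",   "adverse": False},
    # ...in practice, thousands of logged outcomes per segment
]

def adverse_rates_by_segment(records):
    """Compute the adverse-outcome rate for each population segment."""
    totals, adverse = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["segment"]] += 1
        adverse[r["segment"]] += r["adverse"]
    return {seg: adverse[seg] / totals[seg] for seg in totals}

overall = sum(r["adverse"] for r in records) / len(records)
for segment, rate in sorted(adverse_rates_by_segment(records).items()):
    flag = "  <-- review" if rate > 2 * overall else ""
    print(f"{segment}: {rate:.1%} vs overall {overall:.1%}{flag}")
```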

Establishing Accountability in AI Systems

Accountability is essential in AI systems: the ability to attribute responsibility for an algorithm’s proper functioning and ethical alignment is what prevents harm and sustains trust. By adopting a measurement-based approach, AI providers can demonstrate their commitment to safety and ethical principles. A/B testing, or a similar framework, provides ongoing measurements of an AI system’s outcomes, allowing for timely adjustments and corrective action. Accountability established this way fosters transparency, responsibility, and adherence to ethical guidelines.
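
One hypothetical way such ongoing measurement could translate into accountability is a monitoring hook that checks each batch of outcomes against an agreed safety threshold and triggers a corrective action, such as an alert or a rollback to the last known-safe model version, when the threshold is breached. The threshold value and the rollback routine below are invented for illustration.

```python
from typing import Callable, Sequence

HARM_RATE_THRESHOLD = 0.03  # hypothetical contractual or regulatory limit

def monitor_batch(outcomes: Sequence[bool],
                  on_breach: Callable[[float], None]) -> float:
    """Measure one batch of logged outcomes and escalate if the harm
    rate exceeds the agreed threshold."""
    harm_rate = sum(outcomes) / len(outcomes)
    if harm_rate > HARM_RATE_THRESHOLD:
        on_breach(harm_rate)
    return harm_rate

def rollback_and_alert(harm_rate: float) -> None:
    # Hypothetical corrective action: revert to the previous model and
    # record the incident for the provider's audit trail.
    print(f"ALERT: harm rate {harm_rate:.2%} exceeds "
          f"{HARM_RATE_THRESHOLD:.2%}; rolling back model version")

# Example: one day's batch of outcome flags from the live system.
daily_outcomes = [False] * 960 + [True] * 40  # 4% harm rate today
monitor_batch(daily_outcomes, on_breach=rollback_and_alert)
```

Because the breach handler is just a function, the same loop could page an on-call engineer, open an incident ticket, or notify a regulator, whatever the accountability regime requires.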

The Value of Measurement Over Subjective Explainability

While explainability remains an area of heightened focus for AI providers and regulators across industries, the measurement techniques first developed in healthcare and later adopted by the tech industry to address uncertainty can contribute significantly to the shared goal of safe, intended AI use. By prioritizing objective measurement, AI systems can be evaluated on their actual outputs and impacts rather than on subjective explanations alone, allowing a more comprehensive, quantitative assessment of an algorithm’s safety and ethical alignment.

Ensuring that AI is Working as Intended and is Safe

The ultimate goal of AI governance is to ensure that AI systems operate as intended and are safe for all stakeholders involved. Continuously measuring the outcomes of AI algorithms through techniques like A/B testing makes the risks of these systems easier to identify and mitigate. Ongoing measurement also enables early detection of potential harm, so that AI providers can act promptly, correcting or updating a system before unintended consequences spread. Measurement is thus a vital tool for guaranteeing the functionality and safety of AI systems in a rapidly evolving technological landscape.
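
For early detection in particular, one simple option among many is an exponentially weighted moving average (EWMA) of the harm signal, which reacts to a shift in outcomes well before a periodic retrospective review would. The smoothing factor, alert level, and simulated degradation in this sketch are illustrative choices.

```python
import random

def ewma_monitor(outcome_stream, alpha=0.01, alert_level=0.04):
    """Track a smoothed harm rate over a stream of outcomes and report
    the first step at which it crosses the alert level."""
    ewma = 0.0
    for step, harmed in enumerate(outcome_stream):
        ewma = alpha * harmed + (1 - alpha) * ewma  # exponential smoothing
        if ewma > alert_level:
            print(f"step {step}: smoothed harm rate {ewma:.3f} crossed "
                  f"{alert_level}; investigate and update the model")
            return step
    return None

# Illustrative stream: the system is safe for 500 users, then degrades.
random.seed(1)
stream = ([random.random() < 0.005 for _ in range(500)] +
          [random.random() < 0.08 for _ in range(500)])
ewma_monitor(stream)
```

With the simulated degradation above, the smoothed rate climbs past the alert level early in the unsafe stretch of traffic, long before a quarterly audit would notice.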

As AI technology continues to advance, the regulation and governance of AI systems become increasingly critical. Balancing the benefits and risks of partially explainable technologies is a complex challenge, but a measurement-based approach offers a practical and effective way forward. By leveraging techniques like A/B testing, AI providers and regulators can continuously measure and assess the safety and ethical alignment of AI systems. Ultimately, the goal is to ensure that AI works as intended and, above all, is safe for all stakeholders involved.
