AI Governance: Shifting from Explainability to Outcome-Based Regulations

In a world driven by technological advancement, the rise of artificial intelligence (AI) has brought numerous benefits and possibilities. However, AI also presents unique challenges, particularly around explainability. The question arises: should we deprive the world of partially explainable technologies when we have the opportunity to ensure they deliver benefit while minimizing potential harm? This article explores the need for a different approach to AI governance, one focused on rigorous measurement of outcomes as the basis for assessing AI safety.

The Challenges of Regulating AI

As AI becomes more prevalent, the need for regulation has become increasingly apparent. US lawmakers who initially sought to regulate AI quickly ran into the complexities of explainability: it is difficult to understand and define how AI systems make decisions, let alone the risks those decisions pose. There is therefore growing recognition that traditional regulatory approaches may not be sufficient for the unique nature of AI, and that a different approach to governance is needed to manage this complex technology effectively.

The Role of Randomized Controlled Trials in Assessing Risk

To assess the risk of harm and reduce uncertainty, randomized controlled trials (RCTs) have long been used across many fields. RCTs provide a framework for evaluating the effectiveness and safety of medical treatments, interventions, and policies. The classical RCT, however, may not be fit for purpose in assessing the specific risks AI systems pose. Still, its underlying principle of rigorous measurement can be carried over into a similar framework, such as A/B testing, that continuously measures the outcomes of an AI system.
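
To make the idea concrete, here is a minimal sketch of the kind of measurement such a framework implies: users are split into a control arm and a treatment arm, and the harm rates of the two arms are compared with a standard two-proportion z-test. The counts, arm sizes, and the notion of a logged "harmful output" are illustrative assumptions, not anything specified in this article.

```python
# Minimal sketch: comparing harm rates between a control arm and a
# treatment arm of an AI rollout, in the spirit of an RCT / A/B test.
import math

def two_proportion_z_test(harms_a: int, n_a: int, harms_b: int, n_b: int) -> float:
    """Return the two-sided p-value for H0: both arms share one harm rate."""
    p_a, p_b = harms_a / n_a, harms_b / n_b
    pooled = (harms_a + harms_b) / (n_a + n_b)  # pooled harm rate under H0
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF, computed via erf.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Hypothetical counts: 40 harmful outputs in 10,000 control interactions
# vs. 75 harmful outputs in 10,000 interactions with the new AI system.
p_value = two_proportion_z_test(harms_a=40, n_a=10_000, harms_b=75, n_b=10_000)
print(f"p-value: {p_value:.4f}")  # a small value suggests the arms differ
```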

Limitations of Randomized Controlled Trials for AI Risks

While RCTs have proven valuable in their original context, they are not an ideal fit for assessing AI risks. The fundamental mismatch is that AI systems evolve, learn, and adapt over time, so a one-off controlled experiment cannot capture their potential risks. A related framework like A/B testing, however, shows real promise. A/B testing is used extensively in product development, where different user groups are exposed to different treatments to measure the impact of specific features. That approach can be adapted to assess the outcomes of AI systems continuously.

A/B Testing in Product Development

A/B testing has become a cornerstone technique in product development, enabling companies to evaluate how changes and features affect user experience and behavior. By dividing users into groups and exposing each group to a different variant, A/B testing yields a quantitative measure of a feature's effect. The same methodology can be adapted to assess the outcomes and potential harm of AI systems: by comparing the outputs of AI algorithms across different populations, a quantitative, tested framework for determining their harmfulness and safety can be established.
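
A common building block of this methodology is deterministic bucketing: hashing a user identifier so that each user lands in a stable, comparable group. The sketch below assumes SHA-256 hashing of an experiment/user pair; the experiment name, user IDs, and 50/50 split are hypothetical, not drawn from this article.

```python
# Minimal sketch of deterministic A/B bucketing, the assignment step that
# underpins product-style A/B testing.
import hashlib

def assign_variant(user_id: str, experiment: str, treatment_share: float = 0.5) -> str:
    """Deterministically assign a user to 'control' or 'treatment'.

    Hashing the (experiment, user) pair gives every user a stable bucket,
    so the same user always sees the same variant and groups stay comparable.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform in [0, 1]
    return "treatment" if bucket < treatment_share else "control"

# Example: route a few hypothetical users for an AI-output experiment.
for uid in ["user-101", "user-102", "user-103"]:
    print(uid, "->", assign_variant(uid, experiment="ai-model-v2"))
```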

Effective Measurement of AI Safety

In the context of AI, measuring safety is crucial to ensuring accountability. Explanations of model behavior are often subjective and poorly understood, whereas evaluating an AI system by its outputs across various populations offers a quantitative, tested way to determine whether the algorithm is genuinely harmful. This shifts the focus from subjective explanations to objective measurements. Effective measurement establishes accountability: the AI provider can take responsibility for the system's proper functioning and its alignment with ethical principles.
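
As a rough illustration of evaluating a system by its outputs on different populations, the sketch below aggregates hypothetical logged outcomes per population and flags a disparity. The populations, records, and the "more than double" rule are all illustrative assumptions, not a standard from the article.

```python
# Minimal sketch: scoring an AI system by its measured outputs on
# different populations rather than by explanations of its internals.
from collections import defaultdict

# Hypothetical logged outcomes: (population, was_the_output_harmful).
records = [
    ("group_a", False), ("group_a", False), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", True), ("group_b", True), ("group_b", False),
]

totals = defaultdict(int)
harms = defaultdict(int)
for population, harmful in records:
    totals[population] += 1
    harms[population] += int(harmful)

rates = {pop: harms[pop] / totals[pop] for pop in totals}
print("harm rate per population:", rates)

# Flag the system if one population's harm rate is more than double
# another's, an assumed disparity rule a provider or regulator might adopt.
if max(rates.values()) > 2 * min(rates.values()):
    print("disparity flag: outcomes differ materially across populations")
```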

Establishing Accountability in AI Systems

Accountability is a crucial aspect of AI systems. Being able to attribute responsibility for the proper functioning and ethical alignment of AI algorithms is essential to preventing harm and building trust. By adopting a measurement-based approach, AI providers can demonstrate their commitment to safety and ethics: A/B testing, or a similar framework, provides ongoing measurements of system outcomes, enabling timely adjustments and corrective actions. Accountability established this way fosters transparency, responsibility, and adherence to ethical guidelines.
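
One way those "ongoing measurements" and "timely adjustments" could be wired together is a rolling window over recent outcomes with a threshold that triggers a corrective action. The window size, threshold, and rollback() hook below are illustrative assumptions, not a prescribed mechanism.

```python
# Minimal sketch of continuous outcome monitoring with an intervention hook.
from collections import deque

WINDOW = 1_000         # recent interactions to watch (assumed size)
HARM_THRESHOLD = 0.01  # tolerated harm rate before intervening (assumed)

recent: deque = deque(maxlen=WINDOW)

def rollback() -> None:
    # Placeholder for the provider's corrective action: disable the model,
    # revert to the previous version, or page an operator.
    print("harm threshold exceeded: rolling back and alerting")

def record_outcome(harmful: bool) -> bool:
    """Record one outcome; return True (and intervene) if the window runs hot."""
    recent.append(harmful)
    if len(recent) == WINDOW and sum(recent) / WINDOW > HARM_THRESHOLD:
        rollback()
        return True
    return False

# Example: a stream of outcomes that is mostly safe, then quietly degrades.
for i in range(2_000):
    if record_outcome(harmful=(i > 1_500 and i % 20 == 0)):
        break  # stop routing traffic to the degraded system
```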

The Value of Measurement Over Subjective Explainability

While explainability remains a point of heightened focus for AI providers and regulators across industries, the measurement techniques first used in healthcare and later adopted by the tech industry to address uncertainty can contribute significantly to the shared goal of safe, intended AI usage. By prioritizing objective measurement, AI systems can be evaluated on their actual outputs and impacts rather than on subjective explanations alone, allowing a more comprehensive and quantitative assessment of their safety and ethical alignment.

Ensuring that AI is Working as Intended and is Safe

The ultimate goal of AI governance is to ensure that AI systems operate as intended and are safe for all stakeholders. By continuously measuring and assessing the outcomes of AI algorithms through techniques like A/B testing, the risks these systems pose can be identified and mitigated more effectively. Ongoing measurement also enables early detection of potential harm, so providers can act promptly to safeguard against unintended consequences. Measurement is thus a vital tool for guaranteeing the functionality and safety of AI systems in a rapidly evolving technological landscape.

As AI technology advances, the regulation and governance of AI systems become increasingly critical. Balancing the benefits and risks of partially explainable technologies is a complex challenge, but a measurement-based approach offers a practical and effective way forward. By leveraging techniques like A/B testing, AI providers and regulators can continuously measure and assess the safety and ethical alignment of AI systems. Ultimately, the goal is to ensure that AI works as intended and, most importantly, is safe for everyone involved.
