AI Governance: Shifting from Explainability to Outcome-Based Regulations

The rise of artificial intelligence (AI) has brought enormous benefits and possibilities, but it also presents unique challenges, particularly around explainability. The question arises: should the world be deprived of partially explainable technologies when there are ways to ensure they deliver benefit while minimizing potential harm? This article argues for a different approach to AI governance, one that centers on measurement as the basis for assessing AI safety.

The Challenges of Regulating AI

As AI becomes more prevalent, the need for regulation has grown increasingly apparent. US lawmakers who initially sought to regulate AI quickly ran into the complexities of explainability: it is difficult to understand, let alone define in statute, how AI systems make decisions and what risks they pose. There is therefore growing recognition that traditional regulatory approaches may not suffice for AI, and that a different approach to governance is needed to manage this complex technology effectively.

The Role of Randomized Controlled Trials in Assessing Risk

To assess the risk of harm and reduce uncertainty, randomized controlled trials (RCTs) have long been used to evaluate the effectiveness and safety of medical treatments, interventions, and policies. The classical RCT, however, may not be fit for purpose in assessing the specific risks of AI systems. Still, its underlying principle of rigorous measurement can be carried over into a related framework, such as A/B testing, that continuously measures an AI system's outcomes.
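
To make this concrete, the following is a minimal sketch, in Python, of what such a continuous-measurement framework could look like, assuming traffic can be split between a control ("A") and a treatment ("B") version of an AI system and that each interaction can be labeled harmful or not. The function names and the 50/50 split are illustrative assumptions, not a prescribed design.

```python
import hashlib
from collections import defaultdict

# Per-arm counters: arm -> {"requests": n, "harmful": n}.
outcomes = defaultdict(lambda: {"requests": 0, "harmful": 0})

def assign_arm(user_id: str) -> str:
    """Deterministically assign a user to the control ("A") or
    treatment ("B") version, so each user always sees the same variant."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

def record_outcome(arm: str, harmful: bool) -> None:
    """Log one observed outcome; these counters are what a provider
    or auditor would inspect continuously."""
    outcomes[arm]["requests"] += 1
    outcomes[arm]["harmful"] += int(harmful)

def harm_rate(arm: str) -> float:
    stats = outcomes[arm]
    return stats["harmful"] / stats["requests"] if stats["requests"] else 0.0
```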

Limitations of Randomized Controlled Trials for AI Risks

While RCTs have proven valuable in their original context, they may not be the ideal approach for assessing AI risks. The fundamental mismatch is that AI systems evolve, learn, and adapt over time, so a one-off controlled experiment cannot capture their changing risk profile. A related framework, however, holds promise: A/B testing, used extensively in product development, treats different user groups differently in order to measure the impact of specific features. This approach could be adapted to assess the outcomes of AI systems continuously.

A/B Testing in Product Development

A/B testing has become a cornerstone technique in product development, enabling companies to evaluate the impact of changes and features on user experience and behavior. By dividing users into groups and exposing each group to a different variation, A/B testing yields a quantitative measure of a feature's effectiveness. The same methodology can be adapted to assess the outcomes, and the potential harm, of AI systems: by comparing the outputs of an AI algorithm across different populations, a quantitative, well-tested framework for determining its harmfulness and safety can be established.
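
As a sketch of the quantitative comparison this implies, a standard two-proportion z-test can ask whether the treatment group's harmful-output rate differs significantly from the control group's. The counts below are hypothetical.

```python
import math

def two_proportion_z(harm_a: int, n_a: int, harm_b: int, n_b: int) -> float:
    """z-statistic for the difference between two harm rates
    (control arm A vs. treatment arm B)."""
    p_a, p_b = harm_a / n_a, harm_b / n_b
    pooled = (harm_a + harm_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical counts: 120 harmful outputs in 100,000 control requests
# versus 175 in 100,000 treatment requests.
z = two_proportion_z(120, 100_000, 175, 100_000)
print(f"z = {z:.2f}")  # |z| > 1.96 indicates significance at p < 0.05
```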

Effective Measurement of AI Safety

In the context of AI, measuring safety is crucial to ensuring accountability. Explanations are often subjective and poorly understood; evaluating an AI system by its outputs across various populations instead offers a quantitative, tested way to determine whether the algorithm is genuinely causing harm. This shifts the focus from subjective explanations to objective measurements. Effective measurement establishes the accountability of the AI system, allowing the provider to take responsibility for its proper functioning and alignment with ethical principles.
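
One way to make "outputs across various populations" concrete is to break a single safety metric down by subgroup rather than averaging across everyone. The population labels and log records in this sketch are hypothetical.

```python
from collections import defaultdict

# Hypothetical audit log of (population, output_was_harmful) pairs.
audit_log = [
    ("group_1", False), ("group_1", True), ("group_2", False),
    ("group_2", False), ("group_3", True), ("group_3", False),
]

def harm_rate_by_population(log):
    """Aggregate the harmful-output rate separately per population,
    so disparate impact is visible rather than averaged away."""
    counts = defaultdict(lambda: [0, 0])  # population -> [harmful, total]
    for population, harmful in log:
        counts[population][0] += int(harmful)
        counts[population][1] += 1
    return {pop: harmful / total for pop, (harmful, total) in counts.items()}

print(harm_rate_by_population(audit_log))
# e.g. {'group_1': 0.5, 'group_2': 0.0, 'group_3': 0.5}
```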

Establishing Accountability in AI Systems

Accountability, the ability to attribute responsibility for the proper functioning and ethical alignment of AI algorithms, is essential to preventing harm and sustaining trust. By adopting a measurement-based approach, AI providers can demonstrate their commitment to safety and ethical principles. A/B testing, or a similar framework, provides ongoing measurements of AI system outcomes, enabling timely adjustments and corrective actions. Establishing accountability in AI systems fosters transparency, responsibility, and adherence to ethical guidelines.
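
As a sketch of what timely corrective action could look like in code, the hypothetical guardrail below rolls traffic back to the control model and alerts the accountable owner when the measured harm rate breaches a preset ceiling. The threshold value and callback names are assumptions, not standards.

```python
HARM_RATE_CEILING = 0.002  # hypothetical agreed ceiling: 0.2%

def check_guardrail(harmful: int, total: int, rollback, alert) -> bool:
    """If the treatment arm's measured harm rate exceeds the agreed
    ceiling, revert traffic and notify the accountable owner.
    `rollback` and `alert` are caller-supplied callbacks."""
    if total == 0:
        return False
    rate = harmful / total
    if rate > HARM_RATE_CEILING:
        rollback()  # route all traffic back to the control model
        alert(f"harm rate {rate:.4f} exceeded ceiling {HARM_RATE_CEILING}")
        return True
    return False

# Usage with trivial stand-in callbacks:
check_guardrail(250, 100_000,
                rollback=lambda: print("rolled back to control model"),
                alert=print)
```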

The Value of Measurement Over Subjective Explainability

Explainability remains an area of heightened focus for AI providers and regulators across industries. Yet the measurement techniques first used in healthcare, and later adopted by the tech industry, to tame uncertainty can contribute just as much to the universal goal of safe and intended AI usage. By prioritizing measurement and objective assessment, AI systems can be evaluated on their actual outputs and impacts rather than on subjective explanations alone, allowing a more comprehensive and quantitative evaluation of their safety and ethical alignment.

Ensuring that AI is Working as Intended and is Safe

The ultimate goal of AI governance is to ensure that AI systems operate as intended and are safe for all stakeholders. By continuously measuring the outcomes of AI algorithms through techniques like A/B testing, the risks these systems pose can be identified and mitigated more effectively. Ongoing measurement also enables early detection of potential harm, letting AI providers act promptly and ship corrective updates before unintended consequences spread. Measurement is thus a vital tool for guaranteeing the functionality and safety of AI systems in a rapidly evolving technological landscape.
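
For early detection, one simple pattern is to track the harm rate over a rolling window of recent outcomes, so a sudden regression is caught quickly instead of being diluted by a long history of safe behavior. The window size and threshold in this sketch are illustrative.

```python
from collections import deque

class RollingHarmMonitor:
    """Track the harm rate over the most recent `window` outcomes."""

    def __init__(self, window: int = 1_000, threshold: float = 0.002):
        self.recent = deque(maxlen=window)   # drops oldest automatically
        self.threshold = threshold

    def observe(self, harmful: bool) -> bool:
        """Record one outcome; return True if the windowed harm rate
        now exceeds the threshold and intervention is warranted."""
        self.recent.append(int(harmful))
        return sum(self.recent) / len(self.recent) > self.threshold

monitor = RollingHarmMonitor(window=500, threshold=0.01)
for harmful in [False] * 490 + [True] * 10:  # simulated outcome stream
    if monitor.observe(harmful):
        print("early-warning: harm rate regression detected")
        break
```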

As AI technology continues to advance, the regulation and governance of AI systems become increasingly critical. Balancing the benefits and risks of partially explainable technologies is a complex challenge, but a measurement-based approach offers a practical and effective way forward. By leveraging techniques like A/B testing, AI providers and regulators can continuously measure and assess the safety and ethical alignment of AI systems. Ultimately, the goal is to ensure that AI is working as intended and, most importantly, is safe for everyone involved.
