EU AI Act: New Regulations Transforming AI Governance and Compliance

A new regulatory era begins next week as the EU AI Act starts to take effect. The Act is a comprehensive set of regulations governing the use of AI technologies within the European Union, with significant implications for businesses worldwide that use AI in their operations within the EU. Implementation is phased: the first prohibitions come into effect on February 2nd, while the full compliance requirements will be in place by mid-2025.

Early Phase Compliance Challenges

Prohibited AI Practices

The initial phase of the EU AI Act focuses on prohibiting the deployment or use of certain AI practices deemed to pose unacceptable risk — a tier distinct from the Act's "high-risk" category, which carries obligations rather than outright bans. The prohibitions cover applications such as social scoring, emotion recognition, and real-time remote biometric identification in public spaces. The Act aims to prevent the use of AI in ways that could harm individuals or society. Companies must be aware of these prohibitions to avoid severe penalties, which can reach up to 7% of their global annual turnover. Understanding the scope of these restrictions is crucial for businesses to maintain ethical standards in their technological applications.
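To make the penalty figure concrete: under the Act, fines for prohibited-practice violations are capped at the higher of EUR 35 million or 7% of worldwide annual turnover. A minimal sketch of that calculation (the function name is illustrative, not from any official tooling):

```python
def max_prohibition_fine(global_annual_turnover_eur: float) -> float:
    """Upper bound on fines for prohibited-practice violations:
    the higher of EUR 35 million or 7% of worldwide annual turnover."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

# A company with EUR 2 billion in turnover faces up to EUR 140 million;
# for smaller firms, the EUR 35 million floor dominates.
print(max_prohibition_fine(2_000_000_000))  # 140000000.0
print(max_prohibition_fine(100_000_000))    # 35000000.0
```

Because of the "whichever is higher" rule, the 7% figure quoted above is the binding cap only for companies whose turnover exceeds EUR 500 million.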

The list of prohibited AI practices represents a significant step in regulating potentially harmful uses of the technology. For example, social scoring systems, which rank individuals based on behavior and characteristics, can lead to unintended discrimination and privacy issues. Similarly, emotion recognition technology raises concerns about the accuracy of its assessments and the potential violations of personal privacy. These initial prohibitions are designed to create a safe and fair environment for the development and use of AI within the EU, ensuring that such technology respects fundamental human rights and maintains a standard of accountability.

Importance of Data Governance

One of the major challenges highlighted in the early phase of compliance is the need for robust data governance. Levent Ergin, the Chief Strategist for Climate, Sustainability, and AI at Informatica, emphasizes that businesses must take this opportunity to strengthen their data quality and governance programs. Accurate, holistic, integrated, up-to-date, and well-governed data is imperative for both adhering to regulations and achieving business outcomes through AI. Companies must invest in data infrastructure that supports compliance and enables the successful deployment of AI initiatives.

Ergin asserts that robust data governance is not only essential for regulatory compliance but also for leveraging AI to drive business value. High-quality data forms the backbone of effective AI systems, as it ensures the reliability and accuracy of AI-driven insights and decisions. With the upcoming regulations, companies need to focus on improving their data management practices, including data collection, storage, processing, and sharing. This also includes establishing clear data governance policies and procedures to ensure data integrity and security, ultimately fostering a trustworthy AI environment that aligns with both regulatory requirements and business objectives.

Preparing for Full Compliance by 2025

Dual Pressure on Businesses

Although full compliance is not required until mid-2025, the early prohibitions set an important precedent for how companies should prepare. Businesses face dual pressures in 2025: demonstrating a clear return on investment (ROI) from AI applications while contending with regulatory and data quality challenges. The situation is exacerbated by conflicting expectations around generative AI initiatives, reported by 89% of large businesses in the EU. These organizations must balance innovation with compliance, necessitating a strategic approach to AI implementation.

AI projects present both opportunities and challenges, requiring businesses to show tangible benefits from their investments while navigating the complexities of the regulatory environment. The expectations from various stakeholders, including customers, regulators, and investors, add layers of complexity to this process. Companies need to establish robust frameworks for measuring the ROI of AI initiatives, which include not only financial metrics but also compliance with ethical standards and regulatory mandates. By doing so, businesses can create value from AI while adhering to the stipulations outlined in the EU AI Act.

Overcoming Technology Limitations

Nearly half of these businesses struggle with technology limitations that hinder moving AI projects into production. Ergin advises that robust data governance is essential for compliance and the realization of AI’s potential. Investing in data quality and governance is no longer optional; it is critical, especially as a significant portion of EU companies plan to increase their investments in generative AI by 2025. Organizations must overcome technical barriers to ensure the seamless integration and deployment of AI systems that align with regulatory guidelines.

The journey to full compliance involves addressing various technological gaps that may impede AI project success. This includes upgrading legacy systems, integrating advanced data management tools, and ensuring that AI solutions are scalable and sustainable. Companies must also focus on building internal capabilities and expertise to manage and operate AI technologies effectively. By addressing these technology limitations, organizations can create a solid foundation for AI innovation, ensuring that their systems meet both business needs and regulatory standards, and are prepared for the full compliance requirements by 2025.

Extraterritorial Impact of the EU AI Act

Global Reach of the Act

The extraterritorial nature of the EU AI Act means that businesses outside the EU are also subject to these regulations. Marcus Evans, a partner at Norton Rose Fulbright, explains that the Act applies globally to organizations either using AI within the EU or providing AI services and products where the output is utilized within the EU. For example, an AI-powered recruitment tool used in the EU by a company based elsewhere would still need to comply with the Act. This global reach underscores the importance of understanding and adhering to the new regulations for any business involved in AI.

Businesses worldwide must recognize the implications of the EU AI Act and take proactive measures to ensure compliance, regardless of their physical location. The extraterritorial application of these regulations necessitates a comprehensive understanding of where and how AI is deployed within their operations. Companies must evaluate the impact of the Act on their existing AI systems and future AI initiatives, considering the legal and ethical standards set by the EU. By doing so, businesses can mitigate legal risks and align their AI practices with the global regulatory landscape, fostering a responsible and compliant AI ecosystem.

Steps for Compliance

Businesses should begin by auditing where AI is currently being used within their operations and identifying any use cases that might trigger prohibitions under the new law. Following this audit, a broader governance process should be established to ensure compliance. Compliance with the EU AI Act also requires addressing other complex legal areas including data protection, intellectual property, and risks of discrimination. Companies must develop comprehensive strategies to navigate these legal challenges and ensure that their AI systems adhere to the EU’s regulatory framework.

Initiating an internal audit of AI applications is a crucial step toward compliance. This process involves identifying all AI use cases, assessing their compatibility with the EU AI Act, and pinpointing potential risks. Once the audit is complete, businesses should implement a governance framework that encompasses data protection, intellectual property rights, and measures to avoid discrimination. Training and awareness programs for employees involved in AI development and deployment are also essential to foster an understanding of the legal and ethical implications of AI. By taking these steps, companies can build a robust compliance strategy that aligns with the EU AI Act and supports ethical AI practices.
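The inventory-and-screening step described above can be sketched in code. The category names and the matching logic here are illustrative assumptions only — a keyword match is no substitute for the legal assessment the Act actually requires — but the structure shows how an audit might flag use cases for escalation:

```python
from dataclasses import dataclass

# Illustrative labels loosely drawn from the Act's prohibited practices;
# real classification requires legal review, not a category lookup.
PROHIBITED_CATEGORIES = {
    "social_scoring",
    "emotion_recognition",
    "realtime_remote_biometric_id",
    "subliminal_manipulation",
    "untargeted_facial_scraping",
}

@dataclass
class AIUseCase:
    name: str
    category: str         # e.g. "recruitment_screening", "social_scoring"
    deployed_in_eu: bool  # outputs used within the EU also count

def flag_for_review(inventory: list[AIUseCase]) -> list[str]:
    """Return names of use cases matching a prohibited category and
    touching the EU, so they can be escalated to legal/compliance."""
    return [
        uc.name
        for uc in inventory
        if uc.deployed_in_eu and uc.category in PROHIBITED_CATEGORIES
    ]

inventory = [
    AIUseCase("CV ranking tool", "recruitment_screening", deployed_in_eu=True),
    AIUseCase("Call-centre mood tracker", "emotion_recognition", deployed_in_eu=True),
    AIUseCase("US-only ad optimiser", "social_scoring", deployed_in_eu=False),
]
print(flag_for_review(inventory))  # ['Call-centre mood tracker']
```

Note the `deployed_in_eu` flag covers the extraterritorial point made earlier: a tool operated outside the EU still falls in scope if its output is used within the EU.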

Raising AI Literacy and Ethical AI Development

Importance of AI Literacy

Evans points out the importance of raising AI literacy within organizations to ensure compliance. It is vital for staff and anyone involved in operating or using AI systems to understand the associated risks and how to manage them. This includes understanding the legal implications and ethical considerations of AI use. Raising AI literacy is a fundamental aspect of the EU AI Act’s compliance strategy, ensuring that all stakeholders are well-informed and capable of making responsible decisions regarding AI technologies.

Improving AI literacy within organizations involves providing training and resources to employees at all levels. This education should cover various aspects of AI, including its benefits, risks, ethical considerations, and the regulatory requirements outlined in the EU AI Act. By fostering a culture of AI awareness and understanding, businesses can ensure that their teams are equipped to handle the complexities of AI systems responsibly and ethically. This approach not only supports compliance but also promotes the development of trustworthy AI solutions that align with the company’s values and regulatory expectations.

Promoting Responsible AI Development

The overarching aim of the EU AI Act is to foster responsible AI development. By banning harmful AI practices and demanding transparency and accountability, the regulation aims to balance the advancement of technology with ethical concerns. Beatriz Sanz Sáiz, the AI Sector Leader at EY Global, describes the legislation as a significant step toward building a responsible and sustainable future for AI. Promoting responsible AI development involves creating frameworks that prioritize transparency, fairness, and accountability in AI systems.

Responsible AI development is essential for maintaining public trust and ensuring that AI technologies contribute positively to society. The EU AI Act’s emphasis on transparency and accountability helps establish clear guidelines for ethical AI practices. Companies must incorporate these principles into their AI development processes, from design to deployment. This includes conducting thorough impact assessments, implementing transparent AI models, and establishing accountable practices for AI governance. By adopting these measures, businesses can contribute to a sustainable future for AI, where innovation and ethical considerations go hand in hand.

Specific Prohibitions and Guidance

Outright Prohibited Activities

The Act specifies certain activities that are outright prohibited, which include harmful subliminal techniques, exploitation of vulnerabilities, unacceptable social scoring, certain crime risk assessments and predictions, untargeted scraping to develop facial recognition databases, emotion recognition in specific contexts, and real-time remote biometric identification in public spaces. These prohibitions are designed to prevent the misuse of AI technologies and protect individuals from potential harm, ensuring that AI systems are used ethically and responsibly.

Understanding and adhering to these prohibitions is essential for businesses to avoid legal repercussions and maintain ethical standards. The prohibited activities outlined in the EU AI Act reflect a commitment to safeguarding individuals’ rights and privacy. Companies must carefully review their AI applications to ensure compliance with these restrictions. This involves conducting regular audits, implementing robust governance frameworks, and staying informed about the latest regulatory updates. By taking these steps, businesses can ensure that their AI systems align with the ethical and legal standards set by the EU AI Act.


This new regulatory approach aims to ensure that AI technologies are developed and used responsibly, prioritizing ethics, transparency, and accountability. It addresses various aspects of AI, including data privacy, algorithmic fairness, and preventing potential biases. Companies operating in the EU will need to adapt to these regulations to continue their use of AI technologies, marking a significant shift in how AI is governed within the EU.
