The EU Council has given its final approval to the AI Act, introducing a pioneering regulatory framework for artificial intelligence across Europe. The legislation, described by the European Commission (EC) as taking a “risk-based approach,” imposes stricter requirements on AI products that pose higher risks to society. The EC's stated objective is to ensure that AI technologies are both safe and transparent.
Regulation of High-Risk AI Systems
AI systems classified as high-risk, such as those used in critical sectors and law enforcement, will be subject to rigorous oversight: regular audits, fundamental rights impact assessments, and registration in a central database. Mathieu Michel, Belgian Secretary of State for Digitization, highlighted that the AI Act emphasizes trust, transparency, and accountability while also supporting innovation within Europe.
Prohibited AI Practices
The AI Act explicitly bans several practices deemed to pose unacceptable risk, including cognitive behavioral manipulation, social scoring, and predictive policing based on profiling. Systems that categorize individuals by race, religion, or sexual orientation are likewise prohibited. The Act also covers potential future artificial general intelligence (AGI) but provides exemptions for military, defense, and research applications.
Implementation and Global Compliance
Set to take effect in two years, the Act includes provisions for establishing new administrative offices and expert panels to oversee its implementation. Noncompliance will result in fines calculated as a percentage of global annual turnover or a fixed amount, whichever is higher. Companies outside the EU that use EU customer data in their AI platforms will also need to comply with the new rules. Lawyer Patrick van Eecke noted that other countries may adopt similar frameworks, much as many did after the General Data Protection Regulation (GDPR).
UK's AI Safety Initiatives
In a related development, the UK and South Korea have secured commitments from 16 global AI firms, including Amazon, Google, IBM, Microsoft, and Meta, to develop AI models safely. The companies have agreed to assess the risks of their latest models and to refrain from releasing them if the risk of misuse is too high. The agreement, which builds on the Bletchley Declaration signed last year, aims to ensure transparency and accountability in AI development, according to UK Prime Minister Rishi Sunak. It is not legally binding, however.
The EU's AI Act represents a major step in regulating AI technologies, focusing on safety and transparency while encouraging innovation. The law's implications are expected to extend beyond Europe, potentially influencing AI regulation on a global scale.