The European Union has officially published the final text of its AI Act in the Official Journal, starting the clock on a series of legal deadlines. The regulation, which establishes a risk-based framework for AI applications, enters into force on August 1, 2024, with most provisions fully applicable by mid-2026.
Structured Risk-Based Implementation
The AI Act’s rollout will occur in phases, with different deadlines for different obligations. The regulation divides AI applications into risk categories and sets developer obligations according to the risk level of each system. Most low-risk AI applications will remain unregulated.
Conversely, high-risk AI systems, including those used in biometric identification, law enforcement, employment, education, and critical infrastructure, will need to meet stringent data quality and anti-bias standards. A third category imposes lighter transparency requirements on tools like AI chatbots.
Banned AI Uses and Compliance Timelines
Some AI applications deemed to pose an unacceptable risk will be banned outright, with the prohibitions taking effect six months after the law enters into force, in early February 2025. Prohibited uses include social credit scoring systems, untargeted scraping of images to build facial recognition databases, and real-time remote biometric identification by law enforcement in public spaces, except under specific circumstances such as locating missing persons.
By approximately April 2025, codes of practice will apply to providers of general purpose AI models covered by the regulation. These guidelines will be drawn up under the newly established EU AI Office. However, because the EU is enlisting consultancy firms to help draft the codes, there are concerns that AI industry participants could exert undue influence over their content.
General Purpose AI Regulations and Extensions
For general purpose AI models such as OpenAI’s GPT, transparency requirements take effect 12 months after the law enters into force, in August 2025. The most advanced models face an additional obligation: those trained with total compute above 10^25 floating-point operations are presumed to pose systemic risk and must carry out systemic risk assessments. Most high-risk AI systems must comply within 24 months, while some have been granted extended deadlines of up to 36 months, until 2027.
Extensive lobbying efforts by some AI industry sectors and Member States have aimed to soften the regulations on general purpose AI, arguing that stringent rules could hinder Europe’s competitiveness against the US and China.
Generative AI and Copyright Regulations
The AI Act also sets rules for generative AI and manipulated media, requiring that deepfakes and other AI-generated content be clearly labeled. Companies training AI models must comply with EU copyright law, although models developed solely for scientific research are exempt. Rightsholders can reserve their rights to opt out of text and data mining, except where the mining is carried out for scientific research; where rights have been reserved, providers of general-purpose AI models must obtain authorization from rightsholders before mining protected works.
Tech Company Concerns
While many tech companies voiced concerns about the AI Act, they will now need to fall in line with the regulations. Some companies and organizations, including GitHub, Hugging Face, and Creative Commons, wanted more support for open-source development of AI models and sent a paper to EU policymakers with their suggestions.
In June 2023, more than 150 executives from companies such as Renault, Heineken, Airbus, and Siemens signed an open letter urging the EU to rethink its plans for the AI Act, arguing that the law was too strict and disproportionate and could stifle AI innovation and drive AI providers away from the European market.