The European Parliament has approved the European AI Act, a piece of legislation aimed at regulating the use of artificial intelligence (AI). The Act bans systems that present an “unacceptable level of risk”, such as predictive policing tools and social scoring systems. It also imposes new restrictions on “high-risk” AI, including systems that could influence voters or harm people's health.
Boundaries for Generative AI
The legislation also sets new boundaries for generative AI, requiring that content created by systems such as OpenAI's ChatGPT, Microsoft's AI-powered Bing chat and search, or Google's AI search and its chatbot Bard be labeled as AI-generated. It also requires that makers of such models publish summaries of the copyrighted data used for training. This could pose a significant challenge for systems that generate humanlike text by scraping material from the internet, often from copyrighted sources.
A Threat to OpenAI
The implications of the legislation are severe enough that OpenAI, the creator of ChatGPT, has indicated it may be forced to withdraw from Europe, depending on the final text of the Act. However, the legislation still faces negotiations with the Council of the European Union, which represents the governments of the EU member states.
Unlike lawmakers in the United States, the European Union has spent years developing its artificial intelligence legislation. The European Commission first released a proposal more than two years ago and has amended it in recent months to address concerns raised by rapid advances in generative AI.
EU AI Act Shows Nuanced Approach to ChatGPT
The AI Act initially classified ChatGPT as a high-risk AI system, subject to strict requirements on data quality, transparency, human oversight, and accountability. However, after months of debate and consultation, the EU has decided to adopt a more nuanced approach to ChatGPT regulation.
ChatGPT will be classified as high-risk only when used for purposes that could cause significant harm to individuals or society, such as political propaganda, hate speech, cyberbullying, or fraud. It will be classified as limited-risk when used for purposes that could cause only minor harm or inconvenience, such as entertainment, education, or research, and as minimal-risk when used for purposes that pose no harm or inconvenience, such as personal use or hobby projects.
The move solidifies Europe's position as the de facto global tech regulator, setting rules that influence tech policymaking around the world. The standards set by the EU will likely trickle down to all consumers, as companies shift their practices internationally to avoid a patchwork of different policies. For instance, Microsoft has said it would “extend the rights that are at the heart of GDPR” to all consumers globally, regardless of whether they reside in Europe.
Meanwhile, efforts are progressing slowly in the United States, where Congress has not passed a federal online privacy bill or other comprehensive legislation regulating social media. The US Congress is still grappling with the risks of AI, following the surging popularity of ChatGPT. Senate Majority Leader Charles E. Schumer (D-N.Y.), who is leading bipartisan efforts to craft an AI framework, said lawmakers are likely still months away from considering any legislative response.
AI Regulation: Recent Developments
In the past few months, there have been significant developments in the field of AI regulation. Here's a chronological summary of the key events:
OpenAI CEO Calls for Urgent AI Regulation (May 17, 2023): Sam Altman, the CEO of OpenAI, testified before a US Senate subcommittee, agreeing with lawmakers on the need to regulate rapidly advancing AI technologies. Altman proposed creating an agency that would issue licenses for the development of large-scale AI models, set safety regulations, and define tests that AI models must pass before public release. The testimony cemented his standing as a leading voice in the AI debate.
G-7 Leaders Initiate ‘Hiroshima Process’ (May 21, 2023): The leaders of the Group of Seven countries, recognizing the rapid advancement of generative artificial intelligence, agreed to establish a governance framework named the ‘Hiroshima Process’. The agreement seeks to ensure that AI development and deployment align with the shared democratic values of the G-7 nations, and it marked a significant step toward coordinated AI regulation worldwide.
Microsoft Publishes Governance Blueprint for Future Development (May 26, 2023): Microsoft released a report, “Governing AI: A Blueprint for the Future”, outlining how it believes artificial intelligence should be governed. The report proposes a five-point blueprint for public governance of AI, including implementing government-led AI safety frameworks, establishing a new federal agency dedicated to AI policy, and promoting responsible AI practices across sectors.
Microsoft President Calls for Generative AI Regulations (May 31, 2023): Microsoft President Brad Smith called for regulations on generative AI, emphasizing the need for a framework that ensures the responsible use of AI technologies. Smith's call added to the growing chorus of voices advocating for AI regulation.
Australia Plans Regulatory Framework for AI (June 5, 2023): The Australian government, led by Industry and Science Minister Ed Husic, initiated a comprehensive review of artificial intelligence in response to global concerns. The eight-week consultation aims to shape a new regulatory framework for AI, focusing on high-risk areas such as facial recognition, and will consider strengthening existing regulations, introducing AI-specific legislation, or both.