
EU Releases First Draft of General-Purpose AI Code of Practice

The EU has introduced a draft Code of Practice for AI, emphasizing data transparency and risk management for compliance with the EU AI Act.


The European Union has taken a major step in regulating artificial intelligence by unveiling the first draft of its General-Purpose AI (GPAI) Code of Practice. The draft, created under the direction of the EU’s AI Office with contributions from independent experts, is designed to guide developers of powerful AI systems, such as OpenAI, Meta, and Google, in meeting the compliance demands of the AI Act, which came into effect in August 2024.

Transparency at the Forefront

Central to the draft Code is the emphasis on transparency. AI model developers are required to document how training data is sourced, detailing the use of web crawlers and their configurations.

This measure responds to ongoing concerns around copyright compliance and data collection methods, an issue that has led to multiple lawsuits involving major tech companies. Web crawlers, essential tools for automated data collection, must now adhere to EU guidelines for transparency and fair use.
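To illustrate the kind of crawler behavior the draft addresses, the sketch below shows a minimal Python crawler that consults a site’s robots.txt before fetching and logs the user agent and URL it used. This is only an illustration of the transparency principle; the draft Code does not prescribe any specific implementation, and the bot name and log format here are hypothetical.

```python
from urllib.robotparser import RobotFileParser
from urllib.request import Request, urlopen

USER_AGENT = "ExampleResearchBot/1.0"  # hypothetical crawler identity

def fetch_if_allowed(url: str, robots_url: str) -> bytes | None:
    """Fetch a page only if the site's robots.txt permits our user agent,
    logging the crawler configuration used (illustrative sketch only)."""
    rp = RobotFileParser()
    rp.set_url(robots_url)
    rp.read()  # download and parse the site's robots.txt

    if not rp.can_fetch(USER_AGENT, url):
        print(f"[crawl-log] SKIPPED {url} (disallowed for {USER_AGENT})")
        return None

    print(f"[crawl-log] FETCHING {url} as {USER_AGENT}")
    req = Request(url, headers={"User-Agent": USER_AGENT})
    with urlopen(req) as resp:
        return resp.read()

if __name__ == "__main__":
    fetch_if_allowed(
        "https://example.com/articles/1",
        "https://example.com/robots.txt",
    )
```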

Recently, ETH Zurich, INSAIT, and LatticeFlow assessed prominent AI systems with their Compl-AI tool, translating the AI Act’s requirements into tangible benchmarks. The evaluation revealed that while most models were reasonably adept at limiting harmful outputs, they consistently fell short on fairness metrics and data privacy, indicating areas for improvement before compliance deadlines.

Systemic Risk and Compliance Requirements

High-capacity AI models that the EU classifies as posing “systemic risk”, notably those trained using more than 10²⁵ FLOPs of compute, face additional expectations. FLOPs (floating-point operations) measure the total computation used to train a model, not operations per second, and serve as a proxy for the scale of AI processing and training. For these models, the draft Code introduces a “Safety and Security Framework” (SSF) that mandates ongoing risk monitoring and incident documentation.
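To make the threshold concrete, a common back-of-the-envelope estimate for training compute is C ≈ 6 × N × D, where N is the model’s parameter count and D the number of training tokens. The sketch below applies that heuristic against the Act’s 10²⁵ FLOPs trigger; both the heuristic and the model figures are illustrative assumptions, not disclosures from any real system.

```python
# Rough training-compute estimate using the common C ≈ 6 * N * D heuristic,
# where N = parameter count and D = training tokens. Model figures below
# are illustrative assumptions, not real disclosures.

SYSTEMIC_RISK_THRESHOLD = 1e25  # total training FLOPs, per the AI Act

def estimated_training_flops(params: float, tokens: float) -> float:
    """Approximate total floating-point operations for one training run."""
    return 6.0 * params * tokens

for name, params, tokens in [
    ("hypothetical-70B", 70e9, 15e12),    # 70B params, 15T tokens
    ("hypothetical-400B", 400e9, 15e12),  # 400B params, 15T tokens
]:
    flops = estimated_training_flops(params, tokens)
    flagged = flops > SYSTEMIC_RISK_THRESHOLD
    print(f"{name}: ~{flops:.2e} FLOPs -> systemic risk: {flagged}")
```

Under these assumptions, the 70B-parameter run lands around 6.3 × 10²⁴ FLOPs, below the threshold, while the 400B-parameter run, at roughly 3.6 × 10²⁵ FLOPs, would fall under the systemic-risk regime.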

The SSF requires developers to notify the European AI Office and relevant national bodies if systemic risks materialize. The draft encourages public feedback to refine what constitutes a “serious incident,” a necessary clarification given the rapid pace of AI advancement.

Stakeholder Engagement and Feedback Process

The EU’s draft is not final; it marks the beginning of an inclusive, iterative process. Nearly 1,000 stakeholders, including industry leaders, civil society organizations, and EU representatives, have until November 28, 2024, to submit feedback via the Futurium platform, an online space for discussing EU policies. These insights will be integrated into future versions of the draft, which is expected to be finalized by May 2025.

October Findings and Compliance Gaps

The earlier benchmarking by LatticeFlow highlighted compliance challenges, especially in fairness and data protection. Models often scored below 50% on fairness assessments, underscoring the need for clear and enforceable guidelines.

The current draft reflects these findings, specifying that developers must prepare risk forecasts to identify when models could acquire potentially hazardous capabilities. The document highlights that systemic risks include potential misuse in surveillance, cybersecurity breaches, and large-scale disinformation campaigns.

Broader Context: Background on the AI Act

The AI Act, which took effect in August 2024 following its publication in the Official Journal in July, categorizes AI applications based on their associated risks. High-risk AI systems face strict requirements, particularly in areas like biometric surveillance and public safety, while lower-risk systems have fewer obligations. For general-purpose AI models, transparency requirements will be mandatory by August 2025, with more complex compliance protocols set to take effect by August 2027.

In response to the Act, over 150 companies, including major European firms like Renault and Airbus, signed an open letter in June 2023 calling for adjustments to avoid stifling innovation. The companies warned that overly strict regulations could undermine the competitiveness of Europe’s tech sector.

Navigating Compliance for General AI

While some companies have expressed concerns, the EU’s structured approach aims to offer a unified regulatory environment across member states. This consistency contrasts with the fragmented AI regulations seen in other regions, such as the United States, where policies can vary by state. The current draft Code also considers the capabilities of smaller developers and open-source projects, proposing scaled compliance measures to avoid disproportionate burdens.

Markus Kasanmascheff
Markus has been covering the tech industry for more than 15 years. He holds a Master's degree in International Economics and is the founder and managing editor of Winbuzzer.com.
