US Objects to EU’s Draft AI Code Weeks Before Finalization, Pressures Europe to Abandon AI Rulebook

The Trump administration has formally objected to the EU's draft AI Code of Practice, arguing it's overly burdensome and escalates tech regulation tensions.

The Trump administration has formally objected to the European Union’s developing guidelines for artificial intelligence, applying direct pressure weeks before the rules are anticipated to be finalized and adding fuel to ongoing transatlantic tech policy disputes.

According to individuals familiar with the communications, the US Mission to the EU transmitted a letter to the European Commission and various member state governments outlining objections to the current AI Code of Practice, arguing it imposes excessive burdens, as reported Friday by Bloomberg. Commission spokesman Thomas Regnier confirmed the letter’s receipt.

At issue is the voluntary EU General-Purpose AI (GPAI) Code of Practice, a document intended to offer practical guidance for companies developing advanced AI systems on how to comply with the EU’s comprehensive, mandatory AI Act. Washington contends the draft code exceeds the scope of the AI Act itself and creates new, cumbersome obligations.

The US government also offered its technology experts to EU officials for further clarification on its concerns, according to one source familiar with the letter. This position echoes earlier criticisms from major American technology firms.

Code Under Scrutiny from Multiple Fronts

The American intervention arrives as the code faces scrutiny from various angles. Meta’s head of global affairs, Joel Kaplan, previously described an earlier draft as “unworkable and infeasible” in February, stating the company wouldn’t endorse it as written.

“We have an administration in the United States that is prepared to help advance and defend US technology and technology companies… Obviously we’re going to make sure that they understand what we experience,” Kaplan remarked then. Alphabet has also pushed back, suggesting guidelines on copyright and third-party model testing were excessive.

Concerns aren’t limited to US entities. In a March 28 letter, civil society groups involved in the drafting warned EU officials that the latest iteration weakened crucial fundamental rights protections, reducing them to voluntary suggestions.

Before that, fourteen AI and SME groups from Central and Eastern Europe argued an earlier draft imposed “excessive and impractical obligations.” These criticisms mirror sentiments from June 2023, when over 150 European companies, including heavyweights like Renault and Airbus, cautioned that overly rigid AI rules could impede development within the Union.

Specific points of contention within the code’s drafts have included requirements for detailed documentation of training data sources and web crawler usage (addressing transparency and copyright), and the “Safety and Security Framework” (SSF).

This framework applies to models deemed a “systemic risk” – defined partly by training compute exceeding 10²⁵ FLOPs (floating-point operations, a measure of the total computation used to train a model, not a per-second rate) – mandating ongoing risk assessments and incident reporting to the EU AI Office and national bodies.
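To give a sense of scale, the 10²⁵ FLOPs threshold can be roughly checked with the widely used back-of-the-envelope estimate that training a dense transformer costs about 6 FLOPs per parameter per token. The sketch below assumes that heuristic; the model sizes shown are illustrative, not figures for any real system.

```python
# Rough sketch: does a model's estimated training compute cross the
# EU AI Act's 10^25 FLOPs systemic-risk threshold?
# Uses the common approximation: FLOPs ~= 6 * parameters * training tokens.
# All model sizes below are hypothetical examples.

SYSTEMIC_RISK_THRESHOLD = 1e25  # total training FLOPs per the AI Act

def estimate_training_flops(parameters: float, tokens: float) -> float:
    """Back-of-the-envelope training compute: ~6 FLOPs/parameter/token."""
    return 6 * parameters * tokens

def is_systemic_risk(parameters: float, tokens: float) -> bool:
    """True if estimated training compute meets or exceeds the threshold."""
    return estimate_training_flops(parameters, tokens) >= SYSTEMIC_RISK_THRESHOLD

# Hypothetical 70B-parameter model, 2 trillion training tokens:
# 6 * 70e9 * 2e12 = 8.4e23 FLOPs -> below the threshold
print(is_systemic_risk(70e9, 2e12))   # False

# Hypothetical 1T-parameter model, 10 trillion training tokens:
# 6 * 1e12 * 10e12 = 6e25 FLOPs -> above the threshold
print(is_systemic_risk(1e12, 10e12))  # True
```

This illustrates why the threshold currently captures only the largest frontier training runs: most commercially deployed models sit one to two orders of magnitude below it.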

The AI Act itself also contains specific requirements for GPAI model providers regarding compliance with EU copyright law, including respecting content reservation opt-outs and publishing detailed training data summaries, which the Code aims to clarify.

AI Act Enforcement and the Code’s Role

The backdrop to this dispute is the phased implementation of the EU AI Act. The law officially entered into force in August 2024, but its provisions are rolling out incrementally. Its first major compliance milestone was February 2, when bans on “unacceptable risk” systems took effect, prohibiting certain uses of real-time biometrics, social scoring, and predictive policing.

The next key date is August 2, 2025, when GPAI governance obligations become applicable. The entire regulation becomes fully applicable on August 2, 2026, with some exceptions for high-risk systems extending to 2027. Non-compliance carries significant financial risk, with potential fines reaching 7% of global annual turnover for serious violations (3% for specific GPAI infractions).

Developed under the aegis of the EU AI Office with input from nearly 1,000 stakeholders, the code’s first draft appeared in November 2024. Stakeholders provided feedback via the Futurium platform until late last year. Despite the ongoing debates, the final Code is still expected in May 2025. The European AI Office, alongside national regulatory agencies, is responsible for overseeing the implementation and enforcement of both the AI Act and its associated guidelines.

A Pattern of Transatlantic Tech Disagreement

This confrontation over the AI code fits into a larger pattern of friction between the Trump administration and the EU regarding digital policy. President Trump previously denounced EU tech regulations as “a form of taxation” during remarks at Davos in January.

That sentiment was echoed when a White House National Security Council spokesman called the EU’s recent €700 million fines against Apple and Meta under the Digital Markets Act (DMA) “economic extortion.”

Apple responded sharply to the DMA fines, stating, “These decisions are bad for the privacy and security of our users, bad for products, and force us to give away our technology for free,” arguing the EU unfairly targets American firms. US legislative figures, including Rep. Jim Jordan, have also engaged Brussels, questioning whether the Digital Services Act (DSA) impacts American free speech via its content rules.

Some US objections may target the foundations of the EU’s approach; reports dating back to late 2022 suggested the US had concerns about the AI Act’s definition of AI being potentially too broad. While facing external and internal critiques, the EU is forging ahead, coupling regulation with strategic funding, such as the €1.3 billion approved in March 2025 under the Digital Europe Programme, intended partly to bolster AI development and compliance capabilities within the bloc.

Markus Kasanmascheff
Markus has been covering the tech industry for more than 15 years. He holds a Master's degree in International Economics and is the founder and managing editor of Winbuzzer.com.
