OpenAI and Google have rejected a UK government proposal to establish an independent body to resolve copyright disputes in generative AI, arguing that it would introduce unnecessary legal complexity and threaten innovation. Their responses come amid intensifying global scrutiny over how AI companies train their models and a wave of lawsuits from publishers, authors, and artists demanding compensation.
In formal submissions to the UK Intellectual Property Office’s consultation on AI and copyright, both companies opposed the creation of a statutory dispute resolution mechanism. OpenAI emphasized the need to support licensing markets, avoid legal uncertainty, and make the UK the AI capital of Europe. Google echoed similar concerns about regulatory overreach and advocated for voluntary approaches.
According to POLITICO, both firms described the UK proposal as “unworkable.” The plan includes an opt-out mechanism that would allow AI developers to train models on copyrighted works by default unless rights holders explicitly refuse. UK ministers remain divided over whether to advance the proposal.
Public Artists, Silent Protests
Outside of the tech industry, creative communities have been vocal in their opposition. On February 25, more than 1,000 British musicians, including Kate Bush and Damon Albarn, released a silent protest album titled “Is This What We Want?” in response to the government’s opt-out framework. The action was part of a broader pushback against the UK’s 2023 white paper on AI regulation, which promoted a “pro-innovation” strategy over fixed statutory obligations.
Paul McCartney joined the criticism in January, warning the UK government not to let artificial intelligence exploit musicians. As reported by Reuters, he urged lawmakers to ensure copyright reform “doesn’t rip off artists.”
UK publishers, too, have pushed back. In February, newspapers launched a front-page campaign under the banner “Make It Fair,” criticizing the government’s opt-out mechanism and demanding stronger protections for journalistic content.
A Widening Legal Minefield
While resisting UK oversight, OpenAI and Google have simultaneously asked the U.S. government to recognize the training of AI on copyrighted materials as protected under the fair use doctrine. In their March submissions, both firms argued that such access is essential for maintaining U.S. competitiveness in AI research. This lobbying effort is part of a coordinated strategy to secure legal cover amid mounting litigation.
The legal pressure is already growing. On March 28, a federal judge ruled that The New York Times’ lawsuit against OpenAI and Microsoft could proceed. The suit claims millions of Times articles were used to train models like ChatGPT and Copilot, and that AI-generated outputs mimic original work and redirect revenue-generating traffic. As Times attorney Ian Crosby stated in court, “This is about replacing the content, not transforming it.”
OpenAI responded that its models do not replicate full articles and are designed to generate new content from smaller text fragments called tokens. Microsoft drew comparisons to earlier disputes involving technologies like VCRs and search engines. Nonetheless, the court’s decision to allow the case to move forward signals that legal definitions of fair use in the AI era are still very much in flux.
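For readers unfamiliar with the term, tokens are the sub-word units that language models read and emit. The following minimal sketch uses OpenAI’s open-source tiktoken library to show how a sentence decomposes into tokens; it illustrates tokenization in general, not anything about how OpenAI trains or serves its models, and the sample sentence is invented for the example.

```python
# Illustrative only: split text into tokens with OpenAI's open-source
# tiktoken library (pip install tiktoken).
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by GPT-4-era models
text = "The court allowed the lawsuit to proceed."
token_ids = enc.encode(text)                # a list of integer token ids
print(token_ids)
print([enc.decode([t]) for t in token_ids])  # the text fragment behind each id
```

Each id maps to a short fragment, often just part of a word, which is why OpenAI frames generation as assembling new text from these small units rather than replaying stored articles.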
The lawsuit has been consolidated with similar copyright claims from authors including Sarah Silverman and Ta-Nehisi Coates into a single case to be heard in Manhattan federal court.
Europe Steps In — And Takes Notes
Legal pushback is not limited to the U.S. In France, three major publishing organizations — the Syndicat National de l’Édition (SNE), the Syndicat National des Auteurs et des Compositeurs (SNAC), and the Société des Gens de Lettres (SGDL) — sued Meta in March over allegations that the company used copyrighted books from shadow libraries like LibGen and Z-Library to train its Llama models. Internal documents cited by Le Monde and reviewed in U.S. court filings show that Meta employees raised legal concerns internally, which were escalated to CEO Mark Zuckerberg. He ultimately authorized the use of the datasets.
As one engineer put it in a message disclosed in court filings, “Torrenting from a [Meta-owned] corporate laptop doesn’t feel right.” Despite these objections, Meta’s AI team moved forward. According to SNE President Vincent Montagne, the plaintiffs tried to contact Meta before taking legal action but received no response. They also informed the European Commission of Meta’s conduct, potentially triggering further scrutiny under EU copyright and AI laws.
New research revealed on March 26 that Meta may have also participated in the redistribution of pirated material. According to an analysis, approximately 30% of the pirated books downloaded by Meta were reuploaded to BitTorrent networks, likely prolonging their availability. This raises separate legal exposure under U.S. copyright law, including potential claims under the Digital Millennium Copyright Act (DMCA).
More Lawsuits, More Jurisdictions
Besides OpenAI, Google, and Meta, other AI companies are also facing legal action. In February, Canadian AI startup Cohere was sued by Condé Nast, McClatchy, and other major publishers, who allege the company used proprietary news content to train its “Command” family of generative models without authorization.
The complaint points to Cohere’s use of retrieval-augmented generation (RAG), a technique that dynamically supplements pre-trained models with real-time document retrieval. While effective for improving accuracy, RAG also introduces new challenges. Plaintiffs argue that this approach enables the reproduction of protected content with minimal transformation, undermining fair use defenses.
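To make the mechanism concrete, here is a minimal sketch of the general RAG pattern: score a corpus of documents against a query, pull the best matches, and splice them into the prompt sent to a language model. The corpus, the bag-of-words scoring, and all function names are hypothetical simplifications for illustration, not Cohere’s implementation.

```python
# A minimal sketch of retrieval-augmented generation (RAG): retrieve the
# documents most relevant to a query, then prepend them to the model prompt.
from collections import Counter
import math

def bow(text: str) -> Counter:
    """Bag-of-words vector: word -> count."""
    return Counter(text.lower().split())

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    q = bow(query)
    ranked = sorted(corpus, key=lambda doc: cosine_similarity(q, bow(doc)),
                    reverse=True)
    return ranked[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Assemble the augmented prompt a RAG system would send to the model."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query, corpus))
    return f"Use the context to answer.\nContext:\n{context}\n\nQuestion: {query}"

corpus = [
    "City council approves new transit budget for 2025.",
    "Local team wins championship after dramatic overtime.",
    "Transit ridership rebounds as new budget funds more routes.",
]
print(build_prompt("What happened with the transit budget?", corpus))
```

Note that the retrieved passages enter the prompt essentially verbatim; that near-verbatim pass-through is the crux of the plaintiffs’ claim that RAG can reproduce protected text with minimal transformation.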
Separately, Canadian media outlets including CBC/Radio-Canada and The Canadian Press filed their own lawsuit against OpenAI, accusing it of unauthorized use of news content to train ChatGPT. The plaintiffs are seeking damages and a court order to block further use of their work.
Backchannel Lobbying and the Global Chessboard
While regulation is contested in public debate and in the courts, some of the industry’s most powerful players are working political channels behind the scenes. Meta CEO Mark Zuckerberg is lobbying the Trump administration to intervene in a major FTC antitrust case concerning the company’s acquisitions of Instagram and WhatsApp. At the same time, he has portrayed EU enforcement efforts — such as a pending €1 billion fine related to Meta’s “pay or consent” ad model — as economic attacks on U.S. tech firms.
Trump echoed that framing in a February memorandum, calling the fines “overseas extortion.” Meanwhile, regulatory enthusiasm in Brussels appears to be waning. The European Commission has slowed Digital Markets Act investigations to avoid reigniting trade disputes with the U.S. government.
For the UK, the path forward remains uncertain. On one side are OpenAI and Google, advocating for regulatory restraint. On the other are creators, publishers, and musicians demanding new safeguards. The proposed oversight body may offer a compromise — a neutral forum where disputes over AI training practices can be addressed without resorting to full-scale litigation. But for now, the plan remains stalled in political limbo, with both legal and reputational stakes mounting.