Disney & Universal Sue Midjourney, Escalating AI Copyright War

Disney and Universal have filed a landmark copyright lawsuit against AI image generator Midjourney, accusing the firm of illegally using famous characters like Darth Vader and Elsa to train its models, escalating the global legal battle over AI and intellectual property.

The simmering conflict between creative industries and artificial intelligence developers has erupted into a full-blown war, with Disney and Universal filing a landmark copyright infringement lawsuit against AI image generator Midjourney. The suit, filed on Wednesday, June 11, accuses the AI firm of building its powerful commercial software by unlawfully training it on the entertainment giants’ most iconic characters, including Darth Vader, the Minions, Homer Simpson, and the princesses from Frozen.

This legal assault marks the first time major Hollywood studios have taken direct action over AI-generated imagery, signaling a new, high-stakes front in the conflict between creators and technology companies.

Bringing Hollywood’s immense legal and financial power to the forefront, the action aligns the studios with a growing coalition of authors, news organizations, and musicians. These groups contend that AI developers have constructed multi-billion dollar enterprises on the back of stolen work. The lawsuit, as reported by The New York Times, is scathing in its assessment, labeling Midjourney as “the quintessential copyright free-rider and a bottomless pit of plagiarism.”

While Disney’s general counsel, Horacio Gutierrez, affirmed the company’s optimism for AI as a creative tool, he drew a firm line, stating, “piracy is piracy, and the fact that it’s done by an A.I. company does not make it any less infringing.”

This case shifts the debate from the abstract replication of artistic styles to the concrete infringement of globally recognized, multi-billion dollar characters. It follows an earlier, less successful legal challenge against Midjourney by a group of artists in 2023, which saw most of its claims dismissed for being too vague. With this new, highly specific complaint, the industry is watching to see if Hollywood can succeed where individual creators have so far struggled.

A Widening Legal War on Multiple Fronts

The Disney and Universal lawsuit is the latest and most powerful salvo in a widespread legal rebellion against AI data scraping. Just days ago, a landmark trial began in London’s High Court between Getty Images and Stability AI, with the photo giant’s lawyer declaring, “This trial is the day of reckoning for that approach.”

Legal experts regard the Getty v. Stability AI case as a watershed, with one noting it will be “pivotal in setting the boundaries of the monopoly granted by UK copyright in the age of AI.” In another significant ruling, a judge in the Thomson Reuters v. ROSS Intelligence case recently rejected the “fair use” defense for AI training, setting a potentially crucial precedent.

This conflict spans nearly every creative industry. The Recording Industry Association of America (RIAA) is suing AI music generators Suno and Udio for copyright infringement, arguing that “There is nothing fair about appropriating an artist’s work, extracting its essence, and repurposing it to compete with the originals.”

Meanwhile, The New York Times is pursuing a high-profile lawsuit against OpenAI and Microsoft for using millions of its articles, a case a federal judge allowed to proceed in March. More recently, Rupert Murdoch’s Dow Jones and the New York Post sued Perplexity AI for its use of their news content. These major players are joined by coalitions of authors and publishers from around the world, all challenging the foundational practices of the generative AI industry.

The Dual Strategy: Suing in Court, Signing Deals in the Boardroom

The primary legal defense mounted by AI companies is that their data training methods are protected under the “fair use” doctrine. In court filings, Microsoft argued that “Copyright law is no more an obstacle to the LLM than it was to the VCR (or the player piano, copy machine, personal computer, internet, or search engine).” This defense claims that training AI models is a transformative act that creates new works, rather than simply copying existing ones.

However, this legal argument is unfolding alongside a pragmatic business strategy: striking lucrative licensing deals with the very content owners they are fighting in court. A recent analysis from the YouTube channel Hey AI suggests a major shift from pure litigation toward negotiation.

The New York Times, while suing OpenAI, recently announced a major content licensing deal with Amazon. Similarly, major record labels are in licensing talks with Suno and Udio, according to a report from Bloomberg, even as their lawsuit proceeds.

To that end, the music industry is pushing for AI firms to implement ‘fingerprinting’ technology to track the use of artists’ work and ensure proper compensation. During its I/O 2025 event, Google unveiled its SynthID Detector, a public tool that identifies AI-created media by checking for embedded SynthID digital watermarks in images, video, audio, and text. Meta is working on AudioSeal, a tool focused on watermarking AI-generated speech.

Beyond Copyright: Allegations of Piracy and Deception

Some allegations against AI firms venture beyond the legal grey area of fair use and into accusations of outright piracy and unethical behavior. Meta has been hit with multiple lawsuits from authors for allegedly training its Llama AI models on vast collections of pirated books sourced from “shadow libraries” like LibGen.

Court filings revealed internal messages from concerned employees, with one engineer stating, “Torrenting from a [Meta-owned] corporate laptop doesn’t feel right.” According to TorrentFreak, the situation was complicated further by an expert analysis suggesting Meta may have re-uploaded nearly a third of the pirated books it downloaded, potentially participating in their distribution.

In the ongoing Kadrey v. Meta case, a judge found sufficient evidence that Meta may have engaged in criminal copyright infringement by actively helping distribute the pirated files.

In a separate case focused on contract violation, Reddit sued the artificial intelligence company Anthropic on June 4 for unlawfully scraping its user-generated content. The lawsuit alleges Anthropic ignored the platform’s terms of service and technical barriers designed to block such scraping.

In a statement to CBS News, Reddit’s chief legal officer Ben Lee said, “AI companies should not be allowed to scrape information and content from people without clear limitations on how they can use that data.” The suit notes that Reddit has established paid data-sharing partnerships with other AI firms such as Google and OpenAI, and accuses Anthropic of choosing to take the data for free rather than engaging in good-faith negotiations.

These cases, moving beyond copyright into allegations of theft and deception, represent a critical challenge to the AI industry’s narrative of innovation. The outcome of the Disney lawsuit and the broader legal battles will likely force a new paradigm, one where access to data is no longer a free-for-all but a carefully negotiated and compensated transaction.

The wild west era of AI development appears to be drawing to a close, with creators and corporations alike demanding that their intellectual property be respected, licensed, and paid for.

Markus Kasanmascheff
Markus has been covering the tech industry for more than 15 years. He holds a Master's degree in International Economics and is the founder and managing editor of Winbuzzer.com.
