The director of the U.S. Copyright Office, Shira Perlmutter, was fired by President Donald Trump on Saturday, May 10, a move that injects political uncertainty into the complex landscape of artificial intelligence and intellectual property rights.
The dismissal follows the White House’s firing of Librarian of Congress Carla Hayden just two days earlier, raising concerns about executive branch influence over an office that sits within the Library of Congress and, by extension, the legislative branch. The two dismissals come as major technology companies, notably Meta Platforms, face significant legal and regulatory challenges over their data practices and market power.
Representative Joe Morelle, the leading Democrat on the House Administration Committee, alleged a direct connection between the firing and the AI industry. Morelle claimed the action was “no coincidence [Trump] acted less than a day after [Perlmutter] refused to rubber-stamp Elon Musk’s efforts to mine troves of copyrighted works to train AI models,” referencing a recent report from Perlmutter’s office that raised questions about AI’s use of copyrighted materials.
The congressman asserted that the dismissal “once again tramples on Congress’s Article One authority and throws a trillion-dollar industry into chaos,” and asked, “When will my Republican colleagues decide enough is enough?” Reactions from copyright lawyers and industry groups expressed concern about political interference and the stability of the Copyright Office.
Meta Antitrust Trial Underway
Meta is currently defending itself in a landmark antitrust trial brought by the Federal Trade Commission. The trial, which began on April 14, 2025, in federal court in Washington, D.C., centers on the FTC’s allegations that Meta illegally acquired Instagram and WhatsApp to establish and maintain a monopoly in the personal social networking market. The regulatory body is seeking a potentially drastic outcome: the divestiture of both platforms.
During his testimony as the trial’s first witness, Meta CEO Mark Zuckerberg countered the FTC’s market definition. He explained that Facebook has evolved significantly over time, shifting to become “more of a broad discovery-entertainment space,” suggesting a much broader competitive landscape than the narrow market defined by the FTC.
The trial follows unsuccessful settlement negotiations in which Zuckerberg’s offers, which reportedly climbed to nearly $1 billion, were dismissed as “delusional,” according to Wall Street Journal reporting. The failed talks unfolded amid Zuckerberg’s lobbying efforts at the Trump White House, where he reportedly sought intervention to halt the lawsuit.
AI Copyright Challenges Mount Globally
Adding to Meta’s regulatory woes are multiple lawsuits alleging the unauthorized use of copyrighted content to train its artificial intelligence models. Evidence presented in a U.S. court case indicates that Meta utilized copyrighted books obtained from sources like LibGen, a known source of pirated content.
Internal communications revealed in court documents show that Meta employees were aware of the legal risks associated with these datasets, with one engineer expressing discomfort by stating, “Torrenting from a [Meta-owned] corporate laptop doesn’t feel right.”
Despite these internal concerns, the decision to use the LibGen dataset was reportedly approved after being escalated to Mark Zuckerberg.
These legal challenges, including a lawsuit filed by French publishers and authors who characterized Meta’s alleged actions as “monumental looting,” directly contest Meta’s defense that its AI training practices are protected under ‘fair use’.
Further complicating Meta’s legal standing, expert analysis suggests that approximately 30% of the pirated books Meta downloaded via BitTorrent for AI training were subsequently reuploaded, effectively redistributing the files to other users on the network. This finding raises new concerns about Meta’s potential role in facilitating digital piracy, which could undermine its ‘fair use’ arguments.
A judge in the U.S. copyright case has indicated that the outcome may hinge more on whether Meta’s AI models cause economic harm to authors rather than the methods used to acquire the training data.
Political Strategy and Regulatory Landscape
Meta’s navigation of these legal and regulatory challenges appears increasingly tied to its political strategy, particularly its relationship with the Trump administration.
The company recently appointed Dina Powell McCormick, a former high-ranking advisor in the Trump administration, to its board of directors. The appointment, effective April 15, 2025, is widely interpreted as a strategic move to strengthen ties with the administration amid mounting regulatory pressure.
This follows earlier steps, including Meta’s decision in January 2025 to dismantle its U.S. third-party fact-checking program in favor of a user-led Community Notes system, a change publicly praised by President Trump.
Zuckerberg has also actively sought support from Trump to counter European Union digital regulations, including the Digital Markets Act (DMA) rules impacting Meta’s ‘pay or consent’ advertising model.
Meta has framed the EU’s regulatory actions as “overseas extortion,” echoing language used by President Trump. While the EU has reportedly signaled a potential easing of fines to avoid trade tensions with the U.S., Meta continues to face significant scrutiny over issues such as messaging interoperability under the DMA and compliance with the EU AI Act.
The company’s AI development strategy also seems influenced by this political climate, with Meta stating its new Llama 4 models explicitly aim to address perceived political bias, striving for “a range of views.”