The New York Times has already spent $7.6 million on its prolonged legal battle with Microsoft and OpenAI, The Information reports. Both companies allegedly used copyrighted Times content to train artificial intelligence models without permission.
Filed last December in federal court in Manhattan, the case underscores the financial and strategic stakes of the fight over generative AI and its potential implications for the media industry.
Copyright Infringement Claims Against AI Models
The lawsuit accuses OpenAI of training ChatGPT on Times content and Microsoft of integrating that content into Copilot, citing examples in which the AI tools reproduced or summarized Times articles. These AI-generated summaries allegedly bypass the Times’ paywall, undermining its subscription and affiliate revenue models. In particular, the complaint singles out Microsoft’s Bing Chat (since rebranded as Copilot), which uses OpenAI’s technology to provide product recommendations that The Times says have diverted traffic from its lucrative Wirecutter platform.
The Times demands substantial compensation, potentially totaling billions of dollars, and calls for the destruction of any AI models that have been trained on its content. According to the newspaper, these AI applications have had a direct negative impact on its readership and revenue, while the cost of defending its intellectual property has already reached $7.6 million this year, with $4.6 million spent in the past quarter alone.
Microsoft and OpenAI Respond With Fair Use Arguments
Microsoft and OpenAI, however, reject the accusations. Microsoft argues that its AI models fall under fair use, citing historical parallels such as the legal battles over the introduction of VCRs in the 1980s. The company insists that AI-generated content, presented as summaries rather than verbatim copies, does not replace the need for original journalism. It also claims that The Times’ examples were deliberately engineered to force Bing Chat and Copilot into producing anomalous results.
OpenAI has gone further, alleging that The Times conducted manipulative testing to elicit near-verbatim excerpts from ChatGPT, and argues that such content reproduction is atypical. The AI company also maintains that its technology generally synthesizes information available across the internet and is not designed to replace original news articles. Both companies have moved to dismiss parts of the lawsuit, standing firm that their practices comply with copyright laws.
Broader Context: The Copyright War Over AI
The Times’ legal moves are part of a broader wave of copyright concerns as AI systems become more integrated into everyday technology. In May, eight newspapers under Alden Global Capital’s ownership also sued OpenAI and Microsoft, claiming unauthorized use of their journalism to train AI models. Well-known authors like Sarah Silverman and Michael Chabon have taken similar action, alleging that OpenAI used their literary works without consent.
These cases have ignited widespread debate over whether AI training practices violate copyright laws or constitute transformative use. Some media companies, such as TIME, have opted for collaboration instead of conflict, signing licensing agreements with AI developers. TIME granted OpenAI access to its extensive archive in June, while other publishers, like The Atlantic and Vox Media, have forged similar partnerships, balancing the benefits of AI with content protection.
The Financial Stakes for The Times
Beyond the direct copyright concerns, The Times claims that Microsoft’s Bing Chat has hurt its affiliate revenue, particularly through Wirecutter, which relies on linking users to retailers. According to The Times, Bing Chat summaries and product suggestions have redirected potential readers away from Wirecutter’s pages, compounding the financial impact. This adds another dimension to the lawsuit, as The Times tries to quantify how AI-generated content can financially harm traditional media.
Despite these allegations, Microsoft remains firm in its belief that these AI tools are enhancing content accessibility and offering new ways for users to engage with information. OpenAI continues to stress its commitment to ethical AI development, denying any deliberate misuse of copyrighted content. Both companies are monitoring the legal developments closely, knowing that the outcome could establish crucial precedents for AI and content ownership.
Perplexity AI Also Under Scrutiny
Adding to the legal tension, The Times sent a cease-and-desist letter in October to Perplexity AI, a separate AI startup accused of similar copyright violations. The newspaper alleges that Perplexity used its articles without permission for content summarization. Perplexity AI, which has previously faced criticism from other major publishers such as Condé Nast, argues that its models operate transparently and fairly. CEO Aravind Srinivas has even suggested a willingness to work collaboratively with media companies but insists that factual information shouldn’t be monopolized.
While Perplexity has implemented revenue-sharing initiatives and CPM advertising partnerships with select publishers, the backlash underscores a larger industry-wide concern: how AI-driven technologies are reshaping the economics of journalism. This ongoing friction between AI providers and media organizations continues to raise critical questions about the future of content rights and the sustainability of traditional news models.