OpenAI is vigorously challenging a US court directive that mandates the preservation of all ChatGPT user logs, a requirement the company has characterized as a “privacy nightmare.” The order, which originates from a copyright infringement lawsuit brought by news organizations, compels OpenAI to retain even conversations users believed were deleted, as well as sensitive data from its API business clients.
OpenAI argues this directive endangers the privacy of its vast user base and was enacted without adequate opportunity for OpenAI to counter what it terms “unfounded accusations” of evidence destruction.
The core of OpenAI’s objection, as outlined in a court filing (via Ars Technica), is that the order fundamentally undermines its pledges regarding user privacy and data control. The company has voiced concerns that retaining all chat logs—spanning ChatGPT Free, Plus, Pro, and API services—could erode user trust and potentially lead to breaches of contracts and global privacy laws. Consequently, OpenAI is urgently seeking to vacate the “sweeping, unprecedented” order, citing not only profound privacy issues but also the substantial operational burdens and costs involved.
This legal confrontation arises from claims by news plaintiffs that ChatGPT users might erase conversations to conceal alleged copyright violations, such as circumventing paywalls. Judge Ona T. Wang, who issued the initial order, had previously signaled agreement with these concerns, suggesting the measure was vital to prevent potential evidence loss. OpenAI, however, firmly denies destroying any data due to litigation.
Its memorandum argued the plaintiffs’ theory lacks backing, as they “have not offered a single piece of evidence supporting it.” The company further stated, “They have not identified any evidence that anyone (other than the news plaintiffs themselves) has attempted to obtain their content from ChatGPT.” The company also argues the preserved data would likely not be useful for the case.
User Data and Privacy Commitments Under Scrutiny
OpenAI highlights that individuals utilize ChatGPT for diverse activities, with interactions often “ranging from the mundane to profoundly personal.” The company’s established privacy policies, which it asserts the court order has effectively “jettisoned,” previously enabled users to opt out of data retention, manually erase specific dialogues, or employ temporary chats designed to vanish.
The current order now compels OpenAI to keep this data regardless of user preferences. This situation has reportedly triggered alarm among users. Some tech professionals on platforms like LinkedIn and X characterized the mandated retention as “a serious breach of contract for every company that uses OpenAI,” and “an unacceptable security risk,” according to OpenAI’s court submissions.
OpenAI’s central argument is that “users feel more free to use ChatGPT when they know that they are in control of their personal information, including which conversations are retained and which are not.” The company maintains that the court failed to sufficiently consider these user concerns prior to issuing the preservation directive.
Judge Wang, however, partly justified the extensive scope of the order by pointing to the news organizations’ assertion that “the volume of deleted conversations is significant.” On May 29, she denied an earlier attempt by OpenAI to reverse her order, while assuring the company that user data would not be made public. OpenAI continues to press its case, now urging Judge Sidney H. Stein to overturn Judge Wang’s decision.
Evolving Data Practices and Previous Security Responses
The ongoing legal dispute underscores the intricate nature of data management in the generative AI era. OpenAI has progressively updated its features concerning data persistence, notably with the ChatGPT Memory function.
This feature, expanded to more Plus subscribers in May 2024, permits the chatbot to remember details from prior conversations for more personalized interactions, though users retain control and can disable it. More recently, around April 16, OpenAI integrated this Memory capability with web search functions, enabling past conversational context to refine online search queries.
Historically, OpenAI has also acted to safeguard user data. For example, in July 2024, the company implemented encryption for its ChatGPT macOS application following the discovery of a security flaw that had left user conversations in plain text. At that juncture, OpenAI spokesperson Taya Christianson affirmed the company’s dedication to stringent security measures.
Broader Industry Context and Legal Arguments
The current court order, however, introduces a distinct challenge, shifting from specific vulnerabilities to a comprehensive data preservation mandate that OpenAI deems excessively broad. The company also contends that including API business customer data is illogical, as this data is typically governed by standard retention policies not easily alterable by the end-users of those businesses’ applications. The lawsuit also names Microsoft as a co-defendant.
Plaintiffs have argued the preservation order is crucial to counter OpenAI’s defense that its AI models are infrequently used by customers to infringe copyrights. OpenAI counters that Judge Wang’s order effectively “forces OpenAI to rescind users’ control over when and how their ChatGPT conversation data is used and retained.”
The company has emphasized that the harm to its reputation and users outweighs any “‘speculative’ benefit,” saying “The order in essence compels us to retain data that users explicitly choose to remove.”
OpenAI has stated that adherence to the current preservation order would necessitate months of engineering work and entail substantial costs. The company has affirmed its commitment to continue opposing the order to safeguard its users’ interests, asserting that the news plaintiffs’ speculative requirement for the data does not justify the potential detriment to user privacy and OpenAI’s operational stability.