A lawsuit filed in California federal court alleges that LinkedIn violated privacy agreements by sharing Premium subscribers' private messages and personal data to train artificial intelligence (AI) models.
The lawsuit claims that the Microsoft-owned company breached the Stored Communications Act (SCA), California's unfair competition law, and its own contractual commitments to subscribers.
The plaintiff, Alessandro De La Torre, represents a proposed class of millions of affected users, primarily Premium subscribers, who trusted LinkedIn to protect the confidentiality of their communications.
The allegations focus on LinkedIn’s introduction of new privacy settings in 2024, which the plaintiff argues allowed the platform to quietly repurpose user data without explicit consent.
The lawsuit states that these practices resulted in a breach of trust, leaving users vulnerable to potential misuse of their sensitive information. According to the filing, LinkedIn’s conduct not only violated its contractual obligations but also undermined the ethical principles of AI development.
Privacy Settings and Controversial Policy Updates
The controversy centers on a privacy setting LinkedIn introduced in August 2024. This feature, titled “Data for Generative AI Improvement,” allowed LinkedIn and its affiliates, including Microsoft, to process user data for training AI models.

This setting was enabled by default, effectively opting all users into the program unless they manually disabled it. The lawsuit highlights that LinkedIn’s policy update did not adequately inform users about the implications of this data-sharing mechanism.
In September 2024, after public scrutiny and media reports, LinkedIn updated its privacy policy to explicitly state that personal data could be used for generative AI training. The update also clarified that users who chose to opt out could only prevent future data sharing; information already collected would remain embedded in AI models.
LinkedIn’s revised FAQ disclosed: “Opting out means that LinkedIn and its affiliates won’t use your personal data or content on LinkedIn to train models going forward, but does not affect training that has already taken place.”
These revelations suggest that users’ control over their data was limited at best. The lawsuit accuses LinkedIn of failing to provide adequate notice about these changes, violating its own policy that requires prior communication of material updates and an opportunity for users to cancel their accounts.
Legal Basis for the Complaint
The plaintiff argues that LinkedIn violated the SCA, which prohibits electronic communication service providers from knowingly disclosing the contents of user communications without authorization.
The lawsuit alleges that LinkedIn breached this law by sharing private InMail messages—available exclusively to paying Premium subscribers—with third parties, including Microsoft’s affiliates and other unnamed providers, for the purpose of training AI models.
The complaint also asserts that LinkedIn violated its LinkedIn Subscription Agreement (LSA) and Data Protection Agreement (DPA), which promise enhanced privacy protections for Premium users. Section 3.2 of the LSA explicitly forbids sharing confidential user information without consent.
The lawsuit states: “LinkedIn breached its contractual promises by disclosing its Premium customers’ private messages to third parties to train generative artificial intelligence (‘AI’) models.”
Additionally, the plaintiff claims LinkedIn engaged in unfair business practices under California law by misleading users about its data-sharing practices.
The Federal Trade Commission (FTC) previously warned against such retroactive changes to privacy policies, stating in 2024: “It may be unfair or deceptive for a company to adopt more permissive data practices…through a surreptitious, retroactive amendment to its terms of service or privacy policy.”
Impact on Premium Subscribers
The lawsuit focuses on LinkedIn’s Premium subscribers, who pay for features such as InMail and advanced analytics. These users are entitled to additional privacy guarantees under the platform’s terms.
According to the complaint, InMail messages often contain sensitive information related to employment, intellectual property, and compensation. The unauthorized disclosure of this data not only violates user trust but also exposes individuals to risks such as reputational harm or identity theft.
One example from the filing describes how the plaintiff’s own InMail messages contained discussions about financing startups and confidential job-seeking efforts. The plaintiff alleges: “Such disclosures could irreparably harm professional relationships, ruin career opportunities, and endanger the competitive advantage of companies and individuals.”
Broader Implications for Microsoft
As LinkedIn’s parent company, Microsoft plays a central role in the lawsuit’s allegations. The filing suggests that user data from LinkedIn could surface across Microsoft’s ecosystem, including in products like Word, Teams, and Excel.
This raises concerns about unintended privacy breaches across Microsoft's productivity applications.
The lawsuit points out: “Private information could surface in Microsoft products, such as job searches appearing in Word auto-completions, business plans in Teams chat suggestions, or salary-related content in Excel features.”
The complaint also underscores disparities in LinkedIn’s data-sharing practices based on geographic location. Users in regions with stricter privacy regulations, such as the European Union, Canada, and Switzerland, were exempt from these data-sharing practices. In contrast, U.S. users, who lack comprehensive federal privacy protections, were subject to the default opt-in setting.
Remedies and Ethical Concerns
The plaintiff seeks statutory damages of $1,000 per class member under the SCA, along with compensation for overpaid subscription fees. Additionally, the complaint demands “algorithmic disgorgement,” a legal remedy requiring LinkedIn to delete AI models and algorithms trained on improperly obtained data.
The lawsuit raises ethical questions about the use of personal data in AI development. Critics argue that such practices erode public trust and create risks of profiling, discrimination, and identity theft. The case has broader implications for the tech industry, serving as a potential precedent for how companies balance AI innovation with user privacy.