Privacy Group noyb Targets Meta’s AI Data Practices in Europe, Issues Ultimatum Over GDPR Compliance

European privacy group noyb has issued Meta a cease-and-desist letter over its use of EU user data for AI training, demanding GDPR-compliant opt-in consent and threatening major legal action, including multi-billion-euro damages claims.

European privacy group noyb has formally demanded Meta cease using personal data from its European users for AI model training without explicit opt-in consent, escalating a contentious battle over data rights and AI development. The “cease and desist” letter, detailed in a noyb announcement on May 14, targets Meta’s reliance on “legitimate interest” for processing data, an approach noyb argues flouts the EU’s General Data Protection Regulation (GDPR). Meta faces a May 21, 2025, deadline to respond.

The confrontation could significantly disrupt Meta’s AI ambitions in Europe, particularly for its Llama AI models. noyb, a prominent privacy enforcement organization, has threatened substantial legal repercussions, including seeking court injunctions to halt data processing. Such an injunction could also demand the deletion of AI systems trained on improperly acquired European data.

If EU data is intermingled with non-EU data, noyb warns the entire AI model might need to be erased. Furthermore, the group is contemplating a massive class-action lawsuit, potentially costing Meta upwards of €200 billion, based on estimated damages for its European user base.

The core of the dispute lies in Meta’s interpretation of GDPR. noyb insists that using personal data for AI training requires freely given, specific, informed, and unambiguous consent from users. Max Schrems, noyb’s founder, argues that Meta’s attempt to use “legitimate interest” as its legal basis is a flawed interpretation, particularly given past European Court of Justice rulings on data use for advertising.

He stated that Meta effectively “simply says that its interest in making money is more important than the rights of its users.” noyb also points out the practical difficulties Meta would face in complying with other GDPR rights, such as data erasure or rectification, within complex AI systems, especially since its Llama models are often distributed as open-source software, making them hard to recall or update.

The High Stakes Of AI Training Data In Europe

Meta’s journey to deploy and train its AI in the European Union has been fraught with regulatory friction. In June 2024, the company was forced to pause its AI training using public data from EU Facebook and Instagram users after objections from regulatory bodies, notably Ireland’s Data Protection Commission (DPC).

Meta contended at the time that this would result in a “second-rate experience” for European users. Ireland’s Data Protection Commission described its engagement with Meta as “intensive” before the pause.

Despite these hurdles, Meta AI eventually launched in many European countries in March 2025, but with notable limitations. The European version was specifically marketed as not collecting user interactions for AI training, and features like AI image generation and deep content personalization, available in the U.S., were absent.

However, according to ET CIO, Meta plans to start using personal data from European users for AI technology training from May 27, a move that appears to be the direct trigger for noyb’s current action.

Meta, in response to noyb’s latest challenge, has defended its practices. A company spokesperson said that “NOYB’s arguments are wrong on the facts and the law” and that the company has provided EU users a “clear way to object” via email and in-app notifications, allowing them to do so “at any time.”

Meta further accused noyb of attempting to stifle AI innovation in the EU, claiming the group’s actions are “ultimately harming consumers and businesses who could benefit from these cutting-edge technologies.” Meta believes its approach aligns with a December 2024 European Data Protection Board (EDPB) opinion, which Meta says “affirmed that our original approach met our legal obligations.”

Broader Scrutiny Over Data Practices

This specific challenge over AI training data occurs against a backdrop of wider concerns regarding Meta’s data handling. The company’s standalone Meta AI app, launched in late April, quickly drew criticism for its default retention of chat details for personalization. While Meta points to user controls, privacy advocates have been skeptical.

Ben Winters of the Consumer Federation of America, speaking to The Washington Post, described Meta’s privacy disclosures as “laughably bad.” Meta’s own terms of service explicitly warn users, “do not share information that you don’t want the AIs to use and retain.”

The potential for future advertising based on these AI interactions, a prospect raised by CEO Mark Zuckerberg, adds another layer of concern for privacy watchdogs. Beyond privacy, Meta also faces legal battles over copyright, with French publishers and authors suing the company in March for allegedly using copyrighted materials without authorization to train its AI.

Reports, including from Le Monde, have cited internal documents suggesting Meta approved the use of pirated book libraries for its Llama models. These varied legal and ethical challenges underscore the intense scrutiny Meta faces as it pushes forward with its AI development, often navigating a complex and contested data landscape. Other groups, such as the German consumer rights organization VZ NRW, have also initiated legal action over Meta’s AI training plans.

Markus Kasanmascheff
Markus has been covering the tech industry for more than 15 years. He holds a Master's degree in International Economics and is the founder and managing editor of Winbuzzer.com.
