Meta Platforms launched its standalone Meta AI application in late April, and while the app quickly climbed the download charts, becoming the No. 2 free iPhone app shortly after release, it also ignited immediate privacy concerns. The app doubles as the replacement control center for Meta’s Ray-Ban smart glasses, further integrating the company’s hardware and AI software ecosystem.
Positioned as a personalized artificial intelligence assistant powered by the company’s new Llama 4 models, the app has raised red flags among privacy experts and users over its deep integration with user data from Facebook and Instagram and its default behavior of retaining and utilizing chat conversations.
Unlike competitors offering clearer controls or temporary chat modes, Meta AI operates with a default setting that remembers user interactions to tailor future responses and improve its systems.
This memory function, coupled with the potential use of chat data for advertising, presents a different set of privacy trade-offs compared to services like OpenAI’s ChatGPT or Google’s Gemini. Gemini is also moving toward personalization using search history, as is ChatGPT, and both face similar debates over user control; Meta’s approach, however, appears less transparent.
An AI That Remembers By Default?
Central to the app’s personalization – and the privacy concerns – is its “Memory” feature. Meta AI automatically parses conversations to identify and store key facts about users, intended to make the AI more helpful over time. However, testing by The Washington Post revealed this memory could capture sensitive details inferred from chats, including topics like fertility techniques, divorce, payday loans, and tax evasion inquiries, despite Meta stating it tries to avoid storing sensitive information.
Users can exert some control; the Memory feature is optional and can be disabled in the settings, according to Meta’s Help Center. Users can also view, manage, and delete individual memories or clear the entire file. However, deleting a memory doesn’t automatically erase the chat it originated from; that requires a separate deletion step within the chat history.
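This two-step deletion suggests that memories and chats live in separate stores: the extracted fact is kept apart from the raw transcript it was derived from, so removing one does not touch the other. The minimal Python sketch below illustrates that separation; the class and method names are hypothetical stand-ins, not Meta’s actual implementation.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Hypothetical model of an assistant that keeps chat transcripts and
# derived "memories" in separate stores. All names are illustrative.

@dataclass
class AssistantStore:
    chats: Dict[str, List[str]] = field(default_factory=dict)   # chat_id -> messages
    memories: Dict[str, str] = field(default_factory=dict)      # memory_id -> extracted fact

    def log_message(self, chat_id: str, text: str) -> None:
        """Persist the raw message, then extract any key fact from it."""
        self.chats.setdefault(chat_id, []).append(text)
        fact = self._extract_fact(text)
        if fact:
            # The memory stores only the derived fact, so deleting it
            # later leaves the originating conversation intact.
            self.memories[f"mem-{len(self.memories)}"] = fact

    def _extract_fact(self, text: str) -> "str | None":
        # Toy stand-in for a model-based extractor; a real system
        # would use an LLM to identify facts worth remembering.
        return text if text.lower().startswith("i am") else None

    def delete_memory(self, memory_id: str) -> None:
        """Removes the derived fact only; the source chat survives."""
        self.memories.pop(memory_id, None)

store = AssistantStore()
store.log_message("chat-1", "I am planning a divorce.")
store.delete_memory("mem-0")
assert store.chats["chat-1"]   # original message is still retained
assert not store.memories      # derived memory is gone
```

Under this model, fully purging a sensitive topic requires deleting both the memory and the chat it came from, which matches the behavior described above.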
The app currently offers no setting to prevent chats or memories from being saved in the first place, nor does it provide an incognito or temporary chat mode like ChatGPT’s. The only current way to avoid this persistent logging within Meta’s ecosystem is to use the Meta AI website without logging into a Meta account. Even then, Meta notes it collects technical data, such as IP addresses, linked to a temporary identifier.
This default data retention and the complexity of managing it drew criticism. “The disclosures and consumer choices around privacy settings are laughably bad,” Ben Winters, director of AI and data privacy at the Consumer Federation of America, told The Washington Post. He advised caution: “I would only use it for surface-level, fun prompts or things that don’t have anything to do with your personal information, desire, concerns, fears, or anything you wouldn’t broadcast to the internet.”
Meta maintains it provides user control, with spokesman Thomas Richards stating to The Washington Post, “We’ve provided valuable personalization for people on our platforms for decades, making it easier for people to accomplish what they come to our apps to do — the Meta AI app is no different. We provide transparency and control throughout, so people can manage their experience and make sure it’s right for them.”
Your Chats, Meta’s Training Data, Future Ads
Beyond personalization, Meta confirms that user interactions – text, voice recordings, and images submitted via features like “Imagine” – may be used to improve its AI models. The company’s terms of service bluntly warn users: “do not share information that you don’t want the AIs to use and retain.”
Crucially, unlike OpenAI’s ChatGPT, which offers a setting to opt out of data usage for model improvement, Meta AI provides no such direct control. The practice also exists alongside ongoing legal challenges, such as Kadrey v. Meta, a case alleging that Meta’s Llama models were initially trained on copyrighted books obtained without permission.
The potential monetization of these intimate conversations also looms. While the app is currently ad-free, CEO Mark Zuckerberg explicitly mentioned seeing “a large opportunity to show product recommendations or ads” within AI interactions during a recent interview following the Q1 2025 earnings call.
Meta’s policies do not currently prevent the company from using chat content for ad targeting across its platforms, a prospect that worries privacy advocates. “The idea of an agent is that it’s working on my behalf — not on trying to manipulate me on others’ behalf,” Justin Brookman of Consumer Reports remarked to The Washington Post, adding that personalized AI advertising “is inherently adversarial.”
Navigating Controls and the Broader AI Picture
The app’s connection to Facebook and Instagram further complicates the data picture. Linking these accounts grants Meta AI access to potentially years of social media activity and profile information. Users who wish to avoid this data mingling are advised to create a separate Meta AI account. The app also features a social “Discover” feed where users can share chats, though anything shared there is public. While users can download their stored AI information, the overall lack of proactive controls contrasts sharply with competitors.
For instance, DuckDuckGo’s AI features, launched in March, prioritize privacy by anonymizing interactions and avoiding data logging for training.
Meta’s Strategic AI Play
Meta itself demonstrates awareness of different privacy models. In late April, it detailed plans for “Private Processing” in WhatsApp, adopting Apple-like techniques to handle AI requests without accessing message content.
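Meta has published only a high-level description of Private Processing, but the general pattern behind such designs is that the client encrypts its request so only a trusted, attested environment can decrypt it, and the hosting infrastructure sees nothing but ciphertext. The Python sketch below, using the `cryptography` library, is a toy illustration of that pattern under assumed names; the real system additionally involves hardware attestation, key-release policies, and stateless compute nodes, none of which are modeled here.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Toy illustration of a "confidential processing" request flow: the
# client encrypts its prompt with a key shared only with the trusted
# enclave, so intermediate infrastructure handles only ciphertext.
# This is a simplified stand-in, not Meta's or Apple's actual protocol.

def client_encrypt(session_key: bytes, prompt: str) -> tuple:
    """Encrypt the prompt under a key shared only with the enclave."""
    nonce = os.urandom(12)
    ciphertext = AESGCM(session_key).encrypt(nonce, prompt.encode(), None)
    return nonce, ciphertext

def enclave_handle(session_key: bytes, nonce: bytes, ciphertext: bytes) -> tuple:
    """Runs inside the trusted boundary: decrypt, process, re-encrypt."""
    prompt = AESGCM(session_key).decrypt(nonce, ciphertext, None).decode()
    reply = f"summary of: {prompt}"  # stand-in for actual model inference
    out_nonce = os.urandom(12)
    return out_nonce, AESGCM(session_key).encrypt(out_nonce, reply.encode(), None)

# In a real deployment the session key would be negotiated with an
# attested enclave; it is generated locally here to keep the sketch runnable.
key = AESGCM.generate_key(bit_length=256)
nonce, ct = client_encrypt(key, "Summarize my unread messages")
out_nonce, reply_ct = enclave_handle(key, nonce, ct)
print(AESGCM(key).decrypt(out_nonce, reply_ct, None).decode())
```

The operator of the infrastructure between client and enclave never holds the session key, which is what allows a service to process AI requests without being able to read the underlying messages.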
That privacy-preserving design stands in contrast to the data-hungry approach of the standalone Meta AI app, suggesting the latter’s design is a deliberate strategic choice. The strategy also includes pushing Meta’s own AI by blocking Apple Intelligence features within its apps, a move that followed reportedly failed partnership talks over privacy standards. The company’s AI ambitions arrive amid a history of regulatory friction, including being asked to pause AI training on public EU user data last year.