In a notable strategic turn, Meta is building privacy protections for upcoming artificial intelligence features in WhatsApp by directly adopting principles from Apple’s Private Cloud Compute (PCC) system. Announced via a Meta engineering blog post, WhatsApp’s “Private Processing” aims to let users invoke AI for tasks like summarizing chats without Meta or WhatsApp accessing the message content, mirroring the approach Apple detailed for its own “Apple Intelligence” platform.
This move comes just weeks after reports around April 17 indicated Meta had actively blocked Apple Intelligence system features from working within WhatsApp, Instagram, and Facebook on iOS.
That restriction was interpreted as a push towards Meta’s own integrated AI tools, powered by its Llama models. The backdrop also includes reportedly failed talks between Apple and Meta about a potential AI partnership in mid-2024, allegedly due to Apple’s concerns over Meta’s privacy standards.
Now, Meta appears to be tackling the privacy challenge for AI within its encrypted messenger head-on, using its competitor’s blueprint. Meta stated its goal clearly: “Private Processing will allow users to leverage powerful AI features, while preserving WhatsApp’s core privacy promise, ensuring no one except you and the people you’re talking to can access or share your personal messages, not even Meta or WhatsApp.” The company outlined core principles for the user-facing features: Optionality, Transparency, and User Control.
Inside Private Processing
Meta’s engineers detailed a multi-stage pipeline designed to shield user data. The system uses Confidential Computing hardware, specifically Trusted Execution Environments (TEEs) running as Confidential Virtual Machines (CVMs), along with GPUs operating in confidential compute mode.
TEEs are secure areas within a processor designed to isolate code and data from the main operating system. When a WhatsApp user opts to use an AI feature requiring cloud processing, the system first obtains anonymous credentials to verify client authenticity without revealing identity.
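In rough terms, the credential step looks like the sketch below: a token that proves “this is a legitimate WhatsApp client” while naming no account. The names here (`CredentialIssuer`, `verify_token`) are illustrative, not Meta’s, and a production scheme would use blind signatures or anonymous tokens so that issuance and redemption are unlinkable, which this simplified version does not achieve.

```python
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

class CredentialIssuer:
    """Hypothetical issuer: signs opaque tokens for authenticated clients.

    A real anonymous-credential scheme blinds the token before signing so
    the issuer cannot link a redeemed token back to the account it was
    issued to; this simplified version skips that step.
    """
    def __init__(self):
        self._key = ed25519.Ed25519PrivateKey.generate()

    def public_key(self):
        return self._key.public_key()

    def issue(self):
        token = os.urandom(32)          # pure randomness, no account ID
        return token, self._key.sign(token)

def verify_token(public_key, token: bytes, signature: bytes) -> bool:
    """Gateway-side check: the token is genuine but names no user."""
    try:
        public_key.verify(signature, token)
        return True
    except InvalidSignature:
        return False

issuer = CredentialIssuer()
token, sig = issuer.issue()                           # obtained in advance
assert verify_token(issuer.public_key(), token, sig)  # redeemed anonymously
```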
The request then travels through a third-party relay using Oblivious HTTP (OHTTP), a protocol that decouples the request content from the sender’s IP address, so Meta’s gateway sees what is being asked but never who is asking.
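The value of the relay hop is that no single party sees both the requester’s identity and the request content. Here is a rough Python illustration of that split, using X25519 and AES-GCM as a simplified stand-in for OHTTP’s actual HPKE encapsulation; the function names and roles are hypothetical:

```python
import os
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import x25519
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def derive_key(shared_secret: bytes) -> bytes:
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"ohttp-demo").derive(shared_secret)

# The gateway publishes a long-term public key, as real OHTTP gateways do.
gateway_priv = x25519.X25519PrivateKey.generate()
GATEWAY_PUB = gateway_priv.public_key()

def client_encapsulate(request: bytes):
    """Client side: encrypt the request so only the gateway can read it."""
    eph = x25519.X25519PrivateKey.generate()
    key = derive_key(eph.exchange(GATEWAY_PUB))
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, request, None)
    eph_pub = eph.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw)
    return eph_pub, nonce, ciphertext

def relay_forward(encapsulated, client_ip: str):
    """Relay: sees the client's IP but only an opaque blob, and drops the IP."""
    del client_ip                      # the sender's identity stops here
    return encapsulated

def gateway_decapsulate(encapsulated) -> bytes:
    """Gateway: recovers the request content but never learns who sent it."""
    eph_pub_raw, nonce, ciphertext = encapsulated
    eph_pub = x25519.X25519PublicKey.from_public_bytes(eph_pub_raw)
    key = derive_key(gateway_priv.exchange(eph_pub))
    return AESGCM(key).decrypt(nonce, ciphertext, None)

blob = client_encapsulate(b"summarize: <chat excerpt>")
print(gateway_decapsulate(relay_forward(blob, "203.0.113.7")))
```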
Before processing, the user’s device establishes a secure connection directly with the TEE using Remote Attestation and Transport Layer Security (RA-TLS). Remote Attestation allows the device to cryptographically verify that the TEE is running the expected, untampered software, checking its measurements against a third-party ledger before transmitting data.
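Stripped of the hardware details, the attestation check reduces to comparing a measurement reported by the TEE against values published on a third-party ledger, and refusing to send data on any mismatch. A schematic sketch, with assumed ledger and measurement formats:

```python
import hashlib

# Hypothetical third-party ledger listing measurements of approved CVM builds.
PUBLISHED_MEASUREMENTS = {
    hashlib.sha256(b"private-processing-cvm-image-v1").hexdigest(),
    hashlib.sha256(b"private-processing-cvm-image-v2").hexdigest(),
}

def attest_or_abort(reported_measurement: str) -> None:
    """Client-side gate: send data only to a TEE running a published build.

    In a real RA-TLS handshake the measurement arrives in a hardware-signed
    attestation report bound to the TLS key, so it cannot simply be asserted.
    """
    if reported_measurement not in PUBLISHED_MEASUREMENTS:
        raise ConnectionAbortedError("TEE measurement not on the public ledger")

# A TEE running an expected image passes; a tampered one is rejected.
attest_or_abort(hashlib.sha256(b"private-processing-cvm-image-v2").hexdigest())
try:
    attest_or_abort(hashlib.sha256(b"backdoored-image").hexdigest())
except ConnectionAbortedError as err:
    print(err)
```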
The actual user request, like messages for summarization, is then encrypted end-to-end for this specific session with ephemeral keys only the device and the verified TEE can access.
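Per Meta’s description, these session keys live only as long as a single request, and discarding them afterwards is what provides forward security: traffic recorded today cannot be decrypted later once the keys are gone. A compact sketch of that lifecycle, with illustrative names and parameters:

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import x25519
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def session_key(own_private, peer_public) -> bytes:
    # Both ends derive the same key from an ephemeral Diffie-Hellman exchange.
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"pp-session").derive(own_private.exchange(peer_public))

# Fresh keypairs for this one request; no long-term keys touch the content.
device_priv = x25519.X25519PrivateKey.generate()
tee_priv = x25519.X25519PrivateKey.generate()

key = session_key(device_priv, tee_priv.public_key())
nonce = os.urandom(12)
ciphertext = AESGCM(key).encrypt(nonce, b"messages to summarize", None)

# Inside the enclave, the TEE derives the identical key and decrypts.
assert AESGCM(session_key(tee_priv, device_priv.public_key())).decrypt(
    nonce, ciphertext, None) == b"messages to summarize"

# After the response is returned, both sides discard the key material;
# with the ephemeral keys gone, recorded traffic cannot be decrypted later.
del device_priv, tee_priv, key
```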
Meta asserts that processing within the CVM is stateless: message data is not retained once the AI task completes. This aligns with Meta’s stated internal requirements of “Stateless processing and forward security” and “Non-targetability”, the latter meaning that “an attacker should not be able to target a particular user for compromise without attempting to compromise the entire Private Processing system.”
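In code terms, “stateless” means the handler keeps the request only in local variables and writes nothing to disk, logs, or shared state. A toy illustration of that contract, with a deliberately hypothetical summarize step:

```python
def summarize(text: str) -> str:
    # Deliberately trivial stand-in for the model call inside the CVM.
    return text[:40] + "..."

def handle_request(plaintext_request: str) -> str:
    """Process entirely in local variables: no logging, storage, or cache.

    When the function returns, the request and all intermediate data fall
    out of scope; a stateless design persists none of it anywhere.
    """
    return summarize(plaintext_request)  # only the response leaves

print(handle_request("Alice: lunch at noon? Bob: works, usual place."))
```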
Apple’s Foundation and Verification Efforts
This architecture closely follows the design of Apple’s Private Cloud Compute, unveiled as part of Apple Intelligence at WWDC in June 2024. Apple created PCC to extend its primarily on-device AI capabilities for more complex tasks, using custom Apple Silicon servers equipped with security features like the Secure Enclave and running a specialized, hardened operating system. Apple also employs OHTTP and ensures data processed on PCC servers is ephemeral and not used for model training.
Both companies are emphasizing external verification, a point particularly salient for Meta given its history with user data privacy. Meta has pledged to make its CVM image binary available for security researchers to inspect and plans to publish source code for key components, such as attestation verification. The company is also formally expanding its bug bounty program to cover Private Processing.
This echoes Apple’s own efforts; in October 2024, Apple opened its PCC infrastructure to researchers via a Virtual Research Environment, offering rewards up to $1 million for finding serious security flaws, setting a high bar for transparency incentives.
Addressing the Trust Deficit
The technical designs of Private Processing and PCC overlap considerably in both goals and methods. The success of Meta’s implementation, however, will likely hinge on the promised transparency and on the scrutiny of independent security researchers.
Meta stated that “Users and security researchers must be able to audit the behavior of Private Processing to independently verify our privacy and security guarantees.” The company aims to give users optionality and control, including the ability to exclude particularly sensitive chats from AI features via WhatsApp’s Advanced Chat Privacy setting. By adopting a model associated with Apple’s privacy-focused branding and committing to external audits, Meta is clearly working to assure users that AI convenience in WhatsApp won’t come at the cost of their private conversations.