Meta is reintroducing facial recognition for security purposes, rolling out AI-driven tools in the UK, EU, and South Korea to help combat scam ads and improve account recovery.
The system, which began testing in October 2024, is designed to block fraudulent advertisements using celebrity images and provide video selfie verification for users locked out of their accounts.
This marks Meta’s return to the technology after it shut down its general-purpose facial recognition system in 2021 amid privacy concerns.
The company claims this new implementation is focused strictly on security and does not retain biometric data after use. Meta wrote on its blog in October, “We immediately delete any facial data generated from ads for this one-time comparison, regardless of whether our system finds a match, and we don’t use it for any other purpose.”
Meta’s AI Now Targets Scam Ads Using Celebrity Images
Fraudulent ads featuring fake celebrity endorsements have long been a problem on social media. These “celeb-bait” scams trick users into engaging with misleading links, often leading to financial fraud or phishing attacks.
Meta’s new AI system scans flagged ads and compares facial images to verified profile photos of public figures across Meta’s platforms. If a match is found and deemed unauthorized, the ad is automatically removed.
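Meta has not published implementation details, but the description maps onto a standard face-embedding comparison pipeline: embed the face found in the ad, embed the verified profile photos, and flag the ad if any pair is sufficiently similar. The Python sketch below is purely illustrative; `embed_face` is a toy stand-in for whatever trained embedding model Meta actually uses, and the 0.85 similarity threshold is an assumed value, not a documented one.

```python
import numpy as np

SIMILARITY_THRESHOLD = 0.85  # assumed for illustration; Meta's real threshold is not public


def embed_face(face: np.ndarray) -> np.ndarray:
    """Toy stand-in for a trained face-embedding model: flattens the pixels
    and L2-normalises them. A real system would use a deep network that maps
    an aligned face crop to a discriminative unit-length vector."""
    v = face.astype(np.float64).ravel()
    return v / (np.linalg.norm(v) + 1e-12)


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))


def is_celeb_bait(ad_face: np.ndarray, verified_photos: list[np.ndarray]) -> bool:
    """Compare a face cropped from a flagged ad against the verified profile
    photos of protected public figures; any strong match flags the ad.
    Embeddings here are transient local values that are discarded after the
    comparison, mirroring Meta's stated one-time-comparison policy."""
    ad_embedding = embed_face(ad_face)
    return any(
        cosine_similarity(ad_embedding, embed_face(photo)) >= SIMILARITY_THRESHOLD
        for photo in verified_photos
    )


# Example: two random "face crops" fall below the threshold; a copy matches.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    celeb = rng.random((112, 112, 3))
    print(is_celeb_bait(rng.random((112, 112, 3)), [celeb]))  # False
    print(is_celeb_bait(celeb.copy(), [celeb]))               # True
```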
Early trials began with a small group of public figures in October 2024, and Meta reports that the system helped detect scam ads faster. The feature is now expanding to more public figures in the UK and EU, who can opt in to the added protection.
Video Selfie Verification for Account Recovery
Meta is also introducing an AI-powered video selfie verification tool to simplify the account recovery process.
Users who lose access to their accounts—whether due to hacking, forgotten passwords, or phishing scams—can now verify their identity by recording a short video selfie. The AI then analyzes the video and matches it to the user’s profile picture to confirm ownership.
This new method aims to replace traditional government ID verification, which can be slow and prone to fraud. According to Meta, video selfies are encrypted and permanently deleted once verification is completed.
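Again, Meta has not detailed the mechanics, but "match a short video to the profile picture, then delete it" corresponds to a simple verify-then-destroy flow. The sketch below reuses the toy `embed_face` and `cosine_similarity` helpers from the previous example; the frame sampling, majority vote, 0.80 threshold, and deletion step are all illustrative assumptions rather than Meta's actual procedure.

```python
import os

import cv2  # OpenCV, used here only to pull frames from the video
import numpy as np

# reuses embed_face() and cosine_similarity() from the sketch above


def sample_frames(video_path: str, n: int = 5) -> list[np.ndarray]:
    """Grab n frames evenly spaced across the selfie video."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    frames = []
    for idx in np.linspace(0, max(total - 1, 0), n, dtype=int):
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(idx))
        ok, frame = cap.read()
        if ok:
            frames.append(frame)
    cap.release()
    return frames


def verify_video_selfie(video_path: str, profile_photo: np.ndarray,
                        threshold: float = 0.80) -> bool:
    """Require a majority of sampled frames to match the profile picture,
    then delete the video whether or not verification succeeded."""
    h, w = profile_photo.shape[:2]
    profile_embedding = embed_face(profile_photo)
    try:
        frames = sample_frames(video_path)
        # A production pipeline would detect and align the face in each frame;
        # resizing to the profile photo's shape keeps this toy example runnable.
        matches = sum(
            cosine_similarity(embed_face(cv2.resize(f, (w, h))), profile_embedding) >= threshold
            for f in frames
        )
        return bool(frames) and matches > len(frames) // 2
    finally:
        os.remove(video_path)  # one-time use: the selfie is not retained
```

The `finally` block makes the deletion unconditional, succeed or fail, which is the property Meta's "permanently deleted once verification is completed" claim describes.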
However, the reintroduction of facial recognition—even in this limited capacity—has raised concerns among privacy advocates, particularly regarding biometric data security.
Privacy Concerns and Regulatory Scrutiny
Facial recognition remains a controversial technology, particularly in jurisdictions with strict data protection laws such as the EU.
Under the EU’s GDPR rules, biometric data processing requires explicit user consent, and companies must demonstrate clear safeguards to prevent misuse. Meta states it engaged with regulators before launching these features, but privacy watchdogs remain cautious.
Meta’s previous use of facial recognition led to legal challenges, including a $650 million settlement in Illinois over alleged violations of biometric privacy laws.
While the company emphasizes that the new system is designed solely for fraud prevention and account security, critics argue that any reintroduction of biometric AI could set a precedent for broader surveillance.
Other tech companies have taken different approaches. In May 2024, Microsoft barred U.S. police departments from using its Azure OpenAI Service for facial recognition, citing concerns over accuracy, racial bias, and a lack of regulatory oversight.
AI Misidentifications and the Risk of False Positives
Despite advancements in AI, facial recognition technology has repeatedly led to wrongful identifications, fueling skepticism about its reliability.
In May 2024, a UK shopper was wrongly flagged as a shoplifter by an AI-powered security system at a Home Bargains store and banned from its shops before the company admitted the error. Similarly, in 2023, a Black teenager was misidentified by facial recognition at a skating rink and denied entry.
These incidents highlight concerns about AI bias and reliability, raising questions about whether Meta’s system could produce similar errors. The company has stated that its AI is only used for scam prevention and account recovery—not law enforcement or security surveillance.
However, given the broader history of AI misidentifications, privacy groups remain wary of how such technology could evolve.
Public Resistance to Facial Recognition
Beyond misidentifications, public opposition to biometric surveillance has been growing. In August 2024, privacy advocates organized a protest outside Citi Field in New York, challenging the use of AI-driven facial recognition in stadium security screenings.
Critics argued that such systems could lead to mass surveillance, functionally tracking people without their knowledge or consent.
Concerns about AI-driven tracking extend beyond security cameras. In October 2024, two Harvard students demonstrated how Meta’s Ray-Ban smart glasses could be modified to extract personal data in real time using facial recognition software.
The project showed how wearable AI-powered cameras can be repurposed for unauthorized tracking, intensifying concerns over how easily biometric technology can be exploited.
Meta’s AI Strategy and the Future of Biometric Verification
While Meta presents its AI facial recognition rollout as a security measure, the move also signals a broader shift in its approach to identity verification.
The company has not ruled out expanding AI-based authentication in other areas, such as fraud prevention in e-commerce or automated verification for new account sign-ups. If the current deployment proves successful, it could pave the way for a more extensive use of biometric AI across Meta’s platforms.
For now, Meta’s AI-powered security tools remain optional, and the company assures users that all biometric data is deleted after use. Whether this rollout remains a narrowly focused feature or evolves into a broader implementation will likely depend on regulatory responses and public perception.