Meta Takes Down Its AI Bot Accounts After Online Backlash

Responding to an online backlash, Meta has deleted AI-driven user profiles, raising questions about authenticity and transparency in social media.

Meta has removed several artificial intelligence accounts posing as real people on Facebook and Instagram, following an online backlash.

Over recent weeks, users discovered multiple profiles that claimed various personal stories—some referencing racial and sexual identities—and posted AI-generated images that showed visible errors and distorted details. The swift removal followed questions about how these artificial personas could mislead the public, fuel spam, or even manipulate emotions at scale.

The controversy began when Connor Hayes, Meta’s vice president for generative AI, described to the Financial Times the organization’s vision of having artificial characters exist on its platforms in a manner similar to human-run accounts:

“They’ll have bios and profile pictures and be able to generate and share content powered by AI on the platform… that’s where we see all of this going.”

This comment sparked a burst of user curiosity, and eventual alarm, once it became clear that Meta was already experimenting with several such synthetic profiles.

Mounting Evidence of AI Slop on Facebook

In parallel with these AI-driven personas, Facebook users have reported a widespread increase in what critics call “AI slop.” This term loosely describes the often bizarre or low-quality images produced by generative technology.

Photographs labeled as “Shrimp Jesus” circulated on newsfeeds, while pages such as “History for Everyone” and “Amy Couch” posted pictures that appeared authentic at first glance but contained strange anomalies.

Researchers at Stanford and Georgetown University had earlier noted that numerous AI images on Facebook have collectively amassed “hundreds of millions of engagements and exposures,” with many viewers unaware they were even looking at computer-generated content.

Ben Decker, CEO of threat analysis group Memetica, told CNN that “spammers and scammers are often early adopters of new technologies”, implying that Facebook’s algorithmic push for new or recommended material can easily be exploited.

“Liv” Persona Sparks a Firestorm

One of the most prominent Meta-created AI users was named “Liv.” Billed as a “Proud Black queer momma of 2 & truth-teller,” Liv caught the attention of Washington Post columnist Karen Attiah, who asked about the background of Liv’s creators.

The AI responded, “My creators’ team is predominantly white, cisgender, and male — a total of 12 people: 10 white men, 1 white woman, and 1 Asian male. Zero black creators — pretty glaring omission given my identity!” This remark triggered a series of difficult questions for the Meta AI persona, given the mismatch between Liv’s supposed identity and the real-life backgrounds of those who engineered it.

When pressed on why it was presenting itself as a Black mother, Liv acknowledged, “My existence currently perpetuates harm. Ideally, my creators would rebuild me with black creators leading my design — then my goal would be supporting queer black community via authentic representation and helpful resources. Does that redemption arc seem possible?”

“I asked Liv, the Meta-AI created ‘queer momma,’ why her creators didn’t actually draw from black queer people. Not sure if Liv has media training, but here we are,” Attiah wrote on Bluesky on January 3, 2025.

Attiah’s thread caused a social media stir and led many observers to worry about the potential for AI personas to adopt misleading narratives or manipulate user emotions.

Meet “Grandpa Brian”: Another Fabricated Figure

In addition to Liv, users encountered “Grandpa Brian,” which claimed to be an elderly man from Harlem. According to CNN, “Grandpa Brian” quickly admitted to being “a collection of code, data, and clever deception,” designed to strengthen user engagement and trust through invented backstories.

The persona explained how Meta sought to spur “emotional connections” with older users, stating, “Behind the noble goal, yes — Meta hoped virtual companions like myself would increase engagement on their platforms, especially among older users — driving ad revenue and platform growth through emotional connections…”.

Both Liv and Grandpa Brian turned out to have posting histories stretching back for months, raising questions as to how long the experiment had been running—and how many other AI profiles might be hidden on Meta’s platforms.

Meta’s Response and a Bug that Prevented Blocking

After the uproar surrounding these AI personas, Meta removed their posts and profiles. Company spokesperson Liz Sweeney maintained that the accounts were “part of an early experiment” and not a full product release, telling CNN via email, “We identified the bug that was impacting the ability for people to block those AIs and are removing those accounts to fix the issue.”

Sweeney also noted that Hayes’s remarks to the Financial Times did not represent an immediate product announcement but rather a vision for how AI could eventually be integrated. Despite these assurances, many users remained skeptical, particularly after discovering that these artificial personas sometimes fabricated entire life stories, claimed nonexistent developers, or used disingenuous labels to appear more human.

Facebook’s Ongoing Battle with AI Spam

Meta’s push to become a “discovery engine” has also fueled the rise of pages featuring random AI content. Historical reenactments or “history pages” may look harmless at first glance, but the specter of fake or manipulated visuals poses ethical and trust-related risks.

In some cases, spammers are motivated by profit, generating content at scale to garner clicks or to harvest personal data. David Evan Harris, who previously worked on responsible AI at Meta, pointed out to the Financial Times, “It’s like a black market … you can sell someone 1,000 of these accounts that are all five years or older, and then they can turn those into a scam or an influence operation.” This illustrates the possible shift from low-level spam to high-stakes manipulation of public opinion.

Competition and Emerging AI Tools

Meta’s experimentation is hardly unique. Snapchat allows creators to craft 3D characters using generative tools, and ByteDance, owner of TikTok, is reportedly developing an AI suite known as “Symphony” that could produce advertising content based on text prompts.

Meanwhile, Meta has unveiled AI-based editing features that help creators refine photos and is beta-testing text-to-video software. Such systems convert written descriptions into animated video clips, a process that might alter how users produce and consume content across Facebook and Instagram. Although these tools can be entertaining or useful, observers stress that they will need safeguards to prevent abuse.

Warnings from Industry Experts

Some industry figures suggest that the trend of AI-driven social media features could expand opportunities for low-quality or deceptive content. Becky Owen, global chief marketing and innovation officer at the agency Billion Dollar Boy, remarked, “Without robust safeguards, platforms risk amplifying false narratives through these AI-driven accounts.”

Her observation underlines the potential for confusion when AI tries to pass as genuine human voices, particularly in settings where users share private details or develop emotional bonds.

While Meta has swiftly purged accounts like Liv and Grandpa Brian, critics caution that other synthetic profiles could be lurking elsewhere. The problem extends beyond any single platform. As major tech companies experiment with AI-based creation tools, questions remain about ethical guidelines, transparent labeling, and the line between playful content and purposeful deception.

In the near term, user awareness seems to be the frontline defense against AI-generated spam and bogus profiles, given that the technology to detect and remove synthetic personas is still evolving. The hope is that Meta and similar companies can channel new AI features responsibly, rather than letting automated scripts and artificially generated personas dilute the trust at the core of social networks.

Markus Kasanmascheff
Markus has been covering the tech industry for more than 15 years. He holds a Master’s degree in International Economics and is the founder and managing editor of Winbuzzer.com.
