Investigation Uncovers Disturbing AI-Generated Content on YouTube With Graphic Violence, Sex, and Abuse

An investigation has uncovered dozens of YouTube channels using AI to create disturbing gore and fetish content featuring cartoon characters.

A WIRED investigation published today has identified dozens of YouTube channels deploying generative artificial intelligence to produce unsettling videos featuring popular cartoon characters.

Familiar figures like Minions and animated cats are depicted in scenes involving graphic violence, sexualized situations, and abuse, on channels whose subscriber counts range from the thousands to the millions.

The findings raise concerns about a potential new wave of problematic content reminiscent of the “Elsagate” phenomenon, but supercharged by modern AI tools. Responding to WIRED’s report, YouTube confirmed it terminated two flagged channels for violating its Terms of Service, suspended monetization for three others, and removed specific videos violating its Child Safety policy.

The AI-generated videos often present jarring juxtapositions: familiar characters in grotesque situations. Examples include Minions mutating after exposure to radioactive slime and then attacking children, or animated kittens being beaten by parental figures wielding baseball bats and frying pans.

Despite the disturbing nature of the material, channels hosting it frequently use misleading descriptions, titles, and tags – such as #funnycat, #familyfun, #disneyanimatedmovies, or even #animalrescue for videos showing parasite infections – seemingly to attract young viewers or circumvent platform filters.

One channel named “Go Cat,” for instance, describes itself as “a fun and exciting YouTube channel for kids!” while featuring visceral, body-horror-style animations. Another video depicts a Minion transforming gruesomely in a sewer, accompanied by an AI narrator singing, “Beware the minion in the night, a shadow soul no end in sight.”

The high view counts on some of these channels are questionable: WIRED noted hundreds of seemingly automated comments across videos, pointing toward bot activity consistent with the “dead internet theory” rather than solely human viewership.

Echoes of Elsagate, Amplified by AI

This flood of bizarre content draws parallels to “Elsagate,” the phenomenon from years ago in which inappropriate videos featuring children’s characters slipped through filters onto the YouTube Kids app. YouTube responded at the time by removing ads from millions of videos and deleting hundreds of thousands more.

However, the advent of accessible generative AI presents a new challenge. Creating these macabre animations no longer requires specialized skills, only the ability to craft prompts (or work around AI tools’ built-in safeguards), enabling rapid, large-scale production that often follows online monetization tutorials.

“This trend is particularly concerning because of the scale and speed at which AI can generate this content,” Robbie Torney, senior director of AI programs at Common Sense Media, explained to WIRED.

He added, “Unlike traditional content creation, AI-generated videos can be produced in large volumes with minimal oversight. Without human review in the creation pipeline, inappropriate and potentially harmful material can easily reach kids.” Common Sense Media identified recurring themes in the videos shown to them by WIRED, including “characters in extreme distress or peril,” “mutilation, medical procedures, and cruel experiments,” and “depictions of child abuse and torture.”

Evidence shared earlier this year by other YouTubers, such as BitterSnake, suggested some of these channels might be operated from organized office environments, possibly in Asia, hinting at content-farm operations.

Platform Moderation and Evolving Tools

YouTube asserts its Community Guidelines and quality principles apply to all content, including AI-generated material, and that it uses both human reviewers and technology for enforcement.

“As always, all content uploaded to YouTube is subject to our Community Guidelines and quality principles for kids—regardless of how it’s generated,” a spokesperson stated. The platform also noted that attempts by banned users to open new channels violate its Terms of Service and are enforced using “a combination of both people and technology.”

While some channels flagged by WIRED were removed, others with identical reposted content remained active at the time of the report.

This situation arises despite previous efforts by YouTube and the wider tech industry to address AI-related harms. In April 2024, Google, Microsoft, and other AI firms publicly committed to child safety measures developed with organizations like Thorn, pledging specifically not to train models on, or use them to generate, child sexual abuse material (CSAM).

Google specifically highlighted its internal teams and technology used for CSAM detection. Later, in September 2024, several companies made voluntary commitments to the White House to combat harmful deepfakes and non-consensual exploitative imagery.

YouTube itself has introduced specific tools over the past year. A June 2024 privacy policy update allowed individuals to request the removal of AI-generated media replicating their likeness, though requests are subject to review.

In September 2024, the platform enhanced its Content ID system – technology traditionally used for copyright enforcement – to also detect AI-generated voices and faces. More recently, in October 2024, YouTube introduced a “captured with a camera” label based on the C2PA (Coalition for Content Provenance and Authenticity) standard, which embeds metadata to verify authentic, unedited footage.

However, the label’s effectiveness is limited by camera hardware adoption and by the fact that editing breaks the verification chain. YouTube also requires creators to label AI-generated content, though this relies on creator honesty.
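To make the C2PA mechanism concrete, here is a minimal, illustrative sketch in Python (not drawn from the article or from YouTube’s implementation). It heuristically checks whether a JPEG file still carries an embedded C2PA manifest – per the C2PA specification, manifests in JPEG are stored in APP11 JUMBF segments labeled “c2pa” – but it does not perform the cryptographic validation that a full verifier such as the open-source c2patool does.

```python
# Illustrative sketch: heuristically detect whether a JPEG still carries an
# embedded C2PA manifest. Per the C2PA spec, manifests in JPEG live in APP11
# (0xFFEB) segments as JUMBF boxes labeled "c2pa". Presence check only; this
# does NOT cryptographically validate the provenance chain.
import sys

def has_c2pa_manifest(path: str) -> bool:
    with open(path, "rb") as f:
        data = f.read()
    if data[:2] != b"\xff\xd8":        # not a JPEG (missing SOI marker)
        return False
    pos = 2
    while pos + 4 <= len(data) and data[pos] == 0xFF:
        marker = data[pos + 1]
        if marker == 0xDA:             # SOS: metadata segments all precede this
            break
        if marker == 0x01 or 0xD0 <= marker <= 0xD9:
            pos += 2                   # standalone markers have no length field
            continue
        length = int.from_bytes(data[pos + 2:pos + 4], "big")
        payload = data[pos + 4:pos + 2 + length]
        if marker == 0xEB and b"c2pa" in payload:  # APP11 JUMBF, C2PA label
            return True
        pos += 2 + length
    return False

if __name__ == "__main__":
    print(has_c2pa_manifest(sys.argv[1]))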

Content Farms and Cross-Platform Spread

The persistence of the content found by WIRED, despite these measures, underscores the difficulty platforms face. The channels often reappear quickly after takedowns. The problem is not confined to YouTube.

Reports surfaced months ago of similar AI-generated “minion gore” videos on TikTok, created by overlaying AI filters onto real footage of violent incidents. TikTok told 404 Media that “hateful content as well as gory, gruesome, disturbing, or extremely violent content” is prohibited and said it is taking action to remove harmful AI-generated content that violates its policies.

The cross-platform nature of this issue highlights the broad challenge presented by easily accessible AI generation tools. It also fuels ongoing discussions about platform responsibility and the adequacy of current safety measures, including voluntary industry commitments versus formal regulation, like the EU’s AI Act.

Legislative and Industry Responses

Legislative efforts are underway in places like California, where a bill aims to establish clearer safeguards for children interacting with AI systems. The measure has drawn support from groups like Common Sense Media, whose senior AI adviser, Tracy Pizzo Frey, testified at a hearing. The bill proposes risk classifications for AI systems and restrictions on certain applications for minors, such as controversial AI companions like Replika.

Experts emphasize the need for collaboration. “The rapid evolution of AI technology demands that all stakeholders—platforms, content creators, parents, and organizations like ours—work together to ensure kids’ exposure to online video content is safe and positive,” Torney stated.

A YouTube spokesperson added, “We want younger viewers to not just have a safer experience but also an enriching one… To support this, we partnered with experts to create a set of quality principles for kids and family content meant to help guide creators in creating quality content for kids and reduce the amount of content that is low quality, regardless of how it was created.”

YouTube claims viewership of designated “high quality” content has risen 45 percent on the separate YouTube Kids app since these principles were introduced, though the WIRED investigation reveals disturbing AI-generated content continues to find pathways onto the main platform.

Markus Kasanmascheff
Markus has been covering the tech industry for more than 15 years. He holds a Master’s degree in International Economics and is the founder and managing editor of Winbuzzer.com.
