YouTube Loosens Rules on Hate Speech, Misinformation

YouTube now allows more hate speech and misinformation under a new 'public interest' policy, sparking immediate backlash from civil rights groups and advertisers over online safety concerns.

YouTube has quietly relaxed its content moderation policies, instructing reviewers to permit videos containing hate speech and misinformation if the content is deemed to be in the “public interest.” The previously undisclosed change, implemented in December, raises the threshold for rule-breaking material within a single video from one-quarter to one-half of its runtime.

For users, this means a greater likelihood of encountering controversial and potentially harmful content that the platform would have previously removed. The move follows a similar decision by Meta to scale back its fact-checking program in the U.S., cementing an industry-wide trend toward more permissive content standards. A YouTube spokesperson explained the adjustment was part of an evolving process, stating the company’s goal remains “to protect free expression on YouTube while mitigating egregious harm.”

A Looser Standard for Harmful Content

Under the new internal guidance, moderators are now encouraged to weigh whether a video’s “freedom of expression value may outweigh harm risk.” Training materials reviewed by reporters illustrated the policy with real-world examples. One video that used a slur targeting a transgender individual was allowed to remain online because it was a single violation within a larger political discussion.

Another video with an inflammatory title covering vaccine policy changes was also permitted, with YouTube judging its public interest value to be greater than its potential for harm. The platform's definition of "public interest" is expansive, covering everything from elections and political ideologies to race and gender, a marked departure from the company's more aggressive enforcement during the COVID-19 pandemic.

An Industry Pivoting Away From Policing Speech

YouTube’s decision reflects a broader strategic realignment across Silicon Valley. The move comes as parent company Google faces two major antitrust lawsuits from the Department of Justice, and some analysts see the relaxed moderation as an attempt to lower the company’s political profile. However, this approach clashes with international regulatory pressure, as the EU prepares a potential billion-dollar fine against X for content moderation failures.

It also runs contrary to demands from safety advocates, who recently protested outside Meta’s New York office over online harms to children. Meta’s own policy pivot was heavily criticized by its independent Oversight Board, which described the rollout as being “announced hastily, in a departure from regular procedure,” and lacking proper human rights review.

This industry-wide shift is further complicated by a growing reliance on automation. While TikTok laid off human moderators to lean on AI, Meta plans for AI to handle up to 90% of its product risk reviews, a move one former executive warned would have serious consequences.

Markus Kasanmascheff
Markus has been covering the tech industry for more than 15 years. He holds a Master's degree in International Economics and is the founder and managing editor of Winbuzzer.com.
