Inside Facebook and X’s Moderation Failures: How Social Media Fuels Misinformation for Profit

X incentivizes election misinformation through monetization, while Facebook’s auto-generated pages aid militia growth, raising new concerns about platform oversight.

With the U.S. election season underway, social media giants Facebook and X, formerly Twitter, are under heightened scrutiny over their handling of extremist content and misinformation. Facebook’s automated system has created pages linked to militia groups, a critical concern amid increased paramilitary organizing.

Meanwhile, as I reported today, X has launched its "Lightning Notes" system, designed to flag misinformation faster and limit the spread of harmful posts. These efforts underscore the ongoing challenges platforms face in balancing content moderation, data control, and regulatory compliance.

Facebook Auto-Generates Pages Linked to Extremist Militia Groups

Despite Meta's 2020 ban on paramilitary groups, Facebook's auto-generated page system continues to create spaces that extremist militias use to organize. A WIRED report suggests the feature, which automatically creates a page when there is high user interest in a topic or location, has inadvertently facilitated organizing for groups like American Patriots Three Percent (AP3).

According to data from the Tech Transparency Project (TTP), hundreds of pages for paramilitary groups have surfaced on Facebook since early 2021, many labeled as “unofficial” to indicate lack of direct affiliation. However, TTP’s director, Katie Paul, highlighted that even these “unofficial” labels don’t prevent groups from using these pages to attract followers, share training plans, and discuss activities like ballot box monitoring.

WIRED and The New Republic report that while Meta claims to carry out periodic “strategic network disruptions,” these auto-generated pages continue to support militia organizing. Paul points out that despite these measures, Facebook remains a central platform for militia growth, as these groups capitalize on the network’s wide reach and group functionalities to establish localized and state-wide connections.

X Introduces Lightning Notes to Speed Up Misinformation Moderation

Today, X implemented “Lightning Notes,” a major overhaul of its Community Notes system, enabling faster response to misinformation. The new system cuts down the typical review time from hours to as little as 14 minutes, with notes appearing on flagged posts within 18 minutes.

Marked by a lightning bolt icon, these notes give users a visual cue that content was quickly verified, aiming to counter misinformation in real time. The change addresses a longstanding critique of X's slow moderation: the platform handles more than 500 million posts a day and relies on its 800,000 contributors worldwide for crowdsourced moderation.

Lightning Notes relies on a crowdsourced scoring model designed to ensure quick, community-driven feedback. That model differs from the AI-driven moderation approaches at platforms like TikTok and YouTube, where algorithms play a more extensive role.
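
X has published the ranking code behind Community Notes, which scores notes with matrix factorization so that a note only surfaces when raters who typically disagree both find it helpful. The sketch below is a much-simplified illustration of that bridging idea, not the production algorithm; the group labels, vote format, and 0.6 threshold are assumptions for illustration.

```python
import numpy as np

# Toy illustration of "bridging-based" crowdsourced note scoring.
# The production Community Notes system uses matrix factorization;
# this stand-in simply requires a note to be rated helpful across
# rater groups that usually disagree. Labels and the 0.6 threshold
# are illustrative assumptions.

def note_score(ratings: dict[str, list[int]]) -> float:
    """Map each rater group's 1/0 'helpful' votes to a single score.

    Returns the minimum helpfulness rate across groups, so a note
    that only one side likes scores low even if its overall average
    is high.
    """
    group_means = [np.mean(votes) for votes in ratings.values() if votes]
    return float(min(group_means)) if group_means else 0.0

def should_display(ratings: dict[str, list[int]], threshold: float = 0.6) -> bool:
    return note_score(ratings) >= threshold

# A note rated helpful by both groups is shown ...
print(should_display({"group_a": [1, 1, 1, 0], "group_b": [1, 1, 0, 1]}))  # True
# ... while a one-sided note scores low despite a high overall average.
print(should_display({"group_a": [1, 1, 1, 1], "group_b": [0, 0, 1, 0]}))  # False
```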

TikTok and YouTube’s Approaches

On October 11, ByteDance, the parent company of TikTok, announced workforce reductions affecting hundreds of content moderation staff, with Malaysia hit hardest. The restructuring eliminated approximately 700 positions as part of a larger initiative to automate content review with artificial intelligence (AI). ByteDance asserts that its AI can now identify and remove roughly 80% of harmful content automatically, leaving only complex cases for human review.
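
ByteDance has not published how its system works, but the "80% removed automatically, complex cases to humans" split implies a standard confidence-threshold triage pattern. The following sketch is a generic illustration under that assumption; the classifier stub and the cutoff values are hypothetical.

```python
from dataclasses import dataclass

# Generic confidence-threshold triage, the common pattern behind
# claims like "~80% of harmful content removed automatically".
# The classifier stub and the 0.95/0.05 cutoffs are hypothetical;
# ByteDance's actual system is not public.

@dataclass
class Post:
    post_id: str
    text: str

def harm_probability(post: Post) -> float:
    """Stand-in for a trained model returning P(post is harmful)."""
    return 0.5  # placeholder score

AUTO_REMOVE = 0.95  # confident enough to remove with no human input
AUTO_ALLOW = 0.05   # confident enough to leave the post up

def triage(post: Post) -> str:
    p = harm_probability(post)
    if p >= AUTO_REMOVE:
        return "remove"        # handled fully automatically
    if p <= AUTO_ALLOW:
        return "allow"
    return "human_review"      # ambiguous middle band escalates

print(triage(Post("1", "example post")))  # -> "human_review"
```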

These layoffs coincide with Malaysia's increasingly stringent social media regulations. The government's mandate that companies obtain operational licenses to combat digital threats has pushed TikTok toward automated compliance measures intended to safeguard user safety and privacy against ever-evolving online risks.

ByteDance has also committed $2 billion to content safety initiatives in 2024, an investment that underscores its dedication to platform security while it adapts its workforce to a changing regulatory landscape.

In a similar effort to promote responsible AI use, YouTube introduced an AI-content labeling tool in March 2024. This tool requires content creators to mark videos that utilize realistic AI tools in ways that have the potential to mislead viewers. Managed through the Creator Studio’s “Altered Content” option, this initiative is part of Google’s broader strategy to preserve content authenticity on its platform.

Not all AI-generated content falls under the new labeling requirement: only videos that incorporate realistic AI elements need explicit flagging, while fictional or minimally AI-enhanced visuals remain exempt. Creators who fail to accurately label their content face potential penalties, such as demonetization.

AI-Driven Moderation Gains Momentum, Data Privacy Concerns Emerge

In another significant change, X updated its data-sharing policy on October 18, allowing third-party developers to access user data for AI training. This policy aligns with recent moves by platforms like Reddit to share data with external AI developers, although it has raised concerns over user privacy.

Under the new terms, set to take effect on November 15, violators could face fines of up to $15,000 for extracting over a million posts within 24 hours without permission. CEO Elon Musk has increasingly restricted data scraping to keep control over X's extensive user-generated content as the company seeks additional revenue sources.
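
For a third-party developer, staying under that threshold is a rolling-window accounting problem. The sketch below is a hypothetical client-side guard, not part of any official X tooling; only the one-million-post and 24-hour figures come from the reported terms.

```python
import time
from collections import deque

# Hypothetical client-side guard for staying under the reported cap
# of 1,000,000 posts extracted per 24 hours. The class is illustrative
# and not part of any official X SDK; only the limit and window values
# come from the reported policy.

POST_LIMIT = 1_000_000
WINDOW_SECONDS = 24 * 60 * 60

class ExtractionBudget:
    def __init__(self, limit: int = POST_LIMIT, window: int = WINDOW_SECONDS):
        self.limit = limit
        self.window = window
        self.events = deque()  # (timestamp, post_count) pairs

    def try_consume(self, post_count: int) -> bool:
        """Record an extraction only if the rolling 24h total stays in budget."""
        now = time.time()
        while self.events and now - self.events[0][0] > self.window:
            self.events.popleft()  # drop batches older than the window
        used = sum(count for _, count in self.events)
        if used + post_count > self.limit:
            return False  # would exceed the cap; caller should back off
        self.events.append((now, post_count))
        return True

budget = ExtractionBudget()
print(budget.try_consume(900_000))   # True: first batch fits
print(budget.try_consume(200_000))   # False: would cross 1M in the window
```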

Meta’s CrowdTangle Removal Sparks Election Misinformation Concerns

Meta’s decision to discontinue its CrowdTangle tool sparked extensive criticism from academics, politicians, and regulators, especially with the U.S. elections fast approaching. Since its acquisition by Facebook in 2016, CrowdTangle has evolved into an indispensable tool for researchers and journalists alike.

The platform has proven instrumental in the identification and tracking of false information, hate speech, and election interference on both Facebook and Instagram. Its capabilities have been particularly valuable in understanding the dissemination of harmful content and in bolstering security during critical election periods.

Brandi Geurkink, head of the Coalition for Independent Technology Research, emphasized CrowdTangle’s unique contribution to civil society’s efforts to monitor harmful content. The tool has played a pivotal role in scrutinizing the dissemination of violence, political disinformation, and fake news on social media.

Meta proposes replacing CrowdTangle with the Meta Content Library (MCL) and Content Library API, which aim to offer broad access to public content archives on Facebook and Instagram. However, academics have raised doubts about the efficacy of the new system, and the European Commission is probing the decision and questioning Meta's planned CrowdTangle replacement.
