X Overhauls Community Notes for Near-Instant Misinformation Flagging

With 'Lightning Notes,' X aims to counter misinformation faster by displaying fact-checks within minutes, tackling the speed of viral posts head-on.

X launched its “Lightning Notes” update today to tackle misinformation more rapidly. The overhaul of its Community Notes system trims review times to as little as 14 minutes, allowing notes to appear on potentially misleading posts within 18 minutes. With this change, X aims to blunt the impact of misinformation at the speed it spreads, addressing a key criticism of the platform’s moderation practices.

Accelerating Community Notes: How Lightning Notes Work

Community Notes, introduced on X in 2022, lets contributors flag posts that lack context or contain misleading information, with other users rating those notes for accuracy. Historically the system faced delays, sometimes of several hours, before a note appeared; X’s new approach retools the scoring process to enable faster reviews.
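X’s published scoring approach rests on a “bridging” idea: a note is surfaced only when contributors who usually disagree both rate it helpful. The Python sketch below illustrates that idea with a toy matrix factorization; the threshold, dimensions, and training loop are illustrative assumptions, not X’s production scorer, which is open source but considerably more elaborate.

```python
import numpy as np

# Toy ratings: rows = contributors, columns = notes.
# 1 = "helpful", 0 = "not helpful", NaN = no rating.
ratings = np.array([
    [1.0,    1.0, np.nan],
    [1.0,    0.0, 1.0   ],
    [np.nan, 0.0, 1.0   ],
    [1.0,    1.0, 0.0   ],
])

def score_notes(r, dim=1, epochs=3000, lr=0.02, reg=0.1, seed=0):
    """Fit rating ~ mu + rater_bias + note_bias + rater_vec @ note_vec.

    The per-note bias captures helpfulness NOT explained by viewpoint
    alignment (the factor vectors), which is the "bridging" signal.
    """
    rng = np.random.default_rng(seed)
    n_raters, n_notes = r.shape
    observed = ~np.isnan(r)
    mu = np.nanmean(r)
    b_u, b_n = np.zeros(n_raters), np.zeros(n_notes)
    f_u = rng.normal(0.0, 0.1, (n_raters, dim))
    f_n = rng.normal(0.0, 0.1, (n_notes, dim))
    for _ in range(epochs):
        pred = mu + b_u[:, None] + b_n[None, :] + f_u @ f_n.T
        err = np.where(observed, r - pred, 0.0)  # ignore missing ratings
        b_u += lr * (err.sum(axis=1) - reg * b_u)
        b_n += lr * (err.sum(axis=0) - reg * b_n)
        f_u += lr * (err @ f_n - reg * f_u)
        f_n += lr * (err.T @ f_u - reg * f_n)
    return b_n  # higher = rated helpful across differing viewpoints

THRESHOLD = 0.4  # illustrative display cutoff
for i, score in enumerate(score_notes(ratings)):
    status = "show note" if score > THRESHOLD else "needs more ratings"
    print(f"note {i}: intercept {score:+.2f} -> {status}")
```

In the real system a note must also clear minimum-rating requirements and other checks before earning “Helpful” status; the factorization above only conveys the core intuition.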

A lightning bolt icon indicates when a fact-check is added within an hour, giving users a visual cue for quickly verified content. However, with an average daily volume of 500 million posts, maintaining this pace for every flagged post is a logistical challenge for the platform’s more than 800,000 contributors worldwide.

X’s strategy aligns with a broader industry trend toward AI-assisted content moderation. Unlike TikTok and YouTube, however, X continues to rely on a mostly crowdsourced moderation model, underscoring the platform’s distinctive approach to managing content.

X’s Exemption from European Digital Markets Act Compliance

X’s latest moderation shift comes shortly after the company sidestepped compliance with the European Union’s Digital Markets Act (DMA). Following an October 11 European Commission investigation, X was exempted from the DMA’s “gatekeeper” designation, which imposes significant operational restrictions on major tech companies to enhance market competition.

While X met certain user-engagement thresholds for the “gatekeeper” label, the Commission found it does not operate as an essential gateway connecting businesses and consumers. That leaves it with more flexibility than companies like Google and ByteDance, which must meet the DMA’s fair-competition demands, such as cross-app compatibility.

In practice, X’s freedom from DMA requirements means it can continue its own moderation updates without the compliance hurdles that companies like Google and Meta face in Europe. The DMA framework, in effect since 2023, is intended to curb monopolistic behaviors among tech giants with significant digital market influence.

TikTok’s Moderation Shift and AI Adoption

ByteDance, the company behind TikTok, announced job cuts targeting hundreds of moderation staff on October 11, with Malaysia’s workforce impacted the most. The restructuring eliminated around 700 positions, part of a larger move to automate content reviews with AI that ByteDance claims can catch roughly 80% of harmful posts automatically, leaving only complex cases for human reviewers.
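The hybrid arrangement ByteDance describes, where automation handles clear-cut posts and humans take the rest, can be pictured as confidence-threshold routing. The Python sketch below is a hypothetical illustration; the threshold values, the classify_harm stand-in, and the labels are assumptions, not TikTok’s actual system.

```python
from dataclasses import dataclass

@dataclass
class Post:
    id: str
    text: str

# Hypothetical thresholds; real systems tune these per harm category.
AUTO_REMOVE = 0.95   # confidence above which a post is removed outright
AUTO_ALLOW = 0.05    # confidence below which a post is left up

def classify_harm(post: Post) -> float:
    """Stand-in for an ML model returning P(post is harmful)."""
    # Toy heuristic purely for demonstration.
    return 0.99 if "scam" in post.text.lower() else 0.02

def route(post: Post) -> str:
    score = classify_harm(post)
    if score >= AUTO_REMOVE:
        return "removed automatically"
    if score <= AUTO_ALLOW:
        return "allowed automatically"
    return "queued for human review"   # the remaining "complex cases"

for p in [Post("1", "Join this crypto scam now"), Post("2", "Cat video")]:
    print(p.id, "->", route(p))
```

The claimed 80% automation rate would correspond to how much traffic falls outside the middle band; narrowing the thresholds shifts more work back to human reviewers.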

The recent layoffs coincide with Malaysia’s increasingly stringent social media regulations. The government’s mandate that companies obtain operating licenses to counter digital threats has prompted TikTok to lean on automated systems to stay compliant, measures framed as protecting users and their privacy against evolving online risks.

To get ahead of these regulatory obligations, ByteDance has earmarked $2 billion for content safety initiatives in 2024, maintaining that commitment to platform security even as it trims its moderation workforce.

Data Control Policies and the Expanding Role of AI

On October 18, X quietly updated its privacy policy and terms of service, allowing third-party AI developers access to user data for training models, a shift in line with data-sharing policies adopted by platforms like Reddit.

The policy change enables AI bots and external developers to use X’s data, but leaves users with limited options to control this usage. New terms, set to take effect on November 15, also tighten penalties for unauthorized data scraping, fining violators $15,000 for extracting over a million posts within a 24-hour period.
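To make the new scraping rule concrete, a sliding-window counter shows how a threshold of 1,000,000 posts per 24 hours could be tracked per client. Everything below (the class name, the window bookkeeping, the enforcement hook) is an illustrative assumption; X has not published its detection machinery.

```python
from collections import deque
import time

WINDOW_SECONDS = 24 * 60 * 60   # the 24-hour window from the new terms
POST_LIMIT = 1_000_000          # posts per window before penalties apply

class ScrapeMonitor:
    """Sliding-window counter for one client's post fetches."""

    def __init__(self):
        self.batches = deque()  # (timestamp, number_of_posts) pairs
        self.total = 0

    def record(self, n_posts: int, now: float | None = None) -> bool:
        """Log a fetch batch; return True if the client is over the limit."""
        now = time.time() if now is None else now
        # Expire batches that have aged out of the 24-hour window.
        while self.batches and now - self.batches[0][0] > WINDOW_SECONDS:
            _, expired = self.batches.popleft()
            self.total -= expired
        self.batches.append((now, n_posts))
        self.total += n_posts
        return self.total > POST_LIMIT

monitor = ScrapeMonitor()
print(monitor.record(900_000))   # False: still under the limit
print(monitor.record(200_000))   # True: 1.1M posts within 24 hours
```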

Even as X broadens sanctioned data sharing, owner Elon Musk has been tightening restrictions on unauthorized extraction to retain control over the platform’s vast content reserves, positioning that data as an alternative revenue source amid declining ad revenue.

YouTube’s Approach to AI-Generated Content Transparency

In a similar push for responsible AI use, YouTube introduced an AI-content labeling tool in March 2024, requiring creators to mark videos that use realistic AI tools in ways that could confuse viewers. Managed through the Creator Studio’s “Altered content” option, the tool is part of Google’s plan to maintain content authenticity on its platform.

The company clarified that not all AI-generated content falls under the rule: only videos with realistic AI elements must be flagged, while clearly fictional content and minor AI enhancements remain exempt. Creators risk penalties such as demonetization for failing to label their content accurately, and YouTube’s moderation team can add labels proactively to prevent misinformation.

Does Facebook Put Profit Above Truth?

As the U.S. election season intensifies, social media platforms like Facebook and X (formerly Twitter) are increasingly scrutinized for their handling of extremist content and misinformation. As I reported today, Facebook’s automated system has inadvertently generated pages associated with militia groups, raising concerns about the platform’s role in facilitating paramilitary organizing.

A report by WIRED suggests that despite Meta’s 2020 ban on paramilitary groups, Facebook’s automated page generation system continues to create spaces for extremist militia groups like the American Patriots Three Percent (AP3).

Data from the Tech Transparency Project (TTP) reveals that hundreds of pages linked to paramilitary groups have emerged on Facebook since early 2021, many labeled as “unofficial” to distance the platform from direct affiliation. However, TTP director Katie Paul emphasizes that even these “unofficial” labels do not prevent groups from utilizing these pages to recruit followers, share training plans, and coordinate activities like ballot box monitoring.

Last Updated on November 7, 2024 2:16 pm CET

Luke Jones
Luke has been writing about Microsoft and the wider tech industry for over 10 years. With a degree in creative and professional writing, Luke looks for the interesting spin when covering AI, Windows, Xbox, and more.
