Meta, the parent company of Facebook, Instagram, and Threads, is making a sweeping change to its content moderation policies by eliminating its third-party fact-checking program in favor of a user-driven system called Community Notes.
The transition reflects Meta’s renewed focus on free expression while addressing persistent criticism of bias and overreach in its moderation practices.
“We want to fix that and return to that fundamental commitment to free expression,” writes Joel Kaplan, Meta’s global policy chief, in an official statement.
Kaplan described previous moderation systems as overly restrictive, adding, “As well-intentioned as many of these efforts have been, they have expanded over time to the point where we are making too many mistakes, frustrating our users and too often getting in the way of the free expression we set out to enable.”
The changes will first roll out in the United States over the coming months, with plans for gradual expansion to other regions.
A New Approach: What Are Community Notes?
Community Notes is a moderation system modeled on the one used by X (formerly Twitter). Rather than imposing intrusive warnings, it lets contributors annotate posts with additional context, and a note is published only when contributors with diverse perspectives agree that it is helpful. This consensus mechanism is meant to minimize bias and make moderation more transparent.
“We’ve seen this approach work on X – where they empower their community to decide when posts are potentially misleading and need more context, and people across a diverse range of perspectives decide what sort of context is helpful for other users to see,” Kaplan writes in the same statement.
Under the new system, Meta will not write Community Notes itself or decide which ones appear. Instead, contributors will voluntarily craft annotations that meet agreed-upon criteria. The system also eliminates the full-screen warnings that previously blocked users from viewing flagged content, replacing them with smaller labels that users can click for more context.
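Meta has not published the scoring algorithm it will use, but the “bridging” idea behind X’s Community Notes can be illustrated with a minimal sketch: a note surfaces only when raters from otherwise-disagreeing groups independently find it helpful. Everything below (the cluster labels, thresholds, and function) is a hypothetical stand-in, not Meta’s or X’s actual implementation; X’s production system uses matrix factorization rather than fixed clusters.

```python
# Minimal, hypothetical sketch of "bridging" consensus: a note is shown only
# when raters from *different* viewpoint clusters independently rate it helpful.
# Cluster labels and thresholds are illustrative assumptions, not the real system.
from collections import defaultdict

def note_reaches_consensus(ratings, min_per_cluster=5, min_helpful_ratio=0.7):
    """ratings: list of (rater_cluster, is_helpful) pairs for one note."""
    by_cluster = defaultdict(list)
    for cluster, is_helpful in ratings:
        by_cluster[cluster].append(is_helpful)

    # At least two distinct viewpoint clusters must weigh in at all.
    if len(by_cluster) < 2:
        return False

    # Every participating cluster must independently find the note helpful.
    for votes in by_cluster.values():
        if len(votes) < min_per_cluster:
            return False
        if sum(votes) / len(votes) < min_helpful_ratio:
            return False
    return True

# Both clusters rate the note mostly helpful, so it would be displayed.
ratings = [("A", True)] * 6 + [("A", False)] + [("B", True)] * 5 + [("B", False)] * 2
print(note_reaches_consensus(ratings))  # True
```

The key design choice, as Kaplan’s statement emphasizes, is that agreement within a single like-minded group is not enough; a note earns display only by persuading raters across the divide.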
Looser Restrictions and Focused Enforcement
In tandem with the adoption of Community Notes, Meta is loosening restrictions on politically sensitive topics such as immigration and gender identity. These subjects, which are frequently debated in the public sphere, will no longer be as tightly moderated. Meta will instead prioritize enforcing rules on severe violations, such as terrorism, child exploitation, and fraud.
“We’re getting rid of a number of restrictions on topics like immigration, gender identity and gender that are the subject of frequent political discourse and debate,” Kaplan writes. “It’s not right that things can be said on TV or the floor of Congress but not on our platforms.”
Meta is also scaling back its reliance on automated moderation systems, which have been criticized for misclassifying content. Less severe violations will now be acted upon only if flagged by users, while algorithms will focus exclusively on high-severity issues.
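As a rough illustration of that tiered split, the routing described above might look something like the sketch below. The severity categories, field names, and queue labels are assumptions made for illustration, not Meta’s internal API.

```python
# Hypothetical sketch of tiered enforcement routing. The tiers, field names,
# and queue labels are illustrative assumptions, not Meta's internal system.
HIGH_SEVERITY = {"terrorism", "child_exploitation", "fraud"}

def route_post(post, user_reports=0):
    """Decide how a post enters moderation under the new tiered policy."""
    if post["predicted_violation"] in HIGH_SEVERITY:
        # High-severity issues are still scanned and actioned proactively.
        return "automated_enforcement_queue"
    if user_reports > 0:
        # Lower-severity content is reviewed only after users flag it.
        return "user_report_review_queue"
    # Otherwise no proactive action is taken.
    return "no_action"

print(route_post({"predicted_violation": "spam"}))                  # no_action
print(route_post({"predicted_violation": "spam"}, user_reports=3))  # user_report_review_queue
print(route_post({"predicted_violation": "fraud"}))                 # automated_enforcement_queue
```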
Political and Strategic Implications
The timing of Meta’s moderation overhaul coincides with a political shift in the United States, prompting speculation about the company’s motivations. Critics have noted that Meta has faced longstanding accusations of political bias, particularly from conservative groups who felt targeted by previous content moderation policies.
Kaplan acknowledged the backlash, stating that overly broad enforcement had led to errors. “We think one to two out of every 10 of these actions may have been mistakes (i.e., the content may not have actually violated our policies),” he explained, adding that the new approach aims to rebuild trust among users across the political spectrum.
Meta’s original fact-checking program, launched in 2016, was designed to combat election misinformation. While the initiative aimed to provide users with accurate information, it drew criticism for its perceived interference in political debates and its reliance on third-party organizations with their own biases.
The Relocation of Trust and Safety Teams
As part of its broader restructuring, Meta is relocating its trust and safety teams from California to Texas and other U.S. locations. The move is intended to address concerns about regional bias in moderation decisions, while also signaling a shift in Meta’s operational strategy. Critics have questioned whether this decision also reflects political positioning, given the incoming U.S. administration.
Meta explained that relocating trust and safety teams would improve decision-making and responsiveness to community needs. The company also plans to expand the use of advanced language models to assist with moderation and reduce errors in enforcement decisions.
Reintroducing Civic Content with a Personalized Focus
Meta is rethinking its approach to civic and political content on its platforms. After years of reducing such content in users’ feeds, the company plans to reintroduce it with a personalized strategy: how much political content each user sees will be informed by engagement signals such as likes, comments, and viewing habits, and users will get expanded controls to adjust that amount.
Meta said it has already run tests on personalized civic content and plans to expand the options that let users prioritize it. The initiative aims to balance user preferences with Meta’s goal of fostering robust civic discourse.
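Meta has not disclosed how its ranking weighs these inputs, but one plausible shape for blending engagement signals with an explicit user control is sketched below; the signal weights and the preference multiplier are purely illustrative assumptions.

```python
# Hypothetical sketch of engagement-based personalization for civic content.
# Signal weights and the user-preference multiplier are illustrative guesses;
# Meta has not published how its ranking actually combines these inputs.

def civic_content_score(likes, comments, watch_seconds, user_preference=1.0):
    """Blend engagement signals into one score, scaled by an explicit
    user control (e.g. 0.5 = show less political content, 2.0 = show more)."""
    engagement = 1.0 * likes + 2.0 * comments + 0.1 * watch_seconds
    return engagement * user_preference

# A user who dials political content up sees it ranked proportionally higher.
print(civic_content_score(likes=10, comments=3, watch_seconds=45))                       # 20.5
print(civic_content_score(likes=10, comments=3, watch_seconds=45, user_preference=2.0))  # 41.0
```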
Automation, Community Involvement, and Future Outlook
While automation remains an essential tool in Meta’s moderation arsenal, its role is being redefined. Large language models (LLMs) will now act as a second opinion in enforcement decisions, focusing on severe violations. Meanwhile, the shift to Community Notes represents a broader effort to decentralize content oversight and involve users directly in the moderation process.
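Kaplan’s statement frames the LLMs as a check before action is taken. One plausible shape for that check, with both model calls written as trivial keyword stand-ins, is a simple agreement gate: enforcement proceeds only when the first-pass classifier and the LLM concur.

```python
# Hypothetical sketch of an LLM "second opinion" gate. Both models below are
# keyword stand-ins for illustration; Meta has not published its pipeline.

def classifier_flags(post: str) -> bool:
    """Stand-in for the existing automated classifier."""
    return "scam" in post.lower()

def llm_second_opinion(post: str) -> bool:
    """Stand-in for an LLM asked to re-judge the post against written policy."""
    return "guaranteed returns" in post.lower()

def should_enforce(post: str) -> bool:
    # Requiring agreement from both systems is one way to trim the false
    # positives: the "one to two in ten" mistaken actions Kaplan cites.
    return classifier_flags(post) and llm_second_opinion(post)

print(should_enforce("Crypto scam with guaranteed returns!"))  # True
print(should_enforce("Tips for spotting an online scam"))      # False
```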
Meta CEO Mark Zuckerberg indicated that the changes are aimed at ensuring policies better align with the company’s mission of enabling free expression.