Meta’s Oversight Board delivered its first judgments on the company’s controversial January 2025 content policy changes, issuing a complex set of rulings across 11 cases appealed by users.
“They are the first to reflect on the policy and enforcement changes Meta announced on January 7, 2025,” the Board noted. While upholding Meta’s decisions to permit some contested speech regarding gender identity and apartheid symbolism, the independent body ordered the removal of anti-migrant hate speech and posts inciting violence during the 2024 UK riots.
The Board, informed by over 1,000 public comments, also expressed pointed criticism of Meta’s policy implementation, calling the January rollout “announced hastily, in a departure from regular procedure,” and demanding comprehensive human rights evaluations of the changes.
Meta, in an initial statement, said it welcomed the Board’s decisions “that leave up or restore content in the interest of promoting free expression on our platforms.” It did not address the takedown orders but committed to a formal response within 60 days.
Board Delivers Split Verdict On Content Cases
The Board’s rulings navigate the difficult terrain between online expression and real-world harm. In two high-profile cases involving videos discussing transgender people’s access to bathrooms and sports participation in the U.S., a majority of the Board sided with Meta, allowing the content to remain despite acknowledging that the posts offensively misgendered identifiable people.
The majority reasoned that the posts related to matters of public debate and did not meet the threshold for likely, imminent violence, discrimination, or bullying under Meta’s policies, though dissenting opinions were noted. In a recommendation tied to these cases, the Board advised Meta to remove the term “transgenderism” from its Hateful Conduct policy, a term advocacy groups such as GLAAD criticized as inherently derogatory when the policy changes surfaced via leaked documents in January.
Similarly, a majority let stand two posts displaying South Africa’s 1928–1994 flag, a symbol tied to the apartheid era. While accepting the flag’s painful connotations, the Board found that removing these specific posts was not the least intrusive measure available and that they should remain up under international freedom-of-expression standards, even though they technically violated Meta’s rule against “hateful ideologies.” It recommended Meta clarify this conflicting standard.
However, the Board reversed Meta’s stance on anti-migrant speech originating from Poland and Germany ahead of the June 2024 European Parliament elections. Posts using racist language and making generalizations about migrants as sexual predators were ordered removed. The Board majority cited the amplified risks of discrimination and violence in a heated political climate where migration was a key issue.
Enforcement Failures And Policy Concerns Highlighted
The Board was unified in demanding the removal of three posts from the summer 2024 UK riots that advocated violence against immigrants and Muslims. It found a clear risk of likely and imminent harm, exacerbated by Meta’s delayed activation of its Crisis Policy Protocol (only initiated August 6, after the posts were made).
As detailed by LBC and the Oversight Board, none of these posts initially received human review, remaining online after automated checks. Meta removed only one after the Board took up the cases, arguing that another (an AI-generated image of men chasing a toddler) should stay up by interpreting it narrowly as referencing a specific false rumor about the Southport attacker, rather than as generally dehumanizing content.
The Board also highlighted other enforcement lapses through summary decisions, overturning Meta’s incorrect removal of a drag artist’s video using an allowed reclaimed slur and its failure to act against dehumanizing comments targeting people with Down syndrome. These cases underscore the Board’s ongoing effort, as it states, to push Meta toward “greater transparency, consistency and fairness.”
Context: Meta’s Controversial January Overhaul
These decisions land as a direct response to Meta’s significant policy overhaul announced January 7. That overhaul included ending Meta’s reliance on third-party fact-checking partners within the United States, opting instead for a user-driven “Community Notes” system, similar to X’s model, which began testing across Facebook, Instagram, and Threads on March 18.
Community Notes allow eligible users to add context or flags to posts, aiming for consensus before a note is publicly displayed, rather than relying on external checkers or direct removal. Meta’s global policy chief, Joel Kaplan, justified the move away from fact-checkers by stating, “One to two out of every 10 of these actions may have been mistakes” under the old system.
Crucially, Meta also relaxed certain hate speech guidelines, explicitly permitting users to describe LGBTQ+ identities as a “mental illness” or “abnormality” under the guise of “political and religious discourse,” according to leaked training materials.
The changes also removed restrictions on referring to women as “household objects or property.” Additionally, Meta stated it would cease proactive automated scanning for certain unspecified “less severe policy violations,” focusing automation mainly on areas like terrorism and child exploitation.
This shift away from external fact-checking and toward Community Notes remains confined to a US trial phase. Meta’s head of global business, Nicola Mendelsohn, confirmed in January, “Nothing is changing in the rest of the world at the moment; we are still working with fact-checkers globally.” This is essential for compliance with regulations like the EU’s Digital Services Act. She added, “We’ll see how that goes as we move it out over the years.”
Political Climate And Internal Reactions
Meta’s January policy adjustments occurred amid a changing political backdrop, shortly before President Donald Trump returned to office. Trump, who has often accused platforms of anti-conservative bias, publicly praised Meta’s new direction.
This, along with Meta appointing Trump ally Dana White to its board in January, followed by former Trump administration advisor Dina Powell McCormick in April, fueled speculation about the company seeking political favor. These actions took place as Meta faced mounting legal and regulatory pressure, including the start of a major FTC antitrust trial on April 14 seeking the divestiture of Instagram and WhatsApp, and ongoing lobbying efforts against EU digital regulations.
The January changes sparked considerable internal dissent, with employee discussions described by 404 Media as “total chaos” and staff protesting the relaxed LGBTQ+ hate speech rules.
Zuckerberg, in leaked remarks from a late-January all-hands meeting, framed the changes as necessary to reduce moderation errors and restore free expression, stating, “We’re going to catch less bad stuff, but we’ll also reduce the number of innocent people’s posts and accounts that we accidentally take down,” while acknowledging the company would have to “wait and see how the new system would be implemented.”
The Oversight Board explicitly criticized the execution of these January changes, stating they were “announced hastily, in a departure from regular procedure,” and noted the absence of public information regarding any prior human rights due diligence.
The Board strongly urged Meta to conduct such assessments now, calling on the company to uphold the UN Guiding Principles on Business and Human Rights and engage with affected stakeholders.
It specifically highlighted the need to evaluate potential “uneven consequences globally, especially in countries experiencing current or recent crises, such as armed conflicts.”
Among 17 recommendations issued, the Board asked Meta to assess Community Notes’ effectiveness versus traditional fact-checking, especially where misinformation poses public safety risks, and to improve the detection of incitement conveyed through images.
It also indicated readiness to accept a Policy Advisory Opinion referral from Meta regarding fact-checking strategies outside the US. The Board pointed to its past impact, stating its decisions have previously shaped Meta policies and led to alternatives like warning screens and AI content labels.