A federal court has upheld California’s stringent content moderation law, rejecting Elon Musk’s X’s claim that the statute infringes on free speech rights. The company, formerly known as Twitter, had sought a preliminary injunction to block enforcement of AB 587, which requires social media platforms to publicly disclose their content moderation policies for harmful content.
Content Moderation Law Upheld
Passed the previous year, AB 587 targets large social media companies, requiring them to submit detailed reports on how they handle potentially harmful content, including hate speech, racism, extremism, radicalization, disinformation, harassment, and foreign political interference.
X filed a complaint in September arguing that the law’s broad and unclear definitions of hate speech and misinformation, among other categories, would force platforms to censor constitutionally protected speech. However, US District Judge William Shubb found in his ruling that the law’s reporting requirements were purely factual and did not in themselves restrict free speech.
Impact on Social Media Policies
In his written decision, Shubb emphasized that the required disclosures are uncontroversial, merely asking platforms to identify their existing moderation policies for the specified categories of content. The company, whose trust and safety team has been hit by significant workforce reductions, did not immediately comment on the ruling.
Meanwhile, in Europe, the platform faces similar scrutiny under a formal European Union investigation into whether the company violated the bloc’s Digital Services Act through its handling of content related to the terrorist attacks against Israel. The Digital Services Act is a new EU regulatory framework aimed at combating illegal content and disinformation online.
The company’s complaint had highlighted the difficulty of precisely defining contested categories such as hate speech and misinformation, arguing that AB 587 would inadvertently lead to the suppression of lawful content. Nonetheless, Judge Shubb’s ruling reiterated that the law only requires semiannual reports to the Attorney General describing existing policies and does not extend to moderating the content itself.
The court’s decision marks a significant stance on how social media companies may be required to navigate the murky waters of content moderation, and on the balance between regulatory reporting obligations and First Amendment rights.
Last Updated on November 7, 2024 11:11 pm CET