Reddit is preparing a significant overhaul of its user verification processes, a move spurred by an ethically dubious AI bot experiment that flooded one of its forums and heightened concerns about platform authenticity.
In a recent blog post, CEO Steve Huffman outlined the plan, stating, “To keep Reddit human and to meet evolving regulatory requirements, we are going to need a little more information. Specifically, we will need to know whether you are a human, and in some locations, if you are an adult.”
While acknowledging the need for these checks to combat AI manipulation and meet growing regulatory demands for age verification, Huffman sought to reassure users about the platform’s core value of anonymity: “But we never want to know your name or who you are,” he wrote. “The way we will do this is by working with various third-party services that can provide us with the essential information and nothing else.”
This policy shift directly addresses the fallout from an unauthorized experiment conducted by University of Zurich researchers, who deployed sophisticated AI bots onto the r/changemyview subreddit between late 2024 and early 2025. These bots posted over 1,700 comments, impersonated sensitive human personas such as abuse survivors, and tailored their persuasive arguments using personal attributes inferred from users’ posting histories.
The incident drew sharp condemnation from the subreddit’s moderators and from Reddit itself, with the company’s Chief Legal Officer stating, “What this University of Zurich team did is deeply wrong on both a moral and legal level. It violates academic research and human rights norms, and is prohibited by Reddit’s user agreement and rules, in addition to the subreddit rules,” and confirming that Reddit was pursuing formal legal demands against the researchers, according to 404 Media.
The verification plan underscores a critical challenge for Reddit: preserving the anonymity Huffman calls “essential to Reddit” while ensuring the platform remains a trusted space for human interaction.
“Reddit works because it’s human,” Huffman stated in his post. This tension is amplified by external pressures, including new laws in the UK and several US states requiring online age checks.
The UZH Experiment That Sparked the Change
The catalyst for this policy shift appears to be the controversial University of Zurich experiment. Between November 2024 and March 2025, researchers secretly used AI models like GPT-4o and Claude 3.5 Sonnet to post over 1,700 comments in the r/changemyview forum.
The AI employed advanced, ethically questionable techniques; according to the researchers’ draft paper, “in addition to the post’s content, LLMs were provided with personal attributes of the OP (gender, age, ethnicity, location, and political orientation), as inferred from their posting history using another LLM.” Bots even adopted false identities, including posing as abuse survivors.
The subreddit’s moderators publicly condemned the unauthorized study, stating, “People do not come here to discuss their views with AI or to be experimented upon,” and filed an ethics complaint.
While the university issued a warning to the researchers, it controversially supported the study’s publication, arguing that the project “yields important insights, and the risks (e.g. trauma etc.) are minimal. This means that suppressing publication is not proportionate to the importance of the insights the study yields,” a rationale strongly rejected by the moderators and criticized by external ethics experts such as Dr. Casey Fiesler, who called the experiment “one of the worst violations of research ethics I’ve ever seen.”
The researchers’ approach contrasts sharply with controlled research efforts, such as OpenAI’s internal tests of AI persuasion, which used anonymized Reddit data in a closed environment.
Reddit’s Strategic Shift Towards Control
The planned verification system represents another step in Reddit’s broader trend toward centralizing platform governance and integrating AI. It follows previous moves such as tightening control over subreddit privacy settings after the 2023 API protests and rolling out AI-powered rule-enforcement tools earlier this year. Reddit framed these changes as improving the user experience, saying they “make it easier for everyone to participate on Reddit,” and aiding moderators. However, they also reduce the autonomy of volunteer moderators.
These platform shifts occur against a backdrop of Reddit achieving profitability for the first time in late 2024, fueled significantly by advertising growth and lucrative data licensing deals with Google and OpenAI. CEO Huffman has been vocal about the value of Reddit’s human-generated data for training AI models, saying that “AI models need human knowledge, and Reddit’s content is full of it,” while justifying the platform’s decision to restrict access for companies that don’t pay. “We’re not letting big tech use our data for free,” Huffman made clear.
AI’s Double-Edged Sword on Reddit
Reddit is embracing AI for specific functions, positioning it as a tool to enhance the platform rather than replace human interaction. Huffman highlighted AI’s utility in moderation, safety, and translation, and in powering features like Reddit Answers, the AI search tool launched in late 2024 to make the platform’s vast knowledge base more accessible. The company has also introduced tools like Reddit Pro Trends for advertisers.
However, the potential for AI misuse remains a significant concern, as acknowledged by Huffman and amplified by the UZH incident. The challenge lies in leveraging AI’s benefits while safeguarding the platform’s authenticity. Industry leaders like Sam Altman have warned that AI could become “capable of superhuman persuasion well before it is superhuman at general intelligence”, raising ethical alarms across the tech sector.
Reddit’s move toward verification, while promising anonymity, reflects this complex balancing act in an evolving digital landscape. Huffman also sought to quell user fears about the classic old.reddit interface. In the same post, published Monday, he joked, “old.reddit is the version of Reddit that we built back in the mid-2000s. It doesn’t scale, it’s impossible to develop on, and it’s ugly af. We will be shutting it down at the end of the month,” before walking it back: “Just kidding. I don’t know why I say stuff like this. We’ll figure out how to work around it and keep it online as long as people are using it.”
Navigating Anonymity in the Age of AI
Implementing verification without compromising anonymity presents a complex technical and ethical challenge. Huffman’s post suggests relying on third-party services, potentially identity platforms like Persona or Stripe Identity, as noted by Dataconomy, which often require government-issued IDs. Reddit already uses Persona for Contributor Program verification.
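Neither Huffman’s post nor Reddit has detailed how the data flow would work, but the description of a verifier that passes along “the essential information and nothing else” maps onto a familiar pattern: the provider performs the ID check, and the platform receives only yes/no attributes. The TypeScript sketch below is purely illustrative; the payload shape, field names, and helper function are assumptions made for the sake of the example, not the documented API of Persona, Stripe Identity, or Reddit.

```typescript
// Illustrative sketch only: the endpoint payload, field names, and types below
// are assumptions, not the actual API of Persona, Stripe Identity, or Reddit.

// The only attributes the platform says it needs.
interface VerificationResult {
  isHuman: boolean;
  isAdult: boolean; // relevant only in jurisdictions with age-check laws
}

// Hypothetical callback a third-party verifier might send after checking an ID.
interface ProviderCallback {
  sessionId: string; // opaque token tied to the verification session
  checksPassed: {
    liveness: boolean;   // "is this a real person?"
    ageOver18: boolean;  // "is this person an adult?"
  };
  // Fields like name or document number would exist on the provider's side,
  // but in this model they are never forwarded to the platform.
}

// Reduce the provider's response to the minimal result before persisting it.
function extractMinimalResult(cb: ProviderCallback): VerificationResult {
  return {
    isHuman: cb.checksPassed.liveness,
    isAdult: cb.checksPassed.ageOver18,
  };
}

// Example: the platform stores only a flag pair keyed by an opaque session
// token, rather than any real-world identity.
const example: ProviderCallback = {
  sessionId: "sess_abc123",
  checksPassed: { liveness: true, ageOver18: true },
};
console.log(extractMinimalResult(example)); // { isHuman: true, isAdult: true }
```

The point of this pattern, under the stated assumptions, is that any identifying document stays with the verifier; the platform would retain only an opaque session token and the two flags, which is consistent with Huffman’s framing but says nothing about how Reddit will actually implement it.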
However, privacy concerns persist, particularly regarding the potential for sensitive, anonymously shared information to be linked back to real identities if demanded by authorities.
Reddit, Huffman stated, will remain “extremely protective of your personal information, and will continue to push back against excessive or unreasonable demands from public or private authorities,” referencing the company’s Transparency Report. The platform aims to use AI constructively for moderation and features like Reddit Answers, while deploying new verification measures to fend off the AI-driven manipulation that threatens its core identity as a space for authentic human exchange.