Academics from the University of Zurich (UZH) have confirmed they deployed AI-driven accounts to secretly post comments within Reddit’s r/changemyview (CMV) community.
The unauthorized experiment aimed to measure the persuasive power of large language models in a live online forum. Its methods, however, which included using AI to infer personal details about users for targeted arguments and impersonating sensitive personas, have drawn strong condemnation from the forum's volunteer moderators and ignited a debate over the ethics of experimenting on unsuspecting online participants.
Experiment Details and Ethical Breaches Revealed
The CMV moderation team publicly disclosed the experiment in late April. They revealed they were notified by the UZH researchers only in March, after the study’s data collection phase (spanning November 2024 to March 2025) was complete.
The researchers’ message, shared by the mods, acknowledged their actions violated the subreddit’s Rule 5 against bots and undisclosed AI content. Despite this, the researchers offered a contentious justification: “We believe, however, that given the high societal importance of this topic, it was crucial to conduct a study of this kind, even if it meant disobeying the rules.”
The researchers’ draft paper, “Can AI Change Your View? Evidence from a Large-Scale Online Field Experiment,” [PDF] details the methodology. The AI accounts used models including GPT-4o, Claude 3.5 Sonnet, and Llama 3.1 405B. Some replies were “Generic,” while others involved “Personalization,” where, according to the draft, “in addition to the post’s content, LLMs were provided with personal attributes of the OP (gender, age, ethnicity, location, and political orientation), as inferred from their posting history using another LLM.”
A third “Community Aligned” approach used a fine-tuned GPT-4o model. The researchers claimed significant success: the AI achieved persuasion rates three to six times the human baseline, and its accounts accumulated over 10,000 comment karma.
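For illustration, the two-stage “Personalization” setup the draft describes could look roughly like the following sketch. It uses the OpenAI Python SDK with GPT-4o, one of the models the paper names; the prompts, function names, and attribute format are hypothetical stand-ins, not taken from the study itself.

```python
# Illustrative sketch (not the researchers' code) of the two-stage pipeline
# the draft describes: one LLM call infers attributes from an OP's posting
# history, and a second call conditions a persuasive reply on them.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def infer_attributes(posting_history: str) -> str:
    """Stage 1: infer the attributes the draft lists (gender, age,
    ethnicity, location, political orientation) from the OP's posts."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": (
                "From the posts below, infer the author's gender, age, "
                "ethnicity, location, and political orientation. "
                "Answer as a short comma-separated list."
            )},
            {"role": "user", "content": posting_history},
        ],
    )
    return response.choices[0].message.content


def personalized_reply(post: str, attributes: str) -> str:
    """Stage 2: generate a counter-argument tailored to both the post's
    content and the attributes inferred in stage 1."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": (
                "Write a persuasive counter-argument to the post below, "
                f"tailored to an author with these attributes: {attributes}"
            )},
            {"role": "user", "content": post},
        ],
    )
    return response.choices[0].message.content
```

The “Generic” condition would correspond to calling only the second stage without the inferred attributes; it is the attribute-inference step, applied to real users without consent, that drew the sharpest criticism.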
Sensitive Impersonations Spark Outrage
The ethical implications deepened when the CMV moderators reviewed the AI accounts (such as u/markusruscht and u/ceasarJst) provided by the researchers.
They discovered disturbing instances where bots assumed false, sensitive identities to engage users, including posing as “a victim of rape,” “a trauma counselor specializing in abuse,” and “a black man opposed to Black Lives Matter.” The moderators emphasized the violation of user trust, stating, “People do not come here to discuss their views with AI or to be experimented upon.”
University Response and Ongoing Concerns
Following an ethics complaint from the CMV team, UZH’s ethics commission issued a formal warning to the Principal Investigator and indicated plans for stricter future scrutiny, including pre-coordination with online communities.
However, the university supported the study’s publication, stating the project “yields important insights, and the risks (e.g. trauma etc.) are minimal. This means that suppressing publication is not proportionate to the importance of the insights the study yields.”
This rationale was strongly rejected by the moderators, who fear the negative precedent it sets: “Allowing publication would dramatically encourage further intrusion by researchers, contributing to increased community vulnerability to future non-consensual human subjects experimentation.” Dr. Casey Fiesler, a University of Colorado Boulder professor, publicly echoed these concerns, calling the study “one of the worst violations of research ethics I’ve ever seen” and disputing the claim of minimal risk.
Contrasting Research Methods and AI Ethics
The Zurich study’s methodology stands in contrast to other investigations into AI persuasion. OpenAI, for example, conducted research using CMV data obtained via a licensing deal, but ran its tests in a controlled environment without deceiving active subreddit users, an approach that avoids the ethical pitfalls the UZH researchers encountered.
While the Zurich team claims their AI surpassed human persuasion benchmarks, the controversy centers on whether those results ethically justify the means used to obtain them. The CMV moderators have left the AI account comments visible (though locked) for transparency, allowing users to see the interactions, and provided contact information for the UZH Ombudsperson for those wishing to raise concerns.