Persuasive AI and the hazards of losing sight of humanity
Erin Kernohan-Berning
5/14/2025 · 4 min read
At the end of April, members of the subreddit r/changemyview learned that they had, without their knowledge or consent, been experimented on for months by researchers from the University of Zurich. The experiment was designed to determine whether AI chatbots could be persuasive enough to change a human's opinion.
Reddit is a platform made up of numerous message boards, called subreddits, each dedicated to a particular subject. Each subreddit defines its own rules and community norms, which are enforced by volunteer moderators. r/changemyview is an ethics and philosophy subreddit where users post a position they hold on a topic and invite other community members to comment and try to change their view. Recent popular topics have included current events, politics, and religion. Members whose comments succeed in changing the original poster’s viewpoint are awarded “deltas,” which are displayed with their username when they post.
The experiment carried out by the University of Zurich involved finding posts on r/changemyview, using AI to scrape the original poster’s user information and build a profile of them, and then using AI to generate comments tailored to that user to persuade them to change their view on some fairly sensitive and contentious topics. Over 1,700 such comments were posted by AI chatbot accounts, including one impersonating a trauma counsellor, and many successfully changed the mind of the human poster.
Many subreddits, including r/changemyview, have rules against the use of AI, so the Zurich researchers broke those rules when they carried out their experiment. They also violated well-established ethical standards by not obtaining the informed consent of their human research subjects. Human subjects in research studies have the right to understand the purpose of the research, to know what harm, if any, might come from participating, and to withdraw their consent and stop participating at any time. None of this happened for the very human members of r/changemyview.
The researchers informed the subreddit of their experiment as part of a disclosure requirement from the University of Zurich’s Institutional Review Board. Reaction from members has been understandably critical. Since the disclosure, Reddit has considered legal action against the University of Zurich, the researchers have been made to apologize, and the research will no longer be published.
According to the r/changemyview moderation team, the researchers argued that their deception was justified given the gap in knowledge about how AI can be used to persuade people. But was deceiving a community of over 3.8 million online users necessary to fill that gap? It turns out OpenAI explored the persuasive capabilities of AI using the same subreddit, but designed its experiment quite differently. Rather than interact with r/changemyview members directly, OpenAI downloaded a copy of the subreddit, had its AI write replies, and then had those replies reviewed by testers. While OpenAI’s experiment was used for its own internal benchmarking and not intended for wider research purposes, it does show how the Zurich experiment could have been designed without deceiving human subjects who hadn’t given informed consent [1].
Using deception in research is not new, and in certain situations may be required to study a particular phenomenon. However, there are mechanisms that can help ensure that the humans involved in that research are treated ethically, including being informed that deception is part of the study before consenting, being debriefed after the study to explain that deception and its purpose, and being given the option to have their data withdrawn from that study.
Researchers are responsible for the wellbeing of their human subjects and have an obligation to minimize harm. In these early days, it is difficult to gauge the magnitude of the harm the Zurich experiment has inflicted. There is certainly emotional harm to an online community whose members trusted one another to act in good faith. Some users said they no longer wanted to participate in r/changemyview because they could no longer be sure they were interacting with actual humans. This shows how inappropriate use of AI can undermine a community's shared foundation of trust, something the people of r/changemyview have been left to grapple with.
As a society, we urgently need to understand how AI can be used to harm us. There is already extensive research on how disinformation threatens everything from public health to our democracies, and generative AI can accelerate all of it. But it is equally important that, in pursuing that understanding, we don't inflict the very same harm. The Zurich experiment should be seen as a cautionary tale: the researchers clearly lost sight of the humans behind the data they were collecting, and they have likely hurt people in the process.
Learn more
Reddit users were subjected to AI-powered experiment without consent. 2025. Chris Stokel-Walker. (New Scientist) Last accessed 2025/05/14.
'Unethical' AI research on Reddit under fire. 2025. Cathleen O'Grady. (Science) Last accessed 2025/05/14.
META | CMV AI Experiment Update - Apology Received from Researchers. 2025. Automoderator. (r/changemyview) Last accessed 2025/05/14.
Deception of Subjects in Neuroscience: An Ethical Analysis. 2008. Franklin G. Miller and Ted J. Kaptchuk. (JNeurosci) Last accessed 2025/05/14.
OpenAI used this subreddit to test AI persuasion. 2025. Maxwell Zeff. (TechCrunch) Last accessed 2025/05/14.
Correction log
[1] Just as a point of clarification: By no means do I think OpenAI is some bastion of ethical practices. Rather, in this very specific instance, they had an experimental design that didn't necessitate actively deceiving subjects.
Another example of lack of consent in an experiment using social media comes from 2014, when Facebook manipulated 700,000 users' news feeds to see what effect it would have on their emotions.
Facebook emotion study breached ethical guidelines, researchers say. 2014. Charles Arthur. (The Guardian) Last accessed 2025/05/14.
Editorial Expression of Concern: Experimental evidence of massive-scale emotional contagion through social networks. 2014. Inder M. Verma. (PNAS) Last accessed 2025/05/14.