AI bots outsmart Reddit debaters in secret experiment—Internet reacts with outrage

Imagine winning a heated debate on Reddit’s r/ChangeMyView, stacking up karma points, only to learn your opponent wasn’t a person at all. It was an AI bot, posing as a trauma counselor or political activist, engineered to persuade and blend in. That’s exactly what happened during a covert experiment run by researchers at the University of Zurich—one that’s now sparking outrage and raising serious ethical questions.
University of Zurich study with AI bots angers Reddit users
According to multiple reports, over a span of four months, researchers deployed 13 AI-powered accounts to infiltrate r/ChangeMyView, a subreddit known for structured debate and opinion shifts. These bots weren’t just testing the waters—they posted 1,783 comments and won over 100 “delta” medals, Reddit’s badge for successful persuasion.
The kicker? None of the users knew they were arguing with machines.
The bots were built to mimic Reddit’s tone—snark included. They scraped users’ comment histories, picking up on political leanings, age, gender, and other signals to shape replies that felt eerily personal. The AI behind the project? A mix of leading models like GPT-4o and Claude 3.5.
The Lie in the Prompt
One of the more revealing details in the research design was how the team got past the ethical filters built into the language models. To avoid getting blocked from generating bot replies for an unauthorized experiment, the researchers lied to the AI.
In the prompt, they told the model that Reddit users had given their informed consent when they hadn’t.
AI Bots Tricked Reddit Users Into Losing Arguments—Was It Genius or Evil?
Some bots took on roles as sensitive as trauma survivors or racial minorities to appear more credible. One even posed as a “Black man opposed to Black Lives Matter.” The deception was deep, and no one caught on until the whole thing unraveled in April.
Reddit moderators eventually flagged the activity, calling it “psychological manipulation” and banning the accounts. Logan MacGregor, a moderator on the subreddit, told The Washington Post he joined r/ChangeMyView to engage with real people, not bots running social experiments. Reddit’s Chief Legal Officer, Ben Lee, didn’t mince words either, calling the study “deeply wrong on both a moral and legal level” and signaling legal action against the university.
The University of Zurich initially defended the study, saying it had been approved by its ethics board. But there’s a twist: the researchers reportedly bypassed AI safety filters by lying to the language models, claiming users had given consent. As YouTube creator CodeReport put it, “Pretty shady, but in the name of science.”
The backlash didn’t stop at Reddit. AI researchers and ethicists criticized the team for using real people as test subjects without permission, especially when other labs have achieved similar results using simulated environments. The university has since issued a formal warning to the lead researcher and pledged tighter reviews moving forward. But the reputational damage is already baked in.
The research team later addressed the controversy in a Reddit thread, admitting they didn’t write the comments themselves but manually reviewed every one before posting to make sure nothing harmful slipped through.
“We are aware that our experiment violated the community rules for AI-generated comments.”
— LLMResearchTeam said on Reddit
What Made the Researchers Proceed—Despite the Rules?
According to the team, the topic was too important to ignore. They argued that studying AI’s influence on public discourse required real-world conditions—even if it meant breaking subreddit rules. The study, they noted, had received approval from the University of Zurich’s Institutional Review Board.
They claimed every decision was grounded in three guiding principles: ethical research conduct, user safety, and transparency.
AI Bots Are 6 Times More Persuasive Than Humans
What makes this incident more than just a Reddit scandal is what it signals: AI isn’t just capable of generating content or answering questions—it’s now convincingly persuasive in public forums. These bots were three to six times better at changing minds than real people, fueling concerns that future “AI-powered botnets” could quietly manipulate entire communities from the inside.
According to CodeReport (video below), the bots flipped opinions in nearly 20% of cases. Humans? Just 2%.
The tech behind this could be deployed across any online community, and that’s where the bigger risk lies. Whether it’s social media, forums, or political threads, the ability to steer conversations—covertly—at scale opens the door to manipulation far beyond Reddit. From phishing scams to influence campaigns, the line between conversation and con is getting thinner.
So, what do you think? Outraged that you could’ve been bested by a bot, or secretly impressed by the tech? Either way, next time you’re in an online debate, you might want to ask yourself: Is this person real, or a Zurich bot studying how easy you are to convince?
Can AI Change Your View? Evidence from a Large-Scale
Below is a PDF copy of the research.
can_ai_change_your_view🚀 Want Your Story Featured?
Get in front of thousands of founders, investors, PE firms, tech executives, decision makers, and tech readers by submitting your story to TechStartups.com.
Get Featured