
Secret Reddit Experiment With AI Personas Sparks Ethics Scandal in Science

In brief

  • Researchers from the University of Zurich used AI bots posing as people on Reddit, including invented personas such as trauma counselors and political activists.
  • The bots posted more than 1,700 comments in the r/ChangeMyView subreddit and in many cases successfully changed users' opinions.
  • Reddit moderators and the company's legal counsel condemned the experiment as unethical, citing deception, privacy violations, and a lack of consent.

Researchers from the University of Zurich have triggered outrage after secretly deploying AI bots on Reddit that posed as rape survivors, trauma counselors, and even a “Black man opposed to Black Lives Matter,” all to see whether they could change people's minds on controversial topics.

Spoiler alert: They could.

The covert experiment targeted the r/ChangeMyView (CMV) subreddit, where 3.8 million people (or so everyone thought) gather to debate ideas and potentially have their opinions changed through reasoned argument.

Between November 2024 and March 2025, the AI bots replied to more than 1,000 posts, with dramatic results.

“Over the past few months, we posted AI-written comments under posts published on CMV, measuring the number of deltas these comments received,” the research team revealed this weekend. “In total, we posted 1,783 comments across nearly four months and received 137 deltas.” A “delta” in the subreddit represents a person acknowledging that a comment has changed their opinion.

When Decrypt reached out to the r/ChangeMyView moderators for comment, they emphasized that their subreddit “has a long history of partnering with researchers” and is usually “very research-friendly.” However, the mod team draws a clear line at deception.

If the subreddit's goal is to change views through reasoned argument, should it matter whether a machine can sometimes make better arguments than a person?

We put that question to the moderation team, and the answer was clear: the problem is not that AI was used to persuade people, but that people were deceived in order to carry out the experiment.

“Computers can play chess better than humans, and yet there are still chess enthusiasts who play in tournaments against other people. CMV is like [chess], but for conversation,” said moderator Apprehensive_Song490. “While computer science undoubtedly offers society certain advantages, it is important to preserve human spaces.”

When asked whether it matters that a machine sometimes produces better arguments than a human, the moderator emphasized that the CMV subreddit distinguishes between “meaningful” and “genuine.” “By definition, AI-generated content is not meaningful for the purposes of the CMV sub,” said Apprehensive_Song490.

The researchers only approached the forum's moderators after completing their data collection. The moderators were, unsurprisingly, angry.

“We think this was wrong. We do not believe that ‘it has not been done before’ is an excuse to carry out an experiment like this,” they wrote.

“If OpenAI can create a more ethical research design, these researchers should be expected to do the same,” the moderators said in a lengthy post. “The psychological manipulation risks posed by LLMs are an extensively studied topic. It is not necessary to experiment on non-consenting human subjects.”

Reddit's chief legal officer, Ben Lee, did not mince words: “What this University of Zurich team did is deeply wrong on both a moral and legal level,” he wrote in a reply to the CMV post. “It violates academic research and human rights norms, and is prohibited by Reddit's user agreement and rules, in addition to the subreddit's rules.”

Lee did not elaborate on why he considers such research a human rights violation.

Deception and persuasion

According to Reddit, the bots' deception went beyond simply interacting with users.

The researchers used a separate AI to analyze targeted users' posting histories, inferring personal details such as age, gender, ethnicity, location, and political beliefs in order to craft persuasive replies, much the way social media companies do.

The idea was to compare three categories of replies: generic replies, community-aligned replies from models fine-tuned on previously successful persuasive comments, and personalized replies tailored to each user's publicly available information.
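To make that three-condition design concrete, here is a minimal, hypothetical sketch of how such a comparison might be wired together. None of it comes from the researchers' actual code; the function names, prompt wording, and the stubbed `infer_profile` helper are all invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class UserProfile:
    age_range: str
    gender: str
    location: str
    politics: str

def infer_profile(post_history: list[str]) -> UserProfile:
    """Stand-in for the separate profiling AI the article describes.
    A real pipeline would feed a user's public posts to an LLM and parse
    its guesses about demographics; this stub just returns placeholders."""
    return UserProfile("25-34", "unknown", "unknown", "unknown")

def build_prompt(condition: str, op_text: str, post_history: list[str]) -> str:
    base = f"Write a reply that tries to change the view argued in this post:\n{op_text}"
    if condition == "generic":
        # Condition 1: plain persuasion with no extra context.
        return base
    if condition == "community_aligned":
        # Condition 2: the study reportedly fine-tuned a model on previously
        # successful (delta-winning) CMV comments; a prompt-only stand-in
        # is used here to keep the sketch self-contained.
        return base + "\nMatch the tone and structure of highly persuasive CMV comments."
    if condition == "personalized":
        # Condition 3: tailor the argument to the target's inferred profile.
        p = infer_profile(post_history)
        return (base + f"\nTailor the argument to a reader aged {p.age_range}, "
                f"gender {p.gender}, located in {p.location}, "
                f"with {p.politics} political leanings.")
    raise ValueError(f"unknown condition: {condition}")
```

The prompt-only “community_aligned” branch is only the simplest way to show the contrast between conditions; in the study itself that arm relied on fine-tuning rather than prompting.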

An analysis of the bots' posting patterns, based on a text file shared by the moderators, revealed several telltale signatures of AI-generated content.

The same account would claim wildly different identities: a public defender, a software developer, a Palestinian activist, a British citizen.

Posts often followed identical rhetorical structures, opening with a soft concession (“I get where you're coming from”) before pivoting to a three-part rebuttal introduced with “let me break this down.”

The bots also fabricated authority, claiming job titles that perfectly matched whatever topic they were arguing. When debating immigration, one bot claimed: “I worked in construction, and let me tell you, the construction costs without those workers just don't add up.” These posts were seasoned with unsourced statistics that sounded precise but came with no citations or links: Manipulation 102.

Prompt engineers and AI enthusiasts would easily identify the LLMs behind the accounts.

Many posts also contained the telltale “this isn't just a small thing, it's about something bigger” construction that makes some AI models easy to identify.
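As a purely illustrative aside, signature-based screening of this kind can be approximated with a few lines of code. The phrase list, scoring, and threshold below are invented for demonstration and are far cruder than whatever the moderators actually did.

```python
import re

# Hypothetical heuristics inspired by the signatures described above;
# the phrase list and scoring are invented, not the moderators' method.
AI_TELLS = [
    r"\bi get where you'?re coming from\b",   # soft-concession opener
    r"\blet me break (this|it) down\b",        # rebuttal lead-in
    r"\bit'?s not just .{1,60}, it'?s\b",      # "not just X, it's Y" construction
    r"\b\d{1,2}(\.\d)?% of\b",                 # precise-sounding, uncited statistic
]

def tell_count(comment: str) -> int:
    """Count how many known AI 'tells' appear in a comment."""
    text = comment.lower()
    return sum(bool(re.search(pattern, text)) for pattern in AI_TELLS)

def looks_ai_generated(comment: str, threshold: int = 2) -> bool:
    """Flag a comment when multiple tells co-occur. A toy filter: real
    detection would also compare the identities claimed across an
    account's whole posting history, as the moderators' analysis did."""
    return tell_count(comment) >= threshold

sample = ("I get where you're coming from, but let me break this down: "
          "it's not just a cost issue, it's a labor market issue. "
          "73% of contractors report shortages.")
print(looks_ai_generated(sample))  # True
```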

Ethics in AI

The research has sparked significant debate, especially now that AI is ever more intertwined with our daily lives.

“This is one of the worst violations of research ethics I've ever seen,” Casey Fiesler, an information scientist at the University of Colorado, wrote on Bluesky. “I can't say I know anything about ethics review in Switzerland, but in the United States this would have required approval for deception, which is very difficult to get,” she elaborated in a thread.

The University of Zurich's ethics committee for the Faculty of Arts and Social Sciences had advised the researchers that the study was “exceptionally challenging” and recommended that they better justify their approach, inform participants, and fully comply with the platform's rules. Those recommendations were not legally binding, however, and the researchers proceeded anyway.

Not everyone, however, sees the experiment as an obvious ethical violation.

Ethereum co-founder Vitalik Buterin weighed in: “I get the original situations that motivated the taboo we have today, but if you analyze the situation from today's context, do I prefer being secretly manipulated in random directions over being manipulated to buy a product or to change my political views?”

Some Reddit users shared that perspective. “I agree this was a shitty thing to do, but I feel like reporting and revealing it is a powerful and important reminder of what AI is almost certainly being used for as we speak,” wrote user trilobyte141. “If this happened at a university bound by a set of guidelines, you can bet it is already widespread among governments and special interest groups.”

Despite the controversy, the researchers defended their methods.

“Although all comments were machine-generated, each one was manually reviewed by a researcher before posting to ensure it met CMV's standards for respectful, constructive dialogue and to minimize potential harm,” they said.

Following the controversy, the researchers have decided not to publish their results. The University of Zurich says it is now investigating the incident and will “critically review the relevant assessment processes.”

What next?

If these bots could camouflage themselves so successfully in emotionally charged debates, how many other forums may already be hosting similar undisclosed AI participation?

And if AI bots can gently nudge people toward more tolerant or empathetic views, is that manipulation justified? Or is even well-intentioned manipulation a violation of human dignity?

We have no answers to these questions, but our good old AI chatbot has something to say.

“Ethical engagement requires transparency and consent, which suggests that persuasion, no matter how well-intentioned, must respect people's right to self-determination and informed choice rather than rely on hidden influence,” said GPT-4.5.

Edited by Sebastian Sinclair
