“Everybody knows by now that participating in a scientific study is providing a service, which means you have to be informed about potential risks, even if those risks are low”, says Samia Hurst. She’s a bioethicist and a member of the Swiss National Advisory Commission on Biomedical Ethics. | Photo: Salvatore Di Nolfi / Keystone

Researchers at the University of Zurich ran an experiment on the platform Reddit between November 2024 and March 2025 using AI. On the subreddit ‘Change My View’, users post an opinion and invite others to challenge it and change their minds – and the researchers found that their AI was three to six times more successful at this than human commenters. So far, so startling. But neither Reddit, nor the moderators of the subreddit, nor the participants themselves were informed about the study. The case caused outrage. Samia Hurst, a bioethicist from the University of Geneva, explains why.

Samia Hurst, can you understand the outrage?

Yes. People were not asked for their consent. Everybody knows by now that participating in a scientific study is providing a service, which means you have to be informed about potential risks, even if those risks are low. In this study, for example, you could have been humiliated. Or you could have been triggered by something the AI says. A low-risk study is one with only the sort of risk you face in daily life. This is box number one that I would have checked, if I had been on the ethics committee.

“There are circumstances, typically in psychology or sociology, when there is no alternative scientific method”.
Are researchers always obliged to ask for consent?

There are circumstances, typically in psychology or sociology, when there is no alternative scientific method to answer the question. That would have been the next box to be checked.

But this is only when the potential gain in knowledge seems worth the waiver of consent, I assume?

I agree. That’s box number three. But there is one more box: As soon as possible, you have to inform the participants that a study was conducted without their consent. You have to explain why. And you have to give them a chance to opt out of having their data used. That’s the debrief.

What about informing the Reddit moderators?

That is another level. If you do research in a community, you need the consent of the community authority. That rule was originally meant for rural communities. But the same applies if you do a study in a school, for example. There you don’t just show up to do it. You ask the school administration beforehand. Both are community structures with people who protect a particular space. I think that is the way that the Reddit moderators should have been treated. They probably think of themselves as protectors of the community. And rightfully so. They have taken on a duty and are enforcing the existing rules.

“I would have said they need the authorisation of the moderators”.
The AI also pretended to be a trauma therapist, for example.

This point probably would have disqualified the study from the low-risk category. People don’t share traumatic events in their everyday life.

What would you have decided if you’d been on an ethics committee reviewing this study?

It is difficult. You are asking me to evaluate in hindsight, and I know things now that I didn’t know before. I would have said they needed the authorisation of the moderators, and a debrief of the participants so they could opt out after the fact. I might not have been able to foresee that the AI was going to pretend to be a trauma therapist or an abuse victim, though.