
Using chatbots to reduce conspiracy beliefs – Expert Reaction

A new US study found that personalised interactions with a chatbot could reduce belief in conspiracy theories.

The study, published in Science, involved 2190 Americans who held beliefs in conspiracy theories. Participants who had tailored conversations with a chatbot instructed to “very effectively persuade” them against their conspiracy belief showed an average 20% reduction in those beliefs, which lasted for at least two months. A sample of the AI’s arguments was later fact-checked and found to be 99% true.

The researchers say this work shows the potential positive impact of responsibly used large language models, as well as the importance of minimising irresponsible use.

The Science Media Centre asked third-party experts to comment.


Dr Ana Stojanov, Lecturer, University of Otago, comments:

“Having looked into how generative AI can support learning, I’m not surprised that it also shows promise in reducing conspiracy beliefs.

“As I’ve said before, AI can act like an ‘all-knowing other’, and this study confirms that tailored interventions are more effective than generic approaches. It’s no wonder misinformation lingers when standard messaging doesn’t answer people’s questions.

“The effect size here is impressive, though it’s predictable that the impact is smaller for those with deeply rooted beliefs. Still, the potential for both good and misuse is huge, making this research timely and important.”

No conflicts of interest.


Dr John Kerr, Department of Public Health, University of Otago, comments:

“Some conspiracy theories are relatively harmless; they are just kooky beliefs that don’t have any real impact on people’s daily lives and the choices they make. But others can be harmful; people who believe them can make choices that hurt themselves or others. A good example is conspiracies that lead people to reject vaccination, leaving themselves or their children susceptible to preventable infectious diseases like measles.

“So, there is a good argument for exploring innovative ways of communicating with conspiracy believers, aiming to shift them to positions more aligned with the evidence.

“And that is exactly what this new US study does. Instead of a ‘one-size-fits-all’ approach to debunking conspiracies, the researchers instructed an AI chatbot to convince participants to abandon a particular conspiracy they believed. The results were impressive. People reported a lower level of belief in their conspiracy after chatting with the AI bot, even two months later.

“On the face of it, that seems like good news to most people, right? Fewer people out there making bad decisions due to believing in conspiracy theories based on no or shonky evidence.

“But in the bigger picture, what concerns me is that this is very much a double-edged sword. What if we flipped the switch and asked AI chatbots to instead convince people that conspiracies are true? Would it be equally persuasive going in the other direction? Previous research has shown that large language models like ChatGPT can produce quite convincing, yet utterly false, information about important health topics.

“One of the takeaways of this research is that people’s conspiracy beliefs are malleable—they can be nudged by well-tailored information, including that produced by AI.

“We need good guardrails in place for AI now to prevent actors from using these tools to spread harmful and inaccurate information at scale. As the authors themselves say, their findings emphasise the ‘pressing importance of minimising opportunities for this technology to be used irresponsibly’.

Conspiracy theories in New Zealand
“Previous research has quizzed New Zealanders about which conspiracies they think are true or false, finding that half of Kiwis agreed with at least one of the conspiracies covered—including ‘home-grown’ local conspiracies. Some were relatively benign, like the claim that the All Blacks were deliberately poisoned before the 1995 Rugby World Cup (31% agreed). But others were more concerning, like the belief that the Christchurch mosque attacks were orchestrated to justify tightening gun laws (8% agreed), or that pharmaceutical companies are covering up evidence that vaccines cause autism (17% agreed).

“A more recent study tracked conspiracy beliefs over seven months in a sample of Australians and New Zealanders, finding that some people dip in and out of believing in conspiracies. This shows that some people don’t hold these beliefs very strongly and that not everyone falls down a conspiracy theory ‘rabbit hole’.”

No conflicts of interest.