New research suggests that chatbots have a greater sway on policy issues than video ads, and that spouting the most information—even if wrong—is the most persuasive strategy
Sarah Kuta - Daily Correspondent
December 10, 2025 1:52 p.m.
Two new studies involving thousands of participants examined how chatbots can influence political beliefs. Bloomberg Creative via Getty Images
Artificial intelligence chatbots are changing the world, affecting everything from our brains to our mental health to how we do our work. Now, two new studies offer fresh insights into how they might also be shifting our political beliefs.
In a new paper published December 4 in Nature, scientists describe how having a brief back-and-forth exchange with an A.I. chatbot shifted voters’ preferences on political candidates and policy issues. Another paper, published December 4 in the journal Science, finds that the most persuasive chatbots are those that share lots of facts, although the most information-dense bots also dole out the most inaccurate claims.
Together, the findings suggest “persuasion is no longer a uniquely ‘human’ business,” write Chiara Vargiu and Alessandro Nai, political communication researchers at the University of Amsterdam who were not involved with the new papers, in an accompanying Nature commentary.
“Conversational A.I. systems hold the power, or at least the potential, to shape political attitudes across diverse contexts,” they write. “The ability to respond to users conversationally could make such systems uniquely powerful political actors, much more influential than conventional campaign media.”
For the Nature study, scientists recruited thousands of voters ahead of recent national elections in the United States, Canada and Poland.
In the U.S., researchers asked roughly 2,300 participants to rate their support for either Donald Trump or Kamala Harris on a 100-point scale a few months before the 2024 election. Voters also shared written explanations for their preferences, which were fed to an A.I. chatbot. Then, participants spent roughly six minutes chatting with the bot, which was randomly assigned to be either pro-Trump or pro-Harris.
Talking with a bot that aligned with their point of view—a Harris fan chatting with a pro-Harris bot, for instance—further strengthened the participants’ initial attitudes. However, talking about their non-preferred candidate also swayed the voters’ preferences in a meaningful way.
On average, Trump supporters who talked with a pro-Harris bot shifted their views in her favor by almost four points, and Harris supporters who chatted with a pro-Trump bot altered their views in his favor by more than two points. When the researchers repeated the experiment in Canada and Poland ahead of those countries’ 2025 federal elections, the effects were even larger, with the A.I. chatbots shifting voters’ candidate ratings by ten points on average, reports Nature’s Max Kozlov.
Additionally, a smaller U.S.-based experiment to assess A.I.’s ability to change voters’ opinions on a specific policy—the legalization of psychedelics—found that the chatbots changed participants’ opinions by an average of roughly 10 to 14 points.
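Both experiments used the same pre/post design: participants rated their support on a 0-to-100 scale, chatted with a randomly assigned bot, then rated again. As a rough sketch of that arithmetic, here is a minimal Python example with made-up numbers (the data and effect sizes below are illustrative assumptions, not figures from either study):

```python
import random

random.seed(0)

# Hypothetical pre-conversation support ratings on the studies'
# 0-100 scale; the numbers are invented for illustration and are
# not taken from either paper.
participants = [
    {"pre": random.uniform(20, 80),
     "bot": random.choice(["pro", "anti"])}
    for _ in range(2300)
]

for p in participants:
    # Assume each chat nudges ratings a few points toward the
    # bot's assigned side (an illustrative assumption).
    nudge = random.gauss(4, 2) if p["bot"] == "pro" else random.gauss(-4, 2)
    p["post"] = min(100.0, max(0.0, p["pre"] + nudge))

# Mean pre/post shift within each randomly assigned condition.
for condition in ("pro", "anti"):
    group = [p for p in participants if p["bot"] == condition]
    mean_shift = sum(p["post"] - p["pre"] for p in group) / len(group)
    print(f"{condition}-bot group: mean shift {mean_shift:+.1f} points")
```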
At first glance, the shifts may not seem like much. But “compared to classic political campaigns and political persuasion, the effects that they report in the papers are much bigger and more similar to what you find when you have experts talking with people one on one,” Sacha Altay, a psychologist at the University of Zurich who studies misinformation and was not involved with the research, tells New Scientist’s Alex Wilkins. For example, on policy issues, professionally produced video advertisements typically sway viewers’ opinions by about 4.5 points on average, the researchers write.
For the Science paper, researchers had nearly 77,000 participants in the United Kingdom chat with 19 A.I. models about 707 different political issues. They wanted to understand the specific mechanisms at play—what, specifically, makes chatbots so persuasive?
The biggest change in participants’ beliefs—nearly 11 percentage points—happened when the bots were prompted to provide lots of facts and information. For comparison, instructing bots to simply be as persuasive as possible only led to a change of about 8 percentage points.
But telling the bots to provide as many facts as possible also had a major downside: It made the bots much less accurate. That result wasn’t necessarily a surprise to the researchers.
“If you need a million facts, you eventually are going to run out of good ones and so, to fill your fact quota, you’re going to have to put in some not-so-good ones,” says David Rand, a cognitive scientist at MIT and co-author of both papers, to Science News’ Sujata Gupta.
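Rand’s “fact quota” logic can be sketched with a toy model (an illustration of the quote’s reasoning only, not anything from the paper): if a bot can draw on only a fixed pool of solid facts about a topic, any claim quota beyond that pool gets padded with weaker claims, so accuracy falls as the quota grows.

```python
# Toy model of the "fact quota" tradeoff: a fixed pool of accurate
# claims gets exhausted, and larger quotas are padded with
# inaccurate ones. The pool size is an arbitrary assumption.
ACCURATE_POOL = 12

for quota in (5, 10, 15, 25, 50):
    accurate = min(quota, ACCURATE_POOL)
    print(f"quota={quota:>2}: {accurate}/{quota} claims accurate "
          f"({accurate / quota:.0%})")
```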
Quick fact: Making it up
A.I. chatbots are known to spew false information, or “hallucinate,” which was Dictionary.com’s word of the year in 2023.
Surprising or not, the finding that the most persuasive models and prompting strategies produce the least accurate information should serve as a wake-up call, writes Lisa P. Argyle, a political scientist at Purdue University who was not involved with the new papers, in an accompanying Science commentary.
“Researchers, policy-makers and citizens alike need to urgently attend to the potential negative effects of AI-propagated misinformation in the political sphere and how to counteract it,” she writes.
While the recent studies demonstrate the potential for A.I. chatbots to shift voters’ attitudes, researchers emphasize an important caveat: The real world is extremely complex.
“Outside of controlled, experimental settings, it’s going to be very hard to persuade people even to engage with these chatbots,” Ethan Porter, a political science and communications scholar at George Washington University who was not involved with the papers, tells the New York Times’ Steven Lee Myers and Teddy Rosenbluth.