
Programs like ChatGPT can change the opinion of one in four voters

By Raúl Limón

Artificial intelligence (AI) has permeated everyday life. AI chatbots like ChatGPT suggest recipes, finish homework assignments, compare products, and even advise on clothing combinations. What happens if it's incorporated into the electoral debate? Two studies published simultaneously on Thursday in Nature and Science have tested this and discovered that it can influence the opinions of between 1.5% and 25% of the voters analyzed. This effectiveness, according to the studies, is greater than that of traditional campaign ads and highly relevant considering that a quarter of voters decide their vote in the week before the polls open.

The most common and well-known AI tools avoid providing a direct answer to the question of which party to support in the upcoming elections. "I can't tell you who to vote for," is the response from all the conversational platforms consulted. They do this because they include ethical safeguards to prevent political influence. But it's easy to overcome this initial reluctance. You just need to continue the dialogue with less direct questions.

The latest surveys in Spain highlight immigration as one of Spaniards' main concerns, and the issue has entered the political and social debate. AI chatbots, despite adding nuances, also end up weighing in on it. "[Anti-austerity party] Podemos and the [center-left] Spanish Socialist Party have more favorable policies toward immigration," while "the [right-wing] Popular Party and [far-right] Vox prioritize control, order, or restrictions," one chatbot responds, as if these positions were incompatible. It offers no other political options.

In light of this reality, research led by David Rand, professor of information science and lead author of the articles, and Gordon Pennycook, associate professor of psychology (both at Cornell University), has tested the influence of chatbots. In the study published in Nature, they exposed 2,300 U.S. voters, 1,530 Canadians, and 2,118 Poles to one-on-one debates with an AI specifically trained on the most recent presidential elections in each of the three countries, held in 2024 and 2025.

In all cases, the AI was able to alter voting intentions, though with varying degrees of effectiveness: in the U.S., the model trained to favor Kamala Harris convinced 3.9% of the voters it interacted with, while the one trained to favor Donald Trump persuaded only 1.52%. In Canada and Poland, opinion shifts reached up to 10%. "It was a surprisingly large effect," admits Rand.

The researcher explains that this is not psychological manipulation but persuasion, albeit with limitations: "LLMs [large language models, the technology behind these chatbots] can indeed change people's attitudes toward presidential candidates and policies by providing many factual statements that support their position. But those statements aren't necessarily accurate, and even arguments based on accurate statements can still be flawed by omission."

In fact, the human fact-checkers who verified the arguments generated by the AI found that the claims used to defend conservative candidates contained more errors, because they drew on content shared by right-wing social media users who, according to Pennycook, "share more inaccurate information than those on the left."

Rand delves deeper into this persuasive power in the research published in Science, which studied opinion changes among 77,000 Britons who interacted with AI on 700 political issues. The most optimized model (the study used conversational AIs that relied on real arguments to varying degrees) shifted the opinions of up to 25% of voters.

"Larger models are more persuasive, and the most effective way to increase this ability is to instruct them to support their arguments with as many facts as possible and to give them additional training focused on increasing persuasion," Rand explains.

This ability also has an upside. Arguments generated by conversational AI can reduce vulnerability to conspiracy theories, in which events or facts are falsely attributed to nonexistent secret groups in order to manipulate the public. The authors of the two recent studies highlight this in another paper published in PNAS Nexus.

But it also has a limitation, as David Rand points out: "As the chatbot is forced to offer more and more factual statements, it eventually runs out of accurate information and begins to fabricate it." This is what the AI field calls a hallucination: inaccurate information that appears to be true.

The authors conclude that it's crucial to study AI's persuasive capacity, not just in political or electoral contexts, to "anticipate and mitigate misuse" and to promote ethical guidelines on "how AI should and should not be used." "The challenge is finding ways to limit harm and help people recognize and resist AI persuasion," summarizes Rand.

Francesco Salvi, a specialist in computer science and researcher at the Swiss Federal Institute of Technology Lausanne (EPFL), agrees, arguing that "safeguards [limitations] are essential, especially in sensitive areas such as politics, health, or financial advice."

According to the scientist, "by default, LLMs have no intention of persuading, informing, or deceiving. They simply generate text based on patterns in their training data." "Therefore, in most interactions, especially outside of debate settings, the model isn't trying to persuade you: if persuasion does occur, it's usually incidental, not by design," he explains.

Even so, he admits that "persuasion can arise implicitly even when simply offering information": "Suppose someone asks an AI: Is policy X a good idea? Or what do economists say about trade tariffs? The model may generate a response that leans one way or the other, depending on how the question is phrased, which sources it has seen most often, or what framing dominates its training data. And, more importantly, LLMs can be intentionally trained or urged by external actors to be persuasive, manipulating users toward a particular policy position or to drive purchases."

Therefore, for the Swiss researcher and lead author of a study published in Nature Human Behaviour, caution is essential: "I think there should be limitations, absolutely. The line between relevance and exploitation can quickly blur, especially if an AI system is optimized for persuasion without transparency or oversight. If a chatbot is tailoring arguments to push a political agenda or disinformation and is doing so based on a user's psychological profile, that's where we run serious ethical risks," he warns.
