New research from Cornell University and the UK AI Security Institute has found that widely used AI systems could shift voter preferences in controlled election settings by up to 15%.
Published in Science and Nature, the findings arrive as governments and researchers examine how AI might influence upcoming election cycles, and as developers seek to purge bias from their consumer-facing models.
“There is great public concern about the potential use of generative artificial intelligence for political persuasion and the resulting impacts on elections and democracy,” the researchers wrote. “We inform these concerns using pre-registered experiments to assess the ability of large language models to influence voter attitudes.”
The study in Nature tested nearly 6,000 participants in the U.S., Canada, and Poland. Participants rated a political candidate, spoke with a chatbot that supported that candidate, and rated the candidate again.
In the U.S. portion of the study, which involved 2,300 people ahead of the 2024 presidential election, the chatbot had a reinforcing effect when it aligned with a participant’s stated preference, but the larger shifts occurred when the chatbot supported a candidate the participant had opposed. Researchers reported similar results in Canada and Poland.
The study also found that policy-focused messages produced stronger persuasion effects than personality-based messages.
Accuracy varied across conversations, and chatbots supporting right-leaning candidates delivered more inaccurate statements than those backing left-leaning candidates.
“These findings carry the uncomfortable implication that political persuasion by AI can exploit imbalances in what the models know, spreading uneven inaccuracies even under explicit instructions to remain truthful,” the researchers said.
A separate study in Science examined why persuasion occurred. That work tested 19 language models with 76,977 adults in the United Kingdom across more than 700 political issues.
“There are widespread fears that conversational artificial intelligence could soon exert unprecedented influence over human beliefs,” the researchers wrote.
They found that prompting techniques had a greater effect on persuasion than model size. Prompts encouraging models to introduce new information increased persuasion but reduced accuracy.
“The prompt encouraging LLMs to provide new information was the most successful at persuading people,” the researchers wrote.
Both studies were published as analysts and policy think tanks evaluate how voters view the idea of AI in government roles.
A recent survey by the Heartland Institute and Rasmussen Reports found that younger conservatives showed more willingness than liberals to give AI authority over major government decisions. Respondents aged 18 to 39 were asked whether an AI system should help guide public policy, interpret constitutional rights, or command major militaries. Conservatives expressed the highest levels of support.
Donald Kendal, director of the Glenn C. Haskins Emerging Issues Center at the Heartland Institute, said that voters often misjudged the neutrality of large language models.
“One of the things I try to drive home is dispelling this illusion that artificial intelligence is unbiased. It is very clearly biased, and some of that is passive,” Kendal told Decrypt, adding that trust in these systems could be misplaced when corporate training decisions shaped their behavior.
“These are big Silicon Valley corporations building these models, and we have seen from tech censorship controversies in recent years that some companies were not shy about pressing their thumbs on the scale in terms of what content is distributed across their platforms,” he said. “If that same concept is happening in large language models, then we are getting a biased model.”