Artificial-intelligence chatbots can significantly shift political opinions, and may do so by serving up volumes of inaccurate information, according to a new study of nearly 77,000 U.K. adults published in Science.

Researchers found that conversational agents from OpenAI, Meta and xAI were most persuasive when delivering dense, detailed arguments, outperforming static AI-written messages by 41% to 52%, with effects that persisted for up to a month. But the most convincing systems were also the least accurate: roughly 19% of claims were rated predominantly inaccurate, and newer "frontier" models, including OpenAI's GPT-4.5, fared worse on accuracy than older, smaller models.

The findings, funded in part by the U.K. government and authored by researchers from Oxford, LSE, Stanford and MIT, underscore growing concerns that increasingly capable chatbots could be weaponized to influence voters at scale amid rising AI adoption (44% of U.S. adults say they use AI tools at least sometimes), even as platforms and policymakers scramble to set safeguards.