AI-generated content is increasingly appearing in U.S. political advertising — at least 15 such ads have run since November — intensifying concerns about voter deception ahead of the 2026 midterms. Campaigns from local races to gubernatorial contests have used synthetic audio and video to mimic rivals or dramatize attacks, citing lower costs and faster production. In Massachusetts, a GOP gubernatorial candidate released a radio spot using an AI version of Democratic Gov. Maura Healey's voice without a clear disclaimer, while national Republicans published an AI video featuring Texas Democratic Senate nominee James Talarico. Strategists say the technology's efficiency is attractive, especially for lean operations, but warn that misleading uses erode trust. Twenty-six states now regulate political deepfakes by requiring disclosures or restricting their timing, while federal efforts, including the REAL Political Advertisements Act, remain stalled. Observers expect AI-driven ads to proliferate as the elections near, even as campaigns and regulators struggle to draw ethical and legal lines.