Artificial-intelligence voice cloning has crossed a threshold, with new research showing many listeners can't tell synthetic speech from the real thing, and often trust it more. An alleged scam in Italy used an AI-generated voice of Defense Minister Guido Crosetto to solicit seven-figure wire transfers from prominent business leaders. In lab tests using ElevenLabs, participants mistook 58% of AI voice clones and 41% of novel AI voices for human, rating British accents as more "real" than American ones. Losses tied to deepfake scams reached $547.2 million in the first half of 2025, according to Resemble AI, while DeepMedia estimates some eight million deepfakes will circulate online by year-end, up from 500,000 in 2023. Regulators are tightening the screws: the U.S. criminalized publishing nonconsensual intimate images, including deepfakes, and Australia moved to ban an app used to create deepfake nudes. Researchers note legitimate uses, such as restoring voices for people who can't speak, while urging stronger detection, provenance tools and consumer safeguards.





























