AI-generated videos and images are spreading faster than traditional verification can keep up, fueling confusion around major events, from alleged ICE actions in Minneapolis to rumors of Nicolás Maduro's arrest in Venezuela, according to experts interviewed by The Independent. NewsGuard warns that the visual cues once used to spot fakes are no longer reliable, while UC Berkeley's Hany Farid notes that even official accounts have circulated AI-altered images, eroding trust in institutions. Analysts also point to a "liar's dividend," in which bad actors dismiss authentic footage as deepfakes to avoid accountability. Media-literacy specialists advise tracing clips back to their original sources, scrutinizing backgrounds for anomalies, and treating reposted clips with caution, since upscaling can introduce AI-like artifacts into genuine footage. With synthetic content amplified by high-profile accounts and few guardrails in place, the information environment is increasingly vulnerable to rapid, large-scale deception.
Related articles:
— Deepfake
— Synthetic media
— Misinformation
— Disinformation
— Media literacy