The accelerating spread of AI-generated images and videos is eroding the public's default trust in what they see online, media and computer-science researchers say. Following high-profile news events—from the U.S. operation in Venezuela to a fatal ICE shooting—fabricated or manipulated media have flooded social platforms, amplified by engagement-driven incentives. Stanford's Jeff Hancock says distinguishing fake from real by sight alone is becoming "essentially impossible," while scholars including Renee Hobbs warn that constant doubt can push users into disengagement. Adam Mosseri, head of Instagram, predicted a shift from assuming authenticity to starting with skepticism. Researchers Hany Farid and Siwei Lyu add that partisanship and familiarity bias further distort judgments, underscoring the need for provenance tools and basic source vetting as AI-generated and authentic content increasingly commingle.
Related articles:
– Deepfake
– Misinformation
– Disinformation
– Media literacy
– Synthetic media