Advances in generative video are making fabricated crowd scenes increasingly convincing, raising concerns about how images of rallies, concerts and protests shape public perception. New models from OpenAI and Google—Sora 2 and Veo 3—can render thousands of moving people with improving fidelity, narrowing the gap between real and synthetic footage.

That progress heightens two risks: inflating turnout to confer legitimacy, and creating plausible deniability for authentic images that are politically inconvenient. A Capgemini report estimates AI accounted for a significant share of images on social media in 2023, underscoring the scale of the challenge.

Platforms and developers are turning to labeling and watermarking, with Google applying visible and invisible marks and Meta, YouTube and TikTok requiring disclosures to varying degrees. But labeling remains inconsistent across services, visible marks are easy to miss, and industry standards are still coalescing. As AI-generated visuals proliferate, experts say verification will need to become routine practice for consumers, journalists and campaigns alike.
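To make the idea of an "invisible mark" concrete: production systems such as Google's SynthID use proprietary, model-level techniques, but the general principle of hiding a machine-readable signal in pixel data can be sketched with a classic least-significant-bit scheme. The functions and payload below are purely illustrative, not how any named product actually works.

```python
def embed_watermark(pixels: list[int], bits: list[int]) -> list[int]:
    """Hide a bit string in the least significant bits of the first len(bits) pixels."""
    marked = pixels[:]
    for i, b in enumerate(bits):
        marked[i] = (marked[i] & 0xFE) | b  # clear the LSB, then set it to the payload bit
    return marked

def extract_watermark(pixels: list[int], n_bits: int) -> list[int]:
    """Read the hidden bit string back out of the pixel LSBs."""
    return [p & 1 for p in pixels[:n_bits]]

image = [200, 53, 17, 88, 240, 9, 131, 64]   # toy 8-pixel grayscale "image"
payload = [1, 0, 1, 1, 0, 1, 0, 0]           # hypothetical provenance code
marked = embed_watermark(image, payload)

# The payload survives, and no pixel shifts by more than one intensity level,
# which is why such marks are invisible to the eye yet detectable by software.
assert extract_watermark(marked, len(payload)) == payload
assert all(abs(m - p) <= 1 for m, p in zip(marked, image))
```

Real invisible watermarks are far more robust (surviving compression, cropping and re-encoding), but the sketch shows why a mark can be trivially machine-detectable while remaining imperceptible to viewers, and why visible labels alone are easy to miss.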





























