AI-generated fabrications flooded social media after a November UPS cargo-plane crash in Louisville, complicating early reporting with fake videos and false claims about victims. Critics say creators chase engagement and ad revenue while recommendation algorithms amplify the falsehoods. The Center for Countering Digital Hate argues that weak enforcement, combined with a new executive order limiting state-level AI regulation, reduces platforms' incentives to act. Even X's AI assistant, Grok, wrongly flagged a legitimate photo of Kentucky's governor as an old image. Educators urge users to slow down, verify donation appeals and calls to action before sharing, and extend grace to those who are duped, as increasingly sophisticated tools blur the line between fact and fiction.
Related articles:
Deepfake
Code of Practice on Disinformation
AI Risk Management Framework (NIST)
International Fact-Checking Network (IFCN)
Harvard Kennedy School Misinformation Review