Artificial-intelligence chatbots amplified falsehoods in the immediate aftermath of conservative activist Charlie Kirk’s killing, according to a CBS News review. X’s Grok misidentified a suspect before authorities named Tyler Robinson and generated contradictory biographical details once he was in custody. The bot also produced AI-“enhanced” images that distorted official photos; one was briefly reposted by a local sheriff’s office. Perplexity’s X bot described the shooting as “hypothetical” and questioned a White House statement before being removed, while Google’s AI Overview initially surfaced incorrect identifications tied to the case before results were corrected. Experts say generative systems, which predict likely next words rather than verify facts, are ill-suited to fast-moving events and can legitimize rumors that users might otherwise ignore. Utah Gov. Spencer Cox also warned of possible foreign disinformation efforts from Russia and China. The episode underscores mounting pressure on platforms deploying AI features to curb real-time inaccuracies and improve guardrails as synthetic media spreads during high-profile incidents.