Three years into the generative-AI boom, the technology's most pervasive impact may be a creeping sense of unreality. Taking as a case study a surreal interview between a former cable-news anchor and an AI re-creation of a deceased Parkland shooting victim, the article argues that synthetic voices and avatars, often deployed with earnest intentions, blur the line between authentic and fabricated experience. The result is public disorientation, an erosion of trust, and the normalization of staged intimacy dressed up as progress. Beyond the technical glitches and awkward affect, the episode underscores the need for clearer norms and accountability around deploying AI in emotionally charged contexts, since media figures, activists, and platforms all help legitimize technology that can confuse, manipulate, or anesthetize audiences.
Related article:
Coalition for Content Provenance and Authenticity (C2PA) and Content Credentials