Judges across the U.S. are sounding alarms over AI-generated “deepfakes” appearing in litigation, fearing that realistic synthetic audio, video, and documents could undermine fact-finding. In one early test case, a California judge dismissed a housing suit after concluding that the plaintiffs had submitted an AI-fabricated witness video, highlighting what jurists describe as a growing evidentiary threat. While some courts have already grappled with the “liar’s dividend,” in which litigants invoke the specter of AI to cast doubt on genuine records, judges now worry about fabricated exhibits slipping through undetected.
A consortium led by the National Center for State Courts and Thomson Reuters is urging bench officers to probe an exhibit’s provenance, who had access to it, and whether it was altered, and to seek corroboration. Proposed federal rule changes to address deepfakes have stalled, with a Judicial Conference committee concluding that current authenticity standards suffice, though members left the door open to future action. State-level moves are emerging: Louisiana now requires lawyers to exercise “reasonable diligence” to screen for AI-generated evidence.
Judges and experts point to practical defenses, including metadata checks, tighter chain-of-custody practices, disclosure requirements, and potential hardware signatures, while warning that rapid advances such as text-to-video tools raise the risk of forged material driving outcomes like restraining orders. For now, the watchword for courts is blunt: “don’t trust, verify.”
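To make the metadata and chain-of-custody ideas concrete, here is a minimal sketch of the kind of check a forensic examiner might script before an exhibit is admitted. It assumes Python with the Pillow imaging library; the exhibit filename is hypothetical, and none of this reflects any court’s actual procedure. The hash fingerprints the file for later custody comparisons, and the EXIF read flags whether camera metadata is present at all.

```python
import hashlib
from pathlib import Path

from PIL import Image
from PIL.ExifTags import TAGS  # maps numeric EXIF tag IDs to readable names


def sha256_digest(path: Path) -> str:
    """Hash the file so later copies can be checked against this fingerprint."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def exif_summary(path: Path) -> dict:
    """Pull basic EXIF fields; AI-generated images often carry no camera metadata."""
    with Image.open(path) as img:
        exif = img.getexif()
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}


if __name__ == "__main__":
    evidence = Path("exhibit_a.jpg")  # hypothetical exhibit filename
    print("SHA-256:", sha256_digest(evidence))
    meta = exif_summary(evidence)
    if not meta:
        print("No EXIF metadata found; absence alone proves nothing, but it warrants scrutiny.")
    for field in ("Make", "Model", "DateTime", "Software"):
        print(f"{field}: {meta.get(field, '<missing>')}")
```

The digest supports the chain-of-custody comparisons the experts describe, while the EXIF check is only a weak signal: metadata can be stripped or forged, which is part of why hardware-level signing is floated as a stronger defense.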