New York Assemblymember and former Palantir executive Alex Bores argues that AI deepfakes are a solvable problem if platforms adopt cryptographic content authentication akin to the HTTPS shift that secured online banking. On Bloomberg’s Odd Lots, Bores backed the C2PA standard to attach tamper-evident provenance data to images, video, and audio, saying media lacking credentials should be treated skeptically. He warned that harmful uses—especially non-consensual sexual deepfakes—still require explicit legal bans while Congress lags on comprehensive rules. Bores touted New York’s new Raise Act, which compels “frontier” AI labs to publish safety plans and report critical incidents, framing it as formalizing voluntary industry pledges. The measure has drawn fire from a pro-AI super PAC, underscoring how AI policy is becoming a flashpoint ahead of 2026 races.
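The content-credential idea Bores describes can be sketched in miniature: bind a manifest of provenance claims to a hash of the media, sign it, and treat anything that fails verification (or carries no credentials at all) with skepticism. The sketch below is conceptual only. Real C2PA manifests use X.509 certificates and COSE public-key signatures, not a shared-secret HMAC, and every name here is illustrative.

```python
import hashlib
import hmac
import json

# Stand-in for a publisher's private signing key (real C2PA uses PKI).
SIGNING_KEY = b"publisher-secret"


def attach_credentials(media_bytes: bytes, claims: dict) -> dict:
    """Bind provenance claims to the media via a keyed digest over both."""
    manifest = {
        "claims": claims,
        "content_hash": hashlib.sha256(media_bytes).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest


def verify(media_bytes: bytes, manifest: dict) -> bool:
    """True only if the claims and the media itself are both untampered."""
    sig = manifest.get("signature")
    if sig is None:
        return False  # no credentials: treat skeptically
    unsigned = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(sig, expected)
        and unsigned["content_hash"] == hashlib.sha256(media_bytes).hexdigest()
    )


photo = b"\x89PNG...raw image bytes"
cred = attach_credentials(photo, {"creator": "Example Newsroom", "tool": "camera"})
assert verify(photo, cred)             # authentic media passes
assert not verify(photo + b"x", cred)  # any edit to the media fails verification
```

The "tamper-evident" property comes from the signature covering the content hash: altering either the media or the claims invalidates the credential, which is the same trust mechanic the HTTPS analogy points at.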