The Pentagon’s designation of Anthropic as a supply-chain risk, issued after the startup refused to relax safeguards on domestic surveillance and lethal autonomous weapons, spotlights a growing rift between Silicon Valley’s “safety-first” AI ethos and the U.S. military’s operational demands. In an interview, Cornell’s Sarah Kreps, a former Air Force officer, says the dispute underscores a classic dual-use tension: once software is integrated into classified systems, companies lose visibility into, and leverage over, how it is used. The clash evokes earlier tech-government standoffs, from Apple’s resistance to the FBI in the San Bernardino case to employee revolts over Pentagon AI programs, and raises unresolved questions about how “human-in-the-loop” controls can be verified. While AI already accelerates intelligence analysis and pattern recognition, Kreps warns that murkier targeting contexts heighten the risks, even as ongoing conflicts compress adoption timelines. Anthropic, which cultivated a safety-forward brand while courting enterprise and defense business, now plans a legal challenge that could set precedents for government contracting and AI governance.