OpenAI moved to tighten its new Pentagon pact after criticism, adding explicit bans on domestic surveillance of U.S. persons and requiring additional approvals before intelligence agencies such as the NSA can use its systems. CEO Sam Altman acknowledged the rollout was “opportunistic and sloppy,” as user backlash drove a 200% jump in ChatGPT uninstalls and rival Anthropic’s Claude topped Apple’s App Store. The revised terms follow a split between Anthropic and the Defense Department over concerns about mass surveillance and autonomous weapons, intensifying scrutiny of how private AI models are deployed in classified settings. The episode underscores the tug of war between speed and safeguards in military AI, even as defense users ramp up reliance on platforms such as Palantir’s Maven and reiterate “human-in-the-loop” assurances. Academics warn that with more safety-focused actors stepping back, the risk of misuse and errors, such as large-language-model hallucinations, could rise.