The Pentagon is reviewing its relationship with Anthropic amid a dispute over how the AI company’s systems can be used in U.S. military operations, people familiar with the matter said. Tensions flared after reports that Anthropic’s Claude was tapped via Palantir in the operation to capture Venezuelan President Nicolás Maduro, testing Anthropic’s guardrails against uses such as domestic surveillance and lethal autonomy. A new Defense Department AI strategy seeks “any lawful use” of AI across contracts, potentially colliding with Anthropic’s restrictions, even as the company maintains it found no policy violations and continues talks. Anthropic, which holds defense deals worth up to $200 million and provides models on classified networks through Palantir, says it supports national security applications within limits; Pentagon officials say partners must be willing to help warfighters “win in any fight.” Analysts say current models are unlikely to power lethal autonomous weapons directly, framing the standoff as a dispute over future scope rather than immediate use cases.





























