The U.S.-Israeli strikes on Iran have thrust military AI into the spotlight. Washington abruptly sidelined Anthropic from a $200 million contract supporting the Maven targeting system, citing the company's guardrails on weaponization and surveillance, and turned instead to OpenAI under stated limits. Experts warn that rapid battlefield deployment of AI, spanning logistics, intelligence analysis, and target prioritization, is outpacing international efforts in Geneva to set rules for lethal autonomous weapons. Despite hopes that AI might improve precision, researchers say there is no evidence it reduces civilian harm. Fully autonomous, LLM-powered weapons operating without human oversight remain unreliable and out of step with humanitarian law, leaving governments to navigate procurement language and safety constraints as adoption accelerates.