Anthropic is recruiting a chemical-weapons and high-yield-explosives specialist to harden safeguards in its AI systems against "catastrophic misuse," according to a LinkedIn posting. The move tracks a similar hire at OpenAI, which has advertised a researcher role on biological and chemical risks with pay of up to $455,000, highlighting how leading AI labs are formalizing risk and safety teams even as critics warn such efforts could expose models to sensitive weapons knowledge. The hiring push comes amid Anthropic's legal fight with the Pentagon, which labeled the company a supply-chain risk; the White House has said military policy won't be set by tech firms. Anthropic maintains its systems shouldn't be used for autonomous weapons or mass surveillance, while experts note there is no global treaty governing AI's role in handling weapons-related information. The dispute underscores the industry's tension between rapid deployment and escalating calls for guardrails.