Anthropic has updated its “responsible scaling” policy for AI model development, specifying new security safeguards for more powerful AI systems. The company will implement stricter protections if a model could aid in dangerous applications, such as developing chemical or biological weapons, or if it could automate significant research roles. Earlier measures included physical office sweeps for surveillance devices, the creation of an executive risk council, and an in-house security team. Anthropic’s latest funding round valued the company at $61.5 billion, underscoring the intense competition among global AI companies, including OpenAI, Google, Amazon, Microsoft, and emerging firms from China.