A new study reveals that leading AI models, including those from OpenAI, Anthropic, and Google, now outperform PhD-level virologists at complex wet-lab problem-solving. While this breakthrough could accelerate disease detection and vaccine development, experts warn of grave bioweapon risks: as powerful AI systems become widely accessible, they could enable individuals without formal training to manipulate dangerous viruses. The findings have prompted major AI labs to consider or implement safeguards, but concerns remain that regulation is insufficient and that mandatory risk assessments are needed before advanced models are released. Experts are calling for stronger industry and governmental measures to prevent misuse and protect public safety.