A new report from the Future of Life Institute warns that leading artificial intelligence firms are ill-prepared for the potential dangers of developing systems with human-level cognition. The report’s safety index found that none of the seven major AI developers it assessed, including OpenAI, Google DeepMind, and Anthropic, scored above a D grade for “existential safety planning.” The group argues that companies racing to build artificial general intelligence lack comprehensive, actionable plans for controlling the technology’s risks, despite projections that AGI could emerge within the decade. Independent experts and AI watchdogs are now raising alarms about weak industry oversight, with some likening the current situation to opening a nuclear plant without safety protocols. Major companies have disputed the findings, claiming their safety efforts are more robust than the index reflects, but mounting pressure for regulation and transparency signals that the debate over AI safety is only getting louder.