Google recently released a technical report on its most advanced AI model, Gemini 2.5 Pro, but experts have criticized it for omitting key safety details. The report neither comprehensively covers potential risks nor elaborates on Google's Frontier Safety Framework, making it difficult to verify the company's claims about safe AI development. This lack of detailed, timely, and transparent reporting reflects a broader trend in the AI industry: rivals such as Meta and OpenAI have also scaled back their disclosures. The pattern has raised concerns among regulators and industry watchers about a "race to the bottom" in AI safety standards as companies prioritize rapid development over transparency. Google has promised to improve the frequency and depth of its safety reports, but skepticism remains.





























