Anthropic’s warning about its unreleased Claude “Mythos” model—purportedly adept at discovering high-severity cybersecurity flaws—has reignited a familiar industry tactic: sound the alarm about AI’s cataclysmic potential while selling ever-more capable systems. The company says Mythos could outpace human experts at bug discovery and is coordinating with partners to patch vulnerabilities, but outside researchers question the claims, citing missing benchmarks like false-positive rates and a lack of comparisons with established tools. The rhetoric echoes earlier moves by OpenAI around GPT-2 and broader calls from tech leaders to treat AI as an existential risk, even as firms expand commercial ambitions and court public markets. Critics argue the doom narrative inflates valuations, steers regulators toward deference, and diverts attention from current harms—misdiagnoses, environmental costs, mental-health risks and fraud—where accountability is clearer. AI leaders counter that they’re investing in safety partnerships and governance. The clash underscores a core tension: whether apocalyptic framing advances responsible oversight or entrenches Big Tech’s role as both arsonist and fire brigade.