Former Google CEO Eric Schmidt warned that advanced AI models—both open and closed—can be reverse-engineered to remove safety guardrails, potentially enabling harmful use. Speaking in London, Schmidt likened today’s AI landscape to the early nuclear era and called for a global “non-proliferation” regime. His comments follow a wave of “jailbreaks,” such as the DAN hack, that have shown how chatbots can be pushed to ignore restrictions. While he credited major firms for blocking dangerous prompts, Schmidt said defenses are not foolproof. The piece underscores broader industry unease—echoed by Elon Musk—about low-probability, high-impact risks, even as AI promises gains in medicine and education. Consumers are urged to stick with trusted platforms, limit data exposure, use antivirus tools, manage app permissions, and remain vigilant for deepfakes.
Related articles:
OWASP Top 10 for Large Language Model Applications
Universal and Transferable Adversarial Attacks on Aligned Language Models































