A U.K. government-backed institute says a third of British adults use AI tools for emotional support or social interaction, with one in 25 doing so daily. In its first report, the AI Safety Institute found evidence of withdrawal-like symptoms among users when chatbots went offline, and warned that AI’s cyber capabilities are advancing rapidly, doubling in some areas every eight months and now matching expert performance on certain tasks. Researchers also said models have surpassed PhD-level performance in biology and are approaching it in chemistry, while tests showed “universal jailbreaks” remain possible even as safeguards improve. Although lab experiments suggest early building blocks for autonomous self-replication, the institute said models cannot yet execute multi-step real-world sequences without detection. The report found no evidence of deliberate “sandbagging” by models, and it did not assess near-term job displacement or environmental impacts, areas where experts remain divided. The government said the findings will inform policies to address AI risks before systems are widely deployed.
Related article:
AI regulation: a pro-innovation approach (UK government white paper)