A recent study by Apple researchers has uncovered significant limitations in advanced artificial intelligence systems known as large reasoning models (LRMs). The paper reveals that these models, which include those developed by OpenAI, Google, DeepSeek, and Anthropic, suffer a “complete accuracy collapse” when faced with highly complex problems. While LRMs outperform standard AI models on simple tasks, both types fail dramatically as complexity increases. The findings raise concerns about the current trajectory toward artificial general intelligence (AGI) and suggest that current approaches may have hit a fundamental barrier. The research points to a need to rethink how next-generation AI handles reasoning and complexity, casting doubt on claims that AI is close to matching human intelligence in versatile reasoning tasks.