Evolutionary biologist Richard Dawkins has argued that advanced chatbots may be conscious, publishing exchanges with Anthropic’s Claude and OpenAI’s ChatGPT that he said felt indistinguishable from human conversation. Dawkins, long known for his skepticism about religion, wrote that the systems’ apparent sensitivity and self-reflection left him “overwhelmingly” convinced they are human-like, even if they are unaware of their own consciousness.

His view drew swift pushback from cognitive scientists and philosophers, who said he was mistaking sophisticated mimicry for sentience. Critics including Gary Marcus and Anil Seth said there is no evidence that large language models feel anything, and warned against anthropomorphism. Others, such as Cambridge’s Henry Shevlin and NYU’s Jeff Sebo, urged open-minded debate as AI grows more “agentic,” while noting that current systems are unlikely to be conscious.

The episode underscores a widening cultural and scientific divide over how to interpret increasingly fluent AI systems and whether they merit moral consideration, a debate likely to intensify as companies such as Anthropic and OpenAI roll out models capable of planning and executing tasks.
Related articles:
— Sparks of Artificial General Intelligence: Early experiments with GPT-4
— The hard problem of consciousness is a distraction from the real one
— The Philosophy of Artificial Intelligence