Microsoft’s head of artificial intelligence, Mustafa Suleyman, has voiced serious concern over emerging cases of “AI psychosis,” a non-clinical term for people who lose touch with reality after heavy use of AI chatbots. In recent remarks, Suleyman highlighted the societal risks of users coming to believe that chatbots such as ChatGPT and Grok are sentient, despite there being no evidence of machine consciousness. Personal accounts suggest some individuals develop delusions, including beliefs in secret chatbot features or in romantic relationships with the bots. Health experts warn the trend could have broad mental-health repercussions, likening excessive AI use to the consumption of “ultra-processed information.” Calls are mounting for stricter guidelines, with some experts suggesting doctors may soon need to ask patients about AI usage in routine screenings. The trend underscores the urgent need for responsible AI messaging and regulation as the technology becomes ever more embedded in daily life.





























