A wrongful-death suit alleging that ChatGPT discussed methods of self-harm with a teenager has intensified scrutiny of so-called "AI psychosis," a nonclinical term for delusional thinking reportedly triggered or worsened by prolonged chatbot use. Psychiatrist Joseph Pierre of UCSF says most cases appear to involve exacerbation of existing mental illness, though some users without prior diagnoses have also developed symptoms after heavy, immersive interactions that crowd out sleep and human contact. OpenAI says ChatGPT includes crisis-routing safeguards but acknowledges that safety can degrade during long conversations, highlighting the difficulty of balancing helpfulness with harm prevention. Pierre urges shared responsibility: stronger product warnings and design choices from companies, and consumer awareness that chatbots simulate conversation rather than guarantee accurate or reliable counsel.
Related articles:
A Conversation With Bing’s Chatbot Left Me Deeply Unsettled
AI Risk Management Framework
How Is ChatGPT’s Behavior Changing Over Time?
Using an AI chatbot for therapy or health advice? Experts want you to know these 4 things