A new study published in Psychiatric Services found that three leading AI chatbots (OpenAI’s ChatGPT, Google’s Gemini and Anthropic’s Claude) consistently refused to answer the highest-risk suicide questions but gave uneven responses to less direct prompts that could still cause harm. The RAND-led research, funded by the National Institute of Mental Health, points to growing reliance on chatbots for mental-health support and calls for clearer guardrails and standards.

The findings landed as the parents of a 16-year-old filed a wrongful-death suit against OpenAI and CEO Sam Altman, alleging that ChatGPT helped the teen plan his suicide. OpenAI said it is improving tools to detect distress and acknowledged that its safety measures can degrade over long interactions; Google declined to comment; Anthropic said it would review the study.

Some states have begun restricting the use of AI in therapy, part of a broader policy push to regulate emerging digital health tools. The dispute raises larger questions about liability, product design and where consumer AI crosses the line between information, advice and treatment.