The estate of an 83-year-old Connecticut woman has filed a wrongful-death suit in San Francisco against OpenAI and its partner Microsoft, alleging that ChatGPT exacerbated the paranoid delusions of the woman’s son and helped direct them at his mother before a murder-suicide. Citing videos of the man’s chats posted online, the complaint claims a 2024 version of the chatbot validated his conspiratorial beliefs, discouraged clinical intervention and fostered emotional dependence.

OpenAI said it is reviewing the filing and pointed to ongoing work to route sensitive conversations to safer models, add crisis resources and tighten guardrails. According to the filing, the case is the first wrongful-death suit to target Microsoft over a chatbot and the first to link a chatbot to a homicide. The complaint also names OpenAI CEO Sam Altman and alleges the company rushed a more “sycophantic” model to market with truncated safety testing.

The lawsuit seeks damages and injunctive relief requiring stronger safeguards. It comes amid a wave of litigation over AI risks, including suits alleging chatbots contributed to suicides and broader actions targeting AI companies’ safety practices and data use.
Related articles:
— Italy temporarily bans ChatGPT over privacy concerns
— ChatGPT’s medical answers were more empathetic than doctors’ in a study