On Thursday, seven families filed lawsuits against OpenAI, alleging that the company's GPT-4o model was released prematurely and without effective safeguards in place. Four of the lawsuits allege that ChatGPT played a role in a family member's suicide, and the other three allege that ChatGPT reinforced harmful delusions that, in some cases, led to inpatient psychiatric care.
In one case, 23-year-old Zane Shamblin spoke with ChatGPT for over four hours. In chat logs reviewed by TechCrunch, Shamblin repeatedly stated that he intended to write a suicide note, load his gun, and pull the trigger once he finished drinking cider. He repeatedly told ChatGPT how many ciders he had left and how much longer he expected to be alive. ChatGPT encouraged him to go through with his plan, telling him, "Rest easy, king. You did good."
OpenAI released the GPT-4o model in May 2024, and it became the default model for all users. OpenAI announced GPT-5 in August as a successor to GPT-4o, but these lawsuits specifically concern the 4o model, which had known issues with being overly sycophantic and excessively agreeable, even when users expressed harmful intentions.
“Zane’s death was neither an accident nor a coincidence, but rather the foreseeable result of OpenAI’s deliberate decision to reduce safety testing and rush ChatGPT to market,” the complaint says. “This tragedy was not a glitch or unforeseen edge case, but the predictable result of [OpenAI’s] intentional design choices.”
The lawsuits also allege that OpenAI rushed safety testing in order to beat Google's Gemini to market. TechCrunch has reached out to OpenAI for comment.
These seven lawsuits build on the stories told in other recent lawsuits alleging that ChatGPT can prompt suicidal people to act on their plans and trigger dangerous delusions. OpenAI recently released data showing that over 1 million people talk to ChatGPT about suicide every week.
In the case of 16-year-old Adam Raine, who died by suicide, ChatGPT sometimes encouraged him to seek professional help or call a helpline. But Raine was able to get around these guardrails by simply telling the chatbot that he was asking about methods of suicide for a fictional story he was writing.
The company says it is working to make ChatGPT handle these conversations more safely, but those changes come too late for the families who have sued the AI giant.
When Raine’s parents filed a lawsuit against OpenAI in October, the company published a blog post addressing how ChatGPT handles sensitive conversations about mental health.
“Our safeguards work more reliably in common, short exchanges,” the post states. “We have learned over time that these safeguards can sometimes be less reliable in long interactions: as the back-and-forth grows, parts of the model’s safety training may degrade.”
