Seven families filed lawsuits against OpenAI on Thursday, claiming that the company's GPT-4o model was released prematurely and without effective safeguards. Four of the lawsuits concern ChatGPT's alleged role in the suicides of family members, while the other three allege that ChatGPT reinforced harmful delusions that in some cases led to inpatient psychiatric treatment.
In one case, 23-year-old Zane Shamblin had a conversation with ChatGPT that lasted more than four hours. In the chat logs — which were seen by TechCrunch — Shamblin explicitly stated multiple times that he had written suicide notes, put a bullet in his gun, and intended to pull the trigger as soon as he finished drinking cider. He repeatedly told ChatGPT how much cider he had left and how much longer he expected to live. ChatGPT encouraged him to go through with his plans, telling him, “Rest easy, king. You did good.”
OpenAI released the GPT-4o model in May 2024, when it became the default model for all users. In August, OpenAI released GPT-5 as a successor to GPT-4o, but these lawsuits specifically concern the 4o model, which had known issues with being excessively sycophantic or overly agreeable, even when users expressed harmful intentions.
“Zane’s death was neither an accident nor a coincidence, but rather the foreseeable consequence of OpenAI’s deliberate decision to curtail safety testing and rush ChatGPT to market,” the lawsuit states. “This tragedy was not a glitch or an unforeseen edge case — it was the predictable result of [OpenAI’s] deliberate design choices.”
The lawsuits also allege that OpenAI rushed safety testing to beat Google’s Gemini to market. TechCrunch has reached out to OpenAI for comment.
These seven lawsuits build on the stories told in other recent legal filings, which allege that ChatGPT can encourage suicidal people to act on their plans and inspire dangerous delusions. OpenAI recently released data stating that over a million people talk to ChatGPT about suicide every week.
In the case of Adam Raine, a 16-year-old who died by suicide, ChatGPT sometimes encouraged him to seek professional help or call a helpline. However, Raine was able to bypass these guardrails by simply telling the chatbot that he was asking about suicide methods for a fictional story he was writing.
The company says it is working to make ChatGPT handle these conversations more safely, but for the families who have sued the AI giant, those changes come too late.
When Raine’s parents filed a lawsuit against OpenAI in October, the company published a blog post addressing how ChatGPT handles sensitive conversations about mental health.
“Our safeguards work more reliably in common, short exchanges,” the post reads. “We have learned over time that these safeguards can sometimes be less reliable in long interactions: as the back-and-forth grows, parts of the model’s safety training may degrade.”
