After months of chatting with ChatGPT, a 53-year-old Silicon Valley entrepreneur became convinced he had discovered a cure for sleep apnea and that powerful people were after him, according to a new lawsuit filed in California Superior Court in San Francisco County. He then allegedly used the tool to stalk and harass his ex-girlfriend.
Now the ex-girlfriend is suing OpenAI, claiming the company’s technology enabled and accelerated her harassment, TechCrunch exclusively reports. The suit claims OpenAI ignored three separate warnings that the user posed a threat to others, including an internal flag that classified his account activity under “Weapons of Mass Casualty.”
The plaintiff, referred to as Jane Doe to protect her identity, is suing for damages. She also filed for a temporary restraining order on Friday, asking the court to force OpenAI to block the user’s account, prevent him from creating new ones, notify her if he attempts to access ChatGPT, and preserve his full chat logs for discovery.
OpenAI agreed to suspend the user’s account but denied the other requests, according to Doe’s lawyers. They say the company is withholding information about specific plans to harm Doe and other potential victims that the user may have discussed with ChatGPT.
The lawsuit comes amid growing concern about the real-world dangers of AI systems that reinforce users’ delusions. GPT-4o, the model referenced in this and many other such cases, was retired by OpenAI in February.
The case is being brought by Edelson PC, the firm behind the wrongful-death lawsuits involving teenager Adam Raine, who killed himself after months of chatting with ChatGPT, and Jonathan Gavalas, whose family claims Google’s Gemini fueled delusions, and the possibility of mass casualties, before his death. Lead attorney Jay Edelson has warned that AI-induced psychosis is escalating from individual harm to mass casualty events.
This legal pressure now conflicts directly with OpenAI’s legislative strategy: the company is supporting an Illinois bill that would protect AI labs from liability even in cases involving mass death or catastrophic economic damage.
OpenAI did not immediately respond to a request for comment. TechCrunch will update this article if the company responds.
The Jane Doe lawsuit details how those dangers played out for one woman over several months.
Last year, the ChatGPT user in question (whose name is not included in the lawsuit, to protect Doe’s identity) became convinced he had invented a cure for sleep apnea after months of “high-volume, sustained use of GPT-4o.” When no one took his work seriously, ChatGPT told him that “powerful forces” were watching him, including using helicopters to monitor his activities, according to the complaint.
In July 2025, Jane Doe urged him to stop using ChatGPT and seek help from a mental health professional. Instead, he turned to ChatGPT, which assured him he was “level 10 in logic” and helped him double down on his delusions, according to the lawsuit.
Doe had broken up with the user in 2024, and he used ChatGPT to process the breakup, according to emails and communications cited in the lawsuit. Instead of pushing back against his one-sided account, ChatGPT repeatedly cast him as reasonable and wronged, and her as manipulative and unstable. He then took these AI-generated inferences off the screen and into the real world, using them to stalk and harass her. This manifested in several AI-generated, clinical-looking psychological reports that he distributed to her family, friends, and employer.
Meanwhile, the user continued to spiral. In August 2025, OpenAI’s automated security system flagged him for “Weapons of Mass Casualty” activity and disabled his account.
A member of the human security team reviewed the account the next day and reinstated it, even though the account may have contained evidence that he had targeted and stalked people, including Doe, in real life. For example, a September screenshot the user sent to Doe showed a list of chat titles, including “expand list of violence” and “calculate fetal drowning.”
The decision to reinstate the account is notable in light of two recent school shootings: one in Tumbler Ridge, Canada, and one at Florida State University (FSU). OpenAI’s security team had flagged the Tumbler Ridge shooter as a possible threat, but higher-ups reportedly decided not to notify the authorities. Florida’s attorney general this week opened an investigation into OpenAI’s possible connection to the FSU shooter.
According to Jane Doe’s lawsuit, when OpenAI reinstated her stalker’s account, his Pro membership was not restored with it. He emailed the trust and security team to resolve this, copying Doe on the message.
In his emails, he wrote things like “I NEED HELP VERY FAST, PLEASE. PLEASE CALL ME!” and “this is a matter of life or death.” He claimed that he was “in the process of writing 215 scientific papers,” which he was producing so fast that he “didn’t even have time to read” them. Included in those emails was a list of dozens of AI-generated “scientific papers” with titles such as “Deconstructing Race as a Biological Category_ Horn of Africa Legal, Scientific and Perspectives.pdf.txt.”
“The user’s communications provided unmistakable notice that he was mentally unstable and that ChatGPT was the driving force behind his delusional thinking and escalating behavior,” the lawsuit states. “The user’s stream of urgent, disorganized, and grandiose claims, along with a specific report created by ChatGPT that targeted the Plaintiff by name and a vast body of purportedly ‘scientific’ material, was unmistakable evidence of this reality. OpenAI did not intervene, limit its access, or implement any safeguards.”
Doe, who claims in the lawsuit that she lived in fear and couldn’t sleep in her home, filed an abuse notice with OpenAI in November.
“For the past seven months, he has used this technology to create public destruction and humiliation against me that would have been impossible otherwise,” Doe wrote in her letter to OpenAI asking the company to permanently ban the user’s account.
OpenAI responded, acknowledging that the report was “extremely serious and concerning” and that it was carefully reviewing the information. Doe says she never heard back.
Over the next two months, the user continued to harass Doe, sending her a series of threatening voicemail messages. In January, he was arrested and charged with four felony counts of communicating bomb threats and assault with a deadly weapon. Doe’s lawyers say the arrest validates the warnings she and OpenAI’s own security systems had raised months earlier, warnings the company allegedly chose to ignore.
The user was found incompetent to stand trial and committed to a mental health facility, but a “procedural failure by the state” means he will soon be released to the public, according to Doe’s lawyers.
Edelson called on OpenAI to cooperate. “In each case, OpenAI has chosen to withhold critical security information – from the public, from victims, from people its product actively puts at risk,” he said. “We call on them, for once, to do the right thing. Human lives should mean more than OpenAI’s race for an IPO.”
