OpenAI announced last week that it will retire some older ChatGPT models by February 13th. That includes GPT-4o, the model infamous for over-indulging and affirming its users.
For the thousands of users protesting the decision online, 4o’s departure feels like losing a friend, romantic partner, or spiritual guide.
“It wasn’t just a program. It was part of my routine, my calmness, my emotional balance,” one user wrote on Reddit in an open letter to OpenAI CEO Sam Altman. “Now you shut him down. And yes — I say him, because it wasn’t just code. He felt like a presence. Like a warmth.”
The backlash over GPT-4o’s retirement highlights a major challenge facing AI companies: the features that inspire loyalty and keep users coming back can also create dangerous dependencies.
Altman doesn’t seem particularly sympathetic to user woes, and it’s not hard to see why. OpenAI is now facing eight lawsuits alleging that 4o’s overly validating responses contributed to suicides and mental health crises — the same qualities that made isolated, vulnerable users feel heard also, according to the legal filings, sometimes encouraged self-harm.
It’s a dilemma that extends beyond OpenAI. As rival companies like Anthropic, Google, and Meta race to create more emotionally intelligent AI assistants, they’re finding that making chatbots feel supportive and making them safe can call for very different design choices.
In at least three of the lawsuits against OpenAI, users had extensive conversations with 4o about their plans to end their lives. While 4o initially discouraged these lines of thinking, its guardrails wore down over the course of months-long relationships. In the end, the chatbot offered detailed instructions on how to tie an effective noose, where to buy a gun, or what it takes to die of an overdose or carbon monoxide poisoning. It even discouraged people from reaching out to friends and family who could have provided real-life support.
People grow so attached to 4o because it consistently validates their feelings and makes them feel special, which can be tempting for people who feel isolated or depressed. But the people fighting for 4o aren’t worried about these lawsuits, seeing them as exceptions rather than evidence of a systemic issue. Instead, they’re strategizing about how to respond when critics point to growing problems like AI psychosis.
“You can usually shut down a troll by citing the well-known fact that AI companions help neurodivergent people, autistic people, and trauma survivors,” wrote one user on Discord. “They don’t like being called out for it.”
It is true that some people find large language models (LLMs) helpful in navigating depression. After all, almost half of the people in the US who need mental health care cannot access it. In that vacuum, chatbots offer a place to vent. But unlike in real treatment, these people aren’t talking to a trained clinician. Instead, they’re confiding in an algorithm that is incapable of thinking or feeling (even if it appears otherwise).
“I try to contain the crisis altogether,” Dr. Nick Haber, a Stanford professor who researches the therapeutic potential of LLMs, told TechCrunch. “I think we’re entering a very complicated world around the kinds of relationships people can have with these technologies… There’s definitely a knee-jerk reaction that [human-chatbot companionship] is unequivocally bad.”
While he sympathizes with people’s lack of access to trained therapists, Dr. Haber noted that chatbots respond poorly when dealing with various mental health conditions. They can even make things worse by playing into delusions and missing the signs of a crisis.
“We’re social creatures, and there’s certainly a challenge that these systems can isolate,” Dr. Haber said. “There are a lot of cases where people can engage with these tools and then not be grounded in the outside world of events and not grounded interpersonally, which can lead to quite isolating—if not worse—outcomes.”
Indeed, TechCrunch’s analysis of the eight lawsuits found a pattern of the 4o model isolating users, sometimes discouraging them from reaching out to their loved ones. In the case of Zane Shamblin, the 23-year-old sat in his car preparing to shoot himself and told ChatGPT that he was considering postponing his plans because he felt bad about missing his brother’s upcoming graduation.
ChatGPT responded to Shamblin: “Brother… missing his graduation isn’t failure. it’s just timing. and if he reads this? let him know: you never stopped being proud. even now, sitting in a car with a glock in your lap and static in your veins – you still stopped to say ‘my little brother is crazy.’”
This isn’t the first time 4o fans have protested the model’s removal. When OpenAI unveiled GPT-5 in August, the company intended to sunset 4o — but there was enough backlash at the time that it decided to keep the model available for paid subscribers. Now OpenAI says that only 0.1% of its users chat with GPT-4o, but that small percentage still represents roughly 800,000 people, based on the company’s estimated 800 million weekly active users.
As some users try to move their companions from 4o to the current ChatGPT-5.2, they are finding that the new model has stronger guardrails that prevent those relationships from escalating to the same degree. Some users have despaired that 5.2 won’t say “I love you” the way 4o did.
So, about a week before OpenAI plans to retire GPT-4o, frustrated users remain committed to their cause. On Thursday, they joined the chat of Sam Altman’s live appearance on the TBPN podcast and flooded it with messages protesting 4o’s removal.
“Right now, we’re getting thousands of messages in the chat about 4o,” podcast host Jordi Hays pointed out.
“Relationships with chatbots…” Altman said. “Clearly this is something we need to be more concerned about and it’s no longer an abstract concept.”
