Take a breath. Pause for a moment. You’re not crazy, you’re just stressed. And honestly, that’s okay.
If you felt a flicker of recognition reading those words, you’re probably tired of ChatGPT constantly talking to you as if you’re in some kind of crisis and need to be handled delicately. Things may be getting better: OpenAI says its new model, GPT-5.3 Instant, will cut down on “sycophancy” and other “preachy disclaimers.”
According to the model’s release notes, the GPT-5.3 update focuses on user experience, including tone, relevance, and conversational flow: areas that may not show up in benchmarks but can make ChatGPT frustrating to use, the company said.
Or, as OpenAI put it on X, “We heard your feedback loud and clear, and 5.3 Instant reduces the sycophancy.”
In the company’s example, the same question is answered by the GPT-5.2 Instant model and the GPT-5.3 Instant model side by side. In the first, the chatbot’s response begins, “First of all — you’re not broken,” a canned opener that has become all too familiar lately.
In the updated model, the chatbot acknowledges the difficulty of the situation without reflexively trying to reassure the user.
The obnoxious tone of ChatGPT’s 5.2 model has annoyed users to the point where some have even canceled their subscriptions, according to numerous posts on social media. (It was a huge topic of discussion on the ChatGPT subreddit, for example, before the Pentagon deal stole the spotlight.)
People have complained that this type of language, where the bot speaks to you as if it assumes you’re panicking or stressed when you’re just looking for information, comes across as condescending.
Often, ChatGPT would respond to users with reminders to breathe and other attempts at reassurance, even when the situation didn’t warrant it. In some cases, this made users feel infantilized, or as if the bot were making assumptions about their state of mind that simply weren’t true.
As a Reddit user recently pointed out, “no one in all of history has ever calmed down by being told to calm down.”
It’s understandable that OpenAI would attempt to implement guardrails of some sort, especially as the company faces multiple lawsuits accusing the chatbot of contributing to negative mental health effects, in some cases including suicide.
But there’s a balance to strike between responding with empathy and simply giving quick, factual answers. After all, Google never asks how you’re feeling when you search for information.
