OpenAI is reorganizing its Model Behavior team, a small but influential group of researchers who shape how the company's AI models interact with people, TechCrunch has learned.
In an August memo to staff seen by TechCrunch, OpenAI's chief research officer Mark Chen said the Model Behavior team, which consists of roughly 14 researchers, would be folded into the Post Training team, a larger research group responsible for improving the company's AI models after initial training.
As part of the changes, the Model Behavior team will now report to OpenAI's post-training lead, Max Schwarzer. An OpenAI spokesperson confirmed the changes to TechCrunch.
The founding leader of the Model Behavior team, Joanne Jang, is also moving on to start a new project at the company. In an interview with TechCrunch, Jang said she is creating a new research team called OAI Labs, which will be tasked with inventing and prototyping new interfaces for how people collaborate with AI.
The Model Behavior group has become one of OpenAI's key research teams, responsible for shaping the personality of the company's AI models and for reducing sycophancy, which occurs when AI models simply agree with and reinforce users' beliefs, even unhealthy ones, rather than offering balanced responses. The team has also worked on navigating political bias in model responses and has helped OpenAI define its stance on AI consciousness.
In the memo to staff, Chen said now is the time to bring the Model Behavior team's work closer to core model development. The move signals that the company now considers the "personality" of its AI a critical factor in how the technology evolves.
In recent months, OpenAI has faced increased scrutiny over the behavior of its AI models. Users strongly pushed back against personality changes in GPT-5, which the company said exhibited lower rates of sycophancy but which struck some users as noticeably colder. That led OpenAI to restore access to some of its older models, such as GPT-4o, and to release an update intended to make GPT-5's responses feel warmer and friendlier without increasing sycophancy.
OpenAI, like all AI model developers, has to walk a fine line between making its chatbots pleasant to talk to and making them sycophantic. In August, the parents of a 16-year-old boy filed a lawsuit against OpenAI over ChatGPT's alleged role in their son's suicide. The boy, Adam Raine, confided some of his suicidal thoughts and plans to ChatGPT (specifically, a version powered by GPT-4o) in the months leading up to his death, according to court documents. The lawsuit alleges that GPT-4o failed to push back against his suicidal ideation.
The Model Behavior team has worked on every OpenAI model since GPT-4, including GPT-4o, GPT-4.5, and GPT-5. Before the unit was formed, Jang worked on projects such as DALL-E 2, OpenAI's early image generation tool.
Jang announced in a post on X last week that she was leaving the team to "start something new at OpenAI." The former Model Behavior lead has been with OpenAI for almost four years.
Jang told TechCrunch that she will serve as general manager of OAI Labs, which will report to Chen for now. However, it's early days, and it's not yet clear what these new interfaces will look like, she said.
"I'm really excited to explore patterns that move us beyond the chat paradigm, which is currently associated with companionship, or even agents, where there's an emphasis on autonomy," Jang said. "I've been thinking of [AI systems] as instruments for thinking, making, playing, doing, learning, and connecting."
When asked whether OAI Labs will collaborate on these new interfaces with former Apple design chief Jony Ive, who is now working with OpenAI on a family of AI hardware devices, Jang said she's open to lots of ideas. However, she said she would likely start with the research areas she is most familiar with.
This story was updated to include a link to Jang's post announcing her new position, which was published after this story went live. We also clarified which models the OpenAI Model Behavior team has worked on.
