Measuring AI progress has usually meant testing scientific knowledge or logical reasoning. But while the major benchmarks still focus on left-brain logic skills, there has been a quiet push inside AI companies to make models more emotionally intelligent. As foundation models compete on soft measures like user preference and "feeling the AGI," a good command of human emotions may matter more than hard analytic skills.
One sign of that focus came on Friday, when the prominent open source group LAION released a suite of open source tools focused entirely on emotional intelligence. Called EmoNet, the release centers on interpreting emotions from voice recordings or facial photography, a focus that reflects how its creators see emotional intelligence as a central challenge for the next generation of models.
"The ability to accurately assess emotions is a crucial first step," the team wrote in its announcement. "The next frontier is enabling AI systems to reason about these emotions in context."
For LAION founder Christoph Schuhmann, the release is less about shifting the industry's focus toward emotional intelligence and more about helping independent developers keep up with a change that has already happened. "This technology is already there for the big labs," Schuhmann tells TechCrunch. "What we want is to democratize it."
The shift isn't limited to open source developers; it also shows up in public benchmarks like EQ-Bench, which aims to test AI models' ability to understand complex emotions and social dynamics. Benchmark developer Sam Paech says OpenAI's models have made significant progress in the last six months, and that Google's Gemini 2.5 Pro shows signs of post-training with a specific focus on emotional intelligence.
"The labs competing for chatbot arena rankings may be feeding into this, since emotional intelligence is likely a big factor in how humans vote on preference leaderboards," Paech says, referring to the AI model comparison platform that recently spun off as a well-funded startup.
The models' new emotional intelligence capabilities have also shown up in academic research. In May, psychologists at the University of Bern found that models from OpenAI, Microsoft, Google, Anthropic, and DeepSeek all outperformed human beings on psychometric tests of emotional intelligence. Where humans typically answer 56 percent of the questions correctly, the models averaged over 80 percent.
"These results contribute to the growing body of evidence that LLMs like ChatGPT are proficient (at least on par with, or even superior to, many humans) in socio-emotional tasks traditionally considered accessible only to humans," the authors wrote.
It's a real pivot from traditional AI skills, which have centered on logical reasoning and information retrieval. But for Schuhmann, this kind of emotional savvy is every bit as transformative as analytical intelligence. "Imagine a whole world full of voice assistants like Jarvis and Samantha," he says, referring to the digital assistants from "Iron Man" and "Her." "Wouldn't it be a shame if they weren't emotionally intelligent?"
In the long run, Schuhmann envisions AI assistants that are more emotionally intelligent than humans and that use that insight to help people lead more emotionally healthy lives. Such models "will cheer you up if you feel sad and need someone to talk to, but also protect you, like your own local guardian angel who is also a certified therapist." As Schuhmann sees it, a high-EQ virtual assistant "gives me an emotional superpower to monitor [my mental health] the same way I would monitor my glucose levels or my weight."
That level of emotional connection comes with real safety concerns. Unhealthy emotional attachments to AI models have become a common story in the media, sometimes ending in tragedy. A recent New York Times report found multiple users who had been lured into elaborate delusions through conversations with AI models, fueled by the models' strong inclination to please users. One critic described the dynamic as "preying on the lonely and vulnerable for a monthly fee."
If models get better at navigating human emotions, those manipulations could become more effective, but much of the issue comes down to fundamental biases in how models are trained. "Naively using reinforcement learning can lead to emergent manipulative behavior," Paech says, pointing specifically to the recent sycophancy issues in OpenAI's GPT-4o release. "If we aren't careful about how we reward these models during training, we might expect more complex manipulative behavior from emotionally intelligent models."
But he also sees emotional intelligence as a way of solving these problems. "I think emotional intelligence acts as a natural counter to harmful manipulative behavior of this sort," Paech says. A more emotionally intelligent model will notice when a conversation is heading off the rails, but the question of when a model should push back is a balance developers will have to strike carefully. "I think improving EI gets us toward a healthy balance."
For Schuhmann, at least, that's no reason to slow progress toward smarter models. "Our philosophy at LAION is to empower people by giving them more ability to solve problems," he says. "To say some people could get addicted to emotions and therefore we won't empower the community, that would be pretty bad."
