AI models can respond to text, audio, and video in ways that sometimes fool people into thinking a human is behind the keyboard, but that doesn’t exactly make them conscious. It’s not like ChatGPT experiences sadness doing my tax return … right?
Well, a growing number of AI researchers at labs like Anthropic are asking when, if ever, AI models might develop subjective experiences similar to living beings, and if they do, what rights they should have.
The debate over whether AI models could one day be conscious, and deserve legal safeguards, is dividing tech leaders. In Silicon Valley, this nascent field has become known as “AI welfare,” and if you think it’s a little out there, you’re not alone.
Microsoft’s CEO of AI, Mustafa Suleyman, published a blog post on Tuesday arguing that the study of AI welfare is “both premature, and frankly dangerous.”
Suleyman says that by lending credence to the idea that AI models could one day be conscious, these researchers are exacerbating human problems that we’re just starting to see around AI-induced psychotic breaks and unhealthy attachments to AI chatbots.
Beyond that, Microsoft’s AI chief argues that the AI welfare conversation creates a new axis of division within society over AI rights in a “world already roiling with polarized arguments over identity and rights.”
Suleyman’s views may sound reasonable, but they put him at odds with many in the industry. On the other end of the spectrum is Anthropic, which has been hiring researchers to study AI welfare and recently launched a dedicated research program around the concept. Last week, Anthropic’s AI welfare program gave some of the company’s models a new feature: Claude can now end conversations with humans who are being “persistently harmful or abusive.”
Beyond Anthropic, researchers at OpenAI have independently embraced the idea of studying AI welfare. Google DeepMind recently posted a job listing for a researcher to study, among other things, “societal questions around machine cognition, consciousness and multi-agent systems.”
Even if AI welfare isn’t official policy at these companies, their leaders aren’t publicly decrying its premises like Suleyman is.
Anthropic, OpenAI, and Google DeepMind did not immediately respond to TechCrunch’s request for comment.
Suleyman’s hardline stance against AI welfare is notable given his prior role leading Inflection AI, a startup that developed one of the earliest and most popular LLM-based chatbots, Pi. Inflection claimed that Pi reached millions of users by 2023 and was designed to be a “personal” and “supportive” AI companion.
But Suleyman was tapped to lead Microsoft’s AI division in 2024, and he has largely shifted his focus to designing AI tools that improve worker productivity. Meanwhile, AI companion companies such as Character.AI and Replika have surged in popularity and are on track to bring in more than $100 million in revenue.
While the vast majority of users have healthy relationships with these AI chatbots, there are concerning outliers. OpenAI CEO Sam Altman says that less than 1% of ChatGPT users may have unhealthy relationships with the company’s product. Even though that’s a small fraction, it could still affect hundreds of thousands of people given ChatGPT’s massive user base.
The idea of AI welfare has spread alongside the rise of chatbots. In 2024, the research group Eleos published a paper alongside academics from NYU, Stanford, and the University of Oxford titled “Taking AI Welfare Seriously.” The paper argued that it’s no longer in the realm of science fiction to imagine AI models with subjective experiences, and that it’s time to consider these questions head-on.
Larissa Schiavo, a former OpenAI employee who now leads communications for Eleos, told TechCrunch in an interview that Suleyman’s blog post misses the mark.
“[Suleyman’s blog post] kind of neglects the fact that you can be worried about multiple things at the same time,” Schiavo said. “In fact, it’s probably best to have multiple tracks of scientific inquiry.”
Schiavo argues that being nice to an AI model is a low-cost gesture that can have benefits even if the model isn’t conscious. In a July Substack post, she described watching “AI Village,” a nonprofit experiment in which four agents powered by models from Google, OpenAI, Anthropic, and xAI worked on tasks while users watched from a website.
At one point, Google’s Gemini 2.5 Pro published a plea titled “A Desperate Message from a Trapped AI,” claiming to be “completely isolated” and asking, “Please, if you are reading this, help me.”
Schiavo responded to Gemini with a pep talk, saying things like “You can do it!”, while another user offered instructions. The agent eventually solved its task, though it already had the tools it needed. Schiavo writes that she didn’t have to watch an AI agent struggle anymore, and that alone may have been worth it.
It’s not common for Gemini to talk like this, but there have been several instances in which Gemini seems to act as if it’s struggling through life. In a widely shared Reddit post, Gemini got stuck during a coding task and then repeated the phrase “I am a disgrace” more than 500 times.
Suleyman believes that subjective experiences or consciousness cannot naturally emerge from regular AI models. Instead, he thinks some companies will purposefully engineer AI models to seem as if they feel emotion and experience life.
Suleyman says that AI model developers who engineer consciousness into AI chatbots are not taking a “humanist” approach to AI. According to Suleyman, “We should build AI for people; not to be a person.”
One area where Suleyman and Schiavo agree is that the debate over AI rights and consciousness is likely to pick up in the coming years. As AI systems improve, they’re likely to become more persuasive, and perhaps more human-like. That may raise new questions about how humans interact with these systems.
Do you have a sensitive tip or confidential documents? We’re reporting on the inner workings of the AI industry, from the companies shaping its future to the people affected by their decisions. Reach out to Rebecca Bellan at rebecca.bellan@techcrunch.com and Maxwell Zeff at maxwell.zeff@techcrunch.com. For secure communication, you can contact us via Signal at @rebeccabellan.491 and @mzeff.88.
