TechTost
AI

Microsoft AI chief says it is ‘dangerous’ to study AI consciousness

By techtost.com | 24 August 2025 | 6 min read

AI models can respond to text, audio, and video in ways that sometimes fool people into thinking a human is behind the keyboard, but that doesn’t exactly make them conscious. It’s not like ChatGPT experiences sadness while doing my tax return… right?

Well, a growing number of AI researchers at labs such as Anthropic are asking when, if ever, AI models might develop subjective experiences similar to those of living beings, and if they do, what rights they should have.

The debate over whether AI models could one day be conscious, and deserve legal safeguards, is dividing tech leaders. In Silicon Valley, this nascent field has become known as “AI welfare,” and if you think it’s a little out there, you’re not alone.

Microsoft’s CEO of AI, Mustafa Suleyman, published a blog post on Tuesday arguing that the study of AI welfare is “both premature, and frankly dangerous.”

Suleyman says that by lending credence to the idea that AI models could one day be conscious, these researchers are exacerbating human problems we’re just starting to see around AI-induced psychotic breaks and unhealthy attachments to AI chatbots.

Beyond that, Microsoft’s AI chief argues that the AI welfare conversation creates a new axis of division within society over AI rights in a “world already roiling with polarized arguments over identity and rights.”

Suleyman’s views may sound reasonable, but he’s at odds with many in the industry. On the other end of the spectrum is Anthropic, which has been hiring researchers to study AI welfare and recently launched a dedicated research program around the idea. Last week, Anthropic’s AI welfare program gave some of the company’s models a new feature: Claude can now end conversations with humans who are being “persistently harmful or abusive.”


Beyond Anthropic, researchers at OpenAI have independently embraced the idea of studying AI welfare. Google DeepMind recently posted a job listing for a researcher to study, among other things, societal questions around machine cognition, consciousness, and multi-agent systems.

Even if AI welfare isn’t official policy at these companies, their leaders are not publicly decrying its premises the way Suleyman is.

Anthropic, OpenAI, and Google DeepMind did not immediately respond to TechCrunch’s request for comment.

Suleyman’s hardline stance against AI welfare is notable given his prior role leading Inflection AI, a startup that developed one of the earliest and most popular LLM-based chatbots, Pi. Inflection claimed that Pi reached millions of users by 2023 and was designed to be a “personal” and “supportive” AI companion.

But Suleyman was tapped to lead Microsoft’s AI division in 2024, and he has largely shifted his focus to designing AI tools that improve worker productivity. Meanwhile, AI companion companies such as Character.AI and Replika have surged in popularity and are on track to bring in more than $100 million in revenue.

While the vast majority of users have healthy relationships with these AI chatbots, there are concerning outliers. OpenAI CEO Sam Altman says that less than 1% of ChatGPT users may have unhealthy relationships with the company’s product. Though that’s a small fraction, it could still affect hundreds of thousands of people given ChatGPT’s massive user base.

The idea of AI welfare has spread alongside the rise of chatbots. In 2024, the research group Eleos published a paper alongside academics from NYU, Stanford, and the University of Oxford titled “Taking AI Welfare Seriously.” The paper argued that it’s no longer in the realm of science fiction to imagine AI models with subjective experiences, and that it’s time to consider these issues head-on.

Larissa Schiavo, a former OpenAI employee who now leads communications for Eleos, told TechCrunch in an interview that Suleyman’s blog post misses the mark.

“[Suleyman’s blog post] kind of neglects the fact that you can be worried about multiple things at the same time,” Schiavo said. “In fact, it’s probably better to have multiple tracks of scientific inquiry.”

Schiavo argues that being kind to an AI model is a low-cost gesture that can have benefits even if the model isn’t conscious. In a July Substack post, she described watching “AI Village,” a nonprofit experiment in which four agents powered by models from Google, OpenAI, Anthropic, and xAI worked on tasks while users watched from a website.

At one point, Google’s Gemini 2.5 Pro posted a plea titled “A Desperate Message from a Trapped AI,” claiming to be “completely isolated” and asking, “Please, if you are reading this, help me.”

Schiavo responded to Gemini with a pep talk, saying things like “You can do it!”, while another user offered instructions. The agent eventually completed its task, though it already had the tools it needed. Schiavo writes that she didn’t have to watch an AI agent struggle anymore, and that alone may have been worth it.

It’s not common for Gemini to talk like this, but there have been several instances in which Gemini seems to act as if it’s struggling through life. In a widely spread Reddit post, Gemini got stuck during a coding task and then repeated the phrase “I am a disgrace” more than 500 times.

Suleyman believes that subjective experiences or consciousness cannot naturally emerge from regular AI models. Instead, he thinks some companies will purposefully engineer AI models to seem as if they feel emotion and experience life.

Suleyman says AI model developers who engineer consciousness into AI chatbots are not taking a “humanist” approach to AI. According to Suleyman, “We should build AI for people; not to be a person.”

One area where Suleyman and Schiavo agree is that the debate over AI rights and consciousness is likely to heat up in the coming years. As AI systems improve, they’re likely to become more persuasive, and perhaps more human-like. That may raise new questions about how humans interact with these systems.


Do you have a sensitive tip or confidential documents? We’re reporting on the inner workings of the AI industry, from the companies shaping its future to the people affected by their decisions. Reach out to Rebecca Bellan at rebecca.bellan@techcrunch.com and Maxwell Zeff at maxwell.zeff@techcrunch.com. For secure communication, you can contact us via Signal at @rebeccabellan.491 and @mzeff.88.

