OpenAI believes superhuman artificial intelligence is coming — and wants to build tools to control it

By techtost.com | 15 December 2023 | 8 min read

While investors were gearing up to go nuclear after Sam Altman’s unceremonious ouster from OpenAI, and Altman was plotting his return to the company, members of OpenAI’s Superalignment team were busily grappling with the problem of how to control AI that is smarter than humans.

Or at least, that’s the impression they’d like to give.

This week, I got on the phone with three of the Superalignment team members — Collin Burns, Pavel Izmailov and Leopold Aschenbrenner — who were in New Orleans at NeurIPS, the annual machine learning conference, to present OpenAI’s newest project to ensure that AI systems behave as intended.

OpenAI formed the Superalignment team in July to develop ways to guide, regulate and govern “superintelligent” AI systems — that is, theoretical systems with intelligence that far exceeds that of humans.

“Today, we can basically align models that are dumber than us, or maybe around human-level at most,” Burns said. “Aligning a model that is actually smarter than us is much, much less obvious — how can we even do that?”

The Superalignment effort is led by OpenAI co-founder and chief scientist Ilya Sutskever, whose leadership of the team didn’t raise eyebrows in July — but it certainly does now, in light of the fact that Sutskever was among those who initially pushed for Altman’s firing. While some reporting suggests Sutskever is in a state of limbo after Altman’s return, OpenAI’s PR tells me that Sutskever is indeed – as of today, at least – still heading the Superalignment team.

Superalignment is a bit of a touchy subject within the AI research community. Some argue that the subfield is premature; others imply it is a red herring.

While Altman has invited comparisons between OpenAI and the Manhattan Project, going so far as to assemble a team to probe AI models for protection against “catastrophic risks,” including chemical and nuclear threats, some experts say there is little evidence to suggest that the startup’s technology will acquire vast, world-beating capabilities anytime soon — or ever. Claims of impending superintelligence, these experts add, serve only to deliberately distract from and divert attention away from the pressing AI regulatory issues of the day, such as algorithmic bias and AI’s propensity for toxicity.

For what it’s worth, Sutskever does appear to believe fervently that AI — not OpenAI’s per se, but some incarnation of it — could one day pose an existential threat. By some accounts, he went so far as to commission and burn a wooden effigy at a company offsite to demonstrate his commitment to preventing AI from harming humanity, and he commands a meaningful amount of OpenAI’s compute — 20% of its existing computer chips — for the Superalignment team’s research.

“The progress of artificial intelligence recently has been extremely fast, and I can assure you that it is not slowing down,” Aschenbrenner said. “I think we’ll get to human-level systems very soon, but it won’t stop there — we’ll go straight to superhuman systems… So how do we align superhuman AI systems and make them safe? It really is a problem for all of humanity — perhaps the most important unsolved technical problem of our time.”

The Superalignment team is currently attempting to build governance and control frameworks that could apply well to future powerful AI systems. It’s no simple task, as the definition of “superintelligence” — and whether a particular AI system has achieved it — is hotly debated. But the approach the team has come up with for now involves using a weaker, less sophisticated AI model (e.g., GPT-2) to guide a more advanced, sophisticated model (GPT-4) in desired directions — and away from undesired ones.

An image illustrating the Superalignment team’s analogy for aligning superintelligent systems. Image Credits: OpenAI

“A lot of what we’re trying to do is tell a model what to do and make sure it does it,” Burns said. “How do we get a model to follow instructions, and get a model to only help with things that are true and not things that are made up? How do we get a model to tell us whether the code it generated is safe or problematic? These are the types of tasks we want to be able to achieve with our research.”

But wait, you might say — what does AI guiding AI have to do with preventing AI that threatens humanity? Well, it’s an analogy: the weak model is meant to be a stand-in for human supervisors, while the strong model represents the superintelligent AI. Similar to humans who may not be able to make sense of a superintelligent AI system, the weak model cannot “get” all the complexities and nuances of the strong model — making the setup useful for testing superalignment hypotheses, the Superalignment team says.

“You can think of a sixth grader trying to supervise a college student,” Izmailov explained. “Suppose the sixth grader is trying to tell the college student how to do a task that he sort of knows how to solve… Even though the sixth grader’s supervision may have errors in the details, there’s hope that the college student would grasp the substance and be able to do the task better than the supervisor.”

In the Superalignment team’s setup, a weak model fine-tuned on a particular task generates labels that are used to “communicate” the broad strokes of that task to the strong model. Given these labels, the strong model can more or less correctly generalize according to the weak model’s intent — even when the weak model’s labels contain errors and biases, the team found.
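The weak-to-strong setup described above can be illustrated with a toy stand-in. In this minimal sketch (simple synthetic classifiers, not OpenAI’s actual GPT-2/GPT-4 experiments), a noisy “weak supervisor” produces labels, and a “strong student” with a better representation of the task is fit only to those noisy labels — yet ends up more accurate than its supervisor:

```python
# Toy illustration of weak-to-strong generalization (hypothetical sketch,
# not OpenAI's code): a weak supervisor produces noisy labels; a strong
# student trained only on those labels can still beat the supervisor.
import numpy as np

rng = np.random.default_rng(0)
n = 4000

# Ground truth: points inside the unit circle are class 1.
X = rng.uniform(-2, 2, size=(n, 2))
y_true = (X[:, 0] ** 2 + X[:, 1] ** 2 < 1).astype(int)

# "Weak supervisor": the true rule corrupted by 25% random label noise,
# standing in for error-prone human (or weak-model) supervision.
flip = rng.random(n) < 0.25
y_weak = np.where(flip, 1 - y_true, y_true)

# "Strong student": it has the right representation for the task
# (squared radius) and picks the threshold that best agrees with the
# weak labels -- it never sees y_true.
r2 = (X ** 2).sum(axis=1)
candidates = np.linspace(0.0, 4.0, 401)
agreement = [((r2 < t).astype(int) == y_weak).mean() for t in candidates]
best_t = candidates[int(np.argmax(agreement))]
y_strong = (r2 < best_t).astype(int)

weak_acc = (y_weak == y_true).mean()      # ~0.75 by construction
strong_acc = (y_strong == y_true).mean()  # recovers most of the lost accuracy
print(f"weak supervisor accuracy: {weak_acc:.3f}")
print(f"strong student accuracy:  {strong_acc:.3f}")
```

Because the label noise here is symmetric, the threshold that best matches the noisy labels is, in expectation, also the one that best matches the truth — which is why the student can generalize past its supervisor. The team’s GPT-2-supervising-GPT-4 experiments probe whether something analogous holds for language models, where the weak model’s errors are systematic rather than random.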

The weak-strong model approach may even lead to breakthroughs in the field of hallucinations, the team claims.

“Hallucinations are actually very interesting, because internally, the model actually knows whether what it’s saying is fact or fiction,” Aschenbrenner said. “But the way these models are trained today, human supervisors reward them with a ‘thumbs up’ or ‘thumbs down’ for saying things. So sometimes, inadvertently, humans reward the model for saying things that are either false or that the model doesn’t actually know about, and so on. If we’re successful in our research, we should develop techniques where we can basically elicit the model’s knowledge, apply that to judging whether something is fact or fiction, and use that to reduce hallucinations.”

But the analogy is not perfect. So OpenAI wants to gather ideas.

To that end, OpenAI is launching a $10 million grant program to support technical research on superintelligent alignment, portions of which will go to academic labs, nonprofits, individual researchers, and graduate students. OpenAI also plans to host an academic superalignment conference in early 2025, where it will share and promote the work of the superalignment prize finalists.

Interestingly, part of the funding for the grants will come from former Google CEO and chairman Eric Schmidt. Schmidt — a staunch supporter of Altman — is fast becoming a poster child for AI doomerism, arguing that the arrival of dangerous AI systems is imminent and that regulators aren’t doing enough to prepare. It’s not necessarily out of a sense of altruism: as reporting from Protocol and Wired notes, Schmidt, an active AI investor, stands to benefit enormously, commercially, if the US government implements his proposed plan to boost AI research.

Viewed through a cynical lens, then, the donation can be seen as virtue signaling. Schmidt’s personal fortune is estimated at $24 billion, and he has poured hundreds of millions into other, arguably less ethics-minded AI ventures and funds, including his own.

Schmidt denies that this is the case, of course.

“Artificial intelligence and other emerging technologies are reshaping our economy and society,” he said in an emailed statement. “Ensuring that they are aligned with human values is critical, and I am proud to support OpenAI’s new [grants] to develop and control artificial intelligence responsibly for the public benefit.”

Indeed, the involvement of a figure with such transparent commercial motives raises the question: will OpenAI’s superalignment research, along with the research it encourages the community to submit to its future conference, be made available for anyone to use as they see fit?

The Superalignment team assured me that, yes, both OpenAI’s research — including code — and the work of others who receive grants and prizes from OpenAI for superalignment-related work will be shared publicly. We’ll hold the company to that.

“Contributing not only to the safety of our models, but also to the safety of other labs’ models and advanced artificial intelligence in general is part of our mission,” Aschenbrenner said. “It’s really the core of our mission to build [AI] for the benefit of all mankind, safely. And we believe that doing this research is absolutely necessary to make it beneficial and safe.”
