OpenAI believes superhuman artificial intelligence is coming — and wants to build tools to control it

By techtost.com · 15 December 2023 · 8 Mins Read

While investors prepared to go nuclear after Sam Altman’s unceremonious ouster from OpenAI, and Altman plotted his return to the company, members of OpenAI’s Superalignment team were busy working on the problem of how to control AI that is smarter than humans.

Or at least, that’s the impression they’d like to give.

This week, I got on the phone with three of the Superalignment team members — Collin Burns, Pavel Izmailov and Leopold Aschenbrenner — who were in New Orleans at NeurIPS, the annual machine learning conference, to present OpenAI’s newest project to ensure that AI systems behave as intended.

OpenAI formed the Superalignment team in July to develop ways to guide, regulate and govern “superintelligent” AI systems — that is, theoretical systems with intelligence that far exceeds that of humans.

“Today, we can basically align models that are dumber than us — or maybe around human level at most,” Burns said. “Aligning a model that is actually smarter than us is much, much less obvious — how can we even do that?”

The Superalignment effort is led by OpenAI co-founder and chief scientist Ilya Sutskever, a fact that raised no eyebrows in July — but certainly does now, given that Sutskever was among those who initially pushed for Altman’s firing. While some reporting suggests Sutskever is in a state of limbo after Altman’s return, OpenAI’s PR tells me that Sutskever is indeed — as of today, at least — still heading the Superalignment team.

Superalignment is a bit of a touchy subject within the AI research community. Some argue the subfield is premature; others suggest it’s a red herring.

While Altman has invited comparisons between OpenAI and the Manhattan Project, going so far as to assemble a team to probe AI models to protect against “catastrophic risks,” including chemical and nuclear threats, some experts say there is little evidence to suggest that the startup’s technology will acquire vast, world-beating capabilities anytime soon — or ever. Claims of impending superintelligence, these experts add, serve only to deliberately divert attention from the pressing AI regulatory issues of the day, such as algorithmic bias and AI’s propensity for toxicity.

For what it’s worth, Sutskever appears to believe fervently that AI — not OpenAI’s per se, but some incarnation of it — could one day become an existential threat. He reportedly went so far as to commission and burn a wooden effigy at a company offsite to demonstrate his commitment to preventing AI from harming humanity, and a significant amount of OpenAI’s compute — 20% of its existing computer chips — is allocated to the Superalignment team’s research.

“The progress of artificial intelligence recently has been extremely fast, and I can assure you that it is not slowing down,” Aschenbrenner said. “I think we’ll get to human-level systems very soon, but it won’t stop there — we’ll go straight to superhuman systems… So how do we align superhuman AI systems and make them safe? It really is a problem for all of humanity — perhaps the most important unsolved technical problem of our time.”

The Superalignment team is currently trying to build governance and control frameworks that could apply well to future powerful AI systems. It’s no simple task, since the definition of “superintelligence” — and whether a particular AI system has achieved it — is hotly debated. But the approach the team has settled on for now involves using a weaker, less sophisticated AI model (e.g. GPT-2) to guide a more advanced, sophisticated model (GPT-4) in desirable directions — and away from undesirable ones.

[Image: The Superalignment team’s analogy for aligning superintelligent systems. Image Credits: OpenAI]

“A lot of what we’re trying to do is tell a model what to do and make sure it does it,” Burns said. “How do we get a model to follow instructions and to only help with things that are true, not things that are made up? How do we get a model to tell us whether the code it generated is safe or egregious? These are the kinds of tasks we want to be able to achieve with our research.”

But wait, you might say — what does AI guiding AI have to do with preventing AI from threatening humanity? Well, it’s an analogy: the weak model is meant to be a stand-in for human supervisors, while the strong model represents superintelligent AI. Just as humans may not be able to make sense of a superintelligent AI system, the weak model cannot “get” all the complexities and nuances of the strong model — making the setup useful for testing superalignment hypotheses, the Superalignment team says.

“You can think of a sixth grader trying to supervise a college student,” Izmailov explained. “Say the sixth grader is trying to supervise the college student on a task he sort of knows how to solve… Even though the sixth grader’s supervision may have mistakes in the details, there’s hope that the college student would grasp the gist and be able to do the task better than the supervisor.”

In the Superalignment team’s setup, a weak model fine-tuned on a particular task generates labels that are used to “communicate” the broad strokes of that task to the strong model. Given these labels, the strong model can generalize more or less correctly according to the weak model’s intent — even when the weak model’s labels contain errors and biases, the team found.
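
To make that setup concrete, here is a minimal sketch of the weak-to-strong training loop. It substitutes toy PyTorch MLP classifiers for GPT-2 (the weak supervisor) and GPT-4 (the strong student); every name, architecture, and hyperparameter in it is our own illustrative assumption, not OpenAI’s actual code.

```python
# A minimal sketch of the weak-to-strong setup described above, using toy
# PyTorch MLPs in place of GPT-2 (weak supervisor) and GPT-4 (strong student).
# All names, architectures, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_data(n=2000, d=32):
    # Synthetic binary task; the labeling rule is unknown to both models.
    X = torch.randn(n, d)
    y = (X @ torch.linspace(-1.0, 1.0, d) > 0).long()
    return X, y

def mlp(d, width):
    return nn.Sequential(nn.Linear(d, width), nn.ReLU(), nn.Linear(width, 2))

def train(model, X, y, epochs=200, lr=1e-2):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(X), y).backward()
        opt.step()
    return model

@torch.no_grad()
def accuracy(model, X, y):
    return (model(X).argmax(dim=1) == y).float().mean().item()

X_train, y_train = make_data()
X_heldout, _ = make_data()
X_test, y_test = make_data()

# 1) "Weak supervisor": a small model fine-tuned on a sliver of ground truth,
#    so its supervision has "mistakes in the details".
weak = train(mlp(32, width=4), X_train[:100], y_train[:100])

# 2) The weak model labels held-out data; these imperfect labels are how the
#    broad strokes of the task are "communicated" to the strong model.
with torch.no_grad():
    weak_labels = weak(X_heldout).argmax(dim=1)

# 3) "Strong student": a larger model trained only on the weak labels.
strong = train(mlp(32, width=256), X_heldout, weak_labels)

# The question the team studies: does the strong student generalize beyond
# its imperfect supervisor on fresh data?
print(f"weak supervisor accuracy: {accuracy(weak, X_test, y_test):.3f}")
print(f"strong student accuracy:  {accuracy(strong, X_test, y_test):.3f}")
```

Whether the strong student actually does better than its supervisor depends on the task and on the capability gap between the two models, which is exactly what makes the setup a useful proxy for humans supervising a superintelligence.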

The weak-to-strong approach may even lead to breakthroughs in reducing hallucinations, the team claims.

“Hallucinations are actually very interesting, because internally, the model actually knows whether what it’s saying is fact or fiction,” Aschenbrenner said. “But the way these models are trained today, human supervisors reward them with a ‘thumbs up’ or ‘thumbs down’ for saying things. So sometimes, inadvertently, humans reward the model for saying things that are either false or that the model doesn’t actually know about, and so on. If we’re successful in our research, we should develop techniques where we can basically call up the model’s knowledge, and we could apply that to determine whether something is fact or fiction and use it to reduce hallucinations.”
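
As a loose illustration of what “calling up” the model’s knowledge could look like, here is a toy sketch that thresholds a model’s own output confidence to flag answers it may be making up. This is our simplification under stated assumptions, not the team’s actual technique, which would presumably probe the model’s internals rather than its softmax outputs.

```python
# A toy illustration (our assumption, not OpenAI's method): treat low output
# confidence as a cheap signal that an answer may be fiction rather than fact.
import torch

def flag_possible_fiction(logits: torch.Tensor, threshold: float = 0.9):
    """Flag examples whose top-class probability falls below `threshold`,
    i.e. answers the model itself is unsure about."""
    confidence = torch.softmax(logits, dim=-1).max(dim=-1).values
    return confidence < threshold

# Example: three mock answers; only the middle one is low-confidence.
logits = torch.tensor([[4.0, 0.1], [0.6, 0.5], [0.2, 5.0]])
print(flag_possible_fiction(logits))  # tensor([False,  True, False])
```

A real system would need calibrated confidences, since raw softmax probabilities are often overconfident.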

But the analogy is not perfect. So OpenAI wants to gather ideas.

To that end, OpenAI is launching a $10 million grant program to support technical research on superintelligent alignment, portions of which will go to academic labs, nonprofits, individual researchers, and graduate students. OpenAI also plans to host an academic superalignment conference in early 2025, where it will share and promote the work of the superalignment prize finalists.

Interestingly, part of the funding for the grants will come from former Google CEO and chairman Eric Schmidt. Schmidt — a staunch supporter of Altman — is fast becoming a poster child for AI doomerism, arguing that the arrival of dangerous AI systems is imminent and that regulators aren’t doing enough to prepare. It’s not necessarily out of a sense of altruism, however — reporting from Protocol and Wired notes that Schmidt, an active AI investor, stands to benefit enormously commercially if the US government implements his proposed plan to boost AI research.

Through a cynical lens, then, the donation might be seen as virtue signaling. Schmidt’s personal fortune is estimated at $24 billion, and he has poured hundreds of millions into other, arguably less ethics-focused AI ventures and funds — including his own.

Schmidt denies this is the case, of course.

“Artificial intelligence and other emerging technologies are reshaping our economy and society,” he said in an emailed statement. “Ensuring that they are aligned with human values is critical, and I am proud to support OpenAI’s new [grants] to develop and control artificial intelligence responsibly for public benefit.”

Indeed, the involvement of a figure with such transparent commercial motives raises the question: will OpenAI’s superalignment research — and the research it encourages the community to submit to its future conference — be made available for anyone to use as they see fit?

The Superalignment team assured me that, yes, both OpenAI’s research — including code — and the work of others who receive OpenAI grants and prizes for superalignment-related work will be publicly shared. We’ll hold the company to that.

“Contributing not only to the safety of our models, but also to the safety of other labs’ models and advanced artificial intelligence in general is part of our mission,” Aschenbrenner said. “It’s really the core of our mission to build [AI] for the benefit of all mankind, safely. And we believe that doing this research is absolutely necessary to make it beneficial and safe.”
