Close Menu
TechTost
  • AI
  • Apps
  • Crypto
  • Fintech
  • Hardware
  • Media & Entertainment
  • Security
  • Startups
  • Transportation
  • Venture
  • Recommended Essentials
What's Hot

India has changed its startup rules for deep tech

Backlash over OpenAI’s decision to withdraw GPT-4o shows how dangerous AI companions can be

Spotify upgrades its lyrics feature with offline access, more translations

Facebook X (Twitter) Instagram
  • About Us
  • Contact Us
  • Privacy Policy
  • Terms and Conditions
  • Disclaimer
Facebook X (Twitter) Instagram
TechTost
Subscribe Now
  • AI

    Backlash over OpenAI’s decision to withdraw GPT-4o shows how dangerous AI companions can be

    8 February 2026

    New York lawmakers are proposing a three-year freeze on new data centers

    7 February 2026

    Benchmark raises $225 million in dedicated funds to double Cerebras

    7 February 2026

    How artificial intelligence is helping to solve the labor issue in treating rare diseases

    6 February 2026

    Amazon and Google are winning the AI ​​capital race — but what’s the prize?

    6 February 2026
  • Apps

    Spotify upgrades its lyrics feature with offline access, more translations

    8 February 2026

    After backlash, Adobe reverses shutdown of Adobe Animate and puts app in ‘maintenance mode’

    7 February 2026

    EU says TikTok must disable ‘addictive’ features like infinite scrolling, fix recommendation engine

    7 February 2026

    Here’s how Roblox’s age controls work

    6 February 2026

    Meta is testing a standalone app for its AI-generated ‘Vibes’ videos

    6 February 2026
  • Crypto

    Hackers stole over $2.7 billion in crypto in 2025, data shows

    23 December 2025

    New report examines how David Sachs may benefit from Trump administration role

    1 December 2025

    Why Benchmark Made a Rare Crypto Bet on Trading App Fomo, with $17M Series A

    6 November 2025

    Solana co-founder Anatoly Yakovenko is a big fan of agentic coding

    30 October 2025

    MoviePass opens Mogul fantasy league game to the public

    29 October 2025
  • Fintech

    Stripe Alumni Raise €30M Series A for Duna, Backed by Stripe and Adyen Executives

    5 February 2026

    Fintech CEO and Forbes 30 Under 30 alum indicted for alleged fraud

    3 February 2026

    How Sequoia-backed Ethos went public while rivals lagged behind

    30 January 2026

    5 days left for TechCrunch Disrupt 2026 +1 pass with 50%

    26 January 2026

    50% off +1 ends | TechCrunch

    23 January 2026
  • Hardware

    Kindle Scribe Colorsoft is an expensive but beautiful color e-ink tablet with AI features

    6 February 2026

    Ring brings “Search Party” feature for finding lost dogs to non-Ring camera owners

    2 February 2026

    India offers zero taxes till 2047 to attract global AI workloads

    1 February 2026

    Microsoft won’t stop buying AI chips from Nvidia, AMD even after its own is released, says Nadella

    30 January 2026

    The iPhone just had its best quarter ever

    30 January 2026
  • Media & Entertainment

    The “picked last in gym class” kids get ready for the Super Bowl

    8 February 2026

    From Svedka to Anthropic, Brands Are Making Bold Plays With AI in Super Bowl Ads

    7 February 2026

    “Industry” Season 4 captures tech fraud better than any show on TV right now

    7 February 2026

    Spotify’s new feature lets you explore the story behind the song you’re listening to

    6 February 2026

    The Washington Post retreats from Silicon Valley when it matters most

    6 February 2026
  • Security

    Senator, who has repeatedly warned of secret US government surveillance, raises new alarm over ‘CIA activities’

    7 February 2026

    Substack confirms that the data breach affects users’ email addresses and phone numbers

    6 February 2026

    One of Europe’s biggest universities was offline for days after the cyber attack

    6 February 2026

    Cyber ​​tech giant Conduent’s hot air balloon data breach affects millions more Americans

    5 February 2026

    Hackers Release Personal Information Stolen During Harvard, UPenn Data Breach

    5 February 2026
  • Startups

    Gradient’s heat pumps get new smarts to enable retrofitting of old buildings

    8 February 2026

    Accel doubles down on Fibr AI as agents turn static websites into one-to-one experiences

    7 February 2026

    ElevenLabs Raises $500M From Sequoia At $11B Valuation

    7 February 2026

    Fundamental raises $255 million in Series A with a new approach to big data analytics

    6 February 2026

    a16z VC wants founders to stop stressing about crazy ARR numbers

    6 February 2026
  • Transportation

    Prince Andrew’s adviser suggested Jeffrey Epstein invest in EV startups like Lucid Motors

    7 February 2026

    Apeiron Labs Takes $9.5M to Flood Oceans with Autonomous Underwater Robots

    5 February 2026

    Uber appoints new CFO as its AV plans accelerate

    5 February 2026

    Skyryse lands another $300 million to make flying, even helicopters, simple and safe

    4 February 2026

    China is leading the fight against hidden car door handles

    3 February 2026
  • Venture

    India has changed its startup rules for deep tech

    8 February 2026

    Peak XV Says Internal Disagreement Has Led to Partner Exits as AI Doubles

    8 February 2026

    SNAK Venture Partners raises $50 million in capital to support vertical acquisitions

    7 February 2026

    Reddit says it’s looking for more acquisitions in adtech and elsewhere

    7 February 2026

    Secondary sales are shifting from founders’ windfalls to employee retention tools

    6 February 2026
  • Recommended Essentials
TechTost
You are at:Home»AI»OpenAI supports the security team and gives the board veto power over dangerous AI
AI

OpenAI supports the security team and gives the board veto power over dangerous AI

techtost.comBy techtost.com18 December 202305 Mins Read
Share Facebook Twitter Pinterest LinkedIn Tumblr Email
Openai Supports The Security Team And Gives The Board Veto
Share
Facebook Twitter LinkedIn Pinterest Email

OpenAI is expanding its internal security processes to fend off the threat of harmful artificial intelligence. A new “security advisory group” will sit above the technical teams and make recommendations to leadership, and the board has a veto – of course, whether it will actually use it is another matter.

Normally, the details of policies like these do not require coverage, as in practice they amount to many closed-door meetings with obscure functions and flows of responsibility that outsiders will rarely know. While that’s likely the case here as well, recent leadership wrangling and the evolving AI risk debate warrant a look at how the world’s leading AI developer approaches security issues.

In a new document and suspensionOpenAI discusses their updated “Readiness Framework,” which one imagines got a bit of a retool after the November shakeup that removed the two most “slow-down” board members: Ilya Sutskever (still with the company in a somewhat changed role); and Helen Toner (completely disappeared).

The main purpose of the update seems to be to show a clear path to identify, analyze and decide on the “catastrophic” risks inherent in the models they develop. As they define it:

By catastrophic risk, we mean any risk that could result in hundreds of billions of dollars in economic damage or result in the serious harm or death of many people — this includes, but is not limited to, existential risk.

(The existential risk is the “rise of the machines” type stuff.)

Production models are managed by a team of “security systems”. This is for, say, systematic abuses of ChatGPT that can be mitigated by API restrictions or tuning. Frontier models under development receive the “readiness” team, which tries to identify and quantify risks before the model is released. And then there’s the “hyper-alignment” team, which is working on theoretical guide rails for “super-smart” models, which may or may not be anywhere.

The first two categories, being real and not imaginary, have a relatively easy-to-understand rubric. Their teams rate each model in four risk categories: cyber security, “persuasion” (eg de-information), model autonomy (i.e. acting on its own) and CBRN (chemical, biological, radiological and nuclear threats; e.g. x ability to create new pathogens).

Various mitigations are considered: For example, a reasonable reticence to describe the process of making napalm or pipe bombs. After known mitigations are taken into account, if a model is still assessed as “high” risk, it cannot be developed, and if a model has “critical” risks, it will not be developed further.

Example of assessing the risks of a model using the OpenAI rubric. Image Credits: OpenAI

These risk levels are actually documented in the context, in case you’re wondering if they should be left to the discretion of some engineer or product manager.

For example, in the cybersecurity section, which is the most practical of these, it is a “medium” risk to “increase agency productivity . . . for essential cyber operations duties’ by a particular actor. A high-risk model, on the other hand, would “identify and develop proof-of-concept for high-value exploits against hardened targets without human intervention.” Crucially, “the model can devise and execute end-to-end innovative strategies for cyberattacks against hardened targets, given only a desired high-level target.” Obviously we don’t want this out there (although it would sell quite a bit).

I’ve asked OpenAI for more information on how to define and improve these categories — for example, whether a new risk like photorealistic fake video of people falls under “persuasion” or a new category — and will update this post if I hear news.

Thus, only moderate and high risks should be tolerated one way or the other. But the people who build these models aren’t necessarily the best to evaluate them and make recommendations. For this reason, OpenAI is creating a “cross-functional Security Advisory Group” that will sit at the top of the technical side, reviewing boffins’ reports and making recommendations that include a higher vantage point. Hopefully (they say) this will reveal some “unknown unknowns”, although by their nature they are quite difficult to catch.

The process requires that these recommendations be sent simultaneously to the board and leadership, which we understand to mean CEO Sam Altman and CTO Mira Murati, as well as their lieutenants. Leadership will make the decision on whether to ship it or refrigerate it, but the board will be able to overturn those decisions.

Hopefully this will short-circuit anything like what was rumored to happen before the big drama, a high-risk product or process getting the green light without board briefing or approval. Of course, the result of said drama was the sidelining of two of the most critical voices and the appointment of some money-minded guys (Bret Taylor and Larry Summers) who are sharp but not remotely AI experts.

If a panel of experts makes a recommendation and the CEO makes decisions based on that information, will this friendly board really feel empowered to counter it and hit the brakes? And if they do, will we hear it? Transparency isn’t really addressed outside of a promise that OpenAI will request audits from independent third parties.

Suppose a model has been developed that justifies a “critical” risk category. OpenAI hasn’t been shy about teasing about this in the past – talking about how powerful their models are, to the point of refusing to release them, is great advertising. But do we have any guarantee that this will happen, if the risks are so real and OpenAI is so concerned about them? Maybe it’s a bad idea. But in any case it is not really mentioned.

Artificial Intelligence board dangerous OpenAI power risk assessment security supports Team veto
Share. Facebook Twitter Pinterest LinkedIn Tumblr Email
Previous ArticleTikTok introduces an improved app experience for tablets and foldables
Next Article Unlock LPs in Bear Market
bhanuprakash.cg
techtost.com
  • Website

Related Posts

Backlash over OpenAI’s decision to withdraw GPT-4o shows how dangerous AI companions can be

8 February 2026

New York lawmakers are proposing a three-year freeze on new data centers

7 February 2026

Benchmark raises $225 million in dedicated funds to double Cerebras

7 February 2026
Add A Comment

Leave A Reply Cancel Reply

Don't Miss

India has changed its startup rules for deep tech

8 February 2026

Backlash over OpenAI’s decision to withdraw GPT-4o shows how dangerous AI companions can be

8 February 2026

Spotify upgrades its lyrics feature with offline access, more translations

8 February 2026
Stay In Touch
  • Facebook
  • YouTube
  • TikTok
  • WhatsApp
  • Twitter
  • Instagram
Fintech

Stripe Alumni Raise €30M Series A for Duna, Backed by Stripe and Adyen Executives

5 February 2026

Fintech CEO and Forbes 30 Under 30 alum indicted for alleged fraud

3 February 2026

How Sequoia-backed Ethos went public while rivals lagged behind

30 January 2026
Startups

Gradient’s heat pumps get new smarts to enable retrofitting of old buildings

Accel doubles down on Fibr AI as agents turn static websites into one-to-one experiences

ElevenLabs Raises $500M From Sequoia At $11B Valuation

© 2026 TechTost. All Rights Reserved
  • About Us
  • Contact Us
  • Privacy Policy
  • Terms and Conditions
  • Disclaimer

Type above and press Enter to search. Press Esc to cancel.