Close Menu
TechTost
  • AI
  • Apps
  • Crypto
  • Fintech
  • Hardware
  • Media & Entertainment
  • Security
  • Startups
  • Transportation
  • Venture
  • Recommended Essentials
What's Hot

Hyperscale Power is the latest startup to challenge 140-year-old transformer technology

YouTube extends fake AI detection to politicians, government officials and journalists

US military contractor likely built iPhone hacking tools used by Russian spies in Ukraine

Facebook X (Twitter) Instagram
  • About Us
  • Contact Us
  • Privacy Policy
  • Terms and Conditions
  • Disclaimer
Facebook X (Twitter) Instagram
TechTost
Subscribe Now
  • AI

    Sandbar secures $23M Series A for AI note-taking ring

    10 March 2026

    OpenAI and Google employees are quick to defend Anthropic in the DOD lawsuit

    10 March 2026

    OpenAI hardware executive Caitlin Kalinowski resigns in response to Pentagon deal

    9 March 2026

    Will Pentagon standoff over Anthropic scare startups out of defense work?

    9 March 2026

    A roadmap for artificial intelligence, if anyone will listen

    8 March 2026
  • Apps

    X says it will suspend creators from revenue sharing program for AI posts without ‘armed conflict’ tag

    10 March 2026

    Periwinkle makes it even easier to host social media on Bluesky’s AT Protocol

    10 March 2026

    Meta will enable competing AI chatbots on WhatsApp in Europe, but for a fee

    9 March 2026

    Match Group COO out as dating apps struggle to connect with Gen Z

    9 March 2026

    Roblox launches real-time AI chat rewording to filter out banned language

    8 March 2026
  • Crypto

    Hackers stole over $2.7 billion in crypto in 2025, data shows

    23 December 2025

    New report examines how David Sachs may benefit from Trump administration role

    1 December 2025

    Why Benchmark Made a Rare Crypto Bet on Trading App Fomo, with $17M Series A

    6 November 2025

    Solana co-founder Anatoly Yakovenko is a big fan of agentic coding

    30 October 2025

    MoviePass opens Mogul fantasy league game to the public

    29 October 2025
  • Fintech

    X taps William Shatner to give invitations to his payment service, X Money

    4 March 2026

    Stripe wants to turn your AI costs into a profit center

    3 March 2026

    3 days left: Save up to $680 on your ticket to Disrupt 2026

    25 February 2026

    More startups surpass $10M ARR in 3 months than ever before

    24 February 2026

    Stripe, PayPal Ventures Bet on India’s Xflow to Fix Cross-Border B2B Payments

    24 February 2026
  • Hardware

    Hyperscale Power is the latest startup to challenge 140-year-old transformer technology

    10 March 2026

    Whoop is launching a new blood test focused on women’s health

    10 March 2026

    Honor says its ‘Robot phone’ with moving camera can dance to music

    8 March 2026

    Apple unveils M5 Pro and M5 Max chips with new ‘Fusion Architecture’

    8 March 2026

    Eight Sleep raises $50 million at $1.5 billion valuation

    7 March 2026
  • Media & Entertainment

    YouTube extends fake AI detection to politicians, government officials and journalists

    10 March 2026

    Xprize Founder Peter Diamandis Launches New Contest To Announce New ‘Star Trek’

    10 March 2026

    It looks like the DOJ isn’t going to break up Live Nation and Ticketmaster

    9 March 2026

    PopSockets founder David Barnett talks about building a viral business

    7 March 2026

    Netflix acquires Ben Affleck’s AI film production company InterPositive

    6 March 2026
  • Security

    US military contractor likely built iPhone hacking tools used by Russian spies in Ukraine

    10 March 2026

    An iPhone hacking toolkit used by Russian spies likely came from a US military contractor

    10 March 2026

    Russian government hackers are targeting Signal and WhatsApp users, Dutch spies warn

    9 March 2026

    The Ring’s Jamie Siminoff tries to calm privacy fears from the Super Bowl, but his answers may not help

    9 March 2026

    Google says half of all zero-days it tracked in 2025 targeted buggy enterprise technology

    7 March 2026
  • Startups

    AI networking startup Eridu emerges from stealth with hefty $200M Series A

    10 March 2026

    Bluesky CEO Jay Graber is stepping down

    10 March 2026

    Science Corp. raises $230 million as it races to bring its brain implant to market

    6 March 2026

    EXCLUSIVE: Luma Launches Creative AI Agents Powered by New ‘Unified Intelligence’ Models

    6 March 2026

    How 1,000+ Customer Calls Shaped a Groundbreaking AI Business

    5 March 2026
  • Transportation

    Electric air taxi maker Archer hits back at Joby alleging hidden Chinese ties

    10 March 2026

    Electric air taxis are set to fly in 26 states

    10 March 2026

    The 2027 Chevy Bolt is the McRib of the automotive world

    9 March 2026

    TechCrunch Mobility: Rivian’s R2 game

    9 March 2026

    OSHA death detection at Rivian warehouse

    7 March 2026
  • Venture

    This SpaceX Veteran Says The Next Big Thing In Space Is Satellites Returning To Earth

    10 March 2026

    Founders Fund is approaching $6 billion for its latest growth fund, sources say

    10 March 2026

    Robinhood’s startup fund stumbles in its NYSE debut

    7 March 2026

    City Detect, which uses artificial intelligence to help cities stay safe and clean, raises $13M Series A

    7 March 2026

    Lio raises $30 million from Andreessen Horowitz and others to automate business procurement

    5 March 2026
  • Recommended Essentials
TechTost
You are at:Home»AI»OpenAI supports the security team and gives the board veto power over dangerous AI
AI

OpenAI supports the security team and gives the board veto power over dangerous AI

techtost.comBy techtost.com18 December 202305 Mins Read
Share Facebook Twitter Pinterest LinkedIn Tumblr Email
Openai Supports The Security Team And Gives The Board Veto
Share
Facebook Twitter LinkedIn Pinterest Email

OpenAI is expanding its internal security processes to fend off the threat of harmful artificial intelligence. A new “security advisory group” will sit above the technical teams and make recommendations to leadership, and the board has a veto – of course, whether it will actually use it is another matter.

Normally, the details of policies like these do not require coverage, as in practice they amount to many closed-door meetings with obscure functions and flows of responsibility that outsiders will rarely know. While that’s likely the case here as well, recent leadership wrangling and the evolving AI risk debate warrant a look at how the world’s leading AI developer approaches security issues.

In a new document and suspensionOpenAI discusses their updated “Readiness Framework,” which one imagines got a bit of a retool after the November shakeup that removed the two most “slow-down” board members: Ilya Sutskever (still with the company in a somewhat changed role); and Helen Toner (completely disappeared).

The main purpose of the update seems to be to show a clear path to identify, analyze and decide on the “catastrophic” risks inherent in the models they develop. As they define it:

By catastrophic risk, we mean any risk that could result in hundreds of billions of dollars in economic damage or result in the serious harm or death of many people — this includes, but is not limited to, existential risk.

(The existential risk is the “rise of the machines” type stuff.)

Production models are managed by a team of “security systems”. This is for, say, systematic abuses of ChatGPT that can be mitigated by API restrictions or tuning. Frontier models under development receive the “readiness” team, which tries to identify and quantify risks before the model is released. And then there’s the “hyper-alignment” team, which is working on theoretical guide rails for “super-smart” models, which may or may not be anywhere.

The first two categories, being real and not imaginary, have a relatively easy-to-understand rubric. Their teams rate each model in four risk categories: cyber security, “persuasion” (eg de-information), model autonomy (i.e. acting on its own) and CBRN (chemical, biological, radiological and nuclear threats; e.g. x ability to create new pathogens).

Various mitigations are considered: For example, a reasonable reticence to describe the process of making napalm or pipe bombs. After known mitigations are taken into account, if a model is still assessed as “high” risk, it cannot be developed, and if a model has “critical” risks, it will not be developed further.

Example of assessing the risks of a model using the OpenAI rubric. Image Credits: OpenAI

These risk levels are actually documented in the context, in case you’re wondering if they should be left to the discretion of some engineer or product manager.

For example, in the cybersecurity section, which is the most practical of these, it is a “medium” risk to “increase agency productivity . . . for essential cyber operations duties’ by a particular actor. A high-risk model, on the other hand, would “identify and develop proof-of-concept for high-value exploits against hardened targets without human intervention.” Crucially, “the model can devise and execute end-to-end innovative strategies for cyberattacks against hardened targets, given only a desired high-level target.” Obviously we don’t want this out there (although it would sell quite a bit).

I’ve asked OpenAI for more information on how to define and improve these categories — for example, whether a new risk like photorealistic fake video of people falls under “persuasion” or a new category — and will update this post if I hear news.

Thus, only moderate and high risks should be tolerated one way or the other. But the people who build these models aren’t necessarily the best to evaluate them and make recommendations. For this reason, OpenAI is creating a “cross-functional Security Advisory Group” that will sit at the top of the technical side, reviewing boffins’ reports and making recommendations that include a higher vantage point. Hopefully (they say) this will reveal some “unknown unknowns”, although by their nature they are quite difficult to catch.

The process requires that these recommendations be sent simultaneously to the board and leadership, which we understand to mean CEO Sam Altman and CTO Mira Murati, as well as their lieutenants. Leadership will make the decision on whether to ship it or refrigerate it, but the board will be able to overturn those decisions.

Hopefully this will short-circuit anything like what was rumored to happen before the big drama, a high-risk product or process getting the green light without board briefing or approval. Of course, the result of said drama was the sidelining of two of the most critical voices and the appointment of some money-minded guys (Bret Taylor and Larry Summers) who are sharp but not remotely AI experts.

If a panel of experts makes a recommendation and the CEO makes decisions based on that information, will this friendly board really feel empowered to counter it and hit the brakes? And if they do, will we hear it? Transparency isn’t really addressed outside of a promise that OpenAI will request audits from independent third parties.

Suppose a model has been developed that justifies a “critical” risk category. OpenAI hasn’t been shy about teasing about this in the past – talking about how powerful their models are, to the point of refusing to release them, is great advertising. But do we have any guarantee that this will happen, if the risks are so real and OpenAI is so concerned about them? Maybe it’s a bad idea. But in any case it is not really mentioned.

Artificial Intelligence board dangerous OpenAI power risk assessment security supports Team veto
Share. Facebook Twitter Pinterest LinkedIn Tumblr Email
Previous ArticleTikTok introduces an improved app experience for tablets and foldables
Next Article Unlock LPs in Bear Market
bhanuprakash.cg
techtost.com
  • Website

Related Posts

Hyperscale Power is the latest startup to challenge 140-year-old transformer technology

10 March 2026

Sandbar secures $23M Series A for AI note-taking ring

10 March 2026

OpenAI and Google employees are quick to defend Anthropic in the DOD lawsuit

10 March 2026
Add A Comment

Leave A Reply Cancel Reply

Don't Miss

Hyperscale Power is the latest startup to challenge 140-year-old transformer technology

10 March 2026

YouTube extends fake AI detection to politicians, government officials and journalists

10 March 2026

US military contractor likely built iPhone hacking tools used by Russian spies in Ukraine

10 March 2026
Stay In Touch
  • Facebook
  • YouTube
  • TikTok
  • WhatsApp
  • Twitter
  • Instagram
Fintech

X taps William Shatner to give invitations to his payment service, X Money

4 March 2026

Stripe wants to turn your AI costs into a profit center

3 March 2026

3 days left: Save up to $680 on your ticket to Disrupt 2026

25 February 2026
Startups

AI networking startup Eridu emerges from stealth with hefty $200M Series A

Bluesky CEO Jay Graber is stepping down

Science Corp. raises $230 million as it races to bring its brain implant to market

© 2026 TechTost. All Rights Reserved
  • About Us
  • Contact Us
  • Privacy Policy
  • Terms and Conditions
  • Disclaimer

Type above and press Enter to search. Press Esc to cancel.