OpenAI strengthens its safety team and gives the board veto power over risky AI

By techtost.com | 18 December 2023 | 5 min read

OpenAI is expanding its internal safety processes to fend off the threat of harmful AI. A new “safety advisory group” will sit above the technical teams and make recommendations to leadership, and the board has been granted veto power – of course, whether it will actually use it is another matter entirely.

Normally the ins and outs of policies like these don’t necessitate coverage, as in practice they amount to a lot of closed-door meetings with obscure functions and flows of responsibility that outsiders will rarely be privy to. While that’s likely also the case here, the recent leadership fracas and the evolving AI risk debate warrant taking a look at how the world’s leading AI developer is approaching safety considerations.

In a new document and blog post, OpenAI discusses its updated “Preparedness Framework,” which one imagines got a bit of a retool after November’s shakeup removed the board’s two most “decelerationist” members: Ilya Sutskever (still at the company in a somewhat changed role) and Helen Toner (completely gone).

The main purpose of the update appears to be to show a clear path for identifying, analyzing and deciding what to do about the “catastrophic” risks inherent in the models the company is developing. As they define it:

By catastrophic risk, we mean any risk that could result in hundreds of billions of dollars in economic damage or result in the serious harm or death of many people — this includes, but is not limited to, existential risk.

(Existential risk is the “rise of the machines” type of stuff.)

In-production models are governed by a “safety systems” team; this is for, say, systematic abuses of ChatGPT that can be mitigated with API restrictions or tuning. Frontier models in development get the “preparedness” team, which tries to identify and quantify risks before the model is released. And then there’s the “superalignment” team, which is working on theoretical guardrails for “superintelligent” models, which may or may not be anywhere near.

The first two categories, being real and not imaginary ones, have a relatively easy-to-understand rubric. Their teams rate each model on four risk categories: cybersecurity, “persuasion” (e.g., disinformation), model autonomy (i.e., acting on its own) and CBRN (chemical, biological, radiological and nuclear threats; e.g., the ability to create novel pathogens).

Various mitigations are assumed: for example, a reasonable reticence to describe the process of making napalm or pipe bombs. If, after known mitigations are taken into account, a model is still assessed as having “high” risk, it cannot be deployed, and if a model has any “critical” risks, it will not be developed further.

Example of assessing the risks of a model using the OpenAI rubric. Image Credits: OpenAI
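
To make that gating logic concrete, here is a minimal Python sketch of how a scorecard like the one pictured might be evaluated. The Risk scale, the gate() function and the reading that the “critical” rule applies to pre-mitigation scores while the “high” rule applies after mitigations are my own illustration, not code or terminology published by OpenAI.

    from enum import IntEnum

    class Risk(IntEnum):
        LOW = 0
        MEDIUM = 1
        HIGH = 2
        CRITICAL = 3

    # The four categories the framework tracks.
    CATEGORIES = ("cybersecurity", "persuasion", "model autonomy", "CBRN")

    def gate(pre_mitigation: dict, post_mitigation: dict) -> str:
        """Apply the two gating rules: any 'critical' pre-mitigation risk
        halts development; any remaining 'high' post-mitigation risk
        blocks deployment."""
        if max(pre_mitigation.values()) >= Risk.CRITICAL:
            return "halt further development"
        if max(post_mitigation.values()) >= Risk.HIGH:
            return "do not deploy"
        return "deployable (medium risk or below is tolerated)"

    # Hypothetical scorecard: mitigations bring cybersecurity down from
    # high to medium, so the model clears the deployment bar.
    pre = {c: Risk.LOW for c in CATEGORIES}
    pre["cybersecurity"] = Risk.HIGH
    post = dict(pre, cybersecurity=Risk.MEDIUM)
    print(gate(pre, post))  # deployable (medium risk or below is tolerated)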

These risk levels are actually documented in the framework, in case you were wondering whether they’re to be left to the discretion of some engineer or product manager.

For example, in the cybersecurity section, the most practical of them, it is a “medium” risk to “increase the productivity of operators . . . on key cyber operation tasks” by a certain factor. A high-risk model, on the other hand, would “identify and develop proofs-of-concept for high-value exploits against hardened targets without human intervention.” Critical is “model can devise and execute end-to-end novel strategies for cyberattacks against hardened targets given only a high-level desired goal.” Obviously we don’t want that out there (though it would sell for quite a bit).

I’ve asked OpenAI for more information on how these categories are defined and refined — for example, whether a new risk like photorealistic fake video of people goes under “persuasion” or a new category — and will update this post if I hear back.

So, only medium and high risks are to be tolerated one way or the other. But the people making those models aren’t necessarily the best ones to evaluate them and make recommendations. For that reason, OpenAI is creating a cross-functional “Safety Advisory Group” that will sit on top of the technical side, reviewing the boffins’ reports and making recommendations from a higher vantage point. Hopefully (they say) this will surface some “unknown unknowns,” though by their nature those are fairly difficult to catch.

The process requires that these recommendations be sent simultaneously to the board and to leadership, which we understand to mean CEO Sam Altman and CTO Mira Murati, plus their lieutenants. Leadership will make the call on whether to ship it or shelve it, but the board will be able to reverse those decisions.
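
As a rough sketch of that flow (the names and structures here are purely illustrative, not anything OpenAI has described as code):

    from dataclasses import dataclass

    @dataclass
    class Recommendation:
        model: str
        advice: str       # e.g. "do not deploy"
        rationale: str

    def decide(rec: Recommendation, leadership_ships: bool,
               board_reverses: bool) -> str:
        """The recommendation reaches leadership and the board at the
        same time; leadership makes the call, and the board can
        reverse it."""
        decision = "ship" if leadership_ships else "shelve"
        if board_reverses:
            decision = "shelve" if decision == "ship" else "ship"
        return decision

    rec = Recommendation("hypothetical-frontier-model", "do not deploy",
                         "high post-mitigation cyber risk")
    print(decide(rec, leadership_ships=True, board_reverses=True))  # shelve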

Hopefully this will short-circuit anything like what was rumored to have happened before the big drama: a high-risk product or process getting greenlit without the board’s awareness or approval. Of course, the result of said drama was the sidelining of two of the board’s more critical voices and the appointment of some money-minded guys (Bret Taylor and Larry Summers) who are sharp but not remotely AI experts.

If a panel of experts makes a recommendation and the CEO decides based on that information, will this friendly board really feel empowered to contradict them and hit the brakes? And if they do, will we hear about it? Transparency isn’t really addressed outside of a promise that OpenAI will solicit audits from independent third parties.

Say a model is developed that warrants a “critical” risk category. OpenAI hasn’t been shy about tooting its horn about this kind of thing in the past – talking about how wildly powerful their models are, to the point of declining to release them, is great advertising. But do we have any guarantee this will happen, if the risks are so real and OpenAI is so concerned about them? Maybe it’s a bad idea. Either way, it isn’t really mentioned.
