TechTost

AI
This week in AI: OpenAI moves away from security

By techtost.com | 18 May 2024 | 7 Mins Read

Keeping up with an industry as fast-paced as artificial intelligence is a tall order. So, until an AI can do it for you, here’s a helpful roundup of recent stories in the world of machine learning, along with notable research and experiments we didn’t cover on their own.

By the way, TechCrunch plans to launch an AI newsletter soon. Stay tuned. In the meantime, we’re increasing the pace of our semi-regular AI column, previously twice a month (or so), to weekly, so be on the lookout for more editions.

This week in artificial intelligence, OpenAI once again dominated the news cycle (despite Google’s best efforts) with a product launch, but also some palace intrigue. The company unveiled GPT-4o, its most capable production model yet, and just days later effectively disbanded the team working on the problem of developing controls to prevent “superintelligent” AI systems from going rogue.

The group’s breakup generated a lot of headlines, as expected. Reports, including our own, suggest that OpenAI deprioritized the team’s safety research in favor of launching new products like the aforementioned GPT-4o, ultimately leading to the resignation of the team’s two co-leads, Jan Leike and OpenAI co-founder Ilya Sutskever.

Superintelligent AI is more theory than reality at this point. It’s not clear when — or if — the tech industry will achieve the innovations necessary to create artificial intelligence capable of accomplishing any task a human can. But this week’s coverage seems to confirm one thing: that OpenAI’s leadership—particularly CEO Sam Altman—has increasingly chosen to prioritize products over safeguards.

Altman reportedly “enraged” Sutskever by rushing to introduce AI-powered features at OpenAI’s first developer conference last November. And he is said to have been critical of Helen Toner, director at Georgetown’s Center for Security and Emerging Technology and a former OpenAI board member, over a paper she co-authored that cast OpenAI’s approach to safety in a critical light, to the point where he attempted to push her off the board.

Over the last year or so, OpenAI has let its chatbot store fill up with spam and has (allegedly) scraped data from YouTube against that platform’s terms of service, all while expressing ambitions to let its AI generate depictions of porn and gore. Safety certainly seems to have taken a back seat at the company, and a growing number of OpenAI safety researchers have concluded that their work would be better supported elsewhere.

Here are some other notable AI stories from the past few days:

  • OpenAI + Reddit: In more OpenAI news, the company has reached an agreement with Reddit to use the social networking site’s data to train artificial intelligence models. Wall Street welcomed the deal with open arms — but Reddit users might not be so happy.
  • Google AI: Google hosted its annual I/O developer conference this week, during which it debuted a ton of artificial intelligence products. We’ve rounded them up here, from the video-generating Veo to AI-curated results in Google Search to upgrades to Google’s Gemini chatbot apps.
  • Anthropic hires Krieger: Mike Krieger, one of the co-founders of Instagram and, most recently, the co-founder of personalized news app Artifact (which TechCrunch parent company Yahoo recently acquired), is joining Anthropic as the company’s first chief product officer. He will oversee both the company’s consumer and enterprise efforts.
  • AI for kids: Anthropic announced last week that it will begin allowing developers to create kid-focused apps and tools built on its AI models, as long as they follow certain rules. Notably, rivals such as Google don’t allow their AI to be integrated into apps aimed at younger users.
  • AI Film Festival: AI startup Runway held its second AI film festival earlier this month. The takeaway? Some of the most powerful moments in the showcase came not from the AI, but from the more human elements.

More machine learning

AI safety is obviously top of mind this week in light of the OpenAI departures, but Google DeepMind is plowing ahead with a new “Frontier Safety Framework.” Basically, it’s the organization’s strategy for identifying and hopefully preventing any runaway capabilities; it doesn’t have to be AGI, it could be a malware generator gone mad or the like.

Image Credits: Google DeepMind

The framework has three steps: 1. Identify potentially harmful capabilities in a model by simulating its paths of development. 2. Evaluate models regularly to detect when they have reached known “critical capability levels.” 3. Apply a mitigation plan to prevent exfiltration (by others or the model itself) or problematic deployment. There are more details here. It might sound like an obvious course of action, but it’s important to formalize these processes, otherwise everyone is just winging it. That’s how you get bad AI.
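DeepMind hasn’t published code for any of this, but the evaluate-then-mitigate loop in steps 2 and 3 can be sketched in a few lines; the capability names, scores, and thresholds below are invented purely for illustration:

```python
# Hypothetical sketch of steps 2 and 3: compare periodic eval scores against
# known "critical capability levels" and trigger mitigations on a breach.
# Capability names and thresholds are invented for illustration only.
CRITICAL_LEVELS = {"autonomy": 0.8, "cyber_offense": 0.6}

def run_evals(model_scores):
    """Step 2: flag any capability at or above its critical level."""
    return [cap for cap, level in CRITICAL_LEVELS.items()
            if model_scores.get(cap, 0.0) >= level]

def mitigation_plan(breached):
    """Step 3: gate deployment and lock down weights (against exfiltration,
    by outsiders or the model itself) when a threshold is crossed."""
    if not breached:
        return "deploy"
    return "restrict-weights-and-pause: " + ", ".join(sorted(breached))

print(mitigation_plan(run_evals({"autonomy": 0.3, "cyber_offense": 0.7})))
```

The point of formalizing even a check this simple is that the thresholds are written down in advance rather than decided ad hoc after an alarming eval result.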

A rather different risk has been identified by Cambridge researchers, who are rightly concerned about the proliferation of chatbots trained on the data of a dead person in order to provide a superficial simulacrum of that person. You may (like me) find the whole concept a bit repulsive, but it could be useful in grief management and other scenarios if we are careful. The problem is that we are not being careful.

Image Credits: University of Cambridge / T. Hollanek

“This area of artificial intelligence is an ethical minefield,” said lead researcher Katarzyna Nowaczyk-Basińska. “We need to start thinking now about how we mitigate the social and psychological risks of digital immortality, because the technology is already here.” The team identifies numerous scams, potential bad and good outcomes, and discusses the concept generally (including fake services) in a paper published in Philosophy & Technology. Black Mirror predicts the future once again!

In less creepy applications of AI, physicists at MIT are looking at a useful (to them) tool for predicting the phase or state of a physical system, normally a statistical task that can grow onerous with more complex systems. But train a machine learning model on the right data and ground it with some known material characteristics of a system, and you have a considerably more efficient way to go about it. Just another example of how ML is finding niches even in advanced science.
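As a toy illustration of the general idea (not the MIT group’s actual method), one can generate synthetic “ordered” and “disordered” spin configurations, ground the model in a single known physical observable (absolute magnetization), and fit a tiny classifier to separate the two phases; all data and parameters here are made up:

```python
import numpy as np

rng = np.random.default_rng(42)

def make_configs(n, ordered):
    """Toy 'spin lattice' samples: ordered phase ~ aligned spins with a few
    thermal flips, disordered phase ~ random spins. Purely illustrative."""
    if ordered:
        flips = rng.random((n, 64)) < 0.05
        return np.where(flips, -1.0, 1.0)
    return rng.choice([-1.0, 1.0], size=(n, 64))

def features(x):
    # Ground the model in a known physical observable: |magnetization|.
    return np.abs(x.mean(axis=1, keepdims=True))

X = np.vstack([features(make_configs(200, True)),
               features(make_configs(200, False))])
y = np.concatenate([np.ones(200), np.zeros(200)])

# Logistic regression via plain gradient descent (no external ML library).
w, b = 0.0, 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X[:, 0] * w + b)))
    w -= 0.5 * ((p - y) * X[:, 0]).mean()
    b -= 0.5 * (p - y).mean()

preds = (1.0 / (1.0 + np.exp(-(X[:, 0] * w + b)))) > 0.5
print(f"training accuracy: {(preds == y).mean():.2f}")
```

The physics-informed feature does most of the work here, which is the point: grounding the model in known characteristics of the system makes the statistical task far cheaper than brute-force simulation.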

At CU Boulder, they talk about how artificial intelligence can be used in disaster management. The technology can be useful for quickly predicting where resources will be needed, mapping damage, and even helping train responders, but people are (reasonably) hesitant to apply it to life-and-death scenarios.

Workshop participants.
Image Credits: CU Boulder

Professor Amir Behzadan is trying to advance this, saying that “Human-centered AI leads to more effective disaster response and recovery practices by promoting collaboration, understanding, and participation among team members, survivors, and stakeholders.” It’s all still at the workshop stage, but it’s important to think deeply about this stuff before trying to, say, automate the distribution of aid after a hurricane.

Finally, some interesting work from Disney Research, which looked at how to diversify the output of diffusion image-generation models, which can produce similar results over and over for certain prompts. Their solution? “Our sampling strategy anneals the conditioning signal by adding scheduled, monotonically decreasing Gaussian noise to the conditioning vector during inference to balance diversity and condition alignment.” I simply couldn’t have put it better myself.

Image Credits: Disney Research

The result is a much greater variety in angles, settings and general appearance in the image outputs. Sometimes you want this, sometimes you don’t, but it’s nice to have the option.
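As a rough, unofficial sketch of what that quoted sentence describes (this is not Disney’s code; the schedule and magnitudes are invented), the conditioning vector gets perturbed with Gaussian noise whose scale decays to zero over the sampling trajectory:

```python
import numpy as np

def anneal_conditioning(cond_vec, step, total_steps, init_sigma=0.5, rng=None):
    """Add scheduled, monotonically decreasing Gaussian noise to a
    conditioning vector: noisy early (more output diversity), clean late
    (better condition alignment). Parameters are purely illustrative."""
    if rng is None:
        rng = np.random.default_rng()
    # Linear schedule: sigma shrinks from init_sigma down to 0.
    sigma = init_sigma * (1.0 - step / total_steps)
    return cond_vec + rng.normal(0.0, sigma, size=cond_vec.shape)

# Usage inside a hypothetical diffusion sampling loop:
cond = np.ones(8)  # stand-in for a text/image conditioning embedding
for t in range(50):
    noisy_cond = anneal_conditioning(cond, t, 50)
    # a denoiser would consume (x_t, noisy_cond) here
```

Early steps see a jittered condition, so trajectories fan out; by the final steps the condition is exact, pulling each sample back into alignment with the prompt.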
