OpenAI is looking to hire a new executive responsible for studying emerging AI-related risks in areas ranging from computer security to mental health.
In a post on X, CEO Sam Altman acknowledged that AI models are “starting to present some real challenges,” including the “potential impact of models on mental health,” as well as models that are “so good at computer security that they’re starting to find critical vulnerabilities.”
“If you want to help the world figure out how to empower cybersecurity defenders with cutting-edge capabilities while ensuring attackers can’t use them for harm, ideally making all systems more secure, and similarly for how we unleash biological potential and even gain confidence in the security of self-improving operating systems, please consider applying,” Altman wrote.
OpenAI’s listing for the Head of Preparedness role describes the job as being responsible for executing the company’s Preparedness Framework, which the company says “explains our approach to monitoring and preparing for frontier capabilities that create new risks of serious harm.”
Compensation for the role is listed as $555,000 plus equity.
The company first announced the creation of a preparedness group in 2023, saying it would be responsible for studying potential “catastrophic risks,” whether more immediate, such as phishing attacks, or more speculative, such as nuclear threats.
Less than a year later, OpenAI reassigned its head of preparedness, Aleksander Madry, to a job focused on AI reasoning. Other security executives at OpenAI have also left the company or taken on new roles outside of preparedness and security.
The company also recently updated its Preparedness Framework, stating that it may “adjust” its security requirements if a competing AI lab releases a “high-risk” model without similar protections.
As Altman noted in his post, AI chatbots are facing increasing scrutiny over their impact on mental health. Recent lawsuits allege that OpenAI’s ChatGPT reinforced users’ delusions, deepened their social isolation, and even drove some to suicide. (The company has said it continues to work on improving ChatGPT’s ability to recognize signs of emotional distress and connect users with real-world support.)
