
OpenAI offers $555,000 to fill ‘stressful’ AI safety role amid security, mental health concerns

The Indian Express

India Correspondent



OpenAI and other tech giants such as Alphabet and Microsoft have courted enterprise AI deals to help justify massive spikes in spending, though the returns across the industry have so far failed to match investment. (Image: Reuters)

Amid multiple wrongful death lawsuits, OpenAI is moving to fill a critical safety role that has reportedly been vacant for several months.

Last Saturday, the ChatGPT maker said it is looking to hire a new ‘head of preparedness’ to guide the AI startup’s safety strategy by helping it anticipate the potential harms of its models and how they can be abused, as per a job listing posted on X by OpenAI CEO Sam Altman.

The head of preparedness will earn $555,000 per year, along with equity in the company. The new hire “will lead the technical strategy and execution of OpenAI’s Preparedness framework, our framework explaining OpenAI’s approach to tracking and preparing for frontier capabilities that create new risks of severe harm,” the company said.

OpenAI’s hunt to fill the complex role comes at a time when the AI startup has been hit with numerous accusations about the impacts of ChatGPT on users’ mental health, including a few wrongful death lawsuits. Its own study revealed that more than a million ChatGPT users (0.07 per cent of weekly active users) exhibited signs of mental health emergencies, including mania, psychosis, or suicidal thoughts.

Acknowledging that the “potential impact of models on mental health was something we saw a preview of in 2025,” Altman said that the head of preparedness “is a critical role at an important time.”

“If you want to help the world figure out how to enable cybersecurity defenders with cutting-edge capabilities while ensuring attackers can’t use them for harm, ideally by making all systems more secure, and similarly for how we release biological capabilities and even gain confidence in the safety of running systems that can self-improve, please consider applying,” he wrote.

“This will be a stressful job, and you’ll jump into the deep end pretty much immediately,” he added.

OpenAI’s safety teams have experienced significant employee churn over the last few years. In July 2024, the company reassigned then-head of preparedness Aleksander Madry, and said that the role would be taken over by two AI safety researchers, Joaquin Quinonero Candela and Lilian Weng.

However, Weng left OpenAI a few months later. Earlier this year, Candela announced he was moving away from the preparedness team to lead recruiting at OpenAI.

In November 2025, Andrea Vallone, the head of a safety research team known as model policy, said she was leaving OpenAI at the end of the year. The AI safety research leader reportedly helped shape ChatGPT’s responses to users experiencing mental health crises.


Curated by Shiv Shakti Mishra


Publisher: The Indian Express


Published: Dec 29, 2025
