OpenAI may soon require organizations to complete an identity verification process in order to access certain future AI models, according to a support page published on the company's website last week.
The verification process, called Verified Organization, is "a new way for developers to unlock access to the most advanced models and capabilities on the OpenAI platform," the page says. Verification requires a government-issued ID from one of the countries supported by OpenAI's API. An ID can verify only one organization every 90 days, and not all organizations will be eligible for verification, OpenAI says.
"At OpenAI, we take our responsibility seriously to ensure that AI is both broadly accessible and used safely," the page reads. "Unfortunately, a small minority of developers intentionally use the OpenAI APIs in violation of our usage policies. We're adding the verification process to mitigate unsafe use of AI while continuing to make advanced models available to the broader developer community."
OpenAI has released a new Verified Organization status as a new way for developers to unlock access to the most advanced models and capabilities on the platform, and to be ready for the "next exciting model"

– Verification takes a few minutes and requires a valid… pic.twitter.com/zwzs1oj8ve

– Tibor Blaho (@btibor91) April 12, 2025
The new verification process could be intended to boost security around OpenAI's products as they become more sophisticated and capable. The company has published several reports on its efforts to detect and mitigate malicious use of its models, including by groups allegedly based in North Korea.
It may also be aimed at preventing IP theft. According to a Bloomberg report earlier this year, OpenAI was investigating whether a group linked to DeepSeek, the China-based AI lab, exfiltrated large amounts of data through its API in late 2024, possibly to train its own models, in violation of OpenAI's terms.
OpenAI blocked access to its services in China last summer.