OpenAI tightens institutional verification process


Measures to prevent unsafe AI use and protect intellectual property as OpenAI prepares to launch more powerful models.

OpenAI has announced "Verified Organization" – a new verification process that requires organizations to provide valid identification documents before they can access upcoming advanced AI models through the company's API.

According to a support page published last week, organization verification is described as "a new way for developers to unlock access to the most advanced models and capabilities on the OpenAI platform". The process requires a government-issued ID from one of the countries supported by OpenAI's API.

An ID can verify only one organization every 90 days, and OpenAI notes that not all organizations will be eligible for verification.

"At OpenAI, we take seriously our responsibility to ensure that AI is both broadly accessible and used safely," the support page reads. "Unfortunately, a small minority of developers intentionally use OpenAI's APIs in violation of our usage policies. We are adding the verification process to mitigate unsafe use of AI while continuing to make advanced models available to the broader developer community."

Enhancing Security and Protecting Intellectual Property

The new verification process likely aims to tighten security around OpenAI's products as they become more sophisticated and capable. The company has published several reports on its efforts to detect and mitigate malicious use of its models, including by groups allegedly linked to North Korea.

"OpenAI released a new Verified Organization status as a new way for developers to unlock access to the most advanced models and capabilities on the platform, and to be ready for the 'next exciting model release' – Verification takes a few minutes and requires a valid… pic.twitter.com/zWZs1Oj8vE"

— Tibor Blaho (@btibor91), April 12, 2025

The measure may also be intended to deter intellectual property theft. According to a Bloomberg report earlier this year, OpenAI is investigating whether a group linked to DeepSeek – an AI lab based in China – exfiltrated a large volume of data through its API in late 2024, possibly to train its own models, in violation of OpenAI's terms.

Notably, OpenAI has blocked access to its services in China since last summer, reflecting growing concern about how its AI technology is used abroad.
