OpenAI recently announced a new policy on its website requiring organizations to complete identity verification before gaining access to certain future AI models. The "Verified Organization" process is intended to unlock OpenAI's most advanced models and features for developers while tightening security and control over the technology.
According to the policy, an organization must submit a government-issued ID from one of the countries supported by the OpenAI API to complete verification. Notably, each ID can verify only one organization every 90 days, and not all organizations will be eligible. OpenAI stated, "At OpenAI, we are committed to responsible development and ensuring the safe and broad accessibility of AI." The company acknowledges that a small minority of developers misuse its API in violation of its usage policies; the verification process is therefore meant to mitigate unsafe uses of AI while continuing to make advanced capabilities available to the broader developer community.
As OpenAI's products grow more capable and complex, this new verification step is seen as a necessary measure to strengthen product security. The company has published multiple reports detailing how it detects and mitigates malicious use of its models, including recent investigations into groups reportedly operating out of North Korea. The policy may also be designed to deter intellectual property theft: Bloomberg reported that OpenAI is investigating an organization linked to the Chinese AI lab DeepSeek, which may have exfiltrated large volumes of data through OpenAI's API in late 2024 to train its own models, a clear violation of OpenAI's terms of service. Notably, OpenAI cut off access to its services for users in mainland China, Hong Kong, and Macau last summer.
The introduction of these measures signals OpenAI's commitment to fostering the healthy development of AI while strengthening oversight of how its technology is used, ensuring that developers can leverage these cutting-edge capabilities legally and in compliance with its policies.