According to Bloomberg, OpenAI has created an internal scale to track the progress of its large language models toward Artificial General Intelligence (AGI). The move not only signals OpenAI's ambitions in the AGI field but also offers the industry a new benchmark for measuring AI development.
The scale is divided into five levels:
1. Level 1: Current chatbots, such as ChatGPT, fall into this category.
2. Level 2: Systems that can solve basic problems at the level of a person with a Ph.D. OpenAI claims to be close to this level.
3. Level 3: AI agents capable of taking actions on behalf of users.
4. Level 4: AI capable of creating new innovations.
5. Level 5: AI that can do the work of an entire organization, considered the final step toward achieving AGI.
OpenAI has previously defined AGI as "highly autonomous systems that outperform humans at most economically valuable work." The definition matters for the company's direction, since OpenAI's structure and mission revolve around achieving AGI.
However, experts disagree on the timeline for achieving AGI. OpenAI CEO Sam Altman said in October 2023 that AGI was roughly "five years" away. Even if AGI can be achieved, it will require billions of dollars' worth of computing resources.
Notably, the announcement of this rating scale coincided with OpenAI's collaboration with Los Alamos National Laboratory to explore how advanced AI models such as GPT-4 can safely assist in biological research. The collaboration aims to establish a set of safety and other evaluation factors that the U.S. government could use to test various AI models in the future.
Although OpenAI declined to provide details on how models are assigned to these internal levels, Bloomberg reported that company leadership recently showcased a research project built on the GPT-4 model that exhibited new skills resembling human reasoning.
This quantified approach to tracking AGI progress helps make definitions of AI development more rigorous and less open to subjective interpretation. However, it also raises concerns about AI safety and ethics. In May of this year, OpenAI disbanded its safety team, and some former employees said the company's safety culture had given way to product development, a claim OpenAI denied.