In the field of artificial intelligence, safety concerns have long been the Sword of Damocles hanging over the industry, and recent changes at OpenAI have drawn widespread attention. According to IT Home, the company, whose stated mission is to develop AI for the benefit of humanity, has seen its AGI Safety team, which focuses on the long-term risks of superintelligent AI, lose nearly half of its members.
Daniel Kokotajlo, a former governance researcher at OpenAI, revealed that over the past few months the AGI Safety team has shrunk from about 30 members to around 16. These researchers were tasked with ensuring the safety of future AGI systems and preventing them from posing a threat to humanity. The shrinking headcount has raised concerns that OpenAI may be gradually neglecting AI safety.
Kokotajlo noted that the departures were not a coordinated action but the result of individual team members gradually losing confidence. As OpenAI focuses increasingly on products and commercialization, the thinning of its safety research ranks appears to be an almost inevitable consequence.
In response to these concerns, OpenAI stated that it is proud to provide the most capable and safest AI systems and believes it has the scientific methods needed to address the risks.
Earlier this year, OpenAI co-founder and chief scientist Ilya Sutskever announced his resignation, which was followed by the dissolution of the "Superalignment" team responsible for long-term safety. These changes have only intensified external concerns about the state of safety research at OpenAI.