OpenAI co-founder Ilya Sutskever, who stepped down as the company's Chief Scientist this May, has announced his next venture. Together with former OpenAI colleague Daniel Levy and Daniel Gross, the former Apple AI director and Cue co-founder, he has co-founded Safe Superintelligence Inc. (SSI), a startup dedicated to building safe superintelligence.
On the SSI website, the founders describe building safe superintelligence as "the most important technological issue of our time." They add: "We view safety and capability as one, as a technological issue to be solved through revolutionary engineering and scientific breakthroughs. We plan to rapidly enhance capabilities while ensuring that our safety always stays ahead."
So, what is superintelligence? It is a hypothetical AI system whose intelligence far surpasses that of the smartest humans.
This move continues the work Sutskever pursued during his tenure at OpenAI, where he co-led the company's Superalignment team, responsible for designing methods to control powerful new AI systems. With Sutskever's departure, however, the team was disbanded, a move harshly criticized by its other former leader, Jan Leike.
SSI claims it will "advance the development of safe superintelligence directly, focusing on one goal and one product."
Notably, the OpenAI co-founder played a significant role in the brief ousting of CEO Sam Altman in November 2023, a role he later said he regretted.
Founding Safe Superintelligence Inc. will let Sutskever and his co-founders concentrate fully on the problem of safe superintelligence, pursuing product development with a single objective.