Ilya Sutskever, a co-founder and former chief scientist of OpenAI, has established a new company, Safe Superintelligence Inc. (SSI), with two co-founders just a month after leaving OpenAI. SSI is dedicated to addressing the safety of "superintelligent" AI systems.

Sutskever has long been committed to AI safety, working with OpenAI's Jan Leike and others to advance the company's safety efforts. However, he disagreed with OpenAI's leadership over how to handle these issues and left in May; Leike departed around the same time and has since joined Anthropic.

In a blog post, Sutskever and Leike predicted that AI could surpass human intelligence within a decade and would not necessarily be benevolent, making research into ways to control and constrain it necessary. Sutskever is now continuing to pursue that goal.


Image credit: AI-generated image via Midjourney

On Wednesday, Sutskever announced SSI's founding on social media, saying the company's mission, name, and entire product roadmap are all focused on making "superintelligent" AI safe. SSI treats safety as a technical problem and plans to tackle it through engineering and scientific breakthroughs.

He said SSI plans to advance AI capabilities and safety in tandem, scaling up while keeping safety ahead and without being swayed by short-term commercial pressure. Unlike OpenAI, which began as a non-profit before adopting a for-profit structure, SSI is a for-profit entity from the outset.

Sutskever declined to comment on SSI's funding status, but co-founder Daniel Gross said raising capital would not be a problem. SSI currently has offices in Palo Alto and Tel Aviv and is hiring technical talent.