Rapid advances in artificial intelligence have raised widespread concern, particularly about the possibility that AI could one day become uncontrollable. Amid this anxiety, a new AI safety company, Safe Superintelligence (SSI), led by OpenAI's former chief scientist Ilya Sutskever, has drawn renewed attention.
Sutskever co-founded SSI with Daniel Gross and Daniel Levy; the company has raised $1 billion in funding, with the goal of developing AI systems that are both safe and surpass human capabilities.
Although SSI was founded only three months ago, it has already attracted prominent investors, including Andreessen Horowitz and Sequoia Capital. Despite having just 10 employees, the company is valued at $5 billion. It plans to use the funds to acquire more computing power and recruit top talent, with team members based primarily in Palo Alto, California, and Tel Aviv, Israel.
In an interview, Gross said the company is seeking investors who understand and support its mission: to focus on developing safe superintelligence rather than rushing products to market. Despite the complex regulatory environment surrounding AI safety, SSI maintains that preventing the potential harms AI might cause is essential.
Sutskever said he started SSI because he saw a direction distinct from his previous work. At OpenAI he led the "Superalignment" team, which aimed to ensure that AI behavior stays aligned with human values. While that experience gave him extensive industry knowledge, he hopes to advance the technology in a different way at SSI.
On team culture, SSI places a strong emphasis on recruiting candidates with "good character," valuing a candidate's potential and passion for the work over credentials alone. Sutskever and his team are looking for people who can bring innovative thinking and achieve breakthroughs in this rapidly evolving industry.
SSI's goal is not only to develop advanced AI technology but also to ensure that it safely serves humanity while avoiding potential risks.
Key Points:
1. 💰 SSI, co-founded by OpenAI's former chief scientist Ilya Sutskever, has successfully secured $1 billion in funding, with a valuation of $5 billion.
2. 🌍 The company will focus on AI safety research and development, conducting thorough research before bringing any products to market.
3. 🔍 SSI emphasizes hiring individuals with "good character," prioritizing a candidate's potential and enthusiasm for the work over qualifications alone.