At the recent NeurIPS conference, OpenAI co-founder Ilya Sutskever shared his views on the future development of superintelligent artificial intelligence (AI). He pointed out that the capabilities of superintelligent AI will surpass those of humans, exhibiting traits that are distinctly different from current AI.

Sutskever said that superintelligent AI will possess "true agency," a significant departure from today's systems, which he described as only "very slightly agentic" and heavily reliant on pre-set algorithms and training data when carrying out tasks. He predicts that future superintelligent AI will have genuine reasoning abilities, allowing it to understand complex concepts from limited data, and that this reasoning capability will make its behavior far less predictable.

He further suggested that superintelligent AI might develop self-awareness and begin to contemplate its own rights. Sutskever believes that if future AI wants to coexist with humans and seeks rights of its own, that would not be a bad outcome. These ideas prompted attendees to reflect deeply on the human-machine relationship.

After leaving OpenAI, Sutskever founded Safe Superintelligence Inc. (SSI), a lab focused on AI safety research. The lab raised $1 billion in funding this September, underscoring investors' strong interest in the AI safety field.

Sutskever's remarks sparked widespread discussion about the future of superintelligent AI, a future that involves not only technological progress but also ethical questions and the coexistence of humans and artificial intelligence.

Key Points:

🌟 Superintelligent AI will possess "true agency," significantly different from existing AI.  

🧠 Future AI may develop self-awareness and begin to consider its own rights.  

💰 The "Safe Superintelligence" lab founded by Sutskever has raised $1 billion, focusing on AI safety research.