At the intersection of neuroscience and artificial intelligence, renowned neuroscientist Anthony Zador sat down for a wide-ranging conversation with Paul Middlebrooks, host of the Brain Inspired podcast. As a pioneer in the field, Zador shared his perspective on where NeuroAI is headed.
Zador's shift, from initially resisting the term "NeuroAI" to now embracing the field with anticipation, came from rethinking the core questions. He pointed out that in the 1980s and 1990s, computational neuroscience and artificial neural networks were closely intertwined. As research progressed, however, he came to see that focusing on the dynamics of neural circuits alone was not enough; what matters more is understanding how those circuits help organisms solve real-world problems.
Turning to the current state of AI, Zador offered a thought-provoking view: the now-dominant Transformer architecture may actually be a counterexample to the NeuroAI thesis, since it succeeds while bearing little resemblance to how the brain operates. He argued that ChatGPT's success owes more to the closed, self-contained nature of language as a system than to any genuine simulation of human cognition.
On the future direction of AI, Zador singled out multi-objective coordination as a key challenge. Existing AI systems excel at optimizing a single goal but often perform poorly when juggling several at once. Biological systems, by contrast, have evolved intricate mechanisms to balance competing goals such as foraging, predator escape, and reproduction, and how they strike that balance may hold important lessons for AI.
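The tension Zador describes can be made concrete with a toy example. A common engineering shortcut is to scalarize several objectives into one weighted loss; the sketch below is entirely illustrative (the objectives and weights are invented, not from the podcast) and shows an agent trading off hypothetical foraging and safety goals:

```python
# Illustrative sketch: balancing multiple objectives by collapsing them
# into a single weighted score, the simplest approach an AI system might
# take where biology has evolved far richer mechanisms.

def forage_score(x):
    # Hypothetical objective: reward for staying near a food source at x = 1.0.
    return -(x - 1.0) ** 2

def safety_score(x):
    # Hypothetical objective: reward for staying near shelter at x = -1.0.
    return -(x + 1.0) ** 2

def best_position(weight_forage, weight_safety, candidates):
    """Pick the candidate position maximizing the weighted sum of goals."""
    return max(
        candidates,
        key=lambda x: weight_forage * forage_score(x)
                      + weight_safety * safety_score(x),
    )

candidates = [i / 100 for i in range(-200, 201)]
# Equal weights pull the agent to a compromise between food and shelter.
print(best_position(1.0, 1.0, candidates))  # 0.0
# A hungry agent (heavier foraging weight) shifts toward the food source.
print(best_position(3.0, 1.0, candidates))  # 0.5
```

The limitation is visible in the design: the whole trade-off is frozen into two fixed weights, whereas an animal re-weights its goals moment to moment depending on hunger, threat, and opportunity.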
On development and learning, Zador proposed a striking idea: the human genome can be viewed as a "compressed representation" of neural circuitry, generating complex structures through recursive rules. His latest research supports this view; his team compressed large neural networks by factors of 100 to 1,000 while preserving their original performance.
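The "genome as compressed representation" idea can be caricatured in code. The sketch below is a loose analogy of my own, not Zador's actual compression method: a handful of "genes" plus a fixed generative rule deterministically expand into a much larger weight matrix, so the genome needs to store far fewer numbers than the circuit it specifies.

```python
import random

# Illustrative analogy (not Zador's method): a tiny "genome" expands
# deterministically into a large connection-weight matrix, the way a
# compact genome specifies a complex neural circuit.

def develop(genome, n_out, n_in):
    """Expand a small genome into an n_out x n_in weight matrix using a
    fixed rule: seeded pseudorandom draws scaled by a few reused gain genes."""
    rng = random.Random(genome["seed"])
    gains = genome["gains"]  # a few gain genes reused across many rows
    return [
        [gains[i % len(gains)] * rng.gauss(0.0, 1.0) for _ in range(n_in)]
        for i in range(n_out)
    ]

genome = {"seed": 42, "gains": [0.5, 1.0, 2.0]}
weights = develop(genome, n_out=300, n_in=100)

n_weights = 300 * 100
n_genes = 1 + len(genome["gains"])  # the seed plus the gain genes
print(f"compression ratio: {n_weights // n_genes}x")  # compression ratio: 7500x
```

Here 4 stored numbers specify 30,000 weights; the real research question, which this toy skips entirely, is finding rules whose expanded circuits actually perform well.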
On robotics, Zador highlighted the challenge of sim-to-real transfer. Biological systems, he noted, show remarkable adaptability here: dogs of vastly different sizes can still share essentially the same neural developmental instructions. Behind that adaptability lies a finely tuned developmental process that builds complex abilities by solving sub-problems step by step.
Looking ahead, Zador sees curriculum learning as a promising way past current bottlenecks in AI. By breaking a complex task into smaller sub-tasks and learning them in a sensible order, an AI system can learn more efficiently than by attacking the final goal directly. This approach could accelerate learning and also make systems more adaptable to changes in the real world.
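The core logic of curriculum learning, ordering sub-tasks so each one is learnable given what came before, can be sketched as follows. The task names and the success rule here are hypothetical, chosen only to make the ordering effect visible:

```python
# Illustrative sketch of curriculum learning: sub-tasks attempted
# easy-to-hard, where each mastered skill unlocks the next task.

def can_master(task, skills):
    # Hypothetical success rule: a task is learnable only once all of
    # its prerequisite skills have been acquired.
    return set(task["needs"]) <= skills

TASKS = [
    {"name": "stand", "difficulty": 1, "needs": []},
    {"name": "walk",  "difficulty": 2, "needs": ["stand"]},
    {"name": "run",   "difficulty": 3, "needs": ["walk"]},
]

def learn(tasks):
    """Attempt tasks in order of difficulty, banking each mastered skill."""
    skills = set()
    for task in sorted(tasks, key=lambda t: t["difficulty"]):
        if can_master(task, skills):
            skills.add(task["name"])
    return skills

# With the curriculum, every sub-task is reached in a learnable order.
print(sorted(learn(TASKS)))  # ['run', 'stand', 'walk']
# Attacking the final task directly fails: 'run' requires 'walk' first.
print(can_master(TASKS[2], set()))  # False
```

The same principle scales beyond this toy: in real systems the "prerequisites" are representations or skills learned on easier data, which make the harder objective reachable at all.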
The conversation showcased the promising convergence of neuroscience and artificial intelligence and underscored how much inspiration biological intelligence can offer AI. As research deepens, this interdisciplinary exploration promises to yield further insight into the future of AI.