Recently, a research team led by Li Guoqi and Xu Bo at the Institute of Automation, Chinese Academy of Sciences, working with collaborators at Tsinghua University, Peking University, and other institutions, proposed a method for constructing brain-like neuron models based on "endogenous complexity." The paper has been published in the journal Nature Computational Science.

The study first established the dynamical equivalence of the LIF and HH neuron models within spiking neural networks, proving that a single HH neuron can be reproduced by a small network of four LIF neurons with time-varying parameters connected in a specific structure. Building on this result, the team increased the endogenous complexity of individual computational units by designing such micro-architectures, allowing a compact HH network to emulate the dynamics of a much larger LIF network.
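For background on the two models being related, here is a minimal single-neuron sketch of each: a leaky integrate-and-fire (LIF) neuron and a Hodgkin-Huxley (HH) neuron, both integrated with forward Euler. The function names and parameter values are conventional textbook choices, not the paper's tv-LIF2HH construction, which involves four coupled time-varying LIF units.

```python
import numpy as np

def simulate_lif(I, dt=0.01, tau=10.0, v_rest=-65.0, v_th=-50.0,
                 v_reset=-65.0, R=1.0):
    """LIF dynamics: tau * dV/dt = -(V - v_rest) + R*I; reset on threshold."""
    v, vs, spikes = v_rest, [], []
    for i, i_t in enumerate(I):
        v += dt / tau * (-(v - v_rest) + R * i_t)
        if v >= v_th:            # threshold crossing: emit spike, reset
            spikes.append(i)
            v = v_reset
        vs.append(v)
    return np.array(vs), spikes

def simulate_hh(I, dt=0.01):
    """HH point neuron with standard squid-axon parameters (mV, ms, uA/cm^2)."""
    C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3
    ENa, EK, EL = 50.0, -77.0, -54.4
    v, m, h, n = -65.0, 0.05, 0.6, 0.32
    vs = []
    for i_t in I:
        # voltage-dependent rate constants for the gating variables m, h, n
        am = 0.1 * (v + 40.0) / (1.0 - np.exp(-(v + 40.0) / 10.0))
        bm = 4.0 * np.exp(-(v + 65.0) / 18.0)
        ah = 0.07 * np.exp(-(v + 65.0) / 20.0)
        bh = 1.0 / (1.0 + np.exp(-(v + 35.0) / 10.0))
        an = 0.01 * (v + 55.0) / (1.0 - np.exp(-(v + 55.0) / 10.0))
        bn = 0.125 * np.exp(-(v + 65.0) / 80.0)
        m += dt * (am * (1.0 - m) - bm * m)
        h += dt * (ah * (1.0 - h) - bh * h)
        n += dt * (an * (1.0 - n) - bn * n)
        # ionic currents, then membrane voltage update
        I_ion = gNa * m**3 * h * (v - ENa) + gK * n**4 * (v - EK) + gL * (v - EL)
        v += dt / C * (i_t - I_ion)
        vs.append(v)
    return np.array(vs)
```

The contrast this sketch makes visible is the one the paper exploits: the LIF neuron has a single state variable and no intrinsic spike shape, while the HH neuron carries three additional gating variables (m, h, n) whose nonlinear dynamics give each unit far richer internal ("endogenous") complexity.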

[Illustration: brain-inspired large-model AI. Image source: generated by AI via the image authorization service Midjourney]

The team then simplified this construction into the s-LIF2HH model and validated, through simulation experiments, its effectiveness in capturing complex dynamical behaviors. The results showed that the HH network and the s-LIF2HH network performed comparably in representation capability and robustness, while the HH network consumed fewer computational resources.

This research provides a new method, with theoretical support, for incorporating the complex dynamical characteristics studied in neuroscience into artificial intelligence, and it points to a route for optimizing and improving the performance of AI models. The team has begun in-depth studies of larger-scale HH networks and more complex neuron models, aiming to further improve the computational efficiency and task-processing capabilities of large models and to accelerate their practical application.

Paper link: https://www.nature.com/articles/s43588-024-00674-9