Facebook's New Model Decodes Speech from Brain Waves, Bringing Hope to Aphasia Patients

智能涌现

Facebook AI Research (FAIR) recently published a study in Nature Machine Intelligence introducing a model called BrainMagick. The model decodes speech content from non-invasive recordings of brain activity obtained with EEG (electroencephalography) and MEG (magnetoencephalography). On held-out test data, it identified the matching speech segment from among thousands of candidates using just 3 seconds of MEG recording, reaching a top-10 accuracy of 72.5%.

The result matters for helping patients with aphasia and other language impairments regain the ability to communicate: BrainMagick offers a non-invasive route to communication that avoids the risks of brain surgery. The research drew widespread discussion online after publication, with many regarding it as a major boon for people with language disorders.

The project's code has also been open-sourced on GitHub and can be trained on a single GPU, and the model's performance continues to improve as more data becomes available.
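To make the reported metric concrete, the sketch below shows one common way a top-10 retrieval score like this can be computed: embed each 3-second brain-signal window and each candidate speech segment, rank candidates by similarity, and check whether the true segment lands among the 10 most similar. It is a minimal illustration only; the function and variable names are hypothetical and do not come from the BrainMagick codebase.

```python
# Minimal sketch of segment-level retrieval evaluation. It assumes the decoder
# maps a 3-second brain-signal window to an embedding that is compared (by
# cosine similarity) against embeddings of candidate speech segments.
# Names and shapes are illustrative, not taken from the BrainMagick code.
import torch
import torch.nn.functional as F

def top_k_accuracy(brain_emb: torch.Tensor,
                   speech_emb: torch.Tensor,
                   k: int = 10) -> float:
    """brain_emb: (N, D) embeddings of N brain-signal windows.
    speech_emb: (N, D) embeddings of the N matching speech segments.
    Returns the fraction of windows whose true segment is among the
    k most similar candidates."""
    brain_emb = F.normalize(brain_emb, dim=-1)
    speech_emb = F.normalize(speech_emb, dim=-1)
    sims = brain_emb @ speech_emb.T           # (N, N) cosine similarities
    topk = sims.topk(k, dim=-1).indices       # indices of the k best candidates
    targets = torch.arange(sims.size(0)).unsqueeze(1)
    hits = (topk == targets).any(dim=-1)      # true segment found in top k?
    return hits.float().mean().item()

# Toy usage with random embeddings (real embeddings would come from a trained
# brain encoder and a pretrained speech model).
if __name__ == "__main__":
    N, D = 1000, 256
    brain = torch.randn(N, D)
    speech = brain + 0.5 * torch.randn(N, D)  # noisy "paired" segments
    print(f"Top-10 accuracy: {top_k_accuracy(brain, speech):.3f}")
```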
© AIbase 2024 · Source: https://www.aibase.com/news/1936