
Amyotrophic lateral sclerosis (ALS), also known as Charcot disease, affects more than 200,000 people worldwide. In addition to progressive paralysis, it often leads to a total loss of speech. A new experimental device allows, for the first time, a fluid and expressive conversation generated directly from brain activity.
A voice synthesized from brain activity
As part of the BrainGate2 clinical trial at UC Davis Health, the patient was able to express himself through a brain-computer interface. The system relies on four microelectrode arrays surgically implanted in the brain region responsible for speech. These electrodes record neural activity and send it to a computer that translates it into sounds.
The process relies on advanced artificial intelligence. To train the algorithm, the researchers asked the patient to attempt to pronounce sentences displayed on a screen while the electrodes measured the signals emitted by hundreds of neurons. By associating these signals with the sounds the patient was trying to produce at each moment, the algorithm learned to reconstruct his voice from his brain's intentions.
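The training idea described above can be illustrated with a minimal sketch: pair neural features (e.g., spike counts per electrode, per time bin) with target acoustic features, then fit a decoder on those pairs. Everything below is an illustrative assumption (the dimensions, the simulated data, and the simple linear model), not the study's actual architecture.

```python
# Hypothetical sketch of decoding neural activity into acoustic features.
# Shapes and the linear model are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

n_bins, n_channels, n_acoustic = 500, 256, 40  # assumed dimensions

# Simulated training data: neural activity recorded while the participant
# attempts to speak prompted sentences, paired with the intended sounds.
true_map = rng.normal(size=(n_channels, n_acoustic))
neural = rng.normal(size=(n_bins, n_channels))
acoustic = neural @ true_map + 0.01 * rng.normal(size=(n_bins, n_acoustic))

# "Training": least-squares fit associating neural signals with attempted sounds.
weights, *_ = np.linalg.lstsq(neural, acoustic, rcond=None)

# "Inference": each incoming neural time bin is mapped to acoustic features,
# which a vocoder would then turn into audible speech in real time.
new_bin = rng.normal(size=(1, n_channels))
predicted_acoustic = new_bin @ weights
print(predicted_acoustic.shape)  # (1, 40)
```

In practice the published system uses far richer nonlinear models, but the core loop is the same: record, map to intended sound, synthesize, all within tens of milliseconds.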
All of this happens within one fortieth of a second. This short delay is similar to the one a speaker experiences between speaking and hearing the sound of their own voice.
Professor Sergey Stavisky, co-author of the study, offers a comparison: “Translating neural activity into text, as our previous brain-computer interface did, is like sending a text message. It is great progress compared with conventional assistive technologies, but it still introduces delays into the conversation. By comparison, this new real-time voice synthesis is more like a voice call.” He adds: “With instantaneous voice synthesis, neuroprosthesis users can be more included in a conversation. For example, they can interrupt, and other people are less likely to cut them off accidentally.”
He can speak, interact… and even sing
The results exceeded expectations. The patient was able to say words never programmed into the system, formulate interjections, ask questions with natural intonation, and even sing simple melodies. The algorithm reconstructed not only words but also the nuances, rhythms, and intent of speech.
The interface also distinguishes genuine intentions to speak and can even modulate the intonation of the synthetic voice, as Maitreyee Wairagkar, first author of the study and researcher at the UC Davis Neuroprosthetics Lab, explains: “The main obstacle to real-time voice synthesis was not knowing exactly when and how a person who has lost the ability to speak is trying to speak. Our algorithms map neural activity to intended sounds at each moment. This makes it possible to synthesize the nuances of speech and gives the participant control over the cadence of his voice via the brain-computer interface.”
A major advance, but still experimental
This technical success represents a key step for people deprived of speech by paralysis, stroke, or a neurodegenerative disease such as ALS. David Brandman, neurosurgeon and co-director of the neuroprosthetics laboratory, underlines the importance of such research: “Our voice is part of what defines us. Losing the ability to speak is devastating for people living with neurological diseases. This type of technology could transform the lives of people with paralysis.”
For the moment, however, the results concern a single patient. Larger clinical trials will be needed to validate the system's efficacy and generalizability.