
Ann Johnson’s long silence, broken 18 years after her stroke
Ann Johnson had just married and welcomed her first child when, at only 30, a stroke turned her life upside down. It was an ordinary day; she was playing volleyball with friends when she suddenly collapsed. The stroke caused locked-in syndrome, leaving her conscious but unable to move or speak. Her daughter grew up without ever knowing the sound of her voice. Eighteen years later, thanks to an experiment led by the University of California, San Francisco (UCSF) and the University of California, Berkeley, that silence was broken.
For Ann, the emotion was immediate. “It was a surprise. I thought my voice would be the one I have now, not the one I had before my accident. It made me feel like my old self.” Her voice could be recreated using artificial intelligence, trained on a recording of the speech she gave on her wedding day. Her husband Bill was also moved by this unexpected return: “I never thought I would one day hear Ann’s voice again. It was as if a piece of us had come back.”
A neuroprosthesis that decodes the intention to speak
The experiment is based on a brain implant coupled with artificial intelligence. Rather than targeting only the classic language areas, the researchers expanded their focus to the sensorimotor cortex, which coordinates the muscles of the face and mouth to produce sounds. “Our work shows that when we try to understand speech, we should not limit ourselves to the cortical language areas, but also study the representations distributed across the sensorimotor cortex,” explains Professor Gopala Anumanchipalli.
Brain signals are translated into words and facial movements by a digital avatar, capable of restoring Ann’s voice. She chose the avatar’s appearance herself, but the resemblance remains limited, which motivates the future development of photorealistic 3D avatars.
This is not mind reading: “What we decode is not thoughts. We decode the brain signals that show the intention to speak,” specifies Kaylo Littlejohn, co-author of the study.
One of the most recent advances concerns conversational fluidity. While it initially took eight seconds for a sentence to be translated, a new streaming architecture has reduced that delay to just one second, making exchanges much more natural. Even so, fluidity remains a scientific challenge.
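The gain from the streaming architecture can be illustrated with a minimal sketch. The code below is an illustrative assumption, not the study’s actual pipeline: the `decode_chunk` stand-in and the chunked "signal" are hypothetical, and the point is only the structural difference between waiting for a full utterance and emitting words as each chunk is decoded.

```python
# Illustrative sketch (not the study's code): batch decoding waits for the
# whole utterance before producing anything, while streaming decoding emits
# each word as soon as its chunk of neural signal has been decoded.

from typing import Iterable, Iterator, List


def decode_chunk(signal_chunk: str) -> str:
    # Hypothetical stand-in for the neural-network decoder: here each
    # "signal chunk" is simply the word it encodes.
    return signal_chunk


def batch_decode(signal_chunks: Iterable[str]) -> List[str]:
    # Batch mode: collect the full sequence first, then decode.
    # Nothing reaches the listener until the sentence is complete,
    # which is why early versions felt laggy.
    chunks = list(signal_chunks)
    return [decode_chunk(c) for c in chunks]


def streaming_decode(signal_chunks: Iterable[str]) -> Iterator[str]:
    # Streaming mode: decode and yield each chunk immediately,
    # so latency is roughly per-chunk rather than per-sentence.
    for chunk in signal_chunks:
        yield decode_chunk(chunk)


if __name__ == "__main__":
    signal = ["it", "made", "me", "feel", "like", "before"]
    print(batch_decode(signal))             # everything at once, at the end
    for word in streaming_decode(signal):   # words available incrementally
        print(word)
```

The same words come out either way; the streaming generator simply makes each one available as soon as it is decoded instead of at the end of the sentence.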
Tomorrow, more fluid and natural speech thanks to AI
Researchers are now aiming to perfect the technology: further reduce response time, improve accuracy, develop wireless versions, and pair the implants with photorealistic avatars. Neurosurgeon Edward Chang, team leader at UCSF, insists: “This is an essential step toward showing that these kinds of neuroprostheses can be practical for daily communication.”
For Gopala Anumanchipalli, the objective is clear: “We hope that our work will pave the way for ready-to-use neuroprostheses that will restore a natural voice to people who have lost it.”
Ann sees this experience as a second chance. Her optimism remains intact after 18 years of silence: she now hopes to become a counselor at a physical rehabilitation center, supporting other patients along their road to recovery. As she puts it: “I want patients to see me and know that their lives are not over. I want to show them that a disability doesn’t have to stop you or slow you down.”