Brain-computer interface allows man with ALS to talk in real time
Study: He was able to sing simple melodies using device

An investigational brain-computer interface has allowed a man diagnosed with amyotrophic lateral sclerosis (ALS) to speak in real time, and sing simple melodies, a new study reports.
The system detects the sounds a person intends to produce, rather than their intended words. This allowed the man to adjust the duration of his words, emphasize a given word, and change his intonation to distinguish a question from a statement.
“Our voice is part of what makes us who we are. Losing the ability to speak is devastating for people living with neurological conditions,” David Brandman, MD, PhD, co-author of the study at the University of California, Davis (UC Davis), said in a university press release. “The results of this research provide hope for people who want to talk but can’t. We showed how a paralyzed man was empowered to speak with a synthesized version of his voice. This kind of technology could be transformative for people living with paralysis.”
The device’s development and use were described in the study “An instantaneous voice-synthesis neuroprosthesis,” published in Nature. Several of the study’s authors reported competing interests related to patents and consulting roles.
Real-time voice synthesis is more like a voice call, researcher says
In ALS, the nerve cells that control movement become damaged and die, leading to muscle weakness and paralysis. This often affects the muscles in the throat and around the mouth, which can make it difficult or impossible for people with ALS to speak.
Brain-computer interfaces, or BCIs, are an evolving technology in which sensors implanted in a person's brain allow the individual to interact with the outside world in ways that are no longer possible using the body alone. One major goal of BCIs for ALS and other paralyzing diseases is to allow patients to communicate.
Previous work with BCIs has allowed patients to create text using their thoughts alone. This is an important step toward more effective communication, but it doesn't allow the sort of instantaneous back-and-forth that typically happens when people talk with each other. Here, researchers at UC Davis developed a new BCI that allowed a person with ALS to talk in real time.
“Translating neural activity into text, which is how our previous speech brain-computer interface works, is akin to text messaging. It’s a big improvement compared to standard assistive technologies, but it still leads to delayed conversation. By comparison, this new real-time voice synthesis is more like a voice call,” said Sergey Stavisky, PhD, study senior author at UC Davis.
Being able to talk in real time means “users will be able to be more included in a conversation. For example, they can interrupt, and people are less likely to interrupt them accidentally,” Stavisky added.
Patient able to control cadence of brain-computer interface voice
The 45-year-old patient in this study could still move his mouth a bit, so he was able to speak to an extent, but because of his condition, his speech was usually unintelligible to the people around him.
He received a BCI as part of a clinical trial called BrainGate2 (NCT00912041). That study, sponsored by investigators at Massachusetts General Hospital, is actively recruiting adults with ALS and other paralyzing conditions at five sites in the U.S.
One of the challenges of developing a BCI for someone with ALS who cannot speak clearly is that, in order to decode the patient's brain signals, scientists need to know what the person is trying to say in the first place. To create this BCI, the patient first underwent a setup process in which he tried to read aloud a series of preestablished sentences provided for him, so the researchers knew for certain what he meant to say.
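As a rough illustration of that setup step, the sketch below shows how cued sentences might be paired with simultaneously recorded brain activity to build labeled training data. Everything here is hypothetical: the function names, the phoneme conversion, and the recording interface are invented for the example, not taken from the study.

```python
from dataclasses import dataclass

@dataclass
class TrainingExample:
    neural_features: list        # one feature vector per short time window
    target_phonemes: list[str]   # sound sequence the participant attempted

def collect_training_data(cued_sentences, record_neural_features, to_phonemes):
    """Pair each cued sentence with the brain activity recorded while the
    participant attempts to say it, so the decoder has known targets."""
    examples = []
    for sentence in cued_sentences:
        # Hypothetical call: record neural activity while the sentence is attempted.
        features = record_neural_features()
        examples.append(TrainingExample(features, to_phonemes(sentence)))
    return examples
```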
The electrical signals from the man's brain were then analyzed by advanced artificial intelligence algorithms, which detected which patterns of brain activity corresponded to the specific sounds the man was trying to vocalize. The algorithms could then interpret new signals to piece together what he was trying to say.
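The sketch below shows the general shape of such a decoding loop, assuming a trained classifier that turns each short window of neural features into a probability distribution over speech sounds. The model and synthesizer interfaces are invented for illustration and are not the study's actual algorithms.

```python
import numpy as np

def decode_stream(model, feature_windows, synthesize):
    """Decode neural features window by window and speak each sound
    immediately, rather than waiting for a full word or sentence."""
    for window in feature_windows:           # e.g., one window every few milliseconds
        sound_probs = model.predict(window)  # hypothetical: distribution over sounds
        sound = int(np.argmax(sound_probs))  # most likely intended sound right now
        synthesize(sound)                    # emit audio for this sound at once
```

Because the output is a stream of sounds rather than dictionary words, nothing in this scheme restricts the speaker to a fixed vocabulary.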
The time from when the signals were read in his brain to when the speech was produced was about 10 milliseconds, comparable to the natural delay experienced by people who speak by moving their mouths.
“Our algorithms map neural activity to intended sounds at each moment of time. This makes it possible to synthesize nuances in speech and give the participant control over the cadence of his BCI-voice,” said Maitreyee Wairagkar, PhD, study co-author at UC Davis.
With device, man’s speech was intelligible most of the time
Notably, this approach meant the man wasn't limited to a preset selection of words. He was able to vocalize new words and to make sounds that aren't technically words but still play important roles when people talk, like "ah" and "hmm."
“We don’t always use words to communicate what we want. We have interjections. We have other expressive vocalizations that are not in the vocabulary,” Wairagkar said in a press release from Nature. “In order to do that, we have adopted this approach, which is completely unrestricted.”
In addition to producing words, the man was able to modulate the tone of his computer-generated voice, bringing it closer to natural speech; for example, he could change his inflection when asking a question. He could sing simple melodies as well. This is a notable improvement over prior BCI technologies, which usually produced monotone speech.
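One plausible way to support that kind of intonation control, sketched purely as an illustration, is to have the decoder estimate an intended pitch alongside each sound and pass it to the synthesizer. None of these interfaces come from the paper.

```python
import numpy as np

def decode_with_intonation(model, window, synthesize):
    """Hypothetical: the model returns a sound distribution plus a pitch
    estimate, so the voice can rise on a question or follow a melody."""
    sound_probs, pitch_hz = model.predict(window)
    sound = int(np.argmax(sound_probs))
    synthesize(sound, pitch_hz)   # pitch shapes the prosody of the output
```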
“We are bringing in all these different elements of human speech which are really important,” Wairagkar said.
This speech BCI wasn’t perfect, but it was a notable improvement in the man’s ability to communicate. Without the BCI, listeners couldn’t understand the man’s speech most of the time. By contrast, although there were some times where he couldn’t be understood with the BCI, most of the time he was intelligible.
The researchers noted this study was limited to one person with ALS, so further work will be needed to continue developing and refining this type of BCI technology.