Brain-computer interface allows man with ALS to communicate

Highly accurate computer system uses man’s own voice

by Marisa Wexler, MS

A brain-computer interface allowed Casey Harrell, a 45-year-old man with amyotrophic lateral sclerosis (ALS) whose disease had made it nearly impossible to speak, to communicate through a computer that used his own voice.

Harrell’s experience in the ongoing pilot BrainGate2 clinical trial (NCT00912041) was described in the study, “An Accurate and Rapidly Calibrating Speech Neuroprosthesis,” which was published in The New England Journal of Medicine.

“It has been immensely rewarding to see Casey regain his ability to speak with his family and friends through this technology,” Nicholas Card, PhD, the study’s first author and a postdoctoral scholar at the University of California Davis, said in a university news story.

“Not being able to communicate is so frustrating and demoralizing,” said Harrell. “It is like you are trapped. Something like this technology will help people back into life and society.”

ALS is marked by progressive muscle weakness that ultimately leads to paralysis. Weakness in the muscles of the tongue and jaw can make it difficult or impossible to speak, a condition known as dysarthria.

Brain-computer interface detects brain commands

To help people with dysarthria, there has been increasing interest in designing brain-computer interfaces. The basic idea is to detect signals in the brain when a person is trying to speak, then use a computer to translate those signals into language.

“We’re really detecting their attempt to move their muscles and talk,” said Sergey Stavisky, PhD, one of the study’s co-senior authors and co-director of the UC Davis neuroprosthetics lab, which develops brain-computer interfaces.

The scientists “are recording from the part of the brain that’s trying to send these commands to the muscles,” Stavisky said. “And we are basically listening into that, and we’re translating those patterns of brain activity into a phoneme — like a syllable or the unit of speech — and then the words they’re trying to say.”
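That pipeline, from recorded brain activity to phonemes to words, can be sketched in a few lines of code. The toy Python example below is purely illustrative: the four-phoneme set, the linear scoring weights, and the random inputs are all hypothetical stand-ins, not the study’s actual decoder, which relies on trained machine-learning models.

    import numpy as np

    # Tiny illustrative phoneme set; real decoders cover ~40 English phonemes.
    PHONEMES = ["HH", "EH", "L", "OW"]

    def decode_phoneme(features, weights):
        """Map one time window of neural features to its most likely phoneme."""
        scores = weights @ features              # one score per candidate phoneme
        return PHONEMES[int(np.argmax(scores))]  # keep the best-scoring phoneme

    def decode_word(windows, weights):
        """Chain per-window phoneme guesses into a candidate word."""
        phonemes = [decode_phoneme(w, weights) for w in windows]
        # A real system would rescore phoneme sequences against a large
        # vocabulary using a language model; joining them suffices here.
        return "-".join(phonemes)

    rng = np.random.default_rng(seed=0)
    weights = rng.normal(size=(len(PHONEMES), 256))   # features from 256 electrodes
    windows = [rng.normal(size=256) for _ in range(4)]
    print(decode_word(windows, weights))              # prints a 4-phoneme sequence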

Although prior attempts to create brain-computer interfaces for communication have made substantial advances, these systems often have high error rates, with the computer failing to correctly interpret what the person is trying to say.

“Previous speech BCI [brain-computer interface] systems had frequent word errors,” said study co-author David Brandman, MD, PhD, a neurosurgeon at UC Davis and co-director of the lab. “This made it difficult for the user to be understood consistently and was a barrier to communication. Our objective was to develop a system that empowered someone to be understood whenever they wanted to speak.”

The BrainGate2 study is ongoing, with the aim of further advancing this technology in people with ALS and other paralyzing conditions. The trial is enrolling up to 22 patients at five centers in the U.S.

When Harrell enrolled in the trial, five years after his first ALS symptoms, he had substantial dysarthria. Although his caregivers could usually make sense of his speech, he was unintelligible to most people.

He underwent a surgical procedure in which four electrode arrays were implanted into a region of his brain that’s crucial for speech. Each array had 64 electrodes, for a total of 256 electrodes to detect his brain activity.

About a month after the surgery, Harrell underwent a preliminary test in which he was asked to say sentences using a 50-word vocabulary.

“Decoded words were displayed on a screen and then vocalized with the use of text-to-speech software designed to sound like his pre-ALS voice,” the researchers wrote.
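As a rough sketch of that display-then-vocalize step, the hypothetical Python snippet below prints decoded words and reads them aloud. It uses the generic open-source pyttsx3 text-to-speech engine as a stand-in; the study’s software was customized to sound like Harrell’s pre-ALS voice.

    import pyttsx3  # generic offline text-to-speech library, used as a stand-in

    engine = pyttsx3.init()

    def present_decoded_sentence(words):
        """Show the decoded words on screen, then speak them aloud."""
        sentence = " ".join(words)
        print(sentence)       # stand-in for the on-screen display
        engine.say(sentence)  # queue the sentence for synthesis
        engine.runAndWait()   # block until audio playback finishes

    present_decoded_sentence(["this", "is", "a", "test", "sentence"])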

Accuracy rate of 97%

To further improve the BCI’s utility, it was also connected to an eye-tracking device, allowing Harrell to flag incorrect words and making it easier for him to say things that weren’t included in the vocabulary, such as certain proper nouns.

The system was able to decode his words with more than 99% accuracy in initial tests. The next day, after some calibration, the system was 100% accurate with this limited vocabulary.

“The first time we tried the system, he cried with joy as the words he was trying to say correctly appeared on-screen,” Stavisky said. “We all did.”

In the second research session, the system’s vocabulary was expanded to more than 125,000 words, which “encompasses the majority of the English language,” the researchers wrote.

The system initially made a mistake in about one of every 10 words, but further calibration over about a dozen sessions increased its accuracy to more than 97%, meaning roughly one out of every 40 words was incorrect.
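The arithmetic behind those figures is simple to verify, as the short Python sketch below shows (it assumes 97.5% as the accuracy implied by one error per 40 words):

    # Worked arithmetic for the accuracy figures quoted above.
    initial_error_rate = 1 / 10                 # about one mistake per 10 words
    print(f"initial accuracy: {1 - initial_error_rate:.0%}")     # 90%

    final_accuracy = 0.975                      # "more than 97%" after calibration
    words_per_error = 1 / (1 - final_accuracy)
    print(f"about one error every {words_per_error:.0f} words")  # 40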

“At this point, we can decode what Casey is trying to say correctly about 97% of the time, which is better than many commercially available smartphone applications that try to interpret a person’s voice,” Brandman said.

This accuracy was maintained over more than eight months of using the device. Harrell is now able to use the device to regularly converse with those around him.

Co-author Leigh Hochberg, MD, PhD, a neurologist at Massachusetts General Hospital and BrainGate2 sponsor-investigator, said Harrell and other participants “are truly extraordinary” and “deserve tremendous credit for joining these early clinical trials.”

“They do this not because they’re hoping to gain any personal benefit, but to help us develop a system that will restore communication and mobility for other people with paralysis,” Hochberg said.