Implant may help patients speak by better decoding their thoughts

Researchers are developing denser brain electrodes for more accurate speech decoding

by Marisa Wexler, MS

A new advance in technology may pave the way for devices that could allow people with amyotrophic lateral sclerosis (ALS) who have lost the ability to speak to communicate using their thoughts alone.

The new tech was described in a study, “High-resolution neural recordings improve the accuracy of speech decoding,” which was published in Nature Communications.

ALS is characterized by the progressive dysfunction and death of motor neurons, which are the specialized nerve cells responsible for controlling voluntary movements. This can cause symptoms such as muscle weakness throughout the body, including the muscles in the mouth and throat that are needed for speech.

As a consequence, people with ALS, especially those in advanced stages of the disease, often have difficulty communicating verbally, and many lose their ability to speak.

“There are many patients who suffer from debilitating motor disorders, like ALS (amyotrophic lateral sclerosis) or locked-in syndrome, that can impair their ability to speak,” Gregory Cogan, PhD, co-author of the study at Duke University, said in a Duke press release.

Currently available tools ‘very slow and cumbersome’

Although some tools are available to help people who cannot speak to communicate, they are “generally very slow and cumbersome,” Cogan added.

In light of this, researchers have been working on creating better prosthetic devices to allow people with ALS to speak.

One broad strategy for this type of technology is to implant electrical sensors in parts of the brain that are responsible for controlling speech. Theoretically, these sensors could measure and translate brain activity into speech.

The brain is packed with billions of nerve cells, however, and accurately decoding brain signals into speech requires being able to measure and distinguish the activity of individual nerve cells that sit close together. The new study takes a big step toward improving the resolution of this technology.

In previous studies, researchers designed sensors with electrodes that are spaced about three to four millimeters apart.

Here, scientists created a sensor that’s about twice as dense, with electrodes spaced less than two millimeters apart. The sensor packs 256 electrodes onto a piece of flexible, medical-grade plastic that’s about the size of a postage stamp.
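For a rough sense of scale, a square grid of 256 electrodes is 16 electrodes per side, so the spacing largely determines the array’s footprint. The back-of-the-envelope sketch below is purely illustrative; the 3.5 mm and 1.5 mm pitches are assumptions standing in for the “three to four millimeters” and “less than two millimeters” figures above, not measurements from the study.

```python
# Back-of-the-envelope check: how much area a 256-electrode square grid
# needs at the electrode spacings mentioned above. The pitch values are
# illustrative assumptions, not figures from the study.

import math

def grid_side_mm(n_electrodes: int, pitch_mm: float) -> float:
    """Side length of a square grid of n_electrodes at the given pitch."""
    per_side = math.isqrt(n_electrodes)  # 256 -> 16 electrodes per side
    return (per_side - 1) * pitch_mm     # 15 gaps between 16 electrodes

# Older arrays: roughly 3-4 mm between electrodes (3.5 mm assumed here)
print(f"256 electrodes at 3.5 mm pitch: ~{grid_side_mm(256, 3.5):.0f} mm per side")
# New array: under 2 mm between electrodes (1.5 mm assumed here)
print(f"256 electrodes at 1.5 mm pitch: ~{grid_side_mm(256, 1.5):.0f} mm per side")
```

At the tighter spacing, the full grid spans a little over two centimeters per side, consistent with the postage-stamp comparison; at the older spacing, the same 256 electrodes would need a patch more than twice as wide.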

The sensors were used for preliminary tests in a handful of individuals who were undergoing surgery for conditions such as Parkinson’s disease, epilepsy, or a brain tumor.

During the surgery, the sensor was temporarily placed in the brain, and patients were asked to complete a speech repetition task, in which they listened to certain nonword sounds, such as “ava,” “kug,” or “vip,” and then spoke these sounds aloud.

The brain activity recorded during these tasks was then fed into a machine learning algorithm, which learned to associate specific patterns of electrical activity with specific sounds.
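The study’s exact model and preprocessing aren’t detailed here, but the general recipe, turning multichannel recordings into labeled examples and training a classifier, can be sketched in a few lines. Everything below (the data shapes, the random stand-in signals, the phoneme label set, and the logistic-regression classifier) is a hypothetical illustration, not the researchers’ actual pipeline.

```python
# Minimal sketch of the general decoding approach described above:
# train a classifier to map multichannel brain activity to sound labels.
# All data here is random noise standing in for real recordings.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Stand-in dimensions: 200 trials, 256 electrodes, 20 time bins per trial
n_trials, n_electrodes, n_timebins = 200, 256, 20
phonemes = np.array(["a", "i", "u", "k", "v"])  # illustrative label set only

# One flattened feature vector per trial (electrodes x time bins)
X = rng.normal(size=(n_trials, n_electrodes * n_timebins))
y = rng.choice(phonemes, size=n_trials)  # stand-in phoneme labels

# A simple linear classifier; on random data like this, accuracy should
# hover near chance (~20% for five classes), unlike the structured
# signals measured in the study
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean cross-validated accuracy: {scores.mean():.1%}")
```

On noise, accuracy sits near the chance level; the 57% figure reported later in the article is meaningful precisely because it lies well above chance.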

Results showed the new sensors measured speech-relevant brain activity with more precision and less nonspecific signal, or “noise,” compared with older, less-dense sensors.

Technology up to 57% accurate in detecting consonant and vowel sounds

In tests of the sensor’s ability to recognize phonemes (individual speech sounds), the setup was up to 57% accurate for detecting consonant and vowel sounds.

“We observed robust decoding for classifying spoken phonemes across all subjects, with maximum accuracy reaching up to 57% … in predicting both consonants and vowels,” the researchers wrote.

The new sensor also outperformed older setups at detecting temporal patterns of phoneme use. In other words, it was better able to detect different sounds being made over time, which is conceptually similar to what would be needed for a device that aims to translate brain activity directly into speech.
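Conceptually, that kind of temporal decoding amounts to sliding a window along the recording and predicting a sound for each window. The sketch below shows only the windowing step, with made-up dimensions (256 channels, a 1 kHz sampling rate); each window would then be passed to a classifier like the one sketched earlier.

```python
# Sketch of slicing a continuous recording into overlapping windows,
# each of which would get its own phoneme prediction. Dimensions and
# sampling rate are assumptions for illustration only.

import numpy as np

def sliding_windows(signal: np.ndarray, win: int, step: int) -> np.ndarray:
    """Slice a (channels x samples) recording into overlapping windows."""
    n_channels, n_samples = signal.shape
    starts = range(0, n_samples - win + 1, step)
    return np.stack([signal[:, s:s + win] for s in starts])

# Made-up numbers: 256 channels, 2 seconds of data at 1 kHz
recording = np.random.default_rng(1).normal(size=(256, 2000))

# 100-sample windows every 50 samples -> one prediction per 50 ms
windows = sliding_windows(recording, win=100, step=50)
print(windows.shape)  # (39, 256, 100): one candidate prediction per window
```

Overlapping windows trade some redundancy for a finer-grained sequence of predictions, which matters when sounds follow one another quickly, as they do in natural speech.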

Although the scientists stressed that more work is needed to refine this technology, they concluded that the study shows “micro-scale recordings demonstrated more accurate speech decoding than standard … electrodes and better resolved the rich spatio-temporal dynamics of the neural underpinnings of speech production.”