Brain-computer interface to restore speech named award winner
Device helped man, 45, with ALS to communicate using voice recordings

A neurosurgeon and his team from the University of California (UC) Davis Health have won The Herbert Pardes Clinical Research Excellence Award — given by the nonprofit Clinical Research Forum — for their work on a brain-computer interface that translates brain signals into speech with greater than 95% accuracy.
The awarded clinical research study detailed the use of this interface to restore speech in a 45-year-old man with amyotrophic lateral sclerosis (ALS) who had lost the ability to communicate using his own voice. The work was also selected for a 2025 Top Ten Clinical Research Achievement Award earlier this year.
“It means a lot to us that our study was not only selected among the nation’s best published clinical research studies, but it has also won The Herbert Pardes Clinical Research Excellence Award!” David Brandman, MD, PhD, the neurosurgeon who led the study at the UC Davis neuroprosthetics lab, said in a university news story.
The award, which includes a $7,500 cash prize, is given to a study that shows a high degree of innovation and creativity in advancing the understanding or treatment of human disease. It is named after Herbert Pardes, a board member of the Clinical Research Forum, in recognition of his contributions to medical research and education.
Brandman, an assistant professor in the UC Davis neurological surgery department, said his team is “very honored by this recognition.” The neurosurgeon codirects the neuroprosthetics lab with neuroscientist Sergey Stavisky, PhD, also an assistant professor.
Kim E. Barrett, PhD, UC Davis Health vice dean for research, nominated Brandman for the award. Barrett called the team’s work “nothing short of remarkable,” noting “the technologies they have developed offer real hope to change the lives of those who have been robbed of their power to speak intelligibly by diseases such as ALS.”
Brain-computer interface wins $7,500 Pardes Research Excellence Award
Damage to nerve cells in ALS leads to muscle weakness that gets worse over time. When the muscles of the face weaken, patients can lose the ability to articulate words or make themselves understood — a condition known as dysarthria — making communication difficult.
Brain-computer interfaces can translate brain signals associated with speech into text displayed on a screen, synthesize voice recordings, or even control external devices. This technology aims to enable individuals with dysarthria to communicate effectively and better engage with their surroundings.
The brain-computer interface developed at UC Davis Health was implanted as part of BrainGate2 (NCT00912041), a pilot clinical study ongoing at five locations in the U.S. The study is still recruiting, with an estimated enrollment of 27 adults with ALS and other paralyzing conditions.
Its main goal is to test the safety of this interface. Its use involves placing up to six small sensors in areas of the brain that control movement and speech. These sensors are connected to one, two, or three small connectors, called percutaneous pedestals, that pass through the skin. Brain signals are recorded through these sensors at least once a week.
An interim analysis of 14 patients, including six diagnosed with ALS, showed that the interface’s safety profile was similar to that of other medical devices that stay implanted in the body long-term. Over an average of about 2.4 years, the most common side effect was skin irritation around the percutaneous pedestals.
For Casey Harrell, the man with advanced ALS and dysarthria whose case was detailed in the awarded study, a set of four sensors allowed him to communicate his intended speech within minutes of activating the brain-computer interface.
After system training that began 25 days after implantation, the brain-computer interface gave Harrell access to a 125,000-word vocabulary with an accuracy of up to 97%. This was achieved by recalibrating the decoder over the course of a dozen sessions.
At the time the study was published, Harrell said that being unable to communicate was “so frustrating and demoralizing.”
“It is like you are trapped,” Harrell said in an earlier news story. “Something like this technology will help people back into life and society.”
Harry P. Selker, MD, chair of the Clinical Research Forum and dean of the Tufts Clinical and Translational Science Institute at Tufts University, noted that Harrell was "actually able to speak using recordings of his own voice."
“This study is spectacular,” Selker said, noting that it allowed Harrell to “communicate with his family and maintain his livelihood.”
Brandman said the team’s study “demonstrates the most accurate speech neuroprosthesis (device) ever reported. The technology is transformative because it provides hope for people who want to speak but can’t. I hope that technology like this speech [brain-computer interface] will help future patients speak with their family and friends.”