Speech-recognition Tool Can Distinguish ALS, May Offer Way of Evaluating Patients at Home

by Ana Pena, PhD

Researchers developed a computational model able to recognize amyotrophic lateral sclerosis (ALS) based on patients’ speech patterns, suggesting it may one day be a non-invasive and low-cost way of evaluating disease severity and likely progression, possibly in a person’s home.

The study, by scientists at the IBM Thomas J. Watson Research Center in New York, is titled “Detection of Amyotrophic Lateral Sclerosis (ALS) via Acoustic Analysis” and was posted on bioRxiv, an open-access preprint server whose studies have not yet been peer reviewed.

Slurred speech (dysarthria) is an early symptom of ALS. A machine model that could use recorded speech to recognize and classify the features characteristic of ALS and its stages could therefore be a useful assessment tool.

The team used ALS patient data gathered by the nonprofit organization Prize4Life and the ALS Mobile Analyzer, a cellphone app designed to help monitor ALS progression.

Data included measures of disease progression, validated against the revised ALS Functional Rating Scale (ALSFRS-R), to help ensure that the patients studied covered the disease spectrum. A self-reported questionnaire, the scale includes tasks assessing speech and motor skills.

Speech recordings of ALS patients — 27 women and 40 men — were compared with those of their caregivers, who served as controls (30 women and 26 men). Participants were of various nationalities, and not all were native English speakers. All were asked to read at least three sentences, and up to one paragraph, in English.

Speech features, including frequency, spectral, and voice quality parameters, were extracted using specialized audio software. Only features that were statistically significant in characterizing the deterioration of ALS speech were selected for the final models, the study reports. Recordings were often made in people’s homes, on standard equipment.
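The article does not name the audio software used. As an illustration only, below is a minimal sketch of this kind of feature extraction in Python, assuming the open-source librosa library; the function name and the specific features chosen are hypothetical stand-ins for the frequency, spectral, and voice-quality parameters described above.

```python
# Hedged sketch: librosa is an assumption, not the study's toolchain.
import numpy as np
import librosa

def extract_speech_features(wav_path):
    """Summarize one recording with frequency, spectral, and
    voice-quality style measures (hypothetical feature set)."""
    y, sr = librosa.load(wav_path, sr=16000)

    # Frequency: fundamental frequency (pitch) via the YIN estimator
    f0 = librosa.yin(y, fmin=65, fmax=400, sr=sr)

    # Spectral: centroid and bandwidth describe overall spectral shape
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)
    bandwidth = librosa.feature.spectral_bandwidth(y=y, sr=sr)

    # Rough voice-quality proxy: zero-crossing rate tracks noisiness
    zcr = librosa.feature.zero_crossing_rate(y)

    return np.array([
        f0.mean(), f0.std(),               # pitch level and variability
        centroid.mean(), bandwidth.mean(),
        zcr.mean(),
    ])
```

Reducing frame-level measures to per-recording summary statistics, as above, is a common way to produce the fixed-length feature vectors a classifier needs.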

Using a machine learning approach called linear support vector machines (SVM), researchers then created two gender-specific computational models, based on the distinct voice patterns of women and men, and compared their accuracy against other such models.

Based on the pre-selected speech features, these models were trained to classify a person as having “ALS speech” or “non-ALS speech.”

A cross-validation analysis showed the models’ accuracy rate to be 79% for men and 83% for women.
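The article does not detail how the classifiers were built. As a minimal sketch, assuming a scikit-learn pipeline and random stand-in data sized to the study’s cohorts, the example below trains one linear SVM per gender, keeps only the statistically strongest features (a stand-in for the significance-based selection described above), and reports cross-validated accuracy; the printed numbers are meaningless placeholders.

```python
# Hedged sketch: scikit-learn is an assumption; the data are random
# stand-ins, so the printed accuracies are placeholders only.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

def cross_validated_accuracy(X, y, folds=5):
    """Mean cross-validated accuracy of a linear SVM on one gender's data."""
    model = make_pipeline(
        StandardScaler(),                  # put features on a common scale
        SelectKBest(f_classif, k=3),       # keep statistically strongest features
        LinearSVC(C=1.0, max_iter=10_000), # linear support vector classifier
    )
    return cross_val_score(model, X, y, cv=folds, scoring="accuracy").mean()

rng = np.random.default_rng(0)
# Hypothetical feature matrices: 57 women (27 ALS, 30 controls) and
# 66 men (40 ALS, 26 controls), five features per recording
X_w, y_w = rng.normal(size=(57, 5)), np.r_[np.ones(27), np.zeros(30)].astype(int)
X_m, y_m = rng.normal(size=(66, 5)), np.r_[np.ones(40), np.zeros(26)].astype(int)

print(f"women: {cross_validated_accuracy(X_w, y_w):.0%}")
print(f"men:   {cross_validated_accuracy(X_m, y_m):.0%}")
```

Training a separate model per gender, as the study did, lets each classifier exploit the systematic pitch and resonance differences between male and female voices rather than treating them as noise.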

“We demonstrated successful recognition of ALS and non-ALS speech on a dataset collected in the wild with no special equipment,” the researchers wrote. “This end result was a gender-optimized solution with improved performance over comparable linear SVM classifiers.”

But the study had notable limitations, the team noted. The sample size was small, and the patient and control cohorts were matched neither for age nor for disease stage. In addition, speech was not recorded under controlled conditions, often with heavy background noise.

Still, “[t]he advantage of using this kind of dataset … is that the resulting models will function on data collected in the wild, which is the level of robustness required for deploying mobile symptom tracking tools,” the team said.

Researchers believe that system performance can also be improved by using deep learning methods, particularly on larger datasets.

In the future, speech analysis combined with other ALS measurements may serve as a proxy for monitoring ALS progression, helping doctors track their patients without requiring them to leave their homes.

This type of monitoring has the potential to generate large datasets of disease measurements, which may help research into biomarkers of ALS progression and the discovery of new treatments, the study concluded.