NIH $2.3M Grant to Test App to Diagnose, Monitor ALS at Home
The National Institutes of Health (NIH) has awarded $2.3 million to fund testing of an app that could improve how amyotrophic lateral sclerosis (ALS) is diagnosed, track the disease’s progression, and help assess a treatment’s effectiveness.
The three-year Small Business Technology Transfer (STTR) grant was awarded to Modality.AI and the Massachusetts General Hospital (MGH) Institute of Health Professions, which partnered to develop the ALS diagnosis and monitoring app. It will support work comparing the accuracy of the app’s assessments against current approaches.
“This could be a game-changer for tracking ALS and understanding the impact of the medicines used to treat it,” Jordan Green, PhD, a speech-language pathologist, director of the Speech and Feeding Disorders Lab at the MGH Institute of Health Professions, and the project’s principal investigator, said in a Modality.AI press release.
The artificial intelligence-based digital health app is intended for at-home and remote use, guiding patients through assessments with the help of a virtual “agent” called Tina. As such, it could lower care costs for healthcare providers and patients and limit the need to travel to a clinic for an ALS evaluation.
Potentially ‘cheaper, faster’ way of assessing ALS
“I think if we’re successful, it’ll change the standard clinical care practice. Our work is mostly face to face, often expensive, and can be less attainable for the socially and economically disadvantaged and those who can’t travel or have mobility issues,” Green said. “With this app, we’ll be able to capture more data, and in turn, help more people. It’ll be cheaper, faster — and we’ll get more accurate assessments.”
It can take up to 18 months to diagnose ALS, a process that includes tests to rule out other conditions and periods of symptom evaluation.
An earlier diagnosis and assessment could bring patients into treatment faster, helping to preserve the motor neurons responsible for walking, breathing, speech, and swallowing. It may also be of use in clinical trials evaluating potential disease treatments.
The Modality app, which can be used on smartphones, tablets, and computers, captures a person’s speech and facial movements, with the collected data measured and analyzed using artificial intelligence.
Tina, its virtual agent, guides patients through an ALS evaluation and questions them in ways said to be similar to how clinicians interview patients for a diagnosis.
To begin a session, the patient clicks on a link and grants permission, activating a camera and microphone and prompting Tina to begin speaking.
The patient is asked, for example, to count numbers, repeat sentences, and read a paragraph. During this time, the app collects data to gauge variables such as speaking rate, the speed of lip and jaw movements, pausing patterns, and pitch variations. Information from speech acoustics and movements, gleaned from full-face video recordings, is then decoded for analysis.
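For illustration only: the article describes the app deriving measures such as pausing patterns and pitch variation from recorded speech. A minimal sketch, assuming a plain audio array, of how such acoustic features could in principle be computed (this is a toy example, not Modality’s actual pipeline; all names and thresholds here are hypothetical):

```python
import numpy as np

def speech_features(signal, sr, frame_ms=25, hop_ms=10, energy_thresh=0.02):
    """Toy feature extractor: frame the audio, label frames as speech or
    pause by RMS energy, and report simple metrics of the kind the article
    mentions (pausing patterns, pitch variation)."""
    frame = int(sr * frame_ms / 1000)
    hop = int(sr * hop_ms / 1000)
    n = 1 + max(0, (len(signal) - frame) // hop)
    # RMS energy per frame; low-energy frames are treated as pauses
    rms = np.array([np.sqrt(np.mean(signal[i*hop:i*hop+frame]**2))
                    for i in range(n)])
    voiced = rms > energy_thresh
    pause_fraction = 1.0 - float(voiced.mean())
    # Crude per-frame pitch estimate via the autocorrelation peak,
    # searched over lags corresponding to roughly 75-400 Hz
    pitches = []
    for i in range(n):
        if not voiced[i]:
            continue
        x = signal[i*hop:i*hop+frame].astype(float)
        x -= x.mean()
        ac = np.correlate(x, x, mode="full")[len(x)-1:]
        lo, hi = sr // 400, sr // 75
        lag = lo + int(np.argmax(ac[lo:hi]))
        pitches.append(sr / lag)
    pitch_sd = float(np.std(pitches)) if pitches else 0.0
    return {"pause_fraction": pause_fraction, "pitch_sd_hz": pitch_sd}

# Synthetic check: 1 s of a steady 150 Hz tone, then 1 s of near-silence,
# should yield a pause fraction near 0.5 and low pitch variation.
np.random.seed(0)
sr = 16000
t = np.arange(sr) / sr
audio = np.concatenate([0.5 * np.sin(2 * np.pi * 150 * t),
                        0.001 * np.random.randn(sr)])
feats = speech_features(audio, sr)
```

A real system would need far more robust voice-activity detection and pitch tracking, but the shape of the computation, short frames turned into per-frame measures and then summary statistics, is the same.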
“From the beginning, we realized that merely monitoring audio was inadequate to the task, so we developed a software platform to assess facial expressions, limb movements, and even cognitive functions,” said David Suendermann-Oeft, PhD, founder and CEO of Modality.
“The Modality system [also] … produces a dashboard of speech analytics that can be measured across time to make assessments, for example, if the individual is responding to medication, if they need extra support, or to characterize their overall disease progression,” Suendermann-Oeft added.
The NIH grant’s overarching goal is to determine whether the app and its Tina platform are as effective as current diagnostic and evaluation approaches in clinical use. “If the results match the results from clinicians and their state-of-the-art equipment, then researchers will know they have a valid approach,” Modality stated in its release.
“Our collaboration with the MGH Institute of Health Professions will create a standardized assessment app for ALS,” Suendermann-Oeft said, making “these complex assessments easy to use, accessible, equitable, and accurate for patients, clinicians, and researchers.”