I love being in win-win situations. We all do. Especially while world events continue to push the boundaries of our patience. And my latest project offers me many wins: It’s the perfect addition to my current home voice program, and I’m helping to research ALS voice issues.
Like 80% of all ALS patients, I have dysarthria, or an “ALS voice.” This is a slurred, slow speech with a nasal tone and an imprecise pronunciation of consonants. Decreased social interaction in recent months has limited my opportunities to practice talking, so I created my own home voice program and wrote about it in “Talking Myself into a New Habit.”
A few months ago, I learned about a study sponsored by the nonprofit group EverythingALS that aimed to create a high-quality database of speech data to develop enhanced voice recognition tools that improve quality of life for ALS patients.
The study requires participants to log on once a week from home for approximately 10-20 minutes to read a variety of sentences aloud.
Practice speaking? Contribute to ALS research? Count me in!
Does voice recognition need improvement?
Voice recognition software is showing up everywhere — in our smart speakers, our phones, and our homes. But for those of us with speech issues, no amount of repetition or cajoling will activate the software or make it respond to our commands. For some, it’s a minor annoyance, but for others, it can be a dehumanizing experience.
Occasionally, I have success with the Echo Show that sits on my work table. But I’m only successful if I make an effort to focus, sit in good alignment, breathe slowly, and then speak with as much enunciation as I can muster.
When that doesn’t work, I rely on a workaround I’ve devised for my other devices. Using the Text-to-Talk app on my cellphone, I simply point in the direction of the Echo Show and have one automated voice speak to another.
It’s called machine learning
Software designers don’t set out to build systems that ignore certain demographic groups or voice variations. But in their zeal to roll out new programs, they may not always be aware of their blind spots.
Machine learning depends on training data — in this case, a broad range of voice recordings. Engineers already use this approach to develop speech translation apps that help travelers translate from one language to another. The data collected in my study will be used to advance machine learning.
I am happy to contribute my “ALS voice” to help AI software learn to better understand me and my fellow pALS.
Meeting in the middle
Yes, I’m taking advantage of the study to add to my weekly practice of speaking out loud. These sessions are helping me learn to better manage and hopefully save what little voice I have.
But I also feel good about helping to improve ALS diagnosis, furthering knowledge about the voice issues experienced by those of us with ALS, and expanding technology to provide valuable assistance to those of us living with physical disabilities.
EverythingALS is still recruiting for the next phase of its study. If you’d like to participate, find out more here.
Let’s help technology learn more about ALS so that we can live well while living with ALS.
Note: ALS News Today is strictly a news and information website about the disease. It does not provide medical advice, diagnosis, or treatment. This content is not intended to be a substitute for professional medical advice, diagnosis, or treatment. Always seek the advice of your physician or other qualified health provider with any questions you may have regarding a medical condition. Never disregard professional medical advice or delay in seeking it because of something you have read on this website. The opinions expressed in this column are not those of ALS News Today or its parent company, BioNews, and are intended to spark discussion about issues pertaining to ALS.