TAMPA, Fla. — “We know that voice has been linked to diseases for a long time,” said Dr. Yael Bensoussan, director of the USF Health Voice Center and assistant professor of laryngology.
That’s the purpose behind a new study being led by the University of South Florida and Weill Cornell Medicine.
This study, called "Voice as a Biomarker of Health," is a multimillion-dollar effort funded by the National Institutes of Health.
Bensoussan is the co-principal investigator of the project.
“This is a large, multi-institutional project that we do with USF, Cornell, and 10 other universities where we’re collecting human voices to link them to medical information to diagnose a certain disease or to screen for certain diseases with voice," said Bensoussan.
Researchers are studying how the human voice can be used to detect and monitor diseases and mood disorders like depression.
“We know, for example, when we talk about neurological disease, people that have strokes, the way they speak changes. The articulation doesn’t work as much as before. People who have Alzheimer’s, the content of their speech changes. People who have Parkinson’s, the way they talk can be slower, can be lower,” said Bensoussan.
One of the goals of this project is to demonstrate that doctors can use small changes in a patient’s voice to diagnose diseases.
“I think voice as a biomarker is the cheapest biomarker that exists. So when we think about different biomarkers like genetic information, it’s something where we have to take blood from people, we have to ask them to do swabs. They’re very resource intensive to analyze and usually cost about $1,000 just to analyze the samples," Bensoussan said. "When we think about CT scans, imaging, they have radiation for patients, and there’s always a little risk. They’re a little bit more invasive."
With voice, Bensoussan believes patients could simply record themselves and send the recording to their doctor.
“You can do it out of your cellphone, you can do it out of remote communities. You don’t necessarily have to be in a big center with a lot of resources,” said Bensoussan.
As part of this project, researchers will also create a database of patient voices to “teach” artificial intelligence programs what to listen for, including tiny differences that can warn of serious health conditions.
“If we don’t have data, it doesn’t matter. The computers and the algorithms need to learn from data,” said Bensoussan. “Once we’re able to prove that there’s really a link with voice and diseases, and it’s been proven at a smaller level, I think it’s really going to change the way we help people."
In this story, ABC Action News used two voice clips we received from USF Health.
- Glottic cancer patient: shortened voice sample from The Voice Foundation and St. John’s University.
- Walden, Patrick R. (2020), “Perceptual Voice Qualities Database (PVQD)”, Mendeley Data, v2, https://data.mendeley.com/datasets/9dz247gnyb/1
- License: CC BY 4.0
- Parkinson’s disease patient: shortened voice sample from SpeechVive