Can AI-based voice analysis help identify mental disorders?

NEW YORK — Imagine a test as quick and easy as taking your temperature or measuring your blood pressure that could reliably identify an anxiety disorder or predict an impending relapse of depression.

Health care providers have many tools to assess a patient’s physical condition, but no reliable biomarkers – objective indicators of medical states observed from outside the patient – to assess mental health.

But some artificial intelligence researchers now think the sound of your voice could be the key to understanding your mental state – and that AI is perfectly suited to detect such changes, which are difficult, if not impossible, to perceive otherwise. The result is a collection of apps and online tools designed to track your mental state, as well as programs that provide real-time mental health assessments to telehealth and call center providers.

Psychologists have long known that certain mental health problems can be detected by listening not only to what a person says, but also to how they say it, said Dr. Maria Espinola, a psychologist and assistant professor at the University of Cincinnati College of Medicine.

In depressed patients, Dr. Espinola said, “Their speech is generally more monotonous, flatter, and softer. They also have a reduced pitch range and lower volume. They pause more. They stop more often.”

Patients with anxiety feel more tension in their bodies, which can also change the way their voice sounds, she said. “They tend to speak faster. They have more difficulty breathing.”

Today, these types of voice characteristics are being exploited by machine learning researchers to predict depression and anxiety, as well as other mental illnesses like schizophrenia and post-traumatic stress disorder. Deep learning algorithms can uncover additional patterns and characteristics, captured in short voice recordings, that might not be evident even to trained experts.
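To make the idea concrete, here is a minimal, hypothetical sketch of the kind of acoustic features such systems might start from – pitch range, loudness, and pausing – computed with the open-source librosa library. The feature choices and thresholds are illustrative assumptions, not a description of any particular product’s model.

```python
# Hypothetical sketch: extract simple voice features that research links to
# depressed speech (reduced pitch range, lower volume, more pausing).
# Uses the open-source librosa library; the feature choices are illustrative only.
import numpy as np
import librosa

def voice_features(path: str) -> dict:
    # Load a short voice recording as a mono waveform.
    y, sr = librosa.load(path, sr=16000, mono=True)

    # Fundamental frequency (pitch) track over a plausible speaking range.
    f0, voiced_flag, _ = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
    )
    voiced_f0 = f0[voiced_flag & ~np.isnan(f0)]
    pitch_range_hz = float(voiced_f0.max() - voiced_f0.min()) if voiced_f0.size else 0.0

    # Root-mean-square energy as a rough loudness proxy.
    rms = librosa.feature.rms(y=y)[0]
    mean_loudness = float(rms.mean())

    # Fraction of the clip spent in silence, a crude proxy for pausing.
    speech_intervals = librosa.effects.split(y, top_db=30)
    speech_samples = sum(end - start for start, end in speech_intervals)
    pause_ratio = 1.0 - speech_samples / len(y)

    return {
        "pitch_range_hz": pitch_range_hz,
        "mean_loudness": mean_loudness,
        "pause_ratio": float(pause_ratio),
    }

# In a real system, features like these (or representations learned by a deep
# network) would feed a classifier trained on labeled clinical data.
```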

“The technology we’re using now can extract features that may be meaningful that even the human ear can’t pick up,” said Dr. Kate Bentley, assistant professor at Harvard Medical School and clinical psychologist at Massachusetts General Hospital.

“There is a great deal of enthusiasm around the search for biological or more objective indicators of psychiatric diagnoses that go beyond the more subjective forms of assessment traditionally used, such as clinician-rated interviews or self-report measures,” she said. Other clues researchers track include changes in activity levels, sleep patterns, and social media data.

These technological advances come at a time when the need for mental health care is particularly acute. According to a report by the National Alliance on Mental Illness, 1 in 5 adults in the United States suffered from mental illness in 2020. And the numbers continue to rise.

Although AI technology cannot make up for the shortage of qualified mental health care providers – there are not nearly enough to meet the country’s needs, Dr. Bentley said – there is hope that it may lower barriers to receiving a correct diagnosis, help clinicians identify patients who may be hesitant to seek care, and facilitate self-monitoring between visits.

“A lot can happen between appointments, and technology can really give us the potential to improve monitoring and evaluation on a more continuous basis,” Dr. Bentley said.

To test out this new technology, I started by downloading the Mental Fitness app from Sonde Health, a health technology company, to see whether my feelings of malaise were a sign of something serious or whether I was simply languishing. Described as “a voice-activated mental fitness tracking and journaling product”, the free app prompted me to make my first recording, a 30-second verbal journal entry, which would rate my mental health on a scale from 1 to 100.

A minute later, I had my score: a discouraging 52. “Be careful,” the app warned.

The app reported that the level of liveliness detected in my voice was particularly low. Did I sound monotonous just because I had been trying to speak softly? Should I heed the app’s suggestions to improve my mental fitness by going for a walk or decluttering my space? (That first question may point to one of the app’s possible flaws: as a consumer, it can be hard to know why your voice scores fluctuate.)

Later, feeling nervous between interviews, I tested another voice analysis program, this one focused on detecting anxiety levels. The StressWaves Test is a free online tool from Cigna, the health care and insurance conglomerate, developed in collaboration with the AI specialist Ellipsis Health to evaluate stress levels using 60-second samples of recorded speech.

“What keeps you up at night?” the website prompted. After I spent a minute recounting my lingering worries, the program scored my recording and emailed me a statement: “Your stress level is moderate.” Unlike the Sonde app, Cigna’s email offered no helpful self-improvement advice.

Other technologies add a potentially useful layer of human interaction, such as Kintsugi, a company based in Berkeley, California, that recently raised $20 million in Series A funding. Kintsugi is named after the Japanese practice of mending broken pottery with veins of gold.

Founded by Grace Chang and Rima Seiilova-Olson, who bonded over their shared experience of struggling to access mental health care, Kintsugi develops technology for telehealth and call center providers that can help them identify patients who could benefit from additional support.

Using Kintsugi’s voice analysis program, a nurse might be prompted, for example, to take an extra minute to ask a harried parent with a colicky baby about their own well-being.

One concern with the development of these kinds of machine learning technologies is the issue of bias – making sure the programs work equitably for all patients, regardless of age, gender, ethnicity, nationality, and other demographic characteristics.

“For machine learning models to work well, you really need to have a very large, diverse, and robust set of data,” Chang said, noting that Kintsugi used voice recordings from around the world, in many different languages, to guard against this particular problem.

Another major concern in this nascent field is privacy – particularly of voice data, which can be used to identify individuals, Dr. Bentley said.

And even when patients agree to be recorded, the question of consent is sometimes twofold. In addition to assessing a patient’s mental health, some voice analysis programs use the recordings to develop and refine their own algorithms.

Another challenge, Dr. Bentley said, is potential consumer distrust of machine learning and so-called black box algorithms, which work in ways that even the developers themselves cannot fully explain, particularly which features they use to make predictions.

“There’s creating the algorithm, and there’s understanding the algorithm,” said Dr. Alexander Young, acting director of the Semel Institute for Neuroscience and Human Behavior and chair of psychiatry at UCLA, echoing the concerns many researchers have about AI and machine learning in general: that there is little if any human oversight during the program’s training phase.

For now, Dr. Young remains cautiously optimistic about the potential of voice analytics technologies, particularly as tools for patients to monitor themselves.

“I believe you can model people’s mental health status or approximate their mental health status in general,” he said. “People like being able to self-monitor their condition, especially with chronic illnesses.”

But before automated voice analysis technologies enter mainstream use, some are calling for rigorous investigations into their accuracy.

“We really need more validation not only of voice technology, but also of artificial intelligence and machine learning models based on other data streams,” Dr. Bentley said. “And we need to get that validation from large-scale, well-designed representative studies.”

Until then, AI-based voice analysis technology remains a promising but unproven tool that could eventually be an everyday method for taking the temperature of our mental well-being.

This article originally appeared in The New York Times.
