‘Like Modern-Day Phrenology’: Will a New Slew of Mobile Apps Improve Mental Health or Put Users at Risk?
Keerthi Vedantam is a bioscience reporter at dot.LA. She cut her teeth covering everything from cloud computing to 5G in San Francisco and Seattle. Before she covered tech, Keerthi reported on tribal lands and congressional policy in Washington, D.C. Connect with her on Twitter, Clubhouse (@keerthivedantam) or Signal at 408-470-0776.
The pandemic has been a disaster for our mental health.
According to the Kaiser Family Foundation, the share of Americans experiencing depression, anxiety and other mental health issues nearly doubled during the pandemic. Little surprise, then, that demand for mental health services has skyrocketed.
As the country grapples with its mental health crisis, Apple has a proposal: Turn to your phone.
The tech giant is betting that the smartphone provides a trove of data that could help alleviate common mental health struggles — though not everyone is convinced it's such a good idea.
UCLA researchers have been working with Apple to see if an iPhone can detect depression and anxiety via so-called emotion recognition, the Wall Street Journal reported in September. The project is still in early development stages, and the company has not announced what the end product will look like.
Apple's venture with UCLA will triangulate a slew of data points, including facial expressions, vocal cues like tone, how often and how fast people walk, heart rate, typing speed and accuracy, and what people type, to determine which signals correspond with certain mental illnesses, according to the outlet. Researchers will also ask participants to fill out surveys about how they feel. (Apple and UCLA did not respond to requests for comment.)
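To make that kind of triangulation concrete, here is a minimal sketch in Python of how passive sensor features might be paired with self-reported survey labels to surface which signals carry weight. Everything here is an assumption for illustration: the feature names, the synthetic data and the logistic regression model are hypothetical stand-ins, not Apple's or UCLA's actual methods, which have not been disclosed.

```python
# Hypothetical sketch: correlate passive-sensing features with survey labels.
# Synthetic data throughout; nothing here reflects the real Apple/UCLA study.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 500  # synthetic "participants"

# Per-participant features drawn from the data types the Journal lists:
# mobility, heart rate, typing behavior and vocal tone.
X = np.column_stack([
    rng.normal(5000, 2000, n),   # daily_steps
    rng.normal(70, 10, n),       # resting_heart_rate (bpm)
    rng.normal(40, 12, n),       # typing_speed (words per minute)
    rng.normal(0.95, 0.03, n),   # typing_accuracy (fraction of keystrokes)
    rng.normal(0.0, 1.0, n),     # vocal_tone_score (arbitrary units)
])

# Survey answers stand in for a screening questionnaire; here they are
# synthetic and loosely tied to low mobility, with noise, for illustration.
y = ((X[:, 0] < 4000) ^ (rng.random(n) < 0.2)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X_train, y_train)

print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
# The learned coefficients hint at which signals the model leans on,
# which is essentially Naeim's question: which effects are measurable?
features = ["daily_steps", "resting_heart_rate", "typing_speed",
            "typing_accuracy", "vocal_tone_score"]
for name, coef in zip(features, model.named_steps["logisticregression"].coef_[0]):
    print(f"{name}: {coef:+.3f}")
```

Even in this toy setup, the model's accuracy and coefficients only describe correlations in the data it was given, which is precisely where critics see room for error and abuse.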
"If it's true that mental health affects people's day-to-day lives, then the question is in what ways does it affect people's day-to-day lives? And in what of those ways is there something measurable?" said UCLA professor Dr. Arash Naeim, who was not involved in the research with Apple.
But activists and experts warn that the technology, which would treat facial expressions as a data point in determining a user's mental health, could be used to exploit the very people who rely on it.
"We have rampant mental health issues and no good routine way of screening for them. So if you were able to develop an automated way to really understand depression, anxiety, stress levels, that's really an advancement," said Ritika Chaturvedi, a precision medicine expert at the USC Schaeffer Center. "But — as with everything in tech — without guardrails, without regulation and without really understanding what the technology is actually doing, it's rife for abuse."
Facial recognition technology has long been used to match faces to identities, and is a trusted surveillance tool of government and law enforcement agents — often at the expense of activists, journalists and immigrants.
But emotion recognition takes it to another level, attempting to infer a person's emotions from their face. That could heighten tensions in dangerous law enforcement situations, or have racist implications (as when Google's AI classified photos of Black people as gorillas).
"I am a deep skeptic that such technology can actually work in any meaningful way," said Mark McKenna, a privacy law professor at UCLA. (McKenna was not involved in the research with Apple.) "So I think there's a huge risk of error here in ways that could be really damaging."
McKenna points to a variety of potentially nefarious uses for the technology. It might be deployed as a more sophisticated lie detector, one that reads emotional cues to determine whether someone is anxious. It could also bolster existing technology that claims to use facial recognition to predict someone's likelihood of committing a crime.
"It's kind of like modern-day phrenology done with digital tools," he said.
Doctors Are Relying More and More on Digital Technology
Even before this latest mental health bet, Apple was on the hunt for ways its products could improve users' health, as seen in its recent effort to expand the Apple Watch's capabilities as a glucose monitor. It has also added tools to the smartwatch that detect irregular heart rhythms, and iPhone notifications that help users limit audio exposure.
These tools can fill in the gaps for doctors, who often rely on their patients to be accurate historians about their own health, diet and emotions. Arguably, the most intimate connection anyone has is with their smartphone. If patients can't articulate that they're feeling depressed, or aren't even aware they're dealing with a mental health struggle, it's almost impossible to provide them with proper care, and poor mental health can be the root of other health problems.
"There's a lot of potential for technology to be able to leverage more information about the individual, make a contribution in their lives and health," said Naeim, the UCLA professor. "The future is having better data and using the better data in a smarter way to better make an impact on our patients' lives and as well as empowering patients themselves."
Health Privacy in a Digital World
But Apple isn't the only company collecting and leveraging highly personal data. A plethora of period-tracking, mental health and food-tracking apps have the ability to collect and sell data on their customers.
"Health privacy has always been one of the biggest categories that people have been worried about" in data privacy circles, said McKenna. "We have a regulatory system, but that regulatory system is woefully inadequate for the kinds of tools that we have now, and especially where we live in a world where everybody is creating apps to track all kinds of information."
That regulatory system, the Health Insurance Portability and Accountability Act (HIPAA), was established in 1996 to standardize how doctors store and share patient health information without jeopardizing patients' privacy. But the law applies only to health care institutions, and for patients it amounts to little more than a disclosure of how their health information can be used.
"There's a huge industry of apps and health data That aren't regulated by HIPAA because they're not health care providers," McKenna said.
According to the Journal, Apple hopes to be able to warn people they may be struggling with mental health problems and ask them to seek care. But using facial recognition as a data point to determine one's emotion has use cases that reach far beyond the realm of mental health.
Emotion Recognition Is the New Wave of Facial Recognition
A 2019 study by psychologists at Northeastern University found that there are no objective ways to measure how one's face corresponds with emotions. Furrowed eyebrows, upturned corners of the mouth and a scrunched nose are not conclusively tied to any specific emotion. People's faces are simply constructed in different ways; pinched eyebrows on one person could just be close-set eyebrows on another.
That doesn't mean the technology isn't being used. 4 Little Trees, a Hong Kong company, has found success with a program said to detect emotions in children as they attend school from home. AI-based emotion-detection software has also been used to surveil Uyghur Muslims in China.
Chaturvedi said guardrails need to be put in place so people can enjoy the benefits of this kind of technology without being exploited, and that those guardrails would have to come from public policy leaders and privacy experts.
"Because you need to give up a certain level of privacy, it makes it so that bad actors exploit the same technology," she said. "What we need are the guardrails to protect from harm, rather than the privacy itself being the fundamental issue."