AI and Mental Health: Are We Building a Divided System?

How can we avoid a split in care defined by buying power?

The advent and implementation of new technologies in any sector always bring unintended consequences, and even with the best intentions, we can travel far down the path of progress before we fully appreciate that we've left some people behind.

In the last year, we've seen the introduction of generative artificial intelligence systems like ChatGPT, Midjourney, and a litany of others. Across the globe, there's been enthusiastic uptake of AI in nearly every industry. From finance to journalism, manufacturing to big tech, organizations are integrating AI into their regular operations and marketing it heavily to consumers and business partners. It's a stamp of modern thinking and an ability to keep pace with the curve, if not stay ahead of it. We're told it's the wave of the future.

The health industry is no exception, and many theorize that AI could play a role in nearly every aspect of the patient and provider experience, streamlining burdensome administrative tasks or optimizing routine medical procedures. There has also been considerable discussion of the role AI could take in mental health care, with some going so far as to predict that AI chatbots will eventually supplant human therapists and counselors as the preferred mode of care delivery.

That should raise some red flags.

Namely, the possibility now exists for a two-tiered system of care based on economic, employment, or insurance status. Imagine someone employed full-time with limited insurance benefits or high out-of-pocket costs. If they struggle with their mental health and their insurer determines it's cheaper to cover therapy delivered by an AI chatbot, the option to speak with a real human being might be off the table.

Conversely, someone with excellent benefits or sufficient resources to pay out-of-pocket for mental health care could access a real person instead of speaking to a computer program.

Should that divide occur, we will have created a system in which the well-off have access to a level of mental health care that's unavailable to those of limited means.

One of the hallmarks of many varieties of mental illness — particularly depression and social anxiety disorders — is isolation. As a society, we still struggle to overcome the stigma around mental illness, which drives some to hide from others, sometimes in plain sight. Other times, the nature of the disease itself causes the individual to withdraw. In either case, a back-and-forth with an AI chatbot, no matter how sophisticated, might prove insufficient.

AI mental health care has a part to play, assuming it can be provided at low or no cost and can be relied upon to serve clients adequately; some quality mental health care is better than none. That said, we must not let our enthusiasm for new technology cloud our judgment until we lose sight of the most important thing: health care is about people.

There are clear use cases that make sense. Using AI chatbots to triage patients, support those at lower risk of slipping into crisis, or offer surface-level, generic guidance on mindfulness or deep-breathing exercises is an excellent use of the technology. But for those experiencing a crisis or suffering from a long-term or particularly acute condition, the connection that can be formed with another human being can be lifesaving.

Technology like AI should be leveraged to make the human experience better for all, not just for some. If we are thoughtful about its application, AI can indeed be transformative. However, handing AI the keys to someone's mental health care wholesale carries more risk than we ought to be willing to bear. Human connection still matters, even in our increasingly digital world.

If we keep that fundamental truth in mind, AI will become a tool in providers' toolboxes, another way to deliver streamlined, ongoing care to those who need it, regardless of economic position or life circumstance. Human-centered care must become, and remain, the watchword of our fragmented health ecosystem. People's lives and well-being depend on it.


Cullen Burnell
https://www.finnpartners.com/bio/cullen-burnell/
Cullen Burnell is Vice President and Chief of Staff to the Chair, Global Health and Purpose at FINN Partners. His previous professional experience includes stints in media, government, and BigLaw. He resides in Connecticut with his wife and two daughters.