Editor's Choice

AI and Mental Health: Are We Building a Divided System?

The advent and implementation of new technologies in any sector always bring unintended consequences, and even with the best intentions, we can travel far down the path of progress before we fully appreciate that we've left some people behind.

In the last year, we've seen the introduction of augmented or artificial intelligence systems like ChatGPT, Midjourney, and a litany of others. Across the globe, there's been enthusiastic uptake of AI in nearly every business model. From finance to journalism, manufacturing to big tech, organizations are integrating AI into their regular operations and marketing it heavily to consumers and business partners. It's a stamp of modern thinking and an ability to keep pace with the curve, if not stay ahead of it. We're told it's the wave of the future.

The health industry is no exception, and there’s theorizing that AI could have a role to play in nearly every aspect of the patient and provider experience, making some currently burdensome activities more efficient or optimizing straightforward medical procedures. There has also been considerable discussion around what role AI can take in mental health care, with some going so far as to predict that AI chatbots will eventually supplant human therapists and counselors as the preferred care delivery mechanism.

That should raise some red flags.

Namely, the possibility now exists for a two-tiered system based on economic, employment, or insurance status. Imagine someone employed full-time with limited insurance benefits or high out-of-pocket costs. If they struggle with their mental health and their insurer determines it's cheaper to cover therapy via AI chatbot, the option to speak to a real human being might be off the table.

Conversely, someone with excellent benefits or sufficient resources to pay out-of-pocket for mental health care could access a real person instead of speaking to a computer program.

Should that divide occur, we have created a system whereby the well-off have access to a level of mental health care that’s unavailable to those of limited means.

One of the hallmarks of many varieties of mental illness — particularly depression and social anxiety disorders — is isolation. As a society, we still struggle to overcome the stigma around mental illness, which drives some to hide from others, sometimes in plain sight. Other times, the nature of the disease itself causes the individual to withdraw. In either case, a back-and-forth with an AI chatbot, no matter how sophisticated, might prove insufficient.

AI mental health care has a part to play, assuming it can be provided at low or no cost and can be relied upon to serve clients adequately; some quality mental health care is better than none. That being said, we must be careful to keep our enthusiasm for new technology from clouding our view such that we lose sight of the most important thing: health is about people.

There are undoubtedly clear use cases that make sense. Using AI mental health chatbots to triage patients, help those in less immediate danger of slipping into crisis, or provide surface-level or generic advice on practicing mindfulness or performing deep breathing exercises, for example, is an excellent use of the technology. But for those experiencing a crisis or suffering from a long-term or particularly acute condition, the connection that can be formed with another human being can be lifesaving.

Technology like AI should be leveraged to make the human experience better for all, not just for some. If we are thoughtful about its application, AI can indeed be transformative. However, handing it the keys to someone's mental health care wholesale carries more risk than we ought to be willing to bear. Human connection still matters, even in our increasingly digital world.

If we keep that fundamental truth in mind, AI will become a tool in providers' toolboxes, another way to provide streamlined and ongoing care to those who need it, regardless of economic position or life circumstance. Human-centered care must become and remain the watchword of our fragmented health ecosystem. People's lives and well-being depend on it.

Cullen Burnell

Cullen Burnell is Vice President and Chief of Staff to the Chair, Global Health and Purpose at FINN Partners. His previous professional experience includes stints in media, government, and BigLaw. He resides in Connecticut with his wife and two daughters.

