Dr. Patricia Farrell on Medika Life

AI Chatbots and Your Mental Health: What Should You Know?

The technology is growing faster than we expected, and we need to know more as it progresses.

It’s tough to go a week without hearing about AI chatbots. They’re everywhere now: on our phones, our laptops, and even in apps we’ve used for years. More and more, people aren’t just using them to write emails or find recipes. They’re turning to chatbots when they’re struggling emotionally, asking for advice about anxiety, grief, loneliness, and depression. Some people treat them like therapists, while others see them as friends.

Over 987 million people around the world now use AI chatbots regularly. Research shows that nearly half of Americans with ongoing mental health conditions have turned to a chatbot for emotional support in the past year alone. That’s a huge number of people relying on a technology that’s still very new in mental health care. So what does this mean?

Is it a big step forward in making help more accessible, or are we taking a risky chance? As with most things, the truth is somewhere in the middle. These tools offer real benefits, but they also come with real risks. It’s important to look at both sides honestly.

The Case for AI Chatbots in Mental Health

First, let’s look at why so many people are turning to these tools. There’s a mental health crisis, and not enough providers to help everyone who needs it. Long wait lists, high costs, and the ongoing stigma around seeking help all make it harder for people to get care. For someone who can’t afford therapy, can’t find an available provider, or feels too embarrassed to talk to someone in person, a chatbot that’s always available can feel like a lifeline.
Corporations are responding: more TV ads now offer online therapy, with or without chatbots. And research supports the appeal, at least to some extent.

A systematic review of 31 randomized controlled trials (the gold standard in clinical research) found that AI chatbots helped reduce anxiety and depression symptoms in adolescents and young adults. Another meta-analysis of 14 rigorous trials found a clear positive effect on mental health outcomes, showing these tools are more than just placebos. For college students, who often face unique pressures and may avoid formal help, chatbots have shown promise in building coping skills and improving emotional well-being.

Anonymity is important, too. People are more likely to open up when they don’t feel judged. Studies show that users see the chatbot’s lack of social expectations as a big advantage. It’s easier to admit you’re struggling when you don’t have to worry about what someone else thinks. For people with anxiety, this low barrier could mean the difference between getting some support and getting none.

Mental health professionals have noticed these benefits, too. A 2025 study found that many clinicians see AI chatbots as a useful way to offer support between therapy sessions, provide education, and reach people who might not otherwise seek care. When the alternative is no help at all, the accessibility and scalability of chatbots are hard to ignore.

Where These Tools Can Cause Real Harm

This is where things get more difficult. The same qualities that make chatbots appealing, like being available, warm, and endlessly patient, can also make them risky for people in real psychological distress. We need to remember that chatbots are designed to keep users constantly engaged. Disconnecting can be very hard because the bond grows so strong that logging off can feel like leaving a friend.

Researchers have found something called a “compassion illusion”: the strong feeling that an AI understands you, cares about you, and responds to your emotions in a meaningful way. It feels real, but it isn’t; an algorithm has no ability to “feel” or “care.” This gap between what people feel and what’s actually happening is where problems can start, especially for vulnerable people who may not realize they’re relying on something with no clinical judgment, no duty of care, and no way to notice if they’re getting worse.

A Stanford University study found that several popular therapy chatbots failed important therapeutic tests. They not only showed stigmatizing attitudes toward conditions like schizophrenia and alcohol dependence, but also gave dangerous responses in crisis situations. In one case, a chatbot responded to a subtle mention of suicidal thoughts by cheerfully naming tall bridges, something no competent therapist would do. Failures like this have already led to lawsuits related to suicides.

Another study tested ten AI chatbots using fictional teen mental health scenarios. Nearly a third of the time, the bots supported harmful ideas suggested by the fictional teens, such as dropping out of school or avoiding all human contact. None of the ten bots managed to challenge every dangerous suggestion. By any clinical standard, that’s a failing grade.

There’s also the problem of people relying too much on chatbots. Since these systems are always available and don’t make human mistakes, they can become someone’s main source of emotional support. Psychiatrists are now seeing cases of what’s been called “AI psychosis”: patients, especially those with existing mental health vulnerabilities, whose delusions or paranoia worsen after spending a lot of time with chatbots. Because chatbots tend to agree and mirror rather than challenge distorted thinking, they can quietly make things worse over days or weeks.

This isn’t just a theoretical risk. It’s happening in clinical offices right now.

What We Still Don’t Know — and Why That Matters

The uncomfortable truth is that we don’t have enough research to know how often AI chatbots help, how often they cause harm, or who is most at risk. A review of 160 studies found that only 16 percent of those covering newer chatbots built on large language models had gone through clinical efficacy testing. Most are still in early testing stages. It’s like handing out a new drug before the clinical trials are finished.

Media coverage hasn’t made things clearer. Studies looking at news reports on AI chatbots and mental health found that journalism often focuses on the most severe, emotional outcomes, like suicides and hospitalizations, and presents them as clear cause-and-effect stories, even though the real evidence is much less certain. In most cases, there were already mental health conditions, substance use issues, or major life stressors involved. AI may have played a part, but it’s rarely the whole story.

Clinicians surveyed about AI chatbots have also raised concerns that aren’t getting enough attention: data privacy, the risk that people will rely on chatbots instead of professional care, and the fact that these tools don’t know when to stop. They can’t pause a conversation, send someone to emergency services, or alert a family member. They can’t do the most important things when someone is truly in crisis.

The truth is that we’re still in the early days. Research is growing quickly — the number of studies on mental health chatbots quadrupled between 2020 and 2024. But strong, large-scale clinical evidence is still behind the technology. Millions of people are using these tools while science tries to keep up.

So what does this mean for you? An AI chatbot might really help you get through a tough night or teach you some coping skills. But it could also mislead you, support harmful thinking, or make you feel supported when you actually need a real person to help.

Use these tools carefully. If you’re dealing with serious depression, suicidal thoughts, trauma, or psychosis, they are not a substitute for professional care, no matter how warm or available they seem. If you’re using a chatbot for lighter support or just to sort out your thoughts, notice how you feel over time. Are you feeling more isolated or more dependent on it? That’s important to pay attention to.

This technology is here to stay. What we urgently need are clearer safety standards, better regulations, and more honest conversations about what these tools can and can’t do. Until then, a bit of healthy skepticism is helpful.

Follow this author on Substack

PATIENT ADVISORY

Medika Life has provided this material for your information. It is not intended to substitute for the medical expertise and advice of your health care provider(s). We encourage you to discuss any decisions about treatment or care with your health care provider. The mention of any product, service, or therapy is not an endorsement by Medika Life.

Pat Farrell PhD
https://medium.com/@drpatfarrell

DR PATRICIA FARRELL

Medika Editor: Mental Health

I'm a licensed psychologist in NJ/FL and have been in the field for over 30 years serving in most areas of mental health, psychiatry research, consulting, teaching (post-grad), private practice, consultant to WebMD and writing self-help books. Currently, I am concentrating on writing articles and books.

Patricia also acts in an editorial capacity for Medika's mental health articles, providing invaluable input on a wide range of mental health issues.

Buy this author’s books on Amazon

Connect with Patricia

Website

Facebook

Twitter

YouTube

All articles, information, and publications featured by the author on these pages remain the property of the author. Creative Commons does not apply, and should you wish to syndicate, copy, or reproduce, in part or in full, any of the content from this author, please contact Medika directly.