Can ChatGPT really replace a therapist? We spoke to mental health experts to find out
AI therapy is on the rise – but can ChatGPT really support your mental health?

ChatGPT can be a proofreader, a travel agent, a coding assistant, a brainstorming partner, a tutor, a recipe creator, a career coach, a language translator, a workout planner… and, increasingly, a therapist.
It might sound surprising at first, but it makes sense. Therapy is expensive, wait times are long, and ChatGPT offers instant, judgment-free responses. But does it actually help? And is it safe? We spoke to experts to find out.
Why people are using ChatGPT for therapy
Dedicated AI therapy tools like Woebot and Wysa already exist, but many people are turning to ChatGPT for support in an organic way – without seeking out a mental health app, but simply by chatting.
For some, it starts as a casual conversation and gradually shifts into deeper emotional support. Many people have begun to rely on ChatGPT as a confidante, coach, or even a substitute for therapy altogether.
Mental health professionals recognize the appeal. "AI tools can offer journaling prompts and emotional guidance, which can be helpful starting points and reduce stigma around seeking support," says Joel Frank, a clinical psychologist who runs Duality Psychological Services.
Above all, AI is accessible and anonymous – qualities that make it particularly appealing to anyone who has been hesitant to open up to a therapist, or anyone, in the past.
"It’s becoming more common for people to take the initial step toward mental health support through AI rather than a human therapist," says Elreacy Dock, a thanatologist, certified grief educator, and behavioral health consultant.
How AI therapy can help – and where it works best
One of the biggest advantages of AI therapy is availability. Chatbots like ChatGPT are accessible 24/7, providing support whenever someone needs it.
Another major benefit is the judgment-free nature of AI interactions. Some people feel more comfortable opening up to a chatbot because there's less fear about what a real-life therapist might think of them.
Most importantly, AI therapy is accessible and cheap. Many of us know all too well that traditional therapy can be expensive, involve long wait times, or both, making mental health support difficult to access. AI, on the other hand, provides immediate, cost-free conversations.
Research has begun to highlight the effectiveness of AI therapy, particularly in structured approaches like Cognitive Behavioral Therapy (CBT), which follows clear, well-established techniques. “AI therapy tools can help walk users through mindfulness exercises and coping strategies,” says Frank.
A 2024 study of 3,477 participants found that AI chatbot therapy had a positive impact on depression and anxiety after just eight weeks of treatment. Similarly, a 2023 review and meta-analysis of 35 studies revealed that AI-based tools – referred to as conversational agents (CAs) in the study – significantly reduced symptoms of depression and distress.
The early evidence is promising, but the researchers emphasize the need for further studies to better understand long-term outcomes and ensure the safe integration of chatbots into mental health care. There is clearly some potential here, but big concerns remain.
The risks and limitations of ChatGPT therapy
One of the biggest limitations is that AI lacks the knowledge, experience, and training of a real therapist. Beyond that, it also lacks emotional intelligence, the ability to truly listen, empathize, and respond in a deeply human way. A therapist can recognize subtle emotional cues, adjust their approach in real-time, and build a genuine therapeutic relationship, which are all essential for effective mental health treatment.
“Understanding a therapy model and applying it are two different things,” says Counseling Wise therapist Becky DeGrosse. She experimented with training ChatGPT to emulate Dick Schwartz, the founder of Internal Family Systems (IFS) therapy, and while some responses were surprisingly insightful, she found the experience ultimately lacking.
“IFS therapy requires the therapist to be deeply attuned to what is happening in the internal system of the client,” she explains. “The effectiveness of therapy depends on the therapist’s ability to read body signals and bring a genuine presence, so the client isn’t alone on their healing journey. AI, despite its capabilities, lacks this essential human element.”
This makes sense. ChatGPT is trained on vast amounts of text data and can be instructed to take on different roles, but it doesn’t think, feel, or understand like a human. It generates responses based on patterns in the data, not personal experience, emotions, or years of professional training.
This lack of deeper understanding can lead to several problems. Because mental health is highly sensitive, a chatbot’s responses can sometimes do more harm than good. AI tends to mirror the user’s emotions rather than challenge unhelpful thought patterns. For someone struggling with issues like self-doubt or depression, this can reinforce negative thinking rather than provide new perspectives.
Another major issue is misinformation. AI can "hallucinate," meaning it may generate false or misleading information, and in a crisis situation this could be dangerous. There are no known cases of ChatGPT directly causing harm in a mental health context, but there have been reports of AI-driven character chatbots being implicated in deaths by suicide. These cases are different, but they highlight the need for caution when relying on AI tools for any kind of emotional support.
Privacy is another concern. Therapists follow strict ethical guidelines, including confidentiality rules designed to protect clients. AI does not. AI chatbots may store, analyze, or even pass on user data, raising significant privacy risks. "Users need to be mindful of how much personal and sensitive information they share," warns Elreacy Dock.
AI as a helpful tool, not a replacement
Conversations about mental health are complex. Everyone has different needs, challenges, and barriers to accessing care. With that in mind, it would be an oversimplification to say all AI therapy is bad.
After all, it seems some people do find value in it. While researching this article, we came across countless personal accounts from people who regularly use chatbots – often ChatGPT – to talk things through. Therapy-specific AI tools are also on the rise, signaling a growing demand for digital mental health support.
But experts suggest we need to shift our thinking. Rather than viewing AI as a substitute for therapy, it may be more helpful to see it as a supplemental tool.
“I believe AI’s most valuable role in therapy is as a tool to help us partner with our own therapeutic capacity – what some might call ‘inner wisdom’ or a ‘higher self,’” says Becky DeGrosse. “But we cannot hand over the reins entirely.”
The experts recommend approaching AI therapy as a resource for self-reflection, journaling, or learning about mental health concepts rather than using it as a replacement. They also suggest fact-checking advice, avoiding relying on chatbots in a crisis, and, most importantly, balancing AI interactions with real-world human connections as much as possible.
"AI chatbots can be helpful," says Elreacy Dock. "But when it comes to mental health, the most powerful healing still happens in human relationships."