Which would you choose in a mental health crisis? ChatGPT or human connection?
Posted: 29 Oct 2025
If you were experiencing mania, psychosis, or suicidal thoughts, would you turn to an AI chatbot, or would you seek support from a friend?
Content Warning: This article mentions suicidal thoughts and one case of suicide in a young person.
You might think the answer is simple – talk to a friend! Or, you might be wondering whether chatbots could actually help people in crisis reach out. Either way, we’ve seen a rise in people turning to AI tools for mental health support.
OpenAI has announced that 0.07% of users in a given week have shown possible signs of mental health emergencies related to psychosis or mania. On the surface, that might not sound like a lot – but OpenAI claims to have 800 million weekly users.
Take 0.07% of 800 million, and that leaves around 560,000 people each week engaging with ChatGPT while seemingly showing signs of a mental health crisis.
Are AI therapy chatbots safe?
ChatGPT hasn’t been designed to support people experiencing mental health problems. It’s described by OpenAI as an “AI chatbot for everyday use”, with the ability to answer questions, summarise information, and so on. But there are AI therapy chatbots, designed specifically to provide mental health support.
These AI therapy chatbots tend to sell themselves on the promise of providing a supportive, non-judgmental space. They might feel like a good option for those struggling to access therapy because of long wait times, high fees, or the need to travel to appointments.
But a Stanford study from this year revealed that AI therapy chatbots may not only lack effectiveness compared to human therapists, but could also contribute to harmful stigma and provide dangerous responses.
“Therapy is not only about solving clinical problems, but also about solving problems with other people and building human relationships. If we have a [therapeutic] relationship with AI systems, it’s not clear to me that we’re moving toward the same end goal of mending human relationships.” – Jared Moore (Lead Author of the Stanford study: Exploring the Dangers of AI in Mental Health Care)
The limitations of AI in mental health care
An article from the National Center for Biotechnology Information explored whether AI can replace psychotherapists, and envisioned a future where AI could offer scalable, cost-effective solutions that reduce barriers to mental health care – such as affordability, stigma, and logistical challenges.
But they also recognised the dangers of AI's limitations as it becomes more integrated into mental health care, including algorithmic bias, a lack of genuine empathy, and an inability to interpret non-verbal cues – qualities that are intrinsic to human therapists.
Devastating consequences for vulnerable people
We’re already seeing the dangers of using chatbots for mental health support. This year, a 16-year-old boy from California died by suicide. The parents of Adam Raine are suing OpenAI for wrongful death, stating that the AI program’s design “foster[s] psychological dependency in users”.
Adam spent months confiding in the chatbot, discussing his interests, his future, and his experience of anxiety and mental distress. The final chat logs showed that Adam discussed his plans to end his life.
Sadly, this isn’t the only story like this, and it speaks to the ongoing challenges we face as AI becomes more commonplace.
Build your mental health crisis strategy
Navigating the introduction of AI to mental health care feels like a huge challenge, because it is. But it’s not your responsibility to find one big catch-all solution.
Instead, we’d encourage you to focus on building out your knowledge and confidence in supporting people in your life who might struggle with their mental health.
We still have spaces on our upcoming FREE Mental Health Awareness training (6 Nov 2025), designed to increase your knowledge of mental ill health and help you feel confident signposting to resources. You’ll also learn how to identify and manage stress and establish healthy boundaries.