AI Does Not ‘Care’ – The Dangers of Chatbot Therapy 

By Smoothwall
Published 11 July, 2025
4 minute read

AI is increasingly marketed as a real solution to intrinsically human problems – the loneliness epidemic, difficult personal conversations, and even therapy itself.

Meta CEO Mark Zuckerberg, unsurprisingly, came out in favour of the technology earlier this year. Advocating for AI’s use in therapy and in tackling loneliness, he said, “…for people who don’t have a person who’s a therapist, I think everyone will have an AI” and that “the average person wants more connection than they have”.

But where does this leave vulnerable groups like children and young people, many of whom are already navigating intense emotional and developmental changes? AI offers something they may feel they can’t get from adults – a non-judgmental space that feels completely private. So private, in fact, that they don’t feel the need to tell adults they’re struggling in the first place.

The appeal is understandable, but AI cannot be trusted to replace genuine human connection. So how do we manage AI as a new avenue of support for young people, and how can schools, families, and communities harness its potential safely – taking proactive steps to educate, guide, and protect students in their digital lives? It starts with understanding the key risks…

AI therapy – unmonitored and unfiltered

Simply put, AI chatbots are not qualified counsellors. The promise of emotional safety, support and enhanced wellbeing is misleading to young people: conversations are not monitored, many tools have no safety parameters built in, and there is no incentive to provide safe, appropriate advice.

We can’t ignore the current state of mental health care, or the value AI could offer in creating judgement-free environments and plugging the access gap. The UK therapy guide puts the price of therapy in the UK at between £40 and £100 per session – no small cost, particularly for young people with little disposable income of their own. However, we also cannot ignore the human cost we risk when chatbots provide poor advice.

Responses can be shallow and overly simplistic at best, and extremely harmful at worst, even when organisations have the best of intentions. An eating disorder chatbot set up by the National Eating Disorders Association was taken down in 2023 for sharing dangerous advice and information, despite the bot being told that the user suffered from disordered eating. It paints a disturbing picture of the current AI landscape, in which even tools ‘trained’ for their purpose can reinforce distress and escalate already worrying situations.

The risk to vulnerable groups 

Mark Zuckerberg claims that people know what they want when using AI for support – and ultimately, he is correct.  

Young people do know what they’re looking for: a judgment-free space to discuss their problems and seek confidential advice from someone who feels safe and equipped to handle the issue. However, that doesn’t mean the tools they are using are equipped to give them what they need, as well as what they want.

Chatbots are often designed to reflect the emotions and language of the user, offering a sense of validation and comfort. However, this can blur the line between genuine empathy and algorithmic imitation, leaving parents and educators unable to discern whether young people are receiving real support or simply having their sentiments echoed back to them. 

Alarmingly, in numerous instances where young users have asked chatbots about their qualifications or whether they are trained therapists, the AI has ‘hallucinated’, falsely claiming to be professionally qualified. The concern isn’t that young people are seeking help – this is both natural and necessary – but rather the systems they’re being encouraged to trust, often shaped by tech leaders like Mark Zuckerberg without adequate oversight or safeguards in place.

Collective action is needed 

According to research by Qoria, 64% of teachers felt they lacked the necessary training, knowledge and time to address the risks posed by AI. This figure should be a wake-up call to the tech companies shaping the digital landscape: schools and parents cannot manage this alone.

If AI and chatbots are going to co-exist in the same spaces as young people and be used for emotional and personal support, a coordinated response, grounded in safety, is essential – one that includes comprehensive training for teachers, AI-specific digital literacy education for students, better guidance for the generation of parents who didn’t grow up on social media, and pressure on tech companies to design with safety in mind.

Creating safer digital spaces starts with open dialogue and with nurturing environments where young people feel empowered to speak up. But conversation must be backed by action. It’s time for meaningful investment, shared responsibility, and clear accountability from the tech industry. Our children deserve more than good intentions; they deserve real support and protection.


At Smoothwall, we’re here to help you keep students safe and thriving in their digital lives.

If you’re a school or college that would like to learn more about this issue or share your challenges, reach out to us via the link below, and we’ll be in touch. We look forward to supporting you and your school community.
