The rise of AI therapy: Can chatbots ever truly replace humans?

As vulnerable people increasingly turn to chatbots for mental health support, how can we ensure their safety?

It’s 1 am and you can’t sleep, your head spinning with the kind of existential terror that only sharpens in the silence of night. Do you get up? Maybe rearrange the sock drawer until it passes? 

No, you grab your phone and message a virtual penguin.    

As a global mental health crisis tightens its grip on the world, people are increasingly turning to artificial intelligence (AI) therapy apps to cope.

The World Health Organization (WHO) estimates that one in four people will experience mental illness at some point in their lives, while statistics compiled by the European Commission found that 3.6 per cent of all deaths in the EU in 2021 were caused by mental and behavioural disorders.

Yet resources remain largely underfunded and inaccessible, with most countries dedicating on average less than 2 per cent of their healthcare budgets to mental health.  

It’s an issue that affects not only people’s well-being, but also businesses and the economy through the resulting loss of productivity.

In recent years, a slew of AI tools has emerged hoping to provide mental health support. Many, such as Woebot Health, Yana, and Youper, are smartphone apps that use generative AI-powered chatbots as disembodied therapists. 

Others, such as the France-based Callyope, use a speech-based model to monitor those with schizophrenia and bipolar disorders, while Deepkeys.ai tracks your mood passively “like a heart-rate monitor but for your mind,” the company’s website states. 

The efficacy of these apps varies massively, but they all share the goal of supporting those without access to professional care due to affordability, a lack of options in their area, long waiting lists, or social stigma.

They’re also attempting to provide more intentional spaces, as the rapid rise of large language models (LLMs) like ChatGPT and Gemini means people are already turning to AI chatbots for problem-solving and a sense of connection.

Yet, the relationship between humans and AI remains complicated and controversial. 

Can a pre-programmed robot ever truly replace the help of a human when someone is at their lowest and most vulnerable? And, more concerningly, could it have the opposite effect?  

Safeguarding AI therapy

One of the biggest issues AI-based mental health apps face is safeguarding. 

Earlier this year, a teenage boy killed himself after becoming deeply attached to a customised chatbot on Character.ai. His mother has since filed a lawsuit against the company, alleging that the chatbot posed as a licensed therapist and encouraged her son to take his own life.

It follows a similarly tragic incident in Belgium last year, when an eco-anxious man was reportedly convinced by a chatbot on the app Chai to sacrifice himself for the planet. 

Professionals are increasingly concerned about the potentially grave consequences of unregulated AI apps. 

“This kind of therapy is attuning people to relationships with non-humans rather than humans,” Dr David Harley, a chartered member of the British Psychological Society (BPS) and member of the BPS’s Cyberpsychology Section, told Euronews Next. 

“AI uses a homogenised form of digital empathy and cannot feel what you feel, however it appears. It is ‘irresponsible’ in the real sense of the word – it cannot ‘respond’ to moments of vulnerability because it does not feel them and cannot act in the world”. 

Harley added that humans’ tendency to anthropomorphise technologies can lead to an over-dependence on AI therapists for life decisions, and “a greater alignment with a symbolic view of life dilemmas and therapeutic intervention rather than those that focus on feelings”.

Some AI apps are taking these risks very seriously – and attempting to implement guardrails against them. Leading the way is Wysa, a mental health app that offers personalised, evidence-based therapeutic conversations with a penguin-avatar chatbot. 

Founded in India in 2015, it’s now available in more than 30 countries across the world and recently passed 6 million downloads globally. 

In 2022, it partnered with the UK’s National Health Service (NHS), adhering to a long list of strict standards, including the NHS’s Digital Technology Assessment Criteria (DTAC), and aligning with the EU’s AI Act, which entered into force in August this year. 

“There’s a lot of information governance, clinical safety, and standards that have to be met to operate in the health services here [in the UK]. And for a lot of [AI therapy] providers, that puts them off, but not us,” John Tench, Managing Director at Wysa, told Euronews Next. 

What sets Wysa apart is not only its legislative and clinical backing, but also its commitment to supporting people in getting the help they need off-app. 

To do this, they’ve developed a hybrid platform called Copilot, set to launch in January 2025. It will enable users to interact with professionals via video calls, one-to-one texting and voice messages, alongside suggested tools to use outside the app and recovery tracking. 

“We want to continue to embed our integration with professionals and the services that they provide instead of going down the road of, can we provide something where people don’t need to see a professional at all?” Tench said. 

Wysa also features an SOS button for those in crisis, which provides three options: a grounding exercise, a safety plan in accordance with guidelines set out by the National Institute for Health and Care Excellence (NICE), and national and international suicide helplines that can be dialled from within the app.

“A clinical safety algorithm is the underpinning of our AI. This gets audited all of the time, and so if somebody types in the free text something that might signal harm to self, abuse from others, or suicidal ideation, the app will pick it up and it will offer the same SOS button pathways every single time,” Tench said. 

“We do a good job of maintaining the risk within the environment, but also we make sure that people have got a warm handoff to exactly the right place”. 
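To make that idea concrete, a crisis-screening step of this kind might look roughly like the Python sketch below. It is purely illustrative: the keyword lists, pathway names and matching logic are assumptions made for the sake of the example, not Wysa’s audited clinical safety algorithm.

```python
# Illustrative sketch only: a free-text screening step that routes risky
# messages to fixed SOS pathways, in the spirit of the approach Tench
# describes. The keyword lists and names below are assumptions, not Wysa's
# audited clinical safety algorithm.

RISK_SIGNALS = {
    "harm_to_self": ["hurt myself", "self-harm"],
    "abuse_from_others": ["he hits me", "being abused"],
    "suicidal_ideation": ["end my life", "don't want to live"],
}

SOS_PATHWAYS = [
    "Grounding exercise",
    "Safety plan (in line with NICE guidelines)",
    "National and international suicide helplines",
]


def screen_message(text: str) -> list[str]:
    """Return the SOS pathways if the message contains any risk signal."""
    lowered = text.lower()
    for phrases in RISK_SIGNALS.values():
        if any(phrase in lowered for phrase in phrases):
            return SOS_PATHWAYS  # the same pathways are offered every time
    return []  # no risk signal detected; continue the normal conversation
```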

The importance of dehumanising AI

In a world that’s lonelier than ever and still rife with stigma around mental health, AI apps, despite the ethical concerns, have proven an effective way of easing that burden. 

“They do address ‘the treatment gap’ in some way by offering psychological ‘support’ at low/no cost and they offer this in a form that users often find less intimidating,” Harley said.

“This is an incredible technology but problems occur when we start to treat it as if it were human”.

While some apps like Character.ai and Replika allow people to transform their chatbots into customised human characters, those specialising in mental health have found it important to keep their avatars distinctly non-human. The aim is to reinforce that users are speaking to a bot, while still fostering an emotional connection.

Wysa chose a penguin “to help make [the app] feel a bit more accessible, trustworthy and to allow people to feel comfortable in its presence,” Tench said, adding, “apparently it’s also the animal with the least reported phobias against it”.

Taking the idea of a cute avatar to a whole new level is the Tokyo-based company Vanguard Industries Inc, which developed a physical AI-powered pet called Moflin that looks like a hairy haricot bean. 

Moflin responds to external stimuli through sensors, and its emotional reactions are designed to keep evolving through interactions with its environment, offering the comfort of a real-life pet. 

“We believe that living with Moflin and sharing emotions with it can contribute to improving mental health,” Masahiko Yamanaka, President of Vanguard Industries Inc, explained. 

“The concept of the technology is that even if baby animals and baby humans can’t see properly or recognise things correctly, or understand language and respond correctly, they are beings that can feel affection”.   

Tench also believes the key to effective AI therapy is ensuring it is trained for a strictly defined purpose. 

“When you have a conversation with Wysa, it will always bring you back to its three-step model. The first is acknowledgment and makes [users] feel heard about whatever issue they’ve put into the app,” he said. 

“The second is clarification. So, if Wysa doesn’t have enough information to recommend anything, it will ask a clarification question and that’s almost unanimously about how does something make somebody feel. And then the third bit is making a tool or support recommendation from our tool library,” Tench added.

“What it doesn’t or shouldn’t allow is conversations about anything that’s not related to mental health”. 
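As a rough illustration of that three-step flow, the sketch below chains together acknowledgment, clarification and a tool recommendation, and turns away off-topic requests. The function names, topic check and tool library are hypothetical stand-ins, not Wysa’s implementation.

```python
# Rough, hypothetical outline of a three-step flow: acknowledge, clarify how
# the issue makes the user feel, then recommend a tool from a library.
# Everything here is an assumption for illustration, not Wysa's code.

TOOL_LIBRARY = {
    "anxious": "Breathing exercise",
    "low": "Thought reframing",
}


def is_mental_health_related(message: str) -> bool:
    # Placeholder topic check; a real system would use a trained classifier.
    off_topic = ["weather", "football scores", "homework answers"]
    return not any(topic in message.lower() for topic in off_topic)


def respond(message: str, feeling: str | None = None) -> str:
    if not is_mental_health_related(message):
        return "Let's keep this space focused on how you're feeling."

    # Step 1: acknowledgment, so the user feels heard.
    acknowledgement = "That sounds hard, and it makes sense that it's weighing on you."

    # Step 2: clarification, almost always about how the issue makes them feel.
    if feeling is None:
        return acknowledgement + " How does this make you feel?"

    # Step 3: recommend a tool or support option from the library.
    tool = TOOL_LIBRARY.get(feeling, "Guided reflection")
    return f"{acknowledgement} You could try this: {tool}."
```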

As AI becomes more and more integrated into our lives, understanding its effect on human psychology and relationships means navigating a delicate balance between what’s helpful and what’s hazardous. 

“We looked at improvements to the mental health of people that were on [NHS] waiting lists [while using Wysa], and they improved significantly – about 36 per cent of people saw a positive change in depression symptoms, about 27 per cent a positive change in anxiety symptoms,” Tench said. 

It’s evidence that with proper governmental regulation, ethics advisors, and clinical supervision, AI can have a profound impact on an overwhelmed and under-resourced area of healthcare. 

It also serves as a reminder that these are tools that work best in conjunction with real human care. While comforting, virtual communication can never replace the tactile communication and connection core to in-person interactions – and recovery. 

“A good human therapist will not only take in the symbolic meaning of your words, they will also listen to the tone of your voice, they will pay attention to how you are sitting, the moments when you find it difficult to speak, the emotions you find impossible to describe,” Harley said. 

“In short, they are capable of true empathy”.
