Artificial intelligence and psychotherapy


 

UKCP is creating a new strategy to define and steer our work for the next three years. We want this strategy to capture a collective vision for UKCP, and we are currently engaging with members on key areas of UKCP’s work as well as on topical issues likely to affect the profession.

We asked the writer, broadcaster and speaker David Baker, who has written and spoken about artificial intelligence (AI) and the future of work, to explore the relationship between psychotherapy and AI. He sought views from across the profession and looked at the implications of this rapidly emerging technology.

 

AI and therapy

Scroll through Apple’s App Store, or its Android equivalent Google Play, and you’ll find hundreds of apps purporting to help with users’ mental health. Some, like stoic and Moodfit, are glorified note-taking programmes that allow users to keep track of their mood from day to day. Others, such as BetterHelp and TalkSpace, use AI to assess users’ needs and connect them to a counsellor or therapist. And a third group, which includes apps such as Wysa, Youper and Woebot, each of which has been downloaded more than one million times, offers counselling provided entirely by AI, with minimal, if any, human oversight. It is these AI-based interventions, says Rory Lees-Oakes, a counsellor and co-founder of Counselling Tutor, which provides ongoing training for counsellors and therapists, that will have the biggest effect on those working in the sector – and on their jobs.

‘We all have to understand that therapy is a market like any other market and it’s changing,’ he says. ‘The old adage holds true: AI isn’t going to replace humans; it’s going to replace humans who don’t know about AI.’

 

Making therapy more accessible?

Almost all apps that use computing to replicate the role of the therapist are based on what are known as large language models (LLMs): computer programmes trained on vast amounts of text, which enables them to generate convincingly human language. LLMs as we know them today were first developed in 2018, but they entered the mainstream with the launch of ChatGPT in 2022, which initially amazed and then shocked many people with its ability to communicate in an uncannily human way. Key to the way these services work is that providers can use the interactions people have with them to refine and further train their models.
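
For readers curious about the mechanics, the sketch below shows, in simplified and purely illustrative Python, how a chatbot of this kind is commonly structured: the growing conversation history is sent to the language model with every new message, which is how the bot appears to ‘remember’ a session, and why complete transcripts exist that a provider could later draw on. The generate_reply function here is a hypothetical placeholder for whichever commercial LLM service a given app might call; this is a sketch of the general pattern, not the code of any particular product.

    # Simplified sketch of how an LLM-based chat app manages a conversation.
    # generate_reply() is a hypothetical stand-in for a call to a real LLM
    # service; here it returns a canned response so the sketch runs as-is.

    from dataclasses import dataclass, field

    @dataclass
    class Conversation:
        # Every user and bot message is appended here, so the whole transcript
        # is resent with each turn - and is available to the provider.
        messages: list = field(default_factory=list)

        def add(self, role: str, text: str) -> None:
            self.messages.append({"role": role, "text": text})

    def generate_reply(messages: list) -> str:
        # Placeholder: a real app would send `messages` to an LLM API
        # and return the model's generated text.
        return "Thank you for sharing that. Can you tell me more about how that felt?"

    def chat_turn(conversation: Conversation, user_text: str) -> str:
        conversation.add("user", user_text)
        reply = generate_reply(conversation.messages)
        conversation.add("assistant", reply)
        return reply

    if __name__ == "__main__":
        session = Conversation()
        print(chat_turn(session, "I've been feeling anxious about work lately."))
        print(f"Transcript so far: {len(session.messages)} messages")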

In the mental-health world, these chatbot-based apps offer clear advantages, in particular by making therapy more accessible. Apps have no waiting lists; they are seen as non-judgmental, which appeals to those deterred by the stigma that can surround mental health; and they can be accessed whenever the user needs them. The obvious question, though, is whether they are providing effective therapy, something Mr Lees-Oakes is unsure about.

 

The essence of therapy

‘There is something important that a human therapist brings,’ he says. ‘You're working on the affective domain and on the cognitive domain, but, to some extent, you're also working in the spiritual domain of a client. And I think that is what is missing with AI. When two humans meet, there is a kind of existential understanding that their lives are limited and that brings an immediacy to the therapy. I don’t think AI can replicate that.’

UKCP member Helen Molden agrees. A psychotherapist based in Hampshire, she says that the essence of therapy is the unique life and body experience that a human therapist brings to the encounter. And, she argues, AI is, so far, unable to replicate a human therapist’s ability to detect nuance in what a client brings to the session.

‘I think a really experienced psychotherapist is looking for what isn't there as much as what is there and I think that AI is a long way away from being programmed to do that,’ she says.

Others take a different view. Daniel Rubinstein, a therapist based in St Leonards-on-Sea, who is also a member of UKCP, argues that therapists should move away from drawing a strict division between the therapy a human offers and that offered by AI, and should instead find ways to work in partnership with AI tools.

‘There is a sense that, in therapy, we always rely on some kind of artificial intelligence,’ he says. ‘The therapeutic encounter is qualitatively different from many other encounters. There is something special happening there and maybe this special something is a kind of “artificial intelligence” that is being generated between the two participants. What is, for instance, transference, if not a kind of virtual reality? Perhaps we shouldn't be thinking in a binary way about artificial intelligence and human intelligence. It's all much more entangled and intertwined.’

Even empathy, he argues, which feels like a paradigmatically human skill, is more artificial than we might imagine, especially within the therapeutic setting.

‘I’m being slightly facetious, but the empathy that you are trained to give for five hours a day as a jobbing counsellor is in many ways itself artificial, because, if it was real empathy, you would be burnt out to a cinder,’ he says.

 

Assessing the value of AI

The idea that AI can create a sense of empathy may at first seem strange, but it was borne out by a 2022 study of 1,200 users of the Wysa app, published in the online journal Frontiers in Digital Health, which found that many users felt the app’s chatbot liked and cared for them. Anonymised transcripts of sessions studied by the researchers included comments such as ‘Thanks for being there’ and ‘You’re the only person that helps me and listens to my problems.’

Dr Rubinstein argues that, rather than worrying about whether or not AI can truly ‘care’ for clients, therapists should focus on a different skill that, he feels, only humans can bring to a therapeutic encounter – creativity.

‘An AI chatbot can only regurgitate what it already has: a vast library of clichés. But it can never go beyond these clichés,’ he says. ‘Between two humans, something new can emerge, something that can give you a new perspective and make you feel something unexpected and original. It’s that important idea of RD Laing’s: that the best moments in therapy are unpredictable, unique, unforgettable, unrepeatable and often indescribable. That's what you get in human-to-human therapy.’

Debates about what therapy is and what does and doesn’t make it effective have preoccupied therapists since Freud’s time. According to Ellen Dunn, policy and research manager at UKCP, the problem with assessing the value of AI therapy apps is that, as yet, not enough good, large-scale, independent research has been done on the topic.

‘There have been some studies done on AI’s effectiveness in areas such as harm prevention, suicide prevention, risk management and so on, and that has shown some positive results. And there are some indications that people are at least willing to give AI therapists a try,’ she says. ‘But when it comes to the application of AI for other mental-health issues, there's less evidence confirming whether or not it's effective. And some research appears to have been carried out by people with a vested interest in the results.’

 

AI and personal data

Ms Dunn acknowledges that it will never be possible to replicate the kind of randomised controlled trials that are common in physical medicine, but that doesn’t mean the therapy sector should give up on the idea of research.

‘We need to help users make an informed choice, not only about how effective the services are but also about safety issues such as how their data is being used,’ she says.

The safety of personal data in an online world also concerns Mr Lees-Oakes. He points out that many LLM-based therapy apps use material from therapy sessions to train their underlying models and that, in some cases, human moderators have access to what is said or typed in sessions. Moreover, companies behind apps that connect clients with human therapists often listen in on sessions or have access to text interactions.

And, even away from the world of online therapy and AI chatbots, human therapists may already be unwittingly compromising client confidentiality through their use of back-office AI tools, such as the cloud-based transcription services that many therapists use to create notes after a session.

‘There are lots of platforms that store your notes, video and so on and I’m not convinced that everyone using them knows what happens to that information,’ he says. ‘There are some that are UK based and GDPR compliant, but the long and short of it is that once that information is out there, you can't get it back. I think at the moment, I would take an X Files view of this and say, trust no one. I put my notes on a pen drive and lock them in a filing cabinet.’

 

The 'industrialisation' of therapy

Data safety is one of a considerable number of ethical issues that AI-based services have thrown up. Others, says Julie Stone, UKCP’s ethics lead, are emerging around the working conditions and competencies of therapists working for apps that connect clients with human therapists online.

‘I think we have to be looking at things like confidentiality, quality control, limits of competence and the length of therapy being offered online,’ she says. ‘Thousands of new counsellors and therapists qualify each year, and building a practice is challenging. Online services may be felt to be an attractive way for new therapists to find work.’

The danger, she says, is that they may find themselves working beyond their capabilities, unable to identify when a patient is raising issues outside the scope of their competence. And it is unclear how much, if any, supervision and continuing professional development apps offer the therapists who work for them.

Mr Lees-Oakes agrees. He says a big threat comes from what he calls the ‘industrialisation’ of therapy, where therapists sign up to an online service and find themselves overworked, either because of the demands of the company behind the app or because of the relatively low fees they are paid, something that will be familiar to workers in other sectors that are mediated by online apps, such as ride services or food delivery.

 

Quality control

If the ethical issues of working online for a private company, one that in many cases is based outside the UK, are complex, those raised by chatbot apps, where there is minimal or even no human involvement in the service offered, are even more daunting. Who, for example, would a client turn to if they felt the chatbot had given them poor advice?

‘It’s going to be very difficult,’ says Sunita Thakore, UKCP’s complaints and conduct manager. ‘If there were any ethical issues, we wouldn't be able to deal with it. Our complaints procedure is based on a breach to our code of ethics that has been committed by a person. How can you deal with a breach done by a chatbot? The risk is for people who access AI therapy that there may not be a recourse for them.’

And, she says, there is an additional risk that a human therapist working with a client who is also using an AI service may face a complaint about something the AI has done, a situation she feels would be very hard to adjudicate.

Ms Thakore is hesitant about the idea of UKCP becoming involved in accrediting apps in the same way that it accredits human therapists. ‘It would be very resource-intensive,’ she says. (In fact, in the UK, the Medicines and Healthcare products Regulatory Agency (MHRA) and the National Institute for Health and Care Excellence (NICE) are already engaged in a three-year investigation into how to establish and enforce minimum safety and efficacy standards for AI therapy tools.) But she does argue that therapists need guidance now about how to integrate AI tools into their practice safely, similar to the guidelines that UKCP produced at the beginning of the COVID-19 pandemic, when almost all therapy went online.

‘They were rough and ready guidelines,’ she says, ‘but they worked. The risk is that some of our members might rush to use AI while not having the skills or the knowledge or the understanding of how to use it.’

 

Transforming training

Away from the therapy encounter, one area in which many people feel AI is set to make a positive impact is in the training of therapists. Dr Peter Pearce, faculty head of Applied Social and Organisational Sciences at the Metanoia Institute, says AI may offer a number of opportunities in this area.

The first is AI’s ability to take notes, summarise conversations and curate topic areas, together with the references and illustrations for each point. This could prove invaluable to students who, at the moment, may be excluded from training because of issues such as dyslexia or ADHD. AI could also support accessibility for those with, for example, hearing loss or visual impairment, through real-time subtitling or audio description. And, he says, it could strengthen students’ ability to take charge of their learning and individualise it, opening up the profession to a greater range of people.

Dr Pearce also sees the potential for an even more significant transformation of training over time in the use of online avatars: AI ‘characters’ that could safely act as early ‘practice clients’ for students to interact with.

‘Right now, we ask people to practise on their peers in training. Maybe the student you are working with is bringing something real, but they know the drill: it’s going to be ten minutes and it’s practising certain skills, so they will likely keep things at a certain level,’ he says. ‘But real learning takes place when you are taken to the edge of your competence. You've never worked with a client with severe post-traumatic stress until you work with a real client and that is something an avatar could replicate as key preparation within a safe environment for you and the “client”.’

Moreover, in the same way that clients are able to use AI chatbots to access therapy at a time that suits them, students will be able to call up and program avatars to help them practise their skills in between physical classes.

‘I envisage that will push us further away from written assignments and more towards experiential and personal learning, which, again in terms of broadening who can qualify as a therapist, may be a very good thing,’ he says.

 

Looking ahead

Thanks to AI, therapists today face challenges that were barely imagined just a few years ago: the working conditions of those contracted to online apps, the efficacy of therapy chatbots, data safety, complaints handling and the threat AI poses to many practitioners’ livelihoods. And many are looking to professional bodies for support.

UKCP is already part of the AI Coalition, which comprises mental-health professional bodies, including the National Counselling and Psychotherapy Society and the Association for Counselling and Therapy Online, as well as organisations with knowledge of privacy and data-protection issues. The Coalition will be looking into all aspects of AI in the sector, beginning with ethical issues, with the aim of developing resources for therapists to help them engage with the technology.

Brian Linfield, chair of UKCP’s Professional Conduct Committee, welcomes UKCP’s involvement and encourages the organisation to go further.

‘We need to be at the forefront of this, ahead of the game, and thinking now about how we are going to protect our members when they're using AI,’ he says. ‘Personally, I think we should be looking into how to accredit AI too. At the same time, we need to be getting the word out there to the public of the risks and the benefits of this new technology. And we need to do this fast.’

Helen Molden sees a role for UKCP in promoting better research both into AI’s efficacy and into the innovations in therapy that AI could engender.

‘I don't think human-to-human therapy will ever be replaced completely, but there are already so many good examples of humans and technology working together in this sector,’ she says. ‘I think, as a profession, we could do much more to explore the opportunities offered by clients being able to access online and face-to-face therapy side by side. And, above all, we need good, sound research that is not being done by the companies that are promoting the apps.’

Julie Stone says wider discussion is needed around regulating AI bots and online services that match therapists and clients. Private companies in the sector may be resistant to regulation and may not be happy to share their data. She stresses that UKCP therapists remain accountable, whatever environment they are working in. But, she says, all professional organisations, including UKCP, should be having a conversation about what ‘good’ and ‘bad’ look like in the AI world. She does not see AI as a universal replacement for live therapy but acknowledges that using bots and AI can make therapy more available for some people.

‘There are advantages and disadvantages of AI-based therapy. We need to be open to the possibilities that AI offers and not make an automatic assumption that face-to-face therapy is the best for all people in all situations,’ she says.

Mr Lees-Oakes agrees. ‘We are at the beginning of an unknown journey,’ he says. ‘And the big need is for information and training so we can all make good decisions. AI is here. How we deal with it now is up to us.’


For more information on the UKCP strategy, please visit our strategy webpage.
