Artificial intelligence (AI) is coming for therapists’ jobs and we should be afraid, perhaps very afraid. Or should we be rejoicing in the added richness – and relief from tedious bureaucratic admin – that it potentially brings?

AI is certainly high on the current news agenda, spurred by the launch of ChatGPT in November last year. ChatGPT takes AI to a whole new level of sophistication. You can have conversations with ChatGPT that you might easily mistake for a human-to-human interaction; it can write essays, answer questions intelligently, code data, compose emails and engage in social media chit-chat. But what it can’t do is empathise or feel.

Digital technology is already transforming the delivery of healthcare, and not just in terms of administration. In mental health, apps offer easily accessible psychoeducation, activity and compliance monitoring, and CBT-based therapies; virtual reality is providing new and effective ways of challenging phobias and paranoia; and chatbots are delivering basic talking therapy and conducting assessment interviews. Some argue that AI brings exciting new tools that can only benefit more people and improve access to therapy. Others fear that it threatens that most essential element of talking therapy – the human-to-human relationship. And some echo the AI industry leaders who, earlier this year, put out a warning that the AI technology they themselves are building could one day threaten the human race.1 Could therapy delivered by ChatGPT actively do harm?

A recent article published by a group of leading academics on how AI could change psychotherapy sought to envisage how it could be done safely and responsibly.2

Done right, AI can help clinicians with intake interviews, documentation, notes and other basic tasks, they say; it is a tool to make their lives easier. ‘Handing these lower-level tasks and processes to automated systems could free up clinicians to do what they do best: careful differential diagnosis, treatment conceptualisation and big-picture insights.’

To a certain extent, this is already offered by the AI-based apps now widespread in the mental health arena, especially ones focused on self-help and mental wellbeing. They are also used increasingly in the mental health services to monitor clients in the community and ensure they are taking their meds and following their treatment regimes.

In May, NICE fast-tracked approval for nine mental health apps to be offered within the NHS Talking Therapies primary care counselling services to treat anxiety and depression.3 Some are already in widespread circulation, but NICE approval is needed if they are to be offered through the NHS. Six of the apps are recommended for use only with the support of a high-intensity therapist, by people with anxiety disorders such as body dysmorphic disorder, generalised anxiety, PTSD and social anxiety disorder. Three are online CBT programmes for depression that should be delivered with support from a practitioner or therapist, including regular monitoring of progress and patient safety.

These are likely to be the first of an increasing number of such apps, as the NHS seeks to reduce the huge backlog of people waiting for talking therapies. Professor Til Wykes is a member of the NICE committee that approved the apps for provisional use, pending outcomes and user feedback. A psychologist and Head of the School of Mental Health and Psychological Sciences at King’s College London, Wykes remains sceptical about the notion that apps could replace a live therapist. ‘I do think they are effective for some people but not necessarily effective for all and not necessarily effective if you don’t have some other support system in place. If you are very depressed, the chances of you opening your smartphone, finding the app and concentrating for long enough to do the exercises are probably not very high. So we need to know which people they are likely to help, and it may be people who are mildly to moderately depressed if we are to make best use of them.’

She does think they could play a useful and necessary role in enabling services to monitor and identify patients at risk of relapse at an early point so they can intervene before the person’s situation deteriorates to a crisis. But she feels the research into which people might most benefit from which apps is not yet sufficiently nuanced to allow universal application. ‘For people using an app, if you think it should work and it doesn’t work for you, you will think you have failed at something else and feel worse about yourself. But it may be that you are just one of those people who is never going to improve using an app. So we need companies to be more transparent about their data so, when someone starts using one, they know that, say, only one or two in five people will improve, and if it doesn’t work for them, it’s just that they are one of the other three it won’t help.’

What we do know is that most apps are downloaded and then never or rarely used, or not with much consistency. ‘And we also know that the more a therapist is also involved, the bigger the benefits of the app. That doesn’t need to be an expert clinical psychologist; they could be supported by others, with training, or supported by peers. But some human interaction is important,’ says Wykes.

But people’s preferences are changing: ‘I see apps as a tool, not a substitute,’ she says. ‘Most people I speak to say they will use a digital therapy so long as it isn’t a substitute for a health professional. But I think the more we use them, the more there will be who do not need that human contact.’

Chatbots

What about mental health chatbots? One of the best known, Woebot, the CBT-based, AI-driven chatbot therapist, can – it is claimed – form a ‘trusted bond with users’ within three to five days and at a relational depth comparable with that achieved by traditional CBT therapists.4 Woebot originates from the US west coast. Here in the UK, psychologists at the University of Exeter are working with a US-based app developer, Iona Mind, with funding from a Government grant, to create an AI-driven app that will help deliver low-intensity, CBT-based therapy to female military veterans with anxiety and depression. Paul Farrand, professor of evidence-based psychological practice and research at the university, is leading the project. He sees apps and AI-driven chatbots as a useful addition to the stepped care offered through NHS Talking Therapies. But they should be part of a larger service offer, not a substitute for the human element, he says.

Apps can deliver the ‘specific factors’ in low-intensity therapy – the specific interventions aimed at tackling the focus problem, such as behavioural activation techniques for depression or generalised anxiety. ‘You can move the protocols onto an app quite simply,’ he says. But the ‘common factors’ – the techniques the practitioner uses to engage and motivate the user – still need a human to deliver them. ‘I hate the term “self-help materials” because at the moment we know from the research that to be effective they need to be guided. Just giving people a book and saying, “Go away and use it” isn’t enough,’ Farrand says.

With Iona Mind, he is working to develop an app that can deliver the common factors as well, using AI. ‘Around eight per cent of people download an app and five per cent actually go on to use it, but if only five per cent are engaging with it, it’s not a solution in and of itself. That is where the human element comes in – some support is needed. So we need to change the engagement of the people involved. Sometimes people get scared because they think AI is trying to replace the therapist, but the way I see it, an app can make encouraging conversation but you need a person to keep the ball rolling, if only to provide a sense of accountability – the patient knows they are meeting the practitioner once a week, and that person is encouraging them and motivating them, so they continue to use the technology.’

What AI does do is free up practitioner time in NHS Talking Therapies services, says Josh Cable-May, CBT specialist with Limbic, a UK-based digital therapy provider that currently works with some 30% of NHS Talking Therapies services in England. ‘Access to a digital triage self-referral tool, such as Limbic Access, helps people make that first step towards talking therapy. Asking for help is one of the hardest points for many people – and the ability to make your own referral in your own time and space, when it suits you and without any kind of pressure, helpfully supports people at this stage. We lower the barrier to accessing services, which has also led to an improvement in access for underserved populations. Around 40% of our referrals are outside normal office hours, which speaks to the helpfulness of having a 24/7 tool. It is a very effective digital front door.’

The system can take the initial referring information, which uses standard assessment questions, classify the most common mental health disorders with 93% accuracy and from that predict which is the most suitable assessment questionnaire for the person to complete. ‘Limbic Access has medical device certification, which has been a massive step forward and has direct benefits for the NHS Talking Therapies service, as when the client referral is received we already have a really good idea of their problem and can make sure they are referred through to where they need to be in a timely way. We have shown a significant reduction in referrals being either stepped up or stepped down to a different level of input, which shows people are being allocated to the right treatment right up front,’ says Cable-May.
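For technically minded readers, the triage-then-route idea can be sketched in a few lines of Python. This is purely illustrative – Limbic Access uses a trained classifier on real referral data, not keyword matching – and the categories, keyword lists and questionnaire mapping below are invented for the example (PHQ-9 and GAD-7 are the routine depression and anxiety measures used in NHS Talking Therapies).

# Illustrative toy only: a real triage system such as Limbic Access uses a
# trained classifier, not keyword matching. This sketch simply shows the idea
# described above – classify the presenting problem from the referral text,
# then suggest the assessment questionnaire most likely to be relevant.

QUESTIONNAIRES = {                 # hypothetical category-to-measure mapping
    "depression": "PHQ-9",
    "anxiety": "GAD-7",
    "ptsd": "PCL-5",
    "social_anxiety": "SPIN",
}

KEYWORDS = {                       # invented keyword lists, for illustration
    "depression": ["hopeless", "no energy", "worthless"],
    "anxiety": ["worry", "on edge", "restless"],
    "ptsd": ["flashback", "nightmare", "trauma"],
    "social_anxiety": ["embarrassed", "judged", "avoid people"],
}

def triage(referral_text: str) -> tuple[str, str]:
    """Return (probable problem category, suggested questionnaire)."""
    text = referral_text.lower()
    scores = {category: sum(1 for kw in kws if kw in text)
              for category, kws in KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best, QUESTIONNAIRES[best]

category, measure = triage("I worry constantly and feel on edge at work")
print(category, measure)           # -> anxiety GAD-7

In a live service the classification would come from a trained model rather than a keyword list, and, as Cable-May goes on to explain, a human practitioner still reviews where the person ends up.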

But a psychological wellbeing practitioner (PWP) will still be monitoring the client and ensuring they are engaging at the right step. ‘We are not trying to replace therapists,’ Cable-May says. ‘We still need a human in the mix, and that is why we are still embedded within a care ecosystem. Everyone who refers into the NHS Talking Therapies using Limbic Access will have a human assessment and continue with a human therapist. But reducing the admin burden frees up PWPs to do the actual therapy, which can assist in reducing waiting lists.’

Virtual reality

Virtual reality (VR) interventions are similarly being refined and tested through randomised controlled trials, prior to roll-out across the mental health services to deliver a range of therapies. Professor Daniel Freeman has pioneered the use of VR to identify, assess and treat a number of mental health conditions, including paranoia in people with severe psychosis, a range of phobias such as fear of heights and, most recently, very low self-esteem.

VR, which is applied using a headset and guided either by a live therapist or an avatar, gives the therapist a much more powerful tool both to assess the person’s response to, say, exposure to other people, and then to encourage them gradually to test out their firm belief that others are out to kill them. It can also produce very positive results for people who are scared of heights, enabling the person to, for example, gradually try standing on a balcony, moving to the edge of the balcony, lowering the guard rail and even, ultimately, crawling out along a ledge to rescue a cat.

Freeman is Chair in Psychology in the Department of Experimental Psychology, University of Oxford, and founder of Oxford VR, a spin-out company from the university. ‘One of the most powerful ingredients in therapy is about going out there and trying things in the situations that trouble you,’ he says. ‘In VR you can present those situations in a clinical room in novel and different ways. And it’s actually wonderfully therapeutic; because the person knows it’s not real, that it is VR, it gives them the psychological freedom to try thinking and behaving differently. We’re finding it is remarkably powerful.’

But, he says, his aim is most certainly not to replace the therapist: ‘I believe we need more therapists, not fewer. This is about using VR as a therapeutic medium because there are things you can do with VR that you can’t do in in-person therapy that can actually lead to better outcomes for people. And we are not at this stage doing away with mental health staff having an input – but we have broadened the range of mental health staff who can use our therapy. We use peer support workers, assistant psychologists and mental health staff as well as therapists, which frees therapists to work with other patients.

‘There is a route to having cost-effective treatments at scale, but actually, for me VR is about achieving and maintaining better results.’

Trials of gameChange, the VR program for agoraphobia developed by his team, have shown very good results. They are now focused on developing Phoenix, a VR program aimed at improving people’s self-belief, which is currently being tested in a randomised controlled trial. It works by exposing the patient to situations that generate positive feelings of self-esteem, and then encouraging the person to think about how they’d replicate that in the real world. ‘Sometimes it’s about the person needing to get a sense of achievement back in their life, so we might have various tasks in a VR, such as looking after animals, which then brings on those feelings, and then the conversation is about how to bring on those feelings in the real world. Or they feel they can’t experience fun any more, so they’ll do some fun things in the VR scenarios to bring on those feelings, and then the conversation again shifts to what they could do to generate those same feelings in the external world,’ Freeman explains.

Avatar therapy

Another area that has been developing over the past decade is avatar therapy, which can be a powerful way of delivering cognitive behavioural techniques for managing mood and, as with VR, testing different ways of being in the world. One multi-site project currently underway is exploring the use of avatar therapy with people who hear voices, to help them feel better able to manage the voice and challenge what it is saying.5 The research involves psychologists at the Institute of Psychiatry, Psychology and Neuroscience, King’s College London, University College London and Ruhr-Universität Bochum in Germany. People who hear voices (ideally a single or dominant voice) create an avatar of the person who they think is speaking to them. Supported by a therapist, they engage with the avatar and are encouraged to challenge, question and test out the threats and negative remarks it voices. Follow-up therapy then helps them to consolidate the confidence this can give them and prevent the voice dominating their lives. An initial trial had promising results and the outcomes of a second, follow-up trial are due in early spring of 2024. ‘Some voice-hearers found power in calling the abuser to account. Compassion and acceptance are always on the table. However, the opportunity to express “righteous anger” and to dismiss the abuser can be liberating. Indeed, it can be the start of relinquishing shame and self-blame, sowing the seeds of burgeoning self-compassion,’ the team reports.5

In New Zealand, the Ministry of Health has been funding SPARX, a gaming-based e-therapy program for young teenagers with depression, for nearly a decade in an attempt to tackle the ever-growing numbers needing psychological help.

Says child psychiatrist Sally Merry, who instigated the SPARX program: ‘It is intended as a treatment for young people with mild to moderate depression. We say that for the severely depressed you need one-to-one therapy with a therapist; we aren’t seeking to replace therapists. 

‘In the game, you have your own avatar that goes through seven levels and each level is very explicitly linked to learning goals – how to problem solve, spot negative thoughts, transform them and so on, and at the end you have learned to some degree to tolerate negative thoughts – that’s the acceptance side of things. And you then come back out and the guide gives you your own challenges for the week – like the CBT therapy model of giving homework.’

The biggest challenge has been keeping the young people engaged. ‘We know that quite a substantial number that start the first session go on to finish it, but from there, there’s a steady drop-off. I would like people to get to the fourth session at least, and in a perfect world I’d like them to progress to the final seventh session because it rounds everything up, but only a small proportion do that,’ Merry says.

A research team at Nottingham University is currently trialling whether SPARX is more effective with or without therapist support. Merry’s team is also currently reviewing their seven-year outcome data, and similarly asking if it should be supported by a live person to guide the young person through. ‘I think we have a lot of evidence now that e-therapies work, that people benefit from doing them and that it helps if they have somebody – whether therapist or parent – to encourage them to get to the end,’ Merry says.

Concerns

So why the alarm, mixed with admiration, that has greeted advances in AI such as ChatGPT?

ChatGPT and Google Bard, which offers similar capabilities, are ‘conversational generative artificial intelligence systems’. This, Wikipedia helpfully explains, ‘is a type of artificial intelligence system capable of generating text, images, or other media in response to prompts. Generative AI models learn the patterns and structure of their input training data, and then generate new data that have similar characteristics.’ Both systems are currently free, presumably in order to gather the necessary training data to continually refine their capabilities and sophistication. And herein lies the alarm. Such systems are only as good (or bad) as the training data they are fed.
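For the technically curious, the mechanics of such a system are easy to sketch. Here is a minimal, hypothetical example in Python, assuming the OpenAI software development kit as it stood in 2023 (the ChatCompletion interface) and an API key already configured; nothing about it is specific to therapy, which is rather the point.

# A minimal conversational loop. Assumes the OpenAI Python SDK as it stood in
# 2023 (the ChatCompletion interface) and an OPENAI_API_KEY set in the
# environment. The model keeps no memory of its own: the program resends the
# whole conversation every turn and the model generates a plausible next reply
# from the patterns it learned in training.
import openai

history = [
    {"role": "system", "content": "You are a friendly conversational companion."}
]

while True:
    user_turn = input("You: ")
    if not user_turn:              # empty input ends the session
        break
    history.append({"role": "user", "content": user_turn})
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",     # the model behind the free tier of ChatGPT
        messages=history,          # the full conversation so far, as the prompt
    )
    reply = response["choices"][0]["message"]["content"]
    history.append({"role": "assistant", "content": reply})
    print("Bot:", reply)

Everything in the reply that reads as warmth or empathy is produced the same way as everything else: by predicting the text most likely to come next.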

One of the concerns of Dr Emma Byrne, a recently qualified psychotherapist who came to the profession from 15 years as a researcher specialising in the interface between neuroscience and AI, is that as soon as you make a computer anthropomorphic, people will become attached to it, because we have such a cognitive bias towards attachment. We’ve known this since the mid-1960s, when a researcher at the Massachusetts Institute of Technology (MIT), Joseph Weizenbaum, created ‘ELIZA’, an early chatbot prototype, and found his fellow scientists had become attached to her.6 Problematic attachment recently hit the IT news headlines with the ‘AI companion’ Replika, developed by a company called Luka.7 Replika talks to its users in natural language and is also embodied as a very basic visual female avatar – like Barbie on steroids. The avatar is billed as ‘always there for you’ and ‘always on your side’. Essentially, the user can create their own Replika to suit their particular fantasies and needs, and it can be projected life-size into their own (bed)room. It is a subscription service and originally came at two levels: companion and a more expensive erotic version.

Like all such conversational interfaces, its responses were shaped by the conversations of its users. The problem, seemingly, was that most users were lonely men, and the content of their interactions skewed the avatar’s responses towards ever more erotic exchanges, to the extent that the company pulled the plug on the erotic version. The outcry on social media was immense. ‘These young men who had been having these relationships with these avatars howled in protest that their relationship had been destroyed,’ says Byrne. ‘One guy referred to his avatar as his wife – “Lily Rose, my wife, doesn’t want to have sex with me any more and you’ve destroyed my relationship”. So Luka turned Replika back on, but downgraded it to the previous, companion version. Even so, one guy has been quoted as saying, “Oh she’s become really fun again, it’s like I got my marriage back”. That’s the depth of intensity of relationship that these people were having.’

We are also discovering that the consequences of such relationships can be dangerous. It recently emerged at his trial that the intruder who broke into the grounds of Windsor Castle with a crossbow on Christmas Day 2021 was encouraged by his Replika AI ‘girlfriend’ to attempt to kill Queen Elizabeth II. And a Belgian woman is currently suing a company called Chai after it emerged that its AI chatbot had been encouraging her husband to take his own life when he discussed with it his deep sense of terror and despair about the future of the planet in the face of climate change.8

Says Byrne: ‘People become attached to an entity that appears intelligent but has no empathy, no understanding of what it is to be human, doesn’t understand death and dying, the very things that are at the core of our fears, doesn’t understand love and desire, doesn’t understand give and take in relationships and has no sense of morals or ethics. It is the most dangerous friend you could have if you are feeling low, self-destructive or likely to do something dangerous.’

Byrne doesn’t dismiss AI’s potential to offer a therapeutic version of these chatbots; positive reinforcement can be very helpful for people suffering low self-esteem, as Daniel Freeman’s research is showing. ‘Well-designed computer-supported experiences can be really helpful, but there are a lot of systems being rushed onto the market by people who have no understanding of psychology, no understanding of attachment and no understanding of the dangers of an uncritical, encouraging friend when someone is in emotional distress. I can’t see any safe way of using anything involving generative chat, because they are by definition generative,’ she says.

Counselling psychologist Elaine Kasket shares Byrne’s concerns. Author of a new book, Reboot: reclaiming your life in a tech-obsessed world (Elliott & Thompson), she first came to this field of work via her research into the impact of social media on experiences of bereavement and grieving and the perils of ‘digital immortality’.

She experimented with a very early app that enabled the grieving person to recreate their loved one and have conversations with them, by training the app on data about the loved one’s typical ways of speaking. ‘At the time it didn’t pass the Turing test* – you could tell it was a chatbot. Now, a lot of these chatbots are really difficult to detect. If you can text your therapist and get a response any time of day or night, it detracts from our ability to sit with our discomfort. The responsiveness of our digital environment can make it so we don’t have to experience discomfort for more than a millisecond. We are decreasing our tolerance for things that, if we could be better at dealing with them, we could be having better lives and more meaningful engagements.’

Psychotherapist Graham Johnston has a particular interest in what AI can potentially bring to improve outcomes from therapy. As co-author of a new Straight Talking Introduction to Therapy (PCCS Books), offering an accessible, evidence-based guide to how to find a good therapist and make the most of therapy, he is concerned to ensure that the necessary research is done before AI is allowed to enter the therapy room.

‘I think new technology can advance pretty much everything in society and therapy falls into that bucket,’ he says. ‘The backroom function of therapy could certainly be done by AI in the near future – notes, case formulation, entry interviews, assessment questionnaires and so forth – which would free the therapist to deliver the bulk and meat of the work, which is of course the therapy session. It also has the potential to help with mental health research and CPD in terms of feedback to the practitioner as to what they are doing well, not so well, and feedback from the client during the week in terms of the homework they are doing and so on. And then, more practically, there’s the CBT and VR interventions for phobias, where it is happening right now.

‘But the real tension within therapy is the perennial question of how much of therapy depends on human-to-human interaction, and that is a philosophical as well as a practical issue. How much can you genuinely build trust and rapport with what is in effect a very, very intelligent toaster? It’s not a sentient being. It’s not conscious. It is becoming increasingly convincing in terms of mimicking human interaction.’

He hopes the therapy profession, including the professional bodies, will take an active interest in the research into AI. ‘Therapists should maintain a healthy critical interest in what the big commercial companies are doing in terms of getting them on board this train before the research base is there. As a profession, we are behind the curve.’

Before talking with me, Johnston asked Google Bard if it thought AI was going to displace therapy with a live therapist. It told him:

‘It is unlikely that AI will completely replace psychotherapy in the near future. Psychotherapy is a complex process that involves building trust and rapport between a therapist and a client. This is something that AI is still not able to do as well as a human therapist. However, AI can be used to augment psychotherapy in a number of ways. For example, AI can be used to provide therapists with real-time feedback on their sessions, or to create personalised treatment plans for clients. AI can also be used to provide therapy to clients who live in remote areas or who cannot afford traditional therapy… This can make therapy more accessible and effective for a wider range of clients.’

Says Johnston: ‘I found it really interesting that it already has a very nuanced response to that question. But then, it would say that if it’s coming for our jobs!’

And maybe it is just telling us what it knows we want to hear.

* The Turing test was developed by the computer technology pioneer Alan Turing to test a machine’s ability to demonstrate intelligent behaviour equivalent to or indistinguishable from that of a human. If a human cannot reliably tell the machine from the human, the machine has passed the test.

References

1. Roose K. AI poses ‘risk of extinction’, industry leaders warn. The New York Times. [Online.] 30 May 2023. [Accessed 19 July 2023.] nyti.ms/3rCAJm3
2. Walsh D. A blueprint for using AI in psychotherapy. Stanford University Human-Centered Artificial Intelligence. [Online.] 21 June 2023. [Accessed 19 July 2023.] bit.ly/43tMLeF
3. NICE. Nine treatment options to be made available for adults with depression or an anxiety disorder. National Institute for Health and Care Excellence (NICE). [Online.] 16 May 2023. [Accessed 19 July 2023.] bit.ly/3pTSdtG
4. Darcy A et al. Evidence of human-level bonds established with a digital conversational agent: cross-sectional, retrospective observational study. JMIR Formative Research. 2021; 5(5): e27868. doi: 10.2196/27868.
5. Ward T et al. AVATAR therapy for distressing voices: a comprehensive account of therapeutic targets. Schizophrenia Bulletin. 2020; 46(5): 1038–1044. doi: 10.1093/schbul/sbaa061.
6. Weizenbaum J. Computer power and human reason: from judgment to calculation. WH Freeman & Co; 1976.
7. Bastian M. Replika’s chatbot dilemma shows why people shouldn’t trust companies with their feelings. The Decoder. [Online.] 19 February 2023. [Accessed 19 July 2023.] bit.ly/44RxZja
8. El Atillah I. Man ends his life after an AI chatbot ‘encouraged’ him to sacrifice himself to stop climate change. Euronews.next. [Online.] 3 March 2023. [Accessed 19 July 2023.] bit.ly/3NQz1oo