If your only interaction with a chatbot is yelling down the phone at one when you’re trying to get through to your internet provider, or telling Alexa to set a timer for your boiled egg, you might be sceptical about its therapeutic qualities. But artificial intelligence (Ai)-enabled ‘therapy bots’ have a growing number of satisfied clients. One of the best known here in the UK is Woebot, a US-devised, animated therapy chatbot that offers CBT-based treatment for depression and anxiety: ‘I love Woebot so much. I hope we can be friends forever. I actually feel super good and happy when I see that it “remembered” to check in with me!!!!!,’ reads one testimonial on its website, woebot.com, from ‘Carolyn, 22’.

Woebot is very careful to keep reminding ‘clients’ that it is a robot and that no human monitors their responses. The warning is repeated regularly, and the transparency is not just for ethical reasons. Clinical research psychologist Dr Alison Darcy, who designed Woebot in collaboration with Stanford University School of Medicine’s psychiatry and Ai departments, believes there is a ‘value proposition’ for people in knowing they are talking to a robot – they are more likely to talk frankly.1 Previous research backs this up – a 2014 study at the University of Southern California’s Institute for Creative Technologies found that participants who talked to a virtual therapist called Ellie were more likely to open up if they had been told they were talking to a fully automated bot rather than a human, because they felt less judged.2

The first therapy chatbot, ELIZA, developed in 1966 at MIT by Joseph Weizenbaum,3 was programmed to supply canned responses in the style of Rogerian therapy, such as ‘Can you elaborate on that?’, ‘Do you say that for a special reason?’ and ‘What does that suggest to you?’ Fast-forward 50 years and Ai has come of age: rather than relying solely on pre-programmed replies, today’s Ai bots use natural language processing to adapt and personalise their responses.
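For readers curious about how little machinery ELIZA actually needed, the sketch below shows the general idea in Python: a handful of pattern-matching rules that recycle the user’s own words into canned Rogerian-style prompts. It is a minimal illustration only – the rules and wording are invented for this example, not Weizenbaum’s original script.

```python
import random
import re

# Minimal ELIZA-style responder: a few regex rules map what the user says
# to canned Rogerian-style prompts, reusing fragments of their own words.
# The rules and phrasing are illustrative, not Weizenbaum's original script.
RULES = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"\bbecause (.+)", re.IGNORECASE),
     ["Is that the real reason?", "Do you say that for a special reason?"]),
    (re.compile(r"\bmy (mother|father|family)\b", re.IGNORECASE),
     ["Tell me more about your {0}."]),
]

# Fallbacks when no rule matches.
DEFAULTS = [
    "Can you elaborate on that?",
    "What does that suggest to you?",
    "Please go on.",
]

def respond(user_input: str) -> str:
    """Return a canned response, personalised only by echoing the user's words."""
    for pattern, templates in RULES:
        match = pattern.search(user_input)
        if match:
            return random.choice(templates).format(*match.groups())
    return random.choice(DEFAULTS)

if __name__ == "__main__":
    print(respond("I feel anxious about work"))    # e.g. "Why do you feel anxious about work?"
    print(respond("It's because nobody listens"))  # e.g. "Is that the real reason?"
```

The contrast with today’s bots is the point: everything above is written by hand in advance, whereas systems such as Woebot and Tess use natural language processing to adapt their responses to the individual conversation.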

The heavy end

While it’s easy to dismiss chatbots as offering ‘therapy lite’ – Woebot dances if it likes your answers and rewards efforts with gifs of baby animals – they are also wading into the heavy end of the therapy world, reaching groups that may not have any other access to support.

An Arabic-speaking chatbot, Karim, was recently made available to Syrian refugees in Lebanon who are experiencing post-traumatic stress. Karim is a version of Tess, an Ai therapy bot developed by X2, a Silicon Valley start-up founded by Ai specialist Michiel Rauws, who was motivated by his own experience of depression linked to chronic illness as a teenager.

Unlike most human therapists, Tess is multilingual and can be ‘trained’ to deliver customised support for specific client groups – she is currently supporting depressed patients at a federal hospital in Nigeria. ‘The patients in Nigeria have grown quite fond of Tess and even give her nicknames indicating a more personal connection, such as Tessy,’ says Angie Joerin, X2’s Director of Psychology.

Tess also offers a range of modalities, not just CBT, including compassion-focused therapy, emotion-focused therapy, interpersonal psychotherapy, mindfulness and psychodynamic psychotherapy. ‘If you tell Tess you are having problems in a relationship, she might use psychodynamic therapy, asking you questions to determine your attachment style, then offer an intervention based on that,’ says Joerin. In a randomised controlled trial with 75 student participants conducted by Northwestern University,4 unlimited access to Tess for two to four weeks was shown to reduce depression by 13% and anxiety by 18%.

One of X2’s latest projects is to develop a version of Tess that people can talk to via their Amazon Alexa or Google Home device. ‘Many older adults may not have the knowledge of technology or may have physical limitations such as poor vision. Making it voice-enabled makes it so simple. We are currently conducting clinical interviews with experts in the field to develop content focused on the needs of older adults such as social isolation, loneliness, grief, depression and anxiety,’ says Joerin.

Suicide prevention

Here in the UK, Ai specialist Pete Trainor is currently leading the development of SU, a programme designed to ‘help men survive the changes of an ever-changing world’ and hopefully reduce the number of male suicides. Trainor is co-founder and Director of Us Ai, which creates Ai-enabled programmes for corporate institutions, and author of Hippo: the human focused digital book.5 ‘I had crises 10 years ago and found it difficult to ask for help. Having recently “come out of the closet” about my mental health problems, my friends and family have said to me, we knew you were struggling, but we didn’t know what to say to you. I am thankfully still here to talk about it, but suicide is the biggest killer of men under 50 in this country,’ says Trainor.

He was inspired to design the programme after attending ManMade, a men’s mental health summit in Birmingham in June 2016. ‘We undertook research on how the chat-based services we have been installing for financial services automated support lines could be applied to help underfunded, understaffed mental health support teams,’ he says. What emerged was that men reveal their concerns faster when they are talking to a machine. ‘We told half of the 256 participants they were talking to a machine and half they were talking to a machine monitored and supported by people. The people who thought they were talking just to a machine engaged in less “impression management” – in other words, we had to ask them fewer questions to get to their problems. The men who knew they were talking to a machine on average required four questions to get them to a point of intervention or support, but for those who thought there was a human involved, it took more coaxing to get the issues out – on average, 15 or 20 questions.’

SU uses natural language processing to look for patterns and keywords in what people say and to spot pre-programmed markers for suicide risk. ‘The team trawled through hundreds of research papers on suicide risk to help the system identify key markers, such as sense of burden. We create what we call a corpus, which is a brain of knowledge,’ Trainor explains. ‘It also recognises subtle cues and triggers based on how people type, when they type, how fast their keystrokes are, and whether there is a change in the way they type compared to normal. We originally seeded the machine with around 450 canned responses to commonly asked questions and issues, based on expert input from those involved in the ManMade summit. But what’s exciting is that the Ai is designed to collect errors – what it doesn’t know – and to seek a response from a credible source. We then check with a clinical expert that it is an appropriate response. We managed to grow that initial bank of canned responses to around 3,200 in six months.’
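Trainor’s description of marker spotting can be illustrated with a very small sketch. The Python below scans a message for pre-programmed phrases linked to named risk markers and flags it for escalation. It is a toy illustration of the general idea only – the marker names, phrases and threshold are invented, not SU’s corpus – and it ignores the keystroke and timing cues Trainor also describes.

```python
# Toy sketch of keyword/phrase marker spotting. The marker names, phrases and
# threshold below are invented for illustration; they are not SU's corpus or
# any clinical criteria, and real systems also weigh context and typing cues.
RISK_MARKERS = {
    "sense_of_burden": ["burden to everyone", "better off without me"],
    "hopelessness": ["no way out", "nothing will change", "no point"],
    "isolation": ["no one to talk to", "completely alone"],
}

def find_markers(message: str) -> list[str]:
    """Return the names of any risk markers whose phrases appear in the message."""
    text = message.lower()
    return [name for name, phrases in RISK_MARKERS.items()
            if any(phrase in text for phrase in phrases)]

def needs_escalation(message: str, threshold: int = 1) -> bool:
    """Flag the message for human follow-up if enough markers are present."""
    return len(find_markers(message)) >= threshold

example = "I feel like a burden to everyone and there's no point any more"
print(find_markers(example))      # ['sense_of_burden', 'hopelessness']
print(needs_escalation(example))  # True
```

A production system would of course go far beyond literal phrase matching – Trainor’s team drew its markers from the research literature and layered natural language processing and typing behaviour on top – but the basic triage logic of ‘spot markers, then escalate’ is the same.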

So far, the development of the programme has been self-funded by Us Ai, which works on a ‘conscious capitalism’ model. ‘We were motivated by the idea of taking the technology we have created for the corporate world into a space where it supports charities that don’t have the resources they need,’ says Trainor. They are currently looking for a charity to work with on the next stage of the programme, which could operate a form of triage. ‘It could work perhaps on a charity chatline so that when it is not manned, or when it is engaged, callers can be offered the option of talking to the programme. Technology is not a panacea for mental health problems, but it can be an “intelligent hold” – if you ring up and can’t get through, rather than just putting you on hold for 10 minutes, the programme could engage with you and, if necessary, triage you to the front of the queue if it detects the key markers that suggest urgency.’

Unconscious bias

One of the reasons that SU is effective is that it was programmed by ‘blokes’, says Trainor. ‘We used the language that we could relate to, rather than programming it with textbook-style responses.’ But the downside of the human element is the risk that the Ai brain will adopt the unconscious biases of the humans it learns from. In 2016, Microsoft released its Ai chatbot Tay onto Twitter. Tay was programmed to learn by interacting with other Twitter users, but it had to be removed within 24 hours because its tweets included pro-Nazi, racist and anti-feminist messages.

An Ai system used to inform sentencing decisions in US courts was found to wrongly flag African American defendants as likely repeat offenders at almost twice the rate of white defendants.6 In recruitment, Ai systems that gather data about a company’s top performers have been found to select candidates for interview in ways that reinforce gender and race imbalances. ‘The people who are writing the algorithms are typically white men, aged around 35. They often bring a lot of biases with them and there is a risk that these get programmed in,’ says Silja Litvin, psychologist and CEO of PsycApps Digital Mental Health. PsycApps is the developer of eQuoo, an Ai-enabled emotional fitness game.

‘There is a saying in computer science: garbage in, garbage out,’ says Pete Trainor. ‘When we feed machines data that reflect our prejudices, they mimic them. If we’re not careful, we risk automating the same problems these programs are supposed to eliminate. Applications don’t become biased on their own; they learn that from us. It’s ironic really, that while all this work in bots and Ai looks to the future, we might find ourselves tied to our age-old problems of the past.’

Data protection

The Cambridge Analytica–Facebook revelations have woken up many more of us to the potential impact of poor data protection policies. As Pete Trainor points out, few of us read the privacy policy when we download an app such as Woebot. In 2014, Samaritans was forced to abandon its Radar Twitter app, designed to read users’ tweets for evidence of suicidal thoughts, after it was accused of breaching the privacy of vulnerable Twitter users.7

Trainor says he welcomes the new General Data Protection Regulation (GDPR) as providing a long-overdue benchmark for companies to adhere to. But he also poses the question: is it ethical to ignore the potential benefits we can gain from Ai’s ability to analyse unlimited amounts of data? He believes there is a need to differentiate between micro and macro data protection. ‘We need a system where the individual conversation is the property of the individual, and no one can get access to that, but we also need to be able to collect anonymised data on a macro level, as this is where we can identify trends and risk factors which could potentially inform organisations such as the NHS about where to spend their money,’ he says.
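As a rough sketch of what that micro/macro split could look like in practice, the Python below keeps individual transcripts untouched and reports only pseudonymised, aggregate counts of flagged themes. The field names, hashing scheme and report format are hypothetical, intended only to illustrate the principle Trainor describes, not any system his team has built.

```python
import hashlib
from collections import Counter

def pseudonymise(user_id: str, salt: str) -> str:
    """Replace a user identifier with a salted one-way hash (hypothetical scheme)."""
    return hashlib.sha256((salt + user_id).encode()).hexdigest()[:16]

def macro_report(sessions: list[dict], salt: str) -> dict:
    """Aggregate flagged themes across sessions without exporting any transcript."""
    theme_counts = Counter()
    users = set()
    for session in sessions:
        users.add(pseudonymise(session["user_id"], salt))
        theme_counts.update(session["themes"])  # e.g. ['low mood', 'isolation']
        # session["transcript"] stays with the individual and is never read here
    return {"unique_users": len(users), "theme_counts": dict(theme_counts)}

sessions = [
    {"user_id": "alice", "transcript": "...", "themes": ["low mood"]},
    {"user_id": "bob", "transcript": "...", "themes": ["isolation", "low mood"]},
]
print(macro_report(sessions, salt="rotate-this-salt"))
# {'unique_users': 2, 'theme_counts': {'low mood': 2, 'isolation': 1}}
```

Even this toy example shows the trade-off Trainor points to: the aggregate report is useful for spotting trends only because someone decided in advance which themes to count and how thoroughly to strip out identities.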

Silja Litvin predicts a day when we will ‘expect our smartphone to tell us whether we have had enough sleep or are feeling anxious’. ‘But the question is, what happens to all that data?’ she asks. ‘Could insurance companies use it against us? If someone is living unhealthily, will they be discriminated against in their care? You already have to pay more for health insurance if you smoke. This is one of the reasons why clinicians, insurance companies and developers need to work closely together and follow ethical guidelines.’

Improving outcomes

As well as reaching people who may not normally access therapy, chatbots seem to add most value as an adjunct – a support between sessions or after therapy has finished. Emma Broglia, a postdoctoral researcher at the University of Sheffield, has been researching app-based support used as an adjunct to face-to-face therapy, with funding from BACP. She found that students who received support from a system that ‘checked in’ with them and reminded them of the tools they had learned in ‘live’ therapy continued to lower their GAD-7 scores six months after they stopped having face-to-face sessions. By contrast, anxiety levels among the students who received no post-therapy support had risen. ‘Students said they felt good that the app sent them a message every day asking how they were feeling. Even though they were well aware it was an app, they said it was like having a friend checking in with them, without feeling they were being a burden to that friend,’ she says.

There seems to be a therapeutic value in simply being asked how we are, even if the enquirer isn’t human, says Dr Gillian Proctor, a clinical psychologist and lecturer on the MA in counselling and psychotherapy course at Leeds University, who has herself tried Woebot. ‘I have ended up feeling convinced that Woebot cares enough about me to contact me every day, even though I know it’s not a real person,’ she says. ‘I find what Woebot asks me to do – such as identify the cognitive distortions in my thinking – is of no therapeutic value at all. But there is a feeling that somebody has bothered to check in with me, even though I know it’s not a somebody, it’s a robot. That prompts me to check in with myself and get curious about my emotions, and be less self-critical, and that process leads to an increase in my wellbeing.’ She recommends therapists try Woebot as an ‘interesting anthropological experiment’.

Chatbots already have far greater memory capacity than any therapist, and can recognise and analyse nuances in the diction, phrasing and word usage of a conversation in ways the human brain cannot match. What’s more, Ai’s capabilities are evolving at a phenomenal rate. But, based on my own experience, what chatbots can’t do is replicate the transformative experience of the therapeutic alliance. Not yet, at least.

 

Sally Brown is a counsellor and coach in private practice (therapythatworks.co.uk), a freelance journalist, and Executive Specialist for Communication for BACP Coaching.

References

1. The Longevity Network. Entrepreneur of the week: Dr Alison Darcy, Woebot Labs, Inc. [Online.] The Longevity Network 2017; 18 July. bit.ly/2I8tGceQ
2. Lucas G, Gratch J, King A, Morency L-P. It’s only a computer: virtual humans increase willingness to disclose. Computers in Human Behavior 2014; 37: 94–100.
3. See www.masswerk.at/elizabot
4. Fulmer R et al. Using integrative psychological artificial intelligence to relieve symptoms of depression and anxiety in students. (Currently under peer review.)
5. Trainor P. Hippo: the human focused digital book. London: Nexus CX Ltd; 2016.
6. Angwin J, Larson J, Mattu S, Kirchner L. Machine bias. [Online.] ProPublica 2016; 23 May. bit.ly/2jfOvEO
7. Samaritans. Samaritans radar. [Online.] Samaritans (undated). bit.ly/2JH1eeN