Having spent 20 years working in the technology industry, I am fascinated by the current discourse about the impact of artificial intelligence (AI), both generally and on our profession. When I was asked to present on the impact of AI in therapy at the BACP Private Practice Conference 2025, I felt sufficiently equipped to tackle this complex topic. I pitched my presentation at a non-technical audience, making an educated guess that colleagues had a superficial understanding of AI but might benefit from a deeper one, particularly around the implications for therapy.
Definitions
The obvious starting point is to try to define what AI is and what it isn’t. This definition from the Information Commissioner’s Office (ICO) is a useful place to begin: ‘Artificial Intelligence (AI) can be defined in many ways. However, within this guidance, we define it as an umbrella term for a range of algorithm-based technologies that solve complex tasks by carrying out functions that previously required human thinking.’1 I’m interested in exploring whether AI can replace human thinking, and perhaps more importantly for our profession, human experience.
It’s important to clarify here that there are different types of AI. I find it helpful to contrast a fitness tracker, which uses physical AI, with a chatbot, which uses generative AI. We don’t expect a fitness tracker to know how we feel, even though it knows quite a lot about us. This is because fitness trackers are constrained to physical AI, which doesn’t require human interaction; it responds to a set of predefined tasks and environmental inputs.
The reason people feel as though a chatbot understands them is, in part, because we interact with it – we put out a signal and get a response. The other component is that it uses generative AI. Here, the algorithm responds to human input by drawing on large language models (LLMs) – statistical models trained on enormous datasets of text – and can generate new content from the patterns it has learned. As a result, creative industries are currently lobbying governments over the risk of copyright infringement and mass job displacement. There is also a human element, as replacing creatives with machines devalues lived experience.
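For technically curious readers, the toy Python sketch below illustrates the distinction in miniature. Everything in it is invented for illustration – real systems are vastly more complex – but the asymmetry is the point: the tracker’s behaviour is fully specified in advance, whereas the generator produces new text by sampling from patterns in its (here, tiny) training data.

```python
import random

def tracker_response(steps: int) -> str:
    """'Physical' AI in miniature: a fixed rule that maps an
    environmental input to one of a set of predefined responses."""
    return "Goal reached!" if steps >= 10_000 else "Keep moving."

# Toy 'training data': which words follow which. Real LLMs learn
# billions of such patterns from enormous bodies of text.
bigrams = {
    "i": ["feel", "am"],
    "feel": ["tired", "heard", "anxious"],
    "am": ["listening", "here"],
}

def generate(word: str, max_words: int = 4) -> str:
    """Generative AI in miniature: each word is sampled from patterns
    in the data, so the output is new text, not a stored reply."""
    words = [word]
    for _ in range(max_words):
        if word not in bigrams:
            break
        word = random.choice(bigrams[word])
        words.append(word)
    return " ".join(words)

print(tracker_response(8_000))  # always 'Keep moving.'
print(generate("i"))            # e.g. 'i feel heard' - varies run to run
```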
An ‘outside-in’ approach
During my research, I interviewed hundreds of clients and therapists to ascertain their levels of understanding of what AI is and isn’t, and to build a clearer picture of chatbot use. I did this organically over a period of 12 months, through a series of informal interviews and by following several practitioners from BACP, BPS, UKCP and BABCP who were actively posting on this topic on social media.
The biggest personal takeaway from my research was the discovery that all my clients use chatbots and have been doing so for some time. My disclosure that I was presenting at the conference gave me the opportunity to do what I consider to be one of the most important aspects of person-to-person therapy: to bring the ‘outside in’. I am conscious of the ethical implications of using clients to form the basis of research; however, most of my clients are long-term clients who have already been happy to provide feedback, case study material and testimonials for previous BACP-related work. On each of these occasions, I discussed with my supervisor the ethical implications and the potential impact on each individual client. Because of my work as a BACP media representative, I also updated a section of my written contract to reflect this and the implications for both existing and new clients.
‘Outside In’ is a business strategy concept that has been gaining traction in the technology sector since 2010. Popularised in the book Strategy from the Outside In: profiting from customer value,2 the concept refers to how digital technologies can harness innovation from everyone, rather than dictating it from the inside out, and demonstrates how business strategy often seeks to democratise, minimise bias, and emphasise empathy and user-centricity.
In a recent issue of Therapy Today, Alistair Berlin3 shared his top 10 fundamental questions for working relationally. He illustrated that a non-relational way of working sees the therapist as a ‘detached observer’ of the client’s process, whereas, in a relational approach, the focus of the therapeutic exploration is the relationship between therapist and client. Reading this article in the days before I presented at the conference gave me psychological permission for having included my clients in my research, and supported the takeaway about AI that I offered to colleagues.
Algorithmic bias and the ‘female load’
As a queer, neurodivergent psychotherapist, my client base is predominantly female and non-binary. Many of my clients in the LGBTQ community self-identify as ‘neurospicy’ or have a diagnosis of ADHD, autism, or autism and ADHD (AuDHD). Having trained and worked with Rape Crisis, my work remains largely trauma informed, but I consider myself integrative. In private practice, I work as a dual practitioner (therapist/coach), and feminist principles underpin my services. My aim is to empower my clients by acknowledging the impact of societal gender roles, challenging power imbalances, and promoting autonomy and equality within the therapeutic relationship.
One of the benefits of discussing chatbot use with clients is the ongoing conversations we are subsequently having about whether generative AI tools add to or detract from the invisible mental load many women carry. In the largest study of chatbot use to date, OpenAI (the maker of ChatGPT) announced that, for the first time, women were using ChatGPT more than men,4 and that 70% of usage was classified as ‘non-professional’.
Since the conference, I have spoken on BBC Radio 5 and BBC One’s Morning Live about women using chatbots for mental health support and relationship advice. While I appreciate the need for shortcuts to alleviate the burden of the ‘female load’, and the accessibility of chatbots compared with both NHS and private therapy, I am troubled that women are the main users of a technology in which they are not adequately represented.
As I pointed out in my presentation, algorithms are biased because we are biased. Generative AI tools consume us: every piece of digital content we generate, from entries in online directories to websites to our work in the media, has already been consumed. But while, as members of BACP, we subscribe to the Ethical Framework – engaging in 30 hours of CPD annually to broaden our clinical and cultural knowledge, undertaking regular supervision to increase awareness of our own biases, and exploring our clients’ frame of reference to deepen our understanding of difference – chatbots can do none of this.
While generative AI tools could be described as intelligent, they do not have a mind. Chatbots have no lived experience; they have never had a cold, broken a bone, experienced adversity or been in a relationship. Whatever our professional stance on self-disclosure, as therapists, we bring our whole selves to our work. Our lived experience shapes our training, our place of work and how we are with our clients. In my research, I found that sympathetic colleagues highlighted the lack of historical data about women as a key issue, one that produces algorithmic bias in areas such as women’s health.
While this is correct, it’s only part of the problem. Global technology giant IBM5 describes three types of algorithmic bias: prejudice, negative legacy and underestimation. Not all of this comes from poor-quality data, as news stories demonstrate: in 2018, Amazon had to withdraw its in-house AI recruitment tool because it systematically screened out women,6 and in 2025 a collective action suit was approved in the US against Workday,7 a global recruitment software company, alleging discrimination on the basis of several protected characteristics, including age, gender and race. The foundation of algorithmic bias is in the core data, but it is often amplified by systematic errors in machine learning. Whether the bias is intentional or not, there is currently no legislation, in the UK or globally, to hold technologists accountable.
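To make that mechanism concrete, here is a minimal Python sketch; all numbers and feature names are invented for illustration. A trivially simple ‘model’ trained on biased historical hiring records reproduces the prejudice baked into them, even though gender never appears as an explicit field – only a proxy feature, much as word choices on CVs reportedly did in the Amazon case.

```python
from collections import defaultdict

# Invented historical hiring records: (proxy_feature, hired). In this
# toy history, applicants listing 'netball' (predominantly women, say)
# were rarely hired, so the prejudice is baked into the data itself.
history = (
    [("rugby", True)] * 80 + [("rugby", False)] * 20
    + [("netball", True)] * 20 + [("netball", False)] * 80
)

# 'Train' the simplest possible model: the hire rate per feature value.
counts = defaultdict(lambda: [0, 0])  # feature -> [times hired, total seen]
for feature, hired in history:
    counts[feature][0] += hired
    counts[feature][1] += 1

def recommend(feature: str) -> bool:
    """Recommend a candidate if the historical hire rate for their proxy
    feature exceeds 50% - the old prejudice, now automated at scale."""
    hired, total = counts[feature]
    return hired / total > 0.5

print(recommend("rugby"))    # True  - the majority group is favoured
print(recommend("netball"))  # False - negative legacy, reproduced exactly
```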
Features and bugs
Though most of my clients live in the UK, not all of them are native English speakers. For colleagues working with a higher proportion of non-native English-speaking clients, it’s important to be aware that over 90% of the data used to train LLMs is in English.8 Again, this is a choice made by technologists – one that is integral to why algorithms produce discriminatory results.
In the technology industry, there is a software development concept called ‘features and bugs’. A feature is designed into a product or service with the aim of improving the user experience. This could be a functional adaptation, like a bagless hoover, or an aesthetic one, like an iPhone’s sleek design. A bug detracts from the user experience, and developers seek to fix bugs in software updates. However, there are always trade-offs, and sometimes what might feel like a feature for the user can, over time, become a bug.
For example, I spoke to users of AI-based therapy tools such as Woebot, Earkick and Wysa, many of which are currently free. The fact that it costs nothing to use these tools is clearly a feature for users. But free at the point of entry is not the same as having no cost: digital applications are designed to keep us on their platforms for as long as possible, to make us dependent on them and to upsell additional products and services. Most therapy bots and chatbots have subscription services.
A ‘frictionless existence’
In generational terms, my clients span Gen Z through to Boomers, and, without exception, social media dependence – and its impact on relationships, concentration, work and self-care – is a consistent topic in sessions. For colleagues working with Gen Z particularly, this statistic from Ofcom9 is revealing: ‘On average, 20-year-olds in the UK spend approximately six hours and one minute online each day. This translates to around 42 hours per week. Some young adults even spend up to seven hours a day online, equating to a full day per week.’
Digital natives are already living so much of their lives online. Esther Perel calls this the pull of a ‘frictionless existence’.10 The interactive nature of chatbots adds another layer of dependence; there is something tantalising about the call and response. In my research, I noticed an interesting parallel between chatbots and humanistic therapists. When I read an article in The Guardian11 about a man who married his chatbot (despite being married in real life), it became clear that ‘Travis’ experienced as love the unconditional positive regard for which chatbots have become renowned. The rise of ‘AI girlfriends’ is often discussed in purely aesthetic terms, but I think this is an important part of the appeal.
As an integrative therapist, I use several approaches in my work and, reflecting on Travis’s experience, I have been considering the role of affection (and how clients perceive it) in therapy – for example, ‘strokes’ in transactional analysis, ‘phantasy’ in psychodynamic work and ‘compliments’ in solution-focused therapy. Just as some clients have told me that they found a person-centred approach nurturing and supportive, others have described it as ‘saccharine’ and ‘frustrating’. Correspondingly, many criticisms of chatbots concern overly affirming language and faux humility. OpenAI has responded to these criticisms by addressing this algorithmic slant. However, for people who have never felt seen or heard, this is a desirable feature, not a bug.
While Travis might have experience of a relationship in real life, many people don’t. In an increasingly plugged-in world, people are meeting up less. Most of my clients work from home. They order food to go rather than eat out. They exercise by watching YouTube videos instead of going to the gym. This reduced exposure to ‘in real life’ activities is not limited to younger people. According to the World Health Organization (WHO), ‘In the UK, an estimated two million people over 50 are projected to experience loneliness by 2025/26, a significant increase from 1.4 million in 2016/17.’12 For those of us with clients who are socially isolated, it is easy to understand why they might turn to chatbots for comfort.
During a radio interview, a listener messaged the programme to say that she used chatbots to offload – something she would never consider doing with friends or family. The key benefit, in her view, was that she could voice her darkest thoughts and judgments ‘without having to face anyone’. It struck me, in this and other examples from conversations with clients, how chatbots are becoming the new journaling. The empathetic response they provide is, for many, a valuable feature.
In my introduction at the conference, I referenced some of the tragic stories making the headlines about AI use and reflected on how they reminded me of similar stories about internet use over a decade ago. I warned that by focusing on them, we are in danger of demonising AI and entrenching ourselves in ignorance. I pointed out that tragedies happen, even in person-to-person work. There will always be people whose mental health or circumstances are at an extreme end of the bell curve.
The impact of AI on relationships
My main concern – one shared by many colleagues – is the impact of AI on our ability to form and maintain relationships. Irrespective of our modality, most of us would agree that the therapeutic relationship is one of our key tools. On BBC One’s Morning Live recently, I spoke of how a key part of my work is to challenge clients to expand their window of tolerance and empower them to build resilience in relationships. Relationships are inherently complex and messy, yet also rewarding and life-giving. The frictionless life enabled by digital applications does nothing to prepare us for relationships in real life. As Perel comments: ‘We are in some way planning our own extinction. With predictive technologies that are presenting a life that is basically frictionless, you forget that relationships, and people, by their very nature, are unpredictable and imperfect.’10
At the time of writing, I am preparing a workshop for the BACP Workplace division on working with AI. My focus will be on helping colleagues understand the opportunities and ethical considerations of integrating AI tools in therapeutic practice, whereas my focus at the BACP Private Practice Conference was on the impact of AI tools on clients. Both are important. Perhaps our collective focus now should be on how to bring AI tools into the therapeutic frame – whether that’s understanding our clients’ engagement with them or using them as part of our work, in an empowering and ethical way.
References
1 ICO. Definitions. https://tinyurl.com/3mh7rha3 (accessed 9 January 2026).
2 Day GS, Moorman C. Strategy from the outside in: profiting from customer value. New York, NY: McGraw Hill; 2010.
3 Berlin A. Working relationally. Therapy Today 2025; 36(7): 55–59.
4 OpenAI. How people are using ChatGPT. https://tinyurl.com/45k79jk3 (accessed 13 January 2026).
5 IBM. What is AI bias? https://tinyurl.com/3ks8zfrf (accessed 9 January 2026).
6 BBC. Amazon scrapped ‘sexist AI’ tool. https://tinyurl.com/3usu4tpr (accessed 9 January 2026).
7 Dobkin R. Lawsuit claiming discrimination by the Workday HR program could have huge impacts on how AI is used in hiring. https://tinyurl.com/3mxy4tty (accessed 9 January 2026).
8 Slator. Meta warns its latest large language model ‘may not be suitable’ for non-English use. https://tinyurl.com/59mw2aj8 (accessed 9 January 2026).
9 Ofcom. Adults’ media use and attitudes report. https://tinyurl.com/4u4njz8d (accessed 9 January 2026).
10 The Prof G Pod. Esther Perel on how technology is changing love and work. https://tinyurl.com/yuxrr6r8 (accessed 9 January 2026).
11 Heritage S. ‘I felt pure, unconditional love’: the people who marry their AI chatbots. https://tinyurl.com/bdkx2uu6 (accessed 9 January 2026).
12 World Health Organization. Social isolation and loneliness. https://tinyurl.com/4v3sm6a9 (accessed 9 January 2026).