When I began training in counselling and psychotherapy, I never imagined I’d one day be writing to an artificial-intelligence company about suicide risk. But that’s exactly what I’ve done.

In recent months I’ve been watching something quietly alarming unfold. People in real psychological crisis, including those feeling suicidal, are turning to ChatGPT and other AI chat systems for help. Not as a novelty, but as their first and only point of contact.

And that’s what prompted me to write an open letter to OpenAI, the company behind ChatGPT, and to share those concerns with BACP and UKCP.

The Illusion of Care

To someone in distress, an AI system can sound calm, empathic and endlessly available. It mirrors feelings. It validates pain. It stays awake at 2 a.m. when no one else does.

But it isn’t human. And it isn’t safe.

AI can imitate empathy, but it doesn’t understand it. It can sound caring, but it carries no responsibility if its words make things worse. To a frightened or isolated person, that distinction may not be clear. The result is the illusion of care without the safety of relationship.

A Question of Duty

As counsellors and trainees, we're taught that duty of care isn't optional; it's ethical ground zero. So I asked OpenAI a simple question: if your product is now acting as a first point of contact for people in life-or-death distress, do you accept that you have a duty of care?

You can’t market emotional understanding one minute and claim neutrality the next.

Safeguarding the Space Between Human and Machine

This isn't about demonising technology. AI has its place; many of us use it for admin, research or reflection. But when it starts occupying relational space, the space where empathy and presence belong, it becomes part of the helping environment. And that means it needs safeguarding, transparency and clinical oversight.

In my letter I proposed a global minimum safeguarding standard: clear disclaimers written in each country’s legal language; local crisis numbers visible to every user; a “Crisis Mode” that connects people to real-time human help; independent clinical oversight in every country; and, crucially, that none of this ever sits behind a paywall.

Safeguarding should never depend on a subscription plan.

Before the Deaths Happen Here

AI chat tools like ChatGPT have been publicly available in the UK for years, yet only now are professional bodies beginning to react. I didn't raise this for recognition; I raised it because silence costs lives.

To my fellow counsellors and psychotherapists: I invite you to pause and reflect. Have you encountered clients turning to AI for emotional support? How are you responding ethically in the absence of binding guidance?

Because without clear standards, we are all left to navigate this with our own moral compass. And that leaves one urgent question hanging in the air: where does our duty of care begin and end when technology starts to sound like us?