Psychotherapy research is at a watershed. The supremacy of the randomised controlled trial (RCT) as the gold-standard measure of therapy’s efficacy is increasingly being challenged. The last couple of decades have seen a torrent of these trials sweep counselling and psychotherapy, kicking and screaming, towards manualised, technique-driven approaches that more closely resemble the medical treatments that RCTs were designed to test. Lately, the force of the torrent has perhaps lessened, checked by a growing number of studies and reviews that question the validity of RCT findings, especially those concerning the efficacy of CBT. There is also an emerging recognition that the large datasets collected from routine practice have much to tell us about what ‘works’ in therapy.

Even NICE, the UK’s supreme arbiter of clinical effectiveness, seems to be faltering: publication of the revised guideline on treating adult depression keeps being shifted further back, with no explanation. There are many who hope that, finally, NICE is listening to its critics who say its hierarchy of evidence is too narrow, that RCTs are only one way of assessing the effectiveness of psychotherapies and that they don’t give a full enough picture to be used as the primary, gold-standard measure.

Naomi Moller, joint Head of Research at BACP, is among those leading this critique. ‘In my view, it’s the funding context and the politics that drive a lot of the RCT research. RCTs offer a great method to compare treatments. My issue is that focusing so much on one factor means ignoring all the other factors that might make a difference – like therapist or service variables. It also often means ignoring process – the complex, detailed stuff that happens in the room. But in today’s NHS, if you can’t evidence that your therapy approach “works”, it won’t get commissioned.’

Robert Elliott, Professor of Counselling at the University of Strathclyde, calls it a horse race, ‘and as soon as you have a horse race, you get people cheating’. He says this with tongue in cheek, but there’s a serious underlying message. ‘The thing about the horse-race effect is that it is used to gain power over other people: research has become about politics not science.’

Andy Rogers agrees. A university and college counselling service co-ordinator, he is a vocal critic of the direction in which he believes the counselling establishment is taking his profession. ‘Every therapeutic approach wants validation by NICE. The political reality is that we are all vying for dominance and legitimacy, and that’s to do with the power of the NHS. As the biggest employer of counsellors in the UK, it has a distorting effect on what is involved in our professional practice.’

The ‘cheating’ is not necessarily deliberate. As a paper published last autumn in the leading US medical journal JAMA reveals,1 it’s simply that RCTs are not the unflawed, objective, super-scientific model they have been made out to be. Researchers at the University of Giessen in Germany found that just 17% of the RCTs used to demonstrate the efficacy of CBT for anxiety and depression were of ‘high quality’, and that more than 80% of trials of CBT for anxiety and 44% of those for depression compared it with a waiting-list control group, which ‘is not a strong proof of efficacy and may lead to overestimating the efficacy of CBT’. They also raise the problem of ‘researcher allegiance’ (or bias, more bluntly), which they say is highly likely to have skewed the results of several major CBT studies (in one, a study of therapy for trauma, the therapists in the non-CBT comparison group were not allowed to address the trauma directly). CBT, they conclude, ‘is not a panacea’ – and, by implication, neither are RCTs.

Others would say RCTs are also a blunt instrument, ill-suited to the highly complex, multifaceted processes of counselling and psychotherapy, whose outcomes depend on the interplay of a huge array of interpersonal factors between therapist and client, as well as all the other, external influences on both of them. No single research method is going to paint a true picture of the process.

Social healing

‘Psychotherapy is social healing. It’s not surprising it’s so difficult to research,’ says Bruce Wampold, Emeritus Professor of Counseling Psychology at the University of Wisconsin–Madison and a much-cited authority in the world of therapy effectiveness research. RCTs are the wrong method, measuring the wrong things, he argues: ‘The NHS in England has wasted an immense amount of money training therapists to deliver CBT without any evidence of improvement in outcomes. CBT is effective, but it’s no better than any other treatment given by skilled practitioners.’

Mick Cooper, Professor of Counselling Psychology at Roehampton University, says it is unrealistic to entirely dismiss RCTs and their contribution to our understanding of therapy outcomes. He is currently leading a major RCT, the ETHOS study, on the effectiveness and cost-effectiveness of humanistic, school-based counselling. ‘RCTs are not the source of all evil. What they do well is give us an indication of the average effect and average cost-effectiveness of a particular intervention. They have their limitations – they are not always representative of particular client sub-groups and what is done in real-life clinical practice – but they do allow a comparison of what happens when you do something and when you don’t, and that’s often what commissioners want to know.

‘When a commissioner says, “Why should I be paying you for providing this therapy?”, you need to have studies showing that what you do is, overall, effective,’ he says. ‘Presenting them with a copy of Rogers’ The necessary and sufficient conditions of therapeutic personality change is unlikely to convince. Qualitative research is critical to our field, but we’ve yet to work out how to use it to inform national clinical guidelines. That is an important area for further work.’

Pim Cuijpers, Professor of Clinical Psychology at the Vrije Universiteit Amsterdam, is a leading authority on RCTs for depression. He led the 2016 meta-analysis that the JAMA paper cites as conclusively exploding the claims for CBT’s superiority over other kinds of talking therapy.2 Nevertheless, while he unhesitatingly points out the flaws in the current RCT evidence base for talking therapies for depression, he also argues that RCTs are an essential tool in healthcare research: ‘RCTs are the only method that enables you to establish that it is the therapy that makes the difference, not other factors outside the therapy room – which is often the case with a condition like depression. People can get better spontaneously and then think that, because they are in therapy, that is the reason, and it’s not necessarily true.

‘There are a lot of alternative therapies that have very satisfied patients who swear that they were helped by them, and therapists who are sincerely convinced that their approach is highly effective, when it is not so,’ he says. ‘The only way to distinguish this kind of practice from serious treatments is by doing RCTs. Any other kind of research does not result in scientifically strong evidence. Without RCTs, all you have is opinion from clinicians and patients about what works. It’s the same across all medicine, it’s not only psychotherapy.’

So, if RCTs don’t give the full, or even an accurate, picture, what does? Most counsellors and therapists will be familiar with the so-called dodo bird verdict: ‘Everybody has won and all must have prizes.’ Coined by US psychologist Saul Rosenzweig in 1936 and borrowed from Lewis Carroll’s Alice’s Adventures in Wonderland, it answers the perennial question of whether particular therapy models or techniques are more effective than others – or, to use Elliott’s analogy, which horse wins the race. Evidence from numerous studies shows that, across all populations and all types of presenting issues, all therapies achieve roughly the same outcomes.

Common factors

The dodo bird finding has led researchers to explore whether the magic ingredients in counselling and psychotherapy are in fact those that are common across all the modalities – the so-called common factors – not the different techniques they use. Another line of inquiry is ‘what works for whom?’ A therapy might be effective overall, but is it demonstrably effective, say, with depression, or with older adults, or with older adults with depression?

The common factors are legion – up to 89 have been proposed. Bruce Wampold’s 2015 paper is generally regarded as the most authoritative summary of the evidence for their impact.3 In brief, they include the alliance between therapist and client, therapist empathy, the client’s expectations of therapy, the skill of the individual therapist (‘therapist effects’), and the adaptation of evidence-based treatments to accommodate different cultural needs and norms (the acceptability of the therapy to the client).

The importance of these variations in the client, the therapist and, indeed, the therapy service is demonstrated in a study part-sponsored by BACP that compared outcomes from CBT and generic counselling across more than 100 IAPT services and 33,000 patients.4 The outcomes were broadly comparable, but what did make a difference across both modalities were service variables. Naomi Moller says more research effort needs to focus on drilling down into these differences if therapy is to develop its capacity to contribute meaningfully to tackling mental ill health. This means looking beyond the horse race, and in particular to the quantities of client data being harvested by IAPT. It is also why Professor Michael Barkham’s PRaCTICED trial at the University of Sheffield is so important: it is an RCT comparing CBT with counselling for depression in a real-life setting – an actual IAPT service.

‘We have to hold on to the value of the range of research, quantitative and qualitative, process and outcome, while being pragmatic about the kind of research that is necessary to effectively influence policymakers and funders,’ Moller says. ‘We criticised the 2018 Children and Young People’s Mental Health green paper because it totally ignores the evidence for the effectiveness of school-based counselling. We were also very critical that NICE still refuses to accept the relevance of findings based on large routine outcome datasets like the IAPT dataset, and that the latest depression guideline revision didn’t bother to update the service-user experience part. It is shameful not to look at the research on users’ preferences when there is so much research showing that it makes a difference to outcomes in therapy.’

Where to now?

Miranda Wolpert, Professor of Evidence-based Practice and Research at University College London, thinks therapy research is too focused on competing modalities and what goes on inside the counselling room. Rather, research should look at the external factors in clients’ lives and the resources that contribute to client behaviour change, so that therapy can build on them in the counselling relationship. ‘We need to move away from our obsession with modalities, and explore what people do to manage their own mental wellbeing in the 99% of their lives when they aren’t in therapy,’ she says. She also argues that therapy research needs to look more at ‘what works for whom’ – ‘what individuals and groups of individuals find helpful, in and outside of therapy, and how we tailor the therapeutic engagement and advice to the specific needs of different groups’.

Robert Elliott backs change process research to drill down into the common therapeutic factors, within a pluralist approach. This combines quantitative outcome data, qualitative research into ‘helpful factors’, what he calls ‘hermeneutic single-case efficacy studies’, plus microanalyses of what happens moment by moment in the interactions between individual therapist and individual client. Change process research is by no means perfect, he says, but these very intensive case studies can be very fruitful, both for the individual therapist working with the individual client and in drawing more general conclusions from the convergence of data from several sources.

Naomi Moller says psychotherapy has much to learn from the evidence emerging from genetic and biological research. ‘Currently the focus is on developing tightly specified treatments for particular presenting diagnoses and particular populations, and then trying to evidence that they indeed “work” so that you can get your “NICE-recommended” gold star. An alternative is to focus on transdiagnostic treatment approaches that target common mechanisms that underlie a variety of mental health difficulties. In my view, this is an important area for both research and therapy development.’

Bruce Wampold says research should be focusing more on developing highly skilled therapists, from the selection of students through to continuing professional development and supervision. ‘The hallmark of expertise in any activity is to improve your skills over time. The therapy profession is not set up to do this. The attitude is, “Let’s see what is the most effective treatment and train everyone to do it”, when what we should be doing is structuring the professional context to help therapists improve through professional development and supervision.’ He says looking at the influence of the organisations within which therapy is delivered is also important: the clinic environment and how clients are greeted and even spoken to on the phone by reception staff can greatly influence outcomes.

‘We are not learning from each other and from our own data,’ says John McLeod, Visiting Professor of Psychology at Oslo University and Therapy Today Research Editor. He would like to see more research within agencies and organisations delivering therapy, using their outcomes and client feedback to continuously assess and improve what they offer, in an action research cycle. ‘There is very little research that follows this cycle through. More typically, someone does a study and says the implications are such and such, and it stops there. No one does the follow-through to find out if, when they adopted the recommendations, it made a difference.’

He also agrees with Wampold: ‘It’s pretty evident from research that some therapists get much better results than others. We should be using research to look at why that is so and whether feeding that back into CPD for therapists or particular types of supervision does result in better outcomes.’ This is what Tony Rousmaniere has dubbed ‘deliberate practice’.5

Relational inquiry

Art Bohart, emeritus professor, retired clinical psychologist and author of countless books and papers on person-centred counselling, says it is the client who is the most important common factor: ‘It is the client who makes the therapy work. Mobilising their creativity and resilience may be the most important factor in therapy.’ This necessarily also embraces factors in the client’s life outside the therapy room that impact on their emotional and mental wellbeing. ‘Therapy is a complex, interactive process between the therapist and an intelligent being, who is not only being acted on by the therapist but has a life outside the therapy room, where they get ideas from all kinds of other people and influences, put them together with what they are learning in therapy and generate their own ideas. There will never be a simple relationship between what the therapist does and outcomes. The effect emerges from the interaction. We would do far better to investigate what happens in that process.’

Case studies are, he argues, much more sensitive than RCTs to the nuances of the therapeutic process: ‘Case studies may not have the rigour of an RCT, but I believe, by piecing together what went on in sessions and changes in the client, you can begin to set out a plausible relationship between them. If you do that with a number of cases, you would get a relatively strong convergence of evidence that is much more faithful to the reality of the therapy process.’ That said, he emphasises that there is still a need for RCTs: ‘It will be the convergence of case studies with RCTs that will produce the most informative insights into how psychotherapy works and whether it works at all.’

Andy Rogers objects to what he regards as the reification of research, which he thinks has become something of a ‘condition of worth’ – you can only be a ‘good therapist’ if you are reading and applying the research. So often, he says, studies end with a recommendation for further research and a list of their limitations; how is that helpful to the practitioner? He cites Dave Mearns, who wrote that what goes on in person-centred therapy – listening to clients, tracking moment by moment what is going on, following the threads of what is being communicated – is itself an act of research. ‘Counselling is a process of relational inquiry into human being; two people together exploring human distress and its meaning; empathic investigation. But these days we have to wear this new garb of the scientific, evidence-based practitioner to meet the demands of the NHS and win contracts and favour in the corridors of power. It removes us from the artfulness of therapy, the collaborative process where you are alongside someone who is fixing things themselves, and it’s OK not to know, to be a step behind the client.’

Bringing us crashing back to earth, Pim Cuijpers points out there are no hard data to support the common factors hypothesis, and we are unlikely to get them: ‘Maybe they are the key but we simply don’t know, because it requires a lot of money to do this research, and in this field, we don’t have it. To look at specific factors, you need to conduct dismantling studies, and to do that, you need at least 500 people for each component. And even then, even if you examine all these mediators, moderators and mechanisms one by one, you won’t get a definitive answer. RCTs can only show that therapies work, not how. Showing how scientifically is much more complicated. It’s the same for any other area of biomedical science.’

He says we already know the answer to the question of how best to apply therapy to achieve the greatest impact: employ larger numbers of less highly trained therapists to deliver guided self-help to as many people as possible. It sounds a lot like IAPT.

‘The overall effects of treatment with any therapy are very small. A lot of mental health problems can be solved by low-intensity therapies. That doesn’t mean some people don’t need longer treatment, but if we want to make a difference to the burden of mental ill health, we should focus on the larger numbers with common disorders, using the least costly, effective interventions, not the few with chronic illness,’ Pim Cuijpers says.

Mick Cooper wants counselling as a profession to engage much more with research. ‘A research culture tends to be missing in our field. Often, students are being taught theory developed back in the 1950s and 60s, without reference to more recent developments and findings. In some respects, that is why CBT is ahead in the horse race. CBT practitioners are often trying out new methods and learning from the evidence of what’s working. I don’t hear many CBT people say, “Beck said this in 1970 and we can’t do it any other way.”’ He says it’s ironic that therapy is all about helping people who are stuck in certain ways of being that might have been helpful at the time but are now holding them back. ‘As therapists, we can also get stuck in fixed theoretical positions that don’t evolve, and that’s stagnating. Research – both quantitative and qualitative – can be a means of helping us to get unstuck.’

Ultimately, of course, what gets researched comes down to funding and politics. There is very little funding for mental health research across the board, and those who pay the piper get to call the tune. The danger is that, as John McLeod points out, amid the jockeying for pole position in the horse race and fighting for the survival of our own schools and modalities, ‘People seem to have forgotten the whole point of therapy – to help clients who come to us get better’.


Pim Cuijpers and Robert Elliott are both keynote speakers at the 24th BACP Research Conference on 11–12 May in London.

Catherine Jackson is Editor of Therapy Today

References

1. Leichsenring F, Steinert C. Is cognitive behavioral therapy the gold standard for psychotherapy? JAMA 2017; 318(14): 1323–1324. doi:10.1001/jama.2017.13737
2. Cuijpers P, Cristea IA, Karyotaki E et al. How effective are cognitive behavior therapies for major depression and anxiety disorders? A meta-analytic update of the evidence. World Psychiatry 2016; 15(3): 245–258.
3. Wampold BE. How important are the common factors in psychotherapy? An update. World Psychiatry 2015; 14: 270–277.
4. Pybis J, Saxon D, Hill A, Barkham M. The comparative effectiveness and efficiency of cognitive behavioural therapy and generic counselling in the treatment of depression: evidence from the 2nd UK National Audit of psychological therapies. BMC Psychiatry 2017; 17: 215.
5. Rousmaniere T. Deliberate practice for psychotherapists. London: Routledge; 2016.