Beyond the Specific Factors vs. Common Factors Debate in Psychotherapy Research: (Re)-Considering a Contextual Model
The difficulties of demonstrating that any specific form of psychotherapy is more effective than any other have led to the formulation of the so-called Dodo Bird Verdict (that all forms of therapy are equally effective) and to the suggestion that what really matters for therapeutic efficacy are factors that are common to different forms of therapy. The term "common factors", however, is seldom defined in an unambiguous way. In this paper, two different models of "common factors" are differentiated and their implications compared. The first model is referred to as the Relational-Procedural Persuasion (RPP) model and is primarily based on the writings of Frank and Wampold; according to this model, effective psychotherapy requires a good therapeutic relationship, a specified therapeutic procedure, and a rhetorically skillful psychotherapist who persuades the client of a new explanation that provides new perspectives and meanings in life. The contents of these procedures and perspectives, however, are less important: according to this model, the treatment procedures benefit the client because of the meaning attributed to them rather than because of their specific nature. The other model, the Methodological Principles and Skills (MPS) model, is based on the assumption that effective psychotherapy relies on common methodological principles that are instantiated in various ways in different forms of psychotherapy, and on the therapist's capacity to apply these principles skillfully. According to this model, method matters, and it is possible to improve existing methods. Whereas the MPS model carries a hope for the improvement of psychotherapy, the RPP model implies a more pessimistic view of psychotherapy as forever bound by the limits of the Dodo Bird Verdict. It is concluded that psychotherapy research may benefit from using the MPS model as a working hypothesis, but that a comprehensive model of common factors in psychotherapy also needs to integrate important insights from the RPP model, as well as an understanding of the structural characteristics that psychotherapy shares with other kinds of social interaction.
Related papers
Psychotherapy research is known for its pursuit of Evidence-Based Treatment (EBT). Psychotherapeutic efficacy is assessed by calculating aggregated differences between pre-treatment and post-treatment symptom levels. As this 'gold standard methodology' is regarded as 'procedurally objective', the efficacy number that results from the procedure is taken as a valid indicator of treatment efficacy. However, I argue that the assumption of procedural objectivity is not justified, as the methodology is built upon a problematic numerical basis. I use an empirical case study to show (1) how measurement problems practically occur in the first step of data collection, i.e. in individual symptom measurement. These problems have been discussed and acknowledged for decades, yet measurement is still regarded as the best epistemic means of gaining evidence on psychotherapeutic efficacy. Therefore, I show (2) how initial measurement problems are overlooked in the remainder of the methodological procedure, which harms the 'evidence base' of psychotherapeutic EBT. Via this applied analysis, I give empirical substance to concerns that are increasingly raised in the literature, so as to emphasize the need for a non-idealized consideration of the 'gold standard methodology' as a means towards its clinical end.
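As a purely illustrative sketch of the kind of aggregation this abstract refers to, the efficacy number in question is often a standardized mean pre-post difference of roughly the following form; the scores, the scale, and the particular effect-size formula below are assumptions chosen for illustration, not taken from the paper:

    # Illustrative only: hypothetical symptom scores on an arbitrary scale.
    # The standardized mean change (mean pre-post difference divided by the
    # standard deviation of the differences) is one common way such an
    # aggregated efficacy number is computed; other effect-size variants exist.
    from statistics import mean, stdev

    pre_scores  = [24, 31, 19, 27, 22, 30]   # hypothetical pre-treatment symptom levels
    post_scores = [15, 22, 18, 16, 14, 25]   # hypothetical post-treatment symptom levels

    differences = [pre - post for pre, post in zip(pre_scores, post_scores)]
    effect_size = mean(differences) / stdev(differences)  # standardized mean change

    print(f"mean symptom reduction: {mean(differences):.2f}")
    print(f"standardized mean change (effect size): {effect_size:.2f}")

A real trial would of course aggregate over treatment and control arms and use a validated symptom scale; the sketch is only meant to make concrete what 'aggregated differences between pre-treatment and post-treatment symptom levels' amounts to in practice.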
How do we find out what works for whom? Evaluating the efficacy and effectiveness of psychotherapy
Randomized controlled clinical trials of psychotherapy have traditionally been used to test the efficacy of specific forms of psychotherapy for specific problems. The value of findings from such efficacy studies for practicing psychotherapists has been questioned because these studies involve clients and therapy procedures that differ radically from those typically encountered in routine clinical practice. Opponents of efficacy research have proposed health-service-based effectiveness research as a more valuable alternative to efficacy research. Arguments for and against rigorously controlled efficacy research on the one hand, and ‘real-world’ effectiveness research on the other, are explored in this paper.