Towards Alternative Criteria for the Validation of Psychological Treatments
Related papers
Methodological Recommendations for Trials of Psychological Interventions
Psychotherapy and psychosomatics, 2018
Recent years have seen major developments in psychotherapy research that suggest the need to address critical methodological issues. These recommendations, developed by an international group of researchers, do not replace those for randomized controlled trials, but rather supplement strategies that need to be taken into account when considering psychological treatments. The limitations of traditional taxonomy and assessment methods are outlined, with suggestions for consideration of staging methods. Active psychotherapy control groups are recommended, and adaptive and dismantling study designs offer important opportunities. The treatments that are used, and particularly their specific ingredients, need to be described in detail for both the experimental and the control groups. Assessment should be performed blind before and after treatment and at long-term follow-up. A combination of observer- and self-rated measures is recommended. Side effects of psychotherapy should be evaluated...
Empirically Supported Psychological Treatments
Journal of Nervous and Mental Disease, 2014
Clear and transparent standards are required to establish whether a therapeutic method is "evidence based." Even when research demonstrates a method to be efficacious, it may not become available to patients who could benefit from it, a phenomenon known as the "translational gap." Only 30% of therapies cross the gap, and the lag between empirical validation and clinical implementation averages 17 years. To address these problems, Division 12 of the American Psychological Association published a set of standards for "empirically supported treatments" in the mid-1990s that allows the assessment of clinical modalities. This article reviews these criteria, identifies their strengths, and discusses their impact on the translational gap, using the development of a clinical innovation called Emotional Freedom Techniques (EFT) as a case study. Twelve specific recommendations for updates of the Division 12 criteria are made based on lessons garnered from the adoption of EFT within the clinical community. These recommendations would shorten the cycle from the research setting to clinical practice, increase transparency, incorporate recent scientific advances, and enhance the capacity for succinct comparisons among treatments.
Empirically supported psychological therapies
Journal of Consulting and Clinical Psychology, 1998
This article introduces the special section of the Journal of Consulting and Clinical Psychology on empirically supported psychological therapies. After a discussion of the rationale for the selection of the specific terms in the label, several justifications are considered for conducting and learning from empirical evaluations of psychological therapies. Finally, the process that guided the special section is described.
Effects of psychological therapies in randomized trials and practice-based studies
British Journal of Clinical Psychology, 2008
Background. Randomized trials of the effects of psychological therapies seek internal validity via homogeneous samples and standardized treatment protocols. In contrast, practice-based studies aim for clinical realism and external validity via heterogeneous samples of clients treated under routine practice conditions. We compared indices of treatment effects in these two types of studies.
The CPA Presidential Task Force on Evidence-Based Practice of Psychological Treatments
Canadian Psychology / Psychologie canadienne, 2014
The Board of Directors of the Canadian Psychological Association (CPA) launched a Task Force on Evidence-Based Practice of Psychological Treatments to support and guide practice as well as to inform stakeholders. This article describes the work of this Task Force, outlining its raison d'etre, providing a comprehensive definition of evidence-based practice (EBP), and advancing a hierarchy of evidence that is respectful of diverse research methodologies, palatable to different groups, and yet comprehensive and compelling. The primary objective was to present an overarching methodology or approach to thinking about EBP so that psychologists can provide and implement the best possible psychological treatments. To this end, our intention for this document was to provide a set of guidelines and standards that will foster interest, encourage development, and promote effectiveness in EBP.
Empirically supported treatments (or therapies; ESTs) are the gold standard in therapeutic interventions for psychopathology. Based on a set of methodological and statistical criteria, the APA has assigned particular treatment-diagnosis combinations EST status and has further rated their empirical support as Strong, Modest, and/or Controversial. Emerging concerns about the replicability of research findings in clinical psychology highlight the need to critically examine the evidential value of EST research. We therefore conducted a meta-scientific review of the EST literature, using clinical trials reported in an existing online APA database of ESTs, and a set of novel evidential value metrics (i.e., rates of misreported statistics, statistical power, R-Index, and Bayes Factors). Our analyses indicated that power and replicability estimates were concerningly low across almost all ESTs, and individually, some ESTs scored poorly across multiple metrics, with Strong ESTs failing to consistently outperform their Modest counterparts. Lastly, we found evidence of improvements over time in statistical power within the EST literature, but not for the strength of evidence of EST efficacy. We describe the implications of our findings for practicing psychotherapists and offer recommendations for improving the evidential value of EST research moving forward. General Scientific Summary: This review suggests that although the underlying evidence for a small number of empirically supported therapies is consistently strong across a range of metrics, the evidence is mixed or consistently weak for many, including some classified by Division 12 of the APA as "Strong." Data, analysis code, supplementary material: https://osf.io/73drs/
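To make two of the evidential value metrics named in this abstract concrete, the following is a minimal sketch (not the authors' analysis code) of how post-hoc "observed" power for a two-sided t test and the R-Index, under one common formulation (median observed power minus inflation, where inflation is the success rate minus median observed power), could be computed from reported t statistics. The trial results in the example are invented for illustration only.

```python
# Illustrative sketch: observed power and R-Index from reported t statistics.
# The input values are hypothetical, not taken from the EST database.
import numpy as np
from scipy.stats import t as t_dist, nct

def observed_power(t_obs, df, alpha=0.05):
    """Post-hoc power of a two-sided t test, treating the observed t
    statistic as the noncentrality parameter."""
    t_crit = t_dist.ppf(1 - alpha / 2, df)
    return nct.sf(t_crit, df, nc=t_obs) + nct.cdf(-t_crit, df, nc=t_obs)

def r_index(t_values, dfs, alpha=0.05):
    """R-Index = median observed power - inflation,
    where inflation = success rate - median observed power."""
    powers = [observed_power(t, df, alpha) for t, df in zip(t_values, dfs)]
    successes = [abs(t) > t_dist.ppf(1 - alpha / 2, df)
                 for t, df in zip(t_values, dfs)]
    median_power = np.median(powers)
    inflation = np.mean(successes) - median_power
    return median_power - inflation

# Hypothetical trial results: (t statistic, degrees of freedom)
trials = [(2.10, 38), (2.45, 52), (1.98, 30), (2.75, 60)]
print(r_index([t for t, _ in trials], [df for _, df in trials]))
```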
Canadian Journal of Behavioural Science / Revue canadienne des sciences du comportement, 2013
In behavioral science research there is often the need to determine whether an outcome variable differs, or is equivalent, across groups. Significance tests are the most commonly applied data analysis method for this type of question. The purpose of this study was to examine how statistical tests for equivalence and difference have been applied to compare clinical interventions. Peer-reviewed journal articles that made treatment comparisons were examined. For each study, the primary hypothesis, statistical test usage, and the stated conclusion were recorded. Of the 270 studies investigated, 54.4% inappropriately made equivalence-based conclusions from difference-based test statistics (e.g., t test, ANOVA). Significance tests are often applied as a matter of course regardless of the research question; difference tests in particular are favored and are applied both to examine differences and, inappropriately, to examine equivalence. We discuss our findings and provide resources for researchers who want to statistically evaluate between-groups equivalence.
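As an illustration of the distinction this abstract draws, the sketch below (not taken from the study) contrasts a standard difference test with a two one-sided tests (TOST) equivalence procedure on simulated data. The equivalence margin of d = 0.4 and the simulated group parameters are arbitrary assumptions for the example.

```python
# Illustrative sketch: difference test vs. TOST equivalence test on simulated data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(loc=0.0, scale=1.0, size=40)  # simulated outcomes, treatment A
group_b = rng.normal(loc=0.2, scale=1.0, size=40)  # simulated outcomes, treatment B

# Difference test: a nonsignificant p here does NOT license a conclusion of equivalence.
_, p_difference = stats.ttest_ind(group_a, group_b)

# TOST: conclude equivalence only if the mean difference is significantly
# greater than -margin AND significantly smaller than +margin.
n_a, n_b = len(group_a), len(group_b)
df = n_a + n_b - 2
pooled_var = ((n_a - 1) * group_a.var(ddof=1) + (n_b - 1) * group_b.var(ddof=1)) / df
se = np.sqrt(pooled_var * (1 / n_a + 1 / n_b))
margin = 0.4 * np.sqrt(pooled_var)  # equivalence bound of d = 0.4 (assumed for illustration)
diff = group_a.mean() - group_b.mean()

p_lower = stats.t.sf((diff + margin) / se, df)   # H0: diff <= -margin
p_upper = stats.t.cdf((diff - margin) / se, df)  # H0: diff >= +margin
p_equivalence = max(p_lower, p_upper)

print(f"difference test p = {p_difference:.3f}; TOST equivalence p = {p_equivalence:.3f}")
```

The point of the contrast is that the two procedures answer different questions: failing to reject the null in the difference test says nothing about whether the groups are practically equivalent, whereas the TOST rejects non-equivalence only when the observed difference lies significantly inside the prespecified margin.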