The Evidence Project risk of bias tool: assessing study rigor for both randomized and non-randomized intervention studies

Caitlin E Kennedy et al. Syst Rev. 2019.

Abstract

Background: Different tools exist for assessing risk of bias of intervention studies for systematic reviews. We present a tool for assessing risk of bias across both randomized and non-randomized study designs. The tool was developed by the Evidence Project, which conducts systematic reviews and meta-analyses of behavioral interventions for HIV in low- and middle-income countries.

Methods: We present the eight items of the tool and describe considerations for each item and for the tool as a whole. We then evaluate the tool's reliability by presenting inter-rater reliability for 125 studies selected from seven published reviews, calculating a kappa for each individual item and a weighted kappa for the total count of items.
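The paper's analysis was not published as code; the sketch below shows, on hypothetical rating data, how statistics of the kind reported here could be computed with scikit-learn's cohen_kappa_score. The abstract does not state the weighting scheme used for the weighted kappa; linear weights are assumed as one common choice.

```python
# A minimal sketch (not the authors' code) of the reliability analysis
# described above: Cohen's kappa for one binary item, and a weighted
# kappa on the total count of items endorsed. All ratings below are
# hypothetical.
from sklearn.metrics import cohen_kappa_score

# Two raters' judgments on a single item across studies (1 = yes, 0 = no)
rater_a = [1, 0, 1, 1, 0, 1, 0, 1]
rater_b = [1, 0, 1, 0, 0, 1, 0, 1]
item_kappa = cohen_kappa_score(rater_a, rater_b)

# Total items endorsed (0-8) per study, per rater. A weighted kappa
# penalizes large disagreements more heavily than near-misses; linear
# weights are an assumption, as the abstract does not state the scheme.
totals_a = [6, 3, 7, 5, 2, 8]
totals_b = [6, 4, 7, 5, 3, 8]
weighted_kappa = cohen_kappa_score(totals_a, totals_b, weights="linear")

print(f"item kappa = {item_kappa:.2f}, weighted kappa = {weighted_kappa:.2f}")
```

On the conventional Landis and Koch bands, kappa values of 0.41-0.60 are moderate and 0.61-0.80 substantial, which matches the characterization in the results below.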

Results: The tool includes eight items, each of which is rated as being present (yes) or not present (no) and, for some items, not applicable or not reported. The items include (1) cohort, (2) control or comparison group, (3) pre-post intervention data, (4) random assignment of participants to the intervention, (5) random selection of participants for assessment, (6) follow-up rate of 80% or more, (7) comparison groups equivalent on sociodemographics, and (8) comparison groups equivalent at baseline on outcome measures. Together, items (1)-(3) summarize the study design, while the remaining items consider other common elements of study rigor. Inter-rater reliability was moderate to substantial for all items, ranging from 0.41 to 0.80 (median κ = 0.66). Agreement between raters on the total count of items endorsed was also substantial (κw = 0.66).
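For concreteness, here is a sketch of how a single study's ratings on the eight items might be encoded and summed into the total count of items endorsed. The identifiers are hypothetical shorthand for the items listed above, not names defined by the paper.

```python
# A sketch (hypothetical names) of one study's assessment with the
# eight-item tool and the resulting total count of items endorsed.
YES, NO, NA, NR = "yes", "no", "not applicable", "not reported"

ITEMS = (
    "cohort",                               # (1) study design elements
    "control_or_comparison_group",          # (2)
    "pre_post_intervention_data",           # (3)
    "random_assignment_to_intervention",    # (4) other rigor elements
    "random_selection_for_assessment",      # (5)
    "follow_up_rate_80_percent_or_more",    # (6)
    "groups_equivalent_sociodemographics",  # (7)
    "groups_equivalent_baseline_outcomes",  # (8)
)

def total_endorsed(ratings: dict) -> int:
    """Count items rated 'yes'. How 'not applicable' and 'not reported'
    affect the total is not specified in the abstract; this sketch
    simply leaves them out of the count."""
    return sum(1 for item in ITEMS if ratings.get(item) == YES)

# Example: a study meeting the first five items but not the last three
example = {item: YES for item in ITEMS[:5]} | {item: NO for item in ITEMS[5:]}
print(total_endorsed(example))  # -> 5
```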

Conclusions: Strengths of the tool include its applicability to a range of study designs, from randomized trials to various types of observational and quasi-experimental studies. It is relatively easy to use and interpret and can be applied to a range of review topics without adaptation, facilitating comparability across reviews. Limitations include the omission of some potentially relevant items measured by other tools and potential threats to the validity of some items. To date, the tool has been applied in over 30 reviews. We believe it is a practical option for assessing risk of bias in systematic reviews of interventions that include a range of study designs.

Keywords: Critical appraisal; Quality assessment; Rigor assessment; Rigor score; Risk of bias; Study quality; Study rigor.

Authors’ information

CEK is an Associate Professor and Director of the Social and Behavioral Interventions Program, Department of International Health, Johns Hopkins Bloomberg School of Public Health. She currently serves as Co-Investigator for the Evidence Project.

VAF is an Assistant Professor in the Department of Psychiatry and Behavioral Sciences at the Medical University of South Carolina. She currently serves as Co-Investigator for the Evidence Project.

KSA is a Statistician in the Department of Psychiatry and Behavioral Sciences at the Medical University of South Carolina. He currently serves as statistician for the Evidence Project.

JAD is an Associate Professor in the Social and Behavioral Interventions Program, Department of International Health, Johns Hopkins Bloomberg School of Public Health. She helped develop the risk of bias tool as the original study coordinator for the Evidence Project.

PTY is a Research Associate in the Social and Behavioral Interventions Program, Department of International Health, Johns Hopkins Bloomberg School of Public Health. She currently serves as study coordinator for the Evidence Project.

KRO is a Clinical Associate Professor in the Department of Psychiatry and Behavioral Sciences at the Medical University of South Carolina. He jointly founded the Evidence Project and currently serves as Co-Investigator.

MDS is a Professor in the Department of Psychiatry and Behavioral Sciences at the Medical University of South Carolina. He jointly founded the Evidence Project and currently serves as Principal Investigator.

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
