Alternatives to randomisation in the evaluation of public health interventions: design challenges and solutions
Related papers
Research Synthesis Methods, 2012
Non-randomized studies may provide valuable evidence on the effects of interventions. They are the main source of evidence on the intended effects of some types of interventions and often provide the only evidence about the effects of interventions on long-term outcomes, rare events or adverse effects. Therefore, systematic reviews on the effects of interventions may include various types of non-randomized studies. In this second paper in a series, we address how review authors might articulate the particular non-randomized study designs they will include and how they might evaluate, in general terms, the extent to which a particular non-randomized study is at risk of important biases. We offer guidance for describing and classifying different non-randomized designs based on specific features of the studies, rather than relying on uninformative study design labels. We also suggest criteria to consider when deciding whether to include non-randomized studies. We conclude that a taxonomy of study designs based on study design features is needed. Review authors also need new tools to assess the risk of bias in non-randomized designs whose inferential logic differs from that of parallel group trials.
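The abstract's central proposal, classifying non-randomized studies by their design features rather than by conventional labels, can be made concrete with a small sketch. The features and descriptive labels below are illustrative assumptions, not the taxonomy the authors actually propose:

```python
# A minimal sketch of feature-based study classification (the features
# and labels here are illustrative, not the authors' actual taxonomy).
from dataclasses import dataclass

@dataclass
class StudyFeatures:
    allocation_by_investigator: bool   # did researchers assign exposure?
    concurrent_comparison_group: bool  # comparison group observed at the same time?
    pre_intervention_measure: bool     # outcome measured before the intervention?
    post_intervention_measure: bool    # outcome measured after the intervention?

def describe_design(f: StudyFeatures) -> str:
    """Build a descriptive label from design features instead of a study-type name."""
    parts = []
    parts.append("investigator-allocated" if f.allocation_by_investigator
                 else "self-selected or administratively allocated")
    parts.append("concurrent control" if f.concurrent_comparison_group
                 else "historical or no control")
    if f.pre_intervention_measure and f.post_intervention_measure:
        parts.append("before-after outcome measurement")
    elif f.post_intervention_measure:
        parts.append("post-only outcome measurement")
    return ", ".join(parts)

print(describe_design(StudyFeatures(False, True, True, True)))
# -> self-selected or administratively allocated, concurrent control,
#    before-after outcome measurement
```

The point of the feature-based description is that two studies both labelled "cohort study" can differ on every one of these features, which is exactly the information a risk-of-bias assessment needs.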
Common Methodological Problems in Randomized Controlled Trials of Preventive Interventions
Prevention Science, 2021
Randomized controlled trials (RCTs) are often considered the gold standard in evaluating whether intervention results are in line with causal claims of beneficial effects. However, given that poor design and incorrect analysis may lead to biased outcomes, simply employing an RCT is not enough to say an intervention "works." This paper applies a subset of the Society for Prevention Research (SPR) Standards of Evidence for Efficacy, Effectiveness, and Scale-up Research, with a focus on internal validity (making causal inferences), to determine the degree to which RCTs of preventive interventions are well-designed and analyzed, and whether authors provide a clear description of the methods used to report their study findings. We conducted a descriptive analysis of 851 RCTs published from 2010 to 2020 and reviewed by the Blueprints for Healthy Youth Development web-based registry of scientifically proven and scalable interventions. We used Blueprints' evaluation criteria that correspond to a subset of SPR's standards of evidence. Only 22% of the sample satisfied important criteria for minimizing biases that threaten internal validity. Overall, we identified an average of 1-2 methodological weaknesses per RCT. The most frequent sources of bias were problems related to baseline non-equivalence (i.e., differences between conditions at randomization) or differential attrition (i.e., differences between completers and attritors, or differences between study conditions, that may compromise the randomization). Additionally, over half the sample (51%) had missing or incomplete tests to rule out these potential sources of bias. Most preventive intervention RCTs need improvement in rigor to permit causal inference claims that an intervention is effective. Researchers also must improve reporting of methods and results to fully assess methodological quality. These advancements will increase the usefulness of preventive interventions by ensuring the credibility and usability of RCT findings.
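To make the two most frequent sources of bias concrete, here is a minimal sketch of how baseline equivalence and differential attrition might be checked. The 0.25 SMD and 10-percentage-point flags are illustrative assumptions for this sketch, not Blueprints' actual evaluation criteria:

```python
# A minimal sketch of two routine checks flagged in the review: baseline
# equivalence (standardized mean difference between arms at randomization)
# and differential attrition (difference in follow-up rates between arms).
# The thresholds below are illustrative, not Blueprints' actual cutoffs.
import numpy as np

def standardized_mean_difference(treat: np.ndarray, control: np.ndarray) -> float:
    """Cohen's d-style SMD using the pooled standard deviation."""
    n1, n2 = len(treat), len(control)
    pooled_sd = np.sqrt(((n1 - 1) * treat.var(ddof=1) +
                         (n2 - 1) * control.var(ddof=1)) / (n1 + n2 - 2))
    return (treat.mean() - control.mean()) / pooled_sd

def attrition_gap(randomized: dict, completed: dict) -> float:
    """Absolute difference in follow-up rates between study arms."""
    rate = {arm: completed[arm] / randomized[arm] for arm in randomized}
    return abs(rate["treatment"] - rate["control"])

rng = np.random.default_rng(0)
baseline_t = rng.normal(50, 10, 200)   # simulated baseline scores, treatment arm
baseline_c = rng.normal(50, 10, 200)   # simulated baseline scores, control arm

smd = standardized_mean_difference(baseline_t, baseline_c)
gap = attrition_gap({"treatment": 200, "control": 200},
                    {"treatment": 170, "control": 188})

print(f"baseline SMD: {smd:.3f} (flag if |SMD| > 0.25)")
print(f"attrition gap: {gap:.1%} (flag if > 10 percentage points)")
```

Reporting both statistics, rather than only a significance test, is one way to address the incomplete reporting the review found in over half the sample.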
Strengthening causal inference from randomised controlled trials of complex interventions
BMJ Global Health, 2022
Researchers conducting randomised controlled trials (RCTs) of complex interventions face design and analytical challenges that are not fully addressed in existing guidelines. Further guidance is needed to help ensure that these trials of complex interventions are conducted to the highest scientific standards while maximising the evidence that can be extracted from each trial. The key challenge is how to manage the multiplicity of outcomes required for the trial while minimising false positive and false negative findings. To address this challenge, we formulate three principles for conducting RCTs: (1) outcomes chosen should be driven by the intent and programme theory of the intervention and should thus be linked to testable hypotheses; (2) outcomes should be adequately powered; and (3) researchers must be explicit and fully transparent about all outcomes and hypotheses before the trial is started and when the results are reported. Multiplicity in trials of complex interventions should be managed through careful planning and interpretation rather than through post hoc analytical adjustment. For trials of complex interventions, the distinction between primary and secondary outcomes as defined in current guidelines does not adequately protect against false positive and false negative findings. Primary outcomes should be defined as outcomes that are relevant based on the intervention intent and programme theory, declared (ie, registered), and adequately powered. The possibility of confirmatory causal inference is limited to these outcomes. All other outcomes (undeclared and/or inadequately powered) are secondary, and inference on these outcomes will be exploratory.
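The distinction between adequately powered (confirmatory) and underpowered (exploratory) outcomes can be illustrated with a pre-trial power check across all declared outcomes. The outcome names and effect sizes below are hypothetical, and the normal-approximation formula is one standard choice, not the authors' prescribed method:

```python
# A minimal sketch of checking, before the trial, which declared outcomes
# are adequately powered (two-sided, two-sample z-test approximation).
# Outcome names and standardized effect sizes are hypothetical.
from scipy.stats import norm

def power_two_sample(effect_size: float, n_per_arm: int, alpha: float = 0.05) -> float:
    """Power of a two-sided two-sample z-test for a standardized effect size."""
    z_crit = norm.ppf(1 - alpha / 2)
    ncp = effect_size * (n_per_arm / 2) ** 0.5  # noncentrality: d / sqrt(2/n)
    return 1 - norm.cdf(z_crit - ncp) + norm.cdf(-z_crit - ncp)

declared_outcomes = {                      # hypothetical programme-theory outcomes
    "vaccination uptake":  0.30,
    "care-seeking":        0.20,
    "knowledge score":     0.10,
}
n_per_arm = 250
for name, d in declared_outcomes.items():
    p = power_two_sample(d, n_per_arm)
    status = "confirmatory (adequately powered)" if p >= 0.80 else "exploratory only"
    print(f"{name}: power = {p:.2f} -> {status}")
```

Under these assumptions only the first outcome clears the conventional 80% power bar, so under the authors' principles it alone would support confirmatory causal inference; the other two would be declared but interpreted as exploratory.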
Adaptive Designs for Randomized Trials in Public Health
Annual Review of Public Health, 2009
In this article, we discuss two general ways in which the traditional randomized trial can be modified or adapted in response to the data being collected. We use the term adaptive design to refer to a trial in which characteristics of the study itself, such as the proportion assigned to active intervention versus control, change during the trial in response to the data being collected. The term adaptive sequence of trials refers to a decision-making process that fundamentally informs the conceptualization and conduct of each new trial with the results of previous trials. We investigate the utility of these two types of adaptation for public health evaluations and provide examples to illustrate how adaptation can be used in practice. From these case studies, we discuss whether such evaluations can or should be analyzed as if they were formal randomized trials, and we discuss practical as well as ethical issues arising in the conduct of these new-generation trials.
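One concrete instance of an adaptive design in the first sense, changing the allocation proportion in response to accumulating data, is response-adaptive randomization. The Thompson-sampling rule sketched below is an illustrative assumption, not a method the authors prescribe:

```python
# A minimal sketch of response-adaptive randomization via Thompson
# sampling on a binary outcome: the allocation proportion shifts toward
# the better-performing arm as data accumulate. The specific rule and
# the true response rates are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)
successes = np.array([0, 0])   # [control, intervention]
failures  = np.array([0, 0])
true_rates = [0.30, 0.45]      # hypothetical true response rates

for participant in range(500):
    # Draw a plausible response rate for each arm from its Beta posterior
    # (uniform Beta(1, 1) prior) and assign the participant to the arm
    # with the highest sampled rate.
    sampled = rng.beta(successes + 1, failures + 1)
    arm = int(np.argmax(sampled))
    outcome = rng.random() < true_rates[arm]   # simulated binary response
    successes[arm] += outcome
    failures[arm] += 1 - outcome

total = successes + failures
print(f"allocation: control = {total[0]}, intervention = {total[1]}")
print(f"observed response rates: {successes / total}")
```

Running this, most participants end up in the better-performing arm, which illustrates both the ethical appeal of such designs and the analytical question the article raises: whether data generated under a shifting allocation rule can be analyzed as if they came from a fixed-ratio randomized trial.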