Assessing the Population Impact of Published Intervention Studies

Background: Despite greater spending on health care and biomedical research, the United States has poorer health outcomes than comparable nations. Information is needed on the potential impact of interventions to better guide resource allocation. Objective: To assess whether research on interventions is concentrated in areas with the greatest potential population health benefit. Design: Secondary data analysis to perform a best-case study of the potential population impact of published intervention studies. Study selection: A random sample of 20 intervention studies published in the New England Journal of Medicine in 2011. Data extraction: One reviewer extracted data using a standardized form, and another reviewer verified the data. Measurements: The incremental gain of applying the intervention versus the control, estimated in quality-adjusted life years (QALYs) at the population level. Results: Of the 20 studies, 13 had a statistically significant effect size, and 3 studies accounted for 80 percent of the total population health impact. Studies of less common conditions had smaller population health impact, though greater individual-level impact. Studies generally did not report the information required to estimate the anticipated population health impact. Limitations: The heterogeneity of outcome measures and the use of multiple data sources result in a large degree of uncertainty in the estimates. The use of an intervention effect measured in a study setting is likely to overestimate its real-world impact. Although random, the sample of studies selected here may not be representative of intervention studies in general. Conclusions: Research priorities should be heavily informed by the potential population health impact. Researchers, proposal reviewers, and funders should understand those impacts before intervention studies are initiated. We recommend that this information be uniformly included in research proposals and reports.
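The abstract does not specify how the population-level QALY gain was computed; a minimal sketch of how such a best-case estimate is typically assembled (the notation below is an assumption for illustration, not the authors' own) is:

$$\Delta \mathrm{QALY}_{\mathrm{pop}} \approx (q_I - q_C) \times N_{\mathrm{eligible}} \times u$$

where $q_I$ and $q_C$ are the expected QALYs per person under intervention and control, $N_{\mathrm{eligible}}$ is the number of people in the population to whom the intervention could apply, and $u$ is the assumed uptake fraction. Under a best-case reading, $u = 1$ and the study-setting effect $(q_I - q_C)$ is taken at face value, which is consistent with the authors' caution that real-world impact is likely overestimated.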

Common Methodological Problems in Randomized Controlled Trials of Preventive Interventions

Prevention Science, 2021

Randomized controlled trials (RCTs) are often considered the gold standard for evaluating whether intervention results support causal claims of beneficial effects. However, given that poor design and incorrect analysis may lead to biased outcomes, simply employing an RCT is not enough to say an intervention "works." This paper applies a subset of the Society for Prevention Research (SPR) Standards of Evidence for Efficacy, Effectiveness, and Scale-up Research, with a focus on internal validity (making causal inferences), to determine the degree to which RCTs of preventive interventions are well-designed and analyzed, and whether authors provide a clear description of the methods used to report their study findings. We conducted a descriptive analysis of 851 RCTs published from 2010 to 2020 and reviewed by Blueprints for Healthy Youth Development, a web-based registry of scientifically proven and scalable interventions. We used Blueprints' evaluation criteria that correspond to a subset of SPR's standards of evidence. Only 22% of the sample satisfied important criteria for minimizing biases that threaten internal validity. Overall, we identified an average of 1-2 methodological weaknesses per RCT. The most frequent sources of bias were problems related to baseline non-equivalence (i.e., differences between conditions at randomization) or differential attrition (i.e., differences between completers and attritors, or differences between study conditions, that may compromise the randomization). Additionally, over half the sample (51%) had missing or incomplete tests to rule out these potential sources of bias. Most preventive intervention RCTs need improvement in rigor to permit causal inference claims that an intervention is effective. Researchers also must improve the reporting of methods and results to permit full assessment of methodological quality. These advancements will increase the usefulness of preventive interventions by ensuring the credibility and usability of RCT findings.
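As a concrete illustration of the two checks the review found most often missing, here is a minimal sketch in Python; the function names, the simulated data, and the |SMD| > 0.1 cutoff are assumptions for illustration, not Blueprints' actual evaluation criteria:

```python
import numpy as np
from scipy import stats

def standardized_mean_difference(treatment, control):
    """Cohen's d-style SMD for a baseline covariate across study arms."""
    pooled_sd = np.sqrt((np.var(treatment, ddof=1) + np.var(control, ddof=1)) / 2)
    return (np.mean(treatment) - np.mean(control)) / pooled_sd

def differential_attrition_pvalue(completed_t, total_t, completed_c, total_c):
    """Chi-square test on the 2x2 table of attrited vs. completed by arm."""
    attrited = np.array([total_t - completed_t, total_c - completed_c])
    completed = np.array([completed_t, completed_c])
    chi2, p, _, _ = stats.chi2_contingency(np.array([attrited, completed]))
    return p

# Simulated baseline covariate with a small imbalance between arms
rng = np.random.default_rng(0)
baseline_t = rng.normal(0.0, 1.0, 200)
baseline_c = rng.normal(0.15, 1.0, 200)
smd = standardized_mean_difference(baseline_t, baseline_c)
print(f"SMD = {smd:.2f}, flagged non-equivalent: {abs(smd) > 0.1}")
print(f"Attrition p-value = {differential_attrition_pvalue(170, 200, 150, 200):.3f}")
```

In practice, such checks would be run for every baseline covariate, and a detected attrition difference would prompt sensitivity analyses rather than a simple pass/fail judgment.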

Epidemiology and reporting characteristics of overviews of reviews of healthcare interventions published 2012-2016: protocol for a systematic review

Systematic reviews, 2017

Overviews of systematic reviews (overviews) attempt to systematically retrieve and summarize the results of multiple systematic reviews (SRs) for a given condition or public health problem. Two prior descriptive analyses of overviews found substantial variation in the methodological approaches used in overviews and deficiencies in the reporting of key methodological steps. Since then, new methods have been developed, so it is timely to update the prior descriptive analyses. The objectives are to: (1) investigate the epidemiological, descriptive, and reporting characteristics of a random sample of 100 overviews published from 2012 to 2016 and (2) compare these recently published overviews (2012-2016) to those published prior to 2012 (based on the prior descriptive analyses). Medline, EMBASE, and CDSR will be searched for overviews published 2012-2016, using a validated search filter for overviews. Only overviews written in English will be included. All titles and abstracts will be screened…

Improving the reporting of public health intervention research: advancing TREND and CONSORT

Journal of Public Health, 2008

Background Evidence-based public health decision-making depends on high-quality and transparent accounts of what interventions are effective, for whom, how, and at what cost. Improving the quality of reporting of randomized and non-randomized study designs through the CONSORT and TREND statements has had a marked impact on reporting quality. However, public health users of systematic reviews have been concerned about the paucity of synthesized information on context, development and rationale, implementation processes, and sustainability factors.

Population health intervention research: what is the place for pilot studies?

Trials, 2019

Background: An international workshop on population health intervention research (PHIR) was organized to foster exchanges between experts from different disciplines and fields. Aims: This paper summarizes the discussions around one of the issues addressed: the place or role of pilot studies in PHIR. Pilot studies are well established in biomedical research, but the situation is more ambiguous in PHIR, where a pilot study can serve several different purposes. Methods: The workshop included formal presentations by participants and moderated discussions. A rapporteur produced an oral synthesis so that the key points of the discussion and the recommendations could be validated by expert consensus. All discussions were recorded and fully transcribed. Discussion: PHIR generally addresses complex interventions. Thus, numerous tasks may be required to inform the intervention and to test different aspects of its design and implementation. Whereas in clinical research the pilot study mainly concerns preparation of the trial, in PHIR the pilot study focuses on preparation of both the intervention and the trial. In particular, pilot studies in PHIR can be used for viability evaluation and theory development. Recommendations from the workshop participants: The following recommendations were generated by consensus from the workshop discussions: (i) terms need to be clarified for PHIR; (ii) reporting and publication should be standardized and transparency should be promoted; (iii) the objectives and research questions should drive the methods used and be clearly stated; (iv) a pilot study is generally needed for complex intervention evaluation and for research-designed programs; and (v) for field-designed programs, it is important to integrate evaluability assessments as pilot studies. Conclusion: Pilot studies play an important role in intervention development and evaluation. In particular, they contribute to a better understanding of an intervention's mechanisms and the conditions of its applicability and transferability. Pilot studies can therefore facilitate evidence-based decisions about the design and conduct of main studies aimed at generating evidence to inform public health policy.
