Methods for Developing Evidence Reviews in Short Periods of Time: A Scoping Review

Correction: Methods for Developing Evidence Reviews in Short Periods of Time: A Scoping Review

PLoS ONE, 2017

Introduction: Rapid reviews (RR), using abbreviated systematic review (SR) methods, are becoming more popular among decision-makers. This World Health Organization commissioned study sought to summarize RR methods, identify differences, and highlight potential biases between RR and SR. Methods: Review of RR methods (Key Question 1 [KQ1]), meta-epidemiologic studies comparing the reliability/validity of RR and SR methods (KQ2), and their potential associated biases (KQ3). We searched Medline, EMBASE, the Cochrane Library, and grey literature; checked reference lists; used personal contacts; and used crowdsourcing (e.g. email listservs). Selection and data extraction were conducted by one reviewer (KQ1) or by two reviewers independently (KQ2-3). Results: Across all KQs, we identified 42,743 citations through the literature searches. KQ1: RR methods from 29 organizations were reviewed. There was no consensus on which aspects of the SR process to abbreviate. KQ2: Studies comparing the conclusions of RR and SR (n = 9) found them to be generally similar. Where major differences were identified, they were attributed to the inclusion of evidence from different sources (e.g. searching different databases or including different study designs). KQ3: Potential biases introduced into the review process were well identified, although not necessarily supported by empirical evidence, and focused mainly on selective outcome reporting and publication biases.

Expediting systematic reviews: methods and implications of rapid reviews

Implementation …, 2010

Background: Policy makers and others often require synthesis of knowledge in an area within six months or less. Traditional systematic reviews typically take at least 12 months to conduct. Rapid reviews streamline traditional systematic review methods in order to synthesize evidence within a shortened timeframe. There is great variation in the process of conducting rapid reviews. This review sought to examine methods used for rapid reviews, as well as implications of methodological streamlining in terms of rigour, bias, and results.

Appraising systematic reviews: a comprehensive guide to ensuring validity and reliability

Systematic reviews play a crucial role in evidence-based practice as they consolidate research findings to inform decision-making. However, it is essential to assess the quality of systematic reviews to prevent biased or inaccurate conclusions. This paper underscores the importance of adhering to recognized guidelines, such as the PRISMA statement and the Cochrane Handbook. These recommendations advocate for systematic approaches and emphasize the documentation of critical components, including the search strategy and study selection. A thorough evaluation of methodologies, research quality, and overall evidence strength is essential during the appraisal process. Identifying potential sources of bias and review limitations, such as selective reporting or trial heterogeneity, is facilitated by tools such as the Cochrane Risk of Bias tool and the AMSTAR 2 checklist. Constructing robust reviews also depends on formulating clear research questions and employing appropriate search strategies before assessing the included studies. Relevance and bias reduction are ensured through meticulous selection of inclusion and exclusion criteria. Accurate data synthesis, including appropriate data extraction and analysis, is necessary for drawing reliable conclusions. Meta-analysis, a statistical method for aggregating trial findings, improves the precision of treatment effect estimates. Systematic reviews should also address biases, disclose conflicts of interest, and acknowledge review and methodological limitations. This paper aims to enhance the reliability of systematic reviews, ultimately improving decision-making in healthcare, public policy, and other domains. It provides academics, practitioners, and policymakers with a comprehensive understanding of the evaluation process, empowering them to make well-informed decisions based on robust data.
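As a minimal illustration of the inverse-variance pooling that underlies a fixed-effect meta-analysis (a sketch only; the effect sizes and standard errors below are made-up placeholders, not data from any of the papers summarised here):

```python
import math

# Hypothetical per-trial effect estimates (e.g. log odds ratios) and standard errors.
effects = [0.30, 0.10, 0.25]
std_errors = [0.15, 0.20, 0.10]

# Fixed-effect (inverse-variance) pooling: weight each trial by 1 / SE^2.
weights = [1.0 / se**2 for se in std_errors]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

# 95% confidence interval for the pooled estimate.
low, high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled effect = {pooled:.3f} (95% CI {low:.3f} to {high:.3f})")
```

Weighting each trial by the inverse of its variance is what gives the pooled estimate the narrower confidence interval, i.e. the improved precision the abstract refers to.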

The Systematic Review Data Repository (SRDR): descriptive characteristics of publicly available data and opportunities for research

Systematic Reviews, 2019

Background Conducting systematic reviews (“reviews”) requires a great deal of effort and resources. Making data extracted during reviews available publicly could offer many benefits, including reducing unnecessary duplication of effort, standardizing data, supporting analyses to address secondary research questions, and facilitating methodologic research. Funded by the US Agency for Healthcare Research and Quality (AHRQ), the Systematic Review Data Repository (SRDR) is a free, web-based, open-source data management and archival platform for reviews. Our specific objectives in this paper are to describe (1) the current extent of usage of SRDR and (2) the characteristics of all projects with publicly available data on the SRDR website. Methods We examined all projects with data made publicly available through SRDR as of November 12, 2019. We extracted information about the characteristics of these projects. Two investigators extracted and verified the data. Results SRDR has had 2552 ...
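As an illustration only, assuming a hypothetical CSV export of project metadata (the file name and column names below are made up and are not SRDR's actual schema), a characterisation like the one reported here could be tabulated as follows:

```python
import csv
from collections import Counter

# Hypothetical export: one row per SRDR project, with an assumed 'topic_area'
# column and a 'public' flag. These fields are illustrative placeholders.
with open("srdr_projects.csv", newline="") as f:
    projects = [row for row in csv.DictReader(f) if row.get("public") == "yes"]

by_topic = Counter(row["topic_area"] for row in projects)
print(f"{len(projects)} public projects")
for topic, count in by_topic.most_common():
    print(f"  {topic}: {count}")
```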

Systematic Reviews to Support Evidence‐based Medicine by Khalid Khan, Regina Kunz, Jos Kleijnen and Gerd Antes: A Review

Research Synthesis …, 2013

In many scientific disciplines, systematic reviews and meta-analyses are increasingly indispensable as summaries of the evidence in relation to a particular phenomenon, making it ever more important for scientists to know how best to review evidence. Khan, Kunz, Kleijnen, and Antes's Systematic Reviews to Support Evidence-Based Medicine (2nd edition, 2011, CRC Press, ISBN-13: 9781853157943) provides a fine, brief introduction to the subject, in particular for those in medicine and public health. Our review details strengths of this book, such as its conciseness and clarity and its close match to current conventions in systematic reviewing. We also discuss nuances of the subject that might augment a future edition or lead readers to other resources. In doing so, we also address how present practices in systematic reviewing might be improved.

Adherence to Preferred Reporting Items for Systematic Reviews and Meta-analysis Protocols (PRISMA-P) guidelines: A cross-sectional analysis from Medical Databases

Aims: Guidelines have been designed to help prepare high-quality systematic review and meta-analysis reports that provide rational, concise summaries of elaborate and complex clinical trial data. The latest update to these guidelines is PRISMA-P 2015, whose acceptance relative to the older PRISMA 2009 statement we sought to assess. Methods: We studied 287 articles from 143 journals listed in PubMed and sorted them on the basis of inclusion and exclusion criteria to determine how many articles published in 2015 followed the latest PRISMA-P checklist. Results: Of the 287 articles, 208 relevant articles were selected, of which 182 (87.5%) followed the old PRISMA 2009 statement, 4 (1.9%) did not follow any PRISMA guideline, and 14 (6.7%) followed it partially. Only 8 (3.8%) of the articles published after February 2015 followed the updated PRISMA-P statement. Conclusion: The results of the present study suggest probable apprehension among authors towards the PRISMA-P 2015 statement.
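For reference, the reported percentages are consistent with a denominator of 208 relevant articles; a quick arithmetic check (ours, not the authors' code):

```python
# Reported counts from the abstract, out of 208 relevant articles.
counts = {
    "PRISMA 2009": 182,
    "no PRISMA guideline": 4,
    "partial adherence": 14,
    "PRISMA-P 2015": 8,
}
total = 208

for label, n in counts.items():
    print(f"{label}: {n}/{total} = {100 * n / total:.1f}%")
```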

Barriers to the uptake of evidence from systematic reviews and meta-analyses: a systematic review of decision makers' perceptions

BMJ Open, 2012

To review the barriers to the uptake of research evidence from systematic reviews by decision makers. We searched 19 databases covering the full range of publication years, utilised three search engines, and also personally contacted investigators. Reference lists of primary studies and related reviews were also consulted. Studies were included if they reported on the views and perceptions of decision makers on the uptake of evidence from systematic reviews, meta-analyses and the databases associated with them. All study designs, settings and decision makers were included. One investigator screened titles to identify candidate articles; two reviewers then independently assessed the quality and relevance of retrieved reports. Two reviewers described the methods of included studies and extracted data, which were summarised in tables and then analysed. Using a pre-established taxonomy, the barriers were organised into a framework according to their effect on knowledge, attitudes or beh...