Exploring sources of heterogeneity in systematic reviews of diagnostic tests
Related papers
Guidelines for meta-analyses evaluating diagnostic tests
Annals of Internal Medicine, 1994
Objectives: To introduce guidelines for the conduct, reporting, and critical appraisal of meta-analyses evaluating diagnostic tests and to apply these guidelines to recently published meta-analyses of diagnostic tests.
Data Sources: The guidelines are based on current concepts of how to assess diagnostic tests and conduct meta-analyses. They are applied to all meta-analyses evaluating diagnostic tests published in English-language journals from January 1990 through December 1991, identified through MEDLINE searching and by experts in the field.
Study Selection: Meta-analyses were included if at least two of three independent readers regarded their main purpose as the evaluation of diagnostic tests against a concurrent reference standard.
Data Extraction: Three independent readers rated the extent to which each meta-analysis fulfilled each guideline, with consensus defined as agreement by at least two readers.
Data Synthesis: The guidelines address determining the objective of the meta-analysis, identifying the relevant literature and extracting the data, estimating diagnostic accuracy, and identifying the extent to which variability is explained by study design characteristics and by characteristics of the patients and the diagnostic test. In general, the guidelines were only partially fulfilled.
Conclusion: Meta-analysis is potentially important in the assessment of diagnostic tests. Readers of meta-analyses evaluating diagnostic tests should appraise them critically; those conducting meta-analyses should apply recently developed methods. The conduct and reporting of the primary studies on which meta-analyses are based also require improvement.
Impact of Adjustment for Quality on Results of Metaanalyses of Diagnostic Accuracy
Clinical Chemistry, 2006
Methods: We evaluated the methodological quality of 487 diagnostic-accuracy studies in 30 systematic reviews with the QUADAS (Quality Assessment of Diagnostic-Accuracy Studies) checklist. We applied 3 strategies that varied both in the definition of quality and in the statistical approach to incorporate the quality-assessment results into metaanalyses. We compared magnitudes of diagnostic odds ratios, widths of their confidence intervals, and changes in a hypothetical clinical decision between strategies.
Results: Following 2 definitions of quality, we concluded that only 70 or 72 of 487 studies were of "high quality". This small number was partly due to poor reporting of quality items. None of the strategies for accounting for differences in quality led systematically to accuracy estimates that were less optimistic than ignoring quality in metaanalyses. Limiting the review to high-quality studies considerably reduced the number of studies in all reviews, with wider confidence intervals as a result. In 18 reviews, the quality adjustment would have resulted in a different decision about the usefulness of the test.
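The diagnostic odds ratio compared across strategies in the abstract above is computed from a study's 2x2 table. As an illustrative sketch (not code from that review), here is one common way to obtain the DOR and a Wald-type 95% confidence interval, with a 0.5 continuity correction when any cell is zero:

```python
import math

def diagnostic_odds_ratio(tp, fp, fn, tn):
    """DOR = (TP * TN) / (FP * FN), with a 0.5 continuity correction
    applied to all cells when any cell is zero. Returns (dor, (lo, hi)),
    where (lo, hi) is a Wald 95% CI on the log-odds scale."""
    if 0 in (tp, fp, fn, tn):
        tp, fp, fn, tn = (x + 0.5 for x in (tp, fp, fn, tn))
    dor = (tp * tn) / (fp * fn)
    # Standard error of log(DOR): sqrt of summed reciprocal cell counts.
    se_log = math.sqrt(1 / tp + 1 / fp + 1 / fn + 1 / tn)
    lo = math.exp(math.log(dor) - 1.96 * se_log)
    hi = math.exp(math.log(dor) + 1.96 * se_log)
    return dor, (lo, hi)

print(diagnostic_odds_ratio(90, 10, 10, 90))  # DOR = 81.0 with its 95% CI
```

A "less optimistic" quality-adjusted estimate, in the abstract's terms, would be a DOR pulled toward 1 after down-weighting or excluding low-quality studies.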
Preferred Reporting Items for a Systematic Review and Meta-analysis of Diagnostic Test Accuracy Studies: The PRISMA-DTA Statement
JAMA, 2018
IMPORTANCE Systematic reviews of diagnostic test accuracy synthesize data from primary diagnostic studies that have evaluated the accuracy of 1 or more index tests against a reference standard, provide estimates of test performance, allow comparisons of the accuracy of different tests, and facilitate the identification of sources of variability in test accuracy.
OBJECTIVE To develop the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) diagnostic test accuracy guideline as a stand-alone extension of the PRISMA statement. Modifications to the PRISMA statement reflect the specific requirements for reporting of systematic reviews and meta-analyses of diagnostic test accuracy studies and the abstracts for these reviews.
DESIGN Established standards from the Enhancing the Quality and Transparency of Health Research (EQUATOR) Network were followed for the development of the guideline. The original PRISMA statement was used as a framework on which to modify and add items. A group of 24 multidisciplinary experts used a systematic review of articles on existing reporting guidelines and methods, a 3-round Delphi process, a consensus meeting, pilot testing, and iterative refinement to develop the PRISMA diagnostic test accuracy guideline. The final version of the PRISMA diagnostic test accuracy guideline checklist was approved by the group.
FINDINGS The systematic review (which produced 64 items) and the Delphi process (which provided feedback on 7 proposed items; 1 item was later split into 2 items) identified 71 potentially relevant items for consideration. The Delphi process reduced these to 60 items that were discussed at the consensus meeting. Following the meeting, pilot testing and iterative feedback were used to generate the 27-item PRISMA diagnostic test accuracy checklist.
To reflect specific or optimal contemporary systematic review methods for diagnostic test accuracy, 8 of the 27 original PRISMA items were left unchanged, 17 were modified, 2 were added, and 2 were omitted.
CONCLUSIONS AND RELEVANCE The 27-item PRISMA diagnostic test accuracy checklist provides specific guidance for reporting of systematic reviews. The PRISMA diagnostic test accuracy guideline can facilitate the transparent reporting of reviews, and may assist in the evaluation of validity and applicability, enhance replicability of reviews, and make the results from systematic reviews of diagnostic test accuracy studies more useful.
Journal of Clinical Epidemiology
To collect reasons for selecting methods for meta-analysis of diagnostic accuracy from authors of systematic reviews and to improve guidance on recommended methods. Online survey of authors of recently published meta-analyses of diagnostic accuracy. We identified 100 eligible reviews, of which 40 had used more advanced methods of meta-analysis (hierarchical random-effects approach), 52 more traditional methods (summary receiver operating characteristic curve based on linear regression or a univariate approach), and 8 combined both. Fifty-nine authors responded to the survey; 29 (49%) had used advanced methods, 25 (42%) traditional methods, and 5 (9%) combined traditional and advanced methods. Most authors who had used advanced methods reported doing so because they believed that these methods are currently recommended (n = 27; 93%). Most authors who had used traditional methods also reported doing so because they believed that these methods are currently recommended.
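The "traditional method" named in the survey above, a summary ROC curve based on linear regression, is usually the Moses-Littenberg approach: regress the log diagnostic odds ratio on a threshold proxy across studies. A minimal sketch, assuming one 2x2 table per study and a 0.5 continuity correction (illustrative only, not the survey authors' code):

```python
import numpy as np

def moses_littenberg_sroc(tp, fp, fn, tn):
    """Traditional SROC fit (Moses-Littenberg): regress
    D = logit(TPR) - logit(FPR) on S = logit(TPR) + logit(FPR)
    across studies by ordinary least squares. Inputs are per-study
    2x2 cell counts; returns (intercept, slope) of D = a + b*S."""
    tp, fp, fn, tn = (np.asarray(x, float) + 0.5 for x in (tp, fp, fn, tn))
    tpr = tp / (tp + fn)          # per-study sensitivity
    fpr = fp / (fp + tn)          # per-study 1 - specificity
    logit = lambda p: np.log(p / (1 - p))
    d = logit(tpr) - logit(fpr)   # log diagnostic odds ratio
    s = logit(tpr) + logit(fpr)   # proxy for the positivity threshold
    b, a = np.polyfit(s, d, 1)    # slope first, then intercept
    return a, b
```

The fitted line is back-transformed to ROC space to draw the summary curve. The hierarchical (bivariate / HSROC) random-effects models the survey labels "advanced" replace this OLS step with a joint model of sensitivity and specificity that accounts for within-study sampling error, which is why they are currently recommended.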
Reporting of measures of accuracy in systematic reviews of diagnostic literature
BMC Health Services Research, 2002
Background: There are a variety of ways in which accuracy of clinical tests can be summarised in systematic reviews. Variation in reporting of summary measures has only been assessed in a small survey restricted to meta-analyses of screening studies found in a single database. Therefore, we performed this study to assess the measures of accuracy used for reporting results of primary studies as well as their meta-analysis in systematic reviews of test accuracy studies.
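The summary measures whose reporting this survey assesses are all derived from the same 2x2 table. A small illustrative sketch (hypothetical helper, not from the paper) showing the measures most often reported for a single primary study, assuming all four cells are nonzero:

```python
def accuracy_measures(tp, fp, fn, tn):
    """Common single-study accuracy measures reported in systematic
    reviews of diagnostic tests, from the four 2x2-table cell counts."""
    sens = tp / (tp + fn)   # sensitivity: P(test+ | disease+)
    spec = tn / (tn + fp)   # specificity: P(test- | disease-)
    return {
        "sensitivity": sens,
        "specificity": spec,
        "positive_likelihood_ratio": sens / (1 - spec),
        "negative_likelihood_ratio": (1 - sens) / spec,
        "diagnostic_odds_ratio": (tp * tn) / (fp * fn),
    }

# Example: 90 true positives, 10 false positives, 10 false negatives,
# 90 true negatives.
print(accuracy_measures(90, 10, 10, 90))
```

Part of the variation the survey documents comes from reviews choosing different subsets of these measures (or global ones such as the DOR) when pooling.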
How does study quality affect the results of a diagnostic meta-analysis?
BMC Medical Research Methodology, 2005
The use of systematic literature review to inform evidence-based practice in diagnostics is rapidly expanding. Although the primary diagnostic literature is extensive, studies are often of low methodological quality or poorly reported. There has been no rigorously evaluated, evidence-based tool to assess the methodological quality of diagnostic studies.
Clinical Chemistry, 2018
We evaluated the completeness of reporting of diagnostic test accuracy (DTA) systematic reviews using the recently developed Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA)-DTA guidelines. MEDLINE® was searched for DTA systematic reviews published October 2017 to January 2018. The search time span was modulated to reach the desired sample size of 100 systematic reviews. Reporting was evaluated on a per-item basis using PRISMA-DTA. One hundred reviews were included. Mean reported items were 18.6 of 26 (71%; SD = 1.9) for PRISMA-DTA and 5.5 of 11 (50%; SD = 1.2) for PRISMA-DTA for abstracts. Items in the results were frequently reported. Items related to protocol registration, characteristics of included studies, results synthesis, and definitions used in data extraction were infrequently reported. Infrequently reported items from PRISMA-DTA for abstracts included funding information, strengths and limitations, characteristics of included studies, and assess...
Systematic reviews to evaluate diagnostic tests
European Journal of Obstetrics & Gynecology and Reproductive Biology, 2001
Diagnostic testing and screening are a critical part of the clinical process, because inappropriate diagnostic strategies put patients at risk and entail a serious waste of resources. It is increasingly recognised that the absence of clear summaries of individual research studies on the repeatability, accuracy, and impact of tests, which are often scattered across many different journals, is a major impediment. Just as means to systematically review research assessing the effectiveness of treatments have been developed over the last decade, attention has more recently focused on how research on diagnostic tests might also be systematically reviewed. These reviews present a substantial methodological challenge. This paper describes a systematic approach to the collation, appraisal, and synthesis of information contained in the primary literature about the accuracy of diagnostic strategies.