Variation of a test's sensitivity and specificity with disease prevalence
Related papers
Methodology
Although measures such as sensitivity and specificity are used in the study of diagnostic test accuracy, they are not by themselves appropriate for integrating heterogeneous studies. It is therefore essential to assess all related aspects in detail before integrating a set of studies, so that the correct model can then be selected. This work describes the scheme employed for making decisions regarding the use of the R, STATA and SAS statistical programs. We used the R package for Meta-Analysis of Diagnostic Accuracy to determine the correlation between sensitivity and specificity. This package considers fixed-, random- and mixed-effects models, provides detailed summaries, and assesses heterogeneity. For handling the various cutoff points in the meta-analysis, we used the STATA module for meta-analytical integration of diagnostic test accuracy studies, which produces bivariate outputs for heterogeneity.
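As a rough illustration of the per-study quantities that such packages pool, the following Python sketch computes sensitivity, specificity, and the diagnostic odds ratio from 2x2 contingency counts. The study counts are hypothetical, chosen only to show the arithmetic; real meta-analytic models (bivariate/HSROC) operate on these values together with their within-study variances.

```python
# Hypothetical per-study 2x2 counts: (TP, FP, FN, TN)
studies = [(90, 10, 15, 85), (40, 5, 10, 95), (70, 20, 10, 80)]

for tp, fp, fn, tn in studies:
    sens = tp / (tp + fn)        # true positive rate
    spec = tn / (tn + fp)        # true negative rate
    dor = (tp * tn) / (fp * fn)  # diagnostic odds ratio
    print(f"sens={sens:.3f} spec={spec:.3f} DOR={dor:.1f}")
```

Pooling these naively (e.g., averaging sensitivities) would ignore the correlation between sensitivity and specificity induced by varying cut points, which is precisely why the bivariate models mentioned above are used instead.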
Archives of Microbiology and Immunology, 2023
In the process of medical diagnostics many types of tests are used, among them in vitro diagnostic laboratory tests. The performance of such tests is usually examined in clinical studies with a disease prevalence that differs from the prevalence of the disease in another clinical setting. The question then is whether diagnostic test characteristics like sensitivity, specificity and likelihood ratios are independent of the prevalence of the disease. The answer to this question is quite important when applying the test performance characteristics of a clinical study in a different clinical diagnostic setting. Here, it is demonstrated that, apart from special cases, test characteristics are indeed dependent on the prevalence of the disease. First, the underlying theoretical model of this dependence is presented and, second, the model is validated with three practical diagnostic examples, i.e., myocardial infarction, autoimmune disease, and vitamin-B12 deficiency.
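The paper's central point is that even the characteristics usually treated as fixed can vary with prevalence; the textbook counterpart of that dependence is how the predictive values shift with prevalence for fixed sensitivity and specificity, via Bayes' theorem. A minimal sketch, assuming an illustrative test with sensitivity 0.90 and specificity 0.95:

```python
def ppv(sens, spec, prev):
    """Positive predictive value via Bayes' theorem."""
    true_pos = sens * prev
    false_pos = (1 - spec) * (1 - prev)
    return true_pos / (true_pos + false_pos)

def npv(sens, spec, prev):
    """Negative predictive value via Bayes' theorem."""
    true_neg = spec * (1 - prev)
    false_neg = (1 - sens) * prev
    return true_neg / (true_neg + false_neg)

# Assumed test characteristics: sensitivity 0.90, specificity 0.95
for prev in (0.01, 0.10, 0.50):
    print(f"prev={prev:.2f}  PPV={ppv(0.9, 0.95, prev):.3f}  NPV={npv(0.9, 0.95, prev):.3f}")
```

At 1% prevalence the PPV is low despite the high specificity, while at 50% it is high; this is why test performance figures from one clinical setting cannot be transferred uncritically to another.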
Systematic Reviews, 2015
Background: Small-study effects and time trends have been identified in meta-analyses of randomized trials. We evaluated whether these effects are also present in meta-analyses of diagnostic test accuracy studies. Methods: A systematic search identified test accuracy meta-analyses published between May and September 2012. In each meta-analysis, the strength of the associations between estimated accuracy of the test (diagnostic odds ratio (DOR), sensitivity, and specificity) and sample size and between accuracy estimates and time since first publication were evaluated using meta-regression models. The regression coefficients over all meta-analyses were summarized using random effects meta-analysis.
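The meta-regression described above relates accuracy estimates to sample size; a small-study effect shows up as a negative association between log DOR and study size. A toy Python sketch with invented (n, DOR) pairs, using a simple unweighted least-squares fit rather than the variance-weighted meta-regression models a real analysis would use:

```python
import math

# Hypothetical per-study data: (sample size n, diagnostic odds ratio)
data = [(50, 80.0), (120, 45.0), (300, 30.0), (800, 22.0)]

# Unweighted regression of log(DOR) on log(n)
xs = [math.log(n) for n, _ in data]
ys = [math.log(d) for _, d in data]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
print(f"slope={slope:.3f}")  # a negative slope is consistent with a small-study effect
```

The study summarized here does this per meta-analysis and then pools the regression coefficients across meta-analyses with a random-effects model.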
Exploring sources of heterogeneity in systematic reviews of diagnostic tests
Statistics in Medicine, 2002
It is indispensable for any meta-analysis that potential sources of heterogeneity are examined before one considers pooling the results of primary studies into summary estimates with enhanced precision. In reviews of studies on the diagnostic accuracy of tests, variability in results beyond chance can be attributed to between-study differences in the selected cut points for positivity, in patient selection and clinical setting, in the type of test used, in the type of reference standard, or any combination of these factors. In addition, heterogeneity in study results can also be caused by flaws in study design.
JAMA, 2018
IMPORTANCE Systematic reviews of diagnostic test accuracy synthesize data from primary diagnostic studies that have evaluated the accuracy of 1 or more index tests against a reference standard, provide estimates of test performance, allow comparisons of the accuracy of different tests, and facilitate the identification of sources of variability in test accuracy. OBJECTIVE To develop the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) diagnostic test accuracy guideline as a stand-alone extension of the PRISMA statement. Modifications to the PRISMA statement reflect the specific requirements for reporting of systematic reviews and meta-analyses of diagnostic test accuracy studies and the abstracts for these reviews. DESIGN Established standards from the Enhancing the Quality and Transparency of Health Research (EQUATOR) Network were followed for the development of the guideline. The original PRISMA statement was used as a framework on which to modify and add items. A group of 24 multidisciplinary experts used a systematic review of articles on existing reporting guidelines and methods, a 3-round Delphi process, a consensus meeting, pilot testing, and iterative refinement to develop the PRISMA diagnostic test accuracy guideline. The final version of the PRISMA diagnostic test accuracy guideline checklist was approved by the group. FINDINGS The systematic review (which yielded 64 items) and the Delphi process (which provided feedback on 7 proposed items, 1 of which was later split into 2 items) identified 71 potentially relevant items for consideration. The Delphi process reduced these to 60 items that were discussed at the consensus meeting. Following the meeting, pilot testing and iterative feedback were used to generate the 27-item PRISMA diagnostic test accuracy checklist.
To reflect specific or optimal contemporary systematic review methods for diagnostic test accuracy, 8 of the 27 original PRISMA items were left unchanged, 17 were modified, 2 were added, and 2 were omitted. CONCLUSIONS AND RELEVANCE The 27-item PRISMA diagnostic test accuracy checklist provides specific guidance for reporting of systematic reviews. The PRISMA diagnostic test accuracy guideline can facilitate the transparent reporting of reviews, and may assist in the evaluation of validity and applicability, enhance replicability of reviews, and make the results from systematic reviews of diagnostic test accuracy studies more useful.