Towards Complete and Accurate Reporting of Studies of Diagnostic Accuracy: The STARD Initiative
Related papers
Toward complete and accurate reporting of studies of diagnostic accuracy: the STARD initiative
Academic radiology, 2003
To comprehend the results of diagnostic accuracy studies, readers must understand the design, conduct, and analysis of such studies. The authors sought to develop guidelines for improving the accuracy and completeness of reporting of studies of diagnostic accuracy, so that readers can better assess the validity and generalizability of study results. The Standards for Reporting of Diagnostic Accuracy group steering committee searched the literature to identify publications on the appropriate conduct and reporting of diagnostic studies and to extract potential guidelines for authors and editors. An extensive list of items was prepared. Members of the steering committee then met for 2 days with other researchers, editors, methodologists, statisticians, and members of professional organizations to develop a checklist and a prototypical flowchart to guide authors and editors of studies of diagnostic accuracy. The search for published guidelines on diagnostic research yielded 33 p...
The STARD Statement for Reporting Studies of Diagnostic Accuracy: Explanation and Elaboration
Annals of Internal Medicine, 2003
The quality of reporting of studies of diagnostic accuracy is less than optimal. Complete and accurate reporting is necessary to enable readers to assess the potential for bias in a study and to evaluate the generalisability of the results. A group of scientists and editors has developed the STARD (Standards for Reporting of Diagnostic Accuracy) statement to improve the quality of reporting of studies of diagnostic accuracy. The statement consists of a checklist of 25 items and a flow diagram that authors can use to ensure that all relevant information is present. This explanatory document aims to facilitate the use, understanding and dissemination of the checklist. The document contains a clarification of the meaning, rationale and optimal use of each item on the checklist, as well as a short summary of the available evidence on bias and applicability. The STARD statement, checklist, flowchart, and this explanation and elaboration document should be useful resources to improve the reporting of diagnostic accuracy studies. Complete and informative reporting can only lead to better decisions in health care.
The quality of diagnostic accuracy studies since the STARD statement: Has it improved?
Neurology, 2006
Objective: To assess whether the quality of reporting of diagnostic accuracy studies has improved since the publication of the Standards for the Reporting of Diagnostic Accuracy studies (STARD statement). Methods: The quality of reporting of diagnostic accuracy studies published in 12 medical journals in 2000 (pre-STARD) and 2004 (post-STARD) was evaluated by two reviewers independently. For each article, the number of reported STARD items was counted (range 0 to 25). Differences in completeness of reporting between articles published in 2000 and 2004 were analyzed, using multilevel analyses. Results: We included 124 articles published in 2000 and 141 articles published in 2004. Mean number of reported STARD items was 11.9 (range 3.5 to 19.5) in 2000 and 13.6 (range 4.0 to 21.0) in 2004, an increase of 1.81
Results of diagnostic accuracy studies are not always validated
Journal of Clinical Epidemiology, 2006
Background and Objective: Internal validation of a diagnostic test estimates the degree of random error, using the original data of a diagnostic accuracy study. External validation requires a new study in an independent but similar population. Here we describe whether diagnostic research is validated, which technique is used, and to what extent the validation study results differ from the original.
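Internal validation, as described above, quantifies random error using only the original data; one widely used approach is the percentile bootstrap. A minimal Python sketch, with an illustrative function name and data not taken from the study:

```python
import random

def bootstrap_sensitivity_ci(test_results, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for sensitivity.

    `test_results` holds booleans for diseased subjects only
    (True = index test positive), so the mean is the sensitivity.
    """
    rng = random.Random(seed)
    n = len(test_results)
    estimates = []
    for _ in range(n_boot):
        # Resample subjects with replacement and recompute sensitivity.
        resample = [test_results[rng.randrange(n)] for _ in range(n)]
        estimates.append(sum(resample) / n)
    estimates.sort()
    lower = estimates[int((alpha / 2) * n_boot)]
    upper = estimates[int((1 - alpha / 2) * n_boot) - 1]
    point = sum(test_results) / n
    return point, (lower, upper)
```

External validation, by contrast, cannot be simulated from the original sample: it requires a new study in an independent but similar population.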
BMC medical research methodology, 2006
In January 2003, STAndards for the Reporting of Diagnostic accuracy studies (STARD) were published in a number of journals, to improve the quality of reporting in diagnostic accuracy studies. We designed a study to investigate the inter-assessment reproducibility, and intra- and inter-observer reproducibility, of the items in the STARD statement. Thirty-two diagnostic accuracy studies published in 2000 in medical journals with an impact factor of at least 4 were included. Two reviewers independently evaluated the quality of reporting of these studies using the 25 items of the STARD statement. A consensus evaluation was obtained by discussing and resolving disagreements between reviewers. Almost two years later, the same studies were evaluated by the same reviewers. For each item, percentage agreement and Cohen's kappa between first and second consensus assessments (inter-assessment) were calculated. Intraclass correlation coefficients (ICC) were calculated to evaluate its reliab...
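The two agreement statistics used in this study, percentage agreement and Cohen's kappa, follow directly from their standard definitions; a minimal Python sketch (the function name and sample ratings are illustrative):

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement corrected for chance, (po - pe) / (1 - pe)."""
    n = len(rater_a)
    # Observed proportion of agreement.
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's marginal frequencies.
    categories = set(rater_a) | set(rater_b)
    pe = sum((rater_a.count(c) / n) * (rater_b.count(c) / n) for c in categories)
    if pe == 1:  # both raters constant and identical
        return 1.0
    return (po - pe) / (1 - pe)

# Two reviewers scoring whether each STARD item is reported (1) or not (0):
first = [1, 1, 0, 0]
second = [1, 0, 1, 0]
kappa = cohens_kappa(first, second)
```

Identical ratings give kappa = 1, while agreement no better than chance gives kappa = 0, which is why kappa is preferred over raw percentage agreement.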
STARD 2015 guidelines for reporting diagnostic accuracy studies: explanation and elaboration
Diagnostic accuracy studies are, like other clinical studies, at risk of bias due to shortcomings in design and conduct, and the results of a diagnostic accuracy study may not apply to other patient groups and settings. Readers of study reports need to be informed about study design and conduct, in sufficient detail to judge the trustworthiness and applicability of the study findings. The STARD statement (Standards for Reporting of Diagnostic Accuracy Studies) was developed to improve the completeness and transparency of reports of diagnostic accuracy studies. STARD contains a list of essential items that can be used as a checklist, by authors, reviewers and other readers, to ensure that a report of a diagnostic accuracy study contains the necessary information. STARD was recently updated. All updated STARD materials, including the checklist, are available at http://www.equator-network.org/reporting-guidelines/stard. Here, we present the STARD 2015 explanation and elaboration document. Through commented examples of appropriate reporting, we clarify the rationale for each of the 30 items on the STARD 2015 checklist, and describe what is expected from authors in developing sufficiently informative study reports.
STARD 2015: an updated list of essential items for reporting diagnostic accuracy studies
Incomplete reporting has been identified as a major source of avoidable waste in biomedical research. Essential information is often not provided in study reports, impeding the identification, critical appraisal, and replication of studies. To improve the quality of reporting of diagnostic accuracy studies, the Standards for Reporting Diagnostic Accuracy (STARD) statement was developed. Here we present STARD 2015, an updated list of 30 essential items that should be included in every report of a diagnostic accuracy study. This update incorporates recent evidence about sources of bias and variability in diagnostic accuracy and is intended to facilitate the use of STARD. As such, STARD 2015 may help to improve completeness and transparency in reporting of diagnostic accuracy studies.
Toward a checklist for reporting of studies of diagnostic accuracy of medical tests
Clinical chemistry, 2000
Background: "Diagnostic accuracy" refers to the ability of medical tests to provide accurate information about diagnosis, prognosis, risk of disease, and other clinical issues. Published reports on diagnostic accuracy of medical tests frequently fail to adhere to minimal clinical epidemiological standards, and such failures lead to overly optimistic assessments of evaluated tests. Our aim was to enumerate key items for inclusion in published reports on diagnostic accuracy, with a related aim of making the reports more useful for systematic reviews.
Tools for critical appraisal of evidence in studies of diagnostic accuracy
Autoimmunity Reviews, 2012
Studies of accuracy are often more complex to understand than clinical trials, since there can be more than one outcome and scope (screening, diagnosis, and prognosis) and because results are reported in more ways than in clinical trials (which typically use the relative risk or odds ratio). Sensitivity and specificity are common terms for practitioners, but remembering that sensitivity is the "ratio of true positives to true positives plus false negatives" may sometimes cause frustration. Moreover, likelihood ratios, predictive values, the diagnostic odds ratio, and pre- and post-test probability complicate the framework. Summarizing these indexes across multiple studies can be more difficult still. However, understanding diagnostic test accuracy from different study results, and how to interpret systematic reviews and meta-analyses, can help every practitioner improve critical appraisal of evidence about the best use of diagnostic tests. Avoiding complicated mathematical formulas, this paper attempts to explain the meaning of the most important diagnostic indexes and how to read a forest plot and a summary Receiver Operating Characteristic curve.
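All of the indexes this paper discusses derive from the same 2×2 table of test result against disease status; a minimal Python sketch with a hypothetical table (the function name and counts are illustrative, not from any cited study):

```python
def diagnostic_indexes(tp, fp, fn, tn):
    """Common diagnostic accuracy indexes from a 2x2 table of test vs. disease."""
    sensitivity = tp / (tp + fn)               # true positive rate
    specificity = tn / (tn + fp)               # true negative rate
    ppv = tp / (tp + fp)                       # positive predictive value
    npv = tn / (tn + fn)                       # negative predictive value
    lr_pos = sensitivity / (1 - specificity)   # positive likelihood ratio
    lr_neg = (1 - sensitivity) / specificity   # negative likelihood ratio
    dor = lr_pos / lr_neg                      # diagnostic odds ratio
    return {"sensitivity": sensitivity, "specificity": specificity,
            "ppv": ppv, "npv": npv,
            "lr_pos": lr_pos, "lr_neg": lr_neg, "dor": dor}

# Hypothetical 2x2 table: 90 true positives, 10 false positives,
# 10 false negatives, 90 true negatives.
indexes = diagnostic_indexes(tp=90, fp=10, fn=10, tn=90)
```

Note that the predictive values depend on disease prevalence in the study sample, whereas the likelihood ratios can be applied to any pre-test probability to obtain a post-test probability.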