The quality of diagnostic accuracy studies since the STARD statement: Has it improved?
Related papers
The STARD Statement for Reporting Studies of Diagnostic Accuracy: Explanation and Elaboration
Annals of Internal Medicine, 2003
The quality of reporting of studies of diagnostic accuracy is less than optimal. Complete and accurate reporting is necessary to enable readers to assess the potential for bias in a study and to evaluate the generalisability of the results. A group of scientists and editors has developed the STARD (Standards for Reporting of Diagnostic Accuracy) statement to improve the quality of reporting of studies of diagnostic accuracy. The statement consists of a checklist of 25 items and a flow diagram that authors can use to ensure that all relevant information is present. This explanatory document aims to facilitate the use, understanding and dissemination of the checklist. The document contains a clarification of the meaning, rationale and optimal use of each item on the checklist, as well as a short summary of the available evidence on bias and applicability. The STARD statement, checklist, flowchart, and this explanation and elaboration document should be useful resources to improve the reporting of diagnostic accuracy studies. Complete and informative reporting can only lead to better decisions in health care.
Towards Complete and Accurate Reporting of Studies of Diagnostic Accuracy: The STARD Initiative
American Journal of Roentgenology, 2003
Objective. To improve the accuracy and completeness of reporting of studies of diagnostic accuracy, in order to allow readers to assess the potential for bias in the study and to evaluate its generalisability. Methods. The Standards for Reporting of Diagnostic Accuracy (STARD) steering committee searched the literature to identify publications on the appropriate conduct and reporting of diagnostic studies and extracted potential items into an extensive list. Researchers, editors, and members of professional organizations shortened this list during a two-day consensus meeting with the goal of developing a checklist and a generic flow diagram for studies of diagnostic accuracy. Results. The search for published guidelines regarding diagnostic research yielded 33 previously published checklists, from which we extracted a list of 75 potential items. At the consensus meeting, participants shortened the list to 25 items, using evidence on bias whenever available. A prototypical flow diagram provides information about the method of patient recruitment, the order of test execution, and the numbers of patients undergoing the test under evaluation, the reference standard, or both. Conclusions. Evaluation of research depends on complete and accurate reporting. If medical journals adopt the checklist and the flow diagram, the quality of reporting of studies of diagnostic accuracy should improve to the advantage of clinicians, researchers, reviewers, journals, and the public.
Toward complete and accurate reporting of studies of diagnostic accuracy: the STARD initiative
Academic Radiology, 2003
To comprehend the results of diagnostic accuracy studies, readers must understand the design, conduct, and analysis of such studies. The authors sought to develop guidelines for improving the accuracy and completeness of reporting of studies of diagnostic accuracy in order to allow readers better to assess the validity and generalizability of study results. The Standards for Reporting of Diagnostic Accuracy group steering committee searched the literature to identify publications on the appropriate conduct and reporting of diagnostic studies and to extract potential guidelines for authors and editors. An extensive list of items was prepared. Members of the steering committee then met for 2 days with other researchers, editors, methodologists, statisticians, and members of professional organizations to develop a checklist and a prototypical flowchart to guide authors and editors of studies of diagnostic accuracy. The search for published guidelines on diagnostic research yielded 33 p...
Results of diagnostic accuracy studies are not always validated
Journal of Clinical Epidemiology, 2006
Background and Objective: Internal validation of a diagnostic test estimates the degree of random error, using the original data of a diagnostic accuracy study. External validation requires a new study in an independent but similar population. Here we describe whether diagnostic research is validated, which technique is used, and to what extent the validation study results differ from the original.
Quality of reporting of diagnostic accuracy studies
Radiology, 2005
To evaluate quality of reporting in diagnostic accuracy articles published in 2000 in journals with impact factor of at least 4 by using items of Standards for Reporting of Diagnostic Accuracy (STARD) statement published later in 2003. English-language articles on primary diagnostic accuracy studies in 2000 were identified with validated search strategy in MEDLINE. Articles published in journals with impact factor of 4 or higher that regularly publish articles on diagnostic accuracy were selected. Two independent reviewers evaluated quality of reporting by using STARD statement, which consists of 25 items and encourages use of a flow diagram. Total STARD score for each article was calculated by summing number of reported items. Subgroup analyses were performed for study design (case-control or cohort study) by using Student t tests for continuous outcomes and χ² tests for dichotomous outcomes. Included were 124 articles published in 2000 in 12 journals: 33 case-control and 91 co...
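The scoring scheme described in this abstract (sum the reported STARD items per article, then compare study designs on the continuous total score) can be sketched in Python. This is an illustrative reconstruction, not the authors' code: the per-article item indicators and the group scores below are invented, and a hand-rolled Welch t statistic stands in for the Student t test the abstract names.

```python
# Hypothetical sketch of STARD total scoring and a two-group comparison.
# Example data are invented for illustration only.

from statistics import mean, stdev

def stard_total(reported_items: list[bool]) -> int:
    """Total STARD score = number of the 25 checklist items reported."""
    assert len(reported_items) == 25
    return sum(reported_items)

def welch_t(a: list[int], b: list[int]) -> float:
    """Welch's t statistic for two independent samples (unequal variances)."""
    va, vb = stdev(a) ** 2 / len(a), stdev(b) ** 2 / len(b)
    return (mean(a) - mean(b)) / (va + vb) ** 0.5

# One article reporting 14 of the 25 items:
article = [True] * 14 + [False] * 11
print(stard_total(article))  # -> 14

# Invented total scores for two design subgroups:
cohort_scores = [14, 16, 13, 15, 17]
case_control_scores = [11, 12, 10, 13, 12]
print(round(welch_t(cohort_scores, case_control_scores), 2))  # -> 3.9
```

The dichotomous outcomes (e.g. whether a specific item is reported) would instead go into a 2×2 table for the χ² test mentioned above.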
STARD 2015 guidelines for reporting diagnostic accuracy studies: explanation and elaboration
Diagnostic accuracy studies are, like other clinical studies, at risk of bias due to shortcomings in design and conduct, and the results of a diagnostic accuracy study may not apply to other patient groups and settings. Readers of study reports need to be informed about study design and conduct, in sufficient detail to judge the trustworthiness and applicability of the study findings. The STARD statement (Standards for Reporting of Diagnostic Accuracy Studies) was developed to improve the completeness and transparency of reports of diagnostic accuracy studies. STARD contains a list of essential items that can be used as a checklist by authors, reviewers and other readers, to ensure that a report of a diagnostic accuracy study contains the necessary information. STARD was recently updated. All updated STARD materials, including the checklist, are available at http://www.equator-network.org/reporting-guidelines/stard. Here, we present the STARD 2015 explanation and elaboration document. Through commented examples of appropriate reporting, we clarify the rationale for each of the 30 items on the STARD 2015 checklist, and describe what is expected from authors in developing sufficiently informative study reports.
BMC medical research methodology, 2006
In January 2003, STAndards for the Reporting of Diagnostic accuracy studies (STARD) were published in a number of journals, to improve the quality of reporting in diagnostic accuracy studies. We designed a study to investigate the inter-assessment reproducibility, and intra- and inter-observer reproducibility, of the items in the STARD statement. Thirty-two diagnostic accuracy studies published in 2000 in medical journals with an impact factor of at least 4 were included. Two reviewers independently evaluated the quality of reporting of these studies using the 25 items of the STARD statement. A consensus evaluation was obtained by discussing and resolving disagreements between reviewers. Almost two years later, the same studies were evaluated by the same reviewers. For each item, percentage agreement and Cohen's kappa between first and second consensus assessments (inter-assessment) were calculated. Intraclass correlation coefficients (ICC) were calculated to evaluate its reliab...
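The agreement measures named in this abstract can be sketched for a single binary checklist item. This is a minimal illustration, not the study's analysis code: the two assessment vectors are invented, and the kappa implementation assumes the standard two-rater, binary-rating formula.

```python
# Percentage agreement and Cohen's kappa between two consensus assessments
# of one STARD item, coded 1 = reported, 0 = not reported per article.
# Example data are invented for illustration only.

def percent_agreement(a: list[int], b: list[int]) -> float:
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a: list[int], b: list[int]) -> float:
    """kappa = (p_o - p_e) / (1 - p_e) for two binary raters."""
    n = len(a)
    p_o = percent_agreement(a, b)
    # Expected chance agreement from each assessment's marginal proportions.
    p1a, p1b = sum(a) / n, sum(b) / n
    p_e = p1a * p1b + (1 - p1a) * (1 - p1b)
    return (p_o - p_e) / (1 - p_e)

first  = [1, 1, 0, 1, 0, 1, 1, 0]   # first consensus assessment
second = [1, 0, 0, 1, 0, 1, 1, 1]   # second assessment, ~2 years later

print(percent_agreement(first, second))        # -> 0.75
print(round(cohens_kappa(first, second), 3))   # -> 0.467
```

Kappa is lower than raw agreement because it discounts the agreement expected by chance alone, which is exactly why the study reports both.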
Development and validation of methods for assessing the quality of diagnostic accuracy studies
Health technology assessment (Winchester, England), 2004
To develop a quality assessment tool which will be used in systematic reviews to assess the quality of primary studies of diagnostic accuracy. Electronic databases including MEDLINE, EMBASE, BIOSIS and the methodological databases of both CRD and the Cochrane Collaboration. Three systematic reviews were conducted to provide an evidence base for the development of the quality assessment tool. A Delphi procedure was used to develop the quality assessment tool and the information provided by the reviews was incorporated into this. A panel of nine experts in the area of diagnostic accuracy studies took part in the Delphi procedure to agree on the items to be included in the tool. Panel members were also asked to provide feedback on various other items and whether they would like to see the development of additional topic and design specific items. The Delphi procedure produced the quality assessment tool, named the QUADAS tool, which consisted of 14 items. A background document was prod...
Tools for critical appraisal of evidence in studies of diagnostic accuracy
Autoimmunity Reviews, 2012
Studies of diagnostic accuracy are often harder to understand than clinical trials, since they can have more than one outcome and scope (screening, diagnosis, and prognosis) and their results must be reported in more ways than those of clinical trials (relative risk or odds ratio). Sensitivity and specificity are familiar terms for practitioners, but remembering that sensitivity is the ratio of true positives to the sum of true positives and false negatives may sometimes cause frustration. Moreover, likelihood ratios, predictive values, the diagnostic odds ratio, and pre- and post-test probabilities complicate the framework. Summarizing these indices across multiple studies can be more difficult still. However, understanding diagnostic test accuracy from different study results, and how to interpret systematic reviews and meta-analyses, can help every practitioner improve critical appraisal of evidence about the best use of diagnostic tests. Avoiding complicated mathematical formulas, this paper attempts to explain the meaning of the most important diagnostic indices and how to read a forest plot and a summary Receiver Operating Characteristic curve.
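The indices this abstract lists all derive from one 2×2 table of index-test results against the reference standard. A minimal sketch, with invented counts, makes the relationships concrete:

```python
# Diagnostic accuracy indices from a 2x2 table; counts are hypothetical.
# tp/fp/fn/tn: index-test result cross-classified with the reference standard.

def diagnostic_indices(tp: int, fp: int, fn: int, tn: int) -> dict[str, float]:
    sens = tp / (tp + fn)             # sensitivity: TP / (TP + FN)
    spec = tn / (tn + fp)             # specificity: TN / (TN + FP)
    return {
        "sensitivity": sens,
        "specificity": spec,
        "ppv": tp / (tp + fp),        # positive predictive value
        "npv": tn / (tn + fn),        # negative predictive value
        "lr_pos": sens / (1 - spec),  # positive likelihood ratio
        "lr_neg": (1 - sens) / spec,  # negative likelihood ratio
        "dor": (tp * tn) / (fp * fn), # diagnostic odds ratio
    }

ix = diagnostic_indices(tp=90, fp=20, fn=10, tn=80)
print(ix["sensitivity"])        # -> 0.9
print(ix["specificity"])        # -> 0.8
print(round(ix["lr_pos"], 1))   # -> 4.5
print(ix["dor"])                # -> 36.0
```

Note that sensitivity, specificity, and the likelihood ratios do not depend on disease prevalence, while the predictive values do; that distinction is the usual reason the paper's pre- and post-test probabilities enter the framework.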