Serum γ-Glutamyltransferase Isoform Complexed to LDL in the Diagnosis of Small Hepatocellular Carcinoma
Related papers
Critical Reviews in Clinical Laboratory Sciences, 2018
Reference intervals (RIs) and clinical decision limits (CDLs) are a vital part of the information supplied by laboratories to support the interpretation of numerical clinical pathology results. RIs describe the typical distribution of results seen in a healthy reference population, while CDLs are associated with a significantly higher risk of adverse clinical outcomes or are diagnostic for the presence of a specific disease. However, as the two concepts are sometimes confused, there is a need to clarify the differences between these terms and to ensure they are easily distinguished, especially because CDLs have a clinical association with specific diseases and risks, thereby implying that effective clinical interventions are available. It is important to note that, because population-based RIs are derived from the range of values expected in a typical community population, laboratory results that fall outside an RI do not necessarily indicate disease, but rather that additional medical follow-up and/or treatment may be warranted. In contrast, CDLs are associated with a risk of specific adverse outcomes and are commonly used to interpret laboratory test results, including lipid parameters, glucose, hemoglobin A1c (HbA1c), and tumor markers, in order to determine disease risk, to diagnose, or to guide treatment. In recent years, the International Federation of Clinical Chemistry and Laboratory Medicine (IFCC) Committee on Reference Intervals and Decision Limits (C-RIDL) has focused primarily on RIs and has performed multicenter studies to obtain common RIs. However, as its name indicates, the broader responsibility of the Committee also includes decision limits. C-RIDL now aims to emphasize the importance of the correct use of both RIs and CDLs and to encourage laboratories to provide clinicians with the appropriate information as needed. This review discusses RIs and CDLs in detail, describes the similarities and differences between these two important tools in laboratory medicine, and explains the processes used to define them. C-RIDL encourages the involvement of laboratory professionals in the establishment of both RIs and CDLs.
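The distinction between the two concepts can be made concrete in a few lines. Below is a minimal sketch, assuming a simulated healthy reference sample: the RI is taken as the central 95% of that sample (the 2.5th to 97.5th percentiles, the conventional nonparametric definition), whereas a CDL is a fixed, outcome-linked threshold. The fasting-glucose numbers and the 7.0 mmol/L cut-off are used only as an illustration, not as values taken from the review.

```python
import numpy as np

def reference_interval(values, lower_pct=2.5, upper_pct=97.5):
    """Nonparametric reference interval: the central 95% of a healthy reference sample."""
    return np.percentile(np.asarray(values, dtype=float), [lower_pct, upper_pct])

# Illustrative only: simulated fasting glucose results (mmol/L) from a
# hypothetical healthy reference population.
rng = np.random.default_rng(0)
reference_sample = rng.normal(loc=5.0, scale=0.5, size=400)
ri_low, ri_high = reference_interval(reference_sample)
print(f"Reference interval: {ri_low:.2f}-{ri_high:.2f} mmol/L")

# A CDL, by contrast, is a fixed threshold tied to a clinical outcome or
# diagnosis rather than a percentile of the healthy distribution; the widely
# used fasting-glucose cut-off for diabetes is shown here purely as an example.
cdl_fasting_glucose = 7.0  # mmol/L
result = 6.2
print("outside RI:", not (ri_low <= result <= ri_high))
print("at or above CDL:", result >= cdl_fasting_glucose)
```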
The Meaning of Diagnostic Test Results: A Spreadsheet for Swift Data Analysis
Clinical Radiology, 2000
AIMS: To design a spreadsheet program to: (a) rapidly analyse diagnostic test result data produced in local research or reported in the literature; (b) correct reported predictive values for disease prevalence in any population; (c) estimate the post-test probability of disease in individual patients. MATERIALS AND METHODS: Microsoft Excel™ was used. Section A: a contingency (2 × 2) table was incorporated into the spreadsheet. Formulae for standard calculations [sample size, disease prevalence, sensitivity and specificity with 95% confidence intervals, predictive values and likelihood ratios (LRs)] were linked to this table. The results change automatically when the data in the true or false negative and positive cells are changed. Section B: this estimates predictive values in any population, compensating for altered disease prevalence. Sections C–F: Bayes' theorem was incorporated to generate individual post-test probabilities. The spreadsheet generates 95% confidence intervals, LRs, and a table and graph of conditional probabilities once the sensitivity and specificity of the test are entered. The latter shows the expected post-test probability of disease for any pre-test probability when a test of known sensitivity and specificity is positive or negative. RESULTS: This spreadsheet can be used on desktop and palmtop computers. The MS Excel™ version can be downloaded via the Internet from the URL ftp://radiography.com/pub/Rad-data99.xls CONCLUSION: A spreadsheet is useful for contingency table data analysis and assessment of the clinical meaning of diagnostic test results. MacEneaney, P. M., Malone, D. E. (2000). Clinical Radiology 55, 227-235. © 2000 The Royal College of Radiologists
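The calculations that the spreadsheet links to its contingency table are straightforward to reproduce. The sketch below, with invented counts and variable names of my own, shows the standard 2 × 2 statistics and the odds form of Bayes' theorem used to turn a pre-test probability and a likelihood ratio into a post-test probability.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard statistics derived from a 2x2 contingency table."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {
        "sensitivity": sens,
        "specificity": spec,
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
        "LR+": sens / (1 - spec),
        "LR-": (1 - sens) / spec,
    }

def post_test_probability(pre_test_prob, likelihood_ratio):
    """Bayes' theorem in odds form: post-test odds = pre-test odds x LR."""
    pre_odds = pre_test_prob / (1 - pre_test_prob)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Illustrative counts only (not taken from the paper).
metrics = diagnostic_metrics(tp=90, fp=20, fn=10, tn=180)
print(metrics)

# Post-test probability after a positive result for a patient whose
# prevalence-adjusted pre-test probability is 10%.
print(post_test_probability(0.10, metrics["LR+"]))
```

Correcting predictive values for a different disease prevalence, as Section B of the spreadsheet does, amounts to running the same Bayes step with the new prevalence supplied as the pre-test probability.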
Journal of General Internal Medicine, 2006
BACKGROUND: Clinical experience, features of the data collection process, or both affect diagnostic accuracy, but their respective roles are unclear. OBJECTIVE, DESIGN: Prospective, observational study to determine the respective contributions of clinical experience and data collection features to diagnostic accuracy. METHODS: Six internists, six second-year internal medicine residents, and six senior medical students worked up the same 7 cases with a standardized patient. Each encounter was audiotaped and immediately assessed by the subjects, who indicated the reasons underlying their data collection. We analyzed the encounters according to diagnostic accuracy, information collected, organ systems explored, diagnoses evaluated, and final decisions made, and we determined predictors of diagnostic accuracy by logistic regression models. RESULTS: Several features significantly predicted diagnostic accuracy after correction for clinical experience: early exploration of the correct diagnosis (odds ratio [OR] 24.35) or of relevant diagnostic hypotheses (OR 2.22) to frame clinical data collection, a larger number of diagnostic hypotheses evaluated (OR 1.08), and collection of relevant clinical data (OR 1.19). CONCLUSION: Some features of data collection and interpretation are related to diagnostic accuracy beyond clinical experience and should be explicitly included in clinical training and modeled by clinical teachers. Thoroughness in data collection should not be considered a privileged route to diagnostic success.
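For readers less familiar with how the reported odds ratios arise, the sketch below uses simulated data and placeholder predictors (not the study's actual variables or effect sizes) to show that an OR from a logistic regression model is simply the exponentiated fitted coefficient.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 300
# Placeholder predictors, loosely echoing the abstract's themes:
early_correct_dx = rng.integers(0, 2, n)   # explored the correct diagnosis early (0/1)
n_hypotheses = rng.poisson(5, n)           # number of diagnostic hypotheses evaluated

# Simulate an accuracy outcome in which both predictors genuinely raise the odds.
linear_predictor = -2.0 + 1.5 * early_correct_dx + 0.10 * n_hypotheses
accurate = rng.binomial(1, 1 / (1 + np.exp(-linear_predictor)))

X = sm.add_constant(np.column_stack([early_correct_dx, n_hypotheses]))
fit = sm.Logit(accurate, X).fit(disp=False)

# Odds ratios are the exponentiated coefficients; with this simulation they
# recover roughly exp(1.5) ~ 4.5 and exp(0.10) ~ 1.1 for the two predictors.
print(np.exp(fit.params))
```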
Toward a checklist for reporting of studies of diagnostic accuracy of medical tests
Clinical Chemistry, 2000
Background: "Diagnostic accuracy" refers to the ability of medical tests to provide accurate information about diagnosis, prognosis, risk of disease, and other clinical issues. Published reports on diagnostic accuracy of medical tests frequently fail to adhere to minimal clinical epidemiological standards, and such failures lead to overly optimistic assessments of evaluated tests. Our aim was to enumerate key items for inclusion in published reports on diagnostic accuracy, with a related aim of making the reports more useful for systematic reviews.