Integrating AI Algorithms into the Clinical Workflow
Radiology: Artificial Intelligence
Artificial intelligence (AI) applications are increasingly being developed for diagnostic imaging (1). These AI applications can be divided broadly into two categories: first, those pertaining to logistic workflows, including order scheduling, patient screening, radiologist reporting, and other operational analytics (also termed upstream AI); and second, those pertaining to the acquired imaging data themselves, such as automated detection and segmentation of findings or features, automated interpretation of findings, and image postprocessing (2) (also termed downstream AI). Numerous downstream AI applications have been developed in recent years. More than 120 AI applications in medical imaging are currently cleared by the U.S. Food and Drug Administration (3). Although a variety of applications are available, a major unaddressed issue is the difficulty of adopting AI algorithms into the workflow of clinical practice. AI algorithms are generally siloed systems that are not easily incorporated into the existing information systems of a radiology department. Additionally, tools to measure and monitor the performance of AI systems within clinical workflows are lacking. We sought to define the requirements for effective AI deployment in the clinical workflow by considering an exemplar downstream AI application, automated interpretation and reporting of lymphoscintigraphy examinations, and to use that exemplar to develop generalizable components that meet the defined requirements.

Materials and Methods

The institutional review board approved this retrospective study for development of the AI algorithm, which was compliant with the Health Insurance Portability and Accountability Act, and waived the requirement for written informed consent.

Understanding the General Workflow and Particular Use Case

Our use case for deploying AI within the clinical workflow was an AI algorithm for evaluating lymphoscintigraphy examinations. These examinations are performed to identify sentinel lymph nodes (SLNs) in patients with invasive breast cancer, potentially increasing the accuracy of staging (4). The examination comprises images of the breasts and axillae (Fig 1), and the radiology report describes the location and positivity of SLNs. The AI algorithm we developed for this use case takes the images as inputs and outputs the following data for reporting: (a) observed sites of injection (right breast only, left breast only, or bilateral breasts), (b) probability of radiotracer accumulation in the axillae (probability scores for none, right, left, or bilateral axillae), (c) number of right axillary lymph nodes (integer), and (d) number of left axillary lymph nodes (integer).
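To make the algorithm's interface concrete, the sketch below shows one way the four output fields listed above could be represented as a structured object that a downstream reporting step might consume. This is a minimal illustration, not the authors' implementation; the class and field names are hypothetical, and only the four output categories come from the text.

```python
# Hypothetical sketch of a structured output container for the
# lymphoscintigraphy algorithm described above; names are illustrative only.
from dataclasses import dataclass
from enum import Enum
from typing import Dict


class InjectionSite(Enum):
    RIGHT_BREAST = "right breast only"
    LEFT_BREAST = "left breast only"
    BILATERAL_BREASTS = "bilateral breasts"


@dataclass
class LymphoscintigraphyResult:
    # (a) observed site(s) of radiotracer injection
    injection_site: InjectionSite
    # (b) probability scores for radiotracer accumulation in the axillae,
    #     keyed by "none", "right", "left", and "bilateral"
    axillary_uptake_probabilities: Dict[str, float]
    # (c) number of right axillary lymph nodes (integer)
    right_axillary_node_count: int
    # (d) number of left axillary lymph nodes (integer)
    left_axillary_node_count: int


# Example of the kind of result a reporting step could receive:
example = LymphoscintigraphyResult(
    injection_site=InjectionSite.LEFT_BREAST,
    axillary_uptake_probabilities={
        "none": 0.02, "right": 0.01, "left": 0.95, "bilateral": 0.02
    },
    right_axillary_node_count=0,
    left_axillary_node_count=2,
)
```

Packaging the outputs in a single typed structure like this is one plausible way to pass algorithm results into report generation or workflow monitoring, though the actual integration mechanism is described later in the article.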