Pierre Elnajjar - Academia.edu

Papers by Pierre Elnajjar

Research paper thumbnail of Deep learning achieves radiologist-level performance at segmenting breast cancers on MRI

Purpose: To develop a deep network architecture that achieves fully automated, radiologist-level segmentation of cancers in breast MRI. Materials and Methods: We leveraged 38,229 exams (64,063 individual breast scans) collected retrospectively from women aged 12-94 years (mean age, 54 years) who presented between 2002 and 2014 at a single clinical site. For network training, we selected 2,555 breast cancers that were segmented in 2D by radiologists, as well as 60,108 benign breasts, which served as examples of non-cancerous tissue during training. For testing, an additional 250 breast cancers were segmented independently in 2D by four radiologists. We selected among several 3D deep convolutional neural network architectures, input modalities, and harmonization methods. The outcome measure was the Dice score for 2D segmentation, compared between the network and radiologists using the Wilcoxon signed-rank test and the TOST procedure. Results: The best-performing network on t...
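As a concrete illustration of the outcome measure named in this abstract, the sketch below computes the Dice score between two binary 2D masks and compares paired per-case scores with the Wilcoxon signed-rank test. The per-case values are invented placeholders, not data from the study, and the function names are illustrative assumptions.

```python
# Minimal sketch: 2D Dice score and a paired Wilcoxon signed-rank comparison.
import numpy as np
from scipy.stats import wilcoxon

def dice_score(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient between two binary masks of equal shape."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

# Hypothetical per-case Dice values for the network and a reference radiologist.
network_dice = np.array([0.81, 0.77, 0.85, 0.90, 0.72])
radiologist_dice = np.array([0.83, 0.75, 0.84, 0.88, 0.74])

# Paired, non-parametric comparison of the two raters across the same cases.
stat, p_value = wilcoxon(network_dice, radiologist_dice)
print(f"Wilcoxon signed-rank: statistic={stat:.2f}, p={p_value:.3f}")
```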

Research paper thumbnail of Radiologist-Level Performance by Using Deep Learning for Segmentation of Breast Cancers on MRI Scans

Radiology: Artificial Intelligence, 2022

Segmentation of breast tumors provides image features such as shape, morphologic structure, texture, and enhancement dynamics that can improve diagnosis and prognosis in patients with breast cancer (1-3). To our knowledge, reliable automated tumor segmentation does not yet exist, and manual segmentation is labor intensive; this has precluded routine clinical evaluation of tumor volume despite mounting evidence that it is a good predictor of patient survival (2). Automatic segmentation with modern deep network techniques has the potential to meet this clinical need. Deep learning methods have been applied in breast tumor segmentation (4,5) and diagnosis (6-11) on mammograms; large datasets of up to 1 million images are available, which greatly boosts the performance of the machine learning systems (12,13). Unlike MRI, however, mammography cannot depict the exact three-dimensional (3D) location and volumetric extent of a lesion. Breast MRI has a higher diagnostic accuracy than mammography (14-16) and outperforms mammography in detection of residual tumors after neoadjuvant therapy (17). Additionally, background parenchymal enhancement measured at MRI with dynamic contrast enhancement is predictive of cancer risk (18). Several studies have automated tumor segmentation in breast MRI by using modern deep networks such as U-Nets or DeepMedic...

Research paper thumbnail of Deep learning achieves radiologist-level performance of tumor segmentation in breast MRI

ArXiv, 2020

Purpose: The goal of this research was to develop a deep network architecture that achieves fully automated, radiologist-level segmentation of breast tumors in MRI. Materials and Methods: We leveraged 38,229 clinical MRI breast exams collected retrospectively from women aged 12-94 years (mean age, 54 years) who presented between 2002 and 2014 at a single clinical site. The training set for the network consisted of 2,555 malignant breasts that were segmented in 2D by experienced radiologists, as well as 60,108 benign breasts that served as negative controls. The test set consisted of 250 exams with tumors segmented independently by four radiologists. We selected among several 3D deep convolutional neural network architectures, input modalities, and harmonization methods. The outcome measure was the Dice score for 2D segmentation, compared between the network and radiologists using the Wilcoxon signed-rank test and the TOST procedure. Results: The best-performing network on the training set ...

Research paper thumbnail of An Informatics Approach to Facilitate Clinical Management of Patients With Retrievable Inferior Vena Cava Filters

American Journal of Roentgenology, 2018

Vascular and Interventional Radiology • Original Research. This is a web-exclusive article.

Research paper thumbnail of Implementation of a Point-of-Care Radiologist-Technologist Communication Tool in a Quality Assurance Program

American Journal of Roentgenology, 2017

Tight communication between radiologists and referring physicians and between radiologists and patients has been recognized as an important part of radiology quality [1, 2]. There has been less discussion about the importance of tight communication between radiologists and technologists. With the increasing size of radiology departments and the larger number of imaging sites within a department, there is an increasing need for communication tools that enable radiologists and technologists to address a variety of acquisition and documentation issues that are key components of radiology quality. Proper image acquisition and documentation are components of a larger radiology quality control program [3-5]. Generally performed by licensed technologists, proper image acquisition and documentation include ensuring that the patient is ap...

Research paper thumbnail of Building Blocks for Integrating Image Analysis Algorithms into a Clinical Workflow

Purpose: Starting from a broad-based needs assessment and utilizing an image analysis algorithm (IAA) developed at our institution, the purpose of this study was to define generalizable building blocks necessary for the integration of any IAA into clinical practice. Methods: An IAA was developed at our institution to process lymphoscintigraphy exams. A team of radiologists defined a set of building blocks for integration of this IAA into the clinical workflow. The building blocks served the following roles: (1) timely delivery of images to the IAA, (2) quality control, (3) IAA results processing, (4) results presentation and delivery, (5) IAA error correction, (6) system performance monitoring, and (7) active learning. Utilizing these modules, the lymphoscintigraphy IAA was integrated into the clinical workflow at our institution. System performance was tested over a 1-month period, including assessment of the number of exams processed and delivered, and of error rates and corrections. Results: From J...
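The sketch below shows one plausible way the seven building blocks could be chained around a generic IAA call. The class and function names are illustrative assumptions, not the institution's actual implementation; several blocks (delivery, monitoring, active learning) are noted only as comments because they live outside a single function.

```python
# Minimal sketch of an IAA integration pipeline, under assumed names.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Exam:
    exam_id: str
    images: list
    results: Dict[str, object] = field(default_factory=dict)
    errors: List[str] = field(default_factory=list)

def run_iaa_pipeline(exam: Exam, iaa: Callable[[Exam], Dict[str, object]]) -> Exam:
    # (1) Timely delivery of images to the IAA is assumed to have happened upstream.
    # (2) Quality control: reject exams with no delivered images before inference.
    if not exam.images:
        exam.errors.append("QC failure: no images delivered")
        return exam
    # (3) IAA results processing.
    exam.results = iaa(exam)
    # (4) Results presentation & delivery would push exam.results to the reporting system.
    # (5) Error correction, (6) performance monitoring, and (7) active learning would
    #     consume corrected results and logs downstream of this function.
    return exam
```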

Research paper thumbnail of Application of deep learning techniques for characterization of 3D radiological datasets: a pilot study for detection of intravenous contrast in breast MRI

Medical Imaging 2019: Imaging Informatics for Healthcare, Research, and Applications

Categorization of radiological images according to characteristics such as modality, scanner parameters, and body part is important for quality control, clinical efficiency, and research. The metadata associated with images stored in the DICOM format reliably captures scanner settings such as tube current in CT or echo time (TE) in MRI. Other parameters, such as image orientation, body part examined, and presence of intravenous contrast, however, are not inherent to the scanner settings and therefore require user input, which is prone to human error. There is a general need for automated approaches that will appropriately categorize images, even by parameters that are not inherent to the scanner settings. These approaches should be able to process both planar 2D images and full 3D scans. In this work, we present a deep learning-based approach for automatically detecting one such parameter: the presence or absence of intravenous contrast in 3D MRI scans. Contrast is manually injected by radiology staff during the imaging examination, and its presence cannot be automatically recorded in the DICOM header by the scanner. Our classifier is a convolutional neural network (CNN) based on the ResNet architecture. Our data consisted of 1,000 breast MRI scans (500 scans with and 500 scans without intravenous contrast), split 80%/20% for training and testing the CNN, respectively. The labels for the scans were obtained from the series descriptions created by certified radiological technologists. Preliminary results of our classifier are very promising, with an area under the ROC curve (AUC) of 0.98 and sensitivity and specificity of 1.0 and 0.9, respectively (at the optimal ROC cut-off point), demonstrating potential usefulness in both clinical and research settings.
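As a hedged illustration of a ResNet-style classifier for a two-class contrast/no-contrast decision on 3D volumes, the sketch below uses torchvision's 3D ResNet-18 as a generic stand-in; the input shape, channel replication, and two-class head are assumptions, not the architecture described in the paper.

```python
# Minimal sketch: a 3D ResNet adapted for binary contrast detection on MRI volumes.
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18

model = r3d_18()  # generic 3D ResNet-18 backbone, randomly initialized
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: contrast vs. no contrast

# Hypothetical batch: 4 volumes, 3 replicated channels, 32 slices of 112x112 pixels.
volumes = torch.randn(4, 3, 32, 112, 112)
logits = model(volumes)
probabilities = torch.softmax(logits, dim=1)
print(probabilities.shape)  # torch.Size([4, 2])
```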

Research paper thumbnail of Federated Learning used for predicting outcomes in SARS-COV-2 patients

‘Federated Learning’ (FL) is a method to train Artificial Intelligence (AI) models with data from multiple sources while maintaining the anonymity of the data, thus removing many barriers to data sharing. During the SARS-CoV-2 pandemic, 20 institutes collaborated on a healthcare FL study to predict future oxygen requirements of infected patients using inputs of vital signs, laboratory data, and chest x-rays, constituting the “EXAM” (EMR CXR AI Model) model. EXAM achieved an average area under the curve (AUC) of over 0.92, an average improvement of 16%, and a 38% increase in generalisability over local models. The FL paradigm was successfully applied to facilitate a rapid data science collaboration without data exchange, resulting in a model that generalised across heterogeneous, unharmonized datasets. This provided the broader healthcare community with a validated model to respond to COVID-19 challenges and set the stage for broader use of FL in healthcare.
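To make the federated idea concrete, the sketch below shows a generic federated-averaging step: each site trains locally and only model weights (never patient data) are aggregated, weighted by site size. This is a minimal FedAvg illustration under assumed names, not the EXAM codebase.

```python
# Minimal sketch of federated averaging (FedAvg) over site-local models.
import copy
import torch
import torch.nn as nn

def federated_average(site_models: list, site_sizes: list) -> nn.Module:
    """Weighted average of model parameters, weighted by each site's sample count."""
    total = sum(site_sizes)
    global_model = copy.deepcopy(site_models[0])
    avg_state = global_model.state_dict()
    for key in avg_state:
        avg_state[key] = sum(
            m.state_dict()[key].float() * (n / total)
            for m, n in zip(site_models, site_sizes)
        )
    global_model.load_state_dict(avg_state)
    return global_model

# Hypothetical example: three sites sharing the same small network architecture.
sites = [nn.Linear(10, 1) for _ in range(3)]
global_model = federated_average(sites, site_sizes=[120, 450, 300])
```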

Research paper thumbnail of Automatic Forecasting of Radiology Examination Volume Trends for Optimal Resource Planning and Allocation

Journal of Digital Imaging

The aim of the study was to evaluate the performance of the Prophet forecasting procedure, part of the Facebook open-source Artificial Intelligence portfolio, for forecasting variations in radiological examination volumes. Daily CT and MRI examination volumes from our institution were extracted from the radiology information system (RIS) database. Data from January 1, 2015, to December 31, 2019, were used for training the Prophet algorithm, and data from January 2020 were used for validation. Algorithm performance was then evaluated prospectively in February and August 2020. Total error and mean error per day were evaluated, and computational time was logged using different numbers of Markov chain Monte Carlo (MCMC) samples. Data from 610,570 examinations were used for training; the majority were CTs (82.3%). During retrospective testing, prediction error was reduced from 19 to < 1 per day in CT (total 589 to 17) and from 5 to < 1 per day in MRI (total 144 to 27) by fine-tuning the Prophet procedure. Prospective prediction error in February was 11 per day in CT (9,934 predicted, 9,667 actual) and 1 per day in MRI (2,484 predicted, 2,457 actual) and was significantly better than manual weekly predictions (p = 0.001). Inference with MCMC added no substantial improvement while vastly increasing computational time. Prophet accurately models weekly, seasonal, and overall trends, paving the way for optimal resource allocation for radiology exam acquisition and interpretation.
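A minimal sketch of this kind of daily exam-volume forecast with the open-source Prophet package is shown below. The placeholder counts and the weekday/weekend pattern are invented; only the 'ds'/'y' column convention and the mcmc_samples switch follow Prophet's documented API, and the date ranges mirror the training/validation split described in the abstract.

```python
# Minimal sketch: forecasting daily CT volumes with Prophet (placeholder data).
import pandas as pd
from prophet import Prophet

# Hypothetical daily volumes standing in for RIS data from 2015-2019.
history = pd.DataFrame({"ds": pd.date_range("2015-01-01", "2019-12-31", freq="D")})
history["y"] = 500 + 50 * (history["ds"].dt.dayofweek < 5)  # weekday/weekend placeholder

# Default fit uses MAP estimation; mcmc_samples > 0 switches to full MCMC,
# which the study found added cost without improving accuracy.
model = Prophet(weekly_seasonality=True, yearly_seasonality=True, mcmc_samples=0)
model.fit(history)

# Forecast January 2020 for validation against actual volumes.
future = model.make_future_dataframe(periods=31, freq="D")
forecast = model.predict(future)
print(forecast[["ds", "yhat"]].tail())
```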

Research paper thumbnail of Integrating AI Algorithms into the Clinical Workflow

Radiology: Artificial Intelligence

Artificial intelligence (AI) applications are increasingly being developed for diagnostic imaging (1). These AI applications can be divided broadly into two categories: first, those pertaining to logistic workflows, including order scheduling, patient screening, radiologist reporting, and other operational analytics (also termed upstream AI); and second, those pertaining to the acquired imaging data themselves, such as automated detection and segmentation of findings or features, automated interpretation of findings, and image postprocessing (also termed downstream AI) (2). Numerous downstream AI applications have been developed in recent years. More than 120 AI applications in medical imaging are currently cleared by the U.S. Food and Drug Administration (3). Although a variety of applications are available, a major unaddressed issue is the difficulty of adopting AI algorithms into the workflow of clinical practice. AI algorithms are generally siloed systems that are not easily incorporated into existing information systems in a radiology department. Additionally, tools to measure and monitor the performance of AI systems within clinical workflows are lacking. We sought to define the requirements for effective AI deployment in the clinical workflow by considering an exemplar downstream AI application (automated interpretation and reporting of lymphoscintigraphy examinations) and to use that exemplar to develop generalizable components to meet the defined requirements. Materials and Methods: The institutional review board approved this retrospective study for development of the AI algorithm, which was compliant with the Health Insurance Portability and Accountability Act, and waived the requirement for written informed consent. Understanding the General Workflow and Particular Use Case: Our use case for deploying AI within the clinical workflow was an AI algorithm for evaluating lymphoscintigraphy examinations. These examinations are performed to identify sentinel lymph nodes (SLNs) in patients with invasive breast cancer, which potentially increases the accuracy of staging (4). The examination comprises images of the breasts and axillae (Fig 1), and the radiology report describes the location and positivity of SLNs. The AI algorithm we developed for this use case takes the images as inputs and outputs the following data for reporting: (a) observed sites of injection (right breast only, left breast only, bilateral breasts), (b) probability of radiotracer accumulation in the axillae (probability scores for none, right, left, or bilateral axillae), (c) number of right axillary lymph nodes (integer), and (d) number of left axillary lymph nodes (integer).
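The four outputs listed above map naturally onto a small structured record handed from the algorithm to the reporting system. The sketch below is an assumed representation of that record; the field names and enum-like strings are illustrative, inferred only from the outputs enumerated in the abstract.

```python
# Minimal sketch of a structured lymphoscintigraphy result for reporting.
from dataclasses import dataclass
from typing import Dict

@dataclass
class LymphoscintigraphyResult:
    injection_sites: str                            # "right breast only" | "left breast only" | "bilateral breasts"
    axillary_uptake_probability: Dict[str, float]   # keys: "none", "right", "left", "bilateral"
    right_axillary_node_count: int
    left_axillary_node_count: int

example = LymphoscintigraphyResult(
    injection_sites="bilateral breasts",
    axillary_uptake_probability={"none": 0.02, "right": 0.10, "left": 0.08, "bilateral": 0.80},
    right_axillary_node_count=2,
    left_axillary_node_count=1,
)
```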

Research paper thumbnail of Bag-of-Words Technique in Natural Language Processing: A Primer for Radiologists

RadioGraphics

Natural language processing (NLP) is a methodology designed to extract concepts and meaning from human-generated unstructured (free-form) text. It is intended to be implemented by using computer algorithms so that it can be run on a corpus of documents quickly and reliably. To enable machine learning (ML) techniques in NLP, free-form text must be converted to a numerical representation. After several stages of preprocessing including tokenization, removal of stop words, token normalization, and creation of a master dictionary, the bag-of-words (BOW) technique can be used to represent each remaining word as a feature of the document. The preprocessing steps simplify the documents but also potentially degrade meaning. The values of the features in BOW can be modified by using techniques such as term count, term frequency, and term frequency-inverse document frequency. Experience and experimentation will guide decisions on which specific techniques will optimize ML performance. These and other NLP techniques are being applied in radiology. Radiologists' understanding of the strengths and limitations of these techniques will help in communication with data scientists and in implementation for specific tasks.
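The bag-of-words and TF-IDF weighting steps described above can be shown in a few lines with scikit-learn. The two report texts below are invented examples; the vectorizer classes and their stop-word handling are standard scikit-learn, not the primer's own code.

```python
# Minimal sketch: term-count and TF-IDF bag-of-words features for two invented reports.
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

reports = [
    "No evidence of pulmonary nodule or pleural effusion.",
    "Small pleural effusion on the right; no pulmonary nodule identified.",
]

# Term counts after tokenization and English stop-word removal.
count_vectorizer = CountVectorizer(stop_words="english")
term_counts = count_vectorizer.fit_transform(reports)
print(count_vectorizer.get_feature_names_out())
print(term_counts.toarray())

# TF-IDF down-weights terms that appear in every document of the corpus.
tfidf_vectorizer = TfidfVectorizer(stop_words="english")
tfidf_features = tfidf_vectorizer.fit_transform(reports)
print(tfidf_features.toarray().round(2))
```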

Research paper thumbnail of Federated learning for predicting clinical outcomes in patients with COVID-19

Nature Medicine

The scientific, academic, medical and data science communities have come together in the face of the COVID-19 pandemic crisis to rapidly assess novel paradigms in artificial intelligence (AI) that are rapid and secure, and potentially incentivize data sharing and model training and testing without the usual privacy and data ownership hurdles of conventional collaborations (1,2). Healthcare providers, researchers and industry have pivoted their focus to address unmet and critical clinical needs created by the crisis, with remarkable results (3-9). Clinical trial recruitment has been expedited and facilitated by national regulatory bodies and an international cooperative spirit (10-12). The data analytics and AI disciplines have always fostered open...
