
A radiographic, deep transfer learning framework, adapted to estimate lung opacities from chest x-rays

Avantika Vardhan et al. Bioelectron Med. 2023.

Abstract

Chest radiographs (CXRs) are the most widely available radiographic imaging modality used to detect respiratory diseases that result in lung opacities. CXR reports often use non-standardized language, resulting in subjective, qualitative, and non-reproducible opacity estimates. Our goal was to develop a robust deep transfer learning framework and adapt it to estimate the degree of lung opacity from CXRs. After CXR data selection based on exclusion criteria, segmentation schemes were used for ROI (Region Of Interest) extraction, and all combinations of segmentation, data balancing, and classification methods were tested to pick the top-performing models. Multifold cross validation was then used to select the best model from the initially chosen top models, based on appropriate performance metrics as well as a novel Macro-Averaged Heatmap Concordance Score (MA HCS). Performance of the best model was compared against that of expert physician annotators, and heatmaps were produced. Finally, a sensitivity analysis of model performance across patient populations of interest was performed. The proposed framework was adapted to the specific use case of estimating the degree of CXR lung opacity using ordinal multiclass classification. We used 38,365 prospectively annotated CXRs from 17,418 patients, acquired between March 24, 2020, and May 22, 2020. We tested three neural network architectures (ResNet-50, VGG-16, and ChexNet), three segmentation schemes (no segmentation, lung segmentation, and lateral segmentation based on spine detection), and three data balancing strategies (undersampling, double-stage sampling, and synthetic minority oversampling), using 38,079 CXR images for training, and validating with 286 images held out as the out-of-the-box dataset that underwent expert radiologist adjudication.
Based on the results of these experiments, the ResNet-50 model with undersampling and no ROI segmentation is recommended for lung opacity classification, as it achieved optimal values for the MAE (Mean Absolute Error) metric and the HCS (Heatmap Concordance Score). Agreement between the opacity scores predicted by this model and each of the two sets of radiologist scores (OR, Original Reader; OOBTR, Out Of Box Reader), measured on the same performance metrics, exceeded the inter-radiologist opacity score agreement.
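The MAE used to compare model and radiologist scores is the mean absolute difference between two lists of ordinal opacity scores. A minimal sketch follows; the per-lung integer scores below are illustrative, not values taken from the study.

```python
# Hypothetical per-lung ordinal opacity scores (e.g., 0 = clear, higher = more
# opaque); the values are made up for illustration only.
model_scores  = [2, 3, 1, 0, 4, 2]
reader_scores = [2, 2, 1, 1, 4, 3]

def mean_absolute_error(predicted, annotated):
    """Mean absolute difference between two equal-length lists of scores."""
    assert len(predicted) == len(annotated)
    return sum(abs(p - a) for p, a in zip(predicted, annotated)) / len(predicted)

print(mean_absolute_error(model_scores, reader_scores))  # 0.5
```

Lower MAE against a reader's scores means closer agreement; the paper reports that the model's MAE against each reader was better than the MAE between the two readers themselves.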

Keywords: Chest X-ray (CXR); Deep transfer learning; Heatmap concordance; Lung opacity; Ordinal classification; Pretrained model.

© 2022. The Author(s).


Conflict of interest statement

The authors declare they have no competing interests.

Figures

Fig. 1


Schematic of the proposed pipeline. The different steps of the pipeline are denoted with letters A–I. Overall pipeline of CXR framework for scoring opacity using deep learning. Steps include (A) Data Preparation: DICOM to PNG conversion and application of exclusion criteria; (B) image preprocessing and ROI extraction; (C) train/test data split and data balancing; (D) transfer learning setup for testing models generated using multiple combinations of X-ray segmentation schemes, sampling schemes to overcome dataset bias, and CNN architectures; (E) level 1 – single-fold analysis to determine top ‘N’ models; (F) level 2 – K-fold cross validation to determine best model; (G) comparison of best model with reader scores; (H) heatmap for visualization; and (I) model performance analysis across different patient populations, grouped by sex, race, and COVID-19 status
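The class balancing in step C (the undersampling strategy that the recommended model used) can be sketched as random undersampling of every class down to the size of the rarest class. The labels and score range here are illustrative, not drawn from the study's data.

```python
import random
from collections import defaultdict

def undersample(labels, seed=0):
    """Return sorted indices of a class-balanced subset, obtained by
    randomly undersampling every class to the size of the rarest class."""
    by_class = defaultdict(list)
    for idx, label in enumerate(labels):
        by_class[label].append(idx)
    n_min = min(len(idxs) for idxs in by_class.values())
    rng = random.Random(seed)
    kept = []
    for idxs in by_class.values():
        kept.extend(rng.sample(idxs, n_min))
    return sorted(kept)

# Toy imbalanced label set: hypothetical opacity scores 0..2
labels = [0] * 8 + [1] * 3 + [2] * 5
balanced = undersample(labels)  # 3 examples per class -> 9 indices
```

The fixed seed makes the subset reproducible; in practice the kept indices would be used to select the corresponding CXR images for training.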

Fig. 2


Schematic of exclusion criteria and transfer learning framework. (A) Data exclusion criteria: multiple stages of image acquisition, DICOM to PNG conversion, and application of other exclusion criteria during the data pre-processing stage prior to final data creation; and (B) transfer learning framework used to leverage weights from pre-trained models along with our dataset of CXRs with scored opacities. The two different schemes and stages explored using this framework are portrayed

Fig. 3


Examples of generated heatmaps and classifications from the model, the OR, and the OOBTR. The values within the parentheses in the figure refer to the predicted (in the case of the model) or annotated (in the case of a reader) opacity scores for the left and right lungs, respectively. For example, OR (2,3) indicates an Original Reader (OR) CXR opacity score of 2 for the left lung and 3 for the right lung
