
A Deep Convolutional Neural Network for segmenting and classifying epithelial and stromal regions in histopathological images

Jun Xu et al. Neurocomputing (Amst). 2016.

Abstract

Epithelial (EP) and stromal (ST) tissue are two types of tissue in histological images. Automated segmentation and classification of EP and ST regions is important when developing computerized systems for analyzing the tumor microenvironment. In this paper, a Deep Convolutional Neural Network (DCNN) based feature learning approach is presented to automatically segment and classify EP and ST regions in digitized tumor tissue microarrays (TMAs). Current approaches rely on handcrafted feature representations, such as color, texture, and Local Binary Patterns (LBP), to classify the two regions. In contrast to handcrafted-feature based approaches, which involve task-dependent representations, a DCNN is an end-to-end feature extractor that can be learned directly from the raw pixel intensity values of EP and ST tissue in a data-driven fashion. These high-level features contribute to the construction of a supervised classifier for discriminating the two types of tissue. In this work we compare DCNN based models with three handcrafted feature extraction based approaches on two different datasets, which consist of 157 Hematoxylin and Eosin (H&E) stained images of breast cancer and 1376 immunohistochemically (IHC) stained images of colorectal cancer, respectively. The DCNN based feature learning approach achieved an F1 classification score of 85%, 89%, and 100%, accuracy (ACC) of 84%, 88%, and 100%, and Matthews Correlation Coefficient (MCC) of 86%, 77%, and 100% on the two H&E stained cohorts (NKI and VGH) and the IHC stained data, respectively. Our DCNN based approach was shown to outperform the three handcrafted feature extraction based approaches in the classification of EP and ST regions.
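The three reported metrics (ACC, F1, MCC) are standard functions of the binary confusion matrix. As a reference for how they relate, here is a minimal sketch (not from the paper; EP taken as the positive class by assumption):

```python
import math

def binary_metrics(tp, fp, tn, fn):
    """Accuracy, F1 score, and Matthews Correlation Coefficient
    from binary confusion-matrix counts (EP = positive, ST = negative)."""
    acc = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    # MCC denominator: geometric mean of the four marginal totals.
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return acc, f1, mcc
```

Unlike accuracy, MCC stays informative when the EP/ST class balance is skewed, which is why the paper reports all three.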

Keywords: Breast histopathology; Colorectal cancer; Deep Convolutional Neural Networks; Feature representation; The classification of epithelial and stromal regions.


Figures

Fig. 1

The architecture of the new DCNN employed in this work. The network comprises (a) two alternating convolutional layers (or C-layers) with the convolutional operation (b) and max-pooling layers (or P-layers) with the max-pooling operation, and (c) two fully connected layers (or FC-layers) and an output layer.
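The two building blocks the caption names, the C-layer's convolution and the P-layer's max-pooling, can be sketched on single-channel arrays as follows (an illustration, not the paper's implementation; as in most CNN libraries, the "convolution" is cross-correlation, i.e. no kernel flip):

```python
import numpy as np

def conv2d_valid(image, kernel):
    """'Valid' single-channel cross-correlation: the C-layer operation (b)."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(feature_map, size=2):
    """Non-overlapping max-pooling: the P-layer operation."""
    h, w = feature_map.shape
    h, w = h - h % size, w - w % size  # drop any ragged border
    blocks = feature_map[:h, :w].reshape(h // size, size, w // size, size)
    return blocks.max(axis=(1, 3))
```

Stacking conv2d_valid, a nonlinearity, and max_pool twice, then flattening into two fully connected layers, reproduces the layer sequence shown in the figure.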

Fig. 2

Illustration of the DCNN+SMC approach to epithelial and stromal segmentation and classification for H&E (a–f) and IHC (g–k) stained histologic images. The original H&E (a) and IHC (g) stained images are over-segmented into sub-images using a SLIC (b) and a fixed-size square-window based approach (h), respectively. An exemplar patch (c) is resized into smaller 50 × 50 sub-images (d). The sub-images (d and i) are then fed to a DCNN (e and j) for segmentation and classification of epithelial and stromal regions, shown in panels (f) and (k), respectively. (For interpretation of the references to color in this figure caption, the reader is referred to the web version of this paper.)
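The fixed-size square-window over-segmentation used for the IHC images amounts to tiling the image into equal patches; a minimal sketch (an illustration only — the SLIC superpixel route used for the H&E images would instead require a superpixel implementation such as skimage.segmentation.slic):

```python
import numpy as np

def square_window_patches(image, win=50):
    """Tile an image into non-overlapping win x win sub-images,
    dropping any ragged border that does not fill a full window."""
    h, w = image.shape[:2]
    patches = []
    for top in range(0, h - win + 1, win):
        for left in range(0, w - win + 1, win):
            patches.append(image[top:top + win, left:left + win])
    return patches
```

Each returned patch matches the 50 × 50 input size the DCNN expects, so the tiles can be fed to the network directly without the resizing step needed for irregular superpixels.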

Fig. 3

Segmentation of epithelial (red) and stromal (green) regions on a tissue image (a) using the different segmentation approaches on _D_1. (b) The ground truth annotations of the stromal and epithelial regions in (a) by an expert pathologist. The classification results are shown for DCNN-Ncut-SVM (c), DCNN-Ncut-SMC (d), DCNN-SLIC-SVM (e), DCNN-SLIC-SMC (f), DCNN-SW-SVM (g), DCNN-SW-SMC (h), and Color-SW-SVM (i), respectively. (For interpretation of the references to color in this figure caption, the reader is referred to the web version of this paper.)

Fig. 4

The probability maps rendered by the different DCNN based approaches (Columns 3 and 4) and [19] (Column 2) for classifying EP ((a)–(e) in the left block, Column 1) and ST ((f)–(k) in the right block, Column 1) patches on _D_2. The false coloring (defined by the heat map (l)) of the sub-images in Columns 2–4 reflects the confidence score in predicting them as EP/ST regions via [19], DCNN+SVM, and DCNN+SMC, respectively. The various colors in the heat map (l) correspond to the predicted confidence scores (red = EP with 100% likelihood and blue = ST with 100% likelihood). (For interpretation of the references to color in this figure caption, the reader is referred to the web version of this paper.)

Fig. 5

The ROC curves for the different models (see Table 3) for detecting EP and ST regions on NKI (a) and VGH (b) data cohorts within _D_1.
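The area under such an ROC curve equals the probability that a randomly drawn EP score exceeds a randomly drawn ST score (the Mann–Whitney U statistic); a minimal sketch of that equivalence, not taken from the paper:

```python
def auc(scores_pos, scores_neg):
    """AUC as the fraction of (positive, negative) score pairs
    ranked correctly, counting ties as 1/2 (Mann-Whitney U / (m * n))."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

An AUC of 1.0 means every EP patch outscores every ST patch; 0.5 is chance-level ranking.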

Fig. 6

A plot of the score value on the _X_-axis versus the number of image patches on the _Y_-axis for the model DCNN-SW-SVM on dataset _D_2.

Fig. 7

Plot of AUC versus window size for the DCNN-SW-SVM model on NKI and VGH datasets within _D_1.


References

    1. Achanta R, Shaji A, Smith K, Lucchi A, Fua P, Susstrunk S. SLIC superpixels compared to state-of-the-art superpixel methods. IEEE Trans Pattern Anal Mach Intell. 2012;34(11):2274–2282.
    2. Ali S, et al. Spatially aware cell cluster graphs: predicting outcome in HPV associated oropharyngeal tumors. Medical Image Computing and Computer-Assisted Intervention. 2013;8149:412–519.
    3. Amaral T, McKenna S, Robertson K, Thompson A. Classification and immunohistochemical scoring of breast tissue microarray spots. IEEE Trans Biomed Eng. 2013;60(10):2806–2814.
    4. Beck AH, et al. Systematic analysis of breast cancer morphology uncovers stromal features associated with survival. Sci Transl Med. 2011;3:108ra113.
    5. Bengio Y, Lamblin P, Popovici D, Larochelle H, et al. Greedy layer-wise training of deep networks. Adv Neural Inf Process Syst. 2007;19:153.
