Comparative Study

Beyond Classification: Structured Regression for Robust Cell Detection Using Convolutional Neural Network

Yuanpu Xie et al. Med Image Comput Comput Assist Interv. 2015 Oct.

Abstract

Robust cell detection serves as a critical prerequisite for many biomedical image analysis applications. In this paper, we present a novel convolutional neural network (CNN) based structured regression model, which is shown to handle touching cells, inhomogeneous background noise, and large variations in size and shape. The proposed method only requires a few training images with weak annotations (just one click near the center of each object). Given an input image patch, instead of providing a single class label as many traditional methods do, our algorithm generates structured outputs (referred to as proximity patches). These proximity patches, which exhibit higher values for pixels near cell centers, are then gathered from all testing image patches and fused to obtain the final proximity map, whose maximum positions indicate the cell centroids. The algorithm is tested on three data sets representing different image stains and modalities. The comparative experiments demonstrate the superior performance of this novel method over existing state-of-the-art approaches.
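For concreteness, the sketch below shows one way the weak one-click annotations could be turned into a proximity mask with higher values near cell centers, which serves as the structured regression target. The decay function, radius, and function names are assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np

def proximity_mask(shape, centers, radius=5.0, alpha=3.0):
    """Build a proximity mask from point (click) annotations.

    Pixels within `radius` of an annotated center get a value that decays
    exponentially with distance and peaks at 1 on the center; everything
    else is zero. The exact decay used in the paper may differ -- this is
    an assumed formulation for illustration only.
    """
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    mask = np.zeros(shape, dtype=np.float32)
    for cy, cx in centers:
        d = np.sqrt((ys - cy) ** 2 + (xs - cx) ** 2)
        # exponential decay inside the radius, zero outside
        val = np.where(d <= radius, np.exp(alpha * (1.0 - d / radius)) - 1.0, 0.0)
        mask = np.maximum(mask, val / (np.exp(alpha) - 1.0))
    return mask

# toy usage: a 32x32 image with two annotated cell centers
m = proximity_mask((32, 32), [(10, 12), (20, 24)])
```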

Figures

Fig. 1

The CNN architecture used in the proposed structured regression model. C, M, and F denote the convolutional, max-pooling, and fully connected layers, respectively. The purple arrows from the last layer illustrate the mapping from the final layer's outputs to the final proximity patch.
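A rough PyTorch sketch of such a C/M/F architecture is given below. The filter counts, kernel sizes, and the 49x49 input / 11x11 output patch sizes are illustrative assumptions, not the paper's exact hyper-parameters; the key point is that the last fully connected layer is reshaped into a proximity patch rather than a class score.

```python
import torch
import torch.nn as nn

class StructuredRegressionCNN(nn.Module):
    """Illustrative C/M/F stack ending in a structured (patch) output.

    Assumes 49x49 RGB input patches and 11x11 proximity patches;
    all layer sizes are guesses for demonstration purposes.
    """
    def __init__(self, in_channels=3, out_patch=11):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=5), nn.ReLU(),  # C1: 49 -> 45
            nn.MaxPool2d(2),                                       # M1: 45 -> 22
            nn.Conv2d(32, 64, kernel_size=3), nn.ReLU(),           # C2: 22 -> 20
            nn.MaxPool2d(2),                                       # M2: 20 -> 10
        )
        self.out_patch = out_patch
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 10 * 10, 1024), nn.ReLU(),              # F1
            nn.Linear(1024, out_patch * out_patch), nn.Sigmoid(),  # F2
        )

    def forward(self, x):
        y = self.regressor(self.features(x))
        # reshape the fully connected outputs into the proximity patch
        return y.view(-1, self.out_patch, self.out_patch)

# toy usage: one 49x49 RGB patch in, one 11x11 proximity patch out
prox = StructuredRegressionCNN()(torch.randn(1, 3, 49, 49))  # shape (1, 11, 11)
```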

Fig. 2

(A): The training data generation process. Each original image has a proximity mask of the same size, and each local image patch has a proximity patch used as its structured label. (B): The fusion process. Each pixel receives predictions from its neighborhood; for example, the red dot collects the predictions from its 25 neighboring pixels, and their average is assigned as the final result. In this figure, only 4 of the 25 proximity patches are displayed.
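The fusion step in (B) amounts to averaging every patch prediction that covers a given pixel. A simplified NumPy sketch follows; the patch layout (keyed by top-left corner) and function name are assumptions, not the authors' implementation.

```python
import numpy as np

def fuse_proximity_patches(patch_dict, image_shape):
    """Average overlapping proximity-patch predictions into one map.

    `patch_dict` maps the top-left corner (y, x) of each predicted patch
    to its 2-D array. Every image pixel receives the mean of all patch
    values that cover it, mirroring the averaging described in Fig. 2(B).
    """
    acc = np.zeros(image_shape, dtype=np.float64)
    count = np.zeros(image_shape, dtype=np.float64)
    for (y, x), patch in patch_dict.items():
        ph, pw = patch.shape
        acc[y:y + ph, x:x + pw] += patch
        count[y:y + ph, x:x + pw] += 1.0
    # avoid division by zero for pixels no patch covers
    return np.divide(acc, count, out=np.zeros_like(acc), where=count > 0)

# toy usage: two overlapping 3x3 patches on a 5x5 image
patches = {(0, 0): np.ones((3, 3)), (1, 1): 0.5 * np.ones((3, 3))}
fused = fuse_proximity_patches(patches, (5, 5))
```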

Fig. 3

Cell detection results on three sample images from the three data sets. Yellow dots represent the detected cell centers; the ground-truth annotations are shown as green circles for better illustration.
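Detected centers such as the yellow dots can be read off the fused proximity map as local maxima. A minimal SciPy sketch, with an assumed window size and threshold (the paper's post-processing details may differ):

```python
import numpy as np
from scipy.ndimage import maximum_filter

def detect_centers(prox_map, min_distance=5, threshold=0.2):
    """Pick local maxima of the fused proximity map as cell centers.

    A pixel counts as a detection if it equals the maximum within a
    (2*min_distance+1) window and exceeds `threshold`; both values are
    illustrative assumptions.
    """
    local_max = maximum_filter(prox_map, size=2 * min_distance + 1)
    peaks = (prox_map == local_max) & (prox_map > threshold)
    return np.argwhere(peaks)  # array of (row, col) center coordinates

# toy usage on a random proximity map
centers = detect_centers(np.random.rand(64, 64))
```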

Fig. 4

Precision-recall curves of the four variations of the proposed algorithm on the three data sets. SR-5 achieves almost the same results as SR-1, and the proposed SR-1 significantly outperforms the other two pixel-wise CNN-based methods.
