Patient-Specific Semi-supervised Learning for Postoperative Brain Tumor Segmentation

Segmentation of Gliomas in Pre-operative and Post-operative Multimodal Magnetic Resonance Imaging Volumes Based on a Hybrid Generative-Discriminative Framework

Brainlesion : glioma, multiple sclerosis, stroke and traumatic brain injuries : second International Workshop, BrainLes 2016, with the challenges on BRATS, ISLES and mTOP 2016, held in conjunction with MICCAI 2016, Athens, Greece, Octob..., 2016

We present an approach for segmenting both low- and high-grade gliomas in multimodal magnetic resonance imaging volumes. The proposed framework is an extension of our previous work [6,7], with an additional component for segmenting post-operative scans. The proposed approach is based on a hybrid generative-discriminative model. Firstly, a generative model based on a joint segmentation-registration framework is used to segment the brain scans into cancerous and healthy tissues. Secondly, a gradient boosting classification scheme is used to refine the tumor segmentation based on information from multiple patients. We evaluated our approach on 218 cases during the training phase of the BRAin Tumor Segmentation (BRATS) 2016 challenge and report promising results. During the testing phase, the proposed approach was ranked among the top performing methods after being additionally evaluated on 191 unseen cases.
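
As a concrete illustration of the discriminative refinement step described above, the following minimal sketch trains a gradient-boosting classifier on per-voxel features (multimodal intensities plus the generative model's tumor posterior) and relabels a new scan. All arrays, feature choices, and parameters here are synthetic stand-ins, not the authors' actual pipeline.

```python
# Hedged sketch: gradient-boosting refinement of a generative tumor posterior.
# Arrays and feature choices are illustrative, not the authors' exact pipeline.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def voxel_features(modalities, posterior, mask):
    """Per-voxel feature matrix: one column per MR channel + the generative posterior."""
    return np.stack([m[mask] for m in modalities] + [posterior[mask]], axis=1)

rng = np.random.default_rng(0)
shape = (16, 16, 16)
mask = np.ones(shape, dtype=bool)

# Synthetic "training patient": four MR channels, generative posterior, labels.
train_mods = [rng.normal(size=shape) for _ in range(4)]
train_post = rng.random(shape)
train_lbls = (train_post > 0.7).astype(np.uint8)   # toy ground truth

clf = GradientBoostingClassifier(n_estimators=50, max_depth=3)
clf.fit(voxel_features(train_mods, train_post, mask), train_lbls[mask])

# Refine a "new patient": predict a voxel-wise label map inside the brain mask.
new_mods = [rng.normal(size=shape) for _ in range(4)]
new_post = rng.random(shape)
refined = np.zeros(shape, dtype=np.uint8)
refined[mask] = clf.predict(voxel_features(new_mods, new_post, mask))
print("refined tumor voxels:", int(refined.sum()))
```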

Interactive segmentation of MR images from brain tumor patients

2014 IEEE 11th International Symposium on Biomedical Imaging (ISBI), 2014

Medical doctors often do not trust the results of fully automatic segmentations because they have no way to make corrections when necessary. On the other hand, manual corrections can introduce a user bias. In this work, we propose to integrate the possibility of quick manual corrections into a fully automatic segmentation method for brain tumor images. This allows for necessary corrections while maintaining a high degree of objectivity. The underlying idea is similar to the well-known Grab-Cut algorithm, but here we combine decision forest classification with conditional random field regularization for interactive segmentation of 3D medical images. The approach was evaluated by two different users on the BraTS2012 dataset. Accuracy and robustness improved compared to a fully automatic method, and our interactive approach was ranked among the top performing methods. Computation time including manual interaction was less than 10 minutes per patient, which makes the approach attractive for clinical use.
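
The sketch below illustrates the interactive idea in simplified form: a decision forest trained on user scribbles produces voxel-wise tumor probabilities, which are then spatially regularized. A Gaussian smoothing of the probability map stands in for the paper's conditional random field, and all data, features, and parameters are illustrative assumptions.

```python
# Hedged sketch: decision-forest classification from user scribbles, followed
# by crude spatial regularization standing in for a CRF.
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.ensemble import RandomForestClassifier

def interactive_segmentation(image, fg_scribble, bg_scribble, sigma=1.0):
    """image: (X, Y, Z) intensity volume; fg/bg_scribble: boolean masks of user strokes."""
    feats = image.reshape(-1, 1)                 # trivially, raw intensity per voxel
    seeds = fg_scribble | bg_scribble
    labels = fg_scribble.astype(np.uint8)

    clf = RandomForestClassifier(n_estimators=50, n_jobs=-1)
    clf.fit(feats[seeds.ravel()], labels[seeds])

    prob = clf.predict_proba(feats)[:, 1].reshape(image.shape)
    prob = gaussian_filter(prob, sigma=sigma)    # smoothing in place of CRF regularization
    return prob > 0.5

rng = np.random.default_rng(1)
vol = rng.normal(size=(32, 32, 16))
vol[8:20, 8:20, 4:12] += 2.0                     # bright synthetic "tumor"
fg = np.zeros_like(vol, dtype=bool); fg[12:16, 12:16, 6:10] = True
bg = np.zeros_like(vol, dtype=bool); bg[0:4, 0:4, 0:4] = True
seg = interactive_segmentation(vol, fg, bg)
print("segmented voxels:", int(seg.sum()))
```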

Semi-supervised discriminative classification with application to tumorous tissues segmentation of MR brain images

Pattern Analysis and Applications, 2008

Because of the large data size of 3D MR brain images and the blurry boundaries of pathological tissues, tumor segmentation is difficult. This paper introduces a discriminative classification algorithm for semi-automated segmentation of brain tumorous tissues. The classifier uses interactive hints to obtain models that classify normal and tumor tissues. A non-parametric Bayesian Gaussian random field is implemented in the semi-supervised mode. Our approach uses both labeled data and a subset of unlabeled data sampled from 2D/3D images to train the model. A fast algorithm is also developed. Experiments show that our approach produces satisfactory segmentation results compared to results manually labeled by experts. Keywords: Magnetic resonance imaging (MRI) · Brain tumor segmentation · Semi-automated segmentation · Gaussian random field (GRF) · Gaussian process (GP)
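
As a rough analogue of the semi-supervised setting described above, the sketch below uses scikit-learn's graph-based LabelSpreading (closely related to Gaussian-random-field label propagation) to classify many unlabeled voxels from only a handful of labeled "hints". The toy feature vectors and label counts are assumptions, not the paper's data or exact model.

```python
# Hedged sketch: graph-based semi-supervised classification from a few hints,
# using LabelSpreading as an off-the-shelf analogue of a Gaussian random field.
import numpy as np
from sklearn.semi_supervised import LabelSpreading

rng = np.random.default_rng(2)
# Toy per-voxel feature vectors (e.g. intensities from several MR channels).
normal = rng.normal(loc=0.0, size=(500, 3))
tumor = rng.normal(loc=2.0, size=(500, 3))
X = np.vstack([normal, tumor])

# Only a few interactive "hints" are labeled; -1 marks unlabeled voxels.
y = np.full(len(X), -1)
y[:10] = 0           # a few normal-tissue hints
y[500:510] = 1       # a few tumor hints

model = LabelSpreading(kernel="rbf", gamma=1.0)
model.fit(X, y)
pred = model.transduction_              # labels inferred for every voxel
print("predicted tumor voxels:", int((pred == 1).sum()))
```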

MRI Brain Tumor Segmentation and Patient Survival Prediction Using Random Forests and Fully Convolutional Networks

Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries

In this paper, we propose a novel learning-based method for automated segmentation of brain tumors in multimodal MRI images. Features learned by a fully convolutional neural network (FCN) and hand-designed texton features are used to classify the MRI image voxels. The score map with pixelwise predictions, learned from the multimodal MRI training dataset by the FCN, is used as a feature map. The learned features are then fed to random forests to classify each MRI image voxel into normal brain tissue and different parts of the tumor. The method was evaluated on the BRATS 2013 challenge dataset. The results show that applying the random forest classifier to multimodal MRI images using machine-learned features based on the FCN and hand-designed features based on textons provides promising segmentations. The Dice overlap measures for automatic brain tumor segmentation against ground truth are 0.88, 0.80, and 0.73 for complete tumor, core, and enhancing tumor, respectively.
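
The feature-fusion step described above can be sketched as follows: per-voxel FCN score maps are concatenated with hand-designed texture features and fed to a random forest. The FCN scores and texture responses are faked with random arrays here; only the fusion and classification mechanics are illustrated, under assumed array shapes.

```python
# Hedged sketch: fuse FCN score maps with texture features, classify per voxel.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def fuse_and_classify(fcn_scores, texture_feats, labels=None, clf=None):
    """fcn_scores: (C, X, Y, Z) class score maps from the FCN
    texture_feats: (K, X, Y, Z) texton-style filter responses
    labels: (X, Y, Z) voxel labels when training, None when predicting."""
    feats = np.concatenate([fcn_scores, texture_feats], axis=0)
    X = feats.reshape(feats.shape[0], -1).T                # (n_voxels, C + K)
    if labels is not None:
        clf = RandomForestClassifier(n_estimators=100, n_jobs=-1)
        clf.fit(X, labels.ravel())
        return clf
    return clf.predict(X).reshape(fcn_scores.shape[1:])

rng = np.random.default_rng(3)
shape = (16, 16, 8)
scores, textons = rng.random((4, *shape)), rng.random((6, *shape))
lbls = rng.integers(0, 4, size=shape)                      # 0 = normal, 1-3 = tumor parts
forest = fuse_and_classify(scores, textons, labels=lbls)
seg = fuse_and_classify(rng.random((4, *shape)), rng.random((6, *shape)), clf=forest)
print("label counts:", np.bincount(seg.ravel()))
```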

MRI Brain Tumor Segmentation using Random Forests and Fully Convolutional Networks

ArXiv, 2019

In this paper, we propose a novel learning-based method for automated segmentation of brain tumors in multimodal MRI images, which incorporates two sets of features: machine-learned and hand-crafted. A fully convolutional network (FCN) provides the machine-learned features, while texton-based features serve as the hand-crafted features. A random forest (RF) is used to classify the MRI image voxels into normal brain tissue and the different parts of the tumor, i.e. edema, necrosis, and enhancing tumor. The method was evaluated on the BRATS 2017 challenge dataset. The results show that the proposed method provides promising segmentations. The mean Dice overlap measures for automatic brain tumor segmentation against ground truth are 0.86, 0.78, and 0.66 for whole tumor, core, and enhancing tumor, respectively.
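
The Dice figures quoted above can be reproduced from a label map with a short utility like the one below, which evaluates the three nested BraTS regions. The label convention used here (1 = necrosis/core, 2 = edema, 4 = enhancing) is the usual BraTS encoding and is an assumption about the data at hand.

```python
# Hedged sketch: Dice overlap for the whole tumor, tumor core, and enhancing tumor.
import numpy as np

def dice(pred_mask, gt_mask):
    inter = np.logical_and(pred_mask, gt_mask).sum()
    denom = pred_mask.sum() + gt_mask.sum()
    return 2.0 * inter / denom if denom > 0 else 1.0

def brats_dice(pred, gt):
    regions = {
        "whole tumor": lambda m: m > 0,
        "tumor core": lambda m: np.isin(m, (1, 4)),
        "enhancing": lambda m: m == 4,
    }
    return {name: dice(f(pred), f(gt)) for name, f in regions.items()}

rng = np.random.default_rng(4)
gt = rng.choice([0, 0, 0, 1, 2, 4], size=(32, 32, 16))
pred = gt.copy()
pred[rng.random(gt.shape) < 0.1] = 0       # corrupt 10% of voxels to fake a prediction
print(brats_dice(pred, gt))
```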

An Optimized Segmentation Framework Applied to Glioma Delimitation

Studies in Informatics and Control

In this article we describe our segmentation framework applied to glioma delimitation in multimodal magnetic resonance images. Statistical pattern recognition strategies are applied to create a discriminative function. The discriminative classifier is the result of an automatic learning process based on the random forest (RF) algorithm. This algorithm is used for two different purposes: the construction of the segmentation classifiers and the variable-importance evaluation task. In the training phase, the most important local image features are selected and the optimal parameters of the RF classifier are determined. The goal is to find the discriminative model that yields the best possible segmentation performance. The resulting segmentation framework was evaluated online using the brain tumor segmentation benchmark system, and its performance was compared with the best results reported in the literature.
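
The dual use of random forests mentioned above can be sketched in two steps: one forest ranks local image features by importance, and a second forest is trained as the classifier on the selected subset. Feature counts, data, and the size of the selected subset are assumptions for illustration only.

```python
# Hedged sketch: RF-based variable-importance ranking followed by retraining
# on the selected features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(5)
n_voxels, n_features = 5000, 20
X = rng.normal(size=(n_voxels, n_features))
y = (X[:, 0] + 0.5 * X[:, 3] + 0.1 * rng.normal(size=n_voxels) > 0).astype(int)

# Step 1: variable-importance evaluation.
ranker = RandomForestClassifier(n_estimators=200, n_jobs=-1, random_state=0)
ranker.fit(X, y)
top = np.argsort(ranker.feature_importances_)[::-1][:5]    # keep the 5 best features
print("selected features:", top)

# Step 2: train the segmentation classifier on the selected features only.
clf = RandomForestClassifier(n_estimators=200, n_jobs=-1, random_state=0)
clf.fit(X[:, top], y)
print("training accuracy on selected features:", clf.score(X[:, top], y))
```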

Multimodal MRI brain tumor segmentation using random forests with features learned from fully convolutional neural network

arXiv (Cornell University), 2017

In this paper, we propose a novel learning-based method for automated segmentation of brain tumors in multimodal MRI images. Features learned by a fully convolutional neural network (FCN) and hand-designed texton features are used to classify the MRI image voxels. The score map with pixelwise predictions, learned from the multimodal MRI training dataset by the FCN, is used as a feature map. The learned features are then fed to random forests to classify each MRI image voxel into normal brain tissue and different parts of the tumor. The method was evaluated on the BRATS 2013 challenge dataset. The results show that applying the random forest classifier to multimodal MRI images using machine-learned features based on the FCN and hand-designed features based on textons provides promising segmentations. The Dice overlap measures for automatic brain tumor segmentation against ground truth are 0.88, 0.80, and 0.73 for complete tumor, core, and enhancing tumor, respectively.
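
For the hand-designed side of the feature set mentioned above, one illustrative way to obtain texture responses per voxel is a small multi-scale filter bank. The specific filters and scales below are assumptions; the paper's texton features come from its own filter bank and clustering step.

```python
# Hedged sketch: texton-style texture features from a simple multi-scale
# Gaussian / gradient-magnitude / Laplacian filter bank.
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_gradient_magnitude, gaussian_laplace

def texture_features(volume, sigmas=(1.0, 2.0, 4.0)):
    """Return a (K, X, Y, Z) stack of filter responses for one MR channel."""
    responses = []
    for s in sigmas:
        responses.append(gaussian_filter(volume, sigma=s))
        responses.append(gaussian_gradient_magnitude(volume, sigma=s))
        responses.append(gaussian_laplace(volume, sigma=s))
    return np.stack(responses, axis=0)

rng = np.random.default_rng(6)
flair = rng.normal(size=(32, 32, 16))
feats = texture_features(flair)
print("feature stack shape:", feats.shape)   # (9, 32, 32, 16)
```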

Semi-supervised clustering for MR brain image segmentation

Expert Systems with Applications, 2014

Magnetic resonance (MR) brain image segmentation of different anatomical structures or tissue types has become a critical requirement in the diagnosis of neurological diseases. Depending on the availability of training samples, image segmentation can be either supervised or unsupervised. While supervised learning requires a sufficient amount of labelled training data, which is expensive and time-consuming to obtain, unsupervised learning techniques suffer from the problem of local traps. Semi-supervised algorithms that include prior knowledge in the unsupervised learning can enhance the segmentation process without the need for labelled training data. This paper proposes a method to improve the quality of MR brain tissue segmentation and to accelerate the convergence process. The proposed method is a clustering-based semi-supervised classifier that does not need a set of labelled training data and uses less human expert analysis than a supervised approach. The proposed classifier labels the voxel clusters of an image slice and then uses the statistics and class-label information of the resulting clusters to classify the remaining image slices by applying a Gaussian Mixture Model (GMM). The experimental results show that the proposed semi-supervised approach accelerates convergence and improves accuracy when compared with the classical GMM approach.
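
The slice-propagation idea described above can be sketched as follows: a Gaussian mixture is fitted to a reference slice, its components are named with minimal expert input, and the fitted component statistics then classify the remaining slices. The three-class toy intensities and the ordering-by-mean labeling rule below are assumptions standing in for the expert step.

```python
# Hedged sketch: label one slice's GMM components, then classify other slices
# with the same fitted mixture.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(7)
def fake_slice():
    # Three synthetic "tissues" with different mean intensities.
    return np.concatenate([rng.normal(m, 0.3, size=400) for m in (0.0, 1.5, 3.0)])

ref_slice = fake_slice()
gmm = GaussianMixture(n_components=3, random_state=0).fit(ref_slice.reshape(-1, 1))

# An expert would name each component; here components are simply ordered by
# mean intensity as a stand-in for that manual step (e.g. 0, 1, 2 = three tissues).
order = np.argsort(gmm.means_.ravel())
component_to_label = {int(c): l for l, c in enumerate(order)}

# Classify a new slice with the statistics learned on the reference slice.
new_slice = fake_slice()
pred = np.array([component_to_label[int(c)] for c in gmm.predict(new_slice.reshape(-1, 1))])
print("label counts:", np.bincount(pred))
```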

Bayesian Network and Structured Random Forest Cooperative Deep Learning for Automatic Multi-label Brain Tumor Segmentation

Proceedings of the 10th International Conference on Agents and Artificial Intelligence, 2018

Brain cancer phenotyping and treatment are highly informed by radiomic analyses of medical images. Specifically, the reliability of radiomics, which refers to extracting features from the tumor image intensity, shape, and texture, depends on the accuracy of the tumor boundary segmentation. Hence, developing fully-automated brain tumor segmentation methods is highly desirable for processing large imaging datasets. In this work, we propose a cooperative learning framework for multi-label brain tumor segmentation, which leverages Structured Random Forest (SRF) and Bayesian Networks (BN). Basically, we embed both strong SRF and BN classifiers into a multi-layer deep architecture, where they cooperate to better learn tumor features for our multi-label classification task. The proposed SRF-BN cooperative learning integrates two complementary merits of both classifiers: while SRF exploits structural and contextual image information to perform classification at the pixel level, BN represents the statistical dependencies between image components at the superpixel level. To further improve this SRF-BN cooperative learning, we 'deepen' the cooperation through a multi-layer framework wherein, at each layer, BN takes as input the original multimodal MR images along with the probability maps generated by SRF. Through transfer learning from SRF to BN, the performance of BN improves. In turn, in the next layer, SRF also benefits from BN's learning by taking as input the BN segmentation maps along with the original multimodal images. With the exception of the first layer, both classifiers use the output segmentation maps from the previous layer, in the spirit of auto-context models. We evaluated our framework on 50 subjects with multimodal MR images (FLAIR, T1, T1-c) to segment the whole tumor, its core, and the enhancing tumor. Our segmentation results outperformed those of several comparison methods, including the independent (non-cooperative) learning of SRF and BN.
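
The layered, auto-context style cooperation described above can be sketched with two generic classifiers that repeatedly exchange probability maps: at each layer, each model receives the original voxel features plus the other model's previous output. A plain random forest stands in for the structured random forest and a naive Bayes model for the Bayesian network; both substitutions, and the toy data, are simplifying assumptions.

```python
# Hedged sketch: multi-layer cooperation in the auto-context spirit, with
# stand-in classifiers exchanging probability maps across layers.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(8)
X = rng.normal(size=(3000, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

srf_prob = np.full((len(X), 1), 0.5)       # neutral maps before the first layer
bn_prob = np.full((len(X), 1), 0.5)

for layer in range(3):
    srf = RandomForestClassifier(n_estimators=100, random_state=layer)
    srf.fit(np.hstack([X, bn_prob]), y)                     # "SRF" sees BN's previous output
    srf_prob = srf.predict_proba(np.hstack([X, bn_prob]))[:, 1:2]

    bn = GaussianNB()
    bn.fit(np.hstack([X, srf_prob]), y)                     # "BN" sees SRF's current output
    bn_prob = bn.predict_proba(np.hstack([X, srf_prob]))[:, 1:2]
    print(f"layer {layer}: BN training accuracy = {bn.score(np.hstack([X, srf_prob]), y):.3f}")
```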

Preoperative Brain Tumor Imaging: Models and Software for Segmentation and Standardized Reporting

Frontiers in Neurology

For patients suffering from a brain tumor, prognosis estimation and treatment decisions are made by a multidisciplinary team based on a set of preoperative MR scans. Currently, the lack of standardized and automatic methods for tumor detection and generation of clinical reports, incorporating a wide range of tumor characteristics, represents a major hurdle. In this study, we investigate the most frequently occurring brain tumor types: glioblastomas, lower-grade gliomas, meningiomas, and metastases, through four cohorts of up to 4,000 patients. Tumor segmentation models were trained using the AGU-Net architecture with different preprocessing steps and protocols. Segmentation performances were assessed in-depth using a wide range of voxel-wise and patient-wise metrics covering volume, distance, and probabilistic aspects. Finally, two software solutions have been developed, enabling easy use of the trained models and standardized generation of clinical reports: Raidionics and Raidionics-Slicer. Segme...
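
As an example of the distance-based metrics referred to above, the sketch below computes the 95th-percentile symmetric surface distance (HD95) between a predicted and a reference binary mask using Euclidean distance transforms. The voxel spacing and the toy masks are illustrative assumptions.

```python
# Hedged sketch: 95th-percentile Hausdorff distance between two binary masks.
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def surface(mask):
    return mask & ~binary_erosion(mask)

def hd95(pred, ref, spacing=(1.0, 1.0, 1.0)):
    """95th-percentile symmetric surface distance between two binary masks."""
    d_to_ref = distance_transform_edt(~ref, sampling=spacing)
    d_to_pred = distance_transform_edt(~pred, sampling=spacing)
    dists = np.concatenate([d_to_ref[surface(pred)], d_to_pred[surface(ref)]])
    return np.percentile(dists, 95)

ref = np.zeros((48, 48, 24), dtype=bool); ref[10:30, 10:30, 6:18] = True
pred = np.zeros_like(ref); pred[12:32, 11:31, 6:18] = True
print(f"HD95 = {hd95(pred, ref):.2f} mm")
```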