Accurate Segmentation of CT Male Pelvic Organs via Regression-based Deformable Models and Multi-task Random Forests

Deformable landmark-free active appearance models: application to segmentation of multi-institutional prostate MRI data

Prostate segmentation is a necessary step for computer-aided diagnosis systems, volume estimation, and treatment planning. The use of standard datasets is vital for comparing different segmentation algorithms, and 100 datasets from 4 institutions were gathered to test different algorithms on T2-weighted MR imagery. In this paper, a landmark-free Active Appearance Model-based segmentation algorithm was employed to segment the prostate from MR images. A deformable registration framework was created to register a new image to a trained appearance model; the resulting transformation was then applied to the prostate shape to yield a final segmentation. Results on 50 training studies yielded a median Dice coefficient of 0.80, and the fully automated algorithm was able to segment each prostate in under 3 minutes.
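
For reference, the Dice coefficient reported here (and in several of the studies below) measures the overlap between an automatic and a reference segmentation. A minimal sketch of its computation on binary masks, not taken from any of these papers, is:

```python
import numpy as np

def dice_coefficient(auto_mask: np.ndarray, reference_mask: np.ndarray) -> float:
    """Dice overlap between two binary segmentation masks (1 = organ, 0 = background)."""
    auto = auto_mask.astype(bool)
    ref = reference_mask.astype(bool)
    intersection = np.logical_and(auto, ref).sum()
    denominator = auto.sum() + ref.sum()
    if denominator == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / denominator
```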

Segmentation of pelvic structures for planning CT using a geometrical shape model tuned by a multi-scale edge detector

Physics in Medicine and Biology, 2014

Accurate segmentation of the prostate and organs at risk in computed tomography (CT) images is a crucial step for radiotherapy (RT) planning. Manual segmentation, as performed nowadays, is a time-consuming process prone to errors due to high intra- and inter-expert variability. This paper introduces a new automatic method for prostate, rectum, and bladder segmentation in planning CT using a geometrical shape model under a Bayesian framework. A set of prior organ shapes is first built by applying Principal Component Analysis (PCA) to a population of manually delineated CT images. Then, for a given individual, the most similar shape is obtained by mapping a set of multi-scale edge observations to the space of organs with a customized likelihood function. Finally, the selected shape is locally deformed to adjust to the edges of each organ. Experiments were performed with real data from a population of 116 patients treated for prostate cancer. The data set was split into training and test groups, with 30 and 86 patients, respectively. Results show that the method produces competitive segmentations w.r.t. standard methods (average Dice = 0.91 for the prostate, 0.94 for the bladder, and 0.89 for the rectum) and outperforms majority-vote multi-atlas approaches (using rigid registration, free-form deformation (FFD), and the demons algorithm).
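
For illustration, the PCA shape prior described above can be sketched as follows: aligned training shapes are stacked as vectors, and a new shape is approximated by projecting onto the leading modes of variation. This is a minimal sketch of the general idea only; the Bayesian edge-matching and local deformation steps of the paper are omitted.

```python
import numpy as np

def build_pca_shape_model(training_shapes: np.ndarray, n_modes: int):
    """training_shapes: (n_subjects, n_points * 3) array of aligned organ surface points."""
    mean_shape = training_shapes.mean(axis=0)
    centered = training_shapes - mean_shape
    # Eigen-decomposition of the shape covariance via SVD; rows of `components` are modes.
    _, singular_values, components = np.linalg.svd(centered, full_matrices=False)
    return mean_shape, components[:n_modes], singular_values[:n_modes]

def project_to_model(shape: np.ndarray, mean_shape: np.ndarray, components: np.ndarray):
    """Approximate an observed shape by the closest shape in the PCA subspace."""
    coefficients = components @ (shape - mean_shape)
    return mean_shape + components.T @ coefficients
```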

Joint Registration and Segmentation via Multi-Task Learning for Adaptive Radiotherapy of Prostate Cancer

IEEE Access, 2021

Medical image registration and segmentation are two of the most frequent tasks in medical image analysis. As these tasks are complementary and correlated, it would be beneficial to apply them simultaneously in a joint manner. In this paper, we formulate registration and segmentation as a joint problem via a Multi-Task Learning (MTL) setting, allowing these tasks to leverage their strengths and mitigate their weaknesses through the sharing of beneficial information. We propose to merge these tasks not only at the loss level, but at the architectural level as well. We studied this approach in the context of adaptive image-guided radiotherapy for prostate cancer, where planning and follow-up CT images as well as their corresponding contours are available for training. At test time the contours of the follow-up scans are not available, which is a common scenario in adaptive radiotherapy. The study involves two datasets from different manufacturers and institutes. The first dataset was divided into training (12 patients) and validation (6 patients) sets and was used to optimize and validate the methodology, while the second dataset (14 patients) was used as an independent test set. We carried out an extensive quantitative comparison of the quality of the automatically generated contours from different network architectures as well as loss weighting methods. Moreover, we evaluated the quality of the generated deformation vector field (DVF). We show that MTL algorithms outperform their Single-Task Learning (STL) counterparts and achieve better generalization on the independent test set. The best algorithm achieved a mean surface distance of 1.06 ± 0.3 mm, 1.27 ± 0.4 mm, 0.91 ± 0.4 mm, and 1.76 ± 0.8 mm on the validation set for the prostate, seminal vesicles, bladder, and rectum, respectively. The high accuracy of the proposed method, combined with its fast inference speed, makes it a promising method for automatic re-contouring of follow-up scans for adaptive radiotherapy, potentially reducing treatment-related complications and therefore improving patients' quality of life after treatment. The source code is available at https://github.com/moelmahdy/JRS-MTL. INDEX TERMS: Image segmentation, deformable image registration, adaptive radiotherapy, contour propagation, convolutional neural networks (CNN), multi-task learning (MTL), uncertainty weighting, dynamic weight averaging.
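
The "uncertainty weighting" listed among the index terms is commonly implemented with learnable homoscedastic-uncertainty parameters (in the style of Kendall et al.). The following is a minimal PyTorch sketch of that general idea for combining a segmentation loss and a registration loss; it is not the authors' released code, which is available at the repository linked above.

```python
import torch
import torch.nn as nn

class UncertaintyWeightedLoss(nn.Module):
    """Combine a segmentation loss and a registration loss with learnable log-variances."""

    def __init__(self):
        super().__init__()
        self.log_var_seg = nn.Parameter(torch.zeros(1))
        self.log_var_reg = nn.Parameter(torch.zeros(1))

    def forward(self, seg_loss: torch.Tensor, reg_loss: torch.Tensor) -> torch.Tensor:
        # Each task loss is scaled by exp(-log_var) and regularized by log_var.
        weighted_seg = torch.exp(-self.log_var_seg) * seg_loss + self.log_var_seg
        weighted_reg = torch.exp(-self.log_var_reg) * reg_loss + self.log_var_reg
        return (weighted_seg + weighted_reg).squeeze()
```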

Combining a deformable model and a probabilistic framework for an automatic 3D segmentation of prostate on MRI

International Journal of Computer Assisted Radiology and Surgery, 2009

Purpose: Accurate localization and contouring of the prostate are crucial issues in prostate cancer diagnosis and/or therapies. Although several semi-automatic and automatic segmentation methods have been proposed, manual expert correction remains necessary. We introduce a new method for automatic 3D segmentation of the prostate gland from magnetic resonance imaging (MRI) scans. Methods: A statistical shape model was used as a priori knowledge, and the gray level distribution was modeled by fitting histogram modes with a Gaussian mixture. Markov fields were used to introduce contextual information regarding voxel neighborhoods. The final labeling optimization is based on Bayesian a posteriori classification, estimated with the iterated conditional modes (ICM) algorithm. Results: We compared the accuracy of this method, free from any manual correction, with contours outlined by an expert radiologist. In 12 cases, including prostates with cancer and benign prostatic hypertrophy, the mean Hausdorff distance and overlap ratio were 9.94 mm and 0.83, respectively. Conclusion: This new automatic prostate MRI segmentation method produces satisfactory results, even at the prostate's base and apex. The method is computationally feasible and efficient.
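
The Gaussian-mixture modeling of intensity histogram modes can be illustrated with scikit-learn as sketched below. This shows the general idea only; the Markov-field regularization and ICM labeling steps of the paper are omitted, and the number of modes is a placeholder.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_intensity_modes(mri_volume: np.ndarray, n_modes: int = 3) -> GaussianMixture:
    """Fit a 1-D Gaussian mixture to the voxel intensities of an MRI volume."""
    intensities = mri_volume.reshape(-1, 1).astype(np.float64)
    gmm = GaussianMixture(n_components=n_modes, covariance_type="full", random_state=0)
    gmm.fit(intensities)
    return gmm

# Per-voxel posterior probability of each intensity mode (before any MRF regularization):
# posteriors = gmm.predict_proba(mri_volume.reshape(-1, 1)).reshape(*mri_volume.shape, -1)
```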

A three-dimensional deformable model for segmentation of human prostate from ultrasound images

Medical Physics, 2001

Segmentation of the human prostate from ultrasound (US) images is a crucial step in radiation therapy, especially in real-time planning for US image-guided prostate seed implants. This step is critical to determine the radioactive seed placement and to ensure adequate dose coverage of the prostate. However, due to the low contrast of the prostate and the very low signal-to-noise ratio in US images, this task remains an obstacle. Manual segmentation of this object is time-consuming and highly subjective. In this work, we propose a three-dimensional (3D) deformable surface model for automatic segmentation of the prostate. The model has a discrete structure made from a set of vertices in 3D space that form triangular facets. The model converges from an initial shape to its equilibrium iteratively, driven by a weighted sum of internal and external forces. Internal forces are based on the local curvature of the surface, and external forces are extracted from the volumetric image data by applying an appropriate edge filter. We have also developed a method for initializing the model from a few initial contours drawn on different slices. During the deformation, a resampling procedure is used to maintain the resolution of the model. The entire model is applied in a multiscale scheme, which increases robustness and speed and guarantees better convergence. The model was tested on real clinical data, and initial results are very promising.
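
A single deformation iteration of such a discrete surface model might be sketched as follows; the mesh neighbor lists, the edge-derived external force field, and the force weights are assumed inputs chosen for illustration, not values from the paper.

```python
import numpy as np

def deform_step(vertices, neighbor_indices, external_force, w_int=0.3, w_ext=0.7):
    """One iteration of a discrete deformable surface.

    vertices:         (n_vertices, 3) array of 3-D positions.
    neighbor_indices: list of index arrays, one per vertex (its mesh neighbors).
    external_force:   (n_vertices, 3) edge-derived force sampled at each vertex.
    """
    internal_force = np.zeros_like(vertices)
    for i, nbrs in enumerate(neighbor_indices):
        # Laplacian smoothing term approximating the local curvature force.
        internal_force[i] = vertices[nbrs].mean(axis=0) - vertices[i]
    return vertices + w_int * internal_force + w_ext * external_force
```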

Segmentation of the prostate and organs at risk in male pelvic CT images using deep learning

Biomedical Physics & Engineering Express, 2018

Inter- and intra-observer variation in delineating regions of interest (ROIs) occurs because of differences in expertise level and preferences of the radiation oncologists. We evaluated the accuracy of a segmentation model using the U-Net structure to delineate the prostate, bladder, and rectum in male pelvic CT images. The dataset used for training and testing the model consisted of raw CT scan images of 85 prostate cancer patients. We designed a 2D U-Net model to directly learn a mapping function that converts a 2D CT grayscale image to its corresponding 2D OAR segmented image. Our network contains blocks of 2D convolution layers with variable kernel sizes, numbers of channels, and activation functions. On the left (contracting) side of the U-Net model, we used three 3x3 convolutions, each followed by a rectified linear unit (ReLU) activation, and one max-pooling operation. On the right (expanding) side of the U-Net model, we used a 2x2 transposed convolution and two 3x3 convolutions, each followed by a ReLU activation. The automatic segmentation using the U-Net yielded the following average Dice similarity coefficients (DC) and standard deviations (SD): DC ± SD = 0.88 ± 0.12, 0.95 ± 0.04, and 0.92 ± 0.06 for the prostate, bladder, and rectum, respectively. Furthermore, the mean and SD of the average surface Hausdorff distance (ASHD) were 1.2 ± 0.9 mm, 1.08 ± 0.8 mm, and 0.8 ± 0.6 mm for the prostate, bladder, and rectum, respectively. Our proposed method, which employs the U-Net structure, is highly accurate and reproducible for automated ROI segmentation. This provides a foundation to improve automatic delineation of the boundaries between the target and surrounding normal soft tissues on a standard radiation therapy planning CT scan.
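
For illustration, the encoder and decoder blocks described above (3x3 convolutions with ReLU, 2x2 max pooling, and a 2x2 transposed convolution for upsampling) can be sketched in PyTorch as follows; the channel counts and padding choices are placeholders rather than the authors' exact configuration.

```python
import torch.nn as nn

def encoder_block(in_channels: int, out_channels: int) -> nn.Sequential:
    """Three 3x3 convolutions with ReLU, followed by 2x2 max pooling (contracting path)."""
    return nn.Sequential(
        nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        nn.MaxPool2d(kernel_size=2),
    )

def decoder_block(in_channels: int, out_channels: int) -> nn.Sequential:
    """2x2 transposed convolution for upsampling, then two 3x3 convolutions with ReLU."""
    return nn.Sequential(
        nn.ConvTranspose2d(in_channels, out_channels, kernel_size=2, stride=2),
        nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    )
```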

Automatic segmentation of pelvic structures from magnetic resonance images for prostate cancer radiotherapy

International Journal of …, 2007

Purpose: Target-volume and organ-at-risk delineation is a time-consuming task in radiotherapy planning. The development of automated segmentation tools remains problematic because of pelvic organ shape variability. We evaluate a three-dimensional (3D) deformable-model approach and a seeded region-growing algorithm for automatic delineation of the prostate and organs at risk on magnetic resonance images. Methods and Materials: Manual and automatic delineation were compared in 24 patients using a sagittal T2-weighted (T2-w) turbo spin echo (TSE) sequence and an axial T1-weighted (T1-w) 3D fast-field echo (FFE) or TSE sequence. For automatic prostate delineation, an organ model-based method was used. The prostate without seminal vesicles was delineated as the clinical target volume (CTV). For automatic bladder and rectum delineation, a seeded region-growing method was used. Manual contouring was considered the reference method. The following parameters were measured: volume ratio (Vr) (automatic/manual), volume overlap (Vo) (ratio of the volume of intersection to the volume of union; optimal value = 1), and correctly delineated volume (Vc) (percent ratio of the volume of intersection to the manually defined volume; optimal value = 100).
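
The three evaluation parameters defined above can be computed directly from binary masks; the following sketch (function and variable names are ours, not the paper's) illustrates the definitions.

```python
import numpy as np

def delineation_metrics(auto_mask: np.ndarray, manual_mask: np.ndarray):
    """Volume ratio Vr, volume overlap Vo, and correctly delineated volume Vc (in %)."""
    auto = auto_mask.astype(bool)
    manual = manual_mask.astype(bool)
    intersection = np.logical_and(auto, manual).sum()
    union = np.logical_or(auto, manual).sum()
    vr = auto.sum() / manual.sum()             # automatic / manual volume (optimal value 1)
    vo = intersection / union                  # intersection over union (optimal value 1)
    vc = 100.0 * intersection / manual.sum()   # % of manual volume covered (optimal value 100)
    return vr, vo, vc
```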

A supervised learning framework for automatic prostate segmentation in trans rectal ultrasound images

2012

Heterogeneous intensity distribution inside the prostate gland, significant variations in prostate shape and size, inter-dataset contrast variations, and imaging artifacts such as shadow regions and speckle in Trans Rectal Ultrasound (TRUS) images challenge computer-aided automatic or semi-automatic segmentation of the prostate. In this paper, we propose a supervised learning scheme based on random forests for automatic initialization and propagation of a statistical shape and appearance model. The parametric representation of the statistical model of shape and appearance is derived from principal component analysis (PCA) of the probability distribution inside the prostate and PCA of the contour landmarks obtained from the training images. Unlike traditional statistical models of shape and intensity priors, the appearance model in this paper is derived from the posterior probabilities obtained from random forest classification. This probabilistic information is then used for the initialization and propagation of the statistical model. The proposed method achieves a mean Dice Similarity Coefficient (DSC) value of 0.96 ± 0.01, with a mean segmentation time of 0.67 ± 0.02 seconds, when validated with 24 images from 6 datasets with considerable shape, size, and intensity variations, in a leave-one-patient-out validation framework. The model achieves statistically significant improvements (t-test, p < 0.0001) in mean DSC and mean absolute distance (MAD) values compared to traditional statistical models of shape and intensity priors.
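
For illustration, the random-forest posterior map used to drive initialization could be obtained with scikit-learn as sketched below. The per-pixel feature extraction is assumed to be done separately, and the forest hyperparameters are placeholders; this is not the authors' pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_prostate_forest(features: np.ndarray, labels: np.ndarray) -> RandomForestClassifier:
    """features: (n_pixels, n_features) per-pixel descriptors; labels: 1 = prostate, 0 = background."""
    forest = RandomForestClassifier(n_estimators=50, max_depth=10, random_state=0)
    forest.fit(features, labels)
    return forest

def posterior_map(forest: RandomForestClassifier, features: np.ndarray, image_shape) -> np.ndarray:
    """Per-pixel posterior probability of prostate, reshaped to the TRUS image grid."""
    probabilities = forest.predict_proba(features)[:, 1]
    return probabilities.reshape(image_shape)
```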