Automatic segmentation of pelvic organs-at-risk using a fusion network model based on limited training samples
Related papers
Frontiers in Oncology
Background/Hypothesis: MRI-guided online adaptive radiotherapy (MRI-g-OART) improves target coverage and organs-at-risk (OARs) sparing in radiation therapy (RT). For patients with locally advanced cervical cancer (LACC) undergoing RT, changes in bladder and rectal filling contribute to large inter-fraction target volume motion. We hypothesized that deep learning (DL) convolutional neural networks (CNN) can be trained to accurately segment the gross tumor volume (GTV) and OARs in both planning and daily-fraction MRI scans. Materials/Methods: We utilized planning and daily treatment fraction setup (RT-Fr) MRIs from LACC patients treated with stereotactic body RT to a dose of 45-54 Gy in 25 fractions. Nine structures were manually contoured. A Mask R-CNN network was trained and tested under three scenarios: (i) leave-one-out (LOO), using the planning images of N-1 patients for training; (ii) the same network, tested on the RT-Fr MRIs of the "left-out" patient; (iii) including the planning MRI...
Segmentation of the prostate and organs at risk in male pelvic CT images using deep learning
Biomedical Physics & Engineering Express, 2018
Inter- and intra-observer variation in delineating regions of interest (ROIs) occurs because of differences in expertise level and preferences of the radiation oncologists. We evaluated the accuracy of a segmentation model using the U-Net structure to delineate the prostate, bladder, and rectum in male pelvic CT images. The dataset used for training and testing the model consisted of raw CT scan images of 85 prostate cancer patients. We designed a 2D U-Net model to directly learn a mapping function that converts a 2D CT grayscale image to its corresponding 2D OAR segmented image. Our network contains blocks of 2D convolution layers with variable kernel sizes, channel numbers, and activation functions. On the left side of the U-Net model, we used three 3x3 convolutions, each followed by a rectified linear unit (ReLU) activation function, and one max pooling operation. On the right side of the U-Net model, we used a 2x2 transposed convolution and two 3x3 convolutions, each followed by a ReLU activation function. The automatic segmentation using the U-Net generated average Dice similarity coefficients (DC) and standard deviations (SD) of 0.88 ± 0.12, 0.95 ± 0.04, and 0.92 ± 0.06 for the prostate, bladder, and rectum, respectively. Furthermore, the means and SDs of the average surface Hausdorff distance (ASHD) were 1.2 ± 0.9 mm, 1.08 ± 0.8 mm, and 0.8 ± 0.6 mm for the prostate, bladder, and rectum, respectively. Our proposed method, which employs the U-Net structure, is highly accurate and reproducible for automated ROI segmentation. This provides a foundation for improving automatic delineation of the boundaries between the target and surrounding normal soft tissues on a standard radiation therapy planning CT scan.
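The Dice similarity coefficient reported throughout these abstracts measures volumetric overlap between a predicted and a reference mask. A minimal NumPy sketch of the metric (not tied to any of the papers' implementations; the toy masks are invented for illustration):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks.

    DSC = 2|A ∩ B| / (|A| + |B|); 1.0 means perfect overlap.
    """
    pred = np.asarray(pred).astype(bool)
    truth = np.asarray(truth).astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy 2D masks standing in for a predicted and a reference organ contour
pred = np.zeros((8, 8), dtype=bool)
truth = np.zeros((8, 8), dtype=bool)
pred[2:6, 2:6] = True    # 16 pixels
truth[3:7, 3:7] = True   # 16 pixels, 9 of them overlapping
print(dice_coefficient(pred, truth))  # 2*9 / (16+16) = 0.5625
```

In 3D the same formula applies voxel-wise; the per-organ DC ± SD values above are this quantity averaged over test patients.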
Computers in Biology and Medicine, 2022
Computed tomography (CT) imaging is used in radiation therapy planning, where the treatment is carefully tailored to each patient to maximize radiation dose to the target while decreasing adverse effects to nearby healthy tissues. A crucial step in this process is manual organ contouring, which, if performed automatically, could considerably decrease the time to the start of treatment and improve outcomes. Computerized segmentation of male pelvic organs has been studied for decades, and deep learning models have brought considerable advances to the field, but further improvements are still needed. A two-step framework for automatic segmentation of the prostate, bladder, and rectum is presented: a convolutional neural network enhanced with attention gates performs an initial segmentation, followed by a region-based active contour model that fine-tunes the segmentations to each patient's specific anatomy. The framework was evaluated on a large collection of planning CTs of patients who had radiation therapy for prostate cancer. The surface Dice coefficient improved from 79.41% to 81.00% on segmentation of the prostate, from 94.03% to 95.36% on the bladder, and from 82.17% to 83.68% on the rectum, comparing the proposed framework with the baseline convolutional neural network. This study shows that traditional image segmentation algorithms can complement the immense gains that deep learning models have brought to the medical image segmentation field.
Kidney and Renal Tumor Segmentation Using a Hybrid V-Net-Based Model
Mathematics, 2020
Kidney tumors represent a type of cancer that people of advanced age are more likely to develop. For this reason, it is important to exercise caution and provide diagnostic tests in the later stages of life. Medical imaging and deep learning methods are becoming increasingly attractive in this sense. Developing deep learning models that help physicians identify tumors through successful segmentation is of great importance. However, not many successful systems exist for soft-tissue organs, such as the kidneys and the prostate, whose segmentation is relatively difficult. In such cases where segmentation is difficult, V-Net-based models are mostly used. This paper proposes a new hybrid model using the superior features of existing V-Net models. The model represents a more successful system with improvements in the encoder and decoder phases not previously applied. We believe that this new hybrid V-Net model could help the majority of physicians, particularly those focused on kidney and k...
Medical Physics, 2021
Purpose: Brachytherapy (BT) combined with external beam radiotherapy (EBRT) is the standard treatment for cervical cancer and has been shown to improve overall survival rates compared to EBRT alone. Magnetic resonance (MR) imaging is used for radiotherapy (RT) planning and image guidance due to its excellent soft-tissue image contrast. Rapid and accurate segmentation of organs at risk (OAR) is a crucial step in MR image-guided RT. In this paper, we propose a fully automated two-step convolutional neural network (CNN) approach to delineate multiple OARs from T2-weighted (T2W) MR images. Methods: We employ a coarse-to-fine segmentation strategy. The coarse segmentation step first identifies the approximate boundary of each organ of interest and crops the MR volume around the centroid of the organ-specific region of interest (ROI). The cropped ROI volumes are then fed to organ-specific fine segmentation networks to produce a detailed segmentation of each organ. A three-dimensional (3D) U-Net is trained to perform the coarse segmentation. For the fine segmentation, a 3D Dense U-Net is employed, in which a modified 3D dense block (DB) is incorporated into the 3D U-Net-like network to acquire inter- and intra-slice features and improve information flow while reducing computational complexity. Two sets of T2W MR images (221 cases for MR1 and 62 for MR2) were acquired with slightly different imaging parameters and used for network training and testing. The network was first trained on MR1, the larger sample set. The trained model was then transferred to the MR2 domain via a fine-tuning approach. An active learning strategy was utilized for selecting the most valuable data from MR2 to be included in the adaptation via transfer learning. Results: The proposed method was tested on 20 MR1 and 32 MR2 test sets.
Mean ± SD Dice similarity coefficients (DSCs) are 0.93 ± 0.04, 0.87 ± 0.03, and 0.80 ± 0.10 on MR1 and 0.94 ± 0.05, 0.88 ± 0.04, and 0.80 ± 0.05 on MR2 for the bladder, rectum, and sigmoid, respectively. Hausdorff distances (95th percentile) are 4.18 ± 0.52, 2.54 ± 0.41, and 5.03 ± 1.31 mm on MR1 and 2.89 ± 0.33, 2.24 ± 0.40, and 3.28 ± 1.08 mm on MR2, respectively. The performance of our method is superior to other state-of-the-art segmentation methods. Conclusions: We proposed a two-step CNN approach for fully automated segmentation of the bladder, rectum, and sigmoid from female pelvic T2W MR volumes. Our experimental results demonstrate that the developed method is accurate, fast, and reproducible, and significantly outperforms alternative state-of-the-art OAR segmentation methods (p < 0.05).
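The 95th-percentile Hausdorff distance quoted in these results is a boundary-based counterpart to the Dice coefficient. A minimal NumPy sketch over point sets sampled from two surfaces (the concentric rings are a made-up example, not data from the paper; real implementations typically work on mesh or voxel surfaces):

```python
import numpy as np

def hd95(points_a, points_b):
    """95th-percentile symmetric Hausdorff distance between two point sets.

    points_a, points_b: (N, D) arrays of surface coordinates (e.g. in mm).
    Using the 95th percentile instead of the maximum makes the metric
    robust to small outlier regions on the segmentation surface.
    """
    a = np.asarray(points_a, dtype=float)
    b = np.asarray(points_b, dtype=float)
    # Brute-force pairwise Euclidean distances, shape (len(a), len(b))
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    d_ab = d.min(axis=1)  # each point of A to its nearest point of B
    d_ba = d.min(axis=0)  # each point of B to its nearest point of A
    return np.percentile(np.concatenate([d_ab, d_ba]), 95)

# Two concentric rings of surface points exactly 1 mm apart
theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
ring_a = np.stack([10 * np.cos(theta), 10 * np.sin(theta)], axis=1)
ring_b = np.stack([11 * np.cos(theta), 11 * np.sin(theta)], axis=1)
print(round(hd95(ring_a, ring_b), 2))  # ≈ 1.0 mm
```

The brute-force distance matrix is O(N·M) in memory; for large surfaces a KD-tree nearest-neighbour query is the usual optimisation.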
Life
Proper delineation of both target volumes and organs at risk is a crucial step in the radiation therapy workflow. This process is normally carried out manually by medical doctors and is therefore time-consuming. To improve efficiency, auto-contouring methods have been proposed. We assessed a specific commercial software package to investigate its impact on the radiotherapy workflow for four specific disease sites: head and neck, prostate, breast, and rectum. For the present study, we used a commercial deep learning-based auto-segmentation software, namely Limbus Contour (LC), Version 1.5.0 (Limbus AI Inc., Regina, SK, Canada). The software uses deep convolutional neural network models based on a U-Net architecture, specific to each structure. Manual and automatic segmentation were compared on disease-specific organs at risk. Contouring time, geometric performance (volume variation, Dice Similarity Coefficient (DSC), and center-of-mass shift), and dosimetric impact (DVH differences) were evaluated...
Radiation Oncology, 2020
Background: Structure delineation is a necessary yet time-consuming manual procedure in radiotherapy. Recently, convolutional neural networks have been proposed to speed up and automatise this procedure, obtaining promising results. With the advent of magnetic resonance imaging (MRI)-guided radiotherapy, MR-based segmentation is becoming increasingly relevant. However, the majority of studies have investigated automatic contouring based on computed tomography (CT). Purpose: In this study, we investigate the feasibility of clinical use of deep learning-based automatic OAR delineation on MRI. Materials and methods: We included 150 patients diagnosed with prostate cancer who underwent MR-only radiotherapy. A three-dimensional (3D) T1-weighted dual spoiled gradient-recalled echo sequence was acquired with 3T MRI for the generation of the synthetic CT. The first 48 patients were included in a feasibility study training two 3D convolutional networks called DeepMedic and dense V-net (dV-net)...
Oncology and Translational Medicine, 2022
Objective: To introduce an end-to-end automatic segmentation method for organs at risk (OARs) in chest computed tomography (CT) images based on dense-connection deep learning, and to provide an accurate auto-segmentation model to reduce the workload on radiation oncologists. Methods: CT images of 36 lung cancer cases were included in this study. Of these, 27 cases were randomly selected as the training set, six cases as the validation set, and nine cases as the testing set. The left and right lungs, cord, and heart were auto-segmented, and the training time was set to approximately 5 h. The testing set was evaluated using geometric metrics including the Dice similarity coefficient (DSC), 95% Hausdorff distance (HD95), and average surface distance (ASD). Thereafter, two sets of treatment plans were optimized based on manually contoured OARs and automatically contoured OARs, respectively. Dosimetric parameters, including Dmax and Vx of the OARs, were obtained and compared. Results: The proposed model was superior to U-Net in terms of the DSC, HD95, and ASD, although there was no significant difference in the segmentation results yielded by the two networks (P > 0.05). Compared to manual segmentation, auto-segmentation significantly reduced the segmentation time by nearly 40.7% (P < 0.05). Moreover, the differences in dose-volume parameters between the two sets of plans were not statistically significant (P > 0.05). Conclusion: The bilateral lungs, cord, and heart could be accurately delineated using the DenseNet-based deep learning method. Thus, feature map reuse can be a novel approach to medical image auto-segmentation.
Medical Imaging 2020: Image Processing, 2020
Convolutional neural networks (CNNs) have been widely and successfully used for medical image segmentation. However, CNNs are typically considered to require large numbers of dedicated expert-segmented training volumes, which may be limiting in practice. This work investigates whether clinically obtained segmentations, which are readily available in picture archiving and communication systems (PACS), could provide a source of data to train a CNN for segmentation of organs-at-risk (OARs) in radiotherapy treatment planning. In such data, delineations of structures deemed irrelevant to the target clinical use may be lacking. To overcome this issue, we use multi-label instead of multi-class segmentation. We empirically assess how many clinical delineations would be sufficient to train a CNN for the segmentation of OARs and find that increasing the training set size beyond a limited number of images leads to sharply diminishing returns. Moreover, we find that by using multi-label segmentation, missing structures in the reference standard do not have a negative effect on overall segmentation accuracy. These results indicate that segmentations obtained in a clinical workflow can be used to train an accurate OAR segmentation model.
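The multi-label trick described above works because each OAR gets its own independent binary output channel, so channels without a clinical reference contour can simply be dropped from the loss. A minimal NumPy sketch of such a masked multi-label loss (the function name, shapes, and random toy data are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def masked_multilabel_bce(logits, targets, labeled):
    """Binary cross-entropy averaged only over structures that were
    actually delineated in the clinical reference.

    logits:  (C, H, W) raw network outputs, one channel per OAR
    targets: (C, H, W) binary reference masks (unreliable where unlabeled)
    labeled: (C,) bool, True if structure c was contoured for this scan
    """
    probs = 1.0 / (1.0 + np.exp(-logits))  # per-channel sigmoid
    eps = 1e-7
    bce = -(targets * np.log(probs + eps)
            + (1 - targets) * np.log(1 - probs + eps))
    # Channels lacking a reference contour contribute nothing, so
    # incomplete clinical delineations never penalise the network.
    return bce[labeled].mean()

rng = np.random.default_rng(0)
logits = rng.normal(size=(3, 4, 4))
targets = (rng.random(size=(3, 4, 4)) > 0.5).astype(float)
labeled = np.array([True, False, True])  # structure 1 missing in PACS
loss = masked_multilabel_bce(logits, targets, labeled)
print(float(loss))
```

By contrast, a multi-class softmax would force unlabeled organs into the background class, actively teaching the network to miss them.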
Region-based Convolution Neural Network Approach for Accurate Segmentation of Pelvic Radiograph
2019 26th National and 4th International Iranian Conference on Biomedical Engineering (ICBME), 2019
With the increasing usage of radiograph images as the most common medical imaging modality for diagnosis, treatment planning, and clinical studies, it is becoming increasingly important to use machine learning-based systems to provide reliable information for surgical pre-planning. Segmentation of the pelvic bone in radiograph images is a critical preprocessing step for applications such as automatic pose estimation and disease detection. However, the encoder-decoder style network known as U-Net has demonstrated limited results due to the challenging complexity of pelvic shapes, especially in severe cases. In this paper, we propose a novel multi-task segmentation method based on the Mask R-CNN architecture. For training, the network weights were initialized on a large non-medical dataset and fine-tuned with radiograph images. Furthermore, in the training process, augmented data were generated to improve network performance. Our experiments show that Mask R-CNN, utilizing multi-task learning, transfer learning, and data augmentation techniques, achieves a 0.96 Dice coefficient, which significantly outperforms U-Net. Notably, for a fair comparison, the same transfer learning and data augmentation techniques were used for U-Net training.
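A key detail of segmentation-specific data augmentation, as used in the study above, is that every geometric transform must be applied identically to the image and its mask so the labels stay aligned. A minimal NumPy sketch (the helper name and toy arrays are assumptions for illustration; real pipelines add elastic deformations, intensity jitter, etc.):

```python
import numpy as np

def augment_pair(image, mask, rng):
    """Random horizontal flip and 90-degree rotation applied identically
    to a radiograph and its segmentation mask, preserving alignment.
    """
    if rng.random() < 0.5:            # horizontal flip
        image, mask = image[:, ::-1], mask[:, ::-1]
    k = rng.integers(0, 4)            # 0-3 quarter turns
    return np.rot90(image, k), np.rot90(mask, k)

rng = np.random.default_rng(42)
image = np.arange(16).reshape(4, 4)
mask = (image > 7).astype(np.uint8)   # toy "label": bright pixels
aug_img, aug_mask = augment_pair(image, mask, rng)
# The pixel-to-label relationship survives any such transform:
assert np.array_equal(aug_mask, (aug_img > 7).astype(np.uint8))
```

The same principle extends to the transfer-learning setup: augmentation multiplies the small radiograph set seen during fine-tuning, while the non-medical pretraining supplies the low-level features.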