Age encoded adversarial learning for pediatric CT segmentation

Data Efficient Segmentation of Various 3D Medical Images Using Guided Generative Adversarial Networks

IEEE Access, 2020

The recent significant increase in the accuracy of medical image processing is attributed to the use of deep neural networks, since manual segmentation is prone to interpretation errors and is also arduous and inefficient. Generative adversarial networks (GANs) are of particular interest to medical researchers because they implement an adversarial loss without explicit modeling of the probability density function. Medical image segmentation methods face challenges of generalization and over-fitting, as medical data exhibits wide diversity in organ shape and appearance. Furthermore, generating a sufficiently large annotated dataset at a clinical site is costly. To generalize learning with a small amount of training data, we propose guided GANs (GGANs) that decimate samples from an input image and guide the networks to generate images and the corresponding segmentation mask. Decimated sampling is the key element of the proposed method, employed to reduce network size to only a few parameters. Moreover, the method yields promising results by generating several outputs, similar to a bagging approach. Furthermore, the loss increases when the network generates the original image together with its segmentation mask, compared with generating only the mask; minimizing this increased error leads GGANs to better segmentation performance with smaller datasets and less testing time. The method can be applied to a wide range of segmentation problems across modalities and organs (such as the aortic root, left atrium, knee cartilage, and brain tumors) during a real-time crisis in hospitals. The proposed network also yields high accuracy compared to state-of-the-art networks.

INDEX TERMS Medical image segmentation, generative adversarial network, aortic valve, left atrium, knee cartilage, brain tumor, decimated sample.
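The abstract does not specify the decimation operator; one plausible reading (a hypothetical sketch, not the authors' implementation) is interleaved strided subsampling, which splits an image into several phase-shifted low-resolution copies. Each copy could be processed by a small network and the outputs recombined, loosely resembling a bagging ensemble:

```python
import numpy as np

def decimate(img, factor=2):
    """Split a 2D image into factor*factor interleaved sub-images
    (phase-shifted strided slices), each 1/factor the size per axis."""
    return [img[i::factor, j::factor]
            for i in range(factor) for j in range(factor)]

def recombine(subs, factor=2):
    """Inverse of decimate: interleave the sub-images back together."""
    h, w = subs[0].shape
    out = np.empty((h * factor, w * factor), dtype=subs[0].dtype)
    k = 0
    for i in range(factor):
        for j in range(factor):
            out[i::factor, j::factor] = subs[k]
            k += 1
    return out

img = np.arange(16, dtype=float).reshape(4, 4)
subs = decimate(img)        # four 2x2 phase-shifted copies
restored = recombine(subs)  # lossless reconstruction of the input
```

Because each sub-image covers the whole field of view at reduced resolution, a network operating on it can be much smaller, which is consistent with the parameter-reduction claim above.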

Generative adversarial networks and its applications in the biomedical image segmentation: a comprehensive survey

International Journal of Multimedia Information Retrieval, 2022

Recent advancements in deep generative models have shown significant potential in image synthesis, detection, segmentation, and classification tasks. Segmenting medical images is considered a primary challenge in the biomedical imaging field. Various GAN-based models have been proposed in the literature to resolve medical segmentation challenges. Our search identified 151 papers; after a twofold screening, 138 papers were selected for the final survey. A comprehensive survey is conducted on GAN applications to medical image segmentation, primarily focused on the various GAN-based models, performance metrics, loss functions, datasets, augmentation methods, paper implementations, and source code. Secondly, this paper provides a detailed overview of GAN applications to the segmentation of different human diseases. We conclude with a critical discussion, the limitations of GANs, and suggestions for future directions. We hope this survey is beneficial and increases awareness of GAN implementations for biomedical image segmentation tasks.

A Conditional Adversarial Network for Semantic Segmentation of Brain Tumor

Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries, 2018

Automated medical image analysis has significant value in the diagnosis and treatment of lesions. Brain tumor segmentation is of special importance and difficulty due to differences in the appearance and shape of the different tumor regions in magnetic resonance images. Additionally, the datasets are heterogeneous and usually limited in size compared with typical computer vision problems. Recently proposed adversarial training has shown promising results in generative image modeling. In this paper we propose a novel end-to-end trainable architecture for brain tumor semantic segmentation through conditional adversarial training. We exploit a conditional generative adversarial network (cGAN) and train a semantic segmentation convolutional neural network (CNN) along with an adversarial network that discriminates between segmentation maps coming from the ground truth and those from the segmentation network, for the BraTS 2017 segmentation task [15,4,2,3]. We also propose an end-to-end trainable CNN for survival-day prediction based on deep learning techniques for the BraTS 2017 prediction task [15,4,2,3]. The experimental results demonstrate the superior ability of the proposed approach on both tasks. On validation data, the proposed model achieves a Dice score, sensitivity, and specificity of 0.68, 0.99, and 0.98, respectively, for the whole tumor, according to the online judging system.
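The adversarial training scheme described here can be sketched numerically. In this toy example (illustrative only; the discriminator scores and the weight lam are made up, not the paper's values), the segmentation network's loss combines a pixel-wise term with an adversarial term that rewards fooling the discriminator:

```python
import numpy as np

def bce(p, target):
    """Binary cross-entropy of probabilities p against target (scalar or array)."""
    p = np.clip(p, 1e-7, 1 - 1e-7)
    return -np.mean(target * np.log(p) + (1 - target) * np.log(1 - p))

# Toy discriminator outputs: D(image, mask) -> "is this a ground-truth pair?"
d_real = np.array([0.9, 0.8])   # D's scores on (image, true mask) pairs
d_fake = np.array([0.3, 0.4])   # D's scores on (image, predicted mask) pairs

# Discriminator objective: push real pairs toward 1, generated pairs toward 0.
loss_D = bce(d_real, 1.0) + bce(d_fake, 0.0)

# Segmentation network objective: pixel-wise loss plus an adversarial term
# that labels its own outputs as real (i.e., it tries to fool D).
pred = np.array([0.8, 0.1, 0.9])   # predicted mask probabilities
gt   = np.array([1.0, 0.0, 1.0])   # ground-truth mask
lam = 0.5                          # adversarial weight (hypothetical)
loss_G = bce(pred, gt) + lam * bce(d_fake, 1.0)
```

The discriminator supplies a shape-level consistency signal that a purely pixel-wise loss lacks, which is the motivation for adversarial segmentation training.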

A Novel Generative Adversarial Network-Based Approach for Automated Brain Tumour Segmentation

Medicina

Background: Medical image segmentation is more complicated and demanding than ordinary image segmentation due to the density of medical images. Brain tumours are a common cause of high mortality. Objectives: Extraction of tumorous cells is particularly difficult due to the differences between tumorous and non-tumorous cells. In ordinary convolutional neural networks, local background information is restricted; as a result, previous deep learning algorithms in medical imaging have struggled to detect anomalies in diverse cells. Methods: As a solution to this challenge, a deep convolutional generative adversarial network for tumour segmentation from brain magnetic resonance imaging (MRI) images is proposed. A generator and a discriminator are the two networks that make up the proposed model. The network addresses tumour localisation, noise-related issues, and class imbalance. Results: Dice Score Coefficient (DSC), Peak Signal-to-Noise Ratio (PSNR), and Structural I...
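The reported metrics have standard definitions; as a reference point, PSNR between a reference image and a reconstruction can be computed and checked on a toy example:

```python
import numpy as np

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB: 10*log10(data_range^2 / MSE)."""
    mse = np.mean((ref - test) ** 2)
    if mse == 0:
        return float("inf")   # identical images
    return 10.0 * np.log10((data_range ** 2) / mse)

clean = np.zeros((8, 8))
noisy = clean + 0.1           # constant error of 0.1 -> MSE = 0.01
value = psnr(clean, noisy)    # 10 * log10(1 / 0.01) = 20 dB
```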

Lesion segmentation in lung CT scans using unsupervised adversarial learning

Medical & Biological Engineering & Computing, 2022

Lesion segmentation in medical images is difficult yet crucial for proper diagnosis and treatment. Identifying lesions in medical images is costly and time-consuming and requires highly specialized knowledge. For this reason, supervised and semi-supervised learning techniques have been developed. Nevertheless, the lack of annotated data, which is common in medical imaging, is an issue; in this context, interesting approaches can use unsupervised learning to accurately distinguish between healthy tissues and lesions, training the network without using the annotations. In this work, an unsupervised learning technique is proposed to automatically segment coronavirus disease 2019 (COVID-19) lesions on 2D axial CT lung slices. The proposed approach uses image translation to generate healthy lung images from the infected lung image without the need for lesion annotations. Attention masks are used to further improve the quality of the segmentation. Experiments showed the capability of the proposed approach to segment the lesions, and it outperforms a range of unsupervised lesion detection approaches. The average results on the test dataset for the metrics Dice score, sensitivity, specificity, structure measure, enhanced-alignment measure, and mean absolute error are 0.695, 0.694, 0.961, 0.791, 0.875, and 0.082, respectively. The achieved results are promising compared with the state-of-the-art and could constitute a valuable tool for future developments.
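The image-translation idea can be illustrated with a minimal sketch (hypothetical arrays and threshold, not the paper's pipeline): once a model has synthesized a "healthy" version of an infected slice, a lesion mask falls out of the attention-weighted absolute difference between the two:

```python
import numpy as np

def lesion_mask(infected, healthy, attention=None, thresh=0.2):
    """Binary lesion map from the difference between an infected slice and
    its synthesized healthy counterpart, optionally gated by an attention
    mask that restricts the comparison to the lung region."""
    diff = np.abs(infected - healthy)
    if attention is not None:
        diff = diff * attention
    return (diff > thresh).astype(np.uint8)

infected = np.array([[0.2, 0.9],
                     [0.3, 0.2]])
healthy  = np.array([[0.2, 0.2],     # the translation "removed" the lesion
                     [0.3, 0.2]])
mask = lesion_mask(infected, healthy)  # flags only the changed pixel
```

No lesion annotations enter this computation; the supervision comes entirely from the healthy-image synthesis, which is what makes the approach unsupervised.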

RescueNet: An unpaired GAN for brain tumor segmentation

Even with proper acquisition of brain tumor images, accurate and reliable segmentation of tumors in the brain is a complicated job. Automatic segmentation has become possible with the development of deep learning algorithms, which bring a plethora of solutions to this research area. In this paper, we designed a network architecture named residual cyclic unpaired encoder-decoder network (RescueNet), built on residual and mirroring principles. RescueNet uses unpaired adversarial training to segment the whole tumor, followed by the core and enhancing regions, in a brain MRI scan. A central problem in automatic brain tumor analysis is preparing large-scale labeled data for training deep networks, which is a time-consuming and tedious task. To eliminate this need for paired data, we used an unpaired training approach to train the proposed network. Performance is evaluated with the Dice and sensitivity measures. The experimental results are tested on the BraTS 2015 and BraTS 2017 [1] datasets, and the results outperform existing methods for brain tumor segmentation. The combination of domain-specific segmentation methods and general-purpose adversarial learning appears to offer substantial advantages for medical imaging applications and can improve the ability of automated algorithms to assist radiologists.
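Unpaired adversarial training of this kind typically relies on a cycle-consistency penalty of the form ||F(G(x)) - x||_1, as popularized by CycleGAN. A toy numeric sketch (the mappings here are placeholders, not RescueNet's networks) shows how the penalty behaves for an exact and an inexact inverse:

```python
import numpy as np

def l1(a, b):
    """Mean absolute (L1) distance between two arrays."""
    return np.mean(np.abs(a - b))

# Stand-in "generators": G maps domain A -> B, F maps B -> A.
G = lambda x: x + 1.0        # toy forward mapping
F = lambda y: y - 1.0        # exact inverse of G
F_bad = lambda y: y - 0.9    # imperfect inverse

x = np.array([0.0, 2.0, 4.0])            # sample from domain A
cycle_loss = l1(F(G(x)), x)              # exact inverse -> penalty 0
cycle_loss_bad = l1(F_bad(G(x)), x)      # residual error -> penalty ~0.1
```

Minimizing this penalty forces the two translators to be mutually consistent, which is what removes the need for pixel-aligned image/mask pairs.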

Versatile Convolutional Networks Applied to Computed Tomography and Magnetic Resonance Image Segmentation

Journal of Medical Systems

Medical image segmentation has seen positive developments in recent years but remains challenging, with many practical obstacles to overcome. The applications of this task are wide-ranging across many fields of medicine, and it is used in several imaging modalities which usually require tailored solutions. Deep learning models have gained much attention and have lately been recognized as the most successful approach to automated segmentation. In this work we show the versatility of this technique by means of a single deep learning architecture capable of successfully performing segmentation on two very different types of imaging: computed tomography and magnetic resonance. The developed model is fully convolutional with an encoder-decoder structure and high-resolution pathways; it can process whole three-dimensional volumes at once and learn directly from the data which voxels belong to the regions of interest, localizing them against the background. The model was applied to two publicly available datasets, achieving equivalent results for both imaging modalities and segmenting different organs in different anatomic regions with comparable success.

Generative adversarial network in medical imaging: A review

Medical Image Analysis, 2019

Generative adversarial networks have gained a lot of attention in the computer vision community due to their capability of data generation without explicitly modelling the probability density function. The adversarial loss brought by the discriminator provides a clever way of incorporating unlabeled samples into training and imposing higher order consistency. This has proven to be useful in many cases, such as domain adaptation, data augmentation, and image-to-image translation. These properties have attracted researchers in the medical imaging community, and we have seen rapid adoption in many traditional and novel applications, such as image reconstruction, segmentation, detection, classification, and cross-modality synthesis. Based on our observations, this trend will continue and we therefore conducted a review of recent advances in medical imaging using the adversarial training scheme with the hope of benefiting researchers interested in this technique.
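The adversarial loss referred to throughout is the standard minimax objective of the original GAN formulation, in which a generator G and discriminator D play a two-player game:

```latex
\min_G \max_D \;
  \mathbb{E}_{x \sim p_{\mathrm{data}}}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]
```

The discriminator D supplies the learning signal without any explicit density model, which is what makes the unlabeled-data, domain-adaptation, and image-to-image settings mentioned above tractable.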

Cardiac aging synthesis from cross-sectional data with conditional generative adversarial networks

Frontiers in Cardiovascular Medicine

Age has important implications for health, and understanding how age manifests in the human body is the first step for a potential intervention. This becomes especially important for cardiac health, since age is the main risk factor for development of cardiovascular disease. Data-driven modeling of age progression has been conducted successfully in diverse applications such as face or brain aging. While longitudinal data is the preferred option for training deep learning models, collecting such a dataset is usually very costly, especially in medical imaging. In this work, a conditional generative adversarial network is proposed to synthesize older and younger versions of a heart scan by using only cross-sectional data. We train our model with more than 14,000 different scans from the UK Biobank. The induced modifications focused mainly on the interventricular septum and the aorta, which is consistent with the existing literature in cardiac aging. We evaluate the results by measuring...
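A common way to condition an image generator on a scalar covariate such as age (a hypothetical sketch; the paper's exact conditioning mechanism may differ) is to broadcast the normalized value as an extra input channel:

```python
import numpy as np

def add_age_channel(img, age, age_max=100.0):
    """img: (C, H, W) array; returns (C+1, H, W) with a constant plane
    holding the normalized age, so every spatial location sees the condition."""
    c, h, w = img.shape
    age_plane = np.full((1, h, w), age / age_max, dtype=img.dtype)
    return np.concatenate([img, age_plane], axis=0)

scan = np.zeros((1, 4, 4))                    # toy single-channel slice
conditioned = add_age_channel(scan, age=60.0)  # shape (2, 4, 4)
```

During training, the same network can then be asked for older or younger targets simply by changing this channel, which is how cross-sectional data can substitute for longitudinal follow-up.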

Segmentation of Lungs in Chest X-ray Image using Generative Adversarial Networks

IEEE Access, 2020

Chest X-ray (CXR) is a low-cost medical imaging technique. It is a common procedure for the identification of many respiratory diseases compared to MRI, CT, and PET scans. This paper presents the use of generative adversarial networks (GANs) to perform lung segmentation on a given CXR. GANs are popular for generating realistic data by learning the mapping from one domain to another. In our work, the generator of the GAN is trained to generate a segmented mask for a given input CXR. The discriminator distinguishes between a ground-truth mask and the generated mask, and updates the generator through the adversarial loss. The objective is to generate masks for the input CXR which are as realistic as possible compared to the ground-truth masks. The model is trained and evaluated using four different discriminators, referred to as D1, D2, D3, and D4. Experimental results on three different CXR datasets reveal that the proposed model achieves a Dice score of 0.9740 and an IoU score of 0.943, which are better than other reported state-of-the-art results.

INDEX TERMS Deep learning, generative adversarial networks, lung segmentation, medical imaging.
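The two reported metrics have standard definitions for binary masks; a minimal sketch:

```python
import numpy as np

def dice(pred, gt):
    """Dice coefficient 2|A∩B| / (|A| + |B|) for binary masks."""
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0

def iou(pred, gt):
    """Intersection over union |A∩B| / |A∪B| for binary masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0

gt   = np.array([[1, 1],
                 [0, 0]])
pred = np.array([[1, 0],
                 [0, 0]])
# dice = 2*1/(1+2) = 2/3; iou = 1/2 -- Dice is always >= IoU on the same pair
```

Note that the two scores are monotonically related (Dice = 2*IoU / (1 + IoU)), which is why papers often report both from the same predictions.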