Conditional GANs for Sonar Image Filtering with Applications to Underwater Occupancy Mapping

Full-Scale Continuous Synthetic Sonar Data Generation with Markov Conditional Generative Adversarial Networks

2020 IEEE International Conference on Robotics and Automation (ICRA), 2020

Deployment and operation of autonomous underwater vehicles is expensive and time-consuming. High-quality realistic sonar data simulation could benefit multiple applications, including training of human operators for post-mission analysis, as well as tuning and validation of automatic target recognition (ATR) systems for underwater vehicles. Producing realistic synthetic sonar imagery is a challenging problem, as the model has to account for specific artefacts of real acoustic sensors, vehicle attitude, and a variety of environmental factors. We propose a novel method for generating realistic-looking sonar side-scans of full-length missions, called Markov Conditional pix2pix (MC-pix2pix). Quantitative assessment confirms that the quality of the produced data is almost indistinguishable from real data. Furthermore, we show that bootstrapping ATR systems with MC-pix2pix data can improve their performance. Synthetic data is generated 18 times faster than real acquisition speed, with full user control over the topography of the generated data.
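The abstract does not describe the generator itself; the following is only a minimal PyTorch sketch of the Markov conditioning idea as I read it: each side-scan tile is generated from a user-supplied topography map plus the previously generated tile, so consecutive tiles of a full-length mission stay consistent. The two-channel input convention and all layer sizes are assumptions, not the paper's architecture.

```python
# Minimal sketch of the Markov conditioning idea behind MC-pix2pix (assumed
# interface; the paper's exact generator architecture is not given in the abstract).
import torch
import torch.nn as nn

class MarkovConditionalGenerator(nn.Module):
    """Generates sonar tile t from a topography map and the previous tile."""
    def __init__(self, ch=64):
        super().__init__()
        # Input: 1-channel topography + 1-channel previous tile = 2 channels.
        self.net = nn.Sequential(
            nn.Conv2d(2, ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(ch, ch * 2, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.ConvTranspose2d(ch * 2, ch, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(ch, 1, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, topography, prev_tile):
        return self.net(torch.cat([topography, prev_tile], dim=1))

# Roll out a full-length side-scan strip tile by tile.
gen = MarkovConditionalGenerator()
prev = torch.zeros(1, 1, 256, 256)            # seed tile
tiles = []
for topo in torch.randn(8, 1, 1, 256, 256):   # 8 user-defined topography tiles
    prev = gen(topo, prev)
    tiles.append(prev)
strip = torch.cat(tiles, dim=-1)              # concatenate along track
```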

Improving the realistic rendering of artificial sonar images using Cycle Generative Adversarial Networks

Proceedings of Meetings on Acoustics

The advances of recent decades in marine technologies have enabled the development of high-resolution acoustic imaging systems capable of generating high-resolution imagery of the underwater environment. Based on deep learning approaches, the collected data are used to develop Automatic Target Recognition (ATR) algorithms that detect suspicious objects on the seafloor and classify each as an object of interest (e.g., a mine) or not. However, because obtaining labelled underwater images demands time and effort, applying deep learning-based approaches in underwater environments remains a challenge due to the scarcity of training data. This paper presents work on improving the realism of synthetic sonar data generated by a simulator capable of massively producing labelled images. The simulator is based on the numerical modeling of the propagation, reverberation and reflection of acoustic waves in an artificial underwater environment. The realism gap with respect to real sonar images is reduced by applying Cycle Generative Adversarial Networks. We then study the successive addition of a growing quantity of these refined synthetic data to a set of real training sonar images of limited size, and the impact of this data augmentation on the performance of a CNN-based object detector.
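The augmentation study is straightforward to reproduce in outline: train the detector on the small real set plus a growing number of CycleGAN-refined synthetic images and compare detection performance at each mixing level. A minimal sketch, with placeholder datasets rather than the paper's actual splits:

```python
# Sketch of the augmentation study: a limited real sonar set is combined with an
# increasing fraction of CycleGAN-refined synthetic images (datasets and sizes
# below are placeholders, not the paper's data).
import torch
from torch.utils.data import ConcatDataset, Subset, TensorDataset

real_set = TensorDataset(torch.randn(200, 1, 128, 128))      # limited real sonar
refined_set = TensorDataset(torch.randn(5000, 1, 128, 128))  # CycleGAN-refined sim

for n_synth in (0, 500, 1000, 2000, 5000):
    train_set = ConcatDataset([real_set, Subset(refined_set, range(n_synth))])
    # Train the CNN detector on `train_set`, then evaluate on a held-out real test set.
    print(f"training with {len(real_set)} real + {n_synth} refined synthetic images")
```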

Generating Synthetic Sidescan Sonar Snippets Using Transfer-Learning in Generative Adversarial Networks

Journal of Marine Science and Engineering

The training of a deep learning model requires a large amount of data. In case of sidescan sonar images, the number of snippets from objects of interest is limited. Generative adversarial networks (GAN) have shown to be able to generate photo-realistic images. Hence, we use a GAN to augment a baseline sidescan image dataset with synthetic snippets. Although the training of a GAN with few data samples is likely to cause mode collapse, a combination of pre-training using simple simulated images and fine-tuning with real data reduces this problem. However, for sonar data, we show that this approach of transfer-learning a GAN is sensitive to the pre-training step, meaning that the vanishing of the gradients of the GAN’s discriminator becomes a critical problem. Here, we demonstrate how to overcome this problem, and thus how to apply transfer-learning to GANs for generating synthetic sidescan snippets in a more robust way. Additionally, in order to further investigate the GAN’s ability t...
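A minimal sketch of the two-phase schedule described: pre-train the GAN on simple simulated snippets, then fine-tune on the few real snippets. One-sided label smoothing, the non-saturating generator loss, and a reduced discriminator learning rate are shown as common remedies for vanishing discriminator gradients; they are assumptions, not necessarily the fix the authors adopt.

```python
# Sketch of transfer-learning a GAN: the same training loop is run first on
# simulated snippets, then on real snippets. Remedies for vanishing discriminator
# gradients (label smoothing, lower d_lr) are assumptions, not the paper's method.
import torch
import torch.nn as nn

def train_gan(G, D, loader, epochs, d_lr=2e-4, g_lr=2e-4, real_label=0.9):
    bce = nn.BCEWithLogitsLoss()
    opt_d = torch.optim.Adam(D.parameters(), lr=d_lr, betas=(0.5, 0.999))
    opt_g = torch.optim.Adam(G.parameters(), lr=g_lr, betas=(0.5, 0.999))
    for _ in range(epochs):
        for (real,) in loader:                      # loader yields image batches
            z = torch.randn(real.size(0), G.z_dim)  # G.z_dim: assumed attribute
            fake = G(z)

            # Discriminator step with one-sided label smoothing (real target 0.9).
            p_real, p_fake = D(real), D(fake.detach())
            d_loss = bce(p_real, torch.full_like(p_real, real_label)) + \
                     bce(p_fake, torch.zeros_like(p_fake))
            opt_d.zero_grad(); d_loss.backward(); opt_d.step()

            # Non-saturating generator loss.
            p_gen = D(fake)
            g_loss = bce(p_gen, torch.ones_like(p_gen))
            opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# Phase 1: pre-train on simple simulated snippets.
# train_gan(G, D, simulated_loader, epochs=100)
# Phase 2: fine-tune on the few real snippets, e.g. with a smaller discriminator rate.
# train_gan(G, D, real_loader, epochs=20, d_lr=5e-5)
```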

Virtual Underwater Datasets for Autonomous Inspections

Journal of Marine Science and Engineering

Underwater vehicles have become more sophisticated, driven by the off-shore sector and the scientific community’s rapid advancements in underwater operations. Notably, many underwater tasks, including the assessment of subsea infrastructure, are performed with the assistance of Autonomous Underwater Vehicles (AUVs). There have been recent breakthroughs in Artificial Intelligence (AI) and, notably, Deep Learning (DL) models and applications, which are widely used in a variety of fields, including unmanned aerial vehicles and autonomous car navigation. However, they are not as prevalent in underwater applications due to the difficulty of obtaining underwater datasets for a specific application. In this sense, the current study utilises recent advancements in DL to construct a bespoke dataset generated from photographs of items captured in a laboratory environment. Generative Adversarial Networks (GANs) were utilised to translate the laboratory ob...
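Once a translator network has been trained, building the bespoke dataset amounts to running every laboratory photograph through it. A minimal sketch, where the directory names and the `translator` module are placeholders rather than the paper's artefacts:

```python
# Sketch of the dataset-building step: each laboratory photograph is passed through
# a trained GAN translator to obtain an underwater-styled counterpart.
import os
import torch
from PIL import Image
from torchvision import transforms
from torchvision.utils import save_image

to_tensor = transforms.Compose([transforms.Resize((256, 256)), transforms.ToTensor()])

@torch.no_grad()
def build_virtual_dataset(translator, src_dir="lab_photos", dst_dir="virtual_underwater"):
    os.makedirs(dst_dir, exist_ok=True)
    translator.eval()
    for name in os.listdir(src_dir):
        img = to_tensor(Image.open(os.path.join(src_dir, name)).convert("RGB"))
        styled = translator(img.unsqueeze(0)).clamp(0, 1)   # assumed output range
        save_image(styled, os.path.join(dst_dir, name))
```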

Robust and Fair Undersea Target Detection with Automated Underwater Vehicles for Biodiversity Data Collection

Remote Sensing

Undersea/subsea data collection via automated underwater vehicles (AUVs) plays an important role in marine biodiversity research, while it is often much more challenging than data collection above ground via satellites or aerial vehicles. To enable an automated undersea/subsea data collection system, the AUVs are expected to automatically track the objects of interest through what they can “see” from their mounted underwater cameras, where videos or images can be drastically blurred and degraded in underwater lighting conditions. To address this challenge, in this work we propose a cascaded framework that combines a DCGAN (deep convolutional generative adversarial network) with an object detector, i.e., the single-shot detector (SSD), named DCGAN+SSD, for the detection of various underwater targets from the mounted camera of an automated underwater vehicle. In our framework, our assumption is that the DCGAN can be leveraged to alleviate the impact of underwater conditions and provide ...
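A minimal sketch of the cascade at inference time, with torchvision's SSD used as a stand-in for the paper's detector and the restoration generator assumed to map a degraded frame to a cleaner one:

```python
# Sketch of the DCGAN+SSD cascade: a generator first restores the degraded
# underwater frame, and the restored frame is then passed to an SSD detector.
# The torchvision SSD (COCO weights) is a stand-in, not the paper's detector.
import torch
from torchvision.models.detection import ssd300_vgg16

@torch.no_grad()
def detect(generator, detector, frame):
    """frame: float tensor (3, H, W) in [0, 1] from the AUV camera."""
    restored = generator(frame.unsqueeze(0)).clamp(0, 1).squeeze(0)
    return detector([restored])[0]           # dict with 'boxes', 'labels', 'scores'

detector = ssd300_vgg16(weights="DEFAULT").eval()
# generator = trained DCGAN-based restoration network (assumed interface)
# result = detect(generator, detector, frame)
```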

GAMMA: Generative Augmentation for Attentive Marine Debris Detection

arXiv (Cornell University), 2022

We propose an efficient generative augmentation approach to address the scarcity of underwater debris data for visual detection. We use CycleGAN as a data augmentation technique to convert openly available, abundant data of terrestrial plastic to underwater-style images. Prior works focus only on augmenting or enhancing existing data, which additionally biases the dataset, whereas our technique introduces variation by transforming additional in-air plastic data to the marine background. We also propose a novel architecture for underwater debris detection using an attention mechanism. Our method helps the detector focus only on relevant instances of the image, thereby enhancing detection performance, which is highly desirable when detecting marine debris using an Autonomous Underwater Vehicle (AUV). We perform extensive experiments for marine debris detection using our approach. Quantitative and qualitative results demonstrate the potential of our framework, which significantly outperforms the state-of-the-art methods.
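The translation step rests on the standard CycleGAN objective (adversarial plus cycle-consistency terms) between the in-air and underwater domains. A minimal sketch, where the generators, discriminators, and loss weight are assumptions rather than the paper's exact configuration:

```python
# Sketch of the standard CycleGAN objective used to translate in-air plastic
# images to an underwater style (G: air->water, F: water->air).
import torch
import torch.nn as nn

def cyclegan_step(G, F, D_water, D_air, air, water, lam=10.0):
    mse, l1 = nn.MSELoss(), nn.L1Loss()
    fake_water, fake_air = G(air), F(water)

    # Least-squares adversarial terms for both translation directions.
    p_w, p_a = D_water(fake_water), D_air(fake_air)
    adv = mse(p_w, torch.ones_like(p_w)) + mse(p_a, torch.ones_like(p_a))

    # Cycle consistency: air -> water -> air (and vice versa) should recover the input.
    cyc = l1(F(fake_water), air) + l1(G(fake_air), water)
    return adv + lam * cyc
```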

MLFcGAN: Multilevel Feature Fusion-Based Conditional GAN for Underwater Image Color Correction

IEEE Geoscience and Remote Sensing Letters

Color correction for underwater images has received increasing interest due to its critical role in making mature vision algorithms applicable to underwater scenarios. Inspired by the success of deep convolutional neural network (DCNN) techniques in many vision tasks, especially their strength in extracting features at multiple scales, we propose a deep multiscale feature fusion network based on the conditional generative adversarial network (GAN) for underwater image color correction. In our network, multiscale features are extracted first, and local features at each scale are then augmented with global features. This design was verified to facilitate more effective and faster network learning, resulting in better performance in both color correction and detail preservation. We conducted extensive experiments and compared our method with state-of-the-art approaches quantitatively and qualitatively, showing that it achieves significant improvements.
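A minimal sketch of the local/global fusion idea: a globally pooled descriptor is broadcast back to each scale and concatenated with the local feature map before further processing. Channel sizes are illustrative, not the paper's configuration:

```python
# Sketch of augmenting local features at one scale with a global descriptor
# (illustrative channel sizes; not MLFcGAN's exact fusion block).
import torch
import torch.nn as nn

class GlobalLocalFusion(nn.Module):
    def __init__(self, local_ch, global_ch):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.mix = nn.Conv2d(local_ch + global_ch, local_ch, kernel_size=1)

    def forward(self, local_feat, global_feat):
        g = self.pool(global_feat)                          # (B, Cg, 1, 1)
        g = g.expand(-1, -1, *local_feat.shape[-2:])        # broadcast to local size
        return self.mix(torch.cat([local_feat, g], dim=1))  # fused local features

fusion = GlobalLocalFusion(local_ch=64, global_ch=256)
fused = fusion(torch.randn(1, 64, 128, 128), torch.randn(1, 256, 8, 8))
```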

Fast Underwater Image Enhancement for Improved Visual Perception

IEEE Robotics and Automation Letters, 2020

In this paper, we present a conditional generative adversarial network-based model for real-time underwater image enhancement. To supervise the adversarial training, we formulate an objective function that evaluates the perceptual image quality based on its global content, color, local texture, and style information. We also present EUVP, a large-scale dataset of paired and unpaired collections of underwater images (of 'poor' and 'good' quality) captured using seven different cameras over various visibility conditions during oceanic explorations and human-robot collaborative experiments. In addition, we perform several qualitative and quantitative evaluations which suggest that the proposed model can learn to enhance underwater image quality from both paired and unpaired training. More importantly, the enhanced images provide improved performance of standard models for underwater object detection, human pose estimation, and saliency prediction. These results validate that the proposed model is suitable for real-time preprocessing in the autonomy pipeline of visually-guided underwater robots. The model and associated training pipelines are available at https://github.com/xahidbuffon/funie-gan.
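A minimal sketch of a perceptual objective in the spirit described, combining an L1 term for global content and colour with VGG-feature and Gram-matrix terms for local texture and style (the adversarial term and VGG input normalisation are omitted for brevity). Layer choices and weights are assumptions, not FUnIE-GAN's published loss:

```python
# Sketch of a content/colour/texture/style objective using a pretrained VGG-19 as
# a stand-in feature extractor (weights and layer choice are assumptions).
import torch
import torch.nn as nn
from torchvision.models import vgg19

vgg = vgg19(weights="DEFAULT").features[:21].eval()   # up to relu4_1
for p in vgg.parameters():
    p.requires_grad_(False)

def gram(f):
    b, c, h, w = f.shape
    f = f.flatten(2)
    return f @ f.transpose(1, 2) / (c * h * w)

def perceptual_loss(enhanced, target, w_l1=1.0, w_feat=0.05, w_style=10.0):
    l1 = nn.functional.l1_loss
    fe, ft = vgg(enhanced), vgg(target)
    return (w_l1 * l1(enhanced, target) +          # global content & colour
            w_feat * l1(fe, ft) +                  # local texture (VGG features)
            w_style * l1(gram(fe), gram(ft)))      # style statistics
```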

Stochastic Ship-Radiated Noise Modelling Via Generative Adversarial Networks

2020

The design and performance evaluation of underwater acoustic (UA) communication systems in shallow-water and harbour environments is a continuous challenge due to the numerous degrading factors present in the UA channel, one of which is noise generated by nearby shipping activity. However, few research studies have examined the properties of ship-radiated noise in terms of its time-domain statistical characteristics and its negative effects on UA communication systems. We propose the use of unsupervised learning techniques to train generative models that capture the time-domain stochastic behaviour of ship-radiated noise, using a publicly available database of long-term acoustic shipping noise recordings. These models can then be used for further analysis of ship-radiated noise and for performance evaluation of UA orthogonal frequency-division multiplexing systems in the presence of such interference. For further validation, we include experimentally acquired ship-ra...
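The abstract does not specify the generative model. As a rough illustration only, a 1-D convolutional generator that maps a latent vector to a fixed-length time-domain noise segment could look like the following sketch (architecture and segment length are assumptions):

```python
# Sketch of a 1-D generator producing synthetic time-domain noise segments
# (illustrative architecture; not the paper's model).
import torch
import torch.nn as nn

class NoiseSegmentGenerator(nn.Module):
    def __init__(self, z_dim=128, base_len=256):
        super().__init__()
        self.z_dim, self.base_len = z_dim, base_len
        self.fc = nn.Linear(z_dim, 64 * base_len)
        self.net = nn.Sequential(
            nn.ConvTranspose1d(64, 32, 4, stride=4), nn.ReLU(),
            nn.ConvTranspose1d(32, 16, 4, stride=4), nn.ReLU(),
            nn.ConvTranspose1d(16, 1, 4, stride=4), nn.Tanh(),
        )

    def forward(self, z):
        x = self.fc(z).view(-1, 64, self.base_len)
        return self.net(x)                 # (B, 1, base_len * 64) samples

gen = NoiseSegmentGenerator()
segments = gen(torch.randn(4, 128))        # 4 synthetic noise segments
```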

Autonomous Underwater Vehicle Navigation Using Sonar Image Matching based on Convolutional Neural Network

IFAC-PapersOnLine, 2019

This paper presents an image matching algorithm based on a convolutional neural network (CNN) to aid the navigation of an Autonomous Underwater Vehicle (AUV) where external navigation aids are not available. We aim to solve the problem that traditional image feature representations and similarity measures are not learned jointly, and to improve the matching accuracy of sonar images in the deep ocean with dynamic backgrounds and low-intensity, high-noise scenes. In our work, the proposed CNN-based model can learn the texture features of sonar images without any manually designed feature descriptors, and can jointly optimize the representation of the input data conditioned on the similarity measure being used. Validation studies show the feasibility and validity of the proposed method for many general and offset cases using collected sonar images.
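A minimal sketch of the joint representation-and-similarity idea: a shared CNN encoder embeds both sonar patches and a small head scores the pair, so the features and the similarity measure are optimised together. Layer sizes are illustrative, not the paper's architecture:

```python
# Sketch of a Siamese-style matcher for sonar patches: shared encoder plus a
# learned similarity head (illustrative sizes; not the paper's network).
import torch
import torch.nn as nn

class SonarMatcher(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
        )
        self.head = nn.Sequential(
            nn.Linear(2 * 64 * 16, 256), nn.ReLU(),
            nn.Linear(256, 1),               # matching score (logit)
        )

    def forward(self, a, b):
        fa, fb = self.encoder(a), self.encoder(b)
        return self.head(torch.cat([fa, fb], dim=1))

matcher = SonarMatcher()
score = matcher(torch.randn(1, 1, 128, 128), torch.randn(1, 1, 128, 128))
# Train with binary cross-entropy on matching / non-matching patch pairs.
```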