Image Restoration using Feature-guidance
Related papers
Training a Task-Specific Image Reconstruction Loss
2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)
The choice of a loss function is an important factor when training neural networks for image restoration problems, such as single image super resolution. The loss function should encourage natural and perceptually pleasing results. A popular choice for a loss is a pre-trained network, such as VGG, which is used as a feature extractor for computing the difference between restored and reference images. However, such an approach has multiple drawbacks: it is computationally expensive, requires regularization and hyperparameter tuning, and involves a large network trained on an unrelated task. Furthermore, it has been observed that there is no single loss function that works best across all applications and across different datasets. In this work, we instead propose to train a set of loss functions that are application specific in nature. Our loss function comprises a series of discriminators that are trained to detect and penalize the presence of application-specific artifacts. We show that a single natural image and corresponding distortions are sufficient to train our feature extractor that outperforms state-of-the-art loss functions in applications like single image super resolution, denoising, and JPEG artifact removal. Finally, we conclude that an effective loss function does not have to be a good predictor of perceived image quality, but instead needs to be specialized in identifying the distortions for a given restoration method.
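The "pre-trained network as feature extractor" loss the abstract contrasts against can be sketched in a few lines. The sketch below is a toy stand-in, not the paper's method: random fixed convolution filters play the role of VGG's pre-trained features, and the loss is the L2 distance between the feature maps of the restored and reference images.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid-mode 2-D cross-correlation, used here as a toy feature extractor."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def feature_loss(restored, reference, kernels):
    """L2 distance between feature maps of the two images --
    the same structure as a VGG-style perceptual loss."""
    return sum(np.mean((conv2d(restored, k) - conv2d(reference, k)) ** 2)
               for k in kernels)

rng = np.random.default_rng(0)
kernels = [rng.standard_normal((3, 3)) for _ in range(4)]  # stand-in for pre-trained filters
ref = rng.standard_normal((16, 16))
noisy = ref + 0.1 * rng.standard_normal((16, 16))
```

The paper's proposal replaces the fixed feature extractor with trained discriminators, but the loss keeps this shape: a distance measured in a feature space rather than in pixel space.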
Spatially-Adaptive Residual Networks for Efficient Image and Video Deblurring
2019
In this paper, we address the problem of dynamic scene deblurring in the presence of motion blur. Restoration of images affected by severe blur necessitates a network design with a large receptive field, which existing networks attempt to achieve through a simple increment in the number of generic convolution layers, kernel size, or the scales at which the image is processed. However, increasing the network capacity in this manner comes at the expense of an increase in model size and inference time, while ignoring the non-uniform nature of blur. We present a new architecture composed of spatially adaptive residual learning modules that implicitly discover the spatially varying shifts responsible for non-uniform blur in the input image and learn to modulate the filters. This capability is complemented by a self-attentive module which captures non-local relationships among the intermediate features and enhances the receptive field. We then incorporate a spatiotemporal recurrent module i...
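The core idea of spatial adaptivity, treating each position according to its local blur rather than applying one uniform filter, can be illustrated with a minimal gating sketch. This is my own simplification, not the paper's module: a per-pixel sigmoid gate (which in the real network would be predicted from the input) modulates the feature maps.

```python
import numpy as np

def spatial_modulation(features, blur_map):
    """Gate each spatial position by a per-pixel mask so that regions
    with different blur strength are treated differently.
    features: (channels, H, W); blur_map: (H, W) of gate logits."""
    gate = 1.0 / (1.0 + np.exp(-blur_map))   # sigmoid, values in (0, 1)
    return features * gate[None, :, :]       # broadcast gate over channels

feats = np.ones((4, 8, 8))   # toy feature maps
mask = np.zeros((8, 8))
mask[:, 4:] = 5.0            # pretend the right half is strongly blurred
out = spatial_modulation(feats, mask)
```

A uniform convolution cannot express this position-dependent behavior, which is why the abstract argues that simply stacking more generic layers ignores the non-uniform nature of blur.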
Can fully convolutional networks perform well for general image restoration problems?
2017 Fifteenth IAPR International Conference on Machine Vision Applications (MVA), 2017
We present a fully convolutional network (FCN) based approach for color image restoration. FCNs have recently shown remarkable performance on high-level vision problems like semantic segmentation. In this paper, we investigate whether FCN models can show promising performance on low-level problems like image restoration as well. We propose a fully convolutional model that learns a direct end-to-end mapping from corrupted images as input to the desired clean images as output. Our proposed method takes inspiration from domain transformation techniques but presents a data-driven, task-specific approach where the filters for novel basis projection, task-dependent coefficient alterations, and image reconstruction are represented as convolutional networks. Experimental results show that our FCN model outperforms traditional sparse-coding-based methods and demonstrates competitive performance compared to state-of-the-art methods for image denoising. We further show that our proposed model can solve the difficult problem of blind image inpainting and can produce reconstructed images of impressive visual quality.
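The three-stage pipeline the abstract describes (basis projection, coefficient alteration, reconstruction) is the classical domain-transform recipe that the FCN learns end to end. The sketch below shows the fixed-filter version of that recipe, with a Haar-like two-band split standing in for the learned projection filters and soft-thresholding as the coefficient alteration; the FCN replaces each stage with learned convolutions.

```python
import numpy as np

def shrink(x, t):
    """Soft-threshold the transform coefficients (the 'alteration' stage)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def transform_denoise(img, t=0.2):
    """Projection -> coefficient alteration -> reconstruction,
    using a fixed Haar-like split over horizontal pixel pairs."""
    low = 0.5 * (img[:, ::2] + img[:, 1::2])    # coarse band
    high = 0.5 * (img[:, ::2] - img[:, 1::2])   # detail band (mostly noise)
    high = shrink(high, t)                      # suppress small coefficients
    out = np.empty_like(img)
    out[:, ::2] = low + high                    # exact inverse transform
    out[:, 1::2] = low - high
    return out

rng = np.random.default_rng(0)
clean = np.full((8, 8), 0.5)
noisy = clean + 0.1 * rng.standard_normal((8, 8))
denoised = transform_denoise(noisy, t=0.3)
```

With the threshold at zero the transform is perfectly invertible, so all of the denoising effect comes from the coefficient-alteration stage, which is exactly the stage a learned, task-dependent network can improve on.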
IJIRIS:: AM Publications,India, 2024
Image restoration is an integral component of computer vision that tries to restore pictures that have been deteriorated or corrupted to their original or an enhanced condition. In this study, we look into the wide range of picture restoration techniques. Traditional models perform quite well, but their reliance on handcrafted filters restricts their adaptation to more complicated forms of degradation. The field has been revolutionized by deep learning, led by convolutional neural networks (CNNs), through learning sophisticated representations of visual data. It is because of this that CNNs are able to deal with a wide variety of degradations, such as noise, blurring, artifacts, and missing data. Generative models, particularly GANs, are continually pushing the limits of what is possible by utilizing adversarial training to accomplish spectacular outcomes. Challenges remain to overcome: these models have limited interpretability, and the training of successful models may be quite computationally rigorous. Continued progress promises to revolutionize image processing and analysis, ultimately contributing to advancements across a wide range of scientific and technological domains, concentrating on the promising research directions that are currently being pursued.
COLA-Net: Collaborative Attention Network for Image Restoration
IEEE Transactions on Multimedia, 2021
Local and non-local attention-based methods have been well studied in various image restoration tasks, leading to promising performance. However, most of the existing methods solely focus on one type of attention mechanism (local or non-local). Furthermore, by exploiting the self-similarity of natural images, existing pixel-wise non-local attention operations tend to give rise to deviations in the process of characterizing long-range dependence due to image degeneration. To overcome these problems, in this paper we propose a novel collaborative attention network (COLA-Net) for image restoration, as the first attempt to combine local and non-local attention mechanisms to restore image content in areas with complex textures and with highly repetitive details, respectively. In addition, an effective and robust patch-wise non-local attention model is developed to capture long-range feature correspondences through 3D patches. Extensive experiments on synthetic image denoising, real image denoising and compression artifact reduction tasks demonstrate that our proposed COLA-Net is able to achieve state-of-the-art performance in both peak signal-to-noise ratio and visual perception, while maintaining an attractive computational complexity.
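Patch-wise non-local attention can be sketched with plain numpy. This is a minimal toy, not COLA-Net itself: the image is split into non-overlapping patches, every patch attends to every other patch with softmax-weighted similarity, and each patch is replaced by the attention-weighted average. On repetitive textures, similar patches get near-uniform weights and averaging suppresses noise.

```python
import numpy as np

def patchwise_nonlocal(img, p=4):
    """Replace each p x p patch with a softmax-weighted average of all
    patches, weighted by dot-product similarity (non-local attention)."""
    h, w = img.shape
    patches = np.stack([img[i:i + p, j:j + p].ravel()
                        for i in range(0, h - p + 1, p)
                        for j in range(0, w - p + 1, p)])
    sim = patches @ patches.T / p           # (N, N) similarity logits
    sim -= sim.max(axis=1, keepdims=True)   # stabilise the softmax
    attn = np.exp(sim)
    attn /= attn.sum(axis=1, keepdims=True)
    agg = attn @ patches                    # aggregate similar patches
    out = np.empty_like(img)
    idx = 0
    for i in range(0, h - p + 1, p):
        for j in range(0, w - p + 1, p):
            out[i:i + p, j:j + p] = agg[idx].reshape(p, p)
            idx += 1
    return out

rng = np.random.default_rng(1)
tile = rng.random((4, 4)) * 0.5
clean = np.tile(tile, (4, 4))                       # highly repetitive texture
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
restored = patchwise_nonlocal(noisy, p=4)
```

The abstract's point about degeneration also shows up here: similarities are computed on the degraded patches, so heavy noise corrupts the attention weights themselves, which motivates combining this with a local branch.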
Simultaneous Fidelity and Regularization Learning for Image Restoration
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019
Most existing non-blind restoration methods are based on the assumption that a precise degradation model is known. As the degradation process can only be partially known or inaccurately modeled, images may not be well restored. Rain streak removal and image deconvolution with inaccurate blur kernels are two representative examples of such tasks. For rain streak removal, although an input image can be decomposed into a scene layer and a rain streak layer, there exists no explicit formulation for modeling rain streaks and their composition with the scene layer. For blind deconvolution, as estimation error is usually introduced in the blur kernel, the subsequent non-blind deconvolution process does not restore the latent image well. In this paper, we propose a principled algorithm within the maximum a posteriori framework to tackle image restoration with a partially known or inaccurate degradation model. Specifically, the residual caused by a partially known or inaccurate degradation model is spatially dependent and complexly distributed. With a training set of degraded and ground-truth image pairs, we parameterize and learn the fidelity term for a degradation model in a task-driven manner. Furthermore, the regularization term can also be learned along with the fidelity term, thereby forming a simultaneous fidelity and regularization learning model. Extensive experimental results demonstrate the effectiveness of the proposed model for image deconvolution with inaccurate blur kernels, deconvolution with multiple degradations, and rain streak removal.
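The MAP objective behind this simultaneous learning can be written compactly; the symbols below are my own shorthand for the quantities named in the abstract, not necessarily the paper's notation:

```latex
\hat{x} \;=\; \arg\min_{x} \;
  \underbrace{\Phi\bigl(y - \mathcal{D}(x)\bigr)}_{\text{learned fidelity term}}
  \;+\; \lambda \,
  \underbrace{\Psi(x)}_{\text{learned regularizer}}
```

Here $y$ is the degraded observation, $\mathcal{D}$ the partially known or inaccurate degradation model, and $x$ the latent image. The point of the paper is that $\Phi$ is parameterized and trained on degraded/ground-truth pairs so it fits the spatially dependent, complexly distributed residual, instead of the fixed $\ell_2$ data term that assumes an exact $\mathcal{D}$; $\Psi$ is learned jointly.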
Effects of Image Degradation and Degradation Removal to CNN-Based Image Classification
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019
Just like many other topics in computer vision, image classification has achieved significant progress recently by using deep learning neural networks, especially Convolutional Neural Networks (CNNs). Most of the existing works focus on classifying very clean natural images, as evidenced by the widely used image databases such as Caltech-256, PASCAL VOC, and ImageNet. However, in many real applications the acquired images may contain certain degradations that lead to various kinds of blurring, noise, and distortions. One important and interesting problem is the effect of such degradations on the performance of CNN-based image classification, and whether degradation removal helps CNN-based image classification. More specifically, we ask whether image classification performance drops with each kind of degradation, whether this drop can be avoided by including degraded images in training, and whether existing computer vision algorithms that attempt to remove such degradations can help improve image classification performance. In this article, we empirically study these problems for nine kinds of degraded images: hazy images, motion-blurred images, fish-eye images, underwater images, low-resolution images, salt-and-pepper noise images, images with white Gaussian noise, Gaussian-blurred images, and out-of-focus images. We expect this article to draw more interest from the community to study the classification of degraded images.
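One of the study's questions, whether the accuracy drop can be avoided by including degraded images in training, amounts to augmenting the training set with synthetic degradations. A minimal sketch of such an augmentation step, covering three of the nine degradation types above (the function name and parameters are my own, for illustration):

```python
import numpy as np

def degrade(img, kind, rng):
    """Apply one synthetic degradation to a [0, 1] grayscale image,
    so degraded copies can be mixed into a classifier's training set."""
    if kind == "gaussian_noise":
        return img + rng.normal(0.0, 0.1, img.shape)
    if kind == "salt_and_pepper":
        out = img.copy()
        mask = rng.random(img.shape)
        out[mask < 0.05] = 0.0      # pepper
        out[mask > 0.95] = 1.0      # salt
        return out
    if kind == "gaussian_blur":
        k = np.array([0.25, 0.5, 0.25])   # separable 1-D Gaussian kernel
        out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
        return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)
    raise ValueError(kind)

rng = np.random.default_rng(0)
clean = rng.random((8, 8))
batch = [degrade(clean, k, rng)
         for k in ("gaussian_noise", "salt_and_pepper", "gaussian_blur")]
```

In a real pipeline the degradation kind and strength would be sampled per training example; degradations like haze, fish-eye distortion or underwater color casts need physically motivated models rather than these simple per-pixel operations.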
Exploring Overcomplete Representations for Single Image Deraining Using CNNs
IEEE Journal of Selected Topics in Signal Processing
Removal of rain streaks from a single image is an extremely challenging problem, since rainy images often contain rain streaks of different size, shape, direction and density. Most recent methods for deraining use a deep network following a generic "encoder-decoder" architecture which captures low-level features across the initial layers and high-level features in the deeper layers. For the task of deraining, the rain streaks to be removed are relatively small, and focusing heavily on global features is not an efficient way to solve the problem. To this end, we propose an overcomplete convolutional network architecture which gives special attention to learning local structures by restraining the receptive field of filters. We combine it with a U-Net so that it does not lose out on global structures while focusing more on low-level features to compute the derained image. The proposed network, called Over-and-Under Complete Deraining Network (OUCD), consists of two branches: an overcomplete branch which is confined to a small receptive field size in order to focus on local structures, and an undercomplete branch that has larger receptive fields to primarily focus on global structures. Extensive experiments on synthetic and real datasets demonstrate that the proposed method achieves significant improvements over recent state-of-the-art methods.
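The receptive-field contrast between the two branches can be checked with simple arithmetic. The sketch below uses the standard receptive-field recursion for a stack of (kernel, stride) layers and, as a simplification of my own, models a 2x upsampling layer as stride 0.5: downsampling multiplies the jump between adjacent features (growing the receptive field), while upsampling shrinks it (restraining the receptive field, as in the overcomplete branch).

```python
def receptive_field(layers):
    """Receptive field (in input pixels) of a stack of (kernel, stride)
    layers; a fractional stride models an upsampling layer."""
    rf, jump = 1, 1.0
    for k, s in layers:
        rf = rf + (k - 1) * jump   # each layer widens the field by (k-1)*jump
        jump *= s                   # stride scales the step between features
    return rf

# Undercomplete (U-Net-like): conv, 2x down, conv, 2x down, conv
under = receptive_field([(3, 1), (2, 2), (3, 1), (2, 2), (3, 1)])
# Overcomplete: conv, 2x up, conv, 2x up, conv
over = receptive_field([(3, 1), (2, 0.5), (3, 1), (2, 0.5), (3, 1)])
```

With the same number of layers, the undercomplete stack sees 18 input pixels per output feature while the overcomplete stack sees only 6, which is exactly why the overcomplete branch stays focused on small local structures like thin rain streaks.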
IEEE Access, 2023
Various vision-based tasks suffer from inaccurate navigation and poor performance due to inevitable problems, such as adverse weather conditions like haze, fog, rain, snow, and clouds affecting ground and aerial navigation, as well as underwater images being degraded with blue-green tones and mud affecting marine navigation. Existing techniques in the literature typically focus on restoring specific degradations using separate models, leading to computational inefficiency. To address this, an all-in-one Multidomain Attention-based Conditional Generative Adversarial Network (MACGAN) is proposed to improve scene visibility for optimal ground, aerial, and marine navigation, using the same set of parameters across all domains. The MACGAN is a lightweight network with four encoder and decoder blocks and multiple attention blocks in between, which enhances the image restoration process by focusing on the most important features. To evaluate the effectiveness of MACGAN, extensive qualitative and quantitative comparisons are conducted with state-of-the-art image-to-image translation models, all-in-one adverse weather removal models, and single-effect removal models. The results highlight the superior performance of MACGAN in terms of scene visibility improvement and restoration quality. Additionally, MACGAN is tested on real-world unseen image domains, including smog, dust, fog, rain, snow, and lightning, further validating its generalizability and robustness. Furthermore, an ablation study is conducted to analyze the contributions of the discriminator and attention blocks within the MACGAN architecture. The results confirm that both components play significant roles in the effectiveness of MACGAN, with the discriminator ensuring adversarial training and the attention blocks effectively capturing and enhancing important image features.
Towards Perceptually Plausible Training of Image Restoration Neural Networks
2019 Ninth International Conference on Image Processing Theory, Tools and Applications (IPTA), 2019
Learning-based black-box approaches have proven to be successful at several tasks in the image and video processing domain. Many of these approaches depend on gradient-descent and back-propagation algorithms, which require calculating the gradient of the loss function. However, many of the visual metrics are not differentiable, and despite their superior accuracy they cannot be used to train neural networks for imaging tasks. Most image restoration neural networks rely on mean squared error for training. In this paper, we investigate visual-system-based metrics in order to provide perceptual loss functions that can replace mean squared error in gradient-descent-based algorithms. We also share our preliminary results on the proposed approach.
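The differentiability requirement the abstract raises is easy to demonstrate. Below, MSE comes with a usable analytic gradient, while a plausible non-differentiable quality score (fraction of pixels within a tolerance; my own toy example, not one of the paper's metrics) is piecewise constant, so its gradient is zero almost everywhere and cannot drive back-propagation.

```python
import numpy as np

def mse_loss(pred, target):
    """MSE and its analytic gradient w.r.t. pred -- exactly what
    gradient-descent training needs from a loss function."""
    diff = pred - target
    return np.mean(diff ** 2), 2.0 * diff / diff.size

def within_tolerance(pred, target, t=0.05):
    """A toy 'visual' score: fraction of pixels within tolerance t.
    Piecewise constant in pred, so its gradient is zero almost
    everywhere -- it cannot be optimized by gradient descent directly."""
    return np.mean(np.abs(pred - target) < t)

rng = np.random.default_rng(0)
target = rng.random((8, 8))
pred = target + 0.2                # uniformly shifted estimate

for _ in range(50):                # descend on the differentiable surrogate
    loss, grad = mse_loss(pred, target)
    pred = pred - 20.0 * grad
```

The paper's goal is a third option: perceptual loss functions that track visual quality better than MSE while still providing a useful gradient.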