Efficient, blind, spatially-variant deblurring for shaken images

Deblurring a Camera-Shake image using a Thinning Kernel

Proceedings of The 6th IIAE International Conference on Intelligent Systems and Image Processing 2018, 2018

The task of blind deblurring usually consists of estimating interim images and blur kernels. Because a kernel carries far less information than an interim image, most deblurring methods that work from a single blurred image emphasize the estimation of interim images. However, the resulting kernel is often wider than it should be, which degrades the quality of the deconvolved image. To remedy the problem of wide kernels, we present a thinning scheme that better estimates the kernel, so that a clear image can be recovered from a camera-shake blurred image. To mitigate the lack of information about blur kernels, we make simple inferences and assumptions about kernels based on the trajectory of the camera shake. Under these inferences and assumptions, we estimate the blur kernel in three steps. First, we relax the estimation constraints to find the shape of the blur kernel. Second, we apply a thinning algorithm to obtain the skeleton of the blur kernel. Third, we reweight the blur kernel with a Gaussian distribution. Repeating these steps a few times yields a more accurate blur kernel, from which a high-quality deblurred image can be reconstructed. The proposed method is tested on a public database, and our results outperform those of two similar methods.
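
A minimal sketch of the three-step refinement described in the abstract, assuming an initial (over-wide) kernel estimate is already available; the threshold, sigma, and iteration count are illustrative choices, not values from the paper.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.morphology import skeletonize

def refine_kernel(kernel, iterations=3, sigma=1.0, shape_thresh=0.05):
    k = kernel / kernel.sum()
    for _ in range(iterations):
        # Step 1: relax the support to find the kernel's rough shape.
        support = k > shape_thresh * k.max()
        # Step 2: thin the support to a 1-pixel-wide skeleton,
        # approximating the camera-shake trajectory.
        skeleton = skeletonize(support)
        # Step 3: reweight around the skeleton with a Gaussian falloff,
        # concentrating energy near the recovered trajectory.
        dist = distance_transform_edt(~skeleton)
        k = k * np.exp(-dist**2 / (2 * sigma**2))
        k /= k.sum()
    return k
```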

Single Image Deblurring Using Motion Density Functions

2010

We present a novel single-image deblurring method to estimate spatially non-uniform blur that results from camera shake. We use existing spatially invariant deconvolution methods in a local and robust way to compute initial estimates of the latent image. The camera motion is represented as a Motion Density Function (MDF), which records the fraction of exposure time spent in each discretized portion of the space of all possible camera poses. Spatially varying blur kernels are derived directly from the MDF. We show that 6D camera motion is well approximated by 3 degrees of motion (in-plane translation and rotation) and analyze the scope of this approximation. We present results on both synthetic and captured data. Our system outperforms current approaches that assume spatially invariant blur.
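
A minimal sketch of deriving a local blur kernel from a motion density function, assuming the 3-DOF (in-plane translation plus rotation) approximation discussed in the abstract; the pose grid and kernel radius are illustrative and do not reproduce the paper's discretization.

```python
import numpy as np

def local_kernel_from_mdf(mdf, poses, pixel, kernel_radius=15):
    """mdf[i]   : fraction of exposure time spent at poses[i]
       poses[i] : (tx, ty, theta) camera pose sample
       pixel    : (x, y) location at which the local kernel is wanted"""
    size = 2 * kernel_radius + 1
    kernel = np.zeros((size, size))
    x, y = pixel
    for w, (tx, ty, theta) in zip(mdf, poses):
        # Displacement of this pixel under the pose
        # (rotation about the image origin plus in-plane translation).
        c, s = np.cos(theta), np.sin(theta)
        dx = (c * x - s * y + tx) - x
        dy = (s * x + c * y + ty) - y
        u = int(round(dx)) + kernel_radius
        v = int(round(dy)) + kernel_radius
        if 0 <= u < size and 0 <= v < size:
            kernel[v, u] += w          # accumulate the time fraction
    return kernel / max(kernel.sum(), 1e-12)
```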

Fast Non-uniform Deblurring using Constrained Camera Pose Subspace

Camera shake during the exposure often results in non-uniform blur across the image. Recent algorithms model the non-uniformly blurred image as a linear combination of images observed by the camera at discretized poses, and focus on estimating the fraction of exposure time spent at each pose. While these algorithms show promising results, they entail heavy computational loads. In this work, we propose a novel single-image deblurring algorithm to remove non-uniform blur. We estimate local blur kernels in different image regions and obtain an initial guess of the possible camera poses by back-projection. By constraining the possible camera poses to a low-dimensional subspace, we iteratively estimate the weight of each pose in the camera pose space. Experimental comparisons with state-of-the-art methods demonstrate the efficiency and effectiveness of our algorithm for non-uniform deblurring.
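
A minimal sketch of estimating per-pose weights within a constrained subspace, assuming a matrix A whose columns are the vectorized blur responses of the candidate poses and a basis B spanning the subspace obtained from the initial back-projection; both are placeholders for quantities the algorithm above would compute.

```python
import numpy as np
from scipy.optimize import nnls

def estimate_pose_weights(A, B, b):
    """A : (num_pixels, num_poses) blur response of each candidate pose
       B : (num_poses, subspace_dim) basis of the constrained pose subspace
       b : (num_pixels,) vectorized blurry observation"""
    # Solve for nonnegative subspace coefficients, then expand to pose weights.
    coeffs, _ = nnls(A @ B, b)
    weights = np.clip(B @ coeffs, 0, None)
    return weights / max(weights.sum(), 1e-12)
```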

Modeling and removing spatially-varying optical blur

2011 IEEE International Conference on Computational Photography (ICCP), 2011

Photo deblurring has been a major research topic in the past few years. So far, existing methods have focused on removing blur due to camera shake and object motion. In this paper, we show that the optical system of the camera also generates significant blur, even with professional lenses. We introduce a method to estimate the blur kernel densely over the image and across multiple aperture and zoom settings. Our measurements show that the blur kernel can have a non-negligible spread, even with top-of-the-line equipment, and that it varies nontrivially over this domain. In particular, the spatial variations are not radially symmetric, nor even left-right symmetric. We develop and compare two models of the optical blur, each with its own advantages. We show that our models predict accurate blur kernels that can be used to restore photos, and we demonstrate that they produce images that are more uniformly sharp than those produced with spatially-invariant deblurring techniques.
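
A minimal sketch of one way to evaluate a spatially-varying optical PSF, assuming kernels have been calibrated on a coarse grid of image positions for a fixed aperture and zoom setting and are interpolated in between; the grid layout and bilinear blending are illustrative stand-ins, not the paper's two models.

```python
import numpy as np

def interpolate_kernel(kernel_grid, grid_x, grid_y, x, y):
    """kernel_grid[j][i] : calibrated PSF at (grid_x[i], grid_y[j])
       (x, y)            : image location where a PSF is needed"""
    i = int(np.clip(np.searchsorted(grid_x, x) - 1, 0, len(grid_x) - 2))
    j = int(np.clip(np.searchsorted(grid_y, y) - 1, 0, len(grid_y) - 2))
    tx = (x - grid_x[i]) / (grid_x[i + 1] - grid_x[i])
    ty = (y - grid_y[j]) / (grid_y[j + 1] - grid_y[j])
    # Bilinear blend of the four surrounding calibrated kernels.
    k = ((1 - tx) * (1 - ty) * kernel_grid[j][i]
         + tx * (1 - ty) * kernel_grid[j][i + 1]
         + (1 - tx) * ty * kernel_grid[j + 1][i]
         + tx * ty * kernel_grid[j + 1][i + 1])
    return k / k.sum()
```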

A Novel Approach for Shaken Image Deblurring

International Journal of Signal Processing, Image Processing and Pattern Recognition, 2014

Blurry images are the bane of many photographers. Sometimes such images can be retaken in the hope that the next exposure will not be blurred, but frequently they capture a unique event that could only be photographed once. The most common cause of blurry images is camera shake, meaning that the camera moved during the exposure; even a very small movement can blur the image. In this paper, a novel blind-deblurring approach for removing the effect of camera shake from a blurry image is proposed. Starting with an image that has been blurred by camera shake, we recover the unknown sharp image in two phases: (i) a kernel estimation phase based on the Radon transform, and (ii) a sharp-image recovery phase based on the EM algorithm. Comprehensive comparisons on a number of blurry images show that our approach is not only substantially faster but also produces better deblurring results. Visually, the restored images are better than those produced by the algorithms of previous works.
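
A minimal sketch of the inverse-Radon step a kernel-estimation phase of this kind could use, assuming 1D projections of the blur kernel at several angles have already been extracted from blurred edge profiles; how those projections are obtained is the substantive part of the method and is not reproduced here.

```python
import numpy as np
from skimage.transform import iradon

def kernel_from_projections(projections, angles):
    """projections : (proj_len, num_angles) array, one kernel projection per column
       angles      : projection angles in degrees"""
    sinogram = np.asarray(projections, dtype=float)
    kernel = iradon(sinogram, theta=angles, filter_name='ramp')
    kernel = np.clip(kernel, 0, None)   # a blur kernel is nonnegative
    return kernel / max(kernel.sum(), 1e-12)
```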

Recording and Playback of Camera Shake: Benchmarking Blind Deconvolution with a Real-World Database

Computer Vision – ECCV 2012, 2012

Motion blur due to camera shake is one of the predominant sources of degradation in handheld photography. Single-image blind deconvolution (BD), or motion deblurring, aims at restoring a sharp latent image from the blurred recorded picture without knowing the camera motion that took place during the exposure. BD is a long-standing problem, but has attracted much attention recently, culminating in several algorithms able to restore photos degraded by real camera motion at high quality. In this paper, we present a benchmark dataset for motion deblurring that allows quantitative performance evaluation and comparison of recent approaches featuring non-uniform blur models. To this end, we record and analyse real camera motion, which is played back on a robot platform so that we can record a sequence of sharp images sampling the six-dimensional camera motion trajectory. The goal of deblurring is to recover one of these sharp images, and our dataset contains all information needed to assess how closely various algorithms approximate that goal. In a comprehensive comparison, we evaluate state-of-the-art single-image BD algorithms incorporating uniform and non-uniform blur models.

MC-Blur: A Comprehensive Benchmark for Image Deblurring

arXiv (Cornell University), 2021

Blur artifacts can seriously degrade the visual quality of images, and numerous deblurring methods have been proposed for specific scenarios. In most real-world images, however, blur is caused by different factors, e.g., motion and defocus. In this paper, we examine how existing deblurring methods perform in the presence of multiple types of blur. For in-depth performance evaluation, we construct a new large-scale multi-cause image deblurring dataset (MC-Blur), including real-world and synthesized blurry images with different blur factors. The images in the proposed MC-Blur dataset are collected using different techniques: averaging sharp images captured by a 1000-fps high-speed camera, convolving Ultra-High-Definition (UHD) sharp images with large-size kernels, adding defocus blur to images, and capturing real-world blurry images with various camera models. Based on the MC-Blur dataset, we conduct extensive benchmarking studies to compare SOTA methods in different scenarios, analyze their efficiency, and investigate the built dataset's capacity. These benchmarking results provide a comprehensive overview of the advantages and limitations of current deblurring methods, revealing the advances of our dataset.
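
A minimal sketch of the frame-averaging strategy mentioned above for synthesizing motion blur from high-speed footage; the gamma handling and the assumption that averaging happens in approximately linear light are illustrative, not the dataset's exact pipeline.

```python
import numpy as np

def synthesize_blur(frames, gamma=2.2):
    """frames : list of consecutive sharp frames (H, W, 3) with values in [0, 1]"""
    # Average in (approximately) linear light, since motion blur integrates photons.
    linear = np.stack([np.power(f, gamma) for f in frames], axis=0)
    blurred_linear = linear.mean(axis=0)
    return np.power(blurred_linear, 1.0 / gamma)
```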

Deblurring Shaken and Partially Saturated Images

International Journal of Computer Vision, 2014

We address the problem of deblurring images degraded by camera-shake blur and saturated (over-exposed) pixels. Saturated pixels violate the common assumption that the image-formation process is linear, and often cause ringing in deblurred outputs. We provide an analysis of ringing in general and show that, in order to prevent ringing, it is insufficient to simply discard saturated pixels. We show that even when saturated pixels are removed, ringing is caused by attempting to estimate the values of latent pixels that are brighter than the sensor's maximum output. Estimating these latent pixels is likely to cause large errors, and these errors propagate across the rest of the image in the form of ringing. We propose a new deblurring algorithm that locates these error-prone bright pixels in the latent sharp image and, by decoupling them from the remainder of the latent image, greatly reduces ringing. In addition, we propose an approximate forward model for saturated images, which allows us to estimate these error-prone pixels separately without causing artefacts. Results are shown for non-blind deblurring of images containing saturated regions. Parts of this work were previously published in the IEEE Workshop on Color and Photometry in Computer Vision, held with ICCV 2011 (Whyte et al. 2011).
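
A minimal sketch of a clipped forward model for saturated images and of a Richardson-Lucy-style update that simply masks saturated observations; the clipping threshold is an illustrative stand-in for the paper's approximate forward model, and, as the abstract notes, masking alone is not enough to suppress ringing.

```python
import numpy as np
from scipy.signal import fftconvolve

def saturated_forward(latent, kernel, s_max=1.0):
    # Linear convolution followed by sensor clipping at the saturation level.
    return np.minimum(fftconvolve(latent, kernel, mode='same'), s_max)

def rl_step_with_mask(latent, kernel, observed, s_max=1.0, eps=1e-8):
    # One multiplicative update that ignores saturated observations.
    predicted = fftconvolve(latent, kernel, mode='same')
    mask = observed < 0.99 * s_max
    ratio = np.where(mask, observed / (predicted + eps), 1.0)
    return latent * fftconvolve(ratio, kernel[::-1, ::-1], mode='same')
```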

Adaptive Single Image Deblurring

2022

This paper tackles the problem of dynamic scene deblurring. Although end-to-end fully convolutional designs have recently advanced the state of the art in non-uniform motion deblurring, their performance-complexity trade-off is still sub-optimal. Existing approaches achieve a large receptive field simply by increasing the number of generic convolution layers and the kernel size, which comes at the cost of larger model size and slower inference. In this work, we propose an efficient pixel-adaptive and feature-attentive design for handling large blur variations within and across different images. We also propose an effective content-aware global-local filtering module that significantly improves performance by considering not only the global dependencies of a pixel but also dynamically using its neighboring pixels. We use a patch-hierarchical attentive architecture composed of the above modules that implicitly discovers the spatial variations of the blur present in the input image.
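
A minimal sketch of pixel-adaptive filtering in the spirit of the design described above, where every pixel aggregates its neighbourhood with its own weights; here the weights come from a simple intensity-similarity rule as a stand-in for the learned, feature-attentive weights in the paper.

```python
import numpy as np

def pixel_adaptive_filter(image, radius=2, sigma=0.1):
    """image : 2D array; each output pixel is a content-weighted neighbourhood average."""
    h, w = image.shape
    padded = np.pad(image, radius, mode='reflect')
    out = np.zeros_like(image)
    norm = np.zeros_like(image)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = padded[radius + dy:radius + dy + h,
                             radius + dx:radius + dx + w]
            # Per-pixel weight: each location decides how much this
            # neighbour contributes, based on local content.
            weight = np.exp(-((shifted - image) ** 2) / (2 * sigma ** 2))
            out += weight * shifted
            norm += weight
    return out / norm
```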

Deblurring by Realistic Blurring

arXiv (Cornell University), 2020

Existing deep learning methods for image deblurring typically train models using pairs of sharp images and their blurred counterparts. However, synthetically blurred images do not necessarily model the genuine blurring process of real-world scenarios with sufficient accuracy. To address this problem, we propose a new method which combines two GAN models, a learning-to-Blur GAN (BGAN) and a learning-to-DeBlur GAN (DBGAN), in order to learn a better model for image deblurring by first learning how to blur images. The first model, BGAN, learns how to blur sharp images using unpaired sharp and blurry image sets, and then guides the second model, DBGAN, to learn how to correctly deblur such images. To reduce the discrepancy between real blur and synthesized blur, a relativistic blur loss is leveraged. As an additional contribution, this paper also introduces a Real-World Blurred Image (RWBI) dataset containing diverse blurry images. Our experiments show that the proposed method achieves consistently superior quantitative performance as well as higher perceptual quality on both the newly proposed dataset and the public GOPRO dataset.
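
A minimal sketch of the relativistic average discriminator loss that a relativistic blur loss of this kind builds on, written with NumPy for clarity; the discriminator scores are placeholders, and the paper's exact formulation is not reproduced here.

```python
import numpy as np

def relativistic_average_d_loss(real_scores, fake_scores, eps=1e-8):
    """real_scores, fake_scores : raw (pre-sigmoid) discriminator outputs"""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    # Real samples should look "more real" than the average fake,
    # and fake samples "less real" than the average real.
    d_real = sigmoid(real_scores - fake_scores.mean())
    d_fake = sigmoid(fake_scores - real_scores.mean())
    return -np.mean(np.log(d_real + eps)) - np.mean(np.log(1.0 - d_fake + eps))
```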