Learning to See through Turbulent Water
Abstract:
Imaging through dynamic refractive media, such as looking into turbulent water or through hot air, is challenging because light rays are bent by unknown amounts, leading to complex geometric distortions. Inverting these distortions and recovering high-quality images is an inherently ill-posed problem, so previous works require extra information such as high frame-rate video or a template image, which limits their applicability in practice. This paper proposes training a deep convolutional neural network to undistort dynamic refractive effects using only a single image. The network is able to solve this ill-posed problem by learning both image priors and distortion priors. Our network consists of two parts: a warping net that removes geometric distortion and a color predictor net that further refines the restoration. An adversarial loss is used to achieve better visual quality and to help the network hallucinate missing and blurred information. To train our network, we collect a large training set of images distorted by a turbulent water surface. Unlike prior works on water undistortion, our method is trained end-to-end, requires only a single image, and does not use a ground-truth template at test time. Experiments show that by exploiting the structure of the problem, our network outperforms state-of-the-art deep image-to-image translation methods.
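The warping net described above predicts a per-pixel displacement field that is used to resample the distorted input. The resampling step itself can be sketched as a backward warp with bilinear interpolation; the code below is a minimal NumPy illustration of that operation, not the authors' implementation, and the function name `warp_image` and the (dy, dx) flow convention are assumptions for this sketch.

```python
import numpy as np

def warp_image(img, flow):
    """Backward-warp a grayscale image by a per-pixel flow field
    using bilinear interpolation.

    img:  (H, W) array.
    flow: (H, W, 2) displacement field (dy, dx); in the paper's setting
          this field would be predicted by the warping net.
    """
    H, W = img.shape
    ys, xs = np.mgrid[0:H, 0:W].astype(np.float64)
    # Sample coordinates, clamped to the image bounds.
    sy = np.clip(ys + flow[..., 0], 0, H - 1)
    sx = np.clip(xs + flow[..., 1], 0, W - 1)
    y0 = np.floor(sy).astype(int)
    x0 = np.floor(sx).astype(int)
    y1 = np.clip(y0 + 1, 0, H - 1)
    x1 = np.clip(x0 + 1, 0, W - 1)
    wy = sy - y0
    wx = sx - x0
    # Bilinear blend of the four neighboring pixels.
    return ((1 - wy) * (1 - wx) * img[y0, x0]
            + (1 - wy) * wx * img[y0, x1]
            + wy * (1 - wx) * img[y1, x0]
            + wy * wx * img[y1, x1])

img = np.arange(16, dtype=np.float64).reshape(4, 4)
# Zero flow reproduces the input; a uniform +1 shift in x
# samples the pixel to the right of each location.
identity = warp_image(img, np.zeros((4, 4, 2)))
shift = np.zeros((4, 4, 2))
shift[..., 1] = 1.0
shifted = warp_image(img, shift)
```

In the full method this warp is differentiable with respect to the flow field (as in spatial transformer networks), so the warping net can be trained end-to-end together with the color predictor net.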
Download
- Paper (7.0 MB)
- Training set (8.4 GB)
- Validation set (131 MB)
- Test set (251 MB)
- Source code
- Pretrained Models (427 MB)
Bibtex
    @inproceedings{li2018learning,
      title={Learning to See Through Turbulent Water},
      author={Li, Zhengqin and Murez, Zak and Kriegman, David and Ramamoorthi, Ravi and Chandraker, Manmohan},
      booktitle={Applications of Computer Vision (WACV), 2018 IEEE Winter Conference on},
      pages={512--520},
      year={2018},
      organization={IEEE}
    }