Upsampling Range Data in Dynamic Environments -- CVPR 2010

Jennifer Dolson Jongmin Baek Christian Plagemann Sebastian Thrun

Abstract

We present a flexible method for fusing information from optical and range sensors based on an accelerated high-dimensional filtering approach. Our system takes as input a sequence of monocular camera images as well as a stream of sparse range measurements as obtained from a laser or other sensor system. In contrast with existing approaches, we do not assume that the depth and color data streams have the same data rates or that the observed scene is fully static. Our method produces a dense, high-resolution depth map of the scene, automatically generating confidence values for every interpolated depth point. We describe how to integrate priors on object shape, motion and appearance and how to achieve an efficient implementation using parallel processing hardware such as GPUs.
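The core idea of guided depth interpolation can be illustrated with a small CPU sketch: a joint-bilateral-style filter that spreads sparse depth samples across the image, weighting each sample by spatial proximity and color similarity, with the sum of filter weights doubling as a per-pixel confidence value. This is a simplified toy version under stated assumptions (the function name and the `sigma_s`, `sigma_r`, and `radius` parameters are invented here); the paper's actual method uses an accelerated high-dimensional filter on the GPU and additionally treats time as a filtering dimension to handle dynamic scenes.

```python
import numpy as np

def upsample_depth(color, sparse_depth, mask, sigma_s=3.0, sigma_r=0.1, radius=4):
    """Toy joint-bilateral upsampling of sparse depth guided by a color image.

    color        -- (H, W, 3) float image in [0, 1]
    sparse_depth -- (H, W) depth values, valid only where mask is True
    mask         -- (H, W) bool array marking pixels with range measurements

    Returns (dense_depth, confidence), where confidence is the sum of
    filter weights contributing to each output pixel.
    """
    h, w = mask.shape
    depth = np.zeros((h, w))
    conf = np.zeros((h, w))
    ys, xs = np.nonzero(mask)  # locations of the sparse range samples

    for y in range(h):
        for x in range(w):
            num, den = 0.0, 0.0
            for sy, sx in zip(ys, xs):
                # Truncate the filter support for efficiency.
                if abs(sy - y) > radius or abs(sx - x) > radius:
                    continue
                # Spatial and color (range) distance terms of the kernel.
                ds = ((sy - y) ** 2 + (sx - x) ** 2) / (2 * sigma_s ** 2)
                dr = np.sum((color[y, x] - color[sy, sx]) ** 2) / (2 * sigma_r ** 2)
                wgt = np.exp(-(ds + dr))
                num += wgt * sparse_depth[sy, sx]
                den += wgt
            if den > 0:
                depth[y, x] = num / den
            conf[y, x] = den  # low weight sum -> low-confidence interpolation
    return depth, conf
```

On a toy two-region image, a depth sample only propagates within its own color region: pixels whose color matches a sample inherit its depth, while samples across a color edge contribute almost nothing.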

Paper

Videos

Example of processing the highway1 dataset: AVI Video (4.1 MB)

Data Sets

One download: All 8 data sets
Individual data sets:

Implementations
(Note: all analysis in the paper uses the GPU version of the algorithm)

Here is the GPU implementation, as described in the paper. It runs on Linux with NVIDIA's CUDA installed (in theory it should also run on Windows, but that theory has not been tested). A README file and sample Makefile are included: download

Here is the CPU implementation, also with a README and sample Makefile: download.