Real-time extraction of surface patches with associated uncertainties by means of Kinect cameras
Related papers
Accuracy Analysis of Kinect Depth Data
ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 2011
This paper presents an investigation of the geometric quality of depth data obtained by the Kinect sensor. Based on the mathematical model of depth measurement by the sensor, a theoretical error analysis is presented, which provides insight into the factors influencing the accuracy of the data. Experimental results show that the random error of depth measurement increases with increasing distance to the sensor, and ranges from a few millimetres up to about 4 cm at the maximum range of the sensor. The accuracy of the data is also found to be influenced by the low resolution of the depth measurements.
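The growth of the random error with distance follows directly from the stereo-triangulation model the paper analyses. A worked sketch of the propagation, using generic symbols (f for focal length, b for baseline, d for disparity) rather than the paper's exact notation:

```latex
% Depth from disparity (standard stereo model):
Z = \frac{f\,b}{d}
% First-order propagation of a disparity error \sigma_d to depth:
\sigma_Z = \left|\frac{\partial Z}{\partial d}\right| \sigma_d
         = \frac{f\,b}{d^2}\,\sigma_d
         = \frac{Z^2}{f\,b}\,\sigma_d
```

The depth error thus grows quadratically with range, which is consistent with the few-millimetres-to-4-cm span reported in the abstract above.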
Real-time estimation of planar surfaces in arbitrary environments using Microsoft Kinect sensor
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2013
We propose an algorithm, suitable for real-time robot applications, for modeling and reconstruction of complex scenes. The environment is seen as a collection of planes, and the algorithm extracts their parameters in real time from the 3D point cloud provided by the Kinect sensor. The execution speed of the procedure depends on the desired reconstruction quality and on the complexity of the surroundings. Implementation issues are discussed, and experiments on a real scene are included.
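The paper's own real-time procedure is not reproduced here; as a minimal sketch of the underlying plane-extraction idea, the snippet below fits the dominant plane of a point cloud with plain RANSAC. Function names and the inlier tolerance are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def fit_plane(pts):
    """Least-squares plane through points: returns (unit normal n, offset d)
    such that n . x + d = 0."""
    centroid = pts.mean(axis=0)
    # The singular vector for the smallest singular value of the centered
    # points is the plane normal.
    _, _, vt = np.linalg.svd(pts - centroid)
    n = vt[-1]
    return n, -n @ centroid

def ransac_plane(cloud, iters=200, tol=0.01, seed=0):
    """Return the inlier mask of the dominant plane in an (N, 3) cloud."""
    rng = np.random.default_rng(seed)
    best_mask = None
    for _ in range(iters):
        sample = cloud[rng.choice(len(cloud), 3, replace=False)]
        n, d = fit_plane(sample)
        mask = np.abs(cloud @ n + d) < tol   # point-to-plane distance test
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask = mask
    return best_mask
```

Removing the inliers and re-running the loop extracts further planes, which is the usual way such a "scene as a collection of planes" decomposition is iterated.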
3D Reconstruction using Kinect Sensor and Parallel Processing on 3D Graphics Processing Unit
REV Journal on Electronics and Communications, 2013
Depth cameras such as the Microsoft Kinect are much cheaper than conventional 3D scanning devices, and can therefore be acquired easily by everyday users. However, the depth data captured by Kinect over a certain distance is of low quality. In this work, we implement a set of algorithms allowing users to capture 3D surfaces by using the handheld Kinect. As a classic alignment algorithm such as the Iterative Closest Point (ICP) is not effective at aligning point clouds with limited overlapping regions, a coarse alignment using the Sample Consensus Initial Alignment (SAC-IA) is incorporated into the registration process to improve the fitness of the 3D point clouds. Two robust reconstruction methods, namely Alpha Shapes and Grid Projection, are also implemented to reconstruct 3D surfaces from registered point clouds. The experimental results have shown the efficiency and applicability of our approach. The constructed system obtains acceptable results in a ...
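The SAC-IA coarse-alignment stage relies on feature correspondences and is not sketched here; the snippet below illustrates only the ICP refinement that follows it, as a minimal NumPy/SciPy sketch (names and iteration count are illustrative, and real pipelines add outlier rejection and convergence checks):

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_step(src, dst):
    """One ICP iteration on (N, 3) clouds: match each src point to its
    nearest dst point, then solve the rigid transform in closed form."""
    idx = cKDTree(dst).query(src)[1]          # nearest-neighbour matches
    matched = dst[idx]
    # Kabsch/Umeyama closed-form rigid alignment.
    mu_s, mu_d = src.mean(axis=0), matched.mean(axis=0)
    H = (src - mu_s).T @ (matched - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return src @ R.T + t, R, t

def icp(src, dst, iters=30):
    """Refine src onto dst; assumes a coarse alignment (e.g. SAC-IA) was
    already applied, since plain ICP only converges locally."""
    for _ in range(iters):
        src, _, _ = icp_step(src, dst)
    return src
```

The local-convergence assumption in the last docstring is exactly why the abstract pairs ICP with a coarse initial alignment.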
3D modeling of indoor environments using Kinect sensor
2013 IEEE Second International Conference on Image Information Processing (ICIIP-2013), 2013
3D scene modeling for indoor environments has stirred significant interest in the last few years. The resulting photo-realistic renderings of internal structures are used in a huge variety of civilian and military applications such as training, simulation, heritage conservation, and localization and mapping. However, building such complicated maps poses significant challenges for both the computer vision and robotics communities (low lighting and textureless structures, transparent and specular surfaces, registration and fusion problems, coverage of all details, real-time constraints, etc.). Recently, the Microsoft Kinect sensors, originally developed as a gaming interface, have received a great deal of attention for being able to produce high-quality depth maps in real time. However, we found that these active sensors fail completely on transparent and specular surfaces for a number of technical reasons. As these objects should be included in the 3D model, we have investigated methods to inspect them without any modification of the hardware. In particular, the passive Structure from Motion (SFM) technique can be efficiently integrated into the reconstruction process to improve the detection of these surfaces. In fact, we propose to fill the holes in the depth map provided by the infrared (IR) Kinect sensor with new values passively retrieved by the SFM technique. This allows a large amount of additional depth information to be acquired in a relatively short time from two consecutive RGB frames. To preserve the real-time character of our approach, we select key RGB images instead of using all available frames. The experiments show a strong improvement in indoor reconstruction as well as transparent object inspection.
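A minimal sketch of the hole-filling step described above, assuming a Kinect depth map with zero-valued holes and an SFM depth map that is known only up to scale. The function name and the median-based scale estimate are illustrative assumptions, not the paper's method:

```python
import numpy as np

def fill_depth_holes(kinect_depth, sfm_depth):
    """Fill invalid Kinect depth pixels with passively estimated SFM depth.

    kinect_depth: (H, W) array, 0 where the IR sensor failed
                  (e.g. transparent or specular surfaces).
    sfm_depth:    (H, W) array from two consecutive RGB key frames,
                  NaN where no match was found; up-to-scale only.
    """
    # SFM depth has an unknown global scale; estimate it from pixels
    # where both sources are valid (assumption: a robust median suffices).
    both = (kinect_depth > 0) & ~np.isnan(sfm_depth)
    scale = np.median(kinect_depth[both] / sfm_depth[both])
    holes = (kinect_depth == 0) & ~np.isnan(sfm_depth)
    filled = kinect_depth.astype(float).copy()
    filled[holes] = scale * sfm_depth[holes]
    return filled
```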
Accuracy and Resolution of Kinect Depth Data for Indoor Mapping Applications
Sensors, 2012
Consumer-grade range cameras such as the Kinect sensor have the potential to be used in mapping applications where accuracy requirements are less strict. To realize this potential, insight into the geometric quality of the data acquired by the sensor is essential. In this paper we discuss the calibration of the Kinect sensor, and provide an analysis of the accuracy and resolution of its depth data. Based on a mathematical model of depth measurement from disparity, a theoretical error analysis is presented, which provides insight into the factors influencing the accuracy of the data. Experimental results show that the random error of depth measurement increases with increasing distance to the sensor, and ranges from a few millimeters up to about 4 cm at the maximum range of the sensor. The quality of the data is also found to be influenced by the low resolution of the depth measurements.
KinectFusion: Real-time dense surface mapping and tracking
2011
Figure caption: Example output from our system, generated in real time with a handheld Kinect depth camera and no other sensing infrastructure. Normal maps (colour) and Phong-shaded renderings (greyscale) from our dense reconstruction system are shown. On the left, for comparison, is an example of the live, incomplete, and noisy data from the Kinect sensor (used as input to our system).
Comparative evaluation of methods for filtering Kinect depth data
Multimedia Tools and Applications, 2014
The release of the Kinect has fostered the design of novel methods and techniques in several application domains. It has been tested in different contexts, ranging from home entertainment to surgical environments. Nonetheless, to promote its adoption for solving real-world problems, the Kinect should be evaluated in terms of precision and accuracy. Up to now, some filtering approaches have been proposed to enhance the precision and accuracy of the Kinect sensor, and preliminary studies have shown promising results. In this work, we discuss the results of a study in which we compared the most commonly used filtering approaches for Kinect depth data, in both static and dynamic contexts, by using novel metrics. The experimental results show that each approach can be profitably used to enhance the precision and/or accuracy of Kinect depth data in a specific context, whereas the temporal filtering approach is able to reduce noise in different experimental conditions.
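As a concrete reference point, the snippet below sketches the simplest member of the temporal-filtering family mentioned above, a per-pixel exponential moving average; the filters compared in the study are more elaborate, so treat this as an illustrative baseline only (the class name and `alpha` default are assumptions):

```python
import numpy as np

class TemporalDepthFilter:
    """Per-pixel exponential moving average over a Kinect depth stream."""

    def __init__(self, alpha=0.3):
        self.alpha = alpha    # weight of the newest frame
        self.state = None     # filtered depth so far

    def update(self, depth):
        valid = depth > 0     # 0 marks missing Kinect samples
        if self.state is None:
            self.state = depth.astype(float)
            return self.state
        # Blend only where the new frame has data; keep history elsewhere,
        # which also smooths over transient holes.
        self.state[valid] = (self.alpha * depth[valid]
                             + (1 - self.alpha) * self.state[valid])
        return self.state
```

Averaging over time reduces the random per-frame noise but lags behind moving objects, which is why the study evaluates static and dynamic contexts separately.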
Comparison of Kinect v1 and v2 Depth Images in Terms of Accuracy and Precision
RGB-D cameras like the Microsoft Kinect have had a huge impact on recent research in computer vision as well as robotics. With the release of the Kinect v2, a promising new device is available that will, most probably, be used in much future research. In this paper, we present a systematic comparison of the Kinect v1 and Kinect v2. We investigate the accuracy and precision of the devices for their usage in the context of 3D reconstruction, SLAM, or visual odometry. For each device we rigorously identify and quantify factors influencing the depth images, such as temperature, the distance to the camera, or the scene color. Furthermore, we demonstrate errors like flying pixels and multipath interference. Our insights build the basis for incorporating or modeling the errors of the devices in follow-up algorithms for diverse applications.
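For illustration, flying pixels of the kind demonstrated in the paper are often flagged with a simple neighbour-discontinuity heuristic; the sketch below is an assumption-laden baseline (threshold and neighbourhood size are illustrative), not the paper's measurement procedure:

```python
import numpy as np

def flying_pixel_mask(depth, jump=0.1):
    """Flag pixels whose depth jumps strongly against BOTH horizontal or
    both vertical neighbours: a common heuristic for the isolated
    'flying pixels' that ToF cameras like the Kinect v2 produce at
    depth edges. `jump` is in depth units (metres here, an assumption)."""
    dl = np.abs(depth[:, 1:-1] - depth[:, :-2])   # vs. left neighbour
    dr = np.abs(depth[:, 1:-1] - depth[:, 2:])    # vs. right neighbour
    du = np.abs(depth[1:-1, :] - depth[:-2, :])   # vs. upper neighbour
    dd = np.abs(depth[1:-1, :] - depth[2:, :])    # vs. lower neighbour
    mask = np.zeros_like(depth, dtype=bool)
    mask[:, 1:-1] |= (dl > jump) & (dr > jump)
    mask[1:-1, :] |= (du > jump) & (dd > jump)
    return mask
```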
Robust depth map refining using color image, 2024
Depth maps are essential for various applications, providing spatial information about object arrangement in a scene. They play a crucial role in fields such as computer vision, robotics, augmented and virtual reality, autonomous systems, and medical imaging. However, generating accurate, high-quality depth maps is challenging due to issues like texture-copying artifacts, edge leakage, and depth edge distortion. This study introduces a novel method for refining depth maps by integrating information from color images, combining structural and statistical techniques for superior results. The proposed approach employs a structural method to calculate affinities within a regularization framework, utilizing minimum spanning trees (MST) and minimum spanning forests (MSF). Superpixel segmentation is used to prevent MST construction across depth edges, addressing edge-leaking artifacts while preserving details. An edge inconsistency measurement model further reduces texture-copying artifacts. Additionally, an adaptive regularization window dynamically adjusts its bandwidth based on local depth variations, enabling effective handling of noise and maintaining sharp depth edges. Experimental evaluations across multiple datasets show the method's robustness and accuracy. It consistently achieves the lowest mean absolute deviation (MAD) compared to existing techniques across various upsampling factors, including 2×, 4×, 8×, and 16×. Visual assessments confirm its ability to produce depth maps free of texture-copying artifacts and blurred edges, yielding results closest to ground truth. Computational efficiency is ensured through a divide-and-conquer algorithm for spanning tree computations, reducing complexity while maintaining precision. This research underscores the importance of combining structural and statistical information in depth map refinement. By overcoming the limitations of existing methods, the proposed approach provides a practical solution for improving depth maps in applications requiring high precision and efficiency, such as robotics, virtual reality, and autonomous systems. Future work will focus on real-time applications and integration with advanced depth-sensing technologies.
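The MST/MSF regularization itself is too involved to sketch here; for orientation, the snippet below implements the classic joint bilateral filter, the kind of color-guided baseline that such refinement methods are evaluated against. It is explicitly not the paper's method, and all parameters are illustrative assumptions:

```python
import numpy as np

def joint_bilateral_refine(depth, gray, radius=3, sigma_s=2.0, sigma_r=0.1):
    """Refine a noisy depth map guided by a registered intensity image.

    depth: (H, W) depth map; gray: (H, W) guide image normalized to [0, 1]
    and pixel-aligned with depth (both assumptions of this sketch).
    """
    H, W = depth.shape
    out = np.zeros((H, W), dtype=float)
    wsum = np.zeros((H, W), dtype=float)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            # Slices pairing each target pixel (ty, tx) with its
            # neighbour at offset (dy, dx), clipped at the borders.
            sy = slice(max(dy, 0), H + min(dy, 0))
            ty = slice(max(-dy, 0), H + min(-dy, 0))
            sx = slice(max(dx, 0), W + min(dx, 0))
            tx = slice(max(-dx, 0), W + min(-dx, 0))
            # Spatial weight from the offset, range weight from the guide:
            # weights drop across intensity edges, so depth edges stay sharp.
            w = np.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2)) \
              * np.exp(-(gray[ty, tx] - gray[sy, sx]) ** 2 / (2 * sigma_r ** 2))
            out[ty, tx] += w * depth[sy, sx]
            wsum[ty, tx] += w
    return out / wsum
```

Because the range weight comes from the color image, strong textures can leak into the output, which is exactly the texture-copying artifact the MST/MSF approach above is designed to suppress.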