Extrinsic calibration of a set of range cameras in 5 seconds without pattern
Related papers
Fast Visual Odometry for 3-D Range Sensors
IEEE Transactions on Robotics, 2015
This paper presents a new dense method to compute the odometry of a free-flying range sensor in real time. The method applies the range flow constraint equation to sensed points in the temporal flow to derive the linear and angular velocity of the sensor in a rigid environment. Although this approach is applicable to any range sensor, we particularize its formulation to estimate the 3-D motion of a range camera. The proposed algorithm is tested with different image resolutions and compared with two state-of-the-art methods: generalized iterative closest point (GICP) [1] and robust dense visual odometry (RDVO) [2]. Experiments show that our approach clearly outperforms GICP, which uses the same geometric input data, whereas it achieves results similar to RDVO, which requires both geometric and photometric data to work. Furthermore, experiments are carried out to demonstrate that our approach is able to estimate fast motions at 60 Hz running on a single CPU core, a performance that has never been reported in the literature. The algorithm is available online under an open source license so that the robotic community can benefit from it.
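The core of such dense range-based odometry is a per-pixel linear constraint on the sensor twist. As a rough reconstruction (not the paper's exact derivation; the weighting and coarse-to-fine details differ), the range flow constraint for a depth image Z(u, v, t) can be written as
\[
\dot{Z} \;=\; \frac{\partial Z}{\partial t} + \frac{\partial Z}{\partial u}\,\dot{u} + \frac{\partial Z}{\partial v}\,\dot{v},
\qquad
\dot{\mathbf{p}} \;=\; -\,\mathbf{v} - \boldsymbol{\omega}\times\mathbf{p},
\]
where (u̇, v̇) is the image-plane motion of the observed point, p its 3-D position, and ξ = (v, ω) the sensor's linear and angular velocity. Substituting the projection model turns every valid pixel into one linear equation in ξ, and the stacked, overdetermined system A ξ = b is solved in (weighted) least squares, which is what allows the method to run at high frame rates on a single CPU core.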
Plane-based Odometry using an RGB-D Camera
Procedings of the British Machine Vision Conference 2013, 2013
Odometry consists of using data from a moving sensor to estimate its change in position over time. It is a crucial step for several applications in robotics and computer vision. This paper presents a novel approach for estimating the relative motion between successive RGB-D frames that uses plane primitives instead of point features. The planes in the scene are extracted and the motion estimation is cast as a plane-to-plane registration problem with a closed-form solution. Point features are only extracted in cases where the plane configuration is insufficient to determine the motion without ambiguity. The initial estimate is refined in a photo-geometric optimization step that takes full advantage of the plane detection and the simultaneous availability of depth and visual appearance cues. Extensive experiments show that our plane-based approach is as accurate as state-of-the-art point-based approaches when the camera displacement is small, and significantly outperforms them in the case of wide baselines and/or dynamic foreground.
1 Introduction
Visual odometry is the process of estimating the motion of a robot using the input of one or multiple cameras attached to it. It has important applications in robotics, for control and navigation in the absence of an external reference system. Research has been conducted to tackle this problem using RGB cameras [8, 10]. However, these methods face significant challenges, including the reconstruction of textureless regions. RGB-D sensors, such as the Microsoft Kinect and the Asus Xtion Pro Live, cope with this issue since they provide the 3D geometry and the visual appearance of the scene simultaneously. Recently, odometry methods that take advantage of the depth and color information provided by RGB-D sensors have been developed [9, 13]. They run in real time and provide accurate estimates for high frame-rate acquisitions and moderate sensor velocity. However, they are not able to properly cope with large displacements between consecutive frames. In [13], it has been experimentally shown that the performance of the method degrades as the
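The plane-to-plane registration mentioned in the abstract above admits a compact closed form. The sketch below, under the common plane parameterisation n · p = d, recovers the rotation from matched unit normals via an SVD (Kabsch-style) and the translation from the plane offsets in least squares; variable names and the exact sign convention are assumptions, not the paper's code.

```python
import numpy as np

def register_planes(n1, d1, n2, d2):
    """Closed-form plane-to-plane registration (a generic sketch, not the
    paper's exact formulation).

    Planes are given as n . p = d, with n1, n2 of shape (k, 3) holding the
    unit normals of k matched planes in frames 1 and 2, and d1, d2 of
    shape (k,). Returns R, t such that p2 = R @ p1 + t maps frame-1 points
    into frame 2. Requires k >= 3 planes with linearly independent normals.
    """
    # Rotation: minimise sum ||n2_i - R n1_i||^2 (Kabsch / SVD solution).
    H = n1.T @ n2                      # 3x3 correlation matrix of normals
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T

    # Translation: plane offsets transform as d2 = d1 + n2 . t,
    # so t is the least-squares solution of (n2) t = d2 - d1.
    t, *_ = np.linalg.lstsq(n2, d2 - d1, rcond=None)
    return R, t
```

At least three planes with linearly independent normals are needed; otherwise the problem is ambiguous, which is exactly the situation in which the method above falls back to point features.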
2009
The combined use of 3D Laser Range Finders (LRF) and cameras is increasingly common in navigation applications for autonomous mobile robots. The integration of laser range information and images requires the estimation of the Euclidean 3-dimensional transformation between the coordinate systems of the LRF and the cameras. This paper describes a new and efficient method to perform the extrinsic calibration between a 3D LRF and a stereo camera with the aid of inertial data. The main novelty of the proposed approach compared to other state-of-the-art calibration procedures is the use of an Inertial Measurement Unit (IMU), which decreases the number of points needed for a robust calibration. Furthermore, a freely moving bright spot is the only calibration object. A set of virtual 3D points is generated by waving the bright spot through the working volume in three different planes. Its projections onto the images are found with sub-pixel precision and verified by a robust RANSAC analysis. The same points are extracted from the laser scan data and matched to the virtual 3D points from the stereo pair. Experimental results presented in this paper demonstrate the accuracy of our approach. Furthermore, the applicability of the proposed technique is high, only requiring an inertial sensor coupled to the sensor platform. This approach has also been extended to a camera network.
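At its core, corresponding the laser-extracted bright-spot positions with the stereo-triangulated virtual 3D points is a 3D-3D rigid alignment, typically solved in closed form and wrapped in RANSAC for robustness. The following is a minimal, generic sketch of that step (the threshold, function names, and the omission of the IMU prior are assumptions, not the paper's pipeline).

```python
import numpy as np

def rigid_transform_3d(P, Q):
    """Least-squares rigid alignment (Kabsch/Umeyama without scale) of two
    point sets P, Q of shape (k, 3), returning R, t with Q ~= P @ R.T + t."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t

def ransac_rigid(P, Q, iters=500, thresh=0.02, rng=np.random.default_rng(0)):
    """Simple RANSAC wrapper: sample minimal sets of 3 correspondences and
    keep the model with the most inliers (the 2 cm threshold is a guess)."""
    best_inliers = np.zeros(len(P), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(P), size=3, replace=False)
        R, t = rigid_transform_3d(P[idx], Q[idx])
        err = np.linalg.norm(P @ R.T + t - Q, axis=1)
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    R, t = rigid_transform_3d(P[best_inliers], Q[best_inliers])
    return R, t, best_inliers
```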
Extrinsic Calibration of Multiple RGB-D Cameras From Line Observations
IEEE Robotics and Automation Letters, 2018
This paper presents a novel method to estimate the relative poses between RGB and depth cameras without the requirement of an overlapping field of view, thus providing flexibility to calibrate a variety of sensor configurations. This calibration problem is relevant to robotic applications that can benefit from using several cameras to increase the field of view. In our approach, we extract and match lines of the scene in the RGB and depth cameras, and impose geometric constraints to find the relative poses between the sensors. An analysis of the observability properties of the problem is presented. We have validated our method in both synthetic and real scenarios with different camera configurations, demonstrating that our approach achieves good accuracy and is very simple to apply, in contrast with previous methods based on trajectory matching using visual odometry or SLAM.
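A common way to express such line constraints (the paper's exact formulation may differ) uses the interpretation plane of each 2D line in the RGB camera: the 3D line measured by the depth camera, once transformed into the RGB frame, must lie inside that plane. A minimal residual sketch, with assumed variable names:

```python
import numpy as np

def line_residuals(R, t, m, v_d, p_d):
    """Residuals of one 2D-3D line match (a common formulation; the paper's
    exact constraints may differ).

    m   : (3,) normal of the interpretation plane spanned by the RGB camera
          centre and the 2D line (e.g. the cross product of two normalised
          image points on the line).
    v_d : (3,) direction of the 3D line measured by the depth camera.
    p_d : (3,) any point of that 3D line, in depth-camera coordinates.
    R, t: rotation and translation mapping depth-camera points into the
          RGB-camera frame.
    """
    r_dir   = m @ (R @ v_d)        # line direction must lie in the plane
    r_point = m @ (R @ p_d + t)    # a point of the line must lie in the plane
    return np.array([r_dir, r_point])
```

Stacking these residuals over several line matches and minimising them with a 6-DoF nonlinear least-squares solver recovers R and t; the observability analysis mentioned in the abstract determines which line configurations make this problem well posed.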
Extrinsic self calibration of a camera and a 3d laser range finder from natural scenes
2007
In this paper, we describe a new approach for the extrinsic calibration of a camera with a 3D laser range finder that can be done on the fly. This approach does not require any calibration object. Only a few point correspondences are used, which are manually selected by the user from a scene viewed by the two sensors. The proposed method relies on a novel technique to visualize the range information obtained from a 3D laser scanner. This technique converts the visually ambiguous 3D range information into a 2D map where natural features of a scene are highlighted. We show that by enhancing the features the user can easily find the points corresponding to the camera image points. Therefore, visually identifying laser-camera correspondences becomes as easy as image pairing. Once point correspondences are given, extrinsic calibration is done using the well-known PnP algorithm followed by a nonlinear refinement process. We show the performance of our approach through experimental results. In these experiments, we use an omnidirectional camera. The implication of this method is important because it brings 3D computer vision systems out of the laboratory and into practical use.
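The final "PnP followed by nonlinear refinement" step maps directly onto standard tooling. The snippet below is a self-contained synthetic sketch using OpenCV's pinhole-camera PnP (the paper itself uses an omnidirectional model, and all numeric values here are hypothetical); it projects known laser points with a known pose and then recovers that pose.

```python
import numpy as np
import cv2

# Hypothetical intrinsics and laser-frame 3D points (illustrative values only).
K = np.array([[525.0, 0.0, 320.0], [0.0, 525.0, 240.0], [0.0, 0.0, 1.0]])
dist = np.zeros(5)  # assume lens distortion has already been removed
pts_3d = np.array([[0.5, 0.2, 3.0], [-0.4, 0.1, 2.5], [0.3, -0.5, 4.0],
                   [-0.2, 0.4, 3.5], [0.1, 0.0, 2.8], [0.6, -0.3, 3.2]])

# Ground-truth pose used only to synthesise the 2D observations.
rvec_true = np.array([[0.05], [-0.10], [0.02]])
tvec_true = np.array([[0.10], [-0.05], [0.20]])
pts_2d, _ = cv2.projectPoints(pts_3d, rvec_true, tvec_true, K, dist)

# Initial estimate with PnP, then Levenberg-Marquardt refinement
# (requires OpenCV >= 4.1 for solvePnPRefineLM).
ok, rvec, tvec = cv2.solvePnP(pts_3d, pts_2d, K, dist)
rvec, tvec = cv2.solvePnPRefineLM(pts_3d, pts_2d, K, dist, rvec, tvec)
R, _ = cv2.Rodrigues(rvec)   # rotation laser -> camera; tvec is the translation
```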
Line-based extrinsic calibration of range and image sensors
2013 IEEE International Conference on Robotics and Automation, 2013
Fig. 1 (caption): Overview of the proposed method: lines are extracted from (a) an image and (b) a point cloud of the same scene. (c) By registering the 2D and 3D lines, the corresponding photorealistic representation can be generated.
2014 International Conference on 3D Imaging (IC3D), 2014
Many applications require the use of multiple cameras to cover a large volume. In this paper, we evaluate several pairwise calibration techniques dedicated to multiple range cameras. We compare the precision of a self-calibration technique, based on movement in front of the cameras, to object-based calibration. While the self-calibration technique is less precise than its counterparts, it yields a first estimate of the transformation between the cameras and makes it possible to detect when the cameras become misaligned. Therefore, this technique is useful in practical situations.
SLAM-based automatic extrinsic calibration of a multi-camera rig
2011
Cameras are often a good choice as the primary outward-looking sensor for mobile robots, and a wide field of view is usually desirable for responsive and accurate navigation, SLAM and relocalisation. While this can potentially be provided by a single omnidirectional camera, it can also be flexibly achieved by multiple cameras with standard optics mounted around the robot. However, such setups are difficult to calibrate.
RGB-D camera calibration and trajectory estimation for indoor mapping
Autonomous Robots, 2020
In this paper, we present a system for estimating the trajectory of a moving RGB-D camera with applications to building maps of large indoor environments. Unlike most current research, we propose a 'feature model'-based RGB-D visual odometry system for a computationally constrained mobile platform, where the 'feature model' is persistent and dynamically updated from new observations using a Kalman filter. We first propose a mixture-of-Gaussians model for estimating the depth random noise, which is used to describe the spatial uncertainty of the feature point cloud. In addition, we introduce a general depth calibration method to remove sys
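The persistent 'feature model' described above amounts to a per-landmark state updated with a Kalman filter whenever the feature is re-observed. A minimal sketch, simplifying the mixture-of-Gaussians depth noise to a single per-measurement covariance (the class and variable names are assumptions):

```python
import numpy as np

class FeatureModel:
    """Minimal sketch of a persistent landmark updated with a Kalman filter,
    in the spirit of the 'feature model' described above (the paper's
    mixture-of-Gaussians depth noise is simplified to a single covariance)."""

    def __init__(self, p0, P0):
        self.p = np.asarray(p0, dtype=float)   # 3D position estimate
        self.P = np.asarray(P0, dtype=float)   # 3x3 covariance

    def update(self, z, R_meas):
        """Fuse a new 3D observation z with measurement covariance R_meas.
        The landmark is assumed static, so there is no prediction step, and
        H = I because the sensor observes the position directly."""
        S = self.P + R_meas                    # innovation covariance
        K = self.P @ np.linalg.inv(S)          # Kalman gain
        self.p = self.p + K @ (z - self.p)     # state update
        self.P = (np.eye(3) - K) @ self.P      # covariance update
```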
A solution for SLAM through augmenting vision and range information
Intelligent Robots and …
This paper proposes a method for augmenting the information of a monocular camera and a range finder. This method is a valuable step towards solving the SLAM problem in unstructured environments without relying on encoder data. The proposed algorithm lets the robot benefit from a feature-based map for filtering purposes, while exploiting an accurate motion model based on point-wise raw range scan matching rather than unreliable feature-based range scan matching in unstructured environments. Moreover, a robust loop-closure detection procedure is another outcome of this method. Experiments with a low-cost IEEE 1394 webcam and a range finder illustrate the effectiveness of the proposed method in achieving drift-free SLAM during loop-closing motions in unstructured environments.