High-recall calibration monitoring for stereo cameras

Pattern Analysis and Applications, 2024

Cameras are the prevalent sensors used for perception in autonomous robotic systems, but their initial calibration may degrade over time due to dynamic factors. This may lead to a failure of downstream tasks, such as simultaneous localization and mapping (SLAM) or object recognition. Hence, a computationally lightweight process that detects the decalibration is of interest. We describe a modification of StOCaMo, an online calibration monitoring procedure for a stereoscopic system. The method uses robust kernel correlation based on epipolar constraints; it validates extrinsic calibration parameters on a single frame with no temporal tracking. In this paper, we present a modified StOCaMo with an improved recall rate on small decalibrations through a confirmation technique based on resampled variance. With fixed parameters learned on a realistic synthetic dataset from CARLA, StOCaMo and its proposed modification were tested on multiple sequences from two real-world datasets: KITTI and EuRoC MAV. The modification improved the recall of StOCaMo by 25 % (to 91 % and 82 %, respectively) and the accuracy by 12 % (to 94.7 % and 87.5 %, respectively), while labeling at most one-third of the input data as uninformative. The upgraded method achieved a rank correlation of 0.78 (Spearman) between the StOCaMo V-index and downstream SLAM error.
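The single-frame check described above can be illustrated with a toy sketch (not the actual StOCaMo implementation): correspondences are scored by their distance to the epipolar lines induced by the calibrated fundamental matrix, a robust kernel turns the residuals into a bounded score, and a bootstrap estimate of the score's variance can act as the confirmation step that flags uninformative frames. The Gaussian kernel, the thresholds, and all function names below are illustrative assumptions.

```python
import numpy as np

def epipolar_residuals(F, pts_left, pts_right):
    """Point-to-epipolar-line distances for homogeneous points of shape (N, 3)."""
    lines = pts_left @ F.T                      # epipolar lines in the right image
    num = np.abs(np.sum(pts_right * lines, axis=1))
    den = np.hypot(lines[:, 0], lines[:, 1])
    return num / den

def kernel_score(residuals, sigma=1.0):
    """Robust Gaussian-kernel correlation score: close to 1 when residuals are small,
    i.e. when the assumed calibration still explains the correspondences."""
    return np.mean(np.exp(-0.5 * (residuals / sigma) ** 2))

def resampled_variance(residuals, n_boot=200, sigma=1.0, rng=None):
    """Bootstrap variance of the score; a high variance suggests the frame carries
    too little information to confirm or reject the calibration."""
    rng = rng or np.random.default_rng(0)
    n = len(residuals)
    scores = [kernel_score(residuals[rng.integers(0, n, n)], sigma)
              for _ in range(n_boot)]
    return np.var(scores)
```

For a rectified stereo pair, the fundamental matrix reduces to the skew of the horizontal epipole, so the residual is simply the row difference between matched points; a decalibration that shifts rows immediately drops the score.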


Online Stereo Camera Calibration

1992

The work presented in this document was not published in a journal. However, the approach was extended to accommodate a 4-DOF stereo head, and this extension was published a year later at the BMVC. The stereo head rig has since been dismantled, but this document describes the basis for the calibration software, which still survives in the Calib Tool of the Tina vision system, and the motivation for taking this approach.

Learning priors for calibrating families of stereo cameras

2007 IEEE 11th International Conference on Computer Vision, 2007

Online camera recalibration is necessary for the long-term deployment of computer vision systems. Existing algorithms assume that the source of recalibration information is a set of features in a general 3D scene, and that enough features are observed that the calibration problem is well-constrained. However, these assumptions are frequently invalid outside the laboratory. Real-world scenes often lack texture, contain repeated texture, or are mostly planar, making calibration difficult or impossible. In this paper we consider the calibration of families of stereo cameras, where each camera is assumed to have parameters drawn from a common but unknown prior distribution. We show how estimation of this prior using a small number of offline-calibrated cameras (e.g. from the same production line) allows online calibration of additional cameras using a small number of point correspondences, and that using the estimated prior significantly increases the accuracy and robustness of stereo camera calibration.
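The idea of a learned family prior can be illustrated with a deliberately simplified one-dimensional sketch (not the paper's actual method): a Gaussian prior over a single calibration parameter, say the stereo baseline, is fitted from a few offline-calibrated cameras and then combined with sparse, noisy online observations via a conjugate MAP update. The noise model, the parameter choice, and all names are assumptions for illustration.

```python
import numpy as np

def fit_prior(offline_params):
    """Fit a Gaussian prior from a handful of offline-calibrated cameras."""
    return np.mean(offline_params), np.std(offline_params, ddof=1)

def map_calibrate(obs, obs_sigma, prior_mu, prior_sigma):
    """MAP estimate of one calibration parameter from noisy online observations,
    regularized by the family prior (conjugate Gaussian update)."""
    obs = np.asarray(obs, dtype=float)
    w_prior = 1.0 / prior_sigma ** 2           # precision of the prior
    w_obs = len(obs) / obs_sigma ** 2          # total precision of the observations
    return (w_prior * prior_mu + w_obs * obs.mean()) / (w_prior + w_obs)
```

With a tight prior and only a couple of observations, the estimate stays close to the family mean; as the prior is weakened, the estimate approaches the raw observation mean, which mirrors the trade-off the abstract describes.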

Self calibration of a vision system embedded in a visual SLAM framework

2011 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2011

This paper presents a novel approach to self-calibrate the extrinsic parameters of a camera mounted on a mobile robot, in the context of fusion with the odometry sensor. Calibrating such a system precisely can be difficult if the camera is mounted on a vehicle whose frame is difficult to localize precisely (as on a car, for example). However, knowledge of the camera pose in the robot frame is essential in order to make a consistent fusion of the sensor measurements. Our approach is based on a Simultaneous Localization and Mapping (SLAM) framework: the parameters are estimated as the robot moves in an unknown environment that is viewed only by the camera. First, a study of the observability properties of the system is made in order to characterize the conditions its inputs have to satisfy to make the calibration process possible. Then, we show on three real experiments with an omnidirectional camera the validity of the conditions and the quality of the estimation of the 3D pose of the camera with respect to the odometry frame.

Stereo camera system calibration: the need of two sets of parameters

ArXiv, 2021

The reconstruction of a scene via a stereo-camera system is a two-step process: first, images from different cameras are matched to identify the set of point-to-point correspondences that will then actually be reconstructed in the three-dimensional real world. The performance of the system strongly relies on the calibration procedure, which has to be carefully designed to guarantee optimal results. We implemented three different calibration methods and compared their performance over 19 datasets. We present experimental evidence that, due to image noise, a single set of parameters is not sufficient to achieve high accuracy in the identification of the correspondences and in the 3D reconstruction at the same time. We propose to calibrate the system twice to estimate two different sets of parameters: the one obtained by minimizing the reprojection error, which will be used when dealing with quantities defined in the 2D space of the cameras, and the one obtained by mi...

Joint Forward-Backward Visual Odometry for Stereo Cameras

2019

Visual odometry is a widely used technique in the field of robotics and automation to keep track of the location of a robot using visual cues alone. In this paper, we propose a joint forward-backward visual odometry framework by combining both the forward motion and the backward motion estimated from stereo cameras. The basic framework of LIBVISO2 is used here for pose estimation, as it can run in real time on standard CPUs. The complementary nature of the errors in the forward and backward modes of visual odometry helps in providing a refined motion estimate upon combining these individual estimates. In addition, two reliability measures, namely the forward-backward relative pose error and the forward-backward absolute pose error, have been proposed for evaluating visual odometry frameworks on their own, without the requirement of any ground-truth data. The proposed scheme is evaluated on the KITTI visual odometry dataset. The experimental results demonstrate improved accuracy of the proposed sch...

SLAM-based automatic extrinsic calibration of a multi-camera rig

2011 IEEE International Conference on Robotics and Automation, 2011

Cameras are often a good choice as the primary outward-looking sensor for mobile robots, and a wide field of view is usually desirable for responsive and accurate navigation, SLAM and relocalisation. While this can potentially be provided by a single omnidirectional camera, it can also be flexibly achieved by multiple cameras with standard optics mounted around the robot. However, such setups are difficult to calibrate. Here we present a general method for fully automatic extrinsic auto-calibration of a fixed multi-camera rig, with no requirement for calibration patterns or other infrastructure, which works even in the case where the cameras have completely non-overlapping views. The robot is placed in a natural environment, makes a set of programmed movements including a full horizontal rotation, and captures a synchronized image sequence from each camera. These sequences are processed individually with a monocular visual SLAM algorithm. The resulting maps are matched and fused robustly based on corresponding invariant features, and then all estimates are optimised in a full joint bundle adjustment, where we constrain the relative poses of the cameras to be fixed. We present results showing accurate performance of the method for various two- and four-camera configurations.

A Hybrid Feature Parametrization for Improving Stereo-SLAM Consistency

In the field of visual simultaneous localization and mapping (SLAM), especially for feature-based stereo SLAM, data association is one of the most important and time-consuming sub-tasks. In this paper, we investigate the roles of different measured features during the data association process and present a new hybrid feature parametrization approach for stereo SLAM, which selects only a subset of the matched features that contribute most, and treats nearby and distant features separately with different parametrizations. We formulate a pipeline to filter, store and track the features, which saves time for further state estimation. For the different types of features on the manifold and in Euclidean space, we apply correspondingly designed maximum likelihood estimators with quadratic constraints and thus obtain a near-optimal estimate. Experimental results on the EuRoC dataset and real tests show that our proposed algorithm leads to accurate state estimation with significantly improved consistency.

A Novel Georeferenced Dataset for Stereo Visual Odometry

In this work, we present a novel dataset for assessing the accuracy of stereo visual odometry. The dataset has been acquired by a small-baseline stereo rig mounted on the top of a moving car. The ground truth is supplied by a consumer-grade GPS device without an IMU. Synchronization and alignment between the GPS readings and the stereo frames are recovered after the acquisition. We show that the attained ground-truth accuracy allows useful conclusions to be drawn in practice. The presented experiments address the influence of camera calibration, baseline distance and zero-disparity features on the achieved reconstruction performance.

A Practical Method for Camera Calibration in Stereo Vision Mobile Robot Navigation

This paper presents a method of camera calibration of a stereo pair for stereo vision applications. The method uses Jean-Yves Bouguet's calibration toolbox, which produces the intrinsic and extrinsic parameters of the stereo pair. The calibration data are then used in the rectification of the two images, and the rectified images go through the block matching process. The block matching technique is briefly described along with the performance of its output. The disparity map is generated by the algorithm with reference to the left image coordinates. The algorithm uses the Sum of Absolute Differences (SAD) and is implemented in MATLAB.
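The SAD block matching step described above can be sketched in a minimal form (illustrative only, not the paper's MATLAB implementation): for each pixel of the rectified left image, a small block is compared against horizontally shifted blocks in the right image, and the shift with the lowest sum of absolute differences becomes the disparity.

```python
import numpy as np

def sad_disparity(left, right, max_disp=16, block=5):
    """Dense disparity from a rectified grayscale pair via SAD block matching.
    Disparity is referenced to the left image (matches lie leftward in the right image).
    Border pixels without a full block or full disparity range are left at zero."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1].astype(np.int32)
            best_cost, best_d = None, 0
            for d in range(max_disp):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1].astype(np.int32)
                cost = np.abs(patch - cand).sum()   # sum of absolute differences
                if best_cost is None or cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

On a synthetic pair where the right image is the left one shifted by a constant number of columns, the recovered disparity equals that shift over the valid interior region; real implementations add subpixel refinement and a left-right consistency check on top of this.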
