Calibration of the Multi-Camera Registration System for Visual Navigation Benchmarking

A registration system for the evaluation of indoor visual SLAM and odometry algorithms

This paper presents a new benchmark data registration system aimed at facilitating the development and evaluation of visual odometry and SLAM algorithms. The WiFiBOT LAB V3 wheeled robot, equipped with three cameras, an XSENS MTi attitude and heading reference system (AHRS) and Hall encoders, can be used to gather data in indoor exploration scenarios. The ground truth trajectory of the robot is obtained using a visual motion tracking system. Additional static cameras simulating a surveillance network, as well as artificial markers augmenting the navigation, are incorporated in the system. The datasets registered with the presented system will be freely available for research purposes.

Video-based realtime IMU-camera calibration for robot navigation

SPIE Proceedings, 2012

This paper introduces a new method for fast calibration of inertial measurement units (IMUs) rigidly coupled with cameras. That is, the relative rotation and translation between the IMU and the camera are estimated, allowing for the transfer of IMU data to the camera's coordinate frame. Moreover, the IMU's nuisance parameters (biases and scales) and the horizontal alignment of the initial camera frame are determined. Since an iterated Kalman filter is used for estimation, information on the estimation's precision is also available. Such calibrations are crucial for IMU-aided visual robot navigation, i.e. SLAM, since wrong calibrations cause biases and drifts in the estimated position and orientation. As the estimation is performed in real time, the calibration can be done using a freehand movement and the estimated parameters can be validated just in time. This provides the opportunity to optimize the trajectory online, increasing the quality and minimizing the time effort of the calibration. Except for a marker pattern used for visual tracking, no additional hardware is required. As will be shown, the system is capable of estimating the calibration within a short period of time. Depending on the requested precision, trajectories of 30 seconds to a few minutes are sufficient. This allows for calibrating the system at startup, so that deviations in the calibration due to transport and storage can be compensated. The estimation quality and consistency are evaluated in dependence on the traveled trajectories and the amount of IMU-camera displacement and rotation misalignment. It is analyzed how different types of visual markers, i.e. 2- and 3-dimensional patterns, affect the estimation. Moreover, the method is applied to mono and stereo vision systems, providing information on its applicability to robot systems. The algorithm is implemented using a modular software framework, so that it can be adapted to altered conditions easily.
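
As a rough illustration of the estimator at the heart of this approach, the sketch below shows a single iterated Kalman filter measurement update, the relinearisation loop being what distinguishes it from a plain EKF. The measurement function `h`, Jacobian `H_jac` and noise covariance `R` are hypothetical stand-ins; the paper's actual state vector (IMU-camera rotation and translation, biases, scales, initial alignment) and measurement model are not reproduced here.

```python
import numpy as np

def iekf_update(x, P, z, h, H_jac, R, n_iter=5):
    """One iterated EKF measurement update: relinearise h() around the
    refreshed estimate until it settles (Gauss-Newton on the filter's
    quadratic objective). h, H_jac, R are placeholders for the paper's
    visual-marker measurement model."""
    x_i = x.copy()
    for _ in range(n_iter):
        H = H_jac(x_i)                          # Jacobian at current iterate
        S = H @ P @ H.T + R                     # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
        # iterated form: innovation corrected for the linearisation point
        x_i = x + K @ (z - h(x_i) - H @ (x - x_i))
    P_new = (np.eye(len(x)) - K @ H) @ P        # covariance at final iterate
    return x_i, P_new
```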

A Practical Method for Camera Calibration in Stereo Vision Mobile Robot Navigation

This paper presents a camera calibration method for a stereo pair in a stereo vision application. The method uses the Jean-Yves Bouguet toolbox, which produces the intrinsic and extrinsic parameters of the stereo pair. The calibration data are then used to rectify the two images, and the rectified images go through a block matching process. The block matching technique is briefly described together with the performance of its output. The algorithm generates the disparity map with reference to the left image coordinates, using the Sum of Absolute Differences (SAD) measure, and is developed in Matlab.
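
A minimal sketch of the SAD block matching stage, assuming a rectified grey-level pair and left-referenced disparities as described above. The paper works in Matlab; this Python version, with an assumed block size and disparity search range, only illustrates the cost computation.

```python
import numpy as np

def sad_disparity(left, right, block=7, max_disp=48):
    """Brute-force SAD block matching on a rectified grey-level pair.
    Disparities are referenced to the left image. block and max_disp
    are assumed parameters, not values from the paper."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.float32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y-half:y+half+1, x-half:x+half+1].astype(np.int32)
            best, best_d = np.inf, 0
            for d in range(max_disp):
                cand = right[y-half:y+half+1, x-d-half:x-d+half+1].astype(np.int32)
                cost = np.abs(patch - cand).sum()   # Sum of Absolute Differences
                if cost < best:
                    best, best_d = cost, d
            disp[y, x] = best_d
    return disp
```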

Visual Navigation System for Mobile robots

hh.diva-portal.org

We present two different methods based on visual odometry for pose estimation (x, y, θ) of a robot: an appearance-based method that computes similarity measures between consecutive images, and a method that computes the visual flow of particular features, i.e. spotlights on the ceiling. Both techniques are used to correct the pose (x, y, θ) of the robot by measuring the heading change between consecutive images. A simple Kalman filter, an extended Kalman filter and a simple averaging filter are used to fuse the heading estimated by the visual odometry methods with odometry data from the wheel encoders. Both techniques are evaluated on three different datasets of images obtained from a warehouse, and the results show that both methods are able to reduce the drift in heading compared to using wheel odometry alone.
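
As a sketch of the fusion step, the scalar Kalman filter below predicts heading with the wheel-encoder increment and corrects it with the visually measured heading. The noise variances `q` and `r` are assumed tuning parameters, not values from the paper.

```python
def fuse_heading(theta, var, d_theta_odom, q, z_visual, r):
    """One predict/update cycle of a scalar Kalman filter on heading.
    theta, var   -- current heading estimate and its variance
    d_theta_odom -- heading increment from the wheel encoders (prediction)
    z_visual     -- heading measurement: previous fused heading plus the
                    change measured between consecutive images
    q, r         -- process/measurement noise variances (assumed tuning)"""
    theta_pred = theta + d_theta_odom           # predict with wheel odometry
    var_pred = var + q
    k = var_pred / (var_pred + r)               # Kalman gain
    theta_new = theta_pred + k * (z_visual - theta_pred)
    var_new = (1.0 - k) * var_pred
    return theta_new, var_new
```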

A Review of Registration Methods on Mobile Robots

Technologies for Machine Learning and Vision Applications

The task of registering three-dimensional data sets under rigid motions is a fundamental problem in many areas, such as computer vision, medical imaging and mobile robotics, arising whenever two or more 3D data sets must be aligned in a common coordinate system. Focusing on mobile robots, this chapter reviews the main registration methods in the literature. A possible classification is into distance-based and feature-based methods. The distance-based methods, of which the classical Iterative Closest Point (ICP) algorithm is the most representative, have many variations that obtain better results under particular noise, time, or accuracy conditions. Feature-based methods try to reduce the great number of points delivered by current sensors by combining a feature detector and a descriptor, whose correspondences can then be used to compute the final transformation with a method such as RANSAC or genetic algorithms.
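
For concreteness, here is a minimal point-to-point ICP in Python, pairing points by nearest neighbour and solving each alignment in closed form with the SVD method of Arun et al.; the practical variants the chapter surveys add outlier rejection, weighting and acceleration on top of this skeleton.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(src, dst, n_iter=30):
    """Minimal point-to-point ICP aligning src (n,3) onto dst (m,3):
    nearest-neighbour pairing, then closed-form SVD rigid alignment,
    repeated until n_iter iterations have run."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(dst)
    cur = src.copy()
    for _ in range(n_iter):
        _, idx = tree.query(cur)                # closest-point correspondences
        matched = dst[idx]
        mu_s, mu_d = cur.mean(0), matched.mean(0)
        H = (cur - mu_s).T @ (matched - mu_d)   # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        R_i = Vt.T @ U.T
        if np.linalg.det(R_i) < 0:              # guard against reflections
            Vt[-1] *= -1
            R_i = Vt.T @ U.T
        t_i = mu_d - R_i @ mu_s
        cur = cur @ R_i.T + t_i
        R, t = R_i @ R, R_i @ t + t_i           # accumulate the transform
    return R, t
```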

SLAM-based automatic extrinsic calibration of a multi-camera rig

2011 IEEE International Conference on Robotics and Automation, 2011

Cameras are often a good choice as the primary outward-looking sensor for mobile robots, and a wide field of view is usually desirable for responsive and accurate navigation, SLAM and relocalisation. While this can potentially be provided by a single omnidirectional camera, it can also be flexibly achieved by multiple cameras with standard optics mounted around the robot. However, such setups are difficult to calibrate. Here we present a general method for fully automatic extrinsic auto-calibration of a fixed multi-camera rig, with no requirement for calibration patterns or other infrastructure, which works even in the case where the cameras have completely non-overlapping views. The robot is placed in a natural environment, makes a set of programmed movements including a full horizontal rotation, and captures a synchronised image sequence from each camera. These sequences are processed individually with a monocular visual SLAM algorithm. The resulting maps are matched and fused robustly based on corresponding invariant features, and then all estimates are optimised in a full joint bundle adjustment in which we constrain the relative poses of the cameras to be fixed. We present results showing accurate performance of the method for various two- and four-camera configurations.
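
The map-fusion seed step can be illustrated with a closed-form similarity alignment (Umeyama's method, a standard choice; the paper itself fuses maps via matched invariant features followed by the constrained joint bundle adjustment, which is not reproduced here). Scale must be estimated because each monocular map is recovered only up to an unknown scale factor.

```python
import numpy as np

def align_maps_sim3(pts_a, pts_b):
    """Closed-form similarity alignment (Umeyama) of matched landmark
    sets from two monocular maps, so that pts_b ~= s * R @ pts_a + t."""
    mu_a, mu_b = pts_a.mean(0), pts_b.mean(0)
    A, B = pts_a - mu_a, pts_b - mu_b
    H = B.T @ A / len(pts_a)                    # cross-covariance
    U, D, Vt = np.linalg.svd(H)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1                            # keep a proper rotation
    R = U @ S @ Vt
    var_a = (A ** 2).sum() / len(pts_a)
    s = np.trace(np.diag(D) @ S) / var_a        # monocular scale factor
    t = mu_b - s * R @ mu_a
    return s, R, t
```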

Mobile Robot Localization via Efficient Calibration Technique of a Fixed Remote Camera

2013

Tactical mobility systems have gained focus and interest in several areas, including homeland security, battlefield intelligence, space missions and surveillance systems. Tactical systems work in very sophisticated environments and usually need accuracy and consistency to achieve robust situational awareness. One important aspect of situational awareness is reliable localization of the object of interest. This paper describes the design and implementation of an indoor mobile robot localization system using machine vision methods, namely a Vision-Based Indoor Positioning System (VIPS). The approach relies on color detection to identify the robot in the environment using a ceiling-mounted camera. A camera calibration model is developed to dynamically estimate the robot position in real time; it is based on grid modeling of the environment and a Euclidean interpolation method that refines the estimated gross position. Finally, a Gaussian zero-phase filter is exploited to ...
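
One plausible reading of the grid-plus-interpolation calibration is sketched below; the grid arrays, the bilinear blend and the treatment of "Euclidean interpolation" are assumptions on our part, since the abstract gives no formulas.

```python
import numpy as np

def pixel_to_floor(u, v, grid_px, grid_xy):
    """Interpolate a floor position for image point (u, v).
    grid_px[i, j] -- measured pixel coords of calibration grid node (i, j)
    grid_xy[i, j] -- known floor coords of the same node
    (Assumed data layout; a sketch, not the paper's exact model.)"""
    p = np.array([u, v], dtype=float)
    d = np.linalg.norm(grid_px - p, axis=2)     # distance to every node
    i, j = np.unravel_index(np.argmin(d), d.shape)
    i = min(i, grid_px.shape[0] - 2)            # clamp to a full cell
    j = min(j, grid_px.shape[1] - 2)
    p00, p10, p01 = grid_px[i, j], grid_px[i + 1, j], grid_px[i, j + 1]
    # local cell coordinates by projection onto the cell edges
    s = np.dot(p - p00, p10 - p00) / np.dot(p10 - p00, p10 - p00)
    t = np.dot(p - p00, p01 - p00) / np.dot(p01 - p00, p01 - p00)
    # bilinear blend of the four known floor positions
    return ((1 - s) * (1 - t) * grid_xy[i, j] + s * (1 - t) * grid_xy[i + 1, j]
            + (1 - s) * t * grid_xy[i, j + 1] + s * t * grid_xy[i + 1, j + 1])
```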

Simultaneous calibration of odometry and camera for a differential drive mobile robot

2010

Differential-drive mobile robots are usually equipped with video cameras for navigation purposes. In order to ensure proper operation of such systems, several calibration steps are required to estimate the following quantities: the video camera's intrinsic and extrinsic parameters, the relative pose between the camera and the vehicle frame and, finally, the odometric parameters of the vehicle. In this paper the simultaneous estimation of the above-mentioned quantities is achieved by a systematic and effective calibration procedure that does not require any iterative step. The procedure needs only on-board measurements given by the wheel encoders and the camera, namely a number of properly taken snapshots of a set of known landmarks. Numerical simulations and experimental results with a Khepera III mobile robot equipped with a low-cost camera confirm the effectiveness of the proposed technique.
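
The odometric parameters being calibrated enter through the standard differential-drive kinematics. A minimal integration step looks as follows; treating the calibrated wheel radii as metres per encoder tick is our simplifying assumption.

```python
import numpy as np

def dd_odometry_step(pose, ticks_l, ticks_r, r_l, r_r, b):
    """Integrate one differential-drive odometry step. The calibrated
    quantities are the odometric parameters: left/right wheel factors
    r_l, r_r (metres per encoder tick here) and axle length b."""
    x, y, th = pose
    dl, dr = r_l * ticks_l, r_r * ticks_r       # wheel arc lengths
    ds = 0.5 * (dl + dr)                        # translation of the midpoint
    dth = (dr - dl) / b                         # heading change
    x += ds * np.cos(th + 0.5 * dth)            # midpoint integration
    y += ds * np.sin(th + 0.5 * dth)
    return np.array([x, y, th + dth])
```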

Mobile robot position determination using visual landmarks

IEEE Transactions on Industrial Electronics, 1994

This paper is concerned with the problem of determining the position of a mobile vehicle during navigation. To achieve this objective, a multisensor navigation system for self-localization of the robot has been developed. By tracking a few known landmarks with a vision module, the system is able to continuously monitor its position and to integrate these estimates with the measurements provided by the vehicle odometers. This paper describes in detail the vision module used by the navigation system.
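
A minimal example of position determination from known landmarks: with the robot's heading assumed known (e.g. from the odometers), absolute bearings to two landmarks fix the position by intersecting the two rays. The sketch below is illustrative only and is not the paper's actual vision module.

```python
import numpy as np

def position_from_bearings(l1, l2, b1, b2):
    """Locate the robot from absolute bearings b1, b2 (world frame, rad)
    to two known landmark positions l1, l2 (2D). Each bearing defines a
    ray from the robot to its landmark; the robot sits where they meet."""
    d1 = np.array([np.cos(b1), np.sin(b1)])     # unit ray robot -> l1
    d2 = np.array([np.cos(b2), np.sin(b2)])     # unit ray robot -> l2
    # solve l1 - s1*d1 = l2 - s2*d2 for the ranges s1, s2
    A = np.column_stack([d1, -d2])
    s = np.linalg.solve(A, l1 - l2)
    return l1 - s[0] * d1                       # robot position
```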

Computer vision system for an autonomous mobile robot

2000

The purpose of this paper is to compare three methods for three-dimensional measurement of line position, used for the vision guidance that navigates an autonomous mobile robot. A model mapping three-dimensional ground points into image points is first developed using homogeneous coordinates. Then, using the ground plane constraint, the inverse transformation that maps image points into three-dimensional ground points is determined, and the system identification problem is solved using a calibration device: the calibration data are used to determine the model parameters by minimizing the mean square error between model and calibration points. A novel simplification is then presented which provides surprisingly accurate results. This method, called the magic matrix approach, uses only the calibration data; a more standard variation of this approach is also considered. The significance of this work is that it shows that three methods based on three-dimensional measurements may be used for mobile robot navigation, and that a simple method can achieve accuracy to a fraction of an inch, which is sufficient in some applications.
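
A plausible reconstruction of the "magic matrix" idea, assuming it amounts to a direct linear fit of a 3x3 image-to-ground homography to the calibration pairs; the abstract does not give the exact formulation, so the code below is a sketch under that assumption.

```python
import numpy as np

def fit_ground_homography(img_pts, gnd_pts):
    """Least-squares DLT fit of a 3x3 matrix H mapping image points to
    ground-plane points; needs at least four non-degenerate pairs."""
    A = []
    for (u, v), (x, y) in zip(img_pts, gnd_pts):
        A.append([u, v, 1, 0, 0, 0, -x * u, -x * v, -x])
        A.append([0, 0, 0, u, v, 1, -y * u, -y * v, -y])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 3)                 # null vector = flattened H

def image_to_ground(H, u, v):
    """Apply H with the homogeneous divide (ground plane constraint)."""
    p = H @ np.array([u, v, 1.0])
    return p[:2] / p[2]
```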