On the use of IMUs in the PnP problem
Related papers
Camera and inertial sensor fusion for the PnP problem: algorithms and experimental results
Machine Vision and Applications
In this work, we address the problem of estimating the relative position and orientation of a camera and an object when both are equipped with inertial measurement units (IMUs) and the object exhibits a set of n landmark points with known coordinates (the so-called pose estimation or PnP problem). We present two algorithms that fuse the information provided by the camera and the IMUs to solve the PnP problem with good accuracy. These algorithms use only the measurements given by the IMUs' inclinometers, since magnetometers usually give inaccurate estimates of the Earth's magnetic field vector. The effectiveness of the proposed methods is assessed through numerical simulations and experimental tests, and the results are compared with the most recent methods proposed in the literature.
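The geometric payoff of the inclinometer measurements can be illustrated with a short sketch. If both inclinometers return the gravity direction in their own frame, every relative rotation consistent with the two readings differs only by a rotation about gravity, so the rotational search space of the PnP problem shrinks from three degrees of freedom to one. The numpy sketch below is illustrative only; the function names and sample vectors are ours, not the authors':

```python
import numpy as np

def rot_aligning(a, b):
    """Return one rotation matrix R with R @ a = b (a, b unit vectors),
    via Rodrigues' formula on the axis a x b."""
    v = np.cross(a, b)
    c = float(np.dot(a, b))
    if np.isclose(c, -1.0):
        # antiparallel case: rotate by pi about any axis perpendicular to a
        axis = np.cross(a, [1.0, 0.0, 0.0])
        if np.linalg.norm(axis) < 1e-8:
            axis = np.cross(a, [0.0, 1.0, 0.0])
        axis /= np.linalg.norm(axis)
        return 2.0 * np.outer(axis, axis) - np.eye(3)
    K = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
    return np.eye(3) + K + K @ K / (1.0 + c)

def axis_rotation(axis, theta):
    """Rodrigues rotation by angle theta about a unit axis."""
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

# g_obj, g_cam: unit gravity directions measured by the two inclinometers
g_obj = np.array([0.0, 0.0, -1.0])
g_cam = np.array([0.1, 0.0, -1.0]); g_cam /= np.linalg.norm(g_cam)

R0 = rot_aligning(g_obj, g_cam)
# every camera-from-object rotation consistent with both readings has the
# form R(theta) = Rot(g_cam, theta) @ R0, with theta the single remaining DoF
R = axis_rotation(g_cam, 0.7) @ R0
assert np.allclose(R @ g_obj, g_cam, atol=1e-9)
```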
On the use of the inclinometers in the PnP Problem
This paper deals with the problem of estimating the relative pose between a camera and an object. It is assumed that both the camera and the object are equipped with an inertial measurement unit (IMU) that measures their inclination with respect to the gravity vector, and that the object carries a set of n ≥ 2 feature points whose positions in the object reference frame are known a priori. The resulting pose estimation problem can be seen as a PnP problem with inclination information. Earlier results by the authors showed that in this case the P2P problem always admits two solutions, except for a few singular configurations where the number of solutions is infinite; to avoid singular configurations and to resolve ambiguities, a very simple solution to the P3P problem, based on the idea of re-projection, was proposed. In this paper, it is shown that, thanks to a simple test based on geometrical considerations, one of the two P2P solutions can very often be discarded. Moreover, to resolve the remaining ambiguities and to improve the pose estimate, a novel and more robust algorithm for the general PnP problem is proposed. The results are validated through numerical and experimental tests.
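The re-projection idea mentioned above can be caricatured in a few lines: given the two candidate P2P poses, project a further known point with each candidate and keep the pose whose prediction best matches the observation. A hedged sketch, assuming a calibrated pinhole camera with intrinsic matrix K; the helper names are ours, not the authors':

```python
import numpy as np

def project(K, R, t, X):
    """Pinhole projection of a 3-D point X (object frame) into pixels."""
    x = K @ (R @ X + t)
    return x[:2] / x[2]

def disambiguate(candidates, K, X_extra, u_observed):
    """Keep the candidate pose (R, t) whose reprojection of an extra
    feature point best matches its observed pixel position."""
    errs = [np.linalg.norm(project(K, R, t, X_extra) - u_observed)
            for (R, t) in candidates]
    return candidates[int(np.argmin(errs))]
```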
Relative pose calibration of a spherical camera and an IMU
2008
This paper is concerned with the problem of estimating the relative translation and orientation of an inertial measurement unit and a spherical camera, which are rigidly connected. The key is to realize that this problem is in fact an instance of a standard problem within the area of system identification, referred to as a gray-box problem. We propose a new algorithm for estimating the relative translation and orientation, which does not require any additional hardware, except a piece of paper with a checkerboard pattern on it. The experimental results show that the method works well in practice.
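As a rough illustration of the gray-box flavor of the problem, the unknown camera-IMU transform can be cast as the parameter vector of a nonlinear least-squares fit over checkerboard reprojections. This is only a sketch in a similar spirit, not the authors' prediction-error estimator; the frames data structure, the parametrization, and all names are our assumptions:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(theta, frames, K):
    """theta = [rotation vector (3), translation (3)] of the unknown
    camera-from-IMU transform.  Each frame holds the IMU's world pose
    (R_wi, t_wi), the checkerboard corners in world coordinates, and
    their observed pixel positions; the residual is the stacked
    reprojection error over all frames."""
    R_ci = Rotation.from_rotvec(theta[:3]).as_matrix()
    t_ci = theta[3:6]
    res = []
    for R_wi, t_wi, corners_w, pixels in frames:
        for Xw, u in zip(corners_w, pixels):
            Xi = R_wi.T @ (Xw - t_wi)      # world -> IMU frame
            Xc = R_ci @ Xi + t_ci          # IMU -> camera frame
            x = K @ Xc
            res.extend(x[:2] / x[2] - u)   # pinhole reprojection error
    return np.asarray(res)

# given collected frames and the intrinsic matrix K:
# sol = least_squares(residuals, x0=np.zeros(6), args=(frames, K))
```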
Fast Relative Pose Calibration for Visual and Inertial Sensors
Springer Tracts in Advanced Robotics, 2009
Accurate vision-aided inertial navigation depends on proper calibration of the relative pose of the camera and the inertial measurement unit (IMU). Calibration errors introduce bias in the overall motion estimate, degrading navigation performance, sometimes dramatically. However, existing camera-IMU calibration techniques are difficult, time-consuming and often require additional complex apparatus. In this paper, we formulate the camera-IMU relative pose calibration problem in a filtering framework and propose a calibration algorithm which requires only a planar camera calibration target. The algorithm uses an unscented Kalman filter to estimate the pose of the IMU in a global reference frame and the 6-DoF transform between the camera and the IMU. Results from simulations and experiments with a low-cost solid-state IMU demonstrate the accuracy of the approach.
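To make the filtering formulation concrete, here is a deliberately reduced sketch of such a filter, assuming the filterpy library. Only the IMU position, its velocity, and the camera-IMU lever arm are estimated, with the IMU attitude R_wi treated as known; the paper's filter additionally estimates orientation and the full 6-DoF transform. All names and noise values are illustrative placeholders:

```python
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

DT = 0.01  # filter period [s], illustrative

def fx(x, dt):
    """Process model: constant-velocity IMU motion; the camera-IMU
    lever arm (x[6:9]) is a static parameter."""
    out = x.copy()
    out[0:3] += dt * x[3:6]
    return out

def hx(x, R_wi=np.eye(3)):
    """Measurement model: the camera position recovered from the planar
    target equals p_imu + R_wi @ t_ci, with R_wi the IMU's (assumed
    known) attitude at the time of the image."""
    return x[0:3] + R_wi @ x[6:9]

points = MerweScaledSigmaPoints(n=9, alpha=1e-3, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=9, dim_z=3, dt=DT, hx=hx, fx=fx, points=points)
ukf.x = np.zeros(9)            # [p_imu, v_imu, t_ci]
ukf.P = np.eye(9) * 0.1
ukf.Q = np.eye(9) * 1e-4       # placeholder process noise
ukf.R = np.eye(3) * 1e-3       # placeholder measurement noise

# per camera frame: ukf.predict(); ukf.update(z_cam, R_wi=current_attitude)
```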
Modeling and Calibration of Inertial and Vision Sensors
The International Journal of Robotics Research, 2010
This paper is concerned with the problem of estimating the relative translation and orientation of an inertial measurement unit and a camera, which are rigidly connected. The key is to realize that this problem is in fact an instance of a standard problem within the area of system identification, referred to as a gray-box problem. We propose a new algorithm for estimating the relative translation and orientation, which does not require any additional hardware except a piece of paper with a checkerboard pattern on it. The method is based on a physical model which can also be used in solving, for example, sensor fusion problems. The experimental results show that the method works well in practice, both for perspective and spherical cameras.
Estimation of Inertial Sensor to Camera Rotation from Single Axis Motion
2009
The aim of the present work is to define a calibration framework for estimating the relative orientation between a camera and an inertial orientation sensor (AHRS, Attitude and Heading Reference System). Many applications in computer vision and mixed reality work in cooperation with this class of inertial sensors in order to increase the accuracy and reliability of their results. In this context, the heterogeneous measurements must be represented in a single common reference frame (rf.) so that they can be processed jointly. The basic framework is given by the estimation of the vertical direction, a 3D vector expressed both in the camera rf. and in the AHRS rf. In this paper, a new approach is adopted to retrieve this direction from different geometrical entities that can be inferred from the projective geometry of single-axis motion. The performance of these entities is evaluated on simulated as well as real data.
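One geometrical entity of this kind is the vertical vanishing point: once it has been located in the image, the vertical direction in the camera rf. follows by back-projection through the intrinsic matrix. A minimal sketch; locating the vanishing point itself is the hard part the paper addresses, and K and vp are assumed given:

```python
import numpy as np

def vertical_from_vanishing_point(K, vp):
    """Back-project the image vanishing point of the vertical direction:
    the 3-D direction d in the camera frame satisfies vp ~ K @ d
    (sign ambiguity resolved e.g. by requiring d to point downwards)."""
    d = np.linalg.inv(K) @ np.array([vp[0], vp[1], 1.0])
    return d / np.linalg.norm(d)

# pairing d with the AHRS's vertical reading constrains the camera-AHRS
# rotation up to a residual rotation about the vertical axis
```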
Relative Pose Calibration Between Visual and Inertial Sensors
International Journal of Robotics Research, 2007
This paper proposes an approach to calibrating off-the-shelf cameras and inertial sensors so as to obtain an integrated system that is useful in both static and dynamic situations. The rotation between the camera and the inertial sensor can be estimated, while calibrating the camera, by having both sensors observe the vertical direction, using a vertical chessboard target and gravity. The translation between the two can be estimated using a simple passive turntable and static images, provided that the system can be adjusted to turn about the inertial sensor's null point in several poses. Simulation and real-data results are presented to show the validity and the simple requirements of the proposed method.
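The rotation part of such a calibration reduces to aligning two sets of paired direction observations, which has a closed-form SVD (Kabsch) solution. A hedged sketch, not the authors' exact procedure: V_imu and V_cam are assumed to stack, as rows, the unit vertical directions observed in several static poses, at least two of which are non-parallel (a single pair leaves the rotation about the vertical undetermined):

```python
import numpy as np

def rotation_from_directions(V_imu, V_cam):
    """Least-squares rotation R with V_cam ~ (R @ V_imu.T).T, via the
    Kabsch/SVD method on the 3x3 correlation matrix of the paired
    direction observations."""
    H = V_imu.T @ V_cam
    U, _, Vt = np.linalg.svd(H)
    # sign correction keeps R a proper rotation (det = +1)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    return Vt.T @ D @ U.T
```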
Fusion of Visual and Inertial Measurements for Pose Estimation
In this paper, we sketch a multi-sensor framework for estimating an object's pose. For this purpose, we combine an inertial measurement unit, consisting of gyroscopes, accelerometers and magnetometers, with a visual pose estimation algorithm applied to images obtained from a low-cost webcam. We state the equations that model the various sensors and outline how their measurements can be fused using an extended Kalman filter.
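A stripped-down version of such a fusion loop, for the position states only, looks as follows. This is a sketch under simplifying assumptions, not the paper's filter: the attitude R_wb is taken as known (which makes the filter linear), and the noise levels q and r are placeholders:

```python
import numpy as np

GRAVITY = np.array([0.0, 0.0, -9.81])

def predict(x, P, a_body, R_wb, dt, q=1e-3):
    """Propagate position/velocity with the accelerometer's
    specific-force reading a_body, rotated to the world frame."""
    a_world = R_wb @ a_body + GRAVITY
    F = np.eye(6)
    F[0:3, 3:6] = dt * np.eye(3)           # p += dt * v
    x = F @ x
    x[3:6] += dt * a_world                 # v += dt * a
    P = F @ P @ F.T + q * np.eye(6)
    return x, P

def update(x, P, p_vision, r=1e-2):
    """Correct with the visual pose-estimation output (position part)."""
    H = np.hstack([np.eye(3), np.zeros((3, 3))])
    S = H @ P @ H.T + r * np.eye(3)
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (p_vision - H @ x)
    P = (np.eye(6) - K @ H) @ P
    return x, P
```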
Calibration of The Spatial Pose Between Inertial and Visual Sensors With An Inclinometer
The Open Cybernetics & Systemics Journal, 2015
An improved calibration method for inertial and visual sensors is proposed in this paper, based on a fractional-step (two-stage) procedure. The rotation between the inertial and vision sensors is obtained by solving an improved form of the hand-eye calibration equations. An inclinometer is introduced into the calibration process, and its output data are used to optimize the rotation matrix between the inertial and vision sensors; the translation between the two sensors is then obtained by solving the basic hand-eye calibration equations. The experimental results demonstrate that the method can precisely calibrate the relative spatial pose between inertial and visual sensors.
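The rotational hand-eye equation A X = X B is commonly solved in quaternion form, where each motion pair yields a linear constraint on the unknown quaternion. The sketch below shows this standard solver, not the paper's improved equations; qa_list and qb_list are assumed to hold unit quaternions (w, x, y, z) of corresponding camera and inertial-sensor rotations:

```python
import numpy as np

def quat_mult_matrices(q):
    """Left/right quaternion-multiplication matrices for q = (w, x, y, z),
    i.e. p * q = L(p) @ q = R(q) @ p (Hamilton convention)."""
    w, v = q[0], np.asarray(q[1:])
    skew = np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])
    L = np.zeros((4, 4))
    R = np.zeros((4, 4))
    L[0, 0] = R[0, 0] = w
    L[0, 1:] = R[0, 1:] = -v
    L[1:, 0] = R[1:, 0] = v
    L[1:, 1:] = w * np.eye(3) + skew
    R[1:, 1:] = w * np.eye(3) - skew
    return L, R

def handeye_rotation(qa_list, qb_list):
    """Least-squares solve of qa_i * qx = qx * qb_i: stack the linear
    constraints (L(qa_i) - R(qb_i)) @ qx = 0 and take the right singular
    vector associated with the smallest singular value."""
    M = np.vstack([quat_mult_matrices(qa)[0] - quat_mult_matrices(qb)[1]
                   for qa, qb in zip(qa_list, qb_list)])
    _, _, Vt = np.linalg.svd(M)
    return Vt[-1]   # unit quaternion, determined up to sign
```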