New perspectives on camera calibration using geometric algebra
Closed-Form Solutions for the Euclidean Calibration of a Stereo Rig
1998
In this paper we describe a method for estimating the internal parameters of the left and right cameras associated with a stereo image pair. The stereo pair has known epipolar geometry and therefore 3-D projective reconstruction of pairs of matched image points is available. The stereo pair is allowed to move and hence there is a collineation relating the two projective reconstructions computed before and after the motion. We show that this collineation has similar but different parameterizations for general and ground-plane rigid motions and we make explicit the relationship between the internal camera parameters and such a collineation. We devise a practical method for recovering four camera parameters from a single general motion or three camera parameters from a single ground-plane motion. Numerous experiments with simulated, calibrated and natural data validate the calibration method.
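The collineation relating the two projective reconstructions can be estimated linearly from matched 3-D points. As an illustrative sketch (not the paper's actual algorithm), a 4x4 projective transform H with Y_i ~ H X_i can be recovered up to scale by a DLT-style null-space solve; the function name and interface here are hypothetical:

```python
import numpy as np

def fit_collineation(X, Y):
    """Estimate the 4x4 projective collineation H with Y_i ~ H X_i (up to scale).

    X, Y: (n, 4) arrays of homogeneous 3-D points, n >= 5 in general position.
    Each correspondence contributes linear constraints; H is the null
    vector of the stacked system (smallest singular value).
    """
    rows = []
    for x, y in zip(X, Y):
        # Y ~ H X  <=>  y_j (H x)_k - y_k (H x)_j = 0 for all index pairs (j, k)
        for j in range(4):
            for k in range(j + 1, 4):
                r = np.zeros(16)
                r[4 * j: 4 * j + 4] = y[k] * x
                r[4 * k: 4 * k + 4] = -y[j] * x
                rows.append(r)
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    return Vt[-1].reshape(4, 4)
```

In practice the collineation would be fed to the paper's parameterization step; here we only show the linear estimation under noise-free assumptions.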
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2000
We introduce the concept of self-calibration of a 1D projective camera from point correspondences, and describe a method for uniquely determining the two internal parameters of a 1D camera based on the trifocal tensor of three 1D images. The method requires estimating the trifocal tensor, which, unlike the trifocal tensor of 2D images, can be done linearly and without approximation, and then solving for the roots of a cubic polynomial in one variable. Interestingly, we prove that a 2D camera undergoing planar motion reduces to a 1D camera. From this observation, we deduce a new method for self-calibrating a 2D camera using planar motions. Both the self-calibration method for a 1D camera and its applications to 2D camera calibration are demonstrated on real image sequences.
Generic self-calibration of central cameras
Computer Vision and Image Understanding, 2010
We consider the self-calibration problem for a generic imaging model that assigns projection rays to pixels without a parametric mapping. We consider the central variant of this model, which encompasses all camera models with a single effective viewpoint. Self-calibration refers to calibrating a camera's projection rays, purely from matches between images, i.e. without knowledge about the scene such as using a calibration grid. In order to do this we consider specific camera motions, concretely, pure translations and rotations, although without the knowledge of rotation and translation parameters (rotation angles, axis of rotation, translation vector). Knowledge of the type of motion, together with image matches, gives geometric constraints on the projection rays. We show for example that with translational motions alone, self-calibration can already be performed, but only up to an affine transformation of the set of projection rays. We then propose algorithms for full metric self-calibration, that use rotational and translational motions or just rotational motions.
Towards Generic Self-Calibration of Central Cameras
2005
We consider the self-calibration problem for the generic imaging model that assigns projection rays to pixels without a parametric mapping. In this paper, we consider the central variant of this model, which encompasses all camera models with a single effective viewpoint. Self-calibration refers to calibrating a camera's projection rays, purely from matches between images, i.e. without knowledge about the scene such as using a calibration grid. This paper presents our first steps towards generic self-calibration; we consider specific camera motions, concretely, pure translations and rotations, although without knowing rotation angles etc. Knowledge of the type of motion, together with image matches, gives geometric constraints on the projection rays. These constraints are formulated and we show for example that with translational motions alone, self-calibration can already be performed, but only up to an affine transformation of the set of projection rays. We then propose a practical algorithm for full metric self-calibration, that uses rotational and translational motions.
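The constraint exploited for rotational motions can be illustrated with a small numerical sketch (this is not the paper's algorithm, just the underlying geometry): under a pure rotation of a central camera, the projection rays of a tracked feature before and after the motion satisfy d_after ~ R d_before, so the unknown rotation can be recovered from ray correspondences by the standard Kabsch/Procrustes SVD fit:

```python
import numpy as np

def rotation_from_rays(d_before, d_after):
    """Least-squares rotation R with d_after_i ~ R d_before_i.

    d_before, d_after: (n, 3) arrays of unit ray directions of matched
    pixels, taken before and after a pure camera rotation.
    """
    C = d_after.T @ d_before              # 3x3 correlation matrix
    U, _, Vt = np.linalg.svd(C)
    # Force det(R) = +1 so the result is a proper rotation.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    return U @ D @ Vt
```

The rotation angles themselves need not be known in advance, matching the setting of the paper.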
Wiley Encyclopedia of Computer Science and Engineering, 2007
Geometric camera calibration is a prerequisite for making accurate geometric measurements from image data, and it is hence a fundamental task in computer vision. This article discusses the camera models and calibration methods used in the field. The emphasis is on conventional calibration methods where the parameters of the camera model are determined by using images of
Conformal Geometric Algebra for Robotic Vision
Journal of Mathematical Imaging and Vision, 2005
In this paper the authors introduce the conformal geometric algebra in the field of visually guided robotics. This mathematical system preserves our geometric intuition and insight into the problem at hand, and it helps us to reduce the computational burden of the problems considerably. In contrast to standard projective geometry, in conformal geometric algebra we can deal simultaneously with incidence-algebra operations (meet and join) and conformal transformations, represented effectively using spinors. In this regard, this framework appears promising for dealing with kinematics, dynamics and projective geometry problems without the need to resort to different mathematical systems (as most current approaches do). This paper presents real tasks of perception and action, treated in a very elegant and efficient way: body-eye calibration, 3D reconstruction and robot navigation, the computation of the 3D kinematics of a robot arm in terms of spheres, and visually guided 3D object grasping making use of the directed distance and of intersections of lines, planes and spheres, all involving conformal transformations. We strongly believe that the framework of conformal geometric algebra can be, in general, of great advantage for applications using stereo vision, range data, laser, omnidirectional and odometry-based systems.
Monocular Camera Calibration using Projective Invariants
Camera calibration is a crucial step toward improving the accuracy of measurements made from images captured by optical devices. In this paper, we take advantage of projective geometry properties to select frames with high-quality control points in the data-acquisition stage and then perform an accurate camera calibration. The proposed method consists of four steps. First, we select acceptable frames based on the positions of the control points. Next, we use projective-invariant properties to find the optimal control points and perform an initial camera calibration using the calibration algorithm implemented in OpenCV. Finally, we iterate over control-point refinement, projective-invariant checks and recalibration until successive calibrations converge to within a defined threshold.
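The classic projective invariant for collinear points is the cross-ratio, which any projective camera preserves; comparing the measured cross-ratio of four known-collinear grid points against its nominal value is one plausible way to reject poorly detected control points. The paper does not spell out its invariant check, so this is only a hedged sketch of the idea:

```python
import numpy as np

def cross_ratio(p0, p1, p2, p3):
    """Cross-ratio (A,B;C,D) = (AC * BD) / (BC * AD) of four collinear points.

    p0..p3: image points in order along their common line. The value is
    invariant under any projective transformation of that line, so it can
    serve as a quality check on detected control points.
    """
    d = lambda a, b: np.linalg.norm(np.asarray(a, float) - np.asarray(b, float))
    return (d(p0, p2) * d(p1, p3)) / (d(p1, p2) * d(p0, p3))
```

For four equally spaced points the nominal value is 4/3; a detected quadruple whose cross-ratio deviates beyond a tolerance would be discarded before calibration.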
Camera self-calibration: Theory and experiments
Computer Vision — ECCV'92, 1992
The problem of finding the internal orientation of a camera (camera calibration) is extremely important for practical applications. In this paper a complete method for calibrating a camera is presented. In contrast with existing methods, it does not require a calibration object with a known 3D shape. The new method requires only point matches from image sequences. It is shown, using experiments with noisy data, that it is possible to calibrate a camera just by pointing it at the environment, selecting points of interest and then tracking them in the image as the camera moves. It is not necessary to know the camera motion. The camera calibration is computed in two steps. In the first step the epipolar transformation is found. Two methods for obtaining the epipoles are discussed: one, due to Sturm, is based on projective invariants; the other is based on a generalisation of the essential matrix. The second step of the computation uses the so-called Kruppa equations, which link the epipolar transformation to the image of the absolute conic. After the camera has made three or more movements, the Kruppa equations can be solved for the coefficients of the image of the absolute conic. The solution is found using a continuation method which is briefly described. The intrinsic parameters of the camera are obtained from the equation for the image of the absolute conic. The results of experiments with synthetic noisy data are reported and possible enhancements to the method are suggested.
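For reference, the Kruppa equations in their common modern form (with a constant calibration matrix $K$ across views) relate the fundamental matrix $F$ of an image pair to the dual image of the absolute conic $\omega^{*}$:

```latex
F\,\omega^{*}\,F^{\top} \;\simeq\; [e']_{\times}\,\omega^{*}\,[e']_{\times}^{\top},
\qquad \omega^{*} = K K^{\top},
```

where $e'$ is the epipole in the second image, $[e']_{\times}$ its skew-symmetric cross-product matrix, and $\simeq$ denotes equality up to scale. Both sides are rank-2 symmetric matrices, so each motion yields two independent quadratic equations in the entries of $\omega^{*}$, which is why three or more movements suffice to solve for the five unknowns.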
A linear method for camera pair self-calibration
Computer Vision and Image Understanding, 2021
We examine 3D reconstruction of architectural scenes in unordered sets of uncalibrated images. We introduce a linear method to self-calibrate and find the metric reconstruction of a camera pair. We assume unknown and different focal lengths but otherwise known internal camera parameters, and a known projective reconstruction of the camera pair. We recover two possible camera configurations in space and use the cheirality condition, that all 3D scene points are in front of both cameras, to disambiguate the solution. We prove two theorems: first, that the two solutions are in mirror positions, and second, the relations between their viewing directions. Our new method performs on par (median rotation error ΔR = 3.49°) with the standard approach of Kruppa equations (ΔR = 3.77°) for self-calibration and the 5-point algorithm for calibrated metric reconstruction of a camera pair. We reject erroneous image correspondences by introducing a method to examine whether point correspondences appear in the same order along the x and y image axes in image pairs. We evaluate this method by its precision and recall and show that it improves the robustness of point matches in architectural and general scenes. Finally, we integrate all the introduced methods into a 3D reconstruction pipeline. We utilize the numerous camera-pair metric reconstructions using rotation-averaging algorithms and a novel method to average focal length estimates.
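The cheirality condition used above to pick between the two mirror solutions amounts to a simple depth-sign test. A minimal sketch (the camera matrices and point layout here are hypothetical, and cameras are assumed in normalized form [R | t] with a proper rotation part):

```python
import numpy as np

def in_front_of_both(P1, P2, X):
    """Cheirality test: are all homogeneous 3-D points X in front of both cameras?

    P1, P2: 3x4 camera matrices of the form [R | t] with det(R) = +1.
    X: (n, 4) homogeneous points with positive last coordinate.
    A point is in front of a camera when its projective depth
    (third coordinate of P X, divided by the homogeneous scale) is positive.
    """
    depth1 = (X @ P1.T)[:, 2] / X[:, 3]
    depth2 = (X @ P2.T)[:, 2] / X[:, 3]
    return bool(np.all(depth1 > 0) and np.all(depth2 > 0))
```

Of the two mirror configurations, only the physical one places the triangulated points at positive depth in both views, so the test selects it directly.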
Digital Camera Calibration, Relative Orientation and Essential Matrix Parameters
WSEAS Transactions on Signal Processing archive, 2017
The fundamental matrix, based on the coplanarity condition, although very interesting theoretically, does not allow finding the camera calibration parameters together with the base and rotation parameters. In this work we present an easy calibration method for calculating the internal parameters: pixel dimensions and image-center pixel coordinates. We show that the method is slightly easier if the camera rotation angles, relative to the general reference system, are small. The accuracy of the four calibration parameters is evaluated by simulations. In addition, a method to improve the accuracy is explained. When the calibration parameters are known, the fundamental matrix can be reduced to the essential matrix. To find the relative orientation parameters in stereo vision, we also present a new method to extract the base and the camera rotation by means of the essential matrix. The proposed method is simple to implement. We also includ...
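The reduction mentioned above is the classical relation E = K'ᵀ F K, after which E = [t]ₓ R can be split into relative-orientation candidates. As a generic sketch of that standard SVD decomposition (not this paper's specific extraction method):

```python
import numpy as np

def decompose_essential(E):
    """Return the four (R, t) candidates of an essential matrix E = [t]_x R.

    The SVD of E yields two possible rotations and two opposite unit
    translations; a cheirality test on triangulated points then selects
    the single physically valid combination.
    """
    U, _, Vt = np.linalg.svd(E)
    # Keep proper rotations (det = +1).
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])
    R1, R2 = U @ W @ Vt, U @ W.T @ Vt
    t = U[:, 2]  # translation direction, recoverable only up to scale
    return [(R1, t), (R1, -t), (R2, t), (R2, -t)]
```

Note that the translation (the base) is recovered only up to scale, which is inherent to the essential matrix.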