Self-Calibration of a Moving Camera from Point Correspondences and Fundamental Matrices
Related papers
1999
This paper deals with a fundamental problem in motion and stereo analysis, namely that of determining the camera intrinsic calibration parameters. A novel method is proposed that follows the autocalibration paradigm, according to which calibration is achieved not with the aid of a calibration pattern but by observing a number of image features in a set of successive images. The proposed method relies upon the Singular Value Decomposition of the fundamental matrix, which leads to a particularly simple form of the Kruppa equations. In contrast to the classical formulation that yields an over-determined system of constraints, the derivation proposed here provides a straightforward answer to the problem of determining which constraints to employ among the set of available ones. Moreover, the derivation is purely algebraic, without resorting to the somewhat non-intuitive geometric concept of the absolute conic. Apart from the fundamental matrix itself, no other quantities that can be extracted from it (e.g. the epipoles) are needed for the derivation. Experimental results from extensive simulations and several image sequences demonstrate the effectiveness of the proposed method in accurately estimating the intrinsic calibration matrices. It is also shown that the computed intrinsic calibration matrices are sufficient for recovering 3D motion and performing metric measurements from uncalibrated images.
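To make the construction concrete, here is a minimal NumPy sketch (not the authors' code) of the SVD-based Kruppa constraints: given a fundamental matrix F and a candidate intrinsic matrix K, the three ratios formed from the singular values and singular vectors of F must agree, and their differences can serve as residuals for self-calibration. Index and sign conventions vary slightly across derivations.

```python
import numpy as np

def kruppa_residuals(F, K):
    """Residuals of the SVD-based Kruppa equations for a candidate
    intrinsic matrix K (one common convention; indices may differ
    in other derivations)."""
    U, (r, s, _), Vt = np.linalg.svd(F)
    u1, u2 = U[:, 0], U[:, 1]
    v1, v2 = Vt[0], Vt[1]            # rows of Vt are the right singular vectors
    w = K @ K.T                       # omega, dual of the image of the absolute conic
    q = lambda a, b: a @ w @ b
    # The three quantities below must be equal; form two independent residuals.
    t1 = r * r * q(v1, v1) / q(u2, u2)
    t2 = -r * s * q(v1, v2) / q(u1, u2)
    t3 = s * s * q(v2, v2) / q(u1, u1)
    return np.array([t1 - t2, t2 - t3])

# Typical use: stack these residuals over all available fundamental matrices
# and minimise over the unknown entries of K (e.g. with scipy.optimize).
```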
3-D Reconstruction and Camera Calibration from Images with known Objects
1995
We present a method for camera calibration and metric reconstruction of the three-dimensional structure of scenes with several, possibly small and nearly planar objects from one or more images. We formulate the projection of object models explicitly according to the pin-hole camera model in order to be able to estimate the pose parameters for all objects as well as relative poses and the focal lengths of the cameras. This is accomplished by minimising a multivariate non-linear cost function. The only information needed is simple geometric object models, the correspondence between model and image features, and the correspondence of objects in the images if more than one view of the scene is used. Additionally, we present a new method for the projection of circles using projective invariants. Results using both simulated and real images are presented. Keywords: Least-squares model fitting, model-based vision, 3-D reconstruction, camera calibration, projective invariants.
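A hedged illustration of the kind of cost being minimised (not the paper's implementation): pin-hole projection of model points and a reprojection residual that a non-linear least-squares solver can optimise over pose and focal length. The simplified intrinsics (square pixels, principal point at the origin) are an assumption made for brevity.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(points_3d, f, rvec, t):
    """Pin-hole projection of model points: x ~ K [R | t] X
    (square pixels, principal point at the origin, for brevity)."""
    R = Rotation.from_rotvec(rvec).as_matrix()
    cam = points_3d @ R.T + t             # points in camera coordinates
    return f * cam[:, :2] / cam[:, 2:3]   # perspective division

def residuals(params, points_3d, observed_2d):
    # params = [focal length, 3 rotation-vector entries, 3 translation entries]
    f, rvec, t = params[0], params[1:4], params[4:7]
    return (project(points_3d, f, rvec, t) - observed_2d).ravel()

# points_3d: simple geometric object model, observed_2d: matched image features
# result = least_squares(residuals, x0, args=(points_3d, observed_2d))
```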
Recovering 3D metric structure and motion from multiple uncalibrated cameras
Proceedings. International Conference on Information Technology: Coding and Computing
An optimized linear factorization method for recovering both the 3D geometry of a scene and the camera parameters from multiple uncalibrated images is presented. In a first step, we recover a projective approximation using a well-known iterative approach. Then, we are able to upgrade from projective to Euclidean structure by computing the projective distortion matrix in a way that is analogous to estimating the absolute quadric. Using the Singular Value Decomposition (SVD) as a main tool, and from the study of the ranks of the matrices involved in the process, we are able to enforce an accurate Euclidean reconstruction. Moreover, in contrast to other approaches, our process is essentially a linear one and does not require an initial estimate of the solution. Examples of synthetic and real data reconstructions are presented.
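For illustration only, a sketch of the rank-4 factorization step, assuming the projective depths have already been estimated by the iterative projective step the abstract refers to; the subsequent Euclidean upgrade via the distortion matrix is not reproduced here.

```python
import numpy as np

def factorize_projective(x, depths):
    """Rank-4 factorization of the scaled measurement matrix.

    x      : (m, n, 3) homogeneous image points for m views and n points
    depths : (m, n) projective depths (assumed given by the iterative step)
    Returns projective cameras P (m, 3, 4) and homogeneous points X (4, n).
    """
    m, n, _ = x.shape
    W = (depths[:, :, None] * x).transpose(0, 2, 1).reshape(3 * m, n)
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    # Keep the four dominant singular values: W ~ P_stack @ X
    P_stack = U[:, :4] * S[:4]
    X = Vt[:4]
    return P_stack.reshape(m, 3, 4), X
```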
Camera calibration and 3D reconstruction from a single view based on scene constraints
Image and Vision Computing, 2005
This paper mainly focuses on the problem of camera calibration and 3D reconstruction from a single view of a structured scene. It is well known that three constraints on the intrinsic parameters of a camera can be obtained from the vanishing points of three mutually orthogonal directions. However, in structured scenes there usually exist one or several pairs of line segments that are mutually orthogonal and lie in the pencil of planes defined by two of the vanishing directions. It is proved in this paper that a new independent constraint on the image of the absolute conic can be obtained if the two line segments are of equal length or have a known length ratio in space. The constraint is further studied both in terms of the vanishing points and the images of the circular points. Hence, four independent constraints on a camera are obtained from one image, and the camera can be calibrated under the widely accepted assumption of zero skew. This paper also presents a simple method for the recovery of the camera extrinsic parameters and projection matrix with respect to a given world coordinate system. Furthermore, several methods are presented to estimate the positions and poses of planar surfaces in space from the recovered projection matrix and scene constraints. Thus, a scene structure can be reconstructed by combining the planar patches. Extensive experiments on simulated data and real images, as well as a comparative test with other methods in the literature, validate our proposed methods.
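As a concrete reference point (not the paper's full method, which adds a fourth constraint), here is a small NumPy sketch of the classical three-vanishing-point calibration the abstract builds on: each pair of orthogonal vanishing points gives a linear constraint on the image of the absolute conic, assuming zero skew, unit aspect ratio, and finite vanishing points.

```python
import numpy as np

def calibrate_from_vanishing_points(v1, v2, v3):
    """Recover K from the vanishing points of three mutually orthogonal
    directions, assuming zero skew and unit aspect ratio.

    Each constraint v_i^T w v_j = 0 (i != j) is linear in the entries of
    w, the image of the absolute conic, parametrised here as
    w = [[a, 0, b], [0, a, c], [b, c, d]].
    """
    def row(p, q):
        return [p[0] * q[0] + p[1] * q[1],      # coefficient of a
                p[0] * q[2] + p[2] * q[0],      # coefficient of b
                p[1] * q[2] + p[2] * q[1],      # coefficient of c
                p[2] * q[2]]                    # coefficient of d
    A = np.array([row(v1, v2), row(v1, v3), row(v2, v3)])
    a, b, c, d = np.linalg.svd(A)[2][-1]        # null vector of A
    u0, v0 = -b / a, -c / a                     # principal point
    f = np.sqrt(d / a - u0 ** 2 - v0 ** 2)      # focal length
    return np.array([[f, 0, u0], [0, f, v0], [0, 0, 1]])
```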
A linear method for camera pair self-calibration
Computer Vision and Image Understanding, 2021
We examine 3D reconstruction of architectural scenes in unordered sets of uncalibrated images. We introduce a linear method to self-calibrate and find the metric reconstruction of a camera pair. We assume unknown and different focal lengths but otherwise known internal camera parameters, and a known projective reconstruction of the camera pair. We recover two possible camera configurations in space and use the cheirality condition, that all 3D scene points lie in front of both cameras, to disambiguate the solution. We show in two theorems, first, that the two solutions are in mirror positions, and second, the relations between their viewing directions. Our new method performs on par (median rotation error ∆R = 3.49°) with the standard approach of the Kruppa equations (∆R = 3.77°) for self-calibration and the 5-point algorithm for calibrated metric reconstruction of a camera pair. We reject erroneous image correspondences by introducing a method that examines whether point correspondences appear in the same order along the x and y image axes in image pairs. We evaluate this method by its precision and recall and show that it improves the robustness of point matches in architectural and general scenes. Finally, we integrate all the introduced methods into a 3D reconstruction pipeline. We combine the numerous camera-pair metric reconstructions using rotation-averaging algorithms and a novel method to average focal length estimates.
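A minimal sketch of the cheirality test the abstract relies on (not the authors' code): triangulate the correspondences under each candidate camera pair and keep the configuration that places the points in front of both cameras.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one correspondence (x1, x2 in pixels)."""
    A = np.vstack([x1[0] * P1[2] - P1[0], x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0], x2[1] * P2[2] - P2[1]])
    X = np.linalg.svd(A)[2][-1]
    return X / X[3]

def cheirality_count(P1, P2, pts1, pts2):
    """Count correspondences whose 3D point lies in front of both cameras."""
    good = 0
    for x1, x2 in zip(pts1, pts2):
        X = triangulate(P1, P2, x1, x2)
        if (P1 @ X)[2] > 0 and (P2 @ X)[2] > 0:
            good += 1
    return good

# Between the two mirror configurations, keep the one with the larger count.
```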
Self-Calibration of a 1D Projective Camera and Its Application to the Self-Calibration of a 2D Projective Camera
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2000
We introduce the concept of self-calibration of a 1D projective camera from point correspondences, and describe a method for uniquely determining the two internal parameters of a 1D camera based on the trifocal tensor of three 1D images. The method requires only the estimation of the trifocal tensor, which, unlike the trifocal tensor of 2D images, can be achieved linearly with no approximation, followed by solving for the roots of a cubic polynomial in one variable. Interestingly enough, we prove that a 2D camera undergoing planar motion reduces to a 1D camera. From this observation, we deduce a new method for self-calibrating a 2D camera using planar motions. Both the self-calibration method for a 1D camera and its application to 2D camera calibration are demonstrated on real image sequences.
Structure from Motion with Known Camera Positions
2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition - Volume 1 (CVPR'06)
The wide availability of GPS sensors is changing the landscape in the applications of structure from motion techniques for localization. In this paper, we study the problem of estimating camera orientations from multiple views, given the positions of the viewpoints in a world coordinate system and a set of point correspondences across the views. Given three or more views, the above problem has a finite number of solutions for three or more point correspondences. Given six or more views, the problem has a finite number of solutions for just two or more points. In the three-view case, we show the necessary and sufficient conditions for the three essential matrices to be consistent with a set of known baselines. We also introduce a method to recover the absolute orientations of three views in world coordinates from their essential matrices. To refine these estimates we perform a least-squares minimization on the product group SO(3) × SO(3) × SO(3). We report experiments on synthetic data and on data from the ICCV2005 Computer Vision Contest.
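A small illustrative check (not from the paper) of the basic relation involved: an essential matrix E = [t]×R is consistent with a known baseline t only if t spans its left null space, i.e. tᵀE = 0.

```python
import numpy as np

def skew(t):
    """Cross-product matrix [t]_x, so that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def consistent_with_baseline(E, t, tol=1e-6):
    """E = [t]_x R has t as its left null vector, so t^T E must (nearly) vanish."""
    t = t / np.linalg.norm(t)
    return np.linalg.norm(t @ E) < tol * np.linalg.norm(E)

# Example: an essential matrix built from a known baseline passes the test.
# R = np.eye(3); t = np.array([1.0, 0.2, 0.0]); E = skew(t) @ R
# assert consistent_with_baseline(E, t)
```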
Reconstruction of three-dimensional scenes from a sequence of images of which only one is calibrated
2019
In this work, we propose a method for reconstructing three-dimensional scenes from a sequence of images in which only one image is calibrated. The system is initialized with the calibrated image. The projection matrices of the other images in the sequence are estimated from the first projection matrix (obtained by calibrating the camera) and from the matches between the images, by solving a system of linear equations. The intrinsic and extrinsic parameters of the camera are estimated from the Cholesky decomposition of the projection matrix. Finally, the 3D scene is obtained by Euclidean reconstruction of the detected and matched control points in the image pairs.
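As an illustrative sketch of the decomposition step described here (one common recipe; the authors' normalisation may differ): the intrinsics follow from a Cholesky-style factorisation of M Mᵀ, where M is the left 3×3 block of the projection matrix, and the extrinsics follow by back-substitution.

```python
import numpy as np

def decompose_projection(P):
    """Split a projection matrix P ~ K [R | t] into intrinsics and extrinsics
    via a Cholesky-style factorisation of M M^T, M being the left 3x3 block."""
    P = P / np.linalg.norm(P[2, :3])      # scale so the third row of K R has unit norm
    M = P[:, :3]
    B = M @ M.T                           # equals K K^T because R R^T = I
    E = np.fliplr(np.eye(3))              # exchange (flip) matrix
    L = np.linalg.cholesky(E @ B @ E)     # lower triangular, L L^T = E B E
    K = E @ L @ E                         # upper triangular, K K^T = B, K[2,2] = 1
    R = np.linalg.solve(K, M)
    t = np.linalg.solve(K, P[:, 3])
    if np.linalg.det(R) < 0:              # resolve the overall sign ambiguity of P
        R, t = -R, -t
    return K, R, t
```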