Calibration Method For An Augmented Reality System
Calibration requirements and procedures for a monitor-based augmented reality system
IEEE Transactions on Visualization and Computer Graphics, 1995
Augmented reality entails the use of models and their associated renderings to supplement information in a real scene. In order for this information to be relevant or meaningful, the models must be positioned and displayed in such a way that they blend into the real world in terms of alignments, perspectives, illuminations, etc. For practical reasons the information necessary to obtain this realistic blending cannot be known a priori, and cannot be hard-wired into a system. Instead a number of calibration procedures are necessary so that the location and parameters of each of the system components are known. In this paper we identify the calibration steps necessary to build a computer model of the real world and then, using the monitor-based augmented reality system developed at ECRC (Grasp) as an example, we describe each of the calibration processes. These processes determine the internal parameters of our imaging devices (scan converter, frame grabber, and video camera), as well as the geometric transformations that relate all of the physical objects of the system to a known world coordinate system.
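The internal camera parameters mentioned above are typically recovered by imaging a known target. As a minimal sketch of that step, the following uses OpenCV's checkerboard-based calibration as a stand-in for the paper's own procedure; the board dimensions and file names are hypothetical.

```python
# Minimal sketch: recover a camera's intrinsic parameters from images of a
# known planar target. OpenCV's calibrateCamera stands in for the paper's
# procedure; board size, square size, and file names are hypothetical.
import glob
import cv2
import numpy as np

BOARD = (9, 6)      # inner corners of a hypothetical checkerboard target
SQUARE_MM = 25.0    # assumed square size

# 3D coordinates of the board corners in the target's own coordinate system
obj = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
obj[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2) * SQUARE_MM

obj_pts, img_pts = [], []
for path in glob.glob("calib_*.png"):   # hypothetical capture set
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, BOARD)
    if found:
        obj_pts.append(obj)
        img_pts.append(corners)

# Returns the RMS reprojection error, the 3x3 intrinsic matrix K, the lens
# distortion coefficients, and one extrinsic pose per image.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)
```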
Calibration Requirements and Procedures for Augmented Reality
1997
Augmented reality entails the use of models and their associated renderings to supplement information in a real scene. In order for this information to be relevant or meaningful, the models must be positioned and displayed in such a way that they blend into the real world in terms of alignments, perspectives, illuminations, etc. For practical reasons the information necessary to obtain this realistic blending cannot be known a priori, and cannot be hard-wired into a system. Instead a number of calibration procedures are necessary so that the location and parameters of each of the system components are known. In this paper we identify the calibration steps necessary to build a complete computer model of the real world and then, using the augmented reality system developed at ECRC (Grasp) as an example, we describe each of the calibration processes.
Object calibration for augmented reality
Computer Graphics …, 1995
Augmented reality involves the use of models and their associated renderings to supplement information in a real scene. In order for this information to be relevant or meaningful, the models must be positioned and displayed in such a way that they align with their corresponding ...
Calibration Errors in Augmented Reality: A Practical Study
2005
This paper confronts some theoretical camera models with reality and evaluates their suitability for effective augmented reality (AR). It analyses what level of accuracy can be expected in real situations using a particular camera model, and how robust the results are against realistic calibration errors. An experimental protocol is used that consists of taking images of a particular scene with cameras of different quality mounted on a 4-DOF micro-controlled device. The scene is made of a calibration target and three markers placed at different distances from the target. This protocol enables us to consider assessment criteria specific to AR, such as alignment error and visual impression, in addition to the classical camera positioning error.
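The alignment-error criterion described here can be computed by projecting known 3D points through the estimated camera model and comparing against their detected image positions. A minimal sketch, with placeholder values rather than the paper's data:

```python
# Sketch of an AR-specific assessment criterion: project known marker points
# with an estimated camera pose and intrinsics, then measure the alignment
# error in pixels. All values are placeholders, not the paper's data.
import cv2
import numpy as np

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # assumed intrinsics
dist = np.zeros(5)                   # assumed: no lens distortion
rvec = np.array([0.1, -0.2, 0.05])   # estimated pose (rotation, Rodrigues)
tvec = np.array([0.0, 0.0, 1.5])     # estimated pose (translation, metres)

marker_3d = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0],
                      [0.1, 0.1, 0.0], [0.0, 0.1, 0.0]])   # marker corners
observed_2d = np.array([[322.4, 241.1], [375.0, 240.6],
                        [374.2, 293.8], [321.7, 292.9]])   # detected in the image

projected, _ = cv2.projectPoints(marker_3d, rvec, tvec, K, dist)
err = np.linalg.norm(projected.reshape(-1, 2) - observed_2d, axis=1)
print("per-corner alignment error (px):", err, " mean:", err.mean())
```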
Robust camera pose estimation using 2d fiducials tracking for real-time augmented reality systems
Proceedings of the 2004 ACM SIGGRAPH international conference on Virtual Reality continuum and its applications in industry - VRCAI '04, 2004
Augmented reality (AR) deals with the problem of dynamically and accurately aligning virtual objects with the real world. Among existing methods, vision-based techniques have advantages for AR applications: their registration can be very accurate, and there is no delay between the motion of the real and virtual scenes. However, the downfall of these approaches is their high computational cost and lack of robustness. To address these shortcomings we propose a robust camera pose estimation method based on tracking calibrated fiducials in a known 3D environment; the camera location is dynamically computed by the Orthogonal Iteration Algorithm. Experimental results show the robustness and effectiveness of our approach in the context of real-time AR tracking.
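To illustrate the input/output shape of this pose estimation problem, the sketch below recovers a camera pose from the four tracked corners of a calibrated fiducial. OpenCV's iterative PnP solver stands in for the Orthogonal Iteration Algorithm used in the paper; the intrinsics and corner coordinates are placeholders.

```python
# Minimal sketch: camera pose from one calibrated fiducial. The iterative PnP
# solver below is a stand-in, not the paper's Orthogonal Iteration Algorithm.
import cv2
import numpy as np

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # assumed intrinsics

# Known 3D corner positions of one fiducial in world coordinates (metres)
fiducial_3d = np.array([[0.0, 0.0, 0.0], [0.08, 0.0, 0.0],
                        [0.08, 0.08, 0.0], [0.0, 0.08, 0.0]])
# Corresponding corners tracked in the current video frame (pixels)
tracked_2d = np.array([[310.2, 250.1], [362.8, 248.7],
                       [364.1, 301.5], [311.0, 302.2]])

ok, rvec, tvec = cv2.solvePnP(fiducial_3d, tracked_2d, K, None,
                              flags=cv2.SOLVEPNP_ITERATIVE)
R, _ = cv2.Rodrigues(rvec)  # world-to-camera rotation; tvec is the translation
print("camera translation:", tvec.ravel())
```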
Registration Based on Projective Reconstruction Technique for Augmented Reality Systems
IEEE Transactions on Visualization and Computer Graphics, 2005
In AR systems, registration is one of the most difficult problems currently limiting their application. In this paper, we propose a simple registration method using projective reconstruction. This method consists of two steps: embedding and tracking. Embedding involves specifying four points to build the world coordinate system on which a virtual object will be superimposed. In tracking, a projective reconstruction technique is used to track these four specified points to compute the model-view transformation for augmentation. This method is simple, as only four points need to be specified at the embedding stage, and the virtual object can then be easily augmented onto a real scene from a video sequence. In addition, it can be extended to a scenario that reuses the projective matrix obtained from previous registration results with the same AR system. The proposed method has three advantages: 1) It is fast, because the linear least squares method can be used to estimate the related matrix in the algorithm, and it is not necessary to calculate the fundamental matrix in the extended case. 2) A virtual object can still be superimposed on the related area even if some parts of the specified area are occluded during the process. 3) The method is robust, as it remains effective even when not all the reference points are detected throughout the process, as long as at least six pairs of reference point correspondences can be found. Some experiments have been conducted to validate the performance of the proposed method.
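The first advantage, estimating the related matrix by linear least squares, can be illustrated in isolation with a direct linear transform (DLT) that recovers a 3x4 projection matrix from at least six 3D-2D correspondences. This is a generic sketch of the least-squares machinery, not the paper's embedding/tracking pipeline, and the correspondences below are synthetic.

```python
# Generic DLT: estimate a 3x4 projection matrix from 3D-2D correspondences by
# linear least squares (smallest singular vector). Illustrative only.
import numpy as np

def dlt_projection_matrix(X, x):
    """X: (n,3) reference points, x: (n,2) image points, n >= 6."""
    rows = []
    for (Xw, Yw, Zw), (u, v) in zip(X, x):
        P = [Xw, Yw, Zw, 1.0]
        rows.append([*P, 0, 0, 0, 0, *[-u * p for p in P]])
        rows.append([0, 0, 0, 0, *P, *[-v * p for p in P]])
    A = np.asarray(rows)
    # Least-squares solution: right singular vector of the smallest singular value
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 4)

# Synthetic correspondences (six points in general position)
X = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0],
              [0, 0, 1], [1, 1, 0], [1, 0, 1]], float)
P_true = np.hstack([np.eye(3), [[0], [0], [5]]])
xh = (P_true @ np.hstack([X, np.ones((6, 1))]).T).T
x = xh[:, :2] / xh[:, 2:]

P = dlt_projection_matrix(X, x)
print(P / P[-1, -1])  # matches P_true up to scale
```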
A camera-based calibration for automotive augmented reality Head-Up-Displays
2013 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 2013
Using Head-Up Displays (HUDs) for Augmented Reality requires an accurate internal model of the image generation process, so that 3D content can be visualized perspectively correct from the viewpoint of the user. We present a generic and cost-effective camera-based calibration for an automotive HUD which uses the windshield as a combiner. Our proposed calibration model encompasses the view-independent spatial geometry, i.e. the exact location, orientation, and scaling of the virtual plane, and a view-dependent image warping transformation for correcting the distortions caused by the optics and the irregularly curved windshield. View-dependency is achieved by extending the classical polynomial distortion model for cameras and projectors to a generic five-variate mapping with the head position of the viewer as additional input. The calibration involves capturing an image sequence from varying viewpoints while displaying a known target pattern on the HUD. The accurate registration of the camera path is retrieved with state-of-the-art vision-based tracking. As all necessary data is acquired directly from the images, no external tracking equipment needs to be installed. After calibration, the HUD can be used together with a head-tracker to form a head-coupled display which ensures a perspectively correct rendering of any 3D object in vehicle coordinates from a large range of possible viewpoints. We evaluate the accuracy of our model quantitatively and qualitatively.
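The view-dependent warping idea can be sketched as a least-squares fit of a polynomial that maps display coordinates plus the tracked head position to observed image coordinates. The monomial basis and synthetic data below are placeholders; the paper's actual five-variate model and calibration measurements are not reproduced.

```python
# Hedged sketch: fit a polynomial warp in display position (x, y) and head
# position (hx, hy, hz) by least squares. Basis and data are placeholders.
import numpy as np

def basis(x, y, hx, hy, hz):
    # Low-order monomials in the five input variables (assumed basis)
    return np.stack([np.ones_like(x), x, y, x * y, x**2, y**2,
                     hx, hy, hz, x * hx, y * hy], axis=-1)

rng = np.random.default_rng(0)
x, y = rng.uniform(-1, 1, (2, 500))             # target points shown on the HUD
hx, hy, hz = rng.uniform(-0.1, 0.1, (3, 500))   # tracked head positions (metres)

# Synthetic "observed" distortion, standing in for measurements from the
# camera sequence described in the paper
u = x + 0.05 * x**2 + 0.02 * x * hx
v = y + 0.03 * y**2 + 0.04 * y * hy

B = basis(x, y, hx, hy, hz)
coef_u, *_ = np.linalg.lstsq(B, u, rcond=None)
coef_v, *_ = np.linalg.lstsq(B, v, rcond=None)
print("fitted warp coefficients for u:", np.round(coef_u, 3))
```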
International Journal of Advanced Robotic Systems
Computer vision systems have proven useful in applications of autonomous navigation, especially stereo vision systems for three-dimensional mapping of the environment. This article presents a novel camera calibration method to improve the accuracy of stereo vision systems for three-dimensional point localization. The proposed camera calibration method uses the least squares method to model the error caused by image digitalization and lens distortion. To obtain the coordinates of a particular three-dimensional point, a stereo vision system uses the information of two images taken by two different cameras. The system locates the two-dimensional pixel coordinates of the three-dimensional point in both images and converts them into angles. With the obtained angles, the system finds the three-dimensional point coordinates through a triangulation process. The proposed camera calibration method is applied in the stereo vision systems, and a comparative ...
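The pixel-to-angle triangulation the article describes can be sketched for the simple case of two parallel cameras on a horizontal baseline; the focal length, baseline, and pixel values below are placeholders.

```python
# Sketch: convert each camera's pixel coordinate into a viewing angle, then
# intersect the two rays. Parallel cameras on a horizontal baseline assumed.
import numpy as np

f_px = 800.0      # assumed focal length in pixels
cx = 320.0        # assumed principal point (x)
baseline = 0.12   # assumed distance between the cameras, metres

def pixel_to_angle(u):
    # Angle of the viewing ray relative to the camera's optical axis
    return np.arctan((u - cx) / f_px)

uL, uR = 350.0, 290.0   # the same point seen in the left and right images
aL, aR = pixel_to_angle(uL), pixel_to_angle(uR)

# Left ray: x = z*tan(aL); right ray: x = baseline + z*tan(aR)
z = baseline / (np.tan(aL) - np.tan(aR))
x = z * np.tan(aL)
print(f"triangulated point: x={x:.3f} m, z={z:.3f} m")
```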
Linear Augmented Reality Registration
Lecture Notes in Computer Science, 2001
Augmented reality requires the geometric registration of virtual or remote worlds with the visual stimulus of the user. This registration can be achieved by tracking the head pose of the user with respect to the reference coordinate system of the virtual objects. If tracking is achieved with head-mounted cameras, registration becomes pose estimation as it is known in computer vision. Augmented reality is by definition a real-time problem, so we are interested only in bounded and short computational time. We propose a new linear algorithm for pose estimation. The algorithm shows better performance than the linear algorithm by Quan and Lan and is comparable to the iterative algorithm by Kumar and Hanson when run without temporal prediction.
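Linear pose algorithms of this family typically end by factoring the known intrinsics out of a linearly estimated projection matrix and re-orthonormalizing the rotation. The sketch below shows only that extraction step, on synthetic data; it does not reproduce the paper's algorithm.

```python
# Sketch: given a projection matrix P = K [R | t] estimated only up to scale
# by a linear method, recover R and t. Values below are synthetic.
import numpy as np

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])

# Synthetic ground truth standing in for a linear estimate of P
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
t_true = np.array([[0.1], [0.0], [2.0]])
P = K @ np.hstack([R_true, t_true])
P *= 3.7                     # linear methods recover P only up to scale

M = np.linalg.inv(K) @ P     # should equal s * [R | t]
s = np.cbrt(np.linalg.det(M[:, :3]))   # recover the scale (det R = 1)
R, t = M[:, :3] / s, M[:, 3:] / s
U, _, Vt = np.linalg.svd(R)  # project onto the nearest rotation matrix
R = U @ Vt
print("recovered t:", t.ravel())
```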
Simple measurement and annotation technique of real objects in augmented reality environments
2013
The paper describes a technique that allows measuring and annotating real objects in an Augmented Reality (AR) environment. The technique is based on marker tracking and aims at enabling the user to define the three-dimensional position of points within the AR scene by selecting them directly on the video stream. The technique consists of projecting the points, which are selected directly on the monitor, onto a virtual plane defined according to the two-dimensional marker used for the tracking. This plane can be seen as a virtual depth cue that helps the user place these points in the desired position. The user can also move this virtual plane to place points within the whole 3D scene. Using this technique, the user can place virtual points around a real object in order to take measurements of the object, by calculating the minimum distance between the points, or to put annotations on the object. To date, these kinds of activities have required more complex systems or a priori knowledge of the shape of the real object. The paper describes the functioning principles of the proposed technique and discusses the results of a testing session carried out with users to evaluate the overall precision and accuracy.
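The core geometric step of the technique, projecting a selected pixel onto the marker-aligned virtual plane, amounts to a ray-plane intersection. A minimal sketch, with placeholder intrinsics, plane pose, and click position:

```python
# Sketch: back-project a clicked pixel into a 3D viewing ray and intersect it
# with the marker-defined virtual plane. All values are placeholders.
import numpy as np

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
n = np.array([0.0, 0.0, 1.0])    # plane normal in camera coordinates
p0 = np.array([0.0, 0.0, 1.0])   # a point on the plane (from marker tracking)

def click_to_plane(u, v):
    # Viewing ray through the pixel, expressed in camera coordinates
    d = np.linalg.inv(K) @ np.array([u, v, 1.0])
    # Intersect the ray x = s*d with the plane n . (x - p0) = 0
    s = n.dot(p0) / n.dot(d)
    return s * d

pt = click_to_plane(400.0, 200.0)
print("3D point on the virtual plane:", np.round(pt, 3))
```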