João Alves | IST - Academia.edu
Papers by João Alves
This article presents a technique for modeling and calibrating a camera with integrated low-cost inertial sensors: three gyros and three accelerometers for full 3D sensing. Inertial sensors attached to a camera can provide valuable data about camera pose and movement. In biological vision systems, inertial cues provided by the vestibular system are fused with vision at an early processing stage, and vision systems in autonomous vehicles can likewise benefit from taking inertial cues into account. Camera calibration has been extensively studied and standard techniques are established; inertial navigation systems, relying on high-end sensors, also have well-established procedures. Nevertheless, in order to use off-the-shelf inertial sensors attached to a camera, appropriate modeling and calibration techniques are required. For inertial sensor calibration, a pendulum instrumented with an encoded shaft is used to estimate the bias and scale factor of the inertial measurements. For camera calibration, a standard and reliable technique based on images of a planar grid is used. With both the camera and the inertial sensors calibrated, and observing the vertical direction at several different poses, the rigid rotation between the two frames of reference is estimated using a mathematical model based on unit quaternions. The alignment technique and results with both simulated and real data are presented at the end of the article.
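The quaternion-based camera-to-inertial alignment described in the abstract lends itself to a short sketch. The snippet below is a minimal illustration, not the authors' code: it assumes the vertical direction has already been measured in both the camera frame and the inertial-sensor frame at several poses, and recovers the fixed rotation between the two frames with Horn's closed-form unit-quaternion solution to the absolute-orientation problem. All function and variable names (estimate_camera_imu_rotation, v_imu, v_cam) are illustrative.

```python
import numpy as np

def quaternion_to_matrix(q):
    """Convert a unit quaternion (w, x, y, z) into a 3x3 rotation matrix."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def estimate_camera_imu_rotation(v_imu, v_cam):
    """Rotation R with v_cam ~= R @ v_imu, from N paired unit vectors (N x 3 arrays)."""
    # Cross-covariance of the paired vertical-direction observations.
    S = v_imu.T @ v_cam
    Sxx, Sxy, Sxz = S[0]
    Syx, Syy, Syz = S[1]
    Szx, Szy, Szz = S[2]
    # Horn's symmetric 4x4 matrix: its dominant eigenvector is the optimal quaternion.
    N = np.array([
        [Sxx + Syy + Szz, Syz - Szy,        Szx - Sxz,        Sxy - Syx],
        [Syz - Szy,       Sxx - Syy - Szz,  Sxy + Syx,        Szx + Sxz],
        [Szx - Sxz,       Sxy + Syx,       -Sxx + Syy - Szz,  Syz + Szy],
        [Sxy - Syx,       Szx + Sxz,        Syz + Szy,       -Sxx - Syy + Szz],
    ])
    eigvals, eigvecs = np.linalg.eigh(N)
    q = eigvecs[:, np.argmax(eigvals)]  # unit quaternion (w, x, y, z); sign is irrelevant
    return quaternion_to_matrix(q)

# Synthetic check: recover a known camera-IMU rotation from noisy vertical observations.
rng = np.random.default_rng(0)
axis = np.array([0.2, -0.5, 0.8])
axis /= np.linalg.norm(axis)
q_true = np.concatenate([[np.cos(0.3)], np.sin(0.3) * axis])    # 0.6 rad about 'axis'
R_true = quaternion_to_matrix(q_true)

v_imu = rng.normal(size=(20, 3))
v_imu /= np.linalg.norm(v_imu, axis=1, keepdims=True)           # simulated gravity directions
v_cam = (R_true @ v_imu.T).T + 0.01 * rng.normal(size=(20, 3))  # same directions seen by camera
R_est = estimate_camera_imu_rotation(v_imu, v_cam)
print("max abs error:", np.abs(R_est - R_true).max())
```

In the synthetic check the "vertical" observations are drawn as random unit vectors rather than from an actual pendulum or grid, which leaves the underlying least-squares problem unchanged; with noisy measurements the dominant eigenvector of Horn's matrix still gives the best-fit rotation, and the quaternion's sign ambiguity does not affect the recovered rotation matrix.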