Advanced 3D localization by fusing measurements from GPS, inertial and vision sensors

Loosely coupled Kalman filtering for fusion of Visual Odometry and inertial navigation

Visual Odometry (VO) is the process of estimating the motion of a system using a single camera or a stereo pair. Under certain conditions the performance of VO is comparable to that of wheel odometers and GPS, making it an accepted choice for integration with inertial navigation systems, especially in GPS-denied environments. In general, VO is integrated with inertial sensors in a state estimation framework. Despite the many variants of estimation filters, the underlying concept remains the same: an assumed kinematic model of the system is combined with measurements of the states of that system. The drawback of using kinematic models for the state transition is that the state estimate will only be as good as the fidelity of the model used in the filter. A common approach in the navigation community is therefore to use an error-propagation model of the inertial navigation solution instead of an assumed dynamical model: a high-rate IMU traces the true dynamics better than an assumed model. In this paper, we propose a loosely coupled, indirect-feedback Kalman filter integration of visual odometry and an inertial navigation system that is based on an error-propagation model and takes into account the different characteristics of the individual sensors for optimal performance, reliability and robustness. Two measurement models are derived, for accumulated and for incremental visual odometry measurements. A practical measurement model is proposed for the delta-position and attitude-change measurements that inherently involves a delayed state. The non-Gaussian, non-stationary and correlated error characteristics of VO, which cannot be modelled adequately in a standard Kalman filter, are tackled by averaging the measurements over a Kalman period and applying a sigma test within the filter.
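
As a rough sketch of the sigma test described above, the following gated update rejects a VO measurement whose innovation exceeds the filter's predicted standard deviation. The variable names, the error-state formulation and the 3-sigma threshold are assumptions for illustration, not the paper's exact equations.

```python
import numpy as np

def sigma_test_update(x, P, z, H, R, n_sigma=3.0):
    # One indirect-KF measurement update with a sigma test on the
    # innovation. x, P are the error state and covariance; z is the
    # averaged VO delta-pose measurement; H, R are the measurement
    # Jacobian and noise covariance (all names are illustrative).
    y = z - H @ x                         # innovation
    S = H @ P @ H.T + R                   # innovation covariance
    # Sigma test: reject the measurement if any innovation component
    # exceeds n_sigma predicted standard deviations.
    if np.any(np.abs(y) > n_sigma * np.sqrt(np.diag(S))):
        return x, P                       # outlier: skip the update
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```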

Robust real-time tracking by fusing measurements from inertial and vision sensors

Journal of Real-Time Image Processing, 2007

The problem of estimating and predicting the position and orientation (pose) of a camera is approached by fusing measurements from inertial sensors (accelerometers and rate gyroscopes) and vision. The sensor fusion approach described in this contribution is based on non-linear filtering of these complementary sensors. This way, accurate and robust pose estimates are available for the primary purpose of augmented reality applications, with the secondary effect of reducing computation time and improving the performance of the vision processing.
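
A minimal sketch of this complementary-rate structure, assuming a generic predict/correct filter interface (all names here are illustrative, not the paper's):

```python
def fuse(imu_stream, vision_stream, predict, correct, x, P):
    # The IMU drives the time update at high rate, and each (slower)
    # vision measurement triggers a correction. predict/correct stand
    # in for the paper's nonlinear filter equations.
    vision = iter(vision_stream)
    next_vis = next(vision, None)
    for t, imu in imu_stream:                  # e.g. 100 Hz inertial samples
        x, P = predict(x, P, imu)              # inertial propagation
        if next_vis is not None and next_vis[0] <= t:
            x, P = correct(x, P, next_vis[1])  # vision correction, e.g. 25 Hz
            next_vis = next(vision, None)
    return x, P
```

The predicted pose from the inertial branch is also what enables the vision-side speed-up the abstract mentions: features need only be searched near their predicted image locations.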

Indoor Real-Time Localisation for Multiple Autonomous Vehicles Fusing Vision, Odometry and IMU Data

Lecture Notes in Computer Science, 2016

Due to the increasing use of service and industrial autonomous vehicles, precise localisation is an essential component in many applications, e.g. indoor robot navigation. In open outdoor environments, differential GPS systems can provide precise positioning information. However, there are many applications in which GPS cannot be used, such as indoor environments. In this work, we aim to increase robot autonomy by providing a localisation system based on passive markers that fuses three kinds of data through extended Kalman filters. Using low-cost devices, the optical data are combined with the robots' other sensor signals, i.e. odometry and inertial measurement unit (IMU) data, in order to obtain accurate localisation at higher tracking frequencies. The entire system has been developed fully integrated with the Robot Operating System (ROS) and has been validated with real robots.
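
For illustration, a minimal planar EKF of the kind such a system might use, with odometry driving the prediction and a marker-based position fix providing the correction. The state layout and noise values are assumptions, not the paper's implementation.

```python
import numpy as np

class PlanarEKF:
    # State is [x, y, theta]; Q/R values are illustrative.
    def __init__(self):
        self.x = np.zeros(3)
        self.P = np.eye(3) * 0.1
        self.Q = np.diag([0.01, 0.01, 0.005])  # odometry noise (assumed)
        self.R = np.diag([0.02, 0.02])         # marker fix noise (assumed)

    def predict(self, v, w, dt):
        # Unicycle odometry model: linear speed v, angular rate w.
        th = self.x[2]
        self.x += np.array([v * np.cos(th) * dt, v * np.sin(th) * dt, w * dt])
        F = np.array([[1, 0, -v * np.sin(th) * dt],
                      [0, 1,  v * np.cos(th) * dt],
                      [0, 0,  1]])
        self.P = F @ self.P @ F.T + self.Q

    def correct(self, z_xy):
        # Passive-marker fix observes position only.
        H = np.array([[1.0, 0, 0], [0, 1.0, 0]])
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x += K @ (z_xy - H @ self.x)
        self.P = (np.eye(3) - K @ H) @ self.P
```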

Multi-Sensor Fusion for Navigation of Ground Vehicles

2022 26th International Conference on Methods and Models in Automation and Robotics (MMAR)

Navigation is a core requirement for autonomous vehicles and robotics. The objective of this thesis is to compute the navigation solution of a ground vehicle by fusing data from an Inertial Navigation System (INS), Visual Odometry (VO), and the Global Positioning System (GPS) using a Dual Extended Kalman Filter (DEKF) algorithm. The research is conducted in three phases. The first phase deals with the development of a VO navigation system: the traditional Stereo Visual Odometry (SVO) methodology is analyzed, and improvements are proposed at the pose estimation and pose optimization stages, yielding the Modified Stereo Visual Odometry (ModSVO) algorithm. The second phase deals with the development of INS/VO and INS/GPS integrated systems using an EKF. It is shown that, while accuracy improves compared to the standalone sensors, it deteriorates in case of VO or GPS failure. The third phase addresses this problem by developing an INS/VO/GPS system using a DEKF scheme, which is shown to outperform the INS/VO and INS/GPS systems in cases of VO or GPS failure.
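
A hedged sketch of how a dual-filter arrangement can survive the loss of one aiding sensor: two EKF branches share the inertial prediction and each corrects with its own sensor. The inverse-covariance combination shown is an assumption for illustration, not necessarily the thesis' exact DEKF scheme.

```python
import numpy as np

def dekf_step(ekf_vo, ekf_gps, imu, vo_meas, gps_meas):
    # Both branches propagate with the same INS data; each corrects
    # only when its aiding sensor delivered a measurement, so a VO or
    # GPS outage degrades one branch instead of the whole solution.
    ekf_vo.predict(imu)
    ekf_gps.predict(imu)
    if vo_meas is not None:
        ekf_vo.correct(vo_meas)
    if gps_meas is not None:
        ekf_gps.correct(gps_meas)
    # Fuse the branch estimates, weighting by information (P^-1).
    W1, W2 = np.linalg.inv(ekf_vo.P), np.linalg.inv(ekf_gps.P)
    P = np.linalg.inv(W1 + W2)
    x = P @ (W1 @ ekf_vo.x + W2 @ ekf_gps.x)
    return x, P
```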

Fusion of Visual and Inertial Measurements for Pose Estimation

In this paper, we sketch a multi-sensor framework for estimating an object's pose. For this purpose, we combine an inertial measurement unit, consisting of gyroscopes, accelerometers and magnetometers, with a visual pose estimation algorithm applied to images obtained from a low-cost webcam. We state the equations that model the various sensors and outline how to fuse their measurements using an extended Kalman filter.
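
The sensor models such a framework typically states read as follows (an assumed standard formulation, not necessarily this paper's notation):

```latex
% R^{bn} rotates navigation-frame vectors into the body frame;
% b_* are slowly varying biases and e_* measurement noise terms.
\begin{align}
  y_\omega &= \omega^b + b_\omega + e_\omega      && \text{gyroscope}\\
  y_a      &= R^{bn}\,(a^n - g^n) + b_a + e_a     && \text{accelerometer (specific force)}\\
  y_m      &= R^{bn}\, m^n + e_m                  && \text{magnetometer (local Earth field)}
\end{align}
```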

Sensor Fusion for Augmented Reality

Lecture Notes in Computer Science, 2003

In Augmented Reality (AR), the position and orientation of the camera have to be estimated with high accuracy and low latency. This nonlinear estimation problem is studied in the present paper. The proposed solution makes use of measurements from inertial sensors and computer vision. These measurements are fused using a Kalman filtering framework, incorporating a rather detailed model for the dynamics of the camera. Experiments show that the resulting filter provides good estimates of the camera motion, even during fast movements.
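
As an illustration of what a detailed camera dynamics model can look like, the sketch below propagates a constant-acceleration, constant-angular-velocity state. This particular model and the state layout are assumptions, not the paper's published equations.

```python
import numpy as np

def propagate_camera(x, dt):
    # State layout (assumed): position p(3), velocity v(3),
    # acceleration a(3), quaternion q(4, scalar first), angular rate w(3).
    p, v, a = x[0:3], x[3:6], x[6:9]
    q, w = x[9:13], x[13:16]
    p = p + v * dt + 0.5 * a * dt**2      # constant-acceleration translation
    v = v + a * dt
    # First-order quaternion integration of the angular rate:
    # q_dot = 0.5 * Omega(w) * q.
    wx, wy, wz = w
    Omega = 0.5 * np.array([[0, -wx, -wy, -wz],
                            [wx,  0,  wz, -wy],
                            [wy, -wz,  0,  wx],
                            [wz,  wy, -wx,  0]])
    q = q + Omega @ q * dt
    q = q / np.linalg.norm(q)             # renormalize to a unit quaternion
    return np.concatenate([p, v, a, q, w])
```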

Pose Estimation of a Mobile Robot Based on Fusion of IMU Data and Vision Data Using an Extended Kalman Filter

Sensors, 2017

Using a single sensor to estimate the pose of a device cannot give accurate results. This paper presents the fusion of a six-degrees-of-freedom (6-DoF) inertial sensor, comprising a 3-axis accelerometer and a 3-axis gyroscope, with vision to determine a low-cost and accurate position for an autonomous mobile robot. For vision, the speeded-up robust features (SURF) and random sample consensus (RANSAC) algorithms were integrated into a monocular object detection pipeline and used to recognize a sample object in several captured images. Unlike conventional methods that depend on point tracking, RANSAC iteratively estimates the parameters of a mathematical model from a set of captured data that contains outliers. SURF and RANSAC improve accuracy because interest points (features) can be found under different viewing conditions using a Hessian matrix. This approach is proposed because of its simple implementation, low cost, and improved accuracy. With an extended Kalman filter (EKF), data from the inertial sensors and a camera were fused to estimate the position and orientation of the mobile robot. All the sensors were mounted on the mobile robot to obtain accurate localization. An indoor experiment was carried out to validate and evaluate the performance. Experimental results show that the proposed method is fast in computation, reliable and robust, and can be considered for practical applications. The performance was verified against ground-truth data using root-mean-square errors (RMSEs).
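
A generic RANSAC loop of the kind used here to reject outlier SURF matches might look as follows; `fit` and `residual` are placeholders for the paper's pose model, and all parameters are illustrative.

```python
import numpy as np

def ransac(points_src, points_dst, fit, residual, n_min, iters=200, tol=2.0):
    # Repeatedly fit a model to a minimal random sample of matches and
    # keep the hypothesis with the largest inlier set; this is what
    # makes the estimate robust to wrong feature correspondences.
    best_model = None
    best_inliers = np.zeros(len(points_src), bool)
    rng = np.random.default_rng(0)
    for _ in range(iters):
        idx = rng.choice(len(points_src), n_min, replace=False)
        model = fit(points_src[idx], points_dst[idx])
        errs = residual(model, points_src, points_dst)
        inliers = errs < tol                 # matches the model explains
        if inliers.sum() > best_inliers.sum():
            best_model, best_inliers = model, inliers
    # Refit the model on all inliers of the best hypothesis.
    return fit(points_src[best_inliers], points_dst[best_inliers]), best_inliers
```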

Delayed Fusion for Real-Time Vision-Aided Inertial Navigation

Journal of Real-Time Image Processing, 2013

In this paper, we consider the effects of the delay caused by real-time image acquisition and feature tracking in a previously documented vision-augmented inertial navigation system. First, the paper illustrates how the delay caused by image processing, if not explicitly taken into account, can lead to appreciable performance degradation of the estimator. Next, three existing methods of delayed fusion and a novel combined one are considered and compared. Simulations and Monte Carlo analyses are used to assess the estimation errors and computational effort of the various methods. Finally, the best-performing formulation is identified, one that properly handles the fusion of delayed measurements in the estimator without increasing the time burden of the filter.
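
One classic delayed-fusion strategy, recalculation from a buffered past state, can be sketched as follows; the class, its interface and the buffer length are illustrative assumptions rather than the paper's formulation.

```python
from collections import deque

class DelayedFusionBuffer:
    # Keep a short history of filter snapshots; when a vision
    # measurement stamped t_img arrives late, roll back to the
    # snapshot at t_img, fuse it there, and replay the inertial
    # predictions up to the present.
    def __init__(self, ekf, horizon=50):
        self.ekf = ekf
        self.history = deque(maxlen=horizon)   # (t, x, P, imu) snapshots

    def predict(self, t, imu):
        # Snapshot the state *before* propagating with this IMU sample.
        self.history.append((t, self.ekf.x.copy(), self.ekf.P.copy(), imu))
        self.ekf.predict(imu)

    def delayed_update(self, t_img, z):
        replay = [s for s in self.history if s[0] >= t_img]
        if not replay:
            return                        # older than the buffer: drop it
        _, x0, P0, _ = replay[0]
        self.ekf.x, self.ekf.P = x0.copy(), P0.copy()
        self.ekf.correct(z)               # fuse at the capture time
        for _, _, _, imu in replay:       # re-run the predictions
            self.ekf.predict(imu)
```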