On the use of inverse scaling in monocular SLAM
Related papers
Monocular SLAM with Inverse Scaling Parametrization
2008
The recent literature has shown that it is possible to solve the monocular Simultaneous Localization And Mapping (SLAM) problem using both undelayed feature initialization and an Extended Kalman Filter. The key concept behind this result was the introduction of a new parametrization, called Unified Inverse Depth, that produces measurement equations with a high degree of linearity and allows efficient and accurate modeling of uncertainties. In this paper we present a monocular EKF SLAM filter based on an alternative parametrization, the Inverse Scaling Parametrization, characterized by a reduced number of parameters, a more linear measurement model, and better modeling of feature uncertainty for both low and high parallax features. Experiments in simulation demonstrate that the Inverse Scaling solution improves the monocular EKF SLAM filter when compared with the Unified Inverse Depth approach, while experiments on real data show that the system works in practice as well.
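To make the parametrization concrete, the following minimal Python sketch (an illustration under common conventions, not the authors' code) shows how a landmark stored in inverse-scaling form as homogeneous coordinates [x, y, z, ω] maps to a Euclidean point, and how the pinhole projection of such a landmark avoids an explicit division by depth until the final perspective division, which is what keeps the measurement equation closer to linear.

```python
import numpy as np

def inverse_scaling_to_euclidean(x, y, z, omega):
    """Convert an inverse-scaling landmark [x, y, z, omega] to a
    Euclidean 3D point.  omega close to 0 represents a point near
    infinity, so low-parallax features remain representable."""
    return np.array([x, y, z]) / omega

def project(K, R, t, landmark):
    """Pinhole projection of an inverse-scaling landmark.
    K: 3x3 intrinsics; R, t: world-to-camera rotation/translation."""
    x, y, z, omega = landmark
    p_cam = R @ np.array([x, y, z]) + omega * t   # homogeneous rigid transform
    uvw = K @ p_cam                               # image-plane homogeneous coords
    return uvw[:2] / uvw[2]                       # pixel coordinates
```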
Unified inverse depth parametrization for monocular SLAM
2006
Recent work has shown that the probabilistic SLAM approach of explicit uncertainty propagation can succeed in permitting repeatable 3D real-time localization and mapping even in the 'pure vision' domain of a single agile camera with no extra sensing. An issue which has caused difficulty in monocular SLAM, however, is the initialization of features, since information from multiple images acquired during motion must be combined to achieve accurate depth estimates.
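As a hedged illustration of the parametrization this paper introduces (not code from the paper itself), the 6-parameter unified inverse depth encoding of a landmark can be converted to a Euclidean point as follows, using the common convention of the camera optical centre at first observation (x0, y0, z0), the azimuth/elevation angles (theta, phi) of the observation ray, and the inverse depth rho along that ray.

```python
import numpy as np

def uid_to_euclidean(x0, y0, z0, theta, phi, rho):
    """Unified inverse depth landmark -> Euclidean 3D point.
    (x0, y0, z0): camera optical centre at first observation,
    (theta, phi): azimuth and elevation of the observation ray,
    rho: inverse of the distance along that ray."""
    m = np.array([np.cos(phi) * np.sin(theta),   # unit ray direction
                  -np.sin(phi),
                  np.cos(phi) * np.cos(theta)])
    return np.array([x0, y0, z0]) + m / rho
```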
Inverse depth to depth conversion for monocular slam
2007
Recently it has been shown that an inverse depth parametrization can improve the performance of real-time monocular EKF SLAM, permitting undelayed initialization of features at all depths. However, the inverse depth parametrization requires the storage of 6 parameters in the state vector for each map point. This implies a noticeable computing overhead when compared with the standard 3 parameter XYZ Euclidean encoding of a 3D point, since the computational complexity of the EKF scales poorly with state vector size.
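A minimal sketch of the 6-to-3 parameter conversion discussed here, including first-order covariance propagation through the Jacobian of the mapping; this is an illustrative implementation under the usual inverse-depth convention, and any criterion for deciding when a feature is safe to convert (e.g. a linearity test) is left out rather than guessed.

```python
import numpy as np

def inverse_depth_to_xyz(state, cov):
    """Collapse a 6-parameter inverse-depth landmark into a
    3-parameter XYZ landmark, propagating its covariance.
    state = [x0, y0, z0, theta, phi, rho]; cov = 6x6 covariance."""
    x0, y0, z0, theta, phi, rho = state
    m = np.array([np.cos(phi) * np.sin(theta),
                  -np.sin(phi),
                  np.cos(phi) * np.cos(theta)])
    p = np.array([x0, y0, z0]) + m / rho

    # Jacobian of p with respect to the 6 inverse-depth parameters
    dm_dtheta = np.array([np.cos(phi) * np.cos(theta),
                          0.0,
                          -np.cos(phi) * np.sin(theta)])
    dm_dphi = np.array([-np.sin(phi) * np.sin(theta),
                        -np.cos(phi),
                        -np.sin(phi) * np.cos(theta)])
    J = np.hstack([np.eye(3),
                   (dm_dtheta / rho)[:, None],
                   (dm_dphi / rho)[:, None],
                   (-m / rho**2)[:, None]])
    return p, J @ cov @ J.T   # XYZ point and its 3x3 covariance
```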
Inertial aiding of inverse depth SLAM using a monocular camera
Proceedings of the 2007 IEEE International Conference on Robotics and Automation, 2007
This paper presents the benefits of using a low cost inertial measurement unit to aid in an implementation of inverse depth initialized SLAM using a hand-held monocular camera. Results are presented with and without inertial observations for different assumed initial ranges to features on the same dataset. When using only the camera, the scale of the scene is not observable. As expected, the scale of the map depends on the prior used to initialize the depth of the features and may drift when exploring new terrain, precluding loop closure. The results show that the inertial observations help to improve the estimated trajectory of the camera leading to a better estimation of the map scale and a more accurate localization of features.
Delayed Features Initialization for Inverse Depth Monocular SLAM
2007
Recently, the unified inverse depth parametrization has been shown to be a good option for the challenging monocular SLAM problem, within an EKF scheme for estimating the stochastic map and camera pose. In the original approach, features are initialized in the first frame in which they are observed (undelayed initialization); this has advantages but also some drawbacks. In this paper a delayed feature initialization is proposed for adding new features to the stochastic map. The results show that delayed initialization can improve some aspects without losing the performance and the unified character of the original method, when initial reference points are used to fix a metric scale in the map.
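One common way to realize a delayed initialization policy of this kind is to hold a candidate feature out of the map until it has been observed with enough parallax for its depth to be well constrained. The sketch below is a hypothetical illustration of such a test, not the criterion used in the paper; the 5-degree threshold is an assumed example value.

```python
import numpy as np

def ready_to_initialize(ray1, ray2, min_parallax_deg=5.0):
    """Delayed-initialization test: a candidate feature observed along
    unit rays ray1 and ray2 (from two different camera poses) is added
    to the map only once the parallax angle between the rays exceeds a
    threshold.  The default threshold is illustrative only."""
    cos_parallax = np.clip(np.dot(ray1, ray2), -1.0, 1.0)
    return np.degrees(np.arccos(cos_parallax)) >= min_parallax_deg
```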
Delayed Inverse Depth Monocular SLAM
Proceedings of the 17th IFAC World Congress, 2008
The 6-DOF monocular camera case possibly represents the hardest variant of the simultaneous localization and mapping problem. In recent years several advances have appeared in this area; however, applying these techniques to real-world problems remains difficult so far. Recently, the unified inverse depth parametrization has been shown to be a good option for this challenging problem, within an EKF scheme for estimating the stochastic map and camera pose. In this paper a new delayed initialization scheme is proposed for adding new features to the stochastic map. The results show that delayed initialization can improve some aspects without losing the performance and the unified character of the original method, when initial reference points are used to fix a metric scale in the map.
Monocular SLAM for Autonomous Robots with Enhanced Features Initialization
Sensors, 2014
This work presents a variant approach to the monocular SLAM problem focused on exploiting the advantages of a human-robot interaction (HRI) framework. Based upon the delayed inverse-depth feature initialization SLAM (DI-D SLAM), a known monocular technique, several crucial modifications are introduced that take advantage of data from a secondary monocular sensor, assumed to be worn by a human. The human explores an unknown environment with the robot, and when their fields of view coincide, the cameras are treated as a pseudo-calibrated stereo rig to produce depth estimates through parallax. These depth estimates are used to solve a related problem with DI-D monocular SLAM, namely the requirement of a metric scale initialization through known artificial landmarks. The same process is used to improve the performance of the technique when introducing new landmarks into the map. The suitability of the stereo estimation approach, based on SURF feature matching, is discussed. Experimental validation on real data shows improvements in terms of more features correctly initialized with reduced uncertainty, thus reducing scale and orientation drift. Additional discussion of how a real-time implementation could take advantage of this approach is provided.
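The pseudo-stereo depth estimation described above can be sketched roughly as follows. This is a hypothetical OpenCV-based illustration, not the authors' implementation: SURF keypoints matched between the robot camera and the human-worn camera are triangulated using an assumed known relative pose between the two views, and the resulting depths can then seed the delayed inverse-depth initialization.

```python
import cv2
import numpy as np

def pseudo_stereo_depth(img_robot, img_human, K, R, t):
    """Estimate landmark depths from a pseudo-calibrated stereo pair.
    K: shared 3x3 intrinsics; (R, t): pose of the human-worn camera
    relative to the robot camera (assumed known for this sketch)."""
    # SURF lives in opencv-contrib (non-free module)
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    kp1, des1 = surf.detectAndCompute(img_robot, None)
    kp2, des2 = surf.detectAndCompute(img_human, None)

    # Match SURF descriptors between the two views
    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    matches = matcher.match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches]).T  # 2xN
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches]).T  # 2xN

    # Triangulate with the projection matrices of the two cameras
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t.reshape(3, 1)])
    X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)   # 4xN homogeneous points
    X = X_h[:3] / X_h[3]                              # Euclidean points
    return X[2]                                       # depths in the robot camera frame
```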
A Robust Approach for a Filter-Based Monocular Simultaneous Localization and Mapping (SLAM) System
Sensors, 2013
Simultaneous localization and mapping (SLAM) is an important problem to solve in robotics in order to build truly autonomous mobile robots. This work presents a novel method for implementing a SLAM system based on a single camera sensor. SLAM with a single camera, or monocular SLAM, is probably one of the most complex SLAM variants: a single camera, freely moving through its environment, represents the sole sensor input to the system. The sensors have a large impact on the algorithm used for SLAM. Cameras are used frequently because they provide a lot of information and are well adapted for embedded systems: they are light, cheap, and power-saving. Nevertheless, and unlike range sensors, which provide range and angular information, a camera is a projective sensor providing only angular measurements of image features. Therefore, depth information (range) cannot be obtained in a single step, and special techniques for feature initialization are needed in order to enable the use of angular sensors (such as cameras) in SLAM systems. The main contribution of this work is a novel and robust scheme for incorporating and measuring visual features in filtering-based monocular SLAM systems. The proposed method is based on a two-step technique intended to exploit all the information available in angular measurements. Unlike previous schemes, the values of the parameters used by the initialization technique are derived directly from the sensor characteristics, thus simplifying the tuning of the system. The experimental results show that the proposed method surpasses the performance of previous schemes.
A Minimum Energy solution to Monocular Simultaneous Localization and Mapping
IEEE Conference on Decision and Control and European Control Conference, 2011
In this paper we propose an alternative solution to the Monocular Simultaneous Localization and Mapping (SLAM) problem. This approach uses a Minimum-Energy Observer for Systems with Perspective Outputs and provides an optimal solution. In contrast to the well-known EKF-SLAM algorithm, this method yields a global solution and no linearization procedures are required. Furthermore, we show that the estimation error converges exponentially fast toward a neighborhood of zero, a region whose size grows gracefully with the magnitude of the input disturbance, output noise, and initial camera position uncertainty.