Self-location from monocular uncalibrated vision using reference omniviews

Visual odometry from an omnidirectional vision system

2003 IEEE International Conference on Robotics and Automation (Cat. No.03CH37422)

Single Vision-Based Self-Localization for Autonomous Robotic Agents

2019 7th International Conference on Future Internet of Things and Cloud Workshops (FiCloudW), 2019

We present a single-camera, vision-based self-localization method for autonomous mobile robots in a known indoor environment. This absolute localization method is landmark-assisted: we propose an algorithm that requires the extraction of a single landmark feature, i.e., the length of a known edge. Our technique is based on measuring the distances to two distinct, arbitrarily positioned landmarks in the robot's environment, whose locations are known a priori. A single-camera vision system is used to perform the distance estimation. The developed framework is applied to tracking a robot's pose, i.e., its position and orientation, in a Cartesian coordinate system. The position of the robot is estimated using a bilateration method, while its orientation is computed using tools from projective geometry. The validity and feasibility of the approach are demonstrated through experiments.
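
A rough illustration of the bilateration step is sketched below in Python: it intersects the two distance circles around the known landmarks and returns both candidate positions. The helper name and the example values are hypothetical, and the paper's actual distance-estimation and disambiguation details are not reproduced.

```python
import numpy as np

def bilaterate(l1, l2, d1, d2):
    """Candidate robot positions from two known landmarks l1, l2 and
    measured distances d1, d2 (circle-circle intersection)."""
    l1, l2 = np.asarray(l1, float), np.asarray(l2, float)
    d = np.linalg.norm(l2 - l1)                     # landmark separation
    if d == 0 or d > d1 + d2 or d < abs(d1 - d2):
        return None                                 # degenerate or no intersection
    a = (d1**2 - d2**2 + d**2) / (2 * d)            # distance from l1 to the chord midpoint
    h = np.sqrt(max(d1**2 - a**2, 0.0))             # half-chord length
    mid = l1 + a * (l2 - l1) / d
    perp = np.array([-(l2 - l1)[1], (l2 - l1)[0]]) / d
    return mid + h * perp, mid - h * perp

# Hypothetical example: landmarks at (0, 0) and (4, 0), measured distances 2.5 m and 3.0 m.
print(bilaterate((0, 0), (4, 0), 2.5, 3.0))
```

In practice the two candidate positions would be disambiguated using the orientation estimate or a prior on the robot's position.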

Monocular visual odometry in urban environments using an omnidirectional camera

2008 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2008

We present a system for Monocular Simultaneous Localization and Mapping (Mono-SLAM) relying solely on video input. Our algorithm makes it possible to precisely estimate the camera trajectory without relying on any motion model. The estimation is fully incremental: at a given time frame, only the current location is estimated, while the previous camera positions are never modified. In particular, we do not perform any simultaneous iterative optimization of the camera positions and the estimated 3D structure (local bundle adjustment). The key aspect of the system is a fast and simple pose estimation algorithm that uses information not only from the estimated 3D map, but also from the epipolar constraint. We show that the latter leads to a much more stable estimation of the camera trajectory than the conventional approach. We perform high-precision camera trajectory estimation in urban scenes with a large amount of clutter. Using an omnidirectional camera placed on a vehicle, we cover the longest distance ever reported, up to 2.5 kilometers.
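
The epipolar term mentioned above can be illustrated with a generic first-order (Sampson) residual: given an essential matrix E implied by a pose hypothesis and normalized point correspondences, it scores how well x2^T E x1 = 0 holds. This is a standard formulation, not necessarily the authors' exact cost.

```python
import numpy as np

def sampson_epipolar_error(E, x1, x2):
    """First-order (Sampson) epipolar error for normalized image points
    x1, x2 (each an N x 2 array) under an essential matrix E (3 x 3)."""
    x1h = np.hstack([x1, np.ones((len(x1), 1))])   # homogeneous coordinates
    x2h = np.hstack([x2, np.ones((len(x2), 1))])
    Ex1 = x1h @ E.T                                # rows are E @ x1_i
    Etx2 = x2h @ E                                 # rows are E.T @ x2_i
    num = np.einsum('ij,ij->i', x2h, Ex1) ** 2     # (x2^T E x1)^2
    den = Ex1[:, 0]**2 + Ex1[:, 1]**2 + Etx2[:, 0]**2 + Etx2[:, 1]**2
    return num / den
```

A pose hypothesis would then be scored by combining the reprojection error of the mapped 3D points with this epipolar residual, in the spirit of the combination the abstract describes.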

Self-localization in non-stationary environments using omni-directional vision

Robotics and Autonomous Systems, 2007

This paper presents an image-based approach for localization in non-static environments using local feature descriptors, and its experimental evaluation in a large, dynamic, populated environment where the time interval between the collected data sets is up to two months. By using local features together with panoramic images, robustness and invariance to large changes in the environment can be achieved. Results from global place recognition with no evidence accumulation and from a Monte Carlo localization method are shown. To test the approach even further, experiments were conducted with up to 90% virtual occlusion in addition to the dynamic changes in the environment.
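
A minimal sketch of one Monte Carlo localization measurement update is given below, assuming each particle is scored by how well the current panoramic image's local features match the reference image nearest to that particle; the `match_scores` input is a stand-in for the paper's actual similarity measure, and resampling at every step is a simplification.

```python
import numpy as np

def mcl_update(particles, weights, match_scores):
    """One measurement update of Monte Carlo localization.
    particles: N x 3 array of (x, y, theta) hypotheses.
    match_scores: per-particle likelihoods from local-feature matching."""
    weights = weights * match_scores
    weights = weights / weights.sum()
    # Low-variance (systematic) resampling keeps particle diversity.
    n = len(particles)
    positions = (np.arange(n) + np.random.uniform()) / n
    idx = np.searchsorted(np.cumsum(weights), positions)
    idx = np.minimum(idx, n - 1)                   # guard against round-off
    return particles[idx], np.full(n, 1.0 / n)
```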

Adapting a real-time monocular visual SLAM from conventional to omnidirectional cameras

2011

The SLAM (Simultaneous Localization and Mapping) problem is one of the essential challenges in current robotics. Our main objective in this work is to develop a real-time visual SLAM system using monocular omnidirectional vision. Our approach is based on the Extended Kalman Filter (EKF). We use the Spherical Camera Model to obtain geometric information from the images. This model is integrated into the EKF-based SLAM through the linearization of the direct and inverse projections. We introduce a new computation of the descriptor patch for catadioptric omnidirectional cameras which aims to achieve rotation and scale invariance. We perform experiments with omnidirectional images comparing this new approach with the conventional one. The experiments confirm that our approach works better with omnidirectional cameras, since features are tracked for longer and the constructed maps are larger.
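
The direct projection of the Spherical (unified) Camera Model, whose linearization enters the EKF measurement update, can be sketched as below; the mirror parameter xi and the intrinsics K are assumed to come from calibration, and the example values are invented.

```python
import numpy as np

def spherical_project(P, xi, K):
    """Unified (spherical) camera model: project P = (X, Y, Z) onto the unit
    sphere, then through a pinhole displaced by xi from the sphere centre,
    then through the intrinsic matrix K (3 x 3)."""
    X, Y, Z = P
    rho = np.sqrt(X**2 + Y**2 + Z**2)
    m = np.array([X / (Z + xi * rho), Y / (Z + xi * rho), 1.0])
    u = K @ m
    return u[:2] / u[2]

# Invented calibration values, for illustration only.
K = np.array([[300.0, 0.0, 320.0], [0.0, 300.0, 240.0], [0.0, 0.0, 1.0]])
print(spherical_project(np.array([1.0, 0.5, 2.0]), xi=0.9, K=K))
```

In an EKF, the Jacobian of this mapping with respect to the camera pose and the landmark position provides the linearized measurement model; the inverse projection is linearized analogously when initializing features.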

Navigation from uncalibrated monocular vision

1998

In this paper we propose a method to correct the heading of an indoor mobile robot using an uncalibrated monocular vision system. Neither an environment map nor an explicit reconstruction is obtained, and no memory of the past is recorded. We extract straight edges and classify them as vertical and non-vertical. From the non-vertical lines we obtain the vanishing point used to compute the robot orientation. From corresponding vertical lines in two uncalibrated images we obtain the robot heading using the focus of expansion, together with a qualitative depth map used to compute the commanded heading.
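
The vanishing-point step can be sketched as a least-squares intersection of the extracted non-vertical line segments in homogeneous coordinates; the steering rule in the comment is a simplification, and the segment endpoints in the example are invented.

```python
import numpy as np

def vanishing_point(segments):
    """Least-squares vanishing point from line segments given as
    ((x1, y1), (x2, y2)) pixel pairs. Each segment defines a homogeneous
    line l = p1 x p2; the vanishing point v minimizes sum (l . v)^2."""
    lines = []
    for (x1, y1), (x2, y2) in segments:
        p1 = np.array([x1, y1, 1.0])
        p2 = np.array([x2, y2, 1.0])
        lines.append(np.cross(p1, p2))
    _, _, Vt = np.linalg.svd(np.array(lines))
    v = Vt[-1]                     # right singular vector with smallest singular value
    return v[:2] / v[2]

# Simplified steering idea: rotate until the vanishing point of the corridor
# lines lies on the image's vertical centre line.
segments = [((0, 479), (320, 240)), ((639, 479), (322, 241))]
print("vanishing point:", vanishing_point(segments))
```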

Localization method based on omnidirectional stereoscopic vision and dead-reckoning

Proceedings 1999 IEEE/RSJ International Conference on Intelligent Robots and Systems. Human and Environment Friendly Robots with High Intelligence and Emotional Quotients (Cat. No.99CH36289), 1999

This paper presents a system for absolute localization based on the cooperation of a stereoscopic omnidirectional vision system and a dead-reckoning system. To do so, we use an original perception system that allows our omnidirectional vision sensor SYCLOP to move along a rail. The first part of our study deals with the problem of building the sensorial model from the two stereoscopic omnidirectional images. To solve this problem we propose an approach based on the fusion of several criteria according to the Dempster-Shafer rules. The second part is devoted to exploiting this sensorial model to localize the robot by matching the sensorial primitives with the environment map. We use the dead-reckoning prediction to reduce the combinatorial complexity of the matching algorithm. We analyze the performance of our global absolute localization system on several elementary robot moves in an indoor environment.
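
The criterion-fusion step can be illustrated with Dempster's rule of combination over a small frame of discernment; the mass assignments in the example are invented and the paper's actual stereo-matching criteria are not modelled here.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions over the same
    frame of discernment. Masses map frozenset hypotheses to belief mass;
    conflicting mass (empty intersection) is normalized away."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict: the sources are incompatible")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Invented example: two criteria rating candidate matches A and B.
m1 = {frozenset({'A'}): 0.6, frozenset({'A', 'B'}): 0.4}
m2 = {frozenset({'A'}): 0.3, frozenset({'B'}): 0.4, frozenset({'A', 'B'}): 0.3}
print(dempster_combine(m1, m2))
```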

Omnidirectional vision for robot localization in urban environments

monicareggiani.net, 2008

This paper addresses the problem of long-term mobile robot localization in large urban environments. Typically, GPS is the preferred sensor for outdoor operation. However, GPS-only localization methods suffer significant performance degradation in urban areas where tall nearby structures obstruct the view of the satellites. In our work, we use omnidirectional vision-based techniques to supplement GPS and odometry and provide accurate localization. We also present some novel Monte Carlo Localization optimizations, and we introduce the concept of online knowledge acquisition and integration, presenting a framework able to perform long-term robot localization in real environments. The vision system identifies prominent features in the scene and matches them with a database of geo-referenced features that are already known or are integrated during the localization process. Results of robot localization in the old town of Fermo are presented. They show good performance, and the whole architecture also behaves well in long-term experiments, demonstrating a system suitable for real-life robot applications.
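
The matching of observed features against the geo-referenced database can be sketched as a nearest-neighbour search with a ratio test; the descriptor representation and the 0.8 threshold are assumptions, not taken from the paper.

```python
import numpy as np

def match_to_database(descriptors, db_descriptors, ratio=0.8):
    """Match observed feature descriptors (N x D) to a database of
    geo-referenced descriptors (M x D, M >= 2) and keep only
    unambiguous nearest-neighbour matches."""
    matches = []
    for i, d in enumerate(descriptors):
        dists = np.linalg.norm(db_descriptors - d, axis=1)
        j, k = np.argsort(dists)[:2]
        if dists[j] < ratio * dists[k]:            # ratio test
            matches.append((i, int(j)))            # (observation index, database index)
    return matches
```

Matched database entries, being geo-referenced, can then be used to weight the Monte Carlo Localization particles alongside GPS and odometry.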

Omnidirectional vision for robot navigation

2000

We describe a method for vision-based robot navigation with a single omnidirectional (catadioptric) camera. We show how omnidirectional images can be used to generate the representations needed for two main navigation modalities: Topological Navigation and Visual Path Following.
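
For the topological navigation modality, node recognition from omnidirectional images can be sketched as picking the map node whose stored global image descriptor is closest to the current one; the descriptor choice is an assumption here.

```python
import numpy as np

def localize_topologically(current_descriptor, node_descriptors):
    """Return the index of the topological-map node whose stored
    omnidirectional-image descriptor best matches the current view."""
    dists = [np.linalg.norm(current_descriptor - d) for d in node_descriptors]
    return int(np.argmin(dists))
```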