Assessments Of Different Speeded Up Robust Features (SURF) Algorithm Resolution For Pose Estimation Of UAV

Extended Robust Feature-based Visual Navigation System for UAVs

The current study seeks to advance in the direction of building a robust feature-based passive visual navigation system that utilises unique characteristics of the features present in an image to obtain the position of the aircraft. This is done by extracting, prioritising and associating features such as road centrelines and road intersections, and by using other natural landmarks as context. It is shown that extending the system with complementary feature-matching blocks and selecting the features prevalent in the scene improves the performance of the navigation system. The work also discusses the constraints, cost and impact of the introduced optical-flow component. The algorithm is evaluated on a simulated dataset containing satellite imagery.

An Integrated UAV Navigation System Based on Aerial Image Matching

2008 IEEE Aerospace Conference, 2008

The aim of this paper is to explore the possibility of using geo-referenced satellite or aerial images to augment an Unmanned Aerial Vehicle (UAV) navigation system in case of GPS failure. A vision-based navigation system, which combines inertial sensors, a visual odometer and registration of the UAV's on-board video to a given geo-referenced aerial image, has been developed and tested on real flight-test data. The experimental results show that it is possible to extract useful position information from aerial imagery even when the UAV is flying at low altitude. It is shown that such information can be used in an automated way to compensate for the drift of the UAV state estimation which occurs when only inertial sensors and a visual odometer are used.

Correlation-extreme visual navigation of unmanned aircraft systems based on speed-up robust features

Aviation, 2014

The peculiarities of correlation-extreme visual navigation are considered. 64-element descriptors of feature points in surface images are selected on the basis of the speeded up robust features (SURF) method. An analysis of possible criterial correlation functions is carried out to find the best match between descriptors of the template and current images. The use of a normalized correlation function is proposed, based on the matrix-multiplication properties of descriptors. It minimizes the number of false matches in comparison with the Euclidean distance in descriptor space, and the proposed matching strategy significantly decreases computation time.
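The matrix-multiplication idea described above can be illustrated with a short sketch. Since SURF descriptors are L2-normalized, the normalized correlation between every template descriptor and every current-image descriptor reduces to a single matrix product; the function names and data below are illustrative, not taken from the paper:

```python
import numpy as np

def match_by_correlation(tmpl, curr):
    """Match 64-element descriptors by normalized correlation.

    One matrix multiplication scores every template descriptor
    against every current-image descriptor at once.
    tmpl: (M, 64), curr: (N, 64); rows are assumed L2-normalized
    (as SURF descriptors are). Returns the best-match index for
    each template descriptor.
    """
    scores = tmpl @ curr.T          # (M, N) correlation surface
    return scores.argmax(axis=1)

def match_by_euclidean(tmpl, curr):
    """Baseline: nearest neighbour in Euclidean descriptor space."""
    d2 = ((tmpl[:, None, :] - curr[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)

# For unit-norm descriptors the two criteria pick the same nearest
# neighbour, since ||a - b||^2 = 2 - 2 a.b; the practical gain of the
# correlation form is that it collapses to one matrix product.
rng = np.random.default_rng(0)
t = rng.normal(size=(5, 64)); t /= np.linalg.norm(t, axis=1, keepdims=True)
c = rng.normal(size=(8, 64)); c /= np.linalg.norm(c, axis=1, keepdims=True)
assert (match_by_correlation(t, c) == match_by_euclidean(t, c)).all()
```

The paper's claimed reduction in false matches comes from how the correlation scores are thresholded and validated, which this sketch does not reproduce.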

Vision-Based Unmanned Aerial Vehicle Navigation Using Geo-Referenced Information

EURASIP Journal on Advances in Signal Processing, 2009

This paper investigates the possibility of augmenting an Unmanned Aerial Vehicle (UAV) navigation system with a passive video camera in order to cope with long-term GPS outages. The paper proposes a vision-based navigation architecture which combines inertial sensors, visual odometry, and registration of the on-board video to a geo-referenced aerial image. The vision-aided navigation system developed is capable of providing high-rate and drift-free state estimation for UAV autonomous navigation without GPS. Because image-to-map registration is used for absolute position calculation, drift-free position performance depends on the structural characteristics of the terrain. An experimental evaluation of the approach based on offline flight data is provided. In addition, the proposed architecture has been implemented on board an experimental UAV helicopter platform and tested during vision-based autonomous flights.
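The image-to-map registration step common to this and the preceding papers can be sketched in miniature: slide the on-board image patch over the geo-referenced map, score each window by normalized cross-correlation, and convert the best-scoring offset to metres via the map's ground resolution. This is a generic stand-in for the papers' registration pipelines; the function name, toy data and resolution value are assumptions:

```python
import numpy as np

def register_to_map(patch, geo_map, m_per_px):
    """Locate an on-board image patch in a geo-referenced map by
    exhaustive normalized cross-correlation, returning the offset
    in metres and the peak score. (Illustrative only; real systems
    add geometric rectification and fuse the fix with INS.)"""
    ph, pw = patch.shape
    p = patch - patch.mean()
    best, best_rc = -np.inf, (0, 0)
    for r in range(geo_map.shape[0] - ph + 1):
        for c in range(geo_map.shape[1] - pw + 1):
            w = geo_map[r:r + ph, c:c + pw]
            w = w - w.mean()
            denom = np.sqrt((p * p).sum() * (w * w).sum())
            if denom == 0:
                continue                      # flat window, no texture
            score = (p * w).sum() / denom
            if score > best:
                best, best_rc = score, (r, c)
    return (best_rc[0] * m_per_px, best_rc[1] * m_per_px), best

# Toy example: embed the patch at row 6, col 9 of a random map
# (0.5 m/pixel assumed) and recover its metric offset.
rng = np.random.default_rng(1)
M = rng.random((32, 32))
patch = M[6:14, 9:17].copy()
pos, score = register_to_map(patch, M, m_per_px=0.5)
assert pos == (3.0, 4.5) and score > 0.99
```

The note in the abstract that performance "depends on the structural characteristics of the terrain" corresponds to the `denom == 0` and low-score cases here: texture-poor windows give no usable correlation peak.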

Computer Vision Onboard UAVs for Civilian Tasks

Journal of Intelligent and Robotic Systems, 2009

Computer vision is much more than a technique to sense and recover environmental information from a UAV. It should play a main role in UAV functionality because of the large amount of information that can be extracted, its possible uses and applications, and its natural connection to human-driven tasks, given that vision is our main interface to understanding the world. Our current research focus lies in the development of techniques that allow UAVs to maneuver in spaces using visual information as their main input source. This task involves creating techniques that allow a UAV to maneuver towards features of interest whenever a GPS signal is not reliable or sufficient, e.g. when signal dropouts occur (as usually happens in urban areas, when flying through terrestrial urban canyons, or when operating on remote planetary bodies), or when tracking or inspecting visual targets, including moving ones, without knowing their exact UTM coordinates. This paper also investigates visual servoing control techniques that use the velocity and position of suitable image features to compute references for flight control. The paper aims to give a global view of the main aspects of the research field of computer vision for UAVs, clustered into four main active research lines: visual servoing and control, stereo-based visual navigation, image-processing algorithms for detection and tracking, and visual SLAM. Finally, the results of applying these techniques in several applications are presented and discussed, encompassing power-line inspection, mobile target tracking, stereo distance estimation, and mapping and positioning.
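The visual servoing idea mentioned above, computing flight-control references from image features, can be illustrated with the classic image-based visual servoing (IBVS) law for point features. This is the textbook formulation, not the authors' specific controller; the function name, gain and test values are assumptions:

```python
import numpy as np

def ibvs_velocity(features, desired, depths, lam=0.5):
    """One step of classic image-based visual servoing: stack the
    point-feature interaction matrices and command a camera twist
    v = -lam * pinv(L) @ e (e = current - desired feature error).
    features/desired: normalized image coords (x, y); depths: Z per point.
    Returns a 6-vector (vx, vy, vz, wx, wy, wz)."""
    L_rows, err = [], []
    for (x, y), (xd, yd), Z in zip(features, desired, depths):
        # standard interaction matrix of a normalized image point at depth Z
        L_rows.append(np.array([[-1 / Z, 0, x / Z, x * y, -(1 + x * x), y],
                                [0, -1 / Z, y / Z, 1 + y * y, -x * y, -x]]))
        err.extend([x - xd, y - yd])
    L = np.vstack(L_rows)
    return -lam * np.linalg.pinv(L) @ np.array(err)

# Three features slightly off their desired positions produce a
# corrective camera twist; zero error produces zero velocity.
v = ibvs_velocity([(0.1, 0.0), (-0.1, 0.0), (0.0, 0.1)],
                  [(0.0, 0.0), (0.0, 0.0), (0.0, 0.0)],
                  [2.0, 2.0, 2.0])
assert v.shape == (6,)
```

In a UAV setting the resulting camera twist would still have to be mapped through the vehicle dynamics before becoming a flight-control reference, which this sketch omits.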

Real-time Experiment of Feature Tracking/Mapping using a low-cost Vision and GPS/INS System on an UAV platform

Journal of Global Positioning Systems, 2004

This paper presents the real-time results of an air-to-ground feature tracking algorithm using a passive vision camera and a low-cost GPS/INS navigation system on a UAV (Uninhabited Air Vehicle) platform. The vision payload is able to observe a number of ground features, and the GPS/INS navigation system is used in conjunction with a waypoint-based guidance and flight control module. Due to limited processing resources, the vision node employs a simple but fast point-based feature extraction algorithm. The feature tracking performance is greatly affected by the accuracy of the onboard navigation system. Conversely, it can be used as a performance indicator for the navigation filter by comparing it against the true feature locations with some simple geometry. This paper presents the results of targeting performance against known feature locations, hence verifying the accuracy of the real-time GPS/INS system.
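The abstract does not say which "simple but fast point-based feature extraction algorithm" the vision node uses; one plausible low-cost candidate of that era is a Moravec-style interest operator, sketched below purely for illustration (the window size, shift set and test image are assumptions):

```python
import numpy as np

def moravec_interest(img, win=3):
    """Moravec-style interest operator: the score at each pixel is the
    minimum windowed sum of squared differences over small shifts, so
    only corners (which change under every shift) score highly."""
    shifts = ((1, 0), (0, 1), (1, 1), (1, -1))
    h, w = img.shape
    half = win // 2
    score = np.zeros_like(img, dtype=float)
    for r in range(half + 1, h - half - 1):
        for c in range(half + 1, w - half - 1):
            base = img[r - half:r + half + 1, c - half:c + half + 1].astype(float)
            ssds = []
            for dr, dc in shifts:
                sh = img[r - half + dr:r + half + 1 + dr,
                         c - half + dc:c + half + 1 + dc].astype(float)
                ssds.append(((base - sh) ** 2).sum())
            score[r, c] = min(ssds)
    return score

# A bright square on a dark background: interest peaks near its corners.
img = np.zeros((12, 12))
img[5:8, 5:8] = 1.0
s = moravec_interest(img)
assert s.max() > 0
```

Whatever detector is used, the paper's point stands: the downstream tracking accuracy is dominated by the navigation solution, not by the extractor.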

Vision Navigation System to Manoeuvre Unmanned Aerial Vehicle (UAV)

UK-RAS Conference for PhD and Early Career Researchers Proceedings, 2020

This paper describes the development and implementation of an omnidirectional multiple-stereo-camera vision system. The vision system is compact and light enough to operate on board a commercially available off-the-shelf miniature UAV (Unmanned Aerial Vehicle) quadrotor. It contains several stereo cameras rigidly fixed on board the UAV-Quadrotor, oriented in such a way that it has 360-degree omnidirectional visual coverage. The paper demonstrates that by combining several stereo cameras, the system can provide depth information and optical flow data in real time to an on-board image-processing computer. One can estimate the position and the orientation (roll, pitch and yaw) of the UAV in 3D (three-dimensional) space accurately without the aid of GPS (Global Positioning System), an IMU (Inertial Measurement Unit) or any other external navigation or orientation aid. This method can be described as "Simultaneous Localization and Exploration Oriented Visual Navigation". It is a development of a vision system capable of autonomously navigating a UAV-Quadrotor through random free spaces within an unknown complex environment and without any mapping; it simply detects and avoids obstacles while calculating the required "through flight path".
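The depth information each stereo pair contributes comes from the standard disparity relation Z = f·B/d (focal length f in pixels, baseline B in metres, disparity d in pixels). A minimal sketch of just this step, with illustrative numbers; the omnidirectional rig, optical-flow fusion and pose estimation in the paper are not reproduced:

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Depth of a matched feature from a calibrated stereo pair:
    Z = f * B / d. Larger disparity means a closer feature; zero or
    negative disparity has no valid depth."""
    if disparity_px <= 0:
        raise ValueError("feature must have positive disparity")
    return focal_px * baseline_m / disparity_px

# e.g. 700 px focal length, 12 cm baseline, 20 px disparity -> 4.2 m
assert abs(stereo_depth(700.0, 0.12, 20.0) - 4.2) < 1e-9
```

The short baselines that fit on a miniature quadrotor limit useful depth range, which is one reason the paper combines several pairs with optical flow.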

A Visual Navigation System for UAS Based on Geo-Referenced Imagery

The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 2012

This article presents an approach to the terrain-aided navigation problem suitable for unmanned aerial vehicles flying at low altitude. The problem of estimating the state parameters of a flying vehicle is addressed in the particular situation where GPS information is unavailable or unreliable, for instance due to jamming. The proposed state estimation approach fuses information from inertial and image sensors. Absolute localization is obtained through image-to-map registration, for which 2D satellite images are used. The algorithms presented are implemented and tested on board an industrial unmanned helicopter, and flight-test results are presented.

Performance Evaluation of Vision-Based Navigation and Landing on a Rotorcraft Unmanned Aerial Vehicle

2007 IEEE Workshop on Applications of Computer Vision (WACV '07), 2007

A Rotorcraft UAV provides an ideal experimental platform for vision-based navigation. This paper describes the flight tests of the US Army PALACE project, which implements Moravec's pseudo-normalized correlation tracking algorithm. The tracker uses the movement of the landing site in the camera, a laser range, and the aircraft attitude from an IMU to estimate the relative motion of the UAV. The position estimate functions as a GPS equivalent to enable the rotorcraft to maneuver without the aid of GPS. With GPS data as a baseline, tests were performed in simulation and in flight that measure the accuracy of the position estimation.
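Moravec's pseudo-normalized correlation, which the PALACE tracker implements, is commonly written as 2·Σ(a·b) / (Σa² + Σb²): it avoids the square root of true normalized cross-correlation and equals 1 only for identical patches. The sketch below is a generic illustration of that measure, not the PALACE flight code:

```python
import numpy as np

def pseudo_normalized_correlation(a, b):
    """Pseudo-normalized correlation between two equally sized patches:
    2 * sum(a*b) / (sum(a^2) + sum(b^2)). Cheaper than true NCC
    (no square root) and it penalizes brightness/gain differences,
    reaching 1 only when the patches are identical."""
    a = a.astype(float).ravel()
    b = b.astype(float).ravel()
    denom = (a * a).sum() + (b * b).sum()
    return 2.0 * (a * b).sum() / denom if denom else 0.0

p = np.arange(16.0).reshape(4, 4)
assert abs(pseudo_normalized_correlation(p, p) - 1.0) < 1e-12
# A gain change (same pattern, doubled intensity) scores below 1,
# unlike true NCC, which would still give 1.
assert pseudo_normalized_correlation(p, 2 * p) < 1.0
```

In the flight system this score is evaluated over candidate landing-site positions each frame, and the winning offset is fused with the laser range and IMU attitude to form the GPS-equivalent position estimate.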

A simple visual navigation system for an UAV

International Multi-Conference on Systems, Signals & Devices, 2012

We present a simple and robust monocular camera-based navigation system for an autonomous quadcopter. The method does not require any additional infrastructure such as radio beacons, artificial landmarks or GPS, and can easily be combined with other navigation methods and algorithms. Its computational complexity is independent of the environment size, and it works even when sensing only one landmark at a time, allowing operation in landmark-poor environments. We also describe an FPGA-based embedded realization of the method's most computationally demanding phase.