Evaluation of registration methods for sparse 3D laser scans

Mapping with Micro Aerial Vehicles by Registration of Sparse 3D Laser Scans

Micro aerial vehicles (MAVs) pose specific constraints on onboard sensing, mainly limited payload and limited processing power. For accurate 3D mapping even in GPS-denied environments, we have designed a light-weight 3D laser scanner specifically for the application on MAVs. Similar to other custom-built 3D laser scanners composed of a rotating 2D laser range finder, it exhibits different point densities within and between individual scan lines. When rotated fast, such non-uniform point densities influence neighborhood searches which in turn may negatively affect local feature estimation and scan registration. We present a complete pipeline for 3D mapping including pair-wise registration and global alignment of 3D scans acquired in-flight. For registration, we extend a state-of-the-art registration algorithm to include topological information from approximate surface reconstructions. For global alignment, we use a graph-based approach making use of the same error metric and iteratively refine the complete vehicle trajectory. In experiments, we show that our approach can compensate for the effects caused by different point densities up to very low angular resolutions and that we can build accurate and consistent 3D maps in-flight with a micro aerial vehicle.
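As a rough illustration of the global alignment stage described above, the following Python sketch (names and data layout assumed, not taken from the paper) shows how pairwise registration results are chained into an initial trajectory of scan poses; the graph-based refinement mentioned in the abstract would then iteratively improve these poses using the same error metric.

    import numpy as np

    def chain_poses(relative_transforms):
        """relative_transforms: list of 4x4 matrices, each mapping scan i
        into the frame of scan i-1. Returns global 4x4 poses for all scans."""
        poses = [np.eye(4)]                  # the first scan defines the map frame
        for T_rel in relative_transforms:
            poses.append(poses[-1] @ T_rel)  # compose along the chain
        return poses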

Registration of Non-Uniform Density 3D Laser Scans for Mapping with Micro Aerial Vehicles

Micro aerial vehicles (MAVs) pose specific constraints on onboard sensing, mainly limited payload and limited processing power. For accurate 3D mapping even in GPS-denied environments, we have designed a lightweight 3D laser scanner specifically for the application on MAVs. Similar to other custom-built 3D laser scanners composed of a rotating 2D laser range finder, it exhibits different point densities within and between individual scan lines. When rotated fast, such non-uniform point densities influence neighborhood searches which in turn may negatively affect local feature estimation and scan registration. We present a complete pipeline for 3D mapping including pair-wise registration and global alignment of such non-uniform density 3D point clouds acquired in-flight. For registration, we extend a state-of-the-art registration algorithm to include topological information from approximate surface reconstructions. For global alignment, we use a graph-based approach making use of the same error metric and iteratively refine the complete vehicle trajectory. In experiments, we show that our approach can compensate for the effects caused by different point densities up to very low angular resolutions and that we can build accurate and consistent 3D maps in-flight with a micro aerial vehicle.
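The registration step builds on the Generalized-ICP plane-to-plane error; a minimal Python sketch of that cost is given below, assuming matched point pairs and per-point covariances are already available (in the approach above, those covariances would come from topological neighborhoods of an approximate surface reconstruction rather than from Euclidean neighbor searches). Function and variable names are illustrative.

    import numpy as np

    def gicp_cost(src, dst, cov_src, cov_dst, R, t):
        """Sum of Mahalanobis residuals of matched points under pose (R, t).
        src, dst:         (N, 3) matched source/target points
        cov_src, cov_dst: (N, 3, 3) per-point covariances
        R, t:             candidate rotation (3, 3) and translation (3,)"""
        cost = 0.0
        for a, b, ca, cb in zip(src, dst, cov_src, cov_dst):
            d = b - (R @ a + t)            # residual of this correspondence
            m = cb + R @ ca @ R.T          # combined covariance of the pair
            cost += float(d @ np.linalg.solve(m, d))
        return cost

A registration then searches for the rigid transform (R, t) minimizing this cost; what changes between variants is mainly how the per-point covariances are estimated.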

Registration of Non-Uniform Density 3D Point Clouds using Approximate Surface Reconstruction

3D laser scanners composed of a rotating 2D laser range scanner exhibit different point densities within and between individual scan lines. Such non-uniform point densities influence neighbor searches which in turn may negatively affect feature estimation and scan registration. To reliably register such scans, we extend a state-of-the-art registration algorithm to include topological information from approximate surface reconstructions. We show that our approach outperforms related approaches in both refining a good initial pose estimate and registering badly aligned point clouds if no such estimate is available. In an example application, we demonstrate local 3D mapping with a micro aerial vehicle by registering sequences of non-uniform density point clouds acquired in-flight with a continuously rotating lightweight 3D scanner.

1 Introduction

3D scanners provide robots with the ability to extract spatial information about their surroundings, detect obstacles and avoid collisions, build 3D maps, and localize. In the course of a larger project on mapping inaccessible areas with autonomous micro aerial vehicles (MAVs), we have developed a light-weight 3D scanner [13] specifically suited for the application on MAVs. It consists of a Hokuyo 2D laser scanner, a rotary actuator and a slip ring to allow continuous rotation. Just as with other rotated scanners, the acquired point clouds (aggregated over one full or half rotation) show the particular characteristic of having non-uniform point densities: usually a high density within each scan line and a certain angle between scan lines which depends on the rotation speed of the scanner (see Figure 1).

In our setup, we aggregate individual scans of the continuously rotating laser scanner and form 3D point clouds over one half rotation (covering an omnidirectional field of view). To compensate for the MAV's motion during aggregation, we use visual odometry [18] and inertial sensors as rough estimates, and transform the scans into a common coordinate frame. Since we use the laser scanner not only for mapping and localization but also for collision avoidance, we rotate the scanner fast, resulting in a particularly low angular resolution of roughly 9°. This reduces the point density in the aggregated point clouds but increases the frequency with which we perceive (omnidirectionally) the surroundings of the MAV (2 Hz). The resulting non-uniform point densities affect classic neighborhood searches in 3D and cause problems in local feature estimation and registration. To compensate for the non-uniform point densities, we extend the state-of-the-art registration algorithm Generalized-ICP [19] to include topological surface information instead of the 3D neighborhood of points.

Figure 1: Classic neighbor searches in non-uniform density point clouds may only find points in the same scan line (red), whereas a topological neighborhood (green) can better reflect the underlying surface.

The remainder of this paper is organized as follows: after giving a brief overview on related work in Section 2, we present our approach in Section 3. In experiments, we demonstrate the superior performance of our approach and discuss the achieved results in Section 4.

2 Related Work

2.1 Laser Scanners for MAVs

For mobile ground robots, 3D laser scanning sensors are widely used due to their accurate distance measurements even in bad lighting conditions and their large field-of-view. For instance, autonomous cars often perceive obstacles by means of a rotating laser scanner with a 360° horizontal field-of-view, allowing for the detection of obstacles in every direction [12]. Up to
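The topological neighborhood of Figure 1 can be sketched as follows for an organized scan stored as a scan-line grid; the data layout and window size are assumptions for illustration, not the paper's implementation.

    import numpy as np

    def topological_covariance(cloud, u, v, half_window=1):
        """cloud: (n_lines, n_points, 3) organized scan.
        Returns the mean and 3x3 covariance of the grid neighborhood around
        point (u, v), i.e. neighbors along and across scan lines, instead of
        a Euclidean radius search that may only hit the same scan line."""
        n_lines, n_points, _ = cloud.shape
        u0, u1 = max(0, u - half_window), min(n_lines, u + half_window + 1)
        v0, v1 = max(0, v - half_window), min(n_points, v + half_window + 1)
        patch = cloud[u0:u1, v0:v1].reshape(-1, 3)
        mean = patch.mean(axis=0)
        cov = np.cov(patch.T) + 1e-6 * np.eye(3)   # regularized covariance
        return mean, cov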

Local Multi-Resolution Representation for 6D Motion Estimation and Mapping with a Continuously Rotating 3D Laser Scanner

Micro aerial vehicles (MAV) pose a challenge in designing sensory systems and algorithms due to their size and weight constraints and limited computing power. We present an efficient 3D multi-resolution map that we use to aggregate measurements from a lightweight continuously rotating laser scanner. We estimate the robot’s motion by means of visual odometry and scan registration, aligning consecutive 3D scans with an incrementally built map. By using local multi-resolution, we gain computational efficiency by having a high resolution in the near vicinity of the robot and a lower resolution with increasing distance from the robot, which correlates with the sensor’s characteristics in relative distance accuracy and measurement density. Compared to uniform grids, local multi-resolution leads to the use of fewer grid cells without losing information and consequently results in lower computational costs. We efficiently and accurately register new 3D scans with the map in order to estimate the motion of the MAV and update the map in-flight. In experiments, we demonstrate superior accuracy and efficiency of our registration approach compared to state-of-the-art methods such as GICP. Our approach builds an accurate 3D obstacle map and estimates the vehicle’s trajectory in real-time.
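A hedged sketch of the local multi-resolution idea follows: the grid cell edge length grows with distance from the robot, matching the sensor's decreasing measurement density and accuracy. The concrete cell sizes and level count are illustrative, not the parameters used in the paper.

    import numpy as np

    def cell_size(point, base_size=0.25, base_radius=1.0, num_levels=5):
        """Edge length of the grid cell a point falls into: base_size within
        base_radius of the robot, doubling with every doubling of distance,
        capped at the coarsest level."""
        r = float(np.linalg.norm(point))
        level = 0 if r <= base_radius else int(np.ceil(np.log2(r / base_radius)))
        return base_size * (2 ** min(level, num_levels - 1))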

Joint 3D Laser and Visual Fiducial Marker based SLAM for a Micro Aerial Vehicle

Laser scanners have been proven to provide reliable and highly precise environment perception for micro aerial vehicles (MAV). This oftentimes makes them the first choice for tasks like obstacle avoidance, close inspection of structures, self-localization, and mapping. However, artificial environments may pose problems if the scene is self-similar or symmetric and, hence, localization becomes ambiguous if only relying on distance measurements (e.g., when flying along a parallel aisle).

Challenging data sets for point cloud registration algorithms

International Journal of Robotics Research, 2012

Many registration solutions have bloomed lately in the literature. The iterative closest point, for example, could be considered as the backbone of many laser-based localization and mapping systems. Although they are widely used, it is a common challenge to compare registration solutions on a fair basis. The main challenge is to overcome the lack of accurate ground truth in current data sets, which usually cover environments only over a small range of organization levels. In computer vision, the Stanford 3D Scanning Repository pushed forward point cloud registration algorithms and object modeling fields by providing high-quality scanned objects with precise localization. We aim at providing similar high-caliber working material to the robotics and computer vision communities but with sceneries instead of objects. We propose 8 point cloud sequences acquired in locations covering the environment diversity that modern robots are susceptible to encounter, ranging from inside an apartment to a woodland area. The core of the data sets consists of 3D laser point clouds for which supporting data (Gravity, Magnetic North and GPS) are given at each pose. A special effort has been made to ensure a global positioning of the scanner within millimeter range precision, independently of environmental conditions. This will allow for the development of improved registration algorithms when mapping challenging environments, such as those found in real-world situations.
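Ground-truth poses of this kind are typically used to score a registration result by the residual transform between estimate and survey; a small Python sketch (4x4 homogeneous matrices assumed) is given below.

    import numpy as np

    def registration_error(T_est, T_gt):
        """Return (translational error in meters, rotational error in radians)
        of an estimated 4x4 pose against the ground-truth pose."""
        delta = np.linalg.inv(T_gt) @ T_est          # residual transform
        t_err = float(np.linalg.norm(delta[:3, 3]))
        cos_a = np.clip((np.trace(delta[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
        return t_err, float(np.arccos(cos_a))        # rotation angle of residual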

Omnidirectional Perception for Lightweight MAVs using a Continuously Rotating 3D Laser Scanner

Micro aerial vehicles (MAV) are restricted in their size and weight, making the design of sensory systems for these vehicles challenging. We designed a small and lightweight continuously rotating 3D laser scanner—allowing for environment perception in a range of 30m in almost all directions. This sensor is well suited for applications such as 3D obstacle detection, 6D motion estimation, localisation, and mapping. Reliably perceiving obstacles in the surroundings of the MAV is a prerequisite for fully autonomous flight in complex environments. Due to varying shape and reflectance properties of objects, not all obstacles are perceived in every 3D laser scan (one half rotation of the scanner). Especially farther away from the MAV, multiple scans may be necessary in order to adequately detect an obstacle. In order to increase the probability of detecting obstacles, we aggregate acquired scans over short periods of time in an egocentric grid-based map. We register acquired scans against this local map to estimate the motion of our MAV and to consistently update the map. In experiments, we show that our approaches to pose estimation and laser scan matching allow for reliable aggregation of 3D scans over short periods of time, sufficiently accurate to improve detection probability without causing inaccuracies in the estimation of the position of detected obstacles. Furthermore, we assess the probability of detecting different types of obstacles in varying distances from the MAV.
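The aggregation of registered scans can be sketched as a robot-centered count grid in which repeated hits on a cell raise the confidence that an obstacle is present; grid size, resolution, and the simple counting scheme are assumptions for illustration, not the map used in the paper.

    import numpy as np

    class EgocentricGrid:
        def __init__(self, size_m=60.0, resolution=0.5):
            self.res = resolution
            self.dim = int(size_m / resolution)
            self.counts = np.zeros((self.dim,) * 3, dtype=np.uint16)

        def add_scan(self, points_ego):
            """points_ego: (N, 3) points already transformed into the
            robot-centered frame; points outside the grid are dropped."""
            idx = np.floor(points_ego / self.res).astype(int) + self.dim // 2
            valid = np.all((idx >= 0) & (idx < self.dim), axis=1)
            for i, j, k in idx[valid]:
                self.counts[i, j, k] += 1     # more hits -> higher confidence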

Assessment and comparison of registration algorithms between aerial images and laser point clouds

ISPRS Symposium 'From Sensor to Imagery', 2006

ABSTRACT: Photogrammetry has been providing accurate coordinate measurements through the stereoscopic method for many years. LiDAR, on the other hand, is becoming the prime method for large-scale acquisition of elevation data due to its capability of directly measuring 3D coordinates of a huge number of points. LiDAR can provide measurements in areas where traditional photogrammetric techniques encounter problems mainly due to occlusions or shadows. However, LiDAR also has its limitations due to its inability of ...

Towards Model-Free SLAM Using a Single Laser Range Scanner for Helicopter MAV


A new solution for the SLAM problem is presented which makes use of a scan matching algorithm and does not rely on Bayesian filters. The virtual map is represented in the form of an occupancy grid, which stores laser scans based on the estimated position. The occupancy grid is scanned by means of ray casting to get a scan of the virtual world, called a "virtual scan". The virtual scan therefore contains data from all previously acquired laser measurements and hence serves as the best representation of the surroundings. New laser scans are matched against the virtual scan to get an estimate of the new position. The scan matching cost function is minimized via an adaptive direct search with boundary updating until convergence. The resulting method is model-free and can be applied to various platforms, including micro aerial vehicles that lack dynamic models. Experimental validation of the SLAM method is presented by mapping a typical office hallway environment with a closed loop, using a manually driven platform and a laser range scanner. The mapping results are highly accurate and the loop closure area appears seamless, even though no loop closure algorithms or post-mapping correction processes are used.
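The ray-casting step that produces the "virtual scan" can be sketched as follows for a 2D occupancy grid; the marching scheme and parameters are illustrative, not the paper's implementation.

    import numpy as np

    def virtual_scan(grid, pose, n_beams=360, max_range=10.0, resolution=0.05):
        """grid: 2D bool array (True = occupied) with cell size `resolution`;
        pose: (x, y, theta) in metric map coordinates.
        Returns one simulated range per beam, taken from the stored map."""
        x, y, theta = pose
        angles = theta + np.linspace(0.0, 2.0 * np.pi, n_beams, endpoint=False)
        ranges = np.full(n_beams, max_range)
        for b, a in enumerate(angles):
            r = resolution
            while r < max_range:
                i = int((x + r * np.cos(a)) / resolution)
                j = int((y + r * np.sin(a)) / resolution)
                if not (0 <= i < grid.shape[0] and 0 <= j < grid.shape[1]):
                    break                     # beam left the mapped area
                if grid[i, j]:
                    ranges[b] = r             # first occupied cell along the beam
                    break
                r += resolution
        return ranges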

Improved UAV-Borne 3D Mapping by Fusing Optical and Laserscanner Data

ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 2013

In this paper, a new method for fusing optical and laserscanner data is presented for improved UAV-borne 3D mapping. We propose to equip an unmanned aerial vehicle (UAV) with a small platform which includes two sensors: a standard low-cost digital camera and a lightweight Hokuyo UTM-30LX-EW laserscanning device (210 g without cable). Initially, a calibration is carried out for the utilized devices. This involves a geometric camera calibration and the estimation of the position and orientation offset between the two sensors by lever-arm and bore-sight calibration. Subsequently, feature tracking is performed through the image sequence by considering extracted interest points as well as the projected 3D laser points. These 2D results are fused with the measured laser distances and fed into a bundle adjustment in order to obtain a Simultaneous Localization and Mapping (SLAM) solution. It is demonstrated that fusing optical and laserscanner data improves the precision of the pose estimation.
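The fusion step relies on bringing laser points into the camera geometry; a minimal sketch of that projection is given below, assuming the lever-arm/boresight calibration is available as a rotation R_cl and translation t_cl from the laser to the camera frame and K is a pinhole intrinsic matrix. Names are illustrative, not taken from the paper.

    import numpy as np

    def project_laser_points(points_laser, R_cl, t_cl, K):
        """points_laser: (N, 3) in the laser frame. Returns (M, 2) pixel
        coordinates of the points that lie in front of the camera."""
        p_cam = points_laser @ R_cl.T + t_cl     # laser -> camera frame
        p_cam = p_cam[p_cam[:, 2] > 0.0]         # keep points in front of camera
        uv = (K @ p_cam.T).T                     # pinhole projection
        return uv[:, :2] / uv[:, 2:3]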