Co-registration of aerial photogrammetric and LiDAR point clouds in urban environments using automatic plane correspondence
Related papers
Semi-Automatic Co-Registration of Photogrammetric and Lidar Data Using Buildings
ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences
In this work, the co-registration steps between LiDAR and photogrammetric DSM 3D data are analyzed, and a solution based on automated plane matching is proposed and implemented. For a robust 3D geometric transformation, both planes and points are used. Initially, planes are chosen as the co-registration primitives. To confine the search space for plane matching, a sequential automatic building matching is performed first. For matching buildings from the LiDAR and the photogrammetric data, a similarity objective function is formed based on the roof height difference (RHD), the 3D histogram of the building attributes, and the building boundary area. A region growing algorithm based on a Triangulated Irregular Network (TIN) is implemented to extract planes from both datasets. Next, an automatic successive process for identifying and matching corresponding planes from the two datasets is developed and implemented. It is based on the building boundary region and determines plane pairs through a robust matching process, thus eliminating outlier pairs. The selected correct plane pairs are the input data for the geometric transformation process. The 3D conformal transformation method, in conjunction with the attitude quaternion, is applied to obtain the transformation parameters from the normal vectors of the corresponding plane pairs. After mapping one dataset onto the coordinate system of the other, the Iterative Closest Point (ICP) algorithm is applied to the corresponding building point clouds to further refine the transformation solution. The results indicate that the combination of planes and points improves the co-registration outcomes.
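The rotation step of such a quaternion-based transformation from matched plane normals can be sketched with Horn's closed-form solution. This is a minimal illustration, not the paper's implementation: the function name and the plain-NumPy formulation are our own, and translation/scale recovery (which the paper derives from plane centroids and the conformal model) is omitted.

```python
import numpy as np

def rotation_from_normals(src_normals, dst_normals):
    """Estimate the rotation aligning matched unit plane normals
    (dst ~= R @ src) via Horn's quaternion method: the optimal unit
    quaternion is the eigenvector of a 4x4 symmetric matrix built
    from the cross-covariance of the normal pairs."""
    A = np.asarray(src_normals, float)   # (N, 3) source normals
    B = np.asarray(dst_normals, float)   # (N, 3) target normals
    M = A.T @ B                          # cross-covariance S_ab
    Sxx, Sxy, Sxz = M[0]
    Syx, Syy, Syz = M[1]
    Szx, Szy, Szz = M[2]
    N = np.array([
        [Sxx + Syy + Szz, Syz - Szy,       Szx - Sxz,       Sxy - Syx],
        [Syz - Szy,       Sxx - Syy - Szz, Sxy + Syx,       Szx + Sxz],
        [Szx - Sxz,       Sxy + Syx,       Syy - Sxx - Szz, Syz + Szy],
        [Sxy - Syx,       Szx + Sxz,       Syz + Szy,       Szz - Sxx - Syy],
    ])
    w, v = np.linalg.eigh(N)
    q = v[:, np.argmax(w)]               # quaternion (w, x, y, z)
    w0, x, y, z = q
    # Convert the unit quaternion to a rotation matrix
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - z*w0),    2*(x*z + y*w0)],
        [2*(x*y + z*w0),    1 - 2*(x*x + z*z), 2*(y*z - x*w0)],
        [2*(x*z - y*w0),    2*(y*z + x*w0),    1 - 2*(x*x + y*y)],
    ])
```

Because only normal directions are used, at least two non-parallel plane pairs are needed for a unique rotation; the translation must then come from point-like constraints such as plane centroids.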
Co-registration of lidar and photogrammetric data for updating building databases
Three-dimensional data are required for modeling urban environments and determining their spatio-temporal changes. The required data are mainly acquired using photogrammetric and lidar collection methods, either simultaneously or at different time epochs. In this paper we present the approach and preliminary results of co-registering these two types of data. This data alignment is based on a 3D surface transformation, where the lidar point cloud is transformed into the DSM reference system, thus permitting the accurate transformation of lidar data into the image space for determining any spatial changes. The transformation parameters are determined from corresponding planar surfaces extracted in each data set. A region growing algorithm based on the normal vector directions of the generated TIN data is applied to extract planar clusters, followed by plane fitting to derive the plane primitives. The rotation and translation parameters between the DSM and lidar data are then determined from the plane normal vectors and plane centroids. The transformed lidar point cloud is then back-projected onto the images used to derive the DSM, and its image locations serve to assess the quality of the co-registration.
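A normal-based region growing step of this kind can be illustrated as follows. This is a simplified sketch: the paper grows regions over TIN adjacency, whereas this toy version approximates adjacency with a k-nearest-neighbour graph over the raw points, and all thresholds and names are our own assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def grow_planar_regions(points, normals, k=8, angle_thresh_deg=5.0, min_size=30):
    """Toy normal-based region growing for planar segmentation:
    flood-fill over a k-NN graph, accepting a neighbour when its
    normal is nearly parallel to the current point's normal."""
    tree = cKDTree(points)
    _, knn = tree.query(points, k=k + 1)   # first neighbour is the point itself
    cos_thresh = np.cos(np.radians(angle_thresh_deg))
    labels = np.full(len(points), -1, dtype=int)
    region = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue
        stack, members = [seed], []
        labels[seed] = region
        while stack:
            i = stack.pop()
            members.append(i)
            for j in knn[i, 1:]:
                # Accept neighbours whose normals are nearly parallel
                if labels[j] == -1 and abs(normals[i] @ normals[j]) >= cos_thresh:
                    labels[j] = region
                    stack.append(j)
        if len(members) < min_size:
            labels[members] = -2           # too small: mark as unsegmented
        else:
            region += 1
    return labels
```

Plane fitting (e.g., least-squares or PCA on each labelled cluster) would then yield the plane primitives whose normals and centroids drive the transformation.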
Robust Building-Based Registration of Airborne Lidar Data and Optical Imagery on Urban Scenes
IGARSS 2019 - 2019 IEEE International Geoscience and Remote Sensing Symposium
The motivation of this paper is to address the problem of registering airborne LiDAR data and optical aerial or satellite imagery acquired from different platforms, at different times, with different points of view and levels of detail. In this paper, we present a robust registration method based on building regions, which are extracted from optical images using mean shift segmentation, and from LiDAR data using a 3D point cloud filtering process. The matching of the extracted building segments is then carried out using Graph Transformation Matching (GTM), which determines a common pattern of relative positions of segment centers. Thanks to this registration, the relative shifts between the data sets are significantly reduced, which enables a subsequent fine registration and a resulting high-quality data fusion.
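The GTM step named above can be sketched as follows: given putative segment-center correspondences, it iteratively drops the match that most disrupts the agreement between the two k-nearest-neighbour graphs. This is a minimal sketch of the general GTM idea, not the paper's implementation; the function names and the parameter k=4 are our own.

```python
import numpy as np

def graph_transformation_matching(P, Q, k=4):
    """Sketch of Graph Transformation Matching: P[i] <-> Q[i] are
    putative 2-D centre correspondences. Remove the match with the
    largest k-NN-graph residual until both graphs are identical."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    active = list(range(len(P)))

    def knn_adj(pts, idx, k):
        # Directed k-NN adjacency matrix over the active matches
        n = len(idx)
        A = np.zeros((n, n), dtype=int)
        for a in range(n):
            d = np.linalg.norm(pts[idx] - pts[idx[a]], axis=1)
            d[a] = np.inf                  # exclude self
            for b in np.argsort(d)[:k]:
                A[a, b] = 1
        return A

    while len(active) > k:
        Ap = knn_adj(P, active, k)
        Aq = knn_adj(Q, active, k)
        diff = np.abs(Ap - Aq)
        R = diff.sum(axis=1) + diff.sum(axis=0)   # per-match residual
        if R.max() == 0:
            break                          # graphs agree: inliers remain
        active.pop(int(np.argmax(R)))      # drop the most disruptive match
    return active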
ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences, 2013
Fusion of 3D airborne laser (LIDAR) data and terrestrial optical imagery can be applied in 3D urban modeling and model updating. The most challenging aspect of the fusion procedure is registering the terrestrial optical images on the LIDAR point clouds. In this article, we propose an approach for registering these two types of data from different sensor sources. We use iPhone camera images, taken in front of the urban structure of interest by the application user, and high-resolution LIDAR point clouds acquired by an airborne laser sensor. After obtaining the photo capture position and orientation from the iPhone photograph metafile, we automatically select the area of interest in the point cloud and transform it into a range image whose grayscale intensity levels encode the distance from the image acquisition position. We exploit local features for registering the iPhone image to the generated range image. In this article, we apply a registration process based on local feature extraction and graph matching. Finally, the registration result is used for facade texture mapping on the 3D building surface mesh generated from the LIDAR point cloud. Our experimental results indicate the possible usage of the proposed algorithm framework for 3D urban map updating and enhancement purposes.
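The point-cloud-to-range-image conversion described above can be sketched as a simple spherical projection with a z-buffer, where pixel intensity encodes distance from the acquisition position. This is a simplified sketch under our own assumptions; the paper's camera model (derived from the iPhone metadata) is richer than this.

```python
import numpy as np

def render_range_image(points, cam_pos, width=128, height=128):
    """Render a point cloud as a grayscale range image seen from
    cam_pos: each point is binned by azimuth/elevation relative to
    the camera, the nearest point per pixel wins (z-buffer), and
    distances are scaled to 0-255 intensities."""
    v = np.asarray(points, float) - np.asarray(cam_pos, float)
    r = np.linalg.norm(v, axis=1)
    az = np.arctan2(v[:, 1], v[:, 0])                 # -pi .. pi
    el = np.arcsin(np.clip(v[:, 2] / r, -1.0, 1.0))   # -pi/2 .. pi/2
    cols = ((az + np.pi) / (2 * np.pi) * (width - 1)).astype(int)
    rows = ((np.pi / 2 - el) / np.pi * (height - 1)).astype(int)
    img = np.full((height, width), np.inf)
    np.minimum.at(img, (rows, cols), r)               # keep nearest range
    img[np.isinf(img)] = 0.0
    m = img.max()
    return (img / m * 255).astype(np.uint8) if m > 0 else img.astype(np.uint8)
```

Standard local feature detectors can then be run on the resulting grayscale image to match it against the optical photograph.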
Assessment and comparison of registration algorithms between aerial images and laser point clouds
ISPRS, Symposium:’From sensor to imagery, 2006
ABSTRACT: Photogrammetry has been providing accurate coordinate measurements through the stereoscopic method for many years. LiDAR, on the other hand, is becoming the prime method for large-scale acquisition of elevation data due to its capability of directly measuring the 3D coordinates of a huge number of points. LiDAR can provide measurements in areas where traditional photogrammetric techniques encounter problems, mainly due to occlusions or shadows. However, LiDAR also has its limitations due to its inability of ...
Point Cloud Registration Based on Image Correspondences
International Journal of Heritage in the Digital Era, 2013
Since the early '80s, when Digital Photogrammetry was in its infancy, important progress has been made in the automation of the most common photogrammetric procedures. The main objective of this paper is the full automation of the 3D point cloud registration process. In particular, an alternative registration method is presented, based on corresponding points detected on the overlapping images that accompany each scan from a structured light scanner. The algorithm detects corresponding points by applying Feature Based Matching techniques. These are then interpolated directly to any given texture map, thereby achieving the transition to the corresponding 3D vertices. Finally, the algorithm computes the 3D rigid body transformation, which is applied to the 3D point clouds in order to transform one scan's reference system into the other's. Experimental results obtained by the proposed method are presented, evaluated, and compared with those obtained by a standard ICP implementation.
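The rigid body transformation computed in the final step has a standard closed-form least-squares solution (the Kabsch/Umeyama SVD method, without scale). A minimal sketch, assuming matched 3D point pairs and our own function name:

```python
import numpy as np

def rigid_body_transform(src, dst):
    """Closed-form least-squares rigid-body transform: find R, t
    minimising sum ||R @ src_i + t - dst_i||^2 over matched pairs."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    c_src, c_dst = src.mean(0), dst.mean(0)
    H = (src - c_src).T @ (dst - c_dst)      # cross-covariance of centred pairs
    U, _, Vt = np.linalg.svd(H)
    # Correction term guards against a reflection (det = -1) solution
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = c_dst - R @ c_src
    return R, t
```

Applying `R` and `t` to one scan's points maps its reference system into the other's, which is exactly the role this step plays in the pipeline above.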
Registration of 3D-LiDAR Data With Visual Imagery Using Shape Matching
Multi-modal image registration plays a central role in many applications. In this paper we establish a new method for multi-perspective registration of three-dimensional Light Detection and Ranging (LiDAR) data and visual imagery. The proposed method consists of three steps. In the first step, common features are identified and extracted from both modalities in the form of two-dimensional objects. We introduce a new method for LiDAR segmentation and object boundary extraction; similarly, we extract objects from visual images using the k-means segmentation algorithm. The second step is feature matching and the determination of corresponding points. In order to match the objects extracted from both modalities, a position-, scale-, and rotation-invariant shape signature is used to represent each object. A bipartite graph consisting of two sets, LiDAR objects and visual objects, is constructed. A histogram similarity metric is used to assign a matching cost between every pair of object signatures. This graph is used as input to the Hungarian algorithm to find the best matching. After determining the best matching objects, a set of matching points is selected using the objects with the lowest matching cost. In the third step, these points are used to compute the mapping between the two modalities. Experiments are conducted on synthetic data and on real-world data. We also introduce a new metric for computing the registration error using dynamic time warping.
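The bipartite assignment step can be sketched as follows: a cost matrix is built from a histogram distance between every LiDAR/image signature pair and solved with the Hungarian algorithm. This is an illustrative sketch only; the paper does not specify its exact similarity metric, so the chi-square distance below is one common stand-in, and the function name is our own.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_objects(lidar_sigs, image_sigs):
    """Match object shape signatures (rows = normalised histograms)
    by minimum-cost bipartite assignment (Hungarian algorithm)."""
    L = np.asarray(lidar_sigs, float)
    V = np.asarray(image_sigs, float)
    cost = np.zeros((len(L), len(V)))
    for i, h1 in enumerate(L):
        for j, h2 in enumerate(V):
            # Chi-square histogram distance as the matching cost
            denom = h1 + h2
            denom[denom == 0] = 1.0        # avoid 0/0 on empty bins
            cost[i, j] = 0.5 * np.sum((h1 - h2) ** 2 / denom)
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows, cols)), cost[rows, cols]
```

The pairs with the lowest matching cost would then supply the control points for the mapping between the two modalities.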
Co-registration of photogrammetric and LIDAR data: Methodology and case study
2009
Registration activities combine data from different sources in order to attain higher accuracy and derive more information than available from one source. The increasing availability of a wide variety of sensors capable of capturing high quality and complementary data requires parallel efforts for developing accurate and robust registration techniques. Currently, photogrammetric and LIDAR systems are being incorporated in a wide spectrum of mapping applications such as city modeling, surface reconstruction, and object recognition. Photogrammetric processing of overlapping imagery provides accurate information regarding object space break-lines in addition to an explicit semantic description of the photographed objects. On the other hand, LIDAR systems supply dense geometric surface information in the form of non-selective points. Considering the properties of photogrammetric and LIDAR data, it is clear that the two technologies provide complementary information. However, the synergic characteristics of both systems can be fully utilized only after successful registration of the photogrammetric and LIDAR data relative to a common reference frame. The registration methodology has to deal with three issues: registration primitives, transformation function, and similarity measure. This paper presents two methodologies for utilizing straight-line features derived from both datasets as the registration primitives. The first methodology directly incorporates the LIDAR lines as control information in the photogrammetric triangulation. The second methodology starts by generating a photogrammetric model relative to an arbitrary datum. Then, LIDAR features are used as control information for the absolute orientation of the photogrammetric model. In addition to the registration methodologies, the paper presents a comparative analysis between two approaches for extracting linear features from raw and processed/interpolated LIDAR data. 
Also, a comparative analysis between metric analog and amateur digital cameras within the registration process will be presented. The performance analysis is based on the quality of fit of the final alignment between the LIDAR and photogrammetric models.
An Experimental Study of a New Keypoint Matching Algorithm for Automatic Point Cloud Registration
ISPRS International Journal of Geo-Information
Light detection and ranging (LiDAR) systems mounted on a moving or stationary platform provide 3D point cloud data for various purposes. In applications where the area or object of interest needs to be measured twice or more with a shift, precise registration of the obtained point clouds is crucial for generating a consistent model from the combination of the overlapping point clouds. Automatic registration of the point clouds into a common coordinate system using the iterative closest point (ICP) algorithm or its variants is one of the most frequently applied methods in the literature, and a number of studies focus on improving the registration algorithms to achieve better results. This study proposed and tested a different approach for automatic keypoint detection and matching in the coarse registration of point clouds before fine registration with the ICP algorithm. In the suggested algorithm, the keypoints were matched considering their geometrical relations expressed by ...
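The fine-registration stage that such coarse keypoint matching feeds into is standard point-to-point ICP, which alternates nearest-neighbour correspondence with a closed-form rigid update. A minimal sketch (our own function names; real implementations add outlier rejection and point-to-plane variants):

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(src, dst, iters=50, tol=1e-8):
    """Minimal point-to-point ICP: returns R, t mapping src onto dst.
    Assumes a coarse alignment has already been applied."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    tree = cKDTree(dst)
    R, t = np.eye(3), np.zeros(3)
    cur, prev_err = src.copy(), np.inf
    for _ in range(iters):
        # 1. Correspondence: nearest neighbour in the target cloud
        dists, idx = tree.query(cur)
        # 2. Closed-form rigid update (Kabsch) for the matched pairs
        m_src, m_dst = cur, dst[idx]
        cs, cd = m_src.mean(0), m_dst.mean(0)
        H = (m_src - cs).T @ (m_dst - cd)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        dR = Vt.T @ D @ U.T
        dt = cd - dR @ cs
        cur = cur @ dR.T + dt
        R, t = dR @ R, dR @ t + dt          # compose with previous estimate
        err = dists.mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R, t
```

ICP only converges to the correct minimum when the initial misalignment is small, which is precisely why the coarse keypoint-based step above matters.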
Automatic integration of 3-D point clouds from UAS and airborne LiDAR platforms
Journal of Unmanned Vehicle Systems, 2017
An approach to automatically co-register 3-D point cloud surfaces from Unmanned Aerial Systems (UASs) and Light Detection and Ranging (LiDAR) systems is presented. A 3-D point cloud co-registration method is proposed to automatically compute all transformation parameters without the need for initial, approximate values. The approach uses a pair of point cloud height map images for automated feature point correspondence. Initially, keypoints are extracted on the height map images, and then a log-polar descriptor is used as an attribute for matching the keypoints via a Euclidean distance similarity measure. Our study area is the Peace-Athabasca Delta (PAD) situated in northeastern Alberta, Canada. The PAD is a world heritage site; therefore, regular monitoring of this wetland is important. Our method automatically co-registers UAS point clouds with airborne LiDAR data collected over the PAD. Together with UAS data acquisition, our approach can potentially be used in the future to facilitate automated co-registration of heterogeneous data throughout the PAD region. Reported transformation parameter accuracies are: a scale error of 0.02, an average rotation error of 0.123° and an average translation error of 0.237 m.
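A log-polar keypoint descriptor of the kind used above can be sketched by averaging height-map values over log-radial/angular bins around the keypoint, after which descriptors are compared by plain Euclidean distance. This is a toy sketch under our own assumptions (bin counts, window radius, and function name are not from the paper):

```python
import numpy as np

def log_polar_descriptor(image, keypoint, n_r=8, n_theta=16, r_max=32):
    """Toy log-polar descriptor for a height-map keypoint: mean pixel
    value per (log-radius, angle) bin in a window around the keypoint."""
    h, w = image.shape
    cy, cx = keypoint
    desc = np.zeros((n_r, n_theta))
    count = np.zeros((n_r, n_theta))
    ys, xs = np.mgrid[-r_max:r_max + 1, -r_max:r_max + 1]
    r = np.hypot(ys, xs)
    mask = (r > 0) & (r <= r_max)
    # Log-radius and angle bin for every pixel offset in the window
    rbin = np.clip((np.log(r[mask]) / np.log(r_max) * n_r).astype(int), 0, n_r - 1)
    tbin = ((np.arctan2(ys[mask], xs[mask]) + np.pi)
            / (2 * np.pi) * n_theta).astype(int) % n_theta
    yy = np.clip(cy + ys[mask], 0, h - 1)
    xx = np.clip(cx + xs[mask], 0, w - 1)
    np.add.at(desc, (rbin, tbin), image[yy, xx])
    np.add.at(count, (rbin, tbin), 1)
    count[count == 0] = 1.0
    return (desc / count).ravel()
```

Matching then reduces to nearest-neighbour search in descriptor space, and the matched keypoint pairs determine the scale, rotation, and translation parameters reported above.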