Building Facade Modeling From 3D Urban Point Clouds

A 3D city model represents the earth's surface and urban objects such as buildings, street furniture, and vegetation. As a component of the 3D city model, a building model consists of several sub-components, namely the building footprint, building height, roof structure, facade, and interior structure. The last two sub-components are necessary for a 3D city model with a high level of detail. Many data sources can be used to model a building facade; one of the most popular is the urban 3D point cloud, acquired by laser scanning or photogrammetry. However, extracting building facade information from 3D point clouds and constructing the facade geometric model remains a challenging task because of the enormous data volume, the complexity and diversity of the data, the need to preserve shape regularities, and the demand for computational efficiency. As a solution, this thesis presents a complete workflow for constructing building facade models from urban 3D point clouds. The workflow segments the facade points from the urban 3D point clouds, finds primitive geometries on the building facade points, extracts and regularizes the features of these primitives, geometrically models the building facade, and visualizes the generated facade model.

The building facade points are extracted from the 3D point clouds using geometrical and morphological operations. All operations are performed on a pixel-based representation; the extraction is therefore efficient, flexible, and simple to implement, even when the data are unstructured and comprise a large number of points. In the following step, planar geometries are detected within the building facade points. A combination of eigen-analysis, the normal distributions transform, and voxel growing is used for this purpose. This method takes advantage of the strengths of the three techniques while reducing their drawbacks, and it detects planes locally, avoiding the weaknesses of global plane segmentation approaches. The idea of the improved slicing method is then adapted to detect the boundary points and the openings. The method is modified during the implementation so that it works effectively and preserves the topological information among the plane, the bounded objects, and the boundary points; this information benefits the remaining stages.

Boundary lines are detected with the 2D Hough transform. Spurious lines, a side effect of its global approach, are avoided by exploiting the above-mentioned topological information. The detected lines are then used to regularize the bounded shapes and to generate the corner points. The resulting line segments and corner points are represented as graphs so that shape errors can be easily detected and refined. In the next step, the properties of the cyclic graph are exploited to geometrically model the bounded shapes as inner and outer polygons. The topological relations between inner and outer polygons are then modeled to obtain the complete polygonal face representation. For visualization, these polygonal faces are exported in the Well-Known Text format.

The proposed approach is applied to two different datasets, and the implementation and the obtained results are elaborated in detail. A comprehensive evaluation is conducted in the next chapter, while the conclusions and possible outlooks are discussed in the final chapter.
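As an illustration of the local plane detection step, the sketch below shows how a plane can be estimated for the points inside a voxel by eigen-analysis of their covariance matrix. It is a minimal Python example on synthetic data, assuming NumPy; it is not the thesis implementation and omits the normal distributions transform and voxel growing parts.

```python
import numpy as np

def fit_local_plane(points):
    """Estimate a plane from an (N, 3) array of points via eigen-analysis.

    Returns the centroid, the unit normal (eigenvector belonging to the
    smallest eigenvalue of the covariance matrix), and a simple planarity score.
    """
    centroid = points.mean(axis=0)
    centered = points - centroid
    cov = centered.T @ centered / len(points)
    eigvals, eigvecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
    normal = eigvecs[:, 0]
    # Planarity: middle eigenvalue clearly larger than the smallest one.
    planarity = (eigvals[1] - eigvals[0]) / max(eigvals[2], 1e-12)
    return centroid, normal, planarity

# Hypothetical voxel content: points scattered close to the plane z = 0.
rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 1.0, size=(500, 3))
pts[:, 2] *= 0.01                               # flatten into a near-planar slab
centroid, normal, planarity = fit_local_plane(pts)
print(normal, planarity)                        # normal close to (0, 0, +/-1)
```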
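The final polygonal faces, each consisting of an outer boundary and inner polygons for the openings, are written out as Well-Known Text. The snippet below is a minimal sketch of such an export using the shapely library with hypothetical coordinates (a facade face with one window opening); the thesis does not prescribe a particular library.

```python
from shapely.geometry import Polygon

# Hypothetical face in a local facade-plane coordinate system:
# the outer ring is the facade boundary, the inner ring is a window opening.
outer_ring = [(0.0, 0.0), (10.0, 0.0), (10.0, 6.0), (0.0, 6.0)]
window = [(2.0, 2.0), (4.0, 2.0), (4.0, 4.0), (2.0, 4.0)]

face = Polygon(shell=outer_ring, holes=[window])
print(face.wkt)
# e.g. POLYGON ((0 0, 10 0, 10 6, 0 6, 0 0), (2 2, 4 2, 4 4, 2 4, 2 2))
```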