Performance comparison of 3D correspondence grouping algorithm for 3D plant point clouds
Related papers
3D Plant Phenotyping: All You Need is Labelled Point Cloud Data
Computer Vision – ECCV 2020 Workshops, 2020
With the advancement of modern digital phenotyping technology, the demand for annotated datasets is increasing, both for training machine learning algorithms and for evaluating 3D phenotyping systems. While a few 2D datasets have been proposed in the community in the last few years, very little attention has been paid to the construction of annotated 3D point cloud datasets. There are several challenges associated with the creation of such annotated datasets. Acquiring the data requires instruments with good precision and accuracy levels. Reconstruction of a full 3D model from multiple views is a challenging task considering plant architecture complexity and plasticity, as well as occlusion and missing data problems. In addition, manual annotation of the data is a cumbersome task that cannot easily be automated. In this context, the design of synthetic datasets can play an important role. In this paper, we propose an idea of automatic generation of synthetic point cloud data using virtual plant models. Our approach leverages the strength of the classical procedural approach (like L-systems) to generate the virtual models of plants, and then performs point sampling on the surface of the models. By applying stochasticity in the procedural model, we are able to generate a large number of diverse plant models and the corresponding point cloud data in a fully automatic manner. The goal of this paper is to present a general strategy to generate annotated 3D point cloud datasets from virtual models. The code (along with some generated point cloud models) is available at: https://gitlab.inria.fr/mosaic/publications/lpy2pc.
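The point-sampling step this abstract describes can be illustrated with a short, self-contained sketch (not the authors' lpy2pc code; the mesh here is a toy stand-in for an L-system model): triangles are picked with probability proportional to their area, then points are drawn with uniform barycentric coordinates.

```python
import numpy as np

def sample_points_on_mesh(vertices, faces, n_points, seed=None):
    """Uniformly sample points on a triangle mesh surface.

    Triangles are chosen with probability proportional to their area,
    then a point is drawn with uniform barycentric coordinates
    (square-root trick to avoid clustering near one vertex).
    """
    rng = np.random.default_rng(seed)
    tris = vertices[faces]                                  # (F, 3, 3)
    # Triangle areas via the cross product of two edge vectors.
    cross = np.cross(tris[:, 1] - tris[:, 0], tris[:, 2] - tris[:, 0])
    areas = 0.5 * np.linalg.norm(cross, axis=1)
    idx = rng.choice(len(faces), size=n_points, p=areas / areas.sum())
    u = np.sqrt(rng.random(n_points))
    v = rng.random(n_points)
    a, b, c = tris[idx, 0], tris[idx, 1], tris[idx, 2]
    return (1 - u)[:, None] * a + (u * (1 - v))[:, None] * b + (u * v)[:, None] * c

# Example: sample 1000 points on a unit square made of two triangles.
verts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], float)
faces = np.array([[0, 1, 2], [0, 2, 3]])
pts = sample_points_on_mesh(verts, faces, 1000, seed=0)
```

In a full pipeline the same routine would be applied to every organ mesh produced by the procedural model, carrying the organ label along with each sampled point to obtain the annotation.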
Proceedings of Computer Vision Problems in Plant Phenotyping, 2014
Functional-structural modeling and high-throughput phenomics demand tools for 3D measurements of plants. In this work, structure from motion is employed to estimate the position of a hand-held camera, moving around plants, and to recover a sparse 3D point cloud sampling the plants’ surfaces. Multiple-view stereo is employed to extend the sparse model to a dense 3D point cloud. The model is automatically segmented by spectral clustering, properly separating the plant’s leaves whose surfaces are estimated by fitting trimmed B-splines to their 3D points. These models are accurate snapshots for the aerial part of the plants at the image acquisition moment and allow the measurement of different features of the specimen phenotype. Such state-of-the-art computer vision techniques are able to produce accurate 3D models for plants using data from a single free moving camera, properly handling occlusions and diversity in size and structure for specimens presenting sparse canopies. A data set formed by the input images and the resulting camera poses and 3D points clouds is available, including data for sunflower and soybean specimens.
Novel Low Cost 3D Surface Model Reconstruction System for Plant Phenotyping
Journal of Imaging
Accurate high-resolution three-dimensional (3D) models are essential for a non-invasive analysis of phenotypic characteristics of plants. Previous limitations in 3D computer vision algorithms have led to a reliance on volumetric methods or expensive hardware to record plant structure. We present an image-based 3D plant reconstruction system that can be achieved by using a single camera and a rotation stand. Our method is based on the structure from motion method, with a SIFT image feature descriptor. In order to improve the quality of the 3D models, we segmented the plant objects based on the PlantCV platform. We also deduced the optimal number of images needed for reconstructing a high-quality model. Experiments showed that an accurate 3D model of the plant could be successfully reconstructed by our approach. This 3D surface model reconstruction system provides a simple and accurate computational platform for non-destructive plant phenotyping.
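The feature-matching stage underlying SIFT-based structure from motion can be sketched in pure NumPy with Lowe's ratio test; the descriptors below are tiny synthetic vectors rather than real SIFT output, and the 0.75 ratio is the conventional (assumed) threshold.

```python
import numpy as np

def match_descriptors(desc1, desc2, ratio=0.75):
    """Lowe's ratio-test matching between two descriptor sets.

    For each descriptor in desc1, find its two nearest neighbours in
    desc2 (Euclidean distance) and keep the match only when the best
    distance is clearly smaller than the second best.
    """
    d2 = ((desc1[:, None, :] - desc2[None, :, :]) ** 2).sum(-1)  # (N1, N2)
    order = np.argsort(d2, axis=1)
    best, second = order[:, 0], order[:, 1]
    rows = np.arange(len(desc1))
    keep = np.sqrt(d2[rows, best]) < ratio * np.sqrt(d2[rows, second])
    return [(int(i), int(best[i])) for i in np.flatnonzero(keep)]

# Tiny example: one unambiguous match, one ambiguous (rejected) one.
a = np.array([[1.0, 0, 0, 0], [0, 1.0, 0, 0]])
b = np.array([[1.0, 0.05, 0, 0], [0, 0, 1.0, 0], [0, 0, 0, 1.0]])
matches = match_descriptors(a, b)  # -> [(0, 0)]
```

The surviving matches across image pairs are what the SfM stage triangulates into the sparse point cloud.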
3D Scanning System for Automatic High-Resolution Plant Phenotyping
2016 International Conference on Digital Image Computing: Techniques and Applications (DICTA), 2016
Thin leaves, fine stems, self-occlusion, and non-rigid, slowly changing structures make plants difficult for three-dimensional (3D) scanning and reconstruction, two critical steps in automated visual phenotyping. Many current solutions such as laser scanning, structured light, and multi-view stereo can struggle to acquire usable 3D models because of limitations in scanning resolution and calibration accuracy. In response, we have developed a fast, low-cost, 3D scanning platform to image plants on a rotating stage with two tilting DSLR cameras centred on the plant. This uses new methods of camera calibration and background removal to achieve high-accuracy 3D reconstruction. We assessed the system's accuracy using a 3D visual hull reconstruction algorithm applied to 2 plastic models of dicotyledonous plants, 2 sorghum plants and 2 wheat plants across different sets of tilt angles. Scan times ranged from 3 minutes (to capture 72 images using 2 tilt angles), to 30 minutes (to capture 360 images using 10 tilt angles). The leaf lengths, widths, areas and perimeters of the plastic models were measured manually and compared to measurements from the scanning system: results were within 3-4% of each other. The 3D reconstructions obtained with the scanning system show excellent geometric agreement with all six plant specimens, even plants with thin leaves and fine stems.
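The visual hull idea used for the accuracy assessment can be demonstrated with a toy voxel-carving sketch: a voxel survives only if it projects inside the silhouette of every view. The orthographic projections and the box silhouettes below are deliberate simplifications, not the paper's calibrated setup.

```python
import numpy as np

def visual_hull(silhouettes, projections, grid):
    """Carve a voxel grid using binary silhouettes.

    A voxel survives only if it projects inside the silhouette of
    every view (orthographic projections here, for simplicity).
    """
    keep = np.ones(len(grid), dtype=bool)
    for sil, proj in zip(silhouettes, projections):
        uv = proj(grid)                   # (N, 2) integer pixel coords
        keep &= sil[uv[:, 1], uv[:, 0]]   # silhouette lookup per voxel
    return grid[keep]

# Toy setup: a 16x16x16 grid and two orthographic views of a box.
n = 16
axes = np.arange(n)
grid = np.stack(np.meshgrid(axes, axes, axes, indexing="ij"), -1).reshape(-1, 3)

sil_top = np.zeros((n, n), bool); sil_top[4:12, 4:12] = True   # seen from +z
sil_side = np.zeros((n, n), bool); sil_side[4:12, 4:12] = True  # seen from +y
projections = [
    lambda g: g[:, [0, 1]],   # top view drops z
    lambda g: g[:, [0, 2]],   # side view drops y
]
hull = visual_hull([sil_top, sil_side], projections, grid)  # 8x8x8 block
```

With real cameras the projection functions would apply the calibrated intrinsics and extrinsics, which is exactly where the paper's calibration accuracy matters.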
Computers and Electronics in Agriculture, 2019
This paper presents an approach to estimating the plant or fruit size from objects on the digital images of natural scenes. Two images are taken from camera positions with a known distance between them. For a selected object and a pair of its boundary pixels in the first image, a corresponding pixel pair is found in the second image. The Euclidean distances within the two pairs are compared via similar triangles and combined with the known distance between the camera positions to estimate the size of the object in metric units. A dense correspondence among images is first required. It can be determined by joint information of multiple 2D views of the same scene utilizing image registration techniques. These are used to align corresponding image points, without having to rely on distinct features, such as geometric properties. This is most advantageous for weakly textured areas, e.g. uniform fruit shades, where many methods for corresponding point detection fail. The pixel neighbourhoods of candidate corresponding pairs are compared by template matching to verify their similarity and to select the most reliable correspondences. A stratified rigid-to-elastic registration approach generates a deformation matrix whose elements define translational vectors on a pixel basis. The accuracy of correspondence, checked by colour template matching, is assessed by mean absolute error, D, between corresponding pixel-pair intensities within the templates. By matching 26 random fruit image pairs, the proposed approach detected 1390.5 ± 1129.8 reliable corresponding points on the surface of the fruit with an accuracy of ≤ 5 D, on average. This corresponds to approximately 18% of all object pixels and considerably exceeds the results of comparable methods for high-density correspondence detection, such as correlation-based correspondence, non-rigid dense correspondence (NRDC), and scale invariant feature transform (SIFT).
The number and quality of corresponding points obtained by the proposed algorithm turned out to be reliable and sufficiently robust for an accurate estimation of distances between objects and camera positions (with an overall accuracy of 0.11 m ± 0.06 m) and height of plants (with an accuracy of 0.14 m ± 0.1 m) when taking two similar outdoor photographs of a scene.
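The geometry behind this two-view size estimation reduces to the standard pinhole relations: depth follows from the disparity of a corresponding point and the known camera displacement, and a pixel extent at that depth converts to metres through the focal length. A minimal sketch, with all numeric values purely illustrative:

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Depth of a point seen by two parallel pinhole cameras:
    Z = f * B / d (similar triangles)."""
    return f_px * baseline_m / disparity_px

def metric_size(f_px, depth_m, extent_px):
    """Convert a pixel extent at a known depth into metres:
    S = s_px * Z / f."""
    return extent_px * depth_m / f_px

f = 1000.0   # focal length in pixels (assumed)
B = 0.5      # known distance between the two camera positions, metres
d = 50.0     # disparity of the fruit centre between the images, pixels
Z = depth_from_disparity(f, B, d)   # -> 10.0 m
size = metric_size(f, Z, 20.0)      # a 20 px boundary-pair extent -> 0.2 m
```

The paper's contribution lies in finding the dense, reliable correspondences that make the disparity `d` measurable on weakly textured fruit surfaces; once a pixel pair is matched, the size computation itself is this elementary.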
Accurate Multi-View Stereo 3D Reconstruction for Cost-Effective Plant Phenotyping
Lecture Notes in Computer Science, 2014
Phenotyping, which underpins much of plant biology and breeding, involves the measurement of characteristics or traits. Traditionally, this has often been destructive and/or subjective, but the dynamic objective measurement of traits as they change in response to genetic mutation or environmental influences is an important goal. 3-D imaging technologies are increasingly incorporated into mass produced consumer goods (3D laser scanning, structured light and digital photography) and may represent a cost-effective alternative to current commercial phenotyping platforms. We evaluate their performance, cost and practicability for plant phenotyping and present a 3D reconstruction method for plants from multi-view images acquired with domestic quality cameras. We exploit an efficient Structure-From-Motion followed by stereo matching and depth-map merging processes. Experimental results show that the proposed method is flexible, adaptable and inexpensive, and promising as a generalized framework for phenotyping various plant species.
Measuring Ground Truth for 3D Reconstruction of Plants
2018
3D object reconstruction from camera images is a standard problem of computer vision with a multitude of solutions [2, 8, 11]. However, accurate and reliable automated measurements of 3D plant shape are not yet available for complex plants despite image-based analysis of plants being well established (see e.g. [1, 5, 10, 13] and many more). Ground truth data with accuracy bounds for multiview 3D object reconstruction is publicly available [3, 12]. However, for plant reconstruction no such data can be found in the public domain. Therefore, the primary objective of this research is to develop a method for generating 3D point clouds of plants, such that they can be used as ground truth for other experiments. Clearly, better ground truth data will enable better analysis [4]. Here, we use a stereo-camera-based structured light scanner (LMI HDI Advance R4x [6] see Figure 1) for capturing depth information from five different specimens (Figure 1). This scanner has been tested to be well su...
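Evaluating a reconstruction against such ground-truth scans typically comes down to a cloud-to-cloud distance. A common (assumed, not from this paper) choice is the symmetric Chamfer distance, sketched here in brute-force NumPy:

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric mean nearest-neighbour distance between two clouds.

    Brute-force O(N*M) pairwise distances; fine for small clouds,
    a k-d tree would be used for real scans.
    """
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

# A tiny cloud compared against a copy shifted by 0.1 along x.
p = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0]])
q = p + np.array([0.1, 0.0, 0.0])
err = chamfer_distance(p, q)  # 0.1 in each direction -> 0.2 total
```

Reporting this distance together with the scanner's accuracy bounds is what turns a captured cloud into usable ground truth.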
Multi-scale Space-time Registration of Growing Plants
2021 International Conference on 3D Vision (3DV), 2021
In this paper, we introduce a new method for the space-time registration of a growing plant that is based on matching the plant at different geometric scales. The proposed method starts with the creation of a topological skeleton of the plant at each time step. This skeleton is then used to segment the plant into parts that we call branches. Then these branches are further divided into smaller segments that possess a simple geometric structure. These segments are matched between two time steps using a random forest classifier based on their topological and geometric features. Then, for each pair of segments matched, a point-wise registration is devised using a non-rigid registration method based on a local ICP. We applied our method to various types of plants, including arabidopsis, tomato, and maize. We established three different metrics for 3D point-wise shape correspondence to test the accuracy, continuity, and cycle consistency of the mapping. We then compared our method with the state-of-the-art. Our results show that our approach achieves better or similar results with a shorter running time.
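The local-ICP building block used on each matched segment pair can be sketched as a minimal point-to-point ICP: alternate nearest-neighbour matching with a closed-form Kabsch fit. This is a generic illustration of ICP, not the paper's non-rigid variant, and the toy lattice data is assumed.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst
    (Kabsch algorithm via SVD of the cross-covariance)."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=20):
    """Minimal point-to-point ICP with brute-force nearest neighbours."""
    cur = src.copy()
    for _ in range(iters):
        d = np.linalg.norm(cur[:, None] - dst[None, :], axis=-1)
        matched = dst[d.argmin(axis=1)]   # closest dst point for each cur
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
    return cur

# Recover a small rotation + translation applied to a toy segment.
g = np.linspace(0.0, 1.0, 3)
dst = np.stack(np.meshgrid(g, g, g, indexing="ij"), -1).reshape(-1, 3)
theta = 0.05
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0, 0, 1]])
src = dst @ R.T + 0.02
aligned = icp(src, dst)
```

In the paper's setting this local alignment is run per segment, so each segment may move differently, which is what makes the overall registration non-rigid.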
Localized Registration of Point Clouds of Botanic Trees
2012
A global registration is often insufficient for estimating dendrometric characteristics of trees because individual branches of the same tree may exhibit different positions between two scanning procedures. Therefore, we introduce a localized approach to register point clouds of botanic trees. Given two roughly registered point clouds PC1 and PC2 of a tree, we apply a skeletonization method to both point clouds.
Machine Vision System for 3D Plant Phenotyping
IEEE/ACM Transactions on Computational Biology and Bioinformatics
Machine vision for plant phenotyping is an emerging research area for producing high throughput in agriculture and crop science applications. Since 2D based approaches have their inherent limitations, 3D plant analysis is becoming state of the art for current phenotyping technologies. We present an automated system for analyzing plant growth in indoor conditions. A gantry robot system is used to perform scanning tasks in an automated manner throughout the lifetime of the plant. A 3D laser scanner mounted as the robot's payload captures the surface point cloud data of the plant from multiple views. The plant is monitored from the vegetative to reproductive stages in light/dark cycles inside a controllable growth chamber. An efficient 3D reconstruction algorithm is used, by which multiple scans are aligned together to obtain a 3D mesh of the plant, followed by surface area and volume computations. The whole system, including the programmable growth chamber, robot, scanner, data transfer and analysis is fully automated in such a way that a naive user can, in theory, start the system with a mouse click and get back the growth analysis results at the end of the lifetime of the plant with no intermediate intervention. As evidence of its functionality, we show and analyze quantitative results of the rhythmic growth patterns of the dicot Arabidopsis thaliana (L.), and the monocot barley (Hordeum vulgare L.) plants under their diurnal light/dark cycles.
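The surface area and volume computations on the final mesh can be sketched directly: triangle areas from cross products, and enclosed volume from the signed-tetrahedron (divergence theorem) formula. This is a generic mesh-measurement sketch, not the paper's implementation, and it assumes a watertight, consistently oriented mesh.

```python
import numpy as np

def mesh_area_volume(vertices, faces):
    """Surface area and enclosed volume of a closed triangle mesh.

    Area: half the norm of each triangle's edge cross product.
    Volume: sum of signed tetrahedra (v0, v1, v2, origin), valid for
    watertight meshes with consistent outward-facing winding.
    """
    t = vertices[faces]                                     # (F, 3, 3)
    cross = np.cross(t[:, 1] - t[:, 0], t[:, 2] - t[:, 0])
    area = 0.5 * np.linalg.norm(cross, axis=1).sum()
    volume = np.einsum("ij,ij->i", t[:, 0], cross).sum() / 6.0
    return area, abs(volume)

# Sanity check on a unit cube: area 6, volume 1.
v = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], float)
f = np.array([
    [0, 2, 6], [0, 6, 4],   # bottom (z = 0)
    [1, 5, 7], [1, 7, 3],   # top    (z = 1)
    [0, 1, 3], [0, 3, 2],   # x = 0
    [4, 6, 7], [4, 7, 5],   # x = 1
    [0, 4, 5], [0, 5, 1],   # y = 0
    [2, 3, 7], [2, 7, 6],   # y = 1
])
area, vol = mesh_area_volume(v, f)
```

Tracking these two scalars over successive scans is what yields the growth curves the paper analyzes.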