A combined texture-shape descriptor for enhanced 3D feature matching

Density-Based 3D Shape Descriptors

EURASIP Journal on Advances in Signal Processing, 2007

We propose a novel probabilistic framework for the extraction of density-based 3D shape descriptors using kernel density estimation. Our descriptors are derived from the probability density functions (pdf) of local surface features characterizing the 3D object geometry. Assuming that the shape of the 3D object is represented as a mesh consisting of triangles with arbitrary size and shape, we provide efficient means to approximate the moments of geometric features on a triangle basis. Our framework produces a number of 3D shape descriptors that prove to be quite discriminative in retrieval applications. We test our descriptors and compare them with several other histogram-based methods on two 3D model databases, Princeton Shape Benchmark and Sculpteur, which are fundamentally different in semantic content and mesh quality. Experimental results show that our methodology not only improves the performance of existing descriptors, but also provides a rigorous framework to advance and to test new ones.
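
As a hedged illustration of the idea above (not the authors' implementation), the sketch below builds a density-based descriptor by kernel density estimation over a single local feature; the radial-distance feature, the Gaussian kernel, the fixed bandwidth and the 64 evaluation targets are all illustrative assumptions.

import numpy as np

def radial_distance_feature(points):
    # 1D local feature: distance of each surface sample to the centroid (illustrative choice).
    centroid = points.mean(axis=0)
    return np.linalg.norm(points - centroid, axis=1)

def kde_descriptor(feature_values, n_targets=64, bandwidth=0.05):
    # Evaluate a Gaussian kernel density estimate of the feature at fixed targets in [0, 1].
    f = feature_values / (feature_values.max() + 1e-12)   # crude scale normalization
    targets = np.linspace(0.0, 1.0, n_targets)
    diff = (targets[:, None] - f[None, :]) / bandwidth
    kde = np.exp(-0.5 * diff**2).sum(axis=1)
    return kde / (kde.sum() + 1e-12)                       # descriptor sums to 1

# Usage: points would be sampled from the mesh surface; random here for illustration.
points = np.random.rand(2000, 3)
descriptor = kde_descriptor(radial_distance_feature(points))
print(descriptor.shape)  # (64,)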

Integrating shape and color cues for textured 3D object recognition

Proceedings of the 2013 IEEE 8th Conference on Industrial Electronics and Applications, ICIEA 2013, 2013

3D object recognition is a fundamental research topic; however, shape-only feature descriptors have been the main focus of research. With the availability of low-cost range-plus-color sensors, color-based descriptors have attracted increasing attention lately. In this paper we present novel 3D object recognition algorithms which use not only shape but also color cues. We first extend our previously proposed shape-only Rotational Projection Statistics (hereafter denoted S-RoPS) to obtain a Color-only RoPS (C-RoPS) feature descriptor. The C-RoPS descriptor is based on the color space instead of the 3D shape coordinates. We then use feature-level and decision-level fusion approaches to combine the shape and color information. Experiments were performed on two popular datasets. The results show that decision-level fusion achieves better results than either modality used independently. The performance of C-RoPS was further tested using various color spaces, e.g., RGB, HSV, YCbCr and CIELAB.
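
The following sketch contrasts the two fusion strategies mentioned above; it is a minimal illustration, assuming generic descriptor vectors and Euclidean matching scores rather than the actual S-RoPS/C-RoPS features.

import numpy as np

def feature_level_fusion(shape_desc, color_desc):
    # Feature-level fusion: concatenate shape and color descriptors into one vector.
    return np.concatenate([shape_desc, color_desc])

def decision_level_fusion(score_shape, score_color, w=0.5):
    # Decision-level fusion: combine per-modality matching scores (lower = better).
    return w * score_shape + (1.0 - w) * score_color

# Toy usage with hypothetical 135-D descriptors of a probe and a candidate model.
probe_s, probe_c = np.random.rand(135), np.random.rand(135)
model_s, model_c = np.random.rand(135), np.random.rand(135)

fused_dist = np.linalg.norm(feature_level_fusion(probe_s, probe_c)
                            - feature_level_fusion(model_s, model_c))
decision = decision_level_fusion(np.linalg.norm(probe_s - model_s),
                                 np.linalg.norm(probe_c - model_c))
print(fused_dist, decision)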

A Unifying Computational Framework for Histogram-based 3D Shape Descriptors

2000

In this paper, we propose a novel framework for 3D histogram-based shape descriptors based on modeling the probability density functions of geometrical quantities, shape functions which measure 3D surface properties, as mixtures of Gaussians. We make use of the special geometry of triangular meshes in 3D, provide efficient means to approximate the moments of shape functions per triangle, and as

A framework for histogram-induced 3D descriptors

2006 14th European Signal Processing Conference, 2006

We present a novel framework to describe 3D shapes, based on modeling the probability density of their shape functions. These functions are conceived to reflect the 3D geometrical properties of the shape surfaces. The densities are modeled as mixtures of Gaussians, each component being the distribution induced by a mesh triangle. A fast algorithm is developed that exploits both the special geometry of 3D triangles, via numerical approximations, and a transform technique. We test and compare the proposed descriptors to other histogram-based methods on two different 3D model databases. It is shown that the proposed 3D shape descriptors outperform all of their competitors except one in retrieval applications. Furthermore, our methodology provides a fertile ground to introduce and test new descriptors.
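
A minimal sketch of the mixture-of-Gaussians idea, under strong simplifying assumptions: each triangle contributes one isotropic Gaussian whose mean is the shape-function value at the triangle centroid and whose weight is the triangle area; the fixed bandwidth is illustrative, and the authors' per-triangle moment approximations and transform-based speed-up are not reproduced.

import numpy as np

def triangle_mixture_pdf(vertices, faces, shape_fn, targets, sigma=0.05):
    # Evaluate the Gaussian-mixture density of shape_fn at the given target values.
    tri = vertices[faces]                                   # (T, 3, 3) triangle corners
    centroids = tri.mean(axis=1)
    areas = 0.5 * np.linalg.norm(
        np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0]), axis=1)
    weights = areas / areas.sum()                           # one mixture weight per triangle
    means = shape_fn(centroids)                             # one component mean per triangle
    diff = (targets[:, None] - means[None, :]) / sigma
    comp = np.exp(-0.5 * diff**2) / (sigma * np.sqrt(2 * np.pi))
    return comp @ weights                                   # density at each target

# Usage with a tiny two-triangle mesh; the shape function is the radial distance.
V = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]], dtype=float)
F = np.array([[0, 1, 2], [1, 3, 2]])
pdf = triangle_mixture_pdf(V, F, lambda p: np.linalg.norm(p, axis=1),
                           np.linspace(0, 2, 32))
print(pdf.shape)  # (32,)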

Boosting 3D LBP-based face recognition by fusing shape and texture descriptors on the mesh

2015 IEEE International Conference on Image Processing (ICIP), 2015

In this paper, we present a novel approach for fusing shape and texture local binary patterns (LBP) for 3D face recognition. Using a recently proposed framework [1], we compute LBP directly on the face mesh surface, then construct a grid of regions on the facial surface that can accommodate both global and partial descriptions. Compared to its depth-image counterpart, our approach is distinguished by the following features: a) it inherits the intrinsic advantages of the mesh surface (e.g., preservation of the full geometry); b) it does not require normalization; c) it can accommodate partial matching. In addition, it allows early-level fusion of the texture and shape modalities. Through experiments conducted on the BU-3DFE and Bosphorus databases, we assess different variants of our approach with regard to facial expressions and missing data.
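
As an illustration of the early-level fusion step (not the paper's code), the sketch below concatenates per-region LBP histograms from the shape and texture channels into one signature; the region grid and the mesh-LBP codes themselves are assumed to be given.

import numpy as np

def region_histogram(codes, n_bins=256):
    # Normalized histogram of LBP codes for one facial region.
    return np.bincount(codes, minlength=n_bins) / max(len(codes), 1)

def fused_face_signature(shape_codes_per_region, texture_codes_per_region):
    # Early-level fusion: concatenate shape and texture LBP histograms region by region.
    parts = []
    for s_codes, t_codes in zip(shape_codes_per_region, texture_codes_per_region):
        parts.append(region_histogram(s_codes))
        parts.append(region_histogram(t_codes))
    return np.concatenate(parts)

# Toy usage: 3 regions with random 8-bit LBP codes standing in for mesh-LBP output.
rng = np.random.default_rng(0)
shape_codes = [rng.integers(0, 256, 500) for _ in range(3)]
texture_codes = [rng.integers(0, 256, 500) for _ in range(3)]
print(fused_face_signature(shape_codes, texture_codes).shape)  # (1536,)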

Density-Based Shape Descriptors for 3D Object Retrieval

Lecture Notes in Computer Science, 2006

We develop a probabilistic framework that computes 3D shape descriptors in a more rigorous and accurate manner than the usual histogram-based methods for the purpose of 3D object retrieval. We first use a numerical analytical approach to extract the shape information from each mesh triangle more faithfully than sparse sampling. These measurements are then combined into a probability density descriptor via kernel density estimation techniques, with a rule-based bandwidth assignment. Finally, we explore descriptor fusion schemes. Our analytical approach reveals the true potential of density-based descriptors, one of its representatives reaching the top ranking position among competing methods.
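
A small sketch of a rule-based bandwidth assignment, using the Gaussian rule of thumb (Silverman) as a stand-in for whatever rule the paper actually applies; such a value could replace the fixed bandwidth in a KDE-based descriptor like the one sketched earlier.

import numpy as np

def rule_of_thumb_bandwidth(samples):
    # Silverman's Gaussian rule of thumb for 1D data: h = 1.06 * sigma * n**(-1/5).
    return 1.06 * samples.std(ddof=1) * len(samples) ** (-1.0 / 5.0)

# Usage: choose the KDE bandwidth from the feature measurements themselves.
samples = np.random.randn(5000)
print(rule_of_thumb_bandwidth(samples))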

Structural descriptors for 3d shapes

Content-Based Retrieval, 2006

Assessing the similarity among 3D shapes is a very complex and challenging research topic. While human perception of similarity has been widely studied and has produced theories that enjoy broad consensus, the computational aspects of 3D shape retrieval and matching have been addressed only recently. The majority of the methods proposed in the literature focus mainly on the geometry of shapes, in the sense of considering their spatial distribution or extent in 3D space. From a practical point of view, the main advantage ...

Matching 3D face scans using interest points and local histogram descriptors

Computers & Graphics, 2013

In this work, we propose and evaluate an original solution to 3D face recognition that supports face matching even for probe scans with missing parts. In the proposed approach, distinguishing traits of the face are captured by first extracting 3D keypoints of the scan and then measuring how the face surface changes in the keypoint neighborhoods using local shape descriptors. In particular, 3D keypoint detection relies on adapting to 3D faces the meshDOG algorithm, which has been shown to be effective for 3D keypoint extraction from generic objects; as 3D local descriptors, we use the HOG descriptor and also propose two alternative solutions that build, respectively, on the histogram-of-orientations and the geometric-histogram descriptors. Face similarity is evaluated by comparing local shape descriptors across inlier pairs of matching keypoints between probe and gallery scans. The face recognition accuracy of the approach was first evaluated on the difficult probes included in the new 2D/3D Florence face dataset, recently collected and released at the University of Firenze, and on the Binghamton University 3D facial expression dataset. A comprehensive comparative evaluation was then performed on the Bosphorus, Gavab and UND/FRGC v2.0 databases, where competitive results with respect to existing solutions for 3D face biometrics were obtained.
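
A hedged sketch of the matching stage described above: local descriptors at probe and gallery keypoints are paired by mutual nearest neighbour, and a probe-to-gallery dissimilarity is the mean distance over the matched (inlier) pairs; the descriptor dimensionality and the mutual-NN criterion are illustrative assumptions, not the paper's exact pipeline.

import numpy as np

def mutual_nn_matches(desc_probe, desc_gallery):
    # Return index pairs (i, j) whose descriptors are each other's nearest neighbour.
    d = np.linalg.norm(desc_probe[:, None, :] - desc_gallery[None, :, :], axis=2)
    nn_pg = d.argmin(axis=1)            # probe -> gallery
    nn_gp = d.argmin(axis=0)            # gallery -> probe
    return [(i, j) for i, j in enumerate(nn_pg) if nn_gp[j] == i], d

def face_dissimilarity(desc_probe, desc_gallery):
    # Mean descriptor distance over the mutually matched (inlier) keypoint pairs.
    matches, d = mutual_nn_matches(desc_probe, desc_gallery)
    if not matches:
        return np.inf                   # no matching keypoints at all
    return float(np.mean([d[i, j] for i, j in matches]))

# Toy usage with hypothetical 32-D local histogram descriptors.
probe = np.random.rand(40, 32)
gallery = np.random.rand(55, 32)
print(face_dissimilarity(probe, gallery))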

Visual Similarity Based 3D Shape Retrieval Using Bag-of-Features

2010 Shape Modeling International Conference, 2010

This paper presents a novel 3D shape retrieval method, which uses Bag-of-Features and an efficient multi-view shape matching scheme. In our approach, a properly normalized object is first described by a set of depth-buffer views captured on the surrounding vertices of a given unit geodesic sphere. We then represent each view as a word histogram generated by the vector quantization of the view's salient local features. The dissimilarity between two 3D models is measured by the minimum distance over all (24) possible view-matching pairings. This paper also investigates several critical issues, including the influence of the number of views, the codebook, the training data, and the distance function. Experiments on four commonly used benchmarks demonstrate that: 1) our approach obtains superior performance in searching for rigid models; 2) local-feature-based and global-feature-based methods are somewhat complementary; moreover, a linear combination of them significantly outperforms the state-of-the-art in terms of retrieval accuracy.
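
The sketch below illustrates the Bag-of-Features stage under simplifying assumptions: local view features are vector-quantized against a codebook to form per-view word histograms, and model dissimilarity is the minimum summed L1 histogram distance over a supplied set of view permutations, standing in for the paper's 24 rotation-induced pairings.

import numpy as np

def word_histogram(features, codebook):
    # Vector quantization: assign each local feature to its nearest codeword and count.
    d = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=2)
    words = d.argmin(axis=1)
    return np.bincount(words, minlength=len(codebook)).astype(float)

def model_dissimilarity(hists_a, hists_b, permutations):
    # Minimum total L1 distance between matched views over the candidate permutations.
    return min(np.abs(hists_a - hists_b[list(p)]).sum() for p in permutations)

# Toy usage: 6 views per model, a 64-word codebook, 3 illustrative permutations
# (the paper enumerates the 24 pairings induced by rotations of its view sphere).
rng = np.random.default_rng(1)
codebook = rng.random((64, 128))
hists_a = np.stack([word_histogram(rng.random((200, 128)), codebook) for _ in range(6)])
hists_b = np.stack([word_histogram(rng.random((200, 128)), codebook) for _ in range(6)])
perms = [tuple(range(6)), (1, 0, 3, 2, 5, 4), (5, 4, 3, 2, 1, 0)]
print(model_dissimilarity(hists_a, hists_b, perms))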