Learning the 2-D Topology of Images
Related papers
Geometric Data Analysis Based on Manifold Learning with Applications for Image Understanding
2017 30th SIBGRAPI Conference on Graphics, Patterns and Images Tutorials (SIBGRAPI-T), 2017
Pattern recognition, computer vision, signal processing and medical image analysis nowadays require managing large multidimensional image databases, possibly sampled from nonlinear manifolds. The complex tasks involved in analyzing such massive data create a strong demand for nonlinear dimensionality reduction methods that achieve efficient representations for information extraction. Along these lines, manifold learning has been applied to embed nonlinear image data in lower-dimensional spaces for subsequent analysis. The result allows a geometric interpretation of image spaces, with relevant consequences for data topology, computation of image similarity, discriminant analysis/classification tasks and, more recently, for deep learning. In this paper, we first review Riemannian manifolds, which compose the mathematical background of this field. This background supports a data model that embeds the usual linear subspace learning and discriminant analysis results in local structures built from samples drawn from some unknown distribution. Afterwards, we discuss topological issues in data preparation for manifold learning algorithms, as well as the determination of manifold dimension. Then, we survey dimensionality reduction techniques, with particular attention to Riemannian manifold learning. In addition, we discuss the application of concepts from discrete and polyhedral geometry to synthesis and data clustering over the recovered Riemannian manifold, with emphasis on face images in the computational experiments. Next, we discuss promising perspectives of manifold learning and related topics for image analysis and classification, and relationships with deep learning methods. Specifically, we discuss the application of foliation theory, discriminant analysis and kernel methods in curved spaces, and we take differential geometry on manifolds as a paradigm to discuss deep generative models and metric learning algorithms.
Leveraging image manifolds for visual learning
2016
The field of computer vision has recently witnessed remarkable progress, due mainly to visual data availability and machine learning advances. Modeling visual data is challenging for several reasons: loss of information when projecting the 3-D world onto a 2-D plane, the high dimensionality of the visual data, and the existence of nuisance parameters such as occlusion, clutter, illumination and noise. In this dissertation, we focus on modeling inter- and intra-image-manifold variability. The dissertation shows that modeling the image manifold helps to achieve recognition invariance and to perform robust regression within the manifold. It leverages the Homeomorphic Manifold Analysis (HMA) framework to exploit known topological information about data manifolds. HMA builds mappings from a conceptual space to the feature space, based on topological homeomorphism between points in the two spaces.
A Survey of Manifold Learning for Images
IPSJ Transactions on Computer Vision and Applications, 2009
Many natural image sets are samples of a low-dimensional manifold in the space of all possible images. Understanding this manifold is a key first step in understanding many sets of images, and manifold learning approaches have recently been used within many application domains, including face recognition, medical image segmentation, gait recognition and handwritten character recognition. This paper attempts to characterize the special features of manifold learning on image data sets, and to highlight the value and limitations of these approaches.
Think Globally, Fit Locally: Unsupervised Learning of Low Dimensional Manifolds
Journal of Machine Learning Research
The problem of dimensionality reduction arises in many fields of information processing, including machine learning, data compression, scientific visualization, pattern recognition, and neural computation. Here we describe locally linear embedding (LLE), an unsupervised learning algorithm that computes low dimensional, neighborhood preserving embeddings of high dimensional data. The data, assumed to be sampled from an underlying manifold, are mapped into a single global coordinate system of lower dimensionality. The mapping is derived from the symmetries of locally linear reconstructions, and the actual computation of the embedding reduces to a sparse eigenvalue problem. Notably, the optimizations in LLE--though capable of generating highly nonlinear embeddings--are simple to implement, and they do not involve local minima. In this paper, we describe the implementation of the algorithm in detail and discuss several extensions that enhance its performance. We present results of the algorithm applied to data sampled from known manifolds, as well as to collections of images of faces, lips, and handwritten digits. These examples are used to provide extensive illustrations of the algorithm's performance--both successes and failures--and to relate the algorithm to previous and ongoing work in nonlinear dimensionality reduction.
Non-isometric manifold learning: Analysis and an algorithm
2007
In this work we take a novel view of nonlinear manifold learning. Usually, manifold learning is formulated as finding an embedding, or 'unrolling', of a manifold into a lower-dimensional space. Instead, we treat it as the problem of learning a representation of a nonlinear, possibly non-isometric manifold that allows for the manipulation of novel points. Central to this view of manifold learning is the concept of generalization beyond the training data.
Geometric Elements of Manifold Learning
Manifold learning has been widely exploited in high-dimensional data analysis, with applications in pattern recognition, data mining and computer vision. The general problem is to extract a low-dimensional representation of a high-dimensional data set that leads to a more compact description of the data and simplifies its analysis. If we assume that the input lies on a low-dimensional manifold embedded in some high-dimensional space, this problem can be solved by a suitable embedding procedure. This is the problem of manifold learning. This paper presents an introduction to the field, with a focus on the geometric aspects of manifold and subspace learning methods. We provide a review of the area, followed by a discussion that aims to motivate its study. The basic idea of manifold learning and its relationship with linear/non-linear dimensionality reduction techniques are presented using a data set lying on a smooth curve (one dimensional differential...
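The abstract's motivating example, data lying on a smooth curve, can be illustrated with a minimal sketch (mine, not from the paper): a half circle is intrinsically one-dimensional, yet linear PCA cannot capture it with a single component, which motivates nonlinear methods.

```python
import numpy as np
from sklearn.decomposition import PCA

# A 1-D manifold (half circle) embedded in 2-D: one intrinsic degree of
# freedom (the angle t), but the curve is not contained in any line.
t = np.linspace(0, np.pi, 200)
X = np.column_stack([np.cos(t), np.sin(t)])

pca = PCA(n_components=2).fit(X)
# Neither principal component alone explains the data: a linear method
# needs both ambient dimensions for an intrinsically 1-D curve.
print(pca.explained_variance_ratio_)
```

A nonlinear method that recovers the parameter `t` would need only a single coordinate, which is exactly the gap between linear and manifold-based dimensionality reduction the paper discusses.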
Unsupervised nonlinear manifold learning
2007
This communication deals with data reduction and regression. A set of high-dimensional data (e.g., images) usually has only a few degrees of freedom, with corresponding variables that parameterize the original data set. Data understanding, visualization and classification are the usual goals. The proposed method reduces the data to a single set of low-dimensional variables under a user-defined cost function, in the multidimensional scaling framework. The mapping from the reduced variables back to the original data is also addressed, which is another contribution of this work; typical data reduction methods, such as Isomap or LLE, do not deal with this important aspect of manifold learning. We also tackle the inversion of the mapping, which makes it possible to project high-dimensional noisy points onto the manifold, as PCA does for linear models. We present an application of our approach to several standard data sets such as the Swiss roll.
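For comparison with the baselines the abstract names, here is a hedged sketch of Isomap on an S-curve using scikit-learn (dataset and parameters are illustrative). Note that, as the paper points out, standard Isomap provides an out-of-sample `transform()` for new points but no built-in inverse map back to the ambient space.

```python
from sklearn.datasets import make_s_curve
from sklearn.manifold import Isomap

# Points sampled from a 2-D manifold (S-curve) embedded in 3-D.
X, t = make_s_curve(n_samples=800, random_state=0)

# Isomap: geodesic distances over a k-NN graph, then classical MDS.
iso = Isomap(n_neighbors=10, n_components=2)
Y = iso.fit_transform(X)

# New points can be projected into the embedding, but there is no
# built-in mapping from embedding coordinates back to 3-D.
X_new, _ = make_s_curve(n_samples=10, random_state=1)
Y_new = iso.transform(X_new)
print(Y.shape, Y_new.shape)  # (800, 2) (10, 2)
```

The missing inverse direction, from low-dimensional coordinates back to data space, is precisely what the proposed method adds on top of this kind of reduction.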
A Discussion On the Validity of Manifold Learning
arXiv, 2021
Dimensionality reduction (DR) and manifold learning (ManL) have been applied extensively in many machine learning tasks, including signal processing, speech recognition, and neuroinformatics. However, whether DR and ManL models can generate valid learning results remains unclear. In this work, we investigate the validity of the learning results of some widely used DR and ManL methods through the chart mapping function of a manifold. We identify a fundamental problem of these methods: the mapping functions they induce violate the basic settings of manifolds, and hence they do not learn manifolds in the mathematical sense. To address this problem, we provide a provably correct algorithm called fixed points Laplacian mapping (FPLM), which has a geometric guarantee to find a valid manifold representation (up to a homeomorphism). Adding one further condition (orientation preserving), we discuss a sufficient condition for an algorithm to be bijective...
Learning to traverse image manifolds
2007
We present a new algorithm, Locally Smooth Manifold Learning (LSML), that learns a warping function from a point on a manifold to its neighbors. Important characteristics of LSML include the ability to recover the structure of the manifold in sparsely populated regions and beyond the support of the provided data.