Manifold learning Research Papers - Academia.edu
2025
The direction of axons in white matter can be estimated using deterministic fibre tracking algorithms and diffusion weighted imaging. The aim of this work was to evaluate data obtained from pig spine phantom measurements with relatively low b-values, using two types of reconstruction: diffusion tensor imaging (DTI) and the q-ball approach. Pig spines submerged in agar gel were used to prepare a phantom with two crossing populations of fibres. The phantoms were measured in a 3T MR scanner at b-values of 1000 and 2000 s/mm2 for q-ball and 200-2000 s/mm2 for DTI reconstruction. Analysis of crossing and single fibre population regions in the scans showed that the median dispersion from the reference directions was ca. 4° for the single fibre population, and for the crossing area ca. 12° and 6.5° for b-values of 1000 s/mm2 and 2000 s/mm2, respectively. The q-ball approach was able to resolve the crossing problem for both low b-values. It was shown here that coherent results can be achie...
2025, 2007 IEEE Conference on Computer Vision and Pattern Recognition
A good distance metric is crucial for unsupervised learning from high-dimensional data. To learn a metric without any constraint or class label information, most unsupervised metric learning algorithms appeal to projecting observed data onto a low-dimensional manifold, where geometric relationships such as local or global pairwise distances are preserved. However, the projection may not necessarily improve the separability of the data, which is the desirable outcome of clustering. In this paper, we propose a novel unsupervised Adaptive Metric Learning algorithm, called AML, which performs clustering and distance metric learning simultaneously. AML projects the data onto a low-dimensional manifold, where the separability of the data is maximized. We show that the joint clustering and distance metric learning can be formulated as a trace maximization problem, which can be solved via an iterative procedure in the EM framework. Experimental results on a collection of benchmark data sets demonstrated the effectiveness of the proposed algorithm.
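As a rough illustration of the joint clustering and projection idea described above (a simplified stand-in, not the authors' AML trace-maximization procedure), the sketch below alternates k-means clustering with an LDA-style projection fitted on the current cluster labels; the digits dataset and the cluster/iteration settings are placeholders.

```python
# Simplified alternation of clustering and discriminative projection,
# loosely in the spirit of joint clustering / metric learning (not the exact AML algorithm).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_digits
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, _ = load_digits(return_X_y=True)
n_clusters = 10                                   # illustrative setting

Z = X.copy()                                      # start in the original space
labels = KMeans(n_clusters, n_init=10, random_state=0).fit_predict(Z)
for _ in range(5):                                # EM-like alternation
    # M-step analogue: learn a projection that separates the current clusters
    n_comp = min(len(np.unique(labels)) - 1, X.shape[1])
    lda = LinearDiscriminantAnalysis(n_components=n_comp).fit(X, labels)
    Z = lda.transform(X)
    # E-step analogue: re-cluster in the projected (more separable) space
    new_labels = KMeans(n_clusters, n_init=10, random_state=0).fit_predict(Z)
    if np.array_equal(new_labels, labels):
        break
    labels = new_labels
print("final cluster sizes:", np.bincount(labels))
```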
2025, IEEE Transactions on Image Processing
This paper presents a novel manifold learning algorithm for high-dimensional data sets. The scope of the application focuses on the problem of motion tracking in video sequences. The framework presented is twofold. First, it is assumed that the samples are time ordered, providing valuable information that is not exploited in current methodologies. Second, the manifold topology comprises multiple charts, in contrast to most current methods that assume a single chart, which is overly restrictive. The proposed algorithm, Gaussian process multiple local models (GP-MLM), can deal with arbitrary manifold topology by decomposing the manifold into multiple local models that are probabilistically combined using Gaussian process regression. In addition, the paper presents a multiple filter architecture where standard filtering techniques are integrated within the GP-MLM. The proposed approach exhibits performance comparable to state-of-the-art trackers, namely multiple model data association and deep belief networks, and compares favorably with Gaussian process latent variable models. Extensive experiments are presented using real video data, including a publicly available database of lip sequences and left ventricle ultrasound images, in which the GP-MLM achieves state-of-the-art results.
2025, CITY UNIVERSITY MALAYSIA
The 'Thrilled Vacuum' concept introduces a novel and motivational perspective on data clustering, presenting both challenges and innovative opportunities. This study aims to identify the challenges and real-time applications of clustering techniques, with the goal of inspiring further research and practical innovation. 'Thrilled Vacuum' reimagines data clustering as a dynamic and motivating challenge. Clustering, an unsupervised machine learning method, groups unstructured, semi-structured, or structured data based on similarities. It uses a diverse range of clustering techniques to identify similarities between data points and group them accordingly. While machine learning datasets often contain millions of instances, most clustering algorithms face scalability challenges. Traditional approaches that compute pairwise similarities between all examples exhibit quadratic time complexity (O(n²)), making them computationally prohibitive for large-scale datasets. This limitation makes classical clustering methods impractical for real-world applications involving millions of data points. Unstructured data is continuously growing in this modern era because of the use of the internet, IoT devices, social networks, the animation revolution, AI-generated content, and multimedia. The data volume is expanding dramatically at an unprecedented rate, fueled by people engaging with the internet.
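The scalability point above can be made concrete: materializing all pairwise similarities grows quadratically with the number of examples, whereas mini-batch clustering avoids it. The snippet below is purely illustrative; dataset sizes and parameters are placeholders.

```python
# Illustration of the scalability gap: computing all pairwise similarities is O(n^2)
# in time and memory, while mini-batch clustering processes the data in small chunks.
import numpy as np
from sklearn.cluster import MiniBatchKMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import pairwise_distances

X, _ = make_blobs(n_samples=20_000, n_features=16, centers=10, random_state=0)
n = X.shape[0]

# All-pairs similarity: already ~3.2 GB of float64 at 20k points, far worse at millions.
print(f"full pairwise matrix: {n * n:,} entries (~{n * n * 8 / 1e9:.1f} GB)")
D_small = pairwise_distances(X[:2_000])          # feasible only for a small subset
print("subset distance matrix:", D_small.shape)

# Scalable alternative: mini-batch k-means never materializes pairwise similarities.
mbk = MiniBatchKMeans(n_clusters=10, batch_size=1024, n_init=3, random_state=0)
labels = mbk.fit_predict(X)
print("cluster sizes:", np.bincount(labels))
```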
2025, Architectural Science Review
This study is founded on the idea that an analysis of the visual gaze dynamics of pedestrians can increase our understanding of how important architectural features in urban environments are perceived by pedestrians. The results of such an analysis can lead to improvements in urban design. However, a technical challenge arises when trying to determine the gaze direction of pedestrians recorded on video. High "noise" levels and the subtlety of human gaze dynamics hamper precise calculations. However, as robots can be programmed and analysed more efficiently than humans, this study employs them for developing and training a gaze analysis system, with the aim of later applying it to human video data using the machine learning technique of manifold alignment. For the present study, a laboratory was set up as a model street scene in which autonomous humanoid robots of approximately 55 cm in height simulate the behaviour of human pedestrians. The experiments compare the inputs from several cameras as the robot walks down the model street and changes its behaviour upon encountering "visually attractive objects". Overhead recordings and the robot's internal joint signals are analysed after filtering to provide "true" data against which the recorded data can be compared for accuracy testing. A central component of the research is the calculation of a torus-like manifold that represents all different 3D head directions of a robot head and which allows for ordering extracted 3D gaze vectors obtained from video sequences. We briefly describe how the obtained multidimensional trajectory data can be analysed by using a temporal behaviour analysis technique based on support vector machines that was developed separately.
2025
A great challenge today, arising in many fields of science, is the proper mapping of datasets to explore their structure and gain information that would otherwise remain concealed due to their high dimensionality. This task is impossible without appropriate tools helping the experts to understand the data. A promising way to support the experts in their work is the topographic mapping of the datasets to a low-dimensional space where the structure of the data can be visualized and understood. This thesis focuses on Neural Gas and Self-Organizing Maps as particularly successful methods for prototype-based topographic maps. The aim of the thesis is to extend these methods such that they can deal with real-life datasets which are possibly very huge and complex, and thus probably neither treatable in main memory nor embeddable in Euclidean space. As a foundation, we propose and investigate a fast batch scheme for topographic mapping which features quadratic convergence. This formulation allows to...
2025, Dr.M.Gokilavani
22PCOAM16 ML Unit IV Full notes & QB
2025
Most fibre tracking techniques were developed in a deterministic framework, reducing the diffusion information to one single main direction and thus not taking into account the fibre directional uncertainty related to the diffusion tensor. We implemented a different approach that was statistical in nature and that took into account the whole diffusion information. This algorithm was then used to model the brain connectivity globally. The mass of connections thus generated was then virtually dissected to uncover different tracts. The connectivity pattern and the individual fibre tracts were then compared to known anatomical data; a good match was found.
2025, Entropy
An innovative data-driven model-order reduction technique is proposed to model dilute micrometric or nanometric suspensions of microcapsules, i.e., microdrops protected by a thin hyperelastic membrane, which are used in healthcare as innovative drug vehicles. We consider a microcapsule flowing in a similar-size microfluidic channel and systematically vary the governing parameters, namely the capillary number (the ratio of viscous to elastic forces) and the confinement ratio (the ratio of capsule to tube size). The resulting space-time-parameter problem is solved using two global POD reduced bases, determined in the offline stage for the space and parameter variables, respectively. A suitable low-order spatial reduced basis is then computed in the online stage for any new parameter instance. The time evolution of the capsule dynamics is achieved by identifying the nonlinear low-order manifold of the reduced variables; for that, a point cloud of reduced data is computed and a diffuse ap...
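The offline POD step mentioned above can be illustrated generically: collect snapshots, take a truncated SVD, and keep the modes that capture most of the energy. The sketch below uses synthetic snapshot data and is not the paper's full space-time-parameter pipeline.

```python
# Minimal sketch of the offline POD step: build a low-order spatial basis from snapshots
# via truncated SVD (generic POD, not the paper's full reduced-order modelling pipeline).
import numpy as np

rng = np.random.default_rng(0)
n_space, n_snapshots = 500, 80            # synthetic sizes, for illustration only
snapshots = rng.standard_normal((n_space, 5)) @ rng.standard_normal((5, n_snapshots))
snapshots += 0.01 * rng.standard_normal((n_space, n_snapshots))    # small noise

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999) + 1)    # keep 99.9% of the snapshot energy
basis = U[:, :r]                                # reduced spatial basis

# Reduced coordinates of one snapshot, and its reconstruction error
a = basis.T @ snapshots[:, 0]
err = np.linalg.norm(snapshots[:, 0] - basis @ a) / np.linalg.norm(snapshots[:, 0])
print(f"retained modes: {r}, relative reconstruction error: {err:.2e}")
```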
2025
Manifold learning has proven to be a very powerful tool in data analysis. However, manifold learning applications for images are mainly based on holistic vectorized representations of images. The challenging question that we address in this paper is how we can learn image manifolds from a collection of local features in a smooth way that captures the feature similarity and spatial arrangement variability between images. We introduce a novel framework for learning a manifold representation from collections of local features in images. We first show how we can learn a feature embedding representation that preserves both the local appearance similarity and the spatial structure of the features. We also show how we can embed features from a new image by introducing a solution for the out-of-sample problem that is suitable for this context. By solving these two problems and defining a proper distance measure in the feature embedding space, we can reach an image manifold embedding space.
2025, IEEE Transactions on Geoscience and Remote Sensing
In recent publications, we have presented a data-driven approach to representing the nonlinear structure of hyperspectral imagery using manifold coordinates. The approach relies on graph methods to derive geodesic distances on the high-dimensional hyperspectral data manifold. From these distances, a set of intrinsic manifold coordinates that parameterizes the data manifold is derived. Scaling the solution relied on divide-conquer-and-merge strategies for the manifold coordinates because of the computational and memory scaling of the geodesic coordinate calculations. In this paper, we improve the scaling performance of isometric mapping (ISOMAP) and achieve full-scene global manifold coordinates while removing artifacts generated by the original methods. The CPU time of the enhanced ISOMAP approach scales as O(N log2(N)), where N is the number of samples, while the memory requirement is bounded by O(N log(N)). Full hyperspectral scenes of O(10^6) samples or greater are obtained via a reconstruction algorithm, which allows insertion of large numbers of samples into a representative "backbone" manifold obtained for a smaller but representative set of O(10^5) samples. We provide a classification example using a coastal hyperspectral scene to illustrate the approach.
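The "backbone" idea, deriving manifold coordinates on a representative subset and then inserting the remaining samples, can be approximated with off-the-shelf tools; the sketch below uses scikit-learn's Isomap and its out-of-sample transform as a stand-in for the paper's enhanced ISOMAP and reconstruction algorithm, with toy sizes.

```python
# Sketch of the backbone idea: derive manifold coordinates on a representative subset,
# then insert the remaining samples via an out-of-sample mapping.
# scikit-learn's Isomap.transform stands in for the paper's reconstruction algorithm.
import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap

X, _ = make_swiss_roll(n_samples=20_000, random_state=0)

rng = np.random.default_rng(0)
backbone_idx = rng.choice(len(X), size=2_000, replace=False)    # representative "backbone" subset

iso = Isomap(n_neighbors=10, n_components=2)
backbone_coords = iso.fit_transform(X[backbone_idx])    # geodesic-based coordinates on the backbone
full_coords = iso.transform(X)                          # insert all samples into the backbone manifold
print(backbone_coords.shape, full_coords.shape)
```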
2025, arXiv | Ferdowsi University of Mashhad, Iran
Manifold learning techniques, such as locally linear embedding (LLE), are designed to preserve the local neighborhood structures of high-dimensional data during dimensionality reduction. Traditional LLE employs the Euclidean distance to define neighborhoods, which can struggle to capture the intrinsic geometric relationships within complex data. A novel approach, adaptive locally linear embedding (ALLE), is introduced to address this limitation by incorporating a dynamic, data-driven metric that enhances topological preservation. This method redefines the concept of proximity by focusing on topological neighborhood inclusion rather than fixed distances. By adapting the metric based on the local structure of the data, it achieves superior neighborhood preservation, particularly for datasets with complex geometries and high-dimensional structures. Experimental results demonstrate that ALLE significantly improves the alignment between neighborhoods in the input and feature spaces, resulting in more accurate and topologically faithful embeddings. This approach advances manifold learning by tailoring distance metrics to the underlying data, providing a robust solution for capturing intricate relationships in high-dimensional datasets.
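For reference, the fixed Euclidean-neighborhood baseline that ALLE modifies is standard LLE; the snippet below runs that baseline with scikit-learn and reports a neighborhood-preservation score. The adaptive, data-driven metric of ALLE itself is not reproduced here.

```python
# Baseline that ALLE builds on: standard LLE with fixed Euclidean k-neighborhoods (scikit-learn).
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import LocallyLinearEmbedding, trustworthiness

X, _ = make_swiss_roll(n_samples=2_000, noise=0.05, random_state=0)

lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2, method="standard", random_state=0)
Y = lle.fit_transform(X)

# Neighborhood preservation between input and embedding (higher is better)
print("trustworthiness:", round(trustworthiness(X, Y, n_neighbors=12), 3))
```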
2025, IEEE Transactions on Visualization and Computer Graphics
Fig. 1. A variant of a manifold learning technique allows us to extract contour trees: (a) the 3D distribution of the Swiss roll dataset (left) is elongated onto the 2D plane (right) through conventional nonlinear dimensionality reduction; (b) 3D scattered samples of a surface with two peaks and one pit (left) can be transformed into an approximate contour tree (right) using our approach. Different colors are assigned to the samples, where the color of each axis represents the associated correspondence with their coordinates. Abstract: A contour tree is a powerful tool for delineating the topological evolution of isosurfaces of a single-valued function, and thus has been frequently used as a means of extracting features from volumes and their time-varying behaviors. Several sophisticated algorithms have been proposed for constructing contour trees, but they often complicate the software implementation, especially for higher-dimensional cases such as time-varying volumes. This paper presents a simple yet effective approach to plotting, in 3D space, approximate contour trees from a set of scattered samples embedded in the high-dimensional space. Our main idea is to take advantage of manifold learning so that we can elongate the distribution of high-dimensional data samples to embed it into a low-dimensional space while respecting the local proximity of sample points. The contribution of this paper lies in the introduction of new distance metrics to manifold learning, which allows us to reformulate existing algorithms as a variant of a currently available dimensionality reduction scheme. Efficient reduction of data sizes together with segmentation capability is also developed to equip our approach with a coarse-to-fine analysis even for large-scale datasets. Examples are provided to demonstrate that our proposed scheme can successfully traverse the features of volumes and their temporal behaviors through the constructed contour trees.
2025, Carolina Digital Repository (University of North Carolina at Chapel Hill)
In this paper, we focus on joint regression and classification for Alzheimer's disease diagnosis and propose a new feature selection method by embedding the relational information inherent in the observations into a sparse multi-task learning framework. Specifically, the relational information includes three kinds of relationships (feature-feature, response-response, and sample-sample relations), which preserve the similarity among the features, the response variables, and the samples, respectively. To conduct feature selection, we first formulate the objective function by imposing these three relational characteristics along with an ℓ2,1-norm regularization term, and then propose a computationally efficient algorithm to optimize the proposed objective function. With the reduced data, we train two support vector regression models to predict the clinical scores of ADAS-Cog and MMSE, respectively, and also a support vector classification model to determine the clinical label. We conducted extensive experiments on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset to validate the effectiveness of the proposed method. Our experimental results showed the efficacy of the proposed method in enhancing the performance of both clinical score prediction and disease status identification, compared to state-of-the-art methods.
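The ℓ2,1-norm is the ingredient that performs joint feature selection across tasks: it zeroes entire rows of the coefficient matrix. The sketch below is a minimal proximal-gradient solver for ℓ2,1-regularized multi-task least squares on synthetic data; the paper's additional relational (feature, response, and sample) regularizers are omitted.

```python
# Minimal proximal-gradient sketch of l2,1-regularized multi-task regression:
# the l2,1 penalty zeroes whole feature rows of W, performing joint feature selection.
import numpy as np

def l21_prox(W, thresh):
    """Row-wise soft-thresholding: shrink each feature's coefficient row jointly across tasks."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - thresh / np.maximum(norms, 1e-12))
    return W * scale

def multitask_l21(X, Y, lam=0.5, n_iter=500):
    n, d = X.shape
    W = np.zeros((d, Y.shape[1]))
    step = 1.0 / (np.linalg.norm(X, 2) ** 2)      # 1/L for the smooth least-squares part
    for _ in range(n_iter):
        grad = X.T @ (X @ W - Y)
        W = l21_prox(W - step * grad, lam * step)
    return W

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 30))
W_true = np.zeros((30, 2))
W_true[:5] = rng.standard_normal((5, 2))          # only the first 5 features are informative
Y = X @ W_true + 0.1 * rng.standard_normal((100, 2))

W_hat = multitask_l21(X, Y, lam=2.0)
print("selected feature rows:", np.flatnonzero(np.linalg.norm(W_hat, axis=1) > 1e-6))
```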
2025, Medical image analysis
In this paper, we focus on joint regression and classification for Alzheimer's disease diagnosis and propose a new feature selection method by embedding the relational information inherent in the observations into a sparse multi-task learning framework. Specifically, the relational information includes three kinds of relationships (feature-feature, response-response, and sample-sample relations), which preserve the similarity among the features, the response variables, and the samples, respectively. To conduct feature selection, we first formulate the objective function by imposing these three relational characteristics along with an ℓ2,1-norm regularization term, and then propose a computationally efficient algorithm to optimize the proposed objective function. With the dimension-reduced data, we train two support vector regression models to predict the clinical scores of ADAS-Cog and MMSE, respectively, and also a support vecto...
2025, arXiv (Cornell University)
We introduce a new definition of the exponential family of Markov chains, and show that many characteristic properties of the usual exponential family of probability distributions properly extend to Markov chains. The method of information geometry is effectively applied to our framework, which enables us to characterize the divergence rate of a Markov chain from a differential geometric viewpoint.
2025
In the graph classification problem, we are given a family of graphs and a set of categories, and we aim to classify all the graphs (of the family) into the given categories. Earlier approaches, such as graph kernels and graph embedding techniques, have focused on extracting certain features by processing the entire graph. However, real-world graphs are complex and noisy, and these traditional approaches are computationally intensive. With the introduction of the deep learning framework, there have been numerous attempts to create more efficient classification approaches. We modify a kernel graph convolutional neural network approach that extracts subgraphs (patches) from the graph using various community detection algorithms. These patches are provided as input to a graph kernel and max pooling is applied. We use different community detection algorithms and a shortest path graph kernel and compare their efficiency and performance. In this paper we compare three methods: a gr...
2025, Pattern Recognition
The low-dimensional representation of high-dimensional data and the concise description of its intrinsic structures are central problems in data analysis. In this paper, an unsupervised learning algorithm called weighted locally linear embedding (WLLE) is presented to discover the intrinsic structures of data, such as neighborhood relationships, global distributions and clustering. The WLLE algorithm is motivated by the locally linear embedding (LLE) algorithm and the cam weighted distance, a novel distance measure which usually gives deflective cam contours in place of the equal-distance contour, improving classification. A major advantage of WLLE is that it optimizes the process of intrinsic structure discovery by avoiding unreasonable neighbor searching and, at the same time, allows the discovery to adapt to the characteristics of the input data set. Furthermore, the algorithm discovers intrinsic structures which can be used to compute manipulative embeddings for potential classification and recognition purposes, and thus can work as a feature extraction algorithm. Simulation studies demonstrate that WLLE can give better results in manifold learning and dimension reduction than LLE and neighborhood linear embedding (NLE), and is more robust to parameter changes. Experiments on face image data sets and comparisons to other well-known face recognition methods such as kernel PCA (KPCA) and kernel direct discriminant analysis (KDDA) are presented to show the potential of WLLE for real-world problems.
2025, International Conference on Machine Learning
Denoising auto-encoders (DAEs) have been successfully used to learn new representations for a wide range of machine learning tasks. During training, DAEs make many passes over the training dataset and reconstruct it from partial corruption generated from a pre-specified corrupting distribution. This process learns robust representations, though at the expense of requiring many training epochs, in which the data is explicitly corrupted. In this paper we present the marginalized Denoising Auto-encoder (mDAE), which (approximately) marginalizes out the corruption during training. Effectively, the mDAE takes into account infinitely many corrupted copies of the training data in every epoch, and is therefore able to match or outperform the DAE with far fewer training epochs. We analyze our proposed algorithm and show that it can be understood as a classic auto-encoder with a special form of regularization. In empirical evaluations we show that it attains a speedup of one to two orders of magnitude in training time over other competing approaches.
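The marginalization idea is easiest to see in a linear special case: under feature-dropout corruption, the expected denoising least-squares loss has a closed-form minimizer, so no corrupted copies are ever sampled. The snippet below implements that linear marginalized-denoising sketch; it is not the paper's full (nonlinear, approximate) mDAE.

```python
# Linear illustration of marginalizing out dropout corruption: with drop probability p,
# the expected denoising least-squares loss has a closed-form minimizer W = Q^{-1} P,
# so no explicitly corrupted copies of the data are ever generated.
import numpy as np

def marginalized_linear_denoiser(X, p=0.3, reg=1e-5):
    """X: n_samples x d. Returns W (d x d) minimizing E||X - X_tilde @ W||^2 under dropout noise."""
    S = X.T @ X                                  # scatter matrix, d x d
    q = 1.0 - p                                  # probability that a feature survives
    Q = S * (q * q)                              # E[X_tilde^T X_tilde], off-diagonal entries
    np.fill_diagonal(Q, q * np.diag(S))          # diagonal entries survive with probability q, not q^2
    P = q * S                                    # E[X_tilde^T X]
    d = X.shape[1]
    return np.linalg.solve(Q + reg * np.eye(d), P)

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 3)) @ rng.standard_normal((3, 20))    # low-rank data, 20 features
W = marginalized_linear_denoiser(X, p=0.3)

# Sanity check on one explicitly sampled corruption: the learned map should roughly denoise it.
mask = (rng.random(X.shape) > 0.3).astype(float)
err = np.linalg.norm(X - (X * mask) @ W) / np.linalg.norm(X)
print(f"relative reconstruction error on a sampled corruption: {err:.2f}")
```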
2025, Biomedicines
Alzheimer’s disease (AD) is the most common form of dementia. An increasing number of studies have confirmed epigenetic changes in AD. Consequently, a robust phenotyping mechanism must take into consideration the environmental effects on the patient in the generation of phenotypes. Positron Emission Tomography (PET) is employed for the quantification of pathological amyloid deposition in brain tissues. The objective is to develop a new methodology for the hyperparametric analysis of changes in cognitive scores and PET features to test for there being multiple AD phenotypes. We used a computational method to identify phenotypes in a retrospective cohort study (532 subjects), using PET and Magnetic Resonance Imaging (MRI) images and neuropsychological assessments, to develop a novel computational phenotyping method that uses Partial Volume Correction (PVC) and subsets of neuropsychological assessments in a non-biased fashion. Our pipeline is based on a Regional Spread Function (RSF) m...
2025, IEEE Sensors Journal
In recent times, the rapid growth of human-robot interaction (HRI)-based utilities can be observed in industrial and domestic applications. To ensure seamless operation in daily-life HRI spaces, several aspects of environmental hazards need to be taken care of. Vision-based systems, in particular, are prone to challenges such as degradation in ambient illumination and noise in the sensor accessories. For high-dimensional vision sensor data, manifold learning-based dimensionality reduction techniques like locality preserving projection (LPP) have shown promising results. To handle the sensitivity of LPP towards noise and outliers arising from such adversities, methods like 2D-LPP and robust 2D-LPP (2DRLPP) have been introduced. However, in these methods too, the projection maps remain susceptible to spatial perturbations in the data. To address these limitations, this paper proposes a granular feature-aided 2DRLPP scheme to enhance robustness against noise, intensity variations, and spatial outliers. To obtain the robust feature information, a new granular computing (GrC)-based technique is introduced. A novel density-based neighborhood granulation (dNG) algorithm is proposed for extracting granular information from real-world vision sensor data. Additionally, an RGB-channel fuzzy decoding scheme is developed to decode the granular information and utilize it for constructing a robust projection kernel. The proposed technique is validated in a real-world assistive robotic environment, where flag-stick visual cues are used by human participants to supervise an assistive mobile robot in performing necessary tasks.
2025, IEEE Transactions on Emerging Topics in Computational Intelligence
In image clustering applications, deep feature clustering has recently demonstrated impressive performance, which employs deep neural networks for feature learning that favors clustering exercises. In this context, density-based methods have emerged as the preferred choice for the clustering mechanism within the framework of deep feature clustering. However, as the performance of these clustering algorithms is primarily effective on the low-dimensional feature data, deep feature learning models play a crucial role here. With far infrared (FIR) thermal imaging systems working in real-world scenarios, the images captured are largely affected by blurred edges, background noise, thermal irregularities, few details, etc. In this work, we demonstrate the effectiveness of granular computing-based techniques in such scenarios, where the input data contains indiscernible image regions and vague boundary regions. We propose a novel adaptive non-homogeneous granulation (ANHG) technique here that can adaptively select the smallest possible size of granules within a purview of unequally-sized granulation, based on a segmentation assessment index. Proposed ANHG in combination with deep feature learning helps in extracting complex, indiscernible information from the image data and capturing the local intensity variation of the data. Experimental results show significant performance improvement of the density-based deep feature clustering method after the incorporation of the proposed granulation scheme.
2025, IOP
The objective of an effective human-robot collaborative (HRC) task is the maximization of human-robot competencies while ensuring the user's convenience. In photometrically challenging and unstructured HRC environments, data obtained from vision sensors often tend to get degraded due to illumination irregularities and spatio-temporal complexities. To extract useful and discriminative features from the data under such situations, locality-sensitive methods like locality preserving projection (LPP) become quite useful, as they capture the local geometric structure of the high-dimensional data. In LPP, the local structural information is encoded in the form of weight values between two samples in the higher-dimensional Euclidean space. The weight values are learned in a regular and continuous manner which depends only on the spatial distribution of the data. Moreover, because the weights depend solely on the Euclidean distance, improper weight values can occur frequently, as the Euclidean distance is susceptible to noise, outliers, and different types of geometrical transformations. This paper proposes an adaptive weight learning method to be utilized in the weight computation of LPP, which allows it to adaptively select and extract more discriminative features from the higher-dimensional input data while preserving the intrinsic structural information of the data. Additionally, to alleviate the issues with spatial dependency, the concept of bilateral filtering, which incorporates range weights from the feature space along with the similarity weight in the Euclidean space, has been utilized here. This paper thus proposes an augmented version of adaptive spatial-feature kernel-guided bilateral-filtering-inspired LPP, which addresses two of these basic and fundamental issues of the conventional LPP.
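For context, the conventional LPP baseline that these adaptive and bilateral weighting schemes modify reduces to building a heat-kernel adjacency on a k-nearest-neighbour graph and solving a generalized eigenproblem. The sketch below is that baseline only, with illustrative parameters, not the proposed adaptive weight learning.

```python
# Conventional LPP baseline: heat-kernel weights on a k-NN graph, then the generalized
# eigenproblem X^T L X a = lambda X^T D X a (smallest eigenvectors give the projection).
import numpy as np
from scipy.linalg import eigh
from sklearn.neighbors import kneighbors_graph

def lpp(X, n_components=2, n_neighbors=5, t=1.0, reg=1e-6):
    n, d = X.shape
    # Symmetric k-NN adjacency with heat-kernel weights exp(-||xi - xj||^2 / t)
    G = kneighbors_graph(X, n_neighbors, mode="distance", include_self=False)
    W = G.maximum(G.T).toarray()
    mask = W > 0
    W[mask] = np.exp(-W[mask] ** 2 / t)
    D = np.diag(W.sum(axis=1))
    L = D - W
    A = X.T @ L @ X
    B = X.T @ D @ X + reg * np.eye(d)
    vals, vecs = eigh(A, B)              # ascending eigenvalues
    return vecs[:, :n_components]        # projection directions

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 10))
P = lpp(X, n_components=2, n_neighbors=7, t=2.0)
Y = X @ P                                # low-dimensional LPP embedding
print(P.shape, Y.shape)
```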
2025, IEEE
Locality preserving projection (LPP) is a manifold learning-based nonlinear dimensionality reduction (DR) technique, which has seen successful implementations in pattern recognition problems. In the case of 2-D images, if they are vectorized into 1-D shapes before applying LPP, significant spatial neighborhood information can be lost. Two-dimensional LPP (2DLPP) can overcome this problem but suffers when data are susceptible to noise, outliers, and intensity variation. To address these issues, we propose a novel rough entropy-based granular fusion (REGF) scheme to capture the intensity variation in the data in the form of feature information, and propose a hybridization of REGF with 2DLPP for feature extraction. The fusion technique simultaneously imbibes good features of crisp granulation (CG) and quad-tree decomposition (QTD), and considers the uncertainties caused by these homogeneous and non-homogeneous granulation techniques in defining the indiscernible image regions. Moreover, it works in the RGB color space to alleviate the loss of information encountered by conventional granulation techniques in the gray space. Extensive experimental studies in a real-world vision sensor-based human-robot interaction (HRI) framework have been conducted to demonstrate the effectiveness of the proposed granular computing (GrC)-based technique, especially in rugged environments.
2025, IEEE
The present letter proposes a new variant of bilateral locality preserving projection (BLPP), a nonlinear manifold learning technique, for vision sensor-based robot navigation guidance in challenging environments, suffering from both variations in ambient illumination levels and vision sensor noises and uncertainties in image acquisition. In its basic form, the adjacency graph in BLPP is formulated both in feature subspace and spatial subspace simultaneously. The present work proposes to compute the feature weights using local ternary pattern (LTP), a lesser noise-sensitive feature descriptor that is also robust against illumination variations. In addition, histogram refinement has been performed on the feature descriptor to encode the local neighborhood structure of the data, proposing a further augmented version of BLPP named histogram refined local ternary pattern-based BLPP (HRLTP-BLPP). Extensive real-life experiments with challenging photometric conditions and sensor noises have been performed to show the usefulness of HRLTP-BLPP over other competing, state-of-the-art approaches utilized for robot navigation guidance.
2025
This paper is devoted to thoroughly investigating how to bootstrap the ROC curve, a widely used visual tool for evaluating the accuracy of test/scoring statistics s(X) in the bipartite setup. The issue of confidence bands for the ROC curve is considered and a resampling procedure based on a smooth version of the empirical distribution called the "smoothed bootstrap" is introduced. Theoretical arguments and simulation results are presented to show that the "smoothed bootstrap" is preferable to a "naive" bootstrap in order to construct accurate confidence bands.
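A minimal sketch of the smoothed-bootstrap idea is to resample (label, score) pairs and add small kernel noise to the scores before recomputing the ROC curve, then take pointwise quantiles on a fixed FPR grid. The bandwidth, data, and band construction below are illustrative choices, not the paper's exact procedure or theory.

```python
# Sketch of a "smoothed bootstrap" for pointwise ROC confidence bands:
# resample scores with replacement, add small Gaussian (kernel) noise, recompute the ROC,
# and take pointwise quantiles on a fixed FPR grid.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
neg = rng.normal(0.0, 1.0, 500)          # scores of the negative class (synthetic)
pos = rng.normal(1.0, 1.0, 500)          # scores of the positive class (synthetic)
y = np.r_[np.zeros(500), np.ones(500)]
s = np.r_[neg, pos]

grid = np.linspace(0, 1, 101)            # common FPR grid
h = 0.1 * s.std()                        # smoothing bandwidth (illustrative choice)
curves = []
for _ in range(500):
    idx = rng.integers(0, len(s), len(s))                    # resample (label, score) pairs
    s_b = s[idx] + h * rng.standard_normal(len(s))           # smooth: add kernel noise to scores
    fpr, tpr, _ = roc_curve(y[idx], s_b)
    curves.append(np.interp(grid, fpr, tpr))

curves = np.array(curves)
lower, upper = np.percentile(curves, [2.5, 97.5], axis=0)    # pointwise 95% band
print("band width at FPR=0.1:", round(upper[10] - lower[10], 3))
```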
2025
In this paper an object recognition method using saliency maps is proposed. Saliency maps are used to find the most informative part of an image. The Label Propagation Saliency (LPS) method is used to generate saliency maps on the basis of background and objectness labels extracted from the image. Object recognition can then be performed on the saliency maps using Principal Component Analysis (PCA) and k-Nearest Neighbor search (k-NN). During the training phase, 357 images are processed by applying PCA to retrieve the key features, which are stored in a database. During the testing phase, PCA is applied to the input image to retrieve its features. The stored values are loaded, a kd-tree is constructed, and a search over it groups the features into different categories based on Euclidean distance. The kd-tree search values and the PCA values of the input image are given as input to the k-NN algorithm to find the category of the input image. Keywords— Saliency Maps, Background labels, Objectness labels, Label Propagat...
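The pipeline described, PCA features, a kd-tree over the stored training features, and k-NN categorization by Euclidean distance, maps directly onto standard tools. The sketch below uses scikit-learn, with the digits dataset standing in for the 357 saliency-map images.

```python
# Sketch of the described pipeline: PCA features, a kd-tree over training features,
# and k-NN categorization by Euclidean distance (scikit-learn; toy dataset).
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

pca = PCA(n_components=20).fit(X_train)          # key features retrieved during training
F_train, F_test = pca.transform(X_train), pca.transform(X_test)

knn = KNeighborsClassifier(n_neighbors=5, algorithm="kd_tree", metric="euclidean")
knn.fit(F_train, y_train)                        # kd-tree built over the stored training features
print("recognition accuracy:", round(knn.score(F_test, y_test), 3))
```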
2024, … of Workshop on …
In this paper we describe an online/incremental linear binary classifier based on an interesting approach to estimating the Fisher subspace. The proposed method allows dealing with datasets of high cardinality that are dynamically supplied, and it efficiently copes ...
2024
Nonlinear manifold learning algorithms, mainly isometric feature mapping (Isomap) and local linear embedding (LLE), determine the low-dimensional embedding of the original high-dimensional data by finding the geometric distances between samples. Researchers in the remote sensing community have successfully applied Isomap to hyperspectral data to extract useful information. Although results are promising, the computational requirements of the local search process are exorbitant. Landmark-Isomap, which utilizes randomly selected sample points to perform the search, mitigates these problems, but samples of some classes are located in spatially disjoint clusters in the embedded space. We propose an alternative approach to selecting landmark points which focuses on the boundaries of the clusters, rather than randomly selected points or cluster centers. The resulting Isomap is evaluated by SStress, a goodness-of-fit measure, and reconstructed with reduced computation, which makes implementation with other classifiers plausible for large data sets. The new method is implemented and applied to Hyperion hyperspectral data collected over the Okavango Delta of Botswana.
2024
Tracking of left ventricles in 3D echocardiography is a challenging topic because of the poor quality of ultrasound images and speed considerations. In this paper, a fast and accurate learning-based 3D tracking algorithm is presented. A novel one-step forward prediction is proposed to generate the motion prior using motion manifold learning. Collaborative trackers are introduced to achieve both temporal consistency and tracking robustness. The algorithm is completely automatic and computationally efficient. The mean point-to-mesh error of our algorithm is 1.28 mm. It requires less than 1.5 seconds to process a 3D volume (160 × 148 × 208 voxels). Index Terms: Tracking; Ultrasound; Left Ventricles. Recently, the idea of utilizing detection for tracking to achieve robustness in noisy environments has proven to be quite effective. Tracking by detection does not accumulate errors from previous frames and can therefore avoid template drifting. However, it still has two major problems in 3D boundary tracking: 1
2024, arXiv (Cornell University)
The t-distributed Stochastic Neighbor Embedding (t-SNE) algorithm is a ubiquitously employed dimensionality reduction (DR) method. Its non-parametric nature and impressive efficacy motivated its parametric extension. It is, however, tied to a user-defined perplexity parameter, restricting its DR quality compared to recently developed multi-scale perplexity-free approaches. This paper hence proposes a multi-scale parametric t-SNE scheme, relieved from perplexity tuning and with a deep neural network implementing the mapping. It produces reliable embeddings with out-of-sample extensions, competitive with the best perplexity adjustments in terms of neighborhood preservation on multiple data sets.
2024
Dimensionality reduction aims at representing high-dimensional data in a low-dimensional space. In order to make sense, the low-dimensional representation, or embedding, has to preserve some well-defined structural properties of the data. The general idea is that similar data items should be displayed close to each other, whereas longer distances should separate dissimilar ones. This principle applies to data that can
2024, The European Symposium on Artificial Neural Networks
Nonlinear dimensionality reduction aims at providing low-dimensional representations of high-dimensional data sets. Many new methods have been proposed in recent years, but the question of their assessment and comparison remains open. This paper reviews some of the existing quality measures that are based on distance ranking and K-ary neighborhoods. Many quality criteria actually rely on the analysis of one or several sub-blocks of a co-ranking matrix. The analogy between the co-ranking matrix and a Shepard diagram is highlighted. Finally, a unifying framework is sketched, and new measures are proposed and illustrated in a short experiment.
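One common formulation of the co-ranking matrix, and of the Q_NX(K) criterion built from its upper-left block, is sketched below in NumPy; the exact normalizations used in the paper may differ.

```python
# Co-ranking matrix and Q_NX(K) (one common formulation):
# Q[k-1, l-1] counts pairs whose distance rank is k in the data space and l in the embedding;
# Q_NX(K) averages the upper-left K x K block (fraction of preserved K-ary neighbours).
import numpy as np
from scipy.spatial.distance import pdist, squareform

def rank_matrix(D):
    # rank of j among the neighbours of i (0 = the point itself)
    return np.argsort(np.argsort(D, axis=1), axis=1)

def coranking(X, Y):
    R_hi = rank_matrix(squareform(pdist(X)))
    R_lo = rank_matrix(squareform(pdist(Y)))
    n = X.shape[0]
    Q = np.zeros((n - 1, n - 1), dtype=int)
    mask = ~np.eye(n, dtype=bool)
    np.add.at(Q, (R_hi[mask] - 1, R_lo[mask] - 1), 1)
    return Q

def q_nx(Q, K):
    n = Q.shape[0] + 1
    return Q[:K, :K].sum() / (K * n)

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
Y = X[:, :2] + 0.05 * rng.standard_normal((200, 2))    # a crude 2-D "embedding"
print("Q_NX(10) =", round(q_nx(coranking(X, Y), 10), 3))
```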
2024
Network intrusion detection (NID) is a complex classification problem. In this paper, we combine classification with recent and scalable nonlinear dimensionality reduction (NLDR) methods. Classification and DR are not necessarily adversarial, provided adequate cluster magnification occurs in NLDR methods like t-SNE: DR mitigates the curse of dimensionality, while cluster magnification can maintain class separability. We demonstrate experimentally the effectiveness of the approach by analyzing and comparing results on the big KDD99 dataset, using both NLDR quality assessment and the classification rate for SVMs and random forests. Since the data involves features of mixed types (numerical and categorical), the use of Gower's similarity coefficient as the metric further improves the results over the classical similarity metric.
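Gower's coefficient combines range-normalized absolute differences for numerical features with simple mismatch indicators for categorical ones; the resulting distance matrix can be handed to t-SNE as a precomputed metric. The sketch below is a simplified version with synthetic mixed-type data, not the KDD99 setup.

```python
# Minimal Gower distance for mixed numerical/categorical features, fed to t-SNE
# with a precomputed distance matrix (synthetic data, simplified setup).
import numpy as np
from sklearn.manifold import TSNE

def gower_distances(num, cat):
    """num: (n, p) numeric array; cat: (n, q) array of categorical codes."""
    ranges = num.max(axis=0) - num.min(axis=0)
    ranges[ranges == 0] = 1.0
    D_num = np.abs(num[:, None, :] - num[None, :, :]) / ranges          # per-feature in [0, 1]
    D_cat = (cat[:, None, :] != cat[None, :, :]).astype(float)          # mismatch indicator
    return np.concatenate([D_num, D_cat], axis=2).mean(axis=2)          # average over all features

rng = np.random.default_rng(0)
num = rng.standard_normal((500, 4))
cat = rng.integers(0, 3, size=(500, 2))
D = gower_distances(num, cat)

emb = TSNE(n_components=2, metric="precomputed", init="random",
           perplexity=30, random_state=0).fit_transform(D)
print(emb.shape)
```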
2024
Dimensionality reduction is a key stage for both the design of a pattern recognition system and data visualization. Recently, there has been an increasing interest in methods aimed at preserving the data topology. Among them, Laplacian eigenmaps (LE) and stochastic neighbour embedding (SNE) are the most representative. In this work, we present a brief comparison among very recent methods that are alternatives to LE and SNE. Comparisons are done mainly on two aspects: algorithm implementation and complexity. Also, relations between methods are depicted. The goal of this work is to provide researchers in this field with some discussion as well as decision criteria to choose a method according to the user's needs and/or to keep a good trade-off between performance and required processing time.
2024
Stochastic neighbor embedding (SNE) is a method of dimensionality reduction that involves softmax similarities measured between all pairs of data points. To build a suitable embedding, SNE tries to reproduce in a low-dimensional space the similarities that are observed in the high-dimensional data space. Previous work has investigated the immunity of such similarities to norm concentration, as well as enhanced cost functions. This paper proposes an additional refinement, in the form of multiscale similarities, namely averages of softmax ratios with decreasing bandwidths. The objective is to maximize the embedding quality at all scales, with a better preservation of both local and global neighborhoods, and also to exempt the user from having to fix a scale arbitrarily. Experiments on several data sets show that this multiscale version of SNE, combined with an appropriate cost function (sum of Jensen-Shannon divergences), outperforms all previous variants of SNE.
2024, Lecture Notes in Computer Science
Simbed, standing for similarity-based embedding, is a new method of embedding high-dimensional data. It relies on the preservation of pairwise similarities rather than distances. In this respect, Simbed can be related to other techniques such as stochastic neighbor embedding and its variants. A connection with curvilinear component analysis is also pointed out. Simbed differs from these methods by the way similarities are defined and compared in both the data and embedding spaces. In particular, similarities in Simbed can account for the phenomenon of norm concentration that occurs in high-dimensional spaces. This feature is shown to reinforce the advantage of Simbed over other embedding techniques in experiments with a face database.
2024, Proceedings of COMPSTAT'2010
Similarity-based embedding is a paradigm that recently gained interest in the field of nonlinear dimensionality reduction. It provides an elegant framework that naturally emphasizes the preservation of the local structure of the data set. An emblematic method in this trend is t-distributed stochastic neighbor embedding (t-SNE), which is acknowledged to be an efficient method in the recent literature. This paper aims at analyzing the reasons of this success, together with the impact of the two metaparameters embedded in the method. Moreover, the paper shows that t-SNE can be interpreted as a distance-preserving method with a specific distance transformation, making the link with existing methods. Experiments on artificial data support the theoretical discussion.
2024, Pattern Recognition Letters
Dimensionality reduction aims at representing high-dimensional data in low-dimensional spaces, in order to facilitate their visual interpretation. Many techniques exist, ranging from simple linear projections to more complex nonlinear transformations. The large variety of methods emphasizes the need for quality criteria that allow fair comparisons between them. This paper extends previous work about rank-based quality criteria and proposes to circumvent their scale dependency. Most dimensionality reduction techniques indeed rely on a scale parameter that distinguishes between local and global data properties. Such a scale dependency can similarly be found in usual quality criteria: they assess the embedding quality on a certain scale. Experiments with various dimensionality reduction techniques eventually show the strengths and weaknesses of the proposed scale-independent criteria.
2024, Neurocomputing
Stochastic neighbor embedding (SNE) and its variants are methods of dimensionality reduction (DR) that involve normalized softmax similarities derived from pairwise distances. These methods try to reproduce in the low-dimensional embedding space the similarities observed in the high-dimensional data space. Their outstanding experimental results, compared to previous state-of-the-art methods, originate from their capability to foil the curse of dimensionality. Previous work has shown that this immunity stems partly from a property of shift invariance that allows appropriately normalized softmax similarities to mitigate the phenomenon of norm concentration. This paper investigates a complementary aspect, namely, the cost function that quantifies the mismatch between similarities computed in the high- and low-dimensional spaces. Stochastic neighbor embedding and its variant t-SNE rely on a single Kullback-Leibler divergence, whereas a weighted mixture of two dual KL divergences is used in neighborhood retrieval and visualization (NeRV). We propose in this paper a different mixture of KL divergences, which is a scaled version of the generalized Jensen-Shannon divergence. We show experimentally that this divergence produces embeddings that better preserve small K-ary neighborhoods, as compared to both the single KL divergence used in SNE and t-SNE and the mixture used in NeRV. These results allow us to conclude that future improvements in similarity-based DR will likely emerge from better definitions of the cost function.
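A generic weighted Jensen-Shannon-type mixture of KL divergences, each term comparing a distribution to their weighted mixture, can be written in a few lines; the scaling used in the paper may differ from this plain definition.

```python
# Generic weighted Jensen-Shannon-type divergence between two discrete similarity
# distributions P and Q (the paper uses a scaled variant; exact scaling may differ).
import numpy as np

def kl(p, q, eps=1e-12):
    p, q = np.clip(p, eps, None), np.clip(q, eps, None)
    return float(np.sum(p * np.log(p / q)))

def generalized_js(p, q, kappa=0.5):
    """D = kappa*KL(P||M) + (1-kappa)*KL(Q||M), with M the kappa-weighted mixture."""
    m = kappa * p + (1.0 - kappa) * q
    return kappa * kl(p, m) + (1.0 - kappa) * kl(q, m)

# Example: similarities of one point to its neighbours in the data vs. embedding space
p = np.array([0.5, 0.3, 0.15, 0.05])          # high-dimensional similarities
q = np.array([0.4, 0.4, 0.1, 0.1])            # low-dimensional similarities
print("KL(P||Q)      =", round(kl(p, q), 4))
print("JS_kappa(P,Q) =", round(generalized_js(p, q, kappa=0.3), 4))
```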
2024, International Journal of Electrical and Computer Engineering (IJECE)
Information-theoretic measures play a vital role in training learning systems. Many researchers have proposed non-parametric entropy estimators that have applications in adaptive systems. In this work, a kernel density estimator using Kapur entropy of order α and type β is proposed and discussed with the help of theorems and properties. From the results, it is observed that the proposed density measure is consistent, minimal, and smooth for the underlying probability density function (PDF) under the given conditions, as validated with the help of theorems and properties. The objective of the paper is to explain the theoretical viewpoint behind the underlying concept.
2024, SIAM Journal on Applied Algebra and Geometry
A bottleneck of a smooth algebraic variety X ⊂ C^n is a pair (x, y) of distinct points x, y ∈ X such that the Euclidean normal spaces at x and y contain the line spanned by x and y. The narrowness of bottlenecks is a fundamental complexity measure in the algebraic geometry of data. In this paper we study the number of bottlenecks of affine and projective varieties, which we call the bottleneck degree. The bottleneck degree is a measure of the complexity of computing all bottlenecks of an algebraic variety, using for example numerical homotopy methods. We show that the bottleneck degree is a function of classical invariants such as Chern classes and polar classes. We give the formula explicitly in low dimension and provide an algorithm to compute it in the general case.
2024, HAL (Le Centre pour la Communication Scientifique Directe)
This paper is devoted to thoroughly investigating how to bootstrap the ROC curve, a widely used visual tool for evaluating the accuracy of test/scoring statistics s(X) in the bipartite setup. The issue of confidence bands for the ROC curve is considered and a resampling procedure based on a smooth version of the empirical distribution called the "smoothed bootstrap" is introduced. Theoretical arguments and simulation results are presented to show that the "smoothed bootstrap" is preferable to a "naive" bootstrap in order to construct accurate confidence bands.
2024, arXiv (Cornell University)
The purpose of this paper is to introduce Harvey-Lawson manifolds and review the construction of certain "mirror dual" Calabi-Yau submanifolds inside a G2 manifold. More specifically, given a Harvey-Lawson manifold HL, we explain how to assign a pair of tangent bundle valued 2- and 3-forms to a G2 manifold (M, HL, ϕ, Λ), with the calibration 3-form ϕ and an oriented 2-plane field Λ. As in [AS2] these forms can then be used to define different complex and symplectic structures on certain 6-dimensional subbundles of T(M). When these bundles are integrated they give mirror CY manifolds (related through HL manifolds).
2024, Computer Vision – ECCV 2006
In recent years, nonlinear dimensionality reduction (NLDR) techniques have attracted much attention in visual perception and many other areas of science. We propose an efficient algorithm called Riemannian manifold learning (RML). A Riemannian manifold can be constructed in the form of a simplicial complex, and thus its intrinsic dimension can be reliably estimated. The NLDR problem is then solved by constructing Riemannian normal coordinates (RNC). Experimental results demonstrate that our algorithm can learn the data's intrinsic geometric structure, yielding uniformly distributed and well-organized low-dimensional embedding data.
2024
High dimensional data is difficult to visualize, and as the dimensionality increases, the data starts behaving in an unexpected manner. Dimensionality reduction is a technique used to summarize a large set of input parameters into a smaller set with little or no redundancy, and to analyse the reduced form of the high-dimensional data. Redundancy in data leads to the parameters that can characterize other sets of units not becoming independent from each other. Therefore, the units that can be replaced by others are removed and the data set is made smaller. All these factors have increased the demand for dimensionality reduction techniques in various industries such as healthcare.

This paper discusses existing linear and non-linear techniques in dimensionality reduction, and aims to find the best technique for performing dimensionality reduction across industries such as engineering, health and medicine, banking, marketing, finance and so on. The paper first discusses a linear method, known as Principal Component Analysis (PCA). PCA models have been created in Python to prove that they improve the performance and efficiency of machine learning algorithms. In certain cases, PCA fails to deliver results, which leads to finding better techniques such as Kernel Principal Component Analysis, Linear Discriminant Analysis and t-Distributed Stochastic Neighbour Embedding. These techniques are more flexible with the data available. Therefore, the second part of the paper discusses these three non-linear techniques in detail and compares their performance with that of PCA. Enhancements are made on the applied models to provide better results with four different data sets. All the techniques used have been judged on their accuracy scores, time taken for operation, computational complexity and applicability in different scenarios. The uncertainty involved with all the experiments has also been illustrated in this report.

The paper provides mathematical derivations, the design of the experiment, Python code developed for experimentation, applications of dimensionality reduction techniques across different industries, results of the experiments and analysis of the results for all the methodologies discussed, thus providing the best techniques for dimensionality reduction.
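The kind of comparison described, a linear reducer versus a nonlinear one as preprocessing for a classifier, scored by accuracy and runtime, can be reproduced with scikit-learn pipelines; the dataset and parameters below are placeholders rather than the paper's four data sets.

```python
# Sketch of the comparison described above: PCA vs. Kernel PCA as preprocessing for a classifier,
# scored by accuracy and wall-clock time (toy dataset and parameters).
import time
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA, KernelPCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

reducers = {
    "PCA": PCA(n_components=30),
    "KernelPCA (RBF)": KernelPCA(n_components=30, kernel="rbf", gamma=0.01),
}
for name, red in reducers.items():
    clf = make_pipeline(StandardScaler(), red, LogisticRegression(max_iter=2000))
    t0 = time.perf_counter()
    clf.fit(X_tr, y_tr)
    acc = clf.score(X_te, y_te)
    print(f"{name:16s} accuracy={acc:.3f} time={time.perf_counter() - t0:.2f}s")
```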