Anand Rangarajan - Profile on Academia.edu
Papers by Anand Rangarajan
Proceedings of SPIE, May 13, 2011
In this work, we propose DEformable BAyesian Networks (DEBAN), a probabilistic graphical model framework where model selection and statistical inference can be viewed as two key ingredients in the same iterative process. While this concept has shown successful results in the computer vision community [1-4], our proposed approach generalizes the concept such that it is applicable to any data type. Our goal is to infer the optimal structure/model to fit the given observations. The optimal structure conveys an automatic way to find not only the number of clusters in the data set but also the multiscale graph structure illustrating the dependence relationships among the variables in the network. Finally, the marginal posterior distribution at each root node is regarded as the fused information of its corresponding observations, and the most probable state can be found as the maximum a posteriori (MAP) solution, with the uncertainty of the estimate retained in the form of a probability distribution, which is desirable for a variety of applications.
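As a rough illustration of the inference ingredient described above (not DEBAN itself, which also learns the structure), the following Python sketch runs a sum-product upward pass on a small fixed tree with discrete states: the root belief plays the role of the fused marginal posterior, its argmax is the MAP state, and the full distribution is retained as the uncertainty. The tree, potentials, and evidence are invented toy values.

```python
import numpy as np

def upward_pass(children, leaf_evidence, edge_potentials, n_states):
    """Collect messages from leaves to the root of a fixed tree and
    return the normalized root marginal. `children[v]` lists the
    children of node v; leaves carry likelihood vectors in
    `leaf_evidence`; `edge_potentials[(parent, child)]` is an
    (n_states x n_states) compatibility table."""
    def message_to(v):
        # Belief at v: product of messages from its subtrees (or its evidence).
        if not children[v]:
            return leaf_evidence[v]
        belief = np.ones(n_states)
        for c in children[v]:
            # Marginalize the child state out through the edge potential.
            belief *= edge_potentials[(v, c)] @ message_to(c)
        return belief

    root_belief = message_to(0)            # node 0 is the root by convention
    return root_belief / root_belief.sum()

# Toy example: root 0 with two leaf children, 3 states each.
children = {0: [1, 2], 1: [], 2: []}
evidence = {1: np.array([0.7, 0.2, 0.1]), 2: np.array([0.1, 0.1, 0.8])}
psi = np.full((3, 3), 0.1) + 0.7 * np.eye(3)   # states tend to agree
potentials = {(0, 1): psi, (0, 2): psi}

posterior = upward_pass(children, evidence, potentials, n_states=3)
map_state = int(np.argmax(posterior))          # MAP answer ...
print(posterior, map_state)                    # ... with full uncertainty kept
```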
arXiv (Cornell University), Mar 31, 2016
Traditional language models treat language as a finite state automaton on a probability space over words. This is a very strong assumption when modeling something as inherently complex as language. In this paper, we challenge this by showing how the linear chain assumption inherent in previous work can be translated into a sequential composition tree. We then propose a new model that marginalizes over all possible composition trees, thereby removing any underlying structural assumptions. As the partition function of this new model is intractable, we use a recently proposed sentence-level evaluation metric, Contrastive Entropy, to evaluate our model. Under this metric, we report more than 100% improvement across distortion levels over current state-of-the-art recurrent neural network based language models.
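To make the marginalization concrete: summing over all binary composition trees of a sentence can be organized with a CKY-style inside recursion. The sketch below assumes a hypothetical log-scoring function `compose_score` for merging adjacent spans; the actual model's scores and partition-function details are not reproduced here.

```python
import numpy as np
from functools import lru_cache

def inside_score(words, compose_score):
    """Sum (in log space) over all binary composition trees of a sentence.
    `compose_score(i, k, j)` is a hypothetical log-score for merging
    span [i, k) with span [k, j); a real model would derive it from
    learned span representations."""
    n = len(words)

    @lru_cache(maxsize=None)
    def inside(i, j):
        if j - i == 1:
            return 0.0                      # log-score of a single-word span
        # Log-sum-exp over every split point of the span [i, j).
        scores = [inside(i, k) + inside(k, j) + compose_score(i, k, j)
                  for k in range(i + 1, j)]
        return float(np.logaddexp.reduce(scores))

    return inside(0, n)

# Toy usage with an arbitrary stand-in scoring function.
words = "the cat sat".split()
total = inside_score(words, compose_score=lambda i, k, j: -0.5 * (j - i))
print(total)   # log of the sum over all (Catalan-many) binary trees
```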
In this work, we employ the well-known Hamilton-Jacobi to Schrödinger connection to present a unified framework for computing both the Euclidean distance function and its gradient density in two dimensions. Previous work in this direction considered two different formalisms for independently computing these quantities. While the two formalisms are very closely related, their lack of integration is theoretically troubling and practically cumbersome. We introduce a novel Schrödinger wave function for representing the Euclidean distance transform from a discrete set of points. An approximate distance transform is computed from the magnitude of the wave function, while the gradient density is estimated from the Fourier transform of the phase of the wave function. In addition to its simplicity and efficient O(N log N) computation, we prove that the wave function-based density estimator increasingly closely approximates the distance transform gradient density (as a free parameter approaches zero), with the added benefit of not requiring the true distance function.
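The magnitude-based distance transform has a convenient log-sum-exp reading: a sum of exponential kernels exp(-d_k/τ) over the source points, whose negative τ-scaled logarithm approaches the true Euclidean distance transform as τ → 0. The sketch below uses the direct O(N·M) sum for clarity rather than the paper's O(N log N) FFT-based computation; the grid, sources, and τ are toy choices.

```python
import numpy as np

def approximate_distance_transform(grid_points, sources, tau=0.05):
    """Soft (log-sum-exp) approximation to the Euclidean distance
    transform from a discrete point set:
        d_tau(x) = -tau * log(sum_k exp(-||x - x_k|| / tau)),
    which approaches min_k ||x - x_k|| as tau -> 0."""
    # Pairwise distances between grid locations and source points.
    d = np.linalg.norm(grid_points[:, None, :] - sources[None, :, :], axis=-1)
    # Stabilize the log-sum-exp by factoring out the per-row minimum.
    d_min = d.min(axis=1, keepdims=True)
    lse = np.exp(-(d - d_min) / tau).sum(axis=1, keepdims=True)
    return (d_min - tau * np.log(lse)).ravel()

# Toy usage on a small 2-D grid with two source points.
xs, ys = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
grid = np.stack([xs.ravel(), ys.ravel()], axis=1)
sources = np.array([[0.2, 0.3], [0.8, 0.7]])
dist = approximate_distance_transform(grid, sources, tau=0.02)
```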
Neural Information Processing Systems, Nov 29, 1993
With a point matching distance measure which is invariant under translation, rotation, and permutation, we learn 2-D point-set objects by clustering noisy point-set images. Unlike traditional clustering methods, which use distance measures that operate on feature vectors (a representation common to most problem domains), this object-based clustering technique employs a distance measure specific to a type of object within a problem domain. Formulating the clustering problem as two nested objective functions, we derive optimization dynamics similar to the Expectation-Maximization algorithm used in mixture models.
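As a hedged illustration of the nested-objective idea, the sketch below alternates between assigning point-set images to cluster prototypes under a translation- and rotation-invariant Procrustes distance and refitting the prototypes. Unlike the paper, it assumes point correspondences are known (no permutation optimization) and that all images contain the same number of points, so it is only a schematic of the EM-like dynamics.

```python
import numpy as np

def align(B, A):
    """Rotate/translate 2-D point set B (rows = points) onto A (Kabsch).
    Returns the centered, rotated B and the centered A."""
    A0, B0 = A - A.mean(0), B - B.mean(0)
    U, _, Vt = np.linalg.svd(B0.T @ A0)
    d = np.sign(np.linalg.det(U @ Vt))
    R = U @ np.diag([1.0, d]) @ Vt          # nearest proper rotation
    return B0 @ R, A0

def invariant_distance(A, B):
    """Distance invariant to translation and rotation (correspondences
    assumed known here; the paper also optimizes over permutations)."""
    B_aligned, A0 = align(B, A)
    return np.linalg.norm(A0 - B_aligned)

def cluster_point_sets(images, n_clusters, n_iters=20, seed=0):
    """EM-like alternation between assignments and prototype refits."""
    rng = np.random.default_rng(seed)
    protos = [images[i] for i in rng.choice(len(images), n_clusters, replace=False)]
    for _ in range(n_iters):
        labels = [int(np.argmin([invariant_distance(p, img) for p in protos]))
                  for img in images]
        for k in range(n_clusters):
            members = [img for img, l in zip(images, labels) if l == k]
            if members:
                # Align each member to the current prototype, then average.
                protos[k] = np.mean([align(m, protos[k])[0] for m in members], axis=0)
    return labels, protos
```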
arXiv (Cornell University), Mar 23, 2022
In this paper, we present a new self-supervised scene flow estimation approach for a pair of consecutive point clouds. The key idea of our approach is to represent discrete point clouds as continuous probability density functions using Gaussian mixture models. Scene flow estimation is thereby converted into the problem of recovering motion from the alignment of probability density functions, which we achieve using a closed-form expression of the classic Cauchy-Schwarz divergence. Unlike existing nearest-neighbor-based approaches that use hard pairwise correspondences, our proposed approach establishes soft and implicit point correspondences between point clouds and generates more robust and accurate scene flow in the presence of missing correspondences and outliers. Comprehensive experiments show that our method makes noticeable gains over the Chamfer Distance and the Earth Mover's Distance in real-world environments and achieves state-of-the-art performance among self-supervised learning methods on FlyingThings3D and KITTI, even outperforming some supervised methods with ground truth annotations.
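The workhorse quantity is the closed-form Cauchy-Schwarz divergence between two Gaussian mixtures. Assuming equal-weight isotropic components centered at the points (a simplification; the network, training loop, and bandwidth choices below are ours, not the paper's), it reduces to sums of Gaussian cross terms via the identity ∫ N(x; a, Σ1) N(x; b, Σ2) dx = N(a; b, Σ1 + Σ2):

```python
import numpy as np

def gaussian_cross_term(X, Y, sigma):
    """sum_{i,j} of the integral of N(x; x_i, s^2 I) N(x; y_j, s^2 I) dx,
    which equals N(x_i; y_j, 2 s^2 I) per pair."""
    d = X.shape[1]
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    norm = (2 * np.pi * (2 * sigma ** 2)) ** (-d / 2)
    return (norm * np.exp(-sq / (4 * sigma ** 2))).sum()

def cauchy_schwarz_divergence(X, Y, sigma=0.1):
    """Closed-form CS divergence between equal-weight isotropic GMMs
    centered at point sets X and Y:
        D_CS = -log(int pq) + 0.5 log(int p^2) + 0.5 log(int q^2)."""
    pq = gaussian_cross_term(X, Y, sigma) / (len(X) * len(Y))
    pp = gaussian_cross_term(X, X, sigma) / (len(X) ** 2)
    qq = gaussian_cross_term(Y, Y, sigma) / (len(Y) ** 2)
    return -np.log(pq) + 0.5 * np.log(pp) + 0.5 * np.log(qq)

# In a (hypothetical) training loop, a predicted flow F would warp the
# source cloud and the divergence would serve as the self-supervised loss:
# loss = cauchy_schwarz_divergence(source + F, target, sigma=0.1)
```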
Advances in Pure Mathematics, 2019
We prove that the density function of the gradient of a sufficiently smooth function S : Ω ⊂ R^d → R, obtained via a random variable transformation of a uniformly distributed random variable, is increasingly closely approximated by the normalized power spectrum of φ = exp(iS/τ) as the free parameter τ → 0. The result is shown using the stationary phase approximation and standard integration techniques and requires a proper ordering of limits. We highlight a relationship with the well-known characteristic function approach to density estimation, and detail why our result is distinct from this approach.
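A quick 1-D numerical check of the statement: sample S on a grid, form exp(iS/τ), and compare the normalized power spectrum against the empirical density of S′. The test function, grid, and τ below are arbitrary demo choices; the frequency-to-gradient mapping uses the convention that a Fourier mode exp(2πifx) corresponds to S′ = 2πτf.

```python
import numpy as np

# 1-D numerical illustration: the normalized power spectrum of
# exp(i*S/tau) approximates the density of S' as tau -> 0.
n, tau = 2 ** 14, 1e-3
x = np.linspace(0.0, 1.0, n, endpoint=False)
S = np.sin(2 * np.pi * x) / (2 * np.pi)          # smooth test function
phi = np.exp(1j * S / tau)

# Power spectrum, normalized to sum to one over FFT bins.
power = np.abs(np.fft.fft(phi)) ** 2
power /= power.sum()
# FFT frequencies map to gradient values S' = 2*pi*tau*f.
grads = 2 * np.pi * tau * np.fft.fftfreq(n, d=x[1] - x[0])

# Compare against the empirical density of S'(x) = cos(2*pi*x).
true_grads = np.cos(2 * np.pi * x)
hist, _ = np.histogram(true_grads, bins=50, range=(-1, 1), density=True)
```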
arXiv (Cornell University), Sep 5, 2022
We present a new approach to unsupervised shape correspondence learning between pairs of point clouds. We make the first attempt to adapt the classical locally linear embedding (LLE) algorithm, originally designed for nonlinear dimensionality reduction, to shape correspondence. The key idea is to find dense correspondences between shapes by first obtaining high-dimensional neighborhood-preserving embeddings of low-dimensional point clouds and subsequently aligning the source and target embeddings using locally linear transformations. We demonstrate that learning the embedding using a new LLE-inspired point cloud reconstruction objective results in accurate shape correspondences. More specifically, the approach comprises an end-to-end learnable framework of extracting high-dimensional neighborhood-preserving embeddings, estimating locally linear transformations in the embedding space, and reconstructing shapes via divergence measure-based alignment of probability density functions built over reconstructed and target shapes. Our approach enforces embeddings of shapes in correspondence to lie in the same universal/canonical embedding space, which helps regularize the learning process and leads to a simple nearest-neighbors approach between shape embeddings for finding reliable correspondences. Comprehensive experiments show that the new method makes noticeable improvements over state-of-the-art approaches on standard shape correspondence benchmark datasets covering both human and nonhuman shapes.
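The classical building block being adapted is the LLE reconstruction-weight solve: for each point, find affine weights over its neighbors that best reconstruct it. A minimal sketch of that solve follows (the paper's learned, end-to-end pipeline is not reproduced here):

```python
import numpy as np

def lle_weights(x, neighbors, reg=1e-3):
    """Classical LLE reconstruction weights for one point: minimize
    ||x - sum_j w_j n_j||^2 subject to sum_j w_j = 1, solved via the
    regularized local Gram matrix. This is the building block the
    paper adapts, not its full learned pipeline."""
    Z = neighbors - x                        # shift neighbors to x's frame
    G = Z @ Z.T                              # local Gram matrix (k x k)
    G += reg * np.trace(G) * np.eye(len(G))  # regularize near-singular G
    w = np.linalg.solve(G, np.ones(len(G)))
    return w / w.sum()                       # enforce the sum-to-one constraint

# Toy usage: reconstruct a 3-D point from 4 neighbors.
rng = np.random.default_rng(0)
x = rng.normal(size=3)
nbrs = x + 0.1 * rng.normal(size=(4, 3))
w = lle_weights(x, nbrs)
recon = w @ nbrs                             # approximately x if neighbors span x
```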
This paper presents the Data-Driven Tree-structured Bayesian network (DDT), a novel probabilistic graphical model for hierarchical unsupervised image segmentation. Like [1, 2], DDT captures long- and short-range correlations between neighboring regions in each image using a tree-structured prior. Unlike previous work, DDT first segments an input image into superpixels and learns a tree-structured prior based on the topology of the superpixels at different scales. Such a tree structure is referred to as a data-driven tree structure. Each superpixel is represented by a variable node taking a discrete value of the segmentation class/label, and the probabilistic relationships among the nodes are represented by edges in the network. Unsupervised image segmentation can hence be viewed as an inference problem over the nodes in the tree structure of DDT, which can be carried out efficiently. We quantitatively evaluate our results with respect to ground-truth segmentations, demonstrating that our proposed framework performs competitively with the state of the art in unsupervised image segmentation and contour detection.
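To give a flavor of what a data-driven tree over superpixels might look like, the sketch below greedily merges the most similar adjacent regions and records parent links, so later nodes correspond to coarser scales. The merging rule and features are illustrative stand-ins, not the paper's construction:

```python
import numpy as np

def build_region_tree(features, adjacency):
    """Greedy sketch of a data-driven tree: repeatedly merge the most
    similar pair of adjacent regions, recording parent links. `features`
    are (e.g.) mean-color vectors of superpixels; `adjacency` is a set
    of index pairs. Illustration only, not the paper's construction."""
    features = {i: f for i, f in enumerate(features)}
    edges = {tuple(sorted(e)) for e in adjacency}
    parent, next_id = {}, len(features)
    while edges:
        # Most similar adjacent pair under squared feature distance.
        a, b = min(edges, key=lambda e: np.sum((features[e[0]] - features[e[1]]) ** 2))
        parent[a] = parent[b] = next_id
        features[next_id] = (features[a] + features[b]) / 2.0
        # Rewire edges touching a or b to the new merged node.
        edges = {tuple(sorted((next_id if u in (a, b) else u,
                               next_id if v in (a, b) else v)))
                 for (u, v) in edges if {u, v} != {a, b}}
        edges = {e for e in edges if e[0] != e[1]}   # drop self-loops
        next_id += 1
    return parent   # tree over superpixels; inference then runs on it

# Toy usage: four superpixels in a 2x2 grid.
feats = [np.array([0.1]), np.array([0.15]), np.array([0.9]), np.array([0.85])]
adj = [(0, 1), (0, 2), (1, 3), (2, 3)]
tree = build_region_tree(feats, adj)
```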
Springer eBooks, 2005
Matching 3D shapes is important in many medical imaging applications. We show that a joint clustering and diffeomorphism estimation strategy is capable of simultaneously estimating correspondences and a diffeomorphism between unlabeled 3D point-sets. Correspondence is established between the cluster centers, and this is coupled with a simultaneous estimation of a 3D diffeomorphism of space. The number of clusters can be estimated by minimizing the Jensen-Shannon divergence on the registered data. We apply our algorithm to both synthetically warped and real 3D hippocampal shapes from different subjects.
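The model-selection rule can be illustrated with any consistent estimator of the Jensen-Shannon divergence. Below is a crude KDE-based version; the `register` routine referenced in the comment is hypothetical and stands in for the joint clustering/diffeomorphism step:

```python
import numpy as np
from scipy.stats import gaussian_kde

def jensen_shannon(samples_p, samples_q, grid):
    """JS divergence between two sample sets, estimated with Gaussian
    KDEs evaluated on a shared grid. A crude stand-in for the paper's
    estimator, used only to illustrate the selection rule."""
    p = gaussian_kde(samples_p.T)(grid) + 1e-12
    q = gaussian_kde(samples_q.T)(grid) + 1e-12
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log(a / b))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Toy 1-D usage.
rng = np.random.default_rng(0)
a, b = rng.normal(0.0, 1.0, (500, 1)), rng.normal(0.5, 1.0, (500, 1))
grid = np.linspace(-4.0, 4.0, 256)
print(jensen_shannon(a, b, grid))

# Model selection idea: register the point sets for each candidate
# cluster count K, then keep the K with the smallest divergence
# (hypothetical `register` supplied elsewhere):
# best_K = min(Ks, key=lambda K: jensen_shannon(*register(X, Y, K), grid))
```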
arXiv (Cornell University), Mar 25, 2020
We propose a novel integrated formulation for multiclass and multilabel support vector machines (SVMs). A number of approaches have been proposed to extend the original binary SVM to an all-in-one multiclass SVM. However, its direct extension to a unified multilabel SVM has not been widely investigated. We propose a straightforward extension to the SVM to cope with multiclass and multilabel classification problems within a unified framework. Our framework deviates from the conventional soft margin SVM framework with its direct oppositional structure. In our formulation, class-specific weight vectors (normal vectors) are learned by maximizing their margin with respect to an origin and penalizing patterns when they get too close to this origin. As a result, each weight vector chooses an orientation and a magnitude with respect to this origin in such a way that it best represents the patterns belonging to its corresponding class. Opposition between classes is introduced into the formulation via the minimization of pairwise inner products of weight vectors. We also extend our framework to cope with nonlinear separability via standard reproducing kernel Hilbert spaces (RKHS). Biases, which are closely related to the origin, need to be treated properly in both the original feature space and the Hilbert space. We have the flexibility to incorporate constraints into the formulation (if they better reflect the underlying geometry) and improve the performance of the classifier. To this end, specifics and technicalities such as the origin in RKHS are addressed. Results demonstrate a competitive classifier for both multiclass and multilabel classification problems.
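As a sketch of the objective just described, the loss below combines a hinge penalty keeping each class's own patterns at margin at least 1 from the origin with a pairwise inner-product opposition term between class weight vectors. The exact penalty forms, names, and hyperparameters are our illustrative choices, not the paper's equations:

```python
import numpy as np

def multilabel_svm_loss(W, X, Y, lam=1.0, mu=0.1):
    """Illustrative loss: each class weight vector w_c keeps its own
    patterns at margin >= 1 from the origin (hinge on w_c . x), while
    pairwise inner products between weight vectors push classes apart.
    W: (C, d) weights; X: (n, d) patterns; Y: (n, C) 0/1 label matrix."""
    scores = X @ W.T                              # (n, C) pattern-class scores
    hinge = np.maximum(0.0, 1.0 - scores) * Y     # only a pattern's own classes
    opposition = np.sum(np.triu(W @ W.T, k=1))    # pairwise w_c . w_c' terms
    regularizer = 0.5 * lam * np.sum(W ** 2)
    return hinge.sum() + mu * opposition + regularizer

# Toy usage: 3 classes, 5 patterns, multilabel targets.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))
X = rng.normal(size=(5, 4))
Y = (rng.random((5, 3)) < 0.4).astype(float)
print(multilabel_svm_loss(W, X, Y))
```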
Russellian monism (RM) has emerged as a leading contender for a solution to the mind-body problem. Since we cannot expect deep familiarity with it, we briefly summarize the essentials before turning to our model of RM. RM basically agrees with dualism that qualia cannot be easily derived from the physical. At the same time, RM sides with physicalism, which denies that qualia are fundamental and not grounded in anything. How can RM hold both perspectives at the same time? The answer is that RM posits a deeper and more fundamental reality from which both the conventional physical and qualia emerge. Consequently, in RM we do not assume that fundamental particles are the ground of the physical and the phenomenal, thereby opening a space for grounding qualia in a deeper physical. Given this starting point, the rest is entirely straightforward: we use category theory (a leading mathematical framework) to model the mappings from new foundations to both the conventional physical and qualia. Since we cannot expect a background in category theory, we briefly summarize its essentials. Category theory is a framework for modeling compositional aspects of systems. Given a basic category (comprising objects and morphisms, or transformations between pairs of objects), the mapping from one category to another is accomplished via functors (mappings which take objects in one category to objects in another category). Since this may appear to be very abstract, consider the illuminating example from quantum field theory: the mapping from quantum fields to particles, a.k.a. second quantization, is a functor. Therefore, even in present-day physics, we can conceive of category theory driven mappings at work, raising foundational questions as to the origin of fundamental particles. Given the above, it is natural to ask whether RM can be formulated using category theory and, if so, what this entails for both the physical and the phenomenal. We believe the answer lies in a middle-out, as opposed to either top-down or bottom-up, implementation. We can begin from a foundational category which is presently unknown and first create functors to the category of fundamental particles (like fermions and bosons) and fields. This should not be too controversial, since the ontology of quantum fields is far from settled at present. At the same time, we can construct a second functor from the foundational category to the category of phenomenal objects, which we call "selfons" in homage to fermions and bosons (which are seen as also emerging from this new foundation). Qualia in this framework are properties of selfons and grounded in the selfon category. We now circle back to our original RM-based intuition, which eschews both dualism and present-day physicalism. Since selfons are derived from a second functor, this model asserts that qualia cannot be grounded in the physical (if that is taken to mean the category of fundamental particles). But qualia are not foundational either, since selfons are derived from a new RM-based foundational category. Thereby, we obtain a middle-out model of Russellian monism accommodating both the physical and the phenomenal.
The Science of Consciousness, Taormina, Sicily, 2023
We begin with the premise that Strawson's thin subject and Zahavi's minimal self are indispensable concepts in consciousness studies. While these concepts eschew naive Cartesianism, they affirm something basic in our phenomenology: that experience is always accompanied by a subject or perspective, and this needs to be acknowledged in some form. We also note that there is a widespread belief that Buddhist studies affirm the concept of no-self, which at first glance seems to be at odds with the thin subject and/or minimal self. The goal of this paper is to show that there is a Tibetan Buddhist tradition which affirms the thin subject (and/or minimal self), and this fact should be more widely disseminated in consciousness studies. A brief description of Tibetan Mahamudra follows: there are four yogas/stages in the Mahamudra tradition. In the first, emptiness-of-self yoga, the practitioner is encouraged to discover that their prior (Cartesian) assumption of a fixed self is mistaken and that their phenomenology is filled with dynamic content. In the second, emptiness-of-the-world stage, the practitioner is encouraged to see that the manifold of phenomenal content is itself empty and that underneath it lies a vast awareness. In the third, One Taste stage, the practitioner begins to see that awareness and phenomenal content are always co-present and simultaneous, and appreciates the danger of reifying awareness. In the fourth stage, the practitioner sees meditation itself as a construct and starts noticing that (an awakened) awareness comes and goes accompanied by phenomenal content. It is this vital post-meditation stage that interests us. Here, the return of awareness is marked as significant and is seen as responsible for the very creation of a limited perspective connecting awareness and bounded phenomenal content. In fact, this significant return of awareness is taken up further by the Dzogchen traditions, with the Mahamudra yogas acting as a base of operations. We mention all this to point out the crucial role played by the post-meditation stage in Mahamudra: the self or a perspective, awareness, and phenomenal content all return together and then disappear. This mimics the descriptions of Strawson and Zahavi to such a remarkable degree that it is surprising that we don't see discussions in consciousness studies focused on this concordance. In contrast, we see recent attempts by philosophers such as Garfield to proceed in the opposite direction and discard the thin subject while claiming support from the Buddhist traditions. We show that this approach is misguided and results in not taking either experience or awareness seriously. To summarize, the return of awareness/phenomenal content in a spatio-temporally bounded form cannot be denied. Buddhist Mahamudra explicitly acknowledges this via the metaphor of taking the perspective of the flashlight (and not focusing either on the illuminated content or on the holder of the flashlight). This thin subject has qualia (sensations, perceptions, emotions, etc.) and is capable of intentional acts. Situating the thin subject in the physical world, instead of eliminating it, should therefore be a paramount concern of consciousness studies.
We begin with the assumption that all emergentist approaches are inadequate to solve the hard problem of experience. Consequently, it is hard to escape the conclusion that consciousness is fundamental and that some form of panpsychism is true. Unfortunately, panpsychism faces the combination problem: why should proto-experiences combine to form full-fledged experiences? Since the combination problem has resisted many attempts, we argue for compositionality as the missing ingredient needed to explain mid-level experiences such as ours. Since this is controversial, we carefully present the full argument below. To begin, we assume, following Frege, that experience cannot exist without being accompanied by a subject of experience (SoE). An SoE provides the structural and spatio-temporally bounded "container" for experience and, following Strawson, is conceived as a thin subject. Thin subjects exhibit a phenomenal unity with different types of phenomenal content (sensations, thoughts, etc.) occurring during their temporal existence. Next, following Stoljar, we invoke our ignorance of the true physical as the reason for the explanatory gap between present-day physical processes (events, properties) and experience. We are therefore permitted to conceive of thin subjects as physical compositions. Compositionality has been an intensely studied area in the past twenty years. While there is no clear consensus here, we argue, following Koslicki, that a case can be made for a restricted compositionality principle and that thin subjects are physical compositions of a certain natural kind. In this view, SoEs are natural kind objects with a yet-to-be-specified compositionality relation connecting them to the physical world. The specifics of this relation will be detailed by a new physics, and at this juncture all we can provide are guiding metaphors. We suggest that the relation binding an SoE to the physical is akin to the relation between a particle and a field. In present-day physics, a particle is conceived as a coherent excitation of a field and is spatially and temporally bounded (with the photon being the sole exception). Under the right set of circumstances, a particle coalesces out of a field and dissipates. We suggest that an SoE can be conceived as akin to a particle coalescing out of physical fields, persisting for a brief period of time and then dissipating, in a manner similar to the phenomenology of a thin subject. Experiences are physical properties of SoEs, with the constraint (specified by a similarity metric) that SoEs belonging to the same natural kind will have similar experiences. The counter-intuitive aspect of this proposal is the unexpected "complexity" exhibited by SoE particles, but we have been prepared for this by the complex behavior of elementary particles over ninety years of experimental physics. Consequently, while it is odd at first glance to conceive of subjects of experience as particles, the spatial and temporal unity exhibited by particles (as opposed to fields) and the expectation that SoEs are new kinds of particles pave the way for cementing this notion. Panpsychism and compositionality are therefore new bedfellows, aiding us in resolving the hard problem.