Bayesian sparse factor analysis with kernelized observations

Effective Bayesian inference for sparse factor analysis models

2011

We study how to perform effective Bayesian inference in high-dimensional sparse Factor Analysis models with a zero-norm, sparsity-inducing prior on the model parameters. Such priors represent a methodological ideal, but Bayesian inference in such models is usually regarded as impractical. We test this view. After empirically characterising the properties of existing algorithmic approaches, we use techniques from statistical mechanics to derive a theory of optimal learning in the restricted setting of sparse PCA with a single factor. Finally, we describe a novel 'Dense Message Passing' algorithm (DMP) which achieves near-optimal performance on synthetic data generated from this model. DMP exploits properties of high-dimensional problems to operate successfully on a densely connected graphical model. Similar algorithms have been developed in the statistical physics community and previously applied to inference problems in coding and sparse classification. We demonstrate that DMP ou...
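As a concrete illustration of the model, here is a minimal NumPy sketch that generates synthetic data from single-factor sparse PCA with a zero-norm (sparse-support) prior; the dimensions, sparsity level, and the name `sample_sparse_pca_data` are assumptions for the example, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_sparse_pca_data(n_dims=1000, n_samples=200, rho=0.05, noise_std=0.1):
    """Synthetic data from a single-factor sparse PCA model: a fraction rho
    of the loading vector w is nonzero (zero-norm prior), and each
    observation is y_t = x_t * w + Gaussian noise."""
    mask = rng.random(n_dims) < rho                 # sparse support
    w = np.where(mask, rng.normal(size=n_dims), 0.0)
    x = rng.normal(size=n_samples)                  # latent factor values
    Y = np.outer(x, w) + noise_std * rng.normal(size=(n_samples, n_dims))
    return Y, w, x

Y, w_true, x_true = sample_sparse_pca_data()
print(Y.shape, np.count_nonzero(w_true))
```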

Bayesian inference in kernel feature space

2012

We present a framework for Bayesian estimation in kernel feature space, with implicit statistical inference in a high- or even infinite-dimensional feature space. As in kernel PCA, this space is related to the input space by a nonlinear map applied to all entities of interest. Inference is performed by means of a Gaussian model in the feature space, which transforms into a non-Gaussian posterior pdf in the input space. Thanks to the kernel trick and the idea of kernel PCA, only scalar products of elements in the input space and low-dimensional vectors need to be computed.
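For orientation, a small NumPy sketch of the kernel PCA machinery the framework builds on, where only scalar products between inputs (kernel evaluations) are ever computed; the Gaussian model in feature space and the resulting non-Gaussian posterior are not reproduced here, and the RBF kernel choice is an assumption:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_pca(X, n_components=2, gamma=1.0):
    """Kernel PCA via the kernel trick: double-center the Gram matrix
    in feature space, then eigendecompose."""
    n = X.shape[0]
    K = rbf_kernel(X, X, gamma)
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one    # centering in feature space
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]
    alphas = vecs[:, idx] / np.sqrt(np.maximum(vals[idx], 1e-12))
    return Kc @ alphas                             # projections of the inputs

X = np.random.default_rng(1).normal(size=(50, 3))
print(kernel_pca(X).shape)                         # (50, 2)
```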

Inference algorithms and learning theory for Bayesian sparse factor analysis

Journal of Physics: Conference Series, 2009

Bayesian sparse factor analysis has many applications; for example, it has been applied to the problem of inferring a sparse regulatory network from gene expression data. We describe a number of inference algorithms for Bayesian sparse factor analysis using a slab and spike mixture prior. These include well-established Markov chain Monte Carlo (MCMC) and variational Bayes (VB) algorithms as well as a novel hybrid of VB and Expectation Propagation (EP). For the case of a single latent factor we derive a theory of learning performance using the replica method. We compare results from the MCMC and VB/EP algorithms on simulated data with the theoretical predictions. The MCMC results agree closely with the theory, as expected. Results for VB/EP are slightly sub-optimal but show that the new algorithm is effective for sparse inference. In large-scale problems MCMC is infeasible due to computational limitations, and the VB/EP algorithm then provides a very useful, computationally efficient alternative.
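The single-factor slab and spike model admits a compact Gibbs sampler, sketched below with hyperparameters assumed known for brevity; this is an illustrative implementation of the standard conditional updates, not the authors' code, and the VB/EP hybrid is not reproduced:

```python
import numpy as np

def gibbs_spike_slab(Y, n_iter=500, rho=0.05, sigma2=0.01, sigma_w2=1.0, seed=0):
    """Gibbs sampling for Y = x w^T + noise with a slab and spike prior:
    w_d = s_d * b_d, s_d ~ Bernoulli(rho), b_d ~ N(0, sigma_w2)."""
    rng = np.random.default_rng(seed)
    N, D = Y.shape
    w = rng.normal(size=D) * (rng.random(D) < rho)
    tau0 = 1.0 / sigma_w2
    for _ in range(n_iter):
        # x_n | w ~ N(m_n, v), with a standard Gaussian prior on x_n
        v = 1.0 / (1.0 + w @ w / sigma2)
        x = v * (Y @ w) / sigma2 + np.sqrt(v) * rng.normal(size=N)
        # (s_d, w_d) | x: the slab value is integrated out for the spike odds
        a = x @ x / sigma2
        b = (x @ Y) / sigma2
        tau_post = a + tau0
        log_odds = (np.log(rho / (1 - rho))
                    + 0.5 * np.log(tau0 / tau_post)
                    + 0.5 * b ** 2 / tau_post)
        p_on = 1.0 / (1.0 + np.exp(-np.clip(log_odds, -30, 30)))
        s = rng.random(D) < p_on
        w = np.where(s, b / tau_post + rng.normal(size=D) / np.sqrt(tau_post), 0.0)
    return w, x
```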

Bayesian Dimensionality Reduction With PCA Using Penalized Semi-Integrated Likelihood

Journal of Computational and Graphical Statistics, 2017

We discuss the problem of estimating the number of principal components in Principal Components Analysis (PCA). Despite the importance of the problem and the multitude of solutions proposed in the literature, it comes as a surprise that there does not exist a coherent asymptotic framework that would justify different approaches depending on the actual size of the data set. In this paper we address this issue by presenting an approximate Bayesian approach based on the Laplace approximation and introducing a general method for building model selection criteria, called PEnalized SEmi-integrated Likelihood (PESEL). Our general framework encompasses a variety of existing approaches based on probabilistic models, such as the Bayesian Information Criterion for Probabilistic PCA (PPCA), and allows for the construction of new criteria, depending on the size of the data set at hand and additional prior information. Specifically, we apply PESEL to derive two new criteria for data sets where the number of variables substantially exceeds the number of observations, a regime beyond the scope of existing approaches. We also report results of extensive simulation studies and real data analysis, which illustrate the good properties of our proposed criteria as compared to state-of-the-art methods and very recent proposals. In particular, these simulations show that PESEL-based criteria can be quite robust against deviations from the probabilistic model assumptions. Selected PESEL-based criteria for estimating the number of principal components are implemented in the R package varclust, available on GitHub (https://github.com/psobczyk/varclust).
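As a rough illustration of criteria in this family, the sketch below scores the number of components using the PPCA maximum log-likelihood plus a BIC-style penalty; this is a simplified stand-in, since the actual PESEL penalties derived in the paper differ:

```python
import numpy as np

def ppca_bic(X, k):
    """BIC-style criterion for k principal components under the PPCA model
    (illustrative; not the exact PESEL penalty)."""
    n, p = X.shape
    Xc = X - X.mean(0)
    lam = np.sort(np.linalg.eigvalsh(Xc.T @ Xc / n))[::-1]
    v = lam[k:].mean()                                # noise variance estimate
    ll = -0.5 * n * (np.log(lam[:k]).sum() + (p - k) * np.log(v)
                     + p * np.log(2 * np.pi) + p)
    n_params = p * k - k * (k - 1) / 2 + k + 1        # free PPCA parameters
    return ll - 0.5 * n_params * np.log(n)

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 10))
X[:, :3] += np.outer(rng.normal(size=200), np.ones(3))  # one planted component
print(max(range(1, 9), key=lambda k: ppca_bic(X, k)))   # pick k maximizing the criterion
```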

Multi-view Regression Via Canonical Correlation Analysis

Lecture Notes in Computer Science, 2007

In the multi-view regression problem, the input variable (a real vector) can be partitioned into two different views, and it is assumed that either view of the input is sufficient to make accurate predictions; this is essentially (a significantly weaker version of) the co-training assumption for the regression problem. We provide a semi-supervised algorithm which first uses unlabeled data to learn a norm (or, equivalently, a kernel) and then uses labeled data in a ridge regression algorithm (with this induced norm) to provide the predictor. The unlabeled data is used via canonical correlation analysis (CCA, which is closely related to PCA for two random variables) to derive an appropriate norm over functions. We are able to characterize the intrinsic dimensionality of the subsequent ridge regression problem (which uses this norm) by a rather simple expression in the correlation coefficients provided by CCA. Interestingly, the norm used by the ridge regression algorithm is derived from CCA, unlike in standard kernel methods where a special a priori norm is assumed (i.e. a Banach space is assumed). We discuss how this result shows that unlabeled data can decrease the sample complexity.
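A toy sketch of the two-stage procedure with scikit-learn: CCA on unlabeled two-view data, then ridge regression on a small labeled set. Note the simplification: the paper derives a CCA-induced norm for ridge regression, whereas this example simply regresses in the CCA coordinates:

```python
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
# Two views driven by a shared latent signal (the co-training-style setup)
z = rng.normal(size=(500, 2))
X1 = z @ rng.normal(size=(2, 10)) + 0.1 * rng.normal(size=(500, 10))
X2 = z @ rng.normal(size=(2, 8)) + 0.1 * rng.normal(size=(500, 8))
y = z[:, 0] + 0.05 * rng.normal(size=500)

# Stage 1: CCA on "unlabeled" view pairs recovers the shared directions
cca = CCA(n_components=2).fit(X1[:400], X2[:400])

# Stage 2: ridge regression on a small labeled set, in CCA coordinates
reg = Ridge(alpha=1.0).fit(cca.transform(X1[400:450]), y[400:450])
print(reg.score(cca.transform(X1[450:]), y[450:]))
```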

Controlling for sparsity in sparse factor analysis models: adaptive latent feature sharing for piecewise linear dimensionality reduction

2020

Ubiquitous linear Gaussian exploratory tools such as principal component analysis (PCA) and factor analysis (FA) remain widely used for exploratory analysis, pre-processing, data visualization and related tasks. However, due to their rigid assumptions, including the crowding of high-dimensional data, they have been replaced in many settings by more flexible and still interpretable latent feature models. Feature allocation is usually modelled using discrete latent variables assumed to follow either a parametric Beta-Bernoulli distribution or a Bayesian nonparametric prior. In this work we propose a simple and tractable parametric feature allocation model which can address key limitations of current latent feature decomposition techniques. The new framework allows for explicit control over the number of features used to express each point and enables a more flexible set of allocation distributions, including feature allocations with different sparsity levels. This approach is used...
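For reference, the finite Beta-Bernoulli feature allocation that such models typically start from can be sampled in a few lines; the paper's contribution, explicit control over per-point sparsity, is not reproduced in this sketch, and all names are illustrative:

```python
import numpy as np

def sample_feature_allocation(n_points=100, K=20, alpha=2.0, seed=0):
    """Finite Beta-Bernoulli feature allocation: each of K features has a
    usage probability pi_k ~ Beta(alpha/K, 1), and each point switches the
    feature on independently with that probability."""
    rng = np.random.default_rng(seed)
    pi = rng.beta(alpha / K, 1.0, size=K)
    Z = rng.random((n_points, K)) < pi      # binary allocation matrix
    return Z

Z = sample_feature_allocation()
print(Z.sum(axis=1).mean())                 # average features per point
```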

A Bayesian Framework for Learning Shared and Individual Subspaces from Multiple Data Sources

Lecture Notes in Computer Science, 2011

This paper presents a novel Bayesian formulation to exploit shared structures across multiple data sources, constructing foundations for effective mining and retrieval across disparate domains. We jointly analyze diverse data sources using a unifying piece of metadata (textual tags). We propose a method based on Bayesian Probabilistic Matrix Factorization (BPMF) which is able to explicitly model the partial knowledge common to the datasets using shared subspaces and the knowledge specific to each dataset using individual subspaces. For the proposed model, we derive an efficient algorithm for learning the joint factorization based on Gibbs sampling. The effectiveness of the model is demonstrated by social media retrieval tasks across single and multiple media. The proposed solution is applicable to a wider context, providing a formal framework suitable for exploiting individual as well as mutual knowledge present across heterogeneous data sources of many kinds.
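To make the shared/individual decomposition concrete, here is a non-Bayesian alternating least squares sketch of the factorization X_i ≈ A_i S + B_i I_i with a shared loading matrix S; the paper instead infers a BPMF-based Bayesian model with Gibbs sampling, so this illustrates only the decomposition itself:

```python
import numpy as np

def shared_individual_als(X1, X2, k_shared=2, k_ind=2, n_iter=50, seed=0):
    """Alternating least squares for two sources over a common feature
    space (e.g. shared tags): X_i is approximated by A_i S + B_i I_i,
    where S is shared across sources and I_i is source-specific."""
    rng = np.random.default_rng(seed)
    d = X1.shape[1]
    S = rng.normal(size=(k_shared, d))
    I1, I2 = rng.normal(size=(k_ind, d)), rng.normal(size=(k_ind, d))
    for _ in range(n_iter):
        # scores given loadings: one least-squares solve per source
        C1 = np.linalg.lstsq(np.vstack([S, I1]).T, X1.T, rcond=None)[0].T
        C2 = np.linalg.lstsq(np.vstack([S, I2]).T, X2.T, rcond=None)[0].T
        A1, B1 = C1[:, :k_shared], C1[:, k_shared:]
        A2, B2 = C2[:, :k_shared], C2[:, k_shared:]
        # shared loadings from the stacked residuals of both sources
        A = np.vstack([A1, A2])
        R = np.vstack([X1 - B1 @ I1, X2 - B2 @ I2])
        S = np.linalg.lstsq(A, R, rcond=None)[0]
        # individual loadings from each source's own residual
        I1 = np.linalg.lstsq(B1, X1 - A1 @ S, rcond=None)[0]
        I2 = np.linalg.lstsq(B2, X2 - A2 @ S, rcond=None)[0]
    return S, (A1, B1, I1), (A2, B2, I2)
```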

Bayesian Semi-parametric Factor Models

2012

Identifying a lower-dimensional latent space for representation of high-dimensional observations is of significant importance in numerous biomedical and machine learning applications. In many such applications, it is now routine to collect data where the ...

Sparse Bayesian modeling with adaptive kernel learning

IEEE Transactions on Neural Networks, 2009

Sparse kernel methods are very efficient in solving regression and classification problems. The sparsity and performance of these methods depend on selecting an appropriate kernel function, which is typically achieved using a cross-validation procedure. In this paper, we propose an incremental method for supervised learning, which is similar to the relevance vector machine (RVM) but also learns the parameters of the kernels during model training. Specifically, we learn different parameter values for each kernel, resulting in a very flexible model. To avoid overfitting, we use a sparsity-enforcing prior that controls the effective number of parameters of the model. We present experimental results on artificial data to demonstrate the advantages of the proposed method, and we provide a comparison with the standard RVM on several commonly used regression and classification data sets.
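A minimal RVM-style sparse Bayesian regression with MacKay-type re-estimation and a fixed kernel design matrix, for orientation; the paper's key addition, adapting per-kernel parameters during training, is omitted here:

```python
import numpy as np

def rvm_regression(Phi, t, n_iter=100, alpha_max=1e8):
    """Sparse Bayesian regression: t = Phi @ w + noise, w_i ~ N(0, 1/alpha_i).
    Basis functions whose precision alpha_i diverges are effectively pruned."""
    N, M = Phi.shape
    alpha = np.ones(M)                # ARD precisions, one per basis function
    beta = 1.0                        # noise precision
    for _ in range(n_iter):
        Sigma = np.linalg.inv(beta * Phi.T @ Phi + np.diag(alpha))
        mu = beta * Sigma @ Phi.T @ t
        gamma = 1.0 - alpha * np.diag(Sigma)     # effective parameter counts
        alpha = np.minimum(gamma / np.maximum(mu ** 2, 1e-12), alpha_max)
        beta = (N - gamma.sum()) / np.sum((t - Phi @ mu) ** 2)
    return mu, alpha, beta

# Toy usage: one Gaussian kernel per training input (RVM-style design)
rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 80)
t = np.sinc(x) + 0.05 * rng.normal(size=80)
Phi = np.exp(-(x[:, None] - x[None, :]) ** 2)
mu, alpha, _ = rvm_regression(Phi, t)
print(int((alpha < 1e8).sum()), "relevance vectors kept")
```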

A unifying framework for vector-valued manifold regularization and multi-view learning

2013

This paper presents a general vector-valued reproducing kernel Hilbert space (RKHS) formulation for the problem of learning an unknown functional dependency between a structured input space and a structured output space in the semi-supervised learning setting. Our formulation includes Vector-valued Manifold Regularization and Multi-view Learning as special cases, and thus provides a unifying framework linking these two important learning approaches. For the least squares loss function, we provide a closed-form solution with an efficient implementation. Numerical experiments on challenging multi-class categorization problems show that our multi-view learning formulation achieves results which are comparable with the state of the art and significantly better than single-view learning.
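For the least squares case with a separable matrix-valued kernel Gamma(x, x') = k(x, x') B, the closed-form coefficients solve K C B + lam C = Y, which eigendecompositions of K and B turn into an elementwise division; the sketch below omits the manifold and multi-view regularization terms of the full framework, and all names are illustrative:

```python
import numpy as np

def vv_kernel_ridge(K, B, Y, lam):
    """Vector-valued kernel ridge regression with a separable kernel:
    solve K C B + lam * C = Y for the coefficient matrix C."""
    s, U = np.linalg.eigh(K)          # scalar Gram matrix, N x N
    d, V = np.linalg.eigh(B)          # output-space kernel, D x D
    M = (U.T @ Y @ V) / (np.outer(s, d) + lam)
    return U @ M @ V.T

# Toy usage with an RBF scalar kernel and an identity output kernel
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 2))
Y = np.column_stack([np.sin(X[:, 0]), np.cos(X[:, 1])])
K = np.exp(-((X[:, None] - X[None, :]) ** 2).sum(-1))
C = vv_kernel_ridge(K, np.eye(2), Y, lam=1e-2)
print(np.abs(K @ C - Y).mean())       # in-sample fit (B = I here)
```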