Mixtures of Factor Analyzers with Common Factor Loadings: Applications to the Clustering and Visualization of High-Dimensional Data
Related papers
Modelling high-dimensional data by mixtures of factor analyzers
Computational Statistics & Data Analysis, 2003
We focus on mixtures of factor analyzers from the perspective of a method for model-based density estimation from high-dimensional data, and hence for the clustering of such data. This approach enables a normal mixture model to be fitted to a sample of n data points of dimension p, where p is large relative to n. The number of free parameters is controlled through the dimension of the latent factor space. Working in this reduced space allows a model for each component-covariance matrix with complexity lying between that of the isotropic and full covariance structure models. We shall illustrate the use of mixtures of factor analyzers in a practical example that considers the clustering of cell lines on the basis of gene expressions from microarray experiments.
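As a back-of-the-envelope illustration of how the latent factor dimension controls the number of free parameters (a sketch, not code from the paper), one can count the covariance parameters per mixture component under the factor-analytic decomposition Sigma = Lambda Lambda' + Psi with diagonal Psi:

```python
def full_cov_params(p):
    # Free parameters in an unrestricted p x p covariance matrix.
    return p * (p + 1) // 2

def mfa_cov_params(p, q):
    # Factor-analytic covariance Lambda Lambda' + Psi (Psi diagonal):
    # p*q loadings + p uniquenesses, minus q(q-1)/2 parameters lost
    # to the rotational invariance of the loadings.
    return p * q + p - q * (q - 1) // 2

p, q = 50, 3  # illustrative dimensions, e.g. 50 genes, 3 factors
print(full_cov_params(p))    # -> 1275
print(mfa_cov_params(p, q))  # -> 197
```

With p = 50 and q = 3 factors, each component needs 197 covariance parameters instead of 1275, which is what makes fitting feasible when p is large relative to n.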
Extending mixtures of factor models using the restricted multivariate skew-normal distribution
Journal of Multivariate Analysis
The mixture of factor analyzers (MFA) model provides a powerful tool for analyzing high-dimensional data as it can reduce the number of free parameters through its factor-analytic representation of the component covariance matrices. This paper extends the MFA model to incorporate a restricted version of the multivariate skew-normal distribution to model the distribution of the latent component factors, called mixtures of skew-normal factor analyzers (MSNFA). The proposed MSNFA model allows us to relax the need for the normality assumption for the latent factors in order to accommodate skewness in the observed data. The MSNFA model thus provides an approach to model-based density estimation and clustering of high-dimensional data exhibiting asymmetric characteristics. A computationally feasible ECM algorithm is developed for computing the maximum likelihood estimates of the parameters. Model selection can be made on the basis of three commonly used information-based criteria. The potential of the proposed methodology is exemplified through applications to two real examples, and the results are compared with those obtained from fitting the MFA model.
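One common way to generate a restricted multivariate skew-normal variate is a convolution construction with a single shared half-normal latent variable. The sketch below uses that construction for illustration only; it is not the paper's exact parameterization, and the function name and arguments are hypothetical:

```python
import numpy as np

def rmsn_sample(n, xi, lam, sigma, rng):
    # Illustrative restricted skew-normal draw via the convolution form
    # Z = xi + lam * |U0| + U1, where |U0| is one scalar half-normal
    # latent variable shared across coordinates and U1 ~ N(0, Sigma).
    u0 = np.abs(rng.standard_normal(n))  # half-normal latent
    u1 = rng.multivariate_normal(np.zeros(len(xi)), sigma, size=n)
    return np.asarray(xi) + np.outer(u0, lam) + u1

rng = np.random.default_rng(0)
z = rmsn_sample(10000, xi=[0.0, 0.0], lam=[2.0, 0.0],
                sigma=np.eye(2), rng=rng)
# The first coordinate is right-skewed, with mean pulled toward
# lam * sqrt(2/pi); the second coordinate remains symmetric around 0.
```

The skewness parameter lam plays the role that the normality assumption rules out in the plain MFA model: setting lam = 0 recovers an ordinary Gaussian factor component.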
Mixtures of factor analyzers: an extension with covariates
Journal of Multivariate Analysis, 2005
This paper examines the analysis of an extended finite mixture of factor analyzers (MFA) where both the continuous latent variable (common factor) and the categorical latent variable (component label) are assumed to be influenced by the effects of fixed observed covariates. A polytomous logistic regression model is used to link the categorical latent variable to its corresponding covariate, while a traditional linear model with normal noise is used to model the effect of the covariate on the continuous latent variable. The proposed model turns out to be, in various ways, an extension of many existing related models, and as such offers the potential to address some of the issues not fully handled by those previous models. A detailed derivation of an EM algorithm is provided for parameter estimation, and latent variable estimates are obtained as by-products of the overall estimation procedure.
Adaptive Mixtures of Factor Analyzers
A mixture of factor analyzers is a semi-parametric density estimator that generalizes the well-known mixture of Gaussians model by allowing each Gaussian in the mixture to be represented in a different lower-dimensional manifold. This paper presents a robust and parsimonious model selection algorithm for training a mixture of factor analyzers, carrying out simultaneous clustering and locally linear, globally nonlinear dimensionality reduction. By permitting a different number of factors per mixture component, the algorithm adapts the model complexity to the data complexity. We compare the proposed algorithm with related automatic model selection algorithms on a number of benchmarks. The results indicate the effectiveness of this fast and robust approach in clustering, manifold learning and class-conditional modeling.
Maximum likelihood estimation in constrained parameter spaces for mixtures of factor analyzers
Mixtures of factor analyzers are becoming more and more popular in the area of model-based clustering of multivariate data. According to the likelihood approach in data modeling, it is well known that the unconstrained likelihood function may present spurious maxima and singularities. To reduce such drawbacks, in this paper we introduce a procedure for parameter estimation of mixtures of factor analyzers which maximizes the likelihood function under the mild requirement that the eigenvalues of the covariance matrices lie in some interval [a, b]. Moreover, we give a recipe for selecting appropriate bounds for the constrained EM algorithm directly from the data at hand. We then analyze and measure its performance, compared with the usual unconstrained approach, and also with other constrained models in the literature. Results show that the data-driven constraints improve both the estimation and the subsequent classification.
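The eigenvalue constraint can be enforced by a simple spectral projection after each M-step. The sketch below shows that projection only; the paper's data-driven recipe for choosing a and b is not reproduced, and the function name is illustrative:

```python
import numpy as np

def constrain_covariance(sigma, a, b):
    # Enforce that all eigenvalues of a component covariance estimate
    # lie in [a, b] by clipping its spectrum and reassembling.
    vals, vecs = np.linalg.eigh(sigma)
    return (vecs * np.clip(vals, a, b)) @ vecs.T

# A near-singular estimate, the kind that produces spurious maxima:
sigma = np.array([[4.0, 0.0], [0.0, 1e-8]])
fixed = constrain_covariance(sigma, a=0.1, b=3.0)
print(np.linalg.eigvalsh(fixed))  # -> approximately [0.1, 3.0]
```

Clipping the near-zero eigenvalue up to a keeps the component density bounded, which is what removes the likelihood singularities; clipping at b discourages one component from absorbing scattered points as a spurious solution.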
Clustering and classification via cluster-weighted factor analyzers
Advances in Data Analysis and Classification, 2013
In model-based clustering and classification, the cluster-weighted model is a convenient approach when the random vector of interest is constituted by a response variable Y and by a vector X of p covariates. However, its applicability may be limited when p is high. To overcome this problem, this paper assumes a latent factor structure for X in each mixture component, under Gaussian assumptions. This leads to the cluster-weighted factor analyzers (CWFA) model. By imposing constraints on the variance of Y and the covariance matrix of X, a novel family of sixteen CWFA models is introduced for model-based clustering and classification. The alternating expectation-conditional maximization algorithm, for maximum likelihood estimation of the parameters of all models in the family, is described; to initialize the algorithm, a 5-step hierarchical procedure is proposed, which uses the nested structures of the models within the family and thus guarantees the natural ranking among the sixteen likelihoods. Artificial and real data show that these models have very good clustering and classification performance and that the algorithm is able to recover the parameters very well.
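Families of this kind are typically generated by crossing a handful of binary constraints, each imposed or relaxed independently, so that four constraints yield 2^4 = 16 models. The constraint labels below are hypothetical stand-ins, since the paper's exact constraint set is not reproduced here:

```python
from itertools import product

# Four illustrative binary constraints (hypothetical labels); every
# on/off combination defines one member of a 2^4 = 16 model family.
constraints = ('common loadings', 'common uniquenesses',
               'isotropic uniquenesses', 'common response variance')
family = [frozenset(c for c, on in zip(constraints, flags) if on)
          for flags in product((False, True), repeat=len(constraints))]
print(len(family))  # -> 16
```

The nesting exploited by the 5-step hierarchical initialization falls out of this structure: a model whose constraint set contains another's is a special case of it, so its maximized likelihood cannot exceed the less constrained one's.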
Data driven EM constraints for mixtures of factor analyzers
Mixtures of factor analyzers are becoming more and more popular in the area of model-based clustering of high-dimensional data. In this paper we implement a data-driven methodology to maximize the likelihood function in a constrained parameter space, to overcome the well-known issue of singularities and to reduce spurious maxima in the EM algorithm. Simulation results and applications to real data show that the problematic convergence of the EM, even more critical when dealing with factor analyzers, can be greatly improved.
Mixtures of factor analysers. Bayesian estimation and inference by stochastic simulation
Machine Learning, 2003
Factor Analysis (FA) is a well established probabilistic approach to unsupervised learning for complex systems involving correlated variables in high-dimensional spaces. FA aims principally to reduce the dimensionality of the data by projecting high-dimensional vectors on to lower-dimensional spaces. However, because of its inherent linearity, the generic FA model is essentially unable to capture data complexity when the input space is nonhomogeneous. A finite Mixture of Factor Analysers (MFA) is a globally nonlinear and therefore more flexible extension of the basic FA model that overcomes the above limitation by combining the local factor analysers of each cluster of the heterogeneous input space. The structure of the MFA model offers the potential to model the density of high-dimensional observations adequately while also allowing both clustering and local dimensionality reduction. Many aspects of the MFA model have recently come under close scrutiny, from both the likelihood-based and the Bayesian perspectives. In this paper, we adopt a Bayesian approach, and more specifically a treatment that bases estimation and inference on the stochastic simulation of the posterior distributions of interest. We first treat the case where the number of mixture components and the number of common factors are known and fixed, and we derive an efficient Markov Chain Monte Carlo (MCMC) algorithm based on Data Augmentation to perform inference and estimation. We also consider the more general setting where there is uncertainty about the dimensionalities of the latent spaces (number of mixture components and number of common factors unknown), and we estimate the complexity of the model by using the sample paths of an ergodic Markov chain obtained through the simulation of a continuous-time stochastic birth-and-death point process. 
The main strengths of our algorithms are that they are both efficient (our algorithms are all based on familiar and standard distributions that are easy to sample from, and many characteristics of interest are by-products of the same process) and easy to interpret. Moreover, they are straightforward to implement and offer the possibility of assessing the goodness of the results obtained. Experimental results on both artificial and real data reveal that our approach performs well, and can therefore be envisaged as an alternative to the other approaches used for this model.
Computational Statistics & Data Analysis, 2016
Mixtures of Gaussian factors are powerful tools for modeling an unobserved heterogeneous population, offering, at the same time, dimension reduction and model-based clustering. The high prevalence of spurious solutions and the disturbing effects of outlying observations in maximum likelihood estimation may cause biased or misleading inferences. Restrictions on the component covariances are considered in order to avoid spurious solutions, and trimming is also adopted, to provide robustness against violations of the normality assumptions of the underlying latent factors. A detailed AECM algorithm for this new approach is presented. Simulation results and an application to the AIS dataset illustrate the effectiveness of the proposed methodology.
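The trimming idea can be sketched in a few lines: before each M-step, discard the alpha-fraction of observations with the lowest fitted density so that outliers cannot distort the estimates. The helper below is a simplified illustration with a single Gaussian component (the method itself trims on the whole mixture density), and its name and signature are hypothetical:

```python
import numpy as np

def trim_lowest_density(x, mean, cov, alpha):
    # Keep the (1 - alpha) fraction of points closest to the fitted
    # component in Mahalanobis distance; only these would enter the
    # next M-step. Single-component sketch of impartial trimming.
    diff = x - mean
    maha = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(cov), diff)
    keep = int(round(len(x) * (1 - alpha)))
    idx = np.argsort(maha)[:keep]  # smallest squared distances
    return x[idx]

# 95 inliers at the origin plus 5 gross outliers:
x = np.vstack([np.zeros((95, 2)), np.full((5, 2), 100.0)])
kept = trim_lowest_density(x, mean=np.zeros(2), cov=np.eye(2), alpha=0.05)
# With alpha = 0.05, all 5 outliers fall in the trimmed fraction.
```

Trimming and the covariance restrictions are complementary: the restrictions rule out degenerate solutions, while trimming keeps the remaining estimates from being dragged by observations the Gaussian factor model cannot explain.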