Hierarchical kernel stick-breaking process for multi-task image analysis
Related papers
A Bayesian Model for Simultaneous Image Clustering, Annotation and Object Segmentation
Advances in neural information processing systems, 2009
A non-parametric Bayesian model is proposed for processing multiple images. The analysis employs image features and, when present, the words associated with accompanying annotations. The model clusters the images into classes, and each image is segmented into a set of objects, also allowing the opportunity to assign a word to each object (localized labeling). Each object is assumed to be represented as a heterogeneous mix of components, with this realized via mixture models linking image features to object types. The number of image classes, number of object types, and the characteristics of the object-feature mixture models are inferred nonparametrically. To constitute spatially contiguous objects, a new logistic stick-breaking process is developed. Inference is performed efficiently via variational Bayesian analysis, with example results presented on two image databases.
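As a concrete illustration of the stick-breaking idea underlying this line of work, the sketch below draws truncated stick-breaking weights for a Dirichlet process. This is the standard Beta-based construction, not the paper's logistic stick-breaking variant (which drives the breaks with logistic functions of spatial covariates to obtain contiguous objects); the function name and parameters are illustrative assumptions.

```python
import numpy as np

def stick_breaking_weights(alpha, K, seed=None):
    """Truncated stick-breaking construction of Dirichlet-process weights.

    Draw v_k ~ Beta(1, alpha) and set pi_k = v_k * prod_{j<k} (1 - v_j).
    The truncation level K is a practical approximation to the infinite
    process; setting the last break to 1 absorbs the remaining mass.
    """
    rng = np.random.default_rng(seed)
    v = rng.beta(1.0, alpha, size=K)
    v[-1] = 1.0                      # absorb leftover stick at truncation
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))
    return v * remaining

weights = stick_breaking_weights(alpha=2.0, K=20, seed=0)  # sums to 1
```

Larger `alpha` spreads mass over more components, which is how the nonparametric model lets the number of object types grow with the data.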
A Bayesian framework for image segmentation with spatially varying mixtures
IEEE transactions on image processing : a publication of the IEEE Signal Processing Society, 2010
A new Bayesian model is proposed for image segmentation based upon Gaussian mixture models (GMM) with spatial smoothness constraints. This model exploits the Dirichlet compound multinomial (DCM) probability density to model the mixing proportions (i.e., the probabilities of class labels) and a Gauss-Markov random field (MRF) on the Dirichlet parameters to impose smoothness. This model has two main advantages. First, it explicitly models the mixing proportions as probability vectors and simultaneously imposes spatial smoothness. Second, it results in closed-form parameter updates using a maximum a posteriori (MAP) expectation-maximization (EM) algorithm. Previous efforts on this problem used models that either did not model the mixing proportions explicitly as probability vectors or could not be solved exactly, requiring either time-consuming Markov chain Monte Carlo (MCMC) or inexact variational approximation methods. Numerical experiments demonstrate the superiority of the proposed model for image segmentation compared to other GMM-based approaches. The model also compares favorably to state-of-the-art image segmentation methods in clustering both natural images and images degraded by noise.
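To make the interplay between per-pixel mixing proportions and spatial smoothness concrete, here is a minimal, illustrative EM-style step in which responsibilities are averaged over a 4-neighbourhood. This is a crude stand-in for the paper's DCM/Gauss-Markov formulation, not its actual update; all names and the smoothing rule are assumptions for illustration.

```python
import numpy as np

def smoothed_em_step(x, mu, sigma2, pi, lam=0.5):
    """One illustrative EM-style step for a pixel-wise 1-D GMM whose
    mixing proportions are spatially smoothed.

    x: (H, W) grayscale image; mu, sigma2: (K,) component parameters;
    pi: (H, W, K) per-pixel mixing proportions; lam: smoothing strength.
    """
    # E-step: per-pixel responsibilities under each Gaussian component
    lik = np.exp(-(x[..., None] - mu) ** 2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)
    r = pi * lik
    r /= r.sum(axis=-1, keepdims=True)

    # Blend each pixel's responsibilities with its 4-neighbour average
    # so the updated mixing proportions vary smoothly across the image.
    pad = np.pad(r, ((1, 1), (1, 1), (0, 0)), mode="edge")
    neigh = (pad[:-2, 1:-1] + pad[2:, 1:-1] + pad[1:-1, :-2] + pad[1:-1, 2:]) / 4.0
    pi_new = (1 - lam) * r + lam * neigh

    # M-step: update component means and variances from responsibilities
    nk = r.sum(axis=(0, 1))
    mu_new = (r * x[..., None]).sum(axis=(0, 1)) / nk
    sigma2_new = (r * (x[..., None] - mu_new) ** 2).sum(axis=(0, 1)) / nk
    return pi_new, mu_new, sigma2_new
```

Because both the responsibilities and their neighbourhood averages are normalized, the blended proportions remain valid probability vectors, which is the property the paper's DCM prior enforces in a principled way.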
Pattern Recognition, 2013
This paper introduces a novel enhancement for unsupervised feature selection based on generalized Dirichlet (GD) mixture models. Our proposal is based on the extension of the finite mixture model previously developed in [1] to the infinite case, via the consideration of Dirichlet process mixtures, which can actually be viewed as a purely nonparametric model since the number of mixture components can increase as data are introduced. The infinite assumption is used to avoid problems related to model selection (i.e. determination of the number of clusters) and allows simultaneous separation of data into similar clusters and selection of relevant features. Our resulting model is learned within a principled variational Bayesian framework that we have developed. The experimental results reported for both synthetic data and real-world challenging applications involving image categorization, automatic semantic annotation and retrieval show the ability of our approach to provide accurate models by distinguishing between relevant and irrelevant features without over- or under-fitting the data.
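The feature-selection side of such mixtures is commonly formalized with a per-feature saliency mixture: each feature follows the component density with some probability and a shared background density otherwise. The sketch below shows that generic construction with density callables, not the paper's generalized Dirichlet formulation; the function and parameter names are hypothetical.

```python
import numpy as np

def feature_saliency_loglik(x, comp_pdf, bg_pdf, rho):
    """Log-likelihood of one sample under a per-feature saliency mixture.

    Each feature d is 'relevant' with probability rho[d], in which case it
    follows the cluster-specific density comp_pdf; otherwise it follows a
    shared, cluster-independent background density bg_pdf.

    x: (D,) feature vector; comp_pdf, bg_pdf: callables returning
    per-feature densities; rho: (D,) feature-saliency probabilities.
    """
    rel = rho * comp_pdf(x)          # feature explained by the component
    irr = (1.0 - rho) * bg_pdf(x)    # feature explained by the background
    return np.log(rel + irr).sum(axis=-1)
```

Learning `rho` alongside the mixture is what lets the model discount irrelevant features during clustering instead of requiring a separate selection pass.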
Annotating Images and Image Objects using a Hierarchical Dirichlet Process Model
Many applications call for learning to label individual objects in an image where the only information available to the learner is a dataset of images with their associated captions, i.e., words that describe the image content without specifically labeling the individual objects. We address this problem using a multi-modal hierarchical Dirichlet process model (MoM-HDP), a nonparametric Bayesian model which provides a generalization of the multi-modal latent Dirichlet allocation model (MoM-LDA) used for similar problems in the past. We apply this model to predicting labels of objects in images containing multiple objects. During training, the model has access to an un-segmented image and its caption, but not the labels for each object in the image. The trained model is used to predict the label for each region of interest in a segmented image. MoM-HDP generalizes the multi-modal latent Dirichlet allocation model in that it allows the number of components of the mixture model to adapt to the data. The model parameters are efficiently estimated using variational inference. Our experiments show that MoM-HDP performs as well as or better than the MoM-LDA model (regardless of the choice of the number of clusters in the MoM-LDA model).
IJMER
Abstract: We propose a new Bayesian network model that segments the region of interest from an image and categorizes the segmented image. A Bayesian network is constructed from an over-segmentation to model the statistical dependencies among edge segments, vertices, corners and blobs. The proposed method involves active user intervention for interactive image segmentation, whereas existing interactive segmentation approaches often depend passively on the user to provide exact intervention. Image categorization is then performed with a Spatial Markov Kernel (SMK) applied to the segmented image.
Index Terms: Bayesian Network, Image Segmentation, Interactive image segmentation, Image Categorization, Spatial Markov Kernel algorithm
Pattern Analysis and Applications, 2019
Developing effective machine learning methods for multimedia data modeling continues to challenge computer vision scientists. The capability of providing effective learning models can have significant impact on various applications. In this work, we propose a nonparametric Bayesian approach to address simultaneously two fundamental problems, namely clustering and feature selection. The approach is based on infinite generalized Dirichlet (GD) mixture models constructed through the framework of Dirichlet process and learned using an accelerated variational algorithm that we have developed. Furthermore, we extend the proposed approach using another nonparametric Bayesian prior, namely Pitman-Yor process, to construct the infinite generalized Dirichlet mixture model. Our experiments, which were conducted through synthetic data sets, the clustering analysis of real-world data sets and a challenging application, namely automatic human action recognition, indicate that the proposed framework provides good modeling and generalization capabilities.
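The Pitman-Yor extension mentioned above differs from the Dirichlet process only in the parameters of its stick-breaking Beta draws: a discount `d > 0` gives power-law cluster sizes, and `d = 0` recovers the Dirichlet process. A minimal sketch of the truncated construction, with illustrative names:

```python
import numpy as np

def pitman_yor_weights(alpha, d, K, seed=None):
    """Truncated stick-breaking weights for a Pitman-Yor process PY(d, alpha).

    v_k ~ Beta(1 - d, alpha + k * d), with 0 <= d < 1; d = 0 reduces to
    the Dirichlet process. The heavier power-law tail for d > 0 keeps
    more small clusters alive as data accumulate.
    """
    rng = np.random.default_rng(seed)
    k = np.arange(1, K + 1)
    v = rng.beta(1.0 - d, alpha + k * d)
    v[-1] = 1.0                      # absorb leftover stick at truncation
    return v * np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))
```

This tail behaviour is the usual motivation for swapping a Dirichlet process prior for a Pitman-Yor one when the number of clusters is expected to grow with the data.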
A variational bayes approach to image segmentation
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2007
In this note we discuss how image segmentation can be handled using Bayesian learning and inference. In particular, variational techniques relying on free energy minimization are introduced. It is shown how to embed a spatial diffusion process on segmentation labels within the variational Bayes learning procedure so as to enforce spatial constraints among labels.
Bayesian segmentation supported by neighborhood configurations
First Canadian Conference on Computer and Robot Vision, Proceedings, 2004
From the statistical point of view, segmentation methods depend upon how image characteristics are formulated and where they are extracted from. In this paper, the joint conditional probability is exploited to characterize the statistical properties and is localized to better capture the local properties of the neighborhood. Two different neighborhood configurations are defined, and each is combined with given prior information through the Bayes formula to form a criterion function. The proposed method segments images by maximizing this criterion function. The results compare four different methods, corresponding to the combinations of neighborhood configurations with prior information.
Model-Based segmentation of image data using spatially constrained mixture models
Neurocomputing, 2018
In this paper, a novel Bayesian statistical approach is proposed to tackle the problem of natural image segmentation. The proposed approach is based on finite Dirichlet mixture models in which contextual proportions (i.e., the probabilities of class labels) are modeled with spatial smoothness constraints. The major merits of our approach are summarized as follows: Firstly, it exploits the Dirichlet mixture model, which can obtain a better statistical performance than commonly used mixture models (such as the Gaussian mixture model), especially for proportional data (i.e., normalized histograms). Secondly, it explicitly models the mixing contextual proportions as probability vectors and simultaneously integrates spatial relationships between pixels into the Dirichlet mixture model, which results in a more robust framework for image segmentation. Finally, we develop a variational Bayes learning method to update the parameters in a closed-form expression. The effectiveness of the proposed approach is compared with other mixture-modeling-based image segmentation approaches through extensive experiments involving both simulated and natural color images.
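For proportional data such as normalized histograms, the Dirichlet density and the resulting mixture responsibilities are straightforward to write down. The sketch below shows a plain finite Dirichlet mixture E-step, omitting the paper's spatially smoothed contextual proportions and variational updates; the function names are illustrative.

```python
import math

def dirichlet_logpdf(x, alpha):
    """Log density of Dirichlet(alpha) at a probability vector x."""
    return (math.lgamma(sum(alpha))
            - sum(math.lgamma(a) for a in alpha)
            + sum((a - 1.0) * math.log(xi) for a, xi in zip(alpha, x)))

def responsibilities(x, alphas, weights):
    """Posterior component probabilities for one normalized histogram x
    under a finite Dirichlet mixture (log-sum-exp for stability)."""
    logs = [math.log(w) + dirichlet_logpdf(x, a) for w, a in zip(weights, alphas)]
    m = max(logs)
    exps = [math.exp(l - m) for l in logs]
    total = sum(exps)
    return [e / total for e in exps]
```

Because the Dirichlet is supported on the simplex, these responsibilities respect the sum-to-one constraint of histogram data directly, which is the statistical advantage the abstract claims over Gaussian components.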
Hierarchic Bayesian models for kernel learning
2005
The integration of diverse forms of informative data by learning an optimal combination of base kernels in classification or regression problems can provide enhanced performance when compared to that obtained from any single data source. We present a Bayesian hierarchical model which enables kernel learning and present effective variational Bayes estimators for regression and classification. Illustrative experiments demonstrate the utility of the proposed method. Matlab code replicating results reported is available at
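The core object in such kernel-learning models is a weighted sum of base Gram matrices with weights on the simplex, which is itself a valid kernel. A minimal sketch with illustrative names; the paper learns the weights with variational Bayes rather than fixing them as done here.

```python
import numpy as np

def combined_kernel(kernels, beta):
    """Convex combination of base Gram matrices, K = sum_m beta_m K_m.

    kernels: list of (n, n) positive semi-definite Gram matrices from
    different data sources; beta: nonnegative weights, normalized here
    onto the simplex so the combination remains a valid kernel.
    """
    beta = np.asarray(beta, dtype=float)
    beta = beta / beta.sum()
    return sum(b * K for b, K in zip(beta, kernels))
```

Since each base matrix is positive semi-definite and the weights are nonnegative, the combined matrix is positive semi-definite as well, so it can be dropped into any kernel regressor or classifier.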