Adaptive Monte Carlo for Bayesian Variable Selection in Regression Models
Related papers
A novel Bayesian approach for variable selection in linear regression models
Computational Statistics & Data Analysis
We propose a novel Bayesian approach to the problem of variable selection in multiple linear regression models. In particular, we present a hierarchical setting that allows direct specification of a priori beliefs about the number of nonzero regression coefficients, as well as beliefs that particular coefficients are nonzero. To guarantee numerical stability, we adopt a g-prior with an additional ridge parameter for the unknown regression coefficients. To simulate from the joint posterior distribution, we propose an intelligent random-walk Metropolis-Hastings algorithm that is able to switch between different models. Tests on real and simulated data illustrate that the algorithm performs at least on par with, and often better than, other well-established methods. Finally, we prove that under mild assumptions the presented approach is consistent in terms of model selection.
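To make the sampler concrete, here is a minimal Python sketch of this kind of model-switching random-walk Metropolis-Hastings step. The function names, the independent Bernoulli(pi) inclusion prior, and the defaults g=100 and lam=1e-6 are illustrative assumptions rather than the paper's exact hierarchical specification; the marginal likelihood assumes a zero-mean g-prior with ridge-augmented precision and the usual improper prior on sigma^2.

```python
import numpy as np

def log_marginal(y, X, gamma, g=100.0, lam=1e-6):
    """Log marginal likelihood (up to a constant) of the submodel picked out
    by the 0/1 vector gamma, under beta ~ N(0, g*sigma^2*(X'X + lam*I)^{-1})
    and the improper prior p(sigma^2) proportional to 1/sigma^2."""
    n = len(y)
    Xg = X[:, gamma.astype(bool)]
    if Xg.shape[1] == 0:                      # null model: y ~ N(0, sigma^2 I)
        return -0.5 * n * np.log(y @ y)
    A = Xg.T @ Xg + lam * np.eye(Xg.shape[1])
    M = np.eye(n) + g * Xg @ np.linalg.solve(A, Xg.T)
    _, logdet = np.linalg.slogdet(M)
    return -0.5 * logdet - 0.5 * n * np.log(y @ np.linalg.solve(M, y))

def mh_variable_selection(y, X, n_iter=5000, pi=0.2, seed=0):
    """Random-walk MH over inclusion vectors: each step proposes flipping one
    coordinate of gamma, i.e. adding or deleting a single variable."""
    rng = np.random.default_rng(seed)
    p = X.shape[1]
    logpost = lambda gam: (log_marginal(y, X, gam)
                           + gam.sum() * np.log(pi)
                           + (p - gam.sum()) * np.log(1 - pi))
    gamma = np.zeros(p, dtype=int)
    cur, incl = logpost(gamma), np.zeros(p)
    for _ in range(n_iter):
        prop = gamma.copy()
        prop[rng.integers(p)] ^= 1            # symmetric add/delete proposal
        new = logpost(prop)
        if np.log(rng.uniform()) < new - cur:
            gamma, cur = prop, new
        incl += gamma
    return incl / n_iter                      # estimated inclusion probabilities
```

Marginalizing beta and sigma^2 analytically is what lets such a chain move between models of different dimension without reversible-jump machinery.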
Adaptive Bayesian criteria in variable selection for generalized linear models
2007
For the problem of variable selection in generalized linear models, we develop various adaptive Bayesian criteria. Using a hierarchical mixture setup for model uncertainty, combined with an integrated Laplace approximation, we derive Empirical Bayes and Fully Bayes criteria that can be computed easily and quickly. The performance of these criteria is assessed via simulation and compared to other criteria such as AIC and BIC on normal, logistic and Poisson regression model classes. A Fully Bayes criterion based on a restricted region hyperprior seems to be the most promising. Finally, our criteria are illustrated and compared with competitors on a data example.
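As a sketch of the Laplace step underlying such criteria (the paper's integrated version also handles the hyperparameters, which is omitted here), the following Python approximates the log marginal likelihood of a logistic regression model under an assumed beta ~ N(0, tau2*I) prior; tau2 and the function name are our own illustrative choices.

```python
import numpy as np

def laplace_log_marginal_logistic(y, X, tau2=10.0, n_newton=50):
    """Laplace approximation to log p(y | model) for logistic regression with
    prior beta ~ N(0, tau2*I): find the posterior mode by Newton's method,
    then apply log m ~ loglik + logprior + (d/2)log(2*pi) - (1/2)log|H|."""
    d = X.shape[1]
    beta = np.zeros(d)
    for _ in range(n_newton):                    # Newton ascent on log posterior
        mu = 1.0 / (1.0 + np.exp(-(X @ beta)))
        grad = X.T @ (y - mu) - beta / tau2
        H = X.T @ (X * (mu * (1 - mu))[:, None]) + np.eye(d) / tau2
        beta = beta + np.linalg.solve(H, grad)
    eta = X @ beta
    mu = 1.0 / (1.0 + np.exp(-eta))
    H = X.T @ (X * (mu * (1 - mu))[:, None]) + np.eye(d) / tau2  # at the mode
    loglik = np.sum(y * eta - np.logaddexp(0.0, eta))
    logprior = -0.5 * (beta @ beta) / tau2 - 0.5 * d * np.log(2 * np.pi * tau2)
    _, logdetH = np.linalg.slogdet(H)
    return loglik + logprior + 0.5 * d * np.log(2 * np.pi) - 0.5 * logdetH
```

Comparing such approximate marginal likelihoods across submodels, with the hyperparameter estimated (Empirical Bayes) or integrated over a hyperprior (Fully Bayes), yields criteria of the kind the abstract describes.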
Comparison of Bayesian objective procedures for variable selection in linear regression
TEST, 2008
In the objective Bayesian approach to variable selection in regression, a crucial point is the encompassing of the underlying nonnested linear models. Once the models have been encompassed, one can define objective priors for the multiple-testing problem involved in variable selection. There are two natural ways of encompassing: one is to encompass all models into the model containing all possible regressors, and the other is to encompass the model containing only the intercept into every other model. In this paper we compare the variable selection procedures that result from each of these two ways of encompassing by analysing their theoretical properties and their behavior on simulated and real data. Relations with frequentist criteria for model selection, such as those based on the adjusted R² and Mallows' Cp, are also provided.
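For reference, the two frequentist criteria just mentioned can be computed as in the minimal sketch below, with an always-included intercept and the customary full-model estimate of the error variance in Cp (the helper name is ours).

```python
import numpy as np

def cp_and_adj_r2(y, X, cols):
    """Mallows' Cp and adjusted R-squared for the submodel using columns
    `cols` of X; an intercept is always included, and the error variance
    in Cp is estimated from the full model."""
    n, k = len(y), len(cols)
    def sse(Z):
        Z1 = np.column_stack([np.ones(n), Z])          # add intercept
        beta, *_ = np.linalg.lstsq(Z1, y, rcond=None)
        r = y - Z1 @ beta
        return r @ r
    s2_full = sse(X) / (n - X.shape[1] - 1)            # full-model variance
    sse_sub = sse(X[:, list(cols)])
    tss = np.sum((y - y.mean()) ** 2)
    adj_r2 = 1 - (sse_sub / (n - k - 1)) / (tss / (n - 1))
    cp = sse_sub / s2_full - n + 2 * (k + 1)
    return cp, adj_r2
```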
arXiv: Computation, 2016
In this paper we briefly review the main methodological aspects of applying the Bayesian approach to model choice and model averaging in the context of variable selection in regression models, including prior elicitation, summaries of the posterior distribution, and computational strategies. We then examine and compare various publicly available R packages for its practical implementation, summarizing and explaining the differences between the packages and giving recommendations for applied users. We find that all of the packages reviewed lead to very similar results, but there are potentially important differences in their flexibility and efficiency.
Methods and Tools for Bayesian Variable Selection and Model Averaging in Normal Linear Regression
International Statistical Review, 2018
Objective Bayesian Variable Selection
Journal of the American Statistical Association, 2006
A novel, fully automatic Bayesian procedure for variable selection in normal regression models is proposed. The procedure uses the posterior probabilities of the models to drive a stochastic search. The posterior probabilities are computed using intrinsic priors, which can be considered default priors for model selection problems: they are derived from the model structure and are free of tuning parameters, and can therefore be seen as objective priors for variable selection. The stochastic search is based on a Metropolis-Hastings algorithm with a stationary distribution proportional to the model posterior probabilities. The procedure is illustrated on both simulated and real examples.
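Schematically, the stochastic search can be written as below. The intrinsic-prior computation itself is beyond a short sketch, so log_model_prob here stands in for any function returning the log of an unnormalized model posterior probability (the g-prior marginal from the first sketch above would do for experimentation).

```python
import numpy as np

def stochastic_model_search(log_model_prob, p, n_iter=10000, seed=0):
    """Metropolis-Hastings over {0,1}^p with stationary distribution
    proportional to exp(log_model_prob(gamma)); flipping one uniformly
    chosen coordinate is a symmetric proposal, so the acceptance ratio is
    just the ratio of model posterior probabilities."""
    rng = np.random.default_rng(seed)
    gamma = np.zeros(p, dtype=int)
    cur, visits = log_model_prob(gamma), {}
    for _ in range(n_iter):
        prop = gamma.copy()
        prop[rng.integers(p)] ^= 1
        new = log_model_prob(prop)
        if np.log(rng.uniform()) < new - cur:
            gamma, cur = prop, new
        key = tuple(gamma)
        visits[key] = visits.get(key, 0) + 1
    # long-run visit frequencies estimate posterior model probabilities
    return sorted(visits.items(), key=lambda kv: -kv[1])
```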
Approaches for Bayesian variable selection
1997
This paper describes and compares various hierarchical mixture prior formulations of variable selection uncertainty in normal linear regression models. These include the nonconjugate SSVS formulation of George and McCulloch (1993), as well as conjugate formulations which allow analytical simplification. Hyperparameter settings that base selection on practical significance, and the implications of using mixtures with point priors, are discussed. Computational methods for posterior evaluation and exploration are considered. Rapid updating methods are seen to provide feasible means of exhaustive evaluation using Gray code sequencing in moderately sized problems, and fast Markov chain Monte Carlo exploration in large problems. Estimation of normalization constants is seen to provide improved posterior estimates of individual model probabilities and of the total visited probability. The various procedures are illustrated on simulated sample problems and on a real problem concerning the construction of financial index tracking portfolios.
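The Gray code idea is that enumerating submodels so that consecutive models differ by exactly one variable makes rapid (rank-one) updating of the regression quantities possible. A minimal generator of that ordering, with the updating itself left out, might look like this:

```python
import numpy as np

def gray_code_models(p):
    """Yield (gamma, flipped) for all 2**p inclusion vectors in binary-
    reflected Gray-code order: consecutive vectors differ in exactly one
    position, whose index is reported in `flipped` (None for the first)."""
    prev = 0
    for i in range(2 ** p):
        code = i ^ (i >> 1)                   # i-th Gray code
        flipped = (code ^ prev).bit_length() - 1 if i else None
        prev = code
        yield np.array([(code >> j) & 1 for j in range(p)]), flipped
```

In an exhaustive evaluation one would update a Cholesky factor or inverse at the flipped index rather than refitting from scratch; with p in the low twenties, the 2^p models remain feasible.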
Adaptive Markov Chain Monte Carlo for Bayesian Variable Selection
Journal of Computational and Graphical Statistics, 2013
We describe adaptive Markov chain Monte Carlo (MCMC) methods for sampling posterior distributions arising from Bayesian variable selection problems. Point-mass mixture priors are commonly used in Bayesian variable selection problems in regression. However, for generalized linear and nonlinear models, where the conditional densities cannot be obtained directly, the resulting mixture posterior may be difficult to sample using standard MCMC methods due to multimodality. We introduce an adaptive MCMC scheme which automatically tunes the parameters of a family of mixture proposal distributions during simulation. The resulting chain adapts to sample efficiently from multimodal target distributions. For variable selection problems, point-mass components are included in the mixture, and the associated weights adapt to approximate marginal posterior variable inclusion probabilities, while the remaining components approximate the posterior over nonzero values. The resulting sampler transitions efficiently between models, performing parameter estimation and variable selection simultaneously. Ergodicity and convergence are guaranteed by limiting the adaptation.
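As a toy illustration of the point-mass-mixture idea on a single coordinate (a deliberately simplified stand-in for the paper's scheme: a fixed spike-and-slab target, crude periodic adaptation, heuristic clipping), the proposal below mixes a point mass at zero with a Gaussian and adapts its zero-weight toward the chain's empirical frequency of zeros.

```python
import numpy as np

def target_logdens(x, w0=0.7, m=2.0, s=1.0):
    """Spike-and-slab target density w.r.t. (point mass at 0 + Lebesgue):
    mass w0 at x == 0, and (1 - w0) * Normal(m, s^2) density elsewhere."""
    if x == 0.0:
        return np.log(w0)
    return np.log(1 - w0) - 0.5 * np.log(2 * np.pi * s * s) - 0.5 * ((x - m) / s) ** 2

def adaptive_spike_slab(n_iter=20000, seed=0):
    rng = np.random.default_rng(seed)
    a, mu, sd = 0.5, 0.0, 2.0          # proposal: P(propose 0), slab mean/sd
    def prop_logdens(z):
        if z == 0.0:
            return np.log(a)
        return np.log(1 - a) - 0.5 * np.log(2 * np.pi * sd * sd) - 0.5 * ((z - mu) / sd) ** 2
    x, draws = 0.0, []
    for t in range(1, n_iter + 1):
        z = 0.0 if rng.uniform() < a else rng.normal(mu, sd)
        # independence MH ratio; both densities are w.r.t. the same mixed measure
        if np.log(rng.uniform()) < (target_logdens(z) - target_logdens(x)
                                    + prop_logdens(x) - prop_logdens(z)):
            x = z
        draws.append(x)
        if t % 200 == 0:               # periodic adaptation with clipping
            arr = np.array(draws)
            a = float(np.clip(np.mean(arr == 0.0), 0.05, 0.95))
            nz = arr[arr != 0.0]
            if nz.size > 10:
                mu, sd = float(nz.mean()), float(max(nz.std(), 0.1))
    return np.array(draws), a          # adapted a should approach w0
```

The adapted weight a plays the role of the abstract's approximate marginal posterior probability for the point-mass component (here, the exclusion probability of a single coefficient).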
On efficient calculations for Bayesian variable selection
Computational Statistics & Data Analysis, 2012
We describe an efficient, exact Bayesian algorithm applicable to both variable selection and model averaging problems. A fully Bayesian approach provides a more complete characterization of the posterior ensemble of possible sub-models, but presents a computational challenge as the number of candidate variables increases. While several approximation techniques have been developed to deal with problems containing a large number of candidate variables, including BMA, IBMA, MCMC and Gibbs sampling approaches, here we focus on improving the time complexity of exact inference using a recursive algorithm (Exact Bayesian Inference in Regression, or EBIR) that uses components of one sub-model to rapidly generate another, and prove that its time complexity is O(m²).
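The abstract does not spell out the EBIR recursion, but reusing one sub-model's components to build another typically rests on bordered-inverse (Schur complement) updates like the following sketch, which grows (X'X)^{-1} by one variable in O(m²) rather than the O(m³) of a fresh inversion.

```python
import numpy as np

def add_variable_inverse(Ainv, b, c):
    """Given Ainv = (X_S' X_S)^{-1} for the current subset S, return the
    inverse after one new column x is appended, where b = X_S' x and
    c = x' x, using the Schur complement s = c - b' Ainv b."""
    u = Ainv @ b
    s = c - b @ u                       # positive when the new column adds rank
    m = Ainv.shape[0]
    out = np.empty((m + 1, m + 1))
    out[:m, :m] = Ainv + np.outer(u, u) / s
    out[:m, m] = -u / s
    out[m, :m] = -u / s
    out[m, m] = 1.0 / s
    return out
```

The result can be verified against np.linalg.inv applied to the full bordered matrix for any full-rank example.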
Bayesian Variable Selection in Linear Regression and a Comparison
Hacettepe Journal of Mathematics and Statistics, 2001
In this study, Bayesian procedures such as Zellner's method, Occam's window and Gibbs sampling have been compared in terms of selecting the correct subset of variables in a linear regression model. The aim of this comparison is to analyze Bayesian variable selection and the behavior of classical criteria, taking into consideration different values of β and σ and different prior expectation levels.
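Of the approaches compared, Occam's window is the simplest to sketch: discard models whose posterior probability falls too far below the best one's, then renormalize over the survivors. The cutoff c = 20 below is a conventional illustrative choice, not taken from the study.

```python
import numpy as np

def occams_window(log_post, c=20.0):
    """Keep models whose posterior probability is within a factor c of the
    best model's, then renormalize the weights over the survivors."""
    log_post = np.asarray(log_post, dtype=float)
    keep = log_post >= log_post.max() - np.log(c)
    w = np.exp(log_post[keep] - log_post[keep].max())
    return keep, w / w.sum()
```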