Power-Expected-Posterior Priors as Mixtures of g-Priors
Related papers
Power-Expected-Posterior Priors as Mixtures of g-Priors in Normal Linear Models
Bayesian Analysis
One of the main approaches used to construct prior distributions for objective Bayes methods is the concept of random imaginary observations. Under this setup, the expected-posterior prior (EPP) offers several advantages, among them a simple interpretation and an effective way to establish compatibility of priors across models. In this paper, we study the power-expected-posterior prior as a generalization of the EPP for objective Bayesian model selection under normal linear models. We prove that it can be represented as a mixture of g-priors, like a wide range of prior distributions under normal linear models, so that posterior distributions and Bayes factors are available in closed form and computational tractability is retained. Comparisons with other mixtures of g-priors are made, and emphasis is given to the posterior distribution of g and its effect on Bayesian model selection and model averaging.
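For reference, a minimal sketch in standard notation of the representation this abstract refers to; the specific mixing density π(g) that makes this the PEP prior is derived in the paper and is not reproduced here:

```latex
% Zellner's g-prior for the coefficients of model M_gamma:
%   beta_gamma | g, sigma^2  ~  N( 0, g sigma^2 (X_gamma' X_gamma)^{-1} ).
% A mixture of g-priors places a hyper-prior pi(g) on g:
\pi(\beta_\gamma \mid \sigma^2)
  \;=\; \int_0^\infty
    \mathrm{N}\!\left(\beta_\gamma \,\middle|\, 0,\; g\,\sigma^2\,(X_\gamma^\top X_\gamma)^{-1}\right)
    \pi(g)\,\mathrm{d}g .
```

Because the normal linear model is conjugate given g, Bayes factors under any prior of this form reduce to one-dimensional integrals over g, which is the source of the computational tractability mentioned above.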
Power-Expected-Posterior Priors for Generalized Linear Models
Bayesian Analysis
The power-expected-posterior (PEP) prior provides an objective, automatic, consistent and parsimonious model selection procedure. At the same time, it resolves the conceptual and computational problems due to the use of imaginary data. Namely, (i) it dispenses with the need to select and average across all possible minimal imaginary samples, and (ii) it diminishes the effect that the imaginary data have upon the posterior distribution. These attributes allow for large sample approximations, when needed, in order to reduce the computational burden under more complex models. In this work we generalize the applicability of the PEP methodology, focusing on the framework of generalized linear models (GLMs), by introducing two new PEP definitions which are in effect applicable to any general model setting. Hyper-prior extensions for the power parameter that regulates the contribution of the imaginary data are introduced. We further study the validity of the predictive matching and of the model selection consistency, providing analytical proofs for the former and empirical evidence supporting the latter. For estimation of posterior model and inclusion probabilities we introduce a tuning-free Gibbs-based variable selection sampler. Several simulation scenarios and one real-life example are considered in order to evaluate the performance of the proposed methods compared to other commonly used approaches based on mixtures of g-priors. Results indicate that the GLM-PEP priors are more effective in the identification of sparse and parsimonious model formulations.
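As a rough illustration of the kind of sampler the abstract mentions, a minimal sketch of a Gibbs-type variable selection scheme that updates one inclusion indicator at a time; `log_marginal` is a hypothetical stand-in (here a toy score, not the GLM-PEP marginal likelihood from the paper), and a uniform prior on model space is assumed:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_marginal(gamma):
    """Hypothetical stand-in for log m(y | M_gamma).

    Replace with the marginal likelihood under the prior of interest
    (under GLM-PEP this would come from the paper's approximations);
    here a toy score that favors the sparse "true" model {0, 2}.
    """
    true = np.zeros_like(gamma)
    true[[0, 2]] = 1
    return -5.0 * np.sum(np.abs(gamma - true))

def gibbs_variable_selection(p, n_iter=2000):
    """Tuning-free Gibbs sampler over the inclusion indicators gamma_j."""
    gamma = np.zeros(p, dtype=int)
    draws = np.empty((n_iter, p), dtype=int)
    for t in range(n_iter):
        for j in range(p):
            g1, g0 = gamma.copy(), gamma.copy()
            g1[j], g0[j] = 1, 0
            # Full conditional of gamma_j under a uniform model prior:
            # P(gamma_j = 1 | rest) = m1 / (m0 + m1), on the log scale.
            diff = log_marginal(g0) - log_marginal(g1)
            gamma[j] = rng.binomial(1, 1.0 / (1.0 + np.exp(diff)))
        draws[t] = gamma
    return draws

draws = gibbs_variable_selection(p=5)
print("posterior inclusion probabilities:", draws[500:].mean(axis=0))
```

Each full conditional only needs the marginal likelihoods of the two models that differ in coordinate j, which is why a sampler of this type requires no tuning.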
Power-Conditional-Expected Priors: Using g-Priors with Random Imaginary Data for Variable Selection
Journal of Computational and Graphical Statistics, 2015
Zellner's g-prior and its recent hierarchical extensions are the most popular default prior choices in the Bayesian variable selection context. These prior set-ups can be expressed as power priors with a fixed set of imaginary data. In this paper, we borrow ideas from the power-expected-posterior (PEP) priors in order to introduce, under the g-prior approach, an extra hierarchical level that accounts for the uncertainty in the imaginary data. For normal regression variable selection problems, the resulting power-conditional-expected-posterior (PCEP) prior is a conjugate normal-inverse-gamma prior which provides a consistent variable selection procedure and, for finite samples, supports more parsimonious models than the g-prior and the hyper-g prior. Detailed illustrations and comparisons of the variable selection procedures under the proposed method, the g-prior and the hyper-g prior are provided using both simulated and real data examples.
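To illustrate the closed-form computation that conjugacy buys in this family, a sketch of the standard null-based Bayes factor under Zellner's g-prior with fixed g, for a model with an intercept; the PCEP prior adds a hyper-prior layer on top of this, so this is background rather than the PCEP Bayes factor itself:

```python
import numpy as np

def log_bf_gprior(y, X, g):
    """Log Bayes factor of the model with design X (plus intercept)
    against the intercept-only null, under Zellner's g-prior with fixed g.
    Standard closed form: BF = (1+g)^{(n-p-1)/2} / (1 + g(1-R^2))^{(n-1)/2}.
    """
    n, p = X.shape
    yc = y - y.mean()                     # center out the intercept
    Xc = X - X.mean(axis=0)
    beta_hat, *_ = np.linalg.lstsq(Xc, yc, rcond=None)
    r2 = 1.0 - np.sum((yc - Xc @ beta_hat) ** 2) / np.sum(yc ** 2)
    return 0.5 * (n - p - 1) * np.log1p(g) \
         - 0.5 * (n - 1) * np.log1p(g * (1.0 - r2))

rng = np.random.default_rng(1)
X = rng.standard_normal((50, 3))
y = X @ np.array([1.0, 0.0, 0.5]) + rng.standard_normal(50)
print(log_bf_gprior(y, X, g=50.0))        # unit-information-style choice g = n
```

A mixture of g-priors, such as the hyper-g or PCEP, replaces the fixed g by an integral of this expression against the hyper-prior on g.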
Prior Distributions for Objective Bayesian Analysis
Bayesian Analysis
We provide a review of prior distributions for objective Bayesian analysis. We start by examining some foundational issues and then organize our exposition into priors for: i) estimation or prediction; ii) model selection; iii) high-dimensional models. With regard to i), we present some basic notions, and then move to more recent contributions on discrete parameter space, hierarchical models, nonparametric models, and penalizing complexity priors. Point ii) is the focus of this paper: it discusses principles for objective Bayesian model comparison, and singles out some major concepts for building priors, which are subsequently illustrated in some detail for the classic problem of variable selection in normal linear models. We also present some recent contributions in the area of objective priors on model space. With regard to point iii) we only provide a short summary of some default priors for high-dimensional models, a rapidly growing area of research.
An Objective Bayesian Criterion to Determine Model Prior Probabilities
We discuss the problem of selecting among alternative parametric models within the Bayesian framework. For model selection problems which involve non-nested models, the common objective choice of a prior on the model space is the uniform distribution. The same applies to situations where the models are nested. It is our contention that assigning equal prior probability to each model is overly simplistic. Consequently, we introduce a novel approach to objectively determine model prior probabilities conditionally on the choice of priors for the parameters of the models. The idea is based on the notion of the worth of having each model within the selection process. At the heart of the procedure is the measure of this worth using the Kullback-Leibler divergence between densities from different models.
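For reference, the Kullback-Leibler divergence on which the notion of "worth" is based; how the divergences are aggregated and normalized into model prior probabilities is specific to the paper and only indicated schematically here:

```latex
% KL divergence between densities f and h:
D_{\mathrm{KL}}(f \,\|\, h) \;=\; \int f(x)\,\log\frac{f(x)}{h(x)}\,\mathrm{d}x .
% Schematically, each model M_j receives a worth w_j built from such
% divergences against the competing models, and the prior is taken as
%   P(M_j) \propto w_j .
```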
Power-Expected-Posterior Priors for Variable Selection in Gaussian Linear Models
2012
Summary: Imaginary training samples are often used in Bayesian statistics to develop prior distributions, with appealing interpretations, for use in model comparison. Expected-posterior priors are defined via imaginary training samples coming from a common underlying predictive distribution m∗, using an initial baseline prior distribution. These priors can have subjective and also default Bayesian implementations, based on different choices of m∗ and of the baseline prior.
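In the notation common to this literature, the construction just described is as follows, with y* the imaginary training sample and π^N the baseline prior; the comment on the power parameter δ reflects the PEP extension described elsewhere in this list:

```latex
% Expected-posterior prior: average the baseline posterior over imaginary
% samples y* drawn from the underlying predictive m*:
\pi^{\mathrm{EPP}}(\theta) \;=\; \int \pi^{N}(\theta \mid y^{*})\, m^{*}(y^{*})\,\mathrm{d}y^{*} .
% The power-expected-posterior (PEP) variant tempers the imaginary-data
% likelihood to f(y^* \mid \theta)^{1/\delta} inside the baseline posterior,
% diminishing the effect of the imaginary data for \delta > 1.
```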
The power-conditional-expected-posterior (PCEP) prior developed for variable selection in normal regression models combines ideas from the power prior and the expected-posterior prior, relying on the concept of random imaginary data, and provides a consistent variable selection method which leads to parsimonious inference. In this paper we discuss the computational limitations of applying the PCEP prior to generalized linear models (GLMs) and present two PCEP prior variations which are easily applicable to regression models belonging to the exponential family of distributions. We highlight the differences between the initial PCEP prior and the two GLM-based PCEP prior adaptations and compare their properties in the conjugate case of the normal linear regression model. Hyper-prior extensions for the PCEP power parameter are further considered. We consider several simulation scenarios and one real data example for evaluating the performance of the proposed methods compared to other common...
Variations of power-expected-posterior priors in normal regression models
Computational Statistics & Data Analysis, 2019
The power-expected-posterior (PEP) prior is an objective prior for Gaussian linear models, which leads to consistent model selection inference, under the M-closed scenario, and tends to favor parsimonious models. Recently, two new forms of the PEP prior were proposed which generalize its applicability to a wider range of models. The properties of these two PEP variants within the context of the normal linear model are examined thoroughly, focusing on the prior dispersion and on the consistency of the induced model selection procedure. Results show that both PEP variants have larger variances than the unit-information g-prior and that they are M-closed consistent, as the limiting behavior of the corresponding marginal likelihoods matches that of the BIC. Consistency under the M-open case is further investigated using three different model misspecification scenarios.
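For reference, the standard notions in which these consistency results are stated; the definitions below are supplied for context and are not results from the paper:

```latex
% M-closed model selection consistency: the posterior probability of the
% data-generating model M_T tends to one in probability,
\Pr(M_{T} \mid y_{1:n}) \;\xrightarrow{\;p\;}\; 1 \qquad \text{as } n \to \infty .
% Matching the BIC means -2 \log m(y_{1:n} \mid M_\gamma) agrees to O_p(1) with
\mathrm{BIC}_{\gamma} \;=\; -2\log \hat{L}_{\gamma} + p_{\gamma}\log n .
```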
Bayesian Model Averaging Using Power-Expected-Posterior Priors
Econometrics, 2020
This paper focuses on Bayesian model averaging (BMA) using the power-expected-posterior prior in objective Bayesian variable selection under normal linear models. We derive the BMA point estimate of a predicted value, and present strategies for computing and evaluating its prediction accuracy. We compare the performance of our method with that of similar approaches in a simulated and a real data example from economics.
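A minimal sketch of the BMA point prediction the abstract derives a PEP-based version of: the model-averaged predictor weights each model's prediction by its posterior model probability. The numbers below are placeholders, not PEP-based quantities:

```python
import numpy as np

# Posterior model probabilities p(M_gamma | y) and each model's point
# prediction at a new design point; placeholder values for illustration.
posterior_probs = np.array([0.55, 0.30, 0.15])   # must sum to 1
model_predictions = np.array([2.1, 2.4, 1.8])

# BMA point prediction: yhat = sum_gamma p(M_gamma | y) * yhat_gamma
yhat_bma = posterior_probs @ model_predictions
print(f"BMA point prediction: {yhat_bma:.3f}")
```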