ON DUAL EXPRESSION OF PRIOR INFORMATION IN BAYESIAN PARAMETER ESTIMATION

Quantification of prior information revised

International Journal of Adaptive Control and Signal Processing, 2001

Quantification of prior information about possibly high-dimensional unknown parameters of dynamic probabilistic models is addressed. Their prior probability density function (pdf) is chosen in conjugate form. Individual pieces of information are converted into a common basis of fictitious data so that their different natures and uncertainty levels are respected. Then, available measured data are used for assessing confidence in the various information pieces, and the final prior pdf is obtained as a geometric mean of the individual pdfs weighted by the respective confidence weights. The algorithm is elaborated for a rich exponential family, with the normal regression model with external inputs as its prominent example. The positive influence of proper prior information on the design of adaptive controllers is demonstrated.
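
The pooling step has a simple closed form when each information piece is encoded as a Gaussian prior: a geometric mean of Gaussian pdfs is again Gaussian, with precisions combined linearly under the confidence weights. A minimal sketch of that identity (all numbers invented for illustration, not taken from the paper):

```python
import numpy as np

def pool_gaussian_priors(means, variances, weights):
    """Logarithmic (geometric-mean) pooling of Gaussian prior pdfs.

    prod_i N(theta; m_i, v_i)^{w_i} is proportional to a Gaussian
    whose precision is sum_i w_i / v_i, with the precision-weighted
    mean.  Weights are typically normalised to sum to one.
    """
    means = np.asarray(means, dtype=float)
    precisions = np.asarray(weights, dtype=float) / np.asarray(variances, dtype=float)
    v_pooled = 1.0 / precisions.sum()
    m_pooled = v_pooled * (precisions * means).sum()
    return m_pooled, v_pooled

# Three information pieces, with confidence weights assessed from measured data.
m, v = pool_gaussian_priors(means=[0.8, 1.1, 0.2],
                            variances=[0.1, 0.5, 2.0],
                            weights=[0.5, 0.3, 0.2])
```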

On the method of likelihood-induced priors

ArXiv, 2019

We demonstrate that the functional form of the likelihood contains sufficient information for constructing a prior for the unknown parameters. We develop a four-step algorithm by invoking information entropy as the measure of uncertainty, and show how the information gained from the coarse-graining and resolving power of the likelihood can be used to construct likelihood-induced priors. As a consequence, we show that if the data model density belongs to the exponential family, the likelihood-induced prior is the conjugate prior to the corresponding likelihood.
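
For orientation, the conjugacy statement can be written out in the standard canonical form of the exponential family (a textbook identity, not the paper's derivation):

```latex
p(x \mid \theta) = h(x)\,\exp\!\bigl\{\eta(\theta)^{\top} T(x) - A(\theta)\bigr\},
\qquad
\pi(\theta \mid \chi, \nu) \propto \exp\!\bigl\{\eta(\theta)^{\top} \chi - \nu\, A(\theta)\bigr\}.
```

Multiplying the two keeps the prior's functional form, with hyperparameters updated as $\chi \to \chi + T(x)$ and $\nu \to \nu + 1$; for example, a Bernoulli likelihood induces a Beta prior in this way.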

Bayesian Inference Under Partial Prior Information

Scandinavian Journal of Statistics, 2003

Partial prior information on the marginal distribution of an observable random variable is considered. When this information is incorporated into the statistical analysis of an assumed parametric model, the posterior inference is typically non-robust, so that no inferential conclusion is obtained. To overcome this difficulty, a method based on the standard default prior associated with the model and an intrinsic …

The Prior Can Often Only Be Understood in the Context of the Likelihood

Entropy, 2017

A key sticking point of Bayesian analysis is the choice of prior distribution, and there is a vast literature on potential defaults including uniform priors, Jeffreys' priors, reference priors, maximum entropy priors, and weakly informative priors. These methods, however, often manifest a key conceptual tension in prior modeling: a model encoding true prior information should be chosen without reference to the model of the measurement process, but almost all common prior modeling techniques are implicitly motivated by a reference likelihood. In this paper we resolve this apparent paradox by placing the choice of prior into the context of the entire Bayesian analysis, from inference to prediction to model evaluation.

A Note on Bayesian Prediction from the Regression Model with Informative Priors

Australian & New Zealand Journal of Statistics, 2001

This paper considers the problem of undertaking a predictive analysis from a regression model when proper conjugate priors are used. It shows how the prior information can be incorporated as a virtual experiment by augmenting the data, and it derives expressions for both the prior and the posterior predictive densities. The results obtained are of considerable practical importance to practitioners of Bayesian regression methods.
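
The virtual-experiment device can be sketched concretely for the conjugate Gaussian case: a prior beta ~ N(b0, sigma^2 * V0) behaves exactly like extra rows appended to the design matrix, so ordinary least squares on the augmented data returns the posterior mean. A minimal sketch (variable names and numbers are ours, not the paper's):

```python
import numpy as np

def augment_with_prior(X, y, b0, V0_inv):
    """Recast a conjugate prior beta ~ N(b0, sigma^2 * V0) as fictitious
    rows of the design matrix: with V0^{-1} = L^T L, appending rows L to X
    and L @ b0 to y makes ordinary least squares return the posterior mean
    (X^T X + V0^{-1})^{-1} (X^T y + V0^{-1} b0)."""
    L = np.linalg.cholesky(V0_inv).T   # upper-triangular factor, L^T L = V0^{-1}
    X_aug = np.vstack([X, L])
    y_aug = np.concatenate([y, L @ b0])
    return X_aug, y_aug

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2))
y = X @ np.array([1.0, -0.5]) + 0.1 * rng.normal(size=20)

X_aug, y_aug = augment_with_prior(X, y, b0=np.zeros(2), V0_inv=np.eye(2))
posterior_mean, *_ = np.linalg.lstsq(X_aug, y_aug, rcond=None)
```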

Parameter estimation via conditional expectation: a Bayesian inversion

Advanced Modeling and Simulation in Engineering Sciences, 2016

When a mathematical or computational model is used to analyse some system, it is usual that some parameters, functions, or fields in the model are not known, and hence uncertain. These parametric quantities are then identified from actual observations of the response of the real system. In a probabilistic setting, Bayes's theorem is the proper mathematical background for this identification process. The ability to compute a conditional expectation turns out to be crucial for this purpose. We show how this theoretical background can be used in an actual numerical procedure, and briefly discuss various numerical approximations.
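
A common numerical realisation of the conditional-expectation idea is to approximate E[q | y] by an affine map of the observation, as in ensemble Kálmán-type updates. A toy Monte Carlo sketch (the forward model and all numbers are our own illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Prior ensemble of the unknown parameter q and simulated noisy observations
# of a toy forward model g(q) = q**2 (our choice, not the paper's example).
q_prior = rng.normal(1.0, 0.5, size=5000)
y_sim = q_prior**2 + rng.normal(0.0, 0.1, size=5000)

# Affine approximation of the conditional expectation E[q | y]:
# update each sample with the Kalman-type gain K = Cov(q, y) / Var(y).
K = np.cov(q_prior, y_sim)[0, 1] / y_sim.var()
y_obs = 1.2                         # the actual measurement (invented)
q_post = q_prior + K * (y_obs - y_sim)
print(q_post.mean(), q_post.std())
```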

Operational parameters in Bayesian models

Test, 1994

Operational parameters are parameters that are defined in terms of the data that are being modelled. This paper shows under what conditions they can be used in place of the usual formal parameters and discusses the advantages of doing this. Example applications include the normal, exponential and uniform model, the multivariate-normal and other multivariate models, finite-population versions, and also several new models. Also presented is their relationship to de Finetti's work on exchangeability and other symmetry-based approaches.
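
A minimal illustration of the definition, in the spirit of de Finetti's representation (our example, not one reproduced from the paper): for an exchangeable sequence modelled as normal, the operational mean and variance are limits of functionals of the data themselves,

```latex
\mu \;=\; \lim_{n\to\infty} \frac{1}{n}\sum_{i=1}^{n} x_i ,
\qquad
\sigma^{2} \;=\; \lim_{n\to\infty} \frac{1}{n}\sum_{i=1}^{n} \bigl(x_i - \mu\bigr)^{2} ,
```

so the "parameter" is itself an idealised observable rather than a purely formal index of the model.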

Specification of prior distributions under model uncertainty

2009

We consider the specification of prior distributions for Bayesian model comparison, focusing on regression-type models. We propose a particular joint specification of the prior distribution across models so that sensitivity of posterior model probabilities to the dispersion of prior distributions for the parameters of individual models (Lindley's paradox) is diminished. We illustrate the behavior of inferential and predictive posterior quantities in linear and log-linear regressions under our proposed prior densities with a series of simulated and real data examples.
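
The sensitivity being diminished here is easy to reproduce in one dimension: as the prior dispersion under the alternative grows, the Bayes factor swings toward the point null no matter what the data say. A short sketch (all numbers invented for illustration):

```python
import numpy as np
from scipy.stats import norm

# Lindley's paradox in one dimension: Bayes factor of H0: mu = 0
# against H1: mu ~ N(0, tau^2), when xbar ~ N(mu, sigma^2 / n).
xbar, sigma, n = 0.25, 1.0, 100   # z = 2.5: a test rejects H0 at the 5% level
se = sigma / np.sqrt(n)

for tau in [0.5, 2.0, 10.0, 100.0]:
    m0 = norm.pdf(xbar, loc=0.0, scale=se)                        # evidence under H0
    m1 = norm.pdf(xbar, loc=0.0, scale=np.sqrt(tau**2 + se**2))   # evidence under H1
    print(f"tau = {tau:6.1f}   BF01 = {m0 / m1:8.3f}")
# BF01 grows with tau: a diffuse prior under H1 increasingly favours H0.
```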