On the Empirical Validity of the Bayesian Method
Related papers
Bayesianism II: Applications and Criticisms
Philosophy Compass, 2011
In the first paper, I discussed the basic claims of Bayesianism (that degrees of belief are important, that they obey the axioms of probability theory, and that they are rationally updated by either standard or Jeffrey conditionalization) and the arguments that are often used to support them. In this paper, I will discuss some applications these ideas have had in confirmation theory, epistemology, and statistics, and criticisms of these applications.
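For readers unfamiliar with the two updating rules named here, they are standardly written as follows (a generic sketch in common notation, not taken from either paper): on learning E with certainty, standard conditionalization sets the new credence in H to the old credence in H given E; Jeffrey conditionalization covers the case where experience merely shifts the probabilities over a partition {E_i} to new values q_i.

    \[
      P_{\text{new}}(H) = P_{\text{old}}(H \mid E),
      \qquad
      P_{\text{new}}(H) = \sum_i q_i \, P_{\text{old}}(H \mid E_i).
    \]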
Bayesianism I: Introduction and Arguments in Favor
Philosophy Compass, 2011
Bayesianism is a popular position (or perhaps, positions) in the philosophy of science, epistemology, statistics, and other related areas, which represents belief as coming in degrees, measured by a probability function. In this article, I give an overview of the unifying features of the different positions called 'Bayesianism', and discuss several of the arguments traditionally used to support them.
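The claim that belief comes in degrees "measured by a probability function" is usually cashed out via the standard probability axioms; a minimal sketch in generic notation (not drawn from the article itself):

    \[
      P(A) \ge 0, \qquad P(\Omega) = 1, \qquad
      P(A \cup B) = P(A) + P(B) \ \text{whenever } A \cap B = \emptyset .
    \]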
On Bayesian Measures of Evidential Support: Theoretical and Empirical Issues
Philosophy of Science, 2007
Epistemologists and philosophers of science have often attempted to express formally the impact of a piece of evidence on the credibility of a hypothesis. In this paper we will focus on the Bayesian approach to evidential support. We will propose a new formal treatment of the notion of degree of confirmation and we will argue that it overcomes some limitations of the currently available approaches on two grounds: (i) a theoretical analysis of the confirmation relation seen as an extension of logical deduction and (ii) an empirical comparison of competing measures in an experimental inquiry concerning inductive reasoning in a probabilistic setting.
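As background, the competing measures of degree of confirmation that such comparisons typically involve include the difference, log-ratio, and log-likelihood-ratio measures, sketched below in standard notation (these are familiar proposals from the literature; the paper's own new measure is not reproduced here):

    \[
      d(H,E) = P(H \mid E) - P(H), \qquad
      r(H,E) = \log\frac{P(H \mid E)}{P(H)}, \qquad
      l(H,E) = \log\frac{P(E \mid H)}{P(E \mid \lnot H)} .
    \]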
Bayesian Argumentation: The Practical Side of Probability
Springer eBooks, 2012
Positive evidence for non-arbitrary assignments of probability
AIP Conference Proceedings, 2007
How to assign numerical values to probabilities in a way that does not seem artificial or arbitrary is a central question in Bayesian statistics. The case of assigning a probability to the truth of a proposition or event for which there is no evidence other than that the event is contingent is contrasted with the assignment of probability where there is definite evidence that the event can happen in a finite set of ways. The truth of a proposition of this kind is frequently assigned a probability via arguments from ignorance, symmetry, randomness, the Principle of Indifference, the Principal Principle, non-informativeness, or other methods. These concepts are all shown to be flawed or misleading. The statistical syllogism introduced by Williams in 1947 is shown to fix the problems that the other arguments have. An example in the context of model selection is given.
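For orientation, the statistical syllogism Williams defended has the following familiar form (a sketch of the general pattern, not the paper's own formulation):

    \[
      \Pr\!\big(Ga \;\big|\; Fa \ \wedge\ \text{``}m/n\ \text{of the}\ F\text{s are}\ G\text{''}\big) \;=\; \frac{m}{n},
    \]

read as: given only that a is an F and that the proportion m/n of Fs are G, assign probability m/n to a being G.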
Bayesianism: An Algorithmic Analysis
Information and Computation, 1996
The Bayesian program in statistics starts from the assumption that an individual can always ascribe a definite probability to any event. It will be demonstrated that this assumption is incompatible with the natural requirement that the individual's subjective probability distribution should be computable. We shall construct a probabilistic algorithm producing, with probability extremely close to 1, an infinite binary sequence which is not random with respect to any computable probability distribution (we use Dawid's notion of randomness, computable calibration, but the results hold for other widely known notions of randomness as well). Since the Bayesian knows the algorithm, he must believe that this sequence will be non-calibrable. On the other hand, it seems that the Bayesian must believe that the sequence is random with respect to his own probability distribution. We hope that the discussion of this apparent paradox will clarify the foundations of Bayesian statistics. We also analyse the time of computation and the place of "losing randomness." We show that we need only polynomial time and space to demonstrate non-calibration effects on finite sequences.
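Calibration in Dawid's sense, informally, requires that among the occasions on which a forecaster announces probability p, the forecast event occurs with relative frequency close to p. The following is a minimal empirical check of that idea; it is an illustrative sketch only (the bin width, tolerance, and sample-size cutoff are arbitrary choices, and it is not the construction used in the paper):

    from collections import defaultdict
    import random

    def looks_calibrated(forecasts, outcomes, tol=0.05, min_count=30):
        """Crude check: within each group of rounds that received (roughly)
        the same forecast, the frequency of 1s should be near that forecast."""
        groups = defaultdict(list)
        for p, x in zip(forecasts, outcomes):
            groups[round(p, 2)].append(x)          # bin forecasts to 2 decimals
        return all(
            abs(sum(xs) / len(xs) - p) <= tol
            for p, xs in groups.items()
            if len(xs) >= min_count                # ignore sparsely used forecasts
        )

    # Example: a forecaster who always announces 0.5 about fair coin flips
    random.seed(0)
    flips = [random.randint(0, 1) for _ in range(10_000)]
    print(looks_calibrated([0.5] * len(flips), flips))   # expected: True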
An Objective Justification of Bayesianism I: Measuring Inaccuracy
Philosophy of Science, 2010
In this article and its sequel, we derive Bayesianism from the following norm: Accuracy—an agent ought to minimize the inaccuracy of her partial beliefs. In this article, we make this norm mathematically precise. We describe epistemic dilemmas an agent might face if she attempts to follow Accuracy and show that the only measures of inaccuracy that do not create these dilemmas are the quadratic inaccuracy measures. In the sequel, we derive Bayesianism from Accuracy and show that Jeffrey Conditionalization violates Accuracy unless Rigidity is assumed. We describe the alternative updating rule that Accuracy mandates in the absence of Rigidity.
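The quadratic (Brier-style) inaccuracy measures referred to here have the familiar form below, where b is the agent's credence function over propositions X_1, ..., X_n and w is the omniscient credence function at a world (1 for truths, 0 for falsehoods); this is a generic sketch with weights λ_i, not the article's exact statement:

    \[
      I(b, w) \;=\; \sum_{i=1}^{n} \lambda_i \,\big(b(X_i) - w(X_i)\big)^{2}, \qquad \lambda_i > 0 .
    \]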
A Bayesian Perspective on Confidence
Uncertainty in Artificial Intelligence, 1987
We present a representation of partial confidence in belief and preference that is consistent with the tenets of decision theory. The fundamental insight underlying the representation is that if a person is not completely confident in a probability or utility assessment, additional modeling of the assessment may improve decisions to which it is relevant. We show how a traditional decision-analytic approach can be used to balance the benefits of additional modeling with associated costs. The approach can be used during knowledge acquisition to focus the attention of a knowledge engineer or expert on parts of a decision model that deserve additional refinement.
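One way to read "balance the benefits of additional modeling with associated costs" is as a value-of-information calculation: compare the expected utility of acting on the current point assessment with the expected utility of acting after a refined assessment, and refine only when the gain exceeds the modeling cost. A hypothetical numerical sketch follows (the two-action, two-state setup and all numbers are invented for illustration and are not taken from the paper):

    def best_expected_utility(p, actions):
        """Expected utility of the best action when P(state = 1) = p."""
        return max(p * u1 + (1 - p) * u0 for (u0, u1) in actions)

    actions = [(0.0, 100.0),   # risky: pays off only in state 1
               (40.0, 40.0)]   # safe: constant payoff

    p_point = 0.3                          # current point assessment of P(state = 1)
    refined = [(0.5, 0.1), (0.5, 0.5)]     # refinement: (weight, refined p); mean is still 0.3

    eu_now = best_expected_utility(p_point, actions)
    eu_after = sum(w * best_expected_utility(p, actions) for (w, p) in refined)
    print(eu_after - eu_now)   # 5.0 here; refine only if this exceeds the modeling cost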
A Bayesian account of establishing
The British Journal for the Philosophy of Science, 2021
When a proposition is established, it can be taken as evidence for other propositions. Can the Bayesian theory of rational belief and action provide an account of establishing? I argue that it can, but only if the Bayesian is willing to endorse objective constraints on both probabilities and utilities, and willing to deny that it is rationally permissible to defer wholesale to expert opinion. I develop a new account of deference that accommodates this latter requirement.
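The kind of wholesale deference to expert opinion that the paper denies is rationally mandatory is usually formulated along the following lines, where cr_E is the expert's credence function (a sketch of the standard deference principle, not of the paper's replacement for it):

    \[
      P\big(A \mid \mathrm{cr}_E(A) = x\big) \;=\; x \qquad \text{for all propositions } A \text{ and all values } x .
    \]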