Foundations of Probability Theory, Statistical Inference, and Statistical Theories of Science

A Note on Foundations of Bayesianism

2000

We discuss the justifications of Bayesianism by Cox and Jaynes, and relate them to a recent critique by Halpern (JAIR, vol. 10 (1999), pp. 67–85). We show that a problem with Halpern's example is that a finite and natural refinement of the model leads to inconsistencies, and that the same holds for every model in which rescaling to probability is impossible. We also discuss other problems with the justifications and with the assumptions usually made on the function F describing the plausibility of a conjunction. We note that the commonly postulated monotonicity condition should be strengthened to strict monotonicity before Cox's justification becomes convincing. On the other hand, we note that the commonly assumed regularity requirements on F (such as continuity) or on its domain (such as denseness) are unnecessary.
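For orientation (a standard textbook rendering, not taken from the paper itself), the function F at issue combines the plausibilities of conjunctions, and Cox-style arguments turn on an associativity constraint whose strictly monotone solutions can be rescaled to the product rule. A minimal sketch in LaTeX, with illustrative notation:

% Cox-style constraint on the conjunction function F (standard form; notation illustrative).
% Writing the plausibility of a conjunction as (A and B | C) = F[(A | B,C), (B | C)],
% evaluating (A and B and C | D) in two different orders forces the associativity equation
\[
  F\bigl(F(x,y),\,z\bigr) \;=\; F\bigl(x,\,F(y,z)\bigr),
\]
% whose suitably regular, strictly monotone solutions admit a rescaling g with
% g(F(x,y)) = g(x)\,g(y), i.e. the product rule of probability.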

The Bayesian model of probabilistic inference and the probability of theories

Andrés Rivadulla: Éxito, razón y cambio en física, Madrid: Ed. Trotta, 2004

In these pages I offer my solution to the problem of the inductive probability of theories. Against existing expectations in certain areas of current philosophy of science, I argue that Bayes's Theorem does not constitute an appropriate tool to assess the probability of theories, and that we would do well to banish the question of how likely a certain scientific theory is to be true, or to what extent one theory is more likely to be true than another. Although I agree with Popper that inductive probability is impossible, I disagree with the way Sir Karl presents his argument, as I have shown elsewhere, so my proof is completely different. The argument I present in this paper is based on applying Bayes's Theorem to specific situations that show its inefficiency, both with respect to whether a hypothesis becomes more likely to be true the greater the empirical evidence that supports it, and with respect to whether the probability calculus allows us to single out, from a set of mutually incompatible hypotheses, the one most likely to be true.
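As a point of reference (standard material, not quoted from the book), the form of Bayes's Theorem at issue when weighing mutually incompatible hypotheses against a body of evidence is:

% Bayes's Theorem for mutually exclusive, jointly exhaustive hypotheses H_1,...,H_n
% and evidence E (standard form, stated only to fix notation for the discussion above).
\[
  P(H_i \mid E) \;=\; \frac{P(E \mid H_i)\,P(H_i)}{\sum_{j=1}^{n} P(E \mid H_j)\,P(H_j)},
\]
% so the posterior of H_i depends on prior probabilities P(H_j), which, the author
% argues, cannot be assigned to scientific theories in a non-arbitrary way.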

Bayesianism I: Introduction and Arguments in Favor

Philosophy Compass, 2011

Bayesianism is a popular position (or perhaps, positions) in the philosophy of science, epistemology, statistics, and other related areas, which represents belief as coming in degrees, measured by a probability function. In this article, I give an overview of the unifying features of the different positions called 'Bayesianism', and discuss several of the arguments traditionally used to support them.

Bayesianism II: Applications and Criticisms

Philosophy Compass, 2011

In the first paper, I discussed the basic claims of Bayesianism (that degrees of belief are important, that they obey the axioms of probability theory, and that they are rationally updated by either standard or Jeffrey conditionalization) and the arguments that are often used to support them. In this paper, I will discuss some applications these ideas have had in confirmation theory, epistemology, and statistics, and criticisms of these applications.
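As background (standard definitions, not drawn from the article itself), the two update rules mentioned are strict conditionalization and Jeffrey conditionalization:

% Strict conditionalization: on learning E with certainty,
%   P_new(A) = P_old(A | E).
% Jeffrey conditionalization: when experience merely shifts the probabilities of a
% partition {E_1,...,E_n} to new values q_1,...,q_n,
\[
  P_{\mathrm{new}}(A) \;=\; \sum_{i=1}^{n} P_{\mathrm{old}}(A \mid E_i)\, q_i ,
\]
% with strict conditionalization recovered as the special case where some q_i = 1.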

Positive evidence for non-arbitrary assignments of probability

AIP Conference Proceedings, 2007

How to assign numerical values to probabilities so that they do not seem artificial or arbitrary is a central question in Bayesian statistics. The case of assigning a probability to the truth of a proposition or event for which there is no evidence other than that the event is contingent is contrasted with the assignment of probability in the case where there is definite evidence that the event can happen in a finite set of ways. The truth of a proposition of this kind is frequently assigned a probability via arguments from ignorance, symmetry, randomness, the Principle of Indifference, the Principal Principle, non-informativeness, or by other methods. These concepts are all shown to be flawed or misleading. The statistical syllogism introduced by Williams in 1947 is shown to fix the problems that the other arguments have. An example in the context of model selection is given.
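For orientation (a standard rendering of the rule, not quoted from the paper), Williams's statistical syllogism licenses a direct-inference assignment of roughly the following form:

% Statistical syllogism (schematic, standard rendering):
%   Premise 1: a proportion p of the members of class F are G.
%   Premise 2: a is a member of F, and nothing else relevant about a is known.
%   Conclusion: the probability that a is G is p,
\[
  P\bigl(G(a) \mid F(a) \wedge \text{``a proportion $p$ of $F$ are $G$''}\bigr) \;=\; p .
\]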

Four foundational questions in probability theory and statistics

Physics Essays, 2017

This study addresses four questions that lie at the base of probability theory and statistics, and includes two main steps. First, we conduct a textual analysis of the most significant works written by eminent probability theorists. The textual analysis turns out to be a rather innovative method of study in this domain, and shows how the sampled writers, whether frequentist or subjectivist, share a similar approach. Each author argues about the manifold aspects of probability and then establishes the mathematical theory on the basis of his intellectual conclusions. It may be said that mathematics ranks second. Hilbert foresees an approach far different from that used by the sampled authors. He proposes to axiomatize the probability calculus, notably to describe probability concepts using purely mathematical criteria. In the second stage of the present research we address the four issues of probability theory and statistics following Hilbert's recommendations. Specifically, we use two theorems that prove that the frequentist and subjectivist models are not incompatible, contrary to what many believe. Probability has distinct meanings under different hypotheses, and in turn classical statistics and Bayesian statistics are available for adoption in different circumstances. Subsequently, these results are commented upon, followed by our conclusions.

On the foundations of Bayesianism

AIP Conference Proceedings, 2001

We discuss precise assumptions entailing Bayesianism in the line of investigation started by Cox, and relate them to a recent critique by Halpern. We show that every finite model which cannot be rescaled to probability violates a natural and simple refinability principle. A new condition, separability, is shown to be necessary and sufficient for rescalability of infinite models. We finally characterize the acceptable ways to handle uncertainty in infinite models based on Cox's assumptions. Certain closure properties must be assumed before all the axioms of ordered fields are satisfied. Once this is done, a proper plausibility model can be embedded in an ordered field containing the reals: either standard probability (the field of reals) for a real-valued plausibility model, or extended probability (a field of reals and infinitesimals) for an ordered plausibility model. The end result is that if our assumptions are accepted, all reasonable uncertainty management schemes must be based on sets of extended probability distributions and Bayes conditioning.
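To fix ideas (an illustrative gloss, not the paper's own definitions), "rescalable to probability" can be read as the existence of an order-preserving map carrying the plausibility calculus onto the usual product rule:

% Rescalability (illustrative gloss): a plausibility model with conjunction function F
% is rescalable to probability if there is a strictly increasing map g into [0,1] with
\[
  g\bigl(F(x,y)\bigr) \;=\; g(x)\,g(y),
\]
% so that composing g with the plausibility assignment yields an ordinary probability
% obeying the product rule; in the extended-probability case the target [0,1] is
% replaced by an interval of an ordered field containing the reals and infinitesimals.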

Why Bayesianism? A Primer on a Probabilistic Philosophy of Science

Several attempts have been made, both at present and in the past, to impose some a priori desiderata on statistical/inductive inference (Fitelson, 1999; Jeffreys, 1961; Zellner, 1996; Jaynes, 2003; Lele, 2004). Bringing this literature on desiderata to the fore, I argue that these attempts to understand inference could be controversial.

Mutual Influence between Different Views of Probability and Statistical Inference

PARADIGMA

In this paper, we analyse the various meanings of probability and its different applications, focusing especially on the classical, the frequentist, and the subjectivist view. We describe the different problems of how probability can be measured in each of the approaches, and how each of them can be justified by a mathematical theory. We analyse the foundations of probability, where the scientific analysis of the theory that allows for a frequentist interpretation leads to unsolvable problems. Kolmogorov's axiomatic theory does not suffice to establish statistical inference without further definitions and principles. Finally, we show how statistical inference essentially determines the meaning of probability, and how a shift emerges from purely objectivist views to a complementary conception of probability with frequentist and subjectivist constituents. For didactical purposes, the result of the present analyses explains basic problems of teaching, originating from a biased focus...
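For reference (the standard formulation, not reproduced from the paper), Kolmogorov's axioms characterize a probability measure P on a sigma-algebra over a sample space while leaving its interpretation open:

% Kolmogorov's axioms (standard statement) for a probability measure P on (Omega, F):
\begin{align*}
  &\text{(1) Non-negativity:}       && P(A) \ge 0 \quad \text{for all } A \in \mathcal{F},\\
  &\text{(2) Normalization:}        && P(\Omega) = 1,\\
  &\text{(3) Countable additivity:} && P\Bigl(\textstyle\bigcup_{i\ge 1} A_i\Bigr) = \sum_{i\ge 1} P(A_i)
     \quad \text{for pairwise disjoint } A_i \in \mathcal{F}.
\end{align*}
% The axioms fix the calculus but say nothing about how P is to be interpreted or how
% inferences from data are to be drawn, which is the gap discussed above.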