Can Bayes' Rule be Justified by Cognitive Rationality Principles?
Related papers
Belief revision in probability theory
Proceedings of the Ninth Conference on Uncertainty in …, 1993
In a probability-based reasoning system, Bayes' theorem and its variations are often used to revise the system's beliefs. However, if the explicit conditions and the implicit conditions of probability assignments are properly distinguished, it follows that Bayes' theorem is not a generally applicable revision rule. Once belief revision is properly distinguished from belief updating, we see that Jeffrey's rule and its variations are not revision rules, either. Without these distinctions, the limitation of the Bayesian approach is often ignored or underestimated. Revision, in its general form, cannot be done in the Bayesian approach, because a probability distribution function alone does not contain the information needed by the operation. In particular, Bayes' theorem serves as a revision rule only when (1) m ∈ {0, 1}, that is, the new evidence is binary-valued and can simply be written as A or ¬A; (2) A ∈ S, otherwise its probability is undefined; and (3) P_C(A) > 0, otherwise it cannot be used as a denominator in Bayes' theorem.
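To make the contrast concrete, here is a minimal sketch (the worlds, numbers, and function names are illustrative, not from the paper) of the two operations the abstract distinguishes: Bayesian conditioning, which requires binary evidence of positive prior probability, and Jeffrey's rule, which accepts a new probability for the evidence rather than its truth.

```python
# Illustrative sketch: Bayesian conditioning vs. Jeffrey's rule on a discrete
# distribution over four worlds. Proposition A holds in w1 and w2.
prior = {"w1": 0.3, "w2": 0.2, "w3": 0.4, "w4": 0.1}
A = {"w1", "w2"}

def condition(p, event):
    """Bayes' theorem as a revision rule: needs binary evidence with P(event) > 0."""
    z = sum(p[w] for w in event)
    assert z > 0, "conditioning on a zero-probability event is undefined"
    return {w: (p[w] / z if w in event else 0.0) for w in p}

def jeffrey(p, event, q):
    """Jeffrey's rule: move the total probability of `event` to q, keeping the
    distributions inside and outside the event proportionally fixed."""
    p_in = sum(p[w] for w in p if w in event)
    p_out = 1.0 - p_in
    return {w: (q * p[w] / p_in if w in event else (1 - q) * p[w] / p_out) for w in p}

print(condition(prior, A))     # evidence "A is true":  {'w1': 0.6, 'w2': 0.4, 'w3': 0.0, 'w4': 0.0}
print(jeffrey(prior, A, 0.8))  # evidence "P(A) = 0.8": mass inside and outside A rescaled
```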
A Simple Modal Logic for Belief Revision
Synthese, 2005
We propose a modal logic based on three operators, representing initial beliefs, information and revised beliefs. Three simple axioms are used to provide a sound and complete axiomatization of the qualitative part of Bayes' rule. Some theorems of this logic are derived concerning the interaction between current beliefs and future beliefs. Information flows and iterated revision are also discussed.
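As a rough gloss on what "the qualitative part of Bayes' rule" amounts to (a reading consistent with the abstract, not a quotation of the paper's axioms), the following fragment states the set-theoretic counterpart of conditioning.

```latex
% Minimal sketch: if the information E is compatible with the initial beliefs,
% the revised beliefs are the initial beliefs restricted to E, mirroring the
% fact that conditioning on E with P_0(E) > 0 gives supp(P_1) = supp(P_0) ∩ E.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\[
  B_0 \cap E \neq \varnothing \;\Longrightarrow\; B_1 = B_0 \cap E,
  \qquad
  P_0(E) > 0 \;\Longrightarrow\; \operatorname{supp}(P_1) = \operatorname{supp}(P_0) \cap E.
\]
\end{document}
```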
How to Revise Beliefs from Conditionals: A New Proposal
2021
A large body of work has demonstrated the utility of the Bayesian framework for capturing inference in both specialist and everyday contexts. However, the central tool of the framework, conditionalization via Bayes’ rule, does not apply directly to a common type of learning: the acquisition of conditional information. How should an agent change her beliefs on learning that “If A, then C”? This issue, which is central to both reasoning and argumentation, has recently prompted considerable research interest. In this paper, we critique a prominent proposal and provide a new, alternative, answer.
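A small sketch of why the problem arises (the numbers are hypothetical): if "If A, then C" is read as the material conditional ¬A ∨ C and the agent simply conditionalizes on it, the probability of A cannot rise and typically falls, which is often the wrong verdict.

```python
# Illustrative sketch: conditioning on the material reading of "If A then C"
# removes the (A, not-C) world and thereby lowers P(A).
prior = {("A", "C"): 0.2, ("A", "~C"): 0.3, ("~A", "C"): 0.3, ("~A", "~C"): 0.2}

material = {w for w in prior if not (w[0] == "A" and w[1] == "~C")}  # worlds where ¬A ∨ C holds
z = sum(prior[w] for w in material)
post = {w: (prior[w] / z if w in material else 0.0) for w in prior}

p_A_before = sum(v for (a, _), v in prior.items() if a == "A")  # 0.5
p_A_after = sum(v for (a, _), v in post.items() if a == "A")    # 0.2 / 0.7 ≈ 0.286
print(p_A_before, p_A_after)
```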
Justifying conditionalization: Conditionalization maximizes expected epistemic utility
Mind, 2006
According to Bayesian epistemology, the epistemically rational agent updates her beliefs by conditionalization: that is, her posterior subjective probability after taking account of evidence X, p_new, is to be set equal to her prior conditional probability p_old(·|X). Bayesians can be challenged to provide a justification for their claim that conditionalization is recommended by rationality - whence the normative force of the injunction to conditionalize? There are several existing justifications for conditionalization, but none directly addresses the idea that conditionalization will be epistemically rational if and only if it can reasonably be expected to lead to epistemically good outcomes. We apply the approach of cognitive decision theory to provide a justification for conditionalization using precisely that idea. We assign epistemic utility functions to epistemically rational agents; an agent's epistemic utility is to depend both upon the actual state of the world and on the agent's credence distribution over possible states. We prove that, under independently motivated conditions, conditionalization is the unique updating rule that maximizes expected epistemic utility.
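The core idea can be illustrated with a toy calculation (the worlds, numbers, and the choice of the logarithmic score as a stand-in epistemic utility are assumptions for illustration, not the paper's general utility functions): with the expectation taken using the conditional probabilities, no alternative posterior scores better than the conditional distribution itself, a consequence of Gibbs' inequality.

```python
# Illustrative sketch, not the paper's proof: expected log score of adopting a
# candidate posterior, where the expectation uses the prior conditioned on X.
import math

prior = {"w1": 0.4, "w2": 0.4, "w3": 0.2}
X = {"w1", "w2"}                                   # the evidence rules out w3

zX = sum(prior[w] for w in X)
conditional = {w: (prior[w] / zX if w in X else 0.0) for w in prior}

def expected_log_score(candidate):
    """Expected epistemic utility of adopting `candidate` as posterior, with
    utility = log of the credence assigned to the true world."""
    return sum(conditional[w] * math.log(candidate[w]) for w in X)

rival = {"w1": 0.7, "w2": 0.3, "w3": 0.0}          # some other update on X
print(expected_log_score(conditional))             # highest attainable value
print(expected_log_score(rival))                   # strictly lower
```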
On the Logic of Iterated Belief Revision
Artificial Intelligence, 1997
We show in this paper that the AGM postulates are too weak to ensure the rational preservation of conditional beliefs during belief revision, thus permitting improper responses to sequences of observations. We remedy this weakness by augmenting the AGM system with four additional postulates, which are sound relative to a qualitative version of probabilistic conditioning. Finally, we establish a
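For reference, the four additional postulates are usually cited in the later literature on iterated revision in roughly the following form (reproduced from the standard presentation rather than quoted from the paper); \circ denotes revision of an epistemic state \Psi.

```latex
% Standard formulation of the Darwiche-Pearl postulates (C1)-(C4).
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\begin{align*}
  \text{(C1)}\quad & \text{if } \alpha \models \mu, \text{ then } (\Psi \circ \mu) \circ \alpha \equiv \Psi \circ \alpha\\
  \text{(C2)}\quad & \text{if } \alpha \models \neg\mu, \text{ then } (\Psi \circ \mu) \circ \alpha \equiv \Psi \circ \alpha\\
  \text{(C3)}\quad & \text{if } \Psi \circ \alpha \models \mu, \text{ then } (\Psi \circ \mu) \circ \alpha \models \mu\\
  \text{(C4)}\quad & \text{if } \Psi \circ \alpha \not\models \neg\mu, \text{ then } (\Psi \circ \mu) \circ \alpha \not\models \neg\mu
\end{align*}
\end{document}
```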
A Rule For Updating Ambiguous Beliefs
Theory and Decision, 2002
When preferences are such that there is no unique additive prior, the issue of which updating rule to use is of extreme importance. This paper presents an axiomatization of the rule which requires updating of all the priors by Bayes' rule. The decision maker has conditional preferences over acts. It is assumed that preferences over acts conditional on event E happening do not depend on lotteries received on E^c, obey axioms which lead to a maxmin expected utility representation with multiple priors, and have common induced preferences over lotteries. The paper shows that when all priors give positive probability to an event E, a certain coherence property between conditional and unconditional preferences is satisfied if and only if the set of subjective probability measures considered by the agent given E is obtained by updating all subjective prior probability measures using Bayes' rule.
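A minimal sketch of the updating rule the abstract describes (the states, priors, act, and numbers are illustrative assumptions): every prior in the ambiguous belief set is conditioned on the observed event E, and maxmin expected utility is then taken over the updated set.

```python
# Illustrative sketch: prior-by-prior ("full Bayesian") updating of a set of
# priors, with maxmin expected utility evaluated before and after learning E.
priors = [
    {"s1": 0.5, "s2": 0.3, "s3": 0.2},
    {"s1": 0.2, "s2": 0.5, "s3": 0.3},
]
E = {"s1", "s2"}                       # the event the decision maker learns

def bayes_update(p, event):
    z = sum(p[s] for s in event)       # both priors give E positive probability
    return {s: (p[s] / z if s in event else 0.0) for s in p}

posteriors = [bayes_update(p, E) for p in priors]   # update *all* priors

act = {"s1": 10, "s2": 0, "s3": 4}                  # an act: state -> utility
def expected_utility(p, a):
    return sum(p[s] * a[s] for s in p)

print(min(expected_utility(p, act) for p in priors))       # unconditional maxmin EU: 3.2
print(min(expected_utility(p, act) for p in posteriors))   # conditional maxmin EU: ≈ 2.86
```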
Revising beliefs on the basis of evidence
International Journal of Approximate Reasoning, 2012
Approaches to belief revision most commonly deal with categorical information: an agent has a set of beliefs and the goal is to consistently incorporate a new item of information given by a formula. However, most information about the real world is not categorical. In revision, one may circumvent this fact by assuming that, in some fashion or other, an agent has elected to accept a formula φ, and the task of revision is to consistently incorporate φ into its belief corpus. Nonetheless, it is worth asking whether probabilistic information and noncategorical beliefs may be reconciled with, or even inform, approaches to revision. In this paper, one such account is presented. An agent receives uncertain information as input, and its probabilities on (a finite set of) possible worlds are updated via Bayesian conditioning. A set of formulas among the noncategorical beliefs is identified as the agent's categorical belief set. The effect of this updating on the belief set is examined with respect to its appropriateness as a revision operator. We show that few of the classical AGM belief revision postulates are satisfied by this approach. Most significantly, though not surprisingly, the success postulate is not guaranteed to hold. However, it does hold after a sufficient number of iterations. As well, it proves to be the case that in revising by a formula consistent with the agent's beliefs, revision does not correspond to expansion. Postulates for iterated revision are also examined, and most such postulates also turn out not to hold. On the other hand, limiting cases of the presented approach correspond to specific approaches to revision that have appeared in the literature.
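A sketch of the general mechanism (the likelihoods, threshold, and the way a categorical belief is read off are assumptions for illustration; the paper's own constructions may differ): uncertain evidence for φ arrives as a likelihood over worlds, probabilities are updated by Bayesian conditioning, and φ is categorically believed once its probability clears a threshold. A single update need not make φ believed, so AGM success can fail, but repeated independent evidence eventually secures it.

```python
# Illustrative sketch: conditioning on repeated uncertain evidence for phi.
worlds = {"w_phi": 0.3, "w_not_phi": 0.7}      # prior; phi holds only in w_phi
likelihood = {"w_phi": 0.8, "w_not_phi": 0.4}  # P(report | world)
THRESHOLD = 0.95

def update(p, lik):
    """Bayesian conditioning on the uncertain report."""
    unnorm = {w: p[w] * lik[w] for w in p}
    z = sum(unnorm.values())
    return {w: unnorm[w] / z for w in unnorm}

p = dict(worlds)
for step in range(1, 8):
    p = update(p, likelihood)
    print(step, round(p["w_phi"], 3), p["w_phi"] >= THRESHOLD)
# phi is not believed after the first few updates, but "success" holds after
# enough iterations of the same evidence (here, at step 6).
```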
Conditioning and updating evidence
International Journal of Approximate Reasoning, 2004
A new interpretation of Dempster-Shafer conditional notions based directly upon the mass assignments is provided. The masses of those propositions that may imply the complement of the conditioning proposition are shown to be completely annulled by the conditioning operation; conditioning may then be construed as a re-distribution of the masses of some of these propositions to those that definitely imply the conditioning proposition. A complete characterization of the propositions whose masses are annulled without re-distribution, annulled with re-distribution and enhanced by the re-distribution of masses is provided. A new evidence updating strategy that is composed of a linear combination of the available evidence and the conditional evidence is also proposed. It enables one to account for the 'integrity' and 'inertia' of the available evidence and its 'flexibility' to updating by appropriate selection of the linear combination weights. Several such strategies, including one that has a probabilistic interpretation, are also provided.
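To make the mass re-distribution picture concrete, here is a small sketch (the frame, masses, and the weight alpha are illustrative; the paper's own strategies impose further structure on the weights): Dempster conditioning of a mass assignment on a proposition B, followed by a linear combination of the original and conditional evidence.

```python
# Illustrative sketch: Dempster conditioning as mass re-distribution, plus a
# linear-combination update mixing available and conditional evidence.
FRAME = frozenset({"a", "b", "c"})
m = {frozenset({"a"}): 0.3, frozenset({"c"}): 0.2,
     frozenset({"b", "c"}): 0.3, FRAME: 0.2}   # mass assignment
B = frozenset({"a", "b"})                      # conditioning proposition

def dempster_condition(m, B):
    """Each focal set C passes its mass to C ∩ B; mass on sets disjoint from B
    (sets implying the complement of B) is annulled and removed by normalization."""
    out, annulled = {}, 0.0
    for C, w in m.items():
        inter = C & B
        if inter:
            out[inter] = out.get(inter, 0.0) + w
        else:
            annulled += w
    return {A: w / (1.0 - annulled) for A, w in out.items()}

m_cond = dempster_condition(m, B)

alpha = 0.6   # 'inertia' of the available evidence vs. its flexibility to updating
focals = set(m) | set(m_cond)
m_updated = {A: alpha * m.get(A, 0.0) + (1 - alpha) * m_cond.get(A, 0.0) for A in focals}

print(m_cond)      # masses now concentrated on subsets of B
print(m_updated)   # a compromise between the original and conditional evidence
```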