Belief revision as a truth-tracking process
Related papers
Truth Tracking and Belief Revision
We analyze the learning power of iterated belief revision methods, and in particular their universality: whether or not they can learn everything that can be learnt. We focus on three popular methods: conditioning, lexicographic revision and minimal revision. Our main result is that conditioning and lexicographic revision are universal on arbitrary epistemic states, provided that the observational setting is sound and complete (only true data are observed, and all true data are eventually observed) and provided that a non-standard (non-well-founded) prior plausibility relation is allowed. We show that a standard (well-founded) belief-revision setting is in general too narrow for this. We also show that minimal revision is not universal. Finally, we consider situations in which observational errors (false observations) may occur. Given a fairness condition (saying that only finitely many errors occur, and that every error is eventually corrected), we show that lexicographic revision is still universal in this setting, while the other two methods are not.
Truth-Tracking by Belief Revision
2014
We study the learning power of iterated belief-revision methods. Successful learning is understood as convergence to correct, i.e., true, beliefs. We focus on the issue of universality: whether or not a particular belief-revision method is able to learn everything that in principle is learnable. We provide a general framework for interpreting belief-revision policies as learning methods. We focus on three popular cases: conditioning, lexicographic revision, and minimal revision. Our main result is that conditioning and lexicographic revision can drive a universal learning mechanism, provided that the observations include only and all true data, and provided that a non-standard, i.e., non-well-founded, prior plausibility relation is allowed. We show that a standard, i.e., well-founded, belief-revision setting is in general too narrow to guarantee universality of any learning method based on belief revision. We also show that minimal revision is not universal. Finally, we consider situations in which observational errors (false observations) may occur. Given a fairness condition, which says that only finitely many errors occur and that every error is eventually corrected, we show that lexicographic revision is still universal in this setting, while the other two methods are not.
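To make the three policies concrete, here is a minimal sketch in Python; the tier encoding, function names, and example worlds are illustrative assumptions of mine, not the paper's formalism:

```python
# A toy sketch of the three revision policies the paper compares.
# A "state" is a plausibility order over worlds, encoded as a list of
# tiers, most plausible first; an observation is a predicate on worlds.

def conditioning(tiers, obs):
    """Conditioning: delete every world refuted by the observation."""
    kept = [[w for w in tier if obs(w)] for tier in tiers]
    return [t for t in kept if t]  # drop emptied tiers

def lexicographic(tiers, obs):
    """Lexicographic revision: move ALL observation-worlds above the
    rest, keeping the old order within each group."""
    sat = [[w for w in tier if obs(w)] for tier in tiers]
    unsat = [[w for w in tier if not obs(w)] for tier in tiers]
    return [t for t in sat if t] + [t for t in unsat if t]

def minimal(tiers, obs):
    """Minimal (natural) revision: promote only the most plausible
    observation-worlds to a new top tier; change nothing else."""
    for i, tier in enumerate(tiers):
        best = [w for w in tier if obs(w)]
        if best:
            rest = [w for w in tier if not obs(w)]
            return [best] + tiers[:i] + ([rest] if rest else []) + tiers[i + 1:]
    return tiers  # observation satisfied by no world

# Worlds named by the atoms true in them; observe that q holds.
state = [["pq"], ["p"], ["q"], [""]]
obs_q = lambda w: "q" in w

print(conditioning(state, obs_q))   # [['pq'], ['q']]  -- p-only worlds erased
print(lexicographic(state, obs_q))  # [['pq'], ['q'], ['p'], ['']]  -- all q-worlds promoted
print(minimal(state, obs_q))        # [['pq'], ['p'], ['q'], ['']]  -- unchanged: top tier already has a q-world
```

The contrast mirrors the paper's error-tolerance result: conditioning erases refuted worlds irrevocably, so a single false observation can be fatal, whereas lexicographic revision merely demotes them and can promote them again later.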
On the Logic of Iterated Belief Revision
Artificial Intelligence, 1997
We show in this paper that the AGM postulates are too weak to ensure the rational preservation of conditional beliefs during belief revision, thus permitting improper responses to sequences of observations. We remedy this weakness by augmenting the AGM system with four additional postulates, which are sound relative to a qualitative version of probabilistic conditioning. Finally, we establish a representation theorem that characterizes the augmented system of postulates.
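For reference, the four additional postulates, commonly labelled (C1)-(C4), are usually rendered as follows, where Ψ is an epistemic state, ∘ the revision operator, and Bel(Ψ) the belief set of Ψ (a standard textbook formulation, not quoted from this abstract):

```latex
% Darwiche-Pearl postulates for iterated revision (standard rendering)
\begin{align*}
\textbf{(C1)}\quad & \text{if } \alpha \models \mu, \text{ then } Bel((\Psi \circ \mu) \circ \alpha) = Bel(\Psi \circ \alpha)\\
\textbf{(C2)}\quad & \text{if } \alpha \models \neg\mu, \text{ then } Bel((\Psi \circ \mu) \circ \alpha) = Bel(\Psi \circ \alpha)\\
\textbf{(C3)}\quad & \text{if } Bel(\Psi \circ \alpha) \models \mu, \text{ then } Bel((\Psi \circ \mu) \circ \alpha) \models \mu\\
\textbf{(C4)}\quad & \text{if } Bel(\Psi \circ \alpha) \not\models \neg\mu, \text{ then } Bel((\Psi \circ \mu) \circ \alpha) \not\models \neg\mu
\end{align*}
```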
A conditional logic for iterated belief revision
ECAI, 2000
In this paper we propose a conditional logic IBC to represent iterated belief revision. We define an iterated belief revision system by strengthening the postulates proposed by Darwiche and Pearl [3]. First, following the line of Darwiche and Pearl, we modify the AGM postulates to make belief revision a function of epistemic states rather than of belief sets. Then we propose a set of postulates for iterated revision which, together with the (modified) AGM postulates, entail Darwiche and Pearl's postulates. The conditional logic IBC has a standard semantics in terms of selection function models and provides a natural representation of epistemic states. IBC contains conditional axioms corresponding to the postulates for iterated revision. We provide a representation result, which establishes a one-to-one correspondence between iterated belief revision systems and IBC-models. We prove that Gärdenfors' Triviality Result does not apply to IBC.
A Framework for Iterated Belief Revision Using Possibilistic Counterparts to Jeffrey's Rule
Fundamenta Informaticae, 2010
Intelligent agents require methods to revise their epistemic state as they acquire new information. Jeffrey's rule, which extends conditioning to probabilistic inputs, is appropriate for revising probabilistic epistemic states when new information comes in the form of a partition of events with new probabilities and has priority over prior beliefs. This paper analyses the expressive power of two possibilistic counterparts to Jeffrey's rule for modeling belief revision in intelligent agents. We show that these rules can be used to recover several existing approaches to knowledge base revision, such as adjustment, natural belief revision, drastic belief revision, and the revision of an epistemic state by another epistemic state. In addition, we show that some recent forms of revision, called improvement operators, can also be recovered in our framework.
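For orientation, Jeffrey's rule in its standard probabilistic form (a textbook statement, not taken from this abstract) revises a prior P by a partition {E_i} carrying new probabilities λ_i:

```latex
% Jeffrey's rule of probability kinematics (standard formulation)
P'(A) \;=\; \sum_i \lambda_i \, P(A \mid E_i),
\qquad \text{where } \{E_i\} \text{ is a partition and } \sum_i \lambda_i = 1.
```

The possibilistic counterparts studied in the paper adapt this scheme to possibility distributions, where the sum and product are typically replaced by operations such as maximum and minimum, depending on the chosen notion of conditioning.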
Characterizing relevant belief revision operators
2011
Parikh has proposed a postulate, in addition to the AGM postulates, which characterizes the notion of relevant change: statements which are not related to the incoming statement should not change. However, Parikh left open the problem of defining methods that satisfy his relevance postulate. In this paper, we propose to characterize the set of methods that respect Parikh's postulate. For this, we represent beliefs as a set of prime implicants and we show how initial beliefs and incoming information can be combined so that all beliefs which are consistent with incoming information are preserved. We define prime implicant based revision as the selection of some of these formulas. Since the revision process enforces the selection of statements that preserve as much of the original beliefs as possible, we show that we have actually characterized the family of operators that satisfy Parikh's postulate.
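As a toy illustration of the prime-implicant idea (my own simplified encoding, not the paper's construction), one can combine each prime implicant of the old beliefs with the incoming formula, keeping every old literal the input does not contradict:

```python
# Terms are frozensets of signed integer literals (-n means "not atom n").

def combine(old_term, new_term):
    """Keep the literals of old_term consistent with new_term, add new_term."""
    kept = {lit for lit in old_term if -lit not in new_term}
    return frozenset(kept | set(new_term))

# Old beliefs: p AND q, i.e. one prime implicant {p, q}; input: NOT q.
P, Q = 1, 2
old_implicants = [frozenset({P, Q})]
incoming = frozenset({-Q})

revised = {combine(d, incoming) for d in old_implicants}
print(revised)  # {frozenset({1, -2})}: q is given up, but p survives
```

Here the belief in p is untouched, which is the behaviour Parikh's relevance postulate demands: the input ¬q says nothing about p.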
A Conditional Logic for Belief Revision
1998
In this paper we introduce a conditional logic BC to represent belief revision. Logic BC has a standard semantics in terms of possible worlds structures with a selection function and has strong similarities with Stalnaker’s logic C2. Moreover, Gärdenfors’ Triviality Result does not apply to BC. We provide a representation result, which shows that each belief revision system corresponds to a BC-model and every BC-model satisfying the covering condition determines a belief revision system.
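For context, the Triviality Result concerns the Ramsey test, which ties conditionals to revision; in a standard formulation (not quoted from this abstract):

```latex
% Ramsey test: accept "if A then B" in state K iff revising K by A yields B
A > B \in K \quad\Longleftrightarrow\quad B \in K * A
```

Gärdenfors showed that, on pain of triviality, this test cannot coexist with the AGM preservation postulate in a sufficiently rich language; conditional logics such as BC are designed to escape that collapse.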
Theoretical Foundations for Belief Revision
The Journal of Symbolic Logic, 1988
Belief revision systems are AI programs that deal with contradictions. They work with a knowledge base, performing reasoning from the propositions in the knowledge base and "filtering" those propositions so that only part of the knowledge base is perceived: the set of propositions that are under consideration. This set of propositions is called the set of believed propositions.
We propose a new approach to belief revision that provides a way to change knowledge bases with a minimum of effort. We call this way of revising belief states optimal belief revision. Our revision method gives special attention to the fact that most belief revision processes are directed to a specific informational objective. This approach to belief change is founded on notions such as optimal context and accessibility. For the sentential model of belief states we provide both a formal description of contexts as sub-theories determined by three parameters and a method to construct contexts. Next, we introduce an accessibility ordering for belief sets, which we then use for selecting the best (optimal) contexts with respect to the processing effort involved in the revision. Then, for finitely axiomatizable knowledge bases, we characterize a finite accessibility ranking from which the accessibility ordering for the entire base is generated and show how to determine the ranking of an arbitrary sentence in the language. Finally, we define the adjustment of the accessibility ranking of a revised base of a belief set.
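As a rough illustration of context selection by accessibility (the sentences, ranks, and selection rule below are my own assumptions, not the paper's three-parameter definition of contexts):

```python
# A finite accessibility ranking assigns each sentence a processing cost;
# revision works inside the cheapest sub-theory that covers the objective.

base = {"p": 0, "p -> q": 1, "r": 3}  # sentence -> accessibility rank

def optimal_context(base, relevant, budget):
    """Sub-theory of sentences relevant to the informational objective
    whose accessibility rank falls within the processing budget."""
    return {s for s, rank in base.items() if s in relevant and rank <= budget}

print(optimal_context(base, relevant={"p", "p -> q"}, budget=1))
# {'p', 'p -> q'}: 'r' is never considered, keeping revision effort minimal
```

The design point is that revision never touches sentences outside the selected optimal context, which is what keeps the processing effort minimal.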