Truth-Tracking by Belief Revision
Related papers
Truth Tracking and Belief Revision
We analyze the learning power of iterated belief revision methods, and in particular their universality: whether or not they can learn everything that can be learnt. We look in particular at three popular methods: conditioning, lexicographic revision and minimal revision. Our main result is that conditioning and lexicographic revision are universal on arbitrary epistemic states, provided that the observational setting is sound and complete (only true data are observed, and all true data are eventually observed) and provided that a non-standard (non-well-founded) prior plausibility relation is allowed. We show that a standard (well-founded) belief-revision setting is in general too narrow for this. We also show that minimal revision is not universal. Finally, we consider situations in which observational errors (false observations) may occur. Given a fairness condition (saying that only finitely many errors occur, and that every error is eventually corrected), we show that lexicographic revision is still universal in this setting, while the other two methods are not.
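The three revision policies compared in this abstract can be illustrated on finite models. The sketch below is an informal illustration, not the paper's construction (which crucially involves non-well-founded orders): it represents a plausibility order as a list of layers of worlds, most plausible first, and a proposition as the set of worlds where it holds.

```python
def conditioning(layers, prop):
    """Conditioning: simply delete every world where the observation fails."""
    new = [layer & prop for layer in layers]
    return [l for l in new if l]

def lexicographic(layers, prop):
    """Lexicographic revision: all prop-worlds become strictly more plausible
    than all non-prop-worlds; the relative order within each group is kept."""
    good = [layer & prop for layer in layers]
    bad = [layer - prop for layer in layers]
    return [l for l in good + bad if l]

def minimal(layers, prop):
    """Minimal (natural) revision: only the most plausible prop-worlds are
    promoted to the top layer; everything else stays where it was."""
    for layer in layers:
        best = layer & prop
        if best:
            break
    else:
        return layers  # observation holds in no world: leave the order as is
    rest = [l - best for l in layers]
    return [best] + [l for l in rest if l]

# Example: four worlds in three plausibility layers, observing {w3, w4}.
layers = [{"w1"}, {"w2", "w3"}, {"w4"}]
prop = {"w3", "w4"}
print(conditioning(layers, prop))    # worlds outside prop are erased
print(lexicographic(layers, prop))   # prop-worlds jump ahead of the rest
print(minimal(layers, prop))         # only the best prop-world is promoted
```

The difference in learning power traced in the paper comes precisely from how much of the old order each policy preserves: conditioning irreversibly discards worlds, lexicographic revision only reorders them, and minimal revision moves as little as possible.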
Belief revision as a truth-tracking process
2011
Tracking probabilistic truths: a logic for statistical learning
Synthese
We propose a new model for forming and revising beliefs about unknown probabilities. To go beyond what is known with certainty and represent the agent’s beliefs about probability, we consider a plausibility map, associating to each possible distribution a plausibility ranking. Beliefs are defined as in Belief Revision Theory, in terms of truth in the most plausible worlds (or more generally, truth in all the worlds that are plausible enough). We consider two forms of conditioning or belief update, corresponding to the acquisition of two types of information: (1) learning observable evidence obtained by repeated sampling from the unknown distribution; and (2) learning higher-order information about the distribution. The first changes only the plausibility map (via a ‘plausibilistic’ version of Bayes’ Rule), but leaves the given set of possible distributions essentially unchanged; the second rules out some distributions, thus shrinking the set of possibilities, without changing their ...
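As a toy illustration of update type (1), repeated sampling from an unknown distribution can reorder a finite set of candidate distributions by how well each explains the data, without eliminating any of them. This is only a crude likelihood-based stand-in for the paper's plausibility map and its 'plausibilistic' Bayes' Rule; the names and setup here are ours.

```python
from math import log

def rank_by_likelihood(candidates, samples):
    """Rank candidate coin biases by the log-likelihood of observed samples
    (1 = heads, 0 = tails). The set of candidates is left unchanged; only
    their plausibility ordering is revised, most plausible first."""
    def loglik(p):
        return sum(log(p) if s else log(1 - p) for s in samples)
    return sorted(candidates, key=loglik, reverse=True)

# Three hypotheses about an unknown coin; four heads out of five samples
# make the heavily heads-biased hypothesis the most plausible.
print(rank_by_likelihood([0.2, 0.5, 0.8], [1, 1, 1, 0, 1]))  # → [0.8, 0.5, 0.2]
```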
On the Logic of Iterated Belief Revision
Artificial Intelligence, 1997
We show in this paper that the AGM postulates are too weak to ensure the rational preservation of conditional beliefs during belief revision, thus permitting improper responses to sequences of observations. We remedy this weakness by augmenting the AGM system with four additional postulates, which are sound relative to a qualitative version of probabilistic conditioning. Finally, we establish a ...
Characterizing relevant belief revision operators
2011
Parikh has proposed a postulate, in addition to the AGM postulates, which characterizes the notion of relevant change: statements which are not related to the incoming statement should not change. However, Parikh left open the problem of defining methods that satisfy his relevance postulate. In this paper, we propose to characterize the set of methods that respect Parikh's postulate. To this end, we represent beliefs as a set of prime implicants and we show how initial beliefs and incoming information can be combined so that all beliefs which are consistent with the incoming information are preserved. We define prime-implicant-based revision as the selection of some of these formulas. Since the revision process enforces the selection of statements which preserve as much of the original beliefs as possible, we show that we have in fact characterized the family of operators which satisfy Parikh's postulate.
A Framework for Iterated Belief Revision Using Possibilistic Counterparts to Jeffrey's Rule
Fundamenta Informaticae, 2010
Intelligent agents require methods to revise their epistemic state as they acquire new information. Jeffrey's rule, which extends conditioning to probabilistic inputs, is appropriate for revising probabilistic epistemic states when new information comes in the form of a partition of events with new probabilities and has priority over prior beliefs. This paper analyses the expressive power of two possibilistic counterparts to Jeffrey's rule for modeling belief revision in intelligent agents. We show that this rule can be used to recover several existing approaches proposed in knowledge base revision, such as adjustment, natural belief revision, drastic belief revision, and the revision of an epistemic state by another epistemic state. In addition, we also show that some recent forms of revision, called improvement operators, can also be recovered in our framework.
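For reference, the probabilistic Jeffrey's rule that the possibilistic counterparts in this paper generalize can be sketched as follows. This is a minimal illustration of the standard rule only, not of the possibilistic variants; the function name and representation are ours.

```python
def jeffrey_update(prior, partition, new_probs):
    """Jeffrey's rule: given a prior over worlds, a partition of the worlds
    into events, and new probabilities for those events, rescale the prior
    inside each cell so that each cell receives its prescribed new weight."""
    post = {}
    for cell, q in zip(partition, new_probs):
        mass = sum(prior[w] for w in cell)  # prior probability of the event
        for w in cell:
            post[w] = prior[w] * q / mass   # keep ratios within the cell
    return post

# Shift the probability of event {a} from 0.5 up to 0.7; within {b, c}
# the prior ratio 0.3 : 0.2 is preserved.
prior = {"a": 0.5, "b": 0.3, "c": 0.2}
print(jeffrey_update(prior, [{"a"}, {"b", "c"}], [0.7, 0.3]))
```

Ordinary conditioning is the special case where the partition is {E, not-E} and the new probabilities are 1 and 0, which is why Jeffrey's rule is the natural template for revision by uncertain inputs.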
A conditional logic for iterated belief revision
ECAI, 2000
In this paper we propose a conditional logic IBC to represent iterated belief revision. We define an iterated belief revision system by strengthening the postulates proposed by Darwiche and Pearl [3]. First, following the line of Darwiche and Pearl, we modify the AGM postulates to make belief revision a function of epistemic states rather than of belief sets. Then we propose a set of postulates for iterated revision which, together with the (modified) AGM postulates, entail those of Darwiche and Pearl. The conditional logic IBC has a standard semantics in terms of selection function models and provides a natural representation of epistemic states. IBC contains conditional axioms corresponding to the postulates for iterated revision. We provide a representation result, which establishes a one-to-one correspondence between iterated belief revision systems and IBC-models. We prove that Gärdenfors' Triviality Result does not apply to IBC.
Artificial Intelligence, 1988
It is generally recognized that the possibility of detecting contradictions and identifying their sources is an important feature of an intelligent system. Systems that are able to detect contradictions, identify their causes, or readjust their knowledge bases to remove the contradiction, called Belief Revision Systems, Truth Maintenance Systems, or Reason Maintenance Systems, have been studied by several researchers in Artificial Intelligence (AI).