Truth Tracking and Belief Revision
We analyze the learning power of iterated belief revision methods, and in particular their universality: whether or not they can learn everything that can be learnt. We focus on three popular methods: conditioning, lexicographic revision and minimal revision. Our main result is that conditioning and lexicographic revision are universal on arbitrary epistemic states, provided that the observational setting is sound and complete (only true data are observed, and all true data are eventually observed) and provided that a non-standard (non-well-founded) prior plausibility relation is allowed. We show that a standard (well-founded) belief-revision setting is in general too narrow for this. We also show that minimal revision is not universal. Finally, we consider situations in which observational errors (false observations) may occur. Given a fairness condition (saying that only finitely many errors occur, and that every error is eventually corrected), we show that lexicographic revision is still universal in this setting, while the other two methods are not.
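To make the three revision policies concrete, here is a minimal sketch on finite total plausibility orders. An epistemic state is modelled as a list of worlds, most plausible first, and an observation as the set of worlds consistent with the new datum. The function names and this encoding are illustrative choices, not the paper's formalism (which works with arbitrary, possibly non-well-founded plausibility relations).

```python
# Sketch of three belief-revision policies on a finite total plausibility
# order. A state is a list of worlds, most plausible first; an observation
# O is the set of worlds consistent with the incoming datum.

def conditioning(state, O):
    """Update: discard every world refuted by the observation."""
    return [w for w in state if w in O]

def lexicographic(state, O):
    """Lexicographic revision: all O-worlds become strictly more plausible
    than all non-O-worlds; the old order is kept within each group."""
    return [w for w in state if w in O] + [w for w in state if w not in O]

def minimal(state, O):
    """Minimal (conservative) revision: only the most plausible O-world is
    promoted to the top; the rest of the order is left untouched."""
    best = next(w for w in state if w in O)
    return [best] + [w for w in state if w != best]
```

For example, starting from the order `["w1", "w2", "w3"]` and observing `O = {"w2", "w3"}`: conditioning yields `["w2", "w3"]`, lexicographic revision yields `["w2", "w3", "w1"]`, and minimal revision yields `["w2", "w1", "w3"]`. The contrast between lexicographic and minimal revision (moving the whole O-zone up versus only its best world) is what drives their different learning power in the paper's results.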