Dynamic Belief Revision over Multi-Agent Plausibility Models
Related papers
The Logic of Conditional Doxastic Actions: A theory of dynamic multi-agent belief revision
2008
Abstract. We present a logic of conditional doxastic actions, obtained by incorporating ideas from belief revision theory into the usual dynamic logic of epistemic actions. We do this by extending to actions the setting of epistemic plausibility models, developed in Baltag and Smets (2006) for representing (static) conditional beliefs. We introduce a natural extension of the notion of ...
Multi-Agent Belief Revision With Linked Plausibilities
CWI Amsterdam, in Proceedings LOFT VIII, …, 2009
In [11] it is shown how propositional dynamic logic (PDL) can be interpreted as a logic of belief revision that extends the logic of communication and change (LCC) given in [7]. This new version of epistemic/doxastic PDL does not impose any constraints on the basic relations and ...
Conditional Doxastic Models: A Qualitative Approach to Dynamic Belief Revision
Electronic Notes in Theoretical Computer Science, 2006
In this paper, we present a semantical approach to multi-agent belief revision and belief update. For this, we introduce relational structures called conditional doxastic models (CDM's, for short). We show this setting to be equivalent to an epistemic version of the classical AGM Belief Revision theory. We present a logic of conditional beliefs that is complete w.r.t. CDM's. Moving then to belief updates (sometimes called “dynamic” belief revision) induced by epistemic actions, we consider two particular cases: public announcements and private announcements to subgroups of agents. We show how the standard semantics for these types of updates can be appropriately modified in order to apply it to CDM's, thus incorporating belief revision into our notion of update. We provide a complete axiomatization of the corresponding dynamic doxastic logics. As an application, we solve a “cheating version” of the Muddy Children Puzzle.
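The conditional-belief and public-announcement machinery described above can be illustrated with a minimal sketch. This is not the paper's formalism: the world names, the integer-rank encoding of plausibility (lower rank = more plausible), and the coin example are illustrative assumptions.

```python
# Conditional belief on a single-agent plausibility model, with public
# announcement modelled as world deletion. Ranks encode plausibility:
# lower rank = more plausible (an assumed encoding, not the paper's).

def most_plausible(worlds, rank, condition):
    """Most plausible worlds satisfying `condition` (a predicate on worlds)."""
    candidates = [w for w in worlds if condition(w)]
    if not candidates:
        return []
    best = min(rank[w] for w in candidates)
    return [w for w in candidates if rank[w] == best]

def believes(worlds, rank, condition, conclusion):
    """Conditional belief: `conclusion` holds in all most-plausible
    `condition`-worlds."""
    return all(conclusion(w) for w in most_plausible(worlds, rank, condition))

def announce(worlds, rank, fact):
    """Public announcement of `fact`: delete refuting worlds, keep the order."""
    kept = [w for w in worlds if fact(w)]
    return kept, {w: rank[w] for w in kept}

# Three worlds: heads in w1 and w2, tails in w3; w1 is most plausible.
worlds = ["w1", "w2", "w3"]
rank = {"w1": 0, "w2": 1, "w3": 2}
heads = lambda w: w in ("w1", "w2")
tails = lambda w: w == "w3"

print(believes(worlds, rank, lambda w: True, heads))  # plain belief in heads: True
worlds, rank = announce(worlds, rank, tails)          # announce tails
print(believes(worlds, rank, lambda w: True, tails))  # belief revised: True
```

The announcement of tails eliminates the heads-worlds outright, so the revised belief follows without any re-ranking; softer forms of incoming information would instead reorder the surviving worlds.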
Dynamic Logics of Belief Change
2015
This chapter gives an overview of current dynamic logics that describe belief update and revision, both for single agents and in multi-agent settings. We employ a mixture of ideas from AGM belief revision theory and dynamic-epistemic logics of information-driven agency. After describing the basic background, we review logics of various kinds of beliefs based on plausibility models, and then go on to various sorts of belief change engendered by changes in current models through hard and soft information. We present matching complete logics with dynamic-epistemic recursion axioms, and develop a very general perspective on belief change by the use of event models and priority update. The chapter continues with three topics that naturally complement the setting of single steps of belief change: connections with probabilistic approaches to belief change, long-term temporal process structure including links with formal learning theory, and multi-agent scenarios of information flow and belief revision in games and social networks. We end with a discussion of alternative approaches, further directions, and windows to the broader literature, while links with relevant philosophical traditions are discussed throughout.
The Algebra of Multi-Agent Dynamic Belief Revision
Electronic Notes in Theoretical Computer Science, 2006
We refine our algebraic axiomatization of epistemic actions and epistemic update (notions defined using Kripke-style semantics) to incorporate a mechanism for dynamic belief revision in a multi-agent setting. We encode revision as a particular form of epistemic update, as a result of which we can revise with epistemic propositions as well as facts; we can also revise theories about actions as well as about states of the world, and we can do multi-agent belief revision. We show how our setting can be applied to a cheating version of the muddy children puzzle where, by using this logic, honest children do not end up with contradictory beliefs after the cheating happens.
A Qualitative Theory of Dynamic Interactive Belief Revision
2008
We present a logical setting that incorporates a belief-revision mechanism within Dynamic-Epistemic Logic. As the "static" basis for belief revision, we use epistemic plausibility models, together with a modal language based on two epistemic operators: a "knowledge" modality K (the standard S5, fully introspective notion) and a "safe belief" modality □ (a "weak", non-negatively-introspective notion, capturing a version of Lehrer's "indefeasible knowledge"). To deal with "dynamic" belief revision, we introduce action plausibility models, representing various types of "doxastic events". Action models "act" on state models via a modified update product operation: the "Action-Priority" Update. This is the natural dynamic generalization of AGM revision, giving priority to the incoming information (i.e., to "actions") over prior beliefs. We completely axiomatize this logic, and show how our update mechanism can "simulate", in a uniform manner, many different belief-revision policies.
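The Action-Priority Update can be sketched as an anti-lexicographic product of a state model and an action model. This is a simplified illustration, not the paper's construction: plausibility is assumed to be encoded as integer ranks (lower = more plausible), and the worlds, events, and preconditions are invented for the example.

```python
# Action-Priority ("anti-lexicographic") product update: the updated model
# consists of (world, event) pairs whose event precondition holds, ordered
# lexicographically on (event_rank, state_rank) so that the incoming
# information (the action) takes priority over prior beliefs.

def priority_update(state_rank, event_rank, precondition):
    """Return the updated model as a dict mapping (world, event) pairs to
    their combined rank tuple; lower tuples are more plausible."""
    return {(w, e): (event_rank[e], state_rank[w])
            for w in state_rank for e in event_rank
            if precondition(e, w)}

# The agent finds heads slightly more plausible; a soft announcement of
# tails is an action model whose "tails" event is the more plausible one.
state_rank = {"heads": 0, "tails": 1}
event_rank = {"e_tails": 0, "e_heads": 1}
pre = lambda e, w: (e == "e_tails") == (w == "tails")  # events fit matching worlds

updated = priority_update(state_rank, event_rank, pre)
best = min(updated, key=updated.get)
print(best)  # ('tails', 'e_tails'): the tails-world is now most plausible
```

Because event ranks come first in the tuple, the action order dominates and the prior state order only breaks ties, which is exactly the "priority to the incoming information" behaviour the abstract describes.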
A model for belief revision in a multi-agent environment (abstract)
ACM Sigois Bulletin, 1992
In modeling the knowledge processing structure of an Agent in a Multi-Agent world it becomes necessary to enlarge the traditional concept of Belief Revision. For detecting contradictions and identifying their sources it is sufficient to maintain information about what has been told; but to "solve" a contradiction it is necessary to keep information about who said it or, in general, about the source that the knowledge came from. We can take as certain the fact that an agent gave a piece of information, but we can take the information itself only as a revisable assumption. The belief revision system cannot leave the sources of the information out of consideration, because of their relevance in giving the additional notion of "strength of belief" [Galliers 89]. In fact, the reliability of the source affects the credibility of the information and vice versa. It is necessary to develop systems that deal with pairs <assumption, source of the assumption>. In [Dragoni 91] we proposed a system that moves in this direction. Here we give a short description of that system. In the first two parts we describe the agent's knowledge processing structure using a particular characterization of the "Assumption Based Belief Revision" concept; in part three we outline the project of an embedded device that enables the overall system to deal with pairs <assumption, source of the assumption> in a rather anthropomorphic manner.
Belief revision through the belief-function formalism in a multi-agent environment
The abilities to detect contradictions and to rearrange the cognitive space in order to cope with them are important to embed in the BDI architecture of an agent acting in a complex and dynamic world. However, to be accomplished in a multi-agent environment, "belief revision" must depart considerably from its original definitions. According to us, the main changes should be the following: replacing the "priority to the incoming information" principle with the "recoverability" principle (any previously believed piece of information must belong to the current cognitive state whenever possible), and dealing not just with pieces of information but with pairs <source, information>, since the reliability of the source affects the credibility of the information and vice versa. The "belief-function" formalism is here accepted as a simple and intuitive way to transfer the sources' reliability to the information's credibility.
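The transfer from source reliability to information credibility that this abstract mentions is standardly done in the belief-function formalism with Dempster's rule of combination. The sketch below shows that rule on a two-element frame; the frame, the two sources, and their mass assignments are illustrative assumptions, not taken from the paper.

```python
# Dempster's rule of combination: merge two mass functions (dicts mapping
# frozenset focal sets to masses), discarding and renormalising away the
# mass that falls on the empty (conflicting) intersection.

def combine(m1, m2):
    """Dempster combination of two mass functions over frozenset focal sets."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # mass assigned to the empty set
    # Renormalise by the non-conflicting mass.
    return {s: v / (1 - conflict) for s, v in combined.items()}

# Frame {p, not_p}: a fairly reliable source supports p, a weaker one not_p.
P, NP = frozenset({"p"}), frozenset({"not_p"})
ALL = P | NP
source1 = {P: 0.8, ALL: 0.2}   # 0.8 committed to p, 0.2 uncommitted
source2 = {NP: 0.4, ALL: 0.6}  # 0.4 committed to not_p, 0.6 uncommitted

merged = combine(source1, source2)
print(round(merged[P], 3))  # credibility of p after merging: 0.706
```

The more reliable source's evidence for p survives the conflict with the weaker source's evidence for not_p, which is the intended behaviour: the source's reliability (its committed mass) directly shapes the credibility of the information it supplies.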