Belief revision in multi-agent systems
Related papers
The ability to respond sensibly to changing and conflicting beliefs is an integral part of intelligent agency. To this end, we outline the design and implementation of a Distributed Assumption-based Truth Maintenance System (DATMS) appropriate for controlling cooperative problem solving in a dynamic real world multi-agent community. Our DATMS works on the principle of local coherence, which means that different agents can have different perspectives on the same fact, provided that these stances are appropriately ...
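The local coherence principle lends itself to a small illustration. The Python sketch below is not the paper's DATMS; it is an assumed minimal model in which each agent keeps its own justification store, so a shared fact (the hypothetical atom `runway_clear`) can be believed by one agent and not by another while each remains coherent with its own assumptions. The `Agent` class, its methods, and the example atoms are illustrative choices of this sketch.

```python
# Minimal sketch (not the paper's DATMS): each agent keeps its own
# justification store, so the same shared fact can be believed by one
# agent and disbelieved by another, as long as each agent's view is
# locally coherent with its own assumptions.

class Agent:
    def __init__(self, name):
        self.name = name
        self.assumptions = set()        # atoms this agent currently assumes
        self.justifications = {}        # fact -> set of supporting assumption sets

    def assume(self, atom):
        self.assumptions.add(atom)

    def justify(self, fact, support):
        # Record that `fact` holds whenever all atoms in `support` are assumed.
        self.justifications.setdefault(fact, set()).add(frozenset(support))

    def believes(self, fact):
        # Locally coherent: believed iff some support set is fully assumed here.
        return any(s <= self.assumptions
                   for s in self.justifications.get(fact, set()))


if __name__ == "__main__":
    a, b = Agent("A"), Agent("B")
    for agent in (a, b):
        agent.justify("runway_clear", {"sensor_ok"})
    a.assume("sensor_ok")               # A trusts its sensor, B does not
    print(a.believes("runway_clear"))   # True
    print(b.believes("runway_clear"))   # False -- a different, locally coherent stance
```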
Improving Assumption-based Distributed Belief Revision
SCAI, 1995
Belief revision is a critical issue in real world DAI applications. A Multi-Agent System not only has to cope with the intrinsic incompleteness and the constant change of the available knowledge (as its stand-alone counterparts do), but also has to deal with possible conflicts between the agents' perspectives. Each semi-autonomous agent, designed as a combination of a problem solver and an assumption-based truth maintenance system (ATMS), was enriched with improved capabilities: a distributed context management facility allowing the user to dynamically ...
Solving conflicting beliefs with a distributed belief revision approach
Advances in Artificial Intelligence, 2000
The ability to solve conflicting beliefs is crucial for multi-agent systems where the information is dynamic, incomplete and distributed over a group of autonomous agents. The proposed distributed belief revision approach consists of a distributed truth maintenance system and a set of autonomous belief revision methodologies. The agents have partial views and, frequently, hold disparate beliefs, which are automatically detected by the system's reason maintenance mechanism. The nature of these conflicts is dynamic and requires adequate ...
A Model for Belief Revision in a Multi-Agent Environment
In modeling the knowledge processing structure of an Agent in a Multi-Agent world it becomes necessary to enlarge the traditional concept of Belief Revision. For detecting contradictions and identifying their sources it is sufficient to maintain information about what has been told; but to "solve" a contradiction it is necessary to keep information about who said it or, in general, about the source that knowledge came from. We can take as certain the fact that an agent gave a piece of information, but we can take the given information itself only as a revisable assumption. The Belief Revision system cannot leave the sources of the information out of consideration, because of their relevance in giving the additional notion of "strength of belief" [Galliers 89]. In fact, the reliability of the source affects the credibility of the information and vice versa. It is necessary to develop systems that deal with pairs <assumption, source of the assumption>. In [Dragoni 91] we proposed a system that moves in this direction. Here we give a short description of that system. In the first two parts we describe the agent's knowledge processing structure using a particular characterization of the "Assumption Based Belief Revision" concept; in part three we outline the project of an embedded device that enables the overall system to deal with pairs <assumption, source of the assumption> in a rather anthropomorphic manner.
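As a rough illustration of the <assumption, source of the assumption> idea, the following Python sketch (an assumption of this edit, not the system of [Dragoni 91]) stores each belief together with its source, uses a numeric source reliability as a stand-in for the "strength of belief", and lets a resolved contradiction feed back into the reliability of the sources involved. The class names, the 0-to-1 scale, and the update constants are all hypothetical.

```python
# Minimal sketch, not the system of [Dragoni 91]: beliefs are stored as
# <assumption, source> pairs, a numeric reliability stands in for the
# "strength of belief", and retracting a contradicted assumption also
# lowers the reliability of the source that supplied it.

from dataclasses import dataclass

@dataclass
class Source:
    name: str
    reliability: float = 0.5            # hypothetical scalar in [0, 1]

@dataclass
class Belief:
    assumption: str
    source: Source

    def credibility(self):
        # Credibility of the information follows the reliability of its source.
        return self.source.reliability

def resolve_contradiction(b1: Belief, b2: Belief) -> Belief:
    """Keep the more credible of two contradictory beliefs; penalize the
    source of the rejected one and slightly reward the source of the kept one."""
    kept, dropped = (b1, b2) if b1.credibility() >= b2.credibility() else (b2, b1)
    dropped.source.reliability = max(0.0, dropped.source.reliability - 0.1)
    kept.source.reliability = min(1.0, kept.source.reliability + 0.05)
    return kept

if __name__ == "__main__":
    radar, pilot = Source("radar", 0.8), Source("pilot", 0.6)
    kept = resolve_contradiction(Belief("runway_clear", radar),
                                 Belief("not runway_clear", pilot))
    print(kept.assumption, round(radar.reliability, 2), round(pilot.reliability, 2))
```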
A model for belief revision in a multi-agent environment (abstract)
ACM Sigois Bulletin, 1992
IEEE Transactions on Systems, Man, and Cybernetics, 1991
The concept of logical consistency of belief among a group of computational agents that are able to reason nonmonotonically is defined. An algorithm for truth maintenance is then provided that guarantees local consistency for each agent and global consistency for data shared by the agents. Furthermore, the algorithm is shown to be complete, in the sense that if a consistent state exists the algorithm will find it, and otherwise it will report failure. The implications and limitations of this for cooperating agents are discussed, and several extensions are described. The algorithm has been implemented in the RAD distributed expert system shell.
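To make the completeness claim concrete, here is a deliberately naive Python sketch (not the RAD algorithm) that exhaustively searches labelings of the shared data against each agent's "nogood" sets. Because the search is exhaustive, it returns a labeling whenever a consistent one exists and otherwise reports failure; the nogood encoding and the agent names are assumptions made for illustration.

```python
# Minimal sketch of the completeness idea only (not the RAD algorithm):
# exhaustively search labelings of the shared data; a labeling is accepted
# only if every agent's local "nogoods" are respected, so each agent stays
# locally consistent and the shared data are labeled identically everywhere.
# If no labeling survives, failure is reported.

from itertools import product

def find_consistent_labeling(shared_atoms, agent_nogoods):
    """agent_nogoods: {agent: [set_of_atoms, ...]}; no set may be all-True."""
    atoms = sorted(shared_atoms)
    for values in product([True, False], repeat=len(atoms)):
        labeling = dict(zip(atoms, values))
        ok = all(not all(labeling[a] for a in nogood)
                 for nogoods in agent_nogoods.values()
                 for nogood in nogoods)
        if ok:
            return labeling              # shared data labeled once, for everyone
    return None                          # complete: no consistent state exists

if __name__ == "__main__":
    nogoods = {"agent1": [{"p", "q"}],   # agent1 cannot accept p and q together
               "agent2": [{"q", "r"}]}   # agent2 cannot accept q and r together
    print(find_consistent_labeling({"p", "q", "r"}, nogoods))
```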
Beliefs and Conflicts in a Real World Multiagent System
1998
In a real world multiagent system, where the agents are faced with partial, incomplete and intrinsically dynamic knowledge, conflicts are inevitable. Frequently, different agents have goals or beliefs that cannot hold simultaneously. Conflict resolution methodologies have to be adopted to overcome such undesirable occurrences. In this paper we investigate the application of distributed belief revision techniques as the support ...
Theoretical Aspects of Rationality and Knowledge, 1996
We introduce a basic framework for multi-agent belief revision in heterogeneous societies where agents are required to be consistent in their beliefs on shared variables. We identify several properties one may require a general multi-agent belief revision operator to satisfy, and show several basic implications of these requirements. Our work reveals the connection between multi-agent belief revision and the ...
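A minimal sketch of the consistency requirement on shared variables, assuming a simple representation in which each agent's beliefs are a mapping from variables to values; the representation and the function below are illustrative and are not the paper's revision operators.

```python
# Minimal sketch of the shared-variable consistency requirement only:
# a heterogeneous society is modeled as one belief set per agent, and
# agents may disagree freely except on the variables declared shared,
# where all assigned values must coincide.

def consistent_on_shared(belief_sets, shared_vars):
    """belief_sets: {agent: {var: value}}; returns the shared vars in conflict."""
    conflicts = set()
    for var in shared_vars:
        values = {beliefs[var] for beliefs in belief_sets.values() if var in beliefs}
        if len(values) > 1:
            conflicts.add(var)
    return conflicts

if __name__ == "__main__":
    society = {"a1": {"x": True,  "y": False},
               "a2": {"x": True,  "z": 3},
               "a3": {"x": False}}
    # 'x' is shared and assigned conflicting values, so revision is needed.
    print(consistent_on_shared(society, shared_vars={"x"}))   # {'x'}
```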