Belief Revision and Uncertain Reasoning
Related papers
Belief Change as Propositional Update
Cognitive Science, 1997
In this study, we examine the problem of belief revision, defined as deciding which of several initially-accepted sentences to disbelieve, when new information presents a logical inconsistency with the initial set. In the first three experiments, the initial sentence set included a conditional sentence, a non-conditional sentence, and an inferred conclusion drawn from the first two. The new information contradicted the inferred conclusion.
Changing Minds by Reasoning About Belief Revision: A Challenge for Cognitive Systems
In this paper, we explore the representational and inferential requirements for supporting a rich notion of belief revision. Our analysis extends beyond the typical case of a single agent revising its beliefs in light of new information into the realm of social engagement. More to the point, we argue that, although belief revision mechanisms surely operate at the level of single agents, we must also consider the need to lift an agent's understanding of the belief revision process to the knowledge level in order to intentionally guide other agents' revision processes with whom it socially interacts. In exploring belief revision at the knowledge level, we identify reasons for rejecting classical formulations of the problem and identify constraints by which alternative accounts must abide.
ON CAUSAL AND CONSTRUCTIVE MODELLING OF BELIEF CHANGE
Our life in its various phases can be construed as involving continuous belief revision activity over a bundle of accepted beliefs, unaccepted beliefs (disbeliefs), and even some misunderstood beliefs. What is evident from such an activity is that we pass through the various phases of our life effortlessly, without being conscious of the rationale for such changing sets of beliefs. In this process, we give up many mistakenly held beliefs of an earlier phase and replace them with new sets of beliefs. Of course, in the process, one sometimes undertakes the examination of evidence and resorts to reasoning, leading finally to a new set of beliefs. Thus, both our beliefs and our knowledge (at this point we make no fine philosophical distinction between "beliefs" and "knowledge", the latter often defined as "justified true belief") are in a constant state of flux.
Belief change as change in epistemic entrenchment
Synthese, 1996
In this paper, it is argued that both the belief state and its input should be represented as epistemic entrenchment (EE) relations. A belief revision operation is constructed that updates a given EE relation to a new one in light of an evidential EE relation, and an axiomatic characterization of this operation is given. Unlike most belief revision operations, the one developed here can handle both "multiple belief revision" and "iterated belief revision".
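As an informal illustration of how an entrenchment ordering can guide which beliefs are given up, here is a minimal Python sketch. It implements only the simple idea of dropping the least entrenched conflicting beliefs until consistency is restored; it is not the EE-relation revision operation constructed in the paper, and the sentences, the numeric scores, and the toy string-based consistency test are all invented for the example.

def consistent(sentences):
    """Toy consistency test: a set is inconsistent iff it contains both a
    sentence and its explicit negation 'not <sentence>'."""
    return not any(("not " + s) in sentences for s in sentences)

def revise(beliefs, entrenchment, new_sentence):
    """Entrenchment-guided revision: add new_sentence and, if the result is
    inconsistent, give up the least entrenched old beliefs until
    consistency is restored."""
    kept = set(beliefs)
    for s in sorted(beliefs, key=lambda b: entrenchment[b]):
        if consistent(kept | {new_sentence}):
            break
        kept.discard(s)
    return kept | {new_sentence}

beliefs = {"rangers play at home", "she is in new york",
           "she stays at the westin"}
entrenchment = {"rangers play at home": 3,
                "she is in new york": 2,
                "she stays at the westin": 1}
print(revise(beliefs, entrenchment, "not she stays at the westin"))
# keeps the two more entrenched beliefs and drops the least entrenched one

The point of the sketch is only that, given the same contradiction, different entrenchment orderings yield different revised belief sets; the paper's contribution is to treat the ordering itself, rather than a plain set of sentences, as the object that revision updates.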
The Dynamics of Belief Systems
In this article I want to discuss some philosophical problems one encounters when trying to model the dynamics of epistemic states. Apart from being of interest in themselves, I believe that solutions to these problems will be crucial for any attempt to use computers to handle changes of knowledge systems. Problems concerning knowledge representation and the updating of such representations have become the focus of much recent research in artificial intelligence (AI).
Belief Revision as Propositional Update
1996
In this study, we examine the problem of belief revision, defined as deciding which of several initially-accepted sentences to disbelieve, when new information presents a logical inconsistency with the initial set. In the first three experiments, the initial sentence set included a conditional sentence, a non-conditional sentence, and an inferred conclusion drawn from the first two. The new information contradicted the inferred conclusion. Results indicated that the conditional sentences were more readily abandoned than non-conditional sentences, even when either choice would lead to a consistent belief state, and that this preference was more pronounced when problems used natural language cover stories rather than symbols. The pattern of belief revision choices differed depending on whether the contradicted conclusion from the initial belief set had been a modus ponens or modus tollens inference. Two additional experiments examined alternative model-theoretic definitions of minimal change to a belief state, using problems that contained multiple models of the initial belief state and of the new information that provided the contradiction. The results indicated that people did not follow any of four formal definitions of minimal change on these problems. The new information and the contradiction it offered were not, for example, used to select a particular model of the initial belief state as a way of reconciling the contradiction. The preferred revision was to retain only those initial sentences that had the same, unambiguous truth value within and across both the initial and new information sets. The study and results are presented in the context of certain logic-based formalizations of belief revision, syntactic and model-theoretic representations of belief states, and performance models of human deduction. Principles by which some types of sentences might be more "entrenched" than others in the face of contradiction are also discussed from the perspective of induction and theory revision.

Belief Change as Propositional Update

Suppose you need to send an express courier package to a colleague who is away at a conference. You believe that whenever she is in New York City and the New York Rangers are playing a home game, she stays at the Westin Mid-Manhattan Hotel. You also believe that she is in New York City this weekend and that the Rangers are playing this weekend as well. You call up the Westin Mid-Manhattan Hotel and you find out that she isn't there. Something doesn't fit. What do you believe now? Well, assuming that you accept the hotel's word that she isn't there, there are various (logically consistent) ways to reconcile the contradiction between what you used to believe and this new information. First, you could believe that she is in New York City and that the Rangers are indeed playing, but disbelieve the conditional that says whenever both of these are true, then she stays at the Westin Mid-Manhattan Hotel. Alternatively, you could continue to believe the conditional, but decide that either she isn't in New York this weekend or that the Rangers aren't playing a home game (or possibly both). Which do you choose as your new set of beliefs?

Belief change, the process by which a rational agent makes the transition from one belief state to another, is an important component of most intelligent activity carried out by epistemic agents, both human and artificial.
When such agents learn new things about the world, they sometimes come to recognize that new information extends or conflicts with their existing belief state. In the latter case, rational reasoners would identify which of the old and new beliefs clash to create the inconsistency, decide whether in fact to accept the new information, and, if that is the choice, eliminate certain old beliefs in favor of the new information. Alternatively, new information may not create any inconsistency with old information at all. In this case, the reasoner can simply add the new information to the current set of beliefs, along with whatever additional consequences this might entail. Although this is an intuitively attractive picture, the principles behind belief-state change are neither well-understood nor agreed-upon. Belief revision has been studied from a formal perspective in the artificial intelligence (AI) and philosophy literatures and from an empirical perspective in the psychology and management-science literatures. One of the practical motivations for AI's concern with belief revision, as portrayed in our opening scenario, is the development of knowledge bases as a kind of intelligent database: one enters information into the knowledge base, and the knowledge base itself constructs and stores the consequences of this information, a process that is nonmonotonic in nature (i.e., accepted consequences of previously-believed information may be abandoned). More generally, the current belief state of any artificial agent may be contradicted either when the world itself changes (an aspect of the so-called frame problem) or when an agent's knowledge about a static world simply increases. Katsuno and Mendelzon (1991) distinguish between these two cases, calling the former belief update and the latter belief revision. Although much of the AI belief revision work focuses on formalizing competence theories of update and revision, prescriptive principles for how artificial agents "should" resolve conflict in the belief revision case, where there is a need to contract the set of accepted propositions in order to resolve a recognized contradiction, are far from settled.

From the perspective of human reasoning, we see an important interplay between issues of belief revision and deductive reasoning, particularly in terms of the kind of representational assumptions made about how a belief state should be modeled. But while human performance on classical deductive problems has been extensively studied, both Rips (1994, p. 299) and Harman (1986, p. 7) have noted the need for descriptive data and theories on how people resolve inconsistency when new information about a static world is presented. The studies we present in this article are concerned exactly with this issue.

We make two simplifications in our portrayal of belief revision and the paradigm we used to investigate it. The first concerns what we refer to as "beliefs." Here, beliefs are sentences that people are told to accept as true, in the context of resolving some (subsequent) contradiction arising from new information that is provided. Being told to accept something as true is not necessarily the same as believing it to be true.
The contradictions we introduce in our paradigm are not probes into a person's pre-existing belief system (e.g., as in social cognition investigations of attitude change; see Petty, Priester, & Wegener, 1994) or of a person's hypotheses that are acquired over time via direct interactions with the world. The second simplification we make is treating beliefs as propositions that are believed either to be true or to be false (or, sometimes, that have a belief status of "uncertain"). This idealization characterizes the perspective of AI researchers who are interested in showing how classical deductive reasoning is related to belief revision. We will call this perspective "classical belief revision," to distinguish it from other frameworks, including one direction in formal studies of defeasible reasoning, that map statistical or probabilistic information about a proposition into degrees of belief.
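To make the opening scenario concrete, the following Python sketch enumerates the logically consistent ways of reconciling the contradiction under one simple formal notion: keep a maximal subset of the initial sentences that is jointly satisfiable with the new information. The three-variable encoding, the sentence labels, and the maximal-subset criterion are illustrative assumptions, not the experimental materials or any of the minimal-change definitions tested in the studies.

from itertools import combinations, product

# n = "she is in New York", r = "the Rangers play a home game",
# w = "she stays at the Westin Mid-Manhattan Hotel"
initial_beliefs = {
    "conditional: (n and r) -> w": lambda n, r, w: (not (n and r)) or w,
    "she is in new york (n)": lambda n, r, w: n,
    "rangers play at home (r)": lambda n, r, w: r,
}

def new_information(n, r, w):
    # the hotel reports that she is not at the Westin
    return not w

def satisfiable(kept):
    """True if the kept beliefs plus the new information have a model."""
    return any(all(initial_beliefs[name](n, r, w) for name in kept)
               and new_information(n, r, w)
               for n, r, w in product([True, False], repeat=3))

names = list(initial_beliefs)
consistent_subsets = [set(kept)
                      for size in range(len(names) + 1)
                      for kept in combinations(names, size)
                      if satisfiable(kept)]
maximal = [s for s in consistent_subsets
           if not any(s < t for t in consistent_subsets)]
for m in maximal:
    print(sorted(m))

Run as written, this prints three maximal candidates: keep the conditional plus one of the two non-conditional sentences (abandoning the other), or keep both non-conditional sentences and abandon the conditional, which is the revision the abstract reports participants most readily chose.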
Lecture Notes in Computer Science, 1992
This paper draws a distinction between the set of explicit beliefs of a reasoner, the "belief base", and the beliefs that are merely implicit. We study syntax-based belief changes that are governed exclusively by the structure of the belief base. In answering the question of whether this kind of belief change can be reconstructed with the help of something like an epistemic entrenchment relation in the sense of Gärdenfors and Makinson [8], we extract several candidate relations from a belief base. The answer to our question is negative, but an approximate solution is possible, and in some cases the agreement is even perfect. Two interpretations of the basic idea of epistemic entrenchment are offered. It is argued that epistemic entrenchment properly understood involves multiple belief changes, i.e., changes by sets of sentences. Since none of our central definitions presupposes the presence of propositional connectives in the object language, the notion of epistemic entrenchment becomes applicable to the style of knowledge representation realized in inheritance networks and truth maintenance systems.
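The explicit/implicit distinction can be illustrated with a toy Python sketch: the belief base holds only the explicitly stored sentences, the implicit beliefs are whatever follows from them, and syntax-based change operates on the base alone. Only modus ponens over string-encoded conditionals is applied here, and the candidate entrenchment relations the paper extracts are not reconstructed; every name below is illustrative.

def closure(base):
    """Implicit beliefs: close the explicit base under modus ponens for
    conditionals written as 'A -> B'."""
    beliefs = set(base)
    changed = True
    while changed:
        changed = False
        for sentence in list(beliefs):
            if " -> " in sentence:
                antecedent, consequent = sentence.split(" -> ", 1)
                if antecedent in beliefs and consequent not in beliefs:
                    beliefs.add(consequent)
                    changed = True
    return beliefs

base = {"she is in new york",
        "she is in new york -> she stays at the westin"}
print("she stays at the westin" in closure(base))   # True: merely implicit

# Syntax-based contraction touches only the explicit base: removing the
# conditional from the base also removes the implicit consequence.
base.discard("she is in new york -> she stays at the westin")
print("she stays at the westin" in closure(base))   # False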