On consequence in approximate reasoning
Related papers
A quantitative-informational approach to logical consequence
2015. DOI: 10.1007/978-3-319-15368-1_3. http://link.springer.com/chapter/10.1007/978-3-319-15368-1_3
In this work, we propose a definition of logical consequence based on the relation between the quantity of information carried by a set of formulae and that carried by a particular formula. As a starting point, we use Shannon’s quantitative notion of information, founded on probability values and the logarithmic function. We first review some basic elements of axiomatic probability theory and then construct a probabilistic semantics for languages of classical propositional logic. We define the quantity of information for the formulae of these languages and introduce the concept of informational logical consequence, establishing some important results, among them: certain arguments traditionally considered valid, such as modus ponens, are not valid from the informational perspective; the logic underlying informational logical consequence is not classical and is at least paraconsistent sensu lato; and informational logical consequence is not a Tarskian logical consequence.
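As a rough sketch of the quantities involved (assuming the standard Shannon measure; the paper’s exact definitions may differ), the quantity of information of a formula $\alpha$ under a probability assignment $P$ is

$$ I(\alpha) = -\log_2 P(\alpha), $$

and an informational consequence relation of the kind described compares the information carried by the premises with that carried by the conclusion; on one natural (but here assumed) reading, $\Gamma \vDash_{\mathrm{inf}} \alpha$ holds only if the quantity of information of $\alpha$ does not exceed that of $\Gamma$.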
On the logics of similarity-based approximate and strong entailments
2010
We consider two kinds of logics for approximate reasoning: one weaker than classical logic and the other stronger. In the first case, we are guided by the principle that from given premises we may jump to conclusions that are only approximately (or possibly) correct. In the second case, which has not been considered so far, we instead follow the principle that conclusions must remain (necessarily) correct even if the premises are slightly changed. In this paper we recall the definitions and characterizations of the first logic, and we investigate the basic properties of the second logic, as well as its soundness and completeness with respect to Ruspini’s semantics based on fuzzy similarity relations.
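A compact way to state the two principles (a sketch assuming a Ruspini-style semantics with a fuzzy similarity relation $S$ on worlds and a degree $d \in [0,1]$; the paper’s formal definitions may differ in detail):

$$ \varphi \vDash^{\approx}_{d} \psi \iff \forall w\, \bigl( w \vDash \varphi \Rightarrow \exists w'\, ( w' \vDash \psi \wedge S(w,w') \ge d ) \bigr), $$

$$ \varphi \vDash^{\,!}_{d} \psi \iff \forall w, w'\, \bigl( w \vDash \varphi \wedge S(w,w') \ge d \Rightarrow w' \vDash \psi \bigr). $$

The first (approximate entailment) allows conclusions that hold only up to similarity degree $d$ with some model of the premises; the second (strong entailment) demands that the conclusion still hold in every world within similarity degree $d$ of a model of the premises.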