Natural Logic in AI and Cognitive Science

A Natural Logic for Artificial Intelligence, and its Risks and Benefits

This paper is a multidisciplinary project proposal, submitted in the hope that it will garner enough interest to launch the project with members of the AI research community, together with linguists and philosophers of mind and language interested in constructing a semantics for a natural logic for AI. The paper outlines some of the major hurdles in the way of “semantics-driven” natural language processing based on standard predicate logic, and sketches the steps to be taken toward a “natural logic”: a semantic system explicitly defined on a well-regimented (but indefinitely expandable) fragment of a natural language, which can therefore be “intelligently” processed by computers using the semantic representations of the fragment's phrases.

Natural Logic for Natural Language

Logic, Language, and Computation, 2007

For a cognitive account of reasoning it is useful to factor out the syntactic aspect, the aspect that has to do with pattern matching and simple substitution, from the rest. The calculus of monotonicity, alias the calculus of natural logic, does precisely this, for it is a calculus of appropriate substitutions at marked positions in syntactic structures. We first introduce the semantic and syntactic sides of monotonicity reasoning, or 'natural logic', and propose an improvement to the syntactic monotonicity calculus in the form of an improved algorithm for monotonicity marking. Next, we focus on the role of monotonicity in syllogistic reasoning. In particular, we show how the syllogistic inference rules (for traditional syllogistics, but also for a broader class of quantifiers) can be decomposed into a monotonicity component, an argument-swap component, and an existential-import component. Finally, we connect this decomposition of syllogistics to the doctrine of distribution.
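
The substitution idea at the heart of the monotonicity calculus can be illustrated with a minimal sketch. The quantifier profiles below are the standard ones; the function names and the encoding are illustrative, not the marking algorithm the paper proposes:

```python
# Monotonicity profile of each quantifier: (restrictor, scope).
# "up"   = upward monotone   (may replace a term by a superset),
# "down" = downward monotone (may replace a term by a subset).
PROFILES = {
    "every": ("down", "up"),
    "some":  ("up", "up"),
    "no":    ("down", "down"),
}

def substitution_valid(quantifier, position, direction):
    """Is replacing the term at `position` (0 = restrictor, 1 = scope)
    in the given `direction` ("superset" or "subset") truth-preserving?"""
    mono = PROFILES[quantifier][position]
    return (mono == "up" and direction == "superset") or \
           (mono == "down" and direction == "subset")

# "Every dog barks" entails "Every poodle barks" (subset in a downward slot)...
assert substitution_valid("every", 0, "subset")
# ...but not "Every animal barks" (superset in a downward slot).
assert not substitution_valid("every", 0, "superset")
```

This is exactly the "pattern matching and simple substitution" aspect that the calculus factors out: once positions are marked for monotonicity, validity of a replacement reduces to a table lookup.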

A natural logic inference system

Proceedings of the 2nd Workshop on …, 2000

This paper develops a version of Natural Logic: an inference system that works directly on natural language syntactic representations, with no intermediate translation to logical formulae. Following work by Sánchez (1991), we develop a small fragment that computes semantic order relations between derivation trees in Categorial Grammar. Unlike previous work, the proposed system has the following new characteristics: (i) it uses orderings between derivation trees as purely syntactic units, derivable by a formal calculus; (ii) the system is extended to conjunctive phenomena such as coordination and relative clauses, which allows a simple account of non-monotonic expressions that are reducible to conjunctions of monotonic ones; (iii) a preliminary proof-search algorithm based on a tree-generating regular system is developed for Sánchez's smaller fragment of Natural Logic.

On the Relationship between a Computational Natural Logic and Natural Language

Proceedings of the 8th International Conference on Agents and Artificial Intelligence, 2016

This paper makes a case for adopting appropriate forms of natural logic as the target language for computational reasoning with descriptive natural language. Natural logics are stylized fragments of natural language in which reasoning can be conducted directly, by natural reasoning rules reflecting intuitive reasoning in natural language. The approach taken in this paper is to extend natural logic stepwise, with a view to covering successively larger parts of natural language. We envisage applications in computational querying and reasoning, in particular within the life sciences. For better or for worse, most of the reasoning that is done in the world is done in natural language.

An extended model of natural logic

Proceedings of the Eighth International Conference on Computational Semantics - IWCS-8 '09, 2009

We propose a model of natural language inference which identifies valid inferences by their lexical and syntactic features, without full semantic interpretation. We extend past work in natural logic, which has focused on semantic containment and monotonicity, by incorporating both semantic exclusion and implicativity. Our model decomposes an inference problem into a sequence of atomic edits linking premise to hypothesis; predicts a lexical semantic relation for each edit; propagates these relations upward through a semantic composition tree according to properties of intermediate nodes; and joins the resulting semantic relations across the edit sequence. A computational implementation of the model achieves 70% accuracy and 89% precision on the FraCaS test suite. Moreover, including this model as a component in an existing system yields significant performance gains on the Recognizing Textual Entailment challenge.
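
The "join" step of such a model — folding per-edit semantic relations into one premise-to-hypothesis relation — can be sketched as follows. Only a small, conservative subset of a join table is encoded here (identity of equivalence, transitivity of each entailment direction); everything else collapses to independence, which is deliberately weaker than the full relation algebra the paper describes:

```python
# Relation symbols: equivalence, forward entailment, reverse entailment,
# and independence (the conservative "don't know" bucket in this sketch).
EQ, FWD, REV, IND = "=", "<", ">", "#"

def join(r1, r2):
    """Join two relations along a chain of edits (conservative subset)."""
    if r1 == EQ:
        return r2
    if r2 == EQ:
        return r1
    if r1 == r2 and r1 in (FWD, REV):
        return r1          # entailment in one direction is transitive
    return IND             # unhandled pairs collapse to independence

def judge(edit_relations):
    """Fold a sequence of per-edit relations into an overall judgment."""
    rel = EQ
    for r in edit_relations:
        rel = join(rel, r)
    return rel

# Deleting a modifier (forward entailment), swapping a synonym (equivalence),
# then generalizing a noun (forward entailment) still yields entailment:
assert judge([FWD, EQ, FWD]) == FWD
```

The real model additionally tracks exclusion relations (negation, alternation, cover) and projects each edit's relation through the semantic composition tree before joining, which this sketch omits.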

Natural logic for textual inference

Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing - RTE '07, 2007

This paper presents the first use of a computational model of natural logic, a system of logical inference which operates over natural language, for textual inference. Most current approaches to the PASCAL RTE textual inference task achieve robustness by sacrificing semantic precision; while broadly effective, they are easily confounded by ubiquitous inferences involving monotonicity. At the other extreme, systems which rely on first-order logic and theorem proving are precise but excessively brittle. This work aims at a middle way. Our system finds a low-cost edit sequence which transforms the premise into the hypothesis; learns to classify entailment relations across atomic edits; and composes atomic entailments into a top-level entailment judgment. We provide the first reported results for any system on the FraCaS test suite. We also evaluate on RTE3 data and show that hybridizing an existing RTE system with our natural logic system yields significant performance gains.
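
The first stage, decomposing a premise/hypothesis pair into atomic edits, can be sketched with Python's `difflib` standing in for the system's own (more sophisticated, cost-driven) alignment:

```python
# A minimal sketch: word-level atomic edits between premise and hypothesis,
# using difflib.SequenceMatcher as a stand-in alignment. Each edit would
# then be classified for the entailment relation it induces.
import difflib

def atomic_edits(premise, hypothesis):
    p, h = premise.split(), hypothesis.split()
    edits = []
    for op, i1, i2, j1, j2 in difflib.SequenceMatcher(a=p, b=h).get_opcodes():
        if op != "equal":
            edits.append((op, " ".join(p[i1:i2]), " ".join(h[j1:j2])))
    return edits

print(atomic_edits("several dogs barked loudly", "several animals barked"))
# → [('replace', 'dogs', 'animals'), ('delete', 'loudly', '')]
```

Here the substitution dogs → animals would be classified as forward entailment (hypernym) and the deletion of a modifier likewise, so composing the atomic entailments yields an overall entailment judgment.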

NaturalLI: Natural Logic Inference for Common Sense Reasoning

Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2014

Common-sense reasoning is important for AI applications, in NLP as well as many vision and robotics tasks. We propose NaturalLI: a Natural Logic inference system for inferring common-sense facts, for instance that cats have tails or that tomatoes are round, from a very large database of known facts. In addition to providing strictly valid derivations, the system can also produce derivations which are only likely valid, accompanied by an associated confidence. We show that our system captures strict Natural Logic inferences on the FraCaS test suite, and demonstrate its ability to predict common-sense facts with 49% recall and 91% precision.
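
The flavor of such database-backed inference can be sketched as a search from the query fact toward the set of known facts, mutating one word at a time. This is an illustration, not NaturalLI's actual algorithm: the toy taxonomy, the fixed mutation direction, and the absence of any cost or confidence model are all simplifications:

```python
# Breadth-first search from a query fact toward a database of known facts,
# mutating one word at a time via an assumed (toy) hypernym/hyponym map.
from collections import deque

KNOWN_FACTS = {("cats", "have", "tails")}
MUTATIONS = {"felines": ["cats", "animals"]}   # assumed toy taxonomy

def provable(query, max_depth=3):
    seen, frontier = {query}, deque([(query, 0)])
    while frontier:
        fact, depth = frontier.popleft()
        if fact in KNOWN_FACTS:
            return True                      # reached a known fact
        if depth == max_depth:
            continue
        for i, word in enumerate(fact):
            for alt in MUTATIONS.get(word, []):
                mutated = fact[:i] + (alt,) + fact[i + 1:]
                if mutated not in seen:
                    seen.add(mutated)
                    frontier.append((mutated, depth + 1))
    return False

assert provable(("felines", "have", "tails"))
assert not provable(("felines", "chase", "lasers"))
```

In the real system each mutation carries a Natural Logic relation and a learned cost, so a derivation can be classified as strictly valid or only likely valid with a confidence, which this sketch does not model.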

Logics in Artificial Intelligence: 9th European Conference, JELIA 2004, Lisbon, Portugal, September 27-30, 2004, Proceedings

2004

Logics have, for many years, laid claim to providing a formal basis for the study of artificial intelligence. With the depth and maturity of the methodologies, formalisms, procedures, implementations, and applications available today, this claim is stronger than ever, as witnessed by the increasing amount and range of publications in the area, to which the present proceedings add. The European series of Workshops on Logics in Artificial Intelligence (Journées Européennes sur la Logique en Intelligence Artificielle, JELIA) began in response to the need for a European forum for the discussion of emerging work in this burgeoning field. JELIA 2000 is the seventh such workshop in the …

Integrating Special Rules Rooted in Natural Language Semantics into the System of Natural Deduction

Proceedings of the 12th International Conference on Agents and Artificial Intelligence, 2020

The paper deals with natural language processing and question answering over large corpora of formalised natural language texts. Our background theory is the system of Transparent Intensional Logic (TIL). Having a fine-grained analysis of natural language sentences in the form of TIL constructions, we apply Gentzen's system of natural deduction to answer questions in an 'intelligent' way: our system derives logical consequences entailed by the input sentences rather than merely searching for answers by keywords. Natural language semantics is rich, and many of its special features must be taken into account in the process of inferring answers. The TIL system makes it possible to formalise all these semantically salient features in a fine-grained way. In particular, since TIL is a logic of partial functions, it deals appropriately with non-referring terms and with sentences that have truth-value gaps. This is important because sentences often come with a presupposition that must be true in order for the sentence to have any truth-value at all. Yet a problem arises of how to integrate such special semantic rules into a standard deduction system; proposing a solution is one of the goals of this paper. The second novel result concerns how to find relevant sentences in the labyrinth of input text data, and how to select the applicable rules needed to answer a given question. To this end, we propose a heuristic method driven by the constituents of a given question.
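
The truth-value-gap behavior that partiality buys can be illustrated with a toy evaluator. This is a sketch of the general idea of presupposition-aware evaluation, not of TIL's machinery; the names and the use of `None` for a gap are illustrative choices:

```python
# A toy presupposition-aware evaluator: an assertion gets a truth value
# only if its presupposition holds; otherwise the result is a gap (None).
def evaluate(presupposition, assertion):
    """Return True/False when the presupposition holds; else a gap."""
    if not presupposition():
        return None      # truth-value gap: the question does not arise
    return assertion()

KINGS = {}               # France currently has no king

# "The king of France is bald" presupposes that France has a king:
gap = evaluate(lambda: "France" in KINGS,
               lambda: KINGS["France"] == "bald")
assert gap is None       # neither true nor false
```

A classical two-valued deduction system would be forced to call such a sentence false; treating the gap as a third outcome is what a standard natural deduction system must be extended to accommodate.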