Design and Realization of a Modular Architecture for Textual Entailment

Design and realization of a modular architecture for textual entailment

2013

A key challenge at the core of many NLP tasks is the ability to determine which conclusions can be inferred from a given natural language text. This problem, called the Recognition of Textual Entailment (RTE), has initiated the development of a range of algorithms, methods and technologies. Unfortunately, research on TE (like semantics research more generally) is fragmented into studies focusing on various aspects of semantics such as world knowledge, lexical and syntactic relations, or more specialized kinds of inference. This fragmentation has problematic practical consequences. Notably, interoperability among existing RTE systems is poor, and reuse of resources and algorithms is mostly infeasible. This also makes systematic evaluations very difficult to carry out. Finally, TE presents a wide array of approaches to potential end users with little guidance on which to pick. Our contribution to this situation is the novel EXCITEMENT architecture, which was developed to enable and encourage the consolidation of methods and resources in the TE area. It decomposes RTE into components with strongly typed interfaces. We specify (a) a modular linguistic analysis pipeline and (b) a decomposition of the "core" RTE methods
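To make the idea of strongly typed component interfaces concrete, here is a minimal Python sketch of such a decomposition. It is only an illustration of the design principle: the class and field names are invented for this example and do not reproduce the actual EXCITEMENT platform API.

```python
# Illustrative sketch only: hypothetical interfaces showing how RTE can be
# decomposed into (a) a linguistic analysis pipeline and (b) a "core" method,
# connected through strongly typed objects. Not the EXCITEMENT API.
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class AnalyzedPair:
    """A text/hypothesis pair after linguistic preprocessing."""
    text_tokens: list[str]
    hypothesis_tokens: list[str]


@dataclass
class EntailmentDecision:
    label: str        # e.g. "ENTAILMENT" or "NON-ENTAILMENT"
    confidence: float


class LinguisticAnalysisPipeline(ABC):
    """Component (a): turns raw strings into a typed analysis object."""

    @abstractmethod
    def annotate(self, text: str, hypothesis: str) -> AnalyzedPair: ...


class EntailmentCore(ABC):
    """Component (b): the 'core' RTE method, consuming only typed analyses."""

    @abstractmethod
    def decide(self, pair: AnalyzedPair) -> EntailmentDecision: ...


class WhitespacePipeline(LinguisticAnalysisPipeline):
    def annotate(self, text: str, hypothesis: str) -> AnalyzedPair:
        return AnalyzedPair(text.split(), hypothesis.split())


class TokenOverlapCore(EntailmentCore):
    """Toy core: predict entailment if most hypothesis tokens occur in the text."""

    def decide(self, pair: AnalyzedPair) -> EntailmentDecision:
        overlap = len(set(pair.hypothesis_tokens) & set(pair.text_tokens))
        score = overlap / max(len(pair.hypothesis_tokens), 1)
        label = "ENTAILMENT" if score >= 0.8 else "NON-ENTAILMENT"
        return EntailmentDecision(label, score)
```

The point of the typed boundary is that any analysis pipeline can be paired with any core method, which is exactly the kind of interoperability the abstract argues existing RTE systems lack.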

A semantic approach to textual entailment: System evaluation and task analysis

Proceedings of the ACL- …, 2007

Recognizing and generating textual entailment and paraphrases are regarded as important technologies in a broad range of NLP applications, including information extraction, summarization, question answering, information retrieval, machine translation and text generation. Both textual entailment and paraphrasing address relevant aspects of natural language semantics. Entailment is a directional relation between two expressions in which one of them implies the other, whereas paraphrase is a relation in which two expressions convey essentially the same meaning. Indeed, paraphrase can be defined as bi-directional entailment. While it may be debatable how such semantic definitions can be made well-founded, in practice we have already seen evidence that such knowledge is essential for many applications.
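The directional/bi-directional distinction can be stated in a few lines of code. The sketch below assumes some directional entails(text, hypothesis) predicate is available (it is left abstract and hypothetical here); paraphrase detection is then just entailment checked in both directions.

```python
# Minimal sketch: paraphrase as bi-directional entailment.
# entails() is a placeholder for any directional RTE decision procedure;
# it is hypothetical and not tied to a specific system.
def entails(text: str, hypothesis: str) -> bool:
    """Return True if `text` implies `hypothesis` (directional)."""
    raise NotImplementedError  # plug in an actual RTE system here


def is_paraphrase(a: str, b: str) -> bool:
    """Paraphrase as defined above: each expression entails the other."""
    return entails(a, b) and entails(b, a)
```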

Recognizing Textual Entailment

Since 2005, researchers have worked on a broad task called Recognizing Textual Entailment (RTE), which is designed to focus efforts on general textual inference capabilities, but without constraining participants to use a specific representation or reasoning approach. There have been promising developments in this sub-field of Natural Language Processing (NLP), with systems showing steady improvement, and investigations of a range of approaches to the problem.

Recognizing textual entailment: Rational, evaluation and approaches

2009

The goal of identifying textual entailment - whether one piece of text can be plausibly inferred from another - has emerged in recent years as a generic core problem in natural language understanding. Work in this area has been largely driven by the PASCAL Recognizing Textual Entailment (RTE) challenges, which are a series of annual competitive meetings.

TALP at TAC 2008: A Semantic Approach to Recognizing Textual Entailment

2008

This paper describes our experiments on Textual Entailment in the context of the Fourth Recognising Textual Entailment (RTE-4) Evaluation Challenge at TAC 2008. Our system uses a Machine Learning approach with AdaBoost to deal with the RTE challenge. We perform a lexical, syntactic, and semantic analysis of the entailment pairs. From this information we compute a set of semantic-based distances between sentences. We improved our baseline system from the RTE-3 challenge with additional language processing techniques, a hypothesis classifier, and new semantic features. The results show no general improvement with respect to the baseline.
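As a rough illustration of the kind of pipeline described here (not the TALP system itself), the sketch below feeds a few hypothetical sentence-distance features into scikit-learn's AdaBoost classifier; the feature names and training values are invented for the example.

```python
# Hedged sketch of the general pattern: semantic-distance features between a
# text/hypothesis pair fed to an AdaBoost classifier. All numbers are made up.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

# Each row: hypothetical distances/similarities for one entailment pair, e.g.
# [lexical_overlap, syntactic_tree_distance, wordnet_similarity]
X_train = np.array([
    [0.9, 0.2, 0.8],   # pair judged ENTAILMENT
    [0.1, 0.9, 0.2],   # pair judged NO ENTAILMENT
    [0.8, 0.3, 0.7],
    [0.2, 0.8, 0.1],
])
y_train = np.array([1, 0, 1, 0])  # 1 = entailment, 0 = no entailment

clf = AdaBoostClassifier(n_estimators=50, random_state=0)
clf.fit(X_train, y_train)

X_test = np.array([[0.85, 0.25, 0.75]])
print(clf.predict(X_test))  # expected: [1] (entailment)
```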

Recognizing textual entailment: Rational, evaluation and approaches – Erratum

Natural Language Engineering, 2010

The goal of identifying textual entailment - whether one piece of text can be plausibly inferred from another - has emerged in recent years as a generic core problem in natural language understanding. Work in this area has been largely driven by the PASCAL Recognizing Textual Entailment (RTE) challenges, which are a series of annual competitive meetings. The current work exhibits strong ties to some earlier lines of research, particularly automatic acquisition of paraphrases and lexical semantic relationships and unsupervised inference in applications such as question answering, information extraction and summarization. It has also opened the way to newer lines of research on more involved inference methods, on knowledge representations needed to support this natural language understanding challenge and on the use of learning methods in this context. RTE has fostered an active and growing community of researchers focused on the problem of applied entailment. This special issue of the JNLE provides an opportunity to showcase some of the most important work in this emerging area.

A survey on Recognizing Textual Entailment as an NLP Evaluation

Proceedings of the First Workshop on Evaluation and Comparison of NLP Systems, 2020

Recognizing Textual Entailment (RTE) was proposed as a unified evaluation framework to compare semantic understanding of different NLP systems. In this survey paper, we provide an overview of different approaches for evaluating and understanding the reasoning capabilities of NLP systems. We then focus our discussion on RTE by highlighting prominent RTE datasets as well as advances in RTE datasets that focus on specific linguistic phenomena that can be used to evaluate NLP systems on a fine-grained level. We conclude by arguing that when evaluating NLP systems, the community should utilize newly introduced RTE datasets that focus on specific linguistic phenomena.

Recognizing Textual Entailment with Logical Inference

2000

With the goal of producing explainable entailment decisions, and ultimately having the computer "understand" the sentences it is processing, we have been pursuing a (somewhat) "logical" approach to recognizing entailment. First our system performs semantic interpretation of the sentence pairs. Then, it tries to determine if (the logic for) the H sentence subsumes (i.e., is implied by) some
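The subsumption check referred to can be illustrated with a deliberately naive sketch: reduce each sentence's "logic" to a set of ground predicates and treat H as entailed when everything it asserts is already asserted by T. This illustrates the idea only; it is not the authors' actual semantic interpretation or matching procedure.

```python
# Toy subsumption check: H is predicted as entailed by T when every predicate
# asserted by H is also asserted by T. Predicates are plain tuples here.
Predicate = tuple[str, ...]  # e.g. ("buy", "john", "car")

def subsumes(text_logic: set[Predicate], hypothesis_logic: set[Predicate]) -> bool:
    """True if everything the hypothesis asserts is asserted by the text."""
    return hypothesis_logic <= text_logic

text_logic = {("buy", "john", "car"), ("new", "car")}
hypothesis_logic = {("buy", "john", "car")}
print(subsumes(text_logic, hypothesis_logic))  # True -> entailment predicted
```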

A Logic-based Approach for Recognizing Textual Entailment Supported by Ontological Background Knowledge

We present the architecture and the evaluation of a new system for recognizing textual entailment (RTE). In RTE we want to automatically identify the type of logical relation between two input texts. In particular, we are interested in proving the existence of an entailment between them. We conceive our system as a modular environment allowing for high-coverage syntactic and semantic text analysis combined with logical inference. For the syntactic and semantic analysis we combine a deep semantic analysis with a shallow one supported by statistical models in order to increase the quality and the accuracy of results. For RTE we use first-order logical inference employing model-theoretic techniques and automated reasoning tools. The inference is supported with problem-relevant background knowledge extracted automatically and on demand from external sources such as WordNet, YAGO, and OpenCyc, or other, more experimental sources with, e.g., manually defined presupposition reso...
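To show the general recipe (first-order inference over the text plus background axioms), here is a hedged sketch using NLTK's built-in resolution prover rather than the specific tools and resources named above; the hypernymy axiom is written by hand, standing in for knowledge that would be extracted on demand from a source such as WordNet.

```python
# Hedged illustration of first-order entailment checking with background
# knowledge, using NLTK's resolution prover. The axiom below is hand-written
# for the example, not extracted from WordNet/YAGO/OpenCyc.
from nltk.sem import Expression
from nltk.inference import ResolutionProver

read = Expression.fromstring

# Logical forms for the text T and the hypothesis H (toy example).
text = read("poodle(fido)")
hypothesis = read("dog(fido)")

# Background knowledge: a hypernymy axiom of the kind such resources provide.
background = [read("all x.(poodle(x) -> dog(x))")]

# H is entailed if it follows from T together with the background axioms.
entailed = ResolutionProver().prove(hypothesis, [text] + background)
print(entailed)  # True
```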