Statistical Relational Learning to Recognise Textual Entailment
Related papers
A logic-based semantic approach to recognizing textual entailment
Proceedings of the COLING/ACL 2006 Main Conference Poster Sessions, 2006
This paper proposes a knowledge representation model and a logic-proving setting with axioms on demand, successfully used for recognizing textual entailment. It also details a lexical inference system that boosts the performance of the deep, semantics-oriented approach on the RTE data. The linear combination of two slightly different logical systems with a third, lexical inference system achieves 73.75% accuracy on the RTE 2006 data.
Recognizing textual entailment using a machine learning approach
2010
We present our experiments on recognizing textual entailment, modeling the entailment relation as a classification problem. As features for classifying the entailment pairs, we use a symmetric similarity measure and a non-symmetric similarity measure. Our system achieved an accuracy of 66% on the RTE-3 development dataset (with 10-fold cross-validation) and an accuracy of 63% on the RTE-3 test dataset.
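The abstract does not spell out which measures were used; the sketch below is only illustrative, pairing a symmetric Jaccard overlap with a non-symmetric coverage score (how much of the hypothesis is covered by the text) as the two features such a classifier might use. The function names and toy sentences are assumptions, not the paper's actual measures.

```python
# Sketch: entailment as classification over one symmetric and one
# non-symmetric similarity feature (illustrative measures only).

def tokens(sentence):
    """Lowercased word tokens; a real system would lemmatize and drop stop words."""
    return set(sentence.lower().split())

def symmetric_similarity(text, hypothesis):
    """Jaccard overlap: the order of the arguments does not matter."""
    t, h = tokens(text), tokens(hypothesis)
    return len(t & h) / len(t | h) if t | h else 0.0

def asymmetric_similarity(text, hypothesis):
    """Coverage of the hypothesis by the text: not symmetric in its arguments."""
    t, h = tokens(text), tokens(hypothesis)
    return len(t & h) / len(h) if h else 0.0

def features(text, hypothesis):
    return [symmetric_similarity(text, hypothesis),
            asymmetric_similarity(text, hypothesis)]

if __name__ == "__main__":
    text = "The cat sat on the mat in the kitchen."
    hypothesis = "A cat was in the kitchen."
    print(features(text, hypothesis))
    # These two numbers would be fed to any standard classifier
    # trained on RTE development pairs.
```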
TALP at TAC 2008: A Semantic Approach to Recognizing Textual Entailment
2008
This paper describes our experiments on textual entailment in the context of the Fourth Recognising Textual Entailment (RTE-4) Evaluation Challenge at TAC 2008. Our system uses a machine learning approach with AdaBoost to deal with the RTE challenge. We perform a lexical, syntactic, and semantic analysis of the entailment pairs. From this information we compute a set of semantic-based distances between sentences. We improved our RTE-3 baseline system with additional language processing techniques, a hypothesis classifier, and new semantic features. The results show no general improvement with respect to the baseline.
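As a rough illustration of this kind of setup, the sketch below trains scikit-learn's AdaBoostClassifier on invented lexical/syntactic/semantic distance features; the TALP system's actual distances and data are not reproduced here, and all feature values are placeholders.

```python
# Sketch: an AdaBoost classifier over sentence-distance features,
# in the spirit of the TALP RTE-4 setup (feature values are invented
# placeholders; the paper's actual distances are not reproduced here).
from sklearn.ensemble import AdaBoostClassifier

# Each row: [lexical_distance, syntactic_distance, semantic_distance]
X_train = [
    [0.10, 0.20, 0.15],   # pair judged ENTAILMENT
    [0.80, 0.70, 0.90],   # pair judged NO ENTAILMENT
    [0.15, 0.30, 0.20],
    [0.75, 0.85, 0.60],
]
y_train = [1, 0, 1, 0]    # 1 = entailment, 0 = no entailment

clf = AdaBoostClassifier(n_estimators=50, random_state=0)
clf.fit(X_train, y_train)

X_test = [[0.20, 0.25, 0.10]]
print(clf.predict(X_test))  # -> [1], i.e. predicted entailment
```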
Relation alignment for textual entailment recognition
2009
We present an approach to textual entailment recognition in which inference is based on a shallow semantic representation of relations (predicates and their arguments) in the text and hypothesis of the entailment pair, and in which specialized knowledge is encapsulated in modular components with very simple interfaces. We propose an architecture designed to integrate different, unscaled Natural...
We present the architecture and the evaluation of a new system for recognizing textual entailment (RTE). In RTE the goal is to automatically identify the type of logical relation between two input texts; in particular, we are interested in proving the existence of an entailment between them. We conceive our system as a modular environment allowing for high-coverage syntactic and semantic text analysis combined with logical inference. For the syntactic and semantic analysis we combine a deep semantic analysis with a shallow one supported by statistical models in order to increase the quality and the accuracy of the results. For RTE we use first-order logical inference employing model-theoretic techniques and automated reasoning tools. The inference is supported by problem-relevant background knowledge extracted automatically and on demand from external sources such as WordNet, YAGO, and OpenCyc, or from other, more experimental sources such as manually defined presupposition reso...
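A minimal sketch of this style of inference, using NLTK's resolution prover and one hand-written hyponymy axiom standing in for knowledge that a full system would fetch on demand from WordNet, YAGO, or OpenCyc; the formulas and predicate names are illustrative assumptions, not the paper's representation.

```python
# Sketch: proving an entailment by first-order inference with one
# hand-written background axiom in place of knowledge pulled on demand
# from external resources.  Uses NLTK's built-in resolution prover.
from nltk.sem import Expression
from nltk.inference import ResolutionProver

read_expr = Expression.fromstring

# Text: "A poodle barked."      Hypothesis: "A dog barked."
text       = read_expr('exists x.(poodle(x) & bark(x))')
hypothesis = read_expr('exists x.(dog(x) & bark(x))')

# Background knowledge: every poodle is a dog (a WordNet-style hyponymy axiom).
axiom = read_expr('all x.(poodle(x) -> dog(x))')

prover = ResolutionProver()
print(prover.prove(hypothesis, [text, axiom]))   # True: entailment is provable
print(prover.prove(hypothesis, [text]))          # False without the axiom
```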
A Lexico-Syntactic-Semantic Approach to Recognizing Textual Entailment
Advances in Computational Intelligence and Robotics, 2020
Given two textual fragments, called a text and a hypothesis, respectively, recognizing textual entailment (RTE) is the task of automatically deciding whether the meaning of the second fragment (the hypothesis) logically follows from the meaning of the first fragment (the text). The chapter presents a method for RTE based on lexical similarity, dependency relations, and semantic similarity. In this method, called LSS-RTE, each of the two fragments is converted to a dependency graph, and the two resulting graph structures are compared using dependency triple matching rules, which have been compiled after a thorough and detailed analysis of various RTE development datasets. Experimental results show 60.5%, 64.4%, 62.8%, and 61.5% accuracy on the well-known RTE1, RTE2, RTE3, and RTE4 datasets, respectively, for the two-way classification task, and 54.3% accuracy for the three-way classification task on the RTE4 dataset.
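The LSS-RTE matching rules themselves are not reproduced in this listing; the sketch below only illustrates the underlying idea of scoring how many hypothesis dependency triples are matched in the text. The triples are hand-written (a real system would obtain them from a dependency parser) and the 0.5 decision threshold is an arbitrary illustrative choice.

```python
# Sketch: dependency-triple overlap as a crude stand-in for triple-matching rules.

def triple_overlap(text_triples, hyp_triples):
    """Fraction of hypothesis (head, relation, dependent) triples
    that appear verbatim among the text's triples."""
    if not hyp_triples:
        return 0.0
    matched = sum(1 for t in hyp_triples if t in text_triples)
    return matched / len(hyp_triples)

# "John bought a new car."  /  "John bought a car."
text_triples = {("bought", "nsubj", "John"),
                ("bought", "obj", "car"),
                ("car", "amod", "new")}
hyp_triples  = {("bought", "nsubj", "John"),
                ("bought", "obj", "car")}

score = triple_overlap(text_triples, hyp_triples)
print(score, "ENTAILMENT" if score >= 0.5 else "NO ENTAILMENT")
```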
UB. dmirg: Learning Textual Entailment Relationships Using Lexical Semantic Features
2010
This paper describes our Recognizing Textual Entailment (RTE) system, developed at the University of Ballarat, Australia, for participation in the Text Analysis Conference RTE 2010 competition. This year, we participated in the Main task and used a machine learning approach for learning textual entailment relationships using parse-free lexical semantic features. For this, we employed the FrameNet and WordNet resources to extract event-based and semantic features from both hypotheses and texts. Our system also used the longest common substring of lemmas when learning the entailment relationships.
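As an illustration of the parse-free, longest-common-substring-of-lemmas feature mentioned here, the following sketch computes the longest contiguous run of shared lemmas with a standard dynamic program; the lemma sequences are hand-written assumptions, and the UB.dmirg feature set is not reproduced.

```python
# Sketch: longest common substring (contiguous run) of lemmas between a
# text and a hypothesis.  Lemmas are given by hand; a real system would
# use a lemmatizer on the RTE pair.

def longest_common_lemma_substring(text_lemmas, hyp_lemmas):
    """Length of the longest contiguous run of lemmas shared by both sequences
    (classic dynamic-programming longest-common-substring)."""
    best = 0
    # dp[i][j] = length of the common run ending at text_lemmas[i-1], hyp_lemmas[j-1]
    dp = [[0] * (len(hyp_lemmas) + 1) for _ in range(len(text_lemmas) + 1)]
    for i in range(1, len(text_lemmas) + 1):
        for j in range(1, len(hyp_lemmas) + 1):
            if text_lemmas[i - 1] == hyp_lemmas[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
                best = max(best, dp[i][j])
    return best

text_lemmas = ["the", "company", "buy", "a", "small", "firm", "in", "2005"]
hyp_lemmas  = ["the", "company", "buy", "a", "firm"]
print(longest_common_lemma_substring(text_lemmas, hyp_lemmas))  # -> 4
```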
Recognizing Textual Entailment
Since 2005, researchers have worked on a broad task called Recognizing Textual Entailment (RTE), which is designed to focus efforts on general textual inference capabilities, but without constraining participants to use a specific representation or reasoning approach. There have been promising developments in this sub-field of Natural Language Processing (NLP), with systems showing steady improvement, and investigations of a range of approaches to the problem.