Montague Meets Markov: Deep Semantics with Probabilistic Logical Form
Related papers
Probabilistic Soft Logic for Semantic Textual Similarity
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2014
Probabilistic Soft Logic (PSL) is a recently developed framework for probabilistic logic. We use PSL to combine logical and distributional representations of natural-language meaning, where distributional information is represented in the form of weighted inference rules. We apply this framework to the task of Semantic Textual Similarity (STS) (i.e. judging the semantic similarity of natural-language sentences), and show that PSL gives improved results compared to a previous approach based on Markov Logic Networks (MLNs) and a purely distributional approach.
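The core idea of "distributional information as weighted inference rules" can be illustrated with a minimal sketch: the weight of a lexical rule such as car(X) -> automobile(X) is set from the cosine similarity of the two words' distributional vectors. The vectors and the rule format below are purely illustrative (real PSL rule syntax and real corpus-derived vectors differ), not the paper's actual pipeline.

```python
import math

def cosine(u, v):
    """Cosine similarity between two word vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Toy distributional vectors (hypothetical values, not real corpus counts).
vectors = {
    "car":        [0.9, 0.1, 0.3],
    "automobile": [0.8, 0.2, 0.3],
}

# A weighted inference rule in the spirit of the paper's approach:
# the rule's weight is the distributional similarity of its two predicates.
weight = cosine(vectors["car"], vectors["automobile"])
rule = f"{weight:.3f} : car(X) -> automobile(X)"
```

In a soft-logic setting such a rule is not hard: a high weight makes violating it costly during inference, while a low weight lets the inference engine override it, which is what makes graded lexical similarity usable inside a logical framework.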
Statistical Relational Learning to Recognise Textual Entailment
Lecture Notes in Computer Science, 2014
We propose a novel approach to recognising textual entailment (RTE) following a two-stage architecture (alignment and decision), where both stages are based on semantic representations. In the alignment stage, the entailment candidate pairs are represented and aligned using predicate-argument structures. In the decision stage, a Markov Logic Network (MLN) is learnt using rich relational information from the alignment stage to predict an entailment decision. We evaluate this approach using the RTE Challenge datasets. It achieves the best results for the RTE-3 dataset and shows performance comparable to state-of-the-art approaches on the other datasets.
Variational Inference for Logical Inference
Functional Distributional Semantics is a framework that aims to learn, from text, semantic representations which can be interpreted in terms of truth. Here we make two contributions to this framework. The first is to show how a type of logical inference can be performed by evaluating conditional probabilities. The second is to make these calculations tractable by means of a variational approximation. This approximation also enables faster convergence during training, allowing us to close the gap with state-of-the-art vector space models when evaluating on semantic similarity. We demonstrate promising performance on two tasks.
Recognizing textual entailment with deep-shallow semantic analysis and logical inference
In this paper, the architecture and evaluation of a new system for recognizing textual entailment (RTE) is presented. It is conceived as an adaptable and modular environment allowing for a high-coverage syntactic and semantic text analysis combined with logical inference. For the syntactic and semantic analysis it combines an HPSG-based deep semantic analysis with a shallow one supported by statistical models in order to increase the quality and accuracy of results. For recognizing textual entailment we use first-order logical inference employing model-theoretic techniques and automated reasoning tools. The inference is supported with problem-relevant background knowledge extracted automatically and on demand from external sources such as WordNet, YAGO, and OpenCyc, or from other, experimental sources with, e.g., manually defined presupposition resolutions, or with general and common-sense knowledge. The system comes with a graphical user interface for control and presentation p...
An inference model for semantic entailment in natural language
2006
Semantic entailment is the problem of determining if the meaning of a given sentence entails that of another. We present a principled approach to semantic entailment that builds on inducing re-representations of text snippets into a hierarchical knowledge representation along with an optimization-based inferential mechanism that makes use of it to prove semantic entailment. This paper provides details and analysis of the knowledge representation and knowledge resources issues encountered.
Distributional semantic models for detection of textual entailment
2016
We present our experiments on integrating and evaluating distributional semantics with the recognising textual entailment (RTE) task. We treat entailment as semantic similarity between text and hypothesis coupled with an additional heuristic, which can be either selecting the top-scoring hypothesis or applying a pre-defined threshold. We show that a distributional model is particularly good at detecting entailment related to "world knowledge", and that aligning the hypothesis with the text improves detection of lexical
Integrating logical representations with probabilistic information using Markov logic
Proceedings of the Ninth International …, 2011
First-order logic provides a powerful and flexible mechanism for representing natural language semantics. However, it is an open question how best to integrate it with uncertain, probabilistic knowledge, for example regarding word meaning. This paper describes the first steps of an approach to recasting first-order semantics into the probabilistic models that are part of Statistical Relational AI. Specifically, we show how Discourse Representation Structures can be combined with distributional models for word meaning inside a Markov Logic Network and used to successfully perform inferences that take advantage of logical concepts such as factivity as well as probabilistic information on word meaning in context.
Semantic Parsing using Distributional Semantics and Probabilistic Logic
Proceedings of the ACL 2014 Workshop on Semantic Parsing, 2014
We propose a new approach to semantic parsing that is not constrained by a fixed formal ontology and purely logical inference. Instead, we use distributional semantics to generate only the relevant part of an on-the-fly ontology. Sentences and the on-the-fly ontology are represented in probabilistic logic. For inference, we use probabilistic logic frameworks like Markov Logic Networks (MLN) and Probabilistic Soft Logic (PSL). This semantic parsing approach is evaluated on two tasks, Recognizing Textual Entailment (RTE) and Semantic Textual Similarity (STS), both accomplished using inference in probabilistic logic. Experiments show the potential of the approach.
A logic-based semantic approach to recognizing textual entailment
Proceedings of the COLING/ACL on Main conference poster sessions -, 2006
This paper proposes a knowledge representation model and a logic-proving setting with axioms on demand successfully used for recognizing textual entailment. It also details a lexical inference system which boosts the performance of the deep semantic approach on the RTE data. The linear combination of two slightly different logical systems with the third, lexical inference system achieves 73.75% accuracy on the RTE 2006 data.