Cracking the Figurative Code: A Survey of Metaphor Detection Techniques

Deep and Contextually Engineered Features for Metaphor Detection

This work focuses on automatically detecting metaphor instances in short texts. It studies how to extract the most effective features for the task by combining a deep learning architecture with carefully handcrafted contextual features, all of which are discussed in detail in the paper. Some feature sets perform well when used independently, while others do not; however, the weaker sets become very useful once combined with the stronger ones. The combined feature sets are then classified with well-known machine learning algorithms, five of which are compared, and Support Vector Machine (SVM) is found to perform best for this task. The experiments yield good results on all metrics used, and a comparison with existing works in the same domain in terms of F1-measure is compiled and reported in this...
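As a rough illustration of the pipeline sketched in this abstract, the snippet below concatenates deep and handcrafted feature vectors and compares several standard classifiers, including SVM. This is a minimal sketch, not the authors' code: the feature arrays, the `combine_features` helper, and the particular five classifiers are assumptions for illustration, since the abstract does not name them.

```python
# Minimal sketch (not the authors' system): combine deep and handcrafted
# contextual features, then compare several classifiers including SVM.
import numpy as np
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def combine_features(deep_feats, handcrafted_feats):
    """Concatenate deep (encoder-derived) and handcrafted contextual features."""
    return np.hstack([deep_feats, handcrafted_feats])

# Placeholder data: in practice these come from the feature extractors.
X_deep = np.random.rand(200, 128)   # hypothetical deep features
X_hand = np.random.rand(200, 20)    # hypothetical handcrafted contextual features
y = np.random.randint(0, 2, 200)    # 1 = metaphorical, 0 = literal

X = combine_features(X_deep, X_hand)

classifiers = {
    "SVM": SVC(kernel="rbf"),
    "LogReg": LogisticRegression(max_iter=1000),
    "NaiveBayes": GaussianNB(),
    "RandomForest": RandomForestClassifier(),
    "kNN": KNeighborsClassifier(),
}

for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5, scoring="f1")
    print(f"{name}: mean F1 = {scores.mean():.3f}")
```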

Metaphor Detection

2013 19th International Conference on Control Systems and Computer Science, 2013

Because of the ubiquity of metaphors in language, metaphor processing is a very important task in the field of natural language processing. The first step towards metaphor processing, and probably the most difficult one, is metaphor detection. In the first part of this paper, we review the theoretical background for metaphors and the models and implementations that have been proposed for their detection. We then build corpora for detecting three types of metaphors: IS-A metaphors, metaphors formed with the preposition 'of' and metaphors formed with a verb. For the first two tasks, we train supervised classifiers using semantic features. For the third task, we use features commonly used in text categorization.

Metaphor Detection Using Contextual Word Embeddings From Transformers

Proceedings of the Second Workshop on Figurative Language Processing, 2020

The detection of metaphors can provide valuable information about a given text and is crucial to sentiment analysis and machine translation. In this paper, we outline the techniques for word-level metaphor detection used in our submission to the Second Shared Task on Metaphor Detection. We propose using both BERT and XLNet language models to create contextualized embeddings and a bidirectional LSTM to identify whether a given word is a metaphor. Our best model achieved F1-scores of 68.0% on VUA AllPOS, 73.0% on VUA Verbs, 66.9% on TOEFL AllPOS, and 69.7% on TOEFL Verbs, placing 7th, 6th, 5th, and 5th respectively. In addition, we outline another potential approach with a KNN-LSTM ensemble model that we did not have enough time to implement given the deadline for the competition. We show that a KNN classifier provides a similar F1-score on a validation set as the LSTM and yields different information on metaphors.
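The architecture described here, contextual transformer embeddings feeding a bidirectional LSTM token classifier, can be sketched as follows. This is an illustrative sketch rather than the shared-task submission itself: the model name `bert-base-uncased`, the hidden size, and the toy input sentence are assumptions.

```python
# Illustrative sketch only: BERT contextual embeddings -> BiLSTM -> per-token
# metaphor/literal logits, in the spirit of the approach described above.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

class BertBiLSTMTagger(nn.Module):
    def __init__(self, encoder_name="bert-base-uncased", hidden=256):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        self.lstm = nn.LSTM(self.encoder.config.hidden_size, hidden,
                            batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden, 2)  # metaphor vs. literal

    def forward(self, input_ids, attention_mask):
        # Contextualized subword embeddings from the transformer encoder
        hidden_states = self.encoder(input_ids=input_ids,
                                     attention_mask=attention_mask).last_hidden_state
        lstm_out, _ = self.lstm(hidden_states)
        return self.classifier(lstm_out)  # per-token logits

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = BertBiLSTMTagger()
batch = tokenizer(["He devoured the book in one sitting"], return_tensors="pt")
logits = model(batch["input_ids"], batch["attention_mask"])
print(logits.shape)  # (1, num_subword_tokens, 2)
```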

Linguistic Analysis Improves Neural Metaphor Detection

2019

In the field of metaphor detection, deep learning systems are ubiquitous and achieve strong performance on many tasks. However, due to the complicated procedures for manually identifying metaphors, the datasets available are relatively small and fraught with complications. We show that using syntactic features and lexical resources can automatically provide additional high-quality training data for metaphoric language, and this data can cover gaps and inconsistencies in metaphor annotation, improving state-of-the-art word-level metaphor identification. This novel application of automatically improving training data improves classification across numerous tasks, and reconfirms the necessity of high-quality data for deep learning frameworks.

Using Language Learner Data for Metaphor Detection

Proceedings of the Workshop on Figurative Language Processing

This article describes the system that participated in the shared task (ST) on metaphor detection (Leong et al., 2018) on the Vrije University Amsterdam Metaphor Corpus (VUA). The ST was part of the workshop on processing figurative language at the 16th annual conference of the North American Chapter of the Association for Computational Linguistics (NAACL2018). The system combines a small assortment of trending techniques that implement mature methods from NLP and ML; in particular, it uses word embeddings from standard corpora and from corpora representing different proficiency levels of language learners in an LSTM BiRNN architecture. The system is available under the APLv2 open-source license.

Deep" Learning : Detecting Metaphoricity in Adjective-Noun Pairs

Proceedings of the Workshop on Stylistic Variation, 2017

Metaphor is one of the most studied and widespread figures of speech and an essential element of individual style. In this paper we look at metaphor identification in Adjective-Noun pairs. We show that using a single neural network combined with pre-trained vector embeddings can outperform the state of the art in terms of accuracy. Specifically, the approach presented in this paper is based on two ideas: a) transfer learning using pre-trained vectors representing adjective-noun pairs, and b) a neural network as a model of composition that predicts a metaphoricity score as output. We present several different architectures for our system and evaluate their performance. Variations on dataset size and on the kinds of embeddings are also investigated. We show considerable improvement over previous approaches both in terms of accuracy and with respect to the size of annotated training data. * This research is funded by the Centre of Linguistic Theory and Studies in Probability at the University of Gothenburg.
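The core idea, a neural composition model over pre-trained adjective and noun vectors that outputs a metaphoricity score, can be illustrated as below. This is a hedged sketch under assumptions, not the paper's exact architecture: the embedding source, dimensionality, and example words are placeholders.

```python
# Hedged sketch of the general idea: a small feed-forward network composes
# pre-trained adjective and noun vectors and predicts a metaphoricity score.
import torch
import torch.nn as nn

class ANMetaphorScorer(nn.Module):
    def __init__(self, emb_dim=300, hidden=100):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * emb_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
            nn.Sigmoid(),  # metaphoricity score in [0, 1]
        )

    def forward(self, adj_vec, noun_vec):
        pair = torch.cat([adj_vec, noun_vec], dim=-1)
        return self.mlp(pair)

# Dummy vectors stand in for pre-trained embeddings (e.g. word2vec/GloVe).
adj = torch.randn(1, 300)    # e.g. "bright"
noun = torch.randn(1, 300)   # e.g. "idea"
score = ANMetaphorScorer()(adj, noun)
print(float(score))  # closer to 1 -> more metaphorical (untrained here)
```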

Supervised Word-Level Metaphor Detection: Experiments with Concreteness and Reweighting of Examples

Proceedings of the Third Workshop on Metaphor in NLP (at NAACL 2015), 2015

We present a supervised machine learning system for word-level classification of all content words in a running text as being metaphorical or non-metaphorical. The system provides a substantial improvement upon a previously published baseline, using re-weighting of the training examples and using features derived from a concreteness database. We observe that while the first manipulation was very effective, the second was only slightly so. Possible reasons for these observations are discussed.
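The two manipulations mentioned here, re-weighting training examples and adding concreteness-derived features, can be sketched as follows. This is an illustrative sketch, not the published system: the concreteness values, the toy features, and the weighting scheme are invented for illustration (roughly in the spirit of published concreteness rating databases).

```python
# Illustrative sketch: up-weight the metaphorical class and include a
# concreteness rating as a feature for word-level classification.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy word-level examples: [word length as a stand-in base feature, concreteness]
concreteness = {"attack": 3.0, "idea": 1.6, "table": 5.0, "grasp": 3.9}  # made-up values
words = ["attack", "idea", "table", "grasp"]
X = np.array([[len(w), concreteness[w]] for w in words])
y = np.array([1, 0, 0, 1])  # 1 = metaphorical use, 0 = literal

# Re-weighting: up-weight metaphorical examples to counter label imbalance.
sample_weight = np.where(y == 1, 3.0, 1.0)

clf = LogisticRegression().fit(X, y, sample_weight=sample_weight)
print(clf.predict(X))
```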

Testing the role of metadata in metaphor identification

Proceedings of the Second Workshop on Figurative Language Processing

This paper describes the adaptation and application of a neural network system for the automatic detection of metaphors. The LSTM BiRNN system participated in the shared task of metaphor identification that was part of the Second Workshop on Figurative Language Processing (FigLang2020) held at the Annual Conference of the Association for Computational Linguistics (ACL2020). The particular focus of our approach is on the potential influence that the metadata given in the ETS Corpus of Non-Native Written English might have on the automatic detection of metaphors in this dataset. The article first discusses the annotated ETS learner data, highlighting some of its peculiarities and inherent biases in metaphor use. A series of evaluations follows in order to test whether specific metadata influence system performance in the task of automatic metaphor identification. The system is available under the APLv2 open-source license.

Robust Extraction of Metaphor from Novel Data

2013

This article describes our novel approach to the automated detection and analysis of metaphors in text. We employ robust, quantitative language processing to implement a system prototype, combined with sound social science methods for validation. We show results in four different languages and discuss how our methods are a significant step forward from previously established techniques of metaphor identification. We use Topical Structure and Tracking, an Imageability score, and innovative methods to build an effective metaphor identification system that is fully automated and performs well above baseline.

Catching metaphors

Proceedings of the Third Workshop on Scalable Natural Language Understanding - ScaNaLU '06, 2006

Metaphors are ubiquitous in language and developing methods to identify and deal with metaphors is an open problem in Natural Language Processing (NLP). In this paper we describe results from using a maximum entropy (ME) classifier to identify metaphors. Using the Wall Street Journal (WSJ) corpus, we annotated all the verbal targets associated with a set of frames which includes frames of spatial motion, manipulation, and health. One surprising finding was that over 90% of annotated targets from these frames are used metaphorically, underscoring the importance of processing figurative language. We then used this labeled data and each verbal target's PropBank annotation to train a maximum entropy classifier to make this literal vs. metaphoric distinction. Using the classifier, we reduce the final error in the test set by 5% over the verb-specific majority class baseline and 31% over the corpus-wide majority class baseline.
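A maximum-entropy classifier over per-verb features, together with the relative error-reduction comparison used above, can be sketched as follows. This is a hedged illustration only: the feature dictionaries are invented stand-ins for PropBank-derived features, and the error rates in the final lines are made-up numbers chosen solely to show how figures like 5% and 31% are computed.

```python
# Hedged sketch: a maximum-entropy (logistic regression) classifier over
# toy verb-target features, plus relative error reduction vs. a baseline.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

# Toy feature dicts standing in for PropBank-derived features of each verb target.
train_feats = [
    {"verb": "attack", "arg0_animate": True},
    {"verb": "attack", "arg0_animate": False},
    {"verb": "walk", "arg0_animate": True},
]
train_labels = ["metaphoric", "literal", "literal"]

vec = DictVectorizer()
X = vec.fit_transform(train_feats)
maxent = LogisticRegression(max_iter=1000).fit(X, train_labels)
print(maxent.predict(vec.transform([{"verb": "attack", "arg0_animate": True}])))

def relative_error_reduction(baseline_error, system_error):
    """Fraction of the baseline's errors removed by the system."""
    return (baseline_error - system_error) / baseline_error

# Invented error rates, only to show how 5% / ~31% reductions are computed.
print(relative_error_reduction(0.20, 0.19))    # 0.05  -> 5% reduction
print(relative_error_reduction(0.275, 0.19))   # ~0.31 -> ~31% reduction
```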