Querying Word Embeddings for Similarity and Relatedness

Analyzing how context size and symmetry influence word embedding information

Pompeu Fabra University Digital Repository, 2022

Word embeddings represent word meaning in the form of a vector; however, the encoded information varies depending on the parameters the vector has been trained with. This paper analyzes how two parameters, context size and symmetry, influence word embedding information, and aims to determine whether a single distributional parametrization can capture both semantic similarity and relatedness. The models were trained with GloVe under different parametrizations and then evaluated quantitatively on a similarity task, using WordSim-353 (for relatedness) and SimLex-999 (for semantic similarity) as benchmarks. The results show minimal variation when manipulating some of the analyzed parameters, in particular between symmetric and asymmetric contexts, which leads us to conclude that training models with large contexts is not necessary to achieve good performance.
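As an illustration of the kind of intrinsic evaluation described above, the sketch below scores word pairs from a benchmark such as SimLex-999 with cosine similarity and correlates the scores with human ratings. The file paths and the use of gensim and SciPy are assumptions for the sketch, not details from the paper.

```python
# Minimal sketch of an intrinsic similarity evaluation, assuming pre-trained
# vectors converted to word2vec text format and a benchmark file with lines of
# the form "word1<TAB>word2<TAB>human_score" (both paths are hypothetical).
from gensim.models import KeyedVectors
from scipy.stats import spearmanr

vectors = KeyedVectors.load_word2vec_format("glove.sym.win10.txt")  # hypothetical path

model_scores, human_scores = [], []
with open("simlex999.tsv", encoding="utf-8") as f:        # hypothetical benchmark file
    for line in f:
        w1, w2, gold = line.rstrip("\n").split("\t")
        if w1 in vectors and w2 in vectors:               # skip out-of-vocabulary pairs
            model_scores.append(vectors.similarity(w1, w2))
            human_scores.append(float(gold))

rho, _ = spearmanr(model_scores, human_scores)            # rank correlation with human judgements
print(f"Spearman rho: {rho:.3f}")
```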

Circles are like Ellipses, or Ellipses are like Circles? Measuring the Degree of Asymmetry of Static and Contextual Word Embeddings and the Implications to Representation Learning

Proceedings of the ... AAAI Conference on Artificial Intelligence, 2021

Human judgments of word similarity have been a popular method of evaluating the quality of word embeddings, but they fail to measure geometric properties such as asymmetry. For example, it is more natural to say "Ellipses are like Circles" than "Circles are like Ellipses". Such asymmetry has been observed in word evocation experiments, where one word is used to recall another. These association data have been understudied as a measure of embedding quality. In this paper, we use three well-known evocation datasets for this purpose and study both static embeddings and contextual embeddings such as BERT. To deal with the dynamic nature of BERT embeddings, we probe BERT's conditional probabilities as a language model, using a large number of Wikipedia contexts to derive a theoretically justifiable Bayesian asymmetry score. The results show that asymmetry judgments and similarity judgments disagree, and that the asymmetry judgment aligns with BERT's strong performance on extrinsic evaluations. This is the first time the strength of contextual embeddings has been shown on an intrinsic evaluation, and the asymmetry judgment provides a new perspective for evaluating contextual embeddings and new insights for representation learning.
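The paper's Bayesian asymmetry score is not reproduced here; the sketch below only illustrates the general idea of probing a masked language model's conditional probabilities in both directions. The model name, templates, and word pair are illustrative choices, not taken from the paper.

```python
# Illustrative probe of directional association with a masked language model.
# This is NOT the paper's Bayesian asymmetry score; it simply compares
# P(target | template with the other word) in both directions using BERT.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def fill_prob(template: str, target: str) -> float:
    """Probability assigned to `target` at the [MASK] position (single-wordpiece targets only)."""
    inputs = tokenizer(template, return_tensors="pt")
    mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero()[0, 1]
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = logits[0, mask_pos].softmax(dim=-1)
    return probs[tokenizer.convert_tokens_to_ids(target)].item()

# toy directional pair; multi-wordpiece words would need extra handling
p_fwd = fill_prob("a dog is a kind of [MASK].", "animal")
p_bwd = fill_prob("an animal is a kind of [MASK].", "dog")
print(f"P(animal | dog ...) = {p_fwd:.4f}, P(dog | animal ...) = {p_bwd:.4f}")
```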

Word Sense Distance in Human Similarity Judgements and Contextualised Word Embeddings

2020

Homonymy is often used to showcase one of the advantages of context-sensitive word embedding techniques such as ELMo and BERT. In this paper we want to shift the focus to the related but less exhaustively explored phenomenon of polysemy, where a word expresses various distinct but related senses in different contexts. Specifically, we aim to i) investigate a recent model of polyseme sense clustering proposed by Ortega-Andres & Vicente (2019) through analysing empirical evidence of word sense grouping in human similarity judgements, ii) extend the evaluation of context-sensitive word embedding systems by examining whether they encode differences in word sense similarity and iii) compare the word sense similarities of both methods to assess their correlation and gain some intuition as to how well contextualised word embeddings could be used as surrogate word sense similarity judgements in linguistic experiments.
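A rough sketch of how contextual embeddings could be probed for sense similarity is shown below, assuming a Hugging Face BERT model and hand-picked example sentences; none of these choices come from the paper.

```python
# Rough sketch: compare contextual embeddings of one polysemous word across
# sentences to probe sense similarity (model name and sentences are only examples).
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def word_vector(sentence: str, word: str) -> torch.Tensor:
    """Last-layer embedding of the first subword of `word` in `sentence`."""
    enc = tokenizer(sentence, return_tensors="pt")
    tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0])
    idx = tokens.index(tokenizer.tokenize(word)[0])   # position of the first subword
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]
    return hidden[idx]

v_org = word_vector("The newspaper fired its editor.", "newspaper")      # organisation sense
v_obj = word_vector("The newspaper was soaked by the rain.", "newspaper")  # physical-object sense
v_other = word_vector("The magazine fired its editor.", "magazine")
sim = torch.nn.functional.cosine_similarity
print(sim(v_org, v_obj, dim=0).item(), sim(v_org, v_other, dim=0).item())
```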

Understanding the Source of Semantic Regularities in Word Embeddings

Proceedings of the 24th Conference on Computational Natural Language Learning

Semantic relations are core to how humans understand and express concepts in the real world using language. Recently, there has been a thread of research aimed at modeling these relations by learning vector representations from text corpora. Most of these approaches focus strictly on leveraging the co-occurrences of related word pairs within sentences. In this paper, we investigate the hypothesis that examples of a lexical relation in a corpus are fundamental to a neural word embedding's ability to complete analogies involving that relation. Our experiments, in which we remove all known examples of a relation from the training corpora, show only marginal degradation in analogy completion performance involving the removed relation. This finding enhances our understanding of neural word embeddings, showing that co-occurrence information for a particular semantic relation is not the main source of their structural regularity.
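For reference, the analogy completion setting evaluated above is usually the vector-offset (3CosAdd-style) method, sketched below with gensim; the vector file path is hypothetical and any pre-trained embeddings in word2vec format would do.

```python
# Small sketch of 3CosAdd-style analogy completion with gensim.
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format("embeddings.txt")  # hypothetical path

# "man is to king as woman is to ?"  ->  argmax over x of cos(x, king - man + woman)
candidates = vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=5)
for word, score in candidates:
    print(f"{word}\t{score:.3f}")
```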

Word Embedding Evaluation in Downstream Tasks and Semantic Analogies

2020

Language models have long been a prolific area of study in the field of Natural Language Processing (NLP). One of the newer kinds of language models, and some of the most used, are Word Embeddings (WE). WE are vector space representations of a vocabulary learned by an unsupervised neural network based on the contexts in which words appear. WE have been widely used in downstream tasks in many areas of study in NLP, which usually employ these vector models as features in the processing of textual data. This paper presents the evaluation of newly released WE models for the Portuguese language, trained with a corpus composed of 4.9 billion tokens. The first evaluation is an intrinsic task in which the WEs had to correctly build semantic and syntactic relations. The second evaluation is an extrinsic task in which the WE models were used in two downstream tasks: Named Entity Recognition and Semantic Similarity between Sentences. Our results show that a diverse and comprehen...

Comparative Analysis of Word Embeddings for Capturing Word Similarities

Distributed language representation has become the most widely used technique for language representation in various natural language processing tasks. Most natural language processing models that are based on deep learning techniques use already pre-trained distributed word representations, commonly called word embeddings. Determining the most qualitative word embeddings is of crucial importance for such models. However, selecting the appropriate word embeddings is a perplexing task, since the projected embedding space is not intuitive to humans. In this paper, we explore different approaches for creating distributed word representations. We perform an intrinsic evaluation of several state-of-the-art word embedding methods, analysing their performance on capturing word similarities with existing benchmark datasets for word-pair similarities. We also conduct a correlation analysis between ground-truth word similarities and the similarities obtained by the different word embedding methods.

(Presentation) Fitting Semantic Relations to Word Embeddings

Proceedings of the 10th Global Wordnet Conference, 2019

We fit WordNet relations to word embeddings using 3CosAvg and LRCos, two set-based methods for analogy resolution, and introduce 3CosWeight, a new, weighted variant of 3CosAvg. We test the performance of the resulting semantic vectors in lexicographic semantics tests and show that none of the tested classifiers can learn symmetric relations like synonymy and antonymy, since the source and target words of these relations form the same set. By contrast, for the asymmetric relations (hyperonymy/hyponymy and meronymy), both 3CosAvg and LRCos clearly outperform the baseline in all cases, while 3CosWeight attains the best scores with hyponymy and meronymy, suggesting that this new method could provide a useful alternative to previous approaches.
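A minimal sketch of the 3CosAvg idea follows: average the source-to-target offset over known example pairs of a relation, then query the nearest neighbour of the shifted query vector. The pair list and vector file are illustrative assumptions, not data from the paper.

```python
# Sketch of the set-based 3CosAvg method with gensim and NumPy.
import numpy as np
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format("embeddings.txt")  # hypothetical path

def three_cos_avg(pairs, query, kv, topn=5):
    """pairs: known (source, target) examples of the relation; query: a new source word."""
    offsets = [kv[b] - kv[a] for a, b in pairs if a in kv and b in kv]
    avg_offset = np.mean(offsets, axis=0)               # average relation offset
    return kv.similar_by_vector(kv[query] + avg_offset, topn=topn)

hypernym_pairs = [("dog", "animal"), ("oak", "tree"), ("rose", "flower")]  # toy examples
print(three_cos_avg(hypernym_pairs, "salmon", vectors))
```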

Lexical semantics enhanced neural word embeddings

Knowledge-Based Systems

Current breakthroughs in natural language processing have benefited dramatically from neural language models, through which distributional semantics can leverage neural data representations to facilitate downstream applications. Since neural embeddings use context prediction on word co-occurrences to yield dense vectors, they are inevitably prone to capturing semantic association more than semantic similarity. To improve vector space models in deriving semantic similarity, we post-process neural word embeddings with deep metric learning, injecting lexical-semantic relations, including syn/antonymy and hypo/hypernymy, into a distributional space. We introduce hierarchy-fitting, a novel semantic specialization approach to modelling the semantic similarity nuances inherently stored in IS-A hierarchies. Hierarchy-fitting attains state-of-the-art results on the common- and rare-word benchmark datasets for deriving semantic similarity from neural word embeddings. It also incorporates an asymmetric distance function to explicitly specialize the directionality of hypernymy, through which it significantly improves vanilla embeddings in multiple evaluation tasks of detecting hypernymy and its directionality without negative impacts on semantic similarity judgement. The results demonstrate the efficacy of hierarchy-fitting in specializing neural embeddings with semantic relations in late fusion, potentially expanding its applicability to aggregating heterogeneous data and various knowledge resources for learning multimodal semantic spaces.
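Hierarchy-fitting itself is not reproduced here; the sketch below only shows the generic deep-metric-learning ingredient, a triplet loss that pulls synonyms closer to an anchor than antonyms. The vocabulary, triples, dimensionality, and training loop are toy assumptions, not the paper's setup.

```python
# Generic sketch of specializing embeddings with a triplet loss (not the paper's
# hierarchy-fitting): synonyms are pulled closer to the anchor than antonyms by a margin.
import torch
import torch.nn as nn

vocab = {"hot": 0, "warm": 1, "cold": 2, "happy": 3, "glad": 4, "sad": 5}
emb = nn.Embedding(len(vocab), 50)          # in practice initialised from pre-trained vectors
loss_fn = nn.TripletMarginLoss(margin=0.5)
optim = torch.optim.Adam(emb.parameters(), lr=1e-3)

# (anchor, positive = synonym, negative = antonym) triples from a lexical resource
triples = [("hot", "warm", "cold"), ("happy", "glad", "sad")]
ids = torch.tensor([[vocab[a], vocab[p], vocab[n]] for a, p, n in triples])

for _ in range(100):                        # tiny training loop, for illustration only
    a, p, n = emb(ids[:, 0]), emb(ids[:, 1]), emb(ids[:, 2])
    loss = loss_fn(a, p, n)
    optim.zero_grad()
    loss.backward()
    optim.step()
```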

Combining Word Representations for Measuring Word Relatedness and Similarity

2015

Many unsupervised methods, such as Latent Semantic Analysis and Latent Dirichlet Allocation, have been proposed to automatically infer word representations in the form of a vector. By representing a word as a vector, one can exploit the power of vector algebra to solve many Natural Language Processing tasks; e.g., the semantic similarity between two words can be captured by computing the cosine similarity between the corresponding word vectors. In this paper, we hypothesize that combining different word representations complements the coverage of the semantic aspects of a word and thus represents the word better than the individual representations. To this end, we present two approaches to combining word representations obtained from many heterogeneous sources. We also report empirical results for word-to-word semantic similarity and relatedness using the new representations on two existing benchmark datasets.
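A minimal sketch of one simple combination strategy, concatenating unit-normalised vectors from two heterogeneous sources before computing cosine similarity; the toy vectors are illustrative and the paper's own combination approaches are not reproduced.

```python
# Concatenate two representations of the same word and score pairs with cosine similarity.
import numpy as np
from numpy.linalg import norm

def combine(vec_a: np.ndarray, vec_b: np.ndarray) -> np.ndarray:
    """Concatenate two representations after unit-normalising each source."""
    return np.concatenate([vec_a / norm(vec_a), vec_b / norm(vec_b)])

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (norm(u) * norm(v)))

# toy 4-d vectors standing in for, e.g., an LSA space and an LDA-derived space
lsa = {"car": np.array([0.2, 0.9, 0.1, 0.4]), "truck": np.array([0.3, 0.8, 0.2, 0.5])}
lda = {"car": np.array([0.7, 0.1, 0.6, 0.2]), "truck": np.array([0.6, 0.2, 0.7, 0.1])}

u = combine(lsa["car"], lda["car"])
v = combine(lsa["truck"], lda["truck"])
print(f"combined cosine(car, truck) = {cosine(u, v):.3f}")
```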

Wan2vec: Embeddings learned on word association norms

Semantic Web

Word embeddings are powerful for many tasks in natural language processing. In this work, we learn word embeddings using weighted graphs from word association norms (WAN) with the node2vec algorithm. Although building a WAN is a difficult and time-consuming task, training the vectors from these resources is a fast and efficient process, which allows us to obtain good quality word embeddings from small corpora. We evaluate our word vectors in two ways: intrinsically and extrinsically. The intrinsic evaluation was performed with several word similarity benchmarks (WordSim-353, MC30, MTurk-287, MEN-TR-3k, SimLex-999, MTurk-771 and RG-65) and different similarity measures, achieving better results than those obtained with word2vec, GloVe, and fastText trained on a huge corpus. The extrinsic evaluation was done by measuring the quality of sentence embeddings on transfer tasks: sentiment analysis, paraphrase detection, natural language inference, and semantic textual similarity.
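A DeepWalk-style simplification of this approach can be sketched as follows: build a weighted graph from association norms, sample weighted random walks, and train word2vec on the walks. The toy association list and hyperparameters are assumptions, and node2vec's biased walk strategy is not implemented here.

```python
# Simplified wan2vec-like pipeline: weighted WAN graph -> random walks -> word2vec.
import random
import networkx as nx
from gensim.models import Word2Vec

# (cue, response, association strength) triples -- illustrative values only
associations = [("dog", "cat", 0.6), ("dog", "bone", 0.3), ("cat", "mouse", 0.5)]
g = nx.Graph()
g.add_weighted_edges_from(associations)

def random_walk(graph, start, length=10):
    """Weighted random walk over the association graph (p = q = 1, i.e. DeepWalk-style)."""
    walk = [start]
    for _ in range(length - 1):
        nbrs = list(graph[walk[-1]])
        weights = [graph[walk[-1]][n]["weight"] for n in nbrs]
        walk.append(random.choices(nbrs, weights=weights, k=1)[0])
    return walk

walks = [random_walk(g, node) for node in g.nodes for _ in range(20)]
model = Word2Vec(walks, vector_size=50, window=5, min_count=1, sg=1, epochs=10)
print(model.wv.most_similar("dog"))
```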