A comparison between supervised learning algorithms for word sense disambiguation
Related papers
A New Supervised Learning Algorithm for Word Sense Disambiguation
1997
The Naive Mix is a new supervised learning algorithm that is based on a sequential method for selecting probabilistic models. The usual objective of model selection is to find a single model that adequately characterizes the data in a training sample. However, during model selection a sequence of models is generated that consists of the best-fitting model at each level of model complexity. The Naive Mix utilizes this sequence of models to define a probabilistic model which is then used as a probabilistic classifier to perform word-sense disambiguation. The models in this sequence are restricted to the class of decomposable log-linear models. This class of models offers a number of computational advantages. Experiments disambiguating twelve different words show that a Naive Mix formulated with a forward sequential search and Akaike's Information Criterion rivals established supervised learning algorithms such as decision trees (C4.5), rule induction (CN2) and nearest-neighbor classification (PEBLS).
Naive Bayes and exemplar-based approaches to word sense disambiguation revisited
2000
This paper describes an experimental comparison between two standard supervised learning methods, namely Naive Bayes and Exemplar-based classification, on the Word Sense Disambiguation (WSD) problem. The aim of the work is twofold. Firstly, it attempts to clarify some conflicting information about the comparison between the two methods that appears in the related literature. In doing so, several directions have been explored, including testing several modifications of the basic learning algorithms and varying the feature space. Secondly, an improvement of both algorithms is proposed in order to deal with large attribute sets. This modification, which basically consists in using only the positive information appearing in the examples, greatly improves the efficiency of the methods with no loss in accuracy. The experiments have been performed on the largest sense-tagged corpus available, containing the most frequent and ambiguous English words. Results show that the Exemplar-based approach to WSD is generally superior to the Bayesian approach, especially when a specific metric for dealing with symbolic attributes is used.
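The "positive information" modification described in this abstract can be sketched as a Naive Bayes classifier that, at prediction time, scores each sense using only the features actually present in the test example, so prediction cost grows with the example size rather than with the full attribute set. The class and toy data below are illustrative assumptions, not the paper's implementation.

```python
import math
from collections import Counter, defaultdict

class PositiveNaiveBayes:
    """Naive Bayes for WSD that scores a test example using only the
    features *present* in it (the 'positive information' idea), so the
    cost of prediction grows with the example size rather than with the
    full attribute set. Illustrative sketch, not the paper's code."""

    def __init__(self, alpha=1.0):
        self.alpha = alpha                       # Laplace smoothing
        self.sense_counts = Counter()            # sense -> #examples
        self.feat_counts = defaultdict(Counter)  # sense -> feature counts
        self.vocab = set()

    def fit(self, examples):
        # examples: iterable of (set_of_context_features, sense) pairs
        for feats, sense in examples:
            self.sense_counts[sense] += 1
            for f in feats:
                self.feat_counts[sense][f] += 1
                self.vocab.add(f)

    def predict(self, feats):
        total = sum(self.sense_counts.values())
        best_sense, best_lp = None, float("-inf")
        for sense, n in self.sense_counts.items():
            lp = math.log(n / total)             # log prior
            for f in feats:                      # positive features only
                p = ((self.feat_counts[sense][f] + self.alpha)
                     / (n + self.alpha * len(self.vocab)))
                lp += math.log(p)
            if lp > best_lp:
                best_sense, best_lp = sense, lp
        return best_sense
```

For example, after training on a few sense-tagged contexts of "bank", predicting from the single feature "money" selects the finance sense without ever iterating over the absent attributes.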
An empirical study of the domain dependence of supervised word sense disambiguation systems
Proceedings of the 2000 Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora, held in conjunction with the 38th Annual Meeting of the Association for Computational Linguistics, 2000
This paper describes a set of experiments carried out to explore the domain dependence of alternative supervised Word Sense Disambiguation algorithms. The aim of the work is threefold: studying the performance of these algorithms when tested on a different corpus from the one they were trained on; exploring their ability to tune to new domains; and demonstrating empirically that the LazyBoosting algorithm outperforms state-of-the-art supervised WSD algorithms in both situations.
Boosting Applied to Word Sense Disambiguation
Lecture Notes in Computer Science, 2000
In this paper Schapire and Singer's AdaBoost.MH boosting algorithm is applied to the Word Sense Disambiguation (WSD) problem. Initial experiments on a set of 15 selected polysemous words show that the boosting approach surpasses Naive Bayes and Exemplar-based approaches, which represent state-of-the-art accuracy on supervised WSD. In order to make boosting practical for a real learning domain of thousands of words, several ways of accelerating the algorithm by reducing the feature space are studied. The best variant, which we call LazyBoosting, is tested on the largest sense-tagged corpus available containing 192,800 examples of the 191 most frequent and ambiguous English words. Again, boosting compares favourably to the other benchmark algorithms.
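The LazyBoosting idea described here, reducing cost by inspecting only a random fraction of the feature space at each boosting round, can be sketched with binary AdaBoost over one-feature decision stumps. This is a deliberately simplified illustration (the paper uses AdaBoost.MH for the multiclass setting); all names and data are hypothetical.

```python
import math
import random

def lazy_boost(X, y, rounds=10, sample=0.3, seed=0):
    """Binary AdaBoost with one-feature 'stumps', examining only a random
    fraction of the features each round (LazyBoosting idea, sketched).
    X: list of feature sets; y: list of +1/-1 labels."""
    rng = random.Random(seed)
    feats = sorted({f for x in X for f in x})
    n = len(X)
    w = [1.0 / n] * n          # example weights
    model = []                 # list of (feature, polarity, alpha)
    for _ in range(rounds):
        cand = rng.sample(feats, max(1, int(sample * len(feats))))
        best = None
        for f in cand:
            for pol in (1, -1):
                # stump predicts pol if f is present, -pol otherwise
                err = sum(wi for wi, x, yi in zip(w, X, y)
                          if (pol if f in x else -pol) != yi)
                if best is None or err < best[0]:
                    best = (err, f, pol)
        err, f, pol = best
        err = min(max(err, 1e-10), 1 - 1e-10)   # avoid log(0)
        alpha = 0.5 * math.log((1 - err) / err)
        model.append((f, pol, alpha))
        # reweight: increase weight on misclassified examples
        w = [wi * math.exp(-alpha * yi * (pol if f in x else -pol))
             for wi, x, yi in zip(w, X, y)]
        z = sum(w)
        w = [wi / z for wi in w]
    return model

def boost_predict(model, x):
    # weighted vote of the learned stumps
    s = sum(alpha * (pol if f in x else -pol) for f, pol, alpha in model)
    return 1 if s >= 0 else -1
```

The `sample` fraction trades accuracy per round against speed: with thousands of candidate features per word, examining only a small random subset each round makes boosting practical while the reweighting still focuses later rounds on the hard examples.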
Learning Rules for Large-Vocabulary Word Sense Disambiguation: A Comparison of Various Classifiers
Lecture Notes in Computer Science, 2000
In this article we compare the performance of various machine learning algorithms on the task of constructing word-sense disambiguation rules from data. The distinguishing characteristic of our work from most of the related work in the field is that we aim at the disambiguation of all content words in the text, rather than focussing on a small number of words. In an earlier study we showed that a decision tree induction algorithm performs well on this task. This study compares decision tree induction with other popular learning methods and discusses their advantages and disadvantages. Our results confirm the good performance of decision tree induction, which outperforms the other algorithms due to its ability to order the features used for disambiguation according to their contribution to assigning the correct sense.
Supervised Word Sense Disambiguation
2016
Word Sense Disambiguation (WSD) is the task of selecting the correct sense of a word in a given context. In this paper we survey the various approaches for WSD: knowledge-based, supervised, semi-supervised and unsupervised methods. The paper then elaborates on the supervised methods used for WSD. The methods compared in this paper are: Decision Trees, Decision Lists, Support Vector Machines, Neural Networks, Naïve Bayes methods and Exemplar learning.
An Insight into Word Sense Disambiguation Techniques
International Journal of Computer Applications, 2015
This paper presents various techniques used in the area of Word Sense Disambiguation (WSD). There are a number of techniques, such as: knowledge-based approaches, which use the knowledge encoded in lexical resources; supervised machine learning methods, in which the classifier is made to learn from a previously semantically annotated corpus; and unsupervised approaches, which cluster occurrences of words. There are also semi-supervised approaches, which use a semi-annotated corpus as reference data along with unlabeled data.
A Simple Approach to Building Ensembles of Naive Bayesian Classifiers for Word Sense Disambiguation
Computing Research Repository, 2000
This paper presents a corpus-based approach to word sense disambiguation that builds an ensemble of Naive Bayesian classifiers, each of which is based on lexical features that represent co-occurring words in varying sized windows of context. Despite the simplicity of this approach, empirical results disambiguating the widely studied nouns line and interest show that such an ensemble achieves accuracy rivaling the best previously published results.
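The ensemble idea in this abstract, one Naive Bayes classifier per context-window size combined by voting, might be sketched as below. The interface and toy data are assumptions for illustration, not the paper's code.

```python
import math
from collections import Counter, defaultdict

def window_features(tokens, i, w):
    # co-occurrence features within +/- w tokens of position i
    return {t for j, t in enumerate(tokens) if j != i and abs(j - i) <= w}

class NB:
    """Minimal Naive Bayes over binary word features (illustrative only)."""
    def __init__(self):
        self.sc, self.fc, self.vocab = Counter(), defaultdict(Counter), set()

    def fit(self, data):                  # data: [(feature_set, sense)]
        for feats, s in data:
            self.sc[s] += 1
            for f in feats:
                self.fc[s][f] += 1
                self.vocab.add(f)
        return self

    def predict(self, feats):
        tot, V = sum(self.sc.values()), len(self.vocab)
        def lp(s):                        # smoothed log posterior (up to a constant)
            n = self.sc[s]
            return math.log(n / tot) + sum(
                math.log((self.fc[s][f] + 1) / (n + V)) for f in feats)
        return max(self.sc, key=lp)

def ensemble_predict(tagged, tokens, i, windows=(1, 2, 5)):
    """Train one classifier per window size and take a majority vote.
    tagged: [(tokens, target_index, sense)] sense-annotated examples."""
    votes = Counter()
    for w in windows:
        data = [(window_features(ts, ti, w), s) for ts, ti, s in tagged]
        votes[NB().fit(data).predict(window_features(tokens, i, w))] += 1
    return votes.most_common(1)[0][0]
```

Because each member sees a different slice of context, the classifiers make partly independent errors, which is what lets a simple majority vote outperform any single window size.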
A Naïve Bayes Approach for Word Sense Disambiguation
Word sense disambiguation (WSD) is the task of automatically selecting the correct sense of a word given a context, and it helps in solving many ambiguity problems inherent in all natural languages. Statistical Natural Language Processing (NLP), which is based on probabilistic, stochastic and statistical methods, has been used to solve many NLP problems. The Naive Bayes algorithm, one of the supervised learning techniques, has worked well in many classification problems. In the present work, the WSD task of disambiguating the senses of different words from the standard corpora available in the 1998 SENSEVAL Word Sense Disambiguation shared task is performed by applying the Naïve Bayes machine learning technique. It is observed that senses of ambiguous words with fewer parts of speech are disambiguated more accurately. Another key observation is that words with fewer candidate senses are more likely to be disambiguated with the correct sense.

I. INTRODUCTION

Ambiguity in word senses exists inherently in all natural languages used by humans. Every language has many words that carry more than one meaning. For example, the word "chair" has one sense that means a piece of furniture, and another sense that means a person chairing, say, a session. We therefore need some context to select the correct sense in a given situation. Automatically selecting the correct sense given a context is at the core of solving many ambiguity problems. Word sense disambiguation (WSD) is the task of automatically determining which of the senses of an ambiguous (target) word is intended in a specific use of the word, taking into consideration the context of the word's use [1,2]. Accurate and reliable word sense disambiguation has long been a goal of the natural language community.
The motivation and belief behind performing word sense disambiguation is that many tasks performed under the umbrella of NLP benefit greatly from properly disambiguated word senses. Statistical NLP, a special approach to NLP based on probabilistic, stochastic and statistical methods, uses machine learning algorithms to solve many NLP problems. As a branch of artificial intelligence, machine learning involves computationally learning patterns from given data, and applying the patterns learned earlier to new or unseen data. Machine learning is defined by Tom M. Mitchell as: "A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E" [3]. Learning algorithms can generally be classified into three types: supervised learning, semi-supervised learning and unsupervised learning. Supervised learning is based on the idea of studying the features of positive and negative examples over a large collection of annotated corpora. Semi-supervised learning uses both labeled and unlabeled data in the learning process, to reduce the dependence on training data. In unsupervised learning, decisions are made on the basis of unlabeled data alone; its methods are mostly built upon clustering techniques, similarity-based functions and distribution statistics. For automatic WSD, supervised learning is one of the most successful approaches.
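The "chair" example from this paper's introduction can be made concrete with a toy supervised disambiguator that scores each sense by the overlap between the test context and sense-tagged training contexts. All data and names here are illustrative, not taken from the paper.

```python
from collections import Counter, defaultdict

# Toy disambiguator for the 'chair' example: score each sense by how
# often the context words co-occurred with it in sense-tagged training
# data. The training examples are invented for illustration.
cooc = defaultdict(Counter)  # sense -> context-word counts
training = [
    ("furniture", ["sat", "on", "the", "wooden"]),
    ("furniture", ["comfortable", "wooden", "leg"]),
    ("person",    ["session", "committee", "meeting"]),
    ("person",    ["opened", "the", "session"]),
]
for sense, context in training:
    cooc[sense].update(context)

def disambiguate(context):
    # pick the sense whose training contexts overlap most with this one
    return max(cooc, key=lambda s: sum(cooc[s][w] for w in context))

print(disambiguate(["the", "committee", "session"]))  # person
```

Even this crude overlap count captures the core supervised intuition: the annotated corpus supplies the experience E, and accuracy on held-out contexts is the performance measure P that should improve as more tagged examples are added.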
Approaches for Word Sense Disambiguation - A Survey
International Journal of Recent Technology and Engineering, 2014
Word sense disambiguation is a technique in the field of natural language processing whose main task is to find the correct sense in which a word occurs in a particular context. It is of vital help to applications such as question answering, machine translation, text summarization, text classification and information retrieval. This has resulted in considerable interest in machine learning approaches that classify word senses automatically. The main motivation behind word sense disambiguation is to allow users to make full use of the available technologies, because ambiguities present in any language pose great difficulty for information technology: words in human language can be interpreted in more than one way depending on the context in which they occur. In this paper we put forward a survey of the supervised, unsupervised and knowledge-based approaches and algorithms available for word sense disambiguation (WSD). Index Terms: machine readable dictionary, machine translation, natural language processing, WordNet, word sense disambiguation.