Model-Based Estimation of Word Saliency in Text

Abstract

We investigate a generative latent variable model for model-based estimation of word saliency in text modelling and classification. The derived estimation algorithm infers the saliency of words with respect to the mixture modelling objective. Experimental results demonstrate that common stop-words, as well as other corpus-specific common words, are automatically down-weighted, which enhances our ability to capture the essential structure in the data while ignoring irrelevant details. As a classifier, our approach improves on the class prediction accuracy of the Naive Bayes classifier in all our experiments. Compared with a recent state-of-the-art text classification method, the Dirichlet Compound Multinomial model, we obtained improved results on two of the three benchmark text collections tested and comparable results on the third.
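The abstract describes inferring a per-word saliency weight within a mixture modelling objective, so that stop-words and other common words are automatically down-weighted. As an illustrative sketch only, and not the authors' algorithm, the following toy EM routine estimates per-word saliency by mixing class-conditional multinomials with a shared corpus-wide background distribution; the function name, the update rules, and the initialisation are all assumptions made for illustration:

```python
import numpy as np

def estimate_saliency(counts, labels, n_iter=50, eps=1e-12):
    """Toy EM for per-word saliency (hypothetical sketch, not the
    paper's exact algorithm).

    Each word token is modelled as drawn either from a class-specific
    multinomial (with probability rho_w, the word's saliency) or from
    a shared background multinomial (with probability 1 - rho_w).

    counts: (n_docs, n_words) term-count matrix
    labels: (n_docs,) integer class labels
    Returns rho: (n_words,) estimated saliency values in [0, 1].
    """
    n_docs, n_words = counts.shape
    classes = np.unique(labels)

    # Background distribution: corpus-wide relative word frequencies.
    q = counts.sum(axis=0) + eps
    q /= q.sum()

    # Init: smoothed class-conditional multinomials, uniform saliency.
    p = np.vstack([counts[labels == c].sum(axis=0) + 1.0 for c in classes])
    p /= p.sum(axis=1, keepdims=True)
    rho = np.full(n_words, 0.5)

    total = counts.sum(axis=0) + eps
    for _ in range(n_iter):
        resp_num = np.zeros(n_words)      # salient token mass per word
        p_new = np.full_like(p, 1.0)      # Laplace smoothing
        for k, c in enumerate(classes):
            cnt_c = counts[labels == c].sum(axis=0)
            # E-step: responsibility that a token of word w in class c
            # came from the class-specific (salient) component.
            r = rho * p[k] / (rho * p[k] + (1 - rho) * q + eps)
            resp_num += r * cnt_c
            p_new[k] += r * cnt_c
        # M-step: update saliency and class-conditional distributions.
        rho = resp_num / total
        p = p_new / p_new.sum(axis=1, keepdims=True)
    return rho
```

On a toy corpus where one word is roughly equally frequent in both classes and the remaining words are class-specific, the shared word tends to receive the lowest saliency, mirroring the automatic down-weighting of common words described in the abstract.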



Author information

Authors and Affiliations

  1. School of Computer Science, The University of Birmingham, Birmingham, B15 2TT, UK
    Xin Wang & Ata Kabán

Editor information

Editors and Affiliations

  1. Jozef Stefan Institute, Jamova 39, 1000, Ljubljana, Slovenia
    Ljupčo Todorovski
  2. University of Nova Gorica, Nova Gorica, Slovenia
    Nada Lavrač
  3. Meme Media Laboratory, Hokkaido University, Kita 13, Nishi 8, Kita-ku, Sapporo, 060-8628, Japan
    Klaus P. Jantke

Rights and permissions

© 2006 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Wang, X., Kabán, A. (2006). Model-Based Estimation of Word Saliency in Text. In: Todorovski, L., Lavrač, N., Jantke, K.P. (eds) Discovery Science. DS 2006. Lecture Notes in Computer Science, vol. 4265. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11893318_28

