Query Expansion Research Papers - Academia.edu

The study reported here investigated the query expansion behavior of end-users interacting with a thesaurus-enhanced search system on the Web. Two groups, namely academic staff and postgraduate students, were recruited into this study. Data were collected from 90 searches performed by 30 users using the OVID interface to the CAB abstracts database. Data-gathering techniques included questionnaires, screen capturing software, and interviews. The results presented here relate to issues of search-topic and search-term characteristics, number and types of expanded queries, usefulness of thesaurus terms, and behavioral differences between academic staff and postgraduate students in their interaction with the system. The key conclusions drawn were that (a) academic staff chose narrower and synonymous terms than did postgraduate students, who generally selected broader and related terms; (b) topic complexity affected users' interaction with the thesaurus in that complex topics required more query expansion and search term selection; (c) users' prior topic-search experience appeared to have a significant effect on their selection and evaluation of thesaurus terms; (d) in 50% of the searches where additional terms were suggested from the thesaurus, users stated that they had not been aware of the terms at the beginning of the search; this observation was particularly noticeable in the case of postgraduate students.
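
As a rough illustration of the thesaurus relations the study tracked (synonymous, narrower, broader, and related terms), here is a minimal sketch of thesaurus-based query expansion; the thesaurus entries and term choices are hypothetical and have no connection to the OVID interface or the CAB abstracts database used in the study.

```python
# Minimal sketch of thesaurus-based query expansion.
# The in-memory thesaurus below is hypothetical, for illustration only.
THESAURUS = {
    "irrigation": {
        "synonyms": ["watering"],
        "narrower": ["drip irrigation", "sprinkler irrigation"],
        "broader": ["water management"],
        "related": ["soil moisture"],
    },
}

def expand_query(terms, relations=("synonyms", "narrower")):
    """Return the original terms plus thesaurus terms for the chosen relations."""
    expanded = list(terms)
    for term in terms:
        entry = THESAURUS.get(term.lower(), {})
        for rel in relations:
            expanded.extend(t for t in entry.get(rel, []) if t not in expanded)
    return expanded

# The study found academic staff leaned toward synonymous/narrower terms,
# postgraduates toward broader/related ones:
print(expand_query(["irrigation"], relations=("synonyms", "narrower")))
print(expand_query(["irrigation"], relations=("broader", "related")))
```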

In this article we present a series of Natural Language Processing techniques applied to term normalization in Textual Information Retrieval. The aim of these techniques is to handle the phenomena of morphological and lexical linguistic variation. Specifically, we explore the use of lemmatization, its combined use with stemming, and query expansion by means of synonymy thresholds.
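
To make the three normalization steps concrete, a minimal sketch follows, written in English rather than Spanish and using NLTK (WordNet lemmatizer, Snowball stemmer, WordNet similarity); the 0.8 synonymy threshold and the example terms are illustrative assumptions, not the paper's actual configuration.

```python
# Sketch of lemmatization, stemming, and synonymy-threshold query expansion.
# Assumes NLTK with the 'wordnet' corpus downloaded.
from nltk.stem import WordNetLemmatizer, SnowballStemmer
from nltk.corpus import wordnet as wn

lemmatizer = WordNetLemmatizer()
stemmer = SnowballStemmer("english")

def normalize(term):
    lemma = lemmatizer.lemmatize(term)   # handle morphological variation
    return stemmer.stem(lemma)           # further conflation via stemming

def expand_by_synonymy(term, threshold=0.8):
    """Add lemmas from synsets sufficiently similar to the term's main sense."""
    synsets = wn.synsets(term)
    if not synsets:
        return [term]
    main = synsets[0]
    expanded = {term}
    for syn in synsets:
        sim = main.wup_similarity(syn)
        if sim is not None and sim >= threshold:
            expanded.update(l.name().replace("_", " ") for l in syn.lemmas())
    return sorted(expanded)

print(normalize("retrieving"))
print(expand_by_synonymy("retrieval"))
```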

Recently, the development of business has created many opportunities and innovations, such as online shopping. An online shopping system makes the interaction between seller and customer easier, but a problem still exists: the seller cannot respond immediately when the customer sends questions. A chatbot system can therefore be a solution for the seller, allowing them to deliver a response to a question quickly. At present, the furniture retail shop's processing of raw materials, sales, suppliers, stock, and receipts is done manually with pen and paper, which takes a large amount of time and causes strain and struggle for the shop. It is therefore important to design a retail shop chatbot to speed up the work and make the system easy to use; by reducing these drawbacks, we create a secure Windows application for handling the furniture retail shop's processing.
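
A minimal sketch of the retrieval core such a chatbot could use is given below; it assumes a hypothetical FAQ for the furniture shop and a simple TF-IDF matcher (scikit-learn), and is not the Windows application described in the paper.

```python
# Sketch of a retrieval-based FAQ chatbot: match the customer's question to
# the most similar stored question and return its answer. The FAQ is hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

faq = {
    "What are your delivery charges?": "Delivery is free for orders above $200.",
    "Do you have this sofa in stock?": "Stock is updated daily; please share the product code.",
    "How can I return an item?": "Items can be returned within 14 days with the receipt.",
}

questions = list(faq)
vectorizer = TfidfVectorizer()
question_matrix = vectorizer.fit_transform(questions)

def answer(user_question):
    query_vec = vectorizer.transform([user_question])
    scores = cosine_similarity(query_vec, question_matrix)[0]
    best = scores.argmax()
    if scores[best] == 0:
        return "Sorry, a staff member will reply shortly."
    return faq[questions[best]]

print(answer("when will my sofa be delivered and how much does it cost?"))
```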

In spite of the great efforts that have been made to present systems that support users' need for answers from the Holy Quran, current systems based on English translations of the Quran still require further investigation to improve the process of retrieving the correct verse for a user's question. Islamic terms differ from one document to another and may be unfamiliar to the user. Thus, the need emerged for a Question Answering System (QAS) that retrieves the exact verse based on a semantic search of the Holy Quran. The main objective of this research is to improve the efficiency of information retrieval from the Holy Quran through a QAS that returns an accurate answer to the user's question, classifying the verses with a Neural Network (NN) technique according to the purpose of the verses' contents in order to match questions with verses. This research used the most popular English translation of the Quran, by Abdullah Yusuf Ali, as the data set. The QAS tackles these problems by expanding the question using WordNet and drawing on a collection of Islamic terms, in order to avoid mismatches between the terminology of the translation and that of the question. In addition, the QAS classifies the Al-Baqarah surah into two classes, Fasting and Pilgrimage, using the NN classifier, to reduce the retrieval of irrelevant verses, since the users' questions concern Fasting and Pilgrimage. The QAS then retrieves the verses relevant to the question using an N-gram technique and ranks the retrieved verses by their similarity score to satisfy the user's need. Evaluated with the F-measure, the NN classification reached approximately 90%, and the proposed approach for the entire QAS reached approximately 87%. This demonstrates that the QAS provides a promising outcome in this critical field.
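
The sketch below illustrates two of the components described above, WordNet-based question expansion and N-gram overlap ranking of candidate verses; the verses, the bigram size, and the two-senses-per-word cutoff are illustrative assumptions, and the NN classifier and the Yusuf Ali data set are not included.

```python
# WordNet-based question expansion plus N-gram overlap ranking of verses.
# Assumes NLTK with the 'wordnet' corpus downloaded; verses are placeholders.
from nltk.corpus import wordnet as wn

def expand_question(question):
    words = question.lower().split()
    expanded = set(words)
    for w in words:
        for syn in wn.synsets(w)[:2]:        # take a couple of senses per word
            expanded.update(l.name().replace("_", " ").lower() for l in syn.lemmas())
    return expanded

def word_ngrams(text, n=2):
    tokens = text.lower().split()
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def rank_verses(question, verses, n=2):
    expanded = expand_question(question)
    q_grams = word_ngrams(question, n)
    scored = []
    for verse in verses:
        tokens = set(verse.lower().split())
        # expanded-term overlap plus word-bigram overlap with the raw question
        score = len(expanded & tokens) + len(q_grams & word_ngrams(verse, n))
        scored.append((score, verse))
    return sorted(scored, key=lambda s: s[0], reverse=True)

verses = [
    "fasting is prescribed to you as it was prescribed to those before you",
    "and complete the hajj or umrah in the service of allah",
]
print(rank_verses("what are the rules of fasting", verses))
```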

Integrating Different Strategies for Cross-Language Information Retrieval in the MIETTA Project. Paul Buitelaar, Klaus Netter, Feiyu Xu. DFKI Language Technology Lab, Stuhlsatzenhausweg 3, 66123 Saarbrücken, Germany. {paulb, netter, feiyu}@dfki.de. ABSTRACT: In this paper ...

This paper reports on a prototype multilingual query expansion system relying on LMF-compliant lexical resources. The system is one of the deliverables of a three-year project aimed at establishing an international standard for language resources that is applicable to Asian languages. Our important contributions to ISO 24613, the Lexical Markup Framework (LMF) standard, include its robustness in dealing with Asian languages and its applicability to cross-lingual query tasks, as illustrated by the prototype introduced in this paper.
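
As a rough sketch of the cross-lingual expansion idea (not the project's prototype), the snippet below expands a query term with equivalents drawn from a toy lexicon; the dictionary merely stands in for an LMF-compliant (ISO 24613) lexical resource, whose XML parsing is not shown.

```python
# Toy cross-lingual query expansion. The lexicon is a hypothetical stand-in
# for an LMF-compliant lexical resource.
LEXICON = {
    "computer": {"ja": ["コンピュータ", "計算機"], "th": ["คอมพิวเตอร์"]},
}

def cross_lingual_expand(term, target_langs):
    entry = LEXICON.get(term.lower(), {})
    expanded = [term]
    for lang in target_langs:
        expanded.extend(entry.get(lang, []))
    return expanded

print(cross_lingual_expand("computer", ["ja", "th"]))
```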

This paper presents an original methodology for question answering. We noticed that query expansion is often incorrect because of a poor understanding of the question; however, automatically understanding an utterance correctly depends on the length of the context, and questions are often short. This methodology proposes to analyse the documents and to construct an informative structure from the results of the analysis and from a semantic text expansion. The linguistic analysis identifies words (tokenization and morphological analysis), links between words (syntactic analysis), and word senses (semantic disambiguation). The text expansion adds to each word the synonyms matching its sense and replaces the words in the utterances by derivatives, modifying the syntactic schema if necessary. In this way, whatever the enrichment may be, the text keeps the same meaning, but each piece of information matches many realisations. The questioning method consists in constructing a local informative structure...
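
A minimal sketch of the sense-preserving expansion step is shown below, using NLTK's simplified Lesk disambiguation and WordNet synonyms; the syntactic analysis and derivative substitution described above are omitted, and the example sentence is an assumption.

```python
# Sense-preserving text expansion: disambiguate each word (simplified Lesk)
# and add only synonyms of that sense, so the expanded text keeps its meaning.
# Assumes NLTK with the 'punkt' and 'wordnet' data downloaded.
from nltk.corpus import wordnet as wn
from nltk.wsd import lesk
from nltk.tokenize import word_tokenize

def expand_text(sentence):
    tokens = word_tokenize(sentence)
    expansions = {}
    for tok in tokens:
        sense = lesk(tokens, tok)            # word sense disambiguation
        if sense is not None:
            syns = {l.name().replace("_", " ") for l in sense.lemmas()} - {tok}
            if syns:
                expansions[tok] = sorted(syns)
    return expansions

print(expand_text("The bank approved the loan"))
```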

Medical documentation is central in health care, as it constitutes the main means of communication between care providers. However, there is a gap to bridge between storing information and extracting the relevant underlying knowledge. We believe natural language processing (NLP) is the best solution to handle such a large amount of textual information. In this paper we describe the construction of a semantic tagset for medical document indexing purposes. Rather than attempting to produce a home-made tagset, we decided to use, as far as possible, standard medicine resources. This step has led us to choose UMLS hierarchical classes as a basis for our tagset. We also show that semantic tagging not only provides a basis for disambiguation between senses, but is also useful in the query expansion process of the retrieval system. We finally focus on assessing the results of the semantic tagger.
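
The snippet below sketches how such semantic tags can feed query expansion: terms sharing a semantic type are added to the query. The term-to-type mapping is a hypothetical hand-made stand-in; real UMLS access requires a licensed installation or the UTS API and is not shown.

```python
# Semantic-tag-driven query expansion. The mapping of terms to UMLS-style
# semantic types below is hypothetical, for illustration only.
SEMANTIC_TAGS = {
    "myocardial infarction": "Disease or Syndrome",
    "heart attack": "Disease or Syndrome",
    "aspirin": "Pharmacologic Substance",
}

def expand_with_same_type(term):
    tag = SEMANTIC_TAGS.get(term.lower())
    if tag is None:
        return [term]
    return [t for t, semtype in SEMANTIC_TAGS.items() if semtype == tag]

print(expand_with_same_type("heart attack"))
# -> ['myocardial infarction', 'heart attack']
```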

During software maintenance, developers usually deal with a significant number of software change requests. As a part of this, they often formulate an initial query from the request texts, and then attempt to map the concepts discussed in the request to relevant source code locations in the software system (a.k.a., concept location). Unfortunately, studies suggest that they often perform poorly in choosing the right search terms for a change task. In this paper, we propose a novel technique, ACER, that takes an initial query, identifies appropriate search terms from the source code using a novel term weight, CodeRank, and then suggests effective reformulations of the initial query by exploiting the source document structures, query quality analysis, and machine learning. Experiments with 1,675 baseline queries from eight subject systems report that our technique can improve 71% of the baseline queries, which is highly promising. Comparison with five closely related existing techniques in query reformulation not only validates our empirical findings but also demonstrates the superiority of our technique.
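
The sketch below is not the authors' CodeRank, but it illustrates the general kind of graph-based term weighting it builds on: source-code terms are ranked by PageRank over a co-occurrence graph (here via networkx), and the top-ranked terms can then be appended to the initial query; the window size and example code are assumptions.

```python
# Graph-based term weighting for query reformulation: build a term
# co-occurrence graph from source code and rank terms with PageRank.
import re
import networkx as nx

def rank_terms(code_lines, window=2, top_k=5):
    graph = nx.Graph()
    for line in code_lines:
        tokens = re.findall(r"[A-Za-z_]\w+", line)   # crude identifier extraction
        for i, tok in enumerate(tokens):
            for other in tokens[i + 1:i + 1 + window]:
                if tok != other:
                    graph.add_edge(tok, other)
    scores = nx.pagerank(graph)
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

code = [
    "void saveUserProfile(User user)",
    "User loadUserProfile(String userId)",
    "profileCache.put(userId, user)",
]
# Top-ranked terms could be appended to the developer's initial query.
print(rank_terms(code))
```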

Software developers frequently issue generic natural language queries for code search while using code search engines (e.g., GitHub native search, Krugle). Such queries often do not lead to any relevant results due to vocabulary mismatch problems. In this paper, we propose a novel technique that automatically identifies relevant and specific API classes from the Stack Overflow Q&A site for a programming task written as a natural language query, and then reformulates the query for improved code search. We first collect candidate API classes from Stack Overflow using pseudo-relevance feedback and two term weighting algorithms, and then rank the candidates using Borda count and the semantic proximity between query keywords and the API classes. The semantic proximity has been determined by an analysis of 1.3 million questions and answers of Stack Overflow. Experiments using 310 code search queries report that our technique suggests relevant API classes with 48% precision and 58% recall, which are 32% and 48% higher, respectively, than those of the state-of-the-art. Comparisons with two state-of-the-art studies and three popular search engines (Google, Stack Overflow, and GitHub native search) report that our reformulated queries (1) outperform the queries of the state-of-the-art, and (2) significantly improve the code search results provided by these contemporary search engines.
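
The Borda count fusion step can be sketched in a few lines; the two input rankings and the API class names below are illustrative only, not the paper's actual term-weighting outputs.

```python
# Borda count fusion: combine two candidate rankings of API classes into one,
# giving each item points according to its position in each ranking.
def borda_fuse(*rankings):
    scores = {}
    for ranking in rankings:
        n = len(ranking)
        for position, item in enumerate(ranking):
            scores[item] = scores.get(item, 0) + (n - position)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical rankings from two term-weighting algorithms:
by_weight_a = ["HttpURLConnection", "URL", "BufferedReader"]
by_weight_b = ["URL", "HttpURLConnection", "InputStreamReader"]
print(borda_fuse(by_weight_a, by_weight_b))   # fused ranking of API classes
```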