A Survey on Performance Evaluation Measures for Information Retrieval System
Related papers
IRJET-A SURVEY ON PERFORMANCE EVALUATION MEASURES FOR INFORMATION RETRIEVAL SYSTEM
The World Wide Web offers users a wealth of information. To make searching effective, tools called search engines were introduced. These engines crawl the web for a given user query and display results ranked by relevance score. Different search engines employ different ranking algorithms, and new ranking algorithms are frequently introduced by researchers. Several metrics are available to assess the quality of the ranked web pages. This paper presents a survey of the evaluation measures available for information retrieval systems and search engines, with illustrations provided for each metric.
Towards information retrieval measures for evaluation of Web search engines
Unpublished manuscript, 1999
Information retrieval on the Web is very different from retrieval in traditional indexed databases. This difference arises from: the high degree of dynamism of the Web; its hyper-linked character; the absence of a controlled indexing vocabulary; the heterogeneity of document types and authoring styles; and the easy access that different types of users have to it. Since Web retrieval is substantially different from traditional information retrieval, new or revised evaluative measures are required to assess retrieval performance using Web search engines. This paper suggests a number of different measures to evaluate information retrieval from the Web. The motivation behind each of these measures is presented, along with their descriptions and definitions. In the second part of the paper, application of these measures is illustrated in the evaluation of three search engines. The purpose of this paper is not to give a definitive prescription for evaluating information retrieval from the Web, but rather to present some examples and to initiate a wider discussion of how to enhance measures of Web search performance.
Performance Evaluation of Selected Search Engines on the Internet
2008
Search engines have become an integral part of daily internet usage and are the first stop for web users looking for a product. Information retrieval may be viewed as the problem of classifying items into one of two classes, interesting and uninteresting. A natural performance metric in this context is classification accuracy, defined as the fraction of the system's interesting/uninteresting predictions that agree with the user's assessments. The field of information retrieval also has two classical evaluation metrics: precision, the fraction of the items retrieved by the system that are interesting to the user, and recall, the fraction of the items of interest to the user that are retrieved by the system. Measuring the retrieval effectiveness of World Wide Web search engines is costly because of the human relevance judgments involved. However, it is important for both business enterprises and individuals to know which Web search engines are most effective, since such engines help their users find a higher number of relevant pages with less effort, and this information can serve several practical purposes. This study evaluates the performance of three Web search engines and proposes a set of measurements for evaluating Web search engine performance.
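The three measures named in this abstract can be stated concretely. The sketch below (a toy illustration with made-up document sets, not code from the paper) computes classification accuracy, precision, and recall from binary relevance judgments:

```python
# Toy illustration of the three metrics defined in the abstract.
# retrieved/relevant are sets of document ids; collection is the whole corpus.

def evaluate(retrieved, relevant, collection):
    tp = len(retrieved & relevant)               # retrieved and interesting
    precision = tp / len(retrieved) if retrieved else 0.0
    recall = tp / len(relevant) if relevant else 0.0
    # Accuracy: fraction of interesting/uninteresting predictions that
    # agree with the user's assessments (true positives + true negatives).
    correct = tp + len((collection - retrieved) & (collection - relevant))
    accuracy = correct / len(collection)
    return precision, recall, accuracy

docs = set(range(10))                  # hypothetical 10-document corpus
p, r, a = evaluate({0, 1, 2, 3}, {1, 2, 5}, docs)
print(p, r, a)   # 0.5  0.666...  0.7
```

Note how the two families of measures differ: precision and recall ignore the uninteresting documents that were correctly left unretrieved, while accuracy rewards them, which is why accuracy can look high even when recall is poor.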
DESIDOC Bulletin of Information Technology, 2005
The volume of the World Wide Web (WWW) is increasing enormously due to a worldwide move to migrate information to online sources. To find information on the WWW, users rely on search engines, which, when presented with queries, return lists of web pages ranked by estimated relevance. Because of the abundance of information on the web, search engines generally return millions of pages, but user studies indicate that a common user browses only the top 10 or 20 documents. Appearing among those top documents is therefore all-important, and web authors increasingly rely on underhand techniques to ensure their sites get seen, which in turn affects the performance of search engines. The existing measures for evaluating these systems' performance are not adequate in the current world of highly interactive end-user systems. This study proposes a metric, 'Ranked Precision', to evaluate the performance of search engines.
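The observation that users rarely look past the top 10 or 20 results motivates cutoff-based measures. The paper's own 'Ranked Precision' definition is not reproduced here; the sketch below shows the standard precision-at-k cutoff that such rank-aware metrics build on (document ids and the relevant set are invented for illustration):

```python
# Precision at cutoff k: fraction of the top-k ranked results that are
# relevant. Captures the fact that users mostly inspect the first page.

def precision_at_k(ranked_ids, relevant, k=10):
    top = ranked_ids[:k]
    return sum(1 for d in top if d in relevant) / k

ranking = ["d3", "d7", "d1", "d9", "d2", "d5", "d8", "d4", "d6", "d0"]
print(precision_at_k(ranking, {"d3", "d1", "d2", "d4"}, k=10))   # 0.4
```

A rank-sensitive variant would additionally weight hits near the top more heavily, which is the intuition behind metrics proposed for interactive end-user systems.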
IOSR Journal of Engineering (IOSRJEN) Performance Evaluation of Selected Search Engines
Automatic performance evaluation of Web search engines
Measuring the information retrieval effectiveness of World Wide Web search engines is costly because of the human relevance judgments involved. However, it is important for both business enterprises and individuals to know which Web search engines are most effective, since such engines help their users find a higher number of relevant Web pages with less effort, and this information can serve several practical purposes. This study introduces an automatic Web search engine evaluation method as an efficient and effective assessment tool for such systems. Experiments based on eight Web search engines, 25 queries, and binary user relevance judgments show that the method provides results consistent with human-based evaluations, and that the observed consistencies are statistically significant. This indicates that the new method can be successfully used in the evaluation of Web search engines.
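One common way to remove the human from the loop, shown below as an illustrative sketch only (the paper's actual method may differ in its details), is to derive pseudo-relevance judgments from agreement among engines: a document returned by several engines for the same query is treated as relevant, and each engine is then scored against those automatic judgments:

```python
from collections import Counter

# Automatic (human-free) evaluation sketch: documents returned by at
# least `min_votes` engines are treated as pseudo-relevant.

def pseudo_relevant(results_per_engine, min_votes=2):
    votes = Counter(doc for results in results_per_engine.values()
                    for doc in set(results))
    return {doc for doc, n in votes.items() if n >= min_votes}

def precision(retrieved, relevant):
    return sum(1 for d in retrieved if d in relevant) / len(retrieved)

# Invented example result lists for one query across three engines.
results = {
    "engineA": ["u1", "u2", "u3"],
    "engineB": ["u2", "u4", "u1"],
    "engineC": ["u5", "u2", "u6"],
}
judged = pseudo_relevant(results)              # {"u1", "u2"}
print({e: precision(r, judged) for e, r in results.items()})
```

The study's key empirical claim is that rankings of engines produced this way agree, to a statistically significant degree, with rankings produced from human binary judgments.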
Effective Tool for Exploring Web: An Evaluation of Search Engines
Library Philosophy and Practice, 2019
Evaluation of search engines is necessary to check their retrieval performance and to differentiate one search engine from another. The ability to retrieve and rank relevant result lists can be assessed in two ways: human-based methods, in which evaluators manually judge the significance of the returned results, are time consuming and expensive; automatic methods instead apply techniques such as retrieval measures to assess the performance of search engines.
EVALUATION OF INFORMATION RETRIEVAL SYSTEMS USING VARIOUS MEASURES
Our world revolves around technology and information. From the computer on a desk to the smartphone carried everywhere, the use of technology to aid human life has increased enormously. This produces massive amounts of data, whether files belonging to an organization or a person's heart rate, and all of it is stored. The main challenge is to retrieve information from this data, and user-specific information retrieval is also needed. Information retrieval systems are among the most used applications today, ranging from a search engine answering a given query to intelligently analyzing and retrieving accurate details of a particular disease. Along with predefined retrieval items, a user can give a new query to the system and relevant information will be retrieved. Since the usage is so wide, evaluating such systems becomes a priority. Federated search is an information retrieval technology that searches multiple resources simultaneously and aggregates the results received from the search engines for presentation to the user; it provides data for numerous queries and search engines. This paper discusses various applications of information retrieval systems, followed by different approaches used for their evaluation. The dataset used is the Federated Web Search track of TREC 2014 (the FedWeb Greatest Hits collection), which allows combining the results of multiple search engines. The evaluation methods and their results are provided.
Some Empirical Research on the Performance of Internet Search Engines
This paper describes the IRT project (Internet / Information Retrieval Tools). The basic goal of IRT is to advise users of Internet search engines in retrieving information from the free public-access part of the Internet. To achieve this, IRT has developed a model to evaluate search engines, which is described here. Evaluation criteria refer to functionality: search options, presentation characteristics, and indexing characteristics (which elements of a Web document are indexed?). The consistency of retrieval through search engines is also evaluated. This model was tested in the period October-December 1998 on six of the major search engines. We found many differences among Internet indexes in their functionality, as well as in their consistency and reliability.
EVALUATION AND PERFORMANCE OF WORLD WIDE WEB SEARCH ENGINES: A COMPARATIVE STUDY
The World Wide Web has revolutionized the way people access information and has opened up new possibilities in areas such as digital libraries, information dissemination and retrieval, education, commerce, entertainment, government, and health care. The amount of publicly available information on the web is increasing consistently at an unbelievable rate. The web is a gigantic digital library, a searchable 15-billion-word encyclopedia, and it has stimulated research and development in information retrieval and dissemination. The revolution the web has brought to information access is due not so much to the availability of information (huge amounts of information have long been available in libraries) as to the increased efficiency of accessing it, which can make previously impractical tasks practical.