A new architecture for web meta-search engines
Related papers
Intelligent Web Search via Personalizable Meta-search Agents
2002
This paper addresses several problems associated with the specification of Web searches, and with the retrieval, filtering, and rating of Web pages, in order to improve the relevance, precision and quality of search results. A methodology and architecture for an agent-based system, WebSifter, is presented that captures the semantics of a user's search intent, transforms the semantic query into target queries for existing search engines, and ranks the resulting page hits according to a user-specified, weighted-rating scheme. Users create personalized search taxonomies in the form of a Weighted Semantic-Taxonomy Tree. Consultation with a Web-based ontology agent refines the terms in the tree with positively- and negatively-related terms. The concepts represented in the tree are then transformed into queries processed by existing search engines. Each returned page is rated according to user-specified preferences such as semantic relevance, syntactic relevance, categorical match, and page popularity. Experimental results indicate that WebSifter improves the precision of web searches, thereby leading to more relevant results.
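The user-specified, weighted-rating scheme described in this abstract can be sketched as a weighted combination of per-component scores. The component names and weights below are illustrative assumptions, not WebSifter's actual formula:

```python
# Hypothetical sketch of a user-weighted rating scheme; the rating
# components (semantic, syntactic, category, popularity) follow the
# abstract, but the combination formula is an assumption.

def rate_page(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Combine per-component relevance scores into one weighted rating."""
    total_weight = sum(weights.values())
    return sum(weights[c] * scores.get(c, 0.0) for c in weights) / total_weight

# A user who cares most about semantic relevance:
user_weights = {"semantic": 0.4, "syntactic": 0.2, "category": 0.2, "popularity": 0.2}
hit = {"semantic": 0.9, "syntactic": 0.5, "category": 1.0, "popularity": 0.3}
print(round(rate_page(hit, user_weights), 3))  # → 0.72
```

Returned hits would then simply be sorted by this rating, highest first.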
Multi Agent Architecture for Search Engine
International Journal of Advanced Computer Science and Applications, 2016
The process of retrieving information is becoming more ambiguous day by day due to the huge collection of documents on the web. A single keyword produces millions of results for a given query, but these results often fail to meet user expectations. The results produced by traditional text search engines may be relevant or irrelevant; the underlying reason is that Web documents are HTML documents that contain no semantic descriptors or annotations. This paper proposes a multi-agent architecture that produces fewer but personalized results. The purpose of the research is to provide a platform for domain-specific personalized search. Personalized search delivers web pages in accordance with the user's interests and domain. The proposed architecture uses client-side as well as server-side personalization to give the user fewer but more accurate results. The multi-agent search engine architecture uses semantic descriptors to acquire knowledge about a given domain, leading to personalized search results. Semantic descriptors are represented as a network graph that holds the relationships of a given domain in the form of a hierarchy; this hierarchical classification is termed a taxonomy.
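A hierarchical taxonomy of semantic descriptors, as described here, can be sketched as a parent-to-children map that expands a query term into its whole subtree of domain terms. The taxonomy contents below are invented for illustration:

```python
# Minimal sketch of a semantic-descriptor taxonomy; the terms are
# made-up examples, not the paper's actual domain model.

TAXONOMY = {
    "programming": ["language", "compiler"],
    "language": ["python", "java"],
    "compiler": ["parser", "optimizer"],
}

def expand(term: str) -> list[str]:
    """Return the term plus all of its descendants in the taxonomy."""
    result = [term]
    for child in TAXONOMY.get(term, []):
        result.extend(expand(child))
    return result

print(expand("programming"))
# → ['programming', 'language', 'python', 'java', 'compiler', 'parser', 'optimizer']
```

Expanding the user's query through such a hierarchy is one plausible way to bias results toward the user's domain.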
Multi-agent based internet search
International Journal of Product Lifecycle Management, 2007
In this paper, we present a multi-agent system for searching the internet. The search for interesting documents on the internet is becoming more and more difficult. The main problem is not that we lack the information needed to build or use knowledge; the problem is that we have too much information, most of it irrelevant. Evidence of this is that inference engines are largely outpaced by the expansion of the internet, and the results of queries are often of very poor quality. To improve the search process, intelligent individual agents have been developed. We believe that multi-agent systems can give more accurate results than individual agents because, in a multi-agent system, agents can be specialised in different tasks and can share information. We implemented such an approach using an open multi-agent system containing personal assistants, library agents, filter agents, and search agents. In this paper, we study models of internet multi-agent systems and propose an architecture for intelligent web search. The application is based on a (slightly extended) AUML protocol to allow interoperability, and it yields more accurate results than standard search engines.
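The division of labour among agent roles described above can be sketched as a toy pipeline: a personal assistant fans the query out to specialised search agents and a filter agent cleans the merged results. The agents below are simulated stand-ins, not the paper's AUML-based implementation:

```python
# Toy sketch of agent roles (personal assistant, search agents, filter
# agent); the agents and their results are simulated for illustration.

def search_agent_a(query: str) -> list[str]:
    return [f"{query}-doc1", f"{query}-doc2"]

def search_agent_b(query: str) -> list[str]:
    return [f"{query}-doc2", f"{query}-doc3"]

def filter_agent(results: list[str]) -> list[str]:
    """Drop duplicate hits while preserving order."""
    seen, unique = set(), []
    for doc in results:
        if doc not in seen:
            seen.add(doc)
            unique.append(doc)
    return unique

def personal_assistant(query: str) -> list[str]:
    """Fan the query out to specialised search agents, then filter."""
    merged = search_agent_a(query) + search_agent_b(query)
    return filter_agent(merged)

print(personal_assistant("agents"))
# → ['agents-doc1', 'agents-doc2', 'agents-doc3']
```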
An Agent-Oriented Personalized Web Searching System
… Systems (AOIS-2002 at AAMAS '02), …, 2002
Web retrieval is now one of the most important issues in computer science, and we believe that applying multi-agent systems to this area is a promising approach. We introduce the Kodama system, which is being developed and in use at Kyushu University, as a multi-agent-based approach to building a distributed Information Retrieval (IR) system that lets users retrieve relevant distributed information from the Web. We report methods to agentify the Web and to cluster the agentified domain into communities. To investigate the performance of our system, we carried out several experiments in multiple Server Agent domains and developed a smart query-routing mechanism for routing the user's query. The results confirm that Web-page agentification, clustering and routing together retrieve more relevant information.
A semantic taxonomy-based personalizable meta-search agent
Proceedings of the Second International Conference on Web Information Systems Engineering
This paper addresses the problem of specifying, retrieving, filtering and rating Web searches so as to improve the relevance and quality of hits, based on the user's search intent and preferences. We present a methodology and architecture for an agent-based system, called WebSifter II, that captures the semantics of a user's decision-oriented search intent, transforms the semantic query into target queries for existing search engines, and then ranks the resulting page hits according to a user-specified weighted-rating scheme. Users create personalized search taxonomies via our Weighted Semantic-Taxonomy Tree. The terms in the tree can be refined by consulting a web taxonomy agent such as WordNet. The concepts represented in the tree are then transformed into a collection of queries processed by existing search engines. Each returned page is rated according to user-specified preferences such as semantic relevance, syntactic relevance, categorical match, page popularity and authority/hub rating.
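The step of transforming tree concepts into a collection of target queries can be sketched by turning each root-to-leaf path of a small taxonomy tree into one conjunctive query string. The tree contents and query format are illustrative assumptions, not WebSifter II's actual encoding:

```python
# Hedged sketch: a taxonomy tree as nested (term, children) tuples,
# with each root-to-leaf path emitted as one query. The terms and the
# space-joined query syntax are assumptions for illustration.

TREE = ("vehicle", [("car", [("sedan", []), ("coupe", [])]),
                    ("bike", [])])

def path_queries(node, prefix=()):
    """Yield one space-joined query per root-to-leaf path."""
    term, children = node
    path = prefix + (term,)
    if not children:
        yield " ".join(path)
    for child in children:
        yield from path_queries(child, path)

print(list(path_queries(TREE)))
# → ['vehicle car sedan', 'vehicle car coupe', 'vehicle bike']
```

Each generated query would then be dispatched to the underlying search engines, and the returned hits rated as the abstract describes.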
An Intelligent Meta Search Engine for Efficient Web Document Retrieval
In daily use of the internet, we face many difficulties when searching for information, due to the rapid growth of information resources. No single search engine can index the entire web. A Meta Search Engine is a solution to this: it submits the query to many other search engines and returns a summary of the results. The search results received are therefore an aggregate of multiple searches. This strategy gives a search broader scope than querying a single search engine, but the results are not always better, because the Meta Search Engine must use its own algorithm to choose the best results from the multiple search engines. In this paper we propose a new Meta Search Engine to overcome these drawbacks. It uses a new page-rank algorithm, called modified ranking, for ranking and optimizing the search results efficiently. It is a two-phase ranking algorithm that orders web pages based on their relevance and popularity. This Meta Search Engine is designed to produce more relevant results than traditional search engines.
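The abstract does not give the details of its modified-ranking algorithm, but a generic two-phase scheme over relevance and popularity might look like the following sketch, where phase one orders by relevance and phase two breaks ties by popularity. The scores and the tie-break rule are assumptions, not the paper's actual method:

```python
# Generic two-phase ranking sketch: primary order by relevance,
# popularity as tie-break. Scores are invented for illustration.

pages = [
    {"url": "a.html", "relevance": 0.8, "popularity": 0.2},
    {"url": "b.html", "relevance": 0.8, "popularity": 0.9},
    {"url": "c.html", "relevance": 0.5, "popularity": 1.0},
]

def two_phase_rank(pages):
    """Sort descending by (relevance, popularity)."""
    return sorted(pages, key=lambda p: (p["relevance"], p["popularity"]),
                  reverse=True)

print([p["url"] for p in two_phase_rank(pages)])
# → ['b.html', 'a.html', 'c.html']
```

Note that b.html outranks a.html despite equal relevance, because popularity decides the second phase.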
WebSifter II: A Personalizable Meta-Search Agent Based on Weighted Semantic Taxonomy Tree
2001
This paper addresses the problem of specifying, retrieving, filtering and rating Web searches so as to improve the relevance and quality of hits, based on the user's search intent and preferences. We present a methodology and architecture for an agent-based system, called WebSifter II, that captures the semantics of a user's decision-oriented search intent, transforms the semantic query into target queries for existing search engines, and then ranks the resulting page hits according to a user-specified weighted-rating scheme. Users create personalized search taxonomies via our Weighted Semantic-Taxonomy Tree. The terms in the tree can be refined by consulting a web taxonomy agent such as WordNet. The concepts represented in the tree are then transformed into a collection of queries processed by existing search engines. Each returned page is rated according to user-specified preferences such as semantic relevance, syntactic relevance, categorical match, page popularity and authority/hub rating.
Intelligent Web Agent for Search Engines
In this paper we review studies of the growth of the Internet and of technologies that are useful for information search and retrieval on the Web. Search engines aim to retrieve this information efficiently. We collected data on the Internet from several different sources, e.g., current as well as projected numbers of users, hosts, and Web sites. The trends cited by the sources are consistent and point to exponential growth in the past and in the coming decade. It is therefore not surprising that about 85% of Internet users surveyed claim to use search engines and search services to find specific information, yet users are not satisfied with the performance of the current generation of search engines, citing slow retrieval speed, communication delays, and poor quality of retrieved results. Web agents, programs acting autonomously on some task, are already present in the form of spiders, crawlers, and robots. Agents offer substantial benefits as well as hazards, and because of this their development must involve attention to technical detail. This paper illustrates the different types of agents (crawlers, robots, etc.) for mining the contents of the web in a methodical, automated manner, and discusses the use of a crawler to gather specific types of information from Web pages, such as harvesting e-mail addresses.
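The targeted-harvesting task the survey ends on can be sketched as a regular-expression pass over already-fetched page text. The pattern below is a simplified assumption and will not cover every valid address form:

```python
# Illustrative sketch of extracting e-mail addresses from fetched page
# text. The regex is deliberately simple and is an assumption, not a
# complete implementation of any address grammar.
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+(?:\.[\w-]+)+")

def harvest_emails(html: str) -> list[str]:
    """Return unique e-mail addresses in order of first appearance."""
    seen, found = set(), []
    for match in EMAIL_RE.findall(html):
        if match not in seen:
            seen.add(match)
            found.append(match)
    return found

page = '<a href="mailto:webmaster@example.org">contact</a> or admin@example.org'
print(harvest_emails(page))
# → ['webmaster@example.org', 'admin@example.org']
```

A full crawler would wrap this extraction step in fetch-and-follow logic over discovered links.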