The navigational power of Web browsers
International Colloquium on Automata, Languages and Programming, 2002
We consider the navigational power of Web browsers, such as Netscape Navigator, Internet Explorer, or Opera. To this end, we formally introduce the notion of a navigational problem. We investigate various characteristics of such problems that make them hard to solve with a small number of clicks.
Navigational complexity in web interactions
Proceedings of the 19th international conference on World wide web - WWW '10, 2010
As the web grows in size, interfaces and interactions across websites diverge, for differentiation and arguably for a better user experience. However, this size and diversity also impose a cognitive load on the user, who has to learn a new user interface for every new website she visits. Several studies have confirmed the importance of well-designed websites. In this paper, we propose a method for the quantitative evaluation of the navigational complexity of user interactions on the web. Our approach to quantifying interaction complexity exploits the modeling of the web as a graph and uses the information-theoretic definition of complexity. It enables us to measure the navigational complexity of web interaction in bits. Our approach is structural in nature and can be applied both to the traditional paradigm of web interaction (browsing) and to emerging paradigms of web interaction such as web widgets.
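One common information-theoretic reading of the above (a minimal sketch, not the authors' exact definition; the site graph, page names, and the assumption of equally likely links are hypothetical) is to charge log2(k) bits for each navigation step at which the user must choose among k links:

```python
import math

# Hypothetical site graph: page -> list of linked pages.
SITE = {
    "home":     ["products", "about", "blog", "contact"],
    "products": ["widgets", "gadgets"],
    "widgets":  ["widget-42"],
}

def path_complexity_bits(graph, path):
    """Sum log2(out-degree) over each page where the user must choose a link.

    Selecting one of k equally likely links costs log2(k) bits of decision
    information; pages with a single outgoing link cost nothing.
    """
    bits = 0.0
    for page in path[:-1]:
        choices = len(graph.get(page, []))
        if choices > 1:
            bits += math.log2(choices)
    return bits

print(path_complexity_bits(SITE, ["home", "products", "widgets", "widget-42"]))
# 2.0 + 1.0 + 0.0 = 3.0 bits
```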
Automating Web navigation with the WebVCR
Computer Networks, 2000
Recent developments in Web technology, such as the inclusion of scripting languages, frames, and the growth of dynamic content, have made the process of retrieving Web content more complicated, and sometimes tedious. For example, Web browsers do not provide a method for a user to bookmark a frame-based Web site once the user navigates within the initial frameset. Also, some sites, such as travel sites and online classifieds, require users to go through a sequence of steps and fill out a sequence of forms in order to access their data. Using the bookmark facilities implemented in all popular browsers, it is often not possible to create a shortcut to access such data, so these steps must be repeated manually every time the data is needed. However, hard-to-reach pages are often the best candidates for a shortcut, because significantly more effort is required to reach them than to reach a standard page with a well-defined URL.
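The "smart bookmark" idea described above can be pictured as a recorded sequence of navigation steps that is replayed on demand. The sketch below is not the WebVCR implementation; the URLs, form fields, and step format are invented for illustration:

```python
import urllib.parse
import urllib.request

# A "smart bookmark" as a recorded list of navigation steps. Each step is a
# plain dict; form-submission steps carry the field values the user typed.
recorded_steps = [
    {"action": "load", "url": "https://example.com/search"},
    {"action": "submit", "url": "https://example.com/search",
     "fields": {"from": "JFK", "to": "SFO", "date": "2024-06-01"}},
]

def replay(steps):
    """Replay a recorded navigation sequence and return the final page body."""
    body = b""
    for step in steps:
        if step["action"] == "load":
            with urllib.request.urlopen(step["url"]) as resp:
                body = resp.read()
        elif step["action"] == "submit":
            # Posting the recorded form fields reproduces the user's original step.
            data = urllib.parse.urlencode(step["fields"]).encode()
            with urllib.request.urlopen(step["url"], data=data) as resp:
                body = resp.read()
    return body
```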
Distributed computation of web queries using automata
Proceedings of the twenty-first ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems - PODS '02, 2002
We introduce and investigate a distributed computation model for querying the Web. Web queries are computed by interacting automata running at different nodes in the Web. The automata which we are concerned with can be viewed as register automata equipped with an additional communication component. We identify conditions necessary and sufficient for systems of automata to compute Web queries, and investigate the computational power of such systems.
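As a rough illustration of the model (a toy sketch under stated assumptions, not the paper's formal definitions), a register automaton can be pictured as a walker over a link graph that stores node identifiers in registers and deposits messages intended for automata running elsewhere; the graph, register names, and halting rule below are all hypothetical:

```python
# Hypothetical web graph: node id -> list of linked node ids.
WEB = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}

class RegisterAutomaton:
    """Toy register automaton walking a link graph.

    Registers hold node identifiers; the outbox stands in for the
    communication component, where results are left for other automata.
    """
    def __init__(self, start, registers):
        self.current = start
        self.registers = dict(registers)   # e.g. {"target": "d"}
        self.outbox = []

    def step(self):
        """Move along the first outgoing link; report when a register matches."""
        if self.current == self.registers.get("target"):
            self.outbox.append(("found", self.current))
            return False                    # accept and halt
        links = WEB.get(self.current, [])
        if not links:
            return False                    # dead end
        self.current = links[0]
        return True

automaton = RegisterAutomaton("a", {"target": "d"})
while automaton.step():
    pass
print(automaton.outbox)   # [('found', 'd')] once the walk reaches the target
```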
Off the beaten tracks: exploring three aspects of web navigation
Proceedings of the 15th …, 2006
This paper presents results of a long-term client-side Web usage study, updating previous studies that range in age from five to ten years. We focus on three aspects of Web navigation: changes in the distribution of navigation actions, speed of navigation and within-page navigation.
Automated browsing in AJAX websites
Data & Knowledge Engineering, 2011
Web automation applications are widely used for different purposes, such as B2B integration, automated testing of web applications, or technology and business watch. One crucial part of web automation applications is to easily generate and reproduce navigation sequences. This problem is especially complicated in the case of the new breed of AJAX-based websites. Although recently some tools have …
A system architecture for intelligent browsing on the Web
Decision Support Systems, 2000
Compared with traditional business operations, WWW-based commerce has many advantages, such as timeliness, worldwide communication, hyper-links, and multimedia. However, there are also several browsing problems, such as getting lost, spending a great amount of time browsing, and a lack of customized interactive features. To acquire a competitive advantage over countless other Web sites, it is critical to solve these browsing problems. The purpose of this paper is to systematically review all browsing problems and then propose a system architecture for intelligent browsing on the Web. In this architecture, we present five kinds of browsing agents: a recommendation agent, a new-contents agent, a search agent, a customized agent, and a personal-status agent. In order to support these agents, a user analyzer is provided to maintain the user profile by analyzing log files and CGI parameters, and a site monitor is provided to maintain the site database by monitoring all changes to the site. We also developed a prototype to demonstrate the feasibility of the proposed system architecture. Finally, due to time limitations, a laboratory experiment was carried out to verify only the value of the customized agent; its value was confirmed.
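A structural sketch of the supporting components described above may help; the class and method names and the placeholder recommendation logic are assumptions rather than the paper's design, and the remaining agents (new-contents, search, customized, personal-status) would follow the same pattern:

```python
class UserAnalyzer:
    """Maintains user profiles by analyzing server log entries and CGI parameters."""
    def __init__(self):
        self.profiles = {}

    def update(self, user_id, visited_url, cgi_params):
        self.profiles.setdefault(user_id, []).append((visited_url, cgi_params))

class SiteMonitor:
    """Maintains the site database by tracking changes to pages."""
    def __init__(self):
        self.site_db = {}

    def record_change(self, url, content_hash):
        self.site_db[url] = content_hash

class RecommendationAgent:
    """Suggests pages the user has not yet seen (placeholder logic only)."""
    def __init__(self, analyzer, monitor):
        self.analyzer, self.monitor = analyzer, monitor

    def recommend(self, user_id, limit=5):
        seen = {url for url, _ in self.analyzer.profiles.get(user_id, [])}
        return [url for url in self.monitor.site_db if url not in seen][:limit]
```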
ARCHIMIDES" An Intelligent Agent for Adaptive Personalized Navigation within a WEB Server
System Sciences, …, 1999
With the explosive growth of the Internet and the volume of information published on it, the search for and retrieval of desired information has become practically impossible if its source is not known in advance. This is why search engines have emerged, aiming to relieve the user from the "lost in hyperspace" feeling and from information overload. Imagine, however, cases where the result of a query to a search engine contains hundreds of thousands of URLs (Uniform Resource Locators). With such a number of URLs, search engines become inefficient in practice, considering that navigating through even a few dozen URLs is very tiring and time-consuming. Thus, instead of trying to address the information overload problem with search engines and robots (spiders), we believe that each server should itself facilitate the retrieval of desired information published in its own domain. In this paper we present Archimides, an intelligent agent that aims to provide intelligent, adaptive, and personalized navigation within a WEB server. Provided with a subset of the keywords that characterize the server's contents, Archimides undertakes the task of performing an intelligent information retrieval and then constructing a personalized version of the server, in the form of an index of pages that are of interest to the user. This index does not resemble what search engines return as the result of a query; it can instead be regarded as a much shorter version of the WEB server, with links that are dynamically inserted or deleted according to the user's interests, preferences, and behavior, which gives Archimides its adaptivity. As a result, the user navigates a WEB server whose contents are largely of interest to him or her, relieving the user from undesired information overload.
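A hedged sketch of the personalized-index idea follows; the page URLs, keyword sets, and overlap scoring are invented for illustration and are not Archimides' actual retrieval method:

```python
# Hypothetical server content: page URL -> keywords extracted from the page.
PAGES = {
    "/courses/ai.html":      {"agents", "learning", "search"},
    "/courses/db.html":      {"databases", "sql"},
    "/research/robots.html": {"agents", "robots"},
}

def personalized_index(user_keywords, min_overlap=1):
    """Return pages ranked by keyword overlap with the user's interest profile."""
    scored = []
    for url, keywords in PAGES.items():
        overlap = len(keywords & user_keywords)
        if overlap >= min_overlap:
            scored.append((overlap, url))
    return [url for overlap, url in sorted(scored, reverse=True)]

def adapt(user_keywords, clicked_url):
    """Grow the profile from clicks and rebuild the index (crude adaptivity)."""
    user_keywords |= PAGES.get(clicked_url, set())
    return personalized_index(user_keywords)

profile = {"agents"}
print(personalized_index(profile))          # pages mentioning "agents"
print(adapt(profile, "/courses/ai.html"))   # profile now also covers "learning", "search"
```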
Online web navigation assistant
Vestnik Udmurtskogo Universiteta. Matematika. Mekhanika. Komp'yuternye Nauki
The problem of finding relevant data while searching the internet represents a big challenge for web users due to the enormous amount of information available on the web. These difficulties are related to the well-known problem of information overload. In this work, we propose an online web assistant called OWNA. We developed a fully integrated framework for making recommendations in real time based on web usage mining techniques. Our work starts by preparing raw data and then extracting useful information that helps build a knowledge base and assigns specific weights to certain factors. The experiments show the advantages of the proposed model against alternative approaches.
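One simple way to picture real-time recommendation from usage logs (a sketch only; OWNA's knowledge base and weighting scheme are not reproduced, and the session data below is hypothetical) is a first-order transition model over past sessions:

```python
from collections import Counter, defaultdict

def build_transition_model(sessions):
    """Count page-to-page transitions observed in past sessions."""
    model = defaultdict(Counter)
    for session in sessions:
        for current, following in zip(session, session[1:]):
            model[current][following] += 1
    return model

def recommend(model, current_page, k=3):
    """Return the k pages most often visited right after the current page."""
    return [page for page, _ in model[current_page].most_common(k)]

sessions = [
    ["/home", "/catalog", "/item/12", "/checkout"],
    ["/home", "/catalog", "/item/7"],
    ["/home", "/search", "/item/12"],
]
model = build_transition_model(sessions)
print(recommend(model, "/catalog"))   # ['/item/12', '/item/7']
```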
ACM Transactions on the Web, 2012
We propose a new way of navigating the Web using interactive information visualizations, and present encouraging results from a large-scale Web study of a visual exploration system. While the Web has become an immense, diverse information space, it has also evolved into a powerful software platform. We believe that the established interaction techniques of searching and browsing do not sufficiently utilize these advances, since information seekers have to transform their information needs into specific, text-based search queries resulting in mostly text-based lists of resources. In contrast, we foresee a new type of information seeking that is high-level and more engaging, by providing the information seeker with interactive visualizations that give graphical overviews and enable query formulation. Building on recent work on faceted navigation, information visualization, and exploratory search, we conceptualize this type of information navigation as visual exploration and evaluate a prototype Web-based system that implements it. We discuss the results of a large-scale, mixed-method Web study that provides a better understanding of the potential benefits of visual exploration on the Web, and its particular performance challenges.
Learning to Navigate Web Forms
WIIW, 2004
Given a particular update request to a WWW system, users are faced with the navigation problem of finding the correct form to accomplish the update request. In a large system, such as SAP with about 10,000 relations in the standard installation, users are faced with a sea of thousands of forms to navigate. For familiar tasks, users have various aids, such as personal tool bars, but for more complex tasks, users are forced to search or navigate for the correct form, or to forward the update request to a specialist with the expertise to handle it. In this latter case, the execution of the request may be delayed, since the specialist may be unavailable or have other priorities. Also, the user and specialist typically engage in a time-consuming clarification dialog to extract the additional information required to complete the request. In this paper we study the problem of building an assistant for the navigation problem for web forms. This assistant can be deployed either directly to a user or to a specialist who receives a stream of requests from users. In the former case the assistant helps the user navigate to the right form. In the latter case, the assistant cuts down on ambiguous communication between the user and the specialist. We present experimental results from behavioral experiments and machine learning that demonstrate the usefulness of our assistant.
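The paper combines behavioral experiments with machine learning; the sketch below is only a bag-of-words stand-in showing how free-text update requests could be matched to candidate forms. The form catalog, descriptions, and scoring are assumptions:

```python
import re
from collections import Counter

# Hypothetical catalog: form id -> short description of what the form updates.
FORMS = {
    "change_address": "change update customer shipping billing address",
    "create_order":   "create new sales order item quantity customer",
    "reset_password": "reset change user account password login",
}

def tokens(text):
    """Lower-case word counts for a piece of text."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def rank_forms(request_text):
    """Rank forms by word overlap with the free-text update request."""
    request = tokens(request_text)
    scores = []
    for form_id, description in FORMS.items():
        overlap = sum((request & tokens(description)).values())
        scores.append((overlap, form_id))
    return [form_id for overlap, form_id in sorted(scores, reverse=True) if overlap]

print(rank_forms("please update the shipping address for this customer"))
# ['change_address', 'create_order'] -- the address form scores highest
```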
A System Architecture of Intelligent-Guided Browsing on the Web
1998
Compared with traditional business operations, WWW-based commerce has many advantages, such as timeliness, worldwide communication, hyper-links, and multimedia. However, the lack of the customized interactive abilities of traditional sales representatives is its major weakness. To gain a competitive advantage over countless other web sites, it is critical to have such customized interactive abilities. The purpose of this paper is to present a system architecture for intelligent-guided browsing on the web. In the architecture, we present five kinds of browsing agents: a recommendation agent, a new-content agent, a search agent, a customized agent, and a personal-status agent. In order to support these agents, a user analyzer maintains the user profile by analyzing log files and CGI parameters, and a site monitor maintains the site database by monitoring all changes to the site.
The impact of task on the usage of web browser navigation mechanisms
Proceedings of Graphics …, 2006
In this paper, we explore how factors such as task and individual differences influence the usage of different web browser navigation mechanisms (eg, clicked links, bookmarks, auto-complete). We conducted a field study of 21 participants and logged detailed web ...
INTRODUCTION TO VOLUME 1, ISSUE 3: WEB NAVIGATION
2003
The amazing wealth of information and profusion of services available on the Internet are only useful when people can access them successfully. Unsophisticated and erratic search strategies make it hard to find desired services, and poor site designs confuse users.
eNavigate: Effective and Efficient User Web Navigation
A Web site is a huge source of information. Users require different pages at the same time, and the same user may access different pages at different times. Web structure mining uses data about web usage and accordingly makes changes to the structure of a web site, yielding a site that semi-automatically maintains its organization and presentation by learning from visitors' access patterns. Developers build web sites according to their own judgment of how they will be used, without considering users' intentions, so users suffer from the problem of searching through the site. We propose a mathematical programming model to improve user navigation on a website with minimum alterations to its existing structure. Tests performed on publicly available real data sets show that this model significantly improves user navigation with very few changes and effectively solves the navigation problem. A completely reorganized structure can be highly unpredictable, and the cost of disorienting users after a website structure change remains unanalyzed; this approach therefore improves the web site without introducing substantial changes. We define two metrics and use them to assess the performance of the improved website on the real data set.
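The paper formulates the problem as a mathematical programming model; the sketch below is a greedy stand-in, not that model, meant only to illustrate the idea of adding a few direct links where they save the most clicks. The path statistics, budget, and threshold are hypothetical:

```python
# Hypothetical usage data: (start_page, target_page, clicks_taken, visits).
PATH_STATS = [
    ("/home", "/specs/m4.html", 5, 120),
    ("/home", "/faq.html",      4,  30),
    ("/home", "/press.html",    6,   8),
]

def suggest_links(path_stats, budget=2, max_clicks=3):
    """Greedy stand-in for the optimization: add direct links for the paths
    that save the most total clicks, without exceeding a budget of new links."""
    candidates = []
    for start, target, clicks, visits in path_stats:
        if clicks > max_clicks:
            saved = (clicks - 1) * visits   # a direct link would cost one click
            candidates.append((saved, start, target))
    candidates.sort(reverse=True)
    return [(start, target) for _, start, target in candidates[:budget]]

print(suggest_links(PATH_STATS))
# [('/home', '/specs/m4.html'), ('/home', '/faq.html')]
```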
Web Navigation Architectures: Browser, Application, Server or Embedded?
1999
The World Wide Web is increasingly becoming the preferred repository of information. The strength of this information infrastructure is also its weakness. Faced with the chaos of millions of places to go and thousands of places to remember having been, the thousands of new Web users who join every day need a helping hand. The aim of this paper is, by way of an experiment in designing a prototype system, to conceptualise the architectural components of Web navigation support. The prototype supports the ranking of bookmarks based on monitoring user behaviour and recording user rankings. Based on a survey of existing systems, the BASE framework is suggested as a means of understanding the pragmatic technological choices. The framework is applied in characterising a number of current Web navigation technologies.
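A minimal sketch of bookmark ranking from monitored behaviour follows; the frequency-with-recency-decay score is an illustrative choice, not the prototype's formula, and the bookmark data is invented:

```python
import time

def rank_bookmarks(bookmarks, now=None, half_life_days=30.0):
    """Rank bookmarks by visit count decayed by recency (an illustrative score)."""
    now = now or time.time()
    def score(bookmark):
        age_days = (now - bookmark["last_visit"]) / 86400.0
        return bookmark["visits"] * 0.5 ** (age_days / half_life_days)
    return sorted(bookmarks, key=score, reverse=True)

bookmarks = [
    {"url": "https://news.example.com", "visits": 40, "last_visit": time.time() - 90 * 86400},
    {"url": "https://docs.example.com", "visits": 15, "last_visit": time.time() - 2 * 86400},
]
for bookmark in rank_bookmarks(bookmarks):
    print(bookmark["url"])   # the recently used docs bookmark outranks the stale one
```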
A Comparative Analysis of Browsing Behavior of Human Visitors and Automatic Software Agents
American Journal of Systems and Software, 2015
In this paper, we investigate the comparative access behavior of human visitors and automatic software agents, i.e. web robots, through the access logs of a web portal. We perform an exhaustive investigation of resource acquisition trends, hourly activities, entry and exit patterns, the geographic analysis of their origins, user agents, and the distribution of response sizes and response codes for human visitors and web robots. Web robots continue to proliferate and grow in sophistication, for both non-malicious and malicious reasons. An important share of web traffic is attributed to robots, and this fraction is likely to grow over time. The presence of web robot traffic entries in web server log repositories makes it challenging to extract meaningful knowledge about the browsing behavior of actual visitors. This knowledge is useful for enhancing services to better satisfy genuine visitors and for optimizing server resources.
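A hedged sketch of separating robot and human entries in Combined Log Format access logs; the user-agent keyword heuristic and the robots.txt rule below are common simple indicators, not the paper's full methodology:

```python
import re
from collections import Counter

# Apache/Nginx Combined Log Format: ip ident user [time] "request" status size "referer" "agent"
LOG_RE = re.compile(
    r'^(?P<ip>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] "(?P<method>\S+) (?P<path>\S+) [^"]*" '
    r'(?P<status>\d{3}) (?P<size>\S+) "[^"]*" "(?P<agent>[^"]*)"')

ROBOT_HINTS = ("bot", "crawler", "spider", "slurp")

def classify(line):
    """Return ('robot' | 'human', parsed fields), or None if the line does not parse."""
    match = LOG_RE.match(line)
    if not match:
        return None
    fields = match.groupdict()
    agent = fields["agent"].lower()
    is_robot = fields["path"] == "/robots.txt" or any(h in agent for h in ROBOT_HINTS)
    return ("robot" if is_robot else "human", fields)

def summarize(lines):
    """Count how many parsed log entries fall into each visitor class."""
    counts = Counter()
    for line in lines:
        result = classify(line)
        if result:
            counts[result[0]] += 1
    return counts
```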