Neural Network Based Prefetching Control Mechanism
Related papers
An intelligent technique for controlling web prefetching costs at the server side
2008
Prefetching is an interesting technique for improving web performance by reducing the user-perceived latency when surfing the web. Nevertheless, due to its speculative nature, prefetching can increase the network traffic and the server load. This could negatively affect the overall system performance and decrease the quality of service. To minimize these adverse effects and keep them under control, in this paper we propose an intelligent prefetching mechanism that dynamically adjusts the aggressiveness of the prefetching algorithm at the server side. To this end, we also propose a traffic estimation model that makes it possible to accurately calculate, at the server side, the extra load and traffic generated by prefetching. The performance evaluation study shows that our proposal effectively regulates the adverse effects of web prefetching without affecting its benefits.
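The abstract describes a server-side mechanism that dynamically adjusts prefetching aggressiveness so the extra traffic and load stay under control. As a rough, hedged illustration of that kind of control loop (not the paper's actual algorithm), the Python sketch below adapts a prediction-confidence threshold toward a traffic budget; the class name, the proportional update rule, and all parameters are assumptions made for illustration.

```python
class PrefetchAggressivenessController:
    """Illustrative server-side controller: raise or lower the confidence
    threshold of the prediction algorithm so that the estimated extra
    traffic caused by prefetching stays close to a configured budget."""

    def __init__(self, traffic_budget=0.15, gain=0.5,
                 min_threshold=0.1, max_threshold=0.9):
        self.traffic_budget = traffic_budget      # allowed traffic increase (fraction)
        self.gain = gain                          # how strongly to correct deviations
        self.threshold = 0.5                      # current confidence threshold
        self.min_threshold = min_threshold
        self.max_threshold = max_threshold

    def update(self, prefetched_bytes, demanded_bytes):
        """Called periodically with traffic counters for the last interval."""
        if demanded_bytes == 0:
            return self.threshold
        extra_traffic = prefetched_bytes / demanded_bytes
        error = extra_traffic - self.traffic_budget
        # Positive error -> too much prefetch traffic -> raise the threshold
        # (be less aggressive); negative error -> lower it.
        self.threshold += self.gain * error
        self.threshold = max(self.min_threshold,
                             min(self.max_threshold, self.threshold))
        return self.threshold


# Example: traffic above the budget pushes the threshold up.
controller = PrefetchAggressivenessController(traffic_budget=0.15)
print(controller.update(prefetched_bytes=300_000, demanded_bytes=1_000_000))
```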
Reducing user latency in web prefetching using integrated techniques
2011
Web caching and web prefetching are areas of research in Web Mining. Web prefetching improves the performance of web caching techniques by predicting the user's pages in advance, before the user requests them. Both techniques bring web pages local to the user; they provide the resources of the web for the user's ease and ...
Web prefetch performance evaluation in a real environment
Proceedings of the 4th international IFIP/ACM Latin American conference on Networking - LANC '07, 2007
Web prefetch is a technique aimed at reducing the user's perceived latency by downloading, during navigation idle times, web objects before the user asks for them. Although different research efforts on this subject can be found in the literature, there are few implementations for real environments. We propose a real framework that provides web prefetching on current web client and server software working under the standard HTTP 1.1 protocol. This framework also provides detailed statistics, which are very convenient for performance evaluation studies. In this paper we evaluate and compare the performance of different prediction algorithms in real conditions, showing the usefulness of our proposal for this kind of environment.
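As a hedged illustration of the kind of statistics such an evaluation framework typically reports (not this framework's actual output or log format), the sketch below derives precision, recall, and a count-based traffic-increase figure from two URL sets; the function and field names are assumptions.

```python
def prefetch_metrics(user_requests, prefetched):
    """Compute common web-prefetching metrics from two collections of URLs:
    the objects the user actually requested and the objects that were
    prefetched during the same session."""
    requests = set(user_requests)
    hints = set(prefetched)
    useful = requests & hints                     # prefetched objects later used

    precision = len(useful) / len(hints) if hints else 0.0
    recall = len(useful) / len(requests) if requests else 0.0
    # Traffic increase approximated by wasted prefetches per demanded object.
    traffic_increase = len(hints - requests) / len(requests) if requests else 0.0
    return {"precision": precision, "recall": recall,
            "traffic_increase": traffic_increase}


print(prefetch_metrics(
    user_requests=["/index.html", "/news.html", "/a.css"],
    prefetched=["/news.html", "/sports.html"]))
```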
Improving Web Prefetching by Making Predictions at Prefetch
2007 Next Generation Internet Networks, 2007
Most research attempts to improve Web prefetching techniques have focused on the prediction algorithm, with the objective of increasing its precision or, in the best case, reducing the user's perceived latency. In contrast, to improve prefetching performance, this work concentrates on the prefetching engine and proposes the Prediction at Prefetch (P@P) technique. This paper explains how a prefetching technique can be extended to include our P@P proposal under real-world conditions without changes in the web architecture or the HTTP protocol. To show how this proposal can improve prefetching performance, an extensive performance evaluation study has been carried out, and the results show that P@P can considerably reduce the user's perceived latency with no additional cost over the basic prefetch mechanism.
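The core idea stated in the abstract is to run the predictor again when a prefetched object arrives, instead of waiting for the user to request it. The sketch below is a minimal, assumed interpretation of that idea; the queue-based traversal, depth limit, and confidence threshold are illustrative choices, not the paper's design.

```python
from collections import deque

def prefetch_with_pap(start_url, predictor, fetch, max_depth=2, threshold=0.4):
    """Illustrative Prediction-at-Prefetch loop: after each prefetched object
    is downloaded, the predictor is applied to it as well, so new hints are
    generated without waiting for the user to request that object."""
    queue = deque([(start_url, 0)])
    fetched = set()
    while queue:
        url, depth = queue.popleft()
        if url in fetched or depth > max_depth:
            continue
        fetch(url)                          # download (prefetch) the object
        fetched.add(url)
        # P@P step: predict follow-up objects from the prefetched one.
        for candidate, confidence in predictor(url):
            if confidence >= threshold and candidate not in fetched:
                queue.append((candidate, depth + 1))
    return fetched


# Toy predictor and fetcher for demonstration only.
toy_graph = {"/a": [("/b", 0.8)], "/b": [("/c", 0.5)], "/c": []}
print(prefetch_with_pap("/a", lambda u: toy_graph.get(u, []),
                        fetch=lambda u: None))
```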
Graph based Prediction Model to Improve Web Prefetching
Web prefetching is an effective technique used to mitigate the user-perceived latency by making predictions about the user's future requests and prefetching them before the user actually demands them. In this paper, we present an algorithm that learns from user access patterns and builds a Precedence Graph (PG) that is used to generate the predictions. The difference in the relationship between objects of the same web page and objects of different web pages is reflected in the graph implementation. It uses a simple data structure to implement the graph, which is cost-effective and consumes less computational resources. The proposed approach significantly improves the performance of web prefetching while utilizing a limited amount of resources compared to other existing prefetching algorithms.
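To make the idea concrete, here is a minimal sketch of a precedence-graph predictor that keeps separate arc counts for objects of the same page and objects of different pages and derives a confidence from arc and node occurrence counts. The structure and the confidence formula are assumptions made for illustration, not the paper's exact algorithm.

```python
from collections import defaultdict

class PrecedenceGraph:
    """Illustrative precedence-graph predictor: nodes count object
    occurrences, arcs count observed transitions.  Arcs between objects
    of the same page and arcs across pages are kept in separate maps so
    they can be weighted differently, as the abstract suggests."""

    def __init__(self):
        self.node_count = defaultdict(int)
        self.intra_arcs = defaultdict(int)   # (from, to) within the same page
        self.inter_arcs = defaultdict(int)   # (from, to) across pages

    def learn(self, prev_obj, obj, same_page):
        self.node_count[obj] += 1
        if prev_obj is not None:
            arcs = self.intra_arcs if same_page else self.inter_arcs
            arcs[(prev_obj, obj)] += 1

    def predict(self, obj, threshold=0.3):
        """Return objects whose arc confidence from `obj` exceeds the threshold."""
        hints = []
        for arcs in (self.intra_arcs, self.inter_arcs):
            for (src, dst), count in arcs.items():
                if src == obj and self.node_count[obj] > 0:
                    confidence = count / self.node_count[obj]
                    if confidence >= threshold:
                        hints.append((dst, confidence))
        return sorted(hints, key=lambda h: -h[1])


pg = PrecedenceGraph()
pg.learn(None, "/index.html", same_page=False)
pg.learn("/index.html", "/logo.png", same_page=True)
pg.learn("/index.html", "/news.html", same_page=False)
print(pg.predict("/index.html", threshold=0.2))
```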
Adaptive Web Prefetching Scheme using Link Anchor Information
Web prefetching provides an effective mechanism to mitigate the user-perceived latency when accessing web pages. The content of web pages provides useful information for generating the predictions that are used to prefetch web objects to satisfy the user's future requests. In this paper, we propose a fuzzy logic based web prefetching scheme that generates effective predictions for prefetching web objects. Predictions are generated based on the anchor text information associated with hyperlinks in a web page. Based on the user's browsing pattern in each session, the prediction engine dynamically computes prediction values and generates the list of predictions. The prefetched web objects are effectively utilized when the user browses web pages for information related to a specific topic of interest. In long browsing sessions, useful predictions are generated to efficiently minimize the user-perceived latency. The proposed scheme is compared with existing prefetching...
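As a hedged illustration of anchor-text-driven prediction (not the paper's actual fuzzy inference system), the sketch below scores each hyperlink by the overlap between its anchor text and the keywords accumulated in the session, then partitions the scores into prefetch/maybe/skip bands; the scoring function and the band limits are assumptions.

```python
def anchor_score(anchor_text, session_keywords):
    """Toy fuzzy-style relevance score in [0, 1]: fraction of anchor-text
    words that also appear among the keywords seen in the current session."""
    words = set(anchor_text.lower().split())
    if not words:
        return 0.0
    return len(words & session_keywords) / len(words)


def predict_links(links, session_keywords, low=0.3, high=0.7):
    """Classify each hyperlink as 'prefetch', 'maybe', or 'skip' depending on
    where its score falls, mimicking a fuzzy partition of the relevance axis."""
    decisions = {}
    for url, anchor in links:
        score = anchor_score(anchor, session_keywords)
        if score >= high:
            decisions[url] = "prefetch"
        elif score >= low:
            decisions[url] = "maybe"
        else:
            decisions[url] = "skip"
    return decisions


session = {"cricket", "world", "cup", "score"}
links = [("/live", "cricket world cup live score"),
         ("/weather", "weather forecast")]
print(predict_links(links, session))
```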
The impact of the web prefetching architecture on the limits of reducing user's perceived latency
Web prefetching is a technique that has been researched for years to reduce the latency perceived by users. For this purpose, several web prefetching architectures have been used, but no comparative study has been performed to identify the best architecture for prefetching. This paper analyzes the impact of the web prefetching architecture, focusing on the limits of reducing the user's perceived latency. To this end, the factors that constrain the predictive power of each architecture are analyzed and these theoretical limits are quantified. Experimental results show that the best element of the web architecture in which to locate a single prediction engine is the proxy, whose implementation could reduce the perceived latency by up to 67%. Schemes with collaborative predictors located at diverse elements of the web architecture are also analyzed. These predictors could dramatically reduce the perceived latency, reaching a potential limit of about 97% for a mixed proxy-server collaborative prediction engine.
A comparative study of web prefetching techniques focusing on user’s perspective
Web prefetching mechanisms have been proposed to benefit web users by reducing the perceived download latency. Nevertheless, to the knowledge of the authors, there are no attempts in the open literature to compare different prefetch techniques considering the latency perceived by the user as the key metric. The lack of performance comparison studies from the user's perspective has been mainly due to the difficulty of accurately reproducing the large number of factors that take part in the prefetching process, from the environment conditions to the workload. This paper aims at reducing this gap by using a cost-benefit analysis methodology to fairly compare prefetching algorithms from the user's point of view. This methodology has been used to configure and compare five of the most widely used algorithms in the literature under current workloads. In this paper, we analyze the perceived latency versus the traffic increase to evaluate the benefits from the user's perspective. In addition, we also analyze the performance results from the prediction point of view to provide insights into the observed behavior. Results show that, across the studied environment conditions, higher algorithm complexity does not achieve better performance and that object-based algorithms outperform those based on pages.
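The comparison in this paper contrasts perceived-latency reduction with traffic increase. The small sketch below shows a common way such cost-benefit figures are computed from measured totals; the exact metric definitions used in the paper may differ, so these formulas should be read as assumptions.

```python
def latency_reduction(latency_without, latency_with):
    """Relative reduction of per-page latency achieved by prefetching."""
    return (latency_without - latency_with) / latency_without


def traffic_increase(bytes_without, bytes_with):
    """Relative increase of transferred bytes caused by prefetching."""
    return (bytes_with - bytes_without) / bytes_without


# Toy numbers: prefetching cuts perceived latency but costs extra bytes.
print(f"latency reduction: {latency_reduction(2.0, 1.4):.0%}")              # 30%
print(f"traffic increase:  {traffic_increase(1_000_000, 1_200_000):.0%}")   # 20%
```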
A Novel Probability based Approach for Optimized Prefetching
International Journal of Information Engineering and Electronic Business, 2016
As the World Wide Web continues to grow rapidly in size and popularity, web traffic and network bottlenecks have become increasingly important issues in the networked world. The continued growth in demand for items on the World Wide Web causes severe overloading at many sites, network congestion, increased perceived latency, and network bottlenecks. Many users have no patience to wait more than a few seconds for a web page to download, which is why a web traffic reduction system is very necessary in today's World Wide Web to access websites efficiently with the facilities of existing networks. Web caching is an effective method to improve the performance of the World Wide Web, but caching alone is no longer enough, because the World Wide Web has grown quickly from a simple information-sharing mechanism to a rich collection of dynamic objects and multimedia data. Web prefetching is used to improve the performance of the proxy server: it predicts web objects that are expected to be requested in the near future and stores them in advance, thus reducing the response time of user requests. To improve the performance of the proxy server, this paper proposes a new framework that combines a caching system with a prefetching technique and also optimizes prefetching with the help of probability. In the experiments, we use a dataset collected from the ircache.net proxy server and compare the results with other prefetching techniques.
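As a hedged sketch of the combination this abstract describes, the code below puts a probability-threshold prefetcher in front of a small LRU-style cache: transition probabilities are estimated from the access sequence, and likely successors are loaded into the cache ahead of time. The class, the threshold, and the cache policy are illustrative assumptions, not the paper's framework.

```python
from collections import defaultdict, OrderedDict

class ProbabilityPrefetcher:
    """Illustrative combination of an LRU cache with probability-based
    prefetching: P(next | current) is estimated from the access history and
    objects above a probability threshold are loaded into the cache."""

    def __init__(self, cache_size=100, threshold=0.25):
        self.cache = OrderedDict()
        self.cache_size = cache_size
        self.threshold = threshold
        self.transitions = defaultdict(lambda: defaultdict(int))
        self.prev = None

    def _cache_put(self, url):
        self.cache[url] = True
        self.cache.move_to_end(url)
        while len(self.cache) > self.cache_size:
            self.cache.popitem(last=False)       # evict least recently used

    def access(self, url):
        hit = url in self.cache
        self._cache_put(url)
        if self.prev is not None:
            self.transitions[self.prev][url] += 1
        # Prefetch successors whose estimated conditional probability is high.
        successors = self.transitions[url]
        total = sum(successors.values())
        for nxt, count in successors.items():
            if total and count / total >= self.threshold:
                self._cache_put(nxt)
        self.prev = url
        return hit


p = ProbabilityPrefetcher(cache_size=10, threshold=0.5)
for u in ["/a", "/b", "/a", "/b", "/a"]:
    p.access(u)
print("/b" in p.cache)   # True: /b was prefetched after the last access to /a
```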