Proxy-Side Web Prefetching Scheme for Efficient Bandwidth Usage: A Probabilistic Method

International Journal of Engineering Research and Technology (IJERT), 2014

https://www.ijert.org/proxy-side-web-prefetching-scheme-for-efficient-bandwidth-usage-a-probabilistic-method
https://www.ijert.org/research/proxy-side-web-prefetching-scheme-for-efficient-bandwidth-usage-a-probabilistic-method-IJERTV3IS061160.pdf

The expansion of the World Wide Web has emphasized the need to improve user-perceived latency. One of the methods used to reduce this latency is web prefetching combined with web caching. Web prefetching is an effective way to cut users' latencies on the World Wide Web: users' access histories make it possible to predict future accesses from previously requested objects and sites. A prefetching engine uses these predictions to fetch web objects on the user's behalf before the user demands them. Even as web caching systems have improved, web prefetching remains important and challenging because of bandwidth usage. Web prefetching is a helpful tool for improving access to the World Wide Web while also reducing bandwidth usage. Prefetching can be performed at the client side, at the server side, or, as in this paper, at the proxy side. We propose a proxy-side web prefetching scheme based on a probabilistic method that improves the cache hit rate with only a small amount of additional storage space.

Keywords: improvement of web caching combined with web prefetching, probabilistic method for web prefetching, proxy-side web prefetching, web prefetching objects.
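The probabilistic idea behind such a scheme can be sketched as follows. This is an illustrative sketch only, not the paper's exact algorithm: the first-order transition model and the prefetch threshold are assumptions made for the example.

```python
from collections import defaultdict

class ProbabilisticPrefetcher:
    """Estimate P(next_object | current_object) from the proxy's
    request history and suggest likely successors to prefetch."""

    def __init__(self, threshold=0.4):
        self.threshold = threshold
        # transitions[a][b] = how often object b followed object a
        self.transitions = defaultdict(lambda: defaultdict(int))
        self.last_seen = {}  # client -> last requested object

    def record(self, client, obj):
        """Observe one request and update the transition counts."""
        prev = self.last_seen.get(client)
        if prev is not None:
            self.transitions[prev][obj] += 1
        self.last_seen[client] = obj

    def candidates(self, obj):
        """Objects whose estimated transition probability from
        `obj` meets or exceeds the prefetch threshold."""
        counts = self.transitions[obj]
        total = sum(counts.values())
        if total == 0:
            return []
        return [o for o, c in counts.items() if c / total >= self.threshold]
```

On each proxy hit the scheme would call `record`, then prefetch the objects returned by `candidates` for the just-requested object, trading a small amount of storage for the transition table against an improved hit rate.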

Reducing user latency in web prefetching using integrated techniques

2011

Web caching and web prefetching are active research areas in Web Mining. Web prefetching improves the performance of web caching techniques by predicting the pages a user will need before the user requests them. Both techniques bring web pages local to the user; they provide web resources for the user's convenience and …

An experimental framework for testing web prefetching techniques

… 2004. Proceedings. 30th, 2004

The popularity of web objects, and by extension of web sites, together with the clear footprints in users' accesses that show considerable spatial locality, make it possible to predict future accesses from the current ones. This allows prefetching techniques to be implemented in the web architecture in order to reduce the latency perceived by users. Although the open literature presents some approaches in this direction, the huge variety of prefetching algorithms, and the different scenarios and conditions in which they are applied, make it very difficult to compare performance and draw sound conclusions that would let researchers improve their proposals or even detect under which conditions one solution is more convenient than another. This is the main reason why we propose in this paper a new, freely available environment for implementing and studying prefetching techniques efficiently. Our framework is a hybrid implementation that combines real and simulated parts in order to provide both flexibility and accuracy. It reproduces in detail the behavior of web users, proxy servers, and origin servers. The simulator also includes a module that reports performance results such as precision (prefetching accuracy), recall, response time, and byte transference.
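The precision and recall metrics mentioned above are standard and can be computed directly from the sets of prefetched and actually requested objects. The function below is a generic sketch of those definitions, not code from the framework described in the paper.

```python
def prefetch_metrics(prefetched, requested):
    """Precision: fraction of prefetched objects the user actually
    requested.  Recall: fraction of requested objects that had been
    prefetched in advance."""
    prefetched, requested = set(prefetched), set(requested)
    hits = prefetched & requested
    precision = len(hits) / len(prefetched) if prefetched else 0.0
    recall = len(hits) / len(requested) if requested else 0.0
    return precision, recall
```

High precision means little wasted traffic; high recall means few cache misses. The trade-off between the two is exactly what makes prefetching algorithms hard to compare.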

A comparative study of web prefetching techniques focusing on user’s perspective

Web prefetching mechanisms have been proposed to benefit web users by reducing the perceived download latency. Nevertheless, to the knowledge of the authors, there are no attempts in the open literature to compare different prefetching techniques using the latency perceived by the user as the key metric. The lack of performance comparison studies from the user's perspective has been mainly due to the difficulty of accurately reproducing the large number of factors that take part in the prefetching process, from the environment conditions to the workload. This paper aims to reduce this gap by using a cost-benefit analysis methodology to fairly compare prefetching algorithms from the user's point of view. This methodology has been used to configure and compare five of the most used algorithms in the literature under current workloads. In this paper, we analyze the perceived latency versus the traffic increase to evaluate the benefits from the user's perspective. In addition, we analyze the performance results from the prediction point of view to provide insights into the observed behavior. Results show that, across the studied environment conditions, higher algorithm complexity does not achieve better performance, and object-based algorithms outperform those based on pages.

Make Web Page Instant: By Integrating Web-Cache and Web-Prefetching

2013

As the Internet continues its exponential growth, two of the major problems that today's web users suffer from are network congestion and web server overloading. Web caching and prefetching are well-known strategies for improving the performance of Internet systems. Web caching techniques have been widely used with the objective of caching as many web pages and web objects in the proxy server cache as possible to improve network performance. Web prefetching schemes have also been widely used, where web pages and web objects are prefetched into a nearby proxy server cache. In this paper, we present an application of web log mining to obtain web-document access patterns of closely related pages, based on the analysis of requests in the proxy server log files.
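Mining "closely related pages" from a proxy log can be sketched as a simple co-occurrence count: group log entries into per-client sessions and count how often each pair of pages appears together. This is an illustrative sketch under assumed log structure `(client, url)`, not the paper's actual mining procedure.

```python
from collections import Counter
from itertools import combinations

def related_pages(log, min_support=2):
    """Find pairs of pages that co-occur in at least `min_support`
    client sessions of a proxy access log.

    `log` is an iterable of (client, url) tuples; each client's
    requests are treated as one session for simplicity."""
    sessions = {}
    for client, url in log:
        sessions.setdefault(client, set()).add(url)
    pairs = Counter()
    for pages in sessions.values():
        for pair in combinations(sorted(pages), 2):
            pairs[pair] += 1
    return {pair: n for pair, n in pairs.items() if n >= min_support}
```

Pages that frequently co-occur are good prefetch candidates: when one member of a frequent pair is requested, the proxy can fetch the other in advance.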

Adaptive Web Prefetching

Many factors contribute to a less-than-speedy web experience, including heterogeneous network connectivity, real-world distances, and congestion due to unexpected network demand. Web caching, along with other forms of data dissemination, has been proposed as a technology that helps reduce network usage and server loads and improve average latencies experienced by the user. When successful, prefetching web objects into local caches can be used to further reduce latencies [KLM97], and even to shift network loads from peak to non-peak periods.

Web caching and prefetching: What, why, and how?

2008 International Symposium on Information Technology, 2008

The demand for Internet content rose dramatically in recent years. Servers became more and more powerful, and the bandwidth of end-user connections and backbones grew constantly during the last decade. Nevertheless, users often experience poor performance when they access web sites or download files. Such problems often stem from performance issues directly on the servers (e.g., poor performance of server-side applications, or flash crowds) and from the network infrastructure (e.g., long geographical distances, network overloads). Web caching and prefetching have been recognized as effective schemes to alleviate the service bottleneck, minimize user access latency, and reduce network traffic. In this paper, we discuss what web caching and prefetching are, why we should adopt them, and how to apply these two technologies.

A Low Latency Proxy Prefetching Caching Algorithm

International Conference on Aerospace Sciences and Aviation Technology, 2003

The web proxy cache system was deployed to save network bandwidth, balance server load, and reduce network latency by storing copies of popular documents in the client and proxy caches for Uniform Resource Locator (URL) requests. To address the Web's slow end-user response time, the author developed and implemented a web proxy caching and prefetching strategy that provides users with the information they most likely want to browse, based on user profiles. The developed strategy uses the Reverse Aggressive technique for prefetching, which had previously been proposed only theoretically, and has been implemented with different cache sizes using a web caching simulator. Traditional cache replacement policies such as Least-Recently-Used (LRU), Hybrid, and Size already existed in this simulator; the work in this paper extends it with more recent replacement policies, namely Last-In-First-Out (LIFO), First-Try, Swapping, and Place-Holder, under an infinitely sized cache. The performance of the developed strategy has been studied using both the traditional and the more recent replacement policies. A comparative study has also been carried out to clarify the benefits of the Reverse Aggressive caching-prefetching algorithm over the Fixed-Horizon caching-prefetching algorithm in terms of Reduced Latency (RL). According to the implementation results, the Reverse Aggressive cache prefetching strategy reduces the average latency to a greater degree.
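Of the replacement policies named above, LRU is the simplest to illustrate: on overflow, evict the object that has gone longest without being referenced. The sketch below is a generic LRU cache, not the simulator used in the paper.

```python
from collections import OrderedDict

class LRUCache:
    """Least-Recently-Used replacement: on overflow, evict the
    cached object that has gone longest without being referenced."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()  # url -> object, oldest first

    def get(self, url):
        """Return the cached object, or None on a cache miss."""
        if url not in self.store:
            return None
        self.store.move_to_end(url)  # mark as most recently used
        return self.store[url]

    def put(self, url, obj):
        """Insert or refresh an object, evicting the LRU entry
        when the cache exceeds its capacity."""
        if url in self.store:
            self.store.move_to_end(url)
        self.store[url] = obj
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # drop least recently used
```

Policies like Hybrid and Size differ only in how the victim is chosen (by estimated refetch cost or by object size, respectively), so a simulator typically varies just the eviction step.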