Caching Policies over Unreliable Channels

2020

Recently, there has been substantial progress in the formal understanding of how caching resources should be allocated when multiple caches each deploy the common LRU policy. Nonetheless, the role played by caching policies beyond LRU in a networked setting, where content may be replicated across multiple caches and where channels are unreliable, is still poorly understood. In this paper, we investigate this issue by first analyzing the cache miss rate in a system with two caches of unit size each, for the LRU and LFU caching policies and their combination. Our analytical results show that joint use of the two policies outperforms LRU, while LFU outperforms all these policies whenever resource pooling is not optimal. We provide empirical results with larger caches to show that simple alternative policies, such as LFU, provide superior performance compared to LRU even if the space allocation is not fine-tuned. We envision that fine-tuning the cache space used by such policies may...
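As a rough illustration of the LRU-versus-LFU miss-rate comparison, the sketch below simulates both policies on a single small cache under a Zipf request stream. It is a minimal sketch under stated assumptions: the two-cache system and unreliable channels analyzed in the paper are not modeled, the Zipf parameters are illustrative, and the LFU variant uses frequency-based admission (a new item displaces the cached one only if it has been requested more often), since plain evict-on-miss LFU coincides with LRU at capacity one.

```python
import random
from collections import Counter, OrderedDict

def zipf_requests(n_items, alpha, n_reqs, rng):
    # Draw item ids from a truncated Zipf(alpha) popularity law.
    weights = [1.0 / (i + 1) ** alpha for i in range(n_items)]
    return rng.choices(range(n_items), weights=weights, k=n_reqs)

def miss_rate(policy, requests, capacity):
    cache, freq, misses = OrderedDict(), Counter(), 0
    for item in requests:
        freq[item] += 1
        if item in cache:
            cache.move_to_end(item)          # refresh recency (used by LRU)
            continue
        misses += 1
        if len(cache) < capacity:
            cache[item] = True
        elif policy == "LRU":
            cache.popitem(last=False)        # evict least recently used
            cache[item] = True
        else:                                # LFU with frequency-based admission
            victim = min(cache, key=freq.__getitem__)
            if freq[item] > freq[victim]:    # admit only if strictly more popular
                del cache[victim]
                cache[item] = True
    return misses / len(requests)

rng = random.Random(1)
reqs = zipf_requests(100, 0.8, 50_000, rng)
for pol in ("LRU", "LFU"):
    print(pol, round(miss_rate(pol, reqs, capacity=1), 3))
```

Under a skewed popularity law, the frequency-aware policy converges to pinning the most popular item, which is consistent with the abstract's observation that LFU-style policies can beat LRU even without fine-tuned space allocation.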

A Framework for Evaluating Caching Policies in A Hierarchical Network of Caches

2018 IFIP Networking Conference (IFIP Networking) and Workshops, 2018

Much attention of the research community has focused on performance analysis of cache networks under various caching policies. However, the issue of how to evaluate and compare caching policies for cache networks has not been adequately addressed. In this paper, we propose a novel and general framework for evaluating caching policies in a hierarchical network of caches. We introduce the notion of a hit probability/rate matrix, and employ a generalized notion of majorization as the basic tool for evaluating caching policies for various performance metrics. We discuss how the framework can be applied to existing caching policies, and conduct extensive simulation-based evaluation to demonstrate the utility and accuracy of our framework.
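The paper's generalized notion of majorization is not spelled out in the abstract; the sketch below shows only the classical weak-majorization check, applied to hypothetical per-content hit-probability vectors, as one plausible instance of comparing two policies row by row in a hit probability/rate matrix. The vectors and the tolerance are illustrative assumptions.

```python
import numpy as np

def weakly_majorizes(p, q, tol=1e-12):
    # p weakly majorizes q if, after sorting both in decreasing order,
    # every prefix sum of p is at least the corresponding prefix sum of q.
    p_sorted = np.sort(np.asarray(p))[::-1]
    q_sorted = np.sort(np.asarray(q))[::-1]
    return bool(np.all(np.cumsum(p_sorted) >= np.cumsum(q_sorted) - tol))

# Hypothetical per-content hit probabilities for two policies at one cache.
hits_policy_a = [0.9, 0.6, 0.3, 0.1]
hits_policy_b = [0.7, 0.6, 0.4, 0.2]
print(weakly_majorizes(hits_policy_a, hits_policy_b))  # True in this toy case
```

A dominance check of this kind lets one policy be declared no worse than another across a family of performance metrics at once, rather than for a single scalar hit rate, which appears to be the motivation for using majorization as the basic comparison tool.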

Performance comparison of dynamic policies for remote caching

Concurrency and Computation: Practice and Experience, 1993

In a distributed system, data servers (file systems and databases) can easily become bottlenecks. We propose an approach to offloading data access requests from overloaded data servers to nodes that are idle or less busy. This approach is referred to as remote caching, and the idle or less busy nodes are called mutual servers, as they help out the busy server nodes on data accesses. In addition to server and client local caches, frequently accessed data are cached in the main memory of mutual servers, thus improving data access time in the system. We evaluate several data propagation strategies among data servers and mutual servers. These include policies in which senders are active/passive and receivers are active/passive in initiating data propagation. For example, an active sender takes the initiative to offload data onto a passive receiver. Simulation results show that the active-sender/passive-receiver policy is the method of choice in most cases. Active-sender policies are best able to exploit the main memory of other idle nodes in the expected normal condition where some nodes are overloaded and others are less loaded. All active policies perform far better than the policy without remote caching, even in the degenerate case where each node is equally loaded.
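A minimal sketch of the active-sender/passive-receiver idea follows, assuming simple load thresholds and a one-shot push of each overloaded node's hottest blocks to the least-loaded mutual server. The node names, thresholds, and offload rule are illustrative assumptions, not the paper's simulation model.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    load: float                                  # fraction of time busy
    cache: dict = field(default_factory=dict)    # block id -> access count

def active_sender_offload(nodes, high=0.8, low=0.3, k=2):
    # Toy active-sender/passive-receiver step: each overloaded node (load > high)
    # pushes its k hottest blocks to the least-loaded node below the 'low'
    # threshold; the receiver is passive and simply accepts the data.
    receivers = [n for n in nodes if n.load < low]
    for sender in (n for n in nodes if n.load > high):
        if not receivers:
            break
        target = min(receivers, key=lambda n: n.load)
        hottest = sorted(sender.cache, key=sender.cache.get, reverse=True)[:k]
        for block in hottest:
            target.cache[block] = sender.cache.pop(block)

nodes = [Node("server", 0.9, {"a": 40, "b": 25, "c": 5}),
         Node("mutual-1", 0.1), Node("mutual-2", 0.5)]
active_sender_offload(nodes)
print({n.name: list(n.cache) for n in nodes})
# {'server': ['c'], 'mutual-1': ['a', 'b'], 'mutual-2': []}
```

Putting the initiative on the sender matches the abstract's finding: the overloaded node knows which of its blocks are hot, so it can exploit idle memory elsewhere without the receivers having to poll for work.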
