cache management Research Papers - Academia.edu

E-learning is basically the integration of various technologies. E-learning technology is now maturing, and a multiplicity of standards can be found. A large number of studies have discussed problems in e-learning. Some of them try to develop models and architectures that enhance e-learning and incorporate additional devices such as PDAs and smartphones. All of these studies attempt to reduce the challenges facing e-learning, such as flexibility, efficiency, availability and convenience, but a comprehensive solution for the e-learning concept has not yet been conceived. Thus, work on solutions is still being carried out on many aspects. In this paper we propose an e-learning architecture based on cache management and web services. This architecture makes the e-learning environment more adaptive. Furthermore, it will provide learners with availability, efficiency and high performance of resources and learning devices.

2000, IEEE Transactions on Consumer Electronics

A Disk-On-Module (DOM) is a NAND flash memory-based device with a legacy I/O interface that does not require any special device driver due to the presence of a flash translation layer (FTL). The FTL is an intermediate software layer that makes DOMs look like conventional hard disk drives. Since DOMs are usually used in mass-market consumer electronics devices, they are extremely cost-sensitive; hence the FTL should be able to run in a severely resource-constrained environment. In this paper, we propose TinyFTL, a new FTL which employs an efficient memory management scheme for DOMs with a very small amount of memory. TinyFTL divides the mapping information into multiple levels and caches only recently-accessed mapping information in memory. According to experimental evaluation, TinyFTL shows performance comparable to or better than existing FTLs with only 4.3–6.2% of their memory requirement (12 KB) for 16 GB NAND flash memory.
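The core idea — keep only recently-used pieces of the logical-to-physical map in RAM and demand-load the rest from flash — can be sketched as follows. This is a minimal illustration; the class name, the 512-entry mapping-page size, and the load callback are assumptions, not TinyFTL's actual parameters or layout.

```python
from collections import OrderedDict

ENTRIES_PER_MAP_PAGE = 512  # hypothetical number of L2P entries per mapping page

class TinyMapCache:
    """Sketch of a demand-loaded mapping cache: only recently used
    mapping pages are kept in RAM; the full map lives in flash."""

    def __init__(self, capacity, load_map_page):
        self.capacity = capacity            # max mapping pages held in RAM
        self.load_map_page = load_map_page  # callback reading a map page from flash
        self.cache = OrderedDict()          # map-page index -> list of physical pages

    def translate(self, lpn):
        """Translate a logical page number to a physical one."""
        idx, off = divmod(lpn, ENTRIES_PER_MAP_PAGE)
        if idx in self.cache:
            self.cache.move_to_end(idx)     # mark mapping page as recently used
        else:
            if len(self.cache) >= self.capacity:
                self.cache.popitem(last=False)  # evict least recently used map page
            self.cache[idx] = self.load_map_page(idx)
        return self.cache[idx][off]
```

With a small `capacity`, RAM usage stays bounded by a handful of mapping pages regardless of the flash device's total size, which is the point of the multi-level scheme.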

E-learning is a way of teaching using modern communication mechanisms, whether remote or in the classroom. The important point is to use all kinds of technology to deliver information to the learner in a shorter time, with less effort and greater benefit. E-learning is able to change what we deliver to students and how we deliver it across time and space, which has driven its evolution. But there are some challenges facing e-learning. We propose a new idea for an e-learning system relying upon crawling and memory management together with web services, to make the model adaptive and predictive and thereby make e-learning more effective in terms of accuracy, efficiency, availability and high performance of resources.

2000, IEEE Transactions on Parallel and Distributed Systems

Parallel applications currently suffer from a significant imbalance between computational power and available I/O bandwidth. Additionally, the hierarchical organization of current Petascale systems contributes to an increase in I/O subsystem latency. In these hierarchies, file access involves pipelining data through several networks with incremental latencies and a higher probability of congestion. Future Exascale systems are likely to share this trait. This paper presents a scalable parallel I/O software system designed to transparently hide the latency of file system accesses from applications on these platforms. Our solution takes advantage of the hierarchy of networks involved in file accesses to maximize the degree of overlap between computation, file I/O-related communication, and file system access. We describe and evaluate a two-level hierarchy for Blue Gene systems consisting of client-side and I/O node-side caching. Our file cache management modules coordinate the data staging between application and storage through the Blue Gene networks. The experimental results demonstrate that our architecture achieves significant performance improvements through a high degree of overlap between computation, communication, and file I/O.
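The overlap between computation and data staging that the abstract describes can be illustrated with a generic producer/consumer sketch: the application thread keeps producing while a background thread drains a bounded staging buffer to storage, the role the client-side cache plays here. Plain Python threads and all names below are illustrative; none of this is the paper's actual Blue Gene implementation.

```python
import threading
import queue

def write_with_staging(blocks, flush_to_storage, depth=4):
    """Overlap computation with file I/O: the caller keeps going while
    a background thread drains a bounded staging queue to storage."""
    staged = queue.Queue(maxsize=depth)  # bounded buffer, like a small cache

    def drain():
        while True:
            block = staged.get()
            if block is None:            # sentinel: no more data
                break
            flush_to_storage(block)      # slow path runs off the critical path

    io_thread = threading.Thread(target=drain)
    io_thread.start()
    for b in blocks:
        staged.put(b)                    # returns immediately unless queue is full
    staged.put(None)
    io_thread.join()
```

The bounded queue is the key design choice: it lets computation run ahead of storage by up to `depth` blocks, but applies back-pressure instead of buffering without limit.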

2014, International Journal of Computer Applications Technology and Research (IJCATR), ATS (Association of Technology and Science), India, ISSN 2319–8656 (Online), Vol. 3, Issue 9, pp. 569–578

The breakthrough in wireless networking has prompted a new concept of computing, called mobile computing, in which users toting portable devices have access to a shared infrastructure independent of their physical location. Mobile computing is becoming increasingly vital due to the increase in the number of portable computers and the aspiration to have continuous network connectivity to the Internet irrespective of the physical location of the node. Mobile computing systems are computing systems that may be readily moved physically and whose computing ability may be used while they are being moved. Mobile computing has rapidly become a vital new paradigm in today's world of networked computing systems. It includes software, hardware and mobile communication. Ranging from wireless laptops to cellular phones, and from WiFi/Bluetooth-enabled PDAs to wireless sensor networks, mobile computing has become ubiquitous in its influence on our everyday lives. In this paper, various types of mobile devices are discussed and examined in detail, along with the operating systems most commonly used on those devices. Another aim of this paper is to point out some of the characteristics, applications, limitations, and issues of mobile computing.

1998, Proceedings 14th International Conference on Data Engineering

2002

The mobile computing paradigm has emerged due to advances in wireless and cellular networking technology. This rapidly expanding technology poses many challenging research problems in the area of mobile database systems. Mobile users can access information independent of their physical location through wireless connections. However, accessing and manipulating information without restricting users to specific locations complicates data processing activities. There are computing constraints that make mobile database processing different from wired distributed database computing. In this paper, we survey the fundamental research challenges particular to mobile database computing, review some of the proposed solutions, and identify some of the upcoming research challenges. We discuss interesting research areas, which include mobile location data management, transaction processing and broadcast, cache management and replication, and query processing. We highlight new upcoming research directions in mobile digital libraries, mobile data warehousing, mobile workflow, and the mobile web and e-commerce.

2004, IEEE Transactions on Parallel and Distributed Systems

Caching has been intensively used in memory and traditional file systems to improve system performance. However, the use of caching in parallel file systems and I/O libraries has been limited to I/O nodes to avoid cache coherence problems. In this paper, we specify an adaptive cache coherence protocol well suited to parallel file systems and parallel I/O libraries. This model exploits caching at both processing and I/O nodes, providing performance-enhancing mechanisms such as aggressive prefetching and delayed-write techniques. The cache coherence problem is solved by using a dynamic scheme of cache coherence protocols with different sizes and shapes of granularity. The proposed model is very appropriate for parallel I/O interfaces such as MPI-IO. Performance results, obtained on an IBM SP2, are presented to demonstrate the advantages offered by the proposed cache management methods.

1999

Caching has been intensively used in memory and traditional file systems to improve system performance. However, the use of caching in parallel file systems has been limited to I/O nodes to avoid cache coherence problems. In this paper we present the cache mechanisms implemented in ParFiSys, a parallel file system developed at the UPM. ParFiSys exploits the use of cache, both at processing and I/O nodes, with aggressive pre-fetching and delayed-write techniques. The cache coherence problem is solved by using a dynamic scheme of cache coherence protocols with different sizes and shapes of granularity. Performance results, obtained on an IBM SP2, are presented to demonstrate the advantages offered by the cache management methods used in ParFiSys.

2006, Future Generation Computer Systems

Proxy caches are essential to improve the performance of the World Wide Web and to enhance user perceived latency. Appropriate cache management strategies are crucial to achieve these goals. In our previous work, we have introduced Web object-based caching policies. A Web object consists of the main HTML page and all of its constituent embedded files. Our studies have shown that these policies improve proxy cache performance substantially. In this paper, we propose a new Web object-based policy to manage the storage system of a proxy cache. We propose two techniques to improve the storage system performance. The first technique is concerned with prefetching the related files belonging to a Web object, from the disk to main memory. This prefetching improves performance as most of the files can be provided from the main memory rather than from the proxy disk. The second technique stores the Web object members in contiguous disk blocks in order to reduce the disk access time. We used trace-driven simulations to study the performance improvements one can obtain with these two techniques. Our results show that the first technique by itself provides up to 50% reduction in hit latency, which is the delay involved in providing a hit document by the proxy. An additional 5% improvement can be obtained by incorporating the second technique.
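The first technique — on a main-page access, prefetching the Web object's embedded members from disk into memory — can be sketched as follows. The class, the dict-based "disk", and the object map are assumptions for illustration, not the paper's implementation.

```python
class WebObjectCache:
    """Sketch: a request for a main page prefetches all members of its
    Web object from 'disk' into memory, so embedded files hit in RAM."""

    def __init__(self, disk, objects):
        self.disk = disk        # url -> bytes; stands in for the proxy's disk
        self.objects = objects  # main-page url -> urls of its embedded files
        self.memory = {}        # in-memory staging area

    def get(self, url):
        if url in self.memory:                # memory hit: no disk access needed
            return self.memory[url]
        data = self.disk[url]                 # disk access for the miss...
        self.memory[url] = data
        for member in self.objects.get(url, []):
            # ...plus prefetch of the object's other members, so the
            # browser's follow-up requests for embedded files hit in RAM
            self.memory.setdefault(member, self.disk[member])
        return data
```

The second technique, contiguous on-disk placement of an object's members, is a storage-layout decision and does not show up in this sketch; it would make the prefetch loop a single sequential read instead of several seeks.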

1998, Data Engineering, 1998. …

Communication between mobile clients and database servers in a mobile computing environment is via wireless channels with low bandwidth and low reliability. A mobile client could cache its frequently accessed database items into its local storage in order to improve performance of database queries and availability of database items for query processing during disconnection. We describe a mobile caching mechanism

2010, 2010 Design, Automation & Test in Europe Conference & Exhibition (DATE 2010)

We consider the problem of on-chip L2 cache management and replacement policies. We propose a new adaptive cache replacement policy, called Dueling CLOCK (DC), that has several advantages over the Least Recently Used (LRU) cache replacement policy.
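For context, the building block DC adapts is the classic CLOCK policy: a reference bit per frame and a sweeping hand approximate LRU at far lower bookkeeping cost. The sketch below is plain CLOCK, not the Dueling CLOCK policy itself, and its structure and names are assumptions.

```python
class ClockCache:
    """Plain CLOCK replacement: each frame has a reference bit; on a miss,
    a hand sweeps frames, clearing set bits, and evicts the first clear one."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.frames = []   # list of [key, ref_bit]
        self.index = {}    # key -> position in frames
        self.hand = 0

    def access(self, key):
        """Return True on a cache hit, False on a miss."""
        if key in self.index:
            self.frames[self.index[key]][1] = 1        # hit: set reference bit
            return True
        if len(self.frames) < self.capacity:           # cold miss: free frame
            self.index[key] = len(self.frames)
            self.frames.append([key, 1])
            return False
        while self.frames[self.hand][1]:               # sweep: second chances
            self.frames[self.hand][1] = 0
            self.hand = (self.hand + 1) % self.capacity
        victim = self.frames[self.hand][0]             # first unreferenced frame
        del self.index[victim]
        self.frames[self.hand] = [key, 1]
        self.index[key] = self.hand
        self.hand = (self.hand + 1) % self.capacity
        return False
```

Unlike LRU, CLOCK needs no list reordering on every hit — just a bit set — which is why it is attractive for hardware L2 caches.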

2004

An Interest-based Clustering peer-to-peer Network (ICN) architecture is introduced in this paper. ICN borrows many mechanisms from Freenet and is based on cache management. ICN is self-organizing, fully distributed, scalable, and logically hierarchical. In ICN, the upper level is organized as a de Bruijn graph, while nodes in the lower level self-cluster based on interest. Through analysis and simulation, ICN shows good fault-tolerance, efficient data retrieval and resource usage, as well as low overhead traffic.
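The de Bruijn structure at the upper level gives every node a fixed out-degree and short routes: labeling nodes with length-n base-k digit strings, each node's successors are found by shifting the label left and appending one digit, so any node is reachable in at most n hops. A minimal sketch (parameter names assumed):

```python
def de_bruijn_neighbors(node, k, n):
    """Successors of `node` in the de Bruijn graph B(k, n): shift the
    base-k digit string left one position and append each possible digit."""
    return [(node * k + d) % (k ** n) for d in range(k)]
```

For example, in B(2, 3) node 5 (binary 101) shifts to 01x, giving successors 2 (010) and 3 (011); routing toward a target simply feeds in the target's digits one per hop.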

2002, Operations Research/Computer Science Interfaces Series

2008, 2008 5th Annual IEEE Communications Society Conference on Sensor, Mesh and Ad Hoc Communications and Networks

Personal communication devices are increasingly equipped with sensors for passive monitoring of encounters and surroundings. We envision the emergence of services that enable a community of mobile users carrying such resource-limited devices to query such information at remote locations in the field in which they collectively roam. One approach to implement such a service is directed placement and retrieval (DPR), whereby readings/queries about a specific location are routed to a node responsible for that location. In a mobile, potentially sparse setting, where end-to-end paths are unavailable, DPR is not an attractive solution as it would require the use of delay-tolerant (flooding-based store-carry-forward) routing of both readings and queries, which is inappropriate for applications with data freshness constraints, and which is incompatible with stringent device power/memory constraints. Alternatively, we propose the use of amorphous placement and retrieval (APR), in which routing and field monitoring are integrated through the use of a cache management scheme coupled with an informed exchange of cached samples to diffuse sensory data throughout the network, in such a way that a query answer is likely to be found close to the query origin. We argue that knowledge of the distribution of query targets could be used effectively by an informed cache management policy to maximize the utility of collective storage of all devices. Using a simple analytical model, we show that the use of informed cache management is particularly important when the mobility model results in a non-uniform distribution of users over the field. We present results from extensive simulations which show that in sparsely-connected networks, APR is more cost-effective than DPR, that it provides extra resilience to node failure and packet losses, and that its use of informed cache management yields superior performance.
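The informed cache management idea — keep the samples most likely to answer future queries, given the distribution of query targets — can be caricatured in a few lines. The utility function here (raw query probability per location) is an assumption; the paper's model and exchange protocol are richer.

```python
def evict_informed(cache, query_prob):
    """Evict the cached sample least likely to serve a future query.

    cache      -- set of location ids whose sensor samples are cached
    query_prob -- assumed-known probability that a query targets each location
    """
    victim = min(cache, key=lambda loc: query_prob.get(loc, 0.0))
    cache.remove(victim)
    return victim
```

Under a uniform query distribution this degenerates to arbitrary eviction, which matches the paper's observation that informed management matters most when mobility makes the distribution non-uniform.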