Cache Management Research Papers - Academia.edu

E-learning is basically the integration of various technologies. E-learning technology is now maturing, and a multiplicity of standards can be found. A large body of research has discussed problems in e-learning. Some of it tries to develop models and architectures that enhance e-learning and incorporate additional devices such as PDAs and smartphones. All of this research attempts to reduce the challenges facing e-learning, such as flexibility, efficiency, availability and convenience, but a comprehensive solution for the e-learning concept has not yet been conceived. Thus, work on solutions is still being carried out on many aspects. In this paper we propose an e-learning architecture based on cache management and web services. This architecture makes the e-learning environment more adaptive. Furthermore, it will provide learners with availability, efficiency and high performance of resources and learning devices.
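The cache-management side of such an architecture can be illustrated with a small sketch. The class and the `fetch` callback below are hypothetical, not from the paper; they show the generic idea of a client-side LRU cache standing in front of a web-service call so that frequently used learning resources are served locally.

```python
from collections import OrderedDict

class ResourceCache:
    """A small LRU cache for learning resources (illustrative sketch)."""
    def __init__(self, capacity=3):
        self.capacity = capacity
        self.store = OrderedDict()
        self.hits = 0
        self.misses = 0

    def get(self, key, fetch):
        if key in self.store:
            self.hits += 1
            self.store.move_to_end(key)  # mark as recently used
            return self.store[key]
        self.misses += 1
        value = fetch(key)  # e.g. a web-service call in a real system
        self.store[key] = value
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict the least recently used
        return value

cache = ResourceCache(capacity=2)
fetch = lambda k: f"content of {k}"
cache.get("lesson1", fetch)
cache.get("lesson2", fetch)
cache.get("lesson1", fetch)   # served from the cache
cache.get("lesson3", fetch)   # evicts lesson2, the least recently used
print(cache.hits, cache.misses)  # 1 3
```

A real deployment would add invalidation and expiry, but the hit/miss accounting above is the core of the availability and performance argument the abstract makes.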

A distributed object database stores objects persistently at servers. Applications run on client machines, fetching objects into a client-side cache of objects. If fetching and cache management are done in terms of objects, rather than fixed-size units such as pages, three problems must be solved: (1) which objects to prefetch; (2) how to translate, or swizzle, inter-object references when they are fetched from server to client; and (3) which objects to displace from the cache. This thesis reports the results of experiments to test various solutions to these problems. The experiments use the runtime system of the Thor distributed object database and benchmarks adapted from the Wisconsin OO7 benchmark suite. The thesis establishes the following points: 1. For plausible workloads involving some amount of object fetching, the prefetching policy is likely to have more impact on performance than the swizzling policy or the cache management policy. 2. A simple breadth-first prefetcher can have performance tha...
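The breadth-first prefetcher mentioned in point 2 can be sketched in a few lines. The object graph and the fetch budget here are illustrative assumptions; the idea is simply to traverse inter-object references breadth-first from the requested root until a budget of objects has been gathered.

```python
from collections import deque

def bfs_prefetch(root, refs, budget):
    """Return up to `budget` object ids to fetch, breadth-first from `root`.
    `refs` maps an object id to the ids it references (the inter-object
    references that would be swizzled on fetch)."""
    order, seen, queue = [], {root}, deque([root])
    while queue and len(order) < budget:
        obj = queue.popleft()
        order.append(obj)
        for ref in refs.get(obj, []):
            if ref not in seen:
                seen.add(ref)
                queue.append(ref)
    return order

graph = {"a": ["b", "c"], "b": ["d"], "c": ["d", "e"], "d": [], "e": []}
print(bfs_prefetch("a", graph, 4))  # ['a', 'b', 'c', 'd']
```

Breadth-first order tends to bring in an object's immediate neighbours before distant ones, which matches the locality of navigational workloads and explains why such a simple policy can dominate the swizzling and eviction choices.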

The memory hierarchy of high-performance and embedded processors has been shown to be one of the major energy consumers. For example, the Level-1 (L1) instruction cache (I-Cache) of the StrongARM processor accounts for 27% of the power dissipation of the whole chip, whereas the instruction fetch unit (IFU) and the I-Cache of Intel's Pentium Pro processor are the single most important power-consuming modules, with 14% of the total power dissipation [2]. Extrapolating current trends, this portion is likely to increase in the near future, since the devices devoted to the caches occupy an increasingly larger percentage of the total area of the chip. In this paper, we propose a technique that uses an additional mini cache, the L0-Cache, located between the I-Cache and the CPU core. This mechanism can provide the instruction stream to the data path and, when managed properly, it can effectively eliminate the need for high utilization of the more expensive I-Cache. We propose, implement, and evaluate five techniques for dynamic analysis of the program instruction access behavior, which is then used to proactively guide the access of the L0-Cache. The basic idea is that only the most frequently executed portions of the code should be stored in the L0-Cache, since this is where the program spends most of its time. We present experimental results to evaluate the effectiveness of our scheme in terms of performance and energy dissipation for a series of SPEC95 benchmarks. We also discuss the performance and energy tradeoffs that are involved in these dynamic schemes. Results for these benchmarks indicate that more than 60% of the dissipated energy in the I-Cache subsystem can be saved.
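The frequency-guided admission idea can be simulated in miniature. This is not the paper's mechanism (which works in hardware with five distinct dynamic techniques); it is a simplified sketch, with an assumed admission threshold, of the basic policy that only blocks executed often enough earn a slot in the tiny L0-Cache while everything else is served from the L1 I-Cache.

```python
from collections import Counter

def simulate_l0(trace, capacity, threshold):
    """Admit a basic block into the tiny L0-Cache only after it has been
    executed `threshold` times; otherwise serve it from the larger, more
    power-hungry L1 I-Cache. Returns (l0_hits, l1_accesses)."""
    counts = Counter()
    l0 = set()
    l0_hits = l1_accesses = 0
    for block in trace:
        counts[block] += 1
        if block in l0:
            l0_hits += 1
        else:
            l1_accesses += 1
            if counts[block] >= threshold and len(l0) < capacity:
                l0.add(block)  # hot block: keep it in the L0-Cache
    return l0_hits, l1_accesses

trace = ["loop"] * 10 + ["exit"]
print(simulate_l0(trace, capacity=2, threshold=2))  # (8, 3)
```

Even this toy trace shows the leverage: a loop body executed ten times costs only two L1 accesses before the L0-Cache absorbs the rest, while the cold `exit` block never pollutes the small cache.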

We have performed a study of the usage of the Windows NT File System through long-term kernel tracing. Our goal was to provide a new data point with respect to the 1985 and 1991 trace-based file system studies, to investigate the usage details of the Windows NT file system architecture, and to study the overall statistical behavior of the usage data. In this paper we report on these issues through a detailed comparison with the older traces, through details on the operational characteristics, and through a usage analysis of the file system and cache manager. Besides the architectural insights, we provide evidence for the pervasive presence of heavy-tail distribution characteristics in all aspects of file system usage. Extreme variances are found in session inter-arrival time, session holding times, read/write frequencies, read/write buffer sizes, etc., which is of importance to system engineering, tuning and benchmarking.
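One common way to flag the "extreme variance" the study reports is the squared coefficient of variation of a trace metric. The sample values below are invented for illustration; the point is only the diagnostic, not the study's data.

```python
import statistics

def dispersion(samples):
    """Squared coefficient of variation (variance over squared mean);
    values well above 1 suggest the extreme variance typical of
    heavy-tailed file-system workloads."""
    mean = statistics.fmean(samples)
    return statistics.pvariance(samples) / (mean * mean)

light = [10, 11, 9, 10, 10, 10]           # narrow spread of inter-arrivals
heavy = [1, 1, 1, 1, 1, 1, 1, 1, 1, 500]  # one extreme session dominates
print(dispersion(light) < 1 < dispersion(heavy))  # True
```

For benchmarking, this matters because a synthetic workload generated from the mean alone will badly underestimate tail behaviour such as the rare very long session or very large buffer.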

Persistent systems manage main memory as a cache for efficient access to frequently accessed persistent data. Good cache management requires some knowledge of the semantics of the applications running against it. We are attacking the performance problems of persistence for Java through analysis, profiling, and optimisation of Java classes and methods executing in an orthogonally persistent setting. Knowledge of application behaviour is derived through analysis and profiling, and applied by both a static bytecode transformer and the run-time system to optimise the actions of Java programs as they execute against persistent storage. Our prototype will unify distinct persistence optimisations within a single optimisation framework, deriving its power from treatment of the entire persistent application, consisting of both program code and data stored in the database, for whole-application analysis, profiling and optimisation. Keywords: persistence, Java, bytecode, program analysis, dynam...

This paper proposes a dynamic cache partitioning method for simultaneous multithreading systems. We present a general partitioning scheme that can be applied to set-associative caches at any partition granularity. Furthermore, in our scheme threads can have overlapping ...
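One simple way to realise such a partition, sketched here under the assumption that the granularity is a cache way and that miss counts drive the allocation (the paper's actual policy is not given in this excerpt), is a largest-remainder split of the ways among threads:

```python
def partition_ways(miss_counts, total_ways):
    """Split the ways of a set-associative cache among threads in
    proportion to their observed miss counts (largest-remainder rule),
    guaranteeing each thread at least one way."""
    n = len(miss_counts)
    total = sum(miss_counts)
    spare = total_ways - n  # one way is reserved per thread up front
    shares = [m * spare / total for m in miss_counts]
    ways = [1 + int(s) for s in shares]
    # hand the leftover ways to the largest fractional remainders
    leftovers = sorted(range(n), key=lambda i: shares[i] - int(shares[i]),
                       reverse=True)
    for i in leftovers[: total_ways - sum(ways)]:
        ways[i] += 1
    return ways

print(partition_ways([300, 100], total_ways=8))  # [6, 2]
```

Re-running the allocation periodically as miss counts evolve is what makes the partitioning dynamic; an overlapping scheme, as the abstract mentions, would additionally let threads share some ways rather than own them exclusively.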

Communication between mobile clients and database servers in a mobile computing environment is via wireless channels with low bandwidth and low reliability. A mobile client could cache its frequently accessed database items into its local storage in order to improve performance of database queries and availability of database items for query processing during disconnection. We describe a mobile caching mechanism
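The availability argument can be made concrete with a sketch. The class and its query protocol below are illustrative assumptions, not the paper's mechanism: a client caches items it reads over the wireless link and falls back to the cached copy when disconnected.

```python
class MobileCache:
    """Client-side cache of database items; serves locally stored copies
    when the wireless link is down (a sketch of the availability idea)."""
    def __init__(self):
        self.items = {}

    def query(self, key, server, connected):
        if connected:
            value = server[key]        # fresh read over the wireless link
            self.items[key] = value    # cache the frequently accessed item
            return value, "fresh"
        if key in self.items:
            return self.items[key], "cached"  # answer during disconnection
        raise LookupError(f"{key} unavailable while disconnected")

server = {"balance": 42}
cache = MobileCache()
print(cache.query("balance", server, connected=True))   # (42, 'fresh')
print(cache.query("balance", server, connected=False))  # (42, 'cached')
```

The hard part a real mechanism must add, and which the abstract alludes to, is invalidating these cached items when the server updates them over a low-bandwidth, unreliable channel.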

The breakthrough in wireless networking has prompted a new concept of computing, called mobile computing, in which users toting portable devices have access to a shared infrastructure, independent of their physical location. Mobile computing is becoming increasingly vital due to the increase in the number of portable computers and the aspiration to have continuous network connectivity to the Internet irrespective of the physical location of the node. Mobile computing systems are computing systems that may be readily moved physically and whose computing ability may be used while they are being moved. Mobile computing has rapidly become a vital new paradigm in today's world of networked computing systems. It includes software, hardware and mobile communication. Ranging from wireless laptops to cellular phones, and from WiFi/Bluetooth-enabled PDAs to wireless sensor networks, mobile computing has become ubiquitous in its influence on our daily lives. In this paper, various types of mobile devices are discussed and examined in detail, along with the operating systems most commonly used on these devices. Another aim of this paper is to point out some of the characteristics, applications, limitations, and issues of mobile computing.

On-disk sequentiality of requested blocks, or their spatial locality, is critical to real disk performance, where the throughput of access to sequentially placed disk blocks can be an order of magnitude higher than that of access to randomly placed blocks. Unfortunately, spatial locality of cached blocks is largely ignored, and only temporal locality is considered in current system buffer cache management. Thus, disk performance for workloads without dominant sequential accesses can be seriously degraded. To address this problem, we propose a scheme called DULO (DUal LOcality) which exploits both temporal and spatial localities in buffer cache management. Leveraging the filtering effect of the buffer cache, DULO can influence the I/O request stream by making the requests passed to the disk more sequential, thus significantly increasing the effectiveness of I/O scheduling and prefetching for disk performance improvements. We have implemented a prototype of DULO in Linux 2.6.11...
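The dual-locality intuition can be sketched as an eviction heuristic. This is a simplification of DULO, not its actual algorithm: among equally old blocks, prefer to evict members of long sequential runs, because sequentially placed blocks are cheap to re-fetch from disk, while isolated (random) blocks are expensive and worth keeping cached.

```python
def dulo_evict(cached_blocks, n):
    """Pick `n` victim block numbers, preferring members of long
    sequential runs; isolated random blocks are kept longest."""
    blocks = sorted(cached_blocks)
    runs, run = [], [blocks[0]]
    for b in blocks[1:]:
        if b == run[-1] + 1:
            run.append(b)          # extend the current sequential run
        else:
            runs.append(run)
            run = [b]
    runs.append(run)
    runs.sort(key=len, reverse=True)  # evict from the longest run first
    victims = []
    for run in runs:
        for b in run:
            if len(victims) == n:
                return victims
            victims.append(b)
    return victims

print(dulo_evict({7, 20, 21, 22, 23, 50}, n=3))  # [20, 21, 22]
```

Evicting from the run leaves the cache holding the randomly placed blocks 7 and 50, exactly the ones whose re-reads would cost a disk seek; the run, if needed again, comes back in one sequential sweep.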

In a distributed storage system, client caches managed on the basis of small-granularity objects can provide better memory utilization than page-based caches. However, object servers, unlike page servers, must perform additional disk reads. These installation reads are required to ...

E-learning is a way of teaching by using modern communication mechanisms, whether remote or in the classroom. The important point is to use all kinds of technology to deliver information to the learner in a shorter time, with less effort and greater benefit. E-learning is able to change what and how we deliver the learning experience to students across time or space, which has led to its evolution. But there are some challenges facing e-learning. We propose a new idea for an e-learning system relying upon crawling and memory management together with web services, to make the model adaptive and predictive and thereby make e-learning more effective in terms of accuracy, efficiency, availability and high performance of resources.

XML Web services can now be accessed in all places and at all times. The problem now facing these XML Web services is the need to be universally available. Caching can be used by client applications that use XML Web services on wireless or mobile networks in the face of intermittent connectivity. The idea of interjecting a client-side cache proxy may be a step in the direction of the ultimate goal of a seamless online/offline operating environment for these XML Web services. But Web services present new challenges to existing cache managers, since they have generally been designed without regard to caching and hence offer little support. The WSDL description of a Web service specifies the message format necessary to invoke a service operation, but lacks the information needed to indicate whether an operation will modify the server state or produce different results on different invocations. We have suggested several annotations to the WSDL document that will allow cu...
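A cache proxy driven by such annotations can be sketched as follows. The annotation names (`cacheable`, `ttl`) and the operations are illustrative assumptions, not the paper's actual WSDL extensions: the proxy caches a response only when the operation is marked as leaving server state unchanged and returning deterministic results.

```python
import time

class WebServiceProxy:
    """Client-side cache proxy: caches a response only when the operation
    is annotated as safe to cache (hypothetical annotation scheme)."""
    def __init__(self, annotations, call):
        self.annotations = annotations  # operation -> {"cacheable", "ttl"}
        self.call = call                # the actual web-service invocation
        self.cache = {}

    def invoke(self, operation, arg):
        meta = self.annotations.get(operation, {"cacheable": False})
        key = (operation, arg)
        if meta.get("cacheable"):
            hit = self.cache.get(key)
            if hit and time.monotonic() - hit[1] < meta.get("ttl", 60):
                return hit[0]  # serve from cache (works offline too)
        result = self.call(operation, arg)
        if meta.get("cacheable"):
            self.cache[key] = (result, time.monotonic())
        return result

calls = []
def call(op, arg):
    calls.append(op)               # record each real invocation
    return f"{op}({arg})"

proxy = WebServiceProxy({"getCourse": {"cacheable": True, "ttl": 60}}, call)
proxy.invoke("getCourse", "cs101")
proxy.invoke("getCourse", "cs101")   # second call answered from the cache
proxy.invoke("enroll", "cs101")      # not annotated: always hits the server
print(calls)  # ['getCourse', 'enroll']
```

The key point the abstract makes survives in the sketch: without the annotation, the proxy cannot distinguish a read-only `getCourse` from a state-changing `enroll`, so it must conservatively forward everything.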

Unifier: Unifying Cache Management and Communication Buffer Management for PVFS over InfiniBand. Jiesheng Wu, Pete Wyckoff, Dhabaleswar Panda, Rob Ross. 2004 IEEE International Symposium on Cluster Computing and the Grid ...