Toward transparent selective sequential consistency in distributed shared memory systems
Related papers
2004
Relaxed memory consistency models, such as release consistency, were introduced in order to reduce the impact of remote memory access latency in both software and hardware distributed shared memory (DSM). However, in a software DSM, it is also important to reduce the number of messages and the amount of data exchanged for remote memory access. Lazy release consistency is a new algorithm for implementing release consistency that lazily pulls modifications across the interconnect only when necessary. Trace-driven simulation using the SPLASH benchmarks indicates that lazy release consistency reduces both the number of messages and the amount of data transferred between processors. These reductions are especially significant for programs that exhibit false sharing and make extensive use of locks.
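The mechanism this abstract describes is that modifications made inside a critical section are not pushed eagerly at release; the next acquirer only learns which pages are stale and fetches their contents on demand. Below is a minimal, single-process C sketch of that bookkeeping, assuming a single lock and page-level write notices; the names (notice, valid, release_lock, acquire_lock) are illustrative, not TreadMarks' actual API.

```c
/* Minimal single-process sketch of the lazy release consistency idea:
 * modifications are pulled only at lock acquire, as write notices that
 * invalidate stale pages.  Names are illustrative, not a real DSM API. */
#include <stdio.h>
#include <string.h>

#define NPAGES 8
#define NPROCS 2

static int valid[NPROCS][NPAGES];  /* per-processor page validity */
static int notice[NPAGES];         /* write notices from the last release */

/* release: publish which pages this processor dirtied in the critical section */
static void release_lock(int pid, const int *dirty) {
    memcpy(notice, dirty, sizeof(notice));
    printf("P%d releases lock, publishing write notices\n", pid);
}

/* acquire: lazily pull the notices and invalidate the named pages;
 * actual page contents would only be fetched on the next access fault */
static void acquire_lock(int pid) {
    for (int p = 0; p < NPAGES; p++)
        if (notice[p]) valid[pid][p] = 0;
    printf("P%d acquires lock, invalidating published pages\n", pid);
}

int main(void) {
    for (int i = 0; i < NPROCS; i++)
        for (int p = 0; p < NPAGES; p++)
            valid[i][p] = 1;

    int dirty[NPAGES] = {0};
    dirty[3] = 1;            /* P0 writes page 3 inside the critical section */
    release_lock(0, dirty);
    acquire_lock(1);         /* P1 learns page 3 is stale only now */

    for (int p = 0; p < NPAGES; p++)
        if (!valid[1][p]) printf("page %d is stale on P1\n", p);
    return 0;
}
```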
A View-based Consistency Model based on Transparent Data Selection in Distributed Shared Memory
2001
This paper proposes a novel View-based Consistency model for Distributed Shared Memory, in which a new concept, view, is coined. A view is a set of data objects that a processor has the right to access in the shared memory. The View-based Consistency model only requires that the data objects of a processor's view are updated before a processor accesses them. In this way, it can achieve the maximum relaxation of constraints on modification propagation and execution in data-race-free programs. This paper first briefly reviews a number of related consistency models in terms of their use of three techniques - time, processor and data selection - which each eliminate some unnecessary propagation of memory modifications while guaranteeing sequential consistency for data-race-free programs. Then, we present the View-based Consistency model and its implementation. In contrast with other models, the View-based Consistency model can achieve transparent data selection without progra...
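To illustrate the data selection the abstract describes, the sketch below (an assumed illustration, not the paper's implementation) contrasts view-based updating with plain lazy invalidation: at an acquire, only modified pages that fall inside the acquiring processor's view are updated, while modified pages outside the view are skipped.

```c
/* Illustrative sketch of view-based data selection: of the pages named
 * in incoming write notices, only those in the acquirer's view are
 * brought up to date.  The view is detected transparently in the paper;
 * it is hard-coded here for illustration. */
#include <stdio.h>

#define NPAGES 8

static const int in_view[NPAGES]  = {0, 1, 0, 1, 0, 0, 0, 0};  /* pages the acquirer may access */
static const int modified[NPAGES] = {0, 1, 1, 1, 0, 0, 1, 0};  /* pages named by write notices */

int main(void) {
    for (int p = 0; p < NPAGES; p++) {
        if (!modified[p])
            continue;
        if (in_view[p])
            printf("page %d: in view, update before access\n", p);
        else
            printf("page %d: modified but outside view, skip\n", p);
    }
    return 0;
}
```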
Message-driven relaxed consistency in a software distributed shared memory
Message-passing and distributed shared memory have their respective advantages and disadvantages in distributed parallel programming. We approach the problem of integrating both mechanisms into a single system by proposing a new message-driven coherency mechanism. Messages carrying explicit causality annotations are exchanged to trigger memory coherency actions. By adding annotations to standard message-based protocols, it is easy to construct efficient implementations of common synchronization and communication mechanisms. Because these are user-level messages, the set of available primitives is extended easily with language- or application-specific mechanisms. CarlOS, an experimental prototype for evaluating this approach, is derived from the lazy release consistent memory of TreadMarks. We describe the message-driven coherency memory model used in CarlOS, and we examine the performance of several applications.
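As a rough illustration of the message-driven idea, the C fragment below attaches a causality annotation to a user-level message (here a single logical timestamp, an assumption made for simplicity; CarlOS's actual annotations are richer), and the receive handler triggers the coherence actions needed to make causally preceding updates visible before the payload is delivered.

```c
/* Hedged sketch of message-driven coherence: receiving an annotated
 * message triggers the coherence actions for everything that "happened
 * before" the send.  Names and fields are illustrative, not CarlOS's
 * actual protocol. */
#include <stdio.h>

struct annotated_msg {
    int payload;       /* ordinary application data */
    int causal_time;   /* causality annotation carried by the message */
};

static int local_time = 0;   /* coherence actions applied up to this time */

static void on_receive(struct annotated_msg m) {
    if (m.causal_time > local_time) {
        /* apply write notices for intervals (local_time, m.causal_time] */
        printf("applying coherence actions up to time %d\n", m.causal_time);
        local_time = m.causal_time;
    }
    printf("delivering payload %d\n", m.payload);
}

int main(void) {
    struct annotated_msg m = { .payload = 42, .causal_time = 3 };
    on_receive(m);
    return 0;
}
```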
Selection-based Weak Sequential Consistency Models for Distributed Shared Memory
Based on time, processor, and data selection techniques, a group of Weak Sequential Consistency models have been proposed to improve the performance of Sequential Consistency for Distributed Shared Memory. These models can guarantee Sequential Consistency for data-race-free programs that are properly labelled. This paper reviews and discusses these models in terms of their use of the selection techniques, and their programmer interfaces are also discussed and compared. Among them, the View-based Consistency model is recognized as offering the greatest performance advantage of the Weak Sequential Consistency models. An implementation of the View-based Consistency model is presented. Finally, the paper suggests future directions for implementation effort on Distributed Shared Memory.
View-based Consistency for Distributed Shared Memory
2000
This paper proposes a novel View-based Consistency model for Distributed Shared Memory. A view is a set of data objects that a processor has the right to access in a data-race-free program. The View-based Consistency model only requires that the data objects of a view be updated before a processor accesses them. Compared with other memory consistency models, the View-based Consistency model can achieve data selection without user annotation, along with the associated performance advantage, though the supporting implementation techniques need to be explored further.
Implementation and Consistency Issues in Distributed Shared Memory
Programmers increasingly want their tasks to run much faster than before, and parallel processing has emerged to satisfy this demand. For a long time, parallel programs were written only for either a multiprocessing environment or a multi-computing environment; each of these parallel processing systems has its own relative advantages and disadvantages. Distributed Shared Memory (DSM) is a new and attractive area of research that combines the advantages of shared-memory parallel processors (multiprocessors) and distributed systems (multi-computers). However, in a DSM environment there are critical issues, such as memory consistency, that must be handled carefully. In this paper, an overview of DSM is given after a brief description of distributed computing systems. Various implementation issues and consistency models related to DSM are then presented, followed by an example of a simple program that can be implemented in a DSM environment using OpenSHMEM.
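The paper's simple example program is not reproduced here, but a minimal OpenSHMEM program in the same spirit might look like the following: PE 0 writes a value into a symmetric variable on PE 1, and a barrier makes the update visible. The exact compile and launch commands (e.g. oshcc, oshrun) vary by implementation.

```c
/* Minimal OpenSHMEM example: a one-sided put into a symmetric variable,
 * followed by a barrier so every PE sees a consistent value.
 * Typical build/run: oshcc example.c && oshrun -np 2 ./a.out */
#include <stdio.h>
#include <shmem.h>

static int shared_value = 0;   /* symmetric: exists on every PE */

int main(void) {
    shmem_init();
    int me   = shmem_my_pe();
    int npes = shmem_n_pes();

    if (me == 0 && npes > 1) {
        int v = 123;
        shmem_int_put(&shared_value, &v, 1, 1);  /* remote write into PE 1 */
    }
    shmem_barrier_all();                         /* complete and publish the write */

    printf("PE %d of %d sees shared_value = %d\n", me, npes, shared_value);

    shmem_finalize();
    return 0;
}
```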
An efficient implementation of sequentially consistent distributed shared memories
1993
Recently, distributed shared memory systems have received much attention because such an abstraction simplifies programming. In this paper, we present a data consistency protocol for a distributed system which implements sequentially consistent memories. The protocol is aimed at an environment where no special support for atomic broadcast exists. As compared to previously proposed protocols, our protocol eliminates the need of atomic broadcast and significantly reduces the amount of information flow among the processors. This is realized by maintaining state information and capturing causal relations among read and write operations.
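The abstract does not spell out how the causal relations among read and write operations are captured; one standard technique, shown below purely as an assumed illustration and not necessarily the paper's mechanism, is to attach vector timestamps to operations so that a read inherits the causal history of the write it observes.

```c
/* Sketch of vector-timestamp bookkeeping for capturing causal relations
 * among reads and writes (an assumption for illustration, not this
 * paper's exact protocol). */
#include <stdio.h>

#define NPROCS 3

static int vc[NPROCS][NPROCS];   /* vc[i] = vector clock of processor i */

/* a local write on processor i advances its own component */
static void on_write(int i) {
    vc[i][i]++;
}

/* a read on processor i that returns a value written under timestamp w
 * makes that write causally precede the read: merge w into vc[i] */
static void on_read(int i, const int *w) {
    for (int k = 0; k < NPROCS; k++)
        if (w[k] > vc[i][k]) vc[i][k] = w[k];
    vc[i][i]++;
}

int main(void) {
    on_write(0);            /* P0 writes x */
    on_read(1, vc[0]);      /* P1 reads x and inherits P0's history */
    for (int k = 0; k < NPROCS; k++)
        printf("vc[1][%d] = %d\n", k, vc[1][k]);
    return 0;
}
```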