Wait-Free Linearizable Implementation of a Distributed Shared Memory (Algorithm Engineering as a New Paradigm)
Related papers
Wait-free linearizable distributed shared memory
We consider wait-free linearizable implementations of shared objects on a distributed message-passing system. We assume that the system provides each process with a local clock that runs at the same speed as global time and that all message delays are in the range [d − u, d], where d and u (0 < u ≤ d) are constants known to every process. We present four wait-free linearizable implementations of read/write registers on reliable and unreliable broadcast models. We also present two wait-free linearizable implementations of general objects on a reliable broadcast model. The efficiency of an implementation is measured by the worst-case response time for each operation of the implemented object. The response times of our wait-free implementations of read/write registers on a reliable broadcast model are better than those of a previously known implementation in which wait-freedom is not taken into account.
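One simple way to exploit the bounded-delay model described above is to have a writer broadcast its update and have every copy apply it exactly d time units later, so all copies change state at the same instant. The toy single-machine simulation below (hypothetical names, and assuming perfectly synchronized clocks, a simplification of the paper's model) illustrates only that delayed-application idea; it is not the paper's actual algorithm, whose response times are tighter.

```python
import heapq

class TimedRegisterSim:
    """Toy simulation of a broadcast-based register in a model where
    every message arrives within d time units: an update broadcast at
    time t takes effect at every copy at time t + d, so all copies
    change in the same order (assumes perfectly synchronized clocks)."""

    def __init__(self, n, d):
        self.d = d
        self.copies = [None] * n   # one local copy per simulated process
        self.pending = []          # min-heap of (apply_time, seq, value)
        self.seq = 0               # tie-breaker for equal apply times

    def write(self, now, value):
        # Schedule the update to take effect at now + d at every copy.
        heapq.heappush(self.pending, (now + self.d, self.seq, value))
        self.seq += 1
        return now + self.d        # time at which the write completes

    def read(self, now, pid):
        # Apply every update whose effect time has been reached,
        # then return process pid's local copy.
        while self.pending and self.pending[0][0] <= now:
            _, _, value = heapq.heappop(self.pending)
            self.copies = [value] * len(self.copies)
        return self.copies[pid]
```

In this crude sketch a write completes only after a full delay d; the paper's contribution is precisely that wait-free implementations can achieve much better worst-case response times.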
An Optimistic Protocol for a Linearizable Distributed Shared Memory System
Parallel Processing Letters, 1996
Distributed shared memory systems have recently received much attention because such an abstraction simplifies programming. In this paper, we present a simple protocol which implements the linearizability consistency criterion in a distributed shared memory system. Unlike previously implemented protocols, our protocol is based on an optimistic approach: it eliminates the need for potentially expensive synchronization among processors on each write operation, but may require processes to roll back.
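The optimistic idea can be caricatured in a few lines: a write takes effect locally at once, with no inter-processor synchronization, and the process keeps an undo log so the write can be rolled back if it later turns out to conflict with a remote write. The sketch below uses hypothetical names and omits all of the protocol's actual conflict-detection and consistency machinery.

```python
class OptimisticRegister:
    """Toy sketch of an optimistic write: apply locally and immediately,
    remember the previous state, and roll back on a (externally
    detected) conflict. Illustrative only, not the paper's protocol."""

    def __init__(self):
        self.value = None
        self.undo_log = []  # previous values, newest last

    def write(self, v):
        self.undo_log.append(self.value)  # save state for a possible rollback
        self.value = v                    # optimistic: no synchronization

    def rollback(self):
        # Invoked when a conflicting remote write invalidates ours.
        self.value = self.undo_log.pop()
```

The trade-off the abstract names is visible even here: the fast path does no coordination at all, but correctness then depends on being able to undo.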
An implementation of distributed shared memory
Software: Practice and Experience, 1991
Shared memory is a simple yet powerful paradigm for structuring systems. Recently, there has been an interest in extending this paradigm to non-shared memory architectures as well. For example, the virtual address spaces for all objects in a distributed object-based system could be viewed as constituting a global distributed shared memory. We propose a set of primitives for managing distributed shared memory. We present an implementation of these primitives in the context of an object-based operating system as well as on top of Unix. KEY WORDS: Distributed shared memory; Distributed operating systems; Object-based systems. (U. Ramachandran and M. Y. A. Khalidi)
Eventually linearizable shared objects
Proceeding of the 29th ACM SIGACT-SIGOPS symposium on Principles of distributed computing - PODC '10, 2010
Linearizability is the strongest known consistency property of shared objects. In asynchronous message-passing systems, linearizability can be achieved with ◇S and a majority of correct processes. In this paper we introduce the notion of Eventual Linearizability, the strongest known consistency property that can be attained with ◇S and any number of crashes. We show that linearizable shared-object implementations can be augmented to support weak operations, which need to be linearized only eventually. Unlike strong operations, which must always be linearized, weak operations terminate even in worst-case runs. However, there is a tradeoff between ensuring termination of weak and of strong operations when processes have access only to ◇S. If weak operations terminate in the worst case, then we show that strong operations terminate only in the absence of concurrent weak operations. Finally, we show that an implementation based on ◇P exists that guarantees termination of all operations.
Extending the Wait-free Hierarchy to Multi-Threaded Systems
Proceedings of the 39th Symposium on Principles of Distributed Computing, 2020
In modern operating systems and programming languages adapted to multicore computer architectures, parallelism is abstracted by the notion of execution threads. Multi-threaded systems have two major specificities: (1) new threads can be created dynamically at runtime, so there is no bound on the number of threads participating in a long-running execution; (2) threads have access to a memory allocation mechanism that cannot allocate infinite arrays. This makes it challenging to adapt some algorithms to multi-threaded systems, especially those that assign one shared register per process. This paper explores the synchronization power of shared objects in multi-threaded systems by extending the famous wait-free hierarchy to take these constraints into consideration. It proposes to subdivide the set of objects with an infinite consensus number into five new degrees, depending on their ability to synchronize a bounded, finite, or infinite number of processes, with or without the need to allocate an infinite array. It then exhibits one object illustrating each proposed degree. CCS Concepts: Theory of computation → Distributed computing models; Software and its engineering → Process synchronization; Computer systems organization → Multicore architectures; Dependable and fault-tolerant systems and networks.
HAL (Le Centre pour la Communication Scientifique Directe), 2017
Yet another paper on the implementation of read/write registers in crash-prone asynchronous message-passing systems! Yes..., but, differently from its predecessors, this paper looks for a communication abstraction which captures the essence of such an implementation in the same sense that total order broadcast can be associated with consensus, or causal message delivery can be associated with causal read/write registers. To this end, the paper introduces a new communication abstraction, named SCD-broadcast (SCD standing for "Set-Constrained Delivery"), which, instead of a single message, delivers to processes sets of messages (whose size can be arbitrary), such that the sequences of message sets delivered to any two processes satisfy some constraints. The paper then shows that: (a) SCD-broadcast allows for a very simple implementation of a snapshot object (and consequently also of atomic read/write registers) in crash-prone asynchronous message-passing systems; (b) SCD-broadcast can be built from snapshot objects (hence SCD-broadcast and snapshot objects, or read/write registers, are "computationally equivalent"); (c) SCD-broadcast can be built in message-passing systems where any minority of processes may crash (which is the weakest assumption on the number of possible process crashes needed to implement a read/write register).
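To see why delivering sets of messages makes snapshot objects easy, consider the toy single-machine sketch below (hypothetical names, no real network, no crashes): because every write in a delivered set is applied before any later snapshot can observe memory, a snapshot never sees a half-applied batch. This is only the flavor of point (a) above, not the paper's actual construction.

```python
class SnapshotViaSCD:
    """Toy sketch of a snapshot object built over an SCD-broadcast-like
    interface: messages are delivered as whole *sets*, and every write
    in a set is applied before the memory can next be observed."""

    def __init__(self, n):
        self.mem = [0] * n  # one component per process
        self.buffer = []    # messages broadcast but not yet delivered

    def scd_broadcast(self, msg):
        self.buffer.append(msg)

    def scd_deliver(self):
        # Deliver the whole buffered set at once: apply every write in
        # the set before any caller can take a snapshot of the memory.
        batch, self.buffer = self.buffer, []
        for (i, v) in batch:
            self.mem[i] = v
        return batch

    def write(self, i, v):
        self.scd_broadcast((i, v))
        self.scd_deliver()  # in the real protocol delivery is asynchronous

    def snapshot(self):
        self.scd_deliver()  # flush pending sets, then observe atomically
        return list(self.mem)
```

In the real message-passing protocol the constraints on which sets may be delivered to which processes are what make this simple pattern safe; here they are trivialized away by running everything on one machine.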
Time-Efficient Read/Write Register in Crash-Prone Asynchronous Message-Passing Systems
Lecture Notes in Computer Science, 2016
The atomic register is one of the most basic and useful objects in computing science, and its simple read/write semantics is appealing when programming distributed systems. Hence, its implementation on top of crash-prone asynchronous message-passing systems has received a lot of attention. It was shown that having a strict minority of processes that may crash is a necessary and sufficient requirement for building an atomic register on top of a crash-prone asynchronous message-passing system. This paper revisits the notion of a fast implementation of an atomic register and presents a new time-efficient asynchronous algorithm that reduces latency in many cases: a write operation always costs a round-trip delay, while a read operation costs a round-trip delay in favorable circumstances (intuitively, when it is not concurrent with a write). When designing this algorithm, the design spirit was to stay as close as possible to the original algorithm proposed by Attiya, Bar-Noy, and Dolev.
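The round-trip costs quoted above are in the spirit of the classic Attiya–Bar-Noy–Dolev quorum scheme: a write first queries a majority for the highest timestamp and then stores a higher one at a majority, and a read picks the newest (timestamp, value) pair from a majority and writes it back. The single-machine sketch below (hypothetical names; the same majority is reused for both phases, and there is no real messaging or failure handling) shows only that timestamp/quorum skeleton, not the paper's time-efficient variant.

```python
from dataclasses import dataclass

@dataclass
class Replica:
    ts: int = 0       # logical timestamp of the stored value
    val: object = None

class QuorumRegister:
    """Toy sketch of an ABD-style majority-quorum atomic register."""

    def __init__(self, n):
        self.replicas = [Replica() for _ in range(n)]
        self.majority = n // 2 + 1

    def write(self, value):
        # Phase 1: query a majority for the highest timestamp seen so far.
        ts = max(r.ts for r in self.replicas[:self.majority]) + 1
        # Phase 2: store (ts, value) at a majority.
        for r in self.replicas[:self.majority]:
            r.ts, r.val = ts, value

    def read(self):
        # Phase 1: collect (ts, val) from a majority and pick the newest.
        newest = max(self.replicas[:self.majority], key=lambda r: r.ts)
        ts, val = newest.ts, newest.val
        # Phase 2 (write-back): ensure a majority stores the value read,
        # which is what makes the register atomic rather than just regular.
        for r in self.replicas[:self.majority]:
            if r.ts < ts:
                r.ts, r.val = ts, val
        return val
```

Each phase corresponds to one round-trip to a majority in the message-passing setting; the paper's contribution is skipping the read's second round-trip in the favorable, write-free case.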