A methodological construction of an efficient sequential consistency protocol

On Composition and Implementation of Sequential Consistency (Extended Version)

arXiv, 2016

It has been proved that to implement a linearizable shared memory in synchronous message-passing systems it is necessary to wait for a time proportional to the uncertainty in the latency of the network for both read and write operations, while waiting during read or during write operations is sufficient for sequential consistency. This paper extends this result to crash-prone asynchronous systems. We propose a distributed algorithm that builds a sequentially consistent shared memory abstraction with snapshot on top of an asynchronous message-passing system where less than half of the processes may crash. We prove that it is only necessary to wait when a read/snapshot is immediately preceded by a write on the same process. We also show that sequential consistency is composable in some cases commonly encountered: 1) objects that would be linearizable if they were implemented on top of a linearizable memory become sequentially consistent when implemented on top of a sequential memory w...
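A minimal sketch of the flavour of this result, under assumed names (SeqConsistentRegister, _WriteHandle) and a majority-acknowledged broadcast; this is not the paper's algorithm, only an illustration of "wait only when a read immediately follows a local write":

```python
import threading

class _WriteHandle:
    """Toy stand-in for the majority acknowledgement of a broadcast write."""
    def __init__(self):
        self._acked = threading.Event()

    def ack_from_majority(self):      # called once a majority has replied
        self._acked.set()

    def wait(self):                   # blocks until the write is majority-acked
        self._acked.wait()

class SeqConsistentRegister:
    def __init__(self, broadcast):
        self._broadcast = broadcast   # sends the write to all replicas,
        self._value = None            # returning a _WriteHandle
        self._pending = None          # local write not yet majority-acked

    def write(self, value):
        # Writes never wait: apply locally, propagate asynchronously.
        self._value = value
        self._pending = self._broadcast(value)

    def read(self):
        # Only a read immediately preceded by a local write must wait,
        # until a majority of replicas has acknowledged that write.
        if self._pending is not None:
            self._pending.wait()
            self._pending = None
        return self._value
```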

A Mechanism for Sequential Consistency in a Distributed Objects System

ISCA PDCS, 2004

This paper presents a new protocol for ensuring sequential consistency in a distributed objects system. The protocol is efficient and simple. In addition to providing a high-level overview of the protocol, we give a brief discussion of the implementation details. We also provide a mathematical model that we used to prove the correctness of our approach.

Update Consistency for Wait-Free Concurrent Objects

2015 IEEE International Parallel and Distributed Processing Symposium, 2015

In large scale systems such as the Internet, replicating data is an essential feature in order to provide availability and fault-tolerance. Attiya and Welch proved that using strong consistency criteria such as atomicity is costly, as each operation may need an execution time linear in the latency of the communication network. Weaker consistency criteria like causal consistency and PRAM consistency do not ensure convergence: the different replicas are not guaranteed to converge towards a unique state. Eventual consistency guarantees that all replicas eventually converge when the participants stop updating. However, it fails to fully specify the semantics of the operations on shared objects and requires additional non-intuitive and error-prone distributed specification techniques.
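The convergence problem can be made concrete with a toy example (not from the paper): two replicas that apply the same pair of concurrent updates in different orders end in different states, unless an agreed total order on updates is imposed, which is what update consistency requires.

```python
# Why causal/PRAM consistency alone does not ensure convergence:
# the same concurrent updates, applied in different orders, diverge.

def apply_all(initial, updates):
    state = initial
    for f in updates:
        state = f(state)
    return state

set_to_1 = lambda _: 1          # concurrent write(1)
set_to_2 = lambda _: 2          # concurrent write(2)

replica_a = apply_all(0, [set_to_1, set_to_2])  # ends at 2
replica_b = apply_all(0, [set_to_2, set_to_1])  # ends at 1
assert replica_a != replica_b   # the replicas have diverged

# Update consistency repairs this by requiring every replica to converge to
# the state reached by *some* total order of all updates, e.g. (timestamp, id):
ordered = sorted([(1, "a", set_to_1), (2, "b", set_to_2)])
converged = apply_all(0, [f for _, _, f in ordered])  # both replicas compute 2
```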

Allowing Atomic Objects to Coexist with Sequentially Consistent Objects

Lecture Notes in Computer Science, 2005

A concurrent object is an object that can be concurrently accessed by several processes. Two well-known consistency criteria for such objects are atomic consistency (also called linearizability) and sequential consistency. Both criteria require that all the operations on all the concurrent objects be totally ordered in such a way that each read operation obtains the last value written into the corresponding object. They differ in the meaning of the word "last", which refers to physical time for atomic consistency and to logical time for sequential consistency. This paper investigates the merging of these consistency criteria. It presents a protocol that allows the upper-layer multiprocess program to use both types of consistency simultaneously: purely atomic objects can coexist with purely sequentially consistent objects. The protocol is built on top of an asynchronous message-passing distributed system. Interestingly, this protocol is generic in the sense that it can be tailored to provide only one of these consistency criteria.
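The distinction between the two meanings of "last" can be checked mechanically. The brute-force sketch below (hypothetical helper names) accepts the classic history that is sequentially consistent but not linearizable: a read returning the initial value even though a write by another process has already finished in real time.

```python
from itertools import permutations

# A history is sequentially consistent if some interleaving that preserves
# each process's program order is legal for a read/write register
# (every read returns the latest preceding write; 0 if there is none).
history = {                       # per-process program order
    "P": [("write", "x", 1)],
    "Q": [("read",  "x", 0)],     # Q reads 0 *after* P's write in real time
}

def legal(seq):
    mem = {}
    for op, var, val in seq:
        if op == "write":
            mem[var] = val
        elif mem.get(var, 0) != val:
            return False
    return True

def sequentially_consistent(history):
    ops = [op for p in history.values() for op in p]
    for seq in permutations(ops):
        # keep only interleavings preserving each process's program order
        if all([o for o in seq if o in p] == p for p in history.values()):
            if legal(seq):
                return True
    return False

print(sequentially_consistent(history))  # True: order Q's read before P's write
```

Linearizability would reject this history, because in physical time P's write completes before Q's read begins, so the read would have to return 1.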

Normality: A Consistency Condition for Concurrent Objects

Parallel Processing Letters, 1999

This paper is focused on concurrent objects (objects shared by concurrent processes). It introduces a consistency condition called Normality, whose definition is based only on local orders of operations as perceived by processes and by objects. First, we consider the model in which each operation is on exactly one object. In this model we show that a history is linearizable iff it is normal. However, the definition of Normality is less constraining, in the sense that there are strictly more legal sequential histories which are considered equivalent to the given history when Normality is used. We next consider a more general model where operations can span multiple objects. In this model we show that Normality is strictly weaker than Linearizability, i.e., a history may be normal but not linearizable. As Normality refers only to local orders (process order and object order), it appears to be well-suited to objects supported by asynchronous distributed systems and accessed by RPC-like mecha...

Value-based sequential consistency for set objects in dynamic distributed systems

2010

This paper introduces a shared object, namely a set object that allows processes to add and remove values as well as take a snapshot of its content. A new consistency condition suited to such an object is introduced. This condition, named value-based sequential consistency, is weaker than linearizability. The paper also addresses the construction of a set object in a synchronous anonymous distributed system where participants can continuously join and leave the system.
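For concreteness, a sketch of the set-object interface under discussion, written as a plain local object (names assumed; the paper's construction replicates this state across a dynamic, anonymous, synchronous system):

```python
class SetObject:
    """Local stand-in for the replicated set object: add, remove, snapshot."""

    def __init__(self):
        self._values = set()

    def add(self, v):
        self._values.add(v)

    def remove(self, v):
        self._values.discard(v)    # removing an absent value is a no-op

    def snapshot(self):
        # Value-based sequential consistency constrains which contents a
        # snapshot may return relative to concurrent add/remove operations.
        return frozenset(self._values)
```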

Timed Consistency: Unifying Model of Consistency Protocols in Distributed Systems

Ordering and timeliness are two different aspects of consistency of shared objects in distributed systems. Timed consistency [12] is an approach that considers these two aspects simultaneously, according to the needs of the system. Hence, most well-known consistency protocols are candidates to be unified under the timed consistency approach, simply by changing some of the time or order parameters.
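The time parameter can be illustrated with a hedged sketch (invented API, not from [12]): a register whose reads may ignore writes younger than a staleness bound DELTA but must reflect all older ones, so that shrinking DELTA toward zero moves the object toward atomic behaviour while relaxing it recovers weaker protocols.

```python
import time

DELTA = 0.1  # seconds of allowed staleness (the "timeliness" knob)

class TimedRegister:
    def __init__(self):
        self._log = [(0.0, None)]        # (write time, value), oldest first

    def write(self, value):
        self._log.append((time.time(), value))

    def read(self):
        # A read may ignore writes newer than DELTA, but must reflect
        # every write older than DELTA.
        cutoff = time.time() - DELTA
        visible = [v for t, v in self._log if t <= cutoff]
        return visible[-1] if visible else self._log[-1][1]
```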

Toward transparent selective sequential consistency in distributed shared memory systems

Proceedings of the 18th International Conference on Distributed Computing Systems (ICDCS), 1998

This paper proposes a transparent selective sequential consistency approach to Distributed Shared Memory (DSM) systems. First, three basic techniques (time selection, processor selection, and data selection) are analyzed for improving the performance of strictly sequentially consistent DSM systems, and a transparent approach to achieving these selections is proposed. Then, the paper focuses on the protocols and techniques devised to achieve transparent data selection, including a novel Selective Lazy/Eager Updates Propagation protocol for propagating updates on shared data objects, and the Critical Region Updated Pages Set scheme to automatically detect the associations between shared data objects and synchronization objects. The proposed approach is able to offer the same potential performance advantages as the Entry Consistency model or the Scope Consistency model, but it imposes no extra burden on programmers and never fails to execute programs correctly. The devised protocols and techniques have been implemented and evaluated in the context of the TreadMarks DSM system. Performance results have shown that for many applications, our transparent data selection approach outperforms the Lazy Release Consistency model using either a lazy or an eager updates propagation protocol.
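A hedged sketch of the selection idea (function and callback names are illustrative, not the paper's code): at a lock release, updates to pages recorded in that lock's Critical Region Updated Pages Set are pushed eagerly, on the assumption that the next acquirer will need them, while all other modified pages are merely invalidated and fetched lazily on access.

```python
def propagate_on_release(updated_pages, crups, push, invalidate):
    """Decide, per page, between eager and lazy update propagation.

    updated_pages: pages written inside the critical section.
    crups: pages associated with this lock (Critical Region Updated Pages Set).
    push / invalidate: hypothetical transport callbacks.
    """
    for page in updated_pages:
        if page in crups:
            push(page)          # eager: the acquirer will almost surely need it
        else:
            invalidate(page)    # lazy: fetched only if actually accessed later
```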