6th Workshop on Theory of Transactional Memory

This year, the 6th edition of the Workshop on Theory of Transactional Memory (WTTM) was co-located with PODC 2014 in Paris and took place on July 14. The objective of WTTM was to discuss new theoretical challenges and recent achievements in the area of transactional computing. Among the various recent developments in the area of Transactional Memory (TM), one of the most relevant was the support for Hardware TM (HTM) introduced in several commercial processors. Unsurprisingly, the advent of HTM in commercial CPUs also had a major impact on the program of this edition of WTTM, which gathered several works addressing the programmability, efficiency, and correctness of HTM-based systems, as well as hybrid solutions combining software and hardware TM implementations (HyTM). As in previous editions, WTTM could count on the generous support of the EuroTM COST Action (IC1001) and on a set of outstanding keynote talks delivered by some of the leading researchers in the area, namely Idit Keidar, Shlomi Dolev, Maged Michael, and Michael Scott, who were invited to present their latest achievements. This edition was dedicated to the 60th birthday of Maurice Herlihy and to his foundational work on Transactional Memory, which Michael Scott commemorated in the concluding talk of the event. This report gives the highlights of the problems discussed during the workshop.

Transactional Memory (TM) is a concurrency control mechanism for synchronizing concurrent accesses to shared memory by different threads. It has been proposed as an alternative to lock-based synchronization to simplify concurrent programming while exhibiting good performance. The sequential code is encapsulated in transactions, which are sequences of accesses to shared or local variables that should be executed atomically.
A transaction ends either by committing, in which case all of its updates take effect, or by aborting, in which case all of its updates are discarded.

1 TM Correctness and Universal Constructions

Idit Keidar opened the workshop with a talk presenting joint work with Kfir Lev-Ari and Gregory Chockler on characterizing correctness for shared data structures. The idea pursued in this work is to replace the classic, overly conservative read-set validation technique (which checks that no read variable has changed since it was first read) with the verification of abstract conditions over the shared variables, called base conditions. Reading values that satisfy some base condition at every point in time implies the correctness of read-only operations. The resulting correctness guarantee, however, is not equivalent to linearizability, and can be captured through two new conditions: validity and regularity. The former requires that a read-only operation never reach a state unreachable in a sequential execution; the latter generalizes Lamport's notion of regularity [17] to arbitrary data structures. An extended version of the work presented at WTTM also appeared in the latest edition of DISC [18].

Claire Capdevielle presented her joint work with Colette Johnen and Alessia Milani on solo-fast universal constructions for deterministic abortable objects. These are objects that ensure that, if several processes contend to operate on them, a special abort response may be returned. Such a response indicates that the operation failed and guarantees that an aborted operation does not take effect [13]. Operations that do not abort return a response that is legal with respect to the sequential specification of the object. The presented construction uses only read/write registers when there is no contention, and stronger synchronization primitives, e.g., CAS, when contention occurs [3]. It relies on a lightweight helping mechanism and applies to objects that can return an abort event to indicate the failure of an operation.

Sandeep Hans presented joint work with Hagit Attiya, Alexey Gotsman, and Noam Rinetzky evaluating TMS1 as a consistency criterion that is necessary and sufficient for the case where local variables are rolled back upon transaction aborts [2]. The authors note that TMS1 [9] is not trivially formulated; in particular, its formulation allows aborted and live transactions to have different views of the system state. Their proof reveals some natural, but subtle, assumptions on the TM required for the equivalence result.
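The transactional semantics described above (commit makes all updates take effect, abort discards them) and the classic read-set validation technique that Keidar's talk sets out to replace can be illustrated with a toy software TM sketch. This is purely illustrative code under simplifying assumptions: the class and method names are our own, and a single global lock stands in for the fine-grained version metadata of a real TM implementation.

```python
import threading

class Aborted(Exception):
    """Raised when a transaction keeps failing validation."""

class ToySTM:
    """A minimal single-lock STM sketch: reads record the version they
    saw, writes are buffered, and commit re-validates the read set."""

    def __init__(self):
        self._lock = threading.Lock()
        self._store = {}    # variable name -> value
        self._version = {}  # variable name -> number of commits to it

    def run(self, tx_fn, retries=10):
        """Run tx_fn(read, write) atomically, retrying on abort."""
        for _ in range(retries):
            read_set, write_set = {}, {}

            def read(name):
                if name in write_set:          # read-your-own-writes
                    return write_set[name]
                with self._lock:
                    read_set[name] = self._version.get(name, 0)
                    return self._store.get(name)

            def write(name, value):
                write_set[name] = value        # buffered until commit

            result = tx_fn(read, write)
            with self._lock:
                # Classic read-set validation: abort if any variable we
                # read has been written since we first read it.
                if any(self._version.get(n, 0) != v
                       for n, v in read_set.items()):
                    continue  # abort: write_set is discarded, then retry
                for n, v in write_set.items():  # commit: updates take effect
                    self._store[n] = v
                    self._version[n] = self._version.get(n, 0) + 1
                return result
        raise Aborted("transaction failed validation too many times")
```

A base-condition approach, in contrast, would accept a read-only execution whose reads satisfy an abstract invariant of the data structure even if individual versions changed, which is exactly what makes it less conservative than the version check above.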
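For the abortable objects discussed by Capdevielle et al., a deliberately simplified counter sketch can convey the interface: plain reads and writes on the uncontended fast path, a CAS when contention is detected (simulated here with a lock, since Python exposes no hardware CAS), and a special abort response that guarantees the failed operation took no effect. This illustrates only the concept, not the construction from [3]; all names are hypothetical.

```python
import threading

ABORT = object()  # special response: the operation failed and took no effect

class AbortableCounter:
    """Sketch of a solo-fast deterministic abortable object: plain
    reads/writes when a process runs alone, a (simulated) CAS under
    contention, and a possible ABORT response."""

    def __init__(self):
        self._value = 0
        self._announce = None          # announcement register (read/write)
        self._lock = threading.Lock()  # simulates an atomic CAS primitive

    def _cas(self, expected, new):
        # Stand-in for a hardware compare-and-swap on self._value.
        with self._lock:
            if self._value == expected:
                self._value = new
                return True
            return False

    def increment(self, pid):
        self._announce = pid        # plain write: announce the operation
        v = self._value             # plain read of the current value
        if self._announce == pid:   # still alone? take the read/write fast path
            self._value = v + 1
            self._announce = None
            return v + 1
        # Contention detected: try once with CAS; on failure, return
        # ABORT, so the aborted increment has no effect on the counter.
        if self._cas(v, v + 1):
            return v + 1
        return ABORT
```

Operations that do not abort return the new counter value, which is legal with respect to the counter's sequential specification, matching the abortable-object contract described above.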