Amittai Aviram | Boston College
Papers by Amittai Aviram
arXiv (Cornell University), Mar 27, 2010
Telling Rhythm: Body and Meaning in Poetry. Amittai E. Aviram (Michigan). In an era when poetry as a cultural force in the West appears to be waning, Telling Rhythm presents a hopeful ...
arXiv (Cornell University), May 19, 2010
The Journal of Popular Culture, Dec 1, 1992
Postmodern Gay Dionysus: Dr. Frank N. Furter. Amittai F. Aviram. Now that the Rocky Horror Picture Show is finally available on video, thousands will be at liberty to indulge their cult interest in the privacy of their own ...
Études Littéraires, Apr 12, 2005
arXiv (Cornell University), Dec 4, 2009
arXiv (Cornell University), Mar 27, 2010
arXiv (Cornell University), Dec 4, 2009
Findings of the Association for Computational Linguistics: ACL 2023
How to guarantee the deterministic execution of parallel programs and prevent race conditions and heisenbugs is now an important research topic. One obstacle is that conventional programming models rely on naturally nondeterministic synchronization abstractions. With naturally deterministic abstractions, programming logic alone, immune to timing, determines which threads synchronize, how, and at what point, making data races impossible. By contrast, nondeterministic primitives, such as mutex locks and condition variables, place the burden on the programmer to head off data races. New programming languages may address this problem, but legacy code must be rewritten. Alternatively, deterministic thread scheduling [1] handles conventional languages, making race conditions repeatable without eliminating them. A novel OS, Determinator, handles conventional languages and eliminates data races entirely, but only supports code using a handful of naturally deterministic synchronization abstractions ...
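A minimal sketch, not drawn from the paper, of the nondeterminism the abstract describes: even correctly locked pthreads code is schedule-dependent, because the mutex itself decides nothing about ordering. The thread count and messages below are arbitrary.

```c
/* Minimal sketch, not taken from the paper: two threads contend for a
 * mutex. There is no data race, but which thread prints first is up to
 * the scheduler -- the "naturally nondeterministic" behavior described
 * above. Build with: gcc -pthread sketch.c */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    int id = *(int *)arg;
    pthread_mutex_lock(&lock);
    printf("thread %d acquired the lock\n", id);   /* order varies run to run */
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void) {
    pthread_t threads[2];
    int ids[2] = {0, 1};
    for (int i = 0; i < 2; i++)
        pthread_create(&threads[i], NULL, worker, &ids[i]);
    for (int i = 0; i < 2; i++)
        pthread_join(threads[i], NULL);
    return 0;
}
```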
Researchers widely agree that determinism in parallel programs is desirable. Although experimental parallel programming languages have long featured deterministic semantics, in mainstream parallel environments developers still build on nondeterministic constructs such as mutexes, leading to time- or schedule-dependent heisenbugs. To make deterministic programming more accessible, we introduce DOMP, a deterministic extension to OpenMP, preserving the familiarity of traditional languages such as C and Fortran, and maintaining source-compatibility with much of the existing OpenMP standard. Our analysis of parallel idioms used in 35 popular benchmarks suggests that the idioms used most often (89% of instances in the analyzed code) are either already expressible in OpenMP’s deterministic subset (74%), or merely lack more general reduction (12%) or pipeline (3%) constructs. DOMP broadens OpenMP’s deterministic subset with generalized reductions, and implements an efficient deterministic ...
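For illustration only, here is a fragment of the deterministic OpenMP subset the abstract refers to, written in standard OpenMP C rather than DOMP itself: a parallel for with a reduction clause, whose integer result is the same under any schedule.

```c
/* Illustration in standard OpenMP C (not DOMP itself): a parallel for
 * with a reduction clause, one of the idioms already in OpenMP's
 * deterministic subset. The integer result does not depend on the
 * schedule. Build with: gcc -fopenmp reduce.c */
#include <stdio.h>

int main(void) {
    long sum = 0;
    #pragma omp parallel for reduction(+:sum)
    for (long i = 1; i <= 1000000; i++)
        sum += i;
    printf("sum = %ld\n", sum);   /* always 500000500000 */
    return 0;
}
```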
Recent interest in deterministic parallelism has yielded new deterministic programming languages, which offer promising features but require rewriting existing code, and deterministic schedulers, which emulate existing thread APIs but do not eliminate races from the basic programming model. Workspace consistency (WC) is a new synchronization and memory consistency model that offers a “naturally deterministic,” race-free programming model that can be adopted in both new and existing languages. WC’s basic semantics are inspired by, and intended to be as easily understood as, the “parallel assignment” construct in sequential languages such as Perl and JavaScript, where concurrent threads always read their inputs before writing shared outputs. Prototype implementations of a restricted form of WC already exist, supporting only strictly hierarchical fork/join-style synchronization, but this paper develops and explores the model in more detail and extends it to support non-hierarchical synchronization ...
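A rough sketch of the read-before-write discipline described above, in plain OpenMP C rather than the WC prototype: each iteration reads only the old buffer and writes only its own slot of the new one, so the outcome cannot depend on thread timing. The buffer names and sizes are invented for the example.

```c
/* Sketch of the read-before-write discipline in plain OpenMP C (not the
 * WC prototype): every iteration reads only the old buffer `in` and
 * writes only its own slot of `out`, so the result is independent of
 * how threads are scheduled. Build with: gcc -fopenmp wc_sketch.c */
#include <stdio.h>

#define N 8

int main(void) {
    double in[N]  = {1, 2, 3, 4, 5, 6, 7, 8};
    double out[N];

    #pragma omp parallel for
    for (int i = 0; i < N; i++) {
        double left  = in[(i + N - 1) % N];   /* reads touch old values only */
        double right = in[(i + 1) % N];
        out[i] = 0.5 * (left + right);        /* each iteration writes its own slot */
    }

    for (int i = 0; i < N; i++)
        printf("%g ", out[i]);
    printf("\n");
    return 0;
}
```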
arXiv, 2010
Timing side-channels represent an insidious security challenge for cloud computing, because: (a) massive parallelism in the cloud makes timing channels pervasive and hard to control; (b) timing channels enable one customer to steal information from another without leaving a trail or raising alarms; (c) only the cloud provider can feasibly detect and report such attacks, but the provider's incentives are not to; and (d) resource partitioning schemes for timing channel control undermine statistical sharing efficiency, and, with it, the cloud computing business model. We propose a new approach to timing channel control, using provider-enforced deterministic execution instead of resource partitioning to eliminate timing channels within a shared cloud domain. Provider-enforced determinism prevents execution timing from affecting the results of a compute task, however large or parallel, ensuring that a task's outputs leak no timing information apart from explicit timing inputs and total compute duration. Experiments with a prototype OS for deterministic cloud computing suggest that such an approach may be practical and efficient. The OS supports deterministic versions of familiar APIs such as processes, threads, shared memory, and file systems, and runs coarse-grained parallel tasks as efficiently and scalably as current timing channel-ridden systems.
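A toy comparison, not the prototype OS and with invented function names: a result derived from wall-clock timing varies with co-tenant activity and can therefore carry a timing channel, while a result that is a pure function of its explicit inputs cannot.

```c
/* Toy comparison, not the prototype OS; function names are invented.
 * A result derived from wall-clock time depends on contention and
 * frequency scaling and can carry a timing channel; a result that is a
 * pure function of explicit inputs cannot. Build with: gcc timing_toy.c */
#include <stdio.h>
#include <time.h>

static double wall_seconds(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec * 1e-9;
}

/* Timing-dependent: the count varies with everything sharing the machine. */
static long iterations_in(double seconds) {
    double start = wall_seconds();
    long n = 0;
    while (wall_seconds() - start < seconds)
        n++;
    return n;
}

/* Deterministic: the output depends only on the explicit input. */
static long square(long x) { return x * x; }

int main(void) {
    printf("timing-dependent result: %ld\n", iterations_in(0.1));
    printf("deterministic result:    %ld\n", square(12345));
    return 0;
}
```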
arXiv: Operating Systems, 2010
The difficulty of developing reliable parallel software is generating interest in deterministic environments, where a given program and input can yield only one possible result. Languages or type systems can enforce determinism in new code, and runtime systems can impose synthetic schedules on legacy parallel code. To parallelize existing serial code, however, we would like a programming model that is naturally deterministic without language restrictions or artificial scheduling. We propose deterministic consistency (DC), a parallel programming model as easy to understand as the “parallel assignment” construct in sequential languages such as Perl and JavaScript, where concurrent threads always read their inputs before writing shared outputs. DC supports common data- and task-parallel synchronization abstractions such as fork/join and barriers, as well as non-hierarchical structures such as producer/consumer pipelines and futures. A preliminary prototype suggests that software-only implementations ...
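An informal sketch in standard OpenMP C, not a DC implementation, of the fork/join and barrier abstractions mentioned above, arranged so that each phase reads only data the previous phase finished writing.

```c
/* Informal sketch in standard OpenMP C (not a DC implementation):
 * fork/join plus a barrier, arranged so that phase 2 reads only data
 * that phase 1 finished writing. Build with: gcc -fopenmp phases.c */
#include <stdio.h>
#include <omp.h>

#define N 4

int main(void) {
    int phase1[N] = {0}, phase2[N] = {0};

    #pragma omp parallel num_threads(N)        /* fork */
    {
        int id = omp_get_thread_num();
        phase1[id] = id * 10;                  /* phase 1: write own slot */

        #pragma omp barrier                    /* all phase-1 writes complete */

        phase2[id] = phase1[(id + 1) % N];     /* phase 2: read a neighbour's output */
    }                                          /* join */

    for (int i = 0; i < N; i++)
        printf("phase2[%d] = %d\n", i, phase2[i]);
    return 0;
}
```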
In parallel programming, a reduction is an operation that combines values across threads into a single result, and can be designed and implemented so as to enforce determinism, not only on the result, but also on the intermediate values and evaluation sequence. These features make the reduction an attractive feature for a language-based approach to deterministic parallelism, especially one that, like OpenMP, provides annotations to parallelize legacy serial code. Hence reductions are a planned feature of the Deterministic OpenMP (DOMP) project. To enable DOMP to help programmers eliminate nondeterministic code wherever possible, we propose a generalized reduction that supports arbitrary data types and user-defined operations—a generalization rich in challenges for language design and implementation.
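To make the idea concrete, here is a user-defined reduction over a struct type written with standard OpenMP's declare reduction directive. This illustrates the general concept only, not DOMP's own generalized-reduction construct, and the range_t type and its combiner are invented for the example.

```c
/* Illustration of the concept using standard OpenMP's user-defined
 * reductions (a later addition to the OpenMP spec, not DOMP's own
 * construct); range_t and combine() are invented for the example.
 * Build with: gcc -fopenmp minmax.c */
#include <stdio.h>
#include <limits.h>

typedef struct { int min; int max; } range_t;

static range_t combine(range_t a, range_t b) {
    range_t r;
    r.min = a.min < b.min ? a.min : b.min;
    r.max = a.max > b.max ? a.max : b.max;
    return r;
}

static range_t range_identity(void) {
    return (range_t){ INT_MAX, INT_MIN };
}

#pragma omp declare reduction(range : range_t : omp_out = combine(omp_out, omp_in)) \
    initializer(omp_priv = range_identity())

int main(void) {
    int data[] = {7, -3, 42, 19, 0, -11, 25};
    int n = (int)(sizeof data / sizeof data[0]);
    range_t r = range_identity();

    #pragma omp parallel for reduction(range : r)
    for (int i = 0; i < n; i++)
        r = combine(r, (range_t){ data[i], data[i] });

    printf("min = %d, max = %d\n", r.min, r.max);   /* min = -11, max = 42 */
    return 0;
}
```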
Recent deterministic parallel programming models show promise for their ability to replay computations and reproduce bugs, but they currently require the programmer to adopt restrictive or unfamiliar parallel constructs. Deterministic OpenMP (DOMP) is a new deterministic parallel environment built on the familiar OpenMP framework. By leveraging OpenMP's block-structured synchronization annotations, which are largely compatible with the constraints of a deterministic model, DOMP eases the parallelization of legacy serial code and preserves substantial compatibility with OpenMP software. A few OpenMP constructs, however, such as atomic and critical, are semantically nondeterministic and unsupported in DOMP. In three well-known parallel benchmark suites, we find that most (81%) uses of these nondeterministic constructs express programming idioms that are compatible with determinism but not directly expressible in OpenMP. DOMP therefore adds new OpenMP constructs to express such idioms ...
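A small hypothetical example in standard OpenMP, not DOMP, of the kind of idiom the abstract refers to: an accumulation written with the nondeterministic critical construct alongside the equivalent determinism-friendly reduction clause it can be rewritten as.

```c
/* Small hypothetical example in standard OpenMP (not DOMP): the same
 * accumulation written with the nondeterministic `critical` construct
 * and with the `reduction` clause it can be rewritten as.
 * Build with: gcc -fopenmp idiom.c */
#include <stdio.h>

int main(void) {
    long data[] = {3, 1, 4, 1, 5, 9, 2, 6};
    int n = (int)(sizeof data / sizeof data[0]);

    /* As often written: correct, but `critical` permits any interleaving,
     * so the construct itself is nondeterministic. */
    long sum1 = 0;
    #pragma omp parallel for
    for (int i = 0; i < n; i++) {
        #pragma omp critical
        sum1 += data[i];
    }

    /* The same idiom expressed within the deterministic subset. */
    long sum2 = 0;
    #pragma omp parallel for reduction(+:sum2)
    for (int i = 0; i < n; i++)
        sum2 += data[i];

    printf("%ld %ld\n", sum1, sum2);   /* both print 31 */
    return 0;
}
```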