Granular Time Warp Objects
Related papers
Efficient execution of Time Warp programs on heterogeneous, NOW platforms
IEEE Transactions on Parallel and Distributed Systems, 2000
Abstract: Time Warp is an optimistic protocol for synchronizing parallel discrete event simulations. To achieve performance in a multiuser network of workstations (NOW) environment, Time Warp must continue to operate efficiently in the presence of external workloads caused by other users, processor heterogeneity, and irregular internal workloads caused by the simulation model. Left unaddressed, these performance problems can cause a Time Warp program to become grossly unbalanced, resulting in slower execution. The key observation asserted in this article is that each of these performance problems, while different in source, has a similar manifestation. For a Time Warp program to be balanced, the amount of wall clock time necessary to advance an LP one unit of simulation time should be about the same for all LPs. Using this observation, we devise a single algorithm that mitigates these performance problems and enables the "background" execution of Time Warp programs on heterogeneous distributed computing platforms in the presence of external as well as irregular internal workloads.
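The balance criterion stated in this abstract lends itself to a simple illustration: measure, for each LP, the wall-clock time spent per unit of simulation time advanced, then migrate LPs until those costs are spread roughly evenly across processors. The Python sketch below is only a rough rendering of that idea; the greedy migration heuristic, function names, and measurement format are assumptions made for illustration, not the paper's actual algorithm.

```python
from collections import defaultdict

def advance_rate(wall_clock_used, sim_time_advanced):
    # Wall-clock seconds consumed per unit of simulation time advanced.
    return wall_clock_used / max(sim_time_advanced, 1e-9)

def processor_load(assignment, rates):
    # Total advance cost currently mapped onto each processor.
    load = defaultdict(float)
    for lp, proc in assignment.items():
        load[proc] += rates[lp]
    return load

def rebalance(assignment, rates, processors):
    # Greedy pass: move LPs off the most loaded processor onto the least
    # loaded one whenever that strictly lowers the heaviest load.
    assignment = dict(assignment)
    while True:
        load = processor_load(assignment, rates)
        heavy = max(processors, key=lambda p: load[p])
        light = min(processors, key=lambda p: load[p])
        candidates = sorted((lp for lp, p in assignment.items() if p == heavy),
                            key=lambda lp: rates[lp], reverse=True)
        for lp in candidates:
            if max(load[heavy] - rates[lp], load[light] + rates[lp]) < load[heavy]:
                assignment[lp] = light        # migration narrows the imbalance
                break
        else:
            return assignment                 # no migration helps any further

# Hypothetical measurements: (wall-clock seconds, sim time advanced) per LP.
measurements = {"lp0": (2.0, 10.0), "lp1": (6.0, 10.0), "lp2": (1.0, 10.0)}
rates = {lp: advance_rate(*m) for lp, m in measurements.items()}
print(rebalance({"lp0": "A", "lp1": "A", "lp2": "B"}, rates, ["A", "B"]))
# -> {'lp0': 'A', 'lp1': 'B', 'lp2': 'A'}
```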
Distributed Simulation and the Time Warp Operating System
This paper describes the Time Warp Operating System, under development for three years at the Jet Propulsion Laboratory for the Caltech Mark III Hypercube multiprocessor. Its primary goal is concurrent execution of large, irregular discrete event simulations at maximum speed. It also supports any other distributed applications that are synchronized by virtual time.
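For readers unfamiliar with the underlying mechanism, the following minimal Python sketch shows the optimistic discipline that virtual-time synchronization relies on: events are executed speculatively in timestamp order, state is checkpointed after each event, and a straggler forces a rollback and re-execution. It illustrates the general Time Warp idea only and is not code from the Time Warp Operating System; anti-message cancellation and GVT-based fossil collection are omitted.

```python
import bisect

class LogicalProcess:
    # Minimal optimistic LP: speculative execution, checkpointing, rollback.
    def __init__(self, handler, initial_state):
        self.handler = handler                     # (state, event) -> new state
        self.state = initial_state
        self.lvt = 0.0                             # local virtual time
        self.processed = []                        # (timestamp, event) pairs
        self.checkpoints = [(0.0, initial_state)]  # state saved after each event

    def receive(self, timestamp, event):
        redo = []
        if timestamp < self.lvt:                   # straggler: undo optimistic work
            redo = self._rollback(timestamp)
        self._execute(timestamp, event)
        for t, e in redo:                          # re-execute rolled-back events
            self._execute(t, e)

    def _execute(self, timestamp, event):
        self.state = self.handler(self.state, event)
        self.lvt = timestamp
        self.processed.append((timestamp, event))
        self.checkpoints.append((timestamp, self.state))

    def _rollback(self, timestamp):
        # Restore the latest checkpoint taken strictly before the straggler.
        times = [t for t, _ in self.checkpoints]
        restore_at = max(bisect.bisect_left(times, timestamp) - 1, 0)
        self.lvt, self.state = self.checkpoints[restore_at]
        self.checkpoints = self.checkpoints[:restore_at + 1]
        redo = sorted((te for te in self.processed if te[0] >= timestamp),
                      key=lambda te: te[0])
        self.processed = [te for te in self.processed if te[0] < timestamp]
        return redo

# States are kept immutable (plain integers) so checkpoints need no copying.
lp = LogicalProcess(lambda s, e: s + e, initial_state=0)
lp.receive(1.0, 5)
lp.receive(3.0, 2)        # state 7 at virtual time 3.0
lp.receive(2.0, 10)       # straggler: roll back to t=1.0, redo the t=3.0 event
print(lp.lvt, lp.state)   # 3.0 17
```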
Exploiting intra-object dependencies in parallel simulation
Information Processing Letters, 1999
This paper introduces the notion of weak causality, which models intra-object parallelism in parallel discrete event simulation. In this setting, a run where events are executed at each object according to their timestamps is a correct run. The weak causality relation allows us to define the largest subset of all runs of a simulation that are equivalent to the timestamp-based run. Finally, we describe an application of weak causality to optimistic synchronization (Time Warp) by introducing a synchronization protocol that reduces both the number of rollbacks and their extent.
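A rough way to picture the protocol's effect is that a straggler event need not undo anything if it is independent of every event already executed with a larger timestamp. The Python sketch below illustrates that intuition with a user-supplied `independent` predicate standing in for the weak causality relation; the class, the predicate, and the use of full re-execution as a stand-in for rollback are all assumptions made for illustration, not the paper's formal protocol.

```python
class WeakCausalityObject:
    # A commuting straggler is applied in place; only a real conflict forces
    # the run back to timestamp order (re-execution stands in for rollback).
    def __init__(self, handler, initial_state, independent):
        self.handler = handler             # (state, event) -> new state
        self.initial_state = initial_state
        self.state = initial_state
        self.processed = []                # (timestamp, event) in execution order
        self.independent = independent     # stand-in for the weak causality relation
        self.rollbacks = 0

    def receive(self, timestamp, event):
        later = [e for t, e in self.processed if t > timestamp]
        self.processed.append((timestamp, event))
        if later and not all(self.independent(event, e) for e in later):
            # Genuine conflict: fall back to the timestamp-ordered run.
            self.rollbacks += 1
            self.processed.sort(key=lambda te: te[0])
            self.state = self.initial_state
            for _, e in self.processed:
                self.state = self.handler(self.state, e)
        else:
            # Out-of-order execution is equivalent to the timestamp-based run.
            self.state = self.handler(self.state, event)

# Increments to different counters commute, so the straggler causes no rollback.
independent = lambda a, b: a[0] != b[0]              # events touch different keys
apply = lambda state, ev: {**state, ev[0]: state.get(ev[0], 0) + ev[1]}

obj = WeakCausalityObject(apply, {}, independent)
obj.receive(1.0, ("x", 1))
obj.receive(3.0, ("y", 2))
obj.receive(2.0, ("x", 5))                           # straggler, but independent
print(obj.state, obj.rollbacks)                      # {'x': 6, 'y': 2} 0
```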
JiST: Embedding Simulation Time into a Virtual Machine
Since progress in many avenues of science depends heavily on simulated results, discrete event simulators have been the subject of much research into their efficient design and execution. This paper introduces JiST, a Java-based simulation framework that executes discrete event simulations both efficiently and transparently. Our system differs from existing work in that it embeds simulation time semantics into the Java execution model, but does so without inventing a new language, without requiring a specialized compiler, and without utilizing a custom runtime. The result is a flexible simulation environment that allows sequential simulation execution and also transparently supports both parallel and optimistic execution with automatic checkpointing and rollback. The JiST approach uses a convenient single system image abstraction across a cluster of nodes that allows for dynamic network and computational load-balancing and fine-grained migration of simulation state.
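JiST achieves this by rewriting Java bytecode so that method invocations on simulation entities become timestamped events. The Python sketch below only mimics that programming model with an explicit proxy object and a tiny event kernel; the class and method names are invented for illustration and are not the JiST API.

```python
import heapq

class Kernel:
    # Tiny event kernel: delivers queued method invocations in time order.
    def __init__(self):
        self.now, self.queue, self.seq = 0, [], 0

    def schedule(self, delay, target, method, args):
        heapq.heappush(self.queue, (self.now + delay, self.seq, target, method, args))
        self.seq += 1

    def run(self, until):
        while self.queue and self.queue[0][0] <= until:
            self.now, _, target, method, args = heapq.heappop(self.queue)
            getattr(target, method)(*args)

class Proxy:
    # Calling proxy.some_method(args) enqueues an event instead of running it now.
    def __init__(self, kernel, target, delay=0):
        self._kernel, self._target, self._delay = kernel, target, delay

    def __getattr__(self, name):
        return lambda *args: self._kernel.schedule(self._delay, self._target, name, args)

class Hello:
    def __init__(self, kernel):
        self.kernel = kernel

    def my_event(self):
        print("hello world, t =", self.kernel.now)
        # "Sleep" one time unit by re-invoking ourselves through a delayed proxy.
        Proxy(self.kernel, self, delay=1).my_event()

kernel = Kernel()
Proxy(kernel, Hello(kernel)).my_event()   # schedule the first event at t = 0
kernel.run(until=3)                       # prints at t = 0, 1, 2, 3
```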
Transparent and Efficient Shared-State Management for Optimistic Simulations on Multi-core Machines
Traditionally, the Logical Processes (LPs) forming a simulation model store their execution information in disjoint simulation states, forcing them to exchange events in order to communicate data with each other. In this work we propose the design and implementation of an extension to the traditional Time Warp (optimistic) synchronization protocol for parallel/distributed simulation, targeted at shared-memory/multicore machines, that allows LPs to share parts of their simulation states by using global variables. In order to preserve optimism's intrinsic properties, global variables are transparently mapped to multi-version variables, so as to avoid any form of safety-predicate verification upon updates. Execution consistency is ensured by introducing a new rollback scheme, triggered upon detection of an incorrect read of a global variable. At the same time, execution efficiency is guaranteed by exploiting non-blocking algorithms to manage the multi-version variable lists. Furthermore, our proposal is integrated with the simulation model's code through software instrumentation, so that the application-level programmer does not need to use any specific API to mark global variables or to inform the simulation kernel of updates to them. We thus support full transparency. An assessment of our proposal, comparing it with a traditional message-passing implementation of multi-version variables, is provided as well.
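The core data structure can be pictured as a timestamped version list per global variable: a read returns the latest version not newer than the reader's virtual time, and a write arriving in the logical past reveals which earlier reads are now invalid. The single-threaded Python sketch below illustrates just that bookkeeping; the proposal's non-blocking version lists and transparent instrumentation are omitted, and all names are illustrative rather than the paper's API.

```python
import bisect

class MultiVersionVar:
    def __init__(self, initial):
        self.versions = [(float("-inf"), initial)]   # sorted by write timestamp
        self.read_log = []                           # (read_time, reader, version_time)

    def read(self, time, reader):
        # Return the latest version not newer than the reader's virtual time.
        times = [t for t, _ in self.versions]
        idx = bisect.bisect_right(times, time) - 1
        version_time, value = self.versions[idx]
        self.read_log.append((time, reader, version_time))
        return value

    def write(self, time, value):
        # Install the version and report readers whose earlier reads are stale.
        bisect.insort(self.versions, (time, value))
        stale = [reader for (rt, reader, vt) in self.read_log
                 if rt >= time and vt < time]
        return stale          # these readers must be rolled back past `time`

x = MultiVersionVar(0)
x.read(5.0, reader="LP1")          # LP1 reads the initial value at t = 5
print(x.write(3.0, 42))            # late write at t = 3 -> ['LP1'] must roll back
print(x.read(5.0, reader="LP2"))   # after the write, a read at t = 5 sees 42
```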
An adaptive memory management protocol for Time Warp parallel simulation
Performance evaluation review, 1994
It is widely believed that Time Warp is prone to two potential problems: an excessive amount of wasted, rolled-back computation resulting from "rollback thrashing" behaviors, and inefficient use of memory, leading to poor performance of virtual memory and/or multiprocessor cache systems. An adaptive mechanism is proposed, based on the Cancelback memory management protocol, that dynamically controls the amount of memory used in the simulation in order to maximize performance. The proposed mechanism is adaptive in the sense that it monitors the execution of the Time Warp program and automatically adjusts the amount of memory used so that Time Warp overheads (fossil collection, Cancelback, the amount of rolled-back computation, etc.) are reduced to a manageable level. The mechanism is based on a model that characterizes the behavior of Time Warp programs in terms of the flow of memory buffers among different buffer pools. We demonstrate that an implementation of the adaptive mechanism on a Kendall Square Research KSR-1 multiprocessor is effective in automatically maximizing performance while minimizing memory utilization of Time Warp programs, even for dynamically changing simulation models.
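One way to picture such a mechanism is as a feedback controller: shrink the buffer limit when rolled-back computation dominates (to throttle over-optimistic progress), and grow it when Cancelback and fossil-collection activity dominate. The Python sketch below is a deliberately simplified threshold controller in that spirit; the thresholds, step size, and metric names are assumptions, not the paper's analytic model of buffer flow among pools.

```python
def adjust_buffer_limit(limit, stats, min_buffers=64, step=0.1,
                        rollback_hi=0.30, cancelback_hi=0.10):
    # stats: fractions of wall-clock time spent on rolled-back (wasted)
    # computation and on Cancelback/fossil-collection activity.
    if stats["rolled_back_fraction"] > rollback_hi:
        # Too much wasted optimistic work: shrink memory to throttle optimism.
        limit = max(min_buffers, int(limit * (1 - step)))
    elif stats["cancelback_fraction"] > cancelback_hi:
        # Memory pressure itself is the bottleneck: allow more buffers.
        limit = int(limit * (1 + step))
    return limit

limit = 1024
for interval_stats in [{"rolled_back_fraction": 0.45, "cancelback_fraction": 0.02},
                       {"rolled_back_fraction": 0.10, "cancelback_fraction": 0.20},
                       {"rolled_back_fraction": 0.05, "cancelback_fraction": 0.03}]:
    limit = adjust_buffer_limit(limit, interval_stats)
    print(limit)    # 921, then 1013, then unchanged 1013
```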
New approach to object-oriented simulation of concurrent systems
SPIE Proceedings, 1997
High-speed simulation of concurrent systems requires distributed processing if meaningful results are to be obtained for large systems in a reasonable timeframe. One of the most common methods used for such simulation is Parallel Discrete Event Simulation (PDES). A range of PDES simulation kernels have been developed, and much research has been devoted to optimistic execution strategies such as Time Warp. Unfortunately, in all this effort some fundamental aspects of object-oriented modelling for simulation have received scant attention, in particular the ability of simulation kernels to act on truly generic simulation objects. In this context we define a truly generic object to be one which totally defines its responses to external stimuli, but which has no concept of its place in the interconnected web of objects that comprise the total simulation environment. To address this problem, we propose a new modelling approach based on interacting objects, and an associated simulation kernel architecture. This paper describes the architecture and features of our simulation kernel in detail and demonstrates, using a small example, the benefits of adopting such a modelling approach. The major specific benefits include true object genericity, enhanced scope for object re-use, and enhanced opportunities to use polymorphism.
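The modelling approach can be pictured as a kernel that owns both the event queue and the interconnection web, while objects merely map stimuli on input ports to delayed responses on output ports. The Python sketch below is an invented miniature of that arrangement; the port naming, response signature, and wiring table are assumptions for illustration, not the kernel described in the paper.

```python
import heapq

class GenericObject:
    # A generic object only maps stimuli on input ports to delayed responses;
    # it knows nothing about what it is connected to.
    def react(self, port, payload):
        return []                            # list of (output_port, delay, payload)

class Repeater(GenericObject):
    def react(self, port, payload):
        return [("out", 1.0, payload)]       # echo any stimulus after one time unit

class Sink(GenericObject):
    def react(self, port, payload):
        print("sink got", payload)
        return []

class SimulationKernel:
    # The kernel owns the interconnection web and the event queue.
    def __init__(self):
        self.queue, self.seq, self.wiring = [], 0, {}

    def connect(self, src, out_port, dst, in_port):
        self.wiring[(id(src), out_port)] = (dst, in_port)

    def inject(self, time, obj, port, payload):
        heapq.heappush(self.queue, (time, self.seq, obj, port, payload))
        self.seq += 1

    def run(self):
        while self.queue:
            time, _, obj, port, payload = heapq.heappop(self.queue)
            for out_port, delay, out_payload in obj.react(port, payload):
                dst, in_port = self.wiring[(id(obj), out_port)]
                self.inject(time + delay, dst, in_port, out_payload)

kernel, rep, sink = SimulationKernel(), Repeater(), Sink()
kernel.connect(rep, "out", sink, "in")       # the wiring lives only in the kernel
kernel.inject(0.0, rep, "in", "ping")
kernel.run()                                 # prints: sink got ping   (at t = 1.0)
```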