Communication relations: a paradigm for parallel program design

Categorical semantics of parallel program design

Science of Computer Programming, 1997

Abstract: We formalise, using Category Theory, modularisation techniques for parallel and distributed systems based on the notion of superposition, showing that parallel program design obeys the "universal laws" formulated by J. Goguen for General Systems Theory, as well as other algebraic properties of modularity formulated for Specification Theory. The resulting categorical formalisation unifies the different notions of superposition that have been proposed in the literature and clarifies their algebraic properties with respect to modularisation. It also suggests ways of extending or revising existing languages in order to provide higher levels of reusability, modularity and incrementality in system design. (*) This work was partially supported by the Esprit BRA 8319 (MODELAGE), the HCM Scientific Network CHRX-CT92-0054 (MEDICIS), and the PRAXIS XXI contract 2/2.1/MAT/46/94 (ESCOLA).

Communicating parallel processes

Software: Practice and Experience, 1986

Tony Hoare's 1978 paper introducing the programming language Communicating Sequential Processes is now a classic. CSP treated input and output as fundamental programming primitives, and included a simple form of parallel composition based on synchronized communication. This paper provides an excellent example of Tony's clarity of vision and intuition. The notion of processes is easy to grasp intuitively, provides a natural abstraction of the way many parallel systems behave, and has an abundance of applications. Ideas from CSP have influenced the design of more recent programming languages such as occam and Ada. Investigations of the semantic foundations of CSP and its successors and derivatives have flourished, bringing forth a variety of mathematical models, each tailored to focus on a particular kind of program behavior. In this paper we re-examine the rationale for some of the original language design decisions, equipped with the benefit of hindsight and the accumulation of two decades of research into programming language semantics. We propose an "idealized" version of CSP whose syntax generalizes the original language along familiar lines but whose semantics is based on asynchronous communication and fair parallel execution, in direct contrast with the original language and its main successors. This language permits nested and recursive uses of parallelism, so it is more appropriate for us to refer to Communicating Parallel Processes. We outline a simple semantics for this language and compare its structure with the most prominent models of the synchronous language. Our semantic framework is equally well suited to modelling asynchronous processes, shared-variable programs, and Kahn-style dataflow networks, so that we achieve a unification of paradigms.
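The asynchronous, channel-based communication the abstract contrasts with original CSP can be sketched in Go, whose channels descend directly from CSP. This is an illustrative sketch, not code from the paper; the names `producer` and `sum` are my own. A buffered channel models the asynchronous sends the authors propose, while an unbuffered channel would recover the synchronized handshake of Hoare's original language.

```go
package main

import "fmt"

// producer sends the integers 0..n-1 on out, then closes it.
// Because out is buffered, sends need not rendezvous with the
// receiver: communication is asynchronous.
func producer(n int, out chan<- int) {
	for i := 0; i < n; i++ {
		out <- i
	}
	close(out)
}

// sum drains in and reports the total on done.
func sum(in <-chan int, done chan<- int) {
	total := 0
	for v := range in {
		total += v
	}
	done <- total
}

func main() {
	ch := make(chan int, 8) // buffer => asynchronous communication
	done := make(chan int)
	go producer(5, ch)
	go sum(ch, done)
	fmt.Println(<-done) // prints 10 (0+1+2+3+4)
}
```

Nesting is free here: either goroutine could itself spawn further goroutines, matching the paper's point that nested and recursive parallelism falls out naturally once communication is decoupled from synchronization.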

Describing the Semantics of Parallel Programming Languages using Shared Data Abstractions

Programming language semantics can be defined in a variety of ways, one of which is to use an information structure model based on abstract data types. We have previously used this technique to provide definitions of semantic aspects of sequential languages and have demonstrated how it lends itself to the automatic generation of prototype language implementations from the formal model. However, difficulties arise when any attempt is made to describe a parallel programming language using this technique, because of the need to regulate access to abstract data types in a parallel environment. This motivates the consideration of alternative techniques for the definition of the information structures used in the underlying model. The notion of shared data abstractions provides such an alternative and this paper explores some of the issues in using shared data abstractions in the definition of the semantics of parallel programming languages.

Abstractions for Parallelism: Patterns, Performance and Correctness

cs.manchester.ac.uk

Despite rapid advances in parallel hardware performance, the full potential of processing power is not being exploited in the software community for one clear reason: the difficulty in designing efficient and effective parallel applications. Identifying sub-tasks within the application, designing parallel algorithms, and balancing load among the processing units have been daunting tasks for novice programmers, and even experienced programmers are often trapped by design decisions that fall short of potential peak performance. Design patterns have been used as a notation to capture how experts in a given domain think about and approach their work. Over the last decade there have been several approaches to identifying common patterns that are repeatedly used in the parallel software design process. Documentation of these design patterns helps programmers by providing definitions, solutions and guidelines for common parallelization problems. A convenient way to further raise the level of abstraction and make it easier for programmers to write legible code is the philosophy of 'Separation of Concerns'. This separation is achieved by the Aspect-Oriented Programming (AOP) paradigm, which allows programmers to specify concerns independently and lets the compiler 'weave' (AOP terminology for the unification of modules) them together at compile time. However, abstraction by its very nature often produces unoptimized code, as it frames the solution of a problem without much thought to the underlying machine architecture. Indeed, in the current phase of the multicore era, where chip manufacturers are continuously experimenting with processor architectures, an optimization on one architecture might not yield any benefit on another from a different chip manufacturer. Using an auto-tuner, one can automatically explore the optimization space for a particular computational kernel on a given processor architecture.
The last relevant aspect of concern in this project is the formal specification and verification of properties of parallel programs. It is a well-known fact that parallel programs are particularly prone to insidious defects such as deadlocks and race conditions arising from shared variables and locking. Using tools from formal verification, it is however possible to guarantee certain safety properties (such as deadlock and data race avoidance) while refining successive abstractions down to code level. The interplay of abstractions, auto-tuning and correctness in the context of parallel software development will be considered in this project report.

How to Write Parallel Programs: A Guide to the Perplexed

ACM Computing Surveys (CSUR), 1989

We present a framework for parallel programming, based on three conceptual classes for understanding parallelism and three programming paradigms for implementing parallel programs. The conceptual classes are result parallelism, which centers on parallel computation of all elements in a ...

Object-oriented model of parallel programs

Proceedings of 4th Euromicro Workshop on Parallel and Distributed Processing, 1996

In this paper the object-oriented model of control flow in parallel systems is presented. It builds upon the strong intuitive understanding of traditional control flow graphs of sequential programs. The model is object-oriented, which makes it suitable for a variety of applications including re-engineering of existing code, simulation and modelling, parallel program design, testing and quality assessment. It also enables the integration of several methods and techniques from different areas of computer science, with special emphasis on the quality of the final software product. In its current form, the model has been adopted for the testing tool being developed for the Copernicus Software Engineering for Parallel Processing (SEPP) European programme.

Design and implementation of communication patterns using parallel objects

International Journal of Simulation and Process Modelling, 2017

Within an environment of parallel objects, an approach to structured parallel programming based on the object-orientation paradigm is presented here. The proposal includes a programming method based on high-level parallel compositions or HLPCs (CPANs in Spanish). C++ classes and CPANs are syntactically alike and differ in their concurrency mechanisms. Different parallel programming patterns, synchronisation operations and new constructs such as futures are discussed throughout the paper. To achieve software reusability, a series of predefined patterns that use object-oriented programming concepts is presented. Concurrency-related constraints on process synchronisation are set by resorting only to the maxpar, mutex and sync primitives in the application code. By means of the method's application, the implementation of commonly used parallel communication patterns is explained, leading finally to a library of classes for C++ applications that use POSIX threads.
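The "future" construct the abstract mentions can be sketched in a few lines of Go. This is an illustrative analogue, not the paper's C++/POSIX-threads implementation, and the function name `future` is my own; the CPAN-specific primitives maxpar, mutex and sync are part of that library and are not reproduced here.

```go
package main

import "fmt"

// future launches f in its own goroutine and returns a handle
// whose receive blocks until the result is ready. The caller can
// keep working and claim the value later: the essence of the
// "future" synchronisation construct.
func future(f func() int) <-chan int {
	ch := make(chan int, 1) // buffered so the worker never blocks
	go func() { ch <- f() }()
	return ch
}

func main() {
	fut := future(func() int { return 6 * 7 })
	// ... other work proceeds here in parallel with f ...
	fmt.Println(<-fut) // prints 42
}
```

The buffered channel is a deliberate choice: the producing goroutine deposits its result and exits immediately, even if the consumer never collects it.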

Software Engineering Considerations in the Construction of Parallel Programs

Advances in Parallel Computing, 1995

In many papers describing parallel programming tools, the authors illustrate the strengths of their approach by presenting some impressive speedup results. However, is this the only metric by which we should judge the quality of their tool? Many of these tools offer significant software engineering advantages that reduce program development time and increase code reliability. This paper uses the Enterprise programming environment for coarse-grained parallel applications to illustrate the advantages of these tools. For most users, high performance is not an important evaluation criterion; other criteria, such as tool usability and program development savings, are often far more important. * This research has been funded in part by NSERC grants OGP-8173 and OGP-8191, a grant from IBM Canada Limited and the Netherlands Organization for Scientific Research (NWO). † A modified definition from a personal communication with Greg Wilson.

Communication Port: A Language Concept for Concurrent Programming

IEEE Transactions on Software Engineering, 1980

A new language concept, the communication port (CP), is introduced for programming on distributed processor networks. Such a network can contain an arbitrary number of processors, each with its own private storage but with no memory sharing. The processors must communicate via explicit message passing. The communication port is an encapsulation of two language properties: "communication nondeterminism" and "communication disconnect time." It provides a tool for programmers to write well-structured, modular, and efficient concurrent programs. A number of examples are given in the paper to demonstrate the power of the new concepts.
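"Communication nondeterminism" as encapsulated by a port can be illustrated with Go's `select` statement: a receiver that does not fix in advance which sender it will serve next. The sketch below is my own analogue, not the CP language of the paper, and the name `merge` is illustrative.

```go
package main

import "fmt"

// merge forwards whichever of a or b has a message ready, in
// whatever order messages arrive -- the nondeterministic choice
// a communication port encapsulates. A closed channel is set to
// nil so select stops considering it.
func merge(a, b <-chan string, out chan<- string) {
	for a != nil || b != nil {
		select {
		case v, ok := <-a:
			if !ok {
				a = nil
				continue
			}
			out <- v
		case v, ok := <-b:
			if !ok {
				b = nil
				continue
			}
			out <- v
		}
	}
	close(out)
}

func main() {
	a, b, out := make(chan string), make(chan string), make(chan string)
	go func() { a <- "from-a"; close(a) }()
	go func() { b <- "from-b"; close(b) }()
	go merge(a, b, out)
	for v := range out {
		fmt.Println(v) // order is nondeterministic
	}
}
```

Both messages always arrive, but their order is deliberately unspecified, which is precisely the behaviour a deterministic receive-from-one-named-sender primitive cannot express.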

Formal techniques for parallel object-oriented languages

Lecture Notes in Computer Science, 1991

This paper is intended to give an overview of the formal techniques that have been developed to deal with the parallel object-oriented language POOL and several related languages. We sketch a number of semantic descriptions, using several formalisms: operational semantics, denotational semantics, and a new approach to semantics, which we call layered semantics. Then we summarize the progress that has been made in formal proof systems to verify the correctness of parallel object-oriented programs. Finally we survey the techniques that we are currently developing to describe the behaviour of objects independently of their implementation, leading to linguistic support for behavioural subtyping.