HAMP – A Highly Abstracted and Modular Programming Paradigm for Expressing Parallel Programs on Heterogeneous Platforms
Related papers
Abstractions for portable, scalable parallel programming
IEEE Transactions on Parallel and Distributed Systems, 1998
In parallel programming, the need to manage communication costs, load imbalance, and irregularities in the computation puts substantial demands on the programmer. Key properties of the architecture, such as the number of processors and the costs of communication, must be exploited to achieve good performance. Coding these properties directly into a program compromises the portability and flexibility of the code because significant changes are usually needed to port or enhance the program. We describe a parallel programming model that supports the concise, independent description of key aspects of a parallel program, such as data distribution, communication, and boundary conditions, without reference to machine idiosyncrasies. The independence of such components improves portability by allowing the components of a program to be tuned independently, and encourages reuse by supporting the composition of existing components. The architecture-sensitive aspects of a computation are isolated from the rest of the program, reducing the need to make extensive changes to port a program. This model is effective in exploiting both data parallelism and functional parallelism. This paper provides programming examples, compares this work to related languages, and presents performance results.
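As an illustration of the kind of separation the abstract describes, the sketch below keeps the data distribution and boundary-condition policies behind small interfaces so the numerical kernel never mentions the machine. All class, interface, and method names here are invented for illustration and do not come from the paper.

```java
// Hypothetical sketch: data distribution and boundary handling are described
// independently of the computation, so each can be tuned or swapped without
// touching the kernel. None of these names are taken from the paper.
interface Distribution { int ownerOf(int globalIndex); }
interface BoundaryCondition { double valueOutside(double[] local, int ghostIndex); }

final class BlockDistribution implements Distribution {
    private final int blockSize;
    BlockDistribution(int totalSize, int numProcs) {
        this.blockSize = (totalSize + numProcs - 1) / numProcs; // ceiling division
    }
    public int ownerOf(int globalIndex) { return globalIndex / blockSize; }
}

final class Stencil {
    private final BoundaryCondition bc;
    Stencil(BoundaryCondition bc) { this.bc = bc; }

    // The kernel sees only local data plus the boundary abstraction;
    // nothing here depends on processor count or communication layout.
    double[] relax(double[] local) {
        double[] next = new double[local.length];
        for (int i = 0; i < local.length; i++) {
            double left  = (i == 0) ? bc.valueOutside(local, -1) : local[i - 1];
            double right = (i == local.length - 1) ? bc.valueOutside(local, local.length) : local[i + 1];
            next[i] = 0.5 * (left + right);
        }
        return next;
    }
}
```

Because the distribution and boundary policies are separate objects, porting or tuning the program would amount to substituting one implementation for another rather than rewriting the kernel.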
Pal: towards a new approach to high level parallel programming
2006
We present a new programming model based on user annotations that can be used to transform plain Java programs into suitable parallel code that can be run on workstation clusters, networks and grids. The user's only responsibility is to decorate the methods that will eventually be executed in parallel with standard Java 1.5 annotations. These annotations are then automatically processed and parallel byte code is derived. When the annotated program is started, it automatically retrieves the information about the executing platform and evaluates the information specified inside the annotations to transform the byte code into a semantically equivalent multithreaded/multitask version. The results returned by the annotated methods, when invoked, are futures with wait-by-necessity semantics. A PAL prototype has been implemented in Java, using JJPF as the parallel framework. The experiments made with the prototype are encouraging: the design of parallel applications has been greatly simplified, and the performance obtained matches that of an application written directly in JJPF.
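To make the annotation-plus-futures idea concrete, here is a minimal sketch using only standard Java: a hypothetical @Parallel annotation marks the method, an ExecutorService stands in for PAL's bytecode rewriting, and the Future is forced only when its value is actually needed, approximating wait-by-necessity. The annotation name and its parameter are invented and are not PAL's actual API.

```java
import java.lang.annotation.*;
import java.util.concurrent.*;

// Hypothetical annotation standing in for PAL's real Java 1.5 annotations,
// which this sketch does not reproduce.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Parallel { int workers() default 4; }

public class PalSketch {
    private static final ExecutorService pool = Executors.newFixedThreadPool(4);

    // The method a programmer would decorate; PAL rewrites the bytecode,
    // whereas here we simply submit the call to a thread pool to mimic the effect.
    @Parallel(workers = 4)
    static long countPrimes(int lo, int hi) {
        long n = 0;
        for (int i = lo; i < hi; i++) {
            boolean prime = i > 1;
            for (int d = 2; d * d <= i; d++) if (i % d == 0) { prime = false; break; }
            if (prime) n++;
        }
        return n;
    }

    public static void main(String[] args) throws Exception {
        // Wait-by-necessity in spirit: the Future is only forced when the
        // result is needed by the caller.
        Future<Long> result = pool.submit(() -> countPrimes(2, 200_000));
        System.out.println("primes: " + result.get());
        pool.shutdown();
    }
}
```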
Booster: a high-level language for portable parallel algorithms
Applied Numerical Mathematics, 1991
The development of programming languages suitable for expressing parallel algorithms is crucial to the pace of acceptance of parallel processors for production applications. As in sequential programming, portability of parallel software is a strongly desirable feature. Portability in this respect means that, given an algorithm description in a parallel programming language, it must be possible, with relatively little effort, to generate efficient code for several classes of (parallel) architectures.
"Software Engineering for High Performance Computing System (HPCS) Applications" W3S Workshop - 26th International Conference on Software Engineering, 2004
From ongoing practices of those who develop high performance software, there is an evident, pressing need for a highly focused effort to "reinvent" the explicit methodologies that support parallel programming. To enable a new methodology, orthogonalization of features and separation of concerns within the feature sets of existing parallel models (e.g., MPI-1, MPI-2, DRI, MPI/RT, BSP) are required from the viewpoint of creating notational instantiations simple enough to meet the requirements of actual parallel programs. Additionally, the new notational representations for expressing parallel programs must not be burdensome in terms of the accidental complexities of expressing an implementation (either in middleware or in compiler-assisted syntactical form). In support of this vision, this paper identifies specifically the need to apply modern software engineering and design concepts to a refactoring of the Message Passing Interface's set of capabilities. Concepts and methodologies from modern software engineering, namely Model Driven Architecture and Aspect-Oriented Software Design, inform this effort.
P-RIO: An Environment for Modular Parallel Programming
Citeseer
This paper presents the P-RIO environment, which offers high-level but straightforward concepts for parallel and distributed programming. A simple software construction methodology makes most of the useful properties of object-oriented programming technology available, facilitating modularity and code reuse. This methodology promotes a clear separation of the individual sequential computation components from the interconnection structure used for the interaction between these components. The mapping of the concepts associated with the software construction methodology to graphical representations is immediate. P-RIO includes a graphical programming tool, has a modular construction, is highly portable, and provides runtime support mechanisms for parallel programs on architectures composed of heterogeneous computing nodes.
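The separation P-RIO promotes, sequential modules on one side and an interconnection structure on the other, can be sketched in plain Java as below. The Port, Producer, Consumer, and Configuration names are invented for this illustration and are not part of P-RIO's actual API.

```java
import java.util.concurrent.*;

// Hypothetical sketch: sequential modules communicate only through ports,
// and a separate configuration step wires the ports together.
final class Port<T> {
    private final BlockingQueue<T> queue = new LinkedBlockingQueue<>();
    void send(T msg) throws InterruptedException { queue.put(msg); }
    T receive() throws InterruptedException { return queue.take(); }
}

final class Producer implements Runnable {
    final Port<Integer> out;
    Producer(Port<Integer> out) { this.out = out; }
    public void run() {
        try { for (int i = 0; i < 5; i++) out.send(i); } catch (InterruptedException ignored) {}
    }
}

final class Consumer implements Runnable {
    final Port<Integer> in;
    Consumer(Port<Integer> in) { this.in = in; }
    public void run() {
        try { for (int i = 0; i < 5; i++) System.out.println("got " + in.receive()); }
        catch (InterruptedException ignored) {}
    }
}

public class Configuration {
    public static void main(String[] args) {
        // The interconnection structure lives here, apart from the modules' code,
        // which is the separation of concerns the abstract describes.
        Port<Integer> channel = new Port<>();
        new Thread(new Producer(channel)).start();
        new Thread(new Consumer(channel)).start();
    }
}
```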
Guest Editorial: High-Level Parallel Programming and Applications
International Journal of Parallel Programming, 2016
The arrival of multi-/many-core systems has produced a game-changing event for the computing industry, which today, much more than a few years ago, is relying on parallel processing as a means of improving application performance. Although a wide gap still exists between the maturity of parallel architectures and that of parallel programming, the only way forward to keep increasing performance and reducing power consumption is through parallelism. Any program must become a parallel program in order to exploit the capabilities of modern computers at any scale. In industrial practice, parallel programming is still dominated by low-level machine-centric unstructured approaches based on tools and specialized libraries that originate from high performance computing. Parallel programming at this level of abstraction is difficult, error-prone, time-consuming and, hence, economically infeasible in most application domains. Now, more than ever, it is crucial that the research community makes significant progress toward making the development of parallel code accessible to all programmers, rather than allowing parallel programming to remain the domain of specialized expert programmers. Achieving a proper trade-off among performance, programmability and portability is becoming a must. Parallel and distributed programming methodologies are currently dominated by low-level techniques such as send/receive message passing or data sharing coordinated by locks. These abstractions are not a good fit for reasoning about parallelism. In this evolution/revolution phase, a fundamental role is played by high-level and portable programming tools as well as application development frameworks. They may offer
2000
Table of contents: Java: High Performance Numerical Computing in Java: Language and Compiler Issues; Instruction Scheduling in the Presence of Java's Runtime Exceptions; Dependence Analysis for Java. Low-Level Transformations A: Comprehensive Redundant Load Elimination for the IA-64 Architecture; Minimum Register Instruction Scheduling: A New Approach for Dynamic Instruction Issue Processors; Unroll-Based Copy Elimination for Enhanced Pipeline Scheduling. Data Distribution: A Linear Algebra Formulation for Optimising Replication in Data Parallel Programs; Accurate Data and Context Management in Message-Passing Programs; An Automatic Iteration/Data Distribution Method Based on Access Descriptors for DSMM. High-Level Transformations: Inter-array Data Regrouping; Iteration Space Slicing for Locality; A Compiler Framework for Tiling Imperfectly-Nested Loops. Models: Parallel Programming with Interacting Processes; Application of the Polytope Model to Functional Programs; Multilingua...
A Component Model for High Level and Efficient Parallel Programming on Distributed Architectures
iadis.net
The computer science community has called for parallel languages and models with a higher level of abstraction and modularity, without performance penalties, that can be used in conjunction with advanced software engineering techniques and that are suitable for large-scale programs. This paper presents general aspects of the #1 parallel programming model and its associated programming environment, designed to address these issues.