A Parallel Functional Language Compiler for Message-Passing Multicomputers

A Simple Parallelising Compiler

Writing parallel programs is neither as simple nor as common as traditional sequential programming. A programmer must have in-depth knowledge of the program at hand, as well as of the resources available when the program is executed. A compiler that can automatically parallelise sequentially written programs is therefore of great benefit. Current efforts concentrate mainly on fine-grained parallelism. With the move towards clusters of workstations as platforms for parallel processing, however, coarse-grained parallelism is becoming an important research issue. This paper reports on the development of a simple parallelising compiler that exploits coarse-grained parallelism in sequential programs. In particular, it presents three structured algorithms used to divide the parallelisation process into manageable phases.
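To make the notion of coarse-grained parallelism concrete, the following sketch (in Python, purely for illustration; the paper itself targets a functional language on message-passing multicomputers) shows the kind of transformation such a compiler performs: a sequential loop over large, independent units of work is mapped onto worker processes. The function names and worker count are hypothetical, not taken from the paper.

```python
from multiprocessing import Pool

def expensive_task(n):
    # Stand-in for a coarse-grained unit of work: each call does
    # enough computation to outweigh the cost of dispatching it
    # to a separate worker process.
    return sum(i * i for i in range(n))

def run_parallel(workloads, workers=4):
    # A parallelising compiler exploiting coarse-grained parallelism
    # would, in effect, rewrite the sequential loop
    #     [expensive_task(n) for n in workloads]
    # into this process-level mapping.
    with Pool(workers) as pool:
        return pool.map(expensive_task, workloads)

if __name__ == "__main__":
    workloads = [10_000, 20_000, 30_000]
    assert run_parallel(workloads) == [expensive_task(n) for n in workloads]
```

The key point is granularity: on a cluster of workstations the dispatch (here, inter-process) cost is high, so only units of work this large are worth distributing.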

A parallelizing compiler for multicore systems

Proceedings of the 17th International Workshop on Software and Compilers for Embedded Systems - SCOPES '14, 2014

This manuscript summarizes the main ideas introduced in [1]. We propose a compiler that automatically transforms a sequential application into a parallel counterpart for multicore processors. It is based on an intermediate representation, named KIR, which exposes multiple levels of parallelism and hides the complexity of implementation details thanks to domain-independent kernels (e.g., assignment, reduction). The effectiveness and performance of our approach, built on top of GCC, have been tested with a large variety of codes.
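The kernel-based view can be illustrated with a small sketch (in Python, purely for illustration; KIR itself is built inside GCC): a loop that accumulates into a single variable matches a reduction kernel, and because the combining operator is associative, the compiler may legally emit a parallel form with per-worker partial results. The chunking scheme and names below are hypothetical.

```python
from multiprocessing import Pool

def partial_sum(chunk):
    # Each worker reduces its own chunk; the partial results are
    # combined afterwards, which is valid because + is associative.
    s = 0.0
    for x in chunk:
        s += x
    return s

def parallel_reduction(data, workers=4):
    # A compiler that recognises the sequential loop
    #     for x in data: s += x
    # as a reduction kernel can replace it with this parallel form.
    chunks = [data[i::workers] for i in range(workers)]
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, chunks))
```

Recognising the *kernel* (reduction) rather than the surface syntax is what makes the representation domain-independent: the same rewrite applies whether the loop sums floats, counts matches, or computes a maximum.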

Languages and compilers for parallel computing : 12th International Workshop, LCPC'99, La Jolla, CA, USA, August 4-6, 1999 : proceedings

2000

Java: High Performance Numerical Computing in Java: Language and Compiler Issues; Instruction Scheduling in the Presence of Java's Runtime Exceptions; Dependence Analysis for Java.
Low-Level Transformations A: Comprehensive Redundant Load Elimination for the IA-64 Architecture; Minimum Register Instruction Scheduling: A New Approach for Dynamic Instruction Issue Processors; Unroll-Based Copy Elimination for Enhanced Pipeline Scheduling.
Data Distribution: A Linear Algebra Formulation for Optimising Replication in Data Parallel Programs; Accurate Data and Context Management in Message-Passing Programs; An Automatic Iteration/Data Distribution Method Based on Access Descriptors for DSMM.
High-Level Transformations: Inter-array Data Regrouping; Iteration Space Slicing for Locality; A Compiler Framework for Tiling Imperfectly-Nested Loops.
Models: Parallel Programming with Interacting Processes; Application of the Polytope Model to Functional Programs; Multilingua...

Development of large scale high performance applications with a parallelizing compiler

Abstract: High-level environments such as High Performance Fortran (HPF), which support the development of parallel applications and the porting of legacy codes to parallel architectures, have not yet gained broad acceptance and diffusion. Common objections cite the difficulty of performance tuning, the limitation of HPF's applicability to regular, data-parallel computations, and the lack of robustness of parallelizing HPF compilers in handling large codes.

Automatic Parallelizing Compiler for Distributed Memory Parallel Computers: New Algorithms to Improve the Performance of the Inspector/Executor

1995

The SPMD (Single-Program Multiple-Data Stream) model has been widely adopted as the base of parallelizing compilers and parallel programming languages for scientific programs [1]. This model will work well not only for shared memory machines but also for distributed memory multicomputers, provided that:
■ data are allocated appropriately by the programmer and/or the compiler itself;
■ the compiler distributes parallel computations to processors so that interprocessor communication costs are minimized; and
■ codes for communication are inserted, only when necessary, at the points adequate for minimizing communication latency.
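The three conditions above can be sketched as follows (in Python, simulating message passing between processes; a real compiler for distributed memory multicomputers would emit native message-passing code, and all names here are hypothetical). Every process runs the same program, each computes only on the data block it owns, and communication is inserted only at the single point where partial results must be combined.

```python
from multiprocessing import Process, Queue

def spmd_node(rank, nprocs, data, result_q):
    # Owner-computes rule: each process works only on the block of
    # the (conceptually distributed) array that it owns.
    block = data[rank::nprocs]
    local = sum(x * x for x in block)
    # Communication is inserted only where necessary: one message
    # carrying the local result back for the final combination.
    result_q.put(local)

def spmd_run(data, nprocs=4):
    # Every process executes the same program (spmd_node); only the
    # rank differs, which determines the data each process touches.
    q = Queue()
    procs = [Process(target=spmd_node, args=(r, nprocs, data, q))
             for r in range(nprocs)]
    for p in procs:
        p.start()
    total = sum(q.get() for _ in procs)
    for p in procs:
        p.join()
    return total
```

The inspector/executor techniques discussed in the paper address the harder case where the access pattern, and hence the required communication, is not known until run time.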

Languages and Compilers for Parallel Computing

Lecture Notes in Computer Science, 2000

The topics covered include languages and language extensions for parallel computing - a status report on CONSUL, a future-based parallel language for a general-purpose high-parallel computer; COOL, blackboard programming in shared Prolog, refined C, the XYZ ...

SUIF: An Infrastructure for Research on Parallelizing and Optimizing Compilers

ACM Sigplan …, 1994

Compiler infrastructures that support experimental research are crucial to the advancement of high-performance computing. New compiler technology must be implemented and evaluated in the context of a complete compiler, but developing such an infrastructure requires a huge investment in time and resources. We have spent a number of years building the SUIF compiler into a powerful, flexible system, and we would now like to share the results of our efforts.

Automatic parallelization in the paralax compiler

2011

The efficient development of multi-threaded software has, for many years, been an unsolved problem in computer science. Finding a solution to this problem has become urgent with the advent of multi-core processors. Furthermore, the problem has become more complicated because multi-cores are everywhere (desktops, laptops, embedded systems). As such, they execute generic programs, which exhibit very different characteristics than the scientific applications that have been the focus of parallel computing in the past.