MPI-SH: A distributed shared memory implementation of MPI
Related papers
MPI-2: Extending the message-passing interface
Euro-Par '96, Lyon (France), 26-29 August 1996
This paper describes current activities of the MPI-2 Forum. The MPI-2 Forum is a group of parallel computer vendors, library writers, and application specialists working together to define a set of extensions to MPI (Message Passing Interface). MPI was defined by the same process and now has many implementations, both vendor-proprietary and publicly available, for a wide variety of parallel computing environments.
MPC-MPI: An MPI Implementation Reducing the Overall Memory Consumption
Lecture Notes in Computer Science, 2009
The Message-Passing Interface (MPI) has become a standard for parallel applications in high-performance computing. Within a shared address space, MPI implementations benefit from the global memory to speed up intra-node communications, while the underlying network protocol is used to communicate between nodes. However, this requires the allocation of additional buffers, leading to a memory-consumption overhead. This may become an issue on future clusters with a reduced amount of memory per core. In this article, we propose an MPI implementation built on the MPC framework, called MPC-MPI, that reduces the overall memory footprint. We obtained memory gains of up to 47% on benchmarks and a real-world application.
A High-Performance, Portable Implementation of the MPI Message Passing Interface Standard
Parallel Computing, 1996
MPI (Message Passing Interface) is a specification for a standard library for message passing that was defined by the MPI Forum, a broadly based group of parallel computer vendors, library writers, and applications specialists. Multiple implementations of MPI have been developed. In this paper, we describe MPICH, unique among existing implementations in its design goal of combining portability with high performance. We document its portability and performance and describe the architecture by which these features are simultaneously achieved. We also discuss the set of tools that accompany the free distribution of MPICH, which constitute the beginnings of a portable parallel programming environment. A project of this scope inevitably imparts lessons about parallel computing, the specification being followed, the current hardware and software environment for parallel computing, and project management; we describe those we have learned. Finally, we discuss future developments for MPICH, including those necessary to accommodate extensions to the MPI Standard now being contemplated by the MPI Forum.
Leveraging MPI’s One-Sided Communication Interface for Shared-Memory Programming
Lecture Notes in Computer Science, 2012
Hybrid parallel programming with MPI for internode communication in conjunction with a shared-memory programming model to manage intranode parallelism has become a dominant approach to scalable parallel programming. While this model provides a great deal of flexibility and performance potential, it saddles programmers with the complexity of utilizing two parallel programming systems in the same application. We introduce an MPI-integrated shared-memory programming model that is incorporated into MPI through a small extension to the one-sided communication interface. We discuss the integration of this interface with the upcoming MPI 3.0 one-sided semantics and describe solutions for providing portable and efficient data sharing, atomic operations, and memory consistency. We describe an implementation of the new interface in the MPICH2 and Open MPI implementations and demonstrate an average performance improvement of 40% to the communication component of a five-point stencil solver.
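The interface this abstract describes was integrated into the MPI 3.0 one-sided semantics as shared-memory windows. Below is a minimal C sketch, assuming an MPI-3 capable implementation, of how an application might use the standardized calls (MPI_Comm_split_type with MPI_COMM_TYPE_SHARED, MPI_Win_allocate_shared, MPI_Win_shared_query); it illustrates the general mechanism rather than anything specific to the paper's MPICH2 or Open MPI prototypes.

```c
/* Sketch: node-local ranks allocate a shared window and access each
 * other's segments by plain load/store. Assumes an MPI-3 library. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    /* Communicator containing only the ranks that can share memory. */
    MPI_Comm node_comm;
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &node_comm);

    int node_rank, node_size;
    MPI_Comm_rank(node_comm, &node_rank);
    MPI_Comm_size(node_comm, &node_size);

    /* Each rank contributes one double to the shared window. */
    double *my_segment;
    MPI_Win win;
    MPI_Win_allocate_shared(sizeof(double), sizeof(double), MPI_INFO_NULL,
                            node_comm, &my_segment, &win);
    *my_segment = (double)node_rank;

    MPI_Win_fence(0, win);  /* synchronize and make local stores visible */

    /* Rank 0 reads every other rank's segment directly. */
    if (node_rank == 0) {
        for (int r = 1; r < node_size; r++) {
            MPI_Aint seg_size;
            int disp_unit;
            double *remote;
            MPI_Win_shared_query(win, r, &seg_size, &disp_unit, &remote);
            printf("rank %d wrote %.1f\n", r, *remote);
        }
    }

    MPI_Win_free(&win);
    MPI_Comm_free(&node_comm);
    MPI_Finalize();
    return 0;
}
```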
MPI-DDL: A distributed-data library for MPI
Future Generation Computer Systems, 1997
The Message-Passing Interface (MPI) defines a de facto standard for writing message-passing programs. However, MPI operates at a rather low level, in the sense that the programmer regards a message as the programming unit. We present a new point of view: in our approach, the programmer regards distributed data as the programming unit. MPI-DDL is a programming environment consisting of an application-oriented layer on top of MPI that facilitates programming with distributed data, together with a set of tools. We present initial performance comparisons of two matrix algorithms using MPI-DDL, HPF, and direct MPI implementations.
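For contrast with MPI-DDL's data-centric view, the C sketch below shows the message-level style the abstract refers to: the programmer distributes an array and collects a result by issuing explicit communication calls. It is not MPI-DDL code (the library's API is not given in the abstract), only an illustration of the lower-level model it layers over; the array size and contents are arbitrary.

```c
/* Illustrative baseline (not MPI-DDL): explicit messages are the unit
 * the programmer reasons about when distributing data by hand. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int per_rank = 4;          /* elements owned by each rank */
    double *full = NULL;
    if (rank == 0) {                 /* root owns the global array  */
        full = malloc(per_rank * size * sizeof(double));
        for (int i = 0; i < per_rank * size; i++) full[i] = i;
    }

    /* Explicitly scatter the data block by block. */
    double local[4];                 /* matches per_rank */
    MPI_Scatter(full, per_rank, MPI_DOUBLE,
                local, per_rank, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    /* Each rank computes on its piece; the result is reduced by hand. */
    double local_sum = 0.0, global_sum = 0.0;
    for (int i = 0; i < per_rank; i++) local_sum += local[i];
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM,
               0, MPI_COMM_WORLD);

    if (rank == 0) {
        printf("sum = %.1f\n", global_sum);
        free(full);
    }
    MPI_Finalize();
    return 0;
}
```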
mpiJava: A Java interface to MPI
First UK Workshop on Java for High Performance Network …, 1999
The overall aim of this paper is to introduce mpiJava, a Java interface to the widely used Message Passing Interface (MPI). In the first part of the paper we discuss the design of the mpiJava API and issues associated with its development. In the second part of the paper we briefly describe an implementation of mpiJava on NT using the WMPI environment. We then discuss some measurements made of communications performance to compare mpiJava with the C and Fortran bindings of MPI. In the final part of the paper we summarize our findings and briefly mention work we plan to undertake in the near future.
MpiJava: An Object-Oriented Java Interface to MPI
Parallel and Distributed …, 1999
A basic prerequisite for parallel programming is a good communication API. The recent interest in using Java for scientific and engineering applications has led to several international efforts to produce a message passing interface to support parallel computation. In this paper we describe and then discuss the syntax, functionality and performance of one such interface, mpiJava, an object-oriented Java interface to MPI. We first discuss the design of the mpiJava API and the issues associated with its development. We then move on to briefly outline the steps necessary to 'port' mpiJava onto a range of operating systems, including Windows NT, Linux and Solaris. In the second part of the paper we present and then discuss some performance measurements made of communications bandwidth and latency to compare mpiJava on these systems. Finally, we summarise our experiences and then briefly mention work that we plan to undertake, which aims to influence many aspects of Java and so ensure that its future development makes it more appropriate for scientific programmers.
MPI + MPI: a new hybrid approach to parallel programming with MPI plus shared memory
Computing, 2013
Hybrid parallel programming with the message passing interface (MPI) for internode communication in conjunction with a shared-memory programming model to manage intranode parallelism has become a dominant approach to scalable parallel programming. While this model provides a great deal of flexibility and performance potential, it saddles programmers with the complexity of utilizing two parallel programming systems in the same application.
MPI-F: An Efficient Implementation of MPI on IBM-SP1
1994
This article introduces MPI-F, an efficient implementation of MPI on the IBM SP1 distributed-memory cluster. After discussing the novel and key concepts of MPI and how they relate to an implementation, the MPI-F system architecture is described in detail. Although many incorrectly assume that MPI cannot be efficient because of its increased functionality, MPI-F's performance demonstrates efficiency as good as that of the best message passing library currently available on the SP1.
Hybrid MPI: Efficient Message Passing for Multi-core Systems
2014
Multi-core shared memory architectures are ubiquitous in both High-Performance Computing (HPC) and commodity systems because they provide an excellent trade-off between performance and programmability. MPI’s abstraction of explicit communication across distributed memory is very popular for programming scientific applications. Unfortunately, OS-level process separations force MPI to perform unnecessary copying of messages within shared memory nodes. This paper presents a novel approach that transparently shares memory across MPI processes executing on the same node, allowing them to communicate like threaded applications. While prior work explored thread-based MPI libraries, we demonstrate that this approach is impractical and performs poorly in practice. We instead propose a novel process-based approach that enables shared memory communication and integrates with existing MPI libraries and applications without modifications. Our protocols for shared memory message passing exhibit b...