MPI: The Complete Reference
**Marc Snir, Steve Otto, Steven Huss-Lederman, David Walker, and Jack Dongarra**
---
**Scientific and Engineering Computation**
Janusz Kowalik, Editor

*Data-Parallel Programming on MIMD Computers*, by Philip J. Hatcher and Michael J. Quinn, 1991

*Unstructured Scientific Computation on Scalable Multiprocessors*, edited by Piyush Mehrotra, Joel Saltz, and Robert Voigt, 1991

*Parallel Computational Fluid Dynamics: Implementations and Results*, edited by Horst D. Simon, 1992

*Enterprise Integration Modeling: Proceedings of the First International Conference*, edited by Charles J. Petrie, Jr., 1992

*The High Performance Fortran Handbook*, by Charles H. Koelbel, David B. Loveman, Robert S. Schreiber, Guy L. Steele Jr., and Mary E. Zosel, 1993

*Using MPI: Portable Parallel Programming with the Message-Passing Interface*, by William Gropp, Ewing Lusk, and Anthony Skjellum, 1994

*PVM: Parallel Virtual Machine - A User's Guide and Tutorial for Network Parallel Computing*, by Al Geist, Adam Beguelin, Jack Dongarra, Weicheng Jiang, Bob Manchek, and Vaidy Sunderam, 1994

*Enabling Technologies for Petaflops Computing*, by Thomas Sterling, Paul Messina, and Paul H. Smith

*An Introduction to High-Performance Scientific Computing*, by Lloyd D. Fosdick, Elizabeth R. Jessup, Carolyn J. C. Schauble, and Gitta Domik

*Practical Parallel Programming*, by Gregory V. Wilson

*MPI: The Complete Reference*, by Marc Snir, Steve Otto, Steven Huss-Lederman, David Walker, and Jack Dongarra
© 1996 Massachusetts Institute of Technology
All rights reserved. No part of this book may be reproduced in any form by any electronic or mechanical means (including photocopying, recording, or information storage and retrieval) without permission in writing from the publisher.
Parts of this book came from ``MPI: A Message-Passing Interface Standard'' by the Message Passing Interface Forum. That document is copyrighted by the University of Tennessee, and those sections are reproduced here by permission of the University of Tennessee.
This book was set in LaTeX by the authors and was printed and bound in the United States of America.
Library of Congress Cataloging-in-Publication Data
This book is also available in PostScript and HTML form over the Internet.
To retrieve the PostScript file, use one of the following methods:
**Anonymous ftp:**

    ftp ftp.netlib.org
    cd utk/papers/mpi-book
    get mpi-book.ps
    quit

**Remote copy** (from any machine on the Internet):

    rcp anon@anonrcp.netlib.org:utk/papers/mpi-book/mpi-book.ps mpi-book.ps

**Email:** send mail to netlib@netlib.org with a message containing the line:

    send mpi-book.ps from utk/papers/mpi-book
To order from the publisher, send email to mitpress-orders@mit.edu, or telephone 800-356-0343 or 617-625-8569. Send snail mail orders to
The MIT Press
Book Order Department
55 Hayward Street
Cambridge, MA 02142.
8 x 9, ??? pages, $??.??, original in paperback, ISBN 95-80471. For more information, contact Gita Manaktala, manak@mit.edu.
---
**Contents**

- **Introduction**
  - The Goals of MPI
  - Who Should Use This Standard?
  - What Platforms are Targets for Implementation?
  - What is Included in MPI?
  - What is Not Included in MPI?
  - Version of MPI
  - MPI Conventions and Design Choices
  - Document Notation
  - Procedure Specification
  - Semantic Terms
  - Processes
  - Types of MPI Calls
  - Opaque Objects
  - Named Constants
  - Choice Arguments
  - Language Binding
  - Fortran 77 Binding Issues
  - C Binding Issues
- **Point-to-Point Communication**
  - Introduction and Overview
  - Blocking Send and Receive Operations
  - Blocking Send
  - Send Buffer and Message Data
  - Message Envelope
  - Comments on Send
  - Blocking Receive
  - Receive Buffer
  - Message Selection
  - Return Status
  - Comments on Receive
  - Datatype Matching and Data Conversion
  - Type Matching Rules
  - Type MPI_CHARACTER
  - Data Conversion
  - Comments on Data Conversion
  - Semantics of Blocking Point-to-point
  - Buffering and Safety
  - Multithreading
  - Order
  - Progress
  - Fairness
  - Example - Jacobi Iteration
  - Send-Receive
  - Null Processes
  - Nonblocking Communication
  - Request Objects
  - Posting Operations
  - Completion Operations
  - Examples
  - Freeing Requests
  - Semantics of Nonblocking Communications
  - Order
  - Progress
  - Fairness
  - Buffering and Resource Limitations
  - Comments on Semantics of Nonblocking Communications
  - Multiple Completions
  - Probe and Cancel
  - Persistent Communication Requests
  - Communication-Complete Calls with Null Request Handles
  - Communication Modes
  - Blocking Calls
  - Nonblocking Calls
  - Persistent Requests
  - Buffer Allocation and Usage
  - Model Implementation of Buffered Mode
  - Comments on Communication Modes
- **User-Defined Datatypes and Packing**
  - Introduction
  - Introduction to User-Defined Datatypes
  - Datatype Constructors
  - Contiguous
  - Vector
  - Hvector
  - Indexed
  - Hindexed
  - Struct
  - Use of Derived Datatypes
  - Commit
  - Deallocation
  - Relation to count
  - Type Matching
  - Message Length
  - Address Function
  - Lower-bound and Upper-bound Markers
  - Absolute Addresses
  - Pack and Unpack
  - Derived Datatypes vs Pack/Unpack
- **Collective Communications**
  - Introduction and Overview
  - Operational Details
  - Communicator Argument
  - Barrier Synchronization
  - Broadcast
  - Example Using MPI_BCAST
  - Gather
  - Examples Using MPI_GATHER
  - Gather, Vector Variant
  - Examples Using MPI_GATHERV
  - Scatter
  - An Example Using MPI_SCATTER
  - Scatter: Vector Variant
  - Examples Using MPI_SCATTERV
  - Gather to All
  - An Example Using MPI_ALLGATHER
  - Gather to All: Vector Variant
  - All to All Scatter/Gather
  - All to All: Vector Variant
  - Global Reduction Operations
  - Reduce
  - Predefined Reduce Operations
  - MINLOC and MAXLOC
  - All Reduce
  - Reduce-Scatter
  - Scan
  - User-Defined Operations for Reduce and Scan
  - The Semantics of Collective Communications
- **Communicators**
  - Introduction
  - Division of Processes
  - Avoiding Message Conflicts Between Modules
  - Extensibility by Users
  - Safety
  - Overview
  - Groups
  - Communicator
  - Communication Domains
  - Compatibility with Current Practice
  - Group Management
  - Group Accessors
  - Group Constructors
  - Group Destructors
  - Communicator Management
  - Communicator Accessors
  - Communicator Constructors
  - Communicator Destructor
  - Safe Parallel Libraries
  - Caching
  - Introduction
  - Caching Functions
  - Intercommunication
  - Introduction
  - Intercommunicator Accessors
  - Intercommunicator Constructors
- **Process Topologies**
  - Introduction
  - Virtual Topologies
  - Overlapping Topologies
  - Embedding in MPI
  - Cartesian Topology Functions
  - Cartesian Constructor Function
  - Cartesian Convenience Function: MPI_DIMS_CREATE
  - Cartesian Inquiry Functions
  - Cartesian Translator Functions
  - Cartesian Shift Function
  - Cartesian Partition Function
  - Cartesian Low-level Functions
  - Graph Topology Functions
  - Graph Constructor Function
  - Graph Inquiry Functions
  - Graph Information Functions
  - Low-level Graph Functions
  - Topology Inquiry Functions
  - An Application Example
- **Environmental Management**
  - Implementation Information
  - Environmental Inquiries
  - Tag Values
  - Host Rank
  - I/O Rank
  - Clock Synchronization
  - Timers and Synchronization
  - Initialization and Exit
  - Error Handling
  - Error Handlers
  - Error Codes
  - Interaction with Executing Environment
  - Independence of Basic Runtime Routines
  - Interaction with Signals in POSIX
- **The MPI Profiling Interface**
  - Requirements
  - Discussion
  - Logic of the Design
  - Miscellaneous Control of Profiling
  - Examples
  - Profiler Implementation
  - MPI Library Implementation
  - Systems with Weak Symbols
  - Systems without Weak Symbols
  - Complications
  - Multiple Counting
  - Linker Oddities
  - Multiple Levels of Interception
- **Conclusions**
  - Design Issues
  - Why is MPI so big?
  - Should we be concerned about the size of MPI?
  - Why does MPI not guarantee buffering?
  - Portable Programming with MPI
  - Dependency on Buffering
  - Collective Communication and Synchronization
  - Ambiguous Communications and Portability
  - Heterogeneous Computing with MPI
  - MPI Implementations
  - Extensions to MPI
- **References**
- **About this document ...**
---
Jack Dongarra, Fri Sep 1 06:16:55 EDT 1995