Preface

It is now clear that parallel computing is here to stay: our voracious need for ever greater computing power simply cannot be satisfied by conventional, single-processor architectures. More and more companies are investing in architectures with multiple processors, and more and more colleges and universities are including parallel computing in their curricula.

The increase in the use of parallel computing is being accelerated by the development of standards for programming parallel systems. Developers can now write portable parallel programs, and hence expect to obtain a reasonable return on the huge investment required in a large parallel software development project.

About MPI

The Message-Passing Interface, or MPI, is the most widely used of the new standards. It is not a new programming language; rather, it is a library of subprograms that can be called from C and Fortran 77 programs. It was developed by an open, international forum consisting of representatives from industry, academia, and government laboratories. It has rapidly gained widespread acceptance because it has been carefully designed to permit maximum performance on a wide variety of systems and because it is based on message passing, one of the most powerful and widely used paradigms for programming parallel systems. The introduction of MPI makes it possible for developers of parallel software to write libraries of parallel programs that are both portable and efficient. Use of these libraries will hide many of the details of parallel programming and, as a consequence, make parallel computing much more accessible to students and professionals in all branches of science and engineering.
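
To give a flavor of what calling the MPI library from C looks like, here is a minimal sketch -- not one of the book's examples -- in which each process simply reports its rank. It uses only the basic calls MPI_Init, MPI_Comm_rank, MPI_Comm_size, and MPI_Finalize:

    #include <stdio.h>
    #include "mpi.h"

    int main(int argc, char* argv[]) {
        int my_rank;  /* rank of this process       */
        int p;        /* total number of processes  */

        MPI_Init(&argc, &argv);                   /* start up MPI       */
        MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);  /* get my rank        */
        MPI_Comm_size(MPI_COMM_WORLD, &p);        /* get process count  */

        printf("Hello from process %d of %d\n", my_rank, p);

        MPI_Finalize();                           /* shut down MPI      */
        return 0;
    }

Every process runs the same program; the rank returned by MPI_Comm_rank is what allows different processes to do different work.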

About this Book

As parallel computing has moved more into the mainstream, there has been a clear need for an introductory text on parallel programming -- a text that can be used by students and professionals who are not specialists in parallel computing, but who still want to learn enough about parallel programming so that they can exploit the vastly greater computational power provided by parallel systems. Parallel Programming with MPI has been written to fill this need. This text aims to provide students, instructors, and professionals with a tool that can ease their transition into this radically different technology.

Parallel Programming with MPI, or PPMPI, is first and foremost a ``hands-on'' introduction to programming parallel systems. It was written for students and professionals who have no prior experience in programming parallel systems, and it was designed for use both as a self-paced tutorial and as a text in a more conventional classroom/computer laboratory setting. The only prerequisites to reading it are a nodding acquaintance with the first-year college math sequence and a knowledge of a high-level, procedural programming language. PPMPI provides both a complete introduction to MPI and an elementary introduction to parallel programming. It covers all the features of MPI, gives a brief overview of parallel computing, and introduces such topics as parallel debugging, parallel program design and development, and parallel program performance analysis. It also contains an introduction to the use of MPI libraries. In the belief that ``teaching by example'' is the most useful approach, all of the concepts are introduced through fully developed program examples. The source for all of the programs can be downloaded from http://www.cs.usfca.edu/mpi.

Except for the material in Chapters 3 - 8, the chapters are mostly self-contained and can be read in any order. Chapters 3 - 8 form a self-contained tutorial introduction to MPI and, as such, should probably be read in order and before the remaining chapters. There are, however, parts of these chapters that can be omitted with no loss of continuity: the material on gather, scatter, and allgather in Chapter 5 and the material on topologies in Chapter 7 can be safely skipped on a first reading.

Before actually writing a larger parallel program, you will probably want to at least familiarize yourself with some of the problems involved in carrying out I/O on parallel systems. So Chapter 8 should receive at least a cursory initial examination. If you're anxious to make use of non-blocking communications, Chapter 13 can be read at any point after completing Chapter 7. Section 1.5 provides a more detailed overview of the contents of each chapter.

Since space considerations don't permit the presentation of programming examples in both C and Fortran, a choice had to be made between the two. For students, C is the clear choice, since most learn to program in Pascal or C++, both of which are closer to C than to Fortran 77. Fortran, however, would probably be the language of choice for most practicing scientists and engineers. Believing that the professional audience's greater experience will make it easier for them than for students to follow examples in an unfamiliar language, I decided to use C throughout. I have, however, tried to write all of the C source in a style that should be relatively accessible to a Fortran programmer: I have made very limited use of pointers and dynamic memory allocation, and I have tried to avoid C's more obscure constructs. In addition, all of the example programs are available online in Fortran 77, thanks to a former student, Laura Koonce, who did the translations. They can be downloaded from http://www.cs.usfca.edu/mpi.

All of the examples in the text have been written in ANSI C rather than Kernighan and Ritchie C, so if your system provides only a K&R C compiler, you will probably want to obtain a different one. The GNU C compiler, gcc, supports ANSI C; it is freely available from a number of sites on the internet, and it has been ported to virtually all currently available systems.
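
The difference that most often trips up a K&R compiler is the use of ANSI-style function prototypes, in which parameter types appear inside the parentheses. The following sketch, using a made-up function rather than one from the text, shows the style the examples rely on:

    #include <stdio.h>

    /* ANSI C: parameter types appear in both the prototype and the
       definition. (Average is a made-up function, used only to
       illustrate the syntax.)                                        */
    float Average(float x, float y);

    float Average(float x, float y) {
        return (x + y) / 2.0f;
    }

    /* The older K&R style, which the text avoids, would declare the
       same function as

           float Average();
           float Average(x, y)
               float x, y;
           {
               return (x + y) / 2.0;
           }
    */

    int main(void) {
        printf("%f\n", Average(2.0f, 3.0f));
        return 0;
    }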

All of the programs in the text have been tested on a network of Silicon Graphics workstations running the mpich implementation of MPI. They have also been tested on an nCUBE 2 running a slightly optimized version of mpich. Please report any errors that do surface, in either the code or the text, to peter@usfca.edu. I'll pay the usual bounty of $1 to the first person reporting each error. A list of errata will be available at http://www.cs.usfca.edu/mpi/errata.html.

Classroom Use

Parallel programming will soon be a basic part of every computer scientist's education, and PPMPI will be well suited for use in the second or third semester of the basic computing sequence. At this time, however, most colleges and universities introduce parallel programming in upper-division classes on computer architecture or algorithms. For these classes, I've used PPMPI as a supplement to existing, conventional texts: I assign the material in Chapters 3 - 10 as reading at appropriate places in the course and cover the remaining material as needed in the classroom. For example, in USF's parallel algorithms class, I spend a couple of weeks covering the material in Chapters 1 and 2. Parts of Chapters 3 - 10 are assigned as reading at various points during the course -- usually when a programming assignment makes use of the material. The material on performance is covered in detail in class and applied to some basic parallel algorithms (e.g., dot product and matrix-vector multiplication). Fox's algorithm in Chapter 7 and bitonic sort and parallel tree search in Chapter 14 provide our initial examples of ``significant'' parallel algorithms. In the remainder of the course we cover more parallel algorithms from texts such as Kumar et al.'s Introduction to Parallel Computing.

We also teach an upper-division class in parallel programming at USF, in which PPMPI is followed more or less chapter by chapter. The details of what is covered in the classroom are tailored to the level of the students. For well-motivated students, very little time is spent on syntax, and the course turns out to be very close to the parallel algorithms class described above. For less motivated students, I run the class in much the same way as an ``Introduction to Programming'' class: every week or two I introduce a new problem and then spend a week or two discussing the development of a solution. This typically involves fairly extensive discussion of the syntax and semantics of MPI functions.

I also teach a class for seniors and graduate students in which they spend a year working on a parallel programming project sponsored by a company or government agency. In this class, I usually spend a week or two on an overview of parallel computing. The students learn parallel programming by working through PPMPI on their own.

Support Materials

Problems in the text are divided into ``Exercises'' and ``Programming Assignments.'' Most of the exercises involve some programming, but each focuses on the mastery of a single basic concept, and the effort involved in writing a solution to an exercise is minimal compared to that needed to complete a programming assignment. To save instructors the labor-intensive work of designing major programming assignments, a repository of assignments will be available online at http://www.cs.usfca.edu/mpi/progs.html. To make this repository as large as possible, I would like it to be a group effort, and I encourage instructors to send exercises and programming assignments (and, if you have them, solutions) to me at peter@usfca.edu. Morgan Kaufmann will make solutions available to faculty; please contact Morgan Kaufmann at 1-800-745-7323 or orders@mkp.com to obtain your copy.
