p. 158, first paragraph: Tom Cormen notes that on Unix systems it may be possible to improve performance by ensuring that each of the files, file.0, file.1, etc., resides on a disk local to the machine running the corresponding process. For example, suppose process 0 is running on machine A with a local disk mounted on /a, process 1 is running on machine B with a local disk mounted on /b, etc. We may be able to symbolically link files on the local disks to file.0, file.1, . . . :

% ln -s /a/local_file.0 /home/peter/file.0
% ln -s /b/local_file.1 /home/peter/file.1
etc. Now when we run the program, process 0 will write to (or read from) local_file.0, process 1 will write to (or read from) local_file.1, etc. [00/01/30]

pages 284-293, Section 13.2 on hypercube allgather: Tom also observed that if the algorithm first exchanges data between processes whose ranks differ only in the least significant bit, and then exchanges data between processes whose ranks differ only in successively more significant bits, the exchanged data will always be contiguous, and it won't be necessary to use derived types. In addition to simplifying the code, this would probably result in a considerable performance improvement. [00/01/29]
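
The following is a rough sketch, not the book's code, of the low-order-bit-first exchange Tom describes. It assumes the number of processes is a power of 2, that each process contributes blocksize floats, and that recv has room for p*blocksize floats; the name Hypercube_allgather is illustrative. Because the blocks accumulated after each stage always start on a multiple of the current subcube size, every send and receive buffer is contiguous and no derived types are needed:

#include <string.h>
#include "mpi.h"

void Hypercube_allgather(float* my_block, int blocksize, float* recv,
        MPI_Comm comm) {
    int         p, my_rank, partner, bitmask, count;
    int         my_offset, partner_offset;
    MPI_Status  status;

    MPI_Comm_size(comm, &p);
    MPI_Comm_rank(comm, &my_rank);

    /* Put my own block in its slot of the receive buffer */
    memcpy(recv + my_rank*blocksize, my_block, blocksize*sizeof(float));

    count = blocksize;  /* amount of data accumulated so far */
    for (bitmask = 1; bitmask < p; bitmask <<= 1) {
        partner = my_rank ^ bitmask;

        /* Data accumulated so far starts on a multiple of the current
           subcube size, so both buffers are contiguous blocks */
        my_offset      = (my_rank & ~(bitmask - 1))*blocksize;
        partner_offset = (partner & ~(bitmask - 1))*blocksize;

        MPI_Sendrecv(recv + my_offset, count, MPI_FLOAT, partner, 0,
                recv + partner_offset, count, MPI_FLOAT, partner, 0,
                comm, &status);
        count *= 2;
    }
}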

Regarding reading from stdin, Nick Maclaren notes: "the test in 8.2 [p. 154] won't work on all systems. Consider a system where mpiexec just forks to each MPI process - they will all have the same stdin file descriptor, and the first one to do a read will get an (indeterminate) buffer from that. So it may well hang even if you can read from stdin.

"Obviously, the solution is to read in a single process, as you have already done in 8.1.6."
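
A minimal sketch of the single-reader approach, in the spirit of Get_data in 8.1.6: only process 0 touches stdin, and the value is then broadcast to the other processes. The name Read_value and the prompt are illustrative, not taken from the book.

#include <stdio.h>
#include "mpi.h"

void Read_value(int* value, int my_rank, MPI_Comm comm) {
    if (my_rank == 0) {
        printf("Enter an integer\n");
        scanf("%d", value);
    }
    /* Every process, including 0, gets the value from the broadcast */
    MPI_Bcast(value, 1, MPI_INT, 0, comm);
}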

Regarding writing to stdout/stderr [Sections 8.1.7 and 8.1.8], Nick also observes: "This isn't a bug report, but is where I think that you have over-simplified and your procedure is unreliable in practice.

"You probably know that fflush won't work on all systems (buffering in intermediate processes). So your solution is spot-on for stdout, though it is overkill for many (most?) systems. Sad.

"But the problem with using it for stderr is that you want to write diagnostics when there is an error. If, for any reason, the I/O process gets stuck, you will get no diagnostics. You note that this approach works only for SPMD-style programs, but also say that it will work for errors in MPI functions. But what if the error is a missing send/collective and the stuck process is the I/O one?"
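
For reference, here is a rough sketch (not the book's code) of the kind of scheme in 8.1.7/8.1.8 that the comment refers to: each process's output is funneled through process 0, which prints the lines in rank order. MAX_LINE and Print_in_order are illustrative names.

#include <stdio.h>
#include <string.h>
#include "mpi.h"

#define MAX_LINE 256

void Print_in_order(char* line, int my_rank, int p, MPI_Comm comm) {
    char        buffer[MAX_LINE];
    int         q;
    MPI_Status  status;

    if (my_rank == 0) {
        printf("%s", line);   /* process 0's own output */
        for (q = 1; q < p; q++) {
            MPI_Recv(buffer, MAX_LINE, MPI_CHAR, q, 0, comm, &status);
            printf("%s", buffer);
        }
        fflush(stdout);
    } else {
        MPI_Send(line, (int) strlen(line) + 1, MPI_CHAR, 0, 0, comm);
    }
}

Nick's point applies directly to a scheme like this: if the designated I/O process is itself the one that hangs (for example, on a missing send or collective), the MPI_Recv calls never complete and the other processes' diagnostics are never printed.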



Last updated June 1, 2008