
MPI is designed to allow for efficient implementation on all existing distributed memory parallel systems. In some sense, it represents a lowest common denominator, but other protocols have been proposed which are closer to the actual hardware, e.g., Active Messages [5] and Fast Messages [6]. The ambitious goal that the MPI Forum set for itself was to design an API that could be implemented reasonably efficiently everywhere, but that did not preclude any implementation from making use of proprietary hardware features that might offer improved performance. This common interface can be confusing for the programmer. Some implementations do certain operations extremely quickly, e.g., global barriers, while others carry out the same logical operation much more slowly. Writing portable and correct code with MPI is straightforward. Writing portable and correct code that behaves optimally on a variety of platforms is much harder. Nevertheless, the situation is still better than it was before MPI existed.
8.2 MPI Basic Functionality
An MPI program consists of a number of processes running in multiple instruction/multiple data (MIMD) fashion. That is, each of the processes in an MPI program is independent, with a separate and unshared address space. Data is communicated between processes by explicit invocation of MPI procedures at both the sending and receiving end. In MPI-1, processes are neither created nor destroyed, but this limitation is lifted in MPI-2.
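The two-sided nature of this model can be sketched in C: the sender and the receiver must each make a matching MPI call. The payload value and message tag below are purely illustrative.

```c
#include <mpi.h>
#include <stdio.h>

/* Two-sided communication: rank 0 sends an integer, rank 1
 * receives it.  Neither transfer happens unless BOTH ends
 * explicitly invoke the corresponding MPI procedure. */
int main(int argc, char **argv)
{
    int rank, value = 0;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;  /* illustrative payload */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
        printf("process 1 received %d from process 0\n", value);
    }

    MPI_Finalize();
    return 0;
}
```

Note that each process follows a different branch of the same code based on its rank; the data moves only because both the `MPI_Send` and the matching `MPI_Recv` were invoked.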
An MPI process is just a "normal" program written in C or Fortran (C++ and Fortran 90 bindings are specified in MPI-2) and linked with the MPI libraries. No special kernel, operating system, or language support is needed to run MPI programs on a Beowulf.
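A complete MPI process can be as small as the following. The `mpicc` and `mpirun` commands in the comment are the conventional names, though the exact compiler wrapper and launcher vary by MPI implementation.

```c
/* hello.c -- a minimal MPI program: an ordinary C program
 * linked against the MPI library.  Typical build and run
 * (command names vary by implementation):
 *
 *   mpicc -o hello hello.c
 *   mpirun -np 4 ./hello
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);                /* start up MPI        */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total process count */

    printf("hello from process %d of %d\n", rank, size);

    MPI_Finalize();                        /* shut down MPI       */
    return 0;
}
```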
Although not required, it is most common for all the processes in a single MPI program to correspond to the same program text, i.e., to be the same executable file running in different address spaces. Less common, but perfectly legal, is an MPI program in which the individual processes are instances of different executables, presumably compiled from different source files, but still linked with the MPI libraries. Similarly, it is possible to pass different command-line arguments to the different processes, but this too is not common.
If a single MPI program consists of multiple instances of exactly the same executable, with exactly the same command-line arguments, one might wonder how
[5] Active Messages home page: http://now.cs.berkeley.edu/AM/activemessages.html
[6] Fast Messages home page: http://www-csag.cs.uiuc.edu/projects/comm/fm.html

How to Build a Beowulf: A Guide to the Implementation and Application of PC Clusters (Scientific and Engineering Computation)
ISBN: 026269218X
Year: 1999
Pages: 134