A natural question is how one obtains useful parallelism; that is, how does one prevent all these processes from computing exactly the same thing? The answer is for each process to ask the MPI library what rank it has in the MPI_COMM_WORLD communicator. Since each of the P processes has a unique integer rank in the range [0, ..., P-1], the processes can easily choose different parts of the problem to work on.
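As a quick, hypothetical illustration of this idea (using the MPI_Comm_rank and MPI_Comm_size calls introduced below; N and do_work are placeholders, not names from the text), P processes might divide N loop iterations cyclically by rank:

    /* Hypothetical sketch: divide N iterations among P ranks, cyclically. */
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank, 0..P-1 */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes, P */
    for (int i = rank; i < N; i += size)
        do_work(i);                         /* each rank handles a distinct subset */

Because every rank starts at a different offset and strides by the process count, no two processes ever touch the same iteration.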
8.2.1 Example: "Hello World" in MPI
We begin our introduction to MPI by considering a program whose functionality should be familiar to all C programmers. Program 8.1 introduces several new concepts that occur over and over in MPI programs. The reader is encouraged to refer to Program 8.1 for an example of each of the concepts discussed in this section.
Initialization and Finalization Since portability and ease of implementation were critically important criteria in the design of MPI, it was decided that the MPI library would not be allowed to "take over" the special behavior of C's main function or FORTRAN's PROGRAM statement. Instead, in order to use MPI, a program must first call MPI_Init. In C, MPI_Init should be passed the addresses of argc and argv. This mechanism allows for maximum flexibility: in some implementations MPI_Init obtains values passed on the command line, while in others it supplies them. Generally, it is unwise to rely on the values of argc and argv before the call to MPI_Init, because the MPI environment is allowed to insert its own arguments into the list; these are parsed and removed by MPI_Init. Naturally, MPI_Init must be called before any other MPI procedure. Similarly, MPI_Finalize terminates MPI. It is a good idea to call MPI_Finalize explicitly just before exit on normal program termination, so that MPI can gracefully release system resources, close sockets, and so on. It is illegal to invoke any MPI procedure after calling MPI_Finalize.
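The skeleton of every MPI program in C therefore looks roughly as follows (a minimal sketch, not a reproduction of Program 8.1):

    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        MPI_Init(&argc, &argv);   /* first MPI call; may consume MPI's own arguments */

        /* ... all other MPI calls go here ... */

        MPI_Finalize();           /* last MPI call; releases system resources */
        return 0;
    }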
Communicators MPI is primarily a communications library. One of its major advances over earlier systems is the introduction of opaque communicators, which provide a concise and modular way of referring to subsets of processes. Most of the MPI communication procedures require the caller to specify a communicator argument, which defines the context in which the communication takes place. Every process within a communicator has a unique rank in the range [0, ..., size-1]. By inquiring about the number of processes with MPI_Comm_size and about its own rank with MPI_Comm_rank, a program can dynamically adjust at runtime to the number of available processors. Communicators can also be created at runtime to refer to subsets of the processes.
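Putting these pieces together, a "hello world" in the spirit of Program 8.1 might look like the following sketch (the book's exact listing may differ):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, size;

        MPI_Init(&argc, &argv);
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* how many processes are there? */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* which one am I? */
        printf("Hello world from process %d of %d\n", rank, size);
        MPI_Finalize();
        return 0;
    }

Run under an MPI launcher, for example mpirun -np 4 ./hello, each of the four processes prints a line identifying its own rank.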