
$ mpicc -o hello hello.c 
$ mpirun -np 4 hello 
Hello world: rank 0 of 4 running on wealhtheow DEI! 
Hello world: rank 1 of 4 running on wealhtheow DEI! 
Hello world: rank 2 of 4 running on wealhtheow DEI! 
Hello world: rank 3 of 4 running on wealhtheow DEI! 
$ cat machines 
y101
y102
y103
y104
y105
y106
y107
y108
y109
y110
y111
y112
$ mpirun -np 4 -nolocal -machinefile machines hello 
Hello world: rank 0 of 4 running on y101 DEI! 
Hello world: rank 1 of 4 running on y102 DEI! 
Hello world: rank 2 of 4 running on y103 DEI! 
Hello world: rank 3 of 4 running on y104 DEI! 
Program 8.4: Log of a session compiling and executing Program 8.1 with MPICH.
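Program 8.1 itself is not reproduced in this excerpt. The following is a minimal sketch of what it might look like, inferred from the session output above: the message format is taken from the log, and the rest is a standard MPI skeleton.

/* Sketch of an MPI "hello world" consistent with the output in
 * Program 8.4. The "DEI!" suffix is copied from the session log. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size, namelen;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);                  /* start up MPI */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of processes */
    MPI_Get_processor_name(name, &namelen);  /* host this rank runs on */

    printf("Hello world: rank %d of %d running on %s DEI!\n",
           rank, size, name);

    MPI_Finalize();                          /* shut down MPI */
    return 0;
}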
Unlike other parallel programming systems, e.g., OpenMP[7] or Compositional C++,[8] MPI adds no language features at all. MPI programs can be compiled by any C or Fortran compiler, and the semantics of those languages are unchanged. Parallel data structures in MPI are implicit: the same program runs as a set of independent processes, so every symbol in the program is in effect a parallel object with one instance in every process. It is entirely up to the programmer how to treat this object. In some cases it makes sense for every process to see the same value for a symbol; alternatively, the value of a given symbol may differ from process to process.
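For instance, in the sketch below (not from the book; the variable names and the work division are illustrative), nsteps is initialized identically everywhere and so behaves as a replicated value, while first is computed from the rank and so is different in every process.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int nsteps = 100;                     /* same value in every process */
    int first = rank * (nsteps / size);   /* different in every process */

    printf("rank %d: nsteps=%d, my block starts at %d\n",
           rank, nsteps, first);

    MPI_Finalize();
    return 0;
}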
8.3.1 Example: A Parallel Array
A parallel array is so simple that it often escapes notice as a parallel data structure. Program 8.5 shows a structure definition and constructor for a parallel array object.
In an MPI program, each process has a separate address space, so when an object such as the parray structure of Program 8.5 is defined, it automatically exists in every process. To use a parray as a parallel array, one need only adopt conventions about the meaning of the various fields. The choice of variable names suggests that nelem_global contains the total aggregate number of elements, including those on remote processes, while nelem_local contains the number in local memory. While this convention is reasonable, nothing in MPI enforces it; it is entirely up to the programmer to maintain.
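Program 8.5 is not reproduced in this excerpt. The sketch below shows what such a structure and constructor might look like: the field names nelem_global and nelem_local come from the discussion above, while the data pointer, the block distribution, and the constructor's name parray_new are assumptions made for illustration.

#include <stdlib.h>
#include <mpi.h>

struct parray {
    int     nelem_global;  /* total elements across all processes */
    int     nelem_local;   /* elements stored in this process's memory */
    double *data;          /* this process's local block of the array */
};

/* Hypothetical constructor: block-distribute nelem_global elements,
 * giving one extra element to each of the low-numbered ranks when the
 * count does not divide evenly. */
struct parray *parray_new(int nelem_global, MPI_Comm comm)
{
    int rank, size;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);

    struct parray *a = malloc(sizeof *a);
    a->nelem_global = nelem_global;
    a->nelem_local  = nelem_global / size
                      + (rank < nelem_global % size ? 1 : 0);
    a->data = malloc(a->nelem_local * sizeof *a->data);
    return a;
}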
[7] OpenMP home page: http://www.openmp.org
[8] CC++ home page: http://globus.isi.edu/ccpp/