
do little more than pass their arguments through to cc or f77 with the appropriate MPI-specific -I, -L, and library arguments added. Nevertheless, it is a good idea to use them unless some specialized behavior is required. For those who prefer to call the C or Fortran compiler directly, header files are in <MPI_ROOT>/include, and on Linux Beowulf systems, libraries are in <MPI_ROOT>/lib/LINUX/ch_p4/. Both of the following commands should successfully compile a file called hello.c into an executable called hello.
> # the value of MPIROOT is site-specific 
> MPIROOT=/usr/local/mpi 
> mpicc -o hello hello.c 
> cc -o hello -I$MPIROOT/include hello.c -L$MPIROOT/lib/LINUX/ch_p4 -lmpich 
One complication that can arise in Beowulf systems is the availability of several different compilers. Fortran programmers are particularly afflicted by the "blessing" of being able to choose from several compilers, both free and commercial. These compilers can and do have different conventions for capitalization, trailing underscores, etc., and mixing compilers can lead to obscure failures at link time. A simple, but not particularly elegant, solution is to compile separate copies of MPICH for use with each compiler, and maintain them in different <MPI_ROOT> locations.
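One way to organize such per-compiler installations is to give each MPICH build its own tree and select among them by resetting MPIROOT. The directory names below are hypothetical; actual layouts are site-specific:

> # hypothetical per-compiler MPICH trees; names and paths vary by site 
> #   /usr/local/mpich-g77     built with g77 
> #   /usr/local/mpich-pgf77   built with the Portland Group compiler 
> # select one by pointing MPIROOT (and PATH) at the matching tree 
> MPIROOT=/usr/local/mpich-g77 
> PATH=$MPIROOT/bin:$PATH 

Because each tree was built and linked with a single compiler, the name-mangling conventions of headers, libraries, and wrapper scripts stay consistent within it.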
Running an MPI program  An MPI program is started with the mpirun utility. The most commonly used flag is -np <number_of_processes>, which starts the program with the specified number of processes. There is a system-wide default list of processors to use, but it is often necessary to supply a list explicitly. The -machinefile <filename> argument accomplishes this. By default, mpirun starts process 0 on the processor that called mpirun. This is undesirable in Beowulf installations where a worldly node is used for compiling, launching jobs, etc. In this case, the -nolocal flag will force the first process to execute on the first named processor in the -machinefile list. MPICH is extensively documented, and manual pages describing these and other flags and options are included with every installation. Program 8.4 is a log of a short session in which the example, Program 8.1, is compiled and run on four Beowulf processors.
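A typical invocation combining these flags might look as follows. The hostnames are hypothetical; a machine file simply lists one node name per line:

> # hypothetical machine file listing four node hostnames 
> cat machines 
node1 
node2 
node3 
node4 
> # launch four processes on the listed nodes, skipping the worldly 
> # node from which mpirun was invoked 
> mpirun -nolocal -np 4 -machinefile machines ./hello 

With -nolocal, process 0 runs on node1 rather than on the launching node.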
8.3 Parallel Data Structures with MPI
Parallel data structures are crucial to the design of parallel programs. MPI provides a very simple memory model with which to construct parallel data structures.
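As a concrete illustration of this memory model, consider a one-dimensional array of N elements distributed in contiguous blocks, one block per process. This sketch is not one of the book's numbered examples; it assumes size divides N evenly and uses only standard MPI calls (MPI_Comm_rank, MPI_Comm_size, MPI_Reduce):

```c
/* Sketch: a block-distributed array.  Each process allocates and
   initializes only its own block; the global index of local element
   i on process `rank` is rank*block + i. */
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

#define N 1000000

int main(int argc, char *argv[])
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int block = N / size;          /* assumes size divides N evenly */
    double *local = malloc(block * sizeof(double));

    /* fill the local block using global indices */
    for (int i = 0; i < block; i++)
        local[i] = (double)(rank * block + i);

    /* each process sums its own block ... */
    double partial = 0.0, total = 0.0;
    for (int i = 0; i < block; i++)
        partial += local[i];

    /* ... and MPI_Reduce combines the partial sums on process 0 */
    MPI_Reduce(&partial, &total, 1, MPI_DOUBLE, MPI_SUM, 0,
               MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum = %.0f\n", total);

    free(local);
    MPI_Finalize();
    return 0;
}
```

No process ever holds the whole array; global operations are expressed as local computation followed by explicit communication, which is the pattern underlying the data structures developed in this section.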

How to Build a Beowulf: A Guide to the Implementation and Application of PC Clusters (Scientific and Engineering Computation)
ISBN: 026269218X
Year: 1999
Pages: 134
