9.3 Simplifying MPI Communications

In addition to simplifying and shortening the code of the MPI task with polymorphism and templates, we can also simplify the communication between MPI tasks by taking advantage of operator overloading. The MPI_Send() and MPI_Recv() family of functions has the form:

   MPI_Send(Buffer,Count,MPI_LONG,TaskRank,Tag,Comm);
   MPI_Recv(Buffer,Count,MPI_INT,TaskRank,Tag,Comm,&Status);

where the calls require the user to specify the data type involved in the call and a buffer that will hold the data to be sent or received. Specifying the data type for each call to the send and receive routines can be tedious and can introduce subtle errors if the wrong types are passed. Table 9-3 contains short descriptions of each of the MPI send and receive functions and their prototypes.

The goal is to make the data types and buffers as transparent as possible during send and receive operations. We would like to send and receive MPI data using the stream metaphor of the iostream classes. We would like to send data using syntax such as:

   //...
   int X;
   float Y;
   user_defined_type Z;
   cout << X << Y << Z;
   //...

Here, the developer does not have to specify the types when inserting data into cout . The three data types to be displayed each have the operator << defined. These definitions specify how to translate the type during the insertion into the cout stream. Likewise, extraction from the cin stream:

   //...
   int X;
   float Y;
   user_defined_type Z;
   cin >> X >> Y >> Z;
   //...

occurs without specifying the types involved. Operator overloading allows the developer to use this technique for MPI tasks. The cout object is an instance of the ostream class and cin is an instance of the istream class. These classes define the << and >> operators for the built-in C++ data types. For instance, the ostream class contains a number of overloaded operator<< functions:

   //...
   ostream& operator<<(char c);
   ostream& operator<<(unsigned char c);
   ostream& operator<<(signed char c);
   ostream& operator<<(const char *s);
   ostream& operator<<(const unsigned char *s);
   ostream& operator<<(const signed char *s);
   ostream& operator<<(const void *p);
   ostream& operator<<(int n);
   ostream& operator<<(unsigned int n);
   ostream& operator<<(long n);
   ostream& operator<<(unsigned long n);
   //...
Table 9-3. MPI Send and Receive Functions and Their Prototypes

MPI Send and Receive Routines
#include "mpi.h"

Description

   int MPI_Send(void *Buffer, int Count,
                MPI_Datatype Type,
                int Destination,
                int MessageTag,
                MPI_Comm Comm);

Performs a basic send.

   int MPI_Send_init(void *Buffer, int Count,
                     MPI_Datatype Type,
                     int Destination,
                     int MessageTag,
                     MPI_Comm Comm,
                     MPI_Request *Request);

Initializes a handle for a standard send.

   int MPI_Ssend(void *Buffer, int Count,
                 MPI_Datatype Type,
                 int Destination,
                 int MessageTag,
                 MPI_Comm Comm);

Performs a basic synchronous send.

   int MPI_Ssend_init(void *Buffer, int Count,
                      MPI_Datatype Type,
                      int Destination,
                      int MessageTag,
                      MPI_Comm Comm,
                      MPI_Request *Request);

Initializes a handle for a synchronous send.

   int MPI_Rsend(void *Buffer, int Count,
                 MPI_Datatype Type,
                 int Destination,
                 int MessageTag,
                 MPI_Comm Comm);

Performs a basic ready send.

   int MPI_Rsend_init(void *Buffer, int Count,
                      MPI_Datatype Type,
                      int Destination,
                      int MessageTag,
                      MPI_Comm Comm,
                      MPI_Request *Request);

Initializes a handle for a ready send.

   int MPI_Isend(void *Buffer, int Count,
                 MPI_Datatype Type,
                 int Destination,
                 int MessageTag,
                 MPI_Comm Comm,
                 MPI_Request *Request);

Starts a nonblocking send.

   int MPI_Issend(void *Buffer, int Count,
                  MPI_Datatype Type,
                  int Destination,
                  int MessageTag,
                  MPI_Comm Comm,
                  MPI_Request *Request);

Starts a nonblocking synchronous send.

   int MPI_Irsend(void *Buffer, int Count,
                  MPI_Datatype Type,
                  int Destination,
                  int MessageTag,
                  MPI_Comm Comm,
                  MPI_Request *Request);

Starts a nonblocking ready send.

   int MPI_Recv(void *Buffer, int Count,
                MPI_Datatype Type,
                int Source, int MessageTag,
                MPI_Comm Comm,
                MPI_Status *Status);

Performs a basic receive.

   int MPI_Recv_init(void *Buffer, int Count,
                     MPI_Datatype Type,
                     int Source, int MessageTag,
                     MPI_Comm Comm,
                     MPI_Request *Request);

Initializes a handle for a receive.

   int MPI_Irecv(void *Buffer, int Count,
                 MPI_Datatype Type,
                 int Source, int MessageTag,
                 MPI_Comm Comm,
                 MPI_Request *Request);

Begins a nonblocking receive.

   int MPI_Sendrecv(void *SendBuffer,
                    int SendCount,
                    MPI_Datatype SendType,
                    int Destination, int SendTag,
                    void *RecvBuffer,
                    int RecvCount,
                    MPI_Datatype RecvType,
                    int Source, int RecvTag,
                    MPI_Comm Comm,
                    MPI_Status *Status);

Sends and receives a message.

   int MPI_Sendrecv_replace(void *Buffer, int Count,
                            MPI_Datatype Type,
                            int Destination, int SendTag,
                            int Source, int RecvTag,
                            MPI_Comm Comm,
                            MPI_Status *Status);

Sends and receives using a single buffer.

These definitions allow the user of the ostream and istream classes to use the cout and cin objects without having to specify the data types involved in the operations. The same overloading technique can be used to simplify MPI communications. We explored the idea of a PVM stream in Chapter 6; here we employ the same approach to create an MPI stream, using the structure of istream and ostream as a guide for creating an mpi_stream class.

The stream classes consist of a state component, a buffer component, and a translation component. The state component, captured by the ios class, is responsible for encapsulating the state of the stream: the format of the stream, whether the stream is in a good state or a failed state, whether the stream is at eof, and so on. The buffer component, represented by the streambuf, stringbuf, and filebuf classes, holds the data that is being read or written. The translation classes (istream, ostream, istringstream, ostringstream, ifstream, and ofstream) translate types into streams of bytes and streams of bytes back into built-in types. Figure 9-3 shows the UML class diagram for the iostream family of classes.

Figure 9-3. UML class diagram for iostream family classes.


9.3.1 Overloading the << and >> Operators for MPI Communication

The relationships and functionality of the classes in Figure 9-3 are used as a guideline for designing mpi_streams. Although designing MPI stream classes is more work up front than using the MPI_Recv() and MPI_Send() routines directly, it makes MPI development considerably simpler in the long run. Where parallel programs can be made simpler, they should be; reducing the complexity of programs is a worthwhile goal. We present only a skeleton of an mpi_stream class here, enough to demonstrate how the construction of an MPI stream class can be approached. Once an mpi_stream class is designed, it can be reused to simplify communications in almost any MPI program. Example 9.6 contains an excerpt from the declaration of an mpi_stream class.

Example 9.6 An excerpt from the declaration of an mpi_stream class.
 class mpios{
 protected:
    int Rank;
    int Tag;
    MPI_Comm Comm;
    MPI_Status Status;
    int BufferCount;
    //...
 public:
    int tag(void);
    //...
 };

 class mpi_stream : public mpios{
 protected:
    mpi_buffer Buffer;
    //...
 public:
    //...
    mpi_stream(void);
    mpi_stream(int R,int T,MPI_Comm C);
    void rank(int R);
    void tag(int T);
    void comm(MPI_Comm C);
    mpi_stream &operator<<(int X);
    mpi_stream &operator<<(float X);
    mpi_stream &operator<<(string X);
    mpi_stream &operator<<(vector<long> &X);
    mpi_stream &operator<<(vector<int> &X);
    mpi_stream &operator<<(vector<float> &X);
    mpi_stream &operator<<(vector<string> &X);
    mpi_stream &operator>>(int &X);
    mpi_stream &operator>>(float &X);
    mpi_stream &operator>>(string &X);
    mpi_stream &operator>>(vector<long> &X);
    mpi_stream &operator>>(vector<int> &X);
    mpi_stream &operator>>(vector<float> &X);
    mpi_stream &operator>>(vector<string> &X);
    //...
 };

For exposition purposes we have combined the impi_stream and ompi_stream classes into a single mpi_stream class. In the same manner that the istream and ostream classes overload the << and >> operators, we overload those operators as well. Example 9.7 shows how these overloaded operators can be defined:

Example 9.7 Definition of << and >> operators.
 //...
 mpi_stream &mpi_stream::operator<<(string X)
 {
    MPI_Send(const_cast<char*>(X.data()),X.size(),MPI_CHAR,Rank,Tag,Comm);
    return(*this);
 }

 //  Oversimplification of buffer management
 mpi_stream &mpi_stream::operator<<(vector<long> &X)
 {
    long *Buffer;
    Buffer = new long[X.size()];
    copy(X.begin(),X.end(),Buffer);
    MPI_Send(Buffer,X.size(),MPI_LONG,Rank,Tag,Comm);
    delete [] Buffer;
    return(*this);
 }

 //  Oversimplification of buffer management
 mpi_stream &mpi_stream::operator>>(string &X)
 {
    char Buffer[10000];
    MPI_Recv(Buffer,10000,MPI_CHAR,Rank,Tag,Comm,&Status);
    MPI_Get_count(&Status,MPI_CHAR,&BufferCount);
    X.append(Buffer,BufferCount);
    return(*this);
 }

The mpios class in Example 9.6 serves a purpose similar to that of the ios class for the iostreams: it maintains the state of the mpi_stream classes. Each data type that will be used within your MPI applications should have the operators << and >> overloaded for it. Here, we show a few simple overloaded operators. In each case we present an oversimplification of the buffer management; in practice, exception handling and memory allocation issues are handled by template classes and allocator classes. Notice in Example 9.6 that the mpios class holds the communicator, the status of the mpi_stream, the buffer count, and the values for rank and tag. This is only one possible configuration for an mpi_stream class; there are many others. Once an mpi_stream class is defined, it can be reused in any MPI program. Communication between MPI tasks can then be written as:

   //...
   int X;
   float Y;
   vector<float> Z;
   mpi_stream Stream(Rank,Tag,MPI_COMM_WORLD);
   Stream << X << Z;
   Stream << Y;
   //...
   Stream >> Z;

This notation allows the programmer to maintain the stream metaphor and simplifies the MPI code. Of course, appropriate error checking and exception handling must be included within the definitions of the << and >> operators.



Parallel and Distributed Programming Using C++
ISBN: 0131013769
Year: 2002
Pages: 133