9.2 Using Template Functions to Represent MPI Tasks

Function templates allow us to generalize a procedure for any type. Let's look at a multiplication procedure that works for any data type for which multiplication is defined:

 template<class T> T multiplies(T X, T Y)
 {
    return(X * Y);
 }

To use a template function such as this one, we provide the necessary parameter for the type T. T is a stand-in for some data type that will be supplied when the template is instantiated. So we can instantiate multiplies() accordingly:

 //...
 multiplies<double>(3.2,4.5);
 multiplies<int>(7,2);
 multiplies<rational>("7/2","3/4");
 //...

with T instantiated to double, int, and rational, thereby determining the exact implementation of the multiplication operation. Multiplication is defined differently for each data type. So the specification of the data type causes slightly different code to be executed. The template function allows us to write the multiplies() operation once and apply it to many different data types.

9.2.1 Instantiating Templates and SPMD (Datatypes)

Parameterized functions can be used with MPI to handle situations where each process executes the same code but works with a different type of data. Once we have determined the TaskRank of the process, we can differentiate what data, and what type of data, the process should work with. Example 9.2 shows how to instantiate different tasks for different ranks.

Example 9.2 Using template functions to designate what the MPI task will do.
 int main(int argc, char *argv[])
 {
    //...
    int Tag = 2;
    int WorldSize;
    int TaskRank;
    MPI_Status Status;
    MPI_Init(&argc,&argv);
    MPI_Comm_rank(MPI_COMM_WORLD,&TaskRank);
    MPI_Comm_size(MPI_COMM_WORLD,&WorldSize);
    //...
    switch(TaskRank)
    {
       case 1: multiplies<double>(3.2,4.6);
               break;
       case 2: multiplies<complex>(X,Y);
               break;
       //case n:
       //...
    }
 }

Since no two tasks have the same rank, each branch of the switch statement in Example 9.2 will be executed by a different MPI task. You may also extend this type of parameterization to container arguments for template functions. This allows you to pass containers holding different types of objects to the same generic template function. For instance, Example 9.3 contains a generic search() template.

Example 9.3 Using container templates as arguments to template functions.
 template<class T> bool search(T Key, graph<T> Container)
 {
    //...
    locate(Key);
    //...
 }
 //...
 MPI_Comm_rank(MPI_COMM_WORLD,&TaskRank);
 //...
 switch(TaskRank)
 {
    case 1:
          {
             graph<string> bullion;
             search<string>("gold", bullion);
          }
          break;
    case 2:
          {
             graph<complex> Coordinates;
             search<complex>(complex(X,Y), Coordinates);
          }
          break;
 //...

In Example 9.3, the process with TaskRank == 1 searches a graph named bullion that contains string objects, and the process with TaskRank == 2 searches a graph named Coordinates containing complex numbers. We did not have to change the search() routine to accommodate the different data or data types, and the MPI program is made simpler because we can reuse the search() function template to search a graph container holding any type. Using templates simplifies SPMD programming. The more generic we make the MPI task, the more flexible it is. Also, once the template is debugged and tested, the reliability of all of the MPI tasks is increased since they all execute the same code.

9.2.2 Using Polymorphism to Implement MPMD

Polymorphism is a primary characteristic of object-oriented programming. In order for a language to support true object-oriented programming, it must support encapsulation, inheritance, and polymorphism. Polymorphism is the ability of an object to take on many forms. Polymorphism supports the notion of "one interface, multiple implementations." A user invokes one name or interface that is implemented in different ways by different objects. To illustrate the concept of polymorphism, let's look at a vehicle class, its descendants, and a simple function called travel() that uses the vehicle class. Figure 9-2 shows the simple hierarchy for our vehicle class family.

Figure 9-2. The vehicle class family hierarchy.


Airplanes, helicopters, cars, and submarines are all descendants of type vehicle. A vehicle object can start its engine, move forward, turn right, turn left, stop, and so on. Example 9.4 demonstrates how the travel() function uses a vehicle object to make a computerized trip.

Example 9.4 The travel() function using a vehicle object.
 void travel(vehicle *Transport)
 {
    Transport->startEngine();
    Transport->moveForward();
    Transport->turnLeft();
    //...
    Transport->stop();
 }

 int main(int argc, char *argv[])
 {
    //...
    car *Car;
    Car = new car();
    travel(Car);
    //...
 }

The travel() function accepts a pointer to a vehicle object and invokes the appropriate methods of that object. Notice that the main() function in Example 9.4 declares an object of type car and not type vehicle. A car object is passed to the function travel() instead of a vehicle object. This is possible because in C++ a pointer to a class can point to an object of that type or to an object of any type that descends from it. Since car inherits vehicle, a vehicle pointer can point to an object of type car. The function travel() is written without knowledge of what type of vehicle object it will manipulate. The travel() function simply requires that its vehicle objects have the capability of starting an engine, moving forward, turning left and right, and so on. As long as its vehicle object can perform those actions, the travel() function can do its work. Notice in Figure 9-2 that the methods of the vehicle class have been declared as virtual. Declaring the methods as virtual in the base class is necessary for runtime polymorphism to work. The car, helicopter, submarine, and airplane classes will each define:

 startEngine();
 moveForward();
 turnLeft();
 turnRight();
 stop();
 //...

relative to their type of machine. Although each type of vehicle moves forward, the method by which a car moves forward is different from the way a submarine moves forward. The way an airplane turns right is different from the way a car turns right. Therefore, each vehicle type has to implement the necessary operations to complete its class. Since these operations are declared as virtual in the base class, they are candidates for polymorphism. When the travel() function's vehicle pointer actually points to a car object, the startEngine(), moveForward(), and related methods called will be those defined in the car class. If the travel() function's vehicle pointer were assigned a pointer to an airplane object, then the startEngine(), moveForward(), and related methods belonging to the airplane class would be called. This is where the many forms, or single interface with multiple implementations, come in. Although the travel() function only calls a single set of methods, the behavior of those methods can be radically different depending on what type of vehicle has been assigned to the vehicle pointer. In this way travel() is polymorphic because it may do something very different each time it is called. In fact, as long as the travel() function uses a pointer to a vehicle type, it may be used in the future for vehicle types that were unknown or that did not exist at the time the travel() function was designed. As long as the future vehicle classes inherit vehicle and define the necessary methods, they can be manipulated by the travel() function. This type of polymorphism is called runtime polymorphism because the travel() function does not know exactly which startEngine(), moveForward(), or turnLeft() functions it will call until the program is executing.

This type of polymorphism is useful when implementing MPI programs that use an MPMD model. If the work that the MPI tasks perform manipulates pointers to base classes, then polymorphism allows the MPI tasks to also manipulate any derived classes of those base classes. If, instead of a pointer, the travel() function in Example 9.4 had the declaration:

 void travel(vehicle Transport); 

then the startEngine(), moveForward(), and other calls would belong to the vehicle class and there wouldn't be an easy way to manipulate derived classes. The pointer to the vehicle class and the fact that the methods in the vehicle class are declared virtual are what make the polymorphism work. MPI tasks that manipulate pointers to base classes can take advantage of polymorphism in the same way that the travel() function is able to work with any kind of vehicle object, present or future. This technique holds a lot of promise for cluster, SMP, and MPP applications that need to implement MPMD models. To see how MPMD works in an MPI context, let's use our travel() function as an MPI task that is part of a search and rescue simulation. Each MPI task is responsible for performing a search and rescue mission with a different type of vehicle object. Each vehicle will obviously have a different means of mobility. Although the problem to be solved requires that each MPI task perform a search, the code is different because each task uses a different kind of vehicle object that works differently and requires different data. Example 9.5 would be launched in our MPICH environment using:

  $ mpirun -np 16  /tmp/search_n_rescue  
Example 9.5 MPI tasks implementing simple search and rescue simulation.
 template<class T> bool travel(vehicle *Transport, set<T> Location,
                               T Object)
 {
    //...
    Transport->startEngine();
    Transport->moveForward(XDegrees);
    Transport->turnLeft(YDegrees);
    //...
    if(Location.find(Transport->location()) != Location.end()){
       //... rescue()
    }
    //...
 }

 int main(int argc, char *argv[])
 {
    //...
    int Tag = 2;
    int WorldSize;
    int TaskRank;
    MPI_Status Status;
    MPI_Init(&argc,&argv);
    MPI_Comm_rank(MPI_COMM_WORLD,&TaskRank);
    MPI_Comm_size(MPI_COMM_WORLD,&WorldSize);
    //...
    switch(TaskRank)
    {
        case 1:
               {
                  //...
                  car *Car;
                  set<streets> SearchSpace;
                  travel<streets>(Car, SearchSpace, Street);
                  //...
               }
               break;
        case 2:
               {
                  //...
                  helicopter *BlueThunder;
                  set<air_space> NationalAirSpace;
                  travel<air_space>(BlueThunder, NationalAirSpace,
                                    AirSpace);
                  //...
               }
               break;
        //case n:
        //...
    }
 }

This will cause search_n_rescue to be launched as 16 processes, with each process potentially running on a different processor and each processor potentially on a different computer. Although each process is executing the same executable, the work (code) and data that each process works with are radically different. Templates and polymorphism are used to differentiate what each MPI task will do and what data it will use. Notice in Example 9.5 that the MPI process with TaskRank == 1 uses a car object to perform a search and rescue with a container of street objects. The MPI process with TaskRank == 2 performs its simulation using helicopter and air_space objects. Both tasks call the travel() template function. Since the travel() template function manipulates pointers to the vehicle class, it can take advantage of polymorphism and perform its operations with any descendant of type vehicle. This means that although each MPI task is calling the same travel() function, the operation that the travel() function performs will not be the same. Notice there are no case statements or if statements in the travel() function that attempt to identify what type of vehicle it is working with. The particular vehicle object it works with is determined by the type of object its vehicle pointer points to. This MPI application would work with potentially 16 different vehicles, each with its own type of mobility and search space. There are other techniques that can be used to implement MPMD within an MPI environment, but the polymorphic approaches generally require less code.

The two primary types of polymorphism we demonstrate are dynamic binding polymorphism, supported by inheritance and virtual methods, and parametric polymorphism, supported by templates. The travel() function in Example 9.5 uses both types of polymorphism. The inheritance-based polymorphism is demonstrated by the use of the vehicle *Transport parameter. The parametric polymorphism is demonstrated by the use of set<T> and T Object. Parametric polymorphism is the mechanism by which the same code is used on different types passed as parameters. Table 9-2 lists the different types of polymorphism that may be used to simplify MPI tasks and shorten the code required to implement an MPI program.

Table 9-2. Different Types of Polymorphism That May Be Used to Simplify MPI Tasks

Types of Polymorphism   Mechanisms                     Description
Runtime (dynamic)       inheritance, virtual methods   All of the information needed to determine which function is to be executed is not known until runtime.
Parametric              templates                      A mechanism in which the same code is used on different types that are passed as parameters.

9.2.3 Adding MPMD with Function Objects

Function objects are also used by the standard algorithms to implement a kind of horizontal polymorphism. The polymorphism implemented using vehicle *Transport in Example 9.5 is vertical because in order for it to work the classes must all be related through inheritance. When horizontal polymorphism is used, the classes are not related by inheritance but by interface. Function objects each have the operator() defined. Function objects would allow MPI tasks to be designed with the general form:

 // function object
 class some_class {
    //...
 public:
    some_class operator()();
    //...
 };

 template<class T> T mpiTask(T X)
 {
    //...
    T Result;
    Result = X();
    //...
 }

The mpiTask() template function will then work with any type T that has the operator() function appropriately defined:

 //...
 MPI_Init(&argc,&argv);
 MPI_Comm_rank(MPI_COMM_WORLD,&TaskRank);
 MPI_Comm_size(MPI_COMM_WORLD,&WorldSize);
 //...
 if(TaskRank == 0){
    //...
    user_defined_type M;
    mpiTask(M);
    //...
 }
 if(TaskRank == N){
    //...
    some_other_userdefined_type P;
    mpiTask(P);
 }
 //...

This horizontal polymorphism does not rely on inheritance or virtual functions. If our MPI task gets its rank and then declares any type of object that has operator() defined, then when mpiTask() is called its behavior will be dictated by whatever functionality is found in the operator() method. So although each process launched with the mpirun script is identical, the polymorphism of templates and function objects allows each MPI task to perform different work on different data.



Parallel and Distributed Programming Using C++
ISBN: 0131013769
Year: 2002
Pages: 133