Part II: Parallel Programming For Networks Of Computers With MPC And HMPI


In summarizing the challenges associated with parallel programming for NoCs, let us recall the main features of an ideal parallel program running on a NoC.

Such a program distributes computations and communications unevenly across processors and communication links, taking into account their actual performance during the execution of the program. The distribution is not static: it may differ not only between different NoCs but also between different executions of the program on the same NoC, depending on the workload of the network's elements. The program may even find it profitable not to involve all available computers in the computations. This way the program will be efficiently portable.

The program keeps running even if some resources in the executing network fail. In the case of a resource failure, it is able to reconfigure itself and resume computations from some point in the past.

The program takes into account differences in machine arithmetic on different computers and avoids the erroneous behavior that such differences might cause.

This book focuses on the issue of portable efficiency of parallel programs for NoCs. The challenges associated with fault tolerance and machine arithmetic are beyond the scope of our discussion.

HPF provides some basic support for programming heterogeneous algorithms. It allows the programmer to specify an uneven distribution of data across abstract HPF processors. Still, it is the responsibility of the programmer to provide the code that analyses the implemented parallel algorithm and the executing NoC and calculates the best distribution. Another problem is that the programmer cannot influence the mapping of abstract HPF processors to computers of the NoC. HPF provides no language constructs allowing the programmer to control the mapping of heterogeneous algorithms to heterogeneous clusters; the HPF programmer must rely on some default mapping provided by the HPF compiler. This mapping cannot be sensitive to the peculiarities of each individual algorithm simply because the HPF compiler has no information about those peculiarities. Therefore, to control the mapping and account for both the peculiarities of the implemented parallel algorithm and the peculiarities of the executing heterogeneous environment, the HPF programmer must additionally write a substantial amount of complex code. HPF does not address fault tolerance at all.

As a general-purpose message-passing tool at the assembly level of parallel programming, MPI allows the programmer to write efficiently portable programs for NoCs. However, it provides no support to facilitate such programming. It is the responsibility of the programmer to write all the code that makes the application efficiently portable among NoCs. In other words, when programming for NoCs, the programmer must solve the extremely difficult problem of portable efficiency every time from scratch. Standard MPI also does not address the problem of fault tolerance.

In this book we use a high-level language, mpC, as an introduction to parallel programming for NoCs. The language is designed to facilitate parallel programming for common heterogeneous networks of computers, and it addresses all the challenges associated with writing efficiently portable programs for NoCs.

Part II presents the main mpC language constructs and explains how they can be used for programming NoCs. It also introduces basic models underlying the language and the principles of its implementation.

Parallel Computing on Heterogeneous Networks (Wiley Series on Parallel and Distributed Computing)
Year: 2005