1.2 The Benefits of Parallel Programming

Programs that are properly designed to take advantage of parallelism can execute faster than their sequential counterparts, and in the marketplace that speed is a competitive advantage. In other cases, speed can save lives; in those cases faster equates to better.

The solutions to certain problems are represented more naturally as a collection of simultaneously executing tasks. This is especially the case in many areas of scientific, mathematical, and artificial intelligence programming. In these situations, parallel programming techniques can save the software developer work by allowing the developer to directly implement data structures, algorithms, and heuristics developed by researchers.

Specialized hardware can also be exploited. For instance, in high-end multimedia programs the logic can be distributed to specialized processors for increased performance, such as graphics chips, digital sound processors, and specialized math processors. These processors can usually be accessed simultaneously.

Computers with MPP (Massively Parallel Processors) have hundreds, sometimes thousands, of processors and can be used to solve problems that simply cannot realistically be solved using sequential methods. With MPP computers, it is the combination of speed and pure brute force that makes the impossible possible. Into this category fall environmental modeling, space exploration, and several areas of biological research, such as the Human Genome Project.

Further, parallel programming techniques open the door to certain software architectures that are specifically designed for parallel environments. For example, there are certain multiagent and blackboard architectures designed specifically for a parallel processor environment.

1.2.1 The Simplest Parallel Model (PRAM)

The easiest way to approach the basic concepts of parallel programming is through the PRAM (Parallel Random Access Machine). The PRAM is a simplified theoretical model with n processors, labeled P1, P2, P3, ..., Pn, all sharing one global memory. Figure 1-2 shows a simple PRAM.

Figure 1-2. A Simple PRAM.


All the processors have read and write access to a shared global memory, and in the PRAM that access can be simultaneous. The assumption is that each processor can perform various arithmetic and logical operations in parallel. Also, each of the theoretical processors in Figure 1-2 can access the global shared memory in one uninterruptible unit of time. The PRAM model has both concurrent and exclusive read algorithms. Concurrent read algorithms allow multiple processors to read the same piece of memory simultaneously with no data corruption. Exclusive read algorithms ensure that no two processors ever read the same memory location at the same time. The PRAM model also has both concurrent and exclusive write algorithms. Concurrent write algorithms allow multiple processors to write to the same memory location simultaneously, while exclusive write algorithms ensure that no two processors write to the same memory location at the same time. Table 1-1 shows the four basic types of algorithms that can be derived from the read and write possibilities.
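As a rough illustration of the model itself, here is a minimal C++ sketch in which a handful of threads stand in for the processors P1, ..., Pn and a vector stands in for the global memory. The processor count, the data values, and the doubling step are assumptions made purely for illustration; the sketch does not capture the PRAM's idealized unit-time, lock-step memory access.

// A minimal sketch of the PRAM idea using C++ threads: n "processors"
// share one global memory (a vector), and each performs its arithmetic
// step in parallel.
#include <iostream>
#include <thread>
#include <vector>

int main()
{
    const int n = 4;                        // n processors P1..Pn (assumed)
    std::vector<int> global_memory = {10, 20, 30, 40};

    std::vector<std::thread> processors;
    for (int i = 0; i < n; ++i) {
        // Each processor reads and writes only its own cell, so no two
        // processors ever touch the same location (an exclusive-access
        // pattern in the sense of Table 1-1).
        processors.emplace_back([&global_memory, i]() {
            global_memory[i] = global_memory[i] * 2;
        });
    }
    for (auto &p : processors) {
        p.join();
    }

    for (int value : global_memory) {
        std::cout << value << ' ';          // prints: 20 40 60 80
    }
    std::cout << '\n';
}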

Table 1-1. Four Basic Read-Write Algorithms

Read-Write Algorithm Type    Meaning
-------------------------    --------------------------------
EREW                         Exclusive read, exclusive write
CREW                         Concurrent read, exclusive write
ERCW                         Exclusive read, concurrent write
CRCW                         Concurrent read, concurrent write

We will refer to these algorithm types often in this book as we discuss methods for implementing concurrent architectures. The blackboard architecture, discussed in Chapter 13, is one of the important architectures that we implement using the PRAM model. It is important to note that although the PRAM is a simplified theoretical model, it is used to develop practical programs, and these programs can compete on performance with programs developed using more sophisticated models of parallelism.
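To make one of these disciplines concrete, here is a minimal C++ sketch of CREW access using std::shared_mutex from C++17: any number of readers may hold the lock concurrently, while a writer holds it exclusively. The shared counter and the thread counts are illustrative assumptions, not part of the PRAM model.

// A minimal sketch of a CREW discipline: concurrent reads via shared_lock,
// exclusive writes via unique_lock.
#include <iostream>
#include <mutex>
#include <shared_mutex>
#include <thread>
#include <vector>

std::shared_mutex rw_lock;   // guards the shared memory location below
int shared_value = 0;

// Concurrent read: any number of readers may hold a shared_lock at once.
void reader(int id)
{
    std::shared_lock<std::shared_mutex> lock(rw_lock);
    // Note: output from concurrent readers may interleave.
    std::cout << "reader " << id << " sees " << shared_value << '\n';
}

// Exclusive write: a unique_lock excludes all readers and other writers.
void writer()
{
    std::unique_lock<std::shared_mutex> lock(rw_lock);
    ++shared_value;
}

int main()
{
    std::vector<std::thread> tasks;
    tasks.emplace_back(writer);
    for (int i = 0; i < 3; ++i) {
        tasks.emplace_back(reader, i);
    }
    for (auto &t : tasks) {
        t.join();
    }
}

An EREW discipline would serialize the readers as well, for example with a plain std::mutex. CRCW, in contrast, permits simultaneous writes to the same location, which in C++ is well-defined only for atomic types such as std::atomic<int>.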

1.2.2 The Simplest Parallel Classification

The PRAM gives us a simple model for thinking about how a computer can be divided into processors and memory, and it gives us some ideas about how those processors may access memory. A simplified scheme for classifying parallel computers was introduced by M. J. Flynn.[1] These schemes were SIMD (Single Instruction Multiple Data) and MIMD (Multiple Instruction Multiple Data). They were later extended to SPMD (Single Program Multiple Data) and MPMD (Multiple Program Multiple Data). The SPMD (SIMD) scheme allows multiple processors to execute the same instructions or program, with each processor accessing different data. The MPMD (MIMD) scheme allows multiple processors, each executing different instructions or programs and each with its own data. So in one scheme all the processors execute the same program or instructions, and in the other scheme each processor executes different instructions. Of course, there are hybrids of these models in which the processors are divided up, some working as SPMD and some as MPMD.

Using SPMD, all of the processors are simply doing the same thing, only with different data. For example, we can divide a single puzzle into groups and assign each group to a separate processor. Each processor applies the same rules for putting together the puzzle, but each processor has different pieces to work with. When all of the processors are done putting their pieces together, we can see the whole.

Using MPMD, each processor executes something different. Even though they are all trying to solve the same problem, they have been assigned different aspects of the problem. For example, we might divide the work of securing a Web server as an MPMD scheme, with each processor assigned a different task: one processor monitors the ports, another monitors login attempts, another analyzes packet contents, and so on. Each processor works with its own data relative to its area of concern. Although the processors are each doing different work using different data, they are working toward a single solution: security.

The parallel programming concepts that we discuss in this book are easily described using PRAM, SPMD (SIMD), and MPMD (MIMD). In fact, these schemes and models are used to implement practical small- to medium-scale applications and should be sufficient until you are ready to do advanced parallel programming.
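To make the SPMD idea concrete, here is a minimal C++ sketch in which every thread runs the same code over a different slice of the data, much like the puzzle groups above. The data set, the worker count, and the summing task are assumptions made for illustration.

// A minimal sketch of SPMD: same instructions, different data. Each thread
// sums its own slice of the shared input; the partial sums are combined at
// the end.
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

int main()
{
    std::vector<int> data(100);
    std::iota(data.begin(), data.end(), 1);     // 1, 2, ..., 100

    const int workers = 4;                      // assumed worker count
    const std::size_t chunk = data.size() / workers;
    std::vector<long> partial(workers, 0);

    std::vector<std::thread> threads;
    for (int i = 0; i < workers; ++i) {
        // Same program, different data: each thread sums its own slice.
        threads.emplace_back([&data, &partial, chunk, i]() {
            auto first = data.begin() + i * chunk;
            partial[i] = std::accumulate(first, first + chunk, 0L);
        });
    }
    for (auto &t : threads) {
        t.join();
    }

    long total = std::accumulate(partial.begin(), partial.end(), 0L);
    std::cout << "total = " << total << '\n';   // prints: total = 5050
}

An MPMD version would instead launch different functions in different threads, one monitoring, one analyzing, and so on, each working on its own data, as in the Web server example above.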

[1] M. J. Flynn, "Very high-speed computing systems," Proceedings of the IEEE, 54(12), 1901-1909, December 1966.


