

M

Mapping

of heterogeneous parallel algorithm to network of computers, see Heterogeneous parallel algorithm, mapping to network of computers

Matrix multiplication

1-D heterogeneous block distribution, 143

HPF program, 144–145

1-D homogeneous block distribution, 96

MPI program, 127–129

2-D heterogeneous block cyclic distribution, 269–275

mpC program, 275–284, 371–383

2-D homogeneous block distribution, 98, 266–267

HPF program, 133–134

MPI program, 134–135

2-D homogeneous block cyclic distribution, 269

Message Passing Interface, see MPI

Message passing model, 95

in MPI, 104–106. See also MPI

mpC language

abstract network, 165–166

definition of, 165–166

duration, 168–170

automatic, 168

static, 169

mapping to physical processes, 210–211, 247–253. See also Heterogeneous parallel algorithm, mapping to network of computers

name of, 165–166

parent of network, 173–178

abstract processors, 165

distributed label, 167

distributed value, 162

distributed variable, 161, 166

examples

barrier, implementation of, 179, 180–181, 183

matrix multiplication C = A × B^T, 199–206

heterogeneous parallel algorithm, 200

mpC program, 201–204

weight of metallic construction, 191–194

functions

basic functions, 159

library functions

MPC_Barrier, 182–186

MPC_Get_number_of_processors, 204

MPC_Get_processors_info, 205

MPC_Global_barrier, 181–182

MPC_Printf, 160, 179

MPC_Processor_name, 162

MPC_Total_nodes, 163–164

MPC_Wtime, 194

network functions, 182–186

declaration, 183

definition, 183–184

invocation, 182, 185–186

network argument, 183, 185–186

nodal functions, 160

modularity, 213

network type definition, 170–171

coordinate variables, 171

interaction of abstract processors, specification of

algorithmic patterns, 242–243

parallel, 242–243

sequential, 242–243

communication patterns, 233–241

dynamic, 240

irregular, 240–241

regular, 234–238

name of network type, 171

parent coordinates, specification of, 175–176

volume of communication, specification of, 221

volume of computation, specification of

absolute, 221

relative, 191–194

operators

assignment, 164, 168, 175, 180, 188, 206, 236

coordof, 172–173

timeof, 232–233

projection, 161

recon statement, 196–199, 226, 232, 244, 248

replicated value, 163

replicated variable, 163

subnetworks, 187

explicitly defined, 189

implicitly defined, 187–188

task parallelism, 213

mpC programming system, 206–211, 244–253, 397–413

command-line user interface, 206, 399–413

compiler, 206–209, 399–401

target message-passing program, 207–209

dispatcher, 207

host-process, 207

process team, 207

creation of, 207–209

destruction of, 208

working processes, 207

library, 206. See also mpC language, library functions

run-time support system, 206

MPI, 103–129

collective communication, 120–127

barrier, 120–121

broadcast, 121

gather, 121–122

reduction, 123–126

scatter, 123

communication mode, 116–120

blocking, 116

buffered, 116

nonblocking, 118–120

ready, 117

standard, 116

synchronous, 116–117

communicators, 104–106, 109–111

communicator constructors, 109

communicator destructors, 111

intercommunicators, 106

intracommunicators, 106

MPI_COMM_NULL, 109

MPI_COMM_WORLD, 107

MPI_UNDEFINED, 109

contexts, 106, 109, 118, 121

datatypes, 112–115

derived, 112–115

pre-defined, 112

examples

dot product, 124

matrix multiplication, 127–129

maximum element of distributed vector, 125–126

functions

MPI_Allgather, 129

MPI_Barrier, 121

MPI_Bcast, 121

MPI_Bsend, 117

MPI_Comm_create, 109

MPI_Comm_dup, 109

MPI_Comm_free, 111

MPI_Comm_group, 107

MPI_Comm_rank, 109

MPI_Comm_size, 109

MPI_Comm_split, 109

MPI_Finalize, 110–111

MPI_Gather, 121–122

MPI_Gatherv, 122

MPI_Get_processor_name, 127

MPI_Group_difference, 108

MPI_Group_excl, 107

MPI_Group_free, 111

MPI_Group_incl, 107

MPI_Group_intersection, 108

MPI_Group_range_excl, 108

MPI_Group_range_incl, 107–108

MPI_Group_union, 108

MPI_Ibsend, 119

MPI_Init, 110

MPI_Iprobe, 120

MPI_Irecv, 119

MPI_Irsend, 119

MPI_Issend, 119

MPI_Op_create, 126

MPI_Op_free, 126–127

MPI_Probe, 120

MPI_Recv, 115–116

MPI_Reduce, 123–124

MPI_Rsend, 117

MPI_Scatter, 123

MPI_Scatterv, 123

MPI_Send, 111

MPI_Ssend, 117

MPI_Test, 119–120

MPI_Type_commit, 115

MPI_Type_contiguous, 113

MPI_Type_free, 115

MPI_Type_hindexed, 114

MPI_Type_hvector, 114

MPI_Type_indexed, 114

MPI_Type_struct, 114

MPI_Type_vector, 113

MPI_Wait, 119

MPI_Wtick, 127

MPI_Wtime, 127

message, 111–112

message envelope, 111–112, 115

message tag, 112, 115–116

modularity, 104–106

point-to-point communication, 111–120

functions, 111, 115, 117, 119

modes, 116–120. See also communication mode

properties, 117

fairness, 117

order, 117

progress, 117

process groups, 104, 106–108, 111

group constructors, 107–108

group destructors, 111

process rank, 104

programming model, 104–106

MPICH, 103

MPP, see Distributed memory multiprocessor

MT program, see Multithreaded program

Multithreaded dot product

in OpenMP, see OpenMP, example, multithreaded dot product

in Pthreads, see Pthreads library, example, multithreaded dot product

Multithreaded program

definition of, 65

in Pthreads, see Pthreads library

Mutex, 65, 68, 71–72

in Pthreads, see Pthreads library

Mutual exclusion, 65, 68, 71–72

Mutual exclusion lock, see Mutex

in Pthreads, see Pthreads library, mutexes




Parallel Computing on Heterogeneous Networks (Wiley Series on Parallel and Distributed Computing)
Year: 2005
