14.1 Dealing with Critical Sections


Imagine a computer system in which all users share a single printer and can simultaneously print. How would the output appear? If lines of users' jobs were interspersed, the system would be unusable. Shared devices, such as printers, are called exclusive resources because they must be accessed by one process at a time. Processes must execute the code that accesses these shared resources in a mutually exclusive manner.

A critical section is a code segment that must be executed in a mutually exclusive manner, that is, only one thread of execution can be active within it at a time. For example, code that modifies a shared variable is part of a critical section if other threads of execution might access the shared variable during the modification. The critical section problem refers to the problem of executing critical section code in a safe, fair and symmetric manner.
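
To see concretely why modifying a shared variable is a critical section, consider the following sketch, which is not one of the book's programs; the names counter, increment and ITERATIONS are ours. Two POSIX threads increment a shared counter without protection. Each increment compiles to a load, an add and a store, so updates from the two threads can interleave and some are lost.

#include <pthread.h>
#include <stdio.h>
#define ITERATIONS 1000000

static long counter = 0;                    /* shared variable */

static void *increment(void *arg) {
   long i;
   for (i = 0; i < ITERATIONS; i++)
      counter++;                            /* unprotected critical section */
   return NULL;
}

int main(void) {
   pthread_t t1, t2;
   if (pthread_create(&t1, NULL, increment, NULL) ||
       pthread_create(&t2, NULL, increment, NULL))
      return 1;
   pthread_join(t1, NULL);
   pthread_join(t2, NULL);
   printf("expected %d, got %ld\n", 2*ITERATIONS, counter);  /* usually differ */
   return 0;
}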

Program 14.1 contains a modification of Program 3.1 on page 67 to generate a process chain. Each process prints its message one character at a time. The program takes an extra command-line argument specifying a delay after each character is output, making it more likely that the process quantum will expire inside the output loop. The call to wait ensures that the original process does not terminate until all the children have completed, preventing the shell prompt from appearing in the middle of a child's output.

Exercise 14.1

Explain why the marked section of code in Program 14.1 is a critical section.

Answer:

After falling out of the forking loop, each process outputs an informative message to standard error one character at a time. Since standard error is shared by all processes in the chain, that part of the code is a critical section and should be executed in a mutually exclusive manner. Unfortunately, the critical section of Program 14.1 is not protected, so output from different processes can interleave in a random manner, different for each run.

Exercise 14.2

Run Program 14.1 with different values of the delay parameter. What happens?

Answer:

When the delay parameter is near 0, each process usually outputs its entire line without losing the CPU. Longer delays make it more likely that a process will lose the CPU before completing the entire message. For large enough values of the delay, each process outputs only one character before losing the CPU. Depending on the speed of the machine, you might need to use values of the delay in excess of 1 million for this last case.

Exercise 14.3

Program 3.1 on page 67 uses a single fprintf to standard error to produce the output. Does this have a critical section?

Answer:

Yes. Although the output is in a single C language statement, the compiled code is a sequence of assembly language instructions and the process can lose the CPU anywhere in this sequence. Although this might be less likely to happen in Program 3.1 than in Program 14.1, it is still possible.

Program 14.1 chaincritical.c

A program to generate a chain of processes that write to standard error.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>
#include "restart.h"
#define BUFSIZE 1024

int main(int argc, char *argv[]) {
   char buffer[BUFSIZE];
   char *c;
   pid_t childpid = 0;
   int delay;
   volatile int dummy = 0;
   int i, n;

   if (argc != 3) {   /* check for valid number of command-line arguments */
      fprintf(stderr, "Usage: %s processes delay\n", argv[0]);
      return 1;
   }
   n = atoi(argv[1]);
   delay = atoi(argv[2]);
   for (i = 1; i < n; i++)
      if (childpid = fork())
         break;
   snprintf(buffer, BUFSIZE,
            "i:%d  process ID:%ld  parent ID:%ld  child ID:%ld\n",
            i, (long)getpid(), (long)getppid(), (long)childpid);
   c = buffer;
   /********************** start of critical section **********************/
   while (*c != '\0') {
      fputc(*c, stderr);
      c++;
      for (i = 0; i < delay; i++)
         dummy++;
   }
   /********************** end of critical section ************************/
   if (r_wait(NULL) == -1)
      return 1;
   return 0;
}

Each process in Program 14.1 executes the statements in sequential order, but the statements (and hence the output) from the different processes can be arbitrarily interleaved. An analogy to this arbitrary interleaving comes from a deck of cards. Cut a deck of cards. Think of each section of the cut as representing one process. The individual cards in each section represent the statements in the order that the corresponding process executes them. Now shuffle the two sections by interleaving. There are many possibilities for a final ordering, depending on the shuffling mechanics. Similarly, there are many possible interleavings of the statements of two processes because the exact timing of processes relative to each other depends on outside factors (e.g., how many other processes are competing for the CPU or how much time each process spent in previous blocked states waiting for I/O). The challenge for programmers is to develop programs that work for all realizable interleavings of program statements.

Code with synchronized critical sections can be organized into distinct parts. The entry section contains code to request permission to modify a shared variable or other resource. You can think of the entry section as the gatekeeper, allowing only one thread of execution to pass through at a time. The critical section usually contains code to access a shared resource or to execute code that is nonreentrant. The explicit release of access provided in the exit section is necessary so that the gatekeeper knows it can allow the next thread of execution to enter the critical section. After releasing access, a thread may have other code to execute, which we separate into the remainder section to indicate that it should not influence decisions by the gatekeeper.
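
The following schematic sketch shows one way the four parts might look in code, using an unnamed POSIX semaphore as the gatekeeper. A semaphore is only one possible mechanism, and the names gatekeeper, access_resource and do_other_work are illustrative rather than taken from any program in the text.

#include <errno.h>
#include <semaphore.h>

static sem_t gatekeeper;    /* assume sem_init(&gatekeeper, 0, 1) was called */

static void access_resource(void) { /* ... touch the shared resource ... */ }
static void do_other_work(void)   { /* ... needs no shared resource ... */ }

static void worker(void) {
   while (sem_wait(&gatekeeper) == -1)   /* entry section: ask the gatekeeper */
      if (errno != EINTR)                /* retry only if interrupted by a signal */
         return;
   access_resource();                    /* critical section */
   sem_post(&gatekeeper);                /* exit section: admit the next thread */
   do_other_work();                      /* remainder section */
}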

A good solution to the critical section problem requires fairness as well as exclusive access. Threads of execution that are trying to enter a critical section should not be postponed indefinitely. Threads should also make progress. If no thread is currently in the critical section, a waiting thread should be allowed to enter.

Critical sections commonly arise when two processes access a shared resource, as in Program 14.1, but be aware that they can arise in other ways. Code in a signal handler executes asynchronously with the rest of the program, so it can be thought of as logically executing in a separate thread of execution. Variables that are modified in the signal handler and used in the rest of the program must be treated as part of a critical section. In Program 8.6 on page 271, the signal handler and the results function compete for access to buf and buflen. The entry section or gatekeeper is the code in results that blocks SIGUSR1; the exit section is the code that unblocks SIGUSR1 and restores the original signal mask.
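
The same pattern can be written out schematically. The sketch below is not a reproduction of Program 8.6; buf, buflen, handler and show_results are simplified stand-ins. Blocking SIGUSR1 plays the role of the entry section, and restoring the saved signal mask plays the role of the exit section.

#include <signal.h>
#include <stdio.h>
#define BUFSIZE 1024

static char buf[BUFSIZE];
static int buflen = 0;

static void handler(int signo) {      /* runs asynchronously with main program */
   /* ... fills buf and updates buflen ... */
}

static int show_results(void) {
   sigset_t blockset, oldmask;
   if ((sigemptyset(&blockset) == -1) || (sigaddset(&blockset, SIGUSR1) == -1))
      return -1;
   if (sigprocmask(SIG_BLOCK, &blockset, &oldmask) == -1)    /* entry section */
      return -1;
   fprintf(stderr, "%.*s\n", buflen, buf);                   /* critical section */
   return sigprocmask(SIG_SETMASK, &oldmask, NULL);          /* exit section */
}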

Program 2.3 on page 39 illustrates a related problem that can arise with recursive calls to nonreentrant functions such as strtok. Although this example is not strictly a critical section problem by the definition given above, it has the same characteristics because the single thread of execution changes its execution environment when a function call pushes a new activation record on the stack.
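
As an illustration of the reentrancy issue (this sketch is not Program 2.3), the code below parses a string on two levels. If both loops called strtok, the inner loop would overwrite the static state on which the outer loop depends; strtok_r keeps its state in caller-supplied pointers, so the nested parse is safe.

#include <stdio.h>
#include <string.h>

static void print_words(char *line) {       /* inner parse, delimited by blanks */
   char *saveword;
   char *word;
   for (word = strtok_r(line, " ", &saveword); word != NULL;
        word = strtok_r(NULL, " ", &saveword))
      printf("   word: %s\n", word);
}

int main(void) {
   char text[] = "one two;three four;five";
   char *line;
   char *saveline;
   for (line = strtok_r(text, ";", &saveline); line != NULL;   /* outer parse */
        line = strtok_r(NULL, ";", &saveline))
      print_words(line);                    /* nested parse cannot clobber saveline */
   return 0;
}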
