The Need for Synchronization


Thread synchronization is required when an application is multithreaded and these threads attempt to use global variables and resources, or the threads need to wait until some event has completed before continuing execution.

First, let's look at why synchronization is required when multiple threads access a global variable. In the following code, a global floating-point variable is declared, and two threads try to perform different actions on that variable.

  float g_fValue = 10.0;

  void f1()     // called by thread 1
  {
    g_fValue = g_fValue * g_fValue;
  }

  void f2()     // called by thread 2
  {
    g_fValue = 3.0 + g_fValue;
  }

It is easy to see that the value in "g_fValue" can be either (10*10) + 3 = 103 or (10+3)*(10+3) = 169 after the two threads have finished executing, depending on whether function "f1" or function "f2" completes first. The order in which the two functions execute depends on how the threads were started and scheduled.

However, there is a much more worrisome potential outcome: the variable "g_fValue" may contain a completely different value after the functions have completed. While we think of a statement like "g_fValue += 10;" as being atomic (that is, it will execute in its entirety all in one go without interruption), the statement is actually compiled into a number of machine code operations.

  g_fValue = g_fValue * g_fValue;
    fld      dword ptr [g_fValue (0041060c)]
    fmul     dword ptr [g_fValue (0041060c)]
    fst      dword ptr [g_fValue (0041060c)]

  g_fValue = 3.0 + g_fValue;
    fadd     qword ptr [__real@8@4000c000000000000000 (0040c020)]
    fstp     dword ptr [g_fValue (0041060c)]

From this listing it becomes obvious that the first statement "g_fValue = g_fValue * g_fValue" is compiled into three different op codes. The thread quantum could finish after the first op code has completed, and the second thread may then be scheduled to execute the statement "g_fValue = 3.0 + g_fValue". Therefore, the resulting computation would be 10*(10+3) = 130. This scenario would be a very rare event, but it could happen. Thread synchronization techniques should be employed to prevent it from ever happening.

A related problem arises when a thread must complete a number of related steps as an atomic unit without interruption from other threads. For example, if you write an application to create and maintain a linked list, a thread that inserts a new item in the linked list must create the new item, link the new item to the previous item in the list, and link the new item to the next item in the list without other threads accessing the linked list (Figure 6.1).

Figure 6.1. Adding a new item to a linked list
graphics/06fig01.gif

If a second thread attempts to access the linked list before the new item has been linked to the next item in the list, the second thread will prematurely reach the end of the list when it traverses the new item (Figure 6.2).

Figure 6.2. Threads add and access items at the same time
graphics/06fig02.gif

Worse still, if two threads attempt to insert new items at the same point in the linked list, the list itself can be broken (Figure 6.3). This is because each thread is unaware of the links being created by the other thread. These are known as race conditions and require synchronization.

Figure 6.3. Race conditions when two threads manipulate the linked list at the same time
graphics/06fig03.gif

The second need for synchronization occurs when threads need to coordinate their executions based on some event being completed. In this situation one or more threads are typically blocked and are waiting for the event to occur. When two or more threads are waiting for two or more events to complete, there is a real chance that a "deadlock" or "deadly embrace" will occur. This should be avoided at all costs. Here is a typical situation that leads to a deadlock:

  • Thread 1 has resource 1 locked and is blocked waiting on resource 2 to be freed.

  • Thread 2 has resource 2 locked and is blocked waiting on resource 1 to be freed.

In this situation neither thread 1 nor thread 2 can continue executing because they are both blocked. Because the threads are blocked, the threads cannot execute code to free the resource they have locked (Figure 6.4). They therefore remain blocked forever. A deadlock between two worker threads is serious, but a deadlock between a worker thread and the primary thread is critical. The application will not be responsive to the user and will have to be closed down.

Figure 6.4. Deadlock between two threads
graphics/06fig04.gif

Synchronization techniques should be employed to ensure that threads block correctly, and perhaps to provide timeouts so that a thread can recover in the event of a deadlock. Deadlocks may occur only infrequently in an application, when a particular train of events occurs in a particular order, and this makes them difficult to track down.

Deadlocks can be avoided by following this simple rule:

Always lock or block on a resource in the same order. All threads blocking or locking resource 1 and resource 2 should block or lock resource 1 before attempting to block or lock resource 2. Resources should be unlocked in the reverse order they were locked in.

The scenario outlined above with thread 1 and thread 2 blocking on resource 1 and resource 2 leads to a deadlock because the resources were not locked in the same order. Applying this rule leads to the following:

  • Thread 1 locks resource 1 and attempts to use resource 2. If resource 2 is not in use, thread 1 locks resource 2, uses the resources, and then unlocks resource 2 followed by resource 1.

  • Thread 2 attempts to lock resource 1. If it is in use, thread 2 blocks. If it is not in use, thread 2 locks resource 1 and then attempts to lock resource 2. It will wait until resource 2 is available, use the resources, and then unlock resource 2 and then resource 1.

While this rule is quite simple, it can be difficult to implement if the code used to lock and block on the resources is scattered throughout the application. Therefore, you should write functions or classes that manage the locking or blocking.

One of the more difficult design issues is deciding which of the synchronization techniques available in Windows CE should be applied to your problem. After describing each of the techniques, the section "Selecting the Correct Synchronization Technique" later in the chapter provides a summary and a set of selection criteria.




Windows CE 3.0: Application Programming (Prentice Hall Series on Microsoft Technologies)
ISBN: 0130255920
Year: 2002
Pages: 181