Synchronization Objects

The process of coordinating resource use among multiple threads is known as synchronization. A thread synchronizes itself with other threads by putting itself to sleep until the operating system signals it to wake up by means of a synchronization object. (A sleeping thread does not execute and therefore consumes essentially no processor time.)

The Win32 API provides several synchronization objects. Of these, the easiest to use is the critical section. The others are kernel objects and include events, mutexes, semaphores, and waitable timers.

Don't Roll Your Own

While it might be tempting to "roll your own" synchronization objects, don't. These objects may appear simple, but it is difficult to properly synchronize access to code using variables not under operating system control. For instance, you might try the following construction:

    BOOL gate1 = FALSE;
    BOOL gate2 = FALSE;

    if (gate1 == FALSE)
    {
        gate1 = TRUE;
        if (gate2 == FALSE)
        {
            gate2 = TRUE;
            // Do something protected...
        }
    }

Code such as this is very problematic. For example, there is no guarantee that the first test of gate1 will be performed atomically with the setting of gate1 on the next line. All manner of race conditions could result from this kind of code. If you are interested in more information, refer to Andrew Tanenbaum's Modern Operating Systems (Prentice Hall, 1992).

The kernel objects differ from critical sections because they can be used across process boundaries. The flip side of this benefit is that these objects require a switch from user mode to kernel mode to function, which makes them slower than critical sections. A distinguishing feature of two of the kernel object types (events and semaphores) is that they allow resource counting, while critical sections and one of the other object types (mutexes) allow only for exclusive access. Table 2-1 summarizes the differences among the synchronization objects.

Table 2-1 Synchronization Objects

Name               Cross Process   Resource Counting
Critical section   No              No
Event              Yes             Not automatically
Mutex              Yes             No
Semaphore          Yes             Yes
Waitable timer     Yes             No

Cross-process object types are by definition slower than critical sections, which exist within a single process space. Note also that using an object across processes requires that the object be given a name when it is created. These named objects, along with memory-mapped files, share a common name space. Creating a named synchronization object will fail if an object of that name already exists, even if the existing object is of a different type. Thus, a naming convention for these objects can be important in larger projects.

NOTE
The name space limitation can be used to your advantage. If you want to be sure that only one instance of your application is running on a system at once, have each instance try to create a kernel-based synchronization object with a specific name. The first instance of the program will be able to create the object, but subsequent instances will not. This can be a more reliable way to detect a running instance of the application than checking window titles or classes.

Critical Sections

The critical section is an extremely efficient way to synchronize global resource use among multiple threads, and it is often the easiest method to understand. The example in "Multithreading" can be rewritten to operate safely in the multithreaded world as follows:

    // A global variable...
    long numProcessed = 0;

    // Function declarations...
    void DoInitialStuff(void);
    void DoOtherProcessing(struct THING *thing);

    // A critical section variable...
    CRITICAL_SECTION CriticalSection;

    WinMain()   /* Or some other function that is called before
                   additional threads are created */
    {
        InitializeCriticalSection(&CriticalSection);

        // Other work, create and use other threads...

        // Eventually, before exiting...
        DeleteCriticalSection(&CriticalSection);
    }

    // A function called by multiple threads...
    ProcessThing(struct THING *thing)
    {
        EnterCriticalSection(&CriticalSection);
        if (numProcessed == 0)
        {
            DoInitialStuff();
        }
        numProcessed++;
        LeaveCriticalSection(&CriticalSection);
        DoOtherProcessing(thing);
    }

This code could be fine-tuned, especially if DoInitialStuff takes a long time to complete, but it will work properly with multiple threads executing it (assuming that DoOtherProcessing does not access global resources).

In general, the less done within the critical section the better. In the example above, DoInitialStuff acts as a choke point, causing all threads calling ProcessThing to wait until the call to DoInitialStuff returns and the numProcessed variable is properly incremented. In this example, that would not be too heavy a burden since there is only a single call to DoInitialStuff. In other circumstances, a critical section might cause all processing to be serialized, thus losing some of the benefits of multithreading.

Before it is used, a critical section must be initialized by a call to InitializeCriticalSection. Before the application terminates, the critical section must be deleted by a call to DeleteCriticalSection:

    VOID InitializeCriticalSection(LPCRITICAL_SECTION lpCriticalSection);
    VOID DeleteCriticalSection(LPCRITICAL_SECTION lpCriticalSection);

In general, it is safest to call these functions in the application initialization and termination code, to ensure that no other threads are active.

Between the return from EnterCriticalSection and the call to LeaveCriticalSection, any other thread calling EnterCriticalSection on the same critical section object will be blocked. EnterCriticalSection does not return until the calling thread is the only owner of the critical section. This blocking behavior is a problem addressed by other functions not shown in the above example.

NOTE
MFC users take special note: even though the Lock member function of the CCriticalSection class can take a single parameter, tantalizingly named dwTimeout, this parameter is ignored, and Lock is a blocking function, just like EnterCriticalSection in the native Win32 API.

Blocking functions and deadlock potential

Blocking functions create several potential problems. The first occurs when the function simply does not return because a resource it requires is not available. For instance, suppose you use a critical section to synchronize access to a file shared by multiple threads. The thread using the file (and thus owning the critical section) may be unable to continue and may put up a dialog box asking the user to make a decision. If the user is not sitting at the terminal, the thread will not give up the critical section, and other threads waiting for the same critical section (to gain access to the shared resource) will be blocked. Of course, this example is silly (you should never place user interface code within a critical section), but less contrived instances of deadlock with blocking functions abound.

For example, a problem arises when multiple resources are needed and different threads request them in different orders. In the following case, thread A holds critical section 1 and is waiting for critical section 2, while thread B holds critical section 2 and is waiting for critical section 1. The following code illustrates the potential problem:

    CRITICAL_SECTION CS1;
    CRITICAL_SECTION CS2;

    // In WinMain or a similar function...
    InitializeCriticalSection(&CS1);
    InitializeCriticalSection(&CS2);

    int ThreadA()
    {
        EnterCriticalSection(&CS1);
        EnterCriticalSection(&CS2);
        // Do some work....
        LeaveCriticalSection(&CS2);
        LeaveCriticalSection(&CS1);
        return(0);
    }

    int ThreadB()
    {
        EnterCriticalSection(&CS2);
        EnterCriticalSection(&CS1);
        // Do some work....
        LeaveCriticalSection(&CS1);
        LeaveCriticalSection(&CS2);
        return(0);
    }

In this admittedly contrived example, the following sequence of operations is possible:

         Thread A                         Thread B
    1.   EnterCriticalSection(&CS1);
    2.                                    EnterCriticalSection(&CS2);
    3.   EnterCriticalSection(&CS2);
         Deadlock!
    4.                                    EnterCriticalSection(&CS1);
                                          Deadlock!

In reality, the resource contention is generally less obvious, and the possibility for deadlock is real. Deadlocks result when two threads, processes, or users are each waiting for a resource held by the other. The code above could be rewritten in the following manner, and a deadlock would not occur:

    CRITICAL_SECTION CS1;
    CRITICAL_SECTION CS2;

    // In WinMain or a similar function...
    InitializeCriticalSection(&CS1);
    InitializeCriticalSection(&CS2);

    int ThreadA()
    {
        EnterCriticalSection(&CS1);
        EnterCriticalSection(&CS2);
        // Do some work....
        LeaveCriticalSection(&CS2);
        LeaveCriticalSection(&CS1);
        return(0);
    }

    int ThreadB()
    {
        EnterCriticalSection(&CS1);
        EnterCriticalSection(&CS2);
        // Do some work....
        LeaveCriticalSection(&CS2);
        LeaveCriticalSection(&CS1);
        return(0);
    }

Here each thread is allocating resources in a set order. (In this example, CS1 is always entered before CS2.) Even in a complex system, allocating all resources in the same order in all threads is a good way to avoid deadlocks. Although that is not always possible, in a system that will be heavily dependent upon exclusive use of resources, it is worth consideration.

Many systems use what Andrew Tanenbaum describes as the "Ostrich Algorithm" for handling deadlocks. (The algorithm simply ignores the problem.) This makes a great deal of sense for some systems. If a failure resulting from deadlocks occurs once every two years and the system goes down every month for some other reason (such as compiler errors or hardware failures), the failure that occurs every two years is not critical and is possibly not worth programming against. Many systems in widespread use are potentially in danger of deadlocks occurring, but the practical risk is small. Of course, if the system is running a nuclear power plant or monitoring a life support system, the stakes of the game increase drastically.

Server-based deadlocks

Servers are likely to acquire resources on behalf of clients; a database server is a common example. Because the clients requesting resources have no way to detect a deadlock themselves, servers require an additional level of deadlock detection: they must track the exclusive resources used by the clients as new requests come in.

Here's an example of deadlock in a server-based system: Client A requests the record for Douglas Reilly for an edit and gets the record with a lock. Client B requests the record for Erin Reilly for edit and gets the record with a lock. Next, Client A requests the record for Erin Reilly for edit and tries to acquire the record with a lock. Client B now requests the record for Douglas Reilly for edit and tries to acquire the record with a lock. Presuming the request for a lock is a blocking operation, the clients will be deadlocked. Further, the clients cannot really know in advance when the records they are requesting are locked unless they each try to acquire locks by using a nonblocking function.

The previous example is not entirely unreasonable or contrived. For instance, consider a person's record in a medical system. If Douglas Reilly is the guarantor of payment for Erin Reilly, it is reasonable to think that different parts of the system might access this information in different orders. A clinical user might get to Erin's record and then want to get to the guarantor's record; a financial system user might start with the guarantor's record and then need to get to the patient's record.

The solution for a server-based system is to track what clients have which records locked and, when a new request to lock a record fails, scan the lock table for the client holding the lock on the requested record. When Client B tries to access Douglas Reilly's record, the lock will fail. Further checking will allow the server to determine that Client A is holding a lock on Douglas Reilly and also waiting for a lock on Erin Reilly, and that Client B already has Erin Reilly's record locked. In this case, the blocking functions for each client can return an error indicating that a deadlock has occurred, and each client can take whatever action is appropriate.

The important part of this discussion is that the deadlock can be detected in the example above only because there is a central authority on the activity of the clients: the server process. A similar system that uses client-based database engines (for example, using Access MDB files or dBase files) cannot detect deadlocks so easily.

Other critical section functions

There are several other critical section functions that can help with the blocking problems mentioned in the previous sections. The first and most important is TryEnterCriticalSection. It takes a pointer to a critical section object, but unlike EnterCriticalSection, this function will not block. The return value is nonzero if the critical section was entered successfully or 0 if the critical section is owned by another thread. This function is available only in Windows NT 4.0 and later; it is not available in Windows 9x. Its limited availability prevents MFC from directly offering full support for this functionality in the CCriticalSection class.

Using TryEnterCriticalSection , we could rewrite the previous example as follows:

    // A function called by multiple threads...
    ProcessThing(struct THING *thing)
    {
        while (TryEnterCriticalSection(&CriticalSection) == 0)
        {
            // Do some other work, or exit the function and return
            // a value indicating that we cannot get CriticalSection
        }
        if (numProcessed == 0)
        {
            DoInitialStuff();
        }
        numProcessed++;
        LeaveCriticalSection(&CriticalSection);
        DoOtherProcessing(thing);
    }

In this example, we have control over what to do in case we cannot acquire the resource we need. In addition to the examples of possible actions presented in the code comment, we could even display a dialog box that allows the user to cancel the operation. If we do that, we must be certain that we will not simply continue through the code as if we did get the critical section.

Two functions, InitializeCriticalSectionAndSpinCount and SetCriticalSectionSpinCount, can be used to set a spin count for a critical section. To understand the implications of a spin count, you must understand what happens inside the operating system when a thread calls EnterCriticalSection. If the critical section is already owned by another thread, the operating system calls WaitForSingleObject, which does not return until the other thread relinquishes the critical section. We will discuss WaitForSingleObject in detail later in this chapter, but for now, suffice it to say that calling WaitForSingleObject is a relatively expensive operation. Imagine a system with a couple of processors and a number of threads that serialize access to a shared resource. If the threads are very busy, a multiprocessor system might keep this critical section owned by one thread or another almost constantly. In this case, threads constantly calling WaitForSingleObject can be a nontrivial drain on system resources.

The spin count of a critical section object can come into play on multiprocessor systems when a thread tries to acquire a critical section and the section is already owned by another thread. Rather than immediately making a call to WaitForSingleObject , the operating system will "spin" (repeatedly try to get ownership of the critical section) the number of times specified by the spin count. If the object becomes free during the spinning, a call to WaitForSingleObject is unnecessary. (The Visual C++ online help indicates that the heap manager uses a spin count of about 4000; that setting gives optimal performance in most worst-case scenarios.)

SetCriticalSectionSpinCount and InitializeCriticalSectionAndSpinCount are prototyped as follows:

    DWORD SetCriticalSectionSpinCount(LPCRITICAL_SECTION lpCriticalSection,
        DWORD dwSpinCount);
    BOOL InitializeCriticalSectionAndSpinCount(
        LPCRITICAL_SECTION lpCriticalSection, DWORD dwSpinCount);

InitializeCriticalSectionAndSpinCount takes a pointer to a critical section object, just like InitializeCriticalSection, but it also expects a parameter specifying the spin count. SetCriticalSectionSpinCount takes the same parameters as InitializeCriticalSectionAndSpinCount, though the critical section should be initialized by calling InitializeCriticalSection before you pass it to SetCriticalSectionSpinCount.

NOTE
The use of the spin count functions has two significant restrictions. First, these functions have meaning only on multiprocessor systems. Second, the functions require Windows NT 4.0 Service Pack 3 or later in order to function.

Events

Events are objects used to signal that some event has taken place. For instance, a program might use an event object to allow a group of threads to know that some action has taken place. The threads can then act on that knowledge. As with all kernel-based synchronization objects, the general sequence of events is that the object is created or opened, used, and then closed (using CloseHandle ). Events are more flexible than the other synchronization objects and can be of special value for developers of server applications.

Even the call to CreateEvent is a little busier than the other Create... calls. The function prototype is as follows:

    HANDLE CreateEvent(LPSECURITY_ATTRIBUTES lpEventAttributes,
        BOOL bManualReset, BOOL bInitialState, LPCTSTR lpName);

The first parameter, lpEventAttributes, is a pointer to a security attributes structure. Windows 9x programs do not use it; security attributes will be described in more detail later. The bManualReset parameter indicates whether the event must be reset manually. If this parameter is FALSE, the event, once signaled, automatically resets to nonsignaled. The flag's broader implications will be discussed shortly. The third parameter, bInitialState, specifies whether the object should be created signaled or nonsignaled. Finally, the lpName parameter allows the event to be named, which is useful for events that need to be detected in other applications. If this parameter is NULL, the event is created without a name. Threads within the same process can use the handle returned by CreateEvent to access the event; another process accesses the event through the handle returned by OpenEvent. In either case, the handle should be closed using CloseHandle, or it will be freed automatically when the process exits. As with all of the kernel-based synchronization objects, the object is destroyed when the last handle to it is closed.

NOTE
For Windows NT 4 Terminal Server Edition Service Pack 4 or later, or Windows 2000 Terminal Server Edition, the lpName parameter for all of the kernel-based synchronization object creation functions can indicate whether the name should be global to the machine or local to the session. If you prefix the name with "Global\", the object is created global to the machine; if you prefix the name with "Local\", the object is created just for one user's session.

The bManualReset parameter also affects how multiple threads react when they are waiting for an event to be signaled. If it is TRUE and multiple threads are waiting, all waiting threads are notified of the state change when the event is signaled, and the threads are responsible for resetting the event. If it is FALSE, only a single thread is notified and the event is automatically reset to the nonsignaled state. This can be useful when a program needs a single thread to act on each occurrence of an event; an example would be maintaining a queue of waiting threads and having exactly one thread react each time the event fires. The changes to the Win32 API in Windows 2000 allow more elegant solutions to thread pooling, but this remains an option.

Mutexes

The name "mutex" comes from the object's role in granting mutually exclusive access to a shared resource. A mutex is signaled when it is not owned by any thread and nonsignaled when a thread owns it. For instance, if multiple threads or processes share memory, they might agree to use a mutex to control access to that memory. This one-at-a-time access to a resource is called serialization. Serialization can be a good thing when needed, but it can also cause a bottleneck in a server application.

A mutex must first be created using the CreateMutex function:

    HANDLE CreateMutex(LPSECURITY_ATTRIBUTES lpMutexAttributes,
        BOOL bInitialOwner, LPCTSTR lpName);

Once again, lpMutexAttributes is the security attribute. If the second parameter, bInitialOwner, is TRUE, the calling thread will own the mutex upon creation. The third parameter is the name of the mutex if it is to be shared among processes; if not, this parameter can be NULL. As with other kernel-based synchronization objects, the lpName parameter allows other processes or threads to open the object using the OpenMutex function:

    HANDLE OpenMutex(DWORD dwDesiredAccess, BOOL bInheritHandle,
        LPCTSTR lpName);

The first parameter, dwDesiredAccess, is a set of flags that specify the access required to the object. These flags are listed in the Visual C++ online help for each of the object types. The second parameter, bInheritHandle, specifies whether the handle should be inheritable by processes created by the CreateProcess function. The third parameter, lpName, must exactly match the name of a mutex previously created by CreateMutex.

Once a mutex is created or opened, a thread can use one of the Wait functions to gain ownership of it. The exception occurs when the mutex is created with bInitialOwner set to TRUE. Once the task that needed to be serialized is complete, the mutex can be released by calling ReleaseMutex, which takes a single parameter: the handle to the mutex object. A thread that owns the mutex can reacquire it by calling one of the Wait functions again, but it must then call ReleaseMutex once for each call to a Wait function.

Semaphores

Unlike a mutex, a semaphore object allows a number of threads to share ownership of a resource. This kind of metered access is useful for resources that are shareable but not limitlessly so. For example, a server that does calculations might know that it can support at most 10 clients at any given moment. You can restrict access to the server by creating a semaphore that allows a maximum of 10 shared owners. The eleventh thread will discover that it cannot gain partial ownership of the semaphore and can refuse the client or make the client wait. The semaphore can then be passed to one of the Wait functions so that the thread will become active when one of the other threads releases its partial ownership.

To create a semaphore object, you call the CreateSemaphore function:

    HANDLE CreateSemaphore(LPSECURITY_ATTRIBUTES lpSemaphoreAttributes,
        LONG lInitialCount, LONG lMaximumCount, LPCTSTR lpName);

Once again, the first parameter, lpSemaphoreAttributes, is the security attribute. The second parameter, lInitialCount, is a long value that must be greater than or equal to 0 and less than or equal to the third parameter, lMaximumCount, which must be greater than 0. Often the two count values are the same. The final parameter, lpName, is the name used to allow other processes to open the semaphore.

The initial and maximum count values are what make a semaphore different from a mutex: they allow more than a single "owner" of the semaphore, although ownership does not fully describe why semaphores are used. (Typically, the initial count is set to something other than the maximum count only when the semaphore is created before the metered resource is fully ready for use.)

A semaphore maintains the maximum number of threads that can own it and an internal count of available slots, initially set to the value of lInitialCount. The count is decremented each time a thread successfully completes one of the Wait functions. When the count reaches 0, further Wait calls block or time out until you call ReleaseSemaphore, which increments the count:

    BOOL ReleaseSemaphore(HANDLE hSemaphore, LONG lReleaseCount,
        LPLONG lpPreviousCount);

ReleaseSemaphore does more than simply increment the count. In addition to the semaphore handle, it takes lReleaseCount, the amount to add to the semaphore's current count, and lpPreviousCount, a pointer that receives the count as it was before the release. As with other kernel-based synchronization objects, the lpName parameter given at creation allows other processes or threads to open the object, in this case with OpenSemaphore:

    HANDLE OpenSemaphore(DWORD dwDesiredAccess, BOOL bInheritHandle,
        LPCTSTR lpName);

The first parameter, dwDesiredAccess, is a set of flags that specify the access required to the object. These flags are listed in the Visual C++ online help for each of the object types. The second parameter, bInheritHandle, specifies whether the handle should be inheritable by processes created by the CreateProcess function. The third parameter, lpName, must exactly match (including case) the name of a semaphore previously created by CreateSemaphore.

NOTE
Other Win32 functions seem to expose some of the same functionality as semaphores. These functions provide interlocked access to long variables, ensuring that operations on the variable are performed atomically; in other words, each operation completes as a unit. For instance, InterlockedIncrement increments the value of the variable pointed to by its single parameter. Several other functions are available, including InterlockedCompareExchangePointer for Windows NT 4 and later and Windows 98. How do these functions differ from semaphores? For one thing, they generally will not work between processes, though they can if the variable being atomically modified is in shared memory. In general, if you are just manipulating a shared variable, the interlocked functions might be appropriate. However, whenever the possible end result of some operation is the need to wait for another thread or process, a semaphore is preferable.

Waitable Timers

A waitable timer is a kernel object that signals itself at a specific time or at regular intervals and can be shared by multiple threads. To create a waitable timer object, you call the CreateWaitableTimer function:

    HANDLE CreateWaitableTimer(LPSECURITY_ATTRIBUTES lpWaitableTimerAttributes,
        BOOL bManualReset, LPCTSTR lpName);

The first parameter, lpWaitableTimerAttributes, is the security attribute. The second parameter, bManualReset, indicates whether the timer must be reset manually once signaled or resets automatically after releasing a single waiting thread. The final parameter, lpName, is the name used to allow other processes to open the waitable timer.

You can retrieve the handle of an existing waitable timer by calling OpenWaitableTimer :

    HANDLE OpenWaitableTimer(DWORD dwDesiredAccess, BOOL bInheritHandle,
        LPCTSTR lpName);

The first parameter, dwDesiredAccess, is a set of flags that specify the access required to the object. These flags are listed in the Visual C++ online help for each of the object types. The second parameter, bInheritHandle, specifies whether the handle should be inheritable by processes created by the CreateProcess function. The lpName parameter must exactly match (including case) the name of a waitable timer previously created by CreateWaitableTimer.

You set a waitable timer using SetWaitableTimer :

    BOOL SetWaitableTimer(HANDLE hTimer, const LARGE_INTEGER *pDueTime,
        LONG lPeriod, PTIMERAPCROUTINE pfnCompletionRoutine,
        LPVOID lpArgToCompletionRoutine, BOOL fResume);

The hTimer parameter is the handle from CreateWaitableTimer or OpenWaitableTimer. The second parameter, pDueTime, is a pointer to a 64-bit value. If the value is positive, it represents the absolute time at which the object is to be signaled, in 100-nanosecond intervals since January 1, 1601; if the value is negative, it represents the number of 100-nanosecond intervals to wait before the object is signaled. The format of this parameter is the same as that of the FILETIME structure; pDueTime can also be thought of as simply a quad word. The third parameter, lPeriod, specifies a period, in milliseconds, between activations of the timer. If lPeriod is 0, the timer is signaled once; otherwise, the timer is signaled every lPeriod milliseconds. The fourth and fifth parameters specify an optional completion routine and the argument to be passed to it; the completion routine is called when the timer is signaled.

Timers can be canceled by a call to CancelWaitableTimer , a function that takes a handle to a waitable timer as its single parameter:

    BOOL CancelWaitableTimer(HANDLE hTimer);

Wait Functions and Thread Synchronization

Now that we've seen the synchronization objects, let's examine the functions that allow you to wait for them. Six functions allow your program to wait for a synchronization object to be signaled. Their descriptions and distinguishing features are presented in Table 2-2. For more detailed information, refer to the examples using the Wait functions in the communication examples in Chapter 12, or see the Microsoft Platform Software Development Kit (SDK) documentation. Each synchronization object has two states, signaled and unsignaled, and the Wait functions, in general, wait for one or more objects to become signaled.

Table 2-2 Wait Functions for Use with Synchronization Objects

Name                          Number of Objects to Wait For   Special Features
WaitForSingleObject           1                               None
WaitForSingleObjectEx         1                               Optionally returns when an I/O completion
                                                              routine or asynchronous procedure call
                                                              (APC) is queued to the thread
WaitForMultipleObjects        Multiple (caller specifies)     None
WaitForMultipleObjectsEx      Multiple (caller specifies)     Optionally returns when an I/O completion
                                                              routine or APC is queued to the thread
MsgWaitForMultipleObjects     Multiple (caller specifies)     For use in threads that create windows
MsgWaitForMultipleObjectsEx   Multiple (caller specifies)     Optionally returns when an I/O completion
                                                              routine or APC is queued to the thread;
                                                              for use in threads that create windows

In general, we will ignore the MsgWait functions. Server applications do not usually create windows, so understanding the details of these functions is unnecessary. For server applications, we can create fairly simple programs that provide any user interface we might need by using one of the Wait functions, which are prototyped as follows:

    DWORD WaitForSingleObject(HANDLE hHandle, DWORD dwMilliseconds);
    DWORD WaitForMultipleObjects(DWORD nCount, CONST HANDLE *lpHandles,
        BOOL fWaitAll, DWORD dwMilliseconds);
    DWORD WaitForSingleObjectEx(HANDLE hHandle, DWORD dwMilliseconds,
        BOOL bAlertable);
    DWORD WaitForMultipleObjectsEx(DWORD nCount, CONST HANDLE *lpHandles,
        BOOL fWaitAll, DWORD dwMilliseconds, BOOL bAlertable);

WaitForSingleObject is a relatively simple function that takes two parameters: hHandle, the handle of the object to wait for, and dwMilliseconds, the number of milliseconds to wait for the handle to become signaled. If dwMilliseconds is 0, the function tests the handle's state and returns immediately: WAIT_TIMEOUT if the object is unsignaled, WAIT_OBJECT_0 if it is signaled. If dwMilliseconds is INFINITE, the function does not return until the object is signaled. For any other value of dwMilliseconds, the function waits until the object is signaled or the time-out expires. For instance, if dwMilliseconds is 1000, the function waits up to 1 second for the object to become signaled. The normal return values from the Wait functions are listed in Table 2-3.

Table 2-3 Return Values from the Wait Functions

Return Value                     Description
WAIT_FAILED                      The call failed; call GetLastError for more details.
WAIT_ABANDONED_0 to              The object waited for was a mutex that was not released
WAIT_ABANDONED_0 +               before the thread owning it terminated. Ownership of the
(object count - 1)               mutex is granted to the calling thread, and the mutex is
                                 set to the nonsignaled state.
WAIT_OBJECT_0 to                 The specified object is in the signaled state.
WAIT_OBJECT_0 +
(object count - 1)
WAIT_TIMEOUT                     The object is unsignaled, and the specified time-out
                                 has expired.

WaitForMultipleObjects behaves in a manner similar to WaitForSingleObject, but it can wait for more than a single object. The count of objects is passed in nCount, and lpHandles points to an array of handles to be tested for the signaled state. If fWaitAll is TRUE, the function returns only when all the objects passed are signaled or the time specified by dwMilliseconds has expired. If fWaitAll is FALSE, the function returns when any of the objects becomes signaled before the time-out, with a return code of WAIT_OBJECT_0 plus the zero-based index of the object whose state changed.

The Wait functions that end in Ex all support returning when an I/O completion routine or an asynchronous procedure call (APC) is queued to the thread. This extension will be discussed in greater detail along with I/O completion ports in Chapter 13.



Inside Server-Based Applications (DV-MPS General)
ISBN: 1572318171
Year: 1999
Pages: 91