Thread Synchronization


Synchronization is all about communication and coordination between threads. Throughout the .NET Framework documentation, you'll see references to whether a method is thread-safe. For example, if you look at the documentation for the CollectionBase class under System.Collections, you'll see a dedicated subsection that discusses thread safety. This essentially tells you whether you have to access that class in a thread-safe manner or whether thread safety is handled for you by the class. This is where synchronization constructs come into play. They allow interthread communication and coordination that can ensure thread-safe access to shared resources. Note that all of the constructs described in this section must be applied correctly by your code to be effective. They provide a number of signaling and blocking mechanisms that enable you to develop code that is thread-safe.

Threads run within the same application domain as their creator, and as such they can share resources. This can present problems if threads have conflicting needs in terms of shared resources. Before we go any further into this topic, let's spend some time discussing how threads and context are related and how this affects synchronization issues.

Race Conditions and You

Predictability is important. You need to be able to control when and where an event happens on a thread. If a critical variable changes at an unexpected time, your application can fail, perhaps spectacularly. We typically refer to this as a race condition: a situation in which the timing of events can affect the correctness of your logic. It is important to avoid race conditions, especially when data integrity or system uptime is critical.

More formally, a race condition is a situation in which multiple threads of execution update or modify shared system resources (objects or variables) in an unsynchronized fashion, leading to unexpected or undesirable application states. This can manifest itself in different ways, but the core problem can usually be traced to poor assumptions. When multiple threads are involved, timing is simply not guaranteed. In other words, just because two events happen in sequential order in your testing doesn't mean that those events will always happen in that order, unless you've implemented an appropriate synchronization construct to provide a guarantee. Guaranteeing synchronization behavior is most crucial when shared resources (files, network connections, variables, arrays, and so forth) are involved. By implementing synchronization in your code, you can ensure that your code executes in a deterministic fashion.
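To make the problem concrete, the following minimal sketch (a hypothetical example, not part of this chapter's samples) has two threads increment a shared counter without any synchronization. Because the read, add, and store steps can be interleaved between the threads, the final value can come up short of the expected 200,000:

 Imports System.Threading

 Module RaceConditionDemo

     Private SharedCount As Integer = 0

     Sub Main()
         Dim t1 As New Thread(AddressOf Work)
         Dim t2 As New Thread(AddressOf Work)
         t1.Start()
         t2.Start()
         t1.Join()
         t2.Join()
         'With no synchronization, the result is often less than 200000
         Console.WriteLine("SharedCount = " & SharedCount)
     End Sub

     Sub Work()
         Dim i As Integer
         For i = 1 To 100000
             'The read, add, and store are separate steps and can be interleaved
             SharedCount = SharedCount + 1
         Next
     End Sub

 End Module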

Synchronization Constructs

As I mentioned, the synchronization classes provide signaling mechanisms that allow you to develop thread-safe code and implement a level of determinism. Table 3-3 describes these classes.

Table 3-3. System.Threading Synchronization Classes

AutoResetEvent: Notifies one or more waiting threads that an event has occurred. This class cannot be inherited.

Interlocked: Provides atomic operations for variables that are shared by multiple threads.

ManualResetEvent: Notifies one or more waiting threads that an event has occurred. This class cannot be inherited.

Monitor: Provides a mechanism that synchronizes access to objects.

Mutex: A synchronization primitive that can also be used for interprocess synchronization.

ReaderWriterLock: Defines a lock that implements single-writer and multiple-reader semantics.

WaitHandle: Encapsulates operating system-specific objects that wait for exclusive access to shared resources.

The WaitHandle Class

The WaitHandle class is an abstract type that is used as the base class for the Mutex, AutoResetEvent, and ManualResetEvent classes. To understand how these inherited classes work, it is important to understand what wait handles are and how they work.

Note

Because the WaitHandle class is an abstract class (MustInherit), only derived instances of this class can exist. If you need to create your own type of WaitHandle, you can inherit from the WaitHandle class and provide your own implementation.


A wait handle is a standard threading concept and has to do with resource ownership. A wait handle has two possible states: signaled and nonsignaled. The whole basis of this event signaling relies on a request architecture. Threads can call methods on the WaitHandle object and can be blocked until the WaitHandle is in a signaled state. WaitHandle is an abstract class that provides a standard interface for dealing with all types of wait handles, including the Mutex, AutoResetEvent, and ManualResetEvent classes.

WaitOne

The WaitOne method of the WaitHandle class has three overloads that result in different behaviors. The first overload blocks the current thread until the WaitHandle is signaled:

Overloads Overridable Public Function WaitOne() As Boolean

This will cause the thread to wait indefinitely. If the signal never comes, the thread will never proceed (unless another thread calls this thread's Interrupt method).

The other two overloads of WaitOne allow you to specify a timeout interval:

Overloads Overridable Public Function WaitOne(Integer, Boolean) As Boolean
Overloads Overridable Public Function WaitOne(TimeSpan, Boolean) As Boolean

If the WaitHandle is signaled within the specified time period, the function will return True. Otherwise, the function will return False. Threads can use wait handles to check for specific events on a periodic basis. A timeout of 0 can be specified to check the status of the WaitHandle and return immediately with its signaled status.
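For instance, a thread can poll a wait handle and do other work between checks. A minimal sketch (assuming an Imports System.Threading statement and a handle that some other thread will eventually signal):

 Sub PollForSignal(handle As WaitHandle)
     'A timeout of 0 returns immediately with the current signaled state
     Do While Not handle.WaitOne(0, False)
         'Not signaled yet; do some other work and check again later
         Console.WriteLine("Still waiting...")
         Thread.Sleep(500)
     Loop
     Console.WriteLine("Signal received")
 End Sub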

WaitAny

The WaitAny method of the WaitHandle class is a shared method. Unlike the WaitOne method, which waits for a single wait handle, WaitAny allows a thread to check multiple wait handles. You provide the method with an array of wait handles that the thread is interested in. As its name might suggest, WaitAny waits for at least one of the wait handles in the array to be signaled before returning. You have options similar to those provided by the WaitOne method. You can wait for a signal, or you can specify a timeout period.

Overloads Public Shared Function WaitAny(WaitHandle()) As Integer
Overloads Public Shared Function WaitAny(WaitHandle(), Integer, Boolean) As Integer
Overloads Public Shared Function WaitAny(WaitHandle(), TimeSpan, Boolean) As Integer
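The return value is the array index of the wait handle that satisfied the wait, so the thread can tell which event woke it up. A minimal sketch, assuming two wait handles created elsewhere:

 Sub WaitForEither(first As WaitHandle, second As WaitHandle)
     Dim waitHandles() As WaitHandle = New WaitHandle() {first, second}
     'Blocks until either handle is signaled and returns its array index
     Dim index As Integer = WaitHandle.WaitAny(waitHandles)
     Console.WriteLine("Handle " & index & " was signaled first")
 End Sub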
WaitAll

WaitAll, which is also a shared method of the WaitHandle class, takes an array of wait handles. This method differs from WaitAny in that it requires all of the passed wait handles to be signaled before it returns.

Overloads Public Shared Function WaitAll(WaitHandle()) As Boolean
Overloads Public Shared Function WaitAll(WaitHandle(), Integer, Boolean) As Boolean
Overloads Public Shared Function WaitAll(WaitHandle(), TimeSpan, Boolean) As Boolean

As you might have figured out, it is possible to duplicate the behavior of a call to WaitOne using WaitAny or WaitAll. You simply pass a single WaitHandle to the method:

Sub Wait(handle As WaitHandle)
    'These three statements are functionally equivalent
    handle.WaitOne()
    WaitHandle.WaitAny(New WaitHandle() {handle})
    WaitHandle.WaitAll(New WaitHandle() {handle})
End Sub
The AutoResetEvent and ManualResetEvent Classes

The AutoResetEvent and ManualResetEvent classes are related, literally: they both derive from the WaitHandle class and share a common purpose, which is to signal the state of one thread to another. These signals are referred to as events, even though they're really just wait handles. These event classes have a signaled state that you can either set or clear (using the Set or Reset methods). The state is either set (True) or not set (False). Ultimately, the AutoResetEvent and ManualResetEvent classes differ only in how they behave after a waiting thread has been signaled.

The AutoResetEvent class resets the event automatically after a single waiting thread has been signaled. This means that if you have two threads waiting on the same AutoResetEvent instance, only one will receive the signal.

The ManualResetEvent, as its name suggests, requires manual intervention to reset the signal state. When the signal state is set, all calls to WaitOne will always return True until some thread resets the signal to False. This can be useful if you need to communicate an event to multiple threads at once.

The following sample application, WaitHandleTest, demonstrates how the behaviors of the AutoResetEvent and ManualResetEvent differ. Note that it is necessary to repeatedly signal the AutoResetEvent to make sure that both threads complete; the ManualResetEvent, in contrast, requires only a single call to Set.

Imports System.Threading

Module Module1

    Public Handle As WaitHandle

    Sub Main()
        Dim t1 As Thread
        Dim t2 As Thread

        Console.WriteLine("Starting AutoResetEvent test")
        Handle = New AutoResetEvent(False)
        t1 = New Thread(AddressOf Run1)
        t2 = New Thread(AddressOf Run2)
        t1.Start()
        t2.Start()
        Thread.Sleep(1000)
        Console.WriteLine("Setting the handle")
        CType(Handle, AutoResetEvent).Set()
        Thread.Sleep(1000)
        Console.WriteLine("Setting the handle")
        CType(Handle, AutoResetEvent).Set()

        Console.WriteLine("Starting ManualResetEvent test")
        Handle = New ManualResetEvent(False)
        t1 = New Thread(AddressOf Run1)
        t2 = New Thread(AddressOf Run2)
        t1.Start()
        t2.Start()
        Thread.Sleep(1000)
        Console.WriteLine("Setting the handle")
        CType(Handle, ManualResetEvent).Set()

        'Tell the user this is done and give them a chance to read this
        Console.WriteLine("Test complete!!")
        Console.ReadLine()
    End Sub

    Public Sub Run1()
        Console.WriteLine("Starting Run1")
        Handle.WaitOne()
        Console.WriteLine("Run1 Done")
    End Sub

    Public Sub Run2()
        Console.WriteLine("Starting Run2")
        Handle.WaitOne()
        Console.WriteLine("Run2 Done")
    End Sub

End Module

Figure 3-3 shows the output from this example.

Figure 3-3. The output from the WaitHandleTest sample application.


Note

Note that I did not use a Join construct with the child threads to prevent the Main method from exiting before the threads have completed. If the Console.ReadLine statement were missing from the end of the Main method, it's conceivable that the application would summarily exit before either child thread was signaled. In that case, the CLR would destroy the threads and clean up after the Main method. If you were to call the Join method on both t1 and t2, you would be assured that this example could not exit before the child threads completed their tasks, as sketched below.
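A minimal sketch of that alternative, reusing the t1 and t2 variables from the WaitHandleTest sample, would place the following at the end of Main in place of the Console.ReadLine call:

 'Block Main until both child threads have finished their work
 t1.Join()
 t2.Join()
 Console.WriteLine("Both child threads have completed")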


The Mutex Class

Mutex is another WaitHandle-derived class. You use it to coordinate access to a resource that requires exclusive access. In fact, the word mutex comes from the term mutually exclusive. Unlike other constructs that are geared toward more complex read/write access permissions (the ReaderWriterLock class is a good example), the Mutex class provides a single thread with exclusive access to a resource or group of resources. The following example uses the Mutex class to provide exclusive access to a Stack collection:

Imports System.Threading

Public Class MutexedStack

    Private m_Mutex As Mutex
    Private m_Stack As Stack

    Public Sub New()
        Me.m_Mutex = New Mutex()
        Me.m_Stack = New Stack()
    End Sub

    Public Sub Add(obj As Object)
        Me.m_Mutex.WaitOne()
        Me.m_Stack.Push(obj)
        Me.m_Mutex.ReleaseMutex()
    End Sub

    Public Function GetItem() As Object
        Me.m_Mutex.WaitOne()
        GetItem = Me.m_Stack.Pop()
        Me.m_Mutex.ReleaseMutex()
    End Function

End Class

When and where you want to use a mutex is up to you. The need for mutually exclusive access is highly dependent on your architecture and the needs of your application. Remember one important point: a mutex cannot enforce access to an object if you do not use the WaitOne/ReleaseMutex syntax consistently. If you miss a possible scenario or provide direct access to the underlying object that you want to enforce the access restrictions on, you risk having the object (a Stack in the previous example) modified at an improper time.
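As Table 3-3 notes, a mutex can also be used for interprocess synchronization because it wraps an operating-system object that can be given a name visible to other processes. The following minimal sketch (the name "MyAppMutex" is an arbitrary choice that the cooperating processes would have to agree on) attempts to claim machine-wide ownership before doing its work:

 Imports System.Threading

 Module SingleInstanceCheck

     Sub Main()
         'A named mutex is shared by every process that opens the same name
         Dim m As New Mutex(False, "MyAppMutex")
         'Wait up to one second for exclusive ownership
         If m.WaitOne(1000, False) Then
             Try
                 Console.WriteLine("This process owns the mutex")
                 'Do the work that requires machine-wide exclusive access here
             Finally
                 m.ReleaseMutex()
             End Try
         Else
             Console.WriteLine("Another process currently owns the mutex")
         End If
     End Sub

 End Module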

The Interlocked Class

The Interlocked class allows you to protect against errors that can occur if the thread scheduler switches contexts while your thread is updating a variable that can be accessed by other threads. Obviously, when you're updating shared variables, it is vitally important that your thread not be interrupted. The Interlocked class provides support for what are known as atomic operations. There are also performance advantages to using the methods provided by the Interlocked class. The operations supported by the Interlocked class can often be executed in a single processor instruction, leading to more efficient program execution.

Increment and Decrement

The Increment and Decrement methods perform increment and decrement operations on a variable as a single, atomic operation. Normally, when you increment or decrement a variable, the operation takes three steps: the value is retrieved, the addition or subtraction is performed, and the result is stored in the original variable. The problem is that during any of these steps, your thread might be preempted by the operating system, which can lead to some strange behaviors if, for example, the same variable is subsequently modified by another thread. The Increment and Decrement methods protect against this type of error by preventing the operating system from preempting the operations. Consider the following example:

Dim x As Integer = 0

'Here is how you would normally increment and decrement x
x = x + 1    'Increment. This operation can be preempted
x = x - 1    'Decrement. This operation can be preempted

'These operations are functionally equivalent, but cannot be
'preempted by the operating system.
Interlocked.Increment(x)
Interlocked.Decrement(x)

What we can see here is that the Interlocked class provides a very simple thread-safe mechanism for incrementing variables. But wait, there's more.

Exchange and CompareExchange

Like the Increment and Decrement methods, the Exchange and CompareExchange methods of the Interlocked class provide support for additional atomic operations on variables. Exchange atomically exchanges the values of the specified variables. CompareExchange combines two operations: comparing two values and storing a third value in one of the variables based on the outcome of the comparison. The following example demonstrates the use of both methods:

Dim oldx, x, y As Integer
x = 5
y = 10

'Performing an exchange without the Interlocked class
oldx = x
x = y

'Performing an exchange with the Interlocked class
oldx = Interlocked.Exchange(x, y)

'Performing a compare and exchange without the Interlocked class
oldx = x
If x = 10 Then
    x = y
End If

'Performing a compare and exchange with the Interlocked class
oldx = Interlocked.CompareExchange(x, y, 10)

This last example wraps up the Interlocked class. Hopefully you can see how useful this class is, and I wholeheartedly recommend its use in your code. In situations where you need to perform critical operations in an atomic manner, the Interlocked class can be a savior.
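One practical use of CompareExchange is a run-once flag: whichever thread swaps the flag from 0 to 1 first performs the initialization, and every other thread skips it. A minimal sketch (a hypothetical helper, not part of this chapter's samples):

 Private m_Initialized As Integer = 0

 Public Sub EnsureInitialized()
     'Atomically set the flag to 1 if it is still 0; the return value is the
     'previous value, so exactly one caller ever sees 0 and does the work
     If Interlocked.CompareExchange(m_Initialized, 1, 0) = 0 Then
         Console.WriteLine("Performing one-time initialization")
     End If
 End Sub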

The Monitor Class and SyncLock

A critical section is a block of code that cannot be executed simultaneously by multiple threads. Imagine you're sharing an object (an ArrayList, for example) between multiple threads. Certain operations on an ArrayList require exclusive access. For example, if you're sorting the array, you'll want to prevent other threads from accessing the object until the sort is complete.

The Monitor class controls access to a block of code by locking an object for a single thread. This prevents other threads from accessing that object in the same block of code (a critical section) while the lock is in place. An interesting side effect is that the Monitor class prevents simultaneous entry to a block of code only as long as the same object is involved. If a different object is involved, the Monitor class allows the same block of code to be entered simultaneously. In other words, it prevents concurrent operations on a single object but allows the same set of operations to execute in parallel when multiple objects are involved, as the sketch below illustrates.
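A minimal sketch of that behavior, assuming an Imports System.Threading statement: the lock is taken on the ArrayList instance itself, so threads sorting different lists can run the block concurrently, while threads sorting the same list take turns.

 Public Sub SortList(list As ArrayList)
     'Lock the specific instance being sorted, not the whole method
     Monitor.Enter(list)
     Try
         list.Sort()
     Finally
         Monitor.Exit(list)
     End Try
 End Sub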

Note

The behavior of the Monitor class is quite separate from that of the Mutex class. The Monitor class enforces mutually exclusive access to a block of code as long as the same object is involved. Otherwise, it will allow unlimited access to a block of code. The Mutex class, on the other hand, enforces strict exclusivity for a block of code and is not conditional.


Visual Basic .NET provides simple access to the Monitor class by encapsulating it with the SyncLock syntax. SyncLock is a block-level statement that prevents multiple threads from entering that block simultaneously. Imagine a situation in which you need to perform some operation, such as resizing an array, that will probably fail if two threads try to perform that operation at the same time. By using SyncLock , you can prevent the second thread from entering the block of code surrounded by the SyncLock statement until the first thread finishes. This vastly simplifies your code and makes it very simple to implement critical sections. The following code illustrates how you can use both the SyncLock syntax and the Monitor class to achieve the same result:

Public MyArrayList As ArrayList

Public Sub Add(obj As Object)
    'This is a critical section using SyncLock
    SyncLock MyArrayList
        MyArrayList.Add(obj)
    End SyncLock
End Sub

Public Sub Add1(obj As Object)
    'This is a critical section using Monitor
    Monitor.Enter(MyArrayList)
    MyArrayList.Add(obj)
    Monitor.Exit(MyArrayList)
End Sub

The Monitor class provides additional flexibility above and beyond that of the SyncLock syntax. It allows you to better control the locking behavior, for instance. Imagine a situation in which you want to avoid waiting beyond a certain point in time to enter a critical section. If you use a SyncLock block, your code will wait until the lock succeeds. You might decide that you'd rather attempt to enter the critical section and do something else if the lock cannot be obtained in a reasonable amount of time. This level of control is possible through the direct use of the Monitor class, as the following sketch suggests.
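Here is a minimal sketch of that timeout behavior using Monitor.TryEnter, which returns False instead of blocking indefinitely when the lock cannot be acquired in time (it reuses the MyArrayList field from the previous example):

 Public Sub TryAdd(obj As Object)
     'Attempt to enter the critical section, waiting at most one second
     If Monitor.TryEnter(MyArrayList, 1000) Then
         Try
             MyArrayList.Add(obj)
         Finally
             Monitor.Exit(MyArrayList)
         End Try
     Else
         Console.WriteLine("Could not obtain the lock; doing something else")
     End If
 End Sub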

The ReaderWriterLock Class

Some resources can be shared on a nonexclusive basis under certain circumstances. A resource might require exclusive access only when a thread intends to modify it, while any number of threads can otherwise be allowed to view its contents. In this case, the ReaderWriterLock can enforce exclusive access to a resource while it is being modified and allow nonexclusive access while it is being read. The following Counter class uses a ReaderWriterLock to protect its internal count:

Imports System.Threading

Public Class Counter

    Private m_rwLock As ReaderWriterLock
    Private m_Counter As Integer = 0

    Public Sub New()
        Me.m_rwLock = New ReaderWriterLock()
    End Sub

    Public Sub Increment()
        Me.m_rwLock.AcquireWriterLock(-1)
        Me.m_Counter += 1
        Me.m_rwLock.ReleaseWriterLock()
    End Sub

    Public Sub Decrement()
        Me.m_rwLock.AcquireWriterLock(-1)
        Me.m_Counter -= 1
        Me.m_rwLock.ReleaseWriterLock()
    End Sub

    Public ReadOnly Property Count() As Integer
        Get
            Me.m_rwLock.AcquireReaderLock(-1)
            Count = Me.m_Counter
            Me.m_rwLock.ReleaseReaderLock()
        End Get
    End Property

End Class
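A brief usage sketch, assuming the Counter class above and an Imports System.Threading statement: two writer threads increment the counter concurrently, and because every update holds the writer lock, the final count is exactly 2000.

 Module CounterTest

     Private m_Counter As New Counter()

     Sub Main()
         Dim t1 As New Thread(AddressOf Work)
         Dim t2 As New Thread(AddressOf Work)
         t1.Start()
         t2.Start()
         t1.Join()
         t2.Join()
         Console.WriteLine("Count = " & m_Counter.Count)
     End Sub

     Sub Work()
         Dim i As Integer
         For i = 1 To 1000
             m_Counter.Increment()
         Next
     End Sub

 End Module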