Dealing with Thread Synchronization and Contention


All current versions of Microsoft Windows use pre-emptive multitasking. This means that any currently running thread can be interrupted (pre-empted) to allow another thread to execute. This type of multitasking environment is far more reliable than the cooperative multitasking of earlier 16-bit versions of Windows and drastically reduces how often the operating system hangs or freezes unexpectedly because of poorly behaved applications.

The downside to pre-emptive multitasking is that your application has no control over when its threads are interrupted, so you need to be aware that your code is executing in a multithreaded environment and understand the consequences. The key thing to remember when building multithreaded applications is synchronization.

Synchronization addresses the problems that arise when multiple threads attempt to perform the same task or access the same data at the same time, or when a thread stops unexpectedly and potentially leaves data in an indeterminate state.
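To make the problem concrete, the following short sketch (hypothetical names, not one of this chapter's listings) shows what goes wrong without synchronization: two threads increment a shared counter with no locking, and because counter++ is really a read-modify-write sequence, interleaved updates get lost.

using System;
using System.Threading;

class RaceDemo
{
    static int counter = 0;

    static void Main()
    {
        Thread t1 = new Thread(new ThreadStart(Increment));
        Thread t2 = new Thread(new ThreadStart(Increment));
        t1.Start();
        t2.Start();
        t1.Join();
        t2.Join();

        // Expected 200000, but counter++ is a read-modify-write
        // sequence, so interleaved threads overwrite each other's
        // updates and the result is usually smaller.
        Console.WriteLine("Counter: {0}", counter);
    }

    static void Increment()
    {
        for (int i = 0; i < 100000; i++)
            counter++;
    }
}

On most runs the final count falls short of 200,000, which is exactly the kind of indeterminate result that synchronization is meant to prevent.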

Various facilities are available within the .NET Framework's core threading library that allow you to manage contention for shared resources within a multithreaded application as well as timing and synchronization issues. Table 10.3 provides an overview of these. Each one will be discussed in more detail in the following subsections.

Table 10.3. Synchronization Handling Facilities

Facility          Description                                               .NET Classes
----------------  --------------------------------------------------------  --------------------------------
Mutex             Prevents more than one thread from accessing a shared     Mutex
                  resource at a time.
Critical Section  Similar to a mutex, but not cross-process aware.          lock, Monitor, Interlocked,
                                                                            ReaderWriterLock
Semaphore         Limits the number of threads that can access the same     Semaphore
                  shared resource.
Event             Raises signals to alert other threads of important        AutoResetEvent, ManualResetEvent,
                  state changes.                                            WaitHandle


Using the lock Keyword

The lock keyword is one of the simpler synchronization facilities available to you. When you wrap a code block inside a lock statement, only one thread at a time is allowed to execute that block. This means that code written inside the block is thread-safe with respect to other blocks locking on the same object, and you can be sure that data accessed only within those blocks won't be left indeterminate or inconsistent.

When you create a lock block, you pass the lock keyword an object as a parameter. This object acts as the lock token: all lock blocks that lock on the same object instance are mutually exclusive, so the object defines the scope of the critical section, as shown in the following code:

lock (this)
{
    // thread-safe code
}
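As a slightly fuller sketch (the Account and balanceLock names are made up for illustration), many developers prefer to lock on a private object rather than on this, so that no outside code can accidentally take the same lock:

using System;

class Account
{
    // private lock token; outside code cannot lock on it
    private readonly object balanceLock = new object();
    private decimal balance = 100m;

    public void Withdraw(decimal amount)
    {
        lock (balanceLock)
        {
            // the check and the update happen atomically with
            // respect to other lock (balanceLock) blocks
            if (balance >= amount)
                balance -= amount;
        }
    }
}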


Using Mutexes

The Mutex class is an extremely powerful thread synchronization tool. A Mutex not only provides the ability to synchronize multiple threads, but it can also synchronize those threads across multiple processes. The purpose of the Mutex is to prevent unwanted simultaneous access by multiple threads to a single shared resource.

When the first thread to access a shared resource acquires a Mutex, all subsequent threads that want to access that shared resource must wait until the first one has released the resource. The release of the resource is signified by the release of the Mutex. The Mutex class enforces thread identity. This means that only the thread that acquired the Mutex can release it. In contrast, a Semaphore can be released by any thread, not just the one that acquired it.

As mentioned before, a Mutex can actually be used to synchronize cross-process activities as well as multithreaded activities within the same application. When you create a new instance of a Mutex, you can choose to create a local mutex (visible only to the process under which it was created) or a named system mutex (visible to all processes so long as each process knows the name of the mutex).

Be extremely careful when using cross-process mutexes. Because the scope of the mutex is at the operating system level, it is possible that logic failures or unexpected application crashes can cause the mutex to be in an unpredictable state.

When protecting resources with a Mutex, the first step is to call WaitOne, which blocks until the Mutex is signaled and ownership can be acquired. After the call to WaitOne, you can access the shared resources without fear of synchronization problems. Finally, when the work is complete, you must call ReleaseMutex(). If a thread stops before a Mutex is released, the Mutex is considered abandoned. If you encounter an abandoned Mutex, the protected data could be in an inconsistent state. In other words, an abandoned Mutex constitutes a coding error that needs to be corrected, especially if that Mutex is a system-level global Mutex.
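One defensive pattern, sketched below rather than taken from the chapter's listings, is to pair WaitOne and ReleaseMutex with try/finally and to treat an AbandonedMutexException as a warning that the protected data may be suspect:

using System;
using System.Threading;

class MutexPattern
{
    static Mutex mut = new Mutex();

    static void UseSharedResource()
    {
        try
        {
            // WaitOne throws AbandonedMutexException if the previous
            // owner exited without calling ReleaseMutex; this thread
            // still acquires ownership, but the protected data may
            // be in an inconsistent state.
            mut.WaitOne();
        }
        catch (AbandonedMutexException)
        {
            Console.WriteLine("Warning: mutex was abandoned; data may be suspect.");
        }

        try
        {
            // ... access the shared resource here ...
        }
        finally
        {
            // always release, even if the protected code throws
            mut.ReleaseMutex();
        }
    }
}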

Listing 10.4 shows both uses of a Mutex. First, a global mutex is created. This allows the application to tell whether another instance of itself is already running (a task that is fairly common, yet often considered difficult). The second Mutex is a local Mutex used to protect access to a specific shared resource. As you will see in the output, the Mutex-protected block forces the threads to access the data serially (one after another) instead of simultaneously, thereby ensuring that the calculations on that shared resource produce consistent and predictable values.

Listing 10.4. System and Local Mutexes


using System;
using System.Threading;
using System.Collections.Generic;
using System.Text;

namespace MutexSample
{
    class Program
    {
        static int sharedNumber = 42;
        static Mutex localMut = new Mutex();
        static bool isNew;
        static Mutex globalMut =
            new Mutex(true, "Mutex Demo Global Mutex", out isNew);

        static void Main(string[] args)
        {
            if (!isNew)
            {
                Console.WriteLine(
                    "This application is already running, shutting additional instance down.");
                return;
            }

            // spin off a bunch of threads to perform
            // processing on a shared resource
            Thread[] workers = new Thread[20];
            for (int i = 0; i < 20; i++)
            {
                Thread worker = new Thread(new ThreadStart(DoWork));
                workers[i] = worker;
                worker.Start();
            }

            foreach (Thread workerThread in workers)
                workerThread.Join();

            Console.WriteLine(
                "All work finished, new value of shared resource is {0}", sharedNumber);
            Console.ReadLine();
            globalMut.ReleaseMutex();
        }

        static void DoWork()
        {
            // sit and wait until it's OK to access
            // the shared resource
            localMut.WaitOne();

            // modify shared resource
            // multiple lines of code to modify
            // to show consistent state of data
            // within Mutex-protected block
            Console.WriteLine("Accessing protected resource...");
            sharedNumber += 2;
            sharedNumber -= 1;

            localMut.ReleaseMutex();
        }
    }
}

Synchronized Methods

Often you will want to synchronize an entire method (locking out multithreaded access via the lock keyword, the Monitor class, or the Mutex class). To make this easier and to save you the trouble of obtaining and releasing locks for each method, you can use the MethodImplAttribute code attribute (from the System.Runtime.CompilerServices namespace) to mark an entire method as synchronized, as shown in the following example:

[MethodImpl(MethodImplOptions.Synchronized)]
public void SynchronizedMethod()
{
    // ...
}


Using this attribute can save you some time and effort if you plan on synchronizing access to the entire method rather than just a small portion. For an instance method, the effect is equivalent to locking on the instance for the duration of the call; for a static method, it is equivalent to locking on the type.


Using Monitors

At first glance, the Monitor class might appear to function very much like the lock keyword. You use Monitor.Enter in much the same way you use lock(object), and Monitor.Exit marks the end of a critical section the same way that the last curly brace marks the end of a lock block, as shown in the following example:

Monitor.Enter(this);
// thread-safe code
Monitor.Exit(this);


Unlike the lock keyword, however, the Monitor class implements some other methods that give it some added functionality. The following is a list of the methods that set the Monitor class apart from the lock keyword:

  • TryEnter You can specify a time period in milliseconds, or pass a TimeSpan instance to this method. The method will then wait up to that long to acquire an exclusive lock on the protected resource. If the timeout period expires, TryEnter returns false and allows the thread to continue execution. This is an invaluable technique for preventing an application from hanging while waiting on one "stuck" thread or a stale, abandoned lock (see the sketch after this list).

  • Wait Releases the current lock on the resource (if any), blocks until another thread signals, and then waits to reacquire the lock. If a timeout period is specified and expires, this method returns false, allowing your code to respond appropriately to a failed attempt to obtain a thread-safe lock on the shared resource.

  • Pulse Sends a signal to the next waiting thread. The signaled thread moves into the ready queue and can acquire the lock as soon as the current thread releases it. This is how a synchronized block of code signals the next thread in line that it is about to release the lock.

  • PulseAll Works just like Pulse, except that it sends the signal to all waiting threads.
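Here is a minimal sketch of the TryEnter pattern mentioned above (the names are illustrative, not from the chapter's listings): the thread waits up to 500 milliseconds for the lock and then recovers gracefully instead of hanging.

using System;
using System.Threading;

class TryEnterDemo
{
    static readonly object sync = new object();

    static void DoGuardedWork()
    {
        // wait up to 500 ms for the lock instead of blocking forever
        if (Monitor.TryEnter(sync, 500))
        {
            try
            {
                // thread-safe work on the protected resource
                Console.WriteLine("Lock acquired, doing work.");
            }
            finally
            {
                Monitor.Exit(sync);
            }
        }
        else
        {
            // timed out; recover rather than hang
            Console.WriteLine("Could not acquire lock, skipping work.");
        }
    }
}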

Using the Interlocked Class

As you've probably guessed by now, the more synchronized code blocks you have in your application, the more bottleneck points you create for your background threads because they all have to queue up in line and wait nicely for their turn to access the shared resource.

This means that one of the things you want to watch out for in your code is excessive or unnecessary use of synchronized blocks. Quite often, developers will create a synchronized block just so that they can increment or decrement some shared value safely.

This is where the Interlocked class comes in. This class provides methods that allow you to increment, decrement, or exchange values in a synchronized, thread-safe manner without burdening your application by having to waste a synchronized block on a simple operation.

The following code snippet shows the Interlocked class in action:

Interlocked.Increment(ref sharedInteger);
Interlocked.Decrement(ref sharedInteger2);
int origValue = Interlocked.Exchange(ref sharedInteger, sharedInteger2);
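A close relative worth knowing is Interlocked.CompareExchange, which swaps in a new value only if the current value still matches an expected one. The following sketch (hypothetical field names) shows the idea:

using System.Threading;

class CompareExchangeDemo
{
    static int sharedInteger = 10;

    static void SetIfUnchanged()
    {
        // replace sharedInteger with 20 only if it is still 10;
        // returns the value that was there before the call
        int original = Interlocked.CompareExchange(ref sharedInteger, 20, 10);

        if (original == 10)
        {
            // the swap happened; no other thread got there first
        }
    }
}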


Using the ReaderWriterLock Class

So far you've seen quite a few ways to protect a block of code in such a way that multiple write operations to the same data cannot happen at the same time. If a piece of code just wants to read from a shared location instead of writing to it, using the methods already discussed would be an unnecessary performance hit and a waste of shared resources (especially if you're using system-level Mutexes).

To get around this problem, the ReaderWriterLock class allows many threads to read shared data concurrently without queuing up behind a single synchronized section. The lock blocks exclusively only when a thread needs to write; any number of threads can hold the reader lock at the same time, so simple read operations don't serialize one another.

Listing 10.5 shows the use of the ReaderWriterLock to acquire locks for reading and locks for writing. It also illustrates the use of the timeout value. The code in Listing 10.5 generates one or two timeouts when attempting to acquire locks when run on my laptop. Feel free to play with the number of threads and the timeout period to see the results of increasing the number of timeouts. One obvious result is that every time the writer lock fails to acquire, the shared resource value is not incremented, so the more timeouts you end up with, the smaller the final result will be.

Listing 10.5. Using the ReaderWriterLock Class

using System;
using System.Threading;
using System.Collections.Generic;
using System.Text;

namespace ReadWriteLockDemo
{
    class Program
    {
        // shared resource here is a simple int
        static int sharedResource = 42;
        static int numTimeouts = 0;
        static ReaderWriterLock rwl = new ReaderWriterLock();

        static void Main(string[] args)
        {
            // Create 10 threads that want write access
            Thread[] writers = new Thread[10];
            for (int i = 0; i < 10; i++)
            {
                Thread writeThread = new Thread(new ThreadStart(DoWrite));
                writers[i] = writeThread;
                writers[i].Start();
            }

            // Create 40 threads that want read access
            Thread[] readers = new Thread[40];
            for (int j = 0; j < 40; j++)
            {
                Thread readThread = new Thread(new ThreadStart(DoRead));
                readers[j] = readThread;
                readers[j].Start();
            }

            // wait till they're all done
            foreach (Thread writer in writers)
                writer.Join();
            foreach (Thread reader in readers)
                reader.Join();

            Console.WriteLine("All work finished, only {0} timeouts.", numTimeouts);
            Console.ReadLine();
        }

        static void DoWrite()
        {
            try
            {
                rwl.AcquireWriterLock(100);
                try
                {
                    Interlocked.Increment(ref sharedResource);
                    Thread.Sleep(15);
                }
                finally
                {
                    rwl.ReleaseWriterLock();
                }
            }
            catch (ApplicationException)
            {
                // AcquireWriterLock timed out
                Interlocked.Increment(ref numTimeouts);
            }
        }

        static void DoRead()
        {
            try
            {
                rwl.AcquireReaderLock(100);
                try
                {
                    Console.WriteLine("Inspecting shared value {0}", sharedResource);
                }
                finally
                {
                    rwl.ReleaseReaderLock();
                }
            }
            catch (ApplicationException)
            {
                // AcquireReaderLock timed out
                Interlocked.Increment(ref numTimeouts);
            }
        }
    }
}

Working with Manual and Auto Reset Events

You can create synchronized blocks of code in many ways, including ways to protect shared resources against multiple inconsistent writes. As you saw with the Mutex class and others, a thread can block until a lock held elsewhere is released, for example, with the WaitOne method.

Reset events are an even more tightly controlled synchronization technique. The basic premise is that you create an instance of a reset event. Then, in a thread, you call WaitOne on that event. Instead of waiting for a lock to be released, your code waits until another thread sends a signal on that same event.

Two kinds of reset events are available to you: manual and automatic. In almost all respects they are identical. The two differ only in that an automatic reset event returns to the unsignaled state as soon as a single waiting thread is released, whereas a manual reset event stays signaled, releasing all waiters, until you explicitly call Reset.
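The difference is easiest to see in code. In this sketch (hypothetical names, not one of the chapter's listings), each call to Set on the AutoResetEvent releases exactly one waiting thread; a ManualResetEvent in the same spot would release all three at the first Set.

using System;
using System.Threading;

class AutoResetDemo
{
    static AutoResetEvent are = new AutoResetEvent(false);

    static void Main()
    {
        for (int i = 0; i < 3; i++)
        {
            new Thread(new ThreadStart(Worker)).Start();
        }

        for (int i = 0; i < 3; i++)
        {
            Thread.Sleep(500);
            // each Set releases exactly one waiting thread, then
            // the event automatically resets to unsignaled
            are.Set();
        }
    }

    static void Worker()
    {
        are.WaitOne();
        Console.WriteLine("Thread {0} released.",
            Thread.CurrentThread.ManagedThreadId);
    }
}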

Listing 10.6 shows how to use a ManualResetEvent to line up several threads that are all waiting for the last thread to execute before they can continue. This allows you to tightly control the order in which tasks are completed, regardless of when the thread was started or what its execution priority is. This kind of cascading scheduling is important in many multithreaded applications where progress milestones need to be reached before other tasks can be completed. For example, suppose that you are writing a multithreaded application that processes data, writes that data to a file, and then e-mails the file to someone. You might create reset events so that the thread responsible for e-mailing the file can't do anything until the thread(s) responsible for data processing signal that the file is ready for reading, even if the threads themselves might not be complete.

Listing 10.6. Using Reset Events to Force Execution Order

using System;
using System.Threading;
using System.Collections.Generic;
using System.Text;

namespace ThreadEvents
{
    class Program
    {
        static ManualResetEvent mre = new ManualResetEvent(false);
        static string sharedResource = "Shared.";
        // lock on a dedicated object; locking on sharedResource itself
        // would break as soon as the string reference is reassigned
        static readonly object syncRoot = new object();

        static void Main(string[] args)
        {
            Thread[] workers = new Thread[10];
            Console.WriteLine("Queueing up threads...");
            for (int i = 0; i < 10; i++)
            {
                Thread worker = new Thread(new ThreadStart(DoWork));
                workers[i] = worker;
                worker.Start();
            }

            // give all the other workers time to line up
            // behind the reset event
            Thread.Sleep(TimeSpan.FromSeconds(2));
            Console.WriteLine("Threads should be lined up, about to signal them to go.");
            mre.Set();

            foreach (Thread worker in workers)
                worker.Join();

            Console.WriteLine("Work's all done, work result: {0}", sharedResource);
            Console.ReadLine();
        }

        static void DoWork()
        {
            mre.WaitOne();
            lock (syncRoot)
            {
                Console.WriteLine("Work was able to be performed.");
                sharedResource += "modified.";
            }
        }
    }
}

You can think of reset events as the childhood game of "red light/green light." The threads are all lined up and ready to go, but they're waiting for the signal. You can place these staging points anywhere you like to gain tight control over what can be done in what order. When you combine this style of thread signaling with lock blocks, Mutexes, the Monitor class with its timeout-aware lock requests, and the vast array of other tools available, writing multithreaded code in C# looks powerful and far less intimidating than it does in many other languages and platforms.


