8.9. Using .NET Multithreading Services

In addition to the basic multithreading features described at the beginning of this chapter, .NET offers a set of advanced services. Some of these features, such as thread local storage, timers, and the thread pool, are also available in a similar format to Windows developers. Some other features, such as thread-relative static variables, are .NET innovations or are specific aspects of .NET application frameworks. This section briefly describes these .NET multithreading services.

8.9.1. Thread-Relative Static Variables

By default, static variables are visible to all threads in an app domain. This is similar to classic C++ (or Windows), in which static variables are accessible to all threads in the same process. The problem with having all the threads in the app domain able to access the same static variables is the potential for corruption and the resulting need to synchronize access to those variables, which in turn increases the likelihood of deadlocks. Synchronizing access may be a necessary evil, if indeed the static variables need to be shared between multiple threads. However, for cases where such sharing isn't necessary, .NET supports thread-relative static variables: each thread in the app domain gets its own copy of the static variable. You use the ThreadStatic attribute to mark a static variable as thread-relative:

     public class MyClass
     {
        [ThreadStatic]
        static string m_MyString;

        public static string MyString
        {
           set{m_MyString = value;}
           get{return m_MyString;}
        }
     }

You can apply the ThreadStatic attribute only to static member variables, not to static properties or static methods; however, you can still wrap the static member with a static property. Thread-relative static variables are inherently thread-safe and remove the need to protect the variables, because each thread gets its own copy and only that thread can access it. Thread-relative static variables usually also imply thread affinity between objects and threads: the objects expect to always run on the same thread, so that they see their own versions of the variables. When using thread-relative static variables, you should also be aware of the following pitfall: each thread must initialize its own copy of the thread-relative static variables, because the static constructor initializes them only on the thread that happens to run it.
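
To illustrate the initialization pitfall, here is a minimal sketch. The Counter class and its InitializePerThread( ) method are hypothetical names used only for this example:

     public class Counter
     {
        [ThreadStatic]
        static int m_Number;

        //Must be called once by every thread that uses Number; a static
        //constructor would initialize the variable only on the thread that runs it
        public static void InitializePerThread(  )
        {
           m_Number = 100;
        }
        public static int Number
        {
           get{return m_Number;}
        }
     }
     //A thread that skips InitializePerThread(  ) sees the default value 0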

Thread-relative static variables are an interesting feature of the .NET runtime, but I find them to be of little practical use. You are more likely to want to share the static variables with other threads than to make them thread-relative, and because exposing a member variable directly is a bad idea in general, you are likely to use the static property to access the variable. As you saw in the previous section on manual synchronization, encapsulating the locking in the property is easy enough, and it seems to be worth the trouble to be able to share the variable between threads.

8.9.2. Thread Local Storage

Objects allocated off the global managed heap are all visible and accessible to all threads in the app domain. However, .NET also provides a thread-specific storage area, called thread local storage (TLS). The TLS is private to each thread, and therefore only that thread can access it. You can use the TLS for anything you'd use the global heap for, but there is no need to synchronize access to objects stored in the TLS, because only one thread can access them. The downside to using the TLS is that components that wish to take advantage of it must have thread affinity, because they must execute on the same thread to access the same TLS.

Framework developers often use the TLS to store additional contextual information about the call as it winds its way between objects. The TLS provides slots in which you can store objects. The slot is an object of type LocalDataStoreSlot. A LocalDataStoreSlot object is nothing more than a type-safe key object, defined as:

     public sealed class LocalDataStoreSlot
     {}

You use the LocalDataStoreSlot object to identify the slot itself. There are two kinds of slots: named and unnamed. The garbage collector frees unnamed slots automatically, but you must free named slots explicitly. You allocate and use slots via static methods of the Thread class:

     public sealed class Thread
     {
        public static LocalDataStoreSlot AllocateDataSlot(  );
        public static LocalDataStoreSlot AllocateNamedDataSlot(string name);
        public static void FreeNamedDataSlot(string name);
        public static LocalDataStoreSlot GetNamedDataSlot(string name);
        public static void SetData(LocalDataStoreSlot slot,object data);
        public static object GetData(LocalDataStoreSlot slot);
     }

8.9.2.1 Using a named slot

You can use the AllocateNamedDataSlot( ) method to allocate a named slot and get back a LocalDataStoreSlot object. You then use the SetData( ) method to store data in the slot:

     int number = 8;
     LocalDataStoreSlot dataSlot;
     dataSlot = Thread.AllocateNamedDataSlot("My TLS Slot");
     Thread.SetData(dataSlot,number);

Any object on the thread can use the GetNamedDataSlot( ) method to get back a LocalDataStoreSlot object and then call GetData( ) to retrieve the stored data:

     object obj;
     LocalDataStoreSlot dataSlot;
     dataSlot = Thread.GetNamedDataSlot("My TLS Slot");
     obj = Thread.GetData(dataSlot);
     Thread.FreeNamedDataSlot("My TLS Slot");

     int number = (int)obj;
     Debug.Assert(number == 8);

Once you are done with the named slot, you must call FreeNamedDataSlot( ) to de-allocate it.

8.9.2.2 Using an unnamed slot

The AllocateDataSlot( ) method allocates a LocalDataStoreSlot object, similar to the AllocateNamedDataSlot( ) method. The two main differences between named and unnamed slots are that all clients accessing an unnamed slot must share the LocalDataStoreSlot object (because there is no way to reference it by name) and that there is no need to free the unnamed slot manually. Again, you use SetData( ) to store data in the slot and GetData( ) to retrieve the stored data:

     //Storing:
     int number = 8;
     LocalDataStoreSlot dataSlot;
     dataSlot = Thread.AllocateDataSlot(  );
     Thread.SetData(dataSlot,number);

     //Retrieving:
     object obj;
     obj = Thread.GetData(dataSlot);
     int retrievedNumber = (int)obj;
     Debug.Assert(retrievedNumber == 8);

Physical Thread Affinity

When running under Windows, .NET threads are mapped one-to-one to operating-system threads. As a result, physical thread affinity is guaranteed. However, different hosting environments (such as SQL Server 2005) may choose a different, less direct mapping and occasionally assign different physical threads to your .NET managed thread. If your code is hosted in such an environment and you acquire an operating-system resource that inherently has physical thread affinity (such as any WaitHandle-derived class), your code will not function correctly if the host swaps physical threads underneath. In .NET 2.0, the Thread class provides the static methods BeginThreadAffinity( ) and EndThreadAffinity( ). Calling these methods enables you to signal to the hosting environment that you require physical thread affinity and that the host should not change the physical thread associated with your managed thread while executing the code inside the thread affinity-sensitive section.
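
A minimal sketch of bracketing an affinity-sensitive section with these methods might look like this (what goes inside the protected section depends on the thread-affine resource your code actually uses):

     Thread.BeginThreadAffinity(  );
     try
     {
        //Acquire and use a resource with physical thread affinity here,
        //for example a Mutex that this thread must both acquire and release
     }
     finally
     {
        Thread.EndThreadAffinity(  );
     }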


8.9.3. The Thread Pool

Creating a worker thread and managing its lifecycle yourself gives you ultimate control over that thread. It also increases the overall complexity of your application. If all you need to do is dispatch a unit of work to a worker thread, instead of creating a thread, you can take advantage of a .NET-provided thread. In each process, .NET provides a pool of worker threads called the thread pool. The thread pool is managed by .NET, and it contains a set of threads ready to serve application requests. .NET makes extensive use of the thread pool itself. For example, it uses the thread pool for asynchronous calls (discussed in Chapter 7), remote calls (discussed in Chapter 10), and timers (discussed later in this chapter). You access the .NET thread pool via the public static methods of the ThreadPool static class. Using the thread pool is straightforward. First you create a delegate of type WaitCallback, targeting a method with a matching signature:

     public delegate void WaitCallback(object state);

You then provide the delegate to one of the ThreadPool class's static methods (typically QueueUserWorkItem( )):

     public static class ThreadPool
     {
        public static bool QueueUserWorkItem(WaitCallback callBack);

        /* Other methods */
     }

As the method name implies, dispatching a work unit to the thread pool is subject to pool limitations; that is, if there are no available threads in the pool, the work unit is queued and is served only when a worker thread returns to the pool. Pending requests are served in order.

Example 8-17 demonstrates using the thread pool. For diagnostic purposes, you can find out whether the thread your code runs on originated from the thread pool by using the IsThreadPoolThread property of the Thread class (as shown in this example).

Example 8-17. Posting a work unit to the thread pool
void ThreadPoolCallback(object state)
{
   Thread currentThread = Thread.CurrentThread;
   Debug.Assert(currentThread.IsThreadPoolThread);

   int threadID = currentThread.ManagedThreadId;
   Trace.WriteLine("Called on thread with ID :" + threadID);
}
ThreadPool.QueueUserWorkItem(ThreadPoolCallback);

A second overloaded version of QueueUserWorkItem( ) allows you to pass in a state object to the callback method, in the form of a generic object:

     public static bool QueueUserWorkItem(WaitCallback callBack,object state);

If you don't provide such a parameter (as in Example 8-17), .NET passes in null. While you can use the state object to pass parameters to the callback method, another common use is to pass in an identifier. The identifier enables the same callback method to handle and distinguish between multiple posted requests.
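
For example, here is a short sketch of posting two requests that share a single callback and are distinguished by an identifier passed as the state object. The ProcessRequest( ) method and the request IDs are made up for this illustration:

     void ProcessRequest(object state)
     {
        int requestID = (int)state;
        Trace.WriteLine("Processing request number " + requestID);
     }
     ThreadPool.QueueUserWorkItem(ProcessRequest,1);
     ThreadPool.QueueUserWorkItem(ProcessRequest,2);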

The ThreadPool class also supports a number of other useful ways to queue a work unit. The RegisterWaitForSingleObject( ) method allows you to provide a waitable handle as a parameter. The thread from the thread pool waits for the handle and only calls the callback once the handle is signaled. You can also specify a waiting timeout. The GetAvailableThreads( ) method allows you to find out how many threads are available in the pool, and the GetMaxThreads( ) method returns the maximum size of the pool.
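
As a hedged sketch of RegisterWaitForSingleObject( ), the following registers a callback that a pool thread invokes once an event is signaled, with a five-second timeout. The event, callback name, and timeout value are chosen only for illustration:

     AutoResetEvent workReady = new AutoResetEvent(false);

     void OnWorkReady(object state,bool timedOut)
     {
        if(timedOut)
        {
           Trace.WriteLine("Timed out waiting for work");
           return;
        }
        Trace.WriteLine("Handle signaled; processing work");
     }
     RegisteredWaitHandle registration;
     registration = ThreadPool.RegisterWaitForSingleObject(workReady,OnWorkReady,
                                                           null,5000,true);
     //Later, signaling the event triggers OnWorkReady(  ) on a pool thread:
     workReady.Set(  );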

The .NET thread pool isn't boundless. Avoid lengthy operations in the callback so that the thread returns to the pool as quickly as possible, to service other clients. If you require lengthy operations, create dedicated threads.


8.9.3.1 Configuring the thread pool

The thread pool actually has two types of threads in it: worker threads are used for tasks such as asynchronous execution or timers, and completion port threads are used in conjunction with server operations such as network-sockets processing. Most applications only interact (directly or indirectly) with the worker threads and do not care about the completion port threads.

The thread pool needs to maintain a balance between the overhead of a large number of active threads and the latency in responding to new client requests. If there are too many active threads servicing requests, the overhead of the thread context switches can have a serious detrimental effect. If, on the other hand, there are too few threads, client requests may have to spend too much time queuing up for service. Interestingly enough, many experiments and benchmarks have shown that regardless of the technology used, for an average application with an average load, the optimal pool size is one thread per application per CPU. As long as the work unit queued up is small enough, restricting the number of threads in the thread pool eliminates costly thread context switches. Recall from Chapter 7 that by default, the maximum number of worker threads in the pool is 25 per CPU per process. The reason for this limit is that there could potentially be multiple applications in the process using the thread pool. The default maximum number for completion port threads is 1,000.

The ThreadPool class provides methods for controlling the maximum numbers of threads in the thread pool:

     public static class ThreadPool
     {
        public static void GetMaxThreads(out int workerThreads,
                                         out int completionPortThreads);
        public static bool SetMaxThreads(int workerThreads,
                                         int completionPortThreads);
        /* Other methods */
     }

You can only use SetMaxThreads( ) to provide a new maximum number of threads that is greater than the number of CPUs on the machine. Any other value will cause SetMaxThreads( ) to return false and ignore your request.

Typically, you'll want to set one maximum value without affecting the other. The SetNewThreadPoolMax( ) method below demonstrates a safe way of setting a new maximum value for the worker threads:

     public static void SetNewThreadPoolMax(int max)
     {
        Debug.Assert(max > Environment.ProcessorCount);

        int workerThreads,completionPortThreads;
        ThreadPool.GetMaxThreads(out workerThreads,
                                 out completionPortThreads);
        ThreadPool.SetMaxThreads(max,completionPortThreads);
     }

That said, I must again caution you against tinkering with the maximum number of threads in the thread pool, and especially against increasing that number. Unless you have performed benchmarks and profiling that categorically prove that a specific number yields the throughput your application requires without any collateral damage, you risk either increasing the overhead of thread context switches (and degrading performance) or increasing request latency.

Another performance-related aspect of the thread pool is the question of how many threads the thread pool should keep when it is idle (that is, when there are no pending client requests). That idle-time count is called the thread pool minimum size. If the minimum size is too small, a spike in client requests will quickly exhaust the number of ready-to-run threads, and the thread pool will have to create new threads (up to the maximum number of threads) to deal with them. This will introduce latency and increase response time. If, on the other hand, the minimum number is too large, you will be paying needlessly for maintaining the threads.

By default, the thread pool maintains single worker and completion port threads and creates new threads as required. Excess threads are culled away periodically. You can control the minimum pool size using the GetMinThreads( ) and SetMinThreads( ) methods:

     public static class ThreadPool
     {
        public static void GetMinThreads(out int workerThreads,
                                         out int completionPortThreads);
        public static bool SetMinThreads(int workerThreads,int completionPortThreads);
        /* Other methods */
     }

Unlike the maximum pool size, there are no hard-and-fast rules as to the best minimum size. If the default values are inadequate, you will need to test your application under different load characteristics (uniform, erratic, spikes, etc.) to determine the optimal minimum values.
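
In the same spirit as SetNewThreadPoolMax( ), here is a sketch of raising only the worker-thread minimum while preserving the completion port minimum. The SetNewThreadPoolMin( ) helper is a hypothetical name for this example, and it assumes load testing has shown that a larger warm pool reduces response time:

     public static void SetNewThreadPoolMin(int minWorkerThreads)
     {
        int workerThreads,completionPortThreads;
        ThreadPool.GetMinThreads(out workerThreads,out completionPortThreads);
        ThreadPool.SetMinThreads(minWorkerThreads,completionPortThreads);
     }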

Custom Thread Pools

The .NET thread pool is a simple and efficient general-purpose thread pool. However, it does not support features such as canceling a queued request or separating the act of posting a request from executing it. If you require such functionality, you will have to implement your own thread pool, in which case you may want to implement the IBackgroundTaskThreadMarshaller interface, found in the System.ComponentModel namespace:

     public interface IBackgroundTaskThreadMarshaller : IDisposable
     {
        object ActivateNewTask(  );
        void DeactivateTask(object id);
        void Post(Delegate d,object[] args);
     }

IBackgroundTaskThreadMarshaller allows you to post a task to the custom thread pool in the form of a delegate and its arguments, and then explicitly instruct the custom thread pool manager to execute it via the ActivateNewTask( ) method. The custom thread pool manager will multiplex its threads on the activated requests. To cancel the task, simply call DeactivateTask( ), using the ID returned from the activation call. Presently, no class in .NET implements IBackgroundTaskThreadMarshaller.


8.9.4. ISynchronizeInvoke

When a client on thread T1 calls a method on an object, that method is executed on the client's thread, T1. However, what should be done in cases where the object must always run on the same thread (say, T2)? Such situations are common when thread affinity is required. For example, .NET Windows Forms windows and controls must always process messages on the same thread that created them. To address such situations, .NET provides the ISynchronizeInvoke interface, in the System.ComponentModel namespace:

     public interface ISynchronizeInvoke
     {
        object Invoke(Delegate method,object[] args);
        IAsyncResult BeginInvoke(Delegate method,object[] args);
        object EndInvoke(IAsyncResult result);
        bool InvokeRequired
        {get;}
     }

8.9.4.1 Using ISynchronizeInvoke

ISynchronizeInvoke provides a generic and standard mechanism for invoking methods on objects residing on other threads. For example, if the object implements ISynchronizeInvoke, the client on thread T1 can call ISynchronizeInvoke's Invoke( ) on the object. The implementation of Invoke( ) blocks the calling thread, marshals the call to T2, executes the call on T2, marshals any returned values to T1, and returns control to the calling client on T1. Invoke( ) accepts a delegate targeting the method to invoke on T2, and a generic array of objects as parameters.

Example 8-18 demonstrates the use of ISynchronizeInvoke. In the example, a Calculator class implements ISynchronizeInvoke and provides the Add( ) method for adding two numbers.

Example 8-18. Using ISynchronizeInvoke
public class Calculator : ISynchronizeInvoke
{
   public int Add(int argument1,int argument2)
   {
      int threadID = Thread.CurrentThread.ManagedThreadId;
      Trace.WriteLine("Calculator thread ID is " + threadID);
      return argument1 + argument2;
   }
   //ISynchronizeInvoke implementation
   public object Invoke(Delegate method,object[] args)
   {...}
   public IAsyncResult BeginInvoke(Delegate method,object[] args)
   {...}
   public object EndInvoke(IAsyncResult result)
   {...}
   public bool InvokeRequired
   {get{...}}
}
//Client-side code
public delegate int BinaryOperation(int argument1,int argument2);

int threadID = Thread.CurrentThread.ManagedThreadId;
Trace.WriteLine("Client thread ID is " + threadID);

Calculator calculator = new Calculator(  );
BinaryOperation addDelegate = calculator.Add;
object[] args = {3,4};
int sum = 0;
sum = (int)calculator.Invoke(addDelegate,args);
Debug.Assert(sum == 7);

/* Possible output:
Calculator thread ID is 29
Client thread ID is 30
*/

Because the call is marshaled to a different thread from that of the caller, you might want to invoke it asynchronously. This functionality is provided by the BeginInvoke( ) and EndInvoke( ) methods. These methods are used in accordance with the general asynchronous programming model described in Chapter 7. In addition, because ISynchronizeInvoke can be called on the same thread as the thread to which the caller is trying to redirect the call, the caller can check the InvokeRequired property. If it returns false, the caller can call the object methods directly.
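
As a brief sketch (reusing the Calculator class and the BinaryOperation delegate from Example 8-18), combining the InvokeRequired check with asynchronous invocation might look like this:

     Calculator calculator = new Calculator(  );
     ISynchronizeInvoke synchronizer = calculator;
     BinaryOperation addDelegate = calculator.Add;
     object[] args = {3,4};

     if(synchronizer.InvokeRequired == false)
     {
        //Already on the calculator's thread, so call directly
        int sum = calculator.Add(3,4);
     }
     else
     {
        IAsyncResult asyncResult = synchronizer.BeginInvoke(addDelegate,args);
        //Do some work in the meantime, then harvest the result:
        int sum = (int)synchronizer.EndInvoke(asyncResult);
        Debug.Assert(sum == 7);
     }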

ISynchronizeInvoke methods aren't type-safe: in case of a mismatch, an exception is thrown at runtime instead of a compilation error being issued. You also cannot wrap ISynchronizeInvoke with a type-safe use of GenericEventHandler, because you cannot overload methods based on return type alone. Pay extra attention when using ISynchronizeInvoke, because the compiler won't be there for you.


8.9.4.2 Windows Forms and ISynchronizeInvoke

Windows Forms base classes make extensive use of ISynchronizeInvoke. The Control class (and every class derived from Control) relies on the underlying Windows messages and a message-processing loop (the message pump) to process them. The message loop must have thread affinity, because messages to a window are delivered only to the thread that created that window. In general, you must use ISynchronizeInvoke to access a Windows Forms window from another thread. Unfortunately, that often results in a cumbersome programming model when accessing windows and controls from multiple threads. Consider the code in Example 8-19. If multiple threads need to update the text of the m_Label control, instead of merely setting the Text property, you are required to use a helper method (an anonymous method, in this example[*]) and a delegate dedicated to the task of setting the text of a label. Real-life examples will of course get much more complex and messy, with a high degree of internal coupling, because any changes to the user-interface layout, the controls on the forms, and the required behavior are likely to cause major changes to the code.

[*] For more information on anonymous methods, see my May 2004 article in the MSDN Magazine, "Create Elegant Code with Anonymous Methods, Iterators, and Partial Classes."

Example 8-19. Thread-safe access to a Windows Forms label
partial class MyForm : Form
{
   delegate void SetLabel(Label label,string str);

   Label m_Label = new Label(  );

   //UpdateLabel is called by multiple threads
   void UpdateLabel(string text)
   {
      ISynchronizeInvoke synchronizer = m_Label;
      if(synchronizer.InvokeRequired == false)
      {
         m_Label.Text = text;
         return;
      }
      SetLabel del = delegate(Label label,string str)
                     {
                        label.Text = str;
                     };
      synchronizer.Invoke(del,new object[]{m_Label,text});
   }
   //Rest of the class
}

It is much better to encapsulate the interaction with ISynchronizeInvoke whenever possible, to simplify the overall programming model. Example 8-20 lists the code for SafeLabel, a Label-derived class that provides thread-safe access to its Text property.

Example 8-20. Encapsulating ISynchronizeInvoke
public class SafeLabel : Label
{
   delegate void SetString(string text);
   delegate string GetString(  );

   override public string Text
   {
      set
      {
         if(InvokeRequired)
         {
            SetString setTextDel = delegate(string text)
                                   {
                                      base.Text = text;
                                   };
            Invoke(setTextDel,new object[]{value});
         }
         else
            base.Text = value;
      }
      get
      {
         if(InvokeRequired)
         {
            GetString getTextDel = delegate(  )
                                   {
                                      return base.Text;
                                   };
            return (string)Invoke(getTextDel,null);
         }
         else
            return base.Text;
      }
   }
}

SafeLabel overrides its base class's Text property and checks whether the call arrived on the thread that created the control. If it did not, SafeLabel uses its base class's implementation of ISynchronizeInvoke to marshal the call to the correct thread. Using SafeLabel, the code in Example 8-19 is reduced to this:

     partial class MyForm : Form
     {
        Label m_Label = new SafeLabel(  );

        //UpdateLabel is called by multiple threads
        void UpdateLabel(string text)
        {
           m_Label.Text = text;
        }
        //Rest of the class
     }

The source code accompanying this book contains in the assembly WinFormsEX.dll the code not only for SafeLabel but also for other commonly used controls, such as SafeButton, SafeListBox, and SafeTextBox.

8.9.4.3 Events and ISynchronizeInvoke

The fact that some subscribers may require you to invoke them on the correct thread poses an interesting challenge to event publishers. It is no longer good enough to simply invoke the delegate, because you might be calling some of the subscribers (especially in a Windows Forms application) on the wrong thread. You must manually iterate over the delegate's internal invocation list, and examine each delegate in that list. You can access the object the delegate points to via the Target property that every delegate inherits from the Delegate base class. Then you need to query the target for ISynchronizeInvoke and use it to invoke the target method. Instead of repeating this for every delegate invocation, you can extend EventsHelper to always publish the event to the correct thread, as shown in Example 8-21.

Example 8-21. Adding correct thread invocation to EventsHelper
public static class EventsHelper
{
   static void InvokeDelegate(Delegate del,object[] args)
   {
      ISynchronizeInvoke synchronizer;
      synchronizer = del.Target as ISynchronizeInvoke;
      if(synchronizer != null)//Requires thread affinity
      {
         if(synchronizer.InvokeRequired)
         {
            synchronizer.Invoke(del,args);
            return;
         }
      }
      //Not requiring thread affinity or Invoke(  ) is not required
      del.DynamicInvoke(args);
   }
   //Rest of EventsHelper is same as Example 8-16
}

In Example 8-21, instead of simply calling DynamicInvoke( ) on the delegate, as in Example 8-16, InvokeDelegate( ) first checks whether the target object supports ISynchronizeInvoke and whether Invoke( ) is required. Example 8-21 represents the final version of EventsHelper in this book.

8.9.4.4 Implementing ISynchronizeInvoke

In the abstract, when you implement ISynchronizeInvoke, you need to post the method delegate to the actual thread the object needs to run on, and you need to have it call DynamicInvoke( ) on the delegate in Invoke( ) and BeginInvoke( ). Implementing ISynchronizeInvoke is a nontrivial programming feat. The source files accompanying this book contain a helper class called Synchronizer, which is a generic implementation of ISynchronizeInvoke. You can use Synchronizer as-is, by either deriving from it or containing it as a member object and then delegating your implementation of ISynchronizeInvoke to it:

     public class Calculator : ISynchronizeInvoke
     {
        ISynchronizeInvoke m_Synchronizer = new Synchronizer(  );

        public int Add(int argument1,int argument2)
        {
           return argument1 + argument2;
        }
        public object Invoke(Delegate method,object[] args)
        {
           return m_Synchronizer.Invoke(method,args);
        }
        //Rest of the implementation of ISynchronizeInvoke via
        //delegation to m_Synchronizer
     }

Here are the key elements of implementing Synchronizer:

  • Synchronizer uses a nested class called WorkerThread.

  • WorkerThread has a queue of work items. WorkItem is a class containing the method delegate and the parameters.

  • Both Invoke( ) and BeginInvoke( ) add to the work-item queue.

  • WorkerThread creates a worker thread, which monitors the work-item queue. When the queue has items, the worker thread retrieves them and calls DynamicInvoke( ) on the method.
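
The following is a simplified sketch of the kind of work-item queue and worker-thread loop these bullets describe. It is not the actual Synchronizer code from the book's source files; the class and member names are illustrative only (Queue<T> comes from System.Collections.Generic):

     class WorkItem
     {
        public readonly Delegate Method;
        public readonly object[] Args;

        public WorkItem(Delegate method,object[] args)
        {
           Method = method;
           Args = args;
        }
     }
     class WorkerThread
     {
        Queue<WorkItem> m_Queue = new Queue<WorkItem>(  );
        AutoResetEvent m_ItemPosted = new AutoResetEvent(false);

        public WorkerThread(  )
        {
           Thread thread = new Thread(Run);
           thread.IsBackground = true;
           thread.Start(  );
        }
        //Called by Invoke(  ) and BeginInvoke(  )
        public void QueueWorkItem(WorkItem item)
        {
           lock(m_Queue)
           {
              m_Queue.Enqueue(item);
           }
           m_ItemPosted.Set(  );
        }
        //Runs on the dedicated worker thread
        void Run(  )
        {
           while(true)
           {
              m_ItemPosted.WaitOne(  );
              //Drain all pending items before waiting again
              while(true)
              {
                 WorkItem item = null;
                 lock(m_Queue)
                 {
                    if(m_Queue.Count > 0)
                    {
                       item = m_Queue.Dequeue(  );
                    }
                 }
                 if(item == null)
                 {
                    break;
                 }
                 item.Method.DynamicInvoke(item.Args);
              }
           }
        }
     }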

8.9.5. Windows Forms and Asynchronous Calls

Synchronizing multithreaded access to Windows Forms windows and controls is an issue with asynchronous calls as well. As explained in Chapter 7, using a completion callback method is the preferred option in an event-driven application such as a Windows Forms application. The problem is that both the asynchronous method and the completion callback execute on a thread from the thread pool, and therefore they cannot access the Windows Forms elements directly. This significantly complicates the programming model and precludes easy implementation of features that are important for a client application, such as progress reports and cancellation. To address this predicament, .NET 2.0 provides a component called BackgroundWorker that you can use in your Windows Forms applications to manage asynchronous operations safely and easily.

You can find BackgroundWorker in the Components tab of a Windows Forms project. If you drop it on a form, you can use BackgroundWorker to dispatch asynchronous work and to report progress and completion, all while encapsulating the interaction with ISynchronizeInvoke. Using this approach yields a much smoother and superior programming model. Example 8-22 shows the definition of BackgroundWorker and its supporting classes.

Example 8-22. Partial listing of BackgroundWorker
public class BackgroundWorker : Component
{
   public event DoWorkEventHandler DoWork;
   public event ProgressChangedEventHandler ProgressChanged;
   public event RunWorkerCompletedEventHandler RunWorkerCompleted;

   public bool CancellationPending{get;}

   public void RunWorkerAsync(  );
   public void RunWorkerAsync(object argument);
   public void ReportProgress(int percent);
   public void CancelAsync(  );
   //More members
}
public delegate void DoWorkEventHandler(object sender,DoWorkEventArgs e);
public delegate void ProgressChangedEventHandler(object sender,
                                                 ProgressChangedEventArgs e);
public delegate void RunWorkerCompletedEventHandler(object sender,
                                                    RunWorkerCompletedEventArgs e);

public class CancelEventArgs : EventArgs
{
   public bool Cancel{get;set;}
}
public class DoWorkEventArgs : CancelEventArgs
{
   public object Result{get;set;}
   public object Argument{get;}
}
public class ProgressChangedEventArgs : EventArgs
{
   public int ProgressPercentage{get;}
}
public class AsyncCompletedEventArgs : EventArgs
{
   public object UserState{get;}
   public Exception Error{get;}
   public bool Cancelled{get;}
}
public class RunWorkerCompletedEventArgs : AsyncCompletedEventArgs
{
   public object Result{get;}
}

BackgroundWorker has a public delegate called DoWork, of type DoWorkEventHandler. To invoke a method asynchronously, wrap a method with a matching signature to DoWorkEventHandler and add it as a target to DoWork. Then call the RunWorkerAsync( ) method to invoke the method on a thread from the thread pool:

     BackgroundWorker backgroundWorker;
     backgroundWorker = new BackgroundWorker(  );

     backgroundWorker.DoWork += OnDoWork;
     backgroundWorker.RunWorkerAsync(  );

     void OnDoWork(object sender,DoWorkEventArgs doWorkArgs)
     {...}

Any party interested in being notified when the asynchronous method is completed should subscribe to the RunWorkerCompleted member delegate of BackgroundWorker. RunWorkerCompleted is a delegate of type RunWorkerCompletedEventHandler. The completion notification method accepts a parameter of type RunWorkerCompletedEventArgs, which contains the result of the method execution. You need to set the Result property of DoWorkEventArgs inside the asynchronous method, as well as any error and cancellation information.

BackgroundWorker cannot simply invoke the RunWorkerCompleted delegate when the asynchronous method execution is completed, because that invocation will be on the thread from the thread pool. Thus, any control or form that subscribed to RunWorkerCompleted cannot be called directly. Instead, BackgroundWorker checks whether each of the target objects in RunWorkerCompleted supports ISynchronizeInvoke, and if Invoke( ) is required. If so, it marshals the call to the owning thread of the target object.

To support progress reports, BackgroundWorker provides a member delegate called ProgressChanged, of type ProgressChangedEventHandler. Any party interested in progress notifications should subscribe to ProgressChanged. When the asynchronous method wishes to notify about a progress update, it calls BackgroundWorker's ReportProgress( ) method to correctly marshal the progress notification to any Windows Forms object.

To cancel a method, anybody from any thread can call BackgroundWorker's CancelAsync( ) method. Calling CancelAsync( ) results in the CancellationPending property of BackgroundWorker returning true. Inside the asynchronous method, you should periodically check the value of CancellationPending; if it is true, you should set the Cancel property of DoWorkEventArgs to true and return from the method. In the completion method, you can check the value of the Cancelled property of RunWorkerCompletedEventArgs to detect whether the method ran to its completion or was cancelled. You can also check the Error property for any exceptions that might have taken place.
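
Putting these pieces together, here is a hedged sketch of a cancelable, progress-reporting DoWork handler and its companion event handlers. The 100-step loop is illustration only, and the sketch assumes the worker's WorkerReportsProgress and WorkerSupportsCancellation properties (among the members elided from Example 8-22) were set to true when the worker was wired up:

     void OnDoWork(object sender,DoWorkEventArgs doWorkArgs)
     {
        BackgroundWorker worker = sender as BackgroundWorker;
        for(int i = 1;i <= 100;i++)
        {
           if(worker.CancellationPending)
           {
              doWorkArgs.Cancel = true;
              return;
           }
           //Perform one unit of work here, then report progress
           worker.ReportProgress(i);
        }
        doWorkArgs.Result = "Done";
     }
     void OnProgressChanged(object sender,ProgressChangedEventArgs e)
     {
        //Safe to touch Windows Forms elements here; BackgroundWorker
        //marshals the notification to the owning thread
        Trace.WriteLine("Progress: " + e.ProgressPercentage + "%");
     }
     void OnCompleted(object sender,RunWorkerCompletedEventArgs e)
     {
        if(e.Cancelled)
        {
           Trace.WriteLine("Operation cancelled");
           return;
        }
        if(e.Error != null)
        {
           throw e.Error;
        }
        Trace.WriteLine("Result: " + e.Result);
     }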

To ease the transition of applications from .NET 1.1 to .NET 2.0, the source code accompanying this book contains my implementation of BackgroundWorker, which you can use in your .NET 1.1-based solutions. The .NET 1.1 implementation of BackgroundWorker is polymorphic with that of .NET 2.0 and functions in an identical manner. The techniques used in the implementation of BackgroundWorker are similar to those used in the final version of EventsHelper (Example 8-21).


8.9.5.1 Web service proxy classes

Example 7-10 showed asynchronous invocation of a simple calculator web service:

     public class Calculator
     {
        [WebMethod]
        public int Add(int argument1,int argument2)
        {
           return argument1 + argument2;
        }
        //Other methods
     }

When the client uses WSDL.exe to create a proxy class targeting the Calculator web service, the web service proxy class contains methods such as BeginAdd( ) and EndAdd( ), used for asynchronous invocation. Although not delegate-based, these methods comply with the programming model presented in Chapter 7: the asynchronous call is delegated to a thread from the thread pool, and that thread calls back on a callback method to notify about completion. However, if the web service client is a Windows Forms object, the callback method cannot call back on the thread from the thread pool; you have to marshal the callback to the correct owning thread. One solution would be to use the BackgroundWorker component, as shown in Example 8-23.

Example 8-23. Using BackgroundWorker for safe asynchronous web service invocation by a Windows Forms client
partial class CalculatorWebServiceClient : Form
{
   public void AsyncAdd(  )
   {
      //Calculator is the auto-generated proxy class
      Calculator calculator = new Calculator(  );

      BackgroundWorker worker = new BackgroundWorker(  );
      worker.DoWork += OnDoWork;
      worker.RunWorkerCompleted += OnCompleted;
      worker.RunWorkerAsync(calculator);
   }
   //Executes on thread from thread pool
   void OnDoWork(object sender,DoWorkEventArgs e)
   {
      Calculator calculator = e.Argument as Calculator;
      e.Result = calculator.Add(2,3);
   }
   //Executes on correct Windows Forms thread
   void OnCompleted(object sender,RunWorkerCompletedEventArgs e)
   {
      if(e.Error != null)
      {
         throw e.Error;
      }
      MessageBox.Show("Add returned " + e.Result);
   }
}

However, BackgroundWorker is overkill: once dispatched, there is no way to cancel the web service call, and there are no progress reports (a key feature of BackgroundWorker). As a result, both the WSDL.exe-generated web service proxy class and the proxy class generated when adding a web reference in Visual Studio 2005 support yet another asynchronous method invocation pattern (on top of the BeginAdd( ) and EndAdd( ) methods, which non-Windows Forms clients can still use). The Calculator web service proxy class contains the following members and supporting types:

     public partial class Calculator : SoapHttpClientProtocol
     {
        public event AddCompletedEventHandler AddCompleted;

        public void AddAsync(int argument1,int argument2);
        public void AddAsync(int argument1,int argument2,object userState);
        public new void CancelAsync(object userState);
        //Additional members
     }
     public delegate void AddCompletedEventHandler(object sender,
                                                   AddCompletedEventArgs args);
     //AsyncCompletedEventArgs is defined in Example 8-22
     public class AddCompletedEventArgs : AsyncCompletedEventArgs
     {
        public int Result{get;}
     }

For each web method, the proxy class will contain two methods of the form:

     public void <Method Name>Async(<parameters>);
     public void <Method Name>Async(<parameters>,object userState);

To dispatch the asynchronous web method, you call one of two versions of the AddAsync( ) method.

The AddAsync( ) method is dispatched on a thread from the thread pool. When the method completes, the proxy class raises an event. The web service proxy class provides an event member for each web method. The events take the form of:

     public delegate void <Method Name>CompletedEventHandler(object sender,
                                            <Method Name>CompletedEventArgs args);

In the case of the Add( ) method, that event is the AddCompleted event. You can add to that event any callback method with a matching signature: the first parameter is an object and the second is a strongly typed AsyncCompletedEventArgs-derived class. The completion event argument class contains a property called Result, which matches the web service method's returned value. If the web service method has no returned value, the proxy class uses AsyncCompletedEventArgs as the event argument. AsyncCompletedEventArgs also contains the Error property, of type Exception, which contains any error information about the asynchronous call.

You can use the same completion callback method to handle multiple asynchronous invocations; simply provide <Method Name>Async( ) with an identifier or state specific to that invocation as the last parameter, of type object. Inside the completion callback, you can access the state object via the UserState property of AsyncCompletedEventArgs. You can even cancel an asynchronous method execution in progress, by passing a unique identifier as a state object and using that identifier to call the CancelAsync( ) method:

     Calculator calculator = new Calculator(  );
     calculator.AddCompleted += OnCompleted;

     Guid ID = Guid.NewGuid(  );
     calculator.AddAsync(2,3,ID);

     //To cancel:
     calculator.CancelAsync(ID);

     public void OnCompleted(object sender,AddCompletedEventArgs args)
     {...}

When the method is canceled, the completion event is raised immediately and the Cancelled property of AsyncCompletedEventArgs is set to true.

Normally, the completion event is invoked on the thread from the thread pool. However, if the thread that dispatched the asynchronous call is processing Windows messages, the event is processed on that message-processing thread. This is accomplished by posting a special message to the thread; handling that message invokes the completion delegate as part of normal message processing. Example 8-24 demonstrates the same client code as Example 8-23, except it uses the alternative mechanism just described.

Several classes support a similar pattern for safe asynchronous invocation. For example, the PictureBox control offers it for asynchronous image loading, the SoundPlayer class offers it for asynchronous media loading, and the Ping class offers a SendAsync( ) method.


Example 8-24. Safe asynchronous web service invocation by a Windows Forms client
partial class CalculatorWebServiceClient : Form
{
   public void AsyncAdd(  )
   {
      //Calculator is the auto-generated proxy class
      Calculator calculator = new Calculator(  );
      calculator.AddCompleted += OnCompleted;
      calculator.AddAsync(2,3);
   }
   public void OnCompleted(object sender,AddCompletedEventArgs args)
   {
      if(args.Error != null)
      {
         throw args.Error;
      }
      if(args.Cancelled)
      {
         MessageBox.Show("Web service call cancelled");
         return;
      }
      MessageBox.Show("Add returned " + args.Result);
   }
}

The mechanism just described is a particular implementation of a complex design pattern called synchronization contexts. Synchronization contexts are unrelated to synchronization domains and .NET contexts. The Windows Forms implementation relies on WindowsFormsSynchronizationContext, which posts the message whose handling invokes the completion callback on the owning thread. For more information on synchronization contexts, see the MSDN Library.


8.9.6. Timers

Applications often need a certain task to occur at regular time intervals. Such services are implemented by timers. A timer is an object that repeatedly calls back into the application at set intervals. For example, you can use a timer to update the user interface with anything from stock quotes to available disk space. You can also use a timer to implement a watchdog, which periodically checks the status of various components or devices in your application. For example, you can poll communication ports or check the status of job queues. Many decent-sized applications use timers, for a wide range of purposes. In the past, developers were left to their own devices when implementing timers. They usually did so by creating a worker thread whose thread method executed the following logic in pseudo-code:

     public class Timer
     {
        public void ThreadMethod(  )
        {
           while(true)
           {
              Tick(  );
              Thread.Sleep(Interval);
           }
        }
        public int Interval
        {get;set;}

        void Tick(  )
        {
           /* Call back into the application */
        }
     }

However, such solutions had disadvantages: you had to write code to start and stop the timer, manage the worker thread, change the interval, and hook the timer to the application's callback function. Furthermore, if you took advantage of timers made available by the operating system (such as the CreateWaitableTimer( ) function of the Win32 API) you coupled your application to that mechanism, and switching to a different implementation wasn't trivial.

.NET comes out of the box with not one, but three complementary timer mechanisms you can use in your applications. All three mechanisms comply with the same set of generic requirements, allowing the application using the timer to:

  • Start and stop the timer repeatedly

  • Change the timer interval

  • Use the same callback method to service multiple timers and be able to distinguish among the different timers

This section discusses and contrasts .NET's timer mechanisms and recommends where and when to use each of them.

8.9.6.1 System.Timers.Timer

The System.Timers namespace contains a Timer class, defined as follows:

     public class System.Timers.Timer : Component,ISupportInitialize
     {
        public Timer(  );
        public Timer(double interval);

        public bool AutoReset{get; set;}
        public bool Enabled{get; set;}
        public double Interval{get; set;}
        public ISynchronizeInvoke SynchronizingObject{get; set;}

        public event ElapsedEventHandler Elapsed;

        public void Close(  );
        public void Start(  );
        public void Stop(  );
        /* Other members */
     }

The System.Timers.Timer class has an event member called Elapsed, a delegate of type ElapsedEventHandler, which is defined as:

     public delegate void ElapsedEventHandler(object sender,ElapsedEventArgs e);

You provide Elapsed with timer-handling methods with a matching signature:

     void OnTick(object sender,ElapsedEventArgs e)
     {...}

The System.Timers.Timer class calls into these methods at specified timer intervals, using a thread from the thread pool. You specify an interval using the Interval property. The sender argument to the timer-handling method identifies the timer object. As a result, the same timer method can be called by multiple timers, and you can use the sender argument to distinguish among them. To hook up the timer to more than one timer-handling method, simply add more targets to the Elapsed event. The ElapsedEventArgs class provides the time the method was called. It is defined as:

     public class ElapsedEventArgs : EventArgs
     {
        public DateTime SignalTime{get;}
     }

To start or stop the timer notifications, simply call the Start( ) or Stop( ) methods. The Enabled property allows you to silence the timer by not raising the event; consequently, setting Enabled to true or false is equivalent to calling Start( ) or Stop( ). Finally, when the application shuts down, call the Close( ) method of the timer to dispose of the system resources it used. Example 8-25 demonstrates using System.Timers.Timer.

Example 8-25. Using System.Timers.Timer
using System.Timers;

class SystemTimerClient
{
   System.Timers.Timer m_Timer;
   int m_Counter;

   public int Counter
   {
      get
      {
         lock(this)
         {
            return m_Counter;
         }
      }
      set
      {
         lock(this)
         {
            m_Counter = value;
         }
      }
   }
   public SystemTimerClient(  )
   {
      m_Counter = 0;
      m_Timer = new System.Timers.Timer(  );
      m_Timer.Interval = 1000;//One second
      m_Timer.Elapsed += OnTick;
      m_Timer.Start(  );

      //Can block this thread because the timer uses
      //a thread from the thread pool
      Thread.Sleep(4000);

      m_Timer.Stop(  );
      m_Timer.Close(  );
   }
   void OnTick(object source,ElapsedEventArgs e)
   {
      string tickTime = e.SignalTime.ToLongTimeString(  );
      m_Counter++;
      Trace.WriteLine(m_Counter + " " + tickTime);
   }
}
SystemTimerClient obj = new SystemTimerClient(  );

//Output:
1 4:20:48 PM
2 4:20:49 PM
3 4:20:50 PM

Because the timer-handling method is called on a different thread, make sure you synchronize access to the object members. This prevents state corruption.


Of particular interest is the SynchronizingObject property of System.Timers.Timer, which allows you to specify an object implementing ISynchronizeInvoke to be used by the timer to call back into the application (instead of calling directly, using the thread pool). For example, here is the code required to use System.Timers.Timer in a Windows Forms Form-derived class:

     partial class SystemTimerClient : Form
     {
        System.Timers.Timer m_Timer;
        int m_Counter;

        public SystemTimerClient(  )
        {
           m_Counter = 0;
           m_Timer = new System.Timers.Timer(  );
           m_Timer.Interval = 1000;//One second
           m_Timer.Elapsed += OnTick;
           m_Timer.SynchronizingObject = this;//Form implements ISynchronizeInvoke
           m_Timer.Start(  );
        }
        void OnTick(object source,ElapsedEventArgs e)
        {
           //Called on the main UI thread, not on a thread from the pool
        }
     }

By default, SynchronizingObject is set to null, so the timer uses threads from the pool directly.

8.9.6.2 System.Threading.Timer

The System.Threading namespace contains another Timer class, defined as:

     public sealed class System.Threading.Timer : MarshalByRefObject,IDisposable
     {
        public Timer(TimerCallback callback,object state,long dueTime,long period);
        /* More overloaded constructors */

        public bool Change(int dueTime,int period);
        /* More overloaded Change(  ) methods */

        public void Dispose(  );
     }

System.Threading.Timer is similar to System.Timers.Timer: it too uses the thread pool. The main difference is that it provides fine-grained and advanced control; you can set its due time (that is, when it should start ticking), and you can pass any generic information to the callback tick method. To use System.Threading.Timer, you need to provide its constructor with a delegate of type TimerCallback, defined as:

     public delegate void TimerCallback(object state);

The delegate targets a timer callback method, invoked on each timer tick. The state object is typically the object that created the timer, so you can use the same callback method to handle ticks from multiple senders, but you can of course pass in any other argument you like. The other parameter the timer constructor accepts is the timer period (i.e., the timer interval). To change the timer period, simply call the Change( ) method, which accepts the new due time and period. System.Threading.Timer doesn't provide an easy way to start or stop the timer. It starts ticking immediately after the constructor (actually, after the due time has elapsed), and to stop it you must call its Dispose( ) method. If you want to restart it, you must create a new timer object. Example 8-26 demonstrates the use of System.Threading.Timer.

Example 8-26. Using System.Threading.Timer
using System.Threading;

class ThreadingTimerClient
{
   System.Threading.Timer m_Timer;
   int m_Counter;

   public ThreadingTimerClient(  )
   {
      m_Counter = 0;
      Start(  );

      //Can block this thread because the timer uses a thread from the thread pool
      Thread.Sleep(4000);
      Stop(  );
   }
   void Start(  )
   {
      m_Timer = new System.Threading.Timer(OnTick,null,0,1000);
   }
   void Stop(  )
   {
      m_Timer.Dispose(  );
      m_Timer = null;
   }
   void OnTick(object state)
   {
      m_Counter++;
      Trace.WriteLine(m_Counter);
   }
}
ThreadingTimerClient obj = new ThreadingTimerClient(  );

//Output:
1
2
3
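
For completeness, here is a brief sketch of using Change( ) on a timer such as the m_Timer field from Example 8-26 (from within the class): the first call sets a new due time and period, and passing Timeout.Infinite for both values effectively pauses the timer without disposing of it.

     //Wait two seconds before the next tick, then tick every half-second
     m_Timer.Change(2000,500);

     //Suspend ticking without disposing of the timer
     m_Timer.Change(Timeout.Infinite,Timeout.Infinite);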

8.9.6.3 System.Windows.Forms.Timer

The System.Windows.Forms namespace contains a third Timer class, defined as:

     public class System.Windows.Forms.Timer : Component
     {
        public Timer(  );
        public Timer(IContainer container);

        public virtual bool Enabled{get; set;}
        public int Interval{get; set;}

        public event EventHandler Tick;

        public void Start(  );
        public void Stop(  );
     }

Although the System.Windows.Forms.Timer methods look like those of System.Timers.Timer, System.Windows.Forms.Timer doesn't use the thread pool to call back into the Windows Forms application. Instead, it's based on the good old WM_TIMER Windows message. Instead of using a thread, the timer posts a WM_TIMER message to the message queue of its current thread, at the specified interval. Using System.Windows.Forms.Timer is like using System.Timers.Timer, except the timer-handling method is of the canonical signature defined by the EventHandler delegate. The fact that virtually the same set of methods can use drastically different underlying mechanisms is a testimony to the degree of decoupling from the ticking mechanisms that timers provide to the applications using them.

Visual Studio has built-in Designer support for the Windows Forms timer. Simply drag a timer from the Toolbox onto the form. The Designer then displays the timer icon underneath the form and allows you to set its properties.


Because all the callbacks are dispatched on the main UI thread, there is no need to manage concurrency when using Windows Forms timers. However, this may be a problem if the processing is long, because the user interface will not be responsive during the processing.
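
For comparison with Example 8-25, here is a minimal sketch of the same counter using System.Windows.Forms.Timer. The WindowsFormsTimerClient class is made up for this illustration:

     partial class WindowsFormsTimerClient : Form
     {
        System.Windows.Forms.Timer m_Timer;
        int m_Counter;

        public WindowsFormsTimerClient(  )
        {
           m_Counter = 0;
           m_Timer = new System.Windows.Forms.Timer(  );
           m_Timer.Interval = 1000;//One second
           m_Timer.Tick += OnTick;
           m_Timer.Start(  );
        }
        //Called on the UI thread via a WM_TIMER message, so keep it short
        void OnTick(object sender,EventArgs e)
        {
           m_Counter++;
           Text = "Tick number " + m_Counter;
        }
     }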

8.9.6.4 Choosing a timer

If you are developing a Windows Forms application, you should use System.Windows.Forms.Timer. In all other cases, I recommend using System.Timers.Timer. Its methods are easy to use, while System.Threading.Timer's methods are cumbersome and offer no substantial advantage.

8.9.7. Volatile Fields

To optimize access to object fields, the compiler may cache a member variable's value in a local temporary variable after the first time it is read. If the variable is read repeatedly without an attempted write, the subsequent reads can access the temporary variable instead of the actual object:

     class MyClass
     {
        public int Number;
     }
     MyClass obj = new MyClass(  );

     int number1 = obj.Number;
     int number2 = obj.Number; //Compiler may use cached value here

This may yield better performance, especially in tight loops:

     while(<some condition>)
     {
        int number = obj.Number;
        /* Using number */
     }

The problem is that if a thread context switch takes place after the assignment to number1 but before the assignment to number2, and the new thread changes the value of the field, number2 will still be assigned the old cached value. Of course, it's a bad idea to expose member variables directly in public; you should always access them via properties. However, in the rare case that you do want to expose public fields without synchronizing access to them, the C# compiler supports volatile fields. A volatile field is a field defined using the volatile reserved word:

     class MyClass
     {
        public volatile int Number;
     }

When a field is marked as volatile, the compiler doesn't cache its value and always reads the field value. Similarly, the compiler writes assigned values to volatile fields immediately, even if no read operation takes place in between. The Thread class offers multiple versions of the VolatileRead( ) and VolatileWrite( ) static methods:

     public sealed class Thread
     {
        public static int VolatileRead(ref int address);
        public static object VolatileRead(ref object address);
        public static void VolatileWrite(ref int address,int value);
        public static void VolatileWrite(ref object address,object value);
        //Additional versions of VolatileRead(  ) and VolatileWrite(  )
     }

VolatileRead( ) reads the latest version of the value at a memory address, and VolatileWrite( ) writes a value to the address so that it is immediately visible to all threads. Using both VolatileRead( ) and VolatileWrite( ) consistently on a member variable has the same effect as marking it as volatile.
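
Here is a brief sketch of using the two methods consistently on the Number field from the earlier example:

     MyClass obj = new MyClass(  );
     Thread.VolatileWrite(ref obj.Number,8);

     int number = Thread.VolatileRead(ref obj.Number);
     Debug.Assert(number == 8);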

Visual Basic 2005 has no equivalent to the C# volatile keyword, so its developers can only use VolatileRead( ) and VolatileWrite( ).

Avoid volatile fields; instead, lock your object or fields to guarantee deterministic and thread-safe access.


8.9.8. .NET and COM's Apartments

.NET doesn't have an equivalent to COM's apartments. As you have seen throughout this chapter, every .NET component resides in a multithreaded environment, and it's up to you to provide proper synchronization. The question is, what threading model should .NET components present to COM when interoperating with COM components as clients?

The Thread class provides three methods for managing its apartment state: GetApartmentState( ), SetApartmentState( ), and TrySetApartmentState( ). The methods are defined as follows:

     public enum ApartmentState
     {
        STA,
        MTA,
        Unknown
     }
     public sealed class Thread
     {
        public ApartmentState GetApartmentState(  );
        public void SetApartmentState(ApartmentState state);
        public bool TrySetApartmentState(ApartmentState state);
        //Other methods and properties
     }

These methods accept or return the enum ApartmentState.

ApartmentState.STA stands for single-threaded apartment; ApartmentState.MTA stands for multithreaded apartment. The semantics of ApartmentState.Unknown are the same as those of ApartmentState.MTA.

By default, the Thread apartment state is set to ApartmentState.Unknown, resulting in the MTA apartment state.

Threads from the thread pool use the MTA apartment state.


You can programmatically instruct .NET as to what apartment state to present to COM by calling SetApartmentState( ) and providing either ApartmentState.STA or ApartmentState.MTA (but not ApartmentState.Unknown):

     Thread workerThread = new Thread(ThreadMethod);
     workerThread.SetApartmentState(ApartmentState.STA);
     workerThread.Start(  );

If the apartment state of the managed thread matches that of the COM object on which it tries to invoke a method, COM will run the object on that thread. If the threading model is incompatible, COM will marshal the call to the COM object's apartment, according to the COM rules. Obviously, a match in apartment model will result in better performance.

You can only set the threading model before the thread starts to run. If you try to set the apartment model of an already executing thread, an exception of type InvalidOperationException is raised. If you are unsure whether the thread is already running, you can use the TrySetApartmentState( ) method, which returns true if setting the thread's apartment state was successful (because the thread has not been started yet) or false if it failed. (Obviously, it is better to always know deterministically the state of your thread, so you don't have to rely on half-measures like TrySetApartmentState( ).) Finally, you can always retrieve the apartment state of your thread using the GetApartmentState( ) method.
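
A brief sketch of the two methods in use (ThreadMethod is assumed to be defined elsewhere, as in the previous snippet):

     Thread workerThread = new Thread(ThreadMethod);

     bool succeeded = workerThread.TrySetApartmentState(ApartmentState.STA);
     Debug.Assert(succeeded);
     Debug.Assert(workerThread.GetApartmentState(  ) == ApartmentState.STA);

     workerThread.Start(  );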

You can also use either the STAThread or the MTAThread method attribute to declaratively set the apartment state. Although the compiler doesn't enforce this rule, you should apply these attributes only to the Main( ) method and use programmatic settings for your worker threads:

     [STAThread]
     static void Main(  )
     {...}

Note that you can set the apartment model only once, regardless of whether you do it declaratively or programmatically. Future attempts to change it, even if the thread is not running, will result in an InvalidOperationException.

The Windows Forms application wizard automatically applies the STAThread attribute to the Main( ) method of a Windows Forms application. This is done for two reasons. The first is in case the application hosts ActiveX controls, which are STA objects by definition. The second is in case the Windows Forms application interacts with the Clipboard, which still uses COM interop. With the STAThread attribute, the underlying physical thread uses OleInitialize( ) instead of CoInitializeEx( ) to set up the apartment model. OleInitialize( ) automatically does the additional initialization required for enabling drag-and-drop.


There is one side effect to selecting an apartment threading model: you can't call WaitHandle.WaitAll( ) with multiple handles from a thread whose apartment state is set to ApartmentState.STA. If you do, .NET throws an exception of type NotSupportedException. This is probably because the underlying implementation of WaitHandle.WaitAll( ) uses the Win32 call WaitForMultipleObjects( ), and that call blocks the STA thread from pumping COM calls in the form of messages to the COM objects. Note that when a managed thread makes a call outside managed code for the first time, .NET calls CoInitializeEx( ) with the appropriate apartment state, even if the thread doesn't intend to interact with COM objects directly.


