Synchronizing Variable Access

You might have thought that the AsyncDelegates sample was complicated enough. In fact, however, I carefully designed that sample in a way that made it far simpler than most multithreaded applications. Whenever an asynchronous operation was requested, the results would come back, be displayed on the console, and then be immediately forgotten. That meant that when the results were retrieved in a callback method running on one of the thread pool threads, we didn't have to worry about storing the data somewhere the main thread could access it, which in turn meant we could ignore the whole issue of synchronizing access to variables across threads. In real life it's extremely unlikely that you will be able to write a multithreaded application without being faced with this issue, and that's the subject we'll deal with next. I'm briefly going to review the principles of thread synchronization and the objects that are available in the .NET Framework for this purpose. Then I shall present a sample that illustrates these principles using the CLR's Monitor class.

Data Synchronization Principles

The reason for needing to worry about data synchronization is fairly well known: in general, reading and writing large data items are not atomic operations. In other words, even if something looks like one single statement in a high-level language (or even in IL), it actually requires a number of native executable instructions to perform. However, there's no way of telling when Windows might decide that the running thread has finished its time slice and so transfer control to another thread. This means, for example, that one thread might start writing some data to an object. While it is in the middle of writing, and the object therefore contains half-finished data, Windows transfers control to another thread, which proceeds to read the value of the same object, happily unaware that it is reading garbage. On a multiprocessor machine, the situation can get even worse, as two threads running on different CPUs really can access the same variable simultaneously. The real problem is that this kind of bug is very hard to track down: the data corruption often only manifests itself a lot later on, when the corrupt data is used for something else. Not only that, but such bugs are rarely reproducible: it is essentially unpredictable when Windows will swap threads, and this will vary every time you run the application. You could easily end up with completely different behavior on every run - which makes the problem very hard to debug.
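The classic demonstration of this is two threads incrementing a shared counter with no synchronization. The following sketch (the class and method names are my own invention, not part of the chapter's samples) shows how increments can be lost:

```csharp
using System;
using System.Threading;

class CounterRace
{
    static int counter = 0;

    static void Increment()
    {
        // counter++ compiles to a load, an add, and a store - three
        // separate steps, during which Windows may switch threads.
        for (int i = 0; i < 100000; i++)
            counter++;
    }

    static void Main()
    {
        Thread t1 = new Thread(new ThreadStart(Increment));
        Thread t2 = new Thread(new ThreadStart(Increment));
        t1.Start();
        t2.Start();
        t1.Join();
        t2.Join();

        // If a context switch lands between one thread's load and store,
        // the other thread's increment is overwritten, so the total
        // typically comes out below 200000 - and differs on every run.
        Console.WriteLine("Counter: {0} (200000 increments attempted)", counter);
    }
}
```

Run it a few times and you will usually see a different, too-small total each time - exactly the irreproducible behavior described above.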

Interestingly, although thread synchronization issues can break your code, they won't cause it to fail type safety. This is because loading primitive types such as pointers is always atomic, so managed pointers and object references still cannot be corrupted in any way that would cause the application to access memory outside its own area of virtual address space.

Ideally, the solution to this problem would be a mechanism that prevents any thread from accessing a variable while another thread is already accessing it. In practice, it's not feasible for this to be done automatically. Instead, the Windows operating system provides some mechanisms (for unmanaged code) whereby a thread can ask to wait before proceeding into a potentially dangerous section of code - but this relies on the programmer knowing where the dangerous points in the code are, and including extra code that invokes the Windows synchronization mechanisms at those points. For managed code, the same mechanisms are available, plus some extra facilities provided by the CLR itself. We'll now examine the main such mechanisms available to managed code. Note that, although I'll review all the main objects here so you are aware of their existence, the only ones that we will actually use in the samples in this chapter are Monitor and ManualResetEvent.

The Monitor

The simplest and most efficient way to synchronize variable access in managed code is normally using something called the monitor. Suppose we have some C# code in which some thread is about to access a variable x, and it knows that another thread might want to access x as well. We can ensure that the two threads don't simultaneously access this data like this:

lock (x)
{
   // Code that uses x goes here
}

You can view the above code as asking the CLR's monitor for a lock on the variable x. Provided no other thread already has a lock on x, the monitor will freely give this code the lock, which will last for the duration of the block statement associated with the lock (the statements in the braces). However, if another thread has already acquired a lock on x, then the monitor will refuse to grant this thread the lock. Instead, the thread will be suspended until the other thread releases its lock, whereupon this thread can proceed. Provided you are careful to place lock statements around every block of code that accesses the variable x, this will ensure that x is only ever accessed by one thread at any one time, preventing thread synchronization bugs.

In VB, the corresponding syntax is:

SyncLock x
   ' Code that accesses x goes here
End SyncLock

A thread that is waiting before starting to execute a potentially dangerous section of code is said to be blocked. (Incidentally, blocked threads do not consume any CPU time.) The objects that are responsible for controlling the blocking of threads are known as thread synchronization objects, or thread synchronization primitives. The sections of code that should not be executed simultaneously, and which therefore will have been coded up in such a way that they are subject to the control of thread synchronization primitives, are known as protected code.

The C# and VB code we've just seen is actually a useful shorthand syntax, which is great for writing code but not so good for seeing what is actually going on. The lock and SyncLock statements are converted by the compiler into the equivalent of the following code:

object temp = x;
Monitor.Enter(temp);
try
{
   // Code that uses x
}
finally
{
   Monitor.Exit(temp);
}

What's actually happening is that we tell the CLR's Monitor class that we want to acquire a lock on an object by passing a reference to the object to the static Monitor.Enter() method - and we use the static Monitor.Exit() method to inform the monitor that we no longer need the lock. Notice that the full version of the code caches the original object reference to make sure that the correct reference is passed to Exit(), even if x is reassigned. The Exit() method is in a finally block so that it is guaranteed to execute. If for any reason our thread failed to execute Monitor.Exit(), there would be a big problem when another thread tried to grab the lock on that object. Because the monitor would think that the first thread still has the lock, it will block the other thread. Permanently. Fortunately, the lock/SyncLock syntax guarantees that Exit() will be executed.

C++ has no shorthand equivalent to the C# lock and VB SyncLock statements. If coding in C++, you'll need to invoke the Monitor methods explicitly - and take care to place the Exit() statement in a finally block.

Locks on different objects do not affect each other. In other words, if one thread is executing code inside a lock (x) { } block, this will not prevent another thread from executing lock (y) { } and entering the protected area of code, provided of course that y != x.

One important point is that, although the statement lock (x) { } would normally be used to synchronize access to the object referred to by x, there is in reality no restriction on the code that can be placed in the protected block. Normally you should be wary of placing code in lock (x) { } that is unrelated to x, because that will make your code harder to understand, but we'll see later that you may nevertheless want to do so in some situations, normally for reasons to do with the way you design your thread synchronization architecture.

The Monitor class itself represents a special lightweight synchronization object developed specifically for the CLR. Internally, Monitor is implemented to use the sync block index of the object passed to the Enter() and Exit() methods to control thread blocking. Recall from Chapter 3 that the sync block index forms a portion of an int32 that is stored in the first four bytes of each reference-type managed object. This value is usually zero, but if a thread claims the lock on an object, the monitor will create an entry describing the fact in a table internal to the CLR, and modify the sync block index of the object to reference that table entry. However, if the sync block index indicates that the object is already locked, the monitor will block the thread until the object is released. Bear in mind that you can only use the monitor to synchronize access to object references, not to value types. If you try to pass a value type to Monitor.Enter(), the type will be boxed - which will result in no synchronization. Suppose we execute some code such as this on a thread:

Monitor.Enter(v);    // v is a value type
{

v will be boxed and the boxed object reference passed to Monitor.Enter(). Unfortunately, if some other thread later attempts the same thing, a new boxed copy of v will be created. Since the monitor will see two different boxed objects, it won't realize that the two threads are trying to synchronize against the same variable, so the threads won't get synchronized. This is potentially a nasty source of bugs. This is another area where the C# lock and VB SyncLock statements provide additional support - the C# and VB compilers will both flag a syntax error if you use these statements to lock a value type. If you do need to synchronize access against a value type, a commonly used technique is this:

lock (typeof(MyValueType))    // MyValueType is the type of v
{

Bear in mind that typeof always returns the same Type object for a given type, so using this technique will prevent the threads from running simultaneously even if those threads are accessing different, unrelated instances of that type; this may be stronger protection than you need. Other possibilities are to find some convenient reference object you can lock against instead, or to manually box v once, so that all threads can synchronize access against the same boxed object.
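The "convenient reference object" approach usually means keeping a dedicated object field alongside the value-type data and locking on that. Here is a minimal sketch (Point and PointHolder are hypothetical names, not from the chapter's samples):

```csharp
using System;

// Hypothetical struct whose fields need synchronized access.
struct Point
{
    public int X;
    public int Y;
}

class PointHolder
{
    private Point position;                               // value type - can't be locked directly
    private readonly object positionLock = new object();  // dedicated reference object to lock on

    public void Move(int dx, int dy)
    {
        lock (positionLock)    // every thread locks the same reference object
        {
            position.X += dx;
            position.Y += dy;
        }
    }

    public Point GetPosition()
    {
        lock (positionLock)
        {
            return position;   // the struct is copied out while the lock is held
        }
    }
}
```

Unlike the lock (typeof(...)) technique, this only serializes threads that are working on this particular PointHolder instance, not every instance of the type.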

Monitor is actually quite unusual among the thread synchronization classes in two regards: it's implemented by the CLR, and it is never instantiated. Most of the other classes wrap native Windows synchronization objects and need to be instantiated in order to be used. With a monitor, locking is performed against the reference to the object to which access needs to be protected, while most of the other synchronization classes require you to instantiate the synchronization object, and then perform thread locking against that synchronization object.

Mutexes

A mutex has a very similar purpose to the monitor, but the syntax for using it is rather different. Suppose we want to protect access to a variable called X, and we've named the mutex we want to use to do this mutex1:

// Instantiate the mutex in some manner so that it will be accessible to
// all relevant threads (this normally means it'll be a member field)
this.mutex1 = new Mutex();

// Later, when a thread needs to perform synchronization
mutex1.WaitOne();
// Do something with variable X
mutex1.ReleaseMutex();
// Carry on

Calling the WaitOne() method effectively causes that thread to ask for ownership of the mutex. If no other thread owns the mutex, then everything is fine - this thread can proceed, and it retains ownership of the mutex until it calls ReleaseMutex() against the same mutex object. If another thread already owns the mutex, the first thread will block until the mutex is released. You can instantiate as many mutexes as you want. If a thread asks one mutex if it's OK to proceed, the result will not be affected by the state of any other mutex: the mutexes all work completely independently.

Although for clarity I haven't explicitly shown it in the above code, do remember to place the ReleaseMutex() call in a finally block in order to avoid bugs caused by a mutex not being released.
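With that correction applied, the mutex code might look like this minimal sketch (mutex1 and x as member fields, as before; the class and method names are my own):

```csharp
using System.Threading;

public class MutexDemo
{
    private Mutex mutex1 = new Mutex();
    private int x;    // the variable being protected

    public void UpdateX()
    {
        mutex1.WaitOne();     // ask for ownership of the mutex; blocks if another thread owns it
        try
        {
            // Do something with variable X
            x++;
        }
        finally
        {
            mutex1.ReleaseMutex();   // guaranteed to run, even if the protected code throws
        }
    }
}
```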

Mutexes will give you a much bigger performance hit than using the monitor, and there is no shortcut syntax. So why would you ever use them? The answer is partly because they have an ability to avoid an error condition known as a deadlock, which we'll discuss shortly, and partly because they are relatively easy to use cross-process. It's not often that you'll need to synchronize threads running in two different processes (details of how to do this are in the MSDN documentation for Mutex), but the issue may crop up if you are doing some very low-level work (such as programming device drivers), for which multiple processes are sharing resources. Another possible reason for using mutexes is if part of your thread synchronization is being done in unmanaged code. Mutex exposes a Handle property that returns the underlying native mutex handle, which can be passed to unmanaged code that needs to manipulate the mutex.

Don't try to combine Monitor and Mutex simultaneously to protect any variables. Use either one or the other. The two types work independently: for example, when the monitor checks to see if it's OK to let a thread through to protected code, it won't notice the existence of any mutexes designed to protect the same variables.

WaitHandles

System.Threading.WaitHandle is an abstract class, so you can't instantiate it yourself. I mention it here because it is the base class for several synchronization classes, including Mutex, ManualResetEvent, and AutoResetEvent. It is WaitHandle that encapsulates the underlying thread blocking mechanism and implements the WaitOne() method which we've just seen in action for the Mutex class, as well as the Handle property. A WaitHandle instance can be in either of two states: signaled or non-signaled. Non-signaled objects will cause threads that are waiting on the object to be blocked until the object's state is set to signaled again.

Besides WaitOne(), WaitHandle implements a couple of other blocking methods: WaitAll() and WaitAny(). These methods are intended for the more complex situation in which one thread is waiting for several thread synchronization objects to become signaled. WaitAny() allows the thread to proceed when any one of the items is signaled, while WaitAll() blocks the thread until all items are signaled.
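As a sketch of WaitAny() in action, suppose a thread needs to proceed as soon as either of two events becomes signaled (the event names and worker logic here are hypothetical, not from the chapter's samples):

```csharp
using System;
using System.Threading;

class WaitAnyDemo
{
    static ManualResetEvent fileReady = new ManualResetEvent(false);
    static ManualResetEvent dbReady = new ManualResetEvent(false);

    static void DoWork()
    {
        Thread.Sleep(100);    // simulate some background work
        dbReady.Set();        // signal the second event
    }

    static void Main()
    {
        WaitHandle[] handles = new WaitHandle[] { fileReady, dbReady };

        Thread worker = new Thread(new ThreadStart(DoWork));
        worker.Start();

        // Blocks until any one of the handles is signaled, and returns
        // the array index of the handle that did it.
        int index = WaitHandle.WaitAny(handles);
        Console.WriteLine("Handle {0} was signaled first", index);

        worker.Join();
    }
}
```

Replacing WaitAny() with WaitAll() would instead block the main thread until both events had been signaled.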

ReaderWriterLocks

This is arguably the most sophisticated synchronization object. It's something that has been implemented specially for the CLR - in unmanaged code you'd have to code up a reader-writer lock by hand if you needed one. The ReaderWriterLock is similar to a Monitor, but it distinguishes between threads that want to read a variable and threads that want to write to it. In general, there is no problem with multiple threads accessing a variable simultaneously, provided they are all just reading its value. Possible data corruption only becomes an issue if one of the threads is actually trying to write to the variable. With a ReaderWriterLock, instead of simply asking for ownership of a single lock, a thread can indicate whether it wants a reader lock or a writer lock, by calling either the AcquireReaderLock() or the AcquireWriterLock() method. If a thread has a reader lock, this won't block any other threads that also ask for reader locks - it will only block threads that ask for a writer lock. If a thread has a writer lock, this will block all other threads until the writer lock is released.
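A minimal sketch of a class protected by a ReaderWriterLock might look like this (SharedData is my own illustration; note that each Acquire...() call takes a timeout, and must be paired with the matching Release...() call):

```csharp
using System;
using System.Threading;

class SharedData
{
    private ReaderWriterLock rwLock = new ReaderWriterLock();
    private int value;

    public int Read()
    {
        // Many threads may hold reader locks at the same time.
        rwLock.AcquireReaderLock(Timeout.Infinite);
        try
        {
            return value;
        }
        finally
        {
            rwLock.ReleaseReaderLock();
        }
    }

    public void Write(int newValue)
    {
        // Exclusive: blocks both readers and other writers.
        rwLock.AcquireWriterLock(Timeout.Infinite);
        try
        {
            value = newValue;
        }
        finally
        {
            rwLock.ReleaseWriterLock();
        }
    }
}
```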

Events

In the context of threading, an event is a special thread-synchronization object that is used to block a thread until another thread specifically decides that it's OK for that thread to continue. This use of the term event is quite unrelated to the familiar Windows Forms usage, where event denotes a special type of delegate. We will use events later on in this chapter as a way for a background thread to tell the main foreground thread that it has finished processing its background task.

The way an event works is very simple. The thread that needs to wait for something calls WaitOne() to wait until the event is signaled. When another thread detects that whatever the first thread was waiting for has now happened, it calls the event's Set() method, which sets the event to signaled, allowing the first thread through.

There are two types of event, represented by two classes: ManualResetEvent and AutoResetEvent. The difference between these classes is that once a ManualResetEvent is signaled, it remains signaled until its Reset() method is called. By contrast, when an AutoResetEvent is set to signaled by calling the Set() method, it becomes signaled for a brief instant, allowing any waiting threads to carry on, but then immediately reverts to the non-signaled state.
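The difference is easy to see by polling the two event types with a zero timeout - WaitOne(0, false) returns immediately, reporting whether the event is currently signaled. This small demonstration is my own, not part of the chapter's samples:

```csharp
using System;
using System.Threading;

class EventDemo
{
    static void Main()
    {
        // A ManualResetEvent stays signaled until Reset() is called...
        ManualResetEvent manual = new ManualResetEvent(false);
        manual.Set();
        Console.WriteLine(manual.WaitOne(0, false));   // True
        Console.WriteLine(manual.WaitOne(0, false));   // True - still signaled
        manual.Reset();
        Console.WriteLine(manual.WaitOne(0, false));   // False

        // ...whereas an AutoResetEvent reverts to non-signaled as soon
        // as it has released one waiting thread.
        AutoResetEvent auto = new AutoResetEvent(false);
        auto.Set();
        Console.WriteLine(auto.WaitOne(0, false));     // True - this wait consumes the signal
        Console.WriteLine(auto.WaitOne(0, false));     // False - already reset
    }
}
```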

Semaphores

A semaphore is a thread synchronization object that is similar to a mutex, except that it allows a certain number of threads to execute protected blocks simultaneously. For example, if you want a maximum of, say, three threads to be able to access some resource simultaneously, but no more, then you'd use a semaphore. Semaphores are useful for situations such as database connections where there is some limit on the number of simultaneous connections. Such limits may be imposed either for license reasons or for performance reasons. Semaphores are not implemented by the .NET Framework as of version 1, so if you do need a semaphore, you'll need to fall back on CreateSemaphore() and similar Windows API functions. Because of this, if you do need to use semaphores, you might find it easier to use unmanaged code for that part of your application.

The Interlocked Class

The Interlocked class is a useful utility class that allows a thread to perform a few simple operations atomically. It is not itself a thread synchronization primitive as such, but instead it exposes static methods to increment or decrement integers, or to exchange integers or object references, guaranteeing that the thread will not be interrupted while this happens. In some cases, using the Interlocked class can save you from having to instantiate a synchronization primitive at all. Like Monitor, the Interlocked class is never actually instantiated.
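As a sketch, incrementing a shared counter from two threads is safe with Interlocked.Increment(), with no lock at all (the class and method names here are my own):

```csharp
using System;
using System.Threading;

class InterlockedDemo
{
    static int counter = 0;

    static void Increment()
    {
        for (int i = 0; i < 100000; i++)
            Interlocked.Increment(ref counter);   // atomic read-modify-write
    }

    static void Main()
    {
        Thread t1 = new Thread(new ThreadStart(Increment));
        Thread t2 = new Thread(new ThreadStart(Increment));
        t1.Start();
        t2.Start();
        t1.Join();
        t2.Join();
        Console.WriteLine(counter);   // always 200000 - no increments are lost

        // Interlocked.Exchange() atomically stores a new value and
        // returns the old one.
        int old = Interlocked.Exchange(ref counter, 0);
        Console.WriteLine(old);       // 200000
    }
}
```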

Thread Synchronization Architecture

In this section we'll discuss a few issues concerning how you design your thread-synchronization code. We'll concentrate on Monitor and Mutex in this discussion, but similar principles apply to all the synchronization primitives.

Generally, how many primitives you use is going to be a balance between the system resources, maintainability, performance, and code robustness. Robustness is a particular issue because bugs related to thread-synchronization issues are characteristically hard to reproduce or track down. It's not unknown for a thread-synchronization bug to go completely unnoticed on a development machine, and then to appear when the code is moved on to a production machine. The only way to avoid this is to take care with the design of your thread-synchronization code. We've already seen the importance of making sure that all access to variables that can be seen from more than one thread is synchronized. You will also need to take care to avoid introducing deadlocks and race conditions into your code.

Deadlocks

A deadlock is a situation in which two or more threads are cyclically waiting for each other. Suppose thread1 executes the following code, where the objects x and y might both be manipulated by more than one thread, and therefore need synchronizing:

// Needs to manipulate x
lock (x)
{
   // Do something with x
   // Now need to do something with y too...
   lock (y)
   {
      // Do something with x and y
   }
}

Note that this code features nested lock statements. There's no problem with this - the operation of one lock is completely unaffected by any locks on other objects that may be held by that thread. Meanwhile, thread2 is executing this code:

// Needs to manipulate y
lock (y)
{
   // Do something with y
   // Now need to do something with x too...
   lock (x)
   {
      // Do something with x and y
   }
}

The problem is caused by the different order in which the threads claim ownership of the locks. Suppose thread1 claims the lock on x at about the same time as thread2 claims the lock on y. Thread1 then goes about its work, and then gets to some code that needs to be protected from y as well. So it calls lock(y), which means the thread is blocked until the second thread releases its ownership of the lock on y. The trouble is that the second thread is never going to release its lock - it's going to get blocked waiting for the first thread to release its lock on x! The two threads will just sit there indefinitely, both waiting for each other.

The moral from this is that if you need to start claiming multiple locks, you will need to be very careful about the order in which you claim them. There are a number of resolutions to the problem posed by the above code. One possibility is simply to use one lock - say the lock on x, and agree that throughout your code, you will use lock(x) to synchronize access to both x and y. Since there's no restriction on the code you can place in a lock block, this is fine syntactically, but may lead to less clear code. Another possibility is to use a mutex. The mutex can avoid deadlocks because of the WaitHandle.WaitAll() static method, which can request a lock on more than one mutex simultaneously. Assume we have declared Mutex variables, mutexX and mutexY, which are used to protect x and y respectively. Then we can write:

WaitHandle[] mutexes = new WaitHandle[2];
mutexes[0] = mutexX;
mutexes[1] = mutexY;
WaitHandle.WaitAll(mutexes);
// Protected code here
mutexX.ReleaseMutex();
mutexY.ReleaseMutex();

WaitHandle.WaitAll() will wait until all the locks on all mutexes can be acquired simultaneously. Only when this is possible will the thread be given ownership of any of the mutexes - hence avoiding the risk of a deadlock. But if you use this technique, there will be a performance penalty to pay, because mutexes are slower than using the monitor.

Races

Race conditions occur when the result of some code depends on unpredictable timing factors concerning when the CPU context-switches threads or when locks are acquired or released. There are numerous ways in which you can accidentally cause a race, but a typical example is where some code is broken into two protected blocks when it really needs to be protected as a single unit. This can occur if you're trying to avoid deadlocks, or if you are trying to limit the amount of code that is protected. (This is important because protecting code does hit performance due to blocking of other threads. The shorter the blocks of code you can get away with protecting, the less the performance hit.)

Let's go back to the code snippet that we've just used to demonstrate a deadlock, and let's alter the code that the first thread executes to prevent the deadlock:

// Needs to manipulate x
lock (x)
{
   // Do something with x
}
lock (y)
{
   // Now need to do something with y too...
   lock (x)
   {
      // Do something with x and y
   }
}

I've inserted code that releases the lock on x, then reclaims it almost immediately. This means that this thread is now locking the variables in the same order as the second thread, which eliminates the possibility of a deadlock.

Although the deadlock has gone, there is now a more subtle potential problem. Suppose the original protected region of code was there in order to keep some variables in a consistent state while the thread worked on them. There's a brief instant after lock(x) has been released for the first time when a different thread could theoretically jump in, execute lock(x), and then do its own processing on these variables. Since thread1 was in the middle of working on these variables, they might be in an inconsistent state. That's a race condition. In order to avoid races, you need to make sure of two points. Firstly, don't break out of a protected region of code until it really is completely safe to do so, and all relevant variables are in a state in which it's OK for another thread to look at them. Secondly, when you access variables that need to be protected, make sure that you not only do so from within a protected block of code, but also that your code does not make any assumptions about the value of the data being unchanged since the previous protected block if there is in fact a chance that another thread might have modified that data in the meantime.

You will gather from this discussion that the placing of locks needs to be done carefully in order to avoid subtle bugs. In general, the more different locks you are using, the greater the potential for problems. There is also a performance/resource problem associated with having too many synchronization objects in scope simultaneously, since each object does consume some system resources. (By too many, I mean hundreds. Five or ten locks won't be any problem.) This is especially important for objects that wrap underlying Windows structures, but is still the case even for the lightweight Monitor, since each active lock occupies memory in the CLR's internal table of sync blocks. At the simplest extreme, you might decide to synchronize all locking against the same object throughout the entire application, effectively using the same lock to protect all variables (the equivalent, using a Mutex, would be to use just one Mutex for all synchronization). This will make it impossible for deadlocks to occur - and in general will also make code maintenance easier, which in turn means you're less likely to write code that has synchronization bugs such as races. However, this solution will also impact performance, because you may find that threads are blocked waiting for ownership of the same lock, when these threads are actually waiting to access different variables, and so could execute simultaneously.
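The single-lock-for-everything approach can be sketched as one dedicated static object that every lock statement in the application uses (GlobalLock and Account are hypothetical names used purely for illustration):

```csharp
using System;

class GlobalLock
{
    // The single lock object used by every protected block in the application.
    public static readonly object SyncRoot = new object();
}

class Account
{
    private decimal balance;

    public void Deposit(decimal amount)
    {
        // Same lock as every other protected block in the program,
        // so no deadlock is possible - but unrelated threads contend too.
        lock (GlobalLock.SyncRoot)
        {
            balance += amount;
        }
    }

    public decimal GetBalance()
    {
        lock (GlobalLock.SyncRoot)
        {
            return balance;
        }
    }
}
```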

In practice, what happens in a real application is that you will analyze your code and try to identify which blocks of source code are mutually incompatible, in the sense that they should not be executed at the same time. And you'll come up with some scheme that protects these code blocks using a reasonable number of synchronization primitives. The disadvantage now is that working out how to do that is itself a difficult programming task - and one that you will only become skilled at with practice.

In fact, it would probably be fair to say that getting thread synchronization to work correctly in a large multi-threaded application is one of the hardest programming tasks you're likely to have to face. And you'll notice this is reflected in the synchronization samples that are coming up soon. You'll find that in the next few samples I am extremely careful about exactly how I use the thread-synchronization objects. In a way, this is the opposite situation to the previous sample. For the AsyncDelegates sample, the concepts we had to learn were quite involved, but once we got through those concepts the code was relatively simple. For the next couple of samples, there aren't many new concepts to learn, but the actual code becomes a lot hairier.

Thread Synchronization Samples

We will now develop the previous asynchronous delegates sample in order to demonstrate thread synchronization using the CLR's monitor. The new sample works much like the earlier sample, except that now requests are only fired off asynchronously, with a callback method used to retrieve the results on the thread-pool thread - that's the only scenario of interest to us now. However, instead of having the callback method display the results, it now transfers the results into member fields of the DataRetriever class, so that the main thread can later display the values. This means that these fields can be accessed by more than one thread, so all access to them needs to be protected. This sample also represents better programming practice: in most cases it is desirable for the user interface always to be accessed through the same thread.

The first sample, called MonitorDemo, is going to involve some rewriting of the DataRetriever class, as well as a new enum. Each DataRetriever is now used to obtain the address corresponding to one name, which is supplied in the DataRetriever constructor, and which cannot subsequently be changed. The enum is used to indicate the status of the fields in DataRetriever - whether results have arrived, are still pending, or whether the address lookup failed:

 public enum ResultStatus { Waiting, Done, Failed }; 

Now here are the new fields and the constructor in the DataRetriever class:

public class DataRetriever
{
   private readonly string name;
   private string address;
   private ResultStatus status = ResultStatus.Waiting;

   public DataRetriever(string name)
   {
      this.name = name;
   }

Notice that the name field is readonly - this means that access to this field will not need to be protected, since there are no thread-synchronization issues unless at least one thread might write to the value. (A class is only ever constructed on one thread, and other threads cannot access the class until after it has been constructed, so the fact that the field is written to in the constructor doesn't matter.)

The only change we need to make to the GetAddress() method is to its signature - to take account of the fact that name is now a member field rather than a parameter:

public string GetAddress()
{
   ThreadUtils.DisplayThreadInfo("In GetAddress...");
   // Simulate waiting to get results off database servers
   Thread.Sleep(1000);
   if (name == "Simon")
      return "Simon lives in Lancaster";
   else if (name == "Wrox Press")
      return "Wrox Press lives in Acocks Green";
   else
      throw new ArgumentException("The name " + name +
                                  " is not in the database");
}

The GetAddressAsync() method, which invokes GetAddress() asynchronously via a delegate is unchanged, except that it too no longer takes a parameter, since the name is accessed as a member field instead:

public void GetAddressAsync()
{
   GetAddressDelegate dc = new GetAddressDelegate(this.GetAddress);
   AsyncCallback cb = new AsyncCallback(this.GetResultsOnCallback);
   IAsyncResult ar = dc.BeginInvoke(cb, null);
}

This change is reflected in the different definition of the delegate:

 public delegate string GetAddressDelegate(); 

The callback method now looks like this:

public void GetResultsOnCallback(IAsyncResult ar)
{
   GetAddressDelegate del = (GetAddressDelegate)
                               ((AsyncResult)ar).AsyncDelegate;
   try
   {
      string result;
      result = del.EndInvoke(ar);
      lock(this)
      {
         this.address = result;
         this.status = ResultStatus.Done;
      }
   }
   catch (Exception ex)
   {
      lock(this)
      {
         this.address = ex.Message;
         this.status = ResultStatus.Failed;
      }
   }
}

We simply set the this.address field to the returned address and update this.status. This is done in protected code because this code is executed on a worker thread, but the results (including the address and status fields) will be read out of the object on the main thread. We don't want the main thread to start reading the results while the worker thread is halfway through writing them.

We also need a new method, which we will call GetResults(), which can return the name, address, and status to the Main() method for displaying. This is the method that reads the DataRetriever members on the main thread:

public void GetResults(out string name, out string address,
                       out ResultStatus status)
{
   name = this.name;
   lock (this)
   {
      address = this.address;
      status = this.status;
   }
}

The address and status fields are copied from the member fields, once again in a single protected block, to make sure that there is no overlap between reading and writing this data. We don't copy the name field in the protected block, because name is readonly. There is one extra subtlety here: although I have protected the process of copying out of members, I have only copied the address reference - not the string itself. Given that we want to protect simultaneous access to the data, you may wonder why we haven't actually taken a copy of the string itself, instead of merely copying the reference. The way we've done it, it looks like the main thread and worker thread are both going to end up holding references to the same data. However, this isn't a problem because strings are immutable, so there is no possibility of either thread actually modifying this string. In general, however, if we are dealing with references to mutable objects, you'll often have to take copies of these objects in order to ensure that different threads don't try to manipulate the same object.
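To sketch that last point, if the result were held in a mutable type such as StringBuilder rather than an immutable string, the reading code would need to copy the data itself inside the protected block, not just the reference. ResultHolder below is a hypothetical illustration, not part of the MonitorDemo sample:

```csharp
using System;
using System.Text;

class ResultHolder
{
    private StringBuilder result = new StringBuilder();   // mutable, unlike string

    public void Append(string text)
    {
        lock (this)
        {
            result.Append(text);
        }
    }

    public string GetResultCopy()
    {
        lock (this)
        {
            // Copy the data itself while holding the lock, so the caller
            // gets a snapshot that later modifications cannot disturb.
            return result.ToString();
        }
    }
}
```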

Finally, here is the new Main() method. This method sets up an array of three DataRetriever objects, initializes them and calls GetAddressAsync() on each of them inside a for loop. Then it sleeps for what we hope is a sufficient period of time (2.5 seconds) for all of them to have returned values, and calls another method, OutputResults(), which displays the results:

public static void Main()
{
   Thread.CurrentThread.Name = "Main Thread";
   DataRetriever[] drs = new DataRetriever[3];
   string[] names = { "Simon", "Julian", "Wrox Press" };
   for (int i=0; i<3; i++)
   {
      drs[i] = new DataRetriever(names[i]);
      drs[i].GetAddressAsync();
   }
   Thread.Sleep(2500);
   OutputResults(drs);
}

The OutputResults() method looks like this:

public static void OutputResults(DataRetriever[] drs)
{
   foreach (DataRetriever dr in drs)
   {
      string name;
      string address;
      ResultStatus status;
      dr.GetResults(out name, out address, out status);
      Console.WriteLine("Name: {0}, Status: {1}, Result: {2}", name,
                        status, address);
   }
}

Running this sample produces the expected output:

In GetAddress...  hash: 14, pool: True, backgrnd: True, state: Background
In GetAddress...  hash: 18, pool: True, backgrnd: True, state: Background
In GetAddress...  hash: 20, pool: True, backgrnd: True, state: Background
Name: Simon, Status: Done, Result: Simon lives in Lancaster
Name: Julian, Status: Failed, Result: The name Julian is not in the database
Name: Wrox Press, Status: Done, Result: Wrox Press lives in Acocks Green



Advanced .NET Programming
ISBN: 1861006292
Year: 2002
Pages: 124