The previous example looks cool. Multiple work items? No problem: hand them off to the thread pool and run them in the background on multiple threads. But, like Jayne Torvill and Christopher Dean ice dancing, doing it smoothly is significantly harder than it looks. The operating system swaps running threads in and out of the CPU without any regard to where a thread is in the course of its work, which can cause problems. Writing good multithreaded code is primarily about dealing with the interactions of the various threads swapping in and out at times you can’t control.
The operating system swapping threads in and out sometimes causes problems.
The first problem new thread programmers usually envision is their threads being swapped out in the middle of some time-critical task. Imagining their applications timing out, these programmers feel like a carpenter who’s just spread glue on some pieces of wood and needs to assemble them before the glue dries. However, standard applications don’t usually need to execute uninterrupted until they reach a specific point. The operating system saves and restores the processor register values as it swaps threads, so each thread picks up right where it left off. If you find your operations timing out too often, you probably have a bottleneck somewhere that more CPU cycles won’t fix. In the relatively few cases of dealing with a time-limited process that might time out, the thread priority mechanism discussed in the last section of this chapter usually solves the problem. If that’s not true, and you are in fact CPU bound, then threading won’t help; if you raise one thread’s priority to keep it from timing out, you’ll starve another thread and now it will time out instead of the first one. Low-level system and driver designers sometimes need more control than this, and they will probably need to drop out of the managed .NET environment into native unmanaged code to get that control, but this introduction isn’t aimed at them. Don’t worry about this timing out problem just yet, as you’ve got a much bigger headache to handle first.
A thread losing the CPU isn’t usually a problem.
The main problem in multithreaded code arises when one thread modifies data that another thread is using. Think of two children sharing one set of watercolor paints, or two programmers working on the same file. When taking turns voluntarily does work in real life, it works because you don’t hand over the resource until you’ve reached a safe point in your work with it. But we ditched cooperative multitasking almost a decade ago because expecting all programmers to write code that made their applications share nicely just didn’t work. In preemptive multithreading, you have no control over when the operating system swaps threads, so you have to worry about another thread making changes that undo pieces of setup you’ve carefully performed during your turn (wiping out a custom color you’ve mixed on the lid of the paint box, for example) before you get a chance to use them, instead of after you’re done with them.
The main problem in multithreaded code is threads modifying each other’s data at the wrong time.
Consider the code snippet N = N + 1 (or N++ in C#). Most programmers can’t imagine anything simpler, easier, and safer. And indeed, every thread has its own stack, so if N is a stack (automatic) variable or a function parameter, each thread that executes the code has its own copy and we have no problem. But if two threads try to share the same copy of N, we’re looking at a nasty, hard-to-find bug waiting to happen. The threads will occasionally mix each other up and produce the wrong result. Such a mix-up can occur if N is a global variable (or a shared class variable, which is just a politically correct form of global) and two threads access it simultaneously. It can also occur if N is an object member variable and two threads access the same object simultaneously.
Two threads accessing the same shared data will occasionally mix each other up.
How could such a simple piece of code as N = N + 1 possibly screw up? Look at Listing 9-3, which shows the assembler instructions to which the source line compiles: move the contents of the variable’s memory location to a processor register, increment the value in the register, and move the result back to the variable’s memory location.
Listing 9-3: Assembler code produced by compiling that source code.
mov AX, [N]   ; move variable memory location contents to CPU register
add AX, 1     ; add 1 to contents of register
mov [N], AX   ; move contents of register back to memory location
The problem occurs if threads are swapped at the wrong time. Suppose the memory location of the variable N contains the value 4. Suppose Thread A executes the first two statements—it moves 4 into the AX register and adds 1 to it to get 5—but further suppose that Thread A reaches this code near the end of its timeslice so that it is swapped out before it can execute the last statement, which would have stored the result. This isn’t immediately a problem. The operating system retains the values of Thread A’s registers in its own memory along with other administrative information about Thread A, so that doesn’t get lost. But now suppose that Thread B is swapped in and starts executing the same code—it fetches 4 from memory (because Thread A hasn’t had time to store its result), adds 1, and moves 5 back to memory. At some future time, Thread B exhausts its timeslice and the operating system swaps in Thread A, restoring the value of 5 to register AX. Thread A picks up where it left off and moves 5 into the memory location. The variable N now contains an incorrect value of 5, whereas if Thread A had been allowed to complete its operation before Thread B ran, N would contain the correct value of 6. We lost one of our increments because thread swapping happened at the wrong moment. This kind of bug is the most difficult, frustrating kind to track down that I’ve ever encountered because, as you can see, it is devilishly hard to reproduce. It happens only when the threads get swapped in and out at exactly the wrong moments. If Thread A had finished its operation before getting swapped out, or hadn’t started it, or Thread B executed some other code during its timeslice, we wouldn’t have encountered this problem. This is the kind of bug that causes programmers to smash their keyboards and take up goat herding.
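If you’d like to watch this lost update happen on your own machine, here’s a minimal C# sketch of the scenario. The RaceDemo class and its names are mine, not part of the chapter’s sample code. Run it a few times; the final count will usually come up short of the 200,000 increments we asked for.

using System;
using System.Threading;

class RaceDemo
{
    static int N = 0;   // shared variable, just like N in Listing 9-3

    static void Worker()
    {
        // Each unsynchronized N = N + 1 is the same read-modify-write
        // sequence shown in Listing 9-3, so increments can be lost
        for (int i = 0; i < 100000; i++)
            N = N + 1;
    }

    static void Main()
    {
        Thread a = new Thread(new ThreadStart(Worker));
        Thread b = new Thread(new ThreadStart(Worker));
        a.Start();
        b.Start();
        a.Join();
        b.Join();
        // We asked for 200000 increments; lost updates usually leave fewer
        Console.WriteLine("N = " + N);
    }
}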
Threading introduces some very hard to find types of bugs.
How can we solve this problem? The easiest way, obviously, is not to run threads that access the same data, but this is hard to do because threads are so useful. Much as you might think you’d like to, it’s essentially impossible to keep Thread A from being swapped out during its access of the global variable by raising its priority. Changing Thread A’s priority wouldn’t work definitively because lower-priority threads occasionally get a few CPU cycles (see the last example in this chapter), it could have nasty side effects by preempting system threads performing such tasks as disk buffer maintenance, and maybe Thread A’s work isn’t the most important piece being done by the whole computer at that time. Swapping isn’t the fundamental problem here. We don’t greatly care whether Thread A is in or out at any given moment. We need some way to ensure that Thread A’s operation produces the correct result no matter when it swaps in or out. We need to ensure that Thread B doesn’t mess with any operations that Thread A has started but hasn’t yet finished. We need to make any access to shared resources atomic with respect to threads.
We need some way to make sure that no thread messes with operations that other threads have started but haven’t finished.
We obtain thread safety by using synchronization objects provided by the .NET Framework. I find the term “synchronization” somewhat misleading, as it means making things happen at the same time, but we’re using the objects to make things happen one after the other, not at the same time. I think serialization would be a better name, but since the documentation uses synchronization throughout, I’ll stick with it. Don’t be surprised if you see these two seemingly opposite terms used synonymously in some books.[1]
We make shared access atomic by means of synchronization objects.
I’ve written a sample program that demonstrates some of the ways that you can use synchronization objects to make your code thread safe. The sample client program is shown in Figure 9-4. For you to understand its functionality more easily, I’ve written it so that all the synchronization happens inside the objects themselves rather than in the client. Sometimes this is the right location for synchronization code and sometimes it isn’t, as I’ll come back to discuss once I’ve shown you how synchronization works.
Figure 9-4: Synchronization sample program.
I’ve written three classes of objects. One is unsynchronized, and the other two each use a different mechanism for synchronization. Each object class contains one shared (static, per-class) method called SharedMethod, and two non-shared (instanced, per-object) methods called MethodX and MethodY. Each of the methods pops up a message box reporting its name and the thread from which it was called. When you click any of the buttons in the top row, the client program creates two instances (named 1 and 2) of the specified object class. The client application has two worker threads, labeled A and B. The associated buttons allow you to call any of the methods on either of the object instances from either of the threads. If you create unsynchronized objects and call a method from Thread A and the same method from Thread B, you’ll see two message boxes on the screen and know that there’s a potential for conflict. If you try the same exercise with the synchronized classes, you’ll see how the second call blocks while waiting for the first to complete.
A synchronization sample program starts here.
Apart from doing nothing, we have three basic design options in synchronization. Our first option is to dump the whole problem onto the common language runtime environment. We can mark an object class with the attribute System.Runtime.Remoting.Contexts.SynchronizationAttribute (do not confuse this with System.EnterpriseServices.SynchronizationAttribute, which is an unrelated COM+ compatibility feature) and have it inherit from the base class ContextBoundObject. The code for such an object is shown in Listing 9-4. When a thread calls any instance (non-shared) method on an object, the common language runtime places a synchronization lock on that object, which is released when the method returns. If another thread calls any method on that particular object, the common language runtime will cause that thread to block until the method returns to the original caller. At that time the second thread’s block will clear and the method call will proceed. Try it in the sample program; you may find that easier to understand than parsing my words.
You can synchronize all the methods on an individual object by marking it with an attribute.
Listing 9-4: Attribute-synchronized component.
<System.Runtime.Remoting.Contexts.SynchronizationAttribute()> _
Public Class UsesSynchronizationAttribute
    Inherits ContextBoundObject

    ' (methods omitted)

End Class
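If you prefer C#, a rough equivalent of Listing 9-4 would look something like the following sketch (mine, not one of the chapter’s listings):

using System.Runtime.Remoting.Contexts;

[Synchronization]
public class UsesSynchronizationAttribute : ContextBoundObject
{
    // (methods omitted)
}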
The main advantage of this approach is that it’s very easy to write; add a couple of declarations and it all just happens. The main drawback is that its synchronization rules are simultaneously too blunt and not blunt enough. It’s too blunt in the sense that it maintains one lock on all methods per object instance. If Thread A is calling Method X on one object, Thread B can’t call Method Y on the same object (although it can on other objects of the same class). The common language runtime locks every method on the object whether it needs it or not. Acquiring and releasing locks consumes microseconds. If your object has many methods that don’t require synchronization, you’ll be wasting them here. On the other hand, the synchronization produced by this attribute is not blunt enough because it doesn’t serialize access to shared (static) methods. Try it and see: Thread A and Thread B can each call the shared method simultaneously, despite the presence of the synchronization attribute. This synchronization mechanism is a good choice when your object class contains no shared methods or data, and when all instance methods need to be locked—for example, object classes (such as middleware objects that participate in transactions) that expect short lifetimes and maintain no state information between calls. This is the exact behavior of Single Call objects used in .NET remoting (see Chapter 10), which is what this synchronization mechanism was designed for. If it matches your needs, fine. But if not, as in most cases, there’s a better option.
This approach doesn’t handle shared (static) methods.
Suppose we don’t like attribute-based synchronization for some reason. Maybe we don’t want to inherit from ContextBoundObject, or maybe we need to synchronize shared methods, or maybe not all of our methods require synchronization, so we’d like to omit it where we don’t care in order to save CPU cycles and avoid potential bottlenecks. The .NET Framework provides the class System.Threading.Monitor for synchronizing this type of code. You probably won’t use this class directly, although you can if you want to. Instead, you’ll probably use the support for it that’s built into the compiler. Listings 9-5 and 9-6 show an object that uses this approach. The Visual Basic keyword SyncLock (lock in C#, with curly braces to delineate the code block) tells the compiler to generate code that uses a Monitor object to ensure that only one thread at a time can execute the next block of code. The Visual Basic keywords End SyncLock mark the end of the synchronized block. These keywords cause the compiler to emit a call to Monitor.Enter on entrance to the block and to Monitor.Exit on exit from the block. The first thread to enter acquires a lock, which it releases on exit. If another thread attempts to enter while the first thread holds the lock, the second thread will block until the first thread exits and releases the lock. In this sense, the behavior is similar to the attribute synchronization case that I discussed previously.
The keywords SyncLock and End SyncLock (lock in C#) cause the compiler to emit synchronization locking code.
Listing 9-5: Individual method using SyncLock for synchronization in Visual Basic.
Public Sub MethodX()
    ' Acquire a synchronization lock on this object
    SyncLock Me
        ' Perform work we don't want other threads to do
        ' until we finish
        MessageBox.Show("SyncLock-synchronized component " + _
            MyInstance.ToString + _
            " received call to MethodX on thread " + _
            System.Threading.Thread.CurrentThread.Name)
    ' Release the lock
    End SyncLock
End Sub
Listing 9-6: Same functionality in C#.
public void MethodX()
{
    // Acquire lock on this object
    lock (this)
    {
        // Perform work we don't want another
        // thread to perform until we're finished
        MessageBox.Show("lock-synchronized component " +
            MyInstance.ToString() +
            " received call to MethodX on thread " +
            System.Threading.Thread.CurrentThread.Name);
        // lock automatically released when we leave
        // this code block
    }
}
This synchronization technique is, however, more flexible than the previous one. The parameter that you pass to SyncLock determines the scope of the lock. If you want a lock on an individual object instance only, you pass Me (or this in C#). For methods that access shared data, you can instead pass the type of the object class, which acquires a lock covering all instances of that class. Only one thread per class will be granted this lock at any time, which allows you to modify per-class data without worrying about other threads. Also, unlike the attribute-synchronized case, this mechanism works only where you write the code to invoke it. If you omit SyncLock from a method, calls to that method will proceed no matter which thread they arrive on and no matter which other locks might exist on that object or class. This means that you can omit the lock on methods that don’t need it, essentially locking the front door but leaving the back door open. While this sounds extremely dangerous, sometimes it makes sense. For example, your class might contain a pure calculation method that does all of its work on the parameters its client passes and doesn’t use any mutable internal state. In such a case, locking the other methods but leaving this one unlocked would be like locking your house but leaving your garden shed unlocked because you know it doesn’t contain anything valuable. On the other hand, it’s easy to forget to put the code in where you need it, resulting in the weird bugs I discussed earlier.
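Here’s a short C# sketch showing both lock scopes side by side. The Counter class is hypothetical; in Visual Basic you’d pass Me or GetType(Counter) to SyncLock instead.

public class Counter
{
    private static int sharedCount;   // per-class (shared) data
    private int instanceCount;        // per-object data

    public void IncrementInstance()
    {
        lock (this)   // locks this one object instance only
        {
            instanceCount = instanceCount + 1;
        }
    }

    public static void IncrementShared()
    {
        lock (typeof(Counter))   // one lock covering every instance of the class
        {
            sharedCount = sharedCount + 1;
        }
    }
}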
SyncLock provides you with a more flexible type of lock.
Finally, if neither of these synchronization approaches works for you, you can implement your own synchronization manually. You can use the class System.Threading.Monitor directly to get more flexibility. This allows you to do things like attempt to acquire a lock but return immediately (or after a specified interval) with an error if the lock isn’t available. The class System.Threading.Interlocked provides the methods Increment, Decrement, Exchange, and CompareExchange, which perform their functions in an atomic, uninterruptible fashion. They use internal system primitives to perform their limited operations more efficiently than acquiring and releasing a full lock. For example, if all you want to do is increment a shared variable in a thread-safe manner, you’d simply call System.Threading.Interlocked.Increment instead of wrapping the increment in a SyncLock block. The class System.Threading.ReaderWriterLock provides an easy way of handling the common case of a single writer with multiple readers. Other classes such as System.Threading.Mutex, which I won’t discuss because they are too geeky for this introduction, are also available to you. Check out the System.Threading namespace to see the whole list. Most regular applications won’t want them, but you should know that the operating system provides as much power and flexibility as you feel like writing the code to handle.
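To give you the flavor of the manual approach, here’s a hedged C# sketch (the class, field, and method names are mine) showing an atomic Interlocked.Increment and a Monitor.TryEnter call that gives up after a specified interval instead of blocking forever:

using System.Threading;

public class ManualSync
{
    private static int counter;
    private static readonly object gate = new object();

    public static void SafeIncrement()
    {
        // Atomic increment, cheaper than taking and releasing a full lock
        Interlocked.Increment(ref counter);
    }

    public static bool TryDoWork()
    {
        // Try for the lock, but give up after 100 milliseconds
        if (!Monitor.TryEnter(gate, 100))
            return false;   // caller can report an error instead of hanging
        try
        {
            // ... touch the shared state here ...
            return true;
        }
        finally
        {
            Monitor.Exit(gate);   // always release what we acquired
        }
    }
}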
The .NET Framework provides more sophisticated synchronization objects, which naturally are harder to use.
Now that we’ve seen how to synchronize multithreaded code, the question is, who should write the code that does the synchronizing and where should that code live? Should an object make itself safe no matter what the client does, or should a client handle an object class with kid gloves, not knowing whether it’s safe or not? Different pundits will preach different approaches, and I won’t take sides here, except to say that it is absolutely critical for you to think through these questions carefully and pick the approach that makes you the most money. Decide what level of functionality your clients want to buy and provide that level. Obviously, the safer you make your objects, the less your clients have to worry about threading and the fewer calls to tech support you’ll get for those nasty, elusive bugs. On the other hand, if most of your clients are not multithreaded but you put in unnecessary locks to handle the few that are, you’ll be burning everyone’s microseconds to benefit a few ultra-geeky customers. For example, throughout the .NET Framework, shared class methods generally are thread safe, but individual object methods generally are not. Microsoft thought that was the best tradeoff for the most developers in the most cases, and in a large, general-purpose environment like this they were probably right. Maybe your class would provide both thread-safe and non-thread-safe methods, one for fast operation in single-threaded cases and one for slower but safe operation in multithreaded cases. Sometimes you’ll do that sort of thing with completely different classes. For example, Microsoft made the class System.String completely thread-safe and still made it fast by making the string’s data immutable, a good design choice given how often and how widely programmers use strings. No matter when threads get swapped in or out, the data in the string never changes, so synchronization problems can’t arise. Operations that appear to modify a string actually create and return a completely new string object, which again has been highly optimized to make it very fast. The class System.Text.StringBuilder provides a string-like object that you can modify in place, but it’s not thread-safe. What do your customers want? What do they need? Never mind that nonsense, what are they willing to pay for? Know thy customer. For he is not thee.
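A quick sketch of the difference between the two string classes, with an example of my own rather than anything from the Framework documentation:

using System;
using System.Text;

class StringDemo
{
    static void Main()
    {
        string s = "thread";
        // ToUpper returns a brand-new string; s itself never changes,
        // so no thread can ever observe it half-modified
        string t = s.ToUpper();
        Console.WriteLine(s + " -> " + t);

        // StringBuilder modifies its buffer in place: fast, but if two
        // threads share the same StringBuilder, synchronizing is your job
        StringBuilder sb = new StringBuilder("thread");
        sb.Append("ing");
        Console.WriteLine(sb.ToString());
    }
}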
It is up to an object vendor to decide how much safety to build into an object versus how much to leave for the client.
Now that I’ve explained the necessity of synchronization, I need to warn you of several dangerous cases. I’ve already said that locking and unlocking burns CPU cycles, which you don’t want. Worse than that, unnecessary locking can put bottlenecks into your system. If you have a shared class method that you synchronize with a per-class lock, then when one thread is accessing the method, any other thread that wants to use an object of that class has to block and wait for the release. A bottleneck of this type can nullify the value of multiple threads and really knock the stuffing out of your performance. A load test with a good profiler will detect problems of this type.
A second type of synchronization problem you can encounter is deadlocking, or the deadly embrace. If Thread A acquires lock 1 and waits on lock 2, but meanwhile Thread B acquires lock 2 and waits on lock 1, the threads are tied up together and will never become unsnarled. Try very hard to think through your algorithms and use locking only when you need it so that you can avoid problems of this type. If you can’t defeat the problem by thinking it through, change your synchronization to time out with an error instead of blocking infinitely.
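One classic way to think it through is to impose a single global ordering on your locks, so that no two threads can ever hold them in opposite orders. Here’s a minimal C# sketch (the names are mine):

public class NoDeadlock
{
    // Rule: every thread must acquire lock1 before lock2, never the reverse
    private static readonly object lock1 = new object();
    private static readonly object lock2 = new object();

    public static void WorkWithBoth()
    {
        lock (lock1)         // step 1: always the same first lock...
        {
            lock (lock2)     // ...step 2: then the second, so the deadly
            {                // embrace described above can never form
                // ... work with both shared resources ...
            }
        }
    }
}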
The last type of evil problem I’ll discuss is thread affinity. You usually see this in legacy code, particularly code that deals with the Windows user interface. For historical reasons, certain operating system objects, generally those dealing with window handles, insist on receiving all of their calls on the thread that originally created them. Windows Forms controls are the largest category of objects with this problem. The developers of these types of objects provide methods that can be called from any thread and that switch (marshal) the call to the original thread and copy the result back. This is hideously inefficient, so don’t write objects that require it. I ignored it in my first sample, calling list view methods from other threads even though I shouldn’t have. The list view wasn’t smart enough to detect and reject the call from a different thread, though don’t be surprised if you run into a component someday that is. I wouldn’t be surprised if it came back to bite me on some user’s machine in some obscure, impossible-to-reproduce situation.
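In Windows Forms, the marshaling methods I’m alluding to are Control.Invoke and its relatives. Here’s a minimal sketch, with a helper class of my own invention, of checking whether you’re on the wrong thread and switching if so:

using System.Windows.Forms;

public class AffinityHelper
{
    private delegate void AddItemHandler(ListView list, string text);

    // Safe to call from any thread, including worker threads
    public static void AddItem(ListView list, string text)
    {
        if (list.InvokeRequired)
        {
            // Wrong thread: marshal the call to the thread that created
            // the control, then block until it completes
            list.Invoke(new AddItemHandler(AddItem),
                        new object[] { list, text });
        }
        else
        {
            list.Items.Add(text);   // safe: we're on the creating thread
        }
    }
}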
Synchronization also contains some of its own problems.
As I’ve said before, synchronization bugs are very hard to diagnose: they’re hard to reproduce and they depend on the exact timing of thread swaps. If your code seems to have gremlins, say you count up to 100 but find only 99 things when you’re done, think synchronization. Instrumenting a build to catch a synchronization bug often changes the thread timing and masks the bug. Heisenberg’s Uncertainty Principle applies—the act of observation changes the results. You’ll have to do more thought experiments such as code walk-throughs and fewer actual code experiments than normal to catch this sort of critter. And make sure you test your multithreaded code on a multiprocessor machine as well as a single processor machine, as this will often reveal bugs that are otherwise masked by contention for the single CPU.
[1]The English language abounds in just such contradictions, which I love. For example, a man and a guy are more or less the same thing, but a wise man and a wise guy are opposites.