Solution Architecture


Windows NT, released in July 1993, introduced the concept of a thread, which has appeared in every 32-bit release of Windows since then. Every application is a separate process, which is a virtual address space. Each process contains at least one thread, which is an object within a process that executes program code, as shown in Figure 9-1. You can think of a process as a garage and a thread as an engine-powered machine within that garage. Every garage has at least one car or else you wouldn’t have built it, but many contain other engine-powered machines such as lawn mowers, chain saws, or more cars. A process doesn’t run, any more than a garage does; only threads within a process ever run (although many processes contain only one thread, which makes it look like the process is running).

Windows NT introduced the concept of a thread, an object in a process that executes code.


Figure 9-1: Process containing threads.
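If you'd like to see this containment for yourself, here is a minimal C# sketch (it is not one of this chapter's examples) that uses System.Diagnostics to ask the operating system which threads the current process contains. Even this trivial program holds at least one thread, the one running Main.

using System;
using System.Diagnostics;

class ProcessPeek
{
    static void Main()
    {
        // The process is the garage; the ProcessThread objects are the
        // engine-powered machines parked inside it.
        Process me = Process.GetCurrentProcess();
        Console.WriteLine("Process: {0} (id {1})", me.ProcessName, me.Id);
        Console.WriteLine("Threads in this process: {0}", me.Threads.Count);

        foreach (ProcessThread t in me.Threads)
        {
            // ProcessThread describes an operating system thread; it is
            // related to, but not the same as, a managed Thread object.
            Console.WriteLine("  OS thread {0}, state {1}", t.Id, t.ThreadState);
        }
    }
}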

The difference between the garage analogy and computer threading is that each car or lawn mower generally contains its own engine. The engine in a computer is the CPU chip, and most client machines contain only one, or, in extremely geeky cases like mine, two of them. Server machines sometimes have more in order to increase their throughput, perhaps four or eight (the largest I’ve ever heard of for Windows is 96), but this number is still small compared to the number of threads that want to run at any given time. A CPU chip can run only one thread at a time, while the others have to wait their turn. The Windows operating system cleverly switches the CPU engine between the threads that want it, naturally consuming some engine power itself in the process.

The operating system transparently swaps the CPU engine between the threads that want to run.

Windows maintains a list of all the threads in the entire computer that are ready, willing, and able to run. Every 10 milliseconds or so (an interval known as the timeslice), a timer interrupt fires and the operating system checks which thread should have the CPU next. Each thread has a priority, and the scheduler picks the highest-priority thread in the computer-wide ready list to run. If several threads share the highest priority level, the scheduler alternates them in a round-robin fashion. If the machine contains more than one CPU, each CPU is assigned a thread from the ready list, working from the highest priority downward. The register values of the currently running thread are saved in memory (“swapped out”), and those of the incoming thread are placed into the CPU (“swapped in”), which then starts running the incoming thread at the point where it was swapped out the last time. A thread doesn’t know when it is swapped in or out. As far as it knows, it’s simply executing to completion at a speed over which it has no control. The first example in this chapter illustrates a multithreaded operation running to completion in the background, competing for CPU time with other threads trying to do the same thing.

Read this whole paragraph.
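To make the scheduler's behavior concrete, here is a minimal C# sketch, not one of this chapter's numbered examples, that starts two counting threads at different priorities and lets them fight over the CPU for two seconds. On a single-CPU machine the higher-priority thread generally counts much further; neither thread ever knows when it was swapped in or out.

using System;
using System.Threading;

class SchedulerDemo
{
    static long slowCount, fastCount;
    static volatile bool stop;

    // Each worker just counts as fast as the scheduler lets it.
    static void CountSlow() { while (!stop) slowCount++; }
    static void CountFast() { while (!stop) fastCount++; }

    static void Main()
    {
        Thread slow = new Thread(new ThreadStart(CountSlow));
        Thread fast = new Thread(new ThreadStart(CountFast));

        slow.Priority = ThreadPriority.BelowNormal;   // lower priority
        fast.Priority = ThreadPriority.AboveNormal;   // higher priority

        slow.Start();
        fast.Start();

        Thread.Sleep(2000);       // let the two threads compete for the CPU
        stop = true;
        slow.Join();
        fast.Join();

        Console.WriteLine("BelowNormal thread counted to {0}", slowCount);
        Console.WriteLine("AboveNormal thread counted to {0}", fastCount);
    }
}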

Not all threads in the system are in the ready state, squabbling with each other over CPU cycles; in fact most of them usually aren’t. One of the most useful features of threads is that they can be made to wait, without consuming CPU cycles, for various external events to happen. For example, a Windows Forms application’s main thread is generally waiting to receive a message from the operating system announcing that the user has clicked the mouse or pressed a key. A thread in this waiting state is said to be blocked. Think of this thread as a car waiting at a stoplight. It’s more efficient than that, however, as the thread doesn’t have its own engine, so it’s not wasting gas sitting there idling. Other threads can use the CPU while the blocked thread waits. When the block clears, the thread goes back into the ready list to compete for CPU time with the rest of the threads.

Threads often block, consuming no CPU cycles, while they wait for external events to happen.
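Here is a minimal C# sketch of a thread sitting at that stoplight; it is not one of the chapter's examples, and the names Worker and go are mine. The worker blocks inside WaitOne, consuming no CPU cycles, until the main thread signals the event and sends it back to the ready list.

using System;
using System.Threading;

class BlockingDemo
{
    // The event starts out unsignaled: the light is red.
    static ManualResetEvent go = new ManualResetEvent(false);

    static void Worker()
    {
        Console.WriteLine("Worker: waiting at the stoplight...");
        go.WaitOne();                         // blocked here, using no CPU
        Console.WriteLine("Worker: light turned green, driving on.");
    }

    static void Main()
    {
        Thread worker = new Thread(new ThreadStart(Worker));
        worker.Start();

        Thread.Sleep(1000);                   // leave the light red for a second
        go.Set();                             // turn the light green
        worker.Join();
    }
}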

Threading appeared in Windows almost nine years ago, a very long time in this business, but multithreaded programs have historically been excruciatingly difficult to write. Different development environments provided different levels of support for writing multithreaded code. As usual, C++ developers had access to all the threading functions in the Windows API, but (again, as usual) at a low level of abstraction that forced them to spend a lot of time on repetitive boilerplate. Visual J++ version 6 tried to abstract away a lot of the mess in an object-oriented way, but it only partially succeeded at a technical level, and you don’t need me to rehash its legal difficulties. Visual Basic 6.0 and earlier versions couldn’t write multithreaded code at all, and the COM components they produced didn’t work well in many multithreaded environments, such as COM+, because their mandatory thread affinity severely limited their throughput (more on this later).

Threading code has historically been difficult to write because of a lack of development tool support.

The .NET Framework provides a great deal of support for programmers who want to write multithreaded code or code that, while not multithreaded itself, needs to run well in a multithreaded environment. Every process contains a pool of threads, which a programmer can use without needing to create and destroy her own. I discuss the thread pool in the first example in this chapter, which is also the simplest example for explaining this whole multithreaded craziness to beginners. The .NET Framework also contains a set of synchronization objects, which we use to regulate the operations of different threads that try to access the same resources. I discuss the operation of synchronization in this chapter’s second example. Finally, the .NET Framework provides support for creating, destroying, prioritizing, and otherwise messing about with threads. I discuss these operations in the third example in this chapter. As with all .NET Framework features, threading support is available to all languages that are fully compliant with the common language runtime.

The .NET Framework provides great threading support for all languages.
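As a quick taste of the thread pool before the first example proper, here is a minimal C# sketch (the chapter's own example is different) that hands a piece of work to the pool instead of creating and destroying a thread by hand.

using System;
using System.Threading;

class PoolDemo
{
    static AutoResetEvent done = new AutoResetEvent(false);

    // The thread pool calls this method on one of its own threads.
    static void DoWork(object state)
    {
        Console.WriteLine("Working on '{0}' on pool thread {1}",
            state, Thread.CurrentThread.GetHashCode());
        done.Set();
    }

    static void Main()
    {
        ThreadPool.QueueUserWorkItem(new WaitCallback(DoWork), "some task");
        done.WaitOne();     // wait for the pool thread to finish before exiting
    }
}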

Because different development environments provided different levels of support for threading, a COM client didn’t know whether an object it was using was capable of operating successfully with different threads. Reciprocally, a COM object didn’t know what threading demands a client might make on it. Writing COM code involved a whole “apartment-free-single-multi-neutral” mess that was Microsoft’s best effort to allow COM clients and objects with different threading capabilities and requirements to interoperate with each other. Fortunately, you don’t need to know about that when you work with your .NET components. Because every .NET development environment has access to the same threading capabilities, that machinery simply isn’t needed. You will, however, still find a few mentions of threading apartments in the .NET documentation. These apply only to the relatively rare case of .NET objects interoperating with COM and not to the interaction of one .NET object with another.

COM had great difficulty reconciling the threading needs and capabilities of components built by different development tools, but .NET doesn’t.
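For the record, here is a minimal C# sketch, relevant only when your .NET code calls into COM, showing the one piece of apartment machinery you are still likely to meet: the STAThread attribute, which a Windows Forms project places on Main for you so that the main thread joins a single-threaded apartment before any COM objects get created. Pure .NET-to-.NET code never needs it.

using System;
using System.Threading;

class ApartmentDemo
{
    [STAThread]   // only matters if this thread goes on to use COM objects
    static void Main()
    {
        // Reports STA because of the attribute above.
        Console.WriteLine("Main thread apartment: {0}",
            Thread.CurrentThread.ApartmentState);
    }
}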

Warning

More than in any other chapter in this book, I would strongly urge anyone who is new to threading to follow the examples in this one in order.



