5.4 User, Kernel, and Hybrid Threading Models

Scheduling is the primary mechanism an OS provides to ensure that applications use host CPU resources appropriately. Threads are the units of scheduling and execution in multithreaded processes. Modern OS platforms provide various models for scheduling threads created by applications. A key difference between the models is the contention scope in which threads compete for system resources, particularly CPU time. There are two different contention scopes:

- Process contention scope, in which threads in the same process compete with each other, but not directly with threads in other processes, for scheduled CPU time.
- System contention scope, in which threads compete directly with other system-scope threads across the entire host, regardless of which process they belong to.
Three thread scheduling models are implemented in commonly available operating systems today:

- The N:1 user-threading model
- The 1:1 kernel-threading model
- The N:M hybrid-threading model
We describe these models below, discuss their trade-offs, and show how they support various contention scopes.

The N:1 user-threading model. Early threading implementations were layered atop the native OS process control mechanisms and handled by libraries in user space. The OS kernel therefore had no knowledge of threads at all. The kernel scheduled the processes, and the libraries managed N threads within each process, as shown in Figure 5.5 (1). This model is therefore referred to as the "N:1" user-threading model, and the threads are called "user-space threads" or simply "user threads." All threads operate in process contention scope in the N:1 model. HP-UX 10.20 and SunOS 4.x are examples of platforms that provide an N:1 user-threading model.

Figure 5.5. The N:1 and 1:1 Threading Models
In the N:1 threading model, the kernel isn't involved in any thread life-cycle events or context switches within the same process. Thread creation, deletion, and context switches can therefore be highly efficient. The two main problems with the N:1 model, ironically, also stem from the kernel's ignorance of threads:

- If one thread invokes a blocking system function, the kernel blocks the entire process, and thus every thread in it, until the call completes.
- Threads in the process cannot run in parallel on multiple CPUs, because the kernel schedules only the single process and knows nothing of the threads inside it.
The 1:1 kernel-threading model. Most modern OS kernels provide direct support for threads. In the "1:1" kernel-threading model, each thread created by an application is handled directly by a kernel thread. The OS kernel schedules each kernel thread onto the system's CPU(s), as shown in Figure 5.5 (2). In the 1:1 model, therefore, all threads operate in system contention scope. HP-UX 11, Linux, and Windows NT/2000 are examples of platforms that provide a 1:1 kernel-threading model. The 1:1 model fixes the two problems with the N:1 model outlined above:

- A thread that invokes a blocking system function blocks only itself; the other threads in the process continue to run.
- Threads in a process can run in parallel on multiple CPUs, since the kernel schedules each of them individually.
Since the OS kernel is involved in thread creation and scheduling, however, thread life-cycle operations can be more costly than in the N:1 model, though generally still cheaper than process life-cycle operations.

The N:M hybrid-threading model. Some operating systems, such as Solaris [EKB+92], offer a combination of the N:1 and 1:1 models, referred to as the "N:M" hybrid-threading model. This model supports a mix of user threads and kernel threads, as shown in Figure 5.6. When an application spawns a thread, it can indicate the contention scope in which the thread should operate (the default on Solaris is process contention scope). The OS threading library creates a user-space thread, but creates a kernel thread only if needed or if the application explicitly requests system contention scope. As in the 1:1 model, the OS kernel schedules kernel threads onto CPUs. As in the N:1 model, however, the OS threading library schedules user-space threads onto so-called "lightweight processes" (LWPs), which themselves map one-to-one onto kernel threads.

Figure 5.6. The N:M Hybrid Threading Model
The astute reader will note that an N:1 problem resurfaces in the N:M model: multiple user-space threads can block when one of them issues a blocking system function. When the OS kernel blocks an LWP, all user threads scheduled onto it by the threads library also block, though threads scheduled onto other LWPs in the process can continue to make progress. The Solaris kernel addresses this problem via the following two-pronged approach based on the concept of scheduler activations [ABLL92]:

- When the kernel blocks an LWP, it notifies the threads library via an upcall, so the library knows that the user threads queued on that LWP can no longer run.
- The library can then activate another LWP and reschedule the runnable user threads onto it, allowing them to continue making progress.
Not all OS platforms allow you to influence how threads are mapped to, and how they allocate, system resources. You should know what your platform(s) allow and how they behave to make the most of what you have to work with. Detailed discussions of OS concurrency mechanisms appear in [Lew95, But97, Ric97, Sol98, Sch94]. As with any powerful, full-featured tool, it's possible to hurt yourself by misusing threads. So, when given a choice between contention scopes, which should you choose? The answer lies in why you're spawning the thread and how independent it must be of other threads in your program:

- If the thread will invoke blocking system functions, such as I/O calls, or must be scheduled independently of its siblings, system contention scope is the safer choice.
- If the thread performs short, cooperative, CPU-bound work whose scheduling can be managed within the process, process contention scope can reduce thread life-cycle and context-switch overhead.
Although multithreading may seem intimidating at first, threads can help to simplify your application designs once you've mastered synchronization patterns [SSRB00] and OS concurrency mechanisms. For example, you can perform synchronous I/O from one or more threads, which can yield more straightforward designs than synchronous or asynchronous event handling patterns, such as Reactor and Proactor, respectively. We discuss OS concurrency mechanisms in Chapter 6 and the ACE threading and synchronization wrapper facades that encapsulate these mechanisms in Chapters 9 and 10.

Logging service. Our logging server implementations in the rest of this book illustrate various ACE concurrency wrapper facades. These examples use threads with system contention scope, that is, 1:1 kernel threads, when the thread's purpose is to perform I/O, such as receiving log records from clients. This design ensures that a blocking call to receive data from a socket doesn't inadvertently block any other thread or the whole process!