12.5 User Threads versus Kernel Threads

The two traditional models of thread control are user-level threads and kernel-level threads. User-level threads, shown in Figure 12.3, usually run on top of an existing operating system. These threads are invisible to the kernel and compete among themselves for the resources allocated to their encapsulating process. The threads are scheduled by a thread runtime system that is part of the process code. Programs with user-level threads usually link to a special library in which each library function is enclosed by a jacket. The jacket function calls the thread runtime system to do thread management before and possibly after calling the jacketed library function.

Figure 12.3. User-level threads are not visible outside their encapsulating process.


Functions such as read or sleep can present a problem for user-level threads because they may cause the process to block. To avoid blocking the entire process on such a call, the user-level thread library jackets each potentially blocking call with a nonblocking version. The thread runtime system tests whether the call would cause the thread to block. If the call would not block, the runtime system performs it immediately. If the call would block, the runtime system places the thread on a list of waiting threads, adds the call to a list of actions to try later, and picks another thread to run. All of this control is invisible to the user and to the operating system.
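
The following sketch suggests how such a jacket might look for read. It assumes a runtime with an internal function runtime_wait_for_fd that parks the calling thread and runs another one until the descriptor is ready; that function is a hypothetical placeholder, not part of any real thread library.

#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

/* Hypothetical runtime internal: record this thread as waiting on fd,
   switch to another runnable user-level thread, and return when fd is ready. */
extern void runtime_wait_for_fd(int fd);

/* Jacketed read: try a nonblocking read; if it would block, suspend only
   this user-level thread rather than the whole process. */
ssize_t jacket_read(int fd, void *buf, size_t nbytes) {
    int flags = fcntl(fd, F_GETFL, 0);
    fcntl(fd, F_SETFL, flags | O_NONBLOCK);      /* make fd nonblocking */

    for ( ; ; ) {
        ssize_t n = read(fd, buf, nbytes);
        if (n >= 0 || (errno != EAGAIN && errno != EWOULDBLOCK))
            return n;                            /* data, end-of-file, or a real error */
        runtime_wait_for_fd(fd);                 /* let other user-level threads run */
    }
}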

User-level threads have low overhead, but they also have some disadvantages. The user thread model, which assumes that the thread runtime system will eventually regain control, can be thwarted by CPU-bound threads. A CPU-bound thread rarely performs library calls and may prevent the thread runtime system from regaining control to schedule other threads. The programmer has to avoid the lockout situation by explicitly forcing CPU-bound threads to yield control at appropriate points. A second problem is that user-level threads can share only processor resources allocated to their encapsulating process. This restriction limits the amount of available parallelism because the threads can run on only one processor at a time. Since one of the prime motivations for using threads is to take advantage of multiprocessor workstations, user-level threads alone are not an acceptable approach.
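
As an illustration, a CPU-bound thread can yield voluntarily at regular intervals, for example with the POSIX sched_yield function or a library-specific yield call; the loop below is only a sketch of the idea.

#include <sched.h>

/* Illustrative CPU-bound loop that yields periodically so the thread
   runtime system gets a chance to schedule other threads. */
void crunch(double *data, long n) {
    long i;
    for (i = 0; i < n; i++) {
        data[i] *= data[i];            /* stand-in for real computation */
        if (i % 10000 == 0)
            sched_yield();             /* voluntarily give up the processor */
    }
}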

With kernel-level threads, the kernel is aware of each thread as a schedulable entity, and threads compete systemwide for processor resources. Figure 12.4 illustrates the visibility of kernel-level threads. The scheduling of kernel-level threads can be almost as expensive as the scheduling of processes themselves, but kernel-level threads can take advantage of multiple processors. Synchronization and data sharing are less expensive for kernel-level threads than for full processes, but kernel-level threads are considerably more expensive to manage than user-level threads.

Figure 12.4. Operating system schedules kernel-level threads as though they were individual processes.


Hybrid thread models combine advantages of both the user-level and kernel-level models by providing two levels of control. Figure 12.5 illustrates a typical hybrid approach. The user writes the program in terms of user-level threads and then specifies how many kernel-schedulable entities are associated with the process. The user-level threads are mapped into the kernel-schedulable entities at runtime to achieve parallelism. The level of control that a user has over the mapping depends on the implementation. In the Sun Solaris thread implementation, for example, the user-level threads are called threads and the kernel-schedulable entities are called lightweight processes. The user can specify that a particular thread be run by a dedicated lightweight process or that a particular group of threads be run by a pool of lightweight processes.

Figure 12.5. Hybrid model has two levels of scheduling, with user-level threads mapped into kernel entities.

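In the Solaris threads API, for example, a program can bind a thread to its own lightweight process by passing the THR_BOUND flag to thr_create and can suggest a size for the pool of lightweight processes with thr_setconcurrency. The sketch below assumes that interface and is only illustrative.

#include <thread.h>        /* Solaris threads API */

void *worker(void *arg) {
    /* ... thread work ... */
    return arg;
}

int main(void) {
    thread_t tid;

    /* Suggest about four lightweight processes for the unbound threads
       of this process. */
    thr_setconcurrency(4);

    /* Create a thread permanently bound to its own lightweight process. */
    if (thr_create(NULL, 0, worker, NULL, THR_BOUND, &tid) != 0)
        return 1;

    thr_join(tid, NULL, NULL);
    return 0;
}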

The POSIX thread scheduling model is a hybrid model that is flexible enough to support both user-level and kernel-level threads in particular implementations of the standard. The model consists of two levels of scheduling: threads and kernel entities. The threads are analogous to user-level threads. The kernel entities are scheduled by the kernel. The thread library decides how many kernel entities it needs and how they will be mapped.

POSIX introduces the idea of a thread-scheduling contention scope, which gives the programmer some control over how kernel entities are mapped to threads. A thread can have a contentionscope attribute of either PTHREAD_SCOPE_PROCESS or PTHREAD_SCOPE_SYSTEM. Threads with the PTHREAD_SCOPE_PROCESS attribute contend for processor resources with the other threads in their process. POSIX does not specify how such a thread contends with threads outside its own process, so PTHREAD_SCOPE_PROCESS threads can be strictly user-level threads or they can be mapped to a pool of kernel entities in some more complicated way.

Threads with the PTHREAD_SCOPE_SYSTEM attribute contend systemwide for processor resources, much like kernel-level threads. POSIX leaves the mapping between PTHREAD_SCOPE_SYSTEM threads and kernel entities up to the implementation, but the obvious mapping is to bind such a thread directly to a kernel entity. A POSIX thread implementation can support PTHREAD_SCOPE_PROCESS, PTHREAD_SCOPE_SYSTEM, or both. You can get the scope with pthread_attr_getscope and set the scope with pthread_attr_setscope, provided that your POSIX implementation supports both the POSIX:THR Thread Extension and the POSIX:TPS Thread Execution Scheduling Extension.
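
A minimal sketch of requesting systemwide contention scope with these attribute functions follows. An implementation that supports only process scope typically fails the pthread_attr_setscope call with ENOTSUP, so the program reports which scope it actually got.

#include <pthread.h>
#include <stdio.h>

void *worker(void *arg) {
    /* ... thread work ... */
    return NULL;
}

int main(void) {
    pthread_attr_t attr;
    pthread_t tid;
    int scope;

    pthread_attr_init(&attr);

    /* Ask for systemwide contention; not every implementation supports it. */
    if (pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM) != 0)
        fprintf(stderr, "PTHREAD_SCOPE_SYSTEM not supported, using default scope\n");

    pthread_attr_getscope(&attr, &scope);
    printf("contention scope: %s\n",
           scope == PTHREAD_SCOPE_SYSTEM ? "system" : "process");

    if (pthread_create(&tid, &attr, worker, NULL) == 0)
        pthread_join(tid, NULL);

    pthread_attr_destroy(&attr);
    return 0;
}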
