The goal of multithreading our applications so far has been to make as much use of the computer's processor as possible. So it would seem that all we need to do is allocate each independent task to a different thread and let the processor make sure that it is always executing commands on one of them. For small systems this is pretty much the case. But as systems grow larger and the number of threads grows, the operating system can spend much of its time allocating locks and arbitrating contention between threads, and little of its time actually executing our program's instructions. To make our applications scale, we'll have to take a bit more control of threads.
In situations where threads are short-lived, for example, it is more efficient to draw on a pool of threads to perform tasks than to create and then subsequently delete an entirely new thread for each task. A task, in this context, could be a single method execution or a sequence of methods. The practice of pre-allocating a collection, or pool, of threads before they are actually needed, and reusing them later in an application, is known as thread pooling.
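As a first taste of the idea, the short sketch below hands a task to the CLR's built-in pool via ThreadPool.QueueUserWorkItem rather than constructing a Thread object for it; the class and method names PoolDemo and DoTask are illustrative choices, not part of the .NET Framework.

```csharp
using System;
using System.Threading;

class PoolDemo
{
    // Signalled by the pooled task so Main can wait for it to finish.
    static ManualResetEvent done = new ManualResetEvent(false);

    static void Main()
    {
        // Queue a short-lived task; the CLR runs it on an existing
        // pool thread instead of creating a brand-new thread for it.
        ThreadPool.QueueUserWorkItem(new WaitCallback(DoTask));

        done.WaitOne();   // block until the pooled task signals completion
    }

    static void DoTask(object state)
    {
        Console.WriteLine("On a pool thread: " +
            Thread.CurrentThread.IsThreadPoolThread);
        done.Set();
    }
}
```

Notice that the code never starts or stops a thread itself; later sections of this chapter examine how the CLR manages the pool behind this call.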
This chapter aims to provide a detailed insight into thread pooling, and covers the following topics:
- What thread pooling is
- The need for thread pooling
- The concept of thread pooling
- The role of the CLR in thread pooling
- Glitches involved in thread pooling and their solutions
- The size of a thread pool
- Exploring the .NET ThreadPool class
- Programming thread pools in C#
As you'll discover, the Common Language Runtime (CLR) of the .NET Framework plays a major role in the thread pooling process.