Some developers don't realize it, but they have been using threading every time they create an application. When a process is launched, the operating system creates a primary thread for that application. Regardless of the type of application, there is no way to create one that contains no threads. This lesson covers the procedure for creating multiple threads within a single process.
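You can see the primary thread directly from code. A minimal sketch (shown here in Java; the .NET threading classes covered in this hour expose the same idea through `Thread.CurrentThread`):

```java
public class PrimaryThreadDemo {
    public static void main(String[] args) {
        // The runtime created this thread for us; we never started it ourselves.
        Thread primary = Thread.currentThread();
        System.out.println("Primary thread: " + primary.getName()); // prints "Primary thread: main"

        // Every running process contains at least this one thread.
        System.out.println("Is alive: " + primary.isAlive()); // prints "Is alive: true"
    }
}
```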
There are many situations in which creating a multithreaded application is advantageous. In the preceding example, the advantage of creating a separate worker thread in addition to the primary thread, which handles the user interface, is that the user interface remains responsive to the user and to the various system messages passed to it. Figure 19.1 shows a single-threaded application performing a processor-intensive operation. The window contains a picture, but because the application is busy with that work, it cannot redraw the picture after another window has obscured it.
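The worker-thread pattern can be sketched briefly. This example (in Java; the hypothetical long-running loop stands in for any processor-intensive job) moves the heavy work onto a second thread so the primary thread stays free:

```java
public class WorkerDemo {
    public static void main(String[] args) throws InterruptedException {
        // Hypothetical processor-intensive job, moved off the primary thread.
        Thread worker = new Thread(() -> {
            long sum = 0;
            for (long i = 0; i < 50_000_000L; i++) {
                sum += i; // busy work standing in for a real computation
            }
            System.out.println("Worker finished: " + sum);
        });
        worker.start();

        // Meanwhile the primary thread is free to respond to the user
        // (here it simply prints a message instead of pumping UI messages).
        System.out.println("Primary thread is still responsive");

        worker.join(); // wait for the worker to finish before exiting
    }
}
```

In a real windowed application the primary thread would return to its message loop here, redrawing the window on demand while the worker grinds away.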
The classic design of a user interface thread plus a worker thread is not the only application type that can take advantage of multithreading. Servers, for example, must connect and communicate with many different clients. If only a single thread ran within the server, each client would have to wait for the previous client to finish before it could communicate with the server. In most servers, a single thread is responsible for listening for client connections. Once a client connection is established, it is handed off to its own thread until the client disconnects. While the client and server are busy communicating, the main thread therefore remains available to listen for more client connections.
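The listen-then-hand-off pattern can be sketched as a small echo server (in Java here; the class name, port number, and echo behavior are illustrative assumptions, not code from this book):

```java
import java.io.*;
import java.net.*;

public class EchoServer {
    // Accept clients forever; each connection is served on its own thread,
    // so this loop stays free to listen for the next client.
    static void serve(ServerSocket listener) throws IOException {
        while (true) {
            Socket client = listener.accept();          // listening thread blocks here
            new Thread(() -> handle(client)).start();   // client gets its own thread
        }
    }

    // Runs on the per-client thread until the client disconnects.
    static void handle(Socket client) {
        try (client;
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(client.getInputStream()));
             PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
            String line;
            while ((line = in.readLine()) != null) {
                out.println(line); // echo each line back to this client
            }
        } catch (IOException ignored) {
            // client disconnected; the per-client thread simply ends
        }
    }

    public static void main(String[] args) throws IOException {
        serve(new ServerSocket(5000)); // hypothetical port
    }
}
```

While one client's thread sits blocked in `readLine`, the accept loop is already waiting for the next connection, which is exactly the responsiveness the single-threaded design lacks.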
Multithreading should not be undertaken, however, without proper design and planning. Every thread, regardless of how it is created and with what priority, shares one thing with all the other threads of the running process: the same address space. In other words, global, static, and instance variables are visible to every thread and can be accessed easily. The problem is that with parallel execution, one thread can be writing data to a variable while another is trying to read it or, worse yet, trying to write to it. This is where synchronization comes into play.
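The hazard is easy to demonstrate with a short sketch (in Java; the .NET classes behave the same way): two threads increment a shared counter without any synchronization, and because `counter++` is a read-modify-write sequence rather than a single atomic step, updates are routinely lost.

```java
public class RaceDemo {
    static int counter = 0; // shared state: both threads read and write it

    public static void main(String[] args) throws InterruptedException {
        Runnable increment = () -> {
            for (int i = 0; i < 100_000; i++) {
                counter++; // read, add one, write back -- not atomic
            }
        };
        Thread a = new Thread(increment);
        Thread b = new Thread(increment);
        a.start();
        b.start();
        a.join();
        b.join();
        // Without synchronization the total is often LESS than 200000,
        // because one thread's write can overwrite the other's.
        System.out.println("Expected 200000, got " + counter);
    }
}
```

Run it a few times and the result will vary from run to run, which is the defining symptom of a race condition.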
Synchronization refers to defensive coding practices that guarantee the safety of shared variables while multiple threads are executing. On Windows, the underlying mechanisms behind synchronization are synchronization objects; the most common are critical sections, mutexes, semaphores, and events. During this hour, you will use some of these synchronization objects through the .NET Framework threading classes.
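As a sketch of the idea (again in Java, where a `synchronized` block plays roughly the role of a critical section; .NET offers the equivalent `lock` statement and `Monitor` class), guarding the shared counter from the previous example makes the result deterministic:

```java
public class SafeCounter {
    private int count = 0;
    private final Object gate = new Object(); // monitor object guarding count

    void increment() {
        synchronized (gate) { // only one thread at a time may enter this block
            count++;
        }
    }

    int value() {
        synchronized (gate) {
            return count;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        SafeCounter c = new SafeCounter();
        Runnable job = () -> {
            for (int i = 0; i < 100_000; i++) c.increment();
        };
        Thread a = new Thread(job);
        Thread b = new Thread(job);
        a.start();
        b.start();
        a.join();
        b.join();
        System.out.println(c.value()); // prints 200000 on every run
    }
}
```

The cost of that guarantee is that threads now take turns through the guarded block, which is why synchronization should protect as little code as possible.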