Most developers are comfortable with the concept of multitasking, or the capability of computers to execute more than one application or process at the same time. However, multithreading may be a more alien term. Many programmers have not had any cause to program in a multithreaded fashion. In fact, for some programming languages (before .NET, that is), there was no way to do multithreaded programming without jumping through some very convoluted programming hoops.
So, what is multithreaded programming? You might want to think of it as multitasking at the program or process level. A program has two options for executing itself. The first option is to run itself in one thread of execution. In this method of execution, the program follows the logic of the program from start to end in a sequential fashion. You might want to think of this method of execution as single threaded. The second option is that the program can break itself into multiple threads of execution or, in other words, split the program into multiple segments (with beginning and end points) and run some of them concurrently (at the same time). This is what is better known as multithreading. It should be noted, though, that the end result of either a single-threaded or a multithreaded program will be the same.
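To make the split concrete, here is a minimal sketch in Python (the ideas are language-neutral, even though this book's focus is .NET). The program sums a list first in a single thread of execution, then again with the work split into two segments that run concurrently; the helper name `partial_sum` and the two-way split are my own choices for illustration. Note that the end result is the same either way:

```python
import threading

data = list(range(1, 101))

# Option 1 -- single threaded: one flow of execution, start to end.
sequential_total = sum(data)

# Option 2 -- multithreaded: break the program into two segments
# (each with its own beginning and end point) and run them concurrently.
results = [0, 0]

def partial_sum(segment, slot):
    results[slot] = sum(segment)

t1 = threading.Thread(target=partial_sum, args=(data[:50], 0))
t2 = threading.Thread(target=partial_sum, args=(data[50:], 1))
t1.start()
t2.start()
t1.join()
t2.join()

threaded_total = results[0] + results[1]
# Both approaches produce the same answer.
```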
Of course, if you have a single-processor machine, true concurrency is not possible, as only one command can be run at a time through the CPU. (With Intel Corporation's new Hyper-Threading Technology, you can execute more than one command at the same time on a single CPU, but that is a topic for another book altogether.) This is an important concept to grasp because many programmers mistakenly think that if they break a computation-bound section of a program into two parts and run them in two threads of execution, then the program will take less time to run. The opposite is actually the case—it will take longer. The reason is that the same amount of code is being run for the program, plus additional time must be added to handle the swapping of the threads' contexts (the CPU's registers, stack, and so on).
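You can observe this for yourself with a small experiment. The sketch below (in Python, purely for illustration; the `spin` helper is my own) times a purely computational loop run once in a single thread and then split across two threads. On a single-processor machine—and in CPython generally, because of its global interpreter lock—the threaded version is typically no faster and often slower: the same work still runs, plus the context-switching overhead. Exact timings will vary by machine, so no specific numbers are claimed here:

```python
import threading
import time

def spin(n):
    # Purely computational work: no I/O, no waiting.
    total = 0
    for i in range(n):
        total += i
    return total

N = 2_000_000

# Single-threaded run.
start = time.perf_counter()
spin(N)
single = time.perf_counter() - start

# Same total work, split across two threads.
start = time.perf_counter()
threads = [threading.Thread(target=spin, args=(N // 2,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
threaded = time.perf_counter() - start

print(f"single: {single:.3f}s  threaded: {threaded:.3f}s")
```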
So for what reason would you use multithreading on a single-processor computer if it takes longer than single threading? The reason is that, when used properly, multithreading can provide better I/O-related response time, as well as better use of the CPU.
Wait a second, didn't I just contradict myself? Well, actually, I didn't.
The key point about proper use of multithreading is the types of commands the threads are executing. Computation-bound threads (i.e., threads that do a lot of calculations) gain very little from multithreading, as they are already working overtime trying to get themselves executed. Multithreading actually slows this type of thread down. I/O threads, on the other hand, gain a lot. This gain is most apparent in two areas: better responsiveness and better CPU utilization.
I'm sure you've all come across a program that seemed to stop or lock up and then suddenly came back to life. The usual reason for this is that the program is executing a computation-bound area of the code. And, because multithreading wasn't being done, there were no CPU cycles provided for user interaction with the computer. By adding multithreading, it's possible to have one thread running the computation-bound area and another handling user interaction. Having an I/O thread allows the user to continue to work while the CPU blasts its way through the computation-bound thread. True, the actual computation-bound thread will take longer to run, but because the user can continue to work, this minute amount of time usually doesn't matter.
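Here is a bare-bones sketch of that pattern in Python (a real program would use a GUI framework's event loop; the `crunch` worker and the tick-counting stand-in for "user interaction" are my own simplifications). The computation-bound work runs in its own thread, while the main thread stays free to respond:

```python
import threading
import time

done = threading.Event()

def crunch():
    # Computation-bound work running in its own thread.
    total = 0
    for i in range(1_000_000):
        total += i
    done.set()  # Signal the interactive thread that we're finished.

worker = threading.Thread(target=crunch)
worker.start()

# Meanwhile, the main thread stays responsive. In a real application this
# loop would be repainting the window and handling keystrokes; here it
# just counts ticks as a stand-in.
ticks = 0
while not done.is_set():
    ticks += 1
    time.sleep(0.001)

worker.join()
```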
I/O threads are notorious for wasting CPU cycles. Humans, printers, hard drives, monitors, and so forth are very slow when compared to a CPU. I/O threads spend a large portion of their time simply waiting, doing nothing. Thus, multithreading allows the CPU to use this wasted time.
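The payoff of overlapping those waits is easy to demonstrate. In the sketch below (Python again; `time.sleep` is my stand-in for waiting on a slow device such as a disk or printer), three simulated I/O operations of 0.2 seconds each run in parallel threads. Because each thread spends its time blocked rather than computing, the three waits overlap and the total elapsed time is roughly 0.2 seconds instead of the 0.6 seconds a single-threaded program would need—the CPU is free to service the other threads while each one waits:

```python
import threading
import time

def fake_io(device, delay, log):
    # time.sleep stands in for waiting on a slow device.
    time.sleep(delay)
    log.append(device)

log = []
start = time.perf_counter()

threads = [threading.Thread(target=fake_io, args=(name, 0.2, log))
           for name in ("disk", "printer", "network")]
for t in threads:
    t.start()
for t in threads:
    t.join()

elapsed = time.perf_counter() - start
print(f"three overlapped 0.2s waits took {elapsed:.2f}s total")
```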