Chapter 29. Multiprocessor Kernels


Early computers, and early UNIX systems, were designed with one main processor. To handle many "simultaneous" jobs, the kernel would rapidly switch back and forth between tasks, or processes, giving each a small slice of time, which created the illusion of simultaneous, parallel activity. This was known as timesharing. A task would be interrupted and stopped only if it needed a resource or data that was not yet available (for example, if it requested input from a tape drive) or if it exceeded its time slice and another task needed to run.

The UNIX kernel was designed to fit this model. A process would run until it exhausted its time slice or until it issued a system request that caused it to be blocked. At that point, the process would "give up" the CPU, and the kernel would switch to another process, resuming it where it had left off.
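A minimal sketch of that run-until-block model appears below. All of the names here (struct proc, runqueue, swtch, maybe_switch) are inventions for this example, not identifiers from any particular UNIX kernel, and the context switch itself is only a stub.

    #include <stddef.h>

    struct proc {
        int          ticks_left;    /* remaining time slice */
        int          blocked;       /* nonzero if waiting on a resource */
        struct proc *next;          /* run queue linkage */
    };

    static struct proc *runqueue = NULL;   /* processes ready to run */

    /* Stand-in for the real context switch: save the registers of
     * 'from', restore those of 'to', and resume 'to' where it left off. */
    static void swtch(struct proc *from, struct proc *to)
    {
        (void)from;
        (void)to;
    }

    /* Called from the clock tick and from blocking system calls.
     * The current process keeps the CPU unless it has blocked or
     * used up its time slice and another process is ready to run. */
    static void maybe_switch(struct proc *current)
    {
        if ((current->blocked || current->ticks_left <= 0) &&
            runqueue != NULL) {
            struct proc *next = runqueue;
            runqueue = next->next;
            swtch(current, next);
        }
    }

On a single processor this scheme needs no locking at all: only one stream of execution ever touches the run queue, so the kernel's data structures are safe by construction.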

One obvious way to push more jobs through a system in a shorter period of time is to add another processor, so that more than one user task can be performed literally simultaneously. This does cause some problems, most notably in the area of synchronization and protection of data. For example, if processor number 1 is busy scanning the list of free pages to secure some additional memory for its job, the second processor had better not be taking pages off the free list at the same time, or there is liable to be a conflict: both processors may believe they got the same page off the free list, and each will try to use it for a different purpose. Systems can also end up in "deadlock" situations, where each processor holds a resource that another processor needs, while itself waiting for a resource that it can never get.
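The sketch below illustrates both the free-list race and the usual cure. It is an assumption-laden toy, not code from any real kernel: the names (struct page, freelist, freelist_lock, the alloc_page_* functions) are invented for this example, and the atomic test-and-set is written with GCC's __sync builtins rather than whatever primitive a given port would actually use.

    #include <stddef.h>

    struct page { struct page *next; };

    static struct page *freelist;          /* head of the free page list */

    /* UNSAFE on a multiprocessor: two CPUs can read the same head
     * pointer before either updates it, so both "allocate" the same
     * page and one of the two list updates is silently lost. */
    struct page *alloc_page_unsafe(void)
    {
        struct page *p = freelist;         /* CPU 1 and CPU 2 both see p */
        if (p != NULL)
            freelist = p->next;            /* both store; one store wins */
        return p;
    }

    /* The multiprocessor fix: serialize access so that only one CPU
     * manipulates the list at a time. */
    static volatile int freelist_lock;     /* 0 = free, 1 = held */

    struct page *alloc_page_safe(void)
    {
        struct page *p;

        while (__sync_lock_test_and_set(&freelist_lock, 1))
            ;                              /* spin until the lock is free */
        p = freelist;
        if (p != NULL)
            freelist = p->next;
        __sync_lock_release(&freelist_lock);
        return p;
    }

Locks bring their own hazard, however: if CPU 1 holds lock A and waits for lock B while CPU 2 holds B and waits for A, neither can ever proceed. That is the deadlock described above, and the classic defense is to require every processor to acquire locks in the same fixed order.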


