While a piece of kernel-mode code is running at an elevated IRQL, nothing executes (on the same CPU) at that or any lower IRQL. Of course, if too much code executes at too high an IRQL, overall system performance degrades. Worse, time-critical event handling may be deferred, with potentially disastrous results. To avoid these problems, kernel-mode code must be designed to perform as much work as possible at the lowest possible IRQL. One important part of this strategy is the Deferred Procedure Call (DPC).

Operation of a DPC

The DPC architecture allows a task to be triggered, but not executed, from a high IRQL. This deferral of execution is critical when servicing hardware interrupts in a driver, because there is no reason to block lower-IRQL code from executing if a given task can be deferred. Figure 3.1 illustrates the operation of a DPC. Subsequent chapters present more specific information about the use of DPCs in a driver, but an overview follows.

Figure 3.1. Deferred Procedure Call flow.
Device drivers typically schedule cleanup work with a DPC. This reduces the amount of time the driver spends at its DIRQL and improves overall system throughput.

Behavior of DPCs

For the most part, working with DPCs is easy because Windows 2000 includes library routines that hide most details of the process. Nevertheless, two frustrating aspects of DPCs should be highlighted.

First, Windows 2000 imposes a restriction that only one instance of a given DPC object may be present on the system DPC queue at a time. An attempt to queue a DPC object that is already in the queue is rejected. Consequently, only one call to the DPC routine occurs, even though the driver expected two. This can happen if two back-to-back device interrupts arrive before the initial DPC executes: the first DPC is still in the queue when the driver services the second interrupt. The driver must handle this possibility with a careful design. It might maintain a count of DPC requests, or implement a separate (side) queue of pending work. When the DPC finally executes, it can examine the count or private queue to determine exactly what work to perform.

Second, there is an issue of synchronization on multiprocessor machines. One processor could service the interrupt and schedule the DPC; before it dismisses the interrupt, another processor could begin executing the queued DPC. The interrupt service code would then run simultaneously with the DPC code. For this reason, DPC routines must synchronize access to any resources shared with the driver's interrupt service routine. The DPC architecture does prevent any two DPCs from executing simultaneously, even on a multiprocessor machine, so resources shared only among DPC routines need no such synchronization.