When Synchronization Is Required


Synchronization ensures that only one thread at a time can access shared data and prevents the preemption or interruption of a driver thread during critical operations. Synchronization is required for the following:

  • Any shared data that multiple threads might access, unless all threads access it in a read-only manner.

  • Any operation that involves several actions that must be performed in an uninterruptible, atomic sequence because another thread might use or change data or resources that the operation requires.

On a preemptible, multitasking system such as Windows, one thread can preempt another at any given time. Therefore, these synchronization requirements apply to single-processor systems and to multiprocessor systems.

Every driver must be designed to manage concurrent operations. Consider these common examples:

  • Multiple I/O requests. Any device, even a device that is opened exclusively, can have multiple, concurrently active I/O requests. A process can issue overlapped requests, or multiple threads can issue requests.

  • Interrupts, DPCs, and other asynchronous callbacks. Some driver operations can result in asynchronous callbacks. Any of these callbacks can run concurrently with other code paths in the driver.

The Microsoft WDF Team on Synchronization

People underestimate concurrency. During development, be conservative: lock everything by default, especially in the non-I/O paths where performance isn't critical. It's stupid to optimize Plug and Play and power paths. After you get the driver working, you can optimize for performance.

  • Nar Ganapathy, Windows Driver Foundation Team, Microsoft

The devil is in the details.

  • Doron Holan, Windows Driver Foundation Team, Microsoft

Always assume that the worst thing will happen.

  • Peter Wieland, Windows Driver Foundation Team, Microsoft


Synchronized Access to Shared Data: An Example

To understand why synchronization is important, consider a simple situation in which two threads attempt to increment the same global variable, MyVar. This operation might require the following processor instructions (a short C fragment after the list shows how one statement maps to these steps):

  1. Read MyVar into a register.

  2. Add 1 to the value in the register.

  3. Write the value of the register into MyVar.
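
In C, the increment is a single statement, but it is not a single processor operation. The fragment below is only an illustration of that mapping; it is not code from the book.

    long MyVar = 0;        /* the shared global variable from the example */

    /* One statement in C, but the compiler typically emits the three
       steps listed above: read MyVar into a register, add 1 to the
       register, and write the register back to MyVar. */
    MyVar = MyVar + 1;     /* same effect as MyVar++ */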

If the two threads run simultaneously on a multiprocessor system with no locks, interlocked operations, or other synchronization, a race condition can cause the result of one of the updates to be lost. For example, assume that the initial value of MyVar is 0 and that the operations proceed in the order shown in Figure 10-1.

Thread A on Processor 1                      | R1 | MyVar | R2 | Thread B on Processor 2
---------------------------------------------+----+-------+----+--------------------------------------------
Read MyVar into a register on Processor 1.   | 0  | 0     |    |
                                             | 0  | 0     | 0  | Read MyVar into a register on Processor 2.
                                             | 0  | 0     | 1  | Add 1 to the Processor 2 register.
                                             | 0  | 1     |    | Write the Processor 2 register into MyVar.
Add 1 to the Processor 1 register.           | 1  | 1     |    |
Write the Processor 1 register into MyVar.   |    | 1     |    |

Figure 10-1: Threads without locks on a multiprocessor system

After both threads have incremented MyVar, the value of MyVar should be 2. However, the result of the Thread B operation is lost when Thread A increments the original value of MyVar and then overwrites the variable, so the resulting value in MyVar is 1. In this situation, two threads manipulate the same data in a race condition.

The same race condition can also occur on a single-processor system if Thread B preempts Thread A. When the system preempts a thread, the operating system saves the values of the processor's registers as part of the thread's context and restores them when the thread runs again.

The example in Figure 10-2 shows how a race condition can result from thread preemption. As in the previous example, assume that the initial value of MyVar is 0.

Thread A                           | R1 | MyVar | R2 | Thread B
-----------------------------------+----+-------+----+-----------------------------------
Read MyVar into a register.        | 0  | 0     |    |
(Preempt Thread A and run Thread B.)
                                   | 0  | 0     | 0  | Read MyVar into a register.
                                   | 0  | 0     | 1  | Add 1 to the register.
                                   | 0  | 1     |    | Write the register into MyVar.
(Preempt Thread B and run Thread A.)
Add 1 to the register.             | 1  | 1     |    |
Write the register into MyVar.     |    | 1     |    |

Figure 10-2: Threads without locks on a single-processor system

As in the multiprocessor example, the resulting value of MyVar is 1 instead of 2.
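
The race is easy to reproduce in user mode. The following test program is a hypothetical sketch for illustration only; it is not driver code and not from the book. It starts two threads that each increment the shared variable one million times with no synchronization, so the final value is usually well below the expected 2,000,000.

    #include <windows.h>
    #include <stdio.h>

    #define LOOPS 1000000

    static volatile LONG MyVar = 0;      /* shared data, no lock */

    /* Each thread repeatedly performs the read / add / write sequence. */
    static DWORD WINAPI IncrementThread(LPVOID Context)
    {
        LONG i;
        UNREFERENCED_PARAMETER(Context);
        for (i = 0; i < LOOPS; i++) {
            MyVar = MyVar + 1;           /* not atomic: updates can be lost */
        }
        return 0;
    }

    int main(void)
    {
        HANDLE threads[2];

        threads[0] = CreateThread(NULL, 0, IncrementThread, NULL, 0, NULL);
        threads[1] = CreateThread(NULL, 0, IncrementThread, NULL, 0, NULL);
        WaitForMultipleObjects(2, threads, TRUE, INFINITE);

        printf("MyVar = %ld\n", MyVar);  /* expected 2000000; usually less */

        CloseHandle(threads[0]);
        CloseHandle(threads[1]);
        return 0;
    }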

In both examples, using a lock to synchronize access to the variable resolves the problem caused by the race condition. The lock ensures that Thread A has finished its update before Thread B accesses the variable, as shown in Figure 10-3.

Thread A                         | R1 | MyVar | R2 | Thread B
---------------------------------+----+-------+----+---------------------------------
Try to acquire the lock.         |    | 0     |    |
Acquire the lock.                |    | 0     |    | Try to acquire the lock.
Read MyVar into a register.      | 0  | 0     |    | wait
Add 1 to the register.           | 1  | 0     |    | wait
Write the register into MyVar.   |    | 1     |    | wait
Release the lock.                |    | 1     |    | Acquire the lock.
                                 |    | 1     | 1  | Read MyVar into a register.
                                 |    | 1     | 2  | Add 1 to the register.
                                 |    | 2     |    | Write the register into MyVar.
                                 |    | 2     |    | Release the lock.

Figure 10-3: Threads with a lock on any system

The lock ensures that one thread's read and write operations are complete before another thread can access the variable. With locks in place, the final value of MyVar is 2 after these two code sequences complete, which is the correct and intended result of the operation.
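
The same fix can be demonstrated in user mode with the earlier hypothetical sketch. The fragment below is again an illustration, not code from the book; it replaces the shared-variable declaration and thread routine from that sketch with a version that holds a critical section around the increment. It assumes that main calls InitializeCriticalSection(&MyVarLock) before creating the threads.

    #include <windows.h>

    static CRITICAL_SECTION MyVarLock;   /* the lock that protects MyVar */
    static LONG MyVar = 0;

    /* With the lock held, the read / add / write sequence completes as a
       unit before the other thread can touch MyVar. */
    static DWORD WINAPI IncrementThread(LPVOID Context)
    {
        LONG i;
        UNREFERENCED_PARAMETER(Context);
        for (i = 0; i < 1000000; i++) {
            EnterCriticalSection(&MyVarLock);    /* acquire the lock */
            MyVar = MyVar + 1;
            LeaveCriticalSection(&MyVarLock);    /* release the lock */
        }
        return 0;
    }

For a counter this simple, an interlocked operation such as InterlockedIncrement(&MyVar) performs the entire read-modify-write atomically without an explicit lock; the lock-based version simply mirrors the sequence in Figure 10-3.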

Although simplistic, this example illustrates a basic problem that every driver must be designed to handle. On a single-processor system, a thread can be preempted or interrupted by another thread that alters the same data. On multiprocessor systems, two or more threads that are running on different processors can also attempt to change the same data at the same time.

Synchronization Requirements for WDF Drivers

Unlike many applications, drivers do not run as a single, linear sequence of operations. Driver functions are designed to be reentrant, and drivers often service multiple I/O requests from multiple applications at the same time. The following are just a few places where synchronization might be required in a driver:

  • To guarantee consistent results when reading and writing data structures that multiple driver functions share.

  • To comply with device limits on the number of simultaneous operations.

  • To ensure atomic operations when reading and writing device registers.

  • To manage race conditions when completing and canceling I/O requests.

  • To manage race conditions when the device is removed or the driver is unloaded.

  • To ensure that an operation such as bus enumeration is not reentrant.

Different situations call for different techniques. The best technique to use in a particular situation depends on the type of data that your driver accesses, the type of access that your driver requires, the other components with which it shares access to the data, and-for a kernel-mode driver-the IRQL at which the driver accesses the data. Every driver's synchronization requirements are unique.
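
As a concrete illustration of the first item in the list above, a KMDF driver might guard the shared fields of its device context with a framework spin lock. The following sketch is hypothetical: the context layout and function names are invented for this example, but WdfSpinLockCreate, WdfSpinLockAcquire, and WdfSpinLockRelease are the framework's spin-lock calls, and WdfSpinLockAcquire raises the IRQL to DISPATCH_LEVEL while the lock is held.

    #include <ntddk.h>
    #include <wdf.h>

    /* Hypothetical device context: the spin lock protects the fields that
       I/O callbacks and the DPC routine both read and write. */
    typedef struct _DEVICE_CONTEXT {
        WDFSPINLOCK  StateLock;        /* guards the fields below */
        ULONG        OutstandingIo;
        BOOLEAN      DeviceReady;
    } DEVICE_CONTEXT, *PDEVICE_CONTEXT;

    /* Called once while initializing the device, for example from the
       driver's EvtDeviceAdd callback after the context is allocated. */
    NTSTATUS
    InitDeviceContextLock(
        PDEVICE_CONTEXT DevContext
        )
    {
        /* No attributes are supplied, so the framework uses its default
           parent object for the lock. */
        return WdfSpinLockCreate(WDF_NO_OBJECT_ATTRIBUTES,
                                 &DevContext->StateLock);
    }

    /* Every code path that touches the shared fields acquires the lock
       first, whether it runs in an I/O callback or in a DPC. */
    VOID
    MarkIoStarted(
        PDEVICE_CONTEXT DevContext
        )
    {
        WdfSpinLockAcquire(DevContext->StateLock);
        DevContext->OutstandingIo++;
        WdfSpinLockRelease(DevContext->StateLock);
    }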

 Note  This chapter uses two terms in discussing how to manage concurrent access to shared data: synchronization and serialization. Synchronization is used as a general term to refer to the management of operations that share data or resources. Serialization is used specifically to refer to the concurrency of items of a particular class, such as I/O requests, and to the concurrent execution of callback functions. Serialized callbacks do not run concurrently. If two callbacks are serialized, the framework calls the second callback only after the first callback has returned.



