Guidelines for Driver Testing


In many ways, driver testing is like all software testing: develop test cases that exercise boundary and stress conditions and measure the results. At a more practical level, however, driver testing requires innovation, real-time skills, hardware knowledge, and above all, patience.

A Generalized Approach to Testing Drivers

No complex body of code can ever be bug-free. Everyone is familiar with the phenomenon of a bug fix introducing new bugs. Bugs cannot be stopped; they can only be contained. This is especially true when software interacts with other vendors' software and hardware. Further, because software design occurs in layers and components, the actual mix of software and hardware versions may never have been tested as a system. This classic problem appears time and again with DLLs: vendors test their code with one version of the DLLs, but by the time the product is deployed en masse, the DLL versions on end users' systems have changed.

Therefore, every test plan must be reasonable, searching for the knee in the curve beyond which the testing effort yields diminishing returns. The real point is that test design is every bit as challenging as software design.

WHEN TO TEST

Experience shows that incremental testing of software components as they are developed is far more effective than waiting until the entire system is constructed. Although incremental testing requires a large number of small tests, the ease of bug isolation makes the technique worthwhile. Additionally, predicting the ship date of progressively tested software is more reliable than predicting the ship date of code that has never been tested.

The small test programs developed for this strategy also form the basis of a more formal regression test. As future changes are made to the code base, the small tests help ensure that new bugs are not introduced.
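
As a sketch of what one of these small tests might look like, the user-mode program below opens a driver's device object and issues a single IOCTL, reporting pass or fail. The symbolic link name \\.\MyDevice and the control code IOCTL_MYDEV_GET_VERSION are hypothetical; substitute whatever names and codes your driver actually exports.

    #include <windows.h>
    #include <winioctl.h>
    #include <stdio.h>

    /* Hypothetical control code; match it to your driver's header. */
    #define IOCTL_MYDEV_GET_VERSION \
        CTL_CODE(FILE_DEVICE_UNKNOWN, 0x800, METHOD_BUFFERED, FILE_ANY_ACCESS)

    int main(void)
    {
        HANDLE hDevice;
        DWORD version = 0;
        DWORD bytesReturned = 0;

        /* Open the device through its symbolic link (name is hypothetical). */
        hDevice = CreateFileA("\\\\.\\MyDevice",
                              GENERIC_READ | GENERIC_WRITE,
                              0, NULL, OPEN_EXISTING, 0, NULL);
        if (hDevice == INVALID_HANDLE_VALUE) {
            printf("FAIL: CreateFile, error %lu\n", GetLastError());
            return 1;
        }

        /* Issue one request and check the single expected result. */
        if (!DeviceIoControl(hDevice, IOCTL_MYDEV_GET_VERSION,
                             NULL, 0,
                             &version, sizeof(version),
                             &bytesReturned, NULL)) {
            printf("FAIL: DeviceIoControl, error %lu\n", GetLastError());
            CloseHandle(hDevice);
            return 1;
        }

        printf("PASS: driver reports version %lu\n", version);
        CloseHandle(hDevice);
        return 0;
    }

Because each such test is a tiny standalone program, a failure points immediately at the code most recently added, and the same program can be rerun unchanged in later regression passes.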

Yet another advantage of incremental testing throughout the driver development phase is that hardware design flaws are identified early. This is especially important for new hardware still under design. Nothing kills a schedule faster than discovering, late in the development phase, that a custom ASIC needs another spin.

WHAT TO TEST

Generally, driver tests can be categorized as follows:

  • Hardware tests

    verify the operation of the hardware. These tests are essential when the device and driver are being developed in parallel.

  • Normal response tests

    validate the complete and accurate functionality of the driver. Does the driver respond to each command as promised?

  • Error response tests

    check for appropriate action when a bad stimulus is applied to the driver. For example, if the device reports an error, does the driver respond by reporting and logging the error? The error stimulus can also be bad data supplied by a user request.

  • Boundary tests

    exercise the published limits of the driver or device. For example, if there is a maximum transfer size, does the driver respond appropriately when presented with one byte more? Speed boundaries also belong in this category. (A boundary-test sketch follows this list.)

  • Stress tests

    subject the driver and device to high levels of sustained activity. The amount of stimulus is ideally just beyond what will be encountered in the real world. Within this category, different subcategories of stress can be applied. For example, limited CPU availability, limited memory, and heavy I/O activity are all dimensions along which stress can be applied.
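
As promised above, here is a minimal boundary-test sketch. It writes exactly the driver's published maximum transfer size (expected to succeed) and then one byte more (expected to fail cleanly rather than crash or corrupt data). The device name and the 64KB MAX_TRANSFER value are assumptions; use the limits your driver actually documents.

    #include <windows.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define MAX_TRANSFER (64 * 1024)   /* driver's published limit (assumed) */

    int main(void)
    {
        HANDLE hDevice;
        BYTE *buffer;
        DWORD written = 0;

        hDevice = CreateFileA("\\\\.\\MyDevice",   /* hypothetical link name */
                              GENERIC_WRITE, 0, NULL,
                              OPEN_EXISTING, 0, NULL);
        if (hDevice == INVALID_HANDLE_VALUE) {
            printf("FAIL: cannot open device, error %lu\n", GetLastError());
            return 1;
        }

        buffer = (BYTE *)malloc(MAX_TRANSFER + 1);
        if (buffer == NULL) {
            CloseHandle(hDevice);
            return 1;
        }
        memset(buffer, 0xA5, MAX_TRANSFER + 1);    /* recognizable pattern */

        /* Exactly at the limit: expected to succeed. */
        if (!WriteFile(hDevice, buffer, MAX_TRANSFER, &written, NULL))
            printf("FAIL: write at the limit rejected, error %lu\n",
                   GetLastError());
        else
            printf("PASS: write at the limit accepted (%lu bytes)\n", written);

        /* One byte past the limit: expected to fail cleanly. */
        if (WriteFile(hDevice, buffer, MAX_TRANSFER + 1, &written, NULL))
            printf("FAIL: oversized write was accepted\n");
        else
            printf("PASS: oversized write rejected, error %lu\n",
                   GetLastError());

        free(buffer);
        CloseHandle(hDevice);
        return 0;
    }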

HOW TO DEVELOP THE TESTS

For optimum scheduling and to ensure a dedicated effort, a separate test design team should be established. However, it is often difficult enough to staff a driver development team, let alone find specialists in driver testing. As mentioned, the skill set required for the testing effort is every bit as rare as the driver development skill set. Few organizations can realistically afford the luxury of separate development and test teams.

Thus, the driver author must often write the incremental tests in parallel with the development code. One advantage of this single-author approach is that the author implicitly knows the boundary conditions of the code just developed. Tests that exercise arbitrary software limits are therefore straightforward to write.

Regardless, a good discipline must be established to ensure that the scheduling process allocates sufficient time to both the development and testing efforts. Reducing test time to enhance a schedule is a "fool's gold" approach to any development effort.

HOW TO PERFORM THE TESTS

The test procedure should be as automated as possible. Besides eliminating boredom and the opportunity for skipped tests or operator errors, an automated test script ensures that if (when) an error occurs, the chance of reproducing it is high.

Also, after each round of bug fixes is applied to code, the entire suite of incremental tests should be rerun. This is called regression testing and it ensures that one bug fix doesn't introduce others.
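
One common way to automate such a regression run is a small table-driven harness that walks a list of test functions and logs each outcome; a sketch appears below. The three test functions are placeholders for the real incremental tests accumulated during development.

    #include <stdio.h>

    typedef int (*TESTFN)(void);       /* returns 0 on pass, nonzero on fail */

    /* Placeholder tests - substitute the real incremental tests here. */
    static int TestOpenClose(void)  { return 0; }
    static int TestNormalIo(void)   { return 0; }
    static int TestBoundaryIo(void) { return 0; }

    static const struct {
        const char *name;
        TESTFN      fn;
    } Tests[] = {
        { "open/close",   TestOpenClose },
        { "normal I/O",   TestNormalIo },
        { "boundary I/O", TestBoundaryIo },
    };

    #define NUM_TESTS (sizeof(Tests) / sizeof(Tests[0]))

    int main(void)
    {
        unsigned int i;
        unsigned int failures = 0;

        for (i = 0; i < NUM_TESTS; i++) {
            int rc = Tests[i].fn();
            printf("%-12s : %s\n", Tests[i].name, rc ? "FAIL" : "PASS");
            if (rc != 0)
                failures++;
        }

        printf("%u of %u tests failed\n", failures, (unsigned int)NUM_TESTS);
        return failures ? 1 : 0;       /* nonzero exit flags a regression */
    }

Adding a new test is then a one-line change to the table, and the nonzero exit code lets a build script detect a regression automatically.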

All test runs should be logged, and it is a good idea to keep statistics on the number of bugs found versus the number of lines of development code added. Beyond its simple value as a management metric, this record provides hard evidence of techniques that yield diminishing returns. For example, is it really productive to have developers work 14-hour days to "meet" the schedule?

WHO SHOULD PERFORM THE TESTS

The code author often has a vested interest in keeping some bugs hidden. Perhaps bugs are suspected but the developer is not yet ready to confirm their presence. Perhaps a questionable design must be defended. Perhaps simple ego prevents honest observation of a result. For all of these reasons, the test author is the better choice to run regression tests. A code author simply cannot be expected to be objective about his or her own code and design.

Of course, if the team does not have separate development and test personnel, an alternative must be accepted. When more than one developer makes up the team, the operating procedure can be to have different members test code written by other members.

The Microsoft Hardware Compatibility Tests

Microsoft provides a hardware compatibility test suite (or simply, the HCTs) that is the official test for a hardware platform's ability to run Windows 2000. The suite contains a number of different components, including

  • General system tests that exercise the CPU, the onboard serial and parallel ports, the keyboard interface, and the HAL.

  • Tests that exercise drivers for specific kinds of hardware, such as video adapters, multimedia devices, network interface cards, tape drives, SCSI devices, and so on.

  • General stress tests that put unusually high loads on system resources and I/O bandwidth.

  • A GUI-based test manager that automates test execution and data collection.

Even if the class of hardware for the driver being developed is not covered by the HCTs, the suite can still serve as a tool to place system-level stress on custom driver tests.

The HCT suite is shipped as a separate disk within the DDK. It should be installed on the target machine, not on the development machine. A complete set of documentation is included on the HCTs CD.


