In many ways, driver testing is like all software testing: develop test cases that exercise boundary and stress conditions and measure the results. At a more practical level, however, driver testing requires innovation, real-time skills, hardware knowledge, and above all, patience.

A Generalized Approach to Testing Drivers

No complex body of code can ever be bug-free. Everyone is familiar with the phenomenon that fixing bugs introduces new bugs. Bugs cannot be stopped; they can only be contained. This is especially true when software interacts with other vendors' software and hardware. Further, because software design occurs in layers and components, the actual mix of software and hardware versions may never have been tested as a system. This classic problem appears time and again with DLLs: vendors test their code with one version of the DLLs, but by the time the product is deployed en masse, the DLL versions on end users' systems have changed. Therefore, every test plan must be reasonable, searching for the knee in the curve beyond which the testing effort yields diminishing returns. The real point is that test design is every bit as challenging as software design.

WHEN TO TEST

Experience shows that incremental testing of software components as they are developed is far more effective than waiting until the entire system is constructed. Although incremental testing requires a large number of small tests, the ease of bug isolation makes the technique worthwhile. Additionally, predicting the ship date of progressively tested software is more reliable than predicting the ship date of code that has never been tested. The small test programs developed for this strategy also form the basis of a more formal regression test. As future changes are made to the code base, the small tests help ensure that new bugs are not introduced. Yet another advantage of incremental testing throughout the driver development phase is that hardware design flaws are identified early.
This is especially important for new hardware under design. Nothing kills a schedule faster than discovering late in the development phase that a custom ASIC needs another spin.

WHAT TO TEST

Generally, driver tests can be categorized as follows:
HOW TO DEVELOP THE TESTS

For optimum scheduling and to ensure a dedicated effort, a separate test design team should be established. However, it is often difficult enough to staff a driver development team, let alone find specialists in driver testing. As mentioned, the skill set required for the testing effort is every bit as rare as the one required for driver development. Few organizations can realistically afford the luxury of separate development and test teams. Thus, the driver author must often write the incremental tests in parallel with the development code. One advantage of this singleton approach is that the author implicitly knows the boundary conditions of the code just developed, so tests that exercise arbitrary software limits are easy to identify. Regardless, a good discipline must be established to ensure that the scheduling process allocates sufficient time to both the development and testing efforts. Reducing test time to enhance a schedule is a "fool's gold" approach to any development effort.

HOW TO PERFORM THE TESTS

The test procedure should be as automated as possible. Besides eliminating boredom and the opportunity for missed tests or errors, an automated test script ensures that if (when) an error occurs, the chance of reproducing it is high. Also, after each round of bug fixes is applied to the code, the entire suite of incremental tests should be rerun. This is called regression testing, and it ensures that one bug fix doesn't introduce other bugs. All test runs should be logged, and it is a good idea to keep statistics on the number of bugs found versus the number of lines of development code added. Beyond its simple value as a management metric, this provides hard evidence of techniques that yield diminishing returns. For example, is it really productive to have developers work 14-hour days to "meet" the schedule?

WHO SHOULD PERFORM THE TESTS

The code author often has a vested interest in keeping some bugs hidden.
Perhaps bugs are suspected but the developer is not yet ready to confirm their presence. Perhaps a questionable design must be defended. Perhaps simple ego prevents honest observation of a result. For all of these reasons, the test author is the better choice to run the regression tests. A code author simply cannot be expected to be objective about his or her own code and design. Of course, if the team does not have separate development and test personnel, an alternative must be accepted. When more than one developer makes up the team, the operating procedure can be to have members test code written by other members.

The Microsoft Hardware Compatibility Tests

Microsoft provides a hardware compatibility test suite (or simply, the HCTs) that is the official test of a hardware platform's ability to run Windows 2000. The suite contains a number of different components, including
Even if the class of hardware for the driver being developed is not covered by the HCTs, the suite can still serve as a tool to place system-level stress on custom driver tests. The HCT suite is shipped as a separate disk within the DDK. It should be installed on the target machine, not on the development machine. A complete set of documentation is included on the HCTs CD.