At each point in the system's development, you should create a rough draft of the tests. [*] The test outlines can cover the basic use cases, as well as any misuse cases that might come up in the discussion. A misuse case documents how a user might unintentionally (or intentionally, with malice aforethought) interact with the system.
Sam and I developed a first cut at the use cases, so now is a good time to examine some acceptance tests. Brainstorming the tests can bring up new issues that the client has not made part of the requirements. On the other hand, if you cannot imagine a test scenario that can determine whether a requirement has been fulfilled, it is time to reexamine that requirement. Sam, Tim, and I agreed on a preliminary list of acceptance tests that the system should pass before it is installed for production:

Use case: Checkout_a_CDDisc
Use case: Checkin_a_CDDisc
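The two use cases above, together with their misuse cases, can be sketched as executable acceptance tests. This is only an illustration: the `Store` facade, its method names, and the error strings are assumptions, not the book's actual interface.

```python
# Hypothetical store facade for sketching acceptance tests of the
# Checkout_a_CDDisc and Checkin_a_CDDisc use cases.
class Store:
    def __init__(self):
        self.out = set()  # IDs of discs currently checked out

    def checkout(self, disc_id):
        if disc_id in self.out:
            return "error: already checked out"  # misuse case
        self.out.add(disc_id)
        return "ok"

    def checkin(self, disc_id):
        if disc_id not in self.out:
            return "error: not checked out"      # misuse case
        self.out.discard(disc_id)
        return "ok"

store = Store()
# Use case: Checkout_a_CDDisc
assert store.checkout("CD-1") == "ok"
# Misuse case: checking out the same disc twice
assert store.checkout("CD-1") == "error: already checked out"
# Use case: Checkin_a_CDDisc
assert store.checkin("CD-1") == "ok"
# Misuse case: checking in a disc that is not out
assert store.checkin("CD-1") == "error: not checked out"
```

Note that each misuse case asserts a specific error response rather than success, which is exactly what makes it a testable requirement.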
Often it is difficult to specify clear-cut tests for requirements classified as the "ilities" described in Chapter 2. Requirements such as "easy to use," "quick to learn," or "maintainable" can be difficult to measure, so this guideline is restricted to functionality tests.
4.5.1. Fractals Are Everywhere

A fractal is a geometric pattern that is repeated at ever-smaller scales. Likewise, the same software design pattern can appear at both a large scale and a small scale. For example, the overall framework for a system is input-process-output. In the early 1960s, IBM developed HIPO charts; HIPO stands for Hierarchical-Input-Process-Output. Each chart depicted a functional breakdown of responsibilities, showing the input, process, and output for a given level of detail. From that level, you derived lower-level charts that gave more detail for the input/process/output sequence.

This same fractal principle applies to both use cases and tests. People normally think of use cases in terms of how the user interacts with the system. Although use cases generally are associated with a system as a whole, the technique can also be applied to interfaces and classes within the system. Use cases for a system define steps within a process. When the technique is applied to classes or interfaces, it yields a series of method calls. To distinguish between the two, I will use the term work cases, referring to the work that the implementation has to perform.

The work cases for the interfaces and classes are derived from the external interface's use cases. If you cannot come up with work cases for a class, perhaps the class has no purpose. If you do not know how you are going to use a class, it is hard to design it and test it.

For Sam's system, there is an external use case for checking in a CDDisc. From this external use case is derived a Check_in work case for CDDisc. Many of the misuse cases listed earlier also represent misuse cases for the CDDisc class. An example of a work case follows. It appears similar to the use case from which it is derived, since the use case involves mostly the CDDisc class:

Work case: Checkin_a_CDDisc for CDDisc
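The work case above can be expressed directly as a class-level test. The following sketch assumes a minimal `CDDisc` with `check_out` and `check_in` methods; the class shape and method names are invented for illustration, not taken from the book's implementation.

```python
class CDDisc:
    """Minimal hypothetical CDDisc with check-in/check-out state."""

    def __init__(self, title):
        self.title = title
        self.checked_out = False

    def check_out(self):
        if self.checked_out:
            raise ValueError("CDDisc is already checked out")
        self.checked_out = True

    def check_in(self):
        if not self.checked_out:
            raise ValueError("CDDisc is not checked out")
        self.checked_out = False

# Work case: Checkin_a_CDDisc for CDDisc.
# The check-in only makes sense in the context of a prior check-out.
disc = CDDisc("Abbey Road")
disc.check_out()
disc.check_in()
print(disc.checked_out)  # False after a successful check-in
```

The exceptions raised for an out-of-order call correspond to the misuse cases for the class: they are the class-level counterparts of the system-level misuse cases.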
Since many functional tests are derived from use cases, tests propagate in a similar manner. Tests for the external interface show up as tests for the outer classes, which then appear as tests for classes deeper within the system. The tests for each individual class should contain the appropriate subtests from the tests for the whole system. For example, the tests for the CDDisc class will include renting a CDDisc twice without checking it in; that test should produce an error response from the rental method. Tests for individual classes can also include ones based on the nonfunctional requirements for the whole system. An overall performance requirement, such as a transaction rate (being able to rent 100 CDDiscs in a minute), turns into a performance test for the CDDisc class itself.

You might have the time to make up unit tests for every single method of every single class. If you don't, you should at least make up tests that correspond to the work cases for that class. Often tests for individual methods are not meaningful in isolation; they make sense only in the context of one or more work cases. For example, with Sam's system, the test for checking in a Rental needs to be performed in the context of a checked-out CDDisc. Another example of testing in context is a simple file class. You can write a test for just the open method, but a successful result is meaningless on its own: you cannot confirm that the file was opened properly unless you perform another operation on it, such as a read or write. The open method can be tested fully only in a context. Conversely, the read method can be tested fully only in the context of an open file.
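The double-rental subtest described above can be sketched as follows. The `rent` return values and the one-rental-at-a-time rule are assumptions made for this sketch; only the scenario (renting twice without a check-in must yield an error response) comes from the text.

```python
class CDDisc:
    """Hypothetical minimal disc for sketching the double-rental test."""

    def __init__(self):
        self.rented = False

    def rent(self):
        if self.rented:
            return "error: already rented"  # error response, not success
        self.rented = True
        return "ok"

    def check_in(self):
        self.rented = False

disc = CDDisc()
assert disc.rent() == "ok"
# Renting twice without a check-in must produce an error response:
assert disc.rent() == "error: already rented"
disc.check_in()
assert disc.rent() == "ok"  # after check-in, renting succeeds again
```

This is the system-level test (renting the same disc twice) reappearing, fractal-style, as a class-level test for CDDisc.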
4.5.2. Testing Feedback

The concept of feedback applies not only to development, but also to testing. When users report bugs, you can write tests to check for those bugs in your current system. You can also analyze the bugs to see why they occurred and how they slipped through the existing tests. You might then want to alter your testing strategy in the next iteration. For example, if you find that bugs occur when users have many other programs running, you can add tests for operating with limited resources to the appropriate class tests.
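Turning a bug report into a permanent test is straightforward with a unit-test framework. In this sketch the bug scenario, the `CDDisc` shape, and the test name are all invented for illustration; the point is only the pattern of pinning a reported failure down as a regression test.

```python
import unittest

class CDDisc:
    """Hypothetical minimal disc used only for this sketch."""

    def __init__(self):
        self.rented = False

    def rent(self):
        if self.rented:
            return False  # error response: already rented
        self.rented = True
        return True

    def check_in(self):
        self.rented = False

class RegressionTests(unittest.TestCase):
    def test_checkin_after_failed_second_rental(self):
        # Pins down a (hypothetical) reported bug: a failed second
        # rental must not leave the disc unrentable after check-in.
        disc = CDDisc()
        self.assertTrue(disc.rent())
        self.assertFalse(disc.rent())  # second rental rejected
        disc.check_in()
        self.assertTrue(disc.rent())   # rentable again
```

Once such a test is in the suite, the bug cannot silently reappear in a later iteration.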