4.5. Testing Functionality


At each point in the system's development, you should create a rough draft of the tests. [*] The test outlines can cover the basic use cases, as well as any misuse cases that might come up in the discussion. A misuse case documents how a user might unintentionally (or intentionally, with malice aforethought) interact with the system.

[*] Eric M. Burke, a reviewer, noted, "Ideally you'd start writing tests, not just rough drafts. Also, start writing tools to automate the acceptance testing. If you develop the testing harness in parallel with the code, you will be more likely to write testable code. Failure to do this means you'll be able to write unit tests, but acceptance testing will be hard to automate."

Sam and I developed a first cut at the use cases. So now is a good time to examine some acceptance tests. Brainstorming the tests can bring up new issues that the client has not made part of his requirements. On the other hand, if you cannot imagine a test scenario that can determine that a requirement has been fulfilled, it is time to examine that requirement.

Sam, Tim, and I agreed on a preliminary list of acceptance tests that the system should pass before it is installed for production. A sketch of automating two of the checkout scenarios follows the lists.

Use case: Checkout_a_CDDisc

  • Scenario: RegularRental

    1. Enter a Customer ID and CDDisc ID.

    2. System should print rental contract.

    3. Check to see that CDDisc is recorded as currently rented.

  • Scenario: AlreadyRented (Misuse)

    1. Enter CDDisc ID of CDDisc that is recorded as currently rented.

    2. System should respond with an error.

  • Scenario: BadCustomer (Misuse)

    1. Enter Customer ID that does not exist in the system.

    2. System should respond with an error.

  • Scenario: BadCDDisc (Misuse)

    1. Enter a physical ID that does not exist in the system.

    2. System should respond with an error.

Use case: Checkin_a_CDDisc

  • Scenario: RegularReturn

    1. Enter CDDisc ID.

    2. System should respond that CDDisc has been returned.

  • Scenario: OverdueReturn (Misuse)

    1. Enter CDDisc ID of CDDisc that is overdue.

    2. System should respond that CDDisc has been returned, with an overdue message.

  • Scenario: NotRentedReturn (Misuse)

    1. Enter CDDisc ID of CDDisc that is recorded as not rented.

    2. System should respond with an error.
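
These scenarios are easiest to keep honest when they are automated, as the reviewer's note above suggests. The following is a minimal sketch of how the RegularRental and AlreadyRented scenarios might be scripted with JUnit. The RentalSystem facade, its checkout method, and AlreadyRentedException are assumed names for illustration, not part of Sam's actual design.

    import org.junit.Before;
    import org.junit.Test;
    import static org.junit.Assert.*;

    // Sketch only: RentalSystem, RentalContract, and AlreadyRentedException
    // are hypothetical stand-ins for whatever facade the real system exposes.
    public class CheckoutAcceptanceTest {
        private RentalSystem system;

        @Before
        public void setUp() {
            system = new RentalSystem();
            system.addCustomer("C42");   // a known customer
            system.addCDDisc("D17");     // a known, un-rented CDDisc
        }

        @Test
        public void regularRentalPrintsContractAndMarksDiscRented() {
            RentalContract contract = system.checkout("C42", "D17");
            assertNotNull(contract);                     // rental contract produced
            assertTrue(system.isCurrentlyRented("D17")); // disc recorded as rented
        }

        @Test(expected = AlreadyRentedException.class)
        public void rentingAnAlreadyRentedDiscIsAnError() {
            system.checkout("C42", "D17");
            system.checkout("C42", "D17"); // misuse: second checkout must fail
        }
    }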

IF IT CAN'T BE TESTED, DON'T REQUIRE IT

Every functionality requirement, whether formally or informally stated, should have a test created for it. If you cannot test a requirement, there is no way to determine whether you have met it. [*]


[*] See The Object Primer by Scott W. Ambler and Barry McGibbon (Cambridge University Press, 2001).

Often it is difficult to specify clear-cut tests for some requirements classified as the "ilities" described in Chapter 2. Requirements such as "easy to use," "quick to learn," or "maintainable" can be difficult to measure. So the guideline has been restricted to functionality tests.

CLEAR-CUT TEST

A while back, I mediated a dispute between a library and a vendor. The library had ordered a system to keep track of books and to provide search capabilities for its patrons. The system was delivered, but there was a disagreement on whether the search mechanism was performing according to specifications. I was called in to determine whether the specifications were clearly stated and were reasonable to expect.

The specifications read, "A book search must be completed within two seconds." This requirement seems straightforward. However, most of the terminals in the library branches were connected to the main computer by 9600-baud connections. If a search contained more than a few results, it would be impossible to send those results over the slow connection within two seconds, even if there were no allowance for time to calculate the search results.

This requirement created a very precise test. However, that test would never be met. The specification was rewritten to state that the first character of the search results must appear within two seconds of the submission of the search. Subsequent characters should appear at the rate supported by the connection bandwidth.

Even if a clear-cut test can be written, that does not mean it can be passed.


4.5.1. Fractals Are Everywhere

A fractal is a geometric pattern that is repeated at ever-smaller scales. Likewise, the same software design pattern can appear in both a large scale and a small scale. For example, the overall framework for a system is input-process-output. In the 1970s, IBM developed HIPO charts. HIPO stands for Hierarchical-Input-Process-Output. The chart depicted a functional breakdown of responsibilities. Each chart showed the input, process, and output for a given level of detail. From that level, you derived lower-level charts that gave more detail for the input/process/output sequence.

This same fractal principle is applicable to both use cases and tests. People normally think of use cases in terms of how the user interacts with the system. Although use cases generally are associated with a system as a whole, the technique can also be applied to interfaces and classes within the system. Use cases for a system define steps within a process. When the technique is applied to classes or interfaces, it yields a series of method calls. To distinguish between the two, I will use the term work cases, referring to the work that the implementation has to perform.

The work cases for the interfaces and classes are derived from the external interface's use cases. If you cannot come up with work cases for a class, perhaps the class has no purpose. If you do not know how you are going to use a class, it is hard to design it and test it.

For Sam's system, there is an external use case for checking in a CDDisc. From this external use case is derived a Check_in work case for CDDisc. Many of the misuse cases listed earlier also represent misuse cases for the CDDisc class. An example of a work case follows, with a corresponding test sketch after it. It appears similar to the use case from which it is derived, since the use case involves mostly the CDDisc class:

Work case: Checkin_a_CDDisc for CDDisc

  • Scenario: RegularReturn

    1. Find a CDDisc by ID.

    2. Return the CDDisc.
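
A work case like this translates almost directly into a class-level test. The sketch below exercises the RegularReturn scenario; CDDiscCatalog, findByID, rent, checkIn, and isRented are assumed names for illustration, not the system's actual interface.

    import org.junit.Test;
    import static org.junit.Assert.*;

    // Sketch of a work-case test against a hypothetical CDDisc interface.
    public class CDDiscWorkCaseTest {

        @Test
        public void regularReturnChecksTheDiscBackIn() {
            CDDiscCatalog catalog = new CDDiscCatalog();
            catalog.add(new CDDisc("D17"));

            CDDisc disc = catalog.findByID("D17"); // step 1: find a CDDisc by ID
            disc.rent(new Customer("C42"));        // precondition: the disc is out
            disc.checkIn();                        // step 2: return the CDDisc

            assertFalse(disc.isRented());          // the return actually took effect
        }
    }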

Since many functional tests are derived from use cases, the propagation of tests acts in a similar manner. Tests for the external interface show up as tests for the outer classes, which then appear as tests for classes deeper within the system.

The tests for each individual class should contain the appropriate subtests from the tests for the whole system. For example, the tests for the CDDisc class will include renting a CDDisc twice without its being checked in. That test should produce an error response from the rental method. Tests for individual classes can also include ones based on the nonfunctional requirements for the whole system. An overall performance test such as a transaction rate (being able to rent 100 CDDiscs in a minute) turns into a performance test for the CDDisc class itself.
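
Both kinds of class-level subtests can be sketched in the same style. The misuse test and the rough throughput check below assume the same hypothetical CDDisc interface as the earlier sketch; a real performance test would use a measurement harness rather than a bare wall-clock assertion.

    import org.junit.Test;
    import static org.junit.Assert.*;

    public class CDDiscClassTest {

        // Misuse subtest derived from the AlreadyRented scenario.
        @Test(expected = AlreadyRentedException.class)
        public void rentingTwiceWithoutCheckinIsAnError() {
            CDDisc disc = new CDDisc("D17");
            disc.rent(new Customer("C42"));
            disc.rent(new Customer("C43")); // second rental must raise an error
        }

        // Crude version of the "rent 100 CDDiscs in a minute" requirement.
        @Test
        public void canRentOneHundredDiscsWithinAMinute() {
            long start = System.currentTimeMillis();
            for (int i = 0; i < 100; i++) {
                CDDisc disc = new CDDisc("D" + i);
                disc.rent(new Customer("C42"));
            }
            long elapsed = System.currentTimeMillis() - start;
            assertTrue("expected 100 rentals in under a minute", elapsed < 60000);
        }
    }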

You might have the time to write unit tests for every single method of every single class. If you don't, you should at least write tests that correspond to the work cases for that class. Often tests for individual methods are not meaningful on their own; they make sense only in the context of one or more work cases. For example, with Sam's system, the test for checking in a Rental needs to be performed in the context of a checked-out CDDisc.

Another example of testing in context is a simple file class. You can write a test for just the open method, but a successful result by itself is meaningless if you are trying to determine whether the file was opened properly. You cannot test for a successful open unless you perform another operation on the file, such as a read or a write. The open method cannot be tested fully by itself; it can be tested fully only in a context. Conversely, the read method cannot be tested fully by itself, except in the context of an open file.
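
Here is a minimal sketch of what testing in context looks like for a file, using the standard java.io classes: the open is verified only by following it with a write and a read and checking the round trip.

    import org.junit.Test;
    import static org.junit.Assert.*;
    import java.io.*;

    public class FileInContextTest {

        @Test
        public void openIsVerifiedOnlyByAWriteAndReadRoundTrip() throws IOException {
            File file = File.createTempFile("context", ".txt");
            file.deleteOnExit();

            // "Opening" the file succeeds, but that alone proves little...
            FileWriter writer = new FileWriter(file);
            writer.write("hello");  // ...so exercise the file in context:
            writer.close();

            BufferedReader reader = new BufferedReader(new FileReader(file));
            assertEquals("hello", reader.readLine()); // the read confirms the open
            reader.close();
        }
    }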

PLAN FOR TESTING

Developing test strategies in advance can lead to a better design.


4.5.2. Testing Feedback

The concept of feedback applies not only to development, but also to testing. When users report bugs, you can write tests to check for those bugs in your current system. You can also analyze the bugs to see why they occurred and how they slipped through the tests. You might want to alter your testing strategy in the next iteration. For example, if you find that the bugs are occurring when users have many other programs in operation, you can add tests for operating with limited resources to the appropriate class tests.
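
One common way to capture this feedback is a regression test named for the bug report, so that the defect cannot silently reappear in a later iteration. A sketch, again against the hypothetical CDDisc interface; the bug number and symptom are invented for illustration.

    import org.junit.Test;
    import static org.junit.Assert.*;

    // Hypothetical regression test: suppose bug #123 reported that checking in
    // a disc left it still marked as rented. The test replays the report.
    public class Bug123RegressionTest {

        @Test
        public void checkinClearsTheRentedFlag() {
            CDDisc disc = new CDDisc("D17");
            disc.rent(new Customer("C42"));
            disc.checkIn();
            assertFalse("bug #123: disc stayed rented after checkin",
                        disc.isRented());
        }
    }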

THE SKYDIVING GRANDMA

I consulted for a life insurance firm that was creating illustrations for life insurance. The illustrations depicted how the value of an insurance policy grew over the lifetime of the insured. The illustration program had hundreds of inputs, ranging from age, sex, and hazardous avocations (including skydiving) to the timing of loans that an insured intended to take for expected events (e.g., college tuition). It was impossible to test all possible combinations of inputs (e.g., a skydiving 80-year-old female borrowing $10,000 for her grandchild's education in four years). The only way to perform full checkout was in the field.

The illustration program was distributed to thousands of life insurance salespeople. As the program ran, it recorded all inputs, some intermediate calculation values, and all outputs. The records were sent back to the development department and the actuaries for analysis. The illustrations that contained outputs that appeared anomalous (such as getting $1 million of life insurance for $1) were examined further. The inputs for those instances and the corrected outputs were used as test cases for the next version.



