4.3 Testing the Typical Functionality

One common recommendation says, "Write a test case for each public method." This rule has two catches. First, if we test the methods of an object only in isolation, we will not discover errors resulting from state changes of that object. The second catch lies in the implicit assumption that one test per public method is sufficient; one test is often not enough. For this reason, we modify the preceding rule as follows: test each typical use of an object. The difference is that a typical use normally consists of a sequence of messages.

For example, a typical use for our Dictionary object is adding new translations and looking up existing translations. We have written test cases for both scenarios, but the isEmpty() method was used only within the scope of this "typical usage test." On the other hand, the rule should not lead to a situation where all individual tests are replaced by one complex and unclear test case. On the contrary, the important thing is to identify the smallest typical usage scenarios and to test them independently.
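To make this concrete, the following JUnit sketch shows what such a typical-usage test might look like. The method names addTranslation(), getTranslation(), and isEmpty() are assumed here for illustration; the exact API of the book's Dictionary example may differ.

import junit.framework.TestCase;

public class DictionaryTypicalUsageTest extends TestCase {

    // A typical use is a sequence of messages: start empty,
    // add translations, then look them up again.
    public void testAddAndLookUpTranslations() {
        Dictionary dictionary = new Dictionary();
        assertTrue(dictionary.isEmpty());

        dictionary.addTranslation("book", "Buch");
        dictionary.addTranslation("pencil", "Bleistift");

        assertFalse(dictionary.isEmpty());
        assertEquals("Buch", dictionary.getTranslation("book"));
        assertEquals("Bleistift", dictionary.getTranslation("pencil"));
    }
}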

Typical usage is obviously not something that can be fixed once and for all at the beginning of an application's life cycle. Right at the start we have nothing but the functional requirements to help us decide where to begin. As for the outer boundary of a system, the use cases, user stories, or whatever we have as requirements specifications represent the primary source from which to derive the test cases. A good set of unambiguous and testable statements is an excellent starting point for developing an application test-driven from the outside in.

As for internal components, the developer figures out what tasks a unit is currently supposed to fulfill. As the unit is used by more and more other components, our initial guess of what is typical will often prove wrong. This insight will then be reflected in changes to the test cases. One reviewer made an important remark here that you should remember:

We are only dealing with one usage situation at a time. We may add tests after we have written some code, but we do not add test cases in anticipation of future clients. . . . The primary quality benefit that we get from test-first comes from the small considered steps that we take and the design thoughts that we have when we are taking them.

Over the course of time, other test ideas are added; these are often atypical but still legitimate usages of our code. Marick [00] recommends writing down these ideas, using them as a checklist for future editing of the tests, and then discarding them. Editing in this context means deciding whether a test idea should manifest itself in a test, makes no sense at all, lies outside the requirements, or would be too expensive to implement. And again we are caught in the previous dilemma: should each idea get its own test case, or should we try to squeeze as many ideas as possible into a single test case? Some of the benefits and drawbacks of the two approaches read like this (a sketch contrasting both styles follows the list):

  • One test per idea. Simple tests facilitate debugging, are easier to read and to restructure, and generally can be created faster. The major drawback of simple tests is that they test only what we originally intended.

  • Many ideas per test. Complex tests can test more than the test ideas we knowingly build into them; they find errors by pure luck. Building many ideas into one single test case requires more planning and is more error-prone. In addition, complex test scenarios resist subsequent modifications, because it will be very difficult for us to familiarize ourselves with them later on.
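To illustrate the contrast, here is another hedged sketch built on the same assumed Dictionary API as above: two fine-grained tests that each capture a single idea, followed by one scenario test that bundles several ideas.

import junit.framework.TestCase;

public class DictionaryTestStyleComparison extends TestCase {

    // One test per idea: each test states a single expectation
    // and fails for exactly one reason.
    public void testNewDictionaryIsEmpty() {
        assertTrue(new Dictionary().isEmpty());
    }

    public void testAddedTranslationCanBeRetrieved() {
        Dictionary dictionary = new Dictionary();
        dictionary.addTranslation("table", "Tisch");
        assertEquals("Tisch", dictionary.getTranslation("table"));
    }

    // Many ideas per test: one scenario combines emptiness checks,
    // several additions, and lookups. It exercises more interactions
    // at once, but a failure is harder to pin down.
    public void testFillAndQueryScenario() {
        Dictionary dictionary = new Dictionary();
        assertTrue(dictionary.isEmpty());
        dictionary.addTranslation("table", "Tisch");
        dictionary.addTranslation("chair", "Stuhl");
        assertFalse(dictionary.isEmpty());
        assertEquals("Tisch", dictionary.getTranslation("table"));
        assertEquals("Stuhl", dictionary.getTranslation("chair"));
    }
}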

One potentially useful approach is to first implement all the test ideas we find valuable as small, fine-grained, and well-documented test cases. We can then complete this scaffolding with a few more complex test cases; these are often scenarios directed at fulfilling a user goal and covering the complete lifetime of the objects involved. Finally, we have to be willing to discard and reconsider these more complex test scenarios when making changes and refactoring. Attempting to just slightly modify complex tests we don't really understand usually produces test cases of doubtful quality and little use.



