Constructing a Test Driver


A test driver is a program that runs test cases and collects the results. We describe three general approaches to writing test drivers. There are probably others and certainly there are many variations on what we present. We recommend one approach over the others and will develop it in detail.[6]

[6] If the behavior of the class calls for program termination as a postcondition (for example, when an implementation based on a defensive programming approach uses the assert() library function to check preconditions), then multiple test drivers might be needed, or the test driver needs to support some way of running individual test cases.

Consider three ways to implement a test driver for the Velocity class. We will use C++ to illustrate the structure of the test driver design.

  1. Implement a function main() that can be compiled conditionally (with #define TEST) when compiling the member function definitions (in the Velocity.cpp file) and then executed (see Figure 5.9).

    Figure 5.9. A conditionally compiled test driver for the Velocity class embedded in the source file

    graphics/05fig09.gif

  2. Implement a static member function within the class that can be invoked to execute and collect the results for each of the test cases (see Figure 5.10).[7]

    [7] In Java, this could be a class method named main(), thereby making execution of the test driver as simple as running a class file on the Java virtual machine.

    Figure 5.10. A test driver embedded as a class operation for Velocity

    graphics/05fig10.gif

  3. Implement a separate class whose responsibility is to execute and collect the results for each test case (see Figure 5.11). A main() function instantiates this class and sends it a message to run all test cases. Note: in Java, main() can be a static method of the VelocityTester class.

    Figure 5.11. A test driver for the Velocity class implemented as a separate "tester" class

    graphics/05fig11.gif

All three designs are equivalent with respect to their support for running the same test cases and reporting the results. Some of the strengths and weaknesses of each are summarized in Figure 5.12.

Figure 5.12. Strengths and weaknesses of the test driver designs

graphics/05fig12.gif

The second and third designs are attractive because they can be implemented using standard features of most object-oriented programming languages. We prefer the third design.[8] Although it separates test code from production code, the relationship between a class and a driver for testing it is easy to remember: each class C has a tester class called CTester. The use of a separate class is not necessarily a disadvantage. The proximity of a driver's code to the code for the class it tests is advantageous if the code for both is being developed by the same person; otherwise, it is a disadvantage. The tester class design allows some flexibility, since in most programming languages two classes can be defined in the same file or in different files.

[8] It has even more strengths in association with testing inheritance hierarchies, as we will describe in Chapter 7.

We will concentrate on the tester class design, although most aspects of development of such a driver can be adapted in a straightforward manner to the other designs.

Test Driver Requirements

Before looking at tester classes in more detail, consider the requirements for a test driver for execution-based testing of a class.

The main purpose of a test driver is to run executable test cases and to report the results of running them. A test driver should have a relatively simple design because we seldom have time or resources to do execution-based testing of driver software. We rely primarily on code reviews to check driver code. In support of reviews and to facilitate maintenance, we should be able to readily trace the testing requirements in a test plan to the code in a driver. A test driver must be easy to maintain and adapt in response to changes in the incremental specification for the class it tests. Ideally, we should be able to reuse code from the test drivers for existing classes in creating new drivers.

Figure 5.13 shows a model for a class Tester that satisfies these requirements. The public interface provides operations to run various test suites or all of them. The test cases are organized into suites based on their origin: functional if they were identified from the specification, structural if they were identified from the code, and interaction if they test the correct operation of sequences of events on an object, such as pairs of input/output transitions. We identify these categories to facilitate maintenance of tests. The lines between these categories are sometimes hard to draw, but the general criterion for putting a test case in a category concerns how the test case was initially identified and what impact changes to a class have on it. Interaction test cases are usually generated to augment other test cases to achieve some level of coverage. Implementation-based test cases are generated to test some behavior of the code that arises from the implementation rather than the specification. If the implementation for a class changes, but not the specification, then we should be able to update the driver code just by modifying the code that runs implementation-based test cases. We refer to the set of test cases in a particular category as a test suite for that category. Thus, we identify a functional (specification-based) test suite, a structural (implementation-based) test suite, and an interaction test suite.

Figure 5.13. A class model for requirements of a Tester class

graphics/05fig13.gif

The tally operations on a tester can be used to check how many test cases have passed so far. A driver keeps a log of test case execution and the results in a file whose name is specified at the time it is instantiated. The protected logTestCaseStart(), logTestCaseResult(), and logComment() operations place information in the log file. The protected runBaselineSuite() operation verifies the correctness of methods in the class under test (CUT) that are used by the test driver in checking the results of test cases. Accessor and modifier methods are usually tested as part of the baseline test suite for a class. The CUTinvariantHolds() operation evaluates the invariant of the CUT using the state of the current object under test (OUT).

The Tester class is abstract. Code for the class can provide default implementations for operations common to all (concrete) testers. These include operations for logging test case results and performing other functions common to all class test drivers, such as measuring heap allocation and providing support for timing execution of individual test cases. The methods to run the test suites and to check a class invariant must be implemented for each specific CUT.

We now look at the typical design for a concrete Tester class. A design for VelocityTester is shown in Figure 5.14. The figure shows a little more detail about the Tester class than is shown in Figure 5.13, including some operations to manipulate an OUT and some factory methods for creating instances of the CUT. We will describe these in the next section. A concrete Tester class is responsible primarily for implementing methods for test cases and running them as part of a suite.

Figure 5.14. Class model for a VelocityTester class

graphics/05fig14.gif

Tester Class Design

Since the Tester class provides operations to help report test case results, the primary responsibility of a concrete Tester class, such as VelocityTester, is to run test cases and report results. The main components of the class interface are operations to set up test cases, to analyze the results of test cases, to execute test cases, and to create instances of the CUT to be used in running test cases. Our design has proven both flexible and maintainable. It has proven quite useful when instances of a class are needed to test another class, as we will show in the next chapter.

Within a concrete tester class, we define one method for each of the test cases. We refer to these as test case methods. These provide traceability to the test plan: one method per test case or group of closely related test cases. The purpose of a test case method is to execute a test case by creating the input state, generating a sequence of events, and checking the output state.

Test Case Methods

In a Tester class, each test case is represented by a single method. The name of the method should reflect the test case in some way. For small numbers of test cases, we can sequentially number the test cases identified in the test plan and name the operations runTestCase01(), runTestCase02(), and so on. Sequential numbering is simple, but can result in problems if test cases in a plan are ordered in some way and test cases are inserted or deleted. Usually a naming convention can be developed based on the derivation of the test cases (see sidebar).

The responsibility of a test case method is to construct the input state for a test case (for example, by instantiating an OUT and any objects to be passed as parameters) and then to generate the events specified by the test case. A test case method reports the status of the result: pass, fail, or TBD[9], which indicates that some action is needed to determine the result. A test case method also verifies that the CUT's invariant holds for the OUT.

[9] To be determined. Some results require human reaction, such as verifying generation of a sound or a change in what is displayed on a monitor screen. For example, testing an overloaded stream insertion operator for a class in C++ might require a tester to open a file and verify that the data is printed correctly. For such test cases, we like to include directions as comments in the log.

In our code, a test case method has a general structure shown in pseudocode in Figure 5.15.

Figure 5.15. Pseudocode for a typical test case method

graphics/05fig15.gif
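Since the figure is not reproduced here, the following is a minimal sketch of the shape such a test case method typically takes. The Counter class, the tc_increment() method, and the simplified TestResult enumeration are hypothetical stand-ins, not the book's actual figure: the point is the three-step pattern of building the input state, generating the event, and checking the output state.

```cpp
#include <cassert>

// Hypothetical minimal class standing in for the CUT.
class Counter {
public:
    Counter() : _count(0) {}
    void increment() { ++_count; }
    int count() const { return _count; }
private:
    int _count;
};

enum TestResult { Fail, TBD, Pass };

// Sketch of a test case method following the Figure 5.15 pattern.
TestResult tc_increment() {
    // 1. Construct the input state (the OUT).
    Counter out;                 // OUT in its initial state
    // 2. Generate the event(s) specified by the test case.
    out.increment();
    // 3. Check the output state against the postcondition
    //    (a full driver would also check the class invariant).
    if (out.count() == 1)
        return Pass;
    return Fail;
}
```

In the full design, the method would also call logTestCaseStart() and logTestCaseResult() on the enclosing Tester instance rather than returning the result directly.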

Tip

Implement a test script method for each test case when creating a Tester class for classes in which there are many interaction test cases. A test script method is responsible for creating the OUT for use by a test case method, invoking the test case method, and then checking postconditions and the class invariant. It also reports the results. The test case method handles only the event sequence on the OUT. An interaction test case can then be coded as a single test script method that invokes a sequence of test case methods and then checks and reports the results.


OUT Factory Methods

Classes are tested by creating instances and checking their behaviors against a set of test cases. We have referred to an instance to which a test case is being applied as the object under test (OUT). The main requirement with respect to the OUT is that its attributes be set to the inputs specified for the test case, so that the preconditions of the test case to be applied are met. The Tester class includes setOUT() and getOUT() operations that are used by test case methods to access the current OUT (see Figure 5.14). A disposeOUT() operation is available to end use of the current OUT.

Naming Test Cases

Naming test cases well is an interesting problem. We would like the names to somehow reflect what is being tested. In an environment in which paragraph numbers are associated with each piece of a specification, the name of a test case can include an encoded reference to the paragraph that gives rise to the test case. This is desirable because it gives traceability for the test case, and it is commonly used to name system test cases. However, paragraph numbers are not associated with OCL specifications or state transition diagrams, which are used to specify classes.

For naming specification-based test cases, a naming scheme can be based on an operation name and pre- and postcondition numbering. Assume an operation is specified with the following pre- and postconditions:

graphics/05equ01.gif


Based on a goal of testing each combination of precondition and postcondition, the combinations in the following table are possible (the columns give the truth values of the two precondition disjuncts and the two postcondition disjuncts):

  Pre-1  Pre-2  Post-1  Post-2  Test Case Name
  F      T      T       F       op1F2T1T2F
  T      F      T       F       op1T2F1T2F
  T      T      T       F       op1T2T1T2F
  F      T      F       T       op1F2T1F2T
  T      F      F       T       op1T2F1F2T
  T      T      F       T       op1T2T1F2T
  F      T      T       T       op1F2T1T2T
  T      F      T       T       op1T2F1T2T
  T      T      T       T       op1T2T1T2T

The test case name is derived by numbering the disjuncts in the precondition and the postcondition, and incorporating those numbers in the name, each followed immediately by a "T" or an "F" to indicate the truth value associated with that test case. If several equivalence classes, as determined by boundary values, exist for a particular test case, then a suffix can be added to the name, for example, op1F2T1T2Fa, op1F2T1T2Fb, and so on. An explanation of these test cases should be included in documentation for the code or the test plan.

This scheme works for naming test cases that address a single operation. Test cases involving interactions can be named based on concatenating names for each of the operations in a sequence comprising an interaction.
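As an illustration of the scheme, a small helper (hypothetical, not part of the Tester design in the book) can assemble such names from the truth values of the disjuncts:

```cpp
#include <string>

// Hypothetical helper that builds a test case name such as
// "op1F2T1T2F" from the truth values of the two precondition
// disjuncts and the two postcondition disjuncts described above.
std::string testCaseName(const std::string &op,
                         bool pre1, bool pre2,
                         bool post1, bool post2) {
    const char *t = "T", *f = "F";
    return op + "1" + (pre1 ? t : f) + "2" + (pre2 ? t : f)
              + "1" + (post1 ? t : f) + "2" + (post2 ? t : f);
}
```

A boundary-value suffix ("a", "b", and so on) could simply be appended to the returned string.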

Checking Postconditions and Invariants

Postconditions and invariants can be checked in a straightforward manner, assuming the CUT defines public operations for accessing state and/or attribute values.

We have identified two general approaches to writing code to check postconditions and invariants. One is to write code that computes attribute values in the tester when they are needed. The other is to use a database of some form, such as a file or an array in memory, from which values can be retrieved when needed. Consider checking the values of the speedX and speedY attributes in the invariant for Velocity. The condition involves sines and cosines of angles. We can compute these values as test cases execute using the sin() and cos() functions in a standard library, assuming we are using reliable functions. Alternatively, we can precompute values of the attributes for the various values of direction and speed used in test cases, and then retrieve that information when it is needed. We have used spreadsheet programs to perform such computations.
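The precomputed-values approach can be sketched as follows. The table contents and the lookupExpected() helper are hypothetical; in practice the rows would be generated offline, for example with a spreadsheet, for exactly the (speed, direction) pairs used in the test cases. The first row uses the same values (1000, 321, 777, -629) that appear in the baseline suite later in this section.

```cpp
#include <cstddef>

// Hypothetical table of precomputed expected speedX/speedY values
// for the (speed, direction) pairs that appear in the test cases.
struct ExpectedVelocity {
    int speed;
    int direction;   // degrees, 0 <= direction < 360
    int speedX;      // int(cos(radians) * speed), precomputed offline
    int speedY;      // int(sin(radians) * speed), precomputed offline
};

static const ExpectedVelocity expectedTable[] = {
    { 1000, 321,  777, -629 },
    { 1000,   0, 1000,    0 },
    {    0,  45,    0,    0 },
};

// Retrieve precomputed components instead of recomputing them with
// sin()/cos() while the test cases run. Returns 0 if the pair is
// not one used by the test cases.
const ExpectedVelocity *lookupExpected(int speed, int direction) {
    const size_t n = sizeof(expectedTable) / sizeof(expectedTable[0]);
    for (size_t i = 0; i < n; ++i)
        if (expectedTable[i].speed == speed &&
            expectedTable[i].direction == direction)
            return &expectedTable[i];
    return 0;
}
```

A CUTinvariantHolds() written this way compares the OUT's accessor results against the table instead of trusting the math library at test execution time.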

A tricky part of checking postconditions is in coding expressions that use @pre, meaning the value at the start of the method execution. The method must store such values in a local "temp" whose value is used once the test case output is available for checking. Use the factory method in the Tester class that corresponds to a copy constructor to facilitate @pre checking. For example, in checking the postcondition for setDirection(Direction dir), use

 Velocity *OUTatPre = newCUT( *getOUT() ); // remember state
 ...
 const Velocity &OUT = *getOUT();
 if ( OUT.getSpeed() == OUTatPre->getSpeed() &&
      OUT.getDirection() == dir )
 ...

If postconditions relate to the state of the OUT, then invoke operations to check the state. If no such operations are defined by the CUT, then define them in the tester class as protected member functions. Note well: If the test case method is relying on operations defined in the CUT, then make sure tests for those operations are included in the baseline suite.

OCL allows state names to be used as Boolean-valued attributes in specifications [WK99]. A class need not define an operation to explicitly return the state of an instance. We believe classes should always include some way of observing the current state of an object based on state names, not just ranges of attribute values; for example, in the PuckSupply class we defined the isEmpty() operation. If the CUT does not completely support in its interface the operations needed to test postconditions and invariants in terms of states, approach the developers to add them to the CUT rather than coding methods in the Tester class. After all, a client (in this case the Tester class) should be able to observe all the behavior of an object if that behavior is referenced by the specification. Certainly a class must have in its public interface all the operations necessary for a client to check the preconditions for any public operation.

A tester interface includes a set of operations to construct instances of the CUT. These operations include newCUT(Object), a factory method used to create an instance of the CUT that is a copy of the object passed as its argument; it resembles a copy constructor in C++. A concrete Tester class should implement a factory method corresponding to each constructor defined in the CUT. Test case methods use these factory methods, rather than the CUT's constructors, to create an OUT. Test case methods use getOUT() to access the current OUT. In the case of VelocityTester, we define the newCUT() operation to create an instance of Velocity constructed with the default constructor and setOUT() to make that instance the current OUT. We also define the newCUT(s: Speed, d: Direction) operation to create a new instance using the Velocity::Velocity(s: Speed, d: Direction) constructor. The test case methods must use these factory methods to create new instances of the CUT, for reasons that will be apparent when we look at testing class hierarchies in Chapter 7.[10]

[10] Here is a preview. For any subclass (of the CUT) designed in accordance with the substitution principle, test cases for the CUT still apply to that subclass. We will create a tester for the subclass that is a subclass of the tester for the CUT. Since the test case methods in the tester for CUT rely on factory methods, we can just override those same methods in the subclass's tester to create instances of the subclass. As we mentioned at the start of the book, object-oriented technologies improve testing as well as development.

It is not uncommon for a Tester class to define additional factory methods for the convenience of test cases that need to create an OUT in some specific state. For example, the PuckSupplyTester class might provide a newPuckSupplyOfOne() operation to construct a PuckSupply instance containing a single puck. Such factory methods should be public since they are very useful when instances of the CUT are needed to test another class. The test case methods for the other class can use an instance of this Tester class as a helper to create the instances in the necessary states. In implementing such methods, however, take care to use the other factory methods in the Tester and not the constructors for the CUT.

Objects under test should be allocated from the heap because the use of a single object shared by all test cases will not work in the general case. It is also easier to understand test driver code that is written so that each test case method creates its own OUT and then disposes it. Sharing such objects between test case methods increases coupling. Keep test driver code as simple as possible, even at the expense of some time and/or space inefficiency. One of the most frustrating aspects of developing test drivers is testing and debugging them. The more straightforward the code, the better the driver.

In using a language such as C++ in which a programmer must manage the heap, make each test case method responsible for deleting objects it allocates. The disposeOUT() method can delete the current OUT.

Baseline Testing

Test case methods contain code to establish an OUT, which might require a series of modification requests to be sent to an instance of the CUT. Test case methods use accessor operations in the process of checking postconditions. If the constructors, modifier methods, and accessor methods for the CUT are incorrect, then the results reported by a tester are unreliable. The first thing a tester must do is check that such constructors and methods are themselves correct by executing test cases for them. We call this set of test cases a baseline test suite.[11]

[11] Thorough testing of the most basic operations needed to check test results is critical. We once worked on a compiler for which the programs in the test suite always checked for failure of each test case, that is, they had the form: set up test case input; execute test case; if (some condition not true) then report failure. A compiler could pass the executable test suite if it generated code in which all conditions evaluated to true, so that no failure would be reported. This is clearly a weakness of the testing approach. We needed a baseline test suite that checked for correct evaluation of conditional expressions in if statements.

A baseline test suite is a set of test cases that tests the operations of the CUT that are needed for the other test cases to verify their outcomes. This suite includes testing constructors and accessors. Most likely, all the test cases in the baseline test suite will be replicated in the functional test suite.

We have identified two basic approaches to baseline testing, one that's specification-based and one that's implementation-based:

  1. Check that all the constructors and accessors are self-consistent. Create a test case for each constructor and verify that all attributes are correct by invoking accessors.

  2. Check that all the constructors and accessors use the variables in an object correctly. This requires a tester to know how attributes are implemented in a CUT. Its implementation relies on visibility features of programming languages that allow a tester class to have access to the implementation of the class it tests. These features include friends in C++ and package visibility in Java.

Base your approach on how closely you want to couple the code for a tester to the code for the class it tests. We have found the second approach to produce more reliable results, although it requires more programming effort and tightly couples the code between the two classes for example, in C++ the CUT must declare its Tester class a friend. The second approach usually requires fewer test cases in the baseline suite than does the first approach.
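A minimal sketch of the second approach in C++, using a hypothetical Point class as the CUT: the CUT declares its tester a friend, and the baseline check compares what the accessors return against the variables actually stored in the object, rather than merely checking the accessors against each other.

```cpp
class PointTester;   // hypothetical tester, forward-declared

// Hypothetical CUT illustrating the implementation-based baseline
// approach: the class grants its tester access to its implementation.
class Point {
public:
    Point(int x, int y) : _x(x), _y(y) {}
    int getX() const { return _x; }
    int getY() const { return _y; }
private:
    friend class PointTester;   // couples the tester to the CUT
    int _x, _y;
};

class PointTester {
public:
    // Baseline check: the constructor and accessors must agree with
    // the variables actually stored in the object.
    bool runBaselineSuite() {
        Point p(3, 4);
        return p._x == 3 && p._y == 4            // constructor correct
            && p.getX() == p._x && p.getY() == p._y; // accessors correct
    }
};
```

The friend declaration is exactly the tight coupling the text warns about: any change to Point's data members forces a change to PointTester.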

Assertion Checking in a Class Under Test

While the primary mechanism for execution-based testing is implementing a test driver, bugs can also be found by inserting assertion checks in code for a class as it is developed. This can include assertions to check preconditions, postconditions, and invariants. An implementer can identify an implementation-oriented set of invariants in addition to the invariants specified for a class. Consider, for example, the Sprite class in Brickles. For efficiency reasons, each instance maintains in its local state both the bounding rectangle (as a corner point, a width, and a height) and the points that form the upper left and lower right points of the bounding rectangle. This redundancy is a potential source of bugs because the values can become inconsistent. This design introduces an implementation-level class invariant constraining the point at the lower right corner of the bounding rectangle to be the lower right point stored in an instance. A SpriteTester class cannot contain code to check such implementation-level invariants unless it has access to the implementation of Sprite. To facilitate testing, an implementer should include an assertion to check this implementation-level class invariant in every member function that modifies the bounding rectangle of an instance. This facilitates debugging and testing without increasing the coupling between a tester class and its CUT.
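As an illustration, a minimal Sprite-like class (a hypothetical fragment, not the Brickles implementation) that stores the bounding rectangle redundantly and asserts the implementation-level invariant in every member function that modifies it might look like this:

```cpp
#include <cassert>

struct Point { int x, y; };

// Hypothetical class that redundantly stores the bounding rectangle
// as corner/width/height and as its upper-left and lower-right points.
class Sprite {
public:
    Sprite(Point corner, int width, int height)
        : _corner(corner), _width(width), _height(height) {
        _upperLeft = corner;
        _lowerRight = Point{ corner.x + width, corner.y + height };
        assert(implInvariantHolds());   // consistent after construction
    }
    void moveTo(Point corner) {
        _corner = corner;
        _upperLeft = corner;
        _lowerRight = Point{ corner.x + _width, corner.y + _height };
        assert(implInvariantHolds());   // redundant state still agrees
    }
    Point lowerRight() const { return _lowerRight; }
private:
    // Implementation-level invariant: the two representations of the
    // bounding rectangle must agree.
    bool implInvariantHolds() const {
        return _upperLeft.x == _corner.x && _upperLeft.y == _corner.y
            && _lowerRight.x == _corner.x + _width
            && _lowerRight.y == _corner.y + _height;
    }
    Point _corner, _upperLeft, _lowerRight;
    int _width, _height;
};
```

Because the check lives inside the class, a SpriteTester benefits from it without needing access to Sprite's private members.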

Tip

Implement a protected method in a Tester class to check postcondition clauses. The same postcondition often appears in the specification of more than one operation defined for a class. Invoke these protected methods rather than coding the same postcondition checks in each test case method.

Similarly, define a factory method to return an OUT in a state required for a test case. It is not uncommon for a number of test cases to specify the same preconditions for an OUT and to have a convenient method to create an instance and reduce the amount of code in a test driver.

If test script methods are being used to facilitate interaction testing in a class, write each test case method so that it verifies the input state for the test case before generating events on the OUT. Since tester classes are seldom formally tested themselves (by Tester classes), a little defensive programming can help in debugging them.


Running Test Suites

The abstract Tester class includes in its protocol some operations to run all test cases or selected suites. The methods for these operations are straightforward to implement. Each calls a sequence of test case methods. Take care to ensure that the baseline test suite is executed before any of the other suites. A possible design calls for executing the baseline test suite when a concrete tester class is instantiated, that is, as part of its initialization.

If the CUT contains static member functions and/or data members, then the Tester class should incorporate code that ensures those members have already been tested and work correctly, or that at least warns that the class itself might need testing before its instances can be tested. This is not critical, since the goal of testing a class is to uncover bugs, not to diagnose their source. However, such a reminder can serve to ensure that a test driver is written for those static members.


Is it possible to design a class to make testing easier?

Yes. Ensure that the public interface includes operations that enable all conditions within preconditions, postconditions, and class invariants to be checked by clients. Furthermore, enable the current state to be observed without a client having to infer that state from current attribute values. If a class is not designed with such methods, approach the class designer about adding them to the interface.

Providing a public operation in a class to check the class invariant is useful to a Tester class and to developers for debugging. Be wary, however, of relying on that code to check postconditions in test case methods. We prefer to code up an independent CUTinvariantHolds() method in each Tester class we implement.


Tip

Be sure to rerun all test cases after debugging code is removed from the code for a class. Sometimes developers add code to help in debugging a class for example, assertion checks and statements that write trace information to streams. In many shops, debugging code is removed before software is deployed. (To support this, for example, C++'s assert() macro (library header file assert.h) checks assertions only if NDEBUG is not defined.) Under some circumstances, code that includes debugging information can have behaviors different from the same code without the debugging support. Consequently, take care to run test cases in both debugging and nondebugging modes.


Reporting Test Results

A test case method determines the success of a test case. In our design, test case methods report results to the tester instance itself, which tallies test suite run statistics. It is useful for each test case method to identify itself as part of its report. A string denoting the script name or purpose is useful.

Keep in mind that the purpose of testing is not to debug a class, but to see if it meets its specification. Since a class's tester is usually its developer, writing code in a driver that attempts to diagnose problems with the CUT is very appealing. Extensive effort put into diagnostic code is almost always misplaced. Symbolic debuggers and other tools are better for such activities. Such debugging can, of course, be done in the context of the test driver.

Example of Test Driver Code

We illustrate the design of a Tester class by showing the representative parts[12] of VelocityTester written in C++ and in Java. Features and restrictions in the two languages result in different designs. A test plan for Velocity is shown in Figure 5.16. A set of test case descriptions is shown in Figure 5.17. Some test cases are determined by combinations of values for attributes over a range of values.

[12] The code is quite lengthy. The sections omitted follow the pattern set forth by the code shown in the example.

Figure 5.16. A component test plan for the Velocity class

graphics/05fig16.gif

Figure 5.17. Test case descriptions for some of the Velocity operations

graphics/05fig17.gif

C++ code for the Tester and VelocityTester classes is shown first, followed by the Java code. Before presenting it, we make some observations about the code.

  • In the C++ version, we have used a template parameterized by the CUT to generate the Tester abstract class. By using a template, we can produce a class at the root of the tester hierarchy for each CUT. Consequently, for example, operations such as getOUT() return a pointer to an instance of the CUT and not a pointer of type void * or of a pointer to some abstract Object class.

    In the Java version, we defined Tester as an abstract class and used Object to represent the class of the OUT. This requires each test case method to dynamically cast a reference to the OUT to a reference to the CUT.

  • The Tester class in both implementations has the same functionality. This includes code to tally and report test results to a log file. This design could be enhanced significantly to maintain a database of test results and do more elaborate reporting.

  • Notice how the factory methods in VelocityTester return an instance of the Velocity class. A tester should always declare such factory methods to return a pointer or a reference to the CUT.

  • The baseline test suite implemented in VelocityTester is minimal. It merely checks that the attribute values returned by accessors are correct for a single object. More extensive testing of accessors is part of the functional test suite.

  • The CUTinvariantHolds() method in VelocityTester relies on the math library functions sin() and cos(). We trust those functions to return the correct value. In the C++ version, we use the arc cosine of -1 to compute a value for PI. Java provides Math.PI to use.

  • To save space, we have not included all test case methods. The test case method tc_Velocity() tests the default constructor. The tcs_VelocitySpeedDirection() and tcs_setDirection() methods run the sets of test cases described in Figure 5.17 for the nondefault constructor and setDirection() operation.

C++ code for the Tester class. This code was compiled using Metrowerks CodeWarrior Pro 5.

 #include <fstream>
 #include <iomanip>
 #include <ctime>
 #include <string>

 using namespace std;

 enum TestResult {Fail, TBD, Pass};

 template<class CUT>
 class Tester {
 public:
   Tester<CUT>(string CUTname, string logFileName)
     : _CUTname(CUTname), _logStream(logFileName.c_str()),
       _OUTPtr(0), _passTally(0), _failTally(0), _TBDTally(0) {
     time_t systime = time(0);
     _logStream << ctime(&systime) << endl;
   }
   virtual ~Tester<CUT>() { // Summarize results in log
     _logStream << endl << "Summary of results:" << endl
                << '\t' << totalTally() << " test cases run" << endl
                << fixed << showpoint << setprecision(2)
                << '\t' << setw(7) << "Pass:" << setw(5)
                << passTally() << endl
                << '\t' << setw(7) << "Fail:" << setw(5)
                << failTally() << endl
                << '\t' << setw(7) << "TBD :" << setw(5)
                << TBDTally() << endl;
     _logStream.close();
   }
   virtual void runAllSuites() {
     runFunctionalSuite();
     runStructuralSuite();
     runInteractionSuite();
   }
   virtual void runFunctionalSuite() = 0;
   virtual void runStructuralSuite() = 0;
   virtual void runInteractionSuite() = 0;
   int passTally() const { return _passTally; }
   int failTally() const { return _failTally; }
   int TBDTally() const  { return _TBDTally; }
   int totalTally() const {
     return _passTally + _failTally + _TBDTally;
   }
   virtual CUT *getOUT() { return _OUTPtr; } // Current OUT
   virtual void disposeOUT() { // Finish use of current OUT
     if ( _OUTPtr ) {
       delete _OUTPtr;
       _OUTPtr = 0;
     }
   }
   virtual CUT *newCUT(const CUT &object) = 0;
 protected:
   virtual bool runBaselineSuite() = 0;
   virtual bool CUTinvariantHolds() = 0;
   void setOUT(CUT *outPtr) { _OUTPtr = outPtr; }
                       // used by factory methods
   void logTestCaseStart(string testID) {
     _logStream << "Start test case " << testID << endl;
   }
   void logSubTestCaseStart(int caseNumber) {
     _logStream << "Start sub test case " << caseNumber << endl;
   }
   void logTestCaseResult(TestResult result) {
     _logStream << "RESULT: ";
     switch ( result ) {
     case Fail:  ++_failTally;
                 _logStream << "FAIL";
                 break;
     case TBD:   ++_TBDTally;
                 _logStream << "To be determined";
                 break;
     case Pass:  ++_passTally;
                 _logStream << "Pass";
                 break;
     default:
                 _logStream << "BAD result (" << int(result) << ')'
                            << endl;
     }
     _logStream << endl;
   }
   void logComment(string comment) {
     _logStream << "\t* " << comment << endl;
   }
   TestResult passOrFail(bool condition) {
     // Utility for a result that cannot be TBD.
     // This checks the invariant, too.
     if ( condition && CUTinvariantHolds() )
       return Pass;
     else
       return Fail;
   }
 private:
   string _CUTname;     // name of the class under test
   ofstream _logStream; // log stream
   CUT *_OUTPtr;        // pointer to current object under test
   int _passTally;      // number of test cases passing so far
   int _failTally;      // number of test cases failing so far
   int _TBDTally;       // number of test cases provisionally
                        // passing so far
 };

C++ code for the VelocityTester class.

 // VelocityTester.h
 #include <cmath>
 #include "Tester.h"
 #include "Velocity.h"

 class VelocityTester : public Tester<Velocity> {
 public:
   VelocityTester(string logFileName)
     : Tester<Velocity>("Velocity", logFileName) {
     runBaselineSuite();
   }
   virtual void runFunctionalSuite() {
     tc_Velocity();
     tcs_VelocitySpeedDirection();
     tcs_setDirection();
   }
   virtual void runStructuralSuite() { }
   virtual void runInteractionSuite() { }
   virtual Velocity *newCUT() { return new Velocity(); }
   virtual Velocity *newCUT(const Velocity &v) {
     return new Velocity(v);
   }
   virtual Velocity *newCUT(const Speed speed, const Direction dir) {
     return new Velocity(speed, dir);
   }
 protected:
   virtual bool runBaselineSuite() {
     // Verify that the accessor operations are consistent
     logComment("Running baseline test suite.");
     Velocity v(1000, 321);
     if ( v.getSpeed() == 1000 && v.getDirection() == 321 &&
          v.getSpeedX() == 777 && v.getSpeedY() == -629 ) {
       logComment("Baseline suite passed");
       return true;
     }
     else {
       logComment("Baseline suite FAILED");
       return false;
     }
   }
   virtual bool CUTinvariantHolds() {
     const Velocity &OUT = *getOUT();
     const Direction direction = OUT.getDirection();
     const Speed speed = OUT.getSpeed();
     const Speed speedX = OUT.getSpeedX();
     const Speed speedY = OUT.getSpeedY();
     static const double PI = acos(-1.0);
     const double radians = 2.0 * PI * direction / 360.0;
     bool result =
       0 <= direction && direction < 360 && speed >= 0 &&
       speedX == int(cos(radians) * double(speed)) &&
       speedY == int(sin(radians) * double(speed)) &&
       (speedX*speedX + speedY*speedY) <= speed*speed;
     if ( ! result ) {
       logComment("Invariant does not hold");
     }
     return result;
   }
   void tc_Velocity() { // test default constructor
     logTestCaseStart("Velocity()");
     setOUT(newCUT());
     Velocity &OUT = *getOUT();
     logTestCaseResult(passOrFail(OUT.getSpeed() == 0 &&
                                  OUT.getDirection() == 0));
     disposeOUT();
   }
   void tcs_VelocitySpeedDirection() {
     // test Velocity(Speed, Direction)
     // This runs 360 test cases
     logTestCaseStart("Velocity(Speed, Direction)");
     const Speed fixedSpeed = 1000;
     for ( Direction dir = 0 ; dir < 360 ; ++dir ) {
       logSubTestCaseStart(dir);
       setOUT(newCUT(fixedSpeed, dir));
       Velocity &OUT = *getOUT();
       logTestCaseResult(passOrFail(OUT.getDirection() == dir &&
                                    OUT.getSpeed() == fixedSpeed));
       disposeOUT();
     }
   }
   void tcs_setDirection() {
     logTestCaseStart("setDirection");
     const Speed fixedSpeed = 1000;
     setOUT(newCUT(fixedSpeed, 359)); // any dir value != 0
     Velocity &OUT = *getOUT();
     for ( Direction dir = 0 ; dir < 360 ; ++dir ) {
       logSubTestCaseStart(dir);
       OUT.setDirection(dir);
       logTestCaseResult(passOrFail(OUT.getDirection() == dir &&
                                    OUT.getSpeed() == fixedSpeed));
     }
     disposeOUT();
   }
 };

The main program creates an instance of the VelocityTester class and runs all the suites. Results are logged to the VelocityTestResults.txt file.

 #include <iostream>
 using namespace std; // introduces namespace std
 #include "VelocityTester.h"

 int main() {
   VelocityTester vt("VelocityTestResults.txt");
   vt.runAllSuites();
   return 0;
 }

Java code for the Tester class. We define a TestResult class to represent three possible outcomes of a test case.

 import java.io.*;
 import java.util.*;

 /**
  A class that defines three possible test case outcomes:
    Fail - failure
    TBD  - unknown ("To be determined"), usually because
           result requires further analysis or observation
    Pass - success
  @see Tester
 */
 public class TestResult {
   public TestResult(String value) { _value = value; }
   public String toString() { return _value; }
   private String _value;
   static public final TestResult Fail = new TestResult("Fail");
   static public final TestResult TBD  = new TestResult("TBD");
   static public final TestResult Pass = new TestResult("Pass");
 }

 /**
  An abstract class that represents a class tester. The
  responsibilities of a tester for a class C include:
    1. running test suites,
    2. creating instances of the class it tests,
    3. logging test results.
 */
 abstract class Tester {
   /**
     Constructs a new instance.
     @param CUTname     the name of the class under test
     @param logFileName the name of the file into which results
                        are logged
   */
   public Tester(String CUTname, String logFileName) {
     _CUTname = CUTname;
     try {
       _log = new FileWriter(logFileName);
     }
     catch (IOException e) {
       System.err.println("Could not open file " + logFileName);
     }
     _OUT = null;
     _passTally = 0;
     _failTally = 0;
     _TBDTally = 0;
     try {
       String line = new Date().toString() + '\n';
       _log.write(line);
     }
     catch (IOException e) {
       System.err.println("Error writing to log file");
       e.printStackTrace();
     }
   }
   public void dispose() { // Summarize results in log
     try {
       int total = totalTally();
       _log.write("\n");
       _log.write("Summary of results:\n");
       _log.write("\t" + total + " test cases run\n");
       _log.write("\t" + "Pass:" + " " + passTally() + '\n');
       _log.write("\t" + "Fail:" + " " + failTally() + '\n');
       _log.write("\t" + "TBD :" + " " + TBDTally() + '\n');
       _log.close();
     }
     catch (IOException e) {
       System.err.println("Error writing to log file");
       e.printStackTrace();
     }
   }
   public abstract Object newCUT(Object object); // copy object
   public void runAllSuites() {
     runFunctionalSuite();
     runStructuralSuite();
     runInteractionSuite();
   }
   public abstract void runFunctionalSuite();
   public abstract void runStructuralSuite();
   public abstract void runInteractionSuite();
   public int passTally() { return _passTally; }
   public int failTally() { return _failTally; }
   public int TBDTally()  { return _TBDTally; }
   public int totalTally() {
     return _passTally + _failTally + _TBDTally;
   }
   public Object getOUT() { return _OUT; }
   public void disposeOUT() { _OUT = null; }
   protected abstract boolean runBaselineSuite();
   protected abstract boolean CUTinvariantHolds();
   protected void setOUT(Object outPtr) { _OUT = outPtr; }
   protected void logTestCaseStart(String testID) {
     try {
       _log.write("Start test case " + testID + '\n');
       _log.flush();
     }
     catch (IOException e) {
       System.err.println("Error writing to log file");
       e.printStackTrace();
     }
   }
   protected void logSubTestCaseStart(int caseNumber) {
     try {
       _log.write("Start sub test case " + caseNumber + '\n');
       _log.flush();
     }
     catch (IOException e) {
       System.err.println("Error writing to log file");
       e.printStackTrace();
     }
   }
   protected void logTestCaseResult(TestResult result) {
     if ( result == TestResult.Fail ) {
       ++_failTally;
       try {
         _log.write("\tOUT: " + getOUT().toString() + '\n');
         _log.flush();
       }
       catch (IOException e) {
         System.err.println("Error writing to log file");
         e.printStackTrace();
       }
     }
     else if ( result == TestResult.TBD ) {
       ++_TBDTally;
     }
     else if ( result == TestResult.Pass ) {
       ++_passTally;
     }
     try {
       _log.write("RESULT: " + result.toString() + '\n');
       _log.flush();
     }
     catch (IOException e) {
       System.err.println("Error writing to log file");
       e.printStackTrace();
     }
   }
   protected void logComment(String comment) {
     try {
       _log.write("\t* " + comment + '\n');
       _log.flush();
     }
     catch (IOException e) {
       System.err.println("Error writing to log file");
       e.printStackTrace();
     }
   }
   protected TestResult passOrFail(boolean condition) {
     // Utility for a result that cannot be TBD.
     // This checks the invariant, too.
     if ( condition && CUTinvariantHolds() )
       return TestResult.Pass;
     else
       return TestResult.Fail;
   }
   private String _CUTname;   // name of the class under test
   private FileWriter _log;   // log stream
   private Object _OUT;       // current object under test
   private int _passTally;    // number of test cases passing so far
   private int _failTally;    // number of test cases failing so far
   private int _TBDTally;     // number of test cases provisionally
                              // passing so far
 }
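Note that logTestCaseResult() compares outcomes with == rather than equals(). That is safe because TestResult follows the pre-Java-5 "typesafe enum" idiom: its constructor is used only to create three shared instances, so reference comparison suffices. A minimal standalone sketch (the class name TestResultDemo is ours) illustrates the idiom:

```java
public class TestResultDemo {
    // A fixed set of shared instances; clients never construct
    // their own, so == compares identity and value at once.
    static final class TestResult {
        private final String _value;
        private TestResult(String value) { _value = value; }
        public String toString() { return _value; }
        static final TestResult Fail = new TestResult("Fail");
        static final TestResult TBD  = new TestResult("TBD");
        static final TestResult Pass = new TestResult("Pass");
    }
    public static void main(String[] args) {
        TestResult r = TestResult.Pass;
        System.out.println(r == TestResult.Pass); // true: same instance
        System.out.println(r);                    // Pass
    }
}
```

In Java 5 and later, an enum type would provide the same guarantees with less code; the book's version predates that feature.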

Java code for the VelocityTester class.

 /**
  A class to test class Velocity.
 */
 class VelocityTester extends Tester {
   public static void main(String args[]) {
     VelocityTester vt = new VelocityTester("VelTest--Java.txt");
     vt.runAllSuites();
     vt.dispose();
   }
   public VelocityTester(String logFileName) {
     super("Velocity", logFileName);
     runBaselineSuite();
   }
   public void runFunctionalSuite() {
     tc_Velocity();
     tcs_VelocitySpeedDirection();
     tcs_setDirection();
   }
   public void runStructuralSuite() { }
   public void runInteractionSuite() { }
   // Factory methods for creating an instance of CUT
   public Object newCUT(Object object) {
     Velocity v = (Velocity)object;
     return new Velocity(v.getSpeed(), v.getDirection());
   }
   public Velocity newCUT() {
     return new Velocity();
   }
   public Velocity newCUT(int speed, int dir) {
     return new Velocity(speed, dir);
   }
   protected boolean runBaselineSuite() {
     // Verify that the accessor operations are consistent
     logComment("Running baseline test suite.");
     Velocity v = new Velocity(1000, 321);
     if ( v.getSpeed() == 1000 && v.getDirection() == 321 &&
          v.getSpeedX() == 777 && v.getSpeedY() == -629 ) {
       logComment("Baseline suite passed");
       return true;
     }
     else {
       logComment("Baseline suite FAILED");
       return false;
     }
   }
   protected boolean CUTinvariantHolds() {
     Velocity OUT = (Velocity)getOUT();
     int direction = OUT.getDirection();
     int speed = OUT.getSpeed();
     int speedX = OUT.getSpeedX();
     int speedY = OUT.getSpeedY();
     final double radians = Math.toRadians(direction);
     boolean result =
       0 <= direction && direction < 360 && speed >= 0 &&
       speedX == (int)(Math.cos(radians) * (double)speed) &&
       speedY == (int)(Math.sin(radians) * (double)speed) &&
       (speedX*speedX + speedY*speedY) <= speed*speed;
     if ( ! result ) {
       logComment("Invariant does not hold");
     }
     return result;
   }
   protected void tc_setDirection001() {
     logTestCaseStart("setDirection001");
     setOUT(newCUT(1000, 0));
     Velocity OUT = (Velocity)getOUT();
     OUT.setDirection(1);
     logTestCaseResult(passOrFail(OUT.getDirection() == 1));
     disposeOUT();
   }
   void tc_Velocity() { // test default constructor
     logTestCaseStart("Velocity()");
     setOUT(newCUT());
     Velocity OUT = (Velocity)getOUT();
     logTestCaseResult(passOrFail(OUT.getSpeed() == 0 &&
                                  OUT.getDirection() == 0));
     disposeOUT();
   }
   void tcs_VelocitySpeedDirection() {
     // test Velocity(Speed, Direction)
     logTestCaseStart("Velocity(Speed, Direction)");
     final int speedValue[] = { 6, 12, 1000 };
     for ( int i = 0 ; i < speedValue.length ; ++i ) {
       int speed = speedValue[i];
       for ( int dir = 0 ; dir < 360 ; ++dir ) {
         logSubTestCaseStart(dir);
         setOUT(newCUT(speed, dir));
         Velocity OUT = (Velocity)getOUT();
         logTestCaseResult(passOrFail(OUT.getDirection() == dir &&
                                      OUT.getSpeed() == speed));
         disposeOUT();
       }
     }
   }
   void tcs_setDirection() {
     logTestCaseStart("setDirection");
     final int fixedSpeed = 1000;
     setOUT(newCUT(fixedSpeed, 359)); // any dir value != 0
     Velocity OUT = (Velocity)getOUT();
     for ( int dir = 0 ; dir < 360 ; ++dir ) {
       logSubTestCaseStart(dir);
       OUT.setDirection(dir);
       logTestCaseResult(passOrFail(OUT.getDirection() == dir &&
                                    OUT.getSpeed() == fixedSpeed));
     }
     disposeOUT();
   }
 }


A Practical Guide to Testing Object-Oriented Software
ISBN: 0201325640
Year: 2005
Pages: 126