A test driver is a program that runs test cases and collects the results. We describe three general approaches to writing test drivers. There are probably others and certainly there are many variations on what we present. We recommend one approach over the others and will develop it in detail.[6]
Consider three ways to implement a test driver for the Velocity class. We will use C++ to illustrate the structure of the test driver design.
All three designs are equivalent with respect to their support for running the same test cases and reporting the results. Some of the strengths and weaknesses of each are summarized in Figure 5.12.

Figure 5.12. Strengths and weaknesses of the test driver designs

The second and third designs are attractive because they can be implemented using standard features of most object-oriented programming languages. We prefer the third design.[8] Although it separates test code from production code, the relationship between a class and a driver for testing it is easy to remember: each class C has a tester class called CTester. The use of a separate class is not necessarily a disadvantage. The proximity of a driver's code to the code for the class it tests is advantageous if both are being developed by the same person; otherwise it is a disadvantage. The tester class design allows some flexibility, since in most programming languages two classes can be defined in the same file or in different files.
We will concentrate on the tester class design, although most aspects of developing such a driver can be adapted in a straightforward manner to the other designs.

Test Driver Requirements

Before looking at tester classes in more detail, consider the requirements for a test driver for execution-based testing of a class. The main purpose of a test driver is to run executable test cases and to report the results of running them. A test driver should have a relatively simple design because we seldom have the time or resources to do execution-based testing of driver software. We rely primarily on code reviews to check driver code. In support of reviews, and to facilitate maintenance, we should be able to readily trace the testing requirements in a test plan to the code in a driver. A test driver must be easy to maintain and adapt in response to changes in the incremental specification for the class it tests. Ideally, we should be able to reuse code from the test drivers for existing classes in creating new drivers. Figure 5.13 shows a model for a class Tester that satisfies these requirements. The public interface provides operations to run the various test suites or all of them. The test cases are organized into suites based on their origin: functional if they were identified from the specification, structural if they were identified from the code, and interaction if they test the correct operation of sequences of events on an object, such as pairs of input/output transitions. We identify these categories to facilitate maintenance of tests. The lines between the categories are sometimes hard to draw, but the general criterion for putting a test case in a category concerns how the test case was initially identified and what impact changes to the class have on it. Interaction test cases are usually generated to augment other test cases to achieve some level of coverage.
Implementation-based test cases are generated to test some behavior of the code that arises from the implementation rather than the specification. If the implementation for a class changes, but not the specification, then we should be able to update the driver code just by modifying the code that runs implementation-based test cases. We refer to the set of test cases in a particular category as a test suite for that category. Thus, we identify a functional (specification-based) test suite, a structural (implementation-based) test suite, and an interaction test suite.

Figure 5.13. A class model for requirements of a Tester class

The tally operations on a tester can be used to check how many test cases have passed so far. A driver keeps a log of test case execution and the results in a file whose name is specified at the time the driver is instantiated. The protected logTestCaseStart(), logTestCaseResult(), and logComment() operations place information in the log file. The protected runBaselineSuite() operation verifies the correctness of methods in the class under test (CUT) that are used by the test driver in checking the results of test cases. Accessor and modifier methods are usually tested as part of the baseline test suite for a class. The CUTinvariantHolds() operation evaluates the invariant of the CUT using the state of the current object under test (OUT). The Tester class is abstract. Code for the class can provide default implementations for operations common to all (concrete) testers. These include operations for logging test case results and performing other functions common to all class test drivers, such as measuring heap allocation and providing support for timing the execution of individual test cases. The methods to run the test suites and to check the class invariant must be implemented for each specific CUT. We now look at the typical design for a concrete Tester class. A design for VelocityTester is shown in Figure 5.14.
The figure shows a little more detail about the Tester class than is shown in Figure 5.13, including some operations to manipulate an OUT and some factory methods for creating instances of the CUT. We will describe these in the next section. A concrete Tester class is responsible primarily for implementing methods for test cases and running them as part of a suite.

Figure 5.14. Class model for a VelocityTester class

Tester Class Design

Since the Tester class provides operations to help report test case results, the primary responsibility of a concrete Tester class, such as VelocityTester, is to run test cases and report results. The main components of the class interface are operations to set up test cases, to analyze the results of test cases, to execute test cases, and to create instances of the CUT to be used in running test cases. Our design has proven both flexible and maintainable. It has proven quite useful when instances of a class are needed to test another class, as we will show in the next chapter. Within a concrete tester class, we define one method for each of the test cases. We refer to these as test case methods. They provide traceability to the test plan: one method per test case or per group of closely related test cases. The purpose of a test case method is to execute a test case by creating the input state, generating a sequence of events, and checking the output state.

Test Case Methods

In a Tester class, each test case is represented by a single method. The name of the method should reflect the test case in some way. For small numbers of test cases, we can sequentially number the test cases identified in the test plan and name the operations runTestCase01(), runTestCase02(), and so on. Sequential numbering is simple, but it can cause problems if the test cases in a plan are ordered in some way and test cases are later inserted or deleted. Usually a naming convention can be developed based on the derivation of the test cases (see sidebar).
The responsibility of a test case method is to construct the input state for a test case, for example by instantiating an OUT and any objects to be passed as parameters, and then to generate the events specified by the test case. A test case method reports the status of the result: pass, fail, or TBD[9] to indicate that some action is needed to determine the result. A test case method also verifies that the CUT's invariant holds for the OUT.
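Concretely, a test case method with these responsibilities might look like the following minimal sketch. The Counter class, the invariantHolds() helper, and the method name are hypothetical stand-ins for a real CUT, not part of the Velocity example developed later:

```cpp
// A minimal sketch of a test case method for a hypothetical CUT.
enum TestResult { Fail, TBD, Pass };

class Counter {                 // hypothetical class under test (CUT)
public:
    Counter() : _count(0) {}
    void increment() { ++_count; }
    int count() const { return _count; }
private:
    int _count;
};

// Evaluates the CUT's invariant for the object under test (OUT).
bool invariantHolds(const Counter &out) { return out.count() >= 0; }

// Test case method: build the input state, generate the events,
// check the output state and invariant, and report the result.
TestResult tc_increment() {
    Counter out;                             // input state: freshly constructed OUT
    out.increment();                         // event specified by the test case
    bool postcondition = (out.count() == 1); // check the output state
    return (postcondition && invariantHolds(out)) ? Pass : Fail;
}
```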
In our code, a test case method has the general structure shown in pseudocode in Figure 5.15.

Figure 5.15. Pseudocode for a typical test case method

Tip: Implement a test script method for each test case when creating a Tester class for classes in which there are many interaction test cases. A test script method is responsible for creating the OUT for use by a test case method, invoking the test case method, and then checking postconditions and the class invariant. It also reports the results. The test case method handles only the event sequence on the OUT. An interaction test case can then be coded as a single test script method that invokes a sequence of test case methods and then checks and reports the results.

OUT Factory Methods

Classes are tested by creating instances and checking their behaviors against a set of test cases. We have referred to an instance to which a test case is being applied as the object under test (OUT). The main requirement with respect to the OUT is that its attributes be set to the inputs for the test case, so that the preconditions associated with the test case to be applied are met. The Tester class includes setOUT() and getOUT() operations that are used by test case methods to access the current OUT (see Figure 5.14). A disposeOUT() operation is available to end the tester's association with the current OUT.
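The division of labor described in the tip above, a test script method that owns the OUT and test case methods that handle only the event sequence, might be sketched as follows. The Toggle class and all the method names are hypothetical:

```cpp
// Hypothetical CUT: a two-state toggle switch.
class Toggle {
public:
    Toggle() : _on(false) {}
    void flip() { _on = !_on; }
    bool isOn() const { return _on; }
private:
    bool _on;
};

class ToggleTester {
public:
    // Test script method: creates the OUT, invokes a sequence of
    // test case methods (an interaction test), then checks the
    // postcondition for the whole sequence and reports the result.
    bool ts_flipTwice() {
        Toggle out;                  // create the OUT
        tc_flip(out);                // event sequence, step 1
        tc_flip(out);                // event sequence, step 2
        return out.isOn() == false;  // two flips return to the initial state
    }
protected:
    // Test case method: generates events only; no setup or reporting.
    void tc_flip(Toggle &out) { out.flip(); }
};
```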
A tester's interface includes a set of operations to construct instances of the CUT. These include newCUT(Object), a factory method that creates an instance of the CUT that is a copy of the object passed as its argument; it resembles a copy constructor in C++. A concrete Tester class should implement a factory method corresponding to each constructor defined in the CUT. Test case methods use these factory methods, rather than the CUT's constructors, to create an OUT, and use getOUT() to access the current OUT. In the case of VelocityTester, we define a newCUT() operation to create an instance of Velocity constructed with the default constructor, and setOUT() makes that instance the current OUT. We also define the newCUT(s: Speed, d: Direction) operation to create a new instance using the Velocity::Velocity(s: Speed, d: Direction) constructor. The test case methods must use these factory methods to create new instances of the CUT for reasons that will become apparent when we look at testing class hierarchies in Chapter 7.[10]
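The payoff of routing creation through factory methods can be sketched in a few lines: because newCUT() is virtual, a tester for a subclass can rerun every inherited test case method against instances of the subclass simply by overriding the factory. The classes below are hypothetical simplifications, not the actual Velocity implementation:

```cpp
// Hypothetical, simplified stand-ins for the CUT and a derived class.
class Velocity {
public:
    Velocity() : _speed(0) {}
    virtual ~Velocity() {}
    virtual int speed() const { return _speed; }
protected:
    int _speed;
};

class BoundedVelocity : public Velocity { /* hypothetical subclass */ };

class VelocityTester {
public:
    virtual ~VelocityTester() {}
    // Factory method used by every test case method in place of
    // a direct "new Velocity()" call.
    virtual Velocity *newCUT() { return new Velocity(); }
    bool tc_defaultSpeedIsZero() {
        Velocity *out = newCUT();        // factory call, not a constructor
        bool pass = (out->speed() == 0);
        delete out;
        return pass;
    }
};

// The subclass tester inherits every test case method unchanged;
// overriding newCUT() makes them all run against BoundedVelocity.
class BoundedVelocityTester : public VelocityTester {
public:
    Velocity *newCUT() override { return new BoundedVelocity(); }
};
```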
It is not uncommon for a Tester class to define additional factory methods for the convenience of test cases that need to create an OUT in some specific state. For example, the PuckSupplyTester class might provide a newPuckSupplyOfOne() operation to construct a PuckSupply instance containing a single puck. Such factory methods should be public because they are very useful when instances of the CUT are needed to test another class. The test case methods for the other class can use an instance of this Tester class as a helper to create instances in the necessary states. In implementing such methods, however, take care to use the other factory methods in the Tester and not the constructors for the CUT. Objects under test should be allocated from the heap because the use of a single object shared by all test cases will not work in the general case. It is also easier to understand test driver code written so that each test case method creates its own OUT and then disposes of it. Sharing such objects between test case methods increases coupling. Keep test driver code as simple as possible, even at the expense of some time and/or space inefficiency. One of the most frustrating aspects of developing test drivers is testing and debugging them: the more straightforward the code, the better the driver. In a language such as C++, in which the programmer must manage the heap, make each test case method responsible for deleting the objects it allocates. The disposeOUT() method can delete the current OUT.

Baseline Testing

Test case methods contain code to establish an OUT, which might require a series of modification requests to be sent to an instance of the CUT. Test case methods use accessor operations in the process of checking postconditions. If the constructors, modifier methods, and accessor methods for the CUT are incorrect, then the results reported by a tester are unreliable.
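A convenience factory such as the newPuckSupplyOfOne() operation mentioned above might be sketched like this. PuckSupply here is a hypothetical minimal stand-in; note that the convenience factory is built on the tester's ordinary factory method rather than on a CUT constructor:

```cpp
#include <vector>

// Hypothetical, minimal CUT: a supply of pucks.
class PuckSupply {
public:
    void add() { _pucks.push_back(1); }
    int size() const { return static_cast<int>(_pucks.size()); }
private:
    std::vector<int> _pucks;
};

class PuckSupplyTester {
public:
    // Ordinary factory method, mirroring the CUT's default constructor.
    PuckSupply *newCUT() { return new PuckSupply(); }
    // Convenience factory: an OUT in a specific state that several test
    // cases (and testers of other classes) need. It calls the other
    // factory method, not the constructor directly.
    PuckSupply *newPuckSupplyOfOne() {
        PuckSupply *p = newCUT();
        p->add();
        return p;
    }
};
```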
The first thing a tester must do is check that such constructors and methods are themselves correct by executing test cases for them. We call this set of test cases a baseline test suite.[11]
A baseline test suite is a set of test cases that tests the operations of the CUT that are needed by the other test cases to verify their outcomes. This suite includes tests for constructors and accessors. Most likely, all the test cases in the baseline test suite will be replicated in the functional test suite. We have identified two basic approaches to baseline testing, one specification-based and one implementation-based.
Base your choice on how closely you want to couple the code for a tester to the code for the class it tests. We have found the second approach to produce more reliable results, although it requires more programming effort and tightly couples the code of the two classes (for example, in C++ the CUT must declare its Tester class a friend). The second approach usually requires fewer test cases in the baseline suite than does the first.
Tip: Implement a protected method in a Tester class to check postcondition clauses. The same postcondition often appears in the specification of more than one operation defined for a class. Invoke these protected methods rather than coding the same postcondition checks in each test case method. Similarly, define a factory method to return an OUT in a state required by a test case. It is not uncommon for a number of test cases to specify the same preconditions for an OUT, and a convenient method to create such an instance reduces the amount of code in a test driver. If test script methods are being used to facilitate interaction testing in a class, write each test case method so that it verifies the input state for the test case before generating events on the OUT. Since tester classes are seldom formally tested themselves (by Tester classes), a little defensive programming can help in debugging them.

Running Test Suites

The abstract Tester class includes in its protocol operations to run all test cases or selected suites. The methods for these operations are straightforward to implement: each calls a sequence of test case methods. Take care to ensure that the baseline test suite is executed before any of the other suites. A possible design calls for executing the baseline test suite when a concrete tester class is instantiated, that is, as part of its initialization. If the CUT contains static member functions and/or data members, then the Tester class should incorporate code that ensures that code has already been tested and works correctly, or at least warns that the class itself might need testing before its instances can be tested. This is not critical, since the goal of testing a class is to uncover bugs, not to diagnose their source. However, such a reminder can serve to ensure that a test driver is written for those static members.
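The protected postcondition-check helper recommended in the tip above might look like the following sketch. The Account class and its tester are hypothetical; the point is that both test case methods invoke one shared check instead of duplicating it:

```cpp
// Hypothetical CUT with two operations that share a postcondition clause:
// the balance is never negative.
class Account {
public:
    Account() : _balance(0) {}
    void deposit(int amount)  { _balance += amount; }
    void withdraw(int amount) { if (amount <= _balance) _balance -= amount; }
    int balance() const { return _balance; }
private:
    int _balance;
};

class AccountTester {
public:
    bool tc_deposit() {
        Account out;
        out.deposit(10);
        return balanceIsNonNegative(out) && out.balance() == 10;
    }
    bool tc_withdrawMoreThanBalance() {
        Account out;
        out.withdraw(5);  // should be refused; balance stays 0
        return balanceIsNonNegative(out) && out.balance() == 0;
    }
protected:
    // One protected helper for a postcondition clause that appears in
    // the specification of several operations.
    bool balanceIsNonNegative(const Account &out) {
        return out.balance() >= 0;
    }
};
```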
Tip: Be sure to rerun all test cases after debugging code is removed from the code for a class. Sometimes developers add code to help in debugging a class, for example assertion checks and statements that write trace information to streams. In many shops, debugging code is removed before software is deployed. (To support this, for example, C++'s assert() macro (library header file assert.h) checks assertions only if NDEBUG is not defined.) Under some circumstances, code that includes debugging support can behave differently from the same code without it. Consequently, take care to run test cases in both debugging and nondebugging modes.

Reporting Test Results

A test case method determines the success of a test case. In our design, test case methods report results to the tester instance itself, which tallies test suite run statistics. It is useful for each test case method to identify itself as part of its report; a string denoting the script name or purpose works well. Keep in mind that the purpose of testing is not to debug a class, but to see whether it meets its specification. Since a class's tester is usually its developer, writing code in a driver that attempts to diagnose problems with the CUT is very appealing. Extensive effort put into diagnostic code is almost always misplaced; symbolic debuggers and other tools are better for such activities. Such debugging can, of course, be done in the context of the test driver.

Example of Test Driver Code

We illustrate the design of a Tester class by showing representative parts[12] of VelocityTester written in C++ and in Java. Features and restrictions of the two languages result in different designs. A test plan for Velocity is shown in Figure 5.16. A set of test case descriptions is shown in Figure 5.17. Some test cases are determined by combinations of values for attributes over a range of values.
Figure 5.16. A component test plan for the Velocity class

Figure 5.17. Test case descriptions for some of the Velocity operations

The C++ code for the Tester and VelocityTester classes is shown first, followed by the Java code. We begin with some observations about the code.
C++ code for the Tester class. This code was compiled using Metrowerks CodeWarrior Pro 5.

    // Tester.h
    #include <fstream>
    #include <iomanip>
    #include <ctime>
    #include <string>
    using namespace std;

    enum TestResult { Fail, TBD, Pass };

    template<class CUT>
    class Tester {
    public:
       Tester(string CUTname, string logFileName)
             : _CUTname(CUTname), _logStream(logFileName.c_str()),
               _OUTPtr(0), _passTally(0), _failTally(0), _TBDTally(0) {
          time_t systime = time(0);
          _logStream << ctime(&systime) << endl;
       }
       virtual ~Tester() {
          // Summarize results in the log
          _logStream << endl << "Summary of results:" << endl
             << '\t' << totalTally() << " test cases run" << endl
             << fixed << showpoint << setprecision(2)
             << '\t' << setw(7) << "Pass:" << setw(5) << passTally() << endl
             << '\t' << setw(7) << "Fail:" << setw(5) << failTally() << endl
             << '\t' << setw(7) << "TBD :" << setw(5) << TBDTally() << endl;
          _logStream.close();
       }
       virtual void runAllSuites() {
          runFunctionalSuite();
          runStructuralSuite();
          runInteractionSuite();
       }
       virtual void runFunctionalSuite() = 0;
       virtual void runStructuralSuite() = 0;
       virtual void runInteractionSuite() = 0;
       int passTally() const { return _passTally; }
       int failTally() const { return _failTally; }
       int TBDTally() const { return _TBDTally; }
       int totalTally() const {
          return _passTally + _failTally + _TBDTally;
       }
       virtual CUT *getOUT() { return _OUTPtr; }  // current OUT
       virtual void disposeOUT() {  // finish use of the current OUT
          if ( _OUTPtr ) {
             delete _OUTPtr;
             _OUTPtr = 0;
          }
       }
       virtual CUT *newCUT(const CUT &object) = 0;
    protected:
       virtual bool runBaselineSuite() = 0;
       virtual bool CUTinvariantHolds() = 0;
       void setOUT(CUT *outPtr) { _OUTPtr = outPtr; }  // used by factory methods
       void logTestCaseStart(string testID) {
          _logStream << "Start test case " << testID << endl;
       }
       void logSubTestCaseStart(int caseNumber) {
          _logStream << "Start sub test case " << caseNumber << endl;
       }
       void logTestCaseResult(TestResult result) {
          _logStream << "RESULT: ";
          switch ( result ) {
             case Fail: ++_failTally; _logStream << "FAIL"; break;
             case TBD:  ++_TBDTally;  _logStream << "To be determined"; break;
             case Pass: ++_passTally; _logStream << "Pass"; break;
             default:   _logStream << "BAD result (" << int(result) << ')' << endl;
          }
          _logStream << endl;
       }
       void logComment(string comment) {
          _logStream << "\t* " << comment << endl;
       }
       TestResult passOrFail(bool condition) {
          // Utility for a result that cannot be TBD.
          // This checks the invariant, too.
          if ( condition && CUTinvariantHolds() )
             return Pass;
          else
             return Fail;
       }
    private:
       string _CUTname;     // name of the class under test
       ofstream _logStream; // log stream
       CUT *_OUTPtr;        // pointer to the current object under test
       int _passTally;      // number of test cases passing so far
       int _failTally;      // number of test cases failing so far
       int _TBDTally;       // number of test cases provisionally passing so far
    };

C++ code for the VelocityTester class.
    // VelocityTester.h
    #include <cmath>
    #include "Tester.h"
    #include "Velocity.h"

    class VelocityTester : public Tester<Velocity> {
    public:
       VelocityTester(string logFileName)
             : Tester<Velocity>("Velocity", logFileName) {
          runBaselineSuite();
       }
       virtual void runFunctionalSuite() {
          tc_Velocity();
          tcs_VelocitySpeedDirection();
          tcs_setDirection();
       }
       virtual void runStructuralSuite() { }
       virtual void runInteractionSuite() { }
       virtual Velocity *newCUT() { return new Velocity(); }
       virtual Velocity *newCUT(const Velocity &v) { return new Velocity(v); }
       virtual Velocity *newCUT(const Speed speed, const Direction dir) {
          return new Velocity(speed, dir);
       }
    protected:
       virtual bool runBaselineSuite() {
          // Verify that the accessor operations are consistent.
          logComment("Running baseline test suite.");
          Velocity v(1000, 321);
          if ( v.getSpeed() == 1000 && v.getDirection() == 321 &&
               v.getSpeedX() == 777 && v.getSpeedY() == -629 ) {
             logComment("Baseline suite passed");
             return true;
          }
          else {
             logComment("Baseline suite FAILED");
             return false;
          }
       }
       virtual bool CUTinvariantHolds() {
          const Velocity &OUT = *getOUT();
          const Direction direction = OUT.getDirection();
          const Speed speed = OUT.getSpeed();
          const Speed speedX = OUT.getSpeedX();
          const Speed speedY = OUT.getSpeedY();
          static const double PI = 3.14159265;
          const double radians = 2.0 * PI * direction / 360.0;
          bool result =
             0 <= direction && direction < 360 &&
             speed >= 0 &&
             speedX == int(cos(radians) * double(speed)) &&
             speedY == int(sin(radians) * double(speed)) &&
             (speedX*speedX + speedY*speedY) <= speed*speed;
          if ( ! result ) {
             logComment("Invariant does not hold");
          }
          return result;
       }
       void tc_Velocity() {  // test the default constructor
          logTestCaseStart("Velocity()");
          setOUT(newCUT());
          Velocity &OUT = *getOUT();
          logTestCaseResult(passOrFail(OUT.getSpeed() == 0 &&
                                       OUT.getDirection() == 0));
          disposeOUT();
       }
       void tcs_VelocitySpeedDirection() {  // test Velocity(Speed, Direction)
          // This runs 360 test cases.
          logTestCaseStart("Velocity(Speed, Direction)");
          const Speed fixedSpeed = 1000;
          for ( Direction dir = 0 ; dir < 360 ; ++dir ) {
             logSubTestCaseStart(dir);
             setOUT(newCUT(fixedSpeed, dir));
             Velocity &OUT = *getOUT();
             logTestCaseResult(passOrFail(OUT.getDirection() == dir &&
                                          OUT.getSpeed() == fixedSpeed));
             disposeOUT();
          }
       }
       void tcs_setDirection() {
          logTestCaseStart("setDirection");
          const Speed fixedSpeed = 1000;
          setOUT(newCUT(fixedSpeed, 359));  // any dir value != 0
          Velocity &OUT = *getOUT();
          for ( Direction dir = 0 ; dir < 360 ; ++dir ) {
             logSubTestCaseStart(dir);
             OUT.setDirection(dir);
             logTestCaseResult(passOrFail(OUT.getDirection() == dir &&
                                          OUT.getSpeed() == fixedSpeed));
          }
          disposeOUT();
       }
    };

The main program creates an instance of the Tester class and runs all the suites. Results are logged to the VelocityTestResults.txt file.

    #include <iostream>
    using namespace std;  // introduces namespace std
    #include "VelocityTester.h"

    int main() {
       VelocityTester vt("VelocityTestResults.txt");
       vt.runAllSuites();
       return 0;
    }

Java code for the Tester class. We define a TestResult class to represent the three possible outcomes of a test case.
    import java.io.*;
    import java.util.*;

    /**
     * A class that defines three possible test case outcomes:
     *    Fail - failure
     *    TBD  - unknown ("To be determined"), usually because the result
     *           requires further analysis or observation
     *    Pass - success
     * @see Tester
     */
    public class TestResult {
       public TestResult(String value) { _value = value; }
       public String toString() { return _value; }
       private String _value;
       static public final TestResult Fail = new TestResult("Fail");
       static public final TestResult TBD  = new TestResult("TBD");
       static public final TestResult Pass = new TestResult("Pass");
    }

    /**
     * An abstract class that represents a class tester.
     * The responsibilities of a tester for a class C include:
     *    1. running test suites,
     *    2. creating instances of the class it tests, and
     *    3. logging test results.
     */
    abstract class Tester {
       /**
        * Constructs a new instance.
        * @param CUTname the name of the class under test
        * @param logFileName the name of the file into which results are logged
        */
       public Tester(String CUTname, String logFileName) {
          _CUTname = CUTname;
          try {
             _log = new FileWriter(logFileName);
          } catch (IOException e) {
             System.err.println("Could not open file " + logFileName);
          }
          _OUT = null;
          _passTally = 0;
          _failTally = 0;
          _TBDTally = 0;
          try {
             String line = new Date().toString() + '\n';
             _log.write(line);
          } catch (IOException e) {
             System.err.println("Error writing to log file");
             e.printStackTrace();
          }
       }

       public void dispose() {
          // Summarize results in the log
          try {
             int total = totalTally();
             _log.write("\n");
             _log.write("Summary of results:\n");
             _log.write("\t" + total + " test cases run\n");
             _log.write("\t" + "Pass:" + " " + passTally() + '\n');
             _log.write("\t" + "Fail:" + " " + failTally() + '\n');
             _log.write("\t" + "TBD :" + " " + TBDTally() + '\n');
             _log.close();
          } catch (IOException e) {
             System.err.println("Error writing to log file");
             e.printStackTrace();
          }
       }

       public abstract Object newCUT(Object object); // copy object

       public void runAllSuites() {
          runFunctionalSuite();
          runStructuralSuite();
          runInteractionSuite();
       }
       public abstract void runFunctionalSuite();
       public abstract void runStructuralSuite();
       public abstract void runInteractionSuite();

       public int passTally() { return _passTally; }
       public int failTally() { return _failTally; }
       public int TBDTally() { return _TBDTally; }
       public int totalTally() { return _passTally + _failTally + _TBDTally; }

       public Object getOUT() { return _OUT; }
       public void disposeOUT() { _OUT = null; }

       protected abstract boolean runBaselineSuite();
       protected abstract boolean CUTinvariantHolds();
       protected void setOUT(Object out) { _OUT = out; }

       protected void logTestCaseStart(String testID) {
          try {
             _log.write("Start test case " + testID + '\n');
             _log.flush();
          } catch (IOException e) {
             System.err.println("Error writing to log file");
             e.printStackTrace();
          }
       }
       protected void logSubTestCaseStart(int caseNumber) {
          try {
             _log.write("Start sub test case " + caseNumber + '\n');
             _log.flush();
          } catch (IOException e) {
             System.err.println("Error writing to log file");
             e.printStackTrace();
          }
       }
       protected void logTestCaseResult(TestResult result) {
          if ( result == TestResult.Fail ) {
             ++_failTally;
             try {
                _log.write("\tOUT: " + getOUT().toString() + '\n');
                _log.flush();
             } catch (IOException e) {
                System.err.println("Error writing to log file");
                e.printStackTrace();
             }
          }
          else if ( result == TestResult.TBD ) {
             ++_TBDTally;
          }
          else if ( result == TestResult.Pass ) {
             ++_passTally;
          }
          try {
             _log.write("RESULT: " + result.toString() + '\n');
             _log.flush();
          } catch (IOException e) {
             System.err.println("Error writing to log file");
             e.printStackTrace();
          }
       }
       protected void logComment(String comment) {
          try {
             _log.write("\t* " + comment + '\n');
             _log.flush();
          } catch (IOException e) {
             System.err.println("Error writing to log file");
             e.printStackTrace();
          }
       }
       protected TestResult passOrFail(boolean condition) {
          // Utility for a result that cannot be TBD.
          // This checks the invariant, too.
          if ( condition && CUTinvariantHolds() )
             return TestResult.Pass;
          else
             return TestResult.Fail;
       }

       private String _CUTname;   // name of the class under test
       private FileWriter _log;   // log stream
       private Object _OUT;       // reference to the current object under test
       private int _passTally;    // number of test cases passing so far
       private int _failTally;    // number of test cases failing so far
       private int _TBDTally;     // number of test cases provisionally passing so far
    }

Java code for the VelocityTester class.

    /**
     * A class to test class Velocity.
     */
    class VelocityTester extends Tester {
       public static void main(String args[]) {
          VelocityTester vt = new VelocityTester("VelTest--Java.txt");
          vt.runAllSuites();
          vt.dispose();
       }

       public VelocityTester(String logFileName) {
          super("Velocity", logFileName);
          runBaselineSuite();
       }

       public void runFunctionalSuite() {
          tc_Velocity();
          tcs_VelocitySpeedDirection();
          tcs_setDirection();
       }
       public void runStructuralSuite() { }
       public void runInteractionSuite() { }

       // Factory methods for creating an instance of the CUT
       public Object newCUT(Object object) {
          Velocity v = (Velocity)object;
          return new Velocity(v.getSpeed(), v.getDirection());
       }
       public Velocity newCUT() { return new Velocity(); }
       public Velocity newCUT(int speed, int dir) {
          return new Velocity(speed, dir);
       }

       protected boolean runBaselineSuite() {
          // Verify that the accessor operations are consistent.
          logComment("Running baseline test suite.");
          Velocity v = new Velocity(1000, 321);
          if ( v.getSpeed() == 1000 && v.getDirection() == 321 &&
               v.getSpeedX() == 777 && v.getSpeedY() == -629 ) {
             logComment("Baseline suite passed");
             return true;
          }
          else {
             logComment("Baseline suite FAILED");
             return false;
          }
       }

       protected boolean CUTinvariantHolds() {
          Velocity OUT = (Velocity)getOUT();
          int direction = OUT.getDirection();
          int speed = OUT.getSpeed();
          int speedX = OUT.getSpeedX();
          int speedY = OUT.getSpeedY();
          final double radians = Math.toRadians(direction);
          boolean result =
             0 <= direction && direction < 360 &&
             speed >= 0 &&
             speedX == (int)(Math.cos(radians) * (double)speed) &&
             speedY == (int)(Math.sin(radians) * (double)speed) &&
             (speedX*speedX + speedY*speedY) <= speed*speed;
          if ( ! result ) {
             logComment("Invariant does not hold");
          }
          return result;
       }

       protected void tc_setDirection001() {
          logTestCaseStart("setDirection001");
          setOUT(newCUT(1000, 0));
          Velocity OUT = (Velocity)getOUT();
          OUT.setDirection(1);
          logTestCaseResult(passOrFail(OUT.getDirection() == 1));
          disposeOUT();
       }

       void tc_Velocity() {  // test the default constructor
          logTestCaseStart("Velocity()");
          setOUT(newCUT());
          Velocity OUT = (Velocity)getOUT();
          logTestCaseResult(passOrFail(OUT.getSpeed() == 0 &&
                                       OUT.getDirection() == 0));
          disposeOUT();
       }

       void tcs_VelocitySpeedDirection() {  // test Velocity(Speed, Direction)
          logTestCaseStart("Velocity(Speed, Direction)");
          final int speedValue[] = { 6, 12, 1000 };
          for ( int i = 0 ; i < 3 ; ++i ) {
             int speed = speedValue[i];
             for ( int dir = 0 ; dir < 360 ; ++dir ) {
                logSubTestCaseStart(dir);
                setOUT(newCUT(speed, dir));
                Velocity OUT = (Velocity)getOUT();
                logTestCaseResult(passOrFail(OUT.getDirection() == dir &&
                                             OUT.getSpeed() == speed));
                disposeOUT();
             }
          }
       }

       void tcs_setDirection() {
          logTestCaseStart("setDirection");
          final int fixedSpeed = 1000;
          setOUT(newCUT(fixedSpeed, 359));  // any dir value != 0
          Velocity OUT = (Velocity)getOUT();
          for ( int dir = 0 ; dir < 360 ; ++dir ) {
             logSubTestCaseStart(dir);
             OUT.setDirection(dir);
             logTestCaseResult(passOrFail(OUT.getDirection() == dir &&
                                          OUT.getSpeed() == fixedSpeed));
          }
          disposeOUT();
       }
    }