Testing Model 2: Optimized Method Ordering


Here we consider a second testing model. The basic idea is that a set (of size greater than one) of test cases is initially identified. Frequently, this is the entire set of test cases for the class, although it need not be. Once a set of test cases is identified, an attempt is made to order the test case runs in a way that maximizes early testing. This means that defects are potentially revealed in the context of as few methods as possible, making those defects easier to localize.

To understand this approach, we first consider a procedure that produces an ordering in which methods should be developed (called a development ordering), based on the test cases in which those methods appear. Our procedure attempts to maximize the number of opportunities to test as methods are developed. These opportunities are called test points. More specifically, a development ordering is just an ordered sequence of methods; a test point is a position following a method, m, within a development ordering, where an expected-positive test run of a particular test case is feasible following m, but not following any method in the ordering before m. At each test point, we also attempt to maximize the number of test runs that can be conducted. In this way, we attempt to maximize the amount of meaningful testing that can be done early in the development process.

One algorithm to produce optimal development orderings is to generate all possible orderings (all permutations of methods) and then count, for each ordering, the number of test points as well as the number of test cases for which expected-positive test runs are feasible at each test point. Because this exhaustive algorithm is computationally intractable (the number of orderings grows factorially with the number of methods), we propose a greedy heuristic. Specifically, to choose the next method in a development ordering, we first identify all methods that, if chosen next, would make at least one new expected-positive test run feasible. Of these methods, we select the method with the highest impact score. The impact score attempts to quantify which method provides the most "progress" toward making test runs feasible for additional test cases. If no method will result in a test point, we choose the method with the highest impact score among all methods that have not been previously chosen.

We compute the impact score for a method from the test cases that reference it, with each such test case contributing a weight. The weight contributed by a test case is the inverse of the number of distinct methods that appear in that test case. For example, suppose a method, M, appears in two test cases, A and B, where test case A references three methods and test case B references two. The impact score for method M is 1/3 + 1/2 (0.33 + 0.50), which equals 0.83. (Note that multiple references to a method within a single test case are ignored in computing weights.)
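As an illustration, the following Java sketch computes impact scores from test cases represented simply as sets of method names. The class, helper, and placeholder names (ImpactScore, impactScore, alreadyChosen, X, Y, Z) are ours, not part of the chapter or of the tool mentioned later; the alreadyChosen parameter anticipates the recomputation of scores performed later in the ordering process, and with an empty set it yields the initial scores.

import java.util.*;

public class ImpactScore {

    // Impact score of a method: every test case that references it contributes
    // 1 / (number of distinct methods in that test case not yet chosen for
    // development). With nothing chosen yet, this is 1 / (methods in the test case).
    static double impactScore(String method,
                              Collection<Set<String>> testCases,
                              Set<String> alreadyChosen) {
        double score = 0.0;
        for (Set<String> methodsInTest : testCases) {
            if (!methodsInTest.contains(method)) continue;
            long remaining = methodsInTest.stream()
                    .filter(m -> !alreadyChosen.contains(m)).count();
            if (remaining > 0) score += 1.0 / remaining;
        }
        return score;
    }

    public static void main(String[] args) {
        // The worked example above: M appears in test case A (three methods)
        // and in test case B (two methods), so its score is 1/3 + 1/2.
        List<Set<String>> tests = List.of(
                Set.of("M", "X", "Y"),   // test case A
                Set.of("M", "Z"));       // test case B
        System.out.printf("impact(M) = %.2f%n",
                impactScore("M", tests, Set.of()));   // prints impact(M) = 0.83
    }
}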

Our OrderedList example will clarify the use of this heuristic. Our objective in this example is to define a development ordering among all methods in the ordered list class of Listing 13.1, based on the test cases that appear in Table 13.1. Our first step is to determine, for each method, whether the selection of that method will result in a test point. There is no method that we can choose initially that will make an expected-positive test run feasible (for any test case). Thus, we must look at the impact scores for all methods. Table 13.3 contains the impact scores for all methods in the ordered list class. Recall that Create is a method that invokes the OrderedList constructor, so we compute the impact score for OrderedList based on the appearance of Create as well as direct references to OrderedList in the various test cases.

Table 13.3. Initial Impact Scores
Method Impact Score
OrderedList 4.08
add 3.25
head 1.00
member 1.16
tail 0.75
equals 1.58
length 0.33
delete 0.83

The constructor method (OrderedList) has the highest impact score, so it is chosen as the first method in the development ordering. This is also the intuitive choice because it is clear that no expected-positive test runs are feasible until this method is written.

To find the next method, we first determine whether the selection of any particular method results in a test point. Because of test case 2 (assert(Create().member(1) == false)), selecting member next will result in a test point (an expected-positive run of test case 2 becomes feasible). The member method is the only method that will result in a test point and is thus chosen next. Similarly, because of test cases 8 and 9 (for example, assert(Create().add(1).member(1) == true)), selecting add next will result in a test point, and add is again the only such method. Thus, our development ordering so far is OrderedList member add.

Now we have two possible methods that will result in test points if chosen next: head and length. We therefore choose the method with the higher impact score. Because the impact scores in Table 13.3 were computed with the already chosen methods still under consideration, we recompute the scores with OrderedList, member, and add excluded. The head method appears in three test cases and, apart from the already chosen methods, is the only method in each of them; its recomputed impact score is therefore 3. The length method appears in only one test case (also by itself except for already chosen methods); thus, its impact score is 1. As such, head is chosen next.

Once head is chosen, no additional methods besides length will result in a test point. So length is chosen next. Thus, our development ordering so far is OrderedList member add head length.

At this point, tail, equals, and delete remain to be chosen. None of these methods, if chosen next, will result in a test point. Once again, we recompute the impact scores for these three methods, with the methods already in the development ordering removed from consideration. The recomputed impact scores are given in Table 13.4.

Table 13.4. Recomputed Impact Scores
Method Impact Score
tail 1.5
equals 3.0
delete 1.5

The equals method has the highest impact score and is chosen next. Choosing equals enables either tail or delete to result in a test point if chosen next. Because both appear in the same number of test cases (three), the choice for the next method is arbitrary. We choose delete next, followed by tail. Thus, our final development ordering is OrderedList member add head length equals delete tail.
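Putting the walkthrough together, the following is a minimal Java sketch of the greedy heuristic described above, applied to the method sets of the thirteen test cases (summarized in Table 13.5; Create is counted as a reference to OrderedList). The representation and names are our own, and this is not the tool of [Parrish+1996]; ties, such as the one between delete and tail, are broken simply by the order in which methods are listed.

import java.util.*;

public class DevelopmentOrdering {

    // Impact score: each test case referencing the method contributes
    // 1 / (number of its methods not yet chosen for development).
    static double impactScore(String m, List<Set<String>> tests, Set<String> chosen) {
        double score = 0.0;
        for (Set<String> tc : tests) {
            if (!tc.contains(m)) continue;
            long remaining = tc.stream().filter(x -> !chosen.contains(x)).count();
            if (remaining > 0) score += 1.0 / remaining;
        }
        return score;
    }

    // True if choosing m next creates a test point: some not-yet-feasible
    // test case becomes fully covered by chosen + {m}.
    static boolean createsTestPoint(String m, List<Set<String>> tests, Set<String> chosen) {
        for (Set<String> tc : tests) {
            if (chosen.containsAll(tc)) continue;   // already feasible
            Set<String> withM = new HashSet<>(chosen);
            withM.add(m);
            if (withM.containsAll(tc)) return true;
        }
        return false;
    }

    static List<String> order(List<String> methods, List<Set<String>> tests) {
        List<String> ordering = new ArrayList<>();
        Set<String> chosen = new HashSet<>();
        while (chosen.size() < methods.size()) {
            // Prefer methods that create a test point; otherwise fall back
            // to all remaining methods.
            List<String> candidates = new ArrayList<>();
            for (String m : methods)
                if (!chosen.contains(m) && createsTestPoint(m, tests, chosen))
                    candidates.add(m);
            if (candidates.isEmpty())
                for (String m : methods)
                    if (!chosen.contains(m)) candidates.add(m);
            // Among the candidates, take the highest impact score.
            String best = candidates.get(0);
            for (String m : candidates)
                if (impactScore(m, tests, chosen) > impactScore(best, tests, chosen))
                    best = m;
            ordering.add(best);
            chosen.add(best);
        }
        return ordering;
    }

    public static void main(String[] args) {
        List<String> methods = List.of("OrderedList", "add", "head", "member",
                                       "delete", "equals", "length", "tail");
        // Method sets for test cases 1-13 (Create counts as OrderedList).
        List<Set<String>> tests = List.of(
                Set.of("OrderedList", "add", "head"),              // 1
                Set.of("OrderedList", "member"),                   // 2
                Set.of("OrderedList", "add", "tail", "equals"),    // 3
                Set.of("OrderedList", "add", "head"),              // 4
                Set.of("OrderedList", "add", "head"),              // 5
                Set.of("OrderedList", "add", "tail", "equals"),    // 6
                Set.of("OrderedList", "add", "tail", "equals"),    // 7
                Set.of("OrderedList", "add", "member"),            // 8
                Set.of("OrderedList", "add", "member"),            // 9
                Set.of("OrderedList", "add", "length"),            // 10
                Set.of("OrderedList", "delete", "equals"),         // 11
                Set.of("OrderedList", "add", "delete", "equals"),  // 12
                Set.of("OrderedList", "add", "delete", "equals")); // 13
        // Prints [OrderedList, member, add, head, length, equals, delete, tail].
        System.out.println(order(methods, tests));
    }
}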

Once the optimal development ordering for the methods is known, it is straightforward to get an analogous optimal ordering for test cases. In particular, it is straightforward to determine which test cases have feasible expected-positive test runs after each method in the development ordering. Table 13.5 for this model is the analog of Table 13.2 for the ad hoc model; it contains the 13 test cases from Table 13.1 in an order consistent with the development ordering for the methods. Each test case is accompanied by the methods referenced in that test case, along with those methods that must be developed to make an expected-positive test run of that test case feasible. If no methods are listed beside a particular test case, an expected-positive run of the test case is immediately feasible after the previous test case is run. Our heuristic attempts to minimize the number of methods that must be developed for each test case. As Table 13.5 shows, no more than two methods at a time have to be written before conducting some testing. For most test cases, an expected-positive test run is feasible after writing at most one method.
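One way to mechanize this step is sketched below, again in Java with our own names and representation: a test case becomes feasible at the position of the last of its methods in the development ordering, so ordering the test cases by that position, and reporting which of each test case's methods have not already been developed, reproduces the ordering and the "Methods Developed" column of Table 13.5.

import java.util.*;

public class TestCaseOrdering {

    public static void main(String[] args) {
        // Development ordering produced by the heuristic above.
        List<String> devOrder = List.of("OrderedList", "member", "add", "head",
                                        "length", "equals", "delete", "tail");
        // Test case number -> methods it references (Create counts as OrderedList).
        Map<Integer, Set<String>> tests = new LinkedHashMap<>();
        tests.put(1, Set.of("OrderedList", "add", "head"));
        tests.put(2, Set.of("OrderedList", "member"));
        tests.put(3, Set.of("OrderedList", "add", "tail", "equals"));
        tests.put(4, Set.of("OrderedList", "add", "head"));
        tests.put(5, Set.of("OrderedList", "add", "head"));
        tests.put(6, Set.of("OrderedList", "add", "tail", "equals"));
        tests.put(7, Set.of("OrderedList", "add", "tail", "equals"));
        tests.put(8, Set.of("OrderedList", "add", "member"));
        tests.put(9, Set.of("OrderedList", "add", "member"));
        tests.put(10, Set.of("OrderedList", "add", "length"));
        tests.put(11, Set.of("OrderedList", "delete", "equals"));
        tests.put(12, Set.of("OrderedList", "add", "delete", "equals"));
        tests.put(13, Set.of("OrderedList", "add", "delete", "equals"));

        // A test case becomes feasible at the position of the last of its
        // methods in the development ordering; order test cases by that position.
        List<Integer> runOrder = new ArrayList<>(tests.keySet());
        runOrder.sort(Comparator.comparingInt((Integer tc) ->
                tests.get(tc).stream().mapToInt(devOrder::indexOf).max().orElse(-1)));

        // For each test case, report the methods not yet developed at that point
        // (the "Methods Developed" column of Table 13.5).
        Set<String> developed = new LinkedHashSet<>();
        for (int tc : runOrder) {
            List<String> newlyNeeded = new ArrayList<>();
            for (String m : devOrder)            // keep development order
                if (tests.get(tc).contains(m) && developed.add(m))
                    newlyNeeded.add(m);
            System.out.println("test case " + tc + ": develop " + newlyNeeded);
        }
    }
}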

Our example assumes that the development orderings for both methods and test cases are computed once for the entire class. In fact, such orderings may be computed on subsets of methods and test cases. For example, given the "on demand" nature of XP, it may be undesirable to focus on an entire class in this fashion. Instead, the XP design process will naturally identify the appropriate set of test cases for the current story under development.

Also, we have developed a tool that implements this heuristic [Parrish+1996], so applying the approach does not require large amounts of manual effort. With small numbers of methods and test cases, the proper selection of the next method to be developed or the next test case to be executed may be obvious without a tool. Our tool has been applied to a number of data-structure-based classes, including a text editor class and classes implementing various kinds of lists, stacks, queues, complex numbers, and arrays.

Table 13.5. Methods Developed before Each Test Run (Optimized Ordering)
Test Case Methods Referenced Methods Developed
2. assert(Create().member(1) == false); OrderedList,member OrderedList,member
8. assert(Create().add(1).member(1) == true); OrderedList,add,member add
9. assert(Create().add(1).member(2) == Create().member(2)); OrderedList,add,member
1. assert(Create().add(1).head() == 1); OrderedList,add,head head
4. OrderedList L = new OrderedList(); L = L.add(1); assert(L.add(2).head() == L.head()); OrderedList,add,head
5. OrderedList L = new OrderedList(); L = L.add(2); assert(L.add(1).head() == 1); OrderedList,add,head
10. assert(Create().add(1).length() == Create().length() + 1); OrderedList,add,length length
11. assert(Create().delete(1).equals(Create())); OrderedList,delete,equals equals,delete
12. assert(Create().add(1).delete(1).equals(Create())); OrderedList,add,delete,equals
13. assert(Create().add(1).delete(2).equals(Create().delete(2).add(1))); OrderedList,add,delete,equals
3. assert(Create().add(1).tail().equals(Create())); OrderedList,add,tail,equals tail
6. OrderedList L = new OrderedList(); L = L.add(2); assert(L.add(1).tail().equals(L)); OrderedList,add,tail,equals
7. OrderedList L = new OrderedList(); L = L.add(0); assert(L.add(1).tail().equals(L.tail().add(1))); OrderedList,add,tail,equals


