Ways to Estimate Acceptance-Test Effort


All you need to do now is come up with a ballpark estimate for completing all tasks related to acceptance testing. The approaches to doing this are probably infinite. You need a quick way to estimate, because you may have to do it in a few minutes per story. Remember, this is release planning, and you're participating in a discussion. You probably won't have time to go off by yourself and study it.

In the best-case scenario, you've had experience testing the type of application being developed and have a gut feeling for about how long it's likely to take. Of course, a lot of us don't live in the ideal world. If you have a hard time coming up with estimates on the fly, here are a couple of suggestions. Both involve looking ahead at a more detailed view of the activity. Even though this is release planning and no tasks have been defined yet (that will happen in iteration planning), you need to imagine what some of those tasks are going to be.

Quick-and-Dirty Approach

Lisa's experience, based on years of tracking testing time against development time, is that testing usually takes about a third of the time spent on development. For a brand-new application that uses new technology, bump this up to half. For a simple application that isn't likely to change much, cut it down to 20%.
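
If it helps to see the rule of thumb spelled out, here is a minimal sketch of the arithmetic (the function, the profile names, and the 30-point story are made up purely for illustration; the ratios are rough guidelines, not fixed rules):

```python
def quick_test_estimate(dev_estimate, profile="typical"):
    """Rough acceptance-test estimate as a fraction of development time."""
    ratios = {
        "typical": 1 / 3,          # most applications
        "new_technology": 0.5,     # brand-new application, new technology
        "simple": 0.2,             # simple, unlikely to change much
    }
    return dev_estimate * ratios[profile]

# A made-up 30-point story under each assumption
print(round(quick_test_estimate(30)))                    # 10
print(round(quick_test_estimate(30, "new_technology")))  # 15
print(round(quick_test_estimate(30, "simple")))          # 6
```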

Example 4

We'll use this method for a story to add a user to the repository and assign the user to a particular user group. Let's say the programmers estimate the development tasks at 78 points.

This is a pretty basic user interface. We've tested lots of user interfaces that involve creating, updating, deleting, or displaying records. On the other hand, this story involves a security model where users are associated with particular groups, which is a little different. Our customer is not sure yet how the screen should look and is still fuzzy on some of the details about how the user groups will work. It's best to pad the estimate to be on the conservative side. We decide time for testing tasks will add up to about 40% of the development time, or about 31 points.

We could stop at this point and just use 31 points as the estimate, but if the programmers' estimate is way off, ours will be too. What if testing this story reveals something unusual? It doesn't take much time to do a quick check by thinking ahead to what the testing tasks are going to be and splitting the 31 points among them, as shown in Table 10.1.

Table 10.1. Example 4 estimate

| Task                                                                | Estimate (points) |
|---------------------------------------------------------------------|-------------------|
| Finish defining tests, obtain test data, get signoff from customers | 8                 |
| Load test data                                                      | 2                 |
| Automate tests                                                      | 18                |
| Execute tests, report results                                       | 3                 |
| Total                                                               | 31                |

This gives you a "reasonableness" check for your estimate: can you fit the likely tasks into the 31 points and reasonably expect to complete them?
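
If you like, you can make that check mechanical. Here is a small sketch using the padded 40% ratio we chose for this story and the task sizes from Table 10.1 (the variable names are ours, for illustration only):

```python
# Ratio-based estimate for Example 4: 78 development points, padded to 40%
dev_estimate = 78
ratio_estimate = round(dev_estimate * 0.40)    # about 31 points

# Likely testing tasks and rough sizes (the figures from Table 10.1)
tasks = {
    "Finish defining tests, obtain test data, get signoff": 8,
    "Load test data": 2,
    "Automate tests": 18,
    "Execute tests, report results": 3,
}
task_total = sum(tasks.values())               # 31

# Reasonableness check: do the likely tasks fit within the ratio estimate?
print(ratio_estimate, task_total, task_total <= ratio_estimate)
```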

A More Detailed Estimating Method

Sometimes you may not be comfortable using the development time as a starting point, or you may have to come up with the testing estimates at the same time as the development estimates, or you may have trouble deciding whether the likely tasks will fit or even whether you've thought of all of them. Perhaps you have a project where the customer feels considerable risk is involved and wants detailed and complex acceptance tests to prove that every possible scenario works correctly.

Here's another approach in which you start with acceptance tests and think about the tasks associated with each test to come up with the estimate. This is something you normally wouldn't do until iteration planning, but if you need accurate estimates during release planning, you can use this technique:

  1. For each acceptance test, estimate the time needed for the following:

    1. Preparation. Defining, creating, and validating test data; designing, coding, and debugging automated tests

    2. Running tests. Setting up, running, evaluating the outcome, and reporting the results

    3. Special considerations. A limited test window, for example

  2. Add these three estimates to get the complete estimate for this test.

  3. Add the estimates for all the acceptance tests to get the total for the story (a quick sketch of this arithmetic follows).
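
Here is a minimal sketch of that arithmetic, assuming each acceptance test is recorded as its three sub-estimates in ideal hours (the figures shown are the ones worked out for Example 5 below):

```python
# Each acceptance test is estimated as (preparation, running, special),
# all in ideal hours.
def test_estimate(preparation, running, special=0.0):
    return preparation + running + special

def story_estimate(tests):
    return sum(test_estimate(*t) for t in tests)

# The Example 5 story below: six straightforward functional tests
# plus one load test that has a limited test window.
tests = [(4.0, 0.4, 0.0)] * 6 + [(12.0, 4.5, 8.5)]
print(round(story_estimate(tests), 1))      # 51.4 ideal hours
print(round(story_estimate(tests) / 8, 1))  # about 6.4 ideal days
```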

You can combine these two approaches as well, using one to complement and validate the other. Use whichever one seems to work best in a given situation, and don't sweat the details. Either method, as well as any other that involves thinking about what needs to be done, even for one minute, is better than no estimate at all.

We're showing this example in units of "ideal time": how long you think it would take to accomplish a task if you had absolutely nothing else to do. If the programmers use some other unit of measurement for estimates, use the same unit or use ideal time and convert later to the same units the programmers use.

Example 5

Here's an example of using this method for the directory-application search story from the previous chapter:

Story: Create a screen that will allow the user to search on the Web for businesses by category of business and location.

Table 10.2 shows the first acceptance test.

For preparation time, we'll allow half an hour each for defining details of the test and creating test records to support the test in the database. We'll also plan for half an hour each to write this as an executable acceptance test (see Chapter 16) and make it run through direct calls to the system code (Chapter 22). Finally, we'll allow 2 hours for interfacing to a test tool to make it run through the HTTP interface (Chapter 23). That's a total of 4 hours for preparation.

The test doesn't need any real setup, and it should run pretty quickly. We'll use the smallest unit we have, 0.1 hour (6 minutes), to represent both setup and execute time. We'll use the same number for evaluation and again for reporting, because those don't seem to have any special considerations either. The same goes for the test window: we don't see any particular limitations.
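
Putting those figures together for test 1, the arithmetic is simply the following (a sketch; 0.1 hour is just the smallest unit we track):

```python
# Preparation: define details, create test records, write the executable
# test, and make it run through direct calls (0.5 each), plus 2.0 hours
# to interface it to a test tool for the HTTP run
preparation = 0.5 + 0.5 + 0.5 + (0.5 + 2.0)   # 4.0

# Execution: setup, run, evaluate, and report at the minimum 0.1-hour unit
execution = 4 * 0.1                           # 0.4

special = 0.0                                 # no test-window or other limits
print(round(preparation + execution + special, 1))   # 4.4 ideal hours
```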

Table 10.2. Test 1

| Action    | Data                                                           | Expected Result                                                    |
|-----------|----------------------------------------------------------------|--------------------------------------------------------------------|
| 1. Search | Category/location combination for which some businesses exist | Success: a list of each business within the category and location |

The total estimate for this acceptance test is 4.4 hours. Table 10.3 shows it in summary form.

Once you think this through for test 1, all the others except test 3 are similar, and you can use the same estimates without going through all the steps. Table 10.4 shows a summary of the estimate for all acceptance tests for this story.

Table 10.3. Estimate for test 1 (in hours)

| Test | Preparation              | Execution     | Special    | Estimate |
|------|--------------------------|---------------|------------|----------|
| 1    | Define details: 0.5      | Setup: 0.1    |            |          |
|      | Create test records: 0.5 | Run: 0.1      |            |          |
|      | Write tests: 0.5         | Evaluate: 0.1 |            |          |
|      | Make tests runnable: 2.5 | Report: 0.1   |            |          |
|      | Total: 4.0               | Total: 0.4    | Total: 0.0 | 4.4      |

Table 10.4. Estimate for all acceptance tests (in hours)

| Test  | Preparation | Execution | Special | Estimate |
|-------|-------------|-----------|---------|----------|
| 1     | 4.0         | 0.4       | 0.0     | 4.4      |
| 2     | 4.0         | 0.4       | 0.0     | 4.4      |
| 3     | 12.0        | 4.5       | 8.5     | 25.0     |
| 4     | 4.0         | 0.4       | 0.0     | 4.4      |
| 5     | 4.0         | 0.4       | 0.0     | 4.4      |
| 6     | 4.0         | 0.4       | 0.0     | 4.4      |
| 7     | 4.0         | 0.4       | 0.0     | 4.4      |
| Total |             |           |         | 51.4     |

So the acceptance-testing estimate for this story is 51.4 ideal hours (6.43 ideal days). The estimate for test 3 is a lot more than the others, so let's look at how we came up with it. Table 10.5 shows the details.

Preparation will be fairly extensive. We have to figure out how many concurrent users we need to get the expected throughput and define what a "reasonable" response time is; we say 2 hours. It will also take longer to create the test records, because we need a lot more of them; say another 2 hours. We also expect a longer spike for the automation because of all the concurrent users, so we'll allow 4 hours for that and another 4 hours to write the tests and get them running through the HTTP interface (you can't do load simulation through direct calls). That's a total of 12 hours.

The test will take a relatively long time to execute; we think 2 hours. We'll spend some time on setup tasks: making sure nothing else is going on in our test system, clearing out the log files, and setting up monitoring tools. That's another half hour. We expect to spend an hour evaluating what happened: noting response times and throughput data, counting failures, preserving log records. We'll spend another hour producing a graphical results report for the team.

Special considerations come into play. As we mentioned earlier, system limitations often dictate a limited window for running load tests. We'll assume we can run this test only between midnight and 8:00 A.M. That means the worst-case time we'd have to wait to run the test would be 16 hours, if we were ready to test at 8:00 A.M. and had to wait until midnight. We estimate the average case at half that, or 8 hours.

Based on experience, we expect to have some kind of problem running a large test like this. We'll probably have to restart it or rerun part of it, not because the system fails the test but because something goes wrong with the test itself. We account for this with a half hour of restart time. Special considerations total 8.5 hours.
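
Pulling those pieces together, here is a quick sketch of the test 3 arithmetic (the same figures that appear in Table 10.6):

```python
# Preparation: define details, create test records, automation spike,
# write tests and get them running through the HTTP interface
preparation = 2.0 + 2.0 + 4.0 + 4.0     # 12.0

# Execution: setup, run, evaluate, report
execution = 0.5 + 2.0 + 1.0 + 1.0       # 4.5

# Special: test window is midnight to 8:00 A.M., so the worst-case wait is
# 16 hours (ready at 8:00 A.M.); we assume the average wait is half that.
window_wait = 16 / 2                    # 8.0
rerun = 0.5                             # allowance for restarting or rerunning part of the test
special = window_wait + rerun           # 8.5

print(preparation + execution + special)   # 25.0 ideal hours
```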

Table 10.5. Test 3

| Action    | Data                                                                                                     | Expected Result                                                                    |
|-----------|----------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------|
| 3. Search | Enough concurrent users and category/location combinations to generate 200 searches and 100 hits/second | Success: each user gets the appropriate result list within a reasonable response time |

Table 10.6. Estimate for test 3 (in hours)

| Test | Preparation              | Execution     | Special     | Estimate |
|------|--------------------------|---------------|-------------|----------|
| 3    | Define details: 2.0      | Setup: 0.5    | Window: 8.0 |          |
|      | Create test records: 2.0 | Run: 2.0      | Rerun: 0.5  |          |
|      | Write tests: 4.0         | Evaluate: 1.0 |             |          |
|      | Make tests runnable: 4.0 | Report: 1.0   |             |          |
|      | Total: 12.0              | Total: 4.5    | Total: 8.5  | 25.0     |

Adding this all up, we get a whopping 25 hours! Wow. That's a good thing to know up front! This story's a whopper in terms of acceptance-test time and will consume a lot of velocity if it's included in an iteration, unless it can be broken into smaller stories. Table 10.6 shows it in summary form.

You can't do accurate release and iteration planning without good estimates of both development and acceptance test time. In the next chapter, we'll talk about how to make your estimates even more accurate for planning purposes.


