Identifying and Estimating Test Infrastructure Tasks


The test infrastructure provides a controlled environment in which to run your acceptance tests and tools to aid test execution, evaluation, and reporting.

Here are some of the things your test infrastructure should allow you to do:

  1. Automate the acceptance tests.

  2. Have a separate system you can load any time with the latest successful integration or an earlier one, without regard to what the programmers are doing.

  3. Save test data and other state information from test sessions, to be restored and used as the starting point for subsequent sessions.

  4. Keep track of which tests have passed and failed for each story in a given software build.

  5. Incorporate automated acceptance tests into the build script, to validate that previously delivered stories aren't broken by refactoring.
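Items 4 and 5 above can be sketched in a few lines of code. The example below is a minimal, hypothetical illustration (the story names and test functions are invented, not from the book) of a build step that runs the acceptance suite, reports pass/fail per story, and fails the build when any previously delivered story regresses:

```python
# Sketch: run automated acceptance tests as part of the build and
# fail the build if any previously delivered story is broken.
# All story names and test functions here are hypothetical.

def test_story_login():
    """Acceptance test for a delivered 'login' story."""
    return True  # stands in for real test logic

def test_story_checkout():
    """Acceptance test for a delivered 'checkout' story."""
    return True

ACCEPTANCE_TESTS = {
    "login": test_story_login,
    "checkout": test_story_checkout,
}

def run_acceptance_suite():
    """Run every acceptance test; report pass/fail per story."""
    results = {story: test() for story, test in ACCEPTANCE_TESTS.items()}
    for story, passed in results.items():
        print(f"{story}: {'PASS' if passed else 'FAIL'}")
    return results

def build_ok(results):
    """The build succeeds only if no delivered story has regressed."""
    return all(results.values())
```

In practice this role is usually played by a test runner invoked from the build script, with the per-story results archived per build.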

You probably won't have the luxury of putting one infrastructure in place that has all these features, so use this as a starting point and negotiate. Hopefully, you won't have to start from scratch. Some or all of these features may already exist and be familiar to the team, or they may be inherent in the development environment or the nature of the system being developed.

For instance, when the team has just finished a similar project and is satisfied with the tools and techniques used for testing, it's natural to use them again. This probably doesn't require any specific planning.

Suppose the system is completely self-contained, with an interface consisting only of an application programming interface (API), for example a statistical methods library. In this case, the test system would consist only of the code itself, and the functional tests could be automated in the source language of the system. Again, there is no need to define and estimate specific tasks.
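For a library like that, a functional test can be nothing more than code in the same language exercising the API. A minimal sketch, assuming a hypothetical `stats_mean` function as the unit under test:

```python
# Functional test written in the system's own source language.
# `stats_mean` stands in for a hypothetical statistical-library function.

def stats_mean(values):
    """The function under test: arithmetic mean of a list of numbers."""
    return sum(values) / len(values)

def test_mean():
    # Acceptance criterion: the mean of 2, 4, 6 is 4.
    assert stats_mean([2, 4, 6]) == 4
    # Edge case: a single value is its own mean.
    assert stats_mean([10]) == 10

test_mean()
```

No harness beyond the language's own facilities is required, which is why no separate infrastructure tasks need planning in this case.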

In many cases, your project will require test infrastructure tasks that must be identified and estimated in the iteration plan. If the tasks are overlooked, the iteration will fall behind schedule. Either unplanned tasks steal time allocated to other tasks or an inadequate test infrastructure results in longer-than-expected test development, execution, and evaluation times.

For a new project and/or a new project team, you may have to spend a significant amount of time setting up your environment and the tools you need. You could find that you have to tweak not only your test tools but your build and release process and related tasks as well. All these infrastructure-related concerns can make estimating pretty tricky for the first iteration of a new project.

For a way to handle this difficulty, consider the experience of one of the teams where Lisa was the tester. For the first iteration, they decided to create an infrastructure story for this bootstrap time that would collect the "overhead" for the entire project. At the time, they thought this would be a one-time occurrence, because many of the tasks happen only once during a project. As Lisa remembers:

This worked fairly well for the first iteration, but later we realized we should also have had an infrastructure story for similar tasks that appear in subsequent iterations. We found that we always spent some time on infrastructure and that it remained fairly constant in each iteration, because we were constantly adding to and adjusting the infrastructure. Based on this, we modified our approach to fold the average of these costs into our team's velocity. We worried about separate estimates only for stories that required a large infrastructure cost, such as creating a relational database.

We recommend this approach for identifying and estimating test infrastructure tasks: for the first iteration, identify and estimate all the individual tasks necessary to get the appropriate tools and techniques in place. Track the actuals on these tasks so they become part of the velocity calculation for subsequent iterations. Then, in later iterations, break out separate tasks only when it's clear the infrastructure cost will be exceptionally large. Tracking actuals may seem like a big demand, and it might not be necessary on your project, but in our experience the time needed to set up a test environment and acquire or write test tools is often overlooked.
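The arithmetic behind folding average infrastructure actuals into velocity is simple. A worked sketch, with illustrative numbers (not taken from the book):

```python
# Sketch: fold average infrastructure actuals into team velocity.
# Each tuple is (story points completed, infrastructure hours spent)
# for one iteration; the numbers are illustrative only.
iterations = [(20, 8), (22, 10), (21, 9)]

# Because infrastructure cost stays fairly constant per iteration,
# planning can use the historical average rather than estimating a
# separate infrastructure story each time.
avg_points = sum(points for points, _ in iterations) / len(iterations)
avg_infra_hours = sum(hours for _, hours in iterations) / len(iterations)

print(f"planning velocity: {avg_points:.1f} points/iteration")
print(f"expected infrastructure overhead: {avg_infra_hours:.1f} hours")
```

Only an unusually large cost, such as creating a relational database, would be pulled out of the average and estimated as its own task.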



Testing Extreme Programming
ISBN: 0321113551
Year: 2005
Pages: 238
