Identifying and Estimating Functional and Acceptance Testing Tasks


While listing, estimating, and assigning programming tasks, make sure to include testing tasks. When you estimate tasks and stories, include time for both testing and test support. For example, you might need a script to load test data. No matter who is ultimately responsible for the task, it should be included with other tasks for the story. Thinking about testing tasks will help lead the team to a test-friendly design and serve as a reminder that testing is the responsibility of the entire team, not just the tester. In Chapter 10, we discussed a couple of ways to estimate high-level acceptance tests. The same principles apply when estimating detailed test-related tasks for an iteration's stories.

The testing tasks associated with a story usually involve some or all of the following:

  • Clarifying acceptance criteria. You'll need a task for this when a story includes subjective criteria, such as "good response time" or "easy to use." It's vital to spend time with the customer to quantify these aspects of the system before attempting to build or run tests for them (the first sketch after this list shows one way to turn a quantified criterion into a check).

  • Defining, acquiring, and validating test data. You almost always need some form of test data. The customer should provide this, because she's likely to have representative data on hand or be able to define it easily. You may still need to spend time acquiring test data and may need to build some specialized tools for generating and loading it (the second sketch after this list shows a minimal generator).

  • Automation design spike. When using test tools to run automated tests through a user interface, you might have to try a variety of approaches to find one that works for a given application or test. If you're using a new tool or automating for the first time, you'll definitely want to include these tasks. Even when the tool and application are old hat, certain types of tests may still require experimentation. Examples include load and performance tests, tests that require synchronization of simultaneous access (the third sketch after this list shows one such experiment), tests that require a specific behavior or response from an interfacing system, and tests of failure and recovery behavior.

  • Automation. These tasks include building and verifying the automated tests. Since creating automated test scripts is a form of programming, these often look like programming tasks. Just as you do with any code your team develops, you'll need to test and debug the automated tests before they're ready to run against the system.

  • Execution. Automated tests usually run so quickly that it may not be necessary to break out a separate task for each test, but you'll still need to spend some time setting up the test and making sure it runs. The test may be run many times over the course of the iteration. In some cases, a single test (such as a large-scale load test) may warrant its own separate execution task.

  • Evaluation and reporting. Most tests don't need specific tasks for evaluation and reporting, but there are a few exceptions. Load and performance tests are notorious for requiring lots of analysis, both during the run and afterward, to determine what went wrong and how to remedy it. Another situation often arises in later iterations, as the system moves toward release: careful record-keeping of which software build passed which tests becomes important, because each integration can break functionality that already worked (the last sketch after this list shows a minimal way to log this).
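
Once a subjective criterion has been quantified with the customer, it can be expressed as a runnable check. Here's a minimal sketch in Python, assuming a 2-second response-time agreement; fetch_report() and the threshold are hypothetical stand-ins for whatever you and the customer settle on.

    import time

    def fetch_report():
        # Hypothetical placeholder for the real operation under test.
        time.sleep(0.1)

    def test_report_response_time():
        start = time.perf_counter()
        fetch_report()
        elapsed = time.perf_counter() - start
        # "Good response time" quantified as: under 2 seconds.
        assert elapsed < 2.0, f"report took {elapsed:.2f}s, limit is 2.0s"

    if __name__ == "__main__":
        test_report_response_time()
        print("response-time check passed")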
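
A test-data generator doesn't have to be elaborate. Here's a minimal sketch, assuming customer-supplied rules like "100 customers, 10 percent with overdue balances"; the field names and rules are hypothetical.

    import csv
    import random

    def generate_customers(path, count=100, overdue_rate=0.10):
        random.seed(42)  # reproducible data keeps test runs comparable
        with open(path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["id", "name", "balance", "overdue"])
            for i in range(count):
                writer.writerow([
                    i,
                    f"customer-{i}",
                    round(random.uniform(10, 5000), 2),
                    random.random() < overdue_rate,
                ])

    if __name__ == "__main__":
        generate_customers("customers.csv")
        print("wrote customers.csv")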
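
A design spike for the simultaneous-access case might be as simple as firing several threads at one operation to see what a real test must detect. This sketch assumes a shared account balance; update_account() is a hypothetical stand-in for the real call.

    import threading

    balance = 0
    lock = threading.Lock()

    def update_account(amount, times=100_000):
        global balance
        for _ in range(times):
            with lock:  # remove the lock to see the race a real test must catch
                balance += amount

    threads = [threading.Thread(target=update_account, args=(1,)) for _ in range(5)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(f"final balance: {balance} (expected 500000)")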
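
Finally, the record-keeping described in the last item can start as a simple append-only log. A minimal sketch, with hypothetical file and field names:

    import csv
    from datetime import date

    def record_result(log_path, build, test_name, passed):
        # Append one row per test run: date, build version, test, outcome.
        with open(log_path, "a", newline="") as f:
            csv.writer(f).writerow([
                date.today().isoformat(),
                build,
                test_name,
                "pass" if passed else "fail",
            ])

    if __name__ == "__main__":
        record_result("test_log.csv", "build-1.4.2", "test_checkout_total", True)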

If these look a bit familiar, they should. They're the same general areas we looked at in Chapter 10 when we made high-level estimates of acceptance-test time during release planning. Iteration planning is the place to take a closer look, breaking out and estimating each task individually.

When we were estimating stories during release planning, we looked ahead to what we thought the tasks were likely to be, even though we waited to actually define the tasks until now. We probably missed some and maybe included a few we really didn't need. We may not have thought long enough about the ones we did foresee to estimate them accurately. That's what estimating is all about: prediction based on incomplete knowledge.

This technique of anticipating a future step to provide partial information in the present not only lets you plan for that step more accurately but also makes the step go more quickly when you get there. It gives you a sort of dry run. We'll use this same technique again in iteration planning, this time by looking ahead to what our test modules are going to be, even though we won't actually define them until we get to test design.


