Catching Up


If you ended up dropping acceptance tests during the iteration because you couldn't make them run, your job is to remind the team of the risks posed by those missing tests. You'll need to decide whether they should be added to the next iteration. The risk of a missing test may be higher in subsequent iterations, as the system evolves and code is refactored to support the next set of stories.

If you or any other team member had problems with the mechanics of test automation, pair yourself or that team member with someone who's good at it in the next iteration. If the whole team struggles with automating the tests, have them work through the test automation exercises in Chapters 16-25.

If the number of executable acceptance tests you got running in the last iteration was much smaller than your goal, you need to find out why. Were your estimates too low? Did you spend your time documenting tests instead of automating them? Did you try to automate too close to the user interface? Maybe you didn't divide up the automation tasks enough (e.g., you tried to do it all yourself). Fix whatever is broken, but also pad your story estimates for the next iteration, to allow enough time to get those tests running, plus at least some tests from the previous iteration. You have to catch up, or you'll go into death-spiral mode.
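One common cause mentioned above is automating too close to the user interface. A sketch of the alternative: write the acceptance test against the domain or service layer, checking the same business rule a tester would verify through the screen, without any GUI plumbing. All names here (`ShoppingCart`, `add_item`, `total`) are hypothetical stand-ins, not from the book.

```python
# Illustrative sketch, not the book's example: an acceptance test that
# exercises the business rule directly, below the user interface.

class ShoppingCart:
    """Minimal domain object standing in for the code under test."""
    def __init__(self):
        self._items = []

    def add_item(self, name, price, quantity=1):
        self._items.append((name, price, quantity))

    def total(self):
        return sum(price * qty for _, price, qty in self._items)


def test_cart_totals_multiple_items():
    # Acceptance criterion: the cart total is the sum of line totals.
    cart = ShoppingCart()
    cart.add_item("widget", 2.50, quantity=4)
    cart.add_item("gadget", 10.00)
    assert cart.total() == 20.00


test_cart_totals_multiple_items()
```

A test like this is cheap to automate and stable under UI churn; GUI-level tests can then be reserved for a thin layer of end-to-end checks.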



Testing Extreme Programming
ISBN: 0321113551
Year: 2005
Pages: 238
