Running Acceptance Tests


The fast pace of XP iterations makes it difficult for acceptance testing to keep pace with development. It's much better to do the acceptance testing in the same iteration as the corresponding stories. If you've ever done "downstream" testing, where you don't get the code until development is "finished," you know that by then the developers are already looking ahead to their next set of tasks. It's painful to have to stop the fun new stuff you're doing and go back to fix something you've already put out of your mind.

In our projects, writing, automating, and running acceptance tests are part of each story's task, and estimates for finding and fixing bugs are included in the story estimates. The developers try to organize tasks so that we can start acceptance testing components early in the iteration. This way, we can find defects and they can fix them before the end of the iteration. I think it makes everyone happier. Most likely, some defects or issues will still be left over that have to become stories for future iterations, but it's possible to minimize these, and we should try.

As iterations roll along, regression testing of acceptance tests from previous iterations also has to be performed. In an e-mail to the YahooGroup extremeprogramming, Ron Jeffries says that once an acceptance test passes, it should pass forever after, so any regression defects for previously working tests must be addressed [Jeffries2001]. Regression testing is when you'll really see the value of automating those tests!

How do you do acceptance testing that fast? That's another topic in itself, but here are some tips.

  • Make acceptance tests granular enough to show the project's true progress. Fifty tests of ten steps each are better than ten tests of fifty steps each.

  • Separate test data from actions in the test cases. Spreadsheet formats work well; we've experimented successfully with XML formats too. It's easy to produce scripts to go from one format to another, and a script that turns your spreadsheet test data into a form your test tool can use is invaluable (see the first sketch after this list).

  • Identify areas of high business value and critical functionality. Automate tests for basic user scenarios that cover these areas, and add to them as time allows; don't forget to budget time to maintain and refactor automated tests.

  • Modularize automated tests; avoid duplicate code and create reusable modules. For example, if you are testing a Web application, have a main script that calls modules to do the work of verifying the various interfaces, such as logging in, running queries, and adding records. Split functions such as verifying that a given set of links is present in the HTTP response into separate modules that can be reused from test to test and project to project (see the second sketch after this list).

  • Make automated tests self-verifying. Both manual and automated tests should produce visual reports that indicate "pass" or "fail" at a glance. One way to do this is to write test results in XML format and have your team write a tool that reads the XML and produces an HTML page with a graphic representation of tests passed, failed, and not run (see the third sketch after this list).

  • Verify the minimum success criteria. As they say in the Air Force, if the minimum wasn't good enough, it wouldn't be the minimum.

  • Apply XP practices to test automation. Do the simplest thing that works, continually refactor, pair test, and verify critical functionality with a bare-bones "smoke" test.
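Here is a minimal sketch, in Python, of the kind of conversion script the second bullet describes: it reads test data exported from a spreadsheet as CSV and writes a simple XML form a test tool might consume. The column names ("action", "input", "expected"), the output schema, and the file names are illustrative assumptions, not any specific tool's format.

```python
# Sketch: turn spreadsheet test data (exported as CSV) into a simple XML
# file for a test tool. Columns and schema are assumptions for illustration.
import csv
import xml.etree.ElementTree as ET

def csv_to_test_xml(csv_path, xml_path):
    suite = ET.Element("testsuite")
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            # Each spreadsheet row becomes one <step> element.
            step = ET.SubElement(suite, "step", action=row["action"])
            ET.SubElement(step, "input").text = row["input"]
            ET.SubElement(step, "expected").text = row["expected"]
    ET.ElementTree(suite).write(xml_path, encoding="utf-8", xml_declaration=True)

if __name__ == "__main__":
    csv_to_test_xml("login_tests.csv", "login_tests.xml")  # hypothetical file names
```

The second sketch illustrates the reusable-module idea from the fourth bullet: a shared verification function that checks whether a set of expected links appears in an HTTP response, so individual test scripts don't duplicate that logic. The URL, the link list, and the simple regular-expression matching are assumptions for illustration.

```python
# Sketch: reusable check that expected links appear in an HTTP response.
# In practice this would live in a shared module imported by each test.
import re
import urllib.request

def verify_links(url, expected_hrefs):
    """Return the subset of expected_hrefs missing from the page at url."""
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8", errors="replace")
    found = set(re.findall(r'href="([^"]+)"', body))
    return [href for href in expected_hrefs if href not in found]

# Usage from a test script (illustrative values only):
missing = verify_links("http://localhost:8080/home",
                       ["/login", "/search", "/records/add"])
assert not missing, "Missing links: %s" % missing
```

Finally, a sketch of the self-verifying report from the fifth bullet: a small tool that reads test results recorded as XML and produces an HTML summary showing tests passed, failed, and not run. The results schema (a flat list of <test name="..." status="pass|fail|notrun"> elements) is assumed for the example.

```python
# Sketch: read XML test results and emit an HTML summary page.
import xml.etree.ElementTree as ET

def results_to_html(results_xml, html_path):
    root = ET.parse(results_xml).getroot()
    counts = {"pass": 0, "fail": 0, "notrun": 0}
    rows = []
    for test in root.findall("test"):
        status = test.get("status", "notrun")
        counts[status] = counts.get(status, 0) + 1
        color = {"pass": "green", "fail": "red"}.get(status, "gray")
        rows.append('<tr><td>%s</td><td style="color:%s">%s</td></tr>'
                    % (test.get("name"), color, status))
    html = ("<html><body><h1>%d passed, %d failed, %d not run</h1>"
            "<table>%s</table></body></html>"
            % (counts["pass"], counts["fail"], counts["notrun"], "".join(rows)))
    with open(html_path, "w") as f:
        f.write(html)

results_to_html("results.xml", "results.html")  # hypothetical file names
```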
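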
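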


