Chapter 21. Test Automation


You've probably surmised that we've finally reached the most intimidating mountain passes of our XP road trip. We've been climbing steadily for the last few chapters. We wrote executable tests, we talked about ways to organize them with spreadsheets if necessary, we discussed ways to refactor tests to maximize their effectiveness and how to set your system state before and after running tests to ensure valid results. We explored the reasons we're pushing for 100% acceptance test automation. We hope Chapter 19 didn't feel like a big bump in the road to you!

Now we're going to park the car and get out our climbing gear. There just isn't any way to demonstrate acceptance test automation without getting into a lot of technical detail and into the code itself. Hang with us through the rough parts. We'll get into the concrete examples before long, and you'll see that automating all your acceptance tests is not as daunting as you might have thought. In this chapter, we'll explain why and how we write tests that are modular, data-driven, and self-verifying. The next few chapters will get into more gory technical details of coding automated acceptance tests.

Since manual tests are out of the picture, you can get down to business with the automation. As we mentioned before, you'll have some additional work to do before you can run the executable tests we described in Chapters 16 through 18. This should reassure you if you have any experience whatsoever in test automation, because if we said that's all there was to it, you'd know it was snake oil.

In fact, if you've automated tests on a traditional software development project, you may find our approach pretty foreign. If you haven't done any test automation, you're going to find any approach mysterious. So before we go on, we want to point out the reasons for our automation approach.

Traditional test automation is a classic case of taking a manual process and automating it. Lots of time and effort goes into selecting a "test tool," which is then used to "capture" a manually executed sequence of interactions with the system. These captured scripts are then "replayed," and the results are either examined or compared to a baseline.

Tests created this way require manual execution the first time through. If the system changes much between the capture and the replay, the captured script will no longer run correctly (even if the system is working correctly) and must be recaptured or edited extensively. This can theoretically provide some automation on a traditional project, where weeks and months are spent in "system test," with the specifications frozen. On an XP project, where everything happens quickly and change is encouraged, constant recapturing becomes essentially the same thing as manual testing. Capture/playback may lead you off the high road and over a cliff.
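
To make that fragility concrete, here's a minimal, tool-neutral sketch in Java of what a captured script boils down to: replay one recorded session and compare the result to a saved baseline. The class name, the screen text, and the little loginScreen() stand-in are all invented for illustration; no capture tool actually produces Java.

public class CapturedLoginScript {

    // Stand-in for the system under test; a real capture tool would be
    // driving the actual GUI here.
    static String loginScreen(String user, String password) {
        int newMessages = 3;   // volatile data that has nothing to do with login
        return "Welcome " + user + "! You have " + newMessages + " new messages.";
    }

    public static void main(String[] args) {
        // The recorded session, with the test data baked in exactly as typed.
        String replayed = loginScreen("bob", "secret99");

        // The baseline screen saved during the original manual run.
        String baseline = "Welcome bob! You have 3 new messages.";

        // The comparison fails the moment the wording or the message count
        // changes, even though login still works -- cue a recapture.
        System.out.println(replayed.equals(baseline) ? "PASS" : "FAIL: recapture");
    }
}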

It's possible to address the problems with traditional capture/replay style test automation by using the captured scripts as a starting point, then breaking them into modules, replacing the hard-coded test data with parameters, and assembling the real tests using these modules as building blocks. This becomes a programming effort, working within whatever editing system and language the test tool supports.
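
Here's what that reverse-engineering step looks like in miniature, again as a hypothetical Java sketch rather than any real tool's scripting language: the recorded login keystrokes become one parameterized, reusable module, and the Driver interface stands in for whatever layer actually talks to the system.

public class LoginModule {

    // Stand-in for whatever layer actually drives the system under test.
    public interface Driver {
        void type(String field, String value);
        void click(String control);
        String readStatusLine();
    }

    private final Driver driver;

    public LoginModule(Driver driver) {
        this.driver = driver;
    }

    // The recorded keystrokes become one business-level action whose data is
    // passed in, so many tests can reuse it with different users.
    public String login(String user, String password) {
        driver.type("username", user);
        driver.type("password", password);
        driver.click("Login");
        return driver.readStatusLine();
    }
}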

While this works better than pure capture/replay, it still may take too long for an XP project. You still can't get started until the system is working, so you can do the capture. Creating modules from the captured scripts is time-consuming and tedious because of all the low-level details the tool records in the script for replay. The automation language, editors, and other development tools (if any) provided by the test tool are often insufficient, and the resulting automated tests end up completely specific to the test tool. If you're an expert user of your test tool, you can make it work, but if the programmers on your team aren't as familiar with it, you'll have problems (and grumbling in the ranks).

Many traditional software automation attempts start out with capture/replay and the intent to modularize the tests later but never get there. The number of resulting automated tests remains pitifully small because of the large amount of time required to maintain them.

To avoid this trap, we turn the process around. We start by writing modular, data-driven, self-verifying executable tests in an appropriate programming language instead of trying to reverse-engineer them from captured scripts in the language of the test tool. This allows us to immediately write the tests at the appropriate level without getting bogged down in replay details. Then we can take advantage of the full set of tools and features available in a commercial-duty programming language.
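
To show what we mean, here's a rough Java sketch of a test that is modular, data-driven, and self-verifying. It builds on the hypothetical LoginModule above, and every name in it is ours, not any real tool's API: the test data sits in a table, the steps come from a reusable module, and the test checks its own results and reports pass or fail.

public class LoginAcceptanceTest {

    // Each row: user, password, expected status line. The table could just as
    // easily be read from a spreadsheet export or other external file.
    private static final String[][] CASES = {
        { "bob",   "secret99", "Welcome bob!"             },
        { "alice", "wrongpw",  "Invalid user or password" },
    };

    public static void main(String[] args) {
        LoginModule loginModule = new LoginModule(new StubDriver());
        int failures = 0;
        for (String[] c : CASES) {
            String actual = loginModule.login(c[0], c[1]);
            boolean pass = actual.equals(c[2]);          // self-verification
            System.out.println(c[0] + "/" + c[1] + " -> " + (pass ? "PASS" : "FAIL"));
            if (!pass) {
                failures++;
            }
        }
        System.exit(failures == 0 ? 0 : 1);              // machine-readable result
    }

    // Trivial stub so the sketch runs on its own; a real test would plug in a
    // driver that talks to the actual system.
    static class StubDriver implements LoginModule.Driver {
        private String user;
        private String password;

        public void type(String field, String value) {
            if ("username".equals(field)) {
                user = value;
            } else {
                password = value;
            }
        }

        public void click(String control) {
            // no-op in the stub
        }

        public String readStatusLine() {
            return "secret99".equals(password)
                    ? "Welcome " + user + "!"
                    : "Invalid user or password";
        }
    }
}

The stub isn't the point; the shape is. Add a row to the table and you have a new test; change the login dialog and you fix one module instead of recapturing dozens of scripts.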


