Acceptance Test Details


It's time to hammer out the details for each high-level test we've already defined or will discover as we go along. Just enough details, that is. We'll still defer lots of details to writing the tests and getting them to run.

The details we want to nail down now are those visible to the customer. For instance, a story may have some mention of "reasonable response time." You definitely want to work with the customer to quantify this. Once you've done so, you may still have details to work out about how you go about setting up the test (how to simulate a load, how to capture the response times, which response times are measured, and so on), but leave those to the next step, writing the tests.
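Once the customer has quantified "reasonable response time" (say, a 95th-percentile response under two seconds), the deferred setup details might eventually look something like the following sketch. This is not code from the book; the function names, the two-second threshold, and the simulated workload are all illustrative assumptions, and `make_request` is a hypothetical stand-in for a real call against the system under test.

```python
import random
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def make_request():
    """Hypothetical stand-in for a real request to the system under test.

    Here it just sleeps for a random interval to simulate server work;
    in a real acceptance test it would issue an actual request and time it.
    """
    start = time.perf_counter()
    time.sleep(random.uniform(0.01, 0.05))  # simulated server work
    return time.perf_counter() - start

def measure_response_times(num_users=10, requests_per_user=20):
    """Simulate concurrent users and collect per-request response times."""
    with ThreadPoolExecutor(max_workers=num_users) as pool:
        return list(pool.map(lambda _: make_request(),
                             range(num_users * requests_per_user)))

times = measure_response_times()
p95 = statistics.quantiles(times, n=100)[94]  # 95th-percentile response time
# The 2-second limit is the kind of number the customer would supply.
assert p95 < 2.0, f"95th percentile {p95:.3f}s exceeds the agreed 2-second limit"
```

The point is that the customer-visible detail (the two-second limit) is pinned down now, while mechanics like the thread pool size and how requests are timed can wait until the test is actually written.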

We'll discuss how to go about getting these details down, but first we want to talk some more about the relationship between the customer, the acceptance tests, the quality of the delivered system, and the development team's quality of life.

Defining the details of the acceptance tests is really defining the quality the customer expects of the system. This should be a collaborative effort of the customer team and the development team, with the tester serving as the interpreter and the acceptance tests acting as the contract. General quality standards should be set during planning, but the acceptance tests contain the details that certify that the system meets the standards.

Quality is both a buzzword and an emotionally charged term for many. A lot of pitfalls beset an overly glib use of the term, and those of us in the "quality" profession are by no means immune. For example, experienced testers suffer the occupational hazard of always wanting to push for "higher" quality. Leaving aside the ambiguity of the term "higher" (assume it means "more"), the truth is, it's not up to the tester or QA engineer to set the quality level. That choice is reserved for the customer.

Which leads to the question: wouldn't customers always want the highest-quality system they can get?

No. Customers with time and money constraints consider a lot of other things. Is a Mercedes higher quality than a Ford or Toyota? Which do you own? Here's Lisa's experience from a team she worked with:

Our customer was a startup with a Web-based product. They were in a crunch: they needed to show their application to potential investors in a few weeks but had nothing running. They just needed a system to be available an hour or two a day for demos. They weren't looking for a bulletproof 24 x 7 production server. In fact, they couldn't afford to pay for a bulletproof system right then. Their priority was to have enough features to show off, not to have a completely stable system. Of course, if they could get it for the same price, they'd love to have it bulletproof. It generally takes significantly more time and/or resources to produce a system with guaranteed stability.

Here's another example. Our sample application, XTrack, is intended for use internally by our own team. We don't expect more than one or two users to be logged into the system at one time. Our users are trained and will not do too much weird, unexpected stuff. If they do, they're savvy enough to work around problems that arise. We don't need to produce the most robust user interface in the world. We're more interested in being able to track the project than in having a pretty, user-friendly interface.

Customers can't make intelligent choices about quality without understanding the cost of producing software that contains the desired features at the desired level of quality. The development team provides this information in the form of story estimates as well as in the details of the acceptance tests.

When the customer is looking for a bargain (and who isn't?), it may become demoralizing for the development team if they begin to feel they're producing a bad product. Understanding how to produce the customer's desired level of quality when the team has a "higher" standard is key to successful XP projects and requires making the distinction between internal and external quality.



Testing Extreme Programming
ISBN: 0321113551
Year: 2005
Pages: 238
