How Often Do You Run Acceptance Tests?


Running acceptance tests is like voting: do it early and often. You defined acceptance tests before and during iteration planning, then wrote executable acceptance tests during the first day or two of the iteration. As stories are completed, the team makes the tests run through direct calls to the code, and you begin getting feedback immediately.

On our ideal team, the programmers responsible for a story are also responsible for making its executable acceptance tests run, which means implementing the methods in the direct-call interface that those tests require. The story isn't finished until the acceptance tests pass (except when the tests depend on other stories not yet completed).
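To make the idea concrete, here's a minimal sketch of what such an executable acceptance test might look like in Java with JUnit. The OrderSystem and Order classes, their methods, and the story being tested are hypothetical stand-ins for your application's own direct-call interface.

    import junit.framework.TestCase;

    // A sketch of an executable acceptance test that drives the application
    // through a direct-call interface. OrderSystem, Order, and the story
    // being tested are hypothetical; substitute your application's own API.
    public class PlaceOrderAcceptanceTest extends TestCase {

        public void testCustomerCanPlaceOrder() {
            // Exercise the application code directly, with no GUI in the path
            OrderSystem system = new OrderSystem();
            String orderId = system.placeOrder("widget", 3);

            // Verify the outcome the customer specified in the story
            Order order = system.lookUpOrder(orderId);
            assertEquals("widget", order.getItem());
            assertEquals(3, order.getQuantity());
        }
    }

The point is that the test exercises the application through plain method calls, with no GUI and no manual steps in the path, so it can run on every build.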

If more work needs to be done to make the executable tests run, that work, not the next story, should be the programmers' highest priority. Resist with all your might any temptation to give up on the executable tests and start testing manually. Once you step onto this slope, the only way out is down, and you'll never climb out of the hole. You'll end up spending more and more time on manual tests, until that's all you do.

In the worst case, if a test can't be automated, consider dropping it. As we discussed in Chapter 16, no test at all may be better than a manual test at this point. Keep in mind that in XP, unlike on a traditional software development project, acceptance tests are not the primary means of assuring quality. We aren't looking for defects in how we carried out our intentions in the code; that's the job of our test-first, pair-programming teams and unit-test automation.

The job of acceptance testing is to uncover differences between our intentions and the customer's expectations. Consequently, the risk associated with not running an acceptance test is different from that of not running unit tests. This doesn't mean acceptance tests aren't critical to the project. Just remember that you have time to test only the minimum needed to prove that the story has been completed according to the customer's desires.

If you're beyond the first iteration, you also have acceptance tests (now regression tests) from previous iterations to run. Your team should already have an integration environment set up (if it doesn't, you're heading for big trouble) where the team can do builds containing new code for the iteration. Here's where you really get the payoff for test automation.

Run the automated acceptance tests for previous iterations each time a new build is done in the integration environment. Now your team knows right away if they broke something. The code may have passed the unit tests, but we know some defects can't be caught by unit tests. Running automated regression tests will keep development speedy.
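One simple way to make this happen, sketched here with JUnit, is a suite class that bundles the acceptance tests from previous iterations so the build can invoke them as one regression run. The individual test class names below are hypothetical placeholders.

    import junit.framework.Test;
    import junit.framework.TestSuite;

    // A sketch of a regression suite that bundles acceptance tests from
    // previous iterations so the build can run them all automatically.
    // The individual test class names are hypothetical placeholders.
    public class AcceptanceRegressionSuite {
        public static Test suite() {
            TestSuite suite = new TestSuite("Acceptance regression tests");
            suite.addTestSuite(PlaceOrderAcceptanceTest.class);   // iteration 1
            suite.addTestSuite(CancelOrderAcceptanceTest.class);  // iteration 2
            return suite;
        }
    }

A build tool such as Ant can invoke this suite through its junit task, so a broken regression test fails the integration build immediately instead of surfacing days later.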

"I read some other Extreme Programming publications that said customers should run the acceptance tests. What about that?" you ask. Good point. In XP Utopia land, our customers are right there with us executing the acceptance tests. In less ideal projects, the development team (including you, the tester) must take on this responsibility, at least initially. When you think the stories are complete, though, the customer always gets a turn.

On our projects, we've handled acceptance testing by the customer or user in different ways. Here are some alternative approaches:

  • A tester and customer pair to execute acceptance tests; the tester may go around to a number of customers to repeat the process, each one running the acceptance tests that pertain to his part of the system.

  • A group of customers meets in one room with several workstations and pairs with each other to run the tests.

  • A group of customers meets in one room, but only one "drives" and runs tests, while the others watch via a projector (this scenario could be played out with remote participants, using a product such as NetMeeting).

When do you do this customer testing? Ideally, before the end of the iteration. If you're working with remote customers or in a situation where other teams provide parts of the final working system, you might be better off going through this process after the end of the iteration.

On one of Lisa's projects, the team had to compromise. They ran through the basic acceptance tests in a demo situation with the customer team a couple of days after the end of each iteration. Due to constraints in the availability of the user acceptance-test environment, the customers didn't get to do much hands-on testing until after each release. Each release consisted of two or three iterations, so this might mean several weeks went by before customers really got to test. As a result, some miscommunications between customers and programmers went unnoticed until it was too late to fix them before the release. You'll have to adapt your process to best fit your project and your customers.

Testing after the end of the iteration has one advantage for your development team: the iteration has already ended, and it's hard for the customer to say otherwise. Sure, you should have the customer sign off that the iteration is complete. In some organizations, that may even mean a formal signoff document. If a customer discovers that a story wasn't done according to her requirements (on which she also signed off) or that a significant defect is present, she can delay the signoff until her needs are met. However, that work will be part of the new iteration or a future one and may reduce the velocity the team can devote to new stories.

Make sure your customer understands up front that he can't run a test on the last day of the iteration, decide the result doesn't look the way he wants, and delay the end of the iteration by a day or a week until it's fixed. The iteration is over when it's over. If a story isn't complete or has such severe defects that it can't be called complete, the team's velocity for the next iteration is lowered accordingly, and that story also has to be part of the team's workload for that iteration (unless the customer decides to postpone or drop it).

Extreme Programming Installed has an excellent chapter on handling defects. One suggestion is to report problems by writing them on story cards. The team can estimate how long fixing the defect will take. Fixes for low-priority defects can be scheduled in future iterations. Urgent problems need to be addressed right away, which could mean that one or more stories in that iteration may have to be deferred to make time for fixing problems. This is a thorny area that you, as an XP tester, may find difficult to deal with in spite of all these good guidelines for handling problems.


