Experimenting with Tools


XP's short iterations and frequent releases give you the opportunity to experiment with different solutions for one or more iterations, evaluate the results, and try something new. Just as projects can "fail fast" with XP, so can tools.

A team for which Lisa was the tester used the short and frequent releases in XP as an opportunity to try different tools for acceptance-test automation. Here's Lisa's story:

When I joined the team, they'd never automated any acceptance tests for the Web applications they were developing. I had been successfully using a vendor tool, WebART, to automate acceptance-test scripts for Web applications in other XP projects. We used this tool for the first release of this new project, with a dedicated team of testers who learned it. The automation was fairly successful: tests for the central functionality were automated. However, the separate-test-team approach had caused a lot of other problems.

For the second release, we applied the XP principle of making the whole development team responsible for acceptance-test automation. The automation tasks were spread amongst programmers who didn't know WebART. We discussed whether we should try using HTTPUnit, which the programmers knew, but the consensus was that HTTPUnit tests took too long to develop.
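
To give a flavor of what such a test involves, here is a minimal sketch of an HTTPUnit acceptance test in JUnit 3 style. The URL, form name, and login data are invented stand-ins for illustration, not taken from the actual project:

import junit.framework.TestCase;
import com.meterware.httpunit.GetMethodWebRequest;
import com.meterware.httpunit.WebConversation;
import com.meterware.httpunit.WebForm;
import com.meterware.httpunit.WebRequest;
import com.meterware.httpunit.WebResponse;

// A minimal HTTPUnit acceptance test, JUnit 3 style.
// The URL, form name, and expected strings are hypothetical.
public class LoginAcceptanceTest extends TestCase {

    public void testSuccessfulLogin() throws Exception {
        WebConversation conversation = new WebConversation();
        WebRequest request =
                new GetMethodWebRequest("http://localhost:8080/app/login");
        WebResponse loginPage = conversation.getResponse(request);

        // Fill in and submit the login form.
        WebForm form = loginPage.getFormWithName("loginForm");
        form.setParameter("username", "testuser");
        form.setParameter("password", "secret");
        WebResponse welcomePage = form.submit();

        // Verify we landed on the welcome page.
        assertEquals("Welcome", welcomePage.getTitle());
        assertTrue(welcomePage.getText().indexOf("Hello, testuser") >= 0);
    }
}

Even a small test like this requires navigating pages, locating forms, and asserting on response text, which is part of why the programmers judged HTTPUnit tests slow to develop.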

We decided to try WebART. A couple of team members who had used it tried to pair with those who hadn't in order to automate the tasks. This was hard, because the team was large and too few people had prior WebART experience. Again, we automated the most critical testing. However, because the programmers didn't know the tool, they felt they spent too much time developing the test scripts: as much as half the time it took to write the production code.

For the next release, we decided to evaluate each acceptance test. If it could be automated more quickly with HTTPUnit and/or JUnit, we'd use those tools. If not, we'd decide whether it could be automated in a timely manner with WebART. If the automation seemed like too big an investment for the return (for example, a complex test of noncritical functionality that didn't need to be performed often), we'd do the test manually.

This worked well in terms of use of resources, but the acceptance tests automated with JUnit and HTTPUnit didn't really cover the system end to end. They also had a lot of hard-coded inputs and expected outputs and were thus not as flexible and robust as I would have liked. We ended up doing a lot of end-to-end testing manually.

At this point the project ended, but if we'd had another release to experiment with, I would have paired with the programmers to refactor the HTTPUnit and JUnit tests to follow good test-design principles. We would also have used WebART for more tests, because it did a better job of end-to-end testing and found more defects.
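
As a rough sketch of that kind of refactoring, a first step would be to pull the hard-coded inputs and expected outputs into a single table driving one shared helper, so adding a case no longer means copying a test method. The class name, URL, form name, and data below are again invented for illustration:

import junit.framework.TestCase;
import com.meterware.httpunit.GetMethodWebRequest;
import com.meterware.httpunit.WebConversation;
import com.meterware.httpunit.WebForm;
import com.meterware.httpunit.WebResponse;

// Sketch: the login test refactored so inputs and expected outputs
// live in one table instead of being scattered through duplicated
// test methods. URL, form name, and data are hypothetical.
public class LoginDataDrivenTest extends TestCase {

    // Each row: username, password, text expected on the result page.
    private static final String[][] CASES = {
        { "testuser", "secret",  "Hello, testuser" },
        { "admin",    "adminpw", "Hello, admin" },
        { "testuser", "wrongpw", "Login failed" },
    };

    public void testLoginCases() throws Exception {
        for (int i = 0; i < CASES.length; i++) {
            WebResponse result = submitLogin(CASES[i][0], CASES[i][1]);
            assertTrue("case " + i + ": expected '" + CASES[i][2] + "'",
                       result.getText().indexOf(CASES[i][2]) >= 0);
        }
    }

    // Shared navigation helper; the duplication lives here, once.
    private WebResponse submitLogin(String user, String password)
            throws Exception {
        WebConversation conversation = new WebConversation();
        WebResponse loginPage = conversation.getResponse(
                new GetMethodWebRequest("http://localhost:8080/app/login"));
        WebForm form = loginPage.getFormWithName("loginForm");
        form.setParameter("username", user);
        form.setParameter("password", password);
        return form.submit();
    }
}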

If you feel that test automation is taking too long or you're spending too much time maintaining test scripts, try a different approach or a different tool for the next few iterations. As the tester, you can offer your opinion based on experience, but what has worked for you in the past may not be the best choice for the current situation. The team is responsible for quality, and the team should select tools that will help them deliver.


