Acceptance Tests versus Unit Tests


Okay, you may say, but what's the benefit of having a team member dedicated to this? Why not just adopt the test-first pattern and have programmers write acceptance tests at the very beginning, even before they write unit tests? Why not automate those acceptance tests and require all code to pass 100% of them before going into the repository and after each integration, just as with unit tests? Then XP could eradicate those user-apparent defects just as it does the unit-level defects.

Well, that, in a nutshell, is exactly why we advocate the dedicated testing role. If acceptance tests were just like unit tests, programmers could write and automate them and then build the system too. However, unit tests and acceptance tests are different in important ways that affect how a test-first approach can be carried out.

Unit tests are written in the same language as the code they test. They interact with the programming objects directly, often in the same address space. A programmer pair can switch seamlessly between writing tests, writing code, refactoring, and running tests without even getting out of the editor. And the unit tests correspond almost exactly to the programming objects. It's perfectly natural for a pair of programmers to write the tests for the object they're about to code and then make sure the tests all run 100% when they're done. From then on, those tests are invaluable in validating that the code still works after refactoring.
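As a sketch of this tight correspondence, here is a hypothetical class and its unit test, both in the same language. The Account class and its test names are invented for illustration; the point is that the test talks to the object directly, so the pair can run it from the editor in seconds.

```python
import unittest

# Hypothetical production object (not from the book's examples).
class Account:
    def __init__(self):
        self.balance = 0

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self.balance += amount

# The unit test lives in the same language and address space as Account
# and corresponds almost one-to-one with the object it exercises.
class AccountTest(unittest.TestCase):
    def test_deposit_increases_balance(self):
        account = Account()
        account.deposit(50)
        self.assertEqual(account.balance, 50)

    def test_rejects_non_positive_deposit(self):
        with self.assertRaises(ValueError):
            Account().deposit(0)
```

A pair would run this with something like `python -m unittest` after each small change, which is what makes the write-test, write-code, refactor loop so seamless.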

Acceptance tests, on the other hand, correspond to the programming objects in a much looser way. A single acceptance test will almost always rely on many different programming objects, and a single programming object will often affect the outcome of multiple unrelated acceptance tests. Also, acceptance tests are frequently run through an external interface to the system, perhaps the user interface, requiring specialized tools to validate behavior visible to a human being, as opposed to another programming object.
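To make the looser correspondence concrete, here is a minimal sketch, with invented names (OrderSystem, Catalog, Cart), of an acceptance test that drives the system only through an external command interface. One test touches several objects at once, and the test checks user-visible output rather than any single object's state.

```python
# All classes here are hypothetical stand-ins for a real system under test.
class Catalog:
    def __init__(self, prices):
        self.prices = prices

class Cart:
    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)

class OrderSystem:
    """Facade standing in for the whole application."""
    def __init__(self, catalog):
        self.catalog = catalog
        self.cart = Cart()

    def execute(self, command):
        # A crude text interface, standing in for a real UI boundary.
        verb, _, arg = command.partition(" ")
        if verb == "add":
            self.cart.add(arg, self.catalog.prices[arg])
            return f"added {arg}"
        if verb == "total":
            return f"total {self.cart.total()}"
        return "unknown command"

def acceptance_test_checkout_total():
    # The test sees only user-visible strings; Catalog, Cart, and
    # OrderSystem all have to work together before it can pass.
    system = OrderSystem(Catalog({"book": 30, "pen": 2}))
    system.execute("add book")
    system.execute("add pen")
    assert system.execute("total") == "total 32"
```

Note that a change to any one of the three classes can break this single test, and a change to Cart alone could affect many unrelated acceptance tests, which is exactly the loose many-to-many correspondence described above.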

It isn't possible to switch easily between acceptance tests and coding the way it is with unit tests, because the acceptance tests won't run until all the required code is ready, even when the piece the pair is working on is complete. Because of this, acceptance tests can't be relied upon after refactoring the way unit tests can.

Due to these differences, acceptance tests don't fit as naturally into the programmers' workflow and don't provide as much immediate benefit in finding defects in new or refactored code. It isn't surprising, then, to meet with resistance when asking programmers to spend the time writing automated acceptance tests before writing the code. It might be an easier sell if acceptance test automation were quick and easy, but it requires a high level of concentration and attention, sustained throughout the course of the project, to develop and maintain a set of tests that can both validate the system and be carried out in the available time.

A tester brings this level of focus to the team, along with a crucially different viewpoint that combines the perspective of the customer with the technical skills of the programmers. As programmers, we're aware of the customer issues and do what is (minimally) necessary to satisfy them. But as testers, we care about what it will be like to use and own the system over the long haul, while still being aware of the programming and technical details. This allows us to find the missing door as soon as the doorway is conceived rather than after it's constructed.



Testing Extreme Programming
ISBN: 0321113551
Year: 2005
Pages: 238
