Self-Verifying Tests


What do you want to know when a test runs? Whether it passed or failed. If it passed, you're happy; there's no need to look further. If it failed, you want to investigate and see whether it's a real defect or an intended modification, which you'll need to accommodate with a change in the test. In other words, you want tests that tell you their outcome as part of running, as opposed to making you dig through the test output to determine what happened.

There's more than one automated way to verify a test, and some are better than others. For instance, test tools often include a feature where you can compare your latest result against a baseline file from a previous execution. The idea is that any differences represent potential problems. You could, in theory, include the comparison step in your automation, so that running the comparison doesn't require any extra interaction, and consider this a self-verifying test.
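A minimal sketch of what that automated comparison step might look like follows (JUnit-style; the baseline path, the generateReport helper, and the class name are made up for illustration):

import static org.junit.Assert.assertEquals;

import java.nio.file.Files;
import java.nio.file.Paths;

import org.junit.Test;

public class ReportBaselineTest {

    // Hypothetical sketch: compare the latest output against a baseline
    // file saved from a previous, known-good run.
    @Test
    public void reportMatchesBaseline() throws Exception {
        String baseline = new String(
                Files.readAllBytes(Paths.get("baseline/report.txt")));
        String actual = generateReport(); // placeholder for the real system call

        // Any difference at all fails the test.
        assertEquals(baseline, actual);
    }

    private String generateReport() {
        // Stand-in for whatever produces the system's output.
        return "...";
    }
}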

Trouble is, on an XP project you just can't afford the time for this (and that may be true for any sort of project). First, you'd have to make sure you masked out all the things that might differ without being failures: dates, times, session ids, dynamically generated record numbers, and so on. Then, when a difference does show up, you have to study it, figure out exactly what it is, and decide whether it's okay. You're comparing everything, even items that have no importance, so you're likely to get so many differences that the verification essentially becomes manual again.
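To give a sense of the overhead, a hypothetical masking step might look something like this (the patterns and the OutputMasker name are invented for illustration; both the baseline and the actual output would be run through it before comparing):

import java.util.regex.Pattern;

public class OutputMasker {

    // Hypothetical masks for values that legitimately differ on every run:
    // timestamps, session ids, generated record numbers, and so on.
    private static final Pattern TIMESTAMP =
            Pattern.compile("\\d{4}-\\d{2}-\\d{2} \\d{2}:\\d{2}:\\d{2}");
    private static final Pattern SESSION_ID =
            Pattern.compile("sessionId=[A-Za-z0-9]+");

    public static String mask(String output) {
        String masked = TIMESTAMP.matcher(output).replaceAll("<TIMESTAMP>");
        masked = SESSION_ID.matcher(masked).replaceAll("sessionId=<ID>");
        return masked;
    }
}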

The whole process may be faster than a purely manual test would be, but it has all the same weaknesses: it's slow, requires intense concentration on details, and is unreliable under heavy schedule pressure. Most systems and user interfaces developed on an XP project are far too dynamic to test in this manner; you end up spending all your time dealing with no-problem differences.

The problem with the baseline comparison method is that it starts out with the assumption that everything is equally important. Sure, you can modify that with "masks" or some other mechanism to ignore parts of a system response you don't care about or that you expect to be different each time. But if only a few things really matter (as is often the case), why spend all your effort trying to ignore what's unimportant? You should be focusing on identifying what is important: recognizing the critical things in the response that determine whether the function you're testing actually worked. Remember, in XP we have time to verify only the minimum needed to show that the story is complete.

Self-verification is built right into the executable tests we described in Chapters 16 through 18:

 assertTrue( login("bob","bobspassword") );  

The login call determines whether the attempt with the specified id and password succeeded, and the assertTrue verifies that it did (login returns true).
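In context, that assertion might live inside an ordinary JUnit test, something like the following sketch (the login stub here is only a placeholder for the real call into the system under test):

import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class LoginTest {

    // Self-verifying: each assertion reports pass or fail on its own,
    // with no output files to inspect afterward.
    @Test
    public void validUserCanLogIn() {
        assertTrue(login("bob", "bobspassword"));
    }

    @Test
    public void wrongPasswordIsRejected() {
        assertFalse(login("bob", "not-bobs-password"));
    }

    // Placeholder for the system call under test.
    private boolean login(String id, String password) {
        return "bob".equals(id) && "bobspassword".equals(password);
    }
}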


