Coupling between Tests


Now that we've complicated things somewhat, we'll try to simplify them again. If you studied the above example long enough and closely enough, you'd eventually realize two things: a) you can simplify it as shown in Listing 18.3, and b) you really need to get a life already (just kidding).

Listing 18.3 Version 1.2
public class UserIdStoryTest {
   public void testCreateDelete() {
     login("super","superpassword");
     assertTrue( createUserId( "new",
                               "newpassword",
                               "new@xptester.org") );
     assertFalse( createUserId( "fred",
                                "",
                                "") );
     assertTrue( deleteUserId( "new" ) );
     assertFalse( deleteUserId( "doug") );
   }
}

This version combines the create and delete methods into one, using the create tests as setup for the delete tests and the delete tests as teardown for the create tests. It's about half the size, will run in about half the time, and effectively executes the same test cases. It's even shorter than our initial version and can be rerun without a system reset.

Although this is all to the good, it has a dark side. The original test (version 1.0) would pass the delete tests if the delete function were working, without regard to whether the create function worked or not. Our latest version will fail the delete test if create isn't working, even if nothing is wrong with the delete function.
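For comparison, here is a rough sketch, not the book's actual earlier listing, of what that independent structure looks like: separate test methods that rely only on the known starting state rather than on each other. The helper methods are the same ones used in the listings; the id "preloaded" is a hypothetical record assumed to exist in the starting state.

public class UserIdStoryTest {
   public void testCreate() {
     login("super","superpassword");
     // Passes or fails on the create function alone.
     assertTrue( createUserId( "new", "newpassword", "new@xptester.org") );
     assertFalse( createUserId( "fred", "", "") );
   }

   public void testDelete() {
     login("super","superpassword");
     // "preloaded" is assumed to be part of the known starting state, so this
     // passes or fails on the delete function alone -- but once it has run,
     // the test can't be rerun without resetting the system to that state.
     assertTrue( deleteUserId( "preloaded" ) );
     assertFalse( deleteUserId( "doug") );
   }
}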

Losing too much independence between tests can kill the effectiveness of acceptance testing. The acceptance-test pass/fail results can't be a useful measure of the project's progress if the failure of one function causes every acceptance test to fail. So watch out for coupling between tests that can lead to this. You may introduce it deliberately, as in this example, after weighing the benefits against the drawbacks, but interactions you didn't plan on may still occur.

For instance, extra records showing up in the results of a search test could cause it to fail not because anything was wrong with searching, but because other tests created records and then failed to delete them.

To deal with these kinds of interactions, imagine that all the test methods are executing simultaneously, and refactor your tests to produce consistent results in that environment. For example, use search queries in the search test that have no overlap with the values of the records created by the create test, so records from one never show up in the results of the other, whether they've been cleaned up or not, as sketched below. Not only does this avoid large numbers of failures resulting from just a few actual problems, it also allows you to execute the tests simultaneously.
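Here is a minimal sketch of that convention, not taken from the book's listings: the create test owns ids starting with "create-", the search test owns ids starting with "search-", and the search query never matches the other prefix. The helper searchUserIds() is hypothetical; login(), createUserId(), and deleteUserId() are the helpers used in the listings above.

public class UserIdSearchStoryTest {
   public void testCreateDelete() {
     login("super","superpassword");
     // Every record owned by this test carries the "create-" prefix.
     assertTrue( createUserId( "create-new", "newpassword", "new@xptester.org") );
     assertTrue( deleteUserId( "create-new" ) );
   }

   public void testSearch() {
     login("super","superpassword");
     assertTrue( createUserId( "search-alice", "alicepassword", "alice@xptester.org") );
     // The query matches only "search-" ids, so leftover "create-" records
     // from a failed create/delete test can't change this result, and the
     // two tests could even run at the same time.
     assertEquals( 1, searchUserIds( "search-" ).size() );
     assertTrue( deleteUserId( "search-alice" ) );
   }
}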


