Testing as Part of Development


Planning and feedback are an essential part of XP, and tests are an essential part of planning and feedback. Acceptance tests define the functionality to be implemented during an iteration. A unit test is just a plan for what code you will write next. Tests give feedback by defining exactly when a task is done. Tests give feedback by announcing when a requirement has been violated. Tests watch over all integration activities and give feedback on integration problems. Altogether, tests form a concrete and decisive form of planning and give enormous amounts of feedback well beyond that which ordinary documents can give.

Create the Test First

One of the biggest changes in testing on an XP project is when the testing occurs. You create the test first, not last. This is a difficult concept to grasp at first. We make no apologies for it, because it is worth the effort to learn how to do this well. When a test has been created first, development is driven to completion with greater focus. Less time is spent postulating problems that might not require a solution. Less time is spent generalizing interfaces that may never be used. Unit tests are equivalent to creating a very detailed specification of the code you are about to write. Acceptance tests are equivalent to very detailed requirements documents. The difference is that tests enforce their requirements concisely and unambiguously.

The way to get started coding tests first is to just write a test by assuming a simple interface to some code that does exactly what you need. Do a little bit of thinking first to have a direction in mind when you start. Think about what you could implement simply. If you always implement the simplest thing next, you will always work as simply as you can. Then get into the habit of writing code after the test is run and has failed. The paradox of test-first coding is that tests are easier to write if you write them before the code, but only if you are experienced at writing tests. Just get started and stick with it until it becomes natural to work this way.
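The cycle above can be sketched in Python's `unittest`, which is analogous to JUnit. The `Stack` class and its interface here are purely hypothetical, invented for illustration: the test is written first, against an interface that does not yet exist, and then the simplest implementation that passes is written.

```python
import unittest

# Written FIRST: this test assumes a simple interface to a hypothetical
# Stack class before any production code exists.  Running it at this
# point fails, which is the signal to start writing code.
class TestStack(unittest.TestCase):
    def test_push_then_pop_returns_last_item(self):
        s = Stack()
        s.push(42)
        self.assertEqual(s.pop(), 42)

    def test_new_stack_is_empty(self):
        self.assertTrue(Stack().is_empty())

# Written SECOND: the simplest thing that makes the test pass.
class Stack:
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

    def is_empty(self):
        return not self._items

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestStack))
```

Note that the test methods reference `Stack` only when they run, so defining the class after the tests is legal; the ordering in the file mirrors the order in which the pieces were written.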

Test-first coding does not allow you to be sloppy. To do things test-first, you must decide what it is you want to accomplish and know exactly what it means to be done with it. You plan in advance what functionality is added next. You then design the interface to your code before the code is written. This creates a focused design that is easy to test.

Testable code is inherently more reliable than untestable code, assuming we take the time to test it. If the tests take only a couple of minutes or less to run, we will run them often. An entire unit test suite should take around ten minutes to run. Any longer, and we begin to see less frequent test runs, which in turn lead to less frequent integration. To keep the team working in the context of the latest versions of the code, the tests must execute in a reasonable amount of time. Optimize, prune, and combine tests that are taking too long. Subsets of tests are a viable alternative for running often during development but cannot be used for integration and are not as good as running all the tests all the time.
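The distinction between a fast development subset and the full integration suite can be sketched with `unittest`'s loader. The test classes here are hypothetical stand-ins, with `time.sleep` simulating a slow end-to-end check:

```python
import time
import unittest

class FastMathTests(unittest.TestCase):
    def test_addition(self):
        self.assertEqual(1 + 1, 2)

class SlowIntegrationTests(unittest.TestCase):
    def test_end_to_end(self):
        time.sleep(0.1)  # stand-in for a slow end-to-end check
        self.assertTrue(True)

def run_suite(suite):
    start = time.time()
    result = unittest.TextTestRunner(verbosity=0).run(suite)
    return result.wasSuccessful(), time.time() - start

loader = unittest.TestLoader()

# During development: run only the fast subset for quick feedback.
ok_fast, fast_seconds = run_suite(
    loader.loadTestsFromTestCase(FastMathTests))

# At integration: run everything, every time.
ok_all, all_seconds = run_suite(unittest.TestSuite([
    loader.loadTestsFromTestCase(FastMathTests),
    loader.loadTestsFromTestCase(SlowIntegrationTests),
]))
```

The fast subset keeps the edit-test loop short, but only the full suite guards an integration.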

How Much Do We Test?

This is a very important question with a very unsatisfying answer: just enough and no more. Unfortunately, every project is different, and what might be enough for one project may not be enough for another. You must ask yourself and the team members whether your goals are being met at the current level of testing. If they are not, do more or do less until you find the sweet spot, where the cost of adding more tests would be greater than the cost of fixing the bugs that will escape to the user. How many tests are enough is obviously different for each project because the cost of a bug getting to a user is highly variable.

How much we test is variable and highly subjective. Some guidelines are available, though. On an XP project, tests are used to communicate requirements; thus anything specified by the customer should have a test. Remember the saying "If it doesn't have a test, it doesn't exist." Tests are also used as reminders to ourselves about things we had a hard time coding. We should create a test for anything we want to remember about our code. And when we have all those in place, we can look around for anything that looks interesting and create a test for that. We should also create tests as a form of documentation. Anything we want someone else to be able to use can have a test showing how it is used. Ask yourself whether you have enough tests to guard your code from bugs during refactoring, integration, and future development.
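A test written as documentation might look like the following sketch, in which a hypothetical `Money` class is explained not by a document but by a runnable example of its intended use:

```python
import unittest

# A hypothetical Money class, invented for illustration.
class Money:
    def __init__(self, amount, currency):
        self.amount = amount
        self.currency = currency

    def add(self, other):
        if other.currency != self.currency:
            raise ValueError("currency mismatch")
        return Money(self.amount + other.amount, self.currency)

class TestMoneyAsDocumentation(unittest.TestCase):
    """Shows, in runnable form, how Money is meant to be used."""

    def test_adding_same_currency(self):
        total = Money(10, "USD").add(Money(5, "USD"))
        self.assertEqual(total.amount, 15)

    def test_mixing_currencies_is_an_error(self):
        with self.assertRaises(ValueError):
            Money(10, "USD").add(Money(5, "EUR"))

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestMoneyAsDocumentation))
```

Unlike a prose usage guide, this documentation fails loudly the moment it goes out of date.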

It is important to consider not only how much to test but also how much not to test. A general rule is to average about as much test code as there is system code. Some code will take many times more test code than system code; other code may not need to be tested at all. We want to test everything that might break, not trivial things that can't break. You don't need to test with invalid data unless it really is possible to receive it. By mercilessly refactoring, we can break our code into independent pieces and test them separately with a few tests instead of testing all the combinations of all the inputs. The idea is to test just enough so that we can behave as if we had complete coverage even when we don't.
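Testing independent pieces separately, rather than in combination, can be illustrated with a hypothetical pricing calculation. Once discounting and shipping are split into separate functions, a few tests of each piece cover what would otherwise require a cross-product of combined cases:

```python
# Hypothetical pricing code, split into independent pieces so each
# can be tested on its own rather than in every combination.

def discounted(price, rate):
    """Apply a fractional discount rate to a price."""
    return price * (1 - rate)

def with_shipping(price, weight_kg):
    """Add a flat 5-per-kilogram shipping charge."""
    return price + 5 * weight_kg

# Two tests per piece replace the four end-to-end combinations of
# (discount, no discount) x (shipping, no shipping).
assert discounted(100, 0.1) == 90
assert discounted(100, 0.0) == 100
assert with_shipping(90, 2) == 100
assert with_shipping(100, 0) == 100
```

The savings grow quickly: with n independent pieces of k cases each, separate testing needs n times k tests where combined testing needs k to the power n.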

Ultimately, test coverage is about courage. We need to release to production, refactor mercilessly, and integrate often. If we are unsure and begin to doubt our ability to do these things without introducing bugs, we cannot apply XP as intended.

Testing Frameworks

It is recommended that an automated acceptance test framework be custom crafted for every project. Acceptance tests generally contain large amounts of data that must be maintained. Being able to input data in a form that relates directly to the customer's domain will easily pay for itself in the reduced cost of acceptance test creation. It is also a requirement that the data be manipulated in ways that make sense from a domain-specific point of view. A framework that encapsulates domain concepts usually requires the team to create it specifically for the project.

Being able to specify input data, a script of actions, and output data for comparison will encourage customers to manage the tests themselves. If the customers can use the tool to create acceptance tests and maintain them, they can begin to truly "own" the tests that verify their user stories. This means that a general tool will not be as helpful as you think it will be. These types of frameworks evolve during the lifetime of the project, and the sooner the team starts to create this framework, the better off the project will be in the long run. You should never try to maintain acceptance test data with a bare-bones language-level framework intended for unit testing.
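The shape of such a framework can be sketched in miniature. Everything here is hypothetical, but the point is the separation: the customer maintains the rows of input data, action, and expected output, while the team maintains only the small runner underneath:

```python
# A sketch of a domain-specific acceptance test runner for a
# hypothetical account domain.  The customer edits only the rows below.

def apply_action(balance, action, amount):
    if action == "deposit":
        return balance + amount
    if action == "withdraw":
        return balance - amount
    raise ValueError(f"unknown action: {action}")

ACCEPTANCE_TESTS = [
    # start balance, action,    amount, expected balance
    (100, "deposit",   50, 150),
    (100, "withdraw",  30,  70),
    (0,   "deposit",  200, 200),
]

def run_acceptance_tests(rows):
    """Return the rows that failed; an empty list means all passed."""
    failures = []
    for start, action, amount, expected in rows:
        actual = apply_action(start, action, amount)
        if actual != expected:
            failures.append((start, action, amount, expected, actual))
    return failures

failures = run_acceptance_tests(ACCEPTANCE_TESTS)
```

In a real project the rows would live in files or tables the customer edits directly, but the division of ownership is the same.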

The Web site at http://www.xprogramming.com has a very good selection of unit test frameworks for most programming languages. Using an available framework as a shortcut can pay off, but only if you take the time to learn it well, inside and out. Not too long ago, everyone wrote their own unit test framework from scratch, but these days frameworks such as JUnit [Gamma+2001] are becoming the standard in testing tools. JUnit comes with instructions on how to integrate it into many different Integrated Development Environments (IDEs).

If you choose not to write your own, at the very least you must get your hands on your unit testing framework's source code and refactor it to the point of feeling ownership of it. Only when your team owns its test framework can its members feel confident and not hesitate to extend it or simplify it as needed. Consider the saying "Your unit test framework is not a testing tool, it is a development tool." Because unit testing is now a large part of development, even a small change to the framework can translate into a big boost in productivity.
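Extending a framework in this spirit might look like the following sketch, which grows Python's `unittest.TestCase` with a domain-specific assertion; the assertion name and the invoice example are invented for illustration:

```python
import unittest

# Treating the test framework as a development tool: a shared base
# class extends TestCase with an assertion the whole team reuses.
class DomainTestCase(unittest.TestCase):
    def assertCloseMoney(self, actual, expected, cents=1):
        """Assert two monetary amounts agree to within `cents` cents."""
        if abs(actual - expected) > cents / 100:
            self.fail(f"expected {expected:.2f}, got {actual:.2f}")

class TestInvoiceTotals(DomainTestCase):
    def test_rounded_total(self):
        self.assertCloseMoney(19.999, 20.00)

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestInvoiceTotals))
```

One small helper like this removes a repeated tolerance check from every money-handling test in the suite, which is exactly the kind of productivity boost the saying points at.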



Extreme Programming Perspectives
ISBN: 0201770059
Year: 2005
Pages: 445
