XP is designed to be lightweight and streamlined, producing fewer expensive artifacts, so producing a lot of metrics seems counterintuitive. However, feedback is crucial in XP. Customers and managers need concrete information about how effective and productive the team has been. Your car's gauges (and possibly some funny noises) tell you when it's time for preventive maintenance. Software metrics have been compared to a dashboard: they tell you the state of your vehicle and provide simple ways to keep tabs on your project's health. Once again, we're talking about something that isn't a tester function, but we've often found ourselves suggesting practices associated with keeping metrics and found them useful, which is why we include them in this book.

Your team is unlikely to produce high-quality software in a timely manner without someone performing the tracker function. If you're the tester, don't try to take on the tracker role as well. It's a job best done by a technical lead or programmer. Some metrics are more appropriately kept by a project manager or XP coach.

At the very least, your team has to track the progress of the iteration in some way, so you know each day whether you're still on target to complete all the stories by the end of the iteration. Whoever is tracking your project should record, for each task, the estimate for the task, the amount of time spent so far on it, and the estimated time to complete it. Adding up this last number across all tasks will tell you whether you need to drop one or more stories or ask the customer for more. Update this information every day in the standup meeting. Tracking is critical to avoid last-minute surprises.

You can keep track on a whiteboard if you want, so the whole team can see the progress. If your team is split into multiple geographical locations, as one of Lisa's was, a spreadsheet may be a useful way to do the tracking. See www.xptester.org for a sample tracking spreadsheet.
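The tracking arithmetic described above is simple enough to sketch. Here is a minimal illustration, assuming a spreadsheet-like list of tasks; the task names, hour figures, and the team's remaining ideal hours are all invented for the example:

```python
# Hypothetical sketch of iteration tracking: for each task, record the
# original estimate, the hours spent so far, and the estimated hours to
# complete. All names and numbers below are made up for illustration.

def hours_to_complete(tasks):
    """Add up the 'estimated time to complete' across all tasks."""
    return sum(t["to_complete"] for t in tasks)

tasks = [
    # task description, original estimate, spent so far, estimated to complete
    {"name": "login story: acceptance tests", "estimate": 8,  "spent": 5,  "to_complete": 4},
    {"name": "login story: session code",     "estimate": 12, "spent": 10, "to_complete": 6},
    {"name": "report story: CSV export",      "estimate": 6,  "spent": 0,  "to_complete": 6},
]

remaining = hours_to_complete(tasks)   # 16 hours of work left in this example
team_hours_left = 12                   # invented: ideal hours left in the iteration

if remaining > team_hours_left:
    print(f"Over by {remaining - team_hours_left} hours: "
          "talk to the customer about dropping a story")
elif remaining < team_hours_left:
    print("Ahead of schedule: ask the customer for another story")
else:
    print("On target")
```

Updating the spent and to-complete figures in the daily standup keeps the comparison honest; the original estimate is kept only so the team can learn from the difference, not to pressure anyone.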
Some organizations need to gather more formal information. Here are examples of metrics kept by teams we've been on or talked with:
If you've stored this type of data, you can produce whatever reports you or the customer finds useful. For example:
Find tools that will automatically produce useful metrics. To encourage the creation of unit tests, programmers on one of Lisa's projects wrote a simple script that traversed their source tree daily and sent email with the details of the unit tests written, organized by package and class. They also used JavaNCSS, a source measurement suite for Java that generates information such as
JavaNCSS is free software distributed under the GNU General Public License from www.kclee.com/clemens/java/javancss. You can generate these metrics automatically each day and display the results on the project wiki to help the team determine which parts of the code are ripe for refactoring and whether test coverage is adequate. The most important metrics are the test results. Unit tests should always pass at 100%, but it's a good idea to keep track of how many new tests are written each day and post this on the "big board" or other prominent location. Acceptance test results, in graphical form if possible, should be posted where all project members can see them (in more than one location if needed). Janet Gregory passes along this suggestion for metrics: