Metrics


XP is designed to be lightweight and streamlined, producing fewer expensive artifacts, so producing a lot of metrics may seem counterintuitive. However, feedback is crucial in XP: customers and managers need concrete information about how effective and productive the team has been. Your car's gauges (and possibly some funny noises) tell you when it's time for preventive maintenance. Software metrics have been compared to a dashboard that tells you the state of your vehicle. They provide simple ways to keep tabs on your project's health.

Once again, this isn't strictly a tester function, but we've often found ourselves suggesting metrics-keeping practices and finding them useful, which is why we include them in this book.

Your team is unlikely to produce high-quality software in a timely manner without someone performing the tracker function. If you're the tester, don't try to take on the tracker role as well. It's a job best done by a technical lead or programmer. Some metrics are more appropriately kept by a project manager or XP coach.

At the very least, your team has to track the progress of the iteration in some way, so you know each day whether you're still on target to complete all the stories by the end of the iteration. Whoever tracks your project should record, for each task, the estimate, the amount of time spent so far, and the estimated time to complete. Adding up this last number across all tasks tells you whether you need to drop one or more stories or ask the customer for more.
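The daily check described above is simple arithmetic. Here is a minimal sketch in Python; the task list, its field layout, and the capacity figures are illustrative assumptions, not from the book:

```python
# Daily iteration check: sum the estimated time to complete across all
# tasks and compare it with the team's remaining capacity.
# All numbers below are made up for illustration.

team_capacity_hours = 3 * 6 * 4  # e.g., 3 programmers, 6 ideal hours/day, 4 days left

tasks = [
    # (task, original estimate, hours spent so far, estimated hours to complete)
    ("login story: session handling", 8, 6, 4),
    ("login story: acceptance tests", 4, 1, 3),
    ("report story: query layer", 12, 2, 12),
]

remaining = sum(etc for (_, _, _, etc) in tasks)
print(f"Estimated hours to complete all tasks: {remaining}")

if remaining > team_capacity_hours:
    print("Over capacity -- talk to the customer about dropping a story.")
else:
    print("On target -- ask the customer for more work if slack remains.")
```

A whiteboard or spreadsheet holds the same data; the point is that the comparison is updated every day, not how it's computed.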

Update this information every day in the standup meeting. Tracking is critical to avoid last-minute surprises. You can keep track on a whiteboard if you want, so the whole team can see the progress. If your team is split into multiple geographical locations, as one of Lisa's was, a spreadsheet may be a useful way to do the tracking. See www.xptester.org for a sample tracking spreadsheet.

Some organizations need to gather more formal information. Here are examples of metrics kept by teams we've been on or talked with:

  • Iteration velocity, start and end dates

  • Story estimates, description, association with iterations

  • Task descriptions, estimates, responsible programmer(s), status, actual time spent

  • Daily standup meeting notes

If you've stored this type of data, you can produce whatever reports you or the customer finds useful. For example:

  • Tasks assigned to a particular programmer

  • Estimated versus actual velocity for an iteration

  • Estimated versus actual effort for a story
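Once story and iteration data are stored, reports like the ones above are a few lines of code. Here is a hedged sketch of an estimated-versus-actual report; the record structure and field names are hypothetical:

```python
# Aggregate estimated vs. actual effort per iteration from stored story
# records. Field names and values are illustrative assumptions.
from collections import defaultdict

stories = [
    {"iteration": 1, "estimate": 5, "actual": 7},
    {"iteration": 1, "estimate": 3, "actual": 3},
    {"iteration": 2, "estimate": 8, "actual": 6},
]

totals = defaultdict(lambda: {"estimate": 0, "actual": 0})
for story in stories:
    t = totals[story["iteration"]]
    t["estimate"] += story["estimate"]
    t["actual"] += story["actual"]

for iteration in sorted(totals):
    t = totals[iteration]
    print(f"Iteration {iteration}: estimated {t['estimate']}, actual {t['actual']}")
```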

Find tools that will automatically produce useful metrics. As a means of encouraging the creation of unit tests, programmers on one project Lisa worked on wrote a simple script that traversed their source tree daily and sent email with the details of the unit tests written, organized by package and class. They also used JavaNCSS, a source measurement suite for Java that generates information such as

  • Global, class, or function-level metrics

  • Non-Commenting Source Statements (NCSS)

  • Cyclomatic Complexity Number (McCabe metric)

  • Count of packages, classes, functions, and inner classes

JavaNCSS is free software distributed under the GNU General Public License from www.kclee.com/clemens/java/javancss.

You can generate these metrics automatically daily and display the results on the project wiki to help the team determine what parts of the code are ripe for refactoring and whether test coverage is adequate.
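The daily unit-test census script mentioned earlier could look something like this sketch. The naming conventions it assumes (test files ending in "Test.java", JUnit-style methods named "testXxx") are our assumptions, not details from that project:

```python
# Sketch of a daily unit-test census: walk a Java source tree, count
# JUnit-style test methods per package, and report the totals.

import os
import re

TEST_METHOD = re.compile(r"public\s+void\s+(test\w+)")


def census(src_root):
    """Return a dict mapping package directory -> number of test methods."""
    counts = {}
    for dirpath, _, filenames in os.walk(src_root):
        for name in filenames:
            if not name.endswith("Test.java"):
                continue
            with open(os.path.join(dirpath, name)) as f:
                found = TEST_METHOD.findall(f.read())
            package = os.path.relpath(dirpath, src_root)
            counts[package] = counts.get(package, 0) + len(found)
    return counts


if __name__ == "__main__":
    for package, n in sorted(census("src").items()):
        print(f"{package}: {n} test methods")
```

Run from a scheduled job, the output can be mailed to the team or posted to the wiki, as the team on Lisa's project did.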

The most important metrics are the test results. Unit tests should always pass at 100%, but it's a good idea to keep track of how many new tests are written each day and post this on the "big board" or other prominent location. Acceptance test results, in graphical form if possible, should be posted where all project members can see them (in more than one location if needed).

Janet Gregory passes along this suggestion for metrics:

I use the tried and true defect find-and-fix rate. I found that the developers really like the way I tracked number of lines covered by JUnit test code. We used JProbe for that. You can run it on every iteration, or in our case, we did it when I first started with the project, and then again at the end of the project. The rate increased, showing that suggested new tests gave better unit test coverage. [The developers] were actually quite proud of the fact that they managed to improve. (Janet Gregory, personal communication)



Testing Extreme Programming
ISBN: 0321113551
Year: 2005
Pages: 238