Practice 6. Make Key Metrics Visible
Every project should collect a few key metrics and find ways to make them visible to the project team. Visibility might be achieved by a simple web page or even a big visible chart on a whiteboard or by the water cooler. Metrics are helpful because, presented in the right way, they help the team focus on keeping important aspects of its project under control. Metrics help the team track its progress on a regular basis and get early warning when problems are beginning to develop. Entire books have been written on the subject of metrics, and I think sometimes people get carried away. A small set of simple metrics should suffice, and if their collection and display can be automated, your team will be in good shape to continually monitor and improve its progress.
An overlooked aspect of metrics is that they enhance learning. Making metrics visible helps drive meaningful discussions about the trends and how to change or improve them. If done in a positive way, these conversations help drive the collaborative problem solving that is invaluable to the team. Of course, if metrics are implemented in a negative way, such as by tracking individual instead of team metrics, then you can expect the opposite effect.
Some example metrics might be:
Charts of velocity and feature points remaining (as described above).
A chart that shows the status of all system tests and how long they took to run. This helps catch system and performance problems early and prevents cases where the system tests simply aren't run.
A graph that shows the amount of time required for nightly build(s). This helps keep the builds as small and fast as possible, which is critical to developer productivity: slow builds mean lower productivity.
A graph that shows the number of outstanding (not fixed) defects, by severity. Outstanding defects are a key indicator of sustainable development. Teams cannot carry around large backlogs of defects and should fix defects as they go.
A graph that shows the number of incoming defects, by severity over time. This is an indicator of how well tested the product was. High incoming rates might lead teams to set aside more time for fixing defects in the short term, or perhaps help point to areas of the product that need refactoring/replacement.
A graph that shows the number of fixed defects per week over time.
A companion to the previous two graphs (incoming and fixed defects) is a graph that shows the net gain: incoming minus fixed, which should hover around zero and stay as close as possible to a flat line.
A chart that tracks the amount of source code versus test code. This gives team members visible evidence that they are adding enough tests as their source code grows.
A chart that shows defect density, in terms of number of defects (on average) per thousand lines of code.