Closing Down the Current Iteration
As each iteration ends, certain tasks must be accomplished. Part of the “fit and finish” of software development is determining which work items are left and whether they can be completed within the current iteration. Any remaining work items that cannot be finished must be moved into the next (or a later) iteration, as determined by the project priorities. You must also make sure that the files in source control are properly labeled so that you can locate them as needed. Finally, consider whether the code should be branched, in case the next iteration forks or diverges sharply from the current line of development.
Continuing to Track Bugs and Defects
Even after a successful development iteration, build, and deployment, work item tracking continues, shifting toward user-encountered bugs and defects. During development, it was primarily the testers and developers who logged bugs. Keep in mind that even after an iteration is closed out for development, work items can still be logged. Visual Studio 2005 Team System helps the team continue to stabilize the application long after the initial release.
Reporting
When Visual Studio 2005 Team System is used during your project, data is automatically captured in the data warehouse and used in reports that provide additional insight into trends and activity within your project. For example, when project tasks are performed, the data warehouse automatically collects data that enables reports such as the following:
Remaining Work Report
The Remaining Work Report (in a cumulative flow diagram form) shows work remaining measured as scenarios and Quality of Service (QoS) requirements being resolved and closed in the iteration.
Figure 10-1 Remaining Work Report
The following table explains the various sections of the Remaining Work Report example in Figure 10-1.
| Data Series | Description |
| --- | --- |
| Work Item States | Each color band represents the number of work items that have reached the corresponding state as of the given date. |
| Chart Height | The total height is the total amount of work to be done in the iteration. If the top line increases, the total work is increasing. Typically, the reason is that unplanned work is adding to the total required. That may be expected if you've scheduled a buffer for unplanned work, such as bug fixing. (See “Unplanned Work Report,” later in this section.) If the top line decreases, total work is decreasing, probably because work is being cut from the iteration. |
| Work in Progress | Current status is measured by height on a particular date. The remaining backlog is measured by the current height of the leftmost area, Active in this case. The current completions are shown by the current height of the rightmost area, Closed. The height of the band in between indicates the work in progress—in this case, items Resolved but not Closed. |
| Resolved | An expansion in the middle bands can reveal a bottleneck—for example, if too many items are waiting to be tested and testing resources are inadequate. Alternatively, a significant narrowing of the band can indicate spare capacity (a rare discovery!). |
| End Date | Although it can be easy to extrapolate an end completion inventory or end date for the backlog from a cumulative flow diagram like this, a small caution applies. Many projects observe an S-curve pattern, in which progress is steepest in the middle. The commonsense explanations for the slower starting and ending rates are that startup is always a little difficult, and tough problems tend to get handled at the end of a cycle. |
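The band heights in a cumulative flow diagram like this can be derived from work-item state history. The following Python sketch assumes a simplified record shape (a mapping of item id to the date each state was entered); it is illustrative only and does not reflect the actual Team System data warehouse schema:

```python
from datetime import date

# Ordered pipeline states, least to most complete.
STATES = ["Active", "Resolved", "Closed"]

def cumulative_flow(items, as_of):
    """Count items whose furthest state reached by `as_of` is each state.

    `items` maps an item id to a dict of {state: date entered}.
    Each band's height is the count for that state; the total height
    is the total work in the iteration.
    """
    counts = {s: 0 for s in STATES}
    for history in items.values():
        # Find the furthest state reached on or before `as_of`.
        reached = [s for s in STATES if s in history and history[s] <= as_of]
        if reached:
            counts[reached[-1]] += 1
    return counts

items = {
    1: {"Active": date(2006, 1, 2), "Resolved": date(2006, 1, 5)},
    2: {"Active": date(2006, 1, 3)},
    3: {"Active": date(2006, 1, 2), "Resolved": date(2006, 1, 4),
        "Closed": date(2006, 1, 6)},
}
print(cumulative_flow(items, date(2006, 1, 5)))
# {'Active': 1, 'Resolved': 2, 'Closed': 0}
```

Plotting these counts as stacked areas over a range of dates reproduces the banded chart: the leftmost band is the remaining backlog, the rightmost the completions.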
Velocity Report
The Velocity Report, one of the key elements for estimation, shows how quickly the team is actually completing planned work and how much the rate varies from day to day or iteration to iteration. Use this data to plan the next iteration in conjunction with the quality measures. Similar to the Remaining Work Report, this report is most useful when looking at days within an iteration or iterations within a project.
Figure 10-2 Velocity Report
The following table explains the various sections of the Velocity Report example in Figure 10-2.
| Data Series | Description |
| --- | --- |
| Resolved Work Items | This line shows the count of work items resolved on each day. |
| Closed Work Items | This line shows the count of work items closed on each day. |
| Bugs Found per Scenarios Resolved | This line divides the sum of bugs found by the sum of scenarios resolved. This quality indicator should stay low. If it goes up, more time is needed to find and fix bugs. |
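The two velocity lines and the quality indicator above amount to simple daily tallies and a ratio. A minimal sketch, assuming an illustrative event shape (a list of day/action pairs) rather than the real warehouse schema:

```python
def daily_velocity(events):
    """Tally work items resolved and closed per day.

    `events` is a list of (day, action) pairs where action is
    'resolved' or 'closed'; this shape is illustrative.
    """
    velocity = {}
    for day, action in events:
        day_counts = velocity.setdefault(day, {"resolved": 0, "closed": 0})
        day_counts[action] += 1
    return velocity

def bugs_per_scenarios_resolved(bugs_found, scenarios_resolved):
    """The quality indicator plotted with velocity: total bugs found
    divided by scenarios resolved. It should stay low."""
    return bugs_found / scenarios_resolved if scenarios_resolved else 0.0

events = [("day1", "resolved"), ("day1", "resolved"),
          ("day1", "closed"), ("day2", "resolved")]
print(daily_velocity(events))
# {'day1': {'resolved': 2, 'closed': 1}, 'day2': {'resolved': 1, 'closed': 0}}
```

Averaging the daily counts over an iteration gives the team's measured rate, which feeds the next iteration's plan.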
Unplanned Work Report
This report divides the total work shown in the Remaining Work Report into planned and unplanned work. Very few teams know all the work to be done ahead of time, even within the iteration.
Figure 10-3 Unplanned Work Report
The following table explains the various sections of the Unplanned Work Report example in Figure 10-3.
| Data Series | Description |
| --- | --- |
| Chart Height | The top line of this graph matches the top line of the Remaining Work Report. The total height is the total amount of work to be done in the iteration. |
| Added Later | The areas then divide the work into the planned and unplanned segments (“unplanned” means unscheduled as of the beginning of the iteration). |
| Interpretation | For monitoring, use this graph to determine the extent to which unplanned work is forcing you to cut into planned work. For estimation, use this to determine the amount of schedule buffer to allow for unplanned work in future iterations. |
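The planned/unplanned split reduces to a test against the iteration start date. A sketch under illustrative assumptions (items keyed by id with a creation date; not the actual Team System schema):

```python
from datetime import date

def unplanned_split(item_creation_dates, iteration_start):
    """Split the iteration's work into planned and unplanned items.

    An item counts as unplanned if it was created after the iteration
    began, i.e., it was not scheduled at the start.
    """
    planned = sum(1 for d in item_creation_dates.values()
                  if d <= iteration_start)
    return planned, len(item_creation_dates) - planned

items = {1: date(2006, 1, 1), 2: date(2006, 1, 1), 3: date(2006, 1, 9)}
planned, unplanned = unplanned_split(items, date(2006, 1, 2))
# planned == 2, unplanned == 1
```

The unplanned share (1 of 3 items in this example) is the figure to use when sizing the schedule buffer for future iterations.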
Quality Indicators Report
This report combines the test results, code coverage from testing, code churn, and bugs to help you see many perspectives at once.
Figure 10-4 Quality Indicators Report
The following table explains the various sections of the Quality Indicators Report example in Figure 10-4.
| Data Series | Description |
| --- | --- |
| Bars | The height of the bar shows you how many tests have been run and, of those run, how many have returned Pass, Fail, and Inconclusive results. |
| Code Coverage | The first series of points is the Code Coverage attained by those tests (specifically, the ones run with code coverage enabled). Ordinarily, as more tests are run, more code should be covered. On the other hand, if test execution and test pass rates rise without a corresponding increase in code coverage, it might indicate that the incremental tests are redundant. |
| Code Churn | The second series of points is Code Churn (in other words, the number of lines added and modified in the code under test). High churn obviously indicates a large amount of change and the corresponding risk that bugs will be introduced as a side effect of the changes. In a perfectly refactored project, you can see code churn with no change in code coverage or test pass rates. Otherwise, high code churn might indicate falling coverage and the need to rewrite tests. |
| Active Bugs | The third series is the active bug count. Clearly, there should be a correlation between the number of active bugs and the number of test failures. If the active bug count is rising and your tests are not showing corresponding failures, your tests are probably not testing the same functionality that the bugs are reporting. Similarly, if the active bug count is falling and test pass rates are not increasing, you might be at risk for a rising reactivation rate. |
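The divergences described for the Active Bugs series can be checked mechanically across daily snapshots. A sketch with illustrative field names (not the actual warehouse columns):

```python
def divergence_warnings(snapshots):
    """Scan consecutive daily snapshots for the quality-indicator
    mismatches described above: bugs rising without test failures,
    or bugs falling without a rising pass rate.
    """
    warnings = []
    for prev, cur in zip(snapshots, snapshots[1:]):
        if (cur["active_bugs"] > prev["active_bugs"]
                and cur["test_failures"] <= prev["test_failures"]):
            warnings.append("bugs rising without matching test failures")
        if (cur["active_bugs"] < prev["active_bugs"]
                and cur["pass_rate"] <= prev["pass_rate"]):
            warnings.append("bugs falling without rising pass rate")
    return warnings

days = [
    {"active_bugs": 10, "test_failures": 4, "pass_rate": 0.80},
    {"active_bugs": 14, "test_failures": 4, "pass_rate": 0.81},
]
print(divergence_warnings(days))
# ['bugs rising without matching test failures']
```

The first warning suggests the tests are not exercising the functionality the bugs report against; the second warns of a possible rising reactivation rate.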
Bug Rates Report
Bug rates are best interpreted with your knowledge of all of the current project activities and the other metrics on the Quality Indicators graph. For example, a high find rate can be a sign of sloppy code (a bad thing), newly integrated code (an expected thing), effective testing (a good thing), or exceptional events such as a bug bash (an infrequent thing). In contrast, a low find rate can mean high-quality product or ineffective testing. Use code coverage, code churn, and test rates to help you assess the meaning.
Figure 10-5 Bug Rates Report
The following table explains the various sections of the Bug Rates Report example in Figure 10-5.
| Data Series | Description |
| --- | --- |
| Newly Active Bugs | The number of new bugs found on the date. |
| Resolved Bugs | The number of bugs resolved on the date. |
Reactivations Report
Reactivations are work items that were resolved or closed prematurely and have since become active again. A small amount of noise (for example, less than 5 percent) might be acceptable, but a high or rising rate of reactivation should warn the project manager to diagnose the root cause and fix it.
Figure 10-6 Reactivations Report
The following table explains the various sections of the Reactivations Report example in Figure 10-6.
| Data Series | Description |
| --- | --- |
| Chart Height | The top line of this graph shows the total work items of the selected types (for example, bugs) resolved in the build. |
| Reactivated | The height of the top area is the number of reactivations (for example, work items previously resolved or closed that are now active again). |
| Net Work Items | The height of the lower area is the difference between the total work items resolved and the reactivated work items. |
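The reactivation rate the project manager should watch is just the ratio of the top band to the total. A minimal sketch:

```python
def reactivation_rate(resolved_total, reactivated):
    """Fraction of resolved or closed items that became active again."""
    return reactivated / resolved_total if resolved_total else 0.0

rate = reactivation_rate(40, 3)
# 0.075 -- above the roughly 5 percent noise level, so the root
# cause is worth diagnosing
```

The net work items band in the chart corresponds to `resolved_total - reactivated`.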
Bugs By Priority Report
This report type assesses the effectiveness of two things: bug hunting and triage. Bugs happen, and finding them is a good thing. Often, however, the easy-to-find bugs aren't the ones that will annoy customers the most. If the high-priority bugs are not being found, and a disproportionate number of low-priority ones are, redirect the testing efforts to look for the bugs that matter. In triage, it is easy to overprioritize bugs beyond the capacity to resolve them or to underprioritize them to the point where customers are highly dissatisfied.
Figure 10-7 Bugs By Priority Report
The following table explains the various sections of the Bugs By Priority Report example in Figure 10-7.
| Data Series | Description |
| --- | --- |
| Side By Side | The series includes total active bugs at the time of the build, number found in the build, and number resolved in the build. These are the same three series of bars that represent similar data in the Bug Rates Report. |
| Stacking | Each series is further broken down by priority, so each bar stacks from highest to lowest priority, with the lowest on top. |
| Interpretation | If there are many high-priority bugs active, be sure that the capacity exists to address them. On the other hand, a significant debt of low-priority bugs might also lead to customer dissatisfaction. |
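The stacking described above is a per-priority tally for each series. A small sketch, assuming priorities are integers with 1 as the highest (an illustrative convention):

```python
from collections import Counter

def priority_stack(bug_priorities):
    """Counts of bugs per priority for one series in one build,
    ordered so the highest priority (1) sits at the bottom of the
    stacked bar and the lowest priority is on top."""
    counts = Counter(bug_priorities)
    return sorted(counts.items())

print(priority_stack([1, 2, 2, 3, 3, 3]))
# [(1, 1), (2, 2), (3, 3)]
```

Computing this separately for the active, found, and resolved series yields the three side-by-side stacked bars per build.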
Actual Quality Versus Planned Velocity Report
How fast can we go before quality suffers? As much as teams believe that “haste makes waste,” there is usually a business incentive to go faster. A project manager's goal ought to be to balance the two by finding the maximum rate of progress that does not make quality suffer. This graph presents the relationship, for each iteration, of estimated size to overall quality.
Figure 10-8 Actual Quality Versus Planned Velocity Report
The following table explains the various sections of the Actual Quality Versus Planned Velocity Report example in Figure 10-8.
| Data Series | Description |
| --- | --- |
| Scenarios | The x-axis is the number of scenarios actually closed (completed) in the iteration. Each bubble is labeled according to its iteration. |
| Bugs Found per Scenarios Resolved | The y-axis is the total number of bugs found divided by the scenarios closed (in other words, the average number of bugs per scenario). |
| Total Estimated Work | The area of each bubble is the amount of work estimated in the iteration, computed as the sum of ROM estimates on the scenarios. |
| Efficiency | Stoplight colors go from green for the lowest bugs per iteration to red for the highest. If haste is in fact making waste, larger bubbles (larger iterations) will be higher in the northeast, and smaller ones will be lower in the southwest. If you are working at a comfortable pace and have not seen quality drop with planned iteration size, all iterations will be at roughly the same height on the y-axis. |
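Each bubble on this chart can be reduced to three numbers per iteration. A sketch with illustrative inputs (scenario counts, bug totals, and a list of rough-order-of-magnitude estimates):

```python
def iteration_bubble(scenarios_closed, bugs_found, rom_estimates):
    """One bubble: x is scenarios closed, y is bugs per scenario,
    and the bubble area is the summed rough-order-of-magnitude
    (ROM) estimates for the iteration's scenarios."""
    y = bugs_found / scenarios_closed if scenarios_closed else 0.0
    return {"x": scenarios_closed, "y": y, "area": sum(rom_estimates)}

print(iteration_bubble(8, 12, [3, 5, 2]))
# {'x': 8, 'y': 1.5, 'area': 10}
```

If larger `area` values tend to pair with larger `y` values across iterations, haste is making waste; roughly constant `y` regardless of `area` indicates a sustainable pace.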
Other Reports
You can also create reports that are more complex because of the tight integration between these tools. The relationships are maintained in both the tools and the underlying data warehouse. This integration also enables you to create other reports that can provide an overview by showing test results, code coverage, and code churn, among other trends and metrics.
Each Visual Studio 2005 Team System process template uses its own set of unique reports. The reporting functionality is fully customizable using Microsoft SQL Server™ Reporting Services so that it can meet the individual tastes and needs of each development team.
The following is a list of all the reports available in Visual Studio 2005 Team System:
Backlog
Blocked Inventory
Buffer Usage
Bug List
Bug Rates
Build Details Report
Build Report
Build Summary of Tests
Builds
Code Complete
Code Coverage Details
Cumulative Flow
Dev/QA Bug Counts
Dev/QA Work
Exit Criteria Status
Generic Charting
Issues
Load Test Comparison
Load Test Selection Report
Load Test Summary Report
My Bugs
Number of Bugs by Priority
Quality
Regressions
Scenario Stability
Team Productivity
Test Effectiveness
Test Failures without Active Bugs
Test Result Details
Tests Passing with Active Bugs
Unit Test Effectiveness
Velocity
Work Item List
Work Progress
Project Integration
As we wrap up, the last area I want to discuss is Microsoft Project integration. How does Visual Studio Team System integrate with Microsoft Project? As I mentioned in Chapter 3, the integration between Project and Visual Studio Team System allows customers to be productive with the tools they are already comfortable with. Customers who prefer to track all their project data in Project can still do so. Visual Studio Team System gives them the additional choice of synchronizing aspects of their project plan (most likely, the subset of project data that translates to developer tasks) with their development teams. Microsoft will release a solution starter to show how Visual Studio Team System can also integrate with Project Server to deliver even greater benefits of central planning.
The integration between Visual Studio Team System and Project is designed to add a level of productivity and to complement the advanced project management capabilities that Project already provides.