The unit testing features in Team System have full support for code coverage. Code coverage automatically inserts tracking logic, a process called instrumentation, to monitor which lines of code are executed as your tests run. The most important result is the identification of regions of code that your tests have not reached.
You will often have branching or exception-handling logic that isn't executed in common situations. It is critical to use code coverage to identify these areas, because your users certainly will find them. Add unit tests that cause those areas of code to be executed, and you'll be able to sleep soundly at night.
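To make this concrete, here is a minimal sketch of an error-handling branch that a happy-path test never executes. It is written in Python rather than the C#/MSTest environment this chapter uses, and the average function is invented purely for illustration:

```python
def average(values):
    """Return the arithmetic mean of a non-empty sequence."""
    if not values:
        # Error-handling branch: a suite that tests only typical
        # inputs never runs this line, and coverage flags it as missed.
        raise ValueError("values must be non-empty")
    return sum(values) / len(values)

# A single happy-path test leaves the guard clause uncovered:
assert average([2, 4, 6]) == 4
```

Running a coverage tool over this single test would report the `raise` line as unexecuted, prompting you to add a test that passes an empty sequence.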
Code coverage is a useful tool, but it should not be relied upon as the sole indicator of unit test effectiveness. It cannot tell you how your code was executed, so it can miss errors that surface only with different data or timing. A suite of unit tests based on a variety of inputs and execution orders will help to ensure that your code is correct, complete, and resilient. Use code coverage to help identify code your tests are missing, not to tell you when your tests are complete.
A tenet of effective unit testing is that the removal of any line of code should cause at least one unit test to fail. This is, of course, an ideal, but worth keeping in mind as you develop your systems.
Code coverage is activated via a setting in the Test Run Configuration. Open the configuration for editing by choosing Test ⇒ Edit Test Run Configurations and selecting the configuration you wish to modify. Once you have the Edit Test Run Configuration dialog active, select the Code Coverage page.
Select the assemblies you wish to instrument from the Select Artifacts to Instrument list. If you don't see an assembly you'd like to instrument, you can click Add Assembly to manually add it. Figure 14-9 is a screenshot of the Code Coverage page of the Test Run Configuration editor.
The instrumentation process modifies your original assemblies, invalidating original signatures. If you are working with signed assemblies, use the Re-signing key file field to specify a key file with which to sign the instrumented assembly.
Once you have enabled coverage and selected the assemblies to instrument, run your unit tests as normal. You will then see the coverage results in the Code Coverage Results window, which shows line counts and the percentages of code covered and uncovered. Expand the view by clicking the plus signs to see details at the assembly, class, and member levels.
To quickly determine which areas of code need attention, enable code coverage highlighting of your code by pressing the Show Code Coverage button on the Code Coverage toolbar. Executable lines of code will be highlighted in red if they have not been run by your tests and in blue if they were. Code that is purely structural or documentation will not be highlighted. Figure 14-10 illustrates the results of a code coverage test.
In this example, we added two lines to our previous Fibonacci implementation that check for a negative factor and throw an exception. Code coverage has colored the line throwing the exception red, meaning that none of the unit tests we executed caused that line to run. This is a clear indication that we need another unit test. Create a new unit test — for example, FibonacciOfNegativeFactorsNotSupportedTest — and test calling Fibonacci with a factor less than zero. Decorate the test with the ExpectedException attribute, indicating that the test passes only if an ArgumentOutOfRangeException is thrown. Rerun the unit tests, and the Fibonacci method should now have 100 percent coverage.
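The test just described is written in C# with MSTest's ExpectedException attribute. As a rough analogue, the same expected-exception pattern can be sketched in Python's unittest module; the fibonacci implementation below is an invented stand-in for the book's, and ValueError stands in for ArgumentOutOfRangeException:

```python
import unittest

def fibonacci(factor):
    """Stand-in for the book's implementation, including the new guard."""
    if factor < 0:
        # The guard clause that coverage flagged as unexecuted.
        raise ValueError("factor must be non-negative")
    a, b = 0, 1
    for _ in range(factor):
        a, b = b, a + b
    return a

class FibonacciTests(unittest.TestCase):
    def test_fibonacci_of_negative_factor_not_supported(self):
        # assertRaises plays the role of the ExpectedException
        # attribute: the test passes only if the exception is raised.
        with self.assertRaises(ValueError):
            fibonacci(-1)

    def test_fibonacci_of_ten(self):
        self.assertEqual(fibonacci(10), 55)
```

Running the file with `python -m unittest` executes both tests; together they cover the guard clause as well as the main loop.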
Again, keep in mind that 100 percent code coverage does not mean you have finished writing your unit tests. Proper testing may involve multiple executions of the same code using different data. Code coverage is one measure of effectiveness, but certainly not the only one. Consider adopting test-driven development, plan your testing cases up front, and then use code coverage to alert you to scenarios you forgot to test.
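The point about exercising the same code with different data can be sketched as follows. Again this is Python rather than C#, and the deliberately buggy absolute function is invented for illustration:

```python
def absolute(x):
    # Deliberately buggy: correct only for non-negative inputs.
    return x

# One test yields 100 percent line coverage of absolute()...
assert absolute(3) == 3

# ...yet a second data point would expose the bug immediately:
# absolute(-3) returns -3, not 3.
```

Both calls execute the same single line, so coverage cannot distinguish between them; only the second input reveals the defect. This is why coverage measures whether code ran, not whether it ran correctly.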