Trust Yourself, But Verify (Unit Testing)

I always thought Andy Grove from Intel had it right when he titled his book Only the Paranoid Survive. This notion is especially true for software engineers. I have many good friends who are excellent engineers, but when it comes to having them interface with my code, I verify their data down to the last bit. In fact, I even have a healthy skepticism about myself. Assertions, tracing, and commenting are how I start verifying my fellow developers who are calling my code. Unit testing is how I verify myself. Unit tests are the scaffolding that you put in place to call your code outside the normal program as a whole.

The first way I verify myself is to start writing my unit tests as soon as I start writing my code, developing them in parallel. Once I figure out the interface for a module, I write the stub functions for that module and immediately write a test program, or harness, to call those interfaces. As I add a piece of functionality, I add new test cases to the test harness. Using this approach, I can test each incremental change in isolation and spread the test harness development across the development cycle. If you postpone all the harness development until after you've implemented the main code, you generally don't have enough time to do a good job on it and therefore end up with a less thorough test.

The second way I verify myself is to think about how I'm going to test my code before I write it. Try not to fall into the trap of thinking that your entire application has to be written before you can test your code. If you discover that you're a victim of this pitfall, you need to step back and break down your testing. I realize that sometimes you must rely on important functionality from another developer to compile your code. In those cases, your test code should consist of stubs for the interfaces that you can compile against. At a minimum, have the interfaces hard-coded to return appropriate data so that you can compile and run your code.

One side benefit of ensuring that your design is testable is that you quickly find problems that you can fix to make your code more reusable and extensible. Because reusability is the Holy Grail of software, whatever steps you can take to make your code more reusable are worth the effort. A good example of this windfall is when I was working on the crash handler code for Chapter 9. As I was unit testing on Windows 98, I noticed that the SymInitialize API function for the DBGHELP.DLL symbol engine didn't automatically load symbols for all the modules in the processes as it did for Windows 2000. I saw that automatically loading all the process modules was going to be something that I needed in other utilities, so I developed the BSUSymInitialize function. My unit test for the crash handler code tested BSUSymInitialize, and I came out of the development with a perfectly reusable solution.

While you're coding, you should be running your unit tests all the time. I seem to think in isolated units of functionality of about 50 lines of code. Each time I add or change a feature, I rerun the unit test to see whether I broke anything. I don't like surprises, so I try to keep them to a minimum. I definitely recommend that you run your unit tests before you check your code in to the master sources. Some organizations have specific tests, called check-in tests, that must be run before code can be checked in. I've seen these check-in tests drastically reduce the number of build and smoke test breakages.

The key to the most effective unit tests comes down to two words: code coverage. If you take nothing else away from this chapter except those two words, I'll consider it a success. Code coverage is simply the percentage of lines you've executed in your module. If 100 lines are in your module and you execute 85, you have 85 percent code coverage. The simple fact is that a line not executed is a line waiting to crash.

You can get code-coverage statistics in two ways. The first way is the hard way and involves using the debugger and setting a breakpoint on every single line in your module. As your module executes a line, clear the breakpoint. Continue running your code until you've cleared all the breakpoints and you have 100 percent coverage. The easy way to get coverage is to use a third-party code-coverage tool such as Compuware NuMega's TrueCoverage or Rational's Visual PureCoverage.

Personally, I don't check in any code to the master sources until I've executed at least 85 to 90 percent of the lines in my code. I know some of you are groaning right now. Yes, getting good code coverage can be time consuming. Sometimes you need to do far more testing than you ever considered, and it can take a while. Getting the best coverage means that you need to run your application in the debugger and change data variables to execute code paths that are hard to hit otherwise. Your job is to write solid code, however, and in my opinion, code coverage is about the only way you'll get it during the unit test phase.

Nothing is worse than having your QA staff sitting on their hands because they're stuck with builds that crash. If you get 90 percent code coverage in the unit test, your QA people can spend their time testing your application on different platforms and ensuring that the interfaces between subsystems work. QA's job is to test the product as a whole and to sign off on its overall quality. Your job is to test a unit and to sign off on the quality of that unit. When both sides do their jobs, the result is a high-quality product.

Granted, I don't expect that developers will be able to test on each different Microsoft Win32-based operating system that customers might be using. However, if engineers can get 90 percent coverage on at least one operating system, the team wins 66 percent of the battle for quality. If you're not using one of the third-party code-coverage tools, you're cheating yourself on quality.

In addition to code coverage, I frequently run third-party error detection and performance tools, as discussed in Chapter 1, on my unit test projects. Those tools help me catch bugs much earlier in the development cycle, so I spend less time debugging overall.

If you follow the recommendations presented in this section, you'll have some effective unit tests at the end of your development—but the work doesn't stop there. If you look at the BUGSLAYERUTIL.DLL code that's on the companion CD, you'll see a directory named Tests under the main source code directory. That directory holds my unit tests. I keep my unit tests as part of the code base so that others can find them easily. In addition, when I make a change to the source code, I can easily test to see whether I broke anything. I highly recommend that you check your tests into your version control system. Finally, although most unit tests are self-explanatory, make sure that you document any key assumptions so that others don't waste their time wrestling with your tests.



Debugging Applications
Debugging Applications for Microsoft® .NET and Microsoft Windows® (Pro-Developer)
ISBN: 0735615365
Year: 2000
Pages: 122
Authors: John Robbins
