Frequent Builds and Smoke Tests Are Mandatory


Two of the most important pieces of your infrastructure are your build system and your smoke test suite. The build system is what compiles and links your product, and the smoke test suite comprises tests that run your program and verify that it works. Jim McCarthy, in his book Dynamics of Software Development (Microsoft Press, 1995), called the daily build and smoke test the heartbeat of the product. If these processes aren't healthy, the project is dead.

Frequent Builds

Your project has to be built every day. That process is the heartbeat of the team, and if you're not building, you've got a dead project. Many people tell me that they have absolutely huge projects that can't be built every day. Does that mean that those people have projects that are even larger than the 40 million lines of code in the Windows XP or Windows Server 2003 source code tree? Given that it's the largest commercial software project in existence and it builds every day, I don't think those people do. So there's no excuse for not building every day. Not only must you build every day, but you must have a build that is completely automated.
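
As a rough illustration, a fully automated daily build driver can be as simple as the following sketch. The sync.cmd and build.cmd wrappers are hypothetical stand-ins for whatever your version control and build tools actually provide; the point is that one script does everything with no human in the loop.

    import subprocess
    import sys

    def run(step_name, command):
        # Run one build step; a nonzero exit code fails the whole build.
        print(f"=== {step_name}: {' '.join(command)}")
        result = subprocess.run(command)
        if result.returncode != 0:
            print(f"*** BUILD FAILED in step '{step_name}'")
            sys.exit(result.returncode)

    def daily_build():
        # Always build from the latest master sources, never from local edits.
        run("sync sources", ["sync.cmd"])                  # hypothetical VCS sync wrapper
        # Build every configuration the team ships and debugs with.
        for config in ("Debug", "Release"):
            run(f"build {config}", ["build.cmd", config])  # hypothetical build script
        print("=== Daily build succeeded")

    if __name__ == "__main__":
        daily_build()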

When building your product, you should be building both release and debug versions at the same time. As you'll see later in the chapter, the debug builds are critical. Breaking the build must be treated as a sin. If developers check in code that doesn't compile, they need to pay some sort of penalty to right the wrong. A public flogging might be a little harsh (though not by much), but what has always worked on the teams I've been on is penance in the form of supplying donuts to the team and publicly acknowledging the crime. If you're on a team that doesn't have a full-time release engineer, you can punish the build breaker by making him or her responsible for taking care of the build until the next build breaker comes along.

One of the best daily-build practices I've used is to notify the team via e-mail when the build is finished. With an automated nightly build, the first message everyone can look for in the morning is the indication of whether the build failed; if it did, the team can take immediate action to correct it.
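
The notification itself doesn't need to be fancy. Here is a minimal sketch; the SMTP relay and the team alias shown are placeholders for whatever your shop actually uses.

    import smtplib
    from email.message import EmailMessage

    def notify_team(succeeded, log_excerpt):
        # The subject line tells the story at a glance; details go in the body.
        msg = EmailMessage()
        msg["From"] = "buildmachine@example.com"          # placeholder addresses
        msg["To"] = "build-results@example.com"
        msg["Subject"] = ("Nightly build SUCCEEDED" if succeeded
                          else "Nightly build FAILED -- fix it first thing")
        msg.set_content(log_excerpt)
        with smtplib.SMTP("smtp.example.com") as server:  # placeholder internal relay
            server.send_message(msg)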

To avoid problems with the build, everyone must have the same versions of all build tools and parts. As I mentioned earlier, some teams like to keep the build system in version control to enforce this practice. If you have team members on different versions of the tools, including the service pack levels, you've got room for error in the build. Unless there is a compelling reason to have someone using a different version of the compiler, no developer should be upgrading on his or her own. Additionally, everybody must be using the same build script as the build machine to do their builds. That way there's a valid relationship between what developers are developing and what the testers are testing.
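
One way to catch version drift early is to have the build script verify the tools before it does anything else, as in this rough sketch. The manifest file name and its format are invented for illustration; the idea is simply that the expected versions live in version control next to the build script, so every developer and the build machine check against the same list.

    import subprocess
    import sys

    def check_tool_versions(manifest_path="tool_versions.txt"):
        # Each manifest line: <version command> | <text that must appear in its banner>
        failures = []
        with open(manifest_path) as manifest:
            for line in manifest:
                line = line.strip()
                if not line or line.startswith("#"):
                    continue
                command, expected = (part.strip() for part in line.split("|", 1))
                result = subprocess.run(command.split(), capture_output=True, text=True)
                if expected not in result.stdout + result.stderr:
                    failures.append(f"{command!r} did not report {expected!r}")
        if failures:
            print("Tool version mismatch -- do not build until this is fixed:")
            print("\n".join(failures))
            sys.exit(1)

    if __name__ == "__main__":
        check_tool_versions()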

Your build system will be pulling the latest master sources from your version control system each time you do a build. Ideally, the developers should be pulling from version control every day as well. If it's a large project, developers should be able to get the daily compiled binaries easily to avoid long compile times on their machines. Nothing is worse than spending time trying to fix a nasty problem only to find out that the problem is related to an older version of a file on a developer's machine. Another advantage of developers pulling frequently is that it helps enforce the mantra of "no build breaks." By pulling frequently, any problem with the master build automatically becomes a problem with every developer's local build. Whereas managers get annoyed when the daily build breaks, developers go ballistic when you break their local build. With the knowledge that breaking the master build means breaking the build for every individual developer, the pressure is on everyone to check only clean code into the master sources.

Common Debugging Question: When should I freeze upgrades to the compiler and other tools?

Once you've hit feature complete, also known as beta 1, you should definitely not upgrade any tools. You can't afford the risk of a new compiler optimization scheme, no matter how well thought out, changing your code. By the time you hit beta 1, you've already done some significant testing, and if you change the tools, you'll need to restart your testing from ground zero.


Smoke Tests

In case you're not familiar with the term, a smoke test is a test that checks your product's basic functionality. The term comes from the electronics industry. At some point in a product's life cycle, electronics engineers would plug in their product to see whether it smoked (literally). If it didn't smoke, or worse, catch fire, they were making progress. In most software situations, a smoke test is simply a run-through of the product to see whether it runs and is therefore good enough to start testing seriously. A smoke test is your gauge of the baseline health of the code.

Your smoke test is just a checklist of items that your program can handle. Initially, start out small: install the application, start it, and shut it down. As you progress through the development cycle, your smoke test needs to grow to exercise new features of the product. The best rule of thumb is that the smoke test should contain at least one test for every feature and major component of the product. If you are in a shrink-wrap company, that means testing each feature that appears in a bullet point for your ads. In an IT shop, that means testing each of the major features you promised the CIO and your client. Keep in mind that your smoke test doesn't need to exhaustively test every code path in your program, but you do want to use it to judge whether you can handle the basics. Once your program passes the smoke test, the quality engineers can start doing the hard work of trying to break the program in new and unique ways.
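
Kept as a script, the checklist can start out as small as the following sketch. The installer and application commands here are placeholders; what matters is one entry per feature, added as soon as the feature comes online.

    import subprocess

    # One entry per feature or major component; start small and keep adding.
    SMOKE_CHECKLIST = [
        ("install the product",    ["install_product.cmd", "/quiet"]),
        ("start and shut down",    ["run_app.cmd", "/selftest"]),
        ("open a sample document", ["run_app.cmd", "/open", "sample.dat"]),
        # ...one line for every advertised feature as it appears...
    ]

    def run_smoke_tests():
        failed = [name for name, command in SMOKE_CHECKLIST
                  if subprocess.run(command).returncode != 0]
        if failed:
            print("SMOKE TEST FAILED:", ", ".join(failed))
            return False
        print("Smoke test passed -- the build is good enough to hand to QA.")
        return True

    if __name__ == "__main__":
        raise SystemExit(0 if run_smoke_tests() else 1)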

One vital component of your smoke test is some sort of performance benchmark. Many people forget to include these and pay the price later in the development cycle. If you have an established benchmark for an operation (for example, how long the last version of the product took to run), you can define failure as a current run that is 10 percent or more over or under your benchmark. I'm always amazed by how many times a small change in an innocuous place can have a detrimental impact on performance. By monitoring performance throughout the development cycle, you can fix performance problems before they get out of hand.
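
The performance check can hang off the same checklist. The sketch below times one representative operation and reports failure when it drifts more than 10 percent from the recorded baseline in either direction; the baseline value and the benchmarked command are placeholders.

    import subprocess
    import time

    BASELINE_SECONDS = 42.0   # e.g., how long the last version of the product took
    TOLERANCE = 0.10          # fail at 10 percent over *or* under the baseline

    def performance_ok():
        start = time.perf_counter()
        subprocess.run(["run_app.cmd", "/benchmark"])   # placeholder timed scenario
        elapsed = time.perf_counter() - start
        drift = abs(elapsed - BASELINE_SECONDS) / BASELINE_SECONDS
        print(f"benchmark: {elapsed:.1f}s "
              f"(baseline {BASELINE_SECONDS:.1f}s, drift {drift:.0%})")
        return drift <= TOLERANCE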

Ideally, your smoke test is fully automated so that it can run without requiring any user interaction. The tool you use to automate the input and operations on your application is called a regression-testing tool. Unfortunately, you can't always automate every feature, especially when the user interface is in a state of flux. A number of good regression-testing tools are on the market, and if you're working with a large, complicated application and can afford to have someone assigned to maintaining the smoke tests, you might want to consider purchasing such a tool. If you can't get your boss to pay for a commercial tool, you can use the Tester application from Chapter 16, which does a great job of recording your mouse and keyboard input into a JScript or VBScript file, which you can then play back.

Breaking the smoke test should be as serious a crime as breaking the build. It takes more effort to create a smoke test, and no developer should treat it lightly. Because the smoke test is what tells your QA team that they have a build that's good enough to work on, keeping the smoke test running is mandatory. If you have an automated smoke test, you should also consider having the smoke test available for the developers so that they can use it to help automate their testing as well. Additionally, with an automated smoke test, you should have the daily build kick it off so that you can immediately gauge the health of the build. As with the daily build, you should notify the team via e-mail to let them know whether the smoke test succeeded or failed.
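
Tying the pieces together, the nightly job can be a thin wrapper that builds, runs the smoke test only if the build succeeded, and mails the combined result. The script names below stand in for the earlier sketches and are not real tools.

    import subprocess

    def nightly():
        build_ok = subprocess.run(["python", "daily_build.py"]).returncode == 0
        # Only a build that succeeded is worth smoke testing.
        smoke_ok = build_ok and subprocess.run(["python", "smoke_test.py"]).returncode == 0
        status = ("build FAILED" if not build_ok else
                  "smoke test FAILED" if not smoke_ok else
                  "build and smoke test passed")
        subprocess.run(["python", "notify.py", status])   # mails the team, as sketched earlier
        return build_ok and smoke_ok

    if __name__ == "__main__":
        raise SystemExit(0 if nightly() else 1)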



