8.5. Smoke Tests

A smoke test is a subset of the test cases that is typically representative of the overall test plan. For example, if there is a product with a dozen test plans (each of which has hundreds of test cases), then a smoke test for that product might just contain a few dozen test cases (with just one or two test cases from each test plan). The goal of a smoke test is to verify the breadth of the software functionality without going into depth on any one feature or requirement. (The name "smoke test" originally came from the world of electrical engineering. The first time a new circuit under development is attached to a power source, an especially glaring error may cause certain parts to start to smoke; at that point, there is no reason to continue to test the circuit.)
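For instance, a team using Python's pytest framework might tag one or two representative cases from each test plan with a marker, so that the smoke subset can be run on its own. This is only a minimal sketch of the idea; the "smoke" marker name, the test cases, and the create_order function are illustrative assumptions, not an example from any particular project:

    # A minimal sketch of tagging a smoke subset with pytest.
    # All function names here are hypothetical; register the marker in
    # pytest.ini ("markers = smoke") to avoid unknown-marker warnings.
    import pytest

    def create_order(item, qty):
        """Stand-in for real application code."""
        if qty < 1:
            raise ValueError("quantity must be positive")
        return {"item": item, "qty": qty, "status": "created"}

    @pytest.mark.smoke
    def test_order_can_be_created():
        # One broad, representative case pulled into the smoke subset.
        assert create_order("widget", 1)["status"] == "created"

    def test_order_rejects_invalid_quantity():
        # A deeper case that belongs to the full test plan only.
        with pytest.raises(ValueError):
            create_order("widget", 0)

Running pytest -m smoke executes only the tagged cases, while a plain pytest run still executes the entire battery.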

Smoke tests can be useful in many scenarios. For example, after a product has been tested, released, and deployed into production, a configuration management team member may manually run through a smoke test each time a new installation of the software is put in place at a client, in order to ensure that it is properly deployed. Another good use is to let programmers judge the health of a build before they hand it off to the software testers: once a product has passed all of its unit tests, it may make sense to install the build and manually run the smoke tests, in order to ensure that it is ready for testing.
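In practice, a post-deployment smoke test often amounts to a short script that exercises a few externally visible behaviors and nothing more. The following sketch assumes the deployed application answers HTTP requests; both URLs are hypothetical:

    # A minimal post-deployment smoke check; the URLs are assumptions.
    import sys
    import urllib.request

    CHECKS = [
        ("home page", "https://app.example.com/"),
        ("health endpoint", "https://app.example.com/health"),
    ]

    def main():
        failures = []
        for name, url in CHECKS:
            try:
                with urllib.request.urlopen(url, timeout=10) as resp:
                    if resp.status != 200:
                        failures.append(f"{name}: HTTP {resp.status}")
            except OSError as exc:  # covers DNS failures, timeouts, HTTP errors
                failures.append(f"{name}: {exc}")
        if failures:
            print("Smoke test FAILED:", *failures, sep="\n  ")
            sys.exit(1)
        print("Smoke test passed: deployment looks healthy.")

    if __name__ == "__main__":
        main()

A nonzero exit status makes the script easy to wire into an installation checklist or a deployment pipeline.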

Unfortunately, smoke tests are often abused by senior managers or stakeholders who are impatient for the software to be complete. Typically, they will learn of a reduced battery of tests that takes very little time to run, but will fail to understand how those tests differ from the complete regression tests that are normally run. Suddenly there is a new option that doesn't take very long, and the project manager will start seeing requests to cut down the testing tasks by substituting the smoke tests for the full test battery.

What's worse, the deployment scenario, in which a new deployment is verified with a smoke test, will be abused. The idea behind the deployment scenario is that no changes have been made; the smoke test is simply there to help verify that the act of deployment has not accidentally broken the environment. (It's not uncommon for a complex deployment environment to have slight configuration or network differences that can break the software, or for someone to leave a network cable hanging!) The smoke test is there for the people responsible for deploying the software to be sure that the installation was successful. If, however, changes are made to the environment or (even worse) the code in production, the smoke test will almost certainly fail to uncover the problems that have been introduced.
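Configuration and network differences like these are exactly what a deployment smoke test can surface. Here is a sketch of such an environment check, assuming the deployment depends on one database service and a couple of environment variables; all of the names are hypothetical:

    # A minimal environment sanity check run after deployment.
    # Host, port, and variable names are illustrative assumptions.
    import os
    import socket
    import sys

    REQUIRED_ENV_VARS = ["APP_CONFIG_PATH", "DB_CONNECTION_STRING"]
    REQUIRED_SERVICES = [("db.example.internal", 5432)]  # (host, port)

    def main():
        problems = []
        for var in REQUIRED_ENV_VARS:
            if not os.environ.get(var):
                problems.append(f"missing environment variable: {var}")
        for host, port in REQUIRED_SERVICES:
            try:
                # Confirm the service is reachable from this environment.
                with socket.create_connection((host, port), timeout=5):
                    pass
            except OSError as exc:
                problems.append(f"cannot reach {host}:{port} ({exc})")
        if problems:
            print("Deployment check failed:", *problems, sep="\n  ")
            sys.exit(1)
        print("Environment looks correctly configured.")

    if __name__ == "__main__":
        main()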

Many people are skeptical when project managers warn them of potential problems with deploying untested or poorly tested code. It's important to remember some of the most historic and costly defects that were caused by a "tiny" change in the code. In 1990, an engineer at AT&T rolled out a very small change to a switch that had one defect in one line of code (a misplaced "break" statement in C). The long-distance network failed for over 9 hours, causing over 65 million calls to fail to go through and costing AT&T an enormous amount of money. There are plenty of other very costly examples: the loss of NASA's Mars Climate Orbiter due to one team using metric units and another using English units, eBay's outage in 1999 due to a poor database upgrade, the Pentium processor bug caused by a few missing entries in a floating-point lookup table... all of these problems were "tiny" changes or defects that cost an enormous amount of money. In the end, nobody cared how small the source of the problem was: a disastrous problem was still disastrous, even if it was easy to solve.

This does not mean that the entire test battery needs to be run every single time a deployment is made or a change is made. What it means is that an informed decision must be made, and risks assessed, before the test battery is cut down for any reason. For example, test procedures that target specific areas of functionality could be reduced when changes are limited, and when the risk of those changes is low. There is no one-size-fits-all test that will result in proper coverage for your applications. Any time a limited change is made to the software, the change should be considered carefully in order to make sure that the appropriate tests are executed.
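One way to make that decision explicit is to record which test procedures cover which areas of the code, and to select only the affected subsets when a change is limited, while keeping the full battery as the default for anything unmapped. The module-to-suite mapping below is a hypothetical illustration of the idea, not a prescription:

    # A minimal sketch of selecting targeted test suites for a limited
    # change. Module paths and suite names are hypothetical assumptions.
    AREA_TO_SUITES = {
        "billing/":  ["tests/billing_plan", "tests/reporting_plan"],
        "auth/":     ["tests/auth_plan"],
        "frontend/": ["tests/ui_plan"],
    }

    SMOKE_SUITE = ["tests/smoke"]  # always run, regardless of the change

    def suites_for_change(changed_files):
        """Return the test suites justified by the files a change touched."""
        selected = set(SMOKE_SUITE)
        for path in changed_files:
            matched = False
            for area, suites in AREA_TO_SUITES.items():
                if path.startswith(area):
                    selected.update(suites)
                    matched = True
            if not matched:
                # Unmapped change: there is no informed basis for cutting
                # anything, so fall back to the full battery.
                return ["tests/"]
        return sorted(selected)

    if __name__ == "__main__":
        print(suites_for_change(["billing/invoice.py"]))
        # -> ['tests/billing_plan', 'tests/reporting_plan', 'tests/smoke']

The important property is the fallback: when a change cannot be traced to a known area, the script refuses to guess and recommends the complete regression run instead.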


