8.6. Test Automation

There are software packages available that allow a tester to automate test cases. Typically, this software uses a record-and-playback system in which mouse movements and keystrokes are recorded and played back into a user interface, a programming or scripting language that accesses the user interface through a class model, or a combination of the two. Automation can be a powerful tool in reducing the amount of time that it takes to run the tests.
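For example, the scripted approach typically drives the user interface through an automation library. The sketch below uses Selenium WebDriver for Python; the URL, element IDs, and expected page title are hypothetical placeholders, not a specific tool recommended in this chapter.

```python
# A minimal scripted UI test (sketch). Assumes Selenium WebDriver is installed
# and a Chrome driver is available; the URL and element IDs are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    # Step 1: open the login page (hypothetical URL)
    driver.get("https://example.com/login")

    # Step 2: enter credentials and submit (hypothetical element IDs)
    driver.find_element(By.ID, "username").send_keys("testuser")
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.ID, "login-button").click()

    # Step 3: verify the expected result defined in the written test case
    assert "Dashboard" in driver.title, "Login did not reach the dashboard"
finally:
    driver.quit()
```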

However, setting up and maintaining test automation adds an enormous amount of overhead. Now, instead of simply writing a test case, that test case must be programmed or recorded, tested, and debugged. A database or directory of test scripts must be maintained, and since there can be hundreds or thousands of test cases for even a small project, there will be hundreds or thousands of scripts to keep track of. What's more, since the scripts hook into the user interface of the software, there must be some plan in place to keep the scripts working in case the user interface changes.
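One common way to limit that maintenance burden is to keep every user interface detail in a single place, so that a change to the interface means updating one class rather than hundreds of scripts. The sketch below illustrates the idea with a page-object class; the class name, locators, and driver calls are illustrative assumptions rather than part of any particular product.

```python
# Page-object sketch: test scripts call business-level methods, and only this
# class knows the concrete element IDs. If the UI changes, only the locators
# here need to be updated. The locator values are hypothetical.
from selenium.webdriver.common.by import By

class LoginPage:
    USERNAME = (By.ID, "username")
    PASSWORD = (By.ID, "password")
    SUBMIT = (By.ID, "login-button")

    def __init__(self, driver):
        self.driver = driver

    def log_in(self, user, password):
        self.driver.find_element(*self.USERNAME).send_keys(user)
        self.driver.find_element(*self.PASSWORD).send_keys(password)
        self.driver.find_element(*self.SUBMIT).click()

# A test script now reads in terms of the business process:
#   LoginPage(driver).log_in("testuser", "secret")
```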

There have been some advances recently (at the time of this writing) that help cut down on test automation maintenance tasks. These advances include canned functions that automate multiple tasks at once, generalization of scripts so that the tester refers to general business processes instead of specific user interface interactions, and the use of databases of test scripts that can be maintained automatically. But even with these advances, it is still highly time-consuming to automate and maintain tests; it often means that test planning takes several times as long as it would without automation.
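A data-driven harness is one way that kind of generalization is often implemented: the tester maintains a table of business-level test cases, and a single generic script walks through them. The sketch below, using pytest's parametrize feature and a hypothetical CSV file of test cases, shows the idea in miniature; the file name, columns, and helper function are assumptions.

```python
# Data-driven sketch: test cases live in a table (here a hypothetical CSV file),
# and one generic script executes each row. Adding a test case means adding a
# row, not writing a new script.
import csv
import pytest

def load_cases(path="test_cases.csv"):
    # Each row: test_id, username, password, expected_title
    with open(path, newline="") as f:
        return [tuple(row) for row in csv.reader(f)]

def run_login_process(user, password):
    # Placeholder for the real UI interaction (e.g., via the page object above).
    return "Dashboard"

@pytest.mark.parametrize("test_id,user,password,expected", load_cases())
def test_login(test_id, user, password, expected):
    actual = run_login_process(user, password)
    assert expected in actual, f"{test_id}: expected '{expected}', got '{actual}'"
```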

It is a common misconception that automation will free the test team of most of their work. This simply is not true. Automation requires more effort than manual testing; the trade-off is that the test scripts can be reused, so that a series of automated tests run much more quickly than a manual test.

One good reason to adopt automation is that the test battery will never shrink. As the product matures, the test battery will get bigger and bigger because the software will do more and more. This means that manual testing will require more and more effort as time goes on, and will take longer each time the functionality increases. Automation, on the other hand, requires a constant amount of overhead, and can help keep the test effort under control because the automated tests do not require much effort to run. They do, however, require a lot of effort to interpret once they have run. Instead of a tester manually executing a test case, finding a defect, and recording that defect immediately, the tester must first look at a report document to find the reported failures and then track each failure down in the application, research it, and enter it as a defect.
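For instance, many test runners write their results to a machine-readable report (JUnit-style XML is a common format), and someone still has to read that report, reproduce each failure, and file it as a defect. A small sketch of that first triage step, assuming a report file named results.xml:

```python
# Triage sketch: list the failures recorded in a JUnit-style XML report so a
# tester can investigate each one and enter defects. The file name and report
# format are assumptions; adjust to whatever your test runner produces.
import xml.etree.ElementTree as ET

tree = ET.parse("results.xml")
for case in tree.iter("testcase"):
    failure = case.find("failure")
    if failure is not None:
        name = case.get("name")
        message = failure.get("message", "").strip()
        print(f"FAILED: {name} - {message}")
        # Each of these still has to be reproduced, researched, and logged
        # as a defect by a tester; the report only points to where to look.
```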

Another misconception is that test scripts will somehow eliminate the need for test cases. This is absolutely untrue. Automation actually requires more planning than manual testing. Each test script must first be developed as a written test case. Writing a script without a test case is like writing software without requirements: if the tester simply goes in and starts recording, there is no way to define the expected result of the test. It's not enough to just record a script; the test case itself must be written, and then the recording made to automate the test that has been planned. If this is not done, there is no way to verify that the tests really do cover all of the requirements. And if there is no written test case, there is no way to interpret the results if the script fails, because there is no record of what the script was supposed to do!
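One way to preserve that traceability is to embed the written test case (its ID, steps, and expected result) directly in the script, so a failure can always be interpreted against the planned test. The test case ID, steps, and helper below are hypothetical examples, not taken from the book's project.

```python
# Sketch: the script carries its written test case, so the expected result is
# defined before any recording or coding happens. The test case ID, steps, and
# expected result are hypothetical.
def test_tc_142_login_with_valid_credentials():
    """
    Test case TC-142: Log in with valid credentials.
    Preconditions: user 'testuser' exists and is active.
    Steps:
      1. Open the login page.
      2. Enter the username and password.
      3. Click the Log In button.
    Expected result: the Dashboard page is displayed.
    """
    actual_title = run_login_process("testuser", "secret")  # see earlier sketch
    assert "Dashboard" in actual_title, "TC-142 failed: dashboard not displayed"
```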

There are also many logistical difficulties in automation that must be overcome, especially if there is a team of testers who will maintain and run the tests. The scripts must be stored somewhere, and people need to be able to collaborate without overwriting each other's work (which may require that the automation script files be stored in a version control system like Subversion; see Chapter 7). It may be difficult to estimate how long it will take to actually run and interpret the scripts. And another trade-off is hardware: adding more and more powerful test machines can cut down on the amount of time required to test the software, but, since they do not cut down on how long it takes to interpret the results and enter defects, there may be some diminishing returns in that area.

All in all, automation is often a net gain for the project, especially if the project will be tested many times and maintained over a long period of time. However, there is absolutely no benefit in automating a product that will not face repeated regression tests over the course of multiple releases. If there will be just one release, the cost of automation will not be recouped. It is very important for a project manager to understand the pros and cons of automation, and to guide the organization toward its wise use.


