Summary

Users can do unexpected things when entering data, and hackers will definitely test the limits. But in my experience, the most important data comes into most systems through the user interface.

Boundary value analysis is a powerful technique for picking the most probable failure points in the data. You can reduce the amount of testing by two-thirds or three-quarters, depending on the assumptions you make. Just beware of hidden boundaries: with such a reduced test set there are huge holes in your test coverage, and that is exactly where an undocumented boundary can hide.
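To make the idea concrete, here is a minimal sketch of boundary value selection for a hypothetical numeric field that accepts whole numbers from 1 to 100. The field and its range are my own illustration, not an example from the chapter.

# Boundary value selection for an invented integer field (valid range 1-100).
# Instead of testing every possible input, we test just below, at, and just
# above each boundary, plus one typical value in the middle.

def boundary_values(minimum, maximum):
    nominal = (minimum + maximum) // 2
    candidates = {
        minimum - 1,   # just below the lower boundary: should be rejected
        minimum,       # the lower boundary itself: should be accepted
        minimum + 1,   # just inside the lower boundary
        nominal,       # a typical valid value
        maximum - 1,   # just inside the upper boundary
        maximum,       # the upper boundary itself: should be accepted
        maximum + 1,   # just above the upper boundary: should be rejected
    }
    return sorted(candidates)

print(boundary_values(1, 100))   # [0, 1, 2, 50, 99, 100, 101] -- seven tests

Seven values stand in for the whole range, which is where the two-thirds to three-quarters reduction comes from. The risk is the one noted above: a hidden boundary, say an internal limit at 64 that nobody told the tester about, can fall squarely in one of the gaps.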

When testing the data-handling capabilities of an application, I like to build my data sets from the ground up and then perform data reduction on the resulting sets, to cut down the number of tests based on probable redundancy. I continue this process until I have a test set that is both concise and doable in the time allowed.
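As an illustration of this build-up-then-reduce approach, the sketch below starts from every combination of two invented input fields and then prunes the pairs judged redundant. Both the fields and the equivalence rule are assumptions made for the example, not data from the book.

from itertools import product

# Build the candidate data set from the ground up: every combination of
# two invented input fields (payment method and currency).
payment_methods = ["visa", "mastercard", "amex", "invoice"]
currencies = ["USD", "EUR", "GBP", "JPY"]
full_set = list(product(payment_methods, currencies))     # 16 candidate tests

# Data reduction: assume (for illustration only) that the two card brands
# share a processing path, so a currency already exercised with "visa" is
# probably redundant for "mastercard".
def probably_redundant(method, currency, covered):
    return method == "mastercard" and ("visa", currency) in covered

reduced_set = []
covered = set()
for method, currency in full_set:
    if not probably_redundant(method, currency, covered):
        reduced_set.append((method, currency))
        covered.add((method, currency))

print(len(full_set), "reduced to", len(reduced_set))       # 16 reduced to 12

Each pass like this trims the set a little further. The value of writing the rule down, whether in code or in the test plan, is that the assumption behind it is visible and can be challenged.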

We use boundary analysis instinctively all the time. Be sure to document it when you do, so that others have a chance to correct your faulty assumptions. Data reduction is another powerful technique for cutting down the number of tests you have to run, and it too is something we apply instinctively. Again, be careful to document your assumptions.

Watch out for the things that you didn't expect or plan for. I try to throw in a few extra off-the-wall tests just to see what happens. Every once in a while, something unexpected does happen, and the truth is always stranger and more interesting than fiction.

For those of you wondering about the mouse driver data estimate at the beginning of the chapter, the number came from the following matrix:

640 × 480 pixels = 307,200 possible locations for the mouse to click on

When you expand the problem, it produces a matrix much like Figure 13.3, only bigger. Boundary value analysis predicts that most of the bugs would turn up in a well-selected 50 percent test coverage scenario, or the first 153,600 tests. Based on what I showed you in this chapter, by using the matrix we could probably identify most of the bugs in 92,160 tests, or 30 percent of the matrix, although screen quadrants are not quite the same as valid month/year combinations. The technique certainly worked well in this case.
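For readers who want to check the arithmetic, the figures above fall straight out of the matrix size:

total_points = 640 * 480              # 307,200 clickable screen positions
half_matrix = total_points // 2       # 153,600 tests at 50 percent coverage
reduced = total_points * 30 // 100    # 92,160 tests at 30 percent coverage
print(total_points, half_matrix, reduced)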


