|
|
I once interviewed for a three-week contract to test a new mouse driver. During the interview, I was asked how many tests I thought would be needed to test the driver on a 640 x 480 display. I did a quick calculation and responded that a rigorous test effort would require 307,200 tests (one per pixel position), but that most of the bugs would probably be identified in the first 153,600 tests.
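The back-of-the-envelope arithmetic behind that answer can be sketched in a few lines; the assumption (not stated explicitly in the interview story) is one test per pixel position on the display, with the "most bugs" estimate taken as the first half of those tests:

```python
# Sketch of the interview calculation, assuming one test per pixel
# position on a 640 x 480 display.
width, height = 640, 480

rigorous_tests = width * height        # one test for every pixel position
first_half = rigorous_tests // 2       # where most bugs would likely surface

print(rigorous_tests)  # 307200
print(first_half)      # 153600
```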
This answer took only a few seconds, but the explanation of how I got it took considerably longer and ended up involving most of the test department. That explanation is presented in this chapter. In the end, I did not get the job testing the mouse driver; the test department wasn't comfortable working with someone who wanted to run a minimum of 153,600 tests in three weeks. I did, however, get a contract to write an automated test tool to run the tests that were eventually selected.
In this chapter, I show you my favorite methods for analyzing the data requirements of an application, along with the techniques I use to cut the number of data sets down to a manageable size. I am not going to try to do justice to this vast topic here. For a complete set of techniques for answering this type of question, see Boris Beizer's book, Black-Box Testing (Wiley, 1995), which is the best source I have ever found.
|
|