This first section of this chapter is a list of axioms, or truisms. Think of them as the "rules of the road" or the "facts of life" for software testing and software development. Each is a little tidbit of knowledge that helps put some aspect of the overall process into perspective.

It's Impossible to Test a Program Completely

As a new tester, you might believe that you can approach a piece of software, fully test it, find all the bugs, and assure that the software is perfect. Unfortunately, this isn't possible, even with the simplest programs, for four key reasons:

- The number of possible inputs is very large.
- The number of possible outputs is very large.
- The number of paths through the software is very large.
- The software specification is subjective. You might say that a bug is in the eye of the beholder.
Multiply all these "very large" possibilities together and you get a set of test conditions that's too large to attempt. If you don't believe it, consider the example shown in Figure 3.1, the Microsoft Windows Calculator.

Figure 3.1. Even a simple program such as the Windows Calculator is too complex to completely test.

Assume that you are assigned to test the Windows Calculator. You decide to start with addition. You try 1+0=. You get an answer of 1. That's correct. Then you try 1+1=. You get 2. How far do you go? The calculator accepts a 32-digit number, so you must try all the possibilities up to 1+99999999999999999999999999999999=. Once you complete that series, you can move on to 2+0=, 2+1=, 2+2=, and so on. Eventually you'll get to 99999999999999999999999999999999+99999999999999999999999999999999=.

Next you should try all the decimal values: 1.0+0.1, 1.0+0.2, and so on. Once you verify that regular numbers sum properly, you need to attempt illegal inputs to assure that they're properly handled. Remember, you're not limited to clicking the numbers onscreen; you can press keys on your computer keyboard, too. Good values to try might be 1+a, z+1, 1a1+2b2,…. There are literally billions upon billions of these.

Edited inputs must also be tested. The Windows Calculator allows the Backspace and Delete keys, so you should try them. 1<backspace>2+2 should equal 4. Everything you've tested so far must be retested by pressing the Backspace key for each entry, for each pair of entries, and so on.

If you or your heirs manage to complete all these cases, you can then move on to adding three numbers, then four numbers,…. There are so many possible entries that you could never complete them, even if you used a supercomputer to feed in the numbers. And that's only for addition. You still have subtraction, multiplication, division, square root, percentage, and inverse to cover.
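The scale of the problem is easy to quantify. The following sketch is purely illustrative (plain Python arithmetic stands in for the calculator under test, and the representative values chosen are hypothetical): it counts the exhaustive integer addition cases, then shows the kind of drastic, risk-based reduction to representative and boundary values that the rest of this book develops.

```python
# Back-of-the-envelope count of exhaustive integer addition tests for a
# 32-digit calculator display, followed by a tiny risk-based reduction.
# (Hypothetical sketch: nothing here drives the real Calculator.)

DIGITS = 32
operand_values = 10 ** DIGITS           # 0 through 32 nines
exhaustive_pairs = operand_values ** 2  # every a+b combination
print(f"exhaustive a+b cases: {exhaustive_pairs:.1e}")

# Even at a billion tests per second, the time required is absurd.
years = exhaustive_pairs / 1e9 / (60 * 60 * 24 * 365)
print(f"years at 10^9 tests/second: {years:.1e}")

# A risk-based alternative: test only representative and boundary
# values from each class of inputs -- zero, one, a typical value,
# and the extremes of the display.
MAX_VALUE = operand_values - 1
representatives = [0, 1, 42, MAX_VALUE - 1, MAX_VALUE]
reduced_cases = [(a, b) for a in representatives for b in representatives]
print(f"reduced cases: {len(reduced_cases)}")  # 25 pairs instead of 10**64
```

The reduction trades certainty for practicality: the 25 pairs can actually be run, at the cost of accepting the risk that a bug hides between the chosen values.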
The point of this example is to demonstrate that it's impossible to completely test a program, even software as simple as a calculator. If you decide to eliminate any of the test conditions because you feel they're redundant or unnecessary, or just to save time, you've decided not to test the program completely.

Software Testing Is a Risk-Based Exercise

If you decide not to test every possible test scenario, you've chosen to take on risk. In the calculator example, what if you choose not to test that 1024+1024=2048? It's possible the programmer accidentally left in a bug for that situation. If you don't test it, a customer will eventually enter it, and he or she will discover the bug. It'll be a costly bug, too, since it wasn't found until the software was in the customer's hands.

This may all sound pretty scary. You can't test everything, and if you don't, you will likely miss bugs. The product has to be released, so you will need to stop testing, but if you stop too soon, there will still be areas untested. What do you do? One key concept that software testers need to learn is how to reduce the huge domain of possible tests into a manageable set, and how to make wise risk-based decisions about what's important to test and what's not.

Figure 3.2 shows the relationship between the amount of testing performed and the number of bugs found. If you attempt to test everything, the costs go up dramatically and the number of missed bugs declines to the point that it's no longer cost effective to continue. If you cut the testing short or make poor decisions about what to test, the costs are low but you'll miss a lot of bugs. The goal is to hit that optimal amount of testing so that you don't test too much or too little.

Figure 3.2. Every software project has an optimal test effort.

You will learn how to design and select test scenarios that minimize risk and optimize your testing in Chapters 4 through 7.

Testing Can't Show That Bugs Don't Exist

Think about this for a moment.
You're an exterminator charged with examining a house for bugs. You inspect the house and find evidence of bugs: maybe live bugs, dead bugs, or nests. You can safely say that the house has bugs.

You visit another house. This time you find no evidence of bugs. You look in all the obvious places and see no signs of an infestation. Maybe you find a few dead bugs or old nests, but you see nothing that tells you that live bugs exist. Can you absolutely, positively state that the house is bug free? Nope. All you can conclude is that in your search you didn't find any live bugs. Unless you completely dismantled the house down to the foundation, you can't be sure that you didn't simply miss them.

Software testing works exactly as the exterminator does. It can show that bugs exist, but it can't show that bugs don't exist. You can perform your tests, find and report bugs, but at no point can you guarantee that there are no longer any bugs to find. You can only continue your testing and possibly find more.

The More Bugs You Find, the More Bugs There Are

There are even more similarities between real bugs and software bugs. Both types tend to come in groups. If you see one, odds are there will be more nearby. Frequently, a tester will go for long spells without finding a bug. He'll then find one bug, then quickly another and another. There are several reasons for this:

- Programmers have bad days. Code written one day may be perfect; code written another may be sloppy. One bug can be a telltale sign that there are more nearby.
- Programmers often make the same mistake. Everyone has habits, and an error repeated in one place is frequently repeated elsewhere in the code.
- Some bugs are really just the tip of the iceberg. They can be symptoms of a larger, underlying problem in the software's design or architecture.
It's important to note that the inverse of this "bugs follow bugs" idea is true, as well. If you fail to find bugs no matter how hard you try, it may very well be that the feature you're testing was cleanly written and that there are indeed few, if any, bugs to be found.

The Pesticide Paradox

In 1990, Boris Beizer, in his book Software Testing Techniques, Second Edition, coined the term pesticide paradox to describe the phenomenon that the more you test software, the more immune it becomes to your tests. The same thing happens to insects with pesticides (see Figure 3.3). If you keep applying the same pesticide, the insects eventually build up resistance and the pesticide no longer works.

Figure 3.3. Software undergoing the same repetitive tests eventually builds up resistance to them.

Remember the spiral model of software development described in Chapter 2? The test process repeats each time around the loop. With each iteration, the software testers receive the software for testing and run their tests. Eventually, after several passes, all the bugs that those tests would find are exposed. Continuing to run them won't reveal anything new. To overcome the pesticide paradox, software testers must continually write new and different tests to exercise different parts of the program and find more bugs.

Not All the Bugs You Find Will Be Fixed

One of the sad realities of software testing is that even after all your hard work, not every bug you find will be fixed. Now, don't be disappointed; this doesn't mean that you've failed in achieving your goal as a software tester, nor does it mean that you or your team will release a poor-quality product. It does mean, however, that you'll need to rely on a couple of those traits of a software tester listed in Chapter 1: exercising good judgment and knowing when perfection isn't reasonably attainable.
You and your team will need to make trade-offs and risk-based decisions for each and every bug, deciding which ones will be fixed and which ones won't. There are several reasons why you might choose not to fix a bug:

- There's not enough time. Every project has a schedule, and sometimes there simply isn't time to fix every bug before release.
- It's really not a bug. Sometimes a reported problem turns out to be a misunderstanding, a test error, or a spec change.
- It's too risky to fix. The fix itself could introduce new bugs, especially when the change is made late in the schedule.
- It's just not worth it. Bugs that would occur very infrequently or that appear in little-used features may be passed over.
The decision-making process usually involves the software testers, the project managers, and the programmers. Each carries a unique perspective on the bugs and has his own information and opinions as to why they should or shouldn't be fixed. In Chapter 19, "Reporting What You Find," you'll learn more about reporting bugs and getting your voice heard.
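Teams sometimes make this kind of trade-off decision more systematic with a simple risk score. The sketch below is purely illustrative: the bug data, the 1-5 severity and likelihood scales, and the fix threshold are all invented for the example, not taken from any real triage process.

```python
# Illustrative risk-based bug triage: rank open bugs by a simple
# severity-times-likelihood score and flag which ones to fix first.
# All bug data, scales, and the threshold here are hypothetical.

bugs = [
    # (id, description,                    severity 1-5, likelihood 1-5)
    ("B-101", "crash on 32-digit sum",           5, 2),
    ("B-102", "typo in About dialog",            1, 5),
    ("B-103", "wrong result for 1024+1024",      4, 4),
    ("B-104", "slow repaint on rare video card", 2, 1),
]

FIX_THRESHOLD = 8  # hypothetical cutoff agreed on by the team

# Sort highest-risk first, then decide fix vs. defer per bug.
ranked = sorted(bugs, key=lambda b: b[2] * b[3], reverse=True)
for bug_id, desc, severity, likelihood in ranked:
    score = severity * likelihood
    decision = "fix" if score >= FIX_THRESHOLD else "defer"
    print(f"{bug_id} score={score:2d} {decision}: {desc}")
```

A score like this never replaces the discussion among testers, project managers, and programmers; it only gives the meeting a shared starting point.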
When a Bug's a Bug Is Difficult to Say

If there's a problem in the software but no one ever discovers it, not programmers, not testers, and not even a single customer, is it a bug? Get a group of software testers in a room and ask them this question. You'll be in for a lively discussion. Everyone has their own opinion and can be pretty vocal about it. The problem is that there's no definitive answer. The answer is based on what you and your development team decide works best for you. For the purposes of this book, refer back to the rules that define a bug from Chapter 1:

1. The software doesn't do something that the product specification says it should do.
2. The software does something that the product specification says it shouldn't do.
3. The software does something that the product specification doesn't mention.
4. The software doesn't do something that the product specification doesn't mention but should.
5. The software is difficult to understand, hard to use, slow, or, in the software tester's eyes, will be viewed by the end user as just plain not right.
Following these rules helps clarify the dilemma by making a bug a bug only if it's observed. To claim that the software does or doesn't do "something" implies that the software was run and that "something," or the lack of "something," was witnessed. Since you can't report on what you didn't see, you can't claim that a bug exists if you didn't see it.

Here's another way to think of it. It's not uncommon for two people to have completely different opinions on the quality of a software product. One may say that the program is incredibly buggy and the other may say that it's perfect. How can both be right? The answer is that one has used the product in a way that reveals lots of bugs. The other hasn't.

NOTE: Bugs that are undiscovered or haven't yet been observed are often referred to as latent bugs.

If this is as clear as mud, don't worry. Discuss it with your peers in software testing and find out what they think. Listen to others' opinions, test their ideas, and form your own definition. Remember the old question, "If a tree falls in the forest and there's no one there to hear it, does it make a sound?"

Product Specifications Are Never Final

Software developers have a problem. The industry is moving so fast that last year's cutting-edge products are obsolete this year. At the same time, software is getting larger and gaining more features and complexity, resulting in longer and longer development schedules. These two opposing forces result in conflict, and the result is a constantly changing product specification. There's no other way to respond to the rapid changes.

Assume that your product had a locked-down, final, absolutely-can't-change-it product spec. You're halfway through the planned two-year development cycle, and your main competitor releases a product very similar to yours but with several desirable features that your product doesn't have. Do you continue with your spec as is and release an inferior product in another year?
Or does your team regroup, rethink the product's features, rewrite the product spec, and work on a revised product? In most cases, wise business dictates the latter.

As a software tester, you must assume that the spec will change. Features will be added that you didn't plan to test. Features will be changed or even deleted that you had already tested and reported bugs on. It will happen. You'll learn techniques for being flexible in your test planning and test execution in the remainder of this book.

Software Testers Aren't the Most Popular Members of a Project Team

Remember the goal of a software tester? The goal of a software tester is to find bugs, find them as early as possible, and make sure they get fixed.
Your job is to inspect and critique your peers' work, find problems with it, and publicize what you've found. Ouch! You won't win a popularity contest doing this job. Here are a couple of tips to keep the peace with your fellow teammates:

- Find bugs early. Reporting a serious problem well before the release date gets a much better reception than reporting it at the last minute.
- Temper your enthusiasm. Don't gleefully celebrate each new bug in front of the programmer whose code it's in.
- Don't always report bad news. If you find a piece of code surprisingly bug free, say so.
Software Testing Is a Disciplined Technical Profession

It used to be that software testing was an afterthought. Software products were small and not very complicated. The number of people with computers using software was limited. And the few programmers on a project team could take turns debugging each other's code. Bugs weren't that much of a problem. The ones that did occur were easily fixed without much cost or disruption. If software testers were used, they were frequently untrained and brought into the project late to do some "ad-hoc banging on the code to see what they might find."

Times have changed. Look at the software help-wanted ads and you'll see numerous listings for software testers. The software industry has progressed to the point where professional software testers are mandatory. It's now too costly to build bad software. To be fair, not every company is on board yet. Many computer game and small-time software companies still use a fairly loose development model, usually big-bang or code-and-fix. But much more software is now developed with a disciplined approach that has software testers as core, vital members of the staff. This is great news if you're interested in software testing. It can now be a career choice: a job that requires training and discipline, and that allows for advancement.