8.9. Diagnosing Software Testing Problems

There is an old programming saying: "There's no such thing as bug-free software." Many people simply accept that there are quality problems, and that no piece of software is perfect. This is true, but it is also irrelevant. The goal of testing is to make sure that the software does what the users need it to do, not to make sure that it is perfect. (After all, does anyone really know what a "perfect software project" is?) With that in mind, there are real, specific problems that plague many software projects and that can be solved by software testing.

8.9.1. Requirements Haven't Been Implemented

When a software team delivers a product that's missing features, the users tend to notice. It seems like this should never happen: aren't there many reviews and checkpoints along the way that ought to prevent it? In fact, the only time when the original requirements are ever checked against the final software is during software testing. And if there is a problem with testing, the most serious symptom is unimplemented requirements.

There are many reasons why requirements do not get implemented. Sometimes a designer or programmer does not understand what's written in the specification, or has a different understanding than what was intended. But sometimes it's simply an oversight. Good review, requirements, and programming practices will help reduce this: instead of missing features, there might only be a few requirements whose rules are not fully implemented. But it is rare for the programmers to deliver software that completely meets every single requirement.

Missing requirements are especially insidious because they're difficult to spot. It's not hard to tell that something is there that shouldn't be; it's much harder to recognize when something important is not there at all. Often, a programmer will be completely taken by surprise when a user comes back to her and asks where a certain feature is in the software.

The missing requirements are especially likely to be hard to spot because even the programmer missed them when looking over his own work: programmers tend to be careful and meticulous people, and they would have caught the obvious ones.

What's more, the programmers do not necessarily see which requirements are more important to the users and stakeholders: they all have to be built, and it's up to the users and stakeholders, not the programmers, to prioritize what is developed. So by the time the software is built and delivered to the users, the requirements that are missing are difficult to spot, but are still just as important to the people who asked for them. It is not uncommon for a programmer to be surprised that a user even noticed that a business rule in a requirement was not implemented: the programmer didn't know that it was important, even though the user saw it as absolutely critical. When this happens, the programmer is both embarrassed and upset that all of his good work is ignored because some tiny rule was missing; the user, on the other hand, considers the software unusable, and has trouble even evaluating the rest of the work that's been done.

8.9.2. Obvious Bugs Slip Through

In many software organizations, there is a general feeling that the software testers are not very good at their jobs. Things that were written down and reviewed don't get tested. Applications are released with huge, glaring bugs, even though the testers passed the build and said it was fine to release. There are several common reasons this happens.

Sometimes bugs slip through because of ill-trained software testers. Organizations that are just setting up a software engineering group for the first time often don't have a software testing group at all, and don't realize that software testing, like programming, requirements engineering, or design, is its own engineering discipline, requiring both training and experience. In an organization like this, it is not uncommon to draft technical support staff, junior programmers, end users, outside temps, and sales people as "testers." They see their job as simply "banging on the software" and providing comments as to whether or not they like it. If the programmers have not done sufficient unit testing (see Chapter 7), it is likely that these people will find places where the software breaks or crashes. However, it's a crapshoot: while they may find valid, important defects, it's likely that many more will slip through unnoticed.

The problem is that it's not enough to understand the business of the organization. To test the software, a tester needs to be more than an educated user; she needs to understand the requirements and be able to verify that they have been implemented. When the testers do not have a good understanding of the software requirements, they will miss defects. There are many reasons testers may have this problem. If there are no good requirements engineering practices in place at the organization, the testing strategy will certainly go awry. This happens when there are uncontrolled changes to requirements, when requirements and design documents are constantly changing, or when software development is not based on specifications at all (for example, when stakeholders have approached programmers directly).

Programmer "gold plating" is especially problematic for testing. Most programmers are highly creative, and sometimes they have a tendency to add behavior to the software that was not requested. It is difficult for testersespecially ones who are not working from requirements documentsto figure out what behavior in the software is needed by the users, and what is extraneous. However, all of the software must work, and since the gold-plating features may never have been written down (or even discussed), the tester has difficulty figuring out what the software is even expected to do.

In some organizations, the testers are completely out of the loop in the software project until the product is complete. The programmers will simply cut a build and "throw it over the wall" to the testers; the testers are somehow supposed to intuit whether or not the software works, despite the fact that they were not involved in the requirements, design, or programming of the software, and have no prior experience with it. The testers are not told what the software is supposed to do; they are just supposed to make the software "perfect."

But most commonly, defects slip through because of schedule pressure. The most effective, careful, and well-planned testing effort will fail if it is cut short in order to release the software early. Many project managers have trouble resisting the urge to cut testing and release the current build early. They see a build of the software that seems to run. Since the software testing activities are always at the tail end of the software project, they are the ones that will be compressed or cut when the project runs late and the project manager decides to release the software untested.

8.9.3. "But It Worked For Us!"

When a product is not tested in all environments in which it will be used, the tests will be thrown off. They will still find some defects, but it's much more likely that users will find glaring problems when they begin to use the software in their own environment. This is frustrating because the tests that were run would have found these problems, had they been conducted in an environment that resembled actual operating conditions.

Sometimes the tests depend on data that does not really represent what the users would input into the software. For example, a tester may verify that a calculation is performed adequately by providing a few examples of test data and comparing the results calculated by the software against results calculated by hand. If this test passes, it may simply mean that the tester chose data with the same characteristics that the programmer used to write the software in the first place. It may well be that when the software goes out to be used, the users will provide all sorts of oddball data that may have unexpected and possibly even disastrous results. In other words, it's easy to verify that an addition function calculates "2 + 2" properly; but it's also important to make sure it does the right thing when the user tries to calculate "2 + Apple."
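As a minimal sketch of this idea, the unit test below checks both kinds of input; the add function, its error handling, and the test values are hypothetical examples invented for illustration, not code from this project. The point is that the test data should include the oddball input a real user might supply, not just values like the ones the programmer used while writing the code.

    import unittest

    def add(a, b):
        # Hypothetical function under test: accepts numbers only.
        if not isinstance(a, (int, float)) or not isinstance(b, (int, float)):
            raise TypeError("add() requires numeric arguments")
        return a + b

    class AddTests(unittest.TestCase):
        def test_typical_values(self):
            # The "2 + 2" case: data much like what the programmer
            # probably used while writing the code.
            self.assertEqual(add(2, 2), 4)

        def test_unexpected_input(self):
            # The "2 + Apple" case: oddball data a real user might enter.
            with self.assertRaises(TypeError):
                add(2, "Apple")

    if __name__ == "__main__":
        unittest.main()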

It's common for a system to work fine with test data (even with what seems to be a lot of test data), yet grind to a halt when put in production. Systems that work fine with a small dataset or few concurrent users can die in real-world usage. It can be very frustrating and embarrassing for the engineering team when the product that they were highly confident in breaks very publicly, because the testers did not verify that it could handle real-world load. And when this happens, it's highly visible because, unless those heavy load conditions are verified in advance, they only happen once the users have gained enough confidence in the system to fully migrate to it and adopt it in their day-to-day work.
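To illustrate how a tester might check this before release, here is a rough sketch of a volume test; the lookup_order function, the dataset size, and the two-second budget are assumptions made up for the example, not part of any particular project. A functional test with a handful of rows says nothing about how the same code behaves when the dataset approaches production size.

    import random
    import time
    import unittest

    def lookup_order(orders, order_id):
        # Hypothetical lookup that scans a list: harmless with the few
        # records used in functional tests, increasingly expensive as the
        # dataset grows toward production size.
        for order in orders:
            if order["id"] == order_id:
                return order
        return None

    class VolumeTest(unittest.TestCase):
        def test_lookups_at_production_volume(self):
            # Build a dataset closer to production size than the few rows
            # usually used in functional tests (the size is an assumption).
            orders = [{"id": i, "total": i * 1.5} for i in range(200_000)]
            sample = random.sample(range(200_000), 20)
            start = time.perf_counter()
            for order_id in sample:
                self.assertIsNotNone(lookup_order(orders, order_id))
            elapsed = time.perf_counter() - start
            # Hypothetical response-time budget: 20 lookups within 2 seconds.
            self.assertLess(elapsed, 2.0, f"lookups took {elapsed:.2f}s")

    if __name__ == "__main__":
        unittest.main()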

Some of the trickiest problems come about when there are differences, whether subtle or large, between the environment in which the product is being tested and the environment that it will be used in. Operating systems change often: security patches are released, and various components are upgraded or have different versions in the field. There may be changes in the software with which the product must integrate. There are differences in hardware setups. Any of these things can yield defects. It's up to the software tester to understand the environment the software will be deployed in and the kinds of problems that could arise. If she does not do this, entire features of the software could be unusable when it is released, because tests that worked fine on the tester's computer suddenly break when the software is out in the field.


