Current Testing Strategies

Let's explore the types of testing being done and the pros and cons of the various strategies for placing the test group in different parts of the organization.

Assumption #1: The developers have unit-tested the code.

In both the bottom-up and the top-down approaches, the most common assumption testers state when they begin testing is this: "The developers have unit-tested the code." I state this assumption in all my test agreements, and it is always a requirement in my contracts to test.

Top-Down Broad-Focused Integration Testing

When I test, I have no particular interest in any one part of the system, but rather I am interested in the whole system. After all, my assignment is almost always to verify and validate the system. The system includes applications and components programmed using everything from object-oriented programming (OOP) to assembler to batch languages. Network communications protocols carry transactions between these components through various routers, switches, databases, and security layers.

The system is not a finite state machine; it is a society of components interacting constantly with a dynamic group of stimuli. It is practically impossible to know all the stimuli and interactions going on in even a small group of components at a given instant. The Heisenberg uncertainty principle, which states that "the more precisely the position is determined, the less precisely the momentum is known in this instant, and vice versa," certainly applies: We can only prove that these components exist, not what state they are in at a particular time.

In much the same way as no single algorithm is sufficient to map all the paths through a complex system, no single type of testing used by itself will give a satisfactory result in this case. Traditional system testing, the kind that digs deep into individual components of a system, can completely ignore things like the user interface. Function testing, or end-to-end testing, can completely ignore systems issues that may cause total paralysis in the production environment. And, while I am sure that they exist, I don't personally know of any companies willing to pay for different groups of experts to perform unit, integration, system, function, end-to-end, load, usability, and user acceptance testing for the same system.

As a result, I perform whatever types of tests are appropriate for the situation. The types of tests performed should be clearly illustrated in the test inventory. For examples of the types of tests to perform, see Chapter 4, "The Most Important Tests (MITs) Method," where every type of test is represented on the same task list. Experts, on the other hand, each have a specialty, some point of fixation about which they are undoubtedly biased.

Bias is a mental leaning or inclination, a partiality or prejudice. It is natural and healthy for specialists to be biased in their view of their project. Much like a proud parent, the experts' partiality gives them a tight focus that ensures maximum quality in their project. However, all of these child projects must grow up to function as an integrated system in the real world. The contemporary software tester must make sure that they do this.

Today's test professional must be, among other things, an integrator whose focus can take in the entire system. Testers who do not have training in test techniques and a good command of test metrics will have a hard time rising to this challenge.

Organizational Strategies for Locating the Test Group

I have consulted in all types of organizations, and I am constantly amazed at the variety of places management comes up with to stick the test group. My experience indicates that no matter where the test group shows up on your org chart, it is important to pick a location that maximizes good communications and the free flow of information. Every location has its pros and cons, as I point out in the following sections.

Have an Independent Test Group under Their Own Management

This approach sounds great, but unfortunately it has some major flaws. First is the fact that the test group can become squeezed between the development and operations groups. If communications fail, the testers can find themselves facing adversarial situations on both sides. If the developers are late with the code, the testers will be the ones who have to either make up the time or explain the delays to operations. Testers need allies, and this organizational strategy has a tendency to make them continually the bearers of bad news and often the scapegoats.

Put the Test Group in Development

There is currently a trend toward moving the testing functions into the development area and away from a separate test group. There are two dominant themes behind this trend. The first is to break down barriers between the two groups to allow better communications. The other rationale is that the test group is not competent to conduct system testing; therefore, the developers are going to conduct or assist in conducting the system test.

There is a serious problem with both these rationales. The developers certainly have expertise in the system; generally, they are the ones who wrote it or maintain it. Both of these strategies achieve the same result: having the fox guard the henhouse. Even if the developers doing the testing have training in software testing techniques, a rare thing at best, they suffer from the bias previously mentioned. Not only are they likely to miss bugs that their bias forbids them to see, they are not likely to test outside their area of expertise. A system test alone is not likely to remove bugs in the user interface or in the function sequence steps, but these bugs are the first bugs that the end user is likely to see.

Any tester who has ever tried to convince development that it is a serious bug for an application to tie up the entire PC for a couple of minutes while it runs a database query knows exactly what I mean. A tester arguing this point without citing some type of standard has a poor chance of being heard. Just about every user interface design guide recommends constant user feedback and response times of less than 5 seconds, and usability lab studies indicate that users believe the application is hung after about 20 seconds of inactivity. But even the tester who cites a design guide standard for response time is likely to find out either that the recommendations of the design guide have been waived or that the developers are not required to follow any of the cited design guides at all. The developers' bias may come from knowing how much trouble it was to get the application to work as well as it does. They may not be anxious to take time away from current commitments to try again, especially when fixing the bug may mean a massive rewrite.
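One way to give such an argument teeth is to turn the design-guide number into an automated check rather than an opinion. The sketch below is a minimal illustration only: the myapp.reports module, the run_customer_query function, its signature, and the 5-second threshold are assumptions standing in for whatever the local application and design guide actually specify.

```python
import time

# Hypothetical application call; it stands in for whatever operation
# ties up the PC while the database query runs.
from myapp.reports import run_customer_query  # assumed module and function

RESPONSE_TIME_LIMIT_SECONDS = 5.0  # assumed design-guide recommendation

def test_customer_query_response_time():
    """Fail if the query keeps the user waiting longer than the guide allows."""
    start = time.perf_counter()
    run_customer_query(account_id="TEST-001")  # assumed signature and test data
    elapsed = time.perf_counter() - start
    assert elapsed < RESPONSE_TIME_LIMIT_SECONDS, (
        f"Query took {elapsed:.1f}s; the design guide recommends responses "
        f"under {RESPONSE_TIME_LIMIT_SECONDS:.0f}s with constant user feedback"
    )
```

A failing assertion that cites the guide's own number is much harder to wave away than a tester's impression that the application "feels slow."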

Another example is the situation where developers see nothing wrong with a menu option that says Report Writer but takes the user to a window titled Able Baker, especially when several other menu options in the same application navigate to windows with equally mismatched titles. No matter that the design guide clearly recommends against such confusing labels. After all, if development changed one of these labels, they would probably have to change them all; surely the issue cannot be that important. It is very challenging to come up with an argument that will convince developers and management that the fix is worth the cost and effort.
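This kind of label mismatch is also easy to check mechanically. The following is a minimal sketch, assuming a hypothetical expected-title map and an open_window callable that stand in for whatever UI automation tool the test effort actually uses; none of the names come from the book.

```python
# Menu label -> window title the design guide expects (illustrative entries only).
EXPECTED_TITLES = {
    "Report Writer": "Report Writer",
    "Account Setup": "Account Setup",
}

def check_menu_window_titles(open_window, expected=EXPECTED_TITLES):
    """open_window(menu_label) should drive the UI, select the menu option,
    and return the title of the window that opens."""
    mismatches = []
    for menu_label, expected_title in expected.items():
        actual_title = open_window(menu_label)
        if actual_title != expected_title:
            mismatches.append((menu_label, expected_title, actual_title))
    # e.g. [("Report Writer", "Report Writer", "Able Baker")]
    return mismatches
```

A report listing every mismatched title in one place makes the "change one, change them all" cost visible and gives management something concrete to weigh against the confusion the labels cause.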

However, having pointed out the types of problems that can arise when the test effort is moved into development, I must also say that I have seen it work very well. The environments where this approach is successful have fairly small, highly competent programming groups of three to ten programmers, producing high-reliability class software or firmware. Typically, these projects last from 6 to 12 months, and after unit testing, no developer tests her or his own software.

The other thing that these successful efforts have in common is that the systems being tested by developers for developers were small stand-alone systems, such as firmware for telephone operators' stations, firmware for pagers, and medical imaging software running on a single platform. When this testing strategy is used in large systems, or on components that will run in large systems, the method fails. Even if the testing is thorough, in this situation it amounts only to a unit test, because the product must still be integrated into the larger system it will run in. This brings me to the next interesting trend that I have observed in large networks: propping up the product with support.

Don't Have a Test Group At All

It is far too simplistic to assume that companies must test their products or go bankrupt. There are many companies that do little or no testing and not only survive but prosper. Their prosperity is not often the result of flawless programming. These companies often have divvied up the test group or disbanded it altogether. Whatever pre-production testing is done is done by the development group.

Note 

What happens when there is little or no testing? The users test it. But then what? You prop it up with support.

These organizations typically have several layers of support personnel. The first layer generally consists of junior-level people who log the issues, answer common questions, and try to identify and route the more difficult issues to the next layer of more technically competent support personnel. In times past, the testers often filled the second layer of support, since they were the experts on the system and its problems.

Problems that can't be resolved by the second level of support are escalated to the third and most expert layer. The third layer of support is usually made up of senior wizards who understand the depths of the system. These folks have the best chance of diagnosing the really tough problems. Generally, what they can't fix outright they send back to development, and they are likely to get a speedy response.

Management may not see the need to pay for highly competent and experienced testers, but that is not the case when it comes to support. The typical third-line support person is a senior-level programmer or systems engineer, whereas the programmers writing the bulk of the code rank one or two levels below that. The reason is simple: Eventually the bugs get to the users and are prioritized by the users' demands. Management doesn't have a problem justifying highly paid technicians who make the customer happy and get the right bugs fixed. This is a logical outgrowth of the "let the customers test it" approach started by the shrink-wrap software industry in the 1990s.

It is logical that this situation should arise given the chain of events I have described, but it is not obvious to many people that the testing often goes on after the product has been shipped. The support team may be testing the product, and the customer is almost certainly testing it. I find it interesting that support personnel don't think of what they do as testing; they generally describe it in terms of tuning the system and fixing bugs. In reality, the users are doing the testing.

Another flavor of this trend is to let the customer test the product under the guise of installing or upgrading their system, while providing one or more senior support staff to manage the process and get the bugs fixed ASAP.

This approach of propping it up with support is very appealing to the manager who shipped the product without testing; in fact, such support is required. The approach needs no test planning, and no time is spent designing or tracking tests. Only the most important bugs get fixed. The problem is that the cost of finding and fixing bugs grows geometrically the farther the bugs get from the developers. This is the most expensive way to find and remove bugs, and again, only the most important, obnoxious bugs are removed.
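To see what that geometric growth looks like, here is a rough illustration only; the 10x multiplier per stage is an assumed, commonly cited order of magnitude, not a figure from this book or from any particular project.

```python
# Illustrative figures only: an assumed 10x cost multiplier for each stage a bug
# travels away from the developer, showing how the cost compounds geometrically.
STAGE_MULTIPLIER = 10
stages = ["caught by the developer", "caught in system test",
          "caught by support", "caught by the user in production"]

cost = 1.0  # relative cost units for a bug fixed by the developer
for stage in stages:
    print(f"{stage:>32}: {cost:>6.0f} cost units")
    cost *= STAGE_MULTIPLIER
```

Whatever the true multiplier is for a given shop, the shape of the curve is the point: each hand-off away from development multiplies the cost rather than adding to it.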

Put the Test Group in Operations

I have found putting the test group in operations to be a very healthy and productive practice, and it is the location I prefer. In my experience, operations is the best place to be when testing a system: the people who control the system are my best allies, and proximity to the person who can help find the problem in a complex system is invaluable.

When I am attached to operations, I am in a position to add a great deal of value to the released product, in addition to the test effort itself. Consider the following: When I am testing a system, I am really getting it ready for the customer. Often I am also writing or reviewing the user guide at the same time as I am testing. If I have to write up any special instructions to make the system or product work, like how to tweak this or that, they are passed along to customer support and from there to the customer.

Finally, a good test suite is also a diagnostics suite. So if I am part of operations, there is a good level of confidence in the validity of my test suites, and again, proximity makes it easy for operators to get expert help in maintaining and running the tests and interpreting the results. In this situation my test suites are reused, sometimes for years.

The only way to make test replay automation cost-effective is to make sure the automated tests get lots of reruns. In a single test effort, many tests are performed only once and never again, so they are not worth automating. However, in this scenario, the automated tests can be incorporated into diagnostics suites that are run in the production environment to help make sure the system stays healthy. This type of reuse really adds value to the tester's work.
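To make that kind of reuse concrete, here is a minimal sketch, assuming a simple check registry and a placeholder health endpoint; the registry pattern, the URL, and the expected payload are illustrative assumptions, not the book's or any real system's design.

```python
import json
import urllib.request

DIAGNOSTIC_CHECKS = []

def diagnostic(func):
    """Register a check so the same function can run in the test suite
    or as a production diagnostic."""
    DIAGNOSTIC_CHECKS.append(func)
    return func

@diagnostic
def check_order_service_heartbeat():
    # Placeholder endpoint; substitute the real service URL in each environment.
    with urllib.request.urlopen("http://localhost:8080/health", timeout=5) as resp:
        payload = json.load(resp)
    assert payload.get("status") == "ok", f"unexpected health payload: {payload}"

def run_diagnostics():
    """Run every registered check and report pass/fail, suitable for operations."""
    results = {}
    for check in DIAGNOSTIC_CHECKS:
        try:
            check()
            results[check.__name__] = "pass"
        except Exception as exc:
            results[check.__name__] = f"fail: {exc}"
    return results
```

Because the checks are ordinary functions, the test effort can call them through its test runner while operations calls run_diagnostics() on a schedule, so the same work pays off in both places.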


