The Sources of the Tests on the Inventory

Tests on the inventory can and should come from a number of sources, much as the items on the complete grocery list on the refrigerator door do. The tests can target any part of the system, ranging from high-level tests of user functions to low-level system tests. The following are some of the most common sources of tests in the inventory:

  • Requirements

  • Analytical methods

    • Inspections, reviews, and walk-throughs

    • Path analysis

    • Data analysis

    • Environment catalog

    • Usage statistics

  • Nonanalytical methods

Tests Based on Requirements

One of the first sources of tests for the test effort should be the requirements. Establishing the relative priorities of the requirements helps greatly in ranking the tests designed to verify them and in determining how much test coverage each will be given. Ranking is a valuable tool that lets designers and developers pass on their knowledge and assumptions about the relative importance of the various features in the system. More on this when we discuss how I build a test inventory in Chapter 7, "How to Build a Test Inventory."
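
As a concrete illustration, each entry on the inventory can carry both the requirement it verifies and a rank inherited from that requirement's priority. The following is only a minimal sketch; the field names, identifiers, and the 1 (critical) to 4 (low) priority scale are illustrative assumptions, not the inventory format developed in Chapter 7.

# A minimal, illustrative test-inventory entry keyed to requirements.
# Field names and the 1 (critical) to 4 (low) priority scale are assumptions.
from dataclasses import dataclass

@dataclass
class InventoryItem:
    test_id: str          # unique identifier for the test
    requirement_id: str   # requirement this test verifies
    description: str      # what the test checks
    rank: int             # inherited from the requirement's priority

# Priorities carried over from the ranked requirements (invented values).
requirements_priority = {"REQ-001": 1, "REQ-002": 3}

inventory = [
    InventoryItem("T-001", "REQ-001", "Verify login with valid credentials",
                  rank=requirements_priority["REQ-001"]),
    InventoryItem("T-002", "REQ-002", "Verify report export to CSV",
                  rank=requirements_priority["REQ-002"]),
]

# Higher-priority tests (lower rank numbers) are scheduled and covered first.
inventory.sort(key=lambda item: item.rank)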

Analytical Methods

These are tests determined through systematic analysis of the system requirements, user requirements, design documents and code, or the system itself. There are many types of analysis techniques and methods. The following are the ones I have found most useful:

  • Inspections, reviews, and walk-throughs

  • Path analysis

  • Data analysis

  • Environment catalog

  • Usage statistics

Inspections, Reviews, and Walk-throughs

Inspections, reviews, and walk-throughs are all used to test the accuracy of paper documentation. Paper is also the medium currently used to store and communicate the ideas that model the system. Many of the defects discovered in paper documentation are related to paper and the limitations of paper-dependent processes, not to fuzzy or incorrect human logic.

"The Horses for Courses Principle: Use walk-throughs for training, reviews for consensus, but use Inspections to improve the quality of the document and its process." (from Software Inspection by Tom Gilb and Dorothy Graham)

Inspections

Inspection, with a capital I, is a formal technique used to test project documentation for defects and measure what is found. It is currently acknowledged to be the most effective method of finding and removing defects. Issues logged against the documentation through inspections are usually resolved before they can become bugs in the product. Testers do not necessarily have access to the Inspection results unless they hold their own Inspection (which is a very good idea). Ideally, all test documents, plans, the test inventory, and so on will be inspected by a team that includes both testers and developers.

Inspections do not generate tests that will be run against the system. Tom Gilb, noted author and management practice guru, says that "inspection is a testing process that tests ideas which model a system." Inspections are used to measure and improve the quality of the paper documentation, or the ideas expressed in it.

Reviews

Reviews, also called peer reviews, are usually conducted on project documentation early in the development cycle. Reviewers from various groups evaluate the contents of the documents. Each reviewer has his or her own perspective, expectations, and knowledge base.

A review is probably the best opportunity to publish assumptions for general scrutiny. Reviews are very good at uncovering possible logic flaws, unaddressed logic paths, and dependencies. All of these things will require testing. Tests determined through reviews can be added to the test inventory in the same way as any other tests. Typically, though, reviews, like inspections, are conducted on the paper documentation of a project.

Walk-throughs

Walk-throughs are usually group sessions that trace processes, both existing and proposed, in order to bring everyone up to date on the current thinking. The documentation used in a walk-through may be high-level logic flows (bubbles on a foil or a whiteboard). Walk-throughs are also sources of tests for the test inventory.

In a RAD/Agile effort, the design and requirements may not be written down, but they can be, and are, reviewed nonetheless. Some examples of non-paper documentation techniques that describe the way the system will work include story actors, metaphors, and day-in-the-life scenarios. These oral versions of the project's design and requirements evolve continuously and undergo constant review under the scrutiny of the group. Many types of RAD/Agile efforts use reviewing techniques similar to those described previously to consider the way a software product will work or how a logic process will progress.

Path Analysis

There are many published techniques for conducting path analysis and path testing, also called white box testing. This type of analysis is systematic and quantifiable. Chapters 11 and 12 discuss the topic at length.
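
Chapters 11 and 12 present the techniques in detail; the following is only a small illustrative sketch of the idea, assuming a hypothetical function with two decisions and therefore four logic paths, each of which gets its own test.

# Illustrative only: a tiny function with two decisions, giving four possible
# logic paths, and one test per path. The function and its values are made up.

def shipping_cost(weight_kg: float, express: bool) -> float:
    cost = 5.0 if weight_kg <= 1.0 else 9.0   # decision 1: weight class
    if express:                               # decision 2: delivery speed
        cost *= 2
    return cost

# One test per path through the two decisions:
assert shipping_cost(0.5, express=False) == 5.0    # light, standard
assert shipping_cost(0.5, express=True) == 10.0    # light, express
assert shipping_cost(2.0, express=False) == 9.0    # heavy, standard
assert shipping_cost(2.0, express=True) == 18.0    # heavy, express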

Data Analysis

There are many techniques published for conducting data analysis and data testing, also called black box testing and behavioral testing. This type of analysis is systematic and quantifiable. Chapter 13, "Data Analysis Techniques," discusses the topic.
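
As a small illustration of the kind of tests data analysis produces, the following sketch applies boundary-value checks to a hypothetical input rule; the valid age range of 18 to 65 is an assumption invented for the example, and Chapter 13 covers the real techniques.

# Illustrative sketch: boundary-value tests for a hypothetical field that
# accepts ages from 18 to 65 inclusive. The rule and values are assumptions.

def accepts_age(age: int) -> bool:
    return 18 <= age <= 65

# Tests at and just beyond each boundary of the valid partition:
assert accepts_age(17) is False   # just below the lower boundary
assert accepts_age(18) is True    # lower boundary
assert accepts_age(65) is True    # upper boundary
assert accepts_age(66) is False   # just above the upper boundary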

Environment Catalog

The possible combinations and permutations of hardware and software environments in which a software system may be expected to run are virtually infinite.

Just because a software product runs perfectly in one environment does not mean that it will run at all in any other environment. Not all device drivers were created equal. No two versions of an operating system are truly compatible, either upward or downward. Standards for data transmission, translation, and storage vary widely.

Note 

The entire test inventory should be run against the software under test in each test environment.

Verifying and validating that the software performs correctly in the hardware and software environments where it will be expected to run is the most demanding testing task of all. In all but the smallest applications, this amount of test coverage is impossible unless the testing is automated.

In the planning stage, testers must make a catalog of the environments they plan to test. Management is normally involved in the decisions about which environment combinations will be tested, because they approve the purchase of the hardware and software. The best source for selecting the environments that should be tested is the customer support record of reported problems. Unfortunately, problematic hardware and software environments are usually first identified and reported by customers.
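
One way to keep such a catalog manageable is to record it as data and schedule the entire inventory against every cataloged environment, as the note above suggests. The sketch below is purely illustrative; the operating systems, browsers, and test names are invented placeholders, not a recommended set.

# A minimal sketch of an environment catalog. Entries are hypothetical; in
# practice they would come from customer support records and management review.
from itertools import product

operating_systems = ["Windows 11", "Ubuntu 22.04", "macOS 14"]
browsers = ["Chrome", "Firefox"]

environment_catalog = list(product(operating_systems, browsers))

test_inventory = ["login", "search", "export report"]  # placeholder test names

# The entire inventory is scheduled against every cataloged environment.
test_runs = [(os_name, browser, test)
             for os_name, browser in environment_catalog
             for test in test_inventory]

print(f"{len(test_runs)} test runs")  # 3 OSes x 2 browsers x 3 tests = 18 runs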

Usage Statistics: User Profiles

It is common today for progressive commercial software makers to build automatic logging functions into their products, both for diagnostics when there is a problem and for gathering historical data to use in evaluating the functions and usability of their products. Historically, such logging functions have been used only at beta test sites and in special situations, but the information they provide is crucial to understanding how the customer uses the system. How important this information is in a competitive market is best illustrated by the core features of the Web servers being marketed today. Their makers have included very comprehensive logging abilities in their products. These logs run continuously in the production environment, giving the Webmaster instant access to usage statistics for every page and function in the system. This information can then be used to tune the Web site on a daily basis.

User profile information provides historical data, or feedback, to the developers and testers about how the product is being used, misused, and not used. The most-used functions, options, and problems are usually part of this record. These records are often a source of tests for the current release, especially when new functions have been introduced or functions have been changed to accommodate the users.

When users report a bug, the user profile information offers the best chance of re-creating the bug for development to fix and for the testers to regression-test the fix. User misunderstandings commonly show up in the user profiles, such as a sequence of actions taken by the user that caused an unexpected outcome. For example, the user did not realize the need to specify a location for a file, and so the file was not actually saved even though the user clicked the OK button, or it was saved in some unknown location.

As valuable as these records are for adding tests to the inventory, they are even more valuable for prioritizing the tests. This information is invaluable in determining which functions are used most and how they are used, which in turn helps testers determine where to devote test resources.
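
For instance, usage counts pulled from the logs can be used to order the inventory so that the most-used functions receive test resources first. The following sketch is hypothetical; the feature names, counts, and test identifiers are invented for illustration.

# Hypothetical sketch: rank tests by how often the feature they cover is used,
# based on counts pulled from the product's usage logs (invented numbers).
from collections import Counter

usage = Counter({"save": 12000, "print": 450, "export_csv": 30})

tests = {"save": ["T-010", "T-011"], "print": ["T-020"], "export_csv": ["T-030"]}

# Most-used features first; their tests get test resources first.
for feature, count in usage.most_common():
    print(f"{feature:12} used {count:6d} times -> run {', '.join(tests[feature])}")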

Profile information invariably leads to improved product features and usability. For example, a client/server application might produce over 200 reports, but usage statistics from the host might show that only 15 to 20 of these reports were requested by customers in the 12 months since release. Such information can have a profound impact on the projections for both the development and test schedules.

Another example of using usage statistics to improve a product is the evolution of the toolbar button. When toolbar buttons were first introduced, they were almost universally ignored by the users. There is a story from Microsoft that the beta test user profiles showed this nonuse. This information prompted an inquiry into why this timesaving feature was not being used. The answer was that users were uncertain what the buttons did and avoided them for that reason. That answer led to an explanation of each button being placed in the bottom row, or status line, of the window. When the mouse moved over the button, the explanation would appear. Human factors kept most people from seeing the information at the bottom of the window when their attention was focused on the toolbar button or menu at the top of the window. As a result, the buttons continued to be ignored. The next step in the evolution of the toolbar button was the inclusion of the explanatory text that opens next to the button. This latest innovation has led to widespread use of descriptive text for all active elements on the screen.

If the test effort has access to this type of usage data, it should be used to help prioritize testing, design the actual test scripts, and determine the order in which the tests should proceed. The use of user profile data in the prioritization of tests will be discussed in Chapter 9, "Risk Analysis."

Nonanalytical Methods

Nonanalytical methods are actually the most commonly used methods to design tests. However, when they are used without any analytical methods, the tests generated will probably not provide systematic or uniform coverage. Nonanalytical methods are most effective when used after analytical methods have been applied, to test assumptions made in the analytical tests, to do error guessing, and to design purely random tests. [3] A couple of the main types of nonanalytical methods are discussed in the sections that follow.

Brainstorming Sessions

Brainstorming sessions are good sources of test scenarios for the test inventory. The results of brainstorming sessions can range from highly structured sets of tests to chaos, depending on the rules by which they are conducted. The scenarios developed in brainstorming sessions typically exercise important functions in the system but are ad hoc in nature and do not ensure systematic or uniform test coverage. Typically, this type of test is also generated spontaneously as the test effort progresses through error guessing and assumption testing on the part of the tester.

An example of this type of test development is a case where a necessary dialog box will not open if the desktop is in a certain view. When testers discover a bug of this type, it is normal to add tests to verify that there are no other problems with the particular view. This is a normal corrective action when an underlying assumption (namely, that the software would function in the same way no matter what view it is in) is found to be in error.

Expert Testers

Typically, expert testers develop test scenarios that explore hot spots in problem areas such as module interfaces, exception processors, event processors, and routers. This type of testing is also used to probe critical security or data-sensitive areas in the system. While these scenarios are very important, again, systematic and uniform coverage is not ensured.

These tests are often designed to stress a particular module or component that is suspected of being buggy or to probe for vulnerabilities in components that must be demonstrably secure. An example of this first type of test would be to set up a series of actions to overload a queue in switch software, and then verify that the overload is handled properly. An example of the second type of testing is when the makers of software that provides system security, like a firewall, send their products to special consultants whose job is to find a way to break the security.
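
A sketch of the first kind of test might look like the following, with a bounded queue standing in for the switch software's real queue; the capacity and load figures are invented for illustration, not taken from any real system.

# Hedged sketch of an expert stress test: overload a bounded queue and verify
# that the overflow path is exercised and handled rather than crashing.
# The queue, its capacity, and the load figures are hypothetical stand-ins.
import queue

def test_queue_overload():
    q = queue.Queue(maxsize=100)          # capacity of the component under test
    overflowed = 0
    for message in range(150):            # offer 50% more load than capacity
        try:
            q.put_nowait(message)
        except queue.Full:
            overflowed += 1               # the overload path we want to exercise
    assert q.qsize() == 100               # queue holds exactly its capacity
    assert overflowed == 50               # every excess message hit the full path

test_queue_overload()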

I do not have much to say about specific techniques for adding nonanalytical tests to the inventory. Nonanalytical tests represent an opportunity to exercise an artistic approach to testing. I have had plenty of experience creating these tests, as have most testers. They are spontaneous, ad hoc, often redundant, at least in part, and for the most part, inspirational in nature. Inspiration is capricious, and so are these tests. Children playing Pin the Tail on the Donkey use more consistent techniques than the nonanalytical techniques used to test software.

When nonanalytical methods are used without analytical methods, the test set generated will undoubtedly be seriously deficient. Formal path and data analysis techniques provide consistent and dependable tools for defining tests. But analytical approaches alone are not enough to ensure an excellent test suite either, given the current state of technology. The tester's creativity and artistic technique are still critical factors to building an excellent test set.

[3] Random tests, usually generated and rerun by automated tools, are often called monkeys.


