Testing is like any other project. It must be planned, designed, documented, reviewed, and performed. Because proper testing is based on the software requirements, test planning starts during the requirements phase and continues throughout the SDLC. As the requirements for the software system are prepared, the original planning for the test program also gets under way. Each requirement will eventually have to be challenged in at least one test.
Requirements traceability matrices (RTMs), which track the requirements through design and down to the code that implements them, are used to prepare test traceability matrices (TTMs). These matrices track the requirements to the tests that exercise them.
Figure 4.2: Test traceability matrix.
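The mapping a TTM records can be sketched in code. The following is a minimal illustration, with invented requirement IDs and test names, of deriving a TTM from an RTM and flagging requirements not yet covered by any test:

```python
# Sketch of a test traceability matrix (TTM) derived from an RTM.
# All requirement IDs, design-unit names, and test names are hypothetical.

def build_ttm(rtm, test_cases):
    """Map each requirement ID in the RTM to the tests that exercise it."""
    ttm = {req: [] for req in rtm}
    for test, covered in test_cases.items():
        for req in covered:
            if req in ttm:
                ttm[req].append(test)
    return ttm

def untested(ttm):
    """Requirements not yet challenged by any test."""
    return sorted(req for req, tests in ttm.items() if not tests)

rtm = {"REQ-1": "design unit A", "REQ-2": "design unit B", "REQ-3": "design unit C"}
tests = {"TC-01": ["REQ-1", "REQ-2"], "TC-02": ["REQ-2"]}

ttm = build_ttm(rtm, tests)
print(untested(ttm))   # → ['REQ-3']: not yet covered by any test
```

As the SDLC progresses, the `tests` table grows until `untested` returns nothing at any level.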
Conflicts between requirements can sometimes be revealed in the TTM as its fields are completed. A common example of requirements conflict is the situation that calls for both high-speed processing and efficient use of memory, as in the case of real-time, embedded software.
Figure 4.2 is an example of a TTM at the system or black box testing level, since the requirements are noted as functions. As the SDLC progresses, so does the planning for the testing, and the TTM becomes more and more detailed until each specific required characteristic of the software has been challenged in at least one test at some level. Not every requirement can, or should, be tested at every level of the test program. Compliance with some can be demonstrated only at certain levels of testing. The TTM is also very important as the requirements change, since it identifies the tests that are affected by each change.
Some items of test planning are necessarily left until later in the SDLC. Such things as the bases for regression testing are determined during the acceptance test period as the final requirements baseline is determined. Likewise, as new requirements are determined so are the plans for testing those requirements.
Even though some test planning must be deferred, the overall flow of the test program can be laid out early.
Figure 4.3: Typical testing flow.
The first step in function testing, and often in input/output (I/O) testing, is to construct situations that simulate actual use of the software.
Once the test cases are developed, the software
requirements that are involved in each test case are identified. A
check is made against the requirements traceability matrix to be
sure that each requirement is included in at least one test case.
If a test case is too large or contains many requirements, it
should be divided into subtest cases or scenarios. Test cases, and scenarios when they are used, should be small enough to be manageable.
Consider the case of testing the software in a point-of-sale terminal for a convenience store. The store stocks both grocery and fuel products. The test cases, or use cases, might be as follows:
Open the store the very first time. This would test the requirements dealing with the variety of stock items to be sold, their prices, and the taxes to be applied to each item. It also includes requirements covering the setting of the initial inventory levels.
Sell products. Sales of various products might be further divided into test scenarios, such as the following:
Sell only fuel. This scenario includes those requirements that deal with pump control, fuel levels in the tanks, and the prices and volume of fuel sold. It also tests those requirements that cause the sale to be recorded and the register tape to be printed.
Sell only grocery items. Here, the sales events are keyed, or scanned, in on the terminal rather than read from a pump register, so there are requirements being tested that are different from those of the fuel-only scenario.
Sell both fuel and grocery items. This scenario, building on the first two, causes the previous requirements to be met in one single sale. There may be additional requirements that prevent the keying of a grocery sale from adversely affecting the operation of the pump, and vice versa. It might also be necessary to consider the sequence of sales, such as fuel first and then grocery, or grocery first and then fuel. Other requirements might deal with the interaction of pump register readings and keyed grocery entries.
Restock the store. Having sold sufficient items, it becomes necessary to restock the store's inventory.
Close the store for the last time. Even the best businesses eventually close. This test case exercises the requirements involved in determining and reporting the value of the remaining inventory. Some of these same requirements might be used in tallying weekly or other periodic inventory levels for business history and planning tasks.
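Two of the sales scenarios above might be sketched as executable tests. The `Register` class and its interface below are hypothetical stand-ins for the point-of-sale software, invented for illustration:

```python
# Minimal sketch of the point-of-sale sales scenarios as executable tests.
# The Register class, its methods, and the prices are all hypothetical.

class Register:
    def __init__(self, fuel_price=3.50):
        self.fuel_price = fuel_price
        self.tape = []          # printed register-tape lines

    def sell_fuel(self, gallons):
        total = round(gallons * self.fuel_price, 2)
        self.tape.append(("FUEL", total))
        return total

    def sell_grocery(self, item, price):
        self.tape.append((item, price))
        return price

def scenario_sell_only_fuel():
    reg = Register()
    total = reg.sell_fuel(10)
    assert total == 35.0                    # price times volume
    assert reg.tape == [("FUEL", 35.0)]     # sale recorded on the tape

def scenario_sell_fuel_and_grocery():
    reg = Register()
    reg.sell_fuel(2)
    reg.sell_grocery("MILK", 2.25)
    assert len(reg.tape) == 2               # both sales in one transaction

scenario_sell_only_fuel()
scenario_sell_fuel_and_grocery()
```

Each scenario stays small and traceable to a handful of requirements, as the text recommends.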
Should comparison of the test cases and scenarios with the RTM reveal leftover requirements, additional situations must be developed until each requirement is included in at least one test case or scenario.
Although this has been a simple situation, the example shows how test cases and scenarios can be developed using the actual anticipated use of the software as a basis.
As design proceeds, the test plans are expanded into specific test cases, test scenarios, and test procedures. Test procedures are the step-by-step instructions that spell out exactly how each test is to be executed. They tell which data are to be input, which actions are to be taken, and which results are to be expected.
The software quality practitioner reviews the test
cases and scenarios, the test data, and the test procedures to
assure that they all go together and follow the overall test plan
and that they fully exercise all of the requirements for the
software system. Figure 4.4 shows a sample test procedure form.
Figure 4.4: Sample test procedure form.
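The information a procedure form carries can also be held as structured data. The field names below follow the kind of information such a form records; the exact layout of Figure 4.4 may differ:

```python
# A test procedure captured as structured data rather than free text.
# Field names are illustrative, not a reproduction of Figure 4.4.

from dataclasses import dataclass, field

@dataclass
class TestProcedure:
    procedure_id: str
    test_case: str
    requirements: list                           # requirement IDs exercised
    steps: list = field(default_factory=list)    # (action, expected result) pairs

    def add_step(self, action, expected):
        self.steps.append((action, expected))

proc = TestProcedure("TP-04", "TC-01", ["REQ-1"])
proc.add_step("Key a grocery sale of $2.25", "Tape shows item and price")
proc.add_step("Press TOTAL", "Tax is added and total is displayed")
print(len(proc.steps))   # → 2
```

Keeping each step paired with its expected result makes the later pass/fail comparison mechanical.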
Input of test data is the key to testing and comes from a variety of sources. Traditionally, test data inputs have been provided by test driver software or by tables of test data that are input at the proper time by an executive test control module specially written for the purpose. These methods are acceptable when the intent is to provide a large number of data values to check repetitive calculations or transaction processors. The use of these methods becomes less practical, however, as the software grows more interactive.
As the software system being tested becomes more complex, particularly in the case of interactive computing, a more flexible type of test environment is needed. Test software packages called simulators, which perform in the same manner as some missing piece of hardware or other software, are frequently used. These can be written to represent everything from a simple interfacing software unit to a complete spacecraft or radar installation. As data are received from the simulator and the results returned to it, the simulator is programmed to respond with new input based on the results of the previous calculations of the system under test.
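A simulator's defining behavior is that its next input depends on the previous output of the system under test. The sketch below, with entirely invented behavior, stands in for a missing fuel pump register:

```python
# Sketch of a simulator standing in for a missing hardware interface.
# The pump behavior and field names are invented for illustration.

class PumpSimulator:
    """Pretends to be a fuel pump register for the system under test."""
    def __init__(self, tank_level=100.0):
        self.tank_level = tank_level

    def respond(self, gallons_requested):
        # The next input is computed from the previous result of the
        # system under test, as a real simulator would do.
        dispensed = min(gallons_requested, self.tank_level)
        self.tank_level -= dispensed
        return {"dispensed": dispensed, "tank_level": self.tank_level}

sim = PumpSimulator(tank_level=15.0)
print(sim.respond(10.0))   # → {'dispensed': 10.0, 'tank_level': 5.0}
print(sim.respond(10.0))   # only 5.0 gallons remain, so only 5.0 dispensed
```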
Another type of test software is a stimulator, which represents an outside software or hardware unit that presents input data independently from the activities of the system under test. An example might be the input of a warning message that interrupts the processing of the system under test and forces it to initiate emergency measures to deal with the warning.
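By contrast, a stimulator fires on its own schedule regardless of what the system under test is doing. A minimal sketch, with a hypothetical warning message and timing rule:

```python
# Sketch of a stimulator: it injects events independently of the
# activity of the system under test. Message text and the step-based
# schedule are hypothetical.

class WarningStimulator:
    def __init__(self, fire_at_step=3):
        self.fire_at_step = fire_at_step
        self.step = 0

    def poll(self):
        """Return a warning once the scheduled step is reached."""
        self.step += 1
        if self.step == self.fire_at_step:
            return "LOW-TANK WARNING"
        return None

stim = WarningStimulator(fire_at_step=2)
events = [stim.poll() for _ in range(3)]
print(events)   # → [None, 'LOW-TANK WARNING', None]
```

The system under test must interrupt its normal processing whenever `poll` delivers the warning, exercising the emergency-handling requirements.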
The final step in the provision of interactive inputs is the use of a keyboard or terminal that is being operated by a test user. Here, the responses to the processing by the system under test are (subject to the constraints of the test procedures) the same as they will be in full operation.
Each type of data input fulfills a specific need as called out in the test documentation. The software quality practitioner will review the various forms of test data inputs to be sure that they meet the needs of the test cases and that the proper provisions have been made for the acquisition of the simulators, stimulators, live inputs, and so on.
Documentation of expected results is necessary so that actual results may be evaluated to demonstrate test success or failure. The bottom line in any test program is the finding of defects and the demonstration that the software under test satisfies its requirements. Unless the expected results of each test are documented, there is no way to tell if the test has done what the test designer intended. Each test case is expected to provide the test data to be input for it. In the same way, each test case must provide the correct answer that should result from the input of the data.
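With expected results documented alongside the input data, the pass/fail decision reduces to a comparison. A minimal sketch, with invented field names and an assumed tolerance for numeric results:

```python
# Pass/fail evaluation against documented expected results.
# The test-case dictionary fields ("expected", "tolerance") are illustrative.

def evaluate(test_case, actual):
    """Compare the actual output against the documented expected result."""
    expected = test_case["expected"]
    if isinstance(expected, float):
        # Numeric answers are compared within a stated tolerance.
        passed = abs(actual - expected) <= test_case.get("tolerance", 0.0)
    else:
        passed = actual == expected
    return "PASS" if passed else "FAIL"

tc = {"input": 10.0, "expected": 35.0, "tolerance": 0.01}
print(evaluate(tc, 35.004))   # → PASS (within tolerance)
print(evaluate(tc, 34.0))     # → FAIL
```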
Expected results may be of various sorts. The most common, of course, is simply the answer expected when a computation operates on a given set of input data.
It is the responsibility of the software quality practitioner to review the test documentation to ensure that each test has an associated set of expected results. Also present must be a description of any processing of the actual results so that they may be compared with the expected results and a pass/fail determination made for the test.
Test analysis involves more than pass/fail
determination. Analyses of the expected versus actual results of
each test provide the pass or fail determination for that test.
There may be some intermediate processing necessary before the comparison can be made, however. In a case in which previous real sales data are used to check out the new inventory system, some adjustments to the actual results may be necessary to allow for the differences between the old system and the new.
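As one illustration of such intermediate processing, the results computed from old sales data might be rescaled before the pass/fail comparison. The adjustment rule below (a uniform price ratio) is invented for the example:

```python
# Sketch of intermediate processing before comparison: totals replayed
# from legacy sales data are rescaled to current prices before the
# pass/fail check. The price-ratio adjustment is hypothetical.

def adjust(actual_totals, price_ratio):
    """Rescale totals computed from old sales data to current prices."""
    return [round(t * price_ratio, 2) for t in actual_totals]

old_run = [10.00, 20.00]
adjusted = adjust(old_run, price_ratio=1.10)   # prices rose 10%
print(adjusted)   # → [11.0, 22.0]
```

Only after this normalization are the adjusted actuals compared with the documented expected results.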
Other beneficial analysis of the test data is
possible and appropriate. As defects are found during the testing,
or certain tests continue to fail, clues may arise as to larger
defects within the system, or the test program, than are apparent
in just a single test case or procedure. By analyzing test data over time, trends may appear that show certain modules to be defect prone and in need of special attention before the test program continues. Other defects that might surface include inadequate housekeeping of common data areas, inappropriate limits on input or intermediate data values, and unstated but implied requirements.
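The trend analysis described above can be as simple as counting defect reports per module across the accumulated test results. The reports and threshold below are hypothetical:

```python
# Sketch of defect-trend analysis: count defects per module over the
# accumulated reports and flag modules at or above a chosen threshold.
# Report contents and the threshold value are illustrative.

from collections import Counter

def defect_prone(reports, threshold=3):
    """Return modules whose defect count meets the threshold."""
    counts = Counter(r["module"] for r in reports)
    return sorted(m for m, n in counts.items() if n >= threshold)

reports = [
    {"test": "TC-01", "module": "pump_ctl"},
    {"test": "TC-02", "module": "pump_ctl"},
    {"test": "TC-05", "module": "pump_ctl"},
    {"test": "TC-03", "module": "tape_io"},
]
print(defect_prone(reports))   # → ['pump_ctl']
```

Modules flagged this way are candidates for special attention before testing continues.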
Software quality practitioners can play an
important role in the review and analysis of the test results. It
is not as important that software quality practitioners actually
perform the analysis as it is that they ensure adequate analysis by
those persons with the proper technical knowledge. This is an ongoing responsibility of software quality practitioners throughout the test program.
Many automated and manual test tools are available to assist in the various test activities.
A major area for the application of tools is in the
test data provision area. There are commercially available software
packages to help in the creation and insertion of test data. Test
data generators can, on the basis of parameters provided to them,
create tables, strings, or files of fixed data. These fixed data sets can then be used repeatedly as the tests are rerun.
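The essential property of such a generator is that, given the same parameters, it reproduces the same data table every run. A minimal sketch, with invented parameter names:

```python
# Sketch of a parameter-driven test data generator. The parameter
# names (count, low, high, seed) are invented for illustration.

import random

def generate_fixed_data(count, low, high, seed=42):
    """Seeded so that the same table is produced on every run."""
    rng = random.Random(seed)
    return [round(rng.uniform(low, high), 2) for _ in range(count)]

table = generate_fixed_data(count=5, low=0.0, high=99.99)
print(len(table))                                     # → 5
print(table == generate_fixed_data(5, 0.0, 99.99))    # → True: repeatable
```

Repeatability is what makes the generated table usable for regression runs as well as first-time tests.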
Another area in which tools are available is that
of data recording. Large-scale event recorders are often used to
record long or complicated interactive test data for future repeats
of the tests or for detailed test data analysis. In association with the data recorders, playback tools permit the recorded inputs to be reapplied in later test runs.
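The record-and-replay idea can be sketched in a few lines. The timestamped event format below is hypothetical:

```python
# Sketch of an event recorder and playback: interactive inputs are
# captured with timestamps during one run, then replayed in time
# order to repeat the test exactly. The event format is invented.

class EventRecorder:
    def __init__(self):
        self.log = []

    def record(self, t, event):
        self.log.append((t, event))

    def playback(self):
        """Return the events in time order for a repeat run."""
        return [e for _, e in sorted(self.log)]

rec = EventRecorder()
rec.record(2.0, "press TOTAL")
rec.record(0.5, "key MILK 2.25")
print(rec.playback())   # → ['key MILK 2.25', 'press TOTAL']
```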
Tools of much value in the test area include the simulators, stimulators, test data generators, and event recorders already described, as well as test completeness packages.
Many of these tools are commercially available. Most applications of them, however, are in the form of tools specifically designed and built for a given project or application. Some development organizations will custom-build test completeness packages that software quality practitioners will use prior to acceptance testing or, perhaps, system release. Whatever their source or application, the use of test tools is becoming more and more necessary as software systems grow in size, complexity, and criticality. Software quality practitioners should monitor the application of test tools to be sure that all appropriate use is being made of them and that they are being used correctly.
An important part of the software quality
practitioner's activity is to review the test program. As discussed
in Section 3.3.3, review of the test documentation is important. In
fact, the full test program should be reviewed regularly for
status, sufficiency, and success. Such reviews are expected to be an integral part of the major project milestone reviews throughout the SDLC. The development test documentation should be kept current and reviewed along with the rest of the test program. If a well-planned and well-documented test program is developed, these reviews are straightforward and their findings meaningful.
The documentation of the test program should extend all the way to the unit and module tests. While these tend to be more informal than the later tests, they, too, should have test cases and specific test data recorded in, at least, the UDF. The results of the unit and module tests also should be recorded. Software quality practitioners will review the results of the unit and module tests to decide, in part, whether the modules are ready for integration. There may even be cases in which the module tests are sufficient to form part of the acceptance test.