4.2 Test planning and conduct

Testing is like any other project. It must be planned, designed, documented, reviewed, and conducted.

4.2.1 Test plans

Because proper testing is based on the software requirements, test planning starts during the requirements phase and continues throughout the SDLC. As the requirements for the software system are prepared, the original planning for the test program also gets under way. Each requirement will eventually have to be validated during the acceptance testing. The plans for how that requirement will be demonstrated are laid right at the start. In fact, one of the ways that the measurable and testable criteria for the requirements are determined is by having to plan for the test of each requirement. The test planning at this point is necessarily high level, but the general thrust of the acceptance demonstration can be laid out along with the approaches to be used for the intermediate testing.

Requirements traceability matrices (RTM), which track the requirements through design and down to the code that implements them, are used to prepare test traceability matrices (TTM). These matrices track the requirements to the tests that demonstrate software compliance with the requirements. Figure 4.2 is an example of what a test traceability matrix might look like. Each requirement, both functional and interface, is traced to the primary (P) test that demonstrates its correct implementation. In an ideal test situation, each requirement will be challenged by one specific test. This is rarely the case, but redundant testing of some requirements and the failure to test others is quickly apparent in the TTM. Also in Figure 4.2, other tests in which the requirements are involved (I) are indicated. In this way, there is some indication of the interrelationships between the various requirements. As the software matures, and requirements are modified, this matrix can offer clues as to unexpected, and usually undesirable, results if a requirement is changed or eliminated.

Figure 4.2: Test traceability matrix.
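
A TTM of this kind maps naturally onto a simple table structure. The following minimal Python sketch, using entirely hypothetical requirement and test identifiers, shows how the P and I entries can be recorded and how untested requirements, redundantly tested requirements, and the tests affected by a requirement change all fall out of the matrix directly:

    # A minimal sketch of a test traceability matrix; all requirement
    # and test identifiers here are hypothetical.
    ttm = {
        "REQ-001": {"T-01": "P", "T-03": "I"},
        "REQ-002": {"T-02": "P", "T-03": "I"},
        "REQ-003": {"T-01": "P", "T-02": "P"},  # redundantly tested
        "REQ-004": {},                          # untested
    }

    for req, tests in ttm.items():
        primaries = [t for t, role in tests.items() if role == "P"]
        if not primaries:
            print(f"{req}: no primary test -- requirement is untested")
        elif len(primaries) > 1:
            print(f"{req}: redundant primary tests {primaries}")

    def impacted_tests(req):
        """Tests to revisit if this requirement is changed or eliminated."""
        return sorted(ttm.get(req, {}))

    print("A change to REQ-002 affects:", impacted_tests("REQ-002"))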

Conflicts between requirements can sometimes be indicated in the TTM, as the I fields are completed. A common example of requirements conflict is the situation that calls for high-speed processing and efficient use of memory, as in the case of real-time, embedded software. The fastest software is written in highly linear style with little looping or calling of subroutines. Efficient use of memory calls for tight loops, subroutine calls, and other practices that tend to consume more processing time.
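
The conflict is easy to demonstrate in miniature. In the following sketch, a hypothetical illustration, a precomputed sine table gives fast, linear lookups at the cost of memory, while a series approximation computed in a tight loop conserves memory at the cost of processing time:

    # A minimal sketch of the speed-versus-memory conflict (hypothetical).
    import math

    # Fast but memory-hungry: precompute a 360-entry sine table once.
    SINE_TABLE = [math.sin(math.radians(d)) for d in range(360)]

    def sin_fast(degrees: int) -> float:
        return SINE_TABLE[degrees % 360]      # one table lookup per call

    # Memory-lean but slower: a series computed in a tight loop each call.
    def sin_lean(degrees: int, terms: int = 10) -> float:
        x = math.radians(degrees % 360)
        total, term = 0.0, x
        for n in range(terms):                # loop and repeated work, no table
            total += term
            term *= -x * x / ((2 * n + 2) * (2 * n + 3))
        return total

    print(sin_fast(30), sin_lean(30))         # both approximately 0.5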

Figure 4.2 is an example of a TTM at the system or black box testing level, since the requirements are noted as functions. As the SDLC progresses, so does the planning for the testing, and the TTM becomes more and more detailed until each specific required characteristic of the software has been challenged in at least one test at some level. Not every requirement can, or should, be tested at every level of the test program. Compliance with some can be tested at the white box level; some cannot be fully challenged until the black box testing is in progress.

The TTM is also very important as the requirements evolve throughout the development of the software system. As the requirements that form the basis for testing are changed, added, or eliminated, each change is going to likewise impact the test program. Just as the requirements are the basis for everything that follows in the development of the software, so, too, are they the drivers for the whole test program.

Some items of test planning are necessarily left until later in the SDLC. Such things as the bases for regression testing are determined during the acceptance test period as the final requirements baseline is determined. Likewise, as new requirements are determined so are the plans for testing those requirements.

Even though some parts of the test planning will be done later, the overall test plan is completed during the requirements phase. It is also, therefore, one of the subjects of the SRR at the end of the requirements phase. As the approved requirements are released for the design phase activities, the approved test plans are released to the test design personnel for the beginning of the design of test cases and procedures. Figure 4.3 depicts the flow of testing, starting with the test plan and culminating in the test reports.

Figure 4.3: Typical testing flow.

4.2.2 Test cases

The first step in function testing, and often in input/output (I/O) testing, is to construct situations that mimic actual use of the software. These situations, or test cases, should represent actual tasks that the software user might perform. These may be the same as, or similar to, the use cases sometimes used in requirements expression.

Once the test cases are developed, the software requirements that are involved in each test case are identified. A check is made against the requirements traceability matrix to be sure that each requirement is included in at least one test case. If a test case is too large or contains many requirements, it should be divided into subtest cases or scenarios. Test cases (and, when used, scenarios) should be small enough to be manageable. Limiting their size ensures that errors uncovered can be isolated with minimum delay to, and effect on, the balance of the testing.

Consider the case of testing the software in a point-of-sale terminal for a convenience store. The store stocks both grocery and fuel products. The test cases, or use cases, might be as follows:

  1. Open the store the very first time. This would test the requirements dealing with the variety of stock items to be sold, their prices, and the taxes to be applied to each item. It also includes requirements covering the setting of the initial inventory levels.

  2. Sell products. Sales of various products might be further divided into test scenarios, such as the following:

    1. Sell only fuel. This scenario includes those requirements that deal with pump control, fuel levels in the tanks, and the prices and volume of fuel sold. It also tests those requirements that cause the sale to be recorded and the register tape to be printed.

    2. Sell only grocery items. Here, the sales events are keyed, or scanned, in on the terminal rather than read from a pump register, so there are requirements being tested that are different from the preceding scenario. The sales recording requirements are probably the same.

    3. Sell both fuel and groceries. This scenario, building on the first two, causes the previous requirements to be met in one single sale. There may be additional requirements that prevent the keying of a grocery sale from adversely affecting the operation of the pump, and vice versa. It might also be necessary to consider the sequence of sales, such as fuel first and then grocery, or grocery first and then fuel. Other requirements might deal with the interaction of pump register readings with key-entered sales data. Further, a test of the ability to add pump sale charges to keyed sales charges is encountered.

    4. Provide discounts, coupons, or sales. An additional scenario might be required to test the proper processing of fleet discounts, bulk sales, cents-off coupons, and the like. Any and all complications of the simple sales activity must be tested.

  3. Restock the store. Having sold sufficient items, it becomes necessary to restock shelves and refill fuel tanks. This test case might also deal with the changing of prices and taxes and the modification of inventory levels. It can be seen as an extension of the requirements tested in test case 1.

  4. Close the store for the last time. Even the best businesses eventually close. This test case exercises the requirements involved in determining and reporting the value of the remaining inventory. Some of these same requirements might be used in tallying weekly or other periodic inventory levels for business history and planning tasks.

Should comparison of the test cases and scenarios with the RTM reveal leftover requirements, additional situations must be developed until each requirement is included in at least one test case or scenario.
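
Such a comparison can be mechanized. A minimal sketch, using hypothetical requirement and scenario names based on the store example, might look like this:

    # A minimal sketch of checking scenarios against the RTM;
    # requirement and scenario names are hypothetical.
    rtm_requirements = {"REQ-OPEN", "REQ-SELL-FUEL", "REQ-SELL-GROCERY",
                        "REQ-DISCOUNT", "REQ-RESTOCK", "REQ-CLOSE"}

    scenarios = {
        "open_store":        {"REQ-OPEN"},
        "sell_fuel_only":    {"REQ-SELL-FUEL"},
        "sell_grocery_only": {"REQ-SELL-GROCERY"},
        "sell_mixed":        {"REQ-SELL-FUEL", "REQ-SELL-GROCERY"},
        "restock":           {"REQ-RESTOCK"},
    }

    covered = set().union(*scenarios.values())
    leftover = rtm_requirements - covered
    if leftover:
        # REQ-CLOSE and REQ-DISCOUNT need additional scenarios
        print("Requirements with no test case or scenario:", sorted(leftover))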

Although this has been a simple situation, the example shows how test cases and scenarios can be developed using the actual anticipated use of the software as a basis.

4.2.3 Test procedures

As design proceeds, the test plans are expanded into specific test cases, test scenarios, and test procedures. Test procedures are the step-by-step instructions that spell out exactly how each test is to be executed. They tell which buttons to push, what data to input, what responses to look for, and what to do if the expected response is not received. The procedures also tell the tester how to process the test outputs to determine whether the test passed or failed. The test procedures are tied to the test cases and scenarios that actually exercise each approved requirement.
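
A test procedure can be recorded as a simple sequence of steps, each pairing an action with its expected response and a disposition if that response is not received. The sketch below is a hypothetical illustration of such a record and of a walk through it:

    # A minimal sketch of a step-by-step test procedure record;
    # the steps and responses are hypothetical.
    procedure = [
        # (step, action,              expected response,        on failure)
        (1, "Power on the terminal",  "READY prompt shown",     "Stop; log defect"),
        (2, "Key in item code 1234",  "Price $1.99 shown",      "Stop; log defect"),
        (3, "Press TOTAL",            "Tax added; total shown", "Continue; log defect"),
    ]

    def run(procedure, observe):
        """Walk the procedure, comparing observed and expected responses."""
        for step, action, expected, on_fail in procedure:
            actual = observe(action)   # in a real test the tester observes this
            status = "PASS" if actual == expected else f"FAIL ({on_fail})"
            print(f"Step {step}: {action} -> {status}")

    # Stub observer returning the expected responses, for illustration only.
    run(procedure, observe=lambda action: {
        "Power on the terminal": "READY prompt shown",
        "Key in item code 1234": "Price $1.99 shown",
        "Press TOTAL": "Tax added; total shown"}[action])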

The software quality practitioner reviews the test cases and scenarios, the test data, and the test procedures to assure that they all go together and follow the overall test plan and that they fully exercise all of the requirements for the software system. Figure 4.4 presents a sample of a test procedure form.

Figure 4.4: Sample test procedure form.

4.2.4 Input sources

Input of test data is the key to testing and comes from a variety of sources. Traditionally, test data inputs have been provided by test driver software or by tables of test data that are input at the proper time by an executive test control module specially written for the purpose. These methods are acceptable when the intent is to provide a large number of data values to check repetitive calculations or transaction processors. They do, however, diminish the interactive capability of the test environment: the sequential data values are presented regardless of the results of the preceding processing.
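
A minimal sketch of the table-driven approach, with a hypothetical unit under test and made-up data values, illustrates the point: the table is simply played through from top to bottom.

    # A minimal sketch of a table-driven test driver; the function
    # under test and all data values are hypothetical.
    def compute_sale_total(quantity, unit_price, tax_rate):
        return round(quantity * unit_price * (1 + tax_rate), 2)

    # Table of test data: inputs plus the documented expected result.
    test_table = [
        ((2, 1.50, 0.07), 3.21),
        ((1, 0.99, 0.00), 0.99),
        ((0, 5.00, 0.07), 0.00),
    ]

    for inputs, expected in test_table:       # values are fed sequentially,
        actual = compute_sale_total(*inputs)  # regardless of earlier results
        print(inputs, "->", actual, "PASS" if actual == expected else "FAIL")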

As the software system being tested becomes more complex, particularly in the case of interactive computing, a more flexible type of test environment is needed. Test software packages called simulators, which perform in the same manner as some missing piece of hardware or other software, are frequently used. These can be written to represent everything from a simple interfacing software unit to a complete spacecraft or radar installation. As data are received from the simulator and the results returned to it, the simulator is programmed to respond with new input based on the results of the previous calculations of the system under test.
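
The following sketch, a hypothetical fuel-pump simulator with an invented command protocol, illustrates the idea: the simulator keeps its own state and shapes each response to the previous command from the system under test.

    # A minimal sketch of a simulator standing in for a missing hardware
    # unit; the pump protocol shown is hypothetical.
    class PumpSimulator:
        """Responds like a fuel pump, basing each response on the
        previous output of the system under test."""
        def __init__(self, tank_level=100.0):
            self.tank_level = tank_level

        def respond(self, command):
            if command.startswith("DISPENSE"):
                gallons = min(float(command.split()[1]), self.tank_level)
                self.tank_level -= gallons
                return f"DISPENSED {gallons:.1f}"
            if command == "READ_TANK":
                return f"LEVEL {self.tank_level:.1f}"
            return "ERROR UNKNOWN COMMAND"

    pump = PumpSimulator(tank_level=10.0)
    print(pump.respond("DISPENSE 8"))   # DISPENSED 8.0
    print(pump.respond("READ_TANK"))    # LEVEL 2.0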

Another type of test software is a stimulator, which represents an outside software or hardware unit that presents input data independently from the activities of the system under test. An example might be the input of a warning message that interrupts the processing of the system under test and forces it to initiate emergency measures to deal with the warning.
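
A stimulator, by contrast, fires on its own schedule. In this hypothetical sketch, a background thread injects a warning message into the input queue of the system under test without regard to what that system is doing:

    # A minimal sketch of a stimulator injecting an asynchronous warning;
    # the message and timing are hypothetical.
    import queue, threading, time

    events = queue.Queue()

    def stimulator():
        time.sleep(0.5)                    # fires on its own schedule
        events.put("WARNING: OVERTEMP")    # not a response to any output

    threading.Thread(target=stimulator, daemon=True).start()

    for _ in range(5):                     # main loop of the system under test
        try:
            msg = events.get(timeout=0.2)
            print("Interrupt received, initiating emergency measures:", msg)
            break
        except queue.Empty:
            print("normal processing...")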

The final step in the provision of interactive inputs is the use of a keyboard or terminal that is being operated by a test user. Here, the responses to the processing by the system under test are (subject to the constraints of the test procedures) the same as they will be in full operation.

Each type of data input fulfills a specific need as called out in the test documentation. The software quality practitioner will review the various forms of test data inputs to be sure that they meet the needs of the test cases and that the proper provisions have been made for the acquisition of the simulators, stimulators, live inputs, and so on.

4.2.5 Expected results

Documentation of expected results is necessary so that actual results may be evaluated to demonstrate test success or failure. The bottom line in any test program is the finding of defects and the demonstration that the software under test satisfies its requirements. Unless the expected results of each test are documented, there is no way to tell if the test has done what the test designer intended. Each test case is expected to provide the test data to be input for it. In the same way, each test case must provide the correct answer that should result from the input of the data.

Expected results may be of various sorts. The most common, of course, is simply the answer expected when a computation operates on a given set of numbers. Another type of expected result is the lighting or extinguishing of a light on a console. Many combinations of these two results may also occur, such as the appearance of a particular screen display, the starting of a motor, the initiation of an allied software system, or even the abnormal end of the system under test when a particular illegal function has been input, for example, an invalid password into a software security system.
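
One way to document such varied expectations is to record, for each test, the expected result together with the comparison rule to be applied to it. The following sketch uses hypothetical test identifiers, results, and rules:

    # A minimal sketch of documenting expected results of different
    # kinds; identifiers, values, and rules are hypothetical.
    expected_results = [
        # (test id, description,           expected,       comparison rule)
        ("T-01", "sale total computation", 3.21,           "numeric, +/-0.005"),
        ("T-02", "pump-enable indicator",  "LIGHT ON",     "exact match"),
        ("T-03", "invalid password entry", "ABNORMAL END", "exact match"),
    ]

    def passed(expected, actual, rule):
        if rule.startswith("numeric"):
            return abs(actual - expected) <= 0.005
        return actual == expected

    print(passed(3.21, 3.214, "numeric, +/-0.005"))        # True
    print(passed("LIGHT ON", "LIGHT OFF", "exact match"))  # False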

It is the responsibility of the software quality practitioner to review the test documentation to ensure that each test has an associated set of expected results. Also present must be a description of any processing of the actual results so that they may be compared with the expected results and a pass/fail determination made for the test.

4.2.6 Test analysis

Test analysis involves more than pass/fail determination. Analysis of the expected versus actual results of each test provides the pass or fail determination for that test. There may be some intermediate processing necessary before the comparison can be made, however. In a case in which previous real sales data are used to check out a new inventory system, some adjustments to the actual results may be necessary to allow for the dating of the input data or for the absence of some allied software system that was not cost effective to simulate. In any case, the pass/fail criteria are applied to the expected and received results, and the success of the test is determined.

Other beneficial analysis of the test data is possible and appropriate. As defects are found during the testing, or as certain tests continue to fail, clues may arise pointing to larger defects within the system, or within the test program, than are apparent in any single test case or procedure. Analysis of test data over time may reveal trends showing that certain modules are defect prone and need special attention before the test program continues. Other defects that might surface include inadequate housekeeping of common data areas, inappropriate limits on input or intermediate data values, unstated but implied requirements that need to be added and specifically addressed, design errors, sections of software that are never used or cannot be reached, erroneous expected results, and so on.
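
As a hypothetical illustration, failure counts accumulated per module over the course of the test program can flag defect-prone modules:

    # A minimal sketch of trend analysis over test results; the module
    # names and outcomes are hypothetical.
    from collections import Counter

    # (module, passed?) outcomes accumulated over the test program
    results = [("billing", False), ("billing", False), ("pumps", True),
               ("billing", False), ("pumps", True), ("inventory", False)]

    failures = Counter(mod for mod, ok in results if not ok)
    runs = Counter(mod for mod, _ in results)

    for mod in runs:
        rate = failures[mod] / runs[mod]
        flag = "  <- defect prone; review before testing continues" if rate > 0.5 else ""
        print(f"{mod}: {failures[mod]}/{runs[mod]} failures{flag}")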

Software quality practitioners can play an important role in the review and analysis of the test results. It is not as important that software quality practitioners actually perform the analysis as it is that they ensure adequate analysis by those persons with the proper technical knowledge. This responsibility of software quality practitioners is discharged through careful review of the test results and conclusions as those results are published.

4.2.7 Test tools

Many automated and manual test tools are available to assist in the various test activities.

A major area for the application of tools is in the test data provision area. There are commercially available software packages to help in the creation and insertion of test data. Test data generators can, on the basis of parameters provided to them, create tables, strings, or files of fixed data. These fixed data can, in turn, be input either by the test data generator itself or by any of several test input tools. General-purpose simulators can be programmed to behave like certain types of hardware or software systems or units. Stimulators that provide synchronous or asynchronous interrupts or messages are also available. It is more likely, though, that most of these tools will be created in-house rather than obtained outside so they can be tailored to the test application at hand.
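
A minimal sketch of a parameter-driven test data generator, with an invented record layout, shows the general shape of such a tool; seeding the generator keeps the "fixed" data reproducible from run to run:

    # A minimal sketch of a test data generator; the parameters and
    # record layout are hypothetical.
    import random

    def generate_records(count, price_range=(0.25, 9.99), seed=42):
        """Create a fixed table of sale records from the given parameters."""
        rng = random.Random(seed)          # seeded so the data are reproducible
        return [{"item": f"SKU{n:04d}",
                 "qty": rng.randint(1, 10),
                 "price": round(rng.uniform(*price_range), 2)}
                for n in range(count)]

    for record in generate_records(3):
        print(record)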

Another area in which tools are available is that of data recording. Large-scale event recorders are often used to record long or complicated interactive test data for future repeats of the tests or for detailed test data analysis. In association with the data recorders are general- and specific-purpose data reduction packages. Large volumes of data are often sorted and categorized so that individual analyses may be made of particular areas of interest. Some very powerful analysis packages are commercially available, providing computational and graphical capabilities that can be of great assistance in the analysis of test results and trend determination.

Tools of much value in the test area include path analyzers. These tools monitor the progress of the test program and track the exercising of the various paths through the software. While it is impossible to execute every path through a software system of more than a few steps, it is possible to exercise every decision point and each segment of code. (A segment in this context means the code between two successive decision points.) A path analyzer will show all software that has been executed at least once, point out any software that has not been exercised, and clearly indicate those code segments that cannot be reached at all (as in the case of a subroutine that never gets called or a decision point that cannot take one branch for some reason).
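
The underlying bookkeeping is straightforward, as the following hypothetical sketch shows: each branch of each decision point marks its segment when executed, and unmarked segments stand out at the end of the test run.

    # A minimal sketch of segment-coverage tracking in the style of a
    # path analyzer; the unit under test and segments are hypothetical.
    executed = set()

    def mark(segment):
        executed.add(segment)

    def classify_sale(fuel, grocery):      # the unit under test, instrumented
        if fuel:
            mark("fuel-branch")
        else:
            mark("no-fuel-branch")
        if grocery:
            mark("grocery-branch")
        else:
            mark("no-grocery-branch")

    ALL_SEGMENTS = {"fuel-branch", "no-fuel-branch",
                    "grocery-branch", "no-grocery-branch"}

    classify_sale(fuel=True, grocery=False)    # run the test cases
    classify_sale(fuel=True, grocery=True)

    print("executed:", sorted(executed))
    print("never exercised:", sorted(ALL_SEGMENTS - executed))  # no-fuel-branch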

Many of these tools are commercially available. Most applications of them, however, are in the form of tools specifically designed and built for a given project or application. Some development organizations will custom-build test completeness packages that software quality practitioners will use prior to acceptance testing or, perhaps, system release. Whatever their source or application, the use of test tools is becoming more and more necessary as software systems grow in size, complexity, and criticality. Software quality practitioners should monitor the application of test tools to be sure that all appropriate use is being made of them and that they are being used correctly.

4.2.8 Reviewing the test program

An important part of the software quality practitioner's activity is to review the test program. As discussed in Section 3.3.3, review of the test documentation is important. In fact, the full test program should be reviewed regularly for status, sufficiency, and success. Such reviews are expected to be an integral part of the major phase-end reviews, as explained in Section 3.1.2. It is reasonable to hold less formal, in-process reviews of the test program as testing progresses and more of the software system becomes involved.

The test documentation developed along the way permits this review of the whole test approach as it is formulated. Without a documented approach to the problems of testing the software, the testing tends to become haphazard and undisciplined. There is a strong tendency on the part of many project managers to commit to a firm delivery date. If the project gets behind schedule, the slippage is usually made up by shortening the test phase to fit the time remaining, adding more testers, or both. Shortening the test phase also happens in the case of budget problems. The great, mythical woodsman Paul Bunyan told of a tree so high that it took a man a full week to see its top; with six of his friends helping, though, they could see it in one day. That probably would not work in the real world, and adding more testers is usually not a solution to a test phase that is too short, either.

If a well-planned and well-documented test program is developed, the temptation to shorten the testing effort to make up for other problems is reduced. By having a software quality practitioner review and approve the documentation of the test program, there is even more impetus to maintain the program's integrity.

The documentation of the test program should extend all the way to the unit and module tests. While these tend to be more informal than the later tests, they, too, should have test cases and specific test data recorded in, at least, the unit development folder (UDF). The results of the unit and module tests also should be recorded. Software quality practitioners will review the results of the unit and module tests to decide, in part, whether the modules are ready for integration. There may even be cases in which the module tests are sufficient to form part of the acceptance test.


