5.5 Development Process for Test Plans (TP)




This is the stage where all product development plans and specifications will be scoured, and test criteria will be gleaned from each specifications document for inclusion in a series of test cases as part of the overall master test plan. This includes data from the System Infrastructure Requirements (SIR) document, the Software Requirements Specification (SRS), the Interface Requirements Specification (IRS), and the Performance Requirements Specification (PRS).

Test cases are generally constructed to cover material from each unique specification document or plan, but they are not limited to just those documents. For example, if the IRS states that a common Windows-like interface will be used, a specific set of test cases may be developed to exercise common windowing features such as maximize, minimize, and scrolling up, down, left, and right. During test case construction, the test manager should instruct the test case developers on how to build reusable test cases. Properly designed, well-confined test cases can be reused for maximum organizational efficiency, and they are reused again for regression testing as the software progresses through its life cycle (a brief sketch of such a reusable case follows the outline below). The best reference on how to construct a test plan properly is ANSI/IEEE Standard 829-1998 for Software Test Documentation [9]. This standard recommends the following outline format for documenting a test plan:

I. Contents

II. Scope

III. Definitions

IV. Test Plan

V. Test-Design Specification

VI. Test-Case Specification

VII. Test-Procedure Specification

VIII. Test-Item Transmittal Report

IX. Test Log

X. Test-Incident Report

XI. Test-Summary Report
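Before walking through these sections, here is a minimal sketch of the kind of reusable, parameterized test case described above, using pytest's parametrize mechanism to cover the common windowing actions with a single case body. The window_under_test fixture and its methods are hypothetical stand-ins for whatever interface the IRS actually specifies.

```python
# A minimal sketch of a reusable, parameterized test case for common
# windowing features. The "window_under_test" fixture and its methods
# are hypothetical stand-ins for the real interface named in the IRS;
# only the pytest mechanics shown here are standard.
import pytest


@pytest.mark.parametrize("action, expected_state", [
    ("maximize", "maximized"),
    ("minimize", "minimized"),
    ("scroll_up", "scrolled"),
    ("scroll_down", "scrolled"),
])
def test_windowing_feature(window_under_test, action, expected_state):
    # One reusable case body covers every windowing action; adding an
    # action for regression testing is a one-line change to the table.
    getattr(window_under_test, action)()
    assert window_under_test.state == expected_state
```

Because the inputs live in a table rather than in the test body, the same case can be rerun unchanged for each regression cycle.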

As you peruse the format of the recommended contents, you can see that Sections I through III are fairly self-explanatory. We focus on Sections IV through XI in our coverage of test planning. The purpose of the test plan is to provide an overview of the testing effort needed for the product delivery to be successful. It is common for all of the unique test plans developed for product testing to be placed into a single planning document that is broken into multiple sections. The master test plan is a centralized repository to be used for all documentation relating to the testing effort. Each section of the master test plan would contain a specific type of test plan (e.g., SIR, SRS, PRS). Some organizations choose to create many separate documents and maintain each individually, but from an SPMO perspective, this is a less efficient method of document handling.

The test plan should contain a test plan identifier. This is a combination of alphanumeric characters that is most often used to cross-reference test results data in a database. An example might be SIR-NN-NN-NNN-DDD, where SIR is the abbreviation for System Infrastructure Requirements and NN-NN represents the Product Major and Minor Revision ID numbers. The next set, NNN, could be the three-digit requirement ID number taken from the SIR document, and DDD could be the Julian date on which the test plan was last modified or last executed. A sample identifier might look like this: SIR-02-01-054-167. Note that each set of alphanumerics between the dashes could serve as a sort field in a test database, making it possible to query for all SIR (or SRS, IRS, etc.) plans, for all specifications carrying the Major Revision ID of 02, and so on. The scheme is, of course, highly subjective and easily modified to meet organizational needs. The purpose of the identifier is to make results data easy to find as it relates to indexed plans.
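As a rough illustration (not part of the standard), the sketch below shows how such an identifier could be composed, parsed back into its fields, and used as query keys against a set of plans. The field names and layout are assumptions that would be adapted to the organization's own scheme.

```python
# A minimal sketch of composing and decomposing a test plan identifier
# such as SIR-02-01-054-167 so that each field can serve as a sort or
# query key in a test-results database. Field names are illustrative.
from dataclasses import dataclass


@dataclass
class TestPlanId:
    spec: str       # source specification, e.g. "SIR", "SRS", "IRS", "PRS"
    major: int      # product major revision ID
    minor: int      # product minor revision ID
    req_id: int     # requirement ID taken from the source specification
    julian: int     # Julian date of last modification or execution

    def __str__(self) -> str:
        return (f"{self.spec}-{self.major:02d}-{self.minor:02d}-"
                f"{self.req_id:03d}-{self.julian:03d}")

    @classmethod
    def parse(cls, text: str) -> "TestPlanId":
        spec, major, minor, req_id, julian = text.split("-")
        return cls(spec, int(major), int(minor), int(req_id), int(julian))


# Example query: every plan, from any specification, at major revision 02.
plans = [TestPlanId.parse(p) for p in ("SIR-02-01-054-167", "SRS-02-01-012-150")]
rev_02 = [p for p in plans if p.major == 2]
```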

Each test plan constructed should have an introductory section. In this section, it is often necessary to explain or cross-reference any relevant organizational policies and standards. The introduction should also give a high-level overview of the product being tested as it relates to the specific plan.

The next section of the test plan should cover the test items. Test items are references to software modules, features, control functions, interfaces, and so on. Each significant feature of the software should have a corresponding test item. All of the items relevant to the plan should be listed in this part of the plan. Furthermore, each item should be cross-referenced back to the design plan or specification that it came from. It is common to assign each test item a numeric ID for future reference.
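A minimal sketch of what such a test item record might look like follows; the field names and the example requirement IDs are illustrative, not drawn from any particular specification.

```python
# A minimal sketch of a test item record. Each item carries a numeric ID
# for future reference and a cross-reference back to the specification
# requirement it was derived from. Field names and values are invented.
from dataclasses import dataclass


@dataclass
class TestItem:
    item_id: int          # numeric ID assigned to the test item
    description: str      # the module, feature, or interface under test
    source_spec: str      # originating document, e.g. "SRS" or "IRS"
    source_req: str       # requirement ID within that document


items = [
    TestItem(1, "User login control function", "SRS", "SRS-3.2.1"),
    TestItem(2, "Window maximize/minimize behavior", "IRS", "IRS-4.1.7"),
]
```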

Your test plan should stipulate which features of the product are to be included or excluded from the testing process. The plan should also cover the approach used in testing the product. This approach would include any tools, techniques, processes, people, and so on that are used in the testing process. For each of the test items mentioned previously, there needs to be a section that defines pass or fail criteria and explains how a tester would make such a pass/fail determination. If a test needs to be halted and subsequently resumed, a corresponding procedure should explain the process. For example, if there is a massive power outage or network outage, does the test resume from the last successful test item, from scratch, or from somewhere else?
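One simple way to support suspension and resumption is to checkpoint the last successfully completed test item, as in the sketch below. The checkpoint file name and format are assumptions, and the execute callable stands in for whatever actually runs an item against its pass/fail criteria.

```python
# A minimal sketch of suspension and resumption handling: record the last
# successfully completed test item in a checkpoint file so that, after an
# outage, the run resumes from the next item rather than from scratch.
# The file name, format, and item structure are assumptions.
import json
from pathlib import Path

CHECKPOINT = Path("test_run_checkpoint.json")


def run_items(items, execute):
    """Run test items in order, skipping items completed before an interruption."""
    last_done = -1
    if CHECKPOINT.exists():
        last_done = json.loads(CHECKPOINT.read_text())["last_completed_item"]

    for item in items:
        if item.item_id <= last_done:
            continue  # already completed before the interruption
        execute(item)  # hypothetical callable; raises on failure
        CHECKPOINT.write_text(json.dumps({"last_completed_item": item.item_id}))
```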

The test plan should itemize all of the deliverable documents that will be produced from the testing process. It should explain which deliverables are required, when they are due, and to whom they will be presented. All preparation, execution, and closure tasks need to be identified, as do any special environmental conditions or settings that must be established in advance of or during the testing. These may include placing network load factors onto the production environment for testing, specific lab conditions, and so on.

All participant responsibilities should be clearly outlined in the plan. Staffing requirements and training needs should be addressed in the plan and coordinated in advance of the testing start date. A schedule of testing needs to be produced and coordinated with all internal and external parties interested in the testing process. Finally, when all of these things are completed, someone (usually the Project Sponsor) must approve the test plan before it can be executed.

Section V of the recommended Table of Contents for IEEE Standard 829-1998 covers test design specification. This section explains how a test item will be tested. According to the standard, each test design specification should contain a test design specification identifier. The features that are to be tested need to be documented in this section. Any refinement to the overall approach needs to be explained here. You should list the specific test case ID number that is associated with each of the features to be tested. Itemize the pass/fail criteria for each test case and explain the method of determining pass or fail for each of the criteria.
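The sketch below illustrates, in hypothetical form, the feature-to-test-case mapping a test design specification calls for; all identifiers and criteria are invented for the example.

```python
# A minimal sketch of a test design specification's mapping from features
# to the test cases that exercise them, with the criterion used to judge
# pass or fail for each. All IDs and criteria here are illustrative.
design_spec = {
    "design_spec_id": "TDS-LOGIN-01",
    "features": [
        {
            "feature": "User login",
            "test_case_ids": ["TC-101", "TC-102"],
            "pass_fail_criterion": "Valid credentials reach the home screen; "
                                   "invalid credentials are rejected with an error message.",
        },
        {
            "feature": "Password reset",
            "test_case_ids": ["TC-110"],
            "pass_fail_criterion": "Reset e-mail is sent within 60 seconds.",
        },
    ],
}
```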

Section VI of the Table of Contents for IEEE Standard 829-1998 is the place where a test case is actually defined. It should include a test case specification identifier. The test case specification should include the test items to be covered in this test, and it should list any input and output specifications or environmental needs. If any special processing requirements are needed to conduct the test, they should be listed in this section. Finally, if there are any dependencies on other test cases, they should be explained in this section of the document.
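A hypothetical test case specification record covering those elements might look like the following sketch; the values are illustrative only.

```python
# A minimal sketch of a test case specification record containing the
# elements Section VI calls for: identifier, items covered, input and
# output specifications, environmental needs, special processing, and
# dependencies on other cases. All values are invented for the example.
test_case_spec = {
    "test_case_id": "TC-101",
    "test_items": [1],                      # numeric IDs of the items covered
    "inputs": {"username": "valid_user", "password": "correct_password"},
    "expected_outputs": {"result": "home screen displayed", "error": None},
    "environmental_needs": ["staging database seeded with test accounts"],
    "special_processing": ["clear authentication cache before execution"],
    "dependencies": [],                     # other test case IDs that must run first
}
```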

Section VII of the Table of Contents for IEEE Standard 829-1998 outlines the exact procedure for executing a test case. The test procedure specification should contain a test procedure specification ID number, a short section on the purpose of the test procedure, a section to list any special requirements necessary for executing the test procedure, and, most importantly, a section that contains all of the steps involved in the procedure. The procedure steps should contain logging requirements, setup, start, execution, measurement, and shutdown procedures. Details for restarting a procedure, stopping a procedure, closing out a procedure upon successful completion, and contingency actions should also be included.
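The following sketch shows one possible skeleton for such a procedure, with the step types called out as logging statements; execute_steps is a hypothetical helper standing in for the documented steps themselves.

```python
# A minimal sketch of a test procedure skeleton following the step types
# Section VII names: logging, setup, start, execution, measurement, and
# shutdown, plus contingency handling. The step contents are placeholders;
# execute_steps is a hypothetical helper, not part of any real library.
import logging

log = logging.getLogger("TP-LOGIN-01")  # test procedure specification ID


def run_procedure(case, execute_steps):
    logging.basicConfig(level=logging.INFO)   # logging requirements
    log.info("Setup: preparing environment for %s", case["test_case_id"])
    try:
        log.info("Start/execution: running the documented steps")
        result = execute_steps(case)          # hypothetical callable for the listed steps
        log.info("Measurement: comparing result against expected outputs")
        passed = result == case["expected_outputs"]
        log.info("Outcome: %s", "PASS" if passed else "FAIL")
    except Exception:
        log.exception("Contingency: step failed; follow the documented restart procedure")
        raise
    finally:
        log.info("Shutdown: restoring environment and closing out the procedure")
```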

In Section VIII of the Table of Contents for IEEE Standard 829-1998, the Test Item Transmittal Report is presented. The Test Item Transmittal Report is used when software is submitted to the test group for testing. It should include a transmittal report ID number and a list of all items being submitted for testing. For each item, the location (e.g., file path and/or associated application directory structure) should be stated clearly. The status of the items being submitted for testing should also be stated in the report. The status should focus on what has changed since initial submission for testing. Finally, the approval for turning materials over to the test group should be completed by the development or engineering team lead and signed off on by the Project Manager.

During the testing process, you should keep a log of all tests conducted. IEEE Standard 829-1998 calls for a test log ID number and a description of the test. For each activity noted in the log, there should be entries for execution descriptions, procedure results, environmental information, unexpected events, and incident report ID numbers. Incident reports, or problem reports, should also follow the IEEE Standard 829-1998 guidelines. These call for an incident report ID number, a summary, inputs, expected results, actual results, anomalies, the date and time of the incident, the procedural step, the environmental conditions, attempts to repeat or re-create the problem, the names of the testers and observers, and the impact the incident had on the test plan or specifications.
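As an illustration, the sketch below shows test log and incident report records carrying these fields; the field names and example values are assumptions rather than templates taken from the standard.

```python
# A minimal sketch of test log and incident report records carrying the
# fields the standard asks for. Field names and values are illustrative;
# an actual implementation would follow the organization's own templates.
import datetime

test_log_entry = {
    "test_log_id": "LOG-2003-167-001",
    "description": "Execution of TC-101 (user login)",
    "execution_description": "Ran procedure TP-LOGIN-01 on staging build 02.01",
    "procedure_results": "FAIL at step 4",
    "environment": "Staging server, seeded test database",
    "unexpected_events": ["session timeout during step 3"],
    "incident_report_ids": ["IR-0042"],
}

incident_report = {
    "incident_report_id": "IR-0042",
    "summary": "Login rejected for a valid test account",
    "inputs": {"username": "valid_user"},
    "expected_results": "home screen displayed",
    "actual_results": "error: account locked",
    "anomalies": "account lock not reproduced on second attempt",
    "timestamp": datetime.datetime(2003, 6, 16, 14, 30).isoformat(),
    "procedure_step": 4,
    "environment": "Staging server, seeded test database",
    "repeat_attempts": 2,
    "testers_and_observers": ["tester A", "observer B"],
    "impact": "Blocks remaining login test cases until resolved",
}
```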

After all testing has been completed, Section XI of the IEEE Standard 829-1998 requires submission of a Test Summary Report. This is an explanation of the testing process, environment, results, and so on. It should contain a Test Summary Report ID number and a section summarizing the results of testing, explaining any variances in the testing process. An assessment of the level of comprehensiveness of testing is required. The summary of results section should discuss what problems occurred during testing and what was done to solve them. A section providing an overall analysis or evaluation of the process should be in the report. Finally, the report needs to summarize all activities from all tests, and it should be signed off on by the Test Group Manager, the Project Manager, and the Project Sponsor.

As you can see from this discussion, the conduct of testing is a rigorous and detailed process. Ensuring that this process is conducted with the best methods and by the best people improves the odds that the organization will benefit from the process. The SPMO should ensure all of the sections of the recommended standard are adhered to throughout the process. In the following sections, we cover what are considered minimum essentials for a complete testing process.


