Scripted Testing

Jane was toast, and not the light buttery kind, nay, she was the kind that's been charred and blackened in the bottom of the toaster and has to be thrown away because no matter how much of the burnt part you scrape off with a knife, there's always more blackened toast beneath, the kind that not even starving birds in winter will eat, that kind of toast.

— Beth Knutson

Introduction

Scripted testing is best understood in its historical context. Scripted testing emerged as one of the components of the Waterfall model of software development. The Waterfall model defines a number of sequential development phases with specific entry and exit criteria, tasks to be performed, and deliverables (tangible work products) to be created. It is a classic example of the "plan your work, work your plan" philosophy. Typical Waterfall phases include:

  1. System Requirements - Gathering the requirements for the system.
  2. Software Requirements - Gathering the requirements for the software portion of the system.
  3. Requirements Analysis - Analyzing, categorizing, and refining the software requirements.
  4. Program Design - Choosing architectures, modules, and interfaces that define the system.
  5. Coding - Writing the programming code that implements the design.
  6. Testing - Evaluating whether the requirements were properly understood (Validation) and the design properly implemented by the code (Verification).
  7. Operations - Putting the system into production.

Interesting Trivia

A Google search for "plan your work" and "work your plan" found 3,570 matches including:

  • Football recruiting
  • Business planning
  • Building with concrete blocks
  • Online marketing
  • Industrial distribution
  • Princeton University's Women's Water Polo Team
  • And thousands more

This model was first described in 1970 in a paper entitled "Managing the Development of Large Software Systems" by Dr. Winston W. Royce. Royce drew the following diagram showing the relationships between development phases:

Figure 12-1: The Waterfall life cycle model.

What process was used before Waterfall? It was a process known as "Code & Fix." Programmers simply coded. Slogans like "Requirements? Requirements? We don't need no stinkin' Requirements!" hung on the walls of programmers' offices. Development was like the scene in the movie Raiders of the Lost Ark. Our hero, Indiana Jones, is hiding from the bad guys. Indy says, "I'm going to get that truck." Marion, our heroine, turns to him and asks, "How are you going to get that truck?" Indy replies, "I don't know. I'm making this up as I go." If we substituted "build that system" for "get that truck" we'd have the way real men and real women built software systems in the good old days.

Curious Historical Note

Today, Winston Royce is known as the father of the Waterfall model of software development. In fact, in his paper he was actually proposing an iterative and incremental process that included early prototyping - something many organizations are just now discovering.

Today we take a different view of scripted testing. Any development methodology along the spectrum from Waterfall to Rapid Application Development (RAD) may use scripted testing. Whenever repeatability, objectivity, and auditability are important, scripted testing can be used.

Repeatability means that there is a definition of a test (from design through to detailed procedure) at a level of detail sufficient for someone other than the author to execute it in an identical way. Objectivity means that the test creation does not depend on the extraordinary (near magical) skill of the person creating the test but is based on well-understood test design principles. Auditability includes traceability from requirements, design, and code to the test cases and back again. This enables formal measures of testing coverage.
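
Traceability is easier to see with a small example. The following sketch (in Python, with hypothetical requirement and test case identifiers) records which test cases trace to which requirements, derives the reverse trace, and computes a simple requirements coverage measure. The identifiers and the coverage calculation are illustrative assumptions, not part of any standard.

    # A minimal sketch of requirements-to-test-case traceability.
    # The requirement IDs (REQ-*) and test case IDs (TC-*) are hypothetical.

    # Forward trace: each requirement and the test cases that exercise it.
    trace = {
        "REQ-01": ["TC-01", "TC-02"],
        "REQ-02": ["TC-03"],
        "REQ-03": [],  # no test cases yet, so this is a coverage gap
    }

    # Backward trace: each test case and the requirements it exercises.
    reverse_trace = {}
    for req, cases in trace.items():
        for case in cases:
            reverse_trace.setdefault(case, []).append(req)

    covered = [req for req, cases in trace.items() if cases]
    coverage = len(covered) / len(trace)

    print(f"Requirements coverage: {coverage:.0%}")            # 67%
    print("Untested requirements:", [r for r in trace if not trace[r]])
    print("TC-01 traces back to:", reverse_trace["TC-01"])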

"Plan your work, work your plan." No phrase so epitomizes the scripted testing approach as does this one, and no document so epitomizes the scripted testing approach as does IEEE Std 829-1998, the "IEEE Standard for Software Test Documentation."

This standard defines eight documents that can be used in software testing. These documents are:

  • Test plan
  • Test design specification
  • Test case specification
  • Test procedure specification
  • Test item transmittal report
  • Test log
  • Test incident report
  • Test summary report

Figure 12-2 shows the relationships between these documents. Note that the first four documents (the test plan, test design, test case, and test procedure specifications) are all created before the product is developed and the actual testing begins. This is a key idea in scripted testing—plan the tests based on the formal system requirements.

Figure 12-2: The IEEE 829 Test Documents

Curiously, the IEEE 829 standard states, "This standard specifies the form and content of individual test documents. It does not specify the required set of test documents." In other words, the standard does not require you to create any of the documents described. That choice is left to you as a tester, or to your organization. But, the standard requires that if you choose to write a test plan, test case specification, etc., that document must follow the IEEE 829 standard.

The IEEE 829 standard lists these advantages for its use:

  • "A standardized test document can facilitate communication by providing a common frame of reference.
  • The content definition of a standardized test document can serve as a completeness checklist for the associated testing process.
  • A standardized set can also provide a baseline for the evaluation of current test documentation practices.
  • The use of these documents significantly increases the manageability of testing. Increased manageability results from the greatly increased visibility of each phase of the testing process."


IEEE 829 Document Descriptions

The IEEE 829 standard defines eight different documents. Each document is composed of a number of sections.

Test Plan

  • The purpose of the test plan is to describe the scope, approach, resources, and schedule of the testing activities. It describes the items (components) and features (functionality, performance, security, usability, etc.) to be tested, tasks to be performed, deliverables (tangible work products) to be created, testing responsibilities, schedules, and approvals required. Test plans can be created at the project level (master test plan) or at subsidiary levels (unit, integration, system, acceptance, etc.). The test plan is composed of the following sections (a brief sketch follows the list):

    1. Test plan identifier - A unique identifier so that this document can be distinguished from all other documents.
    2. Introduction - A summary of the software to be tested. A brief description and history may be included to set the context. References to other relevant documents useful for understanding the test plan are appropriate. Definitions of unfamiliar terms may be included.
    3. Test items - Identifies the software items that are to be tested. The word "item" is purposely vague. It is a "chunk" of software that is the object of testing.
    4. Features to be tested - Identifies the characteristics of the items to be tested. These include functionality, performance, security, portability, usability, etc.
    5. Features not to be tested - Identifies characteristics of the items that will not be tested and the reasons why.
    6. Approach - The overall approach to testing that will ensure that all items and their features will be adequately tested.
    7. Item pass/fail criteria - The criteria used to determine whether each test item has passed or failed testing.
    8. Suspension criteria and resumption requirements - The conditions under which testing will be suspended and the subsequent conditions under which testing will be resumed.
    9. Test deliverables - Identifies the documents that will be created as a part of the testing process.
    10. Testing tasks - Identifies the tasks necessary to perform the testing.
    11. Environmental needs - Specifies the environment required to perform the testing including hardware, software, communications, facilities, tools, people, etc.
    12. Responsibilities - Identifies the people/groups responsible for executing the testing tasks.
    13. Staffing and training needs - Specifies the number and types of people required to perform the testing, including the skills needed.
    14. Schedule - Defines the important key milestones and dates in the testing process.
    15. Risks and contingencies - Identifies high-risk assumptions of the testing plan. Specifies prevention and mitigation plans for each.
    16. Approvals - Specifies the names and titles of each person who must approve the plan.
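
To make these sixteen sections concrete, here is a minimal sketch of a test plan skeleton captured as structured data in Python. Only the section names come from IEEE 829; the project, items, people, and dates are hypothetical placeholders.

    # A skeletal test plan as a Python dictionary. Section names follow
    # IEEE 829; every value is a hypothetical placeholder.
    test_plan = {
        "test_plan_identifier": "TP-2003-001",
        "introduction": "System test plan for the hypothetical SomeApp 2.0 release.",
        "test_items": ["SomeApp server 2.0", "SomeApp client 2.0"],
        "features_to_be_tested": ["login", "order entry", "reporting"],
        "features_not_to_be_tested": {"printing": "unchanged since release 1.5"},
        "approach": "Scripted testing using equivalence class and boundary value techniques.",
        "item_pass_fail_criteria": "All severity 1 and 2 incidents resolved.",
        "suspension_and_resumption": "Suspend on a failed smoke test; resume on a passing build.",
        "test_deliverables": ["test design specs", "test case specs", "test logs",
                              "test incident reports", "test summary report"],
        "testing_tasks": ["design tests", "build test data", "execute tests", "report results"],
        "environmental_needs": ["two test servers", "test database", "defect tracking tool"],
        "responsibilities": {"test lead": "J. Smith", "test execution": "system test team"},
        "staffing_and_training_needs": "Three testers, one trained on the reporting module.",
        "schedule": {"test design complete": "2003-05-01", "test execution complete": "2003-06-15"},
        "risks_and_contingencies": "Late requirements changes; re-plan the affected tests.",
        "approvals": ["Project Manager", "QA Manager"],
    }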

Test Design Specification

  • The purpose of the test design specification is to identify a set of features to be tested and to describe a group of test cases that will adequately test those features. In addition, refinements to the approach listed in the test plan may be specified. The test design specification is composed of the following sections (a brief sketch follows the list):

    1. Test design specification identifier - A unique identifier so that this document can be distinguished from all other documents.
    2. Features to be tested - Identifies the test items and the features that are the object of this test design specification.
    3. Approach refinements - Specifies the test techniques to be used for this test design.
    4. Test identification - Lists the test cases associated with this test design. Provides a unique identifier and a short description for each test case.
    5. Feature pass/fail criteria - The criteria used to determine whether each feature has passed or failed testing.
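
Continuing the sketch, a test design specification for one feature of the hypothetical SomeApp might look like this; the feature, the techniques chosen, and the test case identifiers are all assumptions made for illustration.

    # A skeletal test design specification, following the section names above.
    # The feature and the test case identifiers are hypothetical.
    test_design_spec = {
        "test_design_specification_identifier": "TDS-LOGIN-001",
        "features_to_be_tested": ["SomeApp 2.0 login"],
        "approach_refinements": "Equivalence class partitioning on user ID and password; "
                                "boundary value analysis on password length.",
        "test_identification": {
            "TC-01": "Valid user ID and valid password",
            "TC-02": "Valid user ID and invalid password",
            "TC-03": "Passwords at the minimum and maximum allowed lengths",
        },
        "feature_pass_fail_criteria": "All listed test cases pass; no open severity 1 incidents.",
    }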

Test Case Specification

  • The purpose of the test case specification is to specify in detail each test case listed in the test design specification. The test case specification is composed of the following sections (a brief sketch follows the list):

    1. Test case specification identifier - A unique identifier so that this document can be distinguished from all other documents.
    2. Test items - Identifies the items and features to be tested by this test case.
    3. Input specifications - Specifies each input required by this test case.
    4. Output specifications - Specifies each output expected after executing this test case.
    5. Environmental needs - Any special hardware, software, facilities, etc. required for the execution of this test case that were not listed in its associated test design specification.
    6. Special procedural requirements - Defines any special setup, execution, or cleanup procedures unique to this test case.
    7. Intercase dependencies - Lists any test cases that must be executed prior to this test case.
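
And here is a matching sketch for one of the cases listed in that design specification, TC-02; again, every concrete value is a hypothetical placeholder.

    # A skeletal test case specification for the hypothetical case TC-02.
    test_case_spec = {
        "test_case_specification_identifier": "TC-02",
        "test_items": ["SomeApp 2.0 login screen"],
        "input_specifications": {"user_id": "jsmith", "password": "wrong-password"},
        "output_specifications": {
            "result": "login rejected",
            "message": "Invalid user ID or password",
        },
        "environmental_needs": "Test database loaded with the standard user accounts.",
        "special_procedural_requirements": "None.",
        "intercase_dependencies": ["TC-01"],  # run the valid-login case first
    }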

Test Procedure Specification

  • The purpose of the test procedure specification is to specify the steps for executing a test case and the process for determining whether the software passed or failed the test. The test procedure specification is composed of the following sections (a brief sketch follows the list):

    1. Test procedure specification identifier - A unique identifier so that this document can be distinguished from all other documents.
    2. Purpose - Describes the purpose of the test procedure and its corresponding test cases.
    3. Special requirements - Lists any special requirements for the execution of this test procedure.
    4. Procedure steps - Lists the steps of the procedure. Possible steps include: Set up, Start, Proceed, Measure, Shut Down, Restart, Stop, and Wrap Up.
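
Because the procedure spells out its steps in order, a scripted procedure translates almost directly into an automated test. The sketch below maps the step names (Set up, Proceed, Measure, Wrap up) onto a Python unittest case for the hypothetical TC-02 above. The SomeAppClient class is a stand-in written only so the example runs; it is not part of any real application or library.

    import unittest
    from collections import namedtuple

    # Stand-in for the application under test. In real use this object would
    # drive the hypothetical SomeApp login screen; it exists here only so the
    # sketch is runnable.
    LoginResult = namedtuple("LoginResult", ["succeeded", "message"])

    class SomeAppClient:
        VALID_ACCOUNTS = {"jsmith": "correct-password"}

        def login(self, user_id, password):
            if self.VALID_ACCOUNTS.get(user_id) == password:
                return LoginResult(True, "Welcome")
            return LoginResult(False, "Invalid user ID or password")

        def close(self):
            pass

    class TestProcedureLogin(unittest.TestCase):
        """TP-LOGIN-001: procedure steps for the hypothetical test case TC-02."""

        def setUp(self):
            # Set up: establish the environment called for by the test plan.
            self.client = SomeAppClient()

        def test_tc02_invalid_password_is_rejected(self):
            # Proceed: execute the scripted inputs from the test case specification.
            result = self.client.login(user_id="jsmith", password="wrong-password")
            # Measure: compare the actual results against the expected outputs.
            self.assertFalse(result.succeeded)
            self.assertEqual(result.message, "Invalid user ID or password")

        def tearDown(self):
            # Wrap up: restore the environment for the next procedure.
            self.client.close()

    if __name__ == "__main__":
        unittest.main()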

Test Item Transmittal Report (a.k.a. Release Notes)

  • The purpose of the test item transmittal report is to specify the test items being provided for testing. The test item transmittal report is composed of the following sections:

    1. Transmittal report identifier - A unique identifier so that this document can be distinguished from all other documents.
    2. Transmitted items - Lists the items being transmitted for testing including their version or revision level.
    3. Location - Identifies the location of the transmitted items.
    4. Status - Describes the status of the items being transmitted. Include any deviations from the item's specifications.
    5. Approvals - Specifies the names and titles of all persons who must approve this transmittal.

Test Log

  • The purpose of the test log is to provide a chronological record of relevant details observed during the test execution. The test log is composed of the following sections (a brief sketch follows the list):

    1. Test log identifier - A unique identifier so that this document can be distinguished from all other documents.
    2. Description - Identifies the items being tested and the environment under which the test was performed.
    3. Activity and event entries - For each event, lists the beginning and ending date and time, a brief description of the test execution, the results of the test, and unique environmental information, anomalous events observed, and the incident report identifier if an incident was logged.
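
A test log lends itself to a simple chronological record. The sketch below appends one activity/event entry per execution, each with a timestamp, a result, and an optional incident report reference; all identifiers and results are hypothetical.

    from datetime import datetime

    # A minimal chronological test log; identifiers and results are hypothetical.
    test_log = {
        "test_log_identifier": "LOG-2003-06-02",
        "description": "System test of SomeApp 2.0 login on test-server-1.",
        "entries": [],
    }

    def log_event(procedure_id, result, incident_id=None):
        """Append one activity/event entry with a timestamp."""
        test_log["entries"].append({
            "timestamp": datetime.now().isoformat(timespec="seconds"),
            "procedure": procedure_id,
            "result": result,
            "incident_report": incident_id,
        })

    log_event("TP-LOGIN-001", "pass")
    log_event("TP-LOGIN-002", "fail", incident_id="IR-042")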

Test Incident Report (a.k.a. Bug Report)

  • The purpose of the test incident report is to document any event observed during testing that requires further investigation. The test incident report is composed of the following sections (a brief sketch follows the list):

    1. Test incident report identifier - A unique identifier so that this document can be distinguished from all other documents.
    2. Summary - Summarizes the incident.
    3. Incident description - Describes the incident in terms of inputs, expected results, actual results, environment, attempts to repeat, etc.
    4. Impact - Describes the impact this incident will have on other test plans, test design specifications, test procedures, and test case specifications. Also describes, if known, the impact this incident will have on further testing.
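
Here is a matching sketch of the IR-042 incident referenced in the log above; the defect, build number, and environment are all invented for illustration.

    # A skeletal test incident (bug) report; every detail is hypothetical.
    incident_report = {
        "test_incident_report_identifier": "IR-042",
        "summary": "Login accepted a password longer than the documented maximum.",
        "incident_description": {
            "inputs": {"user_id": "jsmith", "password": "x" * 65},
            "expected_results": "Password rejected; the documented maximum is 64 characters.",
            "actual_results": "Login succeeded.",
            "environment": "SomeApp 2.0 build 118 on test-server-1",
            "attempts_to_repeat": "Reproduced three times in succession.",
        },
        "impact": "Blocks the remaining boundary value cases for the login feature.",
    }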

Test Summary Report

  • The purpose of the test summary report is to summarize the results of the testing activities and to provide an evaluation based on these results. The test summary report is composed of the following sections:

    1. Test summary report identifier - A unique identifier (imagine that!) so that this document can be distinguished from all other documents.
    2. Summary - Summarizes the evaluation of the test items.
    3. Variance - Reports any variances from the expected results.
    4. Comprehensive assessment - Evaluates the overall comprehensiveness of the testing process itself against criteria specified in the test plan.
    5. Summary of results - Summarizes the results of the testing. Identifies all unresolved incidents.
    6. Evaluation - Provides an overall evaluation of each test item including its limitations.
    7. Summary of activities - Summarizes the major testing activities by task and resource usage.
    8. Approvals - Specifies the names and titles of each person who must approve the report.


Advantages of Scripted Testing

  1. Scripted testing provides a division of labor: planning, test case design, test case implementation, and test case execution can be performed by people with specific skills and at different times during the development process.
  2. Test design techniques such as equivalence class partitioning, boundary value testing, control flow testing, pairwise testing, etc. can be integrated into a formal testing process description that not only guides our testing but that could also be used to audit for process compliance (a brief sketch follows this list).
  3. Because scripted tests are created from requirements, design, and code, all important attributes of the system will be covered by tests and this coverage can be demonstrated.
  4. Because the test cases can be traced back to their respective requirements, design, and code, coverage can be clearly defined and measured.
  5. Because the tests are documented, they can be easily understood and repeated when necessary without additional test analysis or design effort.
  6. Because the tests are defined in detail, they are more easily automated.
  7. Because the tests are created early in the development process, this may free up additional time during the critical test execution period.
  8. In situations where a good requirements specification is lacking, the test cases, at the end of the project, become the de facto requirements specification, including the results that demonstrate which requirements were actually fulfilled and which were not.
  9. Scripted tests, when written to the appropriate level of detail, can be run by people who would otherwise not be able to test the system because of lack of domain knowledge or lack of testing knowledge.
  10. You may have special contractual requirements that can only be met by scripted testing.
  11. There may be certain tests that must be executed in just the same way, every time, in order to serve as a kind of benchmark.
  12. By creating the tests early in the project we can discover what we don't know.
  13. By creating the tests early we can focus on the "big picture."
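
As a small illustration of point 2 in the list above, the sketch below derives the classic boundary value cases for a hypothetical rule (passwords must be 8 to 64 characters long) and checks them against a stand-in validator. Both the rule and the validator are assumptions made only for illustration.

    # Boundary value testing for a hypothetical rule: passwords must be
    # 8 to 64 characters long. The validator stands in for the real system.
    MIN_LEN, MAX_LEN = 8, 64

    def password_length_ok(password: str) -> bool:
        return MIN_LEN <= len(password) <= MAX_LEN

    # Classic boundary values: just below, at, and just above each boundary.
    boundary_cases = [
        (MIN_LEN - 1, False),
        (MIN_LEN,     True),
        (MIN_LEN + 1, True),
        (MAX_LEN - 1, True),
        (MAX_LEN,     True),
        (MAX_LEN + 1, False),
    ]

    for length, expected in boundary_cases:
        actual = password_length_ok("x" * length)
        status = "pass" if actual == expected else "FAIL"
        print(f"length {length:3d}: expected {expected}, got {actual} -> {status}")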

In his book Software System Testing and Quality Assurance, Boris Beizer summarizes in this way:

"Testing is like playing pool. There's real pool and kiddie pool. In kiddie pool, you hit the balls and whatever pocket they happen to fall into, you claim as the intended pocket. It's not much of a game and although suitable to ten-year-olds it's hardly a challenge. The object of real pool is to specify the pocket in advance. Similarly for testing. There's real testing and kiddie testing. In kiddie testing, the tester says, after the fact, that the observed outcome was the intended outcome. In real testing the outcome is predicted and documented before the test is run."


Disadvantages of Scripted Testing

  1. Scripted testing is very dependent on the quality of the system's requirements. Will the requirements really be complete, consistent, unambiguous, and stable enough as the foundation for scripted testing? Perhaps not.
  2. Scripted testing is, by definition, inflexible. It follows the script. If, while testing, we see something curious, we note it in a Test Incident Report but we do not pursue it. Why not? Because it is not in the script to do so. Many interesting defects could be missed with this approach.
  3. Scripted testing is often used to "de-skill" the job of testing. The approach seems to be, "Teach a tester a skill or two and send them off to document mountains of tests. The sheer bulk of the tests will probably find most of the defects."


Summary

  • "Plan your work, work your plan." Like the Waterfall model, no phrase so epitomizes the scripted testing approach as does this one, and no document so epitomizes the scripted testing approach as does IEEE Std 829-1998, the "IEEE Standard for Software Test Documentation."
  • The IEEE Standard 829 defines eight documents that can be used in software testing. These documents are: test plan, test design specification, test case specification, test procedure specification, test item transmittal report, test log, test incident report, and test summary report.
  • The advantages of scripted testing include formal documentation, coverage, and traceability.


References

Beizer, Boris (1984). Software System Testing and Quality Assurance. Van Nostrand Reinhold.

"IEEE Standard for Software Test Documentation," IEEE Std 829-1998. The Institute of Electrical and Electronics Engineers, Inc. ISBN 0-7381-1443-X

Royce, Winston W. "Managing the Development of Large Software Systems," Proceedings of the 9th International Conference on Software Engineering, Monterey, CA, IEEE Computer Society Press, Los Alamitos, CA, 1987. http://www.ipd.bth.se/uodds/pd&agile/royce.pdf



