Master Test Planning

Overview

"Make no little plans; they have no magic to stir men's blood."

— Daniel Hudson Burnham

"Plans must be simple and flexible. Actually, they only form a datum plane from which you build as necessity directs or opportunity offers. They should be made by the people who are going to execute them."

— George S. Patton

Test planning is one of the keys to successful software testing, yet it's frequently omitted due to time constraints, lack of training, or cultural bias. A survey taken at a recent STAR conference showed that 81% of the companies participating in the survey completed test plans. That doesn't sound too bad, but our experience has shown that many of those 81% are calling the testing schedule the test plan, so the actual percentage is probably much less. Testing without a plan is analogous to developing software without a project plan and generally occurs for the same reason - pressure to begin coding (or in this case, testing) as soon as possible. Many organizations measure progress in development by modules completed or lines of code delivered, and in testing by the number of test cases run. While these can be valuable measures, they don't recognize planning as a worthwhile activity.

  Key Point 

"Planning is the art and science of envisioning a desired future and laying out effective ways of bringing it about."

- Planning, MCDP5 U.S. Marine Corps



Levels (Stages) of Test Planning

Test planning can and should occur at several levels or stages. The first plan to consider is the Master Test Plan (MTP), which can be a separate document or can be included as part of the project plan. The purpose of the MTP is to orchestrate testing at all levels. The IEEE Std. 829-1998 Standard for Software Test Documentation identifies the following levels of test: Unit, Integration, System, and Acceptance. Other organizations may use more or fewer than four levels and may use different names. Some other levels (or at least other names) that we frequently encounter include beta, alpha, customer acceptance, user acceptance, build, string, and development. In this book, we will use the four levels identified in the IEEE standard and illustrated in Figure 3-1.

Figure 3-1: Levels of Test Planning

  Key Point 

Test planning CAN'T be separated from project planning.

All important test planning issues are also important project planning issues.

The test manager should think of the Master Test Plan as one of his or her major communication channels with all project participants. Test planning is a process that ultimately leads to a document that allows all parties involved in the testing process to proactively decide what the important issues are in testing and how to best deal with these issues. The goal of test planning is not to create a long list of test cases, but rather to deal with the important issues of testing strategy, resource utilization, responsibilities, risks, and priorities.

  Key Point 

Test planning SHOULD be separated from test design.

In test planning, even though the document is important, the process is ultimately more important than the document. Discussing issues of what and how to test early in the project lifecycle can save a lot of time, money, and disagreement later. Case Study 3-1 describes how one company derived a great benefit from their Master Test Plan, even though it was never actually used.

Case Study 3-1: If the Master Test Plan was so great, why didn't they use it?

The "Best" Test Plan We Ever Wrote

I once had a consulting assignment at a major American company where I was supposed to help them create their first ever Master Test Plan. Following up with the client a few months later, the project manager told me that the creation of the Master Test Plan had contributed significantly to the success of the project, but unfortunately they hadn't really followed the plan or kept it up to date. I replied, "Let me get this straight. You didn't use the plan, but you felt that it was a major contributor to your success. Please explain." The project manager told me that when they began to fall behind, they dispensed with much of the project documentation, including the test plan (sound familiar?). But because they created the plan early in the project lifecycle, many testing issues were raised that normally weren't considered until it was too late to take action. The planning process also heightened the awareness of the importance of testing to all of the project participants. Now, I believe that keeping test plans up to date is important, so that's not the purpose of telling you this story. Rather, I'm trying to stress the importance of the testing process, not just the document.

— Rick Craig

  Key Point 

"We should think of planning as a learning process - as mental preparation which improves our understanding of a situation… Planning is thinking before doing."

- Planning, MCDP5 U.S. Marine Corps

  Key Point 

Ike said it best: "The plan is nothing, the planning is everything."

- Dwight D. Eisenhower

In addition to the Master Test Plan, it is often necessary to create detailed or level-specific test plans. On a larger or more complex project, it's often worthwhile to create an Acceptance Test Plan, System Test Plan, Integration Test Plan, Unit Test Plan, and other test plans, depending on the scope of your project. Smaller projects (i.e., those with a smaller scope, fewer participants, and a smaller organization) may find that they need only one test plan, which covers all levels of test. Deciding the number and scope of test plans required should be one of the first strategy decisions made in test planning. As the complexity of a testing activity increases, the criticality of having a good Master Test Plan increases exponentially, as illustrated in Figure 3-2.

Figure 3-2: Importance of Test Planning

Detailed-level planning is explained in Chapter 4 - Detailed Test Planning. For the most part, the major considerations for detailed test plans are the same as those for the Master Test Plan; they differ mainly in scope and level of detail. In fact, it's normally desirable to use the same basic template for the detailed test plans that you use for the Master Test Plan.



Audience Analysis

The first question you must ask yourself when creating a test plan is, "Who is my audience?" The audience for a Unit Test Plan is quite different from the audience for an Acceptance Test Plan or a Master Test Plan, so the wording, use of acronyms, technical terms, and jargon should be adjusted accordingly. Also keep in mind that various audiences have different tolerances for what they will and will not read. Executives, for example, may not be willing to read an entire Master Test Plan if it's 50 pages long, so you might have to include an executive summary. In fact, you should avoid making the test plan prohibitively long, or no one will read (and use) it. If your test plan is too long, it may be necessary to create a number of plans of reduced scope built around subsystems or functionality. Sometimes, the size of your test plans can be managed and limited by the judicious use of references. If you decide to use references, though, you should carefully consider the implications. Most people don't really want to gather a stack of documents just so they can read a single test plan.

  Key Point 

If your test plan is too long, it may be necessary to create a number of plans of reduced scope built around subsystems or functionality.

Since we can't predict how long a document your audience is willing to read, we can't say that your test plan should not exceed any particular length such as 5, 10, 15, or 100 pages. Instead, we recommend that you survey the potential audience of your test plan to determine how long a document they are willing to read and use. Some military organizations, for example, may be accustomed to using documents of 100 pages or more, while members of a small entrepreneurial firm may only tolerate 10 pages or less.



Activity Timing

Test planning should be started as soon as possible. Generally, it's desirable to begin the Master Test Plan at about the same time the requirements specifications and the project plan are being developed. Figure 3-3 relates the approximate start times of various test plans to the software development lifecycle.

Figure 3-3: Timing of Test Planning

If test planning is begun early enough, it can and should have a significant impact on the content of the project plan. Acceptance test planning can be started as soon as the requirements definition process has begun. We have one client, for example, that actually includes the acceptance test plan and high-level test scenarios as part of the requirements specification. Similarly, the system, integration, and unit test plans should be started as early as possible.

Test planners often get frustrated when they begin their planning process early and find out that all of the information needed is either not available or in a state of flux. Experienced test planners have learned to use TBD (To Be Determined) when they come to a part of the plan that is not yet known. This, in itself, is important because it allows planners to see where to focus their efforts and it highlights what has yet to be done. It's true that plans that are written early will probably have to be changed during the course of the software development and testing. Sometimes, the documenting of the test plan will precipitate changes to the strategy. This change process is important because it records the progress of the testing effort and helps planners become more proficient on future projects.

  Key Point 

As a rule of thumb, when using TBD (To Be Determined), it's desirable to record who's responsible for resolution of the TBD and a target completion date.
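
To make TBDs trackable, some teams keep them in a simple list or spreadsheet. Below is a minimal sketch in Python (the field names and the example entry are our own invention, not part of any standard) showing one way to record each TBD with its owner and target date so overdue items can be flagged:

```python
from dataclasses import dataclass
from datetime import date
from typing import List

@dataclass
class OpenItem:
    """A single TBD in the test plan, with an owner and a target completion date."""
    section: str       # test plan section containing the TBD
    description: str   # what still needs to be determined
    owner: str         # person responsible for resolving the TBD
    target_date: date  # agreed resolution date

def overdue(items: List[OpenItem], today: date) -> List[OpenItem]:
    """Return the TBDs whose target date has already passed."""
    return [item for item in items if item.target_date < today]

# Hypothetical example entry
open_items = [
    OpenItem("Environmental Needs", "Source of the production data extract",
             "J. Smith", date(2024, 3, 1)),
]
print(overdue(open_items, date.today()))
```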



Standard Templates

It's important that an organization have a template for its test plans. The templates used in this book are based on the IEEE Std. 829-1998 for Software Test Documentation, which provides a good basis for creating your own customized template. In many cases, you may find that the IEEE template meets your particular needs without requiring modifications.

If a template doesn't meet your particular requirements, you should feel free to customize it as necessary. For example, we use a slightly modified version of the IEEE test plan template in this book because we believe that risk should be divided into two sections, rather than the one section included in the standard template. Refer to Chapter 2 - Risk Analysis for a detailed explanation.

Over time, it's likely that you'll find some of the required items on your template are always left blank. If you're confident that those items are not germane to your organization, there's no need to maintain those fields in your template, so remove them. If the wording in certain sections is constant from plan to plan, then you must first decide if you've really addressed the issue. If you're confident that you've adequately addressed the issue, then maybe that section should become part of your standard methodology and be removed from the test plan. Remember that a test plan should consider the unique situation of a given project or release and may need to be customized for some projects. Since different sizes and types of projects require different amounts of documentation, it may be wise to identify some sections of the template as optional.

  Key Point 

Since different sizes and types of projects require different amounts of documentation, it may be wise to identify some sections of the template as optional.

Case Study 3-2 describes a strategy that some companies use to improve the usability of their templates and, consequently, recognize their employees for their outstanding achievements.

Case Study 3-2: What do your company's templates have in common with employee morale?

A Mark of Pride

One good idea that we've seen at several companies is the inclusion of sample documents such as test plans and supporting material for the template. You could include one sample each from a small, medium, and large project. If your organization has different types of applications, you might consider having a sample template for each of them (e.g., client/server, Web, etc.). In one company, it was regarded as a "mark of pride" if your test plan was chosen to be included as a sample in the template.



Sections of a Test Plan

There are many issues that should be considered in developing a test plan. The outline that we describe (refer to Figure 3-4) and recommend is a slightly modified version of the IEEE Std. 829-1998 document for test planning. The modifications that we've made to this template include breaking the standard IEEE section Risks and Contingencies into two sections: Software Risk and Planning Risks and Contingencies. Furthermore, we've added sections for Table of Contents, References, and Glossary, which aren't included in the IEEE Standard. The parts of the template in Figure 3-4 that we've added to the IEEE template are shown in italics. Please feel free to modify this template (or any other template) to meet your needs. This outline is useful for creating any kind of test plan: Master, Acceptance, System, Integration, Unit, or whatever you call the levels of test planning within your organization.

IEEE Std. 829-1998 Standard for Software Test Documentation

Template for Test Planning

Contents

  1. Test Plan Identifier
  2. Table of Contents
  3. References
  4. Glossary
  5. Introduction
  6. Test Items
  7. Software Risk Issues
  8. Features to Be Tested
  9. Features Not to Be Tested
  10. Approach
  11. Item Pass/Fail Criteria
  12. Suspension Criteria and Resumption Requirements
  13. Test Deliverables
  14. Testing Tasks
  15. Environmental Needs
  16. Responsibilities
  17. Staffing and Training Needs
  18. Schedule
  19. Planning Risks and Contingencies
  20. Approvals


Figure 3-4: Template for Test Planning from IEEE Std. 829-1998

Test Plan Identifier

In order to keep track of the most current version of your test plan, you should assign it an identifying number. If you have a standard documentation control system in your organization, then assigning numbers should be second nature to you. A test plan identifier is a unique company-generated number used to identify a version of a test plan, its level, and the version of software that it pertains to.
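
The IEEE standard does not prescribe a format for the identifier, so each organization invents its own. The sketch below shows one hypothetical scheme (the layout and field names are purely illustrative) that encodes the level, the product, the software version, and the revision of the plan itself:

```python
def test_plan_id(level: str, product: str, sw_version: str, plan_rev: str) -> str:
    """Build an identifier such as 'MTP-ATM-8.6-r1.2' (hypothetical format).

    level      -- e.g., 'MTP' (master), 'STP' (system), 'ATP' (acceptance)
    product    -- short product or project code
    sw_version -- version of the software the plan pertains to
    plan_rev   -- revision of the plan document itself
    """
    return f"{level}-{product}-{sw_version}-r{plan_rev}"

print(test_plan_id("MTP", "ATM", "8.6", "1.2"))  # MTP-ATM-8.6-r1.2
```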

Keep in mind that test plans are like other software documentation - they're dynamic in nature and, therefore, must be kept up-to-date. When we're auditing the testing practices of an organization, we always check for the test plan identifier. If there isn't one, this usually means that the plan was created but never changed and probably never used. In some cases, it may even mean that the plan was created only to satisfy International Standards Organization (ISO) or Capability Maturity Model (CMM) guidelines, or simply because the boss said you had to have a plan. Occasionally, we even encounter a situation where the test plan was written after the software was released. Our colleague, Lee Copeland, calls this "post-implementation test planning."

  Key Point 

Due to the dynamic nature of test plans, it may be more efficient to disseminate and maintain the documents electronically.

Table of Contents

The table of contents should list each topic that's included in the test plan, as well as any references, glossaries, and appendices. If possible, the table of contents should be two or more levels deep to give the reader as much detail about the content of each topic as possible. The reader can then use this information to quickly review the topics of interest, without having to read through the document from beginning to end.

References

In the IEEE Std. 829-1998 Standard for Test Documentation, references are included in the Introduction, but we've separated them into their own section to emphasize their importance.

References recommended in the IEEE include:

  • Project Authorization
  • Project Plan
  • QA Plan
  • Configuration Management Plan
  • Relevant Policies
  • Relevant Standards

The IEEE standard also specifies that in multi-level test plans, each lower-level plan must reference the next higher-level plan. Other references to consider are requirements specifications, design documents, and any other documents that provide additional related information. Each listing in this section should include the name of the document, date and version, and the location or point of contact. References add credibility to your test plan, while allowing the reader to decide which topics warrant further investigation.

Glossary

A glossary is used to define any terms and acronyms used in the document. When compiling the glossary, be sure to remember who your audience is and include any product-specific terms as well as technical and testing terms. Some readers, for example, may not understand the meaning of a "level" as it pertains to test planning. A glossary provides readers with additional information, beyond the simple meaning of a term derived from its usage.

Introduction (Scope)

There are two main things to include in the Introduction section: a basic description of the scope of the project or release, including key features, history, etc., and a description of the scope of the plan itself. The scope of the project may include a statement such as:

  • "This project will cover all of the features currently in use, but will not cover features scheduled for general availability in release 5.0."

The scope of the plan might include a statement such as:

  • "This Master Test Plan covers integration, system, and acceptance testing, but not unit testing, since unit testing is being done by the vendor and is outside the scope of this organization."

Figure 3-5 illustrates some of the considerations when deciding the scope of the Master Test Plan (MTP). For embedded systems, the MTP might cover the entire product (including hardware) or only the software. The MTP might include only testing or might address other evaluation techniques such as reviews, walkthroughs, and inspections. Similarly, a project may have one MTP, or large projects may have multiple plans organized around subsystems.

Figure 3-5: Scope of Test and Evaluation Plans

Test Items

This section of the test plan describes programmatically what is to be tested within the scope of this test plan and should be completed in collaboration with the configuration or library manager and the developer. This section can be oriented to the level of the test plan. For higher levels, this section may be organized by application or by version. For lower levels, it may be organized by program, unit, module, or build. If this is a Master Test Plan, for example, this section might include information pertaining to version 2.2 of the accounting software, version 1.2 of the user manual and version 4.5 of the requirements specification. If this is an Integration or Unit Test Plan, this section might actually list the programs to be tested, if they're known. The IEEE standard specifies that the following documentation be referenced, if it exists:

  • Requirements Specification
  • Design Specification
  • User's Guide
  • Operations Guide
  • Installation Guide
  • Incident Reports that relate to the test items

Items that are to be specifically excluded from testing should be identified.

Software Risk Issues

The purpose of discussing software risk is to determine what the primary focus of testing should be. Generally speaking, most organizations find that their resources are inadequate to test everything in a given release. Outlining software risks helps the testers prioritize what to test and allows them to concentrate on those areas that are likely to fail or have a large impact on the customer if they do fail. Organizations that work on safety-critical software can usually use the information from their safety and hazard analysis as the basis for this section of the test plan.

We've found, though, that in most companies no attempt is made to verbalize software risks in any fashion. If your company doesn't currently do any type of risk analysis, starting simple is the recommended approach. Organize a brainstorming session among a small group of users, developers, and testers to find out what their concerns are. Start the session by asking the group, "What worries you?" We don't use the word risk, which we find can be intimidating to some people. Some examples of software risks include:

  • Interfaces to other systems
  • Features that handle large sums of money
  • Features that affect many (or a few very important) customers
  • Highly complex software
  • Modules with a history of defects (from a defect analysis)
  • Modules with many or complicated changes
  • Security, performance, and reliability issues
  • Features that are difficult to change or test

You can see that the risk analysis team needs users to judge the impact of failure on their work, as well as developers and testers to analyze the likelihood of failure. The list of software risks should have a direct effect on what you test, how much you test, and in what order you test. Risk analysis is hard, especially the first time you try it, but you will get better, and it's worth the effort. Risk analysis is covered in depth in Chapter 2.

  Key Point 

What you test is more important than how much you test.

Features to Be Tested

This section of the test plan includes a listing of what will be tested from the user or customer point of view as opposed to test items, which are a measure of what to test from the viewpoint of the developer or library manager. If you're testing an Automated Teller Machine (ATM), for example, some of the features to be tested might include withdraw cash, deposit cash, check account balance, transfer funds, purchase stamps, and make a loan payment. For lower levels of test, the features to be tested might be much more detailed. Table 3-1 shows how the risk analysis described in Section 7.0 is based on analyzing the relative risk of each feature identified in the Features to Be Tested section.

Table 3-1: Prioritized List of ATM Features/Attributes with "Cut Line"

ATM Software (Features/Attributes)     Likelihood   Impact   Priority

To Be Tested:
  Withdraw cash (feature)              High         High     6
  Deposit cash (feature)               Medium       High     5
  Usability (attribute)                Medium       High     5
  Transfer funds (feature)             Medium       Medium   4
  Purchase stamps (feature)            High         Low      4
  Security (attribute)                 Low          High     4

Not to Be Tested (or tested less):
  Make a loan payment (feature)        Low          Medium   3
  Check account balance (feature)      Low          Medium   3
  Performance (attribute)              Low          Medium   3

One benefit of using the list of features to be tested as the basis for software risk analysis is that it can help determine which low-risk features should be moved to Section 9.0 - Features Not to Be Tested, if your project falls behind schedule.
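
The priorities shown in Table 3-1 are consistent with the simple scoring scheme described in Chapter 2, where High = 3, Medium = 2, Low = 1, and the priority is the sum of the likelihood and impact scores. Assuming that scheme, the sketch below ranks the ATM features and attributes and applies a "cut line," which is how items drift into Features Not to Be Tested when the schedule slips:

```python
SCORE = {"High": 3, "Medium": 2, "Low": 1}

def priority(likelihood: str, impact: str) -> int:
    """Priority = likelihood score + impact score (as in Table 3-1)."""
    return SCORE[likelihood] + SCORE[impact]

# (feature or attribute, likelihood, impact) taken from Table 3-1
items = [
    ("Withdraw cash", "High", "High"),
    ("Deposit cash", "Medium", "High"),
    ("Usability", "Medium", "High"),
    ("Transfer funds", "Medium", "Medium"),
    ("Purchase stamps", "High", "Low"),
    ("Security", "Low", "High"),
    ("Make a loan payment", "Low", "Medium"),
    ("Check account balance", "Low", "Medium"),
    ("Performance", "Low", "Medium"),
]

CUT_LINE = 4  # items below this priority are candidates for "Features Not to Be Tested"

for name, like, imp in sorted(items, key=lambda i: priority(i[1], i[2]), reverse=True):
    p = priority(like, imp)
    status = "To Be Tested" if p >= CUT_LINE else "Not to Be Tested (or tested less)"
    print(f"{name:22} likelihood={like:6} impact={imp:6} priority={p}  {status}")
```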

Features Not to Be Tested

This section of the test plan is used to record any features that will not be tested and why. There are many reasons why a particular feature might not be tested. Maybe the feature wasn't changed, it's not yet available for use, or it has a good track record; but whatever the reason a feature is listed in this section, it all boils down to relatively low risk. Even features that are to be shipped but not yet enabled and available for use pose at least a certain degree of risk, especially if no testing is done on them. This section will certainly raise a few eyebrows among managers and users, many of whom cannot imagine consciously deciding not to test a feature, so be careful to document the reason you decided not to test a particular feature. These same managers and users, however, will often approve a schedule that doesn't possibly allow enough time to test everything. This section is about intelligently choosing what not to test (i.e., low-risk features), rather than just running out of time and not testing whatever was left on the ship date.

  Key Point 

Choosing features not to be tested allows you to intelligently decide what not to test, rather than just running out of time and not testing whatever was left on the ship date.

Politically, some companies that develop safety-critical systems or have a corporate culture that "requires" every feature to be tested will have a hard time listing any features in this section. If every feature is actually tested, then that's fine. But, if resources don't allow that degree of effort, using the Features Not to Be Tested section actually helps reduce risk by raising awareness. We've met many test managers who have obtained additional test resources or time when they clearly spelled out which features would not be tested! Case Study 3-3 describes one company's claim that they test every feature of their software.

Case Study 3-3: Does your company really test every feature?

Here at XYZ Company, "We Test Everything"

Once, I was giving a series of Test Management courses at a large software company. I gave the same two-day lecture three times in a row! I thought I deserved a medal for that, but the real medal belonged to the VP of Testing (yes, they had a Testing VP) for sitting through the same class three straight times. Anyway, the only guideline he gave me was that I couldn't talk about "features NOT to be tested" because at his company, everything was tested! Well, of course I forgot what the VP told me and I began talking to his staff about features not to be tested. The VP quickly stood up and said, "Rick, you know that here at the XYZ Company, we test everything." Meanwhile, behind him, all of his managers were mouthing the words, "No, we don't." Apparently, the only person who thought that everything was being tested was the VP. The moral of the story is this: even if you think your company tests every feature of their software, chances are they don't.

— Rick Craig

Another important item to note is that this section may grow if projects fall behind schedule. If the risk assessment identifies each feature by risk, it's much easier to decide which additional features pose the least risk if moved from Section 8.0 - Features to Be Tested to Section 9.0 - Features Not to Be Tested of your test plan. Of course, there are options other than reducing testing when a project falls behind schedule, and they should be included in Section 19.0 - Planning Risks and Contingencies.

Approach (Strategy)

Since this section is the heart of the test plan, some organizations choose to label it Strategy rather than Approach. This section should contain a description of how testing will be performed (approach) and explain any issues that have a major impact on the success of testing and ultimately on the project (strategy). Figure 3-6 illustrates some typical influences on strategy decisions.

Figure 3-6: Influences on Strategy Decisions

For a Master Test Plan, the approach to be taken for each level should be explained, including the entrance and exit criteria from one level to another. Case Study 3-4 describes one company's approach to testing.

Case Study 3-4: Example of the Approach Section in a Master Test Plan

ABC Company's Approach to Testing

System testing will take place in the test labs in our London Office. The Testing effort will be under the direction of the London test team, with support from the development staff and users from our New York office. An extract of production data from an entire month will be used for the duration of the testing effort. Test plans, test design specs, and test case specs will be developed using the IEEE Std. 829-1998 Standard for Software Test Documentation. All tests will be captured using our in-house tool for subsequent regression testing. Tests will be designed and run to test all features listed in section 8 of the system test plan. Additionally, testing will be done in concert with our Paris office to test the billing interface. Performance, security, load, reliability, and usability testing will be included as part of the system test. Performance testing will begin as soon as the system has achieved stability. All user documentation will be tested in the latter part of the system test. The system test team will assist the acceptance test team in testing the installation procedures. Before bug fixes are reintroduced into the test system, they must first successfully pass unit testing, and if necessary, integration testing. Weekly status meetings will be held to discuss any issues and revisions to the system test plan, as required.

Exit Criteria from System Test include:

  • All test cases must be documented and run.
  • 90% of all test cases must pass.
  • All test cases dealing with the Billing function must pass.
  • All Medium and High defects must be fixed.
  • Code coverage must be at least 90% (including Integration and Unit testing).

Methodology Decisions

Many organizations use an "off-the-shelf" methodology, while others have either created a brand-new methodology from scratch or have adapted someone else's. Methodology decisions require management to answer many questions:

  • When will testers become involved in the project?
  • When will test execution begin?
  • How many (if any) beta sites will be used?
  • Will there be a pilot (i.e., a production system executed at a single or limited number of sites)?
  • What testing techniques (e.g., "buddy" testing, inspections, walkthroughs, etc.) will be utilized?
  • How many testers will be required for planning? Design? Execution?
  • What testing levels (e.g., Acceptance, System, Integration, Unit, etc.) will be used?
  Key Point 

Refer to Chapter 4 - Detailed Test Planning for more information on buddy testing.

The left-most column of Figure 3-7 shows the standard levels identified in the IEEE 829-1998 Standard for Software Test Documentation. Many organizations always try to use the same levels on every project and every release, but some organizations may choose to occasionally or always combine levels, delete levels, add levels, or call them by different names.

Figure 3-7: Test Level Decisions

Figure 3-8 illustrates the test levels identified in the IEEE Std. 829-1998 Standard for Software Test Documentation. Each level is defined by a particular environment, which may include the hardware configuration, software configuration, interfaces, testers, etc. Notice that as you move to higher levels of test, the environment becomes increasingly realistic. The highest level of test, in this example acceptance testing, should mirror the production environment as closely as possible, since the system will be fielded upon successful completion of the testing.

Figure 3-8: Typical Test Levels

  Key Point 

As you move to higher levels of test, the environment becomes more realistic.

Case Study 3-5: Many people who are used to actually doing the coding and testing are frustrated by the process of sitting around trying to help us document a testing methodology - they feel like they should be doing "real" work.

Using the Test Planning Process to Create a Methodology

As consultants, we are often asked to help create a testing methodology for organizations that don't even have a rudimentary testing process in place - or at least not one that is documented. We've found that many people who are used to actually doing the coding and testing are frustrated by the process of sitting around trying to help us document a testing methodology - they feel like they should be doing "real" work. This frustration often leads to a documented process that no one wants to use.

So, an alternate approach is to use the test planning process as a way to create a methodology from the bottom up. That is, we choose a pilot project and create a master test plan. The decisions made while creating the master test plan for the pilot project are declared to be Version 1.0 of the organization's testing methodology.

Resources

The best-laid plans of test managers can easily be sabotaged by either of two events: development runs late and cannot provide the testing team with builds as originally scheduled, or the ship date is moved up (often due to competitive pressure). Unfortunately, the test manager has little control over these events and should therefore ensure that the testing schedule contains contingencies to accommodate these possible scenarios.

Another strategy decision might be where the testing resources will come from. If your organization has a dedicated test group, you may already have sufficient resources. If the testing group is understaffed or has other priorities, it may be necessary to look for other resources in the form of developers, users, college interns, contractors, support staff, and others. Unfortunately, adding resources can also become a political issue in some organizations. Some users may want nothing to do with the testing effort, while others may be "miffed" if they aren't included in the project. You can usually maximize efficiency by adequately staffing your project from the beginning. However, you should avoid the scenario in which high-priced testing consultants are just sitting around (using up the testing budget) waiting for development to provide them with something to test. Conversely, bringing on additional testers late in the project can actually slow down the process due to the steep learning curve.

  Key Point 

According to Frederick Brooks' The Mythical Man-Month, "adding more people to a late software project makes it later."

Test Coverage Decisions

Several types of coverage measures are used in software testing. Perhaps the best-known form of coverage is code coverage, which measures the percentage of program statements, branches, or paths that are executed by a group of test cases (i.e., a test set). Code coverage requires the assistance of a special tool to instrument the code. These tools have been around for years and help the programmers and testers understand what parts of the code are or are not executed by a given group of tests. They are also useful for identifying "dead" or unexecutable code.

Based on our experiences, code coverage tools still don't enjoy widespread use. While it is not totally clear why these tools are not used more often, we believe the following issues may be factors:

  • Code coverage requires the purchase and subsequent training on a new tool.
  • Code coverage metrics are foreign to some functional level testers.
  • Code coverage is almost a moot point for organizations that have entire programs or even subsystems that are not addressed by the tests due to time or resource constraints or lack of system knowledge.

Other measures include coverage of requirements, design, and interfaces. Requirements coverage measures the percentage of business requirements that are covered by a test set, while design coverage measures how much of the design is covered. Interface coverage measures the percentage of interfaces that are being exercised by a test set. Coverage will be explained in more detail in Chapter 7 - Test Execution.
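
As a simple illustration of requirements coverage (a sketch only; real projects usually derive this from a traceability matrix or a test management tool), coverage is just the fraction of requirements exercised by at least one test case in the test set:

```python
# Hypothetical traceability data: requirement ID -> test cases that exercise it.
traceability = {
    "REQ-01": ["TC-101", "TC-102"],
    "REQ-02": ["TC-103"],
    "REQ-03": [],            # no test case yet, so this requirement is not covered
    "REQ-04": ["TC-104"],
}

covered = [req for req, tests in traceability.items() if tests]
coverage = 100.0 * len(covered) / len(traceability)
print(f"Requirements coverage: {coverage:.0f}% ({len(covered)} of {len(traceability)})")
# -> Requirements coverage: 75% (3 of 4)
```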

  Key Point 

Requirements coverage measures the percentage of business requirements that are covered by a test set, while design coverage measures how much of the design is covered.

Walkthroughs and Inspections

The major focus of this book is on testing and analysis, but as you can see in Figure 3-9, software evaluation also includes another category called reviews. Reviews of requirements, design, and code are examples of verification techniques, which are an important part of software quality assurance (known as evaluation in the STEP methodology). While they are not testing activities, they are complementary activities that can significantly affect the test strategy and should be included in the Approach (Strategy) section of the Master Test Plan. Specifically, they can have an impact on the quality of the software being tested and on the resources available for testing.

Figure 3-9: Software Evaluation Process

Two of the most common types of reviews are walkthroughs and inspections. It is not clear to us when and where the term "walkthrough" originated, but walkthroughs have been in use longer than their more rigorous cousin, software inspections. Software inspections as we know them today were developed and popularized by Michael Fagan while he worked for IBM in the 1970s.

  Key Point 

The IEEE defines an inspection as a formal evaluation technique in which software requirements, design, or code are examined in detail by a person or group other than the author, to detect faults, violations of development standards, and other problems.

A walkthrough is a peer review of a software product that is conducted by "walking through" the product sequentially (line by line) to judge the quality of the product being reviewed and to discover defects. Most walkthroughs that we've taken part in are led by the developer of the product being reviewed. Inspections are also peer reviews, but are much more rigorous and, in addition to finding defects in the product being inspected, typically employ statistical process control to measure the effectiveness of the inspection process and to identify process improvement opportunities in the entire software development process.

The IEEE Std. 729-1983 Standard Glossary of Software Engineering Terminology defines an inspection as: a formal evaluation technique in which software requirements, design, or code are examined in detail by a person or group other than the author to detect faults, violations of development standards, and other problems. It also states that the objective of software inspections is to detect and identify defects in software elements.

The information in Table 3-2 reflects our thoughts on the differences between walkthroughs and inspections. The various books on walkthroughs and inspections have surprisingly different views on the exact definition, purpose, and rigor of the two techniques.

Table 3-2: Comparison of Walkthroughs versus Inspections

Participants
  Walkthroughs: Peer(s) led by author
  Inspections: Peers in designated roles

Rigor
  Walkthroughs: Informal to formal
  Inspections: Formal

Training Required
  Walkthroughs: None, informal, or structured
  Inspections: Structured, preferably by teams

Purpose
  Walkthroughs: Judge quality, find defects, training
  Inspections: Measure/improve quality of product and process

Effectiveness
  Walkthroughs: Low to medium
  Inspections: Low to very high, depending on training and commitment

References
  Walkthroughs: Handbook of Walkthroughs, Inspections, and Technical Reviews by Daniel P. Freedman and Gerald M. Weinberg; Structured Walkthroughs by Edward Yourdon
  Inspections: Software Inspection by Tom Gilb and Dorothy Graham; Handbook of Walkthroughs, Inspections, and Technical Reviews by Daniel P. Freedman and Gerald M. Weinberg; Software Reviews and Audits Handbook by C.P. Hollocker; Design and Code Inspections to Reduce Errors in Program Development by Michael E. Fagan

  Key Point 

To learn more about software inspections, we recommend the book Software Inspection by Tom Gilb and Dorothy Graham.

Inspections and, to a lesser degree, walkthroughs are labor- and thought-intensive and require a lot of resources to conduct well. For many projects, it may not be possible to perform inspections on everything. The tester and/or the developer may decide to do inspections only on highly complex code, modules that have had many lines of code changed, code that's been problematic in past releases, or high-risk requirements and design specifications. What has been inspected will have a great impact on the testing strategy. Modules that have undergone successful inspections may require less testing than other modules. On the other hand, if an inspection reveals many bugs, testing should be delayed until the code or specification is repaired, or more time may need to be allocated for testing those parts of the system. An inspection is a rigorous, formal peer examination that does the following:

  • Verifies that the software elements satisfy the specifications.
  • Verifies that the software element(s) conform to applicable standards.
  • Identifies deviations from standards and specifications.
  • Collects software engineering data (for example, defect and effort data).
  • Does not examine alternatives or stylistic issues.
  Key Point 

"… human processes tend to be more effective in finding certain types of errors, while the opposite is true of other types of errors. The implication is that inspections, walk-throughs, and computer-based testing are complementary; error detection will suffer if one or the other is not present."

- Glenford Myers, The Art of Software Testing

Another part of the walkthroughs and inspections strategy is determining who should participate. In this case, we're particularly interested in the role of the testers in the process. It's highly desirable to have system-level testers involved in the requirements and design reviews, but they may or may not be as useful in the code reviews, depending on the skill set of the testers. If the testers don't have any coding experience, their presence in the meeting may not contribute significantly to the review, but can still serve as a useful learning experience for them.

Configuration Management

Another strategic issue that should generally be considered in the Approach section of a test plan is how configuration management will be handled during software testing. Alternatively, many companies choose to describe their configuration management processes in an entirely separate document. Configuration management in the context of a Master Test Plan usually includes change management as well as the decision-making process used to prioritize bugs. Change management is important because it's critical to keep track of the version of the software and related documents that are being tested. There are many woeful tales of companies that have actually shipped the wrong (untested) version of the software.

  Key Point 

If the code is frozen prematurely, the tests will become unrealistic because fixing the bugs that were previously found may change the code now being tested.

Equally important is the process for reviewing, prioritizing, fixing, and re-testing bugs. The test environment in some companies is controlled by developers, which can be very problematic for test groups. As a general rule, programmers want to fix every bug (in their code) immediately. It's as though many programmers feel that if they can fix the bug quickly enough it didn't actually happen. Testers, on the other hand, are famous for saying that "testing a spec is like walking on water - it helps if it's frozen." Obviously, both of the extremes are counterproductive. If every bug fix were immediately promoted into the test environment, testers would never do anything but regression testing. Conversely, if the code is frozen prematurely, the tests will become unrealistic because fixing the bugs that were previously found may change the code now being tested. The key is to mutually agree on a process for reviewing, fixing, and promoting bugs back into the test environment. This process may be very informal during unit and integration testing, but will probably need to be much more formal at higher levels of test.

  Key Point 

Regression testing is retesting previously tested features to ensure that a change or bug fix has not introduced new problems.

  Key Point 

Confirmation testing is rerunning tests that revealed a bug to ensure that the bug was fully and actually fixed.

- Rex Black

We recommend that our clients use acceptance testing as a way of validating their software configuration management process. A Change Control Board (CCB) comprised of members from the user community, developers, and testers can be set up to handle this task. They will determine the severity of the bug, the approximate cost to fix and test the bug, and ultimately the priority for fixing and re-implementing the code. It's possible that some bugs discovered, especially in acceptance testing, may be deferred to a future release.

Collection and Validation of Metrics

Another topic often described in the Approach section of a test plan is metrics. Since metrics collection and validation can be a significant overhead, it's necessary to discuss which metrics will be collected, what they will be used for, and how they will be validated. All testing efforts will need a way to measure testing status, test effectiveness, software quality, adherence to schedules, readiness for shipment, etc. Refer to Chapter 10 - The Test Manager for more information.

Tools and Automation

Another strategy issue that should be addressed in the Approach section of the test plan is the use of tools and automation. Testing tools can be a tremendous help to the development and testing staff, but they can also spell disaster if their use isn't carefully planned and implemented. Some types of tools can actually require more time to develop, implement, and run a test set the first time than executing the test cases manually would; the time savings may only come later, during regression testing. Other types of tools can pay time dividends from the very beginning, but again, it's not our purpose to discuss test tools here (refer to Chapter 7 - Test Execution for more information). We only want to emphasize that the use of automated testing tools needs to be well planned and have adequate time allocated for implementation and training.

Changes to the Test Plan

The Master Test Plan should address how changes to the plan itself and its corresponding detailed test plans will be handled. When working to draft the plan, it's desirable to include all of the key people and groups (e.g., developers, users, configuration managers, customers, marketing, etc.) in the development and review cycles. At some point, we hope that these key people will sign off on the plan.

It's also important to remember that the test plan will change during the project. Each test manager should include a strategy addressing how to update the plan. Some of the questions that need to be addressed include:

  • Are small changes (e.g., misspelled words) permissible without going through the approval process again?
  • Should there be weekly or monthly updates to the test plan?
  • Should the test plan go through the regular CM process?
  • How should the test plan be published (e.g., electronically, on paper, or both)?
  • Should the test plan review be conducted in a "shotgun" fashion, sequentially, in a meeting, or some combination thereof?
  Key Point 

The test manager must develop a strategy for updating the test plan.

Meetings and Communications

It's often a good idea to include a section in the Master Test Plan on meetings, reporting, and communications. If there are to be any standing meetings, they should be described in the Approach section of the test plan. Examples of meetings and other methods of communication include the Change Control Board (CCB), status meetings, and presentations to users and/or upper management.

Status reporting should also be covered in this section and include details on how often meetings will be held, in what format, and what metrics will be used to monitor and communicate results. Finally, it's useful to describe chains of command and where to go for conflict resolution - the CCB is one obvious choice.

Other Strategy Issues

We've covered a few of the strategy issues that occur frequently. Other topics that might affect the strategy include how to handle:

  • multiple production environments
  • multi-level security
  • beta testing
  • test environment setup and maintenance
  • use of contractual support
  • unknown quality of software
  • feature creep
  • etc.

The bottom line is that anything that has a significant impact on the effectiveness or cost of testing is a candidate for inclusion in the Approach section of the test plan.

Item Pass/Fail Criteria

This section of the test plan describes the pass/fail criteria for each of the items described in Section 6.0 - Test Items. Just as every test case needs an expected result, each test item needs defined pass/fail criteria. Typically, pass/fail criteria are expressed in terms of test cases passed and failed; the number, type, severity, and location of bugs; usability; reliability; and/or stability. The exact criteria used will vary from level to level and from organization to organization.

Remember that not all test cases are created equal. The percentage of test cases executed, although a common and often useful metric, can be misleading. For example, if 95% of the test cases pass, but the "nuclear shut-off valve" test fails, the overall percentage may not mean much. Furthermore, not all tests cover the same amount of the system. For example, it may be possible to have 75% of the test cases cover only 50% of the system. A more effective measure for quantifying pass/fail criteria would relate test case completion to some measure of coverage (e.g., code, design, requirements, etc.).

  Key Point 

Some examples of pass/fail criteria include:

  • % of test cases passed
  • number, severity, and distribution of defects
  • test case coverage
  • successful conclusion of user test
  • completion of documentation
  • performance criteria

If you've never tried to quantify pass/fail criteria before, you may find it a little frustrating at first. But, trying to foresee "what's good enough" can really help crystallize the thinking of the various test planners and reduce contention later. If the software developer is a contractor, this section can even have legal ramifications, since the pass/fail criteria may be tied to bonus or penalty clauses, or client acceptance of the product.
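
One way to make "what's good enough" concrete is to write the pass/fail criteria as explicit checks. The sketch below (the thresholds are invented for illustration, not taken from any standard) combines the percentage of test cases passed with a coverage figure and a "no open critical defects" rule, so a high pass rate alone cannot declare the item good enough:

```python
def item_passes(tests_passed: int, tests_run: int,
                requirements_covered_pct: float,
                open_critical_defects: int) -> bool:
    """Example pass/fail gate for a test item (illustrative thresholds only)."""
    pass_rate = 100.0 * tests_passed / tests_run if tests_run else 0.0
    return (pass_rate >= 95.0                      # % of executed test cases that passed
            and requirements_covered_pct >= 90.0   # coverage achieved by the test set
            and open_critical_defects == 0)        # e.g., the "nuclear shut-off valve" test

# 95% of the tests passed, but one critical defect remains open, so the item fails.
print(item_passes(tests_passed=95, tests_run=100,
                  requirements_covered_pct=92.0, open_critical_defects=1))  # False
```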

Suspension Criteria and Resumption Requirements

The purpose of this section of the test plan is to identify any conditions that warrant a temporary suspension of testing and the criteria for resumption. Because testers are often harried during test execution, they may have a tendency to surge forward no matter what happens. Unfortunately, this can often lead to additional work and a great deal of frustration. For example, if a group is testing some type of communications network or switch, there may come a time when it's no longer useful to continue testing a particular interface if the protocol to be used is undefined or in a state of flux. Using our ATM example, it may not be possible to test the withdraw cash feature if the check account balance feature has not yet been developed.

Metrics are sometimes established to flag a condition that warrants suspending testing. If a certain predefined number of total defects or defects of a certain severity are encountered, for example, testing may be halted until a determination can be made whether or not to redesign part of the system, try an alternate approach, or take some other action.
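
Such criteria can be stated as simple thresholds agreed to in advance. A minimal sketch follows, with invented numbers, assuming open defect counts are tracked by severity:

```python
# Invented thresholds; real values belong in the Suspension Criteria section of the plan.
SUSPENSION_THRESHOLDS = {
    "critical": 1,      # any open critical defect suspends testing
    "high": 5,          # too many open high-severity defects
    "total_open": 50,   # too many open defects overall
}

def suspend_testing(open_by_severity: dict) -> bool:
    """Return True if the agreed suspension criteria have been met."""
    total_open = sum(open_by_severity.values())
    return (open_by_severity.get("critical", 0) >= SUSPENSION_THRESHOLDS["critical"]
            or open_by_severity.get("high", 0) >= SUSPENSION_THRESHOLDS["high"]
            or total_open >= SUSPENSION_THRESHOLDS["total_open"])

print(suspend_testing({"critical": 0, "high": 6, "medium": 12}))  # True
```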

  Key Point 

Frequently used suspension criteria include:

  • incomplete tasks on the critical path
  • large volumes of bugs
  • critical bugs
  • incomplete test environments
  • resource shortages

Gantt charts can be used to clearly show dependencies between testing activities. In Figure 3-10, for example, Task 5.3-Execute Test Procedures for 8.6 and all subsequent tasks cannot begin until Task 5.2-Load ATM Version 8.6, Build 1 is completed. The Gantt chart clearly shows that Task 5.2 is on the critical path and all subsequent activities will need to be suspended until this task is completed.

Figure 3-10: Sample Gantt Chart

Test Deliverables

This is a listing of all of the documents, tools, and other components that are to be developed and maintained in support of the testing effort. Examples of test deliverables include test plans, test design specs, test cases, custom tools, defect reports, test summary reports, and simulators. One item that is not a test deliverable is the software to be tested. The software to be tested should be listed under Section 6.0 - Test Items.

Artifacts that support the testing effort need to be identified in the overall project plan as deliverables and should have the appropriate resources assigned to them in the project tracking system. This will ensure that the test process has visibility within the overall project tracking process and that the test tasks used to create these deliverables are started at the appropriate times. Any dependencies between the test deliverables and their related software deliverables should be identified in Section 18.0 - Schedule and may be tracked using a Gantt chart. If the predecessor document is incomplete or unstable, the test products will suffer as well.

  Key Point 

Examples of test deliverables include:

  • test plans
  • test design specs
  • test cases
  • test procedures
  • test log
  • test incident reports
  • test summary reports
  • test data
  • simulators
  • custom tools

Testing Tasks

This section is called Testing Tasks in the IEEE template and it identifies the set of tasks necessary to prepare for and perform testing. All intertask dependencies and any special skills that may be required are also listed here. We often omit this section and include all testing tasks in a matrix under Section 16.0 - Responsibilities to ensure that someone will be responsible for the completion of these tasks at a later date.

Environmental Needs

Environmental needs include hardware, software, data, interfaces, facilities, publications, security access, and other requirements that pertain to the testing effort, as illustrated in Figure 3-11. An attempt should be made to configure the testing environment as similar to the real-world system as possible. If the system is destined to be run on multiple configurations (hardware, operating system, etc.), a decision must be made whether to replicate all of these configurations, only the riskiest, only the most common, or some other combination. When you're determining the hardware configuration, don't forget to list your system software requirements as well.

Figure 3-11: Environmental Needs

In addition to specifying the hardware and software requirements, it's also necessary to identify where the data will come from to populate the test database. Some possible choices include production data, purchased data, user-supplied data, generated data, and simulators. At this point, you should also determine how to validate the data and assess its fragility so you know how often to refresh it. Remember, don't assume that even production data is totally accurate.

  Key Point 

Test data that is quickly outdated due to a very dynamic business environment is said to be fragile.

Undoubtedly, many of our students get tired of hearing that "interfaces are risky," but indeed they are. When planning the test environment, it's very important to determine and define all interfaces. Occasionally, the systems that we must interface with already exist. In other instances, they may not yet be ready and all we have to work with is a design specification or some type of protocol. If the interface is not already in existence, building a realistic simulator may be part of your testing job.

Facilities, publications, and security access may seem trivial, but you must ensure that you have somewhere to test, your tests are properly documented, and you have appropriate security clearance to access systems and data.

Case Study 3-6: Security access may seem trivial, but it's really an important part of the test environment.

Tough Duty

Once, while on active duty in the Marine Corps, I was "loaned" to an Air Force command to help in testing a large critical system. For some reason, my security clearance didn't arrive at the base until two days after I was scheduled to begin work. Since I couldn't log on to the system or even gain access to the building, I was forced to spend a couple of boring days hanging out at the Officer's Club and lounging by the pool - basically doing everything except testing.

— Rick Craig

Refer to Chapter 6 - Test Implementation for more information about the test environment.

Responsibilities

We like to include a matrix in this section that shows major responsibilities such as establishment of the test environment, configuration management, unit testing, and so forth. Some people like to list job titles in the responsibilities matrix (i.e., Development Manager) because the staff members holding various jobs change so frequently. We prefer to list the responsible parties by name because we've found that having someone's name next to a task gets their attention more than just listing a department or job title. In Figure 3-12, we hedged our bets by listing the responsible parties both by name and by job title.

Figure 3-12: Responsibilities Matrix

Staffing and Training Needs

The actual number of people required to handle your testing project is, of course, dependent upon the scope of the project, the schedule, and a multitude of other factors. This section of the test plan describes the number of people required and what skills they need to possess. In some cases, it may be enough to state that you need, say, 15 journeyman testers and 5 apprentice testers. More often, though, you will have to be more specific. If you already have someone in mind, for example, you could state your requirements as, "We must have Jane Smith to help establish a realistic test environment."

Examples of training needs might include learning how to use a particular tool, testing methodologies, interfacing systems, management systems such as defect tracking, configuration management, or basic business knowledge related to the system under test. Training needs may vary significantly, depending on the scope of the project. Refer to Chapter 10 - The Test Manager for more information.

Schedule

The testing schedule should be built around the milestones contained in the Project Plan such as delivery dates of various documents and modules, availability of resources, interfaces, and so forth. Then, it will be necessary to add all of the testing milestones. These testing milestones will differ in level of detail depending upon the level of the test plan being created. In a Master Test Plan, milestones will be built around major events such as requirements and design reviews, code delivery, completion of user manuals, and availability of interfaces. In a Unit Test Plan, most of the milestones will be based on the completion of various software modules.

Initially, it's often useful to build a generic schedule without calendar dates; that is, identify the time required for various tasks, dependencies, and so forth without specifying particular start and finish dates. Normally, this schedule should be portrayed graphically using a Gantt chart in order to show dependencies.
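
One lightweight way to capture such a generic schedule is to record each task's duration and predecessors and let a small script compute the day offsets; a Gantt chart tool does the same thing graphically. The sketch below is illustrative only - the tasks, durations, and dependencies are hypothetical.

```python
# Illustrative sketch only: a "generic" schedule expressed as day offsets
# computed from task durations and dependencies, with no calendar dates.
from functools import lru_cache

tasks = {
    # task name: (duration in working days, list of predecessor tasks)
    "write master test plan":   (5,  []),
    "build test environment":   (10, ["write master test plan"]),
    "design system test cases": (15, ["write master test plan"]),
    "execute system test":      (20, ["build test environment", "design system test cases"]),
}

@lru_cache(maxsize=None)
def earliest_start(task):
    """Earliest start offset (in working days) for a task, given its predecessors."""
    predecessors = tasks[task][1]
    return max((earliest_start(p) + tasks[p][0] for p in predecessors), default=0)

for name, (duration, _) in tasks.items():
    start = earliest_start(name)
    print(f"day {start:3d} to day {start + duration:3d}: {name}")
```

Once real start dates are known, the same offsets can simply be mapped onto the calendar.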

  Key Point 

It's important that the schedule section reflect how the estimates for the milestones were determined.

Our template specifies a testing schedule without reference to where the milestones came from, but it's our hope that the milestones are based on some type of formal estimate. If we're ever going to gain credibility in the software development arena, we must become more accurate in estimating time and resources. It's important that the schedule section reflect how the estimates for the milestones were determined. In particular, if the schedule is very aggressive, estimating becomes even more critical, so that planning risks, contingencies, and testing priorities can be specified. Recording schedules based on estimates also provides the test manager with an audit trail of how the estimates did or did not come to pass, and forms the basis for better estimating in the future.
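
The audit trail itself can be very simple. The sketch below - with hypothetical milestones and dates, not taken from any real project - just records estimated against actual completion dates so the slip on each milestone is visible when the next plan is estimated.

```python
# Illustrative sketch only: a simple audit trail of estimated versus actual
# milestone dates, used to calibrate future estimates.
from datetime import date

milestones = [
    # (milestone, estimated completion, actual completion)
    ("code delivered to test", date(2002, 3, 1),  date(2002, 3, 8)),
    ("system test complete",   date(2002, 4, 15), date(2002, 5, 3)),
]

for name, estimated, actual in milestones:
    slip = (actual - estimated).days
    print(f"{name}: estimated {estimated}, actual {actual}, slip {slip:+d} days")
```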

Planning Risks and Contingencies

Many organizations have made a big show of announcing their commitment to quality. We've seen quality circles, quality management, total quality management, and who knows what else. Unfortunately, in the software world, many of these same organizations have demonstrated that their only true commitment is to the schedule. The Planning Risks and Contingencies section of Chapter 2 provides a good overview of how to make intelligent and informed planning decisions. Any activity that jeopardizes the testing schedule is a planning risk. Some typical planning risks include:

  • Unrealistic delivery dates
  • Staff availability
  • Budget
  • Environmental options
  • Tool inventory
  • Acquisition schedule
  • Participant buy-in and marketing
  • Training needs
  • Scope of testing
  • Lack of product requirements
  • Risk assumptions
  • Usage assumptions
  • Resource availability
  • Feature creep
  • Poor-quality software

Possible contingencies include:

  • Reducing the scope of the application
  • Delaying implementation
  • Adding resources
  • Reducing quality processes

Refer to Chapter 2 - Risk Analysis for more information on planning risks and contingencies.

Approvals

The approver(s) should be the person or persons who can declare that the software is ready to move to the next stage. For example, the approver on a Unit Test Plan might be the Development Manager. The approvers on a System Test Plan might be the people in charge of the system test and whoever is going to receive the product next, which may be the customer if they're going to perform the Acceptance Testing. Since this is a Master Test Plan, there may be many approvers, including developers, testers, customers, QA, and configuration management. One of the important parts of the approval section of the test plan is the signature page. Figure 3-13 shows an example of a signature page.

Figure 3-13: Sample Signature Page

  Key Point 

The approver(s) should be the person or persons who can declare that the software is ready to move to the next stage.

The author(s) should sign in the appropriate block and enter the date that this draft of the plan was completed. In our sample signature page, we've also included a place for the reviewer to sign and date the document and check the block indicating whether or not he/she is recommending approval. The reviewers should be technical or business experts and are usually not managers. If some of the approvers lack the technical or business expertise to understand the entire document, their approval may be based partly upon the expertise and reputation of the reviewers.

In our sample signature block, we've included a space for the approver to sign and date the document and indicate approval "as-is" or conditionally. The approver(s) should be the person(s) who have the authority to accept or reject the terms of this document. Even though we're anxious to get the approvers to "sign off" on the plan, we really want their buy-in and commitment - not just their signature. If you wait until the plan is written and then circulate the document for approval, it's much harder to get buy-in, and the most you can hope for is just a signature. In order to get the commitment we want, the approver(s), or their representatives, should be involved in the creation and/or review of the test plan during its development. It's part of your challenge, as the test planner, to determine how to involve all the approvers in the test planning process.

  Key Point 

In order to get the commitment we want, the approver(s), or their representatives, should be involved in the creation and/or review of the test plan during its development.

Ideally, we'd like to have the developers and users actually help write the test plan. For example, convincing the development manager or one of the senior developers to explain how unit testing will be conducted is much more effective than having someone from the testing group try to describe how unit testing will be done. Often, though, the key developer and/or user may not be willing (or may not have enough time) to actually write the plan. We've found that one way to involve developers and users early in the development of the test plan is to invite them to a test planning meeting. While few people like attending meetings, many prefer them over helping write the plan. During the course of the meeting, you should go through the entire template and identify the issues. Then, publish the first rough draft of the plan as the minutes of the meeting. You might want to preface your e-mail with, "Is this what we agreed on?" If you follow these steps, you're well on your way to achieving buy-in for your test plan.

Test planning is a lot of work and can be time consuming. If you're under the gun to get the next release out the door, you may argue that you can't afford to spend time creating a test plan. On the contrary, we hope that you will agree that you can't afford to begin testing without a good test plan.




