Test Planning Topics


Many software testing books present a test plan template or a sample test plan that you can easily modify to create your own project-specific test plan. The problem with this approach is that it makes it too easy to put the emphasis on the document, not the planning process. Test leads and managers of large software projects have been known to take an electronic copy of a test plan template or an existing test plan, spend a few hours cutting, copying, pasting, searching, and replacing, and turn out a "unique" test plan for their project. They felt they had done a great thing, creating in a few hours what other testers had taken weeks or months to create. They missed the point, though, and their project showed it when no one on the product team knew what the heck the testers were doing or why.

For that reason, you won't see a test plan template in this book. What follows, instead, is a list of important topics that should be thoroughly discussed, understood, and agreed to by your entire project team, including all the testers. The list may not map perfectly to all projects, but because it's a list of common and important test-related concerns, it's likely more applicable than a test plan template. By its nature, planning is a dynamic process, so if you find yourself in a situation where the listed topics don't apply, feel free to adjust them to fit.

Of course, the result of the test planning process will be a document of some sort. The format may be predefined if the industry or the company has a standard; IEEE Standard 829-1998 for Software Test Documentation suggests a common form. Otherwise, the format will be up to your team and should be what's most effective in communicating the fruits of your work.

High-Level Expectations

The first topics to address in the planning process are the ones that define the test team's high-level expectations. They're fundamental topics that must be agreed to by everyone on the project team, but they're often overlooked. They might be considered "too obvious" and assumed to be understood by everyone, but a good tester knows never to assume anything!

  • What's the purpose of the test planning process and the software test plan? You know the reasons for test planning (okay, you will soon), but do the programmers know, do the technical writers know, does management know? More importantly, do they agree with and support the test planning process?

  • What product is being tested? Of course you believe it's the Ginsumatic v8.0, but is it, for sure? Is this v8.0 release planned to be a complete rewrite or just a maintenance update? Is it one standalone program or thousands of pieces? Is it being developed in-house or by a third party? And what is a Ginsumatic anyway?

    For the test effort to be successful, there must be a complete understanding of what the product is, its magnitude, and its scope. The product description taken from the specification is a good start, but you might be surprised if you show it to several people on the team. You don't want a programmer to proclaim, "Gee, the code I was planning to write won't perform that function!"

  • What are the quality and reliability goals of the product? This area generates lots of discussion, but it's imperative that everyone agrees to what these goals are. A sales rep will tell you that the software needs to be as fast as possible. A programmer will say that it needs to have the coolest technology. Product support will tell you that it can't have any crashing bugs. They can't all be right. How do you measure fast and cool? And how do you tell the product support engineer that the software will ship with crashing bugs? Your team will be testing the product's quality and reliability, so you need to know what your target is; otherwise, how will you know if the software is hitting it?

    A result of the test planning process must be a clear, concise, agreed-on definition of the product's quality and reliability goals. The goals must be absolute so that there's no dispute over whether they were achieved. If the salespeople want fast, have them define the benchmark: able to process 1 million transactions per second, or twice as fast as competitor XYZ running similar tasks. If the programmers want whiz-bang technology, state exactly what the technology is, and remember, gratuitous technology is a bug. As for bugs, you can't guarantee that they'll all be found; you know that's impossible. You can state, however, that the goal is for the test automation monkey to run 24 hours without crashing or that all test cases will be run without finding a new bug, and so on. Be specific. As the product's release date approaches, there should be no disagreement about what the quality and reliability goals are. Everyone should know. (One such goal is sketched just below.)
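To show what "absolute" can look like in practice, here is a minimal sketch, in Python, of the 24-hour monkey goal. The app_under_test module and its handle_event() function are hypothetical placeholders for whatever interface your monkey actually drives; this is an illustration of a checkable goal, not the book's tooling.

    import random
    import time

    import app_under_test  # hypothetical module exposing the product's event interface

    GOAL_HOURS = 24  # the absolute reliability goal agreed to in the test plan
    EVENTS = ["click", "type", "scroll", "resize", "open", "close"]

    def run_monkey(hours):
        """Feed random events until time is up; return True if nothing crashed."""
        deadline = time.time() + hours * 3600
        while time.time() < deadline:
            try:
                app_under_test.handle_event(random.choice(EVENTS))  # hypothetical call
            except Exception:
                return False  # any unhandled error means the goal was missed
        return True

    if __name__ == "__main__":
        print("Goal met" if run_monkey(GOAL_HOURS) else "Goal missed")

Either the monkey survives the agreed number of hours or it doesn't; there's nothing left to argue about when the release date arrives.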

People, Places, and Things

Test planning needs to identify the people working on the project, what they do, and how to contact them. If it's a small project, this may seem unnecessary, but even small projects can have team members scattered across long distances or undergo personnel changes that make tracking who does what difficult. A large team might have dozens or hundreds of points of contact. The test team will likely work with all of them, so knowing who they are and how to contact them is very important. The test plan should include names, titles, addresses, phone numbers, email addresses, and areas of responsibility for all the key people on the project.

Similarly, where documents are stored (what shelf or file server the test plan is sitting on), where the software can be downloaded from, where the test tools are located, and so on need to be identified. Think email addresses, servers, and websites.

If hardware is necessary for running the tests, where is it stored and how is it obtained? If there are external test labs for configuration testing, where are they located and how are they scheduled?

This topic is best described as "pointers to everything that a new tester would ask about." It's often a good test planning area for a new tester to be responsible for. As you find the answers to all your questions, simply record what you discover. What you want to know is probably what everyone will want to know, too.

Definitions

Getting everyone on the project team to agree with the high-level quality and reliability goals is a difficult task. Unfortunately, those terms are only the beginning of the words and concepts that need to be defined for a software project. Recall the definition of a bug from Chapter 1, "Software Testing Background":

  1. The software doesn't do something that the product specification says it should do.

  2. The software does something that the product specification says it shouldn't do.

  3. The software does something that the product specification doesn't mention.

  4. The software doesn't do something that the product specification doesn't mention but should.

Would you say that every person on the team knows, understands, and, more importantly, agrees with that definition? Does the project manager know what your goal as a software tester is? If not, the test planning process should work to make sure they do.

This is one of the largest problems that occurs within a project team: the ignorance of what common terms mean as they apply to the project being developed. The programmers think a term means one thing, the testers another, management another. Imagine the contention that would occur if the programmers and testers didn't have the same understanding of something as fundamental as what a bug is.

The test planning process is where the words and terms used by the team members are defined. Differences need to be identified and consensus obtained to ensure that everyone is on the same page.

Here's a list of a few common terms and very loose definitions. Don't take the list to be complete or the definitions to be fact. They are very dependent on what the project is, the development model the team is following, and the experience level of the people on the team. The terms are listed only to start you thinking about what should be defined for your projects and to show you how important it is for everyone to know the meanings.

  • Build. A compilation of code and content that the programmers put together to be tested. The test plan should define the frequency of builds (daily, weekly) and the expected quality level.

  • Test release document (TRD). A document that the programmers release with each build stating what's new, different, fixed, and ready for testing.

  • Alpha release. A very early build intended for limited distribution to a few key customers and to marketing for demonstration purposes. It's not intended to be used in a real-world situation. The exact contents and quality level must be understood by everyone who will use the alpha release.

  • Beta release. The formal build intended for widespread distribution to potential customers. Remember from Chapter 16, "Bug Bashes and Beta Testing," that the specific reasons for doing the beta need to be defined.

  • Spec complete. A schedule date when the specification is supposedly complete and will no longer change. After you work on a few projects, you may think that this date occurs only in fiction books, but it really should be set, with the specification only undergoing minor and controlled changes after that.

  • Feature complete. A schedule date when the programmers will stop adding new features to the code and concentrate on fixing bugs.

  • Bug committee. A group made up of the test manager, project manager, development manager, and product support manager that meets weekly to review the bugs and determine which ones to fix and how they should be fixed. The bug committee is one of the primary users of the quality and reliability goals set forth in the test plan.

Inter-Group Responsibilities

Inter-group responsibilities identify tasks and deliverables that potentially affect the test effort. The test team's work is driven by many other functional groups: programmers, project managers, technical writers, and so on. If the responsibilities aren't planned out, the project, specifically the testing, can become a comedy show of "I've got it, no, you take it, didn't you handle it, no, I thought you did," resulting in important tasks being forgotten.

The types of tasks that need to be defined aren't the obvious ones (testers test, programmers program). The troublesome tasks are those that potentially have multiple owners, no owner, or a shared responsibility. The easiest way to plan these and communicate the plan is with a simple table (see Figure 17.1).

Figure 17.1. Use a table to help organize inter-group responsibilities.


The tasks run down the left side and the possible owners are across the top. An x denotes the owner of a task, and a dash (-) indicates a contributor. A blank means that the group has nothing to do with the task.

Deciding which tasks to list comes with experience. Ideally, several senior members of the team can make a good first pass at a list, but each project is different and will have its own unique inter-group responsibilities and dependencies. A good place to start is to question people about past projects and what they can remember of neglected tasks.
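If your team keeps the matrix in electronic form, one way to keep it honest is to represent it as data and check it. The sketch below is a minimal illustration with made-up tasks and groups (they're assumptions, not from any real project); it simply flags any task that ends up with no owner or with more than one.

    # Roles used in the inter-group responsibility matrix.
    OWNER, CONTRIBUTOR = "x", "-"

    # Made-up example tasks and groups; a missing entry means the group
    # has nothing to do with the task.
    responsibilities = {
        "Write test plan":      {"Test": OWNER, "Project mgmt": CONTRIBUTOR},
        "Review specification": {"Project mgmt": OWNER, "Test": CONTRIBUTOR, "Programmers": CONTRIBUTOR},
        "Set up beta program":  {"Marketing": OWNER, "Test": CONTRIBUTOR},
        "Build the test lab":   {},  # deliberately unowned, to show the check firing
    }

    def check_single_owner(matrix):
        """Print any task that has no owner or more than one owner."""
        for task, groups in matrix.items():
            owners = [g for g, role in groups.items() if role == OWNER]
            if len(owners) != 1:
                print(f"Problem: '{task}' has owner(s): {owners or 'none'}")

    check_single_owner(responsibilities)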

What Will and Won't Be Tested

You might be surprised to find that not everything included with a software product is necessarily tested. There may be components of the software that were previously released and have already been tested. Content may be taken as is from another software company. An outsourcing company may supply pre-tested portions of the product.

The planning process needs to identify each component of the software and make known whether it will be tested. If a component won't be tested, there needs to be a stated reason why. It would be a disaster if a piece of code slipped through the development cycle completely untested because of a misunderstanding.

Test Phases

To plan the test phases, the test team will look at the proposed development model and decide whether unique phases, or stages, of testing should be performed over the course of the project. In a code-and-fix model, there's probably only one test phase: test until someone yells stop. In the waterfall and spiral models, there can be several test phases, from examining the product spec to acceptance testing. Yes, test planning is one of the test phases.

The test planning process should identify each proposed test phase and make each phase known to the project team. This process often helps the entire team form and understand the overall development model.

NOTE

Two very important concepts associated with the test phases are the entrance and exit criteria. The test team can't just walk in to work on Monday morning, look at the calendar, and see that they're now in the next phase. Each phase must have criteria defined for it that objectively and absolutely declare whether the phase is over and the next one has begun.

For example, the spec review stage might be over when the minutes to the formal spec review have been published. The beta test stage might begin when the testers have completed an acceptance test pass with no new bugs found on the proposed beta release build.

Without explicit entrance and exit criteria, the testing will dissolve into a single, undirected effort, much like the code-and-fix development model.
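If it helps to make the criteria concrete, they can be written down as explicit, checkable conditions rather than calendar dates. The sketch below is a minimal illustration using the two examples above; the criterion strings and the status flags are assumptions for illustration, not a prescribed format.

    # Each phase's entrance/exit criteria written as explicit conditions.
    phases = {
        "Spec review": {
            "exit": "Minutes of the formal spec review have been published",
        },
        "Beta test": {
            "entrance": "Acceptance test pass completed on the proposed beta build "
                        "with no new bugs found",
        },
    }

    def phase_over(phase, status):
        """A phase is over only when its exit criterion is objectively met."""
        criterion = phases[phase].get("exit")
        return bool(criterion) and status.get(criterion, False)

    # Example usage with assumed status flags gathered from the team:
    status = {"Minutes of the formal spec review have been published": True}
    print(phase_over("Spec review", status))  # True: the next phase may begin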


Test Strategy

An exercise associated with defining the test phases is defining the test strategy. The test strategy describes the approach that the test team will use to test the software both overall and in each phase. Think back to what you've learned so far about software testing. If you were presented with a product to test, you'd need to decide if it's better to use black-box testing or white-box testing. If you decide to use a mix of both techniques, when will you apply each and to which parts of the software?

It might be a good idea to test some of the code manually and other code with tools and automation. If tools will be used, do they need to be developed or can existing commercial solutions be purchased? If so, which ones? Maybe it would be more efficient to outsource the entire test effort to a specialized testing company and require only a skeleton testing crew to oversee their work.

Deciding on the strategy is a complex task, one best made by very experienced testers, because it can determine the success or failure of the test effort. It's vitally important for everyone on the project team to understand and be in agreement with the proposed plan.

Resource Requirements

Planning the resource requirements is the process of deciding what's necessary to accomplish the testing strategy. Everything that could possibly be used for testing over the course of the project needs to be considered. For example:

  • People. How many, what experience, what expertise? Should they be full-time, part-time, contract, students?

  • Equipment. Computers, test hardware, printers, tools.

  • Office and lab space. Where will they be located? How big will they be? How will they be arranged?

  • Software. Word processors, databases, custom tools. What will be purchased, what needs to be written?

  • Outsource companies. Will they be used? What criteria will be used for choosing them? How much will they cost?

  • Miscellaneous supplies. Disks, phones, reference books, training material. What else might be necessary over the course of the project?

The specific resource requirements are very project-, team-, and company-dependent, so the test plan effort will need to carefully evaluate what will be needed to test the software. It's often difficult or even impossible to obtain resources late in the project that weren't budgeted for at the beginning, so it's imperative to be thorough when creating the list.

Tester Assignments

Once the test phases, test strategy, and resource requirements are defined, that information can be used with the product spec to break out the individual tester assignments. The inter-group responsibilities discussed earlier dealt with what functional group (management, test, programmers, and so on) is responsible for what high-level tasks. Planning the tester assignments identifies the testers (this means you) responsible for each area of the software and for each testable feature. Table 17.1 shows a greatly simplified example of a tester assignments table for Windows WordPad.

Table 17.1. High-Level Tester Assignments for WordPad

Tester     Test Assignments
Al         Character formatting: fonts, size, color, style
Sarah      Layout: bullets, paragraphs, tabs, wrapping
Luis       Configuration and compatibility
Jolie      UI: usability, appearance, accessibility
Valerie    Documentation: online help, rollover help
Ron        Stress and load


A real-world responsibilities table would go into much more detail to ensure that every part of the software has someone assigned to test it. Each tester would know exactly what they were responsible for and have enough information to go off and start designing test cases.

Test Schedule

The test schedule takes all the information presented so far and maps it into the overall project schedule. This stage is often critical in the test planning effort because a few highly desired features that were thought to be easy to design and code may turn out to be very time-consuming to test. An example would be a program that does no printing except in one limited, obscure area. No one may realize the testing impact that printing has, but keeping that feature in the product could result in weeks of printer configuration testing time. Completing a test schedule as part of test planning will provide the product team and project manager with the information needed to better schedule the overall project. They may even decide, based on the testing schedule, to cut certain features from the product or postpone them to a later release.

An important consideration with test planning is that the amount of test work typically isn't distributed evenly over the entire product development cycle. Some testing occurs early in the form of spec and code reviews, tool development, and so on, but the number of testing tasks, the number of people, and the amount of time spent testing often increase over the course of the project, with the peak coming a short time before the product is released. Figure 17.2 shows what a typical test resource graph may look like.

Figure 17.2. The amount of test resources on a project typically increases over the course of the development schedule.


The effect of this gradual increase is that the test schedule is increasingly influenced by what happens earlier in the project. If some part of the project is delivered to the test group two weeks late and only three weeks were scheduled for testing, what happens? Do the three weeks of testing now have to be crammed into one week, or does the project get delayed two weeks? This problem is known as schedule crunch.

One way to help keep the testing tasks from being crunched is for the test schedule to avoid absolute dates for starting and stopping tasks. Table 17.2 is a test schedule that would surely get the team into a schedule crunch.

Table 17.2. A Test Schedule Based on Fixed Dates

Testing Task           Date
Test Plan Complete     3/5/2001
Test Cases Complete    6/1/2001
Test Pass #1           6/15/2001 to 8/1/2001
Test Pass #2           8/15/2001 to 10/1/2001
Test Pass #3           10/15/2001 to 11/15/2001


If the test schedule instead uses relative dates based on the entrance and exit criteria defined by the testing phases, it becomes clearer that the testing tasks rely on some other deliverables being completed first. It's also more apparent how much time the individual tasks take. Table 17.3 shows an example of this.

Table 17.3. A Test Schedule Based on Relative Dates

Testing Task           Start Date                    Duration
Test Plan Complete     7 days after spec complete    4 weeks
Test Cases Complete    Test plan complete            12 weeks
Test Pass #1           Code complete build           6 weeks
Test Pass #2           Beta build                    6 weeks
Test Pass #3           Release build                 4 weeks


Many software scheduling products will make this process easier to manage. Your project manager or test manager is ultimately responsible for the schedule and will likely use such software, but you'll be asked to contribute by scheduling your specific tasks.
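Even without a scheduling product, the relative-date idea in Table 17.3 is easy to express. The sketch below is a minimal illustration; the milestone dates are made-up assumptions, and the point is only that moving a milestone moves the dependent test tasks instead of silently compressing them.

    from datetime import date, timedelta

    # Assumed milestone dates; in practice these come from the project schedule.
    milestones = {
        "spec complete":       date(2001, 2, 1),
        "code complete build": date(2001, 6, 1),
        "beta build":          date(2001, 8, 1),
        "release build":       date(2001, 10, 1),
    }

    # (task, milestone it follows, offset after that milestone, duration)
    schedule = [
        ("Test plan complete", "spec complete",       timedelta(days=7), timedelta(weeks=4)),
        ("Test Pass #1",       "code complete build", timedelta(0),      timedelta(weeks=6)),
        ("Test Pass #2",       "beta build",          timedelta(0),      timedelta(weeks=6)),
        ("Test Pass #3",       "release build",       timedelta(0),      timedelta(weeks=4)),
    ]

    for task, anchor, offset, duration in schedule:
        start = milestones[anchor] + offset
        print(f"{task}: starts {start}, ends {start + duration}")

If the beta build slips, rerunning the sketch immediately shows the new Test Pass #2 dates, which is exactly the conversation the team needs to have.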

Test Cases

You already know what test cases are from what you've learned in this book. Chapter 18, "Writing and Tracking Test Cases," will go further into detail about them. The test planning process will decide what approach will be used to write them, where the test cases will be stored, and how they'll be used and maintained.

Bug Reporting

Chapter 19, "Reporting What You Find," will describe the techniques that can be used to record and track the bugs you find. The possibilities range from shouting over a cubicle wall to sticky notes to complex bug-tracking databases. Exactly what process will be used to manage the bugs needs to be planned so that each and every bug is tracked from when it's found to when it's fixed, and never, ever forgotten.

Metrics and Statistics

Metrics and statistics are the means by which the progress and the success of the project, and of the testing, are tracked. They're discussed in detail in Chapter 20, "Measuring Your Success." The test planning process should identify exactly what information will be gathered, what decisions will be made with it, and who will be responsible for collecting it.

Examples of test metrics that might be useful are listed below; a small sketch of how a few of them might be tallied follows the list.

  • Total bugs found daily over the course of the project

  • List of bugs that still need to be fixed

  • Current bugs ranked by how severe they are

  • Total bugs found per tester

  • Number of bugs found per software feature or area
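As a minimal illustration of how a few of these could be tallied, the sketch below computes some of them from a hand-made bug list. The records and field names are assumptions for the example; a real team would pull this data from whatever bug-tracking system it uses (see Chapter 19).

    from collections import Counter

    # Made-up bug records purely for illustration.
    bugs = [
        {"id": 1, "tester": "Al",    "area": "Formatting", "severity": 2, "fixed": True},
        {"id": 2, "tester": "Sarah", "area": "Layout",     "severity": 1, "fixed": False},
        {"id": 3, "tester": "Al",    "area": "Layout",     "severity": 3, "fixed": False},
    ]

    open_bugs = [b for b in bugs if not b["fixed"]]
    print("Bugs still to be fixed:", [b["id"] for b in open_bugs])
    print("Open bugs by severity: ", [b["id"] for b in sorted(open_bugs, key=lambda b: b["severity"])])
    print("Bugs found per tester: ", Counter(b["tester"] for b in bugs))
    print("Bugs found per area:   ", Counter(b["area"] for b in bugs))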

Risks and Issues

A common and very useful part of test planning is to identify potential problem or risky areas of the project, ones that could have an impact on the test effort.

Suppose that you and 10 other new testers, whose total software test experience was reading this book, were assigned to test the software for a new nuclear power plant. That would be a risk. Maybe some new software needs to be tested against 1,500 modems but there's only time in the project schedule to test 500 of them. Another risk.

As a software tester, you'll be responsible for identifying risks during the planning process and communicating your concerns to your manager and the project manager. These risks will be identified in the software test plan and accounted for in the schedule. Some will come true, others will turn out to be benign. The important thing is to identify them early so that they don't appear as a surprise late in the project.


