Section 8.1. Test Plans and Test Cases



The goal of test planning is to establish the list of tasks that, if performed, will identify all of the requirements that have not been met in the software. The main work product is the test plan. There are many standards that can be used for developing test plans. Table 8-1 shows the outline of a typical test plan. (This outline was adapted from IEEE 829, the most common standard for software test plans.)

Table 8-1. Test plan outline


Purpose

A description of the purpose of the application under test.


Features to be tested

A list of the features in the software that will be tested. It is a catalog of all of the test cases (including a test case number and title) that will be conducted, as well as all of the base states.


Features not to be tested

A list of any areas of the software that will be excluded from the test, as well as any test cases that were written but will not be run.


Approach

A description of the strategies that will be used to perform the test.


Suspension criteria and resumption requirements

Suspension criteria are the conditions that, if satisfied, require that the test be halted. Resumption requirements are the conditions that are required in order to restart a suspended test.


Environmental Needs

A complete description of the test environment or environments. This should include a description of hardware, networking, databases, software, operating systems, and any other attribute of the environment that could affect the test.


Schedule

An estimated schedule for performing the test. This should include milestones with specific dates.


Acceptance criteria

Any objective quality standards that the software must meet in order to be considered ready for release. This may include things like stakeholder sign-off and consensus, requirements that the software must have been tested under certain environments, maximum allowable defect counts at each priority and severity level, minimum test coverage numbers, etc.


Roles and responsibilities

A list of the specific roles that people in the organization will need to fill in order to carry out the test. This list can name the specific people who will be testing the software and what each is responsible for.


The test plan represents the overall approach to the test. In many ways, the test plan serves as a summary of the test activities that will be performed. It shows how the tests will be organized, and outlines all of the testers' needs that must be met in order to properly carry out the test. The test plan is especially valuable because it is not a difficult document to review, so the members of the engineering team and senior managers can inspect it.

The bulk of the test planning effort is focused on creating the test cases. A test case is a description of a specific interaction that a tester will have with the software in order to test a single behavior. Test cases are very similar to use cases, in that they are step-by-step narratives that define a specific interaction between the user and the software. However, unlike use cases, they contain references to specific features of the user interface. The test case contains the actual data that must be entered into the software and the expected result that the software must generate. A typical test case includes these sections, usually laid out in a table:

  • A unique name and number

  • A requirement that this test case is exercising

  • Preconditions that describe the state of the software before the test case (which is often a previous test case that must always be run before the current test case)

  • Steps that describe the specific steps that make up the interaction

  • Expected results that describe the expected state of the software after the test case is executed
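These sections can be pictured as a simple record. The following is a hypothetical Python sketch of that structure, populated with fragments of test case TC-47 from Table 8-2; the field names are illustrative, not part of any standard:

```python
from dataclasses import dataclass

# Hypothetical sketch: the test case sections above captured as a record.
# The field names are illustrative, not drawn from IEEE 829 or any tool.
@dataclass
class TestCase:
    name: str                    # unique name and number, e.g. "TC-47: ..."
    requirement: str             # the requirement this test case exercises
    preconditions: list[str]     # state of the software before the test
    steps: list[str]             # the specific interaction, step by step
    expected_results: list[str]  # expected state after the test executes

tc47 = TestCase(
    name="TC-47: Verify that lowercase data entry results in lowercase insert",
    requirement="FR-4 (Case sensitivity in search-and-replace), bullet 2",
    preconditions=["The test document TESTDOC.DOC is loaded (base state BS-12)."],
    steps=[
        'Click on the "Search and Replace" button.',
        'Enter "This is the Search Term" in the "Search Term" field.',
    ],
    expected_results=["The search-and-replace window is dismissed."],
)
```

Storing test cases in a structured form like this also makes the traceability and completeness checks later in this section easy to automate.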

Table 8-2 shows an example of a test case that would exercise one specific behavior in requirement FR-4 from the discussion of functional requirements in Chapter 6. This requirement specified how a search-and-replace function must deal with case sensitivity. One part of that requirement said, "If the original text was all lowercase, then the replacement text must be inserted in all lowercase."

Table 8-2. Example of a test case

Name

TC-47: Verify that lowercase data entry results in lowercase insert

Requirement

FR-4 (Case sensitivity in search-and-replace), bullet 2

Preconditions

The test document TESTDOC.DOC is loaded (base state BS-12).

Steps

  1. Click on the "Search and Replace" button.

  2. Click in the "Search Term" field.

  3. Enter This is the Search Term.

  4. Click in the "Replacement Text" field.

  5. Enter This IS THE Replacement TeRM.

  6. Verify that the "Case Sensitivity" checkbox is unchecked.

  7. Click the OK button.

Expected results

  1. The search-and-replace window is dismissed.

  2. Verify that in line 38 of the document, the text this is the search term has been replaced by this is the replacement term.

  3. Return to base state BS-12.


Note that this test case includes interactions with design elements like text fields, buttons, and windows. This is one of the main differences between a use case and a test case. A use case specifically does not talk about design elements, in order to avoid constraining the designers. Test cases must be very specific about how the interaction will be carried out, because the design has been decided upon, and part of the purpose of the test case is to exercise that design. This means that the test case cannot be completed until the design of the software is finished.

A project manager should be aware of the characteristics of a good test case. A test case describes in complete detail everything that the tester must do. It contains the names of the buttons that the tester must click on, the menu items that should be selected, the exact data that must be typed in, etc. All of the expected behavior of the software (such as windows being dismissed or error messages being displayed) must be described. The goal is to make each test case repeatable, so that no two people will test the software differently.

Table 8-3 shows an example of a test case that is wide open to interpretation. It may seem specific at first glance, but there are some serious problems with it:

  • The test case does not specify exactly how the search-and-replace function is accessed. If there are several ways to bring up this function, and a defect is found that is specific to only one of them, it may be difficult to repeat precisely.

  • The test case is not data-specific. Every tester could enter a different search term and replacement term. If a defect only occurs for certain terms, it will be difficult to reproduce.

  • The test case does not specify how the data is entered into the field. It is possible that a problem might come up when the user uses the tab key to navigate between fields, but cannot be reproduced by clicking on them.

Table 8-3. This poorly designed test case does not describe the interaction precisely enough

Steps

  1. Bring up search-and-replace.

  2. Enter a lowercase word from the document in the search term field.

  3. Enter a mixed-case word in the replacement field.

  4. Verify that case sensitivity is not turned on and execute the search.

Expected results

  1. Verify that the lowercase word has been replaced with the mixed-case term in lowercase.


In short, this test case is not repeatable. It may seem intuitive to make the test case more general, in order to capture a wider range of functionality. However, the test case itself will be run only once during each test iteration. Instead of trying to make the test case more general, multiple test cases should be added to the test plan, in order to verify each specific variation. For example, there should be separate test cases for clicking in the text fields and for tabbing between them. If the tester wants to verify that the search-and-replace function works with long strings as well as short ones, or with numbers and symbols as well as alphabetical characters, each of those should be a separate test case. (The test case names should be used to differentiate between these tests.)
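One way to keep such a family of variations manageable is to generate one uniquely named test case per data variation, rather than writing a single general case. A hypothetical sketch, assuming illustrative search terms and test case numbering:

```python
# Hypothetical sketch: each data variation becomes its own uniquely
# named test case. The variation labels, search terms, and numbering
# are illustrative only.
variations = {
    "short lowercase term": "cat",
    "long lowercase term": "a very long lowercase search term " * 3,
    "numeric term": "12345",
    "term with symbols": "$%&!",
}

# The starting number (TC-60) is arbitrary; in practice it would come
# from the test plan's numbering scheme.
test_cases = [
    {"name": f"TC-{60 + i}: Lowercase replace, {label}", "search_term": term}
    for i, (label, term) in enumerate(variations.items())
]

for tc in test_cases:
    print(tc["name"])
```

Each generated case still follows the full, repeatable step-by-step script; only the data and the differentiating name change.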

Another important characteristic is that each test case describes one, and only one, test. The reason for this is that the test case should isolate the specific behavior that is being tested, to ensure that any defect that is found is a problem with that feature only. One of the complexities of software is that there are usually an infinite number of possible feature combinations, and the only way to make sure that those combinations are not interacting improperly is to isolate each specific behavior. That makes it much easier to determine the root cause of any defect found.

For example, Table 8-4 contains the "Expected Results" section of a poorly designed test case that exercises all of the bullet points in requirement FR-4. If a defect were found in bullet point number 4, it would be difficult to determine whether the defect arose because those specific actions were done in sequence, or if it were simply an isolated defect.

Table 8-4. This poorly designed test case has more than one interaction

Expected Results

  1. The search and replace window is dismissed.

  2. Verify that in line 36 of the document, the text THIS IS THE SEARCH TERM has been replaced by THIS IS THE REPLACEMENT TERM.

  3. Verify that in line 38 of the document, the text this is the search term has been replaced by this is the replacement term.

  4. Verify that in line 43 of the document, the text This is the search term has been replaced by This is the replacement term.

  5. Verify that in line 44 of the document, the text This Is the Search Term has been replaced by This Is the Replacement Term.

  6. Verify that in line 44 of the document, the text thIS is the SEarCh Term has been replaced by This IS THE Replacement TeRM.


Test cases are usually strung together in one long interaction with the software. This means that the results in Table 8-4 should really be verified using five different test cases, one per bullet point in requirement FR-4. For example, since test case TC-47 verifies the second bullet point in FR-4, TC-48 could verify the third bullet point and have in its precondition that TC-47 has been run. The precondition for TC-47, in turn, would require that TC-46 be run.

To ensure that the test cases all start out with the same document open, each test case depends on a base state, or a condition of the software that can be reproduced at any time. A base state is an anchor point that is easy to navigate to. Test case TC-47 contains two references to a base state labeled BS-12. The first reference is in the Precondition section: the test case requires that the software be in its base state. The second reference is at the end of the Expected Results section: the tester must return the software to the base state after the test case, in order to reset it for the next one. This ensures that whether the test passes or fails, it will not have any side effects on any tests that are executed after it. Table 8-5 shows the definition of this base state. It is in the same form as a test case (note that since it is not exercising a particular requirement, the "Requirement" section contains the text "N/A").

Table 8-5. Base state BS-12

Name

BS-12: Load test document TESTDOC.DOC

Requirement

N/A

Preconditions

No user is logged in and no applications are running.

Steps

  1. Log in as user "joetester" with password "test1234".

  2. Launch the application.

  3. Select the File/Open menu item.

  4. Enter "/usr/home/joetester/TESTDOC.DOC".

  5. Click OK.

Expected Results

  1. Verify that the Open Window dialog box has been dismissed.

  2. Verify that the file TESTDOC.DOC is loaded.

  3. Verify that the file TESTDOC.DOC is given focus.

  4. Verify that the file TESTDOC.DOC is active.
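For teams that automate their tests, a base state like BS-12 maps naturally onto the setup/teardown (fixture) pattern. The sketch below is hypothetical; `FakeApp` and its methods are stand-ins for whatever automation hooks the application under test actually provides:

```python
# Hypothetical sketch of base state BS-12 as an automated setup/teardown.
# FakeApp is a minimal stand-in for the application under test; a real
# harness would drive the actual UI or API.
class FakeApp:
    def __init__(self):
        self.doc = None
        self.user = None

    def close_all(self):
        self.doc = None

    def login(self, user, password):
        self.user = user

    def open_file(self, path):
        self.doc = path.rsplit("/", 1)[-1]

    def active_document(self):
        return self.doc


def reach_base_state_bs12(app):
    """Return the application to base state BS-12: TESTDOC.DOC loaded."""
    app.close_all()                               # no documents open
    app.login("joetester", "test1234")            # step 1 of BS-12
    app.open_file("/usr/home/joetester/TESTDOC.DOC")
    assert app.active_document() == "TESTDOC.DOC"  # expected results of BS-12


def run_test_case(app, test_case):
    reach_base_state_bs12(app)     # precondition: start from the base state
    try:
        test_case(app)
    finally:
        reach_base_state_bs12(app)  # reset, so later tests see no side effects
```

The `finally` clause mirrors the last expected result of TC-47: whether the test passes or fails, the software is returned to BS-12 so that later test cases are unaffected.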


It is not necessary for every test case to start out in a base state. In fact, it is often useful to string a set of test cases together so that the precondition of each one depends on the previous test case passing. However, when there are strings of test cases that go for a long time without returning to a base state, there is a risk that areas of the application will go untested in the event of a failure. If a test case fails, the results of the following test cases simply cannot be trusted until the software is returned to a base state.
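This bookkeeping can be sketched as a small runner that marks chained results as untrusted after a failure, until the next test case that starts from a base state. The test case names, base-state flags, and pass/fail outcomes below are illustrative:

```python
# Hypothetical sketch: when test cases are chained (each precondition is
# the previous case passing), a failure makes every following result
# "untrusted" until a test case that starts from a base state.
def run_chain(test_cases):
    """test_cases: list of (name, starts_from_base_state, passes)."""
    results, trusted = {}, True
    for name, from_base, passes in test_cases:
        if from_base:
            trusted = True            # a base state resets the chain
        if not trusted:
            results[name] = "untrusted"
            continue
        results[name] = "pass" if passes else "fail"
        if not passes:
            trusted = False           # later chained results can't be trusted
    return results

chain = [
    ("TC-46", True,  True),
    ("TC-47", False, False),  # fails mid-chain
    ("TC-48", False, True),   # ran, but its result cannot be trusted
    ("TC-49", True,  True),   # starts from a base state, trusted again
]
print(run_chain(chain))
# {'TC-46': 'pass', 'TC-47': 'fail', 'TC-48': 'untrusted', 'TC-49': 'pass'}
```

The trade-off is visible in the output: the longer the chain between base states, the more results are discarded when something fails early.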

Once all of the test cases and base states are defined, they should be combined into a single test case document. This document is usually far longer than any other document produced over the course of the software project. It contains a separate table for each test case and base state. Each of the test cases and base states should be cross-referenced with the "Features to be Tested" section of the test plan. This section should contain the complete Name field of each test case and base state. Typically, the test cases and base states appear in the test case document in the same order that they appear in the test plan.

The test case document should have an outline that follows the software requirements specification: it should contain one section for each use case and requirement, and in that section, there should be a set of test cases that fully test that requirement. This makes the test cases much easier to inspect, because a reviewer can look at a single section and judge whether the test cases in that section fully exercise the requirement that they are supposed to be testing.
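Such a cross-reference lends itself to an automated check: every requirement should have at least one test case, and every test case should trace to a known requirement. A hypothetical sketch, with illustrative identifiers:

```python
# Hypothetical traceability check. The requirement and test case
# identifiers are illustrative, not taken from a real project.
srs_requirements = {"FR-1", "FR-2", "FR-3", "FR-4"}
test_case_traces = {
    "TC-46": "FR-4",
    "TC-47": "FR-4",
    "TC-48": "FR-4",
    "TC-50": "FR-1",
    "TC-99": "FR-9",   # traces to a requirement not in the SRS
}

covered = set(test_case_traces.values())
untested = srs_requirements - covered              # requirements with no test case
orphans = {tc for tc, req in test_case_traces.items()
           if req not in srs_requirements}         # test cases tracing nowhere

print(sorted(untested), sorted(orphans))
# ['FR-2', 'FR-3'] ['TC-99']
```

A reviewer inspecting the test case document is performing exactly this check by eye, one requirement section at a time; the script only flags gaps, it cannot judge whether the test cases in a section fully exercise their requirement.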

Once the test cases are complete, they should be inspected by the engineering team (see Chapter 5). The test plan and test cases should be collaborative documents; if the team does not give input into them, then it is likely that the software will fail to implement certain behavior that the users expect. This inspection will generally have a narrower audience than the test plan because the document is much longer and more technical. Minimally, it should be reviewed by another software tester, the requirements engineer who built the requirements that are being tested, and the programmer who implemented them.

8.1.1. Inspection Checklist

The following checklist items apply to the test plan.


Completeness

Does the document meet all established templates and standards?

Is the document complete?

Are there any requirements that are not tested?

Are there any features that are planned for testing but should be excluded?


Feasibility

Can the testing as planned be accomplished within the known cost and schedule constraints?

Can every test described in the test plan be reasonably conducted?


Environment

Is the description of the environment complete?

Is the test plan traceable to any nonfunctional requirements that define the operating environment?


Performance

Does the test plan account for the expected load for concurrent users, large databases, or other performance requirements?

Can the performance tests be traced back to requirements in the specification?


Acceptance Criteria

Do the acceptance criteria match the standards of the organization?

The following checklist items apply to the test cases:


Clarity

Does each test case have a clear flow of events?

Does each test case test only one specific interaction?

Does each test case describe the interaction using specific user interface and data elements?

Is each test case repeatable by someone uninitiated on the project?


Completeness

Is every requirement in the SRS verified fully with individual test cases?

Are all of the steps in each test case necessary?

Are there any steps that are missing?

Are all alternative paths and exceptions accounted for?


Accuracy

For every action, is there an expected result?

For every behavior in the requirement, is there a verification of the actual behavior?

Is the test case data-specific? If data must be entered or modified, is that data provided?


Traceability

Is each test case uniquely identified with a name and a number?

Can each test case be traced back to a specific requirement?



Applied Software Project Management
ISBN: 0596009488
Year: 2003
Pages: 122