8.1. Test Plans and Test Cases

The goal of test planning is to establish the list of tasks that, if performed, will identify all of the requirements that have not been met in the software. The main work product is the test plan. There are many standards that can be used for developing test plans. Table 8-1 shows the outline of a typical test plan. (This outline was adapted from IEEE 829, the most common standard for software test plans.)
The test plan represents the overall approach to the test. In many ways, the test plan serves as a summary of the test activities that will be performed. It shows how the tests will be organized, and outlines all of the testers' needs that must be met in order to properly carry out the test. The test plan is especially valuable because it is not a difficult document to review, so the members of the engineering team and senior managers can inspect it.

The bulk of the test planning effort is focused on creating the test cases. A test case is a description of a specific interaction that a tester will have with the software, in order to test a single behavior. Test cases are very similar to use cases, in that they are step-by-step narratives that define a specific interaction between the user and the software. However, unlike use cases, they contain references to specific features of the user interface. The test case contains actual data that must be entered into the software and the expected result that the software must generate. A typical test case includes a name, the requirement being exercised, the precondition, the step-by-step procedure, and the expected results, usually laid out in a table.
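The sections of a test case can be pictured as a simple record. The sketch below is only an illustration of that structure, using Python; the field names mirror the sections described in this chapter, not the schema of any real test-management tool.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TestCase:
    """Illustrative record for one test case; field names mirror the
    sections described in the text (not a real tool's schema)."""
    name: str                # e.g., "TC-47"
    requirement: str         # requirement being exercised, e.g., "FR-4"
    precondition: str        # state the software must be in before the test
    steps: List[str] = field(default_factory=list)             # exact actions
    expected_results: List[str] = field(default_factory=list)  # observable outcomes

tc = TestCase(
    name="TC-47",
    requirement="FR-4",
    precondition="The software is in base state BS-12.",
)
tc.steps.append('Type "lowercase text" into the Find field and click Replace.')
tc.expected_results.append("The replacement text is inserted in all lowercase.")
```

Because every section is an explicit field, a reviewer (or a script) can check that no test case is missing its precondition or expected results.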
Table 8-2 shows an example of a test case that would exercise one specific behavior in requirement FR-4 from the discussion of functional requirements in Chapter 6. This requirement specified how a search-and-replace function must deal with case sensitivity. One part of that requirement said, "If the original text was all lowercase, then the replacement text must be inserted in all lowercase."
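To make the quoted rule concrete, here is a minimal sketch of a check for that one behavior. The function `replace_preserving_case` is hypothetical (it implements only the lowercase bullet point of FR-4, not the whole requirement), but the assertion shows the shape of a test with exact input data and an exact expected result.

```python
import re

def replace_preserving_case(text, find, replace):
    """Hypothetical search-and-replace implementing only the quoted FR-4
    rule: if the matched text was all lowercase, insert the replacement
    in all lowercase. Other FR-4 bullet points are omitted."""
    def sub(match):
        return replace.lower() if match.group(0).islower() else replace
    return re.sub(re.escape(find), sub, text, flags=re.IGNORECASE)

# Exact input and exact expected result, as a test case would specify:
# the original "cat" is all lowercase, so "Dog" must be inserted as "dog".
assert replace_preserving_case("the cat sat", "cat", "Dog") == "the dog sat"
```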
Note that this test case includes interactions with design elements like text fields, buttons, and windows. This is one of the main differences between a use case and a test case. A use case specifically does not talk about design elements, in order to avoid constraining the designers. A test case, on the other hand, must be very specific about the interaction, because the design has already been decided upon, and part of the purpose of the test case is to exercise that design. This means that the test cases cannot be completed until the design of the software is finished.

A project manager should be aware of the characteristics of a good test case. A test case describes in complete detail everything that the tester must do: the names of the buttons that the tester must click on, the menu items that should be selected, the exact data that must be typed in, etc. All of the expected behavior of the software (such as windows dismissed or error messages displayed) must be described as well. The goal is to make each test case repeatable, so that no two people will test the software differently. Table 8-3 shows an example of a test case that is widely open to interpretation. It may seem specific at first glance, but there are some serious problems with it:
In short, this test case is not repeatable. It may seem intuitive to make a test case more general, in order to capture a wider range of functionality. However, the test case itself will be run only once during each test iteration. Instead of trying to make the test case more general, multiple test cases should be added to the test plan, in order to verify each specific behavior. For example, there should be separate test cases for clicking in the text fields and for tabbing between them. If the tester wants to verify that the find-and-replace function works with long strings as well as short ones, or with numbers and symbols as well as alphabetical characters, each of those should be a separate test case. (The test case name should be used to differentiate between these tests.)

Another important characteristic is that each test case describes one, and only one, test. The reason for this is that the test case should isolate the specific behavior being tested, to ensure that any defect that is found is a problem with that feature only. One of the complexities of software is that there is usually a practically infinite number of possible feature combinations, and the only way to make sure that those combinations are not interacting improperly is to isolate each specific behavior. That makes it much easier to determine the root cause of any defect found. For example, Table 8-4 contains the "Expected Results" section of a poorly designed test case that exercises all of the bullet points in requirement FR-4. If a defect were found in bullet point number 4, it would be difficult to determine whether the defect arose because those specific actions were done in sequence, or whether it was simply an isolated defect.
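The same one-behavior-per-test-case principle shows up in automated testing. In this sketch, Python's built-in `str.replace` stands in for the find-and-replace feature under test; the point is that each behavior from the paragraph above (short strings, long strings, numbers and symbols) gets its own isolated, repeatable check, so a failure points at exactly one behavior.

```python
# str.replace stands in for the hypothetical feature under test.
# Each behavior gets its own test, with exact data and an exact result.

def test_replaces_short_string():
    assert "short text".replace("short", "long") == "long text"

def test_replaces_long_string():
    long_text = "x" * 10_000
    assert (long_text + " end").replace("end", "END").endswith("END")

def test_replaces_numbers_and_symbols():
    assert "price: $5".replace("$5", "$6") == "price: $6"

for check in (test_replaces_short_string, test_replaces_long_string,
              test_replaces_numbers_and_symbols):
    check()  # a failure here isolates exactly one behavior
```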
Test cases are usually strung together in one long interaction with the software. This means that the results in Table 8-4 should really be verified using five different test cases, one per bullet point in requirement FR-4. For example, since test case TC-47 verifies the second bullet point in FR-4, TC-48 could verify the third bullet point and require in its precondition that TC-47 has been run. The precondition for TC-47, in turn, would require that TC-46 be run.

To ensure that the test cases all start out with the same document open, each test case depends on a base state, or a condition of the software that can be reproduced at any time. A base state is an anchor point that is easy to navigate to. Test case TC-47 contains two references to a base state labeled BS-12. The first reference is in the Precondition section: the test case requires that the software be in its base state. The second reference is at the end of the Expected Results section: the tester must return the software to the base state after the test case, in order to reset it for the next one. This ensures that whether the test passes or fails, it will not have any side effects on tests that are executed after it. Table 8-5 shows the definition of this base state. It is in the same form as a test case (note that since it does not exercise a particular requirement, the "Requirement" section contains the text "N/A").
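The base state idea has a close analog in xUnit-style test frameworks: a fixture that is set up before each test and torn down after it, whether the test passes or fails. The sketch below shows the mapping using Python's `unittest`; the document contents and the class name `TC47` are invented for illustration.

```python
import unittest

class TC47(unittest.TestCase):
    """Sketch: the base state (BS-12 in the text) mapped onto xUnit
    fixtures. setUp() reproduces the base state before the test;
    tearDown() returns to it afterward, passing or failing."""

    def setUp(self):
        # Reproduce the base state: a known document is open.
        self.document = "This is the base-state document."

    def test_second_bullet_of_fr4(self):
        # One isolated behavior, with exact input and expected result.
        result = self.document.replace("base-state", "modified")
        self.assertEqual(result, "This is the modified document.")

    def tearDown(self):
        # Return to the base state so later tests see no side effects.
        self.document = None

# Run the single test against a fresh result object.
result = unittest.TestResult()
TC47("test_second_bullet_of_fr4").run(result)
```

Because `tearDown` runs even after a failure, a broken test cannot leak state into the tests that follow it, which is exactly the property the base state is meant to guarantee.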
It is not necessary for every test case to start out in a base state. In fact, it is often useful to string a set of test cases together so that the precondition of each one depends on the previous test case passing. However, when a string of test cases goes for a long time without returning to a base state, there is a risk that areas of the application will go untested in the event of a failure: if a test case fails, the results of the test cases that follow it simply cannot be trusted until the software is returned to a base state.

Once all of the test cases and base states are defined, they should be combined into a single test case document. This document is usually far longer than any other document produced over the course of the software project. It contains a separate table for each test case and base state. Each of the test cases and base states should be cross-referenced with the "Features to be Tested" section of the test plan, which should contain the complete Name field of each test case and base state. Typically, the test cases and base states appear in the test case document in the same order that they appear in the test plan.

The test case document should have an outline that follows the software requirements specification: it should contain one section for each use case and requirement, and each section should contain a set of test cases that fully test that requirement. This makes the test cases much easier to inspect, because a reviewer can look at a single section and judge whether the test cases in it fully exercise the requirement that they are supposed to be testing.

Once the test cases are complete, they should be inspected by the engineering team (see Chapter 5). The test plan and test cases should be collaborative documents; if the team does not give input into them, then it is likely that the software will fail to implement certain behavior that the users expect.
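The cross-referencing described above can be sketched as a simple traceability check: map each test case name to the requirement it exercises, then look for requirements with no test cases at all. The requirement and test-case identifiers below are invented for illustration.

```python
# Hypothetical cross-reference of test cases to the requirements
# they exercise, as the "Features to be Tested" section would list them.
test_cases = {
    "TC-46": "FR-4",
    "TC-47": "FR-4",
    "TC-48": "FR-4",
    "TC-52": "FR-5",
}
requirements = ["FR-4", "FR-5", "FR-6"]

# A reviewer's question, automated: which requirements have no tests?
covered = set(test_cases.values())
untested = [r for r in requirements if r not in covered]
print(untested)  # FR-6 has no test cases yet
```

Organizing the test case document by requirement makes this kind of gap obvious to a reviewer even without tooling, since an empty section stands out.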
This inspection will generally have a narrower audience than the test plan inspection, because the document is much longer and more technical. Minimally, it should be reviewed by another software tester, the requirements engineer who built the requirements that are being tested, and the programmer who implemented them.

8.1.1. Inspection Checklist

The following checklist items apply to the test plan:
The following checklist items apply to the test cases: