A Testing Process Overview


Testing is usually listed as the last activity in virtually every software development process, coming after implementation. Used that way, the term refers to the kind of testing that attempts to determine whether the product as a whole functions as it should. From our view, testing is an activity that applies at many points during development, not just at the end, and to many artifacts, not just code. Because the goal of testing is really different from the goal of development, we prefer to consider development and testing as two separate, but intimately connected, processes.

Development and testing processes are distinct primarily because they have different goals and different measures of success. Development strives to build a product that meets a need; testing strives to answer questions about the product, including whether it meets that need. Consider, for example, the defect rate observed after testing some developed software, that is, the ratio of test cases that fail to the total number of test cases run. The lower the defect rate, the more successful the development is considered to be. Conversely, the higher the defect rate, the more successful the testing is considered to be.

The roles of developing and testing functionality are often assigned to different people, reinforcing the idea that the processes are distinct. Separating the two roles is particularly productive from a system test perspective: the testers write test cases independently of those who develop the code, which helps ensure that the resulting system does what the requirements actually intend rather than what the developers interpreted them to mean.

The same is true at all levels of testing. In most shops, developers are responsible for some testing, such as what has traditionally been called unit and integration testing. To be successful, however, anyone who takes on both the developer and tester roles must pursue each goal with equal vigor. To achieve this, we use buddy testing, in which one developer is assigned to unit test the code written by another. In this way, each developer has a single goal for any given piece of functionality: developing one piece while testing another.

Even though the two processes are distinct, they are intimately related. Their activities even overlap, as when test cases are designed, coded, and executed. Together they encompass the activities necessary to produce a useful product. Defects can be introduced during every phase of the development process, so each development activity has an associated testing activity. When something is developed, it is tested using products of the testing process to determine whether it appropriately meets a set of requirements.

The testing and development processes are in a feedback loop (see Figure 3.3). The testing process feeds identified failures back into the development process.[1] Failure reports provide a set of symptoms that a developer uses to identify the exact location of a fault or error. The development process feeds new and revised designs and implementations into the testing process. Testing of development products will help identify defective test cases when testers determine that "failures" result from problems with test cases themselves or the drivers that execute them, and not the software under test.[2]

[1] The purpose of testing is to identify failures and not to identify the error or the fault that gave rise to a failure. The developers are responsible for finding the source of a failure.

[2] An interesting aspect of test case development is determining who checks the test cases. Most cases are reviewed, but most processes involve very little formal testing of test cases.

Figure 3.3. The testing and development processes form a feedback loop


In the context of this feedback loop, the form and content of development products affect the testing process. When developers select methods and tools, they establish constraints on the testing process. Consider, for example, how the degree of formality of a class specification affects the ease with which test cases can be identified for testing that class. The testing perspective must therefore be considered, preferably through the involvement of professional testers, when development methods and tools are selected.

Testability

One of the pieces of information fed back to the developers is an evaluation of how amenable the software is to being tested. Testability is related to how easily you can evaluate the results of the tests. In Chapter 7 we will show how our testing architecture, PACT, improves testability by overcoming information hiding. Testability also provides an appropriate context in which to examine the question of when to test. As layers of software are added on top of other layers, visibility into the stored values becomes increasingly clouded. The lower the level at which a piece of software is tested, the more easily its internals can be observed to verify test results and, by definition, the more testable it is.
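As a generic illustration of this observability idea (a minimal sketch in Java; it is not the book's PACT architecture, and the BoundedCounter class is invented for the example), a class tested at its own level can expose internal state directly for result verification, whereas a higher layer sees that state only through whatever outputs happen to reach it:

    // A hidden internal value and a test-support accessor that trades a
    // little information hiding for observability at the class level.
    public class BoundedCounter {
        private int count = 0;           // hidden state a tester wants to verify
        private final int limit;

        public BoundedCounter(int limit) { this.limit = limit; }

        public void increment() {
            if (count < limit) count++;  // silently stops at the limit
        }

        // Package-private accessor used only by class-level tests.
        int currentCount() { return count; }
    }

A class-level test can call currentCount() to verify results directly; a system-level test can only infer the counter's value from downstream behavior.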

The form and quality of a requirements specification also affect the process. Product requirements are the source of test cases in system and acceptance testing. System testers should participate in the gathering and validation of the requirements in order to have a sufficient understanding of them to assess risks and testability.

Test Cases and Test Suites

The basic component of testing is a test case. In its most general form, a test case is a pair (input, expected result), in which input describes an input to the software under test and expected result describes the output that the software should exhibit for that input. Inputs and expected results are not necessarily simple data values, such as strings or integers; they can be arbitrarily complex. Inputs often incorporate system state information as well as user commands and data values to be processed. An expected result includes not only perceivable things, such as printed reports, audible sounds, or changes on a display screen, but also changes to the software system itself, for example, an update to a database or a change in system state that affects the processing of subsequent inputs. A test case execution is a run of the software that provides the inputs specified in the test case, observes the actual results, and compares them to the results specified by the test case. If the actual result differs from the expected result, then a failure has been detected and we say the software under test "fails the test case." If the actual result matches the expected result, then we say the software "passes the test case."
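To make the definition concrete, here is a minimal sketch in Java of a test case as an (input, expected result) pair, together with its execution; the Adder component and all of the names here are invented for illustration, not taken from this book:

    // A test case for a trivial software under test: the pair (input, expected result).
    public class AdderTestCase {
        // The software under test, stubbed in to keep the example self-contained.
        static class Adder {
            static int add(int a, int b) { return a + b; }
        }

        final int left, right;   // input: operands supplied to the software under test
        final int expected;      // expected result: the output the software should exhibit

        AdderTestCase(int left, int right, int expected) {
            this.left = left; this.right = right; this.expected = expected;
        }

        // A test case execution: provide the input, observe the actual result,
        // and compare it to the expected result.
        boolean execute() {
            int actual = Adder.add(left, right);
            return actual == expected;   // true: passes; false: a failure has been detected
        }

        public static void main(String[] args) {
            AdderTestCase testCase = new AdderTestCase(2, 3, 5);
            System.out.println(testCase.execute() ? "passes the test case" : "fails the test case");
        }
    }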

Test cases are organized into a test suite. Most test suites have some sort of organization based on the kinds of test cases. For example, a test suite might have one part containing test cases that are concerned with testing system capacities and another part containing test cases concerned with testing typical uses of the system well within any specified capacities. If software passes all the test cases in a test suite, then we say that the software "passes the test suite."
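One common way to realize such an organization is to group test cases into classes by kind and collect the classes into a suite. The following sketch assumes JUnit 4 is available; the two member classes are placeholders mirroring the capacity and typical-use partitions described above:

    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.junit.runners.Suite;
    import static org.junit.Assert.assertTrue;

    // The suite passes only if every test case in every member class passes.
    @RunWith(Suite.class)
    @Suite.SuiteClasses({ SystemTestSuite.CapacityTests.class,
                          SystemTestSuite.TypicalUseTests.class })
    public class SystemTestSuite {

        // Test cases that probe specified system capacities.
        public static class CapacityTests {
            @Test public void handlesMaximumSpecifiedLoad() { assertTrue(true); } // placeholder
        }

        // Test cases for typical uses well within those capacities.
        public static class TypicalUseTests {
            @Test public void handlesOrdinaryWorkload() { assertTrue(true); } // placeholder
        }
    }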

One of the greatest challenges in testing is developing and organizing a test suite. The main issues in test suite development are correctness, observability of results, and adequacy.

The STEP testing technique developed by William Hetzel [Hetz84] provides a three-step approach for each type of testing performed on a project.

  1. Analysis. The product to be tested is examined to identify any special features that must receive particular attention and to determine the test cases that should be constructed. We will present a number of analysis techniques. Some can be automated, such as branch testing, but many require the tester to determine manually what to test.

  2. Construction. In this phase the artifacts needed for testing are created. The test cases identified during analysis are translated into programming or scripting languages, or entered in a tool-specific language. Data sets are often needed as well, and building a sufficiently large set may require extensive effort.

  3. Execution and Evaluation. This is the most visible, and often the only recognized, part of the test effort; it is also typically the quickest part. The test cases identified during analysis and then constructed are executed, and the results are examined to determine whether the software passed or failed the test suite. Many of these activities can be automated, which is particularly useful in an iterative environment since the same tests will be applied repeatedly over time (see the sketch after this list).
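As a hedged sketch of automating execution and evaluation, JUnit 4's programmatic runner can execute a suite and report whether the software passed it; this assumes the SystemTestSuite sketched earlier is on the classpath:

    import org.junit.runner.JUnitCore;
    import org.junit.runner.Result;

    public class RunAllTests {
        public static void main(String[] args) {
            // Execution: run every test case in the suite.
            Result result = JUnitCore.runClasses(SystemTestSuite.class);

            // Evaluation: the software passes the suite only if no test case failed.
            System.out.printf("ran %d test cases, %d failures%n",
                    result.getRunCount(), result.getFailureCount());
            System.out.println(result.wasSuccessful()
                    ? "passes the test suite" : "fails the test suite");
        }
    }

Because the run is fully automated, the same suite can be re-executed after each iteration at essentially no cost.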

Test suites must be maintained. As requirements change, so must the test suite. Test cases found to be in error must be corrected. And as problems are found by users, test cases should be added to catch those problems in future releases before deployment.
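For example, a user-reported failure typically becomes a permanent regression test in the suite; the bug number, the Formatter class, and its behavior below are all hypothetical:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Added to the suite after a (hypothetical) field report that null input
    // crashed the system; kept so the problem cannot silently return.
    public class Bug1042RegressionTest {

        // Stub standing in for the repaired software under test.
        static class Formatter {
            static String trim(String s) { return s == null ? "" : s.trim(); }
        }

        @Test
        public void nullInputNoLongerCrashes() {
            assertEquals("", Formatter.trim(null));
        }
    }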

A testing process is iterative and incremental and must be planned in connection with the planning of its associated development.


What do testers want developers to specify about the system?

The use case template we have presented provides most of the information a person needs to develop system-level tests. In particular, the pre- and postconditions are important for sequencing tests and for communicating hidden dependencies. A structured use case model can also point the test writer toward possible reuse of test scripts and data. A series of state models for the subsystems and for the system itself helps communicate the expected sequencing of actions and responses.
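As a sketch of how pre- and postconditions map onto a system-level test (JUnit 4 again; the CartSystem class and its operations are invented for illustration):

    import org.junit.Before;
    import org.junit.Test;
    import static org.junit.Assert.assertTrue;

    public class CheckoutUseCaseTest {

        // Stub standing in for the system under test.
        static class CartSystem {
            private final java.util.List<String> cart = new java.util.ArrayList<>();
            void addItem(String item) { cart.add(item); }
            void checkout() { cart.clear(); }
            boolean cartIsEmpty() { return cart.isEmpty(); }
        }

        CartSystem system;

        @Before
        public void establishPrecondition() {
            system = new CartSystem();
            system.addItem("book");   // use case precondition: the cart is not empty
        }

        @Test
        public void checkoutEmptiesCart() {
            system.checkout();                 // the action described by the use case
            assertTrue(system.cartIsEmpty());  // use case postcondition
        }
    }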



When are testers needed on a project?

The culture in some companies dictates that testing personnel are not assigned to a project until it is well underway. The linkages described here between the development and testing processes are evidence that early project decisions require input from personnel who are knowledgeable about testing. That input may come from a tester assigned to the project very early or from a developer with testing experience.



