Testing against Functional Requirements

Checking functional requirements is the traditional "system testing" activity and one that we have already covered. It is based on the derivation of test cases from use cases.

Testing for Qualitative System Attributes

Project charters and advertising literature often present qualitative claims that go unsubstantiated. A mature software development organization wants techniques for validating all system "requirements," including claims that are intended to make a product distinctive. In this section we address testing a system to validate qualitative claims. There are two types of claims that a development organization may make about its products. The first type is of interest only to the development organization itself; for example, "the code will be reusable." The second type is of interest to the users of the system; for example, "the system will be more comprehensive than others currently on the market." Clearly, not all of these claims can be validated through testing. Most are best checked by examining the design rather than by executing the code. The Guided Inspection technique in Chapter 4 provides a method for examining these types of system-level attributes.
One type of claim that can be validated by executing code is a performance-based claim. A component manufacturer may claim, for example, that its database performance remains acceptable under a rapid increase in the number of transactions. To substantiate such a claim, the system testers perform a load test: they drive the system at steadily increasing transaction rates and measure whether response times remain within the claimed limits.
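A load test of this kind can be sketched as follows. This is a minimal illustration, not the book's procedure: `process_transaction` is a hypothetical stand-in for the component under test, and the rates and latency threshold are invented for the example.

```python
import time

def process_transaction(payload):
    """Hypothetical stand-in for the system under test; a real load
    test would submit a transaction to the deployed database."""
    return sum(payload)

def load_test(rates, duration_s=0.2, max_avg_latency_s=0.05):
    """Drive the system at each transaction rate (tx/sec) for a fixed
    duration and record whether average latency stays acceptable."""
    results = {}
    for rate in rates:
        latencies = []
        deadline = time.monotonic() + duration_s
        while time.monotonic() < deadline:
            start = time.monotonic()
            process_transaction([1, 2, 3])
            latencies.append(time.monotonic() - start)
            time.sleep(1.0 / rate)        # pace requests to the target rate
        avg = sum(latencies) / len(latencies)
        results[rate] = (avg, avg <= max_avg_latency_s)
    return results
```

The claim is substantiated only if the acceptability flag holds at every rate the advertising literature covers; a failing flag at some rate identifies the load level at which the claim breaks down.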
Clearly this type of testing will not be used on every project, but it is an important aspect of a complete validation program.

Testing the System Deployment

Testing the deployment mechanism for your application is not necessarily new, but it takes on added importance for configurable systems and for those that require dynamic interaction with the environment. Deployment testing is intended to ensure that the packaging used for the system provides adequate setup steps and delivers a product in working condition. The most expensive part of this process is handling the installation of options. The initial test case is a full, complete installation. This might seem to be a sufficient test all by itself; however, there are usually interactions between options. If certain options are not installed, libraries or drivers that other options need may not be copied to the installation directories. An interaction matrix (see Chapter 6) can be used to record the dependencies between options. Test cases can then be designed that attempt to install one option but not another. The expected result, if the two options are not interdependent, is normal operation of the system. There can be many possible combinations, particularly if the different types of platforms on which the system will be installed are considered. This is a canonical situation for applying OATS, but we will not present a detailed example here since we have already included two. The factors are the options to be installed, and the levels are installed or not installed. In the case of more complex options, the levels might be the canonical ones: typical, custom, and full installation. Normal operation is judged by running a set of regression tests for the system. The regression set must be pruned to remove any tests that use options that were not installed.
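The install-one-omit-another idea can be sketched directly from a recorded dependency matrix. This is a minimal illustration under assumed data: the option names and the `DEPENDS` matrix are invented for the example, standing in for an interaction matrix of the kind Chapter 6 describes.

```python
from itertools import permutations

# Hypothetical option-dependency matrix (illustrative names): each key
# lists the other options it needs at run time, as an interaction
# matrix would record.
DEPENDS = {
    "spell_check": ["dictionary"],
    "dictionary": [],
    "charting": [],
}

def installation_test_cases(options):
    """For each ordered pair, install one option while omitting the
    other; if the omitted option is a recorded dependency, expect a
    failure, otherwise expect normal operation."""
    cases = []
    for installed, omitted in permutations(options, 2):
        expected = ("failure" if omitted in DEPENDS.get(installed, ())
                    else "normal operation")
        cases.append({"install": installed, "omit": omitted,
                      "expected": expected})
    return cases
```

Each generated case would then be checked by running the pruned regression suite against that installation; a "normal operation" case that nonetheless fails reveals an undocumented dependency between the two options.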
Testing after Deployment

A natural extension of deployment testing is to provide self-test functionality in the product. Deployment testing exercises the deployment mechanism in a test-lab environment, while the self-test functionality tests the actual deployment in the customer's environment. A separate self-test component of the architecture is designed to invoke the end-user functionality of the product and to evaluate the results. This gives a user the opportunity to run a set of tests whenever there is doubt about the "sanity" of the application. The test suite for this type of testing is a subset of the regression test suite for the system, concentrating on those parts of the system that can be affected by changes in the environment. We consider that software "wears" over time because of changes in its interactions with its environment, much as mechanical systems wear over time because of friction between components. As new versions of standard drivers and libraries are installed, the mismatches increase and so does the chance of failure. Each new version of a dynamic link library (DLL) brings the possibility of mismatched domains on standard interfaces or the exposure of race conditions between the library and the application. The self-test functionality must provide tests that exercise the interfaces to these products.

Testing Environment Interactions

On a recent project, development was performed on a Windows NT platform with 256 megabytes of RAM and a 20-gigabyte hard drive. When the system was placed into beta testing, it failed on a range of systems. It crashed most dramatically under Windows 95, but it also failed on several systems with a range of configurations. There was an interaction between the size of RAM and the available space on the disk, due to the swap space allocated by the operating system. We investigated this by defining a set of test cases like the ones shown in Figure 9.12.
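A self-test component of this kind can be sketched as a small set of checks, each aimed at something the environment can change underneath the application. This is an illustrative sketch only: the `EXPECTED` values are assumptions standing in for the interface versions a real product would record at build time.

```python
import sys

# Hypothetical expectations recorded when the product was built; a real
# self-test would list the drivers and libraries the product links to.
EXPECTED = {
    "python": (3, 8),            # minimum interpreter version tested against
    "report_encoding": "utf-8",  # codec the reporting feature relies on
}

def self_test():
    """Run a pruned regression subset focused on environment interfaces;
    each check exercises something the surroundings can change."""
    failures = []
    if sys.version_info[:2] < EXPECTED["python"]:
        failures.append("interpreter older than the tested version")
    try:
        "résumé".encode(EXPECTED["report_encoding"])
    except LookupError:
        failures.append("required codec missing from environment")
    return failures
```

Shipping such a function behind a menu item or command-line flag lets the customer re-verify the deployment after any driver or library upgrade, which is exactly when the "wear" described above accumulates.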
Because the development machines were larger than many of the machines on which the system would typically be deployed, memory handling had not been properly investigated. One problem was that every window that was opened increased the need for swap space, even though RAM was available for the entire window. This problem did not appear until we accidentally executed the program with the disk nearly full so that the swap space could not be allocated. That failure caused us to create a set of test cases that investigated the failure (see More Truth below).

Figure 9.12. Test cases for memory/disk interaction
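Test cases for such a memory/disk interaction can be generated by crossing the two factors. The levels below are invented for illustration and do not reproduce Figure 9.12; the per-window swap cost is likewise an assumed figure.

```python
from itertools import product

# Hypothetical factor levels for the RAM / free-disk interaction.
RAM_MB = [64, 128, 256]
FREE_DISK_MB = [10, 100, 1000]
SWAP_PER_WINDOW_MB = 4    # assumed swap growth per open window
WINDOWS_OPENED = 20       # windows opened during the test scenario

def swap_test_cases():
    """Cross every RAM level with every free-disk level and predict
    whether the OS can allocate swap for all the opened windows."""
    needed = WINDOWS_OPENED * SWAP_PER_WINDOW_MB
    cases = []
    for ram, disk in product(RAM_MB, FREE_DISK_MB):
        outcome = "pass" if disk >= needed else "swap allocation failure"
        cases.append({"ram_mb": ram, "free_disk_mb": disk,
                      "expected": outcome})
    return cases
```

Note that the failing cases here depend only on free disk, not on RAM; it was precisely this counterintuitive coupling, failures on machines with ample RAM, that made the interaction worth testing explicitly.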
Test the application in a variety of situations that a user might create; for example, execute the application concurrently with Microsoft Word, Lotus Notes, or other application programs.

Test System Security

Testing the effects of security on an application is not special to object-oriented systems, but there are some special aspects. Three categories of issues could be classified as security:

1. network-level security, such as holes in firewalls;
2. permissions on the files and resources that the application installs and uses;
3. operating system account and password security.
We will not get into issues 1 and 3, which concern holes in firewalls and the usual system account/password software.

Tip

Try special character keys as a means of accessing operating system-level loopholes to bypass security features such as password protection. Use a free-play style of testing to try combinations of CTRL, ESC, ALT, and other keys to determine whether you can escape to a level where data is available for access.

Specifically, the modularity of the executables and the dynamic aspects of the code do raise some security issues. We briefly discussed (see Testing after Deployment on page 328) situations in which an application is deployed and files are copied to a number of different directories. Most will be within the directory created for the application; however, several may have to be copied to specific subdirectories under system directories. When this is done by a system administrator, the files may have permissions that are different from those available to the actual users. The application may begin operation with no problem and may even be used successfully for certain tasks. Only certain operations may fail and, in fact, only certain operations for certain users may fail. The level of testing that should be accomplished here is to execute sufficient test cases to use at least one resource from each directory and one user from each security class. Java now uses a permissions file that is independent of the security of the operating system. Permissions can be required for accessing any number of system or application resources. Again, inadequate permissions may not show up initially unless they are explicitly tested.
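The coverage criterion just stated, at least one resource from each directory exercised by one user from each security class, can be turned into a small test-case generator. The directory layout and user-class names below are illustrative assumptions, not taken from any particular product.

```python
import os

def minimal_security_cases(resources_by_dir, user_classes):
    """Build the minimal permission-coverage matrix: one resource
    sampled from each deployment directory, paired with one user from
    each security class."""
    cases = []
    for directory, resources in sorted(resources_by_dir.items()):
        sample = resources[0]              # one resource per directory suffices
        for user in user_classes:
            cases.append({"user": user,
                          "resource": os.path.join(directory, sample)})
    return cases
```

Each generated case would be run by logging in as (or impersonating) a member of the named class and attempting to use the resource; a failure flags a file whose permissions were set for the installing administrator rather than for that class of user.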