What Needs to Be Tested?


Testing against Functional Requirements

Checking functional requirements is the traditional "system testing" activity and is one that we have already covered. It is based on the derivation of test cases from use cases.

Testing for Qualitative System Attributes

Project charters and advertising literature often present qualitative claims that go unsubstantiated. A mature software development organization wants techniques for validating all system "requirements," including claims that are intended to make a product distinctive. In this section we address testing a system to validate qualitative claims.

There are two types of claims that a development organization may make about its products. The first type is of interest only to the development organization, for example, "the code will be reusable." The second type is of interest to the users of the system, for example, "the system will be more comprehensive than others currently on the market." Clearly, not all of these claims can be validated through testing.

Most of these claims are best tested by examining the design rather than executing the code. The Guided Inspection technique in Chapter 4 provides a method for examining these types of system-level attributes.

Technique Summary: Validating Qualitative Claims

  1. Translate each qualitative claim into measurable attributes that covary with the qualitative attribute or that define the qualitative attribute.

  2. Design test cases that can detect the presence or absence of these measurable attributes.

  3. Execute the test cases and analyze the results.

  4. Aggregate these results to determine if a specific claim is justified.
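
As a minimal sketch of these steps, suppose the qualitative claim is "the system feels responsive" and we translate it into the measurable attribute "95 percent of searches complete within 200 milliseconds." The Java fragment below times repeated executions of a hypothetical searchCatalog() operation, which stands in for the real system call, and aggregates the timings into a pass/fail verdict for the claim:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

/**
 * Minimal sketch: the claim "the system feels responsive" is checked through
 * the measurable attribute "95 percent of searches complete within 200 ms."
 * searchCatalog() is a hypothetical stand-in for the operation under test.
 */
public class ResponsivenessClaimTest {

    static void searchCatalog(String query) throws InterruptedException {
        Thread.sleep(50);   // simulated work; replace with the real call
    }

    public static void main(String[] args) throws InterruptedException {
        List<Long> elapsedMillis = new ArrayList<>();
        for (int i = 0; i < 100; i++) {
            long start = System.nanoTime();
            searchCatalog("widget " + i);
            elapsedMillis.add((System.nanoTime() - start) / 1_000_000);
        }
        Collections.sort(elapsedMillis);
        long p95 = elapsedMillis.get((int) (elapsedMillis.size() * 0.95) - 1);
        boolean claimHolds = p95 <= 200;
        System.out.println("95th percentile: " + p95 + " ms, claim "
                + (claimHolds ? "justified" : "not justified"));
    }
}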

Performance-based claims are one type of claim that can be validated by executing code. A component manufacturer may claim that its database performance remains acceptable under a rapid increase in the number of transactions. To substantiate this claim, the system testers perform a load test as follows (a sketch of such a test frame appears after the steps):

  1. Quantify the terms "acceptable performance" and "rapid increase." Acceptable performance might be quantified as a number of transactions per second given a record size of 1024 bytes. Rapid increase might be quantified by defining the shape of the increase curve, for example, "a quadratic increase over a 10-minute period."

  2. The testers would create new data or capture and groom historical data for use in the tests. A test frame capable of delivering the maximum number of transactions per second would be developed. The frame would include instrumentation that would assist in collecting service times.

  3. The tests would be run. Test results and timing data would be collected.

  4. The test group would make a pass/fail determination for that claim.
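
A minimal load-test frame along these lines is sketched below in Java. The submitTransaction() method, the 16-thread pool, the 500 transactions-per-second peak, and the compressed 60-second ramp are illustrative stand-ins for the quantities chosen in step 1; the frame drives a quadratic increase in the transaction rate and records service times for the analysis in steps 3 and 4:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

/**
 * Sketch of a load-test frame. The transaction rate follows a quadratic ramp
 * and the frame records total service time so testers can compute throughput
 * and mean latency afterward. submitTransaction() is a placeholder for the
 * real database call.
 */
public class LoadTestFrame {

    private static final AtomicLong totalServiceNanos = new AtomicLong();
    private static final AtomicLong completed = new AtomicLong();

    static void submitTransaction(byte[] record) {
        long start = System.nanoTime();
        // ... the real database transaction would go here ...
        totalServiceNanos.addAndGet(System.nanoTime() - start);
        completed.incrementAndGet();
    }

    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(16);
        byte[] record = new byte[1024];   // 1024-byte record from the claim
        int rampSeconds = 60;             // stands in for the 10-minute ramp
        int peakTps = 500;                // quantified "acceptable performance"

        for (int second = 1; second <= rampSeconds; second++) {
            // Quadratic increase: the rate grows with the square of elapsed time.
            int rate = (int) (peakTps * ((double) second * second)
                                      / (rampSeconds * rampSeconds));
            for (int i = 0; i < rate; i++) {
                pool.execute(() -> submitTransaction(record));
            }
            Thread.sleep(1000);
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        System.out.printf("completed=%d, mean service time=%.3f ms%n",
                completed.get(),
                totalServiceNanos.get() / 1e6 / Math.max(1, completed.get()));
    }
}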

Clearly this type of testing will not be used on every project, but it is an important aspect of a complete validation program.

Testing the System Deployment

Testing the deployment mechanism for your application is not necessarily new, but it takes on added importance for configurable systems and those that require dynamic interaction with the environment. Deployment testing is intended to ensure that the packaging used for the system provides adequate setup steps and delivers a product in working condition. The most expensive part of this process is handling the installation of options.

The initial test case is a full, complete installation. This might seem to be a sufficient test all by itself; however, there are usually interactions between options. If certain options are not installed, libraries or drivers that other options need may not be copied to the installation directories. An interaction matrix (see Chapter 6) can be used to record the dependencies between options. Test cases can then be designed that attempt to install one option but not another. The expected result, if the two options are not interdependent, should be normal operation of the system. There can be many possible combinations, particularly if the different types of platforms on which the system will be installed are considered. This is a canonical situation for applying OATS, but we will not work a detailed example here since we have already included two. The factors are the options to be installed, and the levels are whether each option is installed or not. In the case of more complex options the levels might be the canonical ones: typical, custom, and full installations.

Normal operation is judged by running a set of regression tests for the system. The regression set must be pruned to remove any tests that use options that were not installed.
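
As an illustration of turning an interaction matrix into installation test cases, the Java sketch below uses hypothetical option names and a single recorded dependency. For each ordered pair of options it plans a configuration that installs one and omits the other, noting the expected result: normal operation, verified by the pruned regression set, when the options are independent:

import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

/**
 * Sketch of deriving installation test cases from an interaction matrix.
 * The option names and the dependency are hypothetical examples.
 */
public class OptionInstallationPlanner {

    public static void main(String[] args) {
        List<String> options = List.of("ReportEngine", "Charting", "RemoteAccess");

        // Interaction matrix: option -> options it depends on.
        Map<String, Set<String>> dependsOn = new LinkedHashMap<>();
        dependsOn.put("Charting", Set.of("ReportEngine"));

        for (String installed : options) {
            for (String omitted : options) {
                if (installed.equals(omitted)) continue;
                boolean dependent = dependsOn
                        .getOrDefault(installed, Set.of()).contains(omitted);
                System.out.printf("Install %s, omit %s -> expected: %s%n",
                        installed, omitted,
                        dependent
                            ? "installer supplies the shared libraries or reports the conflict"
                            : "normal operation (verified by the pruned regression set)");
            }
        }
    }
}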

Technique Summary: Deployment Testing

  1. Identify categories of platforms on which the system will be deployed.

  2. Locate at least one system of each type that has a typical environment but that has not had the system installed on it.

  3. Install the system using the deployment mechanism.

  4. Run a regression set of system tests and evaluate the results.
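
Before the regression set is run in step 4, a small smoke check can confirm that the deployment mechanism actually delivered the expected files to the target platform. The Java sketch below uses a hypothetical install root and file manifest; a real list would be generated from the installer's packaging description:

import java.io.File;

/**
 * Minimal post-installation check run before the regression set.
 * The install root and the expected files are hypothetical placeholders.
 */
public class DeploymentSmokeCheck {

    public static void main(String[] args) {
        String installRoot = args.length > 0 ? args[0] : "C:/Program Files/ExampleApp";
        String[] expected = {
                "bin/exampleapp.exe",
                "lib/reportengine.dll",
                "config/default.properties"
        };

        boolean ok = true;
        for (String relative : expected) {
            File f = new File(installRoot, relative);
            if (!f.isFile()) {
                System.out.println("MISSING: " + f.getPath());
                ok = false;
            }
        }
        System.out.println(ok
                ? "Deployment check passed; proceed to the regression set."
                : "Deployment check failed; do not run the regression set yet.");
    }
}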

Testing after Deployment

A natural extension of deployment testing is to provide self-test functionality in the product. Deployment testing exercises the mechanism for deployment in a test lab environment while the self-test functionality tests the actual deployment in the customer's environment. A separate self-test component of the architecture is designed to invoke the end-user functionality of the product and to evaluate the results. This provides a user with the opportunity to run a set of tests whenever there is doubt about the "sanity" of the application.

The test suite for this type of testing is a subset of the regression test suite for the system. The test suite concentrates on those parts of the system that can be affected by changes in the environment. We consider that software "wears" over time due to changes in its interactions with its environment, much as mechanical systems wear over time due to friction between components. As new versions of standard drivers and libraries are installed, the mismatches increase and the chance of failure increases as well. Each new version of a dynamic link library (DLL) brings the possibility of mismatched domains on standard interfaces or the exposure of race conditions between the library and the application. The self-test functionality must provide tests that exercise the interfaces between these products.
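
A self-test component of this kind might be sketched as follows. The checks concentrate on the environment-facing interfaces that "wear": a shared library that must load and a working directory that must be writable. The library name and the particular checks are hypothetical; a real product would select the subset of its regression suite that exercises its own environment interfaces:

import java.io.File;
import java.util.function.BooleanSupplier;

/**
 * Sketch of a built-in self-test component invoked by the end user.
 * REQUIRED_LIBRARY and the individual checks are hypothetical examples.
 */
public class SelfTest {

    private static final String REQUIRED_LIBRARY = "reportengine"; // loaded natively, for example

    public static boolean run() {
        boolean ok = true;
        ok &= check("shared library loads", SelfTest::libraryLoads);
        ok &= check("temp directory writable", SelfTest::tempWritable);
        return ok;
    }

    private static boolean check(String name, BooleanSupplier test) {
        boolean passed;
        try {
            passed = test.getAsBoolean();
        } catch (Throwable t) {
            passed = false;
        }
        System.out.println((passed ? "PASS " : "FAIL ") + name);
        return passed;
    }

    private static boolean libraryLoads() {
        try {
            System.loadLibrary(REQUIRED_LIBRARY);
            return true;
        } catch (UnsatisfiedLinkError e) {
            return false;
        }
    }

    private static boolean tempWritable() {
        try {
            File f = File.createTempFile("selftest", ".tmp");
            return f.delete();
        } catch (Exception e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(run() ? "Self-test passed" : "Self-test failed");
    }
}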

Testing Environment Interactions

On a recent project, the development was performed on a Windows NT platform with 256 megabytes of RAM and a 20-gigabyte hard drive. When the system was placed into beta testing, it failed on a range of systems. It crashed most dramatically under Windows 95, but it also failed on several systems with a range of configurations. There was an interaction between the size of RAM and the available disk space due to the swap space allocated by the operating system.

We investigated this by defining a set of test cases like the ones shown in Figure 9.12. Because the development machines were larger than many of the machines on which the system would typically be deployed, memory handling had not been properly investigated. One problem was that every window that was opened increased the need for swap space even though RAM was available for the entire window. This problem did not appear until we accidentally executed the program with the disk nearly full so that the swap space could not be allocated. That failure caused us to create a set of test cases that investigated the failure (see More Truth below).

Figure 9.12. Test cases for memory/disk interaction


Technique Summary: Defining a Context

  1. Describe the scope of the context, such as a single platform or distributed environment of heterogeneous machines.

  2. Identify the attributes of the system that affect the operation of the system, such as the amount of memory in the platform or the other applications running concurrently.

  3. Analyze each of the attributes and identify the usual equivalence classes.

  4. Construct combinations of attribute values that provide good coverage of the context.
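
The Java sketch below illustrates steps 3 and 4 with hypothetical equivalence classes for three attributes (RAM size, free disk space, and concurrently running applications) and enumerates their combinations into test contexts. A full cross product is shown for brevity; OATS would trim the list for larger attribute sets:

import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

/**
 * Sketch of combining equivalence classes of environment attributes
 * into test contexts. The attributes and classes are hypothetical.
 */
public class ContextMatrix {

    public static void main(String[] args) {
        Map<String, List<String>> attributes = new LinkedHashMap<>();
        attributes.put("RAM", List.of("32 MB", "128 MB", "256 MB"));
        attributes.put("Free disk", List.of("nearly full", "ample"));
        attributes.put("Concurrent apps", List.of("none", "office suite running"));

        // Build the cross product of the equivalence classes.
        List<List<String>> contexts = new ArrayList<>();
        contexts.add(new ArrayList<>());
        for (List<String> classes : attributes.values()) {
            List<List<String>> next = new ArrayList<>();
            for (List<String> partial : contexts) {
                for (String value : classes) {
                    List<String> extended = new ArrayList<>(partial);
                    extended.add(value);
                    next.add(extended);
                }
            }
            contexts = next;
        }
        contexts.forEach(c -> System.out.println(String.join(" / ", c)));
    }
}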

More Truth

So we lied when we said that the purpose of testing was to find failures. We came closer to telling the truth when we added that testing is also intended to determine whether the system satisfies its requirements. Now, some more truth. Testing can also provide information to support the repair effort after a failure. The test cases in Figure 9.12 were constructed to systematically investigate the root cause of a failure. By sending this table back to the developers, the testers speed the diagnostic process.

Test the application in a variety of situations that a user might create; for example, execute the application concurrently with Microsoft Word, Lotus Notes, or other application programs.

Test System Security

Testing the effects of security on an application is not unique to object-oriented systems, but there are some aspects specific to them. There are three categories of issues that could be classified as security issues:

  1. The ability of the application to allow authorized persons access and to prevent access by unauthorized persons.

  2. The ability of the code to access all of the resources that it needs to execute.

  3. The ability of the application to prevent unauthorized access to other system resources not related to the application.

We will not get into issues 1 and 3, which involve holes in firewalls and the usual system account/password software.

Tip

Try special character keys as a means of accessing operating system-level loopholes to bypass security features such as password protection. Use a free-play style of testing to try combinations of CTRL, ESC, ALT and other keys to determine whether you can escape to the level where data is available for access.


Specifically, the modularity of the executables and the dynamic aspects of the code do raise some security issues. We briefly discussed (see Testing after Deployment) situations in which an application is deployed and files are copied to a number of different directories. Most will be within the directory created for the application; however, several may have to be copied to specific subdirectories under system directories. When this is done by a system administrator, the files may have permissions that are different from those used by the actual users. The application may begin operation with no problem and may even be used successfully by users for certain tasks. Only certain operations may fail and, in fact, only certain operations for certain users may fail. The level of testing that should be accomplished here is to execute sufficient test cases to use at least one resource from each directory and one user from each security class.

Java now uses a permissions file that is independent of the security of the operating system. Permissions can be required for accessing any number of system or application resources. Again, inadequate permissions may not show up initially unless they are explicitly tested.
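
One way to make such permission problems visible early is an explicit probe against the policy in force. The Java sketch below, using a hypothetical data directory, asks the access controller whether a specific file permission has been granted; run with a security manager and the application's policy file (for example, java -Djava.security.manager -Djava.security.policy=app.policy ...), it reports a missing grant immediately instead of waiting for the operation that needs it:

import java.io.FilePermission;
import java.security.AccessControlException;
import java.security.AccessController;

/**
 * Sketch of an explicit permission probe. The data directory is a
 * hypothetical placeholder for a resource the application needs.
 */
public class PermissionProbe {

    public static void main(String[] args) {
        FilePermission needed =
                new FilePermission("/opt/exampleapp/data/-", "read,write");
        try {
            AccessController.checkPermission(needed);
            System.out.println("PASS: policy grants " + needed);
        } catch (AccessControlException e) {
            System.out.println("FAIL: policy does not grant " + needed);
        }
    }
}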


