Types of Testing


Stress Testing

Stress testing is operating a system under conditions that come close to exhausting the resources it needs. This may mean filling RAM with objects, filling the hard drive with records, or filling internal data structures. One of our favorite tests is to rapidly move the mouse back and forth. This can cause the queue of mouse-move events to overflow. If this condition is not handled properly, the program will crash.

Object-oriented systems will usually be stressed by creating a large number of instances of classes. Select those classes that are likely to actually have a large number of instances in normal operation. Use random number generators or other devices to vary the values of parameters since this is a good opportunity to test constructors on a variety of classes.
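As a sketch of this kind of test, the following Java fragment instantiates an unusually large number of objects with randomized constructor arguments. The Account class is only a stand-in for whatever domain class is expected to have many instances, and the instance count and value ranges are assumptions to be replaced by figures from the system under test.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Random;

    // Stress-test sketch: create far more instances than normal operation would,
    // varying the constructor arguments to exercise many construction paths.
    public class InstanceStressTest {

        static class Account {                       // stand-in for a real domain class
            private final String owner;
            private final double openingBalance;
            Account(String owner, double openingBalance) {
                if (openingBalance < 0) {
                    throw new IllegalArgumentException("negative opening balance");
                }
                this.owner = owner;
                this.openingBalance = openingBalance;
            }
        }

        public static void main(String[] args) {
            Random random = new Random(42);          // fixed seed so failures are reproducible
            List<Account> accounts = new ArrayList<>();
            int target = 1_000_000;                  // well above the normal operational count
            for (int i = 0; i < target; i++) {
                String owner = "owner-" + random.nextInt(10_000);
                double balance = random.nextDouble() * 100_000;
                accounts.add(new Account(owner, balance));
            }
            System.out.println("Constructed " + accounts.size() + " instances");
        }
    }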

Objects are often larger than you think. An object that contains a reference to a 30-second full-motion video clip holds only an object reference (usually 4 bytes), but the total memory required to instantiate that object includes the memory needed to hold some portion of the video clip. Object-oriented systems often stress memory in normal operation because the developers do not pay sufficient attention to the real sizes of objects. Over the development life cycle, testing moves from a small number of objects during unit tests, to normal operational numbers of objects during integration and system tests, to extraordinary numbers of objects late in system testing, once the system has become stable under operational limits.
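The point can be illustrated with a rough measurement. In this Java sketch, the wrapper object holds only a single array reference, yet the memory it keeps reachable is the full size of the data. Heap figures obtained through Runtime are approximate and can be disturbed by the garbage collector, so treat the numbers as indicative only.

    // Rough illustration: a "small" object that keeps a large amount of memory reachable.
    public class ObjectSizeDemo {

        static class VideoClip {
            private final byte[] frames;
            VideoClip(int bytes) { this.frames = new byte[bytes]; }
        }

        static long usedHeap() {
            Runtime rt = Runtime.getRuntime();
            System.gc();                                   // best-effort hint, not guaranteed
            return rt.totalMemory() - rt.freeMemory();
        }

        public static void main(String[] args) {
            long before = usedHeap();
            VideoClip clip = new VideoClip(30 * 1024 * 1024);   // 30 MB stand-in for a video clip
            long after = usedHeap();
            System.out.println("approximate retained size: " + (after - before) + " bytes");
            System.out.println(clip.frames.length + " bytes held by one small-looking object");
        }
    }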

One of the most frequently overlooked stresses is the natural growth of the information that accumulates as a system is operated. As a company uses a computerized accounting system and accumulates years of data, there is a natural tendency for users to expand their analyses. The department head who used to budget by the seat of his pants now asks the system to load the last five years of data. This can lead to degraded performance and even system failure. This type of stress should be applied during life-cycle testing.
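A simple way to apply this stress is to simulate the accumulation of data year by year and re-time the same analysis after each year. The record counts and the placeholder analysis in the following Java sketch are illustrative only; a real test would load actual or manufactured records into the system under test.

    import java.util.ArrayList;
    import java.util.List;

    // Accumulated-data stress sketch: add a year's worth of records at a time
    // and observe how long the same analysis takes as the history grows.
    public class DataGrowthStress {

        public static void main(String[] args) {
            final int recordsPerYear = 1_000_000;            // illustrative volume
            List<double[]> history = new ArrayList<>();

            for (int year = 1; year <= 10; year++) {
                double[] records = new double[recordsPerYear];
                for (int i = 0; i < records.length; i++) {
                    records[i] = i * 0.01;                   // placeholder transaction amounts
                }
                history.add(records);

                long start = System.nanoTime();
                double total = 0;
                for (double[] oneYear : history) {           // the analysis spans every year on file
                    for (double amount : oneYear) {
                        total += amount;
                    }
                }
                long millis = (System.nanoTime() - start) / 1_000_000;
                System.out.printf("after year %2d: analysis took %d ms (total %.0f)%n",
                        year, millis, total);
            }
        }
    }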

Life-Cycle Testing

The life cycle of a system can be rather long and therefore difficult to simulate in a testing environment. There are, however, two types of life cycles that do make sense to test: domain life cycles and computer application life cycles.

Domain life cycles correspond to key processes in the domain. For example, in an accounting system, you might choose to run a series of tests that cover a complete fiscal year for a specific set of accounts. This begins with initializing the accounts for the year, posting a series of transactions, and performing other operations before closing the accounts for the year. Life-cycle testing must include realistic growth in the load on the system. The schedule has to include time to manufacture test data or to write programs to convert existing data into the appropriate format for use in the system under test. We have found that this is the most time-consuming part of the test process. Customers and domain experts can be a source of help.
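The shape of such a test can be sketched in Java as follows. The small Ledger class is a stub standing in for the real accounting system's API; the names openFiscalYear, post, and closeFiscalYear are assumptions made only for the sake of the example, and the "year's worth" of postings would in practice come from manufactured or converted data.

    import java.util.HashMap;
    import java.util.Map;

    // Domain life-cycle test sketch: open a fiscal year, post a year's activity,
    // close the year, and compare closing balances with independently computed totals.
    public class FiscalYearLifeCycleTest {

        static class Ledger {                                    // stub for the real system's API
            private final Map<String, Double> balances = new HashMap<>();
            private boolean open;

            void openFiscalYear(int year) { open = true; }       // initialize accounts for the year

            void post(String account, double amount) {           // post one transaction
                if (!open) throw new IllegalStateException("fiscal year not open");
                balances.merge(account, amount, Double::sum);
            }

            void closeFiscalYear(int year) { open = false; }     // run the year-end closing process

            double balance(String account) { return balances.getOrDefault(account, 0.0); }
        }

        public static void main(String[] args) {
            Ledger ledger = new Ledger();
            ledger.openFiscalYear(2005);

            double expectedRevenue = 0.0;                        // expected result computed independently
            for (int month = 1; month <= 12; month++) {
                double sale = 1000.0 * month;                    // placeholder monthly activity
                ledger.post("revenue", sale);
                expectedRevenue += sale;
            }

            ledger.closeFiscalYear(2005);

            if (ledger.balance("revenue") != expectedRevenue) {
                throw new AssertionError("year-end revenue balance is wrong");
            }
            System.out.println("Fiscal-year life-cycle test passed");
        }
    }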

Technique Summary: Stress Testing

The steps in stress testing are:

  • Identify the variable resources that increase the amount of work the system has to do.

  • If there are relationships among these resources, develop a matrix that lists combinations of resource levels to use (a small generator for such a matrix is sketched after this list).

  • Create test cases that use each combination.

  • Execute and evaluate the results.
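One way to carry out the second step is to generate the combinations mechanically, as in the following Java sketch. The resource names and levels are illustrative; in practice the full cross product is usually trimmed to the combinations that are actually related or reachable.

    import java.util.ArrayList;
    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;

    // Build a matrix of resource-level combinations; each row becomes one stress test case.
    public class StressMatrix {

        public static void main(String[] args) {
            Map<String, int[]> resources = new LinkedHashMap<>();
            resources.put("concurrentUsers", new int[] {1, 50, 500});
            resources.put("recordsInDatabase", new int[] {1_000, 100_000, 10_000_000});
            resources.put("openDocuments", new int[] {1, 20, 200});

            for (Map<String, Integer> combo : combinations(resources)) {
                System.out.println(combo);
            }
        }

        // Full cross product of the levels; trim to related combinations as needed.
        static List<Map<String, Integer>> combinations(Map<String, int[]> resources) {
            List<Map<String, Integer>> result = new ArrayList<>();
            result.add(new LinkedHashMap<>());
            for (Map.Entry<String, int[]> entry : resources.entrySet()) {
                List<Map<String, Integer>> expanded = new ArrayList<>();
                for (Map<String, Integer> partial : result) {
                    for (int level : entry.getValue()) {
                        Map<String, Integer> next = new LinkedHashMap<>(partial);
                        next.put(entry.getKey(), level);
                        expanded.add(next);
                    }
                }
                result = expanded;
            }
            return result;
        }
    }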

The life cycle of an application begins with its installation and ends with its removal. This means that we want to test the installer program and the uninstaller program. The initial condition is a typical machine (one on which the program will be installed) that has not been used in the development of the product. Running the installer program should result in a usable application. After that, running the uninstaller should essentially return the system to its condition prior to the installation. The number of files on the disk and the space available should return to their original values; otherwise, the test has failed.
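Parts of this check are easy to automate. The following Java sketch records the file count and available space before installation and verifies both afterward. The directory to scan and the way the installer and uninstaller are driven are assumptions that a real test harness would fill in, and exact free-space comparisons may need a tolerance on a machine with other activity.

    import java.io.IOException;
    import java.nio.file.FileStore;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.stream.Stream;

    // Install/uninstall check: the system should return to its pre-installation state.
    public class InstallLifeCycleCheck {

        static long countFiles(Path root) throws IOException {
            try (Stream<Path> paths = Files.walk(root)) {
                return paths.filter(Files::isRegularFile).count();
            }
        }

        public static void main(String[] args) throws IOException {
            // directory tree representing the machine under test (an assumption for this sketch)
            Path root = Path.of(args.length > 0 ? args[0] : System.getProperty("user.home"));
            FileStore store = Files.getFileStore(root);

            long filesBefore = countFiles(root);
            long spaceBefore = store.getUsableSpace();

            // ... run the installer, exercise the application, run the uninstaller ...

            long filesAfter = countFiles(root);
            long spaceAfter = store.getUsableSpace();

            if (filesAfter != filesBefore || spaceAfter != spaceBefore) {
                System.out.println("FAIL: system not returned to its pre-installation state");
            } else {
                System.out.println("PASS: file count and free space restored");
            }
        }
    }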

Problems with Real Data in Testing

When extensive past data, perhaps from the operation of an earlier version of the system, is available, there is a tendency to think that it provides an easy way to obtain test data. The time required to analyze this data is usually underestimated. For each test case, the data must be examined to determine the expected results when the test is run against this data set. For tests that involve business rules and databases, this can be a very time-consuming task. It may be quicker to manufacture data that has specific properties than to use the real data. Test data is constructed by following these steps (a small sketch follows the list):

  • Analyze the existing data to identify patterns.

  • Construct test data that follow these patterns but for which the expected results are more easily determined.

  • Design test cases that use the test data in the context of a complete life cycle.

  • Execute and evaluate the results.
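A Java sketch of the first two steps follows, using an illustrative transaction record. The "patterns" here are reduced to a value range and a mix of account types; in a real system they would come from analysis of the existing data, and the manufactured records would then feed the life-cycle tests described above.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Random;

    // Manufacture data that follows the observed patterns but whose expected
    // results are known by construction.
    public class SyntheticDataBuilder {

        record Txn(String accountType, double amount) {}        // illustrative record shape

        public static void main(String[] args) {
            // patterns observed in the real data (normally produced by analysis)
            String[] accountTypes = {"checking", "savings"};
            double minAmount = 5.00, maxAmount = 2_500.00;

            // construct data following those patterns, tracking expected results as we go
            Random random = new Random(7);                       // fixed seed for repeatability
            List<Txn> data = new ArrayList<>();
            double expectedTotal = 0.0;
            for (int i = 0; i < 10_000; i++) {
                double amount = minAmount + random.nextDouble() * (maxAmount - minAmount);
                data.add(new Txn(accountTypes[i % accountTypes.length], amount));
                expectedTotal += amount;                         // expected result known by construction
            }

            // the life-cycle test would now load `data` and compare the system's
            // reported total against expectedTotal
            System.out.printf("Manufactured %d records, expected total %.2f%n",
                    data.size(), expectedTotal);
        }
    }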

Performance Testing

Object-oriented systems originally had a reputation for being inherently slow, so projects for which performance was particularly critical simply avoided the approach. A couple of things have happened to change both the perception and the reality.

First, tools have improved. C++ compilers generate better code. Java virtual machines have been optimized. Much research has led to optimizations and new constructs for compilers and runtime environments. We have helped clients deploy successful systems using distributed object technologies in real-time, embedded environments.

Second, as people have become more knowledgeable in object-oriented techniques, they have become more skillful at articulating design rationales. Object-oriented systems are often slower than they have to be, in an absolute sense, because other design objectives have a higher priority. Different design patterns come into play when performance is the priority than when flexibility or ease of maintenance is the priority.

"Testing" for performance is much like measuring the reliability of a piece of software. The most important aspect is defining and establishing the context within which performance will be measured. By context we mean a description of the environment in which the measurement will be made. The number of users logged into the system, the configuration of the machine being used, and other factors that may affect the behavior of a system should be addressed in the description. There may be multiple contexts with a different goal and different criteria in each context. A context should be meaningful to the user of the program and should include those aspects of the program that will be of value to the user.

The attributes of a system that are related to performance vary with the type of system. In some systems, throughput, measured in transactions per minute, will be the most important aspect, while in others it may be the ability to react to individual events quickly enough. In Brickles, there are two aspects of performance: the speed with which the graphics are refreshed, and the speed with which a collision is detected and the display updated.

The test cases for measuring the refresh of graphics use the heaviest load possible on the system. Each test case places the maximum number of bricks on the screen, and it calls for high levels of input. The paddle is moved back and forth very quickly. This produces the maximum number of calculations and drawing activities. The expected result during this test is no noticeable "flicker" in the graphics on the screen. The movement of the paddle image, as the mouse is moved from side to side, should be smooth and should correspond to the position of the mouse.
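A Java sketch of such a measurement follows. The Playfield class is only a stand-in for the real game objects, and the per-frame work, the brick count, and the 16.7-millisecond budget (roughly a 60 Hz refresh) are assumptions chosen for illustration; the real test drives the actual update and drawing code.

    // Refresh-rate measurement sketch: run the per-frame update under the heaviest
    // configuration and check that every frame stays inside the refresh budget.
    public class RefreshPerformanceTest {

        static class Playfield {                          // stand-in for the real game objects
            private final int bricks;
            Playfield(int bricks) { this.bricks = bricks; }

            // stand-in for collision detection plus redraw of every sprite
            void updateFrame(int paddleX) {
                double sink = 0;
                for (int i = 0; i < bricks; i++) {
                    sink += Math.hypot(i - paddleX, i);   // placeholder per-brick work
                }
                if (sink < 0) throw new IllegalStateException();  // keeps the loop from being optimized away
            }
        }

        public static void main(String[] args) {
            final int maxBricks = 10_000;                 // heaviest load allowed (illustrative)
            final double frameBudgetMillis = 16.7;        // acceptable frame time (assumption)

            Playfield field = new Playfield(maxBricks);
            double worstFrame = 0;
            for (int frame = 0; frame < 1_000; frame++) {
                int paddleX = (frame % 2 == 0) ? 0 : 640; // paddle whipped from side to side
                long start = System.nanoTime();
                field.updateFrame(paddleX);
                double millis = (System.nanoTime() - start) / 1_000_000.0;
                worstFrame = Math.max(worstFrame, millis);
            }
            System.out.printf("worst frame: %.2f ms (budget %.1f ms)%n", worstFrame, frameBudgetMillis);
            if (worstFrame > frameBudgetMillis) {
                throw new AssertionError("frame time exceeds refresh budget; flicker likely");
            }
        }
    }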

As discussed in the Testing Environment Interactions section on page 328, the other applications running concurrently with the tests can affect the results, particularly from a performance perspective. The context definition for the test cases should also provide a description of the state of the other applications running. The tests should be conducted using a typical load on the system.

Technique Summary: Performance Testing

  1. Define the context in which the performance measure applies.

    1. Describe the state of the application to be tested.

    2. Describe the execution environment in terms of the platform being used.

    3. Describe the other applications running at the time of the tests.

  2. Identify the extremes within that context.

  3. Define, as the expected result, what will constitute acceptable performance.

  4. Execute the tests and evaluate the results.
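The first three steps can be captured as data so that every reported measurement carries its context with it, as in the following Java sketch. The field names and values are illustrative assumptions, and the measured figure would come from an instrumented run such as the refresh-rate test above.

    // Record the measurement context alongside the result so a reported number
    // is never separated from the conditions under which it was obtained.
    public class PerformanceContextExample {

        record PerformanceContext(String applicationState,
                                  String platform,
                                  String concurrentApplications,
                                  double acceptableMillis) {}

        public static void main(String[] args) {
            PerformanceContext context = new PerformanceContext(
                    "maximum bricks on screen, paddle in continuous motion",  // state of the application
                    "typical target machine for the product",                 // execution platform
                    "typical load: mail client and virus scanner running",    // other applications
                    16.7);                                                    // acceptable frame time (assumption)

            double measuredMillis = 12.3;   // would come from an instrumented run
            boolean pass = measuredMillis <= context.acceptableMillis();
            System.out.println(context + " -> measured " + measuredMillis + " ms, pass=" + pass);
        }
    }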


