Tool Features


Overview

The tools we use consist of a reusable framework coupled with a test automation tool. The goal of the framework is to provide a tool-independent method for automating the tests. One part of the framework consists of rules for designing and implementing the tests, and another is a set of Java classes that integrate with the JUnit tool and provide a convenient way to integrate functional and acceptance tests into the unit test automation when appropriate. A set of supporting utilities and modules, written in the scripting language of the test automation tool, interface the automation tool to the framework.
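Such a JUnit integration might look roughly like the sketch below. Every name here (FunctionalTestAdapter, ScriptRunner, the "login" script) is a hypothetical illustration, not the framework's actual API; a real version would extend the JUnit test-case classes and delegate to the automation tool rather than to a stub.

```java
// Hypothetical sketch: wrapping a tool-driven functional test so it can run
// inside a unit-test suite. Names are illustrative, not the real framework.
public class FunctionalTestAdapter {

    /** Stand-in for the hook into the automation tool: runs a named script
     *  and reports whether the script's own validations all passed. */
    interface ScriptRunner {
        boolean run(String scriptName);
    }

    private final ScriptRunner runner;

    public FunctionalTestAdapter(ScriptRunner runner) {
        this.runner = runner;
    }

    /** Self-verifying: passes or fails with no human intervention. */
    public boolean testLoginScript() {
        return runner.run("login");
    }

    public static void main(String[] args) {
        // Stub runner standing in for the real tool interface.
        FunctionalTestAdapter adapter =
                new FunctionalTestAdapter(name -> "login".equals(name));
        System.out.println(adapter.testLoginScript());
    }
}
```

The point of the adapter is only that a functional test becomes one more pass/fail method the unit-test runner can execute alongside ordinary unit tests.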

For a test automation tool, we use WebART, a tool for automated testing of Web-based applications. We are also experimenting with the use of other tools, such as HttpUnit.

Framework

The following rules guide the design and implementation of automated functional tests.

  • The tests must be self-verifying: they determine whether they passed or failed without human intervention.

  • Tests verify only the aspects of concern for a particular test, not every function that may have to be performed to set up the test.

  • Verification covers only the minimum criteria for success: it demonstrates the business value end to end, but no more than the customer needs for success.

  • Test data is separated from any actual code so that anyone can easily extend the test coverage simply by adding cases, that is, without programming.

  • The tests are designed so that they do not contain any duplication and have the fewest possible modules.

  • The tests are continually refactored by combining, splitting, or adding modules or by changing module interfaces or behavior whenever it is necessary to avoid duplication or to make it easier to add new test cases.
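The separation of test data from code might work along the lines of the sketch below, which parses test-case rows like those in Table 38.1 from a plain-text file. The pipe-delimited format, the comma-separated parameter pairs, and the class name are all invented for illustration; the real framework's data format is not specified here.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Illustrative only: a test-case row is "Action | Param=Value, ... | Expected",
// so new cases can be added to a data file without any programming.
public class TestCaseData {
    public final String action;
    public final Map<String, String> parameters = new LinkedHashMap<>();
    public final String expectedResult;

    public TestCaseData(String line) {
        String[] fields = line.split("\\|");
        action = fields[0].trim();
        for (String pair : fields[1].split(",")) {
            String[] kv = pair.trim().split("=", 2);
            parameters.put(kv[0], kv[1]);
        }
        expectedResult = fields[2].trim();
    }

    public static List<TestCaseData> parseAll(String text) {
        List<TestCaseData> cases = new ArrayList<>();
        for (String line : text.split("\n")) {
            if (!line.trim().isEmpty()) cases.add(new TestCaseData(line));
        }
        return cases;
    }

    public static void main(String[] args) {
        List<TestCaseData> cases = parseAll(
                "Login | UserId=Test1, Password=Pass1 | Welcome page\n"
              + "Search | Category=toys, Location=Denver | Results list containing \"KBToys\"");
        System.out.println(cases.size());
    }
}
```

A driver that reads such a file and dispatches each row to a test module is what keeps the data and the code independent of each other.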

Essentially, we specify the tests as a sequence of actions, parameters, and expected results. Table 38.1 shows some examples.

The actions are then implemented as test modules that perform the associated action according to the specified parameters and validate that the expected results actually occur. For instance, the login module would log in to the system using the specified user ID and password, and validate that the welcome screen results. The search module would search the toys category in Denver and validate that a list is returned that contains "KBToys." These modules are generic in the sense that a single implementation is used for all the tests for a given system.
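A generic dispatch from action names to modules might be sketched as follows. The module interface and the canned responses are invented for illustration; a real module would drive the system under test through the automation tool rather than return hard-coded strings.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of generic test modules dispatched by action name. One generic
// implementation per action is reused by every test of the system.
public class ModuleDispatch {

    interface TestModule {
        /** Performs the action and returns the resulting page or screen. */
        String perform(Map<String, String> parameters);
    }

    private final Map<String, TestModule> modules = new HashMap<>();

    public ModuleDispatch() {
        // Stub modules standing in for tool-driven implementations.
        modules.put("Login", params -> "Pass1".equals(params.get("Password"))
                ? "Welcome page" : "Error: invalid login");
        modules.put("Search", params -> "Results list containing \"KBToys\"");
    }

    /** Runs one table row and verifies the expected result actually occurred. */
    public boolean runStep(String action, Map<String, String> params, String expected) {
        String actual = modules.get(action).perform(params);
        return expected.equals(actual);
    }

    public static void main(String[] args) {
        ModuleDispatch dispatch = new ModuleDispatch();
        Map<String, String> params = new HashMap<>();
        params.put("UserId", "Test1");
        params.put("Password", "Pass1");
        System.out.println(dispatch.runStep("Login", params, "Welcome page"));
    }
}
```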

The test modules interact with the system at the same level that a user does, over the same interface, using the test automation tool. They validate the fixed aspects of the system response first and then any specified variable aspects. For instance, the login module in the test in Table 38.1 might check for either the welcome page or an error message as the fixed portion of the validation, and then check that the variable portion (the welcome screen) was present. In this manner the same module can be used to test that the error message is present when a mismatched user ID and password are entered.
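The two-phase validation described above can be sketched like this; the method names and the error-message text are illustrative assumptions, not the actual system's responses.

```java
// Sketch of fixed-then-variable validation: first confirm the response is one
// of the known outcomes (welcome page or error message), then confirm the
// particular outcome this test case expects. Message text is hypothetical.
public class LoginValidator {

    /** Fixed portion: the response must be one of the recognized outcomes. */
    public static boolean fixedPartValid(String response) {
        return response.contains("Welcome")
                || response.contains("Invalid user ID or password");
    }

    /** Full validation: fixed portion first, then the expected variable part. */
    public static boolean validate(String response, String expectedPart) {
        return fixedPartValid(response) && response.contains(expectedPart);
    }

    public static void main(String[] args) {
        System.out.println(validate("Welcome, Test1", "Welcome"));
    }
}
```

The same validate call then serves both the happy-path test (expecting the welcome page) and the mismatched-credentials test (expecting the error message), which is what lets one module cover both cases.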

Table 38.1. Specifying Automated Functional Tests

  Action   Parameters                       Expected Results
  ------   ----------------------------     --------------------------------
  Login    User Id=Test1, Password=Pass1    Welcome page
  Search   Category=toys, Location=Denver   Results list containing "KBToys"

WebART

WebART is an HTTP-level tool for testing Web applications that is capable of functional testing as well as load and performance testing. The name comes from an earlier tool for online systems testing that was called ART, for Automated Regression Testing. It is based on a model of scripted user simulation, in which a script describes a sequence of user interactions with the system. Typical functional testing is done with a single simulated user executing a set of defined test cases, with validation of the results done on the fly by the script or by using a batch comparator and a result baseline file. Multiuser scripts are also possible in which the users' activities are synchronized for testing deadlock and other types of simultaneous user interactions.

Load and performance testing are accomplished by running large numbers of simulated users, either all executing the same script or executing a mixture of different scripts. All scripts are innately capable of execution by an essentially unlimited number of users, and a number of constructs exist in the scripting language to produce controlled variation among different users executing the same script. During a load or performance test, the system throughput, transaction rate, error rate, and response time are displayed and logged by the tool, and the load parameters (such as number of users, think time, and transaction rate) can be adjusted dynamically. Tests with up to several thousand simulated users can generally be run on a single workstation (running Windows NT/2000/XP, Linux, AIX, or SunOS). For larger tests, the tool can be installed on multiple workstations and still controlled and monitored from a single point.



Extreme Programming Perspectives
ISBN: 0201770059
Year: 2005
Pages: 445
