Tools for Testing Performance and Throughput

To test application performance and throughput, at whatever stage in the development process, we'll need to use appropriate tools and have a clear test plan. It's vital that tests are repeatable and that test results can be filed for future reference.

Usually we'll begin with load testing, which will also indicate performance in the form of response times for each concurrent test. However, we may also need to profile individual requests, so as to be able to optimize or eliminate slow operations.

Preparing to Benchmark

Benchmarking is a form of experimentation, so it's vital to adopt a good experimental method to ensure that benchmarks are as accurate as possible and are repeatable. For example, the application must be configured as it will be in production.

  • The application should be running on production hardware or the closest available hardware to production hardware.

  • Logging must be configured as in production. Verbose logging, such as at the Java 1.4 FINE level or Log4j DEBUG level, can seriously affect performance, both through the volume of output and the cost of constructing the log messages themselves (see the sketch after this list). Log output format can also be important. Log4j, for example, provides the ability to display the line number of each logging statement for which output is generated. This is great for debugging, but so expensive (due to the need to generate exceptions and parse their stack traces) that it can seriously distort performance results.

  • Configure third-party products for optimum performance, as they will be deployed in production. For example:

    • MVC web frameworks may have debug settings that will reduce performance. Ensure that they're disabled.

    • The Velocity template engine can be configured to check templates for changes regularly. This is convenient in development but reduces performance.

    • The application server should be configured to production settings.

    • RDBMSs should be set to production settings.

  • Disable any application features that will not be used in a particular production environment, but may affect performance.

  • Use realistic data. Performance of a system with only the data for a few hundred test users loaded may be very different from that with the thousands or millions of records used in production.
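
Regarding the logging point above: in production-quality code, expensive log message construction is normally guarded so that it is performed only when the relevant level is enabled. The following fragment shows the standard Log4j idiom; the class and logger names are purely illustrative and not part of the sample application:

    import org.apache.log4j.Logger;

    public class ReservationService {

        private static final Logger log = Logger.getLogger(ReservationService.class);

        public void reserve(int showId, int seats) {
            // The string concatenation below is performed only if DEBUG output
            // is actually enabled, so a production configuration at WARN or
            // ERROR level pays almost nothing for this statement. Benchmarking
            // at DEBUG level would pay the full cost on every call.
            if (log.isDebugEnabled()) {
                log.debug("Reserving " + seats + " seats for show " + showId);
            }
            // ... application logic ...
        }
    }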

It's also vital to ensure that there are no confounding factors that may affect the running of the tests:

  • When running load testing or profiling software on the same server as the J2EE application, check that it's not distorting performance figures by hogging CPU time. If possible, load test a web interface from one or more separate machines.

  • Ensure that no other processes on the machine(s) running the application server are likely to reduce resources available to the application. Even innocent monitoring tools such as top can consume surprising amounts of CPU time; virus scans and the like can be disastrous in long test runs.

  • Ensure that there's enough RAM available on the application server and machine running the load testing software.

Important 

Remember that benchmarking isn't a precise science. Strive to eliminate confounding factors, but remember not to read too much into any particular number. In my experience, variations of 20-30% between successive test runs are common for J2EE applications, especially where load testing is involved, simply because of the number of variables at work (variables that will also be present in production).

Web Test Tools

One of the easiest ways to establish whether a J2EE web application performs satisfactorily and delivers sufficient throughput under load is to load-test its web interface. Since the web interface will be the user's experience of the application, non-functional requirements should provide a clear definition of the performance and concurrency required of it.

Microsoft Web Application Stress Tool

There are many tools for testing the performance of web applications. My preferred tool is Microsoft's free Web Application Stress (WAS) Tool (http://webtool.rte.microsoft.com/).

For a platform-neutral, Java-based alternative, consider Apache JMeter (available at http://jakarta.apache.org/jmeter/index.html) or the Grinder (discussed below). However, these tools are less intuitive and harder to set up. Since it's generally best to run load testing software on a separate machine to the application server, there is usually no problem in finding a Windows machine to run the Microsoft tool.

Configuring Microsoft's WAS is very easy. It simply involves creating one or more scripts. Scripts can also be "recorded" using Internet Explorer. Scripts consist of one or more definitions of application URLs to be load-tested, including GET or POST data if necessary. WAS can use a range or set of parameter values. The following screenshot illustrates configuring WAS to request the "Display Show" page in the sample application. Remember to change the port from the default of 80 if necessary for each URL. In this example, it is the JBoss/Jetty default of 8080:

[Screenshot: defining a script URL in the Web Application Stress Tool]

Each script has global settings for the number of concurrent threads to use, the delays between requests issued by each thread, and options such as whether to follow redirects and whether to simulate user access via a slow modem link. It's also possible to configure cookie and session behavior. Each script is configured via the following screen:

[Screenshot: script settings in the Web Application Stress Tool]

Once a script has been run, reports can be viewed via the Reports option on the View menu. Reports are stored in a database so that they can be viewed at any time. Reports include the number of requests per second, the amount of data transmitted and received, and the average wait to receive the first and last byte of the response:

[Screenshot: a Web Application Stress Tool report]

Using the Web Application Stress Tool or any comparable product, we can quickly establish the performance and throughput of a whole web application, indicating where further, more detailed analysis may be required and whether performance tuning is required at all.

Non-Web Testing Tools

Sometimes testing through the web interface is all that's required. Performance and load testing isn't like unit testing; there's no need to have performance tests for every class. If we can easily set up a performance test of an entire system and are satisfied with the results, there's no need to spend further time writing performance or scalability tests.

However, not all J2EE applications have a web interface (and even in web applications, we may need a more detailed breakdown of the architectural layers where an application spends most of its time). This means that we need the ability to load-test and performance-test individual Java classes, which in turn may test application resources such as databases.

There are many open source tools available for such testing, such as the Grinder (http://sourceforge.net/projects/grinder), an extensible load tester first developed for a Wrox book on WebLogic, and Apache JMeter.

Personally I find most of these tools unnecessarily complex. For example, it's not easy to write test cases for the Grinder, which also requires multicast to be enabled to support communication between load-test processes and its console. Unlike JUnit, these tools involve a significant learning curve.

I use the following simple framework, which I originally developed for a client a couple of years ago and which I've found meets nearly all load-testing requirements with a minimum of effort. The code is included with the sample application download, under the /framework/test directory. Unlike JMeter or the Grinder, it doesn't provide a GUI console. I did write one for an early version of the tool, but found it less useful than file reports, especially as the tests were often run on a server without a display.

Like most load-testing tools, the test framework in the com.interface21.load package, described below, is based on the following concepts:

  • It enables a number of test threads to be run in parallel as part of a test suite. In this framework, test threads will implement the com.interface21.load.Test interface, while the test suite is usually a generic framework class.

  • Each test thread executes a number of passes, independent of the activity of other threads.

  • Each thread can use a random delay of up to a maximum number of milliseconds between test passes. This is essential to simulate the unpredictable user activity likely to be experienced at run time.

  • Each thread implements a simple interface that requires it to execute an application-specific test for each pass (the random delay between test passes is handled by the framework test suite).

  • All threads use a single test fixture exposing the application object(s) to test (this is analogous to a JUnit fixture).

Periodic reports are made to the console and a report can be written to file after the completion of a test run.

The following UML class diagram illustrates the framework classes involved, and how an application-specific test thread class (circled) can extend the AbstractTest convenience class. The framework supplies a test suite implementation, which provides a standard way to coordinate all the application-specific tests:

[UML class diagram: the com.interface21.load framework classes, with an application-specific test thread class extending AbstractTest]

The only code required to implement a load test is an extension of the framework AbstractTest class, as shown below. This involves implementing just two methods, as the AbstractTest class provides a final implementation of the java.lang.Runnable interface:

    import com.interface21.load.AbstractTest;

    public class MyTestThread extends AbstractTest {

        private MyFixture fixture;

The framework calls the following method on subclasses of AbstractTest to make the shared test fixture (the application object to test) available to each thread. Tests that don't require a fixture don't need to override this method:

        public void setFixture(Object fixture) {
            this.fixture = (MyFixture) fixture;
        }

The following abstract method must be implemented to run each test. The index of the test pass is passed as an argument in case it is necessary:

        protected void runPass(int i) throws Exception {
            // do something with fixture
        }
    }

Typically the runPass() method will be implemented to select random test data made available by the fixture and use it to invoke one or more methods on the class being load tested. As with JUnit test cases, we need only catch exceptions resulting from normal execution scenarios: uncaught exceptions will be logged as errors by the test suite and included in the final report (the test thread will continue to run further tests). Exceptions can also be thrown to indicate failures if assertions are not satisfied.
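
For example, a runPass() implementation for the hypothetical MyTestThread above might look like the following. The fixture methods shown (getRandomShowId() and checkAvailability()) are invented for this sketch; a real fixture will expose its own application-specific API:

        protected void runPass(int i) throws Exception {
            // Select random test data exposed by the shared fixture
            int showId = fixture.getRandomShowId();
            // Invoke the operation under test
            int freeSeats = fixture.checkAvailability(showId);
            // Signal a failure by throwing an exception: the suite logs it as
            // an error in the report and the thread continues with further passes
            if (freeSeats < 0) {
                throw new Exception("Negative seat count for show " + showId);
            }
        }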

This tool uses the bean-based approach to configuration described in Chapter 11 and used consistently in the application framework discussed in this book. Each test uses its own properties file, which enables easy parameterization. This file is read by the PropertiesTestSuiteLoader class, which takes the filename as an argument and creates and initializes a test suite object of type BeanFactoryTestSuite from the bean definitions contained in the properties file.

The following definitions configure the test suite, including its reporting format, how often it reports to the console during test runs, and where it writes its report files. If the reportFile bean property isn't set, there's no file output:

    suite.class=com.interface21.load.BeanFactoryTestSuite
    suite.name=Availability check
    suite.reportIntervalSeconds=10
    suite.longReports=false
    suite.doubleFormat=###.#
    suite.reportFile=c:\\reports\\results1.txt

The following keys control how many threads are run, how many passes (test cases) each thread executes, and the maximum delay in milliseconds between test cases in each test thread:

    suite.threads=50
    suite.passes=40
    suite.maxPause=23

The following properties show how an application-specific test fixture can be made available to the test suite, and configured via its JavaBean properties. The test suite will invoke the setFixture() method on each test thread to enable all test thread instances to share this fixture:

    suite.fixture(ref)=fixture
    fixture.class=com.interface21.load.AvailabilityFixture
    fixture.timeout=10
    fixture.minDelay=60
    fixture.maxDelay=120
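
A fixture configured in this way is simply a JavaBean whose property names match the keys in the properties file. The following hypothetical MyFixture class sketches the idea, assuming the same three properties as the definitions above; the application-specific methods that test threads would actually call are omitted:

    public class MyFixture {

        private int timeout;
        private int minDelay;
        private int maxDelay;

        // JavaBean setters invoked by the bean factory when it processes
        // the fixture.* definitions in the properties file
        public void setTimeout(int timeout) {
            this.timeout = timeout;
        }

        public void setMinDelay(int minDelay) {
            this.minDelay = minDelay;
        }

        public void setMaxDelay(int maxDelay) {
            this.maxDelay = maxDelay;
        }

        // Application-specific methods used by test threads would be added here
    }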

Finally, we must include bean definitions for one or more test threads. Each of these will be independent at run time. Hence this bean definition must not be a singleton, so we override the bean factory's default behavior, in the highlighted line:

    availabilityTest.class=com.interface21.load.AvailabilityCheckTest
    availabilityTest.(singleton)=false

The default behavior is for each thread to take its number of passes and maximum pause value from that of the test suite, although this can be overridden for each test thread. It's also possible to run several different test threads concurrently, each with a different weighting.

The test suite can be run using an Ant target like this:

    <target name="load">
        <java classname="com.interface21.load.PropertiesTestSuiteLoader"
              fork="yes"
              dir="src">
            <classpath location="classpath"/>
            <arg file="path/mytest.properties"/>
        </java>
    </target>

The highlighted line should be changed as necessary to ensure that both the com.interface21.load package and the application-specific test fixture and test threads are available on the classpath.

Reports will show the number of test runs completed, the number of errors, the number of hits per second achieved by each test thread and overall, and the average response time:

    AvailabilityCheckTest-0   40/40  errs=0  125hps   avg=8ms
    AvailabilityCheckTest-1   40/40  errs=0  95hps    avg=10ms
    AvailabilityCheckTest-2   40/40  errs=0  90.7hps  avg=11ms
    AvailabilityCheckTest-3   40/40  errs=0  99.8hps  avg=10ms
    AvailabilityCheckTest-4   40/40  errs=0  110hps   avg=9ms
    *********** Total hits=200
    *********** HPS=521.3
    *********** Average response=9

The most important setting is the number of test threads. By increasing this, we can establish at what point throughput begins to deteriorate, which is usually the point of the exercise. Modern JVMs can cope with very high numbers of concurrent threads; I've successfully tested with several hundred. However, it's important to remember that if we run too many concurrent test threads, the work of switching execution between them may become great enough to distort the results. It's also possible to use this tool to establish how the application copes with sustained load over long periods, by specifying a very high number of passes for each thread.

This tool can be used for web testing as well, by providing an AbstractTest implementation that requests web resources. However, the Web Application Stress Tool is easier to use and provides all the flexibility required in most cases.
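
As a rough sketch of that approach, the following test thread issues an HTTP GET on each pass using java.net.HttpURLConnection and fails the pass on any non-200 response. The class name and URL are hypothetical; a real implementation would more likely obtain URLs and parameters from a fixture:

    import java.net.HttpURLConnection;
    import java.net.URL;

    import com.interface21.load.AbstractTest;

    public class WebPingTest extends AbstractTest {

        protected void runPass(int i) throws Exception {
            // Request a page and treat any non-200 response as a failure
            URL url = new URL("http://localhost:8080/ticket/displayShow.html?id=1");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            try {
                int status = conn.getResponseCode();
                if (status != HttpURLConnection.HTTP_OK) {
                    throw new Exception("Unexpected HTTP status " + status);
                }
            }
            finally {
                conn.disconnect();
            }
        }
    }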


