11.2 The Performance Environment


The performance environment has several components:

  • A client simulation tool (the load-testing tool)

  • Java Virtual Machine (JVM) profiling tools

  • System and server-side monitoring tools

As with other tools and third-party products used in your project, these tools need to be planned for, evaluated, selected, purchased (or built), trained on, customized, and used. Don't forget to budget time for reading the documentation and for developer learning. Choosing the right tools and the right approach makes a real difference in the overall cost and time of managing performance.

11.2.1 Use a Client Simulation Tool

The client simulation tool, often referred to as a benchmark harness or load-testing tool, exercises the application as though one or more users are performing the expected business activity. Some projects adapt their quality assessment testing toolset to create a benchmark harness, other projects build a dedicated harness to exercise the server-side components directly, and others use an off-the-shelf web loading or GUI capture-and-playback tool.

The following three factors are imperative when deciding on a client simulation tool:

  • The tool must effectively simulate client activity, including variable pauses in activity such as the time users would take to fill out fields or make selections at decision points.

  • The tool should make and record timed measurements of simulated activity between arbitrary points, such as from a simulated user click on a browser to complete page display.

  • The tool should not interfere with timing measurements; that is, it should not add any significant overhead that would measurably affect the times being recorded.
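As a minimal sketch of these requirements, the following simulated user loops over a timed request followed by a variable think time. The endpoint URL and the pause bounds are illustrative assumptions, not part of the original text:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Random;

// Sketch of a client simulator: each simulated user issues a request,
// records elapsed wall-clock time between two arbitrary points, and
// pauses for a variable "think time" before the next action.
public class ClientSimulator {
    private static final Random RANDOM = new Random();

    // Measure one user action: from request start to complete response.
    static long timeRequest(HttpClient client, String url) throws Exception {
        long start = System.nanoTime();
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).build();
        client.send(request, HttpResponse.BodyHandlers.ofString());
        return (System.nanoTime() - start) / 1_000_000; // milliseconds
    }

    // Variable pause simulating a user filling out a form or deciding.
    static long thinkTime(long minMillis, long maxMillis) {
        return minMillis + (long) (RANDOM.nextDouble() * (maxMillis - minMillis + 1));
    }

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        for (int i = 0; i < 5; i++) {
            // Hypothetical application endpoint.
            long elapsed = timeRequest(client, "http://localhost:8080/app");
            System.out.println("Request " + i + " took " + elapsed + " ms");
            Thread.sleep(thinkTime(500, 3000)); // user think time
        }
    }
}
```

The timing code itself does almost no work between the two clock readings, so it satisfies the third requirement of not perturbing the measurements.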

From the J2SE world, here are some tips to keep in mind:

  • Build or buy a benchmark harness, which is a tool dedicated to performance measurements and not robustness testing.

  • Specify benchmarks based on real user behavior.

  • Run benchmarks simulating user behavior across all expected scales.
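The last tip can be sketched as a small harness that runs the same simulated user task at several scales. The thread counts and the empty task body here are placeholders for real user behavior:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Sketch: run the same simulated-user task at several scales
// (1, 10, 100 concurrent users) and report aggregate elapsed time.
public class ScaledBenchmark {
    static long runAtScale(int users, Runnable userTask) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(users);
        long start = System.nanoTime();
        for (int i = 0; i < users; i++) {
            pool.submit(userTask);
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.MINUTES);
        return (System.nanoTime() - start) / 1_000_000; // milliseconds
    }

    public static void main(String[] args) throws InterruptedException {
        Runnable userTask = () -> { /* simulated business activity */ };
        for (int scale : new int[] {1, 10, 100}) {
            System.out.println(scale + " users: " + runAtScale(scale, userTask) + " ms");
        }
    }
}
```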

11.2.2 Don't Use JVM Profiling Tools

JVM profiling tools are normally used during development to identify bottlenecks in running Java code. They are suitable for identifying bottlenecks in application subcomponents that run in individual JVMs. However, they usually impose a heavy overhead on the JVM. Therefore, they tend to be used infrequently because of the extra time needed to run an application while profiling, and because analyzing the results of the profile can be difficult.

JVM profiling tools do not provide absolute measurements of execution time. The heavy overhead makes the absolute times produced by a JVM profiler irrelevant. Instead, the relative times of execution between methods and threads are measured to provide a profile that can be analyzed to determine the program bottlenecks. Their heavy overheads make JVM profiling tools unsuitable for use as enterprise monitoring tools.

11.2.3 Use Monitoring Tools

Monitoring tools continually measure activity and produce logs that can be analyzed for trends or problems. Your choice of monitoring tools should be guided by three primary requirements:

  • The tool should have a low overhead cost for collecting data from the server.

  • The tool should provide measurements that are important for your project.

  • The tool should be suitable for monitoring in both the development environment and the production environment.
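For illustration only, a low-overhead monitor can be as simple as a daemon thread that periodically samples heap usage and appends it to a log for later trend analysis. The sampling interval and the stdout destination are placeholder choices; a real tool would write to a rolling log file:

```java
// Sketch of a low-overhead monitor: a daemon thread samples heap usage
// at a fixed interval, imposing negligible cost on the server.
public class HeapMonitor implements Runnable {
    private final long intervalMillis;
    private volatile boolean running = true;

    public HeapMonitor(long intervalMillis) {
        this.intervalMillis = intervalMillis;
    }

    public static long usedHeapBytes() {
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }

    public void stop() {
        running = false;
    }

    @Override
    public void run() {
        while (running) {
            // In production this would go to a rolling log file, not stdout.
            System.out.println(System.currentTimeMillis() + " heap=" + usedHeapBytes());
            try {
                Thread.sleep(intervalMillis);
            } catch (InterruptedException e) {
                return;
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        HeapMonitor monitor = new HeapMonitor(1000);
        Thread t = new Thread(monitor);
        t.setDaemon(true); // never keeps the JVM alive
        t.start();
        Thread.sleep(3500); // application work would happen here
        monitor.stop();
    }
}
```

Because it only reads two `Runtime` counters per sample, the same monitor can run in both the development and production environments.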

Enterprise monitoring tools provide valuable information for both development performance tuning and production performance monitoring. Ideally, to ensure the success of performance monitoring in production, the skills and knowledge acquired in development should be transferred to the production environment with a minimum of disruption.

In addition, monitoring tools should do the following:

  • Scale with the application

  • Be easy to configure

  • Provide detailed analysis tools

  • Provide automatic advisements or warnings whenever possible

A number of commercial J2EE performance-monitoring tools are now available. These tools improve J2EE performance-tuning productivity significantly, and it is worth obtaining one for your project. (A list of such tools is available at http://www.JavaPerformanceTuning.com/resources.shtml.) If you want to implement your own tool, you need to add logging to all the main communications interfaces of the application, to transaction and session boundaries, to lifecycle boundaries (for instance, the creation and destruction of Enterprise JavaBeans [EJBs]), and to request initiation and completion. Free logging tools designed to work with J2EE applications, such as Steve Souza's JAMon (see http://www.JavaPerformanceTuning.com/tools/jamon/index.shtml), can assist with this task.
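As a sketch of the home-grown approach, a JDK dynamic proxy can log initiation and completion times around any boundary interface without changing the component itself. The `OrderService` interface below is hypothetical, standing in for a real session facade or communication boundary:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

// Sketch of home-grown monitoring: a dynamic proxy that logs request
// initiation and completion times around any interface.
public class TimingProxy implements InvocationHandler {
    private final Object target;

    private TimingProxy(Object target) {
        this.target = target;
    }

    @SuppressWarnings("unchecked")
    public static <T> T wrap(T target, Class<T> iface) {
        return (T) Proxy.newProxyInstance(
                iface.getClassLoader(), new Class<?>[] {iface}, new TimingProxy(target));
    }

    @Override
    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        long start = System.nanoTime();
        System.out.println("START " + method.getName());
        try {
            return method.invoke(target, args);
        } catch (InvocationTargetException e) {
            throw e.getCause(); // rethrow the real exception from the target
        } finally {
            long micros = (System.nanoTime() - start) / 1_000;
            System.out.println("END   " + method.getName() + " " + micros + " us");
        }
    }

    // Hypothetical boundary interface for demonstration.
    public interface OrderService {
        String placeOrder(String item);
    }

    public static void main(String[] args) {
        OrderService real = item -> "order placed: " + item;
        OrderService monitored = wrap(real, OrderService.class);
        System.out.println(monitored.placeOrder("book"));
    }
}
```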

The following are some important lessons learned from the J2SE world that are applicable here:

  • Make your benchmarks long enough; more than 5 seconds is a good target.

  • Use elapsed time (wall-clock time) for the primary time measurements and a benchmark harness that does not interfere with measured times.

  • Run benchmarks in an isolated, reproducible environment before starting the tuning process, and again after each tuning exercise.

  • Be sure that you are not measuring artificial situations, such as full caches containing the exact data needed for the test. Account for all the performance effects of any caches.

  • Measure all aspects of the system, including operating system statistics (especially CPU, memory, and I/O statistics) and JVM statistics, including the heap, method execution times, garbage collection, and object creation.
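Several of these lessons can be combined in a small example: a workload timed by elapsed wall-clock time, run for more than 5 seconds, with heap usage sampled before and after. The workload itself is a placeholder:

```java
// Sketch applying the lessons above: wall-clock timing, a run long
// enough to be meaningful, and basic JVM heap statistics recorded.
public class BenchmarkRun {
    static long usedHeap() {
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }

    // Repeat the work until at least minMillis of elapsed (wall-clock)
    // time has passed; returns the actual elapsed time.
    static long runForAtLeast(long minMillis, Runnable work) {
        long start = System.currentTimeMillis();
        long iterations = 0;
        while (System.currentTimeMillis() - start < minMillis) {
            work.run();
            iterations++;
        }
        long elapsed = System.currentTimeMillis() - start;
        System.out.println(iterations + " iterations in " + elapsed + " ms");
        return elapsed;
    }

    public static void main(String[] args) {
        System.out.println("heap before: " + usedHeap() + " bytes");
        // Placeholder workload; a real benchmark would exercise the application.
        runForAtLeast(5_000, () -> Math.sqrt(Math.random()));
        System.out.println("heap after:  " + usedHeap() + " bytes");
    }
}
```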

11.2.4 Use Test Systems

Every phase of your project should have a testable system. This allows you to continually performance-test your system and quickly identify potential performance problems. The earlier you can identify such problems, the cheaper they are to remedy. Analysis and design stages in particular should include testing of proposed architectures to eliminate possibilities that are functionally adequate but not adequately efficient. Test systems include:


Benchmark systems

The ECperf benchmark is not difficult to install and run, and is representative of the work many J2EE applications do. The Sun Pet Store tutorial is also available and, although it is not a benchmark, can be used, after tuning, for internal performance testing.

Prototypes and models

Many projects start with a prototype or working model. Such a test system can form a useful core for exercising the main ideas from analysis and design.

Skeleton systems

This type of system provides a core into which components can be slotted as they become available. Temporary simulation components can be used to model behavior and identify potential performance problems even before components are testable.

Partial systems

In many projects with no performance plan, the first time performance is seriously considered is often when a partial system can be tested (usually because performance inadequacies become clear at this point).

Complete development system

When the application has been completed but has not yet passed quality assessment, there is a window of time during which performance testing is possible. Some projects use this window for performance testing simultaneously with quality assessment. However, as the performance planner, you need to be aware that most identified performance problems at this stage will not be fixed in time for the application to pass through quality assessment and be released by the scheduled date.

Potential release system

After quality assessment completes successfully, the application is ready for deployment. This system is frequently the target of intensive performance testing, with the goal of providing an upgrade to the deployed system that addresses any significant performance problems. There is normally a window of time (after the application has been released to the administration team but before it has been moved into production) during which performance testing can effectively be performed to contribute to an upgrade soon after deployment.

Deployed production system

The production system is the ultimate performance-testing environment. Deploying the application with monitoring in place ensures that valuable performance data is not missed. This data can be analyzed to eliminate any remaining performance problems.

Your performance plan should include some aspects that might not be obvious. First, performance testing should normally be scheduled on systems where no other activity is taking place. Sharing the QA or development system is possible, but in that case performance testing should be scheduled for when other activity has died down, typically in the evening or overnight. If this is the case in your environment, it is important that the tools run unattended, and preferably automatically.

Second, your overall plan must take into account code versioning and releases from development to the performance environment, which occur simultaneously with QA releases. Bear in mind that changes required by both QA and performance testing will need to be applied to both environments. As milestones approach, performance changes are frequently pushed back to the next release; this might be acceptable, but it should be planned for to avoid confusion.


The O'Reilly Java Authors - Java™ Enterprise Best Practices
Year: 2002
Pages: 96
