11.7 Tuning Techniques


Java Enterprise performance-tuning techniques include J2SE tuning techniques, as well as a variety of others. Here is a summary of the J2SE best practice techniques:

  • Improve CPU limitations with faster code, better algorithms, and fewer short-lived objects.

  • Improve memory limitations by using fewer objects or smaller long-lived objects.

  • Improve I/O limitations by reducing the number of I/O operations through targeted redesign, by reducing the amount of data that must cross the I/O boundary, or by multithreading the I/O. Buffer I/O wherever possible (which is almost everywhere).

  • Switch to an optimizing compiler.

  • Use a JIT-enabled JVM.

  • Test other JVMs to find a faster one.

  • Turn off any JVM options that slow down the application (e.g., -Xrunhprof and -verbose).

  • Tune the heap.

  • If a bottleneck consists of a slow method, make it faster.

  • If a bottleneck consists of many calls to a fast method, reduce the number of times that method is called.

  • Tune object-creation and garbage-collection bottlenecks by reusing objects and reducing the number of objects used.

  • Target loops. Minimize the time spent in each loop by moving any code you can outside the loop, avoiding repeated operations, and inlining method calls.

  • Target strings. Only internationalized text needs to use String objects. All other string use can probably be optimized with your own character handling.

  • Try to use primitive datatypes directly. Avoid wrapping them.

  • Use or build the fastest collection possible. Traverse collection elements directly where possible.

  • Exceptions are time-consuming to create, so avoid creating them.

  • Minimize casting by specializing algorithms and data structures to the required datatype.
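Several of these guidelines can be seen in one small sketch. The class below (a hypothetical example, not from the text) contrasts a loop that repeatedly unboxes wrapper objects with one that uses a primitive array and a hoisted loop bound, illustrating the advice on loops, primitives, and short-lived objects:

```java
import java.util.List;

public class LoopTuning {
    // Slower version: each element access unboxes an Integer, creating
    // pressure from many short-lived wrapper objects.
    static int sumBoxed(List<Integer> values) {
        int sum = 0;
        for (int i = 0; i < values.size(); i++) {
            sum += values.get(i);          // unboxing on every access
        }
        return sum;
    }

    // Faster version: primitive array, loop bound hoisted into a local.
    static int sumPrimitive(int[] values) {
        int sum = 0;
        final int n = values.length;       // invariant hoisted out of the loop
        for (int i = 0; i < n; i++) {
            sum += values[i];              // direct access, no boxing
        }
        return sum;
    }

    public static void main(String[] args) {
        int[] data = {1, 2, 3, 4};
        System.out.println(sumPrimitive(data)); // prints 10
    }
}
```

Modern JITs hoist simple bounds like `values.size()` themselves, so the payoff varies; the boxing difference, however, is measurable on large datasets.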

11.7.1 Choose the Right Data Structure

Enterprise systems typically have requirements for handling very large datasets. The data structures delivered with the Java SDK are not necessarily ideal for efficiently manipulating very large datasets. A structure's efficiency depends closely on the data it holds and on the algorithms used to manipulate that data. You need to consider carefully how large datasets will be used and queried, and try to match the scaling characteristics of the structures and algorithms against the volumes of data likely to be applied.

Sometimes no single data structure is ideal for your data. In these cases, you have three choices:

Compromise data structure

Use a compromise data structure that is not ideal but provides adequate performance for the most important functions. This option is the easiest to implement and maintain, but performance is compromised. As an example, use a TreeMap for holding key/value pairs that also need to be ordered.
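A brief sketch of the TreeMap compromise (the key/value data here is made up for illustration): lookups cost O(log n) rather than a HashMap's O(1), but the same single structure also keeps keys sorted.

```java
import java.util.TreeMap;

public class CompromiseExample {
    // One structure serves both needs: keyed lookup and ordered traversal,
    // at the cost of O(log n) get/put instead of HashMap's O(1).
    static TreeMap<String, Integer> buildScores() {
        TreeMap<String, Integer> scores = new TreeMap<>();
        scores.put("carol", 72);
        scores.put("alice", 95);
        scores.put("bob", 88);
        return scores;
    }

    public static void main(String[] args) {
        TreeMap<String, Integer> scores = buildScores();
        System.out.println(scores.firstKey()); // prints alice (sorted order)
        System.out.println(scores.get("bob")); // prints 88
    }
}
```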

Aggregate data structure

Use more than one data structure holding the same set of data, with each data structure providing optimum performance for different functions. This option can be a little complex, though aggregating preexisting data structures under a single class and hiding the underlying complexity provides a maintainable solution. For example, you could use a HashMap and an ArrayList holding the same data in an aggregate class when you want to hold key/value pairs that need to be ordered. Iteration using the ArrayList is fast, but there is more overhead in maintaining the repeated dataset in the two structures held by the aggregation class.
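A minimal sketch of such an aggregate class (the class and method names are assumptions for illustration): a HashMap provides O(1) lookup, an ArrayList of keys provides fast ordered iteration, and put() must keep the two in sync, which is exactly the maintenance overhead the text describes.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;

public class OrderedMap<K, V> {
    private final HashMap<K, V> map = new HashMap<>();   // fast keyed lookup
    private final ArrayList<K> order = new ArrayList<>(); // fast ordered iteration

    public void put(K key, V value) {
        if (!map.containsKey(key)) {
            order.add(key);          // record insertion order exactly once
        }
        map.put(key, value);         // both structures must stay in sync
    }

    public V get(K key) {
        return map.get(key);         // O(1) via the HashMap
    }

    public List<K> keysInOrder() {
        return order;                // sequential traversal without hashing
    }
}
```

In current SDKs, java.util.LinkedHashMap provides this particular combination out of the box, but the aggregation pattern generalizes to any pair of access patterns.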

Hybrid data structure

Use a hybrid data structure that combines the structures and algorithms you need for your dataset and data manipulation algorithms. Because you usually need to build this complex solution from scratch, this option can be difficult to implement and maintain, but it potentially provides the best performance. Using the example of holding key/value pairs that also need to be ordered, the exact functionality required would need to be analyzed, and a specialized solution proposed, built, debugged, and maintained.
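A minimal hybrid sketch for the same ordered key/value example (an assumed design, not from the text): entries live in a hash table for O(1) lookup, and each entry also carries a "next" link forming an insertion-order chain, so ordered traversal needs no second container. This is essentially the approach java.util.LinkedHashMap takes internally.

```java
import java.util.HashMap;
import java.util.function.BiConsumer;

public class HybridMap<K, V> {
    private static final class Node<K, V> {
        final K key;
        V value;
        Node<K, V> next;             // insertion-order chain through the entries
        Node(K key, V value) { this.key = key; this.value = value; }
    }

    private final HashMap<K, Node<K, V>> table = new HashMap<>();
    private Node<K, V> head, tail;

    public void put(K key, V value) {
        Node<K, V> node = table.get(key);
        if (node != null) {          // existing key: update in place
            node.value = value;
            return;
        }
        node = new Node<>(key, value);
        table.put(key, node);
        if (head == null) head = node; else tail.next = node;
        tail = node;                 // append to the order chain
    }

    public V get(K key) {
        Node<K, V> node = table.get(key);
        return node == null ? null : node.value;
    }

    public void forEachInOrder(BiConsumer<K, V> action) {
        for (Node<K, V> n = head; n != null; n = n.next) {
            action.accept(n.key, n.value);
        }
    }
}
```

Unlike the aggregate approach, there is no duplicated key list to keep synchronized, but every operation (including removal, omitted here) must now maintain the chain by hand.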

11.7.2 Transactions

Transactions have many overheads. If you can avoid them, do so for performance reasons. Handle the nontransactional part of your application differently from the parts that require transactions. Where transactions are necessary, tune transactions using the following guidelines:

  • Try to use optimistic transactions (the Optimistic Locking design pattern) for transactional systems in which reads predominate over updates.

  • Minimize the time spent in any transaction, but don't shorten transactions so much that you unnecessarily increase the total number of transactions. Combine transactions that occur within a few seconds of each other to minimize the overall time spent in transactions. This can require manually controlling the transaction, i.e., turning off auto-commit for JDBC transactions or using TX_REQUIRED for EJBs.

  • J2EE transactions are defined with several isolation modes. Choose the fastest transaction isolation level that avoids corrupting the data. Transaction levels in order of increasing cost are: TRANSACTION_READ_UNCOMMITTED, TRANSACTION_READ_COMMITTED, TRANSACTION_REPEATABLE_READ, and TRANSACTION_SERIALIZABLE.

  • Don't leave transactions open, relying on the user to close them. Inevitably, there will be times when the user does not close the transaction, and the very long transaction that results will significantly decrease the performance of the system.

  • Bulk or batch updates are usually more efficiently performed in larger transactions.

  • Optimize read-only transactions. EJBs should use read-only transactions in the deployment descriptor, while JDBC should use read-only connections.

  • Lock only where the design absolutely requires it.
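The JDBC-side guidelines above can be sketched as follows. This is a hedged illustration only: the table and column names are placeholders, and the cheapest safe isolation level depends on your data, not on this example.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class TransactionTuning {
    // Applies several updates in one explicit transaction rather than
    // one auto-committed transaction per statement.
    static void batchUpdate(Connection conn, int[] ids) throws SQLException {
        conn.setAutoCommit(false);   // combine the updates into one transaction
        // Cheapest isolation level that still avoids dirty reads here:
        conn.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
        try (PreparedStatement ps = conn.prepareStatement(
                 "UPDATE orders SET shipped = 1 WHERE id = ?")) {
            for (int id : ids) {
                ps.setInt(1, id);
                ps.addBatch();       // bulk updates batched in one round trip
            }
            ps.executeBatch();
            conn.commit();           // close the transaction promptly
        } catch (SQLException e) {
            conn.rollback();         // never leave the transaction open
            throw e;
        }
    }
}
```

For pure queries, the equivalent step is conn.setReadOnly(true), which lets some drivers skip locking and logging work entirely.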


The O'Reilly Java Authors, Java™ Enterprise Best Practices
Year: 2002
Pages: 96
