OO Design Recommendations for J2EE Applications

It's possible to design a J2EE application so badly that, even if it contains beautifully written Java code at an individual object level, it will still be deemed a failure. A J2EE application with an excellent overall design but poor implementation code will be an equally miserable failure. Unfortunately, many developers spend too much time grappling with the J2EE APIs and too little time ensuring they adhere to good coding practice. All of Sun's J2EE sample applications seem to reflect this.

In my experience, it isn't pedantry to insist on adherence to good OO principles: it brings real benefits.

Important 

OO design is more important than any particular implementation technology (such as J2EE, or even Java). Good programming practices and sound OO design underpin good J2EE applications. Bad Java code is bad J2EE code.

Some "coding standards" issues – especially those relating to OO design – are on the borderline between design and implementation: for example, the use of design patterns.

The following section covers some issues that I've seen cause problems in large code bases, especially issues that I haven't seen covered elsewhere. This is a huge area, so this section is by no means complete. Some issues are matters of opinion, although I'll try to convince you of my position.

Important 

Take every opportunity to learn from the good (and bad) code of others, inside and outside your organization. Useful sources in the public domain include successful open source projects and the code in the core Java libraries. License permitting, it may be possible to decompile interesting parts of commercial products. A professional programmer or architect cares more about learning and discovering the best solution than the buzz of finding their own solution to a particular problem.

Achieving Loose Coupling with Interfaces

The "first principle of reusable object-oriented design" advocated by the classic Gang of Four design patterns book is: "Program to an interface, not an implementation". Fortunately, Java makes it very easy (and natural) to follow this principle.

Important 

Program to interfaces, not classes. This decouples interfaces from their implementations. Using loose coupling between objects promotes flexibility. To gain maximum flexibility, declare instance variables and method parameters to be of the least specific type required.

Using interface-based architecture is particularly important in J2EE applications, because of their scale. Programming to interfaces rather than concrete classes adds a little complexity, but the rewards far outweigh the investment. There is a slight performance penalty for calling an object through an interface, but this is seldom an issue in practice.

A few of the many advantages of an interface-based approach include:

  • The ability to change the implementing class of any application object without affecting calling code. This enables us to parameterize any part of an application without breaking other components.

  • Total freedom in implementing interfaces. There's no need to commit to an inheritance hierarchy. However, it's still possible to achieve code reuse by using concrete inheritance in interface implementations.

  • The ability to provide simple test implementations and stub implementations of application interfaces as necessary, facilitating the testing of other classes and enabling multiple teams to work in parallel after they have agreed on interfaces.

Adopting interface-based architecture is also the best way to ensure that a J2EE application is portable, yet is able to leverage vendor-specific optimizations and enhancements.

Interface-based architecture can be effectively combined with the use of reflection for configuration (see below).
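
For example, a caller that needs pricing information might depend only on a small interface. The following is a minimal sketch using hypothetical names (PriceCalculator and FixedPriceCalculator are not part of any framework discussed here):

   // Callers depend only on the PriceCalculator interface, so the implementing
   // class can be swapped, or stubbed for testing, without changing them.
   public interface PriceCalculator {
       int calculatePrice(int itemId, int quantity);
   }

   // One possible implementation; another might read prices from a database.
   class FixedPriceCalculator implements PriceCalculator {

       private final int unitPrice;

       public FixedPriceCalculator(int unitPrice) {
           this.unitPrice = unitPrice;
       }

       public int calculatePrice(int itemId, int quantity) {
           return unitPrice * quantity;
       }
   }

Calling code would declare its variables to be of type PriceCalculator, never of the concrete class, so that a database-backed or test implementation can be substituted later without breaking callers.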

Prefer Object Composition to Concrete Inheritance

The second basic principle of object-oriented design emphasized in the GoF book is "Favor object composition over class inheritance". Few developers appreciate this wise advice.

Unlike many older languages, such as C++, Java distinguishes at a language level between concrete inheritance (the inheritance of method implementations and member variables from a superclass) and interface inheritance (the implementation of interfaces). Java allows concrete inheritance from only a single superclass, but a Java class may implement any number of interfaces (including, of course, those interfaces implemented by its ancestors in a class hierarchy). While there are rare situations in which multiple concrete inheritance (as permitted in C++) is the best design approach, Java is much better off avoiding the complexity that may arise from permitting these rare legitimate uses.

Concrete inheritance is enthusiastically embraced by most developers new to OO, but has many disadvantages. Class hierarchies are rigid. It's impossible to change part of a class's implementation; by contrast, if that part is encapsulated in an interface (using delegation and the Strategy design pattern, which we'll discuss below), this problem can be avoided.

Object composition (in which new functionality is obtained by assembling or composing objects) is more flexible than concrete inheritance, and Java interfaces make delegation natural. Object composition allows the behavior of an object to be altered at run time, through delegating part of its behavior to an interface and allowing callers to set the implementation of that interface. The Strategy and State design patterns rely on this approach.

To clarify the distinction, let's consider what we want to achieve by inheritance.

Interface inheritance enables polymorphism: the substitutability of objects with the same interface at run time. This delivers much of the value of object-oriented design.

Concrete inheritance enables both polymorphism and more convenient implementation. Code can be inherited from a superclass. Thus concrete inheritance is an implementation, rather than purely a design, issue. Concrete inheritance is a valuable feature of any OO language; but it is easy to overuse. Common mistakes with concrete inheritance include:

  • Forcing users to extend an abstract or concrete class, when we could require implementation of a simple interface. This deprives user code of the right to its own inheritance hierarchy. If there's normally no reason that a user class would need its own custom superclass, we can still provide a convenient abstract implementation of the interface for subclassing. Thus the interface approach doesn't preclude the provision of convenient superclasses.

  • Using concrete inheritance to provide helper functionality, by subclasses calling helper methods in superclasses. What if classes outside the inheritance hierarchy need the helper functionality? Use object composition, so that the helper is a separate object and can be shared.

  • Using abstract classes in place of interfaces. Abstract classes are very useful when used correctly. The Template Method design pattern (discussed below) is usually implemented with an abstract class. However, an abstract class is not an alternative to an interface. It is usually a convenient step in the implementation of an interface. Don't use an abstract class to define a type. This is a recipe for running into problems with Java's lack of multiple concrete inheritance. Unfortunately, the core Java libraries are poor examples in this respect, often using abstract classes where interfaces would be preferable.

Interfaces are most valuable when kept simple. The more complex an interface, the less valuable modeling it as an interface becomes, as developers will be forced to extend an abstract or concrete implementation to avoid writing excessive amounts of code. Getting interface granularity right is therefore vital; interface hierarchies can be kept separate from class hierarchies, so that a particular class need implement only the exact interface it requires.

Important 

Interface inheritance (that is, the implementation of interfaces, rather than inheritance of functionality from concrete classes) is much more flexible than concrete inheritance.

Does this mean that concrete inheritance is a bad thing? Absolutely not; concrete inheritance is a powerful way of achieving code reuse in OO languages. However, it's best considered an implementation approach, rather than a high-level design approach. It's something we should choose to use, rather than be forced to use by an application's overall design.

The Template Method Design Pattern

One good use of concrete inheritance is to implement the Template Method design pattern.

The Template Method design pattern (GoF) addresses a common problem: we know the steps of an algorithm and the order in which they should be performed, but don't know how to perform all of the steps. The Template Method solution is to encapsulate the individual steps we don't know how to perform as abstract methods, and provide an abstract superclass that invokes them in the correct order. Concrete subclasses of this abstract superclass implement the abstract methods that perform the individual steps. The key concept is that it is the abstract base class that controls the workflow. Public superclass methods are usually final; the abstract methods deferred to subclasses are protected. This helps to reduce the likelihood of bugs: all subclasses are required to do is fulfill a clear contract.

The centralization of workflow logic into the abstract superclass is an example of inversion of control. Unlike in traditional class libraries, where user code invokes library code, in this approach framework code in the superclass invokes user code. It's also known as the Hollywood principle: "Don't call me, I'll call you". Inversion of control is fundamental to frameworks, which tend to use the Template Method pattern heavily (we'll discuss frameworks later).

For example, consider a simple order processing system. The business logic involves calculating the purchase price, based on the price of individual items, checking whether the customer is allowed to spend this amount, and applying any discount if applicable. Some persistent storage such as an RDBMS must be updated to reflect a successful purchase, and queried to obtain price information. However, it's desirable to separate this data access from the steps of the business logic.

The AbstractOrderEJB superclass implements the business logic, which includes checking that the customer isn't trying to exceed their spending limit, and applying a discount to large orders. The public placeOrder() method is final, so that this workflow can't be modified (or corrupted) by subclasses:

   public final Invoice placeOrder(int customerId, InvoiceItem[] items)
           throws NoSuchCustomerException, SpendingLimitViolation {
       int total = 0;
       for (int i = 0; i < items.length; i++) {
           // Template method: price lookup is deferred to subclasses
           total += getItemPrice(items[i]) * items[i].getQuantity();
       }
       // Template method: spending limit lookup is deferred to subclasses
       int limit = getSpendingLimit(customerId);
       if (total > limit) {
           getSessionContext().setRollbackOnly();
           throw new SpendingLimitViolation(total, limit);
       }
       else if (total > DISCOUNT_THRESHOLD) {
           // Apply discount to total...
       }
       // Template method: persisting the order is deferred to subclasses
       int invoiceId = placeOrder(customerId, total, items);
       return new InvoiceImpl(invoiceId, total);
   }

Three lines of code in this method (marked with comments above) invoke protected abstract "template methods" that must be implemented by subclasses. These are declared in AbstractOrderEJB as follows:

   protected abstract int getItemPrice(InvoiceItem item);

   protected abstract int getSpendingLimit(int customerId)
       throws NoSuchCustomerException;

   protected abstract int placeOrder(int customerId, int total, InvoiceItem[] items);

Subclasses of AbstractOrderEJB merely need to implement these three methods. They don't need to concern themselves with business logic. For example, one subclass might implement these three methods using JDBC, while another might implement them using SQLJ or JDO.
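
For instance, the JDBC-based subclass's implementation of one template method might look something like this. This is a hypothetical sketch: the SQL, the getConnection() helper and the InvoiceItem accessor are assumptions, and resource cleanup is omitted for brevity:

   protected int getItemPrice(InvoiceItem item) {
       try {
           Connection con = getConnection();   // assumed helper returning a pooled connection
           PreparedStatement ps = con.prepareStatement(
               "SELECT PRICE FROM ITEM WHERE ITEM_ID = ?");
           ps.setInt(1, item.getId());         // assumes an int id property on InvoiceItem
           ResultSet rs = ps.executeQuery();
           rs.next();
           return rs.getInt(1);
       }
       catch (SQLException ex) {
           // Unrecoverable here; see the discussion of checked versus unchecked exceptions later
           throw new RuntimeException("Couldn't load item price: " + ex);
       }
   }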

Such uses of the Template Method pattern offer good separation of concerns. Here, the superclass concentrates on business logic; the subclasses concentrate on implementing primitive operations (for example, using a low-level API such as JDBC). As the template methods are protected, rather than public, callers are spared the details of the class's implementation.

As it's usually better to define types in interfaces rather than classes, the Template Method pattern is often used as a strategy to implement an interface.

Note 

Abstract superclasses are also often used to implement some, but not all, methods of an interface. The remaining methods – which vary between concrete implementations – are left unimplemented. This differs from the Template Method pattern in that the abstract superclass doesn't handle workflow.

Important 

Use the Template Method design pattern to capture an algorithm in an abstract superclass, but defer the implementation of individual steps to subclasses. This has the potential to head off bugs, by getting tricky operations right once and simplifying user code. When implementing the Template Method pattern, the abstract superclass must factor out those methods that may change between subclasses and ensure that the method signatures enable sufficient flexibility in implementation.

Always make the abstract parent class implement an interface. The Template Method design pattern is especially valuable in framework design (discussed towards the end of this chapter).

The Template Method design pattern can be very useful in J2EE applications to help us to achieve as much portability as possible between application servers and databases while still leveraging proprietary features. We've seen how we can sometimes separate business logic from database operations above. We could equally use this pattern to enable efficient support for specific databases. For example, we could have an OracleOrderEJB and a DB2OrderEJB that implemented the abstract template methods efficiently in the respective databases, while business logic remains free of proprietary code.

The Strategy Design Pattern

An alternative to the Template Method is the Strategy design pattern, which factors the variant behavior into an interface. Thus, the class that knows the algorithm is not an abstract base class, but a concrete class that uses a helper that implements an interface defining the individual steps. The Strategy design pattern takes a little more work to implement than the Template Method pattern, but it is more flexible. The advantage of the Strategy pattern is that it need not involve concrete inheritance. The class that implements the individual steps is not forced to inherit from an abstract template superclass.

Let's look at how we could use the Strategy design pattern in the above example. The first step is to move the template methods into an interface, which will look like this:

   public interface DataHelper {

       int getItemPrice(InvoiceItem item);

       int getSpendingLimit(int customerId) throws NoSuchCustomerException;

       int placeOrder(int customerId, int total, InvoiceItem[] items);
   }

Implementations of this interface don't need to subclass any particular class; we have the maximum possible freedom.

Now we can write a concrete OrderEJB class that depends on an instance variable of this interface. We must also provide a means of setting this helper, either in the constructor or through a bean property. In the present example I've opted for a bean property:

   private DataHelper dataHelper;

   public void setDataHelper(DataHelper newDataHelper) {
       this.dataHelper = newDataHelper;
   }

The implementation of the placeOrder() method is almost identical to the version using the Template Method pattern, except that it invokes the operations it doesn't know how to perform on the instance of the helper interface:

   public final Invoice placeOrder(int customerId, InvoiceItem[] items)
           throws NoSuchCustomerException, SpendingLimitViolation {
       int total = 0;
       for (int i = 0; i < items.length; i++) {
           total += this.dataHelper.getItemPrice(items[i]) * items[i].getQuantity();
       }
       int limit = this.dataHelper.getSpendingLimit(customerId);
       if (total > limit) {
           getSessionContext().setRollbackOnly();
           throw new SpendingLimitViolation(total, limit);
       }
       else if (total > DISCOUNT_THRESHOLD) {
           // Apply discount to total...
       }
       int invoiceId = this.dataHelper.placeOrder(customerId, total, items);
       return new InvoiceImpl(invoiceId, total);
   }

This is slightly more complex to implement than the version using concrete inheritance with the Template Method pattern, but is more flexible. This is a classic example of the tradeoff between concrete inheritance and delegation to an interface.
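
For example, a trivial stub implementation of DataHelper (shown purely as a sketch, with arbitrary values) allows the business logic to be exercised without a database:

   // Hypothetical stub for testing the business logic in isolation; values are arbitrary.
   public class StubDataHelper implements DataHelper {

       public int getItemPrice(InvoiceItem item) {
           return 100;      // every item costs 100 in the test fixture
       }

       public int getSpendingLimit(int customerId) throws NoSuchCustomerException {
           return 1000;     // generous limit, so orders normally succeed
       }

       public int placeOrder(int customerId, int total, InvoiceItem[] items) {
           return 1;        // pretend invoice id; nothing is persisted
       }
   }

In a unit test, we could simply call setDataHelper(new StubDataHelper()) on the object under test before invoking placeOrder().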

I use the Strategy pattern in preference to the Template Method pattern under the following circumstances:

  • When all steps vary (rather than just a few).

  • When the class that implements the steps needs an independent inheritance hierarchy.

  • When the implementation of the steps may be relevant to other classes (this is often the case with J2EE data access).

  • When the implementation of the steps may need to vary at run time. Concrete inheritance can't accommodate this; delegation can.

  • When there are many different implementations of the steps, or when it's expected that the number of implementations will continue to increase. In this case, the greater flexibility of the Strategy pattern will almost certainly prove beneficial, as it allows maximum freedom to the implementations.

Using Callbacks to Achieve Extensibility

Let's now consider another use of "inversion of control" to parameterize a single operation, while moving control and error handling into a framework. Strictly speaking, this is a special case of the Strategy design pattern: it appears different because the interfaces involved are so simple.

This pattern is based around the use of one or more callback methods that are invoked by a method that performs a workflow.

I find this pattern useful when working with low-level APIs such as JDBC. The following example is a stripped down form of a JDBC utility class, JdbcTemplate, used in the sample application, and discussed further in Chapter 9.

JdbcTemplate implements a query() method that takes as parameters a SQL query string and an implementation of a callback interface that will be invoked for each row of the result set the query generates. The callback interface is as follows:

   public interface RowCallbackHandler {

       void processRow(ResultSet rs) throws SQLException;
   }

The JdbcTemplate.query() method conceals from calling code the details of getting a JDBC connection, creating and using a statement, and correctly freeing resources, even in the event of errors, as follows:

   public void query(String sql, RowCallbackHandler callbackHandler)
           throws JdbcSqlException {
       Connection con = null;
       PreparedStatement ps = null;
       ResultSet rs = null;
       try {
           con = <code to get connection>
           ps = con.prepareStatement(sql);
           rs = ps.executeQuery();
           while (rs.next()) {
               callbackHandler.processRow(rs);
           }
           rs.close();
           ps.close();
       }
       catch (SQLException ex) {
           throw new JdbcSqlException("Couldn't run query [" + sql + "]", ex);
       }
       finally {
           DataSourceUtils.closeConnectionIfNecessary(this.dataSource, con);
       }
   }

The DataSourceUtils class contains a helper method that can be used to close connections, catching and logging any SQLExceptions encountered.

In this example, JdbcSqlException extends java.lang.RuntimeException, which means that calling code may choose to catch it, but is not forced to. This makes sense in the present situation. If, for example, a callback handler tries to obtain the value of a column that doesn't exist in the ResultSet, it will do calling code no good to catch it. This is clearly a programming error, and JdbcTemplate's behavior of logging the exception and throwing a runtime exception is logical (see discussion on Error Handling – Checked or Unchecked Exceptions later).

In this case, I modeled the RowCallbackHandler interface as an inner interface of the JdbcTemplate class. This interface is only relevant to the JdbcTemplate class, so this is logical. Note that implementations of the RowCallbackHandler interface might be inner classes (in trivial cases, anonymous inner classes are appropriate), or they might be standard, reusable classes, or subclasses of standard convenience classes.

Consider the following implementation of the RowCallbackHandler interface to perform a JDBC query. Note that the implementation isn't forced to catch SQLExceptions that may be thrown in extracting column values from the result set:

   class StringHandler implements JdbcTemplate.RowCallbackHandler {

       private List results = new LinkedList();

       public void processRow(ResultSet rs) throws SQLException {
           results.add(rs.getString(1));
       }

       public String[] getStrings() {
           return (String[]) results.toArray(new String[results.size()]);
       }
   }

This class can be used as follows:

   StringHandler sh = new StringHandler();
   jdbcTemplate.query("SELECT FORENAME FROM CUSTMR", sh);
   String[] forenames = sh.getStrings();

These three lines show how the code that uses the JdbcTemplate is able to focus on the business problem, without concerning itself with the JDBC API. Any SQLExceptions thrown will be handled by JdbcTemplate.

This pattern shouldn't be overused, but can be very useful. The following advantages and disadvantages indicate the tradeoffs involved:

Advantages:

  • The framework class can perform error handling and the acquisition and release of resources. This means that tricky error handling (as is required using JDBC) can be written once only, and calling code is simpler. The more complex the error handling and cleanup involved, the more attractive this approach is.

  • Calling code needn't handle the details of low-level APIs such as JDBC. This is desirable, because such code is bug prone and verbose, obscuring the business problem application code should focus on.

  • The one control flow function (JdbcTemplate.query() in the example) can be used with a wide variety of callback handlers, to perform different tasks. This is a good way of achieving reuse of code that uses low-level APIs.

Disadvantages:

  • This idiom is less intuitive than having calling code handle execution flow itself, so code may be harder to understand and maintain if there's a reasonable alternative.

  • We need to create an object for the callback handler.

  • In rare cases, performance may be impaired by the need to invoke the callback handler via an interface. The overhead of the above example is negligible, compared to the time taken by the JDBC operations themselves.

This pattern is most valuable when the callback interface is very simple. In the example, because the RowCallbackHandler interface contains a single method, it is very easy to implement, meaning that implementation choices such as anonymous inner classes may be used to simplify calling code.
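
For example, a trivial query might use an anonymous inner class directly. This is a sketch in pre-Java 5 style (hence no generics), assuming the JdbcTemplate shown above:

   final List forenames = new LinkedList();
   jdbcTemplate.query("SELECT FORENAME FROM CUSTMR", new JdbcTemplate.RowCallbackHandler() {
       public void processRow(ResultSet rs) throws SQLException {
           forenames.add(rs.getString(1));
       }
   });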

The Observer Design Pattern

Like the use of interfaces, the Observer design pattern can be used to decouple components and enable extensibility without modification (observing the Open Closed Principle). It also contributes to achieving separation of concerns.

Consider, for example, an object that handles user login. There might be several outcomes from a user's attempt to log in: successful login; failed login due to an incorrect password; failed login due to an incorrect username; system error due to failure to connect to the database that holds login information.

Let's imagine that we have a login implementation working in production, but that further requirements mean that the application should e-mail an administrator in the event of a given number of system errors, and should maintain a list of incorrectly entered passwords, along with the correct passwords for the users concerned, to help build up information that helps users avoid common errors. We would also like to know the peak periods for user login activity (as opposed to general activity on the web site).

All this functionality could be added to the object that implements login. We should have unit tests that would verify that this hasn't broken the existing functionality, but this approach doesn't offer good separation of concerns (why should the object handling login need to know or obtain the administrator's e-mail address, or know how to send an e-mail?). As more features (or aspects) are added, the implementation of the login workflow itself – the core responsibility of this component – will be obscured under the volume of code to handle them.

We can address this problem more elegantly using the Observer design pattern. Observers (or listeners) can be notified of application events. The application must provide (or use a framework that provides) an event publisher. Listeners can register to be notified of events: all workflow code must do is publish events that might be of interest. Event publication is similar to generating log messages, in that it doesn't affect the working of application code. In the above example, events would include:

  • Attempted login, containing username and password

  • System error, including the offending exception

  • Login result (success or failure and reason)

Events normally include timestamps.

Now we could achieve clean separation of concerns by using distinct listeners to e-mail the administrator on system errors; react to a failed login (adding it to a list); and gather performance information about login activity.
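
To illustrate the shape of such listeners, here is a minimal sketch using hypothetical types; the framework infrastructure discussed in Chapter 11 provides its own, more general, event publication mechanism:

   // Hypothetical event and listener types, shown in a single listing for brevity.
   class LoginEvent {

       private final String username;
       private final boolean systemError;
       private final long timestamp = System.currentTimeMillis();

       LoginEvent(String username, boolean systemError) {
           this.username = username;
           this.systemError = systemError;
       }

       String getUsername() { return username; }
       boolean isSystemError() { return systemError; }
       long getTimestamp() { return timestamp; }
   }

   interface LoginListener {
       // Implementations must be threadsafe and return quickly
       void onLoginEvent(LoginEvent event);
   }

   // A listener concerned solely with notifying an administrator of repeated system errors.
   class SystemErrorNotifier implements LoginListener {

       private final int threshold;
       private int errorCount;

       SystemErrorNotifier(int threshold) {
           this.threshold = threshold;
       }

       public synchronized void onLoginEvent(LoginEvent event) {
           if (event.isSystemError() && ++errorCount >= threshold) {
               // e-mail the administrator (details omitted)
           }
       }
   }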

The Observer design pattern is used in the core Java libraries: for example, JavaBeans can publish property change events. In our own applications, we will use the Observer pattern at a higher level. Events of interest are likely to relate to application-level operations, not low-level operations such as setting a bean property.

Consider also the need to gather performance information about a web application. We could build sophisticated performance monitoring into the code of the web application framework (for example, any controller servlets), but this would require modification to those classes if we required different performance statistics in future. It's better to publish events such as "request received" and "request fulfilled" (the latter including success or failure status) and leave the implementation of performance monitoring up to listeners that are solely concerned with it. This is an example of how the Observer design pattern can be used to achieve good separation of concerns. This amounts to Aspect-Oriented Programming, which we discuss briefly under Using Reflection later.

Don't go overboard with the Observer design pattern: it's only necessary when there's a real likelihood that loosely coupled listeners will need to know about a workflow. If we use the Observer design pattern everywhere our business logic will disappear under a morass of event publication code and performance will be significantly reduced. Only important workflows (such as the login process of our example) should generate events.

A warning when using the Observer design pattern: it's vital that listeners return quickly. Rogue listeners can lock an application. Although it is possible for the event publishing system to invoke observers in a different thread, this is wasteful for the majority of listeners that will return quickly. It's a better choice in most situations for the onus to be on listeners to return quickly or spin off long-running tasks into separate threads. Listeners should also avoid synchronization on shared application objects, as this may lead to blocking. Listeners must be threadsafe.

The Observer design pattern is less useful in a clustered deployment than in deployment on a single server, as it only allows us to publish events on a single server. For example, it would be unsafe to use the Observer pattern to update a data cache; as such an update would apply only to a single server. However, the Observer pattern can still be very useful in a cluster. For example, the applications discussed above would all be valid in a clustered environment. JMS can be used for cluster-wide event publication, at the price of greater API complexity and a much greater performance overhead.

In my experience, the Observer design pattern is more useful in the web tier than in the EJB tier. For example, the EJB specification prohibits creating threads in the EJB tier (again, JMS is the alternative).

In Chapter 11 we look at how to implement the Observer design pattern in an application framework. The application framework infrastructure used in the sample application provides an event publication mechanism, allowing approaches such as those described here to be implemented without the need for an application to implement any "plumbing".

Consider Consolidating Method Parameters

Sometimes it's a good idea to encapsulate multiple parameters to a method into a single object. This may enhance readability and simplify calling code. Consider a method signature like this:

   public void setOptions(Font f, int lineSpacing, int linesPerPage, int tabSize);

We could simplify this signature by rolling the multiple parameters into a single object, like this:

    public void setOptions(Options options); 

The main advantage is flexibility. We don't need to break signatures to add further parameters: we can add additional properties to the parameter object. This means that we don't have to break code in existing callers that aren't interested in the added parameters.

As Java, unlike C++, doesn't offer default parameter values, this can be a good way to enable clients to simplify calls. Let's suppose that all (or most) of the parameters have default values. In C++ we could code the default values in the method signature, enabling callers to omit some of them, like this:

   void SomeClass::setOptions(Font f, int lineSpacing = 1, int linesPerPage = 25,
                              int tabSize = 4);

This isn't possible in Java, but we can populate the parameter object with default values, allowing callers to use syntax like this:

   Options o = new Options();
   o.setLineSpacing(2);
   configurable.setOptions(o);

Here, the Options object's constructor sets all fields to default values, so we need only modify those that vary from the defaults. If necessary, we can even make the parameter object an interface, to allow more flexible implementation.
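
A hypothetical Options class for this example might simply initialize its fields to default values in their declarations (a sketch; the defaults shown are illustrative):

   import java.awt.Font;

   public class Options {

       // Defaults are set in field initializers; callers override only what they need
       private Font font = new Font("Serif", Font.PLAIN, 12);
       private int lineSpacing = 1;
       private int linesPerPage = 25;
       private int tabSize = 4;

       public void setFont(Font font) { this.font = font; }
       public void setLineSpacing(int lineSpacing) { this.lineSpacing = lineSpacing; }
       public void setLinesPerPage(int linesPerPage) { this.linesPerPage = linesPerPage; }
       public void setTabSize(int tabSize) { this.tabSize = tabSize; }

       public Font getFont() { return font; }
       public int getLineSpacing() { return lineSpacing; }
       public int getLinesPerPage() { return linesPerPage; }
       public int getTabSize() { return tabSize; }
   }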

This approach works particularly well with constructors. It's indicated when a class has many constructors, and subclasses may face excessive work just preserving superclass constructor permutations. Instead, subclasses can use a subclass of the superclass constructor's parameter object.

The Command design pattern uses this approach: a command is effectively a consolidated set of parameters, which are much easier to work with together than individually.

The disadvantage of parameter consolidation is the potential creation of many objects, which increases memory usage and the need for garbage collection. Objects consume heap space; primitives don't. Whether this matters depends on how often the method will be called.

Note 

Consolidating method parameters in a single object can occasionally cause performance degradation in J2EE applications if the method call is potentially remote (a call on the remote interface of an EJB), as marshaling and unmarshaling several primitive parameters will always be faster than marshaling and unmarshaling an object. However, this isn't a concern unless the method is invoked particularly often (which might indicate poor application partitioning – we don't want to make frequent remote calls if we can avoid it).

Exception Handling – Checked or Unchecked Exceptions

Java distinguishes between two types of exception. Checked exceptions extend java.lang.Exception (but not java.lang.RuntimeException), and the compiler insists that they are caught or declared in the throws clause of any method that may throw them. Unchecked or runtime exceptions extend java.lang.RuntimeException, and need not be caught or declared (although they can be caught, and propagate up the call stack in the same way as checked exceptions). Java is the only mainstream language that supports checked exceptions: all C++ and C# exceptions, for example, are equivalent to Java's unchecked exceptions.

First, let's consider received wisdom on exception handling in Java. This is expressed in the section on exception handling in the Java Tutorial (http://java.sun.com/docs/books/tutorial/essential/exceptions/runtime.html), which advises the use of checked exceptions in application code.

Note 

Because the Java language does not require methods to catch or specify runtime exceptions, it's tempting for programmers to write code that throws only runtime exceptions or to make all of their exception subclasses inherit from RuntimeException. Both of these programming shortcuts allow programmers to write Java code without bothering with all of the nagging errors from the compiler and without bothering to specify or catch any exceptions. While this may seem convenient to the programmer, it sidesteps the intent of Java's catch or specify requirement and can cause problems for the programmers using your classes

Checked exceptions represent useful information about the operation of a legally specified request that the caller may have had no control over and that the caller needs to be informed about – for example, the file system is now full, or the remote end has closed the connection, or the access privileges don't allow this action.

What does it buy you if you throw a RuntimeException or create a subclass of RuntimeException just because you don't want to deal with specifying it? Simply, you get the ability to throw an exception without specifying that you do so. In other words, it is a way to avoid documenting the exceptions that a method can throw. When is this good? Well, when is it ever good to avoid documenting a method's behavior? The answer is "hardly ever".

To summarize Java orthodoxy: checked exceptions should be the norm. Runtime exceptions indicate programming errors.

I used to subscribe to this view. However, after writing and working with thousands of catch blocks, I've come to the conclusion that this appealing theory doesn't always work in practice. I'm not alone. Since developing my own ideas on the subject, I've noticed that Bruce Eckel, author of the classic book Thinking in Java, has also changed his mind. Eckel now advocates the use of runtime exceptions as the norm, and wonders whether checked exceptions should be dropped from Java as a failed experiment (http://www.mindview.net/Etc/Discussions/CheckedExceptions).

Eckel cites the observation that, when one looks at small amounts of code, checked exceptions seem a brilliant idea and promise to avoid many bugs. However, experience tends to indicate the reverse for large code bases. See "Exceptional Java" by Alan Griffiths at http://www.octopull.demon.co.uk/java/ExceptionalJava.html for another discussion of the problems with checked exceptions.

Using checked exceptions exclusively leads to several problems:

  • Too much code
    Developers will become frustrated by having to catch checked exceptions that they can't reasonably handle (of the "something went horribly wrong" variety) and write code that ignores (swallows) them. Agreed: this is indefensible coding practice, but experience shows that it happens more often than we like to think. Even good programmers may occasionally forget to "nest" exceptions properly (more about this below), meaning that the full stack trace is lost, and the information contained in the exception is of reduced value.

  • Unreadable code
    Catching exceptions that can't be appropriately handled and rethrowing them (wrapped in a different exception type) performs little useful function, yet can make it hard to find the code that actually does something. The orthodox view is that this bothers only lazy programmers, and that we should simply ignore this problem. However, this ignores reality. For example, this issue was clearly considered by the designers of the core Java libraries. Imagine the nightmare of having to work with collections interfaces such as java.util.Iterator if they threw checked, rather than unchecked, exceptions. The JDO API is another example of a Sun API that uses unchecked exceptions. By contrast, JDBC, which uses checked exceptions, is cumbersome to work with directly.

  • Endless wrapping of exceptions
    A checked exception must be either caught or declared in the throws clause of a method that encounters it. This leaves a choice between rethrowing a growing number of exceptions, or catching low-level exceptions and rethrowing them wrapped in a new, higher-level exception. This is desirable if we add useful information by doing so. However, if the lower-level exception is unrecoverable, wrapping it achieves nothing. Instead of an automatic unwinding of the call stack, as would have occurred with an unchecked exception, we will have an equivalent, manual, unwinding of the call stack, with several lines of additional, pointless, code in each class along the way. It was principally this issue that prompted me to rethink my attitude to exception handling.

  • Fragile method signatures
    Once many callers use a method, adding an additional checked exception to the interface will require many code changes.

  • Checked exceptions don't always work well with interfaces
    Take the example of the file system being full in the Java Tutorial. This sounds OK if we're talking about a class that we know works with the file system. What if we're dealing with an interface that merely promises to store data somewhere (maybe in a database)? We don't want to hardcode dependence on the Java I/O API into an interface that may have different implementations. Hence if we want to use checked exceptions, we must create a new, storage-agnostic, exception type for the interface and wrap file system exceptions in it. Whether this is appropriate again depends on whether the exception is recoverable. If it isn't, we've created unnecessary work.

Many of these problems can be attributed to the problem of code catching exceptions it can't handle, and being forced to rethrow wrapped exceptions. This is cumbersome, error prone (it's easy to lose the stack trace) and serves no useful purpose. In such cases, it's better to use an unchecked exception. This will automatically unwind the call stack, and is the correct behavior for exceptions of the "something went horribly wrong" variety.

I take a less heterodox view than Eckel in that I believe there's a place for checked exceptions. Where an exception amounts to an alternative return value from a method, it should definitely be checked, and it's good that the language helps enforce this. However, I feel that the conventional Java approach greatly overemphasizes checked exceptions.

Important 

Checked exceptions are much superior to error return codes (as used in many older languages). Sooner or later (probably sooner) someone will fail to check an error return value; it's good to use the compiler to enforce correct error handling. Such checked exceptions are as integral to an object's API as parameters and return values.

However, I don't recommend using checked exceptions unless callers are likely to be able to handle them. In particular, checked exceptions shouldn't be used to indicate that something went horribly wrong, which the caller can't be expected to handle.

Important 

Use a checked exception if calling code can do something sensible with the exception. Use an unchecked exception if the exception is fatal, or if callers won't gain by catching it. Remember that a J2EE container (such as a web container) can be relied on to catch unchecked exceptions and log them.

I suggest the following guidelines for choosing between checked and unchecked exceptions:

  • Question: Should all callers handle this problem? Is the exception essentially a second return value for the method?
    Example: Spending limit exceeded in a processInvoice() method.
    Recommendation if the answer is yes: Define and use a checked exception, and take advantage of Java's compile-time support.

  • Question: Will only a minority of callers want to handle this problem?
    Example: JDO exceptions.
    Recommendation if the answer is yes: Extend RuntimeException. This leaves callers the choice of catching the exception, but doesn't force all callers to catch it.

  • Question: Did something go horribly wrong? Is the problem unrecoverable?
    Example: A business method fails because it can't connect to the application database.
    Recommendation if the answer is yes: Extend RuntimeException. We know that callers can't do anything useful besides inform the user of the error.

  • Question: Still not clear?
    Recommendation: Extend RuntimeException. Document the exceptions that may be thrown and let callers decide which, if any, they wish to catch.

Important 

Decide at a package level how each package will use checked or unchecked exceptions. Document the decision to use unchecked exceptions, as many developers will not expect it.

The only danger in using unchecked exceptions is that the exceptions may be inadequately documented. When using unchecked exceptions, be sure to document all exceptions that may be thrown from each method, allowing calling code to choose to catch even exceptions that you expect will be fatal. Ideally, the compiler should enforce Javadoc-ing of all exceptions, checked and unchecked.
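
For example, a data-access method might document both its checked and unchecked exceptions along these lines (a sketch with hypothetical type and exception names):

   /**
    * Returns the customer with the given id.
    * @param customerId id of the required customer
    * @return the customer; never null
    * @throws NoSuchCustomerException if there is no customer with this id (checked)
    * @throws DataAccessException (unchecked) in the event of an unrecoverable failure
    * communicating with the underlying data store
    */
   Customer getCustomer(int customerId) throws NoSuchCustomerException;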

If allocating resources such as JDBC connections that must be released under all circumstances, remember to use a finally block to ensure cleanup, whether or not you need to catch checked exceptions. Remember that a finally block can be used even without a catch block.
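
For example (a fragment; it assumes a javax.sql.DataSource field named dataSource, and that the surrounding method is declared to throw SQLException):

   Connection con = dataSource.getConnection();
   try {
       // ... work with the connection ...
   }
   finally {
       con.close();   // runs whether or not an exception was thrown; no catch block required
   }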

One reason sometimes advanced for avoiding runtime exceptions is that an uncaught runtime exception will kill the current thread of execution. This is a valid argument in some situations, but it isn't normally a problem in J2EE applications, as we seldom control threads, but leave this up to the application server. The application server will catch and handle runtime exceptions not caught in application code, rather than let them bubble up to the JVM. An uncaught runtime exception within the EJB container will cause the container to discard the current EJB instance. However, if the error is fatal, this usually makes sense.

Important 

Ultimately, whether to use checked or unchecked exceptions is a matter of opinion. Thus it's vital not only to document the approach taken, but also to respect the practice of others. While I prefer to use unchecked exceptions in general, when maintaining or enhancing code written by others who favor exclusive use of checked exceptions, I follow their style.

Good Exception Handling Practices

Whether we use checked or unchecked exceptions, we'll still need to address the issue of "nesting" exceptions. Typically this arises when we're forced to catch a checked exception we can't deal with, but want to rethrow an exception that respects the interface of the current method. This means that we must wrap the original, "nested" exception within a new exception.

Some standard library exceptions, such as javax.servlet.ServletException, offer such wrapping functionality. But for our own application exceptions, we'll need to define (or use existing) custom exception superclasses that take a "root cause" exception as a constructor argument, expose it to code that requires it, and override the printStackTrace() methods to show the full stack trace, including that of the root cause. Typically we need two such base exceptions: one for checked and one for unchecked exceptions.

Note 

This is no longer necessary in Java 1.4, which supports exception nesting for all exceptions. We'll discuss this important enhancement below.

In the generic infrastructure code accompanying our sample application, the respective classes are com.interface21.core.NestedCheckedException and com.interface21.core.NestedRuntimeException. Apart from being derived from java.lang.Exception and java.lang.RuntimeException respectively, these classes are almost identical. Both these exceptions are abstract classes; only subtypes have meaning to an application. The following is a complete listing of NestedRuntimeException:

   package com.interface21.core;

   import java.io.PrintStream;
   import java.io.PrintWriter;

   public abstract class NestedRuntimeException extends RuntimeException {

       private Throwable rootCause;

       public NestedRuntimeException(String s) {
           super(s);
       }

       public NestedRuntimeException(String s, Throwable ex) {
           super(s);
           rootCause = ex;
       }

       public Throwable getRootCause() {
           return rootCause;
       }

       public String getMessage() {
           if (rootCause == null) {
               return super.getMessage();
           } else {
               return super.getMessage() + "; nested exception is: \n\t" + rootCause.toString();
           }
       }

       public void printStackTrace(PrintStream ps) {
           if (rootCause == null) {
               super.printStackTrace(ps);
           } else {
               ps.println(this);
               rootCause.printStackTrace(ps);
           }
       }

       public void printStackTrace(PrintWriter pw) {
           if (rootCause == null) {
               super.printStackTrace(pw);
           } else {
               pw.println(this);
               rootCause.printStackTrace(pw);
           }
       }

       public void printStackTrace() {
           printStackTrace(System.err);
       }
   }

Java 1.4 introduces welcome improvements in the area of exception handling. There is no longer any need to write chainable exception base classes such as these, although existing infrastructure classes like those shown above will continue to work without a problem. New constructors are added to java.lang.Throwable and java.lang.Exception to support chaining, and a new method, Throwable initCause(Throwable t), is added to java.lang.Throwable to allow a root cause to be specified even after exception construction. This method may be invoked only once, and only if no nested exception was provided in the constructor.

Java 1.4-aware exceptions should implement a constructor taking a Throwable nested exception and invoking the corresponding new Exception constructor. This means that we can always create and throw them in a single line of code as follows:

   catch (RootCauseException ex) {
       throw new MyJava14Exception("Detailed message", ex);
   }

If an exception does not provide such a constructor (for example, because it was written for a pre Java 1.4 environment), we are guaranteed to be able to set a nested exception using a little more code, as follows:

   catch (RootCauseException ex) {
       MyJava13Exception mex = new MyJava13Exception("Detailed message");
       mex.initCause(ex);
       throw mex;
   }

When using nested exception solutions such as NestedRuntimeException, discussed above, follow their own conventions, rather than Java 1.4 conventions, to ensure correct behavior.

Exceptions in J2EE

There are a few special issues to consider in J2EE applications.

Distributed applications will encounter many checked exceptions. This is partly because of the conscious decision made at Sun in the early days of Java to make remote calling explicit. Since all RMI calls – including EJB remote interface invocations – throw java.rmi.RemoteException, local-remote transparency is impossible. This decision was probably justified, as local-remote transparency is dangerous, especially to performance. However, it means that we often have to write code to deal with checked exceptions that amount to "something went horribly wrong, and it's probably not worth retrying".

It's important to protect interface code – such as that in servlets and JSP pages – from J2EE "system-level" exceptions such as java.rmi.RemoteException. Many developers fail to recognize this issue, with unfortunate consequences, such as creating unnecessary dependency between architectural tiers and preventing any chance of retrying operations that might have been retried had they been caught at a low enough level. Amongst developers who do recognize the problem, I've seen two approaches:

  • Allow interface components to ignore such exceptions: for example, by writing code to catch them at a high level, such as in a superclass of all classes that handle incoming web requests, which permits subclasses to throw a range of exceptions from a protected abstract method.

  • Use a client-side façade that conceals communication with the remote system and throws exceptions – checked or unchecked – that are dictated by business need, not the problem of remote method calls. This means that the client-side façade should not mimic the interface of the remote components, which will all throw java.rmi.RemoteException. This approach is known as the Business Delegate J2EE pattern (Core J2EE Patterns).

I believe that the second of these approaches is superior. It provides a clean separation of architectural tiers, allows a choice of checked or unchecked exceptions and does not allow the use of EJB and remote invocation to intrude too deeply into application design. We'll discuss this approach in more detail in Chapter 11.
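
A minimal sketch of such a façade might look like this (OrderService, the Order remote interface and OrderSystemException are hypothetical names, and the JNDI lookup of the EJB is omitted):

   // Client-side façade: callers see business exceptions, never java.rmi.RemoteException.
   public class OrderServiceDelegate implements OrderService {

       private Order remoteOrder;   // EJB remote interface, obtained via JNDI at startup

       public Invoice placeOrder(int customerId, InvoiceItem[] items)
               throws NoSuchCustomerException, SpendingLimitViolation {
           try {
               return remoteOrder.placeOrder(customerId, items);
           }
           catch (RemoteException ex) {
               // Unrecoverable infrastructure failure: rethrow as an unchecked exception
               throw new OrderSystemException("Order system unavailable", ex);
           }
       }
   }

The hypothetical OrderSystemException could, for example, extend the NestedRuntimeException class shown earlier, so that callers are not forced to catch it.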

Making Exceptions Informative

It's vital to ensure that exceptions are useful both to code and to humans developing, maintaining and administering an application.

Consider the case of exceptions of the same class reflecting different problems, but distinguished only by their message strings. These are unhelpful to Java code catching them. Exception message strings are of limited value: they may be helpful to explain problems when they appear in log files, but they won't enable the calling code to react appropriately, if different reactions are required, and they can't be relied on for display to users. When different problems may require different actions, the corresponding exceptions should be modeled as separate subclasses of a common superclass. Sometimes the superclass should be abstract. Calling code will now be free to catch exceptions at the relevant level of detail.

The second problem – display to users – should be handled by including error codes in exceptions. Error codes may be numeric or strings (string codes have the advantage that they can make sense to readers), which can drive runtime lookup of display messages that are held outside the exception. Unless we are able to use a common base class for all exceptions in an application – something that isn't possible if we mix checked and unchecked exceptions – we will need to make our exceptions implement an ErrorCoded or similarly named interface that defines a method such as this:

   String getErrorCode(); 

The com.interface21.core.ErrorCoded interface from the infrastructure code discussed in Chapter 11 includes this single method. With this approach, we are able to distinguish between error messages intended for end users and those intended for developers. Messages inside exceptions (returned by the getMessage() method) should be used for logging, and targeted to developers.
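
For example, the SpendingLimitViolation exception used earlier in this chapter might implement ErrorCoded along these lines (a sketch; the error code string and message format are illustrative):

   public class SpendingLimitViolation extends Exception implements ErrorCoded {

       public SpendingLimitViolation(int total, int limit) {
           // Message targeted at developers and log files
           super("Order total " + total + " exceeds spending limit " + limit);
       }

       public String getErrorCode() {
           // Resolved to a user-facing, possibly internationalized, message at display time
           return "order.spendingLimitExceeded";
       }
   }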

Important 

Separate error messages for display to users from exception code, by including an error code with exceptions. When it's time to display the exception, the code can be resolved: for example, from a properties file.

If the exception isn't for a user, but for an administrator, it's less likely that we'll need to worry about formatting messages or internationalization (internationalization might, however, still be an issue in some situations: for example, if we are developing a framework that may be used by non-English speaking developers).

As we've already discussed, there's little point in catching an exception and throwing a new exception unless we add value. However, occasionally the need to produce the best possible error message is a good reason for catching and wrapping.

For example, the following error message contains little useful information:

WebApplicationContext failed to load config

Exception messages like this typically indicate developer laziness in writing messages or (worse still) use of a single catch block to catch a wide variety of exceptions (meaning that the code that caught the exception had as little idea what went wrong as the unfortunate reader of the message).

It's better to include details about the operation that failed, as well as preserving the stack trace. For example, the following message is an improvement:

WebApplicationContext failed to load config: cannot instantiate class com.foo.bar.Magic

Better still is a message that gives precise information about what the process was trying to do when it failed, and information about what might be done to correct the problem:

WebApplicationContext failed to load config from file '/WEB-INF/applicationContext.xml': cannot instantiate class 'com.foo.bar.Magic' attempting to load bean element with name 'too' – check that this class has a public no arg constructor

Important 

Include as much context information as possible with exceptions. If an exception probably results from a programming error, try to include information on how to rectify the problem.

Using Reflection

The Java Reflection API enables Java code to discover information about loaded classes at runtime, and to instantiate and manipulate objects. Many of the coding techniques discussed in this chapter depend on reflection: this section considers some of the pros and cons of reflection.

Important 

Many design patterns can best be expressed by use of reflection. For example, there's no need to hard-code class names into a Factory if classes are JavaBeans, and can be instantiated and configured via reflection. Only the names of classes – for example, different implementations of an interface – need be supplied in configuration data.
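
For example (a minimal sketch, reusing the hypothetical DataHelper interface from earlier in the chapter; the property name and file are illustrative, and the checked exceptions thrown by the I/O and reflection calls would need to be caught or declared):

   // The implementing class is named in configuration, not hard-coded in a factory.
   Properties config = new Properties();
   config.load(new FileInputStream("order.properties"));

   String className = config.getProperty("dataHelper.class");
   DataHelper helper = (DataHelper) Class.forName(className).newInstance();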

Java developers seem divided about the use of reflection. This is a pity, as reflection is an important part of the core API, and forms the basis for many technologies, such as JavaBeans, object serialization (crucial to J2EE) and JSP. Many J2EE servers, such as JBoss and Orion, use reflection (via Java 1.3 dynamic proxies) to simplify J2EE deployment by eliminating the need for container-generated stubs and skeletons. This means that every call to an EJB is likely to involve reflection, whether we're aware of it or not. Reflection is a powerful tool for developing generic solutions.

Important 

Used appropriately, reflection can enable us to write less code. Code using reflection can also minimize maintenance by keeping itself up to date. As an example, consider the implementation of object serialization in the core Java libraries. Since it uses reflection, there's no need to update serialization and deserialization code when fields are added to or removed from an object. At a small cost to efficiency, this greatly reduces the workload on developers using serialization, and eliminates many programming errors.

Two misconceptions are central to reservations about reflection:

  • Code that uses reflection is slow

  • Code that uses reflection is unduly complicated

Each of these misconceptions is based on a grain of truth, but amounts to a dangerous oversimplification. Let's look at each in turn.

Code that uses reflection is usually slower than code that uses normal Java object creation and method calls. However, this seldom matters in practice, and the gap is narrowing with each generation of JVMs. The performance difference is slight, and the overhead of reflection is usually far outweighed by the time taken by the operations the invoked methods actually do.

Most of the best uses of reflection have no performance implications. For example, it's largely immaterial how long it takes to instantiate and configure objects on system startup. As we'll see in Chapter 15, most optimization is unnecessary. Unnecessary optimization that prevents us from making superior design choices is downright harmful. Similarly, the overhead added by the use of reflection to populate a JavaBean when handling a web request (the approach taken by Struts and most other web application frameworks) won't be detectable.

Even where performance does matter in a particular situation, reflection is far from having the disastrous impact that many developers imagine, as we'll see in Chapter 15. In fact, in some cases, such as its use to replace a lengthy chain of if/else statements, reflection will actually improve performance.

The Reflection API is relatively difficult to use directly. Exception handling, especially, can be cumbersome. However, similar reservations apply to many important Java APIs, such as JDBC. The solution is to avoid using such APIs directly by working through a layer of helper classes at the appropriate level of abstraction, not to avoid the functionality they exist to provide. Accessed via an appropriate abstraction layer, reflection will actually simplify application code.

Important 

Used appropriately, reflection won't degrade performance. Using reflection appropriately should actually improve code maintainability. Direct use of reflection should be limited to infrastructure classes, not scattered through application objects.

Reflection Idioms

The following idioms illustrate appropriate use of reflection.

Reflection and Switches

Chains of if/else statements and large switch statements should alarm any developer committed to OO principles. Reflection provides two good ways of avoiding them:

  • Using the condition to determine a class name, and using reflection to instantiate the class and use it (assuming that the class implements a known interface); this approach is sketched briefly after the list.

  • Using the condition to determine a method name, and using reflection to invoke it.
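As a minimal sketch of the first approach (the ReportRenderer interface, the package, and the naming convention below are all hypothetical), the condition is reduced to deriving a class name, and the named class is instantiated reflectively against a known interface:

   public ReportRenderer getRenderer(String format) throws Exception {
       // "pdf" maps to com.mycompany.report.PdfRenderer, "html" to HtmlRenderer, and so on
       String className = "com.mycompany.report." +
           Character.toUpperCase(format.charAt(0)) + format.substring(1) + "Renderer";
       return (ReportRenderer) Class.forName(className).newInstance();
   }

Supporting a new format then means adding a new ReportRenderer implementation, not editing a conditional.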

Let's look at the second approach in practice.

Consider the following code fragment from an implementation of the java.beans.VetoableChangeListener interface. A PropertyChangeEvent received contains the name of the property in question. The obvious implementation will perform a chain of if/else statements to identify the validation method to invoke within the class (the vetoableChange() method will become huge if all validation rules are included inline):

   public void vetoableChange(PropertyChangeEvent e) throws PropertyVetoException {
       if (e.getPropertyName().equals("email")) {
           String email = (String) e.getNewValue();
           validateEmail(email, e);
       }
       // ... one branch per bean property ...
       else if (e.getPropertyName().equals("age")) {
           int age = ((Integer) e.getNewValue()).intValue();
           validateAge(age, e);
       } else if (e.getPropertyName().equals("surname")) {
           String surname = (String) e.getNewValue();
           validateSurname(surname, e);
       } else if (e.getPropertyName().equals("forename")) {
           String forename = (String) e.getNewValue();
           validateForename(forename, e);
       }
   }

At four lines per bean property, adding another 10 bean properties will add 40 lines of code to this method. This if/else chain will need updating every time we add or remove bean properties.

Consider the following alternative. The individual validator now extends AbstractVetoableChangeListener, an abstract superclass that provides a final implementation of the vetoableChange() method. The AbstractVetoableChangeListener's constructor examines methods added by subclasses that fit a validation signature:

   void validate<bean property name>(<new value>, PropertyChangeEvent)
       throws PropertyVetoException

The constructor is the most complex piece of code. It looks at all methods declared in the class that fit the validation signature. When it finds a valid validator method, it places it in a hash table, validationMethodHash, keyed by the property name, as indicated by the name of the validator method:

   public AbstractVetoableChangeListener() throws SecurityException {
       Method[] methods = getClass().getMethods();
       for (int i = 0; i < methods.length; i++) {
           if (methods[i].getName().startsWith(VALIDATE_METHOD_PREFIX) &&
                   methods[i].getParameterTypes().length == 2 &&
                   PropertyChangeEvent.class.isAssignableFrom(methods[i].getParameterTypes()[1])) {
               // We've found a potential validator
               Class[] exceptions = methods[i].getExceptionTypes();
               // We don't care about the return type, but we must ensure that
               // the method throws only one checked exception, PropertyVetoException
               if (exceptions.length == 1 &&
                       PropertyVetoException.class.isAssignableFrom(exceptions[0])) {
                   // We have a valid validator method
                   // Ensure it's accessible (for example, it might be a method on an
                   // inner class)
                   methods[i].setAccessible(true);
                   String propertyName = Introspector.decapitalize(
                       methods[i].getName().substring(VALIDATE_METHOD_PREFIX.length()));
                   validationMethodHash.put(propertyName, methods[i]);
                   System.out.println(methods[i] + " is validator for property " +
                       propertyName);
               }
           }
       }
   }

The implementation of vetoableChange() does a hash table lookup for the relevant validator method for each property changed, and invokes it if one is found:

   public final void vetoableChange(PropertyChangeEvent e)
           throws PropertyVetoException {
       Method m = (Method) validationMethodHash.get(e.getPropertyName());
       if (m != null) {
           try {
               Object val = e.getNewValue();
               m.invoke(this, new Object[] { val, e });
           } catch (IllegalAccessException ex) {
               System.out.println("WARNING: can't validate. " +
                   "Validation method '" + m + "' isn't accessible");
           } catch (InvocationTargetException ex) {
               // We don't need to catch runtime exceptions
               if (ex.getTargetException() instanceof RuntimeException)
                   throw (RuntimeException) ex.getTargetException();
               // Must be a PropertyVetoException if it's a checked exception
               PropertyVetoException pex = (PropertyVetoException)
                   ex.getTargetException();
               throw pex;
           }
       }
   }

For a complete listing of this class, or to use it in practice, see the com.interface21.bean.AbstractVetoableChangeListener class under the /framework/src directory of the download accompanying this book.

Now subclasses merely need to implement validation methods with the same signatures as in the first example. The difference is that a subclass's logic is automatically updated when a validation method is added or removed. Note also that we've relied on reflection to convert parameter types automatically when invoking validation methods. Clearly it's a programming error if, say, the validateAge() method expects a String rather than an int; this will be indicated in a stack trace at runtime. Obvious bugs pose little danger. Most serious problems result from subtle bugs that don't occur every time the application runs and don't produce clear stack traces.
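For illustration, a concrete validator built on this superclass might look like the following sketch (the class name and rules are invented; only the method signatures matter):

   import java.beans.PropertyChangeEvent;
   import java.beans.PropertyVetoException;

   public class CustomerValidator extends AbstractVetoableChangeListener {

       /** Invoked automatically when the "age" property changes */
       public void validateAge(int age, PropertyChangeEvent e)
               throws PropertyVetoException {
           if (age < 18)
               throw new PropertyVetoException("Age must be at least 18", e);
       }

       /** Invoked automatically when the "email" property changes */
       public void validateEmail(String email, PropertyChangeEvent e)
               throws PropertyVetoException {
           if (email.indexOf('@') == -1)
               throw new PropertyVetoException("Invalid email address: " + email, e);
       }
   }

Adding, say, a validateSurname() method later would be picked up automatically by the superclass constructor, with no change to vetoableChange().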

Interestingly, the reflective approach will actually be faster on average than the if/else approach if there are many bean properties. String comparisons are slow, whereas the reflective approach uses a single hash table lookup to find the validation method to call.

Certainly, the AbstractVetoableChangeListener class is more conceptually complex than the if/else block. However, this is framework code. It will be debugged once, and verified by a comprehensive set of test cases. What's important is that the application code – individual validator classes – is much simpler because of the use of reflection. Furthermore, the AbstractVetoableChangeListener class is still easy to read for anyone with a sound grasp of Java reflection. The whole of the version of this class I use – including full Javadoc and implementation comments and logging statements – amounts to a modest 136 lines.

Important 

Reflection is a core feature of Java, and any serious J2EE developer should have a strong grasp of the Reflection API. Although reflective idioms may seem puzzling at first (much as the ternary operator can to newcomers), they're equally a part of the language's design, and it's vital to be able to read and understand them easily.

Reflection and the Factory Design Pattern

I seldom use the Factory design pattern in its simplest form, which requires all classes created by the factory to be known to the implementation of the factory. This severely limits extensibility: the factory object cannot create objects (even objects that implement a known interface) unless it knows their concrete class.

The following method (a simplified version of the "bean factory" approach discussed in Chapter 11) shows a more flexible approach, which is extensible without any code changes. It's based on using reflection to instantiate classes by name. The class names can come from any configuration source:

   public Object getObject(String classname, Class requiredType)
           throws FactoryException {
       try {
           Class clazz = Class.forName(classname);
           Object o = clazz.newInstance();
           if (!requiredType.isAssignableFrom(clazz))
               throw new FactoryException("Class '" + classname +
                   "' not of required type " + requiredType);
           // Configure the object...
           return o;
       } catch (ClassNotFoundException ex) {
           throw new FactoryException("Couldn't load class '" + classname + "'", ex);
       } catch (IllegalAccessException ex) {
           throw new FactoryException("Couldn't construct class '" + classname +
               "': is the no arg constructor public?", ex);
       } catch (InstantiationException ex) {
           throw new FactoryException("Couldn't construct class '" + classname +
               "': does it have a no arg constructor?", ex);
       }
   }

This method can be invoked like this:

   MyInterface mo = (MyInterface) beanFactory.getObject(
       "com.mycompany.mypackage.MyImplementation", MyInterface.class);

Like the other reflection example, this approach conceals complexity in a framework class. It is true that this code cannot be guaranteed to work: the class name may be erroneous, or the class may not have a no arg constructor, preventing it from being instantiated. However, such failures will be readily apparent at runtime, especially as the getObject() method produces good error messages (when using reflection to implement low-level operations, be very careful to generate helpful error messages). Deferring operations until runtime does involve trade-offs (such as the need to cast), but the benefits may be substantial.

Note 

Such use of reflection can best be combined with the use of JavaBeans. If the objects to be instantiated expose JavaBean properties, it's easy to hold initialization information outside Java code.

This is a very powerful idiom. Performance is unaffected, as it is usually used only at application startup; the difference between loading and initializing, say, ten objects by reflection and creating the same objects using the new operator and initializing them directly is undetectable. On the other hand, the benefit in terms of truly flexible design may be enormous. Once we do have the objects, we invoke them without further use of reflection.

There is a particularly strong synergy between using reflection to load classes by name and set their properties outside Java code and the J2EE philosophy of declarative configuration. For example, servlets, filters and web application listeners are instantiated from fully qualified class names specified in the web.xml deployment descriptor. Although they are not bean properties, ServletConfig initialization parameters are set in XML fragments in the same deployment descriptor, allowing the behavior of servlets at runtime to be altered without the need to modify their code.
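For example, a servlet can read a tuning value supplied as an <init-param> in web.xml from its init() method; the servlet and parameter names below are invented for illustration:

   import javax.servlet.ServletException;
   import javax.servlet.http.HttpServlet;

   public class ReportServlet extends HttpServlet {

       private int cacheSeconds;

       public void init() throws ServletException {
           // Value comes from an <init-param> element in web.xml,
           // so it can be changed at deployment time without touching Java code
           String value = getServletConfig().getInitParameter("cacheSeconds");
           cacheSeconds = (value != null) ? Integer.parseInt(value) : 60;
       }

       // doGet() and other request handling would use cacheSeconds...
   }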

Important 

Using reflection is one of the best ways to parameterize Java code. Using reflection to choose, instantiate, and configure objects dynamically allows us to exploit the full power of loose coupling through interfaces. Such use of reflection is consistent with the J2EE philosophy of declarative configuration.

Java 1.3 Dynamic Proxies

Java 1.3 introduced dynamic proxies: special classes that can implement interfaces at runtime without declaring that they implement them at compile time.

Dynamic proxies can't be used to proxy for a class (rather than an interface). However, this isn't a problem if we use interface-based design. Dynamic proxies are used internally by many application servers, typically to avoid the need to generate and compile stubs and skeletons.

Dynamic proxies are usually used to intercept calls to a delegate that actually implements the interface in question. Such interception can be useful to handle the acquisition and release of resources, add additional logging, and gather performance information (especially about remote calls in a distributed J2EE application). There will, of course, be some performance overhead, but its impact will vary depending on what the delegate actually does. One good use of dynamic proxies is to abstract the complexity of invoking EJBs. We'll see an example of this in Chapter 11.
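To make the interception idea concrete, the following rough sketch (not the com.interface21.beans.DynamicProxy class mentioned below, and with invented names) logs the duration of every call before delegating to the real implementation of whatever interfaces the target exposes:

   import java.lang.reflect.InvocationHandler;
   import java.lang.reflect.InvocationTargetException;
   import java.lang.reflect.Method;
   import java.lang.reflect.Proxy;

   public class TimingProxy implements InvocationHandler {

       private final Object target;

       private TimingProxy(Object target) {
           this.target = target;
       }

       /** Return a proxy implementing all the interfaces of the given target. */
       public static Object wrap(Object target) {
           return Proxy.newProxyInstance(
               target.getClass().getClassLoader(),
               target.getClass().getInterfaces(),
               new TimingProxy(target));
       }

       public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
           long start = System.currentTimeMillis();
           try {
               // Delegate to the real implementation
               return method.invoke(target, args);
           } catch (InvocationTargetException ex) {
               // Rethrow the underlying exception, not the reflection wrapper
               throw ex.getTargetException();
           } finally {
               System.out.println(method.getName() + " took " +
                   (System.currentTimeMillis() - start) + "ms");
           }
       }
   }

Given a hypothetical MyService interface and MyServiceImpl class, a caller would simply write MyService service = (MyService) TimingProxy.wrap(new MyServiceImpl()); and remain unaware of the interception.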

The com.interface21.beans.DynamicProxy class included in the infrastructure code with the sample application is a generic dynamic proxy that fronts a real implementation of the interface in question, designed to be subclassed by dynamic proxies that add custom behavior.

Dynamic proxies can be used to implement Aspect Oriented Programming (AOP) concepts in standard Java. AOP is an emerging paradigm that structures code around crosscutting aspects of a system, based on the separation of concerns. For example, the addition of logging capabilities just mentioned is a crosscut that addresses the logging concern in a central place. It remains to be seen whether AOP will generate anything like the interest of OOP, but it's possible that it will at least grow to complement OOP.

For more information on AOP, see the following sites:

  • http://aosd.net/. AOP home page.

  • http://aspectj.org/. Home page for AspectJ, an extension to Java that supports AOP.

Note 

See the reflection guide with your JDK for detailed information about dynamic proxies.

Important 

A warning: I feel dangerously good after I've made clever use of reflection. Excessive cleverness reduces maintainability. Although I'm a firm believer that reflection, used appropriately, is beneficial, don't use reflection if a simpler approach might work equally well.

Using JavaBeans to Achieve Flexibility

Where possible, application objects – except very fine-grained objects – should be JavaBeans. This maximizes configuration flexibility (as we've seen above), as beans allow easy property discovery and manipulation at runtime. There's little downside to using JavaBeans, as there's no need to implement a special interface to make an object a bean.
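As a small illustration of what that discovery buys us, the following hypothetical helper uses the java.beans.Introspector to find and read the properties of any bean at runtime, without knowing its class in advance:

   import java.beans.BeanInfo;
   import java.beans.Introspector;
   import java.beans.PropertyDescriptor;

   public class BeanDumper {

       /** Print the readable properties of any JavaBean, discovered at runtime. */
       public static void dump(Object bean) throws Exception {
           BeanInfo info = Introspector.getBeanInfo(bean.getClass(), Object.class);
           PropertyDescriptor[] properties = info.getPropertyDescriptors();
           for (int i = 0; i < properties.length; i++) {
               if (properties[i].getReadMethod() != null) {
                   Object value = properties[i].getReadMethod().invoke(bean, null);
                   System.out.println(properties[i].getName() + "=" + value);
               }
           }
       }
   }

Frameworks use the same machinery in reverse, calling PropertyDescriptor.getWriteMethod() to push configuration values onto bean instances.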

When using beans, consider whether the following standard beans machinery can be used to implement functionality:

  • PropertyEditor

  • PropertyChangeListener

  • VetoableChangeListener

  • Introspector

Important 

Designing objects to be JavaBeans has many benefits. Most importantly, it enables objects to be instantiated and configured easily using configuration data outside Java code.

Note 

Thanks to Gary Watson, my colleague at FT.com, for convincing me of the many merits of JavaBeans.

Avoid a Proliferation of Singletons by Using an Application Registry

The Singleton design pattern is widely useful, but the obvious implementation can be dangerous. The obvious way to implement a singleton in Java is to use a static instance variable containing the singleton instance, a public static method to return the singleton instance, and a private constructor to prevent instantiation:

   public class MySingleton {

       /** Singleton instance */
       private static MySingleton instance;

       // Static block to instantiate the singleton in a threadsafe way
       static {
           instance = new MySingleton();
       }

       /**
        * Enforces the singleton design pattern.
        * @return the singleton instance of this class
        */
       public static MySingleton getInstance() {
           return instance;
       }

       /** Private constructor to enforce singleton design pattern. */
       private MySingleton() {
           ...
       }

       // Business methods on instance
   }

Note the use of a static initializer to initialize the singleton instance when the class is loaded. This prevents the race conditions that are possible if the singleton is lazily instantiated in the getInstance() method whenever the instance is found to be null (a common cause of errors). It's also possible for the static initializer to catch any exceptions thrown by the singleton's constructor, which can then be rethrown in the getInstance() method.

However, this common idiom leads to several problems:

  • Dependence on the singleton class is hard-coded into many other classes.

  • The singleton must handle its own configuration. As other classes are locked out of its initialization process, the singleton will be responsible for any properties loading required.

  • Complex applications can have many singletons. Each might handle its configuration loading differently, meaning there's no central repository for configuration.

  • Singletons are interface-unfriendly. This is a very bad thing. There's little point in making a singleton implement an interface, because there's then no way of preventing there being other implementations of that interface. The usual implementation of a singleton defines a type in a class, not an interface.

  • Singletons aren't amenable to inheritance, because we need to code to a specific class, and because Java doesn't permit the overriding of static methods such as getInstance().

  • It's impossible to update the state of singletons at runtime consistently. Any updates may be performed haphazardly in individual Singleton or factory classes. There's no way to refresh the state of all singletons in an application.

A slightly more sophisticated approach is to use a factory, which may use different implementation classes for the singleton. However, this only solves some of these problems.

Important 

I don't much like static variables in general. They break OO by introducing dependency on a specific class. The usual implementation of the Singleton design pattern exhibits this problem.

In my view, it's a much better solution to have one object that can be used to locate other objects. I call this an application context object, although I've also seen it termed a "registry" or "application toolbox". Any object in the application needs only a reference to the single instance of the context object to retrieve the single instance of any application object. Objects are normally retrieved by name. This context object doesn't even need to be a singleton. For example, it's possible to use the Servlet API to place the context in a web application's ServletContext, or we can bind the context object in JNDI and access it using standard application server functionality. Such approaches don't require code changes to the context object itself, just a little bootstrap code.
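To make "a little bootstrap code" concrete, a web application could use a listener along the following lines; the XmlApplicationContext class and the attribute name are hypothetical placeholders for whatever context implementation is chosen:

   import javax.servlet.ServletContext;
   import javax.servlet.ServletContextEvent;
   import javax.servlet.ServletContextListener;

   public class ContextLoaderListener implements ServletContextListener {

       public static final String CONTEXT_ATTRIBUTE =
           "com.mycompany.context.ApplicationContext";

       public void contextInitialized(ServletContextEvent sce) {
           ServletContext servletContext = sce.getServletContext();
           // Hypothetical implementation that reads bean definitions from XML
           ApplicationContext applicationContext =
               new XmlApplicationContext("/WEB-INF/applicationContext.xml");
           // Make the context available to all web-tier objects
           servletContext.setAttribute(CONTEXT_ATTRIBUTE, applicationContext);
       }

       public void contextDestroyed(ServletContextEvent sce) {
           sce.getServletContext().removeAttribute(CONTEXT_ATTRIBUTE);
       }
   }

The listener itself is declared in web.xml, so even this bootstrap step is configured declaratively rather than hard-coded.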

The context object itself will be generic framework code, reusable between multiple applications.

The advantages of this approach include:

  • It works well with interfaces. Objects that need the "singletons" never need to know their implementing class.

  • All objects are normal Java classes, and can use inheritance normally. There are no static variables.

  • Configuration is handled outside the classes in question, and entirely by framework code. The context object is responsible for instantiating and configuring individual singletons. This means that configuration outside Java code (such as an XML document or even RDBMS tables) can be used to source configuration data. Individual objects can be configured using JavaBean properties. Such configuration can include the creation of object graphs amongst managed objects by the application context, without the objects in question needing to do anything except expose bean properties.

  • The context object will implement an interface. This allows different implementations to take configuration from different sources without any need to change code in managed application objects.

  • It's possible to support dynamic state changes to "singletons". The context can be refreshed, changing the state of the objects it manages (although of course there are thread safety issues to consider).

  • Using a context object opens other possibilities. For example, the context may provide other services, such as implementing the Prototype design pattern to serve as a factory for independent object instances. Since many application objects have access to it, the context object may serve as an event publisher, in the Observer design pattern.

  • While the Singleton design pattern is inflexible, we can choose to have multiple application context objects if this is useful (the infrastructure discussed in Chapter 11 supports hierarchical context objects).

The following code fragments illustrate the use of this approach.

The context object itself will be responsible for loading configuration. The context object may register itself (for example with the ServletContext of a web application, or JNDI), or a separate bootstrap class may handle this. Objects needing to use "singletons" must first look up the context object in a well-known location, such as the ServletContext. For example:

   ApplicationContext applicationContext = (ApplicationContext)
       servletContext.getAttribute("com.mycompany.context.ApplicationContext");

The ApplicationContext instance can be used to obtain any "singleton":

   MySingleton mySingleton = (MySingleton)
       applicationContext.getSingleInstance("mysingleton");
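A minimal sketch of what such a context interface might look like (the interface developed in Chapter 11 offers much more, but the essential contract is this simple):

   public interface ApplicationContext {

       /**
        * Return the single, pre-configured instance managed under the given name.
        * Callers cast the result to the interface they require.
        */
       Object getSingleInstance(String name);
   }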

In Chapter 11 we'll look at how to implement this superior alternative to the Singleton design pattern. Note that it isn't limited to managing "singletons": this is a valuable piece of infrastructure that can be used in many ways.

Note 

Why not use JNDI – a standard J2EE service – instead of using additional infrastructure to achieve this result? Each "singleton" could be bound to the JNDI context, allowing other components running in the application server to look them up.

Using JNDI adds complexity (JNDI lookups are verbose) and is significantly less powerful than the application context mechanism described above. For example, each "singleton" would be left on its own to handle its configuration, as JNDI offers only a lookup mechanism, not a means of externalizing configuration. Another serious objection is that this approach would be wholly dependent on application server services, making testing outside an application server unnecessarily difficult. Finally, some kind of bootstrap service would be required to bind the objects into JNDI, meaning that we'd probably need to implement most of the code in the application context approach anyway. Using an application context, we can choose to bind individual objects with JNDI if it proves useful.

Important 

Avoid a proliferation of singletons, each with a static getInstance() method. Using a factory to return each singleton is better, but still inflexible. Instead, use a single "application context" object or registry that returns a single instance of each class. The generic application context implementation will normally (but not necessarily) be based on the use of reflection, and should take care of configuring the object instances it manages. This has the advantage that application objects need only expose bean properties for configuration, and never need to look up configuration sources such as properties files.

Refactoring

Refactoring, according to Martin Fowler in Refactoring: Improving the Design of Existing Code from Addison-Wesley (ISBN 0-201-48567-2), is "the process of changing a software system in such a way that it does not alter the external behavior of the code, yet improves its internal structure. It's a disciplined way to clean up code that minimizes the chances of introducing bugs". See http://www.refactoring.com for more information and resources on refactoring.

Most of the refactoring techniques Fowler describes are second nature to good developers. However, the discussion is useful, and Fowler's naming is being widely adopted (for example, the Eclipse IDE uses these names on its refactoring menus).

Important 

Be prepared to refactor to eliminate code duplication and ensure that a system is well implemented at each point in time.

It's helpful to use an IDE that supports refactoring. Eclipse is particularly good in this respect.

I believe that refactoring can be extended beyond functional code. For example, we should continually seek to improve in the following areas:

  • Error messages
    A failure with a confusing error message indicates an opportunity to improve the error message.

  • Logging
    During code maintenance, we can refine logging to help in debugging. We'll discuss logging below.

  • Documentation
    If a bug results from a misunderstanding of what a particular object or method does, documentation should be improved.


