Testing Correctness

Let's now examine some issues and techniques around testing the correctness of applications: that is, testing that applications meet their functional requirements.

The XP Approach to Testing

In Chapter 2 I mentioned Extreme Programming (XP), a methodology that emphasizes frequent integration and comprehensive unit testing. The key rules and practices of XP that relate to testing are:

  • Write tests before code

  • All code must have unit tests, which can be run automatically in a single operation

  • When a bug is reported, tests are created to reproduce it before an attempt is made to fix the bug

Note 

The pioneers of XP didn't invent test-first development. However, they have popularised it and associated it with XP in common understanding. Among other methodologies, the Unified Software Development Process also emphasizes testing throughout the project lifecycle.

We don't need to adopt XP as a whole in order to benefit from these ideas. Let's look at their benefits and implications.

Writing test cases before writing code – test-first development – has many benefits:

  • The test cases amount to a specification and provide additional documentation. A working specification, compliance to which can be checked daily or even more often, is much more valuable than a specification in a thick requirements document that no one reads or updates.

  • It promotes understanding of the requirements. It will uncover, and force the resolution of, uncertainty about the class or component's functionality before any coding time has been wasted, and before other components have been built that would be affected by reworking it. It's impossible to write a test case without understanding what a component should do; it is possible to waste a lot of coding time on the component itself before the lack of understanding becomes apparent.

    A common example concerns null arguments to methods. It's easy to write a method without considering this possibility, with the result that a call with null arguments can produce unexpected results. A proper test suite will include test cases with null arguments, ensuring that the method is only written after the behavior on null arguments is determined and documented.

  • Test cases are more likely to be viewed as vital, and updated throughout the project lifecycle.

  • It's much more difficult to write tests for existing code than to write tests before and while writing code. Developers implementing application code should have complete knowledge of what it should do (and therefore how to test it); tests written afterwards will always play catch-up. Thus test-first development is one of the best ways to maximize test coverage.

A test-first approach doesn't mean that a developer should spend all day writing all possible tests for a class before writing the class. Test cases and code are typically written in the same sitting, but in that order. For example, we might write the tests for a particular method before fully implementing the method. After the method is complete and these tests succeed, we move on to another method.

When we write tests before application code, we should check that they fail before implementing the required functionality. This allows us to test the test case and verify test coverage. For example, we might write tests for a method, then a trivial implementation of the method that returns null. Now we can run the test case and see it fail (if it doesn't, something is wrong with our test suite).

If we write tests before code, the second rule (that all code should have unit tests) will be honored automatically. Many benefits flow from having tests for all classes:

  • It's possible to automate tests and verify in a single operation that all code is working as we expect. This isn't the same thing as working perfectly. Over the course of a project we'll learn more about how we want our code to work, and add test cases accordingly.

  • We can confidently add new functionality, as we have regression tests that will indicate if we've broken any existing functionality. Thus it's important that we can run all tests quickly and easily.

  • Refactoring is much less stressful to developers and less threatening to overall functionality. This ensures that the quality of the application's code remains high throughout the project lifecycle (for example, there's no need to keep away from that appalling class that sort of works, just because so many other classes depend on it). Similarly, it's possible to optimize a class or subsystem if necessary, with a feeling of security. We have a way of demonstrating that the optimized code does what it did before. With comprehensive unit test coverage, the later stages of the development cycle are likely to become much less stressful.

Important 

Unit testing will only provide a secure basis for refactoring and bug fixing if we have a comprehensive set of unit tests. A half-hearted approach to unit testing will deliver limited benefit.

A test-first approach can also be applied to bug fixing. Whenever a bug is reported (or becomes apparent other than through test cases) a failing test case should be written (failing due to the bug) before any code is written to fix the bug. The result is verification that the bug has been fixed without impact on functionality covered by previous tests, and a measure of confidence that that bug won't reappear.

Important 

Write unit tests before writing code, and update them throughout the project lifecycle. Bug reports and new functionality should first prompt the writing and execution of failing tests demonstrating the mismatch between what the application does and what it should do.

Test-first development is the best way to guarantee comprehensive test coverage, as it is much harder to write comprehensive tests for existing code.

Remember to use test failures to improve error messages and handling. If a test fails, and it wasn't immediately obvious what went wrong, try first to make the problem obvious (through improved error handling and messages) and then to fix it.

All these rules move much of the responsibility of testing onto the development team. In a traditional large organization approach to software development, a specialized testing team is responsible for testing, while developers produce code to be tested. There is a place for QA specialists; developers aren't always best at writing test cases (although they can learn). However, the distinction between development and technical testing is artificial. On the other hand, acceptance testing is likely to be conducted at least partly outside the development team.

Important 

There shouldn't be an artificial division between development and testing roles. Developers should be encouraged to value the writing of good test cases as an important skill.

Writing Test Cases

To enjoy the benefits of comprehensive unit testing, we need to know how to write effective tests. Let's consider some of the key issues, and Java tools to help simplify and automate test authoring.

What Makes a Good Test Case?

Writing good test cases takes practice. Our knowledge of the implementation (or likely implementation, if we're developing test-first) may suggest potential problem areas; however, we must also develop the ability to think outside the developer role. In particular, it's important to view the writing of a failing test as an achievement, not a problem. Common themes of testing will include:

  • Testing the most common execution paths (these should be apparent from the application use cases)

  • Testing what happens on unexpected arguments

  • Testing what happens when components under test encounter errors from components they use

We'll take a practical look at writing test cases shortly.

Recognizing Test Case Authoring and Maintenance as a Core Task

Writing all those test cases does take time, and as they're crucial to the documentation of the system, they must be written carefully. It's vital that the suite of tests continues to reflect the requirements of the application throughout the project lifecycle. Like most code, test suites tend to accrete rubbish over time. This can be dangerous. For example, old tests that are no longer relevant can complicate both test code and application code (which will still be required to pass them). It's essential that test code – like application code – be kept under version control. Changes to test code are as important as changes to application code, so may need to be subject to a formal process. It takes a while to get used to test-first development, but the benefits grow throughout the project lifecycle.

Unit Testing

So we're convinced that unit testing is important. How should we go about it in J2EE projects?

main() Methods

The traditional approach to unit testing in Java is to write a main() method in each class to be tested. However, this unnecessarily adds to the length of source files, bloats compiled bytecode, and often introduces unnecessary dependencies on other classes, such as implementations of interfaces referenced in the code of the class proper.

A better approach is to use another class, with a name such as XXXXMain or XXXXTest, which contains only the main() method and whatever supporting code it needs, such as code to parse command-line arguments. We can now even put unit test classes in a parallel source tree.
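As an illustration, such a companion class might look like the following sketch, where PriceCalculator is a hypothetical class under test:

    public class PriceCalculatorMain {

        public static void main(String[] args) {
            PriceCalculator calc = new PriceCalculator();
            // The developer must read this output to judge success or failure:
            // the key weakness of main() method testing, discussed below
            System.out.println("Price for 3 units: " + calc.priceFor(3));
        }
    }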

However, using main() methods to run tests is still an ad hoc approach. Normally the executable classes will produce console output, which developers must read to establish whether the test succeeded or failed. This is time consuming, and usually means that it's impossible to script main() method tests and check the results of several at a time.

Using JUnit

There's a much better approach than main() method tests, which permits automation. JUnit is a simple open source tool that's now the de facto standard for unit testing Java applications. JUnit is easy to use and easy to set up; there's virtually no learning curve. JUnit was written by Erich Gamma (one of the Gang of Four) and Kent Beck (the pioneer of XP). JUnit can be downloaded from http://www.junit.org/index.htm. This site also contains many add-ons for JUnit and helpful articles about using JUnit.

JUnit is designed to report success or failure in a consistent way, without any need to interpret the results. JUnit executes test cases (individual tests) against a test fixture: a set of objects under test. JUnit provides easy ways of initializing and (if necessary) releasing test fixtures.

The JUnit framework is customizable, but creating JUnit tests usually involves only the following simple steps:

  1. Create a subclass of junit.framework.TestCase.

  2. Implement a public constructor that accepts a string parameter and invokes the superclass constructor with the string. If necessary, this constructor can also load test data used subsequently. It can also perform initialization that should be performed once only for the entire test suite, appropriate when tests do not change the state of the test fixture. This is handy when the fixture is slow to create.

  3. Optionally, override the setUp() method to initialize the objects and variables (the fixture) used by all test cases. Not all test cases will require this. Individual tests may create and destroy their own fixture. Note that the setUp() method is called before every individual test case, and the tearDown() method after.

  4. Optionally, override the tearDown() method to release resources acquired in setUp(), or to revert test data into a clean state. This will be necessary if test cases may update persistent data.

  5. Add test methods to the class. Note that we don't need to implement an interface, as JUnit uses reflection and automatically detects test methods. Test methods are recognized by their signature, which must be of the form public void test<Description>(). Test methods may throw any checked or unchecked exception.

JUnit's value and elegance lie in the way in which it allows us to combine multiple test cases into a test suite. For example, an object of class junit.framework.TestSuite can be constructed with a class containing multiple test methods as an argument; it will automatically recognize the test methods and add them to the suite. This illustrates a good use of reflection, to ensure that code keeps itself up to date. We'll discuss the use of reflection in detail in Chapter 4. When we write new tests or delete old tests, we don't need to modify any central list of tests – avoiding the potential for errors. The TestSuite class provides an API allowing us to add further tests to a test suite easily, so that multiple tests can be composed, as in the sketch below.
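For instance, a composite suite for an application might be assembled as follows. AllTests, DataAccessTests, and WebTierTests are hypothetical names; each nested TestSuite is built by reflection from the test methods of the class passed to its constructor:

    import junit.framework.Test;
    import junit.framework.TestSuite;

    public class AllTests {

        public static Test suite() {
            TestSuite suite = new TestSuite("All application tests");
            // Each nested suite finds its test methods by reflection
            suite.addTest(new TestSuite(DataAccessTests.class));
            suite.addTest(new TestSuite(WebTierTests.class));
            return suite;
        }
    }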

We can use a number of test runners provided by JUnit that execute and display the results of tests. The two most often used are the text runner and the Swing runner, which displays a simple GUI. I recommend running JUnit tests from Ant (we'll discuss this below), which means using the text interface. The Swing test runner does provide the famous green bar when all tests pass, but text output provides a better audit trail.

Test methods invoke operations on the objects being tested and contain assertions based on comparing expected results with actual results. Assertions should contain messages explaining what went wrong to facilitate debugging in the case of failures. The JUnit framework provides several convenient assertion methods available to test cases, with signatures such as the following:

     public void assertTrue(java.lang.String message, boolean condition)
     public void assertSame(java.lang.String message, Object expected, Object actual)

Failed assertions amount to test failures, as do uncaught exceptions encountered by test methods. This last feature is very handy. We don't want to be forced into a lot of error handling in test cases, as try/catch blocks can rapidly produce large amounts of code that may be unnecessarily hard to understand. If an exception simply reflects something going wrong, rather than the expected behavior of the API with a given input, it is simpler not to catch it, but to let it cause a test failure.
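For example, a test method can simply declare that it throws Exception and let any unexpected exception fail the test. ConfigLoader and Config are hypothetical names used purely for illustration:

    // No try/catch: any exception thrown by load() is reported by JUnit,
    // complete with stack trace
    public void testLoadValidConfiguration() throws Exception {
        ConfigLoader loader = new ConfigLoader();
        Config config = loader.load(getClass().getResourceAsStream("valid-config.xml"));
        assertNotNull("Loader should return a Config object", config);
    }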

Consider the following example of using JUnit, which also illustrates test-first development in action. We require the following method in a StringUtils class, which takes a comma-delimited (CSV) list such as "dog,cat,rabbit" and outputs an array of elements as individual strings: for this input, "dog", "cat", "rabbit":

    public static String[] commaDelimitedListToStringArray(String s)

We see that we need to test the following conditions:

  • Ordinary inputs – words and characters separated with commas.

  • Inputs that include other punctuation characters, to ensure that they aren't treated as delimiters.

  • A null string. The method should return the empty array on null input.

  • A single string (without any commas). In this case, the return value should be an array containing a single string equal to the input string.

Using a test-first approach, the first step is to implement a JUnit test case. This will simply extend junit.framework.TestCase. As the method we're testing is static, there's no need to initialize a test fixture by overriding the setUp() method.

We declare the class, and provide the required constructor, as follows:

    public class StringUtilsTestSuite extends TestCase {

        public StringUtilsTestSuite(String name) {
            super(name);
        }

We now add a test method for each of the four cases described above. The whole test class is shown below, but let's start by looking at the simplest test: the method that checks behavior on a null input string. There are no prizes for short test method names: we use method names of the form test<Method to be tested><Description of test>:

    public void testCommaDelimitedListToStringArrayNullProducesEmptyArray() {
        String[] sa = StringUtils.commaDelimitedListToStringArray(null);
        assertTrue("String array isn't null with null input", sa != null);
        assertTrue("String array length == 0 with null input", sa.length == 0);
    }

Note the use of multiple assertions, which will provide the maximum possible information in the event of failure. In fact, the first assertion isn't strictly required: if it were to fail, the second would throw a NullPointerException, which itself causes a test failure. However, it's more informative in the event of failure to separate the two.

Having written our tests first, we then implement the commaDelimitedListToStringArray() method to return null.

Next we run JUnit. We'll look at how to run JUnit below. As expected, all the tests fail.

Now we implement the method in the simplest and most obvious way: using the core Java java.util.StringTokenizer class. As it requires no more effort, we've implemented a more general delimitedListToStringArray() method, and treated commas as a special case:

    public static String[] delimitedListToStringArray(String s, String delimiter) {
        if (s == null) {
            return new String[0];
        }
        if (delimiter == null) {
            return new String[] { s };
        }
        StringTokenizer st = new StringTokenizer(s, delimiter);
        String[] tokens = new String[st.countTokens()];
        System.out.println("length is " + tokens.length);
        for (int i = 0; i < tokens.length; i++) {
            tokens[i] = st.nextToken();
        }
        return tokens;
    }

    public static String[] commaDelimitedListToStringArray(String s) {
        return delimitedListToStringArray(s, ",");
    }

All our tests pass, and we believe that we've fully defined and tested the behavior required.

Sometime later it emerges that this method doesn't behave as expected with input strings such as "a,,b". We want this to result in a string array of length 3, containing the strings "a", the empty string, and "b". This is a bug, so we write a new test method that demonstrates it, and fails on the existing code:

    public void testCommaDelimitedListToStringArrayEmptyStrings() {
        String[] ss = StringUtils.commaDelimitedListToStringArray("a,,b");
        assertTrue("a,,b produces array length 3, not " + ss.length, ss.length == 3);
        assertTrue("components are correct",
            ss[0].equals("a") && ss[1].equals("") && ss[2].equals("b"));
        // Further tests omitted
    }

Looking at the implementation of the delimitedListToStringArray() method, it is clear that the StringTokenizer library class doesn't deliver the behavior we want. So we reimplement the method, doing the tokenizing ourselves to deliver the expected result. After two test runs, we end up with the following version of the delimitedListToStringArray() method:

    public static String[] delimitedListToStringArray(String s, String delimiter) {
        if (s == null) {
            return new String[0];
        }
        if (delimiter == null) {
            return new String[] { s };
        }
        List l = new LinkedList();
        int pos = 0;
        int delpos = 0;
        while ((delpos = s.indexOf(delimiter, pos)) != -1) {
            l.add(s.substring(pos, delpos));
            pos = delpos + delimiter.length();
        }
        if (pos <= s.length()) {
            // Add remainder of String
            l.add(s.substring(pos));
        }
        return (String[]) l.toArray(new String[l.size()]);
    }

Although Java has relatively poor string handling, and string manipulation is a common cause of bugs, we can do this refactoring fearlessly, because we have regression tests to verify that the new code performs as did the original version (as well as satisfying the new test that demonstrated the bug).

Here's the complete code for the test cases. Note that this class includes a private method, testCommaDelimitedListToStringArrayLegalMatch(String[] components), which builds a CSV-format string from the string array it is passed and verifies that the output of the commaDelimitedListToStringArray() method with this string matches the input array. Most of the public test methods use this method, and are much simpler as a result (although this method's name begins with test, it is private and takes an argument, so it won't be invoked directly by JUnit). It's often worth making this kind of investment in infrastructure in test classes:

    public class StringUtilsTestSuite extends TestCase {

        public StringUtilsTestSuite(String name) {
            super(name);
        }

        public void testCommaDelimitedListToStringArrayNullProducesEmptyArray() {
            String[] sa = StringUtils.commaDelimitedListToStringArray(null);
            assertTrue("String array isn't null with null input", sa != null);
            assertTrue("String array length == 0 with null input", sa.length == 0);
        }

        private void testCommaDelimitedListToStringArrayLegalMatch(
                String[] components) {
            StringBuffer sbuf = new StringBuffer();
            // Build the CSV string from the input array
            for (int i = 0; i < components.length; i++) {
                if (i != 0) {
                    sbuf.append(",");
                }
                sbuf.append(components[i]);
            }
            System.out.println("STRING IS " + sbuf);
            String[] sa =
                StringUtils.commaDelimitedListToStringArray(sbuf.toString());
            assertTrue("String array isn't null with legal match", sa != null);
            assertTrue("String array length is correct with legal match: returned " +
                sa.length + " when expecting " + components.length + " with String [" +
                sbuf.toString() + "]", sa.length == components.length);
            assertTrue("Output equals input", Arrays.equals(sa, components));
        }

        public void testCommaDelimitedListToStringArrayMatchWords() {
            // Could read these from files
            String[] sa = new String[] { "foo", "bar", "big" };
            testCommaDelimitedListToStringArrayLegalMatch(sa);
            sa = new String[] { "a", "b", "c" };
            testCommaDelimitedListToStringArrayLegalMatch(sa);
            // Test same words
            sa = new String[] { "AA", "AA", "AA", "AA", "AA" };
            testCommaDelimitedListToStringArrayLegalMatch(sa);
        }

        public void testCommaDelimitedListToStringArraySingleString() {
            String s = "woeirqupoiewuropqiewuorpqiwueopriguwopeiurqopwieur";
            String[] sa = StringUtils.commaDelimitedListToStringArray(s);
            assertTrue("Found one String with no delimiters", sa.length == 1);
            assertTrue("Single array entry matches input String with no delimiters",
                sa[0].equals(s));
        }

        public void testCommaDelimitedListToStringArrayWithOtherPunctuation() {
            String[] sa = new String[] { "xcvwert4456346&*.", "///", ".!", ".", ";" };
            testCommaDelimitedListToStringArrayLegalMatch(sa);
        }

        /** We expect to see the empty Strings in the output */
        public void testCommaDelimitedListToStringArrayEmptyStrings() {
            String[] ss = StringUtils.commaDelimitedListToStringArray("a,,b");
            assertTrue("a,,b produces array length 3, not " + ss.length,
                ss.length == 3);
            assertTrue("components are correct",
                ss[0].equals("a") && ss[1].equals("") && ss[2].equals("b"));
            String[] sa = new String[] { "", "", "a", "" };
            testCommaDelimitedListToStringArrayLegalMatch(sa);
        }

        public static void main(String[] args) {
            junit.textui.TestRunner.run(new TestSuite(StringUtilsTestSuite.class));
        }
    }

Note the main() method, which constructs a new TestSuite given the current class, and runs it with the junit.textui.TestRunner class. It's handy, although not essential, for each JUnit test case to provide a main() method (as such main() methods invoke JUnit themselves, they can also use the Swing test runner).

JUnit requires no special configuration. We simply need to ensure that junit.jar, which contains all the JUnit binaries, is on the classpath at test time.

We have several choices for running JUnit test suites. JUnit is designed to allow the implementation of multiple "test runners" which are decoupled from actual tests (we'll look at some special test runners that allow execution of test suites within a J2EE server later). Typically we'll use one of the following approaches to run JUnit tests:

  • Run test classes with a main() method from the command line.

  • Run tests through an IDE that offers JUnit integration. As with invocation via a main() method, this usually allows us to run only one test class at a time.

  • Run multiple tests as part of the application build process. Normally this is achieved with Ant. This essential technique is discussed under Automating Tests towards the end of this chapter.

While automation using Ant is the key to integrating testing into the application build process, integration with an IDE can be very handy as we work on individual classes. The following screenshots show how the JUnit test suite discussed above can be invoked from the Eclipse IDE.

Clicking on the Run icon on the toolbar, we choose JUnit from the list of launchers on the Run With submenu:

[Screenshot: choosing JUnit from the list of launchers on the Run With submenu]

Eclipse brings up a dialog box to display the progress and result of the tests. A green bar indicates success; a red bar, failure. Any errors or failures are listed, with their stack trace appearing in the Failure Trace pane:

[Screenshot: the Eclipse JUnit result view, with errors and failures listed and the stack trace shown in the Failure Trace pane]

Test Practices

Now that we've seen JUnit in action, let's step back a little and look at some good practices for writing tests. Although we'll discuss implementing them with JUnit, these practices are applicable to whatever test tool we may choose to use.

Write Tests to Interfaces

Wherever possible, write tests to interfaces, rather than classes. It's good OO design practice to program to interfaces, rather than classes, and testing should reflect this. Different test suites can easily be created to run the same tests against implementations of an interface (see Inheritance and Testing later).

Don't Bother Testing JavaBean Properties

It's usually unnecessary to test property getters and setters, and developing such tests is generally a waste of time. Moreover, bloating test cases with code that isn't really useful makes them harder to read and maintain.

Maximizing Test Coverage

Test-first development is the best strategy for ensuring that we maximize test coverage. However, sometimes tools can help to verify that we have met our goals for test coverage. For example, a profiling tool such as Sitraka's JProbe Profiler (discussed in Chapter 15) can be used to examine the execution path through an application under test and establish what code was (and wasn't) executed.

Specialized tools such as JProbe Coverage (also part of the JProbe Suite) make this much easier. JProbe Coverage can analyze one or more test runs along with the application codebase, to produce a list of methods and even lines of source code that weren't executed.

The modest investment in such a tool is likely to be worthwhile when it's necessary to implement a test suite for code that doesn't already have one.

Don't Rely on the Ordering of Test Cases

When using reflection to identify test methods to execute, JUnit does not guarantee the order in which it runs tests. Thus tests shouldn't rely on other tests having been executed previously. If ordering is vital, it's possible to add tests to a TestSuite object programmatically. They will be executed in the order in which they were added. However, it's best to avoid ordering issues by using the setUp() method appropriately.
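To illustrate, tests can be added to a suite by method name, in which case they execute in the order added. The method names below are from the StringUtils example earlier in this chapter:

    TestSuite suite = new TestSuite();
    // Tests run in the order in which they are added
    suite.addTest(new StringUtilsTestSuite(
        "testCommaDelimitedListToStringArrayNullProducesEmptyArray"));
    suite.addTest(new StringUtilsTestSuite(
        "testCommaDelimitedListToStringArrayEmptyStrings"));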

Avoid Side Effects

For the same reasons, it's important to avoid side effects when testing. A side effect occurs when one test changes the state of the system being tested in a way that may affect subsequent tests. Changes to persistent data in a database are also potential side effects.

Read Test Data from the Classpath, Not the File System

It's essential that tests are easy to run. A minimum of configuration should be required. A common cause of problems when running a test suite is for tests to read their configuration from the file system. Using absolute file paths will cause problems when code is checked out to a different location; different file location and path conventions (such as /home/rodj/tests/foo.dat or C:\Documents and Settings\rodj\foo.dat) can tie tests to a particular operating system. These problems can be avoided by loading test data from the classpath, with the Class.getResource() or Class.getResourceAsStream() methods. The necessary resources are usually best placed in the same directory as the test classes that use them.
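For example, a test method in a TestCase subclass might load the foo.dat file mentioned above as follows; the resource path is resolved relative to the test class's own package directory:

    public void testDataIsOnClasspath() throws Exception {
        // Looks for foo.dat next to this class on the classpath
        InputStream is = getClass().getResourceAsStream("foo.dat");
        assertNotNull("foo.dat should be in the same directory as this class", is);
        is.close();
    }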

Avoid Code Duplication in Test Cases

Test cases are an important part of the application. As with application code, the more code duplication they contain, the more likely they are to contain errors. The more code test cases contain, the more of a chore they are to write and the less likely it is that they will be written. Avoid this problem with a small investment in test infrastructure. We've already seen the use of a private method by several test cases, which greatly simplifies the test methods using it.

When Should We Write "Stub" Classes?

Sometimes classes we wish to test depend on other classes that aren't easy to provide at test time. If we follow good coding practice, any such dependencies will be on interfaces, rather than classes.

In J2EE applications, such dependencies will often be on implementation classes supplied by the application server. However, we often wish to be able to test code outside the server. For example, a class intended for use as a Data Access Object (DAO) in the EJB tier may require a javax.sql.DataSource object to provide connections to an RDBMS, but may have no other dependency on an EJB container. We may want to test this class outside a J2EE server.

In such cases, we can write simple stub implementations of interfaces required by classes under test. For example, we can implement a trivial javax.sql.DataSource that always returns a connection to a test database (we won't need to implement our own connection pool). Particularly useful stub implementations, such as a test DataSource, are generic, and can be used in multiple test cases, making it much easier to write and run tests. We can also use stub implementations of application objects that aren't presently available, or aren't yet written (for example, to enable development on the web tier to progress in parallel with development of the EJB tier).
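The following is a minimal sketch of what such a stub might look like. It is not the jdbc.TestDataSource class mentioned below, and it assumes the JDBC 2.0 version of the javax.sql.DataSource interface (later JDK versions add further methods):

    import java.io.PrintWriter;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;
    import javax.sql.DataSource;

    public class SimpleTestDataSource implements DataSource {

        private final String url;
        private final String username;
        private final String password;

        public SimpleTestDataSource(String url, String username, String password) {
            this.url = url;
            this.username = username;
            this.password = password;
        }

        // Each call opens a new connection to the test database:
        // no pooling, which is acceptable for small test suites
        public Connection getConnection() throws SQLException {
            return DriverManager.getConnection(url, username, password);
        }

        public Connection getConnection(String user, String pass) throws SQLException {
            return DriverManager.getConnection(url, user, pass);
        }

        // The remaining DataSource methods can be no-ops for testing
        public PrintWriter getLogWriter() { return null; }
        public void setLogWriter(PrintWriter out) { }
        public void setLoginTimeout(int seconds) { }
        public int getLoginTimeout() { return 0; }
    }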

The /framework/test directory in the download with this book includes several useful generic test classes, including the jdbc.TestDataSource class that enables us to test DAOs without a J2EE server.

This strategy delivers real value when implementing the stubbed objects doesn't involve too much work. It's best to avoid writing unduly complex stub implementations. If stubbed objects begin to have dependencies on other stubbed objects, we should consider alternative testing strategies.

Inheritance and Testing

We need to consider the implications of the inheritance hierarchy of classes we test. A class should pass all tests associated with its superclasses and the interfaces it implements. This is a corollary of the "Liskov Substitution Principle", which we'll meet in Chapter 4.

When using JUnit, we can use inheritance to our advantage. When one JUnit test case extends another (rather than extending junit.framework.TestCase directly), all the tests in the superclass are executed, as well as tests added in the subclass. This means that JUnit test cases can use an inheritance hierarchy paralleling the concrete inheritance hierarchy of the classes being tested.

In another use of inheritance among test cases, when a test case is written against an interface, we can make the test case abstract, and test individual implementations in concrete subclasses. The abstract superclass can declare a protected abstract method returning the actual object to be tested, forcing subclasses to implement it.

Important 

It's good practice to subclass a more general JUnit test case to add new tests for a subclass of an object or a particular implementation of an interface.

Let's consider an example, from the code used in our sample application. This code is discussed in detail in Chapter 11. Don't worry about what it does at the moment; we're only interested here in how to test classes and interfaces belonging to an inheritance hierarchy. One of the central interfaces in this supporting code is the BeanFactory interface, which provides methods to return objects it manages:

    Object getBean(String name) throws BeansException;

A commonly used subinterface is ListableBeanFactory, which adds additional methods to query the names of all managed objects, such as the following:

    String[] getBeanDefinitionNames(); 

Several classes implement the ListableBeanFactory interface, such as XmlBeanFactory (which takes bean definitions from an XML document). All implementing classes pass all tests against the ListableBeanFactory interface as well as all tests applying to the BeanFactory root interface. The following class diagram illustrates the inheritance hierarchy among these application interfaces and classes:

[Class diagram: the BeanFactory and ListableBeanFactory interfaces, with implementing classes such as XmlBeanFactory]

It's natural to mirror this inheritance hierarchy in the related test cases. The root of the JUnit test case hierarchy will be an abstract BeanFactoryTests class. This will include tests against the BeanFactory interface, and define a protected abstract method, getBeanFactory() that subclasses must implement to return the actual BeanFactory. Individual test methods in the BeanFactoryTests class will call this method to obtain the fixture object to run tests against. A subclass, ListableBeanFactoryTests, will include additional tests against the functionality added in the ListableBeanFactory interface and ensure that the BeanFactory returned by the getBeanFactory() method is of the ListableBeanFactory subinterface.

As both these test classes contain tests against interfaces, they will both be abstract. As JUnit is based on concrete inheritance, a test case hierarchy will be built entirely from classes; there is little value in test interfaces.

Either one of these abstract test classes can be extended by concrete test classes, such as XmlBeanFactoryTests. Concrete test classes will instantiate and configure the concrete BeanFactory or ListableBeanFactory implementation to be tested and (optionally) add new tests specific to this class (there's often no need for new class-specific tests; the aim is simply to create a fixture object that the superclass tests can be run against). All test cases defined in all superclasses will be inherited and run automatically by JUnit. The following class diagram illustrates the test case hierarchy:

[Class diagram: the test case hierarchy, from the abstract BeanFactoryTests and ListableBeanFactoryTests classes down to the concrete XmlBeanFactoryTests class]

The following excerpt from the BeanFactoryTests abstract base test class shows how it extends junit.framework.TestCase and implements the required constructor:

    public abstract class BeanFactoryTests extends junit.framework.TestCase {

        public BeanFactoryTests(String name) {
            super(name);
        }

The following is the definition of the protected abstract method that must be implemented by concrete subclasses:

    protected abstract BeanFactory getBeanFactory(); 

The following test method from the BeanFactoryTests class illustrates the use of this method:

    public void testNotThere() throws Exception {
        try {
            Object o = getBeanFactory().getBean("Mr Squiggle");
            fail("Shouldn't be able to find bean 'Mr Squiggle'");
        } catch (NoSuchBeanDefinitionException ex) {
            // Correct behavior
            // Test should fail on any other exception
        }
    }

The ListableBeanFactoryTests class merely adds more test methods. It does not implement the protected abstract method.

The following code fragment from the XmlBeanFactoryTests class – a concrete test suite that tests an implementation of the ListableBeanFactory interface – shows how the abstract getBeanFactory() method is implemented, based on an instance variable initialized in the setUp() method:

    public class XmlBeanFactoryTests extends ListableBeanFactoryTests {

        private XmlBeanFactory factory;

        public XmlBeanFactoryTests(String name) {
            super(name);
        }

        protected void setUp() throws Exception {
            InputStream is = getClass().getResourceAsStream("test.xml");
            this.factory = new XmlBeanFactory(is);
        }

        protected BeanFactory getBeanFactory() {
            return factory;
        }

        // XmlBeanFactory-specific tests...
    }

When this test class is executed by JUnit, the test methods defined in it and its two superclasses will all be executed, ensuring that the XmlBeanFactory class correctly implements the contract of the BeanFactory and ListableBeanFactory interfaces, as well as any special requirements that apply only to it.

Where Should Test Cases be Located?

Place tests in a separate source tree from the code to be tested. We don't want to generate Javadoc for test cases for users of the classes, and it should be easy to JAR up application classes without test cases. Both these tasks are harder if tests are in the same source tree as application code.

However, it is important to ensure that tests are compiled with each application build. If tests don't compile, they're out of synch with code and therefore useless. Using Ant, we can build code in a single operation regardless of where it is located.

I follow a common practice in using a parallel package structure for classes to be tested and test cases. This means that the tests for the com.mycompany.beans package will also be in the com.mycompany.beans package, albeit in a separate source tree. This allows access to protected and package-protected methods (which is occasionally useful), but, more importantly, makes it easy to find the test cases for any class.
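For example, with src and test as illustrative names for the two source trees, the layout might look like this:

    src/com/mycompany/beans/BeanFactory.java
    test/com/mycompany/beans/BeanFactoryTests.java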

Should Testing Strategy Affect How We Write Code?

Testing is such an important part of the development process that it is legitimate for the testing strategy we use to affect how we write application code – with certain reservations.

First, the reservations: I don't favor white-box testing and don't advocate increasing the visibility of methods and variables to facilitate testing. The "parallel" source tree structure we've discussed gives test cases access to protected and package-protected methods and variables, but this is not usually necessary. As we've seen, the existence of comprehensive tests promotes refactoring – being able to run existing tests provides reassurance that refactoring hasn't broken anything. White-box testing reduces the value of this important benefit. If test cases depend on implementation details of a class, refactoring the class has the potential to break both class and test case simultaneously – a dangerous state of affairs. If maintaining tests becomes too much of a chore, they won't be maintained, and our testing strategy will break down.

So what implications might a rigorous unit testing strategy have on coding style?

  • It encourages us to ensure that classes don't have too much responsibility, since excessive responsibility makes testing unduly complex. I always use fairly fine-grained objects, so this doesn't tend to affect my coding style. However, many developers do report that adopting test-first development changes their style in this respect.

  • It prompts us to ensure that class instance variables can only be modified through method calls (otherwise, external changes to instance variables can make tests meaningless; if the state of a class can be changed other than through the methods it declares, tests can't prove very much). Again, this reflects good design practice: public instance variables violate encapsulation.

  • It prompts stricter encapsulation with respect to inheritance. The use of read-write protected instance variables allows subclasses to corrupt the state of a superclass, as does allowing the overriding of concrete methods. In the next chapter, we'll discuss these issues from the perspective of OO design.

  • It occasionally prompts us to add methods purely intended to facilitate testing. For example, it may be legitimate to add a package-protected method exposing information about a class's state purely to facilitate testing. Consider a class that allows listeners to be registered through a public method, but has no method exposing the listeners registered (because other application code has no interest in this).

    Adding a package-protected method returning a Collection (or whatever type is most convenient) of registered listeners won't complicate the class's public interface or allow the class's state to be corrupted, but will be very useful to a test class in the same package. For example, a test class could easily register a number of listeners and then call the package-protected method to check that only these listeners are registered or it could publish an event using the class and check that all registered listeners were notified of it.
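A minimal sketch of this technique, using hypothetical names:

    import java.util.ArrayList;
    import java.util.Collection;
    import java.util.Collections;
    import java.util.EventListener;
    import java.util.List;

    public class EventPublisher {

        private final List listeners = new ArrayList();

        public void addListener(EventListener listener) {
            listeners.add(listener);
        }

        // Package-protected: invisible to application code in other packages,
        // but available to a test class in the same package
        Collection getRegisteredListeners() {
            return Collections.unmodifiableList(listeners);
        }
    }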

By far the biggest effect of having comprehensive unit tests on coding style is the flow-on effect: the refactoring guarantee. This requires that we think of the tests as a central part of the application.

We've already discussed how this allows us to perform optimization if necessary. There are also significant implications for achieving J2EE portability. Consider a session EJB for which we have defined the remote and home interfaces. Our testing strategy dictates that we should have comprehensive tests against the public (component) interface (the container conceals the bean implementation class). These tests amount to a guarantee of the EJB's functionality from the client perspective.

Now, suppose that our present requirement is for a system that uses an Oracle database. We can write a session bean that uses a helper class that runs Oracle-specific SQL. If, in the future, we need to migrate to another database, we can reimplement the bean's implementation class, leaving the component interface alone. The test cases will help ensure that the system behaves as before. This approach isn't "pure" J2EE, but it is effective in practice and it allows us to use the simplest and most efficient implementation at any point.

Of course, we should try to share code between bean implementation classes wherever possible (perhaps in an abstract superclass). If this is not possible – or if the effort involved in achieving it would outweigh the benefit – test cases provide a working specification of what the implementation classes should do, and will make it much easier to provide different implementations if necessary.

Integration and Acceptance Testing

Acceptance testing is testing from a customer perspective. Inevitably this will involve some hands-on testing, in which testers play the role of users, and execute test scenarios. However, we can also automate aspects of acceptance testing.

Integration testing is slightly lower level, and tests how application classes or components work together. The distinction between unit and integration tests blurs in practice; we can often use the same tool (such as JUnit) for both. Integration tests merely involve higher-level classes that use many other classes (for which there are unit tests) to do their work.

Testing Business Objects

If we follow the design recommendations of Chapter 1, application business logic will be exposed via a layer of business interfaces. Tests written against these interfaces will be the core of application integration testing. Testing application interface layers, such as a web interface, will be simpler because we only need to test whether the interface layer correctly invokes the business interfaces – we know that the implementations of the business interfaces work correctly if invoked correctly.

Typically, we can take the application's use cases and write a number of test cases for each. Often one method on a business interface will correspond to a single use case.

Depending on the architectural choices we discussed in Chapter 1, business objects may be implemented as ordinary Java classes running in the web container (without using EJB, but with access to most of J2EE's container services), or as EJBs. Let's look at the issues in testing each in turn.

Testing Business Objects Implemented Without Using EJB

Testing ordinary Java classes is relatively easy. We can simply use JUnit. The only significant problem is likely to involve configuration required by the class and access to external resources such as databases and container services such as JNDI.

Container services can often be simulated by test objects; for example, we can use a generic test JNDI implementation that enables business objects to perform JNDI lookups when instantiated outside an application server (this is discussed further below).

With business objects, we will always write tests to our business interfaces.

Some business objects depend on other application objects – although such dependencies should be on interfaces, not classes. We have three main choices to address this:

  • Replace the required objects with test implementations of the relevant interfaces, which return test data. This works well so long as the interfaces aren't complex to implement. It's also essentially a unit testing, rather than integration testing, technique.

  • Implement tests that can run within the application server, with the application configured as in production. We'll discuss this approach below, as it's often the only option for testing EJBs.

  • Try to design application infrastructure so that application configuration doesn't depend on the J2EE server. The JavaBeans-based application infrastructure discussed in Chapter 11 facilitates this, enabling the same application-specific configuration files to be read by a test harness as at run time. It ensures that – except where EJBs are concerned – many business objects can be tested outside the EJB container, configured as in production. Business interface implementations that depend on container services can be replaced by test implementations, to allow integration testing without deployment on an application server.

Testing EJBs

Testing EJBs is much harder than testing ordinary Java classes, because EJBs depend on EJB container services.

We will generally focus on testing session beans, rather than entity beans, even if we do choose to use entity beans. Entity beans don't usually contain business logic; their effect on persistent data should be checked by session bean tests.

We can't simply instantiate EJBs and test them like ordinary Java classes. EJBs are managed objects; the EJB container manages their lifecycle at run time and they depend on container services such as connection pools. Furthermore, the container controls access to their functionality, and the behavior added by container interception (such as transaction management and security restrictions) is part of the application itself and needs to be tested.

There are several ways around this problem:

  • Write a test that is a remote client of the EJB container. This is usually the best approach for testing EJBs with remote interfaces.

  • Write and deploy a test that executes within the application server. This is a good strategy for testing EJBs with local interfaces. It will require additional infrastructure to supplement JUnit.

  • Test with stub objects replacing container objects. This will generally only work when EJBs have simple requirements of the EJB container.

The most obvious approach is the remote client approach. This is simple and intuitive. We can write ordinary JUnit test cases, which connect to the EJB server. The test cases run in a separate JVM from the EJB container. We can invoke them as we invoke any JUnit tests, simply needing to take care that we provide the appropriate JNDI properties to allow connection to the EJB container and supply the necessary EJB client binaries. On the negative side, testing through a remote client doesn't enable us to test EJBs via their local interfaces. We are unable to test the effect of local call semantics. Even when we wish to test EJBs with remote interfaces, this may be a problem, as we may wish to allow container optimizations when running EJBs and web applications in the same server instance.
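As a sketch, a remote-client test for the sample application's BoxOffice EJB (used again in the Cactus example later in this chapter) might look like the following. The JNDI environment properties shown are JBoss-specific, and the global JNDI name "BoxOffice" is an assumption; both must be adjusted for the target server:

    import java.util.Properties;

    import javax.naming.Context;
    import javax.naming.InitialContext;
    import javax.rmi.PortableRemoteObject;

    import junit.framework.TestCase;

    public class BoxOfficeRemoteTests extends TestCase {

        private BoxOffice boxOffice;

        public BoxOfficeRemoteTests(String name) {
            super(name);
        }

        protected void setUp() throws Exception {
            // JBoss-style JNDI environment: adjust for your server
            Properties env = new Properties();
            env.put(Context.INITIAL_CONTEXT_FACTORY,
                    "org.jnp.interfaces.NamingContextFactory");
            env.put(Context.PROVIDER_URL, "jnp://localhost:1099");
            Context ctx = new InitialContext(env);

            Object home = ctx.lookup("BoxOffice");
            BoxOfficeHome boxOfficeHome = (BoxOfficeHome)
                    PortableRemoteObject.narrow(home, BoxOfficeHome.class);
            this.boxOffice = boxOfficeHome.create();
        }

        public void testSeatCountPositive() throws Exception {
            assertTrue("Seat count should be positive",
                boxOffice.getSeatCount(1) > 0);
        }
    }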

We can get around these problems by writing tests that execute within the application server. Typically, we package tests as web applications, giving them access to EJBs running in the same JVM (this will probably allow local calling, but this isn't presently guaranteed by the J2EE specifications). However, this approach is harder to implement, requires additional infrastructure for JUnit, and complicates application deployment.

Finally, we can simulate our own EJB container to supply the services the EJBs expect at run time. However, this is usually impracticable, because of the complexity of the EJB infrastructure. EJBs don't merely have access to container-provided interfaces such as javax.ejb.SessionContext; they have access to container services other than directly through the API (for example, the ability to access their naming context). Security and transaction management services are also difficult to replicate.

The download for this book includes some useful generic classes in the /framework/test directory that can be used with this approach: for example, dummy EJB context objects, and a test JNDI implementation to allow the binding of required objects in a simulated naming context to allow EJBs to perform JNDI lookups as if they are running within a server. However, this approach only works when EJBs have simple requirements of the container. When using this approach we must also ensure that we invoke EJB lifecycle methods such as setSessionContext() when creating a test fixture.

The following table summarizes the advantages and disadvantages of the three approaches:

Approach: Testing with a remote client

Advantages: Easy to write and run tests. Can use standard JUnit infrastructure. Will ensure that our EJBs support genuine remote semantics. The remote interfaces exposed by the EJB tier in a distributed application usually expose an application's business logic, so this is a natural place to test.

Disadvantages: We can't test local interfaces. The application may use call by reference in production, even with remote interfaces.

Approach: Testing within the application server (either in the EJB container or web container)

Advantages: In the case of web applications, this will probably mean that tests have exactly the same access to the EJB tier as the application code that uses the EJB tier.

Disadvantages: Requires an additional test framework. More complex implementation, deployment, and invocation of tests.

Approach: Testing with stub objects replacing container objects

Advantages: We can run tests without an EJB container. We may be able to reuse standard infrastructure components in multiple applications.

Disadvantages: We may end up writing a lot of classes simulating container behavior. We haven't tested the application in the application server we will deploy it on.

If we test EJBs using remote interfaces we need no special tools beyond JUnit itself. If we test inside the EJB container, we need a tool that enables test cases to be packaged into a J2EE application.

Cactus

The most sophisticated free tool I'm aware of for testing within the application server is Cactus (available at http://jakarta.apache.org/cactus/index.html). It is an open source framework based on JUnit that allows EJBs, servlets, JSP pages, and servlet filters to be tested within the target application server.

Cactus allows test invocation and result reporting on the client side as with ordinary JUnit tests: Cactus takes care of connecting to the server, where the test cases actually run. The test runner in the client JVM connects to a Cactus "redirector" in the server JVM. Although each test class is instantiated in both server and client, the tests are executed within a web application running within the server. Typically this will be the same web application that contains the application's web interface.

Cactus is a sophisticated framework and is relatively complex to set up. However, it's a good approach for testing EJBs, in which case the complexity is unavoidable.

Setting up Cactus involves the following steps:

  1. Ensure that the Cactus classpath is set correctly. This area is the most common cause of errors when using Cactus, so please read the documentation on "Setting up Cactus classpaths" included in the Cactus distribution carefully. Most of the Cactus binaries must be included in a WAR distribution, under the /WEB-INF/lib directory.

    Note 

If more than one application is likely to use Cactus, I recommend including the Cactus binaries at server-wide level, so they will be available to all applications. In JBoss, this simply means copying the JAR files to the /lib directory of the JBoss server to which you deploy applications. When using this approach, there's no need to include Cactus JARs in the /WEB-INF/lib directory of each WAR. When using Cactus to test EJBs, ensure that none of the Cactus servlet test cases is included in the EJB JAR; including them will cause class loading problems that generate mysterious "class not found" errors.

  2. Edit the web application's web.xml deployment descriptor to define the Cactus "servlet redirector" servlet that will route requests from the remote tests to the server-side test instances. This definition should look like this:

     <servlet>
         <servlet-name>ServletRedirector</servlet-name>
         <servlet-class>
             org.apache.cactus.server.ServletTestRedirector
         </servlet-class>
     </servlet>

    We also need to provide a URL mapping to this servlet (note that for some web containers, such as Jetty, it's necessary to drop the trailing / included in the example in Cactus documentation, as I've done in this example):

     <servlet-mapping>
         <servlet-name>ServletRedirector</servlet-name>
         <url-pattern>/ServletRedirector</url-pattern>
     </servlet-mapping>
  3. Include the test classes in the WAR. Cactus test classes must be derived from a Cactus superclass that handles redirection – we'll discuss this below.

  4. Configure the Cactus client. We'll need to ensure that all Cactus binaries (those required on the server-side and additional client-side libraries) are available to the client. We must also supply a cactus.properties file, to tell Cactus the server URL and port and specify the context path of the web application. For testing the sample application on my local machine, the cactus.properties file is as follows. Note that the servletRedirectorName property should match the URL mapping we created in web.xml:

     cactus.contextURL = http://localhost:8080/ticket
     cactus.servletRedirectorName = ServletRedirector
     cactus.jspRedirectorName = JspRedirector

Once we've followed all these steps, we can invoke the test cases on the client side like normal JUnit test cases.

Let's look at what's involved in writing a Cactus test case. The principles are the same as for any JUnit test case, but we must extend Cactus's org.apache.cactus.ServletTestCase, not junit.framework.TestCase directly. The org.apache.cactus.ServletTestCase superclass provides the ability to invoke tests and perform reporting in the client, while tests actually run inside the server.

Let's look at a practical example. We begin by extending org.apache.cactus.ServletTestCase:

    public class CactusTest extends ServletTestCase { 

The remainder of our class uses normal JUnit concepts. We set up a test fixture following normal JUnit conventions, and implement test methods as usual:

    private BoxOffice boxOffice;

    public CactusTest(String arg0) {
        super(arg0);
    }

    public static Test suite() {
        return new TestSuite(CactusTest.class);
    }

We can access the server's JNDI context to look up EJBs when creating a test fixture, either in the setUp() method, as shown below, or in individual test methods:

    public void setUp() throws Exception {
        Context ctx = new InitialContext();
        BoxOfficeHome home =
            (BoxOfficeHome) ctx.lookup("java:comp/env/ejb/BoxOffice");
        this.boxOffice = home.create();
    }

    public void testCounts() throws Exception {
        int all = boxOffice.getSeatCount(1);
        int free = boxOffice.getFreeSeatCount(1);
        assertTrue("all > 0", all > 0);
        assertTrue("all >= free", all >= free);
    }
}

The org.apache.cactus.ServletTestCase class makes a number of "implicit objects" available to subclass test methods as instance variables, including the ServletConfig object, from which we can obtain the web application's global ServletContext. This is very useful when we need to access web application attributes.
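For example, a test method might use the implicit config object as follows; the attribute name is purely illustrative:

    public void testServletContextAttribute() {
        // config is an implicit object supplied by ServletTestCase
        Object value = config.getServletContext().getAttribute("exampleAttribute");
        assertNotNull("exampleAttribute should be set at application startup", value);
    }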

Cactus also allows test inputs to be supplied on the client side, through additional methods associated with each test case (this is most relevant when testing servlets, rather than EJBs). Please refer to the detailed Cactus documentation for information about such advanced features.

This is a powerful mechanism for ensuring that we enjoy the benefits offered by JUnit, such as the ability to execute multiple test suites automatically, while actually running tests in the server. However, it complicates application deployment. Typically we'll need distinct build targets: one to build the application with test cases, Cactus configuration, and Cactus binaries for testing, and another to build the application without test support for production.
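
Using Ant, the two builds might look something like the following sketch, in which the target names, file names, directory layout, and test deployment descriptor (web-cactus.xml, containing the redirector definitions) are all hypothetical:

    <!-- WAR including test classes, Cactus JARs, and Cactus web.xml entries -->
    <target name="war-test">
        <war destfile="dist/ticket-test.war" webxml="conf/web-cactus.xml">
            <classes dir="build/classes"/>
            <classes dir="build/testclasses"/>
            <lib dir="lib/cactus"/>
            <fileset dir="war" excludes="WEB-INF/**"/>
        </war>
    </target>

    <!-- Production WAR with no test support -->
    <target name="war">
        <war destfile="dist/ticket.war" webxml="conf/web.xml">
            <classes dir="build/classes"/>
            <fileset dir="war" excludes="WEB-INF/**"/>
        </war>
    </target>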

Note 

JUnitEE (http://junitee.sourceforge.net/) is a simpler framework than Cactus, but is also based on running tests within the J2EE server. Like Cactus, JUnitEE packages tests in WARs. However, instead of using a redirection mechanism with test cases held on both client and server, JUnitEE provides a servlet through which ordinary JUnit test cases can be selected and run within the web container, with output generated on the server.

It's very easy to implement tests using JUnitEE, because test cases are simply JUnit test cases. All the JUnitEE infrastructure does is to provide a J2EE-aware means of running the test cases. Test cases will simply be implemented with the knowledge that they will run within the J2EE server running the application. The most important implication is that they have access to JNDI, which they can use to look up application EJBs to test.

The downside of this simpler approach is that tests can only be invoked from a browser, meaning that it's impossible to automate the test process.

Cactus didn't support this simpler, more intuitive, approach in the past, but Cactus 1.4 provides a "Servlet test runner" that enables Cactus to use the same approach as JUnitEE. I recommend using Cactus, rather than JUnitEE, even if using this approach, as it's very important to be able to automate tests as part of the build process.

To use the Cactus 1.4 servlet test runner, we follow these steps:

  1. Ensure that all Cactus binaries – not just the server-side binaries – are distributed in the application WAR (or placed on the server classpath, so as to be available to all applications).

  2. Edit web.xml to create a servlet definition and URL mapping for the Cactus ServletTestRunner servlet. The servlet definition is shown below:

    <servlet>
        <servlet-name>ServletTestRunner</servlet-name>
        <servlet-class>
            org.apache.cactus.server.runner.ServletTestRunner
        </servlet-class>
    </servlet>

    The URL mapping should look like this:

    <servlet-mapping>
        <servlet-name>ServletTestRunner</servlet-name>
        <url-pattern>/ServletTestRunner</url-pattern>
    </servlet-mapping>
  3. Ensure that the test cases are included in the WAR. Note that we aren't forced to extend org.apache.cactus.ServletTestCase when we use this approach; we can use ordinary JUnit test cases if we prefer (although these won't support Cactus redirection if we want to automate tests).

With this approach, we don't need to worry about client-side configuration, as we can run tests through a browser. All we need to do is to request a URL such as: http://localhost:8080/mywebapp/ServletTestRunner?suite=com.mycompany.MyTest&xsl=junit-noframes.xsl

The servlet test runner returns the results as an XML document by default; the xsl parameter in the above example specifies a stylesheet that can be used to transform the XML results to HTML and render them in a browser (the stylesheet is included with the Cactus distribution, but must be included in each application WAR using the servlet test runner).

Test results are displayed in the browser as formatted HTML; the Cactus documentation includes an example of the output.

This has the virtues and disadvantages of the JUnitEE approach to testing within the server. It's relatively simple to configure, but isn't scriptable, and so can't easily be integrated into the application build process.

Important 

When we use EJBs with remote interfaces, we can write ordinary JUnit test cases that test them from a remote JVM.

When we use EJBs with local interfaces, we will usually need to test them within the target application server.

The disadvantages of testing within the application server are that it complicates application deployment and takes longer to configure and execute than testing ordinary Java classes.

Testing Database Interaction

Business objects, whether they're EJBs or ordinary Java objects, will certainly interact (although not necessarily directly) with a database, and will depend on J2EE data sources. Hence we'll have to consider the effect of our tests on data in the database and the data they require. There are several strategies here.

The most radical strategy is to do away with the database at test time and replace actual JDBC classes with mock objects (see http://www.mockobjects.com/papers/jdbc_testfirst.html for more information on this approach). This sidesteps any issues relating to the modification of persistent data. However, it won't help us test complex queries or updates (which we really want the target database to run as part of our application code), and is difficult to integrate with EJB containers.

Thus normally we will need to access a test database. This means that we'll typically need to write SQL scripts that execute before each test run to put the database into the desired state. These SQL scripts are integral parts of the tests. Using Ant, it's possible to automate the execution of database scripts before we run tests: Ant allows us to execute SQL statements held in a build file or in a separate script.
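
For example, Ant's built-in <sql> task can run a script against the test database before the test target executes. The driver, connection details, and script location below are placeholders for your own environment:

    <target name="init-test-data">
        <sql driver="oracle.jdbc.driver.OracleDriver"
             url="jdbc:oracle:thin:@localhost:1521:test"
             userid="test"
             password="test"
             classpath="lib/jdbc-driver.jar"
             src="db/test-data.sql"/>
    </target>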

When testing JDBC helper classes, it may be possible to write a test case that rolls back any changes, meaning that it's not necessary to clean up the database afterwards. However, when we test code running in an EJB container this is impossible, as the EJB container, not the test, creates and commits or rolls back the transaction.
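
The rollback approach can be captured in a simple base class for JDBC test cases. The following sketch assumes a hypothetical TestDataSource helper that supplies connections to the test database:

    import java.sql.Connection;

    import junit.framework.TestCase;

    public abstract class JdbcRollbackTestCase extends TestCase {

        protected Connection connection;

        protected void setUp() throws Exception {
            // TestDataSource is a hypothetical helper for obtaining
            // a connection to the test database
            this.connection = TestDataSource.getConnection();
            this.connection.setAutoCommit(false);
        }

        protected void tearDown() throws Exception {
            // Undo any changes the test made, leaving the database clean
            this.connection.rollback();
            this.connection.close();
        }
    }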

Changes to persistent data are central to the functionality of much EJB code. Thus test cases must have the ability to connect to the database and examine data before and after the execution of EJB code. We must also check that rollback occurs when demanded by business logic or if an error is encountered.

To illustrate this, consider testing the following method on an EJB's remote interface:

    InvoiceVO placeOrder(int customerId, InvoiceItem[] items)
        throws NoSuchCustomerException, RemoteException, SpendingLimitViolation;

We need multiple test cases here: one for valid orders, one for orders by non-existent customers (to check that the correct exception is thrown), and one for an order of an illegally large amount (to check that SpendingLimitViolation is thrown). Our test cases should include code to generate orders for random customers and random products.

This level of testing requires that test cases be able to access the underlying data. To achieve this, we use a helper object with a connection to the same database as the EJB server to run SQL functions and queries that verify the EJB's behavior. We can also use a helper object to load data from the database, providing a set of customer numbers and item numbers from which we can generate random orders. We'll discuss suitable JDBC helper classes in Chapter 9.
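
The helper used in the test below exposes a runSQLFunction() method, which runs a query returning a single numeric value. A minimal sketch of such a helper, assuming it holds a javax.sql.DataSource (the real implementation is discussed in Chapter 9), might look like this:

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    import javax.sql.DataSource;

    public class DatabaseHelper {

        private DataSource dataSource;

        public DatabaseHelper(DataSource dataSource) {
            this.dataSource = dataSource;
        }

        // Runs a query returning a single numeric value,
        // such as SELECT COUNT(*) FROM ITEM
        public int runSQLFunction(String sql) throws SQLException {
            Connection con = dataSource.getConnection();
            try {
                Statement stmt = con.createStatement();
                ResultSet rs = stmt.executeQuery(sql);
                rs.next();
                return rs.getInt(1);
            } finally {
                con.close();
            }
        }
    }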

Consider the following test method that checks that an excessively large order results in a SpendingLimitViolation exception being thrown. It's also the responsibility of the EJB to ensure that the transaction is rolled back in this event, and that there are no lasting changes to the database. We should check this as well. This test method requires the existence of two Products (invoice items) in the database, and a Customer with primary key of 1. A test script should ensure that this data is present before the test case runs:

    public void testPlaceUnauthorizedOrder() throws Exception {
        int invoicesPre = helper.runSQLFunction("SELECT COUNT(ID) FROM INVOICE");
        int itemsPre = helper.runSQLFunction("SELECT COUNT(*) FROM ITEM");

        InvoiceItem[] items = new InvoiceItem[2];
        // Constructor takes item id and quantity.
        // We specify a ridiculously large quantity to ensure failure
        items[0] = new InvoiceItemImpl(1, 10000);
        items[1] = new InvoiceItemImpl(2, 13000);

        try {
            InvoiceVO inv = sales.placeOrder(1, items);
            int id = inv.getId();
            fail("Shouldn't have created new invoice for excessive amount");
        } catch (SpendingLimitViolation ex) {
            System.out.println("CORRECT: spending limit violation " + ex);
        }

        int invoicesPost = helper.runSQLFunction("SELECT COUNT(ID) FROM INVOICE");
        int itemsPost = helper.runSQLFunction("SELECT COUNT(*) FROM ITEM");
        assertTrue("Must have same number of invoices after rollback",
            invoicesPost == invoicesPre);
        assertTrue("Must have same number of items after rollback",
            itemsPost == itemsPre);
    }

Thus we need to make a modest investment in infrastructure to support test cases.

Testing Web Interfaces

It's harder to test web interfaces than ordinary Java classes, or even EJBs. Web applications don't provide neat, easily verifiable responses: the dynamic content we need to test exists as islands in a sea of fancy markup. The look and feel of web applications changes frequently; we need to be able to design tests that don't need to be rewritten every time this happens.

There are a host of web-specific issues, some difficult to reproduce in automated testing. For example:

  • Resubmission of a form (for example, what if the user resubmits a purchase form while the server is still processing the first submission?)

  • The implications of use of the back button

  • Security issues, such as resistance to denial of service attacks

  • Issues if the user opens multiple windows (for example, we may need to synchronize web-tier access to stateful session beans)

  • The implications of canceled requests

  • The implications of browser (and possibly ISP) caching

  • Whether both GET and POST requests should be supported

Like EJBs, web-tier components depend on container services, making unit testing difficult.

JSP pages are particularly hard to unit test. They don't exist as Java classes until they're deployed into a web container, they depend on the Servlet API and they don't offer an easily testable interface. This is one reason why JSP should never implement business logic, which must always be tested. JSP pages are normally tested as part of the application's complete web interface.

Some other view technologies, such as Velocity templates and XSLT stylesheets, are easier to unit test, as they don't depend on the Servlet API. However, in general there's little need to test views in isolation, so this isn't an important consideration.

We'll normally focus on two approaches to testing web interfaces: unit testing of web-tier Java classes; and acceptance testing of the overall web application. Let's discuss each in turn.

Unit Testing Web-Tier Components

We can test web-tier Java classes outside the servlet container using standard JUnit functionality by providing stub objects that emulate the server. The ServletUnit project (http://sourceforge.net/projects/servletunit/) provides objects that can be used to invoke servlets and other Servlet API-dependent classes outside a container, such as test ServletContext, HttpServletRequest, and HttpServletResponse implementations. This enables us to invoke any request handling method directly, and make assertions about the response (for example, that it contains appropriate attributes). This approach works well for simple web-tier classes. However, it's less useful if objects require more complex initialization (for example, loading data contained within a WAR's /WEB-INF directory).

While the ServletUnit package is an excellent idea, it's a simplistic implementation, which doesn't implement some of the Servlet API methods we will want to work with (such as the status code methods). The /framework/test/servletapi directory of the download accompanying this book contains more usable test objects, originally based on the ServletUnit implementations but providing more sophisticated functionality.

It's very simple to use this approach. The test objects not only implement the relevant Servlet API interface, but also provide methods enabling us to provide data to the classes being tested. The commonest requirement is to add request parameters. The following example creates a GET request for the URL "test.html", with a single "name" parameter:

    TestHttpRequest request = new TestHttpRequest(null, "GET", "test.html");
    request.addParameter("name", name);

Since it's good practice to implement web applications using an MVC framework, we won't normally need to test servlets directly (MVC frameworks usually provide a single generic controller servlet, which isn't part of our application). Typically we will use test Servlet API objects to test individual request controllers (we can assume that the controller framework has already been tested and that our request controllers will be invoked correctly at run time).

For example, the MVC web application framework used in our sample application (discussed in Chapter 12) requires request controllers to implement the following interface:

    ModelAndView handleRequest(HttpServletRequest request,
        HttpServletResponse response) throws ServletException, IOException;

Request controllers don't actually generate response content (this is the role of views), but select the name of a view that should render the response, and provide model data it should display. The ModelAndView object returned by the above method contains both view name and model data. This decoupling of controller and view is not only good design practice, but greatly simplifies unit testing. We can ignore markup generation and simply test that the controller selects the correct view and exposes the necessary model data.

Let's look at the following simple controller implementation, along with a JUnit test class we can use to test it. The controller returns one of three different view names based on the presence and validity of a name request parameter. If a parameter was supplied, it forms the model passed to the view:

    public class DemoController implements Controller {

        public static final String ENTER_NAME_VIEW = "enterNameView";
        public static final String INVALID_NAME_VIEW = "invalidNameView";
        public static final String VALID_NAME_VIEW = "validNameView";

        public ModelAndView handleRequest(HttpServletRequest request,
                HttpServletResponse response) throws ServletException {
            String name = request.getParameter("name");
            if (name == null || "".equals(name)) {
                return new ModelAndView(ENTER_NAME_VIEW);
            } else if (name.indexOf("-") != -1) {
                return new ModelAndView(INVALID_NAME_VIEW, "name", name);
            } else {
                return new ModelAndView(VALID_NAME_VIEW, "name", name);
            }
        }
    }

The following JUnit test case will check the three cases – name parameter not supplied; name parameter valid; and name parameter invalid:

    package com.interface21.web.servlet.mvc;

    import javax.servlet.http.HttpServletResponse;

    import com.interface21.web.servlet.ModelAndView;

    import junit.framework.TestCase;

    import servletapi.TestHttpRequest;
    import servletapi.TestHttpResponse;

    public class DemoControllerTestSuite extends TestCase {

        private Controller testController;

        public DemoControllerTestSuite(String arg0) {
            super(arg0);
        }

In the setUp() method we will initialize the controller. With real controllers we will usually need to configure the controller by setting bean properties (for example, references to application business objects), but this is not required in this simple example. Any required objects can often be replaced by test implementations that return dummy data. We're not testing the implementation of business logic, but whether it's invoked correctly by web-tier components:

    public void setUp() {
        testController = new DemoController();
    }
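
For a controller that depended on a business object, the fixture could wire in a stub implementation returning canned data. The interface and stub below are hypothetical, for illustration only:

    // Hypothetical business interface the controller depends on
    public interface BoxOfficeService {
        int getFreeSeatCount(int performanceId);
    }

    // Stub returning fixed data: we're testing the controller's behavior,
    // not the business logic behind the interface
    public class StubBoxOfficeService implements BoxOfficeService {
        public int getFreeSeatCount(int performanceId) {
            return 42;
        }
    }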

Each test method will create a test request and response object and check that the controller selects the appropriate view name and returns any model data required:

    public void testNoName() throws Exception {
        TestHttpRequest request =
            new TestHttpRequest(null, "GET", "test.html");
        HttpServletResponse response = new TestHttpResponse();
        ModelAndView mv =
            this.testController.handleRequest(request, response);
        assertTrue("View is correct",
            mv.getViewName().equals(DemoController.ENTER_NAME_VIEW));
        assertTrue("no name parameter", request.getParameter("name") == null);
    }

    public void testValidName() throws Exception {
        String name = "Tony";
        TestHttpRequest request = new TestHttpRequest(null, "GET", "test.html");
        request.addParameter("name", name);
        HttpServletResponse response = new TestHttpResponse();
        ModelAndView mv = this.testController.handleRequest(request, response);
        assertTrue("View is correct",
            mv.getViewName().equals(DemoController.VALID_NAME_VIEW));
        assertTrue("name parameter matches",
            request.getParameter("name").equals(name));
    }

    public void testInvalidName() throws Exception {
        String name = "Tony-";
        TestHttpRequest request = new TestHttpRequest(null, "GET", "test.html");
        request.addParameter("name", name);
        HttpServletResponse response = new TestHttpResponse();
        ModelAndView mv = this.testController.handleRequest(request, response);
        assertTrue("View is correct: expected '" +
            DemoController.INVALID_NAME_VIEW + "' not '" + mv.getViewName() + "'",
            mv.getViewName().equals(DemoController.INVALID_NAME_VIEW));
        assertTrue("name parameter matches",
            request.getParameter("name").equals(name));
    }
}

There are test packages for some common MVC web frameworks, such as Struts, that support writing test cases for applications using them. No special support is required for testing request controllers written for the framework discussed in Chapter 12, as they don't depend on the controller servlet at run time. Custom support is required for frameworks such as Struts, in which application request handlers depend on the framework's controller servlet.

If we follow the design principles outlined in Chapter 1, we won't need to write many such test classes. The web interface will be such a thin layer that we can rely on web-tier acceptance testing to identify any problems in how it invokes business logic.

We can also unit test web-tier components inside the web container using a tool such as Cactus. In this case we don't need to supply test Servlet API objects. The disadvantage is that test deployment and authoring are more complex. It's so much easier and quicker to perform tests outside the container that I recommend doing so wherever possible.

Acceptance Testing Web Interfaces

Both these approaches amount to unit testing, which is of limited importance in the web tier. As a web interface reflects the user's view of the system, we need to be able to implement acceptance testing. The HttpUnit project (http://httpunit.sourceforge.net/) allows us to write test cases that run outside the server. (The HttpUnit binaries are also included with Cactus.)

HttpUnit is a set of classes for use in JUnit test cases, enabling us to automate HTTP requests and make assertions about the response. HttpUnit doesn't only work with servlets; it doesn't care what the server-side implementation is, so it's equally applicable to JSP pages and XML-generated content. HttpUnit allows easy access to generated documents: for example, its WebResponse object exposes an array of forms, links, and cookies found on the page. HttpUnit provides an elegant wrapper around screen scraping.

Note 

HttpUnit is also a handy library when we need to do HTML screen scraping for any other reason.

The HttpUnit approach is essentially black-box acceptance testing. It has the advantage that we test our application exactly as it will be deployed in production, running on the production server. It's also easy and intuitive to write test cases. The drawback is that screen scraping can be vulnerable to changes in an application's look and feel that don't reflect changes in functionality (a frequent occurrence).

As an example of an HttpUnit test class, consider the following listing. Note that it comes from an ordinary JUnit test case; HttpUnit is a library, not a framework. The assertions check that the page contains no forms and that it contains a nonzero number of links. If the page were data-driven, we could also connect to the database in the test case and verify that the links reflect the data:

    public void testGenresPage() throws Exception {
        WebConversation conversation = new WebConversation();
        WebRequest request =
            new GetMethodWebRequest("http://localhost/ticket/genres.html");
        WebResponse response = conversation.getResponse(request);

        WebForm[] forms = response.getForms();
        assertEquals(0, forms.length);

        WebLink[] links = response.getLinks();
        int genreCount = 0;
        for (int i = 0; i < links.length; i++) {
            if (links[i].getURLString().indexOf("genre.html") > 0) {
                genreCount++;
            }
        }
        assertTrue("There are multiple genres", genreCount > 0);
    }
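
HttpUnit can also submit forms. The following sketch requests a hypothetical search page, fills in a form field, and submits it; the URL, parameter name, and expected content are illustrative:

    public void testSearchForm() throws Exception {
        WebConversation conversation = new WebConversation();
        WebResponse response =
            conversation.getResponse("http://localhost/ticket/search.html");

        // Assume the page's first form is the search form
        WebForm form = response.getForms()[0];
        WebRequest request = form.getRequest();
        request.setParameter("keyword", "opera");

        WebResponse results = conversation.getResponse(request);
        assertTrue("Results page should mention the search term",
            results.getText().indexOf("opera") != -1);
    }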

Important 

In my experience, acceptance testing is more important than unit testing where web interfaces are concerned, as a web interface should be a thin layer on top of the application's business interfaces. The implementation of the application's use cases should be tested against the implementations of the business interfaces.

The HttpUnit class library provides an excellent, intuitive way of implementing web-tier acceptance tests using JUnit.

Design Implications

In summary, we can test EJBs and web-tier components, but it's a lot harder than testing ordinary Java classes. We need to master the testing techniques for J2EE components we've just discussed, but we also need to learn the lesson that our applications will be much easier to test if we design them so that we can test them as far as possible without the need to test J2EE components. The application infrastructure we will discuss in Chapter 11 is designed to facilitate this.

In the EJB tier, we may be able to achieve this by making EJBs a façade for business logic implemented in ordinary Java classes. There will still be problems such as container-managed transaction demarcation, but it's often possible to test key functionality without an EJB container (see http://www.xp2001.org/xp2001/conference/papers/Chapter24-Peeters.pdf for an XP-oriented approach to testing EJBs this way, and discussion of the difficulties of testing EJBs in general).

This approach can't be applied to entity beans, as their functionality is inextricably linked to EJB container services. This is one of the reasons I don't much like entity beans.

In the web tier, it's relatively easy to minimize the need to test servlets. This can be done by ensuring that web-specific controllers (servlets and helper classes) access a well-defined business interface that isn't web-specific. This interface can be tested using ordinary JUnit test cases.

Important 

Where possible, design applications so that their central functionality can be tested against ordinary Java classes, not J2EE components such as EJBs, servlets, or JSP pages.


