Building the JavaEdge Application with Ant and Anthill

Building an application is more than just compiling the source code. Ideally, building an application would comprise all the steps required to produce readily consumable artifacts. We must recognize, however, that different audiences will require different consumable artifacts. For example, the development team may require Javadocs and last night's test results as artifacts, whereas the QA team may not care about the Javadocs as much as the revision log. The end user may not care about either of these and just want a URL in the case of an online application or an executable in the case of a traditional program. So building an application may need to produce a whole range of artifacts, each consumable by one or more of the target audiences of the build.

This chapter is going to introduce build practices that stand up to the rigors of real-world development. We will explore some of the problems that can arise as a result of poor build and configuration management practices and show you how to get around them with the aid of two tools. Throughout the chapter we will develop build practices that can accommodate a multi-person, potentially geographically distributed, development team working with a separate QA team.

The Goals of a Build Process

We know that the build system for the JavaEdge application needs to produce readily consumable artifacts. Let's go through and identify what each of those artifacts should be.

Build Artifacts

We know that we will need to compile the Java source code and produce some artifacts that are executable. For a Java web-based application like JavaEdge that means that we will need to produce a WAR file. Once we have a WAR file we can deploy it in a servlet container such as Tomcat.

Also, we would like to have some indication of the quality of each build. This is especially important since we are working in a team of developers. We want to make sure that the last change made by one of the developers did not break a change that we made earlier today. One method used to measure the quality of builds is unit testing. While it is beyond the scope of this chapter to go into detail about unit tests, let's just say that each unit test determines whether one particular piece of the application is working properly or not. If we have unit tests written for each piece of functionality in the application, and if we run those unit tests as part of every build, then we will know whether the last change made by one of the developers broke anything.

JavaEdge is a database-driven application, which presents several complications. For one, the only way to truly test whether the application works is to test it against a database. Another, perhaps bigger, problem is that we need to ensure that the latest version of the Java code is compatible with the latest version of the database. This may not sound like a big deal when we're working on a small application, but in a large development team where the DBAs are separate from the application developers, this really becomes an issue. It is not uncommon for a developer to add an attribute to a class meaning to have a corresponding column added to a table, only to forget and cause the application to fail. This problem is compounded by the fact that often the DBAs do not keep their database scripts under source control as the application developers do with their source code.

But that kind of practice is very dangerous and we are not going to follow it on the JavaEdge project. All the scripts required to create and initialize the database are part of the source code for the project, and we need to make sure that the Java source is compatible with the database scripts during every build. To do that, we will create a new database using the database scripts that are part of our project during every build and we will run a series of tests to ensure that the Java code still works with the latest database.

All these tests that we are going to run as part of every build are going to end up as consumable artifacts in the form of test results. Having test results for every build of the application will allow the developers to keep tabs on what problems need to be addressed.

On real-world applications that can grow well into hundreds of thousands of lines of code, having up-to-date Javadocs is very important. On large projects, developers typically work on the modules assigned to them and do not need to know the details about the rest of the application. As long as their modules obey the contracts expected by the other parts of the application (such contracts often being in the form of interfaces) everything is fine. But sometimes, developers need to browse the documentation for other parts of the system. In such cases, having up-to-date and easily accessible Javadocs is very important. For this reason our build system will produce Javadocs of the entire application on every build.

Another very useful artifact that we can easily produce during our builds is a to-do list. Very often developers will leave notes for themselves and other developers about what remains to be done on a module or class. Having these notes live in the source code is one thing, but generating project-wide reports of all these notes allows the entire team to communicate more effectively. We will generate such project-wide to-do reports as part of every build.

Perhaps one of the basic artifacts of any build is the binary distribution of the project. Such a distribution includes all the binary artifacts (the WAR file in our case) and supporting libraries plus documentation. Our build system will produce a tar.gz file containing the binary distribution of the project.

Source code metrics are very useful for evaluating and reviewing the design of an application. As an application is being developed it evolves, and so does its design. It is not uncommon for a beautifully elegant design present at the inception of coding to gradually lose shape day after day, to the point that it is almost unrecognizable. It is very difficult to see this happen when you are close to the project. Keeping tabs on source code metrics is definitely not going to prevent the design decay, but it may allow you to stop it earlier than you would have otherwise. For this reason we are going to compute and report source code metrics for our project with every build.

The last artifact that we will produce during our builds is an HTML version of all the project's Java source code. Having all the source code for our application available for online browsing is very helpful when faced with a question during a meeting about how a particular feature is implemented. Rather than having to go back to your workstation and fire up your IDE to view the source, you can jump onto any computer equipped with a browser and look up the answer.

That completes the list of artifacts that our build system is going to produce for the JavaEdge application. Let's take a look now at some of the other goals that our build system must attain in order to make it successful on a large real-world project.

Version Control

When working on production projects, one of the most important goals of any build system is the ability to generate reproducible builds. That means that the build that was produced this morning and sent to the QA team should be reproducible if we need to go to production with it in the afternoon. The ability to generate reproducible builds hinges on the use of a Version Control System (VCS) to manage the project's source code. Although a discussion of Version Control Systems falls under the area of configuration management and is outside the scope of this chapter, every real-world production-level project needs to use a VCS. The JavaEdge application was developed with the aid of a VCS and the practices presented in this chapter assume the use of a VCS as well.

Let's take a look at a scenario that could all too easily happen (and does happen too often in practice). A developer implements a feature in the project and commits what they think are all the changes. Then, the developer may build the software and hand it to the QA team for testing. There are already two problems with this scenario:

  • The first is that the developer might have missed a file that has been modified and hence a file in the source code control system might be different from the one on the developer's environment.
  • The second problem arises when the developers do not tag every build released to a QA team in the source code repository. Once the development and QA teams get into a code and test cycle, the QA team reports any issues to the development team, and the development team fixes them and produces a build for testing. In the future, a decision might be made that one of the builds that the QA team received is the one that is going to go into production. But the developers might have continued to implement fixes after they made that build, and now there is no way to recover the exact contents of that build.

Our build system needs to prevent the above scenario. The most foolproof way to do that is to ensure that any build that is going to be handed off to another party (the QA team in the above scenario) goes through a build management server. The build management server must then ensure that only sources that exist in the VCS are included in the build, that every build is assigned a unique and traceable build number, and that the sources used to create the build are recoverable (this is accomplished by applying a label to the VCS). Only by incorporating a build management server into our build system can we ensure reproducible builds and avoid the problems illustrated in the scenario above.

The final goals that our build system must achieve are that it must provide for automated nightly builds and for a project intranet that can hold the latest artifacts produced by the build. Achieving these goals is crucial when working in a development team. Having a nightly build helps keep all the team members aware of the current status and progress of the project. A central project intranet that houses all the latest artifacts including unit test reports and to-do lists is an often-used resource during development.

Our Final Build System Goals

Let's recap the goals for our build system. Our build system will do the following:

  • Compile and package the source code
  • Run unit tests to indicate the quality of every build
  • Create the database for our application and test the application against the database
  • Generate Javadocs
  • Generate to-do lists based on notes left in the source code
  • Create a binary distribution of our application
  • Generate source code metrics
  • Generate a browseable HTML version of our source code
  • Use a build management server to:

    • Ensure that only source code that exists in the VCS is included in the build
    • Assign a unique and traceable build number to every build
    • Apply a label to the VCS ensuring that we can reproduce any build
    • Conduct an automated nightly build
    • Create a project intranet to hold all the build artifacts

The Tools

We are going to use a tool called Ant to produce all the artifacts that are required by our build. Later we will see how an open source build management server called Anthill can help us attain the remaining goals for our build system.

Jakarta Ant

In a nutshell, Ant is a build tool much like make, but Ant is Java-based and truly cross-platform. One of Ant's big advantages is that it is far easier to use and extend than make or any of its derivatives. Ant was started because the author of Jakarta Tomcat needed a truly cross-platform way to build Tomcat, and make introduced too many OS dependencies (it inherently relies on OS-specific commands). Ant is bundled with many IDEs and has become the standard tool for building Java projects. You can download Ant from the Apache Ant web site.

Ant build files (also referred to as Ant scripts or build scripts) are written in XML. Ant reads these build files and then performs the steps described in them to create a build of the software. There is a well-defined structure for XML build files that are considered valid by Ant. This structure consists of the following parts and XML tags:

  • Project
  • Target
  • Task
  • Properties
  • Core Types

    • Pattern Sets
    • File Types
    • Directory Types
    • Path Types

Now, we will cover each of these parts in detail.

Projects

Each Ant build file has exactly one <project> element.

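A sketch of a project element declaration, using the attribute values described below:

```xml
<project name="JavaEdge" default="all" basedir="../">
    <!-- properties, targets, and tasks go here -->
</project>
```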

The project element has three possible attributes:

  • name

    Provides the name of the project and is an optional attribute.

  • basedir

    Specifies the directory that serves as the base directory for all relative paths used in the build script. This is also an optional attribute.

  • default

    Specifies the default target within the build script. Whenever Ant is told to run the build script and a target to be run is not specified, Ant will run the default target. By convention, a default target called "all" is included in scripts to provide an easy way to run every target in the script. This is a required attribute.

Targets

A target is a group of tasks. Targets can be dependent on other targets and can call other targets as well as other Ant build scripts. Each target accomplishes some specified functionality. For example, the compile target compiles all the source code, the jar target packages the source code into a JAR file, and so on. A project build script is made up of these individual targets, which can be chained together via dependencies and explicit calls to other targets. For example, a typical build script might define a compile target and a jars target.

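A sketch of these two targets (the task bodies are elided here and replaced with comments):

```xml
<target name="compile">
    <!-- tasks that compile the source code -->
</target>

<target name="jars" depends="compile">
    <!-- tasks that package the compiled classes into a JAR file -->
</target>
```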

Note that the jars target depends on the compile target. That means that when we run the jars target, Ant will run the compile target first. This allows us to create a hierarchy of ordered and related tasks.

Tasks

An Ant project consists of tasks that actually know how to compile our software, how to package it into a JAR file, how to generate Javadocs, or how to do many other useful things. Tasks are just Java classes that extend Ant's abstract Task class. This class provides a contract between the task and Ant so that Ant can configure the task and then execute it. The copy task, used in our build script, is a good example.

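A sketch of a copy task invocation (the directory names here are illustrative assumptions, not taken from the JavaEdge script):

```xml
<copy todir="temp/classes">
    <fileset dir="src/conf"/>
</copy>
```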

A task is represented via an XML element whose name typically corresponds to the task's class name. In the above example, the element name is copy, which corresponds to the copy task. The copy task is implemented by the org.apache.tools.ant.taskdefs.Copy class, which extends the org.apache.tools.ant.Task class.

The task element can have attributes and nested elements. The above copy task has one attribute (todir) and a nested <fileset> element. When Ant parses our XML-based build script, it uses the values of the attributes in a task element to configure an instance of the corresponding Java task class. When Ant reaches the <copy> element, it creates an instance of the Copy task class and then sets the value of the todir attribute in the Copy task instance to the value in our build script.

Nested elements are treated very similarly, except that their values typically correspond to another Java object, often a task or a built-in Ant type. In the above example, the nested <fileset> element corresponds to an instance of the org.apache.tools.ant.types.FileSet class. Ant creates an instance of the class corresponding to the nested element, configures its attributes and nested elements, and then passes a reference to it to the object corresponding to the containing task (Copy in this case).

Properties

Properties are basically name-value pairs. A project build script can have any number of properties. Properties can be defined in the build script using the <property> element, imported from a properties file, or set at the command line:

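For example, a property definition and a task that uses it might look like this:

```xml
<property name="db.user" value="root"/>
<echo message="${db.user}"/>
```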

The example above illustrates how a build property is defined in an Ant build script. The property in our example has the name db.user and the value root. To access the value of a property, you surround the property name with ${}. In the example above, the echo task would print the value (root) of the db.user property.

It may be helpful to think of properties within the build script as constants (or static final variables in Java) in the application code. The benefits of properties in build scripts are very similar to those of constants in code. Rather than hard-coding values all throughout your code, you should use constants to establish a value that is used frequently in your code. This makes it much easier to change the values of your constants, since you have to make the change in only one place. And, using constants in a strongly typed language helps catch typing errors. Unfortunately, Ant does not catch spelling errors in the names of properties; instead, it leaves the literal text of the unresolved property reference (for example, ${db.usr}) in place.

The ability to put property values into a property file and have Ant load them from that file is very useful. It allows the same Ant build script to be reused in many different environments, as long as the environment specifics are handled via properties. For example, suppose your build script needs to know the location of a project "A" on your machine. If you put the location of project "A" into a property file, you can run the same build script, without any modifications, on another computer that has project "A" installed in a different location. We need to change only the property file referenced by the build script.
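As a sketch (the file name build.properties and the property name are illustrative assumptions, not mandated by Ant):

```xml
<!-- Properties in Ant are immutable: the first definition of a name wins,
     so environment-specific files should be loaded before any defaults. -->
<property file="build.properties"/>
<property name="project.a.dir" value="/usr/local/projectA"/>
```

Here build.properties could contain a line such as project.a.dir=C:/work/projectA, which would override the default defined on the second line.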

Core Types

Ant has a built-in understanding of how to work with files and directories in a cross-platform environment. There is a very powerful, pattern-based mechanism for working with groups of files or directories within Ant.

Pattern Sets

Many tasks can operate on a collection of files. For example, the copy task can copy multiple files at once rather than one file at a time. Likewise, the javac task can compile all the source code (.java files) rather than compiling one file at a time. Ant allows us to define a group of files that match certain criteria with a <patternset> element. This element allows the definition of inclusion and exclusion rules. Files that match the inclusion rules, but not the exclusion rules, are included in the group of files described by the patternset. The inclusion and exclusion rules are specified as nested <include> and <exclude> elements. Consider the following example:

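A sketch of a patternset (the id and the particular exclusion rule are illustrative assumptions):

```xml
<patternset id="non.test.sources">
    <include name="**/*.java"/>
    <exclude name="**/*Test.java"/>
</patternset>
```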

A patternset can have multiple <include> and <exclude> elements. Patterns within <include> and <exclude> elements are specified using a wildcard syntax, where * matches zero or more characters while ? matches exactly one character. The pattern *.java will match files in the base directory such as Story.java or Member.java. A pattern such as */*.java would match any Java file in a direct subdirectory, for example, story/Story.java or member/Member.java. Ant introduces an additional variation here: the ** pattern matches zero or more subdirectories. A pattern such as com/wrox/**/*.java would match any Java file in the com/wrox directory or in any of its subdirectories, however deep, for example, com/wrox/JavaEdge.java or com/wrox/javaedge/story/Story.java.

File Types

Ant has a special element called <fileset> to describe a group of files. It uses pattern sets to describe the files included in the group. A <fileset> element can have a nested <patternset> element. A fileset can also act as a patternset itself and, therefore, can nest <include> and <exclude> elements directly. The <fileset> element has one required attribute called dir, which specifies the base directory for the inclusion and exclusion rules of the pattern set. The following example will give you an idea:

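The nested-patternset form might look like this (the directory name is an illustrative assumption):

```xml
<fileset dir="src/java">
    <patternset>
        <include name="**/*.java"/>
    </patternset>
</fileset>
```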

This element can also be written as follows:

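That is, with the include rule nested directly inside the fileset:

```xml
<fileset dir="src/java">
    <include name="**/*.java"/>
</fileset>
```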

Many tasks accept <fileset> elements as nested elements to describe the files on which the task acts; examples include the javac, jar, and copy tasks. Also, some types extend fileset and add functionality, for example, the tarfileset used by the tar task, which we'll be discussing in the section called Packaging for Distribution.

Directory Types

The <dirset> element is similar to <fileset>. However, instead of describing a group of files, the <dirset> element describes a group of directories. Like <fileset>, a <dirset> can include a nested pattern set, or it can act as a pattern set itself and contain nested <include> and <exclude> elements. Since <dirset> is implemented with the help of a pattern set, it supports the same pattern matching rules as <patternset> and <fileset>. Let's take a look at the following example:

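A sketch of such a dirset (the base directory is an illustrative assumption):

```xml
<dirset dir="classes">
    <include name="com/wrox/**/dao"/>
</dirset>
```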

In the above example, all subdirectories of com/wrox that are named dao are included in the dirset.

Path Types

The last built-in types that we are going to cover are paths and path-like structures. While building Java software, we must always remain aware of the classpath used to compile our software. Also, because Ant includes its own classloader, we can sometimes specify the classpath from which certain tasks get loaded. Ant provides us with a convenient way to construct a classpath that can be referenced throughout our build scripts. The <path> element is used to construct a classpath within Ant. A classpath is simply a list of locations that can hold Java classes and resources. These locations can identify directories or JAR files. The <path> element supports a number of nested elements, including <pathelement>, <fileset>, and <dirset>. Take a look at the following example:

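A sketch matching the description below (the path id is an illustrative assumption):

```xml
<path id="master.classpath">
    <pathelement path="${classpath}"/>
    <fileset dir="lib">
        <include name="**/*.jar"/>
    </fileset>
    <pathelement location="classes"/>
</path>
```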

In this example, all the entries specified by the value of the ${classpath} property, all JAR files in the lib directory and all its subdirectories, and the contents of the classes directory are included in the classpath.

Anthill

Anthill is a build management server. It can also be used for continuous integration. The Anthill home page can be found on the urbancode web site. Typically, Anthill runs on a server that has access to your Version Control System (VCS). The machine on which Anthill runs should not be a development machine, since Anthill will verify that everything required for building your project is in the VCS. Anthill can be configured to build your project(s) on a schedule, such as every hour or every night at midnight. When the schedule fires, Anthill begins the build as illustrated in the following diagram and described in the steps that follow:

  1. Anthill gets an up-to-date copy of the project from the VCS at the start of every build.
  2. Anthill obtains a log of revisions since the last build. If this list of revisions is not empty, indicating that there has been development activity on the project since the last build, Anthill moves on to the next step. On the other hand, if the list is empty, there is no reason to continue with the build and Anthill cleans up the temporary files that it may have created.
  3. Once Anthill makes the decision that a build is required, it increments the version number assigned to the project.
  4. Depending on the project settings, Anthill may apply a label, with the current version number, to the project in the VCS. This is to ensure that this particular build can be recovered at any time in the future.
  5. Anthill calls Ant to run the project build script. The project build script has been checked out along with the rest of the project from the VCS. As discussed in the section on Setting up a New Project, Anthill knows where that build script is, based on the information that we supply when we set up the project in Anthill.
  6. When Anthill calls Ant to build the project in Step 5, it passes two extra parameters to Ant: the version and the deployment directory. The build script can use these parameters as part of the build process. Any files that the build script copies to the location pointed to by the deployment directory parameter are later visible via the project intranet site created by Anthill.
  7. Once Ant has built the project, whether successfully or not, Anthill sends out e-mails with the build results to all interested parties. The e-mails contain the status of the build (success or failure), links to the project intranet site and the build log, and the revision log listing every revision included in the build.

[Figure: the Anthill build process]

Anthill ensures that only the code that is checked into the source code control system is included in our build, and that we can always recover a build. One could argue that you can do everything that Anthill does right from within Ant. Ant does contain tasks to interact with various VCSs, to check out the latest sources from the VCS, and to apply a label, and you can schedule an Ant script to run at regular intervals using an operating system-specific scheduler (such as cron on Linux or Scheduled Tasks on Windows). But doing everything that Anthill does using only Ant would not be a good idea.

One problem that you'd encounter very quickly is the bootstrapping problem: how can you run a build script to check out the latest source from the VCS when the build script itself is in the VCS? Even once you got past that problem, having a build script that runs on a schedule is not quite the same as having a web-based application that allows you to quickly see the status of any one of your projects, to browse the project artifacts, to initiate a build of any project outside of a schedule, and many other things.

Defining the Build Process for JavaEdge

In this section, we will develop the Ant build script that is used to produce all our build artifacts. Then we will demonstrate how the JavaEdge project is configured in Anthill so that all its builds are managed. Before we get to the Ant build script however, let's take a look at the directory structure that we used to organize the JavaEdge project.

The Project Directory Structure

Before we start building the application, we need to understand the structure of the project directories. Projects vary widely in their directory structure, but having a consistent and yet flexible directory structure is key to a good build process.

The directory structure that we have used in the JavaEdge project has evolved over the past several years and has definitely been influenced by Ant. It is as shown in the following diagram:

[Figure: the JavaEdge project directory structure]

Each directory is described below, along with whether its contents belong under source control.

bin

Holds any scripts, such as batch files required to start and run the project and/or any test harnesses. Not every project requires a bin directory; only projects that result in executable applications (as opposed to library-type projects) will require it.

Under source control: yes (if present).

build

Is used for building the project. Any Ant build scripts and batch files should be placed here. If Ant creates subdirectories under this directory during a build, it should return the directory to the state in which it was found before the build.

Under source control: yes.

classes

Is used to hold the class files compiled during development. Generally, the developers configure their IDE or compilation scripts so that the output points to this directory. This directory is used only locally for development purposes.

Under source control: no.

config

Holds any configuration scripts used by the module; for example, Log4j configuration files should be located here. Any properties files used by the module should also be located in this directory. Our project does not require any configuration settings, so this directory is not present.

Under source control: yes (if present).

design

Holds UML and any other design documents. The files created by a UML modeling tool should be placed in the UML subdirectory. We do not have any design documents or UML models for our project, thus this directory is not present.

Under source control: yes (if present).

dist

Holds the binary artifacts of the module. If the Ant build script creates any JAR, EJB JAR, WAR, or EAR files, they should be placed in this directory. The contents of this directory do not belong under source control, as they can be recreated by executing the Ant build script.

Under source control: no.


The dist directory also contains a number of subdirectories for other types of artifact produced besides archives:

  • The api subdirectory holds the Javadoc that is produced automatically by the Ant publish script.
  • The download subdirectory holds the zipped distribution source files as .tar.gz files.
  • The java2html subdirectory holds the HTML created automatically from the Java source by the Ant build script.
  • The metrix subdirectory holds the source code metrics produced automatically by the Ant publish script.
  • The tests subdirectory holds the results of running unit tests on the code files.
  • The todo subdirectory holds the HTML created to represent the to-do tasks marked as Javadoc comments in the Java source.


lib

Holds any JAR files that are required to build this project. For example, if the project requires an XML parser, the parser JAR file should be included in this directory. The reason is that we want to ensure that we have the exact versions of the libraries on which the project depends and that were used while developing the project.

Under source control: yes.

src

Holds all the source code for the project. It defines subdirectories to further help organize the source. The java subdirectory is used to hold Java source code. The web subdirectory is used to hold the (non-Java) contents of a WAR file, if the project needs one. (The contents of WEB-INF/lib and WEB-INF/classes would be filled in by the Ant build script when the actual WAR file is constructed.) Additional subdirectories may be created, as needed, to help keep code organized. For example, we also have sql, ojb, and test subdirectories for JavaEdge.

Under source control: yes.


Now that we have our directory structure, and a project that fits into this structure, let's get to the build.

The Ant Script

We will now develop the Ant script that controls how our project is built and what artifacts are generated. One of the nice things about having two separate tools, each focused on achieving different goals, is that we can use them separately. We will be able to use Ant and the build script we will develop to build the project independently of Anthill. This will allow the developers to build the project locally (on their development environment) using the same script. That's a good thing because the developers should make sure that the project builds and passes all the unit tests locally before committing any source code to the VCS.

Let's begin writing the Ant script to build this project (in a file called build.xml). The first step in writing the script is to define the <project> element.

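The opening element, using the values explained below, would look something like this (the default target name "all" is an assumption based on the convention mentioned earlier):

```xml
<project name="JavaEdge" default="all" basedir="../">
    <!-- the rest of the build script follows -->
```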

The name attribute defines the name of the project, that is, JavaEdge. The value of the basedir attribute may be surprising, but this build script lives in the build subdirectory, and we want all of the paths in the build script to be relative to the project root directory. Hence we specify the project base directory as "../".

Next, we define a set of properties. The first two are used to identify the project.

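A sketch of these two properties (the version value here is a placeholder; as described later, Anthill overrides it for controlled builds):

```xml
<property name="name" value="${ant.project.name}"/>
<property name="version" value="dev"/>
```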

The name property is given a value equal to ${ant.project.name}, a built-in Ant property that holds the value of the name attribute of the <project> element (that is, JavaEdge). This property is just a shortcut: instead of writing ${ant.project.name}, we can now write ${name}. The version property is used to uniquely identify each and every build of the software once we start using Anthill to do controlled builds.

The next three properties are used to configure the compiler when we compile our code:

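These property names are assumptions modeled on the standard javac attributes of the same names:

```xml
<property name="debug"       value="on"/>
<property name="deprecation" value="on"/>
<property name="optimize"    value="off"/>
```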

The next group of property declarations models the layout of our project directory tree. These properties tell Ant about the location of each of the directories holding our source code, library JAR files, etc.:

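A sketch of such layout properties (the names and values are assumptions modeled on the directory structure shown earlier):

```xml
<property name="src.dir"      value="src"/>
<property name="src.java.dir" value="${src.dir}/java"/>
<property name="src.web.dir"  value="${src.dir}/web"/>
<property name="src.sql.dir"  value="${src.dir}/sql"/>
<property name="lib.dir"      value="lib"/>
```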

We also define the properties that tell Ant where to put the intermediate products that it generates during the build. We keep all of these intermediate products in the build/temp directory and its subdirectories. As a result, it becomes easy for us to clean them up later:

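A sketch of these properties (the property names are assumptions):

```xml
<property name="temp.dir"         value="build/temp"/>
<property name="temp.classes.dir" value="${temp.dir}/classes"/>
```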

The next set of properties identifies the locations where Ant should put the results of our build. Any JAR or WAR files created by the build will be placed into the location pointed to by ${dist.dir}, which corresponds to the dist directory by default. For example, the generated Javadocs are placed in the location pointed to by ${dist.api.dir}:

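A sketch of the output properties (only dist.dir and dist.api.dir are named in the text; the others are assumptions that mirror the dist subdirectories listed earlier):

```xml
<property name="dist.dir"       value="dist"/>
<property name="dist.api.dir"   value="${dist.dir}/api"/>
<property name="dist.tests.dir" value="${dist.dir}/tests"/>
<property name="dist.todo.dir"  value="${dist.dir}/todo"/>
```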

Next, we define a group of database-related properties, which will be used when we create the database tables for testing. Generally, it is a good idea to use property values instead of literal strings within your tasks. This makes it easier to modify your Ant scripts in the future, since everything is in one place and you need to make a change only once rather than searching the entire Ant script for strings. Another advantage of using properties, rather than hard-coded values, is that properties can be overridden. This allows your script to be used in different environments without making any changes to it:


Our last set of properties defines the classpaths that should be used to compile our project:
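The listing is omitted here. The tests.classpath id is referenced later in the chapter; the compile.classpath id and the ${lib.dir} property (assumed to point at the project's JAR directory) are assumptions:

```xml
<!-- Hypothetical classpath definitions -->
<path id="compile.classpath">
  <fileset dir="${lib.dir}" includes="*.jar"/>
</path>
<path id="tests.classpath">
  <path refid="compile.classpath"/>
  <pathelement location="${build.classes.dir}"/>
  <pathelement location="${tests.classes.dir}"/>
</path>
```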


Main Targets

Build files can get quite complex, with many targets and interdependencies between them. One strategy that we suggest for dealing with this complexity is to create a set of main targets. These main targets are then responsible for calling all the dependent targets that work together. We suggest creating the following main targets:

  • dev

    Used to create an incremental build of the project. This is probably the most often used target during development. Every time you make a change and want to test it, you would run this target.

  • dev-clean

    After your changes pass all the tests, you would typically run this target, which does a clean build. It's a good idea to run a clean build before you complete or commit a change to the version control system because a clean build can reveal dependencies on outdated or deleted files.

  • doc

    This target is run every time we want to regenerate the documentation for the project.

  • all

    An all-encompassing target, which does a complete build of the project and produces all the documentation and project artifacts.


A good tool that can help you visualize the targets present in your build script and the dependencies between them is Vizant (available at

The following graph shows the targets and the dependencies that exist in the build script for the JavaEdge project:


Here are those main targets defined in the build file:
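The listing is omitted in this text. A plausible sketch of the four main targets is shown below; the dependency lists and target names beyond those named above (compile, war, run-tests, javadoc, todo, clean, download) are assumptions, but the dev.clean property set in dev-clean is referred to later in the chapter:

```xml
<!-- Hypothetical main targets; dependency lists are assumptions -->
<target name="dev" depends="compile, compile-tests, war, run-tests"/>

<target name="dev-clean" depends="clean">
  <!-- Setting dev.clean causes conditional targets (such as the
       database rebuild) to run -->
  <property name="dev.clean" value="true"/>
  <antcall target="dev"/>
</target>

<target name="doc" depends="javadoc, todo"/>

<target name="all" depends="dev-clean, doc, download"/>
```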







The most fundamental piece of the entire build script is the compilation target. This target involves three steps:

  • Creating the required directories
  • Calling the compiler to compile the source code
  • Copying any non-code resources that need to be available in the classpath
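The listing is omitted here. A sketch of a compile target following the three steps above might look like this (the target name, the properties, and the classpath id are assumptions consistent with the rest of the chapter):

```xml
<!-- Hypothetical compile target -->
<target name="compile">
  <!-- Step 1: create the output directory -->
  <mkdir dir="${build.classes.dir}"/>

  <!-- Step 2: compile the Java sources -->
  <javac destdir="${build.classes.dir}" debug="${compile.debug}">
    <classpath refid="compile.classpath"/>
    <src path="${src.java.dir}"/>
  </javac>

  <!-- Step 3: copy non-code resources onto the classpath -->
  <copy todir="${build.classes.dir}">
    <fileset dir="${src.java.dir}">
      <include name="**/*.properties"/>
      <include name="**/version*"/>
    </fileset>
  </copy>
</target>
```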




The javac task takes care of compiling Java code. The destdir attribute identifies the output directory for the compiled classes. The classpath tag identifies the classpath to be used for this compilation. The src tag provides the location path for the Java source files.

Compiling the sources places the compiled class files in a different location from the source Java files. Also, we want to ensure that any resources to be made available via a classloader also get moved. Therefore, we use the copy task. This task uses a fileset to define the files that will get copied. In the above code, we have copied all the property and version files.

Packaging for Deployment

The next target packages our project into a WAR file and a JAR file. To deploy our application, we just need a WAR file. However, we are creating the JAR file so that we can run unit tests separately from the build, and all the code that we need on the classpath is included in the JAR file:
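The listing is omitted here. A sketch of the packaging target, using the war task's nested lib and classes filesets as described below, might look like this (the target name, file names, and webxml location are assumptions):

```xml
<!-- Hypothetical packaging target -->
<target name="war" depends="compile">
  <war destfile="${dist.dir}/${name}.war"
       webxml="web/WEB-INF/web.xml">
    <!-- All project JARs end up in WEB-INF/lib -->
    <lib dir="${lib.dir}" includes="*.jar"/>
    <!-- Compiled classes and OJB configuration files both end up
         in WEB-INF/classes -->
    <classes dir="${build.classes.dir}"/>
    <classes dir="${src.ojb.dir}"/>
  </war>

  <!-- JAR of the compiled classes, used to run tests separately -->
  <jar destfile="${dist.dir}/${name}-${version}.jar"
       basedir="${build.classes.dir}"/>
</target>
```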






The war task has a special way of handling the files that end up in the WEB-INF, WEB-INF/lib, and WEB-INF/classes directories. The contents of each of these special locations can be specified using a fileset element. In our example, the fileset called lib places all JAR files found in the lib directory of the project into the WEB-INF/lib directory of the WAR file. This ensures that the classpath that we need for compiling is included in the WAR file. The classes fileset ensures that all of the compiled application classes get placed in the WEB-INF/classes location in the WAR file.

We can have multiple filesets of the same type within our task, that is, we can grab files from different source locations and place them in the same destination location in a WAR file. In our application, we have separate source locations for our Java source code and the OJB configuration files. But at run time, we want the OJB configuration files to be accessible via a classloader. Hence, they need to be in the same location as our class files. We use two distinct classes filesets for this purpose. The first classes element includes our compiled class files (located in ${build.classes.dir}) in the WEB-INF/classes location of the WAR file. The second classes fileset includes our OJB configuration files (located in ${src.ojb.dir}) in the same location as the classes in the WAR file.

Creating a JAR file is even easier. We use the jar task to specify the name and location of the resulting JAR file and the base directory. Every file found in the specified base directory (or any of its subdirectories), will be included in the resulting JAR file. If you need more flexibility than that or if you need to filter out some files and include files from multiple locations, the jar task also supports nested filesets that provide all those capabilities.

Compiling Test Code

Compiling the test code is almost identical to compiling the main project code. Instead of placing the compiled class files in the ${build.classes.dir}, we place them in the ${tests.classes.dir}. The classpath used for compilation is the tests.classpath, which includes the compiled project code:
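The listing is omitted here. A sketch of the test-compilation target is shown below. The target name, the ${src.tests.dir} property, and the token-based filterset are assumptions; in particular, the filterset approach assumes the copied repository.xml contains an @subprotocol@ placeholder to be replaced (the book may have used a different replacement mechanism):

```xml
<!-- Hypothetical test-compilation target -->
<target name="compile-tests" depends="compile">
  <mkdir dir="${tests.classes.dir}"/>

  <javac destdir="${tests.classes.dir}">
    <classpath refid="tests.classpath"/>
    <src path="${src.tests.dir}"/>
  </javac>

  <!-- Copy the OJB configuration, rewriting the JDBC subprotocol
       so tests can run against a different database -->
  <copy todir="${tests.classes.dir}">
    <fileset dir="${src.ojb.dir}"/>
    <filterset>
      <filter token="subprotocol" value="${test.db.subprotocol}"/>
    </filterset>
  </copy>
</target>
```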





One interesting thing to note in the copy task is that, when we copy the OJB configuration files to make them accessible by the test code, via a classloader, we apply a replacement filter. This filter changes the value of the subprotocol attribute in the jdbc-connection-descriptor element of the repository.xml file (refer to Chapter 5 for the discussion on the repository.xml file). This allows us to use a potentially different database URL for automated testing during the build than for production.

Preparing the Database

The JavaEdge project requires a database for most of the tests. While we could simply require that a database with all the necessary tables and users be available for running the tests, that would be a weak link in the chain of testing. If the database schema changes (say, we add a column to one table and remove one from another), and we try running the tests that require the new database schema against an old database, they will fail. Equally, what if some developer inadvertently makes a change to the database schema? If we do not rebuild the database every time we do a build, we may not catch that inadvertent change for a long time (maybe until we try to put the application into production). So, we should have the option of creating the database required for our application from scratch with every build. This is actually quite easy, since we already have all the database scripts in our source tree. All we need to do is execute the scripts against a database, and Ant provides us with a task to do this. Look at the following code:
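The listing is omitted here. A sketch of the database-creation target, using the sql task and transaction elements described below, might look like this (the target name and the db.* and ${src.sql.dir} property names are assumptions):

```xml
<!-- Hypothetical database-rebuild target; runs only on clean builds -->
<target name="create-db" if="dev.clean">
  <sql driver="${db.driver}"
       url="${db.url}"
       userid="${db.user}"
       password="${db.password}"
       classpathref="compile.classpath">
    <!-- Drop and recreate the schema, then set up the user account -->
    <transaction src="${src.sql.dir}/create_java_edge.sql"/>
    <transaction src="${src.sql.dir}/mysql_user_setup.sql"/>
  </sql>
</target>
```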



The sql task allows us to execute SQL statements against a database via a JDBC connection. This task requires the name of the driver to be used, the URL of the database, the user name, and the password for establishing the database connection, as well as a classpath containing the driver. The nested transaction element allows us to execute multiple SQL commands on the same database connection. The src attribute of this element points to the location of the file containing the SQL to be executed. In our project, we are going to first execute the create_java_edge.sql script. This script drops the existing database (if it exists) and then recreates the entire required database from scratch. Then, we are going to run the mysql_user_setup.sql, which creates the user account required by our project.

Note that this target includes an if attribute. This target will run only if the dev.clean property exists. As we declare the dev.clean property in the dev-clean target, the database will get rebuilt only if we run the dev-clean target (or any other target that calls dev-clean, such as the all target).

Using Ant properties to hold the value of the database URL, user name, and password (instead of hardcoded strings) will allow us to integrate this build script with the Anthill build server as a continuous integration tool.

Running the Unit Tests

Running our tests as part of every build is quite easy, given that we can use the JUnit framework. Ant contains a pair of optional tasks called junit and junitreport. The junit task can be used to run a single JUnit test or a whole series of tests. It can produce XML-formatted test results as well as plain text results. The junitreport task applies an XSLT stylesheet to the XML-formatted test results. The use of these tasks is demonstrated in the following snippet:
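The listing is omitted here. A sketch of the test-running target, matching the attributes and elements described below, might look like this (the target name, result directories, and the *Test.class naming convention are assumptions):

```xml
<!-- Hypothetical test-running target -->
<target name="run-tests" depends="compile-tests">
  <junit haltonfailure="false" fork="true">
    <classpath refid="tests.classpath"/>
    <!-- Console summary plus XML results for junitreport -->
    <formatter type="brief" usefile="false"/>
    <formatter type="xml"/>
    <batchtest todir="${tests.results.dir}">
      <fileset dir="${tests.classes.dir}" includes="**/*Test.class"/>
    </batchtest>
  </junit>

  <!-- Transform the XML results into an HTML report -->
  <junitreport todir="${tests.results.dir}">
    <fileset dir="${tests.results.dir}" includes="TEST-*.xml"/>
    <report format="frames" todir="${dist.dir}/test-reports"/>
  </junitreport>
</target>
```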





The haltonfailure attribute controls whether the build should end when a unit test fails. We set this attribute to false because the build does not consist only of running the tests, and we want all the other components to run regardless of whether all the tests are successful. The fork attribute tells the task that all unit tests should run in their own JVM rather than inside the JVM running the Ant script. The classpath element allows us to specify the classpath to be used for running the tests. In our example, we are referring to the classpath that we created at the beginning of this build script. The tests.classpath contains the classpath we used for compilation plus the compiled classes and test cases in our project (the OJB property files are included in the test cases classpath, copied over during the compilation target).

The two formatter elements tell the junit task about the kind of output to be generated. The brief formatter produces text-based output similar to junit.textui.TestRunner. The xml formatter ensures that all test results are saved in an XML format so that we can apply a stylesheet to them later. The batchtest element can contain a number of fileset elements, which are used to identify the unit tests to be run. The junit task then runs all the unit tests identified by these elements.

Formatting the JUnit results is accomplished using the junitreport task. This task accepts a todir attribute that identifies the location where the formatted report should be placed. A fileset element within this task identifies the XML files containing test results to be formatted, and the report element identifies the format of the generated report, which is either frames or noframes.


There are a couple of points that you should remember when you're using the junit and junitreport tasks with Ant. The first is that junit.jar needs to be on the classpath available to Ant; so it needs to be either in the lib directory under your Ant installation or added to your system classpath. The second is that if you are using junitreport with JDK 1.3, you need to download Xalan and make sure the xalan.jar file is available on Ant's classpath (the same as with junit.jar).

Generating Javadocs

Ant allows you to generate Javadocs with every build. The javadoc task is probably one of the largest Ant tasks, with a large number of options. Most of these options map directly to options of the javadoc tool that comes with the JDK, and you need to use only a few of them to generate a simple set of Javadocs:
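The listing is omitted here. A minimal sketch of the javadoc target might look like this (the target name, package name, and window title are assumptions):

```xml
<!-- Hypothetical Javadoc-generation target -->
<target name="javadoc">
  <mkdir dir="${dist.api.dir}"/>
  <javadoc destdir="${dist.api.dir}"
           packagenames="com.wrox.javaedge.*"
           windowtitle="${name} ${version} API">
    <classpath refid="compile.classpath"/>
    <sourcepath path="${src.java.dir}"/>
  </javadoc>
</target>
```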




XDoclet-Generated To-Do List

XDoclet is a powerful code-generation tool that uses Javadoc comments as its input metadata. Based on this input, XDoclet can generate EJBs and Data Transfer Objects, or portions of a web application. In our application, we are going to use XDoclet to generate something very simple, but still very useful. We are going to generate a to-do list based on the to-do items left in our code in the form of Javadoc comments:




Publishing Project Documentation

The move-docs target illustrated below is not very interesting, but it demonstrates one little trick. Some tasks support an if attribute, which determines whether the task gets executed or not. If the property named by the if attribute exists, the task gets executed; otherwise, it does not. We are going to use this attribute in the following move-docs example:



If ${doc.dir} exists, which indicates that the directory holding the project documentation exists, we want to move it to its publish location; otherwise, we want to do nothing. But the move task does not support the if attribute. The trick is that we can produce the same behavior as if it did by wrapping the move task in its own target. Since targets do support the if attribute, the move task will be executed only if its wrapping target gets executed, and the target will get executed only if our condition is met.
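The trick described above can be sketched as follows; the doc.dir.present property name, the ${publish.dir} property, and the available task used to set the condition are assumptions:

```xml
<!-- Set doc.dir.present only when the documentation directory exists -->
<available property="doc.dir.present" file="${doc.dir}" type="dir"/>

<!-- The wrapping target runs (and hence the move executes) only when
     doc.dir.present is set -->
<target name="move-docs" if="doc.dir.present">
  <move todir="${publish.dir}">
    <fileset dir="${doc.dir}"/>
  </move>
</target>
```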

Packaging for Distribution

The download target, illustrated in the following code snippet, creates two tar.gz files: one that includes only the project binaries and one that includes the binaries as well as the source code. Creating a tar.gz file in Ant consists of two steps: first the .tar file is created, and then it is gzipped. We use the tar task to create the TAR file. This task accepts a type of fileset called tarfileset. These filesets allow us to specify which files should be included in the TAR file, and they support some useful attributes, such as prefix, which prefixes the path of every file in the fileset with the specified value. This is very handy if you want the .tar.gz file to unarchive into a directory identifying the project name and version (as shown in the code):
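The listing is omitted here. A sketch of the binary half of the download target might look like this (the target name and archive contents are assumptions; the source distribution would add further tarfilesets for the build and source directories):

```xml
<!-- Hypothetical binary-distribution packaging -->
<target name="download" depends="war">
  <tar destfile="${dist.dir}/${name}-${version}.tar">
    <!-- prefix ensures the archive unpacks into name-version/ -->
    <tarfileset dir="${dist.dir}" prefix="${name}-${version}">
      <include name="*.war"/>
      <exclude name="**/CVS/**"/>
      <exclude name="**/*~"/>
    </tarfileset>
  </tar>
  <gzip src="${dist.dir}/${name}-${version}.tar"
        destfile="${dist.dir}/${name}-${version}.tar.gz"/>
  <!-- The intermediate .tar is no longer needed -->
  <delete file="${dist.dir}/${name}-${version}.tar"/>
</target>
```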




Once we've created the .tar file, we use the gzip task to gzip it and we get a .tar.gz file. We then delete the .tar file, since it is no longer needed. In our example, we create two .tar.gz files, one for the source distribution and one for the binary distribution. The only difference between the two is the files that are included in the tarfileset. The source distribution includes all the binary files included in the binary distribution, in addition to the build directory containing the Ant build scripts and the source directory. The exclusion rules ensure that the .tar.gz files do not contain any of the CVS directories or any old versions of files (old files having the name format *.*~).

Source Code Metrics

For the JavaEdge project, we used a tool called JDepend to calculate the project source code metrics. JDepend has an Ant task that is distributed with Ant in the optional.jar file, but you still need to download the jdepend.jar file and add it to Ant's classpath.

JDepend generates an XML file that holds the computed metrics for the source code it is pointed at. We can then apply an XSLT stylesheet to transform the XML report to HTML. For this purpose, we use the style task as shown in the following code:
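The listing is omitted here. A sketch of the metrics target might look like the following; the target name is an assumption, and the jdepend.xsl stylesheet is assumed to be the one shipped in Ant's etc directory:

```xml
<!-- Hypothetical source-metrics target -->
<target name="metrics">
  <jdepend outputfile="${build.dir}/jdepend.xml" format="xml">
    <sourcespath>
      <pathelement location="${src.java.dir}"/>
    </sourcespath>
    <classpath refid="compile.classpath"/>
  </jdepend>
  <!-- Transform the XML metrics report into HTML -->
  <style in="${build.dir}/jdepend.xml"
         out="${dist.dir}/jdepend.html"
         style="${ant.home}/etc/jdepend.xsl"/>
</target>
```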



Professional Struts Applications: Building Web Sites with Struts, ObjectRelational Bridge, Lucene, and Velocity (Experts Voice)
ISBN: 1590592557
Year: 2003