18.4. Using JUnit with Ant

The Ant project build tool includes a few built-in tasks that can be used to integrate JUnit tests into your build scripts. The JUnit support tasks discussed here are included among the optional tasks in the Ant distribution since, in addition to the core Ant libraries, they also require the JUnit libraries in order to run. That means that before using these tasks, you need to ensure that you have the optional Ant tasks as well as the JUnit library in the appropriate CLASSPATH. See Chapter 17 for more details on general Ant usage.
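The buildfile examples that follow assume a junit.jar property pointing at the JUnit library. A minimal sketch of such a definition (the location shown is an assumption; adjust it to your installation):

 <property name="junit.jar"
           location="${basedir}/lib/junit.jar"/>

Alternatively, you can drop junit.jar into Ant's own lib directory so that the optional tasks can load it directly.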

It's possible, of course, to use the core Java support tasks to drive JUnit in your Ant buildfiles. For example, we could simply use the java task to invoke a TestRunner on a compiled TestCase, like our AllModelTests example:

 <target name="run-tests"         description="Run unit tests for the system"         depends="compile-tests">     <java classname="junit.textui.TestRunner"           fork="true">         <classpath>             <path ref/>             <path location="${java.classes.dir}"/>             <path location="${junit.jar}"/>             <path location="${java.test.classes.dir}"/>         </classpath>         <arg line="com.oreilly.jent.people.AllModelTests"/>     </java> </target> 

This isn't too difficult: we invoke the java task with the name of the TestRunner class, define the CLASSPATH to be used within the task (making sure to include both our compiled application classes and the JUnit library), and specify the test to be run using an arg child element on the java task, mimicking the command-line arguments used when invoking a TestRunner from the console. Notice that we are using the fork option on the java task, which runs JUnit in a separate JVM. We do this because the TestRunner we are using attempts to exit the JVM when it finishes running the tests, which naturally raises a security exception when run within the same JVM as the Ant process.

But the built-in Ant tasks for driving JUnit make even this very simple use case a little more concise, as we see here, where we've rewritten the run-tests target to use the built-in junit task:

 <target name="run-tests-junit"         description="Run unit tests for the system"         depends="compile-tests">     <junit printsummary="on">         <classpath>             <path ref/>             <path location="${java.classes.dir}"/>             <path location="${junit.jar}"/>             <path location="${java.test.classes.dir}"/>         </classpath>         <test name="com.oreilly.jent.people.AllModelTests"/>     </junit> </target> 

For this very simple use of JUnit, the utility of the junit task is pretty minimal. By using the junit task instead of the general java task, we've avoided having to specify the TestRunner class (the junit task uses its own internal TestRunner). We also don't need the fork option we used with the general java task, because the internal TestRunner used by the junit task is written to operate in the Ant context and doesn't attempt to exit the JVM when it's done. When using the junit task, however, we do need, at a minimum, to turn on the printsummary option, as shown previously. This prints some basic feedback about the test results in the Ant output. You can set additional output options using the <formatter> subelement described in "Formatting Test Results" later in this chapter.

A few other features of the junit task are worth highlighting, since they are the most directly relevant to unit testing and the ones you're most likely to use.

18.4.1. Controlling the Build with Unit Tests

The example Ant targets shown earlier run the specified JUnit test case and report the results but have no impact on the rest of the Ant build process. If any failures or errors are encountered in the tests, they are reported, but the surrounding build process continues. It's likely, though, that you will want to use the unit tests as conditions to control different parts of the build process. For example, you might want to use your JUnit tests to catch functional problems before deploying the application: if the tests succeed, proceed with the deployment; otherwise, don't bother, because there are known problems. This is roughly analogous to the conditional generation of class files by the Java compiler, just later in the build pipeline and checking functionality instead of syntax. The compiler's rule is: if syntax or other compilation errors are encountered, don't generate a class file. In our buildfile, we might extend this conditional logic and say that if compilation succeeds, run the unit tests to verify the functional behavior of the code, and if any functional problems are encountered, don't deploy the compiled code.

Four attributes of the junit task can be used for this purpose: haltonfailure, haltonerror, failureproperty, and errorproperty. Setting haltonfailure or haltonerror to true causes the overall build process to stop if any failure or error (respectively) is encountered in the tests. The failureproperty attribute value is the name of a property to be set if a failure is encountered in the tests, and the errorproperty plays a similar role for errors.
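The two property-based attributes can sit side by side on a single junit task so that failures and errors set distinct flags; a quick sketch (the property names tests.failed and tests.errored are our own choices, not predefined names):

 <junit printsummary="on"
        failureproperty="tests.failed"
        errorproperty="tests.errored">
     . . .
 </junit>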

To demonstrate the use of these attributes, let's take a modified version of the deploy-app target from the Ant template buildfile (details on the Ant template can be found in Chapter 17). The deploy-app target depends on the create-app target to create the deployable application file, which in turn depends on the code compiling successfully and the component archives (if any) being constructed. In this modified version of the deploy-app target, we make the target depend on our run-tests target before we deploy the application using the as-deploy target:

 <target name="deploy-app"         description="Deploy application to application server"         depends="create-app,run-tests">     <!-- Deploy the application using the imported deploy target -->     <antcall target="as-deploy">         <param name="as.deploy.file"                value="${basedir}/${ant.project.name}.ear"/>         <param name="as.deploy.server.id" value="default"/>     </antcall> </target> 

The haltonfailure and haltonerror attributes are the simplest approach to making the build conditional on the test results. We can just set these attributes to true when we invoke the junit task, as shown in this modification of the run-tests target:

 <target name="run-tests"         depends="compile-tests">     <junit printsummary="on"            haltonfailure="true">         <classpath>             . . .         </classpath>         <test name="com.oreilly.jent.people.AllModelTests"/>     </junit> </target> 

With the haltonfailure attribute set to true, if any of the tests in our AllModelTests test case either fail or generate an error, the junit task halts the overall Ant build process. Since we've configured our deploy-app target to depend on the run-tests target, if the junit task aborts the build, the deploy-app target is aborted before it tries to deploy the application. And if there were other tasks to be run after the deploy-app target, those would be aborted as well. If you want the build to halt only if errors are generated by the tests but continue if any structured failures are encountered, you would use the haltonerror attribute instead.
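Here only the attribute changes (a sketch; the classpath and test are the same as in the previous target):

 <junit printsummary="on"
        haltonerror="true">
     . . .
 </junit>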

We can get more flexibility in dealing with failed tests if we use the failureproperty and errorproperty attributes. With these attributes in use, instead of blindly halting the build process when a test fails or generates an error, the junit task simply sets the property named in the attribute value. We can then use the property as a condition in other parts of our buildfile. For example, let's change the run-tests target to use the failureproperty attribute instead of the haltonfailure attribute:

 <target name="run-tests"         depends="compile-tests">     <junit printsummary="on"            failureproperty="unit-tests-failed">         <classpath>             . . .         </classpath>         <test name="com.oreilly.jent.people.AllModelTests"/>     </junit> </target> 

With this change, if any tests fail or generate an error, the unit-tests-failed property is set. We can use this condition however we'd like in the rest of the buildfile. In our case, we'd like the deploy-app target to be skipped if any unit tests fail. So we can set the unless attribute on the deploy-app target to unit-tests-failed, indicating that the target should be run unless this property is set:

 <target name="deploy-app"         depends="create-app,run-tests"         unless="unit-tests-failed"> . . . 

Since the target depends on the run-tests target, run-tests is run first, setting the property if any tests fail. The deploy-app target's unless attribute is then checked, and the target is skipped if the property is set. But if any additional targets or tasks are queued to be run (e.g., the deploy-app target was invoked as part of another target), these subsequent tasks still execute unless they are also made conditional on the test property.
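For instance (a sketch; the notify-team target name is hypothetical, for illustration only), any follow-on target needs its own unless attribute to be skipped too:

 <target name="notify-team"
         depends="deploy-app"
         unless="unit-tests-failed">
     . . .
 </target>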

Note that the <test> subelement on the junit task also has if and unless attributes that you can use to conditionally run the tests themselves. We might have multiple <test> elements, for example, and want to stop running tests once the first error or failure is encountered. If the surrounding junit task is using the failureproperty attribute as shown earlier, we can use the unless attribute on the <test> elements to do this, like so:

 <target name="run-tests"         depends="compile-tests">     <junit printsummary="on"            failureproperty="unit-tests-failed">         <classpath>             . . .         </classpath>         <test name="com.oreilly.jent.people.TestPerson"               unless="unit-tests-failed"/>         <test name="com.oreilly.jent.people.TestPersonDAO"               unless="unit-tests-failed"/>     </junit> </target> 

18.4.2. Running Batches of Tests

So far, we've used only the <test> subelement of the junit task to run single TestCases. The junit task also supports running multiple tests through the <batchtest> subelement. The set of tests to be run is specified using a <fileset> element that contains either the test classes or their Java source files. So if we wanted to execute every test found in the directory where we compiled our test classes, we would do something like this:

 <target name="run-tests"         depends="compile-tests">     <junit printsummary="on"            failureproperty="unit-tests-failed">         <classpath>             . . .         </classpath>         <batchtest unless="unit-tests-failed">             <fileset dir="${java.test.classes.dir}">                 <include name="**/*.class"/>             </fileset>         </batchtest>     </junit> </target> 

Notice that we're also using the unless attribute here since batchtest supports both the if and unless attributes.

In the previous example, we used the test classes to specify the batch to run. We can also use the Java source files, like so:

 <target name="run-tests"         depends="compile-tests">     <junit printsummary="on"            failureproperty="unit-tests-failed">         <classpath>             . . .         </classpath>         <batchtest unless="unit-tests-failed">             <fileset dir="${java.test.src.dir}">                 <include name="**/*.java"/>             </fileset>         </batchtest>     </junit> </target> 

You can use multiple <test> and <batchtest> elements in your junit task. In our example, we might want to run our suite of model tests using a <test> element and selectively run some of our other tests using the <fileset> in a <batchtest> element:

 <target name="run-tests"         depends="compile-tests">     <junit printsummary="on"            failureproperty="unit-tests-failed">         <classpath>             . . .         </classpath>         <test name="com.oreilly.jent.people.AllModelTests"/>         <batchtest>             <fileset dir="${java.test.classes.dir}">                 <include name="**/TestDataUtils.class"/>                 <include name="**/TestDBUtils.class"/>             </fileset>         </batchtest>     </junit> </target> 

18.4.3. Formatting Test Results

Running tests and detecting failures are useful in their own right within an Ant buildfile, but even more useful is collecting the results of the tests for viewing, reporting, and analysis. You might, for example, use a nightly build process that does a fresh build of all your system code followed by a run of your unit tests. In that situation, you'll want to collect the results and report them on a web page or through email to the members of the development team.

The format and destination of the test results can be controlled using the <formatter> subelement on the junit task. You can also use attributes on the junit task to generate test output: the printsummary attribute on the junit task prints a one-line summary after each <test> or <batchtest> runs (as we've done in all our previous examples), and the showoutput attribute prints the output from all tests. But the output generated using the junit attributes is written only to the Ant output console, which is useful for eyeballing the results as the tests are running, but not very useful for other reporting purposes. The <formatter> element lets you write the results to a file and format the results in various ways.

When you use the <formatter> element, you have to specify the type of the formatter. The simplest way to do this is with the type attribute. The three supported formatter types are xml, plain, and brief. The plain and brief types generate the test results in a plain-text format (brief reports details only for tests that fail), while the xml type generates results in an XML format suitable for postprocessing.
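One option worth knowing about: if you want a formatter's output written to the Ant console rather than a file, you can set the <formatter> element's usefile attribute to false. A minimal sketch (the surrounding junit task is as in the earlier targets):

 <formatter type="brief" usefile="false"/>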

The following example runs our model tests using the plain formatter:

 <target name="run-tests"         depends="compile-tests">     <junit printsummary="on">         <classpath>             . . .         </classpath>         <formatter type="plain"/>         <test name="com.oreilly.jent.people.AllModelTests"/>     </junit> </target> 

If we don't adjust the destination of the results output, it will be written by the formatter to a file named TEST-<test case classname>.txt. In our case, the results are stored in the file TEST-com.oreilly.jent.people.AllModelTests.txt, and the contents look something like this:

 Testsuite: com.oreilly.jent.people.AllModelTests
 Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 0.033 sec

 Testcase: testSetName took 0.017 sec
 Testcase: testAddEmail took 0.001 sec
 Testcase: testEquals took 0 sec

Note that each individual test case that is run through a <test> or <batchtest> element generates its own separate results file, formatted according to the specified formatter. That means each <test> element generates a results file, and each test matched by the <fileset> of a <batchtest> element generates its own results file as well.
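If you'd rather these per-test files not land in the project's working directory, the <test> and <batchtest> elements accept a todir attribute naming an output directory. A sketch, where the test.reports.dir property is our own:

 <batchtest todir="${test.reports.dir}">
     <fileset dir="${java.test.classes.dir}">
         <include name="**/*.class"/>
     </fileset>
 </batchtest>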

If we ran the same tests with the xml format type, the results would be placed in a file named TEST-com.oreilly.jent.people.AllModelTests.xml and would look like this:

 <?xml version="1.0" encoding="UTF-8" ?> <testsuite name="com.oreilly.jent.people.AllModelTests"     tests="3" failures="0" errors="0" time="0.16">   <properties>     <property name="java.vendor" value="Apple Computer, Inc."></property>     . . .   </properties>   <testcase name="testSetName"             classname="com.oreilly.jent.people.TestPerson"             time="0.014"></testcase>   <testcase name="testAddEmail"             classname="com.oreilly.jent.people.TestPerson"             time="0.0010"></testcase>   <testcase name="testEquals" classname="com.oreilly.jent.people.TestPerson"             time="0.0"></testcase>   <system-out><![CDATA[]]></system-out>   <system-err><![CDATA[]]></system-err> </testsuite> 

The XML format has a root <testsuite> element whose attributes include the name of the test class that was run and summary data about the overall run. The first child element is a <properties> element that contains all of the Ant property values as they were set when the test was run. This information is useful for debugging purposes, but also helps you to ensure consistency across test runs. We've (mercifully) truncated the properties list in the sample results shown.

After the properties, the results of each test are output as <testcase> elements. Finally, any output that might have been printed to the system output or error streams is listed as <system-out> and <system-err> elements, respectively. In our case, the tests didn't write anything to output, so these elements are empty.

You can include a <formatter> element as a child of the junit task itself or specify <formatter> elements within individual <test> and <batchtest> elements. If a <formatter> is specified on a <test> or <batchtest> element, it overrides the main formatter for that test or set of tests.
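As a sketch of this override (the formatter choices here are arbitrary, for illustration), the nested <formatter> gives this particular test XML results even though the task-level formatter is brief:

 <junit printsummary="on">
     . . .
     <formatter type="brief"/>
     <test name="com.oreilly.jent.people.AllModelTests">
         <formatter type="xml"/>
     </test>
 </junit>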

If the standard text or XML formats aren't sufficient for your needs, you can write a custom formatter of your own by implementing the org.apache.tools.ant.taskdefs.optional.junit.JUnitResultFormatter interface bundled with the Ant distribution. However, since the XML format provides all of the result data, you can also use an XSLT transformation to convert the results into whatever format you need (for example, text, HTML, PDF, or another XML format). Ant even provides another optional task, junitreport, which facilitates this. You can specify a set of XML test result files, and the junitreport task will merge the XML files into a single document and apply an XSLT stylesheet to the merged file to generate a final output report. The junitreport task has default stylesheets that generate an HTML-formatted report, but you can provide your own as well. See the Ant documentation on the junitreport task for more details.
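A minimal sketch of typical junitreport usage (the test.reports.dir property and the html subdirectory are our own choices): merge the per-test XML result files and generate an HTML report using the default frames stylesheet:

 <junitreport todir="${test.reports.dir}">
     <fileset dir="${test.reports.dir}">
         <include name="TEST-*.xml"/>
     </fileset>
     <report format="frames" todir="${test.reports.dir}/html"/>
 </junitreport>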

You can control (to a degree) the destination of the formatted results as well, using the extension attribute on the <formatter> element, and the outfile attribute on the <test> element. As we mentioned earlier, each built-in formatter (xml, plain, or brief) has a default file extension (.xml for the XML formatter and .txt for the plain and brief formatters). You can override these by specifying your own extension using the extension attribute on the <formatter> element, for example:

 <junit . . .>
     . . .
     <formatter type="xml" extension=".fooxml"/>
     . . .
 </junit>

Each <test> element can also have an outfile attribute, which overrides the base filename of the results file for that test. Note that one of the disadvantages of using the <batchtest> element is that you can't override the default filenames of the results files for the matching tests.
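For example (a sketch; the base filename model-tests is arbitrary), the following writes this test's results to model-tests.xml rather than the default TEST-com.oreilly.jent.people.AllModelTests.xml, assuming the xml formatter is in effect:

 <test name="com.oreilly.jent.people.AllModelTests"
       outfile="model-tests"/>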


