Managing Jobs

Previous chapters took a cursory look at Quartz jobs; this chapter discusses jobs more formally and explains how to use them.

What Is a Quartz Job?

Quite simply, a Quartz job is a Java class that performs a task on your behalf. This task can be anything that you can code in Java. Here are a few examples to make the point:

  • Use JavaMail (or another mail framework, such as Commons Net) to send e-mails
  • Create a remote interface and invoke a method on an EJB
  • Use HttpClient to invoke a URL for a Web application
  • Get a Hibernate session and query and update data in a relational database
  • Use OSWorkflow to invoke a workflow from the job

These examples are just a few; you surely can come up with your own. Anything that you can do in Java can become a job.

The org.quartz.Job Interface

The only requirement that Quartz puts on your Java class is that it must implement the org.quartz.Job interface. Your job class can implement any other interfaces that it wants or extend any class that it needs, but it or a superclass must implement the job interface. The job interface defines a single method:

public void execute(JobExecutionContext context)
    throws JobExecutionException;

When the Scheduler determines that it is time to run the job, the execute() method is called, and a JobExecutionContext object is passed to the job. The only contractual obligation that Quartz puts on the execute() method is that if there's a serious problem with the job, you must throw an org.quartz.JobExecutionException.
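
A common way to honor this contract is to catch any checked exception the job's work throws and rethrow it wrapped in a JobExecutionException. The following sketch illustrates the idea; the SendReportJob class and its sendDailyReport() helper are hypothetical, not part of Quartz.

import org.quartz.Job;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;

// A hedged sketch of the error-handling contract: the class name and the
// sendDailyReport() helper are hypothetical, not part of Quartz itself
public class SendReportJob implements Job {

    public void execute(JobExecutionContext context)
            throws JobExecutionException {
        try {
            sendDailyReport();
        } catch (Exception e) {
            // Signal a serious problem to the Scheduler by wrapping the cause
            throw new JobExecutionException(e);
        }
    }

    private void sendDailyReport() throws Exception {
        // Application-specific work (building and mailing a report) goes here
    }
}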

JobExecutionContext

When the Scheduler calls a job, a JobExecutionContext is passed to the execute() method. The JobExecutionContext is an object that gives the job access to the runtime environment of Quartz and the details of the job itself. This is analogous to a Java Web application in which a servlet has access to the ServletContext. From the JobExecutionContext, the job can access everything about its environment, including the JobDetail and trigger that were registered with the Scheduler for the job. Listing 4.4 shows a job called PrintInfoJob that prints some information about the job.

As you can see from Listing 4.4, Quartz jobs can be very basic. The PrintInfoJob gets the JobDetail object, which is stored in the JobExecutionContext, and prints some basic details about the job. The JobDetail class deserves a little more discussion.

Listing 4.4. The PrintInfoJob Shows How to Access the JobExecutionContext

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.quartz.Job;
import org.quartz.JobDetail;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;

public class PrintInfoJob implements Job {
    static Log logger = LogFactory.getLog(PrintInfoJob.class);

    public void execute(JobExecutionContext context)
            throws JobExecutionException {

        // Every job has its own JobDetail
        JobDetail jobDetail = context.getJobDetail();

        // The name and group are defined in the JobDetail
        String jobName = jobDetail.getName();
        logger.info("Name: " + jobDetail.getFullName());

        // The job class configured for this job
        logger.info("Job Class: " + jobDetail.getJobClass());

        // Log the time the job was fired
        logger.info(jobName + " fired at " + context.getFireTime());

        logger.info("Next fire time " + context.getNextFireTime());
    }
}

JobDetail

You first saw the org.quartz.JobDetail class back in Chapter 3. A JobDetail instance is created for every job that is scheduled with the Scheduler. The JobDetail serves as the definition for a job instance. Notice in Listing 4.5 that the job isn't the object registered with the Scheduler; it's actually the JobDetail instance.

Listing 4.5. A JobDetail Is Registered with the Scheduler, Not the Job

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.quartz.JobDetail;
import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.quartz.SimpleTrigger;
import org.quartz.Trigger;
import org.quartz.TriggerUtils;
import org.quartz.impl.StdSchedulerFactory;

public class Listing_4_5 {
    static Log logger = LogFactory.getLog(Listing_4_5.class);

    public static void main(String[] args) {
        Listing_4_5 example = new Listing_4_5();
        example.runScheduler();
    }

    public void runScheduler() {

        try {
            // Create a default instance of the Scheduler
            Scheduler scheduler =
                StdSchedulerFactory.getDefaultScheduler();

            logger.info("Scheduler starting up...");
            scheduler.start();

            // Create the JobDetail
            JobDetail jobDetail =
                new JobDetail("PrintInfoJob",
                    Scheduler.DEFAULT_GROUP,
                    PrintInfoJob.class);

            // Create a trigger that fires now and repeats every 10 seconds forever
            Trigger trigger = TriggerUtils.makeImmediateTrigger(
                SimpleTrigger.REPEAT_INDEFINITELY, 10000);
            trigger.setName("PrintInfoJobTrigger");

            // Register the JobDetail and trigger with the Scheduler
            scheduler.scheduleJob(jobDetail, trigger);

        } catch (SchedulerException ex) {
            logger.error(ex);
        }
    }
}

You can see in Listing 4.5 that the JobDetail gets added to the Scheduler, not the job. The job class is part of the JobDetail but is not instantiated until the Scheduler is ready to execute it.

Job Instances Are Not Created Until Execution Time

Job instances are not instantiated until it's time to execute them. Each time a job is executed, a new job instance is created. One implication of this is that your jobs don't have to worry about thread safety because only one thread will be executing a given instance of your job class at a time, even if you execute the same job concurrently.

 

Setting Job State Using the JobDataMap Object

You can define state for a job using org.quartz.JobDataMap. The JobDataMap implements java.util.Map through its superclass, org.quartz.utils.DirtyFlagMap. You can store key/value pairs within the JobDataMap, and those data pairs can be passed along and accessed from within your job class. This is a convenient way to pass configuration information to your job. Listing 4.6 illustrates this approach using a job we created especially for this purpose, called PrintJobDataMapJob.

Listing 4.6. Use the JobDataMap to Pass Configuration Information to Your Job

import java.math.BigDecimal;
import java.util.Date;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.quartz.JobDetail;
import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.quartz.Trigger;
import org.quartz.TriggerUtils;
import org.quartz.impl.StdSchedulerFactory;

public class Listing_4_6 {
    static Log logger = LogFactory.getLog(Listing_4_6.class);

    public static void main(String[] args) {
        Listing_4_6 example = new Listing_4_6();
        example.runScheduler();
    }

    public void runScheduler() {
        Scheduler scheduler = null;

        try {
            // Create a default instance of the Scheduler
            scheduler = StdSchedulerFactory.getDefaultScheduler();
            scheduler.start();
            logger.info("Scheduler was started at " + new Date());

            // Create the JobDetail
            JobDetail jobDetail =
                new JobDetail("PrintJobDataMapJob",
                    Scheduler.DEFAULT_GROUP,
                    PrintJobDataMapJob.class);

            // Store some state for the job
            jobDetail.getJobDataMap().put("name", "John Doe");
            jobDetail.getJobDataMap().put("age", 23);
            jobDetail.getJobDataMap().put("balance",
                new BigDecimal(1200.37));

            // Create a trigger that fires once
            Trigger trigger =
                TriggerUtils.makeImmediateTrigger(0, 10000);
            trigger.setName("PrintJobDataMapJobTrigger");

            scheduler.scheduleJob(jobDetail, trigger);

        } catch (SchedulerException ex) {
            logger.error(ex);
        }
    }
}

In Listing 4.6, the information that we want to pass to the PrintJobDataMapJob is stored in the JobDataMap within the JobDetail. Because the JobDataMap implements the java.util.Map interface, we store state there as key/value pairs. The JobDataMap also includes niceties that make it easier to deal with type conversion: normally with maps, you have to explicitly cast from Object to the expected type, but the JobDataMap includes methods that do this on your behalf.
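
For example, a job scheduled as in Listing 4.6 could read its entries with the typed getters instead of casting. This is only a sketch (the class name is our own), and it assumes the same "name", "age", and "balance" keys that Listing 4.6 stores.

import java.math.BigDecimal;

import org.quartz.Job;
import org.quartz.JobDataMap;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;

// A sketch only: it assumes the keys stored in Listing 4.6 and is not
// part of the Quartz distribution
public class PrintTypedJobDataJob implements Job {

    public void execute(JobExecutionContext context)
            throws JobExecutionException {

        JobDataMap dataMap = context.getJobDetail().getJobDataMap();

        // Typed getters spare you the cast from Object
        String name = dataMap.getString("name");
        int age = dataMap.getInt("age");

        // For types without a dedicated getter, fall back to get() and cast
        BigDecimal balance = (BigDecimal) dataMap.get("balance");

        System.out.println(name + " is " + age + " with a balance of " + balance);
    }
}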

When the Scheduler eventually calls the job, the job can use the JobDetail to access and use the key/value pairs from the JobDataMap. Listing 4.7 shows the PrintJobDataMapJob.

Listing 4.7. The Job Can Access the JobDataMap Through the JobExecutionContext Object

import java.util.Iterator;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.quartz.Job;
import org.quartz.JobDataMap;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;

public class PrintJobDataMapJob implements Job {
    static Log logger = LogFactory.getLog(PrintJobDataMapJob.class);

    public void execute(JobExecutionContext context)
            throws JobExecutionException {

        logger.info("in PrintJobDataMapJob");

        // Every job has its own JobDataMap, stored in the JobDetail
        JobDataMap jobDataMap =
            context.getJobDetail().getJobDataMap();

        // Iterate through the key/value pairs
        Iterator iter = jobDataMap.keySet().iterator();

        while (iter.hasNext()) {
            Object key = iter.next();
            Object value = jobDataMap.get(key);

            logger.info("Key: " + key + " - Value: " + value);
        }
    }
}

When you obtain the JobDataMap, you can use its methods as you might any map instance. Normally, you access the data within the JobDataMap using a predefined key of your choice. You can also iterate through the map itself, as Listing 4.7 shows.

For jobs such as PrintJobDataMapJob, the properties within the JobDataMap become an informal contract between the client scheduling the job and the job itself. Job creators should document carefully which properties are required and which are optional. This helps ensure that jobs can be reused by other members of your team.
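
One way to enforce such a contract is to validate the required keys at the top of execute() and throw a JobExecutionException with a clear message when one is missing. The following sketch illustrates this; the SendReminderJob class and the "recipient" and "subject" keys are hypothetical.

import org.quartz.Job;
import org.quartz.JobDataMap;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;

// Hypothetical example, not part of Quartz: "recipient" is a required key,
// "subject" is optional with a default
public class SendReminderJob implements Job {

    public void execute(JobExecutionContext context)
            throws JobExecutionException {

        JobDataMap dataMap = context.getJobDetail().getJobDataMap();

        // Required property: fail fast with a clear message if it is missing
        String recipient = dataMap.getString("recipient");
        if (recipient == null) {
            throw new JobExecutionException(
                "SendReminderJob requires a 'recipient' entry in the JobDataMap");
        }

        // Optional property with a sensible default
        String subject = dataMap.getString("subject");
        if (subject == null) {
            subject = "Reminder";
        }

        // Application-specific work (sending the reminder) would go here
    }
}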

Since Quartz 1.5, a JobDataMap Is Available on Triggers

As of Quartz 1.5, a JobDataMap is also available at the trigger level. It is used much like the one at the job level, except that it lets each of several triggers associated with the same JobDetail supply its own values. Version 1.5 also added a convenience method to the JobExecutionContext, getMergedJobDataMap(), which returns a merged map of the values from the job and trigger levels. From Quartz 1.5 forward, calling this method within a job should be considered the best practice for retrieving the JobDataMap.
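
As a quick sketch of the idea (the job class and key names are hypothetical, and Quartz 1.5 or later is assumed), a client could place trigger-specific values into the trigger's own JobDataMap with trigger.getJobDataMap().put(...), and the job could then read the combined job-level and trigger-level values through getMergedJobDataMap():

import org.quartz.Job;
import org.quartz.JobDataMap;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;

// Hypothetical job that reads values contributed by both the JobDetail
// and the trigger that fired it (requires Quartz 1.5 or later)
public class PrintMergedDataJob implements Job {

    public void execute(JobExecutionContext context)
            throws JobExecutionException {

        // Combines the job-level and trigger-level entries into one map
        JobDataMap mergedMap = context.getMergedJobDataMap();

        // "name" might come from the JobDetail, "region" from the trigger
        System.out.println("name = " + mergedMap.getString("name"));
        System.out.println("region = " + mergedMap.getString("region"));
    }
}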

 

Stateful Versus Stateless Jobs

You learned from the previous section that information can be inserted into the JobDataMap and accessed by your jobs. For every job execution, however, a new instance of the JobDataMap is created with the values that have been stored (for example, in a database) for the particular job. Therefore, any changes your job makes to that information are not carried over between job invocations, that is, unless you use a stateful job.

In the same way that stateful session beans (SFSB) in J2EE keep their state between calls, the Quartz StatefulJob can hold its state between job executions. However, just like SFSBs, Quartz stateful jobs have some downsides when compared with their stateless counterparts.

Using Stateful Jobs

The Quartz framework offers the org.quartz.StatefulJob interface when you need to maintain state between job executions. The StatefulJob interface extends the standard Job interface and adds no methods that you have to implement. You simply implement the StatefulJob interface using the same execute() method as the Job interface. If you have an existing job class, all you have to do is change the implemented interface from org.quartz.Job to org.quartz.StatefulJob.

Two key differences exist between a job and StatefulJob as they are used by the framework. First, the JobDataMap is repersisted in the JobStore after each execution. This ensures that changes that you make to the job data are kept for the next execution.

Changing the JobDataMap for Stateful Jobs

You can modify the JobDataMap within stateful jobs by simply calling the various put() methods on the map. Any data that was present will be overwritten with the new values. You can also do this for stateless jobs, but because the JobDataMap is not repersisted for stateless jobs, the changes will not be saved. Changes are also not saved for the JobDataMaps on triggers and on the JobExecutionContext.

The other important difference between stateless and stateful jobs is that two or more stateful JobDetail instances can't execute concurrently. Say that you have created and registered a stateful JobDetail with the Scheduler. You also have set up two triggers that fire the job: one that fires every minute and another that fires every five minutes. If the two triggers tried to fire the job at the same time, the framework would not allow that to occur. The second trigger would be blocked until the first one completed.

This requirement has to do with the JobDataMap storage. Because the JobDataMap is stored along with the JobDetail that defines the job instance, thread-safety issues must be taken into consideration. Only one thread can run and update the JobDataMap storage at a time. Otherwise, the data would be erroneous because the second trigger could try to execute the job before the first had a chance to update the storage. Even stranger results could occur if the second execution completed before the first, which is possible, depending on what your job does.

Because of these differences, you should use the StatefulJob carefully. When you need to prevent concurrent executions of a job, however, a stateful job is the easiest way to do it. In the J2EE world, stateful has developed a somewhat negative connotation, but that is not the case with Quartz.
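
To tie these ideas together, the following sketch shows a stateful job that counts its own executions. The class name and the "count" key are our own hypothetical choices, not part of Quartz; because the JobDataMap is repersisted after each run, the count survives between invocations.

import org.quartz.JobDataMap;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;
import org.quartz.StatefulJob;

// A hypothetical stateful job: the "count" key is our own convention
public class CountingJob implements StatefulJob {

    public void execute(JobExecutionContext context)
            throws JobExecutionException {

        JobDataMap dataMap = context.getJobDetail().getJobDataMap();

        // Read the previous count (0 on the first execution)
        int count = dataMap.containsKey("count") ? dataMap.getInt("count") : 0;
        count++;

        // Because this is a StatefulJob, the updated map is repersisted
        // in the JobStore when execute() returns
        dataMap.put("count", count);

        System.out.println("CountingJob has run " + count + " time(s)");
    }
}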

