Appendix AE

Overview

The DBMS_JOB package allows you to schedule one-off or recurring jobs in your database. A job is a stored procedure, anonymous PL/SQL block, or external procedure written in C or Java. These jobs are run in the background by the job queue processes. They can be run on a recurring basis (every night at 2am), or one time (run this job right after I commit, and then remove it from the job queue). If you are familiar with the cron or at utilities in UNIX or Windows, you already have a good understanding of the DBMS_JOB package. Jobs run in the same environment (user, character set, and so on) in which they were submitted, with one exception: much as with a definer rights stored procedure, they run without any roles being enabled. We can see this with the following example:

Note 

The routines used in this example are explained in detail further down in this section.

tkyte@TKYTE816> create table t ( msg varchar2(20), cnt int );

Table created.

tkyte@TKYTE816> insert into t
  2  select 'from SQL*PLUS', count(*) from session_roles;

1 row created.

tkyte@TKYTE816> variable n number
tkyte@TKYTE816> exec dbms_job.submit(:n, 'insert into t select ''from job'', count(*) from session_roles;');

PL/SQL procedure successfully completed.

tkyte@TKYTE816> print n

         N
----------
        81

tkyte@TKYTE816> exec dbms_job.run(:n);

PL/SQL procedure successfully completed.

tkyte@TKYTE816> select * from t;

MSG                         CNT
-------------------- ----------
from SQL*PLUS                10
from job                      0

As you can see, in SQL*PLUS we had 10 roles active, but in the job environment we had none. Typically, since most people submit a stored procedure call as the job, this will not affect anything, since the stored procedure runs without roles in the first place. The only time you might notice this is if you try to schedule a stored procedure to which you have access via a role. This will not work - there are no roles enabled in jobs, ever.

Many times, people ask what the best method is for hiding the username/password associated with a batch job (for example, one that analyzes tables periodically) scheduled via cron, or some utility on Windows NT/2000. They are worried about the password being stored in a file (as they should be), or being visible in the ps output on UNIX, and so on. My answer to this is to not use the OS to schedule operations against the database at all, but rather to write a stored procedure that performs your operation, and schedule it using DBMS_JOB. In this fashion, there is no stored username and password, and the job will only actually execute if the database is available. If the database is not available, the job will not run, of course, as the database itself is responsible for running the job.

Another frequent question is, 'How can I speed this up?' You are faced with some long operation, and the end user does not want to wait. Sometimes the answer is that it cannot be sped up. For example, I've been sending e-mails from the database for many years. I've used different mechanisms over time: database pipes, UTL_HTTP, external procedures, and Java. They all worked at about the same speed, but they were slow. It takes a while for SMTP to finish its stuff sometimes. It definitely took too long in my application, where anything longer than a quarter of a second is too long. The SMTP send might take 2 to 3 seconds at times. We cannot make it go faster, but we can give it the perception of being faster. Instead of sending the e-mail when the user hit the submit button on the application, we would submit a JOB that would send the e-mail as soon as we committed. This had two nice side effects. The first was that the operation appeared to be much faster; the second was that it made e-mail 'transactional'. One of the properties of DBMS_JOB is that the job will be visible in the queue only after you commit. If you roll back, the job is dequeued, and will never be executed. By using DBMS_JOB, not only did we make the application appear faster, but we made it more robust as well. No longer did we send e-mail alerts out from a trigger on an update of a row that got rolled back. Either the row was updated and we sent the e-mail, or the row was not updated and we did not send the e-mail.

So, DBMS_JOB has many uses. It can make 'non transactional' things transactional (like sending an e-mail, or creating a table upon an insert into another table). It can appear to speed things up, especially when you do not need any output from the really slow operation. It can schedule and automate many of the tasks you normally write scripts for outside of the database. It is one of those truly useful packages.

In order for DBMS_JOB to function correctly, we need to do a little setup in the database. There are two init.ora parameters that must be set:

  • job_queue_interval - Specifies the frequency in seconds by which the job queues will be inspected for jobs that are ready to run. If you schedule a job to run every 30 seconds, but set job_queue_interval to 60 (the default), your job will never run every 30 seconds - it'll run every 60 seconds at best.

  • job_queue_processes - Specifies the number of background processes available to run jobs. This is an integer number between 0 (the default) and 36. This value may be changed without restarting the database via the ALTER SYSTEM SET JOB_QUEUE_PROCESSES=<nn> command. If this value is left at 0, jobs in the job queue will never run automatically. These job queue processes are visible in the UNIX environment, where they will have the name ora_snpN_$ORACLE_SID where the N will be a number (0, 1, 2, ..., job_queue_processes-1). On Windows, the job queues execute as threads and will not be externally visible.

Many systems run with a value of 60 for job_queue_interval (in other words, check the queues every minute), and 1 for job_queue_processes (run at most one job at a time). If you use jobs heavily, or make use of features that use the job queues as well (replication and materialized views are two features that make use of the job queues), you might consider increasing job_queue_processes.
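For example, such a configuration might look as follows (the values shown are illustrative, not recommendations):

```sql
-- Illustrative init.ora settings (require a restart to take effect):
--   job_queue_interval  = 60   -- inspect the job queues once a minute
--   job_queue_processes = 2    -- allow up to two jobs to run concurrently

-- job_queue_processes may also be changed while the database is up:
ALTER SYSTEM SET JOB_QUEUE_PROCESSES = 2;
```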

Once the job queues are set up to run automatically, we are ready to start using them. The main routine you will use with DBMS_JOB is the SUBMIT routine. Its interface is as follows:

PROCEDURE SUBMIT
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 JOB                            BINARY_INTEGER          OUT
 WHAT                           VARCHAR2                IN
 NEXT_DATE                      DATE                    IN     DEFAULT
 INTERVAL                       VARCHAR2                IN     DEFAULT
 NO_PARSE                       BOOLEAN                 IN     DEFAULT
 INSTANCE                       BINARY_INTEGER          IN     DEFAULT
 FORCE                          BOOLEAN                 IN     DEFAULT

where the arguments to the SUBMIT routine have the following meanings:

  • JOB - A job identifier. It is system-assigned (it is an OUT only parameter). You can use this to query the USER_JOBS or DBA_JOBS views by job ID to see information about that job. Additionally, some routines such as RUN and REMOVE take the job ID as their only input, to uniquely identify the job to run or be removed.

  • WHAT - The text of what will be run. It must be a valid PL/SQL statement or block of code. For example, to run a stored procedure P, you might pass the string P; (with the semi-colon) to this routine. Whatever you submit in the WHAT parameter will be wrapped in the following PL/SQL block:

    DECLARE
       job BINARY_INTEGER := :job;
       next_date DATE := :mydate;
       broken BOOLEAN := FALSE;
    BEGIN
       WHAT
       :mydate := next_date;
       IF broken THEN :b := 1; ELSE :b := 0; END IF;
    END;

    This is why you need to add the ; to any statement. Since your code simply replaces the word WHAT in this block, it needs the trailing semi-colon for the block to remain valid.

  • NEXT_DATE - The next (or since we are just submitting, the first) time to run the job. The default is SYSDATE - run as soon as possible (after committing).

  • INTERVAL - A string containing a date function that calculates the next time to run the job. You can consider this function to be 'selected from dual'. If you pass in the string sysdate+1, the database will in effect execute SELECT sysdate+1 INTO :NEXT_DATE FROM DUAL. See below for some caveats on setting the interval of a job to prevent 'sliding'.

  • NO_PARSE - Determines whether the WHAT parameter is parsed upon submission. By parsing the string, you can be reasonably sure the string is in fact, executable. In general, NO_PARSE should always be left with its default of False. When set to True, the WHAT parameter is accepted 'as is' with no validity checking.

  • INSTANCE - Only meaningful in Parallel Server mode (a mode in which Oracle can run on a loosely coupled cluster of machines). It specifies the instance upon which this job should be executed. By default, this will have a value of ANY_INSTANCE.

  • FORCE - Again, this is only meaningful in Parallel Server mode. If set to True (the default), you may submit the job with any instance number, even if that instance is not available at the time of job submission. If set to False, submit will fail the request if the associated instance is not available.

There are other entry points into the DBMS_JOB package as well. SUBMIT is the one you will use to schedule a job, and the others allow you to manipulate scheduled jobs - to RUN, REMOVE, or CHANGE them. Below is a listing of the commonly used entry points, what they expect as input, and what they do:

  • REMOVE (job number) - Removes a job from the job queue. You should note that if the job is running, this cannot stop it. It will be removed from the queue so it will not execute again, but it will not stop an executing job. In order to stop a running job, you may use the ALTER SYSTEM command to kill the session.

  • CHANGE (job number, WHAT, NEXT_DATE, INTERVAL, INSTANCE, FORCE) - This acts like an UPDATE statement would on the JOBS view. It allows you to change any of the settings of the job.

  • BROKEN (job number, BROKEN (Boolean), NEXT_DATE) - Allows you to 'break' or 'unbreak' a job. A broken job will not execute. A job that has failed 16 times in a row will automatically be set to broken, and Oracle will stop trying to execute it.

  • RUN (job number) - Runs a job right now in the foreground (in your session). Useful for trying to debug why a job is failing.
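As a quick illustration of these entry points, here is how you might un-break a job and reschedule it (the job number 35 is only an example - use a value from your USER_JOBS view):

```sql
begin
    -- un-break job 35 and ask for it to run on the next queue inspection
    dbms_job.broken( job => 35, broken => false, next_date => sysdate );
    commit;  -- the change is transactional, just like SUBMIT
end;
/
```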

Now that we have a working knowledge of how DBMS_JOB works, and what functions are available to us, we'll look at how to run a job once, how to set up a recurring job correctly, and how to monitor our jobs and find out what errors they have encountered.

Running a Job Once

Many of the jobs I run are 'one-off' jobs. I use DBMS_JOB much as one would use the & in UNIX, or the start command in Windows, to run a process in the background. The example I gave above with regards to sending e-mail is a good example. I use DBMS_JOB to make the sending of e-mail not only transactional, but also appear to be fast. Here is one implementation of this to demonstrate how to run a job once. We'll start with a small stored procedure to send e-mail using the supplied UTL_SMTP package:

tkyte@TKYTE816> create or replace
  2  PROCEDURE send_mail (p_sender    IN VARCHAR2,
  3                       p_recipient IN VARCHAR2,
  4                       p_message   IN VARCHAR2)
  5  as
  6     -- Note that you have to use a host
  7     -- that supports SMTP and that you have access to.
  8     -- You do not have access to this host and must change it
  9     l_mailhost VARCHAR2(255) := 'aria.us.oracle.com';
 10     l_mail_conn utl_smtp.connection;
 11  BEGIN
 12     l_mail_conn := utl_smtp.open_connection(l_mailhost, 25);
 13     utl_smtp.helo(l_mail_conn, l_mailhost);
 14     utl_smtp.mail(l_mail_conn, p_sender);
 15     utl_smtp.rcpt(l_mail_conn, p_recipient);
 16     utl_smtp.open_data(l_mail_conn );
 17     utl_smtp.write_data(l_mail_conn, p_message);
 18     utl_smtp.close_data(l_mail_conn );
 19     utl_smtp.quit(l_mail_conn);
 20  end;
 21  /

Procedure created.

Now, to time how long this takes, I'll run it twice:

tkyte@TKYTE816> set serveroutput on
tkyte@TKYTE816> declare
  2          l_start number := dbms_utility.get_time;
  3  begin
  4          send_mail( 'anyone@outthere.com',
  5                     'anyone@outthere.com', 'hey there' );
  6          dbms_output.put_line
  7          ( round( (dbms_utility.get_time-l_start)/100, 2 ) ||
  8            ' seconds' );
  9  end;
 10  /
.81 seconds

PL/SQL procedure successfully completed.

tkyte@TKYTE816> /
.79 seconds

PL/SQL procedure successfully completed.

It looks like it will consistently take on the order of eight tenths of a second to send a mail during the best of times. As far as I am concerned, that is far too long. We can do much better - well, we can 'apparently' do much better. We'll use jobs to give this the appearance of being much faster, and gain the benefit of 'transactional' e-mail as well.

We will start by creating a table to store the e-mail, and a procedure we could run against it to send e-mail. This procedure will ultimately become our background job. A question is, why am I using a table to store the emails? Why not just pass parameters to the job? The reason is bind variables, and the shared pool. Since all jobs are created using the WHAT parameter, and the database will simply 'execute' this string at run-time, we want to make sure that the WHAT parameter we submit is something that will be in the shared pool. We could easily just submit a job such as:

dbms_job.submit( x, 'send_mail(''someone@there.com'',
                               ''someone@there.com'', ''hello'' );' );

but that would have the effect of flooding our shared pool with hundreds or thousands of unique statements, killing our performance. Since we plan on sending lots of e-mails (anything more than one is lots, and would mandate the use of bind variables), we need to be able to submit something like:

dbms_job.submit( x, 'background_send_mail( constant );' ); 

Well, as it turns out, there is an easy way to do this. We simply need to create a table that contains a field for each parameter we really wanted to send to the routine (sender, recipient, and message in this case), plus an ID primary key field. For example:

tkyte@TKYTE816> create table send_mail_data( id        number primary key,
  2                               sender    varchar2(255),
  3                               recipient varchar2(255),
  4                               message   varchar2(4000),
  5                               senton    date default NULL );

Table created.

Here I added an ID column as a primary key and, in this case, a SENTON column to record when each mail went out. We'll use this table not only as a place to queue up outgoing e-mails, but also to keep a persistent log of e-mails sent, and when they were sent (very handy, trust me, for when people say 'but I didn't get the notification'). Now all we need to do is figure out a way to generate a key for this table, and get it to our background process using a constant string. Fortunately, DBMS_JOB already does that for us. When we schedule a job, it automatically creates a job ID for it, and returns this to us. Since the block of code it wraps around our WHAT parameter includes this job ID, we can simply pass it to ourselves! This means that our FAST_SEND_MAIL routine will look like this:

tkyte@TKYTE816> create or replace
  2  PROCEDURE fast_send_mail (p_sender    IN VARCHAR2,
  3                            p_recipient IN VARCHAR2,
  4                            p_message   IN VARCHAR2)
  5  as
  6          l_job   number;
  7  begin
  8          dbms_job.submit( l_job, 'background_send_mail( JOB );' );
  9          insert into send_mail_data
 10          ( id, sender, recipient, message )
 11          values
 12          ( l_job, p_sender, p_recipient, p_message );
 13  end;
 14  /

Procedure created.

This routine will submit a job, BACKGROUND_SEND_MAIL, and pass it the JOB parameter. If you refer to the WHAT parameter description above, you'll see the block of code includes three local variables we have access to - we are simply passing ourselves one of them. The very next thing we do in this procedure is to insert the e-mail into our QUEUE table, for delivery later. So, DBMS_JOB creates the primary key, and then we insert the primary key with the associated data into this table. That's all we need to do. Now we need to create the BACKGROUND_SEND_MAIL routine and it is simply:

tkyte@TKYTE816> create or replace
  2  procedure background_send_mail( p_job in number )
  3  as
  4          l_rec   send_mail_data%rowtype;
  5  begin
  6          select * into l_rec
  7            from send_mail_data
  8           where id = p_job;
  9
 10          send_mail( l_rec.sender, l_rec.recipient, l_rec.message );
 11          update send_mail_data set senton = sysdate where id = p_job;
 12  end;
 13  /

Procedure created.

It reads out the data we saved, calls the slow SEND_MAIL routine, and then updates the record, to record the fact that we actually sent the mail. Now, we can run FAST_SEND_MAIL, and see how fast it really is:

tkyte@TKYTE816> declare
  2          l_start number := dbms_utility.get_time;
  3  begin
  4          fast_send_mail( 'panda@panda.com',
  5                          'snake@snake.com', 'hey there' );
  6          dbms_output.put_line
  7          ( round( (dbms_utility.get_time-l_start)/100, 2 ) ||
  8            ' seconds' );
  9  end;
 10  /
.03 seconds

PL/SQL procedure successfully completed.

tkyte@TKYTE816> /
.02 seconds

PL/SQL procedure successfully completed.

As far as our end users are concerned, FAST_SEND_MAIL is 26 to 40 times faster than the original SEND_MAIL. It is not really faster, but it appears to be (and that is what really counts). The actual sending of the mail will happen in the background after they commit. There is an important note here: if you run this example, make sure you COMMIT when using the DBMS_JOB version, or else the e-mail will never get sent. The job will not be visible to the job queue processes until you do (your session can see the job in the USER_JOBS view, but the job queue processes won't see it until you commit). Don't take this as a limitation; it is actually a feature - we've just made e-mail transactional. If you ROLLBACK, so does your send mail. When you COMMIT, it'll be delivered.
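To make the commit requirement concrete, a call to FAST_SEND_MAIL from an application would typically look like this (the addresses are placeholders):

```sql
begin
    fast_send_mail( 'sender@example.com', 'recipient@example.com', 'hey there' );
    -- nothing has happened yet; the job is queued but invisible
    -- to the job queue processes until we commit
    commit;  -- now the e-mail will be sent in the background
end;
/
```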

Ongoing Jobs

The other main use of DBMS_JOB is to schedule recurring jobs in the database. As mentioned previously, many people try to use OS utilities such as cron or at to run jobs in the database, but then encounter the issue of, 'How do I protect the password?' and such. My answer to this is always to use the job queues. In addition to removing the need to store credentials anywhere, this ensures the jobs are only run if, in fact, the database is up and running. It will also retry the job time and time again in the event of a failure. For example, if the first time we attempted to run the job, the database link it uses was unavailable, it will put the job back onto the queue, and will retry it later. The database will do this 16 times, waiting a little longer each time, before ultimately marking the job 'broken'. See the next section, Monitoring Jobs and Finding the Errors, for more details on that. These are things cron and at won't do for you. Also, since the jobs are in the database, we can just run queries to find their status - when they last ran, if they ran, and so on. Everything is integrated.

Other Oracle features such as replication and materialized views implicitly use the job queues themselves as part of their day-to-day functioning. The way a snapshot pulls its changes, or a materialized view refreshes, is by the job queues running the stored procedures that perform these operations.

Let's say you wanted to schedule an analysis of all of the tables in a certain schema to take place every night at 3am. The stored procedure for doing such a thing could be:

scott@TKYTE816> create or replace procedure analyze_my_tables
  2  as
  3  begin
  4      for x in ( select table_name from user_tables )
  5      loop
  6          execute immediate
  7              'analyze table ' || x.table_name || ' compute statistics';
  8      end loop;
  9  end;
 10  /

Procedure created.

Now, in order to schedule this to run tonight at 3am (tomorrow morning really), and every day thereafter at 3am, we will use the following:

scott@TKYTE816> declare
  2      l_job number;
  3  begin
  4      dbms_job.submit( job       => l_job,
  5                       what      => 'analyze_my_tables;',
  6                       next_date => trunc(sysdate)+1+3/24,
  7                       interval  => 'trunc(sysdate)+1+3/24' );
  8  end;
  9  /

PL/SQL procedure successfully completed.

scott@TKYTE816> select job, to_char(sysdate,'dd-mon'),
  2         to_char(next_date,'dd-mon-yyyy hh24:mi:ss'),
  3         interval, what
  4    from user_jobs
  5  /

 JOB TO_CHA TO_CHAR(NEXT_DATE,'D INTERVAL              WHAT
---- ------ -------------------- --------------------- ------------------
  33 09-jan 10-jan-2001 03:00:00 trunc(sysdate)+1+3/24 analyze_my_tables;

So, the next date for this job to run will be 3am on the 10th of January. For NEXT_DATE we used a 'real' date, not a string as we did for INTERVAL, but we used a date function so that no matter when it is executed, no matter what time during the day, it will always return 3am tomorrow morning. This is an important fact. We use the same exact function for the INTERVAL parameter, as a string: a function that always returns 3am tomorrow, regardless of when it is executed.

The reason this is important is to prevent jobs from sliding. It might seem that since the first time the job is run, it'll be run at 3am, we could use an interval of simply sysdate+1. If we ran this at 3am on Tuesday, it should give us 3am on Wednesday. It would - if jobs were guaranteed to run precisely on time, but they are not. Jobs are processed in the queue sequentially, based on their time to be run. If I have one job queue process, and two jobs to be run at 3am, obviously one of them will not run at 3am exactly. It will have to wait for the first to finish to completion, and then it will be executed. Even if I have no overlapping jobs, the job queues are inspected at discrete points in time, say every 60 seconds. I might pick up the job to be run at 3am at 3:00:45am. If it used a simple sysdate+1 function, it might compute its next time to run as 3:00:46am tomorrow. Tomorrow at 3:00:45am, this job would not be ready to run yet, and would be picked up on the next inspection of the queue at 3:01:45am. This job would slowly slip over time. Even more dramatically, let's say the tables were being operated on at 3am one morning, so the analysis failed. The stored procedure would fail, and the job queues would retry the job later. Now the job will 'slip' by many minutes for the next day, since it ran at a time much later than 3am.

For this reason, to prevent the job from slipping, you must use a function that returns a fixed point in time if you want the job to always be scheduled at a particular point in time. If it is important that this job runs at 3am, you must use a function that always returns 3am, and is not dependent on the time of day at which it is executed.

Many of these 'non-sliding' functions are typically very easy to write - others not so. For example, I was once requested to implement a job that would collect STATSPACK statistics Monday through Friday at 7am, and 3pm only. Well, the INTERVAL for this was certainly non-intuitive, but let's have a look at the pseudo-code:

if it is before 15:00 then
    return TODAY at 15:00
    (eg: if we are running at 7am, we want to run at 3pm today)
else
    return today + either 3 (if it is Friday) or 1 (otherwise) at 7am
end if

So, what we needed to do then was turn this logic into a DECODE statement - or, had that been too complex, I could have used a PL/SQL function to perform the logic. I used the interval:

decode( sign( 15-to_char(sysdate,'hh24') ),
        1, trunc(sysdate)+15/24,
        trunc( sysdate + decode(to_char(sysdate,'d'), 6, 3, 1) )+7/24 )

The decode starts with the SIGN(15-TO_CHAR(SYSDATE,'HH24')). SIGN is a function that returns -1, 0, or 1 if the resulting number is negative, zero, or positive respectively. If this number was positive, it would imply that it was before 3pm in the afternoon (the hour was less than 15), and hence the next time we should run would be TRUNC(SYSDATE)+15/24 (15 hours after midnight today). On the other hand, if the sign came back 0 or -1, then we would use the TRUNC(SYSDATE + DECODE( TO_CHAR(SYSDATE,'D'), 6, 3, 1))+7/24. This would use the DECODE to look at the current day of the week to see if we should add three days (on Friday to get to Monday), or one day (every other day of the week). We would add that many days to SYSDATE, truncate this date back to midnight, and add 7 hours to it.
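A convenient way to sanity-check an interval expression like this is simply to select it from DUAL, and see what date it computes at various times of day:

```sql
-- Evaluate the interval expression to see the next run it would schedule
select decode( sign( 15-to_char(sysdate,'hh24') ),
               1, trunc(sysdate)+15/24,
               trunc( sysdate + decode(to_char(sysdate,'d'), 6, 3, 1) )+7/24 )
       next_run_at
  from dual;
```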

There are times when a 'sliding' date is OK, and even desired. For example, if you would like a job to collect some statistics from the V$ tables every 30 minutes while the database is up and running, it would be totally appropriate to use an interval of SYSDATE+1/24/2 which adds a half hour to a date.
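Submitting such a 'sliding' job is straightforward; here COLLECT_STATS is a hypothetical procedure you would have written yourself:

```sql
declare
    l_job number;
begin
    dbms_job.submit( job       => l_job,
                     what      => 'collect_stats;',   -- hypothetical procedure
                     next_date => sysdate,            -- start as soon as possible
                     interval  => 'sysdate+1/24/2' ); -- then every 30 minutes
    commit;
end;
/
```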

Custom Scheduling

There are times, such as the above, where the NEXT_DATE is hard to compute in a simple SQL statement, or where the next time the job runs is dependent on some complex procedural set of rules. In this case, we can have the job itself set the next date to run.

If you recall from above, the PL/SQL block that runs a job is:

DECLARE
   job BINARY_INTEGER := :job;
   next_date DATE := :mydate;
   broken BOOLEAN := FALSE;
BEGIN
   WHAT
   :mydate := next_date;
   IF broken THEN :b := 1; ELSE :b := 0; END IF;
END;

We have already seen, in the Running a Job Once section, how we can make use of the fact that JOB is available there; we used it as a primary key into a parameter table to make maximum use of shared SQL. Well, we can make use of the NEXT_DATE variable as well. As you can see in the above block of code, Oracle uses the bind variable :mydate as an input to set the NEXT_DATE variable, but it also retrieves this value after WHAT (your procedure) executes. If your procedure happens to modify the value of NEXT_DATE, Oracle will use it as the next date to run the job. As an example, we'll set up a small procedure P that will write an informative message to a table T, and set its NEXT_DATE:

tkyte@TKYTE816> create table t ( msg varchar2(80) );

Table created.

tkyte@TKYTE816> create or replace
  2  procedure p( p_job in number, p_next_date in OUT date )
  3  as
  4      l_next_date date default p_next_date;
  5  begin
  6      p_next_date := trunc(sysdate)+1+3/24;
  7
  8      insert into t values
  9      ( 'Next date was "' ||
 10         to_char(l_next_date,'dd-mon-yyyy hh24:mi:ss') ||
 11        '" Next date IS ' ||
 12         to_char(p_next_date,'dd-mon-yyyy hh24:mi:ss') );
 13  end;
 14  /

Procedure created.

Now we will schedule this job using the method from the section, Running a Job Once. That is, without an INTERVAL:

tkyte@TKYTE816> variable n number

tkyte@TKYTE816> exec dbms_job.submit( :n, 'p(JOB,NEXT_DATE);' );

PL/SQL procedure successfully completed.

tkyte@TKYTE816> select what, interval,
  2         to_char(last_date,'dd-mon-yyyy hh24:mi:ss') last_date,
  3         to_char(next_date,'dd-mon-yyyy hh24:mi:ss') next_date
  4    from user_jobs
  5   where job = :n
  6  /

WHAT                      INTERVAL LAST_DATE            NEXT_DATE
------------------------- -------- -------------------- --------------------
p(JOB,NEXT_DATE);         null                          28-apr-2001 18:23:01

In this case, we pass the JOB and the NEXT_DATE as parameters to our procedure; these will be supplied by the job queue at run-time. As you can see, this job has not yet run (LAST_DATE is null), and the INTERVAL is set to null, so that the NEXT_DATE will be computed as SELECT NULL FROM DUAL. Normally, this means the job would run once, and then be removed from the job queue. However, when this job runs, we'll discover:

tkyte@TKYTE816> exec dbms_job.run( :n );

PL/SQL procedure successfully completed.

tkyte@TKYTE816> select * from t;

MSG
---------------------------------------------------------------------
Next date was "" Next date IS 29-apr-2001 03:00:00

tkyte@TKYTE816> select what, interval,
  2         to_char(last_date,'dd-mon-yyyy hh24:mi:ss') last_date,
  3         to_char(next_date,'dd-mon-yyyy hh24:mi:ss') next_date
  4    from user_jobs
  5   where job = :n
  6  /

WHAT                      INTERVAL LAST_DATE            NEXT_DATE
------------------------- -------- -------------------- --------------------
p(JOB,NEXT_DATE);         null     28-apr-2001 18:23:01 29-apr-2001 03:00:00

that the NEXT_DATE is filled in. It is the NEXT_DATE computed in the procedure itself, and the job is still in the queue. As long as this job continues to fill in the NEXT_DATE field, it will remain in the job queue. If it ever exits successfully without setting NEXT_DATE, it will be removed from the queue.

This is very useful for those jobs with hard to compute NEXT_DATE values, or NEXT_DATE values that depend on data found in other database tables.

Monitoring the Jobs and Finding the Errors

There are three main views used to monitor jobs in the database. They are simply:

  • USER_JOBS - All jobs submitted by the currently logged-in user (the public synonym ALL_JOBS references this same view).

  • DBA_JOBS - All jobs in the database.

  • DBA_JOBS_RUNNING - The jobs that are currently executing.

Everyone has access to USER_JOBS as normal, and the DBA_* views are limited to people with the DBA privilege, or those who have been granted SELECT on these particular views directly. These views will give you information such as the last time a job ran (LAST_DATE), the next time it will run (NEXT_DATE), how many consecutive times it has failed (FAILURES), whether it is broken (BROKEN), and what it runs (WHAT).
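For example, a quick health check of your own jobs might look like:

```sql
-- List my jobs, soonest first, with their failure status
select job, what, last_date, next_date, failures, broken
  from user_jobs
 order by next_date;
```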

Suppose you look into these views and see some jobs with a positive value in the FAILURES column - where would you go to see the error message for that job? It is not stored in the database; rather, it can be found in the alert log for the database. For example, let's say you create a procedure such as:

tkyte@TKYTE816> create or replace procedure run_by_jobs
  2  as
  3          l_cnt   number;
  4  begin
  5          select user_id into l_cnt from all_users;
  6          -- other code here
  7  end;
  8  /

Procedure created.

tkyte@TKYTE816> variable n number
tkyte@TKYTE816> exec dbms_job.submit( :n, 'run_by_jobs;' );

PL/SQL procedure successfully completed.

tkyte@TKYTE816> commit;

Commit complete.

tkyte@TKYTE816> exec dbms_lock.sleep(60);

PL/SQL procedure successfully completed.

tkyte@TKYTE816> select job, what, failures
  2    from user_jobs
  3   where job = :n;

       JOB WHAT                             FAILURES
---------- ------------------------------ ----------
        35 run_by_jobs;                            1

If you have more than one user in your database (as all databases do), this procedure will most definitely fail. The SELECT ... INTO will always return too many rows; we have a programming error. Since this happens in the background, however, it is hard for us to see what exactly might be wrong. Fortunately, the error is recorded in the alert log for the database. If we were to edit that file and go to the bottom, we would find:

Tue Jan 09 13:07:51 2001
Errors in file C:\oracle\admin\tkyte816\bdump\tkyte816SNP0.TRC:
...
ORA-12012: error on auto execute of job 35
ORA-01422: exact fetch returns more than requested number of rows
ORA-06512: at "SCOTT.RUN_BY_JOBS", line 5
ORA-06512: at line 1

It tells us that job 35 (our job) failed to execute. More importantly, it tells us exactly why it failed: the same error stack you would get if you ran it in SQL*PLUS. This information is crucial to diagnosing why a job is failing. With it, we can fix the job and get it to run correctly.

This is pretty much all there is to monitoring jobs. You need to either keep an eye on your alert.log (something your DBA should already be doing), or monitor the DBA_JOBS view from time to time to ensure things are running smoothly.

Summary

DBMS_JOB is an excellent facility inside the database for running procedures in the background. It has uses in the automated scheduling of routine tasks such as analyzing your tables, performing some archival operation, cleaning up scratch tables - whatever. It has application functionality in the area of making long running operations 'apparently fast' (and apparently fast is all that matters to the end user really). It removes the need to code OS-dependent scripts to perform database operations on a recurring basis. Even better, it removes the need to hard code usernames and passwords in a script to log into the database. The job always runs as the person who submitted it - no credentials are required. Lastly, unlike an OS scheduling facility, these database jobs run only when the database is actually available. If the system is down when a job is scheduled to run, it will not run (obviously, if the database isn't up, the job queues are not up). All in all, DBMS_JOB is a robust facility for which I've found many uses.



Expert One on One Oracle