Appendix AL

Overview

The DBMS_UTILITY package is a collection of miscellaneous procedures. It is where many standalone procedures are placed. The DBMS_UTILITY package is installed in the database by default, and has EXECUTE granted to PUBLIC. The procedures in this package are not related to one another, as they typically are in the other packages. For example, all of the entry points in the UTL_FILE package have a common goal and meaning - to perform I/O on a file. The entry points in DBMS_UTILITY are pretty much independent of one another.

In this section, we will look at many of these functions and point out the important caveats and issues along the way.

COMPILE_SCHEMA

The goal of the COMPILE_SCHEMA procedure is to attempt to make valid all invalid procedures, packages, triggers, views, types, and so on in a schema. In Oracle 8.1.6, this procedure works by using the SYS.ORDER_OBJECT_BY_DEPENDENCY view, which returns objects in the order they depend on each other. In Oracle 8.1.7 and higher, this view is no longer used (why this is relevant will be shown below). If we compile the objects in the order that this view returns them, then at the end, all objects that can be valid should be valid. This procedure runs the ALTER COMPILE command as the user who invoked the procedure (invoker rights).

It should be noted that COMPILE_SCHEMA demands you pass in a case-sensitive username. If you call:

scott@TKYTE816> exec DBMS_UTILITY.compile_schema( 'scott' ); 

It is probable that nothing will happen, unless you have a lowercase user named scott. You must pass in SCOTT.
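For example, either of the following calls would work against the SCOTT schema, since the built-in USER function always returns the name in uppercase (unless the account was created with a quoted, lowercase identifier):

exec dbms_utility.compile_schema( 'SCOTT' );
exec dbms_utility.compile_schema( user );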

There is, however, another issue with COMPILE_SCHEMA in 8.1 versions of the database prior to 8.1.6.2 (that is, all 8.1.5, 8.1.6.0, and 8.1.6.1 versions). If you have a Java-enabled database, there will be some recursive dependencies in your system. This will cause COMPILE_SCHEMA to raise the error:

scott@TKYTE816> exec dbms_utility.compile_schema( user );
BEGIN dbms_utility.compile_schema( user ); END;

*
ERROR at line 1:
ORA-01436: CONNECT BY loop in user data
ORA-06512: at "SYS.DBMS_UTILITY", line 195
ORA-06512: at line 1

This error is coming from the SYS.ORDER_OBJECT_BY_DEPENDENCY view, and is the reason why Oracle 8.1.7 and up do not use this view. If you encounter this error, we can create our own COMPILE_SCHEMA procedure that behaves exactly like the real COMPILE_SCHEMA. We can do this by compiling the objects in whatever order we like. It is a common misconception that we must compile objects in some specific order - we can in fact do them in any arbitrary order, and still end up with the same outcome we would have had if we ordered by dependency. The logic is:

  1. Pick any invalid object from a schema that we have not yet tried to compile.

  2. Compile it.

  3. Go back to step one until there are no more invalid objects that we have not yet tried to compile.

It is that simple - we need no special ordering. This is because a side effect of compiling an invalid object is that all invalid objects it depends on will be compiled in order to validate this one. We just have to keep compiling objects until we have no more invalid ones (well, we might have invalid ones, but that would be because they cannot be successfully compiled no matter what). What we might discover is that we need only to compile a single procedure to get 10 or 20 other objects compiled. As long as we don't attempt to manually recompile those 10 or 20 other objects (as this would invalidate the first object again) we are OK.

Since the implementation of this procedure is somewhat interesting, we'll demonstrate it here. We need to rely on an invoker rights routine to do the actual ALTER COMPILE command. However, we need access to the DBA_OBJECTS view to find the 'next' invalid object, and to report on the status of the just-compiled object. We do not necessarily want the invoker of the routine to need access to DBA_OBJECTS. In order to achieve this, we will use a mixture of invoker rights routines and definer rights routines. We need to make sure, however, that the top-level routine - the one called by the end user - is the invoker rights routine, to ensure that roles are enabled.

Here is my implementation of a COMPILE_SCHEMA.

Note 

The user who runs this script must have had SELECT granted to them on the SYS.DBA_OBJECTS view directly (refer to Chapter 23, Invoker and Definer Rights for details on why this is).

Since this is a SQL*PLUS script, with some SQL*PLUS directives in it, I'll show the script here this time, not the results of actually running the script. I am using a SQL*PLUS substitution variable to fill in the schema name as we compile objects. I am doing this because of the invoker rights routine (the need to fully qualify objects if they should always access the same table, regardless of who is running it), and the fact that I personally do not like to rely on public synonyms. The script will be given to you in pieces below with commentary in between:

column u new_val uname
select user u from dual;

drop table compile_schema_tmp
/

create global temporary table compile_schema_tmp
( object_name varchar2(30),
  object_type varchar2(30),
  constraint compile_schema_tmp_pk
  primary key(object_name,object_type)
)
on commit preserve rows
/

grant all on compile_schema_tmp to public
/

We start the script by getting the currently logged in user's username into a SQL*PLUS substitution variable. We will use this later in our CREATE OR REPLACE procedures. We need to do this because our procedure is going to run as an invoker rights routine, and needs to access the table we just created above. If you recall, in Chapter 23 on Invoker and Definer Rights, we discussed how references to tables in such a procedure are resolved using the default schema of the person running the procedure. Well, we only have one temporary table that all users will use, and it will be owned by whoever installs this package. Therefore, we need to hard code the username into the PL/SQL routine. The temporary table is used by our procedures to 'remember' what objects we have attempted to compile. We need to use ON COMMIT PRESERVE ROWS because we are going to do DDL in our procedure (the ALTER COMPILE command is DDL), and DDL commits. Next, we can start in on the procedures we need:

create or replace
procedure get_next_object_to_compile( p_username in varchar2,
                                      p_cmd out varchar2,
                                      p_obj out varchar2,
                                      p_typ out varchar2 )
as
begin
    select 'alter ' || object_type || ' ' || p_username || '.' || object_name ||
           decode( object_type, 'PACKAGE BODY', ' compile body', ' compile' ),
           object_name, object_type
      into p_cmd, p_obj, p_typ
      from dba_objects a
     where owner = upper(p_username)
       and status = 'INVALID'
       and object_type <> 'UNDEFINED'
       and not exists ( select null
                          from compile_schema_tmp b
                         where a.object_name = b.object_name
                           and a.object_type = b.object_type
                      )
       and rownum = 1;

    insert into compile_schema_tmp
    ( object_name, object_type )
    values
    ( p_obj, p_typ );
end;
/

This is a definer rights procedure that accesses the DBA_OBJECTS view for us. It will return 'some' invalid object to be compiled - it just finds the first one we have not yet attempted to compile. As we retrieve them, we 'remember' them in our temporary table. Note that this routine will throw the exception NO_DATA_FOUND when there are no objects left to be compiled in the requested schema - we'll use this fact in our next routine to stop processing. Next, we have our invoker rights routine that will actually do the compilation. This also shows why we needed the COLUMN U NEW_VAL UNAME directive above - we need to physically insert the owner of the temporary table here, to avoid having to use a synonym. Since we do this dynamically when the procedure is compiled, it is better than relying on a synonym:

create or replace
procedure compile_schema( p_username in varchar2 )
authid current_user
as
    l_cmd  varchar2(512);
    l_obj  dba_objects.object_name%type;
    l_typ  dba_objects.object_type%type;
begin
    delete from &uname..compile_schema_tmp;

    loop
        get_next_object_to_compile( p_username, l_cmd, l_obj, l_typ );

        dbms_output.put_line( l_cmd );
        begin
            execute immediate l_cmd;
            dbms_output.put_line( 'Successful' );
        exception
            when others then
                dbms_output.put_line( sqlerrm );
        end;
        dbms_output.put_line( chr(9) );
    end loop;
exception
    -- get_next_object_to_compile raises this when done
    when no_data_found then
        NULL;
end;
/

grant execute on compile_schema to public
/

And that's it. Now you can go into any schema that is able to compile some objects, and execute:

scott@TKYTE816> exec tkyte.compile_schema('scott')
alter PROCEDURE scott.ANALYZE_MY_TABLES compile
Successful

alter PROCEDURE scott.CUST_LIST compile
ORA-24344: success with compilation error

alter TYPE scott.EMP_MASTER compile
ORA-24344: success with compilation error

alter PROCEDURE scott.FOO compile
Successful

alter PACKAGE scott.LOADLOBS compile
Successful

alter PROCEDURE scott.P compile
Successful

alter PROCEDURE scott.RUN_BY_JOBS compile
Successful

PL/SQL procedure successfully completed.

So, this shows me the objects it attempted to compile, and the outcome. According to the above, we compiled seven objects, two of which failed to compile cleanly, and five of which succeeded. We compiled them in an arbitrary order - the order was simply not relevant. This procedure should work in all situations.

ANALYZE_SCHEMA

The ANALYZE_SCHEMA routine does pretty much what it sounds like it would do - it performs an ANALYZE to collect statistics for the objects in a user's schema. It is recommended that you never do this on either SYS or SYSTEM. This is especially true for SYS, as the recursive SQL generated by Oracle over the years was optimized for execution using the rule-based optimizer. Having statistics on SYS-owned tables will cause your database to operate slower than it should. You may use this procedure to analyze application schemas you have developed yourself.

The ANALYZE_SCHEMA procedure accepts five arguments: the SCHEMA to analyze; the METHOD (COMPUTE, ESTIMATE, or DELETE); ESTIMATE_ROWS and ESTIMATE_PERCENT, which control the sample size when the method is ESTIMATE; and METHOD_OPT, the 'FOR ...' clause (for example, FOR ALL INDEXED COLUMNS) that is passed to the underlying ANALYZE command.

So, for example, to analyze all of the objects in SCOTT's schema, we can do the following. We start by first deleting and then collecting statistics:

scott@TKYTE816> exec dbms_utility.analyze_schema(user,'delete');

PL/SQL procedure successfully completed.

scott@TKYTE816> select table_name, num_rows, last_analyzed
  2  from user_tables;

TABLE_NAME                       NUM_ROWS LAST_ANAL
------------------------------ ---------- ---------
BONUS
CREATE$JAVA$LOB$TABLE
DEPT
...

12 rows selected.

scott@TKYTE816> exec dbms_utility.analyze_schema(user,'compute');

PL/SQL procedure successfully completed.

scott@TKYTE816> select table_name, num_rows, last_analyzed
  2  from user_tables;

TABLE_NAME                       NUM_ROWS LAST_ANAL
------------------------------ ---------- ---------
BONUS                                   0 03-FEB-01
CREATE$JAVA$LOB$TABLE                  58 03-FEB-01
DEPT                                    4 03-FEB-01
...

12 rows selected.

This simply shows that the ANALYZE COMPUTE actually did its job - the NUM_ROWS and LAST_ANALYZED columns got filled in.

In general, the ANALYZE_SCHEMA procedure is as straightforward as it sounds. If you need to analyze certain objects in certain specific ways, it will not apply. This procedure applies the same sort of analysis to every object, with no exceptions. For example, if you have a large data warehouse and make use of histograms on specific columns, or on certain tables only, ANALYZE_SCHEMA is not what you want. You can use ANALYZE_SCHEMA to get histograms either for every column or for none of the columns - not just certain columns. Once you go beyond the 'simple' with regard to analyzing objects, ANALYZE_SCHEMA is not useful any more. This routine works well for small to medium sized applications, where small to medium is a measure of the amount of data you have. If you have large volumes of data, you will want to analyze in parallel, or use different options on different tables, and that rules ANALYZE_SCHEMA out.
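To illustrate the 'all or nothing' nature of the options, the most you can do is pass one set of options that applies to the whole schema. A call like the following sketch (the 15 percent sample is an arbitrary example) applies the same METHOD_OPT to every table in the schema - histograms on all indexed columns everywhere, with no way to single out particular tables or columns:

begin
    dbms_utility.analyze_schema
    ( schema           => user,
      method           => 'ESTIMATE',
      estimate_percent => 15,
      method_opt       => 'FOR ALL INDEXED COLUMNS' );
end;
/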

If you do use ANALYZE_SCHEMA, you should be aware of the following two issues. The first has to do with ANALYZE_SCHEMA against a schema that is changing. The second is with respect to objects ANALYZE_SCHEMA does not analyze. We will look at both of these caveats in turn.

ANALYZE_SCHEMA with a Changing Schema

Suppose you start an ANALYZE_SCHEMA in the SCOTT schema. You've added some large tables, so it will take a while. In another session, you drop or add some objects in SCOTT's schema. The object you drop has not yet been reached by ANALYZE_SCHEMA. When ANALYZE_SCHEMA gets to it, you will receive the somewhat misleading message:

scott@TKYTE816> exec dbms_utility.analyze_schema(user,'compute');
BEGIN dbms_utility.analyze_schema(user,'compute'); END;

*
ERROR at line 1:
ORA-20000: You have insufficient privileges for an object in this schema.
ORA-06512: at "SYS.DBMS_UTILITY", line 258
ORA-06512: at line 1

Obviously, you have all of the privileges you need; you own the objects after all. The real problem is that a table it was trying to analyze no longer existed by the time it got around to analyzing it. Instead of recognizing that the table no longer exists, it assumes it does, and reports that you must not have sufficient privileges to analyze it. Currently, there is nothing you can do about this, other than simply running the ANALYZE_SCHEMA again.

The other thing to be aware of is that objects added to the schema after the ANALYZE_SCHEMA begins will not be analyzed - it will not see them yet. This is fairly harmless, as the ANALYZE_SCHEMA will run to completion successfully.

ANALYZE_SCHEMA does not Analyze Everything

There is an open issue with respect to ANALYZE_SCHEMA. It will not analyze an index-organized table that has an overflow segment (see Chapter 7, Indexes, for more information regarding IOTs and overflows). For example, if you run the following code:

scott@TKYTE816> drop table t;

Table dropped.

scott@TKYTE816> create table t ( x int primary key, y date )
  2  organization index
  3  OVERFLOW TABLESPACE TOOLS
  4  /

Table created.

scott@TKYTE816> execute dbms_utility.analyze_schema('SCOTT','COMPUTE')

PL/SQL procedure successfully completed.

scott@TKYTE816> select table_name, num_rows, last_analyzed
  2    from user_tables
  3   where table_name = 'T';

TABLE_NAME                       NUM_ROWS LAST_ANAL
------------------------------ ---------- ---------
T

you will find that the table did not get analyzed. However, if you leave off the OVERFLOW clause:

scott@TKYTE816> drop table t;

Table dropped.

scott@TKYTE816> create table t ( x int primary key, y date )
  2  organization index
  3  /

Table created.

scott@TKYTE816> execute dbms_utility.analyze_schema('SCOTT','COMPUTE')

PL/SQL procedure successfully completed.

scott@TKYTE816> select table_name, num_rows, last_analyzed
  2    from user_tables
  3   where table_name = 'T';

TABLE_NAME                       NUM_ROWS LAST_ANAL
------------------------------ ---------- ---------
T                                       0 03-FEB-01

it does get analyzed. This does not mean you should leave the OVERFLOW clause off your IOTs, but rather that you will have to analyze these objects manually.
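One simple way to catch the stragglers is to follow ANALYZE_SCHEMA with a manual pass over whatever it skipped. The following sketch just looks for tables still showing no LAST_ANALYZED date and computes statistics on them individually (swap COMPUTE for ESTIMATE on big tables):

begin
    for x in ( select table_name
                 from user_tables
                where last_analyzed is null
                  and temporary = 'N' )   -- skip global temporary tables
    loop
        execute immediate
            'analyze table "' || x.table_name || '" compute statistics';
    end loop;
end;
/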

ANALYZE_DATABASE

This will be an exceptionally short section. Do not use this procedure. It is not realistic on a database of any significant size, and it has the nasty side effect of analyzing the data dictionary (these are SYS-owned objects, and we should never analyze those). Do not use it. Simply ignore its existence.

FORMAT_ERROR_STACK

FORMAT_ERROR_STACK is a function that, at first glance, would appear to be very useful, but in practice is not at all. In actuality, FORMAT_ERROR_STACK is simply a less functional implementation of SQLERRM (SQL ERRor Message). A simple demonstration will help you to understand what I mean:

scott@TKYTE816> create or replace procedure p1   2  as   3  begin   4          raise program_error;   5  end;   6  /      Procedure created. scott@TKYTE816> create or replace procedure p2   2  as   3  begin   4          p1;   5  end;   6  /      Procedure created.      scott@TKYTE816> create or replace procedure p3   2  as   3  begin   4          p2;   5  end;   6  /      Procedure created.      scott@TKYTE816> exec p3 BEGIN p3; END;      * ERROR at line 1: ORA-06501: PL/SQL: program error ORA-06512: at "SCOTT.P1", line 4 ORA-06512: at "SCOTT.P2", line 4 ORA-06512: at "SCOTT.P3", line 4 ORA-06512: at line 1 

If we have an error, and we do not catch it in an exception handler, the entire error stack is displayed for us, and would be available to a Pro*C, OCI, JDBC, and so on, program. You would expect the DBMS_UTILITY.FORMAT_ERROR_STACK routine to return similar information. You will find, however, that it loses this important information:

scott@TKYTE816> create or replace procedure p3
  2  as
  3  begin
  4          p2;
  5  exception
  6          when others then
  7                dbms_output.put_line( dbms_utility.format_error_stack );
  8  end;
  9  /

Procedure created.

scott@TKYTE816> exec p3
ORA-06501: PL/SQL: program error

PL/SQL procedure successfully completed.

As you can see, we actually lost the error stack by calling FORMAT_ERROR_STACK! This routine returns the same information SQLERRM would return:

scott@TKYTE816> create or replace procedure p3
  2  as
  3  begin
  4          p2;
  5  exception
  6          when others then
  7                  dbms_output.put_line( sqlerrm );
  8  end;
  9  /

Procedure created.

scott@TKYTE816> exec p3
ORA-06501: PL/SQL: program error

PL/SQL procedure successfully completed.

Before, I said FORMAT_ERROR_STACK was a less functional SQLERRM. This is because SQLERRM can not only return the current error message, it can also return the message for any error code:

scott@TKYTE816> exec dbms_output.put_line( sqlerrm(-1) );
ORA-00001: unique constraint (.) violated

PL/SQL procedure successfully completed.

Unfortunately, there simply is no way currently to get the real error stack in PL/SQL. You must let fatal errors propagate up to the calling client routine, in order to get the actual line number of the code that raised the error in the first place.

FORMAT_CALL_STACK

Fortunately, this function is truly useful, unlike FORMAT_ERROR_STACK. It returns the current call stack to us. Using it, we can write some utility procedures such as MY_CALLER and WHO_AM_I. These routines allow a procedure to determine which piece of source code, and which line number within it, invoked it. This is very useful for debugging and logging purposes. Also, a procedure could modify its behavior based on who called it, or where it was called from.

Before we introduce the code for MY_CALLER and WHO_AM_I, let us look at what the call stack provides for us, and what the output from these routines is destined to be. If we use the P1, P2, P3 example from above, and rewrite P1 to be:

scott@TKYTE816> create or replace procedure p1   2  as   3      l_owner     varchar2(30);   4      l_name      varchar2(30);   5      l_lineno    number;   6      l_type      varchar2(30);   7  begin   8      dbms_output.put_line( '----------------------' );   9      dbms_output.put_line( dbms_utility.format_call_stack );  10      dbms_output.put_line( '----------------------' );  11      who_called_me( l_owner, l_name, l_lineno, l_type );  12      dbms_output.put_line( l_type || ' ' ||  13                            l_owner || '.' || l_name ||  14                            '(' || l_lineno || ')' );  15      dbms_output.put_line( '----------------------' );  16      dbms_output.put_line( who_am_i );  17      dbms_output.put_line( '----------------------' );  18      raise program_error;  19  end;  20  /      Procedure created. 

we will receive output such as:

scott@TKYTE816> exec p3
----------------------
----- PL/SQL Call Stack -----
  object      line  object
  handle    number  name
 2f191e0         9  procedure SCOTT.P1
 39f0a9c         4  procedure SCOTT.P2
 3aae318         4  procedure SCOTT.P3
 3a3461c         1  anonymous block
----------------------
PROCEDURE SCOTT.P2(4)
----------------------
SCOTT.P1(16)
----------------------
BEGIN p3; END;

*
ERROR at line 1:
ORA-06501: PL/SQL: program error
ORA-06512: at "SCOTT.P2", line 8
ORA-06512: at "SCOTT.P3", line 4
ORA-06512: at line 1

So, we can see the entire call stack in P1. This shows that P1 was called by P2, P2 was called by P3, and P3 was called by an anonymous block. Additionally, we can procedurally retrieve the fact that our caller in P1 was the procedure SCOTT.P2, and that they called us from line 4. Lastly, we can see simply that we are the procedure SCOTT.P1.

So, now that we see what the call stack looks like, and what kind of output we would like to get, we can present the code to do it:

tkyte@TKYTE816> create or replace function my_caller return varchar2   2   3  as   4      owner       varchar2(30);   5      name        varchar2(30);   6      lineno      number;   7      caller_t    varchar2(30);   8      call_stack  varchar2(4096) default dbms_utility.format_call_stack;   9      n           number;  10      found_stack BOOLEAN default FALSE;  11      line        varchar2(255);  12      cnt         number := 0;  13  begin  14  15      loop  16          n := instr( call_stack, chr(10) );  17          exit when ( cnt = 3 or n is NULL or n = 0 );  18  19          line := substr( call_stack, 1, n-1 );  20          call_stack := substr( call_stack, n+1 );  21  22          if ( NOT found_stack ) then  23              if ( line like '%handle%number%name%' ) then  24                  found_stack := TRUE;  25              end if;  26          else  27              cnt := cnt + 1;  28              -- cnt = 1 is ME  29              -- cnt = 2 is MY Caller  30              -- cnt = 3 is Their Caller  31              if ( cnt = 3 ) then  32                  lineno := to_number(substr( line, 13, 6 ));  33                  line   := substr( line, 21 );  34                  if ( line like 'pr%' ) then  35                      n := length( 'procedure ' );  36                  elsif ( line like 'fun%' ) then  37                      n := length( 'function ' );  38                  elsif ( line like 'package body%' ) then  39                      n := length( 'package body ' );  40                  elsif ( line like 'pack%' ) then  41                      n := length( 'package ' );  42                  elsif ( line like 'anonymous block%' ) then  43                      n := length( 'anonymous block ' );  44                  else -- must be a trigger  45                      n := 0;  46                  end if;  47                  if ( n <> 0 ) then  48                     caller_t := ltrim(rtrim(upper(substr(line,1,n-1))));  49                     line := substr( line, n );  50                  else  51                      caller_t := 'TRIGGER';  52                      line := ltrim( line );  53                  end if;  54                     n := instr( line, '.' );  55                     owner := ltrim(rtrim(substr( line, 1, n-1 )));  56                     name  := ltrim(rtrim(substr( line, n+1 )));  57              end if;  58          end if;  59      end loop;  60      return owner || '.' || name;  61  end;  62  / Function created.      tkyte@TKYTE816> create or replace function who_am_i return varchar2   2  as   3  begin   4      return my_caller;   5  end;   6  /      Function created. 

Now that you have these routines, you can do some interesting things with them - for example, having a generic logging or auditing routine record exactly which procedure called it, or having a routine change its behavior depending on where it was called from.
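Here is a minimal sketch of such a generic DEBUG routine (DEBUG is just a name I am making up for illustration) that stamps every message with the routine that issued it, using the MY_CALLER function from above:

create or replace procedure debug( p_message in varchar2 )
as
begin
    -- my_caller, invoked from inside debug, reports the routine that
    -- called debug - so each message is tagged with its source
    dbms_output.put_line( my_caller || ': ' || p_message );
end;
/

Any routine that calls DEBUG( 'some message' ) gets its own name prepended to the output automatically, without having to pass it in.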

GET_TIME

This function returns a ticker that measures time in hundredths of a second. You cannot use GET_TIME to tell what time it is, as its name might imply; rather, you use it to measure elapsed time. A common way to do this is:

scott@TKYTE816> declare   2      l_start  number;   3      n         number := 0;   4  begin   5   6      l_start := dbms_utility.get_time;   7   8      for x in 1 .. 100000   9      loop  10          n := n+1;  11      end loop;  12  13      dbms_output.put_line( ' it took ' ||  14        round( (dbms_utility.get_time-l_start)/100, 2 ) ||  15        ' seconds...' );  16  end;  17  / it took .12 seconds...      PL/SQL procedure successfully completed. 

so you can use GET_TIME to measure elapsed time in hundredths of a second. You should realize, however, that GET_TIME will wrap around to zero and start counting again if your database is up long enough. Now, on most platforms this time to wrap is well over a year in length. The counter is a 32-bit integer, and this can hold hundredths of seconds for about 497 days. After that, the 32-bit integer will roll over to zero, and start over again. On some platforms, the operating system supplies this ticker in a finer increment than hundredths of seconds. On these platforms the ticker may roll over sooner than 497 days. For example, on Sequent it is known that the timer will roll over every 71.58 minutes, since that operating system's ticker measures time in microseconds, leaving significantly less room in the 32-bit integer. On a 64-bit platform, the time may very well not roll over for many thousands of years.
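The arithmetic behind those numbers is easy to verify - a 32-bit counter holds 2 to the power of 32 ticks:

select power(2,32)/100/60/60/24 days_at_hundredths,
       power(2,32)/1000000/60   minutes_at_microseconds
  from dual;

which works out to roughly 497 days' worth of hundredths of a second, and roughly 71.6 minutes' worth of microseconds.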

A last note about GET_TIME: the same value that GET_TIME returns may be retrieved with a SELECT from the V$TIMER dynamic view. The view and GET_TIME return the same values:

tkyte@TKYTE816> select hsecs, dbms_utility.get_time
  2  from v$timer;

     HSECS   GET_TIME
---------- ----------
   7944822    7944822

GET_PARAMETER_VALUE

This API allows anyone to get the value of a specific init.ora parameter. Even if you have no access to V$PARAMETER, and cannot run the SHOW PARAMETER command, you can use this call to retrieve the value of an init.ora parameter. It works like this:

scott@TKYTE816> show parameter utl_file_dir ORA-00942: table or view does not exist      scott@TKYTE816> select * from v$parameter where name = 'utl_file_dir'   2  / select * from v$parameter where name = 'utl_file_dir'               * ERROR at line 1: ORA-00942: table or view does not exist      scott@TKYTE816> declare   2      intval number;   3      strval varchar2(512);   4  begin   5      if ( dbms_utility.get_parameter_value( 'utl_file_dir',   6                                              intval,   7                                              strval ) = 0)   8      then   9          dbms_output.put_line( 'Value = ' || intval );  10      else  11          dbms_output.put_line( 'Value = ' || strval );  12      end if;  13  end;  14  / Value = c:\temp\      PL/SQL procedure successfully completed. 

As you can see, even though SCOTT cannot query V$PARAMETER, and the call to SHOW PARAMETER failed, he can still use this call to get the value. It should be noted that parameters set with True/False strings in the init.ora file are reported back as numbers (the function itself returns 0 for them), with an INTVAL of 1 indicating True and an INTVAL of 0 indicating False. Additionally, for multi-valued parameters, such as UTL_FILE_DIR, this routine only returns the first value. If I use an account that can do a SHOW PARAMETER in the same database:

tkyte@TKYTE816> show parameter utl_file_dir

NAME                                 TYPE    VALUE
------------------------------------ ------- ------------------------------
utl_file_dir                         string  c:\temp, c:\oracle

I can see more values.
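To see the True/False behavior described above, you can point the routine at a boolean parameter such as TIMED_STATISTICS (a sketch - the value you get back obviously depends on your init.ora settings):

declare
    intval number;
    strval varchar2(512);
begin
    if ( dbms_utility.get_parameter_value( 'timed_statistics',
                                            intval, strval ) = 0 )
    then
        -- a return code of 0 means a numeric/boolean parameter;
        -- intval will be 1 for True and 0 for False
        dbms_output.put_line( 'timed_statistics = ' || intval );
    else
        dbms_output.put_line( 'timed_statistics = ' || strval );
    end if;
end;
/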

NAME_RESOLVE

This routine will take the name of a procedure, function, package, or a synonym that points to one of these, and fully resolve it for us - reporting the schema it belongs to, its type, and its object ID. For example:

scott@TKYTE816> declare   2      type vcArray is table of varchar2(30);   3      l_types vcArray := vcArray( null, null, null, null, 'synonym',   4                                  null, 'procedure', 'function',   5                                 'package' );   6   7       l_schema   varchar2(30);   8       l_part1    varchar2(30);   9       l_part2    varchar2(30);  10       l_dblink   varchar2(30);  11       l_type     number;  12       l_obj#     number;  13  begin  14    dbms_utility.name_resolve( name => 'DBMS_UTILITY',  15                               context       => 1,  16                               schema        => l_schema,  17                               part1         => l_part1,  18                               part2         => l_part2,  19                               dblink        => l_dblink,  20                               part1_type    => l_type,  21                               object_number => l_obj# );  22    if l_obj# IS NULL  23    then  24      dbms_output.put_line('Object not found or not valid.');  25    else  26      dbms_output.put( l_schema || '.' || nvl(l_part1,l_part2) );  27      if l_part2 is not null and l_part1 is not null  28      then  29          dbms_output.put( '.' || l_part2 );  30      end if;  31  32      dbms_output.put_line( ' is a ' || l_types( l_type ) ||  33                            ' with object id ' || l_obj# ||  34                            ' and dblink "' || l_dblink || '"' );  35    end if;  36  end;  37  / SYS.DBMS_UTILITY is a package with object id 2408 and dblink ""      PL/SQL procedure successfully completed. 

In this case, NAME_RESOLVE took our synonym DBMS_UTILITY, and figured out for us that this was in fact a database package that is owned by SYS.

It should be noted that NAME_RESOLVE works only on procedures, functions, packages, and synonyms that point to one of these three object types. It explicitly will not work on a database table for example. You will receive the following error:

declare
*
ERROR at line 1:
ORA-06564: object emp does not exist
ORA-06512: at "SYS.DBMS_UTILITY", line 68
ORA-06512: at line 9

if you attempt to use it on the EMP table in the SCOTT schema for example.

In addition to not being able to handle tables, indexes, and other objects, NAME_RESOLVE does not function as documented when it comes to resolving synonyms that point to remote objects over a database link. It is documented that if you pass NAME_RESOLVE a synonym for a remote package/procedure, then the TYPE will be set to synonym, and it will tell us the name of the database link. This is an issue with the NAME_RESOLVE code (the documentation is correct, the procedure does not function as it should). Currently, NAME_RESOLVE will never return SYNONYM as the type. Rather, it will resolve the remote object and return its name and an object ID of -1. For example, I have a database link set up, and I create a synonym X for DBMS_UTILITY@ora8i.world. When I NAME_RESOLVE this, I receive:

SYS.DBMS_UTILITY is a package with object id -1 and dblink ""

PL/SQL procedure successfully completed.

I should have been told that X was a synonym, and the DBLINK OUT parameter should have been filled in. As you can see, however, the DBLINK is Null, and the only indication we have that this is not a local package is the fact that the object ID is set to -1. You should not rely on this behavior persisting in future releases of Oracle. It has been acknowledged as an issue in the NAME_RESOLVE implementation, not a documentation issue - the documentation is correct, the observed behavior is wrong. When this gets corrected, NAME_RESOLVE will function differently on remote objects. For this reason, you will want to either avoid using NAME_RESOLVE on remote objects, or make sure to 'wrap' the NAME_RESOLVE routine in some function of your own. That way, if and when the behavior changes, there is a single place in which you can modify your code to provide yourself with the old functionality, if that is what you depend on.
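Such a wrapper need not be fancy. Something along the lines of the following sketch (MY_NAME_RESOLVE is just a name I am using here) gives you that single point of change - all of your code calls this routine, and any compensation for the remote-object behavior lives in one place:

create or replace procedure my_name_resolve
( p_name          in  varchar2,
  p_schema        out varchar2,
  p_part1         out varchar2,
  p_part2         out varchar2,
  p_dblink        out varchar2,
  p_part1_type    out number,
  p_object_number out number )
as
begin
    dbms_utility.name_resolve( name          => p_name,
                               context       => 1,
                               schema        => p_schema,
                               part1         => p_part1,
                               part2         => p_part2,
                               dblink        => p_dblink,
                               part1_type    => p_part1_type,
                               object_number => p_object_number );

    -- today, the only clue that an object is remote is an object
    -- number of -1; if Oracle changes that, only this routine changes
end;
/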

One last comment about NAME_RESOLVE. The parameters CONTEXT and OBJECT_NUMBER are under-documented and not documented, respectively. The CONTEXT parameter is documented briefly as:

... must be an integer between 0 and 8

In fact, it must be an integer between 1 and 7 or you'll receive:

declare
*
ERROR at line 1:
ORA-20005: ORU-10034: context argument must be 1 or 2 or 3 or 4 or 5 or 6 or 7
ORA-06512: at "SYS.DBMS_UTILITY", line 66
ORA-06512: at line 14

And if it is anything other than 1, you will receive one of the two following error messages:

ORA-04047: object specified is incompatible with the flag specified

ORA-06564: object OBJECT-NAME does not exist

So, the only valid value for context is 1. The OBJECT_NUMBER parameter is not documented at all. This is the OBJECT_ID value found in DBA_OBJECTS, ALL_OBJECTS, and USER_OBJECTS. For example, given our first example, where the OBJECT_ID was shown to be 2408, I can query:

scott@TKYTE816> select owner, object_name
  2  from all_objects
  3  where object_id = 2408;

OWNER                          OBJECT_NAME
------------------------------ ------------------------------
SYS                            DBMS_UTILITY

NAME_TOKENIZE

This utility routine simply takes a string that represents some object name, and breaks it into its component pieces for you. Objects are referenced via:

[schema].[object_name].[procedure|function]@[database link] 

NAME_TOKENIZE simply takes a string in this form, and breaks it out into the three leading pieces and the last (database link) piece. Additionally, it tells us at what byte it stopped parsing the object name. Here is a small example showing what you might expect back from various object names you pass to it. Note that you do not have to use real object names (these tables and procedures do not have to exist), but you must use valid object identifiers. If you do not use a valid object identifier, NAME_TOKENIZE will raise an error. This makes NAME_TOKENIZE suitable as a method to discover whether a given string of characters will be a valid identifier or not:

scott@TKYTE816> declare   2      l_a      varchar2(30);   3      l_b      varchar2(30);   4      l_c      varchar2(30);   5      l_dblink varchar2(30);   6      l_next   number;   7   8      type vcArray is table of varchar2(255);   9      l_names vcArray :=  10          vcArray( 'owner.pkg.proc@database_link',  11                   'owner.tbl@database_link',  12                   'tbl',  13                   '"Owner".tbl',  14                   'pkg.proc',  15                   'owner.pkg.proc',  16                   'proc',  17                   'owner.pkg.proc@dblink with junk',  18                   '123' );  19  begin  20      for i in 1 .. l_names.count  21      loop  22      begin  23          dbms_utility.name_tokenize(name   => l_names(i),  24                                     a      => l_a,  25                                     b      => l_b,  26                                     c      => l_c,  27                                     dblink => l_dblink,  28                                     nextpos=> l_next );  29  30          dbms_output.put_line( 'name    ' || l_names(i) );  31          dbms_output.put_line( 'A       ' || l_a );  32          dbms_output.put_line( 'B       ' || l_b );  33          dbms_output.put_line( 'C       ' || l_c );  34          dbms_output.put_line( 'dblink  ' || l_dblink );  35          dbms_output.put_line( 'next    ' || l_next || ' ' ||  36                                  length(l_names(i)));  37          dbms_output.put_line( '-----------------------' );  38      exception  39          when others then  40              dbms_output.put_line( 'name    ' || l_names(i) );  41              dbms_output.put_line( sqlerrm );  42      end;  43      end loop;  44  end;  45  / name    owner.pkg.proc@database_link A       OWNER B       PKG C       PROC dblink  DATABASE_LINK next    28 28 

As you can see, this breaks out the various bits and pieces of our object name for us. Here the NEXT is set to the length of the string - parsing ended when we hit the end of the string in this case. Since we used every possible piece of the object name, all four components are filled in. Now for the remaining examples:

name    owner.tbl@database_link
A       OWNER
B       TBL
C
dblink  DATABASE_LINK
next    23 23
-----------------------
name    tbl
A       TBL
B
C
dblink
next    3 3
-----------------------

Notice here how B and C are left Null. Even though an object identifier is SCHEMA.OBJECT.PROCEDURE, NAME_TOKENIZE makes no attempt to put TBL into the B OUT parameter. It simply takes the first part it finds, and puts it in A, the next into B, and so on. A, B, and C do not represent specific pieces of the object name, just the first found, next found, and so on.

name    "Owner".tbl A       Owner B       TBL C dblink next    11 11 ----------------------- 

Here is something interesting. In the previous examples, NAME_TOKENIZE uppercased everything. This is because identifiers are in uppercase unless you use quoted identifiers. Here, we used a quoted identifier. NAME_TOKENIZE will preserve the case for us, and remove the quotes!

name    pkg.proc
A       PKG
B       PROC
C
dblink
next    8 8
-----------------------
name    owner.pkg.proc
A       OWNER
B       PKG
C       PROC
dblink
next    14 14
-----------------------
name    proc
A       PROC
B
C
dblink
next    4 4
-----------------------
name    owner.pkg.proc@dblink with junk
A       OWNER
B       PKG
C       PROC
dblink  DBLINK
next    22 31
-----------------------

Here is an example where the parsing stopped before we ran out of string. NAME_TOKENIZE is telling us it stopped parsing at byte 22 out of 31. This is the space right before 'with junk'. It simply ignores the remaining pieces of the string for us.

name    123
ORA-00931: missing identifier

PL/SQL procedure successfully completed.

And lastly, this shows that if we use an invalid identifier, NAME_TOKENIZE will raise an exception. It checks all tokens for being valid identifiers before returning. This makes it useful as a tool to validate object names if you are building an application that will create objects in the Oracle database. For example, if you are building a data modeling tool, and would like to validate that the name the end user wants to use for a table or column is valid, NAME_TOKENIZE will do the work for you.
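A small validation function built on top of it might look like the following sketch (IS_VALID_NAME is my own name for it). It returns TRUE only if every token parses cleanly and the whole string was consumed:

create or replace function is_valid_name( p_name in varchar2 )
return boolean
as
    l_a      varchar2(30);
    l_b      varchar2(30);
    l_c      varchar2(30);
    l_dblink varchar2(30);
    l_next   number;
begin
    dbms_utility.name_tokenize( name    => p_name,
                                a       => l_a,
                                b       => l_b,
                                c       => l_c,
                                dblink  => l_dblink,
                                nextpos => l_next );

    -- no exception means every token was a valid identifier; also
    -- insist that parsing consumed the entire string (no trailing junk)
    return l_next = length( p_name );
exception
    when others then
        return FALSE;
end;
/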

COMMA_TO_TABLE, TABLE_TO_COMMA

These two utilities either take a comma-delimited string of identifiers and parse it into a PL/SQL table (COMMA_TO_TABLE), or take a PL/SQL table of strings and make a comma-delimited string out of them (TABLE_TO_COMMA). I stress the word identifiers above, because COMMA_TO_TABLE uses NAME_TOKENIZE to parse the strings; hence, as we saw in that section, we need to use valid Oracle identifiers (or quoted identifiers). This still limits us to 30 characters per element in our comma-delimited string, however.

This utility is most useful for applications that, for example, want to store a list of table names in a single string, and have it easily converted into a PL/SQL table at run-time. Otherwise, it is of limited use. If you need a general purpose COMMA_TO_TABLE routine that works with comma-delimited strings of arbitrary data, see Chapter 20, Using Object Relational Features. In the SELECT * from PLSQL_FUNCTION section, I demonstrate how to do this.

Here is an example using this routine, demonstrating how it deals with long identifiers and invalid identifiers:

scott@TKYTE816> declare   2      type vcArray is table of varchar2(4000);   3   4      l_names  vcArray := vcArray( 'emp,dept,bonus',   5                                   'a,  b  ,   c',   6                                   '123, 456, 789',   7                                   '"123", "456", "789"',   8           '"This is a long string, longer then 32 characters","b",c');   9      l_tablen number;  10      l_tab    dbms_utility.uncl_array;  11  begin  12      for i in 1 .. l_names.count  13      loop  14          dbms_output.put_line( chr(10) ||  15                                '[' || l_names(i) || ']' );  16      begin  17  18          dbms_utility.comma_to_table( l_names(i),  19                                       l_tablen, l_tab );  20  21          for j in 1..l_tablen  22          loop  23              dbms_output.put_line( '[' || l_tab(j) || ']' );  24          end loop;  25  26          l_names(i) := null;  27          dbms_utility.table_to_comma( l_tab,  28                                       l_tablen, l_names(i) );  29          dbms_output.put_line( l_names(i) );  30      exception  31          when others then  32              dbms_output.put_line( sqlerrm );  33      end;  34      end loop;  35  end;  36  /      [emp,dept,bonus] [emp] [dept] [bonus] emp,dept,bonus 

So, that shows that it can take the string emp,dept,bonus, break it into a table, and put it back together again.

[a,  b,    c]
[a]
[  b  ]
[    c]
a,  b,    c

This example shows that if you have whitespace in the list, it will be preserved. You would have to use the TRIM function to remove leading and trailing white space if you do not want any.

[123, 456, 789]
ORA-00931: missing identifier

This shows that to use this procedure on a comma-delimited string of numbers, we must go one step further as demonstrated below:

["123", "456", "789"] ["123"] [ "456"] [ "789"] "123", "456", "789" 

Here, it is able to extract the numbers from the string. Note, however, that it not only retains the leading whitespace, but it also retains the quotes. It would be up to you to remove them if you so desire.

["This is a long string, longer than 32 characters","b",c] ORA-00972: identifier is too long      PL/SQL procedure successfully completed. 

And this last example shows that if the identifier is too long (longer than 30 characters), it will raise an error as well. These routines are only useful for strings of 30 characters or less. While it is true that TABLE_TO_COMMA will take larger strings than 30 characters, COMMA_TO_TABLE will not be able to undo this work.
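As noted above, COMMA_TO_TABLE preserves any leading and trailing whitespace in each element. If you do not want it, a quick pass over the resulting table with LTRIM/RTRIM (as used in the earlier examples) cleans it up. A small sketch:

declare
    l_tablen number;
    l_tab    dbms_utility.uncl_array;
begin
    dbms_utility.comma_to_table( 'a,  b  ,   c', l_tablen, l_tab );

    for i in 1 .. l_tablen
    loop
        -- strip the leading/trailing blanks that were preserved
        l_tab(i) := ltrim( rtrim( l_tab(i) ) );
        dbms_output.put_line( '[' || l_tab(i) || ']' );
    end loop;
end;
/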

DB_VERSION and PORT_STRING

The DB_VERSION routine was added in Oracle 8.0 in order to make it easier for applications to figure out what version of the database they are running in. We could have used this in our CRYPT_PKG (see the DBMS_OBFUSCATION_TOOLKIT section), for example, to tell users who attempted to use the DES3 routines in an Oracle 8.1.5 database that they would not work, instead of just trying to execute the DES3 routines and failing. It has a very simple interface, as follows:

scott@TKYTE816> declare   2      l_version        varchar2(255);   3      l_compatibility varchar2(255);   4  begin   5      dbms_utility.db_version( l_version, l_compatibility );   6      dbms_output.put_line( l_version );   7      dbms_output.put_line( l_compatibility );   8  end;   9  / 8.1.6.0.0 8.1.6      PL/SQL procedure successfully completed. 

It provides more version detail than the older function, PORT_STRING:

scott@TKYTE816> select dbms_utility.port_string from dual;

PORT_STRING
---------------------------
IBMPC/WIN_NT-8.1.0

Using PORT_STRING, not only would you have to parse the string, but you also cannot tell whether you are on version 8.1.5, 8.1.6, or 8.1.7. DB_VERSION is more useful for this. On the other hand, PORT_STRING does tell you what operating system you are on.
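So, in practice, you might use the two together - DB_VERSION when you need to branch on the release, and PORT_STRING only when the platform matters. A sketch along those lines:

declare
    l_version       varchar2(255);
    l_compatibility varchar2(255);
begin
    dbms_utility.db_version( l_version, l_compatibility );

    -- the version string is dot-delimited, so a LIKE on the leading
    -- digits is enough to detect, say, an 8.1.5 database
    if ( l_version like '8.1.5%' )
    then
        dbms_output.put_line( 'DES3 routines are not available here' );
    end if;

    if ( dbms_utility.port_string like '%WIN_NT%' )
    then
        dbms_output.put_line( 'Running on Windows NT' );
    end if;
end;
/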

GET_HASH_VALUE

This function will take any string as input, and return a numeric HASH value for it. You could use this to build your own 'index by table' that was indexed by a string, or as we did in the DBMS_LOCK section, to facilitate the implementation of some other algorithm.

You should be aware that the algorithm used to implement GET_HASH_VALUE can, and has, changed from release to release, so you should not use this function to generate surrogate keys. If you find yourself storing the return value from this function in a table, you might be setting yourself up for a problem in a later release when the same inputs return a different hash value!

This function takes three inputs: the string to hash, a 'base' number, and the size of the hash table. The values returned fall between the base number and the base plus the hash table size, minus one.

As a demonstration of GET_HASH_VALUE, we will implement a new type, HASHTABLETYPE, to add a hash table type to the PL/SQL language. It is very similar to a PL/SQL table type, but is indexed by a VARCHAR2 string instead of a number. Normally, PL/SQL table elements are referenced by subscripts (numbers). This new type of PL/SQL table will have elements that are referenced by arbitrary strings. This will allow us to declare variables of type HASHTABLETYPE, and GET and PUT values in them. We can have as many of these table types as we like. Here is the specification of our type:

tkyte@TKYTE816> create or replace type myScalarType   2  as object   3  ( key  varchar2(4000),   4    val  varchar2(4000)   5  );   6  /      Type created.      tkyte@TKYTE816> create or replace type myArrayType   2  as varray(10000) of myScalarType;   3  /      Type created.      tkyte@TKYTE816> create or replace type hashTableType   2  as object   3  (   4      g_hash_size     number,   5      g_hash_table    myArrayType,   6      g_collision_cnt number,   7   8      static function new( p_hash_size in number )   9          return hashTableType,  10  11      member procedure put( p_key in varchar2,  12                            p_val in varchar2 ),  13  14      member function get( p_key in varchar2 )  15         return varchar2,  16  17      member procedure print_stats  18  );  19  /      Type created. 

An interesting implementation detail here is the addition of the static member function NEW. This will allow us to create our own constructor. You should note that there is absolutely nothing special about the name NEW that I used. It is not a keyword or anything like that. What NEW will allow us to do is to declare a HASHTABLETYPE like this:

declare
    l_hashTable hashTableType := hashTableType.new( 1024 );

instead of like this:

declare
    l_hashTable hashTableType := hashTableType( 1024, myArrayType(), 0 );

It is my belief that the first syntax is, in general, more readable and clearer than the second. The second syntax exposes to the end user many of the implementation details (that we have an array type in there, that there is some variable G_COLLISION_CNT that must be set to zero, and so on). They neither need to know, nor really care about, any of that.

So, now onto the type body itself:

scott@TKYTE816> create or replace type body hashTableType   2  as   3   4  -- Our 'friendly' constructor.   5   6  static function new( p_hash_size in number )   7  return hashTableType   8  is   9  begin  10      return hashTableType( p_hash_size, myArrayType(), 0 );  11  end;  12  13  member procedure put( p_key in varchar2, p_val in varchar2 )  14  is  15      l_hash  number :=  16              dbms_utility.get_hash_value( p_key, 1, g_hash_size );  17  begin  18  19      if ( p_key is null )  20      then  21           raise_application_error( -20001, 'Cannot have NULL key' );  22      end if;  23 

This next piece of code looks to see if we need to 'grow' the table to hold this new hashed value. If we do, we grow it out big enough to hold this index:

 27      if ( l_hash > nvl( g_hash_table.count, 0 ) )  28      then  29          g_hash_table.extend( l_hash-nvl(g_hash_table.count,0)+1 );  30      end if;  31 

Now, there is no guarantee that the index entry our key hashed to is empty. What we do upon detecting a collision is to try and put it in the next collection element. We search forward up to 1,000 slots looking for a place to put it. If we hit 1,000 collisions, we will fail. This would indicate that the table was not sized properly:

 35      for i in 0 .. 1000  36      loop  37          -- If we are going to go past the end of the  38          -- table, add another slot first.  39          if ( g_hash_table.count <= l_hash+i )  40          then  41              g_hash_table.extend;  42          end if;  43 

The next bit of logic says 'if no one is using this slot or our key is in this slot already, use it and return.' It looks a tad strange to check if the G_HASH_TABLE element is Null, or if the G_HASH_TABLE(L_HASH+I).KEY is Null. This just shows that a collection element may be Null, or it may contain an object that has Null attributes:

 46          if ( g_hash_table(l_hash+i) is null OR  47               nvl(g_hash_table(l_hash+i).key,p_key) = p_key )  48          then  49              g_hash_table(l_hash+i) := myScalarType(p_key,p_val);  50              return;  51          end if;  52  53          -- Else increment a collision count and continue  54          -- onto the next slot.  55          g_collision_cnt := g_collision_cnt+1;  56      end loop;  57  58      -- If we get here, the table was allocate too small.  59      -- Make it bigger.  60      raise_application_error( -20001, 'table overhashed' );  61  end;  62  63  64  member function get( p_key in varchar2 ) return varchar2  65  is  66      l_hash  number :=  67               dbms_utility.get_hash_value( p_key, 1, g_hash_size );  68  begin 

When we go to retrieve a value, we look in the index element we think the value should be in, and then look ahead up to 1,000 entries in the event we had collisions. We short circuit this look-ahead search if we ever find an empty slot - we know our entry cannot be beyond that point:

 71      for i in l_hash .. least(l_hash+1000, nvl(g_hash_table.count,0))  72      loop  73          -- If we hit an EMPTY slot, we KNOW our value cannot  74          -- be in the table. We would have put it there.  75          if ( g_hash_table(i) is NULL )  76          then  77              return NULL;  78          end if;  79  80          -- If we find our key, return the value.  81          if ( g_hash_table(i).key = p_key )  82          then  83              return g_hash_table(i).val;  84          end if;  85      end loop;  86  87      -- Key is not in the table. Quit.  88      return null;  89  end;  90 

The last routine is used to print out useful information, such as how many slots we've allocated versus how many we are using, and how many collisions we had. Note that the collision count can be bigger than the table itself!

 97  member procedure print_stats  98  is  99      l_used number default 0; 100  begin 101      for i in 1 .. nvl(g_hash_table.count,0) 102      loop 103          if ( g_hash_table(i) is not null ) 104          then 105              l_used := l_used + 1; 106          end if; 107      end loop; 108 109      dbms_output.put_line( 'Table Extended To.....' || 110                             g_hash_table.count ); 111      dbms_output.put_line( 'We are using..........' || 112                             l_used ); 113      dbms_output.put_line( 'Collision count put...' || 114                             g_collision_cnt ); 115  end; 116 117  end; 118  /      Type body created. 

As you can see, we simply used GET_HASH_VALUE to turn the string into some number we could use to index into our table type to get the value. Now we are ready to see how this new type can be used:

tkyte@TKYTE816> declare   2      l_hashTbl hashTableType := hashTableType.new(power(2,7));   3  begin   4      for x in ( select username, created from all_users )   5      loop   6          l_hashTbl.put( x.username, x.created );   7      end loop;   8   9      for x in ( select username, to_char(created) created,  10                        l_hashTbl.get(username) hash  11                   from all_users )  12      loop  13          if ( nvl( x.created, 'x') <> nvl(x.hash,'x') )  14          then  15              raise program_error;  16          end if;  17      end loop;  18  19      l_hashTbl.print_stats;  20  end;  21  / Table Extended To.....120 We are using..........17 Collision count put...1      PL/SQL procedure successfully completed. 

And that's it. We've just extended the PL/SQL language again, giving it a hash table using the built-in packages.

Summary

This wraps up our overview of many of the procedures found in the DBMS_UTILITY package. Many, such as GET_TIME, GET_PARAMETER_VALUE, GET_HASH_VALUE, and FORMAT_CALL_STACK, are in my list of 'frequently given answers' - that is, they are frequently the answer to many a question; people just weren't aware they existed.


