Oracle 9i New Features




With all of the database enhancements that Oracle 8i brought to Oracle E-Business Suite over what was available in Oracle7, Oracle brought even more with the release of Oracle 9i. The Oracle 9i database provides many new features to the RDBMS world that Oracle E-Business Suite administrators can take advantage of.

Online Table Redefinition

By its inherent nature, Oracle Financials is designed to be a highly available system. Even if it is only part of your internal finance and accounting backbone, those directly connected with those functions require it to do their day-to-day jobs. It is the key to your company's financial future, as all of the financial information is maintained within its tables. If it is part of your E-business solution and any part of your company's store front on the Internet, its availability is even more critical, particularly if you are multinational.

In systems where high availability is a requirement, it is often necessary to redefine tables to improve performance, to provide added features, or to simply store related information that was not otherwise captured. While you should never attempt to make manual changes to any of the tables that belong to the core product, if you have any custom tables that your company relies on for reporting, for interfacing with outside data sources, or for any functionality that you have determined that Oracle E-Business Suite is lacking, you may find it necessary at some point to redefine one or more of your tables. In past releases, it was often necessary to make these tables unavailable to the end user for the duration of the redefinition. In Oracle 9i, this is no longer the case. Oracle now provides a means by which you can redefine tables online, making a significant improvement in availability in the system. With Online Table Redefinition, only a very small window exists where the table is not accessible to DML and this window is fixed in size regardless of the size or complexity of the table or the redefinition.

For example, suppose you have a table that is populated from an external source every hour, and the load process overwrites the previous hour's file. You need to make several changes: one column (abc, for example) must be widened by 100 varchar2 characters; three new columns (def, ghi, and jkl) must be added and defaulted to 10, 25, and 50, respectively; and the new definition must be range partitioned by transaction location. Any one of these could be accomplished on its own, each might take a long time, and any of them may lock the table against updates for as long as the change takes to process. If you make these changes through online redefinition instead, a little extra setup is required to prepare for the procedure, but there need not be any interruption in service for the end user to notice.

Steps for Online Table Redefinition

There are two methods for Online Table Redefinition. The preferred (and default) method uses the primary key to perform the redefinition. The only caveat to using this method is that the preredefined table and the postredefined table have to have the same primary key columns. The second uses the rowid as the vehicle through which the redefinition is accomplished. Using the rowid method, you should not attempt to redefine an Index Organized Table. The end product, the redefined table, has one additional column that the initial table did not have. The hidden M_ROW$$ column is added to the table to facilitate this kind of redefinition and should be either marked as unused or (for a cleaner table) dropped after the redefinition is successfully completed.

The first step in online redefinition is to call dbms_redefinition.can_redef_table(), specifying the table on which you want to perform the online redefinition as well as the method of redefinition you want to use if you are not using the default. If there is a reason why an online redefinition of the specified table cannot take place, the procedure raises an error indicating why the table is not eligible for Online Table Redefinition.
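A minimal sketch of this first check, assuming a hypothetical custom table XX_TXN_FEED owned by a CUSTOM schema:

 BEGIN
    -- Raises an error describing the problem if the table is ineligible;
    -- completes silently if it is a valid candidate.
    DBMS_REDEFINITION.CAN_REDEF_TABLE('CUSTOM', 'XX_TXN_FEED',
                                      DBMS_REDEFINITION.CONS_USE_PK);
 END;
 /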

Once you have determined that the table you want to redefine is indeed a valid candidate, you have to create an empty interim table, in the same schema as the table to be redefined, with all of the attributes you want the end product table to have, including any columns to be added or left out. This interim table should look exactly as the end table will look.
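Continuing the earlier example, a sketch of what the interim table might look like, with the widened abc column, the three new defaulted columns, and range partitioning by transaction location (all names, datatypes, and partition bounds here are hypothetical):

 CREATE TABLE custom.xx_txn_feed_interim
 ( txn_id        NUMBER,
   txn_location  VARCHAR2(30),
   abc           VARCHAR2(200),        -- widened column
   def           NUMBER DEFAULT 10,    -- new column
   ghi           NUMBER DEFAULT 25,    -- new column
   jkl           NUMBER DEFAULT 50     -- new column
 )
 PARTITION BY RANGE (txn_location)
 ( PARTITION loc_a_to_m VALUES LESS THAN ('N'),
   PARTITION loc_n_to_z VALUES LESS THAN (MAXVALUE)
 );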

Table redefinition is an operation that can be performed in parallel. If you want a degree of parallelism for the operation on both tables, you can alter the session in which you will perform the redefinition to set the degree of parallelism you want to attempt, and Oracle will use parallel execution wherever it can in the redefinition process, speeding it up as much as possible.

You can now start the table redefinition process by executing:

 dbms_redefinition.start_redef_table(table_name, interim_table_name, column_mapping, redef_method)

Table name and interim table name have to be present in the parameter list; the procedure cannot assume them. However, you can omit the column mappings for columns that are mapped exactly the same in both tables (i.e., the names are unchanged in the interim table). If the method of redefinition is omitted, the default is assumed, meaning that the procedure will attempt to use the primary key method of redefinition.
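A hedged sketch of the start call for the same hypothetical example, assuming the 9i signature that also takes the owning schema as its first argument; unchanged columns are mapped straight across and the new columns are filled from constant expressions:

 BEGIN
    DBMS_REDEFINITION.START_REDEF_TABLE(
       'CUSTOM', 'XX_TXN_FEED', 'XX_TXN_FEED_INTERIM',
       'txn_id txn_id, txn_location txn_location, abc abc, 10 def, 25 ghi, 50 jkl',
       DBMS_REDEFINITION.CONS_USE_PK);
 END;
 /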

Make sure that you have defined on the interim table any triggers, indexes, grants, and constraints that exist on the base table. Referential constraints that would in any way affect the base table, either as the parent or the child in any relationship, need to be created in a disabled state. Until the redefinition is complete, the triggers will be inactive and will not fire if their triggering event occurs. Once the redefinition is completed, the triggers, constraints, indexes, and grants associated with the interim table replace those on the table being redefined, eliminating the need for you to manually recreate them after the redefinition. They transfer in a fully enabled state at the end of the redefinition process.

You can now execute the dbms_redefinition.finish_redef_table() procedure to complete the redefinition of the original table. Again, during this procedure there is a small, fixed window where the table is locked against DML, independent of the size and complexity of the table and the ensuing processes. This procedure completes the process, redefines the original table so that it has all of the attributes, triggers, constraints, indexes, and grants that the interim table had, and ensures that any referential constraints that were defined on the interim table are created on the final table as enabled.

Now you can complete the optional step of dropping the hidden M_ROW$$ column that was included in the final table if you chose to use the rowid method of redefinition, using either of the following statements:

  • Alter table <your table> set unused column M_ROW$$;

  • Alter table <your table> drop column M_ROW$$;

As a result of this process, you now have the following in your system. The original table has been redefined and contains all desired attributes, constraints, and features. All indexes, constraints, grants, and triggers defined on the interim table are now defined on the redefined base table. All referential constraints created as disabled on the interim table are now on the redefined table as enabled. All constraints, triggers, referential integrity (RI), and indexes that were on the original, unredefined table have been transferred to the interim table and will be dropped when you choose to drop the interim table; referential constraints that were built on the original version of the table are now on the interim table and are disabled. More important, though, all PL/SQL procedures, packages, and cursors defined against the original table are now invalidated. They will be revalidated, depending on how the table was changed, the next time they are accessed. This revalidation can fail if the table definition has changed sufficiently to require new programming logic in those program units.

If you are redefining a heavily accessed table, a large amount of DML may have occurred on the base table during the redefinition process; this is one of the advantages of redefining online. If you know ahead of time that this is likely, Oracle recommends that you periodically synchronize the interim table's data with the data in the original table. This is accomplished by periodically calling the dbms_redefinition.sync_interim_table() procedure, which allows dbms_redefinition.finish_redef_table() to finish faster, reducing the overall time it takes to complete the redefinition. The brief window of unavailability will still occur, but it is independent of the amount of data or the volume of transactions that have occurred during the process.
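A sketch of the periodic synchronization followed by the finishing call, using the same hypothetical names:

 BEGIN
    -- Repeat as needed while DML continues against the original table.
    DBMS_REDEFINITION.SYNC_INTERIM_TABLE('CUSTOM', 'XX_TXN_FEED',
                                         'XX_TXN_FEED_INTERIM');

    -- Brief, fixed-size lock window regardless of table size.
    DBMS_REDEFINITION.FINISH_REDEF_TABLE('CUSTOM', 'XX_TXN_FEED',
                                         'XX_TXN_FEED_INTERIM');
 END;
 /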

So what happens if an exception is raised during the redefinition process or if you opt to abort the process? Call the dbms_redefinition.abort_redef_table() procedure to drop the temporary logs and tables that are associated with the redefinition process. You can then safely drop the interim table along with all of its associated objects and continue on as if nothing ever occurred.

Naturally, there are drawbacks:

  • Tables that have MVs and MV logs defined on them are not candidates for online redefinition.

  • Advanced Queuing tables cannot be redefined.

  • While an Index Organized Table can be redefined, its overflow table cannot.

  • Tables that have user-defined types cannot be redefined.

  • Tables containing BFILE columns are not candidates, nor are tables with LONG columns (although large object (LOB) columns are acceptable).

  • The table cannot be part of a cluster.

  • The table cannot belong to system or sys and it cannot be a temporary table.

  • Horizontal subsetting of data cannot be handled in online redefinition.

  • Only simple expressions are permitted in mapping the columns from the base table to the interim table. These cannot include subqueries.

  • New columns must not be declared NOT NULL during the redefinition process. This constraint has to be added after the redefinition.

  • There can be no referential constraints between the base and interim tables.

  • Redefinition cannot be done in NOLOGGING mode.

Skip Scan Index

A composite index is an index that contains more than one column in the key. Typically, a composite index can bring significant improvements in selectivity and reductions in I/O. Two columns that each have very poor selectivity can be combined into an index with significantly better selectivity. I/O is reduced when the Oracle RDBMS can get the information it requires from the index alone, cutting the time it would otherwise spend fetching table blocks into memory just to determine whether the information it is looking for is in those blocks. Historically, Oracle could only use the index as an access path if the leading column of the index was used in the query. Typically, columns in composite indexes are chosen because they:

  • Occur frequently together in where clauses (on the all_objects table this is an index referencing owner and status: where owner='SYS' and status = 'INVALID').

  • Are frequently found in AND conditions (owner = 'SYS' and status = 'INVALID').

  • Have higher selectivity when taken together than if they were taken alone (sys usually does not have a lot of invalid objects, but there are often many invalids and sys owns a lot of objects).

  • Are almost always selected (the two or more columns) together.

For example, on the FND_CONCURRENT_PROCESSES table (the table in which Apps stores information about Concurrent Manager jobs), there is an index defined called fnd_concurrent_processes_n1. This index has three columns: queue_application_id, concurrent_queue_id, and process_status_code. The leading edge of the index is the columns as defined, left to right, in the create index statement (and in the list above). For Oracle to make use of this index, at least queue_application_id would have to appear in the query. Running a query by process_status_code alone would, in all likelihood, have caused a full table scan (had there not already been an index built on that column by itself).

Index skip scans improve index usage considerably, because they allow an index to be used by queries on nonleading columns, as long as those columns participate in the index's makeup; scanning the index blocks is still faster than scanning the table blocks to locate the pieces of information. The initial column in the index definition is, in effect, skipped by the query, either because the information was not needed or because a qualifier could not be specified for that column. Skip scanning is particularly effective when there are few distinct values in the leading column and more distinct values in the nonleading columns. In the case of the example index (owner = 'SYS' and status = 'INVALID'), the reverse is true. To make full use of skip scanning, the index would be defined with status as the leading column (in all_objects, status can take on two values, while in Financials, owner can take on nearly 200). The query select count(*) from all_objects where owner = 'SYS' would then skip the leading status value in the index, scanning the subindex on owner for each status value, and would return the desired information in a more timely fashion.
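As a sketch of the idea, assume a hypothetical copy of all_objects called xx_objects with an index led by the low-cardinality status column; a query that filters only on owner can then skip over the handful of status values:

 CREATE INDEX xx_objects_n1 ON xx_objects (status, owner);

 -- With skip scanning, the optimizer can probe each distinct STATUS value
 -- and range-scan OWNER within it, rather than full-scanning the table.
 SELECT COUNT(*) FROM xx_objects WHERE owner = 'SYS';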

Transportable Tablespaces

Transportable Tablespaces are tablespaces within an Oracle database that can be used to move data and structures from one database to another, simply by plugging the tablespaces into the alternative instance. This allows the data in the tables and the table structures to be moved without an extended outage. It may be a valid option when looking at cloning alternatives, when all you need to refresh is the data in the instances and you have a limited number of modules configured. The tablespaces being transported can be either dictionary managed or locally managed, but dictionary managed tablespaces cannot be opened for write operations in a database with a locally managed system tablespace. Starting in Oracle 9i, the transported tablespaces can be of a different block size than the standard block size of the database into which they are being transported. Typically, Transportable Tablespaces are used for migrating information from an OLTP system into a data warehouse staging area, eliminating a significant part of the transformation process along the way. Further uses include:

  • Updating the data warehouses and data marts or reporting instances from staging areas

  • Loading data marts from the far larger central data warehouses

  • Archiving OLTP information and data warehouse information rather than dumping it to tape

  • Data publishing by pushing information to either (or both) internal or external customers

  • Enabling the ability to perform Tablespace Point-in-Time Recovery, where you recover a single tablespace to a specific point in time while allowing all of the rest of the database to be at a different point because the corruption was self-contained.

Moving a tablespace full of data can be much faster than performing an export/import at the tablespace level, or an unload and reload of the tablespace's data, because there is no database-level manipulation of the data; the operation is handled by directly copying the data files from one instance to another and then integrating the tablespace's structural information into the new database. Because you can choose to move any tablespaces, data or index, you can avoid having to rebuild indexes as you would likely have to do when either importing or loading the data directly into tables. Consistency is maintained while keeping the time required to an absolute minimum.

As with all of the new features, the advantages are tempered with certain limitations and Transportable Tablespaces are no exception:

  • Both the source and target database must be on the same hardware platform and OS. You cannot use Transportable Tablespaces to move the database from RS-6000® AIX 5L to Intel® Based Windows 2000 or even between Intel® Based Linux and Intel® Based Windows 2000.

  • Both the source and target database have to use the same character set and national character set. This is true even if one character set is a subset of the other.

  • You cannot use Transportable Tablespaces to overlay a tablespace that already exists in the target database with the same name. This implies that, at this point in time, the way you would use them is either to drop custom tables and their tablespaces from the target database and overlay where they used to be with newly refreshed data from production, or as a means of migrating similar structures from a well-tested test environment into production. Great care should be taken when you start attempting this, and Oracle Support should be involved from the beginning.

  • Tablespaces containing certain kinds of structures are not candidates for transport. These structures include:

    • MVs used for replication

    • Function-based indexes

    • Scoped REFs

    • Advanced queues with multiple recipients that are at an 8.0 compatible level

If you are going to be transporting tablespaces with a different block size than the target database's standard block size, the compatible parameter has to be set to at least 9.0 on the target database. You do not have to be running the same Oracle version in both databases, so you could transport an Oracle 8i tablespace into a 9i database. Oracle guarantees that the Transportable Tablespace is compatible with the target database or an error condition will be signaled at the beginning of the transport plug-in operation. Transporting from an older database to a newer release of the database is always possible, as long as you meet the previously stated limitations. When creating and transporting Transportable Tablespace sets, Oracle determines the lowest compatibility level to which the target database needs to be set. When attempting to put the new tablespace into the target database, Oracle will throw an error if the compatibility level of the transportable set is higher than the compatibility level of the target. One solution might be to lower the compatibility level of the source database or to raise the compatibility level of the target database for the duration of the operation.

When you are ready to proceed with transporting a set of tablespaces, the following procedures should be followed.

Pick a set of self-contained tablespaces that you want to transport. There can be no logical or physical dependencies between the set of tablespaces and structures outside of the set within the database. There can be no references from within the set pointing to structures outside the set that you are trying to transport. There can be no indexes inside the tablespace set that are built on tables outside the set (however, indexes on tables in the tablespace set do not have to be included in the set). You cannot have referential integrity that points to structures outside of the set (no child table or parent table can be outside of the transport set); however, you can choose not to include constraints, and such tablespaces can then be included in a transport set. Tablespaces containing objects with LOB columns that point to LOB segments outside of the tablespace set are not candidates for the transport operation. Have someone with the execute_catalog_role call the DBMS_TTS.TRANSPORT_SET_CHECK procedure to determine whether the tablespace set is self-contained and able to be transported. You pass the DBMS_TTS.TRANSPORT_SET_CHECK procedure the list of tablespaces in the tablespace set and indicate whether you want referential integrity constraints included in the check.
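A sketch of the self-containment check, using the tablespace names from the export example below; the TRUE argument asks for referential integrity constraints to be considered, and any problems show up in the TRANSPORT_SET_VIOLATIONS view:

 EXECUTE DBMS_TTS.TRANSPORT_SET_CHECK('users_data, users_indexes', TRUE);

 -- An empty result means the set is self-contained.
 SELECT * FROM transport_set_violations;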

After you have determined that you have a self-contained set of tablespaces, generate a transportable tablespace set that consists of the data files for the set of tablespaces being transported along with a file containing the structural information for the set of tablespaces to be transported. Make all of the tablespaces involved in the tablespace set read only:

 Alter tablespace users_data read only;
 Alter tablespace users_indexes read only;

You then have to invoke the export utility, specifying which tablespaces are in the transport set. While you are still using the export utility, you are exporting only the information that is contained in the data dictionary concerning the tablespaces in the tablespace set; therefore, this export, regardless of tablespace size, goes very rapidly.

 Exp transport_tablespace=y tablespaces=(users_data, users_indexes) triggers=y constraints=n grants=n file=transport.dmp 

You will be prompted for the user with which to connect; connect as a user that has the sysdba privilege. If you are using OS authentication, the connection would typically look like the following:

 Connect '/as sysdba' 

You always have to specify the tablespaces that are in your transport set. Triggers can be included, but you must explicitly specify either triggers=y or triggers=n. If you choose triggers=y, your triggers are exported without a validity check, which may cause compilation errors during the subsequent import.

You need to specify whether or not referential integrity constraints and grants are to be exported. Specify the name of the export file that will contain the structural information.

If the tablespace sets that you are trying to transport are determined not to be self-contained at this point, the export fails and indicates that the failure is due to the transportable set not being self-contained. You must then resolve all violations and attempt this again.

Transport the tablespace set from the source database to the target database. This is accomplished by copying the data files along with the export information file to the target system, either using FTP, copy, Tar to tape and untar to the new location, or burning to CD and copying from the CD to the new location. Transport both the data files and the export files of those tablespaces to a central place that is accessible to the target database.

Plug the new tablespace into the target system. To do this, you invoke the import utility to plug the new information into the new database. If you are transporting a tablespace set that has a different block size than the standard block size of the target database receiving the tablespace set, you must have first set a DB_nK_CACHE_SIZE initialization parameter entry in the receiving database's parameter file matching the block size for the tablespace set. If it is not already included in the parameter file, the DB_nK_CACHE_SIZE parameter can be set using the ALTER SYSTEM SET statement. You plug the tablespace set into the target database and integrate the structural information using the import utility. Again, you are importing the data dictionary information only, so the import time will be minimal.

 IMP TRANSPORT_TABLESPACE=y FILE=transport.dmp DATAFILES=('/db/users_dat.dbf','/db/users_idx.dbf') TABLESPACES=(users_data, users_indexes) TTS_OWNERS=(users) FROMUSER=(users) TOUSER=(users)

At this point, if everything has successfully executed, all tablespaces have been copied and are now in read-only mode in the new database. You will need to check the import logs to determine that no errors have occurred.

Inputting the list of data files can be somewhat labor intensive, particularly if you have an extensive set of tablespaces in your transport set or if there are a large number of data files associated with the tablespaces. If the list becomes prohibitively long (if it spans more than a line) you can put all of the parameters to import into a single file and use that file as a parfile (parameter file) and invoke import as follows:

 IMP PARFILE='ajw_import' 

The file 'ajw_import' would contain the following:

 TRANSPORT_TABLESPACE=y FILE=transport.dmp DATAFILES=('/db/users_dat.dbf','/db/users_idx.dbf') TABLESPACES=(users_data, users_indexes) TTS_OWNERS=(users) FROMUSER=(users) TOUSER=(users)

You can now put the Transported Tablespaces back into read/write mode.

 Alter tablespace users_data read write;
 Alter tablespace users_indexes read write;

Online Table Reorganization

We have seen that we can redefine a table online, but one of the most resource intensive things that a DBA can do is reorganize tables. This is particularly true for a DBA who is responsible for an OLTP database that can far exceed 50 or 100 gigabytes of data alone. You can now reorganize heap tables (that is, tables that are not index-organized tables or external tables) online, while allowing users access to the data.

The online architecture provides you the ability to do any or all of the following:

  • Any of the physical attributes of the table can be changed online.

  • The table can be moved to a new tablespace.

  • The table can be partitioned or partitions added.

  • The table can be converted from one type of organization (such as a heap-organized) to another (such as index-organized).

  • Column names, column types, and column sizes can be changed.

  • Columns can be added, deleted, or merged online, although the primary key of the table cannot be modified online.

  • Online creation and rebuilding of secondary indexes on index-organized tables is supported.

  • Indexes can be created online and analyzed at the same time to save time and resources (a sketch follows this list).
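For instance, a hedged sketch of that last capability, creating an index online and computing its statistics in the same pass (the table and index names are hypothetical):

 CREATE INDEX xx_txn_feed_n2 ON xx_txn_feed (txn_location)
    ONLINE COMPUTE STATISTICS;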

List Partitioning

Recall that with Oracle 8i, you were able to partition tables and the indexes that went along with them. Oracle 9i brings an enhancement to the partitioning options: list partitioning. List partitioning is used when you require explicit control over how the rows in a table are mapped to the table's partitions. In this method of partitioning, you specify a list of discrete values for the partitioning column in the description of each partition. This is vastly different from the way range partitioning is handled, where you specify only an upper limit value and all values smaller than that value fall into the partition.

A good example of when list partitioning is appropriate is when you have a prescribed list of account numbers, department numbers, or another set of limited values that have particular meaning to your business and reside in some of the large tables in the database. As your Financials instance grows, you may find more and more uses for list partitioning, or for partitioning in general.

Range partitioning assumes that the data matches a natural range of values for the partition column. It is not possible to physically group together those out-of-range values. Hash partitioning allows absolutely no control over the distribution of data. In a hash partition, the data is distributed over various partitions using a system hash algorithm, making it impossible to logically group together the discrete data values for the partition columns into the particular partitions.

List partitioning allows you to group together unordered and not obviously related sets of data. All of the data for all of the departments that fall under a particular division could be partitioned together, allowing for quicker queries on that data by those falling into that set of departments. You can group the data together in a manner that matches the logic behind the business rules rather than using an arbitrary algorithm or having to manipulate the data to fit the algorithm.

One drawback is that, unlike hash or range partitioning, you cannot involve multiple columns in the partitioning key. If you choose to partition a table by the list method, you are limited to a single partitioning column on that table.

When you create the partitions in the list partition set, you have to specify the method that you are using to partition, the column that will be involved, and the partition descriptions (each specifying a value list of discrete literal values) that designate which rows are included in a particular partition. You cannot create an "all others" specification; you have to enumerate all of the values that you anticipate being inserted into the table, and an attempt to insert a value that does not fall into any existing partition will cause an error condition. Alternatively, you can define a default partition in which to place those rows that would otherwise not fall into a partition. This will allow you to go back, determine commonalities in the data that is suddenly falling into the default partition, and partition the data more effectively later. It also means that, if you opt for list partitioning, you will have to work even more closely with the end users of the system to head off problems when new values for the partitioned columns are added for expanding business reasons. If you look at the definition of GL_BALANCES (see Table 2.7), we might define a list partition on period_name or, if you have multiple sets of books (multi-org), you might decide to list partition on set_of_books_id to segregate each set into its own partition.

Table 2.7: The GL_BALANCES Table

 Name                     Datatype    Size    Scale
 SET_OF_BOOKS_ID          NUMBER      15      0
 CODE_COMBINATION_ID      NUMBER      15      0
 CURRENCY_CODE            VARCHAR2    15
 PERIOD_NAME              VARCHAR2    15
 ACTUAL_FLAG              VARCHAR2    1
 LAST_UPDATE_DATE         DATE        7
 LAST_UPDATED_BY          NUMBER      15      0
 BUDGET_VERSION_ID        NUMBER      15      0
 ENCUMBRANCE_TYPE_ID      NUMBER      15      0
 TRANSLATED_FLAG          VARCHAR2    1
 REVALUATION_STATUS       VARCHAR2    1
 PERIOD_TYPE              VARCHAR2    15
 PERIOD_YEAR              NUMBER      15      0
 PERIOD_NUM               NUMBER      15      0
 PERIOD_NET_DR            NUMBER
 PERIOD_NET_CR            NUMBER
 PERIOD_TO_DATE_ADB       NUMBER
 QUARTER_TO_DATE_DR       NUMBER
 QUARTER_TO_DATE_CR       NUMBER
 QUARTER_TO_DATE_ADB      NUMBER
 YEAR_TO_DATE_ADB         NUMBER
 PROJECT_TO_DATE_DR       NUMBER
 PROJECT_TO_DATE_CR       NUMBER
 PROJECT_TO_DATE_ADB      NUMBER
 BEGIN_BALANCE_DR         NUMBER
 BEGIN_BALANCE_CR         NUMBER
 PERIOD_NET_DR_BEQ        NUMBER
 PERIOD_NET_CR_BEQ        NUMBER
 BEGIN_BALANCE_DR_BEQ     NUMBER
 BEGIN_BALANCE_CR_BEQ     NUMBER
 TEMPLATE_ID              NUMBER      15      0
 ENCUMBRANCE_DOC_ID       NUMBER      15      0
 ENCUMBRANCE_LINE_NUM     NUMBER      15      0

Unlike range partitioning, in list partitioning there is no apparent sense of ordering between the different values that end up falling into any one partition.
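Returning to Table 2.7, a hedged sketch of how a custom reporting copy of such a table might be list partitioned by set_of_books_id; the table name, partition names, and book IDs are hypothetical, and the delivered GL_BALANCES table itself should never be manually altered:

 CREATE TABLE xx_gl_balances_rpt
 ( set_of_books_id      NUMBER(15),
   code_combination_id  NUMBER(15),
   currency_code        VARCHAR2(15),
   period_name          VARCHAR2(15),
   period_net_dr        NUMBER,
   period_net_cr        NUMBER
 )
 PARTITION BY LIST (set_of_books_id)
 ( PARTITION sob_us     VALUES (101, 102),
   PARTITION sob_europe VALUES (201, 202, 203),
   PARTITION sob_other  VALUES (DEFAULT)
 );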

Range partitioning, hash partitioning, and composite partitioning are perfect methods for partitioning data for historical segregation or for striping. Primarily these are done for improved manageability and data placement, and to add the ability to take advantage of parallelism when going after the data in the table. Often these ranges are used in data warehouses to segregate the data by date range, by ranges of account numbers if they are sequential, or by other information that can be easily chunked under a broad umbrella. Sales data, for example, can be partitioned by year and, within each year, further partitioned by month. This allows administrators to drop all of the data for a period of time based on business rules, such as a rule stating that the company will retain only the last ten years of sales information in the active database.

You can, however, combine the advantages of both range partitioning and list partitioning by using the composite range-list method. This allows for a two-level hierarchy, the first level being a range of values and the second level being an enumerated list of values within that range. This form of composite partitioning would be well suited if, for example, all departments in the range of 6000 to 6999 report to a single general manager, while the physical locations of the departments are disparate and seem to have no relation to each other. Department 6010 may be in Denver while 6020 is in Tampa and 6030 is in London. You would be able to take advantage of the speed in accessing all of the departments that the general manager manages, and at the same time you would be able to group together all of the information that belongs to a certain department. The definition of this kind of composite partitioning involves the following:

  • Broad outer partitioning method: range.

  • Partitioning column or columns to use for the range of values (you can use multiple columns in this outer range partition).

  • Partition descriptions identifying partition bounds (including a MAXVALUE catch-all partition).

  • Narrower subpartitioning method: list.

  • Subpartitioning column (list partition, one column).

  • Subpartition descriptions, each with a value list of discrete literal values that determine whether a row qualifies for inclusion in that subpartition.

The outer partitions of range-list composite partitioned tables are logical structures only. The data is physically stored in the segments of the subpartitions of the table. The list subpartitions in the composite range list partition have similar characteristics to those of simple list partitions. You can specify a default list subpartition, just as you specify a default partition for list partitioning to assist you in your maintenance and in providing the best product possible for the end users.
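A hedged sketch of the department example above, range partitioning by department number and list subpartitioning by location (all names, bounds, and locations are hypothetical):

 CREATE TABLE xx_dept_activity
 ( dept_no   NUMBER(4),
   location  VARCHAR2(20),
   amount    NUMBER
 )
 PARTITION BY RANGE (dept_no)
 SUBPARTITION BY LIST (location)
 ( PARTITION dept_6000 VALUES LESS THAN (7000)
   ( SUBPARTITION d6000_americas VALUES ('DENVER', 'TAMPA'),
     SUBPARTITION d6000_emea     VALUES ('LONDON'),
     SUBPARTITION d6000_other    VALUES (DEFAULT)
   ),
   PARTITION dept_max VALUES LESS THAN (MAXVALUE)
   ( SUBPARTITION dmax_other VALUES (DEFAULT) )
 );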

Dynamic SGA

Before Oracle 9i, there was a limited amount of tuning that you could do in the System Global Area (SGA). Now, you can exert more control over this part of the system, even tuning it dynamically while your instance is up and running. With the dynamic SGA structure, the size of the components of the SGA can be changed without having to shut down the database. The structures over which you have extended control include the buffer cache, the large pool, and the process private memory that is allocated within the SGA.

Dynamic SGA limits how much virtual memory is available for use by Oracle, and that limit can be set at runtime. The instance can be started unconfigured, which allows it to grow and use more or less memory as the demand on the database changes. A maximum size can be set to bound how large you will permit the SGA to grow; the parameter that bounds the SGA is sga_max_size. This is the only parameter surrounding the use of the SGA that is fixed at database startup and can only be altered by restarting the database. As with a lot of the initialization parameters, Oracle looks at this value as a friendly suggestion: if all of the components that are either specified or left at their default values sum to a size larger than the value of sga_max_size, the parameter will be summarily ignored.

Optimally, the entire SGA for all instances on a box summed together should fit into physical memory. If it does not, virtual memory and paging space is used to store different parts of it, thereby decreasing the overall performance because of the increased disk I/O that is associated with operation.

The size of the SGA is determined by several initialization parameters. The following parameters most affect SGA size:

  • DB_CACHE_SIZE: The size of the cache of the standard block size blocks.

  • LOG_BUFFER: The log_buffer parameter designates the number of bytes that are allocated for the redo log buffer.

  • SHARED_POOL_SIZE: The size in bytes of the area devoted to shared SQL and PL/SQL statements.

  • LARGE_POOL_SIZE: This is the size of the large pool. The default is 0.

The total memory allocated to an instance's SGA is displayed when the instance starts up, and can also be viewed through OEM or SQL*Plus. You can explicitly display the size of the current instance's SGA using the SQL*Plus SHOW SGA statement.

With the new dynamic SGA, the unit of memory allocation is called a granule. Oracle allocates and tracks the memory used by each component in the SGA (i.e., the buffer cache, the shared pool, the Java pool, and the large pool) and all SGA free space in units of granules. In the SGA, memory is only allocated on the granule boundary, never in portions of granules.

Granule size in the SGA is determined by total overall SGA size. On most OSs and hardware platforms, the size of one granule is 4 MB if the total SGA size is less than 128 MB and 16 MB for any SGA larger than 128 MB. There are some platform-dependent idiosyncrasies, most notably on Windows NT/2000® platforms, where the larger granule (for SGAs larger than 128 MB) is 8 MB instead of the typical 16 MB. The current granule size for a running database can be viewed in the V$SGA_DYNAMIC_COMPONENTS view. The same granule size is used for all dynamic components in the SGA. When you specify a size for these configurable components, it should be a multiple of the current granule size. If you specify a size that is not a multiple of the current granule size, for example 13 MB on an SGA that is 100 MB, Oracle will round the size up to the next granule boundary, allocating 16 MB for the component.

Oracle maintains information about these components and their allocated granules in what it calls a scoreboard. For every allocated and configured component for which at least one granule has been allocated, the scoreboard contains (1) the number of granules allocated to the component, (2) the description of any transactions that are currently pending against the component, (3) the target size in granules (you will have to do the math to determine how close it is to its boundary) and the progress made towards reaching its target size, and (4) the initial number of granules allocated to the component and the maximum number of granules that it can have. For operations on the database that can modify the number of granules, Oracle logs the operation, the target size, and start time to the affected SGA component in the scoreboard and updates the progress field until the running process completes. When the process completes, the scoreboard gets updated, sets the current size with the target size, clears the target size and progress fields, and allows the DBA to see how the number of granules allocated changed. Oracle then alters the initialization parameters to reflect how much of the SGA is currently in use.

The scoreboard is, in effect, a circular buffer that contains the last 100 operations made to the scoreboard. The fixed views show the state of the scoreboard at any given time and the current contents to the 100-operation limit of its size.

Information about current and on-going SGA resize-affecting operations can be found in the V$SGA_CURRENT_RESIZE_OPS view, information about those last 100 completed SGA resize operations can be found in the V$SGA_RESIZE_OPS view, and the amount of SGA memory that is currently left available for future dynamic SGA resize operations can be found in the V$SGA_DYNAMIC_FREE_MEMORY view.

You can allocate granules at startup. When the instance starts up, Oracle reads the values in the parameter file (the init.ora or the spfile) and determines what settings it expects to be able to allocate based on the OS's memory limits. Oracle then allocates virtual address space for the SGA. SGA_MAX_SIZE is read from the initialization file and the values for the malleable components are rounded to the next highest granule.

You can grow a component's SGA use by using the Alter System command. If you are using an spfile, remember to decide whether you want the change to apply only to the current instance of the database or to persist through the next shutdown, and pass the scope parameter to the alter command based on that decision. Oracle will take your newly suggested size, round it to the nearest multiple of granules, and add or subtract granules to meet this target size. Oracle will stop allocating granules when it comes to the SGA_MAX_SIZE boundary; it will only allocate the free granules that it knows it can safely allocate and will not honor any request that would cross that boundary.
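A minimal sketch, assuming an spfile is in use and the buffer cache is being grown on a running instance:

 ALTER SYSTEM SET db_cache_size = 64M SCOPE=BOTH;

 -- Watch the resize operation progress and the remaining headroom.
 SELECT component, current_size, target_size
   FROM v$sga_current_resize_ops;

 SELECT current_size FROM v$sga_dynamic_free_memory;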

 SQL*Plus: Release 9.2.0.2.0 - Production on Wed Jul 16 08:09:35 2003
 Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.
 Connected to an idle instance.
 SQL> ORACLE instance started.
 Total System Global Area  110100536 bytes
 Fixed Size                   741432 bytes
 Variable Size              92274688 bytes
 Database Buffers           16777216 bytes
 Redo Buffers                 307200 bytes
 Database mounted.
 Database opened.
 SQL> Disconnected from Oracle9i Enterprise Edition Release 9.2.0.2.0 - 64bit Production
 With the Partitioning, OLAP and Oracle Data Mining options
 JServer Release 9.2.0.2.0 - Production

Multiple Block Sizes

New in Oracle 9i, you are no longer tied to the standard block size that is defined at database creation time. Now nonstandard block sizes can be used when creating tablespaces so you can intelligently decide what size to make tablespaces based on what you will be putting in those tablespaces. Have you ever wanted to be able to put indexes into a 16K block size, but your database was created with a standard block size of 4K for performance improvement? Now you can. You can define the standard block size of the database and nonstandard block sizes in discrete values of powers of 2 (2K, 4K, 8K, 16K, 32K). There are some platform specific limitations on the maximum block size, so these are general values and may not necessarily be available depending on what OS your database is residing on. This feature is particularly useful when you are transporting tablespaces between databases created with different standard block sizes, but also can have significant ramifications if you are looking at tuning.

When a tablespace is created using the create tablespace statement, you can use the "blocksize" keyword to specify the size in kilobytes of the blocks that you want to use for that tablespace. To define a tablespace in this manner, you first have to define the buffer cache in the SGA that will be dedicated to processing blocks of that size. The initialization parameters connected with enabling this feature affect the amount of memory that gets allocated in the SGA. With the exception of sga_max_size (the static initialization parameter), all of the parameters that deal with this ability are dynamic and alterable with the alter system command. The size of your SGA could grow or shrink considerably based on the values specified for these dynamic parameters.

The block size of the System tablespace is set when the database is created; it is the db_block_size and can only be changed by recreating the database. Most other tablespaces will use this size as well, unless you deliberately tell them otherwise at creation time. On most platforms, you can use up to five block sizes in total, including the standard block size.

Before you can create a tablespace with a nonstandard block size, you have to create the subcaches (they are called subcaches although they are not part of the db_cache size) for each of the block sizes that you believe you will need to use. If you find the need to add another one later, they can be added, as well as altered, while the database is running. DB_nK_CACHE_SIZE needs to be set, replacing the "n" with the number of kilobytes being defined as the block size. DB_16K_CACHE_SIZE, for example, would be the cache size parameter for the database cache dedicated to handling the objects created in 16K block size tablespaces.
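A hedged sketch, assuming a database with a 4K standard block size that wants a 16K tablespace for indexes (the names, sizes, and file path are illustrative only):

 -- The subcache must exist before the tablespace can be created.
 ALTER SYSTEM SET db_16k_cache_size = 32M;

 CREATE TABLESPACE apps_idx_16k
    DATAFILE '/db/apps_idx_16k_01.dbf' SIZE 500M
    BLOCKSIZE 16K;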

Smaller block sizes are better for small rows with a lot of randomly accessed information, because they tend to reduce block contention. Smaller block sizes also carry fairly large space overhead (block header information) due to the metadata that has to be stored. They are not recommended for tables with large rows, where only a few rows may fit in each block, or where row chaining occurs because not even a single row fits into a block.

Larger block sizes have lower metadata overhead, so there is more usable space to store data. Because more rows fit in each block, every block retrieved in a single I/O brings more rows into the buffer cache. Larger block sizes are good for tables with high sequential read access, full table scans, or very large rows (especially rows containing LOB (large object), BLOB (binary large object), or CLOB (character large object) data). However, they can waste space in the buffer cache if you are doing random access reads on tables with small rows that reside in tablespaces with a large block size. A good example is pulling information from a tablespace with an 8K block size where the rows are only 50 bytes long; doing random access on that table, you potentially waste nearly the entire 8K block in the buffer cache for each row you need. Further, large blocks may not be good for certain index blocks in a heavily accessed OLTP environment, because they may tend to increase block contention on the index leaf blocks. Depending on the number of users and the volume of data, you may still want to consider them.

Temporary Tablespaces have to be created in the standard block size of the database.

When using partitioning, all partitions of a partitioned table, even those in separate tablespaces, must reside in tablespaces of the same block size. This means that if you create the first partition in an 8K tablespace, all partitions of that table must be in 8K tablespaces.

Cursor Sharing

Although cursor sharing is not new to Oracle 9i, it has seen significant improvements in recent releases. Initially cursor sharing was set as:

 Cursor_sharing = exact 

This meant that any SQL statement with identically matching text in the SQL area could use the same execution plan, but only statements that were textually identical could make use of this feature. This exactness extended to white space, bind variable names, and literal values; the statements had to be lexically identical.

The force setting was introduced in Oracle 8i, allowing (causing) vaguely similar statements to use the same execution plan. This can severely degrade performance, because statements may be forced to reuse an execution plan already in the SQL area when recalculating the plan would have produced a better one, allowing suboptimal plans to be used. Force should be used only when the benefits of the shared plans, taken as a whole, outweigh the performance drawbacks.

In Oracle 9i, the similar setting was brought into the equation, allowing for a middle ground between hardly ever sharing cursors and nearly always sharing them. In short, similar will force statements that differ only in their literal values to share the SQL area without deteriorating execution plans.

You will want to set Cursor_sharing = similar if you have many similar SQL statements executed in an environment. Because much of what is done in Oracle E-Business Suite is repetitious, and the only difference between the statements is often the values in the update or where clause, making use of the similar setting will likely show performance improvements across much of the application. While the optimal solution would be to have everything written with sharable SQL code, Cursor_sharing will significantly reduce the resources required of the system and lower the number of hard parses required. Soft parses, the parses satisfied from the shared pool, may rise somewhat, as Oracle will look there first to see whether an existing cursor can be applied to the current statement. Similar may slow down queries marginally if a significant number of literals is used in the query. Overall, setting Cursor_sharing to similar should reduce memory usage, speed up parse times, and reduce overall latch contention.
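A minimal sketch of enabling the setting and of two statements that would be candidates to share a cursor under similar (the table and literal values are hypothetical):

 ALTER SYSTEM SET cursor_sharing = SIMILAR;

 -- These differ only in their literals and can share one parsed cursor.
 SELECT vendor_name FROM xx_vendors WHERE vendor_id = 1001;
 SELECT vendor_name FROM xx_vendors WHERE vendor_id = 2002;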

A statement is considered similar if it is identical to another except for some literal values, although textual similarity alone does not guarantee sharing. Cursor sharing has the effect of making the CBO determine whether the statement is a close enough match to something that already exists in the shared pool to be able to use the same execution plan.

External Tables

How do you get external information integrated into your database? Do you have a set of custom tables that you load, probably via SQL*Ldr and then, from there, drag into the database using the Oracle E-Business Suite utilities? Do you import information into custom tables for use in generating reports that combine this information with information resident in the Apps database? There might be an easier way.

Oracle 9i brought with it a new enhancement called external tables. External tables allow data in an OS's flat file to be accessed as if it were a standard Oracle unindexed table set to read-only access. You simply need to define the metadata about the external table to the Oracle Data Dictionary and tell Oracle how the columns in the file are mapped to perceived columns in the end table. Oracle makes no assumptions about the data in the external table; it simply allows for the access of the data in a more simplified and intuitive manner than might otherwise be available. The definition to Oracle includes the keywords "organization external" in the create table statement.
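A hedged sketch of such a definition, assuming a comma-delimited flat file of vendor data in a directory the database can read; every name here is hypothetical:

 CREATE DIRECTORY xx_feed_dir AS '/data/feeds';

 CREATE TABLE xx_vendor_ext
 ( vendor_id    NUMBER,
   vendor_name  VARCHAR2(80)
 )
 ORGANIZATION EXTERNAL
 ( TYPE oracle_loader
   DEFAULT DIRECTORY xx_feed_dir
   ACCESS PARAMETERS
   ( RECORDS DELIMITED BY NEWLINE
     FIELDS TERMINATED BY ','
   )
   LOCATION ('vendors.csv')
 )
 REJECT LIMIT UNLIMITED;

 -- Query it like any read-only, unindexed table.
 SELECT COUNT(*) FROM xx_vendor_ext;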

To use external tables, the user or owner of the tables has to have the following system level privileges, so care should be taken to ensure that only a limited set of users has this ability. It may be wise not to let some of them know exactly what access they have.

  • Alter any table.

  • Select any table.

  • Read access on the OS directory containing the file.

  • Write access on the OS directory containing the file.

Write access on the directory is required for bad files, log files, and discard files.

Automatic Undo Management

In previous versions of the database, rollback information (the preimage of the data involved in active transactions) was stored in Rollback Segments. The preimages let Oracle know what the data looked like before any changes were made, so that a rollback could be handled elegantly. Oracle also uses this information to provide other sessions' transactions a read consistent view of the data.

Traditionally, management of these Rollback Segments was labor intensive and the sizing of the segments was tedious and prone to error. Frequently batch processes would experience ORA-01555 (Snapshot too old) errors because the read consistent view of the data would get aged out of the segments when another process was trying to access that view of the data.
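In Oracle 9i, automatic undo management replaces manually managed rollback segments with a dedicated undo tablespace and a handful of parameters; a minimal sketch (the names, sizes, and retention value here are illustrative assumptions):

 CREATE UNDO TABLESPACE undotbs1
    DATAFILE '/db/undotbs1_01.dbf' SIZE 2000M;

 -- undo_management = AUTO is static and must be set in the parameter file
 -- before startup; the following two can be changed on a running instance.
 ALTER SYSTEM SET undo_tablespace = undotbs1;
 ALTER SYSTEM SET undo_retention  = 10800;   -- seconds of preimage history to keep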

SPFILEs

A new parameter file has been brought into the picture in Oracle 9i. The Server Parameter File (spfile) is a binary repository for the initialization parameters of an Oracle database instance. It is a file located on the server (hence the server part of the name) that holds not only what the parameter file (pfile, the init.ora) used to contain, but other parameters as well; its settings persist across shutdowns, and it instantaneously captures systemwide changes that are made during runtime. Any time you issue an alter system command, the statement you issue is, by default, written to the spfile as well (eliminating the need to manually edit the init.ora file with those values). The file is initially built from the init.ora file and appears, when viewed in a text editor, to be a typical text file; however, due to its binary nature, it should never be edited in this manner.

As of Oracle 9i, the default behavior of the database at startup time is to read the spfile, if one exists, to find its initialization parameters, resorting to a pfile only if no spfile exists or if the pfile= command is included in the startup command.

To migrate your instance to spfile while it is running, issue the command:

 Create spfile=$ORACLE_HOME/dbs/initVISspfile.ora from pfile=$ORACLE_HOME/dbs/initVIS.ora 

Or

 Create spfile from pfile=$ORACLE_HOME/dbs/initVIS.ora 

at the SQL prompt while logged in as sysdba or sysoper. The database can either be started or not started at the time the command is issued. From that point on, you can start your instance with the newly created spfile. You can choose to deliberately name your spfile; however, Oracle suggests that you allow the database server to default the name and location of the parameter file. This lets the database find the file in its default location and eases the administration of the database.

Comments from the pfile, if they were located on the same line as a parameter, are retained in the spfile, while all other comments are discarded. Further, the parameters are alphabetized, making the parameters and their values easier to find from the OS level.

Values for the initialization parameters are altered in the database instance as well as in the spfile based on the values set during an alter system command. If you want to override this default behavior, you can specify the scope that you want the values to take. If you want to make a change to the running instance without having the change made to the spfile, you can pass scope=memory to the alter system command; this lets you dynamically alter values based on database conditions while preserving the stored settings. The converse is true as well: if you want to make the change only in the spfile, leaving the current settings in effect for the running instance but changing the behavior when the database is next restarted, you can pass scope=spfile to the alter system command. Scope=spfile is also required for parameters that are only alterable at startup of the database (like compatible). Scope=both is the default behavior and makes the change to the running instance and to the spfile at the same time. Also, if you want the new behavior to take effect only for sessions connecting after the parameter is changed, with the old behavior remaining in effect for sessions that are already connected, you can add the deferred keyword to the alter system command.
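A short sketch of those scope options (the parameters and values shown are illustrative):

 -- Change the running instance only; the spfile keeps its old value.
 ALTER SYSTEM SET db_cache_size = 96M SCOPE=MEMORY;

 -- Record the change for the next startup without touching the running
 -- instance (required for static parameters such as sga_max_size).
 ALTER SYSTEM SET sga_max_size = 400M SCOPE=SPFILE;

 -- Apply only to sessions that connect after the change.
 ALTER SYSTEM SET sort_area_size = 1048576 DEFERRED;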

If you need to remove a parameter from the spfile, you provide an alter system command setting the given parameter to a zero-length string, returning it to its default value and removing it from the spfile (typically the spfile holds only those parameters that are no longer at their default values). If the parameter in question has a Boolean or numeric value, you need to set it back to its default value explicitly, as such parameters will not accept a zero-length string.

Resumable Space Allocation

Have you ever had an operation in Oracle Financials error out and roll back because of space related issues (running out of space in a data file, hitting maximum extents, or exceeding preset limits on the amount of data that a schema is allowed to store in a particular tablespace)? Have these errors caused problems for you or for your end users, and would you love to be able to recover from them without having to recapture the data that was in process at the time? Now you can. With Oracle 9i, Oracle provides a means for suspending a session's activity and resuming it later, after a space related error has been taken care of. This is particularly important in activities that are long running and data intensive. Enabled by the DBA, Resumable Space Allocation allows the statements for which it is enabled to halt at the onset of a space related problem and be restarted at exactly the same point once the space issue has been resolved, or to fail if a predefined timeout passes first.
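Resumable behavior is enabled per session; a minimal sketch, in which the timeout is in seconds and the one-time privilege grant is assumed to go to a hypothetical apps account:

 GRANT RESUMABLE TO apps;

 -- Suspend instead of failing on space errors, for up to two hours.
 ALTER SESSION ENABLE RESUMABLE TIMEOUT 7200 NAME 'gl posting load';

 -- ... long-running, space-hungry work here ...

 ALTER SESSION DISABLE RESUMABLE;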

Online Index Rebuild

Have you gone through the exercise of having to rebuild indexes on some of the quickly changing tables in your Oracle E-Business Suite environment? If so, you will recognize how much easier and more time effective it is (especially for a 24/7 shop) to be able to rebuild indexes online while the base tables remain available for insert, update, delete, and query by the end users. In Oracle 9i, you now have that ability. Rebuilding indexes in this manner also allows you to move them to a different tablespace (perhaps one with different space characteristics, such as a different block size) while the underlying data continues to use the old index definition for accessing the information.
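A sketch of an online rebuild that also relocates the index; the index and tablespace names are hypothetical:

 ALTER INDEX xx_txn_feed_n1 REBUILD TABLESPACE apps_idx_16k ONLINE;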

Character Set Scanner

The Character Set Scanner, new in Oracle 9i, provides a quick assessment of the feasibility of changing the character set of your Oracle database to a new database character set and raises any potential issues you might encounter in doing so. The scanner checks all of the character-based data in the database and analyzes it for the effects of, and any problems that may be encountered in, changing the character set encoding. At the end of the scan, it generates a summary report of its findings and of the scope of work that would be required to convert to the new character set. Based on the information in the report, you can better determine the level of effort needed for the conversion and make a more informed decision about the advisability of a voluntary change, as opposed to one driven by the business expanding into new areas.
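The scanner is invoked as the csscan command-line utility; a hedged sketch of a full-database scan assessing a move to UTF8 (it assumes the scanner's csminst.sql setup script has already been run, and the connect string and parameter values are illustrative):

 csscan system/manager FULL=Y TOCHAR=UTF8 LOG=scan_utf8 CAPTURE=Y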


