Chapter 2: 11i, 8i, and 9i New Features




With the advent of Oracle E-Business Suite 11i, Oracle made use of the newer features of its 8i database to build new functionality into the product suite. You should have some understanding of these features, even the ones you do not deal with directly on a daily basis, because they will impact your work at some point.

Oracle 8i New Features

Cost Based Optimizer

While cost based optimization was released in Oracle 7, Oracle 8i brought significant improvements, including many bug fixes and extended features. Oracle strongly suggests that everyone on Release 8i or later use the Cost Based Optimizer (CBO) and supports the Rule Based Optimizer (RBO) only for compatibility with existing applications. With the CBO, query optimization is calculated differently, and queries run more efficiently than they did under the RBO in past releases. In a non-Apps installation of the 8i database, you can choose which optimization method to use. In the Apps 11i preconfigured 8i database, the CBO is installed as the enabled optimization method, and it needs to stay that way: the code in this release of Apps was written to take advantage of the new algorithm, and you must allow it that freedom.

But what is the difference between the RBO and the CBO? The CBO is an expert system that computes all possible execution plans for a query, based on the available access paths and any hints used in the query, and assigns each plan a relative cost derived from the gathered statistics. The plan with the lowest relative cost (an estimate proportional to the resources the query would consume) is the one that is processed. The weighted costs include network, CPU throughput, and disk input and output (I/O). CBO execution plans may not be stable across releases, and Oracle makes no guarantees or apologies for any differences. The execution plan is only as good as the statistics on which it is based; therefore, it is vital that statistics be gathered regularly on all schema objects. There is a Concurrent Program that does this for you. A Concurrent Program is a batch job run in the background by a Concurrent Manager (a service that resides either on your database tier or on a middle tier), either on command or on a set schedule, without impacting the end users' transaction processing. It can be run manually on demand or as a scheduled job, and should be run on at least a weekly basis. Even with fresh, valid statistics, the CBO can still make bad decisions. On the whole, however, it is a vast improvement over having all of the rules coded into the program logic.

The CBO has several relevant init<SID>.ora parameters. The parameters, with their current and their required (relative to Apps) values, can be gathered by running the AFCHKCBO.sql script found under the <FND_TOP>/sql directory (the script ships in patch 1245516 if it is not already in your directory tree). A sample of the output can be seen in Table 2.1.

Table 2.1: AFCHKCBO.sql Sample Output

Parameter Name                   Current Value   Required Value
-------------------------------  --------------  --------------
_sort_elimination_cost_ratio     5               5
_optimizer_mode_force            TRUE            TRUE
_fast_full_scan_enabled          FALSE           FALSE
_ordered_nested_loop             TRUE            TRUE
_complex_view_merging            TRUE            TRUE
_push_join_predicate             TRUE            TRUE
_use_column_stats_for_function   TRUE            TRUE
_push_join_union_view            TRUE            TRUE
_like_with_bind_as_equality      TRUE            TRUE
_or_expand_nvl_predicate         TRUE            TRUE
_table_scan_cost_plus_one        TRUE            TRUE
_optimizer_undo_changes          FALSE           FALSE
db_file_multiblock_read_count    8               8
optimizer_max_permutations       79000           79000
optimizer_mode                   CHOOSE          CHOOSE
optimizer_percent_parallel       0               0
optimizer_features_enable        8.1.6           8.1.6
query_rewrite_enabled            TRUE            TRUE
compatible                       8.1.6           8.1.6
always_anti_join                 NESTED_LOOPS    NESTED_LOOPS
always_semi_join                 NESTED_LOOPS    NESTED_LOOPS
sort_area_size                   5120000         512000 (**)
hash_area_size                   10240000        1024000 (**)

Legend:

(**): Required value

Often, the results of this script will show current values set to something other than the required value. If they differ significantly, you can enhance performance by changing them to the required values.

One parameter that needs to be set is optimizer_max_permutations (the maximum number of join permutations the optimizer will consider for queries with join conditions). It has a default value of 80,000 and a range from 4 to nearly 4.3 billion. Leaving the value at the default effectively tells the CBO that there are virtually no limits on the join orders it may consider. Setting the value to the Apps-required 79,000 makes the optimizer consider join orders beyond just the starting table of the query, while still capping the search. Combined with regularly maintained statistics, this keeps query parse times in the database manageable.
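
A minimal sketch for spot-checking the visible settings from SQL*Plus follows; AFCHKCBO.sql remains the authoritative check, since the hidden (underscore) parameters do not appear in v$parameter.

 -- Check current settings (DBA privileges required); hidden underscore
 -- parameters are not visible through v$parameter
 SELECT name, value
   FROM v$parameter
  WHERE name IN ('optimizer_max_permutations', 'optimizer_mode',
                 'db_file_multiblock_read_count');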

Optimizer_features_enable and compatible are compatibility parameters, and their required values are lower limits of what can be expected. If your database is a release higher than this and all of the required parameters are compatible with the release you are on, it is safe (and often better) to set them higher than required (e.g., if the required value is 8.1.6 and you are on 8.1.7, you should be safe setting them to 8.1.7).

For the CBO to do its job efficiently and effectively, statistics need to be gathered regularly. How frequently is open to some debate: some suggest monthly is sufficient, others weekly. I gather statistics immediately after our company closes its accounting books, and I have also scheduled a concurrent job to run early every Monday morning, before anyone gets into the system, to gather schema statistics for all schemas other than SYS. Fortunately, there are Concurrent Programs you can schedule that call the FND_STATS package to gather statistics for you. Analyze All Index Columns, Gather Table Statistics, Backup Table Statistics, Restore Table Statistics, Gather Column Statistics, Gather All Column Statistics, and Gather Schema Statistics can each be set to run on a defined schedule so that you do not have to worry about them getting done (more on Concurrent Managers and their jobs in Chapter 12).
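
If you prefer to run a gather manually from SQL*Plus rather than through a Concurrent Program, a call along the following lines works. This is a sketch assuming the FND_STATS.GATHER_SCHEMA_STATISTICS signature shipped with 11i, where the second argument is the sampling percentage.

 -- Gather statistics for one Apps schema at a 10 percent sample,
 -- connected as the APPS user; passing 'ALL' as the schema name
 -- gathers for every registered Apps schema
 BEGIN
    fnd_stats.gather_schema_statistics('GL', 10);
 END;
 /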

Materialized Views

Materialized views (MVs) are a materialization of frequently run, usually expensive queries. They resemble standard views in that they are based on a predefined query, and they resemble regular tables in that they take up storage, can be queried directly, and can be indexed in their own right. MVs are an improved version of what used to be called Snapshots. There are some restrictions on what can and cannot be materialized. Typically, MVs are used to precompute joins and to precalculate expensive functions on one or more tables (summarizing or aggregating data, for example) so that queries run faster. Oracle Applications 11i takes advantage of this 8i functionality: a few of the schemas (e.g., APPS, BIX, IBE, and MSC) are installed with predefined MVs. These predefined MVs are not on a set refresh schedule; you will have to set that up on your own. Table 2.2 shows an example of a Create Materialized View statement to give you a better idea of what you are dealing with. Notably, the Create Materialized View scripts in Oracle E-Business Suite create a MV based on a table with the same name; this registers the existing table as a MV.

Table 2.2: Create Materialized View Example

 CREATE MATERIALIZED VIEW ibe_sct_search_mv
 ON PREBUILT TABLE WITHOUT REDUCED PRECISION
 REFRESH COMPLETE ON DEMAND
 ENABLE QUERY REWRITE
 AS
 SELECT *
   FROM ibe_sct_search_mv;

This solution provides performance benefits, but it does not permit query rewrite in all cases, nor does it support fast refresh of the MV. With fast refresh, only the changed rows and information are migrated to the materialized view, via an interim MV log table, either on a set schedule, ON COMMIT of a transaction against the base table, or on demand when the MV administrator invokes the refresh function. Because the underlying tables can be quite large and expensive to rebuild, if a table already exists that provides the desired functionality and the end goal is improved performance, registering MVs on existing tables should be used whenever possible. The single caveat is that the base table must exactly reflect the defining query of the MV at the time you register the view.

Because a MV shares attributes with a regular table, you have the option of partitioning it. This can be extremely attractive when the underlying tables are very large and the resulting MV grows large enough that queries against it take longer and longer. Not only does the MV benefit from the same features as a partitioned base table (improved scalability, simplified administration, efficient use of local indexes), it gains the added ability to fast refresh along partition boundaries when the base table is updated along those same boundaries.

When you create a MV, you can specify that the view be refreshed on demand or on commit. ON COMMIT refresh causes the MV to be refreshed every time a transaction commits against the base table, ensuring that the view always contains the most recent view of the data. For MVs built with ON DEMAND refresh, the DBMS_MVIEW package provides three refresh procedures:

  • DBMS_MVIEW.REFRESH allows for the refresh of one or more MVs.

  • DBMS_MVIEW.REFRESH_ALL_MVIEWS refreshes all of the MVs that are owned by the calling schema.

  • DBMS_MVIEW.REFRESH_DEPENDENT allows for the refresh of all table-based MVs that depend on a specific detail table or a specific list of detail tables.

A MV defined with ON DEMAND refresh can only be refreshed by calling one of these procedures.
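
For example, a complete refresh of the MV registered in Table 2.2 could be requested as follows; this is a sketch, with 'C' requesting a complete refresh ('F' would request a fast refresh and '?' a force refresh).

 -- On demand complete refresh of a single MV
 BEGIN
    dbms_mview.refresh(list => 'IBE_SCT_SEARCH_MV', method => 'C');
 END;
 /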

Refreshing a MV involves considerable sorting and requires temporary space to rebuild the view and its indexes. Because of this, it may not be wise to schedule all of your refreshes to occur at the same time. As these are database stored procedures, they can easily be scheduled to run via a Concurrent Manager job on a predefined schedule, with a different schedule for each refresh job you choose to define. The choice of when to refresh has to be based partly on business decisions about how fresh the MV data must be relative to the base table. Remember that a full refresh truncates the MV table and then reinserts the full data volume, including everything that has been added since the last refresh.

Complete Refresh

A complete refresh, likely the option best employed in an Apps instance, is also loosely equivalent to BUILD IMMEDIATE unless, as with most canned Apps MVs, the view is registered on a prebuilt table. A complete refresh involves rereading the detail tables in full and reprocessing all of the data from those tables. You should fully consider the resource ramifications (i.e., time and physical resources) before requesting it.

Fast Refresh

Fast refresh is a more efficient means of refresh because it does not recompute the entire defining query. Instead, it relies on capturing the changes to the underlying data and applying them to the MV. Often this is accomplished by means of a third structure, a MV log, which trades the added maintenance of another object for the speed of refresh.
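
The MV log is created on the base table, not on the MV itself. A minimal sketch, using a hypothetical base table name:

 -- Capture changed rows on the base table so that dependent MVs can be
 -- fast refreshed; po_summary_base is a hypothetical table
 CREATE MATERIALIZED VIEW LOG ON po_summary_base
    WITH ROWID;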

ON COMMIT Refresh

The advantage of having a MV refresh ON COMMIT of a transaction on the base table is that changes are automatically and immediately reflected in the MV; you never have to remember to refresh the view, and your users never access stale data. The disadvantage, depending on how busy your system is, is that it can add significant time to transaction commits.

When a MV is refreshed ON DEMAND, one of three refresh methods can be specified as shown in Table 2.3.

Table 2.3: Refresh Options for MVs

Refresh Option     Description
----------------   ------------------------------------------------------
Refresh Complete   Refreshes the data by recalculating the MV's defining
                   query every time a refresh is run.
Refresh Fast       Refreshes the MV by incrementally applying all changes
                   from the base table to the MV.
Refresh Force      Attempts a fast refresh; if that is not possible or it
                   fails, a complete refresh is done. This is frequently
                   the refresh option of choice, since it lets Oracle
                   attempt the fastest method first.

Query Rewrite

Along with the ability to store the results of predefined queries in a physical structure and automate the maintenance of those structures, MVs let you take advantage of another Oracle 8i and newer feature: query rewrite. The CBO uses the statistics gathered in the database (on tables and MVs alike) to determine which queries can be rewritten to use the MVs, saving time. If you have queries that perform aggregation or summarization, you can store their results in a MV and the optimizer will use it transparently; no end user or programmer ever needs to know it exists. It is a trade-off between speed and storage, with the end user being the ultimate winner.
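
As a sketch of the idea (the table and column names here are hypothetical), an aggregation stored in a MV with ENABLE QUERY REWRITE can silently service queries that group the base table the same way, provided query_rewrite_enabled is set for the instance or session:

 -- Summary MV the optimizer may substitute for matching aggregate
 -- queries against po_lines (hypothetical table)
 CREATE MATERIALIZED VIEW po_vendor_summary_mv
    BUILD IMMEDIATE
    REFRESH FORCE ON DEMAND
    ENABLE QUERY REWRITE
    AS
    SELECT po_vendor,
           COUNT(*)       order_count,
           SUM(po_amount) total_amount
      FROM po_lines
     GROUP BY po_vendor;

 -- A query like this can now be transparently rewritten to use the MV:
 SELECT po_vendor, SUM(po_amount)
   FROM po_lines
  GROUP BY po_vendor;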

One major benefit of creating MVs and taking on their maintenance is the ability to let queries use this rewrite capability. Query rewrite transforms a SQL statement written against a particular table, set of tables, or views into a statement accessing one or more MVs based upon the detail tables of the initial query. Because the transformation is transparent to the end user query, the MV can be dropped, added, refreshed, or altered at any time without adversely affecting the query in any way other than performance and without invalidating indexes or application code.

Before a query is rewritten, several checks are performed to decide whether it is a candidate for rewrite. If the query fails a single check, it is not a candidate. The checks and subsequent redirection can at times be costly in terms of response time and CPU utilization.

The CBO uses two different tests to determine the rewritability of a SQL statement. First, it checks whether the query exactly matches the MV's defining query; if it does, the query is rewritten. This is the only test the statement can fail and still be a candidate for rewrite. If there is no exact match, the CBO compares joins, selections, data columns, groupings, and aggregate functions between the query and the MV; if any portion of the query can run faster using the MV, it allows the rewrite.

Ordinarily, the CBO optimizes each query both with and without attempting the rewrite and chooses the less costly alternative. When the CBO chooses to rewrite, it can rewrite the entire query or one or more of the query blocks within it; the evaluation is done one block at a time. If it must choose between two MVs, it picks the one that lets the query read the smallest amount of data to return the required information.

Since the optimization is based on the total cost of the statement, it is critical that statistics be gathered regularly to allow the CBO to make an informed decision.

Database Resource Manager

Before Oracle introduced the Database Resource Manager (DRM), resource management and allocation (e.g., disk, memory, CPU) were typically left to the operating system (OS) on which the database was running. This caused problems, because the OS cannot partition a machine's resources among tasks on a priority basis, and the Oracle RDBMS's interactions with the OS were not always symbiotic:

  • There can be significant overhead if several RDBMS servers run on one box. Running each on its own logical partition, if that is even possible, would reduce some of the overhead of several engines sharing one machine, but would likely still result in CPU contention.

  • Scheduling and descheduling of server processes is inefficient when left to the OS, particularly if the RDBMS is holding any latches.

  • The OS is not built to efficiently manage database-specific resources. That is not its function; while it can do so to a significant extent, the efficiency and knowledge to manage these resources belong in the database.

Oracle's DRM gives the DBA more control over how resources are managed than has typically been possible before. This improved control enables you to provide better application performance and availability to the end users. With DRM you can guarantee a group of users a minimum amount of processing resources regardless of what others on the system are doing; distribute processing by allotting different users or applications different percentages of CPU time based on their needs; limit the degree of parallelism one group is permitted relative to another; and set up the instance to allocate resources differently depending on the time of day. For instance, batch jobs may be a lower priority during the day, but become a higher priority at night when there are fewer active users on the system.

The DRM comprises the following components, presented here with their basic functionality:

  • Resource Consumer Group

    • Groups users' sessions with similar resource requirements

    • Controls the whole group's consumption

    • Allocates CPU among consumer group sessions

  • Resource Allocation Plans

    • Allocates resources among consumer groups or plans

    • Contains directives that specify each consumer group's resource allocation

    • Groups the groups or plans together

  • Resource Allocation Methods

    • Maintains the policies for each resource's allocation

    • Determines what method is used when allocating to a group or a plan

  • Resource Plan Directives

    • Assigns groups or subplans to a resource plan

    • Allocates resources to groups in a plan by specifying parameters for each method

    • Allows exactly one plan directive for each entry in the plan

A resource plan can reference other plans (i.e., subplans) and can be referenced by others. Figure 2.1 shows one possible configuration of plans and subplans.

Figure 2.1: Resource Manager Plan — Configuration of Plans and Subplans

The init.ora parameter resource_manager_plan enables the resource manager and indicates which master resource plan (and its associated subplans) is to be used for the instance at database startup. The ALTER SYSTEM command can change the resource plan in use at any time after startup, which means you can dynamically switch plans to meet batch and transactional processing demand without restarting the database.

The packages DBMS_RESOURCE_MANAGER and DBMS_RESOURCE_MANAGER_PRIVS can be called to facilitate the care and management of resource objects.
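
As a sketch of how the pieces fit together (the plan, group, and grantee names here are hypothetical), a simple two-group plan is staged in a pending area, validated, and submitted. Note that every plan must include a directive for the built-in OTHER_GROUPS group before it will validate.

 BEGIN
    dbms_resource_manager.create_pending_area();
    dbms_resource_manager.create_plan(
       plan    => 'DAYTIME_PLAN',
       comment => 'Favor online users during business hours');
    dbms_resource_manager.create_consumer_group(
       consumer_group => 'ONLINE_USERS',
       comment        => 'Interactive sessions');
    dbms_resource_manager.create_plan_directive(
       plan => 'DAYTIME_PLAN', group_or_subplan => 'ONLINE_USERS',
       comment => 'Most of the CPU', cpu_p1 => 80);
    -- required catch-all directive for everyone else
    dbms_resource_manager.create_plan_directive(
       plan => 'DAYTIME_PLAN', group_or_subplan => 'OTHER_GROUPS',
       comment => 'Everything else', cpu_p1 => 20);
    dbms_resource_manager.validate_pending_area();
    dbms_resource_manager.submit_pending_area();
    -- allow a user to run in the new group
    dbms_resource_manager_privs.grant_switch_consumer_group(
       grantee_name   => 'SCOTT',
       consumer_group => 'ONLINE_USERS',
       grant_option   => FALSE);
 END;
 /

 -- Switch the running instance to the new plan
 ALTER SYSTEM SET resource_manager_plan = 'DAYTIME_PLAN';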

Partitioning

Partitioning of tables has been available in previous releases of the database, but until Release 11i, Apps did not take advantage of it. Partitioning breaks the data in large tables and indexes down into smaller, more manageable chunks, logically grouped so that SQL statements can access the data in a single partition rather than having to scan the entire table. For very large tables, this can yield significant savings in both query and batch processing time.

There are two basic partitioning methods in Oracle 8i, range and hash, plus the combination of the two (composite partitioning). Range partitioning puts all of the data that falls between two boundary values into the same partition; frequently the data is grouped by date, location, or some other logical boundary. Examples of range partitions are every purchase order for the year 2001, or every widget in the Pittsburgh warehouse. Hash partitioning uses a hash function on a partition key to determine where to store each row. Subpartitioning, or partitioning already partitioned data further, speeds data access even more and helps distribute data evenly within a partition (thus helping to avoid hot spots). Table 2.4, Table 2.5, and Table 2.6 show examples of range partitioning; Table 2.4 is the raw, unpartitioned data.

Table 2.4: Partitioning Raw Data

PO_NUMBER   PO_DATE     PO_VENDOR               PO_LOCATION
---------   ---------   ---------------------   ------------
1234        10-Jun-02   Smith's Office Supply   Pennsylvania
2345        8-Jun-02    Smith's Office Supply   Pennsylvania
5678        7-Jul-02    Smith's Office Supply   Ohio
7891        4-Jul-01    Smith's Office Supply   Ohio
7410        8-Jun-02    Smith's Office Supply   Colorado
8520        28-Jun-02   Smith's Office Supply   Mississippi
9630        20-Jun-01   Smith's Office Supply   Mississippi
1478        14-Jan-02   Smith's Office Supply   Colorado
2589        23-Jan-03   Smith's Office Supply   Tennessee
3697        7-Sep-02    Smith's Office Supply   Tennessee

Table 2.5: Partition by Date

PO_NUMBER   PO_DATE     PO_VENDOR               PO_LOCATION
---------   ---------   ---------------------   ------------
9630        20-Jun-01   Smith's Office Supply   Mississippi
7891        4-Jul-01    Smith's Office Supply   Ohio
1478        14-Jan-02   Smith's Office Supply   Colorado
2345        8-Jun-02    Smith's Office Supply   Pennsylvania
7410        8-Jun-02    Smith's Office Supply   Colorado
1234        10-Jun-02   Smith's Office Supply   Pennsylvania
8520        28-Jun-02   Smith's Office Supply   Mississippi
5678        7-Jul-02    Smith's Office Supply   Ohio
3697        7-Sep-02    Smith's Office Supply   Tennessee
2589        23-Jan-03   Smith's Office Supply   Tennessee

Table 2.6: Partition by Date, Subpartition by Location

PO_NUMBER   PO_DATE     PO_VENDOR               PO_LOCATION
---------   ---------   ---------------------   ------------
9630        20-Jun-01   Smith's Office Supply   Mississippi
7891        4-Jul-01    Smith's Office Supply   Ohio
1478        14-Jan-02   Smith's Office Supply   Colorado
7410        8-Jun-02    Smith's Office Supply   Colorado
2345        8-Jun-02    Smith's Office Supply   Pennsylvania
1234        10-Jun-02   Smith's Office Supply   Pennsylvania
8520        28-Jun-02   Smith's Office Supply   Mississippi
5678        7-Jul-02    Smith's Office Supply   Ohio
3697        7-Sep-02    Smith's Office Supply   Tennessee
2589        23-Jan-03   Smith's Office Supply   Tennessee

Table 2.5 is partitioned by date alone.

Table 2.6 shows the data partitioned by date and, within each date range, subpartitioned by location. A query for all purchase orders in 2001 would run about as effectively against Table 2.5 as against Table 2.6; however, a query for all 2001 purchase orders for the state of Pennsylvania would perform much better against Table 2.6.
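
A sketch of the DDL behind Table 2.6, using a hypothetical table (storage clauses omitted). In 8i, composite partitioning is range over hash, so the location subpartitioning is done with a hash function rather than explicitly named location partitions.

 CREATE TABLE po_example (
    po_number    NUMBER,
    po_date      DATE,
    po_vendor    VARCHAR2(80),
    po_location  VARCHAR2(40))
 PARTITION BY RANGE (po_date)
 SUBPARTITION BY HASH (po_location) SUBPARTITIONS 4
 (PARTITION po_2001 VALUES LESS THAN
     (TO_DATE('01-JAN-2002', 'DD-MON-YYYY')),
  PARTITION po_2002 VALUES LESS THAN
     (TO_DATE('01-JAN-2003', 'DD-MON-YYYY')),
  PARTITION po_max  VALUES LESS THAN (MAXVALUE));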

Partitioning can help you balance I/O (depending on your disk configuration), separate different parts of the data or index for access and maintenance speed, and restore parts of tables independently (leading to a potentially shorter mean time to recover).

Several schemas in the predefined E-Business Suite database have tables that are partitioned, and several more have tables large enough that they could be. Keep this in mind for ongoing, long-term maintenance: you do not want one partition to hold the majority of your data, which defeats the purpose of partitioning.

Parallel Server

Oracle's Parallel Server harnesses the processing power of multiple interconnected (clustered) computers. In a cluster, the processing capability of the individual servers combines into a single computing environment. Each computer is a node, and each node processes transactions against a single shared database. Parallel Server coordinates access to the data from the nodes and provides consistency and data integrity. It speeds access (several small processes running against the database are more efficient than one huge transaction) while helping to achieve high availability: with several nodes contributing to the whole, failure tolerance is higher. If one node goes down, the slack is picked up and redistributed among the remaining nodes. This lessened chance of hardware downtime means higher availability of the database, and of the Applications, to the end user.

A clustered environment is also highly scalable. Adding more users to a system with the combined processing power of several servers shows less of an impact than adding those same users to a single server environment.

Temporary Tables

Apps 11i now makes use of Oracle 8i's temporary tables feature. A temporary table's data exists only for the duration of the session or transaction that created it, and it is private: only the owning session can see or modify it. The data is written to temporary segments rather than to permanent datafiles, and no redo is generated for the Data Manipulation Language (DML) performed against the table's data. Because there is no redo and because the data is accessible only to the session that owns it, no DML locks are required when manipulating data within these tables.
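
A minimal sketch with a hypothetical table name; ON COMMIT DELETE ROWS scopes the data to the transaction, ON COMMIT PRESERVE ROWS to the session.

 -- Session-private scratch table; the definition is permanent and
 -- shared, but each session sees only its own rows
 CREATE GLOBAL TEMPORARY TABLE po_work_gt (
    po_number  NUMBER,
    po_status  VARCHAR2(30))
 ON COMMIT PRESERVE ROWS;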

Applications Manager for Oracle Enterprise Manager

The new Oracle Applications Manager (OAM) module for Oracle Enterprise Manager (OEM) lets administrators manage Oracle E-Business Suite systems from a central console. The console can be used for a wide variety of tasks, including monitoring and tuning the running processes on the system. With the familiar look and feel of the other OEM products, it helps a DBA make the leap from plain DBA to Applications DBA.

Invoker's Rights

In past releases, the addition of Multiple Reporting Currencies (MRC) and Multiple Sets of Books to the Oracle Applications architecture brought interesting challenges. In companies where these features were implemented, many of the packages that Apps relies on had to be stored several times in the database: once for each currency and once for each set of books. This meant longer upgrades and maintenance windows, as every copy had to be accounted for each time an upgrade was performed or a minipack or family pack patch was applied, and extra space was required to store each additional copy. This model, in which a stored procedure executes under the authority of the owner of the package, is known as definer's rights (still the default execution method). It assumed that an entire application would be built within a single Oracle account; Oracle Applications now has over 150 different schemas and owners, so that is not a realistic expectation.

Oracle 8i brings with it functionality called invoker's rights. Invoker's rights allow each package to be stored once and accessed by different users, each through their own synonym for the package or procedure, with each user permitted to perform only the actions their roles and granted privileges allow, not the actions the owner could perform. This is particularly powerful in the Oracle E-Business Suite, where the APPS schema may own packages that, if run as APPS, can see anything in the system. Suppose the purchasing department is allowed to see certain personal information about a purchase order requester (e.g., name, department) that is stored in the human resources (HR) tables, but must not update or delete any of it. Under definer's rights, purchasing would be able to change those values. Under invoker's rights, the package that works with those values can be created with select, insert, update, and delete logic, yet only HR can insert, update, or delete, while purchasing can select, and only select, the columns relevant to a purchase order. An added safety of a procedure or package defined with invoker's rights is that any unqualified object reference resolves against the schema of the user calling the procedure. One package can now meet all of these needs, provide a more secure and efficient application, and give you one central place for reusable code.
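
Syntactically, the difference is a single clause in the program unit's specification. A minimal sketch with hypothetical names:

 -- AUTHID CURRENT_USER makes the procedure execute with the privileges
 -- and name resolution of the caller, not the owner (definer's rights,
 -- the default, would be AUTHID DEFINER)
 CREATE OR REPLACE PROCEDURE show_requester (p_po_number IN NUMBER)
 AUTHID CURRENT_USER
 AS
 BEGIN
    -- any unqualified reference here resolves in the caller's schema,
    -- and succeeds only if the caller's own privileges permit it
    NULL;
 END;
 /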

Locally Managed Tablespaces

Until recent releases of the database, space allocation for all objects was handled in the data dictionary. This could cause performance problems, because each new extent allocation required several single-threaded recursive queries against the SYSTEM tables to find the free space, allocate it to the new extent, and then mark that space as used. The employee table needs another extent? Go check for free space in the free extents table; if there is free space, allocate it to the employee table, but then tell the used extents table that there is now one more used extent. These statements generated rollback, which itself caused further single-threaded recursive statements: one row of redo for each insert, one row of redo for each delete, and a row of rollback for each row deleted. If you truncated a table (freeing all used extents) or dropped a table (again, freeing all used extents), multiple rows had to be inserted as available and deleted from being marked as used, all against the data dictionary, with the corresponding redo happening at the same time.

Locally managed tablespaces use bitmaps to manage their own extents rather than relying on the data dictionary. The bitmaps track which datafile blocks are used or free; each block of each datafile has its own bit that is switched on or off to show the block's allocation. Allocating a block means flipping one bit from 0 to 1; deallocating means flipping it back from 1 to 0, with no rollback and no redo associated. Temporary tablespaces and rollback tablespaces see significant amounts of allocation and deallocation and are therefore prime candidates for local management. There is less chance of contention on the data dictionary tables if you do not have to visit them on a regular basis, and because the tablespaces are self-managed and no dictionary tables are updated, nearly no rollback information is generated. Further, extent management tracks adjacent free space automatically, so there is no longer any need to manually coalesce free space (it is now automatic by design), and fragmentation is significantly reduced (particularly if you use uniform allocation).

In some Applications 11i installations, locally managed is the default way tablespaces are built. In others, running dbms_space_admin.tablespace_migrate_to_local, passing it the tablespace name as a parameter, lets you migrate an existing dictionary managed tablespace to a locally managed one.
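
Both operations are one-liners. A sketch with hypothetical names and paths:

 -- Create a new tablespace as locally managed with uniform 1M extents
 CREATE TABLESPACE apps_data_new
    DATAFILE '/u01/oradata/PROD/apps_data_new01.dbf' SIZE 500M
    EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;

 -- Migrate an existing dictionary managed tablespace in place
 EXECUTE dbms_space_admin.tablespace_migrate_to_local('APPS_DATA_OLD');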

Diagnosing and Repairing Locally Managed Tablespace Problems

DBMS_SPACE_ADMIN is the package that provides database administrators with detection, diagnosis, and repair functionality for locally managed tablespaces; apart from the migration procedures, it cannot be used on dictionary managed tablespaces. It is also the package you use to migrate from dictionary managed to locally managed and back again. The DBMS_SPACE_ADMIN package contains the following procedures:

  • Segment_verify: This procedure verifies the consistency of the extent map for a segment; in other words, does this segment currently require repair?

  • Segment_corrupt: This procedure marks a segment as either corrupt or valid so that the appropriate recovery can be performed on it. This procedure can never be used on the system tablespace, as any corruptions found in that tablespace cannot be dropped.

  • Segment_drop_corrupt: This procedure provides the ability to drop a segment currently marked as corrupt without reclaiming the space that was marked as corrupt. This can never be used on a locally managed system tablespace.

  • Segment_dump: This procedure dumps the segment headers and extent map of any given segment.

  • Tablespace_verify: This procedure verifies that the bitmaps and the extent maps for the segments in the tablespace are in sync.

  • Tablespace_rebuild_bitmaps: If Tablespace_verify shows that the bitmaps are not in sync, use this procedure to rebuild the appropriate bitmaps. This procedure cannot be used on the system tablespace.

  • Tablespace_fix_bitmaps: This procedure marks the appropriate data block address ranges to reflect their true status. They will be marked as free or used within the bitmap. This procedure cannot be used on a locally managed system tablespace.

  • Tablespace_migrate_from_local: This procedure migrates the passed in locally managed tablespace to a dictionary managed tablespace. You cannot use this procedure to change a locally managed system tablespace to a dictionary managed system tablespace.

  • Tablespace_migrate_to_local: This procedure migrates the passed dictionary managed tablespace to a locally managed tablespace. This procedure can be used on the system tablespace.

  • Tablespace_relocate_bitmaps: This procedure relocates the bitmap portion of the specified tablespace (not the system tablespace) to the destination specified.

  • Tablespace_fix_segment_states: If the migration of a tablespace is aborted, due to system failure, an error condition, or because someone closed the session in which the migration was running, this procedure fixes the state of the segments in that tablespace.

None of these procedures should be used without a backup from which you can restore your system; some of them, if used incorrectly, can result in lost and totally unrecoverable data. If you are not comfortable with what they do or exactly how to use them, do not hesitate to get the assistance of Oracle Support. It is quicker to get help avoiding corruption than to deal with the severity 1 iTAR needed to fix it.

How would you use these procedures? Let us look at some situations in which you might. This is where you will find the drawbacks of having your tablespaces migrated to locally managed: no longer is all maintenance of the tablespace taken care of by the system, and no longer can you simply assume there are no issues with your database just because none are apparent. You will now have to run these package procedures to check the status of the tablespaces, identify any issues that arise, and fix them once they are uncovered. It is good administrative practice to try different scenarios with dbms_space_admin against your own set of test locally managed tablespaces, so there is no panic if you have to run any of these procedures in a real problem situation; a starting point is sketched below.
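
A safe first exercise is simply verifying a tablespace and reading the results; the check is nondestructive, and the findings appear in the session trace file in user_dump_dest.

 -- Check that the bitmap and the extent maps agree for one tablespace
 EXECUTE dbms_space_admin.tablespace_verify('IBED');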

  1. You have decided to run the tablespace_verify procedure to find out whether the bitmap and extent maps for the IBED tablespace have any issues. The results indicate that a segment has allocated blocks that are marked free in the bitmap, although no overlap between segments is reported. What do you do?

    1. Call dbms_space_admin.segment_dump to dump the ranges that have been allocated to that segment.

    2. For each of the dumped ranges, call dbms_space_admin.tablespace_fix_bitmaps with the tablespace_extent_make_used option to mark the space as used.

    3. Call dbms_space_admin.tablespace_rebuild_quotas to fix the quotas for that particular tablespace.

  2. You are trying to deallocate a segment in the ARD tablespace and are having problems dropping it. You discover that you cannot drop the segment because its blocks are marked free in the bitmap and the system has marked the segment corrupted. What can you do, short of exporting the data from the tablespace, dropping and recreating the tablespace, and reimporting the information into the rebuilt tablespace?

    1. Run dbms_space_admin.segment_verify with the optional parameter segment_verify_extents_global. If the results returned show no overlaps, proceed; otherwise, contact Oracle Support for assistance before proceeding.

    2. Call dbms_space_admin.segment_dump to dump the ranges that are allocated to the segments.

    3. For each of the ranges returned, call dbms_space_admin.tablespace_fix_bitmaps using the tablespace_extent_make_free parameter to mark all appropriate spaces free.

    4. Call dbms_space_admin.segment_drop_corrupt to drop the corrupt segment entries from the seg$ sys table.

    5. Call dbms_space_admin.tablespace_rebuild_quotas to fix the quotas associated with the tablespace.

  3. You have run dbms_space_admin.tablespace_verify and the results show that there are overlapping segments in the tablespace. Internal errors you have become aware of indicate that some data has to be sacrificed (e.g., there have been ORA-26082 errors when attempting to direct path load into one of the interim tables). You have to determine which objects overlap and which of them you can afford to sacrifice.

    1. Drop the table that you have chosen to sacrifice.

    2. Optionally, run dbms_space_admin.segment_drop_corrupt.

    3. Call dbms_space_admin.segment_verify on all of the objects that the dropped table may have overlapped.

    4. If necessary, run dbms_space_admin.tablespace_fix_bitmaps to mark the appropriate blocks as used.

    5. Rerun dbms_space_admin.tablespace_verify to make sure that any overlap is gone.

  4. You have determined that media corruption is causing problems with bitmap blocks in your GLD tablespace.

    1. Call dbms_space_admin.tablespace_rebuild_bitmaps on either all bitmaps in the tablespace or only on the particular corrupt ones.

    2. Call dbms_space_admin.tablespace_rebuild_quotas.

    3. Call dbms_space_admin.tablespace_verify to make sure that the bitmaps in the tablespace are now in a consistent state.

  5. Your implementation was done before Oracle packaged the Applications database with the parameters that build all tablespaces as locally managed, and you have decided to migrate your entire database to locally managed tablespaces. How do you do this? The following code will assist you in migrating the nonsystem tablespaces.

 -- Build a script of migrate calls for every dictionary managed
 -- tablespace except SYSTEM, then run it
 set head off
 set echo off
 set feedback off
 spool migrate.sql
 select 'execute dbms_space_admin.tablespace_migrate_to_local(''' ||
        tablespace_name || ''');'
   from dba_tablespaces
  where tablespace_name != 'SYSTEM'
    and extent_management = 'DICTIONARY';
 spool off
 set head on
 set echo on
 set feedback on
 @migrate.sql

But what about migrating the SYSTEM tablespace itself? In Oracle 9i, doing so is not only supported but encouraged:

 EXECUTE dbms_space_admin.tablespace_migrate_to_local('SYSTEM');

As with the nonsystem tablespaces, you use the dbms_space_admin package to do the migration. Before you start, there are several prerequisites to take into account and make sure you have satisfied:

  • Have defined a default temporary tablespace that is not the SYSTEM tablespace.

  • Have no dictionary managed rollback segment tablespace.

  • Have at least one online rollback segment in a locally managed tablespace, or have migrated to Automatic Undo Management with the undo tablespace online.

  • Migrate to locally managed any tablespace you will ever want to open for write operations in the future. Once the SYSTEM tablespace has been migrated to locally managed, any tablespace that was dictionary managed at the time of migration can no longer be opened for read/write access.

  • Have all tablespaces other than SYSTEM and the tablespaces containing undo or rollback segments in read-only mode.

  • Have a reliable cold backup of the database and have it available. This is always a safe precaution to take.

  • Open the database in restricted mode for the duration of the procedure.

Once you have made sure that all of these conditions are met, you can safely proceed with the migration. As soon as you have completed the migration of the SYSTEM tablespace to locally managed, test to make sure that your system functions with the new configuration, and take an additional cold backup of the database as soon as possible.
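
Pulling the checklist together, one possible SQL*Plus sequence for the SYSTEM migration looks like the following sketch; the tablespace names are illustrative.

 SHUTDOWN IMMEDIATE
 STARTUP RESTRICT
 -- repeat for every tablespace other than SYSTEM and the tablespaces
 -- containing undo or rollback segments
 ALTER TABLESPACE apps_data READ ONLY;
 EXECUTE dbms_space_admin.tablespace_migrate_to_local('SYSTEM');
 ALTER TABLESPACE apps_data READ WRITE;
 ALTER SYSTEM DISABLE RESTRICTED SESSION;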


