Data Architecture and Availability

The architecture of your database objects has much to do with availability. There are multiple factors that play into the ultimate availability of your data: outages during recovery situations, performance across large tables or during heavy usage, and DDL operations that modify objects. Oracle provides specific types of database structures that help mitigate the effect of these factors: table and index partitioning, index-organized tables, materialized views, and online reorganization of tables and indices.

It should be noted that we are not offering application availability solutions in this book. Creating applications that enhance availability based on proven execution plans, optimized SQL and PL/SQL, and application server management are all outside the scope of this tome, which is not to say they are not important. We just had to draw the line somewhere. Tom Kyte's books are great sources for building Oracle applications; for performance tuning, we recommend Richard Niemiec's works. For application server management (from the Oracle perspective, of course), we suggest Garmany and Burleson's latest Oracle Database 10g tome. See the bibliography at the end of this book for more information.

Partitioned Tables and Indexes

Table and index partitioning refers to the ability within Oracle to organize a single table of data into multiple logical parts that can be maintained and treated independently. Each partition of a table can be housed in a separate tablespace, on different disks (or disk groups), and be dropped or reorganized independent of the other partitions. This provides a superb level of performance and, of course, availability. If a datafile associated with a partition goes belly up, all other partitions can remain available to users while you repair the bad file.

Tables can be partitioned, as can indices. From a performance standpoint, partitions are an excellent method for data searching: each partition can be searched in parallel with the others, whether they are index or table partitions. So, index range scans go faster, and even full table scans (heaven forbid) go faster. There are significant maintenance benefits from partitioned tables and indices, as well. Individual partitions can be dropped, moved, and loaded without affecting other partitions in the same table.

Partitioning is a huge topic to cover, and, frankly, we don't have the space here. We will, however, introduce the concepts and benefits, and provide a few simple examples. After that, please refer to the bibliography for where to learn more about all the nitty-gritty details.

What Tables and Indexes Are Partitioning Candidates?

Partitioning is usually a maintenance-to-performance trade-off: maintaining partitions requires more DBA footwork, but typically provides better performance. It is usually assumed that only really large tables benefit from partitioning, but even medium-sized objects can reap benefits if you feel like putting in the grunt work. So, if you are looking for performance gains on data lookup, give partitioning a try. In addition, partitioning is an excellent way to manage time-generated data. If you have data that rolls in every day, and you need a way to manage older data, partitioning can provide invaluable services. It provides a way to archive or delete older data in a table without affecting performance for the rest of the table. In a nutshell, partitioning is worth a little exploration for your larger data tables, particularly those that grow over time.

Table Partitioning Types

There are three distinct types of partitions for tables: range, hash, and list. Each type of partition serves a slightly different purpose, based on the nature of the data and how it would be best divided into parts, for performance as well as for logical grouping. Partitioning is determined by a partition key that is specified at table creation time. The partition key is the column (or columns) that you tell Oracle to use as the basis for segmenting data into partitions.

Range Partitioning  The most common form of partitioned tables is the range partition. The range partition divvies up data into segments that are separated by ranges of values, such as dates. Range partitioning is best implemented when the range values by which you partition result in the partitions having equally sized amounts of data. The following example uses the woodscrew_orders table discussed in Chapter 1, as part of the database for Horatio's Woodscrew company. In this example, we divide the table by the order date column. The DBA believes that his orders are steady enough over the months that he can partition by month, and each month will have roughly the same amount of data.

create table woodscrew_orders (
  ord_id        number not null,
  ord_date      date,
  cust_id       number not null,
  scr_id        number not null,
  ord_cnt       number,
  warehouse_id  number not null,
  region        varchar2(20),
  constraint pk_woodscrew_orders primary key (ord_id, ord_date)
    using index tablespace ws_app_idx)
partition by range (ord_date)
(partition values less than (TO_DATE('1-OCT-2003','DD-MON-YYYY'))
    tablespace wdscrord_sep_2003,
 partition values less than (TO_DATE('1-NOV-2003','DD-MON-YYYY'))
    tablespace wdscrord_oct_2003,
 partition values less than (TO_DATE('1-DEC-2003','DD-MON-YYYY'))
    tablespace wdscrord_nov_2003,
 partition values less than (TO_DATE('1-JAN-2004','DD-MON-YYYY'))
    tablespace wdscrord_dec_2003)
enable row movement;

Hash Partitioning  There are times when you have large tables that would benefit from partitioning for performance and maintenance reasons, but the tables do not lend themselves to range partitioning. The Woodscrew table from Chapter 1 is a good example. There are no reliable range values for the table, but the table is growing larger and larger and performance is beginning to trail off. In this situation, a hash partition would be appropriate. Instead of segmenting the table based on a range of values, Oracle will hash the key column you specify and segment the incoming data based on the hash value. This provides a way to take a column such as SCR_ID and use it as the partitioning key, and Oracle will ensure that we get like-sized partitions.

create table woodscrew (
  scr_id        number not null,
  manufactr_id  varchar2(20) not null,
  scr_type      varchar2(20),
  thread_cnt    number,
  length        number,
  head_config   varchar2(20),
  constraint pk_woodscrew primary key (scr_id, manufactr_id)
    using index tablespace ws_app_idx)
partition by hash (scr_id)
partitions 4
store in (wdscr_part1, wdscr_part2);

List Partitioning  List partitioning provides a hybrid-looking table that stands somewhere between a range and a hash partition. List partitioning occurs by grouping specific values to be placed in each partition. Thus, you can group data logically that does not have a natural range grouping, and control which partition is used (unlike a hash, where the DBA has no control over the system hash function). List partitioning is excellent in situations where you need explicit control over partition placement for performance or maintenance reasons, and where the partition key does not lend itself to a range.

List partitioning has only one limitation that the other two types do not have: the partition key can only be a single column; you cannot list partition on two or more columns.

For the DBA at Horatio's Woodscrews Company, a good candidate for list partitioning is his woodscrew_inventory table, where he needs to be able to load data based on manufacturer, and he also has multiple data-mining operations that are manufacturer-centric. So he partitions the table by list on MANUFACTR_ID, like this:

create table woodscrew_inventory (
  scr_id        number not null,
  manufactr_id  varchar2(20) not null,
  warehouse_id  number not null,
  region        varchar2(20),
  count         number,
  lot_price     number)
partition by list (manufactr_id)
(partition east_suppliers values ('Tommy Hardware', '2Many Parts')
    tablespace wdscr_inv_part1,
 partition west_suppliers values ('Balaji Parts')
    tablespace wdscr_inv_part2,
 partition other values (DEFAULT)
    tablespace ws_app_data)
enable row movement;

Subpartitioning

For the hierarchy enthralled, you can take partitioning to the next level and subpartition your partitions.

Hash Subpartitioning  The first type of subpartitioning introduced in Oracle was the composite range-hash partition. This methodology allows you to set a range partition key, and then for values in that range subdivide the data along a hash function. This provides better performance and more granularity.

This is, essentially, a way to organize hash-organized partitions into ranges. Using a range-hash partition table, you can organize data by a date range, and then within the date range further divide the data into manageable components that are equitably distributed and provide a higher level of parallelism during large data-mining operations.

In the range partition example in the preceding section, the Woodscrew DBA partitioned his woodscrew_orders table by ORD_DATE. To garner better performance, and to further distribute the load across tablespaces for recoverability, the DBA now subpartitions the same table by hash:

create table woodscrew_orders (
  ord_id        number not null,
  ord_date      date,
  cust_id       number not null,
  scr_id        number not null,
  ord_cnt       number,
  warehouse_id  number not null,
  region        varchar2(20),
  constraint pk_woodscrew_orders primary key (ord_id, ord_date)
    using index tablespace ws_app_idx)
partition by range (ord_date)
subpartition by hash (ord_id) subpartitions 2
(partition values less than (TO_DATE('1-OCT-2003','DD-MON-YYYY'))
    store in (wdscrord_sep_2003_part1, wdscrord_sep_2003_part2),
 partition values less than (TO_DATE('1-NOV-2003','DD-MON-YYYY'))
    store in (wdscrord_oct_2003_part1, wdscrord_oct_2003_part2),
 partition values less than (TO_DATE('1-DEC-2003','DD-MON-YYYY'))
    store in (wdscrord_nov_2003_part1, wdscrord_nov_2003_part2),
 partition values less than (TO_DATE('1-JAN-2004','DD-MON-YYYY'))
    store in (wdscrord_dec_2003_part1, wdscrord_dec_2003_part2))
enable row movement;

Range-List Partitioning  Oracle finally introduced this option in Oracle9i Release 2: the ability to partition by range, and then subpartition by list. In this way, you can very specifically control how the partitions shape up, so that maintenance and mining operations can be targeted at the most useful level of granularity. This is a very targeted approach to partitioning, and it should be undertaken only if you have enough historical data to determine that a highly controlled partitioning structure will benefit your application without hampering performance or skewing partitions to the point that they are grossly disproportionate to one another.

Our Woodscrew DBA has to rethink the range-hash partitioning of the woodscrew_orders table, due to the search methodologies of his application. While the ORD_DATE is an excellent way to organize historical data, the hash subpartitions don't provide as much control as the DBA would like. So, he decides to use the REGION column for a list subpartition. This allows him to monitor and organize data along regional sales areas.

create table woodscrew_orders (
  ord_id        number not null,
  ord_date      date,
  cust_id       number not null,
  scr_id        number not null,
  ord_cnt       number,
  warehouse_id  number not null,
  region        varchar2(20),
  constraint pk_woodscrew_orders primary key (ord_id, ord_date)
    using index tablespace ws_app_idx)
partition by range (ord_date)
subpartition by list (region)
(partition wdscrord_sep_2003
     values less than (TO_DATE('1-OCT-2003','DD-MON-YYYY'))
    (subpartition wdscrord_sep_2003_west values ('SOUTHWEST', 'NORTHWEST')
         tablespace wdscrord_sep_2003_part1,
     subpartition wdscrord_sep_2003_east values ('SOUTHEAST', 'NORTHEAST')
         tablespace wdscrord_sep_2003_part2),
 partition wdscrord_oct_2003
     values less than (TO_DATE('1-NOV-2003','DD-MON-YYYY'))
    (subpartition wdscrord_oct_2003_west values ('SOUTHWEST', 'NORTHWEST')
         tablespace wdscrord_oct_2003_part1,
     subpartition wdscrord_oct_2003_east values ('SOUTHEAST', 'NORTHEAST')
         tablespace wdscrord_oct_2003_part2),
 partition wdscrord_nov_2003
     values less than (TO_DATE('1-DEC-2003','DD-MON-YYYY'))
    (subpartition wdscrord_nov_2003_west values ('SOUTHWEST', 'NORTHWEST')
         tablespace wdscrord_nov_2003_part1,
     subpartition wdscrord_nov_2003_east values ('SOUTHEAST', 'NORTHEAST')
         tablespace wdscrord_nov_2003_part2),
 partition wdscrord_dec_2003
     values less than (TO_DATE('1-JAN-2004','DD-MON-YYYY'))
    (subpartition wdscrord_dec_2003_west values ('SOUTHWEST', 'NORTHWEST')
         tablespace wdscrord_dec_2003_part1,
     subpartition wdscrord_dec_2003_east values ('SOUTHEAST', 'NORTHEAST')
         tablespace wdscrord_dec_2003_part2))
enable row movement;

Index Partitioning

In addition to table partitions, Oracle allows indices to be partitioned. With index partitioning, range scans along the index can be greatly reduced because the cost-based optimizer can prune partitions that do not match the query parameters. In addition, index rebuilds, moves, and deletes can be isolated to subsets while the rest of the index remains unaffected. There are two types of index partitions: local and global.

Local Index Partition  A local index has its structure imprinted by the underlying partitioned table. Oracle calls this equipartitioned: the index has the same partition boundaries as the partitioned table, and will be maintained automatically along with the table it is based on. That is, if you split a table partition into two new partitions, the local index on that partition will also be split into two new partitions. Local indices require less maintenance than global indices, because they are altered automatically along with the underlying table.
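
As a minimal sketch (the index name is our own invention), creating a local index on the range-partitioned woodscrew_orders table requires nothing more than the LOCAL keyword; Oracle builds one index partition per table partition:

create index wdscrord_scr_id_ix on woodscrew_orders (scr_id) local;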

Global Index Partition  A global index can be partitioned independently of the table on which it is based. In fact, a global index can be partitioned while the underlying table is not partitioned at all. This provides a degree of independence, but it may also require more maintenance than a local index. Global partitions are superb when you have access paths that do not match the partitioning scheme. For instance, our Woodscrew DBA has noted that the application frequently performs batch updates to the woodscrew_orders table based on customer ID (CUST_ID), which is not part of the range partition key for the table. So the DBA can build a global hash-partitioned index on CUST_ID that allows for parallel range scanning.
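
A hedged sketch of that index (again, the index name is our own invention); global hash-partitioned indexes are new in Oracle Database 10g:

create index wdscrord_cust_id_ix on woodscrew_orders (cust_id)
    global partition by hash (cust_id)
    partitions 4;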

Partition Maintenance

Partition maintenance provides a high level of flexibility in how you go about removing old data, adding new partitions, splitting partitions, and performing other operations. Table 2-3 provides a quick look at the maintenance operations at your fingertips; a few representative examples follow the table.

Table 2-3: Available Partition Maintenance Operations

Maintenance Operation: Tables                          Type of Partition

Adding a partition                                     Range, hash, list, range-hash, range-list
Dropping a partition                                   Range, list, range-hash, range-list
Exchanging a partition                                 Range, hash, list, range-hash, range-list
Merging a partition                                    Range, list, range-hash, range-list
Moving partitions                                      Range, hash, list, range-hash, range-list
Renaming partitions                                    Range, hash, list, range-hash, range-list
Splitting partitions                                   Range, list, range-hash, range-list
Truncating partitions                                  Range, hash, list, range-hash, range-list
Coalescing a partition                                 Hash, range-hash
Modifying default attributes of a partitioned table    Range, hash, list, range-hash, range-list
Modifying real attributes of a partition               Range, hash, list, range-hash, range-list
List partition: adding values                          List, range-list
List partition: dropping values                        List, range-list

Maintenance Operation: Indices                         Type of Index

Rendering an index partition unusable                  Global
Modifying a local index to use the NOLOGGING option    Local
Rendering an entire index unusable                     Local, global
Rebuilding an index partition                          Local, global
Dropping an index partition                            Global
Coalescing an index partition                          Global
Splitting a global index partition                     Global
Renaming an index partition                            Local, global
Modifying index default values                         Local, global
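
To make a few of these operations concrete, here is a minimal sketch against the woodscrew_orders range partitions built earlier. The partition names are hypothetical, since the original DDL let Oracle generate them:

-- Add a partition for the next month of orders
alter table woodscrew_orders add partition wdscrord_jan_2004
    values less than (TO_DATE('1-FEB-2004','DD-MON-YYYY'))
    tablespace wdscrord_jan_2004;

-- Age out the oldest month with a single dictionary operation
alter table woodscrew_orders drop partition wdscrord_sep_2003;

-- Empty a partition without logging individual row deletes
alter table woodscrew_orders truncate partition wdscrord_oct_2003;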

Partitioning and High Availability

So what does all this partitioning mean for availability? Partitioning is an excellent way to leverage functionality inside the database to maximize performance of heavily used tables for the end users. In OLTP and DSS environments, partitioning can make a difference in the usability of large tables, meaning that end users' perceived availability goes up. In addition, by spreading a large table across multiple tablespaces, you mitigate the downtime associated with a failure in a single partition: media recovery can be isolated to that partition. You are also ensuring a higher level of availability during table and index maintenance.

Index-Organized Tables

Index-organized tables (IOTs) deserve brief mention, if for no other reason than they are really cool. IOTs allow you to organize tables into B-tree leaf blocks based on your defined primary key. So, instead of having a table with blocks that are simply piled on top of one another in no order, and then having an index that allows you to locate the rows in those blocks, now you can store the table rows in the index leaf blocks themselves. This means that once the index lookup is complete, the operation is done. No secondary block lookup to the table, because the row values are stored inline with the primary key value in the index.

Like we said, it's all very cool. The gains come from performance, where you are looking up fewer blocks and therefore decreasing disk I/O. You also decrease maintenance demands on heavily used OLTP tables that need constant reorganization and defragmentation; with IOTs, the organization is built into the table, and reorgs can be done online (see the upcoming section 'Online Reorganization'). You will also see a decrease in total space required, because you are no longer duplicating the primary key values in both the table and the index.

IOTs also allow for key compression if the primary key is a composite of multiple columns. In such tables, if key compression is used, Oracle divides the key into a prefix and a suffix, and then shares each prefix entry among the rows in a leaf block that repeat it, so that the prefix value does not need to be stored with every row.
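
Here is a minimal sketch of key compression, reusing the woodscrew key columns (the table name is our own invention). With COMPRESS 1, the first key column, SCR_ID, is stored once per distinct value in a leaf block rather than with every row:

create table woodscrew_iot (
  scr_id        number not null,
  manufactr_id  varchar2(20) not null,
  scr_type      varchar2(20),
  constraint pk_woodscrew_iot primary key (scr_id, manufactr_id))
organization index
compress 1;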

IOTs also benefit from decreased sort operations, if the ORDER BY clause of a query names the key value of the IOT. In such a case, Oracle will skip the sort operation because the data will already be returned in key value order. With this in mind, you can maximize the return on queries if you have tables that are frequently ordered by the primary key.

Other than the organizational structure, IOTs behave just as any normal table would: you can insert, update, delete, truncate, move, and partition them. They can be reorganized online and replicated, and can have materialized views created on them. You can even create secondary indices on them, so that alternate access paths can be used in the same way you would access a heap-organized table (heap meaning the data is piled into the table in no particular order).
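
A secondary index on an IOT uses ordinary CREATE INDEX syntax. For instance, this hypothetical index would give the IOT built in the next example an alternate access path by customer:

create index wdscrord_new_cust_ix on woodscrew_orders_new (cust_id);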

For an example of an IOT, remember our DBA's heavily hit woodscrew_orders table. Because of heavy demand for woodscrews, this table is very volatile and adds new rows constantly, but the access path is generally the same, coming from a lookup on the order ID and the order date. So, the DBA could rebuild the table as an IOT to increase performance on these lookups. He has already partitioned the woodscrew_orders table, but that isn't a problem; an IOT can be partitioned just like a heap-organized table.

create table woodscrew_orders_new (
  ord_id        number not null,
  ord_date      date,
  cust_id       number not null,
  scr_id        number not null,
  ord_cnt       number,
  warehouse_id  number not null,
  region        varchar2(20),
  constraint pk_woodscrew_orders_new primary key (ord_id, ord_date))
organization index including ord_date pctthreshold 20
overflow tablespace wd_scr_overflow
partition by range (ord_date) ...

Like we said before, IOTs are not inherently a high-availability feature of the RDBMS. But anything that helps with performance helps with the perception of availability, which is half the battle. Plus, IOTs are so cool.

Materialized Views

It is always worth repeating one of the HADBA mantras: availability is defined as much by end-user perceptions as by database realities. If the end user cannot access the data in a speedy, reliable way, then the data is not available, even if the problem is due to table size, execution path, or bad ad hoc queries. Essentially, this means that performance is as important to availability as database uptime.

This is where materialized views (we shorten the term to mview for everyone's sake) come in. They offer the same type of availability benefit as IOTs, in the sense that an mview does not prevent outages; it just increases performance. Materialized view is an umbrella name for an Oracle object that serves different purposes in different arenas. For our purposes, an mview is just what its name implies. First, consider a regular view in Oracle. A regular view is nothing more than a stored query that can be used to mask a table's columns from users, or simply to provide a quick coding shortcut for a complex lookup task. But each time you perform a select from a view, you are accessing the underlying table and its indices, and incurring whatever performance hit is required to do so. This can get very expensive for tables that are large, whether because of the number of rows or the number of column values.

A materialized view, on the other hand, is an actual re-creation of a subset of the data from the master table (or tables). When you create an mview, you do not just create the query; you create a new table that stores rows from the master table based on your mview criteria. In this way, you can get faster results from queries against the mview, because there are fewer rows (or fewer columns) to navigate through. Granted, you need the storage space to hold copies of rows held elsewhere, but the performance gains are such that they often outweigh the storage expense. Materialized views can contain joins and aggregates, making them extremely useful for data summaries and roll-up value mining.

Materialized views are the logical growth of the object Oracle used to call a snapshot. Snapshots were used in distributed database systems to offer local copies of centralized data tables found at a master database. Mviews take this system and apply it to a local object, such that the same structure can be used for performance gains on the master database itself. Like snapshots, mviews can be refreshed from the base table, and all new rows and row changes are propagated from the base table to the mview. The refresh can be complete or fast. A complete refresh rebuilds the entire mview from scratch. A fast refresh only updates new or changed rows since the last refresh.

A fast refresh requires a materialized view log. This is a separate object that is created on the base table. Whenever a DML operation is performed against the base table, the mview log is updated with the same change. Then, you can periodically check the mview log for those changes, propagate them to the mview, and purge the records from the log.
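
A refresh can be driven manually through the supplied DBMS_MVIEW package. In this minimal sketch, 'F' requests a fast refresh from the mview log, and 'C' would request a complete rebuild; the mview name assumes the example created later in this section, and a fast refresh of a join mview is subject to the usual fast-refresh restrictions:

BEGIN
  -- Fast refresh: apply only the changes recorded in the mview logs
  DBMS_MVIEW.REFRESH('ws_app.cust_ws_order_mv', 'F');
END;
/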

Mviews can still be used for distributed database needs. You can create mviews on remote databases via database links and refresh them from the master table. However, such mviews have a subset of functionality compared to local mviews, and we will not cover them in this book. Instead, see Chapter 10 for the next generation of distributed database functionality for high availability.

Mviews can be created on tables, partitioned tables, index-organized tables, and partitioned index-organized tables. In addition, you can partition an mview, which provides benefits when refreshing the mview in parallel.

Creating mviews

To create an mview, you first need to determine what subset of data, or what joined data, or what aggregated data, you want to have in the mview. We will continue to look at Horatio's Woodscrew database for our examples. The DBA has noted that the CEO of Horatio's Woodscrews Company makes a weekly review of the yearly order trends of the largest customers, and in particular likes to see what kind of screws are being ordered. Once he sees the whole order list, he reviews how many slot-headed vs. Phillips orders are being placed. To get this data, the DBA had created an application button on the business intelligence screen that the CEO could use to compile the screw orders per customer, and this data includes the head configuration. But this report has been running slowly, and the CEO has requested that it be sped up considerably.

To get past this, the DBA implements a series of materialized views that echo the query pattern of the CEO. For example, he builds an mview for the customer identified by cust_id=2002, with a join from Woodscrew to include screw details. First, he builds the mview logs on the base tables:

create materialized view log on ws_app.woodscrew
    tablespace ws_app_data;

create materialized view log on ws_app.woodscrew_orders
    tablespace ws_app_data;

CREATE MATERIALIZED VIEW ws_app.cust_ws_order_mv
PCTFREE 0 TABLESPACE ws_app_data
STORAGE (INITIAL 16k NEXT 16k PCTINCREASE 0)
PARALLEL
BUILD IMMEDIATE
REFRESH on demand
ENABLE QUERY REWRITE
AS SELECT w.scr_type, w.head_config, wo.cust_id, wo.ord_cnt, wo.scr_id
   FROM ws_app.woodscrew_orders wo, ws_app.woodscrew w
   where w.scr_id=wo.scr_id
   and wo.cust_id = 2002;

Query Rewrite

Perhaps the most compelling feature of mviews is the ability for the Oracle cost-based optimizer to automatically use an mview instead of the base table when a query against the base table matches the mview. This functionality is known as query rewrite, and it's yet another HA feature you can exploit, but not in the traditional sense. Query rewrite allows you to implement mviews on the fly, without having to rebuild application code or change user behavior. Once the queries are determined, you can build the mview based on the query, and then Oracle will automatically use the mview instead of the base table. This allows you to increase performance and change execution paths without taking the database out of the users' hands first, or integrating new functionality into existing applications.

Turning on query rewrite is simple. You can do it for a session or for the entire system:

SQL> alter session set query_rewrite_enabled=true;
SQL> alter system set query_rewrite_enabled=true;

You can also turn query rewrite on for specific mviews, based on an ALTER or CREATE command on the mview itself, as seen in the preceding section in the creation of the mview CUST_WS_ORDER_MV.
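
For an existing mview, the statement is as simple as this:

alter materialized view ws_app.cust_ws_order_mv enable query rewrite;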

Summary Advisor

You might be asking yourself, how did the Woodscrew DBA know to create materialized views on the specific joins? Well, there is functionality in the Oracle database to provide advice for mview creation. In Oracle9i, this was called Summary Advisor, and was implemented through the DBMS_OLAP utility. In Oracle Database 10g, DBMS_OLAP is still around for backward compatibility, but it has been deprecated in favor of the SQL Access Advisor, which can be run from EM. (It can also be run from DBMS_ADVISOR, but it's wicked slick from EM.)

The DBA simply went to the Maintenance page of the EM console, and then clicked on Advisor Central. From here, there is a link to SQLAccess Advisor. This takes you on a four-page wizard path where you specify what usage information to use in making its recommendations.
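
If you prefer the command line, here is a minimal sketch using DBMS_ADVISOR.QUICK_TUNE (the task name is our own invention); it asks the SQL Access Advisor for recommendations, materialized views included, on a single statement:

BEGIN
  DBMS_ADVISOR.QUICK_TUNE(
      DBMS_ADVISOR.SQLACCESS_ADVISOR,
      'ws_quick_tune_task',
      'SELECT w.scr_type, wo.cust_id
         FROM ws_app.woodscrew w, ws_app.woodscrew_orders wo
        WHERE w.scr_id = wo.scr_id');
END;
/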

Online Reorganization

Thus far, we've talked about partitioning, index-organized tables, and materialized views as ways to increase performance and minimize possible downtime due to maintenance or recovery scenarios. But there remains an important question: how can these be implemented for tables and indices that are already in production? For that question, we give you online reorganization.

DBMS_REDEFINITION

Oracle provides the means of reorganizing tables through the procedures in the DBMS_REDEFINITION package. This package provides a series of procedures that allow you to build an interim table with the new structure you desire, and then move rows from the original table to the interim table. After the interim table is complete, the original table is locked briefly; then the two tables swap names, so that the interim table takes on the original table's name.

DBMS_REDEFINITION uses materialized view logs on the original table to keep track of changes that have occurred since the start of the redefinition process. In this way, the original table is available for query and DML even as it is being rebuilt with new definitions. After the original build is complete, the new rows from the mview log are propagated to the interim table. There is a brief lock at the end of the process so that the final switchover can occur.

Online table reorganization can be used in the following situations:

  • Change a table to an index-organized table (or vice versa)

  • Change a table to a partitioned table (or vice versa)

  • Add or drop a column

  • Move a table to a different tablespace, or change storage parameters

HA Workshop: Use DBMS_REDEFINITION to Change a Table to a Partitioned IOT


Workshop Notes

This workshop will change the woodscrew_orders table into a partitioned, index-organized table while the woodscrew_orders table stays online for OLTP operations. Note that if you built a materialized view on woodscrew_orders, as shown in the section 'Creating mviews' earlier, you will need to drop the mview before proceeding; a table with mviews on it is not a candidate for online reorganization.

Step 1.  Get a row count of the table. This will help you determine when the first phase of the reorg is complete.

select count(*) from woodscrew_orders;

Step 2.  Confirm that the table is a candidate for reorganization.

BEGIN
  DBMS_REDEFINITION.CAN_REDEF_TABLE('ws_app', 'woodscrew_orders',
      dbms_redefinition.cons_use_pk);
END;
/

Step 3.  Build the interim table. In our situation, we need a partitioned, index-organized structure for the woodscrew_orders table. Note that the interim table's primary key constraint needs a name different from the one on the original table.

create table woodscrew_orders_new (
  ord_id        number not null,
  ord_date      date,
  cust_id       number not null,
  scr_id        number not null,
  ord_cnt       number,
  warehouse_id  number not null,
  region        varchar2(20),
  constraint pk_woodscrew_orders_new primary key (ord_id, ord_date))
organization index including ord_date pctthreshold 20
overflow tablespace wd_scr_overflow
partition by range (ord_date)
(partition values less than (TO_DATE('1-OCT-2003','DD-MON-YYYY'))
    tablespace wdscrord_sep_2003,
 partition values less than (TO_DATE('1-NOV-2003','DD-MON-YYYY'))
    tablespace wdscrord_oct_2003,
 partition values less than (TO_DATE('1-DEC-2003','DD-MON-YYYY'))
    tablespace wdscrord_nov_2003,
 partition values less than (TO_DATE('1-JAN-2004','DD-MON-YYYY'))
    tablespace wdscrord_dec_2003);

Step 4.  Begin the reorganization process. Because we are not remapping any columns, we need only specify the schema, original table, and interim table.

BEGIN
  DBMS_REDEFINITION.START_REDEF_TABLE('ws_app', 'woodscrew_orders',
      'woodscrew_orders_new',
      options_flag => dbms_redefinition.cons_use_pk);
END;
/

Step 5.  Automatically rebuild all table dependencies (indices, triggers, and so forth).

DECLARE
  num_errors PLS_INTEGER;
BEGIN
  DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS('ws_app', 'woodscrew_orders',
      'woodscrew_orders_new',
      copy_indexes     => DBMS_REDEFINITION.CONS_ORIG_PARAMS,
      copy_triggers    => TRUE,
      copy_constraints => TRUE,
      copy_privileges  => FALSE,
      num_errors       => num_errors);
END;
/

Step 6.  Depending on the amount of data that may accumulate while the dependent objects are built, you may want to perform a refresh of data from the mview logs that have been created in the background.

BEGIN
  DBMS_REDEFINITION.SYNC_INTERIM_TABLE('ws_app', 'woodscrew_orders',
      'woodscrew_orders_new');
END;
/

Step 7.  Complete the reorganization by locking the original table, then finishing any last sync operations with the interim table, and then swapping table names between the two.

BEGIN
  DBMS_REDEFINITION.FINISH_REDEF_TABLE('ws_app', 'woodscrew_orders',
      'woodscrew_orders_new');
END;
/

Step 8.  After you have completed the final step, you can delete the interim table, in this case woodscrew_orders_new, which is now actually the original table renamed.
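
As a closing sketch, compare the row count against the number you captured in Step 1, and then drop what is now the old heap table:

select count(*) from woodscrew_orders;
drop table woodscrew_orders_new;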



