Setting Up Streams Replication




Streams replication requires a significant amount of planning and configuration. You need to determine what you will be replicating, and to where. With Oracle Database 10g, you must also decide between local and downstream capture. After the planning, it's time to perform the configuration itself at both the source database and the destination database.

Planning for Streams Replication

There are a series of decisions that must be made prior to the replication configuration. These steps determine the nature of the replication setup that you will implement.

Determining Your Replication Set

You first need to determine which objects you will be replicating from your source to your destination database. Obviously, if you are creating a full replica database, every DML statement in the redo stream of the source will be converted to an LCR for application at the destination (except for DML against system and sysaux objects, of course).

However, if you want to take advantage of Streams' flexibility, it might make more sense to carefully choose those items that are of absolute importance in case of a disaster, and exclude those database objects that can be sacrificed. In situations where you have an extremely large production database, you may be forced to exclude development schemas from the replication set, simply to save room and bandwidth at the destination database.

Because of the way in which Streams records LCRs in the queue, and then interprets them for application, you will get better performance if each table in your replication set has a primary key. The primary key is the default means by which the apply process can resolve the LCR into a change at the destination table. In the absence of a primary key, Streams will use a unique key that has a NOT NULL value in one of its columns. Short of that, it will need to record the value of each field in the row change. As you might imagine, this becomes more and more expensive. So, think about your data structure, and whether there are good keys for Streams replication.
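
A quick dictionary query can flag the tables that lack a primary key before you commit to a replication set. This is a minimal sketch, assuming the ws_app schema used in the examples later in this chapter:

SELECT t.table_name
  FROM dba_tables t
 WHERE t.owner = 'WS_APP'
   AND NOT EXISTS (SELECT 1
                     FROM dba_constraints c
                    WHERE c.owner = t.owner
                      AND c.table_name = t.table_name
                      AND c.constraint_type = 'P');

Any table this query returns is a candidate for substitute key columns and supplemental logging, both of which are covered later in this chapter.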

Determining Your Replication Sites

Once you have determined what you will replicate, you have to determine where you will be replicating to. The site of the replica database will reflect how you balance the cost of pushing LCRs across the network against the need to put physical distance between the primary and the disaster recovery database. In addition to location, you have to determine what type of system will house the replica; if you have a RAC cluster that you are protecting, obviously another RAC cluster would be ideal. But you do not have to match the replica to the primary database with exact physical structure or components. It might make sense from a cost perspective to house the replica in a smaller system, where you can limp along at a reasonable level until you can get the primary systems back up and operational.

Determining the replica database system is a sort of 'chicken and egg' situation, when combined with the decision to be made about what to replicate. Obviously, if you have a like system for the replica and source, a full database replica would be more feasible than, say, if your source is a multinode RAC with a monstrous SAN, and the replica is a single-node system with mostly local storage.

Local or Downstream Capture?

You need to determine whether you will capture LCRs from the source database redo logs locally at the source, or downstream, at the destination database. Downstream capture provides a better disaster recovery solution, as it requires that you push the archivelogs from the source to the destination prior to performing a capture. This means you also have an extra set of the archivelogs just in case you need them for the source. When you use downstream capture, the capture process only looks at archivelogs. When you use a local capture process, the capture process has access to the online redo logs, and will use them and the archivelogs whenever necessary.

Downstream capture also means that you limit almost the entire configuration and administration task set to the replica database. At the source, you merely configure the push of the archivelogs to the destination. That's it. The rest of the work is done at the destination site (capture and apply). This also means that a propagation job will not be required, as you can establish the same queue table for the capture and apply processes.

Determining the Role of the Destination Database

You need to establish how the destination (replica) database will be used. It is feasible to imagine that the replica database will not be configured for any users, but will merely sit idle until a disaster occurs. Of course, if you've been provided the opportunity to have the resources sit idle, then perhaps a logical (or better yet, a physical) standby database configuration would be more appropriate for you. Streams gets its strength from the fact that you can use the replica database even as it performs its disaster recovery role.

If you do have the replica database open for business, you need to know how, exactly, it will be open to users. Will you allow DML against the objects replicated from production? Or will the replica only be used for reporting purposes (queries, but not DML)? This is a critical distinction. If you want to set up the replica to serve as a load balancer for the source, you must rethink the entire architecture of the Streams environment. Now, you have two sources and two destinations. You will need to configure Streams to go back and forth between both locations. You will also need to configure some form of conflict resolution, in case users at both databases simultaneously update the same row. (We discuss conflict resolution later in this chapter in the 'Conflict Resolution' section.)

If the replica objects from the source will only be used for reporting, you do not have to make these kinds of considerations, and your life will be much easier. However, keep in mind that a logical standby database is a much easier way to configure a reporting copy of the production database. Both logical standby and Streams replicas can be open and used for reporting even as changes are applied. The difference, of course, is that you can better control the set of replicated objects with Streams. But a logical standby will always be a simpler solution.

Configuring Streams Replication

Once you have planned how your replication environment will work, it is time to get to the business of configuration. In this section, we will discuss the ins and outs of local capture and remote propagation of LCRs, which is the most common form that Streams replication takes. Later in this chapter, we concentrate more on downstream capture for Streams replication.

We also assume that you want to configure Streams such that you can make changes to replicated objects at both databases-in other words, both databases will be a source and a destination database for the same set of tables. Quite frankly, this is the most compelling reason to use Streams as an availability solution. But doing so significantly complicates the configuration. Such is life.

init.ora Parameters

The first order of business is to configure the initialization parameters for both the source and destination database. There are seven primary values to be concerned with, as they directly affect Streams functionality (a quick verification query follows the list):

  • COMPATIBLE  Must be set to 10.1.0 to take advantage of the newest Streams functionality.

  • GLOBAL_NAMES  Must be set to True. This is required to identify all databases in the Streams configuration uniquely.

  • JOB_QUEUE_PROCESSES  You will need to set this to at least 2. Better to have 4 or 6.

  • OPEN_LINKS  The default is 4, which is fine-just don't set this any lower.

  • SHARED_POOL_SIZE  Streams uses the shared pool for staging the captured events as well as for communication if you capture in parallel. If no STREAMS_POOL_SIZE is set, the shared pool is used. Streams needs at least 10MB of memory, but can only use 10 percent of the shared pool. So, if you choose not to use a Streams pool, you will need to set the shared pool to at least 100MB.

  • STREAMS_POOL_SIZE  You can specify a pool that is used exclusively by Streams for captured events, parallel capture, and apply communication. By setting this parameter, you will keep Streams from muddying the already murky shared pool, and stop it from beating up on other shared pool occupants. Of course, by setting this, you also dedicate resources to Streams that cannot be used by other processes if Streams goes idle for any reason. You should set this parameter based on the number of capture processes and apply processes-that is, the level of parallelism: 10MB for each capture process, 1MB for each apply process. We always suggest generosity over scrooging-start at 15MB and go up from there.

  • UNDO_RETENTION  The capture process can be a source of significant ORA-1555 (snapshot too old) errors, so make sure you set the undo retention to a high enough value. Oracle suggests starting at 3600 (1 hour). You will have to monitor your undo stats to make sure you have enough space in the undo tablespace to keep the hour of undo around (see Chapter 9 for more on undo retention).
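
After the instance has been bounced (several of these parameters are not dynamic), a quick query against V$PARAMETER verifies the settings. This sketch simply lists the seven parameters discussed above:

SELECT name, value
  FROM v$parameter
 WHERE name IN ('compatible', 'global_names', 'job_queue_processes',
                'open_links', 'shared_pool_size', 'streams_pool_size',
                'undo_retention');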

Setting Up Replication Administrator Users

After you make the necessary modifications to the init file, you need to create users that will be responsible for replication administration. Can you use an existing user to own the Streams replication admin space? Absolutely. Do we recommend it? No. Make a new, clearly defined user at both the source and the destination. This user will be granted specific privileges that will control the capture, propagation, and apply movement. This user won't own the objects being replicated, but will just perform the queuing of the LCRs to the appropriate places.

start sidebar
Streams Pool vs. Shared Pool

As part of the init parameter changes, you need to determine how Streams will use the SGA for staging LCRs. By default, the shared pool is used. However, you can isolate Streams memory utilization to a new memory pool in Oracle Database 10g, called the Streams pool. The decision to use a Streams pool must be based on the level of activity you expect to see in your Streams environment. When Streams uses the shared pool, it is sharing the memory resources with many other Oracle processes. This can be beneficial, as the shared pool can move resources around based on demand. However, it can also mean that Streams is causing you to lose performance on objects that would have otherwise remained pegged-SQL execution plans, PL/SQL blocks, and so forth. Using a Streams pool dedicates resources to Streams, and this means that Streams won't walk all over anyone; it also means that when Streams runs idle, the memory resources in the Streams pool will also be idle.

In general, we suggest a Streams pool. As long as you spend the time it takes to monitor Streams memory usage, you can effectively tune the pool to match usage, and keep Streams out of everyone else's way.
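
If you do take the Streams pool route, V$SGASTAT gives a simple first look at how that memory is being used; rows with the pool name 'streams pool' appear once STREAMS_POOL_SIZE is set and the instance restarted. A minimal starting point for the monitoring mentioned above:

SELECT pool, name, bytes
  FROM v$sgastat
 WHERE pool = 'streams pool';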

end sidebar

Putting the Databases in Archivelog Mode

Of course (of course!) your production database is running in archivelog mode, but the replica database may not be. Archivelog mode at the replica is only required if you will be allowing users to update replicated tables there, such that you need to capture those changes and move them back to production.

Configuring the Network

You will need to ensure that there is connectivity between the production and replica databases. You will also need to create all necessary TNS aliases in order to facilitate Oracle Net connectivity between the two databases. Then, you build your database links to connect the Streams administrator at each site to the Streams administrator at the other site.

Enabling Supplemental Logging

You need supplemental logging if you do not have a primary key or a unique NOT NULL constraint on the replicated table. Supplemental logging adds the values of every column in the row to the LCR, so that the record can be appropriately applied at the destination.

ALTER TABLE ws_app.woodscrew_inventory
  ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;

You can also add supplemental logging for the entire database (note that you may have already done so when you configured your database for LogMiner in Chapter 2):

ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

Creating Streams Queues

The next step is to create the queue table that will serve as the queue for captured and propagated LCRs. Typically, you can create a single queue that is used both for locally captured LCRs and LCRs that are propagated from other source databases. You need to create a queue at each database; it is highly recommended that you give the queue tables at each database a different name (such as ws_app01_queue and ws_app02_queue).

BEGIN
  DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'stream_admin.ws_app01_queue_table',
    queue_name  => 'stream_admin.ws_app01_queue');
END;
/

Creating a Capture Process

At both databases, you need to create a capture process and associate it with a rule set. The DBMS_STREAMS_ADM.ADD_<object>_RULES procedures will do all of this with a single block. You can add rules at the TABLE, SUBSET, SCHEMA, or GLOBAL level with the associated ADD_TABLE_RULES, ADD_SCHEMA_RULES, and so forth. The only one that is not completely clear here is ADD_SUBSET_RULES-this is for creating a capture process for a subset of rows within a table. In our examples, we are setting up Streams for the ws_app schema in our database.

BEGIN
  DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name        => 'ws_app',
    streams_type       => 'capture',
    streams_name       => 'ws_app01_capture',
    queue_name         => 'ws_app01_queue',
    include_dml        => true,
    include_ddl        => true,
    include_tagged_lcr => false,
    source_database    => NULL,
    inclusion_rule     => true);
END;
/

This example creates a capture process named ws_app01_capture and associates it with the ws_app01_queue we created previously. It creates a positive rule set with two rules: capture all DML for the ws_app schema, and capture all DDL for the ws_app schema. You will need to create a capture process at both the production and replica databases. When you create a capture process in this fashion, Streams automatically sets an instantiation SCN for all objects in the schema you have specified.

The instantiation SCN is the starting point for Streams replication to begin. It specifies where in the redo stream to begin looking for change events for the replication set and then convert them to LCRs.
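
You can confirm this bookkeeping through the data dictionary; assuming schema-level rules like ours, the relevant views are DBA_CAPTURE_PREPARED_SCHEMAS at the source and DBA_APPLY_INSTANTIATED_OBJECTS at the destination:

---At the source, after creating the capture process:
SELECT * FROM dba_capture_prepared_schemas;

---At the destination, after instantiation:
SELECT source_object_owner, source_object_name, instantiation_scn
  FROM dba_apply_instantiated_objects;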

Instantiating the Replica Database

At some point, you will need to move the data from the primary database to the replica. It is not expected that you will have Streams replication set up before any data exists in any of the databases. Rather, it is assumed that one of the databases will hold a significant amount of data that will have to be moved to the replica at the beginning of the replication process. The act of moving the data to the replica, and informing Streams where in the redo history to start, is referred to as instantiation.

First you need to move the data from the source to the destination. Transportable tablespaces, export, RMAN duplication-do whatever you have to do to get the objects moved from the source to the destination. Our examples will use original export/import (as opposed to the new Data Pump exp/imp); the benefit of using export/import is that it will capture the instantiation SCN set when you create your capture process, and move it over with the copy of the data. This is the starting point in the local queue at the destination database for the apply process to begin taking LCRs for the replication set and applying them to the destination database objects.

Creating a Propagation

Once we have our capture processes configured, and we have instantiated our schema, we next establish the propagations that will be used to push LCRs from the ws_app queue at each database to the queue at the other database. We must specify the source queue and the destination queue, with the destination queue suffixed with a database link name, shown here:

BEGIN
  DBMS_STREAMS_ADM.ADD_SCHEMA_PROPAGATION_RULES(
    schema_name            => 'ws_app',
    streams_name           => 'ws_app01_propagation',
    source_queue_name      => 'stream_admin.ws_app01_queue',
    destination_queue_name => 'stream_admin.ws_app02_queue@STR10',
    include_dml            => true,
    include_ddl            => true,
    include_tagged_lcr     => false,
    source_database        => 'ORCL',
    inclusion_rule         => true);
END;
/

This code will create a propagation at our source that moves LCRs from the local ws_app01_queue to the remote ws_app02_queue at the destination. It creates a positive rule set with two rules: push all LCRs that contain DML for the ws_app schema, and push all LCRs that contain DDL for the ws_app schema. Remember that with a multisource replication environment, where we will propagate in both directions, we need to set up a propagation job at both databases, with reverse values for source and destination queues.

Creating an Apply Process

The procedure to create an apply process comes in the same flavors as the capture process; in fact, you'll notice that you use the same procedure to create an apply process that you do to create a capture process. You simply change the STREAMS_TYPE from capture to apply, like this:

BEGIN
  DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name        => 'ws_app',
    streams_type       => 'apply',
    streams_name       => 'ws_app02_apply',
    queue_name         => 'ws_app02_queue',
    include_dml        => true,
    include_ddl        => true,
    include_tagged_lcr => false,
    source_database    => 'ORCL',
    inclusion_rule     => true);
END;
/

This should look very familiar by now. You have created an apply process named ws_app02_apply, which looks to the ws_app02_queue for LCRs to be applied. It creates a positive rule set with two rules: apply all DML for the ws_app schema, and apply all DDL for the ws_app schema.

Creating Substitute Key Columns, If Necessary

If you are replicating a table that does not have a primary key, you will need to inform Streams which columns will act as a primary key for the sake of replication. For instance, if the DBA down at Horatio's Woodscrew Company wants to replicate the woodscrew_inventory table, he would need to set a substitute key in Streams-woodscrew_inventory has no primary key.

BEGIN
  DBMS_APPLY_ADM.SET_KEY_COLUMNS(
    object_name => 'ws_app.woodscrew_inventory',
    column_list => 'scr_id,manufactr_id,warehouse_id,region');
END;
/

For any columns that are referenced as the substitution key for replication, you will need to enable supplemental logging at the source database as well.
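
One way to satisfy that requirement is an unconditional log group covering just the substitute key columns. The log group name below is our own invention; the workshop later takes the simpler route of supplementally logging ALL columns instead:

ALTER TABLE ws_app.woodscrew_inventory
  ADD SUPPLEMENTAL LOG GROUP log_group_ws_inv
  ('scr_id', 'manufactr_id', 'warehouse_id', 'region') ALWAYS;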

Configuring for Conflict Resolution

If you have a multisource replication environment, where rows in the same objects can be updated at both sites, you will need to consider the need for conflict resolution. If necessary, you will need to configure your conflict handlers at this time. Conflict resolution, and its configuration, is detailed in the next section, 'Conflict Resolution.'

Throwing the Switch

After you have done all of these configuration steps, you are ready to 'throw the switch' and enable the Streams environment to begin capturing, propagating, and applying the changes in your database.

First, enable the capture process at both databases (we show the code for only one; repeat it with the other capture name at the second database).

BEGIN
  DBMS_CAPTURE_ADM.START_CAPTURE(
    capture_name => 'ws_app01_capture');
END;
/

You do not need to enable the propagation you have created-propagation agents are enabled by default. So then it is time to enable the apply processes at both databases.

BEGIN
  DBMS_APPLY_ADM.START_APPLY(
    apply_name => 'ws_app02_apply');
END;
/

After you have enabled the capture and apply processes, the Streams environment has been configured and will now be capturing new changes to the ws_app schema objects, moving them to the other database and applying them.
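
To verify that each component is actually running, query the status columns in the Streams dictionary views at each site (DBA_PROPAGATION here simply confirms that the propagation exists and where it points):

SELECT capture_name, status FROM dba_capture;
SELECT propagation_name, destination_dblink FROM dba_propagation;
SELECT apply_name, status FROM dba_apply;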

Conflict Resolution

A Streams replication environment is not complete until you have determined what kind of conflict resolution you need, and created conflict handlers to, you know, handle them. A conflict occurs in a distributed database when two users at different databases attempt to modify the same record with different values. For instance, at each database a user updates the woodscrew_orders table to change the order count value for the same order by the same customer. Which value should the Streams environment accept as the correct value? The determination of which record to keep and which to reject is known as conflict resolution.

There are four distinct types of conflicts that must be accounted for, depending on your application:

  • Update conflicts

  • Uniqueness conflicts

  • Delete conflicts

  • Foreign key conflicts

Update Conflicts

Our previous example is an update conflict: two users at different databases are updating the same record at roughly the same time. For a moment, each database has a different value for the ORD_CNT for a particular customer order for screws. Then, as the apply process at each site dequeues the LCR for that row from the other database, Streams finds that the row has already been updated locally, and signals the conflict.

Update conflicts are the most unavoidable types of conflicts in a distributed model where the same tables are being modified at each location. Oracle has built-in conflict handlers for update conflicts that you can implement using the DBMS_APPLY_ADM package. These are set up as apply handlers that get called when a conflict is detected, as in the following example for the woodscrew_orders table:

DECLARE
  cols DBMS_UTILITY.NAME_ARRAY;
BEGIN
  cols(1) := 'scr_id';
  cols(2) := 'ord_cnt';
  cols(3) := 'warehouse_id';
  cols(4) := 'region';
  DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
    object_name       => 'ws_app.woodscrew_orders',
    method_name       => 'OVERWRITE',
    resolution_column => 'scr_id',
    column_list       => cols);
END;
/

You have to create a conditional supplemental log group for all columns that are checked as part of this conflict handler-in this example, the SCR_ID, ORD_CNT, WAREHOUSE_ID, and REGION columns.

ALTER TABLE ws_app.woodscrew_orders
  ADD SUPPLEMENTAL LOG GROUP log_group_ws_ord
  ('scr_id', 'ord_cnt', 'warehouse_id', 'region');

Uniqueness Conflicts

A uniqueness conflict is often referred to as an insert conflict, as an insert is the most common trigger: two rows are inserted into the table at different databases with the same primary key value. The primary key constraint does not trigger an error until the LCR arrives from the other database. A uniqueness conflict can occur, as well, if the primary key is updated or changed for an existing record.

Uniqueness conflicts should be avoided, instead of resolved. They can be avoided by sticking to two unbending application rules: every application-generated unique value is appended with the GLOBAL_NAME of the database where it originates, and the application never modifies a primary key value once it is created. By sticking to those rules, you will guarantee no uniqueness conflicts in your replication environment.
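
To illustrate the first rule, a before-insert trigger can stamp every generated key with the local GLOBAL_NAME. This is a minimal sketch: the widget table, its VARCHAR2 key column, and the sequence are hypothetical, not part of the ws_app schema.

CREATE SEQUENCE ws_app.widget_seq;

CREATE OR REPLACE TRIGGER ws_app.widget_pk_trg
BEFORE INSERT ON ws_app.widget
FOR EACH ROW
DECLARE
  l_gname  global_name.global_name%TYPE;
  l_seq    NUMBER;
BEGIN
  SELECT global_name INTO l_gname FROM global_name;
  SELECT ws_app.widget_seq.NEXTVAL INTO l_seq FROM dual;
  -- A key such as '1042@ORCL' can never collide with '1042@STR10'
  :NEW.widget_id := TO_CHAR(l_seq) || '@' || l_gname;
END;
/

Because GLOBAL_NAMES forces every database in the Streams configuration to be uniquely named, keys minted at different sites can never collide.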

Delete Conflicts

A delete conflict is triggered when a row is deleted that was also deleted or updated at the other database. As with uniqueness conflicts, it is best to avoid delete conflicts instead of coding to resolve them. At the application level, deleted rows from the application should instead be marked for delete at the application, then pooled together and run in batch format. You can also restrict deletions to the primary database, such that no deletes can occur at a secondary (replica) site.

Foreign Key Conflicts

Foreign key conflicts occur when an LCR is being applied at a database that violates a foreign key constraint. This is primarily a problem in Streams environments with more than two source databases, where it would be possible that two different source databases are sending LCRs to a third database. If the first source sends a record that references a foreign key value that was generated at the second source, but the second source hasn't sent its foreign key generation yet, the third database will trigger a foreign key conflict.

HA Workshop: Configuring Streams Replication

start example

Workshop Notes

This workshop will configure Streams replication for the ws_app schema that we have been using for examples throughout this book; that is, we will be replicating the woodscrew, woodscrew_inventory, and woodscrew_orders tables in the ws_app schema. We will be configuring schema-level replication, so if you are following along at home, we suggest you drop the ws_app schema you have and rebuild it from scratch with just the three tables and their indices. Then, reinsert the base rows as described in Chapter 1. Because of Streams restrictions, we will not be replicating partitioned tables or IOTs. For more on Streams restrictions, see the Oracle Database 10g Streams documentation.

The workshop will configure a bidirectional, multisource Streams environment where data can be entered at the primary and replica database. The primary will be known as ORCL; the replica is STR10. The ws_app schema already exists in ORCL, and we will have to instantiate the existing rows at STR10.

The first phase of this workshop requires us to prepare both databases for Streams replication. The second phase sets up propagation of row changes from our primary database to our new replica (from ORCL to STR10). In the third phase, we will set up Streams to replicate back from STR10 to ORCL (making this a bidirectional multisource Streams environment).

Step 1.  Put all databases in archivelog mode.

SQL> archive log list
Database log mode              No Archive Mode
Automatic archival             Disabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     303
Current log sequence           305
SQL> show parameter recover;

NAME                          TYPE          VALUE
------------------------------------------------------------------
db_recovery_file_dest         string        /u01/product/oracle/
                                            flash_recovery_area
db_recovery_file_dest_size    big integer   2G

SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database archivelog;
SQL> alter database open;

Step 2.  Change initialization parameters for Streams. The Streams pool size and NLS_DATE_FORMAT require a restart of the instance.

SQL> alter system set global_names=true scope=both;
SQL> alter system set undo_retention=3600 scope=both;
SQL> alter system set job_queue_processes=4 scope=both;
SQL> alter system set streams_pool_size=20m scope=spfile;
SQL> alter system set NLS_DATE_FORMAT=
     'YYYY-MM-DD HH24:MI:SS' scope=spfile;
SQL> shutdown immediate;
SQL> startup

Step 3.  Create Streams administrators at the primary and replica databases, and grant required roles and privileges. Create default tablespaces so that they are not using SYSTEM.

---at the primary:
SQL> create tablespace strepadm datafile
     '/u01/product/oracle/oradata/orcl/strepadm01.dbf' size 100m;
---at the replica:
SQL> create tablespace strepadm datafile
     '/u02/oracle/oradata/str10/strepadm01.dbf' size 100m;
---at both sites:
SQL> create user stream_admin
     identified by stream_admin
     default tablespace strepadm
     temporary tablespace temp;
SQL> grant connect, resource, dba, aq_administrator_role to stream_admin;
SQL> BEGIN
       DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE (
         grantee          => 'stream_admin',
         grant_privileges => true);
     END;
     /

Step 4.  Configure the tnsnames.ora at each site so that a connection can be made to the other database.

---In $ORACLE_HOME/network/admin for the ORCL instance:
STR10 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = lnx01)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = str10)
    )
  )

---In $ORACLE_HOME/network/admin for the STR10 instance:
ORCL =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = lnx01)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcl)
    )
  )

Step 5.  With the tnsnames.ora squared away, create a database link for the stream_admin user at both ORCL and STR10. With the init parameter global_names set to True, the db_link name must be the same as the global_name of the database you are connecting to. Use a SELECT from the view global_name at each site to determine the global name.

SQL> select * from global_name;
SQL> connect stream_admin/stream_admin@ORCL
SQL> create database link STR10
     connect to stream_admin identified by stream_admin
     using 'STR10';
SQL> select sysdate from dual@STR10;
SQL> connect stream_admin/stream_admin@STR10
SQL> create database link ORCL
     connect to stream_admin identified by stream_admin
     using 'ORCL';
SQL> select sysdate from dual@ORCL;

Step 6.  If you have not already done so, build the ws_app schema in ORCL. (See Chapter 2 for the ws_app schema build scripts.) We are providing the DDL for the three tables here to remind you of the structures, in case you are reading and not doing right now.

SQL> create table woodscrew (
  scr_id           number not null,
  manufactr_id     varchar2(20) not null,
  scr_type         varchar2(20),
  thread_cnt       number,
  length           number,
  head_config      varchar2(20),
  constraint pk_woodscrew primary key (scr_id, manufactr_id)
    using index tablespace ws_app_idx);
SQL> create index woodscrew_identity on woodscrew
     (scr_type, thread_cnt, length, head_config)
     tablespace ws_app_idx;
SQL> create table woodscrew_inventory (
  scr_id           number not null,
  manufactr_id     varchar2(20) not null,
  warehouse_id     number not null,
  region           varchar2(20),
  count            number,
  lot_price        number);
SQL> create table woodscrew_orders (
  ord_id           number not null,
  ord_date         date,
  cust_id          number not null,
  scr_id           number not null,
  ord_cnt          number,
  warehouse_id     number not null,
  region           varchar2(20),
  constraint pk_wdscr_orders primary key (ord_id, ord_date)
    using index tablespace ws_app_idx);

Step 7.  Add supplemental logging to the ws_app tables. This is required both for the conflict resolution for the tables and for the woodscrew_inventory table that does not have a primary key. We will later identify a substitution key that will operate as the primary key for replication.

SQL> alter table ws_app.woodscrew add supplemental log data
     (ALL) columns;
SQL> alter table ws_app.woodscrew_inventory add supplemental log data
     (ALL) columns;
SQL> alter table ws_app.woodscrew_orders add supplemental log data
     (ALL) columns;

Step 8.  Create Streams queues at the primary and replica database.

---at ORCL (primary):
SQL> connect stream_admin/stream_admin@ORCL
SQL> BEGIN
       DBMS_STREAMS_ADM.SET_UP_QUEUE(
         queue_table => 'stream_admin.ws_app01_queue_table',
         queue_name  => 'stream_admin.ws_app01_queue');
     END;
     /
---at STR10 (replica):
SQL> connect stream_admin/stream_admin@STR10
SQL> BEGIN
       DBMS_STREAMS_ADM.SET_UP_QUEUE(
         queue_table => 'stream_admin.ws_app02_queue_table',
         queue_name  => 'stream_admin.ws_app02_queue');
     END;
     /

Step 9.  Create the capture process at the primary database (ORCL).

SQL> BEGIN
       DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
         schema_name        => 'ws_app',
         streams_type       => 'capture',
         streams_name       => 'ws_app01_capture',
         queue_name         => 'ws_app01_queue',
         include_dml        => true,
         include_ddl        => true,
         include_tagged_lcr => false,
         source_database    => NULL,
         inclusion_rule     => true);
     END;
     /

Step 10.  Instantiate the ws_app schema at STR10. This step requires the movement of existing rows in the ws_app schema tables at ORCL to ws_app schema tables at STR10. We will be using traditional import and export to make the movement take place. The benefit of using export/import is that the export utility will take the instantiation SCN generated when you built the capture process above, and document it with each object in the dump file. Then, when you import, the instantiation SCN will be recorded with the new objects that are built. This saves us some steps, mercifully.

Note, as well, that when we instantiate STR10, we have to prebuild the ws_app_data and ws_app_idx tablespaces, then build the ws_app user.

---AT ORCL:
exp system/123db file=wsapp.dmp log=wsappexp.log object_consistent=y owner=ws_app

---AT STR10:
---Create ws_app tablespaces and user:
create tablespace ws_app_data datafile
  '/u02/oracle/oradata/str10/ws_app_data01.dbf' size 100m;
create tablespace ws_app_idx datafile
  '/u02/oracle/oradata/str10/ws_app_idx01.dbf' size 100m;
create user ws_app identified by ws_app
  default tablespace ws_app_data
  temporary tablespace temp;
grant connect, resource to ws_app;

imp system/123db file=wsapp.dmp log=wsappimp.log fromuser=ws_app
  touser=ws_app streams_instantiation=y

Import: Release 10.1.0.1.0 - Beta on Tue Jan 27 14:10:08 2004
Copyright (c) 1982, 2003, Oracle.  All rights reserved.
Connected to: Oracle10i Enterprise Edition Release 10.1.0.1.0 - Beta
With the Partitioning, OLAP and Data Mining options
Export file created by EXPORT:V10.01.00 via conventional path
import done in US7ASCII character set and AL16UTF16 NCHAR character set
import server uses WE8ISO8859P1 character set (possible charset conversion)
. importing WS_APP's objects into WS_APP
. . importing table                    "WOODSCREW"         12 rows imported
. . importing table          "WOODSCREW_INVENTORY"          4 rows imported
. . importing table             "WOODSCREW_ORDERS"         16 rows imported
Import terminated successfully without warnings.

Step 11.  Create a propagation job at the primary database (ORCL).

SQL> BEGIN
       DBMS_STREAMS_ADM.ADD_SCHEMA_PROPAGATION_RULES(
         schema_name            => 'ws_app',
         streams_name           => 'ws_app01_propagation',
         source_queue_name      => 'stream_admin.ws_app01_queue',
         destination_queue_name => 'stream_admin.ws_app02_queue@STR10',
         include_dml            => true,
         include_ddl            => true,
         include_tagged_lcr     => false,
         source_database        => 'ORCL',
         inclusion_rule         => true);
     END;
     /

Step 12.  Create an apply process at the replica database (STR10).

SQL> BEGIN
       DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
         schema_name        => 'ws_app',
         streams_type       => 'apply',
         streams_name       => 'ws_app02_apply',
         queue_name         => 'ws_app02_queue',
         include_dml        => true,
         include_ddl        => true,
         include_tagged_lcr => false,
         source_database    => 'ORCL',
         inclusion_rule     => true);
     END;
     /

Step 13.  Create substitution key columns for the table ws_app.woodscrew_inventory at STR10. This is required for any table that does not have a primary key. The column combination must provide a unique value for Streams.

SQL> BEGIN
       DBMS_APPLY_ADM.SET_KEY_COLUMNS(
         object_name => 'ws_app.woodscrew_inventory',
         column_list => 'scr_id,manufactr_id,warehouse_id,region');
     END;
     /

Step 14.  Configure conflict resolution at the replica site (STR10). The conflict handlers will be created for each table. Because this is a replica, we assume that in the event of a conflict, the primary database is always the correct value. Thus, we will set this up so that the incoming record will always overwrite the existing value.

DECLARE
  cols DBMS_UTILITY.NAME_ARRAY;
BEGIN
  cols(1) := 'scr_type';
  cols(2) := 'thread_cnt';
  cols(3) := 'length';
  cols(4) := 'head_config';
  DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
    object_name       => 'ws_app.woodscrew',
    method_name       => 'OVERWRITE',
    resolution_column => 'scr_type',
    column_list       => cols);
END;
/

DECLARE
  cols DBMS_UTILITY.NAME_ARRAY;
BEGIN
  cols(1) := 'scr_id';
  cols(2) := 'ord_cnt';
  cols(3) := 'warehouse_id';
  cols(4) := 'region';
  DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
    object_name       => 'ws_app.woodscrew_orders',
    method_name       => 'OVERWRITE',
    resolution_column => 'scr_id',
    column_list       => cols);
END;
/

DECLARE
  cols DBMS_UTILITY.NAME_ARRAY;
BEGIN
  cols(1) := 'count';
  cols(2) := 'lot_price';
  DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
    object_name       => 'ws_app.woodscrew_inventory',
    method_name       => 'OVERWRITE',
    resolution_column => 'count',
    column_list       => cols);
END;
/

Step 15.  Enable the capture process at the primary database (ORCL).

BEGIN
  DBMS_CAPTURE_ADM.START_CAPTURE(
    capture_name => 'ws_app01_capture');
END;
/

Step 16.  Enable the apply process at the replica database (STR10).

BEGIN
  DBMS_APPLY_ADM.START_APPLY(
    apply_name => 'ws_app02_apply');
END;
/

Step 17.  Test propagation of rows from primary (ORCL) to replica (STR10).

AT ORCL:
insert into woodscrew values (
  1006, 'Balaji Parts, Inc.', 'Machine', 20, 1.5, 'Slot');
commit;

AT STR10:
connect ws_app/ws_app
select * from woodscrew where head_config = 'Slot';

Step 18.  While it may seem logical, you do not need to add supplemental logging at the replica. This is because the supplemental logging attribute was brought over when we exported from ORCL and imported into STR10 with STREAMS_INSTANTIATION=Y. If you try to create supplemental logging, you will get an error:

ORA-32588: supplemental logging attribute all column exists

Step 19.  Create a capture process at the replica database (STR10).

BEGIN
  DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name        => 'ws_app',
    streams_type       => 'capture',
    streams_name       => 'ws_app02_capture',
    queue_name         => 'ws_app02_queue',
    include_dml        => true,
    include_ddl        => true,
    include_tagged_lcr => false,
    source_database    => NULL,
    inclusion_rule     => true);
END;
/

Step 20.  Create a propagation job from the replica (STR10) to the primary (ORCL).

BEGIN
  DBMS_STREAMS_ADM.ADD_SCHEMA_PROPAGATION_RULES(
    schema_name            => 'ws_app',
    streams_name           => 'ws_app02_propagation',
    source_queue_name      => 'stream_admin.ws_app02_queue',
    destination_queue_name => 'stream_admin.ws_app01_queue@ORCL',
    include_dml            => true,
    include_ddl            => true,
    include_tagged_lcr     => false,
    source_database        => 'STR10',
    inclusion_rule         => true);
END;
/

Step 21.  Create an apply process at the primary database (ORCL).

BEGIN
  DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name        => 'ws_app',
    streams_type       => 'apply',
    streams_name       => 'ws_app01_apply',
    queue_name         => 'ws_app01_queue',
    include_dml        => true,
    include_ddl        => true,
    include_tagged_lcr => false,
    source_database    => 'STR10',
    inclusion_rule     => true);
END;
/

Step 22.  Create substitution key columns for woodscrew_inventory at the primary database (ORCL).

BEGIN
  DBMS_APPLY_ADM.SET_KEY_COLUMNS(
    object_name => 'ws_app.woodscrew_inventory',
    column_list => 'scr_id,manufactr_id,warehouse_id,region');
END;
/

Step 23.  Create conflict resolution handlers at ORCL. Because this is the primary, we set a 'DISCARD' resolution type for rows that are generated at STR10 and conflict with rows generated at ORCL. This completes our conflict resolution method, which resembles a site priority system. All rows generated at ORCL overwrite rows generated at STR10; all rows generated at STR10 that conflict with rows at ORCL will be discarded.

DECLARE
  cols DBMS_UTILITY.NAME_ARRAY;
BEGIN
  cols(1) := 'scr_type';
  cols(2) := 'thread_cnt';
  cols(3) := 'length';
  cols(4) := 'head_config';
  DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
    object_name       => 'ws_app.woodscrew',
    method_name       => 'DISCARD',
    resolution_column => 'scr_type',
    column_list       => cols);
END;
/

DECLARE
  cols DBMS_UTILITY.NAME_ARRAY;
BEGIN
  cols(1) := 'scr_id';
  cols(2) := 'ord_cnt';
  cols(3) := 'warehouse_id';
  cols(4) := 'region';
  DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
    object_name       => 'ws_app.woodscrew_orders',
    method_name       => 'DISCARD',
    resolution_column => 'scr_id',
    column_list       => cols);
END;
/

DECLARE
  cols DBMS_UTILITY.NAME_ARRAY;
BEGIN
  cols(1) := 'count';
  cols(2) := 'lot_price';
  DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
    object_name       => 'ws_app.woodscrew_inventory',
    method_name       => 'DISCARD',
    resolution_column => 'count',
    column_list       => cols);
END;
/

Step 24.  Set the instantiation SCN at ORCL for the apply process. Because we are not moving an instantiation over from STR10 with an export dump file, we will have to manually set the instantiation SCN using DBMS_APPLY_ADM. Here, we use the current SCN. You can do this while connected to either the source or the destination; here, we are connected to the source for this phase-STR10. Thus, we push the instantiation SCN to ORCL using the stream_admin user's database link.

DECLARE
  iscn NUMBER;
BEGIN
  iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
  DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN@ORCL(
    source_schema_name   => 'ws_app',
    source_database_name => 'STR10',
    instantiation_scn    => iscn,
    recursive            => true);
END;
/

Step 25.  Enable capture at the replica database (STR10).

BEGIN
  DBMS_CAPTURE_ADM.START_CAPTURE(
    capture_name => 'ws_app02_capture');
END;
/

Step 26.  Enable the apply process at the primary database (ORCL).

BEGIN
  DBMS_APPLY_ADM.START_APPLY(
    apply_name => 'ws_app01_apply');
END;
/

Step 27.  Test propagation of rows from the replica database (STR10) to the primary database (ORCL).

AT STR10:
SQL> connect ws_app/ws_app
SQL> insert into woodscrew values (
  1007, 'Balaji Parts, Inc.', 'Machine', 30, 1.25, 'Slot');
SQL> commit;

AT ORCL:
SQL> connect ws_app/ws_app
SQL> select * from woodscrew where head_config = 'Slot';

Step 28.  Find a cold refreshment and congratulate yourself! You have configured a bidirectional multisource Streams replication environment.

end example

Downstream Capture of LCRs

The previous examples have all used a local capture process during the Streams environment configuration. As we mentioned previously, you can also configure Streams for downstream capture. New to Oracle Database 10g, downstream capture is a way of pushing the Streams environment almost entirely downstream to the destination database.

Downstream capture requires that you set up a log transport service from the source to the destination database, as you would do for a Data Guard configuration (see Chapter 7 for more on log transport services). When the archivelogs arrive at the destination database, a capture process at the destination database uses LogMiner to review the archivelogs and extract records into LCRs that are then placed in a queue at the destination.

This allows us to forgo the usage of a propagation job, as the LCRs are queued into a location that is directly accessible by the destination database's apply process. Downstream capture also means that we have moved the entire environment, other than log transport, away from the source database. This can be extremely advantageous in many environments where the source database may need to be cleared of the administration of the Streams processes and database objects. Downstream capture also provides an extra degree of protection against site failure, as we are automatically creating another set of archived redo logs.

The downside is that you are pushing entire archivelogs across the network, instead of just the subset of data that a Streams propagation would push. You also sacrifice a degree of currency, as downstream capture can only review archivelogs for records. When there is a local capture process at the source, the capture process can scan the online redo log for records, so changes to the replication set are picked up more quickly.

Downstream capture also has a few restrictions that are avoided by using local capture. These restrictions come from moving a physical file to the destination; this requires a degree of operating system and hardware compatibility between the source and destination sites. When you propagate LCRs using a propagation job, you are sending logical records that have been freed of OS restrictions; with downstream capture, you are copying physical files from one location to another. So you must run the same operating system at both sites, although not necessarily the same OS version. You also need the same hardware architecture-no 32-bit to 64-bit transfers allowed.

HA Workshop: Configuring Streams for Downstream Capture

start example

Workshop Notes

This workshop will outline the steps for downstream capture of changes to the ws_app schema. We will be glossing over aspects that are covered in other parts of this book (such as log transport configuration). For this workshop, we will concentrate on configuring the destination database, STR10, to capture changes that originate at the source database, ORCL.

Step 1.  Ensure that there is network connectivity between the source and destination databases via tnsnames.ora entries.

Step 2.  Configure log transport at the source (ORCL) to the destination (STR10). See Chapter 7 for log transport configuration.
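
As a minimal sketch (the service name comes from the tnsnames.ora entry in Step 1; transport options such as ARCH versus LGWR are choices covered in Chapter 7):

SQL> alter system set log_archive_dest_2='SERVICE=STR10' scope=both;
SQL> alter system set log_archive_dest_state_2=enable scope=both;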

Step 3.  At both the source and destination, set the parameter REMOTE_ARCHIVE_ENABLE to True.

alter system set remote_archive_enable=true scope=both;

Step 4.  Build Streams administrators at both databases. You can avoid a Streams administrator at the source, but you still have to grant certain privileges to an existing user. Instead, just build the Streams admin and call it even.

SQL> create user stream_admin
     identified by stream_admin
     default tablespace strepadm
     temporary tablespace temp;
SQL> grant connect, resource, dba, aq_administrator_role to stream_admin;
SQL> BEGIN
       DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE (
         grantee          => 'stream_admin',
         grant_privileges => true);
     END;
     /

Step 5.  Build a database link from the Streams admin at the destination to the Streams admin at the source.

SQL> create database link ORCL
     connect to stream_admin identified by stream_admin
     using 'ORCL';
SQL> select sysdate from dual@ORCL;

Step 6.  Create a queue at the destination (STR10), if one does not exist.

SQL> BEGIN
       DBMS_STREAMS_ADM.SET_UP_QUEUE(
         queue_table => 'stream_admin.ws_app02_queue_table',
         queue_name  => 'stream_admin.ws_app02_queue');
     END;
     /

Step 7.  Create the capture process at the destination (STR10).

BEGIN
  DBMS_CAPTURE_ADM.CREATE_CAPTURE(
    queue_name         => 'ws_app02_queue',
    capture_name       => 'ws_app_dh01_capture',
    rule_set_name      => NULL,
    start_scn          => NULL,
    source_database    => 'ORCL',
    use_database_link  => true,
    first_scn          => NULL,
    logfile_assignment => 'implicit');
END;
/

Step 8.  Add the positive rule set for the capture process at STR10.

BEGIN
  DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name        => 'ws_app',
    streams_type       => 'capture',
    streams_name       => 'ws_app_dh01_capture',
    queue_name         => 'ws_app02_queue',
    include_dml        => true,
    include_ddl        => false,
    include_tagged_lcr => false,
    source_database    => 'ORCL',
    inclusion_rule     => true);
END;
/

Step 9.  Instantiate the ws_app schema at the destination database (STR10). This will follow the instantiation rules and procedures listed in the previous HA Workshop, 'Configuring Streams Replication.'

Step 10.  Create an apply process at the destination database (STR10). This will reference the queue that you set up in Step 6.
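
Following the pattern from the previous workshop, a sketch of that apply creation (the apply name ws_app_dh01_apply is our own invention; include_ddl matches the capture rule set from Step 8):

BEGIN
  DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name        => 'ws_app',
    streams_type       => 'apply',
    streams_name       => 'ws_app_dh01_apply',
    queue_name         => 'ws_app02_queue',
    include_dml        => true,
    include_ddl        => false,
    include_tagged_lcr => false,
    source_database    => 'ORCL',
    inclusion_rule     => true);
END;
/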

Step 11.  Follow through with all further configuration required from the previous HA Workshop, such as enabling capture and apply processes.
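
For the start-up calls specifically, a sketch using the capture from Step 7 and the apply name assumed above:

BEGIN
  DBMS_CAPTURE_ADM.START_CAPTURE(
    capture_name => 'ws_app_dh01_capture');
END;
/

BEGIN
  DBMS_APPLY_ADM.START_APPLY(
    apply_name => 'ws_app_dh01_apply');
END;
/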

end example





