The ever-increasing demand for sharing, consolidating, and synchronizing critical enterprise data, along with the need to provide secondary access to up-to-date information without compromising production system performance, calls for a technology like Oracle Streams.
Oracle Streams is versatile, robust, high-performance information distribution software used to share data across multiple Oracle or heterogeneous database systems. It uses Oracle’s LogMiner-based technology to extract database changes from the redo log files.
This article describes how to effectively leverage and set
up Oracle archived-log downstream capture technology to replicate your
enterprise data.
Oracle Downstream Capture
Data can be captured either locally, where the capture process runs on the source database, or remotely, where the capture process runs on a database other than the source database, known as the downstream database.
The downstream capture feature was introduced in Oracle 10g Release 1 to shift data capture operations from the source database to the downstream database, freeing system resources on the production database server for other critical activities.
With archived-log downstream capture, archived redo logs from the source database are shipped to the downstream database using any copy mechanism, such as redo transport services or file transfer protocol (FTP). The capture process scans and captures DML/DDL changes from the archived redo logs based on a set of defined rules, formats the captured changes into events called logical change records (LCRs), and enqueues them into a staging queue. A propagation process propagates the LCRs to a destination queue on the destination database, where they are dequeued and applied by the apply process.
The disadvantage of archived-log based downstream capture is
that database changes on the source are not reflected immediately on the
destination database. Changes in the online redo logs cannot be propagated to
the destination database until they are archived. If the latency of data
capture is not acceptable, explore using real-time downstream capture.
Real-time downstream capture, however, does not allow you to capture changes
from multiple source databases on the same downstream database. Refer to http://download-east.oracle.com/docs/cd/B19306_01/server.102/b14229/strms_capture.htm#sthref170
for a description of real-time downstream capture.
Configure Archived-Log Downstream Capture
Typically, the downstream database and the destination
database are hosted on different systems. If you want to avoid the overhead of
maintaining a downstream database and the additional database licensing costs, an alternative solution is to set up the downstream capture process in the destination database.
We will illustrate the procedure for setting up an environment in which the downstream database is also the destination database, replicating a schema from the source database to the destination database. The steps in this section are equally applicable when the downstream and destination databases are located on different systems.
An overview of the Oracle Streams environment:

Host Name | Instance | Global Database Name | Database Role
trout3    | ladb     | ladb.world           | Source
trout4    | sfdb     | sfdb.world           | Downstream/Destination
Step 1: Configure archive log mode
Verify ladb is in archive log mode. Archived redo logs are
required by the capture process to capture changes.
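As a quick check, you can confirm the log mode from SQL*Plus; the commands below are a minimal sketch and assume SYSDBA access on trout3 (enabling archive log mode requires a short outage):

connect / as sysdba

-- Report the current log mode; it should show ARCHIVELOG
select log_mode from v$database;

-- Only if the database is still in NOARCHIVELOG mode:
shutdown immediate
startup mount
alter database archivelog;
alter database open;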
Step 2: Modify initialization parameters
Parameter | ladb | sfdb | Description
compatible | 10.2.0 | 10.2.0 | To use downstream capture, both databases must be at a compatibility level of 10.1.0 or higher.
global_names | n/a | true | Database link names must match the global names of the databases they reference.
job_queue_processes | n/a | 2 | The number of processes available for jobs such as propagation; set to at least 2.
log_archive_dest_2 | service=sfdb.world noregister | n/a | Enables shipping of archived redo logs from ladb to the sfdb.world service. The noregister attribute prevents the shipped log file names from being recorded in the downstream control file; a template attribute may be added to control their location and naming.
log_archive_dest_state_2 | enable | n/a | Enables the log_archive_dest_2 destination.
parallel_max_servers | n/a | 6 | The maximum number of parallel execution servers. You may need to increase this value because the capture and apply processes use parallel execution servers.
sga_target | n/a | 500M | Enables Automatic Shared Memory Management. If set, the Streams pool is sized automatically within the SGA.
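The parameters can be set with ALTER SYSTEM; the commands below are a sketch based on the values in the table above (compatible is a static parameter that requires an spfile change and a restart; adjust scopes to your environment):

-- On ladb (source database)
alter system set log_archive_dest_2='service=sfdb.world noregister' scope=both;
alter system set log_archive_dest_state_2=enable scope=both;

-- On sfdb (downstream/destination database)
alter system set global_names=true scope=both;
alter system set job_queue_processes=2 scope=both;
alter system set parallel_max_servers=6 scope=both;
alter system set sga_target=500M scope=both;
alter system set compatible='10.2.0' scope=spfile;  -- takes effect after restart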
Step 3: Set up tnsnames.ora
Add the TNS entries on trout3 and trout4.
LADB.WORLD =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = trout3)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = ladb)
)
)
SFDB.WORLD =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = trout4)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = sfdb)
)
)
Step 4: Create Streams administrator
Create a Streams administrator, strmadmin, with the required
administrator privileges on ladb and sfdb. You should create a tablespace on sfdb to exclusively house the Streams administrator’s tables and indexes. The strmadmin user on ladb, however, does not create any database objects; the account is used to perform administrative tasks such as object instantiation and obtaining the first System Change Number (SCN) for the capture process at the downstream database.
create tablespace streamsts datafile '/u02/oradata/sfdb/streamsts01.dbf' size 100M;
create user strmadmin identified by strmadmin default tablespace streamsts temporary tablespace temp;
grant dba, select_catalog_role to strmadmin;
exec dbms_streams_auth.grant_admin_privilege('strmadmin', true);
Step 5: Create application schema
Assuming that there is already an existing apps schema in ladb,
create the destination apps schema on sfdb.
create tablespace appsts datafile '/u02/oradata/sfdb/appsts01.dbf' size 100M;
create user apps identified by apps default tablespace appsts temporary tablespace temp;
grant connect, resource to apps;
Step 6: Enable table level supplemental logging
Set up supplemental logging as the apps user on ladb.
Supplemental logging logs additional columns information into the redo logs for
row identification at the destination database.
The following examples enable supplemental logging for the customers and orders tables:
alter table customers add supplemental log data (primary key, unique) columns;
alter table orders add supplemental log data (primary key, unique) columns;
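To confirm that the supplemental log groups were created, you can query dba_log_groups; this is an illustrative check, with the owner filter assuming the apps schema used in this article:

-- List supplemental log groups created for the apps tables
select table_name, log_group_name, log_group_type
from dba_log_groups
where owner = 'APPS';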
Step 7: Create database link
As the Streams administrator, create a private database link
from sfdb to ladb. The Streams administrator uses the database link to perform
administrative tasks on ladb.
create database link ladb.world connect to strmadmin identified by strmadmin using 'ladb.world';
Step 8: Create Streams queues
Captured database changes are staged in queues and propagated to other databases. Create the capture queue on the downstream database and the
apply queue on the destination database. In this example, since the downstream
database is also the destination database, create both the capture and apply
queues on sfdb.
connect strmadmin/strmadmin@sfdb.world
begin
dbms_streams_adm.set_up_queue(
queue_table => 'capture_sfqtab',
queue_name => 'capture_sfq',
queue_user => 'strmadmin');
end;
/
begin
dbms_streams_adm.set_up_queue(
queue_table => 'apply_sfqtab',
queue_name => 'apply_sfq',
queue_user => 'strmadmin');
end;
/
Step 9: Create capture process
Use the create_capture procedure to create the capture process, capture_sfstrm, on the downstream database, sfdb. Because use_database_link is set to true, the capture process performs the following activities on ladb: it extracts the data dictionary to the redo log, prepares objects for instantiation, and obtains the first SCN.
connect strmadmin/strmadmin@sfdb.world
begin
dbms_capture_adm.create_capture (
queue_name => 'capture_sfq',
capture_name => 'capture_sfstrm',
source_database => 'ladb.world',
use_database_link => true);
end;
/
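After creation, you can verify the capture process and the first SCN obtained from ladb by querying dba_capture; this is a simple sanity check using the names from this example:

-- Confirm the downstream capture process and its first SCN
select capture_name, status, first_scn, source_database
from dba_capture
where capture_name = 'CAPTURE_SFSTRM';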
Step 10: Configure capture process
Rules are added to positive or negative rule sets using the dbms_streams_adm package. Changes are captured using a positive rule set by specifying a value of true for the inclusion_rule parameter. In the add_schema_rules procedure below, we specify the previously created capture streams name and queue name, and create a positive rule set for the capture process to extract DML and DDL changes for the apps schema.
connect strmadmin/strmadmin@sfdb.world
begin
dbms_streams_adm.add_schema_rules (
schema_name => 'apps',
streams_type => 'capture',
streams_name => 'capture_sfstrm',
queue_name => 'capture_sfq',
include_dml => true,
include_ddl => true,
source_database => 'ladb.world',
inclusion_rule => true);
end;
/
Step 11: Configure propagation process
The configuration for the propagation process is similar to
the capture process. We add rules to the positive rule set and specify the
source queue name and destination queue name. Changes are propagated from the
source queue to the destination queue.
connect strmadmin/strmadmin@sfdb.world
begin
dbms_streams_adm.add_schema_propagation_rules (
schema_name => 'apps',
streams_name => 'prop_sfdb_to_sfdb',
source_queue_name => 'capture_sfq',
destination_queue_name => 'apply_sfq',
include_dml => true,
include_ddl => true,
source_database => 'ladb.world');
end;
/
Step 12: Create objects for the destination application schema
On the source system:
1. As the system user, obtain the current SCN. This SCN is
used later in the expdp command.
select dbms_flashback.get_system_change_number() from dual;
2. As the system user, create the Oracle directory for the
export dump file. Make sure the physical directory exists on the filesystem.
create directory expdir as '/u01/app/oracle/admin/ladb/export';
3. Export the application schema.
$> expdp system/oracle schemas=apps directory=expdir logfile=expapps_ds.log dumpfile=apps_ds.dmp flashback_scn=<the current SCN obtained above>
On the destination system:
1. As the system user, create the Oracle directory for the
export dump file. Make sure the physical directory exists on the filesystem.
create directory impdir as '/u01/app/oracle/admin/sfdb/import';
2. Copy the export dump file, apps_ds.dmp from the source
system to /u01/app/oracle/admin/sfdb/import.
3. Import the application schema:
$> impdp system/oracle directory=impdir dumpfile=apps_ds.dmp logfile=impapps_ds.log
Step 13: Configure apply process
Create the apply process and add rules to the positive rule
set. The apply process dequeues the LCR events and applies the changes to the
destination schema.
connect strmadmin/strmadmin@sfdb.world
begin
dbms_streams_adm.add_schema_rules (
schema_name => 'apps',
streams_type => 'apply',
streams_name => 'apply_sfstrm',
queue_name => 'apply_sfq',
include_dml => true,
include_ddl => true,
source_database => 'ladb.world');
end;
/
Configure the apply process to continue running even when it encounters errors:
begin
dbms_apply_adm.set_parameter (
apply_name => 'apply_sfstrm',
parameter => 'disable_on_error',
value => 'n');
end;
/
Step 14: Start apply process
Start the apply process at the destination database.
connect strmadmin/strmadmin@sfdb.world
begin
dbms_apply_adm.start_apply (
apply_name => 'apply_sfstrm');
end;
/
Step 15: Start capture process
Start the capture process at the downstream database.
connect strmadmin/strmadmin@sfdb.world
begin
dbms_capture_adm.start_capture (
capture_name => 'capture_sfstrm');
end;
/
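Because this is archived-log downstream capture, capture_sfstrm can only mine changes after the corresponding online redo log on ladb has been archived and shipped to sfdb. During testing you can force this from the source database; the sketch below assumes the system account used earlier:

connect system/oracle@ladb.world

-- Archive the current online redo log so it ships to the downstream database
alter system archive log current;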
Step 16: Test Oracle Streams
The Streams environment is now ready to capture, propagate
and apply changes from the source database to the destination database.
SQL> connect apps/apps@sfdb.world
Connected.
SQL> select * from customers;

   CUST_ID NAME    EMAIL
---------- ------- ---------------
         1 MIKE    mike@netk.com
         2 JOE     joe@apg.net
         3 MARY    mary@gq.org

SQL> select * from orders;

  ORDER_ID ORDER_DAT CUSTOMER_ID ORDER_TOTAL
---------- --------- ----------- -----------
         1 26-JAN-06           1       100.5
         2 27-JAN-06           2          90
         3 28-JAN-06           3        80.5
Perform DML and DDL changes on the source database:
SQL> connect apps/apps@ladb.world
Connected.
SQL> insert into customers values (4,'JAY','jay@defg.com');

1 row created.

SQL> commit;

Commit complete.

SQL> truncate table orders;

Table truncated.

SQL> alter table customers add (address varchar2(20));

Table altered.

SQL> alter system switch logfile;

System altered.
Verify that changes are applied on the destination database:
SQL> connect apps/apps@sfdb.world
Connected.
SQL> select * from customers;

   CUST_ID NAME    EMAIL           ADDRESS
---------- ------- --------------- --------------------
         1 MIKE    mike@netk.com
         2 JOE     joe@apg.net
         3 MARY    mary@gq.org
         4 JAY     jay@defg.com

SQL> select * from orders;

no rows selected
Troubleshooting/Monitoring
Listed here are a few views for obtaining information on the capture, propagation, and apply processes. They provide information such as the status of the processes, the number of messages enqueued and dequeued, and the error messages encountered during capture and apply; example queries follow the list.
- v$streams_apply_reader
- v$streams_apply_coordinator
- v$streams_capture
- dba_apply
- dba_apply_error
- dba_capture
- dba_propagation
- dba_queue_schedules
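The following queries are illustrative examples against these views, using the process names from this article:

-- Capture process state and message count
select capture_name, state, total_messages_captured from v$streams_capture;

-- Propagation status
select propagation_name, status from dba_propagation;

-- Apply process status and any apply errors
select apply_name, status from dba_apply;
select apply_name, source_database, error_message from dba_apply_error;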
Conclusion
With Oracle Streams replication, information can be shared easily among multiple databases. This article focused on the archived-log downstream capture solution; Oracle Streams also supports local capture and real-time downstream capture, which are likewise easy to implement. Hopefully this article has presented a straightforward and concise overview of Oracle downstream capture and its capabilities.