Environment Overview
The following infrastructure details apply to the Primary and Standby sites. Note that the Standby site requires only the database software installation; the database itself is instantiated on the Standby during the RMAN restore and recovery process.
# /etc/hosts Configuration
# Public IPs
192.168.10.10 prod-prm-1
192.168.10.11 prod-prm-2
192.168.10.12 dr-site-dr1
# Virtual IPs (VIP)
192.168.10.20 prmrac-vip1
192.168.10.21 prmrac-vip2
# SCAN IP
192.168.10.22 prmrac-scan
# Private Interconnect
10.1.1.10 prmrac-priv1
10.1.1.11 prmrac-priv2
Database Naming Conventions
To ensure successful synchronization, specific database identifiers must be configured correctly across both environments:
- INSTANCE_NAME: Identifies the instance and must match the SID embedded in the parameter file names (init<SID>.ora / spfile<SID>.ora).
- DB_NAME: The global database name must be identical on both the Primary and Standby databases.
- DB_UNIQUE_NAME: Must be unique to each physical site to distinguish servers during replication.
- SERVICE_NAME: Configured in tnsnames.ora. Used in connection strings (verified via tnsping) and for business application routing.
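To confirm these identifiers on a running instance, a quick check using standard dictionary views and SQL*Plus commands:
SQL> SELECT NAME, DB_UNIQUE_NAME FROM V$DATABASE;
SQL> SELECT INSTANCE_NAME FROM V$INSTANCE;
SQL> SHOW PARAMETER service_names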
In this scenario, the Primary database utilizes Oracle ASM, whereas the Standby database relies on the OS File System. Consequently, initialization parameters db_file_name_convert and log_file_name_convert must define the mapping paths between ASM disk groups and the file system directories.
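For illustration, using the disk group and directory layout referenced later in this guide (the paths are environment-specific assumptions), the pair might look like:
*.db_file_name_convert='+DATADG/PROD/','/u01/app/oracle/data/dr/datfiles/'
*.log_file_name_convert='+DATADG/PROD/','/u01/app/oracle/data/dr/onlinelogs/'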
Prerequisites on Primary Node
Before initiating Data Guard configuration, the Primary instance must meet specific requirements regarding logging and archiving.
Enable Force Logging
Force logging ensures that all changes generate redo records necessary for recovery.
sqlplus / as sysdba
SQL> SELECT FORCE_LOGGING FROM v$database;
-- Should return NO initially
SQL> ALTER DATABASE FORCE LOGGING;
SQL> SELECT FORCE_LOGGING FROM v$database;
-- Should now return YES
Archive Mode Configuration
The database must operate in Archive Log mode.
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER DATABASE OPEN;
SQL> ALTER SYSTEM SET log_archive_dest_1='LOCATION=/disk/arch/dg' SCOPE=BOTH;
SQL> ARCHIVE LOG LIST;
Verify available space in the ASM disk group using:
col total_size for a15;
col free_space for a25;
SELECT name, total_mb/1024 || ' GB' AS total_size, free_mb/1024 || ' GB' AS free_space
FROM v$asm_diskgroup;
Setup Procedures
1. Hostname Resolution
Configure the /etc/hosts file on both Primary and Standby systems. For RAC environments, include mappings for public IPs, VIPs, and SCAN addresses.
vi /etc/hosts
# Example Entry
192.168.10.22 prmrac-scan
192.168.10.12 dr-site-dr1
2. Export Initialization Files
Back up the current SPFILE. If using ASM, explicitly specify a filesystem location for the generated PFILE to prevent it from being created inside ASM inadvertently.
CREATE PFILE='/tmp/init_dr_standby.ora' FROM SPFILE;
scp /tmp/init_dr_standby.ora oracle@dr-site-dr1:/u01/app/oracle/product/11g/dbhome_1/dbs/
3. Manage Password Files
Copy the password file from the Primary to the Standby server and rename it appropriately. If the SYS password must change, regenerate the file with orapwd rather than editing it.
# On Primary
cp $ORACLE_HOME/dbs/orapwprmrac1 /tmp/orapwd_stb
# Alternatively, generate a fresh password file
orapwd file=$ORACLE_HOME/dbs/orapwprimary password=sys_admin entries=20
# Transfer to Standby
scp /tmp/orapwd_stb oracle@dr-site-dr1:$ORACLE_HOME/dbs/
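On the Standby, the file must be renamed to orapw<SID>. Assuming the standby SID is 'dr' (a hypothetical value matching the directory names used in this guide):
# On Standby: rename to match the local SID (assumed 'dr')
mv $ORACLE_HOME/dbs/orapwd_stb $ORACLE_HOME/dbs/orapwdr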
4. Standby Control File
Create a standby-specific control file on the Primary instance. The Primary must be mounted or open when this file is created; note that SCN mismatches between the control file and the restored datafiles may still require additional recovery later.
ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/tmp/dr_ckpt.ctl';
scp /tmp/dr_ckpt.ctl oracle@dr-site-dr1:/u01/app/oracle/data/dr/ctl/control01.ctl
# On the Standby, create the second copy
cp /u01/app/oracle/data/dr/ctl/control01.ctl /u01/app/oracle/data/dr/ctl/control02.ctl
5. Create Directory Structures
Recreate the directory hierarchy on the Standby server to mirror the required locations, ensuring paths align with conversion parameters if different.
mkdir -p /u01/app/oracle/data/dr/datfiles/
mkdir -p /u01/app/oracle/data/dr/tempfiles/
mkdir -p /u01/app/oracle/data/dr/onlinelogs/
mkdir -p /u01/app/oracle/data/dr/archive/
mkdir -p /u01/app/oracle/admin/dr/adump
mkdir -p /u01/app/oracle/admin/dr/dpdump
6. Network Configuration (TNS & Listener)
Configure tnsnames.ora entries on both ends to facilitate communication.
# tnsnames.ora snippet
PRMRAC =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = prmrac-scan)(PORT = 1521))
(CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = PRMRAC_SERV))
)
DR_SITE =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = dr-site-dr1)(PORT = 1521))
(CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = DR_SVC))
)
Test connectivity using tnsping prmrac and tnsping dr_site.
Network Troubleshooting Tips:
- Ensure firewall rules allow ports 1521 and associated private interconnect ports.
- If managed by Grid Infrastructure, configure tcp.invited_nodes in $GRID_HOME/network/admin/sqlnet.ora (see the snippet after this list).
- For RAC connections, prefer the SCAN IP or VIPs for load balancing and failover capabilities.
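A minimal sqlnet.ora sketch for the invited-nodes restriction, using the host names from the Environment Overview (adjust to your environment):
# $GRID_HOME/network/admin/sqlnet.ora
TCP.VALIDNODE_CHECKING = YES
TCP.INVITED_NODES = (prod-prm-1, prod-prm-2, dr-site-dr1)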
7. Listener Configuration
Modify listener.ora to expose the appropriate database services.
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = PROD_GLOBAL)
      (ORACLE_HOME = /opt/oracle/11g)
      (SID_NAME = PRMRAC1)
    )
  )
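After editing, reload the listener and confirm the service registration with the standard lsnrctl commands:
lsnrctl reload
lsnrctl status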
8. Initialize Parameter Files
Adjust the PFILEs on both Primary and Standby. Key differences involve storage destinations (+DATADG vs. filesystem paths) and unique identifiers.
Primary Modifications:
*.cluster_database=true
*.log_archive_dest_1='location=+ARCHIVELOG'
*.remote_login_passwordfile='EXCLUSIVE'
*.fal_client='PRMRAC'
*.fal_server='DR_SITE'
*.log_archive_config='DG_CONFIG=(PRMRAC,DR_SITE)'
*.log_archive_dest_2='SERVICE=DR_SITE LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=DR_SITE'
*.log_archive_dest_state_2='ENABLE'
# Assumed unique name for the Primary, matching DG_CONFIG above
*.db_unique_name='PRMRAC'
Standby Modifications:
Ensure paths are converted. Disable automatic file creation if switching back to ASM might occur later, but for FS standby, set the path explicitly.
*.control_files='/u01/app/oracle/data/dr/ctl/control01.ctl','/u01/app/oracle/data/dr/ctl/control02.ctl'
*.db_block_size=8192
*.db_domain=''
*.db_name='PROD_DB'
*.db_unique_name='DR_SITE'
*.db_recovery_file_dest='/u01/app/oracle/data/dr/recover'
*.db_recovery_file_dest_size=4G
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=PROD_XDB)'
*.fal_client='DR_SITE'
*.fal_server='PRMRAC'
*.log_archive_config='DG_CONFIG=(PRMRAC,DR_SITE)'
*.log_archive_dest_state_2='ENABLE'
*.standby_file_management='AUTO'
*.db_file_name_convert='+DATADG/PROD/','/u01/app/oracle/data/dr/datfiles/'
*.log_file_name_convert='+DATADG/PROD/','/u01/app/oracle/data/dr/onlinelogs/'
Multiple Standby Considerations
If deploying multiple standby databases, update the configuration parameters on the Primary to include all targets. Each standby requires its own FAL_CLIENT setting pointing to itself.
*.log_archive_config='DG_CONFIG=(PRMRAC,STBY1,STBY2)'
*.log_archive_dest_2='SERVICE=STBY1 LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STBY1'
*.log_archive_dest_3='SERVICE=STBY2 LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STBY2'
Standby Redo Logs (SRL)
Create SRL groups on both the Primary and the Standby (the Primary's SRLs are used after a role transition). The recommended formula for the number of groups is: (Number of Online Redo Log Groups + 1) * Number of Threads. For example, with three online groups per thread and two threads, create (3 + 1) * 2 = 8 SRL groups.
-- On Primary
ALTER DATABASE ADD STANDBY LOGFILE THREAD 1
GROUP 7 ('+DATADG/prod/srl07.log') SIZE 50M,
GROUP 8 ('+DATADG/prod/srl08.log') SIZE 50M;
-- Repeat for subsequent threads/groups
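On the Standby, the same groups are created on filesystem paths; a sketch using the onlinelogs directory created in step 5:
-- On Standby (cancel managed recovery first if it is running;
-- STANDBY_FILE_MANAGEMENT may need to be MANUAL while adding SRLs)
ALTER DATABASE ADD STANDBY LOGFILE THREAD 1
GROUP 7 ('/u01/app/oracle/data/dr/onlinelogs/srl07.log') SIZE 50M,
GROUP 8 ('/u01/app/oracle/data/dr/onlinelogs/srl08.log') SIZE 50M;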
Data Synchronization via RMAN
Step 1: Perform Full Backup
Execute a full backup including control files and archived logs.
rman target /
RUN {
ALLOCATE CHANNEL c1 TYPE DISK;
BACKUP AS COMPRESSED BACKUPSET DATABASE FORMAT '/backup/full_db_%U';
SQL 'ALTER SYSTEM ARCHIVE LOG CURRENT';
BACKUP ARCHIVELOG ALL FORMAT '/backup/arch_%U';
BACKUP CURRENT CONTROLFILE FOR STANDBY FORMAT '/backup/ctl_stdby_%U';
}
scp /backup/* dr-site-dr1:/restore/
Step 2: Restore on Standby
Start the standby instance in NOMOUNT mode using the PFILE exported in step 2, create an SPFILE from it, then mount with the standby control file copies placed in step 4.
STARTUP NOMOUNT PFILE='/u01/app/oracle/product/11g/dbhome_1/dbs/init_dr_standby.ora';
CREATE SPFILE FROM PFILE='/u01/app/oracle/product/11g/dbhome_1/dbs/init_dr_standby.ora';
SHUTDOWN IMMEDIATE;
-- The standby control file copies were already placed in /u01/app/oracle/data/dr/ctl/ (step 4)
STARTUP MOUNT;
Step 3: Catalog and Restore Data
Connect via RMAN to register backups and restore the datafiles.
rman target /
CATALOG START WITH '/restore/';
RESTORE DATABASE;
EXIT;
Step 4: Recover Archived Logs
rman target /
RUN {
SET ARCHIVELOG DESTINATION TO '/u01/app/oracle/data/dr/archive';
RECOVER DATABASE;
}
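After the RMAN recovery completes, start the Managed Recovery Process (MRP) so the standby applies incoming redo continuously; these are the standard Data Guard commands:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
-- To stop apply later:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;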
Handling Archive Gaps
Monitor log application status frequently. Gaps often occur due to network interruptions or slow Apply processes.
SELECT THREAD#, SEQUENCE#, APPLIED FROM V$ARCHIVED_LOG WHERE THREAD# > 0 ORDER BY SEQUENCE#;
If gaps exist, verify the backup sets containing the missing logs.
LIST BACKUP OF ARCHIVELOG FROM LOGSEQ=<start_seq> UNTIL LOGSEQ=<end_seq> THREAD=<thread_id>;
To resolve gaps manually:
- Catalog the missing backup piece: CATALOG BACKUPPIECE '/backup/archive_piece_abc';
- Restore specific archived logs: RESTORE ARCHIVELOG FROM LOGSEQ=<n>;
- Re-catalog the restored log file into the control file if necessary.
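The standby also reports detected gaps directly through the V$ARCHIVE_GAP view:
SELECT THREAD#, LOW_SEQUENCE#, HIGH_SEQUENCE# FROM V$ARCHIVE_GAP;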
Temporary Tablespace Issues
If the Standby fails because it cannot locate a tempfile defined in ASM but mapped to the filesystem (via conversion parameter), you may need to drop and recreate the tempfile.
ALTER TABLESPACE TEMP DROP TEMPFILE '/asm_path/temp01.dbf';
ALTER TABLESPACE TEMP ADD TEMPFILE '/u01/app/oracle/data/dr/tempfiles/temp01.dbf' SIZE 2G;
Verification and Monitoring
Once configured, verify the operational status of the Data Guard environment.
-- Check Database Role and Protection Mode
SELECT NAME, OPEN_MODE, DATABASE_ROLE, PROTECTION_MODE FROM V$DATABASE;
-- Monitor Managed Processes
SELECT PROCESS, STATUS, THREAD#, SEQUENCE# FROM V$MANAGED_STANDBY;
-- 'RFS' manages file receiving, 'MRP' applies changes.
-- Confirm Log Application Status
SELECT THREAD#, SEQUENCE#, APPLIED FROM V$ARCHIVED_LOG WHERE APPLIED='YES' ORDER BY SEQUENCE#;
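For recent transport and apply messages, the V$DATAGUARD_STATUS view is also useful:
-- Recent Data Guard messages (errors, warnings, informational)
SELECT SEVERITY, MESSAGE, TIMESTAMP FROM V$DATAGUARD_STATUS ORDER BY TIMESTAMP;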
Troubleshooting Common Errors
Use standard diagnostic tools for Oracle errors.
oerr ora 01187  # Inspect error details
cd $ORACLE_BASE/diag/rdbms/dr/dr_site/trace
ls -ltr *pr0*
tail -n 200 trace_file.trc
Specific internal errors such as ORA-00600 [kcbr_apply_change_11] may require restarting the standby instance in MOUNT state and restarting the Managed Recovery Process (MRP).
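A typical restart sequence for that case, using the standard managed recovery commands on the standby:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;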