Friday 24 May 2013

ORA-00494 enqueue [CF] held for too long tips


Question:  I am getting an ORA-00494 error in my trace file, and the error causes my database to crash:
ORA-00494: enqueue [CF] held for too long

/u01/app/oracle/admin/mysid/bdump/ora_lgwr_143.trc

Killing enqueue blocker (pid=152) on resource

CF-00000000-00000000 by killing session 83.0
 
What is the cure for this ORA-00494 error?

Answer:  The oerr utility shows this for the ORA-00494 error:
ORA-00494: enqueue string held for too long (more than string seconds) by 'inst string, osid string'

Cause:  The specified process did not release the enqueue within the maximum allowed time.

Action:  Reissue any commands that failed and contact Oracle Support Services with the incident information.
The ORA-00494 error occurs during periods of super-high stress, when activity reaches the point where the server becomes unresponsive due to overloaded disk I/O, CPU or RAM.
  • See MOSC Note ID 779552.1 "ORA-00494 During High Load After 10.2.0.4 Upgrade", which says that Oracle kills blocking processes during periods of high DML load and issues the ORA-00494 error.  In your case, it is the log writer process.
  • Also see "Database Crashes With ORA-00494" [MOSC Note ID 753290.1]

Solutions for ORA-00494:  

  • Resize online redo logs
  • Spread-out the DML over longer periods 
  • Adjust several hidden parameters (see the sketch after the list below):
The following Oracle hidden parameters control the kill-blocker mechanism and can be reconfigured to prevent further incidents from the same cause:
1) _kill_controlfile_enqueue_blocker = { TRUE | FALSE }
  • TRUE. Default value. Enables this mechanism and kills blocker process in CF enqueue.
  • FALSE. Disables this mechanism and no blocker process in CF enqueue will be killed.
2) _kill_enqueue_blocker = { 0 | 1 | 2 | 3 }
  • 0. Disables this mechanism and no foreground or background blocker process in enqueue will be killed.
  • 1. Enables this mechanism and only kills foreground blocker process in enqueue while background process is not affected.
  • 2. Enables this mechanism and only kills background blocker process in enqueue.
  • 3. Default value. Enables this mechanism and kills blocker processes in enqueue.
3) _controlfile_enqueue_timeout = { INTEGER }
  • 900. Default value.
  • 1800. A commonly suggested value to avoid hitting the enqueue timeout.
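For reference, here is a minimal sketch of how these underscore parameters could be set. The values are only illustrative, hidden parameters should only be changed on the advice of Oracle Support, and because the example writes them to the spfile a restart is needed for them to take effect:

SQL> alter system set "_controlfile_enqueue_timeout"=1800 scope=spfile;
SQL> alter system set "_kill_controlfile_enqueue_blocker"=false scope=spfile;
SQL> alter system set "_kill_enqueue_blocker"=1 scope=spfile;
SQL> shutdown immediate
SQL> startup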

High Water Mark (HWM)


Oracle uses the high water mark to identify the highest amount of space used by a particular segment.
It acts as the boundary between used and unused space. As the amount of data grows due to row inserts
and updates, the segment's high water mark grows accordingly.

The HIGH WATER MARK is set at the beginning of the segment when the table is created. 
Deleting rows will never reset the HWM even if all the rows are deleted from the table. 
Only the TRUNCATE command will reset the HWM.

To reduce the high water mark, export the table (with export or Data Pump), drop it, and import it back in again.

Suppose we create an empty table; the high water mark would be at the beginning of the table segment.

[Diagram: empty segment; the HWM sits at the start of the segment and all space beyond it is unused]
When you insert rows into the table, the high water mark is bumped up step by step. This is done by the server process that performs the inserts.

[Diagram: used data below the HWM, unused space above it]
Let's assume that we have filled a table with 100,000 rows, and that we then deleted 50,000 rows.
                    
[Diagram: used data and empty blocks below the HWM, unused space above it; a full table scan reads every block up to the HWM]

As you can see above, deleting data does not move the HWM. The main disadvantage of this is that Oracle always reads all blocks up to the high water mark during a full table scan. You may have noticed that a count(*) on an "empty" table can take time to return 0 rows; the reason for the delay is that the HWM is still set at its old, higher position.


NOTE:  Whenever the optimizer chooses a full table scan, it scans all the blocks below the HWM. This degrades performance and slows down the full table scan. To avoid this, we need to shrink the table to reset the HWM.


So, how do we move the high water mark to a lower position?
The simplest way to reset the HWM is to truncate the table. Let us see how TRUNCATE resets the HWM.


[Diagram: after TRUNCATE the segment holds no data and the HWM is back at the start of the segment]

The HWM is reset now, after truncating the data.

HWMBLOCKS = "blocks containing data" + "empty blocks"

          HWMBLOCKS - "blocks containing data"
HWM % = ( ------------------------------------ ) * 100
                      HWMBLOCKS

You can also use DBMS_SPACE.UNUSED_SPACE procedure to determine HWM.

High Watermark = Total blocks - Unused blocks – 1

SQL> select blocks from dba_segments where owner=upper('HR') and segment_name = upper('JOBS');

SQL> analyze table hr.jobs estimate statistics;

SQL> select empty_blocks from dba_tables where owner=upper('HR') and table_name = upper('jobs');
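And here is a minimal PL/SQL sketch of the DBMS_SPACE.UNUSED_SPACE approach mentioned above, again using HR.JOBS purely as an example:

set serveroutput on
declare
  l_total_blocks   number;
  l_total_bytes    number;
  l_unused_blocks  number;
  l_unused_bytes   number;
  l_last_file_id   number;
  l_last_block_id  number;
  l_last_block     number;
begin
  dbms_space.unused_space(
    segment_owner             => 'HR',
    segment_name              => 'JOBS',
    segment_type              => 'TABLE',
    total_blocks              => l_total_blocks,
    total_bytes               => l_total_bytes,
    unused_blocks             => l_unused_blocks,
    unused_bytes              => l_unused_bytes,
    last_used_extent_file_id  => l_last_file_id,
    last_used_extent_block_id => l_last_block_id,
    last_used_block           => l_last_block);
  -- High water mark (in blocks) = total blocks - unused blocks - 1
  dbms_output.put_line('HWM (blocks): ' || (l_total_blocks - l_unused_blocks - 1));
end;
/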


Ways of resetting the HWM:
Option 1: ALTER TABLE tab_name MOVE TABLESPACE tbs_name;

This option requires rebuilding the indexes, since they become invalid/unusable after running the above command. The downside is that rebuilding the indexes is additional overhead, and users cannot use the application or reports while the indexes are being rebuilt.
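A minimal sketch of option 1 (the table, tablespace, and index names here are only placeholders):

SQL> alter table hr.jobs move tablespace users;
SQL> select index_name, status from dba_indexes where table_owner='HR' and table_name='JOBS';
SQL> alter index hr.jobs_pk rebuild;   -- rebuild each index reported UNUSABLE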


Option 2
1. Export the data
2. Truncate the table
3. Import the table
4. Analyze the table


Option 3  

1. Copy the table data to a temporary table
2. Truncate the original table
3. Insert the data back
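A minimal sketch of option 3 (HR.JOBS and the copy table name are only placeholders, and this assumes no enabled foreign keys reference the table, otherwise the TRUNCATE will fail):

SQL> create table hr.jobs_copy as select * from hr.jobs;
SQL> truncate table hr.jobs;
SQL> insert into hr.jobs select * from hr.jobs_copy;
SQL> commit;
SQL> drop table hr.jobs_copy;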
  
Option 4: Use the DBMS_REDEFINITION package to copy the table
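And a minimal sketch of option 4 with DBMS_REDEFINITION, again treating HR.JOBS as a placeholder table (the interim table name is an assumption, and the user running it needs the online redefinition privileges):

SQL> create table hr.jobs_interim as select * from hr.jobs where 1 = 0;

declare
  l_errors pls_integer;
begin
  -- check that the table can be redefined online
  dbms_redefinition.can_redef_table('HR', 'JOBS');
  -- copy the rows into the freshly created interim segment
  dbms_redefinition.start_redef_table('HR', 'JOBS', 'JOBS_INTERIM');
  -- carry over indexes, triggers, constraints and grants
  dbms_redefinition.copy_table_dependents('HR', 'JOBS', 'JOBS_INTERIM',
      dbms_redefinition.cons_orig_params, true, true, true, false, l_errors);
  -- swap the segments; the table now sits in a new, compact segment
  dbms_redefinition.finish_redef_table('HR', 'JOBS', 'JOBS_INTERIM');
end;
/

SQL> drop table hr.jobs_interim;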

People are often confused about resetting the HWM with the ALTER TABLE ... DEALLOCATE UNUSED clause.
This clause only frees unused space above the high water mark; it cannot reset the HWM position.
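For reference, the clause in question looks like this (HR.JOBS is again only an example); it trims space above the HWM but leaves the HWM where it is:

SQL> alter table hr.jobs deallocate unused;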

Starting with Oracle 10gR1 we can use a new feature for adjusting the high water mark called segment shrinking. It is only possible for segments that use ASSM, in other words, segments located in tablespaces that use Automatic Segment Space Management.
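A minimal sketch of segment shrinking in an ASSM tablespace (HR.JOBS is only an example; row movement must be enabled first):

SQL> alter table hr.jobs enable row movement;
SQL> alter table hr.jobs shrink space cascade;   -- CASCADE also shrinks dependent indexes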

oracle alter session set current_schema

Today I was given a Notepad file containing a DDL and DML script and was told to run it as the DEMO user in my Dev environment. I did not have WinSCP (to copy the file from Windows to my Oracle database server on Linux) and I did not have the password for the DEMO user, so in this kind of situation I ran the script as the DEMO user by setting current_schema=DEMO.

First I copied the script from the Notepad file, pasted it into the vi editor, and saved it as index.sql. Instead of putting the schema name in front of every index name throughout the file, it is easier to change the current_schema. Then:

oracle$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.2.0 Production on Wed Feb 22 16:53:27 2012
Copyright (c) 1982, 2010, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning and Automatic Storage Management options
SQL> show user
USER is "SYS"

SQL> alter session set current_schema=DEMO;
Session altered.

SQL>show user
USER is "SYS"
It is still the SYS user.

SQL> select sys_context('USERENV','SESSION_USER') current_user,sys_context('USERENV','SESSION_SCHEMA') current_schema from dual;

CURRENT_USER        CURRENT_SCHEMA
------------------  ------------------
SYS                 DEMO



Then I ran the copied script from the SQL prompt:

SQL> @/opt/oracle/index.sql
You can now perform any operation as if you had logged in as DEMO. After finishing the job, set the schema back to SYS using the same command:

SQL> alter session set current_schema=SYS;

SQL> select sys_context('USERENV','SESSION_USER') current_user,sys_context('USERENV','SESSION_SCHEMA') current_schema from dual;

CURRENT_USER        CURRENT_SCHEMA
------------------  ------------------
SYS                 SYS



SQL> exit

Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning and Automatic Storage Management options
oracle$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.2.0 Production on Wed Feb 22 17:56:09 2012
Copyright (c) 1982, 2010, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning and Automatic Storage Management options

SQL> select sys_context('USERENV','SESSION_USER') current_user,sys_context('USERENV','SESSION_SCHEMA') current_schema from dual;

CURRENT_USER        CURRENT_SCHEMA
------------------  ------------------
SYS                 SYS



So, if you exit SQL*Plus and reconnect, your default schema is no longer DEMO.

NOTE:  The same work can be done in TOAD by setting the current schema to DEMO on the Schema Browser page; it is quite a bit easier if you have TOAD.

To STOP Rman backup job

If the RMAN job is running in the foreground on your screen, press Ctrl+C.

But if it is running in the background, you can kill it either manually, by finding its SID/SPID, or through OEM.

To find the RMAN SID and SPID of your running job, use the script below:

SQL> SELECT p.SPID, s.SID, s.SERIAL#, sw.EVENT, sw.SECONDS_IN_WAIT AS SEC_WAIT,
            sw.STATE, s.CLIENT_INFO
       FROM V$SESSION_WAIT sw, V$SESSION s, V$PROCESS p
      WHERE s.CLIENT_INFO LIKE 'rman%'
        AND s.SID = sw.SID
        AND s.PADDR = p.ADDR;

SPID   SID    SERIAL#  EVENT                          SEC_WAIT  STATE    CLIENT_INFO
-----  -----  -------  -----------------------------  --------  -------  -----------------------
1234   5678   98765    control file sequential read   0         WAITING  rman channel=ORA_DISK_1


SQL> alter system kill session 'SID,SERIAL#';
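For example, with the SID and SERIAL# from the sample output above:

SQL> alter system kill session '5678,98765';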

ORA-31655,ORA-39154 while Datapump Import


Today I was performing a SYSFM schema import using the SACORP user on my test server.

$ cat /etc/redhat-release
Red Hat Enterprise Linux Server release 5.4 (Tikanga)

I already have the dump files from the prod server and am ready to run the import on the test server with my parfile.

$ vi sysfm_impdp.par

DIRECTORY=DPUMP_OMEGA_DIR1
DUMPFILE=DPUMP_OMEGA_DIR1:SYSFM_%U.dmp
LOGFILE=LOGFILE_OMEGA_dir1:sysfm_impdp.log
PARALLEL=10
SCHEMAS=SYSFM
JOB_NAME=sysfm_Import


$ impdp sacorp/***** parfile=sysfm_impdp.par
Import: Release 11.2.0.2.0 - Production on Thu Feb 16 15:45:25 2012

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters and Automatic Storage Management options
ORA-31655: no data or metadata objects selected for job
ORA-39154: Objects from foreign schemas have been removed from import
Master table "SACORP"."SYSFM_IMPORT" successfully loaded/unloaded
Starting "SACORP"."SYSFM_IMPORT":  sacorp/********  parfile=sysfm_impdp.par
Job "SACORP"."SYSFM_IMPORT" successfully completed at 15:47:11

I checked the database and found that no schema had been imported. After struggling for some time, I found the cause of the error: the importing user had no privilege to import objects into another user's schema. So I granted IMP_FULL_DATABASE to the SACORP user, from which I was performing the Data Pump schema import.

SQL> select  banner  from  v$version;
BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
PL/SQL Release 11.2.0.2.0 - Production
CORE    11.2.0.2.0      Production
TNS for Linux: Version 11.2.0.2.0 - Production
NLSRTL Version 11.2.0.2.0 – Production

SQL> grant IMP_FULL_database to  SACORP;
Grant succeeded.

I started the import again, and this time the import ran successfully.
Check dba_datapump_jobs to confirm the import job is running:

SQL> select OWNER_NAME,JOB_NAME,STATE from dba_datapump_jobs;

OWNER_NAME          JOB_NAME                STATE
--------------------       --------------------    --------------------
SACORP                     SYSFM_IMPORT         EXECUTING

DBA's SCRIPTS



1) DROP USER OBJECTS:

conn username/password   -- MANDATORY: connect as the user whose objects you want to drop

-- Drop all foreign key (referential) constraints owned by the current user
set serveroutput on
declare
  cursor fkc is
    select table_name, constraint_name
      from user_constraints
     where constraint_type = 'R';
  tname user_constraints.table_name%TYPE;
  cname user_constraints.constraint_name%TYPE;
begin
  open fkc;
  loop
    fetch fkc into tname, cname;
    exit when fkc%NOTFOUND;
    dbms_output.put_line('alter table '||tname||' drop constraint '||cname);
    execute immediate 'alter table '||tname||' drop constraint '||cname;
  end loop;
  close fkc;
end;
/

-- Drop all remaining objects owned by the current user
-- (indexes and package bodies are dropped implicitly with their tables/packages)
declare
  cursor objs is
    select object_name, object_type
      from user_objects
     where object_type not in ('INDEX','PACKAGE BODY');
  obj_name user_objects.object_name%TYPE;
  obj_type user_objects.object_type%TYPE;
begin
  open objs;
  loop
    fetch objs into obj_name, obj_type;
    exit when objs%NOTFOUND;
    dbms_output.put_line('Drop '||obj_type||' '||obj_name);
    begin
      execute immediate 'Drop '||obj_type||' '||obj_name;
    exception
      when others then null;   -- ignore objects already dropped via dependencies
    end;
  end loop;
  close objs;
end;
/

PURGE RECYCLEBIN;

-- Remove all scheduled DBMS_JOB jobs owned by the current user
declare
  the_job user_jobs.job%TYPE;
  cursor c1 is select job from user_jobs;
begin
  open c1;
  loop
    fetch c1 into the_job;
    exit when c1%NOTFOUND;
    dbms_job.remove(the_job);
  end loop;
  close c1;
  commit;   -- DBMS_JOB changes are transactional
end;
/

VERIFY :

     set heading off
SQL >   select 'Objects left in schema : ' from dual;
SQL>    select object_name,object_type from user_objects;

SQL>   select 'Jobs left in schema: ' from dual;
SQL>   select job,what from user_jobs;



2) User privileges & Grantee :

SELECT grantee, privilege, admin_option
FROM sys.dba_sys_privs
WHERE (privilege LIKE '% ANY %'
OR privilege IN ('BECOME USER', 'UNLIMITED TABLESPACE')
OR admin_option = 'YES')
AND grantee NOT IN ('SYS', 'SYSTEM', 'OUTLN', 'AQ_ADMINISTRATOR_ROLE',
'DBA', 'EXP_FULL_DATABASE', 'IMP_FULL_DATABASE',
'OEM_MONITOR', 'CTXSYS', 'DBSNMP', 'IFSSYS',
'IFSSYS$CM', 'MDSYS', 'ORDPLUGINS', 'ORDSYS',
'TIMESERIES_DBA');



3) Free Space :
-- This script lists all tablespaces and their datafiles with their free and used space


SET SERVEROUTPUT ON
SET PAGESIZE 1000
SET LINESIZE 255
SET FEEDBACK OFF

SELECT Substr(df.tablespace_name,1,20) "Tablespace Name",
       Substr(df.file_name,1,40) "File Name",
       Round(df.bytes/1024/1024,2) "Size (M)",
       Round(e.used_bytes/1024/1024,2) "Used (M)",
       Round(f.free_bytes/1024/1024,2) "Free (M)",
       Rpad(' '|| Rpad ('X',Round(e.used_bytes*10/df.bytes,0), 'X'),11,'-') "% Used"
FROM   DBA_DATA_FILES  df,
       (SELECT file_id, Sum(Decode(bytes,NULL,0,bytes)) used_bytes
           FROM dba_extents  GROUP by file_id)  e,
       (SELECT Max(bytes) free_bytes, file_id
           FROM dba_free_space   GROUP BY file_id) f
WHERE  e.file_id (+) = df.file_id
AND    df.file_id  = f.file_id (+)
ORDER BY df.tablespace_name, df.file_name;


4) Tablespaces :

-- This script lists all tablespaces with their sizes

SELECT /* + RULE */  df.tablespace_name "Tablespace",
       df.bytes / (1024 * 1024) "Size (MB)",
       SUM(fs.bytes) / (1024 * 1024) "Free (MB)",
       Nvl(Round(SUM(fs.bytes) * 100 / df.bytes),1) "% Free",
       Round((df.bytes - SUM(fs.bytes)) * 100 / df.bytes) "% Used"
  FROM dba_free_space fs,
       (SELECT tablespace_name,SUM(bytes) bytes
          FROM dba_data_files
         GROUP BY tablespace_name) df
 WHERE fs.tablespace_name (+)  = df.tablespace_name
 GROUP BY df.tablespace_name,df.bytes
UNION ALL
SELECT /* + RULE */ df.tablespace_name tspace,
       fs.bytes / (1024 * 1024),
       SUM(df.bytes_free) / (1024 * 1024),
       Nvl(Round((SUM(fs.bytes) - df.bytes_used) * 100 / fs.bytes), 1),
       Round((SUM(fs.bytes) - df.bytes_free) * 100 / fs.bytes)
  FROM dba_temp_files fs,
       (SELECT tablespace_name,bytes_free,bytes_used
          FROM v$temp_space_header
         GROUP BY tablespace_name,bytes_free,bytes_used) df
 WHERE fs.tablespace_name (+)  = df.tablespace_name
 GROUP BY df.tablespace_name,fs.bytes,df.bytes_free,df.bytes_used
 ORDER BY 4 DESC;
5) Locked_objects :

SET LINESIZE 500
SET PAGESIZE 1000
SET VERIFY OFF

COLUMN owner FORMAT A20
COLUMN username FORMAT A20
COLUMN object_owner FORMAT A20
COLUMN object_name FORMAT A30
COLUMN locked_mode FORMAT A15

SELECT b.session_id AS sid,
       NVL(b.oracle_username, '(oracle)') AS username,
       a.owner AS object_owner,
       a.object_name,
       Decode(b.locked_mode, 0, 'None',
                             1, 'Null (NULL)',
                             2, 'Row-S (SS)',
                             3, 'Row-X (SX)',
                             4, 'Share (S)',
                             5, 'S/Row-X (SSX)',
                             6, 'Exclusive (X)',
                             b.locked_mode) locked_mode,
       b.os_user_name
FROM   dba_objects a,
       v$locked_object b
WHERE  a.object_id = b.object_id
ORDER BY 1, 2, 3, 4;

SET PAGESIZE 14
SET VERIFY ON


6) Data Pump Monitoring Script : 


select sid, serial#, sofar, totalwork, dp.owner_name, dp.state, dp.job_mode
from gv$session_longops sl, gv$datapump_job dp
where sl.opname = dp.job_name and sofar != totalwork;

7) RMAN Job Monitoring Script :
SQL >  SELECT SID, SERIAL#, CONTEXT, SOFAR, TOTALWORK,
        ROUND(SOFAR/TOTALWORK*100,2) "%_COMPLETE"
        FROM V$SESSION_LONGOPS
        WHERE OPNAME LIKE 'RMAN%'  AND OPNAME NOT LIKE '%aggregate%' 
              AND TOTALWORK != 0 AND SOFAR <> TOTALWORK ;

To see the Rman status and SID:

SQL >  SELECT s.SID, p.SPID, s.CLIENT_INFO FROM V$PROCESS p, V$SESSION s
       WHERE p.ADDR = s.PADDR AND CLIENT_INFO LIKE 'rman%';



8) To see what SQL the users are running:
SELECT a.sid, a.serial#, a.username, b.sql_text FROM v$session a, v$sqlarea b
WHERE a.sql_address=b.address;

**** To see what a particular user is running ****

SELECT a.sid, a.serial#, a.username, b.sql_text FROM v$session a, v$sqlarea b
WHERE a.sql_address=b.address and a.username = UPPER('&username');


9) Find FREE/USED/TOTAL size of oracle database :

(used space):

----------------
SQL> select sum(bytes)/1024/1024/1024 GB from dba_segments;


(free space):
---------------
SQL> select sum(bytes)/1024/1024/1024 GB from dba_free_space;


(Total database size):
---------------------------
SQL> select sum(bytes)/1024/1024/1024 GB from dba_data_files;

  +

(Temp size):
---------------
SQL> select SUM(bytes_used)/1024/1024/1024 GBused, 
SUM(bytes_free)/1024/1024/1024 GBfree from  v$temp_space_header;

(or)

SELECT SUM(a.log_space + b.data_space + c.tempspace) "Total_DB_Size (G)"
  FROM (SELECT ROUND(SUM(bytes/1024/1024/1024), 2) data_space
          FROM dba_data_files) b,
       (SELECT ROUND(SUM(bytes*members/1024/1024/1024), 2) log_space
          FROM v$log) a,
       (SELECT NVL(ROUND(SUM(bytes/1024/1024/1024), 2), 0) tempspace
          FROM dba_temp_files) c;


10) RMAN Cold Backup :


$ rman target / nocatalog
RMAN> configure controlfile autobackup on;
RMAN> run {
2> allocate channel d1 type disk;
3> backup full tag full_offline_bkup
4> format '/var/backup/corppsdb/rman/db_t%t_s%s_p%p'
5> database plus archivelog;
6> release channel d1;
7> }

Manually Create a Physical Standby Database Using Data Guard - Oracle 10g


Step-by-step instructions on how to create a Physical Standby Database on Windows and UNIX servers, and maintenance tips on the databases in a Data Guard Environment.
Oracle 10g Data Guard is a great tool to ensure high availability, data protection and disaster recovery for enterprise data. I have been working on Data Guard/Standby databases using both Grid control and SQL command line for a couple of years, and my latest experience with Data Guard was manually creating a Physical Standby Database for a Laboratory Information Management System (LIMS) half a year ago. I maintain it daily and it works well. I would like to share my experience with the other DBAs.
In this example the database version is 10.2.0.3. The Primary database and Standby database are located on different machines at different sites. The Primary database is called PRIM and the Standby database is called STAN. I use a Flash Recovery Area and OMF.
I. Before you get started:
1. Make sure the operating system and platform architecture on the primary and standby systems are the same;
2. Install Oracle database software without the starter database on the standby server and patch it if necessary. Make sure the same Oracle software release is used on the Primary and Standby databases, and Oracle home paths are identical.
3. Test the Standby Database creation on a test environment first before working on the Production database.
II. On the Primary Database Side:
1. Enable forced logging on your primary database:
SQL> ALTER DATABASE FORCE LOGGING;
2. Create a password file if it doesn’t exist.
1) To check if a password file already exists, run the following command:
SQL> select * from v$pwfile_users;
2) If it doesn’t exist, use the following command to create one:
- On Windows:
$cd %ORACLE_HOME%\database
$orapwd file=pwdPRIM.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxxx with the password for the SYS user.)
- On UNIX:
$ cd $ORACLE_HOME/dbs
$ orapwd file=pwdPRIM.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxxx with your actual password for the SYS user.)
3. Configure a Standby Redo log.
1) The size of the standby redo log files should match the size of the current Primary database online redo log files. To find out the size of your online redo log files:
SQL> select bytes from v$log;
BYTES
----------
52428800
52428800
52428800
2) Use the following command to determine your current log file groups:
SQL> select group#, member from v$logfile;
3) Create standby Redo log groups.
Number of standby redo log groups = (maximum number of logfiles for each thread + 1) * maximum number of threads.
My primary database had 3 log file groups originally, so I created 4 standby redo log groups using the following commands:
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 7 SIZE 50M;
or
ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 '/u01/app/oracle/oradata/PRIM/redostd04.log' SIZE 52100k ;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 '/u01/app/oracle/oradata/PRIM/redostd05.log' SIZE 52100k ;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 '/u01/app/oracle/oradata/PRIM/redostd06.log' SIZE 52100k ;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 7 '/u01/app/oracle/oradata/PRIM/redostd07.log' SIZE 52100k ;
4) To verify the results of the standby redo log groups creation, run the following query:
SQL>select * from v$standby_log;
4. Enable Archiving on Primary.
If your primary database is not already in Archive Log mode, enable the archive log mode:
SQL>shutdown immediate;
SQL>startup mount;
SQL>alter database archivelog;
SQL>alter database open;
SQL>archive log list;
5. Set Primary Database Initialization Parameters
Create a text initialization parameter file (PFILE) from the server parameter file (SPFILE), to add the new primary role parameters.
1) Create pfile from spfile for the primary database:
- On Windows:
SQL> create pfile='<ORACLE_HOME>\database\pfilePRIM.ora' from spfile;
- On UNIX:
SQL> create pfile='<ORACLE_HOME>/dbs/pfilePRIM.ora' from spfile;
(Note: replace <ORACLE_HOME> with your Oracle home path.)
2) Edit pfilePRIM.ora to add the new primary and standby role parameters: (Here the file paths are from a windows system. For UNIX system, specify the path accordingly)
db_name=PRIM
db_unique_name=PRIM
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIM,STAN)'
LOG_ARCHIVE_DEST_1=
'LOCATION=F:\Oracle\flash_recovery_area\PRIM\ARCHIVELOG
VALID_FOR=(ALL_LOGFILES,ALL_ROLES)
DB_UNIQUE_NAME=PRIM'
LOG_ARCHIVE_DEST_2=
'SERVICE=STAN LGWR ASYNC
VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
DB_UNIQUE_NAME=STAN'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
remote_login_passwordfile='EXCLUSIVE'
FAL_SERVER=STAN
FAL_CLIENT=PRIM
STANDBY_FILE_MANAGEMENT=AUTO
# Specify the location of the standby DB datafiles followed by the primary location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STAN\DATAFILE','E:\oracle\product\10.2.0\oradata\PRIM\DATAFILE'
# Specify the location of the standby DB online redo log files followed by the primary location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STAN\ONLINELOG','E:\oracle\product\10.2.0\oradata\PRIM\ONLINELOG','F:\Oracle\flash_recovery_area\STAN\ONLINELOG','F:\Oracle\flash_recovery_area\PRIM\ONLINELOG'
6. Create spfile from pfile, and restart primary database using the new spfile.
Data Guard must use SPFILE. Create the SPFILE and restart database.
- On Windows:
SQL> shutdown immediate;
SQL> startup nomount pfile='<ORACLE_HOME>\database\pfilePRIM.ora';
SQL> create spfile from pfile='<ORACLE_HOME>\database\pfilePRIM.ora';
-- Restart the Primary database using the newly created SPFILE.
SQL> shutdown immediate;
SQL> startup;
- On UNIX:
SQL> shutdown immediate;
SQL> startup nomount pfile='<ORACLE_HOME>/dbs/pfilePRIM.ora';
SQL> create spfile from pfile='<ORACLE_HOME>/dbs/pfilePRIM.ora';
-- Restart the Primary database using the newly created SPFILE.
SQL> shutdown immediate;
SQL> startup;
(Note: replace <ORACLE_HOME> with your Oracle home path.)
III. On the Standby Database Site:
1. Create a copy of Primary database data files on the Standby Server:
On Primary DB:
SQL>shutdown immediate;
On Standby Server (While the Primary database is shut down):
1) Create directory for data files, for example, on windows, E:\oracle\product\10.2.0\oradata\STAN\DATAFILE.
On UNIX, create the directory accordingly.
2) Copy the data files and temp files over.
3) Create directory (multiplexing) for online logs, for example, on Windows, E:\oracle\product\10.2.0\oradata\STAN\ONLINELOG and F:\Oracle\flash_recovery_area\STAN\ONLINELOG.
On UNIX, create the directories accordingly.
4) Copy the online logs over.
2. Create a Control File for the standby database:
On Primary DB, create a control file for the standby to use:
SQL>startup mount;
SQL> alter database create standby controlfile as 'STAN.ctl';
SQL>ALTER DATABASE OPEN;
3. Copy the Primary DB pfile to Standby server and rename/edit the file.
1) Copy pfilePRIM.ora from Primary server to Standby server, to database folder on Windows or dbs folder on UNIX under the Oracle home path.
2) Rename it to pfileSTAN.ora and modify the file as follows (here the file paths are from a Windows system; for UNIX, adjust the paths accordingly):
*.audit_file_dest='E:\oracle\product\10.2.0\admin\STAN\adump'
*.background_dump_dest='E:\oracle\product\10.2.0\admin\STAN\bdump'
*.core_dump_dest='E:\oracle\product\10.2.0\admin\STAN\cdump'
*.user_dump_dest='E:\oracle\product\10.2.0\admin\STAN\udump'
*.compatible='10.2.0.3.0'
control_files='E:\ORACLE\PRODUCT\10.2.0\ORADATA\STAN\CONTROLFILE\STAN.CTL','F:\ORACLE\FLASH_RECOVERY_AREA\STAN\CONTROLFILE\STAN.CTL'
db_name='PRIM'
db_unique_name=STAN
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIM,STAN)'
LOG_ARCHIVE_DEST_1=
'LOCATION=F:\Oracle\flash_recovery_area\STAN\ARCHIVELOG
VALID_FOR=(ALL_LOGFILES,ALL_ROLES)
DB_UNIQUE_NAME=STAN'
LOG_ARCHIVE_DEST_2=
'SERVICE=PRIM LGWR ASYNC
VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
DB_UNIQUE_NAME=PRIM'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
FAL_SERVER=PRIM
FAL_CLIENT=STAN
remote_login_passwordfile='EXCLUSIVE'
# Specify the location of the primary DB datafiles followed by the standby location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIM\DATAFILE','E:\oracle\product\10.2.0\oradata\STAN\DATAFILE'
# Specify the location of the primary DB online redo log files followed by the standby location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIM\ONLINELOG','E:\oracle\product\10.2.0\oradata\STAN\ONLINELOG','F:\Oracle\flash_recovery_area\PRIM\ONLINELOG','F:\Oracle\flash_recovery_area\STAN\ONLINELOG'
STANDBY_FILE_MANAGEMENT=AUTO
(Note: Not all the parameter entries are listed here.)
4. On Standby server, create all required directories for dump and archived log destination:
Create directories adump, bdump, cdump, udump, and archived log destinations for the standby database.
5. Copy the standby control file 'STAN.ctl' from the primary server to the control file locations on the standby server;
6. Copy the Primary password file to standby and rename it to pwdSTAN.ora.
On Windows copy it to the \database folder, and on UNIX copy it to the /dbs directory, then rename the password file.
7. For Windows, create a Windows-based services (optional):
$ oradim -NEW -SID STAN -STARTMODE manual
8. Configure listeners for the primary and standby databases.
1) On Primary system: use Oracle Net Manager to configure a listener for PRIM and STAN. Then restart the listener.
$lsnrctl stop
$lsnrctl start
2) On Standby server: use Net Manager to configure a listener for PRIM and STAN. Then restart the listener.
$lsnrctl stop
$lsnrctl start
9. Create Oracle Net service names.
1) On Primary system: use Oracle Net Manager to create network service names for PRIM and STAN. Check tnsping to both services:
$tnsping PRIM
$tnsping STAN
2) On Standby system: use Oracle Net Manager to create network service names for PRIM and STAN. Check tnsping to both services:
$tnsping PRIM
$tnsping STAN
10. On Standby server, setup the environment variables to point to the Standby database.
Set up ORACLE_HOME and ORACLE_SID.
11. Start up nomount the standby database and generate a spfile.
- On Windows:
SQL> startup nomount pfile='<ORACLE_HOME>\database\pfileSTAN.ora';
SQL> create spfile from pfile='<ORACLE_HOME>\database\pfileSTAN.ora';
-- Restart the Standby database using the newly created SPFILE.
SQL> shutdown immediate;
SQL> startup mount;
- On UNIX:
SQL> startup nomount pfile='<ORACLE_HOME>/dbs/pfileSTAN.ora';
SQL> create spfile from pfile='<ORACLE_HOME>/dbs/pfileSTAN.ora';
-- Restart the Standby database using the newly created SPFILE.
SQL> shutdown immediate;
SQL> startup mount;
(Note: replace <ORACLE_HOME> with your Oracle home path.)
12. Start Redo apply
1) On the standby database, to start redo apply:
SQL>alter database recover managed standby database disconnect from session;
If you ever need to stop log apply services:
SQL> alter database recover managed standby database cancel;
13. Verify the standby database is performing properly:
1) On Standby perform a query:
SQL>select sequence#, first_time, next_time from v$archived_log;
2) On Primary, force a logfile switch:
SQL>alter system switch logfile;
3) On Standby, verify the archived redo log files were applied:
SQL>select sequence#, applied from v$archived_log order by sequence#;
14. If you want the redo data to be applied as it is received without waiting for the current standby redo log file to be archived, enable the real-time apply.
To start real-time apply:
SQL> alter database recover managed standby database using current logfile disconnect;
15. To create multiple standby databases, repeat this procedure.
IV. Maintenance:
1. Check the alert log files of Primary and Standby databases frequently to monitor the database operations in a Data Guard environment.
2. Cleanup the archive logs on Primary and Standby servers.
I scheduled a weekly hot whole-database backup against my primary database that also backs up and deletes the archived logs on the Primary.
For the standby database, I run RMAN to back up and delete the archive logs once per week:
$rman target /@STAN;
RMAN>backup archivelog all delete input;
To delete the archivelog backup files on the standby server, I run the following once a month:
RMAN>delete backupset;
3. Password management
The password for the SYS user must be identical on every system for the redo data transmission to succeed. If you change the password for SYS on Primary database, you will have to update the password file for Standby database accordingly, otherwise the logs won’t be shipped to the standby server.
Refer to section II.2, step 2 to update/recreate password file for the Standby database.
Reference:
Oracle Data Guard Concepts and Administration 10g Release 2 (10.2), B14239-04
A good Metalink (MOS) note on switchover is also available.