Rename ASM disk group in Oracle 12c RAC Database

In this article I illustrate how to rename an ASM disk group in Oracle 12c.

The renamedg utility renames ASM disk group(s) in Oracle. It can be run in two ways, controlled by the PHASE parameter:

  1. Specify the "both" argument in the PHASE parameter of renamedg (single run).
  2. Specify the "one" and "two" arguments in the PHASE parameter of renamedg (two separate runs).

  How to rename the ASM disk group using the "both" argument

  • Take a backup of the database (optional, but safer).
  • Bring down the database (if your disk groups are actively used by any database processes).
  • Log in to the ASM instance and:
    • Get the disk group(s) and associated disk(s) details.
    • Dismount the disk group(s) from all the nodes.
      (You cannot use renamedg to rename the CRS disk group because: (1) you cannot dismount it while the cluster is actively using it, and (2) you cannot rename disk groups or disks while CRS is down.)
  • Use the renamedg utility to rename the disk group. This is a command-line utility.
  • After renaming the disk group, you can rename the disks assigned to it. Before renaming the disks, you need to mount the disk group in restricted mode.
  • Once the rename activities are complete, manually mount each disk group that was renamed.
  • The original disk group resource is not automatically deleted. Check the status of the old disk group resource with the Oracle Clusterware Control (CRSCTL) command crsctl stat res -t, then delete it manually with the Server Control Utility (SRVCTL) command srvctl remove diskgroup. If the disk groups are associated with a database, you need to remove the database resource and add it again.

Quick Commands:

  • Get Disk Details

select dg.name "DG Name", di.name "Disk Name", di.path
from v$asm_diskgroup dg, v$asm_disk di
where dg.group_number = di.group_number
order by 1;

DG Name Disk Name PATH
PRAD_CTL1 PRAD_CTL1_0000 /dev/rhdiskpower10
PRAD_CTL2 PRAD_CTL2_0000 /dev/rhdiskpower11
PRAD_CTL3 PRAD_CTL3_0000 /dev/rhdiskpower12
PRAD_REDOA1 PRAD_REDOA1_0000 /dev/rhdiskpower6
PRAD_REDOA2 PRAD_REDOA2_0000 /dev/rhdiskpower7
PRAD_REDOB1 PRAD_REDOB1_0000 /dev/rhdiskpower8
PRAD_REDOB2 PRAD_REDOB2_0000 /dev/rhdiskpower9
PRAD_ROOT PRAD_ROOT_0000 /dev/rhdiskpower2
PRAD_SEED PRAD_SEED_0000 /dev/rhdiskpower5
PPRADTRN_DATA1 PPRADTRN_DATA1_0000 /dev/rhdiskpower3
PPRADTRN_INDX1 PPRADTRN_INDX1_0000 /dev/rhdiskpower4
  • Dismount Diskgroup

            alter diskgroup <DG Name> dismount;

  • Rename Diskgroup

Syntax:    renamedg [phase={one|two|both}] dgname=diskgroup
                newdgname=newdiskgroup [config=configfile]
                [asm_diskstring=discoverystring,discoverystring...]
                [clean={true|false}] [check={true|false}]
                [confirm={true|false}] [verbose={true|false}]
                [keep_voting_files={true|false}]

Example: $ renamedg dgname=PRAD_ROOT newdgname=PRAD1_ROOT asm_diskstring='/dev/emcpowerp1' verbose=true

  • Mount Diskgroup Restricted

alter diskgroup <DG Name> mount restricted;

  • Rename Disk name

alter diskgroup <DG Name> rename disks all;

  • Mount Diskgroup

alter diskgroup <DG Name> mount;


How to rename the ASM disk group using the "one" and "two" arguments (commands only)

The steps are the same as renaming the disk group(s) with the "both" argument, except for the usage of renamedg.

  • The command below generates the configuration file used in phase two.

$ renamedg phase=one dgname=<Current DG Name> newdgname=<New DG Name> asm_diskstring='/devices/disk*' config=/tmp/disks.conf verbose=true

  • Rename the disk group using configuration file.

$ renamedg phase=two dgname=<Current DG Name> newdgname=<New DG Name> config=/tmp/disks.conf   verbose=true

  • Mount Diskgroup Restricted

alter diskgroup <DG Name> mount restricted;

  • Rename Disk name

alter diskgroup <DG Name> rename disks all;

                                                ***

Physical Standby Processes Architecture (Apply Redo Logs)

A physical standby database is an exact, block-for-block copy of a primary database. A physical standby is maintained as an exact copy through a process called Redo Apply, in which redo data received from a primary database is continuously applied to a physical standby database using the database recovery mechanisms. This also means that rowids stay the same in a physical standby database environment.

A physical standby database can be opened for read-only access and used to offload queries from a primary database. If a license for the Oracle Active Data Guard option has been purchased, Redo Apply can be active while the physical standby database is open, thus allowing queries to return results that are identical to what would be returned from the primary database. This capability is known as the real-time query feature.
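For example, real-time query can be enabled on the standby with a sequence like the following (a sketch only; an Active Data Guard license is required, and the exact steps depend on your configuration):

```sql
-- On the standby instance (assumes it is mounted with redo apply running):
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;  -- stop redo apply
ALTER DATABASE OPEN READ ONLY;                           -- open for queries
-- Restart redo apply while the database stays open (real-time query)
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT FROM SESSION;
```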

How Physical Standby works 

On the Primary database site

The log writer process (LGWR) collects transactions from the log buffer and writes to the online redo logs.

The archiver process (ARCH) creates a copy of the online redo logs, and writes to the local archive destination.

Depending on the configuration, the archiver process or log writer process can also transmit redo logs to standby database. When using the log writer process, you can specify synchronous or asynchronous network transmission of redo logs to remote destinations.

Data Guard achieves asynchronous network I/O using the LGWR network server process (LNS). These network server processes are configured through the LOG_ARCHIVE_DEST_n initialization parameter.

Data Guard’s asynchronous log transport (i.e. the Maximum Performance mode) is recommended for a configuration in which the network distance is up to thousands of miles, providing continual maximum performance, while minimizing the risks of transaction loss in the event of a disaster.
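As an illustration, the transport mode is chosen per destination via LOG_ARCHIVE_DEST_n attributes; the service name stby_tns and unique name stby below are placeholders:

```ini
# Synchronous transport (used by the zero-data-loss protection modes)
log_archive_dest_2='SERVICE=stby_tns SYNC AFFIRM VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=stby'

# Asynchronous transport (Maximum Performance), handled by the LNS process
log_archive_dest_2='SERVICE=stby_tns ASYNC NOAFFIRM VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=stby'
```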

On the Standby database site

The remote file server process (RFS) receives archived redo logs from the primary database. The primary site launches the RFS process during the first log transfer.

The redo logs information received by the RFS process can be stored as either standby redo logs or archived redo logs. Data Guard introduces the concept of standby redo logs (separate pool of log file groups).

Standby redo logs must be archived by the ARCH process to the standby archived destination before the managed recovery process (MRP) applies redo log information to the standby database.

The fetch archive log (FAL) client is the MRP process. The FAL server is a foreground process that runs on the primary database and services the fetch archive log requests coming from the FAL client; a separate FAL server is created for each incoming FAL client.

FAL_SERVER specifies the FAL server for a standby database. The value is an Oracle Net service name, which is assumed to be configured properly on the standby database system to point to the desired FAL server. FAL_CLIENT specifies the FAL client name that the FAL service, configured through the FAL_SERVER parameter, uses to refer to the FAL client. The value is an Oracle Net service name, which is assumed to be configured properly on the FAL server system to point to the FAL client (the standby database).

Thanks to the FAL_CLIENT and FAL_SERVER parameters, the managed recovery process on the physical standby automatically checks and resolves gaps at the time redo is applied, so you do not need to transfer those gaps yourself. FAL_CLIENT and FAL_SERVER only need to be defined in the initialization parameter file for the standby database(s). It is possible, however, to define these two parameters in the initialization parameter file for the primary database as well, to reduce the work needed if the primary database is ever required to transition its role.
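A minimal sketch of these two parameters in a standby init.ora (the Net service names prim_tns and stby_tns are placeholders; the build later in this post uses PRASPRD/PRASSBY-style names):

```ini
# On the standby: where to fetch missing archived logs from, and how the
# FAL server should reach this standby back
fal_server=prim_tns   # Net service name pointing to the primary (FAL server)
fal_client=stby_tns   # Net service name pointing to this standby (FAL client)
```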

Prior to Oracle 11g, Redo Apply only worked with the standby database in the MOUNT state, preventing queries against the physical standby while media recovery was in progress. This changed in Oracle 11g. When using the Data Guard Broker (DG_BROKER_START = TRUE), the monitor agent process named Data Guard Broker Monitor (DMON) runs on every site (primary and standby) and maintains two-way communication.


Protection Mode

 Maximum Protection

This mode offers the highest level of data protection. Data is synchronously transmitted to the standby database from the primary database and transactions are not committed on the primary database unless the redo data is available on at least one standby database configured in this mode. If the last standby database configured in this mode becomes unavailable, processing stops on the primary database. This mode ensures no-data-loss.

Because this data protection mode prioritizes data protection over primary database availability, Oracle recommends that a minimum of two standby databases be used to protect a primary database that runs in maximum protection mode to prevent a single standby database failure from causing the primary database to shut down.

Maximum Availability

This protection mode provides the highest level of data protection that is possible without compromising the availability of a primary database. Under normal operations, transactions do not commit until all redo data needed to recover those transactions has been written to the online redo log AND based on user configuration, one of the following is true:

  • redo has been received at the standby, I/O to the standby redo log has been initiated, and acknowledgement sent back to primary
  • redo has been received and written to standby redo log at the standby and acknowledgement sent back to primary

If the primary does not receive acknowledgement from at least one synchronized standby, then it operates as if it were in maximum performance mode to preserve primary database availability until it is again able to write its redo stream to a synchronized standby database.

 If the primary database fails, then this mode ensures no data loss will occur provided there is at least one synchronized standby in the Oracle Data Guard configuration.  

Transactions on the primary are considered protected as soon as Oracle Data Guard has written the redo data to persistent storage in a standby redo log file. Once that is done, acknowledgment is quickly made back to the primary database so that it can proceed to the next transaction. This minimizes the impact of synchronous transport on primary database throughput and response time. To fully benefit from complete Oracle Data Guard validation at the standby database, be sure to operate in real-time apply mode so that redo changes are applied to the standby database as fast as they are received. Oracle Data Guard signals any corruptions that are detected so that immediate corrective action can be taken.

Maximum Performance

This mode offers slightly less data protection on the primary database, but higher performance than maximum availability mode. In this mode, as the primary database processes transactions, redo data is asynchronously shipped to the standby database. The commit operation of the primary database does not wait for the standby database to acknowledge receipt of redo data before completing write operations on the primary database. If any standby destination becomes unavailable, processing continues on the primary database and there is little effect on primary database performance. This is the default mode.
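The protection mode is set on the primary with a single statement; for example (note that raising the mode to MAXIMIZE PROTECTION additionally requires a SYNC-transported standby and a restart of the primary):

```sql
-- Check the current mode
SELECT protection_mode, protection_level FROM v$database;

-- Change it (run on the primary; choose PROTECTION, AVAILABILITY or PERFORMANCE)
ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE AVAILABILITY;
```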

Performance Versus Protection in Maximum Availability Mode

When you use Maximum Availability mode, it is important to understand the possible results of using the LOG_ARCHIVE_DEST_n attributes SYNC/AFFIRM versus SYNC/NOAFFIRM (FastSync) so that you can make the choice best suited to your needs.

When a transport is performed using SYNC/AFFIRM, the primary performs write operations and waits for acknowledgment that the redo has been transmitted synchronously to the physical standby and written to disk. A SYNC/AFFIRM transport provides an additional protection benefit at the expense of a performance impact caused by the time required to complete the I/O to the standby redo log.

When a transport is performed using SYNC/NOAFFIRM, the primary performs write operations and waits only for acknowledgement that the data has been received on the standby, not that it has been written to disk. The SYNC/NOAFFIRM transport can provide a performance benefit at the expense of potential exposure to data loss in a special case of multiple simultaneous failures.

With those definitions in mind, suppose you experience a catastrophic failure at the primary site at the same time that power is lost at the standby site. Whether data is lost depends on the transport mode being used. In the case of SYNC/AFFIRM, in which there is a check to confirm that data is written to disk on the standby, there would be no data loss because the data would be available on the standby when the system was recovered. In the case of SYNC/NOAFFIRM, in which there is no check that data has been written to disk on the standby, there may be some data loss.
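The AFFIRM/NOAFFIRM choice is an attribute of the same LOG_ARCHIVE_DEST_n parameter; an illustration with placeholder names (stby_tns, stby):

```ini
# SYNC/AFFIRM: commit waits until redo is written to the standby redo log on disk
log_archive_dest_2='SERVICE=stby_tns SYNC AFFIRM DB_UNIQUE_NAME=stby'

# SYNC/NOAFFIRM (FastSync, 12c): commit waits only for receipt at the standby
log_archive_dest_2='SERVICE=stby_tns SYNC NOAFFIRM DB_UNIQUE_NAME=stby'
```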

Archiving to Local Destinations Before Archiving to Remote Destinations


LGWR SYNC Archival to a Remote Destination with Standby Redo Log Files


LGWR ASYNC Archival with Network Server (LNSn) Processes


Acronyms

Acronym Short For
ARC0 Archiver Process on Local Server
ARCH Archiver
ASYNC Asynchronous
DMON Data Guard Monitor
DR Disaster Recovery
FAL Fetch Archive Log
LCR Logical Change Record
LGWR Log Writer
LNS Network Server Process
LSP Logical Standby Process (SQL Apply for Logical DG)
MRP Managed Recovery Process (Redo Apply for Physical DG)
RFS Remote File Server
SYNC Synchronous
TAF Transparent Application Failover

Step by Step approach to create an Oracle 12c Data Guard Active Standby Database (Multitenant Database – 4 Node RAC)

In this post I have demonstrated Step by Step approach to create an Oracle 12c Data Guard Active Standby Database.

Environment Setup details

  • Primary database up and running on a 4-node RAC.
  • Data files, OCR and voting disks (in a separate DG) are in ASM storage.
  • Binaries are on the file system.
  • DB password files and init<sid>.ora files are in $ORACLE_HOME/dbs.
  • Grid Infrastructure installed and configured on the standby machines (4-node RAC).
  • OCR and voting disks (in a separate DG) are in ASM storage.
  • Required disk groups created on the standby site for data files.
  • RDBMS software installed on the standby machines.
  • Network connectivity ensured between PRIMARY and STANDBY servers (TCP/IP, ports 1521 and 1525).

Environment details – Primary Databases

OS Version: Linux 6.2
Oracle Version: 12.1.0.2
Host Server: LABPRD1, LABPRD2, LABPRD3, LABPRD4
Primary DB : PRASPRD Instances : PRASPRD1,PRASPRD2,PRASPRD3,PRASPRD4
Storage System: ASM for Data Storage and Filesystem for Binaries
Grid Home(Binaries): /prasprd/gridinfra/grid1
RDBMS Home(Binaries) :/prasprd/db/db1
Disk Groups: CRS1 (High Redundancy, for OCR and Voting Disk), LABDATA2 (Data files, Redo logs and Control files)

Environment details – Standby Databases

OS Version: Linux 6.2
Oracle Version: 12.1.0.2
Host Server : LABSBY1, LABSBY2, LABSBY3, LABSBY4
Standby DB: PRASSBY   Instances: PRASPRD1, PRASPRD2, PRASPRD3, PRASPRD4
Storage System: ASM for Data Storage and Filesystem for Binaries
Grid Home(Binaries): /prasprd/gridinfra/grid1
RDBMS Home(Binaries) :/prasprd/db/db1
Disk Groups: CRS1 (High Redundancy, for OCR and Voting Disk), LABDATA3 (Data files, Redo logs and Control files)

Procedure

  1. Enable force logging in PRIMARY.
  2. Create SRL (standby redo logs) in PRIMARY.
  3. Copy init and password file from primary to STANDBY.
  4. Create directory structure in STANDBY.
  5. Make Standby parameter changes in the parameter file of PRIMARY.
  6. Make Standby parameter changes in the parameter file of STANDBY.
  7. Add an entry in the oratab file
  8. Establish the connectivity between PRIMARY and STANDBY.
  9. Start the STANDBY instance
  10. Use RMAN duplicate to create standby database from PRIMARY database.
  11. Start the MRP process in STANDBY.
  12. Verify whether the logs are shipped and applied properly at the STANDBY.
  13. Start other standby RAC nodes.
  14. Verify Pluggable database’s status.

Step by Step Active Standby Build

1.       Enable force logging on PRIMARY

SQL> alter database force logging;

2.       Configure Standby Redo Log on PRIMARY

 a. Check the log files and sizes.

SQL>SELECT GROUP#, BYTES FROM V$LOG;

b. Create SRL

SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD <TN> GROUP <GN> ('/path/<File Name>', '/path/<File Name>') SIZE <S>;

NOTE: The standby log file size should be the same as the online log file size.

 c. Verify that the standby redo log file groups were created. Verify this on the standby after the standby build.

SQL> SELECT GROUP#, THREAD#, SEQUENCE#, ARCHIVED, STATUS FROM V$STANDBY_LOG;

(Primary)

GROUP# THREAD# SEQUENCE# ARC STATUS
9 1 0 YES UNASSIGNED
10 1 0 YES UNASSIGNED
11 2 0 YES UNASSIGNED
12 2 0 YES UNASSIGNED
13 3 0 YES UNASSIGNED
14 3 0 YES UNASSIGNED
15 4 0 YES UNASSIGNED
16 4 0 YES UNASSIGNED

(Standby – Below table output  after the standby is created)

GROUP# THREAD# SEQUENCE# ARC STATUS
9 1 0 YES UNASSIGNED
10 1 509 YES ACTIVE
11 2 0 YES UNASSIGNED
12 2 298 YES ACTIVE
13 3 0 YES UNASSIGNED
14 3 500 YES ACTIVE
15 4 0 YES UNASSIGNED
16 4 287 YES ACTIVE

**Here I have 4 nodes with 2 redo groups per node (8 groups total); I created 8 more standby log groups, 2 per node.
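For this 4-thread layout, the concrete statements would look like the following (a sketch: the 1G size and the +LABDATA2 placement are assumptions; match your own online log size and location):

```sql
ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 9  ('+LABDATA2') SIZE 1G;
ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 10 ('+LABDATA2') SIZE 1G;
ALTER DATABASE ADD STANDBY LOGFILE THREAD 2 GROUP 11 ('+LABDATA2') SIZE 1G;
ALTER DATABASE ADD STANDBY LOGFILE THREAD 2 GROUP 12 ('+LABDATA2') SIZE 1G;
ALTER DATABASE ADD STANDBY LOGFILE THREAD 3 GROUP 13 ('+LABDATA2') SIZE 1G;
ALTER DATABASE ADD STANDBY LOGFILE THREAD 3 GROUP 14 ('+LABDATA2') SIZE 1G;
ALTER DATABASE ADD STANDBY LOGFILE THREAD 4 GROUP 15 ('+LABDATA2') SIZE 1G;
ALTER DATABASE ADD STANDBY LOGFILE THREAD 4 GROUP 16 ('+LABDATA2') SIZE 1G;
```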

3.       Copy init and password file from primary to standby

Copy initPRASPRD1.ora to the standby database (all 4 nodes); the init<sid>.ora file name needs to change according to the instance name.

Copy the password file orapwPRASPRD1 from the primary to the standby database (all 4 nodes); the file name needs to change according to the instance name.

4.       Create directory structure in standby

 $ mkdir -p /u01/app/oracle/admin/prasprd/adump   (on all 4 nodes)

5.       Make Standby parameter changes in the parameter file of PRIMARY. 

Add the entries below in the init<sid>.ora. (Make the entry in one node's pfile and copy it to the other nodes.)

$ cat initprasprd1.ora
##########DG Entry ################
*.log_archive_max_processes='8'
*.db_unique_name='PRASPRD'
*.log_archive_config='dg_config=(PRASSBY,PRASPRD)'
*.log_archive_dest_1='location=+LABDATA2 valid_for=(all_logfiles,all_roles)'
*.log_archive_dest_2='service=PRASSBY_DR async noaffirm reopen=15 valid_for=(all_logfiles,primary_role) db_unique_name=PRASSBY'
*.log_file_name_convert='+LABDATA3','+LABDATA2'
*.db_file_name_convert='+LABDATA3','+LABDATA2'
*.fal_server=PRASSBY_DR
*.fal_client=PRASPRD
#############DG Entry ############################

 6.       Make Standby parameter changes in the parameter file of Standby                                     

Add the entries below in the init<sid>.ora. (Make the entry in one node's pfile and copy it to the other nodes.)

 

$ cat initprasprd1.ora
##########DG Entry ################
*.log_archive_max_processes='8'
*.db_unique_name='PRASSBY'
*.standby_file_management='AUTO'
*.log_archive_config='dg_config=(PRASPRD,PRASSBY)'
*.log_archive_dest_1='location=+LABDATA3 valid_for=(all_logfiles,all_roles) db_unique_name=PRASSBY'
*.log_Archive_dest_2='service=PRASPRD async noaffirm reopen=15 valid_for=(all_logfiles,primary_role) db_unique_name=PRASPRD'
*.log_file_name_convert='+LABDATA2','+LABDATA3'
*.db_file_name_convert='+LABDATA2','+LABDATA3'
*.fal_server=PRASPRD
*.fal_client=PRASSBY_DR
#############DG Entry ############################

7.       Add an entry in the oratab file

PRASPRD1:/prasprd/db/db1:N   (Node1)
PRASPRD2:/prasprd/db/db1:N   (Node2)
PRASPRD3:/prasprd/db/db1:N   (Node3)
PRASPRD4:/prasprd/db/db1:N   (Node4)

8.       Establish the connectivity between PRIMARY and STANDBY.   

On Standby (Add a static entry in the listener.ora for PRASSBY)

 

LISTENER12C =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST =LABSBY1.me.domain)(PORT = 1525))
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1525))
    )
  )
SID_LIST_LISTENER12C =
  (SID_LIST =
 (SID_DESC =
      (GLOBAL_DBNAME = PRASSBY)
      (ORACLE_HOME = /prasprd/db/db1)
      (SID_NAME = PRASPRD1)
    )
 )

Reload the listener

$ lsnrctl reload listener12c
 LSNRCTL for Linux: Version 12.1.0.2.0 - Production on Tue Jan 6 12:11:31 2015
 Copyright (c) 1991, 2014, Oracle.  All rights reserved.
 Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=LABSBY1.me.domain)(PORT=1525)))
The command completed successfully

On Primary (Add the below entries in tnsnames.ora)

PRASSBY_DR =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = LABSBY1.me.domain)(PORT = 1525))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME =PRASSBY)
    )
  )
 PRASPRD =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST =LABPRD1.corporate.domain)(PORT = 1525))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = PRASPRD)
    )
  )
 On Standby (Add the below entries in tnsnames.ora)
PRASSBY_DR =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = LABSBY1.corporate.domain)(PORT = 1525))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME =PRASSBY)
    )
  )
PRASPRD =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST =LABPRD1.corporate.domain)(PORT = 1525))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = PRASPRD)
    )
  )
 

9.       Start the STANDBY database instance.

 $ . oraenv

ORACLE_SID = [prasprd1] ? prasprd1
The Oracle base has been set to /u01/app/orabase
 oradb@PRASSBY1.ME.DOMAIN:/home/oradb ~]sqlplus / as sysdba
 SQL*Plus: Release 12.1.0.2.0 Production on Tue Jan 6 12:11:31 2015
 Copyright (c) 1982, 2014, Oracle.  All rights reserved.
 Connected to an idle instance.
 SQL> startup nomount;
 ORACLE instance started.
Total System Global Area  XXXXXXXXX bytes
Fixed Size                  XXXXXXX bytes
Variable Size             XXXXXXXXX bytes
Database Buffers           XXXXXXXX bytes
Redo Buffers                XXXXXXX bytes

10.       Use RMAN duplicate to create the standby database from the PRIMARY database.

Recovery Manager: Release 12.1.0.2.0 - Production on Tue Jan 6 14:40:12 2015
Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

RMAN> connect target sys/*****@PRASPRD
connected to target database: PRASPRD (DBID=2845711114)

RMAN> connect auxiliary sys/*****@PRASSBY_DR
connected to auxiliary database: PRASPRD (not mounted)

 

run
{
allocate channel c1 type disk;
allocate channel c2 type disk;
allocate auxiliary channel aux type disk;
duplicate target database for standby from active database;
alter system set log_archive_max_processes='8';
alter system set db_unique_name='PRASSBY' scope=spfile;
alter system set standby_file_management='AUTO';
alter system set log_archive_config='dg_config=(PRASPRD,PRASSBY)';
alter system set log_archive_dest_1='location=USE_DB_RECOVERY_FILE_DEST valid_for=(all_logfiles,all_roles) db_unique_name=PRASSBY';
alter system set log_archive_dest_2='service=prasprd async noaffirm reopen=15 valid_for=(all_logfiles,primary_role) db_unique_name=PRASPRD';
alter system set log_file_name_convert='+LABDATA3','+LABDATA2' scope=spfile;
alter system set db_file_name_convert='+LABDATA3','+LABDATA2' scope=spfile;
}

11.   Start the MRP process on standby database

SQL> startup;
ORACLE instance started.
Total System Global Area  XXXXXXXXX bytes
Fixed Size                  XXXXXXX bytes
Variable Size             XXXXXXXXX bytes
Database Buffers           XXXXXXXX bytes
Redo Buffers                XXXXXXX bytes
Database mounted.
Database opened.

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

Media recovery complete

12.       Verify MRP process on standby database

oradb@LABPRD.CORPORATE.DOMAIN:/home/oradb ~]ps -ef | grep mrp
oradb    23678     1  0 Jan05 ?        00:00:05 ora_mrp0_prasprd1

 SQL> select process, status, thread#, sequence#, blocks from v$managed_standby where process like '%MRP%';

PROCESS STATUS THREAD# SEQUENCE# BLOCKS
MRP0 APPLYING_LOG 1 498 102400

13.     Verify whether the logs are shipped and applied properly at the standby.

On Primary

SQL> ALTER SYSTEM SWITCH LOGFILE;

SQL> select thread#, max(sequence#) "Last Primary Seq Generated"
     from v$archived_log val, v$database vdb
     where val.resetlogs_change# = vdb.resetlogs_change#
     group by thread# order by 1;

THREAD# LAST  PRIMARY SEQ GENERATED
1 497
2 283
3 484
4 277

 

On Standby

SELECT ARCH.THREAD# "Thread",
       ARCH.SEQUENCE# "Last Sequence Received",
       APPL.SEQUENCE# "Last Sequence Applied",
       (ARCH.SEQUENCE# - APPL.SEQUENCE#) "Difference"
FROM (SELECT THREAD#, SEQUENCE# FROM V$ARCHIVED_LOG
      WHERE (THREAD#, FIRST_TIME) IN (SELECT THREAD#, MAX(FIRST_TIME)
                                      FROM V$ARCHIVED_LOG GROUP BY THREAD#)) ARCH,
     (SELECT THREAD#, SEQUENCE# FROM V$LOG_HISTORY
      WHERE (THREAD#, FIRST_TIME) IN (SELECT THREAD#, MAX(FIRST_TIME)
                                      FROM V$LOG_HISTORY GROUP BY THREAD#)) APPL
WHERE ARCH.THREAD# = APPL.THREAD#
ORDER BY 1;

THREAD# LAST SEQUENCE RECEIVED LAST SEQUENCE APPLIED DIFFERENCE
1 497 497 0
2 283 283 0
3 484 484 0
4 277 277 0

14.        Start up the other RAC standby nodes.

SQL> startup open;

15.      Check pluggable database and status in PRIMARY and STANDBY.

On Primary

 

 SQL> select name, open_mode from v$pdbs;   (or: show pdbs)
NAME OPEN_MODE
PDB$SEED READ ONLY
PDB1 MOUNTED
PDB2 MOUNTED

 On Standby

 SQL> select name, open_mode from v$pdbs;   (or: show pdbs)
NAME OPEN_MODE
PDB$SEED READ ONLY
PDB1 MOUNTED
PDB2 MOUNTED

 

 

16.       Check database ROLE

 On Primary

 SQL> select NAME, DATABASE_ROLE, SWITCHOVER_STATUS,DB_UNIQUE_NAME,CDB,CON_ID from v$database;

 

NAME DATABASE_ROLE SWITCHOVER_STATUS DB_UNIQUE_NAME CDB CON_ID
CDBORCL PRIMARY TO STANDBY cdborcl YES 0

 

On Standby

             SQL> select NAME, DATABASE_ROLE, SWITCHOVER_STATUS,DB_UNIQUE_NAME,CDB,CON_ID from v$database;

NAME DATABASE_ROLE
SWITCHOVER_STATUS DB_UNIQUE_NAME CDB CON_ID
CDBORCL PHYSICAL STANDBY NOT ALLOWED cdborcdr YES 0

 

 

17.       Confirm the primary database protection mode.

 

SQL> SELECT PROTECTION_MODE FROM V$DATABASE;

PROTECTION_MODE
--------------------
MAXIMUM PERFORMANCE

At this point, physical standby creation is complete!

Test SCENARIOS

=============

 You can run several test scenarios to make sure changes are getting shipped to the standby location.

  • Test 1 – Create a table in any PDB schema and verify the change takes effect on the Standby site.

Primary

Create a table, insert some values

Standby

  1. alter pluggable database all open read only;
  2. Connect to the PDB.
  3. select count(*) from schema.tablename;

  • Test 2 – Clone a PDB and verify the change takes effect on the Standby site.

Primary

CREATE PLUGGABLE DATABASE pdb2 ADMIN USER pdb2adm IDENTIFIED BY meadmin;

SQL> select name, open_mode from v$pdbs;   (or: show pdbs)

Standby

1. Verify the alert log.
2. SQL> select name, open_mode from v$pdbs;   (or: show pdbs)
Reference
  • In this exercise we bring up the standby database PRASSBY, i.e. on host LABSBY1.
  • LISTENER12C – a static listener used to build the standby database, on port 1525.
  • PRASSBY_DR – connect string to the DG instance, used for the standby build and log shipping.
  • PRASPRD – connect string to the primary instance, used for the standby build and log shipping.

Oracle 12c Add / Modify SCAN IP, SCAN NAME and SCAN Listener

In this article I demonstrate how to add/modify the SCAN IP and SCAN host name and configure the SCAN listener without reinstalling the Grid Infrastructure environment. This is needed, for example, when the network team assigns a wrong SCAN IP, when the DBA sets up the Grid Infrastructure environment with a wrong SCAN IP/hostname, or when the network subnet/hostname changes after the setup.

Check the current setup on Node1

racnodepdb01.localdomain.oragrid:/home/oragrid $ cat /etc/hosts
#####Prod IP######
10.133.65.201   racnodepdb01.localdomain            racnodepdb01
10.133.65.202   racnodepdb02.localdomain            racnodepdb02
####SCAN IP#####
10.133.65.205   racnodepdb01-orascan.localdomain    racnodepdb01-orascan
10.133.65.206   racnodepdb02-orascan.localdomain    racnodepdb02-orascan
####Heartbeat IP####
172.26.88.201   racnodepdb01-orahb.localdomain      racnodepdb01-orahb
172.26.88.202   racnodepdb02-orahb.localdomain      racnodepdb02-orahb
####VIP####
10.133.65.209   racnodepdb01-oravip.localdomain     racnodepdb01-oravip
10.133.65.210   racnodepdb02-oravip.localdomain     racnodepdb02-oravip
####NAS IP####
172.26.112.205  racnodepdb01-nas.localdomain        racnodepdb01-nas
172.26.112.206  racnodepdb02-nas.localdomain        racnodepdb02-nas

racnodepdb01.localdomain.oragrid:/home/oragrid $ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA_CRS1.dg
               ONLINE  ONLINE       racnodepdb01             STABLE
               ONLINE  ONLINE       racnodepdb02             STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       racnodepdb01             STABLE
               ONLINE  ONLINE       racnodepdb02             STABLE
ora.asm
               ONLINE  ONLINE       racnodepdb01             Started,STABLE
               ONLINE  ONLINE       racnodepdb02             Started,STABLE
ora.net1.network
               ONLINE  ONLINE       racnodepdb01             STABLE
               ONLINE  ONLINE       racnodepdb02             STABLE
ora.ons
               ONLINE  ONLINE       racnodepdb01             STABLE
               ONLINE  ONLINE       racnodepdb02             STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       racnodepdb01             STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       racnodepdb01             169.254.172.119 172.26.88.201,STABLE
ora.cvu
      1        ONLINE  ONLINE       racnodepdb01             STABLE
ora.mgmtdb
      1        ONLINE  ONLINE       racnodepdb01             Open,STABLE
ora.oc4j
      1        ONLINE  ONLINE       racnodepdb01             STABLE
ora.racnodepdb01.vip
      1        ONLINE  ONLINE       racnodepdb01             STABLE
ora.racnodepdb02.vip
      1        ONLINE  ONLINE       racnodepdb02             STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       racnodepdb01             STABLE
--------------------------------------------------------------------------------

racnodepdb01.localdomain.oragrid:/home/oragrid $ srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node racnodepdb01

racnodepdb01.localdomain.oragrid:/home/oragrid $ srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is running on node racnodepdb01

racnodepdb01.localdomain.oragrid:/home/oragrid $ srvctl config scan
SCAN name: racnodepdb01-orascan.localdomain, Network: 1
Subnet IPv4: 10.133.65.0/255.255.255.0/bond0.2265, static
Subnet IPv6:
SCAN 0 IPv4 VIP: 10.133.65.205
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:

racnodepdb01.localdomain.oragrid:/home/oragrid $ srvctl config scan_listener
SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1521
Registration invited nodes:
Registration invited subnets:
SCAN Listener is enabled.
SCAN Listener is individually enabled on nodes:
SCAN Listener is individually disabled on nodes:

Check the Current set up  in Node2

racnodepdb01.localdomain.oragrid:/home/oragrid $ cat /etc/hosts

#####Prod IP######

10.133.65.201   racnodepdb01.localdomain            racnodepdb01

10.133.65.202   racnodepdb02.localdomain            racnodepdb02

####SCAN IP#####

10.133.65.205   racnodepdb01-orascan.localdomain    racnodepdb01-orascan

10.133.65.206   racnodepdb02-orascan.localdomain    racnodepdb02-orascan

####Heartbeat IP####

172.26.88.201   racnodepdb01-orahb.localdomain      racnodepdb01-orahb

172.26.88.202   racnodepdb02-orahb.localdomain      racnodepdb02-orahb

####VIP####

10.133.65.209   racnodepdb01-oravip.localdomain     racnodepdb01-oravip

10.133.65.210   racnodepdb02-oravip.localdomain     racnodepdb02-oravip

####NAS IP####

172.26.112.205  racnodepdb01-nas.localdomain        racnodepdb01-nas

172.26.112.206  racnodepdb02-nas.localdomain        racnodepdb02-nas

racnodepdb02.localdomain.oragrid:/home/oragrid $ srvctl status scan

SCAN VIP scan1 is enabled

SCAN VIP scan1 is running on node racnodepdb01

racnodepdb02.localdomain.oragrid:/home/oragrid $ srvctl status scan_listener

SCAN Listener LISTENER_SCAN1 is enabled

SCAN listener LISTENER_SCAN1 is running on node racnodepdb01

racnodepdb02.localdomain.oragrid:/home/oragrid $ srvctl config scan

SCAN name: racnodepdb01-orascan.localdomain, Network: 1

Subnet IPv4: 10.133.65.0/255.255.255.0/bond0.2265, static

Subnet IPv6:

SCAN 0 IPv4 VIP: 10.133.65.205

SCAN VIP is enabled.

SCAN VIP is individually enabled on nodes:

SCAN VIP is individually disabled on nodes:

racnodepdb02.localdomain.oragrid:/home/oragrid $ srvctl config scan_listener

SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1521

Registration invited nodes:

Registration invited subnets:

SCAN Listener is enabled.

SCAN Listener is individually enabled on nodes:

SCAN Listener is individually disabled on nodes:

Stop SCAN and SCAN_LISTENER (run from any one node as the GRID owner)

racnodepdb01.localdomain.oragrid:/home/oragrid$ srvctl stop scan_listener

racnodepdb01.localdomain.oragrid:/home/oragrid$ srvctl stop scan

racnodepdb01.localdomain.oragrid:/home/oragrid $ srvctl status scan

SCAN VIP scan1 is enabled

SCAN VIP scan1 is not running

racnodepdb01.localdomain.oragrid:/home/oragrid $ srvctl status scan_listener

SCAN Listener LISTENER_SCAN1 is enabled

SCAN listener LISTENER_SCAN1 is not running

racnodepdb01.localdomain.oragrid:/home/oragrid $ srvctl config scan

SCAN name: racnodepdb01-orascan.localdomain, Network: 1

Subnet IPv4: 10.133.65.0/255.255.255.0/bond0.2265, static

Subnet IPv6:

SCAN 0 IPv4 VIP: 10.133.65.205

SCAN VIP is enabled.

SCAN VIP is individually enabled on nodes:

SCAN VIP is individually disabled on nodes:

racnodepdb01.localdomain.oragrid:/home/oragrid $ srvctl config scan_listener

SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1521

Registration invited nodes:

Registration invited subnets:

SCAN Listener is enabled.

SCAN Listener is individually enabled on nodes:

SCAN Listener is individually disabled on nodes:

racnodepdb01.localdomain.oragrid:/home/oragrid $ crsctl stat res -t

——————————————————————————–

Name           Target  State        Server                   State details

——————————————————————————–

Local Resources

——————————————————————————–

ora.DATA_CRS1.dg

ONLINE  ONLINE       racnodepdb01              STABLE

ONLINE  ONLINE       racnodepdb02              STABLE

ora.LISTENER.lsnr

ONLINE  ONLINE       racnodepdb01              STABLE

ONLINE  ONLINE       racnodepdb02              STABLE

ora.asm

ONLINE  ONLINE       racnodepdb01              Started,STABLE

ONLINE  ONLINE       racnodepdb02              Started,STABLE

ora.net1.network

ONLINE  ONLINE       racnodepdb01              STABLE

ONLINE  ONLINE       racnodepdb02              STABLE

ora.ons

ONLINE  ONLINE       racnodepdb01              STABLE

ONLINE  ONLINE       racnodepdb02              STABLE

——————————————————————————–

Cluster Resources

——————————————————————————–

ora.LISTENER_SCAN1.lsnr

      1        OFFLINE OFFLINE                               STABLE

ora.MGMTLSNR

1        ONLINE  ONLINE       racnodepdb01              169.254.172.119 172.

26.88.201,STABLE

ora.cvu

1        ONLINE  ONLINE       racnodepdb01              STABLE

ora.mgmtdb

1        ONLINE  ONLINE       racnodepdb01              Open,STABLE

ora.oc4j

1        ONLINE  ONLINE       racnodepdb01              STABLE

ora.racnodepdb01.vip

1        ONLINE  ONLINE       racnodepdb01              STABLE

ora.racnodepdb02.vip

1        ONLINE  ONLINE       racnodepdb02              STABLE

ora.scan1.vip

      1        OFFLINE OFFLINE                               STABLE

 

Modify /etc/hosts as root with the new SCAN host name and IPs. (Configuring DNS for SCAN is not covered here; note that /etc/hosts cannot round-robin multiple SCAN IPs, so DNS is the supported choice for a three-IP SCAN.)

racnodepdb01.localdomain.oragrid:/home/oragrid $ cat /etc/hosts

#####Prod IP######

10.133.65.201   racnodepdb01.localdomain            racnodepdb01

10.133.65.202   racnodepdb02.localdomain            racnodepdb02

####SCAN IP#####

10.133.65.205   racnodepdb01-orascan.localdomain    racnodepdb01-orascan

10.133.65.206   racnodepdb02-orascan.localdomain    racnodepdb02-orascan

####Heartbeat IP####

172.26.88.201   racnodepdb01-orahb.localdomain      racnodepdb01-orahb

172.26.88.202   racnodepdb02-orahb.localdomain      racnodepdb02-orahb

####VIP####

10.133.65.209   racnodepdb01-oravip.localdomain     racnodepdb01-oravip

10.133.65.210   racnodepdb02-oravip.localdomain     racnodepdb02-oravip

####NAS IP####

172.26.112.205  racnodepdb01-nas.localdomain        racnodepdb01-nas

172.26.112.206  racnodepdb02-nas.localdomain        racnodepdb02-nas

####New SCAN IP####

10.133.65.51     racnodepdb-orascan.localdomain            racnodepdb-orascan

10.133.65.52     racnodepdb-orascan.localdomain            racnodepdb-orascan

10.133.65.53     racnodepdb-orascan.localdomain            racnodepdb-orascan
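The three new SCAN entries above can be sanity-checked before touching the cluster. The following is a small sketch (not part of the original procedure; the helper name `check_scan_entries` is illustrative) that confirms each expected IP maps to the new SCAN name in a hosts file:

```shell
# check_scan_entries: verify that each given IP maps to the expected SCAN
# name in a hosts file. Pass an alternate file as $1 for dry runs.
check_scan_entries() {
  hosts_file="$1"; scan_name="$2"; shift 2
  rc=0
  for ip in "$@"; do
    if grep -Eq "^${ip}[[:space:]].*${scan_name}" "$hosts_file"; then
      echo "OK: ${ip} -> ${scan_name}"
    else
      echo "MISSING: ${ip} -> ${scan_name}"
      rc=1
    fi
  done
  return $rc
}

# Example with the values from this article:
# check_scan_entries /etc/hosts racnodepdb-orascan 10.133.65.51 10.133.65.52 10.133.65.53
```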

 

Change SCAN NAME and Subnet

Here we modify racnodepdb01-orascan.localdomain to racnodepdb-orascan.localdomain, with no change to the SUBNET or NETMASK.

Run as root:

# $GRID_HOME/bin/crsctl modify type ora.scan_vip.type -attr "ATTRIBUTE=SCAN_NAME,DEFAULT_VALUE=racnodepdb-orascan" -unsupported

# $GRID_HOME/bin/crsctl modify resource ora.net1.network -attr "USR_ORA_SUBNET=<new subnet id>"

# $GRID_HOME/bin/crsctl modify resource ora.net1.network -attr "USR_ORA_NETMASK=<new subnet mask>"

(The last two commands are only required if the subnet or netmask also changes; in this example they do not.)

Modify / Change SCAN

Run as root

# /racnode/oragrid/grid1/bin/srvctl modify scan -n racnodepdb-orascan

# /racnode/oragrid/grid1/bin/srvctl config scan

SCAN name: racnodepdb-orascan, Network: 1

Subnet IPv4: 10.133.65.0/255.255.255.0/bond0.2265, static

Subnet IPv6:

SCAN 0 IPv4 VIP: 10.133.65.51

SCAN VIP is enabled.

SCAN VIP is individually enabled on nodes:

SCAN VIP is individually disabled on nodes:

SCAN 1 IPv4 VIP: 10.133.65.52

SCAN VIP is enabled.

SCAN VIP is individually enabled on nodes:

SCAN VIP is individually disabled on nodes:

SCAN 2 IPv4 VIP: 10.133.65.53

SCAN VIP is enabled.

SCAN VIP is individually enabled on nodes:

SCAN VIP is individually disabled on nodes:

Update the SCAN listener configuration so that the number of SCAN listeners matches the new number of SCAN VIPs:

# /racnode/oragrid/grid1/bin/srvctl modify scan_listener -u

# /racnode/oragrid/grid1/bin/srvctl config scan_listener

SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1521

Registration invited nodes:

Registration invited subnets:

SCAN Listener is enabled.

SCAN Listener is individually enabled on nodes:

SCAN Listener is individually disabled on nodes:

SCAN Listener LISTENER_SCAN2 exists. Port: TCP:1521

Registration invited nodes:

Registration invited subnets:

SCAN Listener is enabled.

SCAN Listener is individually enabled on nodes:

SCAN Listener is individually disabled on nodes:

SCAN Listener LISTENER_SCAN3 exists. Port: TCP:1521

Registration invited nodes:

Registration invited subnets:

SCAN Listener is enabled.

SCAN Listener is individually enabled on nodes:

SCAN Listener is individually disabled on nodes:

racnodepdb01.localdomain.oragrid:/home/oragrid $ crsctl stat res -t

——————————————————————————–

Name           Target  State        Server                   State details

——————————————————————————–

Local Resources

——————————————————————————–

ora.DATA_CRS1.dg

ONLINE  ONLINE       racnodepdb01              STABLE

ONLINE  ONLINE       racnodepdb02              STABLE

ora.LISTENER.lsnr

ONLINE  ONLINE       racnodepdb01              STABLE

ONLINE  ONLINE       racnodepdb02              STABLE

ora.asm

ONLINE  ONLINE       racnodepdb01              Started,STABLE

ONLINE  ONLINE       racnodepdb02              Started,STABLE

ora.net1.network

ONLINE  ONLINE       racnodepdb01              STABLE

ONLINE  ONLINE       racnodepdb02              STABLE

ora.ons

ONLINE  ONLINE       racnodepdb01              STABLE

ONLINE  ONLINE       racnodepdb02              STABLE

 

——————————————————————————–

Cluster Resources

——————————————————————————–

ora.LISTENER_SCAN1.lsnr

      1        OFFLINE OFFLINE                               STABLE

ora.LISTENER_SCAN2.lsnr

      1        OFFLINE OFFLINE                               STABLE

ora.LISTENER_SCAN3.lsnr

      1        OFFLINE OFFLINE                               STABLE

ora.MGMTLSNR

1        ONLINE  ONLINE       racnodepdb01              169.254.172.119 172.

26.88.201,STABLE

ora.cvu

1        ONLINE  ONLINE       racnodepdb01              STABLE

ora.mgmtdb

1        ONLINE  ONLINE       racnodepdb01              Open,STABLE

ora.oc4j

1        ONLINE  ONLINE       racnodepdb01              STABLE

ora.racnodepdb01.vip

1        ONLINE  ONLINE       racnodepdb01              STABLE

ora.racnodepdb02.vip

1        ONLINE  ONLINE       racnodepdb02              STABLE

ora.scan1.vip

      1        OFFLINE OFFLINE                               STABLE

ora.scan2.vip

      1        OFFLINE OFFLINE                               STABLE

ora.scan3.vip

      1        OFFLINE OFFLINE                               STABLE

——————————————————————————–
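The SCAN VIPs and SCAN listeners are still OFFLINE in the output above, so the final step is to start them again and confirm the new name is in use. A minimal sketch (assuming srvctl is on the grid owner's PATH; the `SRVCTL` variable and `restart_scan` helper are mine, added so the sequence can be dry-run with a stub):

```shell
# restart_scan: bring the renamed SCAN resources back online and show status.
# SRVCTL defaults to the real srvctl; override it (e.g. SRVCTL=echo) to dry-run.
SRVCTL="${SRVCTL:-srvctl}"

restart_scan() {
  "$SRVCTL" start scan
  "$SRVCTL" start scan_listener
  "$SRVCTL" status scan
  "$SRVCTL" status scan_listener
}

# On a live cluster, run as the grid owner:
# restart_scan
```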

ORA-46362: Could not translate variable ORACLE_BASE in Oracle 12c

In this article I will demonstrate how to resolve the issue ORA-46362: Could not translate variable ORACLE_BASE.

Problem

After a database or instance has been manually added as a cluster resource, starting the database, instance, or its services with srvctl fails with the error below:

racnodepdb01.pii.star:/home/oradb>srvctl start service -db C1RACDB

PRCD-1133 : Failed to start services for database

CRS-5017: The resource action "ora.C1RACDB.iam_pracnode1.svc clean" encountered the following error:

ORA-46362: Could not translate variable ORACLE_BASE. For details refer to "(:CLSN00106:)" in "/racnode/orabase/diag/crs/racnodepdb02/crs/trace/crsd_oraagent_oradb.trc".

CRS-5017: The resource action "ora.C1RACDB.iam_pracnode1.svc check" encountered the following error:

ORA-46362: Could not translate variable ORACLE_BASE. For details refer to "(:CLSN00109:)" in "/racnode/orabase/diag/crs/racnodepdb02/crs/trace/crsd_oraagent_oradb.trc".

CRS-2680: Clean of 'ora.C1RACDB.iam_pracnode1.svc' on 'racnodepdb02' failed

CRS-5017: The resource action "ora.C1RACDB.adhoc_pracnode1.svc clean" encountered the following error:

ORA-46362: Could not translate variable ORACLE_BASE. For details refer to "(:CLSN00106:)" in "/racnode/orabase/diag/crs/racnodepdb02/crs/trace/crsd_oraagent_oradb.trc".

……………………….

 

Resolution

Set ORACLE_BASE explicitly on the database resource so the Clusterware agent can resolve it:

srvctl setenv database -db [DBNAME] -env ORACLE_BASE=$ORACLE_BASE
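The fix can be applied and verified in one pass. The sketch below is not from the article: the `fix_oracle_base` helper and the `SRVCTL` stub variable are mine, and the database name and ORACLE_BASE path (taken from the trace path shown above) are illustrative; substitute your own.

```shell
# fix_oracle_base: record ORACLE_BASE on the database resource, echo it back
# for confirmation, then retry the failing service start.
SRVCTL="${SRVCTL:-srvctl}"

fix_oracle_base() {
  db="$1"; base="$2"
  "$SRVCTL" setenv database -db "$db" -env "ORACLE_BASE=$base"
  "$SRVCTL" getenv database -db "$db"    # confirm the variable is recorded
  "$SRVCTL" start service -db "$db"      # retry the start that failed
}

# On a live cluster, run as the database owner, e.g.:
# fix_oracle_base C1RACDB /racnode/orabase
```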

***