Delete a Node from Oracle Database 12.1.0.2

Delete the Instance from the Oracle RAC Database

Verify that all the instances are up and running.
[oracle@gract3 ~]$ srvctl  status database -d cdbn
  Instance cdbn1 is running on node gract1
  Instance cdbn2 is running on node gract2
  Instance cdbn3 is running on node gract3

Check the resources running on node gract3:
[grid@gract3 ~]$ crs | egrep 'gract3|STATE|--'
  Resource NAME                  TARGET     STATE           SERVER       STATE_DETAILS                       
  -------------------------      ---------- ----------      ------------ ------------------                  
  ora.ACFS_DG1.ACFS_VOL1.advm    ONLINE     ONLINE          gract3       Volume device /dev/asm/acfs_vol1-443 is online,STABLE
  ora.ACFS_DG1.dg                ONLINE     ONLINE          gract3       STABLE   
  ora.ASMNET1LSNR_ASM.lsnr       ONLINE     ONLINE          gract3       STABLE   
  ora.DATA.dg                    ONLINE     ONLINE          gract3       STABLE   
  ora.LISTENER.lsnr              ONLINE     ONLINE          gract3       STABLE   
  ora.acfs_dg1.acfs_vol1.acfs    ONLINE     ONLINE          gract3       mounted on /u01/acfs/acfs-vol1,STABLE
  ora.net1.network               ONLINE     ONLINE          gract3       STABLE   
  ora.ons                        ONLINE     ONLINE          gract3       STABLE   
  ora.proxy_advm                 ONLINE     ONLINE          gract3       STABLE   
  Resource NAME               INST   TARGET       STATE        SERVER          STATE_DETAILS
  --------------------------- ----   ------------ ------------ --------------- -----------------------------------------
  ora.LISTENER_SCAN2.lsnr        1   ONLINE       ONLINE       gract3          STABLE  
  ora.MGMTLSNR                   1   ONLINE       ONLINE       gract3          169.254.145.224 192.168.2.113,STABLE
  ora.asm                        2   ONLINE       ONLINE       gract3          Started,STABLE  
  ora.cdbn.db                    3   ONLINE       ONLINE       gract3          Open,STABLE  
  ora.cvu                        1   ONLINE       ONLINE       gract3          STABLE  
  ora.gract3.vip                 1   ONLINE       ONLINE       gract3          STABLE  
  ora.mgmtdb                     1   ONLINE       ONLINE       gract3          Open,STABLE  
  ora.scan2.vip                  1   ONLINE       ONLINE       gract3          STABLE  

Verify the current OCR backups using the command: ocrconfig -showbackup.
[grid@gract3 ~]$ ocrconfig -showbackup
  gract1     2014/08/14 17:07:49     /u01/app/12102/grid/cdata/gract/backup00.ocr     0
  gract1     2014/08/14 13:07:45     /u01/app/12102/grid/cdata/gract/backup01.ocr     0
  gract1     2014/08/14 09:07:40     /u01/app/12102/grid/cdata/gract/backup02.ocr     0
  gract1     2014/08/13 09:07:14     /u01/app/12102/grid/cdata/gract/day.ocr     0
  gract1     2014/08/09 18:45:09     /u01/app/12102/grid/cdata/gract/week.ocr     0
  gract1     2014/08/09 14:38:36     /u01/app/12102/grid/cdata/gract/backup_20140809_143836.ocr     0     
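If no sufficiently recent backup were listed, a manual OCR backup could be taken before proceeding. A minimal sketch (ocrconfig must be run with root privileges):
[root@gract1 ~]# $GRID_HOME/bin/ocrconfig -manualbackup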

Ensure that all the instances are registered in the default CRS Listener.
[grid@gract3 ~]$ lsnrctl status  LISTENER_SCAN2
  LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 14-AUG-2014 17:03:42
  Copyright (c) 1991, 2014, Oracle.  All rights reserved.
  Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN2)))
  STATUS of the LISTENER
  ------------------------
  Alias                     LISTENER_SCAN2
  Version                   TNSLSNR for Linux: Version 12.1.0.2.0 - Production
  Start Date                09-AUG-2014 14:10:46
  Uptime                    5 days 2 hr. 52 min. 56 sec
  Trace Level               off
  Security                  ON: Local OS Authentication
  SNMP                      OFF
  Listener Parameter File   /u01/app/12102/grid/network/admin/listener.ora
  Listener Log File         /u01/app/grid/diag/tnslsnr/gract3/listener_scan2/alert/log.xml
  Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN2)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.1.191)(PORT=1521)))
  Services Summary...
  Service "cdbn" has 3 instance(s).
    Instance "cdbn1", status READY, has 1 handler(s) for this service...
    Instance "cdbn2", status READY, has 1 handler(s) for this service...
    Instance "cdbn3", status READY, has 1 handler(s) for this service...
  Service "cdbnXDB" has 3 instance(s).
    Instance "cdbn1", status READY, has 1 handler(s) for this service...
    Instance "cdbn2", status READY, has 1 handler(s) for this service...
    Instance "cdbn3", status READY, has 1 handler(s) for this service...
  Service "gract" has 1 instance(s).
    Instance "-MGMTDB", status READY, has 1 handler(s) for this service...
  The command completed successfully

Start DBCA from a node other than the one that you are removing (a silent-mode sketch follows these steps) and select 
  --> "Real Application Clusters" 
    --> "Instance Management"
      --> "Delete Instance"
        --> Accept the alert windows to delete the instance.
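As an alternative to the GUI, DBCA can be run in silent mode from one of the remaining nodes. A minimal sketch (substitute your own SYS password):
[oracle@gract1 ~]$ dbca -silent -deleteInstance -nodeList gract3 -gdbName cdbn \
      -instanceName cdbn3 -sysDBAUserName sys -sysDBAPassword <sys_password>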

Verify that the instance has been deleted and its redo thread is disabled by querying gv$instance and v$thread.
SQL> select INST_ID,INSTANCE_NUMBER,INSTANCE_NAME,HOST_NAME from gv$instance;
     INST_ID INSTANCE_NUMBER INSTANCE_NAME    HOST_NAME
  ---------- --------------- ---------------- ------------------------------
         1         1 cdbn1        gract1.example.com
         2         2 cdbn2        gract2.example.com

SQL>   select THREAD# , STATUS, INSTANCE from v$thread;
   THREAD# STATUS INSTANCE
  ---------- ------ ------------------------------
     1 OPEN   cdbn1
     2 OPEN   cdbn2

 Verify that the redo thread for the deleted instance has been disabled. If it is still enabled, disable it as follows:
    SQL> ALTER DATABASE DISABLE THREAD 3;
--> No need to run the above command - THREAD# 3 is already disabled
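To double-check the thread state explicitly, the ENABLED column of v$thread can also be queried (a sketch); the thread of a deleted instance should either show ENABLED = 'DISABLED' or no longer appear once its redo logs have been dropped:
SQL> select THREAD#, STATUS, ENABLED from v$thread;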

Delete the Node from the Cluster

If there is a listener running from the Oracle home on the RAC node that you are deleting, you must disable and stop it before removing the
Oracle RAC software, as in the following commands:
    $ srvctl disable listener -l <listener_name> -n <NodeToBeDeleted>
    $ srvctl stop listener -l <listener_name> -n <NodeToBeDeleted>
Checking listeners:
[grid@gract3 ~]$  ps -elf | grep tns
  0 S grid     11783     1  0  80   0 - 42932 ep_pol Aug12 ?        00:00:05 /u01/app/12102/grid/bin/tnslsnr MGMTLSNR -no_crs_notify -inherit
  0 S grid     23099     1  0  80   0 - 42960 ep_pol Aug09 ?        00:00:14 /u01/app/12102/grid/bin/tnslsnr ASMNET1LSNR_ASM -no_crs_notify -inherit
  0 S grid     23140     1  0  80   0 - 43080 ep_pol Aug09 ?        00:00:17 /u01/app/12102/grid/bin/tnslsnr LISTENER -no_crs_notify -inherit
  0 S grid     23162     1  0  80   0 - 43034 ep_pol Aug09 ?        00:00:38 /u01/app/12102/grid/bin/tnslsnr LISTENER_SCAN2 -no_crs_notify -inherit
--> No need to run the above commands as all listeners run from GRID_HOME

Run the following command from the $ORACLE_HOME/oui/bin directory on the node that you are deleting to update the inventory on that node:
[oracle@gract3 ~]$  $ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME  CLUSTER_NODES=gract3 -local
  Starting Oracle Universal Installer...
  Checking swap space: must be greater than 500 MB.   Actual 4198 MB    Passed
  The inventory pointer is located at /etc/oraInst.loc
  'UpdateNodeList' was successful.

Remove the Oracle RAC software by running the following command from the $ORACLE_HOME/deinstall directory on the node to be deleted:
[oracle@gract3 ~]$ $ORACLE_HOME/deinstall/deinstall -local
  Checking for required files and bootstrapping ...
  Please wait ...
  Location of logs /u01/app/oraInventory/logs/
  ############ ORACLE DECONFIG TOOL START ############
  ######################### DECONFIG CHECK OPERATION START #########################
  ## [START] Install check configuration ##
  Checking for existence of the Oracle home location /u01/app/oracle/product/12102/racdb
  Oracle Home type selected for deinstall is: Oracle Real Application Cluster Database
  Oracle Base selected for deinstall is: /u01/app/oracle
  Checking for existence of central inventory location /u01/app/oraInventory
  Checking for existence of the Oracle Grid Infrastructure home /u01/app/12102/grid
  The following nodes are part of this cluster: gract3,gract2,gract1
  Checking for sufficient temp space availability on node(s) : 'gract3'
  ## [END] Install check configuration ##
  Network Configuration check config START
  Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_check2014-08-14_05-36-26-PM.log
  Network Configuration check config END
  Database Check Configuration START
  Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_check2014-08-14_05-36-37-PM.log
  Use comma as separator when specifying list of values as input
  Specify the list of database names that are configured locally on this node for this Oracle home. 
  Local configurations of the discovered databases will be removed [cdbn]: 
  Database Check Configuration END
  Oracle Configuration Manager check START
  OCM check log file location : /u01/app/oraInventory/logs//ocm_check8896.log
  Oracle Configuration Manager check END
  ######################### DECONFIG CHECK OPERATION END #########################
  ####################### DECONFIG CHECK OPERATION SUMMARY #######################
  Oracle Grid Infrastructure Home is: /u01/app/12102/grid
  The following nodes are part of this cluster: gract3,gract2,gract1
  The cluster node(s) on which the Oracle home deinstallation will be performed are:gract3
  Oracle Home selected for deinstall is: /u01/app/oracle/product/12102/racdb
  Inventory Location where the Oracle home registered is: /u01/app/oraInventory
  The option -local will not modify any database configuration for this Oracle home.
  Checking the config status for CCR
  Oracle Home exists with CCR directory, but CCR is not configured
  CCR check is finished
  Do you want to continue (y - yes, n - no)? [n]: y 
  A log of this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2014-08-14_05-35-57-PM.out'
  Any error messages from this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2014-08-14_05-35-57-PM.err'
  ######################## DECONFIG CLEAN OPERATION START ########################
  Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_clean2014-08-14_05-40-55-PM.log
  Network Configuration clean config START
  Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_clean2014-08-14_05-40-55-PM.log
  Network Configuration clean config END
  Oracle Configuration Manager clean START
  OCM clean log file location : /u01/app/oraInventory/logs//ocm_clean8896.log
  Oracle Configuration Manager clean END
  ######################### DECONFIG CLEAN OPERATION END #########################
  ####################### DECONFIG CLEAN OPERATION SUMMARY #######################
  Cleaning the config for CCR
  As CCR is not configured, so skipping the cleaning of CCR configuration
  CCR clean is finished
  #######################################################################
  ############# ORACLE DECONFIG TOOL END #############
  Using properties file /tmp/deinstall2014-08-14_05-32-27PM/response/deinstall_2014-08-14_05-35-57-PM.rsp
  Location of logs /u01/app/oraInventory/logs/
  ############ ORACLE DEINSTALL TOOL START ############
  ####################### DEINSTALL CHECK OPERATION SUMMARY #######################
  A log of this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2014-08-14_05-35-57-PM.out'
  Any error messages from this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2014-08-14_05-35-57-PM.err'
  ######################## DEINSTALL CLEAN OPERATION START ########################
  ## [START] Preparing for Deinstall ##
  Setting LOCAL_NODE to gract3
  Setting CLUSTER_NODES to gract3
  Setting CRS_HOME to false
  Setting oracle.installer.invPtrLoc to /tmp/deinstall2014-08-14_05-32-27PM/oraInst.loc
  Setting oracle.installer.local to true
  ## [END] Preparing for Deinstall ##
  Setting the force flag to false
  Setting the force flag to cleanup the Oracle Base
  Oracle Universal Installer clean START
  Detach Oracle home '/u01/app/oracle/product/12102/racdb' from the central inventory on the local node : Done
  Failed to delete the directory '/u01/app/oracle/product/12102/racdb'. The directory is in use.
  Delete directory '/u01/app/oracle/product/12102/racdb' on the local node : Failed <<<<

  The Oracle Base directory '/u01/app/oracle' will not be removed on local node. 
  The directory is in use by Oracle Home '/u01/app/oracle/product/121/racdb'.

  Oracle Universal Installer cleanup was successful.
  Oracle Universal Installer clean END
  ## [START] Oracle install clean ##
  Clean install operation removing temporary directory '/tmp/deinstall2014-08-14_05-32-27PM' on node 'gract3'
  ## [END] Oracle install clean ##
  ######################### DEINSTALL CLEAN OPERATION END #########################
  ####################### DEINSTALL CLEAN OPERATION SUMMARY #######################
  Successfully detached Oracle home '/u01/app/oracle/product/12102/racdb' from the central inventory on the local node.
  Failed to delete directory '/u01/app/oracle/product/12102/racdb' on the local node.
  Oracle Universal Installer cleanup was successful.
  Oracle deinstall tool successfully cleaned up temporary directories.
  #######################################################################
  ############# ORACLE DEINSTALL TOOL END #############
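Because the Oracle home directory could not be deleted (it was reported as in use), it can be removed manually once nothing references it any more. A sketch, assuming no processes or shells still use files under the home:
[root@gract3 ~]# rm -rf /u01/app/oracle/product/12102/racdb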

Update the node list on the remaining nodes, as in the following example:
gract1: 
[root@gract3 Desktop]# ssh gract1
[root@gract1 ~]# su - oracle
-> Active ORACLE_SID:   cdbn1
[oracle@gract1 ~]$ $ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME CLUSTER_NODES=gract1,gract2
  Starting Oracle Universal Installer...
  Checking swap space: must be greater than 500 MB.   Actual 4695 MB    Passed
  The inventory pointer is located at /etc/oraInst.loc
  'UpdateNodeList' was successful.
[oracle@gract1 ~]$  $ORACLE_HOME/OPatch/opatch lsinventory
  ..
  Rac system comprising of multiple nodes
    Local node = gract1
    Remote node = gract2

Verify whether the node to be deleted is active by using the following command from the $CRS_HOME/bin directory:
[grid@gract1 ~]$  olsnodes -s -t
  gract1    Active    Unpinned
  gract2    Active    Unpinned
  gract3    Active    Unpinned
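Note: the node must not be pinned when it is deleted. Had gract3 been reported as Pinned, it would first have to be unpinned as root, for example (a sketch):
[root@gract1 ~]# $GRID_HOME/bin/crsctl unpin css -n gract3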

On gract2:
[root@gract2 ~]# su - oracle
-> Active ORACLE_SID:   cdbn2
[oracle@gract2 ~]$  $ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME CLUSTER_NODES=gract1,gract2 
[oracle@gract2 ~]$ $ORACLE_HOME/OPatch/opatch lsinventory
.. 
  Rac system comprising of multiple nodes
    Local node = gract2
    Remote node = gract1
[root@gract2 ~]# su - grid
[grid@gract2 ~]$ olsnodes -s -t
  gract1    Active    Unpinned
  gract2    Active    Unpinned
  gract3    Active    Unpinned

Disable the Oracle Clusterware applications and daemons running on the node. 
Run the rootcrs.pl script as root from the $CRS_HOME/crs/install directory on the node to be deleted 
  (if it is the last node in the cluster, add the -lastnode option):
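For reference, on the last node the call would look like the following sketch (not needed here, since gract1 and gract2 remain in the cluster):
# $GRID_HOME/crs/install/rootcrs.pl -deconfig -force -lastnode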
[root@gract3 Desktop]# $GRID_HOME/crs/install/rootcrs.pl -deconfig -force
  Using configuration parameter file: /u01/app/12102/grid/crs/install/crsconfig_params
  Network 1 exists
  Subnet IPv4: 192.168.1.0/255.255.255.0/eth1, dhcp
  Subnet IPv6: 
  Ping Targets: 
  Network is enabled
  Network is individually enabled on nodes: 
  Network is individually disabled on nodes: 
  VIP exists: network number 1, hosting node gract1
  VIP IPv4 Address: -/gract1-vip/192.168.1.160
  VIP IPv6 Address: 
  VIP is enabled.
  VIP is individually enabled on nodes: 
  VIP is individually disabled on nodes: 
  ..
  ONS exists: Local port 6100, remote port 6200, EM port 2016, Uses SSL false
  ONS is enabled
  ONS is individually enabled on nodes: 
  ONS is individually disabled on nodes: 
  PRCC-1017 : ons was already stopped on gract3
  PRCR-1005 : Resource ora.ons is already stopped
  PRKO-2440 : Network resource is already stopped.
  PRKO-2313 : A VIP named gract3 does not exist.
  CRS-2797: Shutdown is already in progress for 'gract3', waiting for it to complete
  CRS-2797: Shutdown is already in progress for 'gract3', waiting for it to complete
  CRS-4133: Oracle High Availability Services has been stopped.
  2014/08/14 18:16:26 CLSRSC-336: Successfully deconfigured Oracle Clusterware stack on this node

From any node that you are not deleting, run the following command from the $CRS_HOME/bin directory as root to delete the node from the cluster:
[root@gract1 ~]# $GRID_HOME/bin/crsctl delete node -n gract3
  CRS-4661: Node gract3 successfully deleted.

Update the node list on the node to be deleted (gract3) by running the following command from the $CRS_HOME/oui/bin directory:
[grid@gract3 ~]$ $GRID_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$GRID_HOME CLUSTER_NODES=gract3 -local
  Starting Oracle Universal Installer...
  Checking swap space: must be greater than 500 MB.   Actual 4964 MB    Passed
  The inventory pointer is located at /etc/oraInst.loc
  'UpdateNodeList' was successful.

Update the node list on the remaining nodes by running the following command from $CRS_HOME/oui/bin on each of the remaining nodes in the cluster:
on gract1:
[grid@gract1 ~]$  $GRID_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$GRID_HOME CLUSTER_NODES={gract1,gract2} -local
  Starting Oracle Universal Installer...
  Checking swap space: must be greater than 500 MB.   Actual 4582 MB    Passed
  The inventory pointer is located at /etc/oraInst.loc
  'UpdateNodeList' was successful.
[grid@gract1 ~]$  $GRID_HOME/OPatch/opatch lsinventory  
  ..
  Patch level status of Cluster nodes :
   Patching Level              Nodes
   --------------              -----
   0                           gract2,gract1

on gract2:
[grid@gract2 ~]$  $GRID_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$GRID_HOME CLUSTER_NODES={gract1,gract2} -local
  Starting Oracle Universal Installer...
  Checking swap space: must be greater than 500 MB.   Actual 4977 MB    Passed
  The inventory pointer is located at /etc/oraInst.loc
  'UpdateNodeList' was successful.
[grid@gract2 ~]$  $GRID_HOME/OPatch/opatch lsinventory  
  ..
  Patch level status of Cluster nodes :
   Patching Level              Nodes
   --------------              -----
   0                           gract2,gract1   

Deinstall the Oracle Clusterware home from the node that you want to delete:
[grid@gract3 ~]$ $GRID_HOME/deinstall/deinstall -local
  Checking for required files and bootstrapping ...
  Please wait ...
  Location of logs /u01/app/oraInventory/logs/
  ############ ORACLE DECONFIG TOOL START ############
  ######################### DECONFIG CHECK OPERATION START #########################
  ## [START] Install check configuration ##
  Checking for existence of the Oracle home location /u01/app/12102/grid
  Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Standalone Server
  Oracle Base selected for deinstall is: /u01/app/grid
  Checking for existence of central inventory location /u01/app/oraInventory
  Checking for existence of the Oracle Grid Infrastructure home 
  ## [END] Install check configuration ##
  Traces log file: /u01/app/oraInventory/logs//crsdc_2014-08-15_08-35-48AM.log
  Network Configuration check config START
  Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_check2014-08-15_08-35-48-AM.log
  Specify all Oracle Restart enabled listeners that are to be de-configured. Enter .(dot) to deselect all.       
    [ASMNET1LSNR_ASM,MGMTLSNR,LISTENER,LISTENER_SCAN3,LISTENER_SCAN2,LISTENER_SCAN1]:
  Network Configuration check config END
  Asm Check Configuration START
  ASM de-configuration trace file location: /u01/app/oraInventory/logs/asmcadc_check2014-08-15_08-36-29-AM.log
  ASM configuration was not detected in this Oracle home. Was ASM configured in this Oracle home (y|n) [n]: n
  ASM was not detected in the Oracle Home
  Database Check Configuration START
  Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_check2014-08-15_08-36-46-AM.log
  Database Check Configuration END
  ######################### DECONFIG CHECK OPERATION END #########################
  ####################### DECONFIG CHECK OPERATION SUMMARY #######################
  Oracle Grid Infrastructure Home is: 
  The following nodes are part of this cluster: null
  The cluster node(s) on which the Oracle home deinstallation will be performed are:null
  Oracle Home selected for deinstall is: /u01/app/12102/grid
  Inventory Location where the Oracle home registered is: /u01/app/oraInventory
  Following Oracle Restart enabled listener(s) will be de-configured: ASMNET1LSNR_ASM,MGMTLSNR,LISTENER,LISTENER_SCAN3,LISTENER_SCAN2,LISTENER_SCAN1
  ASM was not detected in the Oracle Home
  Do you want to continue (y - yes, n - no)? [n]: y
  A log of this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2014-08-15_08-35-46-AM.out'
  Any error messages from this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2014-08-15_08-35-46-AM.err'
  ######################## DECONFIG CLEAN OPERATION START ########################
  Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_clean2014-08-15_08-36-48-AM.log
  ASM de-configuration trace file location: /u01/app/oraInventory/logs/asmcadc_clean2014-08-15_08-36-48-AM.log
  ASM Clean Configuration END
  Network Configuration clean config START
  Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_clean2014-08-15_08-36-48-AM.log
  De-configuring Oracle Restart enabled listener(s): ASMNET1LSNR_ASM,MGMTLSNR,LISTENER,LISTENER_SCAN3,LISTENER_SCAN2,LISTENER_SCAN1
  De-configuring listener: ASMNET1LSNR_ASM
    Stopping listener: ASMNET1LSNR_ASM
    Warning: Failed to stop listener. Listener may not be running.
    Deleting listener: ASMNET1LSNR_ASM
    Listener deleted successfully.
  Listener de-configured successfully.
  De-configuring listener: MGMTLSNR
    Stopping listener: MGMTLSNR
    Warning: Failed to stop listener. Listener may not be running.
    Deleting listener: MGMTLSNR
    Listener deleted successfully.
   Listener de-configured successfully.
  De-configuring listener: LISTENER
    Stopping listener: LISTENER
    Warning: Failed to stop listener. Listener may not be running.
    Deleting listener: LISTENER
    Listener deleted successfully.
  Listener de-configured successfully.
  De-configuring listener: LISTENER_SCAN3
    Stopping listener: LISTENER_SCAN3
    Warning: Failed to stop listener. Listener may not be running.
    Deleting listener: LISTENER_SCAN3
    Listener deleted successfully.
  Listener de-configured successfully.
  De-configuring listener: LISTENER_SCAN2
    Stopping listener: LISTENER_SCAN2
    Warning: Failed to stop listener. Listener may not be running.
    Deleting listener: LISTENER_SCAN2
    Listener deleted successfully.
  Listener de-configured successfully
  De-configuring listener: LISTENER_SCAN1
    Stopping listener: LISTENER_SCAN1
    Warning: Failed to stop listener. Listener may not be running.
    Deleting listener: LISTENER_SCAN1
    Listener deleted successfully.
  Listener de-configured successfully.
  De-configuring Listener configuration file...
  Listener configuration file de-configured successfully.
  De-configuring backup files...
  Backup files de-configured successfully.
  The network configuration has been cleaned up successfully. 
  Network Configuration clean config END
  ######################### DECONFIG CLEAN OPERATION END #########################
  ####################### DECONFIG CLEAN OPERATION SUMMARY #######################
  Following Oracle Restart enabled listener(s) were de-configured successfully: ASMNET1LSNR_ASM,MGMTLSNR,LISTENER,LISTENER_SCAN3,LISTENER_SCAN2,LISTENER_SCAN1
  Oracle Restart is stopped and de-configured successfully.
  #######################################################################
  ############# ORACLE DECONFIG TOOL END #############
  Using properties file /tmp/deinstall2014-08-15_08-33-16AM/response/deinstall_2014-08-15_08-35-46-AM.rsp
  Location of logs /u01/app/oraInventory/logs/
  ############ ORACLE DEINSTALL TOOL START ############
  ####################### DEINSTALL CHECK OPERATION SUMMARY #######################
  A log of this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2014-08-15_08-35-46-AM.out'
  Any error messages from this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2014-08-15_08-35-46-AM.err'
  ######################## DEINSTALL CLEAN OPERATION START ########################
  ## [START] Preparing for Deinstall ##
  Setting LOCAL_NODE to gract3
  Setting CRS_HOME to false
  Setting oracle.installer.invPtrLoc to /tmp/deinstall2014-08-15_08-33-16AM/oraInst.loc
  Setting oracle.installer.local to true
  ## [END] Preparing for Deinstall ##
  Setting the force flag to false
  Setting the force flag to cleanup the Oracle Base
  Oracle Universal Installer clean START
  Detach Oracle home '/u01/app/12102/grid' from the central inventory on the local node : Done
  ..
  Oracle Universal Installer cleanup was successful.
  Oracle Universal Installer clean END
  ## [START] Oracle install clean ##
  Clean install operation removing temporary directory '/tmp/deinstall2014-08-15_08-33-16AM' on node 'gract3'
  ## [END] Oracle install clean ##
  ######################### DEINSTALL CLEAN OPERATION END #########################
  ####################### DEINSTALL CLEAN OPERATION SUMMARY #######################
  Successfully detached Oracle home '/u01/app/12102/grid' from the central inventory on the local node.
  Failed to delete directory '/u01/app/12102/grid' on the local node.
  Oracle Universal Installer cleanup was successful.
  Run 'rm -r /opt/ORCLfmap' as root on node(s) 'gract3' at the end of the session.
   Oracle deinstall tool successfully cleaned up temporary directories.
  #######################################################################
  ############# ORACLE DEINSTALL TOOL END #############

Check the cluster and resource status of our 2-node cluster:
[grid@gract2 ~]$  olsnodes -s -t
  gract1    Active    Unpinned
  gract2    Active    Unpinned
[grid@gract2 ~]$ crs
  *****  Local Resources: *****
  Resource NAME                  TARGET     STATE           SERVER       STATE_DETAILS                       
  -------------------------      ---------- ----------      ------------ ------------------                  
  ora.ACFS_DG1.ACFS_VOL1.advm    ONLINE     ONLINE          gract1       Volume device /dev/asm/acfs_vol1-443 is online,STABLE
  ora.ACFS_DG1.ACFS_VOL1.advm    ONLINE     ONLINE          gract2       Volume device /dev/asm/acfs_vol1-443 is online,STABLE
  ora.ACFS_DG1.dg                ONLINE     ONLINE          gract1       STABLE   
  ora.ACFS_DG1.dg                ONLINE     ONLINE          gract2       STABLE   
  ora.ASMNET1LSNR_ASM.lsnr       ONLINE     ONLINE          gract1       STABLE   
  ora.ASMNET1LSNR_ASM.lsnr       ONLINE     ONLINE          gract2       STABLE   
  ora.DATA.dg                    ONLINE     ONLINE          gract1       STABLE   
  ora.DATA.dg                    ONLINE     ONLINE          gract2       STABLE   
  ora.LISTENER.lsnr              ONLINE     ONLINE          gract1       STABLE   
  ora.LISTENER.lsnr              ONLINE     ONLINE          gract2       STABLE   
  ora.acfs_dg1.acfs_vol1.acfs    ONLINE     ONLINE          gract1       mounted on /u01/acfs/acfs-vol1,STABLE
  ora.acfs_dg1.acfs_vol1.acfs    ONLINE     ONLINE          gract2       mounted on /u01/acfs/acfs-vol1,STABLE
  ora.net1.network               ONLINE     ONLINE          gract1       STABLE   
  ora.net1.network               ONLINE     ONLINE          gract2       STABLE   
  ora.ons                        ONLINE     ONLINE          gract1       STABLE   
  ora.ons                        ONLINE     ONLINE          gract2       STABLE   
  ora.proxy_advm                 ONLINE     ONLINE          gract1       STABLE   
  ora.proxy_advm                 ONLINE     ONLINE          gract2       STABLE   
  *****  Cluster Resources: *****
  Resource NAME               INST   TARGET       STATE        SERVER          STATE_DETAILS
  --------------------------- ----   ------------ ------------ --------------- -----------------------------------------
  ora.LISTENER_SCAN1.lsnr        1   ONLINE       ONLINE       gract2          STABLE  
  ora.LISTENER_SCAN2.lsnr        1   ONLINE       ONLINE       gract1          STABLE  
  ora.LISTENER_SCAN3.lsnr        1   ONLINE       ONLINE       gract1          STABLE  
  ora.MGMTLSNR                   1   ONLINE       ONLINE       gract2          169.254.111.246 192.168.2.112,STABLE
  ora.asm                        1   ONLINE       ONLINE       gract1          Started,STABLE  
  ora.asm                        2   ONLINE       OFFLINE      -               STABLE  
  ora.asm                        3   ONLINE       ONLINE       gract2          Started,STABLE  
  ora.cdbn.db                    1   ONLINE       ONLINE       gract1          Open,STABLE  
  ora.cdbn.db                    2   ONLINE       ONLINE       gract2          Open,STABLE  
  ora.cdbn.db                    3   OFFLINE      OFFLINE      -               Instance Shutdown,STABLE
  ora.cvu                        1   ONLINE       ONLINE       gract2          STABLE  
  ora.gns                        1   ONLINE       ONLINE       gract1          STABLE  
  ora.gns.vip                    1   ONLINE       ONLINE       gract1          STABLE  
  ora.gract1.vip                 1   ONLINE       ONLINE       gract1          STABLE  
  ora.gract2.vip                 1   ONLINE       ONLINE       gract2          STABLE  
  ora.hanfs.export               1   ONLINE       ONLINE       gract1          STABLE  
  ora.havip_id.havip             1   ONLINE       ONLINE       gract1          STABLE  
  ora.mgmtdb                     1   ONLINE       ONLINE       gract2          Open,STABLE  
  ora.oc4j                       1   ONLINE       ONLINE       gract1          STABLE  
  ora.scan1.vip                  1   ONLINE       ONLINE       gract2          STABLE  
  ora.scan2.vip                  1   ONLINE       ONLINE       gract1          STABLE  
  ora.scan3.vip                  1   ONLINE       ONLINE       gract1          STABLE
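
As a final cross-check, the database status can be queried again from a remaining node and compared with the output of the first step. A sketch:
[oracle@gract1 ~]$ srvctl status database -d cdbn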

Reference

  • Adding and Deleting Oracle RAC Nodes for Oracle E-Business Suite Release 12 (Doc ID 1134753.1)

Add a Node to 10.2.0.1 RAC using addNode.sh

Overview

Adding a node to a 10.2 RAC database is done using the following steps:

  • Verify OS settings for the new node ract3
  • Run ./addNode.sh on ract1 from $ORA_CRS_HOME to install the Clusterware software and verify the CW installation
  • Run ./addNode.sh on ract1 from $ORACLE_HOME to install the RDBMS software
  • Use netca on cluster node ract1 to configure the ract3 listener
  • Use dbca on ract1 to add the ASM and RDBMS instances on ract3

Note: OUI writes the addNode.sh logs to /home/oracle/oraInventory/logs/.

OS Verification for node ract3

Verify DNS
[root@ract3 ~]# nslookup ract3
Name:   ract3.example.com
Address: 192.168.1.133

# nslookup ract3int
Name:   ract3int.example.com
Address: 192.168.2.133

Reverse Address resolution :  
# nslookup  192.168.1.133
133.1.168.192.in-addr.arpa      name = ract3.example.com.

# nslookup 192.168.2.133
133.2.168.192.in-addr.arpa      name = ract3int.example.com.

Verify Network devices
#  ifconfig | egrep 'eth|inet addr'
eth0      Link encap:Ethernet  HWaddr 08:00:27:CE:53:81  
          inet addr:10.0.2.15  Bcast:10.0.2.255  Mask:255.255.255.0
eth1      Link encap:Ethernet  HWaddr 08:00:27:F4:CF:0C  
          inet addr:192.168.1.133  Bcast:192.168.1.255  Mask:255.255.255.0
eth2      Link encap:Ethernet  HWaddr 08:00:27:CA:C8:09  
          inet addr:192.168.2.133  Bcast:192.168.2.255  Mask:255.255.255.0
lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0

Verify RAW devices 
#  raw -qa
/dev/raw/raw1:  bound to major 8, minor 17
/dev/raw/raw2:  bound to major 8, minor 33
/dev/raw/raw3:  bound to major 8, minor 49
/dev/raw/raw4:  bound to major 8, minor 65
/dev/raw/raw5:  bound to major 8, minor 81
#  ls -l  /dev/raw/ra*
crw-r----- 1 root   oinstall 162, 1 Apr  9 13:24 /dev/raw/raw1
crw-r----- 1 root   oinstall 162, 2 Apr  9 13:24 /dev/raw/raw2
crw-r--r-- 1 oracle oinstall 162, 3 Apr  9 13:24 /dev/raw/raw3
crw-r--r-- 1 oracle oinstall 162, 4 Apr  9 13:24 /dev/raw/raw4
crw-r--r-- 1 oracle oinstall 162, 5 Apr  9 13:24 /dev/raw/raw5

Verify ASM disks
# /usr/sbin/oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
# /usr/sbin/oracleasm listdisks
ASM_DATA01
ASM_DATA02
ASM_DATA03
ASM_DATA04

Verify SSH connectivity
[root@ract3 ~]# su - oracle
[oracle@ract3 ~]$ ssh ract1 date
Wed Apr  9 13:26:46 CEST 2014
[oracle@ract3 ~]$ ssh ract2 date
Wed Apr  9 13:26:55 CEST 2014

Run cluvfy: 
[oracle@ract3 cluvfy12]$   ./bin/cluvfy comp sys -p crs -r 10gR2 -n ract3
Verifying system requirement 
Total memory check passed
Available memory check passed
Swap space check passed
Free disk space check passed for "ract3:/tmp"
Check for multiple users with UID value 500 passed 
User existence check passed for "oracle"
Check for multiple users with UID value 99 passed 
User existence check passed for "nobody"
Group existence check passed for "oinstall"
Group existence check passed for "dba"
Membership check for user "oracle" in group "oinstall" [as Primary] passed
Membership check for user "oracle" in group "dba" passed
Hard limits check passed for "maximum open file descriptors"
Soft limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
Soft limits check passed for "maximum user processes"
System architecture check passed
Kernel version check passed
Kernel parameter check passed for "semmsl"
Kernel parameter check passed for "semmns"
Kernel parameter check passed for "semopm"
Kernel parameter check passed for "semmni"
Kernel parameter check passed for "shmmax"
Kernel parameter check passed for "shmmni"
Kernel parameter check passed for "shmall"
Kernel parameter check passed for "file-max"
Kernel parameter check passed for "ip_local_port_range"
Kernel parameter check passed for "rmem_default"
Kernel parameter check passed for "rmem_max"
Kernel parameter check passed for "wmem_default"
Kernel parameter check passed for "wmem_max"
Kernel parameter check passed for "aio-max-nr"
Package existence check passed for "make"
Package existence check passed for "binutils"

WARNING: 
PRVF-7584 : Multiple versions of package "control-center" found on node ract3: control-center(x86_64)-2.16.0-16.el5,control-center(i386)-2.16.0-16.el5
Package existence check passed for "control-center"
Package existence check passed for "gcc"

WARNING: 
PRVF-7584 : Multiple versions of package "libstdc++" found on node ract3: libstdc++(x86_64)-4.1.2-54.el5,libstdc++(i386)-4.1.2-54.el5
Package existence check passed for "libstdc++"
Package existence check passed for "libstdc++-devel"
Package existence check passed for "sysstat"
Package existence check passed for "setarch"

WARNING: 
PRVF-7584 : Multiple versions of package "glibc" found on node ract3: glibc(x86_64)-2.5-118.el5_10.2,glibc(i686)-2.5-118.el5_10.2
Package existence check passed for "glibc"
Package existence check passed for "glibc-common"
Check for multiple users with UID value 0 passed 

Starting check for consistency of primary group of root user

Check for consistency of root user's primary group passed
Time zone consistency check passed

Verification of system requirement was successful. 

If you get fixable errors, run cluvfy with -verbose -fixup:
[oracle@ract3 ract3]$  ./bin/cluvfy comp sys -p crs -r 10gR2 -n ract3 -verbose -fixup

Now check CRS readiness with cluvfy
[oracle@ract3 cluvfy12]$ ./bin/cluvfy stage -pre crsinst -n  ract3 -r 10gR2
...
Check for consistency of root user's primary group passed
Time zone consistency check passed
Verification of system requirement was successful.

Ignore the potential error PRVG-5745 from cluvfy stage -pre nodeadd:
[oracle@ract3 cluvfy12]$ ./bin/cluvfy stage -pre nodeadd -n ract3 
Performing pre-checks for node addition 
ERROR: 
PRVG-5745 : CRS Configuration detected, Restart configuration check not valid in this environment
Verification cannot proceed
Pre-check for node addition was unsuccessful on all the nodes.
---> Do not run cluvfy with the nodeadd option from the node to be added (ract3) - run the command from an already installed node (ract2):
     [oracle@ract2 ~]$   cluvfy stage -pre nodeadd -n ract3 
For details check the following bugs:
Bug 12705949 : CLUVFY COMP NODECON FAILING WHEN THERE IS NO CRS INSTALLED
Bug 13343726 : LNX64-12.1-CVU: MISLEADING PRVG-5745 MESSAGE BEFORE CRS FRESH INSTALLATION

Install CRS software

Start the addNode.sh script from an already installed RAC node.
First test SSH connectivity to the node being added:
[oracle@ract1 ~]$ ssh ract3 date
Wed Apr  9 14:38:27 CEST 2014
[oracle@ract1 bin]$ cd /u01/app/oracle/product/crs/oui/bin
[oracle@ract1 bin]$ ./addNode.sh 
Starting Oracle Universal Installer...
Select Node                 : ract3  
       Cluster-Interconnect : ract3int  
       VIP                  : ract3vip
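
The same selections can also be supplied on the command line for a silent node addition. A sketch using the CLUSTER_NEW_* installer parameters (adjust the node, interconnect, and VIP names to your environment):
[oracle@ract1 bin]$ ./addNode.sh -silent "CLUSTER_NEW_NODES={ract3}" \
     "CLUSTER_NEW_PRIVATE_NODE_NAMES={ract3int}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={ract3vip}"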

Run rootaddnode.sh on ract1:
[root@ract1 etc]# /u01/app/oracle/product/crs/install/rootaddnode.sh
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Attempting to add 1 new nodes to the configuration
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 3: ract3 ract3int ract3
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Failure -2 opening file handle for (raw5)
Failed to update the voting device raw5 with addnode info. 1
/u01/app/oracle/product/crs/bin/srvctl add nodeapps -n ract3 -A ract3vip.example.com/255.255.255.0/eth1 -o /u01/app/oracle/product/crs

Check OCR after running rootaddnode.sh on ract1
[root@ract1 ~]# crs_stat -t
Name           Type           Target    State     Host        
------------------------------------------------------------      
ora.ract3.gsd  application    OFFLINE   OFFLINE               
ora.ract3.ons  application    OFFLINE   OFFLINE               
ora.ract3.vip  application    OFFLINE   OFFLINE  

Now run root.sh on ract3
[root@ract3 ~]# /u01/app/oracle/product/crs/root.sh
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured
Oracle CRS stack is already configured and will be running under init(1M)

[root@ract3 ~]# crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy

[root@ract3 ~]# crs_stat -t
Name           Type           Target    State     Host        
------------------------------------------------------------
ora.ract3.gsd  application    ONLINE    ONLINE    ract3       
ora.ract3.ons  application    ONLINE    ONLINE    ract3       
ora.ract3.vip  application    ONLINE    ONLINE    ract3   

--> CRS is up and running - we still need to add the listener, ASM, and RAC instances

Install database software

[oracle@ract1 ~]$ cd $ORACLE_HOME/oui/bin/
[oracle@ract1 bin]$ ./addNode.sh
Starting Oracle Universal Installer...
--> Select new node  : ract3

Configure listener on node ract3

Start netca on ract1
$ netca &
Select the Type of Oracle Net Services Configuration     Select Cluster configuration
Select the nodes to configure                            Only select the new Oracle RAC node: ract3
Type of Configuration                                    Select Listener configuration.
Listener Configuration
Next 6 Screens                                           The following screens are now like any other normal listener configuration. 
                                                         What do you want to do: Add
                                                         Listener name: LISTENER
                                                         Selected protocols: TCP
                                                         Port number: 1521
                                                         Configure another listener: No
                                                         Listener configuration complete! [ Next ]

Verify listener configuration 
[oracle@ract3 rac_db1]$ crs_stat ora.ract3.LISTENER_RACT3.lsnr
NAME=ora.ract3.LISTENER_RACT3.lsnr
TYPE=application
TARGET=ONLINE
STATE=ONLINE on ract3

[oracle@ract3 rac_db1]$  lsnrctl status LISTENER_RACT3
LSNRCTL for Linux: Version 10.2.0.1.0 - Production on 09-APR-2014 18:37:45
Copyright (c) 1991, 2005, Oracle.  All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=ract3vip.example.com)(PORT=1521)(IP=FIRST)))
..
Service "RACT" has 2 instance(s).
  Instance "RACT1", status READY, has 1 handler(s) for this service...
  Instance "RACT2", status READY, has 1 handler(s) for this service...
..
[oracle@ract3 admin]$ crs_stat -t
Name           Type           Target    State     Host        
------------------------------------------------------------      
ora....T3.lsnr application    ONLINE    ONLINE    ract3       
ora.ract3.gsd  application    ONLINE    ONLINE    ract3       
ora.ract3.ons  application    ONLINE    ONLINE    ract3       
ora.ract3.vip  application    ONLINE    ONLINE    ract3

Run dbca to configure the ASM and RDBMS instances

 
$ dbca
Welcome Screen     Select Oracle Real Application Clusters database.
Operations     Select Instance Management.
Instance Management     Select Add an instance. 
... 
DBCA verifies the new node ract3, and as the database is configured to use ASM, prompts with the message 
   “ASM is present on the cluster but needs to be extended to the following nodes: [ract3]. 
   Do you want ASM to be extended?” 
Click on Yes to add ASM to the new instance. 
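
Alternatively, dbca can add the instance in silent mode from ract1. A sketch (assuming silent-mode addInstance is available in this release; substitute the SYS password):
[oracle@ract1 ~]$ dbca -silent -addInstance -nodeList ract3 -gdbName RACT \
     -instanceName RACT3 -sysDBAUserName sys -sysDBAPassword <sys_password>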

[oracle@ract3 admin]$ crs_stat -t
Name           Type           Target    State     Host        
------------------------------------------------------------       
ora....T3.inst application    ONLINE    ONLINE    ract3            
ora....SM3.asm application    ONLINE    ONLINE    ract3       
ora....T3.lsnr application    ONLINE    ONLINE    ract3       
ora.ract3.gsd  application    ONLINE    ONLINE    ract3       
ora.ract3.ons  application    ONLINE    ONLINE    ract3       
ora.ract3.vip  application    ONLINE    ONLINE    ract3   

SQL> select inst_id, instance_name, host_name,  status, to_char(startup_time, 'DD-MON-YYYY HH24:MI:SS')
       from gv$instance order by inst_id;
   INST_ID INSTANCE_NAME    HOST_NAME            STATUS       TO_CHAR(STARTUP_TIME
---------- ---------------- -------------------- ------------ --------------------
         1 RACT1            ract1.example.com    OPEN         09-APR-2014 14:06:17
         2 RACT2            ract2.example.com    OPEN         09-APR-2014 10:38:20
         3 RACT3            ract3.example.com    OPEN         09-APR-2014 18:41:52

 

Potential CRS-Error

CRS-0223: Resource 'ora.ract3.gsd' has placement error.
[root@ract3 client]# crs_start ora.ract3.gsd 
CRS-1028: Dependency analysis failed because of:
'Resource in UNKNOWN state: ora.ract3.gsd'
CRS-0223: Resource 'ora.ract3.gsd' has placement error.
Fix: try to restart the resource manually (stop it with crs_stop, then start it with crs_start):
[root@ract3 client]#  crs_stop  ora.ract3.gsd 
Attempting to stop `ora.ract3.gsd` on member `ract3`
Stop of `ora.ract3.gsd` on member `ract3` succeeded.
[root@ract3 client]# crs_stat -t
Name           Type           Target    State     Host        
------------------------------------------------------------
ora.ract3.gsd  application    ONLINE    ONLINE    ract3  

CRS-210 when rootaddnode.sh runs srvctl add nodeapps - for details, read the following note.

Reference

 

Add a new Node to 11.2.0.3.4 Clusterware

Status: 2-Node cluster using GNS with grac1,grac2 as  active nodes
Node to be added : grac3

If this is a failed CRS setup, please check the following link.

Check network connectivity and ASM disk status at the OS level

From grac1 run 
$ ssh grac3 date
Thu Aug 29 12:32:04 CEST 2013

$ nslookup grac3
Server:        192.168.1.50
Address:    192.168.1.50#53
Name:    grac3.example.com
Address: 192.168.1.63

# ping grac3.example.com
PING grac3.example.com (192.168.1.63) 56(84) bytes of data.
64 bytes from grac3.example.com (192.168.1.63): icmp_seq=1 ttl=64 time=0.518 ms
64 bytes from grac3.example.com (192.168.1.63): icmp_seq=2 ttl=64 time=0.356 ms
# ping 192.168.2.73
PING 192.168.2.73 (192.168.2.73) 56(84) bytes of data.
64 bytes from 192.168.2.73: icmp_seq=1 ttl=64 time=1.29 ms
64 bytes from 192.168.2.73: icmp_seq=2 ttl=64 time=0.263 ms

Verify the ASM disks on grac3 and check that they have the proper permissions by running: 
# /etc/init.d/oracleasm listdisks
DATA1
DATA2
DATA3
OCR1
OCR2
OCR3
# ls -l  /dev/oracleasm/*
/dev/oracleasm/disks:
total 0
brw-rw---- 1 grid asmadmin 8, 17 Aug 30 09:21 DATA1
brw-rw---- 1 grid asmadmin 8, 33 Aug 29 14:16 DATA2
brw-rw---- 1 grid asmadmin 8, 49 Aug 29 14:16 DATA3
brw-rw---- 1 grid asmadmin 8, 65 Aug 29 14:16 OCR1
brw-rw---- 1 grid asmadmin 8, 81 Aug 29 14:16 OCR2
brw-rw---- 1 grid asmadmin 8, 97 Aug 29 14:16 OCR3

You can also dump the ASM disk header with kfed if addNode.sh failed but its file copy phase succeeded:
$ $GRID_HOME/bin/kfed  read /dev/oracleasm/disks/DATA2  | grep name
kfdhdb.dskname:               DATA_0001 ; 0x028: length=9
kfdhdb.grpname:                    DATA ; 0x048: length=4
kfdhdb.fgname:                DATA_0001 ; 0x068: length=9
kfdhdb.capname:                         ; 0x088: length=0

Use kfod to get an idea about the ASM disk status
$  kfod asm_diskstring='/dev/oracleasm/disks/*' nohdr=true verbose=true disks=all status=true op=disks
5114 CANDIDATE /dev/oracleasm/disks/DATA1 grid     asmadmin
5114 MEMBER /dev/oracleasm/disks/DATA2 grid     asmadmin
5114 MEMBER /dev/oracleasm/disks/DATA3 grid     asmadmin
2047 CANDIDATE /dev/oracleasm/disks/OCR1 grid     asmadmin
2047 CANDIDATE /dev/oracleasm/disks/OCR2 grid     asmadmin
2047 CANDIDATE /dev/oracleasm/disks/OCR3 grid     asmadmin

Verify the current setup with the following cluvfy commands:

$ cluvfy stage -post hwos -n grac41,grac43
$ cluvfy stage -pre nodeadd -n grac43 | grep PRV
PRVG-1013 : The path "/u01/app/11204/grid" does not exist or cannot be created on the nodes to be added
PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: grac43
$ cluvfy stage -pre crsinst -n grac43 

cluvfy comp peer -refnode grac1 -n grac3 -orainv oinstall -osdba asmdba -verbose

Run addNode.sh for the Clusterware ( GRID_HOME )

Run addnode.sh 
$ setenv IGNORE_PREADDNODE_CHECKS Y
$ cd $GRID_HOME/oui/bin
$ ./addNode.sh "CLUSTER_NEW_NODES={grac3}"
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB.   Actual 6095 MB    Passed
Oracle Universal Installer, Version 11.2.0.3.0 Production
Copyright (C) 1999, 2011, Oracle. All rights reserved.
Performing tests to see whether nodes grac2,grac3 are available
............................................................... 100% Done.
-----------------------------------------------------------------------------
Cluster Node Addition Summary
Global Settings
   Source: /u01/app/11203/grid
   New Nodes
Space Requirements
   New Nodes
      grac3
         /: Required 4.97GB : Available 14.57GB
Installed Products
   Product Names
  .....
-----------------------------------------------------------------------------
Instantiating scripts for add node (Thursday, August 29, 2013 6:11:18 PM CEST)
Instantiation of add node scripts complete
Copying to remote nodes (Thursday, August 29, 2013 6:11:22 PM CEST)
...............................................................................................                                 96% Done.
Home copied to new nodes
Saving inventory on nodes (Thursday, August 29, 2013 6:29:04 PM CEST)
.                                                               100% Done.
Save inventory complete
WARNING:A new inventory has been created on one or more nodes in this session. However, it has not yet been registered as the central inventory of this system. 
To register the new inventory please run the script at '/u01/app/oraInventory/orainstRoot.sh' with root privileges on nodes 'grac3'.
If you do not register the inventory, you may not be able to update or patch the products you installed.
The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.
/u01/app/oraInventory/orainstRoot.sh #On nodes grac3
/u01/app/11203/grid/root.sh #On nodes grac3
To execute the configuration scripts:
    1. Open a terminal window
    2. Log in as "root"
    3. Run the scripts in each cluster node
The Cluster Node Addition of /u01/app/11203/grid was successful.
Please check '/tmp/silentInstall.log' for more details.

Run required root scripts

# /u01/app/oraInventory/orainstRoot.sh
Creating the Oracle inventory pointer file (/etc/oraInst.loc)
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.

# /u01/app/11203/grid/root.sh
Performing root user operation for Oracle 11g 
The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/11203/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]: 
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11203/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
OLR initialization - successful
Adding Clusterware entries to upstart
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node grac1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Configure Oracle Grid Infrastructure for a Cluster ... succeeded

Run cluvfy and crsctl stat res -t to verify the cluster node addition

# my_crs_stat
NAME                           TARGET     STATE           SERVER       STATE_DETAILS   
-------------------------      ---------- ----------      ------------ ------------------
ora.DATA.dg                    ONLINE     ONLINE          grac1         
ora.DATA.dg                    ONLINE     ONLINE          grac2         
ora.DATA.dg                    ONLINE     ONLINE          grac3         
ora.LISTENER.lsnr              ONLINE     ONLINE          grac1         
ora.LISTENER.lsnr              ONLINE     ONLINE          grac2         
ora.LISTENER.lsnr              ONLINE     ONLINE          grac3         
ora.asm                        ONLINE     ONLINE          grac1        Started 
ora.asm                        ONLINE     ONLINE          grac2        Started 
ora.asm                        ONLINE     ONLINE          grac3        Started 
ora.gsd                        OFFLINE    OFFLINE         grac1         
ora.gsd                        OFFLINE    OFFLINE         grac2         
ora.gsd                        OFFLINE    OFFLINE         grac3         
ora.net1.network               ONLINE     ONLINE          grac1         
ora.net1.network               ONLINE     ONLINE          grac2         
ora.net1.network               ONLINE     ONLINE          grac3         
ora.ons                        ONLINE     ONLINE          grac1         
ora.ons                        ONLINE     ONLINE          grac2         
ora.ons                        ONLINE     ONLINE          grac3         
ora.LISTENER_SCAN1.lsnr        ONLINE     ONLINE          grac3         
ora.LISTENER_SCAN2.lsnr        ONLINE     ONLINE          grac1         
ora.LISTENER_SCAN3.lsnr        ONLINE     ONLINE          grac2         
ora.cvu                        ONLINE     ONLINE          grac1         
ora.gns                        ONLINE     ONLINE          grac1         
ora.gns.vip                    ONLINE     ONLINE          grac1         
ora.grac1.vip                  ONLINE     ONLINE          grac1         
ora.grac2.vip                  ONLINE     ONLINE          grac2         
ora.grac3.vip                  ONLINE     ONLINE          grac3         
ora.grace2.db                  ONLINE     ONLINE          grac1        Open 
ora.grace2.db                  ONLINE     ONLINE          grac2        Open 
ora.oc4j                       ONLINE     ONLINE          grac1         
ora.scan1.vip                  ONLINE     ONLINE          grac3         
ora.scan2.vip                  ONLINE     ONLINE          grac1         
ora.scan3.vip                  ONLINE     ONLINE          grac2 

Verify post CRS status with cluvfy

$ cluvfy stage -post crsinst -n grac1,grac2,grac3
$ cluvfy stage -post nodeadd -n grac3

Ignore well known errors like:
PRVF-5217 : An error occurred while trying to look up IP address for "grac1.grid4.example.com"

Install RAC database software on grac3

Install RAC database software on grac3 as oracle owner 
$ id
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),501(vboxsf),506(asmdba),54322(dba)
$ cd $ORACLE_HOME/oui/bin
$ ./addNode.sh "CLUSTER_NEW_NODES={grac3}"
Performing pre-checks for node addition 
Checking node reachability...
Node reachability check passed from node "grac1"
Checking user equivalence...
User equivalence check passed for user "oracle"
WARNING: 
Node "grac3" already appears to be part of cluster
Pre-check for node addition was successful. 
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB.   Actual 5692 MB    Passed
Oracle Universal Installer, Version 11.2.0.3.0 Production
Copyright (C) 1999, 2011, Oracle. All rights reserved.
Performing tests to see whether nodes grac2,grac3 are available
............................................................... 100% Done.
-----------------------------------------------------------------------------
Cluster Node Addition Summary
Global Settings
   Source: /u01/app/oracle/product/11203/racdb
   New Nodes
Space Requirements
   New Nodes
      grac3
         /: Required 4.09GB : Available 11.01GB
Installed Products
   Product Names
      Oracle Database 11g 11.2.0.3.0 
....
-----------------------------------------------------------------------------
Instantiating scripts for add node (Saturday, August 31, 2013 10:55:33 AM CEST)
.                                                                 1% Done.
Instantiation of add node scripts complete
Copying to remote nodes (Saturday, August 31, 2013 10:55:38 AM CEST)
...............................................................................................                                 96% Done.
Home copied to new nodes
Saving inventory on nodes (Saturday, August 31, 2013 11:36:22 AM CEST)
.                                                               100% Done.
Save inventory complete
WARNING:
The following configuration scripts need to be executed as the "root" user in each new cluster node. 
Each script in the list below is followed by a list of nodes.
/u01/app/oracle/product/11203/racdb/root.sh #On nodes grac3
To execute the configuration scripts:
    1. Open a terminal window
    2. Log in as "root"
    3. Run the scripts in each cluster node    
The Cluster Node Addition of /u01/app/oracle/product/11203/racdb was successful.
Please check '/tmp/silentInstall.log' for more details.

 

Invoke dbca and add instance GRACE23 on node grac3

Run dbca as the oracle owner (a silent-mode sketch follows the dialog values below): 
$ dbca
  Oracle RAC cluster database
   Instance Management
     Add an Instance
       Select your RAC database ( GRACE2 - should be admin based )
         Select instance: GRAC23 - Host_: grac3
          Accept the default values   for 
          Initialization Parameters
             Instance     Name               Value
             GRACE23     instance_number     3
             GRACE23     thread              3
             GRACE23     undo_tablespace     UNDOTBS3    
           Tablespaces
             Name     Type                     Extent Management
             UNDOTBS3     PERMANENT , UNDO     LOCAL  
           Data Files
             Name                     Tablespace     Size(M)
             <OMF_UNDOTBS3_DATAFILE_0>     UNDOTBS3     100
           Redo Log Groups
             Group     Size(K)     Thread
              5     51200             3
              6     51200
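If no GUI is available, the same instance can be added with dbca in silent mode. A minimal sketch using the database, instance and node names from above; the SYS password is a placeholder:

$ dbca -silent -addInstance -nodeList grac3 \
      -gdbName GRACE2 -instanceName GRACE23 \
      -sysDBAUserName sys -sysDBAPassword <sys_password>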

 

Verify cluster status using srvctl, cluvfy

Verify cluster status           
$ srvctl status database -d GRACE2
Instance GRACE21 is running on node grac1
Instance GRACE22 is running on node grac2
Instance GRACE23 is running on node grac3
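To double-check that redo thread 3 and the new instance are really in use, a quick SQL check from any node works as well - a small sketch, connecting as SYSDBA:

$ sqlplus / as sysdba
SQL> select inst_id, instance_name, host_name, status from gv$instance order by inst_id;
SQL> select thread#, status, enabled from v$thread;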
Verification of administrative privileges:

$ cluvfy comp admprv -o db_config -d $ORACLE_HOME -n grac1,grac2,grac3 -verbose
Verifying administrative privileges 
Checking user equivalence...
Check: User equivalence for user "oracle"
  Node Name                             Status                  
  ------------------------------------  ------------------------
  grac3                                 passed                  
  grac2                                 passed                  
  grac1                                 passed                  
Result: User equivalence check passed for user "oracle"
Checking administrative privileges...
Check: User existence for "oracle" 
  Node Name     Status                    Comment                 
  ------------  ------------------------  ------------------------
  grac3         passed                    exists(54321)           
  grac2         passed                    exists(54321)           
  grac1         passed                    exists(54321)           
Checking for multiple users with UID value 54321
Result: Check for multiple users with UID value 54321 passed 
Result: User existence check passed for "oracle"
Check: Group existence for "oinstall" 
  Node Name     Status                    Comment                 
  ------------  ------------------------  ------------------------
  grac3         passed                    exists                  
  grac2         passed                    exists                  
  grac1         passed                    exists                  
Result: Group existence check passed for "oinstall"
Check: Membership of user "oracle" in group "oinstall" [as Primary]
  Node Name         User Exists   Group Exists  User in Group  Primary       Status      
  ----------------  ------------  ------------  ------------  ------------  ------------
  grac3             yes           yes           yes           yes           passed      
  grac2             yes           yes           yes           yes           passed      
  grac1             yes           yes           yes           yes           passed      
Result: Membership check for user "oracle" in group "oinstall" [as Primary] passed
Check: Group existence for "asmdba" 
  Node Name     Status                    Comment                 
  ------------  ------------------------  ------------------------
  grac3         passed                    exists                  
  grac2         passed                    exists                  
  grac1         passed                    exists                  
Result: Group existence check passed for "asmdba"
Check: Membership of user "oracle" in group "asmdba" 
  Node Name         User Exists   Group Exists  User in Group  Status          
  ----------------  ------------  ------------  ------------  ----------------
  grac3             yes           yes           yes           passed          
  grac2             yes           yes           yes           passed          
  grac1             yes           yes           yes           passed          
Result: Membership check for user "oracle" in group "asmdba" passed
Administrative privileges check passed
Verification of administrative privileges was successful.

Clone Clusterware with active GNS to new node ( Single Node )

Software used

  • GRID: 11.2.0.3.4
  • OEL 6.3
  • VirtualBox 4.2.14

Steps

  • Install Grid Infrastructure Clusterware + any required patches on our Source CRS
  • Prepare the new cluster nodes
  • Run clone.pl on the Destination Node
  • Launch the Configuration Wizard
Stop source CRS on grac1 ( Source )
# $GRID_HOME/bin/crsctl stop crs
Create a stage directory 
# mkdir -p /local_tmp/cloneGRID
# cp -prf $GRID_HOME /local_tmp/cloneGRID
Clean up the staging area
cd  /local_tmp/cloneGRID/grid
rm -rf log/grac1
rm -rf gpnp/grac1
find gpnp -type f -exec rm -f {} \;
rm -rf root.sh*
rm -rf gpnp/*
rm -rf crs/init/*
rm -rf cdata/*
rm -rf crf/*
rm -rf network/admin/*.ora
find . -name '*.ouibak' -exec rm {} \;
find . -name '*.ouibak.1' -exec rm {} \;
find cfgtoollogs -type f -exec rm -f {} \;

Create an archive of the cleaned staging area
# cd  /local_tmp/cloneGRID/grid
# tar -zcvpf /RAC/cloneGRID/grid112034.tgz .
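Transfer the archive to the new node; the extract step further below expects it under /tmp on grac1cl. A plain copy, assuming ssh connectivity to the new host:

# scp /RAC/cloneGRID/grid112034.tgz grac1cl:/tmp/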
Restart CRS on the source node
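A minimal sketch of that restart, run as root on grac1:

# $GRID_HOME/bin/crsctl start crs
# $GRID_HOME/bin/crsctl check crs      # repeat until all CRS services report online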

On our cloned system grac1cl create the needed directories:
# mkdir -p /u01/app/11203/grid
# mkdir -p /u01/app/grid
# mkdir -p /u01/app/oracle
# chown grid:oinstall /u01/app/11203/grid
# chown grid:oinstall /u01/app/grid
# chown oracle:oinstall /u01/app/oracle
# chown -R grid:oinstall /u01
# chmod -R 775 /u01/
# mkdir -p /u01/app/oraInventory
# chown -R grid:oinstall /u01/app/oraInventory
# chmod -R 775 /u01/app/oraInventory
# ls -ld /u01/app/oraInventory
drwxr-xr-x 3 grid oinstall 4096 Jul 17 15:12 /u01/app/oraInventory
As the root user, execute on the new node grac1cl:
# cd $GRID_HOME
# tar -zxvf /tmp/grid112034.tgz .
# chown -R grid:oinstall /u01/app/11203/grid
Change setuid:
# chmod u+s $GRID_HOME/bin/oracle
# chmod g+s $GRID_HOME/bin/oracle
# chmod u+s $GRID_HOME/bin/extjob
# chmod u+s $GRID_HOME/bin/jssu
# chmod u+s $GRID_HOME/bin/oradism
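A quick check that ownership and the setuid/setgid bits ended up as intended ( the oracle binary in a grid home typically shows mode -rwsr-s--x and owner grid:oinstall ):

# ls -l $GRID_HOME/bin/oracle $GRID_HOME/bin/extjob $GRID_HOME/bin/jssu $GRID_HOME/bin/oradism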

Prepare a script to set up the environment for clone.pl
#!/bin/sh
ORACLE_BASE=/u01/app/grid
GRID_HOME=/u01/app/11203/grid
export PATH=${GRID_HOME}/bin:$PATH
export THIS_NODE=`/bin/hostname -s`
echo $THIS_NODE
E01=ORACLE_BASE=${ORACLE_BASE}
E02=ORACLE_HOME=${GRID_HOME}
E03=ORACLE_HOME_NAME=GridHome1_112034
E04=INVENTORY_LOCATION=/u01/app/oraInventory
C00=-O'"-debug"'
C01=-O"\"CLUSTER_NODES={grac1cl}\""
C02="-O\"LOCAL_NODE=$THIS_NODE\""

Run cloning script  clone.pl :
$ perl ${GRID_HOME}/clone/bin/clone.pl -silent $E01 $E02 $E03 $E04 $C00 $C01 $C02
+ perl /u01/app/11203/grid/clone/bin/clone.pl -silent ORACLE_BASE=/u01/app/grid ORACLE_HOME=/u01/app/11203/grid ORACLE_HOME_NAME=GridHome1_112034 INVENTORY_LOCATION=/u01/app/oraInventory '-O"-debug"' '-O"CLUSTER_NODES={grac1cl}"' '-O"LOCAL_NODE=grac1cl"'
./runInstaller -clone -waitForCompletion  "ORACLE_BASE=/u01/app/grid" "ORACLE_HOME=/u01/app/11203/grid" "ORACLE_HOME_NAME=GridHome1_112034" "INVENTORY_LOCATION=/u01/app/oraInventory" "-debug" "CLUSTER_NODES={grac1cl}" "LOCAL_NODE=grac1cl" -silent -noConfig -nowait 
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB.   Actual 6219 MB    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2013-08-09_07-59-43PM. Please wait ...
LD_LIBRARY_PATH environment variable :
-------------------------------------------------------
Total args: 32
Command line argument array elements ...
Arg:0:/tmp/OraInstall2013-08-09_07-59-43PM/jre/bin/java:
Arg:1:-Doracle.installer.library_loc=/tmp/OraInstall2013-08-09_07-59-43PM/oui/lib/linux64:
Arg:2:-Doracle.installer.oui_loc=/tmp/OraInstall2013-08-09_07-59-43PM/oui:
...
Arg:13:oracle.sysman.oii.oiic.OiicInstaller:
Arg:14:-scratchPath:
Arg:15:/tmp/OraInstall2013-08-09_07-59-43PM:
Arg:16:-sourceType:
Arg:17:network:
Arg:18:-timestamp:
Arg:19:2013-08-09_07-59-43PM:
Arg:20:-clone:
Arg:21:-waitForCompletion:
Arg:22:ORACLE_BASE=/u01/app/grid:
Arg:23:ORACLE_HOME=/u01/app/11203/grid:
Arg:24:ORACLE_HOME_NAME=GridHome1_112034:
Arg:25:INVENTORY_LOCATION=/u01/app/oraInventory:
Arg:26:-debug:
Arg:27:CLUSTER_NODES={grac1cl}:
Arg:28:LOCAL_NODE=grac1cl:
Arg:29:-silent:
Arg:30:-noConfig:
Arg:31:-nowait:
-------------------------------------------------------
Initializing Java Virtual Machine from /tmp/OraInstall2013-08-09_07-59-43PM/jre/bin/java. Please wait...
Oracle Universal Installer, Version 11.2.0.3.0 Production
Copyright (C) 1999, 2011, Oracle. All rights reserved.
You can find the log of this install session at:
 /u01/app/oraInventory/logs/cloneActions2013-08-09_07-59-43PM.log
.[main] [ 2013-08-09 19:59:51.314 CEST ] [Version.isPre:528]  version to be checked 11.2.0.3.0 major version to check against10
[main] [ 2013-08-09 19:59:51.315 CEST ] [Version.isPre:539]  isPre.java: Returning FALSE
[main] [ 2013-08-09 19:59:51.315 CEST ] [UnixSystem.getCSSConfigType:2418]  configFile=/etc/oracle/ocr.loc
[main] [ 2013-08-09 19:59:51.453 CEST ] [UnixSystem.getCSSConfigType:2462]  configType=null
[main] [ 2013-08-09 19:59:51.454 CEST ] [Version.isPre:528]  version to be checked 11.2.0.3.0 major version to check against10
[main] [ 2013-08-09 19:59:51.456 CEST ] [Version.isPre:539]  isPre.java: Returning FALSE
[main] [ 2013-08-09 19:59:51.456 CEST ] [ClusterInfo.<init>:241]  m_olsnodesPath=/u01/app/11203/grid/bin/olsnodes
[main] [ 2013-08-09 19:59:51.457 CEST ] [RuntimeExec.runCommand:75]  Calling Runtime.exec() with the command 
[main] [ 2013-08-09 19:59:51.458 CEST ] [RuntimeExec.runCommand:77]  /u01/app/11203/grid/bin/olsnodes 
[Thread-4] [ 2013-08-09 19:59:51.485 CEST ] [StreamReader.run:61]  In StreamReader.run 
[main] [ 2013-08-09 19:59:51.486 CEST ] [RuntimeExec.runCommand:142]  runCommand: Waiting for the process
[Thread-3] [ 2013-08-09 19:59:51.486 CEST ] [StreamReader.run:61]  In StreamReader.run 
[Thread-3] [ 2013-08-09 19:59:52.152 CEST ] [StreamReader.run:65]  OUTPUT>PRCO-19: Failure retrieving list of nodes in the cluster
[Thread-3] [ 2013-08-09 19:59:52.153 CEST ] [StreamReader.run:65]  OUTPUT>PRCO-2: Unable to communicate with the clusterware
[main] [ 2013-08-09 19:59:52.154 CEST ] [RuntimeExec.runCommand:144]  runCommand: process returns 1
[main] [ 2013-08-09 19:59:52.154 CEST ] [RuntimeExec.runCommand:161]  RunTimeExec: output>
[main] [ 2013-08-09 19:59:52.154 CEST ] [RuntimeExec.runCommand:164]  PRCO-19: Failure retrieving list of nodes in the cluster
[main] [ 2013-08-09 19:59:52.154 CEST ] [RuntimeExec.runCommand:164]  PRCO-2: Unable to communicate with the clusterware
[main] [ 2013-08-09 19:59:52.155 CEST ] [RuntimeExec.runCommand:170]  RunTimeExec: error>
[main] [ 2013-08-09 19:59:52.155 CEST ] [RuntimeExec.runCommand:192]  Returning from RunTimeExec.runCommand
Performing tests to see whether nodes  are available
............................................................... 100% Done.
[main] [ 2013-08-09 19:59:52.580 CEST ] [QueryCluster.<init>:56]  No Cluster detected
[main] [ 2013-08-09 19:59:52.581 CEST ] [QueryCluster.isCluster:65]  Cluster existence check = false
Installation in progress (Friday, August 9, 2013 7:59:53 PM CEST)
........................................................................                                                        72% Done.
Install successful
Linking in progress (Friday, August 9, 2013 7:59:57 PM CEST)
Link successful
Setup in progress (Friday, August 9, 2013 8:00:29 PM CEST)
.................                                               100% Done.
Setup successful
End of install phases.(Friday, August 9, 2013 8:00:51 PM CEST)
WARNING:
The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.
/u01/app/11203/grid/root.sh #On nodes grac1cl
To execute the configuration scripts:
    1. Open a terminal window
    2. Log in as "root"
    3. Run the scripts in each cluster node    
Run the script on the local node.
The cloning of GridHome1_112034 was successful.
Please check '/u01/app/oraInventory/logs/cloneActions2013-08-09_07-59-43PM.log' for more details.
copying /u01/app/oraInventory/logs/cloneActions2013-08-09_07-59-43PM.log to /u01/app/11203/grid/cfgtoollogs/ouicloneActions2013-08-09_07-59-43PM.log
copying /u01/app/oraInventory/logs/silentInstall2013-08-09_07-59-43PM.log to /u01/app/11203/grid/cfgtoollogs/oui/silentInstall2013-08-09_07-59-43PM.log
copying /u01/app/oraInventory/logs/oraInstall2013-08-09_07-59-43PM.err to /u01/app/11203/grid/cfgtoollogs/oui/oraInstall2013-08-09_07-59-43PM.err
copying /u01/app/oraInventory/logs/oraInstall2013-08-09_07-59-43PM.out to /u01/app/11203/grid/cfgtoollogs/oui/oraInstall2013-08-09_07-59-43PM.out

Now create your ASM disks using the VirtualBox Manager and check that these disks are available:
# /etc/init.d/oracleasm listdisks
DATA1
DATA2
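If the disks are not listed, they most likely still need to be stamped and scanned with oracleasm. A short sketch, assuming the new VirtualBox disks show up as /dev/sdb1 and /dev/sdc1 on grac1cl:

# /etc/init.d/oracleasm createdisk DATA1 /dev/sdb1
# /etc/init.d/oracleasm createdisk DATA2 /dev/sdc1
# /etc/init.d/oracleasm scandisks
# /etc/init.d/oracleasm listdisks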

As user grid, run the CRS configuration script
$GRID_HOME/crs/config/config.sh using the following parameters:
Cluster name   grace2cl  
Scan name:     grace2cl-scan.grid.example.com 
Scan port:     1521
Configure GNS
GNS sub domain:  grid2.example.com
GNS VIP address: 192.168.1.57
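Before launching config.sh it helps to confirm that the GNS VIP is still free and that the DNS delegation for the subdomain is in place - a small sketch, assuming the corporate DNS delegates grid2.example.com to the GNS VIP:

$ ping -c 2 192.168.1.57                   # should not answer yet - the VIP is brought up by the clusterware
$ nslookup -type=NS grid2.example.com      # should return the delegation record pointing at the GNS VIP host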

As root, run /u01/app/11203/grid/root.sh and monitor the related logfile:
# tail -f /u01/app/11203/grid/install/root_grac1cl.example.com_2013-08-09_20-51-57.log
Operation successful.
CRS-4256: Updating the profile
Successful addition of voting disk 79ace18753134f99bf4b2b846f8df8cd.
Successfully replaced voting disk group with +DATA.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   79ace18753134f99bf4b2b846f8df8cd (/dev/oracleasm/disks/DATA1) [DATA]
Located 1 voting disk(s).
CRS-2672: Attempting to start 'ora.asm' on 'grac1cl'
CRS-2676: Start of 'ora.asm' on 'grac1cl' succeeded
CRS-2672: Attempting to start 'ora.DATA.dg' on 'grac1cl'
CRS-2676: Start of 'ora.DATA.dg' on 'grac1cl' succeeded
CRS-2672: Attempting to start 'ora.registry.acfs' on 'grac1cl'
CRS-2676: Start of 'ora.registry.acfs' on 'grac1cl' succeeded
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
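As a final check of the cloned single-node cluster, the post-install verifications used earlier apply here as well - a short sketch:

# $GRID_HOME/bin/crsctl check cluster -all
# $GRID_HOME/bin/crsctl stat res -t
$ cluvfy stage -post crsinst -n grac1cl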

Reference:  How to Clone an 11.2.0.3 Grid Infrastructure Home and Clusterware (Doc ID 1413846.1)