Add a new Node to 11.2.0.3.4 Clusterware

Status: 2-node cluster using GNS, with grac1 and grac2 as active nodes
Node to be added: grac3

If you are recovering from a failed CRS setup, please check the following link.

Check network connectivity and ASM disk status at the OS level

From grac1, run:
$ ssh grac3 date
Thu Aug 29 12:32:04 CEST 2013

$ nslookup grac3
Server:        192.168.1.50
Address:    192.168.1.50#53
Name:    grac3.example.com
Address: 192.168.1.63

# ping grac3.example.com
PING grac3.example.com (192.168.1.63) 56(84) bytes of data.
64 bytes from grac3.example.com (192.168.1.63): icmp_seq=1 ttl=64 time=0.518 ms
64 bytes from grac3.example.com (192.168.1.63): icmp_seq=2 ttl=64 time=0.356 ms
# ping 192.168.2.73
PING 192.168.2.73 (192.168.2.73) 56(84) bytes of data.
64 bytes from 192.168.2.73: icmp_seq=1 ttl=64 time=1.29 ms
64 bytes from 192.168.2.73: icmp_seq=2 ttl=64 time=0.263 ms
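Before going on, it may also be worth confirming passwordless SSH in both directions for the grid and oracle owners, since addNode.sh relies on user equivalence. A minimal sketch (node names taken from this setup; run once as grid and once as oracle):

$ for h in grac1 grac2 grac3; do ssh $h hostname; done
$ ssh grac3 ssh grac1 date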

Verify the ASM disks on grac3 and check that they have the proper ownership and permissions by running:
# /etc/init.d/oracleasm listdisks
DATA1
DATA2
DATA3
OCR1
OCR2
OCR3
# ls -l  /dev/oracleasm/*
/dev/oracleasm/disks:
total 0
brw-rw---- 1 grid asmadmin 8, 17 Aug 30 09:21 DATA1
brw-rw---- 1 grid asmadmin 8, 33 Aug 29 14:16 DATA2
brw-rw---- 1 grid asmadmin 8, 49 Aug 29 14:16 DATA3
brw-rw---- 1 grid asmadmin 8, 65 Aug 29 14:16 OCR1
brw-rw---- 1 grid asmadmin 8, 81 Aug 29 14:16 OCR2
brw-rw---- 1 grid asmadmin 8, 97 Aug 29 14:16 OCR3
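If the disks are not visible on grac3, a rescan with the standard oracleasm commands usually helps (disk labels as listed above):

# /etc/init.d/oracleasm scandisks
# /etc/init.d/oracleasm querydisk DATA1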

You can also dump the ASM disk header with kfed, for example if addNode.sh failed but its file copy operation had already succeeded:
$ $GRID_HOME/bin/kfed  read /dev/oracleasm/disks/DATA2  | grep name
kfdhdb.dskname:               DATA_0001 ; 0x028: length=9
kfdhdb.grpname:                    DATA ; 0x048: length=4
kfdhdb.fgname:                DATA_0001 ; 0x068: length=9
kfdhdb.capname:                         ; 0x088: length=0
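To inspect all disks in one pass, the same kfed call can be wrapped in a small loop (paths assumed from the listing above):

$ for d in /dev/oracleasm/disks/*; do echo $d; $GRID_HOME/bin/kfed read $d | egrep 'dskname|grpname'; done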

Use kfod to get an overview of the ASM disk status:
$ kfod asm_diskstring='/dev/oracleasm/disks/*' nohdr=true verbose=true disks=all status=true op=disks
5114 CANDIDATE /dev/oracleasm/disks/DATA1 grid     asmadmin
5114 MEMBER /dev/oracleasm/disks/DATA2 grid     asmadmin
5114 MEMBER /dev/oracleasm/disks/DATA3 grid     asmadmin
2047 CANDIDATE /dev/oracleasm/disks/OCR1 grid     asmadmin
2047 CANDIDATE /dev/oracleasm/disks/OCR2 grid     asmadmin
2047 CANDIDATE /dev/oracleasm/disks/OCR3 grid     asmadmin

Verify the current setup with the following cluvfy commands:

$ cluvfy stage -post hwos -n grac1,grac3
$ cluvfy stage -pre nodeadd -n grac3 | grep PRV
PRVG-1013 : The path "/u01/app/11203/grid" does not exist or cannot be created on the nodes to be added
PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: grac3
Both messages are expected at this stage: the grid home will only be created on grac3 by addNode.sh, and PRVF-5636 is a well-known DNS response-time check issue.
$ cluvfy stage -pre crsinst -n grac3

$ cluvfy comp peer -refnode grac1 -n grac3 -orainv oinstall -osdba asmdba -verbose
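If the pre-nodeadd check reports fixable OS settings, cluvfy can also generate a fixup script to be run as root on the new node, e.g.:

$ cluvfy stage -pre nodeadd -n grac3 -fixup -verbose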

Run addNode.sh for the Clusterware (GRID_HOME)

As the grid user, set IGNORE_PREADDNODE_CHECKS (csh syntax below) and run addNode.sh:
$ setenv IGNORE_PREADDNODE_CHECKS Y
$ cd $GRID_HOME/oui/bin
$ ./addNode.sh "CLUSTER_NEW_NODES={grac3}"
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB.   Actual 6095 MB    Passed
Oracle Universal Installer, Version 11.2.0.3.0 Production
Copyright (C) 1999, 2011, Oracle. All rights reserved.
Performing tests to see whether nodes grac2,grac3 are available
............................................................... 100% Done.
-----------------------------------------------------------------------------
Cluster Node Addition Summary
Global Settings
   Source: /u01/app/11203/grid
   New Nodes
Space Requirements
   New Nodes
      grac3
         /: Required 4.97GB : Available 14.57GB
Installed Products
   Product Names
  .....
-----------------------------------------------------------------------------
Instantiating scripts for add node (Thursday, August 29, 2013 6:11:18 PM CEST)
Instantiation of add node scripts complete
Copying to remote nodes (Thursday, August 29, 2013 6:11:22 PM CEST)
...............................................................................................                                 96% Done.
Home copied to new nodes
Saving inventory on nodes (Thursday, August 29, 2013 6:29:04 PM CEST)
.                                                               100% Done.
Save inventory complete
WARNING: A new inventory has been created on one or more nodes in this session. However, it has not yet been registered as the central inventory of this system. 
To register the new inventory please run the script at '/u01/app/oraInventory/orainstRoot.sh' with root privileges on nodes 'grac3'.
If you do not register the inventory, you may not be able to update or patch the products you installed.
The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.
/u01/app/oraInventory/orainstRoot.sh #On nodes grac3
/u01/app/11203/grid/root.sh #On nodes grac3
To execute the configuration scripts:
    1. Open a terminal window
    2. Log in as "root"
    3. Run the scripts in each cluster node
The Cluster Node Addition of /u01/app/11203/grid was successful.
Please check '/tmp/silentInstall.log' for more details.
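A note on the environment variable set above: setenv is csh syntax. If your login shell is bash or ksh, the equivalent would be:

$ export IGNORE_PREADDNODE_CHECKS=Y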

Run required root scripts

# /u01/app/oraInventory/orainstRoot.sh
Creating the Oracle inventory pointer file (/etc/oraInst.loc)
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.

# /u01/app/11203/grid/root.sh
Performing root user operation for Oracle 11g 
The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/11203/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]: 
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11203/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
OLR initialization - successful
Adding Clusterware entries to upstart
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node grac1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
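At this point grac3 should already be an active cluster member; a quick check with olsnodes (part of the grid home) should report all three nodes as Active:

$ olsnodes -n -s -t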

Run cluvfy and crsctl stat res -t to verify the cluster node addition

# my_crs_stat
NAME                           TARGET     STATE           SERVER       STATE_DETAILS   
-------------------------      ---------- ----------      ------------ ------------------
ora.DATA.dg                    ONLINE     ONLINE          grac1         
ora.DATA.dg                    ONLINE     ONLINE          grac2         
ora.DATA.dg                    ONLINE     ONLINE          grac3         
ora.LISTENER.lsnr              ONLINE     ONLINE          grac1         
ora.LISTENER.lsnr              ONLINE     ONLINE          grac2         
ora.LISTENER.lsnr              ONLINE     ONLINE          grac3         
ora.asm                        ONLINE     ONLINE          grac1        Started 
ora.asm                        ONLINE     ONLINE          grac2        Started 
ora.asm                        ONLINE     ONLINE          grac3        Started 
ora.gsd                        OFFLINE    OFFLINE         grac1         
ora.gsd                        OFFLINE    OFFLINE         grac2         
ora.gsd                        OFFLINE    OFFLINE         grac3         
ora.net1.network               ONLINE     ONLINE          grac1         
ora.net1.network               ONLINE     ONLINE          grac2         
ora.net1.network               ONLINE     ONLINE          grac3         
ora.ons                        ONLINE     ONLINE          grac1         
ora.ons                        ONLINE     ONLINE          grac2         
ora.ons                        ONLINE     ONLINE          grac3         
ora.LISTENER_SCAN1.lsnr        ONLINE     ONLINE          grac3         
ora.LISTENER_SCAN2.lsnr        ONLINE     ONLINE          grac1         
ora.LISTENER_SCAN3.lsnr        ONLINE     ONLINE          grac2         
ora.cvu                        ONLINE     ONLINE          grac1         
ora.gns                        ONLINE     ONLINE          grac1         
ora.gns.vip                    ONLINE     ONLINE          grac1         
ora.grac1.vip                  ONLINE     ONLINE          grac1         
ora.grac2.vip                  ONLINE     ONLINE          grac2         
ora.grac3.vip                  ONLINE     ONLINE          grac3         
ora.grace2.db                  ONLINE     ONLINE          grac1        Open 
ora.grace2.db                  ONLINE     ONLINE          grac2        Open 
ora.oc4j                       ONLINE     ONLINE          grac1         
ora.scan1.vip                  ONLINE     ONLINE          grac3         
ora.scan2.vip                  ONLINE     ONLINE          grac1         
ora.scan3.vip                  ONLINE     ONLINE          grac2 
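Note: my_crs_stat is a local helper script; the stock command below gives the same picture if you don't have it:

$ $GRID_HOME/bin/crsctl stat res -t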

Verify the post-install CRS status with cluvfy

$ cluvfy stage -post crsinst -n grac1,grac2,grac3
$ cluvfy stage -post nodeadd -n grac3

Ignore well-known errors like:
PRVF-5217 : An error occurred while trying to look up IP address for "grac1.grid4.example.com"
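As this cluster uses GNS, it may be worth verifying the GNS resource and name resolution explicitly as well:

$ srvctl status gns
$ cluvfy comp gns -postcrsinst -verbose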

Install RAC database software on grac3

Install the RAC database software on grac3 as the oracle owner:
$ id
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),501(vboxsf),506(asmdba),54322(dba)
$ cd $ORACLE_HOME/oui/bin
$ ./addNode.sh "CLUSTER_NEW_NODES={grac3}"
Performing pre-checks for node addition 
Checking node reachability...
Node reachability check passed from node "grac1"
Checking user equivalence...
User equivalence check passed for user "oracle"
WARNING: 
Node "grac3" already appears to be part of cluster
Pre-check for node addition was successful. 
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB.   Actual 5692 MB    Passed
Oracle Universal Installer, Version 11.2.0.3.0 Production
Copyright (C) 1999, 2011, Oracle. All rights reserved.
Performing tests to see whether nodes grac2,grac3 are available
............................................................... 100% Done.
-----------------------------------------------------------------------------
Cluster Node Addition Summary
Global Settings
   Source: /u01/app/oracle/product/11203/racdb
   New Nodes
Space Requirements
   New Nodes
      grac3
         /: Required 4.09GB : Available 11.01GB
Installed Products
   Product Names
      Oracle Database 11g 11.2.0.3.0 
....
-----------------------------------------------------------------------------
Instantiating scripts for add node (Saturday, August 31, 2013 10:55:33 AM CEST)
.                                                                 1% Done.
Instantiation of add node scripts complete
Copying to remote nodes (Saturday, August 31, 2013 10:55:38 AM CEST)
...............................................................................................                                 96% Done.
Home copied to new nodes
Saving inventory on nodes (Saturday, August 31, 2013 11:36:22 AM CEST)
.                                                               100% Done.
Save inventory complete
WARNING:
The following configuration scripts need to be executed as the "root" user in each new cluster node. 
Each script in the list below is followed by a list of nodes.
/u01/app/oracle/product/11203/racdb/root.sh #On nodes grac3
To execute the configuration scripts:
    1. Open a terminal window
    2. Log in as "root"
    3. Run the scripts in each cluster node    
The Cluster Node Addition of /u01/app/oracle/product/11203/racdb was successful.
Please check '/tmp/silentInstall.log' for more details.
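Once root.sh has been run on grac3, the new database home can be cross-checked against the inventory with OPatch (shipped with the home):

$ $ORACLE_HOME/OPatch/opatch lsinventory -oh $ORACLE_HOME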


Invoke dbca and add instance GRACE23 on node grac3

Run dbca as the oracle owner:
$ dbca
  Oracle RAC cluster database
    Instance Management
      Add an Instance
        Select your RAC database (GRACE2 - must be admin-managed)
          Select instance: GRACE23 - Host: grac3
          Accept the default values for
            Initialization Parameters
              Instance     Name                Value
              GRACE23      instance_number     3
              GRACE23      thread              3
              GRACE23      undo_tablespace     UNDOTBS3
            Tablespaces
              Name         Type                Extent Management
              UNDOTBS3     PERMANENT, UNDO     LOCAL
            Data Files
              Name                          Tablespace     Size(M)
              <OMF_UNDOTBS3_DATAFILE_0>     UNDOTBS3       100
            Redo Log Groups
              Group     Size(K)     Thread
              5         51200       3
              6         51200       3
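As an alternative to the interactive walkthrough above, dbca also supports a silent mode for adding an instance. A sketch, with the values assumed from this setup (dbca will need SYSDBA credentials):

$ dbca -silent -addInstance -nodeList grac3 -gdbName GRACE2 \
      -instanceName GRACE23 -sysDBAUserName sys -sysDBAPassword <sys_password>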


Verify cluster status using srvctl and cluvfy

Verify cluster status           
$ srvctl status database -d GRACE2
Instance GRACE21 is running on node grac1
Instance GRACE22 is running on node grac2
Instance GRACE23 is running on node grac3
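The OCR registration of the new instance can be cross-checked with srvctl as well:

$ srvctl config database -d GRACE2
$ srvctl status instance -d GRACE2 -i GRACE23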
Verification of administrative privileges:

$ cluvfy comp admprv -o db_config -d $ORACLE_HOME -n grac1,grac2,grac3 -verbose
Verifying administrative privileges 
Checking user equivalence...
Check: User equivalence for user "oracle"
  Node Name                             Status                  
  ------------------------------------  ------------------------
  grac3                                 passed                  
  grac2                                 passed                  
  grac1                                 passed                  
Result: User equivalence check passed for user "oracle"
Checking administrative privileges...
Check: User existence for "oracle" 
  Node Name     Status                    Comment                 
  ------------  ------------------------  ------------------------
  grac3         passed                    exists(54321)           
  grac2         passed                    exists(54321)           
  grac1         passed                    exists(54321)           
Checking for multiple users with UID value 54321
Result: Check for multiple users with UID value 54321 passed 
Result: User existence check passed for "oracle"
Check: Group existence for "oinstall" 
  Node Name     Status                    Comment                 
  ------------  ------------------------  ------------------------
  grac3         passed                    exists                  
  grac2         passed                    exists                  
  grac1         passed                    exists                  
Result: Group existence check passed for "oinstall"
Check: Membership of user "oracle" in group "oinstall" [as Primary]
  Node Name         User Exists   Group Exists  User in Group  Primary       Status      
  ----------------  ------------  ------------  ------------  ------------  ------------
  grac3             yes           yes           yes           yes           passed      
  grac2             yes           yes           yes           yes           passed      
  grac1             yes           yes           yes           yes           passed      
Result: Membership check for user "oracle" in group "oinstall" [as Primary] passed
Check: Group existence for "asmdba" 
  Node Name     Status                    Comment                 
  ------------  ------------------------  ------------------------
  grac3         passed                    exists                  
  grac2         passed                    exists                  
  grac1         passed                    exists                  
Result: Group existence check passed for "asmdba"
Check: Membership of user "oracle" in group "asmdba" 
  Node Name         User Exists   Group Exists  User in Group  Status          
  ----------------  ------------  ------------  ------------  ----------------
  grac3             yes           yes           yes           passed          
  grac2             yes           yes           yes           passed          
  grac1             yes           yes           yes           passed          
Result: Membership check for user "oracle" in group "asmdba" passed
Administrative privileges check passed
Verification of administrative privileges was successful.

4 thoughts on “Add a new Node to 11.2.0.3.4 Clusterware”

  1. Hi,
    Many thanks for this post.
    Question:
    Why do we have to run dbca to create the instances? Couldn't srvctl do it as well?

    Kind Regards
    Houcine

  2. Hi,
    In your last post, Add a new node to 11.2 clusterware, you said:
    To execute the configuration scripts:
    1. Open a terminal window
    2. Log in as “root”
    3. Run the scripts in each cluster node

    Why should we run root.sh on each cluster node?
    I thought we had to run root.sh only on the newly added node.
    Please confirm whether that is correct.

    Thanks in advance
    Kind regards

    1. Hi,
      For the addnode step you only need to run root.sh on the newly added node(s).

      The wording from the Oracle configuration scripts output refers only to new nodes:
      The following configuration scripts need to be executed as the “root” user in each new cluster node.
