Install a 12.2 Oracle Member Cluster in a VirtualBox env

This article only exists because I'm always getting support, fast feedback, and motivation from

Anil Nair | Product Manager
Oracle Real Application Clusters (RAC)

Verify RHP Server, IO Server and MGMTDB status on our Domain Services Cluster

[grid@dsctw21 ~]$ srvctl status rhpserver
Rapid Home Provisioning Server is enabled
Rapid Home Provisioning Server is running on node dsctw21
[grid@dsctw21 ~]$  srvctl status  mgmtdb 
Database is enabled
Instance -MGMTDB is running on node dsctw21
[grid@dsctw21 ~]$ srvctl status ioserver
ASM I/O Server is running on dsctw21

Prepare RHP Server

DNS requirements for the HAVIP address
[grid@dsctw21 ~]$  nslookup rhpserver
Server:        192.168.5.50
Address:    192.168.5.50#53

Name:    rhpserver.example.com
Address: 192.168.5.51

[grid@dsctw21 ~]$  nslookup  192.168.5.51
Server:        192.168.5.50
Address:    192.168.5.50#53

51.5.168.192.in-addr.arpa    name = rhpserver.example.com.

[grid@dsctw21 ~]$ ping rhpserver
PING rhpserver.example.com (192.168.5.51) 56(84) bytes of data.
From dsctw21.example.com (192.168.5.151) icmp_seq=1 Destination Host Unreachable
From dsctw21.example.com (192.168.5.151) icmp_seq=2 Destination Host Unreachable

-> nslookup works. Nobody should respond to our ping request, as the HAVIP is not active YET.
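A scripted check of the forward lookup can save a round of eyeballing; here is a minimal sketch that pulls the answer address out of nslookup-style output (the sample text is hardcoded as an assumption - on a real host you would capture it from nslookup itself):

```shell
# Pull the answer address out of nslookup-style output.
# The sample text below is hardcoded for illustration; on a real host
# you would capture it with: out=$(nslookup rhpserver)
out='Server:     192.168.5.50
Address:    192.168.5.50#53

Name:   rhpserver.example.com
Address: 192.168.5.51'

# The first Address: line belongs to the DNS server itself; the answer
# address is the Address: line that follows the Name: line.
addr=$(printf '%s\n' "$out" | awk '/^Name:/ {getline; sub(/^Address: */, ""); print; exit}')
echo "resolved address: $addr"
```

The same parse works for any host you need to pre-verify before the install.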

As user root, create a HAVIP:
[root@dsctw21 ~]#  srvctl add havip -id rhphavip -address rhpserver 

*****  Cluster Resources: *****
Resource NAME               INST   TARGET       STATE        SERVER          STATE_DETAILS
--------------------------- ----   ------------ ------------ --------------- -----------------------------------------
ora.rhphavip.havip             1   OFFLINE      OFFLINE      -               STABLE  

Create a Member Cluster Configuration Manifest

[grid@dsctw21 ~]$ crsctl create  -h
Usage:
  crsctl create policyset -file <filePath>
where 
     filePath        Policy set file to create.

  crsctl create member_cluster_configuration <member_cluster_name> -file <cluster_manifest_file>  -member_type <database|application>  [-version <member_cluster_version>] [-domain_services [asm_storage <local|direct|indirect>][<rhp>]]
  where 
     member_cluster_name    name of the new Member Cluster
     -file                  path of the Cluster Manifest File (including the '.xml' extension) to be created
     -member_type           type of member cluster to be created
     -version               5 digit version of GI (example: 12.2.0.2.0) on the new Member Cluster, if
                            different from the Domain Services Cluster
     -domain_services       services to be initially configured for this member
                            cluster (asm_storage with local, direct, or indirect access paths, and rhp)
                            --note that if "-domain_services" option is not specified,
                            then only the GIMR and TFA services will be configured
     asm_storage            indicates the storage access path for the database member clusters
                            local : storage is local to the cluster
                            direct or indirect : direct or indirect access to storage provided on the Domain Services Cluster
     rhp                    generate credentials and configuration for an RHP client cluster.

Provide access to the DSC DATA diskgroup - even if we use asm_storage local
[grid@dsctw21 ~]$ sqlplus / as sysasm
SQL> ALTER DISKGROUP data SET ATTRIBUTE 'access_control.enabled' = 'true';
Diskgroup altered.

Create a Member Cluster Configuration File with indirect ASM storage

[grid@dsctw21 ~]$ crsctl create member_cluster_configuration mclu2 -file mclu2.xml  -member_type database -domain_services asm_storage indirect 
--------------------------------------------------------------------------------
ASM GIMR TFA ACFS RHP GNS
================================================================================
YES  YES  NO   NO  NO YES
================================================================================
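For repeated runs it can help to assemble the crsctl command line from variables and review it before executing; a small dry-run sketch (the variable names are my own, and echo replaces the real execution):

```shell
# Assemble the crsctl command line for a database member cluster (dry run).
# CLUSTER_NAME, MANIFEST and ACCESS are illustrative values - adjust them.
CLUSTER_NAME=mclu2
MANIFEST="${CLUSTER_NAME}.xml"
ACCESS=indirect                      # one of: local | direct | indirect

CMD="crsctl create member_cluster_configuration ${CLUSTER_NAME}"
CMD="$CMD -file ${MANIFEST} -member_type database"
CMD="$CMD -domain_services asm_storage ${ACCESS}"

# Review the command first; then run it as the grid user on the DSC.
echo "$CMD"
```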

If you get ORA-15365 during crsctl create member_cluster_configuration, delete the existing configuration first
 Error ORA-15365: member cluster 'mclu2' already configured
   [grid@dsctw21 ~]$ crsctl delete member_cluster_configuration mclu2


[grid@dsctw21 ~]$ crsctl query  member_cluster_configuration mclu2 
          mclu2     12.2.0.1.0 a6ab259d51ea6f91ffa7984299059208 ASM,GIMR

Copy the File to the Member Cluster Host where you plan to start the installation
[grid@dsctw21 ~]$ sum  mclu2.xml
54062    22

Copy Member Cluster Manifest File to Member Cluster host
[grid@dsctw21 ~]$ scp  mclu2.xml mclu21:
mclu2.xml                                                                                         100%   25KB  24.7KB/s   00:00  
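The sum check shown above can be folded into a small script that compares checksums before and after the copy, so a truncated transfer gets caught; a self-contained sketch (a scratch file and a local cp stand in for the manifest and scp):

```shell
# Compare checksums of a file before and after copying it (cksum is POSIX).
# A scratch file and a local cp stand in for mclu2.xml and scp here so
# the sketch is self-contained.
src=$(mktemp)
printf 'dummy manifest content\n' > "$src"    # stand-in for mclu2.xml

dst=$(mktemp)
cp "$src" "$dst"                              # stand-in for: scp mclu2.xml mclu21:

sum_src=$(cksum < "$src")
sum_dst=$(cksum < "$dst")
if [ "$sum_src" = "$sum_dst" ]; then
  echo "checksums match"
else
  echo "checksums differ - re-copy the manifest" >&2
fi
rm -f "$src" "$dst"
```

On the real hosts you would run cksum against mclu2.xml on both sides and compare the two values.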

Verify DSC SCAN Address from our Member Cluster Hosts

[grid@mclu21 grid]$ ping dsctw-scan.dsctw.dscgrid.example.com
PING dsctw-scan.dsctw.dscgrid.example.com (192.168.5.232) 56(84) bytes of data.
64 bytes from 192.168.5.232 (192.168.5.232): icmp_seq=1 ttl=64 time=0.570 ms
64 bytes from 192.168.5.232 (192.168.5.232): icmp_seq=2 ttl=64 time=0.324 ms
64 bytes from 192.168.5.232 (192.168.5.232): icmp_seq=3 ttl=64 time=0.654 ms
^C
--- dsctw-scan.dsctw.dscgrid.example.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2001ms
rtt min/avg/max/mdev = 0.324/0.516/0.654/0.140 ms


[root@mclu21 ~]# nslookup dsctw-scan.dsctw.dscgrid.example.com
Server:        192.168.5.50
Address:    192.168.5.50#53

Non-authoritative answer:
Name:    dsctw-scan.dsctw.dscgrid.example.com
Address: 192.168.5.230
Name:    dsctw-scan.dsctw.dscgrid.example.com
Address: 192.168.5.226
Name:    dsctw-scan.dsctw.dscgrid.example.com
Address: 192.168.5.227

Start Member Cluster installation

Unset the ORACLE_BASE environment variable.
[grid@dsctw21 grid]$ unset ORACLE_BASE
[grid@dsctw21 ~]$ cd $GRID_HOME
[grid@dsctw21 grid]$ pwd
/u01/app/122/grid
[grid@dsctw21 grid]$ unzip -q  /media/sf_kits/Oracle/122/linuxx64_12201_grid_home.zip

[grid@mclu21 grid]$ gridSetup.sh
Launching Oracle Grid Infrastructure Setup Wizard...

-> Configure an Oracle Member Cluster for Oracle Database
 -> Member Cluster Manifest File : /home/grid/FILES/mclu2.xml

While parsing the Member Cluster Manifest File, the following error pops up:

[INS-30211] An unexpected exception occurred while extracting details from ASM client data

PRCI-1167 : failed to extract attributes from the specified file "/home/grid/FILES/mclu2.xml"
PRCT-1453 : failed to get ASM properties from ASM client data file /home/grid/FILES/mclu2.xml
KFOD-00321: failed to read the credential file /home/grid/FILES/mclu2.xml

  • At your DSC: Add the GNS client data to the Member Cluster Configuration File
[grid@dsctw21 ~]$ srvctl export gns -clientdata   mclu2.xml   -role CLIENT
[grid@dsctw21 ~]$ scp  mclu2.xml mclu21: mclu2.xml                          100%   25KB  24.7KB/s   00:00

  • Restart the Member Cluster installation - it should work NOW!

 

  • Our Windows 7 host is busy and shows high memory consumption
  • The GIMR is the most challenging part of the installation

Verify Member Cluster

Verify Member Cluster Resources 

Cluster Resources 
[root@mclu22 ~]# crs
*****  Local Resources: *****
Resource NAME                  TARGET     STATE           SERVER       STATE_DETAILS                       
-------------------------      ---------- ----------      ------------ ------------------                  
ora.LISTENER.lsnr              ONLINE     ONLINE          mclu21       STABLE   
ora.LISTENER.lsnr              ONLINE     ONLINE          mclu22       STABLE   
ora.net1.network               ONLINE     ONLINE          mclu21       STABLE   
ora.net1.network               ONLINE     ONLINE          mclu22       STABLE   
ora.ons                        ONLINE     ONLINE          mclu21       STABLE   
ora.ons                        ONLINE     ONLINE          mclu22       STABLE   
*****  Cluster Resources: *****
Resource NAME               INST   TARGET       STATE        SERVER          STATE_DETAILS
--------------------------- ----   ------------ ------------ --------------- -----------------------------------------
ora.LISTENER_SCAN1.lsnr        1   ONLINE       ONLINE       mclu22          STABLE  
ora.LISTENER_SCAN2.lsnr        1   ONLINE       ONLINE       mclu21          STABLE  
ora.LISTENER_SCAN3.lsnr        1   ONLINE       ONLINE       mclu21          STABLE  
ora.cvu                        1   ONLINE       ONLINE       mclu21          STABLE  
ora.mclu21.vip                 1   ONLINE       ONLINE       mclu21          STABLE  
ora.mclu22.vip                 1   ONLINE       ONLINE       mclu22          STABLE  
ora.qosmserver                 1   ONLINE       ONLINE       mclu21          STABLE  
ora.scan1.vip                  1   ONLINE       ONLINE       mclu22          STABLE  
ora.scan2.vip                  1   ONLINE       ONLINE       mclu21          STABLE  
ora.scan3.vip                  1   ONLINE       ONLINE       mclu21          STABLE  

[root@mclu22 ~]#  srvctl config scan 
SCAN name: mclu2-scan.mclu2.dscgrid.example.com, Network: 1
Subnet IPv4: 192.168.5.0/255.255.255.0/enp0s8, dhcp
Subnet IPv6: 
SCAN 1 IPv4 VIP: -/scan1-vip/192.168.5.202
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes: 
SCAN VIP is individually disabled on nodes: 
SCAN 2 IPv4 VIP: -/scan2-vip/192.168.5.231
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes: 
SCAN VIP is individually disabled on nodes: 
SCAN 3 IPv4 VIP: -/scan3-vip/192.168.5.232
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes: 
SCAN VIP is individually disabled on nodes: 

[root@mclu22 ~]#  nslookup  mclu2-scan.mclu2.dscgrid.example.com
Server:        192.168.5.50
Address:    192.168.5.50#53
Non-authoritative answer:
Name:    mclu2-scan.mclu2.dscgrid.example.com
Address: 192.168.5.232
Name:    mclu2-scan.mclu2.dscgrid.example.com
Address: 192.168.5.202
Name:    mclu2-scan.mclu2.dscgrid.example.com
Address: 192.168.5.231

[root@mclu22 ~]# ping mclu2-scan.mclu2.dscgrid.example.com
PING mclu2-scan.mclu2.dscgrid.example.com (192.168.5.202) 56(84) bytes of data.
64 bytes from mclu22.example.com (192.168.5.202): icmp_seq=1 ttl=64 time=0.067 ms
64 bytes from mclu22.example.com (192.168.5.202): icmp_seq=2 ttl=64 time=0.037 ms
^C
--- mclu2-scan.mclu2.dscgrid.example.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.037/0.052/0.067/0.015 ms


[grid@mclu21 ~]$  oclumon manage -get MASTER
Master = mclu21

[grid@mclu21 ~]$  oclumon manage -get reppath
CHM Repository Path = +MGMT/_MGMTDB/50472078CF4019AEE0539705A8C0D652/DATAFILE/sysmgmtdata.292.944846507

[grid@mclu21 ~]$  oclumon dumpnodeview -allnodes
----------------------------------------
Node: mclu21 Clock: '2017-05-24 17.51.50+0200' SerialNo:445 
----------------------------------------
SYSTEM:
#pcpus: 1 #cores: 1 #vcpus: 1 cpuht: N chipname: Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz cpuusage: 46.68 cpusystem: 5.80 cpuuser: 40.87 cpunice: 0.00 cpuiowait: 0.00 cpusteal: 0.00 cpuq: 1 physmemfree: 1047400 physmemtotal: 7910784 mcache: 4806576 swapfree: 8257532 swaptotal: 8257532 hugepagetotal: 0 hugepagefree: 0 hugepagesize: 2048 ior: 0 iow: 41 ios: 10 swpin: 0 swpout: 0 pgin: 0 pgout: 20 netr: 81.940 netw: 85.211 procs: 248 procsoncpu: 1 #procs_blocked: 0 rtprocs: 7 rtprocsoncpu: N/A #fds: 10400 #sysfdlimit: 6815744 #disks: 5 #nics: 3 loadavg1: 6.92 loadavg5: 7.16 loadavg15: 5.56 nicErrors: 0

TOP CONSUMERS:
topcpu: 'gdb(20156) 31.19' topprivmem: 'gdb(20159) 353188' topshm: 'gdb(20159) 151624' topfd: 'crsd(21898) 274' topthread: 'crsd(21898) 52'
....

[root@mclu22 ~]#  tfactl print status
.-----------------------------------------------------------------------------------------------.
| Host   | Status of TFA | PID  | Port | Version    | Build ID             | Inventory Status   |
+--------+---------------+------+------+------------+----------------------+--------------------+
| mclu22 | RUNNING       | 2437 | 5000 | 12.2.1.0.0 | 12210020161122170355 | COMPLETE           |
| mclu21 | RUNNING       | 1209 | 5000 | 12.2.1.0.0 | 12210020161122170355 | COMPLETE           |
'--------+---------------+------+------+------------+----------------------+--------------------'

Verify DSC status after Member Cluster Setup


SQL> @pdb_info.sql
SQL> /*
SQL>          To connect to the GIMR database set ORACLE_SID : export ORACLE_SID=-MGMTDB
SQL> */
SQL> 
SQL> set linesize 132
SQL> COLUMN NAME FORMAT A18
SQL> SELECT NAME, CON_ID, DBID, CON_UID, GUID FROM V$CONTAINERS ORDER BY CON_ID;

NAME               CON_ID       DBID    CON_UID GUID
------------------ ------ ---------- ---------- --------------------------------
CDB$ROOT                1 1149111082          1 4700AA69A9553E5FE05387E5E50AC8DA
PDB$SEED                2  949396570  949396570 50458CC0190428B2E0539705A8C047D8
GIMR_DSCREP_10          3 3606966590 3606966590 504599D57F9148C0E0539705A8C0AD8D
GIMR_CLUREP_20          4 2292678490 2292678490 50472078CF4019AEE0539705A8C0D652

--> Management Database hosts a new PDB named GIMR_CLUREP_20

SQL> 
SQL> !asmcmd  find /DATA/mclu2 \*
+DATA/mclu2/OCRFILE/
+DATA/mclu2/OCRFILE/REGISTRY.257.944845929
+DATA/mclu2/VOTINGFILE/
+DATA/mclu2/VOTINGFILE/vfile.258.944845949

SQL> !asmcmd find --type VOTINGFILE / \*
+DATA/mclu2/VOTINGFILE/vfile.258.944845949

SQL> !asmcmd find --type OCRFILE / \*
+DATA/dsctw/OCRFILE/REGISTRY.255.944835699
+DATA/mclu2/OCRFILE/REGISTRY.257.944845929

SQL> ! crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   6e59072e99f34f66bf750a5c8daf616f (AFD:DATA1) [DATA]
 2. ONLINE   ef0d610cb44d4f2cbf9d977090b88c2c (AFD:DATA2) [DATA]
 3. ONLINE   db3f3572250c4f74bf969c7dbaadfd00 (AFD:DATA3) [DATA]
Located 3 voting disk(s).

SQL> ! crsctl get cluster mode status
Cluster is running in "flex" mode

SQL> ! crsctl get cluster class
CRS-41008: Cluster class is 'Domain Services Cluster'

SQL> ! crsctl get cluster name
CRS-6724: Current cluster name is 'dsctw'

Potential Errors during Member Cluster Setup

   1. Reading Member Cluster Configuration File fails with  
       [INS-30211] An unexpected exception occurred while extracting details from ASM client data
       PRCI-1167 : failed to extract attributes from the specified file "/home/grid/FILES/mclu2.xml"
       PRCT-1453 : failed to get ASM properties from ASM client data file /home/grid/FILES/mclu2.xml
       KFOD-00319: No ASM instance available for OCI connection
      Fix : Add GNS client Data to   Member Cluster Configuration File
            $ srvctl export gns -clientdata   mclu2.xml   -role CLIENT
            -> Fix confirmed 

   2. Reading Member Cluster Configuration File fails with  
    [INS-30211] An unexpected exception occurred while extracting details from ASM client data
       PRCI-1167 : failed to extract attributes from the specified file "/home/grid/FILES/mclu2.xml"
       PRCT-1453 : failed to get ASM properties from ASM client data file /home/grid/FILES/mclu2.xml
       KFOD-00321: failed to read the credential file /home/grid/FILES/mclu2.xml 
       -> Double check that the DSC ASM Configuration is working
      This error may be related to running
      [grid@dsctw21 grid]$ /u01/app/122/grid/gridSetup.sh -executeConfigTools -responseFile /home/grid/grid_dsctw2.rsp  
      without setting the passwords in the related rsp file:
     # Password for SYS user of Oracle ASM
    oracle.install.asm.SYSASMPassword=sys
    # Password for ASMSNMP account
    oracle.install.asm.monitorPassword=sys
      Fix: Add passwords before running   -executeConfigTools step
           -> Fix NOT confirmed  
  
   3. Crashes due to limited memory in my VirtualBox env (32 GByte)
   3.1  Crash of the DSC [ Virtualbox host freezes - could not track VM via top ]
        A failed Member Cluster setup due to memory shortage can kill your DSC GNS
        Note: This is a very dangerous situation as it kills your DSC env. 
              As said, always backup the OCR and export GNS !
   3.2  Crash of any or all Member Cluster nodes [ Virtualbox host freezes - could not track VM via top ]
        - GIMR database setup is partially installed but not working 
        - Member cluster itself is working fine

Member Cluster Deinstall

On all Member Cluster nodes but NOT the last one:
[root@mclu21 grid]#  $GRID_HOME/crs/install/rootcrs.sh -deconfig -force 
On last Member Cluster Node:
[root@mclu21 grid]#  $GRID_HOME/crs/install/rootcrs.sh -deconfig -force -lastnode
..
2017/05/25 14:37:18 CLSRSC-559: Ensure that the GPnP profile data under the 'gpnp' directory in /u01/app/122/grid is deleted on each node before using the software in the current Grid Infrastructure home for reconfiguration.
2017/05/25 14:37:18 CLSRSC-590: Ensure that the configuration for this Storage Client (mclu2) is deleted by running the command 'crsctl delete member_cluster_configuration <member_cluster_name>' on the Storage Server.
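The deconfig steps above can be sketched as a loop over the member nodes, running plain -deconfig everywhere and adding -lastnode for the final node; this is a dry run (the node list and the ssh form are assumptions, and echo replaces the real call):

```shell
# Dry run: print the rootcrs.sh -deconfig calls for every member node,
# adding -lastnode only for the final one. NODES and the ssh form are
# assumptions; echo replaces real execution.
GRID_HOME=/u01/app/122/grid
NODES="mclu22 mclu21"            # the last entry must be the last node
LAST=${NODES##* }

for node in $NODES; do
  OPTS="-deconfig -force"
  [ "$node" = "$LAST" ] && OPTS="$OPTS -lastnode"
  echo "ssh root@$node $GRID_HOME/crs/install/rootcrs.sh $OPTS"
done
```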

Delete Member Cluster mclu2 - Commands running on DSC

[grid@dsctw21 ~]$ crsctl delete  member_cluster_configuration mclu2 
ASMCMD-9477: delete member cluster 'mclu2' failed
KFOD-00327: failed to delete member cluster 'mclu2'
ORA-15366: unable to delete configuration for member cluster 'mclu2' because the directory '+DATA/mclu2/VOTINGFILE' was not empty
ORA-06512: at line 4
ORA-06512: at "SYS.X$DBMS_DISKGROUP", line 724
ORA-06512: at line 2

ASMCMD> find mclu2/ *
+DATA/mclu2/VOTINGFILE/
+DATA/mclu2/VOTINGFILE/vfile.258.944845949
ASMCMD> rm +DATA/mclu2/VOTINGFILE/vfile.258.944845949

SQL>    @pdb_info
NAME               CON_ID       DBID    CON_UID GUID
------------------ ------ ---------- ---------- --------------------------------
CDB$ROOT                1 1149111082          1 4700AA69A9553E5FE05387E5E50AC8DA
PDB$SEED                2  949396570  949396570 50458CC0190428B2E0539705A8C047D8
GIMR_DSCREP_10          3 3606966590 3606966590 504599D57F9148C0E0539705A8C0AD8D

-> The GIMR_CLUREP_20 PDB was deleted!

[grid@dsctw21 ~]$ srvctl config gns -list
dsctw21.CLSFRAMEdsctw SRV Target: 192.168.2.151.dsctw Protocol: tcp Port: 40020 Weight: 0 Priority: 0 Flags: 0x101
dsctw21.CLSFRAMEdsctw TXT NODE_ROLE="HUB", NODE_INCARNATION="0", NODE_TYPE="20" Flags: 0x101
dsctw22.CLSFRAMEdsctw SRV Target: 192.168.2.152.dsctw Protocol: tcp Port: 58466 Weight: 0 Priority: 0 Flags: 0x101
dsctw22.CLSFRAMEdsctw TXT NODE_ROLE="HUB", NODE_INCARNATION="0", NODE_TYPE="20" Flags: 0x101
mclu21.CLSFRAMEmclu2 SRV Target: 192.168.2.155.mclu2 Protocol: tcp Port: 14064 Weight: 0 Priority: 0 Flags: 0x101
mclu21.CLSFRAMEmclu2 TXT NODE_ROLE="HUB", NODE_INCARNATION="0", NODE_TYPE="20" Flags: 0x101
dscgrid.example.com DLV 20682 10 18 ( XoH6wdB6FkuM3qxr/ofncb0kpYVCa+hTubyn5B4PNgJzWF4kmbvPdN2CkEcCRBxt10x/YV8MLXEe0emM26OCAw== ) Unique Flags: 0x314
dscgrid.example.com DNSKEY 7 3 10 ( MIIBCgKCAQEAvu/8JsrxQAVTEPjq4+JfqPwewH/dc7Y/QbJfMp9wgIwRQMZyJSBSZSPdlqhw8fSGfNUmWJW8v+mJ4JsPmtFZRsUW4iB7XvO2SwnEuDnk/8W3vN6sooTmH82x8QxkOVjzWfhqJPLkGs9NP4791JEs0wI/HnXBoR4Xv56mzaPhFZ6vM2aJGWG0N/1i67cMOKIDpw90JV4HZKcaWeMsr57tOWqEec5+dhIKf07DJlCqa4UU/oSHH865DBzpqqEhfbGaUAiUeeJVVYVJrWFPhSttbxsdPdCcR9ulBLuR6PhekMj75wxiC8KUgAL7PUJjxkvyk3ugv5K73qkbPesNZf6pEQIDAQAB ) Unique Flags: 0x314
dscgrid.example.com NSEC3PARAM 10 0 2 ( jvm6kO+qyv65ztXFy53Dkw== ) Unique Flags: 0x314
dsctw-scan.dsctw A 192.168.5.226 Unique Flags: 0x81
dsctw-scan.dsctw A 192.168.5.235 Unique Flags: 0x81
dsctw-scan.dsctw A 192.168.5.238 Unique Flags: 0x81
dsctw-scan1-vip.dsctw A 192.168.5.238 Unique Flags: 0x81
dsctw-scan2-vip.dsctw A 192.168.5.235 Unique Flags: 0x81
dsctw-scan3-vip.dsctw A 192.168.5.226 Unique Flags: 0x81
dsctw21-vip.dsctw A 192.168.5.225 Unique Flags: 0x81
dsctw22-vip.dsctw A 192.168.5.241 Unique Flags: 0x81
dsctw-scan1-vip A 192.168.5.238 Unique Flags: 0x81
dsctw-scan2-vip A 192.168.5.235 Unique Flags: 0x81
dsctw-scan3-vip A 192.168.5.226 Unique Flags: 0x81
dsctw21-vip A 192.168.5.225 Unique Flags: 0x81
dsctw22-vip A 192.168.5.241 Unique Flags: 0x81
dsctw21.gipcdhaname SRV Target: 192.168.2.151.dsctw Protocol: tcp Port: 41795 Weight: 0 Priority: 0 Flags: 0x101
dsctw21.gipcdhaname TXT NODE_ROLE="HUB", NODE_INCARNATION="0", NODE_TYPE="20" Flags: 0x101
dsctw22.gipcdhaname SRV Target: 192.168.2.152.dsctw Protocol: tcp Port: 61595 Weight: 0 Priority: 0 Flags: 0x101
dsctw22.gipcdhaname TXT NODE_ROLE="HUB", NODE_INCARNATION="0", NODE_TYPE="20" Flags: 0x101
mclu21.gipcdhaname SRV Target: 192.168.2.155.mclu2 Protocol: tcp Port: 31416 Weight: 0 Priority: 0 Flags: 0x101
mclu21.gipcdhaname TXT NODE_ROLE="HUB", NODE_INCARNATION="0", NODE_TYPE="20" Flags: 0x101
gpnpd h:dsctw21 c:dsctw u:c5323627b2484f8fbf20e67a2c4624e1.gpnpa2c4624e1 SRV Target: dsctw21.dsctw Protocol: tcp Port: 21099 Weight: 0 Priority: 0 Flags: 0x101
gpnpd h:dsctw21 c:dsctw u:c5323627b2484f8fbf20e67a2c4624e1.gpnpa2c4624e1 TXT agent="gpnpd", cname="dsctw", guid="c5323627b2484f8fbf20e67a2c4624e1", host="dsctw21", pid="12420" Flags: 0x101
gpnpd h:dsctw22 c:dsctw u:c5323627b2484f8fbf20e67a2c4624e1.gpnpa2c4624e1 SRV Target: dsctw22.dsctw Protocol: tcp Port: 60348 Weight: 0 Priority: 0 Flags: 0x101
gpnpd h:dsctw22 c:dsctw u:c5323627b2484f8fbf20e67a2c4624e1.gpnpa2c4624e1 TXT agent="gpnpd", cname="dsctw", guid="c5323627b2484f8fbf20e67a2c4624e1", host="dsctw22", pid="13141" Flags: 0x101
CSSHub1.hubCSS SRV Target: dsctw21.dsctw Protocol: gipc Port: 0 Weight: 0 Priority: 0 Flags: 0x101
CSSHub1.hubCSS TXT HOSTQUAL="dsctw" Flags: 0x101
Net-X-1.oraAsm SRV Target: 192.168.2.151.dsctw Protocol: tcp Port: 1526 Weight: 0 Priority: 0 Flags: 0x101
Net-X-2.oraAsm SRV Target: 192.168.2.152.dsctw Protocol: tcp Port: 1526 Weight: 0 Priority: 0 Flags: 0x101
Oracle-GNS A 192.168.5.60 Unique Flags: 0x315
dsctw.Oracle-GNS SRV Target: Oracle-GNS Protocol: tcp Port: 14123 Weight: 0 Priority: 0 Flags: 0x315
dsctw.Oracle-GNS TXT CLUSTER_NAME="dsctw", CLUSTER_GUID="c5323627b2484f8fbf20e67a2c4624e1", NODE_NAME="dsctw21", SERVER_STATE="RUNNING", VERSION="12.2.0.0.0", PROTOCOL_VERSION="0xc200000", DOMAIN="dscgrid.example.com" Flags: 0x315
Oracle-GNS-ZM A 192.168.5.60 Unique Flags: 0x315
dsctw.Oracle-GNS-ZM SRV Target: Oracle-GNS-ZM Protocol: tcp Port: 39923 Weight: 0 Priority: 0 Flags: 0x315

--> Most GNS entries for our Member Cluster were deleted

Re-Executing GRID setup fails with [FATAL] [INS-30024]

After an unclean deinstallation, gridSetup.sh fails with error [FATAL] [INS-30024].
Instead of offering the option to install a NEW cluster, the installer offers the GRID upgrade option.

Debugging with strace

[grid@dsctw21 grid]$   gridSetup.sh -silent  -skipPrereqs -responseFile  /home/grid/grid_dsctw2.rsp    oracle.install.asm.SYSASMPassword=sys    oracle.install.asm.monitorPassword=sys 2>llog2
Launching Oracle Grid Infrastructure Setup Wizard...

[FATAL] [INS-30024] Installer has detected that the location determined as Oracle Grid Infrastructure home (/u01/app/122/grid), is not a valid Oracle home.
   ACTION: Ensure that either there are no environment variables pointing to this invalid location or register the location as an Oracle home in the central inventory.

Using strace to trace system calls 
[grid@dsctw21 grid]$ strace -f  gridSetup.sh -silent  -skipPrereqs -responseFile  /home/grid/grid_dsctw2.rsp    oracle.install.asm.SYSASMPassword=sys    oracle.install.asm.monitorPassword=sys 2>llog
Launching Oracle Grid Infrastructure Setup Wizard...

[FATAL] [INS-30024] Installer has detected that the location determined as Oracle Grid Infrastructure home (/u01/app/122/grid), is not a valid Oracle home.
   ACTION: Ensure that either there are no environment variables pointing to this invalid location or register the location as an Oracle home in the central inventory.

Check the log file for open() calls which succeed here but should fail in a CLEAN installation env
[grid@dsctw21 grid]$ grep open llog
..
[pid 11525] open("/etc/oracle/ocr.loc", O_RDONLY) = 93
[pid 11525] open("/etc/oracle/ocr.loc", O_RDONLY) = 93

--> It seems the installer tests for the files
 /etc/oracle/ocr.loc
 /etc/oracle/olr.loc
to decide whether it's an upgrade or a new installation.
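That grep can be narrowed to exactly these two files; a sketch that flags successful open() calls in an strace log (the sample log lines are hardcoded as an assumption - on a real node you would read the llog file produced by the strace run):

```shell
# Count successful open() calls on ocr.loc/olr.loc in an strace log.
# A clean install env would show '= -1 ENOENT' for these files.
# The sample log below is hardcoded for illustration; on a real node
# you would read the llog file produced by the strace run.
log='[pid 11525] open("/etc/oracle/ocr.loc", O_RDONLY) = 93
[pid 11525] open("/u01/app/oraInventory/locks", O_RDONLY) = -1 ENOENT (No such file or directory)'

hits=$(printf '%s\n' "$log" | grep 'open("/etc/oracle/o[cl]r\.loc"' | grep -cv '= -1')
echo "leftover config files found: $hits"
```

A non-zero count means leftover config files are steering the installer into upgrade mode.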

Fix : Rename ocr.loc and olr.loc 
[root@dsctw21 ~]# mv /etc/oracle/ocr.loc /etc/oracle/ocr.loc_tbd
[root@dsctw21 ~]# mv /etc/oracle/olr.loc /etc/oracle/olr.loc_tbd
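A guarded variant of this fix only renames what actually exists; in the sketch below a scratch directory stands in for /etc/oracle so it can run anywhere - on a real node you would point CONF_DIR at /etc/oracle:

```shell
# Rename ocr.loc/olr.loc out of the way, but only if they exist.
# CONF_DIR points at a scratch directory here so the sketch can run
# anywhere; on a real node you would set CONF_DIR=/etc/oracle.
CONF_DIR=$(mktemp -d)
touch "$CONF_DIR/ocr.loc" "$CONF_DIR/olr.loc"    # simulate the leftovers

for f in ocr.loc olr.loc; do
  if [ -f "$CONF_DIR/$f" ]; then
    mv "$CONF_DIR/$f" "$CONF_DIR/${f}_tbd"
    echo "renamed $f -> ${f}_tbd"
  fi
done
```

Keeping the *_tbd copies lets you roll back if the rename turns out to be the wrong call.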

Now gridSetup.sh should start the installation process

Recreate GNS 12.2

Overview

  • During a 12.2 Domain Service Cluster installation I’ve filled in the wrong GNS Subdomain name
  • This means nslookup for my SCAN address doesn’t work
  • The final cluvfy command reports the error: PRVF-5218 : Domain name “dsctw21-vip.dsctw2.example.com” did not resolve to an IP address.

-> So this was a good exercise to verify whether my older 12.1 article to recreate GNS also works with 12.2!

Backup your RAC profile and local OCR

As of 12.x/11.2 Grid Infrastructure, the private network configuration is stored not only in the OCR but also in the GPnP profile - please take a backup of profile.xml on all cluster nodes before proceeding, as the grid user:
[grid@dsctw21 peer]$ cd  $GRID_HOME/gpnp/dsctw21/profiles/peer/
[grid@dsctw21 peer]$ cp  profile.xml profile.xml_backup_5-Mai-2017
[root@dsctw21 ~]# export GRID_HOME=/u01/app/122/grid
[root@dsctw21 ~]# $GRID_HOME/bin/ocrconfig -local -manualbackup
dsctw21     2017/05/05 17:12:50     /u01/app/122/grid/cdata/dsctw21/backup_20170505_171250.olr     0     
dsctw21     2017/05/05 15:07:41     /u01/app/122/grid/cdata/dsctw21/backup_20170505_150741.olr     0  

[grid@dsctw21 peer]$ $GRID_HOME/bin/ocrconfig -local -showbackup
dsctw21     2017/05/05 17:12:50     /u01/app/122/grid/cdata/dsctw21/backup_20170505_171250.olr     0     
dsctw21     2017/05/05 15:07:41     /u01/app/122/grid/cdata/dsctw21/backup_20170505_150741.olr     0 
-> Repeat these steps on all of your RAC nodes
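The profile backup can be wrapped in a small helper that appends a date suffix, one run per node; a self-contained sketch (a scratch directory stands in for $GRID_HOME/gpnp/&lt;node&gt;/profiles/peer, and the suffix format is my own choice):

```shell
# Back up the gpnp profile with a date suffix before touching GNS.
# PEER_DIR is a scratch stand-in for $GRID_HOME/gpnp/<node>/profiles/peer,
# and the date-suffix format is an arbitrary choice.
PEER_DIR=$(mktemp -d)
printf '<gpnp-profile/>\n' > "$PEER_DIR/profile.xml"    # simulated profile

backup="$PEER_DIR/profile.xml_backup_$(date +%d-%b-%Y)"
cp "$PEER_DIR/profile.xml" "$backup"
echo "backup written: $backup"
```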

Collect VIP Addresses, Device Names, GNS Details

[root@dsctw21 ~]# $GRID_HOME/bin/oifcfg getif
enp0s8  192.168.5.0  global  public
enp0s9  192.168.2.0  global  cluster_interconnect,asm

Get the current GNS VIP IP:
[root@dsctw21 ~]# $GRID_HOME/bin/crsctl status resource ora.gns.vip -f | grep USR_ORA_VIP
GEN_USR_ORA_VIP=
USR_ORA_VIP=192.168.5.60

[root@dsctw21 ~]# ifconfig enp0s8
enp0s8: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.5.151  netmask 255.255.255.0  broadcast 192.168.5.255

[root@dsctw21 ~]#  ifconfig enp0s9
enp0s9: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.2.151  netmask 255.255.255.0  broadcast 192.168.2.255

[root@dsctw21 ~]#  $GRID_HOME/bin/srvctl config gns -a -l
GNS is enabled.
GNS is listening for DNS server requests on port 53
GNS is using port 5353 to connect to mDNS
GNS status: Self-check failed.
Domain served by GNS: example.com
GNS version: 12.2.0.1.0
Globally unique identifier of the cluster where GNS is running: 3a9c87760b7bdf65ffea8852e7dfdae5
Name of the cluster where GNS is running: dsctw2
Cluster type: server.
GNS log level: 1.
GNS listening addresses: tcp://192.168.5.60:44456.
GNS instance role: primary
GNS is individually enabled on nodes: 
GNS is individually disabled on nodes: 

[root@dsctw21 ~]# $GRID_HOME/bin/srvctl config gns
GNS is enabled.
GNS VIP addresses: 192.168.5.60
Domain served by GNS: example.com

This should be a subdomain, as example.com is our DNS domain!

Stop resources and recreate GNS and nodeapps

[root@dsctw21 ~]# $GRID_HOME/bin/srvctl stop scan_listener 
[root@dsctw21 ~]# $GRID_HOME/bin/srvctl stop scan
[root@dsctw21 ~]# $GRID_HOME/bin/srvctl stop nodeapps -f
[root@dsctw21 ~]# $GRID_HOME/bin/srvctl stop gns
[root@dsctw21 ~]# $GRID_HOME/bin/srvctl remove nodeapps
Please confirm that you intend to remove node-level applications on all nodes of the cluster (y/[n]) y
[root@dsctw21 ~]# $GRID_HOME/bin/srvctl remove gns
Remove GNS? (y/[n]) y
[root@dsctw21 ~]# $GRID_HOME/bin/srvctl add gns -i 192.168.5.60 -d dsctw2.example.com
[root@dsctw21 ~]# $GRID_HOME/bin/srvctl config gns
GNS is enabled.
GNS VIP addresses: 192.168.5.60
Domain served by GNS: dsctw2.example.com
[root@dsctw21 ~]# $GRID_HOME/bin/srvctl config gns -list
CLSNS-00005: operation timed out
  CLSNS-00041: failure to contact name servers 192.168.5.60:53
    CLSGN-00070: Service location failed.
[root@dsctw21 ~]# $GRID_HOME/bin/srvctl start gns
[root@dsctw21 ~]# $GRID_HOME/bin/srvctl config gns -list
dsctw2.example.com DLV 50343 10 18 ( zfiaA8U30oiGSATInCdyN7pIKf1ZIVQhHsF6OQti9bvXw7dUhNmDv/txClkHX6BjkLTBbPyWGdRjEMf+uUqYHA== ) Unique Flags: 0x314
dsctw2.example.com DNSKEY 7 3 10 ( MIIBCgKCAQEAmxQnG2xkpQMXGRXD2tBTZkUKYUsV+Sj/w6YmpFdpMQVoNVSXJCWgCDqIjLrfVA2AQUeEaAek6pfOlMp6Tev2nPVvNqPpul5Fs63cFVzwjdTI4zU6lSC6+2UVJnAN6BTEmrOzKKt/kuxoNNI7V4DZ5Nj6UoUJ2MXGr/+RSU44GboHnrftvFaVN8pp0TOoOBTj5hHH8C73I+lFfDNhMXEY8WQhb1nP6Cv02qPMsbb8edq1Dy8lt6N6kzjh+9hKPNdqM7HB3OVV5L18E5HtLjWOhMZLqJ7oDTDsQcMMuYmfFjbi3JvGQrdTlGHAv9f4W/vRL/KV8bDkDFnSRSFubxsbdQIDAQAB ) Unique Flags: 0x314
dsctw2.example.com NSEC3PARAM 10 0 2 ( jvm6kO+qyv65ztXFy53Dkw== ) Unique Flags: 0x314
Oracle-GNS A 192.168.5.60 Unique Flags: 0x315
dsctw2.Oracle-GNS SRV Target: Oracle-GNS Protocol: tcp Port: 59102 Weight: 0 Priority: 0 Flags: 0x315
dsctw2.Oracle-GNS TXT CLUSTER_NAME="dsctw2", CLUSTER_GUID="3a9c87760b7bdf65ffea8852e7dfdae5", NODE_NAME="dsctw22", SERVER_STATE="RUNNING", VERSION="12.2.0.0.0", PROTOCOL_VERSION="0xc200000", DOMAIN="dsctw2.example.com" Flags: 0x315
Oracle-GNS-ZM A 192.168.5.60 Unique Flags: 0x315
dsctw2.Oracle-GNS-ZM SRV Target: Oracle-GNS-ZM Protocol: tcp Port: 34148 Weight: 0 Priority: 0 Flags: 0x315
--> No VIP IPs yet!

Recreate Nodeapps

[root@dsctw21 ~]#  $GRID_HOME/bin/srvctl add nodeapps -S 192.168.5.0/255.255.255.0/enp0s8
[root@dsctw21 ~]# $GRID_HOME/bin/srvctl start nodeapps
PRKO-2422 : ONS is already started on node(s): dsctw21,dsctw22
[root@dsctw21 ~]# $GRID_HOME/bin/srvctl config gns -list
dsctw2.example.com DLV 50343 10 18 ( zfiaA8U30oiGSATInCdyN7pIKf1ZIVQhHsF6OQti9bvXw7dUhNmDv/txClkHX6BjkLTBbPyWGdRjEMf+uUqYHA== ) Unique Flags: 0x314
dsctw2.example.com DNSKEY 7 3 10 ( MIIBCgKCAQEAmxQnG2xkpQMXGRXD2tBTZkUKYUsV+Sj/w6YmpFdpMQVoNVSXJCWgCDqIjLrfVA2AQUeEaAek6pfOlMp6Tev2nPVvNqPpul5Fs63cFVzwjdTI4zU6lSC6+2UVJnAN6BTEmrOzKKt/kuxoNNI7V4DZ5Nj6UoUJ2MXGr/+RSU44GboHnrftvFaVN8pp0TOoOBTj5hHH8C73I+lFfDNhMXEY8WQhb1nP6Cv02qPMsbb8edq1Dy8lt6N6kzjh+9hKPNdqM7HB3OVV5L18E5HtLjWOhMZLqJ7oDTDsQcMMuYmfFjbi3JvGQrdTlGHAv9f4W/vRL/KV8bDkDFnSRSFubxsbdQIDAQAB ) Unique Flags: 0x314
dsctw2.example.com NSEC3PARAM 10 0 2 ( jvm6kO+qyv65ztXFy53Dkw== ) Unique Flags: 0x314
dsctw2-scan.dsctw2 A 192.168.5.231 Unique Flags: 0x1
dsctw2-scan1-vip.dsctw2 A 192.168.5.231 Unique Flags: 0x1
dsctw21-vip.dsctw2 A 192.168.5.233 Unique Flags: 0x1
dsctw22-vip.dsctw2 A 192.168.5.237 Unique Flags: 0x1
dsctw2-scan A 192.168.5.231 Unique Flags: 0x1
dsctw2-scan1-vip A 192.168.5.231 Unique Flags: 0x1
dsctw21-vip A 192.168.5.233 Unique Flags: 0x1
dsctw22-vip A 192.168.5.237 Unique Flags: 0x1
Oracle-GNS A 192.168.5.60 Unique Flags: 0x315
dsctw2.Oracle-GNS SRV Target: Oracle-GNS Protocol: tcp Port: 59102 Weight: 0 Priority: 0 Flags: 0x315
dsctw2.Oracle-GNS TXT CLUSTER_NAME="dsctw2", CLUSTER_GUID="3a9c87760b7bdf65ffea8852e7dfdae5", NODE_NAME="dsctw22", SERVER_STATE="RUNNING", VERSION="12.2.0.0.0", PROTOCOL_VERSION="0xc200000", DOMAIN="dsctw2.example.com" Flags: 0x315
Oracle-GNS-ZM A 192.168.5.60 Unique Flags: 0x315
dsctw2.Oracle-GNS-ZM SRV Target: Oracle-GNS-ZM Protocol: tcp Port: 34148 Weight: 0 Priority: 0 Flags: 0x315
--> GNS knows the VIP IPs - the related cluster resources (VIPs, GNS and SCAN listeners) should be ONLINE 
*****  Cluster Resources: *****
Resource NAME               INST   TARGET       STATE        SERVER          STATE_DETAILS
--------------------------- ----   ------------ ------------ --------------- -----------------------------------------
ora.LISTENER_SCAN1.lsnr        1   ONLINE       ONLINE       dsctw22         STABLE  
ora.LISTENER_SCAN2.lsnr        1   ONLINE       ONLINE       dsctw21         STABLE  
ora.LISTENER_SCAN3.lsnr        1   ONLINE       ONLINE       dsctw21         STABLE ...
ora.dsctw21.vip                1   ONLINE       ONLINE       dsctw21         STABLE  
ora.dsctw22.vip                1   ONLINE       ONLINE       dsctw22         STABLE  
ora.gns                        1   ONLINE       ONLINE       dsctw22         STABLE  
ora.gns.vip                    1   ONLINE    

Verify our NEWLY created GNS

[root@dsctw21 ~]# $GRID_HOME/bin/srvctl config gns -list
dsctw2.example.com DLV 50343 10 18 ( zfiaA8U30oiGSATInCdyN7pIKf1ZIVQhHsF6OQti9bvXw7dUhNmDv/txClkHX6BjkLTBbPyWGdRjEMf+uUqYHA== ) Unique Flags: 0x314
..
dsctw2.example.com NSEC3PARAM 10 0 2 ( jvm6kO+qyv65ztXFy53Dkw== ) Unique Flags: 0x314
dsctw2-scan.dsctw2 A 192.168.5.231 Unique Flags: 0x1
dsctw2-scan.dsctw2 A 192.168.5.234 Unique Flags: 0x1
dsctw2-scan.dsctw2 A 192.168.5.235 Unique Flags: 0x1
dsctw2-scan1-vip.dsctw2 A 192.168.5.231 Unique Flags: 0x1
dsctw2-scan2-vip.dsctw2 A 192.168.5.235 Unique Flags: 0x1
dsctw2-scan3-vip.dsctw2 A 192.168.5.234 Unique Flags: 0x1
dsctw21-vip.dsctw2 A 192.168.5.233 Unique Flags: 0x1
dsctw22-vip.dsctw2 A 192.168.5.237 Unique Flags: 0x1

[root@dsctw21 ~]#  nslookup dsctw2-scan.dsctw2.example.com
Server:        192.168.5.50
Address:    192.168.5.50#53

Non-authoritative answer:
Name:    dsctw2-scan.dsctw2.example.com
Address: 192.168.5.235
Name:    dsctw2-scan.dsctw2.example.com
Address: 192.168.5.234
Name:    dsctw2-scan.dsctw2.example.com
Address: 192.168.5.231

--> VIPS, SCAN and SCAN VIPS should be ONLINE 

Congrats, you have successfully reconfigured GNS on 12.2.0.1 !

Reference

Troubleshooting Clusterware startup problems with DTRACE

First Steps which may avoid setting up DTRACE at all

Clean up the special socket files in /var/tmp/.oracle

Either reboot your OS, or clean up the socket files and restart the CRS stack:
[root@hract21 Desktop]# crsctl stop crs -f
[root@hract21 Desktop]# rm -rf /var/tmp/.oracle/*
[root@hract21 Desktop]# crsctl start crs
 CRS-4123: Oracle High Availability Services has been started.

Note: A complete OS reboot may be needed to fix hanging processes stuck in DISKWAIT.
      If possible, always do a full OS reboot.
      An OS reboot will always clean up /var/tmp/.oracle/*
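Before removing anything under /var/tmp/.oracle it is worth confirming the CRS stack is really down. A minimal sketch (the sockets_cleanup_safe helper is hypothetical, not part of this article); it reads a process listing on stdin:

```shell
# Decide whether /var/tmp/.oracle may be safely cleaned.
# Cleaning the socket files while ohasd/ocssd/crsd are alive will break CRS.
sockets_cleanup_safe() {
  if grep -Eq 'ohasd\.bin|ocssd\.bin|crsd\.bin'; then
    echo "UNSAFE - stop CRS first: crsctl stop crs -f"
  else
    echo "SAFE - CRS stack is down, /var/tmp/.oracle may be cleaned"
  fi
}
# Usage: ps -ef | sockets_cleanup_safe
```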

Quickly verify your OS with a simple sh script : chk_os.sh

#!/bin/bash 
NS=ns1.example.com
HOSTNAME1=hract21.example.com
HOSTNAME2=hract22.example.com
PRIV_IP1=192.168.2.121
PRIV_IP2=192.168.2.122
PUBLIC_IF=eth1
PRIVATE_IF=eth2

echo ""
echo "Disk Space : "
df

echo ""
echo "Major Clusterware Executable Protections : "
ls -l $GRID_HOME/bin/ohasd*
ls -l $GRID_HOME/bin/orarootagent*
ls -l $GRID_HOME/bin/oraagent*
ls -l $GRID_HOME/bin/mdnsd*
ls -l $GRID_HOME/bin/evmd*
ls -l $GRID_HOME/bin/gpnpd*
ls -l $GRID_HOME/bin/evmlogger*
ls -l $GRID_HOME/bin/osysmond.*
ls -l $GRID_HOME/bin/gipcd*
ls -l $GRID_HOME/bin/cssdmonitor*
ls -l $GRID_HOME/bin/cssdagent*
ls -l $GRID_HOME/bin/ocssd*
ls -l $GRID_HOME/bin/octssd*
ls -l $GRID_HOME/bin/crsd
ls -l $GRID_HOME/bin/crsd.bin
ls -l $GRID_HOME/bin/tnslsnr


echo ""
echo "Ping Nameserver: "
ping -c 2  $NS 

echo ""
echo "Test your PUBLIC interface and your nameserver setup"
nslookup $HOSTNAME1
nslookup $HOSTNAME2
ping -I $PUBLIC_IF -c 2   $HOSTNAME1
ping -I $PUBLIC_IF -c 2   $HOSTNAME2
 
ping -I $PRIVATE_IF -c 2   $PRIV_IP1 
ping -I $PRIVATE_IF -c 2   $PRIV_IP2

echo ""
echo "Verify protections for HOSTNAME.pid files - should be: 644"
find $GRID_HOME -name hract21.pid  -exec ls -l {} \; 

echo ""
echo "Service iptables and avahi-daemon should not run - avahi-daemon uses CW port 5353 "
service iptables status
ps -elf | grep avahi | grep -v grep

echo ""
echo "Ports :53 :5353 :42424 :8888 should not be used by NON-Clusterware processes "
echo "  - OC4J reports : tcp   0 0 ::ffff:127.0.0.1:8888  :::*  LISTEN   501 67433979  2580/java"           
netstat -taupen | egrep ":53 |:5353 |:42424 |:8888 "

echo ""
echo "Compare the IP addresses of the PUBLIC and PRIVATE interfaces with profile.xml "
echo " - Devices should report UP BROADCAST RUNNING MULTICAST "
echo " - Double check NETWORK addresses matches profile.xml settings   "
echo ""
$GRID_HOME/bin/gpnptool get 2>/dev/null  |  xmllint --format - | egrep 'CSS-Profile|ASM-Profile|Network id'
echo ""
ifconfig $PUBLIC_IF | egrep 'eth|inet addr|MTU'
echo ""
ifconfig $PRIVATE_IF | egrep 'eth|inet addr|MTU'

echo "Checking ASM disk status for disks named /dev/asm ...  - you may need to change this "
ls -l  /dev/asm*

echo ""
echo "Verify ASM disk "
su - grid -c "ssh $HOSTNAME2 ocrcheck"
su - grid -c "ssh $HOSTNAME2  asmcmd lsdsk -k"
echo ""
su - grid -c "kfed read /dev/asmdisk1_10G | grep name"
echo ""
su - grid -c "kfed read /dev/asmdisk2_10G | grep name"
echo ""
su - grid -c "kfed read /dev/asmdisk3_10G | grep name"
echo ""
su - grid -c "kfed read /dev/asmdisk4_10G | grep name"
echo ""


Output:
..
Ports :53 :5353 :42424 :8888 should not be used by NON-Clusterware processes 
  - OC4J reports : tcp   0 0 ::ffff:127.0.0.1:8888  :::*  LISTEN   501 67433979  2580/java
udp        0      0 0.0.0.0:5353                0.0.0.0:*    501        54383580   28618/mdnsd.bin     
udp        0      0 0.0.0.0:5353                0.0.0.0:*    501        54383565   28618/mdnsd.bin     
udp        0      0 0.0.0.0:5353                0.0.0.0:*    501        54383564   28618/mdnsd.bin     
udp        0      0 0.0.0.0:5353                0.0.0.0:*    501        54383563   28618/mdnsd.bin     
udp        0      0 192.168.2.255:42424         0.0.0.0:*    0          54429417   28502/ohasd.bin     
udp        0      0 230.0.1.0:42424             0.0.0.0:*    0          54429416   28502/ohasd.bin     
udp        0      0 224.0.0.251:42424           0.0.0.0:*    0          54429415   28502/ohasd.bin     
udp        0      0 192.168.2.255:42424         0.0.0.0:*    501        54412444   28827/ocssd.bin     
udp        0      0 230.0.1.0:42424             0.0.0.0:*    501        54412443   28827/ocssd.bin     
udp        0      0 224.0.0.251:42424           0.0.0.0:*    501        54412442   28827/ocssd.bin     
udp        0      0 192.168.2.255:42424         0.0.0.0:*    501        54406273   28742/gipcd.bin     
udp        0      0 230.0.1.0:42424             0.0.0.0:*    501        54406272   28742/gipcd.bin     
udp        0      0 224.0.0.251:42424           0.0.0.0:*    501        54406271   28742/gipcd.bin     
udp        0      0 192.168.5.58:53             0.0.0.0:*    0          67400781   2472/gnsd.bin 
tcp        0      0 ::ffff:127.0.0.1:8888        LISTEN      501        67433979   2580/java  
--> mdnsd.bin is using port 5353
    ohasd.bin, ohasd.bin, gipcd.bin are using port 42424
    oc4j is using port 8888           
    GNS is using port 53 
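The netstat output above can be reduced to a compact port-to-process map with a small awk filter. A sketch (the port_owners helper is hypothetical), assuming the Linux `netstat -taupen` column layout where the local address is field 4 and the owning PID/program is the last field:

```shell
# Map the well-known Clusterware ports to their owning processes.
# Input: 'netstat -taupen' style lines on stdin (assumed Linux field layout).
port_owners() {
  awk '$4 ~ /:(53|5353|42424|8888)$/ { split($NF, p, "/"); print $4, p[2] }'
}
# Usage: netstat -taupen | port_owners
```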

Compare the IP addresses of the PUBLIC and PRIVATE interfaces with profile.xml 
 - Devices should report UP BROADCAST RUNNING MULTICAST 
 - Double check NETWORK addresses matches profile.xml settings   
    <gpnp:HostNetwork id="gen" HostName="*">
      <gpnp:Network id="net1" IP="192.168.5.0" Adapter="eth1" Use="public"/>
      <gpnp:Network id="net2" IP="192.168.2.0" Adapter="eth2" Use="asm,cluster_interconnect"/>
  <orcl:CSS-Profile id="css" DiscoveryString="+asm" LeaseDuration="400"/>
  <orcl:ASM-Profile id="asm" DiscoveryString="/dev/asm*" SPFile="+DATA/ract2/ASMPARAMETERFILE/registry.253.870352347" Mode="remote"/>

eth1      Link encap:Ethernet  HWaddr 08:00:27:7D:8E:49  
          inet addr:192.168.5.121  Bcast:192.168.5.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

eth2      Link encap:Ethernet  HWaddr 08:00:27:4E:C9:BF  
          inet addr:192.168.2.121  Bcast:192.168.2.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

  --> IP="192.168.5.0" Adapter="eth1" should match --> eth1 : inet addr:192.168.5.121  Bcast:192.168.5.255  Mask:255.255.255.0 
      IP="192.168.2.0" Adapter="eth2" should match --> eth2 : inet addr:192.168.2.121  Bcast:192.168.2.255  Mask:255.255.255.0

Manually add database/instance resources after a complete CRS reconfiguration

Overview

  • After a complete CRS reconfiguration, database and instance resources are gone
  • Of course you should have a script to recreate the resources - but if not, this article should give you an idea of how to recreate the database and instance resources

Recreate database resource grac4 and add instances grac42 and grac43 ( hosts grac42 and grac43 )

Locate the SPFile from a running instance  
[oracle@grac42 ~]$ cat /u01/app/oracle/product/11204/racdb/dbs/initgrac42.ora
SPFILE='+DATA/grac4/spfilegrac4.ora'

Add the database and the database instances ( grac42 / grac43 - for simplicity, hostname and instance name are equal ) 
[oracle@grac42 ~]$ srvctl add database -d grac4 -n grac4 -o /u01/app/oracle/product/11204/racdb -p '+DATA/grac4/spfilegrac4.ora' \
       -s OPEN -y AUTOMATIC -a "DATA" -t IMMEDIATE
[oracle@grac42 ~]$ srvctl add instance -d grac4 -i grac42 -n  grac42
[oracle@grac42 ~]$ srvctl add instance -d grac4 -i grac43 -n  grac43
[oracle@grac42 ~]$ crs | egrep 'db|---|Name'
-------------------------      ---------- ----------      ------------ ------------------
ora.grac4.db                   OFFLINE    OFFLINE                       
ora.grac4.db                   OFFLINE    OFFLINE   

Start the instances a first time with sqlplus or srvctl and verify the instance status
Instance grac42:
[oracle@grac42 ~]$ env | grep SID
ORACLE_SID=grac42
[oracle@grac42 ~]$ sqlplus / as sysdba
Connected to an idle instance.
SQL> startup
..
Instance grac43: 
[oracle@grac42 ~]$ srvctl start instance -d grac4 -i grac43
[oracle@grac42 ~]$   crs | egrep 'db|---|Name'
-------------------------      ---------- ----------      ------------ ------------------
ora.grac4.db                   ONLINE     ONLINE          grac42       Open 
ora.grac4.db                   ONLINE     ONLINE          grac43       Open 

Verify the current database and instance status 
SQL> select to_char( INST_ID) INST_ID, to_char(INSTANCE_NUMBER) INST_NUM, INSTANCE_NAME INST_NAME, HOST_NAME,
  2   VERSION, to_char(STARTUP_TIME,'DD-MON HH:MI:SS') STARTUP_TIME , STATUS, PARALLEL,to_char(THREAD#) THREAD#,
  3   ARCHIVER, LOGINS, SHUTDOWN_PENDING, DATABASE_STATUS DB_STATUS, INSTANCE_ROLE, ACTIVE_STATE, BLOCKED
  4      from gv$instance;
INST_ID INST_NUM INST_NAME HOST_NAME          VERSION       STARTUP_TIME    STATUS     PAR THREAD#  ARCHIVE LOGINS     SHU DB_STATUS INSTANCE_ROLE      ACTIVE_ST BLO
------- -------- --------- ------------------ ------------ --------------- ---------- --- -------- ------- ---------- --- --------- ------------------ --------- ---
2    2     grac42    grac42.example.com 11.2.0.4.0   04-OCT 09:42:36 OPEN       YES 2       STARTED ALLOWED    NO  ACTIVE    PRIMARY_INSTANCE   NORMAL     NO
3    3     grac43    grac43.example.com 11.2.0.4.0   04-OCT 09:45:18 OPEN       YES 3       STARTED ALLOWED    NO  ACTIVE    PRIMARY_INSTANCE   NORMAL     NO

SQL> /*
SQL>      Don't use gv$log and gv$logfile - results can be misleading
SQL> */
SQL> 
SQL> col THREAD# format 99999999
SQL> select * from v$log order by THREAD#, GROUP#;
    GROUP#   THREAD#  SEQUENCE#      BYTES  BLOCKSIZE     MEMBERS ARC STATUS    FIRST_CHANGE# FIRST_TIM NEXT_CHANGE# NEXT_TIME
---------- --------- ---------- ---------- ---------- ---------- --- ---------- ------------- --------- ------------ ---------
     3       2        245   52428800      512           2 YES INACTIVE         56715021 04-OCT-14     56729373 04-OCT-14
     4       2        246   52428800      512           2 NO  CURRENT         56729373 04-OCT-14   2.8147E+14
     5       3        257   52428800      512           2 YES INACTIVE         56715019 04-OCT-14     56715064 04-OCT-14
     6       3        258   52428800      512           2 NO  CURRENT         56717477 04-OCT-14   2.8147E+14 04-OCT-14
SQL> select * from v$logfile order by GROUP#;
    GROUP# STATUS     TYPE    MEMBER                         IS_
---------- ---------- ------- -------------------------------------------------- ---
     3          ONLINE  +FRA/grac4/onlinelog/group_3.1026.845590849     YES
     3          ONLINE  +DATA/grac4/onlinelog/group_3.262.845590841     NO
     4          ONLINE  +DATA/grac4/onlinelog/group_4.265.845590853     NO
     4          ONLINE  +FRA/grac4/onlinelog/group_4.1027.845590861     YES
     5          ONLINE  +DATA/grac4/onlinelog/group_5.270.859796931     NO
     5          ONLINE  +FRA/grac4/onlinelog/group_5.368.859796939     YES
     6          ONLINE  +DATA/grac4/onlinelog/group_6.266.859796961     NO
     6          ONLINE  +FRA/grac4/onlinelog/group_6.370.859796975     YES
8 rows selected.
SQL> select THREAD# , STATUS , ENABLED from v$thread order by THREAD#;
  THREAD# STATUS     ENABLED
--------- ---------- --------
    2 OPEN         PUBLIC
    3 OPEN         PUBLIC

--------------------------

Troubleshooting Clusterware startup problems with detailed debugging info

What to do first ?

Note: 80 % of Clusterware startup problems are related to:

  • Disk Space Problems
  • Network Connectivity Problems, where the following system calls fail:
    • bind() - specifies the address & port on the local side of the connection. Check for local IP changes, including changes to the Netmask, ...
    • connect() - specifies the address & port on the remote side of the connection. Check for remote IP changes, Netmask changes, Firewall issues, ...
    • gethostbyname() - check your Nameserver connectivity and configuration
  • File Protection Problems

This translates to some very important tasks before starting Clusterware debugging:

    • Check your disk space using:  # df
    • Check whether you are running a firewall: # service iptables status ( <-- this command is very important; disable iptables asap if it is enabled )
    • Check whether the avahi daemon is running: # service avahi-daemon status
    • Reboot your system to clean up the special socket files in: /var/tmp/.oracle
    • Verify network connectivity ( ping, nslookup ) and don't forget to ask your Network Admin about any changes made in the last couple of days
    • Check your ASM disks with kfed for a valid ASM disk header
[grid@ractw21 ~]$ kfed read  /dev/sdb1  | egrep 'name|size|type'
kfbh.type:                            1 ; 0x002: KFBTYP_DISKHEAD
kfdhdb.dskname:                   DATA3 ; 0x028: length=5
kfdhdb.grpname:                    DATA ; 0x048: length=4
kfdhdb.fgname:                    DATA3 ; 0x068: length=5
kfdhdb.secsize:                     512 ; 0x0b8: 0x0200
kfdhdb.blksize:                    4096 ; 0x0ba: 0x1000
kfdhdb.ausize:                  4194304 ; 0x0bc: 0x00400000
kfdhdb.dsksize:                    5119 ; 0x0c4: 0x000013ff
[grid@ractw21 ~]$  kfed read  /dev/sdc1  | egrep 'name|size|type'
kfbh.type:                            1 ; 0x002: KFBTYP_DISKHEAD
...
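The kfed check can be reduced to a pass/fail test on the block type. A sketch (the asm_header_ok helper is hypothetical); it classifies a kfed dump passed on stdin:

```shell
# Classify a kfed dump (stdin): a healthy ASM disk reports KFBTYP_DISKHEAD
# in the kfbh.type line.
asm_header_ok() {
  if grep -q 'KFBTYP_DISKHEAD'; then
    echo "OK"
  else
    echo "CORRUPT or not an ASM disk"
  fi
}
# Usage: kfed read /dev/sdb1 | asm_header_ok
```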

Verify your system with cluvfy

1) If possible, try to restart your failing node.
   If not, stop and restart at least your CRS stack:
# crsctl stop crs -f 
# crsctl start crs

2) If the problem persists, collect the following data
--->  Working node
# olsnodes -n -i -s -t  
# oifcfg getif

---> Failing node
# crsctl check crs
# crsctl check css
# crsctl check evm
# crsctl stat res -t -init

---> Run on all nodes and compare the results ( CI device name, MTU and netmask should be identical )
# ifconfig -a 
# df
# netstat -rn 

Check that avahi is disabled and no firewall is configured ( very important !! ) 
# service iptables status      ( Linux specific command )
# service avahi-daemon status  ( Linux specific command )
# nslookup grac41              ( use any or all of your cluster nodes, like grac41, grac42, grac43 )

Locate the cluster interconnect and ping the remote nodes
oifcfg getif
eth3  192.168.3.0  global  cluster_interconnect
[root@grac41 Desktop]#     ifconfig | egrep 'eth3|192.168.3'
eth3      Link encap:Ethernet  HWaddr 08:00:27:09:F0:99  
          inet addr:192.168.3.101  Bcast:192.168.3.255  Mask:255.255.255.0
--> Here we know eth3 is our cluster interconnect device with local address 192.168.3.101 
[root@grac41 Desktop]#  ping -I 192.168.3.101 192.168.3.102
[root@grac41 Desktop]#  ping -I 192.168.3.101 192.168.3.103
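The manual lookup above can be scripted: extract the CI device and subnet from `oifcfg getif` output. A sketch (the ci_interface helper is hypothetical), assuming the four-column output shown ( name, subnet, scope, type ):

```shell
# Print the device and subnet of the cluster_interconnect interface,
# given 'oifcfg getif' output on stdin.
ci_interface() {
  awk '$4 ~ /cluster_interconnect/ { print $1, $2 }'
}
# Usage: oifcfg getif | ci_interface
```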

Log in as the Grid user and check the group permissions ( compare results with a working node )
[grid@grac41 ~]$ id

3) Check your voting disks/OCR setup 
On a working Node:
[root@grac41 Desktop]# ocrcheck
...
     Device/File Name         :       +OCR
Locate the related disks
[grid@grac41 ~]$  asmcmd lsdsk -k
Total_MB  Free_MB  OS_MB  Name       Failgroup  Failgroup_Type  Library  Label  UDID  Product  Redund   Path
    2047     1695   2047  OCR_0000   OCR_0000   REGULAR         System                         UNKNOWN  /dev/asm_ocr_2G_disk1
    2047     1697   2047  OCR_0001   OCR_0001   REGULAR         System                         UNKNOWN  /dev/asm_ocr_2G_disk2
    2047     1697   2047  OCR_0002   OCR_0002   REGULAR         System                         UNKNOWN  /dev/asm_ocr_2G_disk3
On the failed node use kfed to read the disk header ( for all disks : asm_ocr_2G_disk1, asm_ocr_2G_disk2, asm_ocr_2G_disk3 ) 
[grid@grac41 ~]$ kfed read /dev/asm_ocr_2G_disk1 | grep name
kfdhdb.dskname:                OCR_0000 ; 0x028: length=8
kfdhdb.grpname:                     OCR ; 0x048: length=3
kfdhdb.fgname:                 OCR_0000 ; 0x068: length=8
kfdhdb.capname:                         ; 0x088: length=0

4) Verify your cluster setup by running cluvfy
Download the 12.1 cluvfy from  
   http://www.oracle.com/technetwork/database/options/clustering/downloads/index.html,
   extract the zip file and run as grid user: 

Verify CRS installation ( if possible from a working node )
[grid@grac41 cluvf12]$  ./bin/cluvfy  stage -post crsinst -n grac41,grac42 -verbose 

Verify file protections ( run this on all nodes - verifies more than 1100 files  )
[grid@grac41 cluvf12]$  ./bin/cluvfy comp software
..
  1178 files verified                 
Software check failed

Overview

      • Version tested: GRID 11.2.0.4.2 / OEL 6.5
      • Before running any command from this article, please backup OLR, OCR and your CW software !

It's your responsibility to have a valid backup !
Running any CW process as the wrong user can corrupt OLR/OCR and change protections for trace files and IPC sockets.

Must Read  : Top 5 Grid Infrastructure Startup Issues (Doc ID 1368382.1)

      • Issue #1: CRS-4639: Could not contact Oracle High Availability Services, ohasd.bin not running or ohasd.bin is running but no init.ohasd or other processes
      • Issue #2: CRS-4530: Communications failure contacting Cluster Synchronization Services daemon, ocssd.bin is not running
      • Issue #3: CRS-4535: Cannot communicate with Cluster Ready Services, crsd.bin is not running
      • Issue #4: Agent or mdnsd.bin, gpnpd.bin, gipcd.bin not running
      • Issue #5: ASM instance does not start, ora.asm is OFFLINE

 

How can I avoid CW troubleshooting by reading GBs of traces ( step 2 )?

Note: more than 50 percent of CW startup problems can be avoided by checking the following:
1. Check Network connectivity with  ping, traceroute, nslookup 
    ==> For further details see GENERIC Networking chapter

2. Check CW executable file protections ( compare with a working node )
     $ ls -l $ORACLE_HOME/bin/gpnpd*
      -rwxr-xr-x. 1 grid oinstall   8555 May 20 10:03 /u01/app/11204/grid/bin/gpnpd
      -rwxr-xr-x. 1 grid oinstall 368780 Mar 19 17:07 /u01/app/11204/grid/bin/gpnpd.bin
3. Check CW log file and pid file protections ( compare with a working node )
     $  ls -l ./grac41/gpnpd/grac41.pid
      -rw-r--r--. 1 grid oinstall 6 May 22 11:46 ./grac41/gpnpd/grac41.pid
4. Check IPC socket protections ( /var/tmp/.oracle )
     $ ls -l /var/tmp/.oracle/sgrac41DBG_GPNPD
      srwxrwxrwx. 1 grid oinstall 0 May 22 11:46 /var/tmp/.oracle/sgrac41DBG_GPNPD
  ==> For further details see GENERIC File Protection chapter
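A sketch that flags entries in /var/tmp/.oracle whose mode deviates from the srwxrwxrwx shown above (the bad_socket_perms helper is hypothetical, and not every file in that directory necessarily uses this mode - compare with a working node):

```shell
# Flag entries from 'ls -l' output (stdin) that are not world-writable sockets.
bad_socket_perms() {
  awk '$1 == "total" { next }
       $1 !~ /^srwxrwxrwx/ { print "SUSPECT:", $NF }'
}
# Usage: ls -l /var/tmp/.oracle | bad_socket_perms
```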

 

Overview CW startup sequence

In a nutshell, the operating system starts ohasd, and ohasd starts agents to start up the daemons
- Daemons:     gipcd, mdnsd, gpnpd, ctssd, ocssd, crsd, evmd, asm, ... 


After all local daemons are up, crsd starts agents that start user resources (database, SCAN, listener, etc.).

Startup sequence  (from 11gR2 Clusterware and Grid Home - What You Need to Know (Doc ID 1053147.1) )
Level 1: OHASD Spawns:

    cssdagent    - Agent responsible for spawning CSSD.
    orarootagent - Agent responsible for managing all root owned ohasd resources.
    oraagent     - Agent responsible for managing all oracle owned ohasd resources.
    cssdmonitor  - Monitors CSSD and node health (along with the cssdagent).

Level 2a: OHASD oraagent spawns: 

    MDNSD - Used for DNS lookup
    GIPCD - Used for inter-process and inter-node communication
    GPNPD - Grid Plug & Play Profile Daemon
    EVMD  - Event Monitor Daemon
    ASM   - Resource for monitoring ASM instances

Level 2b: OHASD rootagent spawns: 

    CRSD     - Primary daemon responsible for managing cluster resources.
               ( CTSSD, ACFS, MDNSD, GIPCD, GPNPD, EVMD, ASM resources must be ONLINE )
    CTSSD    - Cluster Time Synchronization Services Daemon
    Diskmon  - Disk Monitor daemon ( only relevant on Exadata )
    ACFS     - ASM Cluster File System Drivers 

Level 3: CRSD spawns:

    orarootagent - Agent responsible for managing all root owned crsd resources.
    oraagent     - Agent responsible for managing all oracle owned crsd resources.

Level 4: CRSD rootagent spawns:

    Network resource   - To monitor the public network
    SCAN VIP(s)        - Single Client Access Name Virtual IPs
    Node VIPs          - One per node
    ACFS Registry      - For mounting ASM Cluster File System
    GNS VIP (optional) - VIP for GNS

Level 4: CRSD oraagent spawns:

    ASM Resource   - ASM Instance(s) resource
    Diskgroup      - Used for managing/monitoring ASM diskgroups.  
    DB Resource    - Used for monitoring and managing the DB and instances
    SCAN Listener  - Listener for single client access name, listening on SCAN VIP
    Listener       - Node listener listening on the Node VIP
    Services       - Used for monitoring and managing services
    ONS            - Oracle Notification Service
    eONS           - Enhanced Oracle Notification Service
    GSD            - For 9i backward compatibility
    GNS (optional) - Grid Naming Service - Performs name resolution
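Given the startup sequence above, a quick way to see which daemons never came up is to diff a process listing against the expected daemon list. A sketch (the missing_daemons helper is hypothetical; the daemon names are taken from the levels above):

```shell
# Report which Clusterware daemons are absent from a 'ps -ef' listing (stdin).
missing_daemons() {
  ps_out=$(cat)
  for d in mdnsd.bin gipcd.bin gpnpd.bin evmd.bin ocssd.bin octssd.bin crsd.bin; do
    echo "$ps_out" | grep -q "$d" || echo "MISSING: $d"
  done
}
# Usage: ps -ef | missing_daemons
```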

Stopping CRS after CW startup  failures

During testing you may stop CRS very frequently. 
As the OHASD stack may not be fully up, you need to run: 
[root@grac41 gpnp]# crsctl stop crs -f
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'grac41'
CRS-2673: Attempting to stop 'ora.crsd' on 'grac41'
CRS-4548: Unable to connect to CRSD
CRS-5022: Stop of resource "ora.crsd" failed: current state is "INTERMEDIATE"
CRS-2675: Stop of 'ora.crsd' on 'grac41' failed
CRS-2679: Attempting to clean 'ora.crsd' on 'grac41'
CRS-4548: Unable to connect to CRSD
CRS-5022: Stop of resource "ora.crsd" failed: current state is "INTERMEDIATE"
CRS-2678: 'ora.crsd' on 'grac41' has experienced an unrecoverable failure
CRS-0267: Human intervention required to resume its availability.
CRS-2799: Failed to shut down resource 'ora.crsd' on 'grac41'
CRS-2795: Shutdown of Oracle High Availability Services-managed resources on 'grac41' has failed
CRS-4687: Shutdown command has completed with errors.
CRS-4000: Command Stop failed, or completed with errors.

If this hangs you may need to kill CW processes at OS level
[root@grac41 gpnp]# ps -elf | egrep "PID|d.bin|ohas|oraagent|orarootagent|cssdagent|cssdmonitor" | grep -v grep
F S UID        PID  PPID  C PRI  NI ADDR SZ WCHAN  STIME TTY          TIME CMD
4 S root      5812     1  0  80   0 -  2847 pipe_w 07:20 ?        00:00:00 /bin/sh /etc/init.d/init.ohasd run
4 S root     19164  5812 24  80   0 - 176663 futex_ 09:52 ?       00:00:04 /u01/app/11204/grid/bin/ohasd.bin restart
4 S root     19204     1  1  80   0 - 171327 futex_ 09:52 ?       00:00:00 /u01/app/11204/grid/bin/orarootagent.bin
4 S root     19207     1  0 -40   - - 159900 futex_ 09:52 ?       00:00:00 /u01/app/11204/grid/bin/cssdagent
4 S root     19209     1  0 -40   - - 160927 futex_ 09:52 ?       00:00:00 /u01/app/11204/grid/bin/cssdmonitor
4 S grid     19283     1  1  80   0 - 167890 futex_ 09:52 ?       00:00:00 /u01/app/11204/grid/bin/oraagent.bin
0 S grid     19308     1  0  80   0 - 74289 poll_s 09:52 ?        00:00:00 /u01/app/11204/grid/bin/mdnsd.bin

==> Kill remaining CW  processes
[root@grac41 gpnp]# kill -9 19164 19204 19207 19209 19283 19308
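Collecting the leftover PIDs by hand is error-prone. A sketch that extracts them from `ps -ef` style output (the cw_pids helper is hypothetical; it assumes the PID is in field 2, as with `ps -ef`):

```shell
# Extract the PIDs of leftover Clusterware processes from 'ps -ef' output (stdin).
cw_pids() {
  grep -E 'd\.bin|ohasd|oraagent|orarootagent|cssdagent|cssdmonitor' \
    | grep -v grep | awk '{ print $2 }'
}
# Usage: ps -ef | cw_pids | xargs -r kill -9
```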

Status of a working OHAS stack

      • Note: the ora.diskmon resource becomes ONLINE only in EXADATA configurations
[root@grac41 Desktop]# crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

[root@grac41 Desktop]# crsi
NAME                           TARGET     STATE           SERVER       STATE_DETAILS   
-------------------------      ---------- ----------      ------------ ------------------
ora.asm                        ONLINE     ONLINE          grac41       Started 
ora.cluster_interconnect.haip  ONLINE     ONLINE          grac41        
ora.crf                        ONLINE     ONLINE          grac41        
ora.crsd                       ONLINE     ONLINE          grac41        
ora.cssd                       ONLINE     ONLINE          grac41        
ora.cssdmonitor                ONLINE     ONLINE          grac41        
ora.ctssd                      ONLINE     ONLINE          grac41       OBSERVER 
ora.diskmon                    OFFLINE    OFFLINE                       
ora.drivers.acfs               ONLINE     ONLINE          grac41        
ora.evmd                       ONLINE     ONLINE          grac41        
ora.gipcd                      ONLINE     ONLINE          grac41        
ora.gpnpd                      ONLINE     ONLINE          grac41        
ora.mdnsd                      ONLINE     ONLINE          grac41

 

Ohasd startup scripts on OEL 6

OHASD Script location 
[root@grac41 init.d]# find /etc |grep S96
/etc/rc.d/rc5.d/S96ohasd
/etc/rc.d/rc3.d/S96ohasd
[root@grac41 init.d]# ls -l /etc/rc.d/rc5.d/S96ohasd
lrwxrwxrwx. 1 root root 17 May  4 10:57 /etc/rc.d/rc5.d/S96ohasd -> /etc/init.d/ohasd
[root@grac41 init.d]# ls -l /etc/rc.d/rc3.d/S96ohasd
lrwxrwxrwx. 1 root root 17 May  4 10:57 /etc/rc.d/rc3.d/S96ohasd -> /etc/init.d/ohasd
--> Run levels 3 and 5 start ohasd

Check status of init.ohasd process
[root@grac41 bin]# more /etc/init/oracle-ohasd.conf
# Copyright (c) 2001, 2011, Oracle and/or its affiliates. All rights reserved. 
#
# Oracle OHASD startup
start on runlevel [35]
stop  on runlevel [!35]
respawn
exec /etc/init.d/init.ohasd run >/dev/null 2>&1 </dev/null

List current PID
[root@grac41 Desktop]#  initctl list | grep oracle-ohasd
oracle-ohasd start/running, process 27558

Check OS processes
[root@grac41 Desktop]# ps -elf | egrep "PID|d.bin|ohas|oraagent|orarootagent|cssdagent|cssdmonitor" | grep -v grep
F S UID        PID  PPID  C PRI  NI ADDR SZ WCHAN  STIME TTY          TIME CMD
4 S root     27558     1  0  80   0 -  2878 wait   07:01 ?        00:00:02 /bin/sh /etc/init.d/init.ohasd run

Useful OS and CW commands, grep commands, OS logfile locations and Clusterware logfile location details

1 : Clusterware logfile structure
CW Alert.log    alert<hostname>.log ( most important one !! )  
OHASD        ohasd.log 
CSSD         ocssd.log   
EVMD         evmd.log 
CRSD         crsd.log

MDNSD        mdnsd.log  
GIPCD        gipcd.log  
GPNPD        gpnpd.log  

Agent directories
agent/ohasd
agent/ohasd/oraagent_grid
agent/ohasd/oracssdagent_root
agent/ohasd/oracssdmonitor_root
agent/ohasd/orarootagent_root

2 :  OS System logs

    HPUX       /var/adm/syslog/syslog.log
    AIX        /bin/errpt -a
    Linux      /var/log/messages
    Windows    Refer .TXT log files under Application/System log using Windows Event Viewer
    Solaris    /var/adm/messages 

    Linux Sample
    # grep 'May 20' ./grac41/var/log/messages > SYSLOG
    --> Check SYSLOG for relevant errors

    A typical CW error could look like:  
    # cat  /var/log/messages
    May 13 13:48:27 grac41 OHASD[22203]: OHASD exiting; Directory /u01/app/11204/grid/log/grac41/ohasd not found
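A sketch for pulling Clusterware-related lines out of the system log (the cw_syslog_errors helper and its daemon name list are assumptions, not from the article):

```shell
# Grep a syslog file for messages from Clusterware daemons.
cw_syslog_errors() {
  grep -E 'OHASD|CRSD|OCSSD|CSSD|clsc' "$1"
}
# Usage: cw_syslog_errors /var/log/messages
```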

3 : Useful commands for a quick check of Clusterware status

It may be useful to run all the commands below just to get an idea of what is working and what is not.

3.1 : OS commands ( assume we have CW startup problems on grac41 ) 
# ping grac41 
# route -n 
# /bin/netstat -in
# /sbin/ifconfig -a
# /bin/ping -s <MTU> -c 2 -I source_IP nodename
# /bin/traceroute -s source_IP -r -F  nodename-priv <MTU-28>
# /usr/bin/nslookup  grac41

3.2 : Clusterware commands to debug startup problems
Check Clusterware status 
# crsctl check crs
# crsctl check css
# crsctl check evm
# crsctl stat res -t -init

If the OHASD stack is completely up and running, you can check your cluster resources with:  
# crsctl stat res -t 

3.3 : Checking OLR  to debug startup problems
# ocrcheck -local
# ocrcheck -local -config

3.4 : Checking OCR/Votedisks  to debug startup problems
$ crsctl query css votedisk

The next 2 commands will only work once the startup problems are fixed
$ ocrcheck
$ ocrcheck -config

3.5 : Checking GPnP  to debug startup problems 
# $GRID_HOME/bin/gpnptool get
For further debugging: 
# $GRID_HOME/bin/gpnptool lfind  
# $GRID_HOME/bin/gpnptool getpval -asm_spf -p=/u01/app/11204/grid/gpnp/profiles/peer/profile.xml
# $GRID_HOME/bin/gpnptool check -p=/u01/app/11204/grid/gpnp/profiles/peer/profile.xml
# $GRID_HOME/bin/gpnptool verify -p=/u01/app/11204/grid/gpnp/profiles/peer/profile.xml -w="/u01/app/11204/grid/gpnp/grac41/wallets/peer" -wu=peer
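Single profile values can also be pulled out of profile.xml directly. A rough sketch (the asm_discovery_string helper is hypothetical; a proper XML parser is preferable, and the sed pattern assumes the attribute layout shown earlier in this article):

```shell
# Pull the ASM DiscoveryString attribute out of a gpnp profile.xml.
asm_discovery_string() {
  sed -n 's/.*ASM-Profile[^>]*DiscoveryString="\([^"]*\)".*/\1/p' "$1"
}
# Usage: asm_discovery_string /u01/app/11204/grid/gpnp/profiles/peer/profile.xml
```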

3.6 : Cluvfy commands to debug startup problems
Network problems:
$ cluvfy comp nodereach -n grac41 -verbose
Identify your interfaces used for public and private usage and check related networks
$ cluvfy comp nodecon -n grac41,grac42 -i eth1  -verbose    ( public Interface )
$ cluvfy comp nodecon -n grac41,grac42 -i eth2  -verbose    ( private Interface )
$ cluvfy comp nodecon  -n grac41 -verbose
Testing multicast communication for  multicast group "230.0.1.0" .
$ cluvfy  stage -post hwos -n grac42

Cluvfy commands to verify ASM DG and Voting disk location 
Note: Run cluvfy from a working Node ( grac42 ) to get more details 
[grid@grac42 ~]$ cluvfy comp vdisk -n grac41
  ERROR: PRVF-5157 : Could not verify ASM group "OCR" for Voting Disk location "/dev/asmdisk1_udev_sdh1" 
  --> From the error code we know ASM disk group + Voting Disk location 
$ cluvfy comp olr -verbose
$ cluvfy comp software -verbose 
$ cluvfy comp ocr -n grac42,grac41
$ cluvfy comp sys -n grac41 -p crs -verbose 

Comp healthcheck is quite helpful to get an overview, but as OHASD is not running, most of the 
errors are related to the CW startup problem.
$ cluvfy comp healthcheck -collect cluster -html
$ firefox cvucheckreport_523201416347.html

4 :  Useful grep Commands

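The fn_egrep.sh helper used throughout this chapter is not listed in the article. A minimal sketch of what it might look like, under the assumption that it searches *.log trace files below a directory and prints each file name before its matching lines:

```shell
# fn_egrep - search all *.log trace files under a directory for a pattern
# and print each matching file name followed by its matching lines.
fn_egrep() {
  pat="$1"; dir="${2:-.}"
  find "$dir" -name '*.log' | while read -r f; do
    if grep -E -q "$pat" "$f"; then
      echo "TraceFileName: $f"
      grep -E "$pat" "$f"
    fi
  done
}
# Usage: fn_egrep "Cannot get GPnP profile|Error put-profile CALL"
```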
GPnP profile is not accessible - gpnpd needs to be fully up to serve profile
$ fn_egrep.sh "Cannot get GPnP profile|Error put-profile CALL" 
TraceFileName: ./grac41/agent/ohasd/orarootagent_root/orarootagent_root.log
2014-05-20 10:26:44.532: [ default][1199552256]Cannot get GPnP profile. Error CLSGPNP_NO_DAEMON (GPNPD daemon is not running). 
Cannot get GPnP profile
2014-04-21 15:27:06.838: [    GPNP][132114176]clsgpnp_profileCallUrlInt: [at clsgpnp.c:2243] Result: (13) CLSGPNP_NO_DAEMON. 
Error put-profile CALL to remote "tcp://grac41:56376" disco "mdns:service:gpnp._tcp.local.://grac41:56376/agent=gpnpd,cname=grac4,host=grac41,pid=4548/gpnpd h:grac41 c:grac4"

Network socket file doesn't have appropriate ownership or permission
# fn_egrep.sh "clsclisten: Permission denied"
 [ COMMCRS][3534915328]clsclisten: Permission denied for (ADDRESS=(PROTOCOL=ipc)(KEY=grac41DBG_MDNSD))

Problems with Private Interconnect
$ fn.sh "2014-06-03" | egrep 'but no network HB|TraceFileName'
Search String:  no network HB
TraceFileName: ./cssd/ocssd.log
2014-06-02 12:51:52.564: [    CSSD][2682775296]clssnmvDHBValidateNcopy: node 3, grac43, has a disk HB, but no network HB , ..
or
$ fn_egrep.sh "failed to resolve|gipcretFail|gipcretConnectionRefused" | egrep 'TraceFile|2014-05-20 11:0'
TraceFileName: ./grac41/crsd/crsd.log and  ./grac41/evmd/evmd.log may report
2014-05-20 11:04:02.563: [GIPCXCPT][154781440] gipchaInternalResolve: failed to resolve ret gipcretKeyNotFound (36), host 'grac41', port 'ffac-854b-c525-6f9c', hctx 0x2ed3940 [0000000000000010] { gipchaContext : host 'grac41', name 'd541-9a1e-7807-8f4a', luid 'f733b93a-00000000', numNode 0, numInf 1, usrFlags 0x0, flags 0x5 }, ret gipcretKeyNotFound (36)
2014-05-20 11:04:02.563: [GIPCHGEN][154781440] gipchaResolveF [gipcmodGipcResolve : gipcmodGipc.c : 806]: EXCEPTION[ ret gipcretKeyNotFound (36) ]  failed to resolve ctx 0x2ed3940 [0000000000000010] { gipchaContext : host 'grac41', name 'd541-9a1e-7807-8f4a', luid 'f733b93a-00000000', numNode 0, numInf 1, usrFlags 0x0, flags 0x5 }, host 'grac41', port 'ffac-854b-c525-6f9c', flags 0x0

Is there a valid CI network device ?
# fn_egrep.sh "NETDATA" | egrep 'TraceFile|2014-06-03'
TraceFileName: ./gipcd/gipcd.log
2014-06-03 07:48:45.401: [ CLSINET][3977414400] Returning NETDATA: 1 interfaces <-- ok
2014-06-03 07:52:51.589: [ CLSINET][1140848384] Returning NETDATA: 0 interfaces <-- problems !

Are Voting Disks accessible ?
$ fn_egrep.sh "Successful discovery"
TraceFileName: ./grac41/cssd/ocssd.log
2014-05-22 13:41:38.776: [    CSSD][1839290112]clssnmvDiskVerify: Successful discovery of 0 disks
--> 0 disks discovered means the voting disks are not accessible

Generic troubleshooting hints :  How to review CW trace files

1 : Limit trace file size and file count by using the TFA command:   tfactl diagcollect
Note: only a single-node collection is necessary for a CW startup problem - here node grac41 has CW startup problems
# tfactl diagcollect -node  grac41 -from "May/20/2014 06:00:00" -to "May/20/2014 15:00:00"
--> Scanning files from May/20/2014 06:00:00 to May/20/2014 15:00:00
...
Logs are collected to:
/u01/app/grid/tfa/repository/collection_Wed_May_21_09_19_10_CEST_2014_node_grac41/grac41.tfa_Wed_May_21_09_19_10_CEST_2014.zip

Extract zip file and scan for various Clusterware errors
# mkdir /u01/TFA
# cd /u01/TFA
# unzip /u01/app/grid/tfa/repository/collection_Wed_May_21_09_19_10_CEST_2014_node_grac41/grac41.tfa_Wed_May_21_09_19_10_CEST_2014.zip

Locate important files in our unzipped TFA repository
# pwd
/u01/TFA/
# find . -name "alert*"
./grac41/u01/app/11204/grid/log/grac41/alertgrac41.log
./grac41/asm/+asm/+ASM1/trace/alert_+ASM1.log
./grac41/rdbms/grac4/grac41/trace/alert_grac41.log

# find . -name "mess*"
./grac41/var/log/messages
./grac41/var/log/messages-20140504

2 : Review Clusterware alert.log  errors
#   get_ca.sh alertgrac41.log 2014-05-23
-> File searched:  alertgrac41.log
-> Start search timestamp   :  2014-05-23
->   End search timestamp   :
Begin: CNT 0 -  TS --
2014-05-23 15:29:31.297:  [mdnsd(16387)]CRS-5602:mDNS service stopping by request.
2014-05-23 15:29:34.211:  [gpnpd(16398)]CRS-2329:GPNPD on node grac41 shutdown.
2014-05-23 15:29:45.785:  [ohasd(2736)]CRS-2112:The OLR service started on node grac41.
2014-05-23 15:29:45.845:  [ohasd(2736)]CRS-1301:Oracle High Availability Service started on node grac41.
2014-05-23 15:29:45.861:  [ohasd(2736)]CRS-8017:location: /etc/oracle/lastgasp has 2 reboot advisory log files, 0 were announced and 0 errors occurred
2014-05-23 15:29:49.999:  [/u01/app/11204/grid/bin/orarootagent.bin(2798)]CRS-2302:Cannot get GPnP profile.
Error CLSGPNP_NO_DAEMON (GPNPD daemon is not running).
2014-05-23 15:29:55.075:  [ohasd(2736)]CRS-2302:Cannot get GPnP profile. Error CLSGPNP_NO_DAEMON (GPNPD daemon is not running).
2014-05-23 15:29:55.081:  [gpnpd(2934)]CRS-2328:GPNPD started on node grac41.
2014-05-23 15:29:57.576:  [cssd(3040)]CRS-1713:CSSD daemon is started in clustered mode
2014-05-23 15:29:59.331:  [ohasd(2736)]CRS-2767:Resource state recovery not attempted for 'ora.diskmon' as its target state is OFFLINE
2014-05-23 15:29:59.331:  [ohasd(2736)]CRS-2769:Unable to failover resource 'ora.diskmon'.
2014-05-23 15:30:01.905:  [cssd(3040)]CRS-1714:Unable to discover any voting files, retrying discovery in 15 seconds;
Details at (:CSSNM00070:) in /u01/app/11204/grid/log/grac41/cssd/ocssd.log
2014-05-23 15:30:16.945:  [cssd(3040)]CRS-1714:Unable to discover any voting files, retrying discovery in 15 seconds;
Details at (:CSSNM00070:) in /u01/app/11204/grid/log/grac41/cssd/ocssd.log
--> Script get_ca.sh adds a timestamp, reduces output, and dumps only the errors for a certain day
In the above sample we can easily pinpoint the problem to a voting disk issue
If you can't find an obvious reason, you still need to review your clusterware alert.log
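
get_ca.sh is a small local helper, not a standard tool. A minimal sketch of what it is assumed to do, based on its output above, under the assumption that alert.log entries begin with a "YYYY-MM-DD hh:mm:ss" timestamp line:

```shell
# Sketch of the get_ca.sh helper (an assumption, not the original script):
# print only the clusterware alert.log entries that start on a given day.
# An entry begins with a timestamp line; continuation lines are printed
# while the most recent timestamp matches the requested day.
get_ca() {
    file="$1"; day="$2"
    echo "-> File searched:  $file"
    echo "-> Start search timestamp   :  $day"
    awk -v d="$day" '
        /^[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]/ { show = (substr($0, 1, 10) == d) }
        show { print }
    ' "$file"
}
```
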

3 : Review Clusterware logfiles for errors and resources failing or not starting up
$ fn.sh 2014-05-25  | egrep 'TraceFileName|CRS-|ORA-|TNS-|LFI-|KFNDG-|KFED-|KFOD-|CLSDNSSD-|CLSGN-|CLSMDNS-|CLS.-|NDFN-|EVM-|GIPC-|PRK.-|PRV.-|PRC.-|PRIF-|SCLS-|PROC-|PROCL-|PROT-|PROTL-'
TraceFileName: ./crsd/crsd.log
2014-05-25 11:29:04.968: [   CRSPE][1872742144]{1:62631:2} CRS-2672: Attempting to start 'ora.SSD.dg' on 'grac41'
2014-05-25 11:29:04.999: [   CRSPE][1872742144]{1:62631:2} CRS-2672: Attempting to start 'ora.DATA.dg' on 'grac41'
2014-05-25 11:29:05.063: [   CRSPE][1872742144]{1:62631:2} CRS-2672: Attempting to start 'ora.FRA.dg' on 'grac41
....

If a certain resource like ora.net1.network doesn't start - grep for details using resource name
[grid@grac42 grac42]$ fn.sh "ora.net1.network" | egrep '2014-05-31 11|TraceFileName'
TraceFileName: ./agent/crsd/orarootagent_root/orarootagent_root.log
2014-05-31 11:58:27.899: [    AGFW][1829066496]{2:12808:5} Agent received the message: RESOURCE_START[ora.net1.network grac42 1] ID 4098:403
2014-05-31 11:58:27.899: [    AGFW][1829066496]{2:12808:5} Preparing START command for: ora.net1.network grac42 1
2014-05-31 11:58:27.899: [    AGFW][1829066496]{2:12808:5} ora.net1.network grac42 1 state changed from: OFFLINE to: STARTING
2014-05-31 11:58:27.917: [    AGFW][1826965248]{2:12808:5} Command: start for resource: ora.net1.network grac42 1 completed with status: SUCCESS
2014-05-31 11:58:27.919: [    AGFW][1829066496]{2:12808:5} Agent sending reply for: RESOURCE_START[ora.net1.network grac42 1] ID 4098:403
2014-05-31 11:58:27.969: [    AGFW][1829066496]{2:12808:5} ora.net1.network grac42 1 state changed from: STARTING to: UNKNOWN

4 : Review ASM alert.log

5 : Track generation of new Tints and follow these Tints
$ fn.sh 2014-05-25 | egrep 'TraceFileName|Generating new Tint'
TraceFileName: ./agent/ohasd/orarootagent_root/orarootagent_root.log
2014-05-25 13:52:07.862: [    AGFW][2550134528]{0:11:3} Generating new Tint for unplanned state change. Original Tint: {0:0:2}
2014-05-25 13:52:36.126: [    AGFW][2550134528]{0:11:6} Generating new Tint for unplanned state change. Original Tint: {0:0:2}
[grid@grac41 grac41]$ fn.sh "{0:11:3}" | more
Search String:  {0:11:3}
TraceFileName: ./alertgrac41.log
[/u01/app/11204/grid/bin/cssdmonitor(1833)]CRS-5822:Agent '/u01/app/11204/grid/bin/cssdmonitor_root' disconnected from server. Details at (:CRSAGF00117:) {0:11:3}
in /u01/app/11204/grid/log/grac41/agent/ohasd/oracssdmonitor_root/oracssdmonitor_root.log.
--------------------------------------------------------------------
TraceFileName: ./agent/ohasd/oracssdmonitor_root/oracssdmonitor_root.log
2014-05-19 14:34:03.312: [   AGENT][3481253632]{0:11:3} {0:11:3} Created alert : (:CRSAGF00117:) :  
                          Disconnected from server, Agent is shutting down.
2014-05-19 14:34:03.312: [ USRTHRD][3481253632]{0:11:3} clsncssd_exit: CSSD Agent was asked to exit with exit code 2
2014-05-19 14:34:03.312: [ USRTHRD][3481253632]{0:11:3} clsncssd_exit: No connection with CSS, exiting.

6 : Investigate GENERIC File Protection problems  for Log File Location, Ownership and Permissions
==> For further details see GENERIC File Protection troubleshooting chapter

7 : Investigate GENERIC Networking problems
==> For further details see GENERIC Networking troubleshooting chapter

8 : Check logs for a certain resource during a certain time window  ( in this sample the ora.gpnpd resource was used)
[root@grac41 grac41]# fn.sh "ora.gpnpd" | egrep  "TraceFileName|2014-05-22 07" | more
TraceFileName: ./agent/ohasd/oraagent_grid/oraagent_grid.log
2014-05-22 07:18:27.281: [ora.gpnpd][3696797440]{0:0:2} [check] clsdmc_respget return: status=0, ecode=0
2014-05-22 07:18:57.291: [ora.gpnpd][3698898688]{0:0:2} [check] clsdmc_respget return: status=0, ecode=0
2014-05-22 07:19:27.290: [ora.gpnpd][3698898688]{0:0:2} [check] clsdmc_respget return: status=0, ecode=0

9 : Check related traces depending on startup dependencies
The GPnP daemon has the following dependencies:  OHASD (root) starts --> OHASD ORAgent (grid) starts --> GPnP (grid)
The following trace files need to be reviewed first for additional info
./grac41/u01/app/11204/grid/log/grac41/gpnpd/gpnpd.log ( see above )
./grac41/u01/app/11204/grid/log/grac41/ohasd/ohasd.log
./grac41/u01/app/11204/grid/log/grac41/agent/ohasd/oraagent_grid/oraagent_grid.log

10 : Check for tracefiles updated very frequently ( this helps to identify looping processes )
# date ; find . -type f -printf "%CY-%Cm-%Cd %CH:%CM:%CS  %h/%f\n" | sort -n | tail -5
Thu May 22 07:52:45 CEST 2014
2014-05-22 07:52:12.1781722420  ./grac41/alertgrac41.log
2014-05-22 07:52:44.8401175210  ./grac41/agent/ohasd/oraagent_grid/oraagent_grid.log
2014-05-22 07:52:45.2701299670  ./grac41/client/crsctl_grid.log
2014-05-22 07:52:45.2901305450  ./grac41/ohasd/ohasd.log
2014-05-22 07:52:45.3221314710  ./grac41/client/olsnodes.log
# date ; find . -type f -printf "%CY-%Cm-%Cd %CH:%CM:%CS  %h/%f\n" | sort -n | tail -5
Thu May 22 07:52:48 CEST 2014
2014-05-22 07:52:12.1781722420  ./grac41/alertgrac41.log
2014-05-22 07:52:47.3701907460  ./grac41/client/crsctl_grid.log
2014-05-22 07:52:47.3901913240  ./grac41/ohasd/ohasd.log
2014-05-22 07:52:47.4241923080  ./grac41/client/olsnodes.log
2014-05-22 07:52:48.3812200070  ./grac41/agent/ohasd/oraagent_grid/oraagent_grid.log
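
The two manual find snapshots above can be automated. A sketch (bash; the interval and function names are arbitrary choices, not part of the original toolset) that reports only the files modified between two snapshots:

```shell
# Take two snapshots of modification times a few seconds apart and list the
# files whose timestamps changed in between - candidates for traces written
# by a looping/retrying process.
snap() { find . -type f -printf "%T@ %p\n" 2>/dev/null | sort; }
busy_files() {
    interval="${1:-3}"               # seconds between the two snapshots
    before=$(snap)
    sleep "$interval"
    after=$(snap)
    # lines only in the second snapshot = files modified during the interval
    comm -13 <(printf '%s\n' "$before") <(printf '%s\n' "$after") |
        awk '{print $2}' | sort -u
}
```
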

Check these trace files with tail -f
# tail -f ./grac41/agent/ohasd/oraagent_grid/oraagent_grid.log
[  clsdmc][1975785216]Fail to connect (ADDRESS=(PROTOCOL=ipc)(KEY=grac41DBG_GPNPD)) with status 9
2014-05-22 07:56:11.198: [ora.gpnpd][1975785216]{0:0:2} [start] Error = error 9 encountered when connecting to GPNPD
2014-05-22 07:56:12.199: [ora.gpnpd][1975785216]{0:0:2} [start] without returnbuf
2014-05-22 07:56:12.382: [ COMMCRS][1966565120]clsc_connect: (0x7fd9500cc070) no listener at (ADDRESS=(PROTOCOL=ipc)(KEY=grac41DBG_GPNPD))

[  clsdmc][1975785216]Fail to connect (ADDRESS=(PROTOCOL=ipc)(KEY=grac41DBG_GPNPD)) with status 9
2014-05-22 07:56:12.382: [ora.gpnpd][1975785216]{0:0:2} [start] Error = error 9 encountered when connecting to GPNPD
2014-05-22 07:56:13.382: [ora.gpnpd][1975785216]{0:0:2} [start] without returnbuf
2014-05-22 07:56:13.553: [ COMMCRS][1966565120]clsc_connect: (0x7fd9500cc070) no listener at (ADDRESS=(PROTOCOL=ipc)(KEY=grac41DBG_GPNPD))
--> Problem is we can't connect to the GPNPD listener

# ps -elf | egrep "PID|d.bin|ohas" | grep -v grep
F S UID        PID  PPID  C PRI  NI ADDR SZ WCHAN  STIME TTY          TIME CMD
4 S root      1560     1  0 -40   - - 160927 futex_ 07:56 ?       00:00:00 /u01/app/11204/grid/bin/cssdmonitor
4 S root      4494     1  0  80   0 -  2846 pipe_w May21 ?        00:00:00 /bin/sh /etc/init.d/init.ohasd run
4 S root     30441     1 23  80   0 - 176982 futex_ 07:49 ?       00:02:04 /u01/app/11204/grid/bin/ohasd.bin reboot
4 S grid     30612     1  0  80   0 - 167327 futex_ 07:50 ?       00:00:05 /u01/app/11204/grid/bin/oraagent.bin
0 S grid     30623     1  0  80   0 - 74289 poll_s 07:50 ?        00:00:00 /u01/app/11204/grid/bin/mdnsd.bin
--> Indeed gpnpd is not running. Investigate this further by debugging CW with strace 

11 : If still no root cause was found, try to grep all messages for that period and review the output carefully
$ fn.sh "2014-05-20 07:2"   | more
Search String:  2014-05-20 07:2
...
--------------------------------------------------------------------
TraceFileName: ./grac41/u01/app/11204/grid/log/grac41/agent/ohasd/oraagent_grid/oraagent_grid.l01
2014-05-20 07:23:04.338: [    AGFW][374716160]{0:0:2} Agent received the message: AGENT_HB[Engine] ID 12293:1236
....
--------------------------------------------------------------------
TraceFileName: ./grac41/u01/app/11204/grid/log/grac41/gpnpd/gpnpd.log
2014-05-20 07:23:13.385: [ default][4133218080]
2014-05-20 07:23:13.385: [ default][4133218080]gpnpd START pid=16641 Oracle Grid Plug-and-Play Daemon

GENERIC File Protection problems  for Log File Location, Ownership and Permissions

      • Resource reports status STARTING for a long time before failing with CRS errors
      • After some time  resource becomes OFFLINE
Debug startup problems for  GPnP daemon
Case #1 : Check that GPnP daemon can write to trace file location and new timestamps are written
       The following directory/files need to have proper protections : 
           Trace directory :   ./log/grac41/gpnpd
           STDOUT log file :   ./log/grac41/gpnpd/gpnpdOUT.log
           Error  log file :   ./log/grac41/gpnpd/gpnpd.log
       If gpnpdOUT.log and gpnpd.log are not updated when starting the GPnP daemon, you need to review your file protections
       Sample for GPnP resource:
        #  ls -ld ./grac41/u01/app/11204/grid/log/grac41/gpnpd
         drwxr-xr-x. 2 root root 4096 May 21 09:52 ./grac41/u01/app/11204/grid/log/grac41/gpnpd
        #  ls -l ./grac41/u01/app/11204/grid/log/grac41/gpnpd
         -rw-r--r--. 1 root root 420013 May 21 09:35 gpnpd.log
         -rw-r--r--. 1 root root  26567 May 21 09:31 gpnpdOUT.log

       ==> Here we can see that the trace files are owned by root, which is wrong !
       After changing the directory and trace files with chown grid:oinstall ... traces were successfully written
       If unsure about protections, verify them against a cluster node where CRS is up and running 
        # ls -ld /u01/app/11204/grid/log/grac41/gpnpd
         drwxr-x---. 2 grid oinstall 4096 May 20 13:53 /u01/app/11204/grid/log/grac41/gpnpd
        # ls -l /u01/app/11204/grid/log/grac41/gpnpd
         -rw-r--r--. 1 grid oinstall  122217 May 19 13:35 gpnpd_1.log
         -rw-r--r--. 1 grid oinstall 1747836 May 20 12:32 gpnpd.log
         -rw-r--r--. 1 grid oinstall   26567 May 20 12:31 gpnpdOUT.log

Case #2 : Check that IPC sockets have proper protection  (this info is not available via tfa collector ) 
       ( verify this with a node where CRS is up and running )
       #  ls -l /var/tmp/.oracle  |  grep -i  gpnp
        srwxrwxrwx. 1 grid oinstall 0 May 20 12:31 ora_gipc_GPNPD_grac41
        -rw-r--r--. 1 grid oinstall 0 May 20 10:11 ora_gipc_GPNPD_grac41_lock
        srwxrwxrwx. 1 grid oinstall 0 May 20 12:31 sgrac41DBG_GPNPD
     ==> Again check this against a working cluster node 
          You may consider comparing all IPC socket info available in /var/tmp/.oracle 
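
Comparing ownership and permissions between the broken node and a healthy node can be scripted instead of eyeballing ls output. A sketch (the function name is an arbitrary choice; run it on both nodes and diff the results):

```shell
# Dump a normalized "perms owner group path" listing of a directory tree so
# the output from a broken node can be diffed against a working node.
perm_snapshot() {
    find "$1" -printf "%M %u %g %p\n" 2>/dev/null | sort -k 4
}
# Typical use (paths from this cluster, run on each node), e.g.:
#   perm_snapshot /u01/app/11204/grid/log/grac41 > /tmp/perms.$(hostname)
#   diff /tmp/perms.grac41 /tmp/perms.grac42
```
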

      The same sort of debugging can be used for other processes like the MDNSD daemon   
      Here the MDNSD process can't listen on the grac41DBG_MDNSD IPC socket and terminates 
      Grep command:  
      $ fn_egrep.sh "clsclisten: Permission denied"
        TraceFileName: ./grac41/mdnsd/mdnsd.log
      2014-05-19 08:08:53.097: [ COMMCRS][2179102464]clsclisten: Permission denied for (ADDRESS=(PROTOCOL=ipc)(KEY=grac41DBG_MDNSD))
      2014-05-19 08:10:58.177: [ COMMCRS][3534915328]clsclisten: Permission denied for (ADDRESS=(PROTOCOL=ipc)(KEY=grac41DBG_MDNSD))
       Trace file extract:   ./grac41/mdnsd/mdnsd.log
     2014-05-19 08:08:53.087: [ default][2187654912]mdnsd START pid=11855
     2014-05-19 08:08:53.097: [ COMMCRS][2179102464]clsclisten: Permission denied for (ADDRESS=(PROTOCOL=ipc)(KEY=grac41DBG_MDNSD))
        --> Permission problems for the MDNSD resource 
     2014-05-19 08:08:53.097: [  clsdmt][2181203712]Fail to listen to (ADDRESS=(PROTOCOL=ipc)(KEY=grac41DBG_MDNSD))
     2014-05-19 08:08:53.097: [  clsdmt][2181203712]Terminating process
     2014-05-19 08:08:53.097: [    MDNS][2181203712] clsdm requested mdnsd exit
     2014-05-19 08:08:53.097: [    MDNS][2181203712] mdnsd exit
     2014-05-19 08:10:58.168: [ default][3543467776]

Case #3 : Check  gpnpd.log for a successful write of the related PID file  
       # egrep "PID for the Process|Creating PID|Writing PID" ./grac41/u01/app/11204/grid/log/grac41/gpnpd/gpnpd.log
        2014-05-20 07:23:13.417: [  clsdmt][4121581312]PID for the Process [16641], connkey 10 
        2014-05-20 07:23:13.417: [  clsdmt][4121581312]Creating PID [16641] file for home /u01/app/11204/grid host grac41 bin gpnp to /u01/app/11204/grid/gpnp/init/
        2014-05-20 07:23:13.417: [  clsdmt][4121581312]Writing PID [16641] to the file [/u01/app/11204/grid/gpnp/init/grac41.pid] 
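
A quick way to cross-check such a PID file against a live process; a sketch (the function name is an assumption; the PID file path in the comment is the one from this cluster):

```shell
# Check that the PID recorded in a clusterware PID file (e.g.
# /u01/app/11204/grid/gpnp/init/grac41.pid) belongs to a running process.
check_pid_file() {
    f="$1"
    pid=$(cat "$f" 2>/dev/null)
    [ -n "$pid" ] || { echo "no PID file: $f"; return 1; }
    if kill -0 "$pid" 2>/dev/null; then
        echo "PID $pid from $f is alive"
    else
        echo "PID $pid from $f is not running"
        return 1
    fi
}
```
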

Case #4 : Check  gpnpd.log file for fatal errors like PROCL-5 PROCL-26
       # less ./grac41/u01/app/11204/grid/log/grac41/gpnpd/gpnpd.log   
        2014-05-20 07:23:14.377: [    GPNP][4133218080]clsgpnpd_openLocalProfile: [at clsgpnpd.c:3477] Got local profile from file cache provider (LCP-FS).
        2014-05-20 07:23:14.380: [    GPNP][4133218080]clsgpnpd_openLocalProfile: [at clsgpnpd.c:3532] Got local profile from OLR cache provider (LCP-OLR).
        2014-05-20 07:23:14.385: [    GPNP][4133218080]procr_open_key_ext: OLR api procr_open_key_ext failed for key SYSTEM.GPnP.profiles.peer.pending
        2014-05-20 07:23:14.386: [    GPNP][4133218080]procr_open_key_ext: OLR current boot level : 7
        2014-05-20 07:23:14.386: [    GPNP][4133218080]procr_open_key_ext: OLR error code    : 5
        2014-05-20 07:23:14.386: [    GPNP][4133218080]procr_open_key_ext: OLR error message : PROCL-5: User does not have permission to perform a local 
                                                       registry operation on this key. Authentication error [User does not have permission to perform this operation] [0]
        2014-05-20 07:23:14.386: [    GPNP][4133218080]clsgpnpco_ocr2profile: [at clsgpnpco.c:578] Result: (58) CLSGPNP_OCR_ERR. Failed to open requested OLR Profile.
        2014-05-20 07:23:14.386: [    GPNP][4133218080]clsgpnpd_lOpen: [at clsgpnpd.c:1734] Listening on ipc://GPNPD_grac41
        2014-05-20 07:23:14.386: [    GPNP][4133218080]clsgpnpd_lOpen: [at clsgpnpd.c:1743] GIPC gipcretFail (1) gipcListen listen failure on 
        2014-05-20 07:23:14.386: [ default][4133218080]GPNPD failed to start listening for GPnP peers. 
        2014-05-20 07:23:14.388: [    GPNP][4133218080]clsgpnpd_term: [at clsgpnpd.c:1344] STOP GPnPD terminating. Closing connections...
        2014-05-20 07:23:14.400: [ default][4133218080]clsgpnpd_term STOP terminating.

GENERIC Networking troubleshooting chapter

      • Private IP address is not directly used by clusterware
      • If changing the IP from 192.168.2.102 to 192.168.2.108, CW still comes up because the network address 192.168.2.0 does not change
      • If changing the IP from 192.168.2.102 to 192.168.3.103, CW doesn’t come up because the network address changed from 192.168.2.0 to 192.168.3.0 – the network address is used in the GPnP profile.xml for the private and public networks
      • Check the crfmond/crfmond.trc trace file for private network errors ( useful for CI errors )
      • If you get any GIPC error message, always think of a real network problem first
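
The network address the bullets above refer to is simply IP AND netmask. A quick sketch (bash; the function name is an arbitrary choice) to check whether a planned IP change stays inside the same network:

```shell
# Compute the network (subnet) address for ip/netmask. If the result differs
# from the network address recorded in the GPnP profile.xml, CW will not come
# up until the profile is updated.
net_addr() {
    IFS=. read -r i1 i2 i3 i4 <<< "$1"
    IFS=. read -r m1 m2 m3 m4 <<< "$2"
    echo "$((i1 & m1)).$((i2 & m2)).$((i3 & m3)).$((i4 & m4))"
}
net_addr 192.168.2.108 255.255.255.0   # -> 192.168.2.0 (same network as .102)
net_addr 192.168.3.103 255.255.255.0   # -> 192.168.3.0 (network changed)
```
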
Case #1 : Nameserver not running/available

Reported error in evmd.log                :  [  OCRMSG][3360876320]GIPC error [29] msg [gipcretConnectionRefused
Reported Clusterware Error in CW alert.log:  CRS-5011:Check of resource "+ASM" failed:

Testing scenario :
Stop nameserver and restart CRS 
# crsctl start crs
CRS-4123: Oracle High Availability Services has been started.

Clusterware status 
[root@grac41 gpnpd]# crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4529: Cluster Synchronization Services is online
CRS-4534: Cannot communicate with Event Manager
--> CSS, HAS are ONLINE - EVM and CRS are OFFLINE

# ps -elf | egrep "PID|d.bin|ohas|oraagent|orarootagent|cssdagent|cssdmonitor" | grep -v grep
F S UID        PID  PPID  C PRI  NI ADDR SZ WCHAN  STIME TTY          TIME CMD
4 S root      5396     1  0  80   0 -  2847 pipe_w 10:52 ?        00:00:00 /bin/sh /etc/init.d/init.ohasd run
4 S root      9526     1  3  80   0 - 178980 futex_ 14:47 ?       00:00:08 /u01/app/11204/grid/bin/ohasd.bin reboot
4 S grid      9705     1  0  80   0 - 174922 futex_ 14:47 ?       00:00:00 /u01/app/11204/grid/bin/oraagent.bin
0 S grid      9716     1  0  80   0 - 74289 poll_s 14:47 ?        00:00:00 /u01/app/11204/grid/bin/mdnsd.bin
0 S grid      9749     1  1  80   0 - 127375 hrtime 14:47 ?       00:00:02 /u01/app/11204/grid/bin/gpnpd.bin
0 S grid      9796     1  1  80   0 - 159711 hrtime 14:47 ?       00:00:04 /u01/app/11204/grid/bin/gipcd.bin
4 S root      9799     1  1  80   0 - 168656 futex_ 14:47 ?       00:00:03 /u01/app/11204/grid/bin/orarootagent.bin
4 S root      9812     1  3 -40   - - 160908 hrtime 14:47 ?       00:00:08 /u01/app/11204/grid/bin/osysmond.bin
4 S root      9823     1  0 -40   - - 162793 futex_ 14:47 ?       00:00:00 /u01/app/11204/grid/bin/cssdmonitor
4 S root      9842     1  0 -40   - - 162920 futex_ 14:47 ?       00:00:00 /u01/app/11204/grid/bin/cssdagent
4 S grid      9855     1  2 -40   - - 166594 futex_ 14:47 ?       00:00:04 /u01/app/11204/grid/bin/ocssd.bin 
4 S root     10884     1  1  80   0 - 159388 futex_ 14:48 ?       00:00:02 /u01/app/11204/grid/bin/octssd.bin reboot
0 S grid     10904     1  0  80   0 - 75285 hrtime 14:48 ?        00:00:00 /u01/app/11204/grid/bin/evmd.bin

$ crsi
NAME                           TARGET     STATE           SERVER       STATE_DETAILS   
-------------------------      ---------- ----------      ------------ ------------------
ora.asm                        ONLINE     OFFLINE         CLEANING      
ora.cluster_interconnect.haip  ONLINE     ONLINE          grac41        
ora.crf                        ONLINE     ONLINE          grac41        
ora.crsd                       ONLINE     OFFLINE                       
ora.cssd                       ONLINE     ONLINE          grac41        
ora.cssdmonitor                ONLINE     ONLINE          grac41        
ora.ctssd                      ONLINE     ONLINE          grac41       OBSERVER 
ora.diskmon                    OFFLINE    OFFLINE                       
ora.drivers.acfs               ONLINE     ONLINE          grac41        
ora.evmd                       ONLINE     INTERMEDIATE    grac41        
ora.gipcd                      ONLINE     ONLINE          grac41        
ora.gpnpd                      ONLINE     ONLINE          grac41        
ora.mdnsd                      ONLINE     ONLINE          grac41   
--> Event manager is in INTERMEDIATE state --> need to review the EVMD logfile first 

Detailed Tracefile report
Grep Syntax:
$ fn_egrep.sh "failed to resolve|gipcretFail" | egrep 'TraceFile|2014-05-20 11:0'
Failed case:
TraceFileName: ./grac41/crsd/crsd.log and  ./grac41/evmd/evmd.log may report
2014-05-20 11:04:02.563: [GIPCXCPT][154781440] gipchaInternalResolve: failed to resolve ret gipcretKeyNotFound (36),
           host 'grac41', port 'ffac-854b-c525-6f9c', hctx 0x2ed3940 [0000000000000010] { gipchaContext : 
           host 'grac41', name 'd541-9a1e-7807-8f4a', luid 'f733b93a-00000000', numNode 0, numInf 1, usrFlags 0x0, flags 0x5 }, ret gipcretKeyNotFound (36)
2014-05-20 11:04:02.563: [GIPCHGEN][154781440] gipchaResolveF [gipcmodGipcResolve : gipcmodGipc.c : 806]: 
           EXCEPTION[ ret gipcretKeyNotFound (36) ]  failed to resolve ctx 0x2ed3940 [0000000000000010] { gipchaContext : 
           host 'grac41', name 'd541-9a1e-7807-8f4a', luid 'f733b93a-00000000', numNode 0, numInf 1, usrFlags 0x0, flags 0x5 }, 
           host 'grac41', port 'ffac-854b-c525-6f9c', flags 0x0
--> Both requests are trying to use a host name. 
    If this isn't resolved, we very likely have a name server problem !

TraceFileName: ./grac41/ohasd/ohasd.log reports
2014-05-20 11:03:21.364: [GIPCXCPT][2905085696]gipchaInternalReadGpnp: No network info configured in GPNP, 
                                               using defaults, ret gipcretFail (1)

TraceFileName: ./evmd/evmd.log
2014-05-13 15:01:00.690: [  OCRMSG][2621794080]prom_waitconnect: CONN NOT ESTABLISHED (0,29,1,2)
2014-05-13 15:01:00.690: [  OCRMSG][2621794080]GIPC error [29] msg [gipcretConnectionRefused]
2014-05-13 15:01:00.690: [  OCRMSG][2621794080]prom_connect: error while waiting for connection complete [24]
2014-05-13 15:01:00.690: [  CRSOCR][2621794080] OCR context init failure. 
                          Error: PROC-32: Cluster Ready Services on the local node 

TraceFileName: ./grac41/gpnpd/gpnpd.log
2014-05-22 11:18:02.209: [  OCRCLI][1738393344]proac_con_init: Local listener using IPC. [(ADDRESS=(PROTOCOL=ipc)(KEY=procr_local_conn_0_PROC))]
2014-05-22 11:18:02.209: [  OCRMSG][1738393344]prom_waitconnect: CONN NOT ESTABLISHED (0,29,1,2)
2014-05-22 11:18:02.209: [  OCRMSG][1738393344]GIPC error [29] msg [gipcretConnectionRefused]
2014-05-22 11:18:02.209: [  OCRMSG][1738393344]prom_connect: error while waiting for connection complete [24]
2014-05-22 11:18:02.209: [  OCRCLI][1738393344]proac_con_init: Failed to connect to server [24]
2014-05-22 11:18:02.209: [  OCRCLI][1738393344]proac_con_init: Post sema. Con count [1]
                         [  OCRCLI][1738393344]ac_init:2: Could not initialize con structures. proac_con_init failed with [32]

Debug problem using cluvfy  
[grid@grac42 ~]$  cluvfy comp nodereach -n grac41
Verifying node reachability 
Checking node reachability...
PRVF-6006 : Unable to reach any of the nodes
PRKN-1034 : Failed to retrieve IP address of host "grac41"
==> Confirmation that we have a Name Server problem
Verification of node reachability was unsuccessful on all the specified nodes. 
[grid@grac42 ~]$  cluvfy comp nodecon  -n grac41
Verifying node connectivity 
ERROR: 
PRVF-6006 : Unable to reach any of the nodes
PRKN-1034 : Failed to retrieve IP address of host "grac41"
Verification cannot proceed
Verification of node connectivity was unsuccessful on all the specified nodes. 

Debug problem with OS commands like ping and nslookup
==> For further details see GENERIC Networking troubleshooting chapter


Case #2 : Private Interface down or wrong IP address - CSSD not starting 

Reported Clusterware Error in CW alert.log:  [/u01/app/11204/grid/bin/cssdagent(16445)] 
                                             CRS-5818:Aborted command 'start' for resource 'ora.cssd'.
Reported in ocssd.log                     :  [    CSSD][491194112]clssnmvDHBValidateNcopy: 
                                             node 1, grac41, has a disk HB, but no network HB,
Reported in crfmond.log                   :  [    CRFM][4239771392]crfm_connect_to: 
                                              Wait failed with gipcret: 16 for conaddr tcp://192.168.2.103:61020

Testing scenario :
Shutdown private interface
[root@grac42 ~]# ifconfig eth2 down
[root@grac42 ~]# ifconfig eth2 
eth2      Link encap:Ethernet  HWaddr 08:00:27:DF:79:B9  
          BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:754556 errors:0 dropped:0 overruns:0 frame:0
          TX packets:631900 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:378302114 (360.7 MiB)  TX bytes:221328282 (211.0 MiB)
[root@grac42 ~]# crsctl start crs
CRS-4123: Oracle High Availability Services has been started.

Clusterware status : 
[root@grac42 grac42]# crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4530: Communications failure contacting Cluster Synchronization Services daemon
CRS-4534: Cannot communicate with Event Manager

[grid@grac42 grac42]$ crsi
NAME                           TARGET     STATE           SERVER       STATE_DETAILS   
-------------------------      ---------- ----------      ------------ ------------------
ora.asm                        ONLINE     OFFLINE         grac41       Instance Shutdown 
ora.cluster_interconnect.haip  ONLINE     OFFLINE                        
ora.crf                        ONLINE     ONLINE          grac42         
ora.crsd                       ONLINE     OFFLINE                        
ora.cssd                       ONLINE     OFFLINE         STARTING       
ora.cssdmonitor                ONLINE     ONLINE          grac42         
ora.ctssd                      ONLINE     OFFLINE                        
ora.diskmon                    OFFLINE    OFFLINE                        
ora.drivers.acfs               ONLINE     OFFLINE                        
ora.evmd                       ONLINE     OFFLINE                        
ora.gipcd                      ONLINE     ONLINE          grac42         
ora.gpnpd                      ONLINE     ONLINE          grac42         
ora.mdnsd                      ONLINE     ONLINE          grac42  
--> CSSD in mode STARTING and not progressing over time 
    After some minutes CSSD goes OFFLINE

Tracefile Details:
alertgrac42.log:
[cssd(16469)]CRS-1656:The CSS daemon is terminating due to a fatal error; 
             Details at (:CSSSC00012:) in .../cssd/ocssd.log
2014-05-31 14:15:41.828: 
[cssd(16469)]CRS-1603:CSSD on node grac42 shutdown by user.
2014-05-31 14:15:41.828: 
[/u01/app/11204/grid/bin/cssdagent(16445)]CRS-5818:Aborted command 'start' for resource 'ora.cssd'. 
             Details at (:CRSAGF00113:) {0:0:2} in ../agent/ohasd/oracssdagent_root/oracssdagent_root.log.

ocssd.log: 
2014-05-31 14:23:11.534: [    CSSD][491194112]clssnmvDHBValidateNcopy: node 1, grac41, 
                         has a disk HB, but no network HB, 
                         DHB has rcfg 296672934, wrtcnt, 25000048, LATS 9730774, lastSeqNo 25000045, 
                         uniqueness 1401378465, timestamp 1401538986/54631634
2014-05-31 14:23:11.550: [    CSSD][481683200]clssnmvDHBValidateNcopy: node 1, grac41, 
                         has a disk HB, but no network HB, 
                         DHB has rcfg 296672934, wrtcnt, 25000050, LATS 9730794, lastSeqNo 25000047, 
                         uniqueness 1401378465, timestamp 1401538986/54632024

Using grep to locate errors
CRF resource is checking CI every 5s and reports errors:
$ fn.sh 2014-06-03 | egrep 'TraceFileName|failed' 
TraceFileName: ./crfmond/crfmond.log 
2014-06-03 08:28:40.859: [    CRFM][4239771392]crfm_connect_to: 
                         Wait failed with gipcret: 16 for conaddr tcp://192.168.2.103:61020
2014-06-03 08:28:46.065: [    CRFM][4239771392]crfm_connect_to: Wait failed with gipcret: 16 for conaddr tcp://192.168.2.103:61020
2014-06-03 08:28:51.271: [    CRFM][4239771392]crfm_connect_to: Wait failed with gipcret: 16 for conaddr tcp://192.168.2.103:61020
2014-06-03 08:37:11.264: [ CSSCLNT][4243982080]clsssInitNative: connect to (ADDRESS=(PROTOCOL=ipc)(KEY=OCSSD_LL_grac42_)) failed, rc 13
       DHB has rcfg 296672934, wrtcnt, 23890957, LATS 9730794, lastSeqNo 23890939, uniqueness 1401292202, timestamp 1401538981/89600384

Do we have any network problems ?
$ fn.sh "2014-06-03" | egrep 'but no network HB|TraceFileName'
Search String:  no network HB
TraceFileName: ./cssd/ocssd.log
2014-06-02 12:51:52.564: [    CSSD][2682775296]clssnmvDHBValidateNcopy: node 3, grac43, 
                          has a disk HB, but no network HB, DHB has rcfg 297162159, wrtcnt, 24036295, LATS 167224, 
                          lastSeqNo 24036293, uniqueness 1401692525, timestamp 1401706311/13179394
2014-06-02 12:51:53.569: [    CSSD][2682775296]clssnmvDHBValidateNcopy: node 1, grac41, 
                          has a disk HB, but no network HB, DHB has rcfg 297162159, wrtcnt, 25145340, LATS 168224, 
                          lastSeqNo 25145334, uniqueness 1401692481, timestamp 1401706313/13192074
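The "no network HB" pattern above can be reduced to a short per-node summary with awk. A minimal sketch - it runs against trimmed sample lines copied from the log output above (the scratch file /tmp/ocssd_sample.log is our own choice); on a live system you would pipe ocssd.log in instead:

```shell
#!/bin/sh
# Sample ocssd.log lines (trimmed copies of the entries quoted above)
cat <<'EOF' > /tmp/ocssd_sample.log
2014-06-02 12:51:52.564: [    CSSD][2682775296]clssnmvDHBValidateNcopy: node 3, grac43, has a disk HB, but no network HB
2014-06-02 12:51:53.569: [    CSSD][2682775296]clssnmvDHBValidateNcopy: node 1, grac41, has a disk HB, but no network HB
EOF
# List each node that has a disk HB but no network HB - those are the
# nodes cut off from the cluster interconnect
awk '/no network HB/ {
        for (i = 1; i <= NF; i++)
            if ($i == "node") {
                n = $(i+1); h = $(i+2)
                gsub(",", "", n); gsub(",", "", h)
                print "node " n " (" h ") missing network HB"
            }
     }' /tmp/ocssd_sample.log | sort -u
```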

Is there a valid CI network device ?
# fn_egrep.sh "NETDATA" | egrep 'TraceFile|2014-06-03'
TraceFileName: ./gipcd/gipcd.log
2014-06-03 07:48:40.372: [ CLSINET][3977414400] Returning NETDATA: 1 interfaces
2014-06-03 07:48:45.401: [ CLSINET][3977414400] Returning NETDATA: 1 interfaces --> ok
2014-06-03 07:52:51.589: [ CLSINET][1140848384] Returning NETDATA: 0 interfaces --> Problem with CI
2014-06-03 07:52:51.669: [ CLSINET][1492440832] Returning NETDATA: 0 interfaces
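To spot exactly when the CI disappeared, filter gipcd.log for the "0 interfaces" reports. A sketch against sample lines taken from the output above (the scratch file name is ours); on a real node run the awk against ./gipcd/gipcd.log:

```shell
#!/bin/sh
# Sample gipcd.log lines (copied from the output quoted above)
cat <<'EOF' > /tmp/gipcd_sample.log
2014-06-03 07:48:40.372: [ CLSINET][3977414400] Returning NETDATA: 1 interfaces
2014-06-03 07:48:45.401: [ CLSINET][3977414400] Returning NETDATA: 1 interfaces
2014-06-03 07:52:51.589: [ CLSINET][1140848384] Returning NETDATA: 0 interfaces
2014-06-03 07:52:51.669: [ CLSINET][1492440832] Returning NETDATA: 0 interfaces
EOF
# Print the timestamp of every report where no CI interface was found
awk '/Returning NETDATA: 0 interfaces/ { print $1, $2, "-> CI interface missing" }' \
    /tmp/gipcd_sample.log
```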

Debug with cluvfy
[grid@grac41 ~]$   cluvfy comp nodecon -n grac41,grac42 -i eth1,eth2
Verifying node connectivity
Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
ERROR:
PRVG-11049 : Interface "eth2" does not exist on nodes "grac42"
Check: Node connectivity for interface "eth1"
Node connectivity passed for interface "eth1"
TCP connectivity check passed for subnet "192.168.1.0"
Check: Node connectivity for interface "eth2"
Node connectivity failed for interface "eth2"
TCP connectivity check passed for subnet "192.168.2.0"
Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "10.0.2.0".
Subnet mask consistency check passed for subnet "192.168.1.0".
Subnet mask consistency check passed for subnet "192.168.2.0".
Subnet mask consistency check passed for subnet "169.254.0.0".
Subnet mask consistency check passed for subnet "192.168.122.0".
Subnet mask consistency check passed.
Node connectivity check failed
Verification of node connectivity was unsuccessful on all the specified nodes.

Debug using OS commands
[grid@grac42 NET]$ /bin/ping -s 1500 -c 2 -I 192.168.2.102 192.168.2.101
bind: Cannot assign requested address
[grid@grac42 NET]$  /bin/ping -s 1500 -c 2 -I 192.168.2.102 192.168.2.102
bind: Cannot assign requested address

Verify GPnP profile and find out the CI device
[root@grac42 crfmond]# $GRID_HOME/bin/gpnptool get 2>/dev/null  |  xmllint --format - | egrep 'CSS-Profile|ASM-Profile|Network id'
    <gpnp:HostNetwork id="gen" HostName="*">
      <gpnp:Network id="net1" IP="192.168.1.0" Adapter="eth1" Use="public"/>
      <gpnp:Network id="net2" IP="192.168.2.0" Adapter="eth2" Use="cluster_interconnect"/>
--> eth2 is our cluster interconnect

[root@grac42 crfmond]#  ifconfig eth2
eth2      Link encap:Ethernet  HWaddr 08:00:27:DF:79:B9
          inet addr:192.168.3.102  Bcast:192.168.3.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fedf:79b9/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
-> Here we have a wrong address: 192.168.3.102 should be 192.168.2.102
   Note the CI device should have the following flags : UP BROADCAST RUNNING MULTICAST
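The mismatch check above can be scripted. A minimal sketch - the helper name ci_subnet_check is ours, not an Oracle tool, and it simply compares the first three octets (the /24 netmask used in this setup). On a live node the two inputs come from `gpnptool get` and `ifconfig eth2`:

```shell
#!/bin/sh
# Compare the subnet recorded in the GPnP profile for the
# cluster_interconnect adapter with the address actually configured
# on that adapter (hypothetical helper; /24 assumption as noted above)
ci_subnet_check() {
    profile_subnet=$1   # e.g. 192.168.2.0 from gpnp:Network Use="cluster_interconnect"
    iface_addr=$2       # e.g. the 'inet addr' shown by ifconfig eth2
    p=$(echo "$profile_subnet" | cut -d. -f1-3)
    i=$(echo "$iface_addr"     | cut -d. -f1-3)
    if [ "$p" = "$i" ]; then
        echo "OK: $iface_addr is in profile subnet $profile_subnet"
    else
        echo "MISMATCH: profile wants $p.x but adapter has $iface_addr"
    fi
}
# The values from the outputs above: profile says 192.168.2.0,
# but eth2 carries 192.168.3.102
ci_subnet_check 192.168.2.0 192.168.3.102
```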


Case #3 : Public Interface down - Public network ora.net1.network not starting 
Reported in  ./agent/crsd/orarootagent_root/orarootagent_root.log
    2014-05-31 11:58:27.899: [    AGFW][1829066496]{2:12808:5} Preparing START command for: ora.net1.network grac42 1
    2014-05-31 11:58:27.969: [    AGFW][1829066496]{2:12808:5} ora.net1.network grac42 1 state changed from: STARTING to: UNKNOWN
    2014-05-31 11:58:27.969: [    AGFW][1829066496]{2:12808:5} ora.net1.network grac42 1 would be continued to monitored!
Reported Clusterware Error in CW alert.log:  no errors reported  

Testing scenario :
- Shutdown public interface
[root@grac42 evmd]# ifconfig eth1 down
[root@grac42 evmd]# ifconfig eth1
eth1      Link encap:Ethernet  HWaddr 08:00:27:63:08:07  
          BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:2889 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2458 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:484535 (473.1 KiB)  TX bytes:316359 (308.9 KiB)
[root@grac42 ~]# crsctl start crs
CRS-4123: Oracle High Availability Services has been started.

Clusterware status : 
[grid@grac42 grac42]$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
--> OHASD stack is ok ! 
[grid@grac42 grac42]$ crsi
NAME                           TARGET     STATE           SERVER       STATE_DETAILS   
-------------------------      ---------- ----------      ------------ ------------------
ora.asm                        ONLINE     ONLINE          grac42       Started  
ora.cluster_interconnect.haip  ONLINE     ONLINE          grac42         
ora.crf                        ONLINE     ONLINE          grac42         
ora.crsd                       ONLINE     ONLINE          grac42         
ora.cssd                       ONLINE     ONLINE          grac42         
ora.cssdmonitor                ONLINE     ONLINE          grac42         
ora.ctssd                      ONLINE     ONLINE          grac42       OBSERVER  
ora.diskmon                    OFFLINE    OFFLINE                        
ora.drivers.acfs               ONLINE     ONLINE          grac42         
ora.evmd                       ONLINE     ONLINE          grac42         
ora.gipcd                      ONLINE     ONLINE          grac42         
ora.gpnpd                      ONLINE     ONLINE          grac42         
ora.mdnsd                      ONLINE     ONLINE          grac42       
--> OHASD stack is up and running 

[grid@grac42 grac42]$ crs  | grep grac42
NAME                           TARGET     STATE           SERVER       STATE_DETAILS   
-------------------------      ---------- ----------      ------------ ------------------
ora.ASMLIB_DG.dg               ONLINE     ONLINE          grac42        
ora.DATA.dg                    ONLINE     ONLINE          grac42        
ora.FRA.dg                     ONLINE     ONLINE          grac42        
ora.LISTENER.lsnr              ONLINE     OFFLINE         grac42        
ora.OCR.dg                     ONLINE     ONLINE          grac42        
ora.SSD.dg                     ONLINE     ONLINE          grac42        
ora.asm                        ONLINE     ONLINE          grac42       Started 
ora.gsd                        OFFLINE    OFFLINE         grac42        
ora.net1.network               ONLINE     OFFLINE         grac42        
ora.ons                        ONLINE     OFFLINE         grac42        
ora.registry.acfs              ONLINE     ONLINE          grac42        
ora.grac4.db                   ONLINE     OFFLINE         grac42       Instance Shutdown
ora.grac4.grac42.svc           ONLINE     OFFLINE         grac42        
ora.grac42.vip                 ONLINE     INTERMEDIATE    grac43       FAILED OVER
--> ora.net1.network OFFLINE  
    ora.grac42.vip in status INTERMEDIATE - FAILED OVER to grac43 
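Resources whose STATE differs from their TARGET are the ones worth chasing. A quick awk sketch to filter them out - run here against sample lines copied from the crs output above (scratch file name is ours); on a live node pipe the real crs output through it instead:

```shell
#!/bin/sh
# Sample resource status lines (NAME TARGET STATE SERVER)
cat <<'EOF' > /tmp/crs_sample.txt
ora.net1.network               ONLINE     OFFLINE         grac42
ora.LISTENER.lsnr              ONLINE     OFFLINE         grac42
ora.DATA.dg                    ONLINE     ONLINE          grac42
ora.grac42.vip                 ONLINE     INTERMEDIATE    grac43
EOF
# Print only the resources that are not at their target state
awk '$2 != $3 { print $1, "target=" $2, "state=" $3 }' /tmp/crs_sample.txt
```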

Check messages logged for resource ora.net1.network from 2014-05-31 11:00:00 - 2014-05-31 11:59:59 
[grid@grac42 grac42]$ fn.sh "ora.net1.network" | egrep '2014-05-31 11|TraceFileName'
TraceFileName: ./agent/crsd/orarootagent_root/orarootagent_root.log
2014-05-31 11:58:27.899: [    AGFW][1829066496]{2:12808:5} Agent received the message: RESOURCE_START[ora.net1.network grac42 1] ID 4098:403
2014-05-31 11:58:27.899: [    AGFW][1829066496]{2:12808:5} Preparing START command for: ora.net1.network grac42 1
2014-05-31 11:58:27.899: [    AGFW][1829066496]{2:12808:5} ora.net1.network grac42 1 state changed from: OFFLINE to: STARTING
2014-05-31 11:58:27.917: [    AGFW][1826965248]{2:12808:5} Command: start for resource: ora.net1.network grac42 1 completed with status: SUCCESS
2014-05-31 11:58:27.919: [    AGFW][1829066496]{2:12808:5} Agent sending reply for: RESOURCE_START[ora.net1.network grac42 1] ID 4098:403
2014-05-31 11:58:27.969: [    AGFW][1829066496]{2:12808:5} ora.net1.network grac42 1 state changed from: STARTING to: UNKNOWN
2014-05-31 11:58:27.969: [    AGFW][1829066496]{2:12808:5} ora.net1.network grac42 1 would be continued to monitored!
2014-05-31 11:58:27.969: [    AGFW][1829066496]{2:12808:5} Started implicit monitor for [ora.net1.network grac42 1] interval=1000 delay=1000
2014-05-31 11:58:27.969: [    AGFW][1829066496]{2:12808:5} Agent sending last reply for: RESOURCE_START[ora.net1.network grac42 1] ID 4098:403
2014-05-31 11:58:27.982: [    AGFW][1829066496]{2:12808:5} Agent received the message: RESOURCE_CLEAN[ora.net1.network grac42 1] ID 4100:409
2014-05-31 11:58:27.982: [    AGFW][1829066496]{2:12808:5} Preparing CLEAN command for: ora.net1.network grac42 1
2014-05-31 11:58:27.982: [    AGFW][1829066496]{2:12808:5} ora.net1.network grac42 1 state changed from: UNKNOWN to: CLEANING
2014-05-31 11:58:27.983: [    AGFW][1826965248]{2:12808:5} Command: clean for resource: ora.net1.network grac42 1 completed with status: SUCCESS
2014-05-31 11:58:27.984: [    AGFW][1829066496]{2:12808:5} Agent sending reply for: RESOURCE_CLEAN[ora.net1.network grac42 1] ID 4100:409
2014-05-31 11:58:27.984: [    AGFW][1829066496]{2:12808:5} ora.net1.network grac42 1 state changed from: CLEANING to: OFFLINE
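The interesting part of that agent log is the resource state machine. A sketch that strips the AGFW noise and leaves just the OFFLINE -> STARTING -> UNKNOWN -> CLEANING -> OFFLINE cycle - sample lines are copied from the log above (scratch file name is ours):

```shell
#!/bin/sh
# Sample orarootagent_root.log state-change lines (quoted above)
cat <<'EOF' > /tmp/agent_sample.log
2014-05-31 11:58:27.899: [    AGFW][1829066496]{2:12808:5} ora.net1.network grac42 1 state changed from: OFFLINE to: STARTING
2014-05-31 11:58:27.969: [    AGFW][1829066496]{2:12808:5} ora.net1.network grac42 1 state changed from: STARTING to: UNKNOWN
2014-05-31 11:58:27.982: [    AGFW][1829066496]{2:12808:5} ora.net1.network grac42 1 state changed from: UNKNOWN to: CLEANING
2014-05-31 11:58:27.984: [    AGFW][1829066496]{2:12808:5} ora.net1.network grac42 1 state changed from: CLEANING to: OFFLINE
EOF
# Reduce each entry to the bare state transition
grep 'state changed' /tmp/agent_sample.log | sed 's/.*state changed from: /  /'
```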

Debug with cluvfy
Run cluvfy on the failing node
[grid@grac42 grac42]$ cluvfy comp nodereach -n grac42
Verifying node reachability 
Checking node reachability...
PRVF-6006 : Unable to reach any of the nodes
PRKC-1071 : Nodes "grac42" did not respond to ping in "3" seconds, 
PRKN-1035 : Host "grac42" is unreachable
Verification of node reachability was unsuccessful on all the specified nodes. 

Debug with OS Commands 
[grid@grac42 NET]$ /bin/ping -s 1500 -c 2 -I 192.168.1.102 grac42
bind: Cannot assign requested address
--> Here we are failing as eth1 is not up and running 

[grid@grac42 NET]$ /bin/ping -s 1500 -c 2 -I 192.168.1.102 grac41
ping: unknown host grac41
--> Here we are failing as the nameserver cannot be reached

Debugging complete CW startup with strace

      • resource STATE remains STARTING for a long time
      • resource process gets restarted quickly but could not successfully start at all
      • Note strace will only help for protection or connection issues.
      • If there is a logical corruption you need to review CW log files
Command used as user root : # strace -t -f -o /tmp/ohasd.trc crsctl start crs
CW status : GPNP daemon doesn't come up
Verify OHASD stack 
NAME                           TARGET     STATE           SERVER       STATE_DETAILS
-------------------------      ---------- ----------      ------------ ------------------
ora.asm                        ONLINE     OFFLINE                      Instance Shutdown
ora.cluster_interconnect.haip  ONLINE     OFFLINE
ora.crf                        ONLINE     OFFLINE
ora.crsd                       ONLINE     OFFLINE
ora.cssd                       ONLINE     OFFLINE
ora.cssdmonitor                ONLINE     OFFLINE
ora.ctssd                      ONLINE     OFFLINE
ora.diskmon                    ONLINE     OFFLINE
ora.drivers.acfs               ONLINE     ONLINE          grac41
ora.evmd                       ONLINE     OFFLINE
ora.gipcd                      ONLINE     OFFLINE
ora.gpnpd                      ONLINE     OFFLINE         STARTING
ora.mdnsd                      ONLINE     ONLINE          grac41
--> gpnpd daemon does not progress over time : STATE shows STARTING  
 
Now stop CW and restart CW startup with strace -t -f  

[root@grac41 gpnpd]# strace -t -f -o /tmp/ohasd.trc crsctl start crs
[root@grac41 gpnpd]# grep -i gpnpd /tmp/ohasd.trc | more

Check whether the gpnpd shell script and gpnpd.bin were scheduled to run:
[root@grac41 log]# grep -i execve /tmp/ohasd.trc | grep gpnp 
9866  08:13:56 execve("/u01/app/11204/grid/bin/gpnpd", ["/u01/app/11204/grid/bin/gpnpd"], [/* 72 vars */] <unfinished ...>
9866  08:13:56 execve("/u01/app/11204/grid/bin/gpnpd.bin", ["/u01/app/11204/grid/bin/gpnpd.bi"...], [/* 72 vars */] <unfinished ...>
11017 08:16:01 execve("/u01/app/11204/grid/bin/gpnpd", ["/u01/app/11204/grid/bin/gpnpd"], [/* 72 vars */] <unfinished ...>
11017 08:16:01 execve("/u01/app/11204/grid/bin/gpnpd.bin", ["/u01/app/11204/grid/bin/gpnpd.bi"...], [/* 72 vars */] <unfinished ...>
--> gpnpd.bin was exec'd twice (at 08:13:56 and 08:16:01) - seems we have a problem here; check the return codes
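A restart loop like this is easy to detect mechanically: count the execve records for the daemon binary and print their timestamps. A sketch against trimmed sample lines from the trace above (scratch file name is ours):

```shell
#!/bin/sh
# Sample execve records for gpnpd.bin (trimmed from the trace above)
cat <<'EOF' > /tmp/execve_sample.trc
9866  08:13:56 execve("/u01/app/11204/grid/bin/gpnpd.bin", ["/u01/app/11204/grid/bin/gpnpd.bi"...], [/* 72 vars */] <unfinished ...>
11017 08:16:01 execve("/u01/app/11204/grid/bin/gpnpd.bin", ["/u01/app/11204/grid/bin/gpnpd.bi"...], [/* 72 vars */] <unfinished ...>
EOF
# How often was the binary exec'd, and when?
echo "gpnpd.bin exec count: $(grep -c 'gpnpd\.bin' /tmp/execve_sample.trc)"
awk '/gpnpd\.bin/ { print "  started at", $2 }' /tmp/execve_sample.trc
```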

Check ohasd.trc for errors like: 
$ egrep 'EACCES|ENOENT|EADDRINUSE|ECONNREFUSED|EPERM' /tmp/ohasd.trc
Check ohasd.trc for certain return codes
# grep EACCES /tmp/ohasd.trc
# grep ENOENT ohasd.trc      # returns a lot of info 
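The errno grep above can be turned into a frequency summary, which makes the dominant failure obvious at a glance. A sketch against trimmed sample lines from the traces shown in this chapter (scratch file name is ours); on a live system run it against /tmp/ohasd.trc:

```shell
#!/bin/sh
# Sample strace records carrying fatal errno values (trimmed copies)
cat <<'EOF' > /tmp/ohasd_sample.trc
9866  08:13:56 open("/u01/app/11204/grid/log/grac41/gpnpd/gpnpdOUT.log", O_RDWR|O_CREAT|O_APPEND, 0644) = -1 EACCES (Permission denied)
9924  08:14:37 connect(30, {sa_family=AF_FILE, path="/var/tmp/.oracle/sgrac41DBG_GPNPD"}, 110) = -1 ECONNREFUSED (Connection refused)
9924  08:14:42 connect(30, {sa_family=AF_FILE, path="/var/tmp/.oracle/sgrac41DBG_GPNPD"}, 110) = -1 ECONNREFUSED (Connection refused)
EOF
# Count each interesting errno and sort by frequency
grep -Eo 'EACCES|ENOENT|EADDRINUSE|ECONNREFUSED|EPERM' /tmp/ohasd_sample.trc \
    | sort | uniq -c | sort -rn
```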

Linux error codes leading to a failed CW startup
EACCES :
  open("/u01/app/11204/grid/log/grac41/gpnpd/gpnpdOUT.log", O_RDWR|O_CREAT|O_APPEND, 0644) = -1 EACCES (Permission denied)
ENOENT :
  stat("/u01/app/11204/grid/log/grac41/ohasd", 0x7fff17d68f40) = -1 ENOENT (No such file or directory)
EADDRINUSE :
[pid  7391] bind(6, {sa_family=AF_FILE, path="/var/tmp/.oracle/sgrac41DBG_MDNSD"}, 110) = -1 EADDRINUSE (Address already in use)
ECONNREFUSED :
[pid  7391] connect(6, {sa_family=AF_FILE, path="/var/tmp/.oracle/sgrac41DBG_MDNSD"}, 110) = -1 ECONNREFUSED (Connection refused)
EPERM :
[pid  7391] unlink("/var/tmp/.oracle/sgrac41DBG_MDNSD") = -1 EPERM (Operation not permitted)

Check for logfile and PID file usage
PID file usage :
# grep "\.pid"  ohasd.trc
Successful open of mdns/init/grac41.pid through MDNSD 
9848  08:13:55 stat("/u01/app/11204/grid/mdns/init/grac41.pid",  <unfinished ...>
9848  08:13:55 stat("/u01/app/11204/grid/mdns/init/grac41.pid",  <unfinished ...>
9848  08:13:55 access("/u01/app/11204/grid/mdns/init/grac41.pid", F_OK <unfinished ...>
9848  08:13:55 statfs("/u01/app/11204/grid/mdns/init/grac41.pid",  <unfinished ...>
9848  08:13:55 open("/u01/app/11204/grid/mdns/init/grac41.pid", O_WRONLY|O_CREAT|O_TRUNC, 0666) = 7
9841  08:13:56 stat("/u01/app/11204/grid/mdns/init/grac41.pid", {st_mode=S_IFREG|0644, st_size=5, ...}) = 0
9841  08:13:56 stat("/u01/app/11204/grid/mdns/init/grac41.pid", {st_mode=S_IFREG|0644, st_size=5, ...}) = 0
9841  08:13:56 access("/u01/app/11204/grid/mdns/init/grac41.pid", F_OK) = 0
9841  08:13:56 statfs("/u01/app/11204/grid/mdns/init/grac41.pid", {f_type="EXT2_SUPER_MAGIC", f_bsize=4096,
                 f_blocks=9900906, f_bfree=2564283, f_bavail=2061346, f_files=2514944, f_ffree=2079685, f_fsid={1657171729, 223082106}, f_namelen=255, f_frsize=4096}) = 0
9841  08:13:56 open("/u01/app/11204/grid/mdns/init/grac41.pid", O_RDONLY) = 27
9845  08:13:56 open("/var/tmp/.oracle/mdnsd.pid", O_WRONLY|O_CREAT|O_TRUNC, 0666 <unfinished ...>

Failed open of  gpnp/init/grac41.pid through GPNPD 
9842  08:16:00 stat("/u01/app/11204/grid/gpnp/init/grac41.pid",  <unfinished ...>
9842  08:16:00 stat("/u01/app/11204/grid/gpnp/init/grac41.pid",  <unfinished ...>
9842  08:16:00 access("/u01/app/11204/grid/gpnp/init/grac41.pid", F_OK <unfinished ...>
9842  08:16:00 statfs("/u01/app/11204/grid/gpnp/init/grac41.pid",  <unfinished ...>
9842  08:16:00 open("/u01/app/11204/grid/gpnp/init/grac41.pid", O_RDONLY <unfinished ...>
9860  08:16:01 stat("/u01/app/11204/grid/gpnp/init/grac41.pid",  <unfinished ...>
9860  08:16:01 stat("/u01/app/11204/grid/gpnp/init/grac41.pid",  <unfinished ...>
9860  08:16:01 access("/u01/app/11204/grid/gpnp/init/grac41.pid", F_OK <unfinished ...>
9860  08:16:01 statfs("/u01/app/11204/grid/gpnp/init/grac41.pid",  <unfinished ...>
9860  08:16:01 open("/u01/app/11204/grid/gpnp/init/grac41.pid", O_RDONLY <unfinished ...>

Successful open of logfile log/grac41/mdnsd/mdnsd.log through MDNSD 
# grep "\.log" /tmp/ohasd.trc       # this one is very helpful 
9845  08:13:55 open("/u01/app/11204/grid/log/grac41/mdnsd/mdnsd.log", O_WRONLY|O_APPEND) = 4
9845  08:13:55 stat("/u01/app/11204/grid/log/grac41/mdnsd/mdnsd.log", {st_mode=S_IFREG|0644, st_size=509983, ...}) = 0
9845  08:13:55 chmod("/u01/app/11204/grid/log/grac41/mdnsd/mdnsd.log", 0644) = 0
9845  08:13:55 stat("/u01/app/11204/grid/log/grac41/mdnsd/mdnsd.log", {st_mode=S_IFREG|0644, st_size=509983, ...}) = 0
9845  08:13:55 chmod("/u01/app/11204/grid/log/grac41/mdnsd/mdnsd.log", 0644) = 0
9845  08:13:55 stat("/u01/app/11204/grid/log/grac41/mdnsd/mdnsd.log", {st_mode=S_IFREG|0644, st_size=509983, ...}) = 0
9845  08:13:55 stat("/u01/app/11204/grid/log/grac41/mdnsd/mdnsd.log", {st_mode=S_IFREG|0644, st_size=509983, ...}) = 0
9845  08:13:55 access("/u01/app/11204/grid/log/grac41/mdnsd/mdnsd.log", F_OK) = 0
9845  08:13:55 statfs("/u01/app/11204/grid/log/grac41/mdnsd/mdnsd.log", {f_type="EXT2_SUPER_MAGIC", f_bsize=4096, f_blocks=9900906, 
               f_bfree=2564312, f_bavail=2061375, f_files=2514944, f_ffree=2079685, f_fsid={1657171729, 223082106}, f_namelen=255, f_frsize=4096}) = 0
9845  08:13:55 open("/u01/app/11204/grid/log/grac41/mdnsd/mdnsd.log", O_WRONLY|O_APPEND) = 4
9845  08:13:55 stat("/u01/app/11204/grid/log/grac41/mdnsd/mdnsd.log", {st_mode=S_IFREG|0644, st_size=509983, ...}) = 0
9845  08:13:55 stat("/u01/app/11204/grid/log/grac41/mdnsd/mdnsd.log",  <unfinished ...>

Failed open of logfile log/grac41/gpnpd/gpnpdOUT.log through GPNPD 
9866  08:13:56 open("/u01/app/11204/grid/log/grac41/gpnpd/gpnpdOUT.log", O_RDWR|O_CREAT|O_APPEND, 0644 <unfinished ...>
       --> We need to get a file descriptor back from open call
9842  08:15:56 stat("/u01/app/11204/grid/log/grac41/alertgrac41.log", {st_mode=S_IFREG|0664, st_size=1877580, ...}) = 0
9842  08:15:56 stat("/u01/app/11204/grid/log/grac41/alertgrac41.log", {st_mode=S_IFREG|0664, st_size=1877580, ...}) = 0

Checking IPC sockets usage
Successful opening of IPC sockets through the MDNSD process : grep MDNSD /tmp/ohasd.trc 
9849  08:13:56 chmod("/var/tmp/.oracle/sgrac41DBG_MDNSD", 0777) = 0
9862  08:13:56 access("/var/tmp/.oracle/sgrac41DBG_MDNSD", F_OK <unfinished ...>
9862  08:13:56 connect(28, {sa_family=AF_FILE, path="/var/tmp/.oracle/sgrac41DBG_MDNSD"}, 110 <unfinished ...>
9849  08:13:56 <... getsockname resumed> {sa_family=AF_FILE, path="/var/tmp/.oracle/sgrac41DBG_MDNSD"}, [36]) = 0
9849  08:13:56 chmod("/var/tmp/.oracle/sgrac41DBG_MDNSD", 0777 <unfinished ...>
--> connect was successful at 08:13:56 - further processing with system calls like bind() will be seen in the trace
    No further connect requests are happening for  /var/tmp/.oracle/sgrac41DBG_MDNSD 
Complete log including a successful connect 
9862  08:13:56 connect(28, {sa_family=AF_FILE, path="/var/tmp/.oracle/sgrac41DBG_MDNSD"}, 110 <unfinished ...>
9834  08:13:56 <... times resumed> {tms_utime=3, tms_stime=4, tms_cutime=0, tms_cstime=0}) = 435715860
9862  08:13:56 <... connect resumed> )  = 0
9862  08:13:56 connect(28, {sa_family=AF_FILE, path="/var/tmp/.oracle/sgrac41DBG_MDNSD"}, 110 <unfinished ...>
9834  08:13:56 <... times resumed> {tms_utime=3, tms_stime=4, tms_cutime=0, tms_cstime=0}) = 435715860
9862  08:13:56 <... connect resumed> )  = 0
..
9849  08:13:55 bind(6, {sa_family=AF_FILE, path="/var/tmp/.oracle/sgrac41DBG_MDNSD"}, 110) = 0
..
9849  08:13:55 listen(6, 500 <unfinished ...>
9731  08:13:55 nanosleep({0, 1000000},  <unfinished ...>
9849  08:13:55 <... listen resumed> )   = 0
--> After a successful listen system call clients can connect 
    To allow clients to connect we need successful connect(), bind() and listen() system calls !


Failed opening of IPC sockets through the GPNPD process : grep GPNPD /tmp/ohasd.trc 
9924  08:14:37 access("/var/tmp/.oracle/sgrac41DBG_GPNPD", F_OK) = 0
9924  08:14:37 connect(30, {sa_family=AF_FILE, path="/var/tmp/.oracle/sgrac41DBG_GPNPD"}, 110) 
                = -1 ECONNREFUSED (Connection refused)
9924  08:14:37 access("/var/tmp/.oracle/sgrac41DBG_GPNPD", F_OK) = 0
9924  08:14:37 connect(30, {sa_family=AF_FILE, path="/var/tmp/.oracle/sgrac41DBG_GPNPD"}, 110) = 
               = -1 ECONNREFUSED (Connection refused)
--> The connect request was unsuccessful and was repeated again and again. 

Debugging a single CW process with strace

      • resource STATE remains STARTING for a long time
      • resource process gets restarted quickly but could not successfully start at all
      • Note strace will only help for protection or connection issues.
      • If there is a logical corruption you need to review CW log files
[root@grac41 .oracle]# crsi
NAME                           TARGET     STATE           SERVER       STATE_DETAILS   
-------------------------      ---------- ----------      ------------ ------------------
ora.asm                        ONLINE     OFFLINE                      Instance Shutdown 
ora.cluster_interconnect.haip  ONLINE     OFFLINE                        
ora.crf                        ONLINE     OFFLINE                        
ora.crsd                       ONLINE     OFFLINE                        
ora.cssd                       ONLINE     OFFLINE                        
ora.cssdmonitor                ONLINE     OFFLINE                        
ora.ctssd                      ONLINE     OFFLINE                        
ora.diskmon                    ONLINE     OFFLINE                        
ora.drivers.acfs               ONLINE     OFFLINE                        
ora.evmd                       ONLINE     OFFLINE                        
ora.gipcd                      ONLINE     OFFLINE                        
ora.gpnpd                      ONLINE     OFFLINE         STARTING       
ora.mdnsd                      ONLINE     ONLINE          grac41 

Check whether the related process is running:
[root@grac41 .oracle]#  ps -elf | egrep "PID|d.bin|ohas|oraagent|orarootagent|cssdagent|cssdmonitor" | grep -v grep
F S UID        PID  PPID  C PRI  NI ADDR SZ WCHAN  STIME TTY          TIME CMD
4 S root      7023     1  0  80   0 -  2846 pipe_w 06:01 ?        00:00:00 /bin/sh /etc/init.d/init.ohasd run
0 S grid     17764 10515  0  80   0 -  1113 wait   12:11 pts/8    00:00:00 strace -t -f -o /tmp/mdnsd.trc /u01/app/11204/grid/bin/mdnsd
0 S grid     17767 17764  0  80   0 - 78862 poll_s 12:11 ?        00:00:00 /u01/app/11204/grid/bin/mdnsd.bin
4 S root     18501     1 23  80   0 - 176836 futex_ 12:13 ?       00:05:07 /u01/app/11204/grid/bin/ohasd.bin reboot
4 S grid     18567     1  1  80   0 - 170697 futex_ 12:13 ?       00:00:14 /u01/app/11204/grid/bin/oraagent.bin
4 S root     20306     1  0 -40   - - 160927 futex_ 12:19 ?       00:00:03 /u01/app/11204/grid/bin/cssdmonitor
4 S root     21396     1  0  80   0 - 163115 futex_ 12:23 ?       00:00:00 /u01/app/11204/grid/bin/orarootagent.bin
--> gpnpd.bin is not running - but was restarted very often

Starting gpnpd.bin process with strace :  $ strace -t -f -o /tmp/gpnpd.trc /u01/app/11204/grid/bin/gpnpd

Note this is very dangerous as you need to know which user starts this process.
If you run this process as the wrong user, the OLR, OCR, IPC socket privileges and log file locations can be corrupted!
Before tracing a single process you may run strace -t -f -o ohasd.trc crsctl start crs ( see the above chapter ),
as this command always starts the processes as the correct user, in the correct order, and pulls up the needed resources.
Run the following commands only on your test system, as a last resort.

Manually start gpnpd as user grid with strace attached! 
[grid@grac41 grac41]$  strace -t -f -o /tmp/gpnpd.trc /u01/app/11204/grid/bin/gpnpd
Unable to open file /u01/app/11204/grid/log/grac41/gpnpd/gpnpdOUT.log: %

[grid@grac41 grac41]$   egrep 'EACCES|ENOENT|EADDRINUSE|ECONNREFUSED|EPERM' /tmp/gpnpd.trc 
25251 12:37:39 open("/u01/app/11204/grid/log/grac41/gpnpd/gpnpdOUT.log", 
              O_RDWR|O_CREAT|O_APPEND, 0644) = -1 EACCES (Permission denied)
==> Fix:  # chown grid:oinstall /u01/app/11204/grid/log/grac41/gpnpd/gpnpdOUT.log

Repeat the above command as long as errors show up
[grid@grac41 grac41]$  strace -t -f -o /tmp/gpnpd.trc /u01/app/11204/grid/bin/gpnpd
[grid@grac41 grac41]$   egrep 'EACCES|ENOENT|EADDRINUSE|ECONNREFUSED|EPERM' /tmp/gpnpd.trc 
27089 12:44:35 connect(45, {sa_family=AF_FILE, path="/var/tmp/.oracle/sprocr_local_conn_0_PROC"}, 110) =
              = -1 ENOENT (No such file or directory)
27089 12:44:58 connect(45, {sa_family=AF_FILE, path="/var/tmp/.oracle/sprocr_local_conn_0_PROC"}, 110) 
               = -1 ENOENT (No such file or directory)
..
32486 13:03:52 connect(45, {sa_family=AF_FILE, path="/var/tmp/.oracle/sprocr_local_conn_0_PROC"}, 110) 
                = -1 ENOENT (No such file or directory)
--> Connect was unsuccessful - check IPC socket protections !

Note strace will only help for protection or connection issues.
If there is a logical corruption you need to review CW log files 

Details of a logical OCR corruption with PROCL-5 error :   ./gpnpd/gpnpd.log
..
[   CLWAL][3606726432]clsw_Initialize: OLR initlevel [70000]
[  OCRAPI][3606726432]a_init:10: AUTH LOC [/u01/app/11204/grid/srvm/auth]
[  OCRMSG][3606726432]prom_init: Successfully registered comp [OCRMSG] in clsd.
2014-05-20 07:57:51.118: [  OCRAPI][3606726432]a_init:11: Messaging init successful.
[  OCRCLI][3606726432]oac_init: Successfully registered comp [OCRCLI] in clsd.
2014-05-20 07:57:51.118: [  OCRCLI][3606726432]proac_con_init: Local listener using IPC. [(ADDRESS=(PROTOCOL=ipc)(KEY=procr_local_conn_0_PROL))]
2014-05-20 07:57:51.119: [  OCRCLI][3606726432]proac_con_init: Successfully connected to the server
2014-05-20 07:57:51.119: [  OCRCLI][3606726432]proac_con_init: Post sema. Con count [1]
2014-05-20 07:57:51.120: [  OCRAPI][3606726432]a_init:12: Client init successful.
2014-05-20 07:57:51.120: [  OCRAPI][3606726432]a_init:21: OCR init successful. Init Level [7]
2014-05-20 07:57:51.120: [  OCRAPI][3606726432]a_init:2: Init Level [7]
2014-05-20 07:57:51.132: [  OCRCLI][3606726432]proac_con_init: Post sema. Con count [2]
[  clsdmt][3595089664]Listening to (ADDRESS=(PROTOCOL=ipc)(KEY=grac41DBG_GPNPD))
2014-05-20 07:57:51.133: [  clsdmt][3595089664]PID for the Process [31034], connkey 10 
2014-05-20 07:57:51.133: [  clsdmt][3595089664]Creating PID [31034] file for home /u01/app/11204/grid host grac41 bin gpnp to /u01/app/11204/grid/gpnp/init/
2014-05-20 07:57:51.133: [  clsdmt][3595089664]Writing PID [31034] to the file [/u01/app/11204/grid/gpnp/init/grac41.pid] 
2014-05-20 07:57:52.108: [    GPNP][3606726432]clsgpnpd_validateProfile: [at clsgpnpd.c:2919] GPnPD taken cluster name 'grac4'
2014-05-20 07:57:52.108: [    GPNP][3606726432]clsgpnpd_openLocalProfile: [at clsgpnpd.c:3477] Got local profile from file cache provider (LCP-FS).
2014-05-20 07:57:52.111: [    GPNP][3606726432]clsgpnpd_openLocalProfile: [at clsgpnpd.c:3532] Got local profile from OLR cache provider (LCP-OLR).
2014-05-20 07:57:52.113: [    GPNP][3606726432]procr_open_key_ext: OLR api procr_open_key_ext failed for key SYSTEM.GPnP.profiles.peer.pending
2014-05-20 07:57:52.113: [    GPNP][3606726432]procr_open_key_ext: OLR current boot level : 7
2014-05-20 07:57:52.113: [    GPNP][3606726432]procr_open_key_ext: OLR error code    : 5
2014-05-20 07:57:52.126: [    GPNP][3606726432]procr_open_key_ext: OLR error message : PROCL-5: User does not have permission to perform a local registry operation on this key. 
                                   Authentication error [User does not have permission to perform this operation] [0]
2014-05-20 07:57:52.126: [    GPNP][3606726432]clsgpnpco_ocr2profile: [at clsgpnpco.c:578] Result: (58) CLSGPNP_OCR_ERR. Failed to open requested OLR Profile.
2014-05-20 07:57:52.127: [    GPNP][3606726432]clsgpnpd_lOpen: [at clsgpnpd.c:1734] Listening on ipc://GPNPD_grac41
2014-05-20 07:57:52.127: [    GPNP][3606726432]clsgpnpd_lOpen: [at clsgpnpd.c:1743] GIPC gipcretFail (1) gipcListen listen failure on 
2014-05-20 07:57:52.127: [ default][3606726432]GPNPD failed to start listening for GPnP peers. 
2014-05-20 07:57:52.135: [    GPNP][3606726432]clsgpnpd_term: [at clsgpnpd.c:1344] STOP GPnPD terminating. Closing connections...
2014-05-20 07:57:52.137: [ default][3606726432]clsgpnpd_term STOP terminating.
2014-05-20 07:57:53.136: [  OCRAPI][3606726432]a_terminate:1:current ref count = 1
2014-05-20 07:57:53.136: [  OCRAPI][3606726432]a_terminate:1:current ref count = 0
--> Fatal OLR error ==> OLR is corrupted ==> GPnPD terminating.
For details on how to fix the PROCL-5 error please read the following link.

OHASD does not start

Understanding CW startup configuration in OEL 6  
OHASD Script location 
[root@grac41 init.d]# find /etc |grep S96
/etc/rc.d/rc5.d/S96ohasd
/etc/rc.d/rc3.d/S96ohasd
[root@grac41 init.d]# ls -l /etc/rc.d/rc5.d/S96ohasd
lrwxrwxrwx. 1 root root 17 May  4 10:57 /etc/rc.d/rc5.d/S96ohasd -> /etc/init.d/ohasd
[root@grac41 init.d]# ls -l /etc/rc.d/rc3.d/S96ohasd
lrwxrwxrwx. 1 root root 17 May  4 10:57 /etc/rc.d/rc3.d/S96ohasd -> /etc/init.d/ohasd
--> Run levels 3 and 5 start the ohasd daemon
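The symlink check above can be scripted so it works on any node. A sketch - it builds a sample rc tree under /tmp (our own scratch location) so it is self-contained; on a real OEL 6 node drop the RC_ROOT override and point it at /etc/rc.d:

```shell
#!/bin/sh
# Verify that the S96ohasd start link exists for run levels 3 and 5
# (SysV layout as shown above). RC_ROOT defaults to a sample tree.
RC_ROOT=${RC_ROOT:-/tmp/rc_sample}
mkdir -p "$RC_ROOT/rc3.d" "$RC_ROOT/rc5.d"
ln -sf /etc/init.d/ohasd "$RC_ROOT/rc3.d/S96ohasd"
ln -sf /etc/init.d/ohasd "$RC_ROOT/rc5.d/S96ohasd"
for lvl in 3 5; do
    link="$RC_ROOT/rc$lvl.d/S96ohasd"
    if [ -L "$link" ]; then
        echo "runlevel $lvl: S96ohasd -> $(readlink "$link")"
    else
        echo "runlevel $lvl: S96ohasd MISSING - ohasd will not autostart"
    fi
done
```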

Check status of init.ohasd process
[root@grac41 bin]# more /etc/init/oracle-ohasd.conf
# Copyright (c) 2001, 2011, Oracle and/or its affiliates. All rights reserved. 
#
# Oracle OHASD startup
start on runlevel [35]
stop  on runlevel [!35]
respawn
exec /etc/init.d/init.ohasd run >/dev/null 2>&1 </dev/null

List current PID
[root@grac41 Desktop]#  initctl list | grep oracle-ohasd
oracle-ohasd start/running, process 27558
[root@grac41 Desktop]# ps -elf | egrep "PID|d.bin|ohas|oraagent|orarootagent|cssdagent|cssdmonitor" | grep -v grep
F S UID        PID  PPID  C PRI  NI ADDR SZ WCHAN  STIME TTY          TIME CMD
4 S root     27558     1  0  80   0 -  2878 wait   07:01 ?        00:00:02 /bin/sh /etc/init.d/init.ohasd run

Case #1 : OHASD does not start
Check your runlevel, running init.ohasd process and clusterware configuration
  # who -r
         run-level 5  2014-05-19 14:48
  # ps -elf | egrep "PID|d.bin|ohas" | grep -v grep
  F S UID        PID  PPID  C PRI  NI ADDR SZ WCHAN  STIME TTY          TIME CMD
  4 S root      6098     1  0  80   0 -  2846 wait   04:44 ?        00:00:00 /bin/sh /etc/init.d/init.ohasd run
  # crsctl config crs
  CRS-4622: Oracle High Availability Services autostart is enabled.

Case #2 : OLR not accessible - CW doesn't start - Error CRS-4124
Reported error in ocssd.log               : 
Reported Clusterware Error in CW alert.log: CRS-0704:Oracle High Availability Service aborted due to Oracle Local Registry error 
                                            [PROCL-33: Oracle Local Registry is not configured Storage layer error 
                                            [Error opening olr.loc file. No such file or directory] [2]]. 
                                            Details at (:OHAS00106:) in /u01/app/11204/grid/log/grac41/ohasd/ohasd.log.
Testing scenario :
# mv  /etc/oracle/olr.loc  /etc/oracle/olr.loc_bck
# crsctl start crs
CRS-4124: Oracle High Availability Services startup failed.
CRS-4000: Command Start failed, or completed with errors.

Clusterware status :
# crsi
NAME                           TARGET     STATE           SERVER       STATE_DETAILS   
-------------------------      ---------- ----------      ------------ ------------------
                               CRS-4639:  Could           not          contact Oracle
                               CRS-4000:  Command         Status       failed, or
[root@grac41 Desktop]# ps -elf | egrep "PID|d.bin|ohas" | grep -v grep
F S UID        PID  PPID  C PRI  NI ADDR SZ WCHAN  STIME TTY          TIME CMD
4 S root      6098     1  0  80   0 -  2846 wait   04:44 ?        00:00:00 /bin/sh /etc/init.d/init.ohasd run
--> OHASD.bin does not start 

Tracefile Details:
./grac41/alertgrac41.log
[ohasd(20436)]CRS-0704:Oracle High Availability Service aborted due to Oracle Local Registry error 
     [PROCL-33: Oracle Local Registry is not configured Storage layer error [Error opening olr.loc file. No such file or directory] [2]]. 
     Details at (:OHAS00106:) in /u01/app/11204/grid/log/grac41/ohasd/ohasd.log.

./grac41/ohasd/ohasd.log
2014-05-11 11:45:42.892: [  CRSOCR][149608224] OCR context init failure.  Error: PROCL-33: Oracle Local Registry is not configured Storage layer error 
                           [Error opening olr.loc file. No such file or directory] [2]
2014-05-11 11:45:42.893: [ default][149608224] Created alert : (:OHAS00106:) :  OLR initialization failed, error: 
                            PROCL-33: Oracle Local Registry is not configured Storage layer error 
                         [Error opening olr.loc file. No such file or directory] [2]
2014-05-11 11:45:42.893: [ default][149608224][PANIC] OHASD exiting; Could not init OLR
2014-05-11 11:45:42.893: [ default][149608224] Done.

OS log : /var/log/messages
May 24 09:21:53 grac41 clsecho: /etc/init.d/init.ohasd: Ohasd restarts 11 times in 2 seconds.
May 24 09:21:53 grac41 clsecho: /etc/init.d/init.ohasd: Ohasd restarts too rapidly. Stop auto-restarting.

Debugging steps
Verify your local cluster registry (OLR)
# ocrcheck -local -config
PROTL-604: Failed to retrieve the configured location of the local registry
Error opening olr.loc file. No such file or directory

# ocrcheck -local
PROTL-601: Failed to initialize ocrcheck
PROCL-33: Oracle Local Registry is not configured Storage layer error [Error opening olr.loc file. No such file or directory] [2]

# ls -l /etc/oracle/olr.loc
ls: cannot access /etc/oracle/olr.loc: No such file or directory

Note a working OLR should look like:
#  more /etc/oracle/olr.loc
olrconfig_loc=/u01/app/11204/grid/cdata/grac41.olr
crs_home=/u01/app/11204/grid
# ls -l /u01/app/11204/grid/cdata/grac41.olr
-rw-------. 1 root oinstall 272756736 May 24 09:15 /u01/app/11204/grid/cdata/grac41.olr
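The checks above can be scripted. A minimal sketch, assuming the Linux 11.2 default olr.loc location and the `olrconfig_loc` key shown above, that validates the OLR registration before starting CRS:

```shell
#!/bin/sh
# olr_check [olr.loc path] -- verify that olr.loc exists and that the
# OLR file it points to is present. The default path matches this
# article's environment; pass another path to override.
olr_check() {
    loc=${1:-/etc/oracle/olr.loc}
    [ -r "$loc" ] || { echo "MISSING: $loc"; return 1; }
    olr=$(awk -F= '/^olrconfig_loc=/ {print $2}' "$loc")
    [ -f "$olr" ] || { echo "MISSING OLR file: $olr"; return 1; }
    echo "OK: $olr"
}
```

Run it as root before `crsctl start crs`; a MISSING line corresponds to the PROCL-33 scenario reproduced above.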

Verify your OLR configuration with cluvfy 
[grid@grac41 ~]$  cluvfy comp olr -verbose
 ERROR: 
Oracle Grid Infrastructure not configured. 
You cannot run '/u01/app/11204/grid/bin/cluvfy' without the Oracle Grid Infrastructure.

Strace the command to get more details:
[grid@grac41 crsd]$ strace -t -f -o clu.trc cluvfy comp olr -verbose
clu.trc reports :
29993 09:32:19 open("/etc/oracle/olr.loc", O_RDONLY) = -1 ENOENT (No such file or directory)

OHASD Agents do not start

      • OHASD.BIN spawns four agents/monitors to start resources:
      • oraagent: responsible for ora.asm, ora.evmd, ora.gipcd, ora.gpnpd, ora.mdnsd etc.
      • orarootagent: responsible for ora.crsd, ora.ctssd, ora.diskmon, ora.drivers.acfs etc.
      • cssdagent / cssdmonitor: responsible for ora.cssd (for ocssd.bin) and ora.cssdmonitor (for cssdmonitor itself)
If ohasd.bin cannot start any of the above agents properly, the clusterware will not reach a healthy state.
Potential Problems
1. Common causes of agent failure are that the log file or log directory for the agent does not have the proper ownership or permissions.
2. If an agent binary (oraagent.bin, orarootagent.bin, etc.) is corrupted, the agent will not start and the related resources will not come up.

Debugging CRS startup if the trace file location is not accessible
Action - Change trace directory 
[grid@grac41 log]$ mv  $GRID_HOME/log/grac41 $GRID_HOME/log/grac41_nw
[grid@grac41 log]$ crsctl start crs
CRS-4124: Oracle High Availability Services startup failed.
CRS-4000: Command Start failed, or completed with errors.

Process Status and CRS status
[root@grac41 .oracle]# ps -elf | egrep "PID|d.bin|ohas|oraagent|orarootagent|cssdagent|cssdmonitor" | grep -v grep
F S UID        PID  PPID  C PRI  NI ADDR SZ WCHAN  STIME TTY          TIME CMD
4 S root      5396     1  0  80   0 -  2847 pipe_w 10:52 ?        00:00:00 /bin/sh /etc/init.d/init.ohasd run
4 S root     26705 25370  1  80   0 - 47207 hrtime 14:05 pts/7    00:00:00 /u01/app/11204/grid/bin/crsctl.bin start crs
[root@grac41 .oracle]# crsctl check crs
CRS-4639: Could not contact Oracle High Availability Services

OS Tracefile: /var/log/messages 
May 13 13:48:27 grac41 root: exec /u01/app/11204/grid/perl/bin/perl -I/u01/app/11204/grid/perl/lib 
                 /u01/app/11204/grid/bin/crswrapexece.pl 
  /u01/app/11204/grid/crs/install/s_crsconfig_grac41_env.txt /u01/app/11204/grid/bin/ohasd.bin "reboot"
May 13 13:48:27 grac41 OHASD[22203]: OHASD exiting; Directory /u01/app/11204/grid/log/grac41/ohasd not found

Debugging steps
[root@grac41 gpnpd]# strace -f -o ohas.trc crsctl start crs
CRS-4124: Oracle High Availability Services startup failed.
CRS-4000: Command Start failed, or completed with errors.
[root@grac41 gpnpd]# grep ohasd ohas.trc
...
22203 execve("/u01/app/11204/grid/bin/ohasd.bin", ["/u01/app/11204/grid/bin/ohasd.bi"..., "reboot"], [/* 60 vars */]) = 0
22203 stat("/u01/app/11204/grid/log/grac41/ohasd", 0x7fff17d68f40) = -1 ENOENT (No such file or directory)
==> Directory /u01/app/11204/grid/log/grac41/ohasd was missing or had the wrong permissions
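A hedged sketch of the fix: recreate the missing trace directory with the ownership and mode copied from a healthy node (root:oinstall and 0755 are assumptions here; compare against a working cluster member before applying):

```shell
#!/bin/sh
# fix_trace_dir GRID_HOME NODE -- recreate a missing ohasd trace
# directory. Ownership (root:oinstall) is only applied when running
# as root; the mode/owner values are assumed, verify them first.
fix_trace_dir() {
    dir="$1/log/$2/ohasd"
    if [ ! -d "$dir" ]; then
        mkdir -p "$dir"
        chmod 0755 "$dir"
        if [ "$(id -u)" -eq 0 ]; then chown root:oinstall "$dir"; fi
    fi
    ls -ld "$dir"
}
# e.g. fix_trace_dir /u01/app/11204/grid grac41
```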

Using cluvfy comp olr
[grid@grac41 ~]$ cluvfy comp  olr

Verifying OLR integrity 
Checking OLR integrity...
Checking OLR config file...

ERROR: 
2014-05-17 18:26:41.576:  CLSD: A file system error occurred while attempting to create default permissions for 
                          file "/u01/app/11204/grid/log/grac41/alertgrac41.log" during alert open processing for 
                          process "client". Additional diagnostics: 
                          LFI-00133: Trying to create file /u01/app/11204/grid/log/grac41/alertgrac41.log 
                          that already exists. 
                          LFI-01517: open() failed(OSD return value = 13).  
2014-05-17 18:26:41.585:  CLSD: An error was encountered while attempting to 
                          open alert log "/u01/app/11204/grid/log/grac41/alertgrac41.log". 
                          Additional diagnostics: (:CLSD00155:) 2014-05-17 18:26:41.585:  

OLR config file check successful
Checking OLR file attributes...
ERROR: 
PRVF-4187 : OLR file check failed on the following nodes:
    grac41
    grac41:PRVF-4127 : Unable to obtain OLR location
/u01/app/11204/grid/bin/ocrcheck -config -local
<CV_CMD>/u01/app/11204/grid/bin/ocrcheck -config -local </CV_CMD><CV_VAL>2014-05-17 18:26:45.202: 
CLSD: A file system error occurred while attempting to create default permissions for file 
"/u01/app/11204/grid/log/grac41/alertgrac41.log" during alert open processing for process "client". 
Additional diagnostics: LFI-00133: Trying to create file /u01/app/11204/grid/log/grac41/alertgrac41.log 
that already exists.
LFI-01517: open() failed(OSD return value = 13).

2014-05-17 18:26:45.202: 
CLSD: An error was encountered while attempting to open alert log 
"/u01/app/11204/grid/log/grac41/alertgrac41.log". Additional diagnostics: (:CLSD00155:)
2014-05-17 18:26:45.202: 
CLSD: Alert logging terminated for process client. File name: "/u01/app/11204/grid/log/grac41/alertgrac41.log"
2014-05-17 18:26:45.202: 
CLSD: A file system error occurred while attempting to create default permissions for file 
"/u01/app/11204/grid/log/grac41/client/ocrcheck_7617.log" during log open processing for process "client". 
Additional diagnostics: LFI-00133: Trying to create file /u01/app/11204/grid/log/grac41/client/ocrcheck_7617.log 
that already exists.
LFI-01517: open() failed(OSD return value = 13).

2014-05-17 18:26:45.202: 
CLSD: An error was encountered while attempting to open log file 
"/u01/app/11204/grid/log/grac41/client/ocrcheck_7617.log". 
Additional diagnostics: (:CLSD00153:)
2014-05-17 18:26:45.202: 
CLSD: Logging terminated for process client. File name: "/u01/app/11204/grid/log/grac41/client/ocrcheck_7617.log"
Oracle Local Registry configuration is :
     Device/File Name         : /u01/app/11204/grid/cdata/grac41.olr
</CV_VAL><CV_VRES>0</CV_VRES><CV_LOG>Exectask: runexe was successful</CV_LOG><CV_ERES>0</CV_ERES>
OLR integrity check failed
Verification of OLR integrity was unsuccessful.

OCSSD.BIN does not start

Case #1 : GPnP profile is not accessible - gpnpd needs to be fully up to serve the profile

Grep option to search traces :
$ fn_egrep.sh "Cannot get GPnP profile|Error put-profile CALL" 
TraceFileName: ./grac41/agent/ohasd/orarootagent_root/orarootagent_root.log
2014-05-20 10:26:44.532: [ default][1199552256]Cannot get GPnP profile. 
                          Error CLSGPNP_NO_DAEMON (GPNPD daemon is not running). 
Cannot get GPnP profile
2014-04-21 15:27:06.838: [    GPNP][132114176]clsgpnp_profileCallUrlInt: [at clsgpnp.c:2243] 
                         Result: (13) CLSGPNP_NO_DAEMON. 
                         Error put-profile CALL to remote "tcp://grac41:56376" 
                         disco "mdns:service:gpnp._tcp.local.://grac41:56376/agent=gpnpd,cname=grac4,host=grac41,
                         pid=4548/gpnpd h:grac41 c:grac4"
The above problem was related to a Name Server problem
==> For further details see GENERIC Networking chapter

Case #2 : Voting Disk is not accessible 
In 11gR2, ocssd.bin discovers the voting disks using settings from the GPnP profile; if not enough voting disks can be identified, 
ocssd.bin will abort itself.

Reported error in ocssd.log               : clssnmvDiskVerify: Successful discovery of 0 disks
Reported Clusterware Error in CW alert.log: CRS-1714:Unable to discover any voting files, retrying discovery in 15 seconds     

Testing scenario :
[grid@grac41 ~]$ chmod 000 /dev/asmdisk1_udev_sdf1
[grid@grac41 ~]$ chmod 000 /dev/asmdisk1_udev_sdg1
[grid@grac41 ~]$ chmod 000 /dev/asmdisk1_udev_sdh1
[grid@grac41 ~]$ ls -l  /dev/asmdisk1_udev_sdf1 /dev/asmdisk1_udev_sdg1 /dev/asmdisk1_udev_sdh1
b---------. 1 grid asmadmin 8,  81 May 14 09:51 /dev/asmdisk1_udev_sdf1
b---------. 1 grid asmadmin 8,  97 May 14 09:51 /dev/asmdisk1_udev_sdg1
b---------. 1 grid asmadmin 8, 113 May 14 09:51 /dev/asmdisk1_udev_sdh1
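The broken permissions above are easy to spot mechanically. A sketch; the device paths and the expected octal mode 660 are assumptions taken from this environment's udev rules:

```shell
#!/bin/sh
# check_asm_disk PATH -- compare a candidate ASM device's octal mode
# against the 660 expected from the udev rules in this setup
# (GNU stat assumed).
check_asm_disk() {
    mode=$(stat -c '%a' "$1" 2>/dev/null) || { echo "MISSING: $1"; return 1; }
    if [ "$mode" = "660" ]; then
        echo "OK: $1"
    else
        echo "BAD: $1 (mode $mode)"
        return 1
    fi
}
# e.g.: for d in /dev/asmdisk1_udev_sd[fgh]1; do check_asm_disk "$d"; done
```

A BAD line on all three voting-disk devices matches the "Successful discovery of 0 disks" symptom shown below.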

Clusterware status :
NAME                           TARGET     STATE           SERVER       STATE_DETAILS   
-------------------------      ---------- ----------      ------------ ------------------
ora.asm                        ONLINE     OFFLINE                      Instance Shutdown 
ora.cluster_interconnect.haip  ONLINE     OFFLINE                        
ora.crf                        ONLINE     ONLINE          grac41         
ora.crsd                       ONLINE     OFFLINE                        
ora.cssd                       ONLINE     OFFLINE         STARTING       
ora.cssdmonitor                ONLINE     ONLINE          grac41         
ora.ctssd                      ONLINE     OFFLINE                        
ora.diskmon                    OFFLINE    OFFLINE                        
ora.drivers.acfs               ONLINE     OFFLINE                        
ora.evmd                       ONLINE     OFFLINE                        
ora.gipcd                      ONLINE     ONLINE          grac41         
ora.gpnpd                      ONLINE     ONLINE          grac41         
ora.mdnsd                      ONLINE     ONLINE          grac41 
--> cssd stays in STARTING mode for a long time before switching to OFFLINE


#  ps -elf | egrep "PID|d.bin|ohas|oraagent|orarootagent|cssdagent|cssdmonitor" | grep -v grep
F S UID        PID  PPID  C PRI  NI ADDR SZ WCHAN  STIME TTY          TIME CMD
4 S root      6098     1  0  80   0 -  2846 pipe_w 04:44 ?        00:00:00 /bin/sh /etc/init.d/init.ohasd run
4 R root     31696     1  4  80   0 - 179039 -     07:59 ?        00:00:06 /u01/app/11204/grid/bin/ohasd.bin reboot
4 S grid     31825     1  0  80   0 - 169311 futex_ 07:59 ?       00:00:00 /u01/app/11204/grid/bin/oraagent.bin
0 S grid     31836     1  0  80   0 - 74289 poll_s 07:59 ?        00:00:00 /u01/app/11204/grid/bin/mdnsd.bin
0 S grid     31847     1  0  80   0 - 127382 hrtime 07:59 ?       00:00:00 /u01/app/11204/grid/bin/gpnpd.bin
0 S grid     31859     1  2  80   0 - 159711 hrtime 07:59 ?       00:00:03 /u01/app/11204/grid/bin/gipcd.bin
4 S root     31861     1  0  80   0 - 165832 futex_ 07:59 ?       00:00:00 /u01/app/11204/grid/bin/orarootagent.bin
4 S root     31875     1  5 -40   - - 160907 hrtime 07:59 ?       00:00:08 /u01/app/11204/grid/bin/osysmond.bin
4 S root     31934     1  0 -40   - - 162468 futex_ 07:59 ?       00:00:00 /u01/app/11204/grid/bin/cssdmonitor
4 S root     31953     1  0 -40   - - 161056 futex_ 07:59 ?       00:00:00 /u01/app/11204/grid/bin/cssdagent
4 S grid     31965     1  0 -40   - - 109118 futex_ 07:59 ?       00:00:00 /u01/app/11204/grid/bin/ocssd.bin 
4 S root     32201     1  0 -40   - - 161632 poll_s 07:59 ?       00:00:01 /u01/app/11204/grid/bin/ologgerd -M -d /u01/app/11204/grid/crf/db/grac41

Quick Tracefile Review using grep 
$ fn_egrep.sh "Successful discovery"
Working case: 
TraceFileName: ./grac41/cssd/ocssd.log
2014-05-22 11:46:57.229: [    CSSD][201324288]clssnmvDiskVerify: Successful discovery for disk /dev/asmdisk1_udev_sdg1, 
                                    UID 88c2a08b-4c8c4f85-bf0109e0-990388e4, Pending CIN 0:1399993206:1, Committed CIN 0:1399993206:1
2014-05-22 11:46:57.230: [    CSSD][201324288]clssnmvDiskVerify: Successful discovery for disk /dev/asmdisk1_udev_sdf1, 
                                    UID b0e94e5d-83054fe9-bf58b6b9-8bfacd65, Pending CIN 0:1399993206:1, Committed CIN 0:1399993206:1
2014-05-22 11:46:57.230: [    CSSD][201324288]clssnmvDiskVerify: Successful discovery for disk /dev/asmdisk1_udev_sdh1, 
                                    UID 2121ff6e-acab4f49-bf01195f-a0a3e00b, Pending CIN 0:1399993206:1, Committed CIN 0:1399993206:1
2014-05-22 11:46:57.231: [    CSSD][201324288]clssnmvDiskVerify: Successful discovery of 3 disks

Failed case: 
2014-05-22 13:41:38.776: [    CSSD][1839290112]clssnmvDiskVerify: Successful discovery of 0 disks
2014-05-22 13:41:53.803: [    CSSD][1839290112]clssnmvDiskVerify: Successful discovery of 0 disks
2014-05-22 13:42:08.851: [    CSSD][1839290112]clssnmvDiskVerify: Successful discovery of 0 disks
--> disk rediscovery is restarted every 15 seconds in case of errors

Tracefile Details:
[grid@grac41 grac41]$ cd $GRID_HOME/log/grac41 ; get_ca.sh alertgrac41.log "2014-05-24"
2014-05-24 07:59:27.167:  [cssd(31965)]CRS-1714:Unable to discover any voting files, retrying discovery in 15 seconds; 
                          Details at (:CSSNM00070:) in /u01/app/11204/grid/log/grac41/cssd/ocssd.log 
2014-05-24 07:59:42.197:  [cssd(31965)]CRS-1714:Unable to discover any voting files, retrying discovery in 15 seconds; 
                          Details at (:CSSNM00070:) in /u01/app/11204/grid/log/grac41/cssd/ocssd.log 

grac41/cssd/ocssd.log
2014-05-24 08:10:14.145: [   SKGFD][268433152]Lib :UFS:: closing handle 0x7fa3041525a0 for disk :/dev/asmdisk1_udev_sdb1:
2014-05-24 08:10:14.145: [   SKGFD][268433152]Lib :UFS:: closing handle 0x7fa304153050 for disk :/dev/asmdisk8_ssd3:
2014-05-24 08:10:14.145: [   SKGFD][268433152]Lib :UFS:: closing handle 0x7fa304153c70 for disk :/dev/oracleasm/disks/ASMLIB_DISK1:
2014-05-24 08:10:14.145: [   SKGFD][268433152]Lib :UFS:: closing handle 0x7fa304154ac0 for disk :/dev/oracleasm/disks/ASMLIB_DISK2:
2014-05-24 08:10:14.145: [   SKGFD][268433152]Lib :UFS:: closing handle 0x7fa304155570 for disk :/dev/oracleasm/disks/ASMLIB_DISK3:
2014-05-24 08:10:14.145: [    CSSD][268433152]clssnmvDiskVerify: Successful discovery of 0 disks
2014-05-24 08:10:14.145: [    CSSD][268433152]clssnmvDiskVerify: exit
2014-05-24 08:10:14.145: [    CSSD][268433152]clssnmCompleteInitVFDiscovery: Completing initial voting file discovery
2014-05-24 08:10:14.145: [    CSSD][268433152]clssnmvFindInitialConfigs: No voting files found

Debugging steps with cluvfy 
Note you must run cluvfy from a node which is up and running, as we need ASM to retrieve the Voting Disk location.
Here we are running cluvfy from grac42 to test the voting disks on grac41
[grid@grac42 ~]$ cluvfy comp vdisk -n grac41
Verifying Voting Disk: 
Checking Oracle Cluster Voting Disk configuration...
ERROR: 
PRVF-4194 : Asm is not running on any of the nodes. Verification cannot proceed.
ERROR: 
PRVF-5157 : Could not verify ASM group "OCR" for Voting Disk location "/dev/asmdisk1_udev_sdf1"
ERROR: 
PRVF-5157 : Could not verify ASM group "OCR" for Voting Disk location "/dev/asmdisk1_udev_sdg1"
ERROR: 
PRVF-5157 : Could not verify ASM group "OCR" for Voting Disk location "/dev/asmdisk1_udev_sdh1"
PRVF-5431 : Oracle Cluster Voting Disk configuration check failed
UDev attributes check for Voting Disk locations started...
UDev attributes check passed for Voting Disk locations 
Verification of Voting Disk was unsuccessful on all the specified nodes. 

Verify disk permissions by using kfed and ls 
[grid@grac41 ~/cluvfy]$ ls -l /dev/asmdisk1_udev_sdf1 /dev/asmdisk1_udev_sdg1 /dev/asmdisk1_udev_sdh1
b---------. 1 grid asmadmin 8,  81 May 14 09:51 /dev/asmdisk1_udev_sdf1
b---------. 1 grid asmadmin 8,  97 May 14 09:51 /dev/asmdisk1_udev_sdg1
b---------. 1 grid asmadmin 8, 113 May 14 09:51 /dev/asmdisk1_udev_sdh1

[grid@grac41 ~/cluvfy]$ kfed read  /dev/asmdisk1_udev_sdf1
KFED-00303: unable to open file '/dev/asmdisk1_udev_sdf

Case #3 : GENERIC NETWORK problems 
==> For further details see GENERIC Networking chapter

ASM instance does not start / EVMD.BIN in State INTERMEDIATE

Reported error in oraagent_grid.log       : CRS-5017: The resource action "ora.asm start" encountered the following error: 
                                            ORA-12546: TNS:permission denied
Reported Clusterware Error in CW alert.log: [/u01/app/11204/grid/bin/oraagent.bin(6784)]CRS-5011:Check of resource "+ASM" failed: 
                                            [ohasd(6536)]CRS-2807:Resource 'ora.asm' failed to start automatically. 
Reported Error in ASM alert log           : ORA-07274: spdcr: access error, access to oracle denied.
                                            Linux-x86_64 Error: 13: Permission denied
                                            PSP0 (ospid: 3582): terminating the instance due to error 7274
...
Testing scenario :
[grid@grac41 grac41]$ cd $GRID_HOME/bin
[grid@grac41 bin]$ chmod 444 oracle
[grid@grac41 bin]$ ls -l oracle
-r--r--r--. 1 grid oinstall 209950863 May  4 10:26 oracle
# crsctl start crs
CRS-4123: Oracle High Availability Services has been started.

Clusterware status :
[grid@grac41 bin]$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4529: Cluster Synchronization Services is online
CRS-4534: Cannot communicate with Event Manager

[grid@grac41 bin]$ crsi
NAME                           TARGET     STATE           SERVER       STATE_DETAILS   
-------------------------      ---------- ----------      ------------ ------------------
ora.asm                        ONLINE     OFFLINE                        
ora.cluster_interconnect.haip  ONLINE     ONLINE          grac41         
ora.crf                        ONLINE     ONLINE          grac41         
ora.crsd                       ONLINE     OFFLINE                        
ora.cssd                       ONLINE     ONLINE          grac41         
ora.cssdmonitor                ONLINE     ONLINE          grac41         
ora.ctssd                      ONLINE     ONLINE          grac41       OBSERVER  
ora.diskmon                    OFFLINE    OFFLINE                        
ora.drivers.acfs               ONLINE     ONLINE          grac41         
ora.evmd                       ONLINE     INTERMEDIATE    grac41         
ora.gipcd                      ONLINE     ONLINE          grac41         
ora.gpnpd                      ONLINE     ONLINE          grac41         
ora.mdnsd                      ONLINE     ONLINE          grac41         

[grid@grac41 bin]$ ps -elf | egrep "PID|d.bin|ohas|oraagent|orarootagent|cssdagent|cssdmonitor" | grep -v grep
F S UID        PID  PPID  C PRI  NI ADDR SZ WCHAN  STIME TTY          TIME CMD
4 S root      3408 28369  0  80   0 - 30896 poll_s 10:03 pts/3    00:00:00 view ./agent/crsd/oraagent_grid/oraagent_grid.log
4 S root      6098     1  0  80   0 -  2846 pipe_w May24 ?        00:00:01 /bin/sh /etc/init.d/init.ohasd run
4 S root      6536     1  2  80   0 - 179140 futex_ 13:51 ?       00:01:11 /u01/app/11204/grid/bin/ohasd.bin reboot
4 S grid      6784     1  0  80   0 - 173902 futex_ 13:52 ?       00:00:05 /u01/app/11204/grid/bin/oraagent.bin
0 S grid      6795     1  0  80   0 - 74289 poll_s 13:52 ?        00:00:00 /u01/app/11204/grid/bin/mdnsd.bin
0 S grid      6806     1  0  80   0 - 127382 hrtime 13:52 ?       00:00:04 /u01/app/11204/grid/bin/gpnpd.bin
0 S grid      6823     1  1  80   0 - 159711 hrtime 13:52 ?       00:00:39 /u01/app/11204/grid/bin/gipcd.bin
4 S root      6825     1  0  80   0 - 168698 futex_ 13:52 ?       00:00:24 /u01/app/11204/grid/bin/orarootagent.bin
4 S root      6840     1  4 -40   - - 160907 hrtime 13:52 ?       00:02:29 /u01/app/11204/grid/bin/osysmond.bin
4 S root      6851     1  0 -40   - - 162793 futex_ 13:52 ?       00:00:07 /u01/app/11204/grid/bin/cssdmonitor
4 S root      6870     1  0 -40   - - 162920 futex_ 13:52 ?       00:00:07 /u01/app/11204/grid/bin/cssdagent
4 S grid      6881     1  2 -40   - - 166593 futex_ 13:52 ?       00:01:24 /u01/app/11204/grid/bin/ocssd.bin 
4 S root      7256     1  6 -40   - - 178527 poll_s 13:52 ?       00:03:28 /u01/app/11204/grid/bin/ologgerd -M -d ..
4 S root      7847     1  0  80   0 - 159388 futex_ 13:52 ?       00:00:29 /u01/app/11204/grid/bin/octssd.bin reboot
0 S grid      7875     1  0  80   0 - 76018 hrtime 13:52 ?        00:00:05 /u01/app/11204/grid/bin/evmd.bin

Tracefile Details:
CW alert.log
[/u01/app/11204/grid/bin/oraagent.bin(6784)]CRS-5011:Check of resource "+ASM" failed: 
          details at "(:CLSN00006:)" in "../agent/ohasd/oraagent_grid/oraagent_grid.log"
[/u01/app/11204/grid/bin/oraagent.bin(6784)]CRS-5011:Check of resource "+ASM" failed: 
          details at "(:CLSN00006:)" in "../agent/ohasd/oraagent_grid/oraagent_grid.log"
2014-05-25 13:55:04.626:  [ohasd(6536)]CRS-2807:Resource 'ora.asm' failed to start automatically.
2014-05-25 13:55:04.626:  [ohasd(6536)]CRS-2807:Resource 'ora.crsd' failed to start automatically.

ASM alert log : ./log/diag/asm/+asm/+ASM1/trace/alert_+ASM1.log
Sun May 25 13:50:50 2014
Errors in file /u01/app/11204/grid/log/diag/asm/+asm/+ASM1/trace/+ASM1_psp0_3582.trc:
ORA-07274: spdcr: access error, access to oracle denied.
Linux-x86_64 Error: 13: Permission denied
PSP0 (ospid: 3582): terminating the instance due to error 7274
Sun May 25 13:50:50 2014
System state dump requested by (instance=1, osid=3591 (DIAG)), summary=[abnormal instance termination].
System State dumped to trace file /u01/app/11204/grid/log/diag/asm/+asm/+ASM1/trace/+ASM1_diag_3591_20140525135050.trc
Dumping diagnostic data in directory=[cdmp_20140525135050], requested by (instance=1, osid=3591 (DIAG)), 
   summary=[abnormal instance termination].
Instance terminated by PSP0, pid = 3582

agent/ohasd/oraagent_grid/oraagent_grid.log
2014-05-25 13:54:54.226: [    AGFW][3273144064]{0:0:2} ora.asm 1 1 state changed from: STARTING to: OFFLINE
2014-05-25 13:54:54.229: [    AGFW][3273144064]{0:0:2} Agent sending last reply for: RESOURCE_START[ora.asm 1 1] ID 4098:624
2014-05-25 13:54:54.239: [    AGFW][3273144064]{0:0:2} Agent received the message: RESOURCE_CLEAN[ora.asm 1 1] ID 4100:644
2014-05-25 13:54:54.239: [    AGFW][3273144064]{0:0:2} Preparing CLEAN command for: ora.asm 1 1
2014-05-25 13:54:54.239: [    AGFW][3273144064]{0:0:2} ora.asm 1 1 state changed from: OFFLINE to: CLEANING

Debugging steps :
  Try to start the ASM instance manually 
  [grid@grac41 grid]$ sqlplus / as sysasm
  ERROR:
  ORA-12546: TNS:permission denied

Fix: 
  [grid@grac41 grid]$  chmod 6751  $GRID_HOME/bin/oracle
  [grid@grac41 grid]$  ls -l  $GRID_HOME/bin/oracle
   -rwsr-s--x. 1 grid oinstall 209950863 May  4 10:26 /u01/app/11204/grid/bin/oracle
  [grid@grac41 grid]$ sqlplus / as sysasm
  --> works now again
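A quick way to detect this class of problem is to compare the executable's mode against the expected 6751 (-rwsr-s--x); the path in the example call is assumed from this install:

```shell
#!/bin/sh
# check_oracle_mode PATH -- the oracle binary must carry its
# setuid/setgid bits (octal 6751), otherwise local connects fail
# with ORA-12546 as shown above (GNU stat assumed).
check_oracle_mode() {
    mode=$(stat -c '%a' "$1" 2>/dev/null) || { echo "MISSING: $1"; return 2; }
    if [ "$mode" = "6751" ]; then
        echo "OK: $1"
    else
        echo "WRONG: $1 has mode $mode, expected 6751"
        return 1
    fi
}
# e.g. check_oracle_mode "${GRID_HOME:-/u01/app/11204/grid}/bin/oracle"
```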

EVMD.BIN does not start : State INTERMEDIATE 
==> For further details see GENERIC Networking troubleshooting chapter

CRSD.BIN does not start

      • Note: in 11.2 ASM starts before crsd.bin, and brings up the diskgroup automatically if it contains the OCR.
      • Typical Problems   :
        • OCR not accessible
        • Networking problems ( includes firewall, Nameserver problems and Network related errors )     ==> For further details see GENERIC Networking troubleshooting chapter
        • Common File Protection problems ( Oracle executable, Log Files, IPC sockets )     ==> For further details see GENERIC File Protection troubleshooting chapter
  
Case #1: OCR not accessible - CRS-5019 error

Reported error in oraagent_grid.log       : [check] DgpAgent::queryDgStatus no data found in v$asm_diskgroup_stat
Reported Clusterware Error in CW alert.log: CRS-5019:All OCR locations are on ASM disk groups [OCR3], 
                                            and none of these disk groups are mounted. 
                                            Details are at "(:CLSN00100:)" in ".../ohasd/oraagent_grid/oraagent_grid.log".
Testing scenario :
In file  /etc/oracle/ocr.loc
Change entry  ocrconfig_loc=+OCR to ocrconfig_loc=+OCR3 
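The mis-edited entry can be read back mechanically; a sketch that extracts the configured OCR location from an ocr.loc-style key=value file (the key name matches the file shown above):

```shell
#!/bin/sh
# ocr_loc_dg FILE -- print the value of the ocrconfig_loc entry from
# an ocr.loc-style key=value file.
ocr_loc_dg() {
    awk -F= '/^ocrconfig_loc=/ {print $2}' "$1"
}
# e.g. ocr_loc_dg /etc/oracle/ocr.loc   # a healthy node in this setup prints +OCR
```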

Clusterware status :
# crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4529: Cluster Synchronization Services is online
CRS-4534: Cannot communicate with Event Manager

# crsi
NAME                           TARGET     STATE           SERVER       STATE_DETAILS   
-------------------------      ---------- ----------      ------------ ------------------
ora.asm                        ONLINE     INTERMEDIATE    grac41       OCR not started
ora.cluster_interconnect.haip  ONLINE     ONLINE          grac41         
ora.crf                        ONLINE     ONLINE          grac41         
ora.crsd                       ONLINE     OFFLINE                        
ora.cssd                       ONLINE     ONLINE          grac41         
ora.cssdmonitor                ONLINE     ONLINE          grac41         
ora.ctssd                      ONLINE     ONLINE          grac41       OBSERVER  
ora.diskmon                    OFFLINE    OFFLINE                        
ora.drivers.acfs               ONLINE     ONLINE          grac41         
ora.evmd                       ONLINE     INTERMEDIATE    grac41         
ora.gipcd                      ONLINE     ONLINE          grac41         
ora.gpnpd                      ONLINE     ONLINE          grac41         
ora.mdnsd                      ONLINE     ONLINE          grac41         
--> Both resources ora.evmd and ora.asm report state INTERMEDIATE - CRSD/EVMD doesn't come up

# ps -elf | egrep "PID|d.bin|ohas|oraagent|orarootagent|cssdagent|cssdmonitor" | grep -v grep
F S UID        PID  PPID  C PRI  NI ADDR SZ WCHAN  STIME TTY          TIME CMD
4 S root      6098     1  0  80   0 -  2846 pipe_w 04:44 ?        00:00:00 /bin/sh /etc/init.d/init.ohasd run
4 S root     10132     1  1  80   0 - 179077 futex_ 10:09 ?       00:00:07 /u01/app/11204/grid/bin/ohasd.bin reboot
4 S grid     10295     1  0  80   0 - 175970 futex_ 10:09 ?       00:00:03 /u01/app/11204/grid/bin/oraagent.bin
0 S grid     10306     1  0  80   0 - 74289 poll_s 10:09 ?        00:00:00 /u01/app/11204/grid/bin/mdnsd.bin
0 S grid     10317     1  0  80   0 - 127382 hrtime 10:09 ?       00:00:01 /u01/app/11204/grid/bin/gpnpd.bin
0 S grid     10328     1  1  80   0 - 159711 hrtime 10:09 ?       00:00:04 /u01/app/11204/grid/bin/gipcd.bin
4 S root     10330     1  1  80   0 - 168735 futex_ 10:09 ?       00:00:04 /u01/app/11204/grid/bin/orarootagent.bin
4 S root     10344     1  6 -40   - - 160907 hrtime 10:09 ?       00:00:25 /u01/app/11204/grid/bin/osysmond.bin
4 S root     10355     1  0 -40   - - 162793 futex_ 10:09 ?       00:00:01 /u01/app/11204/grid/bin/cssdmonitor
4 S root     10374     1  0 -40   - - 162921 futex_ 10:09 ?       00:00:01 /u01/app/11204/grid/bin/cssdagent
4 S grid     10385     1  4 -40   - - 166593 futex_ 10:09 ?       00:00:17 /u01/app/11204/grid/bin/ocssd.bin 
4 S root     10733     1  0 -40   - - 127270 poll_s 10:09 ?       00:00:03 /u01/app/11204/grid/bin/ologgerd 
4 S root     11286     1  0  80   0 - 159388 futex_ 10:09 ?       00:00:03 /u01/app/11204/grid/bin/octssd.bin reboot
0 S grid     11307     1  0  80   0 - 75351 hrtime 10:09 ?        00:00:00 /u01/app/11204/grid/bin/evmd.bin

Tracefile Details :
alertgrac41.log:
2014-05-24 10:13:48.085: 
[/u01/app/11204/grid/bin/oraagent.bin(10295)]CRS-5019:All OCR locations are on ASM disk groups [OCR3], 
                                                      and none of these disk groups are mounted. 
              Details are at "(:CLSN00100:)"   in ".../agent/ohasd/oraagent_grid/oraagent_grid.log".
2014-05-24 10:14:18.144: 
[/u01/app/11204/grid/bin/oraagent.bin(10295)]CRS-5019:All OCR locations are on ASM disk groups [OCR3], 
                                                      and none of these disk groups are mounted. 
           Details are at "(:CLSN00100:)" in ".../agent/ohasd/oraagent_grid/oraagent_grid.log".

ohasd/oraagent_grid/oraagent_grid.log
2014-05-24 10:10:47.848: [ora.asm][4179105536]{0:0:2} [check] AsmAgent::check ocrCheck 1 m_OcrOnline 0 m_OcrTimer 30
2014-05-24 10:10:47.848: [ora.asm][4179105536]{0:0:2} [check] DgpAgent::initOcrDgpSet { entry
2014-05-24 10:10:47.848: [ora.asm][4179105536]{0:0:2} [check] DgpAgent::initOcrDgpSet procr_get_conf: retval [0] 
                                                                                      configured [1] local only [0] error buffer []
2014-05-24 10:10:47.848: [ora.asm][4179105536]{0:0:2} [check] DgpAgent::initOcrDgpSet procr_get_conf: OCR loc [0], Disk Group : [+OCR3]
2014-05-24 10:10:47.848: [ora.asm][4179105536]{0:0:2} [check] DgpAgent::initOcrDgpSet m_ocrDgpSet 02d965f8 dgName OCR3
2014-05-24 10:10:47.848: [ora.asm][4179105536]{0:0:2} [check] DgpAgent::initOcrDgpSet ocrret 0 found 1
2014-05-24 10:10:47.848: [ora.asm][4179105536]{0:0:2} [check] DgpAgent::initOcrDgpSet ocrDgpSet OCR3 
2014-05-24 10:10:47.848: [ora.asm][4179105536]{0:0:2} [check] DgpAgent::initOcrDgpSet exit }
2014-05-24 10:10:47.848: [ora.asm][4179105536]{0:0:2} [check] DgpAgent::ocrDgCheck Entry {
2014-05-24 10:10:47.848: [ora.asm][4179105536]{0:0:2} [check] DgpAgent::getConnxn connected
2014-05-24 10:10:47.850: [ora.asm][4179105536]{0:0:2} [check] DgpAgent::queryDgStatus excp no data found
2014-05-24 10:10:47.850: [ora.asm][4179105536]{0:0:2} [check] DgpAgent::queryDgStatus no data found in v$asm_diskgroup_stat
2014-05-24 10:10:47.851: [ora.asm][4179105536]{0:0:2} [check] DgpAgent::queryDgStatus dgName OCR3 ret 1
2014-05-24 10:10:47.851: [ora.asm][4179105536]{0:0:2} [check] (:CLSN00100:)DgpAgent::ocrDgCheck OCR dgName OCR3 state 1
2014-05-24 10:10:47.851: [ora.asm][4179105536]{0:0:2} [check] AsmAgent::check ocrCheck 2 m_OcrOnline 0 m_OcrTimer 31
2014-05-24 10:10:47.851: [ora.asm][4179105536]{0:0:2} [check] CrsCmd::ClscrsCmdData::stat entity 5 statflag 32 useFilter 1
2014-05-24 10:10:48.053: [ COMMCRS][3665618688]clsc_connect: (0x7f77d4101f30) no listener at (ADDRESS=(PROTOCOL=IPC)(KEY=CRSD_UI_SOCKET))

Debugging steps
Identify OCR  ASM DG on a running instance with asmcmd
[grid@grac42 ~]$  asmcmd lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB    Offline_disks  Voting_files  Name
MOUNTED  NORMAL  N         512   4096  1048576      3057                0             N  ASMLIB_DG/
MOUNTED  NORMAL  N         512   4096  1048576     30708                0             N  DATA/
MOUNTED  EXTERN  N         512   4096  1048576     40946                0             N  FRA/
MOUNTED  NORMAL  N         512   4096  1048576      6141                0             Y  OCR/
MOUNTED  NORMAL  N         512   4096  1048576      3057                0             N  SSD/
--> diskgroup OCR is serving our voting disks ( and not DG +OCR3 )

Check available disks:
[grid@grac42 ~]$ asmcmd lsdsk -k
Total_MB  Free_MB  OS_MB  Name            Failgroup       Failgroup_Type  Library   Path
   10236     2809  10236  DATA_0001       DATA_0001       REGULAR         System    /dev/asmdisk1_udev_sdc1
   10236     2804  10236  DATA_0002       DATA_0002       REGULAR         System    /dev/asmdisk1_udev_sdd1
   10236     2804  10236  DATA_0003       DATA_0003       REGULAR         System    /dev/asmdisk1_udev_sde1
    2047     1696   2047  OCR_0000        OCR_0000        REGULAR         System    /dev/asmdisk1_udev_sdf1
    2047     1696   2047  OCR_0001        OCR_0001        REGULAR         System    /dev/asmdisk1_udev_sdg1
    2047     1697   2047  OCR_0002        OCR_0002        REGULAR         System    /dev/asmdisk1_udev_sdh1
    1019      875   1019  SSD_0000        SSD_0000        REGULAR         System    /dev/asmdisk8_ssd1
    1019      875   1019  SSD_0001        SSD_0001        REGULAR         System    /dev/asmdisk8_ssd2
    1019      875   1019  SSD_0002        SSD_0002        REGULAR         System    /dev/asmdisk8_ssd3
   20473    10467  20473  FRA_0001        FRA_0001        REGULAR         System    /dev/asmdisk_fra1
   20473    10461  20473  FRA_0002        FRA_0002        REGULAR         System    /dev/asmdisk_fra2
    1019      882   1019  ASMLIB_DG_0000  ASMLIB_DG_0000  REGULAR         System    /dev/oracleasm/disks/ASMLIB_DISK1
    1019      882   1019  ASMLIB_DG_0001  ASMLIB_DG_0001  REGULAR         System    /dev/oracleasm/disks/ASMLIB_DISK2
    1019      882   1019  ASMLIB_DG_0002  ASMLIB_DG_0002  REGULAR         System    /dev/oracleasm/disks/ASMLIB_DISK3
-->  /dev/asmdisk1_udev_sdf1,  /dev/asmdisk1_udev_sdg1 , /dev/asmdisk1_udev_sdh1 are 
     our disks serving the voting files

Verify the voting disk location on a working system
[grid@grac42 ~]$  kfed read  /dev/asmdisk1_udev_sdf1  | egrep 'vf|name'
kfdhdb.dskname:                OCR_0000 ; 0x028: length=8
kfdhdb.grpname:                     OCR ; 0x048: length=3
kfdhdb.fgname:                 OCR_0000 ; 0x068: length=8
kfdhdb.capname:                         ; 0x088: length=0
kfdhdb.vfstart:                     320 ; 0x0ec: 0x00000140
kfdhdb.vfend:                       352 ; 0x0f0: 0x00000160
Note: If the markers vfstart and vfend are not 0, then the disk does contain a voting disk!
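The vfstart/vfend check can also be scripted. A minimal sketch, assuming the kfed output of each disk has been captured to a file first (file paths here are illustrative):

```shell
#!/bin/sh
# Decide from captured "kfed read <disk>" output whether the disk carries
# voting files: a non-zero kfdhdb.vfstart or kfdhdb.vfend marker means yes.
has_voting_files() {
  # $1 = file containing the kfed output of one disk
  start=$(awk '/kfdhdb.vfstart/ {print $2}' "$1")
  end=$(awk '/kfdhdb.vfend/ {print $2}' "$1")
  [ "${start:-0}" -ne 0 ] || [ "${end:-0}" -ne 0 ]
}
```

Usage: capture the header first, then test it, e.g. kfed read /dev/asmdisk1_udev_sdf1 > /tmp/sdf1.kfed followed by has_voting_files /tmp/sdf1.kfed && echo "disk carries voting files".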

Check OCR with ocrcheck
[grid@grac41 ~]$ ocrcheck
PROT-602: Failed to retrieve data from the cluster registry
PROC-26: Error while accessing the physical storage

Tracing ocrcheck              
[grid@grac41 ~]$ strace -f -o ocrcheck.trc ocrcheck
[grid@grac41 ~]$ grep open ocrcheck.trc  | grep ocr.loc
17530 open("/etc/oracle/ocr.loc", O_RDONLY) = 6
--> ocrcheck reads /etc/oracle/ocr.loc
[grid@grac41 ~]$ cat /etc/oracle/ocr.loc
ocrconfig_loc=+OCR3
local_only=false
==> +OCR3 is a wrong entry 
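This kind of ocr.loc mismatch can be detected mechanically. A hedged sketch that compares the ocrconfig_loc diskgroup against a captured asmcmd lsdg listing; the capture file names are made up for illustration:

```shell
#!/bin/sh
# Return success only when the diskgroup named in ocr.loc (ocrconfig_loc=+DG)
# appears as a MOUNTED diskgroup in the captured "asmcmd lsdg" output.
ocr_loc_matches_mounted_dg() {
  # $1 = copy of /etc/oracle/ocr.loc, $2 = captured "asmcmd lsdg" output
  dg=$(sed -n 's/^ocrconfig_loc=+\(.*\)/\1/p' "$1")
  [ -n "$dg" ] && grep -q "^MOUNTED.* ${dg}/" "$2"
}
```

Usage: asmcmd lsdg > /tmp/lsdg.out, then ocr_loc_matches_mounted_dg /etc/oracle/ocr.loc /tmp/lsdg.out || echo "ocr.loc points to a diskgroup that is not mounted".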

Verify clusterware state with cluvfy
[grid@grac42 ~]$ cluvfy comp ocr -n grac42,grac41
Verifying OCR integrity 
Checking OCR integrity...
Checking the absence of a non-clustered configuration...
All nodes free of non-clustered, local-only configurations
ERROR: 
PRVF-4193 : Asm is not running on the following nodes. 
            Proceeding with the remaining nodes.
    grac41
Checking OCR config file "/etc/oracle/ocr.loc"...
OCR config file "/etc/oracle/ocr.loc" check successful
ERROR: 
PRVF-4195 : Disk group for ocr location "+OCR" not available on the following nodes:
    grac41
NOTE: 
This check does not verify the integrity of the OCR contents. 
Execute 'ocrcheck' as a privileged user to verify the contents of OCR.
OCR integrity check failed
Verification of OCR integrity was unsuccessful. 
Checks did not pass for the following node(s):
    grac41

Some agents of the OHASD stack, like mdnsd.bin, gpnpd.bin or gipcd.bin, are not starting up

Case 1 : Wrong permissions on executable $GRID_HOME/bin/gpnpd.bin
    ==> For additional details see the GENERIC File Protection chapter   

Reported error in oraagent_grid.log         : [  clsdmc][1103787776]Fail to connect (ADDRESS=(PROTOCOL=ipc)
                                              (KEY=grac41DBG_GPNPD)) with status 9
                                              [ora.gpnpd][1103787776]{0:0:2} [start] Error = error 9 encountered 
                                              when connecting to GPNPD
Reported Clusterware Error in CW alert.log: [/u01/app/11204/grid/bin/oraagent.bin(20333)]
                                             CRS-5818:Aborted command 'start' for resource 'ora.gpnpd'. 
                                             Details at (:CRSAGF00113:) {0:0:2} in  ..... ohasd/oraagent_grid/oraagent_grid.log 
Testing scenario :
[grid@grac41 ~]$ chmod 444 $GRID_HOME/bin/gpnpd.bin
[grid@grac41 ~]$ ls -l $GRID_HOME/bin/gpnpd.bin
-r--r--r--. 1 grid oinstall 368780 Mar 19 17:07 /u01/app/11204/grid/bin/gpnpd.bin
[root@grac41 gpnp]# crsctl start crs
CRS-4123: Oracle High Availability Services has been started.

Clusterware status :
$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4530: Communications failure contacting Cluster Synchronization Services daemon
CRS-4534: Cannot communicate with Event Manager

$ crsi
NAME                           TARGET     STATE           SERVER       STATE_DETAILS   
-------------------------      ---------- ----------      ------------ ------------------
ora.asm                        ONLINE     OFFLINE                      Instance Shutdown 
ora.cluster_interconnect.haip  ONLINE     OFFLINE                        
ora.crf                        ONLINE     OFFLINE                        
ora.crsd                       ONLINE     OFFLINE                        
ora.cssd                       ONLINE     OFFLINE                        
ora.cssdmonitor                OFFLINE    OFFLINE                        
ora.ctssd                      ONLINE     OFFLINE                        
ora.diskmon                    OFFLINE    OFFLINE                        
ora.drivers.acfs               ONLINE     OFFLINE                        
ora.evmd                       ONLINE     OFFLINE                        
ora.gipcd                      ONLINE     OFFLINE                        
ora.gpnpd                      ONLINE     OFFLINE         STARTING       
ora.mdnsd                      ONLINE     ONLINE          grac41         
--> GPnP daemon remains in STARTING mode

$ ps -elf | egrep "PID|d.bin|ohas" | grep -v grep
F S UID        PID  PPID  C PRI  NI ADDR SZ WCHAN  STIME TTY          TIME CMD
4 S root      6098     1  0  80   0 -  2846 pipe_w 04:44 ?        00:00:00 /bin/sh /etc/init.d/init.ohasd run
4 S root     20127     1 23  80   0 - 176890 futex_ 11:52 ?       00:00:51 /u01/app/11204/grid/bin/ohasd.bin reboot
4 S grid     20333     1  0  80   0 - 166464 futex_ 11:52 ?       00:00:02 /u01/app/11204/grid/bin/oraagent.bin
0 S grid     20344     1  0  80   0 - 74289 poll_s 11:52 ?        00:00:00 /u01/app/11204/grid/bin/mdnsd.bin

Review Tracefile : 
alertgrac41.log
[/u01/app/11204/grid/bin/oraagent.bin(27632)]CRS-5818:Aborted command 'start' for resource 'ora.gpnpd'. 
       Details at (:CRSAGF00113:) {0:0:2} in  ... agent/ohasd/oraagent_grid/oraagent_grid.log.
2014-05-12 10:27:51.747: 
[ohasd(27477)]CRS-2757:Command 'Start' timed out waiting for response from the resource 'ora.gpnpd'. 
       Details at (:CRSPE00111:) {0:0:2} in /u01/app/11204/grid/log/grac41/ohasd/ohasd.log 

oraagent_grid.log
[  clsdmc][1103787776]Fail to connect (ADDRESS=(PROTOCOL=ipc)(KEY=grac41DBG_GPNPD)) with status 9
2014-05-12 10:27:17.476: [ora.gpnpd][1103787776]{0:0:2} [start] Error = error 9 encountered when connecting to GPNPD
2014-05-12 10:27:18.477: [ora.gpnpd][1103787776]{0:0:2} [start] without returnbuf
2014-05-12 10:27:18.659: [ COMMCRS][1125422848]clsc_connect: (0x7f3b300d92d0) no listener at (ADDRESS=(PROTOCOL=ipc)(KEY=grac41DBG_GPNPD))

ohasd.log 
2014-05-12 10:27:51.745: [    AGFW][2693531392]{0:0:2} Received the reply to the message: RESOURCE_START[ora.gpnpd 1 1] ID 4098:362 from 
                                                       the agent /u01/app/11204/grid/bin/oraagent_grid
2014-05-12 10:27:51.745: [    AGFW][2693531392]{0:0:2} Agfw Proxy Server sending the reply to PE for message:
                                                       RESOURCE_START[ora.gpnpd 1 1] ID 4098:361
2014-05-12 10:27:51.747: [   CRSPE][2212488960]{0:0:2} Received reply to action [Start] message ID: 361
2014-05-12 10:27:51.747: [    INIT][2212488960]{0:0:2} {0:0:2} Created alert : (:CRSPE00111:) :  Start action timed out!
2014-05-12 10:27:51.747: [   CRSPE][2212488960]{0:0:2} Start action failed with error code: 3
2014-05-12 10:27:52.123: [    AGFW][2693531392]{0:0:2} Received the reply to the message: RESOURCE_START[ora.gpnpd 1 1] ID 4098:362 from the 
                                                       agent /u01/app/11204/grid/bin/oraagent_grid
2014-05-12 10:27:52.123: [    AGFW][2693531392]{0:0:2} Agfw Proxy Server sending the last reply to PE for message:
                                                       RESOURCE_START[ora.gpnpd 1 1] ID 4098:361
2014-05-12 10:27:52.123: [   CRSPE][2212488960]{0:0:2} Received reply to action [Start] message ID: 361
2014-05-12 10:27:52.123: [   CRSPE][2212488960]{0:0:2} RI [ora.gpnpd 1 1] new internal state: [STABLE] old value: [STARTING]
2014-05-12 10:27:52.123: [   CRSPE][2212488960]{0:0:2} CRS-2674: Start of 'ora.gpnpd' on 'grac41' failed
--> Here we see that we are failing to start the GPnP resource

Debugging steps :
Is process gpnpd.bin running ?
[root@grac41 ~]# ps  -elf  | egrep "PID|gpnpd" | grep -v grep
F S UID        PID  PPID  C PRI  NI ADDR SZ WCHAN  STIME TTY          TIME CMD
--> Missing process gpnpd.bin 

Restart CRS with strace support
# crsctl stop crs -f
# strace -t -f -o crs_startup.trc crsctl start crs

Check for EACCES errors and check the return values of the execve() and access() system calls:
[root@grac41 oracle]# grep EACCES crs_startup.trc
28301 12:19:46 access("/u01/app/11204/grid/bin/gpnpd.bin", X_OK) = -1 EACCES (Permission denied)

Review crs_startup.trc in more detail 
27345 12:15:35 execve("/u01/app/11204/grid/bin/gpnpd.bin", ["/u01/app/11204/grid/bin/gpnpd.bi"...], [/* 73 vars */] <unfinished ...>
27238 12:15:35 <... lseek resumed> )    = 164864
27345 12:15:35 <... execve resumed> )   = -1 EACCES (Permission denied)

27345 12:15:35 access("/u01/app/11204/grid/bin/gpnpd.bin", X_OK <unfinished ...>
27238 12:15:35 <... read resumed> "\25\23\"\1\23\3\t\t\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 256) = 256
27345 12:15:35 <... access resumed> )   = -1 EACCES (Permission denied)

Verify problem with cluvfy
[grid@grac41 ~]$ cluvfy comp software -verbose 
Verifying software 
Check: Software
  Component: crs                      
  Node Name: grac41 
..
        Permissions of file "/u01/app/11204/grid/bin/gpnpd.bin" did not match the expected value. 
        [Expected = "0755" ; Found = "0444"]
    /u01/app/11204/grid/bin/gpnptool.bin..."Permissions" did not match reference
..
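The cluvfy finding above boils down to a mode comparison. A minimal sketch, assuming GNU stat on Linux; the expected mode 0755 is taken from the cluvfy output, and the gpnpd.bin path from the transcripts above:

```shell
#!/bin/sh
# Compare the actual octal mode of a file against the expected one and
# print a cluvfy-style mismatch line on failure.
check_mode() {
  # $1 = expected octal mode (e.g. 755), $2 = file to check
  actual=$(stat -c '%a' "$2") || return 2
  if [ "$actual" != "$1" ]; then
    echo "Permissions of file \"$2\" did not match: expected $1, found $actual"
    return 1
  fi
}
```

Usage: check_mode 755 /u01/app/11204/grid/bin/gpnpd.bin || chmod 755 /u01/app/11204/grid/bin/gpnpd.bin restores the testing scenario introduced above.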

References

Relocate OCR and Voting Disks to a different ASM diskgroup ( 11.2.0.4 )

Create a new diskgroup for OCR and Voting disk

Use asmca to create a new diskgroup named OCR and verify that this diskgroup is mounted
$ asmcmd lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  NORMAL  N         512   4096  1048576     40944    25698            10236            7731              0             Y  DATA/
MOUNTED  NORMAL  N         512   4096  1048576      6141     5730             2047            1841              0             N  OCR/
Attention:
  To avoid error CRS-4602: Failed 27 to add voting file .. when running $GRID_HOME/bin/crsctl replace votedisk, double check that
  the newly created diskgroup is mounted on all cluster instances by running 
$ asmcmd lsdg
$ asmcmd lsdsk 
on each instance.
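That pre-flight can be wrapped in a single check. A sketch, assuming the asmcmd lsdg output of every node has been saved to one capture file per node (the filenames are illustrative):

```shell
#!/bin/sh
# Succeed only when the given diskgroup shows up as MOUNTED in every
# per-node "asmcmd lsdg" capture passed on the command line.
dg_mounted_everywhere() {
  # $1 = diskgroup name; remaining args = per-node lsdg capture files
  dg=$1; shift
  for f in "$@"; do
    grep -q "^MOUNTED.* ${dg}/" "$f" || { echo "diskgroup +$dg not mounted according to $f"; return 1; }
  done
}
```

Usage: dg_mounted_everywhere OCR /tmp/lsdg_grac41.out /tmp/lsdg_grac42.out before running crsctl replace votedisk +OCR as root.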

Change VIP status from INTERMEDIATE state back to ONLINE state


Check current VIP status:
$  crsctl status resource ora.grac1.vip
NAME=ora.grac1.vip
TYPE=ora.cluster_vip_net1.type
TARGET=ONLINE
STATE=INTERMEDIATE on grac2

Stop the VIP resource:
$ crsctl stop resource ora.grac1.vip
CRS-2673: Attempting to stop 'ora.grac1.vip' on 'grac2'
CRS-2677: Stop of 'ora.grac1.vip' on 'grac2' succeeded

Start the VIP resource:
$ crsctl start resource ora.grac1.vip
CRS-2672: Attempting to start 'ora.grac1.vip' on 'grac1'
CRS-2676: Start of 'ora.grac1.vip' on 'grac1' succeeded

Verify VIP resource:
$  crsctl status resource ora.grac1.vip
NAME=ora.grac1.vip
TYPE=ora.cluster_vip_net1.type
TARGET=ONLINE
STATE=ONLINE on grac1
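The stop/start cycle only helps when the resource really is stuck in INTERMEDIATE. A small sketch that gates the restart on the reported state; it parses captured crsctl status resource output in the format shown above:

```shell
#!/bin/sh
# Return success when the captured "crsctl status resource <vip>" output
# reports STATE=INTERMEDIATE, i.e. the VIP needs a stop/start cycle.
vip_needs_restart() {
  # $1 = file with the "crsctl status resource" output of one VIP
  grep -q '^STATE=INTERMEDIATE' "$1"
}
```

Usage: crsctl status resource ora.grac1.vip > /tmp/vip.stat, then run the stop/start pair shown above only if vip_needs_restart /tmp/vip.stat succeeds.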