Install 12.2 Oracle Member Cluster in a Virtualbox env

This article only exists because I’m always getting support, fast feedback and motivation from

Anil Nair | Product Manager
Oracle Real Application Clusters (RAC)

Verify RHP Server, IO Server and MGMTDB status on our Domain Services Cluster

[grid@dsctw21 ~]$ srvctl status rhpserver
Rapid Home Provisioning Server is enabled
Rapid Home Provisioning Server is running on node dsctw21
[grid@dsctw21 ~]$  srvctl status  mgmtdb 
Database is enabled
Instance -MGMTDB is running on node dsctw21
[grid@dsctw21 ~]$ srvctl status ioserver
ASM I/O Server is running on dsctw21

  Prepare RHP Server

DNS requirements for HAVIP IP address 
[grid@dsctw21 ~]$  nslookup rhpserver
Server:        192.168.5.50
Address:    192.168.5.50#53

Name:    rhpserver.example.com
Address: 192.168.5.51

[grid@dsctw21 ~]$  nslookup  192.168.5.51
Server:        192.168.5.50
Address:    192.168.5.50#53

51.5.168.192.in-addr.arpa    name = rhpserver.example.com.

[grid@dsctw21 ~]$ ping rhpserver
PING rhpserver.example.com (192.168.5.51) 56(84) bytes of data.
From dsctw21.example.com (192.168.5.151) icmp_seq=1 Destination Host Unreachable
From dsctw21.example.com (192.168.5.151) icmp_seq=2 Destination Host Unreachable

-> nslookup works. Nobody should respond to our ping request as the HAVIP is not active YET 
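
For reference, the forward and reverse records behind these lookups could look like the following in a BIND-style zone file - a sketch only, assuming the DNS server at 192.168.5.50 uses plain zone files; adjust names and zones to your own setup:

; forward zone example.com
rhpserver      IN  A    192.168.5.51

; reverse zone 5.168.192.in-addr.arpa
51             IN  PTR  rhpserver.example.com.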

As user root, create a HAVIP  
[root@dsctw21 ~]#  srvctl add havip -id rhphavip -address rhpserver 

*****  Cluster Resources: *****
Resource NAME               INST   TARGET       STATE        SERVER          STATE_DETAILS
--------------------------- ----   ------------ ------------ --------------- -----------------------------------------
ora.rhphavip.havip             1   OFFLINE      OFFLINE      -               STABLE  

Create a Member Cluster Configuration Manifest

[grid@dsctw21 ~]$ crsctl create  -h
Usage:
  crsctl create policyset -file <filePath>
where 
     filePath        Policy set file to create.

  crsctl create member_cluster_configuration <member_cluster_name> -file <cluster_manifest_file>  -member_type <database|application>  [-version <member_cluster_version>] [-domain_services [asm_storage <local|direct|indirect>][<rhp>]]
  where 
     member_cluster_name    name of the new Member Cluster
     -file                  path of the Cluster Manifest File (including the '.xml' extension) to be created
     -member_type           type of member cluster to be created
     -version               5 digit version of GI (example: 12.2.0.2.0) on the new Member Cluster, if
                            different from the Domain Services Cluster
     -domain_services       services to be initially configured for this member
                            cluster (asm_storage with local, direct, or indirect access paths, and rhp)
                            --note that if "-domain_services" option is not specified,
                            then only the GIMR and TFA services will be configured
     asm_storage            indicates the storage access path for the database member clusters
                            local : storage is local to the cluster
                            direct or indirect : direct or indirect access to storage provided on the Domain Services Cluster
     rhp                    generate credentials and configuration for an RHP client cluster.

Provide access to the DSC DATA DG - even though we use: asm_storage local
[grid@dsctw21 ~]$ sqlplus / as sysasm
SQL> ALTER DISKGROUP data SET ATTRIBUTE 'access_control.enabled' = 'true';
Diskgroup altered.
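
To double-check that the attribute change took effect, a quick query against V$ASM_ATTRIBUTE can be used (a sketch, run as SYSASM on the DSC; output trimmed):

SQL> SELECT g.name AS diskgroup, a.name, a.value
       FROM v$asm_attribute a JOIN v$asm_diskgroup g
         ON a.group_number = g.group_number
      WHERE a.name = 'access_control.enabled';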

Create a  Member Cluster Configuration File with local ASM storage

[grid@dsctw21 ~]$ crsctl create member_cluster_configuration mclu2 -file mclu2.xml  -member_type database -domain_services asm_storage indirect 
--------------------------------------------------------------------------------
ASM GIMR TFA ACFS RHP GNS
================================================================================
YES  YES  NO   NO  NO YES
================================================================================

If you get ORA-15365 during crsctl create member_cluster_configuration, delete the configuration first
 Error ORA-15365: member cluster 'mclu2' already configured
   [grid@dsctw21 ~]$ crsctl delete member_cluster_configuration mclu2


[grid@dsctw21 ~]$ crsctl query  member_cluster_configuration mclu2 
          mclu2     12.2.0.1.0 a6ab259d51ea6f91ffa7984299059208 ASM,GIMR

Copy the File to the Member Cluster Host where you plan to start the installation
[grid@dsctw21 ~]$ sum  mclu2.xml
54062    22

Copy Member Cluster Manifest File to Member Cluster host
[grid@dsctw21 ~]$ scp  mclu2.xml mclu21:
mclu2.xml                                                                                         100%   25KB  24.7KB/s   00:00  
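
After the copy, the checksum can be compared on the Member Cluster host - the sum and block count should match the values printed above (output omitted here):

[grid@mclu21 ~]$ sum mclu2.xml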

Verify DSC SCAN Address from our Member Cluster Hosts

[grid@mclu21 grid]$ ping dsctw-scan.dsctw.dscgrid.example.com
PING dsctw-scan.dsctw.dscgrid.example.com (192.168.5.232) 56(84) bytes of data.
64 bytes from 192.168.5.232 (192.168.5.232): icmp_seq=1 ttl=64 time=0.570 ms
64 bytes from 192.168.5.232 (192.168.5.232): icmp_seq=2 ttl=64 time=0.324 ms
64 bytes from 192.168.5.232 (192.168.5.232): icmp_seq=3 ttl=64 time=0.654 ms
^C
--- dsctw-scan.dsctw.dscgrid.example.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2001ms
rtt min/avg/max/mdev = 0.324/0.516/0.654/0.140 ms


[root@mclu21 ~]# nslookup dsctw-scan.dsctw.dscgrid.example.com
Server:        192.168.5.50
Address:    192.168.5.50#53

Non-authoritative answer:
Name:    dsctw-scan.dsctw.dscgrid.example.com
Address: 192.168.5.230
Name:    dsctw-scan.dsctw.dscgrid.example.com
Address: 192.168.5.226
Name:    dsctw-scan.dsctw.dscgrid.example.com
Address: 192.168.5.227

Start Member Cluster installation

Unset the ORACLE_BASE environment variable.
[grid@dsctw21 grid]$ unset ORACLE_BASE
[grid@dsctw21 ~]$ cd $GRID_HOME
[grid@dsctw21 grid]$ pwd
/u01/app/122/grid
[grid@dsctw21 grid]$ unzip -q  /media/sf_kits/Oracle/122/linuxx64_12201_grid_home.zip

[grid@mclu21 grid]$ gridSetup.sh
Launching Oracle Grid Infrastructure Setup Wizard...

-> Configure an Oracle Member Cluster for Oracle Database
 -> Member Cluster Manifest File : /home/grid/FILES/mclu2.xml

While parsing the Member Cluster Manifest File, the following error pops up:

[INS-30211] An unexpected exception occurred while extracting details from ASM client data

PRCI-1167 : failed to extract atttributes from the specified file "/home/grid/FILES/mclu2.xml"
PRCT-1453 : failed to get ASM properties from ASM client data file /home/grid/FILES/mclu2.xml
KFOD-00321: failed to read the credential file /home/grid/FILES/mclu2.xml

  • At your DSC: add the GNS client data to the Member Cluster Configuration File
[grid@dsctw21 ~]$ srvctl export gns -clientdata   mclu2.xml   -role CLIENT
[grid@dsctw21 ~]$ scp  mclu2.xml mclu21:
mclu2.xml                                                                                         100%   25KB  24.7KB/s   00:00

  •  Restart the Member Cluster Installation – should work NOW !

 

  • Our Windows 7 host is busy and shows high memory consumption
  • The GIMR is the most challenging part of the installation

Verify Member Cluster

Verify Member Cluster Resources 

Cluster Resources 
[root@mclu22 ~]# crs
*****  Local Resources: *****
Resource NAME                  TARGET     STATE           SERVER       STATE_DETAILS                       
-------------------------      ---------- ----------      ------------ ------------------                  
ora.LISTENER.lsnr              ONLINE     ONLINE          mclu21       STABLE   
ora.LISTENER.lsnr              ONLINE     ONLINE          mclu22       STABLE   
ora.net1.network               ONLINE     ONLINE          mclu21       STABLE   
ora.net1.network               ONLINE     ONLINE          mclu22       STABLE   
ora.ons                        ONLINE     ONLINE          mclu21       STABLE   
ora.ons                        ONLINE     ONLINE          mclu22       STABLE   
*****  Cluster Resources: *****
Resource NAME               INST   TARGET       STATE        SERVER          STATE_DETAILS
--------------------------- ----   ------------ ------------ --------------- -----------------------------------------
ora.LISTENER_SCAN1.lsnr        1   ONLINE       ONLINE       mclu22          STABLE  
ora.LISTENER_SCAN2.lsnr        1   ONLINE       ONLINE       mclu21          STABLE  
ora.LISTENER_SCAN3.lsnr        1   ONLINE       ONLINE       mclu21          STABLE  
ora.cvu                        1   ONLINE       ONLINE       mclu21          STABLE  
ora.mclu21.vip                 1   ONLINE       ONLINE       mclu21          STABLE  
ora.mclu22.vip                 1   ONLINE       ONLINE       mclu22          STABLE  
ora.qosmserver                 1   ONLINE       ONLINE       mclu21          STABLE  
ora.scan1.vip                  1   ONLINE       ONLINE       mclu22          STABLE  
ora.scan2.vip                  1   ONLINE       ONLINE       mclu21          STABLE  
ora.scan3.vip                  1   ONLINE       ONLINE       mclu21          STABLE  

[root@mclu22 ~]#  srvctl config scan 
SCAN name: mclu2-scan.mclu2.dscgrid.example.com, Network: 1
Subnet IPv4: 192.168.5.0/255.255.255.0/enp0s8, dhcp
Subnet IPv6: 
SCAN 1 IPv4 VIP: -/scan1-vip/192.168.5.202
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes: 
SCAN VIP is individually disabled on nodes: 
SCAN 2 IPv4 VIP: -/scan2-vip/192.168.5.231
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes: 
SCAN VIP is individually disabled on nodes: 
SCAN 3 IPv4 VIP: -/scan3-vip/192.168.5.232
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes: 
SCAN VIP is individually disabled on nodes: 

[root@mclu22 ~]#  nslookup  mclu2-scan.mclu2.dscgrid.example.com
Server:        192.168.5.50
Address:    192.168.5.50#53
Non-authoritative answer:
Name:    mclu2-scan.mclu2.dscgrid.example.com
Address: 192.168.5.232
Name:    mclu2-scan.mclu2.dscgrid.example.com
Address: 192.168.5.202
Name:    mclu2-scan.mclu2.dscgrid.example.com
Address: 192.168.5.231

[root@mclu22 ~]# ping mclu2-scan.mclu2.dscgrid.example.com
PING mclu2-scan.mclu2.dscgrid.example.com (192.168.5.202) 56(84) bytes of data.
64 bytes from mclu22.example.com (192.168.5.202): icmp_seq=1 ttl=64 time=0.067 ms
64 bytes from mclu22.example.com (192.168.5.202): icmp_seq=2 ttl=64 time=0.037 ms
^C
--- mclu2-scan.mclu2.dscgrid.example.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.037/0.052/0.067/0.015 ms


[grid@mclu21 ~]$  oclumon manage -get MASTER
Master = mclu21

[grid@mclu21 ~]$  oclumon manage -get reppath
CHM Repository Path = +MGMT/_MGMTDB/50472078CF4019AEE0539705A8C0D652/DATAFILE/sysmgmtdata.292.944846507

[grid@mclu21 ~]$  oclumon dumpnodeview -allnodes
----------------------------------------
Node: mclu21 Clock: '2017-05-24 17.51.50+0200' SerialNo:445 
----------------------------------------
SYSTEM:
#pcpus: 1 #cores: 1 #vcpus: 1 cpuht: N chipname: Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz cpuusage: 46.68 cpusystem: 5.80 cpuuser: 40.87 cpunice: 0.00 cpuiowait: 0.00 cpusteal: 0.00 cpuq: 1 physmemfree: 1047400 physmemtotal: 7910784 mcache: 4806576 swapfree: 8257532 swaptotal: 8257532 hugepagetotal: 0 hugepagefree: 0 hugepagesize: 2048 ior: 0 iow: 41 ios: 10 swpin: 0 swpout: 0 pgin: 0 pgout: 20 netr: 81.940 netw: 85.211 procs: 248 procsoncpu: 1 #procs_blocked: 0 rtprocs: 7 rtprocsoncpu: N/A #fds: 10400 #sysfdlimit: 6815744 #disks: 5 #nics: 3 loadavg1: 6.92 loadavg5: 7.16 loadavg15: 5.56 nicErrors: 0

TOP CONSUMERS:
topcpu: 'gdb(20156) 31.19' topprivmem: 'gdb(20159) 353188' topshm: 'gdb(20159) 151624' topfd: 'crsd(21898) 274' topthread: 'crsd(21898) 52'
....

[root@mclu22 ~]#  tfactl print status
.-----------------------------------------------------------------------------------------------.
| Host   | Status of TFA | PID  | Port | Version    | Build ID             | Inventory Status   |
+--------+---------------+------+------+------------+----------------------+--------------------+
| mclu22 | RUNNING       | 2437 | 5000 | 12.2.1.0.0 | 12210020161122170355 | COMPLETE           |
| mclu21 | RUNNING       | 1209 | 5000 | 12.2.1.0.0 | 12210020161122170355 | COMPLETE           |
'--------+---------------+------+------+------------+----------------------+--------------------'

Verify DSC status after Member Cluster Setup


SQL> @pdb_info.sql
SQL> /*
SQL>          To connect to GIMR database set ORACLE_SID : export  ORACLE_SID=\-MGMTDB
SQL> */
SQL> 
SQL> set linesize 132
SQL> COLUMN NAME FORMAT A18
SQL> SELECT NAME, CON_ID, DBID, CON_UID, GUID FROM V$CONTAINERS ORDER BY CON_ID;

NAME               CON_ID        DBID    CON_UID GUID
------------------ ---------- ---------- ---------- --------------------------------
CDB$ROOT            1 1149111082      1 4700AA69A9553E5FE05387E5E50AC8DA
PDB$SEED            2  949396570  949396570 50458CC0190428B2E0539705A8C047D8
GIMR_DSCREP_10            3 3606966590 3606966590 504599D57F9148C0E0539705A8C0AD8D
GIMR_CLUREP_20            4 2292678490 2292678490 50472078CF4019AEE0539705A8C0D652

--> Management Database hosts a new PDB named GIMR_CLUREP_20

SQL> 
SQL> !asmcmd  find /DATA/mclu2 \*
+DATA/mclu2/OCRFILE/
+DATA/mclu2/OCRFILE/REGISTRY.257.944845929
+DATA/mclu2/VOTINGFILE/
+DATA/mclu2/VOTINGFILE/vfile.258.944845949

SQL> !asmcmd find \--type VOTINGFILE / \*
+DATA/mclu2/VOTINGFILE/vfile.258.944845949

SQL> !asmcmd find \--type   OCRFILE / \*
+DATA/dsctw/OCRFILE/REGISTRY.255.944835699
+DATA/mclu2/OCRFILE/REGISTRY.257.944845929

SQL> ! crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   6e59072e99f34f66bf750a5c8daf616f (AFD:DATA1) [DATA]
 2. ONLINE   ef0d610cb44d4f2cbf9d977090b88c2c (AFD:DATA2) [DATA]
 3. ONLINE   db3f3572250c4f74bf969c7dbaadfd00 (AFD:DATA3) [DATA]
Located 3 voting disk(s).

SQL> ! crsctl get cluster mode status
Cluster is running in "flex" mode

SQL> ! crsctl get cluster class
CRS-41008: Cluster class is 'Domain Services Cluster'

SQL> ! crsctl get cluster name
CRS-6724: Current cluster name is 'dsctw'
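
The same checks can be run on a Member Cluster node later on to confirm its role - the cluster class reported there should be a Member Cluster rather than a Domain Services Cluster (output omitted here):

[grid@mclu21 ~]$ crsctl get cluster class
[grid@mclu21 ~]$ crsctl get cluster name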

Potential Errors during Member Cluster Setup

   1. Reading Member Cluster Configuration File fails with  
       [INS-30211] An unexpected exception occurred while extracting details from ASM client data
       PRCI-1167 : failed to extract atttributes from the specified file "/home/grid/FILES/mclu2.xml"
       PRCT-1453 : failed to get ASM properties from ASM client data file /home/grid/FILES/mclu2.xml
       KFOD-00319: No ASM instance available for OCI connection
      Fix : Add GNS client Data to   Member Cluster Configuration File
            $ srvctl export gns -clientdata   mclu2.xml   -role CLIENT
            -> Fix confirmed 

   2. Reading Member Cluster Configuration File fails with  
    [INS-30211] An unexpected exception occurred while extracting details from ASM client data
       PRCI-1167 : failed to extract atttributes from the specified file "/home/grid/FILES/mclu2.xml"
       PRCT-1453 : failed to get ASM properties from ASM client data file /home/grid/FILES/mclu2.xml
       KFOD-00321: failed to read the credential file /home/grid/FILES/mclu2.xml 
       -> Double check that the DSC ASM Configuration is working
      This error may be related to running 
      [grid@dsctw21 grid]$ /u01/app/122/grid/gridSetup.sh -executeConfigTools -responseFile /home/grid/grid_dsctw2.rsp  
      and not setting passwords in the related rsp File  
     # Password for SYS user of Oracle ASM
    oracle.install.asm.SYSASMPassword=sys
    # Password for ASMSNMP account
    oracle.install.asm.monitorPassword=sys
      Fix: Add passwords before running   -executeConfigTools step
           -> Fix NOT confirmed  
  
   3. Crashes due to limited memory in my Virtualbox env ( 32 GByte )
   3.1  Crash of the DSC [ Virtualbox host freezes - could not track VM via top ]
        A failed Member Cluster setup due to memory shortage can kill your DSC GNS
        Note: This is a very dangerous situation as it kills your DSC env. 
              As said, always back up the OCR and export GNS - see the example after this list !
   3.2  Crash of any or all Member Cluster [ Virtualbox host freezes - could not track VM via top ]
        - GIMR database setup is partially installed but not working 
        - Member cluster itself is working fine
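
A minimal precaution before each Member Cluster experiment: back up the OCR and export the GNS instance data on the DSC. A sketch of the two commands (the export file name is just an example; the manual OCR backup is also shown in the OCR restore section below):

[root@dsctw21 ~]# ocrconfig -manualbackup
[grid@dsctw21 ~]$ srvctl export gns -instance /home/grid/gns_instance_backup.dmp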

Member Cluster Deinstall

On all Member Cluster Nodes but NOT the last one :
[root@mclu21 grid]#  $GRID_HOME/crs/install/rootcrs.sh -deconfig -force 
On last Member Cluster Node:
[root@mclu21 grid]#  $GRID_HOME/crs/install/rootcrs.sh -deconfig -force -lastnode
..
2017/05/25 14:37:18 CLSRSC-559: Ensure that the GPnP profile data under the 'gpnp' directory in /u01/app/122/grid is deleted on each node before using the software in the current Grid Infrastructure home for reconfiguration.
2017/05/25 14:37:18 CLSRSC-590: Ensure that the configuration for this Storage Client (mclu2) is deleted by running the command 'crsctl delete member_cluster_configuration <member_cluster_name>' on the Storage Server.

Delete Member Cluster mclu2 - Commands running on DSC

[grid@dsctw21 ~]$ crsctl delete  member_cluster_configuration mclu2 
ASMCMD-9477: delete member cluster 'mclu2' failed
KFOD-00327: failed to delete member cluster 'mclu2'
ORA-15366: unable to delete configuration for member cluster 'mclu2' because the directory '+DATA/mclu2/VOTINGFILE' was not empty
ORA-06512: at line 4
ORA-06512: at "SYS.X$DBMS_DISKGROUP", line 724
ORA-06512: at line 2

ASMCMD> find mclu2/ *
+DATA/mclu2/VOTINGFILE/
+DATA/mclu2/VOTINGFILE/vfile.258.944845949
ASMCMD> rm +DATA/mclu2/VOTINGFILE/vfile.258.944845949

SQL>    @pdb_info
NAME               CON_ID        DBID    CON_UID GUID
------------------ ---------- ---------- ---------- --------------------------------
CDB$ROOT            1 1149111082      1 4700AA69A9553E5FE05387E5E50AC8DA
PDB$SEED            2  949396570  949396570 50458CC0190428B2E0539705A8C047D8
GIMR_DSCREP_10            3 3606966590 3606966590 504599D57F9148C0E0539705A8C0AD8D

-> GIMR_CLUREP_20 PDB was deleted !

[grid@dsctw21 ~]$ srvctl config gns -list
dsctw21.CLSFRAMEdsctw SRV Target: 192.168.2.151.dsctw Protocol: tcp Port: 40020 Weight: 0 Priority: 0 Flags: 0x101
dsctw21.CLSFRAMEdsctw TXT NODE_ROLE="HUB", NODE_INCARNATION="0", NODE_TYPE="20" Flags: 0x101
dsctw22.CLSFRAMEdsctw SRV Target: 192.168.2.152.dsctw Protocol: tcp Port: 58466 Weight: 0 Priority: 0 Flags: 0x101
dsctw22.CLSFRAMEdsctw TXT NODE_ROLE="HUB", NODE_INCARNATION="0", NODE_TYPE="20" Flags: 0x101
mclu21.CLSFRAMEmclu2 SRV Target: 192.168.2.155.mclu2 Protocol: tcp Port: 14064 Weight: 0 Priority: 0 Flags: 0x101
mclu21.CLSFRAMEmclu2 TXT NODE_ROLE="HUB", NODE_INCARNATION="0", NODE_TYPE="20" Flags: 0x101
dscgrid.example.com DLV 20682 10 18 ( XoH6wdB6FkuM3qxr/ofncb0kpYVCa+hTubyn5B4PNgJzWF4kmbvPdN2CkEcCRBxt10x/YV8MLXEe0emM26OCAw== ) Unique Flags: 0x314
dscgrid.example.com DNSKEY 7 3 10 ( MIIBCgKCAQEAvu/8JsrxQAVTEPjq4+JfqPwewH/dc7Y/QbJfMp9wgIwRQMZyJSBSZSPdlqhw8fSGfNUmWJW8v+mJ4JsPmtFZRsUW4iB7XvO2SwnEuDnk/8W3vN6sooTmH82x8QxkOVjzWfhqJPLkGs9NP4791JEs0wI/HnXBoR4Xv56mzaPhFZ6vM2aJGWG0N/1i67cMOKIDpw90JV4HZKcaWeMsr57tOWqEec5+dhIKf07DJlCqa4UU/oSHH865DBzpqqEhfbGaUAiUeeJVVYVJrWFPhSttbxsdPdCcR9ulBLuR6PhekMj75wxiC8KUgAL7PUJjxkvyk3ugv5K73qkbPesNZf6pEQIDAQAB ) Unique Flags: 0x314
dscgrid.example.com NSEC3PARAM 10 0 2 ( jvm6kO+qyv65ztXFy53Dkw== ) Unique Flags: 0x314
dsctw-scan.dsctw A 192.168.5.226 Unique Flags: 0x81
dsctw-scan.dsctw A 192.168.5.235 Unique Flags: 0x81
dsctw-scan.dsctw A 192.168.5.238 Unique Flags: 0x81
dsctw-scan1-vip.dsctw A 192.168.5.238 Unique Flags: 0x81
dsctw-scan2-vip.dsctw A 192.168.5.235 Unique Flags: 0x81
dsctw-scan3-vip.dsctw A 192.168.5.226 Unique Flags: 0x81
dsctw21-vip.dsctw A 192.168.5.225 Unique Flags: 0x81
dsctw22-vip.dsctw A 192.168.5.241 Unique Flags: 0x81
dsctw-scan1-vip A 192.168.5.238 Unique Flags: 0x81
dsctw-scan2-vip A 192.168.5.235 Unique Flags: 0x81
dsctw-scan3-vip A 192.168.5.226 Unique Flags: 0x81
dsctw21-vip A 192.168.5.225 Unique Flags: 0x81
dsctw22-vip A 192.168.5.241 Unique Flags: 0x81
dsctw21.gipcdhaname SRV Target: 192.168.2.151.dsctw Protocol: tcp Port: 41795 Weight: 0 Priority: 0 Flags: 0x101
dsctw21.gipcdhaname TXT NODE_ROLE="HUB", NODE_INCARNATION="0", NODE_TYPE="20" Flags: 0x101
dsctw22.gipcdhaname SRV Target: 192.168.2.152.dsctw Protocol: tcp Port: 61595 Weight: 0 Priority: 0 Flags: 0x101
dsctw22.gipcdhaname TXT NODE_ROLE="HUB", NODE_INCARNATION="0", NODE_TYPE="20" Flags: 0x101
mclu21.gipcdhaname SRV Target: 192.168.2.155.mclu2 Protocol: tcp Port: 31416 Weight: 0 Priority: 0 Flags: 0x101
mclu21.gipcdhaname TXT NODE_ROLE="HUB", NODE_INCARNATION="0", NODE_TYPE="20" Flags: 0x101
gpnpd h:dsctw21 c:dsctw u:c5323627b2484f8fbf20e67a2c4624e1.gpnpa2c4624e1 SRV Target: dsctw21.dsctw Protocol: tcp Port: 21099 Weight: 0 Priority: 0 Flags: 0x101
gpnpd h:dsctw21 c:dsctw u:c5323627b2484f8fbf20e67a2c4624e1.gpnpa2c4624e1 TXT agent="gpnpd", cname="dsctw", guid="c5323627b2484f8fbf20e67a2c4624e1", host="dsctw21", pid="12420" Flags: 0x101
gpnpd h:dsctw22 c:dsctw u:c5323627b2484f8fbf20e67a2c4624e1.gpnpa2c4624e1 SRV Target: dsctw22.dsctw Protocol: tcp Port: 60348 Weight: 0 Priority: 0 Flags: 0x101
gpnpd h:dsctw22 c:dsctw u:c5323627b2484f8fbf20e67a2c4624e1.gpnpa2c4624e1 TXT agent="gpnpd", cname="dsctw", guid="c5323627b2484f8fbf20e67a2c4624e1", host="dsctw22", pid="13141" Flags: 0x101
CSSHub1.hubCSS SRV Target: dsctw21.dsctw Protocol: gipc Port: 0 Weight: 0 Priority: 0 Flags: 0x101
CSSHub1.hubCSS TXT HOSTQUAL="dsctw" Flags: 0x101
Net-X-1.oraAsm SRV Target: 192.168.2.151.dsctw Protocol: tcp Port: 1526 Weight: 0 Priority: 0 Flags: 0x101
Net-X-2.oraAsm SRV Target: 192.168.2.152.dsctw Protocol: tcp Port: 1526 Weight: 0 Priority: 0 Flags: 0x101
Oracle-GNS A 192.168.5.60 Unique Flags: 0x315
dsctw.Oracle-GNS SRV Target: Oracle-GNS Protocol: tcp Port: 14123 Weight: 0 Priority: 0 Flags: 0x315
dsctw.Oracle-GNS TXT CLUSTER_NAME="dsctw", CLUSTER_GUID="c5323627b2484f8fbf20e67a2c4624e1", NODE_NAME="dsctw21", SERVER_STATE="RUNNING", VERSION="12.2.0.0.0", PROTOCOL_VERSION="0xc200000", DOMAIN="dscgrid.example.com" Flags: 0x315
Oracle-GNS-ZM A 192.168.5.60 Unique Flags: 0x315
dsctw.Oracle-GNS-ZM SRV Target: Oracle-GNS-ZM Protocol: tcp Port: 39923 Weight: 0 Priority: 0 Flags: 0x315

--> Most GNS entries for our Member cluster were deleted

Re-Executing GRID setup fails with [FATAL] [INS-30024]

After an unclean deinstallation gridSetup.sh fails with error [FATAL] [INS-30024].
Instead of offering the option to install a NEW cluster, the installer offers the GRID Upgrade option.

Debugging with strace

[grid@dsctw21 grid]$   gridSetup.sh -silent  -skipPrereqs -responseFile  /home/grid/grid_dsctw2.rsp    oracle.install.asm.SYSASMPassword=sys    oracle.install.asm.monitorPassword=sys 2>llog2
Launching Oracle Grid Infrastructure Setup Wizard...

[FATAL] [INS-30024] Installer has detected that the location determined as Oracle Grid Infrastructure home (/u01/app/122/grid), is not a valid Oracle home.
   ACTION: Ensure that either there are no environment variables pointing to this invalid location or register the location as an Oracle home in the central inventory.

Using strace to trace system calls 
[grid@dsctw21 grid]$ strace -f  gridSetup.sh -silent  -skipPrereqs -responseFile  /home/grid/grid_dsctw2.rsp    oracle.install.asm.SYSASMPassword=sys    oracle.install.asm.monitorPassword=sys 2>llog
Launching Oracle Grid Infrastructure Setup Wizard...

[FATAL] [INS-30024] Installer has detected that the location determined as Oracle Grid Infrastructure home (/u01/app/122/grid), is not a valid Oracle home.
   ACTION: Ensure that either there are no environment variables pointing to this invalid location or register the location as an Oracle home in the central inventory.

Check the log file for failed open calls, or for open calls which should fail in a CLEAN installation env 
[grid@dsctw21 grid]$ grep open llog
..
[pid 11525] open("/etc/oracle/ocr.loc", O_RDONLY) = 93
[pid 11525] open("/etc/oracle/ocr.loc", O_RDONLY) = 93

--> It seems the installer is checking for the files
 /etc/oracle/ocr.loc
 /etc/oracle/olr.loc 
to decide whether it is an upgrade or a new installation. 

Fix : Rename ocr.loc and olr.loc 
[root@dsctw21 ~]# mv /etc/oracle/ocr.loc /etc/oracle/ocr.loc_tbd
[root@dsctw21 ~]# mv /etc/oracle/olr.loc /etc/oracle/olr.loc_tbd

Now gridSetup.sh should start the installation process

Restoring the OCR – 12.2

Backup currently active OCR 
[root@dsctw21 peer]# ocrconfig -manualbackup
dsctw21     2017/05/21 09:07:10     +MGMT:/dsctw/OCRBACKUP/backup_20170521_090710.ocr.292.944557631     0     
[root@dsctw21 peer]#  ocrconfig -showbackup
PROT-24: Auto backups for the Oracle Cluster Registry are not available
dsctw21     2017/05/21 09:07:10     +MGMT:/dsctw/OCRBACKUP/backup_20170521_090710.ocr.292.944557631     0   

Locate all OCR backups 
ASMCMD> find --type OCRBACKUP / *
+MGMT/dsctw/OCRBACKUP/14348721.293.944515403
+MGMT/dsctw/OCRBACKUP/backup_20170521_090710.ocr.292.944557631
ASMCMD> ls -l +MGMT/dsctw/OCRBACKUP/14348721.293.944515403
Type       Redund  Striped  Time             Sys  Name
OCRBACKUP  UNPROT  COARSE   MAY 20 21:00:00  Y    14348721.293.944515403
ASMCMD> ls -l +MGMT/dsctw/OCRBACKUP/backup_20170521_090710.ocr.292.944557631
Type       Redund  Striped  Time             Sys  Name
OCRBACKUP  UNPROT  COARSE   MAY 21 09:00:00  Y    backup_20170521_090710.ocr.292.944557631

--> Note the first backup was created by root.sh !
    After a GNS corruption we need to restore the OCR backup created by root.sh

List the nodes and cluster resources in your cluster by running the following command on one node:
[grid@dsctw21 ~]$ olsnodes
dsctw21
dsctw22

[grid@dsctw21 ~]$ crs
*****  Cluster Resources: *****
Resource NAME               INST   TARGET       STATE        SERVER          STATE_DETAILS
--------------------------- ----   ------------ ------------ --------------- -----------------------------------------
ora.LISTENER_SCAN1.lsnr        1   ONLINE       ONLINE       dsctw22         STABLE  
ora.LISTENER_SCAN2.lsnr        1   ONLINE       ONLINE       dsctw21         STABLE  
ora.LISTENER_SCAN3.lsnr        1   ONLINE       ONLINE       dsctw21         STABLE  
ora.MGMTLSNR                   1   ONLINE       ONLINE       dsctw21         169.254.156.94 192.168.2.151,STABLE
ora.asm                        1   ONLINE       ONLINE       dsctw21         Started,STABLE  
ora.asm                        2   ONLINE       ONLINE       dsctw22         Started,STABLE  
ora.asm                        3   OFFLINE      OFFLINE      -               STABLE  
ora.cvu                        1   ONLINE       ONLINE       dsctw21         STABLE  
ora.dsctw21.vip                1   ONLINE       ONLINE       dsctw21         STABLE  
ora.dsctw22.vip                1   ONLINE       ONLINE       dsctw22         STABLE  
ora.gns                        1   ONLINE       ONLINE       dsctw21         STABLE  
ora.gns.vip                    1   ONLINE       ONLINE       dsctw21         STABLE  
ora.ioserver                   1   ONLINE       ONLINE       dsctw21         STABLE  
ora.ioserver                   2   ONLINE       ONLINE       dsctw22         STABLE  
ora.ioserver                   3   ONLINE       OFFLINE      -               STABLE  
ora.mgmtdb                     1   ONLINE       ONLINE       dsctw21         Open,STABLE  
ora.qosmserver                 1   ONLINE       ONLINE       dsctw21         STABLE  
ora.rhpserver                  1   ONLINE       ONLINE       dsctw21         STABLE  
ora.scan1.vip                  1   ONLINE       ONLINE       dsctw22         STABLE  
ora.scan2.vip                  1   ONLINE       ONLINE       dsctw21         STABLE  
ora.scan3.vip                  1   ONLINE       ONLINE       dsctw21         STABLE  

If OCR is located in an Oracle ASM disk group, then stop the Oracle Clusterware daemon:
[root@dsctw21 ~]# crsctl stop crs 
[root@dsctw22 ~]# crsctl stop crs 
  
Start the Oracle Clusterware stack on one node in exclusive mode by running the following command as root:
[root@dsctw21 ~]#  crsctl start crs -excl -nocrs

The -nocrs option ensures that the CRSD process and OCR do not start with the rest of the Oracle Clusterware stack.

Ignore any errors that display.
Check whether CRSD is running by running the following command:
[root@dsctw21 ~]# crsi

*****  Local Resources: *****
Resource NAME               INST   TARGET       STATE        SERVER          STATE_DETAILS
--------------------------- ----   ------------ ------------ --------------- -----------------------------------------
ora.asm                        1   ONLINE       ONLINE       dsctw21         Started,STABLE  
ora.cluster_interconnect.haip  1   ONLINE       ONLINE       dsctw21         STABLE  
ora.crf                        1   OFFLINE      OFFLINE      -               STABLE  
ora.crsd                       1   OFFLINE      OFFLINE      -               STABLE  
ora.cssd                       1   ONLINE       ONLINE       dsctw21         STABLE  
ora.cssdmonitor                1   ONLINE       ONLINE       dsctw21         STABLE  
ora.ctssd                      1   ONLINE       ONLINE       dsctw21         OBSERVER,STABLE  
ora.diskmon                    1   OFFLINE      OFFLINE      -               STABLE  
ora.driver.afd                 1   ONLINE       ONLINE       dsctw21         STABLE  
ora.drivers.acfs               1   ONLINE       ONLINE       dsctw21         STABLE  
ora.evmd                       1   ONLINE       INTERMEDIATE dsctw21         STABLE  
ora.gipcd                      1   ONLINE       ONLINE       dsctw21         STABLE  
ora.gpnpd                      1   ONLINE       ONLINE       dsctw21         STABLE  
ora.mdnsd                      1   ONLINE       ONLINE       dsctw21         STABLE  
ora.storage                    1   OFFLINE      OFFLINE      -               STABLE  


    If CRSD is running, then stop it by running the following command as root:
    # crsctl stop resource ora.crsd -init

   
Locate all OCR backups 
ASMCMD> find --type OCRBACKUP / *
+MGMT/dsctw/OCRBACKUP/14348721.293.944515403
+MGMT/dsctw/OCRBACKUP/backup_20170521_090710.ocr.292.944557631
ASMCMD> ls -l +MGMT/dsctw/OCRBACKUP/14348721.293.944515403
Type       Redund  Striped  Time             Sys  Name
OCRBACKUP  UNPROT  COARSE   MAY 20 21:00:00  Y    14348721.293.944515403
ASMCMD> ls -l +MGMT/dsctw/OCRBACKUP/backup_20170521_090710.ocr.292.944557631
Type       Redund  Striped  Time             Sys  Name
OCRBACKUP  UNPROT  COARSE   MAY 21 09:00:00  Y    backup_20170521_090710.ocr.292.944557631

Restore OCR with an OCR backup that you can identify in "Listing Backup Files" by running the following command as root:
[root@dsctw21 ~]# ocrconfig -restore +MGMT/dsctw/OCRBACKUP/14348721.293.944515403

    Note:
        If the original OCR location does not exist, then you must create an empty (0 byte) OCR location before 
        you run the ocrconfig -restore command.
        Ensure that the OCR devices that you specify in the OCR configuration exist and that these OCR devices are valid.
        If you configured OCR in an Oracle ASM disk group, then ensure that the Oracle ASM disk group exists and is mounted.
        If the OCR backup file is located in an Oracle ASM disk group, then ensure that the disk group exists and is mounted.

[root@dsctw21 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
     Version                  :          4
     Total space (kbytes)     :     409568
     Used space (kbytes)      :       3992
     Available space (kbytes) :     405576
     ID                       : 2008703361
     Device/File Name         :      +DATA
                                    Device/File integrity check succeeded
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
     Cluster registry integrity check succeeded
     Logical corruption check succeeded

[root@dsctw21 ~]# crsctl stop crs -f
    
    Run the ocrconfig -repair -replace command as root on all the nodes in the cluster where you did not run the 
    ocrconfig -restore command. For example, if you ran the ocrconfig -restore command on node 1 of a four-node 
    cluster, then you must run the ocrconfig -repair -replace command on nodes 2, 3, and 4.

Begin to start Oracle Clusterware by running the following command as root on all of the nodes:
[root@dsctw21 ~]#  crsctl start crs
[root@dsctw22 ~]#  crsctl start crs

Verify OCR integrity of all of the cluster nodes that are configured as part of your cluster by running the following CVU command:
[grid@dsctw21 ~]$  cluvfy comp ocr -n all -verbose
Verifying OCR Integrity ...PASSED
Verification of OCR integrity was successful. 
CVU operation performed:      OCR integrity
Date:                         May 21, 2017 8:13:54 PM
CVU home:                     /u01/app/122/grid/
User:                         grid

Verify cluster resources 
[root@dsctw22 ~]#  crs
*****  Cluster Resources: *****
Resource NAME               INST   TARGET       STATE        SERVER          STATE_DETAILS
--------------------------- ----   ------------ ------------ --------------- -----------------------------------------
ora.LISTENER_SCAN1.lsnr        1   ONLINE       ONLINE       dsctw22         STABLE  
ora.LISTENER_SCAN2.lsnr        1   ONLINE       ONLINE       dsctw21         STABLE  
ora.LISTENER_SCAN3.lsnr        1   ONLINE       ONLINE       dsctw21         STABLE  
ora.MGMTLSNR                   1   ONLINE       ONLINE       dsctw21         169.254.156.94 192.168.2.151,STABLE
ora.asm                        1   ONLINE       ONLINE       dsctw21         Started,STABLE  
ora.asm                        2   ONLINE       ONLINE       dsctw22         Started,STABLE  
ora.asm                        3   OFFLINE      OFFLINE      -               STABLE  
ora.cvu                        1   ONLINE       ONLINE       dsctw21         STABLE  
ora.dsctw21.vip                1   ONLINE       ONLINE       dsctw21         STABLE  
ora.dsctw22.vip                1   ONLINE       ONLINE       dsctw22         STABLE  
ora.gns                        1   ONLINE       ONLINE       dsctw21         STABLE  
ora.gns.vip                    1   ONLINE       ONLINE       dsctw21         STABLE  
ora.ioserver                   1   ONLINE       ONLINE       dsctw21         STABLE  
ora.ioserver                   2   ONLINE       ONLINE       dsctw22         STABLE  
ora.ioserver                   3   ONLINE       OFFLINE      -               STABLE  
ora.mgmtdb                     1   ONLINE       ONLINE       dsctw21         Open,STABLE  
ora.qosmserver                 1   ONLINE       ONLINE       dsctw21         STABLE  
ora.rhpserver                  1   ONLINE       ONLINE       dsctw21         STABLE  
ora.scan1.vip                  1   ONLINE       ONLINE       dsctw22         STABLE  
ora.scan2.vip                  1   ONLINE       ONLINE       dsctw21         STABLE  
ora.scan3.vip                  1   ONLINE       ONLINE       dsctw21         STABLE

A deeper dive into JPA, 2-Phase-Commit [ 2PC ] and RAC

Overview JPA and 2-Phase-Commit

Mike Keith, Architect at Oracle and author of

Pro JPA 2: Mastering the Java Persistence API (Second Edition)

summarizes the usage of JPA in a distributed environment as follows:

  • A JPA application will get the 2PC benefits the same as any other application
  • The persistence unit data source is using JTA and is configured to use an XA data source
  • The XA resources and transaction manager 2PC interactions happen on their own without the JPA EMF knowing or having to be involved.
  • If a 2PC XA tx fails then an exception will be thrown just the same as if the tx was optimized to not have 2PC.

This was enough motivation for me, working on Oracle RAC and JDBC projects, to have a closer look at JPA and 2PC.
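
The core pattern boils down to two JTA persistence units joining one UserTransaction, with the container's transaction manager driving the 2PC against both Oracle RMs. A condensed sketch (it reuses the Accounts entity and account names from the complete bean code listed in the Java Code section below):

import java.math.BigDecimal;
import javax.naming.InitialContext;
import javax.persistence.EntityManager;
import javax.transaction.UserTransaction;

// Sketch: move money between two RAC databases inside a single XA transaction
public void transferSketch(EntityManager em1, EntityManager em2, BigDecimal amount) throws Exception {
    UserTransaction ut = (UserTransaction) new InitialContext().lookup("java:comp/UserTransaction");
    ut.begin();
    em1.joinTransaction();     // PU RacBankAHibPU -> XA datasource for BANKA
    em2.joinTransaction();     // PU RacBankBHibPU -> XA datasource for BANKB
    Accounts a = em1.find(Accounts.class, "User99_at_BANKA");
    Accounts b = em2.find(Accounts.class, "User98_at_BANKB");
    a.setBalance(a.getBalance().add(amount));
    b.setBalance(b.getBalance().subtract(amount));
    em1.merge(a);
    em2.merge(b);
    ut.commit();               // the JTA transaction manager runs prepare/commit (2PC) on both branches
}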

Versions used  / Configuration File persistence.xml

Wildfly:  8.2
Hibernate Version: 4.3.7.Final
--> Collecting Data for RAC database1
    Driver Name             : Oracle JDBC driver
    Driver Version          : 12.1.0.2.0
    Database Product Version: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
    DB Name:  BANKA
    1. Instance Name: bankA_2 - Host: hract21.example.com - Pooled XA Connections: 61

--> Collecting Data for RAC database2
    Driver Name             : Oracle JDBC driver
    Driver Version          : 12.1.0.2.0
    Database Product Version: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
    DB Name:  BANKB
    1. Instance Name: bankb_3 - Host: hract21.example.com - Pooled XA Connections: 62

persistence.xml

<?xml version="1.0"?>
<persistence xmlns="http://java.sun.com/xml/ns/persistence"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd"
             version="2.0">

    <persistence-unit name="RacBankAHibPU" transaction-type="JTA">
        <provider>org.hibernate.ejb.HibernatePersistence</provider>
        <jta-data-source>java:/jboss/datasources/xa_rac12g_banka</jta-data-source>
        <class>com.hhu.wfjpa2pc.Accounts</class>
        <properties>
            <property name="hibernate.transaction.jta.platform"
                 value="org.hibernate.service.jta.platform.internal.JBossAppServerJtaPlatform" />
            <property name="hibernate.show_sql" value="true" />
            <property name="hibernate.dialect" value="org.hibernate.dialect.Oracle10gDialect"/>
        </properties>
    </persistence-unit>
    <persistence-unit name="RacBankBHibPU" transaction-type="JTA">
        <provider>org.hibernate.ejb.HibernatePersistence</provider>
        <jta-data-source>java:/jboss/datasources/xa_rac12g_bankb</jta-data-source>
        <class>com.hhu.wfjpa2pc.Accounts</class>
        <properties>
            <property name="hibernate.transaction.jta.platform"
                 value="org.hibernate.service.jta.platform.internal.JBossAppServerJtaPlatform" />
            <property name="hibernate.show_sql" value="true" />
            <property name="hibernate.dialect" value="org.hibernate.dialect.Oracle10gDialect"/>
        </properties>
    </persistence-unit>
</persistence>
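
The two jta-data-source entries must resolve to XA-capable datasources in Wildfly. Their definitions are not listed in this article; an xa-datasource entry along the following lines in standalone.xml would match (an assumption - URL, credentials and pool name are illustrative, borrowed from the SQL*Plus connects shown later):

<xa-datasource jndi-name="java:/jboss/datasources/xa_rac12g_banka" pool-name="xa_rac12g_banka" enabled="true">
    <xa-datasource-property name="URL">jdbc:oracle:thin:@//ract2-scan.grid12c.example.com:1521/banka</xa-datasource-property>
    <xa-datasource-class>oracle.jdbc.xa.client.OracleXADataSource</xa-datasource-class>
    <driver>oracle</driver>
    <security>
        <user-name>scott</user-name>
        <password>tiger</password>
    </security>
</xa-datasource>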

Running a successful 2PC operation with JPA

Call Flow 

- Get EntityManager for RAC Database1 [ em1=getEntityManager1(); ]
- Get EntityManager for RAC Database2 [ em2=getEntityManager2(); ]
- Start a UserTransaction              [ ut.begin(); ]
- Join transaction from EntityManager 1  [ em1.joinTransaction(); ]
- Join transaction from EntityManager 2  [ em2.joinTransaction(); ]
- Change Balance on both databases
bankA_acct.setBalance( bankA_acct.getBalance().add(b) );
em1.merge(bankA_acct);
if (isEnableFlush() )
em1.flush();

bankB_acct.setBalance( bankB_acct.getBalance().subtract(b) );
em2.merge(bankB_acct);
if (isEnableFlush() )
em2.flush();
- Finally commit the Transaction [ ut.commit(); ]

Application log :
14:51:58.071 transferMoneyImpl():: Found both Entity Managers for PUs : RacBankAHibPU and RacBankBHibPU
14:51:58.074 transferMoneyImpl():: Account at bank A: User99_at_BANKA - Balance: 10000
14:51:58.075 transferMoneyImpl():: Account at bank B: User98_at_BANKB - Balance: 10000
14:51:58.076 transferMoneyImpl():: Both EMs joined our XA Transaction...
14:51:58.092 transferMoneyImpl():: Before Commit ...
14:51:58.160 transferMoneyImpl():: Tx Commit worked !
14:51:58.165 Database Name:BANKA -- Account: User99_at_BANKA -- Balance: 11000.0
14:51:58.168 Database Name:BANKB -- Account: User98_at_BANKB -- Balance: 9000.0
14:51:58.169 transferMoneyImpl():: Leaving with TX Status:: [UT status:  6 - STATUS_NO_TRANSACTION]

-> We successfully managed to transfer some money from bankA to bankB !

Testing Rollback operation with EM flush enabled [ transaction status : STATUS_MARKED_ROLLBACK ]

Account Balance
 transferMoneyImpl():: Account at bank A: User99_at_BANKA - Balance: 20000
 transferMoneyImpl():: Account at bank B: User98_at_BANKB - Balance: 0
Note the next money transfer/transaction should trigger a constraint violation ! 

Call Flow
- Get EntityManager for RAC Database1 [ em1=getEntityManager1(); ]
- Get EntityManager for RAC Database2 [ em2=getEntityManager2(); ]
- Start a User transaction             [ ut.begin(); ] 
- Join transaction from EntityManager 1  [ em1.joinTransaction(); ]
- Join transaction from EntityManager 2  [ em2.joinTransaction(); ]
- Change Balance on both databases
     bankA_acct.setBalance( bankA_acct.getBalance().add(b) );
        em1.merge(bankA_acct);
        if (isEnableFlush() )
          em1.flush();
                
        bankB_acct.setBalance( bankB_acct.getBalance().subtract(b) );
        em2.merge(bankB_acct);           
        if (isEnableFlush() )
          em2.flush();              
- em2.flush is failing due to a constraint violation and sets the TX status to : STATUS_MARKED_ROLLBACK 
   Error : org.hibernate.exception.ConstraintViolationException: could not execute statement
- Exception handler checks transaction status : STATUS_MARKED_ROLLBACK and is rolling back the TX
       if ( status != javax.transaction.Status.STATUS_NO_TRANSACTION   ) 
         {
         ut.rollback();
         ...
- After rollback() transaction status changed to   STATUS_NO_TRANSACTION                      
  
Application log :
15:11:03.920 transferMoneyImpl():: Found both Entity Managers for PUs : RacBankAHibPU and RacBankBHibPU
15:11:03.929 transferMoneyImpl():: Account at bank A: User99_at_BANKA - Balance: 20000
15:11:03.931 transferMoneyImpl():: Account at bank B: User98_at_BANKB - Balance: 0
15:11:03.931 transferMoneyImpl():: Both EMs joined our XA Transaction... 
15:11:03.960 transferMoneyImpl():: FATAL ERROR - Tx Status : [UT status:  1 - STATUS_MARKED_ROLLBACK]
15:11:03.962 transferMoneyImpl():: Before TX rollback ... 
15:11:03.974 transferMoneyImpl():: TX rollback worked !
15:11:03.974 transferMoneyImpl():: Leaving with TX Status:: [UT status:  6 - STATUS_NO_TRANSACTION]


Exception stack :
15:11:03.960 Error in top level function: transferMoneyImpl():: 
15:11:03.960 org.hibernate.exception.ConstraintViolationException: could not execute statement
15:11:03.961 javax.persistence.PersistenceException: org.hibernate.exception.ConstraintViolationException: could not execute statement
    at org.hibernate.jpa.spi.AbstractEntityManagerImpl.convert(AbstractEntityManagerImpl.java:1763)
    at org.hibernate.jpa.spi.AbstractEntityManagerImpl.convert(AbstractEntityManagerImpl.java:1677)
    at org.hibernate.jpa.spi.AbstractEntityManagerImpl.convert(AbstractEntityManagerImpl.java:1683)
    at org.hibernate.jpa.spi.AbstractEntityManagerImpl.flush(AbstractEntityManagerImpl.java:1338)
    at com.hhu.wfjpa2pc.Jpa2pcTest.transferMoneyImpl(Jpa2pcTest.java:235)
    at com.hhu.wfjpa2pc.Jpa2pcTest.transferMoney(Jpa2pcTest.java:166)
        ..
Caused by: org.hibernate.exception.ConstraintViolationException: could not execute statement
    at org.hibernate.exception.internal.SQLExceptionTypeDelegate.convert(SQLExceptionTypeDelegate.java:72)
    at org.hibernate.exception.internal.StandardSQLExceptionConverter.convert(StandardSQLExceptionConverter.java
    ... 
Caused by: java.sql.SQLIntegrityConstraintViolationException: ORA-02290: check constraint (SCOTT.S_LOWER_CHK) violated

Testing Rollback operation without EM flush enabled [ transaction status : STATUS_NO_TRANSACTION  ]

Account Balance
 transferMoneyImpl():: Account at bank A: User99_at_BANKA - Balance: 20000
 transferMoneyImpl():: Account at bank B: User98_at_BANKB - Balance: 0
Note the next money transfer/transaction should trigger a constraint violation ! 

Call Flow
- Get EntityManager for RAC Database1 [ em1=getEntityManager1(); ]
- Get EntityManager for RAC Database2 [ em2=getEntityManager2(); ]
- Start a User transaction            [ ut.begin(); ] 
- Join transaction from EntityManager 1  [ em1.joinTransaction(); ]
- Join transaction from EntityManager 2  [ em2.joinTransaction(); ]
- Change Balance on both databases
     bankA_acct.setBalance( bankA_acct.getBalance().add(b) );
        em1.merge(bankA_acct);
        if (isEnableFlush() )
          em1.flush();
                
        bankB_acct.setBalance( bankB_acct.getBalance().subtract(b) );
        em2.merge(bankB_acct);           
        if (isEnableFlush() )
          em2.flush();        
- Commit the Transaction [ ut.commit(); ] fails with :  ARJUNA016053: Could not commit transaction.
- As the Commit itself fails, Wildfly rolls back the transaction 
- Tx Status after COMMIT error :  STATUS_NO_TRANSACTION 
- Exception handler checks the transaction status (STATUS_NO_TRANSACTION) and does not roll back the TX
       if ( status != javax.transaction.Status.STATUS_NO_TRANSACTION   ) 
         {
         ut.rollback();
         ...
- Here we don't run any rollback operation -> the TX status remains at   STATUS_NO_TRANSACTION                      
  
Application log :
  15:27:53.818 transferMoneyImpl():: Found both Entity Managers for PUs : RacBankAHibPU and RacBankBHibPU
  15:27:53.827 transferMoneyImpl():: Account at bank A: User99_at_BANKA - Balance: 20000
  15:27:53.829 transferMoneyImpl():: Account at bank B: User98_at_BANKB - Balance: 0
  15:27:53.829 transferMoneyImpl():: Both EMs joined our XA Transaction... 
  15:27:53.829 transferMoneyImpl():: Before Commit ... 
  15:27:53.857 transferMoneyImpl():: FATAL ERROR - Tx Status : [UT status:  6 - STATUS_NO_TRANSACTION]
  15:27:53.859 transferMoneyImpl():: TX not active / TX already rolled back
  15:27:53.859 transferMoneyImpl():: Leaving with TX Status:: [UT status:  6 - STATUS_NO_TRANSACTION]

Testing transaction Recovery with JPA

What we are expecting  and what we are testing
  - Transaction Timeout is set to 600 seconds
  - We set a breakpoint at   OracleXAResource.commit
    ==> This means Wildfly has written a COMMIT record to the Wildfly LOG-STORE
  - After stopping at the first OracleXAResource.commit breakpoint we kill the Wildfly server 
  - Both RMs [ Oracle RAC databases ] are now counting down the Transaction Timeout 
  - If the Timeout is reached the failed transaction becomes visible in the dba_2pc_pending table
  - Trying to get a lock on these records should lead to an ORA-01591 error 
  - After Wildfly restart the Periodic Recovery should run OracleXAResource.commit and release all locks

Preparing and running the test scenario

Start Wildfly in Debug Mode :
Set breakpoint on OracleXAResource.commit and run the application
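
One way to bring Wildfly up with the JDWP agent enabled so that the IDE debugger can attach (port 8787 is only an example):

$ $WILDFLY_HOME/bin/standalone.sh --debug 8787 -c standalone.xml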

Stack Trace 
"default task-3"
oracle.jdbc.xa.client.OracleXAResource.commit(OracleXAResource.java:553)
org.jboss.jca.adapters.jdbc.xa.XAManagedConnection.commit(XAManagedConnection.java:338)
org.jboss.jca.core.tx.jbossts.XAResourceWrapperImpl.commit(XAResourceWrapperImpl.java:107)
com.arjuna.ats.internal.jta.resources.arjunacore.XAResourceRecord.topLevelCommit(XAResourceRecord.java:461)
com.arjuna.ats.arjuna.coordinator.BasicAction.doCommit(BasicAction.java:2810)
com.arjuna.ats.arjuna.coordinator.BasicAction.doCommit(BasicAction.java:2726)
com.arjuna.ats.arjuna.coordinator.BasicAction.phase2Commit(BasicAction.java:1820)
com.arjuna.ats.arjuna.coordinator.BasicAction.End(BasicAction.java:1504)
com.arjuna.ats.arjuna.coordinator.TwoPhaseCoordinator.end(TwoPhaseCoordinator.java:96)
com.arjuna.ats.arjuna.AtomicAction.commit(AtomicAction.java:162)
com.arjuna.ats.internal.jta.transaction.arjunacore.TransactionImple.commitAndDisassociate(TransactionImple.java:1166)
com.arjuna.ats.internal.jta.transaction.arjunacore.BaseTransaction.commit(BaseTransaction.java:126)
com.arjuna.ats.jbossatx.BaseTransactionManagerDelegate.commit(BaseTransactionManagerDelegate.java:75)
org.jboss.tm.usertx.client.ServerVMClientUserTransaction.commit(ServerVMClientUserTransaction.java:173)
com.hhu.wfjpa2pc.Jpa2pcTest.transferMoneyImpl(Jpa2pcTest.java:242)
com.hhu.wfjpa2pc.Jpa2pcTest.transferMoney(Jpa2pcTest.java:166)

Wildfly Check for prepared transaction 
$ $WILDFLY_HOME/bin/jboss-cli.sh --connect --file=list_prepared_xa_tx.cli
{"outcome" => "success"}
0:ffffc0a805c9:f5a10ef:56039e68:d
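
The content of list_prepared_xa_tx.cli is not shown in this article; a script along these lines produces that kind of output via the transactions log-store resource of the Wildfly management model (an assumption about the author's script):

/subsystem=transactions/log-store=log-store:probe()
ls /subsystem=transactions/log-store=log-store/transactions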

Locate and kill JBOSS server process 
0 S oracle    5875  5821  7  80   0 - 413473 futex_ 08:55 ?       00:00:30 
     /usr/java/latest/bin/java .... -Djboss.server.base.dir=/usr/local/wildfly-8.2.0.Final/standalone -c standalone.xml
0 S oracle    6174  5680  0  80   0 - 25827 pipe_w 09:02 pts/1    00:00:00 grep java
[oracle@wls1 WILDFLY]$ kill -9 5875

Now wait [ at least 600 seconds ] until the Transaction becomes visible in dba_2pc_pending

SQL> SELECT * FROM GLOBAL_NAME;
GLOBAL_NAME
----------------
BANKA

SQL> select * from dba_2pc_pending;
LOCAL_TRAN_ID           GLOBAL_TRAN_ID                            STATE         MIX A TRAN_COMMENT
---------------------- ---------------------------------------------------------------- ---------------- --- - ----------------
FAIL_TIM FORCE_TI RETRY_TI OS_USER    OS_TERMINAL  HOST          DB_USER       COMMIT#
-------- -------- -------- ------------ ------------ ---------------- ------------ ----------------
9.21.7139           131077.00000000000000000000FFFFC0A805C90F5A10EF56039E680000000D3 prepared     no
               1
09:07:22      09:15:34 oracle    unknown      wls1.example.com           43619336



SQL> SELECT * FROM GLOBAL_NAME;
GLOBAL_NAME
----------------
BANKB

SQL> select * from dba_2pc_pending;
LOCAL_TRAN_ID           GLOBAL_TRAN_ID                            STATE         MIX A TRAN_COMMENT
---------------------- ---------------------------------------------------------------- ---------------- --- - ----------------
FAIL_TIM FORCE_TI RETRY_TI OS_USER    OS_TERMINAL  HOST          DB_USER       COMMIT#
-------- -------- -------- ------------ ------------ ---------------- ------------ ----------------
4.15.3293           131077.00000000000000000000FFFFC0A805C90F5A10EF56039E680000000D3 prepared     no
               1
09:07:22      09:15:34 oracle    unknown      wls1.example.com           20931538

Check for locks 
-> Connected to  scott/tiger@ract2-scan.grid12c.example.com:1521/banka
select * from accounts for update
*
ERROR at line 1:
ORA-01591: lock held by in-doubt distributed transaction 9.21.7139


-> Connected to  scott/tiger@ract2-scan.grid12c.example.com:1521/bankb
select * from accounts for update
*
ERROR at line 1:
ORA-01591: lock held by in-doubt distributed transaction 4.15.3293


Restart Wildfly in Debug Mode and let the Periodic Recovery Thread commit the transaction 

"Periodic Recovery"
oracle.jdbc.xa.client.OracleXAResource.commit(OracleXAResource.java:553)
org.jboss.jca.adapters.jdbc.xa.XAManagedConnection.commit(XAManagedConnection.java:338)
org.jboss.jca.core.tx.jbossts.XAResourceWrapperImpl.commit(XAResourceWrapperImpl.java:107)
com.arjuna.ats.internal.jta.resources.arjunacore.XAResourceRecord.topLevelCommit(XAResourceRecord.java:461)
com.arjuna.ats.arjuna.coordinator.BasicAction.doCommit(BasicAction.java:2810)
com.arjuna.ats.arjuna.coordinator.BasicAction.doCommit(BasicAction.java:2726)
com.arjuna.ats.arjuna.coordinator.BasicAction.phase2Commit(BasicAction.java:1820)
com.arjuna.ats.arjuna.recovery.RecoverAtomicAction.replayPhase2(RecoverAtomicAction.java:71)
com.arjuna.ats.internal.arjuna.recovery.AtomicActionRecoveryModule.doRecoverTransaction(AtomicActionRecoveryModule.java:152)
com.arjuna.ats.internal.arjuna.recovery.AtomicActionRecoveryModule.processTransactionsStatus(AtomicActionRecoveryModule.java:253)
com.arjuna.ats.internal.arjuna.recovery.AtomicActionRecoveryModule.periodicWorkSecondPass(AtomicActionRecoveryModule.java:109)
com.arjuna.ats.internal.arjuna.recovery.PeriodicRecovery.doWorkInternal(PeriodicRecovery.java:789)
com.arjuna.ats.internal.arjuna.recovery.PeriodicRecovery.run(PeriodicRecovery.java:371)

-> WildFly Thread Periodic Recovery stops at OracleXAResource.commit
-> Press Debugger Command : Continue 
-> WildFly Thread Periodic Recovery has committed Transaction Branch 1
-> WildFly Thread Periodic Recovery stops again at OracleXAResource.commit
-> Press Debugger Command : Continue 
-> WildFly Thread Periodic Recovery has committed Transaction Branch 2
-> Complete Transaction is now committed 

Verify Access to the Database records and Wildfly Prepared Transaction Cleanup 
-> Connected to  scott/tiger@ract2-scan.grid12c.example.com:1521/banka
ACCOUNT                 BALANCE
-------------------------------- ----------
User99_at_BANKA               14000


-> Connected to  scott/tiger@ract2-scan.grid12c.example.com:1521/bankb
ACCOUNT                 BALANCE
-------------------------------- ----------
User98_at_BANKB                6000


List prepared Transaction
$  $WILDFLY_HOME/bin/jboss-cli.sh --connect --file=list_prepared_xa_tx.cli
{"outcome" => "success"}

-> After a successful transaction recovery the locks are gone 

 

Java Code

public void transferMoneyImpl()
      {
        String methodName = "transferMoneyImpl():: ";
        EntityManager em1;
        EntityManager em2;
        UserTransaction ut =null;
      try
        {
        setRunTimeInfo(methodName  + "Entering ... ");
            
        HttpSession session = (HttpSession) FacesContext.getCurrentInstance().getExternalContext().getSession(true);
        if ( session == null)
              {
                throw new IllegalArgumentException(methodName+ ": Could not get HTTP session : ");    
              }                        
        final Object lock = session.getId().intern();       
        synchronized(lock) 
              {
                em1=getEntityManager1();
                em2=getEntityManager2();
                    //
                    // Note: even if we get an EntityManager object we still cannot be sure that the 
                    // EntityManager could open the underlying JDBC connection !
                    //
                if ( em1 == null )
                    setRunTimeInfo(methodName  + "Failed to get EM for PU: " + EMF.getPU1() );
                else if ( em2 == null )
                    setRunTimeInfo(methodName  + "Failed to get EM for PU: " + EMF.getPU2() );
                else
                    setRunTimeInfo(methodName  + "Found both Entity Managers for PUs : " + 
                       EMF.getPU1()  + " and " +  EMF.getPU2()  ); 
                   
                 
                String bankA_acct_name = "User99_at_BANKA";
                Accounts bankA_acct = em1.find(Accounts.class, bankA_acct_name);
                if ( bankA_acct == null)
                    { 
                    setRunTimeInfo(methodName + "Could not locate Account at bankA : " + bankA_acct_name );
                    return;
                    }
                setRunTimeInfo(methodName  +"Account at bank A: " + bankA_acct.getAccount()  + " - Balance: " +  bankA_acct.getBalance() );
                
                String bankB_acct_name = "User98_at_BANKB";
                Accounts bankB_acct = em2.find(Accounts.class, bankB_acct_name);
                if ( bankB_acct == null)
                    { 
                    setRunTimeInfo(methodName + "Could not locate Account at bankB : " + bankB_acct_name );
                    return;
                    }
                setRunTimeInfo(methodName  +"Account at bank B: " + bankB_acct.getAccount()  + " - Balance: " +  bankB_acct.getBalance() );
              
                ut  = (javax.transaction.UserTransaction)new InitialContext().lookup("java:comp/UserTransaction"); 
                    // Set transaction timeout to 120 seconds to avoid any timeouts during testing -
                    // especially when testing transaction recovery by restarting the Wildfly server 
                    // Note: as we kill the JAVA process both RMs will wait 120 s before the Tx becomes visible in dba_2pc_pending 
                int tx_timeout = 120;
                ut.setTransactionTimeout(tx_timeout);
                ut.begin();
                em1.joinTransaction();
                em2.joinTransaction();
                setRunTimeInfo(methodName  + "Both EMs joined our XA Transaction... - TX Timeout: " + tx_timeout );
                BigDecimal b = new BigDecimal(1000);
                bankA_acct.setBalance( bankA_acct.getBalance().add(b) );
                em1.merge(bankA_acct);
                if (isEnableFlush() )
                    em1.flush();
                
                bankB_acct.setBalance( bankB_acct.getBalance().subtract(b) );
                em2.merge(bankB_acct);           
                if (isEnableFlush() )
                    em2.flush();
                
                setRunTimeInfo(methodName  + "Before Commit ... ");                
                ut.commit();
                setRunTimeInfo(methodName  + "Tx Commit worked !");
                checkBalanceImpl();
              }
        } catch ( Throwable t1)
          { 
            try
              {    
              String tx_status = returnTXStatus(ut);
              setRunTimeInfo( methodName  + "FATAL ERROR - Tx Status : " + tx_status  );
                // Use Throwable as we don't want to lose any important information
                // Note: Throwable is the superclass of Exception
               genericException("Error in top level function: " + methodName , (Exception)t1);                          
               if ( ut != null )
                  {
                    int status = ut.getStatus();    
                        // rollback transaction if still active - if not do nothing 
                    if ( status != javax.transaction.Status.STATUS_NO_TRANSACTION   ) {
                        setRunTimeInfo(methodName  + "Before TX rollback ... ");
                        ut.rollback();
                        setRunTimeInfo(methodName  + "TX rollback worked !");
                    } else
                        setRunTimeInfo(methodName  + "TX not active / TX already rolled back");
                  }
              }  catch ( Throwable t2)
                 { 
                   genericException(methodName + "FATAL ERROR during ut.rollback() ", (Exception)t2); 
                 } 
          }
        closeEntityManagers();       
        String tx_status_exit = "";
        try
          {    
            tx_status_exit = returnTXStatus(ut);
          }   catch ( Throwable t3)
            { 
              genericException(methodName + " Error during returning TX status ", (Exception)t3); 
            }    
        setRunTimeInfo(methodName  + "Leaving with TX Status:: " + tx_status_exit );
      }

Reference

Install 12.2 Oracle Domain Service Cluster with Virtualbox env

Overview Domain Service Cluster

-> From Cluster Domains ORACLE WHITE PAPER

Domain Services Cluster Key Facts
DSC: 
The Domain Services Cluster is the heart of the Cluster Domain, as it is configured to provide the services that will be utilized by the various Member Clusters within the Cluster Domain. As per the name, it is a cluster itself, thus providing the required high availability and scalability for the provisioned services.

GIMR :
The centralized GIMR is host to cluster health and diagnostic information for all the clusters in the Cluster Domain.  As such, it is accessed by the client applications of the Autonomous Health Framework (AHF), the Trace File Analyzer (TFA) facility and Rapid Home Provisioning (RHP) Server across the Cluster Domain.  
Thus, it acts in support of the DSC’s role as the management hub.

IOServer [ promised with 12.1 - finally implemented with 12.2 ]
Configuring the Database Member Cluster to use an indirect I/O path to storage is simpler still, requiring no locally configured shared storage, thus dramatically improving the ease of deploying new clusters, and changing the shared storage for those clusters (adding disks to the storage is done at the DSC - an invisible operation to the Database Member Cluster).
Instead, all database I/O operations are channeled through the IOServer processes on the DSC.  From the database instances on the Member Cluster, the database’s data files are fully accessible and seen as individual files, exactly as they would be with locally attached shared storage.  
The real difference is that the actual I/O operation is handed off to the IOServers on the DSC instead of being processed locally on the nodes of
the Member Cluster.  The major benefit of this approach is that new Database Member Clusters don’t need to be configured with locally  attached shared storage, making deployment simpler and easier

Rapid Home Provisioning Server
The Domain Services Cluster may also be configured to host a Rapid Home Provisioning (RHP) server.  RHP is used to manage the provisioning, patching and upgrading of the Oracle Database and GI software stacks
and any other critical software across the Member Clusters in the Cluster Domain.  Through this service, the RHP server would be used to maintain the currency of the installations on the Member Clusters as RHP clients, thus simplifying and standardizing the deployments across the Cluster Domain.


The services available consist of
  • Grid Infrastructure Management Repository ( GIMR)
  • ASM Storage service
  • IOServer service
  • Rapid Home Provisioning Server

 Domain Service Cluster Resources

  •  If you think that a 12.1.0.2 RAC installation is a resource monster, then you are completely wrong:
  • A 12.2 Domain Services Cluster installation will eat up even more resources
 
Memory Resource Calculation when trying to set up a Domain Services Cluster with 16 GByte Memory 
VM DSC System1 [     running GIMR Database ] : 7 GByte
VM DSC System2 [ NOT running GIMR Database ] : 6 GByte
VM NameServer                                : 1 GByte
Window 7 Host                                : 2 GByte 

I really think we need 32 GByte memory for running a Domain Services Cluster ...
But as I'm waiting on a 16 GByte memory upgrade I will try to run the setup with 16 GByte memory.

The major problem is the GIMR database memory requirements [ see DomainServicesCluster_GIMR.dbc ]
 - sga_target           : 4 GByte 
 - pga_aggregate_target : 2 GByte 

This will kill my above 16 GByte setup so I need to change DomainServicesCluster_GIMR.dbc. 
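
Once the GRID zip has been extracted (see the installation steps further down), you can check what the GIMR template will actually ask for with a simple grep - a read-only sketch, using the template path referenced later in this article:

grep -E 'sga_target|pga_aggregate_target|processes' \
    $GRID_HOME/assistants/dbca/templates/DomainServicesCluster_GIMR.dbc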

 

Disk Requirements 
Shared Disks 
03.05.2017  19:09    21.476.933.632 asm1_dsc_20G.vdi
03.05.2017  19:09    21.476.933.632 asm2_dsc_20G.vdi
03.05.2017  19:09    21.476.933.632 asm3_dsc_20G.vdi
03.05.2017  19:09    21.476.933.632 asm4_dsc_20G.vdi
03.05.2017  19:09   107.376.279.552 asm5_GIMR_100G.vdi
03.05.2017  19:09   107.376.279.552 asm6_GIMR_100G.vdi
03.05.2017  19:09   107.376.279.552 asm7_GIMR_100G.vdi

Disk Group +DATA : 4 x 20 GByte
                 : Mode : Normal
Disk Group +GIMR : 3 x 100 GByte 
                 : Mode : External
                 : Space Required during Installation : 289 GByte  
                 : Space provided: 300 GByte

04.05.2017  08:48    22.338.863.104 dsctw21.vdi
03.05.2017  21:44                 0 dsctw21_OBASE_120G.vdi
03.05.2017  18:03    <DIR>          dsctw22
04.05.2017  08:48    15.861.809.152 dsctw22.vdi
03.05.2017  21:43                 0 dsctw22_OBASE_120G.vdi
per RAC VM :  50 GByte for OS, Swap, GRID Software installation
           : 120 GByte for ORACLE_BASE 
               : Space Required for ORACLE_BASE during Installation : 102 GByte  
               : Space provided: 120 GByte  

This translates to about 450 GByte of disk space for installing a Domain Services Cluster

Note:  -> Disk space requirements are quite huge for this type of installation
       -> For the GIMR disk group we need 300 GByte space with EXTERNAL redundancy
 

Network Requirements  
GNS Entry

Name Server Entry for GNS
$ORIGIN ggrid.example.com.
@       IN          NS        ggns.ggrid.example.com. ; NS  grid.example.com
        IN          NS        ns1.example.com.      ; NS example.com
ggns    IN          A         192.168.5.60 ; glue record

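
To sanity-check the subdomain delegation before the installer needs it, you can query the corporate DNS directly. A small sketch, assuming the $ORIGIN ggrid.example.com. zone above and the name server 192.168.5.50 used elsewhere in this article (the cluvfy comp dns commands below do a more thorough check; full name resolution inside the subdomain only works once GNS itself is running):

dig @192.168.5.50 ggrid.example.com NS +norecurse   # should return the delegation to ggns.ggrid.example.com plus the glue A record 192.168.5.60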

Cluvfy commands to verify our RAC VMs

[grid@dsctw21 linuxx64_12201_grid_home]$ cd  /media/sf_kits/Oracle/122/linuxx64_12201_grid_home 

 [grid@dsctw21 linuxx64_12201_grid_home]$ runcluvfy.sh comp admprv -n "dsctw21,dsctw22" -o user_equiv -verbose -fixup
 
[grid@dsctw21 linuxx64_12201_grid_home]$ ./runcluvfy.sh stage -pre crsinst -fixup -n  dsctw21 

[grid@dsctw21 linuxx64_12201_grid_home]$  ./runcluvfy.sh  comp gns -precrsinst -domain  dsctw2.example.com  -vip 192.168.5.60 -verbose 

 [grid@dsctw21 linuxx64_12201_grid_home]$  runcluvfy.sh comp dns -server -domain ggrid.example.com -vipaddress 192.168.5.60/255.255.255.0/enp0s8 -verbose -method root 
      -> The server command should block here
[grid@dsctw21 linuxx64_12201_grid_home]$ ./runcluvfy.sh comp  dns -client -domain   dsctw2.example.com -vip  192.168.5.60  -method root -verbose -last  
    -> The client command with -last should terminate the server too 

Only memory-related errors like PRVF-7530 and DNS configuration check errors should be ignored if you run your VMs with less than 8 GByte memory 

Verifying Physical Memory ...FAILED dsctw21: PRVF-7530 : Sufficient physical memory is not available on node          "dsctw21" [Required physical memory = 8GB (8388608.0KB)] 

Task DNS configuration check - this task verifies whether GNS subdomain delegation has been implemented in the DNS. This warning can be ignored too, as GNS is not running YET

Create ASM Disks

Create the ASM Disks for +DATA Disk Group holding OCR, Voting Disks 

M:\VM\DSCRACTW2>VBoxManage createhd --filename M:\VM\RACTW2\asm1_dsc_20G.vdi --size 20480 --format VDI --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Medium created. UUID: 8c914ad2-30c0-4c4d-88e0-ff94aef761c8

M:\VM\DSCRACTW2>VBoxManage createhd --filename M:\VM\RACTW2\asm2_dsc_20G.vdi --size 20480 --format VDI --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Medium created. UUID: 72791d07-9b21-41dd-8630-483902343e22

M:\VM\DSCRACTW2>VBoxManage createhd --filename M:\VM\RACTW2\asm3_dsc_20G.vdi --size 20480 --format VDI --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Medium created. UUID: 7f5684e6-e4d2-47ab-8166-b259e3e626e5

M:\VM\DSCRACTW2>VBoxManage createhd --filename M:\VM\RACTW2\asm4_dsc_20G.vdi --size 20480 --format VDI --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Medium created. UUID: 2c564704-46ad-4f37-921b-e56f0812c0bf

M:\VM\DSCRACTW2>VBoxManage modifyhd  asm1_dsc_20G.vdi  --type shareable
M:\VM\DSCRACTW2>VBoxManage modifyhd  asm2_dsc_20G.vdi  --type shareable
M:\VM\DSCRACTW2>VBoxManage modifyhd  asm3_dsc_20G.vdi  --type shareable
M:\VM\DSCRACTW2>VBoxManage modifyhd  asm4_dsc_20G.vdi  --type shareable

M:\VM\DSCRACTW2>VBoxManage storageattach dsctw21 --storagectl "SATA" --port 1 --device 0 --type hdd --medium asm1_dsc_20G.vdi  --mtype shareable
M:\VM\DSCRACTW2>VBoxManage storageattach dsctw21 --storagectl "SATA" --port 2 --device 0 --type hdd --medium asm2_dsc_20G.vdi  --mtype shareable
M:\VM\DSCRACTW2>VBoxManage storageattach dsctw21 --storagectl "SATA" --port 3 --device 0 --type hdd --medium asm3_dsc_20G.vdi  --mtype shareable
M:\VM\DSCRACTW2>VBoxManage storageattach dsctw21 --storagectl "SATA" --port 4 --device 0 --type hdd --medium asm4_dsc_20G.vdi  --mtype shareable

M:\VM\DSCRACTW2>VBoxManage storageattach dsctw22 --storagectl "SATA" --port 1 --device 0 --type hdd --medium asm1_dsc_20G.vdi  --mtype shareable
M:\VM\DSCRACTW2>VBoxManage storageattach dsctw22 --storagectl "SATA" --port 2 --device 0 --type hdd --medium asm2_dsc_20G.vdi  --mtype shareable
M:\VM\DSCRACTW2>VBoxManage storageattach dsctw22 --storagectl "SATA" --port 3 --device 0 --type hdd --medium asm3_dsc_20G.vdi  --mtype shareable
M:\VM\DSCRACTW2>VBoxManage storageattach dsctw22 --storagectl "SATA" --port 4 --device 0 --type hdd --medium asm4_dsc_20G.vdi  --mtype shareable

Create and attach the GIMR Disk Group
M:\VM\DSCRACTW2>VBoxManage createhd --filename M:\VM\DSCRACTW2\asm5_GIMR_100G.vdi --size 102400 --format VDI --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Medium created. UUID: 8604878c-8c73-421a-b758-4ef5bf0a3d61
M:\VM\DSCRACTW2>VBoxManage modifyhd  asm5_GIMR_100G.vdi  --type shareable
M:\VM\DSCRACTW2>VBoxManage storageattach dsctw21 --storagectl "SATA" --port 5 --device 0 --type hdd --medium asm5_GIMR_100G.vdi  --mtype shareable
M:\VM\DSCRACTW2>VBoxManage storageattach dsctw22 --storagectl "SATA" --port 5 --device 0 --type hdd --medium asm5_GIMR_100G.vdi  --mtype shareable

M:\VM\DSCRACTW2>VBoxManage createhd --filename M:\VM\DSCRACTW2\asm6_GIMR_100G.vdi --size 102400 --format VDI --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Medium created. UUID: 8604878c-8c73-421a-b758-4ef5bf0a3d61
M:\VM\DSCRACTW2>VBoxManage modifyhd  asm6_GIMR_100G.vdi  --type shareable
M:\VM\DSCRACTW2>VBoxManage storageattach dsctw21 --storagectl "SATA" --port 6 --device 0 --type hdd --medium asm6_GIMR_100G.vdi  --mtype shareable
M:\VM\DSCRACTW2>VBoxManage storageattach dsctw22 --storagectl "SATA" --port 6 --device 0 --type hdd --medium asm6_GIMR_100G.vdi  --mtype shareable

M:\VM\DSCRACTW2>VBoxManage createhd --filename M:\VM\DSCRACTW2\asm7_GIMR_100G.vdi --size 102400 --format VDI --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Medium created. UUID: 8604878c-8c73-421a-b758-4ef5bf0a3d61
M:\VM\DSCRACTW2>VBoxManage modifyhd  asm7_GIMR_100G.vdi  --type shareable
M:\VM\DSCRACTW2>VBoxManage storageattach dsctw21 --storagectl "SATA" --port 7 --device 0 --type hdd --medium asm7_GIMR_100G.vdi  --mtype shareable
M:\VM\DSCRACTW2>VBoxManage storageattach dsctw22 --storagectl "SATA" --port 7 --device 0 --type hdd --medium asm7_GIMR_100G.vdi  --mtype shareable

Create and Attach the ORACLE_BASE disks - each VM gets its own ORACLE_BASE disk
M:\VM\DSCRACTW2>VBoxManage createhd --filename M:\VM\DSCRACTW2\dsctw21_OBASE_120G.vdi --size 122800 --format VDI --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Medium created. UUID: 35ab9546-2967-4f43-9a52-305906ff24e1
M:\VM\DSCRACTW2>VBoxManage createhd --filename M:\VM\DSCRACTW2\dsctw22_OBASE_120G.vdi --size 122800 --format VDI --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Medium created. UUID: 32e1fcaa-9609-4027-968e-2d35d33584a8

M:\VM\DSCRACTW2> VBoxManage storageattach dsctw21 --storagectl "SATA" --port 8 --device 0 --type hdd --medium dsctw21_OBASE_120G.vdi 
M:\VM\DSCRACTW2> VBoxManage storageattach dsctw22 --storagectl "SATA" --port 8 --device 0 --type hdd --medium dsctw22_OBASE_120G.vdi 
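
The create/share/attach sequence above is quite repetitive. On a bash-capable host (an assumption - the commands in this article were run on a Windows host) the +DATA disks could be scripted roughly like this, using exactly the VBoxManage options shown above:

# Sketch: create the four shareable +DATA disks and attach them to both DSC VMs
for i in 1 2 3 4; do
  VBoxManage createhd --filename asm${i}_dsc_20G.vdi --size 20480 --format VDI --variant Fixed
  VBoxManage modifyhd  asm${i}_dsc_20G.vdi --type shareable
  for vm in dsctw21 dsctw22; do
    VBoxManage storageattach $vm --storagectl "SATA" --port $i --device 0 \
      --type hdd --medium asm${i}_dsc_20G.vdi --mtype shareable
  done
done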

You may use parted to configure and mount the disk space 

The Linux XFS file systems should NOW look like the following 
[root@dsctw21 app]# df / /u01 /u01/app/grid
Filesystem                  1K-blocks    Used Available Use% Mounted on
/dev/mapper/ol_ractw21-root  15718400 9085996   6632404  58% /
/dev/mapper/ol_ractw21-u01   15718400 7409732   8308668  48% /u01
/dev/sdi1                   125683756   32928 125650828   1% /u01/app/grid
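
A minimal sketch of those parted / mkfs.xfs steps for the ORACLE_BASE disk, assuming it shows up as /dev/sdi as in the df output above (adjust the device name to your system):

parted /dev/sdi mklabel msdos
parted /dev/sdi mkpart primary xfs 1MiB 100%
mkfs.xfs /dev/sdi1
mkdir -p /u01/app/grid
mount /dev/sdi1 /u01/app/grid     # add a matching /etc/fstab entry to make the mount reboot-safe
chown grid:oinstall /u01/app/grid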

  • See Chapter : Using parted to create a new ORACLE_BASE partition for a Domain Service Cluster in the following article

Disk protections for our ASM disks

  • Disk label should be msdos
  • To allow the installation process to pick up the disks, set the following protections
Model: ATA VBOX HARDDISK (scsi)
Disk /dev/sdb: 21.5GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags: 
..
brw-rw----. 1 grid asmadmin 8,  16 May  5 08:21 /dev/sdb
brw-rw----. 1 grid asmadmin 8,  32 May  5 08:21 /dev/sdc
brw-rw----. 1 grid asmadmin 8,  48 May  5 08:21 /dev/sdd
brw-rw----. 1 grid asmadmin 8,  64 May  5 08:21 /dev/sde
brw-rw----. 1 grid asmadmin 8,  80 May  5 08:21 /dev/sdf
brw-rw----. 1 grid asmadmin 8,  96 May  5 08:21 /dev/sdg
  • If you need to recover from a failed installation and the disks are already labeled by AFD please read:

Start the installation process

Unset the ORACLE_BASE environment variable.
[grid@dsctw21 grid]$ unset ORACLE_BASE
[grid@dsctw21 ~]$ cd $GRID_HOME
[grid@dsctw21 grid]$ pwd
/u01/app/122/grid
[grid@dsctw21 grid]$ unzip -q  /media/sf_kits/Oracle/122/linuxx64_12201_grid_home.zip
As root, allow X Windows applications to run on this node from any host  
[root@dsctw21 ~]# xhost +
access control disabled, clients can connect from any host
[grid@dsctw21 grid]$ export DISPLAY=:0.0

If you are running a test env with low memory resources [ <= 16 GByte ] don't forget to limit the GIMR memory requirements by reading: 

Start of GIMR database fails during 12.2 installation

Now start the Oracle Grid Infrastructure installer by running the following command:

[grid@dsctw21 grid]$ ./gridSetup.sh
Launching Oracle Grid Infrastructure Setup Wizard...

Initial Installation Steps

 Run the required root scripts on all cluster nodes: 

[root@dsctw22 app]# /u01/app/oraInventory/orainstRoot.sh 

Run root.sh on the first RAC node:
[root@dsctw21 ~]# /u01/app/122/grid/root.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/122/grid
...
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/app/122/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/dsctw21/crsconfig/rootcrs_dsctw21_2017-05-04_12-22-04AM.log
2017/05/04 12:22:07 CLSRSC-594: Executing installation step 1 of 19: 'SetupTFA'.
2017/05/04 12:22:07 CLSRSC-4001: Installing Oracle Trace File Analyzer (TFA) Collector.
2017/05/04 12:22:07 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
2017/05/04 12:22:07 CLSRSC-594: Executing installation step 2 of 19: 'ValidateEnv'.
2017/05/04 12:22:09 CLSRSC-363: User ignored prerequisites during installation
2017/05/04 12:22:09 CLSRSC-594: Executing installation step 3 of 19: 'CheckFirstNode'.
2017/05/04 12:22:11 CLSRSC-594: Executing installation step 4 of 19: 'GenSiteGUIDs'.
2017/05/04 12:22:12 CLSRSC-594: Executing installation step 5 of 19: 'SaveParamFile'.
2017/05/04 12:22:13 CLSRSC-594: Executing installation step 6 of 19: 'SetupOSD'.
2017/05/04 12:22:16 CLSRSC-594: Executing installation step 7 of 19: 'CheckCRSConfig'.
2017/05/04 12:22:16 CLSRSC-594: Executing installation step 8 of 19: 'SetupLocalGPNP'.
2017/05/04 12:22:18 CLSRSC-594: Executing installation step 9 of 19: 'ConfigOLR'.
2017/05/04 12:22:19 CLSRSC-594: Executing installation step 10 of 19: 'ConfigCHMOS'.
2017/05/04 12:22:19 CLSRSC-594: Executing installation step 11 of 19: 'CreateOHASD'.
2017/05/04 12:22:20 CLSRSC-594: Executing installation step 12 of 19: 'ConfigOHASD'.
2017/05/04 12:22:21 CLSRSC-594: Executing installation step 13 of 19: 'InstallAFD'.
2017/05/04 12:22:23 CLSRSC-594: Executing installation step 14 of 19: 'InstallACFS'.
2017/05/04 12:22:24 CLSRSC-594: Executing installation step 15 of 19: 'InstallKA'.
2017/05/04 12:22:28 CLSRSC-594: Executing installation step 16 of 19: 'InitConfig'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'dsctw21'
CRS-2673: Attempting to stop 'ora.ctssd' on 'dsctw21'

....
CRS-2676: Start of 'ora.diskmon' on 'dsctw21' succeeded
CRS-2676: Start of 'ora.cssd' on 'dsctw21' succeeded

Disk label(s) created successfully. Check /u01/app/grid/cfgtoollogs/asmca/asmca-170504PM122337.log for details.
Disk groups created successfully. Check /u01/app/grid/cfgtoollogs/asmca/asmca-170504PM122337.log for details.

2017/05/04 12:24:28 CLSRSC-482: Running command: '/u01/app/122/grid/bin/ocrconfig -upgrade grid oinstall'
CRS-2672: Attempting to start 'ora.crf' on 'dsctw21'
CRS-2672: Attempting to start 'ora.storage' on 'dsctw21'
CRS-2676: Start of 'ora.storage' on 'dsctw21' succeeded
CRS-2676: Start of 'ora.crf' on 'dsctw21' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'dsctw21'
CRS-2676: Start of 'ora.crsd' on 'dsctw21' succeeded
CRS-4256: Updating the profile
Successful addition of voting disk c397468902ba4f76bf99287b7e8b1e91.
Successful addition of voting disk fbb3600816064f02bf3066783b703f6d.
Successful addition of voting disk f5dec135cf474f56bf3a69bdba629daf.
Successfully replaced voting disk group with +DATA.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   c397468902ba4f76bf99287b7e8b1e91 (AFD:DATA1) [DATA]
 2. ONLINE   fbb3600816064f02bf3066783b703f6d (AFD:DATA2) [DATA]
 3. ONLINE   f5dec135cf474f56bf3a69bdba629daf (AFD:DATA3) [DATA]
Located 3 voting disk(s).
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'dsctw21'
CRS-2673: Attempting to stop 'ora.crsd' on 'dsctw21'
..'
CRS-2677: Stop of 'ora.driver.afd' on 'dsctw21' succeeded
CRS-2677: Stop of 'ora.gipcd' on 'dsctw21' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'dsctw21' has completed
CRS-4133: Oracle High Availability Services has been stopped.
2017/05/04 12:25:58 CLSRSC-594: Executing installation step 17 of 19: 'StartCluster'.
CRS-4123: Starting Oracle High Availability Services-managed resources
..
CRS-2676: Start of 'ora.crsd' on 'dsctw21' succeeded
CRS-6023: Starting Oracle Cluster Ready Services-managed resources
CRS-6017: Processing resource auto-start for servers: dsctw21
CRS-6016: Resource auto-start has completed for server dsctw21
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2017/05/04 12:28:36 CLSRSC-343: Successfully started Oracle Clusterware stack
2017/05/04 12:28:36 CLSRSC-594: Executing installation step 18 of 19: 'ConfigNode'.
CRS-2672: Attempting to start 'ora.net1.network' on 'dsctw21'
CRS-2676: Start of 'ora.net1.network' on 'dsctw21' succeeded
..
CRS-2676: Start of 'ora.DATA.dg' on 'dsctw21' succeeded
2017/05/04 12:31:44 CLSRSC-594: Executing installation step 19 of 19: 'PostConfig'.

Disk label(s) created successfully. Check /u01/app/grid/cfgtoollogs/asmca/asmca-170504PM123151.log for details.
2017/05/04 12:38:07 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

Run root.sh on the second Node:
[root@dsctw22 app]# /u01/app/122/grid/root.sh
Performing root user operation.
..
2017/05/04 12:47:44 CLSRSC-594: Executing installation step 18 of 19: 'ConfigNode'.
2017/05/04 12:47:54 CLSRSC-594: Executing installation step 19 of 19: 'PostConfig'.
2017/05/04 12:48:19 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

After all root scripts have finished, continue the installation process !
  • After the GIMR database has been created, the installation process runs a final cluvfy check - hopefully successful. Verify your installation logs:
Install Logs Location 
 /u01/app/oraInventory/logs/GridSetupActions2017-05-05_02-24-23PM/gridSetupActions2017-05-05_02-24-23PM.log

Verify Domain Service Cluster setup using cluvfy

[grid@dsctw21 ~]$ cluvfy stage -post crsinst -n dsctw21,dsctw22

Verifying Node Connectivity ...
  Verifying Hosts File ...PASSED
  Verifying Check that maximum (MTU) size packet goes through subnet ...PASSED
  Verifying subnet mask consistency for subnet "192.168.2.0" ...PASSED
  Verifying subnet mask consistency for subnet "192.168.5.0" ...PASSED
Verifying Node Connectivity ...PASSED
Verifying Multicast check ...PASSED
Verifying ASM filter driver configuration consistency ...PASSED
Verifying Time zone consistency ...PASSED
Verifying Cluster Manager Integrity ...PASSED
Verifying User Mask ...PASSED
Verifying Cluster Integrity ...PASSED
Verifying OCR Integrity ...PASSED
Verifying CRS Integrity ...
  Verifying Clusterware Version Consistency ...PASSED
Verifying CRS Integrity ...PASSED
Verifying Node Application Existence ...PASSED
Verifying Single Client Access Name (SCAN) ...
  Verifying DNS/NIS name service 'dsctw2-scan.dsctw2.dsctw2.example.com' ...
    Verifying Name Service Switch Configuration File Integrity ...PASSED
  Verifying DNS/NIS name service 'dsctw2-scan.dsctw2.dsctw2.example.com' ...PASSED
Verifying Single Client Access Name (SCAN) ...PASSED
Verifying OLR Integrity ...PASSED
Verifying GNS Integrity ...
  Verifying subdomain is a valid name ...PASSED
  Verifying GNS VIP belongs to the public network ...PASSED
  Verifying GNS VIP is a valid address ...PASSED
  Verifying name resolution for GNS sub domain qualified names ...PASSED
  Verifying GNS resource ...PASSED
  Verifying GNS VIP resource ...PASSED
Verifying GNS Integrity ...PASSED
Verifying Voting Disk ...PASSED
Verifying ASM Integrity ...
  Verifying Node Connectivity ...
    Verifying Hosts File ...PASSED
    Verifying Check that maximum (MTU) size packet goes through subnet ...PASSED
    Verifying subnet mask consistency for subnet "192.168.2.0" ...PASSED
    Verifying subnet mask consistency for subnet "192.168.5.0" ...PASSED
  Verifying Node Connectivity ...PASSED
Verifying ASM Integrity ...PASSED
Verifying Device Checks for ASM ...PASSED
Verifying ASM disk group free space ...PASSED
Verifying I/O scheduler ...
  Verifying Package: cvuqdisk-1.0.10-1 ...PASSED
Verifying I/O scheduler ...PASSED
Verifying User Not In Group "root": grid ...PASSED
Verifying Clock Synchronization ...
CTSS is in Observer state. Switching over to clock synchronization checks using NTP

  Verifying Network Time Protocol (NTP) ...
    Verifying '/etc/chrony.conf' ...PASSED
    Verifying '/var/run/chronyd.pid' ...PASSED
    Verifying Daemon 'chronyd' ...PASSED
    Verifying NTP daemon or service using UDP port 123 ...PASSED
    Verifying chrony daemon is synchronized with at least one external time source ...PASSED
  Verifying Network Time Protocol (NTP) ...PASSED
Verifying Clock Synchronization ...PASSED
Verifying Network configuration consistency checks ...PASSED
Verifying File system mount options for path GI_HOME ...PASSED

Post-check for cluster services setup was successful. 

CVU operation performed:      stage -post crsinst
Date:                         May 7, 2017 10:10:04 AM
CVU home:                     /u01/app/122/grid/
User:                         grid

Check cluster Resources used by DSC

[root@dsctw21 ~]# crs
*****  Local Resources: *****
Resource NAME                  TARGET     STATE           SERVER       STATE_DETAILS                       
-------------------------      ---------- ----------      ------------ ------------------                  
ora.ASMNET1LSNR_ASM.lsnr       ONLINE     ONLINE          dsctw21      STABLE   
ora.DATA.dg                    ONLINE     ONLINE          dsctw21      STABLE   
ora.LISTENER.lsnr              ONLINE     ONLINE          dsctw21      STABLE   
ora.MGMT.GHCHKPT.advm          ONLINE     ONLINE          dsctw21      STABLE   
ora.MGMT.dg                    ONLINE     ONLINE          dsctw21      STABLE   
ora.chad                       ONLINE     ONLINE          dsctw21      STABLE   
ora.helper                     ONLINE     ONLINE          dsctw21      IDLE,STABLE   
ora.mgmt.ghchkpt.acfs          ONLINE     ONLINE          dsctw21      mounted on /mnt/oracle/rhpimages/chkbase,STABLE
ora.net1.network               ONLINE     ONLINE          dsctw21      STABLE   
ora.ons                        ONLINE     ONLINE          dsctw21      STABLE   
ora.proxy_advm                 ONLINE     ONLINE          dsctw21      STABLE   
*****  Cluster Resources: *****
Resource NAME               INST   TARGET       STATE        SERVER          STATE_DETAILS
--------------------------- ----   ------------ ------------ --------------- -----------------------------------------
ora.LISTENER_SCAN1.lsnr        1   ONLINE       ONLINE       dsctw21         STABLE  
ora.LISTENER_SCAN2.lsnr        1   ONLINE       ONLINE       dsctw21         STABLE  
ora.LISTENER_SCAN3.lsnr        1   ONLINE       ONLINE       dsctw21         STABLE  
ora.MGMTLSNR                   1   ONLINE       ONLINE       dsctw21         169.254.108.231 192.168.2.151,STABLE
ora.asm                        1   ONLINE       ONLINE       dsctw21         Started,STABLE  
ora.asm                        2   ONLINE       OFFLINE      -               STABLE  
ora.asm                        3   OFFLINE      OFFLINE      -               STABLE  
ora.cvu                        1   ONLINE       ONLINE       dsctw21         STABLE  
ora.dsctw21.vip                1   ONLINE       ONLINE       dsctw21         STABLE  
ora.dsctw22.vip                1   ONLINE       INTERMEDIATE dsctw21         FAILED OVER,STABLE 
ora.gns                        1   ONLINE       ONLINE       dsctw21         STABLE  
ora.gns.vip                    1   ONLINE       ONLINE       dsctw21         STABLE  
ora.ioserver                   1   ONLINE       OFFLINE      -               STABLE  
ora.ioserver                   2   ONLINE       ONLINE       dsctw21         STABLE  
ora.ioserver                   3   ONLINE       OFFLINE      -               STABLE  
ora.mgmtdb                     1   ONLINE       ONLINE       dsctw21         Open,STABLE  
ora.qosmserver                 1   ONLINE       ONLINE       dsctw21         STABLE  
ora.rhpserver                  1   ONLINE       ONLINE       dsctw21         STABLE  
ora.scan1.vip                  1   ONLINE       ONLINE       dsctw21         STABLE  
ora.scan2.vip                  1   ONLINE       ONLINE       dsctw21         STABLE  
ora.scan3.vip                  1   ONLINE       ONLINE       dsctw21         STABLE
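
The crs command used above appears to be a local formatting helper script; with plain 12.2 tooling the same overview can be taken with, for example:

$GRID_HOME/bin/crsctl stat res -t     # tabular status of all local and cluster resources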

The following resources should be ONLINE for a DSC cluster:

-> ioserver
-> mgmtdb
-> rhpserver
  • If any of these resources are not ONLINE, try to start them with srvctl, for example:
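
As the grid user (a sketch - the resource names are the ones listed above):

srvctl start ioserver
srvctl start mgmtdb
srvctl start rhpserver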

Verify Domain Service cluster setup using srvctl, rhpctl, asmcmd

[grid@dsctw21 peer]$ rhpctl query server
Rapid Home Provisioning Server (RHPS): dsctw2
Storage base path: /mnt/oracle/rhpimages
Disk Groups: MGMT
Port number: 23795
[grid@dsctw21 peer]$ rhpctl query workingcopy
No software home has been configured

[grid@dsctw21 peer]$ rhpctl query image
No image has been configured

Check ASM disk groups
[grid@dsctw21 peer]$ asmcmd lsdg
State    Type    Rebal  Sector  Logical_Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  NORMAL  N         512             512   4096  4194304     81920    81028            20480           30274              0             Y  DATA/
MOUNTED  EXTERN  N         512             512   4096  4194304    307200   265376                0          265376              0             N  MGMT/

Verify GNS
[grid@dsctw21 peer]$  srvctl config gns
GNS is enabled.
GNS VIP addresses: 192.168.5.60
Domain served by GNS: dsctw2.example.com

[grid@dsctw21 peer]$  srvctl config gns -list
dsctw2.example.com DLV 50343 10 18 ( zfiaA8U30oiGSATInCdyN7pIKf1ZIVQhHsF6OQti9bvXw7dUhNmDv/txClkHX6BjkLTBbPyWGdRjEMf+uUqYHA== ) Unique Flags: 0x314
dsctw2.example.com DNSKEY 7 3 10 ( MIIBCgKCAQEAmxQnG2xkpQMXGRXD2tBTZkUKYUsV+Sj/w6YmpFdpMQVoNVSXJCWgCDqIjLrfVA2AQUeEaAek6pfOlMp6Tev2nPVvNqPpul5Fs63cFVzwjdTI4zU6lSC6+2UVJnAN6BTEmrOzKKt/kuxoNNI7V4DZ5Nj6UoUJ2MXGr/+RSU44GboHnrftvFaVN8pp0TOoOBTj5hHH8C73I+lFfDNhMXEY8WQhb1nP6Cv02qPMsbb8edq1Dy8lt6N6kzjh+9hKPNdqM7HB3OVV5L18E5HtLjWOhMZLqJ7oDTDsQcMMuYmfFjbi3JvGQrdTlGHAv9f4W/vRL/KV8bDkDFnSRSFubxsbdQIDAQAB ) Unique Flags: 0x314
dsctw2.example.com NSEC3PARAM 10 0 2 ( jvm6kO+qyv65ztXFy53Dkw== ) Unique Flags: 0x314
dsctw2-scan.dsctw2 A 192.168.5.231 Unique Flags: 0x1
dsctw2-scan.dsctw2 A 192.168.5.234 Unique Flags: 0x1
dsctw2-scan.dsctw2 A 192.168.5.235 Unique Flags: 0x1
dsctw2-scan1-vip.dsctw2 A 192.168.5.231 Unique Flags: 0x1
dsctw2-scan2-vip.dsctw2 A 192.168.5.235 Unique Flags: 0x1
dsctw2-scan3-vip.dsctw2 A 192.168.5.234 Unique Flags: 0x1

[grid@dsctw21 peer]$ nslookup dsctw2-scan.dsctw2.example.com
Server:        192.168.5.50
Address:    192.168.5.50#53

Non-authoritative answer:
Name:    dsctw2-scan.dsctw2.example.com
Address: 192.168.5.234
Name:    dsctw2-scan.dsctw2.example.com
Address: 192.168.5.231
Name:    dsctw2-scan.dsctw2.example.com
Address: 192.168.5.235


Verify Management Repository
[grid@dsctw21 peer]$ oclumon manage -get MASTER
Master = dsctw21

[grid@dsctw21 peer]$ srvctl status mgmtdb 
Database is enabled
Instance -MGMTDB is running on node dsctw21
[grid@dsctw21 peer]$ srvctl config mgmtdb
Database unique name: _mgmtdb
Database name: 
Oracle home: <CRS home>
Oracle user: grid
Spfile: +MGMT/_MGMTDB/PARAMETERFILE/spfile.272.943198901
Password file: 
Domain: 
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Type: Management
PDB name: GIMR_DSCREP_10
PDB service: GIMR_DSCREP_10
Cluster name: dsctw2
Database instance: -MGMTDB

--> The PDB name and service name GIMR_DSCREP_10 are NEW with 12.2
    With lower versions you get the cluster name here !



[grid@dsctw21 peer]$  oclumon manage -get reppath
CHM Repository Path = +MGMT/_MGMTDB/4EC81829D5715AD0E0539705A8C084C6/DATAFILE/sysmgmtdata.280.943199159
[grid@dsctw21 peer]$ asmcmd  ls -ls +MGMT/_MGMTDB/4EC81829D5715AD0E0539705A8C084C6/DATAFILE/sysmgmtdata.280.943199159
Type      Redund  Striped  Time             Sys  Block_Size  Blocks       Bytes       Space  Name
DATAFILE  UNPROT  COARSE   MAY 05 17:00:00  Y          8192  262145  2147491840  2155872256  sysmgmtdata.280.943199159

[grid@dsctw21 peer]$  oclumon dumpnodeview -allnodes
----------------------------------------
Node: dsctw21 Clock: '2017-05-06 09.29.55+0200' SerialNo:4469 
----------------------------------------
SYSTEM:
#pcpus: 1 #cores: 4 #vcpus: 4 cpuht: N chipname: Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz cpuusage: 26.48 cpusystem: 2.78 cpuuser: 23.70 cpunice: 0.00 cpuiowait: 0.05 cpusteal: 0.00 cpuq: 0 physmemfree: 695636 physmemtotal: 6708204 mcache: 2800060 swapfree: 7202032 swaptotal: 8257532 hugepagetotal: 0 hugepagefree: 0 hugepagesize: 2048 ior: 311 iow: 229 ios: 92 swpin: 0 swpout: 0 pgin: 3 pgout: 40 netr: 32.601 netw: 27.318 procs: 479 procsoncpu: 3 #procs_blocked: 0 rtprocs: 17 rtprocsoncpu: N/A #fds: 34496 #sysfdlimit: 6815744 #disks: 14 #nics: 3 loadavg1: 2.24 loadavg5: 1.99 loadavg15: 1.89 nicErrors: 0
TOP CONSUMERS:
topcpu: 'gnome-shell(6512) 5.00' topprivmem: 'java(660) 347292' topshm: 'mdb_dbw0_-MGMTDB(28946) 352344' topfd: 'ocssd.bin(6204) 370' topthread: 'crsd.bin(8615) 52' 

----------------------------------------
Node: dsctw22 Clock: '2017-05-06 09.29.55+0200' SerialNo:3612 
----------------------------------------
SYSTEM:
#pcpus: 1 #cores: 4 #vcpus: 4 cpuht: N chipname: Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz cpuusage: 1.70 cpusystem: 0.77 cpuuser: 0.92 cpunice: 0.00 cpuiowait: 0.00 cpusteal: 0.00 cpuq: 0 physmemfree: 828740 physmemtotal: 5700592 mcache: 2588336 swapfree: 8244596 swaptotal: 8257532 hugepagetotal: 0 hugepagefree: 0 hugepagesize: 2048 ior: 2 iow: 68 ios: 19 swpin: 0 swpout: 0 pgin: 0 pgout: 63 netr: 10.747 netw: 18.222 procs: 376 procsoncpu: 1 #procs_blocked: 0 rtprocs: 15 rtprocsoncpu: N/A #fds: 29120 #sysfdlimit: 6815744 #disks: 14 #nics: 3 loadavg1: 1.44 loadavg5: 1.39 loadavg15: 1.43 nicErrors: 0
TOP CONSUMERS:
topcpu: 'orarootagent.bi(7345) 1.20' topprivmem: 'java(8936) 270140' topshm: 'ocssd.bin(5833) 119060' topfd: 'gnsd.bin(9072) 1242' topthread: 'crsd.bin(7137) 49' 


Verify TFA status
[grid@dsctw21 peer]$ tfactl print status
TFA-00099: Printing status of TFA

.-----------------------------------------------------------------------------------------------.
| Host    | Status of TFA | PID   | Port | Version    | Build ID             | Inventory Status |
+---------+---------------+-------+------+------------+----------------------+------------------+
| dsctw21 | RUNNING       | 32084 | 5000 | 12.2.1.0.0 | 12210020161122170355 | COMPLETE         |
| dsctw22 | RUNNING       |  3929 | 5000 | 12.2.1.0.0 | 12210020161122170355 | COMPLETE         |
'---------+---------------+-------+------+------------+----------------------+------------------'

[grid@dsctw21 peer]$ tfactl print config
.------------------------------------------------------------------------------------.
|                                       dsctw21                                      |
+-----------------------------------------------------------------------+------------+
| Configuration Parameter                                               | Value      |
+-----------------------------------------------------------------------+------------+
| TFA Version                                                           | 12.2.1.0.0 |
| Java Version                                                          | 1.8        |
| Public IP Network                                                     | true       |
| Automatic Diagnostic Collection                                       | true       |
| Alert Log Scan                                                        | true       |
| Disk Usage Monitor                                                    | true       |
| Managelogs Auto Purge                                                 | false      |
| Trimming of files during diagcollection                               | true       |
| Inventory Trace level                                                 | 1          |
| Collection Trace level                                                | 1          |
| Scan Trace level                                                      | 1          |
| Other Trace level                                                     | 1          |
| Repository current size (MB)                                          | 13         |
| Repository maximum size (MB)                                          | 10240      |
| Max Size of TFA Log (MB)                                              | 50         |
| Max Number of TFA Logs                                                | 10         |
| Max Size of Core File (MB)                                            | 20         |
| Max Collection Size of Core Files (MB)                                | 200        |
| Minimum Free Space to enable Alert Log Scan (MB)                      | 500        |
| Time interval between consecutive Disk Usage Snapshot(minutes)        | 60         |
| Time interval between consecutive Managelogs Auto Purge(minutes)      | 60         |
| Logs older than the time period will be auto purged(days[d]|hours[h]) | 30d        |
| Automatic Purging                                                     | true       |
| Age of Purging Collections (Hours)                                    | 12         |
| TFA IPS Pool Size                                                     | 5          |
'-----------------------------------------------------------------------+------------'

.------------------------------------------------------------------------------------.
|                                       dsctw22                                      |
+-----------------------------------------------------------------------+------------+
| Configuration Parameter                                               | Value      |
+-----------------------------------------------------------------------+------------+
| TFA Version                                                           | 12.2.1.0.0 |
| Java Version                                                          | 1.8        |
| Public IP Network                                                     | true       |
| Automatic Diagnostic Collection                                       | true       |
| Alert Log Scan                                                        | true       |
| Disk Usage Monitor                                                    | true       |
| Managelogs Auto Purge                                                 | false      |
| Trimming of files during diagcollection                               | true       |
| Inventory Trace level                                                 | 1          |
| Collection Trace level                                                | 1          |
| Scan Trace level                                                      | 1          |
| Other Trace level                                                     | 1          |
| Repository current size (MB)                                          | 0          |
| Repository maximum size (MB)                                          | 10240      |
| Max Size of TFA Log (MB)                                              | 50         |
| Max Number of TFA Logs                                                | 10         |
| Max Size of Core File (MB)                                            | 20         |
| Max Collection Size of Core Files (MB)                                | 200        |
| Minimum Free Space to enable Alert Log Scan (MB)                      | 500        |
| Time interval between consecutive Disk Usage Snapshot(minutes)        | 60         |
| Time interval between consecutive Managelogs Auto Purge(minutes)      | 60         |
| Logs older than the time period will be auto purged(days[d]|hours[h]) | 30d        |
| Automatic Purging                                                     | true       |
| Age of Purging Collections (Hours)                                    | 12         |
| TFA IPS Pool Size                                                     | 5          |
'-----------------------------------------------------------------------+------------'

[grid@dsctw21 peer]$ tfactl  print  actions
.-----------------------------------------------------------.
| HOST | START TIME | END TIME | ACTION | STATUS | COMMENTS |
+------+------------+----------+--------+--------+----------+
'------+------------+----------+--------+--------+----------'

[grid@dsctw21 peer]$ tfactl print errors 
Total Errors found in database: 0
DONE

[grid@dsctw21 peer]$  tfactl print startups
++++++ Startup Start +++++
Event Id     : nullfom14v2mu0u82nkf5uufjoiuia
File Name    : /u01/app/grid/diag/apx/+apx/+APX1/trace/alert_+APX1.log
Startup Time : Fri May 05 15:07:03 CEST 2017
Dummy        : FALSE
++++++ Startup End +++++
++++++ Startup Start +++++
Event Id     : nullgp6ei43ke5qeqo8ugemsdqrle1
File Name    : /u01/app/grid/diag/asm/+asm/+ASM1/trace/alert_+ASM1.log
Startup Time : Fri May 05 14:58:28 CEST 2017
Dummy        : FALSE
++++++ Startup End +++++
++++++ Startup Start +++++
Event Id     : nullt7p1681pjq48qt17p4f8odrrgf
File Name    : /u01/app/grid/diag/rdbms/_mgmtdb/-MGMTDB/trace/alert_-MGMTDB.log
Startup Time : Fri May 05 15:27:13 CEST 2017
Dummy        : FALSE


Potential Error: ORA-845 starting IOServer Instances

 
[grid@dsctw21 ~]$ srvctl start ioserver
PRCR-1079 : Failed to start resource ora.ioserver
CRS-5017: The resource action "ora.ioserver start" encountered the following error: 
ORA-00845: MEMORY_TARGET not supported on this system
. For details refer to "(:CLSN00107:)" in "/u01/app/grid/diag/crs/dsctw22/crs/trace/crsd_oraagent_grid.trc".

CRS-2674: Start of 'ora.ioserver' on 'dsctw22' failed
CRS-5017: The resource action "ora.ioserver start" encountered the following error: 
ORA-00845: MEMORY_TARGET not supported on this system
. For details refer to "(:CLSN00107:)" in "/u01/app/grid/diag/crs/dsctw21/crs/trace/crsd_oraagent_grid.trc".

CRS-2674: Start of 'ora.ioserver' on 'dsctw21' failed
CRS-2632: There are no more servers to try to place resource 'ora.ioserver' on that would satisfy its placement policy

From +IOS1 alert.log :  ./diag/ios/+ios/+IOS1/trace/alert_+IOS1.log

WARNING: You are trying to use the MEMORY_TARGET feature. This feature requires the /dev/shm file system to be mounted for at least 4513071104 bytes. /dev/shm is either not mounted or is mounted with available space less than this size. Please fix this so that MEMORY_TARGET can work as expected. Current available is 2117439488 and used is 1317158912 bytes. Ensure that the mount point is /dev/shm for this directory.

Verify /dev/shm
[root@dsctw22 ~]# df -h /dev/shm
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           2.8G  1.3G  1.5G  46% /dev/shm


Modify /etc/fstab 
# /etc/fstab
# Created by anaconda on Tue Apr  4 12:13:16 2017
#
#
tmpfs                                           /dev/shm                tmpfs   defaults,size=6g  0 0 

and increase  /dev/shm  to 6 GByte. Remount tmpfs 
[root@dsctw22 ~]# mount -o remount tmpfs 
[root@dsctw22 ~]# df -h /dev/shm
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           6.0G  1.3G  4.8G  21% /dev/shm

Do a silent installation

From Grid Infrastructure Installation and Upgrade Guide
A.7.2 Running Postinstallation Configuration Using Response File

Complete this procedure to run configuration assistants with the executeConfigTools command.

Edit the response file and specify the required passwords for your configuration. 
You can use the response file created during installation, located at $ORACLE_HOME/install/response/product_timestamp.rsp. 

[root@dsctw21 ~]# ls -l $ORACLE_HOME/install/response/
total 112
-rw-r--r--. 1 grid oinstall 34357 Jan 26 17:10 grid_2017-01-26_04-10-28PM.rsp
-rw-r--r--. 1 grid oinstall 35599 May 23 15:50 grid_2017-05-22_04-51-05PM.rsp

Verify the password settings for Oracle Grid Infrastructure:
[root@dsctw21 ~]# cd  $ORACLE_HOME/install/response/
[root@dsctw21 response]#  grep -i passw grid_2017-05-22_04-51-05PM.rsp
# Password for SYS user of Oracle ASM
oracle.install.asm.SYSASMPassword=sys
# Password for ASMSNMP account
oracle.install.asm.monitorPassword=sys

I have not verified this, but it seems that not setting these passwords could lead to the following errors during the Member Cluster setup:
[INS-30211] An unexpected exception occurred while extracting details from ASM client data
       PRCI-1167 : failed to extract atttributes from the specified file "/home/grid/FILES/mclu2.xml"
       PRCT-1453 : failed to get ASM properties from ASM client data file /home/grid/FILES/mclu2.xml
       KFOD-00319: failed to read the credential file /home/grid/FILES/mclu2.xml 

[grid@dsctw21 grid]$ gridSetup.sh -silent  -skipPrereqs -responseFile grid_2017-05-22_04-51-05PM.rsp  
Launching Oracle Grid Infrastructure Setup Wizard...
..
You can find the log of this install session at:
 /u01/app/oraInventory/logs/GridSetupActions2017-05-20_12-17-29PM/gridSetupActions2017-05-20_12-17-29PM.log

As a root user, execute the following script(s):
    1. /u01/app/oraInventory/orainstRoot.sh
    2. /u01/app/122/grid/root.sh

Execute /u01/app/oraInventory/orainstRoot.sh on the following nodes: 
[dsctw22]
Execute /u01/app/122/grid/root.sh on the following nodes: 
[dsctw21, dsctw22]

Run the script on the local node first. After successful completion, you can start the script in parallel on all other nodes.

Successfully Setup Software.
As install user, execute the following command to complete the configuration.
    /u01/app/122/grid/gridSetup.sh -executeConfigTools -responseFile /home/grid/grid_dsctw2.rsp [-silent]

-> Run root.sh scripts 

[grid@dsctw21 grid]$ /u01/app/122/grid/gridSetup.sh -executeConfigTools -responseFile grid_2017-05-22_04-51-05PM.rsp  
Launching Oracle Grid Infrastructure Setup Wizard...

You can find the logs of this session at:
/u01/app/oraInventory/logs/GridSetupActions2017-05-20_05-34-08PM

Backup OCR and export GNS

  • Note: as the Member Cluster install has killed my shared GNS twice, it may be a good idea to backup OCR and export GNS right NOW
Backup  OCR 

[root@dsctw21 cfgtoollogs]# ocrconfig -manualbackup
dsctw21     2017/05/22 19:03:53     +MGMT:/dsctw/OCRBACKUP/backup_20170522_190353.ocr.284.944679833     0   
  
[root@dsctw21 cfgtoollogs]# ocrconfig -showbackup
PROT-24: Auto backups for the Oracle Cluster Registry are not available
dsctw21     2017/05/22 19:03:53     +MGMT:/dsctw/OCRBACKUP/backup_20170522_190353.ocr.284.944679833     0   
   
Locate all OCR backups 
ASMCMD>  find --type OCRBACKUP / *
+MGMT/dsctw/OCRBACKUP/backup_20170522_190353.ocr.284.944679833
ASMCMD> ls -l +MGMT/dsctw/OCRBACKUP/backup_20170522_190353.ocr.284.944679833
Type       Redund  Striped  Time             Sys  Name
OCRBACKUP  UNPROT  COARSE   MAY 22 19:00:00  Y    backup_20170522_190353.ocr.284.944679833

Export the GNS to a file
[root@dsctw21 cfgtoollogs]# srvctl stop gns
[root@dsctw21 cfgtoollogs]# srvctl export gns -instance /root/dsc-gns.export 
[root@dsctw21 cfgtoollogs]# srvctl start gns
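
For completeness, the matching restore commands - a sketch only, not run here; the file and backup names are the ones created above, and an OCR restore requires the full documented procedure with the Clusterware stack shut down:

# Restore the OCR from the manual backup taken above
ocrconfig -restore +MGMT:/dsctw/OCRBACKUP/backup_20170522_190353.ocr.284.944679833

# Re-import the exported GNS instance
srvctl stop gns
srvctl import gns -instance /root/dsc-gns.export
srvctl start gns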

Dump GNS data
[root@dsctw21 cfgtoollogs]# srvctl config gns -list
dsctw21.CLSFRAMEdsctw SRV Target: 192.168.2.151.dsctw Protocol: tcp Port: 12642 Weight: 0 Priority: 0 Flags: 0x101
dsctw21.CLSFRAMEdsctw TXT NODE_ROLE="HUB", NODE_INCARNATION="0", NODE_TYPE="20" Flags: 0x101
dsctw22.CLSFRAMEdsctw SRV Target: 192.168.2.152.dsctw Protocol: tcp Port: 35675 Weight: 0 Priority: 0 Flags: 0x101
dsctw22.CLSFRAMEdsctw TXT NODE_ROLE="HUB", NODE_INCARNATION="0", NODE_TYPE="20" Flags: 0x101
dscgrid.example.com DLV 35418 10 18 ( /a+Iu8QgPs9k96CoQ6rFVQrqmGFzZZNKRo952Ujjkj8dcDlHSA+JMcEMHLC3niuYrM/eFeAj3iFpihrIEohHXQ== ) Unique Flags: 0x314
dscgrid.example.com DNSKEY 7 3 10 ( MIIBCgKCAQEAxnVyA60TYUeEKkNvEaWrAFg2oDXrFbR9Klx7M5N/UJadFtF8h1e32Bf8jpL6cq1yKRI3TVdrneuiag0OiQfzAycLjk98VUz+L3Q5AHGYCta62Kjaq4hZOFcgF/BCmyY+6tWMBE8wdivv3CttCiH1U7x3FUqbgCb1iq3vMcS6X64k3MduhRankFmfs7zkrRuWJhXHfRaDz0mNXREeW2VvPyThXPs+EOPehaDhXRmJBWjBkeZNIaBTiR8jKTTY1bSPzqErEqAYoH2lR4rAg9TVKjOkdGrAmJJ6AGvEBfalzo4CJtphAmygFd+/ItFm5koFb2ucFr1slTZz1HwlfdRVGwIDAQAB ) Unique Flags: 0x314
dscgrid.example.com NSEC3PARAM 10 0 2 ( jvm6kO+qyv65ztXFy53Dkw== ) Unique Flags: 0x314
dsctw-scan.dsctw A 192.168.5.225 Unique Flags: 0x81
dsctw-scan.dsctw A 192.168.5.227 Unique Flags: 0x81
dsctw-scan.dsctw A 192.168.5.232 Unique Flags: 0x81
dsctw-scan1-vip.dsctw A 192.168.5.232 Unique Flags: 0x81
dsctw-scan2-vip.dsctw A 192.168.5.227 Unique Flags: 0x81
dsctw-scan3-vip.dsctw A 192.168.5.225 Unique Flags: 0x81
dsctw21-vip.dsctw A 192.168.5.226 Unique Flags: 0x81
dsctw22-vip.dsctw A 192.168.5.235 Unique Flags: 0x81
dsctw-scan1-vip A 192.168.5.232 Unique Flags: 0x81
dsctw-scan2-vip A 192.168.5.227 Unique Flags: 0x81
dsctw-scan3-vip A 192.168.5.225 Unique Flags: 0x81
dsctw21-vip A 192.168.5.226 Unique Flags: 0x81
dsctw22-vip A 192.168.5.235 Unique Flags: 0x81


Reference

 

Start of GIMR database fails during 12.2 installation

Status of a  failed 12.2 GIMR startup

  • You’re starting a 12.2 RAC Database Installation either as
    • Standalone RAC cluster or
    • Domain Service Cluster
  • Your memory Capacity Planning looks  like the following
Memory Resources when trying to setup Domain Service cluster with 16 GByte Memory
  VM DSC System1 [     running GIMR Database ] : 7 GByte
  VM DSC System2 [ NOT running GIMR Database ] : 6 GByte
  VM NameServer                                : 1 GByte
  Window 7 Host                                : 2 GByte 
  • You’re installing the RAC environment in a VirtualBox env and you have only 16 GByte memory
  • The installation works fine until the start of the GIMR database [ near the end of the GRID installation process ! ]
  • Your VirtualBox Windows host freezes or one of the RAC VMs reboots
  • Sometimes the start of the GIMR database during the installation process fails with ORA-3113 or with timeout errors starting a database process like DBWn, ..
  • Check the related log File in /u01/app/grid/cfgtoollogs/dbca/_mgmtdb/

 

Typical Error when starting GIMR db during the 12.2 installation

[WARNING] [DBT-11209] Current available physical memory is less than the required physical memory (6,144MB) for creating the database.
Registering database with Oracle Grid Infrastructure
Copying database files
Creating and starting Oracle instance
DBCA Operation failed.
Look at the log file "/u01/app/grid/cfgtoollogs/dbca/_mgmtdb/_mgmtdb.log" for further details.
Creating Container Database for Oracle Grid Infrastructure Management Repository failed.

-> /u01/app/grid/cfgtoollogs/dbca/_mgmtdb/_mgmtdb.log
Registering database with Oracle Grid Infrastructure
Copying database files
....
DBCA_PROGRESS : 22%
DBCA_PROGRESS : 36%
[ 2017-05-04 18:01:38.759 CEST ] Creating and starting Oracle instance
DBCA_PROGRESS : 37%
[ 2017-05-04 18:07:20.869 CEST ] ORA-03113: end-of-file on communication channel
[ 2017-05-04 18:07:55.288 CEST ] ORA-03113: end-of-file on communication channel
[ 2017-05-04 18:11:07.145 CEST ] DBCA_PROGRESS : DBCA Operation failed.

Analyzing the problem of a failed GIMR db startup

  • During startup the shared memory segment created by GIMR db is about 4 GByte
  • If we are short on memory, the shmget() system call [ shmget(key, SHM_SIZE, 0644 | IPC_CREAT) ] may fail, which kills the installation process by:
    • rebooting one of your RAC VMs
    • freezing your VirtualBox host
    • freezing all three RAC VMs

In any case the installation process gets terminated and you need to clean up your system and restart the installation process, which will fail again until you add some memory. 
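
A quick way to see whether a node can back the ~4 GByte GIMR SGA at all before the installer reaches that step (standard Linux tools, nothing Oracle-specific):

free -m          # physical memory and swap actually available on the VM
df -h /dev/shm   # tmpfs must be able to hold the SGA when MEMORY_TARGET / AMM is used
ipcs -m          # existing shared memory segments - the GIMR SGA will show up here once the instance starts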

Potential Workaround by modifying  DomainServicesCluster_GIMR.dbc

 
After you have extracted the GRID zip file, save a copy of DomainServicesCluster_GIMR.dbc and change the
following settings [ in the diff below the < lines are the new settings and the > lines the original ones ]

Save and Modify  DomainServicesCluster_GIMR.dbc
[grod@dsctw21 FILES]# diff DomainServicesCluster_GIMR.dbc  DomainServicesCluster_GIMR.dbc_ORIG
<          <initParam name="sga_target" value="800" unit="MB"/>
>          <initParam name="sga_target" value="4" unit="GB"/>

<          <initParam name="processes" value="200"/>
>          <initParam name="processes" value="2000"/>

<          <initParam name="open_cursors" value="300"/>
<          <initParam name="pga_aggregate_target" value="400" unit="MB"/>
<          <initParam name="target_pdbs" value="2"/>

>          <initParam name="open_cursors" value="600"/>
>          <initParam name="pga_aggregate_target" value="2" unit="GB"/>
>          <initParam name="target_pdbs" value="5"/>
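
If you prefer to script the edit, a sed sketch applying the same changes (attribute strings taken verbatim from the diff above; keep a copy of the original and use this for testing only) could look like this:

cp DomainServicesCluster_GIMR.dbc DomainServicesCluster_GIMR.dbc_ORIG
sed -i \
  -e 's|name="sga_target" value="4" unit="GB"|name="sga_target" value="800" unit="MB"|' \
  -e 's|name="processes" value="2000"|name="processes" value="200"|' \
  -e 's|name="open_cursors" value="600"|name="open_cursors" value="300"|' \
  -e 's|name="pga_aggregate_target" value="2" unit="GB"|name="pga_aggregate_target" value="400" unit="MB"|' \
  -e 's|name="target_pdbs" value="5"|name="target_pdbs" value="2"|' \
  DomainServicesCluster_GIMR.dbc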

Copy the modified DomainServicesCluster_GIMR.dbc to its default location

[grid@dsctw21 grid]$ cp  /root/FILES/DomainServicesCluster_GIMR.dbc  $GRID_HOME/assistants/dbca/templates
[grid@dsctw21 grid]$ ls -l $GRID_HOME/assistants/dbca/templates/Domain*
-rw-r--r--. 1 grid oinstall 5737 May  5 14:06 /u01/app/122/grid/assistants/dbca/templates/DomainServicesCluster_GIMR.dbc
  • Don’t use this in any Production System – This is for testing ONLY !

-> Now start the installation process by invoking gridSetup.sh in $GRID_HOME !

Reference

Cleanup a failed 12.2 GRID installation

Cleanup GRID resources and directories

Log in as the root user on a node where you encountered an error.
Change directory to Grid_home/crs/install and run rootcrs.sh with the -deconfig and -force flags. For example:

[root@dsctw21 ~]# export GRID_HOME=/u01/app/122/grid
[root@dsctw22 ~]# $GRID_HOME/crs/install/rootcrs.sh -verbose -deconfig -force

Repeat on other nodes as required.
If you are deconfiguring Oracle Clusterware on all nodes in the cluster, then on the last node, enter the following command:

[root@dsctw21 ~]# export GRID_HOME=/u01/app/122/grid
[root@dsctw21 ~]# $GRID_HOME/crs/install/rootcrs.sh -verbose -deconfig -force -lastnode

The -lastnode flag completes deconfiguration of the cluster, including the OCR and voting files.

Current Settings
ORACLE_BASE=/u01/app/grid
ORACLE_HOME=/u01/app/122/grid

Remove oraInventory Files
[root@dsctw21 oraInventory]# rm -rf  /u01/app/oraInventory/*

Remove Files from ORACLE_BASE, ORACLE_HOME
[root@dsctw21 ~]# rm -rf /u01/app/grid/*
[root@dsctw21 ~]# rm -rf /u01/app/122/grid/*

Remove the above files/directories on the 2nd node too !

[root@dsctw21 grid]# chown  grid:oinstall /u01/app/122/grid
[root@dsctw21 grid]# chown  grid:oinstall /u01/app/grid
[root@dsctw21 grid]# chown  grid:oinstall /u01/app/oraInventory/

[root@dsctw22 oraInventory]# chown  grid:oinstall /u01/app/122/grid
[root@dsctw22 oraInventory]# chown  grid:oinstall /u01/app/grid
[root@dsctw22 oraInventory]# chown  grid:oinstall /u01/app/oraInventory/

Verify directories
[root@dsctw21 ~]# ls  -lasi /u01/app/122/grid
total 0
33595586 0 drwxr-xr-x. 2 grid oinstall  6 May  4 15:49 .
50331840 0 drwxr-xr-x. 3 root oinstall 17 May  4 15:49 ..
[root@dsctw22 ~]#  ls  -lasi /u01/app/122/grid
total 0
33595586 0 drwxr-xr-x. 2 grid oinstall  6 May  4 15:49 .
50331840 0 drwxr-xr-x. 3 root oinstall 17 May  4 15:49 ..
.....

Cleanup our ASM Disks

On Node 1 run 
[root@dsctw21 ~]# ./cleanupASM_Disks.sh
Model: ATA VBOX HARDDISK (scsi)
Disk /dev/sdb: 21.5GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags: 
..
brw-rw----. 1 root disk     8,   0 May  5 07:36 /dev/sda
brw-rw----. 1 root disk     8,   1 May  5 07:36 /dev/sda1
brw-rw----. 1 root disk     8,   2 May  5 07:36 /dev/sda2
brw-rw----. 1 grid asmadmin 8,  16 May  5 08:21 /dev/sdb
brw-rw----. 1 grid asmadmin 8,  32 May  5 08:21 /dev/sdc
brw-rw----. 1 grid asmadmin 8,  48 May  5 08:21 /dev/sdd
brw-rw----. 1 grid asmadmin 8,  64 May  5 08:21 /dev/sde
brw-rw----. 1 grid asmadmin 8,  80 May  5 08:21 /dev/sdf
brw-rw----. 1 grid asmadmin 8,  96 May  5 08:21 /dev/sdg
brw-rw----. 1 grid asmadmin 8, 112 May  5 08:21 /dev/sdh
brw-rw----. 1 root disk     8, 128 May  5 07:36 /dev/sdi
brw-rw----. 1 root disk     8, 129 May  5 07:36 /dev/sdi1

On Node 2 just run 
[root@dsctw22 ~]# ./modifyASM_Disks.sh
Model: ATA VBOX HARDDISK (scsi)
Disk /dev/sdb: 21.5GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags: 
....
brw-rw----. 1 root disk     8,   0 May  5 07:22 /dev/sda
brw-rw----. 1 root disk     8,   1 May  5 07:22 /dev/sda1
brw-rw----. 1 root disk     8,   2 May  5 07:22 /dev/sda2
brw-rw----. 1 grid asmadmin 8,  16 May  5 08:25 /dev/sdb
brw-rw----. 1 grid asmadmin 8,  32 May  5 08:25 /dev/sdc
brw-rw----. 1 grid asmadmin 8,  48 May  5 08:25 /dev/sdd
brw-rw----. 1 grid asmadmin 8,  64 May  5 08:25 /dev/sde
brw-rw----. 1 grid asmadmin 8,  80 May  5 08:25 /dev/sdf
brw-rw----. 1 grid asmadmin 8,  96 May  5 08:25 /dev/sdg
brw-rw----. 1 grid asmadmin 8, 112 May  5 07:25 /dev/sdh
brw-rw----. 1 root disk     8, 128 May  5 07:22 /dev/sdi
brw-rw----. 1 root disk     8, 129 May  5 07:22 /dev/sdi1

Note: the device ownership and permissions on both nodes should be identical !

Scripts used in this article

[root@dsctw21 ~]# cat  cleanupASM_Disks.sh
parted /dev/sdb mklabel msdos
parted /dev/sdc mklabel msdos
parted /dev/sdd mklabel msdos
parted /dev/sde mklabel msdos
parted /dev/sdf mklabel msdos
parted /dev/sdg mklabel msdos
parted /dev/sdh mklabel msdos

parted /dev/sdb print
parted /dev/sdc print
parted /dev/sdd print
parted /dev/sde print
parted /dev/sdf print
parted /dev/sdg print
parted /dev/sdh print
./modifyASM_Disks.sh

[root@dsctw21 ~]# cat  modifyASM_Disks.sh
parted /dev/sdb print
parted /dev/sdc print
parted /dev/sdd print
parted /dev/sde print
parted /dev/sdf print
parted /dev/sdg print
parted /dev/sdh print

chmod 660 /dev/sdb
chmod 660 /dev/sdc
chmod 660 /dev/sdd
chmod 660 /dev/sde
chmod 660 /dev/sdf
chmod 660 /dev/sdg
chmod 660 /dev/sdh

chown grid:asmadmin /dev/sdb
chown grid:asmadmin /dev/sdc
chown grid:asmadmin /dev/sdd
chown grid:asmadmin /dev/sde
chown grid:asmadmin /dev/sdf
chown grid:asmadmin /dev/sdg
chown grid:asmadmin /dev/sdh

ls -l /dev/sd*
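
The two helper scripts above can also be collapsed into a single loop; a minimal sketch (same devices and permissions as above, the "relabel" argument is an assumption) that relabels only when asked and always fixes ownership and permissions:

#!/bin/bash
# Sketch only: combined cleanupASM_Disks.sh / modifyASM_Disks.sh
# Pass "relabel" as the first argument on node 1 to rewrite the msdos labels
RELABEL=${1:-no}
for d in /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh; do
  [ "$RELABEL" = "relabel" ] && parted -s $d mklabel msdos
  parted -s $d print
  chmod 660 $d
  chown grid:asmadmin $d
done
ls -l /dev/sd*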

Recreate GNS 12.2

Overview

  • During a 12.2 Domain Services Cluster installation I filled in the wrong GNS Subdomain name
  • This means nslookup for my SCAN address doesn't work
  • The final cluvfy command reports the error: PRVF-5218 : Domain name “dsctw21-vip.dsctw2.example.com” did not resolve to an IP address.

-> So this was a good exercise to verify whether my older 12.1 article on recreating GNS also works with 12.2 !

Backup your RAC profile and local OCR

As of 12.x/11.2 Grid Infrastructure, the private network configuration is not only stored in OCR but also in the gpnp profile -  please take a backup of profile.xml on all cluster nodes before proceeding, as grid user:
[grid@dsctw21 peer]$ cd  $GRID_HOME/gpnp/dsctw21/profiles/peer/
[grid@dsctw21 peer]$ cp  profile.xml profile.xml_backup_5-Mai-2017
[root@dsctw21 ~]# export GRID_HOME=/u01/app/122/grid
[root@dsctw21 ~]# $GRID_HOME/bin/ocrconfig -local -manualbackup
dsctw21     2017/05/05 17:12:50     /u01/app/122/grid/cdata/dsctw21/backup_20170505_171250.olr     0     
dsctw21     2017/05/05 15:07:41     /u01/app/122/grid/cdata/dsctw21/backup_20170505_150741.olr     0  

[grid@dsctw21 peer]$ $GRID_HOME/bin/ocrconfig -local -showbackup
dsctw21     2017/05/05 17:12:50     /u01/app/122/grid/cdata/dsctw21/backup_20170505_171250.olr     0     
dsctw21     2017/05/05 15:07:41     /u01/app/122/grid/cdata/dsctw21/backup_20170505_150741.olr     0 
-> Repeat these steps on all of your RAC nodes
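
If you have several nodes, a small loop can take the same backups everywhere; this is a sketch only (node names and passwordless ssh for the grid and root users are assumptions):

# Sketch: back up the gpnp profile (as grid) and the OLR (as root) on each node
for n in dsctw21 dsctw22; do
  ssh grid@$n "cp /u01/app/122/grid/gpnp/$n/profiles/peer/profile.xml \
               /u01/app/122/grid/gpnp/$n/profiles/peer/profile.xml_backup_\$(date +%d-%b-%Y)"
  ssh root@$n "/u01/app/122/grid/bin/ocrconfig -local -manualbackup"
done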

Collect VIP Addresses, Device Names, GNS Details

[root@dsctw21 ~]# $GRID_HOME/bin/oifcfg getif
enp0s8  192.168.5.0  global  public
enp0s9  192.168.2.0  global  cluster_interconnect,asm

Get the current GNS VIP IP:
[root@dsctw21 ~]# $GRID_HOME/bin/crsctl status resource ora.gns.vip -f | grep USR_ORA_VIP
GEN_USR_ORA_VIP=
USR_ORA_VIP=192.168.5.60

[root@dsctw21 ~]# ifconfig enp0s8
enp0s8: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.5.151  netmask 255.255.255.0  broadcast 192.168.5.255

[root@dsctw21 ~]#  ifconfig enp0s9
enp0s9: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.2.151  netmask 255.255.255.0  broadcast 192.168.2.255

[root@dsctw21 ~]#  $GRID_HOME/bin/srvctl config gns -a -l
GNS is enabled.
GNS is listening for DNS server requests on port 53
GNS is using port 5353 to connect to mDNS
GNS status: Self-check failed.
Domain served by GNS: example.com
GNS version: 12.2.0.1.0
Globally unique identifier of the cluster where GNS is running: 3a9c87760b7bdf65ffea8852e7dfdae5
Name of the cluster where GNS is running: dsctw2
Cluster type: server.
GNS log level: 1.
GNS listening addresses: tcp://192.168.5.60:44456.
GNS instance role: primary
GNS is individually enabled on nodes: 
GNS is individually disabled on nodes: 

[root@dsctw21 ~]# $GRID_HOME/bin/srvctl config gns
GNS is enabled.
GNS VIP addresses: 192.168.5.60
Domain served by GNS: example.com

This should be a subdomain - example.com is our DNS domain !

Stop resources and recreate GNS and nodeapps

[root@dsctw21 ~]# $GRID_HOME/bin/srvctl stop scan_listener 
[root@dsctw21 ~]# $GRID_HOME/bin/srvctl stop scan
[root@dsctw21 ~]# $GRID_HOME/bin/srvctl stop nodeapps -f
[root@dsctw21 ~]# $GRID_HOME/bin/srvctl stop gns
[root@dsctw21 ~]# $GRID_HOME/bin/srvctl remove nodeapps
Please confirm that you intend to remove node-level applications on all nodes of the cluster (y/[n]) y
[root@dsctw21 ~]# $GRID_HOME/bin/srvctl remove gns
Remove GNS? (y/[n]) y
[root@dsctw21 ~]# $GRID_HOME/bin/srvctl add gns -i 192.168.5.60 -d dsctw2.example.com
[root@dsctw21 ~]# $GRID_HOME/bin/srvctl config gns
GNS is enabled.
GNS VIP addresses: 192.168.5.60
Domain served by GNS: dsctw2.example.com
[root@dsctw21 ~]# $GRID_HOME/bin/srvctl config gns -list
CLSNS-00005: operation timed out
  CLSNS-00041: failure to contact name servers 192.168.5.60:53
    CLSGN-00070: Service location failed.
[root@dsctw21 ~]# $GRID_HOME/bin/srvctl start gns
[root@dsctw21 ~]# $GRID_HOME/bin/srvctl config gns -list
dsctw2.example.com DLV 50343 10 18 ( zfiaA8U30oiGSATInCdyN7pIKf1ZIVQhHsF6OQti9bvXw7dUhNmDv/txClkHX6BjkLTBbPyWGdRjEMf+uUqYHA== ) Unique Flags: 0x314
dsctw2.example.com DNSKEY 7 3 10 ( MIIBCgKCAQEAmxQnG2xkpQMXGRXD2tBTZkUKYUsV+Sj/w6YmpFdpMQVoNVSXJCWgCDqIjLrfVA2AQUeEaAek6pfOlMp6Tev2nPVvNqPpul5Fs63cFVzwjdTI4zU6lSC6+2UVJnAN6BTEmrOzKKt/kuxoNNI7V4DZ5Nj6UoUJ2MXGr/+RSU44GboHnrftvFaVN8pp0TOoOBTj5hHH8C73I+lFfDNhMXEY8WQhb1nP6Cv02qPMsbb8edq1Dy8lt6N6kzjh+9hKPNdqM7HB3OVV5L18E5HtLjWOhMZLqJ7oDTDsQcMMuYmfFjbi3JvGQrdTlGHAv9f4W/vRL/KV8bDkDFnSRSFubxsbdQIDAQAB ) Unique Flags: 0x314
dsctw2.example.com NSEC3PARAM 10 0 2 ( jvm6kO+qyv65ztXFy53Dkw== ) Unique Flags: 0x314
Oracle-GNS A 192.168.5.60 Unique Flags: 0x315
dsctw2.Oracle-GNS SRV Target: Oracle-GNS Protocol: tcp Port: 59102 Weight: 0 Priority: 0 Flags: 0x315
dsctw2.Oracle-GNS TXT CLUSTER_NAME="dsctw2", CLUSTER_GUID="3a9c87760b7bdf65ffea8852e7dfdae5", NODE_NAME="dsctw22", SERVER_STATE="RUNNING", VERSION="12.2.0.0.0", PROTOCOL_VERSION="0xc200000", DOMAIN="dsctw2.example.com" Flags: 0x315
Oracle-GNS-ZM A 192.168.5.60 Unique Flags: 0x315
dsctw2.Oracle-GNS-ZM SRV Target: Oracle-GNS-ZM Protocol: tcp Port: 34148 Weight: 0 Priority: 0 Flags: 0x315
--> No VIP IPs  !

Recreate Nodeapps

[root@dsctw21 ~]#  $GRID_HOME/bin/srvctl add nodeapps -S 192.168.5.0/255.255.255.0/enp0s8
[root@dsctw21 ~]# $GRID_HOME/bin/srvctl start nodeapps
PRKO-2422 : ONS is already started on node(s): dsctw21,dsctw22
[root@dsctw21 ~]# $GRID_HOME/bin/srvctl config gns -list
dsctw2.example.com DLV 50343 10 18 ( zfiaA8U30oiGSATInCdyN7pIKf1ZIVQhHsF6OQti9bvXw7dUhNmDv/txClkHX6BjkLTBbPyWGdRjEMf+uUqYHA== ) Unique Flags: 0x314
dsctw2.example.com DNSKEY 7 3 10 ( MIIBCgKCAQEAmxQnG2xkpQMXGRXD2tBTZkUKYUsV+Sj/w6YmpFdpMQVoNVSXJCWgCDqIjLrfVA2AQUeEaAek6pfOlMp6Tev2nPVvNqPpul5Fs63cFVzwjdTI4zU6lSC6+2UVJnAN6BTEmrOzKKt/kuxoNNI7V4DZ5Nj6UoUJ2MXGr/+RSU44GboHnrftvFaVN8pp0TOoOBTj5hHH8C73I+lFfDNhMXEY8WQhb1nP6Cv02qPMsbb8edq1Dy8lt6N6kzjh+9hKPNdqM7HB3OVV5L18E5HtLjWOhMZLqJ7oDTDsQcMMuYmfFjbi3JvGQrdTlGHAv9f4W/vRL/KV8bDkDFnSRSFubxsbdQIDAQAB ) Unique Flags: 0x314
dsctw2.example.com NSEC3PARAM 10 0 2 ( jvm6kO+qyv65ztXFy53Dkw== ) Unique Flags: 0x314
dsctw2-scan.dsctw2 A 192.168.5.231 Unique Flags: 0x1
dsctw2-scan1-vip.dsctw2 A 192.168.5.231 Unique Flags: 0x1
dsctw21-vip.dsctw2 A 192.168.5.233 Unique Flags: 0x1
dsctw22-vip.dsctw2 A 192.168.5.237 Unique Flags: 0x1
dsctw2-scan A 192.168.5.231 Unique Flags: 0x1
dsctw2-scan1-vip A 192.168.5.231 Unique Flags: 0x1
dsctw21-vip A 192.168.5.233 Unique Flags: 0x1
dsctw22-vip A 192.168.5.237 Unique Flags: 0x1
Oracle-GNS A 192.168.5.60 Unique Flags: 0x315
dsctw2.Oracle-GNS SRV Target: Oracle-GNS Protocol: tcp Port: 59102 Weight: 0 Priority: 0 Flags: 0x315
dsctw2.Oracle-GNS TXT CLUSTER_NAME="dsctw2", CLUSTER_GUID="3a9c87760b7bdf65ffea8852e7dfdae5", NODE_NAME="dsctw22", SERVER_STATE="RUNNING", VERSION="12.2.0.0.0", PROTOCOL_VERSION="0xc200000", DOMAIN="dsctw2.example.com" Flags: 0x315
Oracle-GNS-ZM A 192.168.5.60 Unique Flags: 0x315
dsctw2.Oracle-GNS-ZM SRV Target: Oracle-GNS-ZM Protocol: tcp Port: 34148 Weight: 0 Priority: 0 Flags: 0x315
--> GNS knows VIP IPs - Related cluster resources VIPs, GNS and SCAN Listener should be  ONLINE 
*****  Cluster Resources: *****
Resource NAME               INST   TARGET       STATE        SERVER          STATE_DETAILS
--------------------------- ----   ------------ ------------ --------------- -----------------------------------------
ora.LISTENER_SCAN1.lsnr        1   ONLINE       ONLINE       dsctw22         STABLE  
ora.LISTENER_SCAN2.lsnr        1   ONLINE       ONLINE       dsctw21         STABLE  
ora.LISTENER_SCAN3.lsnr        1   ONLINE       ONLINE       dsctw21         STABLE ...
ora.dsctw21.vip                1   ONLINE       ONLINE       dsctw21         STABLE  
ora.dsctw22.vip                1   ONLINE       ONLINE       dsctw22         STABLE  
ora.gns                        1   ONLINE       ONLINE       dsctw22         STABLE  
ora.gns.vip                    1   ONLINE    

Verify our newly created GNS

[root@dsctw21 ~]# $GRID_HOME/bin/srvctl config gns -list
dsctw2.example.com DLV 50343 10 18 ( zfiaA8U30oiGSATInCdyN7pIKf1ZIVQhHsF6OQti9bvXw7dUhNmDv/txClkHX6BjkLTBbPyWGdRjEMf+uUqYHA== ) Unique Flags: 0x314
..
dsctw2.example.com NSEC3PARAM 10 0 2 ( jvm6kO+qyv65ztXFy53Dkw== ) Unique Flags: 0x314
dsctw2-scan.dsctw2 A 192.168.5.231 Unique Flags: 0x1
dsctw2-scan.dsctw2 A 192.168.5.234 Unique Flags: 0x1
dsctw2-scan.dsctw2 A 192.168.5.235 Unique Flags: 0x1
dsctw2-scan1-vip.dsctw2 A 192.168.5.231 Unique Flags: 0x1
dsctw2-scan2-vip.dsctw2 A 192.168.5.235 Unique Flags: 0x1
dsctw2-scan3-vip.dsctw2 A 192.168.5.234 Unique Flags: 0x1
dsctw21-vip.dsctw2 A 192.168.5.233 Unique Flags: 0x1
dsctw22-vip.dsctw2 A 192.168.5.237 Unique Flags: 0x1

[root@dsctw21 ~]#  nslookup dsctw2-scan.dsctw2.example.com
Server:        192.168.5.50
Address:    192.168.5.50#53

Non-authoritative answer:
Name:    dsctw2-scan.dsctw2.example.com
Address: 192.168.5.235
Name:    dsctw2-scan.dsctw2.example.com
Address: 192.168.5.234
Name:    dsctw2-scan.dsctw2.example.com
Address: 192.168.5.231

--> VIPS, SCAN and SCAN VIPS should be ONLINE 
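
A quick way to re-check the delegated names from any host is a simple nslookup loop (a sketch; the names are taken from the GNS listing above):

# Sketch: verify that the GNS-delegated names resolve through the corporate DNS
for h in dsctw2-scan.dsctw2.example.com \
         dsctw21-vip.dsctw2.example.com \
         dsctw22-vip.dsctw2.example.com; do
  echo "== $h"
  nslookup $h
done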

Congrats you have successfully reconfigured GNS on 12.2.0.1 !

Reference

Install Oracle RAC 12.2 ( 12cR2 ) PM – Policy Managed

Overview

This tutorial is based on :

Feature Overview

  • FLEX ASM
  • Convert an Administrator Managed Database to a Policy Managed ( PM ) database
  • Configure GNS – requirement for Policy Managed Database
  • UDEV to manage ASM disks
  • Using Chrony as our Timeserver [ replacing NTP ]

Software Versions

  • VirtualBox 5.1.18
  • OEL 7.3
  • Oracle RAC 12.2.0.1 using UDEV, Policy Managed Database , Flex ASM feature

Virtualbox Images used

  • ns1 – Name Server / DHCP server / Chrony server running on IP address 192.168.5.50
  • ractw21 – Cluster node 1
  • ractw22 – Cluster node 2

For installing the nameserver please read the article mentioned below :

In this tutorial we have replaced NTP by Chrony. For Chrony setup please read

Networking Details

Domain            : example.com            Name Server: ns1.example.com                192.168.5.50
RAC Sub-Domain    : grid122.example.com    Name Server: gns122.grid122.example.com     192.168.5.55
DHCP Server       : ns1.example.com
Chrony NTP  Server: ns1.example.com
DHCP addresses    : 192.168.1.100 ... 192.168.1.254

Configure DNS:
Identity     Home Node     Host Node                         Given Name                        Type        Address        Address Assigned By     Resolved By
GNS VIP        None        Selected by Oracle Clusterware    gns122.example.com                Virtual     192.168.5.59   Net administrator       DNS + GNS
Node 1 Public  Node 1      ractw21                           ractw21.example.com               Public      192.168.5.141  Fixed                   DNS
Node 1 VIP     Node 1      Selected by Oracle Clusterware    ractw21-vip.grid122.example.com   Private     Dynamic        DHCP                    GNS
Node 1 Private Node 1      ractw21int                        ractw21-int.example.com           Private     192.168.2.141  Fixed                   DNS
Node 2 Public  Node 2      ractw22                           ractw22.example.com               Public      192.168.5.142  Fixed                   DNS
Node 2 VIP     Node 2      Selected by Oracle Clusterware    ractw22-vip.grid122.example.com   Private     Dynamic        DHCP                    GNS
Node 2 Private Node 2      ractw22int                        ractw22-int.example.com           Private     192.168.2.142  Fixed                   DNS
SCAN VIP 1     none        Selected by Oracle Clusterware    RACTW2-scan.grid122.example.com   Virtual     Dynamic        DHCP                    GNS
SCAN VIP 2     none        Selected by Oracle Clusterware    RACTW2-scan.grid122.example.com   Virtual     Dynamic        DHCP                    GNS
SCAN VIP 3     none        Selected by Oracle Clusterware    RACTW2-scan.grid122.example.com   Virtual     Dynamic        DHCP                    GNS

Create VirtualBox Image OEL73 as our RAC Provisioning Server

After basic OS Installation install some helpful YUM packages

[root@ractw21 ~]# yum install system-config-users
[root@ractw21 ~]# yum install wireshark
[root@ractw21 ~]# yum install wireshark-gnome

Install X11 applications like xclock
[root@ractw21 ~]# yum install xorg-x11-apps
[root@ractw21 ~]# yum install telnet 

Use the “oracle-database-server-12cR2-preinstall” package to perform all your prerequisite setup

[root@ractw21 ~]# yum install oracle-database-server-12cR2-preinstall -y

Prepare Network

Change the setting of SELinux to permissive by editing the "/etc/selinux/config" file,
making sure the SELINUX flag is set as follows.

SELINUX=permissive
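
If you prefer to script this change, a one-liner does it (a sketch using standard sed/setenforce; a reboot makes the config file change permanent):

[root@ractw21 ~]# sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config
[root@ractw21 ~]# setenforce 0     # switch the running system to permissive right away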

If you have the Linux firewall enabled, you will need to disable or configure it.
The following is an example of disabling the firewall.

Disable AVAHI daemon  [ only if running ]
# /etc/init.d/avahi-daemon stop
To disable it: 
# /sbin/chkconfig  avahi-daemon off
Stop Firewall 
[root@ractw21 ~]# systemctl stop firewalld
[root@ractw21 ~]# systemctl disable firewalld
[root@ractw21 ~]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
Potential Nameserver issues

Make sure the "/etc/resolv.conf" file includes a nameserver entry that points to the correct nameserver. 
Also, if the "domain" and "search" entries are both present, comment out one of them.
For the OS installation my "/etc/resolv.conf" looked like this:

#domain localdomain
search localdomain
nameserver 192.168.1.1

The changes to the "resolv.conf" will be overwritten by the network manager, due to the presence of the 
NAT interface. For this reason, this interface should now be disabled on startup. You can enable it manually 
if you need to access the internet from the VMs. Edit the "/etc/sysconfig/network-scripts/ifcfg-enp0s3" (eth0) 
file, making the following change. This will take effect after the next restart.

ONBOOT=no

Modify resolv.conf and reboot the system:
# Generated by Helmut 
search example.com
nameserver 192.168.5.50

After a reboot we have the following IP settings and nameserver lookup should work fine:
[root@ractw21 ~]# ifconfig
enp0s3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether 08:00:27:c6:ab:10  txqueuelen 1000  (Ethernet)
       
enp0s8: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.5.141  netmask 255.255.255.0  broadcast 192.168.5.255
        inet6 fe80::a00:27ff:fe83:429  prefixlen 64  scopeid 0x20<link>
        
enp0s9: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.2.141  netmask 255.255.255.0  broadcast 192.168.2.255

Note device enp0s3 has no IPv4 address assigned !
        
[root@ractw21 ~]# nslookup ractw21
Server:        192.168.5.50
Address:    192.168.5.50#53

Name:    ractw21.example.com
Address: 192.168.5.141

Prepare and verify Swap Space and the /u01 partition

Follow article :   Prepare Swap and /u01 Oracle partition

Verify Space and Memory
[root@ractw21 ~]# df / /u01
Filesystem                  1K-blocks    Used Available Use% Mounted on
/dev/mapper/ol_ractw21-root  15718400 9381048   6337352  60% /
/dev/mapper/ol_ractw21-u01   15718400   39088  15679312   1% /u01
[root@ractw21 ~]# free
              total        used        free      shared  buff/cache   available
Mem:        5700592      595844     4515328       13108      589420     5001968
Swap:       8257532           0     8257532

Note: as we can only provide about 6 GByte of memory, the cluvfy Physical Memory Test will fail !

Switching between RAC and Internet ACCESS Networking

RAC Networking
 - enp0s3 down - No IP V4 address assigned 
 - /etc/resolv.conf points to our RAC Nameserver 192.168.5.50
 - RAC Nameserver is working 
 - BUT Internet access is disabled  

[root@ractw21 ~]# ifdown enp0s3
Device 'enp0s3' successfully disconnected.
[root@ractw21 ~]# ifconfig
enp0s3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether 08:00:27:c6:ab:10  txqueuelen 1000  (Ethernet)

[root@ractw21 ~]# ping google.de
connect: Network is unreachable

[root@ractw21 ~]# nslookup ractw21
Server:        192.168.5.50
Address:    192.168.5.50#53

Name:    ractw21.example.com
Address: 192.168.5.141

Internet Access Networking
Note we may need to have Internet access for running yum, ...
 - activate enp0s3  
 - /etc/resolv.conf points to our Router Gateway  192.168.1.1
 - RAC Nameserver is NOT working 
 - Internet access is enabled  

[root@ractw21 ~]# ifup enp0s3
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/2)
[root@ractw21 ~]# ifconfig enp0s3
enp0s3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.2.15  netmask 255.255.255.0  broadcast 10.0.2.255
Verify 
[root@ractw21 ~]#   ping google.de
PING google.de (172.217.18.3) 56(84) bytes of data.
64 bytes from fra15s28-in-f3.1e100.net (172.217.18.3): icmp_seq=1 ttl=46 time=45.0 ms
64 bytes from fra15s28-in-f3.1e100.net (172.217.18.3): icmp_seq=2 ttl=46 time=60.8 ms

nslookup for our RAC IP fails - this is expected
[root@ractw21 ~]# nslookup ractw21
Server:        192.168.1.1
Address:    192.168.1.1#53
** server can't find ractw21: NXDOMAIN

Note: if ping google.de does not work you may need to add the following entry to
/etc/sysconfig/network
    GATEWAYDEV=enp0s3  

After reboot ping and route should work fine
[root@ractw21 ~]# ip route
default via 10.0.2.2 dev enp0s3  proto static  metric 100 
10.0.2.0/24 dev enp0s3  proto kernel  scope link  src 10.0.2.15  metric 100 
192.168.2.0/24 dev enp0s9  proto kernel  scope link  src 192.168.2.141  metric 100 
192.168.5.0/24 dev enp0s8  proto kernel  scope link  src 192.168.5.141  metric 100 
192.168.122.0/24 dev virbr0  proto kernel  scope link  src 192.168.122.1 

[root@ractw21 ~]# traceroute google.de
traceroute to google.de (172.217.22.35), 30 hops max, 60 byte packets
 1  gateway (10.0.2.2)  0.164 ms  0.195 ms  0.074 ms
 2  gateway (10.0.2.2)  1.156 ms  1.917 ms  1.832 ms

Create and Verify User accounts

[root@ractw21 ~]# /usr/sbin/groupadd -g 1001 oinstall
[root@ractw21 ~]# /usr/sbin/groupadd -g 1002 dba
[root@ractw21 ~]# /usr/sbin/groupadd -g 1004 asmadmin
[root@ractw21 ~]# /usr/sbin/groupadd -g 1006 asmdba
[root@ractw21 ~]# /usr/sbin/groupadd -g 1007 asmoper

Create the users that will own the Oracle software using the commands:
[root@ractw21 ~]# /usr/sbin/useradd -u 1001 -g oinstall -G asmadmin,asmdba,asmoper grid
[root@ractw21 ~]#  /usr/sbin/useradd -u 1002 -g oinstall -G dba,asmdba oracle

[root@ractw21 ~]# usermod -G vboxsf -a grid
[root@ractw21 ~]# usermod -G vboxsf -a oracle
[root@ractw21 ~]# passwd oracle
[root@ractw21 ~]# passwd grid

[root@ractw21 ~]# su - grid
[grid@ractw21 ~]$ id
uid=1001(grid) gid=1001(oinstall) groups=1001(oinstall),983(vboxsf),1004(asmadmin),1006(asmdba),1007(asmoper) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023

[root@ractw21 ~]# su - oracle
[oracle@ractw21 ~]$ id
uid=1002(oracle) gid=1001(oinstall) groups=1001(oinstall),983(vboxsf),1002(dba),1006(asmdba) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023

Creating Installation directories

Oracle GridHome restrictions

    It must not be placed under one of the Oracle base directories, including the Oracle base 
    directory of the Oracle Grid Infrastructure installation owner.

    It must not be placed in the home directory of an installation owner. These requirements 
    are specific to Oracle Grid Infrastructure for a cluster installations.

Create Directories:
 - Have a separate ORACLE_BASE for both GRID and RDBMS install !

Create the Oracle Inventory Directory
To create the Oracle Inventory directory, enter the following commands as the root user:
  [root@ractw21 ~]#  mkdir -p /u01/app/oraInventory
  [root@ractw21 ~]# chown -R grid:oinstall /u01/app/oraInventory

Creating the Oracle Grid Infrastructure Home Directory
To create the Grid Infrastructure home directory, enter the following commands as the root user:

Creating the Oracle Base Directory for Grid User
  [root@ractw21 ~]# mkdir -p /u01/app/grid
  [root@ractw21 ~]# chown -R grid:oinstall /u01/app/grid
  [root@ractw21 ~]# chmod -R 775 /u01/app/grid

Creating  GridHome 
  [root@ractw21 ~]#  mkdir -p /u01/app/122/grid
  [root@ractw21 ~]# chown -R grid:oinstall /u01/app/122/grid
  [root@ractw21 ~]# chmod -R 775 /u01/app/122/grid

Creating the Oracle Base Directory for Database Software 
To create the Oracle Base directory, enter the following commands as the root user:
  [root@ractw21 ~]# mkdir -p /u01/app/oracle
  [root@ractw21 ~]# chown -R oracle:oinstall /u01/app/oracle
  [root@ractw21 ~]# chmod -R 775 /u01/app/oracle

Create the file /etc/oraInst.loc with the following content:

inventory_loc=/u01/app/oraInventory
inst_group=oinstall
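
A small here-document creates the file in one step (a sketch; adjust ownership/permissions to your site's convention):

[root@ractw21 ~]# cat > /etc/oraInst.loc <<EOF
inventory_loc=/u01/app/oraInventory
inst_group=oinstall
EOF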

Modify .bashrc scripts for users oracle and grid

Add the following lines to the grid user's .bashrc script:
export ORACLE_BASE=/u01/app/grid
export ORACLE_SID=+ASM1
export GRID_HOME=/u01/app/122/grid
export ORACLE_HOME=$GRID_HOME
export PATH=$ORACLE_HOME/bin:.:$PATH
export HOST=`/bin/hostname`
alias h=history
alias log='cd $ORACLE_BASE/diag/crs/hract21/crs/trace'
alias trc='cd $ORACLE_BASE/diag/crs/hract21/crs/trace'
#unalias ls 
alias sys='sqlplus / as sysdba'
alias sql='sqlplus scott/tiger'

Add the following lines to the oracle user's .bashrc script:

export ORACLE_BASE=/u01/app/oracle
export ORACLE_SID=ractw21
export ORACLE_HOME=/u01/app/oracle/product/122/ractw2
export PATH=$ORACLE_HOME/bin:.:$PATH
export  LD_LIBRARY_PATH=$ORACLE_HOME/lib:.
export HOST=`/bin/hostname`
alias h=history
#unalias ls 
alias sys='sqlplus / as sysdba'
alias sql='sqlplus scott/tiger'

Verify our  newly created VM with cluvfy

Prepare 12.2 Cluvfy Usage and Cluvfy Log File Location


Create a shared Folder and mount the Installation media
[root@ractw21 ~]# mkdir /kits
[root@ractw21 ~]# mount -t vboxsf kits /kits
[root@ractw21 ~]# su - grid

Extract the GRID zip file on your VBox host computer to get access to the Cluvfy tool ./runcluvfy.sh

The runcluvfy.sh script contains temporary variable definitions which enable it to 
run before you install Oracle Grid Infrastructure or Oracle Database. 
After you install Oracle Grid Infrastructure, use the cluvfy command to check 
prerequisites and perform other system readiness checks.

Add the proper group membership for the grid and oracle users to get access to the GRID media
[root@ractw21 ~]# usermod -G vboxsf -a grid
[root@ractw21 ~]# usermod -G vboxsf -a oracle


Run Cluvfy from 12.2 Installation Media and print the cluvfy version number 

Cluvfy logs [ very useful ] are found under $ORACLE_BASE of our grid user 
[grid@ractw21 ~]$ ls -ld $ORACLE_BASE
drwxrwxr-x. 2 grid oinstall 6 Apr 18 14:47 /u01/app/grid

[grid@ractw21 linuxx64_12201_grid_home]$ cd  /media/sf_kits/Oracle/122/linuxx64_12201_grid_home
[grid@ractw21 linuxx64_12201_grid_home]$  ./runcluvfy.sh -version
12.2.0.1.0 Build 010417x8664

Run cluvfy a first time  with -fixup options

Run cluvfy a first time with the -fixup option before cloning the system 
[grid@ractw21 linuxx64_12201_grid_home]$  cd /media/sf_kits/Oracle/122/linuxx64_12201_grid_home

[grid@ractw21 linuxx64_12201_grid_home]$ ./runcluvfy.sh stage -pre crsinst -fixup -n  ractw21
..
Got a lot of Errors on the first run !

Execute fixup script "/tmp/CVU_12.2.0.1.0_grid/runfixup.sh" as root user on nodes "ractw21" 
to perform the fix up operations manually

Rerunning cluvfy ends up with ONLY the following Error / Warning 
[grid@ractw21 linuxx64_12201_grid_home]$  ./runcluvfy.sh stage -pre crsinst -fixup -n  ractw21

..
Verifying Physical Memory ...FAILED
ractw21: PRVF-7530 : Sufficient physical memory is not available on node
         "ractw21" [Required physical memory = 8GB (8388608.0KB)]

-
CVU operation performed:      stage -pre crsinst
Date:                         Apr 8, 2017 7:40:07 AM
CVU home:                     /media/sf_kits/Oracle/122/linuxx64_12201_grid_home/
User:                         grid

PRVF-7530 : I could not fix this as I have only 16 GByte of memory - but I've spent at least 6 GByte on the RAC VMs


Cluvfy Logs can be found below:  ORACLE_BASE=/u01/app/grid
      /u01/app/grid/crsdata/ractw21/cvu

Testing the DHCP connectivity with cluvfy

Testing DHCP connectivity with cluvfy: 
[grid@ractw21 linuxx64_12201_grid_home]$ runcluvfy.sh comp dhcp -clustername  ractw2 -method root
Enter "ROOT" password:

Verifying Task DHCP configuration check ...
  Verifying IP address availability ...PASSED
  Verifying DHCP response time ...PASSED
Verifying Task DHCP configuration check ...PASSED

Verification of DHCP Check was successful. 

CVU operation performed:      DHCP Check
Date:                         Apr 11, 2017 1:25:39 PM
CVU home:                     /media/sf_kits/Oracle/122/linuxx64_12201_grid_home/
User:                         grid

Our NameServer should log the following in /var/log/messages
Apr 11 13:26:07 ns1 dhcpd: DHCPOFFER on 192.168.5.198 to 00:00:00:00:00:00 via eth1
Apr 11 13:26:07 ns1 dhcpd: DHCPRELEASE of 192.168.5.198 from 00:00:00:00:00:00 via eth1 (found)
Apr 11 13:26:12 ns1 dhcpd: DHCPRELEASE of 192.168.5.198 from 00:00:00:00:00:00 via eth1 (found)
Apr 11 13:26:13 ns1 dhcpd: DHCPDISCOVER from 00:00:00:00:00:00 via eth1
Apr 11 13:26:13 ns1 dhcpd: DHCPOFFER on 192.168.5.252 to 00:00:00:00:00:00 via eth1

Verify DHCP Connectivity at OS Level

Verify DHCP Connectivity at OS Level
[root@ractw21 OEL73]# dhclient enp0s8

Check messages file on our RAC system
[root@ractw21 ~]#   tail -f /var/log/messages
Apr 11 13:11:21 ractw21 dhclient[10998]: DHCPREQUEST on enp0s8 to 255.255.255.255 port 67 (xid=0x43037720)
Apr 11 13:11:21 ractw21 dhclient[10998]: DHCPACK from 192.168.5.50 (xid=0x43037720)
Apr 11 13:11:21 ractw21 NET[11077]: /usr/sbin/dhclient-script : updated /etc/resolv.conf
Apr 11 13:11:21 ractw21 dhclient[10998]: bound to 192.168.5.218 -- renewal in 8225 second

--> Our NameServer successfully answered with a DHCP address !

Clean up the OS test by killing the dhclient process 
[root@ractw21 OEL73]#  ps -efl | grep dhclient
1 S root     11106     1  0  80   0 - 28223 poll_s 13:11 ?        00:00:00 dhclient enp0s8
[root@ractw21 OEL73]# kill -9 11106

Verify GNS Setup

Name Server Entry for GNS
$ORIGIN grid122.example.com.
@       IN          NS        gns122.grid122.example.com. ; NS  grid.example.com
        IN          NS        ns1.example.com.      ; NS example.com
gns122  IN          A         192.168.5.59 ; glue record
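
You can cross-check the delegation from the RAC node before running cluvfy; a sketch using dig (assumes the bind-utils package is installed):

[root@ractw21 ~]# dig @192.168.5.50 NS grid122.example.com +short
# expected: the two NS records configured above ( gns122.grid122.example.com. and ns1.example.com. )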

Verify  whether UDP port 53 is in use on our RAC VM
[root@ractw21 ~]# lsof -P  -i:53 
dnsmasq 2507 nobody    5u  IPv4  22448      0t0  UDP ractw21.example.com:53 
dnsmasq 2507 nobody    6u  IPv4  22449      0t0  TCP ractw21.example.com:53 (LISTEN)

Run cluvfy - should fail  !
[grid@ractw21 linuxx64_12201_grid_home]$  runcluvfy.sh  comp gns -precrsinst -domain grid122.example.com  -vip 192.168.5.59 -verbose 
Verifying GNS Integrity ...
  Verifying subdomain is a valid name ...PASSED
  Verifying GNS VIP is a valid address ...PASSED
  Verifying Port 53 available for component 'GNS' ...FAILED (PRVG-0330)
Verifying GNS Integrity ...FAILED (PRVG-0330)
Verification of GNS integrity was unsuccessful on all the specified nodes. 
Failures were encountered during execution of CVU verification request "GNS integrity".
Verifying GNS Integrity ...FAILED
  Verifying Port 53 available for component 'GNS' ...FAILED
  ractw21: PRVG-0330 : "UDP" port number "53" required for component "GNS" is
           already in use on nodes "ractw21"

As expected cluvfy fails with PRVG-0330 !
 
You may read the following article to identify the service which occupies port #53:
  What process/service occupies a certain port like port 53 ? 

Having done this you may now disable the related service
[root@ractw21 ~]# systemctl stop libvirtd.service
[root@ractw21 ~]# systemctl disable libvirtd.service
You may need to kill the remaining /sbin/dnsmasq processes
[root@ractw21 ~]#  ps -elf | grep dnsmasq
5 S nobody   17124     1  0  80   0 -  3888 poll_s 07:59 ?        00:00:00 /sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper
1 S root     17126 17124  0  80   0 -  3881 pipe_w 07:59 ?        00:00:00 /sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper
0 R root     18099 11262  0  80   0 - 28162 -      08:20 pts/4    00:00:00 grep --color=auto dnsmasq
[root@ractw21 ~]# kill -9 17124 17126

Note: sometimes the dnsmasq service blocks this port. In that case run:
[root@ractw21 ~]# systemctl stop dnsmasq.service
[root@ractw21 ~]# systemctl disable dnsmasq.service
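
After disabling libvirtd/dnsmasq it is worth re-checking that nothing listens on port 53 anymore; lsof exits non-zero when it finds nothing, so a short one-liner (sketch) is enough:

[root@ractw21 ~]# lsof -P -i :53 || echo "port 53 is free"
# no dnsmasq lines should be printed anymore - only the "port 53 is free" message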

Now cluvfy GNS verification should run fine 
[grid@ractw21 linuxx64_12201_grid_home]$ runcluvfy.sh  comp gns -precrsinst -domain grid122.example.com  -vip 192.168.5.59 -verbose
Verifying GNS Integrity ...
  Verifying subdomain is a valid name ...PASSED
  Verifying GNS VIP is a valid address ...PASSED
  Verifying Port 53 available for component 'GNS' ...PASSED
Verifying GNS Integrity ...PASSED

Verification of GNS integrity was successful. 

CVU operation performed:      GNS integrity
Date:                         Apr 11, 2017 12:38:29 PM
CVU home:                     /media/sf_kits/Oracle/122/linuxx64_12201_grid_home/
User:                         grid

Testing subdomain delegation with cluvfy

Start the DNS server 
[grid@ractw21 linuxx64_12201_grid_home]$ ./runcluvfy.sh comp dns -server -domain grid122.example.com -vipaddress 192.168.5.59/255.255.255.0/enp0s8 -verbose -method root
Enter "ROOT" password:

Verifying Task DNS configuration check ...
Waiting for DNS client requests...

--> Server blocks here

Note : If ./runcluvfy.sh comp dns -server doesn't block [ meaning cluvfy returns to the command prompt ] you may read the following article :     
Common cluvfy errors and warnings including first debugging steps  
  Section:: Running cluvfy comp dns -server fails silent – Cluvfy logs show PRCZ-2090 error

Now Start the  DNS client
[grid@ractw21 linuxx64_12201_grid_home]$ runcluvfy.sh comp  dns -client -domain   grid122.example.com -vip  192.168.5.59  -method root -verbose
Enter "ROOT" password:
Verifying Task DNS configuration check ...PASSED
Verification of DNS Check was successful. 
CVU operation performed:      DNS Check
Date:                         Apr 14, 2017 6:18:49 PM
CVU home:                     /media/sf_kits/Oracle/122/linuxx64_12201_grid_home/
User:                         grid

Run the DNS client again and terminate the DNS server by using the -last switch
[grid@ractw21 linuxx64_12201_grid_home]$  runcluvfy.sh comp  dns -client -domain   grid122.example.com -vip  192.168.5.59  -method root -verbose -last
...
Now the server responds but terminates after servicing the request 
Verifying Task DNS configuration check ...PASSED
Verification of DNS Check was successful. 
CVU operation performed:      DNS Check
Date:                         Apr 14, 2017 6:17:53 PM
CVU home:                     /media/sf_kits/Oracle/122/linuxx64_12201_grid_home/
User:                         grid

Current Status

  • At this time we have created a base system which we will use for cloning
  • The following cluvfy commands run successfully
  • Only the first command fails with PRVF-7530 : Sufficient physical memory is not available on node
[grid@ractw21 linuxx64_12201_grid_home]$ cd  /media/sf_kits/Oracle/122/linuxx64_12201_grid_home
[grid@ractw21 linuxx64_12201_grid_home]$ ./runcluvfy.sh stage -pre crsinst -fixup -n  ractw21
[grid@ractw21 linuxx64_12201_grid_home]$ ./runcluvfy.sh  comp gns -precrsinst -domain grid122.example.com  -vip 192.168.5.59 -verbose
[grid@ractw21 linuxx64_12201_grid_home]$ ./runcluvfy.sh comp dhcp -clustername  ractw2 -method root

[grid@ractw21 linuxx64_12201_grid_home]$ ./runcluvfy.sh comp dns -server -domain grid122.example.com -vipaddress 192.168.5.59/255.255.255.0/enp0s8 -verbose -method root
[grid@ractw21 linuxx64_12201_grid_home]$ ./runcluvfy.sh comp  dns -client -domain   grid122.example.com -vip  192.168.5.59  -method root -verbose -last
  • It's now time to shut down the VM and back up the VM and VBox files

Clone ractw21 system

You may change the default machine path first under File -> Preferences:
M:\VM\RACTW2

Cloning ractw21 :
Now cleanly shut down your reference system.
VirtualBox -> Clone [ name the clone ractw22 ] -> generate new network (MAC) addresses -> Full Clone 


Boot the ractw22 VM a first time and retrieve the new MAC addresses:
[root@ractw22 ~]#  dmesg |grep eth
[    8.153534] e1000 0000:00:03.0 eth0: (PCI:33MHz:32-bit) 08:00:27:dd:af:2c
[    8.153539] e1000 0000:00:03.0 eth0: Intel(R) PRO/1000 Network Connection
[    8.565884] e1000 0000:00:08.0 eth1: (PCI:33MHz:32-bit) 08:00:27:05:1d:93
[    8.565888] e1000 0000:00:08.0 eth1: Intel(R) PRO/1000 Network Connection
[    8.956019] e1000 0000:00:09.0 eth2: (PCI:33MHz:32-bit) 08:00:27:7b:22:80
[    8.956023] e1000 0000:00:09.0 eth2: Intel(R) PRO/1000 Network Connection
[    8.957249] e1000 0000:00:03.0 enp0s3: renamed from eth0
[    8.966330] e1000 0000:00:09.0 enp0s9: renamed from eth2
[    8.983008] e1000 0000:00:08.0 enp0s8: renamed from eth1

Change and verify the NEW hostname 
[root@ractw21 ~]# hostnamectl set-hostname ractw22.example.com  --static
[root@ractw21 ~]# hostnamectl status
   Static hostname: ractw22.example.com
Transient hostname: ractw21.example.com
         Icon name: computer-vm
           Chassis: vm
        Machine ID: c173da783e4c45eab1402e69f783ce10
           Boot ID: 3dad3ed3b1f345d3b7fee60778257cd6
    Virtualization: kvm
  Operating System: Oracle Linux Server 7.3
       CPE OS Name: cpe:/o:oracle:linux:7:3:server
            Kernel: Linux 4.1.12-61.1.33.el7uek.x86_64
      Architecture: x86-64


Go to /etc/sysconfig/network-scripts and change IP addresses 
file "ifcfg-enp0s8" IPADDR=192.168.5.142
     "ifcfg-enp0s9" IPADDR=192.168.2.142

Restart the Network 
[root@ractw22 network-scripts]# service network restart

Verify that our network devices have the proper settings
[root@ractw22 ~]# ifconfig
enp0s3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether 08:00:27:dd:af:2c  txqueuelen 1000  (Ethernet)
       
enp0s8: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.5.142  netmask 255.255.255.0  broadcast 192.168.5.255

enp0s9: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.2.142  netmask 255.255.255.0  broadcast 192.168.2.255

Note: as mentioned before, enp0s3 ( our NAT device ) is disabled and has no IPv4 address

Verify RAC nameserver connectivity
[root@ractw22 ~]# nslookup ractw22
Server:        192.168.5.50
Address:    192.168.5.50#53

Name:    ractw22.example.com
Address: 192.168.5.142
 
Now boot the ractw21 VM and check IP connectivity
[root@ractw22 ~]#  ping ractw21
PING ractw21.example.com (192.168.5.141) 56(84) bytes of data.
64 bytes from ractw21.example.com (192.168.5.141): icmp_seq=1 ttl=64 time=0.663 ms
 64 bytes from ractw21.example.com (192.168.5.141): icmp_seq=2 ttl=64 time=0.309 ms
..
[root@ractw22 ~]# ping ractw21int
PING ractw21int.example.com (192.168.2.141) 56(84) bytes of data.
64 bytes from ractw21int.example.com (192.168.2.141): icmp_seq=1 ttl=64 time=0.210 ms
64 bytes from ractw21int.example.com (192.168.2.141): icmp_seq=2 ttl=64 time=0.264 ms
...
Check disk space and memory 
[root@ractw22 ~]# free
              total        used        free      shared  buff/cache   available
Mem:        5700592      528364     4684328       11256      487900     5083212
Swap:       8257532           0     8257532
[root@ractw22 ~]# df / /u01
Filesystem                  1K-blocks    Used Available Use% Mounted on
/dev/mapper/ol_ractw21-root  15718400 9381788   6336612  60% /
/dev/mapper/ol_ractw21-u01   15718400   46552  15671848   1% /u01


Log in to ractw21
[root@ractw21 ~]# ping ractw22
PING ractw22.example.com (192.168.5.142) 56(84) bytes of data.
64 bytes from ractw22.example.com (192.168.5.142): icmp_seq=1 ttl=64 time=0.216 ms
64 bytes from ractw22.example.com (192.168.5.142): icmp_seq=2 ttl=64 time=0.303 ms
...

[root@ractw21 ~]#  ping ractw22int
PING ractw22int.example.com (192.168.2.142) 56(84) bytes of data.
64 bytes from ractw22int.example.com (192.168.2.142): icmp_seq=1 ttl=64 time=0.266 ms
64 bytes from ractw22int.example.com (192.168.2.142): icmp_seq=2 ttl=64 time=0.445 ms

Create and attach ASM Disks

Create ASM disks 

M:\VM>cd RACTW2
M:\VM\RACTW2>VBoxManage createhd --filename M:\VM\RACTW2\asm1_122_20G.vdi --size 20480 --format VDI --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Medium created. UUID: 886d76b0-eeb0-43e4-9ab2-bd7d9e8ad879

M:\VM\RACTW2>VBoxManage createhd --filename M:\VM\RACTW2\asm2_122_20G.vdi --size 20480 --format VDI --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Medium created. UUID: be823513-0d1c-4b25-af29-fe504e28e910

M:\VM\RACTW2>VBoxManage createhd --filename M:\VM\RACTW2\asm3_122_20G.vdi --size 20480 --format VDI --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Medium created. UUID: 3238860f-a90f-459c-965b-f93773f2ad38

M:\VM\RACTW2>VBoxManage createhd --filename M:\VM\RACTW2\asm4_122_20G.vdi --size 20480 --format VDI --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Medium created. UUID: 185218e4-a7f6-4d29-ae7c-dc7499b05238

M:\VM\RACTW2>VBoxManage modifyhd  asm1_122_20G.vdi  --type shareable
M:\VM\RACTW2>VBoxManage modifyhd  asm2_122_20G.vdi  --type shareable
M:\VM\RACTW2>VBoxManage modifyhd  asm3_122_20G.vdi  --type shareable
M:\VM\RACTW2>VBoxManage modifyhd  asm4_122_20G.vdi  --type shareable

Attach Disks to ractw21
M:\VM\RACTW2>VBoxManage storageattach ractw21 --storagectl "SATA" --port 1 --device 0 --type hdd --medium asm1_122_20G.vdi  --mtype shareable
M:\VM\RACTW2>VBoxManage storageattach ractw21 --storagectl "SATA" --port 2 --device 0 --type hdd --medium asm2_122_20G.vdi  --mtype shareable
M:\VM\RACTW2>VBoxManage storageattach ractw21 --storagectl "SATA" --port 3 --device 0 --type hdd --medium asm3_122_20G.vdi  --mtype shareable
M:\VM\RACTW2>VBoxManage storageattach ractw21 --storagectl "SATA" --port 4 --device 0 --type hdd --medium asm4_122_20G.vdi  --mtype shareable


Attach the ASM disks to ractw22
M:\VM\RACTW2>VBoxManage storageattach ractw22 --storagectl "SATA" --port 1 --device 0 --type hdd --medium asm1_122_20G.vdi  --mtype shareable
M:\VM\RACTW2>VBoxManage storageattach ractw22 --storagectl "SATA" --port 2 --device 0 --type hdd --medium asm2_122_20G.vdi  --mtype shareable
M:\VM\RACTW2>VBoxManage storageattach ractw22 --storagectl "SATA" --port 3 --device 0 --type hdd --medium asm3_122_20G.vdi  --mtype shareable
M:\VM\RACTW2>VBoxManage storageattach ractw22 --storagectl "SATA" --port 4 --device 0 --type hdd --medium asm4_122_20G.vdi  --mtype shareable

Use parted to create a single disk partition
[root@ractw21 ~]# parted /dev/sdb
GNU Parted 3.1
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) print                                                            
Error: /dev/sdb: unrecognised disk label
Model: ATA VBOX HARDDISK (scsi)                                           
Disk /dev/sdb: 21.5GB
Sector size (logical/physical): 512B/512B
Partition Table: unknown
Disk Flags: 

(parted) mklabel msdos
(parted) print                                                            
Model: ATA VBOX HARDDISK (scsi)
Disk /dev/sdb: 21.5GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags: 
Number  Start  End  Size  Type  File system  Flags

(parted) mkpart                                                           
Partition type?  primary/extended? primary                                
File system type?  [ext2]?                                                
Start?                                                                  
End? -1                                                                   
(parted) print                                                            
Model: ATA VBOX HARDDISK (scsi)
Disk /dev/sdb: 21.5GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags: 
Number  Start   End     Size    Type     File system  Flags
 1      1049kB  21.5GB  21.5GB  primary

Display in disk sectors
(parted) unit s print
Model: ATA VBOX HARDDISK (scsi)
Disk /dev/sdb: 41943040s
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags: 
Number  Start  End        Size       Type     File system  Flags
 1      2048s  41940991s  41938944s  primary

Repeat this for /dev/sdc /dev/sdd and /dev/sde
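
The remaining disks can also be partitioned non-interactively; a sketch using parted's script mode (-s) with the same layout as above:

# Sketch: label each remaining disk and create one primary partition spanning it
for d in /dev/sdc /dev/sdd /dev/sde; do
  parted -s $d mklabel msdos mkpart primary 1MiB 100%
  parted -s $d unit s print
done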

Create the following bash script:
[root@ractw21 ~]# cat ./check_wwid.sh 
#!/bin/bash
#
#Usage:  As root user run : ./check_wwid.sh 
#
for FILE in `find /dev -name "sd*" | sort`
   do
     WWID=`/lib/udev/scsi_id --whitelisted --replace-whitespace --device=$FILE `
     echo $FILE " WWID:  "  $WWID
   done

Display the disk WWIDs
[root@ractw21 ~]# ./check_wwid.sh 
/dev/sda  WWID:   1ATA_VBOX_HARDDISK_VB9a03ce8e-7e44db17
/dev/sda1  WWID:   1ATA_VBOX_HARDDISK_VB9a03ce8e-7e44db17
/dev/sda2  WWID:   1ATA_VBOX_HARDDISK_VB9a03ce8e-7e44db17
/dev/sdb  WWID:   1ATA_VBOX_HARDDISK_VB4f9101ae-136f6f41
/dev/sdb1  WWID:   1ATA_VBOX_HARDDISK_VB4f9101ae-136f6f41
/dev/sdc  WWID:   1ATA_VBOX_HARDDISK_VBe92a2992-3bbc2c78
/dev/sdc1  WWID:   1ATA_VBOX_HARDDISK_VBe92a2992-3bbc2c78
/dev/sdd  WWID:   1ATA_VBOX_HARDDISK_VBed1a6a99-97750413
/dev/sdd1  WWID:   1ATA_VBOX_HARDDISK_VBed1a6a99-97750413
/dev/sde  WWID:   1ATA_VBOX_HARDDISK_VBc47b21a6-a7b01781
/dev/sde1  WWID:   1ATA_VBOX_HARDDISK_VBc47b21a6-a7b01781



Add the following to the "/etc/scsi_id.config" file to configure SCSI devices as trusted. Create the file if it doesn't already exist.
options=-g
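
The /etc/udev/rules.d/99-oracle-asmdevices.rules file referenced below is not listed in full in this article; here is a minimal sketch of what such a rule file can look like, built from the WWIDs printed by check_wwid.sh above (adjust WWIDs and symlink names to your own output):

# /etc/udev/rules.d/99-oracle-asmdevices.rules  (sketch only)
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VB4f9101ae-136f6f41", SYMLINK+="oracleasm/asmdisk1_sdb1", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VBe92a2992-3bbc2c78", SYMLINK+="oracleasm/asmdisk2_sdc1", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VBed1a6a99-97750413", SYMLINK+="oracleasm/asmdisk3_sdd1", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VBc47b21a6-a7b01781", SYMLINK+="oracleasm/asmdisk4_sde1", OWNER="grid", GROUP="asmadmin", MODE="0660"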


[root@ractw21 ~]# /sbin/partprobe /dev/sdb1
[root@ractw21 ~]# /sbin/partprobe /dev/sdc1
[root@ractw21 ~]# /sbin/partprobe /dev/sdd1
[root@ractw21 ~]# /sbin/partprobe /dev/sde1

[root@ractw21 ~]# /sbin/udevadm control --reload-rules

[root@ractw21 ~]#  ls -l /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
brw-rw----. 1 grid   asmadmin 8, 17 Apr 19 15:53 /dev/sdb1
brw-rw----. 1 grid   asmadmin 8, 33 Apr 19 15:53 /dev/sdc1
brw-rw----. 1 grid   asmadmin 8, 49 Apr 19 15:53 /dev/sdd1
brw-rw----. 1 grid   asmadmin 8, 65 Apr 19 15:53 /dev/sde1

[root@ractw21 ~]#  ls -l /dev/oracleasm
total 0
lrwxrwxrwx. 1 root root 7 Apr 19 15:53 asmdisk1_sdb1 -> ../sdb1
lrwxrwxrwx. 1 root root 7 Apr 19 15:53 asmdisk2_sdc -> ../sdc1
lrwxrwxrwx. 1 root root 7 Apr 19 15:53 asmdisk3_sdd -> ../sdd1
lrwxrwxrwx. 1 root root 7 Apr 19 15:53 asmdisk_sde -> ../sde1


Copy scsi_id.config and 99-oracle-asmdevices.rules to ractw22 
[root@ractw21 ~]# scp  /etc/scsi_id.config ractw22:/etc
scsi_id.config                                                                                    100%   11     0.0KB/s   00:00    
[root@ractw21 ~]# scp  /etc/udev/rules.d/99-oracle-asmdevices.rules  ractw22:/etc/udev/rules.d/
99-oracle-asmdevices.rules                                                                        100%  887     0.9KB/s   00:00 

Reboot ractw22 and verify ASM disks 
[root@ractw22 ~]#  ls -l /dev/oracleasm
total 0
lrwxrwxrwx. 1 root root 7 Apr 19 16:06 asmdisk1_sdb1 -> ../sdb1
lrwxrwxrwx. 1 root root 7 Apr 19 16:06 asmdisk2_sdc1 -> ../sdc1
lrwxrwxrwx. 1 root root 7 Apr 19 16:06 asmdisk3_sdd1 -> ../sdd1
lrwxrwxrwx. 1 root root 7 Apr 19 16:06 asmdisk_sde1 -> ../sde1

[root@ractw22 ~]# ls -l /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
brw-rw----. 1 grid asmadmin 8, 17 Apr 19 16:06 /dev/sdb1
brw-rw----. 1 grid asmadmin 8, 33 Apr 19 16:06 /dev/sdc1
brw-rw----. 1 grid asmadmin 8, 49 Apr 19 16:06 /dev/sdd1
brw-rw----. 1 grid asmadmin 8, 65 Apr 19 16:06 /dev/sde1

Set up ssh connectivity for the grid user with cluvfy

Clean up the .ssh directories which may contain inconsistent data due to cloning
On node ractw21
[grid@ractw21 linuxx64_12201_grid_home]$  rm -rf /home/grid/.ssh
[grid@ractw21 linuxx64_12201_grid_home]$  ssh grid@ractw22

On node ractw22
[grid@ractw22 linuxx64_12201_grid_home]$  rm -rf /home/grid/.ssh
[grid@ractw22 linuxx64_12201_grid_home]$  ssh grid@ractw21

Now run cluvfy with -fixup option
[grid@ractw21 linuxx64_12201_grid_home]$ runcluvfy.sh comp admprv -n "ractw21,ractw22" -o user_equiv -verbose -fixup
Verifying User Equivalence ...
  Node Name                             Status                  
  ------------------------------------  ------------------------
  ractw22                               failed                  
Verifying User Equivalence ...FAILED (PRVG-2019, PRKN-1038)

Verification of administrative privileges was unsuccessful. 
Checks did not pass for the following nodes:
    ractw22
Failures were encountered during execution of CVU verification request "administrative privileges".

Verifying User Equivalence ...FAILED
ractw22: PRVG-2019 : Check for equivalence of user "grid" from node "ractw21"
         to node "ractw22" failed
         PRKN-1038 : The command "/usr/bin/ssh -o FallBackToRsh=no  -o
         PasswordAuthentication=no  -o StrictHostKeyChecking=yes  -o
         NumberOfPasswordPrompts=0  ractw22 -n /bin/true" run on node "ractw21"
         gave an unexpected output: "Permission denied
         (publickey,gssapi-keyex,gssapi-with-mic,password)."

CVU operation performed:      administrative privileges
Date:                         Apr 19, 2017 5:05:53 PM
CVU home:                     /media/sf_kits/Oracle/122/linuxx64_12201_grid_home/
User:                         grid
Setting up SSH user equivalence for user "grid" between nodes "ractw21,ractw22"
Enter "grid" password:
SSH user equivalence was successfully set up for user "grid" on nodes "ractw21,ractw22"

Now rerun the command :
[grid@ractw21 linuxx64_12201_grid_home]$  runcluvfy.sh comp admprv -n "ractw21,ractw22" -o user_equiv -verbose 
Verifying User Equivalence ...
  Node Name                             Status                  
  ------------------------------------  ------------------------
  ractw22                               passed                  
  Verifying Checking user equivalence for user "grid" on all cluster nodes ...PASSED
  From node     To node                   Status                  
  ------------  ------------------------  ------------------------
  ractw22       ractw21                   SUCCESSFUL              
Verifying User Equivalence ...PASSED

Verification of administrative privileges was successful. 
CVU operation performed:      administrative privileges
Date:                         Apr 19, 2017 5:18:21 PM
CVU home:                     /media/sf_kits/Oracle/122/linuxx64_12201_grid_home/
User:                         grid

Verify at OS level:
[grid@ractw21 linuxx64_12201_grid_home]$  ssh grid@ractw22
Last login: Wed Apr 19 17:14:22 2017 from ractw21.example.com

[grid@ractw22 ~]$ ssh grid@ractw21
Last login: Wed Apr 19 17:12:16 2017 from ractw22.example.com




Run cluvfy  a first time against both nodes

  • with Network  Connectivity checks
  • with ASM Disk checks
 
[grid@ractw21 ~]$  cd  /media/sf_kits/Oracle/122/linuxx64_12201_grid_home
[grid@ractw21 linuxx64_12201_grid_home]$ runcluvfy.sh stage -pre crsinst -asm -presence local -asmgrp asmadmin  \
>     -asmdev /dev/oracleasm/asmdisk1_sdb1,/dev/oracleasm/asmdisk2_sdc1,/dev/oracleasm/asmdisk3_sdd1,/dev/oracleasm/asmdisk4_sde1     \
>     -networks enp0s8:192.168.5.0:PUBLIC/enp0s9:192.168.2.0:cluster_interconnect  \
>     -n "ractw21,ractw22" 

Verifying Physical Memory ...FAILED (PRVF-7530)
Verifying Available Physical Memory ...PASSED
Verifying Swap Size ...PASSED
Verifying Free Space: ractw22:/usr,ractw22:/var,ractw22:/etc,ractw22:/sbin,ractw22:/tmp ...PASSED
Verifying Free Space: ractw21:/usr,ractw21:/var,ractw21:/etc,ractw21:/sbin,ractw21:/tmp ...PASSED
..
Verifying Node Connectivity ...
  Verifying Hosts File ...PASSED
  Verifying Check that maximum (MTU) size packet goes through subnet ...PASSED
  Verifying subnet mask consistency for subnet "192.168.2.0" ...PASSED
  Verifying subnet mask consistency for subnet "192.168.5.0" ...PASSED
Verifying Node Connectivity ...PASSED
Verifying Multicast check ...PASSED
Verifying Device Checks for ASM ...
  Verifying ASM device sharedness check ...
    Verifying Package: cvuqdisk-1.0.10-1 ...PASSED
    Verifying Shared Storage Accessibility:/dev/oracleasm/asmdisk1_sdb1,/dev/oracleasm/asmdisk3_sdd1,/dev/oracleasm/asmdisk2_sdc1,/dev/oracleasm/asmdisk4_sde1 ...PASSED
  Verifying ASM device sharedness check ...PASSED
  Verifying Access Control List check ...PASSED
Verifying Device Checks for ASM ...PASSED
...

Verifying ASM Filter Driver configuration ...PASSED
Pre-check for cluster services setup was unsuccessful on all the nodes. 
Failures were encountered during execution of CVU verification request "stage -pre crsinst".
Verifying Physical Memory ...FAILED
ractw22: PRVF-7530 : Sufficient physical memory is not available on node
         "ractw22" [Required physical memory = 8GB (8388608.0KB)]

ractw21: PRVF-7530 : Sufficient physical memory is not available on node
         "ractw21" [Required physical memory = 8GB (8388608.0KB)]

CVU operation performed:      stage -pre crsinst
Date:                         Apr 19, 2017 6:12:00 PM
CVU home:                     /media/sf_kits/Oracle/122/linuxx64_12201_grid_home/
User:                         grid

Again the above failures were expected due to memory constraints 

Install GRID software

Unzip and install GRID software

$ cd $GRID_HOME
$ unzip -q $SOFTWARE_LOCATION/linuxx64_12201_grid_home.zip

$ ./gridSetup.sh

-> Configure a standard cluster
-> Advanced Installation
   Cluster name : ractw2
   Scan name    : ractw2-scan
   Scan port    : 1521
   -> Create New GNS
      GNS VIP address: 192.168.5.59
      GNS Sub domain : grid122.example.com
  Public Hostname           Virtual Hostname 
  ractw21.example.com        AUTO
  ractw22.example.com        AUTO

-> Test and Setup SSH connectivity
-> Setup network Interfaces
   enp0s3: don't use
   enp0s8: PUBLIC                              192.168.5.X
   enp0s9: Private Cluster_Interconnect,ASM    192.168.2.X
 
-> Configure GRID Infrastructure: YES
-> Use standard ASM for storage
-> ASM setup
   Diskgroup         : DATA
   Disk discover PATH: /dev/asm*
--> Don't use IPMI

Run the root scripts as requested by the installer !
[root@ractw22 etc]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.

[root@ractw21 etc]# /u01/app/122/grid/root.sh

[root@ractw22 etc]# /u01/app/122/grid/root.sh

Verify GRID Installation status

Verify Grid Installation Status using crsctl stat res 
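
The crsi and crs commands used below are the author's own helper aliases/scripts (their definitions are not shown in this article); roughly equivalent standard commands would be (sketch):

crsctl stat res -t -init     # local (init) resources, similar to "crsi"
crsctl stat res -t           # cluster resources, similar to "crs"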

[grid@ractw21 ~]$  crsi

*****  Local Resources: *****
Resource NAME               INST   TARGET       STATE        SERVER          STATE_DETAILS
--------------------------- ----   ------------ ------------ --------------- -----------------------------------------
ora.asm                        1   ONLINE       ONLINE       ractw21         STABLE  
ora.cluster_interconnect.haip  1   ONLINE       ONLINE       ractw21         STABLE  
ora.crf                        1   ONLINE       ONLINE       ractw21         STABLE  
ora.crsd                       1   ONLINE       ONLINE       ractw21         STABLE  
ora.cssd                       1   ONLINE       ONLINE       ractw21         STABLE  
ora.cssdmonitor                1   ONLINE       ONLINE       ractw21         STABLE  
ora.ctssd                      1   ONLINE       ONLINE       ractw21         OBSERVER,STABLE  
ora.diskmon                    1   OFFLINE      OFFLINE      -               STABLE  
ora.driver.afd                 1   ONLINE       ONLINE       ractw21         STABLE  
ora.drivers.acfs               1   ONLINE       ONLINE       ractw21         STABLE  
ora.evmd                       1   ONLINE       ONLINE       ractw21         STABLE  
ora.gipcd                      1   ONLINE       ONLINE       ractw21         STABLE  
ora.gpnpd                      1   ONLINE       ONLINE       ractw21         STABLE  
ora.mdnsd                      1   ONLINE       ONLINE       ractw21         STABLE  
ora.storage                    1   ONLINE       ONLINE       ractw21         STABLE  

[grid@ractw21 ~]$ crs
*****  Local Resources: *****
Resource NAME                  TARGET     STATE           SERVER       STATE_DETAILS                       
-------------------------      ---------- ----------      ------------ ------------------                  
ora.ASMNET1LSNR_ASM.lsnr       ONLINE     ONLINE          ractw21      STABLE   
ora.ASMNET1LSNR_ASM.lsnr       ONLINE     ONLINE          ractw22      STABLE   
ora.DATA.dg                    ONLINE     ONLINE          ractw21      STABLE   
ora.DATA.dg                    ONLINE     ONLINE          ractw22      STABLE   
ora.LISTENER.lsnr              ONLINE     ONLINE          ractw21      STABLE   
ora.LISTENER.lsnr              ONLINE     ONLINE          ractw22      STABLE   
ora.net1.network               ONLINE     ONLINE          ractw21      STABLE   
ora.net1.network               ONLINE     ONLINE          ractw22      STABLE   
ora.ons                        ONLINE     ONLINE          ractw21      STABLE   
ora.ons                        ONLINE     ONLINE          ractw22      STABLE   
ora.proxy_advm                 OFFLINE    OFFLINE         ractw21      STABLE   
ora.proxy_advm                 OFFLINE    OFFLINE         ractw22      STABLE   
*****  Cluster Resources: *****
Resource NAME               INST   TARGET       STATE        SERVER          STATE_DETAILS
--------------------------- ----   ------------ ------------ --------------- -----------------------------------------
ora.LISTENER_SCAN1.lsnr        1   ONLINE       ONLINE       ractw22         STABLE  
ora.LISTENER_SCAN2.lsnr        1   ONLINE       ONLINE       ractw21         STABLE  
ora.LISTENER_SCAN3.lsnr        1   ONLINE       ONLINE       ractw21         STABLE  
ora.MGMTLSNR                   1   ONLINE       ONLINE       ractw21         169.254.51.105 192.168.2.141,STABLE
ora.asm                        1   ONLINE       ONLINE       ractw21         Started,STABLE  
ora.asm                        2   ONLINE       ONLINE       ractw22         Started,STABLE  
ora.asm                        3   OFFLINE      OFFLINE      -               STABLE  
ora.cvu                        1   ONLINE       ONLINE       ractw21         STABLE  
ora.gns                        1   ONLINE       ONLINE       ractw21         STABLE  
ora.gns.vip                    1   ONLINE       ONLINE       ractw21         STABLE  
ora.mgmtdb                     1   ONLINE       ONLINE       ractw21         Open,STABLE  
ora.qosmserver                 1   ONLINE       ONLINE       ractw21         STABLE  
ora.ractw21.vip                1   ONLINE       ONLINE       ractw21         STABLE  
ora.ractw22.vip                1   ONLINE       ONLINE       ractw22         STABLE  
ora.scan1.vip                  1   ONLINE       ONLINE       ractw22         STABLE  
ora.scan2.vip                  1   ONLINE       ONLINE       ractw21         STABLE  
ora.scan3.vip                  1   ONLINE       ONLINE       ractw21         STABLE
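
Note: the crs command used above is not an Oracle-supplied utility but a local helper script; assuming it merely reformats the clusterware resource status, a plain-Oracle equivalent is:
[grid@ractw21 ~]$ crsctl stat res -t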

Verify Grid Installation Status by using cluvfy

[grid@ractw21 ~]$ cluvfy stage -post crsinst -n ractw21,ractw22

Verifying Node Connectivity ...
  Verifying Hosts File ...PASSED
  Verifying Check that maximum (MTU) size packet goes through subnet ...PASSED
  Verifying subnet mask consistency for subnet "192.168.2.0" ...PASSED
  Verifying subnet mask consistency for subnet "192.168.5.0" ...PASSED
Verifying Node Connectivity ...PASSED
Verifying Multicast check ...PASSED
Verifying ASM filter driver configuration consistency ...PASSED
Verifying Time zone consistency ...PASSED
Verifying Cluster Manager Integrity ...PASSED
Verifying User Mask ...PASSED
Verifying Cluster Integrity ...PASSED
Verifying OCR Integrity ...PASSED
Verifying CRS Integrity ...
  Verifying Clusterware Version Consistency ...PASSED
Verifying CRS Integrity ...PASSED
Verifying Node Application Existence ...PASSED
Verifying Single Client Access Name (SCAN) ...
  Verifying DNS/NIS name service 'ractw2-scan.ractw2.grid122.example.com' ...
    Verifying Name Service Switch Configuration File Integrity ...PASSED
  Verifying DNS/NIS name service 'ractw2-scan.ractw2.grid122.example.com' ...PASSED
Verifying Single Client Access Name (SCAN) ...PASSED
Verifying OLR Integrity ...PASSED
Verifying GNS Integrity ...
  Verifying subdomain is a valid name ...PASSED
  Verifying GNS VIP belongs to the public network ...PASSED
  Verifying GNS VIP is a valid address ...PASSED
  Verifying name resolution for GNS sub domain qualified names ...PASSED
  Verifying GNS resource ...PASSED
  Verifying GNS VIP resource ...PASSED
Verifying GNS Integrity ...PASSED
Verifying Voting Disk ...PASSED
Verifying ASM Integrity ...
  Verifying Node Connectivity ...
    Verifying Hosts File ...PASSED
    Verifying Check that maximum (MTU) size packet goes through subnet ...PASSED
    Verifying subnet mask consistency for subnet "192.168.2.0" ...PASSED
    Verifying subnet mask consistency for subnet "192.168.5.0" ...PASSED
  Verifying Node Connectivity ...PASSED
Verifying ASM Integrity ...PASSED
Verifying Device Checks for ASM ...PASSED
Verifying ASM disk group free space ...PASSED
Verifying I/O scheduler ...
  Verifying Package: cvuqdisk-1.0.10-1 ...PASSED
Verifying I/O scheduler ...PASSED
Verifying User Not In Group "root": grid ...PASSED
Verifying Clock Synchronization ...
CTSS is in Observer state. Switching over to clock synchronization checks using NTP

  Verifying Network Time Protocol (NTP) ...
    Verifying '/etc/chrony.conf' ...PASSED
    Verifying '/var/run/chronyd.pid' ...PASSED
    Verifying Daemon 'chronyd' ...PASSED
    Verifying NTP daemon or service using UDP port 123 ...PASSED
    Verifying chrony daemon is synchronized with at least one external time source ...PASSED
  Verifying Network Time Protocol (NTP) ...PASSED
Verifying Clock Synchronization ...PASSED
Verifying Network configuration consistency checks ...PASSED
Verifying File system mount options for path GI_HOME ...PASSED

Post-check for cluster services setup was successful. 

CVU operation performed:      stage -post crsinst
Date:                         Apr 20, 2017 10:00:30 AM
CVU home:                     /u01/app/122/grid/
User:                         grid
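
If any check fails, re-running the same stage with -verbose prints the per-node details; redirecting the output keeps a record for later review (illustrative invocation only):
[grid@ractw21 ~]$ cluvfy stage -post crsinst -n ractw21,ractw22 -verbose > /tmp/cluvfy_post_crsinst.log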

Verify GNS installation with OS tools

[root@ractw22 ~]#  dig @192.168.5.50 ractw2-scan.grid122.example.com
...
;ractw2-scan.grid122.example.com. IN    A

;; ANSWER SECTION:
ractw2-scan.grid122.example.com. 0 IN    A    192.168.5.223
ractw2-scan.grid122.example.com. 0 IN    A    192.168.5.221
ractw2-scan.grid122.example.com. 0 IN    A    192.168.5.222

;; AUTHORITY SECTION:
grid122.example.com.    3600    IN    NS    ns1.example.com.
grid122.example.com.    3600    IN    NS    gns122.grid122.example.com.

;; ADDITIONAL SECTION:
ns1.example.com.    3600    IN    A    192.168.5.50

;; Query time: 11 msec
;; SERVER: 192.168.5.50#53(192.168.5.50)
;; WHEN: Thu Apr 20 09:49:09 CEST 2017
;; MSG SIZE  rcvd: 163

[grid@ractw21 ~]$ nslookup ractw2-scan.grid122.example.com
Server:        192.168.5.50
Address:    192.168.5.50#53

Non-authoritative answer:
Name:    ractw2-scan.grid122.example.com
Address: 192.168.5.223
Name:    ractw2-scan.grid122.example.com
Address: 192.168.5.221
Name:    ractw2-scan.grid122.example.com
Address: 192.168.5.222

[grid@ractw21 ~]$  srvctl config gns -a -list
GNS is enabled.
GNS is listening for DNS server requests on port 53
GNS is using port 5353 to connect to mDNS
GNS status: OK
Domain served by GNS: grid122.example.com
GNS version: 12.2.0.1.0
..
ractw2-scan.ractw2 A 192.168.5.221 Unique Flags: 0x81
ractw2-scan.ractw2 A 192.168.5.222 Unique Flags: 0x81
ractw2-scan.ractw2 A 192.168.5.223 Unique Flags: 0x81
ractw2-scan1-vip.ractw2 A 192.168.5.221 Unique Flags: 0x81
ractw2-scan2-vip.ractw2 A 192.168.5.223 Unique Flags: 0x81
..
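
GNS itself answers DNS queries for the delegated subdomain, so you can also query the GNS VIP directly (a sketch only; replace <gns_vip> with the address reported by srvctl config gns -a):
[grid@ractw21 ~]$ dig @<gns_vip> ractw2-scan.ractw2.grid122.example.com +short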

Prepare RDBMS software installation

  • Extract the database zip file on the VirtualBox host
Set up SSH connectivity for the oracle user by using sshUserSetup.sh

[oracle@ractw21 sshsetup]$ cd /media/sf_kits/ORACLE/122/linuxx64_12201_database/database/sshsetup
[oracle@ractw21 sshsetup]$ ./sshUserSetup.sh -user oracle -hosts "ractw21 ractw22" -advanced -exverify -noPromptPassphrase

Verify setup
[oracle@ractw22 ~]$ /usr/bin/ssh -x -l oracle ractw21
Last login: Thu Apr 20 13:32:51 2017

[oracle@ractw21 ~]$  /usr/bin/ssh -x -l oracle ractw22
Last login: Thu Apr 20 13:43:35 2017 from ractw21.example.com
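
If sshUserSetup.sh is not at hand, passwordless SSH for the oracle user can also be set up manually with standard OpenSSH tools (a minimal sketch, to be repeated in both directions):
[oracle@ractw21 ~]$ ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
[oracle@ractw21 ~]$ ssh-copy-id oracle@ractw22
[oracle@ractw21 ~]$ ssh -x -l oracle ractw22 date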

In case of trouble you may read the following article:

Verify the oracle account for the RDBMS installation with cluvfy

 
[oracle@ractw21 ~]$ /u01/app/122/grid/bin/cluvfy stage -pre  dbinst  -n  ractw21,ractw22 -d /u01/app/oracle/product/122/ractw2 -fixup

Verifying Physical Memory ...PASSED
Verifying Available Physical Memory ...PASSED
Verifying Swap Size ...PASSED
Verifying Free Space: ractw22:/u01/app/oracle/product/122/ractw2 ...PASSED
Verifying Free Space: ractw22:/tmp ...PASSED
Verifying Free Space: ractw21:/u01/app/oracle/product/122/ractw2 ...PASSED
Verifying Free Space: ractw21:/tmp ...PASSED
Verifying User Existence: oracle ...
  Verifying Users With Same UID: 1002 ...PASSED
...
Verifying Maximum locked memory check ...PASSED

Pre-check for database installation was successful. 

CVU operation performed:      stage -pre dbinst
Date:                         Apr 20, 2017 1:49:00 PM
CVU home:                     /u01/app/122/grid/
User:                         oracle

Install RDBMS software

Allow and check X-Window usage 
[root@ractw21 ~]# xhost +
access control disabled, clients can connect from any host
[oracle@ractw21 ~]$ export DISPLAY=:0.0
[oracle@ractw21 ~]$ xclock
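
As an alternative to xhost + and a local DISPLAY, SSH X11 forwarding also works (assuming X11Forwarding is enabled in the sshd configuration):
$ ssh -X oracle@ractw21
[oracle@ractw21 ~]$ xclock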

[oracle@ractw21 ~]$ cd /media/sf_kits/ORACLE/122/linuxx64_12201_database/database/
[oracle@ractw21 database]$ ./runInstaller
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 500 MB.   Actual 5972 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 7977 MB    Passed
Checking monitor: must be configured to display at least 256 colors.    Actual 16777216    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2017-04-20_02-01-56PM. Please wait ...

-> Select "Install database software only" option
-> Select "Oracle Real Application Clusters database installation" option 
-> Select Oracle Nodes: ractw21, ractw22
-> Test SSH Connectivity
-> Select Enterprise Edition
-> Select Group: dba 
-> Select ORACLE_BASE:  /u01/app/oracle
          ORACLE_HOME:  /u01/app/oracle/product/122/ractw2

-> Press Install Button 

-> Execute the root scripts when prompted (see the sketch below)
   ractw21 :
   ractw22 :  
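
The root script for a software-only RDBMS installation lives in the new ORACLE_HOME selected above, so the calls should look like this (run as root on each node):
[root@ractw21 ~]# /u01/app/oracle/product/122/ractw2/root.sh
[root@ractw22 ~]# /u01/app/oracle/product/122/ractw2/root.sh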

Create a RAC database with dbca

Create a RAC database with dbca – Administrator managed

[oracle@ractw21 ~]$ id
uid=1002(oracle) gid=1001(oinstall) groups=1001(oinstall),983(vboxsf),1002(dba),1006(asmdba) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023

[oracle@ractw21 database]$ dbca
--> Create and Configure a Database
  --> Oracle Real Application Cluster database 
   --> Policy Managed : false 
    --> General Purpose or Transaction Processing
     --> Server Pool:  Top_Priority Cardinality :2
      --> Global database Name : ractw2.example.com
       --> Use local UNDO for PDBs
       --> Create as Container db 
        --> Leave ASM settings unchanged 
         --> Enable Archiving
         --> Enable FRA
         --> recoveryAreaDestination=+DATA
          --> Limit SGA, PGA
              sga_target=1367MB
              pga_aggregate_target=456MB
            --> password : sys/sys
 
       --> PDB Name : ractw2pdb
     --> Select both RAC members
      --> Test/Create SSH connectivity
       --> Advanced Install 
        --> Select General Purpose / Transaction Processing database type
         --> recoveryAreaDestination=+DATA
          --> Select ASM and for OSDBA use group: dba ( default )
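
For reference, roughly the same database could be created non-interactively with dbca in silent mode; this is only a sketch of the flags (not the exact run used here; verify the options against dbca -help for your release):
[oracle@ractw21 ~]$ dbca -silent -createDatabase \
  -templateName General_Purpose.dbc \
  -gdbName ractw2.example.com -sid ractw \
  -databaseConfigType RAC -nodelist ractw21,ractw22 \
  -createAsContainerDatabase true -numberOfPDBs 1 -pdbName ractw2pdb -pdbAdminPassword sys \
  -storageType ASM -diskGroupName DATA -recoveryGroupName DATA \
  -sysPassword sys -systemPassword sys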


Convert Administrator Managed Database to Policy Managed

Verify setup by connecting via Scan Name

Find the Scan Name 
[grid@ractw21 ~]$  srvctl config scan
SCAN name: ractw2-scan.ractw2.grid122.example.com, Network: 1
Subnet IPv4: 192.168.5.0/255.255.255.0/enp0s8, dhcp
Subnet IPv6: 
SCAN 1 IPv4 VIP: -/scan1-vip/192.168.5.234
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes: 
SCAN VIP is individually disabled on nodes: 
SCAN 2 IPv4 VIP: -/scan2-vip/192.168.5.232
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes: 
SCAN VIP is individually disabled on nodes: 
SCAN 3 IPv4 VIP: -/scan3-vip/192.168.5.231
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes: 
SCAN VIP is individually disabled on nodes: 
--> Our Scan Name is  ractw2-scan.ractw2.grid122.example.com

Test the nameserver connectivity: note that for a working setup we expect to see our 3 VIP addresses here
[grid@ractw21 ~]$ nslookup ractw2-scan.ractw2.grid122.example.com
Server:        192.168.5.50
Address:    192.168.5.50#53

Non-authoritative answer:
Name:    ractw2-scan.ractw2.grid122.example.com
Address: 192.168.5.231
Name:    ractw2-scan.ractw2.grid122.example.com
Address: 192.168.5.234
Name:    ractw2-scan.ractw2.grid122.example.com
Address: 192.168.5.232

Find one of our SCAN listeners running on this node 
[oracle@ractw21 ~]$ ps -elf | grep LISTENER_SCAN
0 S grid     16391     1  0  80   0 - 70631 -      10:25 ?        00:00:00 /u01/app/122/grid/bin/tnslsnr LISTENER_SCAN1 -no_crs_notify -inherit

Query the SCAN Listener for registered services 
[grid@ractw21 ~]$  lsnrctl status LISTENER_SCAN1
LSNRCTL for Linux: Version 12.2.0.1.0 - Production on 23-APR-2017 13:25:23
...
Services Summary...
Service "-MGMTDBXDB" has 1 instance(s).
  Instance "-MGMTDB", status READY, has 1 handler(s) for this service...
Service "4d93c06c65234fffe0538d05a8c0a06a" has 1 instance(s).
  Instance "-MGMTDB", status READY, has 1 handler(s) for this service...
Service "4dac7be3b0930f43e0538e05a8c01163.example.com" has 2 instance(s).
  Instance "ractw_1", status READY, has 1 handler(s) for this service...
  Instance "ractw_2", status READY, has 1 handler(s) for this service...
Service "_mgmtdb" has 1 instance(s).
  Instance "-MGMTDB", status READY, has 1 handler(s) for this service...
Service "gimr_dscrep_10" has 1 instance(s).
  Instance "-MGMTDB", status READY, has 1 handler(s) for this service...
Service "ractw.example.com" has 2 instance(s).
  Instance "ractw_1", status READY, has 1 handler(s) for this service...
  Instance "ractw_2", status READY, has 1 handler(s) for this service...
... 
Both instances ractw_1 and ractw_2 are registered for service ractw.example.com on port 1521!
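
Instance registration with the SCAN listeners is driven by the remote_listener parameter, which should point to the SCAN name and port, while local_listener points to the node VIP; both can be checked from any instance:
SQL> show parameter remote_listener
SQL> show parameter local_listener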

Now build the connect string using the service and SCAN name
[oracle@ractw21 ~]$  sqlplus system/sys@ractw2-scan.ractw2.grid122.example.com:1521/ractw.example.com @v
INSTANCE_NUMBER INSTANCE_NAME     STATUS       HOST_NAME
--------------- ---------------- ------------ ----------------------------------------------------------------
          2 ractw_2      OPEN          ractw22.example.com


[oracle@ractw21 ~]$  sqlplus system/sys@ractw2-scan.ractw2.grid122.example.com:1521/ractw.example.com @v
INSTANCE_NUMBER INSTANCE_NAME     STATUS       HOST_NAME
--------------- ---------------- ------------ ----------------------------------------------------------------
          1 ractw_1      OPEN          ractw21.example.com

- Using the SCAN connect string connects us to different RAC instances on different hosts.
- This means our SCAN setup works fine and connection load balancing takes place.
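
The @v script used above is not included in this article; a minimal version matching the displayed columns (a reconstruction, not the original script) could look like:
-- v.sql
set linesize 132
column host_name format a40
select instance_number, instance_name, status, host_name from v$instance;
exit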

Reference

Convert an administrator-managed RAC database to policy-managed [12.2]

Check Database and Pool Status

Check RAC database status
[grid@ractw21 ~]$   srvctl config database -d ractw
Database unique name: ractw
Database name: ractw
Oracle home: /u01/app/oracle/product/122/ractw2
Oracle user: oracle
Spfile: +DATA/RACTW/PARAMETERFILE/spfile.325.941894301
Password file: +DATA/RACTW/PASSWORD/pwdractw.284.941893247
Domain: example.com
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: 
Disk Groups: DATA
Mount point paths: 
Services: 
Type: RAC
Start concurrency: 
Stop concurrency: 
OSDBA group: dba
OSOPER group: 
Database instances: ractw1,ractw2
Configured nodes: ractw21,ractw22
CSS critical: no
CPU count: 0
Memory target: 0
Maximum memory: 0
Default network number for database services: 
Database is administrator managed

Check pool status 
[oracle@ractw21 ~]$  srvctl status srvpool -a
Server pool name: Free
Active servers count: 0
Active server names: 
Server pool name: Generic
Active servers count: 2
Active server names: ractw21,ractw22
NAME=ractw21 STATE=ONLINE
NAME=ractw22 STATE=ONLINE

Convert Database to Policy Managed

Stop the database, add the RAC database to the server pool, and finally check the pool status
[grid@ractw21 grid]$ srvctl stop database -d ractw
[grid@ractw21 grid]$ srvctl add srvpool -g TopPriority  -l 1 -u 2 -i 5
[grid@ractw21 grid]$ srvctl modify database -d ractw -g TopPriority
                        
PRCD-1130 : Failed to convert administrator-managed database ractw into a policy-managed database to use server pool TopPriority
PRCR-1071 : Failed to register or update resource ora.ractw.db
CRS-0245:  User doesn't have enough privilege to perform the operation

Run as user oracle:
[oracle@ractw21 ~]$ srvctl modify database -d ractw -g TopPriority

[grid@ractw21 grid]$ srvctl config  srvpool -g TopPriority
Server pool name: TopPriority
Importance: 5, Min: 1, Max: 2
Category: hub
Candidate server names: 

[grid@ractw21 grid]$  srvctl config  database -d ractw
Database unique name: ractw
Database name: ractw
Oracle home: /u01/app/oracle/product/122/ractw2
Oracle user: oracle
Spfile: +DATA/RACTW/PARAMETERFILE/spfile.325.941894301
Password file: +DATA/RACTW/PASSWORD/pwdractw.284.941893247
Domain: example.com
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: TopPriority
Disk Groups: DATA
Mount point paths: 
Services: 
Type: RAC
Start concurrency: 
Stop concurrency: 
OSDBA group: dba
OSOPER group: 
Database instances: 
Configured nodes: 
CSS critical: no
CPU count: 0
Memory target: 0
Maximum memory: 0
Default network number for database services: 
Database is policy managed

[oracle@ractw21 ~]$ srvctl start database -d ractw
PRCR-1079 : Failed to start resource ora.ractw.db
CRS-2643: The server pool(s) where resource 'ora.ractw.db' could run have no servers

Remove category and add servers 
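
The category reported by srvctl config srvpool can be cleared with the -category switch so that any server qualifies for the pool (shown as a sketch; an empty value removes the hub restriction):
[grid@ractw21 ~]$ srvctl modify srvpool -serverpool TopPriority -category ""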

Trying to add servers 
[grid@ractw21 ~]$ srvctl modify srvpool -serverpool TopPriority  -servers  "ractw21,ractw22" -verbose
[grid@ractw21 ~]$ srvctl config srvpool -serverpool TopPriority 
Server pool name: TopPriority
Importance: 5, Min: 1, Max: 2
Category: 
Candidate server names: ractw21,ractw22

[grid@ractw21 ~]$  srvctl status srvpool -a
Server pool name: Free
Active servers count: 0
Active server names: 
Server pool name: Generic
Active servers count: 2
Active server names: ractw21,ractw22
NAME=ractw21 STATE=ONLINE
NAME=ractw22 STATE=ONLINE
Server pool name: TopPriority
Active servers count: 0
Active server names: 

Both servers ractw21 and ractw22 are still assigned to the Generic pool!
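
Servers remain in the Generic pool as long as an administrator-managed database is still configured on them; the pool membership can also be inspected at the clusterware level:
[grid@ractw21 ~]$ crsctl status serverpool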

Now remove our admin-managed test database: 
[oracle@ractw21 ~]$ srvctl remove database -db test
Remove the database test? (y/[n]) y

Verify Database Status and Pool Status after changes

Now verify the pool status 
[grid@ractw21 ~]$  srvctl status srvpool -a
Server pool name: Free
Active servers count: 0
Active server names: 
Server pool name: Generic
Active servers count: 0
Active server names: 
Server pool name: TopPriority
Active servers count: 2
Active server names: ractw21,ractw22
NAME=ractw21 STATE=ONLINE
NAME=ractw22 STATE=ONLINE
--> Now both servers are assigned to our TopPriority pool 

Start the ractw database and confirm the cluster resource status
[oracle@ractw21 dbs]$ srvctl start  database -db ractw

*****  Cluster Resources: *****
Resource NAME               INST   TARGET    STATE        SERVER          STATE_DETAILS
--------------------------- ----   ------------ ------------ --------------- -----------------------------------------
ora.ractw.db                   1   ONLINE    ONLINE       ractw21         Open,HOME=/u01/app/oracle/product/122/ractw2,STABLE
ora.ractw.db                   2   ONLINE    ONLINE       ractw22         Open,HOME=/u01/app/oracle/product/122/ractw2,STABLE