Install RAC 12.1.0.2 PM – Policy Managed

Feature Overview

  • FLEX ASM
  • Policy Managed Database (PM)
  • GNS – requirement for Policy Managed Database
  • UDEV to manage ASM disks

 Versions used

  • VirtualBox 4.3.20
  • OEL 6.6
  • Oracle RAC 12.1.0.2 using UDEV, Policy Managed Database, and the Flex ASM feature

 Virtualbox Images used

  • ns1      – Name Server / DHCP server running on IP address 192.168.5.50
  • oel66    – Basic Linux image which we use for cloning [ the VBox image we need to install first ]
  • hract21  – Cluster node 1
  • hract22  – Cluster node 2
  • hract23  – Cluster node 3

  For installing the nameserver, please read the article mentioned below:

Create VirtualBox Image OEL66 as our RAC Provisioning Server

  - Install OEL 6.6 base Linux system with all needed packages
  - Set up network, groups, and users to allow this RAC provisioning server to be used:
     for cloning
     for quick reinstalls
     for adding a new node to our cluster

Download OEL V52218-01.iso and attach this ISO image to your VM's boot CD/DVD drive.
During the Linux installation --> use "Customize now".
The "Package Group Selection" screen allows you to select the required package groups, and
individual packages within the details section. When you've made your selection, click the "Next" button.
If you want the server to have a regular GNOME desktop you need to include the following package
groups from the "Desktops" section:

    Desktop
    Desktop Platform
    Fonts
    General Purpose Desktop
    Graphical Administration Tools
    X Window System
For details see : http://oracle-base.com/articles/linux/oracle-linux-6-installation.php
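The package groups above can also be installed after the fact with a single yum call. A hedged sketch, written as a dry run (group names as listed above; confirm them with "yum grouplist" before running as root):

```shell
# Build the groupinstall command for the desktop package groups listed above.
groups=("Desktop" "Desktop Platform" "Fonts" "General Purpose Desktop"
        "Graphical Administration Tools" "X Window System")
cmd=(yum -y groupinstall "${groups[@]}")
echo "${cmd[@]}"     # dry run - run the printed command as root to install
```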

Network Configurations 
Status : We assume the DNS/DHCP server is already running!

Please read the following article to configure:  Virtualbox Network devices for RAC and Internet Access

Install the newest available Kernel package version : 
[root@hract21 Desktop]# yum update
Loaded plugins: refresh-packagekit, security
Setting up Update Process
public_ol6_UEKR3_latest                                  | 1.2 kB     00:00     
public_ol6_UEKR3_latest/primary                          |  11 MB     00:08     
public_ol6_UEKR3_latest                                                 288/288
public_ol6_latest                                        | 1.4 kB     00:00     
public_ol6_latest/primary                                |  45 MB     00:50     
public_ol6_latest                                                   29161/29161
Resolving Dependencies
--> Running transaction check
---> Package at.x86_64 0:3.1.10-43.el6_2.1 will be updated
---> Package at.x86_64 0:3.1.10-44.el6_6.2 will be an update
..
  xorg-x11-server-Xorg.x86_64 0:1.15.0-25.el6_6                                 
  xorg-x11-server-common.x86_64 0:1.15.0-25.el6_6                               
  yum-rhn-plugin.noarch 0:0.9.1-52.0.1.el6_6                                    
Complete!

Determine your current kernel version 
[root@hract21 Desktop]# uname -a
Linux hract21.example.com 3.8.13-55.1.2.el6uek.x86_64 #2 SMP Thu Dec 18 00:15:51 PST 2014 x86_64 x86_64 x86_64 GNU/Linux

Find the related kernel packages for downloading 
[root@hract21 Desktop]# yum list | grep 3.8.13-55
kernel-uek.x86_64                    3.8.13-55.1.2.el6uek     @public_ol6_UEKR3_latest
kernel-uek-devel.x86_64              3.8.13-55.1.2.el6uek     @public_ol6_UEKR3_latest
kernel-uek-doc.noarch                3.8.13-55.1.2.el6uek     @public_ol6_UEKR3_latest
kernel-uek-firmware.noarch           3.8.13-55.1.2.el6uek     @public_ol6_UEKR3_latest
dtrace-modules-3.8.13-55.1.1.el6uek.x86_64
dtrace-modules-3.8.13-55.1.2.el6uek.x86_64
dtrace-modules-3.8.13-55.el6uek.x86_64
kernel-uek-debug.x86_64              3.8.13-55.1.2.el6uek     public_ol6_UEKR3_latest
kernel-uek-debug-devel.x86_64        3.8.13-55.1.2.el6uek     public_ol6_UEKR3_latest

[root@hract21 Desktop]# yum install kernel-uek-devel.x86_64
Loaded plugins: refresh-packagekit, security
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package kernel-uek-devel.x86_64 0:3.8.13-55.1.2.el6uek will be installed
--> Finished Dependency Resolution

Verify installed kernel sources
[root@hract21 Desktop]#  ls /usr/src/kernels
2.6.32-504.3.3.el6.x86_64  3.8.13-55.1.2.el6uek.x86_64
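The Guest Additions rebuild in the next step only works when the installed kernel sources match the running kernel. A quick check (sketch; the path layout is the one shown above):

```shell
# Verify that kernel sources for the running kernel are installed -
# /etc/init.d/vboxadd setup needs them to rebuild the Guest Additions modules.
running=$(uname -r)
if [ -d "/usr/src/kernels/$running" ]; then
    echo "OK: kernel sources for $running are installed"
else
    echo "MISSING: try  yum install kernel-uek-devel-$running"
fi
```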

Rebuild the VirtualBox non-DKMS kernel modules 
[root@hract21 Desktop]# /etc/init.d/vboxadd setup
Removing existing VirtualBox non-DKMS kernel modules       [  OK  ]
Building the VirtualBox Guest Additions kernel modules
Building the main Guest Additions module                   [  OK  ]
Building the shared folder support module                  [  OK  ]
Building the OpenGL support module                         [  OK  ]
Doing non-kernel setup of the Guest Additions              [  OK  ]
Starting the VirtualBox Guest Additions                    [  OK  ]

--> Reboot the system and verify the VirtualBox device drivers
[root@hract21 Desktop]# lsmod | grep vbox
vboxsf                 38015  0 
vboxguest             263369  7 vboxsf
vboxvideo               2154  1 
drm                   274140  2 vboxvideo

Verify Network setup 
[root@hract21 network-scripts]# cat /etc/resolv.conf
# Generated by NetworkManager
search example.com grid12c.example.com
nameserver 192.168.5.50
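A quick resolver check against the nameserver from this setup (192.168.5.50, serving example.com): every cluster node must resolve before the GRID install. A minimal sketch:

```shell
# Check that each cluster node name resolves via our nameserver.
for h in hract21.example.com hract22.example.com hract23.example.com; do
    if host "$h" 192.168.5.50 >/dev/null 2>&1; then
        echo "$h resolves"
    else
        echo "WARNING: $h does not resolve"
    fi
done
```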

After configuring your network devices, verify the interface addresses:
[root@hract21 network-scripts]# ifconfig 
eth0      Link encap:Ethernet  HWaddr 08:00:27:31:B8:A0  
          inet addr:192.168.1.8  Bcast:192.168.1.255  Mask:255.255.255.0
  
eth1      Link encap:Ethernet  HWaddr 08:00:27:7B:E2:09  
          inet addr:192.168.5.121  Bcast:192.168.5.255  Mask:255.255.255.0

eth2      Link encap:Ethernet  HWaddr 08:00:27:C8:CA:AD  
          inet addr:192.168.2.121  Bcast:192.168.2.255  Mask:255.255.255.0

eth3      Link encap:Ethernet  HWaddr 08:00:27:2A:0B:EC  
          inet addr:192.168.3.121  Bcast:192.168.3.255  Mask:255.255.255.0

Note : eth0 gets a DHCP address from our router whereas eth1, eth2 and eth3 use fixed addresses.

At this point DNS name resolution and ping should work, even for Internet hosts:
[root@hract21 network-scripts]#  ping google.de
PING google.de (173.194.112.191) 56(84) bytes of data.
64 bytes from fra07s32-in-f31.1e100.net (173.194.112.191): icmp_seq=1 ttl=47 time=40.6 ms
64 bytes from fra07s32-in-f31.1e100.net (173.194.112.191): icmp_seq=2 ttl=47 time=40.2 ms

For additional info you may read: 
 - Configure DNS, NTP and DHCP  for a mixed RAC/Internet usage  

Setup NTP ( this is the NTP setup for the nameserver !) 
As we have an Internet connection we can use the default configured NTP servers 
[root@hract21 Desktop]#  more /etc/ntp.conf
server 0.rhel.pool.ntp.org iburst
server 1.rhel.pool.ntp.org iburst
server 2.rhel.pool.ntp.org iburst
server 3.rhel.pool.ntp.org iburst

# service ntpd stop
Shutting down ntpd:                                        [  OK  ]

If your RAC is going to be permanently connected to your main network and you want to use NTP, you must add 
the "-x" option into the following line in the "/etc/sysconfig/ntpd" file.

    OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
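The edit can be scripted with sed. The sketch below rehearses it on a copy under /tmp so nothing is touched until you point it at the real /etc/sysconfig/ntpd:

```shell
# Insert the -x (slewing) flag at the front of the OPTIONS line.
# Rehearsed on a copy; set f=/etc/sysconfig/ntpd on the real node.
f=/tmp/ntpd.sysconfig
echo 'OPTIONS="-u ntp:ntp -p /var/run/ntpd.pid"' > "$f"
sed -i 's/^OPTIONS="/OPTIONS="-x /' "$f"
grep ^OPTIONS "$f"    # -> OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
```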

Then restart NTP and verify your setup. 
Note: NTP must run on all cluster nodes and of course on your nameserver.
# service ntpd restart

[root@hract21 Desktop]# ntpq -p
     remote           refid     st t when poll reach   delay   offset  jitter
==============================================================================
*ridcully.episod 5.9.39.5         3 u    -   64    1   47.564  -66.443   3.231
 alvo.fungus.at  193.170.62.252   3 u    1   64    1   52.608  -68.424   2.697
 main.macht.org  192.53.103.103   2 u    1   64    1   56.734  -71.436   3.186
 formularfetisch 131.188.3.223    2 u    -   64    1   37.748  -78.676  13.875

Add the following to rc.local: 
service ntpd stop
ntpdate -u 192.168.5.50
service ntpd start


OS setup : 
Turn off and disable the iptables firewall, disable SELinux and disable the Avahi daemon
# service iptables stop
iptables: Flushing firewall rules:                         [  OK  ]
iptables: Setting chains to policy ACCEPT: filter          [  OK  ]
iptables: Unloading modules:                               [  OK  ]
# chkconfig iptables off
# chkconfig --list iptables
iptables        0:off   1:off   2:off   3:off   4:off   5:off   6:off

[root@hract21 Desktop]# service iptables status
iptables: Firewall is not running.

Disable SELinux. Open the config file and change the SELINUX variable from enforcing to disabled.
[root@hract21 Desktop]#   cat /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these two values:
#     targeted - Targeted processes are protected,
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted

Disable AVAHI daemon  [ only if running ]
# /etc/init.d/avahi-daemon stop
To disable it: 
# /sbin/chkconfig  avahi-daemon off

Install some helpful packages
[root@hract21 network-scripts]# yum install wireshark
[root@hract21 network-scripts]# yum install wireshark-gnome
Install X11 applications like xclock
[root@hract21 network-scripts]# yum install xorg-x11-apps
[root@hract21 network-scripts]# yum install telnet 

[grid@hract21 rpm]$ cd /media/sf_kits/Oracle/12.1.0.2/grid/rpm
[root@hract21 rpm]#  rpm -iv cvuqdisk-1.0.9-1.rpm
Preparing packages for installation...
Using default group oinstall to install package
cvuqdisk-1.0.9-1

[root@hract21 network-scripts]# yum list | grep oracle-rdbms-server
oracle-rdbms-server-11gR2-preinstall.x86_64
oracle-rdbms-server-12cR1-preinstall.x86_64
[root@hract21 network-scripts]# yum install oracle-rdbms-server-12cR1-preinstall.x86_64
Loaded plugins: refresh-packagekit, security
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package oracle-rdbms-server-12cR1-preinstall.x86_64 0:1.0-12.el6 will be installed

Create Users and Groups
Add to the grid user's .bashrc: 
export ORACLE_BASE=/u01/app/grid
export ORACLE_SID=+ASM1
export GRID_HOME=/u01/app/121/grid
export ORACLE_HOME=$GRID_HOME
export PATH=$ORACLE_HOME/bin:.:$PATH
export HOST=`/bin/hostname`
alias h=history
unalias ls 
alias sys='sqlplus / as sysdba'
alias sql='sqlplus scott/tiger'

Add to the oracle user's .bashrc: 
export ORACLE_BASE=/u01/app/oracle
export ORACLE_SID=ract2
export GRID_HOME=/u01/app/121/grid
export ORACLE_HOME=/u01/app/oracle/product/121/rac121
export PATH=$ORACLE_HOME/bin:.:$PATH
export  LD_LIBRARY_PATH=$ORACLE_HOME/lib:.
export HOST=`/bin/hostname`
alias h=history
unalias ls 
alias sys='sqlplus / as sysdba'
alias sql='sqlplus scott/tiger'

export ORACLE_SID=`ps -elf | grep ora_smon | grep -v grep | awk ' { print  substr( $15,10) }' `
export CLASSPATH=$ORACLE_HOME/jdbc/lib/ojdbc6_g.jar:.
echo  "-> Active ORACLE_SID:  " $ORACLE_SID 

alias h=history 
alias oh='cd $ORACLE_HOME'
alias sys1='sqlplus sys/sys@ract2_1 as sysdba'
alias sys2='sqlplus sys/sys@ract2_2 as sysdba'
alias sys3='sqlplus sys/sys@ract2_3 as sysdba'
alias sql1='sqlplus scott/tiger@ract1'
alias sql2='sqlplus scott/tiger@ract2'
alias sql3='sqlplus scott/tiger@ract3'
alias trc1='cd /u01/app/oracle/diag/rdbms/ract2/ract2_1/trace'
alias trc2='cd /u01/app/oracle/diag/rdbms/ract2/ract2_2/trace'
alias trc3='cd /u01/app/oracle/diag/rdbms/ract2/ract2_3/trace'

Create groups. Note: the oracle-rdbms-server-12cR1-preinstall package already created oinstall (GID 54321) and dba (GID 54322), hence the "already exists" messages and the GIDs reported by id below: 
[root@hract21 network-scripts]# /usr/sbin/groupadd -g 501 oinstall
groupadd: group 'oinstall' already exists
[root@hract21 network-scripts]# /usr/sbin/groupadd -g 502 dba
groupadd: group 'dba' already exists
[root@hract21 network-scripts]# /usr/sbin/groupadd -g 504 asmadmin
[root@hract21 network-scripts]# /usr/sbin/groupadd -g 506 asmdba
[root@hract21 network-scripts]# /usr/sbin/groupadd -g 507 asmoper
[root@hract21 network-scripts]# /usr/sbin/useradd -u 501 -g oinstall -G asmadmin,asmdba,asmoper grid
[root@hract21 network-scripts]# /usr/sbin//userdel oracle
[root@hract21 network-scripts]# /usr/sbin/useradd -u 502 -g oinstall -G dba,asmdba oracle

[root@hract21 network-scripts]#  su - oracle 
[oracle@hract21 ~]$ id
uid=502(oracle) gid=54321(oinstall) groups=54321(oinstall),506(asmdba),54322(dba) 

[root@hract21 network-scripts]# su - grid
[grid@hract21 ~]$ id
uid=501(grid) gid=54321(oinstall) groups=54321(oinstall),504(asmadmin),506(asmdba),507(asmoper) 

For the C shell (csh or tcsh), add the following lines to the /etc/csh.login file:
  if ( $USER = "oracle" || $USER = "grid" ) then
  limit maxproc 16384
  limit descriptors 65536
  endif

Modify  /etc/security/limits.conf
oracle   soft   nofile    1024
oracle   hard   nofile    65536
oracle   soft   nproc    2047
oracle   hard   nproc    16384
oracle   soft   stack    10240
oracle   hard   stack    32768
grid     soft   nofile    1024
grid     hard   nofile    65536
grid     soft   nproc    2047
grid     hard   nproc    16384
grid     soft   stack    10240
grid     hard   stack    32768
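The new limits should show up in a fresh login shell of the grid or oracle user. A quick check (sketch; it simply prints whatever the current shell enforces, so run it after logging in again):

```shell
# Show the effective per-process limits that limits.conf controls.
soft_nofile=$(ulimit -Sn); hard_nofile=$(ulimit -Hn)
soft_nproc=$(ulimit -Su);  hard_nproc=$(ulimit -Hu)
echo "nofile: soft=$soft_nofile hard=$hard_nofile"
echo "nproc : soft=$soft_nproc hard=$hard_nproc"
```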

Create Directories:
 - Use a separate ORACLE_BASE for the GRID and the RDBMS install !
Create the Oracle Inventory Directory
To create the Oracle Inventory directory, enter the following commands as the root user:
  # mkdir -p /u01/app/oraInventory
  # chown -R grid:oinstall /u01/app/oraInventory

Creating the Oracle Grid Infrastructure Home Directory
To create the Grid Infrastructure home directory, enter the following commands as the root user:
  # mkdir -p /u01/app/grid
  # chown -R grid:oinstall /u01/app/grid
  # chmod -R 775 /u01/app/grid
  # mkdir -p /u01/app/121/grid
  # chown -R grid:oinstall /u01/app/121/grid
  # chmod -R 775 /u01/app/121/grid

Creating the Oracle Base Directory
  To create the Oracle Base directory, enter the following commands as the root user:
  # mkdir -p /u01/app/oracle
  # chown -R oracle:oinstall /u01/app/oracle
  # chmod -R 775 /u01/app/oracle

Creating the Oracle RDBMS Home Directory
  To create the Oracle RDBMS Home directory, enter the following commands as the root user:
  # mkdir -p /u01/app/oracle/product/121/rac121
  # chown -R oracle:oinstall /u01/app/oracle/product/121/rac121
  # chmod -R 775 /u01/app/oracle/product/121/rac121

Download cluvfy from : http://www.oracle.com/technetwork/database/options/clustering/downloads/index.html

Cluster Verification Utility Download for Oracle Grid Infrastructure 12c 
Note: The latest CVU version (July 2013) can be used with all currently supported Oracle RAC versions, including Oracle RAC 10g, 
      Oracle RAC 11g and Oracle RAC 12c.

Unzip cluvfy:
[grid@hract21 CLUVFY]$ unzip /tmp/cvupack_Linux_x86_64.zip
[grid@hract21 CLUVFY]$ pwd
/home/grid/CLUVFY
[grid@hract21 CLUVFY]$ ls
bin  clone  crs  css  cv  deinstall  diagnostics  has  install  jdbc  jdk  jlib  lib  network  nls  oracore  oui  srvm  utl  xdk
[grid@hract21 CLUVFY]$ bin/cluvfy -version
12.1.0.1.0 Build 112713x8664

Run cluvfy to verify the current OS installation  
Verify OS setup :
As grid user:
$ ./bin/cluvfy comp sys -p crs -n hract21,hract22,hract23 -verbose -fixup
--> If needed run the fix script and/or fix underlying problems 
As root user verify DHCP setup 

Verify DHCP setup :
[root@hract21 CLUVFY]#  ./bin/cluvfy comp dhcp -clustername  ract2 -verbose
Verifying DHCP Check 
Checking if any DHCP server exists on the network...
DHCP server returned server: 192.168.5.50, loan address: 192.168.5.218/255.255.255.0, lease time: 21600
At least one DHCP server exists on the network and is listening on port 67
Checking if DHCP server has sufficient free IP addresses for all VIPs...
Sending DHCP "DISCOVER" packets for client ID "ract2-scan1-vip"
DHCP server returned server: 192.168.5.50, loan address: 192.168.5.218/255.255.255.0, lease time: 21600
Sending DHCP "REQUEST" packets for client ID "ract2-scan1-vip"
.. 

Verify GNS setup : 
[grid@hract21 CLUVFY]$  ./bin/cluvfy comp gns -precrsinst -domain grid12c.example.com  -vip 192.168.5.58 
Verifying GNS integrity 
Checking GNS integrity...
The GNS subdomain name "grid12c.example.com" is a valid domain name
GNS VIP "192.168.5.58" resolves to a valid IP address
GNS integrity check passed
Verification of GNS integrity was successful. 
--> Note: you may get the PRVF-5229 warning if this address is in use [ may be a different GNS VIP ]

At this point we have created a base system which we will now clone 3x for our RAC nodes 

Clone base system

You may first change the default machine folder under File -> Preferences:  
M:\VM\RAC_OEL66_12102

Cloning ract21 :
Now cleanly shut down your reference/clone system 
VirtualBox -> Clone [ name the clone ract21 ] -> reinitialize the MAC addresses of all network cards -> Full Clone 

Boot the system a first time and retrieve the new MAC addresses 
[root@hract21 Desktop]# dmesg |grep eth
e1000 0000:00:03.0 eth0: (PCI:33MHz:32-bit) 08:00:27:e7:c0:6b
e1000 0000:00:03.0 eth0: Intel(R) PRO/1000 Network Connection
e1000 0000:00:08.0 eth1: (PCI:33MHz:32-bit) 08:00:27:7d:8e:49
e1000 0000:00:08.0 eth1: Intel(R) PRO/1000 Network Connection
e1000 0000:00:09.0 eth2: (PCI:33MHz:32-bit) 08:00:27:4e:c9:bf
e1000 0000:00:09.0 eth2: Intel(R) PRO/1000 Network Connection
e1000 0000:00:0a.0 eth3: (PCI:33MHz:32-bit) 08:00:27:3b:89:bf
e1000 0000:00:0a.0 eth3: Intel(R) PRO/1000 Network Connection

[root@hract21 network-scripts]# egrep 'HWADDR|IP' ifcfg-eth*
ifcfg-eth0:HWADDR=08:00:27:e7:c0:6b
ifcfg-eth1:HWADDR=08:00:27:7d:8e:49
ifcfg-eth1:IPADDR=192.168.5.121
ifcfg-eth2:HWADDR=08:00:27:4e:c9:bf
ifcfg-eth2:IPADDR=192.168.2.121
ifcfg-eth3:HWADDR=08:00:27:3b:89:bf
ifcfg-eth3:IPADDR=192.168.3.121
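Patching the HWADDR lines with the new MACs can be scripted with sed. The sketch below rehearses the edit on a sample file under /tmp (on a real clone, work in /etc/sysconfig/network-scripts and take the MACs from the dmesg output above):

```shell
# Rewrite the HWADDR line of an ifcfg file to the MAC reported by dmesg.
# Rehearsal copy under /tmp; MAC value taken from the eth1 example above.
DIR=/tmp/net-scripts; mkdir -p "$DIR"
printf 'DEVICE=eth1\nHWADDR=08:00:27:00:00:00\nIPADDR=192.168.5.121\n' > "$DIR/ifcfg-eth1"
sed -i 's/^HWADDR=.*/HWADDR=08:00:27:7d:8e:49/' "$DIR/ifcfg-eth1"
grep ^HWADDR "$DIR/ifcfg-eth1"    # -> HWADDR=08:00:27:7d:8e:49
```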
 
Remove the persistent net rules file (udev will regenerate it with the new MACs on the next boot) 
[root@hract21 Desktop]# rm  /etc/udev/rules.d/70-persistent-net.rules
rm: remove regular file `/etc/udev/rules.d/70-persistent-net.rules'? y

Change the hostname in “/etc/sysconfig/network” 
[root@gract21 network-scripts]# cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=hract21.example.com
NTPSERVERARGS=iburst
# oracle-rdbms-server-12cR1-preinstall : Add NOZEROCONF=yes
NOZEROCONF=yes

-> Finally, reboot the system and verify the network setup

[root@hract21 network-scripts]# ifconfig | egrep 'eth|inet addr'
eth0      Link encap:Ethernet  HWaddr 08:00:27:E7:C0:6B  
          inet addr:192.168.1.14  Bcast:192.168.1.255  Mask:255.255.255.0
eth1      Link encap:Ethernet  HWaddr 08:00:27:7D:8E:49  
          inet addr:192.168.5.121  Bcast:192.168.5.255  Mask:255.255.255.0
eth2      Link encap:Ethernet  HWaddr 08:00:27:4E:C9:BF  
          inet addr:192.168.2.121  Bcast:192.168.2.255  Mask:255.255.255.0
eth3      Link encap:Ethernet  HWaddr 08:00:27:3B:89:BF  
          inet addr:192.168.3.121  Bcast:192.168.3.255  Mask:255.255.255.0

[root@hract21 network-scripts]# netstat -rn
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
0.0.0.0         192.168.1.1     0.0.0.0         UG        0 0          0 eth0
192.168.1.0     0.0.0.0         255.255.255.0   U         0 0          0 eth0
192.168.2.0     0.0.0.0         255.255.255.0   U         0 0          0 eth2
192.168.3.0     0.0.0.0         255.255.255.0   U         0 0          0 eth3
192.168.5.0     0.0.0.0         255.255.255.0   U         0 0          0 eth1

Repeat these steps for ract22 and ract23 !

Create ASM Disks

cd M:\VM\RAC_OEL66_12102

VBoxManage createhd --filename M:\VM\RAC_OEL66_12102\asm1_12102_10G.vdi --size 10240 --format VDI --variant Fixed
VBoxManage createhd --filename M:\VM\RAC_OEL66_12102\asm2_12102_10G.vdi --size 10240 --format VDI --variant Fixed
VBoxManage createhd --filename M:\VM\RAC_OEL66_12102\asm3_12102_10G.vdi --size 10240 --format VDI --variant Fixed
VBoxManage createhd --filename M:\VM\RAC_OEL66_12102\asm4_12102_10G.vdi --size 10240 --format VDI --variant Fixed

VBoxManage modifyhd  asm1_12102_10G.vdi  --type shareable
VBoxManage modifyhd  asm2_12102_10G.vdi  --type shareable
VBoxManage modifyhd  asm3_12102_10G.vdi  --type shareable
VBoxManage modifyhd  asm4_12102_10G.vdi  --type shareable

VBoxManage storageattach ract21 --storagectl "SATA" --port 1 --device 0 --type hdd --medium asm1_12102_10G.vdi --mtype shareable
VBoxManage storageattach ract21 --storagectl "SATA" --port 2 --device 0 --type hdd --medium asm2_12102_10G.vdi --mtype shareable
VBoxManage storageattach ract21 --storagectl "SATA" --port 3 --device 0 --type hdd --medium asm3_12102_10G.vdi --mtype shareable
VBoxManage storageattach ract21 --storagectl "SATA" --port 4 --device 0 --type hdd --medium asm4_12102_10G.vdi --mtype shareable
   
VBoxManage storageattach ract22 --storagectl "SATA" --port 1 --device 0 --type hdd --medium asm1_12102_10G.vdi --mtype shareable
VBoxManage storageattach ract22 --storagectl "SATA" --port 2 --device 0 --type hdd --medium asm2_12102_10G.vdi --mtype shareable
VBoxManage storageattach ract22 --storagectl "SATA" --port 3 --device 0 --type hdd --medium asm3_12102_10G.vdi --mtype shareable
VBoxManage storageattach ract22 --storagectl "SATA" --port 4 --device 0 --type hdd --medium asm4_12102_10G.vdi --mtype shareable

VBoxManage storageattach ract23 --storagectl "SATA" --port 1 --device 0 --type hdd --medium asm1_12102_10G.vdi --mtype shareable
VBoxManage storageattach ract23 --storagectl "SATA" --port 2 --device 0 --type hdd --medium asm2_12102_10G.vdi --mtype shareable
VBoxManage storageattach ract23 --storagectl "SATA" --port 3 --device 0 --type hdd --medium asm3_12102_10G.vdi --mtype shareable
VBoxManage storageattach ract23 --storagectl "SATA" --port 4 --device 0 --type hdd --medium asm4_12102_10G.vdi --mtype shareable
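The twelve storageattach calls above are one command over a 3x4 grid (VMs x disks), so they can be generated with a loop. Written as a dry run that collects the commands in a file for review (same VM names, "SATA" controller and .vdi files as above):

```shell
# Generate the 12 storageattach commands (3 VMs x 4 shared disks).
# Review /tmp/attach_asm_disks.sh, then execute it from the VDI directory.
for vm in ract21 ract22 ract23; do
  for n in 1 2 3 4; do
    echo "VBoxManage storageattach $vm --storagectl SATA --port $n --device 0" \
         "--type hdd --medium asm${n}_12102_10G.vdi --mtype shareable"
  done
done > /tmp/attach_asm_disks.sh
wc -l /tmp/attach_asm_disks.sh    # -> 12 /tmp/attach_asm_disks.sh
```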

Check newly created disk devices after RAC node reboot
[root@hract21 Desktop]# ls /dev/sd*
/dev/sda  /dev/sda1  /dev/sda2  /dev/sdb  /dev/sdc  /dev/sdd  /dev/sde

Run fdisk to partition the new disk ( we only want a single partition )
[root@hract21 Desktop]# fdisk /dev/sdb
Command (m for help): p
Disk /dev/sdb: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xf9bddbc6
   Device Boot      Start         End      Blocks   Id  System
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1305, default 1): 1
Last cylinder, +cylinders or +size{K,M,G} (1-1305, default 1305): 
Using default value 1305
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
--> Repeat above step for  /dev/sdc  /dev/sdd  /dev/sde and verify the created devices.
[root@hract21 Desktop]# ls /dev/sd*
/dev/sda  /dev/sda1  /dev/sda2  /dev/sdb  /dev/sdb1  /dev/sdc  /dev/sdc1  /dev/sdd  /dev/sdd1  /dev/sde  /dev/sde1
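The same fdisk dialog can be fed non-interactively to the remaining disks; the answer sequence n, p, 1, default, default, w matches the session above. This is destructive, so only run it against the empty lab disks:

```shell
# Create one whole-disk primary partition on each remaining ASM disk.
# DESTRUCTIVE - lab disks only. Skips devices that do not exist.
for d in /dev/sdc /dev/sdd /dev/sde; do
    [ -b "$d" ] || { echo "skipping $d (no such block device)"; continue; }
    printf 'n\np\n1\n\n\nw\n' | fdisk "$d" || echo "fdisk on $d failed"
done
```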

Use the following bash script to return the WWID disk IDs : http://www.hhutzler.de/blog/configure-udev-rules-for-asm-devices/

[root@hract21 ~]# ./check_wwid.sh
/dev/sda  WWID:    1ATA_VBOX_HARDDISK_VB98f7f6e6-e47cb456
/dev/sda1  WWID:   1ATA_VBOX_HARDDISK_VB98f7f6e6-e47cb456
/dev/sda2  WWID:   1ATA_VBOX_HARDDISK_VB98f7f6e6-e47cb456
/dev/sdb  WWID:    1ATA_VBOX_HARDDISK_VBe7363848-cbf94b0c
/dev/sdb1  WWID:   1ATA_VBOX_HARDDISK_VBe7363848-cbf94b0c
/dev/sdc  WWID:    1ATA_VBOX_HARDDISK_VBb322a188-b4771866
/dev/sdc1  WWID:   1ATA_VBOX_HARDDISK_VBb322a188-b4771866
/dev/sdd  WWID:    1ATA_VBOX_HARDDISK_VB00b7878b-c50d45f4
/dev/sdd1  WWID:   1ATA_VBOX_HARDDISK_VB00b7878b-c50d45f4
/dev/sde  WWID:    1ATA_VBOX_HARDDISK_VB7a3701f8-f1272747
/dev/sde1  WWID:   1ATA_VBOX_HARDDISK_VB7a3701f8-f1272747
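The linked check_wwid.sh is not reproduced in this article; a minimal sketch of such a script, using the same /sbin/scsi_id -g -u -d call (EL6 syntax) as the udev rules that follow, could look like:

```shell
#!/bin/bash
# Print the scsi_id WWID for every /dev/sd* device (sketch of check_wwid.sh).
for d in /dev/sd*; do
    wwid=$(/sbin/scsi_id -g -u -d "$d" 2>/dev/null)
    printf '%-10s WWID: %s\n' "$d" "$wwid"
done
```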

Create 99-oracle-asmdevices.rules - change the RESULT values using the output of the check_wwid.sh script :
[root@hract21 rules.d]#  cd /etc/udev/rules.d
[root@hract21 rules.d]#  cat  99-oracle-asmdevices.rules
KERNEL=="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VBe7363848-cbf94b0c", NAME= "asmdisk1_10G", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VBb322a188-b4771866", NAME= "asmdisk2_10G", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VB00b7878b-c50d45f4", NAME= "asmdisk3_10G", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VB7a3701f8-f1272747", NAME= "asmdisk4_10G", OWNER="grid", GROUP="asmadmin", MODE="0660"

[root@hract21 ~]# udevadm control --reload-rules
[root@hract21 ~]# start_udev
Starting udev: udevd[14512]: GOTO 'pulseaudio_check_usb' has no matching label in: '/lib/udev/rules.d/90-pulseaudio.rules'
                                                           [  OK  ]
[root@hract21 ~]#  ls -ltr /dev/asmd*
brw-rw---- 1 grid asmadmin 8, 17 Jan 29 09:33 /dev/asmdisk1_10G
brw-rw---- 1 grid asmadmin 8, 49 Jan 29 09:33 /dev/asmdisk3_10G
brw-rw---- 1 grid asmadmin 8, 33 Jan 29 09:33 /dev/asmdisk2_10G
brw-rw---- 1 grid asmadmin 8, 65 Jan 29 09:33 /dev/asmdisk4_10G

Copy the newly created rules file to the remaining RAC nodes and restart udev
[root@hract21 rules.d]#  scp 99-oracle-asmdevices.rules hract22:/etc/udev/rules.d
[root@hract21 rules.d]#  scp 99-oracle-asmdevices.rules hract23:/etc/udev/rules.d
and run  following bash script to restart udev

Bash script: restart_udev.sh  
#!/bin/bash 
udevadm control --reload-rules
start_udev
ls -ltr /dev/asm*

Note: the ls output on hract22 and hract23 should be identical to the output on hract21 !
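The copy-and-restart step can be done in one loop over the remaining nodes. Written as a dry run that prints the commands to a file for review (execute the generated lines as root once ssh equivalence is in place, or run them manually):

```shell
# Generate the distribution commands for the udev rules file.
for node in hract22 hract23; do
    echo "scp /etc/udev/rules.d/99-oracle-asmdevices.rules $node:/etc/udev/rules.d/"
    echo "ssh $node 'udevadm control --reload-rules && start_udev && ls -l /dev/asm*'"
done > /tmp/distribute_rules.sh
cat /tmp/distribute_rules.sh
```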

Here you may add the oracle and grid users to the vboxsf group. 
This allows us to use the mounted/shared VBox folders !
Note: use the -a flag so the existing supplementary groups (dba, asmdba, ...) are kept.
[root@hract21 ~]#  usermod -a -G vboxsf oracle
[root@hract21 ~]#  usermod -a -G vboxsf grid

Setup ssh connectivity
[grid@hract21 ~]$  cp /media/sf_kits/Oracle/12.1.0.2/grid/sshsetup/sshUserSetup.sh .
[grid@hract21 ~]$  ./sshUserSetup.sh -user grid -hosts "hract21  hract22 hract23" -noPromptPassphrase

[grid@hract21 ~]$  /usr/bin/ssh -x -l grid hract21 date
Thu Jan 29 11:06:55 CET 2015
[grid@hract21 ~]$  /usr/bin/ssh -x -l grid hract22 date
Thu Jan 29 11:06:56 CET 2015
[grid@hract21 ~]$  /usr/bin/ssh -x -l grid hract23 date
Thu Jan 29 11:07:01 CET 2015
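The manual checks above can be looped over all nodes; BatchMode makes a password prompt fail instead of hang, so a broken ssh setup is reported immediately:

```shell
# Verify passwordless ssh from this node to every cluster node.
for node in hract21 hract22 hract23; do
    printf '%s: ' "$node"
    ssh -x -o BatchMode=yes -o ConnectTimeout=5 grid@$node date \
        || echo "ssh to $node FAILED"
done
```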

NTP setup on all RAC nodes
Note: only our name server gets its time from the Internet 

For the RAC nodes add only a single server to ntp.conf ( which is our nameserver ) 
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server 192.168.5.50 
[root@hract21 etc]# service ntpd restart
Shutting down ntpd:                                        [  OK  ]
Starting ntpd:                                             [  OK  ]
[root@hract21 etc]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 ns1.example.com 131.188.3.220   10 u    5   64    1    0.149  -170.05   0.000

Run cluvfy and install the GRID software

Run Cluvfy 
[grid@hract21 CLUVFY]$ ./bin/cluvfy stage -pre crsinst -asm -presence local -asmgrp asmadmin  \
    -asmdev /dev/asmdisk1_10G,/dev/asmdisk2_10G,/dev/asmdisk3_10G,/dev/asmdisk4_10G    \
    -networks eth1:192.168.5.0:PUBLIC/eth2:192.168.2.0:cluster_interconnect  \
    -n hract21,hract22,hract23 | egrep 'PRVF|fail'
Node reachability check failed from node "hract21"
Total memory check failed
Check failed on nodes: 
PRVF-9802 : Attempt to get udev information from node "hract21" failed
PRVF-9802 : Attempt to get udev information from node "hract23" failed
UDev attributes check failed for ASM Disks 
--> The PRVF-9802 error is explained in the following article 
    The memory check failed as I had reduced the RAC VBox images to 4 GByte 
    For other cluvfy errors you may check this article 

Installing the GRID software 
[grid@hract21 CLUVFY]$ cd /media/sf_kits/Oracle/12.1.0.2/grid

$ cd grid
$ ls
install  response  rpm    runcluvfy.sh  runInstaller  sshsetup  stage  welcome.html
$ ./runInstaller 
-> Configure a standard cluster
-> Advanced Installation
   Cluster name : ract2
   Scan name    : ract2-scan.grid12c.example.com
   Scan port    : 1521
   -> Create New GNS
      GNS VIP address: 192.168.5.58
      GNS Sub domain : grid12c.example.com
  Public Hostname           Virtual Hostname 
  hract21.example.com        AUTO
  hract22.example.com        AUTO
  hract23.example.com        AUTO

-> Test and Setup SSH connectivity
-> Setup network Interfaces
   eth0: don't use
   eth1: PUBLIC                              192.168.5.X
   eth2: Private Cluster_Interconnect,ASM    192.168.2.X
 
-> Configure GRID Infrastructure: YES
-> Use standard ASM for storage
-> ASM setup
   Diskgroup         : DATA
   Disk discover PATH: /dev/asm*
--> Don't use IPMI

Run root scripts:
[root@hract21 etc]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.

[root@hract21 etc]# /u01/app/121/grid/root.sh
Performing root user operation.
The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/121/grid
..
LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 30-JAN-2015 12:39:53
Copyright (c) 1991, 2014, Oracle.  All rights reserved.
CRS-5014: Agent "ORAAGENT" timed out starting process "/u01/app/121/grid/bin/lsnrctl" for action "check": details at "(:CLSN00009:)" in "/u01/app/grid/diag/crs/hract21/crs/trace/crsd_oraagent_grid.trc"
CRS-5017: The resource action "ora.MGMTLSNR check" encountered the following error: 
(:CLSN00009:)Command Aborted. For details refer to "(:CLSN00109:)" in "/u01/app/grid/diag/crs/hract21/crs/trace/crsd_oraagent_grid.trc".
CRS-2664: Resource 'ora.DATA.dg' is already running on 'hract21'
CRS-6017: Processing resource auto-start for servers: hract21
CRS-2672: Attempting to start 'ora.oc4j' on 'hract21'
CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on 'hract21'
CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'hract21' succeeded
CRS-2676: Start of 'ora.oc4j' on 'hract21' succeeded
CRS-6016: Resource auto-start has completed for server hract21
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2015/01/30 12:41:03 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

Now run the scripts below on hract22 and hract23
# /u01/app/oraInventory/orainstRoot.sh
# /u01/app/121/grid/root.sh

Verify the clusterware status 
[root@hract21 ~]# crs
*****  Local Resources: *****
Resource NAME                  TARGET     STATE           SERVER       STATE_DETAILS                       
-------------------------      ---------- ----------      ------------ ------------------                  
ora.ASMNET1LSNR_ASM.lsnr       ONLINE     ONLINE          hract21      STABLE   
ora.ASMNET1LSNR_ASM.lsnr       ONLINE     ONLINE          hract22      STABLE   
ora.ASMNET1LSNR_ASM.lsnr       ONLINE     ONLINE          hract23      STABLE   
ora.DATA.dg                    ONLINE     ONLINE          hract21      STABLE   
ora.DATA.dg                    ONLINE     ONLINE          hract22      STABLE   
ora.DATA.dg                    ONLINE     ONLINE          hract23      STABLE   
ora.LISTENER.lsnr              ONLINE     ONLINE          hract21      STABLE   
ora.LISTENER.lsnr              ONLINE     ONLINE          hract22      STABLE   
ora.LISTENER.lsnr              ONLINE     ONLINE          hract23      STABLE   
ora.net1.network               ONLINE     ONLINE          hract21      STABLE   
ora.net1.network               ONLINE     ONLINE          hract22      STABLE   
ora.net1.network               ONLINE     ONLINE          hract23      STABLE   
ora.ons                        ONLINE     ONLINE          hract21      STABLE   
ora.ons                        ONLINE     ONLINE          hract22      STABLE   
ora.ons                        ONLINE     ONLINE          hract23      STABLE   
*****  Cluster Resources: *****
Resource NAME               INST   TARGET       STATE        SERVER          STATE_DETAILS
--------------------------- ----   ------------ ------------ --------------- -----------------------------------------
ora.LISTENER_SCAN1.lsnr        1   ONLINE       ONLINE       hract22         STABLE  
ora.LISTENER_SCAN2.lsnr        1   ONLINE       ONLINE       hract23         STABLE  
ora.LISTENER_SCAN3.lsnr        1   ONLINE       ONLINE       hract21         STABLE  
ora.MGMTLSNR                   1   ONLINE       ONLINE       hract21         169.254.213.86 192.168.2.121,STABLE
ora.asm                        1   ONLINE       ONLINE       hract21         Started,STABLE  
ora.asm                        2   ONLINE       ONLINE       hract22         Started,STABLE  
ora.asm                        3   ONLINE       ONLINE       hract23         Started,STABLE  
ora.cvu                        1   ONLINE       ONLINE       hract21         STABLE  
ora.gns                        1   ONLINE       ONLINE       hract21         STABLE  
ora.gns.vip                    1   ONLINE       ONLINE       hract21         STABLE  
ora.hract21.vip                1   ONLINE       ONLINE       hract21         STABLE  
ora.hract22.vip                1   ONLINE       ONLINE       hract22         STABLE  
ora.hract23.vip                1   ONLINE       ONLINE       hract23         STABLE  
ora.mgmtdb                     1   ONLINE       ONLINE       hract21         Open,STABLE  
ora.oc4j                       1   ONLINE       ONLINE       hract21         STABLE  
ora.scan1.vip                  1   ONLINE       ONLINE       hract22         STABLE  
ora.scan2.vip                  1   ONLINE       ONLINE       hract23         STABLE  
ora.scan3.vip                  1   ONLINE       ONLINE       hract21         STABLE


Verify GNS SETUP / Network Setup 
[root@hract21 ~]# sh -x  check_net_12c.sh
+ dig @192.168.5.50 ract2-scan.grid12c.example.com
;; QUESTION SECTION:
;ract2-scan.grid12c.example.com.    IN    A

;; ANSWER SECTION:
ract2-scan.grid12c.example.com.    34 IN    A    192.168.5.236
ract2-scan.grid12c.example.com.    34 IN    A    192.168.5.220
ract2-scan.grid12c.example.com.    34 IN    A    192.168.5.218

;; AUTHORITY SECTION:
grid12c.example.com.    3600    IN    NS    gns12c.grid12c.example.com.
grid12c.example.com.    3600    IN    NS    ns1.example.com.

;; ADDITIONAL SECTION:
ns1.example.com.    3600    IN    A    192.168.5.50


+ dig @192.168.5.58 ract2-scan.grid12c.example.com

;; QUESTION SECTION:
;ract2-scan.grid12c.example.com.    IN    A

;; ANSWER SECTION:
ract2-scan.grid12c.example.com.    120 IN    A    192.168.5.218
ract2-scan.grid12c.example.com.    120 IN    A    192.168.5.220
ract2-scan.grid12c.example.com.    120 IN    A    192.168.5.236

;; AUTHORITY SECTION:
grid12c.example.com.    10800    IN    SOA    hract22. hostmaster.grid12c.example.com. 46558097 10800 10800 30 120

;; ADDITIONAL SECTION:
ract2-gns-vip.grid12c.example.com. 10800 IN A    192.168.5.58


+ nslookup ract2-scan
Server:        192.168.5.50
Address:    192.168.5.50#53
Non-authoritative answer:
Name:    ract2-scan.grid12c.example.com
Address: 192.168.5.236
Name:    ract2-scan.grid12c.example.com
Address: 192.168.5.218
Name:    ract2-scan.grid12c.example.com
Address: 192.168.5.220

+ ping -c 2 google.de
PING google.de (173.194.65.94) 56(84) bytes of data.
64 bytes from ee-in-f94.1e100.net (173.194.65.94): icmp_seq=1 ttl=38 time=177 ms
64 bytes from ee-in-f94.1e100.net (173.194.65.94): icmp_seq=2 ttl=38 time=134 ms
..

+ ping -c 2 hract21
PING hract21.example.com (192.168.5.121) 56(84) bytes of data.
64 bytes from hract21.example.com (192.168.5.121): icmp_seq=1 ttl=64 time=0.013 ms
64 bytes from hract21.example.com (192.168.5.121): icmp_seq=2 ttl=64 time=0.024 ms
..

+ ping -c 2 ract2-scan.grid12c.example.com
PING ract2-scan.grid12c.example.com (192.168.5.220) 56(84) bytes of data.
64 bytes from 192.168.5.220: icmp_seq=1 ttl=64 time=0.453 ms
64 bytes from 192.168.5.220: icmp_seq=2 ttl=64 time=0.150 ms
..

+ cat /etc/resolv.conf
# Generated by NetworkManager
search example.com grid12c.example.com
nameserver 192.168.5.50
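A quick mechanical check on the dig output above: the SCAN name should always resolve to all three SCAN addresses. This is a minimal sketch (the helper name count_a_records is our own, and the sample is copied from the answer section above):

```shell
# Count the A records in a dig-style answer section to confirm the SCAN
# resolves to three addresses.
count_a_records() {
    awk '$4 == "A" { c++ } END { print c + 0 }'
}

sample='ract2-scan.grid12c.example.com.    34 IN    A    192.168.5.236
ract2-scan.grid12c.example.com.    34 IN    A    192.168.5.220
ract2-scan.grid12c.example.com.    34 IN    A    192.168.5.218'

n=$(printf '%s\n' "$sample" | count_a_records)
echo "SCAN resolves to $n address(es)"
```

In real use, pipe `dig +noall +answer ract2-scan.grid12c.example.com` into count_a_records instead of the sample.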

Run Cluvfy and Install RDBMS software

Verify that your .bashrc doesn't read/write any data from/to stdin/stdout 
Setup ssh connectivity :
[oracle@hract21 ~]$  ./sshUserSetup.sh -user oracle -hosts "hract21  hract22 hract23" -noPromptPassphrase
Verify ssh connectivity ( run this on hract22 and hract23 too )
[oracle@hract21 ~]$ ssh -x -l oracle hract21 date
Fri Jan 30 15:40:45 CET 2015
[oracle@hract21 ~]$ ssh -x -l oracle hract22 date
Fri Jan 30 15:40:50 CET 2015
[oracle@hract21 ~]$ ssh -x -l oracle hract23 date
Fri Jan 30 15:40:52 CET 2015
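The three manual checks above can be wrapped in a loop. A sketch; CHECK_CMD is a hypothetical stub hook of our own so the loop can be dry-run without a cluster (leave it unset to use real ssh):

```shell
# Run the "ssh <node> date" check against every node and report failures.
check_ssh_equivalence() {
    cmd=${CHECK_CMD:-"ssh -x -o BatchMode=yes"}
    rc=0
    for h in "$@"; do
        if $cmd "$h" date >/dev/null 2>&1; then
            echo "OK: $h"
        else
            echo "FAILED: $h"
            rc=1
        fi
    done
    return $rc
}

CHECK_CMD=true     # stub for this dry run; remove this line to test real ssh
check_ssh_equivalence hract21 hract22 hract23
```

Any node printed as FAILED still prompts for a password and needs its keys fixed before running the installer.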

[grid@hract21 CLUVFY]$ ./bin/cluvfy stage -pre dbinst -n hract21,hract22,hract23 -d /u01/app/oracle/product/121/rac121 -fixup
Performing pre-checks for database installation 
Checking node reachability...
Node reachability check passed from node "hract21"
Checking user equivalence...
User equivalence check passed for user "grid"
ERROR: 
PRVG-11318 : The following error occurred during database operating system groups check. "PRCT-1005 :
 Directory /u01/app/oracle/product/121/rac121/bin does not exist"
 --> You can ignore this error as we haven't installed the RAC DB software yet 

Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
Check: Node connectivity using interfaces on subnet "192.168.5.0"
Node connectivity passed for subnet "192.168.5.0" with node(s) hract22,hract23,hract21
TCP connectivity check passed for subnet "192.168.5.0"
Check: Node connectivity using interfaces on subnet "192.168.2.0"
Node connectivity passed for subnet "192.168.2.0" with node(s) hract22,hract23,hract21
TCP connectivity check passed for subnet "192.168.2.0"
Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.5.0".
Subnet mask consistency check passed for subnet "192.168.2.0".
Subnet mask consistency check passed.
Node connectivity check passed
Checking multicast communication...
Checking subnet "192.168.2.0" for multicast communication with multicast group "224.0.0.251"...
Check of subnet "192.168.2.0" for multicast communication with multicast group "224.0.0.251" passed.
Check of multicast communication passed.
Total memory check passed
Available memory check passed
Swap space check passed
Free disk space check passed for "hract23:/u01/app/oracle/product/121/rac121,hract23:/tmp"
Free disk space check passed for "hract22:/u01/app/oracle/product/121/rac121,hract22:/tmp"
Free disk space check passed for "hract21:/u01/app/oracle/product/121/rac121,hract21:/tmp"
Check for multiple users with UID value 501 passed 
User existence check passed for "grid"
Group existence check passed for "oinstall"
Group existence check passed for "dba"
Group existence check passed for "asmdba"
Membership check for user "grid" in group "oinstall" [as Primary] passed
Membership check for user "grid" in group "dba" passed
Membership check for user "grid" in group "asmdba" passed
Run level check passed
Hard limits check passed for "maximum open file descriptors"
Soft limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
Soft limits check passed for "maximum user processes"
System architecture check passed
Kernel version check passed
..
Package existence check passed for "libaio-devel(x86_64)"
Check for multiple users with UID value 0 passed 
Current group ID check passed
Starting check for consistency of primary group of root user
Check for consistency of root user's primary group passed
Default user file creation mask check passed
Checking CRS integrity...
Clusterware version consistency passed.
CRS integrity check passed
Checking Cluster manager integrity... 
Checking CSS daemon...
Oracle Cluster Synchronization Services appear to be online.
Cluster manager integrity check passed
Checking node application existence...
Checking existence of VIP node application (required)
VIP node application check passed
Checking existence of NETWORK node application (required)
NETWORK node application check passed
Checking existence of ONS node application (optional)
ONS node application check passed
Checking if Clusterware is installed on all nodes...
Check of Clusterware install passed
Checking if CTSS Resource is running on all nodes...
CTSS resource check passed
Querying CTSS for time offset on all nodes...
Query of CTSS for time offset passed
Check CTSS state started...
CTSS is in Observer state. Switching over to clock synchronization checks using NTP
Starting Clock synchronization checks using Network Time Protocol(NTP)...
NTP configuration file "/etc/ntp.conf" existence check passed
Liveness check passed for "ntpd"
Check for NTP daemon or service alive passed on all nodes
Check of common NTP Time Server passed
Clock time offset check passed
Clock synchronization check using Network Time Protocol(NTP) passed
Oracle Cluster Time Synchronization Services check passed
Checking integrity of file "/etc/resolv.conf" across nodes
"domain" and "search" entries do not coexist in any  "/etc/resolv.conf" file
All nodes have same "search" order defined in file "/etc/resolv.conf"
The DNS response time for an unreachable node is within acceptable limit on all nodes
Check for integrity of file "/etc/resolv.conf" passed
Time zone consistency check passed
Checking Single Client Access Name (SCAN)...
Checking TCP connectivity to SCAN Listeners...
TCP connectivity to SCAN Listeners exists on all cluster nodes
Checking name resolution setup for "ract2-scan.grid12c.example.com"...
Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...
All nodes have same "hosts" entry defined in file "/etc/nsswitch.conf"
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed
Checking SCAN IP addresses...
Check of SCAN IP addresses passed
Verification of SCAN VIP and Listener setup passed
Checking GNS integrity...
The GNS subdomain name "grid12c.example.com" is a valid domain name
Checking if the GNS VIP belongs to same subnet as the public network...
Public network subnets "192.168.5.0, 192.168.5.0, 192.168.5.0" match with the GNS VIP "192.168.5.0, 192.168.5.0, 192.168.5.0"
GNS VIP "192.168.5.58" resolves to a valid IP address
GNS resolved IP addresses are reachable
GNS resource configuration check passed
GNS VIP resource configuration check passed.
GNS integrity check passed
Checking VIP configuration.
Checking VIP Subnet configuration.
Check for VIP Subnet configuration passed.
Checking VIP reachability
Check for VIP reachability passed.

ASM and CRS versions are compatible
Database Clusterware version compatibility passed.
Starting check for /dev/shm mounted as temporary file system ...
Check for /dev/shm mounted as temporary file system passed

NOTE: 
No fixable verification failures to fix
Pre-check for database installation was successful


Install the database software 

[oracle@hract21 database]$ id
uid=502(oracle) gid=54321(oinstall) groups=54321(oinstall),493(vboxsf),506(asmdba),54322(dba)
[oracle@hract21 database]$ cd /media/sf_kits/oracle/12.1.0.2/database
[oracle@hract21 database]$  ./runInstaller
--> Create and Configure a Database 
 --> Server Class
  --> Oracle Real Application Cluster database installation
   --> Policy Managed  
    --> Server Pool:  Top_Priority Cardinality :2
      --> Select all 3 RAC members
      --> Test/Create SSH connectivity
       --> Advanced Install 
        --> Select General Purpose / Transaction Processing database type
         --> Target Database Memory : 800 MByte 
          --> Select ASM and for OSDBA use group:  dba ( default )
 
Run root.sh : hract21, hract22, hract23

Start database banka on all nodes 
[oracle@hract21 database]$  srvctl status srvpool -a
Server pool name: Free
Active servers count: 2
Active server names: hract21,hract22
NAME=hract21 STATE=ONLINE
NAME=hract22 STATE=ONLINE
Server pool name: Generic
Active servers count: 0
Active server names: 
Server pool name: Top_Priority
Active servers count: 1
Active server names: hract23
NAME=hract23 STATE=ONLINE
[oracle@hract21 database]$ srvctl modify srvpool -g Top_Priority -l 3 -u 3

*****  Cluster Resources: *****
Resource NAME               INST   TARGET       STATE        SERVER          STATE_DETAILS
--------------------------- ----   ------------ ------------ --------------- -----------------------------------------
ora.banka.db                   1   ONLINE       ONLINE       hract23         Open,STABLE  
ora.banka.db                   2   ONLINE       ONLINE       hract21         Open,STABLE  
ora.banka.db                   3   ONLINE       ONLINE       hract22         Open,STABLE 

Stop one instance 
[oracle@hract21 database]$ srvctl modify srvpool -g Top_Priority -l 2 -u 2 -f
ora.banka.db                   1   ONLINE       ONLINE       hract23         Open,STABLE  
ora.banka.db                   2   ONLINE       OFFLINE      -               Instance Shutdown,STABLE
ora.banka.db                   3   ONLINE       ONLINE       hract22         Open,STABLE 
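The pool arithmetic behind these srvctl commands is simple: servers are handed out in order of pool importance until each pool's max size (-u) is reached, and whatever is left over lands in the Free pool. A toy sketch of that rule (the allocate function is our own illustration, not an Oracle tool):

```shell
# Toy model of server-pool allocation: hand servers to pools in priority
# order up to each pool's max size; leftovers go to the Free pool.
allocate() {    # args: total_servers max_pool1 [max_pool2 ...]
    total=$1; shift
    for max in "$@"; do
        take=$(( total < max ? total : max ))
        echo "$take"
        total=$(( total - take ))
    done
    echo "Free: $total"
}

# 3 servers, first pool max=2, second pool max=1:
allocate 3 2 1
```

With max=3 on a single pool it takes all three servers (Free: 0), which is exactly what the `srvctl modify srvpool -l 3 -u 3` step above produced.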

Invoke dbca and create database bankb

[oracle@hract21 database]$  ./dbca
   --> Policy Managed  
    --> Server Pool:  Low_Priority Cardinality :1
     --> Target Database Memory : 800 MByte 

Check server pools :
[oracle@hract21 database]$ srvctl status srvpool -a
Server pool name: Free
Active servers count: 0
Active server names: 
Server pool name: Generic
Active servers count: 0
Active server names: 
Server pool name: Low_Priority
Active servers count: 1
Active server names: hract21
NAME=hract21 STATE=ONLINE
Server pool name: Top_Priority
Active servers count: 2
Active server names: hract22,hract23
NAME=hract22 STATE=ONLINE
NAME=hract23 STATE=ONLINE
    

For details about server pools read the following article : http://www.hhutzler.de/blog/managing-server-pools/
*****  Cluster Resources: *****
Resource NAME               INST   TARGET       STATE        SERVER          STATE_DETAILS
--------------------------- ----   ------------ ------------ --------------- -----------------------------------------
ora.banka.db                   1   ONLINE       ONLINE       hract23         Open,STABLE  
ora.banka.db                   2   ONLINE       OFFLINE      -               Instance Shutdown,STABLE
ora.banka.db                   3   ONLINE       ONLINE       hract22         Open,STABLE  
ora.bankb.db                   1   ONLINE       ONLINE       hract21         Open,STABLE  

Testing the current configuration 
Database bankB:
[oracle@hract21 ~]$ sqlplus system/sys@ract2-scan.grid12c.example.com:1521/bankb  @v
HOST_NAME               INSTANCE_NAME
------------------------------ ----------------
hract21.example.com           bankb_1
--> As database bankB runs on only one instance, all connections go to hract21 

Verify load balancing for Database bankA:
[oracle@hract21 ~]$  sqlplus system/sys@ract2-scan.grid12c.example.com:1521/banka @v
HOST_NAME               INSTANCE_NAME
------------------------------ ----------------
hract23.example.com           bankA_1

[oracle@hract21 ~]$ sqlplus system/sys@ract2-scan.grid12c.example.com:1521/banka @v
HOST_NAME               INSTANCE_NAME
------------------------------ ----------------
hract22.example.com           bankA_3

--> As database bankA runs on two instances, load balancing takes place.
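To quantify the distribution, the instance names returned by repeated connects can be tallied. A sketch; in real use the input would come from a loop of sqlplus calls, here a hypothetical sample of five results is used:

```shell
# Count how many connections landed on each instance.
tally_instances() {
    sort | uniq -c | sort -rn
}

sample='bankA_1
bankA_3
bankA_1
bankA_3
bankA_1'

printf '%s\n' "$sample" | tally_instances
```

A roughly even split across the running instances confirms that SCAN-based load balancing is working.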

 

Reference

DEBUG Install problems

Run cluvfy before starting the Installation process

$ cluvfy stage -pre crsinst -n grac121,grac122  -networks eth1:192.168.1.0:PUBLIC/eth2:192.168.2.0:cluster_interconnect

Problems before running root.sh

Review  Top 5 Installation CRS/GRID problems ( see Top 5 CRS/Grid Infrastructure Install issues (Doc ID 1367631.1) )

  • Multicast Problems ( available since 11.2.0.2 +)
  •  Grid Infrastructure Startup During Patching, Install or Upgrade May Fail Due to  Multicasting Requirement (Doc ID 1212703.1)
  •  root.sh fails due to known bugs fixed in a PSU
    - Install the base release 
    - Before running root.sh install a PSU like 11.2.0.4.2
    - In general you always want the latest PSU anyway
    - For upgrades run: $ orachk -u -o -pre 
  • Complete the GI installation if the OUI session died before running root.sh on the remaining nodes
    As Grid user run : $GRID_HOME/cfgtoollogs/configToolAllCommands 
       cat  $GRID_HOME/cfgtoollogs/configToolAllCommands
       # Copyright (c) 1999, 2013, Oracle. All rights reserved.
       /u01/app/11204/grid/oui/bin/runConfig.sh ORACLE_HOME=/u01/app/11204/grid MODE=perform ACTION=configure RERUN=true $

 

  •  Installation fails because network requirements aren’t met
    • The Basics of IPv4 Subnet and Oracle Clusterware (Doc ID 1386709.1)
    • Grid Infrastructure Startup During Patching, Install or Upgrade May Fail Due to Multicasting Requirement (Doc ID 1212703.1)
    • How to Validate Network and Name Resolution Setup for the Clusterware and RAC (Doc ID 1054902.1)
  • Rolling GI upgrade
    • Prior to rolling upgrade run:  $ orachk -u -o -pre
    • –> If complete cluster outage is possible perform a NON rolling upgrade

 

Review Note:     Troubleshoot 11gR2 Grid Infrastructure/RAC Database runInstaller Issues (Doc ID 1056322.1)

 

Problems running root.sh

How to Proceed from Failed 11gR2 Grid Infrastructure (CRS) Installation (Doc ID 942166.1)

  • Identify cause of root.sh failure by reviewing logs in $GRID_HOME/cfgtoollogs/crsconfig and $GRID_HOME/log
  • Once the cause is identified and the problem is fixed, deconfigure and reconfigure with the steps below – keep in mind that you need to wait until each step finishes successfully before moving to the next one:
Step 0: For 11.2.0.2 and above, root.sh is restartable.
  Once cause is identified and the problem is fixed, root.sh can be executed again 
  on the failed node. If it succeeds, continue with your planned installation procedure; 
  otherwise as root sequentially execute
      "$GRID_HOME/crs/install/rootcrs.pl -verbose -deconfig -force"
  and 
      $GRID_HOME/root.sh on local node
  If it succeeds, continue with your planned installation procedure, otherwise proceed to 
  next step (Step 1) of the note.
  ....
  For  complete deconfiguration steps read note   942166.1.

 

Reference

  • Grid Infrastructure Startup During Patching, Install or Upgrade May Fail Due to Multicasting Requirement (Doc ID 1212703.1)
  • Top 5 CRS/Grid Infrastructure Install issues (Doc ID 1367631.1)
  • The Basics of IPv4 Subnet and Oracle Clusterware (Doc ID 1386709.1)
  • How to Validate Network and Name Resolution Setup for the Clusterware and RAC (Doc ID 1054902.1)
  • Troubleshoot 11gR2 Grid Infrastructure/RAC Database runInstaller Issues (Doc ID 1056322.1)

Install CRS 10.2.0.1 on top of OEL 5.10 / Virtualbox 4.2

Disk Layout

Using Virtualbox devices attached to SATA controller :
Raw-Devices for OCR:
/dev/sdb1 -> /dev/raw/raw1:  bound to major 8, minor 17  - Size:  1 GByte
/dev/sdc1 -> /dev/raw/raw2:  bound to major 8, minor 33  - Size:  1 GByte

Raw-Devices Voting disks:
/dev/sdd1 -> /dev/raw/raw3:  bound to major 8, minor 49  - Size:  1 GByte
/dev/sde1 -> /dev/raw/raw4:  bound to major 8, minor 65  - Size:  1 GByte
/dev/sdf1 -> /dev/raw/raw5:  bound to major 8, minor 81  - Size:  1 GByte

ASM Devices:
/dev/sdg1  - Size:  2 GByte 
/dev/sdh1  - Size:  2 GByte
/dev/sdi1  - Size:  2 GByte 
/dev/sdj1  - Size:  2 GByte

Verify disk size with dd after reboot
# dd if=/dev/sdb1 of=/dev/null bs=1M
1019+1 records in
1019+1 records out
...
Note: be careful not to mix ASM disks with raw devices. If you create an ASM disk on top of a raw device
already used for OCR or a voting disk, you will corrupt that raw device ! 

 

Verify OS packages

#  rpm -q --qf '%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n' binutils  compat-libstdc++-33  elfutils-libelf  \
   elfutils-libelf-devel  gcc  gcc-c++  glibc  glibc-common  glibc-devel  glibc-headers  ksh  libaio  \
   libaio-devel  libgcc  libstdc++  libstdc++-devel  make  sysstat  unixODBC  unixODBC-devel

binutils-2.17.50.0.6-26.el5 (x86_64)
libstdc++-devel-4.1.2-54.el5 (x86_64)
make-3.81-3.el5 (x86_64)
sysstat-7.0.2-12.0.1.el5 (x86_64)
...
package unixODBC is not installed
package unixODBC-devel is not installed
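The missing packages can be extracted from that rpm output mechanically and fed straight to yum. A small sketch (missing_packages is our own helper; the sample is the output above):

```shell
# Print only the package names that rpm reported as not installed.
missing_packages() {
    sed -n 's/^package \(.*\) is not installed$/\1/p'
}

sample='binutils-2.17.50.0.6-26.el5 (x86_64)
package unixODBC is not installed
package unixODBC-devel is not installed'

printf '%s\n' "$sample" | missing_packages
# the resulting list could then be passed to: yum install <names>
```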

 

Prepare ASMLib

# yum install oracleasm-support
# ls
oracleasmlib-2.0.4-1.el5.x86_64.rpm  oracleasm-support-2.1.8-1.el5.x86_64.rpm
# rpm -iv oracleasmlib-2.0.4-1.el5.x86_64.rpm
Preparing packages for installation...
oracleasmlib-2.0.4-1.el5
# rpm -iv oracleasm-support-2.1.8-1.el5.x86_64.rpm
Preparing packages for installation...
        package oracleasm-support-2.1.8-1.el5.x86_64 is already installed
# rpm -qa | grep asm
oracleasm-support-2.1.8-1.el5
oracleasmlib-2.0.4-1.el5

Configuring the Oracle ASM library driver.
# /etc/init.d/oracleasm configure
This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.

Default user to own the driver interface []: oracle
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: 
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver:                     [  OK  ]
Scanning the system for Oracle ASMLib disks:               [  OK  ]

 

Create and format  Virtualbox  disks

M:\VM\RAC10g\SHARED_DISK> VBoxManage createhd --filename M:\VM\RAC10g\SHARED_DISK\DATA01.vdi --size 2048 --format VDI --variant Fixed
M:\VM\RAC10g\SHARED_DISK> VBoxManage createhd --filename M:\VM\RAC10g\SHARED_DISK\DATA02.vdi --size 2048 --format VDI --variant Fixed
....
M:\VM\RAC10g\SHARED_DISK>  VBoxManage modifyhd DATA01.vdi --type shareable 
M:\VM\RAC10g\SHARED_DISK> VBoxManage modifyhd DATA02.vdi --type shareable
..
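The repeated createhd/modifyhd calls elided above can be generated with a loop. A sketch with a hypothetical VBOX stub of our own so it can be dry-run (set VBOX=VBoxManage to execute for real):

```shell
# Emit one fixed-size shareable VDI per data disk.
gen_shared_disks() {
    for n in 01 02 03 04; do
        $VBOX createhd --filename "DATA$n.vdi" --size 2048 --format VDI --variant Fixed
        $VBOX modifyhd "DATA$n.vdi" --type shareable
    done
}

VBOX=echo     # stub: print the commands; use VBOX=VBoxManage to run them
gen_shared_disks
```

Marking the disks shareable is what allows both RAC VMs to attach the same VDI files.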

Format disk
# fdisk /dev/sdb
# fdisk /dev/sdc
# /sbin/partprobe
Warning: Unable to open /dev/sr0 read-write (Read-only file system).  /dev/sr0 has been opened read-only.
Error: Error opening /dev/md0: No such file or directory
# ls /dev/sd*
/dev/sda  /dev/sda1  /dev/sda2  /dev/sdb  /dev/sdb1  /dev/sdc  /dev/sdc1

Create ASM disks

# /usr/sbin/oracleasm createdisk ASM_DATA01 /dev/sdg1
Writing disk header: done
Instantiating disk: done
# /usr/sbin/oracleasm createdisk ASM_DATA02 /dev/sdh1
# /usr/sbin/oracleasm createdisk ASM_DATA03 /dev/sdi1
# /usr/sbin/oracleasm createdisk ASM_DATA04 /dev/sdj1
If you need to delete disks run
# /usr/sbin/oracleasm deletedisk DATA01
Clearing disk header: done
Dropping disk: done

After any ASMLib operation run scandisks and listdisks on all RAC nodes
#  /usr/sbin/oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
# /usr/sbin/oracleasm listdisks
ASM_DATA01
ASM_DATA02
ASM_DATA03
ASM_DATA04
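The listdisks output can be checked against the expected set instead of eyeballing it. A sketch (verify_disks is our own helper; in real use pipe /usr/sbin/oracleasm listdisks into it):

```shell
# Fail if any expected ASM disk is missing from the listdisks output.
verify_disks() {    # args: expected disk names; stdin: listdisks output
    found=$(cat)
    for d in "$@"; do
        case "$found" in
            *"$d"*) ;;
            *) echo "MISSING: $d"; return 1 ;;
        esac
    done
    echo "all disks present"
}

# Sample taken from the output above:
printf 'ASM_DATA01\nASM_DATA02\nASM_DATA03\nASM_DATA04\n' |
    verify_disks ASM_DATA01 ASM_DATA02 ASM_DATA03 ASM_DATA04
```

Running this on every RAC node after a scandisks quickly shows whether a node missed a disk.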

Prepare UDEV rules for our RAW devices

# cat  /etc/udev/rules.d/63-oracle-raw.rules
ACTION=="add", KERNEL=="sdb1", RUN+="/bin/raw /dev/raw/raw1 %N"
ACTION=="add", KERNEL=="sdc1", RUN+="/bin/raw /dev/raw/raw2 %N"
ACTION=="add", KERNEL=="sdd1", RUN+="/bin/raw /dev/raw/raw3 %N"
ACTION=="add", KERNEL=="sde1", RUN+="/bin/raw /dev/raw/raw4 %N"
ACTION=="add", KERNEL=="sd*", PROGRAM=="/sbin/scsi_id -g -u -s /block/$parent", RESULT=="SATA_VBOX_HARDDISK_VBc477f753-2ce5f51a_", RUN+="/bin/raw /dev/raw/raw5 %N"       
KERNEL=="raw[1-2]*", OWNER="root", GROUP="oinstall", MODE="640"
KERNEL=="raw[3-5]*", OWNER="oracle", GROUP="oinstall", MODE="644"
--> Always try to map your disks with /sbin/scsi_id ( like line 5 ) and not by using sdX devices ( lines 1-4 )
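Such a scsi_id-based rule line can be built from the disk's serial. A sketch (make_raw_rule is our own helper; the serial would come from /sbin/scsi_id -g -u -s /block/&lt;dev&gt;):

```shell
# Build one udev rule line that maps a disk by SCSI serial to a raw device.
make_raw_rule() {    # args: scsi-serial raw-device-number
    printf 'ACTION=="add", KERNEL=="sd*", PROGRAM=="/sbin/scsi_id -g -u -s /block/$parent", RESULT=="%s", RUN+="/bin/raw /dev/raw/raw%s %%N"\n' "$1" "$2"
}

make_raw_rule SATA_VBOX_HARDDISK_VBc477f753-2ce5f51a_ 5
```

The serial-based match survives device renumbering after adding or removing disks, which the sdX-based rules do not.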

Reload udev rules
# /sbin/udevcontrol reload_rules
# /sbin/start_udev

Verify raw devices after reboot
#  raw -qa
/dev/raw/raw1:  bound to major 8, minor 17
/dev/raw/raw2:  bound to major 8, minor 33
/dev/raw/raw3:  bound to major 8, minor 49
/dev/raw/raw4:  bound to major 8, minor 65
/dev/raw/raw5:  bound to major 8, minor 81

# ls -l  /dev/raw/ra*
crw-r----- 1 root   oinstall 162, 1 Apr  4 09:09 /dev/raw/raw1
crw-r----- 1 root   oinstall 162, 2 Apr  4 09:09 /dev/raw/raw2
crw-r--r-- 1 oracle oinstall 162, 3 Apr  4 09:09 /dev/raw/raw3
crw-r--r-- 1 oracle oinstall 162, 4 Apr  4 09:09 /dev/raw/raw4
crw-r--r-- 1 oracle oinstall 162, 5 Apr  4 09:09 /dev/raw/raw5

SSH setup

On both RAC Nodes run :
$ su - oracle
$ mkdir ~/.ssh
$ chmod 700 ~/.ssh
$ /usr/bin/ssh-keygen -t rsa # Accept the default settings

[oracle@ract1 .ssh]$ cd ~/.ssh
[oracle@ract1 .ssh]$ cat id_rsa.pub >> authorized_keys
[oracle@ract1 .ssh]$ scp authorized_keys ract2:.ssh/
[oracle@ract1 ~]$  ssh ract2 date
Tue Apr  1 14:24:32 CEST 2014

[oracle@ract2 .ssh]$ cd ~/.ssh
[oracle@ract2 .ssh]$ cat id_rsa.pub >> authorized_keys
[oracle@ract2 .ssh]$ scp authorized_keys ract1:.ssh/
[oracle@ract2 ~]$  ssh ract1 date
Tue Apr  1 14:24:32 CEST 2014

Use cluvfy 12.1 to test node readiness

Always install the newest cluvfy version, even for 10gR2 CRS validations!
[root@ract1 ~]$  ./bin/cluvfy  -version
12.1.0.1.0 Build 112713x8664

Verify OS setup on ract1
[root@ract1 ~]$ ./bin/cluvfy comp sys -p crs -r 10gR2 -n ract1 -verbose -fixup
--> Run required scripts
[root@ract1 ~]# /tmp/CVU_12.1.0.1.0_oracle/runfixup.sh
All Fix-up operations were completed successfully.

Repeat this step on ract2
[root@ract2 ~]$ ./bin/cluvfy comp sys -p crs -r 10gR2 -n ract2 -verbose -fixup
--> Run required scripts
[root@ract2 ~]# /tmp/CVU_12.1.0.1.0_oracle/runfixup.sh
All Fix-up operations were completed successfully.

Now verify System requirements on both nodes
[oracle@ract1 cluvfy12]$  ./bin/cluvfy comp sys -p crs -r 10gR2 -n ract1 -verbose -fixup
Verifying system requirement
..
NOTE:
No fixable verification failures to fix

Finally run cluvfy to test CRS installation readiness 
$ cluvfy12/bin/cluvfy stage -pre crsinst -r 10gR2 \
  -networks eth1:192.168.1.0:PUBLIC/eth2:192.168.2.0:cluster_interconnect \
  -n ract1,ract2 -verbose
..
Pre-check for cluster services setup was successful.

Install CRS 10.2.0.1

Unzip clusterware kits
# cd /Kits
# gunzip  /media/sf_kits/Oracle/10.2/Linux64/10201_clusterware_linux_x86_64.cpio.gz
# cpio  -idmv < /media/sf_kits/Oracle/10.2/Linux64/10201_clusterware_linux_x86_64.cpio
# gunzip  /media/sf_kits/Oracle/10.2/Linux64/10201_database_linux_x86_64.cpio.gz
# cpio  -idmv <   /media/sf_kits/Oracle/10.2/Linux64/10201_database_linux_x86_64.cpio

Run  ./rootpre.sh on both nodes
# ./rootpre.sh
No OraCM running
The "No OraCM" message can be ignored since the clusterware is not installed yet.

Install CRS software stack: 
Problem 1: Installer fails with java.lang.UnsatisfiedLinkError:  libXp.so.6:

[oracle@ract1 clusterware]$ ./runInstaller -ignoreSysPrereqs   
Exception java.lang.UnsatisfiedLinkError: /tmp/OraInstall2014-04-01_02-56-03PM/jre/1.4.2/lib/i386/libawt.so: libXp.so.6: 
    cannot open shared object file: No such file or directory occurred..
    java.lang.UnsatisfiedLinkError: /tmp/OraInstall2014-04-01_02-56-03PM/jre/1.4.2/lib/i386/libawt.so: libXp.so.6: 
    cannot open shared object file: No such file or directory

Fix : Install libXp via yum 
#  yum install libXp
Loaded plugins: rhnplugin, security
This system is not registered with ULN.
You can use up2date --register to register.
ULN support will be disabled.
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package libXp.i386 0:1.0.0-8.1.el5 set to be updated
---> Package libXp.x86_64 0:1.0.0-8.1.el5 set to be updated
... 

Problem 2:  Vipca fails with error loading shared libraries: libpthread.so.0:
Fix vipca and srvctl scripts by unsetting LD_ASSUME_KERNEL parameter
# vipca
/u01/app/oracle/product/crs/jdk/jre//bin/java: error while loading shared libraries: libpthread.so.0: 
    cannot open shared object file: No such file or directory
# which vipca
/u01/app/oracle/product/crs/bin/vipca
[root@ract1 ~]# vi /u01/app/oracle/product/crs/bin/vipca
After the IF statement around line 123 add an unset command to ensure LD_ASSUME_KERNEL is not set as follows:
if [ "$arch" = "i686" -o "$arch" = "ia64" -o "$arch" = "x86_64" ]
then
LD_ASSUME_KERNEL=2.4.19
export LD_ASSUME_KERNEL
fi
unset LD_ASSUME_KERNEL   <<== line to be added. Do the same for srvctl.
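The same edit can be scripted with sed by appending the unset line right after the closing "fi" of that block. A sketch, demonstrated here on a copy of the relevant lines rather than on the real vipca script (GNU sed syntax, as shipped with OEL):

```shell
# Append "unset LD_ASSUME_KERNEL" after the closing "fi" in a copy of the
# code block, then show the result.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
if [ "$arch" = "i686" -o "$arch" = "ia64" -o "$arch" = "x86_64" ]
then
LD_ASSUME_KERNEL=2.4.19
export LD_ASSUME_KERNEL
fi
EOF

sed -i '/^fi$/a\
unset LD_ASSUME_KERNEL' "$tmp"

cat "$tmp"
rm -f "$tmp"
```

Against the real script you would match the specific fi of that if-block (around line 123) rather than every fi in the file.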

Retest vipca ( ignore "Error 0(Native: listNetInterfaces:[3])" - this error will be fixed later )
# vipca
Error 0(Native: listNetInterfaces:[3])
  [Error 0(Native: listNetInterfaces:[3])]
# which srvctl
/u01/app/oracle/product/crs/bin/srvctl
# vi /u01/app/oracle/product/crs/bin/srvctl  <-- unset LD_ASSUME_KERNEL in the srvctl script too

Usage: srvctl <command> <object> [<options>]
Execute the same steps  on ract2 and verify that vipca and srvctl are running :

Now rerun root.sh after cleaning up the last setup
Run on ract1,ract2
# cd /u01/app/oracle/product/crs/install
# ./rootdelete.sh 
# ./rootdeinstall.sh
#   rm -rf /var/tmp/.oracle
For more details on rerunning root.sh please read the following link 

Run on root.sh on ract1
# /u01/app/oracle/product/crs/root.sh
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured

Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: ract1 ract1int ract1
node 2: ract2 ract2int ract2
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /dev/raw/raw3
Now formatting voting device: /dev/raw/raw4
Now formatting voting device: /dev/raw/raw5
Format of 3 voting devices complete.
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        ract1
CSS is inactive on these nodes.
        ract2
Local node checking complete.
Run root.sh on remaining nodes to start CRS daemons.

Later run root.sh on ract2
# /u01/app/oracle/product/crs/root.sh
--> CRS doesn't come up 
    On 2nd node, root.sh fails with message:
     Failure at final check of Oracle CRS stack.
     10

Verify Logs
# cd /u01/app/oracle/product/crs/log
Error 
[root@ract2 log]# more ./ract2/client/css.log
Oracle Database 10g CRS Release 10.2.0.1.0 Production Copyright 1996, 2005 Oracle.  All rights reserved.
2014-04-01 16:13:01.176: [ CSSCLNT][3312432864]clsssInitNative: connect failed, rc 9
2014-04-01 16:13:02.201: [ CSSCLNT][3312432864]clsssInitNative: connect failed, rc 9
2014-04-01 16:13:03.222: [ CSSCLNT][3312432864]clsssInitNative: connect failed, rc 9
--> Disable Firewall on all cluster nodes
    For Details check:   Pre-11.2: Root.sh Unable To Start CRS On Second Node (Doc ID 369699.1)

Disable firewall on all cluster nodes 
# service iptables stop
# chkconfig iptables off

Clean up the clusterware setup on both nodes ( read the following link if you need details ) 
Now run root.sh on ract2:
#  /u01/app/oracle/product/crs/root.sh
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured

Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: ract1 ract1int ract1
node 2: ract2 ract2int ract2
clscfg: Arguments check out successfully.

NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        ract1
        ract2
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
Error 0(Native: listNetInterfaces:[3])
  [Error 0(Native: listNetInterfaces:[3])]

Finally fix the vipca errors 
# oifcfg setif -global eth1/192.168.1.0:public 
# oifcfg setif -global eth2/192.168.2.0:cluster_interconnect 
# oifcfg getif 
eth1  192.168.1.0  global  public
eth2  192.168.2.0  global  cluster_interconnect

Now run vipca
# vipca
Node   VIP-alias   VIP-IP-address  
ract1  ract1vip.example.com  192.168.1.135 255.255.255.0
ract2  ract2vip.example.com  192.168.1.136 255.255.255.0

Once vipca completes, all the Clusterware resources (VIP, GSD, ONS) will be started. 
There is no need to re-run root.sh, since vipca is the last step executed by root.sh. 
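Note that oifcfg expects the subnet (the network address, e.g. eth1/192.168.1.0), not a host IP. A small sketch in plain sh showing how that network address derives from an interface IP and netmask:

```shell
# Sketch: derive the network address (as passed to "oifcfg setif")
# from a host IP and its netmask, using only POSIX sh arithmetic.
network_of() {   # usage: network_of <ip> <netmask>
  ip=$1; mask=$2
  oldIFS=$IFS; IFS=.
  set -- $ip;   i1=$1; i2=$2; i3=$3; i4=$4   # split IP into octets
  set -- $mask; m1=$1; m2=$2; m3=$3; m4=$4   # split mask into octets
  IFS=$oldIFS
  echo "$((i1 & m1)).$((i2 & m2)).$((i3 & m3)).$((i4 & m4))"
}

network_of 192.168.1.135 255.255.255.0   # prints 192.168.1.0
```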

Verify  CRS setup

#crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy
# crs_stat -t
Name           Type           Target    State     Host        
------------------------------------------------------------
ora.ract1.gsd  application    ONLINE    ONLINE    ract1       
ora.ract1.ons  application    ONLINE    ONLINE    ract1       
ora.ract1.vip  application    ONLINE    ONLINE    ract1       
ora.ract2.gsd  application    ONLINE    ONLINE    ract2       
ora.ract2.ons  application    ONLINE    ONLINE    ract2       
ora.ract2.vip  application    ONLINE    ONLINE    ract2

References

  • Pre-11.2: Root.sh Unable To Start CRS On Second Node (Doc ID 369699.1)
  • Unable To Connect To Cluster Manager Ora-29701 as Network Socket Files are Removed (Doc ID 391790.1)
  • http://oracleview.wordpress.com/2011/03/31/oracle-10gr2-rac-on-linux-5-5-using-virtualbox-4/
  • http://www.databaseskill.com/2699596/
  • CLUVFY Fails With Error: Could not find a suitable set of interfaces for VIPs or Private Interconnect (Doc ID 338924.1)
  • How to Proceed From a Failed 10g or 11.1 Oracle Clusterware (CRS) Installation (Doc ID 239998.1)
  • 10gR2 RAC Install issues on Oracle EL5 or RHEL5 or SLES10 (VIPCA / SRVCTL / OUI Failures) (Doc ID 414163.1)

 

Install RAC 10.2.0.1 on top of OEL 5.10 and Virtualbox 4.2

Overview

  - Install the ASM instance and the RDBMS instance into the same OH, even though this is not recommended by Oracle
  - Use ASM for RAC  datafiles
  - You can switch between the ASM instance and the RAC instance by changing ORACLE_SID ( RACT1 / +ASM1 and RACT2 / +ASM2 )
  - RAC spfile location: +DATA/RACT/spfileRACT.ora
  - ASM  pfile location: /u01/app/oracle/product/10.2/rac_db1/dbs/init+ASM1.ora

Install RDBMS/ASM software and create database

[oracle@ract1 ~]$ cd /Kits
[oracle@ract1 Kits]$ cd database
[oracle@ract1 database]$ ls
doc  install  response  runInstaller  stage  welcome.html
[oracle@ract1 database]$ ./runInstaller -ignoreSysPrereqs   
Starting Oracle Universal Installer...
-> Database : RACT
   select ASM and don't install ASM into a separate OH, even though the installer suggests it!

Error:
Creating the database fails with ORA-27125 on OEL 5.10
Solution:
cd $ORACLE_HOME/bin
mv oracle oracle.bin

-- Paste the following as one block 
cat >oracle <<"EOF"
#!/bin/bash
export DISABLE_HUGETLBFS=1
exec $ORACLE_HOME/bin/oracle.bin $@
EOF
-- End of paste 
chmod +x oracle
--> Rerun the create database assistant 

It seems only the Java utilities need the above workaround (WA) - after the installation, disable the WA again:
$ srvctl stop  database -d RACT
$ mv oracle oracle.wrapper
$ mv oracle.bin oracle
$  ls -l oracle
-rwsr-s--x 1 oracle oinstall 108916387 Apr  6 11:27 oracle
$  srvctl start  database -d RACT
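Since the WA may be needed again for later Java-based tools, the enable/disable dance can be wrapped in two small functions. These helpers are a sketch (not from the install media); the directory is a parameter so the logic can be tried in a scratch directory before touching $ORACLE_HOME/bin:

```shell
# Sketch (hypothetical helpers): toggle the ORA-27125 wrapper on/off.
# Each function runs in a subshell so the caller's cwd is untouched.
enable_wrapper() (    # usage: enable_wrapper <bindir>
  cd "$1" || exit 1
  mv oracle oracle.bin                # park the real binary
  printf '%s\n' '#!/bin/bash' \
    'export DISABLE_HUGETLBFS=1' \
    'exec $ORACLE_HOME/bin/oracle.bin $@' > oracle
  chmod +x oracle
)
disable_wrapper() (   # usage: disable_wrapper <bindir>
  cd "$1" || exit 1
  mv oracle oracle.wrapper            # keep the wrapper around, just in case
  mv oracle.bin oracle                # restore the real binary
)
```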

Verify Cluster database status

# crs_stat -t
Name           Type           Target    State     Host        
------------------------------------------------------------
ora....T1.inst application    ONLINE    ONLINE    ract1       
ora....T2.inst application    ONLINE    ONLINE    ract2       
ora.RACT.db    application    ONLINE    ONLINE    ract1       
ora....SM1.asm application    ONLINE    ONLINE    ract1       
ora....T1.lsnr application    ONLINE    ONLINE    ract1       
ora.ract1.gsd  application    ONLINE    ONLINE    ract1       
ora.ract1.ons  application    ONLINE    ONLINE    ract1       
ora.ract1.vip  application    ONLINE    ONLINE    ract1       
ora....SM2.asm application    ONLINE    ONLINE    ract2       
ora....T2.lsnr application    ONLINE    ONLINE    ract2       
ora.ract2.gsd  application    ONLINE    ONLINE    ract2       
ora.ract2.ons  application    ONLINE    ONLINE    ract2       
ora.ract2.vip  application    ONLINE    ONLINE    ract2   

Check ASM instance, ASM diskgroup and RAC instances
ASM instance
[oracle@ract1 bin]$ export ORACLE_SID=+ASM1
[oracle@ract1 bin]$ asmcmd lsdg
State    Type    Rebal  Unbal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Name
MOUNTED  NORMAL  N      N         512   4096  1048576      8188     5786             2047            1869              0  DATA/
[oracle@ract1 bin]$ asmcmd ls +DATA/RACT/spfileRACT.ora
spfileRACT.ora

RAC instance
[oracle@ract1 bin]$ export ORACLE_SID=RACT1
SQL>  select INST_ID,INSTANCE_NUMBER,INSTANCE_NAME, status from  gv$instance;
   INST_ID INSTANCE_NUMBER INSTANCE_NAME    STATUS
---------- --------------- ---------------- ------------
         2               2 RACT2            OPEN
         1               1 RACT1            OPEN

Start and monitor instances
[oracle@ract1 ~]$ srvctl start  database -d RACT
[oracle@ract1 ~]$ srvctl status database -d RACT
Instance RACT1 is running on node ract1
Instance RACT2 is running on node ract2


Install Oracle RAC 12.1,OEL 6.4 and Virtualbox 4.2 with GNS and ASMLib

Network/DNS setup

Virtualbox Device Configuration 
eth0   -  VirtualBox NAT              - DHCP, either using the local LTE router 192.168.1.1 or the corporate VPN network 
eth1   -  VirtualBox Internal Network - Public interface ( grac121: 192.168.1.81 / grac122: 192.168.1.82 ) 
eth2   -  VirtualBox Internal Network - Private Cluster Interconnect ( grac121int: 192.168.2.81 / grac122int: 192.168.2.82  )

Restart the network service ( network restarts sometimes overwrite your resolv.conf file, so double-check it afterwards ) 
$ service network restart 
After network restart /etc/resolv.conf should look like: 
# Generated by NetworkManager 
search example.com grid.example.com de.oracle.com 
nameserver 192.168.1.50 

Add the Corporate Nameservers as forwarders in our DNS   
/etc/named.conf :    
forwarders { 192.135.82.44; 10.165.246.33; } ; 
Verify the ping works fine from our DNS nameserver to the corporate DNS name servers: 
$ ping 192.135.82.44 
$ ping 10.165.246.33 
Details: 
Nameserver settings:    
  192.135.82.44    : Corporate name server I    
  10.165.246.33    : Corporate name server II       
  192.168.1.50     : DNS name server used for GNS delegation ( GNS NS: 192.168.1.55 ) 

After the above setup, the network devices should look like:
# ifconfig | egrep 'HWaddr|Bcast'
eth0      Link encap:Ethernet  HWaddr 08:00:27:A8:27:BD  
          inet addr:10.0.2.15  Bcast:10.0.2.255  Mask:255.255.255.0
eth1      Link encap:Ethernet  HWaddr 08:00:27:1E:7D:B0  
          inet addr:192.168.1.81  Bcast:192.168.1.255  Mask:255.255.255.0
eth2      Link encap:Ethernet  HWaddr 08:00:27:97:59:C3  
          inet addr:192.168.2.81  Bcast:192.168.2.255  Mask:255.255.255.0

Preparing your corporate name server for GNS zone delegation
/etc/named.conf 
zone  "example.com" IN {
      type master;
       notify no;
       file "example.com.db";
};

/var/named/example.com.db
...
$ORIGIN grid12c.example.com.
@       IN          NS        gns12c.grid12c.example.com. ; NS  grid.example.com
        IN          NS        ns1.example.com.      ; NS example.com
gns12c  IN          A         192.168.1.58 ; glue record

Check DNS resolution
Testing GNS ( Note : ping will not work as GNS isn't active yet )
$  nslookup 192.168.1.58
Server:        192.168.1.50
Address:    192.168.1.50#53
58.1.168.192.in-addr.arpa    name = gns12c.grid12c.example.com.

$ nslookup gns12c.grid12c.example.com
;; Got SERVFAIL reply from 192.168.1.50, trying next server
--> No problem
#   nslookup grac112-scan ( Again this will only work after CRS is installed and active ) 
Server:        192.168.1.50
Address:    192.168.1.50#53
Non-authoritative answer:
Name:    grac112-scan.grid12c.example.com
Address: 192.168.1.149
Name:    grac112-scan.grid12c.example.com
Address: 192.168.1.150
Name:    grac112-scan.grid12c.example.com
Address: 192.168.1.148
...
$  nslookup grac121.example.com
Name:    grac121.example.com
Address: 192.168.1.81
$ nslookup 192.168.1.81
81.1.168.192.in-addr.arpa    name = grac121.example.com.
$  nslookup grac121int.example.com
Name:    grac121int.example.com
Address: 192.168.2.81
$ nslookup  192.168.2.81
81.2.168.192.in-addr.arpa    name = grac121int.example.com.
....
--> Repeat above nslookup steps for grac122


Configure your hostname by modifying /etc/sysconfig/network:
NETWORKING=yes
HOSTNAME=grac122.example.com

NTP Setup

NTP Setup - Clients: grac121.example.com, grac122.example.com
 # cat /etc/ntp.conf
 restrict default nomodify notrap noquery
 restrict 127.0.0.1
 # -- CLIENT NETWORK -------
 # --- OUR TIMESERVERS -----
 # 192.168.1.2 is the address for my timeserver,
 # use the address of your own, instead:
 server 192.168.1.50
 server  127.127.1.0
 # --- NTP MULTICASTCLIENT ---
 # --- GENERAL CONFIGURATION ---
 # Undisciplined Local Clock.
 fudge   127.127.1.0 stratum 12
 # Drift file.
 driftfile /var/lib/ntp/drift
 broadcastdelay  0.008
 # Keys file.
 keys /etc/ntp/keys

# ntpq -p

     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 ns1.example.com LOCAL(0)        10 u   20   64    1    0.244   -0.625   0.000
 LOCAL(0)        .LOCL.          12 l   19   64    1    0.000    0.000   0.000

Add to  /etc/rc.local
#
service ntpd stop
ntpdate -u 192.168.1.50 
service ntpd start

Account setup

Check user setup for the users oracle and grid ( note: the oracle user should belong to asmdba )

See :  Grid Infrastructure Installation Guide 12c – Chapter 6 

  • OSDBA for ASM (Database Administrator group for ASM, typically asmdba): Members of the ASM Database Administrator group (OSDBA for ASM) are granted read and write access to files managed by Oracle ASM. The Oracle Grid Infrastructure installation owner and all Oracle Database software owners must be a member of this group, and all users with OSDBA membership on databases that have access to the files managed by Oracle ASM must be members of the OSDBA group for ASM.
$ id
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),500(vboxsf),506(asmdba),54322(dba) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
$ id
uid=501(grid) gid=54321(oinstall) groups=54321(oinstall),500(vboxsf),504(asmadmin),506(asmdba),507(asmoper),54322(dba) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023

Create directories:
To create the Oracle Inventory directory, enter the following commands as the root user:
  # mkdir -p /u01/app/oraInventory
  # chown -R grid:oinstall /u01/app/oraInventory
Creating the Oracle Grid Infrastructure Home Directory
To create the Grid Infrastructure home directory, enter the following commands as the root user:
  # mkdir -p /u01/app/grid
  # chown -R grid:oinstall /u01/app/grid
  # chmod -R 775 /u01/app/grid
  # mkdir -p /u01/app/121/grid
  # chown -R grid:oinstall /u01/app/121/grid
  # chmod -R 775 /u01/app/121/grid
Creating the Oracle Base Directory
  To create the Oracle Base directory, enter the following commands as the root user:
  # mkdir -p /u01/app/oracle
  # chown -R oracle:oinstall /u01/app/oracle
  # chmod -R 775 /u01/app/oracle
Creating the Oracle RDBMS Home Directory
  To create the Oracle RDBMS Home directory, enter the following commands as the root user:
  # mkdir -p /u01/app/oracle/product/121/racdb
  # chown -R oracle:oinstall /u01/app/oracle/product/121/racdb
  # chmod -R 775 /u01/app/oracle/product/121/racdb
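The mkdir/chown/chmod triples above all follow one pattern, so a small helper loop avoids typos such as a doubled slash in a path. A sketch: ROOT defaults to a scratch directory here so it can be tried as a normal user; set ROOT=/u01 (and run as root) on the real node. chown is skipped when the owner does not exist on this machine:

```shell
# Sketch: create one Oracle directory with the standard owner and mode.
make_oracle_dir() {   # usage: make_oracle_dir <owner> <path>
  mkdir -p "$2"
  # only chown if the owner exists on this host (it will on the real node)
  if id "$1" >/dev/null 2>&1; then chown -R "$1:oinstall" "$2"; fi
  chmod -R 775 "$2"
}

ROOT=${ROOT:-$(mktemp -d)}     # set ROOT=/u01 on the real cluster node
make_oracle_dir grid   "$ROOT/app/grid"
make_oracle_dir grid   "$ROOT/app/121/grid"
make_oracle_dir oracle "$ROOT/app/oracle"
make_oracle_dir oracle "$ROOT/app/oracle/product/121/racdb"
```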

Install and verify the cvuqdisk RPM from the rpm directory of the GRID media: cvuqdisk-1.0.9-1.rpm
# rpm -qa | grep  cvu
cvuqdisk-1.0.9-1.x86_64

Use cluvfy to verify the current OS status before installing CRS

Download  cluvfy from 
http://www.oracle.com/technetwork/database/clustering/downloads/cvu-download-homepage-099973.html
Cluster Verification Utility Download for Oracle Grid Infrastructure 12c 
Note: The latest CVU version (July 2013) can be used with all currently supported Oracle RAC versions, including Oracle RAC 10g, 
      Oracle RAC 11g and Oracle RAC 12c.

Run cluvfy to prepare the CRS installation 
$ ./bin/cluvfy comp sys -p crs -n grac121 -verbose -fixup
Verifying system requirement 
Check: Total memory 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  grac121       3.7426GB (3924412.0KB)    4GB (4194304.0KB)         failed    
Result: Total memory check failed
Check: Available memory 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  grac121       3.3971GB (3562152.0KB)    50MB (51200.0KB)          passed    
Result: Available memory check passed
Check: Swap space 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  grac121       6.0781GB (6373372.0KB)    3.7426GB (3924412.0KB)    passed    
Result: Swap space check passed
Check: Free disk space for "grac121:/usr,grac121:/var,grac121:/etc,grac121:/u01/app/11203/grid,grac121:/sbin,grac121:/tmp" 
  Path              Node Name     Mount point   Available     Required      Status      
  ----------------  ------------  ------------  ------------  ------------  ------------
  /usr              grac121       /             13.5332GB     7.9635GB      passed      
  /var              grac121       /             13.5332GB     7.9635GB      passed      
  /etc              grac121       /             13.5332GB     7.9635GB      passed      
  /u01/app/11203/grid  grac121       /             13.5332GB     7.9635GB      passed      
  /sbin             grac121       /             13.5332GB     7.9635GB      passed      
  /tmp              grac121       /             13.5332GB     7.9635GB      passed      
Result: Free disk space check passed for "grac121:/usr,grac121:/var,grac121:/etc,grac121:/u01/app/11203/grid,grac121:/sbin,grac121:/tmp"
Check: User existence for "grid" 
  Node Name     Status                    Comment                 
  ------------  ------------------------  ------------------------
  grac121       passed                    exists(501)             
Checking for multiple users with UID value 501
Result: Check for multiple users with UID value 501 passed 
Result: User existence check passed for "grid"
Check: Group existence for "oinstall" 
  Node Name     Status                    Comment                 
  ------------  ------------------------  ------------------------
  grac121       passed                    exists                  
Result: Group existence check passed for "oinstall"
Check: Group existence for "dba" 
  Node Name     Status                    Comment                 
  ------------  ------------------------  ------------------------
  grac121       passed                    exists                  
Result: Group existence check passed for "dba"
Check: Membership of user "grid" in group "oinstall" [as Primary]
  Node Name         User Exists   Group Exists  User in Group  Primary       Status      
  ----------------  ------------  ------------  ------------  ------------  ------------
  grac121           yes           yes           yes           yes           passed      
Result: Membership check for user "grid" in group "oinstall" [as Primary] passed
Check: Membership of user "grid" in group "dba" 
  Node Name         User Exists   Group Exists  User in Group  Status          
  ----------------  ------------  ------------  ------------  ----------------
  grac121           yes           yes           yes           passed          
Result: Membership check for user "grid" in group "dba" passed
Check: Run level 
  Node Name     run level                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  grac121       5                         3,5                       passed    
Result: Run level check passed
Check: Hard limits for "maximum open file descriptors" 
  Node Name         Type          Available     Required      Status          
  ----------------  ------------  ------------  ------------  ----------------
  grac121           hard          4096          65536         failed          
Result: Hard limits check failed for "maximum open file descriptors"
Check: Hard limits for "maximum user processes" 
  Node Name         Type          Available     Required      Status          
  ----------------  ------------  ------------  ------------  ----------------
  grac121           hard          30524         16384         passed          
Result: Hard limits check passed for "maximum user processes"
Check: System architecture 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  grac121       x86_64                    x86_64                    passed    
Result: System architecture check passed
Check: Kernel version 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  grac121       2.6.39-200.24.1.el6uek.x86_64  2.6.32                    passed    
Result: Kernel version check passed
Check: Kernel parameter for "semmsl" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  grac121           250           250           250           passed          
Result: Kernel parameter check passed for "semmsl"

Check: Kernel parameter for "semmns" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  grac121           32000         32000         32000         passed          
Result: Kernel parameter check passed for "semmns"
Check: Kernel parameter for "semopm" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  grac121           100           100           100           passed          
Result: Kernel parameter check passed for "semopm"
Check: Kernel parameter for "semmni" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  grac121           128           128           128           passed          
Result: Kernel parameter check passed for "semmni"
Check: Kernel parameter for "shmmax" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  grac121           4398046511104  4398046511104  2009298944    passed          
Result: Kernel parameter check passed for "shmmax"
Check: Kernel parameter for "shmmni" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  grac121           4096          4096          4096          passed          
Result: Kernel parameter check passed for "shmmni"
Check: Kernel parameter for "shmall" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  grac121           4294967296    4294967296    392441        passed          
Result: Kernel parameter check passed for "shmall"
Check: Kernel parameter for "file-max" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  grac121           6815744       6815744       6815744       passed          
Result: Kernel parameter check passed for "file-max"
Check: Kernel parameter for "ip_local_port_range" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  grac121           between 9000 & 65500  between 9000 & 65500  between 9000 & 65535  passed          
Result: Kernel parameter check passed for "ip_local_port_range"
Check: Kernel parameter for "rmem_default" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  grac121           262144        262144        262144        passed          
Result: Kernel parameter check passed for "rmem_default"
Check: Kernel parameter for "rmem_max" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  grac121           4194304       4194304       4194304       passed          
Result: Kernel parameter check passed for "rmem_max"
Check: Kernel parameter for "wmem_default" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  grac121           262144        262144        262144        passed          
Result: Kernel parameter check passed for "wmem_default"
Check: Kernel parameter for "wmem_max" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  grac121           1048576       1048576       1048576       passed          
Result: Kernel parameter check passed for "wmem_max"
Check: Kernel parameter for "aio-max-nr" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  grac121           1048576       1048576       1048576       passed          
Result: Kernel parameter check passed for "aio-max-nr"
Check: Package existence for "binutils" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  grac121       binutils-2.20.51.0.2-5.34.el6  binutils-2.20.51.0.2      passed    
Result: Package existence check passed for "binutils"
Check: Package existence for "compat-libcap1" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  grac121       compat-libcap1-1.10-1     compat-libcap1-1.10       passed    
Result: Package existence check passed for "compat-libcap1"
Check: Package existence for "compat-libstdc++-33(x86_64)" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  grac121       compat-libstdc++-33(x86_64)-3.2.3-69.el6  compat-libstdc++-33(x86_64)-3.2.3  passed    
Result: Package existence check passed for "compat-libstdc++-33(x86_64)"
Check: Package existence for "libgcc(x86_64)" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  grac121       libgcc(x86_64)-4.4.6-4.el6  libgcc(x86_64)-4.4.4      passed    
Result: Package existence check passed for "libgcc(x86_64)"
Check: Package existence for "libstdc++(x86_64)" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  grac121       libstdc++(x86_64)-4.4.6-4.el6  libstdc++(x86_64)-4.4.4   passed    
Result: Package existence check passed for "libstdc++(x86_64)"
Check: Package existence for "libstdc++-devel(x86_64)" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  grac121       libstdc++-devel(x86_64)-4.4.6-4.el6  libstdc++-devel(x86_64)-4.4.4  passed    
Result: Package existence check passed for "libstdc++-devel(x86_64)"
Check: Package existence for "sysstat" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  grac121       sysstat-9.0.4-20.el6      sysstat-9.0.4             passed    
Result: Package existence check passed for "sysstat"
Check: Package existence for "gcc" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  grac121       gcc-4.4.6-4.el6           gcc-4.4.4                 passed    
Result: Package existence check passed for "gcc"
Check: Package existence for "gcc-c++" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  grac121       gcc-c++-4.4.6-4.el6       gcc-c++-4.4.4             passed    
Result: Package existence check passed for "gcc-c++"
Check: Package existence for "ksh" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  grac121       ksh-20100621-16.el6       ksh-...                   passed    
Result: Package existence check passed for "ksh"
Check: Package existence for "make" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  grac121       make-3.81-20.el6          make-3.81                 passed    
Result: Package existence check passed for "make"
Check: Package existence for "glibc(x86_64)" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  grac121       glibc(x86_64)-2.12-1.80.el6_3.5  glibc(x86_64)-2.12        passed    
Result: Package existence check passed for "glibc(x86_64)"
Check: Package existence for "glibc-devel(x86_64)" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  grac121       glibc-devel(x86_64)-2.12-1.80.el6_3.5  glibc-devel(x86_64)-2.12  passed    
Result: Package existence check passed for "glibc-devel(x86_64)"
Check: Package existence for "libaio(x86_64)" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  grac121       libaio(x86_64)-0.3.107-10.el6  libaio(x86_64)-0.3.107    passed    
Result: Package existence check passed for "libaio(x86_64)"
Check: Package existence for "libaio-devel(x86_64)" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  grac121       libaio-devel(x86_64)-0.3.107-10.el6  libaio-devel(x86_64)-0.3.107  passed    
Result: Package existence check passed for "libaio-devel(x86_64)"
Check: Package existence for "nfs-utils" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  grac121       nfs-utils-1.2.3-26.el6    nfs-utils-1.2.3-15        passed    
Result: Package existence check passed for "nfs-utils"
Checking for multiple users with UID value 0
Result: Check for multiple users with UID value 0 passed 
Starting check for consistency of primary group of root user
  Node Name                             Status                  
  ------------------------------------  ------------------------
  grac121                               passed                  
Check for consistency of root user's primary group passed
Check: Time zone consistency 
Result: Time zone consistency check passed
******************************************************************************************
Following is the list of fixable prerequisites selected to fix in this session
******************************************************************************************
--------------                ---------------     ----------------    
Check failed.                 Failed on nodes     Reboot required?    
--------------                ---------------     ----------------    
Hard Limit: maximum open      grac121             no                  
file descriptors                                                      
Execute "/tmp/CVU_12.1.0.1.0_grid/runfixup.sh" as root user on nodes "grac121" to perform the fix up operations manually
--> Now run "runfixup.sh" as root on node "grac121" 
Press ENTER key to continue after execution of "/tmp/CVU_12.1.0.1.0_grid/runfixup.sh" has completed on nodes "grac121"
Fix: Hard Limit: maximum open file descriptors 
  Node Name                             Status                  
  ------------------------------------  ------------------------
  grac121                               successful              
Result: "Hard Limit: maximum open file descriptors" was successfully fixed on all the applicable nodes
Fix up operations were successfully completed on all the applicable nodes
Verification of system requirement was unsuccessful on all the specified nodes.
--
The fixup script may fix some errors, but issues like too little memory/swap need manual intervention:
Check: Total memory 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  grac121       3.7426GB (3924412.0KB)    4GB (4194304.0KB)         failed    
Result: Total memory check failed
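A full cluvfy run produces hundreds of lines, so a small filter that shows only the failing checks helps when rechecking after fixes. A sketch; the log file name is an assumption, captured with e.g. `./bin/cluvfy comp sys -p crs -n grac121 -verbose | tee cluvfy.log`:

```shell
# Sketch: print only the failing checks from a saved cluvfy log on stdin.
# Matches both the per-node status column ("... failed   ") and the
# summary lines ("Result: ... check failed").
failed_checks() {
  grep -E 'failed[[:space:]]*$|check failed'
}

# usage: failed_checks < cluvfy.log
```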

Verify GNS integrity ( note: if a GNS is already active you will get a warning ) 
$ ./bin/cluvfy comp gns -precrsinst -domain grid12c.example.com  -vip 192.168.1.58
Verifying GNS integrity 
Checking GNS integrity...
The GNS subdomain name "grid12c.example.com" is a valid domain name
GNS VIP "192.168.1.58" resolves to a valid IP address
GNS integrity check passed
Verification of GNS integrity was successful

Create ASM disks

D:\VM>VBoxManage createhd --filename C:\VM\GRAC12c\ASM\asm1_5G.vdi --size 5120 --format VDI --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Disk image created. UUID: a24ac5ee-f045-434d-8c2d-8fde5c73d6fa
D:\VM>VBoxManage createhd --filename C:\VM\GRAC12c\ASM\asm2_5G.vdi --size 5120 --format VDI --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Disk image created. UUID: ade56ace-a8fd-4383-aa8e-f2b4f7645372
D:\VM>VBoxManage createhd --filename C:\VM\GRAC12c\ASM\asm3_5G.vdi --size 5120 --format VDI --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Disk image created. UUID: 033563bc-e63d-435a-8fc6-e4f67dd54128
D:\VM>VBoxManage createhd --filename C:\VM\GRAC12c\ASM\asm4_5G.vdi --size 5120 --format VDI --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Disk image created. UUID: 7b60806c-78fc-4f4c-beb1-ff9bafd36eeb
Attach ASM disks and make them shareable
D:\VM>VBoxManage storageattach grac121 --storagectl "SATA" --port 1  --device 0 --type hdd --medium C:\VM\GRAC12c\ASM\asm1_5G.vdi
D:\VM>VBoxManage storageattach grac121 --storagectl "SATA" --port 2  --device 0 --type hdd --medium C:\VM\GRAC12c\ASM\asm2_5G.vdi
D:\VM>VBoxManage storageattach grac121 --storagectl "SATA" --port 3  --device 0 --type hdd --medium C:\VM\GRAC12c\ASM\asm3_5G.vdi
D:\VM>VBoxManage storageattach grac121 --storagectl "SATA" --port 4  --device 0 --type hdd --medium C:\VM\GRAC12c\ASM\asm4_5G.vdi
D:\VM> VBoxManage modifyhd C:\VM\GRAC12c\ASM\asm1_5G.vdi --type shareable
D:\VM> VBoxManage modifyhd C:\VM\GRAC12c\ASM\asm2_5G.vdi --type shareable
D:\VM> VBoxManage modifyhd C:\VM\GRAC12c\ASM\asm3_5G.vdi --type shareable
D:\VM> VBoxManage modifyhd C:\VM\GRAC12c\ASM\asm4_5G.vdi --type shareable
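The twelve VBoxManage calls above can be generated with a short loop. A dry-run sketch (it only prints the commands; the paths and the VM name grac121 are taken from the steps above — drop the echoes to execute for real):

```shell
#!/bin/sh
# Dry-run sketch: print the VBoxManage commands for the four ASM disks
# instead of typing them one by one (paths/VM name as in the steps above).
gen_asm_cmds() {
  ASM_DIR='C:\VM\GRAC12c\ASM'
  for i in 1 2 3 4; do
    disk="$ASM_DIR\\asm${i}_5G.vdi"
    echo "VBoxManage createhd --filename $disk --size 5120 --format VDI --variant Fixed"
    echo "VBoxManage storageattach grac121 --storagectl SATA --port $i --device 0 --type hdd --medium $disk"
    echo "VBoxManage modifyhd $disk --type shareable"
  done
}
gen_asm_cmds    # review the output, then run the commands for real
```

Note that modifyhd --type shareable must run while the disk is not attached to a running VM.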

Reboot your system and partition the disks:

# fdisk /dev/sde
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-652, default 1): 
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-652, default 652): 
Using default value 652
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
--> Repeat the above partitioning steps for all newly created disks !
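The interactive fdisk answers above (n, p, 1, two defaults, w) can also be fed non-interactively. A hedged sketch — the device list is an assumption, and the real fdisk call is left commented out because it destroys data on the device:

```shell
#!/bin/sh
# Sketch: script the fdisk dialog from the session above.
fdisk_input() {
  # n=new, p=primary, 1=partition number, two empty lines accept the
  # default first/last cylinder, w=write the partition table
  printf 'n\np\n1\n\n\nw\n'
}
# Assumed device list -- adjust to the disks you actually attached:
for dev in /dev/sdb /dev/sdc /dev/sdd /dev/sde; do
  echo "Partitioning $dev ..."
  # Uncomment to run for real (DESTROYS data on $dev!):
  # fdisk_input | fdisk "$dev"
done
```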

Configure ASMlib

Configure Oracle ASM library driver
# /usr/sbin/oracleasm configure -i
Configuring the Oracle ASM library driver.
This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.
Default user to own the driver interface [grid]: 
Default group to own the driver interface [asmadmin]: 
Start Oracle ASM library driver on boot (y/n) [y]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done

Create ASM disks:
# /etc/init.d/oracleasm createdisk data1 /dev/sdb1
Marking disk "data1" as an ASM disk:                       [  OK  ]
# /etc/init.d/oracleasm createdisk data2 /dev/sdc1
Marking disk "data2" as an ASM disk:                       [  OK  ]
# /etc/init.d/oracleasm createdisk data3 /dev/sdd1
Marking disk "data3" as an ASM disk:                       [  OK  ]
# /etc/init.d/oracleasm createdisk data4 /dev/sde1
Marking disk "data4" as an ASM disk:                       [  OK  ]
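The four createdisk calls can likewise be driven from a small name:device map. A dry-run sketch (the device names are taken from the session above; it only prints the commands):

```shell
#!/bin/sh
# Dry-run sketch: generate the oracleasm createdisk commands from a
# name:device map (devices as used in the session above).
gen_createdisk() {
  for pair in data1:/dev/sdb1 data2:/dev/sdc1 data3:/dev/sdd1 data4:/dev/sde1; do
    echo "/etc/init.d/oracleasm createdisk ${pair%%:*} ${pair#*:}"
  done
}
gen_createdisk    # review the output, then run the commands as root
```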

Verify ASM disks
# /etc/init.d/oracleasm listdisks
DATA1
DATA2
DATA3
DATA4
# /etc/init.d/oracleasm querydisk -d data1
Disk "DATA1" is a valid ASM disk on device [8, 17]
# /etc/init.d/oracleasm querydisk -d data2
Disk "DATA2" is a valid ASM disk on device [8, 33]
# /etc/init.d/oracleasm querydisk -d data3
Disk "DATA3" is a valid ASM disk on device [8, 49]
# /etc/init.d/oracleasm querydisk -d data4
Disk "DATA4" is a valid ASM disk on device [8, 65]

Clone VM and attach ASM disks

D:\VM>VBoxManage clonehd  d:\VM\GNS12c\grac121\grac121.vdi   d:\VM\GNS12c\grac122\grac122.vdi
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Clone hard disk created in format 'VDI'. UUID: 5fb10575-2293-489b-b105-289d5d49ab18
D:\VM>VBoxManage storageattach grac122 --storagectl "SATA" --port 1  --device 0 --type hdd --medium C:\VM\GRAC12c\ASM\asm1_5G.vdi
D:\VM>VBoxManage storageattach grac122 --storagectl "SATA" --port 2  --device 0 --type hdd --medium C:\VM\GRAC12c\ASM\asm2_5G.vdi
D:\VM>VBoxManage storageattach grac122 --storagectl "SATA" --port 3  --device 0 --type hdd --medium C:\VM\GRAC12c\ASM\asm3_5G.vdi
D:\VM>VBoxManage storageattach grac122 --storagectl "SATA" --port 4  --device 0 --type hdd --medium C:\VM\GRAC12c\ASM\asm4_5G.vdi

Set up the 2nd node

Reboot grac122 and change its TCP/IP settings. Verify with ping and nslookup.

Run sshUserSetup.sh  on grac121:
./sshUserSetup.sh -user grid -hosts "grac121 grac122"  -noPromptPassphrase

Verify CRS prerequisites on both nodes using the newly created ASM disks and the asmadmin group 

./bin/cluvfy stage -pre crsinst -n grac121,grac122 -asm -asmdev /dev/oracleasm/disks/DATA1,\
/dev/oracleasm/disks/DATA2,/dev/oracleasm/disks/DATA3,/dev/oracleasm/disks/DATA4 \
-presence local -networks eth1:192.168.1.0:PUBLIC/eth2:192.168.2.0:cluster_interconnect

Potential Error:
ERROR:  /dev/oracleasm/disks/DATA4
grac122:Cannot verify the shared state for device /dev/sde1 due to Universally Unique Identifiers 
    (UUIDs) not being found, or different values being found, for this device across nodes:
    grac121,grac122
--> This error occurs because the test system uses VirtualBox and the partitions do not return a UUID. 
The installation can be continued ignoring this error. On a proper system where UUIDs are available, 
cluvfy reports success for these checks.
( See http://asanga-pradeep.blogspot.co.uk/2013/08/installing-12c-12101-rac-on-rhel-6-with.html )

Install 12.1 clusterware

$ cd grid
$ ls
install  response  rpm    runcluvfy.sh  runInstaller  sshsetup  stage  welcome.html
$ ./runInstaller 
-> Configure a standard cluster
-> Advanced Installation
   Cluster name : grac112
   Scan name    : grac112-scan.grid12c.example.com
   Scan port    : 1521
   -> Create New GNS
      GNS VIP address: 192.168.1.58
      GNS Sub domain : grid12c.example.com
  Public Hostname           Virtual Hostname 
  grac121.example.com        AUTO
  grac122.example.com        AUTO
-> Test and Setup SSH connectivity
-> Setup network Interfaces
   eth0: don't use
   eth1: PUBLIC
   eth2: Private Cluster_Interconnect
-> Configure GRID Infrastructure: YES
-> Use standard ASM for storage
-> ASM setup
   Diskgroup         : DATA
   Disk discovery path: /dev/oracleasm/disks/*

Run root.sh scripts on grac121:

# /u01/app/121/grid/root.sh
Performing root user operation for Oracle 12c 
The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/121/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]: 
The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n) 
[n]: y
   Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n) 
[n]: y
   Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n) 
[n]: y
   Copying coraenv to /usr/local/bin ...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/121/grid/crs/install/crsconfig_params
2013/08/25 14:56:52 CLSRSC-363: User ignored prerequisites during installation
OLR initialization - successful
  root wallet
  root wallet cert
  root cert export
  peer wallet
  profile reader wallet
  pa wallet
  peer wallet keys
  pa wallet keys
  peer cert request
  pa cert request
  peer cert
  pa cert
  peer root cert TP
  profile reader root cert TP
  pa root cert TP
  peer pa cert TP
  pa peer cert TP
  profile reader pa cert TP
  profile reader peer cert TP
  peer user cert
  pa user cert
2013/08/25 14:57:38 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.conf'
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'grac121'
CRS-2677: Stop of 'ora.drivers.acfs' on 'grac121' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'grac121'
CRS-2672: Attempting to start 'ora.mdnsd' on 'grac121'
CRS-2676: Start of 'ora.mdnsd' on 'grac121' succeeded
CRS-2676: Start of 'ora.evmd' on 'grac121' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'grac121'
CRS-2676: Start of 'ora.gpnpd' on 'grac121' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'grac121'
CRS-2672: Attempting to start 'ora.gipcd' on 'grac121'
CRS-2676: Start of 'ora.cssdmonitor' on 'grac121' succeeded
CRS-2676: Start of 'ora.gipcd' on 'grac121' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'grac121'
CRS-2672: Attempting to start 'ora.diskmon' on 'grac121'
CRS-2676: Start of 'ora.diskmon' on 'grac121' succeeded
CRS-2676: Start of 'ora.cssd' on 'grac121' succeeded
ASM created and started successfully.
Disk Group DATA created successfully.
CRS-2672: Attempting to start 'ora.crf' on 'grac121'
CRS-2672: Attempting to start 'ora.storage' on 'grac121'
CRS-2676: Start of 'ora.storage' on 'grac121' succeeded
CRS-2676: Start of 'ora.crf' on 'grac121' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'grac121'
CRS-2676: Start of 'ora.crsd' on 'grac121' succeeded
CRS-4256: Updating the profile
Successful addition of voting disk e158882a16cf4f44bfab3fac241e5152.
Successful addition of voting disk b93b579e97f24ff4bfb58e7a1d9e628b.
Successful addition of voting disk 2a29ac7797544f8cbfb6650ce7c287fe.
Successfully replaced voting disk group with +DATA.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   e158882a16cf4f44bfab3fac241e5152 (/dev/oracleasm/disks/DATA1) [DATA]
 2. ONLINE   b93b579e97f24ff4bfb58e7a1d9e628b (/dev/oracleasm/disks/DATA2) [DATA]
 3. ONLINE   2a29ac7797544f8cbfb6650ce7c287fe (/dev/oracleasm/disks/DATA3) [DATA]
Located 3 voting disk(s).
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'grac121'
CRS-2673: Attempting to stop 'ora.crsd' on 'grac121'
CRS-2677: Stop of 'ora.crsd' on 'grac121' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'grac121'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'grac121'
CRS-2673: Attempting to stop 'ora.ctssd' on 'grac121'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'grac121'
CRS-2677: Stop of 'ora.drivers.acfs' on 'grac121' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'grac121' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'grac121' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'grac121' succeeded
CRS-2673: Attempting to stop 'ora.evmd' on 'grac121'
CRS-2673: Attempting to stop 'ora.storage' on 'grac121'
CRS-2677: Stop of 'ora.storage' on 'grac121' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'grac121'
CRS-2677: Stop of 'ora.asm' on 'grac121' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'grac121'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'grac121' succeeded
CRS-2677: Stop of 'ora.evmd' on 'grac121' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'grac121'
CRS-2677: Stop of 'ora.cssd' on 'grac121' succeeded
CRS-2673: Attempting to stop 'ora.crf' on 'grac121'
CRS-2677: Stop of 'ora.crf' on 'grac121' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'grac121'
CRS-2677: Stop of 'ora.gipcd' on 'grac121' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'grac121' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'grac121'
CRS-2672: Attempting to start 'ora.evmd' on 'grac121'
CRS-2676: Start of 'ora.mdnsd' on 'grac121' succeeded
CRS-2676: Start of 'ora.evmd' on 'grac121' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'grac121'
CRS-2676: Start of 'ora.gpnpd' on 'grac121' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'grac121'
CRS-2676: Start of 'ora.gipcd' on 'grac121' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'grac121'
CRS-2676: Start of 'ora.cssdmonitor' on 'grac121' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'grac121'
CRS-2672: Attempting to start 'ora.diskmon' on 'grac121'
CRS-2676: Start of 'ora.diskmon' on 'grac121' succeeded
CRS-2789: Cannot stop resource 'ora.diskmon' as it is not running on server 'grac121'
CRS-2676: Start of 'ora.cssd' on 'grac121' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'grac121'
CRS-2672: Attempting to start 'ora.ctssd' on 'grac121'
CRS-2676: Start of 'ora.ctssd' on 'grac121' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'grac121' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'grac121'
CRS-2676: Start of 'ora.asm' on 'grac121' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'grac121'
CRS-2676: Start of 'ora.storage' on 'grac121' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'grac121'
CRS-2676: Start of 'ora.crf' on 'grac121' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'grac121'
CRS-2676: Start of 'ora.crsd' on 'grac121' succeeded
CRS-6023: Starting Oracle Cluster Ready Services-managed resources
CRS-6017: Processing resource auto-start for servers: grac121
CRS-6016: Resource auto-start has completed for server grac121
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2013/08/25 15:07:34 CLSRSC-343: Successfully started Oracle clusterware stack
CRS-2672: Attempting to start 'ora.asm' on 'grac121'
CRS-2676: Start of 'ora.asm' on 'grac121' succeeded
CRS-2672: Attempting to start 'ora.DATA.dg' on 'grac121'
CRS-2676: Start of 'ora.DATA.dg' on 'grac121' succeeded
2013/08/25 15:11:00 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

Run root.sh scripts on grac122

# /u01/app/121/grid/root.sh
Performing root user operation for Oracle 12c 
The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/121/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]: 
The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n) 
[n]: y
   Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n) 
[n]: y
   Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n) 
[n]: y
   Copying coraenv to /usr/local/bin ...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/121/grid/crs/install/crsconfig_params
2013/08/25 18:51:55 CLSRSC-363: User ignored prerequisites during installation
OLR initialization - successful
2013/08/25 18:52:18 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.conf'
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'grac122'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'grac122'
CRS-2677: Stop of 'ora.drivers.acfs' on 'grac122' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'grac122' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'grac122'
CRS-2672: Attempting to start 'ora.evmd' on 'grac122'
CRS-2676: Start of 'ora.evmd' on 'grac122' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'grac122' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'grac122'
CRS-2676: Start of 'ora.gpnpd' on 'grac122' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'grac122'
CRS-2676: Start of 'ora.gipcd' on 'grac122' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'grac122'
CRS-2676: Start of 'ora.cssdmonitor' on 'grac122' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'grac122'
CRS-2672: Attempting to start 'ora.diskmon' on 'grac122'
CRS-2676: Start of 'ora.diskmon' on 'grac122' succeeded
CRS-2789: Cannot stop resource 'ora.diskmon' as it is not running on server 'grac122'
CRS-2676: Start of 'ora.cssd' on 'grac122' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'grac122'
CRS-2672: Attempting to start 'ora.ctssd' on 'grac122'
CRS-2676: Start of 'ora.ctssd' on 'grac122' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'grac122' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'grac122'
CRS-2676: Start of 'ora.asm' on 'grac122' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'grac122'
CRS-2676: Start of 'ora.storage' on 'grac122' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'grac122'
CRS-2676: Start of 'ora.crf' on 'grac122' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'grac122'
CRS-2676: Start of 'ora.crsd' on 'grac122' succeeded
CRS-6017: Processing resource auto-start for servers: grac122
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'grac121'
CRS-2672: Attempting to start 'ora.ons' on 'grac122'
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'grac121' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'grac121'
CRS-2677: Stop of 'ora.scan1.vip' on 'grac121' succeeded
CRS-2672: Attempting to start 'ora.scan1.vip' on 'grac122'
CRS-2676: Start of 'ora.ons' on 'grac122' succeeded
CRS-2676: Start of 'ora.scan1.vip' on 'grac122' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on 'grac122'
CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'grac122' succeeded
CRS-6016: Resource auto-start has completed for server grac122
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2013/08/25 18:58:50 CLSRSC-343: Successfully started Oracle clusterware stack
2013/08/25 18:59:06 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

 

Verify CRS installation with a modified 'crsctl stat res -t' output

NAME                           TARGET     STATE           SERVER       STATE_DETAILS   
-------------------------      ---------- ----------      ------------ ------------------
                               Name       Target          State        Server State
ora.DATA.dg                    ONLINE     ONLINE          grac121      STABLE 
ora.DATA.dg                    ONLINE     ONLINE          grac122      STABLE 
ora.LISTENER.lsnr              ONLINE     ONLINE          grac121      STABLE 
ora.LISTENER.lsnr              ONLINE     ONLINE          grac122      STABLE 
ora.asm                        ONLINE     ONLINE          grac121      Started,STABLE 
ora.asm                        ONLINE     ONLINE          grac122      Started,STABLE 
ora.net1.network               ONLINE     ONLINE          grac121      STABLE 
ora.net1.network               ONLINE     ONLINE          grac122      STABLE 
ora.ons                        ONLINE     ONLINE          grac121      STABLE 
ora.ons                        ONLINE     ONLINE          grac122      STABLE 
ora.LISTENER_SCAN1.lsnr        ONLINE     ONLINE          grac122      STABLE 
ora.LISTENER_SCAN2.lsnr        ONLINE     ONLINE          grac121      STABLE 
ora.LISTENER_SCAN3.lsnr        ONLINE     ONLINE          grac121      STABLE 
ora.MGMTLSNR                   ONLINE     ONLINE          grac121      169.254.187.22 192.1
ora.cvu                        ONLINE     ONLINE          grac121      STABLE 
ora.gns                        ONLINE     ONLINE          grac121      STABLE 
ora.gns.vip                    ONLINE     ONLINE          grac121      STABLE 
ora.grac121.vip                ONLINE     ONLINE          grac121      STABLE 
ora.grac122.vip                ONLINE     ONLINE          grac122      STABLE 
ora.mgmtdb                     ONLINE     ONLINE          grac121      Open,STABLE 
ora.oc4j                       ONLINE     ONLINE          grac121      STABLE 
ora.scan1.vip                  ONLINE     ONLINE          grac122      STABLE 
ora.scan2.vip                  ONLINE     ONLINE          grac121      STABLE 
ora.scan3.vip                  ONLINE     ONLINE          grac121      STABLE

Verify CRS installation with cluvfy


$ ./bin/cluvfy stage -post crsinst -n grac121,grac122 
Performing post-checks for cluster services setup 
Checking node reachability...
Node reachability check passed from node "grac121"
Checking user equivalence...
User equivalence check passed for user "grid"
Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
Check: Node connectivity using interfaces on subnet "192.168.1.0"
Node connectivity passed for subnet "192.168.1.0" with node(s) grac122,grac121
TCP connectivity check passed for subnet "192.168.1.0"
Check: Node connectivity using interfaces on subnet "192.168.2.0"
Node connectivity passed for subnet "192.168.2.0" with node(s) grac121,grac122
TCP connectivity check passed for subnet "192.168.2.0"
Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.1.0".
Subnet mask consistency check passed for subnet "192.168.2.0".
Subnet mask consistency check passed.
Node connectivity check passed
Checking multicast communication...
Checking subnet "192.168.2.0" for multicast communication with multicast group "224.0.0.251"...
Check of subnet "192.168.2.0" for multicast communication with multicast group "224.0.0.251" passed.
Check of multicast communication passed.
Time zone consistency check passed
Checking Cluster manager integrity... 
Checking CSS daemon...
Oracle Cluster Synchronization Services appear to be online.
Cluster manager integrity check passed
UDev attributes check for OCR locations started...
UDev attributes check passed for OCR locations 
UDev attributes check for Voting Disk locations started...
UDev attributes check passed for Voting Disk locations 
Default user file creation mask check passed
Checking cluster integrity...
Cluster integrity check passed
Checking OCR integrity...
Checking the absence of a non-clustered configuration...
All nodes free of non-clustered, local-only configurations
Checking OCR config file "/etc/oracle/ocr.loc"...
OCR config file "/etc/oracle/ocr.loc" check successful
Disk group for ocr location "+DATA" is available on all the nodes
NOTE: 
This check does not verify the integrity of the OCR contents. Execute 'ocrcheck' as a privileged user to verify the contents of OCR.
OCR integrity check passed
Checking CRS integrity...
Clusterware version consistency passed.
CRS integrity check passed
Checking node application existence...
Checking existence of VIP node application (required)
VIP node application check passed
Checking existence of NETWORK node application (required)
NETWORK node application check passed
Checking existence of ONS node application (optional)
ONS node application check passed
Checking Single Client Access Name (SCAN)...
Checking TCP connectivity to SCAN Listeners...
TCP connectivity to SCAN Listeners exists on all cluster nodes
Checking name resolution setup for "grac112-scan.grid12c.example.com"...
Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...
All nodes have same "hosts" entry defined in file "/etc/nsswitch.conf"
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed
Checking SCAN IP addresses...
Check of SCAN IP addresses passed
Verification of SCAN VIP and Listener setup passed
Checking OLR integrity...
Check of existence of OLR configuration file "/etc/oracle/olr.loc" passed
Check of attributes of OLR configuration file "/etc/oracle/olr.loc" passed
WARNING: 
This check does not verify the integrity of the OLR contents. Execute 'ocrcheck -local' as a privileged user to verify the contents of OLR.
OLR integrity check passed
Checking GNS integrity...
The GNS subdomain name "grid12c.example.com" is a valid domain name
Checking if the GNS VIP belongs to same subnet as the public network...
Public network subnets "192.168.1.0, 192.168.1.0, 192.168.1.0, 192.168.1.0, 192.168.1.0" match with the GNS VIP "192.168.1.0, 192.168.1.0, 192.168.1.0, 192.168.1.0, 192.168.1.0"
GNS VIP "192.168.1.58" resolves to a valid IP address
GNS resolved IP addresses are reachable
GNS resolved IP addresses are reachable
GNS resolved IP addresses are reachable
GNS resource configuration check passed
GNS VIP resource configuration check passed.
GNS integrity check passed
Checking Oracle Cluster Voting Disk configuration...
Oracle Cluster Voting Disk configuration check passed
User "grid" is not part of "root" group. Check passed
Checking if Clusterware is installed on all nodes...
Check of Clusterware install passed
Checking if CTSS Resource is running on all nodes...
CTSS resource check passed
Querying CTSS for time offset on all nodes...
Query of CTSS for time offset passed
Check CTSS state started...
CTSS is in Observer state. Switching over to clock synchronization checks using NTP
Starting Clock synchronization checks using Network Time Protocol(NTP)...
NTP Configuration file check started...
NTP Configuration file check passed
Checking daemon liveness...
Liveness check passed for "ntpd"
Check for NTP daemon or service alive passed on all nodes
NTP common Time Server Check started...
Check of common NTP Time Server passed
Clock time offset check from NTP Time Server started...
Clock time offset check passed
Clock synchronization check using Network Time Protocol(NTP) passed
Oracle Cluster Time Synchronization Services check passed
Checking VIP configuration.
Checking VIP Subnet configuration.
Check for VIP Subnet configuration passed.
Checking VIP reachability
Check for VIP reachability passed.
Post-check for cluster services setup was successful.

 

RDBMS install

Verify pre RDBMS install with cluvfy

$ ./bin/cluvfy stage -pre dbcfg -n grac121,grac122 -d /u01/app/oracle/product/121/racdb -verbose -fixup
In this case cluvfy builds a fixup script to create the oper group - run it on both nodes:
# /tmp/CVU_12.1.0.1.0_oracle/runfixup.sh
Solve all errors until cluvfy reports : Pre-check for database configuration was successful.

Run Installer from Database media and run related root.sh scripts

$ cd /KITS/ORACLE/121/database
$ ./runInstaller  
   server class
    Oracle Real application cluster installation
     Test/Create SSH connectivity
      Advanced Install 
        Enterprise Edition
         Global Database name : crac12             
          OSDBA  group : dba
          OSOPER group : oper 
Run root.sh on grac121 and grac122

 Verify RDBMS installation with: $GRID_HOME/bin/crsctl stat res -t

$ my_crs_stat
NAME                           TARGET     STATE           SERVER       STATE_DETAILS   
-------------------------      ---------- ----------      ------------ ------------------
                               Name       Target          State        Server State
ora.DATA.dg                    ONLINE     ONLINE          grac121      STABLE 
ora.DATA.dg                    ONLINE     ONLINE          grac122      STABLE 
ora.LISTENER.lsnr              ONLINE     ONLINE          grac121      STABLE 
ora.LISTENER.lsnr              ONLINE     ONLINE          grac122      STABLE 
ora.asm                        ONLINE     ONLINE          grac121      Started,STABLE 
ora.asm                        ONLINE     ONLINE          grac122      Started,STABLE 
ora.net1.network               ONLINE     ONLINE          grac121      STABLE 
ora.net1.network               ONLINE     ONLINE          grac122      STABLE 
ora.ons                        ONLINE     ONLINE          grac121      STABLE 
ora.ons                        ONLINE     ONLINE          grac122      STABLE 
ora.LISTENER_SCAN1.lsnr        ONLINE     ONLINE          grac122      STABLE 
ora.LISTENER_SCAN2.lsnr        ONLINE     ONLINE          grac121      STABLE 
ora.LISTENER_SCAN3.lsnr        ONLINE     ONLINE          grac121      STABLE 
ora.MGMTLSNR                   ONLINE     ONLINE          grac121      169.254.187.22 192.1
ora.crac12.db                  ONLINE     ONLINE          grac121      Open,STABLE 
ora.crac12.db                  ONLINE     ONLINE          grac122      Open,STABLE 
ora.cvu                        ONLINE     ONLINE          grac121      STABLE 
ora.gns                        ONLINE     ONLINE          grac121      STABLE 
ora.gns.vip                    ONLINE     ONLINE          grac121      STABLE 
ora.grac121.vip                ONLINE     ONLINE          grac121      STABLE 
ora.grac122.vip                ONLINE     ONLINE          grac122      STABLE 
ora.mgmtdb                     ONLINE     ONLINE          grac121      Open,STABLE 
ora.oc4j                       ONLINE     ONLINE          grac121      STABLE 
ora.scan1.vip                  ONLINE     ONLINE          grac122      STABLE 
ora.scan2.vip                  ONLINE     ONLINE          grac121      STABLE 
ora.scan3.vip                  ONLINE     ONLINE          grac121      STABLE

 

Verify database status with srvctl, olsnodes

$ srvctl config database -d crac12
Database unique name: crac12
Database name: crac12
Oracle home: /u01/app/oracle/product/121/rac121
Oracle user: oracle
Spfile: +DATA/crac12/spfilecrac12.ora
Password file: +DATA/crac12/orapwcrac12
Domain: 
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: crac12
Database instances: crac121,crac122
Disk Groups: DATA
Mount point paths: 
Services: 
Type: RAC
Start concurrency: 
Stop concurrency: 
Database is administrator managed
$  srvctl status  database -d crac12
Instance crac121 is running on node grac121
Instance crac122 is running on node grac122
$ sqlplus / as sysdba
SQL> select * from v$active_instances
INST_NUMBER INST_NAME                   CON_ID
----------- ------------------------------ ----------
      1 grac121.example.com:crac121         0
      2 grac122.example.com:crac122         0

Print node number with the node name
$ olsnodes -n -l
grac121    1
Print private interconnect address for the local node
$ olsnodes -p -l
grac121    192.168.2.81
Print virtual IP address with the node name
$ olsnodes -i -l
grac121    192.168.1.147
Print above info via a single command
$  olsnodes -n -p -i -l
grac121    1    192.168.2.81    192.168.1.147

Verify GNS/SCAN settings:
$ $GRID_HOME/bin/srvctl config gns -list
Oracle-GNS A 192.168.1.58 Unique Flags: 0x15
grac112-scan A 192.168.1.148 Unique Flags: 0x81
grac112-scan A 192.168.1.149 Unique Flags: 0x81
grac112-scan A 192.168.1.150 Unique Flags: 0x81
grac112-scan1-vip A 192.168.1.148 Unique Flags: 0x81
grac112-scan2-vip A 192.168.1.149 Unique Flags: 0x81
grac112-scan3-vip A 192.168.1.150 Unique Flags: 0x81
grac112.Oracle-GNS SRV Target: Oracle-GNS Protocol: tcp Port: 22526 Weight: 0 Priority: 0 Flags: 0x15
grac112.Oracle-GNS TXT CLUSTER_NAME="grac112", CLUSTER_GUID="191a52ec780d5f30bf460333c96cb46e", NODE_ADDRESS="192.168.1.58", SERVER_STATE="RUNNING", VERSION="12.1.0.1.0", DOMAIN="grid12c.example.com" Flags: 0x15
grac121-vip A 192.168.1.147 Unique Flags: 0x81
grac122-vip A 192.168.1.152 Unique Flags: 0x81

$  $GRID_HOME/bin/srvctl config gns  -subdomain
Domain served by GNS: grid12c.example.com

$  $GRID_HOME/bin/srvctl config scan
SCAN name: grac112-scan.grid12c.example.com, Network: 1
Subnet IPv4: 192.168.1.0/255.255.255.0/eth1
Subnet IPv6: 
SCAN 0 IPv4 VIP: -/scan1-vip/192.168.1.148
SCAN name: grac112-scan.grid12c.example.com, Network: 1
Subnet IPv4: 192.168.1.0/255.255.255.0/eth1
Subnet IPv6: 
SCAN 1 IPv4 VIP: -/scan2-vip/192.168.1.149
SCAN name: grac112-scan.grid12c.example.com, Network: 1
Subnet IPv4: 192.168.1.0/255.255.255.0/eth1
Subnet IPv6: 
SCAN 2 IPv4 VIP: -/scan3-vip/192.168.1.150

 

Reference:

  • http://www.oracle-base.com/articles/12c/oracle-db-12cr1-rac-installation-on-oracle-linux-6-using-virtualbox.php#install_db_software

 

Install Oracle RAC 11.2.0.4, OEL 6.4 and VirtualBox 4.2 with GNS and UDEV

Network/DNS setup

Virtualbox Device Configuration 
eth0 - NAT                : Used for VPN connection to company network/local router ( DHCP )   
eth1 - Host-Only Adapter  : public  ( grac1: 192.168.1.101,  grac2: 192.168.1.102, grac3: 192.168.1.103, ..)
eth2 - Internal           : private cluster interconnect ( grac1int: 192.168.2.101, grac2int: 192.168.2.102, grac3int: 192.168.2.103, .. ) 

Modify eth0 device using network manager  ( see   /etc/sysconfig/network-scripts/ifcfg-eth0 )
Go to IPv4 settings -> change Method to: Automatic (DHCP) addresses only ( now we can modify Nameservers/Search ):
  Nameservers: 192.168.1.50
  Search:      example.com,grid.example.com,de.oracle.com   

Restart the network service
$ service network restart
After network restart  /etc/resolv.conf should look like:
# Generated by NetworkManager
search example.com grid.example.com de.oracle.com
nameserver 192.168.1.50

Add the Corporate Nameservers  as forwarders in our DNS 
 /etc/named.conf :
   forwarders { 192.135.82.44; 10.165.246.33; } ;
Verify that ping works from our DNS nameserver to the corporate DNS name servers:
$ ping 192.135.82.44
$ ping 10.165.246.33
Details: 
Nameserver settings:
   192.135.82.44    : Corporate name server I
   10.165.246.33    : Corporate name server II   
   192.168.1.50     : DNS name server used for GNS delegation ( GNS NS: 192.168.1.55 )

Prepare your DNS server for zone delegation to the GNS name server 
/etc/named.conf 
zone  "example.com" IN {
      type master;
       notify no;
       file "example.com.db";
};

/var/named/example.com.db
$ORIGIN grid4.example.com.
@       IN          NS        gns4.grid4.example.com. ; NS  grid.example.com
        IN          NS        ns1.example.com.      ; NS example.com
gns4    IN          A         192.168.1.59 ; glue record

After the above setup, the network devices should look like:
# ifconfig | egrep 'HWaddr|Bcast'
eth0      Link encap:Ethernet  HWaddr 08:00:27:A8:27:BD  
          inet addr:10.0.2.15  Bcast:10.0.2.255  Mask:255.255.255.0
eth1      Link encap:Ethernet  HWaddr 08:00:27:1E:7D:B0  
          inet addr:192.168.1.101  Bcast:192.168.1.255  Mask:255.255.255.0
eth2      Link encap:Ethernet  HWaddr 08:00:27:97:59:C3  
          inet addr:192.168.2.101  Bcast:192.168.2.255  Mask:255.255.255.0

Check local DNS resolution
# nslookup grac41
Name:    grac41.example.com
Address: 192.168.1.101
# nslookup 192.168.1.101
101.1.168.192.in-addr.arpa    name = grac41.example.com.
# nslookup grac41int.example.com
Name:    grac41int.example.com
Address: 192.168.2.101
# nslookup 192.168.2.101
Server:        192.168.1.50
Address:    192.168.1.50#53
101.2.168.192.in-addr.arpa    name = grac41int.example.com

Check corporate DNS resolution
# nslookup  supsunhh3
Non-authoritative answer:
Name:    supsunhh3.de.oracle.com
Address: xxxxxxx

Configure your network name by modifying /etc/sysconfig/network:
NETWORKING=yes
HOSTNAME=grac41.example.com
NTP Setup - Clients: grac41.example.com, grac42.example.com,  ...
 # cat /etc/ntp.conf
 restrict default nomodify notrap noquery
 restrict 127.0.0.1
 # -- CLIENT NETWORK -------
 # --- OUR TIMESERVERS -----
 # 192.168.1.2 is the address for my timeserver,
 # use the address of your own, instead:
 server 192.168.1.50
 server  127.127.1.0
 # --- NTP MULTICASTCLIENT ---
 # --- GENERAL CONFIGURATION ---
 # Undisciplined Local Clock.
 fudge   127.127.1.0 stratum 12
 # Drift file.
 driftfile /var/lib/ntp/drift
 broadcastdelay  0.008
 # Keys file.
 keys /etc/ntp/keys

# ntpq -p
 remote           refid      st t when poll reach   delay   offset  jitter
 ==============================================================================
 gns.example.com LOCAL(0)        10 u   22   64    1    2.065  -11.015   0.000
 LOCAL(0)        .LOCL.          12 l   21   64    1    0.000    0.000   0.000
 Verify setup with cluvfy :

Add to  /etc/rc.local
#
service ntpd stop
ntpdate -u 192.168.1.50 
service ntpd start

Account setup

Check the user setup for users oracle and grid ( note: the oracle user should belong to asmdba )
$ id
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),500(vboxsf),506(asmdba),54322(dba) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
$ id
uid=501(grid) gid=54321(oinstall) groups=54321(oinstall),500(vboxsf),504(asmadmin),506(asmdba),507(asmoper),54322(dba) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023

Create directories:
To create the Oracle Inventory directory, enter the following commands as the root user:
  # mkdir -p /u01/app/oraInventory
  # chown -R grid:oinstall /u01/app/oraInventory
Creating the Oracle Grid Infrastructure Home Directory
To create the Grid Infrastructure home directory, enter the following commands as the root user:
  # mkdir -p /u01/app/grid
  # chown -R grid:oinstall /u01/app/grid
  # chmod -R 775 /u01/app/grid
  # mkdir -p /u01/app/11204/grid
  # chown -R grid:oinstall /u01/app/11204/grid
  # chmod -R 775 /u01/app/11204/grid
Creating the Oracle Base Directory
  To create the Oracle Base directory, enter the following commands as the root user:
  # mkdir -p /u01/app/oracle
  # chown -R oracle:oinstall /u01/app/oracle
  # chmod -R 775 /u01/app/oracle
Creating the Oracle RDBMS Home Directory
  To create the Oracle RDBMS Home directory, enter the following commands as the root user:
  # mkdir -p /u01/app/oracle/product/11204/racdb
  # chown -R oracle:oinstall /u01/app/oracle/product/11204/racdb
  # chmod -R 775 /u01/app/oracle/product/11204/racdb
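The mkdir/chown/chmod sequences above can be wrapped in one small function. This is a sketch, not part of the official install steps: the `create_layout` name and its base-directory argument are made up here, and the chown calls are skipped when the grid/oracle users do not exist, so the layout can be rehearsed in a scratch directory first.

```shell
#!/bin/sh
# Sketch: build the full directory tree from the steps above in one go.
# create_layout takes the base directory (defaults to /u01).
create_layout() {
  base="${1:-/u01}"

  # Oracle Inventory: owned by grid, no chmod in the original steps.
  mkdir -p "$base/app/oraInventory"
  id grid >/dev/null 2>&1 && chown -R grid:oinstall "$base/app/oraInventory"

  # Remaining homes: "path:owner" pairs matching the commands above.
  for spec in \
      "$base/app/grid:grid" \
      "$base/app/11204/grid:grid" \
      "$base/app/oracle:oracle" \
      "$base/app/oracle/product/11204/racdb:oracle"; do
    dir="${spec%:*}"; owner="${spec##*:}"
    mkdir -p "$dir"
    # Only chown when the owner actually exists on this system.
    id "$owner" >/dev/null 2>&1 && chown -R "$owner:oinstall" "$dir"
    chmod -R 775 "$dir"
  done
}

# As root on the real host:
#   create_layout /u01
```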

Cluvfy commands to run after the VirtualBox installation to check our master VM before CRS installation/cloning

Post-check for hardware and OS:
$ ./bin/cluvfy stage -post hwos -n grac41  -verbose

Pre-check for CRS installation:
$ ./bin/cluvfy comp sys -p crs -n grac41 -verbose

Check GNS ( Note 192.168.1.59 is the IP address of our GNS name server )
$./bin/cluvfy comp gns -precrsinst -domain grid.example.com -vip 192.168.1.59 -verbose -n grac41

Create ASM disks

VBoxManage createhd --filename M:\VM\GRAC_OEL64_11204\asm1_10G.vdi --size 10240 --format VDI --variant Fixed
VBoxManage createhd --filename M:\VM\GRAC_OEL64_11204\asm2_10G.vdi --size 10240 --format VDI --variant Fixed
VBoxManage createhd --filename M:\VM\GRAC_OEL64_11204\asm3_10G.vdi --size 10240 --format VDI --variant Fixed
VBoxManage createhd --filename M:\VM\GRAC_OEL64_11204\asm4_10G.vdi --size 10240 --format VDI --variant Fixed

VBoxManage storageattach grac41 --storagectl "SATA" --port 1  --device 0 --type hdd --medium M:\VM\GRAC_OEL64_11204\asm1_10G.vdi
VBoxManage storageattach grac41 --storagectl "SATA" --port 2  --device 0 --type hdd --medium M:\VM\GRAC_OEL64_11204\asm2_10G.vdi
VBoxManage storageattach grac41 --storagectl "SATA" --port 3  --device 0 --type hdd --medium M:\VM\GRAC_OEL64_11204\asm3_10G.vdi
VBoxManage storageattach grac41 --storagectl "SATA" --port 4  --device 0 --type hdd --medium M:\VM\GRAC_OEL64_11204\asm4_10G.vdi

VBoxManage modifyhd  M:\VM\GRAC_OEL64_11204\asm1_10G.vdi --type shareable
VBoxManage modifyhd  M:\VM\GRAC_OEL64_11204\asm2_10G.vdi --type shareable
VBoxManage modifyhd  M:\VM\GRAC_OEL64_11204\asm3_10G.vdi --type shareable
VBoxManage modifyhd  M:\VM\GRAC_OEL64_11204\asm4_10G.vdi --type shareable
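The twelve VBoxManage commands above differ only in the disk index, so they can be generated by a loop. The sketch below (the `gen_disk_cmds` name is ours) only prints the commands as a dry run; the VM name, path and controller are taken from the example, and you would pipe the output to a shell (or drop the echo) to run them for real:

```shell
# Dry run: print the createhd / storageattach / modifyhd triple for each
# of the four shared ASM disks instead of typing them out.
gen_disk_cmds() {
  VM=grac41
  DIR='M:\VM\GRAC_OEL64_11204'
  for i in 1 2 3 4; do
    DISK="$DIR\\asm${i}_10G.vdi"
    echo "VBoxManage createhd --filename $DISK --size 10240 --format VDI --variant Fixed"
    echo "VBoxManage storageattach $VM --storagectl \"SATA\" --port $i --device 0 --type hdd --medium $DISK"
    echo "VBoxManage modifyhd $DISK --type shareable"
  done
}

gen_disk_cmds
```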

Format disks ( sample for /dev/sdf )

# fdisk /dev/sdf
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0x2a0f0902.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help): p

Disk /dev/sdf: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x2a0f0902

   Device Boot      Start         End      Blocks   Id  System

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-261, default 1): 
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-261, default 261): 
Using default value 261

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
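The interactive fdisk session above boils down to six keystroke lines, which can be fed to fdisk non-interactively when the same single-partition layout is needed on several disks. A sketch (the `fdisk_keys` helper name is ours):

```shell
# The fdisk dialog above as keystrokes: n (new partition), p (primary),
# 1 (partition number), two empty lines (accept default first/last
# cylinder), w (write table and exit).
fdisk_keys() {
  printf 'n\np\n1\n\n\nw\n'
}

# Feed them to fdisk non-interactively (destructive -- double-check the
# device name first):
#   fdisk_keys | fdisk /dev/sdf
```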

Configure udev rules for ASM disks

# cat  /etc/udev/rules.d/99-oracle-asmdevices.rules
KERNEL=="sdb1", NAME="asmdisk1_udev_sdb1", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sdc1", NAME="asmdisk2_udev_sdc1", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sdd1", NAME="asmdisk3_udev_sdd1", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sde1", NAME="asmdisk4_udev_sde1", OWNER="grid", GROUP="asmadmin", MODE="0660"
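Since the four rules differ only in the device name and disk index, the rules file can be generated rather than typed. This is a sketch (the `gen_asm_rules` name is ours, and the sdb..sde-to-index mapping is hard-coded to match the file above); review the output, then redirect it into /etc/udev/rules.d/99-oracle-asmdevices.rules as root:

```shell
# Generate the content of 99-oracle-asmdevices.rules shown above:
# one rule per device, numbering asmdisk1..asmdisk4 across sdb..sde.
gen_asm_rules() {
  i=1
  for dev in sdb sdc sdd sde; do
    printf 'KERNEL=="%s1", NAME="asmdisk%d_udev_%s1", OWNER="grid", GROUP="asmadmin", MODE="0660"\n' \
      "$dev" "$i" "$dev"
    i=$((i + 1))
  done
}

gen_asm_rules
```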

Reload and Restart the udev rules:
# udevadm control --reload-rules
# start_udev
Starting udev:                                             [  OK  ]
Verify disk ownership and permissions:
# ls -ltr /dev/asm*
brw-rw----. 1 grid asmadmin 8, 33 Sep 11 18:24 /dev/asmdisk2_udev_sdc1
brw-rw----. 1 grid asmadmin 8, 65 Sep 11 18:24 /dev/asmdisk4_udev_sde1
brw-rw----. 1 grid asmadmin 8, 49 Sep 11 18:24 /dev/asmdisk3_udev_sdd1
brw-rw----. 1 grid asmadmin 8, 17 Sep 11 18:24 /dev/asmdisk1_udev_sdb1

Cluvfy command to run on our master VM after adding the shared devices for ASM

$ ./bin/cluvfy stage -pre crsinst -asm -presence local -asmgrp asmadmin 
  -asmdev /dev/asmdisk1_udev_sdb1,/dev/asmdisk2_udev_sdc1,/dev/asmdisk3_udev_sdd1,/dev/asmdisk4_udev_sde1  -n grac41

Configure 2nd system: add udev rules and attach shared devices

VBoxManage storageattach grac42 --storagectl "SATA" --port 1  --device 0 --type hdd --medium M:\VM\GRAC_OEL64_11204\asm1_10G.vdi
VBoxManage storageattach grac42 --storagectl "SATA" --port 2  --device 0 --type hdd --medium M:\VM\GRAC_OEL64_11204\asm2_10G.vdi
VBoxManage storageattach grac42 --storagectl "SATA" --port 3  --device 0 --type hdd --medium M:\VM\GRAC_OEL64_11204\asm3_10G.vdi
VBoxManage storageattach grac42 --storagectl "SATA" --port 4  --device 0 --type hdd --medium M:\VM\GRAC_OEL64_11204\asm4_10G.vdi

Run cluvfy with ASM disk info and network info using both RAC members : grac41 and grac42

$ ./bin/cluvfy stage -pre crsinst -asm -presence local -asmgrp asmadmin -asmdev /dev/asmdisk1_udev_sdb1,/dev/asmdisk2_udev_sdc1,/dev/asmdisk3_udev_sdd1,/dev/asmdisk4_udev_sde1  -networks eth1:192.168.1.0:PUBLIC/eth2:192.168.2.0:cluster_interconnect -n grac41,grac42
Performing pre-checks for cluster services setup
Checking node reachability...
Node reachability check passed from node "grac41"
Checking user equivalence...
User equivalence check passed for user "grid"
Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
Check: Node connectivity using interfaces on subnet "192.168.2.0"
Node connectivity passed for subnet "192.168.2.0" with node(s) grac41,grac42
TCP connectivity check passed for subnet "192.168.2.0"
Check: Node connectivity using interfaces on subnet "192.168.1.0"
Node connectivity passed for subnet "192.168.1.0" with node(s) grac41,grac42
TCP connectivity check passed for subnet "192.168.1.0"
Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.1.0".
Subnet mask consistency check passed for subnet "192.168.2.0".
Subnet mask consistency check passed.
Node connectivity check passed
Checking multicast communication...
Checking subnet "192.168.2.0" for multicast communication with multicast group "224.0.0.251"...
Check of subnet "192.168.2.0" for multicast communication with multicast group "224.0.0.251" passed.
Check of multicast communication passed.
Checking ASMLib configuration.
Check for ASMLib configuration passed.
Total memory check passed
Available memory check passed
Swap space check passed
Free disk space check passed for "grac42:/usr,grac42:/var,grac42:/etc,grac42:/sbin,grac42:/tmp"
Free disk space check passed for "grac41:/usr,grac41:/var,grac41:/etc,grac41:/sbin,grac41:/tmp"
Check for multiple users with UID value 501 passed
User existence check passed for "grid"
Group existence check passed for "oinstall"
Group existence check passed for "dba"
Group existence check passed for "asmadmin"
Membership check for user "grid" in group "oinstall" [as Primary] passed
Membership check for user "grid" in group "dba" passed
Membership check for user "grid" in group "asmadmin" passed
Run level check passed
Hard limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
System architecture check passed
Kernel version check passed
Kernel parameter check passed for "semmsl"
..
Package existence check passed for "nfs-utils"
Checking availability of ports "23792,23791" required for component "Oracle Remote Method Invocation (ORMI)"
Port availability check passed for ports "23792,23791"
Checking availability of ports "6200,6100" required for component "Oracle Notification Service (ONS)"
Port availability check passed for ports "6200,6100"
Checking availability of ports "2016" required for component "Oracle Notification Service (ONS) Enterprise Manager support"
Port availability check passed for ports "2016"
Checking availability of ports "1521" required for component "Oracle Database Listener"
Port availability check passed for ports "1521"
Checking availability of ports "8888" required for component "Oracle Containers for J2EE (OC4J)"
Port availability check passed for ports "8888"
Check for multiple users with UID value 0 passed
Current group ID check passed
Starting check for consistency of primary group of root user
Check for consistency of root user's primary group passed
Package existence check passed for "cvuqdisk"
Checking Devices for ASM...
Checking for shared devices...
Device                                Device Type
------------------------------------  ------------------------
/dev/asmdisk4_udev_sde1               Disk
/dev/asmdisk2_udev_sdc1               Disk
/dev/asmdisk3_udev_sdd1               Disk
/dev/asmdisk1_udev_sdb1               Disk
Checking consistency of device owner across all nodes...
Consistency check of device owner for "/dev/asmdisk2_udev_sdc1" PASSED
Consistency check of device owner for "/dev/asmdisk4_udev_sde1" PASSED
Consistency check of device owner for "/dev/asmdisk3_udev_sdd1" PASSED
Consistency check of device owner for "/dev/asmdisk1_udev_sdb1" PASSED
Checking consistency of device group across all nodes...
Consistency check of device group for "/dev/asmdisk2_udev_sdc1" PASSED
Consistency check of device group for "/dev/asmdisk4_udev_sde1" PASSED
Consistency check of device group for "/dev/asmdisk3_udev_sdd1" PASSED
Consistency check of device group for "/dev/asmdisk1_udev_sdb1" PASSED
Checking consistency of device permissions across all nodes...
Consistency check of device permissions for "/dev/asmdisk2_udev_sdc1" PASSED
Consistency check of device permissions for "/dev/asmdisk4_udev_sde1" PASSED
Consistency check of device permissions for "/dev/asmdisk3_udev_sdd1" PASSED
Consistency check of device permissions for "/dev/asmdisk1_udev_sdb1" PASSED
Checking consistency of device size across all nodes...
Consistency check of device size for "/dev/asmdisk2_udev_sdc1" PASSED
Consistency check of device size for "/dev/asmdisk4_udev_sde1" PASSED
Consistency check of device size for "/dev/asmdisk3_udev_sdd1" PASSED
Consistency check of device size for "/dev/asmdisk1_udev_sdb1" PASSED
UDev attributes check for ASM Disks started...
Checking udev settings for device "/dev/asmdisk1_udev_sdb1"
Checking udev settings for device "/dev/asmdisk2_udev_sdc1"
Checking udev settings for device "/dev/asmdisk3_udev_sdd1"
Checking udev settings for device "/dev/asmdisk4_udev_sde1"
UDev attributes check passed for ASM Disks
Devices check for ASM passed
Starting Clock synchronization checks using Network Time Protocol(NTP)...
NTP Configuration file check started...
No NTP Daemons or Services were found to be running
Clock synchronization check using Network Time Protocol(NTP) passed
Core file name pattern consistency check passed.
User "grid" is not part of "root" group. Check passed
Default user file creation mask check passed
Checking integrity of file "/etc/resolv.conf" across nodes
"domain" and "search" entries do not coexist in any  "/etc/resolv.conf" file
All nodes have same "search" order defined in file "/etc/resolv.conf"
The DNS response time for an unreachable node is within acceptable limit on all nodes
Check for integrity of file "/etc/resolv.conf" passed
Time zone consistency check passed
Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...
All nodes have same "hosts" entry defined in file "/etc/nsswitch.conf"
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed
Checking daemon "avahi-daemon" is not configured and running
Daemon not configured check passed for process "avahi-daemon"
Daemon not running check passed for process "avahi-daemon"
Starting check for Reverse path filter setting ...
Check for Reverse path filter setting passed
Starting check for /dev/shm mounted as temporary file system ...
Check for /dev/shm mounted as temporary file system passed
Pre-check for cluster services setup was successful.

Run installer from GRID installation media

$ ./runInstaller
From OUI log: /tmp/OraInstall2013-09-12_10-45-12AM/installActions2013-09-12_10-45-12AM.log
--------------------------------------------------------------------------------
Global Settings
--------------------------------------------------------------------------------
- Disk Space : required 5.5 GB available 28.16 GB
- Install Option : Install and Configure Oracle Grid Infrastructure for a Cluster
- Oracle base for Oracle Grid Infrastructure : /u01/app/grid
- Grid home : /u01/app/11204/grid
- Source Location : /media/sf_mykits/Oracle/11.2.0.4/grid/grid/install/../stage/products.xml
- Privileged Operating System Groups : asmdba (OSDBA), asmoper (OSOPER), asmadmin (OSASM)
--------------------------------------------------------------------------------
Inventory information
--------------------------------------------------------------------------------
- Inventory location : /u01/app/oraInventory
- Central inventory (oraInventory) group : oinstall
--------------------------------------------------------------------------------
Grid Infrastructure Settings
--------------------------------------------------------------------------------
- Cluster Name : grac4
- Local Node : grac41
- Remote Nodes : grac42
- GNS Subdomain : grac.example.com
- GNS VIP Address : 192.168.1.59
- Single Client Access Name (SCAN) : grac4-scan.grid4.example.com
- SCAN Port : 1521
- Public Interfaces : eth1
- Private Interfaces : eth2
--------------------------------------------------------------------------------
Storage Information
--------------------------------------------------------------------------------
- Storage Type : Oracle ASM
- ASM Disk Group : DATA
- Storage Redundancy : NORMAL
- Disks Selected : /dev/asmdisk1_udev_sdb1,/dev/asmdisk2_udev_sdc1,/dev/asmdisk3_udev_sdd1,/dev/asmdisk4_udev_sde1

Run root.sh scripts on grac41:

# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.

# /u01/app/11204/grid/root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME=  /u01/app/11204/grid
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11204/grid/crs/install/crsconfig_params
Creating trace directory
Installing Trace File Analyzer
OLR initialization - successful
root wallet
root wallet cert
root cert export
peer wallet
profile reader wallet
pa wallet
peer wallet keys
pa wallet keys
peer cert request
pa cert request
peer cert
pa cert
peer root cert TP
profile reader root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa cert TP
profile reader peer cert TP
peer user cert
pa user cert
Adding Clusterware entries to upstart
CRS-2672: Attempting to start 'ora.mdnsd' on 'grac41'
CRS-2676: Start of 'ora.mdnsd' on 'grac41' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'grac41'
CRS-2676: Start of 'ora.gpnpd' on 'grac41' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'grac41'
CRS-2672: Attempting to start 'ora.gipcd' on 'grac41'
CRS-2676: Start of 'ora.cssdmonitor' on 'grac41' succeeded
CRS-2676: Start of 'ora.gipcd' on 'grac41' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'grac41'
CRS-2672: Attempting to start 'ora.diskmon' on 'grac41'
CRS-2676: Start of 'ora.diskmon' on 'grac41' succeeded
CRS-2676: Start of 'ora.cssd' on 'grac41' succeeded
ASM created and started successfully.
Disk Group DATA created successfully.
clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4256: Updating the profile
Successful addition of voting disk 10c81d1ce5a14fb6bf35cbb22fff3ebf.
Successful addition of voting disk 98010612be6b4fc9bf3bc1b186d8758d.
Successful addition of voting disk 9688bec3914d4f70bfc959664ddd8584.
Successfully replaced voting disk group with +DATA.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
1. ONLINE   10c81d1ce5a14fb6bf35cbb22fff3ebf (/dev/asmdisk1_udev_sdb1) [DATA]
2. ONLINE   98010612be6b4fc9bf3bc1b186d8758d (/dev/asmdisk2_udev_sdc1) [DATA]
3. ONLINE   9688bec3914d4f70bfc959664ddd8584 (/dev/asmdisk3_udev_sdd1) [DATA]
Located 3 voting disk(s).
CRS-2672: Attempting to start 'ora.asm' on 'grac41'
CRS-2676: Start of 'ora.asm' on 'grac41' succeeded
CRS-2672: Attempting to start 'ora.DATA.dg' on 'grac41'
CRS-2676: Start of 'ora.DATA.dg' on 'grac41' succeeded
Configure Oracle Grid Infrastructure for a Cluster ... succeeded

Run root.sh scripts on grac42:

 # /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.

# /u01/app/11204/grid/root.sh
Performing root user operation for Oracle 11g 
The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/11204/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]: 
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11204/grid/crs/install/crsconfig_params
Creating trace directory
Installing Trace File Analyzer
OLR initialization - successful
Adding Clusterware entries to upstart
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node grac41, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
Configure Oracle Grid Infrastructure for a Cluster ... succeeded

Verify CRS installation with : $GRID_HOME/bin/crsctl stat res -t

# my_crs_stat
NAME                           TARGET     STATE           SERVER       STATE_DETAILS   
-------------------------      ---------- ----------      ------------ ------------------
ora.DATA.dg                    ONLINE     ONLINE          grac41        
ora.DATA.dg                    ONLINE     ONLINE          grac42        
ora.LISTENER.lsnr              ONLINE     ONLINE          grac41        
ora.LISTENER.lsnr              ONLINE     ONLINE          grac42        
ora.asm                        ONLINE     ONLINE          grac41       Started 
ora.asm                        ONLINE     ONLINE          grac42       Started 
ora.gsd                        OFFLINE    OFFLINE         grac41        
ora.gsd                        OFFLINE    OFFLINE         grac42        
ora.net1.network               ONLINE     ONLINE          grac41        
ora.net1.network               ONLINE     ONLINE          grac42        
ora.ons                        ONLINE     ONLINE          grac41        
ora.ons                        ONLINE     ONLINE          grac42        
ora.registry.acfs              ONLINE     ONLINE          grac41        
ora.registry.acfs              ONLINE     ONLINE          grac42        
ora.LISTENER_SCAN1.lsnr        ONLINE     ONLINE          grac42        
ora.LISTENER_SCAN2.lsnr        ONLINE     ONLINE          grac41        
ora.LISTENER_SCAN3.lsnr        ONLINE     ONLINE          grac41        
ora.cvu                        ONLINE     ONLINE          grac41        
ora.gns                        ONLINE     ONLINE          grac41        
ora.gns.vip                    ONLINE     ONLINE          grac41        
ora.grac41.vip                 ONLINE     ONLINE          grac41        
ora.grac42.vip                 ONLINE     ONLINE          grac42        
ora.oc4j                       ONLINE     ONLINE          grac41        
ora.scan1.vip                  ONLINE     ONLINE          grac42        
ora.scan2.vip                  ONLINE     ONLINE          grac41        
ora.scan3.vip                  ONLINE     ONLINE          grac41

 

Verify CRS installation with cluvfy

$ ./bin/cluvfy stage -post crsinst -n grac41,grac42
Performing post-checks for cluster services setup 
Checking node reachability...
Node reachability check passed from node "grac41"
Checking user equivalence...
User equivalence check passed for user "grid"
Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
Check: Node connectivity using interfaces on subnet "192.168.1.0"
Node connectivity passed for subnet "192.168.1.0" with node(s) grac41,grac42
TCP connectivity check passed for subnet "192.168.1.0"
Check: Node connectivity using interfaces on subnet "192.168.2.0"
Node connectivity passed for subnet "192.168.2.0" with node(s) grac41,grac42
TCP connectivity check passed for subnet "192.168.2.0"
Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.1.0".
Subnet mask consistency check passed for subnet "192.168.2.0".
Subnet mask consistency check passed.
Node connectivity check passed
Checking multicast communication...
Checking subnet "192.168.2.0" for multicast communication with multicast group "224.0.0.251"...
Check of subnet "192.168.2.0" for multicast communication with multicast group "224.0.0.251" passed.
Check of multicast communication passed.
Time zone consistency check passed
Checking Cluster manager integrity... 
Checking CSS daemon...
Oracle Cluster Synchronization Services appear to be online.
Cluster manager integrity check passed
UDev attributes check for OCR locations started...
UDev attributes check passed for OCR locations 
UDev attributes check for Voting Disk locations started...
UDev attributes check passed for Voting Disk locations 
Default user file creation mask check passed
Checking cluster integrity...
Cluster integrity check passed
Checking OCR integrity...
Checking the absence of a non-clustered configuration...
All nodes free of non-clustered, local-only configurations
Checking OCR config file "/etc/oracle/ocr.loc"...
OCR config file "/etc/oracle/ocr.loc" check successful
Disk group for ocr location "+DATA" is available on all the nodes
NOTE: 
This check does not verify the integrity of the OCR contents. Execute 'ocrcheck' as a privileged user to verify the contents of OCR.
OCR integrity check passed
Checking CRS integrity...
Clusterware version consistency passed.
CRS integrity check passed
Checking node application existence...
Checking existence of VIP node application (required)
VIP node application check passed
Checking existence of NETWORK node application (required)
NETWORK node application check passed
Checking existence of GSD node application (optional)
GSD node application is offline on nodes "grac41,grac42"
Checking existence of ONS node application (optional)
ONS node application check passed
Checking Single Client Access Name (SCAN)...
Checking TCP connectivity to SCAN Listeners...
TCP connectivity to SCAN Listeners exists on all cluster nodes
Checking name resolution setup for "grac4-scan.grid4.example.com"...
Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...
All nodes have same "hosts" entry defined in file "/etc/nsswitch.conf"
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed
Checking SCAN IP addresses...
Check of SCAN IP addresses passed
Verification of SCAN VIP and Listener setup passed
Checking OLR integrity...
Check of existence of OLR configuration file "/etc/oracle/olr.loc" passed
Check of attributes of OLR configuration file "/etc/oracle/olr.loc" passed
WARNING: 
This check does not verify the integrity of the OLR contents. Execute 'ocrcheck -local' as a privileged user to verify the contents of OLR.
OLR integrity check passed
Checking GNS integrity...
The GNS subdomain name "grid4.example.com" is a valid domain name
Checking if the GNS VIP belongs to same subnet as the public network...
Public network subnets "192.168.1.0, 192.168.1.0, 192.168.1.0, 192.168.1.0, 192.168.1.0" match with the GNS VIP "192.168.1.0, 192.168.1.0, 192.168.1.0, 192.168.1.0, 192.168.1.0"
GNS VIP "192.168.1.59" resolves to a valid IP address
GNS resolved IP addresses are reachable
GNS resolved IP addresses are reachable
GNS resolved IP addresses are reachable
GNS resource configuration check passed
GNS VIP resource configuration check passed.
GNS integrity check passed
Checking Oracle Cluster Voting Disk configuration...
Oracle Cluster Voting Disk configuration check passed
User "grid" is not part of "root" group. Check passed
Checking if Clusterware is installed on all nodes...
Check of Clusterware install passed
Checking if CTSS Resource is running on all nodes...
CTSS resource check passed
Querying CTSS for time offset on all nodes...
Query of CTSS for time offset passed
Check CTSS state started...
CTSS is in Active state. Proceeding with check of clock time offsets on all nodes...
Check of clock time offsets passed
Oracle Cluster Time Synchronization Services check passed
Checking VIP configuration.
Checking VIP Subnet configuration.
Check for VIP Subnet configuration passed.
Checking VIP reachability
Check for VIP reachability passed.
Post-check for cluster services setup was successful

 

RDBMS install

Verify pre RDBMS install with cluvfy

 $ ./bin/cluvfy stage -pre dbcfg -n grac41,grac42 -d /u01/app/oracle/product/11204/racdb -verbose -fixup
In this case cluvfy builds a fixup script to create the oper group - run it on both nodes
# /tmp/CVU_12.1.0.1.0_oracle/runfixup.sh
Solve all errors until cluvfy reports : Pre-check for database configuration was successful. 
$ id
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),500(vboxsf),506(asmdba),54322(dba) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023

Run Installer from Database media and run related root.sh scripts

$ env | grep ORA
ORACLE_BASE=/u01/app/oracle
ORACLE_SID=grace41
ORACLE_HOME=/u01/app/oracle/product/11204/racdb
$ cd /media/sf_mykits/Oracle/11.2.0.4/database
$ ./runInstaller  
   server class
    Oracle Real application cluster installation
     Test/Create SSH connectivity
      Advanced Install 
        Enterprise Edition
         Global Database name : grac4             
          OSDBA  group : dba
          OSOPER group : oper 
Run /u01/app/oracle/product/11204/racdb/root.sh on grac41 and grac42

 Verify Rdbms installation with : $GRID_HOME/bin/crsctl stat res -t

$ my_crs_stat
NAME                           TARGET     STATE           SERVER       STATE_DETAILS
-------------------------      ---------- ----------      ------------ ------------------
ora.DATA.dg                    ONLINE     ONLINE          grac41
ora.DATA.dg                    ONLINE     ONLINE          grac42
ora.LISTENER.lsnr              ONLINE     ONLINE          grac41
ora.LISTENER.lsnr              ONLINE     ONLINE          grac42
ora.asm                        ONLINE     ONLINE          grac41       Started
ora.asm                        ONLINE     ONLINE          grac42       Started
ora.gsd                        OFFLINE    OFFLINE         grac41
ora.gsd                        OFFLINE    OFFLINE         grac42
ora.net1.network               ONLINE     ONLINE          grac41
ora.net1.network               ONLINE     ONLINE          grac42
ora.ons                        ONLINE     ONLINE          grac41
ora.ons                        ONLINE     ONLINE          grac42
ora.registry.acfs              ONLINE     ONLINE          grac41
ora.registry.acfs              ONLINE     ONLINE          grac42
ora.LISTENER_SCAN1.lsnr        ONLINE     ONLINE          grac42
ora.LISTENER_SCAN2.lsnr        ONLINE     ONLINE          grac41
ora.LISTENER_SCAN3.lsnr        ONLINE     ONLINE          grac41
ora.cvu                        ONLINE     ONLINE          grac41
ora.gns                        ONLINE     ONLINE          grac42
ora.gns.vip                    ONLINE     ONLINE          grac42
ora.grac4.db                   ONLINE     ONLINE          grac41       Open
ora.grac4.db                   ONLINE     ONLINE          grac42       Open
ora.grac41.vip                 ONLINE     ONLINE          grac41
ora.grac42.vip                 ONLINE     ONLINE          grac42
ora.oc4j                       ONLINE     ONLINE          grac41
ora.scan1.vip                  ONLINE     ONLINE          grac42
ora.scan2.vip                  ONLINE     ONLINE          grac41
ora.scan3.vip                  ONLINE     ONLINE          grac41

 

Verify gv$instance:


INST_ID INST_NUM INST_NAME HOST_NAME          VERSION       STARTUP_TIME    STATUS PAR THREAD# ARCHIVE LOGINS     SHU DB_STATUS INSTANCE_ROLE      ACTIVE_ST BLO
------- -------- --------- ------------------ ------------ --------------- ------ --- ------- ------- ---------- --- --------- ------------------ --------- ---
1    1     grac41    grac41.example.com 11.2.0.4.0   14-SEP 11:31:45 OPEN   YES 1       STOPPED ALLOWED     NO  ACTIVE    PRIMARY_INSTANCE   NORMAL    NO
2    2     grac42    grac42.example.com 11.2.0.4.0   14-SEP 11:32:00 OPEN   YES 2       STOPPED ALLOWED     NO  ACTIVE    PRIMARY_INSTANCE   NORMAL    NO

 

Reference:
How To Setup Partitioned Linux Block Devices Using UDEV (Non-ASMLIB) And Assign Them To ASM? (Doc ID 1528148.1)

Installing Oracle RAC 11.2.0.3, OEL 6.3 and Virtualbox 4.2 with GNS

Linux, Virtualbox Installation

Check the following link for Linux/VirtualBox installation details: http://www.oracle-base.com/articles/11g/oracle-db-11gr2-rac-installation-on-oracle-linux-6-using-virtualbox.php

  • Install Virtualbox Guest Additons
  • Install package : # yum install oracle-rdbms-server-11gR2-preinstall
  • Update the installation: : # yum update
  • Install Wireshark:  # yum install wireshark     # yum install wireshark-gnome
  • Install ASMlib
  • Install cluvfy as user grid – download here and extract files under user grid
  • Extract grid software to folder grid and  install rpm from  folder:  grid/rpm 
# cd /media/sf_kits/Oracle/11.2.0.4/grid/rpm
# rpm -iv cvuqdisk-1.0.9-1.rpm
Preparing packages for installation...
Using default group oinstall to install package
cvuqdisk-1.0.9-1
  • Verify the current OS status by running : $ ./bin/cluvfy stage -pre crsinst -n grac41

 

Check OS setting

Install X11 applications like xclock
# yum install xorg-x11-apps

Turn off and disable the firewall IPTables and disable SELinux
# service iptables stop
iptables: Flushing firewall rules:                         [  OK  ]
iptables: Setting chains to policy ACCEPT: filter          [  OK  ]
iptables: Unloading modules:                               [  OK  ]
# chkconfig iptables off
# chkconfig --list iptables
iptables        0:off   1:off   2:off   3:off   4:off   5:off   6:off

Disable SELinux. Open the config file and change the SELINUX variable from enforcing to disabled.
# vim /etc/selinux/config
 # This file controls the state of SELinux on the system.
 # SELINUX= can take one of these three values:
 #     enforcing - SELinux security policy is enforced.
 #     permissive - SELinux prints warnings instead of enforcing.
 #     disabled - No SELinux policy is loaded.
 SELINUX=disabled
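The same edit can be scripted; a minimal sed sketch (the function name and the scratch-file usage below are mine, not part of the install docs):

```shell
# set SELINUX=disabled in a selinux config file; point it at
# /etc/selinux/config on the node (the change takes effect after a reboot)
disable_selinux() {
    sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$1"
}

# usage on the node (as root):
#   disable_selinux /etc/selinux/config
```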

DNS Setup including BIND, NTP, DHCP in a LAN   on a separate VirtualBox VM  

Even if you are using DNS, Oracle recommends listing the public IP, VIP and private addresses for each node in the hosts file on each node.

Domain:         example.com       Name Server: ns1.example.com            192.168.1.50
RAC Sub-Domain: grid.example.com  Name Server: gns.example.com            192.168.1.55
DHCP Server:    ns1.example.com
NTP  Server:    ns1.example.com
DHCP addresses: 192.168.1.100 ... 192.168.1.254

Configure DNS:
Identity     Home Node    Host Node                          Given Name                      Type        Address        Address Assigned By     Resolved By
GNS VIP        None        Selected by Oracle Clusterware    gns.example.com                 Virtual     192.168.1.55   Net administrator       DNS + GNS
Node 1 Public  Node 1      grac1                             grac1.example.com               Public      192.168.1.61   Fixed                   DNS
Node 1 VIP     Node 1      Selected by Oracle Clusterware    grac1-vip.grid.example.com      Private     Dynamic        DHCP                    GNS
Node 1 Private Node 1      grac1int                          grac1int.example.com            Private     192.168.2.71   Fixed                   DNS
Node 2 Public  Node 2      grac2                             grac2.example.com               Public      192.168.1.62   Fixed                   DNS
Node 2 VIP     Node 2      Selected by Oracle Clusterware    grac2-vip.grid.example.com      Private     Dynamic        DHCP                    GNS
Node 2 Private Node 2      grac2int                          grac2int.example.com            Private     192.168.2.72   Fixed                   DNS
SCAN VIP 1     none        Selected by Oracle Clusterware    GRACE2-scan.grid.example.com    Virtual     Dynamic        DHCP                    GNS
SCAN VIP 2     none        Selected by Oracle Clusterware    GRACE2-scan.grid.example.com    Virtual     Dynamic        DHCP                    GNS
SCAN VIP 3     none        Selected by Oracle Clusterware    GRACE2-scan.grid.example.com    Virtual     Dynamic        DHCP                    GNS

 

Note: the cluster node VIPs and SCANs are obtained via DHCP, and if GNS is up all DHCP addresses should be found with nslookup. If you have problems with zone delegation, add your GNS name server to /etc/resolv.conf

Install BIND – Make sure the following rpms are installed:

dhcp-common-4.1.1-34.P1.0.1.el6.x86_64
bind-9.8.2-0.17.rc1.0.2.el6_4.4.x86_64.rpm
bind-libs-9.8.2-0.17.rc1.0.2.el6_4.4.x86_64.rpm
bind-utils-9.8.2-0.17.rc1.0.2.el6_4.4.x86_64.rpm

Install Bind packages

# rpm -Uvh bind-9.8.2-0.17.rc1.0.2.el6_4.4.x86_64.rpm bind-libs-9.8.2-0.17.rc1.0.2.el6_4.4.x86_64.rpm \
       bind-utils-9.8.2-0.17.rc1.0.2.el6_4.4.x86_64.rpm

For a detailed description of zone delegations check the following link:

Configure DNS:

-> named.conf
options {
    listen-on port 53 {  192.168.1.50; 127.0.0.1; };
    directory     "/var/named";
    dump-file     "/var/named/data/cache_dump.db";
        statistics-file "/var/named/data/named_stats.txt";
        memstatistics-file "/var/named/data/named_mem_stats.txt";
    allow-query     {  any; };
    allow-recursion     {  any; };
    recursion yes;
    dnssec-enable no;
    dnssec-validation no;

};
logging {
        channel default_debug {
                file "data/named.run";
                severity dynamic;
        };
};
zone "." IN {
    type hint;
    file "named.ca";
};
zone    "1.168.192.in-addr.arpa" IN { // Reverse zone
    type master;
    file "192.168.1.db";
        allow-transfer { any; };
    allow-update { none; };
};
zone    "2.168.192.in-addr.arpa" IN { // Reverse zone
    type master;
    file "192.168.2.db";
        allow-transfer { any; };
    allow-update { none; };
};
zone  "example.com" IN {
      type master;
       notify no;
       file "example.com.db";
};

-> Forward zone: example.com.db 
;
; see http://www.zytrax.com/books/dns/ch9/delegate.html 
; 
$TTL 1H         ; Time to live
$ORIGIN example.com.
@       IN      SOA     ns1.example.com.  hostmaster.example.com.  (
                        2009011202      ; serial (todays date + todays serial #)
                        3H              ; refresh 3 hours
                        1H              ; retry 1 hour
                        1W              ; expire 1 week
                        1D )            ; minimum 24 hour
;
        IN          A         192.168.1.50
        IN          NS        ns1.example.com. ; name server for example.com
ns1     IN          A         192.168.1.50
grac1   IN          A         192.168.1.61
grac2   IN          A         192.168.1.62
grac3   IN          A         192.168.1.63
;
$ORIGIN grid.example.com.
@       IN          NS        gns.grid.example.com. ; NS  grid.example.com
        IN          NS        ns1.example.com.      ; NS example.com
gns     IN          A         192.168.1.55 ; glue record

-> Reverse zone:  192.168.1.db 
$TTL 1H
@       IN      SOA     ns1.example.com.  hostmaster.example.com.  (
                        2009011201      ; serial (todays date + todays serial #)
                        3H              ; refresh 3 hours
                        1H              ; retry 1 hour
                        1W              ; expire 1 week
                        1D )            ; minimum 24 hour
; 
              NS        ns1.example.com.
              NS        gns.grid.example.com.
50            PTR       ns1.example.com.
55            PTR       gns.grid.example.com. ; reverse mapping for GNS
61            PTR       grac1.example.com. ; reverse mapping for GNS
62            PTR       grac2.example.com. ; reverse mapping for GNS
63            PTR       grac3.example.com. ; reverse mapping for GNS

-> Reverse zone:  192.168.2.db 
$TTL 1H
@       IN      SOA     ns1.example.com. hostmaster.example.com.  (
                        2009011201      ; serial (todays date + todays serial #)
                        3H              ; refresh 3 hours
                        1H              ; retry 1 hour
                        1W              ; expire 1 week
                        1D )            ; minimum 24 hour
; 
             NS        ns1.example.com.
71           PTR       grac1int.example.com. ; reverse mapping for GNS
72           PTR       grac2int.example.com. ; reverse mapping for GNS
73           PTR       grac3int.example.com. ; reverse mapping for GNS
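As a sanity check on the reverse zones above: each PTR owner name is just the IP with its octets reversed under in-addr.arpa. A tiny helper (my own sketch, not part of BIND) makes the mapping explicit:

```shell
# build the in-addr.arpa owner name for an IPv4 address (pure shell)
reverse_name() {
    local IFS=.
    set -- $1
    echo "$4.$3.$2.$1.in-addr.arpa"
}

reverse_name 192.168.2.71    # prints 71.2.168.192.in-addr.arpa
```

So the "71 PTR grac1int.example.com." record in 192.168.2.db answers queries for 71.2.168.192.in-addr.arpa.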

->/etc/resolv.conf
# Generated by NetworkManager
search example.com
nameserver 192.168.1.50

Verify DNS ( Note: these commands were executed with a running GNS - i.e. GRID was already installed )
Check the current GNS status
#   /u01/app/11203/grid/bin/srvctl config gns -a -l
GNS is enabled.
GNS is listening for DNS server requests on port 53
GNS is using port 5353 to connect to mDNS
GNS status: OK
Domain served by GNS: grid3.example.com
GNS version: 11.2.0.3.0
GNS VIP network: ora.net1.network
Name            Type Value
grac3-scan      A    192.168.1.220
grac3-scan      A    192.168.1.221
grac3-scan      A    192.168.1.222
grac3-scan1-vip A    192.168.1.220
grac3-scan2-vip A    192.168.1.221
grac3-scan3-vip A    192.168.1.222
grac31-vip      A    192.168.1.219
grac32-vip      A    192.168.1.224
grac33-vip      A    192.168.1.226


$ nslookup grac1.example.com
Name:    grac1.example.com
Address: 192.168.1.61
$ nslookup grac1.example.com
Name:    grac1.example.com
Address: 192.168.1.61
$ nslookup grac1.example.com
Name:    grac1.example.com
Address: 192.168.1.61
$ nslookup grac1int.example.com
Name:    grac1int.example.com
Address: 192.168.2.71
$ nslookup grac1int.example.com
Name:    grac1int.example.com
Address: 192.168.2.71
$ nslookup grac1int.example.com
Name:    grac1int.example.com
Address: 192.168.2.71
$ nslookup 192.168.2.71
71.2.168.192.in-addr.arpa    name = grac1int.example.com.
$ nslookup 192.168.2.72
72.2.168.192.in-addr.arpa    name = grac2int.example.com.
$ nslookup 192.168.2.73
73.2.168.192.in-addr.arpa    name = grac3int.example.com.
$ nslookup 192.168.1.61
61.1.168.192.in-addr.arpa    name = grac1.example.com.
$ nslookup 192.168.1.62
62.1.168.192.in-addr.arpa    name = grac2.example.com.
$ nslookup 192.168.1.63
63.1.168.192.in-addr.arpa    name = grac3.example.com.
$ nslookup grac1-vip.grid.example.com
Non-authoritative answer:
Name:    grac1-vip.grid.example.com
Address: 192.168.1.107
$ nslookup grac2-vip.grid.example.com
Non-authoritative answer:
Name:    grac2-vip.grid.example.com
Address: 192.168.1.112
$ nslookup GRACE2-scan.grid.example.com
Non-authoritative answer:
Name:    GRACE2-scan.grid.example.com
Address: 192.168.1.108
Name:    GRACE2-scan.grid.example.com
Address: 192.168.1.110
Name:    GRACE2-scan.grid.example.com
Address: 192.168.1.109

Use dig against DNS name server - DNS name server should use Zone Delegation
$ dig @192.168.1.50 GRACE2-scan.grid.example.com
; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.10.rc1.el6 <<>> @192.168.1.50 GRACE2-scan.grid.example.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 64626
;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 2, ADDITIONAL: 1
;; QUESTION SECTION:
;GRACE2-scan.grid.example.com.    IN    A
;; ANSWER SECTION:
GRACE2-scan.grid.example.com. 1    IN    A    192.168.1.108
GRACE2-scan.grid.example.com. 1    IN    A    192.168.1.109
GRACE2-scan.grid.example.com. 1    IN    A    192.168.1.110
;; AUTHORITY SECTION:
grid.example.com.    3600    IN    NS    ns1.example.com.
grid.example.com.    3600    IN    NS    gns.grid.example.com.
;; ADDITIONAL SECTION:
ns1.example.com.    3600    IN    A    192.168.1.50
;; Query time: 0 msec
;; SERVER: 192.168.1.50#53(192.168.1.50)
;; WHEN: Sun Jul 28 13:50:26 2013
;; MSG SIZE  rcvd: 146

Use dig against GNS name server 
$ dig @192.168.1.55 GRACE2-scan.grid.example.com
; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.10.rc1.el6 <<>> @192.168.1.55 GRACE2-scan.grid.example.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 32138
;; flags: qr aa; QUERY: 1, ANSWER: 3, AUTHORITY: 1, ADDITIONAL: 1
;; QUESTION SECTION:
;GRACE2-scan.grid.example.com.    IN    A
;; ANSWER SECTION:
GRACE2-scan.grid.example.com. 120 IN    A    192.168.1.108
GRACE2-scan.grid.example.com. 120 IN    A    192.168.1.109
GRACE2-scan.grid.example.com. 120 IN    A    192.168.1.110
;; AUTHORITY SECTION:
grid.example.com.    10800    IN    SOA    GRACE2-gns-vip.grid.example.com. GRACE2-gns-vip.grid.example.com. 3173463 10800 10800 30 120
;; ADDITIONAL SECTION:
GRACE2-gns-vip.grid.example.com. 10800 IN A    192.168.1.55
;; Query time: 15 msec
;; SERVER: 192.168.1.55#53(192.168.1.55)
;; WHEN: Sun Jul 28 13:50:26 2013
;; MSG SIZE  rcvd: 161

Start the DNS server

# service named restart

Starting named:                                            [  OK  ]

Ensure the DNS service restarts on reboot

# chkconfig named on

# chkconfig --list named

named              0:off    1:off    2:on    3:on    4:on    5:on    6:off

Display all records for zone example.com with dig 

 

$ dig example.com AXFR
$ dig @192.168.1.55  AXFR
$ dig GRACE2-scan.grid.example.com

 

Configure DHCP server 

  • dhclient recreates /etc/resolv.conf. Run $ service network restart after testing dhclient to get back a consistent /etc/resolv.conf on all cluster nodes

 

Verify that you don't use any DHCP server from a bridged network
- Note: if Virtualbox bridged network devices use the same network address as our local router,
  the Virtualbox DHCP server is used ( of course you can disable it )
  M:\VM> vboxmanage list bridgedifs
   Name:            Realtek PCIe GBE Family Controller
   GUID:            7e0af9ff-ea37-4e63-b2e5-5128c60ab300
   DHCP:            Enabled
   IPAddress:       192.168.1.4
   NetworkMask:     255.255.255.0

M:\VM\GRAC_OEL64_11203>ipconfig
   Windows IP Configuration
   Ethernet adapter LAN Connection:
   Connection-specific DNS Suffix  . : speedport.ip
   Link-local IPv6 Address . . . . . : fe80::c52f:f681:bb0b:c358%11
   IPv4 Address  . . . . . . . . . . : 192.168.1.4
   Subnet Mask . . . . . . . . . . . : 255.255.255.0
   Default Gateway . . . . . . . . . : 192.168.1.1

Solution: Use Internal Network devices instead of Bridged Network devices for the Virtualbox network setup


-> /etc/sysconfig/dhcpd
Command line options here
 DHCPDARGS="eth0"

-> /etc/dhcp/dhcpd.conf ( don't use domain-name as this will create a new resolv.conf )
 ddns-update-style interim;
 ignore client-updates;
 subnet 192.168.1.0 netmask 255.255.255.0 {
 option routers                  192.168.1.1;                    # Default gateway to be used by DHCP clients
 option subnet-mask              255.255.255.0;                  # Default subnet mask to be used by DHCP clients.
 option ip-forwarding            off;                            # Do not forward DHCP requests.
 option broadcast-address        192.168.1.255;                  # Default broadcast address to be used by DHCP client.
#  option domain-name              "grid.example.com"; 
 option domain-name-servers      192.168.1.50;                   # IP address of the DNS server. In this document it will be oralab1
 option time-offset              -19000;                           # Central Standard Time
 option ntp-servers              0.pool.ntp.org;                   # Default NTP server to be used by DHCP clients
 range                           192.168.1.100 192.168.1.254;    # Range of IP addresses that can be issued to DHCP client
 default-lease-time              21600;                            # Amount of time in seconds that a client may keep the IP address
 max-lease-time                  43200;
 }
 # service dhcpd restart
 # chkconfig dhcpd on
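The broadcast-address option above must match the subnet/netmask pair; a pure-shell sketch (helper names are mine) shows how it is derived:

```shell
# derive the broadcast address from a subnet and netmask, mirroring the
# option broadcast-address line in dhcpd.conf (pure-shell sketch)
ip_to_int() {
    local IFS=.
    set -- $1
    echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}

int_to_ip() {
    echo "$(( ($1 >> 24) & 255 )).$(( ($1 >> 16) & 255 )).$(( ($1 >> 8) & 255 )).$(( $1 & 255 ))"
}

broadcast_addr() {
    local net mask
    net=$(ip_to_int "$1")
    mask=$(ip_to_int "$2")
    # broadcast = network part ORed with the inverted host mask
    int_to_ip $(( (net & mask) | (~mask & 4294967295) ))
}

broadcast_addr 192.168.1.0 255.255.255.0    # prints 192.168.1.255
```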

Test on all cluster instances:
 # dhclient eth0
 Check /var/log/messages
 #  tail -f /var/log/messages
 Jul  8 12:46:09 gns dhclient[3909]: DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 7 (xid=0x6fb12d80)
 Jul  8 12:46:09 gns dhcpd: DHCPDISCOVER from 08:00:27:e6:71:54 via eth0
 Jul  8 12:46:10 gns dhcpd: 0.pool.ntp.org: temporary name server failure
 Jul  8 12:46:10 gns dhcpd: DHCPOFFER on 192.168.1.100 to 08:00:27:e6:71:54 via eth0
 Jul  8 12:46:10 gns dhclient[3909]: DHCPOFFER from 192.168.1.50
 Jul  8 12:46:10 gns dhclient[3909]: DHCPREQUEST on eth0 to 255.255.255.255 port 67 (xid=0x6fb12d80)
 Jul  8 12:46:10 gns dhcpd: DHCPREQUEST for 192.168.1.100 (192.168.1.50) from 08:00:27:e6:71:54 via eth0
 Jul  8 12:46:10 gns dhcpd: DHCPACK on 192.168.1.100 to 08:00:27:e6:71:54 via eth0
 Jul  8 12:46:10 gns dhclient[3909]: DHCPACK from 192.168.1.50 (xid=0x6fb12d80)
 Jul  8 12:46:12 gns avahi-daemon[1407]: Registering new address record for 192.168.1.100 on eth0.IPv4.
 Jul  8 12:46:12 gns NET[3962]: /sbin/dhclient-script : updated /etc/resolv.conf
 Jul  8 12:46:12 gns dhclient[3909]: bound to 192.168.1.100 -- renewal in 9071 seconds.
 Jul  8 12:46:13 gns ntpd[2051]: Listening on interface #6 eth0, 192.168.1.100#123 Enabled
  • Verify that the right DHCP server is in use ( at least check the bound and renewal values )
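To see which server actually answered, the DHCPACK line in /var/log/messages carries the server IP; a small parser sketch (function name is mine):

```shell
# pull the answering server's IP out of a dhclient DHCPACK log line, so a
# quick pipe over /var/log/messages shows who really handed out the lease
ack_server() {
    sed -n 's/.*DHCPACK from \([0-9.]*\).*/\1/p'
}

echo 'Jul  8 12:46:10 gns dhclient[3909]: DHCPACK from 192.168.1.50 (xid=0x6fb12d80)' | ack_server
# prints 192.168.1.50
```

On a node, `grep DHCPACK /var/log/messages | ack_server | tail -1` should print 192.168.1.50 and not a stray Virtualbox DHCP server.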

NTP Setup – Server: gns.example.com

# cat /etc/ntp.conf
 restrict default nomodify notrap noquery
 restrict 127.0.0.1
 # -- CLIENT NETWORK -------
 restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap
 # --- OUR TIMESERVERS -----  can't reach NTP servers - build my own server
 #server 0.pool.ntp.org iburst
 #server 1.pool.ntp.org iburst
 server 127.127.1.0
 # --- NTP MULTICASTCLIENT ---
 # --- GENERAL CONFIGURATION ---
 # Undisciplined Local Clock.
 fudge   127.127.1.0 stratum 9
 # Drift file.
 driftfile /var/lib/ntp/drift
 broadcastdelay  0.008
 # Keys file.
 keys /etc/ntp/keys
 # chkconfig ntpd on
 # ntpq -p
 remote           refid      st t when poll reach   delay   offset  jitter
 ==============================================================================
 *LOCAL(0)        .LOCL.           9 l   11   64  377    0.000    0.000   0.000

NTP Setup - Clients: grac1.example.com, grac2.example.com,  ...
 # cat /etc/ntp.conf
 restrict default nomodify notrap noquery
 restrict 127.0.0.1
 # -- CLIENT NETWORK -------
 # --- OUR TIMESERVERS -----
 # 192.168.1.2 is the address for my timeserver,
 # use the address of your own, instead:
 server 192.168.1.50
 server  127.127.1.0
 # --- NTP MULTICASTCLIENT ---
 # --- GENERAL CONFIGURATION ---
 # Undisciplined Local Clock.
 fudge   127.127.1.0 stratum 12
 # Drift file.
 driftfile /var/lib/ntp/drift
 broadcastdelay  0.008
 # Keys file.
 keys /etc/ntp/keys
 # ntpq -p
 remote           refid      st t when poll reach   delay   offset  jitter
 ==============================================================================
 gns.example.com LOCAL(0)        10 u   22   64    1    2.065  -11.015   0.000
 LOCAL(0)        .LOCL.          12 l   21   64    1    0.000    0.000   0.000
 Verify setup with cluvfy :

Add to  our /etc/rc.local
#
service ntpd stop
ntpdate -u 192.168.1.50 
service ntpd start

 

Verify GNS setup with cluvfy:

$ ./bin/cluvfy comp gns -precrsinst -domain grid.example.com -vip 192.168.2.100 -verbose -n grac1,grac2
 Verifying GNS integrity
 Checking GNS integrity...
 Checking if the GNS subdomain name is valid...
 The GNS subdomain name "grid.example.com" is a valid domain name
 Checking if the GNS VIP is a valid address...
 GNS VIP "192.168.2.100" resolves to a valid IP address
 Checking the status of GNS VIP...
 GNS integrity check passed
 Verification of GNS integrity was successful.

 

Setup User Accounts

NOTE: Oracle recommends using different users for the installation of the Grid Infrastructure (GI) and the Oracle RDBMS home. The GI will be installed in a separate Oracle base, owned by user 'grid'. After the grid install the GI home will be owned by root, and inaccessible to unauthorized users.

Create OS groups using the command below. Enter these commands as the 'root' user:
  #/usr/sbin/groupadd -g 501 oinstall
  #/usr/sbin/groupadd -g 502 dba
  #/usr/sbin/groupadd -g 504 asmadmin
  #/usr/sbin/groupadd -g 506 asmdba
  #/usr/sbin/groupadd -g 507 asmoper

Create the users that will own the Oracle software using the commands:
  #/usr/sbin/useradd -u 501 -g oinstall -G asmadmin,asmdba,asmoper grid
  #/usr/sbin/useradd -u 502 -g oinstall -G dba,asmdba oracle
  $ id
  uid=501(grid) gid=54321(oinstall) groups=54321(oinstall),504(asmadmin),506(asmdba),507(asmoper)
  $ id
  uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),501(vboxsf),506(adba),54322(dba)

For the C shell (csh or tcsh), add the following lines to the /etc/csh.login file:
  if ( $USER = "oracle" || $USER = "grid" ) then
  limit maxproc 16384
  limit descriptors 65536
  endif

Modify  /etc/security/limits.conf
  # oracle-rdbms-server-11gR2-preinstall setting for nofile soft limit is 1024
  oracle   soft   nofile    1024
  grid   soft   nofile    1024
  # oracle-rdbms-server-11gR2-preinstall setting for nofile hard limit is 65536
  oracle   hard   nofile    65536
  grid   hard   nofile    65536
  # oracle-rdbms-server-11gR2-preinstall setting for nproc soft limit is 2047
  oracle   soft   nproc    2047
  grid     soft   nproc    2047
  # oracle-rdbms-server-11gR2-preinstall setting for nproc hard limit is 16384
  oracle   hard   nproc    16384
  grid     hard   nproc    16384
  # oracle-rdbms-server-11gR2-preinstall setting for stack soft limit is 10240KB
  oracle   soft   stack    10240
  grid     soft   stack    10240
  # oracle-rdbms-server-11gR2-preinstall setting for stack hard limit is 32768KB
  oracle   hard   stack    32768
  grid     hard   stack    32768
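A quick way to confirm the limits took effect is to compare the current shell's soft limits with the recommendations above; a sketch (helper name is mine - run it as the oracle and grid users after re-login):

```shell
# compare current soft limits against the
# oracle-rdbms-server-11gR2-preinstall recommendations
check_limit() {    # check_limit <name> <current> <required-minimum>
    case "$2" in
        unlimited) echo "OK: $1 is unlimited"; return;;
    esac
    if [ "$2" -lt "$3" ]; then
        echo "WARN: $1 is $2, below the recommended $3"
    else
        echo "OK: $1 is $2"
    fi
}

check_limit "soft nofile" "$(ulimit -Sn)" 1024
check_limit "soft nproc"  "$(ulimit -Su)" 2047
```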

Create Directories:
 - Have a separate ORACLE_BASE for both GRID and RDBMS install !
Create the Oracle Inventory Directory ( needed, or the 11.2.0.3 install will fail )
To create the Oracle Inventory directory, enter the following commands as the root user:
  # mkdir -p /u01/app/oraInventory
  # chown -R grid:oinstall /u01/app/oraInventory

Creating the Oracle Grid Infrastructure Home Directory
To create the Grid Infrastructure home directory, enter the following commands as the root user:
  # mkdir -p /u01/app/grid
  # chown -R grid:oinstall /u01/app/grid
  # chmod -R 775 /u01/app/grid
  # mkdir -p /u01/app/11203/grid
  # chown -R grid:oinstall /u01/app/11203/grid
  # chmod -R 775 /u01/app/11203/grid

Creating the Oracle Base Directory
  To create the Oracle Base directory, enter the following commands as the root user:
  # mkdir -p /u01/app/oracle
  # chown -R oracle:oinstall /u01/app/oracle
  # chmod -R 775 /u01/app/oracle

Creating the Oracle RDBMS Home Directory
  To create the Oracle RDBMS Home directory, enter the following commands as the root user:
  # mkdir -p /u01/app/oracle/product/11203/racdb
  # chown -R oracle:oinstall /u01/app/oracle/product/11203/racdb
  # chmod -R 775 /u01/app/oracle/product/11203/racdb
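The mkdir/chmod steps above can be dry-run under a scratch root before touching /u01; a sketch (function name and demo root are mine; the chown lines need root and the grid/oracle accounts, so they are left as comments):

```shell
# replay the directory-creation steps under an arbitrary root
create_oracle_dirs() {
    base="$1"
    mkdir -p "$base/u01/app/oraInventory" \
             "$base/u01/app/grid" \
             "$base/u01/app/11203/grid" \
             "$base/u01/app/oracle/product/11203/racdb"
    # chown -R grid:oinstall   "$base/u01/app/oraInventory" "$base/u01/app/grid" "$base/u01/app/11203/grid"
    # chown -R oracle:oinstall "$base/u01/app/oracle"
    chmod -R 775 "$base/u01/app"
}

create_oracle_dirs /tmp/oracle_dirs_demo    # hypothetical root for a dry run
```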

Add "divider=10" to /boot/grub/grub.conf
Finally, add "divider=10" to the boot parameters in grub.conf to improve VM performance.
This is often recommended as a way to reduce host CPU utilization when a VM is idle, but
it also improves overall guest performance. When I tried my first run-through of this
process without this parameter enabled, the cluster configuration script bogged down
terribly, and failed midway through creating the database.

Verify Initial Virtualbox Image using cluvfy
  Install the cluvfy as Grid Owner ( grid )  in  ~/cluvfy112

Check the minimum system requirements for our first Virtualbox image by running: cluvfy -p crs
$ ./bin/cluvfy comp sys -p crs -n grac1
Verifying system requirement 
Total memory check passed
Available memory check passed
Swap space check passed
Free disk space check passed for "grac1:/u01/app/11203/grid,grac1:/tmp"
Check for multiple users with UID value 501 passed 
User existence check passed for "grid"
Group existence check passed for "oinstall"
Group existence check passed for "dba"
Membership check for user "grid" in group "oinstall" [as Primary] passed
Membership check for user "grid" in group "dba" passed
Run level check passed
Hard limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
System architecture check passed
Kernel version check passed
Kernel parameter check passed for "semmsl"
Kernel parameter check passed for "semmns"
Kernel parameter check passed for "semopm"
Kernel parameter check passed for "semmni"
Kernel parameter check passed for "shmmax"
Kernel parameter check passed for "shmmni"
Kernel parameter check passed for "shmall"
Kernel parameter check passed for "file-max"
Kernel parameter check passed for "ip_local_port_range"
Kernel parameter check passed for "rmem_default"
Kernel parameter check passed for "rmem_max"
Kernel parameter check passed for "wmem_default"
Kernel parameter check passed for "wmem_max"
Kernel parameter check passed for "aio-max-nr"
Package existence check passed for "binutils"
Package existence check passed for "compat-libcap1"
Package existence check passed for "compat-libstdc++-33(x86_64)"
Package existence check passed for "libgcc(x86_64)"
Package existence check passed for "libstdc++(x86_64)"
Package existence check passed for "libstdc++-devel(x86_64)"
Package existence check passed for "sysstat"
Package existence check passed for "gcc"
Package existence check passed for "gcc-c++"
Package existence check passed for "ksh"
Package existence check passed for "make"
Package existence check passed for "glibc(x86_64)"
Package existence check passed for "glibc-devel(x86_64)"
Package existence check passed for "libaio(x86_64)"
Package existence check passed for "libaio-devel(x86_64)"
Check for multiple users with UID value 0 passed 
Starting check for consistency of primary group of root user
Check for consistency of root user's primary group passed
Time zone consistency check passed
Verification of system requirement was successful.

 

 Setup ASM disks

Create ASM disks
  Note : Create all ASM disks on my SSD device ( C:\VM\GRACE2\ASM ) 
  Create 6 ASM disks : 
    3 disks with 5 Gbyte each   
    3 disks with 2 Gbyte each   
D:\VM>set_it
D:\VM>set path="d:\Program Files\Oracle\VirtualBox";D:\Windows\system32;D:\Windows;D:\Windows\System32\Wbem;D:\Windows\System32\WindowsPowerShell\v1.0\;D:\Program Files (x86)\IDM Computer Solutions\UltraEdit\

D:\VM>VBoxManage createhd --filename C:\VM\GRACE2\ASM\asm1_5G.vdi --size 5120 --format VDI --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Disk image created. UUID: 7c9711c7-14e9-4bc4-8390-3e7dbb2ad130
D:\VM>VBoxManage createhd --filename C:\VM\GRACE2\ASM\asm2_5G.vdi --size 5120 --format VDI --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Disk image created. UUID: 5c801291-7083-4030-9221-cfab1460f527
D:\VM>VBoxManage createhd --filename C:\VM\GRACE2\ASM\asm3_5G.vdi --size 5120 --format VDI --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Disk image created. UUID: 28b0e0b4-c9ae-474e-b339-d742a10bb120
D:\VM>VBoxManage createhd --filename C:\VM\GRACE2\ASM\asm1_2G.vdi --size 2048 --format VDI --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Disk image created. UUID: acc2b925-fa58-4d5f-966f-1c9cac014d1b
D:\VM>VBoxManage createhd --filename C:\VM\GRACE2\ASM\asm2_2G.vdi --size 2048 --format VDI --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Disk image created. UUID: a93f5fd8-bb10-4421-af07-3dfe4fc0d740
D:\VM>VBoxManage createhd --filename C:\VM\GRACE2\ASM\asm3_2G.vdi --size 2048 --format VDI --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Disk image created. UUID: 89c0f4cd-569e-4a30-9b6e-5ce3044fcde5
D:\VM>dir  C:\VM\GRACE2\ASM\*
 Volume in drive C has no label.
 Volume Serial Number is 20BF-FC17
 Directory of C:\VM\GRACE2\ASM
13.07.2013  13:00     2.147.495.936 asm1_2G.vdi
13.07.2013  12:56     5.368.733.696 asm1_5G.vdi
13.07.2013  13:00     2.147.495.936 asm2_2G.vdi
13.07.2013  12:57     5.368.733.696 asm2_5G.vdi
13.07.2013  13:00     2.147.495.936 asm3_2G.vdi
13.07.2013  12:59     5.368.733.696 asm3_5G.vdi

Attach the disks to the VM

D:\VM>VBoxManage storageattach grac1 --storagectl "SATA" --port 1  --device 0 --type hdd --medium C:\VM\GRACE2\ASM\asm1_5G.vdi
D:\VM>VBoxManage storageattach grac1 --storagectl "SATA" --port 2  --device 0 --type hdd --medium C:\VM\GRACE2\ASM\asm2_5G.vdi
D:\VM>VBoxManage storageattach grac1 --storagectl "SATA" --port 3  --device 0 --type hdd --medium C:\VM\GRACE2\ASM\asm3_5G.vdi
D:\VM>VBoxManage storageattach grac1 --storagectl "SATA" --port 4  --device 0 --type hdd --medium C:\VM\GRACE2\ASM\asm1_2G.vdi
D:\VM>VBoxManage storageattach grac1 --storagectl "SATA" --port 5  --device 0 --type hdd --medium C:\VM\GRACE2\ASM\asm2_2G.vdi
D:\VM>VBoxManage storageattach grac1 --storagectl "SATA" --port 6  --device 0 --type hdd --medium C:\VM\GRACE2\ASM\asm3_2G.vdi

Change the disk type to shareable:
D:\VM>VBoxManage modifyhd C:\VM\GRACE2\ASM\asm1_5G.vdi --type shareable
D:\VM>VBoxManage modifyhd C:\VM\GRACE2\ASM\asm2_5G.vdi --type shareable
D:\VM>VBoxManage modifyhd C:\VM\GRACE2\ASM\asm3_5G.vdi --type shareable
D:\VM>VBoxManage modifyhd C:\VM\GRACE2\ASM\asm1_2G.vdi --type shareable
D:\VM>VBoxManage modifyhd C:\VM\GRACE2\ASM\asm2_2G.vdi --type shareable
D:\VM>VBoxManage modifyhd C:\VM\GRACE2\ASM\asm3_2G.vdi --type shareable
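Typing the eighteen create/attach/modify commands one by one is error-prone; they can be generated in a loop. A minimal sketch, assuming a Linux or macOS host with VBoxManage on the PATH (on Windows, translate the same idea to a batch loop); the `ASM_DIR` path is an example, and `DRY_RUN=1` only prints the commands so you can review them first:

```shell
#!/bin/bash
# Sketch: build the VBoxManage commands for all six shared ASM disks.
# Assumptions: ASM_DIR/VM are example values; DRY_RUN=1 prints instead of running.
ASM_DIR=${ASM_DIR:-/VM/GRACE2/ASM}
VM=${VM:-grac1}
DRY_RUN=${DRY_RUN:-1}
cmds=()
port=1
for spec in asm1_5G:5120 asm2_5G:5120 asm3_5G:5120 \
            asm1_2G:2048 asm2_2G:2048 asm3_2G:2048; do
  name=${spec%%:*} size=${spec##*:}
  # create a fixed-size VDI, attach it to the next SATA port, mark it shareable
  cmds+=("VBoxManage createhd --filename $ASM_DIR/$name.vdi --size $size --format VDI --variant Fixed")
  cmds+=("VBoxManage storageattach $VM --storagectl SATA --port $port --device 0 --type hdd --medium $ASM_DIR/$name.vdi")
  cmds+=("VBoxManage modifyhd $ASM_DIR/$name.vdi --type shareable")
  port=$((port+1))
done
for c in "${cmds[@]}"; do
  if [ "$DRY_RUN" = 1 ]; then echo "$c"; else eval "$c"; fi
done
```

Run once with DRY_RUN=1 to inspect the output, then re-run with DRY_RUN=0 (and the same script can attach the disks to grac2 later by setting VM=grac2 and skipping the createhd/modifyhd lines).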

Reboot and partition the disks

 # ls /dev/sd*
/dev/sda   /dev/sda2  /dev/sdb  /dev/sdd  /dev/sdf
/dev/sda1  /dev/sda3  /dev/sdc  /dev/sde  /dev/sdg
# fdisk /dev/sdb
  Command (m for help): n
  Command action
   e   extended
   p   primary partition (1-4)
  p 
  Partition number (1-4): 1
  First sector (2048-10485759, default 2048): 
  Using default value 2048
  Last sector, +sectors or +size{K,M,G} (2048-10485759, default 10485759): 
  Using default value 10485759
  Command (m for help): w
  The partition table has been altered!
  In each case, the sequence of answers is "n", "p", "1", "Return", "Return" and "w".
  Repeat these steps for /dev/sdb through /dev/sdg.
#  ls /dev/sd*
/dev/sda   /dev/sda3  /dev/sdc   /dev/sdd1  /dev/sdf   /dev/sdg1
/dev/sda1  /dev/sdb   /dev/sdc1  /dev/sde   /dev/sdf1
/dev/sda2  /dev/sdb1  /dev/sdd   /dev/sde1  /dev/sdg
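The interactive fdisk dialog can also be fed its answers non-interactively, so all six disks are partitioned in one pass. A hedged sketch: the answer sequence mirrors the session above (n, p, 1, two defaults, w), the device list is assumed to match your `ls /dev/sd*` output, and `DRY_RUN=1` (the default) only prints what would run, since writing a partition table is destructive:

```shell
#!/bin/bash
# Sketch: create one primary partition spanning each new ASM disk.
# WARNING: destructive when DRY_RUN=0; verify the device list first.
# The answer string mirrors the interactive fdisk session:
# n (new), p (primary), 1, default first/last sector, w (write).
DRY_RUN=${DRY_RUN:-1}
answers='n
p
1


w
'
for d in sdb sdc sdd sde sdf sdg; do
  if [ "$DRY_RUN" = 1 ]; then
    echo "would partition /dev/$d"
  elif [ -b "/dev/$d" ]; then
    printf '%s' "$answers" | fdisk "/dev/$d"
  else
    echo "skip /dev/$d: no such block device"
  fi
done
```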

 

Configure ASMLib and Disks

# /usr/sbin/oracleasm configure -i

#  /etc/init.d/oracleasm createdisk data1 /dev/sdb1
Marking disk "data1" as an ASM disk:                       [  OK  ]
#  /etc/init.d/oracleasm createdisk data2 /dev/sdc1
Marking disk "data2" as an ASM disk:                       [  OK  ]
# /etc/init.d/oracleasm createdisk data3 /dev/sdd1
Marking disk "data3" as an ASM disk:                       [  OK  ]
#  /etc/init.d/oracleasm createdisk ocr1 /dev/sde1
Marking disk "ocr1" as an ASM disk:                        [  OK  ]
# /etc/init.d/oracleasm createdisk ocr2  /dev/sdf1
Marking disk "ocr2" as an ASM disk:                        [  OK  ]
[root@grac1 Desktop]#  /etc/init.d/oracleasm createdisk ocr3 /dev/sdg1
Marking disk "ocr3" as an ASM disk:                        [  OK  ]

# /etc/init.d/oracleasm listdisks
DATA1
DATA2
DATA3
OCR1
OCR2
OCR3

# ls -l /dev/oracleasm/disks
total 0
brw-rw---- 1 grid asmadmin 8, 17 Jul 13 16:32 DATA1
brw-rw---- 1 grid asmadmin 8, 33 Jul 13 16:32 DATA2
brw-rw---- 1 grid asmadmin 8, 49 Jul 13 16:33 DATA3
brw-rw---- 1 grid asmadmin 8, 65 Jul 13 16:33 OCR1
brw-rw---- 1 grid asmadmin 8, 81 Jul 13 16:33 OCR2
brw-rw---- 1 grid asmadmin 8, 97 Jul 13 16:33 OCR3

#  /etc/init.d/oracleasm status 
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes
[root@grac1 Desktop]# /etc/init.d/oracleasm listdisks
DATA1
DATA2
DATA3
OCR1
OCR2
OCR3

# /etc/init.d/oracleasm querydisk -d DATA1
Disk "DATA1" is a valid ASM disk on device [8, 17]
# /etc/init.d/oracleasm querydisk -d DATA2
Disk "DATA2" is a valid ASM disk on device [8, 33]
# /etc/init.d/oracleasm querydisk -d DATA3
Disk "DATA3" is a valid ASM disk on device [8, 49]
# /etc/init.d/oracleasm querydisk -d OCR1
Disk "OCR1" is a valid ASM disk on device [8, 65]
# /etc/init.d/oracleasm querydisk -d OCR2
# /etc/init.d/oracleasm querydisk -d OCR3
Disk "OCR3" is a valid ASM disk on device [8, 97]
# /etc/init.d/oracleasm  scandisks
Scanning the system for Oracle ASMLib disks:               [  OK  ]
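The six createdisk calls can likewise be collapsed into a loop. A sketch, assuming the device-to-label mapping used above holds on your system; the guard makes the script a no-op on hosts where ASMLib is not installed:

```shell
#!/bin/bash
# Sketch: label every partition for ASMLib in one pass.
# Assumption: the mapping below matches the partitions created earlier.
declare -A asmdisks=(
  [data1]=/dev/sdb1 [data2]=/dev/sdc1 [data3]=/dev/sdd1
  [ocr1]=/dev/sde1  [ocr2]=/dev/sdf1  [ocr3]=/dev/sdg1
)
if [ -x /etc/init.d/oracleasm ]; then
  for name in "${!asmdisks[@]}"; do
    /etc/init.d/oracleasm createdisk "$name" "${asmdisks[$name]}"
  done
  /etc/init.d/oracleasm listdisks   # expect DATA1..3 and OCR1..3
else
  echo "oracleasm not installed; nothing labeled"
fi
```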

 

Clone VirtualBox Image

Shut down the grac1 VirtualBox image and manually clone the "grac1.vdi" disk using the following commands on the host server.
D:\VM> set_it
D:\VM> md D:\VM\GNS_RACE2\grac2

D:\VM> VBoxManage clonehd D:\VM\GNS_RACE2\grac1\grac1.vdi d:\VM\GNS_RACE2\grac2\grac2.vdi
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Clone hard disk created in format 'VDI'. UUID: 0d626e95-9354-4f65-8fc0-e40ba44e1
Create new VM grac2 by using disk grac2.vdi

Attach disk to VM: grac2
D:\VM>VBoxManage storageattach grac2 --storagectl "SATA" --port 1  --device 0 --type hdd --medium C:\VM\GRACE2\ASM\asm1_5G.vdi
D:\VM>VBoxManage storageattach grac2 --storagectl "SATA" --port 2  --device 0 --type hdd --medium C:\VM\GRACE2\ASM\asm2_5G.vdi
D:\VM>VBoxManage storageattach grac2 --storagectl "SATA" --port 3  --device 0 --type hdd --medium C:\VM\GRACE2\ASM\asm3_5G.vdi
D:\VM>VBoxManage storageattach grac2 --storagectl "SATA" --port 4  --device 0 --type hdd --medium C:\VM\GRACE2\ASM\asm1_2G.vdi
D:\VM>VBoxManage storageattach grac2 --storagectl "SATA" --port 5  --device 0 --type hdd --medium C:\VM\GRACE2\ASM\asm2_2G.vdi
D:\VM>VBoxManage storageattach grac2 --storagectl "SATA" --port 6  --device 0 --type hdd --medium C:\VM\GRACE2\ASM\asm3_2G.vdi 
Start the "grac2" virtual machine by clicking the "Start" button on the toolbar. Ignore any network errors during the startup.
Log in to the "grac2" virtual machine as the "root" user so we can reconfigure the network settings to match the following.
    hostname: grac2.example.com
    IP Address eth0: 192.168.1.62 (public address)
    Default Gateway eth0: 192.168.1.1 (public address)
    IP Address eth1: 192.168.2.102 (private address)
    Default Gateway eth1: none
Amend the hostname in the "/etc/sysconfig/network" file.
    NETWORKING=yes
    HOSTNAME=grac2.example.com 
Check the MAC address of each of the available network connections. Don't worry that they are listed as "eth2" and "eth3". These are dynamically created connections because the MAC address of the "eth0" and "eth1" connections is incorrect.

# ifconfig -a | grep eth
eth2      Link encap:Ethernet  HWaddr 08:00:27:1F:2E:33  
eth3      Link encap:Ethernet  HWaddr 08:00:27:8E:6D:24  
Edit the "/etc/sysconfig/network-scripts/ifcfg-eth0", amending only the IPADDR and HWADDR settings as follows and deleting the UUID entry. Note, the HWADDR value comes from the "eth2" interface displayed above.
    IPADDR=192.168.1.62
    HWADDR=08:00:27:1F:2E:33 
Edit the "/etc/sysconfig/network-scripts/ifcfg-eth1", amending only the IPADDR and HWADDR settings as follows and deleting the UUID entry. Note, the HWADDR value comes from the "eth3" interface displayed above.
    HWADDR=08:00:27:8E:6D:24
    IPADDR=192.168.2.102
Change the .login file for the grid user:
 setenv ORACLE_SID +ASM2
Remove udev rules:
# rm  /etc/udev/rules.d/70-persistent-net.rules
# reboot
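The ifcfg amendments above (new IPADDR, new HWADDR, dropped UUID line) plus the udev cleanup can be done with one sed-based script. A sketch; the addresses and MAC values are the example values from this clone and must be replaced with the ones reported by your own `ifconfig -a`:

```shell
#!/bin/bash
# Sketch: amend a cloned node's ifcfg files and clear the stale udev rules.
# fix_ifcfg <file> <new-ip> <new-mac>: set IPADDR/HWADDR, delete the UUID line.
fix_ifcfg() {
  local f=$1 ip=$2 mac=$3
  sed -i -e "s/^IPADDR=.*/IPADDR=$ip/" \
         -e "s/^HWADDR=.*/HWADDR=$mac/" \
         -e '/^UUID=/d' "$f"
}
cfg=/etc/sysconfig/network-scripts
if [ -d "$cfg" ]; then
  fix_ifcfg "$cfg/ifcfg-eth0" 192.168.1.62  08:00:27:1F:2E:33
  fix_ifcfg "$cfg/ifcfg-eth1" 192.168.2.102 08:00:27:8E:6D:24
  rm -f /etc/udev/rules.d/70-persistent-net.rules   # regenerated on reboot
else
  echo "no $cfg on this host; nothing changed"
fi
```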
Verify network devices ( use graphical tool if needed for changes )
# ifconfig
eth0      Link encap:Ethernet  HWaddr 08:00:27:1F:2E:33  
          inet addr:192.168.1.62  Bcast:192.168.1.255  Mask:255.255.255.0
..
eth1      Link encap:Ethernet  HWaddr 08:00:27:8E:6D:24  
          inet addr:192.168.2.102  Bcast:192.168.2.255  Mask:255.255.255.0 
..

Check NTP
$ ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 gns.example.com LOCAL(0)        10 u   30   64    1    0.462  2233.72   0.000
 LOCAL(0)        .LOCL.          12 l   29   64    1    0.000    0.000   0.000

Check DHCP
$ grep -i dhcp /var/log/messages
Jul 15 19:12:21 grac1 NetworkManager[1528]: <info> Activation (eth2) Beginning DHCPv4 transaction
Jul 15 19:12:21 grac1 NetworkManager[1528]: <info> Activation (eth2) DHCPv4 will time out in 45 seconds
Jul 15 19:12:21 grac1 NetworkManager[1528]: <info> Activation (eth3) Beginning DHCPv4 transaction
Jul 15 19:12:21 grac1 NetworkManager[1528]: <info> Activation (eth3) DHCPv4 will time out in 45 seconds
Jul 15 19:12:21 grac1 dhclient[1547]: Internet Systems Consortium DHCP Client 4.1.1-P1
Jul 15 19:12:21 grac1 dhclient[1547]: For info, please visit https://www.isc.org/software/dhcp/
Jul 15 19:12:21 grac1 dhclient[1537]: Internet Systems Consortium DHCP Client 4.1.1-P1
Jul 15 19:12:21 grac1 dhclient[1537]: For info, please visit https://www.isc.org/software/dhcp/
Jul 15 19:12:21 grac1 NetworkManager[1528]: <info> (eth2): DHCPv4 state changed nbi -> preinit
Jul 15 19:12:21 grac1 NetworkManager[1528]: <info> (eth3): DHCPv4 state changed nbi -> preinit
Jul 15 19:12:22 grac1 dhclient[1537]: DHCPDISCOVER on eth2 to 255.255.255.255 port 67 interval 4 (xid=0x5ddfdccc)
Jul 15 19:12:23 grac1 dhclient[1547]: DHCPDISCOVER on eth3 to 255.255.255.255 port 67 interval 5 (xid=0x5c751799)
Jul 15 19:12:26 grac1 dhclient[1537]: DHCPDISCOVER on eth2 to 255.255.255.255 port 67 interval 11 (xid=0x5ddfdccc)
Jul 15 19:12:28 grac1 dhclient[1547]: DHCPDISCOVER on eth3 to 255.255.255.255 port 67 interval 11 (xid=0x5c751799)
Jul 15 19:12:32 grac1 dhclient[1537]: DHCPOFFER from 192.168.1.50
Jul 15 19:12:32 grac1 dhclient[1537]: DHCPREQUEST on eth2 to 255.255.255.255 port 67 (xid=0x5ddfdccc)
Jul 15 19:12:32 grac1 dhclient[1537]: DHCPACK from 192.168.1.50 (xid=0x5ddfdccc)
Jul 15 19:12:32 grac1 NetworkManager[1528]: <info> (eth2): DHCPv4 state changed preinit -> bound
Jul 15 19:12:33 grac1 dhclient[1547]: DHCPOFFER from 192.168.1.50
Jul 15 19:12:33 grac1 dhclient[1547]: DHCPREQUEST on eth3 to 255.255.255.255 port 67 (xid=0x5c751799)
Jul 15 19:12:33 grac1 dhclient[1547]: DHCPACK from 192.168.1.50 (xid=0x5c751799)
Jul 15 19:12:33 grac1 NetworkManager[1528]: <info> (eth3): DHCPv4 state changed preinit -> bound
Jul 15 19:27:53 grac2 NetworkManager[1617]: <info> Activation (eth2) Beginning DHCPv4 transaction
Jul 15 19:27:53 grac2 NetworkManager[1617]: <info> Activation (eth2) DHCPv4 will time out in 45 seconds
Jul 15 19:27:53 grac2 dhclient[1637]: Internet Systems Consortium DHCP Client 4.1.1-P1
Jul 15 19:27:53 grac2 dhclient[1637]: For info, please visit https://www.isc.org/software/dhcp/
Jul 15 19:27:53 grac2 dhclient[1637]: DHCPDISCOVER on eth2 to 255.255.255.255 port 67 interval 4 (xid=0x44e12e9)
Jul 15 19:27:53 grac2 NetworkManager[1617]: <info> (eth2): DHCPv4 state changed nbi -> preinit
Jul 15 19:27:57 grac2 dhclient[1637]: DHCPDISCOVER on eth2 to 255.255.255.255 port 67 interval 10 (xid=0x44e12e9)
Jul 15 19:28:03 grac2 dhclient[1637]: DHCPOFFER from 192.168.1.50
Jul 15 19:28:03 grac2 dhclient[1637]: DHCPREQUEST on eth2 to 255.255.255.255 port 67 (xid=0x44e12e9)
Jul 15 19:28:03 grac2 dhclient[1637]: DHCPACK from 192.168.1.50 (xid=0x44e12e9)
Jul 15 19:28:03 grac2 NetworkManager[1617]: <info> (eth2): DHCPv4 state changed preinit -> bound
Jul 15 19:32:52 grac2 NetworkManager[1690]: <info> Activation (eth0) Beginning DHCPv4 transaction
Jul 15 19:32:52 grac2 NetworkManager[1690]: <info> Activation (eth0) DHCPv4 will time out in 45 seconds
Jul 15 19:32:52 grac2 dhclient[1703]: Internet Systems Consortium DHCP Client 4.1.1-P1
Jul 15 19:32:52 grac2 dhclient[1703]: For info, please visit https://www.isc.org/software/dhcp/
Jul 15 19:32:52 grac2 dhclient[1703]: DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 6 (xid=0x6781ea4f)
Jul 15 19:32:52 grac2 NetworkManager[1690]: <info> (eth0): DHCPv4 state changed nbi -> preinit
Jul 15 19:32:58 grac2 dhclient[1703]: DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 12 (xid=0x6781ea4f)
Jul 15 19:33:02 grac2 dhclient[1703]: DHCPOFFER from 192.168.1.50
Jul 15 19:33:02 grac2 dhclient[1703]: DHCPREQUEST on eth0 to 255.255.255.255 port 67 (xid=0x6781ea4f)
Jul 15 19:33:02 grac2 dhclient[1703]: DHCPACK from 192.168.1.50 (xid=0x6781ea4f)
Jul 15 19:33:02 grac2 NetworkManager[1690]: <info> (eth0): DHCPv4 state changed preinit -> bound
Jul 15 19:37:56 grac2 NetworkManager[1690]: <info> (eth0): canceled DHCP transaction, DHCP client pid 1703
Rerun cluvfy for the second node and test GNS connectivity:

Verify GNS: 
$ ./bin/cluvfy comp gns -precrsinst -domain oracle-gns.example.com -vip 192.168.2.72 -verbose -n grac2
Verifying GNS integrity 
Checking GNS integrity...
Checking if the GNS subdomain name is valid...
The GNS subdomain name "oracle-gns.example.com" is a valid domain name
Checking if the GNS VIP is a valid address...
GNS VIP "192.168.2.72" resolves to a valid IP address
Checking the status of GNS VIP...
GNS integrity check passed
Verification of GNS integrity was successful. 
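Beyond cluvfy, GNS delegation can be sanity-checked straight from DNS. A sketch, assuming `dig` (from bind-utils) is installed; the subdomain and SCAN name below are example values from this walkthrough and must be substituted with your own:

```shell
#!/bin/bash
# Sketch: ask the name server directly whether the GNS subdomain is delegated.
# Assumption: GNS_SUBDOMAIN/SCAN_NAME are examples; override via environment.
GNS_SUBDOMAIN=${GNS_SUBDOMAIN:-oracle-gns.example.com}
SCAN_NAME=${SCAN_NAME:-GRACE2-scan.$GNS_SUBDOMAIN}
if command -v dig >/dev/null 2>&1; then
  echo "NS records for $GNS_SUBDOMAIN:"
  dig +short NS "$GNS_SUBDOMAIN"
  echo "SCAN addresses for $SCAN_NAME (expect up to three):"
  dig +short "$SCAN_NAME"
else
  echo "dig not available; install bind-utils"
fi
```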

Verify CRS for both nodes using the newly created ASM disks and the asmadmin group 
$ ./bin/cluvfy stage -pre crsinst -n grac1,grac2 -asm -asmgrp asmadmin -asmdev /dev/oracleasm/disks/DATA1,/dev/oracleasm/disks/DATA2,/dev/oracleasm/disks/DATA3
Performing pre-checks for cluster services setup 
Checking node reachability...
Node reachability check passed from node "grac1"
Checking user equivalence...
User equivalence check passed for user "grid"
Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
Node connectivity passed for subnet "192.168.1.0" with node(s) grac2,grac1
TCP connectivity check passed for subnet "192.168.1.0"
Node connectivity passed for subnet "192.168.2.0" with node(s) grac2,grac1
TCP connectivity check passed for subnet "192.168.2.0"
Node connectivity passed for subnet "169.254.0.0" with node(s) grac2,grac1
TCP connectivity check passed for subnet "169.254.0.0"
Interfaces found on subnet "169.254.0.0" that are likely candidates for VIP are:
grac2 eth1:169.254.86.205
grac1 eth1:169.254.168.215
Interfaces found on subnet "192.168.2.0" that are likely candidates for a private interconnect are:
grac2 eth1:192.168.2.102
grac1 eth1:192.168.2.101
Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.1.0".
Subnet mask consistency check passed for subnet "192.168.2.0".
Subnet mask consistency check passed for subnet "169.254.0.0".
Subnet mask consistency check passed.
Node connectivity check passed
Checking ASMLib configuration.
Check for ASMLib configuration passed.
Total memory check passed
Available memory check passed
Swap space check passed
Free disk space check passed for "grac2:/u01/app/11203/grid,grac2:/tmp"
Free disk space check passed for "grac1:/u01/app/11203/grid,grac1:/tmp"
Check for multiple users with UID value 501 passed 
User existence check passed for "grid"
Group existence check passed for "oinstall"
Group existence check passed for "dba"
Group existence check passed for "asmadmin"
Membership check for user "grid" in group "oinstall" [as Primary] passed
Membership check for user "grid" in group "dba" passed
Membership check for user "grid" in group "asmadmin" passed
Run level check passed
Hard limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
System architecture check passed
Kernel version check passed
Kernel parameter check passed for "semmsl"
Kernel parameter check passed for "semmns"
Kernel parameter check passed for "semopm"
Kernel parameter check passed for "semmni"
Kernel parameter check passed for "shmmax"
Kernel parameter check passed for "shmmni"
Kernel parameter check passed for "shmall"
Kernel parameter check passed for "file-max"
Kernel parameter check passed for "ip_local_port_range"
Kernel parameter check passed for "rmem_default"
Kernel parameter check passed for "rmem_max"
Kernel parameter check passed for "wmem_default"
Kernel parameter check passed for "wmem_max"
Kernel parameter check passed for "aio-max-nr"
Package existence check passed for "binutils"
Package existence check passed for "compat-libcap1"
Package existence check passed for "compat-libstdc++-33(x86_64)"
Package existence check passed for "libgcc(x86_64)"
Package existence check passed for "libstdc++(x86_64)"
Package existence check passed for "libstdc++-devel(x86_64)"
Package existence check passed for "sysstat"
Package existence check passed for "gcc"
Package existence check passed for "gcc-c++"
Package existence check passed for "ksh"
Package existence check passed for "make"
Package existence check passed for "glibc(x86_64)"
Package existence check passed for "glibc-devel(x86_64)"
Package existence check passed for "libaio(x86_64)"
Package existence check passed for "libaio-devel(x86_64)"
Check for multiple users with UID value passed 
Current group ID check passed
Starting check for consistency of primary group of root user
Check for consistency of root user's primary group passed
Package existence check passed for "cvuqdisk"
Checking Devices for ASM...
Checking for shared devices...
  Device                                Device Type             
  ------------------------------------  ------------------------
  /dev/oracleasm/disks/DATA3            Disk                    
  /dev/oracleasm/disks/DATA2            Disk                    
  /dev/oracleasm/disks/DATA1            Disk                    
Checking consistency of device owner across all nodes...
Consistency check of device owner for "/dev/oracleasm/disks/DATA3" PASSED
Consistency check of device owner for "/dev/oracleasm/disks/DATA1" PASSED
Consistency check of device owner for "/dev/oracleasm/disks/DATA2" PASSED
Checking consistency of device group across all nodes...
Consistency check of device group for "/dev/oracleasm/disks/DATA3" PASSED
Consistency check of device group for "/dev/oracleasm/disks/DATA1" PASSED
Consistency check of device group for "/dev/oracleasm/disks/DATA2" PASSED
Checking consistency of device permissions across all nodes...
Consistency check of device permissions for "/dev/oracleasm/disks/DATA3" PASSED
Consistency check of device permissions for "/dev/oracleasm/disks/DATA1" PASSED
Consistency check of device permissions for "/dev/oracleasm/disks/DATA2" PASSED
Checking consistency of device size across all nodes...
Consistency check of device size for "/dev/oracleasm/disks/DATA3" PASSED
Consistency check of device size for "/dev/oracleasm/disks/DATA1" PASSED
Consistency check of device size for "/dev/oracleasm/disks/DATA2" PASSED
UDev attributes check for ASM Disks started...
ERROR: 
PRVF-9802 : Attempt to get udev info from node "grac2" failed
ERROR: 
PRVF-9802 : Attempt to get udev info from node "grac1" failed
UDev attributes check failed for ASM Disks 
Devices check for ASM passed
Starting Clock synchronization checks using Network Time Protocol(NTP)...
NTP Configuration file check started...
NTP Configuration file check passed
Checking daemon liveness...
Liveness check passed for "ntpd"
Check for NTP daemon or service alive passed on all nodes
NTP daemon slewing option check passed
NTP daemon's boot time configuration check for slewing option passed
NTP common Time Server Check started...
Check of common NTP Time Server passed
Clock time offset check from NTP Time Server started...
Clock time offset check passed
Clock synchronization check using Network Time Protocol(NTP) passed
Core file name pattern consistency check passed.
User "grid" is not part of "root" group. Check passed
Default user file creation mask check passed
Checking consistency of file "/etc/resolv.conf" across nodes
File "/etc/resolv.conf" does not have both domain and search entries defined
domain entry in file "/etc/resolv.conf" is consistent across nodes
search entry in file "/etc/resolv.conf" is consistent across nodes
All nodes have one search entry defined in file "/etc/resolv.conf"
PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: grac2,grac1
File "/etc/resolv.conf" is not consistent across nodes
Time zone consistency check passed
Pre-check for cluster services setup was unsuccessful on all the nodes. 
Ignore PRVF-9802 and PRVF-5636; both are well-known warnings in this VirtualBox setup.

 

Install Clusterware Software

As user root 
# xhost +
    access control disabled, clients can connect from any host
As user grid
$  xclock      ( Testing X connection )
$ cd /KITS/Oracle/11.2.0.3/Linux_64/grid   ( your grid staging area )
$ ./runInstaller  
--> Important : Select Installation type : Advanced Installation
Cluster name   GRACE2  
Scan name:     GRACE2-scan.grid.example.com
Scan port:     1521
Configure GNS
GNS sub domain:  grid.example.com
GNS VIP address: 192.168.1.55
   ( This address shouldn't be in use:   # ping 192.168.1.55 should fail ) 
  Hostname:  grac1.example.com     Virtual hostname: AUTO
  Hostname:  grac2.example.com     Virtual hostname: AUTO 
Test and configure SSH connectivity 
Configure ASM disk string: /dev/oracleasm/disks/*
ASM password: sys 
Don't use IPMI
Don't change groups
ORACLE_BASE: /u01/app/grid
Software Location: /u01/app/11.2.0/grid
--> Check OUI Prerequisites Check 
  -> Ignore the well-known PRVF-5636 and PRVF-9802 errors/warnings (see the earlier cluvfy reports) 
Install software and run the related root.sh scripts

Run on grac1:  /u01/app/11203/grid/root.sh
Performing root user operation for Oracle 11g 
The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/11203/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]: 
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11203/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
OLR initialization - successful
  root wallet
  root wallet cert
  root cert export
  peer wallet
  profile reader wallet
  pa wallet
  peer wallet keys
  pa wallet keys
  peer cert request
  pa cert request
  peer cert
  pa cert
  peer root cert TP
  profile reader root cert TP
  pa root cert TP
  peer pa cert TP
  pa peer cert TP
  profile reader pa cert TP
  profile reader peer cert TP
  peer user cert
  pa user cert
Adding Clusterware entries to upstart
CRS-2672: Attempting to start 'ora.mdnsd' on 'grac1'
CRS-2676: Start of 'ora.mdnsd' on 'grac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'grac1'
CRS-2676: Start of 'ora.gpnpd' on 'grac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'grac1'
CRS-2672: Attempting to start 'ora.gipcd' on 'grac1'
CRS-2676: Start of 'ora.cssdmonitor' on 'grac1' succeeded
CRS-2676: Start of 'ora.gipcd' on 'grac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'grac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'grac1'
CRS-2676: Start of 'ora.diskmon' on 'grac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'grac1' succeeded
ASM created and started successfully.
Disk Group DATA created successfully.
clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4256: Updating the profile
Successful addition of voting disk 3ee007b399cc4f59bfa0fc80ff3fa9ff.
Successful addition of voting disk 7a73147a81dc4f71bfc8757343aee181.
Successful addition of voting disk 25fcfbdb854a4f49bf0addd0fa32d0a2.
Successfully replaced voting disk group with +DATA.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   3ee007b399cc4f59bfa0fc80ff3fa9ff (/dev/oracleasm/disks/DATA1) [DATA]
 2. ONLINE   7a73147a81dc4f71bfc8757343aee181 (/dev/oracleasm/disks/DATA2) [DATA]
 3. ONLINE   25fcfbdb854a4f49bf0addd0fa32d0a2 (/dev/oracleasm/disks/DATA3) [DATA]
Located 3 voting disk(s).
CRS-2672: Attempting to start 'ora.asm' on 'grac1'
CRS-2676: Start of 'ora.asm' on 'grac1' succeeded
CRS-2672: Attempting to start 'ora.DATA.dg' on 'grac1'
CRS-2676: Start of 'ora.DATA.dg' on 'grac1' succeeded
Configure Oracle Grid Infrastructure for a Cluster ... succeeded

Run on grac2:  /u01/app/11203/grid/root.sh
# /u01/app/11203/grid/root.sh
Performing root user operation for Oracle 11g 
The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/11203/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]: 
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11203/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
OLR initialization - successful
Adding Clusterware entries to upstart
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node grac1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
Configure Oracle Grid Infrastructure for a Cluster ... succeeded

Run cluvfy and crsctl to verify the Oracle Grid installation
$ ./bin/cluvfy stage -post crsinst -n grac1,grac2 -verbose
Performing post-checks for cluster services setup 
Checking node reachability...
Check: Node reachability from node "grac1"
  Destination Node                      Reachable?              
  ------------------------------------  ------------------------
  grac2                                 yes                     
  grac1                                 yes                     
Result: Node reachability check passed from node "grac1"
Checking user equivalence...
Check: User equivalence for user "grid"
  Node Name                             Status                  
  ------------------------------------  ------------------------
  grac2                                 passed                  
  grac1                                 passed                  
Result: User equivalence check passed for user "grid"
Checking node connectivity...
Checking hosts config file...
  Node Name                             Status                  
  ------------------------------------  ------------------------
  grac2                                 passed                  
  grac1                                 passed                  
Verification of the hosts config file successful
Interface information for node "grac2"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU   
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   192.168.1.62    192.168.1.0     0.0.0.0         168.1.0.1       08:00:27:1F:2E:33 1500  
 eth0   192.168.1.112   192.168.1.0     0.0.0.0         168.1.0.1       08:00:27:1F:2E:33 1500  
 eth0   192.168.1.108   192.168.1.0     0.0.0.0         168.1.0.1       08:00:27:1F:2E:33 1500  
 eth1   192.168.2.102   192.168.2.0     0.0.0.0         168.1.0.1       08:00:27:8E:6D:24 1500  
 eth1   169.254.86.205  169.254.0.0     0.0.0.0         168.1.0.1       08:00:27:8E:6D:24 1500  
Interface information for node "grac1"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU   
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   192.168.1.61    192.168.1.0     0.0.0.0         192.168.1.1     08:00:27:6E:17:DB 1500  
 eth0   192.168.1.55    192.168.1.0     0.0.0.0         192.168.1.1     08:00:27:6E:17:DB 1500  
 eth0   192.168.1.110   192.168.1.0     0.0.0.0         192.168.1.1     08:00:27:6E:17:DB 1500  
 eth0   192.168.1.109   192.168.1.0     0.0.0.0         192.168.1.1     08:00:27:6E:17:DB 1500  
 eth0   192.168.1.107   192.168.1.0     0.0.0.0         192.168.1.1     08:00:27:6E:17:DB 1500  
 eth1   192.168.2.101   192.168.2.0     0.0.0.0         192.168.1.1     08:00:27:F5:31:22 1500  
 eth1   169.254.168.215 169.254.0.0     0.0.0.0         192.168.1.1     08:00:27:F5:31:22 1500  
Check: Node connectivity for interface "eth0"
  Source                          Destination                     Connected?      
  ------------------------------  ------------------------------  ----------------
  grac2[192.168.1.62]             grac2[192.168.1.112]            yes             
  grac2[192.168.1.62]             grac2[192.168.1.108]            yes             
  grac2[192.168.1.62]             grac1[192.168.1.61]             yes             
  grac2[192.168.1.62]             grac1[192.168.1.55]             yes             
  grac2[192.168.1.62]             grac1[192.168.1.110]            yes             
  grac2[192.168.1.62]             grac1[192.168.1.109]            yes             
  grac2[192.168.1.62]             grac1[192.168.1.107]            yes             
  grac2[192.168.1.112]            grac2[192.168.1.108]            yes             
  grac2[192.168.1.112]            grac1[192.168.1.61]             yes             
  grac2[192.168.1.112]            grac1[192.168.1.55]             yes             
  grac2[192.168.1.112]            grac1[192.168.1.110]            yes             
  grac2[192.168.1.112]            grac1[192.168.1.109]            yes             
  grac2[192.168.1.112]            grac1[192.168.1.107]            yes             
  grac2[192.168.1.108]            grac1[192.168.1.61]             yes             
  grac2[192.168.1.108]            grac1[192.168.1.55]             yes             
  grac2[192.168.1.108]            grac1[192.168.1.110]            yes             
  grac2[192.168.1.108]            grac1[192.168.1.109]            yes             
  grac2[192.168.1.108]            grac1[192.168.1.107]            yes             
  grac1[192.168.1.61]             grac1[192.168.1.55]             yes             
  grac1[192.168.1.61]             grac1[192.168.1.110]            yes             
  grac1[192.168.1.61]             grac1[192.168.1.109]            yes             
  grac1[192.168.1.61]             grac1[192.168.1.107]            yes             
  grac1[192.168.1.55]             grac1[192.168.1.110]            yes             
  grac1[192.168.1.55]             grac1[192.168.1.109]            yes             
  grac1[192.168.1.55]             grac1[192.168.1.107]            yes             
  grac1[192.168.1.110]            grac1[192.168.1.109]            yes             
  grac1[192.168.1.110]            grac1[192.168.1.107]            yes             
  grac1[192.168.1.109]            grac1[192.168.1.107]            yes             
Result: Node connectivity passed for interface "eth0"
Check: TCP connectivity of subnet "192.168.1.0"
  Source                          Destination                     Connected?      
  ------------------------------  ------------------------------  ----------------
  grac1:192.168.1.61              grac2:192.168.1.62              passed          
  grac1:192.168.1.61              grac2:192.168.1.112             passed          
  grac1:192.168.1.61              grac2:192.168.1.108             passed          
  grac1:192.168.1.61              grac1:192.168.1.55              passed          
  grac1:192.168.1.61              grac1:192.168.1.110             passed          
  grac1:192.168.1.61              grac1:192.168.1.109             passed          
  grac1:192.168.1.61              grac1:192.168.1.107             passed          
Result: TCP connectivity check passed for subnet "192.168.1.0"
Check: Node connectivity for interface "eth1"
  Source                          Destination                     Connected?      
  ------------------------------  ------------------------------  ----------------
  grac2[192.168.2.102]            grac1[192.168.2.101]            yes             
Result: Node connectivity passed for interface "eth1"
Check: TCP connectivity of subnet "192.168.2.0"
  Source                          Destination                     Connected?      
  ------------------------------  ------------------------------  ----------------
  grac1:192.168.2.101             grac2:192.168.2.102             passed          
Result: TCP connectivity check passed for subnet "192.168.2.0"
Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.1.0".
Subnet mask consistency check passed for subnet "192.168.2.0".
Subnet mask consistency check passed.
Result: Node connectivity check passed
Checking multicast communication...
Checking subnet "192.168.1.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.1.0" for multicast communication with multicast group "230.0.1.0" passed.
Checking subnet "192.168.2.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.2.0" for multicast communication with multicast group "230.0.1.0" passed.
Check of multicast communication passed.
Check: Time zone consistency 
Result: Time zone consistency check passed
Checking Oracle Cluster Voting Disk configuration...
ASM Running check passed. ASM is running on all specified nodes
Oracle Cluster Voting Disk configuration check passed
Checking Cluster manager integrity... 
Checking CSS daemon...
  Node Name                             Status                  
  ------------------------------------  ------------------------
  grac2                                 running                 
  grac1                                 running                 
Oracle Cluster Synchronization Services appear to be online.
Cluster manager integrity check passed
UDev attributes check for OCR locations started...
Result: UDev attributes check passed for OCR locations 
UDev attributes check for Voting Disk locations started...
Result: UDev attributes check passed for Voting Disk locations 
Check default user file creation mask
  Node Name     Available                 Required                  Comment   
  ------------  ------------------------  ------------------------  ----------
  grac2         22                        0022                      passed    
  grac1         22                        0022                      passed    
Result: Default user file creation mask check passed
Checking cluster integrity...
  Node Name                           
  ------------------------------------
  grac1                               
  grac2                               
Cluster integrity check passed
Checking OCR integrity...
Checking the absence of a non-clustered configuration...
All nodes free of non-clustered, local-only configurations
ASM Running check passed. ASM is running on all specified nodes
Checking OCR config file "/etc/oracle/ocr.loc"...
OCR config file "/etc/oracle/ocr.loc" check successful
Disk group for ocr location "+DATA" available on all the nodes
NOTE: 
This check does not verify the integrity of the OCR contents. Execute 'ocrcheck' as a privileged user to verify the contents of OCR.
OCR integrity check passed
Checking CRS integrity...
Clusterware version consistency passed
The Oracle Clusterware is healthy on node "grac2"
The Oracle Clusterware is healthy on node "grac1"
CRS integrity check passed
Checking node application existence...
Checking existence of VIP node application (required)
  Node Name     Required                  Running?                  Comment   
  ------------  ------------------------  ------------------------  ----------
  grac2         yes                       yes                       passed    
  grac1         yes                       yes                       passed    
VIP node application check passed
Checking existence of NETWORK node application (required)
  Node Name     Required                  Running?                  Comment   
  ------------  ------------------------  ------------------------  ----------
  grac2         yes                       yes                       passed    
  grac1         yes                       yes                       passed    
NETWORK node application check passed
Checking existence of GSD node application (optional)
  Node Name     Required                  Running?                  Comment   
  ------------  ------------------------  ------------------------  ----------
  grac2         no                        no                        exists    
  grac1         no                        no                        exists    
GSD node application is offline on nodes "grac2,grac1"
Checking existence of ONS node application (optional)
  Node Name     Required                  Running?                  Comment   
  ------------  ------------------------  ------------------------  ----------
  grac2         no                        yes                       passed    
  grac1         no                        yes                       passed    
ONS node application check passed
Checking Single Client Access Name (SCAN)...
  SCAN Name         Node          Running?      ListenerName  Port          Running?    
  ----------------  ------------  ------------  ------------  ------------  ------------
  GRACE2-scan.grid.example.com  grac2         true          LISTENER_SCAN1  1521          true        
  GRACE2-scan.grid.example.com  grac1         true          LISTENER_SCAN2  1521          true        
  GRACE2-scan.grid.example.com  grac1         true          LISTENER_SCAN3  1521          true        
Checking TCP connectivity to SCAN Listeners...
  Node          ListenerName              TCP connectivity?       
  ------------  ------------------------  ------------------------
  grac1         LISTENER_SCAN1            yes                     
  grac1         LISTENER_SCAN2            yes                     
  grac1         LISTENER_SCAN3            yes                     
TCP connectivity to SCAN Listeners exists on all cluster nodes
Checking name resolution setup for "GRACE2-scan.grid.example.com"...
  SCAN Name     IP Address                Status                    Comment   
  ------------  ------------------------  ------------------------  ----------
  GRACE2-scan.grid.example.com  192.168.1.110             passed                              
  GRACE2-scan.grid.example.com  192.168.1.109             passed                              
  GRACE2-scan.grid.example.com  192.168.1.108             passed                              
Verification of SCAN VIP and Listener setup passed
Checking OLR integrity...
Checking OLR config file...
OLR config file check successful
Checking OLR file attributes...
OLR file check successful
WARNING: 
This check does not verify the integrity of the OLR contents. Execute 'ocrcheck -local' as a privileged user to verify the contents of OLR.
OLR integrity check passed
Checking GNS integrity...
Checking if the GNS subdomain name is valid...
The GNS subdomain name "grid.example.com" is a valid domain name
Checking if the GNS VIP belongs to same subnet as the public network...
Public network subnets "192.168.1.0" match with the GNS VIP "192.168.1.0"
Checking if the GNS VIP is a valid address...
GNS VIP "192.168.1.55" resolves to a valid IP address
Checking the status of GNS VIP...
Checking if FDQN names for domain "grid.example.com" are reachable
GNS resolved IP addresses are reachable
GNS resolved IP addresses are reachable
GNS resolved IP addresses are reachable
GNS resolved IP addresses are reachable
GNS resolved IP addresses are reachable
Checking status of GNS resource...
  Node          Running?                  Enabled?                
  ------------  ------------------------  ------------------------
  grac2         no                        yes                     
  grac1         yes                       yes                     
GNS resource configuration check passed
Checking status of GNS VIP resource...
  Node          Running?                  Enabled?                
  ------------  ------------------------  ------------------------
  grac2         no                        yes                     
  grac1         yes                       yes                     
GNS VIP resource configuration check passed.
GNS integrity check passed
Checking to make sure user "grid" is not in "root" group
  Node Name     Status                    Comment                 
  ------------  ------------------------  ------------------------
  grac2         passed                    does not exist          
  grac1         passed                    does not exist          
Result: User "grid" is not part of "root" group. Check passed
Checking if Clusterware is installed on all nodes...
Check of Clusterware install passed
Checking if CTSS Resource is running on all nodes...
Check: CTSS Resource running on all nodes
  Node Name                             Status                  
  ------------------------------------  ------------------------
  grac2                                 passed                  
  grac1                                 passed                  
Result: CTSS resource check passed
Querying CTSS for time offset on all nodes...
Result: Query of CTSS for time offset passed
Check CTSS state started...
Check: CTSS state
  Node Name                             State                   
  ------------------------------------  ------------------------
  grac2                                 Observer                
  grac1                                 Observer                
CTSS is in Observer state. Switching over to clock synchronization checks using NTP
Starting Clock synchronization checks using Network Time Protocol(NTP)...
NTP Configuration file check started...
The NTP configuration file "/etc/ntp.conf" is available on all nodes
NTP Configuration file check passed
Checking daemon liveness...
Check: Liveness for "ntpd"
  Node Name                             Running?                
  ------------------------------------  ------------------------
  grac2                                 yes                     
  grac1                                 yes                     
Result: Liveness check passed for "ntpd"
Check for NTP daemon or service alive passed on all nodes
Checking NTP daemon command line for slewing option "-x"
Check: NTP daemon command line
  Node Name                             Slewing Option Set?     
  ------------------------------------  ------------------------
  grac2                                 yes                     
  grac1                                 yes                     
Result: 
NTP daemon slewing option check passed
Checking NTP daemon's boot time configuration, in file "/etc/sysconfig/ntpd", for slewing option "-x"
Check: NTP daemon's boot time configuration
  Node Name                             Slewing Option Set?     
  ------------------------------------  ------------------------
  grac2                                 yes                     
  grac1                                 yes                     
Result: 
NTP daemon's boot time configuration check for slewing option passed
Checking whether NTP daemon or service is using UDP port 123 on all nodes
Check for NTP daemon or service using UDP port 123
  Node Name                             Port Open?              
  ------------------------------------  ------------------------
  grac2                                 yes                     
  grac1                                 yes                     
NTP common Time Server Check started...
NTP Time Server ".LOCL." is common to all nodes on which the NTP daemon is running
Check of common NTP Time Server passed
Clock time offset check from NTP Time Server started...
Checking on nodes "[grac2, grac1]"... 
Check: Clock time offset from NTP Time Server
Time Server: .LOCL. 
Time Offset Limit: 1000.0 msecs
  Node Name     Time Offset               Status                  
  ------------  ------------------------  ------------------------
  grac2         0.0                       passed                  
  grac1         0.0                       passed                  
Time Server ".LOCL." has time offsets that are within permissible limits for nodes "[grac2, grac1]". 
Clock time offset check passed
Result: Clock synchronization check using Network Time Protocol(NTP) passed
Oracle Cluster Time Synchronization Services check passed
Checking VIP configuration.
Checking VIP Subnet configuration.
Check for VIP Subnet configuration passed.
Checking VIP reachability
Check for VIP reachability passed.
Post-check for cluster services setup was successful. 

Checking CRS status after installation
$ my_crs_stat
NAME                           TARGET     STATE           SERVER       STATE_DETAILS   
-------------------------      ---------- ----------      ------------ ------------------
ora.DATA.dg                    ONLINE     ONLINE          grac1         
ora.DATA.dg                    ONLINE     ONLINE          grac2         
ora.LISTENER.lsnr              ONLINE     ONLINE          grac1         
ora.LISTENER.lsnr              ONLINE     ONLINE          grac2         
ora.asm                        ONLINE     ONLINE          grac1        Started 
ora.asm                        ONLINE     ONLINE          grac2        Started 
ora.gsd                        OFFLINE    OFFLINE         grac1         
ora.gsd                        OFFLINE    OFFLINE         grac2         
ora.net1.network               ONLINE     ONLINE          grac1         
ora.net1.network               ONLINE     ONLINE          grac2         
ora.ons                        ONLINE     ONLINE          grac1         
ora.ons                        ONLINE     ONLINE          grac2         
ora.LISTENER_SCAN1.lsnr        ONLINE     ONLINE          grac2         
ora.LISTENER_SCAN2.lsnr        ONLINE     ONLINE          grac1         
ora.LISTENER_SCAN3.lsnr        ONLINE     ONLINE          grac1         
ora.cvu                        ONLINE     ONLINE          grac1         
ora.gns                        ONLINE     ONLINE          grac1         
ora.gns.vip                    ONLINE     ONLINE          grac1         
ora.grac1.vip                  ONLINE     ONLINE          grac1         
ora.grac2.vip                  ONLINE     ONLINE          grac2         
ora.oc4j                       ONLINE     ONLINE          grac1         
ora.scan1.vip                  ONLINE     ONLINE          grac2         
ora.scan2.vip                  ONLINE     ONLINE          grac1         
ora.scan3.vip                  ONLINE     ONLINE          grac1                              

Grid post installation - ologgerd process consumes high CPU time
  It has been noticed that after a while the ologgerd process can consume excessive CPU resources. 
  ologgerd is part of the Oracle Cluster Health Monitor and is used by Oracle Support to troubleshoot RAC problems. 
  You can check this by starting top (sometimes we see up to 60% WA states):
  top - 15:02:38 up 15 min,  6 users,  load average: 3.70, 2.54, 1.78
    Tasks: 215 total,   2 running, 213 sleeping,   0 stopped,   0 zombie
    Cpu(s):  3.6%us,  8.9%sy,  0.0%ni, 55.4%id, 31.4%wa,  0.0%hi,  0.8%si,  0.0%st
    Mem:   3234376k total,  2512568k used,   721808k free,   108508k buffers
    Swap:  3227644k total,        0k used,  3227644k free,  1221196k cached
      PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND            
     5602 root      RT   0  501m 145m  60m S 48.1  4.6   0:31.29 ologgerd    
If the ologgerd process is consuming a lot of CPU, you can stop it by executing:
# crsctl stop resource ora.crf -init
  Now top looks good as idle CPU time increases from 55% to 95%:
  hrac1: 
    top - 15:07:56 up 20 min,  6 users,  load average: 2.57, 3.33, 2.41
    Tasks: 212 total,   1 running, 211 sleeping,   0 stopped,   0 zombie
    Cpu(s):  1.3%us,  4.2%sy,  0.0%ni, 94.3%id,  0.1%wa,  0.0%hi,  0.2%si,  0.0%st
    Mem:   3234376k total,  2339268k used,   895108k free,   132604k buffers
    Swap:  3227644k total,        0k used,  3227644k free,  1126964k cached
  hrac2:   
    top - 15:48:37 up 33 min,  3 users,  load average: 2.63, 2.40, 2.13
    Tasks: 204 total,   1 running, 203 sleeping,   0 stopped,   0 zombie
    Cpu(s):  0.9%us,  3.3%sy,  0.0%ni, 95.6%id,  0.1%wa,  0.0%hi,  0.2%si,  0.0%st
    Mem:   2641484k total,  1975444k used,   666040k free,   158212k buffers
    Swap:  3227644k total,        0k used,  3227644k free,   993328k cached
 If you want to disable ologgerd permanently, then execute:
 # crsctl delete resource ora.crf -init
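The check-then-stop procedure above can be sketched as a small helper. This is a minimal sketch, not part of the original article: the function name `check_ologgerd` is hypothetical, and it only prints the crsctl commands shown above rather than running them.

```shell
# Hypothetical helper: report ologgerd CPU usage and show the commands
# from this section. Assumes a Linux host with procps (ps -C).
check_ologgerd() {
  local cpu
  # %cpu of the ologgerd process; empty when the process is not running
  cpu=$(ps -C ologgerd -o %cpu= 2>/dev/null | head -1)
  if [ -z "$cpu" ]; then
    echo "ologgerd not running"
  else
    echo "ologgerd CPU: ${cpu}%"
    echo "To stop:    crsctl stop resource ora.crf -init"
    echo "To disable: crsctl delete resource ora.crf -init"
  fi
}
check_ologgerd
```

Run it as root on each node; only stop or delete ora.crf if you do not need Cluster Health Monitor data for Oracle Support.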

 

Fixing a failed GRID Installation

Fixing a failed Grid Installation ( run these commands on all nodes )
[grid@grac31 ~]$ rm -rf  /u01/app/11203/grid/*
[grid@grac31 ~]$ rm /u01/app/oraInventory/*
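For a multi-node cluster the two rm commands must be repeated on every node. A dry-run loop sketch, under the assumption that passwordless ssh for the grid user is set up (as required for the install anyway); the node list and paths are the ones used in this article, adjust them to your setup. The loop only echoes the commands; remove the echo once you have verified the paths.

```shell
# Assumptions: grid user equivalence via ssh, and the article's paths.
GRID_HOME=/u01/app/11203/grid
INVENTORY=/u01/app/oraInventory
NODES="grac1 grac2"

for node in $NODES; do
  # dry run: print what would be executed on each node
  echo ssh grid@"$node" "rm -rf $GRID_HOME/* $INVENTORY/*"
done
```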

 

Install RDBMS and  create RAC database

Log in as the oracle user and verify the account
$ id
  uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),501(vboxsf),506(asmdba),54322(dba)
$ env | grep ORA 
  ORACLE_BASE=/u01/app/oracle
  ORACLE_SID=RACE2
  ORACLE_HOME=/u01/app/oracle/product/11203/racdb
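The environment checked above can be set in the oracle user's profile. A minimal sketch using the paths from this article; the ORACLE_SID value is per node and is an assumption here (after database creation srvctl reports the instances as GRACE21 and GRACE22).

```shell
# Minimal ~/.bash_profile fragment for the oracle user (article's paths).
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11203/racdb
export ORACLE_SID=GRACE21            # GRACE22 on the second node (assumption)
export PATH=$ORACLE_HOME/bin:$PATH
```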

Verify the system by running cluvfy with: stage -pre dbinst
$ ./bin/cluvfy stage -pre dbinst -n grac1,grac2
Performing pre-checks for database installation 
Checking node reachability...
Node reachability check passed from node "grac1"
Checking user equivalence...
User equivalence check passed for user "oracle"
Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
Check: Node connectivity for interface "eth0"
Node connectivity passed for interface "eth0"
TCP connectivity check passed for subnet "192.168.1.0"
Check: Node connectivity for interface "eth1"
Node connectivity passed for interface "eth1"
TCP connectivity check passed for subnet "192.168.2.0"
Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.1.0".
Subnet mask consistency check passed for subnet "192.168.2.0".
Subnet mask consistency check passed.
Node connectivity check passed
Checking multicast communication...
Checking subnet "192.168.1.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.1.0" for multicast communication with multicast group "230.0.1.0" passed.
Checking subnet "192.168.2.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.2.0" for multicast communication with multicast group "230.0.1.0" passed.
Check of multicast communication passed.
Total memory check passed
Available memory check passed
Swap space check passed
Free disk space check passed for "grac2:/tmp"
Free disk space check passed for "grac1:/tmp"
Check for multiple users with UID value 54321 passed 
User existence check passed for "oracle"
Group existence check passed for "oinstall"
Group existence check passed for "dba"
Membership check for user "oracle" in group "oinstall" [as Primary] passed
Membership check for user "oracle" in group "dba" passed
Run level check passed
Hard limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
System architecture check passed
Kernel version check passed
...
Check for multiple users with UID value 0 passed 
Current group ID check passed
Starting check for consistency of primary group of root user
Check for consistency of root user's primary group passed
Default user file creation mask check passed
Checking CRS integrity...
Clusterware version consistency passed
CRS integrity check passed
Checking Cluster manager integrity... 
Checking CSS daemon...
Oracle Cluster Synchronization Services appear to be online.
Cluster manager integrity check passed
Checking node application existence...
Checking existence of VIP node application (required)
VIP node application check passed
Checking existence of NETWORK node application (required)
NETWORK node application check passed
Checking existence of GSD node application (optional)
GSD node application is offline on nodes "grac2,grac1"
Checking existence of ONS node application (optional)
ONS node application check passed
Checking if Clusterware is installed on all nodes...
Check of Clusterware install passed
Checking if CTSS Resource is running on all nodes...
CTSS resource check passed
Querying CTSS for time offset on all nodes...
Query of CTSS for time offset passed
Check CTSS state started...
CTSS is in Observer state. Switching over to clock synchronization checks using NTP
Starting Clock synchronization checks using Network Time Protocol(NTP)...
NTP Configuration file check started...
NTP Configuration file check passed
Checking daemon liveness...
Liveness check passed for "ntpd"
Check for NTP daemon or service alive passed on all nodes
NTP daemon slewing option check passed
NTP daemon's boot time configuration check for slewing option passed
NTP common Time Server Check started...
Check of common NTP Time Server passed
Clock time offset check from NTP Time Server started...
Clock time offset check passed
Clock synchronization check using Network Time Protocol(NTP) passed
Oracle Cluster Time Synchronization Services check passed
Checking consistency of file "/etc/resolv.conf" across nodes
File "/etc/resolv.conf" does not have both domain and search entries defined
domain entry in file "/etc/resolv.conf" is consistent across nodes
search entry in file "/etc/resolv.conf" is consistent across nodes
All nodes have one search entry defined in file "/etc/resolv.conf"
PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: grac2,grac1
File "/etc/resolv.conf" is not consistent across nodes
Time zone consistency check passed
Checking Single Client Access Name (SCAN)...
Checking TCP connectivity to SCAN Listeners...
TCP connectivity to SCAN Listeners exists on all cluster nodes
Checking name resolution setup for "GRACE2-scan.grid.example.com"...
Verification of SCAN VIP and Listener setup passed
Checking VIP configuration.
Checking VIP Subnet configuration.
Check for VIP Subnet configuration passed.
Checking VIP reachability
Check for VIP reachability passed.
ASM and CRS versions are compatible
Database Clusterware version compatibility passed
Pre-check for database installation was unsuccessful on all the nodes. 
Note: the failure is caused by the PRVF-5636 DNS response time warning reported above; in this lab setup with a local name server it can be ignored.

Run cluvfy with:  stage -pre dbcfg
$ ./bin/cluvfy stage -pre dbcfg -n grac1,grac2 -d $ORACLE_HOME
Performing pre-checks for database configuration 
ERROR: 
Unable to determine OSDBA group from Oracle Home "/u01/app/oracle/product/11203/racdb"
Checking node reachability...
Node reachability check passed from node "grac1"
Checking user equivalence...
User equivalence check passed for user "oracle"
Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
Check: Node connectivity for interface "eth0"
Node connectivity passed for interface "eth0"
ERROR: 
PRVF-7617 : Node connectivity between "grac1 : 192.168.1.61" and "grac2 : 192.168.1.108" failed
TCP connectivity check failed for subnet "192.168.1.0"
Check: Node connectivity for interface "eth1"
Node connectivity passed for interface "eth1"
TCP connectivity check passed for subnet "192.168.2.0"
Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.1.0".
Subnet mask consistency check passed for subnet "192.168.2.0".
Subnet mask consistency check passed.
Node connectivity check failed
Checking multicast communication...
Checking subnet "192.168.1.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.1.0" for multicast communication with multicast group "230.0.1.0" passed.
Checking subnet "192.168.2.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.2.0" for multicast communication with multicast group "230.0.1.0" passed.
Check of multicast communication passed.
Total memory check passed
Available memory check passed
Swap space check passed
Free disk space check passed for "grac2:/u01/app/oracle/product/11203/racdb,grac2:/tmp"
Free disk space check passed for "grac1:/u01/app/oracle/product/11203/racdb,grac1:/tmp"
Check for multiple users with UID value 54321 passed 
User existence check passed for "oracle"
Group existence check passed for "oinstall"
Membership check for user "oracle" in group "oinstall" [as Primary] passed
Run level check passed
Hard limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
System architecture check passed
Kernel version check passed
...
Package existence check passed for "libaio-devel(x86_64)"
Check for multiple users with UID value 0 passed 
Current group ID check passed
Starting check for consistency of primary group of root user
Check for consistency of root user's primary group passed
Checking CRS integrity...
Clusterware version consistency passed
CRS integrity check passed
Checking node application existence...
Checking existence of VIP node application (required)
VIP node application check passed
Checking existence of NETWORK node application (required)
NETWORK node application check passed
Checking existence of GSD node application (optional)
GSD node application is offline on nodes "grac2,grac1"
Checking existence of ONS node application (optional)
ONS node application check passed
Time zone consistency check passed
Pre-check for database configuration was unsuccessful on all the nodes. 

Ignore ERROR: 
   Unable to determine OSDBA group from Oracle Home "/u01/app/oracle/product/11203/racdb"
   -> The Oracle software isn't installed yet, so cluvfy can't find $ORACLE_HOME/bin/osdbagrp
    stat("/u01/app/oracle/product/11203/racdb/bin/osdbagrp", 0x7fff2fd6e530) = -1 ENOENT (No such file or directory) 
   Run cluvfy stage -pre dbcfg only after you have installed the software and before you have created the database.
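The reason for the error can be confirmed with a quick check. A sketch, with the hypothetical helper name `check_osdbagrp`: it tests whether the osdbagrp binary that cluvfy calls exists under the given Oracle Home.

```shell
# Hypothetical check: cluvfy derives the OSDBA group from
# $ORACLE_HOME/bin/osdbagrp, which only exists after the RDBMS install.
check_osdbagrp() {
  local home=$1
  if [ -x "$home/bin/osdbagrp" ]; then
    echo "osdbagrp present: $("$home/bin/osdbagrp")"
  else
    echo "osdbagrp missing under $home - expected before the RDBMS install"
  fi
}
check_osdbagrp /u01/app/oracle/product/11203/racdb
```

If the binary is missing the cluvfy error above is expected and can be ignored; after the software install the check should report the OSDBA group.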

Run Installer
As user Root 
  # xhost +
    access control disabled, clients can connect from any host
As user Oracle
  $ xclock      ( Testing X connection )
  $ cd /KITS/Oracle/11.2.0.3/Linux_64/database  ( rdbms staging area ) 
  $ ./runInstaller ( select SERVER class )
     Node Name           : grac1,grac2  
     Storage type        : ASM
     Location            : DATA
     OSDBA group         : asmdba
     Global database name: GRACE2
On grac1 run:  /u01/app/oracle/product/11203/racdb/root.sh
On grac2 run:  /u01/app/oracle/product/11203/racdb/root.sh
Enterprise Manager Database Control URL - (RACE2) :   https://hrac1.de.oracle.com:1158/em

Verify RAC Install
$ my_crs_stat
NAME                           TARGET     STATE           SERVER       STATE_DETAILS   
-------------------------      ---------- ----------      ------------ ------------------
ora.DATA.dg                    ONLINE     ONLINE          grac1         
ora.DATA.dg                    ONLINE     ONLINE          grac2         
ora.LISTENER.lsnr              ONLINE     ONLINE          grac1         
ora.LISTENER.lsnr              ONLINE     ONLINE          grac2         
ora.asm                        ONLINE     ONLINE          grac1        Started 
ora.asm                        ONLINE     ONLINE          grac2        Started 
ora.gsd                        OFFLINE    OFFLINE         grac1         
ora.gsd                        OFFLINE    OFFLINE         grac2         
ora.net1.network               ONLINE     ONLINE          grac1         
ora.net1.network               ONLINE     ONLINE          grac2         
ora.ons                        ONLINE     ONLINE          grac1         
ora.ons                        ONLINE     ONLINE          grac2         
ora.LISTENER_SCAN1.lsnr        ONLINE     ONLINE          grac2         
ora.LISTENER_SCAN2.lsnr        ONLINE     ONLINE          grac1         
ora.LISTENER_SCAN3.lsnr        ONLINE     ONLINE          grac1         
ora.cvu                        ONLINE     ONLINE          grac1         
ora.gns                        ONLINE     ONLINE          grac1         
ora.gns.vip                    ONLINE     ONLINE          grac1         
ora.grac1.vip                  ONLINE     ONLINE          grac1         
ora.grac2.vip                  ONLINE     ONLINE          grac2         
ora.grace2.db                  ONLINE     ONLINE          grac1        Open 
ora.grace2.db                  ONLINE     ONLINE          grac2        Open 
ora.oc4j                       ONLINE     ONLINE          grac1         
ora.scan1.vip                  ONLINE     ONLINE          grac2         
ora.scan2.vip                  ONLINE     ONLINE          grac1         
ora.scan3.vip                  ONLINE     ONLINE          grac1     

$ srvctl  status database -d GRACE2
Instance GRACE21 is running on node grac1
Instance GRACE22 is running on node grac2

$GRID_HOME/bin/olsnodes -n
grac1    1
grac2    2

 

Reference