Install RAC 12.1.0.2 PM – Policy Managed

Feature Overview

  • FLEX ASM
  • Policy Managed Database (PM)
  • GNS – requirement for Policy Managed Database
  • UDEV to manage ASM disks

 Versions used

  • VirtualBox 4.3.20
  • OEL 6.6
  • Oracle RAC 12.1.0.2 using UDEV, Policy Managed Database, and the Flex ASM feature

 Virtualbox Images used

  • ns1      – Name Server / DHCP server running on IP address 192.168.5.50
  • oel66    – Basic Linux image which we use for cloning [ the VBox image we need to install first ]
  • hract21  – Cluster node 1
  • hract22  – Cluster node 2
  • hract23  – Cluster node 3

  For installing the nameserver please read the article mentioned below:

Create VirtualBox Image OEL66 as our RAC Provisioning Server

  - Install an OEL 6.6 base Linux system with all needed packages 
  - Set up network, groups, and users to allow this RAC Provisioning Server to be used:
     For cloning 
     For a quick reinstall 
     For adding a new node to our cluster 

Download OEL V52218-01.iso and attach this ISO image as your boot CD.
During the Linux installation --> choose "Customize now".
The "Package Group Selection" screen allows you to select the required package groups, and
individual packages within the details section. When you've made your selection, click the "Next" button.
If you want the server to have a regular GNOME desktop you need to include the following package
groups from the "Desktops" section:

    Desktop
    Desktop Platform
    Fonts
    General Purpose Desktop
    Graphical Administration Tools
    X Window System
For details see : http://oracle-base.com/articles/linux/oracle-linux-6-installation.php

Network Configuration 
Status: We assume the DNS/DHCP server is already running!

Please read the following article to configure:  Virtualbox Network devices for RAC and Internet Access

Install the newest available Kernel package version : 
[root@hract21 Desktop]# yum update
Loaded plugins: refresh-packagekit, security
Setting up Update Process
public_ol6_UEKR3_latest                                  | 1.2 kB     00:00     
public_ol6_UEKR3_latest/primary                          |  11 MB     00:08     
public_ol6_UEKR3_latest                                                 288/288
public_ol6_latest                                        | 1.4 kB     00:00     
public_ol6_latest/primary                                |  45 MB     00:50     
public_ol6_latest                                                   29161/29161
Resolving Dependencies
--> Running transaction check
---> Package at.x86_64 0:3.1.10-43.el6_2.1 will be updated
---> Package at.x86_64 0:3.1.10-44.el6_6.2 will be an update
..
  xorg-x11-server-Xorg.x86_64 0:1.15.0-25.el6_6                                 
  xorg-x11-server-common.x86_64 0:1.15.0-25.el6_6                               
  yum-rhn-plugin.noarch 0:0.9.1-52.0.1.el6_6                                    
Complete!

Determine your current kernel versions 
[root@hract21 Desktop]# uname -a
Linux hract21.example.com 3.8.13-55.1.2.el6uek.x86_64 #2 SMP Thu Dec 18 00:15:51 PST 2014 x86_64 x86_64 x86_64 GNU/Linux

Find the related kernel packages for downloading 
[root@hract21 Desktop]# yum list | grep 3.8.13-55
kernel-uek.x86_64                    3.8.13-55.1.2.el6uek     @public_ol6_UEKR3_latest
kernel-uek-devel.x86_64              3.8.13-55.1.2.el6uek     @public_ol6_UEKR3_latest
kernel-uek-doc.noarch                3.8.13-55.1.2.el6uek     @public_ol6_UEKR3_latest
kernel-uek-firmware.noarch           3.8.13-55.1.2.el6uek     @public_ol6_UEKR3_latest
dtrace-modules-3.8.13-55.1.1.el6uek.x86_64
dtrace-modules-3.8.13-55.1.2.el6uek.x86_64
dtrace-modules-3.8.13-55.el6uek.x86_64
kernel-uek-debug.x86_64              3.8.13-55.1.2.el6uek     public_ol6_UEKR3_latest
kernel-uek-debug-devel.x86_64        3.8.13-55.1.2.el6uek     public_ol6_UEKR3_latest

[root@hract21 Desktop]# yum install kernel-uek-devel.x86_64
Loaded plugins: refresh-packagekit, security
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package kernel-uek-devel.x86_64 0:3.8.13-55.1.2.el6uek will be installed
--> Finished Dependency Resolution

Verify installed kernel sources
[root@hract21 Desktop]#  ls /usr/src/kernels
2.6.32-504.3.3.el6.x86_64  3.8.13-55.1.2.el6uek.x86_64
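
The Guest Additions rebuild in the next step only succeeds when kernel sources matching the running kernel are installed. A minimal sketch of that check (the function name and the optional directory parameter are mine, not part of any tool):

```shell
#!/bin/bash
# Sketch: verify that kernel devel sources matching the *running* kernel
# exist under /usr/src/kernels before running "vboxadd setup".
kernel_devel_present() {
    local running="$1"                      # e.g. $(uname -r)
    local src_dir="${2:-/usr/src/kernels}"  # overridable for testing
    [ -d "${src_dir}/${running}" ]
}

if kernel_devel_present "$(uname -r)"; then
    echo "kernel devel sources match the running kernel"
else
    echo "WARNING: install kernel-uek-devel for $(uname -r) first"
fi
```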

Rebuild the VirtualBox non-DKMS kernel modules 
[root@hract21 Desktop]# /etc/init.d/vboxadd setup
Removing existing VirtualBox non-DKMS kernel modules       [  OK  ]
Building the VirtualBox Guest Additions kernel modules
Building the main Guest Additions module                   [  OK  ]
Building the shared folder support module                  [  OK  ]
Building the OpenGL support module                         [  OK  ]
Doing non-kernel setup of the Guest Additions              [  OK  ]
Starting the VirtualBox Guest Additions                    [  OK  ]

--> Reboot system and verify Virtualbox device drivers
[root@hract21 Desktop]# lsmod | grep vbox
vboxsf                 38015  0 
vboxguest             263369  7 vboxsf
vboxvideo               2154  1 
drm                   274140  2 vboxvideo

Verify Network setup 
[root@hract21 network-scripts]# cat /etc/resolv.conf
# Generated by NetworkManager
search example.com grid12c.example.com
nameserver 192.168.5.50

After configuring your network devices / Firewall status:
[root@hract21 network-scripts]# ifconfig 
eth0      Link encap:Ethernet  HWaddr 08:00:27:31:B8:A0  
          inet addr:192.168.1.8  Bcast:192.168.1.255  Mask:255.255.255.0
  
eth1      Link encap:Ethernet  HWaddr 08:00:27:7B:E2:09  
          inet addr:192.168.5.121  Bcast:192.168.5.255  Mask:255.255.255.0

eth2      Link encap:Ethernet  HWaddr 08:00:27:C8:CA:AD  
          inet addr:192.168.2.121  Bcast:192.168.2.255  Mask:255.255.255.0

eth3      Link encap:Ethernet  HWaddr 08:00:27:2A:0B:EC  
          inet addr:192.168.3.121  Bcast:192.168.3.255  Mask:255.255.255.0

Note: eth0 gets a DHCP address from our router, whereas eth1, eth2 and eth3 have fixed addresses.
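
Each fixed-address interface needs an ifcfg file under /etc/sysconfig/network-scripts. A sketch of a helper that writes one (the function name and the netmask default are my assumptions; adjust to your subnets):

```shell
#!/bin/bash
# Sketch (hypothetical helper): write a static ifcfg-ethN file like the
# ones used on the RAC nodes. HWADDR pins the interface name to the MAC.
write_ifcfg() {
    local dev="$1" mac="$2" ip="$3"
    local dir="${4:-/etc/sysconfig/network-scripts}"  # overridable for testing
    cat > "${dir}/ifcfg-${dev}" <<EOF
DEVICE=${dev}
HWADDR=${mac}
IPADDR=${ip}
NETMASK=255.255.255.0
BOOTPROTO=none
ONBOOT=yes
EOF
}

# Example: eth1 is the public RAC interface on 192.168.5.0/24
# write_ifcfg eth1 08:00:27:7B:E2:09 192.168.5.121
```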

At this point DNS name resolution and ping should work, even for Internet hosts:
[root@hract21 network-scripts]#  ping google.de
PING google.de (173.194.112.191) 56(84) bytes of data.
64 bytes from fra07s32-in-f31.1e100.net (173.194.112.191): icmp_seq=1 ttl=47 time=40.6 ms
64 bytes from fra07s32-in-f31.1e100.net (173.194.112.191): icmp_seq=2 ttl=47 time=40.2 ms

For additional info you may read: 
 - Configure DNS, NTP and DHCP  for a mixed RAC/Internet usage  

Setup NTP ( this is the NTP setup for the nameserver! ) 
As we have an Internet connection we can use the default configured NTP servers 
[root@hract21 Desktop]#  more /etc/ntp.conf
server 0.rhel.pool.ntp.org iburst
server 1.rhel.pool.ntp.org iburst
server 2.rhel.pool.ntp.org iburst
server 3.rhel.pool.ntp.org iburst

# service ntpd stop
Shutting down ntpd:                                        [  OK  ]

If your RAC is going to be permanently connected to your main network and you want to use NTP, you must add 
the "-x" option into the following line in the "/etc/sysconfig/ntpd" file.

    OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
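
Editing that line on every node by hand is error-prone. A sketch of an idempotent one-shot edit (function name is mine; it takes the file as a parameter so you can try it on a copy before touching /etc/sysconfig/ntpd):

```shell
#!/bin/bash
# Sketch: insert "-x" (slew mode) at the start of the ntpd OPTIONS value
# unless it is already present.
add_ntpd_slew() {
    local f="$1"
    grep -q '^OPTIONS=".*-x' "$f" && return 0   # already set, do nothing
    sed -i 's/^OPTIONS="/OPTIONS="-x /' "$f"
}

# On a RAC node (as root): add_ntpd_slew /etc/sysconfig/ntpd
```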

Then restart NTP and verify your setup 
Note: NTP must run on all cluster nodes and of course on your nameserver 
# service ntpd restart

[root@hract21 Desktop]# ntpq -p
     remote           refid     st t when poll reach   delay   offset  jitter
==============================================================================
*ridcully.episod 5.9.39.5         3 u    -   64    1   47.564  -66.443   3.231
 alvo.fungus.at  193.170.62.252   3 u    1   64    1   52.608  -68.424   2.697
 main.macht.org  192.53.103.103   2 u    1   64    1   56.734  -71.436   3.186
 formularfetisch 131.188.3.223    2 u    -   64    1   37.748  -78.676  13.875

Add to rc.local 
service ntpd stop
ntpdate -u 192.168.5.50
service ntpd start


OS setup : 
Turn off and disable the iptables firewall, disable SELinux and disable the Avahi daemon
# service iptables stop
iptables: Flushing firewall rules:                         [  OK  ]
iptables: Setting chains to policy ACCEPT: filter          [  OK  ]
iptables: Unloading modules:                               [  OK  ]
# chkconfig iptables off
# chkconfig --list iptables
iptables        0:off   1:off   2:off   3:off   4:off   5:off   6:off

[root@hract21 Desktop]# service iptables status
iptables: Firewall is not running.

Disable SELinux. Open the config file and change the SELINUX variable from enforcing to disabled.
[root@hract21 Desktop]#   cat /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these two values:
#     targeted - Targeted processes are protected,
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted

Disable AVAHI daemon  [ only if running ]
# /etc/init.d/avahi-daemon stop
To disable it: 
# /sbin/chkconfig  avahi-daemon off

Install some helpful packages
[root@hract21 network-scripts]# yum install wireshark
[root@hract21 network-scripts]# yum install wireshark-gnome
Install X11 applications like xclock
[root@hract21 network-scripts]# yum install xorg-x11-apps
[root@hract21 network-scripts]# yum install telnet 

[grid@hract21 rpm]$ cd /media/sf_kits/Oracle/12.1.0.2/grid/rpm
[root@hract21 rpm]#  rpm -iv cvuqdisk-1.0.9-1.rpm
Preparing packages for installation...
Using default group oinstall to install package
cvuqdisk-1.0.9-1

[root@hract21 network-scripts]# yum list | grep oracle-rdbms-server
oracle-rdbms-server-11gR2-preinstall.x86_64
oracle-rdbms-server-12cR1-preinstall.x86_64
[root@hract21 network-scripts]# yum install oracle-rdbms-server-12cR1-preinstall.x86_64
Loaded plugins: refresh-packagekit, security
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package oracle-rdbms-server-12cR1-preinstall.x86_64 0:1.0-12.el6 will be installed

Create Users and Groups
Add to grid .bashrc 
export ORACLE_BASE=/u01/app/grid
export ORACLE_SID=+ASM1
export GRID_HOME=/u01/app/121/grid
export ORACLE_HOME=$GRID_HOME
export PATH=$ORACLE_HOME/bin:.:$PATH
export HOST=`/bin/hostname`
alias h=history
unalias ls 
alias sys='sqlplus / as sysdba'
alias sql='sqlplus scott/tiger'

Add to oracle .bashrc 
export ORACLE_BASE=/u01/app/grid
export ORACLE_SID=ract2
export GRID_HOME=/u01/app/121/grid
export ORACLE_HOME=/u01/app/oracle/product/121/rac121
export PATH=$ORACLE_HOME/bin:.:$PATH
export  LD_LIBRARY_PATH=$ORACLE_HOME/lib:.
export HOST=`/bin/hostname`
alias h=history
unalias ls 
alias sys='sqlplus / as sysdba'
alias sql='sqlplus scott/tiger'

export ORACLE_SID=`ps -elf | grep ora_smon | grep -v grep | awk ' { print  substr( $15,10) }' `
export CLASSPATH=$ORACLE_HOME/jdbc/lib/ojdbc6_g.jar:.
echo  "-> Active ORACLE_SID:  " $ORACLE_SID 

alias h=history 
alias oh='cd $ORACLE_HOME'
alias sys1='sqlplus sys/sys@ract2_1 as sysdba'
alias sys2='sqlplus sys/sys@ract2_2 as sysdba'
alias sys3='sqlplus sys/sys@ract2_3 as sysdba'
alias sql1='sqlplus scott/tiger@ract1'
alias sql2='sqlplus scott/tiger@ract2'
alias sql3='sqlplus scott/tiger@ract3'
alias trc1='cd /u01/app/oracle/diag/rdbms/ract2/ract2_1/trace'
alias trc2='cd /u01/app/oracle/diag/rdbms/ract2/ract2_2/trace'
alias trc3='cd /u01/app/oracle/diag/rdbms/ract2/ract2_3/trace'
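
The `awk '{ print substr($15,10) }'` call above depends on the exact column layout of `ps -elf` and on the 9-character `ora_smon_` prefix lining up in field 15. A sketch of a less position-dependent alternative (the function name is mine):

```shell
#!/bin/bash
# Sketch: derive ORACLE_SID by stripping the "ora_smon_" prefix from the
# smon process name, instead of relying on awk field positions. Reads
# process listing lines on stdin so it can be tested with sample input.
active_sid() {
    grep -o 'ora_smon_[A-Za-z0-9_$+]*' | head -1 | sed 's/^ora_smon_//'
}

# Usage in .bashrc:
# export ORACLE_SID=$(ps -e -o cmd | active_sid)
```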

Create groups: 
[root@hract21 network-scripts]# /usr/sbin/groupadd -g 501 oinstall
groupadd: group 'oinstall' already exists
[root@hract21 network-scripts]# /usr/sbin/groupadd -g 502 dba
groupadd: group 'dba' already exists
[root@hract21 network-scripts]# /usr/sbin/groupadd -g 504 asmadmin
[root@hract21 network-scripts]# /usr/sbin/groupadd -g 506 asmdba
[root@hract21 network-scripts]# /usr/sbin/groupadd -g 507 asmoper
[root@hract21 network-scripts]# /usr/sbin/useradd -u 501 -g oinstall -G asmadmin,asmdba,asmoper grid
[root@hract21 network-scripts]# /usr/sbin//userdel oracle
[root@hract21 network-scripts]# /usr/sbin/useradd -u 502 -g oinstall -G dba,asmdba oracle

[root@hract21 network-scripts]#  su - oracle 
[oracle@hract21 ~]$ id
uid=502(oracle) gid=54321(oinstall) groups=54321(oinstall),506(asmdba),54322(dba) 

[root@hract21 network-scripts]# su - grid
[grid@hract21 ~]$ id
uid=501(grid) gid=54321(oinstall) groups=54321(oinstall),504(asmadmin),506(asmdba),507(asmoper) 

Note: the oracle-rdbms-server-12cR1-preinstall package had already created the oinstall (gid 54321) and dba (gid 54322) groups, which is why the groupadd commands above reported "already exists".

For the C shell (csh or tcsh), add the following lines to the /etc/csh.login file:
  if ( $USER = "oracle" || $USER = "grid" ) then
  limit maxproc 16384
  limit descriptors 65536
  endif

Modify  /etc/security/limits.conf
oracle   soft   nofile    1024
oracle   hard   nofile    65536
oracle   soft   nproc    2047
oracle   hard   nproc    16384
oracle   soft   stack    10240
oracle   hard   stack    32768
grid     soft   nofile    1024
grid     hard   nofile    65536
grid     soft   nproc    2047
grid     hard   nproc    16384
grid     soft   stack    10240
grid     hard   stack    32768
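
A quick sanity check that all twelve limit lines made it into the file can save a cluvfy round-trip later. A sketch (function name is mine; the file path is a parameter so a copy can be checked):

```shell
#!/bin/bash
# Sketch: confirm both installation users have soft/hard entries for
# nofile, nproc and stack in a limits.conf-style file.
check_limits() {
    local f="$1" rc=0 u t h
    for u in oracle grid; do
        for t in nofile nproc stack; do
            for h in soft hard; do
                grep -Eq "^${u}[[:space:]]+${h}[[:space:]]+${t}" "$f" \
                    || { echo "missing: $u $h $t"; rc=1; }
            done
        done
    done
    return $rc
}

# On the node: check_limits /etc/security/limits.conf
```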

Create Directories:
 - Use a separate ORACLE_BASE for the GRID and RDBMS installs!
Create the Oracle Inventory Directory
To create the Oracle Inventory directory, enter the following commands as the root user:
  # mkdir -p /u01/app/oraInventory
  # chown -R grid:oinstall /u01/app/oraInventory

Creating the Oracle Grid Infrastructure Home Directory
To create the Grid Infrastructure home directory, enter the following commands as the root user:
  # mkdir -p /u01/app/grid
  # chown -R grid:oinstall /u01/app/grid
  # chmod -R 775 /u01/app/grid
  # mkdir -p /u01/app/121/grid
  # chown -R grid:oinstall /u01/app/121/grid
  # chmod -R 775 /u01/app/121/grid

Creating the Oracle Base Directory
  To create the Oracle Base directory, enter the following commands as the root user:
  # mkdir -p /u01/app/oracle
  # chown -R oracle:oinstall /u01/app/oracle
  # chmod -R 775 /u01/app/oracle

Creating the Oracle RDBMS Home Directory
  To create the Oracle RDBMS Home directory, enter the following commands as the root user:
  # mkdir -p /u01/app/oracle/product/121/rac121
  # chown -R oracle:oinstall /u01/app/oracle/product/121/rac121
  # chmod -R 775 /u01/app/oracle/product/121/rac121
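
The four mkdir/chown/chmod sequences above can be collapsed into one loop. A sketch (function name is mine; it takes the base directory as a parameter so the layout can be tried under a scratch directory first, and it prints the chown commands rather than executing them, since those must be run as root on the real tree):

```shell
#!/bin/bash
# Sketch: create the GRID/RDBMS directory layout under a given base and
# emit the matching chown commands for root to run.
create_oracle_dirs() {
    local base="${1:-/u01}" spec owner rest group dir
    for spec in \
        "grid:oinstall:${base}/app/oraInventory" \
        "grid:oinstall:${base}/app/grid" \
        "grid:oinstall:${base}/app/121/grid" \
        "oracle:oinstall:${base}/app/oracle" \
        "oracle:oinstall:${base}/app/oracle/product/121/rac121"
    do
        owner="${spec%%:*}"; rest="${spec#*:}"
        group="${rest%%:*}"; dir="${rest#*:}"
        mkdir -p "$dir"
        chmod 775 "$dir"
        echo "chown -R ${owner}:${group} ${dir}"
    done
}

# As root on the node: create_oracle_dirs /u01 | sh
```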

Download cluvfy from : http://www.oracle.com/technetwork/database/options/clustering/downloads/index.html

Cluster Verification Utility Download for Oracle Grid Infrastructure 12c 
Note: The latest CVU version (July 2013) can be used with all currently supported Oracle RAC versions, including Oracle RAC 10g, 
      Oracle RAC 11g and Oracle RAC 12c.

Unzip cluvfy:
[grid@hract21 CLUVFY]$ unzip /tmp/cvupack_Linux_x86_64.zip
[grid@hract21 CLUVFY]$ pwd
/home/grid/CLUVFY
[grid@hract21 CLUVFY]$ ls
bin  clone  crs  css  cv  deinstall  diagnostics  has  install  jdbc  jdk  jlib  lib  network  nls  oracore  oui  srvm  utl  xdk
[grid@hract21 CLUVFY]$ bin/cluvfy -version
12.1.0.1.0 Build 112713x8664

Run cluvfy to verify the current OS installation  
Verify the OS setup :
As the grid user
$ ./bin/cluvfy comp sys -p crs -n hract21,hract22,hract23 -verbose -fixup
--> If needed, run the fix script and/or fix the underlying problems 
As the root user, verify the DHCP setup 

Verify DHCP setup :
[root@hract21 CLUVFY]#  ./bin/cluvfy comp dhcp -clustername  ract2 -verbose
Verifying DHCP Check 
Checking if any DHCP server exists on the network...
DHCP server returned server: 192.168.5.50, loan address: 192.168.5.218/255.255.255.0, lease time: 21600
At least one DHCP server exists on the network and is listening on port 67
Checking if DHCP server has sufficient free IP addresses for all VIPs...
Sending DHCP "DISCOVER" packets for client ID "ract2-scan1-vip"
DHCP server returned server: 192.168.5.50, loan address: 192.168.5.218/255.255.255.0, lease time: 21600
Sending DHCP "REQUEST" packets for client ID "ract2-scan1-vip"
.. 

Verify GNS setup : 
[grid@hract21 CLUVFY]$  ./bin/cluvfy comp gns -precrsinst -domain grid12c.example.com  -vip 192.168.5.58 
Verifying GNS integrity 
Checking GNS integrity...
The GNS subdomain name "grid12c.example.com" is a valid domain name
GNS VIP "192.168.5.58" resolves to a valid IP address
GNS integrity check passed
Verification of GNS integrity was successful. 
--> Note: you may get the PRVF-5229 warning if this address is in use [ maybe by a different GNS VIP ]

At this point we have created a base system which we will now clone 3x for our RAC nodes 

Clone base system

You may first change the default machine folder in File -> Preferences:
M:\VM\RAC_OEL66_12102

Cloning ract21 :
Now cleanly shut down your reference/clone system 
Virtualbox -> Clone [ name the clone ract21 ] -> Reinitialize the MAC addresses -> Full Clone 

Boot the system a first time and retrieve the new MAC addresses 
[root@hract21 Desktop]# dmesg |grep eth
e1000 0000:00:03.0 eth0: (PCI:33MHz:32-bit) 08:00:27:e7:c0:6b
e1000 0000:00:03.0 eth0: Intel(R) PRO/1000 Network Connection
e1000 0000:00:08.0 eth1: (PCI:33MHz:32-bit) 08:00:27:7d:8e:49
e1000 0000:00:08.0 eth1: Intel(R) PRO/1000 Network Connection
e1000 0000:00:09.0 eth2: (PCI:33MHz:32-bit) 08:00:27:4e:c9:bf
e1000 0000:00:09.0 eth2: Intel(R) PRO/1000 Network Connection
e1000 0000:00:0a.0 eth3: (PCI:33MHz:32-bit) 08:00:27:3b:89:bf
e1000 0000:00:0a.0 eth3: Intel(R) PRO/1000 Network Connection

[root@hract21 network-scripts]# egrep 'HWADDR|IP' ifcfg-eth*
ifcfg-eth0:HWADDR=08:00:27:e7:c0:6b
ifcfg-eth1:HWADDR=08:00:27:7d:8e:49
ifcfg-eth1:IPADDR=192.168.5.121
ifcfg-eth2:HWADDR=08:00:27:4e:c9:bf
ifcfg-eth2:IPADDR=192.168.2.121
ifcfg-eth3:HWADDR=08:00:27:3b:89:bf
ifcfg-eth3:IPADDR=192.168.3.121
 
Remove the persistent net rules file 
[root@hract21 Desktop]# rm  /etc/udev/rules.d/70-persistent-net.rules
rm: remove regular file `/etc/udev/rules.d/70-persistent-net.rules'? y

Change the hostname in "/etc/sysconfig/network" 
[root@gract21 network-scripts]# cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=hract21.example.com
NTPSERVERARGS=iburst
# oracle-rdbms-server-12cR1-preinstall : Add NOZEROCONF=yes
NOZEROCONF=yes

-> Finally, reboot the system and verify the network setup

[root@hract21 network-scripts]# ifconfig | egrep 'eth|inet addr'
eth0      Link encap:Ethernet  HWaddr 08:00:27:E7:C0:6B  
          inet addr:192.168.1.14  Bcast:192.168.1.255  Mask:255.255.255.0
eth1      Link encap:Ethernet  HWaddr 08:00:27:7D:8E:49  
          inet addr:192.168.5.121  Bcast:192.168.5.255  Mask:255.255.255.0
eth2      Link encap:Ethernet  HWaddr 08:00:27:4E:C9:BF  
          inet addr:192.168.2.121  Bcast:192.168.2.255  Mask:255.255.255.0
eth3      Link encap:Ethernet  HWaddr 08:00:27:3B:89:BF  
          inet addr:192.168.3.121  Bcast:192.168.3.255  Mask:255.255.255.0

root@hract21 network-scripts]# netstat -rn
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
0.0.0.0         192.168.1.1     0.0.0.0         UG        0 0          0 eth0
192.168.1.0     0.0.0.0         255.255.255.0   U         0 0          0 eth0
192.168.2.0     0.0.0.0         255.255.255.0   U         0 0          0 eth2
192.168.3.0     0.0.0.0         255.255.255.0   U         0 0          0 eth3
192.168.5.0     0.0.0.0         255.255.255.0   U         0 0          0 eth1

Repeat these steps for ract22 and ract23 !

Create ASM Disks

cd M:\VM\RAC_OEL66_12102

VBoxManage createhd --filename M:\VM\RAC_OEL66_12102\asm1_12102_10G.vdi --size 10240 --format VDI --variant Fixed
VBoxManage createhd --filename M:\VM\RAC_OEL66_12102\asm2_12102_10G.vdi --size 10240 --format VDI --variant Fixed
VBoxManage createhd --filename M:\VM\RAC_OEL66_12102\asm3_12102_10G.vdi --size 10240 --format VDI --variant Fixed
VBoxManage createhd --filename M:\VM\RAC_OEL66_12102\asm4_12102_10G.vdi --size 10240 --format VDI --variant Fixed

VBoxManage modifyhd  asm1_12102_10G.vdi  --type shareable
VBoxManage modifyhd  asm2_12102_10G.vdi  --type shareable
VBoxManage modifyhd  asm3_12102_10G.vdi  --type shareable
VBoxManage modifyhd  asm4_12102_10G.vdi  --type shareable

VBoxManage storageattach ract21 --storagectl "SATA" --port 1 --device 0 --type hdd --medium asm1_12102_10G.vdi --mtype shareable
VBoxManage storageattach ract21 --storagectl "SATA" --port 2 --device 0 --type hdd --medium asm2_12102_10G.vdi --mtype shareable
VBoxManage storageattach ract21 --storagectl "SATA" --port 3 --device 0 --type hdd --medium asm3_12102_10G.vdi --mtype shareable
VBoxManage storageattach ract21 --storagectl "SATA" --port 4 --device 0 --type hdd --medium asm4_12102_10G.vdi --mtype shareable
   
VBoxManage storageattach ract22 --storagectl "SATA" --port 1 --device 0 --type hdd --medium asm1_12102_10G.vdi --mtype shareable
VBoxManage storageattach ract22 --storagectl "SATA" --port 2 --device 0 --type hdd --medium asm2_12102_10G.vdi --mtype shareable
VBoxManage storageattach ract22 --storagectl "SATA" --port 3 --device 0 --type hdd --medium asm3_12102_10G.vdi --mtype shareable
VBoxManage storageattach ract22 --storagectl "SATA" --port 4 --device 0 --type hdd --medium asm4_12102_10G.vdi --mtype shareable

VBoxManage storageattach ract23 --storagectl "SATA" --port 1 --device 0 --type hdd --medium asm1_12102_10G.vdi --mtype shareable
VBoxManage storageattach ract23 --storagectl "SATA" --port 2 --device 0 --type hdd --medium asm2_12102_10G.vdi --mtype shareable
VBoxManage storageattach ract23 --storagectl "SATA" --port 3 --device 0 --type hdd --medium asm3_12102_10G.vdi --mtype shareable
VBoxManage storageattach ract23 --storagectl "SATA" --port 4 --device 0 --type hdd --medium asm4_12102_10G.vdi --mtype shareable
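
The twelve near-identical storageattach commands above can be generated from two loops instead of typed out. A sketch (the function name is mine; it prints the commands so you can review them, then pipe the output to a shell to execute):

```shell
#!/bin/bash
# Sketch: emit one VBoxManage storageattach command per node/port pair,
# matching the disk naming used above (asm<port>_12102_10G.vdi).
gen_attach_cmds() {
    local node port
    for node in ract21 ract22 ract23; do
        for port in 1 2 3 4; do
            echo "VBoxManage storageattach $node --storagectl \"SATA\"" \
                 "--port $port --device 0 --type hdd" \
                 "--medium asm${port}_12102_10G.vdi --mtype shareable"
        done
    done
}

# Review first, then execute: gen_attach_cmds | sh
```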

Check newly created disk devices after RAC node reboot
[root@hract21 Desktop]# ls /dev/sd*
/dev/sda  /dev/sda1  /dev/sda2  /dev/sdb  /dev/sdc  /dev/sdd  /dev/sde

Run fdisk to partition the new disk ( we only want a single partition )
[root@hract21 Desktop]# fdisk /dev/sdb
Command (m for help): p
Disk /dev/sdb: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xf9bddbc6
   Device Boot      Start         End      Blocks   Id  System
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1305, default 1): 1
Last cylinder, +cylinders or +size{K,M,G} (1-1305, default 1305): 
Using default value 1305
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
--> Repeat above step for  /dev/sdc  /dev/sdd  /dev/sde and verify the created devices.
[root@hract21 Desktop]# ls /dev/sd*
/dev/sda  /dev/sda1  /dev/sda2  /dev/sdb  /dev/sdb1  /dev/sdc  /dev/sdc1  /dev/sdd  /dev/sdd1  /dev/sde  /dev/sde1
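
Repeating the interactive fdisk session for sdc, sdd and sde can be scripted by feeding fdisk the same keystrokes shown above. A sketch (the function name is mine; verify the result with "fdisk -l" before relying on it):

```shell
#!/bin/bash
# Sketch: emit the fdisk keystrokes for "one primary partition spanning
# the whole disk": n=new, p=primary, 1=partition number, two empty lines
# accept the default first/last cylinder, w=write table and exit.
fdisk_single_partition() {
    printf 'n\np\n1\n\n\nw\n'
}

# Usage on each ASM disk (as root), e.g.:
# fdisk_single_partition | fdisk /dev/sdc
```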

Use the following bash script to return the disk WWIDs : http://www.hhutzler.de/blog/configure-udev-rules-for-asm-devices/

[root@hract21 ~]# ./check_wwid.sh
/dev/sda  WWID:    1ATA_VBOX_HARDDISK_VB98f7f6e6-e47cb456
/dev/sda1  WWID:   1ATA_VBOX_HARDDISK_VB98f7f6e6-e47cb456
/dev/sda2  WWID:   1ATA_VBOX_HARDDISK_VB98f7f6e6-e47cb456
/dev/sdb  WWID:    1ATA_VBOX_HARDDISK_VBe7363848-cbf94b0c
/dev/sdb1  WWID:   1ATA_VBOX_HARDDISK_VBe7363848-cbf94b0c
/dev/sdc  WWID:    1ATA_VBOX_HARDDISK_VBb322a188-b4771866
/dev/sdc1  WWID:   1ATA_VBOX_HARDDISK_VBb322a188-b4771866
/dev/sdd  WWID:    1ATA_VBOX_HARDDISK_VB00b7878b-c50d45f4
/dev/sdd1  WWID:   1ATA_VBOX_HARDDISK_VB00b7878b-c50d45f4
/dev/sde  WWID:    1ATA_VBOX_HARDDISK_VB7a3701f8-f1272747
/dev/sde1  WWID:   1ATA_VBOX_HARDDISK_VB7a3701f8-f1272747
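
The linked article has the original check_wwid.sh; a sketch of what its core could look like is below. It uses the same scsi_id invocation as the udev rules that follow; the SCSI_ID variable and the function name are my additions so the logic can be exercised without real disks.

```shell
#!/bin/bash
# Sketch of a check_wwid.sh-style helper: print the WWID for each given
# block device, using the RHEL6 scsi_id syntax (-g -u -d <dev>).
SCSI_ID="${SCSI_ID:-/sbin/scsi_id}"

list_wwids() {
    local dev
    for dev in "$@"; do
        printf '%s  WWID: %s\n' "$dev" "$("$SCSI_ID" -g -u -d "$dev")"
    done
}

# On a RAC node (as root): list_wwids /dev/sd*
```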

Create 99-oracle-asmdevices.rules – change the RESULT values using the output of our ./check_wwid.sh script :
[root@hract21 rules.d]#  cd /etc/udev/rules.d
[root@hract21 rules.d]#  cat  99-oracle-asmdevices.rules
KERNEL=="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VBe7363848-cbf94b0c", NAME="asmdisk1_10G", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VBb322a188-b4771866", NAME="asmdisk2_10G", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VB00b7878b-c50d45f4", NAME="asmdisk3_10G", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VB7a3701f8-f1272747", NAME="asmdisk4_10G", OWNER="grid", GROUP="asmadmin", MODE="0660"
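
Hand-editing four near-identical rule lines invites copy/paste mistakes. A sketch of generating them from the WWID list instead (the function name is mine; disks are numbered in the order the WWIDs are passed):

```shell
#!/bin/bash
# Sketch: emit one 99-oracle-asmdevices.rules line per WWID, in the same
# format as the rules above.
gen_asm_rules() {
    local n=1 wwid
    for wwid in "$@"; do
        printf 'KERNEL=="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="%s", NAME="asmdisk%d_10G", OWNER="grid", GROUP="asmadmin", MODE="0660"\n' "$wwid" "$n"
        n=$((n+1))
    done
}

# Example (as root), using the WWIDs from check_wwid.sh:
# gen_asm_rules 1ATA_VBOX_HARDDISK_VBe7363848-cbf94b0c \
#               1ATA_VBOX_HARDDISK_VBb322a188-b4771866 \
#   > /etc/udev/rules.d/99-oracle-asmdevices.rules
```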

[root@hract21 ~]# udevadm control --reload-rules
[root@hract21 ~]# start_udev
Starting udev: udevd[14512]: GOTO 'pulseaudio_check_usb' has no matching label in: '/lib/udev/rules.d/90-pulseaudio.rules'
                                                           [  OK  ]
[root@hract21 ~]#  ls -ltr /dev/asmd*
brw-rw---- 1 grid asmadmin 8, 17 Jan 29 09:33 /dev/asmdisk1_10G
brw-rw---- 1 grid asmadmin 8, 49 Jan 29 09:33 /dev/asmdisk3_10G
brw-rw---- 1 grid asmadmin 8, 33 Jan 29 09:33 /dev/asmdisk2_10G
brw-rw---- 1 grid asmadmin 8, 65 Jan 29 09:33 /dev/asmdisk4_10G

Copy the newly created rules file to the remaining RAC nodes and restart udev
[root@hract21 rules.d]#  scp 99-oracle-asmdevices.rules hract22:/etc/udev/rules.d
[root@hract21 rules.d]#  scp 99-oracle-asmdevices.rules hract23:/etc/udev/rules.d
and run the following bash script to restart udev

Bash script: restart_udev.sh  
#!/bin/bash 
udevadm control --reload-rules
start_udev
ls -ltr /dev/asm*

Note: the ls output on hract22 and hract23 should be identical to the output on hract21 !

Here you may add the oracle and grid users to the vboxsf group. 
This allows us to use the mounted/shared VBox folders !
Note: use -a so the users keep their existing supplementary groups ( -G alone would replace them ).
[root@hract21 ~]#  usermod -a -G vboxsf oracle
[root@hract21 ~]#  usermod -a -G vboxsf grid

Setup ssh connectivity
[grid@hract21 ~]$  cp /media/sf_kits/Oracle/12.1.0.2/grid/sshsetup/sshUserSetup.sh .
[grid@hract21 ~]$  ./sshUserSetup.sh -user grid -hosts "hract21  hract22 hract23" -noPromptPassphrase

[grid@hract21 ~]$  /usr/bin/ssh -x -l grid hract21 date
Thu Jan 29 11:06:55 CET 2015
[grid@hract21 ~]$  /usr/bin/ssh -x -l grid hract22 date
Thu Jan 29 11:06:56 CET 2015
[grid@hract21 ~]$  /usr/bin/ssh -x -l grid hract23 date
Thu Jan 29 11:07:01 CET 2015

NTP setup on all RAC nodes
Note: only our name server gets the time from the Internet 

For the RAC nodes, add only a single server to ntp.conf ( our nameserver ) 
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server 192.168.5.50 
[root@hract21 etc]#  service ntp restart
ntp: unrecognized service
[root@hract21 etc]# service ntpd restart
Shutting down ntpd:                                        [  OK  ]
Starting ntpd:                                             [  OK  ]
[root@hract21 etc]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 ns1.example.com 131.188.3.220   10 u    5   64    1    0.149  -170.05   0.000

Run cluvfy  and install GRID software

Run Cluvfy 
[grid@hract21 CLUVFY]$ ./bin/cluvfy stage -pre crsinst -asm -presence local -asmgrp asmadmin  \
    -asmdev /dev/asmdisk1_10G,/dev/asmdisk2_10G,/dev/asmdisk3_10G,/dev/asmdisk4_10G    \
    -networks eth1:192.168.5.0:PUBLIC/eth2:192.168.2.0:cluster_interconnect  \
    -n hract21,hract22,hract23 | egrep 'PRVF|fail'
Node reachability check failed from node "hract21"
Total memory check failed
Check failed on nodes: 
PRVF-9802 : Attempt to get udev information from node "hract21" failed
PRVF-9802 : Attempt to get udev information from node "hract23" failed
UDev attributes check failed for ASM Disks 
--> The PRVF-9802 error is explained in the following article 
    The memory check failed as I had reduced the RAC VBox images to 4 GByte 
    For other cluvfy errors you may check this article 

Installing the GRID software 
[grid@hract21 CLUVFY]$ cd /media/sf_kits/Oracle/12.1.0.2/grid

$ cd grid
$ ls
install  response  rpm    runcluvfy.sh  runInstaller  sshsetup  stage  welcome.html
$ ./runInstaller 
-> Configure a standard cluster
-> Advanced Installation
   Cluster name : ract2
   Scan name    : ract2-scan.grid12c.example.com
   Scan port    : 1521
   -> Create New GNS
      GNS VIP address: 192.168.5.58
      GNS Sub domain : grid12c.example.com
  Public Hostname           Virtual Hostname 
  hract21.example.com        AUTO
  hract22.example.com        AUTO
  hract23.example.com        AUTO

-> Test and Setup SSH connectivity
-> Setup network Interfaces
   eth0: don't use
   eth1: PUBLIC                              192.168.5.X
   eth2: Private Cluster_Interconnect,ASM    192.168.2.X
 
-> Configure GRID Infrastructure: YES
-> Use standard ASM for storage
-> ASM setup
   Diskgroup         : DATA
   Disk discovery path: /dev/asm*
--> Don't use IPMI

Run the root scripts:
[root@hract21 etc]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.

[root@hract21 etc]# /u01/app/121/grid/root.sh
Performing root user operation.
The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/121/grid
..
LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 30-JAN-2015 12:39:53
Copyright (c) 1991, 2014, Oracle.  All rights reserved.
CRS-5014: Agent "ORAAGENT" timed out starting process "/u01/app/121/grid/bin/lsnrctl" for action "check": details at "(:CLSN00009:)" in "/u01/app/grid/diag/crs/hract21/crs/trace/crsd_oraagent_grid.trc"
CRS-5017: The resource action "ora.MGMTLSNR check" encountered the following error: 
(:CLSN00009:)Command Aborted. For details refer to "(:CLSN00109:)" in "/u01/app/grid/diag/crs/hract21/crs/trace/crsd_oraagent_grid.trc".
CRS-2664: Resource 'ora.DATA.dg' is already running on 'hract21'
CRS-6017: Processing resource auto-start for servers: hract21
CRS-2672: Attempting to start 'ora.oc4j' on 'hract21'
CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on 'hract21'
CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'hract21' succeeded
CRS-2676: Start of 'ora.oc4j' on 'hract21' succeeded
CRS-6016: Resource auto-start has completed for server hract21
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2015/01/30 12:41:03 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

Now run the scripts below on hract22 and hract23
# /u01/app/oraInventory/orainstRoot.sh
# /u01/app/121/grid/root.sh

Verify the clusterware (CW) status 
[root@hract21 ~]# crs
*****  Local Resources: *****
Resource NAME                  TARGET     STATE           SERVER       STATE_DETAILS                       
-------------------------      ---------- ----------      ------------ ------------------                  
ora.ASMNET1LSNR_ASM.lsnr       ONLINE     ONLINE          hract21      STABLE   
ora.ASMNET1LSNR_ASM.lsnr       ONLINE     ONLINE          hract22      STABLE   
ora.ASMNET1LSNR_ASM.lsnr       ONLINE     ONLINE          hract23      STABLE   
ora.DATA.dg                    ONLINE     ONLINE          hract21      STABLE   
ora.DATA.dg                    ONLINE     ONLINE          hract22      STABLE   
ora.DATA.dg                    ONLINE     ONLINE          hract23      STABLE   
ora.LISTENER.lsnr              ONLINE     ONLINE          hract21      STABLE   
ora.LISTENER.lsnr              ONLINE     ONLINE          hract22      STABLE   
ora.LISTENER.lsnr              ONLINE     ONLINE          hract23      STABLE   
ora.net1.network               ONLINE     ONLINE          hract21      STABLE   
ora.net1.network               ONLINE     ONLINE          hract22      STABLE   
ora.net1.network               ONLINE     ONLINE          hract23      STABLE   
ora.ons                        ONLINE     ONLINE          hract21      STABLE   
ora.ons                        ONLINE     ONLINE          hract22      STABLE   
ora.ons                        ONLINE     ONLINE          hract23      STABLE   
*****  Cluster Resources: *****
Resource NAME               INST   TARGET       STATE        SERVER          STATE_DETAILS
--------------------------- ----   ------------ ------------ --------------- -----------------------------------------
ora.LISTENER_SCAN1.lsnr        1   ONLINE       ONLINE       hract22         STABLE  
ora.LISTENER_SCAN2.lsnr        1   ONLINE       ONLINE       hract23         STABLE  
ora.LISTENER_SCAN3.lsnr        1   ONLINE       ONLINE       hract21         STABLE  
ora.MGMTLSNR                   1   ONLINE       ONLINE       hract21         169.254.213.86 192.168.2.121,STABLE
ora.asm                        1   ONLINE       ONLINE       hract21         Started,STABLE  
ora.asm                        2   ONLINE       ONLINE       hract22         Started,STABLE  
ora.asm                        3   ONLINE       ONLINE       hract23         Started,STABLE  
ora.cvu                        1   ONLINE       ONLINE       hract21         STABLE  
ora.gns                        1   ONLINE       ONLINE       hract21         STABLE  
ora.gns.vip                    1   ONLINE       ONLINE       hract21         STABLE  
ora.hract21.vip                1   ONLINE       ONLINE       hract21         STABLE  
ora.hract22.vip                1   ONLINE       ONLINE       hract22         STABLE  
ora.hract23.vip                1   ONLINE       ONLINE       hract23         STABLE  
ora.mgmtdb                     1   ONLINE       ONLINE       hract21         Open,STABLE  
ora.oc4j                       1   ONLINE       ONLINE       hract21         STABLE  
ora.scan1.vip                  1   ONLINE       ONLINE       hract22         STABLE  
ora.scan2.vip                  1   ONLINE       ONLINE       hract23         STABLE  
ora.scan3.vip                  1   ONLINE       ONLINE       hract21         STABLE
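
The `crs` command at the root prompt above is not a stock clusterware binary but a local helper; a minimal sketch, assuming the standard Grid home used in this install:

```shell
#!/bin/sh
# Hypothetical "crs" helper - wraps the standard crsctl command that
# produces the local/cluster resource table shown above.
GRID_HOME=/u01/app/121/grid

crs() {
    # -t prints the tabular overview of local and cluster resources
    "$GRID_HOME/bin/crsctl" stat res -t
}
```

Define it in root's .bashrc (or as an alias) so the short form is available on every node.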


Verify GNS SETUP / Network Setup 
[root@hract21 ~]# sh -x  check_net_12c.sh
+ dig @192.168.5.50 ract2-scan.grid12c.example.com
;; QUESTION SECTION:
;ract2-scan.grid12c.example.com.    IN    A

;; ANSWER SECTION:
ract2-scan.grid12c.example.com.    34 IN    A    192.168.5.236
ract2-scan.grid12c.example.com.    34 IN    A    192.168.5.220
ract2-scan.grid12c.example.com.    34 IN    A    192.168.5.218

;; AUTHORITY SECTION:
grid12c.example.com.    3600    IN    NS    gns12c.grid12c.example.com.
grid12c.example.com.    3600    IN    NS    ns1.example.com.

;; ADDITIONAL SECTION:
ns1.example.com.    3600    IN    A    192.168.5.50


+ dig @192.168.5.58 ract2-scan.grid12c.example.com

;; QUESTION SECTION:
;ract2-scan.grid12c.example.com.    IN    A

;; ANSWER SECTION:
ract2-scan.grid12c.example.com.    120 IN    A    192.168.5.218
ract2-scan.grid12c.example.com.    120 IN    A    192.168.5.220
ract2-scan.grid12c.example.com.    120 IN    A    192.168.5.236

;; AUTHORITY SECTION:
grid12c.example.com.    10800    IN    SOA    hract22. hostmaster.grid12c.example.com. 46558097 10800 10800 30 120

;; ADDITIONAL SECTION:
ract2-gns-vip.grid12c.example.com. 10800 IN A    192.168.5.58


+ nslookup ract2-scan
Server:        192.168.5.50
Address:    192.168.5.50#53
Non-authoritative answer:
Name:    ract2-scan.grid12c.example.com
Address: 192.168.5.236
Name:    ract2-scan.grid12c.example.com
Address: 192.168.5.218
Name:    ract2-scan.grid12c.example.com
Address: 192.168.5.220

+ ping -c 2 google.de
PING google.de (173.194.65.94) 56(84) bytes of data.
64 bytes from ee-in-f94.1e100.net (173.194.65.94): icmp_seq=1 ttl=38 time=177 ms
64 bytes from ee-in-f94.1e100.net (173.194.65.94): icmp_seq=2 ttl=38 time=134 ms
..

+ ping -c 2 hract21
PING hract21.example.com (192.168.5.121) 56(84) bytes of data.
64 bytes from hract21.example.com (192.168.5.121): icmp_seq=1 ttl=64 time=0.013 ms
64 bytes from hract21.example.com (192.168.5.121): icmp_seq=2 ttl=64 time=0.024 ms
..

+ ping -c 2 ract2-scan.grid12c.example.com
PING ract2-scan.grid12c.example.com (192.168.5.220) 56(84) bytes of data.
64 bytes from 192.168.5.220: icmp_seq=1 ttl=64 time=0.453 ms
64 bytes from 192.168.5.220: icmp_seq=2 ttl=64 time=0.150 ms
..

+ cat /etc/resolv.conf
# Generated by NetworkManager
search example.com grid12c.example.com
nameserver 192.168.5.50
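
The check_net_12c.sh script itself is not listed in the article; a plausible reconstruction assembled from the `sh -x` trace above (the IPs are this lab's corporate DNS and GNS VIP):

```shell
#!/bin/sh
# Reconstructed check_net_12c.sh - written to /tmp so it can be
# inspected before running on a cluster node.
cat > /tmp/check_net_12c.sh <<'EOF'
#!/bin/sh
dig @192.168.5.50 ract2-scan.grid12c.example.com   # ask corporate DNS (ns1)
dig @192.168.5.58 ract2-scan.grid12c.example.com   # ask the GNS VIP directly
nslookup ract2-scan                                # short name via search list
ping -c 2 google.de                                # internet reachability
ping -c 2 hract21                                  # local node resolution
ping -c 2 ract2-scan.grid12c.example.com           # SCAN address
cat /etc/resolv.conf                               # active resolver config
EOF
chmod +x /tmp/check_net_12c.sh
```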

Run Cluvfy and Install RDBMS software

Verify that your .bashrc doesn't read/write any data from/to stdin/stdout 
Setup ssh connectivity :
[oracle@hract21 ~]$  ./sshUserSetup.sh -user oracle -hosts "hract21  hract22 hract23" -noPromptPassphrase
Verify ssh connectivity ( run this on hract22 and hract23 too )
[oracle@hract21 ~]$ ssh -x -l oracle hract21 date
Fri Jan 30 15:40:45 CET 2015
[oracle@hract21 ~]$ ssh -x -l oracle hract22 date
Fri Jan 30 15:40:50 CET 2015
[oracle@hract21 ~]$ ssh -x -l oracle hract23 date
Fri Jan 30 15:40:52 CET 2015

[grid@hract21 CLUVFY]$ ./bin/cluvfy stage -pre  dbinst  -n hract21,hract22,hract23 -d /u01/app/oracle/product/121/rac121 -fixup
Performing pre-checks for database installation 
Checking node reachability...
Node reachability check passed from node "hract21"
Checking user equivalence...
User equivalence check passed for user "grid"
ERROR: 
PRVG-11318 : The following error occurred during database operating system groups check. "PRCT-1005 :
 Directory /u01/app/oracle/product/121/rac121/bin does not exist"
 --> You can ignore this, as the RAC database software is not installed yet 

Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
Check: Node connectivity using interfaces on subnet "192.168.5.0"
Node connectivity passed for subnet "192.168.5.0" with node(s) hract22,hract23,hract21
TCP connectivity check passed for subnet "192.168.5.0"
Check: Node connectivity using interfaces on subnet "192.168.2.0"
Node connectivity passed for subnet "192.168.2.0" with node(s) hract22,hract23,hract21
TCP connectivity check passed for subnet "192.168.2.0"
Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.5.0".
Subnet mask consistency check passed for subnet "192.168.2.0".
Subnet mask consistency check passed.
Node connectivity check passed
Checking multicast communication...
Checking subnet "192.168.2.0" for multicast communication with multicast group "224.0.0.251"...
Check of subnet "192.168.2.0" for multicast communication with multicast group "224.0.0.251" passed.
Check of multicast communication passed.
Total memory check passed
Available memory check passed
Swap space check passed
Free disk space check passed for "hract23:/u01/app/oracle/product/121/rac121,hract23:/tmp"
Free disk space check passed for "hract22:/u01/app/oracle/product/121/rac121,hract22:/tmp"
Free disk space check passed for "hract21:/u01/app/oracle/product/121/rac121,hract21:/tmp"
Check for multiple users with UID value 501 passed 
User existence check passed for "grid"
Group existence check passed for "oinstall"
Group existence check passed for "dba"
Group existence check passed for "asmdba"
Membership check for user "grid" in group "oinstall" [as Primary] passed
Membership check for user "grid" in group "dba" passed
Membership check for user "grid" in group "asmdba" passed
Run level check passed
Hard limits check passed for "maximum open file descriptors"
Soft limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
Soft limits check passed for "maximum user processes"
System architecture check passed
Kernel version check passed
..
Package existence check passed for "libaio-devel(x86_64)"
Check for multiple users with UID value 0 passed 
Current group ID check passed
Starting check for consistency of primary group of root user
Check for consistency of root user's primary group passed
Default user file creation mask check passed
Checking CRS integrity...
Clusterware version consistency passed.
CRS integrity check passed
Checking Cluster manager integrity... 
Checking CSS daemon...
Oracle Cluster Synchronization Services appear to be online.
Cluster manager integrity check passed
Checking node application existence...
Checking existence of VIP node application (required)
VIP node application check passed
Checking existence of NETWORK node application (required)
NETWORK node application check passed
Checking existence of ONS node application (optional)
ONS node application check passed
Checking if Clusterware is installed on all nodes...
Check of Clusterware install passed
Checking if CTSS Resource is running on all nodes...
CTSS resource check passed
Querying CTSS for time offset on all nodes...
Query of CTSS for time offset passed
Check CTSS state started...
CTSS is in Observer state. Switching over to clock synchronization checks using NTP
Starting Clock synchronization checks using Network Time Protocol(NTP)...
NTP configuration file "/etc/ntp.conf" existence check passed
Liveness check passed for "ntpd"
Check for NTP daemon or service alive passed on all nodes
Check of common NTP Time Server passed
Clock time offset check passed
Clock synchronization check using Network Time Protocol(NTP) passed
Oracle Cluster Time Synchronization Services check passed
Checking integrity of file "/etc/resolv.conf" across nodes
"domain" and "search" entries do not coexist in any  "/etc/resolv.conf" file
All nodes have same "search" order defined in file "/etc/resolv.conf"
The DNS response time for an unreachable node is within acceptable limit on all nodes
Check for integrity of file "/etc/resolv.conf" passed
Time zone consistency check passed
Checking Single Client Access Name (SCAN)...
Checking TCP connectivity to SCAN Listeners...
TCP connectivity to SCAN Listeners exists on all cluster nodes
Checking name resolution setup for "ract2-scan.grid12c.example.com"...
Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...
All nodes have same "hosts" entry defined in file "/etc/nsswitch.conf"
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed
Checking SCAN IP addresses...
Check of SCAN IP addresses passed
Verification of SCAN VIP and Listener setup passed
Checking GNS integrity...
The GNS subdomain name "grid12c.example.com" is a valid domain name
Checking if the GNS VIP belongs to same subnet as the public network...
Public network subnets "192.168.5.0, 192.168.5.0, 192.168.5.0" match with the GNS VIP "192.168.5.0, 192.168.5.0, 192.168.5.0"
GNS VIP "192.168.5.58" resolves to a valid IP address
GNS resolved IP addresses are reachable
GNS resource configuration check passed
GNS VIP resource configuration check passed.
GNS integrity check passed
Checking VIP configuration.
Checking VIP Subnet configuration.
Check for VIP Subnet configuration passed.
Checking VIP reachability
Check for VIP reachability passed.

ASM and CRS versions are compatible
Database Clusterware version compatibility passed.
Starting check for /dev/shm mounted as temporary file system ...
Check for /dev/shm mounted as temporary file system passed

NOTE: 
No fixable verification failures to fix
Pre-check for database installation was successful


Install the database software 

[oracle@hract21 database]$ id
uid=502(oracle) gid=54321(oinstall) groups=54321(oinstall),493(vboxsf),506(asmdba),54322(dba)
[oracle@hract21 database]$ cd /media/sf_kits/oracle/12.1.0.2/database
[oracle@hract21 database]$  ./runInstaller
--> Create and Configure a Database 
 --> Server Class
  --> Oracle Real Application Cluster database installation
   --> Policy Managed  
    --> Server Pool:  Top_Priority Cardinality :2
     --> Select all 3 RAC members
      --> Test/Create SSH connectivity
       --> Advanced Install 
         --> Select General Purpose / Transaction Processing database type
         --> Target Database Memory : 800 MByte 
           --> Select ASM and for OSDBA use group:  dba ( default )
 
Run root.sh : hract21, hract22, hract23

Start database banka on all nodes 
[oracle@hract21 database]$  srvctl status srvpool -a
Server pool name: Free
Active servers count: 2
Active server names: hract21,hract22
NAME=hract21 STATE=ONLINE
NAME=hract22 STATE=ONLINE
Server pool name: Generic
Active servers count: 0
Active server names: 
Server pool name: Top_Priority
Active servers count: 1
Active server names: hract23
NAME=hract23 STATE=ONLINE
[oracle@hract21 database]$ srvctl modify srvpool -g Top_Priority -l 3 -u 3

*****  Cluster Resources: *****
Resource NAME               INST   TARGET       STATE        SERVER          STATE_DETAILS
--------------------------- ----   ------------ ------------ --------------- -----------------------------------------
ora.banka.db                   1   ONLINE       ONLINE       hract23         Open,STABLE  
ora.banka.db                   2   ONLINE       ONLINE       hract21         Open,STABLE  
ora.banka.db                   3   ONLINE       ONLINE       hract22         Open,STABLE 

Stop one instance 
[oracle@hract21 database]$ srvctl modify srvpool -g Top_Priority -l 2 -u 2 -f
ora.banka.db                   1   ONLINE       ONLINE       hract23         Open,STABLE  
ora.banka.db                   2   ONLINE       OFFLINE      -               Instance Shutdown,STABLE
ora.banka.db                   3   ONLINE       ONLINE       hract22         Open,STABLE 
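
To grow the pool back to three servers after this test, raise the min/max counts again. This is standard srvctl syntax (-l = minimum, -u = maximum servers), guarded here so the snippet is harmless off-cluster:

```shell
#!/bin/sh
# Resize the Top_Priority server pool back to three servers and verify.
# Guarded: srvctl only exists on a cluster node with the RAC environment set.
if command -v srvctl >/dev/null 2>&1; then
    srvctl modify srvpool -g Top_Priority -l 3 -u 3
    srvctl status srvpool -g Top_Priority -a
else
    echo "srvctl not found - run this as user oracle on a cluster node"
fi
```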

Invoke dbca and create database bankb

[oracle@hract21 database]$  ./dbca
   --> Policy Managed  
    --> Server Pool:  Low_Priority Cardinality :1
     --> Target Database Memory : 800 MByte 

Check server pools :
[oracle@hract21 database]$ srvctl status srvpool -a
Server pool name: Free
Active servers count: 0
Active server names: 
Server pool name: Generic
Active servers count: 0
Active server names: 
Server pool name: Low_Priority
Active servers count: 1
Active server names: hract21
NAME=hract21 STATE=ONLINE
Server pool name: Top_Priority
Active servers count: 2
Active server names: hract22,hract23
NAME=hract22 STATE=ONLINE
NAME=hract23 STATE=ONLINE
    

For details about server pools read the following article : http://www.hhutzler.de/blog/managing-server-pools/
*****  Cluster Resources: *****
Resource NAME               INST   TARGET       STATE        SERVER          STATE_DETAILS
--------------------------- ----   ------------ ------------ --------------- -----------------------------------------
ora.banka.db                   1   ONLINE       ONLINE       hract23         Open,STABLE  
ora.banka.db                   2   ONLINE       OFFLINE      -               Instance Shutdown,STABLE
ora.banka.db                   3   ONLINE       ONLINE       hract22         Open,STABLE  
ora.bankb.db                   1   ONLINE       ONLINE       hract21         Open,STABLE  

Testing the current configuration 
Database bankB:
[oracle@hract21 ~]$ sqlplus system/sys@ract2-scan.grid12c.example.com:1521/bankb  @v
HOST_NAME               INSTANCE_NAME
------------------------------ ----------------
hract21.example.com           bankb_1
--> As database bankB runs on only one instance, all connections land on hract21 

Verify load balancing for Database bankA:
[oracle@hract21 ~]$  sqlplus system/sys@ract2-scan.grid12c.example.com:1521/banka @v
HOST_NAME               INSTANCE_NAME
------------------------------ ----------------
hract23.example.com           bankA_1

[oracle@hract21 ~]$ sqlplus system/sys@ract2-scan.grid12c.example.com:1521/banka @v
HOST_NAME               INSTANCE_NAME
------------------------------ ----------------
hract22.example.com           bankA_3

--> As database bankA runs on 2 instances, SCAN load balancing distributes the connections.
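
The `@v` script is not shown in the article; a plausible minimal version that produces the HOST_NAME / INSTANCE_NAME output above:

```shell
#!/bin/sh
# Hypothetical v.sql - reconstructs the query behind "@v" in the
# sqlplus calls above; column names match the output shown.
cat > /tmp/v.sql <<'EOF'
set linesize 120
col host_name     format a30
col instance_name format a16
select host_name, instance_name from v$instance;
exit
EOF
# Usage: sqlplus system/sys@ract2-scan.grid12c.example.com:1521/banka @/tmp/v.sql
```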

 
