Install Oracle RAC 12.1 on OEL 6.4 and VirtualBox 4.2 with GNS and ASMLib

Network/DNS setup

Virtualbox Device Configuration 
eth0   -  VirtualBox NAT               - DHCP, either using the local LTE router 192.168.1.1 or the corporate VPN network 
eth1   -  VirtualBox Internal Network - Public interface ( grac121: 192.168.1.81 / grac122: 192.168.1.82 ) 
eth2   -  VirtualBox Internal Network - Private Cluster Interconnect ( grac121int: 192.168.2.81 / grac122int: 192.168.2.82  )
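As a sketch, the matching OEL 6 ifcfg file for the public interface on grac121 could look like this ( assumption: classic network scripts with the device not managed by NetworkManager; DEVICE/HWADDR will differ on your VMs ):
/etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.1.81
NETMASK=255.255.255.0
NM_CONTROLLED=no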

Restart the network service ( note: a network restart can overwrite your resolv.conf file, so verify it afterwards ) 
$ service network restart 
After network restart /etc/resolv.conf should look like: 
# Generated by NetworkManager 
search example.com grid.example.com de.oracle.com 
nameserver 192.168.1.50 

Add the corporate name servers as forwarders in our DNS   
/etc/named.conf :    
forwarders { 192.135.82.44; 10.165.246.33; } ; 
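After editing named.conf, check the syntax and reload the name server ( standard BIND tooling on OEL 6 ):
# named-checkconf /etc/named.conf
# service named restart      ( or: rndc reload )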
Verify that ping works from our DNS name server to the corporate DNS name servers: 
$ ping 192.135.82.44 
$ ping 10.165.246.33 
Details: 
Nameserver settings:    
  192.135.82.44    : Corporate name server I    
  10.165.246.33    : Corporate name server II       
  192.168.1.50     : DNS name server used for GNS delegation ( GNS VIP: 192.168.1.58 ) 

After the above setup the network devices should look like:
# ifconfig | egrep 'HWaddr|Bcast'
eth0      Link encap:Ethernet  HWaddr 08:00:27:A8:27:BD  
          inet addr:10.0.2.15  Bcast:10.0.2.255  Mask:255.255.255.0
eth1      Link encap:Ethernet  HWaddr 08:00:27:1E:7D:B0  
          inet addr:192.168.1.81  Bcast:192.168.1.255  Mask:255.255.255.0
eth2      Link encap:Ethernet  HWaddr 08:00:27:97:59:C3  
          inet addr:192.168.2.81  Bcast:192.168.2.255  Mask:255.255.255.0

Preparing your corporate name server for GNS zone delegation
/etc/named.conf 
zone  "example.com" IN {
      type master;
       notify no;
       file "example.com.db";
};

/var/named/example.com.db
...
$ORIGIN grid12c.example.com.
@       IN          NS        gns12c.grid12c.example.com. ; NS  grid.example.com
        IN          NS        ns1.example.com.      ; NS example.com
gns12c  IN          A         192.168.1.58 ; glue record
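Validate the zone file and reload it before testing the delegation ( named-checkzone ships with the bind package ):
# named-checkzone example.com /var/named/example.com.db
# rndc reload example.com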

Check DNS resolution
Testing GNS ( Note : ping will not work as GNS isn't active yet )
$  nslookup 192.168.1.58
Server:        192.168.1.50
Address:    192.168.1.50#53
58.1.168.192.in-addr.arpa    name = gns12c.grid12c.example.com.

$ nslookup gns12c.grid12c.example.com
;; Got SERVFAIL reply from 192.168.1.50, trying next server
--> No problem: SERVFAIL is expected here, as recursion fails while the GNS daemon serving the delegated subdomain is not running yet
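To inspect the delegation itself instead, query the parent server without recursion; the referral should carry the NS record and the glue A record 192.168.1.58 ( a sketch, assuming dig from bind-utils is installed ):
$ dig @192.168.1.50 +norecurse gns12c.grid12c.example.com A
--> AUTHORITY  : grid12c.example.com. NS gns12c.grid12c.example.com.
--> ADDITIONAL : gns12c.grid12c.example.com. A 192.168.1.58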
#   nslookup grac112-scan ( Again this will only work after CRS is installed and active ) 
Server:        192.168.1.50
Address:    192.168.1.50#53
Non-authoritative answer:
Name:    grac112-scan.grid12c.example.com
Address: 192.168.1.149
Name:    grac112-scan.grid12c.example.com
Address: 192.168.1.150
Name:    grac112-scan.grid12c.example.com
Address: 192.168.1.148
...
$  nslookup grac121.example.com
Name:    grac121.example.com
Address: 192.168.1.81
$ nslookup 192.168.1.81
81.1.168.192.in-addr.arpa    name = grac121.example.com.
$  nslookup grac121int.example.com
Name:    grac121int.example.com
Address: 192.168.2.81
$ nslookup  192.168.2.81
81.2.168.192.in-addr.arpa    name = grac121int.example.com.
....
--> Repeat above nslookup steps for grac122


Configure your network name by modifying /etc/sysconfig/network:
NETWORKING=yes
HOSTNAME=grac122.example.com
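Apply and check the new name without a full reboot ( standard tooling ):
# hostname grac122.example.com
# hostname -f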

NTP Setup

NTP Setup - Clients: grac121.example.com, grac122.example.com
 # cat /etc/ntp.conf
 restrict default nomodify notrap noquery
 restrict 127.0.0.1
 # -- CLIENT NETWORK -------
 # --- OUR TIMESERVERS -----
 # 192.168.1.50 is the address of our timeserver (ns1.example.com),
 # use the address of your own instead:
 server 192.168.1.50
 server  127.127.1.0
 # --- NTP MULTICASTCLIENT ---
 # --- GENERAL CONFIGURATION ---
 # Undisciplined Local Clock.
 fudge   127.127.1.0 stratum 12
 # Drift file.
 driftfile /var/lib/ntp/drift
 broadcastdelay  0.008
 # Keys file.
 keys /etc/ntp/keys

# ntpq -p

     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 ns1.example.com LOCAL(0)        10 u   20   64    1    0.244   -0.625   0.000
 LOCAL(0)        .LOCL.          12 l   19   64    1    0.000    0.000   0.000

Add to  /etc/rc.local
#
service ntpd stop
ntpdate -u 192.168.1.50 
service ntpd start
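Note: Oracle Clusterware expects ntpd to run with the slewing option -x when NTP is used ( CTSS then stays in observer mode, as the cluvfy post-check later confirms ). On OEL 6 the option is set in /etc/sysconfig/ntpd; typical contents shown below, adjust to your distribution's defaults:
# /etc/sysconfig/ntpd
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
# service ntpd restart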

Account setup

Check the user setup for the users oracle and grid ( note: the oracle user must belong to asmdba )

See :  Grid Infrastructure Installation Guide 12c – Chapter 6 

  • OSDBA for ASM ( Database Administrator group for ASM, typically asmdba ). Members of the ASM Database Administrator group (OSDBA for ASM) are granted read and write access to files managed by Oracle ASM. The Oracle Grid Infrastructure installation owner and all Oracle Database software owners must be members of this group, and all users with OSDBA membership on databases that have access to the files managed by Oracle ASM must be members of the OSDBA group for ASM.
$ id      ( as user oracle )
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),500(vboxsf),506(asmdba),54322(dba) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
$ id      ( as user grid )
uid=501(grid) gid=54321(oinstall) groups=54321(oinstall),500(vboxsf),504(asmadmin),506(asmdba),507(asmoper),54322(dba) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
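If these accounts do not exist yet, they can be created to match the IDs above ( a sketch; UIDs/GIDs taken from the id output, the vboxsf group is created by the VirtualBox Guest Additions ):
# groupadd -g 54321 oinstall
# groupadd -g 54322 dba
# groupadd -g 504   asmadmin
# groupadd -g 506   asmdba
# groupadd -g 507   asmoper
# useradd  -u 54321 -g oinstall -G dba,asmdba,vboxsf oracle
# useradd  -u 501   -g oinstall -G dba,asmadmin,asmdba,asmoper,vboxsf grid
# passwd oracle
# passwd grid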

Create directories:
To create the Oracle Inventory directory, enter the following commands as the root user:
  # mkdir -p /u01/app/oraInventory
  # chown -R grid:oinstall /u01/app/oraInventory
Creating the Oracle Grid Infrastructure Home Directory
To create the Grid Infrastructure home directory, enter the following commands as the root user:
  # mkdir -p /u01/app/grid
  # chown -R grid:oinstall /u01/app/grid
  # chmod -R 775 /u01/app/grid
  # mkdir -p /u01/app/121/grid
  # chown -R grid:oinstall /u01/app/121/grid
  # chmod -R 775 /u01/app/121/grid
Creating the Oracle Base Directory
  To create the Oracle Base directory, enter the following commands as the root user:
  # mkdir -p /u01/app/oracle
  # chown -R oracle:oinstall /u01/app/oracle
  # chmod -R 775 /u01/app/oracle
Creating the Oracle RDBMS Home Directory
  To create the Oracle RDBMS Home directory, enter the following commands as the root user:
  # mkdir -p /u01/app/oracle/product/121/racdb
  # chown -R oracle:oinstall /u01/app/oracle/product/121/racdb
  # chmod -R 775 /u01/app/oracle/product/121/racdb

Install and verify the cvuqdisk RPM from the rpm directory of the GRID media: cvuqdisk-1.0.9-1.rpm
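The install step itself is not shown above; a minimal sketch ( cvuqdisk reads the owning group from the CVUQDISK_GRP variable, oinstall here; run from the rpm directory of the GRID media ):
# export CVUQDISK_GRP=oinstall
# rpm -Uvh cvuqdisk-1.0.9-1.rpm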
# rpm -qa | grep  cvu
cvuqdisk-1.0.9-1.x86_64

Verify the current OS status before installing CRS using cluvfy

Download  cluvfy from 
http://www.oracle.com/technetwork/database/clustering/downloads/cvu-download-homepage-099973.html
Cluster Verification Utility Download for Oracle Grid Infrastructure 12c 
Note: The latest CVU version (July 2013) can be used with all currently supported Oracle RAC versions, including Oracle RAC 10g, 
      Oracle RAC 11g and Oracle RAC 12c.

Run cluvfy to prepare the CRS installation 
$ ./bin/cluvfy comp sys -p crs -n grac121 -verbose -fixup
Verifying system requirement 
Check: Total memory 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  grac121       3.7426GB (3924412.0KB)    4GB (4194304.0KB)         failed    
Result: Total memory check failed
Check: Available memory 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  grac121       3.3971GB (3562152.0KB)    50MB (51200.0KB)          passed    
Result: Available memory check passed
Check: Swap space 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  grac121       6.0781GB (6373372.0KB)    3.7426GB (3924412.0KB)    passed    
Result: Swap space check passed
Check: Free disk space for "grac121:/usr,grac121:/var,grac121:/etc,grac121:/u01/app/11203/grid,grac121:/sbin,grac121:/tmp" 
  Path              Node Name     Mount point   Available     Required      Status      
  ----------------  ------------  ------------  ------------  ------------  ------------
  /usr              grac121       /             13.5332GB     7.9635GB      passed      
  /var              grac121       /             13.5332GB     7.9635GB      passed      
  /etc              grac121       /             13.5332GB     7.9635GB      passed      
  /u01/app/11203/grid  grac121       /             13.5332GB     7.9635GB      passed      
  /sbin             grac121       /             13.5332GB     7.9635GB      passed      
  /tmp              grac121       /             13.5332GB     7.9635GB      passed      
Result: Free disk space check passed for "grac121:/usr,grac121:/var,grac121:/etc,grac121:/u01/app/11203/grid,grac121:/sbin,grac121:/tmp"
Check: User existence for "grid" 
  Node Name     Status                    Comment                 
  ------------  ------------------------  ------------------------
  grac121       passed                    exists(501)             
Checking for multiple users with UID value 501
Result: Check for multiple users with UID value 501 passed 
Result: User existence check passed for "grid"
Check: Group existence for "oinstall" 
  Node Name     Status                    Comment                 
  ------------  ------------------------  ------------------------
  grac121       passed                    exists                  
Result: Group existence check passed for "oinstall"
Check: Group existence for "dba" 
  Node Name     Status                    Comment                 
  ------------  ------------------------  ------------------------
  grac121       passed                    exists                  
Result: Group existence check passed for "dba"
Check: Membership of user "grid" in group "oinstall" [as Primary]
  Node Name         User Exists   Group Exists  User in Group  Primary       Status      
  ----------------  ------------  ------------  ------------  ------------  ------------
  grac121           yes           yes           yes           yes           passed      
Result: Membership check for user "grid" in group "oinstall" [as Primary] passed
Check: Membership of user "grid" in group "dba" 
  Node Name         User Exists   Group Exists  User in Group  Status          
  ----------------  ------------  ------------  ------------  ----------------
  grac121           yes           yes           yes           passed          
Result: Membership check for user "grid" in group "dba" passed
Check: Run level 
  Node Name     run level                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  grac121       5                         3,5                       passed    
Result: Run level check passed
Check: Hard limits for "maximum open file descriptors" 
  Node Name         Type          Available     Required      Status          
  ----------------  ------------  ------------  ------------  ----------------
  grac121           hard          4096          65536         failed          
Result: Hard limits check failed for "maximum open file descriptors"
Check: Hard limits for "maximum user processes" 
  Node Name         Type          Available     Required      Status          
  ----------------  ------------  ------------  ------------  ----------------
  grac121           hard          30524         16384         passed          
Result: Hard limits check passed for "maximum user processes"
Check: System architecture 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  grac121       x86_64                    x86_64                    passed    
Result: System architecture check passed
Check: Kernel version 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  grac121       2.6.39-200.24.1.el6uek.x86_64  2.6.32                    passed    
Result: Kernel version check passed
Check: Kernel parameter for "semmsl" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  grac121           250           250           250           passed          
Result: Kernel parameter check passed for "semmsl"

Check: Kernel parameter for "semmns" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  grac121           32000         32000         32000         passed          
Result: Kernel parameter check passed for "semmns"
Check: Kernel parameter for "semopm" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  grac121           100           100           100           passed          
Result: Kernel parameter check passed for "semopm"
Check: Kernel parameter for "semmni" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  grac121           128           128           128           passed          
Result: Kernel parameter check passed for "semmni"
Check: Kernel parameter for "shmmax" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  grac121           4398046511104  4398046511104  2009298944    passed          
Result: Kernel parameter check passed for "shmmax"
Check: Kernel parameter for "shmmni" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  grac121           4096          4096          4096          passed          
Result: Kernel parameter check passed for "shmmni"
Check: Kernel parameter for "shmall" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  grac121           4294967296    4294967296    392441        passed          
Result: Kernel parameter check passed for "shmall"
Check: Kernel parameter for "file-max" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  grac121           6815744       6815744       6815744       passed          
Result: Kernel parameter check passed for "file-max"
Check: Kernel parameter for "ip_local_port_range" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  grac121           between 9000 & 65500  between 9000 & 65500  between 9000 & 65535  passed          
Result: Kernel parameter check passed for "ip_local_port_range"
Check: Kernel parameter for "rmem_default" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  grac121           262144        262144        262144        passed          
Result: Kernel parameter check passed for "rmem_default"
Check: Kernel parameter for "rmem_max" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  grac121           4194304       4194304       4194304       passed          
Result: Kernel parameter check passed for "rmem_max"
Check: Kernel parameter for "wmem_default" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  grac121           262144        262144        262144        passed          
Result: Kernel parameter check passed for "wmem_default"
Check: Kernel parameter for "wmem_max" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  grac121           1048576       1048576       1048576       passed          
Result: Kernel parameter check passed for "wmem_max"
Check: Kernel parameter for "aio-max-nr" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  grac121           1048576       1048576       1048576       passed          
Result: Kernel parameter check passed for "aio-max-nr"
Check: Package existence for "binutils" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  grac121       binutils-2.20.51.0.2-5.34.el6  binutils-2.20.51.0.2      passed    
Result: Package existence check passed for "binutils"
Check: Package existence for "compat-libcap1" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  grac121       compat-libcap1-1.10-1     compat-libcap1-1.10       passed    
Result: Package existence check passed for "compat-libcap1"
Check: Package existence for "compat-libstdc++-33(x86_64)" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  grac121       compat-libstdc++-33(x86_64)-3.2.3-69.el6  compat-libstdc++-33(x86_64)-3.2.3  passed    
Result: Package existence check passed for "compat-libstdc++-33(x86_64)"
Check: Package existence for "libgcc(x86_64)" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  grac121       libgcc(x86_64)-4.4.6-4.el6  libgcc(x86_64)-4.4.4      passed    
Result: Package existence check passed for "libgcc(x86_64)"
Check: Package existence for "libstdc++(x86_64)" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  grac121       libstdc++(x86_64)-4.4.6-4.el6  libstdc++(x86_64)-4.4.4   passed    
Result: Package existence check passed for "libstdc++(x86_64)"
Check: Package existence for "libstdc++-devel(x86_64)" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  grac121       libstdc++-devel(x86_64)-4.4.6-4.el6  libstdc++-devel(x86_64)-4.4.4  passed    
Result: Package existence check passed for "libstdc++-devel(x86_64)"
Check: Package existence for "sysstat" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  grac121       sysstat-9.0.4-20.el6      sysstat-9.0.4             passed    
Result: Package existence check passed for "sysstat"
Check: Package existence for "gcc" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  grac121       gcc-4.4.6-4.el6           gcc-4.4.4                 passed    
Result: Package existence check passed for "gcc"
Check: Package existence for "gcc-c++" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  grac121       gcc-c++-4.4.6-4.el6       gcc-c++-4.4.4             passed    
Result: Package existence check passed for "gcc-c++"
Check: Package existence for "ksh" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  grac121       ksh-20100621-16.el6       ksh-...                   passed    
Result: Package existence check passed for "ksh"
Check: Package existence for "make" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  grac121       make-3.81-20.el6          make-3.81                 passed    
Result: Package existence check passed for "make"
Check: Package existence for "glibc(x86_64)" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  grac121       glibc(x86_64)-2.12-1.80.el6_3.5  glibc(x86_64)-2.12        passed    
Result: Package existence check passed for "glibc(x86_64)"
Check: Package existence for "glibc-devel(x86_64)" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  grac121       glibc-devel(x86_64)-2.12-1.80.el6_3.5  glibc-devel(x86_64)-2.12  passed    
Result: Package existence check passed for "glibc-devel(x86_64)"
Check: Package existence for "libaio(x86_64)" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  grac121       libaio(x86_64)-0.3.107-10.el6  libaio(x86_64)-0.3.107    passed    
Result: Package existence check passed for "libaio(x86_64)"
Check: Package existence for "libaio-devel(x86_64)" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  grac121       libaio-devel(x86_64)-0.3.107-10.el6  libaio-devel(x86_64)-0.3.107  passed    
Result: Package existence check passed for "libaio-devel(x86_64)"
Check: Package existence for "nfs-utils" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  grac121       nfs-utils-1.2.3-26.el6    nfs-utils-1.2.3-15        passed    
Result: Package existence check passed for "nfs-utils"
Checking for multiple users with UID value 0
Result: Check for multiple users with UID value 0 passed 
Starting check for consistency of primary group of root user
  Node Name                             Status                  
  ------------------------------------  ------------------------
  grac121                               passed                  
Check for consistency of root user's primary group passed
Check: Time zone consistency 
Result: Time zone consistency check passed
******************************************************************************************
Following is the list of fixable prerequisites selected to fix in this session
******************************************************************************************
--------------                ---------------     ----------------    
Check failed.                 Failed on nodes     Reboot required?    
--------------                ---------------     ----------------    
Hard Limit: maximum open      grac121             no                  
file descriptors                                                      
Execute "/tmp/CVU_12.1.0.1.0_grid/runfixup.sh" as root user on nodes "grac121" to perform the fix up operations manually
--> Now run runfixup.sh" as root   on nodes "grac121" 
Press ENTER key to continue after execution of "/tmp/CVU_12.1.0.1.0_grid/runfixup.sh" has completed on nodes "grac121"
Fix: Hard Limit: maximum open file descriptors 
  Node Name                             Status                  
  ------------------------------------  ------------------------
  grac121                               successful              
Result: "Hard Limit: maximum open file descriptors" was successfully fixed on all the applicable nodes
Fix up operations were successfully completed on all the applicable nodes
Verification of system requirement was unsuccessful on all the specified nodes.
--
Fixup can correct some errors, but failures like too little memory or swap need manual intervention:
Check: Total memory 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  grac121       3.7426GB (3924412.0KB)    4GB (4194304.0KB)         failed    
Result: Total memory check failed

Verify GNS integrity ( note: if a GNS is already active you will get a warning ) 
$ ./bin/cluvfy comp gns -precrsinst -domain grid12c.example.com  -vip 192.168.1.58
Verifying GNS integrity 
Checking GNS integrity...
The GNS subdomain name "grid12c.example.com" is a valid domain name
GNS VIP "192.168.1.58" resolves to a valid IP address
GNS integrity check passed
Verification of GNS integrity was successful

Create ASM disks

D:\VM>VBoxManage createhd --filename C:\VM\GRAC12c\ASM\asm1_5G.vdi --size 5120 --format VDI --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Disk image created. UUID: a24ac5ee-f045-434d-8c2d-8fde5c73d6fa
D:\VM>VBoxManage createhd --filename C:\VM\GRAC12c\ASM\asm2_5G.vdi --size 5120 --format VDI --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Disk image created. UUID: ade56ace-a8fd-4383-aa8e-f2b4f7645372
D:\VM>VBoxManage createhd --filename C:\VM\GRAC12c\ASM\asm3_5G.vdi --size 5120 --format VDI --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Disk image created. UUID: 033563bc-e63d-435a-8fc6-e4f67dd54128
D:\VM>VBoxManage createhd --filename C:\VM\GRAC12c\ASM\asm4_5G.vdi --size 5120 --format VDI --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Disk image created. UUID: 7b60806c-78fc-4f4c-beb1-ff9bafd36eeb
Attach the ASM disks and make them shareable
D:\VM>VBoxManage storageattach grac121 --storagectl "SATA" --port 1  --device 0 --type hdd --medium C:\VM\GRAC12c\ASM\asm1_5G.vdi
D:\VM>VBoxManage storageattach grac121 --storagectl "SATA" --port 2  --device 0 --type hdd --medium C:\VM\GRAC12c\ASM\asm2_5G.vdi
D:\VM>VBoxManage storageattach grac121 --storagectl "SATA" --port 3  --device 0 --type hdd --medium C:\VM\GRAC12c\ASM\asm3_5G.vdi
D:\VM>VBoxManage storageattach grac121 --storagectl "SATA" --port 4  --device 0 --type hdd --medium C:\VM\GRAC12c\ASM\asm4_5G.vdi
D:\VM> VBoxManage modifyhd C:\VM\GRAC12c\ASM\asm1_5G.vdi --type shareable
D:\VM> VBoxManage modifyhd C:\VM\GRAC12c\ASM\asm2_5G.vdi --type shareable
D:\VM> VBoxManage modifyhd C:\VM\GRAC12c\ASM\asm3_5G.vdi --type shareable
D:\VM> VBoxManage modifyhd C:\VM\GRAC12c\ASM\asm4_5G.vdi --type shareable
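Before attaching the images to the second node you can double-check the sharing mode ( VBoxManage showhdinfo prints the image attributes ):
D:\VM> VBoxManage showhdinfo C:\VM\GRAC12c\ASM\asm1_5G.vdi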

Reboot your system and partition the new disks:

# fdisk /dev/sde
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-652, default 1): 
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-652, default 652): 
Using default value 652
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
--> Repeat the above partitioning for all newly created disks!

Configure ASMLib

Configure Oracle ASM library driver
# /usr/sbin/oracleasm configure -i
Configuring the Oracle ASM library driver.
This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.
Default user to own the driver interface [grid]: 
Default group to own the driver interface [asmadmin]: 
Start Oracle ASM library driver on boot (y/n) [y]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done

Create ASM disks:
# /etc/init.d/oracleasm createdisk data1 /dev/sdb1
Marking disk "data1" as an ASM disk:                       [  OK  ]
# /etc/init.d/oracleasm createdisk data2 /dev/sdc1
Marking disk "data2" as an ASM disk:                       [  OK  ]
# /etc/init.d/oracleasm createdisk data3 /dev/sdd1
Marking disk "data3" as an ASM disk:                       [  OK  ]
# /etc/init.d/oracleasm createdisk data4 /dev/sde1
Marking disk "data4" as an ASM disk:                       [  OK  ]

Verify ASM disks
# /etc/init.d/oracleasm listdisks
DATA1
DATA2
DATA3
DATA4
# /etc/init.d/oracleasm querydisk -d data1
Disk "DATA1" is a valid ASM disk on device [8, 17]
# /etc/init.d/oracleasm querydisk -d data2
Disk "DATA2" is a valid ASM disk on device [8, 33]
# /etc/init.d/oracleasm querydisk -d data3
Disk "DATA3" is a valid ASM disk on device [8, 49]
# /etc/init.d/oracleasm querydisk -d data4
Disk "DATA4" is a valid ASM disk on device [8, 65]

Clone VM and attach ASM disks

D:\VM>VBoxManage clonehd  d:\VM\GNS12c\grac121\grac121.vdi   d:\VM\GNS12c\grac122\grac122.vdi
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Clone hard disk created in format 'VDI'. UUID: 5fb10575-2293-489b-b105-289d5d49ab18
D:\VM>VBoxManage storageattach grac122 --storagectl "SATA" --port 1  --device 0 --type hdd --medium C:\VM\GRAC12c\ASM\asm1_5G.vdi
D:\VM>VBoxManage storageattach grac122 --storagectl "SATA" --port 2  --device 0 --type hdd --medium C:\VM\GRAC12c\ASM\asm2_5G.vdi
D:\VM>VBoxManage storageattach grac122 --storagectl "SATA" --port 3  --device 0 --type hdd --medium C:\VM\GRAC12c\ASM\asm3_5G.vdi
D:\VM>VBoxManage storageattach grac122 --storagectl "SATA" --port 4  --device 0 --type hdd --medium C:\VM\GRAC12c\ASM\asm4_5G.vdi

Set up the 2nd node

Reboot grac122 and change its hostname and TCP/IP settings ( eth1: 192.168.1.82 / eth2: 192.168.2.82 ). Verify with ping and nslookup.
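The clone already carries the ASMLib configuration from grac121, so the on-boot scan should pick up the shared disks; if not, rescan and verify manually ( standard ASMLib commands, not from the original log ):
# /etc/init.d/oracleasm scandisks
# /etc/init.d/oracleasm listdisks
DATA1
DATA2
DATA3
DATA4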

Run sshUserSetup.sh ( from the sshsetup directory of the GRID media ) on grac121:
./sshUserSetup.sh -user grid -hosts "grac121 grac122"  -noPromptPassphrase
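Quickly verify user equivalence as the grid user before running cluvfy:
$ ssh grac122 date
$ ssh grac122 hostname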

Verify CRS prerequisites for both nodes using the newly created ASM disks and the asmadmin group 

./bin/cluvfy stage -pre crsinst -n grac121,grac122 -asm \
    -asmdev /dev/oracleasm/disks/DATA1,/dev/oracleasm/disks/DATA2,/dev/oracleasm/disks/DATA3,/dev/oracleasm/disks/DATA4 \
    -presence local -networks eth1:192.168.1.0:PUBLIC/eth2:192.168.2.0:cluster_interconnect

Potential Error:
ERROR:  /dev/oracleasm/disks/DATA4
grac122:Cannot verify the shared state for device /dev/sde1 due to Universally Unique Identifiers 
    (UUIDs) not being found, or different values being found, for this device across nodes:
    grac121,grac122
--> The error occurs because the test system uses VirtualBox and the partitions do not return a UUID.
The installation can be continued ignoring this error. On a proper system where a UUID is available,
cluvfy reports success for these checks.
( See http://asanga-pradeep.blogspot.co.uk/2013/08/installing-12c-12101-rac-on-rhel-6-with.html )

Install 12.1 clusterware

$ cd grid
$ ls
install  response  rpm    runcluvfy.sh  runInstaller  sshsetup  stage  welcome.html
$ ./runInstaller 
-> Configure a standard cluster
-> Advanced Installation
   Cluster name : grac112
   Scan name    : grac112-scan.grid12c.example.com
   Scan port    : 1521
   -> Create New GNS
      GNS VIP address: 192.168.1.58
      GNS Sub domain : grid12c.example.com
  Public Hostname           Virtual Hostname 
  grac121.example.com        AUTO
  grac122.example.com        AUTO
-> Test and Set up SSH connectivity
-> Setup network Interfaces
   eth0: don't use
   eth1: PUBLIC
   eth2: Private Cluster_Interconnect
-> Configure GRID Infrastructure: YES
-> Use standard ASM for storage
-> ASM setup
   Diskgroup         : DATA
   Disk discovery path: /dev/oracleasm/disks/*

Run root.sh scripts on grac121:

# /u01/app/121/grid/root.sh
Performing root user operation for Oracle 12c 
The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/121/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]: 
The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n) 
[n]: y
   Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n) 
[n]: y
   Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n) 
[n]: y
   Copying coraenv to /usr/local/bin ...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/121/grid/crs/install/crsconfig_params
2013/08/25 14:56:52 CLSRSC-363: User ignored prerequisites during installation
OLR initialization - successful
  root wallet
  root wallet cert
  root cert export
  peer wallet
  profile reader wallet
  pa wallet
  peer wallet keys
  pa wallet keys
  peer cert request
  pa cert request
  peer cert
  pa cert
  peer root cert TP
  profile reader root cert TP
  pa root cert TP
  peer pa cert TP
  pa peer cert TP
  profile reader pa cert TP
  profile reader peer cert TP
  peer user cert
  pa user cert
2013/08/25 14:57:38 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.conf'
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'grac121'
CRS-2677: Stop of 'ora.drivers.acfs' on 'grac121' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'grac121'
CRS-2672: Attempting to start 'ora.mdnsd' on 'grac121'
CRS-2676: Start of 'ora.mdnsd' on 'grac121' succeeded
CRS-2676: Start of 'ora.evmd' on 'grac121' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'grac121'
CRS-2676: Start of 'ora.gpnpd' on 'grac121' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'grac121'
CRS-2672: Attempting to start 'ora.gipcd' on 'grac121'
CRS-2676: Start of 'ora.cssdmonitor' on 'grac121' succeeded
CRS-2676: Start of 'ora.gipcd' on 'grac121' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'grac121'
CRS-2672: Attempting to start 'ora.diskmon' on 'grac121'
CRS-2676: Start of 'ora.diskmon' on 'grac121' succeeded
CRS-2676: Start of 'ora.cssd' on 'grac121' succeeded
ASM created and started successfully.
Disk Group DATA created successfully.
CRS-2672: Attempting to start 'ora.crf' on 'grac121'
CRS-2672: Attempting to start 'ora.storage' on 'grac121'
CRS-2676: Start of 'ora.storage' on 'grac121' succeeded
CRS-2676: Start of 'ora.crf' on 'grac121' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'grac121'
CRS-2676: Start of 'ora.crsd' on 'grac121' succeeded
CRS-4256: Updating the profile
Successful addition of voting disk e158882a16cf4f44bfab3fac241e5152.
Successful addition of voting disk b93b579e97f24ff4bfb58e7a1d9e628b.
Successful addition of voting disk 2a29ac7797544f8cbfb6650ce7c287fe.
Successfully replaced voting disk group with +DATA.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   e158882a16cf4f44bfab3fac241e5152 (/dev/oracleasm/disks/DATA1) [DATA]
 2. ONLINE   b93b579e97f24ff4bfb58e7a1d9e628b (/dev/oracleasm/disks/DATA2) [DATA]
 3. ONLINE   2a29ac7797544f8cbfb6650ce7c287fe (/dev/oracleasm/disks/DATA3) [DATA]
Located 3 voting disk(s).
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'grac121'
CRS-2673: Attempting to stop 'ora.crsd' on 'grac121'
CRS-2677: Stop of 'ora.crsd' on 'grac121' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'grac121'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'grac121'
CRS-2673: Attempting to stop 'ora.ctssd' on 'grac121'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'grac121'
CRS-2677: Stop of 'ora.drivers.acfs' on 'grac121' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'grac121' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'grac121' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'grac121' succeeded
CRS-2673: Attempting to stop 'ora.evmd' on 'grac121'
CRS-2673: Attempting to stop 'ora.storage' on 'grac121'
CRS-2677: Stop of 'ora.storage' on 'grac121' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'grac121'
CRS-2677: Stop of 'ora.asm' on 'grac121' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'grac121'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'grac121' succeeded
CRS-2677: Stop of 'ora.evmd' on 'grac121' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'grac121'
CRS-2677: Stop of 'ora.cssd' on 'grac121' succeeded
CRS-2673: Attempting to stop 'ora.crf' on 'grac121'
CRS-2677: Stop of 'ora.crf' on 'grac121' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'grac121'
CRS-2677: Stop of 'ora.gipcd' on 'grac121' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'grac121' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'grac121'
CRS-2672: Attempting to start 'ora.evmd' on 'grac121'
CRS-2676: Start of 'ora.mdnsd' on 'grac121' succeeded
CRS-2676: Start of 'ora.evmd' on 'grac121' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'grac121'
CRS-2676: Start of 'ora.gpnpd' on 'grac121' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'grac121'
CRS-2676: Start of 'ora.gipcd' on 'grac121' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'grac121'
CRS-2676: Start of 'ora.cssdmonitor' on 'grac121' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'grac121'
CRS-2672: Attempting to start 'ora.diskmon' on 'grac121'
CRS-2676: Start of 'ora.diskmon' on 'grac121' succeeded
CRS-2789: Cannot stop resource 'ora.diskmon' as it is not running on server 'grac121'
CRS-2676: Start of 'ora.cssd' on 'grac121' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'grac121'
CRS-2672: Attempting to start 'ora.ctssd' on 'grac121'
CRS-2676: Start of 'ora.ctssd' on 'grac121' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'grac121' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'grac121'
CRS-2676: Start of 'ora.asm' on 'grac121' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'grac121'
CRS-2676: Start of 'ora.storage' on 'grac121' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'grac121'
CRS-2676: Start of 'ora.crf' on 'grac121' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'grac121'
CRS-2676: Start of 'ora.crsd' on 'grac121' succeeded
CRS-6023: Starting Oracle Cluster Ready Services-managed resources
CRS-6017: Processing resource auto-start for servers: grac121
CRS-6016: Resource auto-start has completed for server grac121
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2013/08/25 15:07:34 CLSRSC-343: Successfully started Oracle clusterware stack
CRS-2672: Attempting to start 'ora.asm' on 'grac121'
CRS-2676: Start of 'ora.asm' on 'grac121' succeeded
CRS-2672: Attempting to start 'ora.DATA.dg' on 'grac121'
CRS-2676: Start of 'ora.DATA.dg' on 'grac121' succeeded
2013/08/25 15:11:00 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

Run root.sh scripts on grac122

# /u01/app/121/grid/root.sh
Performing root user operation for Oracle 12c 
The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/121/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]: 
The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n) 
[n]: y
   Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n) 
[n]: y
   Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n) 
[n]: y
   Copying coraenv to /usr/local/bin ...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/121/grid/crs/install/crsconfig_params
2013/08/25 18:51:55 CLSRSC-363: User ignored prerequisites during installation
OLR initialization - successful
2013/08/25 18:52:18 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.conf'
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'grac122'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'grac122'
CRS-2677: Stop of 'ora.drivers.acfs' on 'grac122' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'grac122' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'grac122'
CRS-2672: Attempting to start 'ora.evmd' on 'grac122'
CRS-2676: Start of 'ora.evmd' on 'grac122' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'grac122' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'grac122'
CRS-2676: Start of 'ora.gpnpd' on 'grac122' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'grac122'
CRS-2676: Start of 'ora.gipcd' on 'grac122' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'grac122'
CRS-2676: Start of 'ora.cssdmonitor' on 'grac122' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'grac122'
CRS-2672: Attempting to start 'ora.diskmon' on 'grac122'
CRS-2676: Start of 'ora.diskmon' on 'grac122' succeeded
CRS-2789: Cannot stop resource 'ora.diskmon' as it is not running on server 'grac122'
CRS-2676: Start of 'ora.cssd' on 'grac122' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'grac122'
CRS-2672: Attempting to start 'ora.ctssd' on 'grac122'
CRS-2676: Start of 'ora.ctssd' on 'grac122' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'grac122' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'grac122'
CRS-2676: Start of 'ora.asm' on 'grac122' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'grac122'
CRS-2676: Start of 'ora.storage' on 'grac122' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'grac122'
CRS-2676: Start of 'ora.crf' on 'grac122' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'grac122'
CRS-2676: Start of 'ora.crsd' on 'grac122' succeeded
CRS-6017: Processing resource auto-start for servers: grac122
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'grac121'
CRS-2672: Attempting to start 'ora.ons' on 'grac122'
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'grac121' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'grac121'
CRS-2677: Stop of 'ora.scan1.vip' on 'grac121' succeeded
CRS-2672: Attempting to start 'ora.scan1.vip' on 'grac122'
CRS-2676: Start of 'ora.ons' on 'grac122' succeeded
CRS-2676: Start of 'ora.scan1.vip' on 'grac122' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on 'grac122'
CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'grac122' succeeded
CRS-6016: Resource auto-start has completed for server grac122
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2013/08/25 18:58:50 CLSRSC-343: Successfully started Oracle clusterware stack
2013/08/25 18:59:06 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

 

Verify the CRS installation with a modified 'crsctl stat res -t' ( my_crs_stat script ):

NAME                           TARGET     STATE           SERVER       STATE_DETAILS   
-------------------------      ---------- ----------      ------------ ------------------
ora.DATA.dg                    ONLINE     ONLINE          grac121      STABLE 
ora.DATA.dg                    ONLINE     ONLINE          grac122      STABLE 
ora.LISTENER.lsnr              ONLINE     ONLINE          grac121      STABLE 
ora.LISTENER.lsnr              ONLINE     ONLINE          grac122      STABLE 
ora.asm                        ONLINE     ONLINE          grac121      Started,STABLE 
ora.asm                        ONLINE     ONLINE          grac122      Started,STABLE 
ora.net1.network               ONLINE     ONLINE          grac121      STABLE 
ora.net1.network               ONLINE     ONLINE          grac122      STABLE 
ora.ons                        ONLINE     ONLINE          grac121      STABLE 
ora.ons                        ONLINE     ONLINE          grac122      STABLE 
ora.LISTENER_SCAN1.lsnr        ONLINE     ONLINE          grac122      STABLE 
ora.LISTENER_SCAN2.lsnr        ONLINE     ONLINE          grac121      STABLE 
ora.LISTENER_SCAN3.lsnr        ONLINE     ONLINE          grac121      STABLE 
ora.MGMTLSNR                   ONLINE     ONLINE          grac121      169.254.187.22 192.1
ora.cvu                        ONLINE     ONLINE          grac121      STABLE 
ora.gns                        ONLINE     ONLINE          grac121      STABLE 
ora.gns.vip                    ONLINE     ONLINE          grac121      STABLE 
ora.grac121.vip                ONLINE     ONLINE          grac121      STABLE 
ora.grac122.vip                ONLINE     ONLINE          grac122      STABLE 
ora.mgmtdb                     ONLINE     ONLINE          grac121      Open,STABLE 
ora.oc4j                       ONLINE     ONLINE          grac121      STABLE 
ora.scan1.vip                  ONLINE     ONLINE          grac122      STABLE 
ora.scan2.vip                  ONLINE     ONLINE          grac121      STABLE 
ora.scan3.vip                  ONLINE     ONLINE          grac121      STABLE

Verify CRS installation with cluvfy


$ ./bin/cluvfy stage -post crsinst -n grac121,grac122 
Performing post-checks for cluster services setup 
Checking node reachability...
Node reachability check passed from node "grac121"
Checking user equivalence...
User equivalence check passed for user "grid"
Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
Check: Node connectivity using interfaces on subnet "192.168.1.0"
Node connectivity passed for subnet "192.168.1.0" with node(s) grac122,grac121
TCP connectivity check passed for subnet "192.168.1.0"
Check: Node connectivity using interfaces on subnet "192.168.2.0"
Node connectivity passed for subnet "192.168.2.0" with node(s) grac121,grac122
TCP connectivity check passed for subnet "192.168.2.0"
Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.1.0".
Subnet mask consistency check passed for subnet "192.168.2.0".
Subnet mask consistency check passed.
Node connectivity check passed
Checking multicast communication...
Checking subnet "192.168.2.0" for multicast communication with multicast group "224.0.0.251"...
Check of subnet "192.168.2.0" for multicast communication with multicast group "224.0.0.251" passed.
Check of multicast communication passed.
Time zone consistency check passed
Checking Cluster manager integrity... 
Checking CSS daemon...
Oracle Cluster Synchronization Services appear to be online.
Cluster manager integrity check passed
UDev attributes check for OCR locations started...
UDev attributes check passed for OCR locations 
UDev attributes check for Voting Disk locations started...
UDev attributes check passed for Voting Disk locations 
Default user file creation mask check passed
Checking cluster integrity...
Cluster integrity check passed
Checking OCR integrity...
Checking the absence of a non-clustered configuration...
All nodes free of non-clustered, local-only configurations
Checking OCR config file "/etc/oracle/ocr.loc"...
OCR config file "/etc/oracle/ocr.loc" check successful
Disk group for ocr location "+DATA" is available on all the nodes
NOTE: 
This check does not verify the integrity of the OCR contents. Execute 'ocrcheck' as a privileged user to verify the contents of OCR.
OCR integrity check passed
Checking CRS integrity...
Clusterware version consistency passed.
CRS integrity check passed
Checking node application existence...
Checking existence of VIP node application (required)
VIP node application check passed
Checking existence of NETWORK node application (required)
NETWORK node application check passed
Checking existence of ONS node application (optional)
ONS node application check passed
Checking Single Client Access Name (SCAN)...
Checking TCP connectivity to SCAN Listeners...
TCP connectivity to SCAN Listeners exists on all cluster nodes
Checking name resolution setup for "grac112-scan.grid12c.example.com"...
Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...
All nodes have same "hosts" entry defined in file "/etc/nsswitch.conf"
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed
Checking SCAN IP addresses...
Check of SCAN IP addresses passed
Verification of SCAN VIP and Listener setup passed
Checking OLR integrity...
Check of existence of OLR configuration file "/etc/oracle/olr.loc" passed
Check of attributes of OLR configuration file "/etc/oracle/olr.loc" passed
WARNING: 
This check does not verify the integrity of the OLR contents. Execute 'ocrcheck -local' as a privileged user to verify the contents of OLR.
OLR integrity check passed
Checking GNS integrity...
The GNS subdomain name "grid12c.example.com" is a valid domain name
Checking if the GNS VIP belongs to same subnet as the public network...
Public network subnets "192.168.1.0, 192.168.1.0, 192.168.1.0, 192.168.1.0, 192.168.1.0" match with the GNS VIP "192.168.1.0, 192.168.1.0, 192.168.1.0, 192.168.1.0, 192.168.1.0"
GNS VIP "192.168.1.58" resolves to a valid IP address
GNS resolved IP addresses are reachable
GNS resolved IP addresses are reachable
GNS resolved IP addresses are reachable
GNS resource configuration check passed
GNS VIP resource configuration check passed.
GNS integrity check passed
Checking Oracle Cluster Voting Disk configuration...
Oracle Cluster Voting Disk configuration check passed
User "grid" is not part of "root" group. Check passed
Checking if Clusterware is installed on all nodes...
Check of Clusterware install passed
Checking if CTSS Resource is running on all nodes...
CTSS resource check passed
Querying CTSS for time offset on all nodes...
Query of CTSS for time offset passed
Check CTSS state started...
CTSS is in Observer state. Switching over to clock synchronization checks using NTP
Starting Clock synchronization checks using Network Time Protocol(NTP)...
NTP Configuration file check started...
NTP Configuration file check passed
Checking daemon liveness...
Liveness check passed for "ntpd"
Check for NTP daemon or service alive passed on all nodes
NTP common Time Server Check started...
Check of common NTP Time Server passed
Clock time offset check from NTP Time Server started...
Clock time offset check passed
Clock synchronization check using Network Time Protocol(NTP) passed
Oracle Cluster Time Synchronization Services check passed
Checking VIP configuration.
Checking VIP Subnet configuration.
Check for VIP Subnet configuration passed.
Checking VIP reachability
Check for VIP reachability passed.
Post-check for cluster services setup was successful.

 

RDBMS install

Verify pre RDBMS install with cluvfy

$ ./bin/cluvfy stage -pre dbcfg -n grac121,grac122 -d /u01/app/oracle/product/121/racdb -verbose -fixup
In this case cluvfy builds a fixup script to create the oper group - run it on both nodes:
# /tmp/CVU_12.1.0.1.0_oracle/runfixup.sh
Solve all errors until cluvfy reports : Pre-check for database configuration was successful.

Run Installer from Database media and run related root.sh scripts

$ cd /KITS/ORACLE/121/database
$ ./runInstaller  
-> Server Class
-> Oracle Real Application Clusters database installation
-> Test/Create SSH connectivity
-> Advanced Install
-> Enterprise Edition
   Global Database name : crac12
   OSDBA  group : dba
   OSOPER group : oper
Run root.sh on grac121 and grac122

Verify the RDBMS installation with: $GRID_HOME/bin/crsctl stat res -t

$ my_crs_stat
NAME                           TARGET     STATE           SERVER       STATE_DETAILS   
-------------------------      ---------- ----------      ------------ ------------------
ora.DATA.dg                    ONLINE     ONLINE          grac121      STABLE 
ora.DATA.dg                    ONLINE     ONLINE          grac122      STABLE 
ora.LISTENER.lsnr              ONLINE     ONLINE          grac121      STABLE 
ora.LISTENER.lsnr              ONLINE     ONLINE          grac122      STABLE 
ora.asm                        ONLINE     ONLINE          grac121      Started,STABLE 
ora.asm                        ONLINE     ONLINE          grac122      Started,STABLE 
ora.net1.network               ONLINE     ONLINE          grac121      STABLE 
ora.net1.network               ONLINE     ONLINE          grac122      STABLE 
ora.ons                        ONLINE     ONLINE          grac121      STABLE 
ora.ons                        ONLINE     ONLINE          grac122      STABLE 
ora.LISTENER_SCAN1.lsnr        ONLINE     ONLINE          grac122      STABLE 
ora.LISTENER_SCAN2.lsnr        ONLINE     ONLINE          grac121      STABLE 
ora.LISTENER_SCAN3.lsnr        ONLINE     ONLINE          grac121      STABLE 
ora.MGMTLSNR                   ONLINE     ONLINE          grac121      169.254.187.22 192.1
ora.crac12.db                  ONLINE     ONLINE          grac121      Open,STABLE 
ora.crac12.db                  ONLINE     ONLINE          grac122      Open,STABLE 
ora.cvu                        ONLINE     ONLINE          grac121      STABLE 
ora.gns                        ONLINE     ONLINE          grac121      STABLE 
ora.gns.vip                    ONLINE     ONLINE          grac121      STABLE 
ora.grac121.vip                ONLINE     ONLINE          grac121      STABLE 
ora.grac122.vip                ONLINE     ONLINE          grac122      STABLE 
ora.mgmtdb                     ONLINE     ONLINE          grac121      Open,STABLE 
ora.oc4j                       ONLINE     ONLINE          grac121      STABLE 
ora.scan1.vip                  ONLINE     ONLINE          grac122      STABLE 
ora.scan2.vip                  ONLINE     ONLINE          grac121      STABLE 
ora.scan3.vip                  ONLINE     ONLINE          grac121      STABLE

 

Verify database status with srvctl, olsnodes

$ srvctl config database -d crac12
Database unique name: crac12
Database name: crac12
Oracle home: /u01/app/oracle/product/121/rac121
Oracle user: oracle
Spfile: +DATA/crac12/spfilecrac12.ora
Password file: +DATA/crac12/orapwcrac12
Domain: 
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: crac12
Database instances: crac121,crac122
Disk Groups: DATA
Mount point paths: 
Services: 
Type: RAC
Start concurrency: 
Stop concurrency: 
Database is administrator managed
$  srvctl status  database -d crac12
Instance crac121 is running on node grac121
Instance crac122 is running on node grac122
$ sqlplus / as sysdba
SQL> select * from v$active_instances
INST_NUMBER INST_NAME                   CON_ID
----------- ------------------------------ ----------
      1 grac121.example.com:crac121         0
      2 grac122.example.com:crac122         0

Print node number with the node name
$ olsnodes -n -l
grac121    1
Print private interconnect address for the local node
$ olsnodes -p -l
grac121    192.168.2.81
Print virtual IP address with the node name
$ olsnodes -i -l
grac121    192.168.1.147
Print above info via a single command
$  olsnodes -n -p -i -l
grac121    1    192.168.2.81    192.168.1.147

Verify GNS/SCAN settings:
$ $GRID_HOME/bin/srvctl config gns -list
Oracle-GNS A 192.168.1.58 Unique Flags: 0x15
grac112-scan A 192.168.1.148 Unique Flags: 0x81
grac112-scan A 192.168.1.149 Unique Flags: 0x81
grac112-scan A 192.168.1.150 Unique Flags: 0x81
grac112-scan1-vip A 192.168.1.148 Unique Flags: 0x81
grac112-scan2-vip A 192.168.1.149 Unique Flags: 0x81
grac112-scan3-vip A 192.168.1.150 Unique Flags: 0x81
grac112.Oracle-GNS SRV Target: Oracle-GNS Protocol: tcp Port: 22526 Weight: 0 Priority: 0 Flags: 0x15
grac112.Oracle-GNS TXT CLUSTER_NAME="grac112", CLUSTER_GUID="191a52ec780d5f30bf460333c96cb46e", NODE_ADDRESS="192.168.1.58", SERVER_STATE="RUNNING", VERSION="12.1.0.1.0", DOMAIN="grid12c.example.com" Flags: 0x15
grac121-vip A 192.168.1.147 Unique Flags: 0x81
grac122-vip A 192.168.1.152 Unique Flags: 0x81

$  $GRID_HOME/bin/srvctl config gns  -subdomain
Domain served by GNS: grid12c.example.com

$  $GRID_HOME/bin/srvctl config scan
SCAN name: grac112-scan.grid12c.example.com, Network: 1
Subnet IPv4: 192.168.1.0/255.255.255.0/eth1
Subnet IPv6: 
SCAN 0 IPv4 VIP: -/scan1-vip/192.168.1.148
SCAN name: grac112-scan.grid12c.example.com, Network: 1
Subnet IPv4: 192.168.1.0/255.255.255.0/eth1
Subnet IPv6: 
SCAN 1 IPv4 VIP: -/scan2-vip/192.168.1.149
SCAN name: grac112-scan.grid12c.example.com, Network: 1
Subnet IPv4: 192.168.1.0/255.255.255.0/eth1
Subnet IPv6: 
SCAN 2 IPv4 VIP: -/scan3-vip/192.168.1.150

 

Reference:

  • http://www.oracle-base.com/articles/12c/oracle-db-12cr1-rac-installation-on-oracle-linux-6-using-virtualbox.php#install_db_software

 
