Install Oracle RAC 12.1 on OEL 6.3 and VirtualBox 4.2 with GNS and ASMLIB

Generic Network setup considerations

Network setup using 3 network devices: 
eth0   -   DHCP ( either local LTE router 192.168.1.1 or corporate VPN network ) 
eth1   -   Public interface ( grac121: 192.168.1.81 / grac122: 192.168.1.82 ) 
eth2   -   Private cluster interconnect ( grac121int: 192.168.2.91 / grac122int: 192.168.2.92 )

Modify ifcfg-eth0, as this file is the source for the newly created /etc/resolv.conf during reboot:
# cat /etc/sysconfig/network-scripts/ifcfg-eth0 
...
DNS1=192.168.1.50
DNS2=192.135.82.44
DNS3=192.168.1.1
DOMAIN="example.com grid12c.example.com de.oracle.com"
PEERDNS=no

After a reboot /etc/resolv.conf should look like this:
# Generated by NetworkManager
search example.com grid12c.example.com de.oracle.com
nameserver 192.168.1.50
nameserver 192.135.82.44
nameserver 192.168.1.1
This translates to the following nameserver usage:
  192.168.1.50    - local nameserver handling the domains example.com + grid12c.example.com
  192.135.82.44   - corporate nameserver 
  192.168.1.1     - local LTE router nameserver
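The mapping from ifcfg-eth0 to /etc/resolv.conf above can be sketched as a small extraction step. A minimal sketch (the ifcfg content is inlined here purely for illustration; on a real node you would read /etc/sysconfig/network-scripts/ifcfg-eth0 instead):

```shell
# Turn the DNSn= entries from ifcfg-eth0 into the nameserver lines that
# NetworkManager writes to /etc/resolv.conf, preserving their order.
ifcfg='DNS1=192.168.1.50
DNS2=192.135.82.44
DNS3=192.168.1.1
DOMAIN="example.com grid12c.example.com de.oracle.com"'
nameservers=$(printf '%s\n' "$ifcfg" | sed -n 's/^DNS[0-9]=/nameserver /p')
printf '%s\n' "$nameservers"
```

Comparing this output against the actual /etc/resolv.conf after a reboot is a quick way to confirm PEERDNS/DNSn handling worked.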

Testing the GNS nameserver ( 192.168.1.55 ) - note that this test will only work after a successful GNS installation 
#  nslookup grac112-scan
Server:        192.168.1.50
Address:    192.168.1.50#53
Non-authoritative answer:
Name:    grac112-scan.grid12c.example.com
Address: 192.168.1.149
Name:    grac112-scan.grid12c.example.com
Address: 192.168.1.150
Name:    grac112-scan.grid12c.example.com
Address: 192.168.1.148
...

Setup  BIND, NTP, DHCP in a LAN on a separate VirtualBox VM

For this task use steps from my 11.2.0.3 installation.

Install cvuqdisk-1.0.9-1.rpm from the rpm directory of the Grid media:

# rpm -qa | grep  cvu
cvuqdisk-1.0.9-1.x86_64

Verify GNS setup and system setup with cluvfy:

$ ./bin/cluvfy comp sys -p crs -n grac121 -verbose -fixup
Verifying system requirement 
Check: Total memory 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  grac121       3.7426GB (3924412.0KB)    4GB (4194304.0KB)         failed    
Result: Total memory check failed
Check: Available memory 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  grac121       3.3971GB (3562152.0KB)    50MB (51200.0KB)          passed    
Result: Available memory check passed
Check: Swap space 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  grac121       6.0781GB (6373372.0KB)    3.7426GB (3924412.0KB)    passed    
Result: Swap space check passed
Check: Free disk space for "grac121:/usr,grac121:/var,grac121:/etc,grac121:/u01/app/11203/grid,grac121:/sbin,grac121:/tmp" 
  Path              Node Name     Mount point   Available     Required      Status      
  ----------------  ------------  ------------  ------------  ------------  ------------
  /usr              grac121       /             13.5332GB     7.9635GB      passed      
  /var              grac121       /             13.5332GB     7.9635GB      passed      
  /etc              grac121       /             13.5332GB     7.9635GB      passed      
  /u01/app/11203/grid  grac121       /             13.5332GB     7.9635GB      passed      
  /sbin             grac121       /             13.5332GB     7.9635GB      passed      
  /tmp              grac121       /             13.5332GB     7.9635GB      passed      
Result: Free disk space check passed for "grac121:/usr,grac121:/var,grac121:/etc,grac121:/u01/app/11203/grid,grac121:/sbin,grac121:/tmp"
Check: User existence for "grid" 
  Node Name     Status                    Comment                 
  ------------  ------------------------  ------------------------
  grac121       passed                    exists(501)             
Checking for multiple users with UID value 501
Result: Check for multiple users with UID value 501 passed 
Result: User existence check passed for "grid"
Check: Group existence for "oinstall" 
  Node Name     Status                    Comment                 
  ------------  ------------------------  ------------------------
  grac121       passed                    exists                  
Result: Group existence check passed for "oinstall"
Check: Group existence for "dba" 
  Node Name     Status                    Comment                 
  ------------  ------------------------  ------------------------
  grac121       passed                    exists                  
Result: Group existence check passed for "dba"
Check: Membership of user "grid" in group "oinstall" [as Primary]
  Node Name         User Exists   Group Exists  User in Group  Primary       Status      
  ----------------  ------------  ------------  ------------  ------------  ------------
  grac121           yes           yes           yes           yes           passed      
Result: Membership check for user "grid" in group "oinstall" [as Primary] passed
Check: Membership of user "grid" in group "dba" 
  Node Name         User Exists   Group Exists  User in Group  Status          
  ----------------  ------------  ------------  ------------  ----------------
  grac121           yes           yes           yes           passed          
Result: Membership check for user "grid" in group "dba" passed
Check: Run level 
  Node Name     run level                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  grac121       5                         3,5                       passed    
Result: Run level check passed
Check: Hard limits for "maximum open file descriptors" 
  Node Name         Type          Available     Required      Status          
  ----------------  ------------  ------------  ------------  ----------------
  grac121           hard          4096          65536         failed          
Result: Hard limits check failed for "maximum open file descriptors"
Check: Hard limits for "maximum user processes" 
  Node Name         Type          Available     Required      Status          
  ----------------  ------------  ------------  ------------  ----------------
  grac121           hard          30524         16384         passed          
Result: Hard limits check passed for "maximum user processes"
Check: System architecture 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  grac121       x86_64                    x86_64                    passed    
Result: System architecture check passed
Check: Kernel version 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  grac121       2.6.39-200.24.1.el6uek.x86_64  2.6.32                    passed    
Result: Kernel version check passed
Check: Kernel parameter for "semmsl" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  grac121           250           250           250           passed          
Result: Kernel parameter check passed for "semmsl"

Check: Kernel parameter for "semmns" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  grac121           32000         32000         32000         passed          
Result: Kernel parameter check passed for "semmns"
Check: Kernel parameter for "semopm" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  grac121           100           100           100           passed          
Result: Kernel parameter check passed for "semopm"
Check: Kernel parameter for "semmni" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  grac121           128           128           128           passed          
Result: Kernel parameter check passed for "semmni"
Check: Kernel parameter for "shmmax" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  grac121           4398046511104  4398046511104  2009298944    passed          
Result: Kernel parameter check passed for "shmmax"
Check: Kernel parameter for "shmmni" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  grac121           4096          4096          4096          passed          
Result: Kernel parameter check passed for "shmmni"
Check: Kernel parameter for "shmall" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  grac121           4294967296    4294967296    392441        passed          
Result: Kernel parameter check passed for "shmall"
Check: Kernel parameter for "file-max" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  grac121           6815744       6815744       6815744       passed          
Result: Kernel parameter check passed for "file-max"
Check: Kernel parameter for "ip_local_port_range" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  grac121           between 9000 & 65500  between 9000 & 65500  between 9000 & 65535  passed          
Result: Kernel parameter check passed for "ip_local_port_range"
Check: Kernel parameter for "rmem_default" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  grac121           262144        262144        262144        passed          
Result: Kernel parameter check passed for "rmem_default"
Check: Kernel parameter for "rmem_max" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  grac121           4194304       4194304       4194304       passed          
Result: Kernel parameter check passed for "rmem_max"
Check: Kernel parameter for "wmem_default" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  grac121           262144        262144        262144        passed          
Result: Kernel parameter check passed for "wmem_default"
Check: Kernel parameter for "wmem_max" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  grac121           1048576       1048576       1048576       passed          
Result: Kernel parameter check passed for "wmem_max"
Check: Kernel parameter for "aio-max-nr" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  grac121           1048576       1048576       1048576       passed          
Result: Kernel parameter check passed for "aio-max-nr"
Check: Package existence for "binutils" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  grac121       binutils-2.20.51.0.2-5.34.el6  binutils-2.20.51.0.2      passed    
Result: Package existence check passed for "binutils"
Check: Package existence for "compat-libcap1" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  grac121       compat-libcap1-1.10-1     compat-libcap1-1.10       passed    
Result: Package existence check passed for "compat-libcap1"
Check: Package existence for "compat-libstdc++-33(x86_64)" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  grac121       compat-libstdc++-33(x86_64)-3.2.3-69.el6  compat-libstdc++-33(x86_64)-3.2.3  passed    
Result: Package existence check passed for "compat-libstdc++-33(x86_64)"
Check: Package existence for "libgcc(x86_64)" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  grac121       libgcc(x86_64)-4.4.6-4.el6  libgcc(x86_64)-4.4.4      passed    
Result: Package existence check passed for "libgcc(x86_64)"
Check: Package existence for "libstdc++(x86_64)" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  grac121       libstdc++(x86_64)-4.4.6-4.el6  libstdc++(x86_64)-4.4.4   passed    
Result: Package existence check passed for "libstdc++(x86_64)"
Check: Package existence for "libstdc++-devel(x86_64)" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  grac121       libstdc++-devel(x86_64)-4.4.6-4.el6  libstdc++-devel(x86_64)-4.4.4  passed    
Result: Package existence check passed for "libstdc++-devel(x86_64)"
Check: Package existence for "sysstat" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  grac121       sysstat-9.0.4-20.el6      sysstat-9.0.4             passed    
Result: Package existence check passed for "sysstat"
Check: Package existence for "gcc" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  grac121       gcc-4.4.6-4.el6           gcc-4.4.4                 passed    
Result: Package existence check passed for "gcc"
Check: Package existence for "gcc-c++" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  grac121       gcc-c++-4.4.6-4.el6       gcc-c++-4.4.4             passed    
Result: Package existence check passed for "gcc-c++"
Check: Package existence for "ksh" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  grac121       ksh-20100621-16.el6       ksh-...                   passed    
Result: Package existence check passed for "ksh"
Check: Package existence for "make" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  grac121       make-3.81-20.el6          make-3.81                 passed    
Result: Package existence check passed for "make"
Check: Package existence for "glibc(x86_64)" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  grac121       glibc(x86_64)-2.12-1.80.el6_3.5  glibc(x86_64)-2.12        passed    
Result: Package existence check passed for "glibc(x86_64)"
Check: Package existence for "glibc-devel(x86_64)" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  grac121       glibc-devel(x86_64)-2.12-1.80.el6_3.5  glibc-devel(x86_64)-2.12  passed    
Result: Package existence check passed for "glibc-devel(x86_64)"
Check: Package existence for "libaio(x86_64)" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  grac121       libaio(x86_64)-0.3.107-10.el6  libaio(x86_64)-0.3.107    passed    
Result: Package existence check passed for "libaio(x86_64)"
Check: Package existence for "libaio-devel(x86_64)" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  grac121       libaio-devel(x86_64)-0.3.107-10.el6  libaio-devel(x86_64)-0.3.107  passed    
Result: Package existence check passed for "libaio-devel(x86_64)"
Check: Package existence for "nfs-utils" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  grac121       nfs-utils-1.2.3-26.el6    nfs-utils-1.2.3-15        passed    
Result: Package existence check passed for "nfs-utils"
Checking for multiple users with UID value 0
Result: Check for multiple users with UID value 0 passed 
Starting check for consistency of primary group of root user
  Node Name                             Status                  
  ------------------------------------  ------------------------
  grac121                               passed                  
Check for consistency of root user's primary group passed
Check: Time zone consistency 
Result: Time zone consistency check passed
******************************************************************************************
Following is the list of fixable prerequisites selected to fix in this session
******************************************************************************************
--------------                ---------------     ----------------    
Check failed.                 Failed on nodes     Reboot required?    
--------------                ---------------     ----------------    
Hard Limit: maximum open      grac121             no                  
file descriptors                                                      
Execute "/tmp/CVU_12.1.0.1.0_grid/runfixup.sh" as root user on nodes "grac121" to perform the fix up operations manually
--> Now run "runfixup.sh" as root on node grac121 
Press ENTER key to continue after execution of "/tmp/CVU_12.1.0.1.0_grid/runfixup.sh" has completed on nodes "grac121"
Fix: Hard Limit: maximum open file descriptors 
  Node Name                             Status                  
  ------------------------------------  ------------------------
  grac121                               successful              
Result: "Hard Limit: maximum open file descriptors" was successfully fixed on all the applicable nodes
Fix up operations were successfully completed on all the applicable nodes
Verification of system requirement was unsuccessful on all the specified nodes.
--
The fixup script can fix some errors, but errors like too little memory/swap need manual intervention:
Check: Total memory 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  grac121       3.7426GB (3924412.0KB)    4GB (4194304.0KB)         failed    
Result: Total memory check failed

Check GNS
$ ./bin/cluvfy comp gns -precrsinst -domain grid12c.example.com  -vip 192.168.1.55
Verifying GNS integrity 
Checking GNS integrity...
The GNS subdomain name "grid12c.example.com" is a valid domain name
GNS VIP "192.168.1.55" resolves to a valid IP address
GNS integrity check passed
Verification of GNS integrity was successful.

 

Setup User Accounts

NOTE: Oracle recommends different users for the installation of the Grid Infrastructure (GI) and the Oracle RDBMS home. The GI will be installed in a separate Oracle base, owned by user 'grid'. After the grid install, the GI home will be owned by root and inaccessible to unauthorized users.

Create OS groups using the command below. Enter these commands as the 'root' user:
  #/usr/sbin/groupadd -g 501 oinstall
  #/usr/sbin/groupadd -g 502 dba
  #/usr/sbin/groupadd -g 504 asmadmin
  #/usr/sbin/groupadd -g 506 asmdba
  #/usr/sbin/groupadd -g 507 asmoper

Create the users that will own the Oracle software using the commands:
  #/usr/sbin/useradd -u 501 -g oinstall -G asmadmin,asmdba,asmoper grid
  #/usr/sbin/useradd -u 502 -g oinstall -G dba,asmdba oracle
Verify as user grid:
  $ id
  uid=501(grid) gid=54321(oinstall) groups=54321(oinstall),504(asmadmin),506(asmdba),507(asmoper)
Verify as user oracle:
  $ id
  uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),501(vboxsf),506(asmdba),54322(dba)

For the C shell (csh or tcsh), add the following lines to the /etc/csh.login file:
  if ( $USER = "oracle" || $USER = "grid" ) then
  limit maxproc 16384
  limit descriptors 65536
  endif

Modify  /etc/security/limits.conf
  # oracle-rdbms-server-11gR2-preinstall setting for nofile soft limit is 1024
  oracle   soft   nofile    1024
  grid   soft   nofile    1024
  # oracle-rdbms-server-11gR2-preinstall setting for nofile hard limit is 65536
  oracle   hard   nofile    65536
  grid   hard   nofile    65536
  # oracle-rdbms-server-11gR2-preinstall setting for nproc soft limit is 2047
  oracle   soft   nproc    2047
  grid     soft   nproc    2047
  # oracle-rdbms-server-11gR2-preinstall setting for nproc hard limit is 16384
  oracle   hard   nproc    16384
  grid     hard   nproc    16384
  # oracle-rdbms-server-11gR2-preinstall setting for stack soft limit is 10240KB
  oracle   soft   stack    10240
  grid     soft   stack    10240
  # oracle-rdbms-server-11gR2-preinstall setting for stack hard limit is 32768KB
  oracle   hard   stack    32768
  grid     hard   stack    32768
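After editing limits.conf, a fresh login as grid or oracle should show the new hard limits (this is exactly what the cluvfy hard-limit check above failed on before the fixup). A quick check, run from the login shell of the user in question:

```shell
# Print the hard limits cluvfy verifies: nofile should be >= 65536 and
# nproc >= 16384 once the limits.conf entries above take effect on login.
echo "hard nofile: $(ulimit -Hn)"
echo "hard nproc:  $(ulimit -Hu)"
```

Note that limits.conf changes only apply to new login sessions, not to shells that were already open.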

Create Directories:
 - Have a separate ORACLE_BASE for the GRID and the RDBMS install !
Create the Oracle Inventory Directory
To create the Oracle Inventory directory, enter the following commands as the root user:
  # mkdir -p /u01/app/oraInventory
  # chown -R grid:oinstall /u01/app/oraInventory

Creating the Oracle Grid Infrastructure Home Directory
To create the Grid Infrastructure home directory, enter the following commands as the root user:
  # mkdir -p /u01/app/grid
  # chown -R grid:oinstall /u01/app/grid
  # chmod -R 775 /u01/app/grid
  # mkdir -p /u01/app/121/grid
  # chown -R grid:oinstall /u01/app/121/grid
  # chmod -R 775 /u01/app/121/grid

Creating the Oracle Base Directory
  To create the Oracle Base directory, enter the following commands as the root user:
  # mkdir -p /u01/app/oracle
  # chown -R oracle:oinstall /u01/app/oracle
  # chmod -R 775 /u01/app/oracle

Creating the Oracle RDBMS Home Directory
  To create the Oracle RDBMS Home directory, enter the following commands as the root user:
  # mkdir -p /u01/app/oracle/product/121/rac121
  # chown -R oracle:oinstall /u01/app/oracle/product/121/rac121
  # chmod -R 775 /u01/app/oracle/product/121/rac121


Setup ASM disks

Create ASM disks:
D:\VM>VBoxManage createhd --filename C:\VM\GRAC12c\ASM\asm1_5G.vdi --size 5120 --format VDI --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Disk image created. UUID: a24ac5ee-f045-434d-8c2d-8fde5c73d6fa
D:\VM>VBoxManage createhd --filename C:\VM\GRAC12c\ASM\asm2_5G.vdi --size 5120 --format VDI --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Disk image created. UUID: ade56ace-a8fd-4383-aa8e-f2b4f7645372
D:\VM>VBoxManage createhd --filename C:\VM\GRAC12c\ASM\asm3_5G.vdi --size 5120 --format VDI --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Disk image created. UUID: 033563bc-e63d-435a-8fc6-e4f67dd54128
D:\VM>VBoxManage createhd --filename C:\VM\GRAC12c\ASM\asm4_5G.vdi --size 5120 --format VDI --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Disk image created. UUID: 7b60806c-78fc-4f4c-beb1-ff9bafd36eeb
Attach the ASM disks and make them shareable:

D:\VM>VBoxManage storageattach grac121 --storagectl "SATA" --port 1  --device 0 --type hdd --medium C:\VM\GRAC12c\ASM\asm1_5G.vdi
D:\VM>VBoxManage storageattach grac121 --storagectl "SATA" --port 2  --device 0 --type hdd --medium C:\VM\GRAC12c\ASM\asm2_5G.vdi
D:\VM>VBoxManage storageattach grac121 --storagectl "SATA" --port 3  --device 0 --type hdd --medium C:\VM\GRAC12c\ASM\asm3_5G.vdi
D:\VM>VBoxManage storageattach grac121 --storagectl "SATA" --port 4  --device 0 --type hdd --medium C:\VM\GRAC12c\ASM\asm4_5G.vdi
D:\VM> VBoxManage modifyhd C:\VM\GRAC12c\ASM\asm1_5G.vdi --type shareable
D:\VM> VBoxManage modifyhd C:\VM\GRAC12c\ASM\asm2_5G.vdi --type shareable
D:\VM> VBoxManage modifyhd C:\VM\GRAC12c\ASM\asm3_5G.vdi --type shareable
D:\VM> VBoxManage modifyhd C:\VM\GRAC12c\ASM\asm4_5G.vdi --type shareable
...
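The repetitive create/attach/share sequence above can also be generated in a loop. A sketch, assuming a POSIX shell is available on the host (e.g. Cygwin or Git Bash on Windows); it only prints the VBoxManage commands so they can be reviewed before running them:

```shell
# Dry run: generate the VBoxManage command lines for all four ASM disks.
# Disk paths and the "SATA" controller name follow the steps above.
cmds=""
for i in 1 2 3 4; do
  disk="C:/VM/GRAC12c/ASM/asm${i}_5G.vdi"
  cmds="$cmds
VBoxManage createhd --filename $disk --size 5120 --format VDI --variant Fixed
VBoxManage storageattach grac121 --storagectl SATA --port $i --device 0 --type hdd --medium $disk
VBoxManage modifyhd $disk --type shareable"
done
printf '%s\n' "$cmds"
```

Piping the reviewed output into a shell (or copying it into the host prompt) gives the same result as typing the twelve commands by hand.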
Partition the disks ( repeat for all disks ! ) 
# fdisk /dev/sde
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-652, default 1): 
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-652, default 652): 
Using default value 652
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
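The interactive fdisk dialog above can be scripted by piping the same answers in. A sketch, assuming the four shared disks appear as /dev/sdb../dev/sde (verify with `fdisk -l` first!); with RUN left at its default it only prints the commands instead of touching any disk:

```shell
# Answers mirror the interactive session above: new partition, primary,
# number 1, default first and last cylinder, then write.
answers='n
p
1


w
'
RUN=${RUN:-echo}   # dry run by default; set RUN="" to really partition
out=$(for dev in /dev/sdb /dev/sdc /dev/sdd /dev/sde; do
  printf '%s' "$answers" | $RUN fdisk "$dev"
done)
printf '%s\n' "$out"
```

This is destructive when actually executed, so double-check the device names against `fdisk -l` before clearing RUN.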

Configure ASM disks
# /usr/sbin/oracleasm configure -i
Configuring the Oracle ASM library driver.
This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.
Default user to own the driver interface [grid]: 
Default group to own the driver interface [asmadmin]: 
Start Oracle ASM library driver on boot (y/n) [y]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
# /etc/init.d/oracleasm createdisk data1 /dev/sdb1
Marking disk "data1" as an ASM disk:                       [  OK  ]
# /etc/init.d/oracleasm createdisk data2 /dev/sdc1
Marking disk "data2" as an ASM disk:                       [  OK  ]
# /etc/init.d/oracleasm createdisk data3 /dev/sdd1
Marking disk "data3" as an ASM disk:                       [  OK  ]
# /etc/init.d/oracleasm createdisk data4 /dev/sde1
Marking disk "data4" as an ASM disk:                       [  OK  ]

# /etc/init.d/oracleasm listdisks
DATA1
DATA2
DATA3
DATA4

# /etc/init.d/oracleasm querydisk -d data1
Disk "DATA1" is a valid ASM disk on device [8, 17]
# /etc/init.d/oracleasm querydisk -d data2
Disk "DATA2" is a valid ASM disk on device [8, 33]
# /etc/init.d/oracleasm querydisk -d data3
Disk "DATA3" is a valid ASM disk on device [8, 49]
# /etc/init.d/oracleasm querydisk -d data4
Disk "DATA4" is a valid ASM disk on device [8, 65]
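The createdisk labels above are written from grac121 only; the second node has to rescan before listdisks/querydisk show them there. A sketch (dry run by default via the RUN guard; clear RUN when running on grac122 as root):

```shell
RUN=${RUN:-echo}   # dry run; set RUN="" on grac122 as root
out=$(
  $RUN /etc/init.d/oracleasm scandisks
  $RUN /etc/init.d/oracleasm listdisks
)
printf '%s\n' "$out"
```

After the scan, listdisks on grac122 should report the same DATA1..DATA4 disks as on grac121.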

 

Setup user equivalence and run cluvfy with stage -pre crsinst

Run sshUserSetup.sh :
./sshUserSetup.sh -user grid -hosts "grac121 grac122"  -noPromptPassphrase
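After sshUserSetup.sh completes, passwordless SSH should work to every node without prompting. A sketch of a verification loop (hostnames as above; BatchMode makes any leftover password prompt fail fast instead of hanging). Printed as a dry run here; clear RUN to actually test as user grid:

```shell
RUN=${RUN:-echo}   # dry run; set RUN="" to really test the connections
out=$(for h in grac121 grac122; do
  $RUN ssh -o BatchMode=yes "$h" hostname
done)
printf '%s\n' "$out"
```

Each real invocation should print the remote hostname with no password prompt; any "Permission denied" means user equivalence is not yet in place.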

Verify CRS for both nodes using the newly created ASM disks and the asmadmin group:
./bin/cluvfy stage -pre crsinst -n grac121,grac122 -asm -asmdev /dev/oracleasm/disks/DATA1,/dev/oracleasm/disks/DATA2,/dev/oracleasm/disks/DATA3,/dev/oracleasm/disks/DATA4 -presence local -networks eth1:192.168.1.0:PUBLIC/eth2:192.168.2.0:cluster_interconnect
Errors:
ERROR:  /dev/oracleasm/disks/DATA4
grac122:Cannot verify the shared state for device /dev/sde1 due to Universally Unique Identifiers (UUIDs) not being found, or different values being found, for this device across nodes:
    grac121,grac122
--> This error occurs because the test system uses VirtualBox and the partitions do not return a UUID. The installation can be continued ignoring this error. On a proper system where UUIDs are available, cluvfy reports these checks as passed.
( See http://asanga-pradeep.blogspot.co.uk/2013/08/installing-12c-12101-rac-on-rhel-6-with.html )

Clone VirtualBox Image

Clone VM and attach ASM disks
D:\VM>VBoxManage clonehd  d:\VM\GNS12c\grac121\grac121.vdi   d:\VM\GNS12c\grac122\grac122.vdi
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Clone hard disk created in format 'VDI'. UUID: 5fb10575-2293-489b-b105-289d5d49ab18

D:\VM>VBoxManage storageattach grac122 --storagectl "SATA" --port 1  --device 0 --type hdd --medium C:\VM\GRAC12c\ASM\asm1_5G.vdi
D:\VM>VBoxManage storageattach grac122 --storagectl "SATA" --port 2  --device 0 --type hdd --medium C:\VM\GRAC12c\ASM\asm2_5G.vdi
D:\VM>VBoxManage storageattach grac122 --storagectl "SATA" --port 3  --device 0 --type hdd --medium C:\VM\GRAC12c\ASM\asm3_5G.vdi
D:\VM>VBoxManage storageattach grac122 --storagectl "SATA" --port 4  --device 0 --type hdd --medium C:\VM\GRAC12c\ASM\asm4_5G.vdi
Re-run the CRS verification for both nodes using the newly created ASM disks and the asmadmin group.

Ignore PRVF-9802 , PRVF-5636.  For details check the following link.

Install Clusterware Software and run root.sh on both nodes

$ cd grid
$ ls
install  response  rpm    runcluvfy.sh  runInstaller  sshsetup  stage  welcome.html
$ ./runInstaller 
-> Configure a standard cluster
-> Advanced Installation
   Cluster name : grac112
   Scan name    : grac112-scan.grid12c.example.com
   Scan port    : 1521
   -> Create New GNS
      GNS VIP address: 192.168.1.55 
      GNS Sub domain : grid12c.example.com
  Public Hostname           Virtual Hostname 
  grac121.example.com        AUTO
  grac122.example.com        AUTO
-> Test and Setup SSH connectivity
-> Setup network interfaces
   eth0: don't use
   eth1: PUBLIC
   eth2: Private Cluster_Interconnect
-> Configure GRID Infrastructure: YES
-> Use standard ASM for storage
-> ASM setup
   Diskgroup          : DATA
   Disk discovery path: /dev/oracleasm/disks/* 

Running root.sh script manually on grac121:
# /u01/app/121/grid/root.sh
Performing root user operation for Oracle 12c 
The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/121/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]: 
The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n) 
[n]: y
   Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n) 
[n]: y
   Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n) 
[n]: y
   Copying coraenv to /usr/local/bin ...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/121/grid/crs/install/crsconfig_params
2013/08/25 14:56:52 CLSRSC-363: User ignored prerequisites during installation
OLR initialization - successful
  root wallet
  root wallet cert
  root cert export
  peer wallet
  profile reader wallet
  pa wallet
  peer wallet keys
  pa wallet keys
  peer cert request
  pa cert request
  peer cert
  pa cert
  peer root cert TP
  profile reader root cert TP
  pa root cert TP
  peer pa cert TP
  pa peer cert TP
  profile reader pa cert TP
  profile reader peer cert TP
  peer user cert
  pa user cert
2013/08/25 14:57:38 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.conf'
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'grac121'
CRS-2677: Stop of 'ora.drivers.acfs' on 'grac121' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'grac121'
CRS-2672: Attempting to start 'ora.mdnsd' on 'grac121'
CRS-2676: Start of 'ora.mdnsd' on 'grac121' succeeded
CRS-2676: Start of 'ora.evmd' on 'grac121' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'grac121'
CRS-2676: Start of 'ora.gpnpd' on 'grac121' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'grac121'
CRS-2672: Attempting to start 'ora.gipcd' on 'grac121'
CRS-2676: Start of 'ora.cssdmonitor' on 'grac121' succeeded
CRS-2676: Start of 'ora.gipcd' on 'grac121' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'grac121'
CRS-2672: Attempting to start 'ora.diskmon' on 'grac121'
CRS-2676: Start of 'ora.diskmon' on 'grac121' succeeded
CRS-2676: Start of 'ora.cssd' on 'grac121' succeeded
ASM created and started successfully.
Disk Group DATA created successfully.
CRS-2672: Attempting to start 'ora.crf' on 'grac121'
CRS-2672: Attempting to start 'ora.storage' on 'grac121'
CRS-2676: Start of 'ora.storage' on 'grac121' succeeded
CRS-2676: Start of 'ora.crf' on 'grac121' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'grac121'
CRS-2676: Start of 'ora.crsd' on 'grac121' succeeded
CRS-4256: Updating the profile
Successful addition of voting disk e158882a16cf4f44bfab3fac241e5152.
Successful addition of voting disk b93b579e97f24ff4bfb58e7a1d9e628b.
Successful addition of voting disk 2a29ac7797544f8cbfb6650ce7c287fe.
Successfully replaced voting disk group with +DATA.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   e158882a16cf4f44bfab3fac241e5152 (/dev/oracleasm/disks/DATA1) [DATA]
 2. ONLINE   b93b579e97f24ff4bfb58e7a1d9e628b (/dev/oracleasm/disks/DATA2) [DATA]
 3. ONLINE   2a29ac7797544f8cbfb6650ce7c287fe (/dev/oracleasm/disks/DATA3) [DATA]
Located 3 voting disk(s).
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'grac121'
CRS-2673: Attempting to stop 'ora.crsd' on 'grac121'
CRS-2677: Stop of 'ora.crsd' on 'grac121' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'grac121'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'grac121'
CRS-2673: Attempting to stop 'ora.ctssd' on 'grac121'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'grac121'
CRS-2677: Stop of 'ora.drivers.acfs' on 'grac121' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'grac121' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'grac121' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'grac121' succeeded
CRS-2673: Attempting to stop 'ora.evmd' on 'grac121'
CRS-2673: Attempting to stop 'ora.storage' on 'grac121'
CRS-2677: Stop of 'ora.storage' on 'grac121' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'grac121'
CRS-2677: Stop of 'ora.asm' on 'grac121' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'grac121'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'grac121' succeeded
CRS-2677: Stop of 'ora.evmd' on 'grac121' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'grac121'
CRS-2677: Stop of 'ora.cssd' on 'grac121' succeeded
CRS-2673: Attempting to stop 'ora.crf' on 'grac121'
CRS-2677: Stop of 'ora.crf' on 'grac121' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'grac121'
CRS-2677: Stop of 'ora.gipcd' on 'grac121' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'grac121' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'grac121'
CRS-2672: Attempting to start 'ora.evmd' on 'grac121'
CRS-2676: Start of 'ora.mdnsd' on 'grac121' succeeded
CRS-2676: Start of 'ora.evmd' on 'grac121' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'grac121'
CRS-2676: Start of 'ora.gpnpd' on 'grac121' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'grac121'
CRS-2676: Start of 'ora.gipcd' on 'grac121' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'grac121'
CRS-2676: Start of 'ora.cssdmonitor' on 'grac121' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'grac121'
CRS-2672: Attempting to start 'ora.diskmon' on 'grac121'
CRS-2676: Start of 'ora.diskmon' on 'grac121' succeeded
CRS-2789: Cannot stop resource 'ora.diskmon' as it is not running on server 'grac121'
CRS-2676: Start of 'ora.cssd' on 'grac121' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'grac121'
CRS-2672: Attempting to start 'ora.ctssd' on 'grac121'
CRS-2676: Start of 'ora.ctssd' on 'grac121' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'grac121' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'grac121'
CRS-2676: Start of 'ora.asm' on 'grac121' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'grac121'
CRS-2676: Start of 'ora.storage' on 'grac121' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'grac121'
CRS-2676: Start of 'ora.crf' on 'grac121' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'grac121'
CRS-2676: Start of 'ora.crsd' on 'grac121' succeeded
CRS-6023: Starting Oracle Cluster Ready Services-managed resources
CRS-6017: Processing resource auto-start for servers: grac121
CRS-6016: Resource auto-start has completed for server grac121
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2013/08/25 15:07:34 CLSRSC-343: Successfully started Oracle clusterware stack
CRS-2672: Attempting to start 'ora.asm' on 'grac121'
CRS-2676: Start of 'ora.asm' on 'grac121' succeeded
CRS-2672: Attempting to start 'ora.DATA.dg' on 'grac121'
CRS-2676: Start of 'ora.DATA.dg' on 'grac121' succeeded
2013/08/25 15:11:00 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
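Before moving on to the second node, you can sanity-check that the whole stack is up with `crsctl check crs`. A small sketch, assuming the usual four "is online" lines in the `crsctl check crs` output (the helper function itself is hypothetical):

```shell
# Returns 0 if all four CRS daemons report "is online" on stdin.
# Feed it the output of: $GRID_HOME/bin/crsctl check crs
crs_all_online() {
  [ "$(grep -ci 'is online' -)" -eq 4 ]
}

# usage on a cluster node:
#   $GRID_HOME/bin/crsctl check crs | crs_all_online && echo "stack is up"
```

`crsctl query css votedisk` and `olsnodes -n` are also useful at this point to confirm the three voting disks and the node list.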

Running root.sh on grac122:
# /u01/app/121/grid/root.sh
Performing root user operation for Oracle 12c 
The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/121/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]: 
The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n) 
[n]: y
   Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n) 
[n]: y
   Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n) 
[n]: y
   Copying coraenv to /usr/local/bin ...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/121/grid/crs/install/crsconfig_params
2013/08/25 18:51:55 CLSRSC-363: User ignored prerequisites during installation
OLR initialization - successful
2013/08/25 18:52:18 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.conf'
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'grac122'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'grac122'
CRS-2677: Stop of 'ora.drivers.acfs' on 'grac122' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'grac122' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'grac122'
CRS-2672: Attempting to start 'ora.evmd' on 'grac122'
CRS-2676: Start of 'ora.evmd' on 'grac122' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'grac122' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'grac122'
CRS-2676: Start of 'ora.gpnpd' on 'grac122' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'grac122'
CRS-2676: Start of 'ora.gipcd' on 'grac122' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'grac122'
CRS-2676: Start of 'ora.cssdmonitor' on 'grac122' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'grac122'
CRS-2672: Attempting to start 'ora.diskmon' on 'grac122'
CRS-2676: Start of 'ora.diskmon' on 'grac122' succeeded
CRS-2789: Cannot stop resource 'ora.diskmon' as it is not running on server 'grac122'
CRS-2676: Start of 'ora.cssd' on 'grac122' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'grac122'
CRS-2672: Attempting to start 'ora.ctssd' on 'grac122'
CRS-2676: Start of 'ora.ctssd' on 'grac122' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'grac122' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'grac122'
CRS-2676: Start of 'ora.asm' on 'grac122' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'grac122'
CRS-2676: Start of 'ora.storage' on 'grac122' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'grac122'
CRS-2676: Start of 'ora.crf' on 'grac122' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'grac122'
CRS-2676: Start of 'ora.crsd' on 'grac122' succeeded
CRS-6017: Processing resource auto-start for servers: grac122
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'grac121'
CRS-2672: Attempting to start 'ora.ons' on 'grac122'
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'grac121' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'grac121'
CRS-2677: Stop of 'ora.scan1.vip' on 'grac121' succeeded
CRS-2672: Attempting to start 'ora.scan1.vip' on 'grac122'
CRS-2676: Start of 'ora.ons' on 'grac122' succeeded
CRS-2676: Start of 'ora.scan1.vip' on 'grac122' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on 'grac122'
CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'grac122' succeeded
CRS-6016: Resource auto-start has completed for server grac122
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2013/08/25 18:58:50 CLSRSC-343: Successfully started Oracle clusterware stack
2013/08/25 18:59:06 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

Checking cluster status after installation 
$ my_crs_stat
NAME                           TARGET     STATE           SERVER       STATE_DETAILS   
-------------------------      ---------- ----------      ------------ ------------------
ora.DATA.dg                    ONLINE     ONLINE          grac121      STABLE 
ora.DATA.dg                    ONLINE     ONLINE          grac122      STABLE 
ora.LISTENER.lsnr              ONLINE     ONLINE          grac121      STABLE 
ora.LISTENER.lsnr              ONLINE     ONLINE          grac122      STABLE 
ora.asm                        ONLINE     ONLINE          grac121      Started,STABLE 
ora.asm                        ONLINE     ONLINE          grac122      Started,STABLE 
ora.net1.network               ONLINE     ONLINE          grac121      STABLE 
ora.net1.network               ONLINE     ONLINE          grac122      STABLE 
ora.ons                        ONLINE     ONLINE          grac121      STABLE 
ora.ons                        ONLINE     ONLINE          grac122      STABLE 
ora.LISTENER_SCAN1.lsnr        ONLINE     ONLINE          grac122      STABLE 
ora.LISTENER_SCAN2.lsnr        ONLINE     ONLINE          grac121      STABLE 
ora.LISTENER_SCAN3.lsnr        ONLINE     ONLINE          grac121      STABLE 
ora.MGMTLSNR                   ONLINE     ONLINE          grac121      169.254.187.22 192.1
ora.cvu                        ONLINE     ONLINE          grac121      STABLE 
ora.gns                        ONLINE     ONLINE          grac121      STABLE 
ora.gns.vip                    ONLINE     ONLINE          grac121      STABLE 
ora.grac121.vip                ONLINE     ONLINE          grac121      STABLE 
ora.grac122.vip                ONLINE     ONLINE          grac122      STABLE 
ora.mgmtdb                     ONLINE     ONLINE          grac121      Open,STABLE 
ora.oc4j                       ONLINE     ONLINE          grac121      STABLE 
ora.scan1.vip                  ONLINE     ONLINE          grac122      STABLE 
ora.scan2.vip                  ONLINE     ONLINE          grac121      STABLE 
ora.scan3.vip                  ONLINE     ONLINE          grac121      STABLE
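Note that my_crs_stat is a custom script, not a tool shipped with Grid Infrastructure. A minimal sketch of what it does, assuming the NAME=/TARGET=/STATE= attribute lines that `crsctl stat res` prints:

```shell
# Tabulate `crsctl stat res` output (NAME=..., TARGET=..., STATE=ONLINE on <node>)
# into one line per resource: name, target, state, server.
format_crs_stat() {
  awk -F= '
    /^NAME=/   { name = $2 }
    /^TARGET=/ { target = $2 }
    /^STATE=/  { n = split($2, a, " on ")
                 printf "%-30s %-10s %-15s %s\n", name, target, a[1], (n > 1 ? a[2] : "-") }'
}

# usage on a cluster node:
#   $GRID_HOME/bin/crsctl stat res | format_crs_stat
```

The real script in use here apparently also pulls STATE_DETAILS; the sketch only shows the parsing idea.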

Verify 12.1 CRS installation with cluvfy

$ ./bin/cluvfy stage -post crsinst -n grac121,grac122 
Performing post-checks for cluster services setup 
Checking node reachability...
Node reachability check passed from node "grac121"
Checking user equivalence...
User equivalence check passed for user "grid"
Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
Check: Node connectivity using interfaces on subnet "192.168.1.0"
Node connectivity passed for subnet "192.168.1.0" with node(s) grac122,grac121
TCP connectivity check passed for subnet "192.168.1.0"
Check: Node connectivity using interfaces on subnet "192.168.2.0"
Node connectivity passed for subnet "192.168.2.0" with node(s) grac121,grac122
TCP connectivity check passed for subnet "192.168.2.0"
Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.1.0".
Subnet mask consistency check passed for subnet "192.168.2.0".
Subnet mask consistency check passed.
Node connectivity check passed
Checking multicast communication...
Checking subnet "192.168.2.0" for multicast communication with multicast group "224.0.0.251"...
Check of subnet "192.168.2.0" for multicast communication with multicast group "224.0.0.251" passed.
Check of multicast communication passed.
Time zone consistency check passed
Checking Cluster manager integrity... 
Checking CSS daemon...
Oracle Cluster Synchronization Services appear to be online.
Cluster manager integrity check passed
UDev attributes check for OCR locations started...
UDev attributes check passed for OCR locations 
UDev attributes check for Voting Disk locations started...
UDev attributes check passed for Voting Disk locations 
Default user file creation mask check passed
Checking cluster integrity...
Cluster integrity check passed
Checking OCR integrity...
Checking the absence of a non-clustered configuration...
All nodes free of non-clustered, local-only configurations
Checking OCR config file "/etc/oracle/ocr.loc"...
OCR config file "/etc/oracle/ocr.loc" check successful
Disk group for ocr location "+DATA" is available on all the nodes
NOTE: 
This check does not verify the integrity of the OCR contents. Execute 'ocrcheck' as a privileged user to verify the contents of OCR.
OCR integrity check passed
Checking CRS integrity...
Clusterware version consistency passed.
CRS integrity check passed
Checking node application existence...
Checking existence of VIP node application (required)
VIP node application check passed
Checking existence of NETWORK node application (required)
NETWORK node application check passed
Checking existence of ONS node application (optional)
ONS node application check passed
Checking Single Client Access Name (SCAN)...
Checking TCP connectivity to SCAN Listeners...
TCP connectivity to SCAN Listeners exists on all cluster nodes
Checking name resolution setup for "grac112-scan.grid12c.example.com"...
Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...
All nodes have same "hosts" entry defined in file "/etc/nsswitch.conf"
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed
Checking SCAN IP addresses...
Check of SCAN IP addresses passed
Verification of SCAN VIP and Listener setup passed
Checking OLR integrity...
Check of existence of OLR configuration file "/etc/oracle/olr.loc" passed
Check of attributes of OLR configuration file "/etc/oracle/olr.loc" passed
WARNING: 
This check does not verify the integrity of the OLR contents. Execute 'ocrcheck -local' as a privileged user to verify the contents of OLR.
OLR integrity check passed
Checking GNS integrity...
The GNS subdomain name "grid12c.example.com" is a valid domain name
Checking if the GNS VIP belongs to same subnet as the public network...
Public network subnets "192.168.1.0, 192.168.1.0, 192.168.1.0, 192.168.1.0, 192.168.1.0" match with the GNS VIP "192.168.1.0, 192.168.1.0, 192.168.1.0, 192.168.1.0, 192.168.1.0"
GNS VIP "192.168.1.58" resolves to a valid IP address
GNS resolved IP addresses are reachable
GNS resolved IP addresses are reachable
GNS resolved IP addresses are reachable
GNS resource configuration check passed
GNS VIP resource configuration check passed.
GNS integrity check passed
Checking Oracle Cluster Voting Disk configuration...
Oracle Cluster Voting Disk configuration check passed
User "grid" is not part of "root" group. Check passed
Checking if Clusterware is installed on all nodes...
Check of Clusterware install passed
Checking if CTSS Resource is running on all nodes...
CTSS resource check passed
Querying CTSS for time offset on all nodes...
Query of CTSS for time offset passed
Check CTSS state started...
CTSS is in Observer state. Switching over to clock synchronization checks using NTP
Starting Clock synchronization checks using Network Time Protocol(NTP)...
NTP Configuration file check started...
NTP Configuration file check passed
Checking daemon liveness...
Liveness check passed for "ntpd"
Check for NTP daemon or service alive passed on all nodes
NTP common Time Server Check started...
Check of common NTP Time Server passed
Clock time offset check from NTP Time Server started...
Clock time offset check passed
Clock synchronization check using Network Time Protocol(NTP) passed
Oracle Cluster Time Synchronization Services check passed
Checking VIP configuration.
Checking VIP Subnet configuration.
Check for VIP Subnet configuration passed.
Checking VIP reachability
Check for VIP reachability passed.
Post-check for cluster services setup was successful.
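On top of cluvfy, the GNS delegation can be rechecked directly: the SCAN name should resolve to three addresses, as in the nslookup test at the top of this article. A small sketch that counts the A records in nslookup output, skipping the "Server/Address ...#53" header lines (the helper is hypothetical; the output format is assumed from the earlier test):

```shell
# Count the resolved addresses in nslookup output on stdin,
# ignoring the nameserver header line (the one containing "#53").
count_scan_ips() {
  grep '^Address:' - | grep -vc '#'
}

# On a node with working GNS delegation you might run:
#   nslookup grac112-scan.grid12c.example.com | count_scan_ips    # expect 3
```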

Install RDBMS and create database



Log in as the oracle user and verify the account
$ id
  uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),501(vboxsf),506(asmdba),54322(dba)

$ env | grep ORA
ORACLE_BASE=/u01/app/oracle
ORACLE_SID=crac121
ORACLE_HOME=/u01/app/oracle/product/121/rac121 

Install database and verify:
$ srvctl config database -d crac12
Database unique name: crac12
Database name: crac12
Oracle home: /u01/app/oracle/product/121/rac121
Oracle user: oracle
Spfile: +DATA/crac12/spfilecrac12.ora
Password file: +DATA/crac12/orapwcrac12
Domain: 
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: crac12
Database instances: crac121,crac122
Disk Groups: DATA
Mount point paths: 
Services: 
Type: RAC
Start concurrency: 
Stop concurrency: 
Database is administrator managed

$  srvctl status  database -d crac12
Instance crac121 is running on node grac121
Instance crac122 is running on node grac122

$ sqlplus / as sysdba
SQL> SELECT inst_name FROM v$active_instances;
INST_NAME
--------------------------------------------------------------------------------
grac121.example.com:crac121
grac122.example.com:crac122
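As a final check, clients should be able to reach the database through the SCAN name rather than a node name. A sketch that builds the EZConnect string for this install (SCAN name, port 1521 and service name crac12 are taken from the configuration above; the helper itself is hypothetical):

```shell
# Build an EZConnect string: user/password@//scan-name:port/service
# $4 (port) defaults to the standard listener port 1521 when empty.
ezconnect() {
  printf '%s/%s@//%s:%s/%s\n' "$1" "$2" "$3" "${4:-1521}" "$5"
}

# usage against the SCAN of this cluster:
#   sqlplus "$(ezconnect system <password> grac112-scan.grid12c.example.com 1521 crac12)"
```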
