Install CRS 10.2.0.1 on top of OEL 5.10 / VirtualBox 4.2

Disk Layout

Using VirtualBox disks attached to the SATA controller:
Raw devices for OCR:
/dev/sdb1 -> /dev/raw/raw1:  bound to major 8, minor 17  - Size:  1 GByte
/dev/sdc1 -> /dev/raw/raw2:  bound to major 8, minor 33  - Size:  1 GByte

Raw devices for voting disks:
/dev/sdd1 -> /dev/raw/raw3:  bound to major 8, minor 49  - Size:  1 GByte
/dev/sde1 -> /dev/raw/raw4:  bound to major 8, minor 65  - Size:  1 GByte
/dev/sdf1 -> /dev/raw/raw5:  bound to major 8, minor 81  - Size:  1 GByte

ASM Devices:
/dev/sdg1  - Size:  2 GByte 
/dev/sdh1  - Size:  2 GByte
/dev/sdi1  - Size:  2 GByte 
/dev/sdj1  - Size:  2 GByte

Verify disk size with dd after reboot
# dd if=/dev/sdb1 of=/dev/null bs=1M
1019+1 records in
1019+1 records out
...
Note: be careful not to mix up ASM disks and raw devices. If you create an ASM disk on top of a raw device
that is already in use for the OCR or a voting disk, you will corrupt that raw device!
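
To be safe, check a partition before you label or bind it. A minimal sketch using the sample devices from the layout above (querydisk is part of oracleasm-support):
# /usr/sbin/oracleasm querydisk /dev/sdg1    <-- reports whether ASMLib already owns this partition
# dd if=/dev/sdb1 bs=128 count=1 2>/dev/null | od -c | head -4    <-- an "ORCLDISK" string in the header means ASMLib claimed the device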

 

Verify OS packages

#  rpm -q --qf '%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n' binutils  compat-libstdc++-33  elfutils-libelf  \
   elfutils-libelf-devel  gcc  gcc-c++  glibc  glibc-common  glibc-devel  glibc-headers  ksh  libaio  \
   libaio-devel  libgcc  libstdc++  libstdc++-devel  make  sysstat  unixODBC  unixODBC-devel

binutils-2.17.50.0.6-26.el5 (x86_64)
libstdc++-devel-4.1.2-54.el5 (x86_64)
make-3.81-3.el5 (x86_64)
sysstat-7.0.2-12.0.1.el5 (x86_64)
...
package unixODBC is not installed
package unixODBC-devel is not installed
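
The two packages reported missing can be pulled from the yum repository before continuing:
# yum install unixODBC unixODBC-devel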

 

Prepare ASMLib

# yum install oracleasm-support
# ls
oracleasmlib-2.0.4-1.el5.x86_64.rpm  oracleasm-support-2.1.8-1.el5.x86_64.rpm
# rpm -iv oracleasmlib-2.0.4-1.el5.x86_64.rpm
Preparing packages for installation...
oracleasmlib-2.0.4-1.el5
# rpm -iv oracleasm-support-2.1.8-1.el5.x86_64.rpm
Preparing packages for installation...
        package oracleasm-support-2.1.8-1.el5.x86_64 is already installed
# rpm -qa | grep asm
oracleasm-support-2.1.8-1.el5
oracleasmlib-2.0.4-1.el5

Configure the Oracle ASM library driver:
# /etc/init.d/oracleasm configure
This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.

Default user to own the driver interface []: oracle
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: 
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver:                     [  OK  ]
Scanning the system for Oracle ASMLib disks:               [  OK  ]
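
To double-check the result (the status target is part of the same init script, and the answers given above are persisted in /etc/sysconfig/oracleasm):
# /etc/init.d/oracleasm status     <-- driver loaded and /dev/oracleasm mounted?
# cat /etc/sysconfig/oracleasm     <-- persisted answers from the configure dialog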

 

Create and format VirtualBox disks

M:\VM\RAC10g\SHARED_DISK> VBoxManage createhd --filename M:\VM\RAC10g\SHARED_DISK\DATA01.vdi --size 2048 --format VDI --variant Fixed
M:\VM\RAC10g\SHARED_DISK> VBoxManage createhd --filename M:\VM\RAC10g\SHARED_DISK\DATA02.vdi --size 2048 --format VDI --variant Fixed
....
M:\VM\RAC10g\SHARED_DISK> VBoxManage modifyhd DATA01.vdi --type shareable
M:\VM\RAC10g\SHARED_DISK> VBoxManage modifyhd DATA02.vdi --type shareable
..
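
The shareable disks still have to be attached to both VMs. A sketch for the first disk, assuming a hypothetical controller name "SATA" and free port 6 - repeat per disk with the port incremented, and substitute your actual VM names:
M:\VM\RAC10g\SHARED_DISK> VBoxManage storageattach ract1 --storagectl "SATA" --port 6 --device 0 --type hdd --medium DATA01.vdi
M:\VM\RAC10g\SHARED_DISK> VBoxManage storageattach ract2 --storagectl "SATA" --port 6 --device 0 --type hdd --medium DATA01.vdi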

Partition the disks with fdisk (repeat for every shared disk):
# fdisk /dev/sdb
# fdisk /dev/sdc
# /sbin/partprobe
Warning: Unable to open /dev/sr0 read-write (Read-only file system).  /dev/sr0 has been opened read-only.
Error: Error opening /dev/md0: No such file or directory
--> Both messages refer to the CD-ROM drive and a non-existent md device and can be ignored.
# ls /dev/sd*
/dev/sda  /dev/sda1  /dev/sda2  /dev/sdb  /dev/sdb1  /dev/sdc  /dev/sdc1
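
fdisk can also be driven non-interactively. A minimal sketch that creates one primary partition spanning the whole disk - the two empty lines accept the default first and last cylinder, so this is the same keystroke sequence as the interactive session:
# fdisk /dev/sdb <<EOF
n
p
1


w
EOF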

Create ASM disks

# /usr/sbin/oracleasm createdisk ASM_DATA01 /dev/sdg1
Writing disk header: done
Instantiating disk: done
# /usr/sbin/oracleasm createdisk ASM_DATA02 /dev/sdh1
# /usr/sbin/oracleasm createdisk ASM_DATA03 /dev/sdi1
# /usr/sbin/oracleasm createdisk ASM_DATA04 /dev/sdj1
If you need to delete a disk, run:
# /usr/sbin/oracleasm deletedisk ASM_DATA01
Clearing disk header: done
Dropping disk: done

After any ASMLib operation, run scandisks and listdisks on all RAC nodes:
#  /usr/sbin/oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
# /usr/sbin/oracleasm listdisks
ASM_DATA01
ASM_DATA02
ASM_DATA03
ASM_DATA04
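
To map an ASMLib disk name back to its block device, querydisk with -p can be used on any node:
# /usr/sbin/oracleasm querydisk -p ASM_DATA01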

Prepare udev rules for the raw devices

# cat  /etc/udev/rules.d/63-oracle-raw.rules
ACTION=="add", KERNEL=="sdb1", RUN+="/bin/raw /dev/raw/raw1 %N"
ACTION=="add", KERNEL=="sdc1", RUN+="/bin/raw /dev/raw/raw2 %N"
ACTION=="add", KERNEL=="sdd1", RUN+="/bin/raw /dev/raw/raw3 %N"
ACTION=="add", KERNEL=="sde1", RUN+="/bin/raw /dev/raw/raw4 %N"
ACTION=="add", KERNEL=="sd*", PROGRAM=="/sbin/scsi_id -g -u -s /block/$parent", RESULT=="SATA_VBOX_HARDDISK_VBc477f753-2ce5f51a_", RUN+="/bin/raw /dev/raw/raw5 %N"       
KERNEL=="raw[1-2]*", OWNER="root", GROUP="oinstall", MODE="640"
KERNEL=="raw[3-5]*", OWNER="oracle", GROUP="oinstall", MODE="6
--> Always map your disks via /sbin/scsi_id (as in the fifth rule) rather than by sdX kernel name (the first four rules): sdX names can change across reboots, while the scsi_id serial stays stable.
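
The identifier string used in that fifth rule can be read straight from the device; EL5 still uses the old sysfs-path syntax of scsi_id (/dev/sdf is the disk behind raw5 in the layout above):
# /sbin/scsi_id -g -u -s /block/sdf
SATA_VBOX_HARDDISK_VBc477f753-2ce5f51a_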

Reload udev rules
# /sbin/udevcontrol reload_rules
# /sbin/start_udev

Verify raw devices after reboot
#  raw -qa
/dev/raw/raw1:  bound to major 8, minor 17
/dev/raw/raw2:  bound to major 8, minor 33
/dev/raw/raw3:  bound to major 8, minor 49
/dev/raw/raw4:  bound to major 8, minor 65
/dev/raw/raw5:  bound to major 8, minor 81

# ls -l  /dev/raw/ra*
crw-r----- 1 root   oinstall 162, 1 Apr  4 09:09 /dev/raw/raw1
crw-r----- 1 root   oinstall 162, 2 Apr  4 09:09 /dev/raw/raw2
crw-r--r-- 1 oracle oinstall 162, 3 Apr  4 09:09 /dev/raw/raw3
crw-r--r-- 1 oracle oinstall 162, 4 Apr  4 09:09 /dev/raw/raw4
crw-r--r-- 1 oracle oinstall 162, 5 Apr  4 09:09 /dev/raw/raw5

SSH setup

On both RAC nodes run:
$ su - oracle
$ mkdir ~/.ssh
$ chmod 700 ~/.ssh
$ /usr/bin/ssh-keygen -t rsa # Accept the default settings

[oracle@ract1 .ssh]$ cd ~/.ssh
[oracle@ract1 .ssh]$ cat id_rsa.pub >> authorized_keys
[oracle@ract1 .ssh]$ scp authorized_keys ract2:.ssh/
[oracle@ract1 ~]$  ssh ract2 date
Tue Apr  1 14:24:32 CEST 2014

[oracle@ract2 .ssh]$ cd ~/.ssh
[oracle@ract2 .ssh]$ cat id_rsa.pub >> authorized_keys
[oracle@ract2 .ssh]$ scp authorized_keys ract1:.ssh/
[oracle@ract2 ~]$  ssh ract1 date
Tue Apr  1 14:24:32 CEST 2014
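
A quick loop to prove user equivalence works in both directions without a password prompt (run as oracle on each node; BatchMode makes ssh fail instead of asking):
$ for h in ract1 ract2; do ssh -o BatchMode=yes $h hostname; done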

Use cluvfy 12.1 to test node readiness

Always install the newest cluvfy version, even for 10gR2 CRS validations!
[root@ract1 ~]$  ./bin/cluvfy  -version
12.1.0.1.0 Build 112713x8664

Verify OS setup on ract1
[root@ract1 ~]$ ./bin/cluvfy comp sys -p crs -r 10gR2 -n ract1 -verbose -fixup
--> Run required scripts
[root@ract1 ~]# /tmp/CVU_12.1.0.1.0_oracle/runfixup.sh
All Fix-up operations were completed successfully.

Repeat this step on ract2
[root@ract2 ~]$ ./bin/cluvfy comp sys -p crs -r 10gR2 -n ract2 -verbose -fixup
--> Run required scripts
[root@ract2 ~]# /tmp/CVU_12.1.0.1.0_oracle/runfixup.sh
All Fix-up operations were completed successfully.

Now verify system requirements on both nodes:
[oracle@ract1 cluvfy12]$  ./bin/cluvfy comp sys -p crs -r 10gR2 -n ract1 -verbose -fixup
Verifying system requirement
..
NOTE:
No fixable verification failures to fix

Finally run cluvfy to test CRS installation readiness 
$ cluvfy12/bin/cluvfy stage -pre crsinst -r 10gR2 \
  -networks eth1:192.168.1.0:PUBLIC/eth2:192.168.2.0:cluster_interconnect \
  -n ract1,ract2 -verbose
..
Pre-check for cluster services setup was successful.

Install CRS 10.2.0.1

Unzip clusterware kits
# cd /Kits
# gunzip  /media/sf_kits/Oracle/10.2/Linux64/10201_clusterware_linux_x86_64.cpio.gz
# cpio  -idmv < /media/sf_kits/Oracle/10.2/Linux64/10201_clusterware_linux_x86_64.cpio
# gunzip  /media/sf_kits/Oracle/10.2/Linux64/10201_database_linux_x86_64.cpio.gz
# cpio  -idmv <   /media/sf_kits/Oracle/10.2/Linux64/10201_database_linux_x86_64.cpio

Run ./rootpre.sh on both nodes:
# ./rootpre.sh
No OraCM running
The "No OraCM" message can be ignored since the clusterware is not installed yet.

Install the CRS software stack.

Problem 1: The installer fails with java.lang.UnsatisfiedLinkError: libXp.so.6:

[oracle@ract1 clusterware]$ ./runInstaller -ignoreSysPrereqs
Exception java.lang.UnsatisfiedLinkError: /tmp/OraInstall2014-04-01_02-56-03PM/jre/1.4.2/lib/i386/libawt.so: libXp.so.6: 
    cannot open shared object file: No such file or directory occurred..
    java.lang.UnsatisfiedLinkError: /tmp/OraInstall2014-04-01_02-56-03PM/jre/1.4.2/lib/i386/libawt.so: libXp.so.6: 
    cannot open shared object file: No such file or directory

Fix: install libXp via yum:
#  yum install libXp
Loaded plugins: rhnplugin, security
This system is not registered with ULN.
You can use up2date --register to register.
ULN support will be disabled.
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package libXp.i386 0:1.0.0-8.1.el5 set to be updated
---> Package libXp.x86_64 0:1.0.0-8.1.el5 set to be updated
... 

Problem 2: vipca fails with "error while loading shared libraries: libpthread.so.0".
Fix the vipca and srvctl scripts by unsetting the LD_ASSUME_KERNEL parameter.
# vipca
/u01/app/oracle/product/crs/jdk/jre//bin/java: error while loading shared libraries: libpthread.so.0: 
    cannot open shared object file: No such file or directory
# which vipca
/u01/app/oracle/product/crs/bin/vipca
[root@ract1 ~]# vi /u01/app/oracle/product/crs/bin/vipca
After the if statement around line 123, add an unset command to ensure LD_ASSUME_KERNEL is not set, as follows:
if [ "$arch" = "i686" -o "$arch" = "ia64" -o "$arch" = "x86_64" ]
then
LD_ASSUME_KERNEL=2.4.19
export LD_ASSUME_KERNEL
fi
unset LD_ASSUME_KERNEL   <<== line to be added
The same change is needed in the srvctl script.
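
Alternatively, both wrappers can be patched in one step; a minimal sketch assuming GNU sed (back up the originals first - appending the unset directly after the export line has the same net effect as adding it after the fi):
# cd /u01/app/oracle/product/crs/bin
# cp -p vipca vipca.orig ; cp -p srvctl srvctl.orig
# sed -i '/export LD_ASSUME_KERNEL/a unset LD_ASSUME_KERNEL' vipca srvctl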

Retest vipca (ignore "Error 0(Native: listNetInterfaces:[3])" for now - this error will be fixed later):
# vipca
Error 0(Native: listNetInterfaces:[3])
  [Error 0(Native: listNetInterfaces:[3])]
# which srvctl
/u01/app/oracle/product/crs/bin/srvctl
# vi /u01/app/oracle/product/crs/bin/srvctl  <-- unset LD_ASSUME_KERNEL in the srvctl script too

Usage: srvctl <command> <object> [<options>]
Execute the same steps on ract2 and verify that vipca and srvctl are running.

Before rerunning root.sh, clean up the previous setup.
Run on ract1 and ract2:
# cd /u01/app/oracle/product/crs/install
# ./rootdelete.sh 
# ./rootdeinstall.sh
#   rm -rf /var/tmp/.oracle
For more details on rerunning root.sh, see "How to Proceed From a Failed 10g or 11.1 Oracle Clusterware (CRS) Installation" (Doc ID 239998.1) in the references below.

Run root.sh on ract1:
# /u01/app/oracle/product/crs/root.sh
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured

Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: ract1 ract1int ract1
node 2: ract2 ract2int ract2
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /dev/raw/raw3
Now formatting voting device: /dev/raw/raw4
Now formatting voting device: /dev/raw/raw5
Format of 3 voting devices complete.
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        ract1
CSS is inactive on these nodes.
        ract2
Local node checking complete.
Run root.sh on remaining nodes to start CRS daemons.

Later, run root.sh on ract2:
# /u01/app/oracle/product/crs/root.sh
--> CRS doesn't come up. On the second node, root.sh fails with:
     Failure at final check of Oracle CRS stack.
     10

Verify Logs
# cd /u01/app/oracle/product/crs/log
The client css.log shows the error:
[root@ract2 log]# more ./ract2/client/css.log
Oracle Database 10g CRS Release 10.2.0.1.0 Production Copyright 1996, 2005 Oracle.  All rights reserved.
2014-04-01 16:13:01.176: [ CSSCLNT][3312432864]clsssInitNative: connect failed, rc 9
2014-04-01 16:13:02.201: [ CSSCLNT][3312432864]clsssInitNative: connect failed, rc 9
2014-04-01 16:13:03.222: [ CSSCLNT][3312432864]clsssInitNative: connect failed, rc 9
--> Disable the firewall on all cluster nodes.
    For details check: Pre-11.2: Root.sh Unable To Start CRS On Second Node (Doc ID 369699.1)

Disable the firewall on all cluster nodes:
# service iptables stop
# chkconfig iptables off
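
Verify that the filter is really gone and stays off across reboots:
# service iptables status      <-- should report that the firewall is stopped
# chkconfig --list iptables    <-- all runlevels should show off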

Clean up the clusterware setup on both nodes as before (see Doc ID 239998.1 in the references if you need details).
Now run root.sh on ract2:
#  /u01/app/oracle/product/crs/root.sh
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured

Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: ract1 ract1int ract1
node 2: ract2 ract2int ract2
clscfg: Arguments check out successfully.

NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        ract1
        ract2
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
Error 0(Native: listNetInterfaces:[3])
  [Error 0(Native: listNetInterfaces:[3])]

Finally, fix the vipca errors:
# oifcfg setif -global eth1/192.168.1.0:public 
# oifcfg setif -global eth2/192.168.2.0:cluster_interconnect 
# oifcfg getif 
eth1  192.168.1.0  global  public
eth2  192.168.2.0  global  cluster_interconnect

Now run vipca
# vipca
Node   VIP alias             VIP IP address   Netmask
ract1  ract1vip.example.com  192.168.1.135    255.255.255.0
ract2  ract2vip.example.com  192.168.1.136    255.255.255.0

Once vipca completes, all the Clusterware resources (VIP, GSD, ONS) are started.
There is no need to re-run root.sh, since vipca is the last step in root.sh.
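
The nodeapps that vipca created can be checked per node with srvctl:
# srvctl status nodeapps -n ract1
# srvctl status nodeapps -n ract2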

Verify CRS setup

# crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy
# crs_stat -t
Name           Type           Target    State     Host        
------------------------------------------------------------
ora.ract1.gsd  application    ONLINE    ONLINE    ract1       
ora.ract1.ons  application    ONLINE    ONLINE    ract1       
ora.ract1.vip  application    ONLINE    ONLINE    ract1       
ora.ract2.gsd  application    ONLINE    ONLINE    ract2       
ora.ract2.ons  application    ONLINE    ONLINE    ract2       
ora.ract2.vip  application    ONLINE    ONLINE    ract2
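
Two more cross-checks worth running at this point:
# ocrcheck                      <-- OCR integrity and both OCR locations
# crsctl query css votedisk     <-- should list the three voting devices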

References

  • Pre-11.2: Root.sh Unable To Start CRS On Second Node (Doc ID 369699.1)
  • Unable To Connect To Cluster Manager Ora-29701 as Network Socket Files are Removed (Doc ID 391790.1)
  • http://oracleview.wordpress.com/2011/03/31/oracle-10gr2-rac-on-linux-5-5-using-virtualbox-4/
  • http://www.databaseskill.com/2699596/
  • CLUVFY Fails With Error: Could not find a suitable set of interfaces for VIPs or Private Interconnect (Doc ID 338924.1)
  • How to Proceed From a Failed 10g or 11.1 Oracle Clusterware (CRS) Installation (Doc ID 239998.1)
  • 10gR2 RAC Install issues on Oracle EL5 or RHEL5 or SLES10 (VIPCA / SRVCTL / OUI Failures) (Doc ID 414163.1)

 
