RAC 11.2.0.4 setup using OPENFILER with Multipath ISCSI disks

Overview

Openfiler 2.99 product highlights

  • Unified storage: NAS/SAN
  • iSCSI target functionality
  • CIFS, NFS, HTTP, FTP Protocols
  • RAID 0,1,5,6,10 support
  • Bare metal or virtualization installation
  • 8TB+ Journaled filesystems
  • Filesystem Access Control Lists
  • Point In Time Copy (snapshots) support
  • Dynamic volume manager
  • Powerful web-based management
  • Block level remote replication
  • High Availability clustering

iSCSI Highlights

  • The iSCSI protocol allows clients (called initiators) to send SCSI commands (CDBs) to SCSI storage devices (targets) on remote servers.
  • iSCSI is a storage area network (SAN) protocol that allows organizations to consolidate storage into the data center.

RAC related issues

  • /dev/dm-N and /dev/sdXX device names are not persistent across reboots and should not be used
  • /dev/mapper devices are persistent across reboots and should be used for the multipathed ASM devices
  • Don't use this setup in a production environment

OpenFiler download and commands

 

  • Download Openfiler from http://www.openfiler.com/community/download
  • Manually dropping a volume group in the Openfiler GUI won't work - use the command line instead:

       # vgchange -a n rac11204_volgrp1
       # vgremove rac11204_volgrp1

Setup Openfiler 2.99

  • Create 3 x 2 GByte hard disks, attach the CD-ROM image openfileresa-2.99.1-x86_64-disc1.iso and install the software
  • Create 2 network devices ( VirtualBox internal network - hostname: openfiler, IP addresses: 192.168.1.195 and 192.168.2.195 ) before starting the installation

Reset the Openfiler password after installation - login as root:
# passwd openfiler   ( use openfiler as the password )

Test Openfiler GUI
Login via https from RAC Node grac41:  https://192.168.1.195:446/   Login : openfiler  Password : openfiler

Network Config
RAC node: grac41
eth0      inet addr:10.0.2.15          Bcast:10.0.2.255      Mask:255.255.255.0
eth1      inet addr:192.168.1.101     Bcast:192.168.1.255      Mask:255.255.255.0 VIP
eth2      inet addr:192.168.2.101      Bcast:192.168.2.255      Mask:255.255.255.0 Cluster interconnect
GATEWAY: 192.168.1.1
NAMESERVER: 192.168.1.50

Network Config - Openfiler:
Configure Openfiler with static addresses ( don't use the VirtualBox NAT interface here )
eth0      inet addr:192.168.1.195     Bcast:192.168.1.255      Mask:255.255.255.0
eth1      inet addr:192.168.2.195     Bcast:192.168.2.255      Mask:255.255.255.0
GATEWAY: 192.168.1.1
NAMESERVER: 192.168.1.50

Create VirtualBox disks used for the iSCSI setup later
VBoxManage createhd --filename M:\VM\OPENFILER\of_disk1_2G.vdi --size 2048 --format VDI --variant Fixed
VBoxManage createhd --filename M:\VM\OPENFILER\of_disk2_2G.vdi --size 2048 --format VDI --variant Fixed
VBoxManage createhd --filename M:\VM\OPENFILER\of_disk3_2G.vdi --size 2048 --format VDI --variant Fixed
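The three createhd calls above can also be generated in a loop; a small sketch (the echo makes this a dry run - remove it to actually create the disks; the host path M:\VM\OPENFILER is taken from the example above):

```shell
# Dry-run sketch: print the VBoxManage createhd command for each of the
# three 2 GB disks instead of typing them out. Remove 'echo' to run them.
for i in 1 2 3; do
    echo VBoxManage createhd --filename "M:\\VM\\OPENFILER\\of_disk${i}_2G.vdi" \
         --size 2048 --format VDI --variant Fixed
done
```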

Create new Volume Groups
Login via https from RAC Node grac41:  https://192.168.1.195:446/   Login : openfiler  Password : openfiler

Start the iSCSI service
Services -> Manage Services -> iSCSI target -> Enable -> Start
Manually check iscsi-target status
[root@openfiler ~]#  service iscsi-target status
ietd (pid 2336) is running...

Create Volume Group(s)
Volumes -> Volume Group -> Create a new volume group -> click on  create new physical volumes.
   Create a partition in /dev/sdb -> Create  
Volume group name (no spaces) : rac11204_volgrp1
Select physical volumes to add
    /dev/sdb1     1,91 GB
 --> Add volume group

Display all newly created volume groups
Volumes -> Volume Group  

Volume Group Name     Size     Allocated   Free     Members          Add physical storage  Delete VG
rac11204_volgrp3     1,88 GB   0 bytes     1,88 GB  View member PVs  All PVs are used      Delete
rac11204_volgrp2     1,88 GB   0 bytes     1,88 GB  View member PVs  All PVs are used      Delete
rac11204_volgrp1     1,88 GB   0 bytes     1,88 GB  View member PVs  All PVs are used      Delete

Display/add  ACLs
System -> Network Access Configuration and add the following ( Scroll down System page )
    grac41int     192.168.1.101     255.255.255.0     Share
    grac42int     192.168.1.102     255.255.255.0     Share
    grac43int     192.168.1.103     255.255.255.0     Share

Create 3 volume groups, each with a single disk - file system type iSCSI!
Volumes -> Volume Groups    
Volume Group Name     Size       Allocated     Free      Members     Add physical storage     Delete VG
rac11204_volgrp3     1.88 GB   1.88 GB     0 bytes  View member PVs     All PVs are used     VG contains volumes
rac11204_volgrp2     1.88 GB   1.88 GB     0 bytes  View member PVs     All PVs are used     VG contains volumes
rac11204_volgrp1     1.88 GB    1.88 GB     0 bytes  View member PVs     All PVs are used     VG contains volumes

Display rac11204_volgrp1 properties 
Volumes -> Manage Volumes -> Select Volume group -> rac11204_volgrp1 
Volume name     Volume description     Volume size     File system type     
openfilerdisk1     OpenfilerDisk1             1920 MB     iSCSI

Add new iSCSI Target - Map Disks using Target Configuration and LUN Mapping 
Volumes -> iSCSI targets ->  Select iSCSI Target ->  iqn.2006-01.com.openfiler:grac41_disk1 
   -> LUN Mapping ->  LUNs mapped to target: iqn.2006-01.com.openfiler:grac41_disk1
LUNs mapped to target: iqn.2006-01.com.openfiler:grac41_disk1
LUN Id. LUN Path                             R/W Mode     SCSI Serial No.     SCSI Id.             Transfer Mode     
0     /dev/rac11204_volgrp1/openfilerdisk1     write-thru     2dUaQD-Ra3m-VnBP     2dUaQD-Ra3m-VnBP     blockio    

LUNs mapped to target: iqn.2006-01.com.openfiler:grac41_disk2 
LUN Id. LUN Path                             R/W Mode     SCSI Serial No.     SCSI Id.             Transfer Mode 
0     /dev/rac11204_volgrp2/openfilerdisk2     write-thru     xd19ml-mIPI-RQqx     xd19ml-mIPI-RQqx     blockio

LUNs mapped to target: iqn.2006-01.com.openfiler:grac41_disk3 
LUN Id. LUN Path                             R/W Mode     SCSI Serial No.     SCSI Id.             Transfer Mode     
0     /dev/rac11204_volgrp3/openfilerdisk3     write-thru     R7UoL5-WKNl-Dnsk     R7UoL5-WKNl-Dnsk     blockio

Verify iSCSI host access configuration:
Volumes section -> iSCSI targets ->  Select iSCSI Target ->  iqn.2006-01.com.openfiler:grac41_disk1 
   -> Network ACL ->  iSCSI host access configuration for target "iqn.2006-01.com.openfiler:grac41_disk1"
Name              Network/Host     Netmask     Access
grac41int     192.168.1.101     255.255.255.0   Allow     
grac42int     192.168.1.102     255.255.255.0     Allow
grac43int     192.168.1.103     255.255.255.0   Allow

Setup ISCSI clients/RAC Nodes

Verify that the iscsi-initiator-utils package is already installed
$   rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n"| grep iscsi-initiator-utils
iscsi-initiator-utils-6.2.0.873-2.0.2.el6 (x86_64)

Configure and start iscsid service
#  service iscsid start
#  chkconfig iscsi on

Install device-mapper-multipath package
# yum install device-mapper-multipath.x86_64

Discover iSCSI targets
# iscsiadm -m discovery -t sendtargets -p 192.168.2.195
Starting iscsid:                                           [  OK  ]
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:grac41_disk3
192.168.1.195:3260,1 iqn.2006-01.com.openfiler:grac41_disk3
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:grac41_disk2
192.168.1.195:3260,1 iqn.2006-01.com.openfiler:grac41_disk2
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:grac41_disk1
192.168.1.195:3260,1 iqn.2006-01.com.openfiler:grac41_disk1
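The six discovery lines above can be reduced to their unique target names with a little awk; a sketch using the captured output (a dry run - the echo prints the login commands from the next section instead of executing them):

```shell
# Sketch: parse the sendtargets output shown above into unique IQNs and
# print the corresponding login commands. Drop 'echo' to really log in.
discovery='192.168.2.195:3260,1 iqn.2006-01.com.openfiler:grac41_disk3
192.168.1.195:3260,1 iqn.2006-01.com.openfiler:grac41_disk3
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:grac41_disk2
192.168.1.195:3260,1 iqn.2006-01.com.openfiler:grac41_disk2
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:grac41_disk1
192.168.1.195:3260,1 iqn.2006-01.com.openfiler:grac41_disk1'
targets=$(printf '%s\n' "$discovery" | awk '{print $2}' | sort -u)
for t in $targets; do
    for portal in 192.168.2.195 192.168.1.195; do
        echo iscsiadm -m node -T "$t" -l -p "$portal"
    done
done
```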

Manually Log In to iSCSI Targets
Verify the first path
# iscsiadm -m node -T  iqn.2006-01.com.openfiler:grac41_disk1  -l -p  192.168.2.195
Logging in to [iface: default, target: iqn.2006-01.com.openfiler:grac41_disk1, portal: 192.168.2.195,3260] (multiple)
Login to [iface: default, target: iqn.2006-01.com.openfiler:grac41_disk1, portal: 192.168.2.195,3260] successful.
# iscsiadm -m node -T  iqn.2006-01.com.openfiler:grac41_disk2 -l -p  192.168.2.195
# iscsiadm -m node -T  iqn.2006-01.com.openfiler:grac41_disk3 -l -p  192.168.2.195

Verify the second path
# iscsiadm -m node -T  iqn.2006-01.com.openfiler:grac41_disk1 -l -p  192.168.1.195
# iscsiadm -m node -T  iqn.2006-01.com.openfiler:grac41_disk2 -l -p  192.168.1.195
# iscsiadm -m node -T  iqn.2006-01.com.openfiler:grac41_disk3 -l -p  192.168.1.195
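To avoid logging in by hand after every reboot, the discovered nodes can be switched to automatic startup with iscsiadm's -o update mode. A dry-run sketch over the three target names from above (remove the echo to apply):

```shell
# Sketch: print the commands that set node.startup=automatic for each
# target so the iscsi service logs in at boot. Remove 'echo' to apply.
for t in grac41_disk1 grac41_disk2 grac41_disk3; do
    echo iscsiadm -m node -T "iqn.2006-01.com.openfiler:$t" \
         -o update -n node.startup -v automatic
done
```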

Display current sessions
#  iscsiadm -m session
tcp: [10] 192.168.1.195:3260,1 iqn.2006-01.com.openfiler:grac41_disk3 
tcp: [5] 192.168.2.195:3260,1 iqn.2006-01.com.openfiler:grac41_disk1 
tcp: [6] 192.168.2.195:3260,1 iqn.2006-01.com.openfiler:grac41_disk2 
tcp: [7] 192.168.2.195:3260,1 iqn.2006-01.com.openfiler:grac41_disk3 
tcp: [8] 192.168.1.195:3260,1 iqn.2006-01.com.openfiler:grac41_disk1 
tcp: [9] 192.168.1.195:3260,1 iqn.2006-01.com.openfiler:grac41_disk2 

Log off all sessions
#  iscsiadm -m node -u
Logging out of session [sid: 2,  target: iqn.2006-01.com.openfiler:grac41_disk1, portal: 192.168.2.195,3260]
Logging out of session [sid: 3, target: iqn.2006-01.com.openfiler:grac41_disk2, portal: 192.168.2.195,3260]
Logging out of session [sid: 4, target: iqn.2006-01.com.openfiler:grac41_disk3, portal: 192.168.2.195,3260]
Logout of [sid: 2, target: iqn.2006-01.com.openfiler:grac41_disk1, portal: 192.168.2.195,3260] successful.
Logout of [sid: 3, target: iqn.2006-01.com.openfiler:grac41_disk2, portal: 192.168.2.195,3260] successful.
Logout of [sid: 4, target: iqn.2006-01.com.openfiler:grac41_disk3, portal: 192.168.2.195,3260] successful.

Verify the Openfiler disk mapping and format the disks ( format the disks on the first node only )
# ls -l /dev/disk/by-path/*openfiler*
lrwxrwxrwx. 1 root root 9 Feb 27 08:48 /dev/disk/by-path/ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:grac41_disk1-lun-0 -> ../../sdp
lrwxrwxrwx. 1 root root 9 Feb 27 08:50 /dev/disk/by-path/ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:grac41_disk2-lun-0 -> ../../sdq
lrwxrwxrwx. 1 root root 9 Feb 27 08:50 /dev/disk/by-path/ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:grac41_disk3-lun-0 -> ../../sdr

# ls -l /dev/mapper/*
crw-rw----. 1 root root 10, 236 Feb 16 10:34 /dev/mapper/control
lrwxrwxrwx. 1 root root       7 Feb 27 15:19 /dev/mapper/mpathp -> ../dm-2
lrwxrwxrwx. 1 root root       7 Feb 27 15:19 /dev/mapper/mpathpp1 -> ../dm-5
lrwxrwxrwx. 1 root root       7 Feb 27 15:19 /dev/mapper/mpathq -> ../dm-3
lrwxrwxrwx. 1 root root       7 Feb 27 15:19 /dev/mapper/mpathqp1 -> ../dm-6
lrwxrwxrwx. 1 root root       7 Feb 27 15:19 /dev/mapper/mpathr -> ../dm-4
lrwxrwxrwx. 1 root root       7 Feb 27 15:19 /dev/mapper/mpathrp1 -> ../dm-7
#  ls -l /dev/mapper/../dm-2
brw-rw----. 1 grid asmadmin 252, 2 Feb 27 15:19 /dev/mapper/../dm-2
# ls -l /dev/mapper/../dm-5 
brw-rw----. 1 grid asmadmin 252, 5 Feb 27 15:19 /dev/mapper/../dm-5

Format disks
[root@grac41 etc]# fdisk /dev/dm-5
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').
Command (m for help): p
Disk /dev/dm-5: 2013 MB, 2013265920 bytes
255 heads, 63 sectors/track, 244 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xb5c7150c
     Device Boot      Start         End      Blocks   Id  System
/dev/dm-5p1               1         244     1959898+  83  Linux

Add to rc.local
#multipath disks for Oracle 11gR2
chown grid:asmadmin /dev/mapper/mpath*
chmod 0660 /dev/mapper/mpath* 

Create file /etc/multipath.conf 
defaults {
    udev_dir              /dev
    polling_interval      10
    path_selector         "round-robin 0"
    path_grouping_policy  multibus
    getuid_callout        "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
#    prio                  alua
    path_checker          readsector0
    rr_min_io             100
    max_fds               8192
    rr_weight             priorities
    failback              immediate
    no_path_retry         fail
    user_friendly_names   yes
}

blacklist {
    # Blacklist by WWID
    wwid "*"
}
blacklist_exceptions {
    wwid "14f504e46494c45523264556151442d5261336d2d566e4250"
    wwid "14f504e46494c4552786431396d6c2d6d4950492d52517178"
    wwid "14f504e46494c45525237556f4c352d574b4e6c2d446e736b"
}
multipaths {
    multipath {
        wwid                  14f504e46494c45523264556151442d5261336d2d566e4250 
        alias                 grac41_disk1 
    }
    multipath {
        wwid                  14f504e46494c4552786431396d6c2d6d4950492d52517178 
        alias                 grac41_disk2 
    }
    multipath {
        wwid                  14f504e46494c45525237556f4c352d574b4e6c2d446e736b 
        alias                 grac41_disk3 
    }
}

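Side note: the WWIDs listed in blacklist_exceptions are not opaque. For these Openfiler targets they are a leading 1 followed by the hex-encoded ASCII of "OPNFILER" plus the SCSI serial number shown in the LUN mapping tables above, which gives a handy cross-check. A sketch (assumes xxd is installed):

```shell
# Decode an Openfiler WWID: strip the leading '1', then hex-decode the rest.
# The result is OPNFILER followed by the SCSI serial from the GUI.
wwid=14f504e46494c45523264556151442d5261336d2d566e4250
decoded=$(printf '%s' "${wwid#1}" | xxd -r -p)
echo "$decoded"    # -> OPNFILER2dUaQD-Ra3m-VnBP
```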
Restart multipathd and configure daemon to startup on reboot 
#  service multipathd restart                                                                              
ok
Stopping multipathd daemon:                                [  OK  ]
Starting multipathd daemon:                                [  OK  ]
# chkconfig multipathd on 

Verify current multipath configuration
#  multipath -ll
mpathr (14f504e46494c45525237556f4c352d574b4e6c2d446e736b) dm-4 OPNFILER,VIRTUAL-DISK
size=1.9G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 23:0:0:0 sdr 65:16 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 26:0:0:0 sdu 65:64 active ready running
mpathq (14f504e46494c4552786431396d6c2d6d4950492d52517178) dm-3 OPNFILER,VIRTUAL-DISK
size=1.9G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 22:0:0:0 sdq 65:0  active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 25:0:0:0 sdt 65:48 active ready running
mpathp (14f504e46494c45523264556151442d5261336d2d566e4250) dm-2 OPNFILER,VIRTUAL-DISK
size=1.9G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 21:0:0:0 sdp 8:240 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 24:0:0:0 sds 65:32 active ready running

Reboot your system and verify the correct mapping of our multipath devices
Note: we need to repeat the above setup steps on all available RAC nodes!

Verify the multipath setup on grac43 ( grac41 and grac42 should give you similar output )
[root@grac43 ~]# multipath -ll
grac41_disk3 (14f504e46494c45525237556f4c352d574b4e6c2d446e736b) dm-2 OPNFILER,VIRTUAL-DISK
size=1.9G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 22:0:0:0 sdu 65:64 active ready running
  `- 21:0:0:0 sdt 65:48 active ready running
grac41_disk2 (14f504e46494c4552786431396d6c2d6d4950492d52517178) dm-3 OPNFILER,VIRTUAL-DISK
size=1.9G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 18:0:0:0 sdr 65:16 active ready running
  `- 17:0:0:0 sdp 8:240 active ready running
grac41_disk1 (14f504e46494c45523264556151442d5261336d2d566e4250) dm-4 OPNFILER,VIRTUAL-DISK
size=1.9G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 20:0:0:0 sdq 65:0  active ready running
  `- 19:0:0:0 sds 65:32 active ready running

# ls -l /dev/mapper/*
[root@grac43 ~]#  ls -l /dev/mapper/*
crw-rw----. 1 root root 10, 236 Feb 27 16:53 /dev/mapper/control
lrwxrwxrwx. 1 root root       7 Feb 27 16:53 /dev/mapper/grac41_disk1 -> ../dm-4
lrwxrwxrwx. 1 root root       7 Feb 27 16:53 /dev/mapper/grac41_disk1p1 -> ../dm-6
lrwxrwxrwx. 1 root root       7 Feb 27 16:53 /dev/mapper/grac41_disk2 -> ../dm-3
lrwxrwxrwx. 1 root root       7 Feb 27 16:53 /dev/mapper/grac41_disk2p1 -> ../dm-7
lrwxrwxrwx. 1 root root       7 Feb 27 16:53 /dev/mapper/grac41_disk3 -> ../dm-2
lrwxrwxrwx. 1 root root       7 Feb 27 16:53 /dev/mapper/grac41_disk3p1 -> ../dm-5
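To compare the alias-to-device mapping across nodes at a glance, the multipath -ll output can be reduced to alias/dm pairs. A sketch over the sample lines above (pipe the live multipath -ll output in instead of the embedded sample):

```shell
# Sketch: extract "alias dm-name" pairs from 'multipath -ll' style output.
sample='grac41_disk3 (14f504e46494c45525237556f4c352d574b4e6c2d446e736b) dm-2 OPNFILER,VIRTUAL-DISK
grac41_disk2 (14f504e46494c4552786431396d6c2d6d4950492d52517178) dm-3 OPNFILER,VIRTUAL-DISK
grac41_disk1 (14f504e46494c45523264556151442d5261336d2d566e4250) dm-4 OPNFILER,VIRTUAL-DISK'
pairs=$(printf '%s\n' "$sample" | awk '/^grac41/ {print $1, $3}')
printf '%s\n' "$pairs"
```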

Setup UDEV configuration for a multipath environment with Openfiler

  • Note: without a proper udev setup you can run into a lot of trouble, as the /dev/dm-X and /dev/sdX names can change on reboot
  • For details please read the following article.
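A minimal sketch of such a udev rule (the file name and the grid:asmadmin ownership are assumptions matching the rc.local workaround above; one rule per partition alias from /etc/multipath.conf):

```
# /etc/udev/rules.d/12-dm-permissions.rules (sketch - adjust the aliases)
# Match the multipath partition by its device-mapper name and fix ownership,
# so permissions survive a reboot without the rc.local chown/chmod.
KERNEL=="dm-*", ENV{DM_NAME}=="grac41_disk1p1", OWNER:="grid", GROUP:="asmadmin", MODE:="0660"
```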

 

Test your Multipath installation by shutting down eth1

# ifdown eth1
/var/log/messages reports :
Feb 27 11:53:34 grac41 kernel: connection8:0: detected conn error (1011)
Feb 27 11:53:35 grac41 iscsid: Kernel reported iSCSI connection 8:0 error (1011 - ISCSI_ERR_CONN_FAILED: iSCSI connection failed) state (3)

#   multipath -ll
mpathr (14f504e46494c45525237556f4c352d574b4e6c2d446e736b) dm-4 OPNFILER,VIRTUAL-DISK
size=1.9G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 23:0:0:0 sdr 65:16 active ready  running
`-+- policy='round-robin 0' prio=0 status=enabled
`- 26:0:0:0 sdu 65:64 failed faulty running
mpathq (14f504e46494c4552786431396d6c2d6d4950492d52517178) dm-3 OPNFILER,VIRTUAL-DISK
size=1.9G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 22:0:0:0 sdq 65:0  active ready  running
`-+- policy='round-robin 0' prio=0 status=enabled
`- 25:0:0:0 sdt 65:48 failed faulty running
mpathp (14f504e46494c45523264556151442d5261336d2d566e4250) dm-2 OPNFILER,VIRTUAL-DISK
size=1.9G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 21:0:0:0 sdp 8:240 active ready  running
`-+- policy='round-robin 0' prio=0 status=enabled
`- 24:0:0:0 sds 65:32 failed faulty running

--> After shutting down eth1 the paths /dev/sds, /dev/sdt and /dev/sdu are reported as failed faulty

Restart network device eth1 and check multipath configuration
# ifup eth1
/var/log/messages reports :
Feb 27 11:56:27 grac41 iscsid: connection8:0 is operational after recovery (12 attempts)
Feb 27 11:56:28 grac41 iscsid: connection9:0 is operational after recovery (12 attempts)
Feb 27 11:56:28 grac41 iscsid: connection10:0 is operational after recovery (12 attempts)
Feb 27 11:56:29 grac41 multipathd: mpathq: sdt - directio checker reports path is up
Feb 27 11:56:29 grac41 multipathd: 65:48: reinstated
Feb 27 11:56:29 grac41 multipathd: mpathq: remaining active paths: 2
Feb 27 11:56:29 grac41 multipathd: mpathr: sdu - directio checker reports path is up
Feb 27 11:56:29 grac41 multipathd: 65:64: reinstated
Feb 27 11:56:29 grac41 multipathd: mpathr: remaining active paths: 2
Feb 27 11:56:32 grac41 multipathd: mpathp: sds - directio checker reports path is up
Feb 27 11:56:32 grac41 multipathd: 65:32: reinstated
Feb 27 11:56:32 grac41 multipathd: mpathp: remaining active paths: 2

#  multipath -ll
mpathr (14f504e46494c45525237556f4c352d574b4e6c2d446e736b) dm-4 OPNFILER,VIRTUAL-DISK
size=1.9G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 23:0:0:0 sdr 65:16 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
`- 26:0:0:0 sdu 65:64 active ready running
mpathq (14f504e46494c4552786431396d6c2d6d4950492d52517178) dm-3 OPNFILER,VIRTUAL-DISK
size=1.9G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 22:0:0:0 sdq 65:0  active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
`- 25:0:0:0 sdt 65:48 active ready running
mpathp (14f504e46494c45523264556151442d5261336d2d566e4250) dm-2 OPNFILER,VIRTUAL-DISK
size=1.9G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 21:0:0:0 sdp 8:240 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
`- 24:0:0:0 sds 65:32 active ready running

--> Both the logs and multipath -ll show that the failed network paths are available again without any intervention

Testing I/O Performance with dd

Disk /dev/asmdisk_OF-disk1:
[root@grac41 Desktop]#  dd if=/dev/asmdisk_OF-disk1 of=/dev/null bs=1k count=1000000
1000000+0 records in
1000000+0 records out
1024000000 bytes (1.0 GB) copied, 27.5834 s, 37.1 MB/s
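The MB/s figure dd prints can be sanity-checked from the byte count and elapsed time; a quick awk sketch using the numbers above (dd reports decimal megabytes, i.e. bytes/1000000 per second):

```shell
# 1024000000 bytes read in 27.5834 s -> throughput in decimal MB/s.
mbps=$(awk 'BEGIN { printf "%.1f", 1024000000 / 27.5834 / 1000000 }')
echo "$mbps MB/s"    # -> 37.1 MB/s
```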

Monitor disk performance on our openfiler  using iostat  
Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sdb             367.00     47317.33         2.67     141952          8
sdb1            367.00     47317.33         2.67     141952          8

Disk /dev/asmdisk_OF-disk2:
[root@grac41 Desktop]# dd if=/dev/asmdisk_OF-disk2  of=/dev/null bs=1k count=1000000
1000000+0 records in
1000000+0 records out
1024000000 bytes (1.0 GB) copied, 23.4077 s, 43.7 MB/s
Monitor disk performance on our openfiler  using iostat  
Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sdc             388.67     49664.67         0.00     148994          0
sdc1            388.67     49664.67         0.00     148994          0

Disk /dev/asmdisk_OF-disk3:
[root@grac41 Desktop]# dd if=/dev/asmdisk_OF-disk3 of=/dev/null bs=1k count=1000000
1000000+0 records in
1000000+0 records out
1024000000 bytes (1.0 GB) copied, 38.3407 s, 26.7 MB/s

Monitor disk performance on our openfiler  using iostat  
Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sdd             400.00     51200.00         0.00     153600          0
sdd1            400.00     51200.00         0.00     153600          0

Testing I/O Performance using sqlplus

  • Create the OPENFILER ASM diskgroup using the iSCSI disks /dev/asmdisk_OF-disk1, /dev/asmdisk_OF-disk2 and /dev/asmdisk_OF-disk3
  • Verify these settings on all nodes and after that use asmca to create our OPENFILER diskgroup
Create tablespace/table and issue some load
SQL> connect sys ....
SQL> create tablespace  OPENFILER_TS   datafile '+OPENFILER_DG' size 1g;

SQL> connect scott/tiger
SQL> create table t  tablespace openfiler_ts as ( select * from  all_objects );
SQL> insert into t ( select * from t );
SQL> insert into t ( select * from t );
SQL> insert into t ( select * from t );
SQL> insert into t ( select * from t );
SQL> insert into t ( select * from t );
SQL> insert into t ( select * from t );
2738688 rows created.

Monitor I/O distribution on our RAC node:
Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sdp              35.50         0.00      1020.00          0       4080
sdq              49.75         0.00      1554.00          0       6216
sdr              37.25         0.00       724.00          0       2896
sds              44.00         0.00      1484.00          0       5936
sdt              28.50         0.00       464.00          0       1856
sdu              30.50         0.00       744.00          0       2976
dm-2             72.75         0.00      1744.00          0       6976
dm-3             93.75         0.00      3038.00          0      12152
dm-4             59.00         0.00      1208.00          0       4832
dm-5             95.75         0.00      1538.00          0       6152
dm-7             92.00         0.00      1488.00          0       5952
dm-6             98.50         0.00      1588.00          0       6352
# multipath -ll
grac41_disk3 (14f504e46494c45525237556f4c352d574b4e6c2d446e736b) dm-4 OPNFILER,VIRTUAL-DISK
size=1.9G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 21:0:0:0 sdt 65:48 active ready running
  `- 22:0:0:0 sdu 65:64 active ready running
grac41_disk2 (14f504e46494c4552786431396d6c2d6d4950492d52517178) dm-2 OPNFILER,VIRTUAL-DISK
size=1.9G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 17:0:0:0 sdp 8:240 active ready running
  `- 18:0:0:0 sdr 65:16 active ready running
grac41_disk1 (14f504e46494c45523264556151442d5261336d2d566e4250) dm-3 OPNFILER,VIRTUAL-DISK
size=1.9G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 20:0:0:0 sdq 65:0  active ready running
  `- 19:0:0:0 sds 65:32 active ready running

--> There is good I/O distribution across all multipathed devices: /dev/sdp - /dev/sdu
--> The 3 multipathed dm devices show an I/O rate of about 60-95 IOPS each - in summary about 225 IOPS
--> The 6 underlying SD devices show an I/O rate of about 30-50 IOPS each - again about 225 IOPS in summary
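The per-device rates can be cross-checked by summing the tps column of the iostat output above (values copied from the table); the sd paths and the dm devices should add up to the same total:

```shell
# Sum the tps column for the six sd paths and the three multipath dm
# devices from the iostat sample above; both totals should match.
sd_total=$(awk 'BEGIN { printf "%.1f", 35.50+49.75+37.25+44.00+28.50+30.50 }')
dm_total=$(awk 'BEGIN { printf "%.1f", 72.75+93.75+59.00 }')
echo "sd paths: $sd_total tps, dm devices: $dm_total tps"
```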

Monitor I/O distribution on our OPENFiler system 
Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda               0.00         0.00         0.00          0          0
sda1              0.00         0.00         0.00          0          0
sda2              0.00         0.00         0.00          0          0
sda3              0.00         0.00         0.00          0          0
sdb             200.00         0.00      3234.78          0       9672
sdb1            200.00         0.00      3234.78          0       9672
sdc             238.46         0.00      4029.43          0      12048
sdc1            238.46         0.00      4029.43          0      12048
sdd             228.43         0.00      4147.16          0      12400
sdd1            228.43         0.00      4147.16          0      12400

--> The Openfiler system shows a good I/O distribution across all ASM disks
--> Each of our 3 disks shows an I/O rate of around 200-240 IOPS

Reference

  • http://opensource.marshall.edu/papers/rhel5-iscsi-HOWTO.pdf
  • http://orainternals.wordpress.com/2012/08/29/do-you-need-asmlib
  • http://initq.com/index.php/Setting_up_Multipathing_on_Linux
  • http://fritshoogland.wordpress.com/2012/07/23/using-udev-on-rhel-6-ol-6-to-change-disk-permissions-for-asm/
  • http://murtazahabib.wordpress.com/2012/08/20/4/

