ACFS install on top of GRID 11.2.0.3.4


Used Software

  • GRID: 11.2.0.3.4
  • OEL 6.3
  • VirtualBox 4.2.14

 

Create ASM diskgroup

Create the new ASM disks:
D:\VM> VBoxManage createhd --filename C:\VM\GRACE2\ASM\asm_1G_ACFS1.vdi --size 1024 --format VDI --variant Fixed
D:\VM> VBoxManage createhd --filename C:\VM\GRACE2\ASM\asm_1G_ACFS2.vdi --size 1024 --format VDI --variant Fixed

Shutdown grac1 and attach the disks to the VM grac1
D:\VM>VBoxManage storageattach grac1 --storagectl "SATA" --port 9  --device 0 --type hdd --medium C:\VM\GRACE2\ASM\asm_1G_ACFS1.vdi 
D:\VM>VBoxManage storageattach grac1 --storagectl "SATA" --port 10  --device 0 --type hdd --medium  C:\VM\GRACE2\ASM\asm_1G_ACFS2.vdi 

Change the disk type to shareable
D:\VM>VBoxManage modifyhd C:\VM\GRACE2\ASM\asm_1G_ACFS1.vdi  --type shareable
D:\VM>VBoxManage modifyhd C:\VM\GRACE2\ASM\asm_1G_ACFS2.vdi  --type shareable
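Optionally confirm the disk type on the host before booting the VMs; VBoxManage showhdinfo should report the type as shareable (a quick sanity check, using the paths from above):
D:\VM> VBoxManage showhdinfo C:\VM\GRACE2\ASM\asm_1G_ACFS1.vdi
D:\VM> VBoxManage showhdinfo C:\VM\GRACE2\ASM\asm_1G_ACFS2.vdi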
After a reboot check /dev (we should now have 2 newly created disk devices)
# ls -l /dev/sdh /dev/sdi
brw-rw---- 1 root disk 8, 112 Aug  4 13:52 /dev/sdh
brw-rw---- 1 root disk 8, 128 Aug  4 13:49 /dev/sdi


Create new partitions (sample shown for /dev/sdh; repeat for /dev/sdi)
# fdisk  /dev/sdh
Command (m for help): p
Disk /dev/sdh: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-130, default 1): 
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-130, default 130): 
Using default value 130
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
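Instead of rebooting, the kernel can usually be told to re-read the new partition tables with partprobe (from the parted package); if this does not work, fall back to a reboot:
# partprobe /dev/sdh
# partprobe /dev/sdi
# fdisk -l /dev/sdh | grep sdh1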

Create ASM disks and check their status:
# /etc/init.d/oracleasm createdisk acfs_data1 /dev/sdh
Marking disk "acfs_data1" as an ASM disk:                  [FAILED]
--> createdisk against the whole device /dev/sdh fails; reboot the VM (or re-read the partition table as shown above) and create the ASM disks on the partitions /dev/sdh1 and /dev/sdi1
# /etc/init.d/oracleasm createdisk acfs_data /dev/sdh1
Marking disk "acfs_data" as an ASM disk:                   [  OK  ]
[root@grac1 Desktop]# /etc/init.d/oracleasm createdisk acfs_data2 /dev/sdi1
Marking disk "acfs_data2" as an ASM disk:                  [  OK  ]
# /etc/init.d/oracleasm listdisks
ACFS_DATA
ACFS_DATA2
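The new labels can be verified with oracleasm querydisk; on a second node that already sees the shared disks, oracleasm scandisks picks up the labels without recreating them (standard ASMLib commands):
# /etc/init.d/oracleasm querydisk ACFS_DATA
# /etc/init.d/oracleasm querydisk ACFS_DATA2
# /etc/init.d/oracleasm scandisks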

Create ASM diskgroup using asmca
--> ADVM Compatibility is greyed out and not selectable
Check OS version and kernel release
# uname -r
2.6.39-300.17.2.el6uek.x86_64
# cat /etc/oracle-release 
Oracle Linux Server release 6.3
# lsmod | grep ora
oracleasm              53352  1
See: Bug 12983005  Linux: ADVM/ACFS is not supported on OS version '2.6.39-100.7.1.el6uek.x86_64'
--> Missing drivers: oracleacfs, oracleadvm, oracleoks
Fix : Install clusterware patch 11.2.0.3.3 or higher
For a detailed 11.2.0.3.4 patch install please read the following link.
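Whether the running kernel is supported by the installed ACFS drivers can be checked with acfsdriverstate from the GRID home (a quick check before and after patching):
$ $GRID_HOME/bin/acfsdriverstate supported
$ $GRID_HOME/bin/acfsdriverstate version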

After the patch install we should see the following drivers loaded on both nodes:
# lsmod | grep ora
oracleacfs           1844281  0 
oracleadvm            231722  0 
oracleoks             329652  2 
oracleasm              53352  1

Use asmca to create the ASM diskgroup by selecting the two new disks (labels ACFS_DATA and ACFS_DATA2, backed by asm_1G_ACFS1.vdi and asm_1G_ACFS2.vdi)
Use advanced configuration to set
  ASM       Compatibility : 11.2.0.0.0
  ADVM      Compatibility : 11.2.0.0.0
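If you prefer the command line over asmca, the diskgroup can also be created from SQL*Plus (a sketch, assuming the ASMLib discovery string ORCL:* and normal redundancy over the two disks):
$ sqlplus / as sysasm
SQL> CREATE DISKGROUP ACFS NORMAL REDUNDANCY
       DISK 'ORCL:ACFS_DATA', 'ORCL:ACFS_DATA2'
       ATTRIBUTE 'compatible.asm'  = '11.2',
                 'compatible.advm' = '11.2';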

 

Create a new ACFS volume in our ACFS diskgroup

Trying to use 1G of space for our volume from our ACFS diskgroup
ASMCMD> volcreate -G ACFS -s 1G ACFS_VOL1
ORA-15032: not all alterations performed
ORA-15041: diskgroup "ACFS" space exhausted (DBD ERROR: OCIStmtExecute)
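Before picking a volume size it helps to check how much mirrored space the diskgroup can actually provide (a quick check; for a normal redundancy diskgroup the Usable_file_MB column of lsdg is the relevant number):
$ asmcmd lsdg ACFS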
Create a first volume using 900 MByte from our ACFS diskgroup
ASMCMD> volcreate -G ACFS -s 900M ACFS_VOL1

Check our ACFS disk volume
$ asmcmd volinfo -a
Diskgroup Name: ACFS
     Volume Name: ACFS_VOL1
     Volume Device: /dev/asm/acfs_vol1-140
     State: ENABLED
     Size (MB): 928
     Resize Unit (MB): 32
     Redundancy: MIRROR
     Stripe Columns: 4
     Stripe Width (K): 128
     Usage: 
     Mountpath: 

Check the related Linux device
$ ls -l /dev/asm
total 0
brwxrwx--- 1 root asmadmin 251, 71681 Aug  4 19:53 acfs_vol1-140
Create ACFS file system
# mkfs -t acfs /dev/asm/acfs_vol1-140
mkfs.acfs: version                   = 11.2.0.3.0
mkfs.acfs: on-disk version           = 39.0
mkfs.acfs: volume                    = /dev/asm/acfs_vol1-140
mkfs.acfs: volume size               = 973078528
mkfs.acfs: Format complete.

On grac1 create the mount point and mount the filesystem
# mkdir -p /u01/app/oracle/acfsmount/acfs_vol1
# mount -t acfs /dev/asm/acfs_vol1-140 /u01/app/oracle/acfsmount/acfs_vol1
# df /dev/asm/acfs_vol1-140
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/asm/acfs_vol1-140
                        950272     39192    911080   5% /u01/app/oracle/acfsmount/acfs_vol1
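Once mounted, acfsutil can report the file system details as well (a quick cross-check):
# /sbin/acfsutil info fs /u01/app/oracle/acfsmount/acfs_vol1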

On grac2:
# mkdir -p /u01/app/oracle/acfsmount/acfs_vol1
#  mount -t acfs /dev/asm/acfs_vol1-140 /u01/app/oracle/acfsmount/acfs_vol1
mount.acfs: CLSU-00100: Operating System function: open64 failed with error data: 2
mount.acfs: CLSU-00101: Operating System error message: No such file or directory
mount.acfs: CLSU-00103: error location: OOF_1
mount.acfs: CLSU-00104: additional error information: open64 (/dev/asm/acfs_vol1-140)
mount.acfs: ACFS-02017: Failed to open volume /dev/asm/acfs_vol1-140. Verify the volume exists.

Checking the mount status with asmca -> diskgroup ACFS is only mounted by grac1!
Trying to mount by right-clicking the entry in asmca -> error: insufficient number of disks
Checking ASM disk status
# /etc/init.d/oracleasm  listdisks
DATA1
DATA2
DATA3
OCR1
OCR2
OCR3
--> The ACFS disks are missing on our VirtualBox grac2 image
Shutdown grac2 and attach the disks to the grac2 VBox image
D:\VM> VBoxManage storageattach grac2 --storagectl "SATA" --port 9  --device 0 --type hdd --medium C:\VM\GRACE2\ASM\asm_1G_ACFS1.vdi 
D:\VM> VBoxManage storageattach grac2 --storagectl "SATA" --port 10  --device 0 --type hdd --medium  C:\VM\GRACE2\ASM\asm_1G_ACFS2.vdi 
Reboot and check ACFS disks
# /etc/init.d/oracleasm  listdisks
ACFS_DATA
ACFS_DATA2
DATA1
...
# mount -t acfs /dev/asm/acfs_vol1-140 /u01/app/oracle/acfsmount/acfs_vol1
mount.acfs: CLSU-00100: Operating System function: open64 failed with error data: 2
mount.acfs: CLSU-00101: Operating System error message: No such file or directory
mount.acfs: CLSU-00103: error location: OOF_1
mount.acfs: CLSU-00104: additional error information: open64 (/dev/ofsctl)
mount.acfs: ACFS-00502: Failed to communicate with the ACFS driver.  Verify the ACFS driver has been loaded.
Checking ACFS driver 
#  lsmod | grep ora
oracleasm              53352  1 
Manually load the ACFS kernel drivers (as root) and check again
# $GRID_HOME/bin/acfsload start -s
[root@grac2 Desktop]#  lsmod | grep ora
oracleacfs           1844281  0 
oracleadvm            231722  0 
oracleoks             329652  2 oracleacfs,oracleadvm
oracleasm              53352  1 
# mount -t acfs /dev/asm/acfs_vol1-140 /u01/app/oracle/acfsmount/acfs_vol1
mount.acfs: CLSU-00104: additional error information: open64 (/dev/asm/acfs_vol1-140)
mount.acfs: ACFS-02017: Failed to open volume /dev/asm/acfs_vol1-140. Verify the volume exists.

Check the Linux device, the volume info and the diskgroup on grac2:
# ls /dev/asm/acfs_vol1-140
ls: cannot access /dev/asm/acfs_vol1-140: No such file or directory
$ asmcmd volinfo -a
no volumes found
ASMCMD> lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  NORMAL  N         512   4096  1048576     15342    10125             5114            2505              0             N  DATA/
MOUNTED  NORMAL  N         512   4096  1048576      6141     5217             2047            1585              0             Y  OCR/
Diskgroup ACFS is not mounted - check the clusterware alert.log
ORA-15041: diskgroup "ACFS" space exhausted
. For details refer to "(:CLSN00107:)" in "/u01/app/11203/grid/log/grac2/agent/crsd/oraagent_grid/oraagent_grid.log".
CRS-2674: Start of 'ora.ACFS.dg' on 'grac2' failed
/u01/app/11203/grid/log/grac2/agent/crsd/oraagent_grid/oraagent_grid.log
2013-08-04 20:56:01.107: [ora.ACFS.dg][536860416] {1:24621:491} [start] ORA-15032: not all alterations performed
ORA-15202: cannot create additional ASM internal change segment
ORA-15041: diskgroup "ACFS" space exhausted

Check ASM logfile  
ERROR: diskgroup ACFS was not mounted
ORA-15032: not all alterations performed
ORA-15202: cannot create additional ASM internal change segment
ORA-15041: diskgroup "ACFS" space exhausted
ERROR: ALTER DISKGROUP ACFS MOUNT  /* asm agent *//* {1:24621:491} */

Use amdu to check whether the disks already belong to a currently mounted diskgroup
$ amdu -diskstring '/dev/oracleasm/disks/ACFS*' -dump 'ACFS'
amdu_2013_08_04_21_10_37/
AMDU-00204: Disk N0001 is in currently mounted diskgroup ACFS
AMDU-00201: Disk N0001: '/dev/oracleasm/disks/ACFS_DATA'
AMDU-00204: Disk N0002 is in currently mounted diskgroup ACFS
AMDU-00201: Disk N0002: '/dev/oracleasm/disks/ACFS_DATA2'
--> looks OK - but the diskgroup ACFS is still not visible with asmcmd lsdg
Fix : Reboot that node 

After the reboot the diskgroup is available, but the mount still fails with ACFS-00502
$ asmcmd lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  NORMAL  N         512   4096  1048576      2038       36                0              18              0             N  ACFS/
# mount -t acfs /dev/asm/acfs_vol1-140 /u01/app/oracle/acfsmount/acfs_vol1
mount.acfs: ACFS-00502: Failed to communicate with the ACFS driver.  Verify the ACFS driver has been loaded.
Manually mount the ACFS file system:
Recreate the volume with a smaller size (it seems we need about 100 MByte of free diskgroup space for each node)
# umount /dev/asm/acfs_vol1-140 
$ asmcmd voldelete -G ACFS ACFS_VOL1
$ asmcmd volcreate -G ACFS -s 800M ACFS_VOL1
$ asmcmd volinfo -a
Diskgroup Name: ACFS
     Volume Name: ACFS_VOL1
     Volume Device: /dev/asm/acfs_vol1-140
     State: ENABLED
     Size (MB): 800
     Resize Unit (MB): 32
     Redundancy: MIRROR
     Stripe Columns: 4
     Stripe Width (K): 128
     Usage: 
     Mountpath: 
#  mkfs -t acfs /dev/asm/acfs_vol1-140
On grac1 mount the ACFS filesystem:
# mount -t acfs /dev/asm/acfs_vol1-140 /u01/app/oracle/acfsmount/acfs_vol1
On grac1 run asmca -> the ACFS filesystem is mounted on grac1 only -> press the right mouse button -> select Mount on all Nodes
Now the volume should be mounted on both nodes grac1 and grac2
On grac2 check the ASM logfile and verify the successful mount:
SUCCESS: diskgroup ACFS was mounted
SUCCESS: ALTER DISKGROUP ACFS MOUNT  /* asm agent *//* {1:12727:327} */
Mon Aug 05 08:53:28 2013
NOTE: diskgroup resource ora.ACFS.dg is updated
$ asmcmd lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  NORMAL  N         512   4096  1048576      2038      244                0             122              0             N  ACFS/
# lsmod | grep ora
oracleacfs           1844281  0 
oracleadvm            231722  1 
oracleoks             329652  2 oracleacfs,oracleadvm
oracleasm              53352  1
Enable the volume on grac2 and verify that the ACFS filesystem can be mounted
$ asmcmd volinfo -a
Diskgroup Name: ACFS
     Volume Name: ACFS_VOL1
     Volume Device: /dev/asm/acfs_vol1-140
     State: DISABLED
     Size (MB): 800
     Resize Unit (MB): 32
     Redundancy: MIRROR
     Stripe Columns: 4
     Stripe Width (K): 128
     Usage: ACFS
     Mountpath: /u01/app/oracle/acfsmount/acfs_vol1
$ ls -l /dev/asm/acfs_vol1-140
  ls: cannot access /dev/asm/acfs_vol1-140: No such file or directory
$ asmcmd volenable -G  ACFS ACFS_VOL1
$ ls -l /dev/asm/acfs_vol1-140
  brwxrwx--- 1 root asmadmin 251, 71681 Aug  5 09:04 /dev/asm/acfs_vol1-140
# mount -t acfs /dev/asm/acfs_vol1-140 /u01/app/oracle/acfsmount/acfs_vol1
# df /dev/asm/acfs_vol1-140
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/asm/acfs_vol1-140
                        819200     73964    745236  10% /u01/app/oracle/acfsmount/acfs_vol1

Register the volume and mount point in the ACFS mount registry
# acfsutil registry -a /dev/asm/acfs_vol1-140 /u01/app/oracle/acfsmount/acfs_vol1
acfsutil registry: mount point /u01/app/oracle/acfsmount/acfs_vol1 successfully added to Oracle Registry
#  acfsutil registry -l
Device : /dev/asm/acfs_vol1-140 : Mount Point : /u01/app/oracle/acfsmount/acfs_vol1 : Options : none : Nodes : all : Disk Group : ACFS : Volume : ACFS_VOL1
Reboot grac2 and check whether ACFS gets mounted automatically

Automatically load the ACFS drivers after a reboot
--> Create /etc/init.d/acfsload:
#!/bin/sh
# chkconfig: 2345 30 21
# description: Load Oracle ACFS drivers at system boot
/u01/app/11203/grid/bin/acfsload start -s

# chmod u+x /etc/init.d/acfsload
# chkconfig --add acfsload
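The runlevel registration can be verified before rebooting (a quick check):
# chkconfig --list acfsload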
--> reboot and check ACFS driver state 
$ acfsdriverstate version
ACFS-9325:     Driver OS kernel version = 2.6.39-300.21.1.el6uek.x86_64(x86_64).
ACFS-9326:     Driver Oracle version = RELEASE.
$ acfsdriverstate loaded
ACFS-9203: true

To have the ACFS file system mounted automatically on reboot, run:
# /u01/app/11203/grid/bin/acfsroot install
ACFS-9300: ADVM/ACFS distribution files found.
ACFS-9118: oracleacfs.ko driver in use - cannot unload.
ACFS-9312: Existing ADVM/ACFS installation detected.
ACFS-9118: oracleacfs.ko driver in use - cannot unload.
ACFS-9314: Removing previous ADVM/ACFS installation.
ACFS-9315: Previous ADVM/ACFS components successfully removed.
ACFS-9307: Installing requested ADVM/ACFS software.
ACFS-9308: Loading installed ADVM/ACFS drivers.
ACFS-9321: Creating udev for ADVM/ACFS.
ACFS-9323: Creating module dependencies - this may take some time.
ACFS-9327: Verifying ADVM/ACFS devices.
ACFS-9156: Detecting control device '/dev/asm/.asm_ctl_spec'.
ACFS-9156: Detecting control device '/dev/ofsctl'.
ACFS-9309: ADVM/ACFS installation correctness verified.
# /u01/app/11203/grid/bin/acfsroot enable
ACFS-9376: Adding ADVM/ACFS drivers resource succeeded.
CRS-2672: Attempting to start 'ora.drivers.acfs' on 'grac1'
CRS-2676: Start of 'ora.drivers.acfs' on 'grac1' succeeded
ACFS-9380: Starting ADVM/ACFS drivers resource succeeded.
ACFS-9368: Adding ACFS registry resource succeeded.
CRS-2672: Attempting to start 'ora.registry.acfs' on 'grac2'
CRS-2672: Attempting to start 'ora.registry.acfs' on 'grac1'
CRS-2676: Start of 'ora.registry.acfs' on 'grac1' succeeded
CRS-2676: Start of 'ora.registry.acfs' on 'grac2' succeeded
ACFS-9372: Starting ACFS registry resource succeeded.

Verify that these resources are ONLINE now:
$ my_crs_stat | grep -i acfs
ora.ACFS.dg                    ONLINE     ONLINE          grac1         
ora.ACFS.dg                    ONLINE     ONLINE          grac2         
ora.registry.acfs              ONLINE     ONLINE          grac1         
ora.registry.acfs              ONLINE     ONLINE          grac2     
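my_crs_stat is a local wrapper script; the same check with plain crsctl looks like this (path taken from this installation):
$ /u01/app/11203/grid/bin/crsctl stat res ora.registry.acfs
$ /u01/app/11203/grid/bin/crsctl stat res ora.ACFS.dg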

Reference

  • "ora.drivers.acfs" Resource Was Not Configured Therefore RAC ACFS Filesystem Is Not Mounting During The Reboot (Doc ID 1486208.1)
  • Bug 14503558: ACFS FILESYSTEMS ARE NOT BEING MOUNTED AFTER REBOOT THE RAC NODES ON LINUX
  • Bug 12983005: Linux: ADVM/ACFS is not supported on OS version '2.6.39-100.7.1.el6uek.x86_64'

 

Troubleshooting tips

  • Check ACFS driver : # lsmod | grep ora
  • Check readiness of your ASM disk:  # dd if=/dev/oracleasm/disks/ACFS_DATA of=/dev/null bs=1M
  • Check your ASM disks:  # /etc/init.d/oracleasm listdisks
  • Check that your diskgroup is available on all nodes: $ asmcmd lsdg
  • Check that your volumes are ready:  $ asmcmd volinfo -a
  • Check your registry settings: # acfsutil registry -l
  • Check whether ASM disks are mounted by a different diskgroup: $ amdu -diskstring '/dev/oracleasm/disks/ACFS*' -dump 'ACFS'
  • Check that resource ora.registry.acfs is ONLINE: $ my_crs_stat | grep -i acfs
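The checks above can be bundled into a small script and run as root on each node (a sketch; GRID_HOME, the grid OS user and the disk/diskgroup names are assumptions that must match your own setup):

#!/bin/bash
# acfs_check.sh - quick ACFS/ADVM health check, run as root on each RAC node
GRID_HOME=/u01/app/11203/grid      # adjust to your GRID home
GRID_USER=grid                     # OS user owning the GRID installation

echo "== ACFS/ADVM/ASM kernel drivers =="
lsmod | grep -E 'oracleacfs|oracleadvm|oracleoks|oracleasm'

echo "== ASMLib disks =="
/etc/init.d/oracleasm listdisks

echo "== ASM disk readability (first 10 MB only) =="
dd if=/dev/oracleasm/disks/ACFS_DATA of=/dev/null bs=1M count=10

echo "== Diskgroups and volumes (assumes the grid user's profile sets the ASM environment) =="
su - ${GRID_USER} -c "asmcmd lsdg; asmcmd volinfo -a"

echo "== ACFS mount registry =="
/sbin/acfsutil registry -l

echo "== Clusterware ACFS registry resource =="
${GRID_HOME}/bin/crsctl stat res ora.registry.acfs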

 

4 thoughts on “ACFS install on top of GRID 11.2.0.3.4”

  1. Hey. Nice post.

    I’m trying to do the same with:
    – RAC 12.1.0.1
    – ASM over udev (I don’t use asmlib)
    – RH 6.5

    The procedure I’m using is (actually I tried to add a separate LUN for the acfs instead of using the DATA DG, with the same results):
    – Open asmca
    – Create a volume (if no drivers are found, I run “acfsroot install” and “acfsload start”)
    – Once created, I create the file system from asmca

    Everything works fine BUT, if I dare to reboot one of the nodes, “ora.DATA.ACFS_VOL.advm” remains down on the rebooted node because the created volume (/dev/asm/acfs_vol-469 in this case) disappears!

    “Volume Device /dev/asm/acfs_vol-469 unexpectedly went offline. Please check underlying storage,STABLE”

    Also, if I try to create another volume, I hit the “missing volume manager driver” error again.

    Anything I’m missing? This should be much more stable and simple to use.

    Thanks,
    Alex.

    1. Hi Alex,

      I did an ACFS setup on OEL 6.4, RAC 12.1.0.1 and was pretty simple – see : http://www.hhutzler.de/blog/acfs-install-on-top-of-grid-12-1/
      You may review :
      http://oracleinaction.com/ora-15477-communicate-volume-driver/
      ACFS Support On OS Platforms (Certification Matrix). (Doc ID 1369107.1)
      The following commands may help to investigate your ACFS current status
      # lsmod | grep ora
      oracleacfs 3053229 2
      oracleadvm 320180 8
      oracleoks 417171 2 oracleacfs,oracleadvm
      oracleasm 53865 1
      --> If no oracleacfs / oracleadvm drivers are loaded, try to run acfsload start -s
      # asmcmd lsdg -g
      # /sbin/acfsutil registry -l
      $ asmcmd volinfo -G ACFS_DG1 acfs_vol1

      Potential problems: the ACFS driver is not loaded after reboot; udev rules are not permanent after reboot

  2. Hey,

    I’ve found what the problem was. I’m using NIS for the oracle user, and since that starts AFTER the acfs driver, acfs has no way to resolve the GID of the group. :)
    What I did was to create a local user with the same GID/UID as the NIS one and now it’s working.
    Probably there is a way to delay acfs driver to run after NIS starts so there shouldn’t be any issues.

    Sorry, forgot this post.
    Thanks!
    Alex.
