12c feature: ACFS/HANFS

Overview: ACFS/HANFS

 

  • HANFS 12.1 only supports NFS v3 over IPv4, with no NFS locking.
  • HANFS requires NFS in order to run.
  • It is assumed that NFS (and its associated services) will be started by init scripts at node boot time.
  • NFS needs to be running on all nodes of the cluster.
  • In earlier versions of Red Hat Enterprise Linux, the portmap service mapped RPC program numbers to IP address/port combinations.
  • As per the RHEL 6 docs, the portmap service has been replaced by rpcbind in Red Hat Enterprise Linux 6 to enable IPv6 support (see the verification below).

Set up NFS (on all RAC nodes)

Start and configure NFS
[root@gract2 ~]# service nfs status
rpc.svcgssd is stopped
rpc.mountd is stopped
nfsd is stopped
rpc.rquotad is stopped
[root@gract2 ~]# service nfs start
Starting NFS services:                                     [  OK  ]
Starting NFS quotas:                                       [  OK  ]
Starting NFS mountd:                                       [  OK  ]
Stopping RPC idmapd:                                       [  OK  ]
Starting RPC idmapd:                                       [  OK  ]
Starting NFS daemon:                                       [  OK  ]

[root@gract2 ~]# service rpcbind start
[root@gract2 ~]# chkconfig nfs on
[root@gract2 ~]# chkconfig rpcbind on
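
To verify that rpcbind has registered the NFS RPC programs (per the portmap/rpcbind note in the overview), a quick check like the following should list nfs and mountd:
[root@gract2 ~]# rpcinfo -p | egrep 'portmapper|nfs|mountd'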

Set up ACFS and create a static HANFS VIP host in your DNS

Configure a static DNS address and verify it:
[root@gract1 var]# nslookup nfsvip
Server:        192.168.1.50
Address:    192.168.1.50#53

Name:    nfsvip.example.com
Address: 192.168.1.199
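
On the name server itself this simply means adding an A record for the VIP. A minimal sketch, assuming a BIND name server serving example.com from a zone file (the file path below is hypothetical):
# /var/named/example.com.zone (hypothetical path) - add the VIP record:
nfsvip    IN    A    192.168.1.199
# remember to bump the zone serial, then reload:
[root@ns1 ~]# rndc reload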

Create an ACFS filesystem (please read the following note to continue)

Verify ACFS resources
[root@gract1 Desktop]#  crs | egrep 'acfs|advm|proxy|ACFS|NAME|---'
Resource NAME                  TARGET     STATE           SERVER       STATE_DETAILS                       
-------------------------      ---------- ----------      ------------ ------------------                  
ora.ACFS_DG1.ACFS_VOL1.advm    ONLINE     ONLINE          gract1       Volume device /dev/asm/acfs_vol1-443 is online,STABLE
ora.ACFS_DG1.ACFS_VOL1.advm    ONLINE     ONLINE          gract2       Volume device /dev/asm/acfs_vol1-443 is online,STABLE
ora.ACFS_DG1.ACFS_VOL1.advm    ONLINE     ONLINE          gract3       Volume device /dev/asm/acfs_vol1-443 is online,STABLE
ora.ACFS_DG1.dg                ONLINE     ONLINE          gract1       STABLE   
ora.ACFS_DG1.dg                ONLINE     ONLINE          gract2       STABLE   
ora.ACFS_DG1.dg                ONLINE     ONLINE          gract3       STABLE   
ora.acfs_dg1.acfs_vol1.acfs    ONLINE     ONLINE          gract1       mounted on /u01/acfs/acfs-vol1,STABLE
ora.acfs_dg1.acfs_vol1.acfs    ONLINE     ONLINE          gract2       mounted on /u01/acfs/acfs-vol1,STABLE
ora.acfs_dg1.acfs_vol1.acfs    ONLINE     ONLINE          gract3       mounted on /u01/acfs/acfs-vol1,STABLE
ora.proxy_advm                 ONLINE     ONLINE          gract1       STABLE   
ora.proxy_advm                 ONLINE     ONLINE          gract2       STABLE   
ora.proxy_advm                 ONLINE     ONLINE          gract3       STABLE 

To continue the HANFS setup the following resources need to be ONLINE on all RAC nodes:
  - Diskgroup our ACFS volume sits on top of          ora.ACFS_DG1.dg
  - Proxy ADVM instance to handle ACFS operations     ora.proxy_advm
  - ACFS volume manager                               ora.ACFS_DG1.ACFS_VOL1.advm
  - ACFS filesystem                                   ora.acfs_dg1.acfs_vol1.acfs
A quick way to check all four resources is shown below. In case you get ACFS mount errors such as ACFS-02017, please read the following note: Debugging ACFS mount errors.
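
A small shell loop (a sketch, using the resource names from the listing above) can confirm all four at once:
for res in ora.ACFS_DG1.dg ora.proxy_advm \
           ora.ACFS_DG1.ACFS_VOL1.advm ora.acfs_dg1.acfs_vol1.acfs
do
   # crsctl prints NAME=/TARGET=/STATE= key-value pairs per resource
   crsctl status resource "$res" | egrep '^(NAME|TARGET|STATE)='
done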

Creating the HAVIP and ExportFS cluster resources

Set up a static HANFS VIP host in your DNS and verify this address:
[root@gract1 var]# nslookup nfsvip
Server:        192.168.1.50
Address:    192.168.1.50#53
Name:    nfsvip.example.com
Address: 192.168.1.199

Note: the HAVIP ID and ExportFS name should be lower case to avoid CRS-2674 and PRCR-1079 errors when starting the HAVIP.

Create the HAVIP resource:
[root@gract1 grid]# srvctl add havip -id havip_id -address nfsvip.example.com -netnum 1 -description "Helmut's XXXX Project"
[root@gract1 grid]# crs | egrep 'hanfs|havip|export|NAME|---'
Resource NAME               INST   TARGET       STATE        SERVER          STATE_DETAILS
--------------------------- ----   ------------ ------------ --------------- -----------------------------------------
ora.havip_id.havip             1   OFFLINE      OFFLINE      -               STABLE  

Trying to bring this resource ONLINE 
[root@gract1 Desktop]# srvctl start havip -id havip_id
PRCE-1026 : Cannot start HAVIP resource without an Export FS resource.

The underlying CRS message (shown here from a setup using a differently named HAVIP) explains the dependency:
Unable to start 'ora.hr1.havip' because it has a 'hard' dependency on resource type 'ora.HR1.export.type' and no resource of 
that type can satisfy the dependency
[root@gract1 Desktop]#  crsctl status resource  ora.hr.havip  -f | grep START_DEPENDENCIES
START_DEPENDENCIES=hard(ora.net1.network,uniform:type:ora.HR.export.type) weak(global:ora.gns) dispersion:active(type:ora.havip.type) 
                   pullup(ora.net1.network) pullup:always(type:ora.HR.export.type)

Why the failure to start? 
Recall that we mentioned earlier that an HAVIP requires one or more ExportFS resources to be configured.
Without an ExportFS, the HAVIP will not start. If a client had mounted the ExportFS and the HAVIP started without the ExportFS
available, the client would receive an ESTALE error. This resource dependency prevents the resumption of NFS services on the 
client until the server-side file system is available for access.

Create the ExportFS cluster resource:
[root@gract1 grid]# srvctl add exportfs -id havip_id -path  /u01/acfs/acfs-vol1  -name hanfs -options "rw,no_root_squash" 
[root@gract1 grid]#  crs | egrep 'acfs|hanfs|havip|export|NAME|---'
Resource NAME                  TARGET     STATE           SERVER       STATE_DETAILS                       
-------------------------      ---------- ----------      ------------ ------------------                  
ora.ACFS_DG1.ACFS_VOL1.advm    ONLINE     ONLINE          gract1       Volume device /dev/asm/acfs_vol1-443 is online,STABLE
ora.ACFS_DG1.ACFS_VOL1.advm    ONLINE     ONLINE          gract2       Volume device /dev/asm/acfs_vol1-443 is online,STABLE
ora.ACFS_DG1.ACFS_VOL1.advm    ONLINE     ONLINE          gract3       Volume device /dev/asm/acfs_vol1-443 is online,STABLE
ora.acfs_dg1.acfs_vol1.acfs    ONLINE     ONLINE          gract1       mounted on /u01/acfs/acfs-vol1,STABLE
ora.acfs_dg1.acfs_vol1.acfs    ONLINE     ONLINE          gract2       mounted on /u01/acfs/acfs-vol1,STABLE
ora.acfs_dg1.acfs_vol1.acfs    ONLINE     ONLINE          gract3       mounted on /u01/acfs/acfs-vol1,STABLE
Resource NAME               INST   TARGET       STATE        SERVER          STATE_DETAILS
--------------------------- ----   ------------ ------------ --------------- -----------------------------------------
ora.hanfs.export               1   OFFLINE      OFFLINE      -               STABLE  
ora.havip_id.havip             1   OFFLINE      OFFLINE      -               STABLE  

Start HAVIP
[root@gract1 Desktop]#  srvctl start havip -id havip_id

Check status:
[root@gract1 grid]# srvctl status exportfs -id havip_id
export file system hanfs is enabled
export file system hanfs is exported on node gract2
--> HANFS fs is exported on gract2

[root@gract1 grid]# showmount -e gract2
Export list for gract2:
/u01/acfs/acfs-vol1 192.168.1.0/24

[root@gract1 grid]# ssh gract2 exportfs -v
/u01/acfs/acfs-vol1
             192.168.1.0/24(rw,wdelay,no_root_squash,no_subtree_check,fsid=1702682800)

[root@gract1 grid]#  srvctl config exportfs -name  hanfs
export file system hanfs is configured
Exported path: /u01/acfs/acfs-vol1
Export Options: rw,no_root_squash
Configured Clients: 192.168.1.0/24
Export file system is enabled.
Export file system is individually enabled on nodes: 
Export file system is individually disabled on nodes:
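
If you later need to change the exported path, options, or client list, 12c also provides srvctl modify exportfs. A sketch (verify the exact syntax with srvctl modify exportfs -h on your version):
# restrict the export to a single client (hypothetical client address)
srvctl modify exportfs -name hanfs -clients 192.168.1.50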

Testing application failover with HAVIP

Log in to our name server (in this case the NS is not a RAC node)

Figure out our HAVIP address (run the command below on any of your HUB nodes):
[root@gract3 ~]# srvctl config havip 
HAVIP exists: /havip_id/192.168.1.199, network number 1
Description: Helmut's XXXX Project
Home Node: 
HAVIP is enabled.
HAVIP is individually enabled on nodes: 
HAVIP is individually disabled on nodes: 

Log in to our name server or any other client and create the mount point:
[root@ns1 ~]# mkdir /hr
[root@ns1 ~]# chmod 755 /hr
[root@ns1 ~]# showmount -e gract2
Export list for gract2:
/u01/acfs/acfs-vol1 192.168.1.0/24

NFS mount that filesystem
[root@ns1 ~]# mount -t nfs nfsvip.example.com:/u01/acfs/acfs-vol1 /hr
[root@ns1 ~]# mount -t nfs
nfsvip.example.com:/u01/acfs/acfs-vol1 on /hr type nfs (rw,vers=4,addr=192.168.1.199,clientaddr=192.168.1.50)
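
Note that the client negotiated vers=4 above. Since HANFS 12.1 only supports NFS v3 (see the overview), it may be safer to force v3 explicitly on the client:
[root@ns1 ~]# mount -t nfs -o vers=3 nfsvip.example.com:/u01/acfs/acfs-vol1 /hr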

Open a file in an editor and keep it open
[root@ns1 ~]# vi /hr/test_file
[root@gract2 ~]# cat /u01/acfs/acfs-vol1/test_file
This is my first line 

Check which node currently serves the HANFS export:
Resource NAME               INST   TARGET       STATE        SERVER          STATE_DETAILS
--------------------------- ----   ------------ ------------ --------------- -----------------------------------------
ora.hanfs.export               1   ONLINE       ONLINE       gract2          STABLE  
ora.havip_id.havip             1   ONLINE       ONLINE       gract2          STABLE
--> HANFS fs is currently served by node gract2
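
To watch the failover happen in real time during the next test, a simple polling loop (a sketch) on a surviving node will do:
while true
do
   crsctl status resource ora.hanfs.export ora.havip_id.havip | egrep '^(NAME|STATE)='
   sleep 5
done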

Failover Test I: Stop/crash node gract2 by a reboot
Resource NAME               INST   TARGET       STATE        SERVER          STATE_DETAILS
--------------------------- ----   ------------ ------------ --------------- -----------------------------------------
ora.hanfs.export               1   ONLINE       ONLINE       gract2          STOPPING  
ora.havip_id.havip             1   ONLINE       OFFLINE      -               STABLE 

--> Both clusterware resources (exportfs and havip) stop and fail over to gract3
Resource NAME               INST   TARGET       STATE        SERVER          STATE_DETAILS
--------------------------- ----   ------------ ------------ --------------- -----------------------------------------
ora.hanfs.export               1   ONLINE       ONLINE       gract3          STABLE  
ora.havip_id.havip             1   ONLINE       ONLINE       gract3          STABLE 

Add a new line to our file and display that line from node gract3
[root@gract3 ~]# cat  /u01/acfs/acfs-vol1/test_file
This is my first line 
This is my second line after reboot of gract2
--> File successfully updated - no interruption despite the failover

Failover Test II: Relocate HAVIP
[root@gract3 ~]#  srvctl relocate havip -id havip_id -n gract1 -f
HAVIP was relocated successfully
[root@gract3 ~]#  crs | egrep 'acfshanfs|havip|export|NAME|---'
Resource NAME                  TARGET     STATE           SERVER       STATE_DETAILS                       
-------------------------      ---------- ----------      ------------ ------------------                  
Resource NAME               INST   TARGET       STATE        SERVER          STATE_DETAILS
--------------------------- ----   ------------ ------------ --------------- -----------------------------------------
ora.hanfs.export               1   ONLINE       ONLINE       gract1          STABLE  
ora.havip_id.havip             1   ONLINE       ONLINE       gract1          STABLE 

Update the file again and test application continuity from RAC node gract1
[root@gract1 Desktop]# cat  /u01/acfs/acfs-vol1/test_file
This is my first line 
This is my second line after reboot of gract2 
This is my last line after relocation to gract1

Deleting the ExportFS and HAVIP resources

# srvctl stop   exportfs   -name hanfs -f
# srvctl remove exportfs   -name hanfs
# srvctl remove havip -id havip_id
   ---> This command dumps core but the HAVIP resource was deleted successfully
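
To double-check the cleanup (a quick sketch), grep the clusterware resource list - it should return nothing once both resources are gone:
crsctl status resource | egrep 'havip|export'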
