DNFS – Direct NFS setup for 11.2.0.4 3-node RAC cluster on OEL 6.4

DNFS key features

  • Every Oracle process ( DBWR, RMAN, PQ slaves, LGWR ) has its own TCP/IP connection
  • NFS version 3 client embedded into kernel
  • Failover capabilities
  • Reduces CPU usage for Oracle instances ( disk I/O occurs at NFS server location )
  • Works even for Windows
  • Multiple paths can be configured ( Oracle recommends using a dedicated subnet – see sample below )
  • JUMBO frames are supported with DNFS
  • OCR and Voting disks are not supported with DNFS
  • RAC and Clusterware use the O_DIRECT flag for write system calls to bypass any cache and talk directly to the NFS servers
  • Datafiles which are concurrently read/written by multiple nodes need to be on a mount point with actimeo set to 0.
  • Don’t set actimeo to 0 for non-RAC setups
  • DNFS provides faster performance than the native OS NFS client as DNFS does not need to copy the write buffer to kernel space
  • DNFS can’t store voting disks and the OCR as CSS is multi-threaded and DNFS is not thread safe
  • ASM Dynamic Volume Manager ( Oracle ADVM ) does not currently support NFS-based ASM files.
  • dNFS currently does not support NFSv4. dNFS on Oracle 11g only works with NFSv3 volumes – 12c supports NFSv4
  • dNFS does not support the automounter ( autofs ). The volumes have to be mounted explicitly as NFS volumes and should be visible via /etc/mtab
  • Use oracle:oinstall ownership for files and directories

Enable Direct NFS at database level

Verify whether current kernel already supports direct NFS
$  ldd /u01/app/oracle/product/11204/racdb/bin/oracle | grep odm
    libodm11.so => /u01/app/oracle/product/11204/racdb/lib/libodm11.so (0x00007f475fa35000)
$ strings /u01/app/oracle/product/11204/racdb/lib/libodm11.so | grep -i odm
...
odm_fini
odm_init
odm_discover
ODM ERR: Calling stubbed version
Stubbed ODM Library, Version: 1.0
--> 'Stubbed ODM Library, Version: 1.0' means DNFS is disabled
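This check is easy to script: classify the strings output of libodm11.so by looking for the stubbed-library marker shown above. A minimal sketch (the helper name check_odm and the demo string are illustrative; on a real node you would feed it the actual strings output):

```shell
# Classify DNFS state from the symbols found in libodm11.so.
# The "Stubbed ODM Library" marker only appears when DNFS is off.
check_odm() {
    case "$1" in
        *"Stubbed ODM Library"*) echo "DNFS disabled" ;;
        *)                       echo "DNFS enabled"  ;;
    esac
}

# Typical call on a RAC node:
#   check_odm "$(strings $ORACLE_HOME/lib/libodm11.so)"
check_odm "Stubbed ODM Library, Version: 1.0"
```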

Stop RAC database and enable DNFS
$ srvctl stop database -d grac4
$ srvctl status database -d grac4
Instance grac41 is not running on node grac41
Instance grac42 is not running on node grac42
Instance grac43 is not running on node grac43

$ cd $ORACLE_HOME/rdbms/lib
$ make -f ins_rdbms.mk dnfs_on
$ strings /u01/app/oracle/product/11204/racdb/lib/libodm11.so | grep -i odm
..
kgodm_discover
nfs odm heap
kgodm event %u set to level %u
--> Repeat above steps on grac42 and grac43 and startup RAC database
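The relink on the remaining nodes can be scripted. A hedged sketch, assuming passwordless ssh as oracle and an identical ORACLE_HOME on every node – shown as a dry run that only prints the commands:

```shell
# Dry run: print the dnfs_on relink command for each remaining node.
# Remove the leading "echo" to actually execute via ssh.
ORACLE_HOME=/u01/app/oracle/product/11204/racdb
for node in grac42 grac43; do
    echo ssh "$node" "make -f $ORACLE_HOME/rdbms/lib/ins_rdbms.mk dnfs_on"
done
```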

$ srvctl start  database -d grac4
Instance grac41 is running on node grac41
Instance grac42 is running on node grac42
Instance grac43 is running on node grac43

Verify uid,gid for NFS Server and RAC instances
[oracle@ns1 ~]$ id
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),501(vboxsf),506(asmdba),54322(dba)

[oracle@grac41 lib]$ id
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),500(vboxsf),506(asmdba),54322(dba context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
[oracle@grac42 lib]$ id 
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),500(vboxsf),506(asmdba),54322(dba context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
[oracle@grac43 lib]$ id
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),500(vboxsf),506(asmdba),54322(dba context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023

Setup NFS Server and check NFS version

# mkdir /shared_data_nfs
# chown oracle:oinstall  /shared_data_nfs
# ls -ld  /shared_data_nfs
drwxr-xr-x 2 oracle oinstall 4096 Dec 24 16:58 /shared_data_nfs
Add the following lines to the "/etc/exports" file and verify the export parameters
/shared_data_nfs                 *(rw,sync,no_wdelay,insecure,root_squash,anonuid=54321,anongid=54321)
  Options used:
    rw              read/write access
    insecure        this option is often essential as DNFS channels are not opened without it
                    ( potential error message without this option: Direct NFS: Please check the permissions for server .... )
    sync            write all data to disk before returning from the client write call
    no_wdelay       don't delay writes waiting for additional write requests
    root_squash     map files created by root to the nobody user
    anonuid=54321   Oracle owner UID
    anongid=54321   Oracle GID ( oinstall )
    --> Not sure whether anonuid and anongid work with OEL 6.x
Run the following commands to enable and export the NFS shares
# chkconfig nfs on
# service nfs restart
Shutting down NFS daemon:                                  [FAILED]
Shutting down NFS mountd:                                  [FAILED]
Shutting down NFS quotas:                                  [FAILED]
Starting NFS services:                                     [  OK  ]
Starting NFS quotas:               
Starting NFS mountd:                                       [  OK  ]
Stopping RPC idmapd:                                       [  OK  ]
Starting RPC idmapd:                                       [  OK  ]
Starting NFS daemon:                                       [  OK  ]

# exportfs -v
/shared_data_nfs
        <world>(rw,insecure,root_squash,no_subtree_check,anonuid=54321,anongid=54321)
# rpcinfo | egrep 'vers|nfs'
   program version netid     address                service    owner
    100003    2    tcp       0.0.0.0.8.1            nfs        superuser
    100003    3    tcp       0.0.0.0.8.1            nfs        superuser
    100003    4    tcp       0.0.0.0.8.1            nfs        superuser
    100227    2    tcp       0.0.0.0.8.1            nfs_acl    superuser
    100227    3    tcp       0.0.0.0.8.1            nfs_acl    superuser
    100003    2    udp       0.0.0.0.8.1            nfs        superuser
    100003    3    udp       0.0.0.0.8.1            nfs        superuser
    100003    4    udp       0.0.0.0.8.1            nfs        superuser
    100227    2    udp       0.0.0.0.8.1            nfs_acl    superuser
    100227    3    udp       0.0.0.0.8.1            nfs_acl    superuser
--> NFS versions 2, 3 and 4 are available for TCP and UDP

 

Setup NFS client

Verify that the oracle instances can mount the NFS file system
# showmount -e ns1
Export list for ns1:
/shared_data_nfs *
# mkdir -p /u01/oradata
As we are using NFSv4, edit /etc/idmapd.conf to specify your domain name and restart the rpcidmapd service:
Domain = example.com

# service rpcidmapd restart 
Stopping RPC idmapd:                                       [  OK  ]
Starting RPC idmapd:                                       [  OK  ]

Add the following lines to the "/etc/fstab" file.
    ns1:/shared_data_nfs   /u01/oradata  nfs  rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0  0 0

Mount the NFS shares 
# mount /u01/oradata

Make sure the permissions on the shared directories are correct.
# chown -R oracle:oinstall /u01/oradata

Verify correct file creation as owner oracle
$ touch  /u01/oradata/grac41_testing 
$ ls -l  /u01/oradata/grac41_testing 
-rw-r--r--. 1 oracle oinstall 0 Dec 25 09:42 /u01/oradata/grac41_testing
--> Repeat the NFS client setup on all remaining nodes ( in my case: grac42, grac43 )
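Since dNFS needs the volume visible in /etc/mtab as a plain NFS mount, and RAC datafiles need actimeo=0, the mount entry is worth a sanity check. A minimal sketch using a sample mtab-style line (on a live node you would feed it the output of `grep /u01/oradata /etc/mtab` instead):

```shell
# Check that a mount-table line describes an NFSv3 mount with actimeo=0,
# as required for datafiles accessed concurrently by multiple RAC nodes.
check_mount_line() {
    case "$1" in
        *" nfs "*vers=3*actimeo=0*) echo "OK" ;;
        *)                          echo "WARN: check fs type / vers / actimeo" ;;
    esac
}

# Sample line matching the /etc/fstab entry above:
check_mount_line 'ns1:/shared_data_nfs /u01/oradata nfs rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0 0 0'
```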

Direct NFS Client searches for mount entries in the following order:
    $ORACLE_HOME/dbs/oranfstab
    /etc/oranfstab
    /etc/mtab
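The search order above can be mimicked in shell: return the first existing file from an ordered candidate list. A small illustrative helper (the function name first_existing is my own; candidates are passed as arguments so the sketch stays generic):

```shell
# Print the first existing file from an ordered list of candidates,
# mirroring how Direct NFS Client searches for its mount configuration.
first_existing() {
    for f in "$@"; do
        if [ -f "$f" ]; then
            echo "$f"
            return 0
        fi
    done
    return 1
}

# Typical call:
#   first_existing "$ORACLE_HOME/dbs/oranfstab" /etc/oranfstab /etc/mtab
```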

Oranfstab File
Create an oranfstab file with the following attributes for each NFS server to be accessed using Direct NFS Client:
    Server: The NFS server name.
    Local: Up to four paths on the database host, specified by IP address or by name, as displayed 
           using the ifconfig command run on the database host.
    Path: Up to four network paths to the NFS server, specified either by IP address, or by name, as 
          displayed using the ifconfig command on the NFS server.
    Export: The exported path from the NFS server.
    Mount: The corresponding local mount point for the exported volume.
    Mnt_timeout: Specifies (in seconds) the time Direct NFS Client should wait for a successful mount 
                 before timing out. This parameter is optional.  The default timeout is 10 minutes (600).
    Dontroute: Specifies that outgoing messages should not be routed by the operating system, but instead 
               sent using the IP address to which they are bound. 
               Note that this POSIX option sometimes does not work on Linux systems with multiple paths 
               in the same subnet.

The following example uses both local and path. Since they are in different subnets, we do not have 
to specify dontroute.
$ORACLE_HOME/dbs/oranfstab file on grac41:
server: ns1
local: 192.168.1.101 path:  192.168.1.50
local: 192.168.3.101 path:  192.168.3.50
export: /shared_data_nfs mount: /u01/oradata

--> Here we have 2 NICs on our NFS server ( 192.168.1.50 and 192.168.3.50 ), and the local RAC instance 
    also has two NICs for NFS server communication ( 192.168.1.101 and 192.168.3.101 )

Verify settings with ifconfig
Instance grac41:
eth1      Link encap:Ethernet  HWaddr 08:00:27:1E:7D:B0  
          inet addr:192.168.1.101  Bcast:192.168.1.255  Mask:255.255.255.0
eth3      Link encap:Ethernet  HWaddr 08:00:27:95:59:EE  
          inet addr:192.168.3.101  Bcast:192.168.3.255  Mask:255.255.255.0

NFS-server :
eth1      Link encap:Ethernet  HWaddr 08:00:27:8D:8A:93  
          inet addr:192.168.1.50  Bcast:192.168.1.255  Mask:255.255.255.0
          RX packets:119581 errors:0 dropped:0 overruns:0 frame:0
          TX packets:93205 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:25000344 (23.8 MiB)  TX bytes:19165803 (18.2 MiB)

eth2      Link encap:Ethernet  HWaddr 08:00:27:74:3D:E1  
          inet addr:192.168.3.50  Bcast:192.168.3.255  Mask:255.255.255.0
          RX packets:31976 errors:9 dropped:47 overruns:0 frame:0
          TX packets:11345 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:15615019 (14.8 MiB)  TX bytes:8739912 (8.3 MiB)
--> Load balancing takes place – but as the eth1 interface is also used for name server traffic, the 
    traffic on eth1 should be higher 

Verify alert.log for DNFS support
After a restart, the RAC instance alert.log should report:
Oracle instance running with ODM: Oracle Direct NFS ODM Library Version 3.0 
Direct NFS: attempting to mount /shared_data_nfs on filer ns1 defined in oranfstab
Direct NFS: channel config is: 
Direct NFS: mount complete dir /shared_data_nfs on ns1 mntport 39819 nfsport 2049 
Direct NFS: channel id [0] path [192.168.1.50] to filer [ns1] via local [192.168.1.101] is UP
Direct NFS: channel id [1] path [192.168.3.50] to filer [ns1] via local [192.168.3.101] is UP
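Counting the UP channels in the alert.log is a handy post-restart check. A hedged sketch (the alert.log path is installation-specific, so here the log lines are fed in directly; count_up_channels is an illustrative helper):

```shell
# Count "Direct NFS: channel ... is UP" lines read from stdin.
# On a node: count_up_channels < <path to alert_grac41.log>
count_up_channels() {
    grep -c 'Direct NFS: channel id .* is UP'
}

printf '%s\n' \
  'Direct NFS: channel id [0] path [192.168.1.50] to filer [ns1] via local [192.168.1.101] is UP' \
  'Direct NFS: channel id [1] path [192.168.3.50] to filer [ns1] via local [192.168.3.101] is UP' \
  | count_up_channels
```

For the two-path configuration above, the expected count is 2.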

 

Create a tablespace and table, run a test case and monitor gv$ views

SQL> create tablespace dnfs_ts datafile '/u01/oradata/grac4_dnfs_ts.dbf' size 100M;
SQL> create table rac_perftest ( id number, inst_name varchar(8), host_name varchar(24), ins_date date)
     tablespace dnfs_ts;
Test case: $ java UCPDemo grac4 10 5000 1 -noseq -nodebug

Monitor DNFS via GV$ performance table ( 10 Threads running clusterwide inserts )

THIS DB REPORT WAS GENERATED AT:  28-DEC-2013 10:06:53
HOSTNAME ASSOCIATED WITH THIS DB INSTANCE:  grac41.example.com
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production

SQL> select * from gv$dnfs_servers;
INST_ID     ID SVRNAME    DIRNAME                MNTPORT    NFSPORT    WTMAX       RTMAX
------- ---------- ---------- ------------------------------ ---------- ---------- ---------- ----------
      1      1 ns1          /shared_data_nfs              39819       2049    65536       65536
      3      1 ns1          /shared_data_nfs              39819       2049    65536       65536
      2      1 ns1          /shared_data_nfs              39819       2049    65536       65536

SQL> select * from gv$dnfs_files;
INST_ID FILENAME                   FILESIZE  PNUM SVR_ID
------- ---------------------------------------- ---------- ----- ------
      1 /u01/oradata/grac4_dnfs_ts.dbf          104865792    18      1
      3 /u01/oradata/grac4_dnfs_ts.dbf          104865792    17      1
      2 /u01/oradata/grac4_dnfs_ts.dbf          104865792    17      1

SQL> select * from gv$dnfs_channels    order by  inst_id, PNUM;
INST_ID  PNUM SVRNAME     PATH         LOCAL         STATE          CH_ID SVR_ID     SENDS     RECVS     PINGS      RECO RESENDS        SENT  RESENT      RECV   SENDQ   PENDQ RESTARTQ
------- ----- ---------- --------------- --------------- ---------------- ----- ------ ------- ------- ------- ------- ------- --------- ------- --------- ------- ------- --------
      1    18 ns1     192.168.1.50     192.168.1.101     CONNECTED          0      1        30        61         0         0         0      300772       0      5068     0     0      0
      1    18 ns1     192.168.3.50     192.168.3.101     CONNECTED          1      1        32        61         0         0         0      300772       0      5068     0     0      0
      1    20 ns1     192.168.1.50     192.168.1.101     CONNECTED          0      1         0         0         0         0         0           0       0     0     0     0      0
      1    20 ns1     192.168.3.50     192.168.3.101     CONNECTED          1      1         0         0         0         0         0           0       0     0     0     0      0
      2    17 ns1     192.168.1.50     192.168.1.102     CONNECTED          0      1        10        26         0         0         0      493480       0      1596     0     0      0
      2    17 ns1     192.168.3.50     192.168.3.102     CONNECTED          1      1        10        22         0         0         0      255592       0      1316     0     0      0
      2    19 ns1     192.168.1.50     192.168.1.102     CONNECTED          0      1         0         0         0         0         0           0       0     0     0     0      0
      2    19 ns1     192.168.3.50     192.168.3.102     CONNECTED          1      1         0         0         0         0         0           0       0     0     0     0      0
      2    46 ns1     192.168.1.50     192.168.1.102     CONNECTED          0      1         0         0         0         0         0           0       0     0     0     0      0
      2    46 ns1     192.168.3.50     192.168.3.102     CONNECTED          1      1         0         0         0         0         0           0       0     0     0     0      0
      2    52 ns1     192.168.1.50     192.168.1.102     CONNECTED          0      1         0         0         0         0         0           0       0     0     0     0      0
      2    52 ns1     192.168.3.50     192.168.3.102     CONNECTED          1      1         0         0         0         0         0           0       0     0     0     0      0
      3    17 ns1     192.168.1.50     192.168.1.103     CONNECTED          0      1        17        34         0         0         0      133732       0      2268     0     0      0
      3    17 ns1     192.168.3.50     192.168.3.103     CONNECTED          1      1        16        32         0         0         0      133632       0      2240     0     0      0

SQL> select inst_id, PNUM, NFS_READ, NFS_WRITE, NFS_COMMIT, NFS_MOUNT from gv$dnfs_stats where NFS_READ>0 or NFS_WRITE>0 order by  inst_id, PNUM;
INST_ID  PNUM    NFS_READ  NFS_WRITE NFS_COMMIT    NFS_MOUNT
------- ----- ---------- ---------- ---------- ----------
      1    18           2    329         0        1
      1    20           3      2         0        0
      2    17           2    167         0        1
      2    19           3      2         0        0
      2    46         288      0         0        0
      2    52           1      0         0        0
      3    17           2     74         0        1

SQL> select c.inst_id,    program, pid,pname, local, path   from gv$process p, gv$dnfs_channels c where p.inst_id = c.inst_id and c.pnum = p.pid;
INST_ID PROGRAM                         PID PNAME LOCAL       PATH
------- ------------------------------------------------ ---------- ----- --------------- ---------------
      1 oracle@grac41.example.com (DBW0)             18 DBW0  192.168.1.101   192.168.1.50
      1 oracle@grac41.example.com (DBW0)             18 DBW0  192.168.3.101   192.168.3.50
      1 oracle@grac41.example.com (CKPT)             20 CKPT  192.168.1.101   192.168.1.50
      1 oracle@grac41.example.com (CKPT)             20 CKPT  192.168.3.101   192.168.3.50
      3 oracle@grac43.example.com (DBW0)             17 DBW0  192.168.1.103   192.168.1.50
      3 oracle@grac43.example.com (DBW0)             17 DBW0  192.168.3.103   192.168.3.50
      2 oracle@grac42.example.com (DBW0)             17 DBW0  192.168.1.102   192.168.1.50
      2 oracle@grac42.example.com (DBW0)             17 DBW0  192.168.3.102   192.168.3.50
      2 oracle@grac42.example.com (CKPT)             19 CKPT  192.168.1.102   192.168.1.50
      2 oracle@grac42.example.com (CKPT)             19 CKPT  192.168.3.102   192.168.3.50
      2 oracle@grac42.example.com (J001)             46 J001  192.168.1.102   192.168.1.50
      2 oracle@grac42.example.com (J001)             46 J001  192.168.3.102   192.168.3.50

 

Configure oranfstab with 2 independent DNFS paths where only 1 path maps to a correct IP address

server: ns1
local: 192.168.1.101    path:  192.168.1.50
local: 192.168.4.101    path:  192.168.3.50
export: /shared_data_nfs mount: /u01/nfs_asmdisks

[oracle@grac41 DNFS]$ ifconfig | grep addr
eth1      Link encap:Ethernet  HWaddr 08:00:27:74:37:E7  
          inet addr:192.168.1.101  Bcast:192.168.1.255  Mask:255.255.255.0
eth3      Link encap:Ethernet  HWaddr 08:00:27:5B:62:89  
          inet addr:192.168.3.101  Bcast:192.168.3.255  Mask:255.255.255.0
--> Only 192.168.1.101 is valid --> 192.168.4.101 is a wrong entry in oranfstab and should be 192.168.3.101

Alert log:
Direct NFS: attempting to mount /shared_data_nfs on filer ns1 defined in oranfstab
Direct NFS: channel config is:
     channel id [0] local [192.168.1.101] path [192.168.1.50]
     channel id [1] local [192.168.4.101] path [192.168.3.50]
Direct NFS: mount complete dir /shared_data_nfs on ns1 mntport 35621 nfsport 2049
Direct NFS: channel id [0] path [192.168.1.50] to filer [ns1] via local [192.168.1.101] is UP
--> system comes up but only channel id [0] is reported as UP 

Configure oranfstab with 2 independent DNFS paths where none of the paths maps to a correct IP address

server: ns1
local: 192.168.2.101    path:  192.168.1.50
local: 192.168.4.101    path:  192.168.3.50
export: /shared_data_nfs mount: /u01/nfs_asmdisks

[oracle@grac41 DNFS]$ ifconfig | grep addr
eth1      Link encap:Ethernet  HWaddr 08:00:27:74:37:E7  
          inet addr:192.168.1.101  Bcast:192.168.1.255  Mask:255.255.255.0
eth3      Link encap:Ethernet  HWaddr 08:00:27:5B:62:89  
          inet addr:192.168.3.101  Bcast:192.168.3.255  Mask:255.255.255.0
--> Both oranfstab entries are invalid! 

Restart database 
SQL> startup force
ORACLE instance started.
Database mounted.
--> db startup hangs
Latest alert log entry :
Direct NFS: attempting to mount /shared_data_nfs on filer ns1 defined in oranfstab
Direct NFS: channel config is: 
     channel id [0] local [192.168.2.101] path [192.168.1.50]
     channel id [1] local [192.168.4.101] path [192.168.3.50]
Direct NFS: mount complete dir /shared_data_nfs on ns1 mntport 35621 nfsport 2049 
--> alert.log stops here
From diag trace file : grac41_dia0_7985_1.trc
Verified Hangs in the System
  inst# SessId  Ser#     OSPID PrcNm Event
  ----- ------ ----- --------- ----- -----
      1      1     5      8275  FBGP rdbms ipc reply
      1     18     1      8001  DBW0 Disk file operations I/O
Victim Information
                                                                      Ignored
  HangID  Inst#  Sessid  Ser Num      OSPID  Fatal BG  Previous Hang    Count
  ------  -----  ------  -------  ---------  --------  -------------  -------
       1      1      18        1       8001     Y           New Hang        1
*** 2014-07-19 08:38:46.034
Wait-For-Graphs collected at 07/19/14 08:37:55)
===============================================================================
Non-intersecting chains:
-------------------------------------------------------------------------------
Chain 1:
-------------------------------------------------------------------------------
    Oracle session identified by:
    {
                instance: 1 (grac4.grac41)
                   os id: 8275
              process id: 32, oracle@grac41.example.com (TNS V1-V3)
              session id: 1
        session serial #: 5
    }
    is waiting for 'rdbms ipc reply' with wait info:
    {
                      p1: 'from_process'=0x11
                      p2: 'timeout'=0x316
            time in wait: 1.028233 sec
           timeout after: 0.971767 sec
                 wait id: 86
                blocking: 0 sessions
            wait history:
              * time between current wait and wait #1: 0.000077 sec
              1.       event: 'rdbms ipc reply'
                 time waited: 1.999915 sec
                     wait id: 85              p1: 'from_process'=0x11
                                              p2: 'timeout'=0x318
              * time between wait #1 and #2: 0.000078 sec
              2.       event: 'rdbms ipc reply'
                 time waited: 1.999948 sec
                     wait id: 84              p1: 'from_process'=0x11
                                              p2: 'timeout'=0x31a
              * time between wait #2 and #3: 0.000073 sec
              3.       event: 'rdbms ipc reply'
                 time waited: 1.999958 sec
                     wait id: 83              p1: 'from_process'=0x11
                                              p2: 'timeout'=0x31c
    }
    and is blocked by
 => Oracle session identified by:
    {
                instance: 1 (grac4.grac41)
                   os id: 8001
              process id: 17, oracle@grac41.example.com (DBW0)
              session id: 18
        session serial #: 1
    }
    which is waiting for 'Disk file operations I/O' with wait info:
    {
                      p1: 'FileOperation'=0x2
                      p2: 'fileno'=0x7
                      p3: 'filetype'=0x2
            time in wait: 2 min 1 sec
           timeout after: never
                 wait id: 56
                blocking: 1 session
            wait history:
              * time between current wait and wait #1: 0.000005 sec
              1.       event: 'Disk file operations I/O'
                 time waited: 0.000059 sec (last interval)
                 time waited: 0.001539 sec (total)
                     wait id: 54              p1: 'FileOperation'=0x2
                                              p2: 'fileno'=0x6
                                              p3: 'filetype'=0x2
              * time between wait #1 and #2: 0.000000 sec
              2.       event: 'KSV master wait'
                 time waited: 0.001341 sec
                     wait id: 55              
              * time between wait #2 and #3: 0.000000 sec
              3.       event: 'Disk file operations I/O'
                 time waited: 0.000139 sec
                     wait id: 54              p1: 'FileOperation'=0x2
                                              p2: 'fileno'=0x6
                                              p3: 'filetype'=0x2
    }

Chain 1 Signature: 'Disk file operations I/O'<='rdbms ipc reply'
Chain 1 Signature Hash: 0x7278e935
-------------------------------------------------------------------------------

--> It's time to strace the DBW0 process ( OS PID: 8001 )
# strace -p 8001
setsockopt(32, SOL_SOCKET, SO_SNDBUF, [262144], 4) = 0
setsockopt(32, SOL_SOCKET, SO_RCVBUF, [262144], 4) = 0
setsockopt(32, SOL_SOCKET, SO_RCVTIMEO, "\36\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0", 16) = 0
setsockopt(32, SOL_SOCKET, SO_SNDTIMEO, "\36\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0", 16) = 0
setsockopt(32, SOL_TCP, TCP_NODELAY, [1], 4) = 0
setsockopt(32, SOL_SOCKET, SO_LINGER, {onoff=0, linger=2}, 8) = 0
bind(32, {sa_family=AF_INET, sin_port=htons(0), sin_addr=inet_addr("192.168.4.101")}, 16) = -1 EADDRNOTAVAIL (Cannot assign requested address)
close(32)  
--> We can't bind to IP address 192.168.4.101 
    Is this address available and up and running? 
    [root@grac41 Desktop]# ifconfig | egrep 'inet addr|UP'
          inet addr:10.0.2.15  Bcast:10.0.2.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          inet addr:192.168.1.101  Bcast:192.168.1.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          inet addr:192.168.1.218  Bcast:192.168.1.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          inet addr:192.168.1.172  Bcast:192.168.1.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          inet addr:192.168.2.101  Bcast:192.168.2.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          inet addr:169.254.108.81  Bcast:169.254.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          inet addr:192.168.3.101  Bcast:192.168.3.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          inet addr:192.168.122.1  Bcast:192.168.122.255  Mask:255.255.255.0
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
---> IP addresses 192.168.4.101 and 192.168.2.101 are not available – we need to correct oranfstab:
          Change 192.168.4.101 to 192.168.3.101
                 192.168.2.101 to 192.168.1.101 
          and reboot your RAC node.
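This failure mode is easy to catch up front: every local: address in oranfstab must actually be assigned to a NIC on the host. A hedged sketch (check_locals is my own helper; the configured IPs are passed as arguments, e.g. taken from `hostname -I` or ifconfig, and oranfstab is read from stdin):

```shell
# Flag local: entries in oranfstab that are not assigned to any NIC.
# Usage: check_locals <configured-ip>... < oranfstab
check_locals() {
    ips=" $* "
    rc=0
    while read -r key addr rest; do
        [ "$key" = "local:" ] || continue
        case "$ips" in
            *" $addr "*) echo "OK   $addr" ;;
            *)           echo "BAD  $addr (not assigned to any NIC)"; rc=1 ;;
        esac
    done
    return $rc
}

# The corrected oranfstab from this section, checked against the real NICs;
# the broken version with 192.168.4.101 / 192.168.2.101 would print BAD lines.
printf 'server: ns1\nlocal: 192.168.1.101 path: 192.168.1.50\nlocal: 192.168.3.101 path: 192.168.3.50\n' \
  | check_locals 192.168.1.101 192.168.3.101
```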

alert.log entries of a successful DNFS configuration with 2 channels

Direct NFS: attempting to mount /shared_data_nfs on filer ns1 defined in oranfstab
Direct NFS: channel config is: 
     channel id [0] local [192.168.1.101] path [192.168.1.50]
     channel id [1] local [192.168.3.101] path [192.168.3.50]
Direct NFS: mount complete dir /shared_data_nfs on ns1 mntport 35621 nfsport 2049 
Direct NFS: channel id [0] path [192.168.1.50] to filer [ns1] via local [192.168.1.101] is UP
Direct NFS: channel id [1] path [192.168.3.50] to filer [ns1] via local [192.168.3.101] is UP

 

Other Potential Errors

The v$dnfs_files table is empty and the alert log reports:  
  Some NFS servers require 'insecure' to be specified as part of the export

alert.log
Direct NFS: Please check the permissions for server ns1.
Note: Some NFS servers require 'insecure' to be specified as part of the export.

Alert.log with DNFS tracing events:
Direct NFS: attempting to mount /shared_data_nfs on filer ns1 defined in oranfstab
Direct NFS: channel config is:
     channel id [0] local [192.168.1.101] path [192.168.1.50]
Direct NFS: mount complete dir /shared_data_nfs on ns1 mntport 58969 nfsport 2049
Direct NFS: Please check the permissions for server ns1.
Note: Some NFS servers require 'insecure' to be specified as part of the export.
--> Note the missing 'Direct NFS: channel id [0] path ... is UP' entries here

Alert.log with DNFS tracing events after adding insecure to /etc/exports on our NFS server
Direct NFS: attempting to mount /shared_data_nfs on filer ns1 defined in oranfstab
Direct NFS: channel config is: 
     channel id [0] local [192.168.1.101] path [192.168.1.50]
Direct NFS: mount complete dir /shared_data_nfs on ns1 mntport 50847 nfsport 2049 
Direct NFS: channel id [0] path [192.168.1.50] to filer [ns1] via local [192.168.1.101] is UP
Direct NFS: channel id [1] path [192.168.1.50] to filer [ns1] via local [192.168.1.101] is UP

Trace Events

Direct NFS database events:

SQL> alter system set event='19392 trace name context forever, level 8' scope=spfile sid='*';
SQL> alter system set event='19394 trace name context forever, level 8' scope=spfile sid='*';
SQL> alter system set event='19396 trace name context forever, level 2' scope=spfile sid='*';

Q&A

What are NFS mount options and how do they relate to DNFS?
NFS mount options are meant for the OS NFS client. They don’t affect Direct NFS in any way.

In that case why are we asked to configure mount options correctly?
Because not all files are accessed via Direct NFS. For example, Clusterware keeps the OCR and voting disks 
on NFS and uses the OS native NFS client for access.

My customer wants to use bonding because this gives High Availability (HA).
Direct NFS is designed to provide HA without any need for bonding network interfaces. A few database 
installations may still justify bonding; this must be analyzed on a case-by-case basis.

What should be the value of the parameter filesystemio_options for Direct NFS?
Direct NFS does not depend on the value of filesystemio_options. Direct NFS always issues async and 
direct I/O as it does not depend on OS support for the same. That said, Oracle can always fall back to the 
OS NFS client in case of misconfiguration. Hence, one should set filesystemio_options to 'directio' 
or 'setall' if the OS supports it.

Does the server parameter in oranfstab have to be DNS-resolvable to an IP address?
No, the “server” parameter is just a heading for a particular set of parameters for a given NFS server. The actual 
IP address (or DNS name) is given in the “path” parameter. As such, local<->path forms a unique network connection, 
where local specifies the Direct NFS client's local network endpoint, and path is the NFS filer address. The value 
of “server” is printed in the alert log whenever a connection is established.

Important Bugs

 9451706  DIRECT NFS DOES NOT SUPPORT UNSTABLE WRITES.
 9977452  DNFS NIC FAILOVER CAUSES PAUSES AND DATABASE CRASH.
11655043  DNFS HUNG FOR 2 MIN WHEN A NIC IS REMOVED.

References

  • How To Verify If DNFS Is Enabled (“ON”) Or Disabled (“OFF”) Before The Database Instance Is Started In 11.2 Release? (Doc ID 1551909.1)
  • Are NFS v4 and automount supported with dNFS (Direct NFS) in 11g? (Doc ID 1087430.1)
  • DNFS: Direct NFS: channel id [#] path [#] to filer [name] is DOWN / UP Message When Standby Database Is Idle (Doc ID 1601500.1)
  • TESTCASE Step by Step – Configure Direct NFS Client (DNFS) on Windows (Doc ID 1468114.1)
  • How to Configure NFSv4 (Doc ID 456246.1)
  • HOWTO: Setup a linux system to share files using NFS (Doc ID 1012325.1)
  • Howto Optimize NFS Performance with NFS options (Doc ID 397194.1)
  • Mount Options for Oracle files when used with NFS on NAS devices (Doc ID 359515.1)
  • Direct NFS: FAQ (Doc ID 954425.1)
  • Configure Direct NFS Client (DNFS) on Linux (11g) (Doc ID 762374.1)
  • Frequently Asked Questions related to Direct NFS (Doc ID 1496040.1)
  • How to configure DNFS to use multiple IPs on different subnets (Doc ID 1552831.1)
  • How to Setup Direct NFS client multipaths in same subnet (Doc ID 822481.1)
  • Trace Files From dNFS Database: kgnfscrechan … Failed to get root fsinfo … on filer … error 1 (Doc ID 1507212.1)
  • dNFS Hangs for Few Minutes During NIC Failover (Doc ID 1350245.1)
  • Direct NFS not load balancing the client-side NICs (Doc ID 746656.1)
  • http://blog.oracle48.nl/direct-nfs-configuring-and-network-considerations-in-practise/

2 thoughts on “DNFS – Direct NFS setup for 11.2.0.4 3-node RAC cluster on OEL 6.4”

  1. Hi Blake,

    Here is the description from the Oracle docu:
    The following views are for managing Direct NFS Client in a cluster environment:
    gv$dnfs_servers: Shows a table of servers accessed using Direct NFS Client.
    gv$dnfs_files: Shows a table of files currently open using Direct NFS Client.
    gv$dnfs_channels: Shows a table of open network paths (or channels) to servers for which Direct NFS Client is providing files.
    gv$dnfs_stats: Shows a table of performance statistics for Direct NFS Client.

    This means if you have already configured DNFS for your cluster, all processes like RMAN and DBWR act as clients and must
    use DNFS to read/write the NFS-based datafiles. If you don’t have any DNFS datafiles I can hardly believe RMAN is
    using this protocol.
