12c Feature: Testing VIP/Application failover on LEAF Nodes

Overview – Differences between HUB and LEAF servers

  • Hub Servers run in the Database tier (both the RDBMS and applications can run here)
  • Leaf Servers run in the Application tier (only Clusterware-managed applications can run here, using VIPs and failover) - a quick role check is shown right after this list
  • This means LEAF servers cannot handle any database activity with 12cR1
  • LEAF servers can be used to deploy cluster-aware applications using LEAF VIP/application failover (see the sample below)
  • In 12c Beta 2, I/O server instances on LEAF nodes were able to run Oracle RAC database instances
  • This feature (referred to as indirect ASM client instances) was dropped for 12cR1
  • For details read: http://skillbuilders.com/Oracle/Oracle-Consulting-Training.cfm?category=blogs&tab=john-watsons-blog&node=2857#tabs
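
A quick way to verify the cluster mode and the node roles before making any changes - a minimal check, assuming the Flex Cluster is already up (output depends on your setup):

# crsctl get cluster mode status           <-- should report flex mode for a HUB/LEAF setup
# crsctl get node role config -all         <-- configured role per node
# crsctl get node role status -all         <-- active role per node (used throughout this post)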

Configuration

CRS: 12.1.0.1
gract1  : HUB  node 
gract2  : LEAF node 
gract3  : LEAF node

Change a HUB node to a LEAF node

[root@gract3 gract3]#   crsctl get node role status -all
Node 'gract1' active role is 'hub'
Node 'gract2' active role is 'hub'    <-- Let's change this node to a LEAF node
Node 'gract3' active role is 'leaf'


[root@gract3 gract3]# ssh gract2
Last login: Sat Aug  2 18:25:21 2014 from gract3.example.com
[root@gract2 ~]# crsctl set node role leaf 
CRS-4408: Node 'gract2' configured role successfully changed; restart Oracle High Availability Services for new role to take effect.
[root@gract2 ~]# crsctl stop crs 
[root@gract2 ~]# crsctl start crs 
[root@gract2 ~]#  crsctl get node role status -all
Node 'gract1' active role is 'hub'
Node 'gract3' active role is 'leaf'
Node 'gract2' active role is 'leaf'

VIP setup on our LEAF nodes

Create a static network which can be used by our leaf nodes (see the -leaf switch)

# srvctl add network -netnum  4 -subnet 192.168.1.0/255.255.255.0 -leaf
*****  Local Resources: *****
Resource NAME                  TARGET     STATE           SERVER       STATE_DETAILS                       
-------------------------      ---------- ----------      ------------ ------------------                  
ora.net1.network               ONLINE     ONLINE          gract1       STABLE     <-- HUB  
ora.net4.network               OFFLINE    OFFLINE         gract2       STABLE     <-- LEAF 1
ora.net4.network               OFFLINE    OFFLINE         gract3       STABLE     <-- LEAF 2  
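
The new leaf network can also be checked with srvctl - a minimal verification, using the network number 4 chosen above:

# srvctl config network -netnum 4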

Create a specific network type and our application VIP
# crsctl add type ora.cluster_vip_net4.type -basetype ora.cluster_vip.type
# $GRID_HOME/bin/appvipcfg create -network=4 -ip=192.168.1.199 -vipname=MyTestVIP -user=root
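
Before starting the VIP it can be useful to review the static attributes appvipcfg generated - a minimal check (the full profile listing includes the IP address, the network number and the generated dependencies):

# crsctl status resource MyTestVIP -p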

[root@gract2 ~]# crsctl start resource MyTestVIP
CRS-2672: Attempting to start 'ora.net4.network' on 'gract2'
CRS-2676: Start of 'ora.net4.network' on 'gract2' succeeded
CRS-2672: Attempting to start 'MyTestVIP' on 'gract2'
CRS-2676: Start of 'MyTestVIP' on 'gract2' succeeded

*****  Cluster Resources: *****
Resource NAME               INST   TARGET       STATE        SERVER          STATE_DETAILS
--------------------------- ----   ------------ ------------ --------------- -----------------------------------------
MyTestVIP                      1   ONLINE       ONLINE       gract2          STABLE  

Relocate VIP to LEAF node gract3
[root@gract2 ~]# crsctl relocate resource  MyTestVIP
CRS-2672: Attempting to start 'ora.net4.network' on 'gract3'
CRS-2676: Start of 'ora.net4.network' on 'gract3' succeeded
CRS-2673: Attempting to stop 'MyTestVIP' on 'gract2'
CRS-2677: Stop of 'MyTestVIP' on 'gract2' succeeded
CRS-2672: Attempting to start 'MyTestVIP' on 'gract3'
CRS-2676: Start of 'MyTestVIP' on 'gract3' succeeded

[root@gract2 ~]# crs
*****  Local Resources: *****
Resource NAME                  TARGET     STATE           SERVER       STATE_DETAILS                       
-------------------------      ---------- ----------      ------------ ------------------                  
ora.net1.network               ONLINE     ONLINE          gract1       STABLE   
ora.net4.network               ONLINE     ONLINE          gract2       STABLE   
ora.net4.network               ONLINE     ONLINE          gract3       STABLE   

*****  Cluster Resources: *****
Resource NAME               INST   TARGET       STATE        SERVER          STATE_DETAILS
--------------------------- ----   ------------ ------------ --------------- -----------------------------------------
MyTestVIP                      1   ONLINE       ONLINE       gract3          STABLE 
--> Here the application VIP is running on LEAF node gract3, and the net4 network resource is active on both LEAF nodes gract2 and gract3
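
A plain ping from the HUB node is a simple, Oracle-independent way to confirm the VIP address is really reachable:

[root@gract1 ~]# ping -c 2 192.168.1.199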

Create a LEAF server pool

$ crsctl status server  gract1 -f  | egrep "^NAME|ACTIVE_POOLS|ACTIVE_CSS_ROLE"
NAME=gract1
ACTIVE_POOLS=ora.TOP_PRIORITY
ACTIVE_CSS_ROLE=hub

$ crsctl status server  gract2 -f  | egrep "^NAME|ACTIVE_POOLS|ACTIVE_CSS_ROLE"
NAME=gract2
ACTIVE_POOLS=Free
ACTIVE_CSS_ROLE=leaf

$ crsctl status server  gract3 -f  | egrep "^NAME|ACTIVE_POOLS|ACTIVE_CSS_ROLE"
NAME=gract3
ACTIVE_POOLS=Free
ACTIVE_CSS_ROLE=leaf

[grid@gract1 ~/PM]$  srvctl status serverpool -detail
Server pool name: Free
Active servers count: 2
Active server names: gract2,gract3
NAME=gract2 STATE=ONLINE
NAME=gract3 STATE=ONLINE
Server pool name: Generic
Active servers count: 0
Active server names: 
Server pool name: STANDARD_PRIORITY
Active servers count: 0
Active server names: 
Server pool name: TOP_PRIORITY
Active servers count: 1
Active server names: gract1
NAME=gract1 STATE=ONLINE
--> Our HUB server is attached to the TOP_PRIORITY pool whereas our LEAF servers are waiting in the Free pool

# crsctl add category My_leaf_nodes -attr "ACTIVE_CSS_ROLE=leaf";
# crsctl status category My_leaf_nodes;
NAME=My_leaf_nodes
ACL=owner:root:rwx,pgrp:root:r-x,other::r--
ACTIVE_CSS_ROLE=leaf
EXPRESSION=

# crsctl status server -category My_leaf_nodes;
NAME=gract2
STATE=ONLINE

NAME=gract3
STATE=ONLINE
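
Categories are not limited to the CSS role: the EXPRESSION attribute (empty above) can filter on other server attributes. The following is only a hedged sketch - the category name and the memory threshold are purely illustrative:

# crsctl add category My_big_leaf_nodes -attr "ACTIVE_CSS_ROLE=leaf,EXPRESSION='(MEMORY_SIZE > 4096)'"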

[root@gract2 ~]# crsctl add serverpool  My_leaf_pool -attr "SERVER_CATEGORY=My_leaf_nodes";
[root@gract2 ~]# crsctl status serverpool My_leaf_pool;
NAME=My_leaf_pool
ACTIVE_SERVERS=gract2 gract3

[root@gract2 ~]#  crsctl status serverpool My_leaf_pool -f
NAME=My_leaf_pool
IMPORTANCE=0
MIN_SIZE=0
MAX_SIZE=-1
SERVER_NAMES=
PARENT_POOLS=
EXCLUSIVE_POOLS=
ACL=owner:root:rwx,pgrp:root:r-x,other::r--
SERVER_CATEGORY=My_leaf_nodes
ACTIVE_SERVERS=gract2 gract3
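
IMPORTANCE, MIN_SIZE and MAX_SIZE keep their defaults here; if the pool should always hold at least one leaf server they can be adjusted in place - a sketch with illustrative values:

# crsctl modify serverpool My_leaf_pool -attr "MIN_SIZE=1,MAX_SIZE=2,IMPORTANCE=5"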

 

Create and deploy a clusterware application resource for Apache 2.15

Change /etc/httpd/conf/httpd.conf and add our VIP as the listening address: 192.168.1.199:80
Listen 192.168.1.199:80

Check the VIP location
*****  Cluster Resources: *****
Resource NAME               INST   TARGET       STATE        SERVER          STATE_DETAILS
--------------------------- ----   ------------ ------------ --------------- -----------------------------------------
MyTestVIP                      1   ONLINE       ONLINE       gract3          STABLE
--> As our initial testing is on gract2, relocate the VIP to gract2

[root@gract2 bin]#   crsctl relocate resource MyTestVIP
CRS-2673: Attempting to stop 'MyTestVIP' on 'gract3'
CRS-2677: Stop of 'MyTestVIP' on 'gract3' succeeded
CRS-2672: Attempting to start 'MyTestVIP' on 'gract2'
CRS-2676: Start of 'MyTestVIP' on 'gract2' succeeded
[root@gract2 bin]#  crs
*****  Cluster Resources: *****
Resource NAME               INST   TARGET       STATE        SERVER          STATE_DETAILS
--------------------------- ----   ------------ ------------ --------------- -----------------------------------------
MyTestVIP                      1   ONLINE       ONLINE       gract2          STABLE

Configure action script 
[root@gract2 bin]# cat apache.scr 
#!/bin/sh

HTTPDCONFLOCATION=/etc/httpd/conf/httpd.conf
WEBPAGECHECK=http://192.168.1.199:80/icons/apache_pb.gif

case $1 in
'start')
    /usr/sbin/apachectl -k start -f $HTTPDCONFLOCATION
    RET=$?
    sleep 10
    ;;
'stop')
    /usr/sbin/apachectl -k stop
   RET=$?
    ;;
'clean')
    /usr/sbin/apachectl -k stop
   RET=$?
    ;;
'check')
    /usr/bin/wget -q --delete-after $WEBPAGECHECK
   RET=$?
    ;;
*)
   RET=0
    ;;
esac
# 0: success; 1 : error
if [ $RET -eq 0 ]; then
exit 0
else
exit 1
fi

Edit /etc/httpd/conf/httpd.conf and put in our VIP
Change 
Listen 80
to
Listen 192.168.1.199:80
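
Before handing Apache over to Clusterware, a syntax check of the edited configuration avoids surprises at resource start time (-t runs the Apache configuration test against the file the action script will use):

[root@gract2 bin]# /usr/sbin/apachectl -t -f /etc/httpd/conf/httpd.conf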

Check the Apache application status using the action script
[root@gract2 bin]# apache.scr start
[root@gract2 bin]#  apache.scr check
[root@gract2 bin]#  echo $?
0
[root@gract2 bin]#  apache.scr stop
[root@gract2 bin]#  apache.scr check
[root@gract2 bin]#   echo $?
1
--> Looks good - we are ready for cluster-wide deployment

Create a cluster managed application resource 
[root@gract2 bin]#  $GRID_HOME/bin/crsctl add resource My_apache -type cluster_resource -attr \
"ACTION_SCRIPT=/usr/local/bin/apache.scr,PLACEMENT=restricted,HOSTING_MEMBERS=gract2 gract3,SERVER_POOLS=My_leaf_pool, \
CHECK_INTERVAL='30',RESTART_ATTEMPTS='2',START_DEPENDENCIES=hard(MyTestVIP) pullup(MyTestVIP), \
STOP_DEPENDENCIES=hard(intermediate:MyTestVIP),CARDINALITY=1 "

[root@gract2 bin]#  $GRID_HOME/bin/crsctl start resource My_apache
CRS-2672: Attempting to start 'My_apache' on 'gract2'
CRS-2676: Start of 'My_apache' on 'gract2' succeeded
...

Check resource properties and status for our apache resource

[root@gract1 Desktop]# $GRID_HOME/bin/crsctl status resource My_apache -f | egrep '^PLACEMENT|HOSTING_MEMBERS|SERVER_POOLS|DEPENDENCIES|^CARDINALITY'
CARDINALITY=1
CARDINALITY_ID=0
HOSTING_MEMBERS=gract2 gract3
PLACEMENT=restricted
SERVER_POOLS=My_leaf_pool
START_DEPENDENCIES=hard(MyTestVIP) pullup(MyTestVIP)
STOP_DEPENDENCIES=hard(intermediate:MyTestVIP)

 HOSTING_MEMBERS=gract2 gract3 : A space-delimited, ordered list of cluster server names that can host a resource. 
                                 This attribute is required only when using administrator management, and when the value of the 
                                 PLACEMENT attribute is set to favored or restricted.
 PLACEMENT=restricted          : Oracle Clusterware only considers servers that belong to server pools listed in the SERVER_POOLS resource attribute  
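
CHECK_INTERVAL and RESTART_ATTEMPTS were set in the add resource command above; should the 30-second check turn out to be too coarse, the attribute can be changed in place - the value below is only illustrative:

# crsctl modify resource My_apache -attr "CHECK_INTERVAL=10"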

[root@gract2 bin]#  $GRID_HOME/bin/crsctl status resource My_apache
NAME=My_apache
TYPE=cluster_resource
TARGET=ONLINE
STATE=ONLINE on gract2

Test application/VIP failover

Test application/VIP failover using clusterware resource relocation
Copy over the action script 
[root@gract2 bin]# scp /usr/local/bin/apache.scr gract3://usr/local/bin/
[root@gract2 bin]# ssh gract3 ls -l /usr/local/bin/apache.scr 
-rwxr-xr-x. 1 root root 505 Aug  3 11:05 /usr/local/bin/apache.scr

[root@gract2 bin]#  $GRID_HOME/bin/crsctl relocate resource My_apache
CRS-2527: Unable to start 'My_apache' because it has a 'hard' dependency on 'MyTestVIP'
CRS-2525: All instances of the resource 'MyTestVIP' are already running; relocate is not allowed because the force option was not specified
CRS-4000: Command Relocate failed, or completed with errors.
[root@gract2 bin]#  $GRID_HOME/bin/crsctl relocate resource My_apache -f
CRS-2673: Attempting to stop 'My_apache' on 'gract2'
CRS-2677: Stop of 'My_apache' on 'gract2' succeeded
CRS-2673: Attempting to stop 'MyTestVIP' on 'gract2'
CRS-2677: Stop of 'MyTestVIP' on 'gract2' succeeded
CRS-2672: Attempting to start 'MyTestVIP' on 'gract3'
CRS-2676: Start of 'MyTestVIP' on 'gract3' succeeded
CRS-2672: Attempting to start 'My_apache' on 'gract3'
CRS-2676: Start of 'My_apache' on 'gract3' succeeded
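
The same check the action script performs can be run manually from any node to confirm Apache now serves the page over the VIP from gract3 (an exit status of 0 means the GIF was fetched):

[root@gract1 ~]# /usr/bin/wget -q --delete-after http://192.168.1.199:80/icons/apache_pb.gif
[root@gract1 ~]# echo $?
0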

Test application/VIP failover after a CRS crash

Starting firefox from our HUB node:
[root@gract1 Desktop]# firefox http://192.168.1.199:80
--> Apache page is displayed successfully 
Checking cluster resources 
*****  Cluster Resources: *****
Resource NAME               INST   TARGET       STATE        SERVER          STATE_DETAILS
--------------------------- ----   ------------ ------------ --------------- -----------------------------------------
MyTestVIP                      1   ONLINE       ONLINE       gract3          STABLE  
--> Now reboot server gract3

As expected, our HTML page becomes unavailable for a few seconds.
Firefox error: 
   Unable to connect ->    Firefox can't establish a connection to the server at 192.168.1.199

After a few seconds the VIP becomes available on gract2 and Apache can display our HTML page again 
*****  Cluster Resources: *****
Resource NAME               INST   TARGET       STATE        SERVER          STATE_DETAILS
--------------------------- ----   ------------ ------------ --------------- -----------------------------------------
MyTestVIP                      1   ONLINE       ONLINE       gract2          STABLE 

Relocate that service from gract2 to gract3

[root@gract1 Desktop]# crsctl relocate resource My_apache -f
CRS-2673: Attempting to stop 'My_apache' on 'gract2'
CRS-2677: Stop of 'My_apache' on 'gract2' succeeded
CRS-2673: Attempting to stop 'MyTestVIP' on 'gract2'
CRS-2677: Stop of 'MyTestVIP' on 'gract2' succeeded
CRS-2672: Attempting to start 'MyTestVIP' on 'gract3'
CRS-2676: Start of 'MyTestVIP' on 'gract3' succeeded
CRS-2672: Attempting to start 'My_apache' on 'gract3'

Cleanup and delete clusterware resources

# crsctl stop   resource My_apache 
# crsctl delete resource My_apache 
# crsctl stop res MyTestVIP
# $GRID_HOME/bin/appvipcfg  delete  -vipname=MyTestVIP
# crsctl delete type ora.cluster_vip_net4.type
# crsctl stop resource  ora.net4.network
# crsctl delete  resource  ora.net4.network
# crsctl delete serverpool My_leaf_pool
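
To confirm the cleanup really removed everything, list the remaining resources and server pools and make sure none of the test objects show up any more:

# crsctl status resource -t | egrep 'My_apache|MyTestVIP|net4'
# crsctl status serverpool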

Potential Errors

Error CRS-2667:
[root@gract2 bin]#  $GRID_HOME/bin/crsctl start resource My_apache
CRS-2667: Resource 'My_apache' with PLACEMENT=balanced may only run on servers assigned to Generic and Free, both of which are empty
CRS-4000: Command Start failed, or completed with errors.
Fix : Change PLACEMENT attribute
[root@gract2 bin]#  crsctl  modify resource My_apache -attr "PLACEMENT=restricted,HOSTING_MEMBERS=gract2 gract3"

Error CRS-5809:
[root@gract2 bin]# crsctl start  resource My_apache 
CRS-2672: Attempting to start 'My_apache' on 'gract2'
CRS-5809: Failed to execute 'ACTION_SCRIPT' value of '' for 'My_apache'. Error information 'cmd  not found', Category : -2, OS error : 2
CRS-5809: Failed to execute 'ACTION_SCRIPT' value of '' for 'My_apache'. Error information 'cmd  not found', Category : -2, OS error : 2

Fix:
Check the action script location and protection (file permissions) on the relevant node and set ACTION_SCRIPT for that resource 
[root@gract2 bin]#   ls -l /usr/local/bin/apache.scr
-rwxr-xr-x. 1 root root 505 Aug  3 10:05 /usr/local/bin/apache.scr
[root@gract2 bin]#  crsctl  modify resource My_apache -attr "ACTION_SCRIPT=/usr/local/bin/apache.scr"

 


One thought on “12c Feature: Testing VIP/Application failover on LEAF Nodes”

  1. I couldn’t find much information about adding database instances to leaf nodes.
    I was able to create serverpools for hub and leaf nodes, but, when I try to add a service it fails:
    srvctl add service -database dbsrvpool -service readbsrvpoolsv -rfpool srvleaf
    PRCS-1139 : failed to add reader farm service

    SRVM_TRACE didn’t give me much information and Oracle Support didn’t return any record about it.

    Could you please make an example how you did it?

    Best Regards,

    Bruno
