12c/11g feature : Serverpools in Action

Current Configuration

Create 2 new serverpools:
  srvctl add serverpool -serverpool TOP_PRIORITY      -min 1 -max 2 -importance 10
  srvctl add serverpool -serverpool STANDARD_PRIORITY -min 1 -max 2 -importance 5
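
The new pool definitions can be cross-checked right away – a minimal check, assuming srvctl/crsctl are in the grid user's PATH:
  srvctl status serverpool -detail
  crsctl status serverpool ora.TOP_PRIORITY -p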

[root@gract1 Desktop]# olsnodes -s 
gract1    Active
gract2    Inactive
gract3    Active

#   crsctl get node role status -all
Node 'gract1' active role is 'hub'
Node 'gract3' active role is 'hub'
--> In our flex cluster node gract2 isn't available yet 

------ RAC Database Info ---------
Database name: dw Server pools: STANDARD_PRIORITY 
Instance dw_2 is running on node gract3

Database name: ERP Server pools: TOP_PRIORITY 
Instance ERP_3 is running on node gract1
--> ERP database is using the TOP_PRIORITY pool      -> only one instance is running 
    DW database is using the STANDARD_PRIORITY pool  -> only one instance is running  

------ Serverpool Info Config ---------
NAME=Free                          IMPORTANCE=0        MIN_SIZE=0          MAX_SIZE=-1         SERVER_CATEGORY=                    
NAME=ora.TOP_PRIORITY              IMPORTANCE=10       MIN_SIZE=1          MAX_SIZE=2          SERVER_CATEGORY=ora.hub.category    
NAME=ora.STANDARD_PRIORITY         IMPORTANCE=5        MIN_SIZE=1          MAX_SIZE=2          SERVER_CATEGORY=ora.hub.category    

------ Serverpool Info ---------
Server pool name: Free             Active servers count: 0            Active server names:                

Server pool name: TOP_PRIORITY     Active servers count: 1            Active server names: gract1         
           NAME=gract1 STATE=ONLINE 
Server pool name: STANDARD_PRIORITYActive servers count: 1            Active server names: gract3         
           NAME=gract3 STATE=ONLINE
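
The status listings in this article are produced by a small helper script (get_pool.sh, invoked further below); the script itself is not shown. A minimal sketch of such a helper – an assumption of mine, not the original script – assuming the grid user's environment is set and the databases are named dw and ERP:

#!/bin/bash
# Hypothetical get_pool.sh style reporting helper.
# Assumes crsctl/srvctl in PATH and RAC databases named dw and ERP.

echo "------ Server Info ---------"
crsctl status server -f | egrep '^NAME=|^ACTIVE_POOLS=|^ACTIVE_CSS_ROLE='

echo "------ RAC Database Info ---------"
for db in dw ERP; do
    srvctl config database -d $db | grep -i 'Server pools'
    srvctl status database -d $db
    echo
done

echo "------ Serverpool Info Config ---------"
crsctl status serverpool -p

echo "------ Serverpool Info ---------"
srvctl status serverpool -detail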

Use Case #1: A new server gract2 joins our serverpools

Action : restart crs stack on gract2
[root@gract2 ~]# crsctl start crs
CRS-4123: Oracle High Availability Services has been started.
------ Server Info ---------
NAME=gract1                        ACTIVE_POOLS=ora.TOP_PRIORITY      ACTIVE_CSS_ROLE=hub
NAME=gract2                        ACTIVE_POOLS=ora.TOP_PRIORITY      ACTIVE_CSS_ROLE=hub
NAME=gract3                        ACTIVE_POOLS=ora.STANDARD_PRIORITY ACTIVE_CSS_ROLE=hub

------ RAC Database Info ---------
Database name: dw Server pools: STANDARD_PRIORITY
Instance dw_2 is running on node gract3

Database name: ERP Server pools: TOP_PRIORITY
Instance ERP_1 is running on node gract2
Instance ERP_3 is running on node gract1

------ Serverpool Info Config ---------
NAME=Free                          IMPORTANCE=0        MIN_SIZE=0          MAX_SIZE=-1         SERVER_CATEGORY=
NAME=ora.TOP_PRIORITY              IMPORTANCE=10       MIN_SIZE=1          MAX_SIZE=2          SERVER_CATEGORY=ora.hub.category
NAME=ora.STANDARD_PRIORITY         IMPORTANCE=5        MIN_SIZE=1          MAX_SIZE=2          SERVER_CATEGORY=ora.hub.category

------ Serverpool Info ---------
Server pool name: Free             Active servers count: 0            Active server names:

Server pool name: TOP_PRIORITY     Active servers count: 2            Active server names: gract1,gract2
NAME=gract1 STATE=ONLINE           NAME=gract2 STATE=ONLINE
Server pool name: STANDARD_PRIORITY Active servers count: 1            Active server names: gract3
NAME=gract3 STATE=ONLINE

Pool status

  • All MIN_SIZE requirements are satisfied – for both pools the MAX_SIZE limit is not yet reached
  • Server gract2 joins the TOP_PRIORITY pool as this pool has a higher importance than STANDARD_PRIORITY (see the eval sketch below)
  • As the ERP database is assigned to TOP_PRIORITY, a new instance ERP_1 is now running on gract2
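
The placement decision for a joining server can also be previewed before actually starting its CRS stack – a hedged sketch using the eval commands listed later in this article (the output depends on the current pool state):
  [root@gract1 ~]# crsctl eval add server gract2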

 

Use Case #2 : DW database instance is not heavily used – run an ERP instance on the DW database's server too

 
Action : Distribute the ERP database across both pools
         Attach database ERP to the STANDARD_PRIORITY and TOP_PRIORITY pools
         Increase the MAX_SIZE parameter to 3 for the TOP_PRIORITY pool
 
[root@gract2 ~]# srvctl modify  database -d ERP -g  STANDARD_PRIORITY,TOP_PRIORITY
[root@gract1 ~]# srvctl modify  serverpool  -serverpool  ora.TOP_PRIORITY -min 1  -max 3 -fa 
[grid@gract1 ~/PM]$ get_pool.sh
------ Server Info ---------
NAME=gract1                        ACTIVE_POOLS=ora.TOP_PRIORITY      ACTIVE_CSS_ROLE=hub                 
NAME=gract2                        ACTIVE_POOLS=ora.TOP_PRIORITY      ACTIVE_CSS_ROLE=hub                 
NAME=gract3                        ACTIVE_POOLS=ora.STANDARD_PRIORITY ACTIVE_CSS_ROLE=hub                 

------ RAC Database Info ---------
Database name: dw Server pools: STANDARD_PRIORITY 
Instance dw_2 is running on node gract3

Database name: ERP Server pools: STANDARD_PRIORITY,TOP_PRIORITY 
Instance ERP_1 is running on node gract2
Instance ERP_2 is running on node gract3
Instance ERP_3 is running on node gract1

------ Serverpool Info Config ---------
NAME=Free                            IMPORTANCE=0        MIN_SIZE=0          MAX_SIZE=-1         SERVER_CATEGORY=                      
NAME=ora.TOP_PRIORITY                IMPORTANCE=10       MIN_SIZE=1          MAX_SIZE=3          SERVER_CATEGORY=ora.hub.category      
NAME=ora.STANDARD_PRIORITY           IMPORTANCE=5        MIN_SIZE=1          MAX_SIZE=1          SERVER_CATEGORY=ora.hub.category      

------ Serverpool Info ---------
Server pool name: Free             Active servers count: 0            Active server names:                

Server pool name: TOP_PRIORITY     Active servers count: 2            Active server names: gract1,gract2  
           NAME=gract1 STATE=ONLINE           NAME=gract2 STATE=ONLINE 
Server pool name: STANDARD_PRIORITYActive servers count: 1            Active server names: gract3         
           NAME=gract3 STATE=ONLINE

Pool status

  • Note that the ERP database is now attached to both the STANDARD_PRIORITY and TOP_PRIORITY pools
  • The ERP database runs in both pools and on all three servers
  • The ERP database now runs 3 instances whereas DW still runs with only 1 instance

Use case #3: DW instance needs more resources – add a server to STANDARD_PRIORITY pool

Action : Increase  MIN_SIZE to 2 on  STANDARD_PRIORITY pool to run DW database with 2 instances
         srvctl modify  serverpool  -serverpool  ora.STANDARD_PRIORITY  -min 2
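
Such a pool resize can be previewed before it is executed – a hedged sketch using crsctl eval (the full eval syntax list appears later in this article):
  [root@gract1 ~]# crsctl eval modify serverpool ora.STANDARD_PRIORITY -attr "MIN_SIZE=2"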
------ Server Info ---------
NAME=gract1                        ACTIVE_POOLS=ora.TOP_PRIORITY      ACTIVE_CSS_ROLE=hub                 
NAME=gract2                        ACTIVE_POOLS=ora.STANDARD_PRIORITY ACTIVE_CSS_ROLE=hub                 
NAME=gract3                        ACTIVE_POOLS=ora.STANDARD_PRIORITY ACTIVE_CSS_ROLE=hub                 

------ RAC Database Info ---------
Database name: dw Server pools: STANDARD_PRIORITY 
Instance dw_1 is running on node gract2
Instance dw_2 is running on node gract3

Database name: ERP Server pools: STANDARD_PRIORITY,TOP_PRIORITY 
Instance ERP_1 is running on node gract2
Instance ERP_3 is running on node gract1
Database ERP is not running on node gract3

------ Serverpool Info Config ---------
NAME=Free                            IMPORTANCE=0        MIN_SIZE=0          MAX_SIZE=-1         SERVER_CATEGORY=                      
NAME=ora.TOP_PRIORITY                IMPORTANCE=10       MIN_SIZE=1          MAX_SIZE=2          SERVER_CATEGORY=ora.hub.category      
NAME=ora.STANDARD_PRIORITY           IMPORTANCE=5        MIN_SIZE=2          MAX_SIZE=2          SERVER_CATEGORY=ora.hub.category      

------ Serverpool Info ---------
Server pool name: Free             Active servers count: 0            Active server names:                

Server pool name: TOP_PRIORITY     Active servers count: 1            Active server names: gract1         
           NAME=gract1 STATE=ONLINE 
Server pool name: STANDARD_PRIORITYActive servers count: 2            Active server names: gract2,gract3  
           NAME=gract2 STATE=ONLINE           NAME=gract3 STATE=ONLINE

Pool status

  • Server gract2 moved from TOP_PRIORITY to STANDARD_PRIORITY to satisfy the MIN_SIZE=2 property
  • As the DW database is attached to the STANDARD_PRIORITY pool, a new instance of the DW database is started on gract2
  • The ERP database now runs with only 2 instances as MAX_SIZE is set to 2

 

Use case #4: Evenly distribute the ERP and DW databases across all servers

Action : Attach both databases to both pools 
[root@gract1 ~]#  srvctl modify  database -d ERP -g  STANDARD_PRIORITY,TOP_PRIORITY
[root@gract1 ~]#  srvctl modify  database -d DW -g  STANDARD_PRIORITY,TOP_PRIORITY
------ RAC Database Info ---------
Database name: dw Server pools: STANDARD_PRIORITY,TOP_PRIORITY 
Instance dw_1 is running on node gract2
Instance dw_2 is running on node gract3
Database dw is not running on node gract1

Database name: ERP Server pools: STANDARD_PRIORITY,TOP_PRIORITY 
Instance ERP_1 is running on node gract2
Instance ERP_3 is running on node gract1
Database ERP is not running on node gract3

------ Serverpool Info Config ---------
NAME=Free                            IMPORTANCE=0        MIN_SIZE=0          MAX_SIZE=-1         SERVER_CATEGORY=                      
NAME=ora.TOP_PRIORITY                IMPORTANCE=10       MIN_SIZE=1          MAX_SIZE=2          SERVER_CATEGORY=ora.hub.category      
NAME=ora.STANDARD_PRIORITY           IMPORTANCE=5        MIN_SIZE=2          MAX_SIZE=2          SERVER_CATEGORY=ora.hub.category      

------ Serverpool Info ---------
Server pool name: Free             Active servers count: 0            Active server names:                

Server pool name: TOP_PRIORITY     Active servers count: 1            Active server names: gract1         
           NAME=gract1 STATE=ONLINE 
Server pool name: STANDARD_PRIORITYActive servers count: 2            Active server names: gract2,gract3  
           NAME=gract2 STATE=ONLINE           NAME=gract3 STATE=ONLINE 

Pool Status:
 Due to MAX_SIZE=2 being set for both pools, only 2 instances of each database are running 

Fix : set MAX_SIZE=3 for both pools
[root@gract1 ~]#  srvctl modify  serverpool  -serverpool  ora.TOP_PRIORITY  -max 3
[root@gract1 ~]#  srvctl modify  serverpool  -serverpool  STANDARD_PRIORITY -max 3

Note: In the above case the additional instances weren't started automatically – I had to start them using srvctl:
[grid@gract1 ~/PM]$ srvctl start database -d erp
[grid@gract1 ~/PM]$ srvctl start database -d dw 
------ RAC Database Info ---------
Database name: dw Server pools: STANDARD_PRIORITY,TOP_PRIORITY 
Instance dw_1 is running on node gract2
Instance dw_2 is running on node gract3
Instance dw_3 is running on node gract1

Database name: ERP Server pools: STANDARD_PRIORITY,TOP_PRIORITY 
Instance ERP_1 is running on node gract2
Instance ERP_2 is running on node gract3
Instance ERP_3 is running on node gract1

------ Serverpool Info Config ---------
NAME=Free                            IMPORTANCE=0        MIN_SIZE=0          MAX_SIZE=-1         SERVER_CATEGORY=                      
NAME=ora.TOP_PRIORITY                IMPORTANCE=10       MIN_SIZE=1          MAX_SIZE=3          SERVER_CATEGORY=ora.hub.category      
NAME=ora.STANDARD_PRIORITY           IMPORTANCE=5        MIN_SIZE=2          MAX_SIZE=3          SERVER_CATEGORY=ora.hub.category      

------ Serverpool Info ---------
Server pool name: Free             Active servers count: 0            Active server names:                

Server pool name: TOP_PRIORITY     Active servers count: 1            Active server names: gract1         
           NAME=gract1 STATE=ONLINE 
Server pool name: STANDARD_PRIORITYActive servers count: 2            Active server names: gract2,gract3  
           NAME=gract2 STATE=ONLINE           NAME=gract3 STATE=ONLINE

Pool status

  • All instances are now running on all servers ( = 6 instances )
  • Both databases ERP and DW are attached to both serverpools

Use case #5: Year-end processing – only the ERP database should run

Action: Change the DW database back to use the STANDARD_PRIORITY pool only
        Change MIN_SIZE to 3 and MAX_SIZE to 3 for the ora.TOP_PRIORITY pool

[root@gract1 ~]#  srvctl modify  database -d DW -g  STANDARD_PRIORITY
PRCD-1230 : Failed to modify database dw to use server pool STANDARD_PRIORITY
PRCR-1071 : Failed to register or update resource ora.dw.db
CRS-2736: The operation requires stopping resource 'ora.dw.db' on server 'gract1'
CRS-2736: The operation requires stopping resource 'ora.dw.db' on server 'gract1'
CRS-2744: Unable to modify resource 'ora.dw.db' as this will affect running resources, but the force option was not specified
[root@gract1 ~]#  srvctl modify  database -d DW -g  STANDARD_PRIORITY -f

[root@gract1 ~]#   srvctl modify  serverpool  -serverpool  ora.TOP_PRIORITY -min 3  -max 3
PRCS-1011 : Failed to modify server pool TOP_PRIORITY
CRS-2736: The operation requires stopping resource 'ora.dw.db' on server 'gract3'
CRS-2736: The operation requires stopping resource 'ora.dw.db' on server 'gract2'
CRS-2738: Unable to modify server pool 'ora.TOP_PRIORITY' as this will affect running resources, but the force option was not specified
[root@gract1 ~]#  srvctl modify  serverpool  -serverpool  ora.TOP_PRIORITY -min 3  -max 3 -f
------ Server Info ---------
NAME=gract1                        ACTIVE_POOLS=ora.TOP_PRIORITY      ACTIVE_CSS_ROLE=hub
NAME=gract2                        ACTIVE_POOLS=ora.TOP_PRIORITY      ACTIVE_CSS_ROLE=hub
NAME=gract3                        ACTIVE_POOLS=ora.TOP_PRIORITY      ACTIVE_CSS_ROLE=hub

------ RAC Database Info ---------
Database name: dw Server pools: STANDARD_PRIORITY
Database is not running.

Database name: ERP Server pools: STANDARD_PRIORITY,TOP_PRIORITY
Instance ERP_1 is running on node gract2
Instance ERP_2 is running on node gract3
Instance ERP_3 is running on node gract1

------ Serverpool Info Config ---------
NAME=Free                            IMPORTANCE=0        MIN_SIZE=0          MAX_SIZE=-1         SERVER_CATEGORY=
NAME=ora.TOP_PRIORITY                IMPORTANCE=10       MIN_SIZE=3          MAX_SIZE=3          SERVER_CATEGORY=ora.hub.category
NAME=ora.STANDARD_PRIORITY           IMPORTANCE=5        MIN_SIZE=2          MAX_SIZE=3          SERVER_CATEGORY=ora.hub.category

------ Serverpool Info ---------
Server pool name: Free             Active servers count: 0            Active server names:

Server pool name: TOP_PRIORITY     Active servers count: 3            Active server names: gract1,gract2,gract3
NAME=gract1 STATE=ONLINE           NAME=gract2 STATE=ONLINE           NAME=gract3 STATE=ONLINE
Server pool name: STANDARD_PRIORITYActive servers count: 0            Active server names:

Pool status

  • All 3 servers are assigned to the TOP_PRIORITY pool – no server is assigned to the STANDARD_PRIORITY pool
  • As database DW is only attached to STANDARD_PRIORITY, no instance of the DW database is running

12c feature : What-If Command Evaluation with serverpool

Current Cluster Configuration

ASM Configuration
[root@gract3 ~]#   asmcmd showclustermode
ASM cluster : Flex mode enabled

[root@gract3 ~]#  srvctl status asm -detail
ASM is running on gract2,gract1
ASM is enabled.

[root@gract3 ~]# crsctl status resource ora.asm -f | grep CARDINALITY=
CARDINALITY=2
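
The CARDINALITY of ora.asm reflects the configured Flex ASM instance count. If an ASM instance were wanted on every hub node, the count could be raised – a hedged example, not run in this setup:
[root@gract1 ~]# srvctl modify asm -count 3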

RAC configuration
[root@gract3 ~]# olsnodes -c
gract
[root@gract3 ~]#   crsctl get node role status -all
Node 'gract1' active role is 'hub'
Node 'gract2' active role is 'hub'
Node 'gract3' active role is 'hub'
Summary : This cluster is running 3 HUB nodes, where gract1 and gract2 are currently hosting our 2 Flex ASM instances 

The following CRS resource output reflects the current status:
[root@gract1 Desktop]# crs
*****  Local Resources: *****
Resource NAME                  TARGET     STATE           SERVER       STATE_DETAILS                       
-------------------------      ---------- ----------      ------------ ------------------                  
ora.ACFS_DG1.ACFS_VOL1.advm    ONLINE     ONLINE          gract1       Volume device /dev/asm/acfs_vol1-443 is online,STABLE
ora.ACFS_DG1.ACFS_VOL1.advm    ONLINE     ONLINE          gract2       Volume device /dev/asm/acfs_vol1-443 is online,STABLE
ora.ACFS_DG1.ACFS_VOL1.advm    ONLINE     OFFLINE         gract3       Unable to connect to ASM,STABLE
ora.ACFS_DG1.dg                ONLINE     ONLINE          gract1       STABLE   
ora.ACFS_DG1.dg                ONLINE     ONLINE          gract2       STABLE   
ora.ACFS_DG1.dg                ONLINE     OFFLINE         gract3       STABLE   
ora.ASMNET1LSNR_ASM.lsnr       ONLINE     ONLINE          gract1       STABLE   
ora.ASMNET1LSNR_ASM.lsnr       ONLINE     ONLINE          gract2       STABLE   
ora.ASMNET1LSNR_ASM.lsnr       ONLINE     OFFLINE         gract3       STABLE   
ora.DATA.dg                    ONLINE     ONLINE          gract1       STABLE   
ora.DATA.dg                    ONLINE     ONLINE          gract2       STABLE   
ora.DATA.dg                    ONLINE     OFFLINE         gract3       STABLE   
ora.LISTENER.lsnr              ONLINE     ONLINE          gract1       STABLE   
ora.LISTENER.lsnr              ONLINE     ONLINE          gract2       STABLE   
ora.LISTENER.lsnr              ONLINE     OFFLINE         gract3       STABLE   
ora.acfs_dg1.acfs_vol1.acfs    ONLINE     ONLINE          gract1       mounted on /u01/acfs/acfs-vol1,STABLE
ora.acfs_dg1.acfs_vol1.acfs    ONLINE     ONLINE          gract2       mounted on /u01/acfs/acfs-vol1,STABLE
ora.acfs_dg1.acfs_vol1.acfs    ONLINE     OFFLINE         gract3       (1) volume /u01/acfs/acfs-vol1 offline,STABLE
ora.net1.network               ONLINE     ONLINE          gract1       STABLE   
ora.net1.network               ONLINE     ONLINE          gract2       STABLE   
ora.net1.network               ONLINE     ONLINE          gract3       STABLE   
ora.ons                        ONLINE     ONLINE          gract1       STABLE   
ora.ons                        ONLINE     ONLINE          gract2       STABLE   
ora.ons                        ONLINE     OFFLINE         gract3       STABLE   
ora.proxy_advm                 ONLINE     ONLINE          gract1       STABLE   
ora.proxy_advm                 ONLINE     ONLINE          gract2       STABLE   
ora.proxy_advm                 ONLINE     OFFLINE         gract3       STABLE   
*****  Cluster Resources: *****
Resource NAME               INST   TARGET       STATE        SERVER          STATE_DETAILS
--------------------------- ----   ------------ ------------ --------------- -----------------------------------------
MyTestVIP                      1   ONLINE       OFFLINE      -               STABLE  
My_apache                      1   ONLINE       OFFLINE      -               STABLE  
ora.LISTENER_SCAN1.lsnr        1   ONLINE       ONLINE       gract1          STABLE  
ora.LISTENER_SCAN2.lsnr        1   ONLINE       ONLINE       gract2          STABLE  
ora.LISTENER_SCAN3.lsnr        1   ONLINE       ONLINE       gract1          STABLE  
ora.MGMTLSNR                   1   ONLINE       ONLINE       gract1          169.254.235.72 192.168.2.111,STABLE
ora.asm                        1   ONLINE       ONLINE       gract1          STABLE  
ora.asm                        3   ONLINE       ONLINE       gract2          STABLE  
ora.cvu                        1   ONLINE       ONLINE       gract1          STABLE  
ora.dw.db                      1   ONLINE       ONLINE       gract2          Open,STABLE  
ora.dw.db                      2   ONLINE       OFFLINE      -               STABLE  
ora.erp.db                     1   ONLINE       ONLINE       gract3          Open,STABLE  
ora.erp.db                     2   OFFLINE      OFFLINE      -               Instance Shutdown,STABLE
ora.erp.db                     3   ONLINE       ONLINE       gract1          Open,STABLE  
ora.gns                        1   ONLINE       ONLINE       gract1          STABLE  
ora.gns.vip                    1   ONLINE       ONLINE       gract1          STABLE  
ora.gract1.vip                 1   ONLINE       ONLINE       gract1          STABLE  
ora.gract2.vip                 1   ONLINE       ONLINE       gract2          STABLE  
ora.gract3.vip                 1   ONLINE       INTERMEDIATE gract2          FAILED OVER,STABLE 
ora.mgmtdb                     1   ONLINE       ONLINE       gract1          Open,STABLE  
ora.oc4j                       1   ONLINE       ONLINE       gract1          STABLE  
ora.scan1.vip                  1   ONLINE       ONLINE       gract1          STABLE  
ora.scan2.vip                  1   ONLINE       ONLINE       gract2          STABLE  
ora.scan3.vip                  1   ONLINE       ONLINE       gract1          STABLE  
The following instances are currently running:
------ RAC Database Info ---------
Database name: dw Server pools: STANDARD_PRIORITY 
Instance dw_1 is running on node gract2

Database name: ERP Server pools: TOP_PRIORITY 
Instance ERP_1 is running on node gract3
Instance ERP_3 is running on node gract1
------ Serverpool Info Config ---------
NAME=Free                          IMPORTANCE=0        MIN_SIZE=0          MAX_SIZE=-1         SERVER_CATEGORY=                    
NAME=ora.TOP_PRIORITY              IMPORTANCE=5        MIN_SIZE=1          MAX_SIZE=2          SERVER_CATEGORY=ora.hub.category    
NAME=ora.STANDARD_PRIORITY         IMPORTANCE=5        MIN_SIZE=1          MAX_SIZE=2          SERVER_CATEGORY=ora.hub.category    

------ Serverpool Info ---------
Server pool name: Free             Active servers count: 0            Active server names:                 
Server pool name: TOP_PRIORITY     Active servers count: 2            Active server names: gract1,gract3  
           NAME=gract1 STATE=ONLINE           NAME=gract3 STATE=ONLINE 
Server pool name: STANDARD_PRIORITYActive servers count: 1            Active server names: gract2         
           NAME=gract2 STATE=ONLINE

What resource changes can we expect when a server crashes or a CRS stack shutdown occurs?

Run crsctl eval to get a rough idea of what is going to happen:
[root@gract3 ~]#   crsctl eval delete server gract2

-> Server 'gract2' will be removed from pools [ora.STANDARD_PRIORITY]     As CRS is down, gract2 has to leave the pool
   Server 'gract3' will be moved from pools    [ora.TOP_PRIORITY] to      gract3 needs to move to the STANDARD_PRIORITY
                                         pools [ora.STANDARD_PRIORITY]    pool to satisfy its MIN_SIZE=1 setting
   Resource 'ora.DATA.dg' (gract2) will be in state    [OFFLINE]          As our Flex ASM runs on gract2, the diskgroups
                                                                          there go OFFLINE
   Resource 'ora.asm' (3/1) will be in state           [OFFLINE]          After the diskgroups are OFFLINE the ASM
                                                                          instance can be stopped
   Resource 'ora.dw.db' (1/1) will be in state         [OFFLINE]          The DW instance needs to go OFFLINE as server
                                                                          gract2 leaves pool STANDARD_PRIORITY

   Resource 'ora.erp.db' (1/1) will be in state        [OFFLINE]          ERP_1 needs to go OFFLINE as server gract3 leaves the TOP_PRIORITY pool

   Resource 'ora.DATA.dg' (gract3) will be in state    [ONLINE|INTERMEDIATE] Flex ASM is restarted on gract3
                                                       on server [gract3]

It seems not all events are displayed – but it is a good test to get a rough idea of what is going on.
The startup of the DW instance (which afterwards actually happened on gract3) was missing from the output of the crsctl eval delete server gract2 command.

After stopping CRS on server gract2 by running:
[root@gract2 ~]#   crsctl stop crs

The following changes are now in effect:
------ RAC Database Info ---------
Database name: dw Server pools: STANDARD_PRIORITY
Instance dw_2 is running on node gract3

Database name: ERP Server pools: TOP_PRIORITY
Instance ERP_3 is running on node gract1

[grid@gract1 ~/PM]$  srvctl status asm -detail
ASM is running on gract3,gract1
ASM is enabled.

Summary

  • crsctl eval is a good tool to verify the clusterware actions triggered by a certain command
  • Due to serverpool operations, even instances not running on the affected server can go OFFLINE
  • Use crsctl eval delete server .. to track the changes caused by a CRS stack outage

Full output from running: crsctl eval delete server ..

[root@gract2 ~]# crsctl eval delete server gract3

Stage Group 1:
--------------------------------------------------------------------------------
Stage Number    Required    Action    
--------------------------------------------------------------------------------

     1            Y        Server 'gract2' will be moved from pools
                [ora.TOP_PRIORITY] to pools
                [ora.STANDARD_PRIORITY]
            Y        Server 'gract3' will be removed from pools
                [ora.STANDARD_PRIORITY]
            Y        Resource 'ora.ACFS_DG1.ACFS_VOL1.advm' (gract3)
                will be in state [OFFLINE]
            Y        Resource 'ora.ACFS_DG1.dg' (gract3) will be in
                state [OFFLINE]
            Y        Resource 'ora.ASMNET1LSNR_ASM.lsnr' (gract3)
                will be in state [OFFLINE]
            Y        Resource 'ora.DATA.dg' (gract3) will be in state
                [OFFLINE]
            Y        Resource 'ora.LISTENER.lsnr' (gract3) will be in
                state [OFFLINE]
            Y        Resource 'ora.LISTENER_SCAN1.lsnr' (1/1) will be
                in state [OFFLINE]
            Y        Resource 'ora.acfs_dg1.acfs_vol1.acfs' (gract3)
                will be in state [OFFLINE]
            Y        Resource 'ora.asm' (3/1) will be in state
                [OFFLINE]
            Y        Resource 'ora.dw.db' (1/1) will be in state
                [OFFLINE]
            Y        Resource 'ora.gract3.vip' (1/1) will be in state
                [OFFLINE]
            Y        Resource 'ora.net1.network' (gract3) will be in
                state [OFFLINE]
            Y        Resource 'ora.ons' (gract3) will be in state
                [OFFLINE]
            Y        Resource 'ora.proxy_advm' (gract3) will be in
                state [OFFLINE]
            Y        Resource 'ora.scan1.vip' (1/1) will be in state
                [OFFLINE]

     2            N        Create new group (Stage Group = 2)
            Y        Resource 'ora.asm' (3/1) will be in state
                [ONLINE|INTERMEDIATE] on server [gract2]
            Y        Resource 'ora.gract3.vip' (1/1) will be in state
                [ONLINE|INTERMEDIATE] on server [gract2]
            Y        Resource 'ora.scan1.vip' (1/1) will be in state
                [ONLINE] on server [gract1]

     3            Y        Resource 'ora.LISTENER_SCAN1.lsnr' (1/1) will be
                in state [ONLINE|INTERMEDIATE] on server
                [gract1]
            Y        Resource 'ora.erp.db' (1/1) will be in state
                [OFFLINE]

--------------------------------------------------------------------------------

Stage Group 2:
--------------------------------------------------------------------------------
Stage Number    Required    Action    
--------------------------------------------------------------------------------

     1            N        Resource 'ora.DATA.dg' (gract2) will be in state
                [ONLINE|INTERMEDIATE] on server [gract2]

--------------------------------------------------------------------------------

crsctl commands supporting the eval switch

  crsctl eval start resource {<resname>|-w <filter>}[-n server] [-f]
  crsctl eval stop resource {<resname>|-w <filter>} [-f]
  crsctl eval relocate resource {<resName> | {<resName>|-all} -s <server> | -w <filter>} {-n <server>} [-f]
  crsctl eval add serverpool <spName> [-file <filePath> | -attr  "<attrName>=<value>[,...]"] [-f] [-admin [-l <level>] [-x] [-a]]
  crsctl eval modify serverpool <spName> {-file <filePath> | -attr  "<attrName>=<value>[,...]"} [-f] [-admin [-l <level>] [-x] [-a]]
  crsctl eval delete serverpool <spName>  [-admin [-l <level>] [-x] [-a]]
  crsctl eval add server <serverName> [-file <filePath> | -attr  "<attrName>=<value>[,...]"] [-f] [-admin [-l <level>] [-x] [-a]]
  crsctl eval relocate server <serverName> -to <toPool> [-f] [-admin [-l <level>] [-x] [-a]]
  crsctl eval delete server <serverName> [-f] [-admin [-l <level>] [-x] [-a]]
  crsctl eval add resource <resName> -type <typeName> [-file <filePath> | -attr  "<attrName>=<value>[,...]"]  [-f]
  crsctl eval modify resource <resName> -attr "<attrName>=<value>[,...]" [-f] 
  crsctl eval fail resource {<resname> | -w <filter>} [-n <server>]
  crsctl eval activate policy <policyName> [-f] [-admin [-l <level>] [-x] [-a]]
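
As a usage sketch for one of these commands, a planned server move between pools could be evaluated before carrying it out (hypothetical example for this cluster):
  # crsctl eval relocate server gract2 -to ora.STANDARD_PRIORITY -f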

Reference

 

12c Feature: Testing VIP/Application failover on LEAF Nodes

Overview – Difference between HUB and LEAF server

  • HUB servers run in the database tier (RDBMS and applications can run here)
  • LEAF servers run in the application tier (only clusterware-managed applications can run here – using VIPs and failover)
  • This means LEAF servers cannot handle any database activity with 12cR1
  • LEAF servers can be used to deploy cluster-aware applications using LEAF VIP/application failover ( see the sample below )
  • In the 12c Beta 2, I/O server instances on LEAF nodes were able to run Oracle RAC database instances
  • This feature ( referred to as indirect ASM client instances ) was dropped for 12cR1
  • For details read :  http://skillbuilders.com/Oracle/Oracle-Consulting-Training.cfm?category=blogs&tab=john-watsons-blog&node=2857#tabs

Configuration

CRS: 12.1.0.1
gract1  : HUB  node 
gract2  : LEAF node 
gract3  : LEAF node

Change a HUB node to a LEAF node

[root@gract3 gract3]#   crsctl get node role status -all
Node 'gract1' active role is 'hub'
Node 'gract2' active role is 'hub'    <-- Let's change this node to a LEAF node
Node 'gract3' active role is 'leaf'


[root@gract3 gract3]# ssh gract2
Last login: Sat Aug  2 18:25:21 2014 from gract3.example.com
[root@gract2 ~]# crsctl set node role leaf 
CRS-4408: Node 'gract2' configured role successfully changed; restart Oracle High Availability Services for new role to take effect.
[root@gract2 ~]# crsctl stop crs 
[root@gract2 ~]# crsctl start crs 
[root@gract2 ~]#  crsctl get node role status -all
Node 'gract1' active role is 'hub'
Node 'gract3' active role is 'leaf'
Node 'gract2' active role is 'leaf'

VIP setup on our LEAF nodes

Create a static network which can be used by our leaf nodes ( see the -leaf switch )

# srvctl add network -netnum  4 -subnet 192.168.1.0/255.255.255.0 -leaf
*****  Local Resources: *****
Resource NAME                  TARGET     STATE           SERVER       STATE_DETAILS                       
-------------------------      ---------- ----------      ------------ ------------------                  
ora.net1.network               ONLINE     ONLINE          gract1       STABLE     <-- HUB  
ora.net4.network               OFFLINE    OFFLINE         gract2       STABLE     <-- LEAF 1
ora.net4.network               OFFLINE    OFFLINE         gract3       STABLE     <-- LEAF 2  

Create a specific network type and our application VIP
# crsctl add type ora.cluster_vip_net4.type -basetype ora.cluster_vip.type
# $GRID_HOME/bin/appvipcfg create -network=4 -ip=192.168.1.199 -vipname=MyTestVIP -user=root

[root@gract2 ~]# crsctl start resource MyTestVIP
CRS-2672: Attempting to start 'ora.net4.network' on 'gract2'
CRS-2676: Start of 'ora.net4.network' on 'gract2' succeeded
CRS-2672: Attempting to start 'MyTestVIP' on 'gract2'
CRS-2676: Start of 'MyTestVIP' on 'gract2' succeeded

*****  Cluster Resources: *****
Resource NAME               INST   TARGET       STATE        SERVER          STATE_DETAILS
--------------------------- ----   ------------ ------------ --------------- -----------------------------------------
MyTestVIP                      1   ONLINE       ONLINE       gract2          STABLE  

Relocate VIP to LEAF node gract3
[root@gract2 ~]# crsctl relocate resource  MyTestVIP
CRS-2672: Attempting to start 'ora.net4.network' on 'gract3'
CRS-2676: Start of 'ora.net4.network' on 'gract3' succeeded
CRS-2673: Attempting to stop 'MyTestVIP' on 'gract2'
CRS-2677: Stop of 'MyTestVIP' on 'gract2' succeeded
CRS-2672: Attempting to start 'MyTestVIP' on 'gract3'
CRS-2676: Start of 'MyTestVIP' on 'gract3' succeeded

[root@gract2 ~]# crs
*****  Local Resources: *****
Resource NAME                  TARGET     STATE           SERVER       STATE_DETAILS                       
-------------------------      ---------- ----------      ------------ ------------------                  
ora.net1.network               ONLINE     ONLINE          gract1       STABLE   
ora.net4.network               ONLINE     ONLINE          gract2       STABLE   
ora.net4.network               ONLINE     ONLINE          gract3       STABLE   

*****  Cluster Resources: *****
Resource NAME               INST   TARGET       STATE        SERVER          STATE_DETAILS
--------------------------- ----   ------------ ------------ --------------- -----------------------------------------
MyTestVIP                      1   ONLINE       ONLINE       gract3          STABLE 
--> The application VIP is up and can run on either of our LEAF nodes gract2 and gract3
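
Before building on the VIP it can be sanity-checked from any cluster node – a quick check, assuming the VIP address 192.168.1.199 used above:
[root@gract1 ~]# crsctl status resource MyTestVIP
[root@gract1 ~]# ping -c 2 192.168.1.199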

Create a LEAF server pool

$ crsctl status server  gract1 -f  | egrep "^NAME|ACTIVE_POOLS|ACTIVE_CSS_ROLE"
NAME=gract1
ACTIVE_POOLS=ora.TOP_PRIORITY
ACTIVE_CSS_ROLE=hub

$ crsctl status server  gract2 -f  | egrep "^NAME|ACTIVE_POOLS|ACTIVE_CSS_ROLE"
NAME=gract2
ACTIVE_POOLS=Free
ACTIVE_CSS_ROLE=leaf

$ crsctl status server  gract3 -f  | egrep "^NAME|ACTIVE_POOLS|ACTIVE_CSS_ROLE"
NAME=gract3
ACTIVE_POOLS=Free
ACTIVE_CSS_ROLE=leaf

[grid@gract1 ~/PM]$  srvctl status serverpool -detail
Server pool name: Free
Active servers count: 2
Active server names: gract2,gract3
NAME=gract2 STATE=ONLINE
NAME=gract3 STATE=ONLINE
Server pool name: Generic
Active servers count: 0
Active server names: 
Server pool name: STANDARD_PRIORITY
Active servers count: 0
Active server names: 
Server pool name: TOP_PRIORITY
Active servers count: 1
Active server names: gract1
NAME=gract1 STATE=ONLINE
--> Our HUB server is attached to the TOP_PRIORITY pool whereas our LEAF servers are waiting in the Free pool

# crsctl add category My_leaf_nodes -attr "ACTIVE_CSS_ROLE=leaf";
# crsctl status category My_leaf_nodes;
NAME=My_leaf_nodes
ACL=owner:root:rwx,pgrp:root:r-x,other::r--
ACTIVE_CSS_ROLE=leaf
EXPRESSION=

# crsctl status server -category My_leaf_nodes;
NAME=gract2
STATE=ONLINE

NAME=gract3
STATE=ONLINE

[root@gract2 ~]# crsctl add serverpool  My_leaf_pool -attr "SERVER_CATEGORY=My_leaf_nodes";
[root@gract2 ~]# crsctl status serverpool My_leaf_pool;
NAME=My_leaf_pool
ACTIVE_SERVERS=gract2 gract3

[root@gract2 ~]#  crsctl status serverpool My_leaf_pool -f
NAME=My_leaf_pool
IMPORTANCE=0
MIN_SIZE=0
MAX_SIZE=-1
SERVER_NAMES=
PARENT_POOLS=
EXCLUSIVE_POOLS=
ACL=owner:root:rwx,pgrp:root:r-x,other::r--
SERVER_CATEGORY=My_leaf_nodes
ACTIVE_SERVERS=gract2 gract3

 

Create and deploy a clusterware application resource for Apache 2.15

Change /etc/httpd/conf/httpd.conf and add our VIP as the listening address: 192.168.1.199:80
Listen 192.168.1.199:80

Check the VIP location
*****  Cluster Resources: *****
Resource NAME               INST   TARGET       STATE        SERVER          STATE_DETAILS
--------------------------- ----   ------------ ------------ --------------- -----------------------------------------
MyTestVIP                      1   ONLINE       ONLINE       gract3          STABLE
--> As our initial testing is on gract2, relocate the VIP to gract2

[root@gract2 bin]#   crsctl relocate resource MyTestVIP
CRS-2673: Attempting to stop 'MyTestVIP' on 'gract3'
CRS-2677: Stop of 'MyTestVIP' on 'gract3' succeeded
CRS-2672: Attempting to start 'MyTestVIP' on 'gract2'
CRS-2676: Start of 'MyTestVIP' on 'gract2' succeeded
[root@gract2 bin]#  crs
*****  Cluster Resources: *****
Resource NAME               INST   TARGET       STATE        SERVER          STATE_DETAILS
--------------------------- ----   ------------ ------------ --------------- -----------------------------------------
MyTestVIP                      1   ONLINE       ONLINE       gract2          STABLE

Configure action script 
[root@gract2 bin]# cat apache.scr 
#!/bin/sh

HTTPDCONFLOCATION=/etc/httpd/conf/httpd.conf
WEBPAGECHECK=http://192.168.1.199:80/icons/apache_pb.gif

case $1 in
'start')
    /usr/sbin/apachectl -k start -f $HTTPDCONFLOCATION
   RET=$?
    # give httpd a moment to come up before the first check
    sleep 10
    ;;
'stop')
    /usr/sbin/apachectl -k stop
   RET=$?
    ;;
'clean')
    /usr/sbin/apachectl -k stop
   RET=$?
    ;;
'check')
    /usr/bin/wget -q --delete-after $WEBPAGECHECK
   RET=$?
    ;;
*)
   RET=0
    ;;
esac
# 0: success; 1 : error
if [ $RET -eq 0 ]; then
exit 0
else
exit 1
fi

Edit /etc/httpd/conf/httpd.conf and put in our VIP
Change 
Listen 80
to
Listen 192.168.1.199:80

Check the Apache application status manually
[root@gract2 bin]# apache.scr start
[root@gract2 bin]#  apache.scr check
[root@gract2 bin]#  echo $?
0
[root@gract2 bin]#  apache.scr stop
[root@gract2 bin]#  apache.scr check
[root@gract2 bin]#   echo $?
1
-> Looks good – we are ready for cluster-wide deployment

Create a cluster managed application resource 
[root@gract2 bin]#  $GRID_HOME/bin/crsctl add resource My_apache -type cluster_resource -attr \
"ACTION_SCRIPT=/usr/local/bin/apache.scr,PLACEMENT=restricted,HOSTING_MEMBERS=gract2 gract3,SERVER_POOLS=My_leaf_pool, \
CHECK_INTERVAL='30',RESTART_ATTEMPTS='2',START_DEPENDENCIES=hard(MyTestVIP) pullup(MyTestVIP), \
STOP_DEPENDENCIES=hard(intermediate:MyTestVIP),CARDINALITY=1 "

[root@gract2 bin]#  $GRID_HOME/bin/crsctl start resource My_apache
CRS-2672: Attempting to start 'My_apache' on 'gract2'
CRS-2676: Start of 'My_apache' on 'gract2' succeeded
...

Check resource properties and status for our apache resource

[root@gract1 Desktop]# $GRID_HOME/bin/crsctl status resource My_apache -f | egrep '^PLACEMENT|HOSTING_MEMBERS|SERVER_POOLS|DEPENDENCIES|^CARDINALITY'
CARDINALITY=1
CARDINALITY_ID=0
HOSTING_MEMBERS=gract2 gract3
PLACEMENT=restricted
SERVER_POOLS=My_leaf_pool
START_DEPENDENCIES=hard(MyTestVIP) pullup(MyTestVIP)
STOP_DEPENDENCIES=hard(intermediate:MyTestVIP)

 HOSTING_MEMBERS=gract2 gract3 : A space-delimited, ordered list of cluster server names that can host a resource. 
                                 This attribute is required only when using administrator management and when the value of the 
                                 PLACEMENT attribute is set to favored or restricted.
 PLACEMENT=restricted          : Oracle Clusterware only considers servers that belong to the server pools listed in the SERVER_POOLS resource attribute  
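
Should the host list or pool assignment ever change, the same attributes can be adjusted on the registered resource – a hedged example reusing the values above:
[root@gract2 bin]# crsctl modify resource My_apache -attr "HOSTING_MEMBERS=gract2 gract3,SERVER_POOLS=My_leaf_pool"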

[root@gract2 bin]#  $GRID_HOME/bin/crsctl status resource My_apache
NAME=My_apache
TYPE=cluster_resource
TARGET=ONLINE
STATE=ONLINE on gract2

Test application/VIP failover

Test application/VIP failover using clusterware resource relocation
Copy over the action script 
[root@gract2 bin]# scp /usr/local/bin/apache.scr gract3://usr/local/bin/
[root@gract2 bin]# ssh gract3 ls -l /usr/local/bin/apache.scr 
-rwxr-xr-x. 1 root root 505 Aug  3 11:05 /usr/local/bin/apache.scr

[root@gract2 bin]#  $GRID_HOME/bin/crsctl relocate resource My_apache
CRS-2527: Unable to start 'My_apache' because it has a 'hard' dependency on 'MyTestVIP'
CRS-2525: All instances of the resource 'MyTestVIP' are already running; relocate is not allowed because the force option was not specified
CRS-4000: Command Relocate failed, or completed with errors.
[root@gract2 bin]#  $GRID_HOME/bin/crsctl relocate resource My_apache -f
CRS-2673: Attempting to stop 'My_apache' on 'gract2'
CRS-2677: Stop of 'My_apache' on 'gract2' succeeded
CRS-2673: Attempting to stop 'MyTestVIP' on 'gract2'
CRS-2677: Stop of 'MyTestVIP' on 'gract2' succeeded
CRS-2672: Attempting to start 'MyTestVIP' on 'gract3'
CRS-2676: Start of 'MyTestVIP' on 'gract3' succeeded
CRS-2672: Attempting to start 'My_apache' on 'gract3'
CRS-2676: Start of 'My_apache' on 'gract3' succeeded

Test application/VIP failover after a CRS crash

Starting firefox from our HUB node:
[root@gract1 Desktop]# firefox http://192.168.1.199:80
--> Apache page is displayed successfully 
Checking cluster resources 
*****  Cluster Resources: *****
Resource NAME               INST   TARGET       STATE        SERVER          STATE_DETAILS
--------------------------- ----   ------------ ------------ --------------- -----------------------------------------
MyTestVIP                      1   ONLINE       ONLINE       gract3          STABLE  
--> Now reboot server gract3

As expected, our HTML page becomes unavailable for a few seconds. 
Firefox error: 
   Unable to connect ->    Firefox can't establish a connection to the server at 192.168.1.199

After a few seconds the VIP becomes available on gract2 and Apache can serve our HTML page again 
*****  Cluster Resources: *****
Resource NAME               INST   TARGET       STATE        SERVER          STATE_DETAILS
--------------------------- ----   ------------ ------------ --------------- -----------------------------------------
MyTestVIP                      1   ONLINE       ONLINE       gract2          STABLE 

Relocate that service from gract2 to gract3

[root@gract1 Desktop]# crsctl relocate resource My_apache -f
CRS-2673: Attempting to stop 'My_apache' on 'gract2'
CRS-2677: Stop of 'My_apache' on 'gract2' succeeded
CRS-2673: Attempting to stop 'MyTestVIP' on 'gract2'
CRS-2677: Stop of 'MyTestVIP' on 'gract2' succeeded
CRS-2672: Attempting to start 'MyTestVIP' on 'gract3'
CRS-2676: Start of 'MyTestVIP' on 'gract3' succeeded
CRS-2672: Attempting to start 'My_apache' on 'gract3'

Cleanup and delete clusterware resources

# crsctl stop   resource My_apache 
# crsctl delete resource My_apache 
# crsctl stop res MyTestVIP
# $GRID_HOME/bin/appvipcfg  delete  -vipname=MyTestVIP
# crsctl delete type ora.cluster_vip_net4.type
# crsctl stop resource  ora.net4.network
# crsctl delete  resource  ora.net4.network
# srvctl  remove serverpool  -serverpool My_leaf_pool

Potential Errors

Error CRS-2667:
[root@gract2 bin]#  $GRID_HOME/bin/crsctl start resource My_apache
CRS-2667: Resource 'My_apache' with PLACEMENT=balanced may only run on servers assigned to Generic and Free, both of which are empty
CRS-4000: Command Start failed, or completed with errors.
Fix : Change PLACEMENT attribute
[root@gract2 bin]#  crsctl  modify resource My_apache -attr "PLACEMENT=restricted,HOSTING_MEMBERS=gract2 gract3"

Error CRS-5809:
[root@gract2 bin]# crsctl start  resource My_apache 
CRS-2672: Attempting to start 'My_apache' on 'gract2'
CRS-5809: Failed to execute 'ACTION_SCRIPT' value of '' for 'My_apache'. Error information 'cmd  not found', Category : -2, OS error : 2
CRS-5809: Failed to execute 'ACTION_SCRIPT' value of '' for 'My_apache'. Error information 'cmd  not found', Category : -2, OS error : 2

Fix:
Check the action script location and permissions on the relevant node and set ACTION_SCRIPT for that resource 
[root@gract2 bin]#   ls -l /usr/local/bin/apache.scr
-rwxr-xr-x. 1 root root 505 Aug  3 10:05 /usr/local/bin/apache.scr
[root@gract2 bin]#  crsctl  modify resource My_apache -attr "ACTION_SCRIPT=/usr/local/bin/apache.scr"

 

Reference

..

RAC 12c : Pluggable Databases

Overview

    • PDBs and their services do not start automatically; it is your responsibility to start them when the CDB starts ( -> use a database trigger, see the sketch after this list )
    • You should create services to connect to a PDB and to achieve failover and load-balancing capabilities across cluster nodes
    • Connect to PDBs using the SCAN address and a service name
    • New data dictionary views : CDB_XXXX views show the related objects for the CDB and all PDBs
    • PDB Benefits
      • Centralized management for patching and backups
      • PDBs reduce resource usage
      •      Storage ( reduced data dictionary storage )
      •      Memory ( PDBs don't run their own Oracle background processes and don't allocate their own SGA )
      •      CPU usage ( less instance overhead as we have only 1 instance for all PDBs )
    • Container IDs:
       0  the whole CDB       : select CON_ID from v$database / v$instance returns 0
       1  root container      : CDB$ROOT   1st container
       2  seed PDB            : PDB$SEED   2nd container
      >2  newly created PDBs               3rd container ..
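
For the first point – automatic PDB startup – a small startup trigger in the CDB root is one common approach. A minimal sketch, assuming it is acceptable to open all PDBs on every instance (the trigger name open_all_pdbs is just an example):

# Hypothetical helper: create an AFTER STARTUP trigger in CDB$ROOT that opens all PDBs.
sqlplus -s / as sysdba <<'EOF'
CREATE OR REPLACE TRIGGER open_all_pdbs
  AFTER STARTUP ON DATABASE
BEGIN
  -- open every PDB on the instance that has just started
  EXECUTE IMMEDIATE 'ALTER PLUGGABLE DATABASE ALL OPEN';
END;
/
EOF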

 

Create a pluggable database with dbca

Invoke dbca :
Manage Pluggable Database
  Create a Pluggable Database
    Create a new Pluggable Database

Summary:
Container Database:           gract1
Pluggable Database:           PDB1
Pluggable database source:    Default
Datafile location:            Use Oracle Managed Files
Configure Database Vault:     No
Configure Label Security:     No

Alert.log :
Fri Oct 04 16:37:29 2013
CREATE PLUGGABLE DATABASE PDB1 ADMIN USER pdb_admin IDENTIFIED BY * ROLES=(CONNECT)  file_name_convert=NONE
Fri Oct 04 16:40:25 2013
****************************************************************
Pluggable Database PDB1 with pdb id - 3 is created as UNUSABLE.
If any errors are encountered before the pdb is marked as NEW,
then the pdb must be dropped
****************************************************************
This instance was first to open pluggable database PDB1 (container=3)
Deleting old file#5 from file$ 
Deleting old file#7 from file$ 
Adding new file#16 to file$(old file#5) 
Adding new file#17 to file$(old file#7) 
Successfully created internal service pdb1 at open
ALTER SYSTEM: Flushing buffer cache inst=1 container=3 local
Fri Oct 04 16:40:35 2013
****************************************************************
Post plug operations are now complete.
Pluggable database PDB1 with pdb id - 3 is now marked as NEW.
****************************************************************
Completed: CREATE PLUGGABLE DATABASE PDB1 ADMIN USER pdb_admin IDENTIFIED BY * ROLES=(CONNECT)  file_name_convert=NONE
alter pluggable database PDB1 open instances=all
Fri Oct 04 16:40:35 2013
This instance was first to open pluggable database PDB1 (container=3)
Pluggable database PDB1 dictionary check beginning
Pluggable Database PDB1 Dictionary check complete
Fri Oct 04 16:40:48 2013
Opening pdb PDB1 (3) with no Resource Manager plan active
XDB installed.
XDB initialized.
Fri Oct 04 16:41:01 2013
Pluggable database PDB1 opened read write
Completed: alter pluggable database PDB1 open instances=all
CREATE SMALLFILE TABLESPACE "USERS" LOGGING  DATAFILE SIZE 5M AUTOEXTEND ON NEXT  1280K MAXSIZE UNLIMITED  EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT  AUTO
Completed: CREATE SMALLFILE TABLESPACE "USERS" LOGGING  DATAFILE SIZE 5M AUTOEXTEND ON NEXT  1280K MAXSIZE UNLIMITED  EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT  AUTO
ALTER DATABASE DEFAULT TABLESPACE "USERS"
Completed: ALTER DATABASE DEFAULT TABLESPACE "USERS"
Fri Oct 04 16:41:23 2013
Shared IO Pool defaulting to 64MB. Trying to get it from Buffer Cache for process 6453.

 

Check the current instance and storage status of a PDB

Instances running for our CDB and its PDBs
SQL> select INST_ID, INSTANCE_NUMBER, INSTANCE_NAME, HOST_NAME, DATABASE_STATUS, con_id  from gv$instance;
   INST_ID INSTANCE_NUMBER INSTANCE_NAME    HOST_NAME         DATABASE_STATUS       CON_ID
---------- --------------- ---------------- -------------------- ----------------- ----------
     1         1 cdb1         gract1.example.com     ACTIVE             0
     3         3 cdb3         gract3.example.com     ACTIVE             0
     2         2 cdb2         gract2.example.com     ACTIVE             0
--> Our CDB is a 3-node RAC database
    PDBs don't run their own instances – they use the CDB instances 

How many PDBs are hosted by our CDB ?
SQL> select * from v$pdbs;
    CON_ID     DBID     CON_UID GUID                  NAME         OPEN_MODE    RES OPEN_TIME              CREATE_SCN TOTAL_SIZE
---------- ---------- ---------- -------------------------------- ---------- ---------- --- ------------------------- ---------- ----------
     2 4097414398 4097414398 FFE2B5D067357B55E0436F01A8C0F0F0 PDB$SEED   READ ONLY    NO  05-AUG-14 03.32.33.866 PM     1720758  283115520
     3 3362522988 3362522988 FFE30B05B94B1D25E0436F01A8C05EFE PDB1         READ WRITE NO  05-AUG-14 03.39.23.762 PM     1755988  288358400
--> We have 2 PDBs - PDB1 is a user created PDB 

What is the status of our PDBs ?
SQL> select INST_ID, CON_ID, DBID, CON_UID, GUID, NAME, OPEN_MODE, RESTRICTED  from gv$pdbs where NAME='PDB1';
   INST_ID     CON_ID      DBID     CON_UID GUID                             NAME    OPEN_MODE  RES
---------- ---------- ---------- ---------- -------------------------------- ------ ---------- ---
     3              3 3362522988 3362522988 FFE30B05B94B1D25E0436F01A8C05EFE PDB1    READ WRITE  NO
     1              3 3362522988 3362522988 FFE30B05B94B1D25E0436F01A8C05EFE PDB1    READ WRITE  NO
     2              3 3362522988 3362522988 FFE30B05B94B1D25E0436F01A8C05EFE PDB1    READ WRITE  NO
--> PDB1 is running  on all 3 RAC nodes ( cdb1,cdb2,cdb3 )

Storage
SQL> select file_name, con_id from cdb_data_files;
FILE_NAME                                                                          CON_ID  
---------------------------------------------------------------------------------- ----------
+DATA/CDB/DATAFILE/system.273.854810097                                                1
+DATA/CDB/DATAFILE/sysaux.283.854809845                                                1
+DATA/CDB/DATAFILE/undotbs1.279.854810307                                              1
+DATA/CDB/DATAFILE/users.295.854810303                                                 1
+DATA/CDB/DATAFILE/undotbs2.294.854810989                                              1
+DATA/CDB/DATAFILE/undotbs3.286.854810995                                              1
+DATA/CDB/DD7C48AA5A4404A2E04325AAE80A403C/DATAFILE/system.275.854810401               2
+DATA/CDB/DD7C48AA5A4404A2E04325AAE80A403C/DATAFILE/sysaux.281.854810401               2
+DATA/CDB/FFE30B05B94B1D25E0436F01A8C05EFE/DATAFILE/system.272.854811641               3
+DATA/CDB/FFE30B05B94B1D25E0436F01A8C05EFE/DATAFILE/sysaux.278.854811821               3
+DATA/CDB/FFE30B05B94B1D25E0436F01A8C05EFE/DATAFILE/users.270.854812071                3
--> PDB1 ( con_id = 3 ) has a system, sysaux and users tablespace 
    PDB$SEED ( con_id = 2 ) has a system and sysaux tablespace only 
    The root container ( con_id = 1 ) has a system, sysaux, undotbs1, undotbs2, undotbs3 and a users tablespace 
    Note: Each RAC node has its own undo tablespace     
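
The same per-container breakdown can be taken from other CDB_ views mentioned in the overview – a small sketch (run from CDB$ROOT; the output will vary with your configuration):

# Hypothetical check: list tablespaces per container from the root of the CDB.
sqlplus -s / as sysdba <<'EOF'
COLUMN tablespace_name FORMAT a25
SELECT con_id, tablespace_name
FROM   cdb_tablespaces
ORDER  BY con_id, tablespace_name;
EOF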

 

Start and stop a pluggable database in a RAC cluster

Startup and shutdown of a PDB 

SQL>  connect sys/sys@gract-scan.grid12c.example.com:/cdb as sysdba
Connected.

Stop a single PDB
SQL> alter pluggable database PDB1 close instances=('cdb1');
SQL>  select INST_ID, CON_ID, DBID, CON_UID, GUID, NAME, OPEN_MODE, RESTRICTED    from gv$pdbs where NAME='PDB1';
   INST_ID     CON_ID        DBID    CON_UID GUID                      NAME    OPEN_MODE  RES
---------- ---------- ---------- ---------- --------------------- ---------- ---------- ---
     3        3 3362522988 3362522988 FFE30B05B94B1D25E0436F01A8C05EFE PDB1    READ WRITE NO
     1        3 3362522988 3362522988 FFE30B05B94B1D25E0436F01A8C05EFE PDB1    MOUNTED
     2        3 3362522988 3362522988 FFE30B05B94B1D25E0436F01A8C05EFE PDB1    READ WRITE NO

Stop 2 PDBs
SQL> alter pluggable database PDB1 close instances=('cdb2','cdb3');
Pluggable database altered.
SQL>  select INST_ID, CON_ID, DBID, CON_UID, GUID, NAME, OPEN_MODE, RESTRICTED    from gv$pdbs where NAME='PDB1';
   INST_ID     CON_ID        DBID    CON_UID GUID                     NAME     OPEN_MODE  RES
---------- ---------- ---------- ---------- ----------------------- ---------- ---------- ---
     1        3 3362522988 3362522988 FFE30B05B94B1D25E0436F01A8C05EFE PDB1    MOUNTED
     3        3 3362522988 3362522988 FFE30B05B94B1D25E0436F01A8C05EFE PDB1    MOUNTED
     2        3 3362522988 3362522988 FFE30B05B94B1D25E0436F01A8C05EFE PDB1    MOUNTED

Open 2 PDBs 
SQL>   alter pluggable database PDB1 open instances=('cdb1','cdb3');
Pluggable database altered.
SQL>  select INST_ID, CON_ID, DBID, CON_UID, GUID, NAME, OPEN_MODE, RESTRICTED    from gv$pdbs where NAME='PDB1';
   INST_ID     CON_ID        DBID    CON_UID GUID                 NAME    OPEN_MODE  RES
---------- ---------- ---------- ---------- -------------------------------- ---------- ---------- ---
     2        3 3362522988 3362522988 FFE30B05B94B1D25E0436F01A8C05EFE PDB1    MOUNTED
     1        3 3362522988 3362522988 FFE30B05B94B1D25E0436F01A8C05EFE PDB1    READ WRITE NO
     3        3 3362522988 3362522988 FFE30B05B94B1D25E0436F01A8C05EFE PDB1    READ WRITE NO

Open a single PDB
SQL>   alter pluggable database PDB1 open instances=('cdb2');
Pluggable database altered.
SQL>  select INST_ID, CON_ID, DBID, CON_UID, GUID, NAME, OPEN_MODE, RESTRICTED    from gv$pdbs where NAME='PDB1';
   INST_ID     CON_ID        DBID    CON_UID GUID                 NAME    OPEN_MODE  RES
---------- ---------- ---------- ---------- -------------------------------- ---------- ---------- ---
     1        3 3362522988 3362522988 FFE30B05B94B1D25E0436F01A8C05EFE PDB1    READ WRITE NO
     3        3 3362522988 3362522988 FFE30B05B94B1D25E0436F01A8C05EFE PDB1    READ WRITE NO
     2        3 3362522988 3362522988 FFE30B05B94B1D25E0436F01A8C05EFE PDB1    READ WRITE NO

Open and close all PDB instances 
SQL> alter pluggable database PDB1 open  instances=all;
SQL> alter pluggable database PDB1 close  instances=all;

Close a PDB with the immediate option
SQL> alter pluggable database PDB1 close immediate instances=('cdb3');


Verify that pdb1 is registered with the listeners:
$ lsnrctl status
..
Service "pdb1" has 1 instance(s).
  Instance "gract1_1", status READY, has 1 handler(s) for this service...
The command completed successfully
-->  The local listener only knows the PDB service running on the local instance

$ lsnrctl status LISTENER_SCAN1
Service "pdb1" has 3 instance(s).
  Instance "cdb1", status READY, has 1 handler(s) for this service...
  Instance "cdb2", status READY, has 1 handler(s) for this service...
  Instance "cdb3", status READY, has 1 handler(s) for this service...
-->  SCAN listener knows  all PDB services currently running on instances cdb1,cdb2 and cdb3 

 

Connect to a PDB instance

Connect to the PDB service using EZConnect
Find the SCAN address and connect to a service
[root@gract1 Desktop]# srvctl config scan
SCAN name: gract-scan.grid12c.example.com, Network: 1
Subnet IPv4: 192.168.1.0/255.255.255.0/eth1
...
[root@gract1 Desktop]# ping gract-scan.grid12c.example.com
PING gract-scan.grid12c.example.com (192.168.1.186) 56(84) bytes of data.
64 bytes from 192.168.1.186: icmp_seq=1 ttl=64 time=0.025 ms..
..

Check the status of the service we want to connect to :
[grid@gract1 ~]$  crs | egrep 'hr.svc|NAME|----'
NAME                           TARGET     STATE           SERVER       STATE_DETAILS   
-------------------------      ---------- ----------      ------------ ------------------
ora.cdb.hr.svc                 ONLINE     ONLINE          gract1       STABLE 
ora.cdb.hr.svc                 ONLINE     ONLINE          gract2       STABLE 

Testing load balancing by connecting to the SCAN address and using service hr  
SQL> connect system/sys@gract-scan.grid12c.example.com:/hr
SQL> select 'Connect Info - Host: ' ||    sys_context('USERENV', 'SERVER_HOST')  || '   PDB:  ' || sys_context('USERENV', 'CON_NAME')  || '    CDB_instance: '
     ||  sys_context('USERENV', 'INSTANCE_NAME')  ||  '    Service:  ' || sys_context('USERENV', 'SERVICE_NAME')  PDB_CONNECT_INFO from sys.dual;
PDB_CONNECT_INFO
--------------------------------------------------------------------------------
Connect Info - Host: gract1   PDB:  PDB1    CDB_instance: cdb1      Service:  hr

SQL> connect system/sys@gract-scan.grid12c.example.com:/hr
PDB_CONNECT_INFO
--------------------------------------------------------------------------------
Connect Info - Host: gract2   PDB:  PDB1    CDB_instance: cdb2      Service:  hr

SQL> connect system/sys@gract-scan.grid12c.example.com:/hr
PDB_CONNECT_INFO
--------------------------------------------------------------------------------
Connect Info - Host: gract1   PDB:  PDB1    CDB_instance: cdb1      Service:  hr
--> Load balancing between nodes ( gract1,gract2 ) running service hr works fine

Testing dedicated server connections to nodes gract1, gract2 and gract3 
SQL> connect system/sys@gract1:1521/hr
PDB_CONNECT_INFO
--------------------------------------------------------------------------------
Connect Info - Host: gract1   PDB:  PDB1    CDB_instance: cdb1      Service:  hr

SQL> connect system/sys@gract2:1521/hr
PDB_CONNECT_INFO
--------------------------------------------------------------------------------
Connect Info - Host: gract2   PDB:  PDB1    CDB_instance: cdb2      Service:  hr

SQL> connect system/sys@gract3:1521/hr
ERROR:
ORA-12514: TNS:listener does not currently know of service requested in connect
descriptor
---> Connecting to the hr service on cdb3/pdb1 fails as the hr service is not running on gract3 

Connect to a PDB instance via tnsnames.ora
pdb1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = gract-scan.grid12c.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = pdb1)
    )
  )

$  sqlplus pdb_admin/sys@pdb1
SQL>  show con_name;
CON_NAME
------------------------------
PDB1
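
The same pattern works for the hr service; a possible tnsnames.ora entry ( the alias name hr_svc is just an
example and not part of the original setup ):
hr_svc =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = gract-scan.grid12c.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = hr)
    )
  )
$ sqlplus system/sys@hr_svc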

Connect via  alter session set container
SQL> connect / as sysdba
SQL> show con_name
CON_NAME
------------------------------
CDB$ROOT

SQL> show pdbs
    CON_ID CON_NAME              OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
     2 PDB$SEED              READ ONLY  NO
     3 PDB_PLUG_NOCOPY2          MOUNTED

SQL> alter session set container=PDB_PLUG_NOCOPY2;
Session altered.
SQL> sho con_name;
CON_NAME
------------------------------
PDB_PLUG_NOCOPY2

What happens to PDB services when closing a PDB or when the CDB crashes ?

Working with services 
Add a service to your PDB 
[oracle@gract1 ~]$ srvctl add service -db cdb -service hr -pdb pdb1 -preferred cdb1,cdb2 -available cdb3
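
The preferred/available placement and the PDB attribute of the new service can be double-checked with srvctl
before starting it ( output omitted here ):
[oracle@gract1 ~]$ srvctl config service -db cdb -service hr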

Check service status  
[grid@gract1 ~]$ crs | egrep 'hr.svc|NAME|----'
NAME                           TARGET     STATE           SERVER       STATE_DETAILS   
-------------------------      ---------- ----------      ------------ ------------------
ora.cdb.hr.svc                 OFFLINE    OFFLINE         gract1       STABLE 
ora.cdb.hr.svc                 OFFLINE    OFFLINE         gract1       STABLE 

Start service 
[grid@gract1 ~]$ srvctl start service -db cdb -service hr
[grid@gract1 ~]$ crs | egrep 'hr.svc|NAME|----'
NAME                           TARGET     STATE           SERVER       STATE_DETAILS   
-------------------------      ---------- ----------      ------------ ------------------
ora.cdb.hr.svc                 ONLINE     ONLINE          gract1       STABLE 
ora.cdb.hr.svc                 ONLINE     ONLINE          gract2       STABLE 

Relocate Service 
NAME                           TARGET     STATE           SERVER       STATE_DETAILS   
-------------------------      ---------- ----------      ------------ ------------------
ora.cdb.hr.svc                 ONLINE     ONLINE          gract1       STABLE 
ora.cdb.hr.svc                 ONLINE     ONLINE          gract2       STABLE 
[grid@gract1 ~]$  srvctl relocate service -db cdb -service hr -oldinst cdb1  -newinst cdb3
[grid@gract1 ~]$  crs | egrep 'hr.svc|NAME|----'
NAME                           TARGET     STATE           SERVER       STATE_DETAILS   
-------------------------      ---------- ----------      ------------ ------------------
ora.cdb.hr.svc                 ONLINE     ONLINE          gract3       STABLE 
ora.cdb.hr.svc                 ONLINE     ONLINE          gract2       STABLE 

Bring back this service again to original node 
[grid@gract1 ~]$ srvctl relocate service -db cdb -service hr -oldinst cdb3 -newinst cdb1
[grid@gract1 ~]$  crs | egrep 'hr.svc|NAME|----'
NAME                           TARGET     STATE           SERVER       STATE_DETAILS   
-------------------------      ---------- ----------      ------------ ------------------
ora.cdb.hr.svc                 ONLINE     ONLINE          gract1       STABLE 
ora.cdb.hr.svc                 ONLINE     ONLINE          gract2       STABLE 
---> PDB-based service relocation works fine - there is no difference in the way services work in a CDB

Stop Service 
[grid@gract1 ~]$ srvctl stop service -db cdb -service hr
[grid@gract1 ~]$  crs | egrep 'hr.svc|NAME|----'
NAME                           TARGET     STATE           SERVER       STATE_DETAILS   
-------------------------      ---------- ----------      ------------ ------------------
ora.cdb.hr.svc                 OFFLINE    OFFLINE         gract1       STABLE 
ora.cdb.hr.svc                 OFFLINE    OFFLINE         gract1       STABLE 

Service registration with local and scan listener
[grid@gract1 ~]$  lsnrctl status listener
Service "hr" has 1 instance(s).
  Instance "cdb1", status READY, has 1 handler(s) for this service...
Service "pdb1" has 1 instance(s).
  Instance "cdb1", status READY, has 1 handler(s) for this service...

[grid@gract1 ~]$  lsnrctl status LISTENER_SCAN1
Service "hr" has 2 instance(s).
  Instance "cdb1", status READY, has 1 handler(s) for this service...
  Instance "cdb2", status READY, has 1 handler(s) for this service...
Service "pdb1" has 3 instance(s).
  Instance "cdb1", status READY, has 1 handler(s) for this service...
  Instance "cdb2", status READY, has 1 handler(s) for this service...
  Instance "cdb3", status READY, has 1 handler(s) for this service...
--> Local listener knows local services running on instance  cdb1
     SCAN listener knows   all services running on instances cdb1,cdb2 and cdb3 

Testing service failover after a PDB shutdown ( shutdown immediate )  and a CDB crash ( shutdown abort )
Status:
[grid@gract1 ~]$  srvctl start service -db cdb -s hr
SQL> select INST_ID, NAME ,SESSION_STATE_CONSISTENCY, global  from gv$active_services where name like 'pdb%' or name like '%hr%';
   INST_ID NAME      SESSION_STATE_CONSISTENCY     GLO
---------- --------- ------------------------------ ---
     3 pdb1                                         NO
     2 hr            DYNAMIC                        NO
     2 pdb1                                         NO
     1 hr            DYNAMIC                        NO
     1 pdb1                                         NO

Stop the PDB
SQL> alter pluggable database PDB1 close immediate instances=('cdb2');
Pluggable database altered.
SQL> select INST_ID, NAME ,SESSION_STATE_CONSISTENCY, global  from gv$active_services where name like 'pdb%' or name like '%hr%';
   INST_ID NAME      SESSION_STATE_CONSISTENCY  GLO
   ---------------- -------------------------- ---
     1 hr                  DYNAMIC             NO
     1 pdb1                                    NO
     3 pdb1                                    NO
     2 pdb1                                    NO
--> Only the hr service is removed from instance cdb2 : the pdb1 service is still active on cdb2
Verifying the PDB service
SQL> connect system/sys@gract2:/pdb1
ERROR:
ORA-01033: ORACLE initialization or shutdown in progress
Process ID: 0
Session ID: 0 Serial number: 0
Warning: You are no longer connected to ORACLE.

SQL> connect system/sys@gract2:/hr
ERROR:
ORA-12514: TNS:listener does not currently know of service requested in connect
descriptor
<<<<<< Not sure whether we hit a BUG here - the hr service shows the expected behaviour 

Restart CDB and open PDB again
SQL>  alter pluggable database PDB1  open instances=('cdb2');
   INST_ID     CON_ID        DBID    CON_UID NAME     OPEN_MODE  RES
---------- ---------- ---------- ---------- ------------ ---------- ---
     2        3 3362522988 3362522988 PDB1     READ WRITE NO
     1        3 3362522988 3362522988 PDB1     READ WRITE NO
     3        3 3362522988 3362522988 PDB1     MOUNTED

ora.cdb.hr.svc                 ONLINE     ONLINE          gract1       STABLE 
ora.cdb.hr.svc                 OFFLINE    OFFLINE         gract2       STABLE 
--> the service remains OFFLINE even after the PDB was restarted on INST_ID 2
    this behavior is documented - services on a PDB need to be restarted manually after the PDB is reopened
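
A minimal way to bring the service back after the PDB has been reopened - and, starting with 12.1.0.2, to make
the PDB open state persistent across restarts ( the SAVE STATE clause is not available in 12.1.0.1 ):
[grid@gract1 ~]$ srvctl start service -db cdb -service hr -instance cdb2
SQL> alter pluggable database PDB1 save state;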

Testing failover after a CDB crash ( shutdown abort )  
SQL>  select INST_ID, NAME ,SESSION_STATE_CONSISTENCY, global  from gv$active_services where name like 'pdb%' or name like '%hr%';
   INST_ID NAME         SESSION_STATE_CONSISTENCY      GLO
---------- ------------ ------------------------------ ---
     3     pdb1                                        NO
     2     hr           DYNAMIC                        NO
     2     pdb1                                        NO
     1     hr           DYNAMIC                        NO
     1     pdb1                                        NO
--> Run a shutdown abort for INST_ID 2 ( == cdb2 ) - services hr and pdb1 should be terminated asap
[oracle@gract2 ~]$ sqlplus / as sysdba
SQL> shutdown abort
SQL> select INST_ID, NAME ,SESSION_STATE_CONSISTENCY, global  from gv$active_services where name like 'pdb%' or name like '%hr%';
   INST_ID NAME               SESSION_STATE_CONSISTENCY      GLO
---------- ------------------ ------------------------------ ---
     3     hr                  DYNAMIC                       NO
     3     pdb1                                              NO
     1     hr                  DYNAMIC                       NO
     1     pdb1                                              NO
--> successful failover of service hr from cdb2 to cdb3 

Summary:

  • PDB based services fail over if the hosting instance crashes or the CDB instance has been stopped with shutdown abort
  • Closing a PDB ( even with the immediate option ) doesn’t fail over the PDB service . In that case you need to relocate the PDB services manually ( not sure whether this is a bug or not )

Unplug PDB database and keep ASM datafiles

SQL> alter pluggable database pdb1 close immediate;
Pluggable database altered.
SQL> alter pluggable database pdb1 unplug into '/home/oracle/PDB/pdb1_unplug.xml';
 alter pluggable database pdb1 unplug into '/home/oracle/PDB/pdb1_unplug.xml'
*
ERROR at line 1:
ORA-65025: Pluggable database PDB1 is not closed on all instances.
--> Close the pluggable database on all instances
 
SQL> alter pluggable database pdb1 close instances=all;
Pluggable database altered.

SQL> alter pluggable database pdb1 unplug into '/home/oracle/PDB/pdb1_unplug.xml';
Pluggable database altered.

SQL> drop pluggable database pdb1 keep datafiles;
Pluggable database dropped.

Verify that the PDB datafiles are still there but our PDB is finally dropped from the data dictionary 
$  asmcmd ls  +DATA/GRACT1/E7EC3DF06C5D1840E0436F01A8C0692E/DATAFILE/
SYSAUX.285.827944651
SYSTEM.284.827944651
USERS.287.827944875

SQL> show pdbs
    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO

 

Plug in a PDB

SQL> create pluggable database pdb_plug_nocopy2 using '/home/oracle/PDB/pdb1_unplug.xml'
        COPY  TEMPFILE REUSE;
Pluggable database created.

SQL> show pdbs;
    CON_ID  CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB_PLUG_NOCOPY2               MOUNTED

SQL> alter pluggable database PDB_PLUG_NOCOPY2 open;
Pluggable database altered.
SQL> show pdbs
    CON_ID CON_NAME                OPEN MODE  RESTRICTED
---------- ---------------------- ---------- ----------
         2 PDB$SEED               READ ONLY  NO
         3 PDB_PLUG_NOCOPY2       READ WRITE NO

Using PDBs with Services and Serverpools

Current status 1 serverpool - 3 nodes 
Server pool name: 12c_pool
Active servers count: 3
Active server names: gract1,gract2,gract3
NAME=gract1 STATE=ONLINE
NAME=gract2 STATE=ONLINE
NAME=gract3 STATE=ONLINE

Create a service for our PDB and verify the service locations
$ srvctl add service -db gract1 -service pdb_s1 -pdb pdb_plug_nocopy2 -serverpool 12c_pool
SQL> select INST_ID, NAME ,pdb    from gv$services where name like 'pdb%';
   INST_ID NAME               PDB
---------- ------------------------------ ------------------------------
         1 pdb_plug_nocopy2          PDB_PLUG_NOCOPY2  
         2 pdb_plug_nocopy2          PDB_PLUG_NOCOPY2
         3 pdb_plug_nocopy2          PDB_PLUG_NOCOPY2
--> PDB PDB_PLUG_NOCOPY2 is currently serviced by all of our CDB instances ( gract1_1, gract1_2, gract1_3 ) 

Create a new server pool and verify serverpool status
$ srvctl add serverpool  -serverpool NEW_12c_pool -min 1 -max 2 -i 5 -f
$ srvctl status serverpool -detail
.. 
Server pool name: 12c_pool
Active servers count: 2
Active server names: gract2,gract3
NAME=gract2 STATE=ONLINE
NAME=gract3 STATE=ONLINE

Server pool name: NEW_12c_pool
Active servers count: 1
Active server names: gract1
NAME=gract1 STATE=ONLINE
--> gract1 is moved from 12c_pool to newly created serverpool NEW_12c_pool
SQL> select INST_ID, NAME ,pdb    from gv$services where name like 'pdb%';
   INST_ID NAME               PDB
---------- ------------------------------ ------------------------------
     2 pdb_plug_nocopy2          PDB_PLUG_NOCOPY2
     3 pdb_plug_nocopy2          PDB_PLUG_NOCOPY2
--> PDB database PDB_PLUG_NOCOPY2 is now only serviced by gract2 and gract3 whereas gract1 can do some work for other RAC/CDBs
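
To cross-check which serverpool a policy-managed database and its services currently use ( output omitted here ):
$ srvctl config database -db gract1 | grep -i 'server pool'
$ srvctl status serverpool -serverpool NEW_12c_pool -detail
$ srvctl status service -db gract1 -service pdb_s1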

 

Error ORA-65139 after create pluggable database

Steps to reproduce:
  - close the PDBs on all instances
  - alter pluggable database pdb1 unplug into ....
  - drop pluggable database pdb1 keep datafiles;
  - create pluggable database with NOCOPY
  - open that  PDB ( here the datafile header will be changed )
  - close and drop  the newly plugged in PDB again  ( using keep datafiles )
  - try to create the pluggable database with NOCOPY
--> ORA-65139 during create pluggable database
SQL> create pluggable database pdb_plug_nocopy2 using '/home/oracle/PDB/pdb1_unplug.xml'
      NOCOPY     TEMPFILE REUSE;
create pluggable database pdb_plug_nocopy2 using '/home/oracle/PDB/pdb1_unplug.xml'
ERROR at line 1:
ORA-65139: Mismatch between XML metadata file and data file
+DATA/GRACT1/E7EC3DF06C5D1840E0436F01A8C0692E/DATAFILE/system.284.827944651 for
value of afn (16 in the plug XML file, 19 in the data file)
---> Not sure whether this is supported, but in case of ORA-65139 the PDB can be plugged in again by editing pdb1_unplug.xml.
---> Sample : change afn to 19 in /home/oracle/PDB/pdb1_unplug.xml .
     Do the same for other XML properties like fcpsb and createscnbas
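
Before editing the XML by hand it is worth checking what the describe file contains and whether Oracle considers
it plug-compatible at all; a sketch ( the grep simply lists the properties mentioned above ):
$ grep -E 'afn|fcpsb|createscnbas' /home/oracle/PDB/pdb1_unplug.xml

SQL> set serveroutput on
SQL> declare
       compatible boolean;
     begin
       compatible := dbms_pdb.check_plug_compatibility(
                       pdb_descr_file => '/home/oracle/PDB/pdb1_unplug.xml',
                       pdb_name       => 'PDB1');
       dbms_output.put_line(case when compatible then 'Compatible' else 'Not compatible - check PDB_PLUG_IN_VIOLATIONS' end);
     end;
     /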

 

Create Common and  Local users

Create a COMMON user
SQL> connect system/sys@gract-scan.grid12c.example.com:/cdb
SQL> create user c##cdb_admin  identified by x container=all;
SQL> GRANT CREATE SESSION TO  c##cdb_admin  container=all;

Connect to CDB 
SQL> connect c##cdb_admin/x@gract-scan.grid12c.example.com:/cdb
Connected.
Connect to PDB
SQL>  connect c##cdb_admin/x@gract-scan.grid12c.example.com:/pdb1
Connected.

Create a LOCAL user
SQL> connect system/sys@gract-scan.grid12c.example.com:/cdb
SQL> create user local_admin  identified by x container=current;
  create user local_admin  identified by x container=current
ERROR at line 1:
ORA-65049: creation of local user or role is not allowed in CDB$ROOT
--> Need to connect to PDB
SQL>  connect system/sys@gract-scan.grid12c.example.com:/pdb1
SQL> create user local_admin  identified by x container=current;
User created.
SQL>  GRANT CREATE SESSION TO local_admin container=current;
Grant succeeded.

SQL>  connect local_admin/x@gract-scan.grid12c.example.com:/pdb1
Connected.
SQL>   connect local_admin/x@gract-scan.grid12c.example.com:/cdb
ERROR:
ORA-01017: invalid username/password; logon denied
--> If a user is not defined in the CDB/PDB you will get ORA-01017

Verify user by querying CDB_USERS view 
SQL> select USERNAME,COMMON,CON_ID from cdb_users where  USERNAME like  '%CDB_ADMIN%' or USERNAME like   '%LOCAL_ADMIN%';
USERNAME         COM     CON_ID
---------------- --- ----------
C##CDB_ADMIN      YES      1
C##CDB_ADMIN      YES      3
LOCAL_ADMIN        NO      3

--> User LOCAL_ADMIN is only defined in container 3 which is our PDB
    User C##CDB_ADMIN can connect to both the CDB and PDB1

PDB FAQ

How do I know if my database is Multitenant or not ?
SQL> select NAME, DECODE(CDB, 'YES', 'Multitenant Option enabled', 'Regular 12c Database: ') "Multitenant Option ?" ,
    OPEN_MODE, CON_ID from V$DATABASE;
NAME      Multitenant Option ?         OPEN_MODE              CON_ID
--------- -------------------------- -------------------- ----------
CDB      Multitenant Option enabled READ WRITE            0

How do I find the CDB and all PDBs ?
SQL>  SELECT NAME, CON_ID, DBID, CON_UID, GUID FROM V$CONTAINERS ORDER BY CON_ID;
NAME                   CON_ID    DBID    CON_UID GUID
------------------------------ ---------- ---------- ---------- --------------------------------
CDB$ROOT                1 1971413870          1 DD7C48AA5A4504A2E04325AAE80A403C
PDB$SEED                2 4097414398 4097414398 FFE2B5D067357B55E0436F01A8C0F0F0
PDB1                    3 3362522988 3362522988 FFE30B05B94B1D25E0436F01A8C05EFE

What Pluggable databases do we have in this container database ?
SQL>  select CON_ID, NAME, OPEN_MODE from V$PDBS;
    CON_ID NAME               OPEN_MODE
---------- ------------------------------ ----------
     3 PDB1               READ WRITE

How do I connect to a Pluggable Database , say, PDB1 and back to root container ?
SQL>  alter session set container = pdb1;
SQL> show con_name
CON_NAME
------------------------------
PDB1
SQL>  ALTER SESSION SET CONTAINER = CDB$ROOT;
SQL> show con_name
CON_NAME
------------------------------
CDB$ROOT

Can I monitor SGA usage on a PDB by PDB basis?
There is a single SGA shared by all pluggable databases. However, you can determine the SGA consumption of each container, i.e. root and PDBs.
SQL> alter session set container=CDB$ROOT;
SQL>  SELECT NAME, CON_ID, DBID, CON_UID, GUID FROM V$CONTAINERS ORDER BY CON_ID;
NAME                   CON_ID    DBID    CON_UID GUID
------------------------------ ---------- ---------- ---------- --------------------------------
CDB$ROOT                1 1971413870          1 DD7C48AA5A4504A2E04325AAE80A403C
PDB$SEED                2 4097414398 4097414398 FFE2B5D067357B55E0436F01A8C0F0F0
PDB1                    3 3362522988 3362522988 FFE30B05B94B1D25E0436F01A8C05EFE
SQL>  select POOL, NAME, BYTES from V$SGASTAT where CON_ID = 3;
POOL         NAME                 BYTES
------------ -------------------------- ----------
shared pool  ktli log buffer pools           616
shared pool  KQR X PO                807856
...
large pool   SWRF Metric Eidbuf         368640
large pool   SWRF Metric CHBs           1802240

SQL> select CON_ID, POOL, sum(bytes) from  v$sgastat group by CON_ID, POOL order by  CON_ID, POOL;
    CON_ID POOL     SUM(BYTES)
---------- ------------ ----------
     0 java pool       4194304
     0 large pool       2179072
     0 shared pool      23187808
     0         122522608
     1 large pool       1867776
     1 shared pool     241739224
     2 large pool       2170880
     2 shared pool       5147720
     3 large pool       2170880
     3 shared pool      27720832

Can I monitor PGA usage on a PDB by PDB basis?
SQL> select CON_ID, sum(PGA_USED_MEM), sum(PGA_ALLOC_MEM), sum(PGA_MAX_MEM) from  v$process group by CON_ID order by  CON_ID;
    CON_ID SUM(PGA_USED_MEM) SUM(PGA_ALLOC_MEM) SUM(PGA_MAX_MEM)
---------- ----------------- ------------------ ----------------
     0       231006520          297505736        399513864
     1         9719884           16203316     20004404

How can I identify shared objects in the data dictionary ?
OBJECT LINK and  METADATA LINK objects are shared objects 
SQL>  select count(sharing),sharing  from cdb_objects group by sharing;
COUNT(SHARING) SHARING
-------------- -------------
     19152 NONE
       441 OBJECT LINK
    252646 METADATA LINK

References

 

12c feature: FLEX ASM

Overview

  • Flex ASM can be configured for a Standard Cluster and for a Flex Cluster   
  • Changing the cluster mode requires cluster downtime.   
  • Oracle does not support changing an Oracle Flex Cluster to an Oracle Clusterware standard Cluster.   
  • Oracle Flex Cluster requires Grid Naming Service (GNS).    Zone delegation is not required.

Verify ASM instance status

Check ASM cluster mode and state  
[grid@gract1 ~]$ asmcmd showclustermode
ASM cluster : Flex mode enabled

[grid@gract1 ~]$ asmcmd showclusterstate
Normal

[grid@gract1 ~/ASM]$   oifcfg getif
eth1  192.168.1.0  global  public
eth2  192.168.2.0  global  cluster_interconnect,asm
--> Only 2 networks present : ASM traffic uses standard interconnect

[grid@gract1 ~]$  olsnodes  -n -s -a
gract1    1    Active    Hub
gract2    2    Active    Hub
gract3    3    Active    Hub
--> Note:  only Hub nodes with access to the storage can run a Flex ASM instance 

ASM resource status: 
[grid@gract1 ~]$  crs | egrep 'STATE|asm|--'
NAME                           TARGET     STATE           SERVER       STATE_DETAILS   
-------------------------      ---------- ----------      ------------ ------------------
ora.asm                        ONLINE     ONLINE          gract1       STABLE 
ora.asm                        ONLINE     ONLINE          gract3       STABLE 
ora.asm                        ONLINE     ONLINE          gract2       STABLE 

$  srvctl status asm -detail
ASM is running on gract3,gract2,gract1
ASM is enabled.

Testing ASM connection:

How many ASM connections do we expect ?
[grid@gract2 ASM]$ srvctl status asm -detail
ASM is running on gract2,gract1
ASM is enabled.

Let's connect to the global +ASM service
[grid@gract2 ASM]$  sqlplus sys/sys@gract1:/+ASM as sysasm @v
SQL> select INSTANCE_NUMBER,INSTANCE_NAME,STATUS, HOST_NAME    from v$instance;
INSTANCE_NUMBER INSTANCE_N STATUS    HOST_NAME
--------------- ---------- ------------ ------------------------------
          1 +ASM1       STARTED    gract1.example.com

[grid@gract2 ASM]$ sqlplus sys/sys@gract2:/+ASM as sysasm @v
SQL> select INSTANCE_NUMBER,INSTANCE_NAME,STATUS, HOST_NAME    from v$instance;
INSTANCE_NUMBER INSTANCE_N STATUS    HOST_NAME
--------------- ---------- ------------ ------------------------------
          2 +ASM2       STARTED    gract2.example.com

[grid@gract2 ASM]$  sqlplus sys/sys@gract3:/+ASM as sysasm 
ERROR:
ORA-12514: TNS:listener does not currently know of service requested in connect descriptor
--> This error is expected as the +ASM service is not running on gract3

ASM Network

  • With Flex ASM in Oracle 12c a new type of network is introduced: the ASM network.
  • This network is used for communication between ASM and its clients and is accessible on all the nodes. 
  • All ASM clients in the cluster have access to one or more ASM networks. 
  • It is also possible to configure a single network to serve both as private interconnect and as ASM network (with oifcfg getif you should see cluster_interconnect,asm) - see the sketch below.
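
A dedicated ASM network could be added on a spare interface with oifcfg; just a sketch - interface eth3 and
subnet 192.168.3.0 are assumptions and do not exist in this test cluster:
$ oifcfg setif -global eth3/192.168.3.0:asm
$ oifcfg getif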

ASM Listeners

  • To support Flex ASM, a set of ASM listeners is configured for every ASM network. 
  • Up to three ASM listener addresses are registered as remote listeners in each client database instance.
  • All client connections are load balanced across the entire set of ASM instances
ASM Listener Details
Note: ASM listeners only run on nodes with ASM instances 
[grid@gract2 ~]$ srvctl  config asm -a
ASM home: /u01/app/121/grid
Password file: +DATA/orapwASM
ASM listener: LISTENER
ASM is enabled.
ASM instance count: 2
Cluster ASM listener: ASMNET1LSNR_ASM

Check clusterware resources 
[root@gract1 var]#  crs  | egrep 'asm|ASMNET|--|STATE_DETAILS'
Resource NAME                  TARGET     STATE           SERVER       STATE_DETAILS                       
-------------------------      ---------- ----------      ------------ ------------------                  
ora.ASMNET1LSNR_ASM.lsnr       ONLINE     ONLINE          gract1       STABLE   
ora.ASMNET1LSNR_ASM.lsnr       ONLINE     ONLINE          gract2       STABLE   
ora.ASMNET1LSNR_ASM.lsnr       ONLINE     ONLINE          gract3       STABLE   

Resource NAME               INST   TARGET       STATE        SERVER          STATE_DETAILS
--------------------------- ----   ------------ ------------ --------------- -----------------------------------------
ora.asm                        1   ONLINE       ONLINE       gract1          Started,STABLE  
ora.asm                        3   ONLINE       ONLINE       gract2          Started,STABLE  

ASM listener details 
[grid@gract2 ~]$ lsnrctl services ASMNET1LSNR_ASM 
LSNRCTL for Linux: Version 12.1.0.1.0 - Production on 07-AUG-2014 17:14:01
Copyright (c) 1991, 2013, Oracle.  All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=ASMNET1LSNR_ASM)))
Services Summary...
Service "+ASM" has 2 instance(s).
  Instance "+ASM2", status READY, has 2 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:0 refused:0 state:ready
         REMOTE SERVER
         (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.2.112)(PORT=1521)))
      "DEDICATED" established:1213 refused:0 state:ready
         LOCAL SERVER
  Instance "+ASM3", status READY, has 1 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:0 refused:0 state:ready
         REMOTE SERVER
         (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.2.113)(PORT=1521)))
The command completed successfully

The ASM instance automatically registers with the ASM listener on port 1521 using the asm or cluster_interconnect,asm network
[grid@gract1 ASM]$  oifcfg getif -type cluster_interconnect,asm
eth2  192.168.2.0  global  cluster_interconnect,asm

--> Instance "+ASM2" runs on gract2  serviced  by ASMNET1LSNR_ASM  listening on 192.168.2.112 Port 1521
    Instance "+ASM3" runs on gract3  serviced  by ASMNET1LSNR_ASM  listening on 192.168.2.113 Port 1521

Check resource dependencies
[root@gract1 var]#  crsctl status resource ora.asm -f | grep -i DEP
RELOCATE_BY_DEPENDENCY=0
START_DEPENDENCIES=weak(ora.LISTENER.lsnr) pullup(ora.ASMNET1LSNR_ASM.lsnr) hard(ora.ASMNET1LSNR_ASM.lsnr)
STOP_DEPENDENCIES=hard(intermediate:ora.ASMNET1LSNR_ASM.lsnr)
--> ASM instance starts and stops ASM listener 

ADVM Proxy

  • ASM Dynamic Volume Manager (ADVM) Proxy is a special Oracle instance.
  • It enables ADVM to connect to Flex ASM and is required to run on the same node as ADVM and ACFS. 
  • It can be shut down when ACFS is not running.
  • The ADVM proxy instance has its ORACLE_SID set to +APX<node number> ( a quick check is sketched below )
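
A quick way to verify the instance type of the proxy; a sketch - assumes you are logged in as user grid on the
node running +APX1:
$ export ORACLE_SID=+APX1
$ sqlplus / as sysasm
SQL> show parameter instance_type
--> an ADVM proxy instance should report instance_type = ASMPROXY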
ADVM Proxy details
$ my_crs_stat
NAME                           TARGET     STATE           SERVER       STATE_DETAILS   
-------------------------      ---------- ----------      ------------ ------------------
ora.proxy_advm                 ONLINE     ONLINE          gract1       STABLE 
ora.proxy_advm                 ONLINE     ONLINE          gract2       STABLE 
ora.proxy_advm                 ONLINE     ONLINE          gract3       STABLE 

$ ps -elf | grep pmon | grep APX
0 S grid      4019     1  0  80   0 - 350587 semtim 07:53 ?       00:00:00 apx_pmon_+APX3

$ srvctl status asm -proxy
ADVM proxy is running on node gract3,gract2,gract1

Stop/Start ADVM proxy server on a certain node:
$ srvctl status asm -proxy
ADVM proxy is running on node gract3,gract2,gract1
$  srvctl stop asm -proxy -node gract3
$ srvctl status asm -proxy
ADVM proxy is running on node gract2,gract1
$  ps -elf | grep pmon | grep APX
$ srvctl start asm -proxy -node gract3
$  ps -elf | grep pmon | grep APX
0 S grid      4967     1  0  80   0 - 350587 semtim 08:09 ?       00:00:00 apx_pmon_+APX3
$ srvctl status asm -proxy
ADVM proxy is running on node gract3,gract2,gract1

 

Remove and add an ASM instance in our FLEX ASM cluster

Remove an ASM instance from our FLEX ASM cluster 
[grid@gract1 ~]$ srvctl modify asm -count 2
[grid@gract1 ~]$  crs | egrep 'STATE|asm|--'
NAME                           TARGET     STATE           SERVER       STATE_DETAILS   
-------------------------      ---------- ----------      ------------ ------------------
ora.asm                        ONLINE     ONLINE          gract3       STABLE 
ora.asm                        ONLINE     ONLINE          gract2       STABLE 

[grid@gract1 ~]$ srvctl status asm -detail
ASM is running on gract3,gract2
ASM is enabled.

Add an ASM instance to  our FLEX ASM cluster  
[grid@gract1 ~]$ srvctl modify asm -count 3
[grid@gract1 ~]$  srvctl status asm -detail
ASM is running on gract3,gract2
ASM is enabled.
--> Here we need to start up the ASM instance manually
[grid@gract1 ~]$ srvctl start asm
[grid@gract1 ~]$  crs | egrep 'STATE|asm|--'
NAME                           TARGET     STATE           SERVER       STATE_DETAILS   
-------------------------      ---------- ----------      ------------ ------------------
ora.asm                        ONLINE     ONLINE          gract1       STABLE 
ora.asm                        ONLINE     ONLINE          gract3       STABLE 
ora.asm                        ONLINE     ONLINE          gract2       STABLE 

[grid@gract1 ~]$  srvctl status asm -detail
ASM is running on gract3,gract2,gract1
ASM is enabled.

Some useful commands to administer Flex ASM

$ srvctl config asm
ASM home: /u01/app/121/grid
Password file: +DATA/orapwASM
ASM listener: LISTENER
ASM instance count: 3
Cluster ASM listener: ASMNET1LSNR_ASM

$  srvctl status asm -detail
ASM is running on gract3,gract2,gract1
ASM is enabled.

$  srvctl status asm -proxy -detail
ADVM proxy is running on node gract3,gract2,gract1
ADVM proxy is enabled

$ crsctl status resource ora.asm -f | grep CARDINALITY=
CARDINALITY=3

Monitor Flex ASM Connections:

SQL> select INST_ID,GROUP_NUMBER, INSTANCE_NAME, DB_NAME, INSTANCE_NAME||':'||DB_NAME client_id, STATUS from gv$asm_client;

   INST_ID GROUP_NUMBER INSTANCE_N DB_NAME  CLIENT_ID         STATUS
---------- ------------ ---------- -------- -------------------- ------------
     3          1 +ASM3       +ASM     +ASM3:+ASM         CONNECTED
     3          1 gract1_1   gract1   gract1_1:gract1     CONNECTED
     3          1 gract1_2   gract1   gract1_2:gract1     CONNECTED
     3          0 +APX1       +APX     +APX1:+APX         CONNECTED
     3          0 +APX2       +APX     +APX2:+APX         CONNECTED
     3          0 +APX3       +APX     +APX3:+APX         CONNECTED
     3          1 -MGMTDB    _mgmtdb  -MGMTDB:_mgmtdb     CONNECTED
     2          1 gract1_3   gract1   gract1_3:gract1     CONNECTED

Reduce/Increase ASM instance count

$  srvctl config asm
ASM home: /u01/app/121/grid
Password file: +DATA/orapwASM
ASM listener: LISTENER
ASM instance count: 2
Cluster ASM listener: ASMNET1LSNR_ASM

$  srvctl status asm -detail
ASM is running on gract3,gract2
$ srvctl modify asm -count 3  
$ srvctl start asm -n gract1
$ srvctl status asm -detail
ASM is running on gract3,gract2,gract1
ASM is enabled.

$  srvctl config asm
ASM home: /u01/app/121/grid
Password file: +DATA/orapwASM
ASM listener: LISTENER
ASM instance count: 3
Cluster ASM listener: ASMNET1LSNR_ASM

Relocate ASM  connections


Due to the bug mentioned below I've tested the ASM relocation feature against 12.1.0.2 - see
   Bug 18003355 : LNX64-12.1-ASM, EXECUTE "RELOCATE" COMMAND, HIT ORA-15344 for further details

Set up the test base by reducing the ASM instance count to 2 in our 3-node cluster running 3 Hub nodes
[oracle@gract1 ~]$ srvctl modify asm -count 2
[oracle@gract1 ~]$ srvctl status asm -detail
ASM is running on gract2,gract1
ASM is enabled.
--> ASM is running on gract1,gract2
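
The asm_cli script used below is not listed in this post; a minimal version could look like this ( the WHERE
clause is an assumption to hide the ASM/APX/MGMTDB clients ):
-- asm_cli.sql
select INSTANCE_NAME from v$instance;
select INST_ID, GROUP_NUMBER, INSTANCE_NAME, DB_NAME,
       INSTANCE_NAME||':'||DB_NAME client_id, STATUS
  from gv$asm_client
 where DB_NAME not like '+%' and DB_NAME <> '_mgmtdb'
 order by INST_ID;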

SQL>  @asm_cli
INSTANCE_NAME
--------------------
+ASM1
   INST_ID GROUP_NUMBER INSTANCE_NAME         DB_NAME  CLIENT_ID        STATUS
---------- ------------ -------------------- -------- -------------------- ------------
     1          2 cdbn1             cdbn     cdbn1:cdbn       CONNECTED
     1          2 cdbn3             cdbn     cdbn3:cdbn       CONNECTED
     2          2 cdbn2             cdbn     cdbn2:cdbn       CONNECTED

  cdbn2:cdbn -> connected to instance id 2 -> local  ASM client 
  cdbn3:cdbn -> connected to instance id 1 -> Remote ASM client
  cdbn1:cdbn -> connected to instance id 1 -> local  ASM  client

What happens when we relocate a client ?  
1) Relocate client cdbn2:cdbn when connected to INST_ID 1 
SQL> alter system relocate client 'cdbn2:cdbn';
  alter system relocate client 'cdbn2:cdbn'
  *
  ERROR at line 1:  
  ORA-15344: client cdbn2:cdbn@+ASM1 not found
     INST_ID GROUP_NUMBER INSTANCE_NAME         DB_NAME  CLIENT_ID        STATUS
  ---------- ------------ -------------------- -------- -------------------- ------------
     1          2 cdbn1             cdbn     cdbn1:cdbn       CONNECTED
     1          2 cdbn3             cdbn     cdbn3:cdbn       CONNECTED
     2          2 cdbn2             cdbn     cdbn2:cdbn       CONNECTED
  --> Error ORA-15344 .
      We need to connect to INST_ID 2 to relocate client 'cdbn2:cdbn'

2) Relocate the local client cdbn1:cdbn on INST_ID 1
SQL> alter system relocate client 'cdbn1:cdbn';
  System altered.
+ASM1 alert.log  
  Mon Aug 11 14:19:10 2014
  NOTE: relocating client cdbn1:cdbn:gract (reg:3580848907)
  NOTE: cleaned up ASM client cdbn1:cdbn:gract connection state  
  Mon Aug 11 14:19:10 2014
  NOTE: Flex client id 0x10005 [cdbn1:cdbn:gract] attempting to reconnect
  NOTE: registered owner id 0x10005 for cdbn1:cdbn:gract (reconnecting)
  NOTE: Flex client cdbn1:cdbn:gract re-registered, osid 17384, mbr 0x0, asmb 32283 (reg:1445977979)
  NOTE: client [cdbn1:cdbn:gract] declared 0 additional pending writes

   INST_ID GROUP_NUMBER INSTANCE_NAME         DB_NAME  CLIENT_ID        STATUS
  ---------- ------------ -------------------- -------- -------------------- ------------
     1          2 cdbn1             cdbn     cdbn1:cdbn       CONNECTED
     1          2 cdbn3             cdbn     cdbn3:cdbn       CONNECTED
     2          2 cdbn2             cdbn     cdbn2:cdbn       CONNECTED
  --> ASM doesn't relocate the local client - instead it just reconnects to the local ASM instance 

3) Relocate the Remote ASM client connection
SQL> alter system relocate client 'cdbn3:cdbn';
+ASM1 alert.log 
  Mon Aug 11 14:25:47 2014
  NOTE: relocating client cdbn3:cdbn:gract (reg:959447971)
  NOTE: cleaned up ASM client cdbn3:cdbn:gract connection state
  Mon Aug 11 14:25:49 2014
  NOTE: detected orphaned client id 0x10003.
  NOTE: client cdbn3:cdbn:gract id 0x10003 has reconnected to ASM inst +ASM2 (reg:309839056), or has been fenced
   INST_ID GROUP_NUMBER INSTANCE_NAME         DB_NAME  CLIENT_ID        STATUS
  ---------- ------------ -------------------- -------- -------------------- ------------
     1          2 cdbn1             cdbn     cdbn1:cdbn       CONNECTED
     2          2 cdbn2             cdbn     cdbn2:cdbn       CONNECTED
     2          2 cdbn3             cdbn     cdbn3:cdbn       CONNECTED
  Note : The relocate did not always work ( sometimes I got a reconnect even for a REMOTE ASM client ).
         In that case just repeat the relocate command 

  To relocate the client back to INST_ID 1 connect to the +ASM2 instance and run 
  SQL> alter system relocate client 'cdbn3:cdbn';

Summary: Local  ASM clients always try to reconnect to the local ASM instance 
         Remote ASM clients really fail over to a remote ASM instance 

Force a Flex ASM network connection by reducing ASM instance count

$  srvctl status asm -detail
ASM is running on gract3,gract2,gract1
ASM is enabled.
$ srvctl modify asm -count 2
$  srvctl status asm -detail
ASM is running on gract3,gract2
ASM is enabled.
--> ASM instance dropped on gract1 - Connected session still working
SQL> select INSTANCE_NUMBER,INSTANCE_NAME,STATUS, HOST_NAME    from v$instance;
INSTANCE_NUMBER INSTANCE_N STATUS    HOST_NAME
--------------- ---------- ------------ ------------------------------
          1 gract1_1   OPEN     gract1.example.com
--> alert.log on  gract1 reports that ASM is reconnecting to ASM instance +ASM3
Thu Oct 03 10:35:35 2013
NOTE: ASMB registering with ASM instance as client 0x10004 (reg:2698621261)
NOTE: ASMB connected to ASM instance +ASM3 (Flex mode; client id 0x10004)
NOTE: ASMB rebuilding ASM server state
NOTE: ASMB rebuilt 1 (of 1) groups
NOTE: ASMB rebuilt 17 (of 17) allocated files
NOTE: fetching new locked extents from server
NOTE: 0 locks established; 0 pending writes sent to server
SUCCESS: ASMB reconnected & completed ASM server state
SQL> select INST_ID,GROUP_NUMBER, INSTANCE_NAME, DB_NAME, INSTANCE_NAME||':'||DB_NAME client_id, STATUS from gv$asm_client
      where DB_NAME = 'gract1';
   INST_ID GROUP_NUMBER INSTANCE_N DB_NAME  CLIENT_ID         STATUS
---------- ------------ ---------- -------- -------------------- ------------
     3          1 gract1_1   gract1   gract1_1:gract1     CONNECTED  <-- ASM client now connected over the ASM network
     3          1 gract1_2   gract1   gract1_2:gract1     CONNECTED
     2          1 gract1_3   gract1   gract1_3:gract1     CONNECTED

 

Force a Flex ASM network connection by killing  ASM asmb process

Testing ASM instance crash 

Prepare a db session as user scott connecting to PDB1 using CDBN1
[oracle@gract1 ~]$ sqlplus scott/tiger@gract1/PDB1
SQL> select INSTANCE_NUMBER,INSTANCE_NAME,STATUS, HOST_NAME    from v$instance;
INSTANCE_NUMBER INSTANCE_N STATUS    HOST_NAME
--------------- ---------- ------------ ------------------------------
          1 cdbn1       OPEN     gract1.example.com
SQL> 
SQL>  show con_name
CON_NAME
------------------------------
PDB1
SQL> select user from dual;
USER
------------------------------
SCOTT
--> Don't close this session - we will test whether this connection survives the ASM instance crash

Connect to the local +ASM1 instance  and check ASM client status 
SQL> @asm_cli
INSTANCE_NAME
--------------------
+ASM1
   INST_ID GROUP_NUMBER INSTANCE_NAME         DB_NAME  CLIENT_ID        STATUS
---------- ------------ -------------------- -------- -------------------- ------------
     1          2 cdbn1             cdbn     cdbn1:cdbn       CONNECTED
     1          2 cdbn3             cdbn     cdbn3:cdbn       CONNECTED
     2          2 cdbn2             cdbn     cdbn2:cdbn       CONNECTED
The +ASM1 instance serves a local client cdbn1:cdbn and a remote client :  cdbn3:cdbn
Let's kill the +ASM1 instance 
[grid@gract1 trace]$ ps -elf | grep -i asm_asmb
0 S grid     32119     1  0  80   0 - 361252 pipe_w 09:37 ?       00:00:00 asm_asmb_+ASM1
[root@gract1 var]#  kill -9  32119

Connect to +ASM2 instance and check ASM status 
INSTANCE_NAME
--------------------
+ASM2
   INST_ID GROUP_NUMBER INSTANCE_NAME         DB_NAME  CLIENT_ID        STATUS
---------- ------------ -------------------- -------- -------------------- ------------
     2          2 cdbn1             cdbn     cdbn1:cdbn       CONNECTED
     2          2 cdbn2             cdbn     cdbn2:cdbn       CONNECTED
     2          2 cdbn3             cdbn     cdbn3:cdbn       CONNECTED
--> Now we have 2 REMOTE ASM clients cdbn1:cdbn and cdbn3:cdbn
    All ASM operations are now serviced by INST_ID 2 ( +ASM2 )

Check SQL session status 
SQL>  select user from dual;
USER
------------------------------
SCOTT
--> SQL session survived the ASM failover 

Note: the ASM instance is automatically restarted by Clusterware resource monitoring
Check CW resources  
[root@gract1 var]#  crs  | egrep 'asm|---|INST'
-------------------------      ---------- ----------      ------------ ------------------                  
Resource NAME               INST   TARGET       STATE        SERVER          STATE_DETAILS
--------------------------- ----   ------------ ------------ --------------- -----------------------------------------
ora.asm                        1   ONLINE       ONLINE       gract1          Started,STABLE  
ora.asm                        3   ONLINE       ONLINE       gract2          Started,STABLE  
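
The automatic restart is driven by the ora.asm resource profile; the relevant attributes can be checked with
crsctl ( values depend on your profile and are not shown here ):
[root@gract1 var]# crsctl status resource ora.asm -f | egrep 'RESTART_ATTEMPTS|AUTO_START|FAILURE'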

Relocate ASM clients as needed
SQL> alter system relocate client 'cdbn3:cdbn';
System altered.
SQL>   alter system relocate client 'cdbn1:cdbn';
System altered.
SQL> @asm_cli
INSTANCE_NAME
--------------------
+ASM2
   INST_ID GROUP_NUMBER INSTANCE_NAME         DB_NAME  CLIENT_ID        STATUS
---------- ------------ -------------------- -------- -------------------- ------------
     1          2 cdbn1             cdbn     cdbn1:cdbn       CONNECTED
     1          2 cdbn3             cdbn     cdbn3:cdbn       CONNECTED
     2          2 cdbn2             cdbn     cdbn2:cdbn       CONNECTED
Reference