Tracefile actions generated by an instance/node eviction, ordered by timestamp

CRS alert.log on grac1 tracks communication errors and issues a cluster removal for node grac3
 Thu Sep 26 11:51:53 2013 - [ alertgrac1.log ]  
   [cssd(2929)]CRS-1612:Network communication with node grac3 (3) missing for 50% of timeout interval.  Removal of this node from cluster in 14.340 seconds
   [cssd(2929)]CRS-1611:Network communication with node grac3 (3) missing for 75% of timeout interval.  Removal of this node from cluster in 7.340 seconds
 2013-09-26 11:51:47.535  - [ alertgrac1.log ]  
   [cssd(2929)]CRS-1610:Network communication with node grac3 (3) missing for 90% of timeout interval.  Removal of this node from cluster in 2.340 seconds
 2013-09-26 11:51:49.890 [ alertgrac1.log ]   
   [cssd(2929)]CRS-1632:Node grac3 is being removed from the cluster in cluster incarnation 269544510
 2013-09-26 11:51:49.906 [ alertgrac1.log ]   
   [cssd(2929)]CRS-1601:CSSD Reconfiguration complete. Active nodes are grac1 grac2 .
 2013-09-26 11:51:48.035 [ alertgrac1.log ]   
   [ohasd(2549)]CRS-8011:reboot advisory message from host: grac3, component: cssmonit, with time stamp: L-2013-09-26-11:51:48.150
   [ohasd(2549)]CRS-8013:reboot advisory message text: Rebooting after limit 28000 exceeded; disk timeout 28000, network timeout 27590, last heartbeat 
      from CSSD at epoch seconds 1380189080.100, 28052 milliseconds ago based on invariant clock value of 3427242
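
The CRS-1612/1611/1610 messages are a countdown against the CSS network timeout: CSSD warns at 50%, 75% and 90% of the interval before it evicts the node. A minimal sketch (not from the original post) that condenses such countdown lines into node / percent / seconds-left triples; /tmp/css_countdown.txt is a made-up file holding the lines from the alertgrac1.log excerpt above:

```shell
#!/bin/sh
# Sketch: condense the CSS eviction countdown (CRS-1612/1611/1610) into
# "node percent seconds-left" triples. File name and path are made up.
cat > /tmp/css_countdown.txt <<'EOF'
[cssd(2929)]CRS-1612:Network communication with node grac3 (3) missing for 50% of timeout interval.  Removal of this node from cluster in 14.340 seconds
[cssd(2929)]CRS-1611:Network communication with node grac3 (3) missing for 75% of timeout interval.  Removal of this node from cluster in 7.340 seconds
[cssd(2929)]CRS-1610:Network communication with node grac3 (3) missing for 90% of timeout interval.  Removal of this node from cluster in 2.340 seconds
EOF
awk '/CRS-16(10|11|12)/ {
       # with whitespace splitting: $5 = node, $9 = "NN%", $(NF-1) = seconds
       print $5, $9, $(NF-1), "s left"
     }' /tmp/css_countdown.txt
```

At 50% of the interval about 14 s remain, which roughly matches the 27590 ms network timeout later reported in the CRS-8013 reboot advisory.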

DB alert.log on grac1 reports the start of a DRM action
 Thu Sep 26 11:51:53 2013 [ alert_GRACE2_1.log ] 
 Reconfiguration started (old inc 6, new inc 8)
 List of instances:
   1 3 (myinst: 1)
   Global Resource Directory frozen
  * dead instance detected - domain 0 invalid = TRUE
  Communication channels reestablished
  * dead instance detected - domain 0 invalid = TRUE
  Communication channels reestablished
  * domain 0 not valid according to instance 3
  

 [ alert_GRACE2_1.log ] 
  Master broadcasted resource hash value bitmaps
  Non-local Process blocks cleaned out
  * domain 0 valid = 0 according to instance 1

[ alert_GRACE2_3.log ]   
  Master broadcasted resource hash value bitmaps
  Non-local Process blocks cleaned out

DB alert.log on grac1 reports that all outstanding DRM actions are now finished
[ alert_GRACE2_1.log     ]
  LMS 0: 0 GCS shadows cancelled, 0 closed, 0 Xw survived
  Set master node info
  Submitted all remote-enqueue requests
  Dwn-cvts replayed, VALBLKs dubious
  All grantable enqueues granted
  Submitted all GCS remote-cache requests
  Fix write in gcs resources
  Reconfiguration complete
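
Every reconfiguration episode in a DB alert.log is bracketed by "Reconfiguration started" and "Reconfiguration complete", so a simple sed range is enough to cut one episode out for inspection. A sketch over a made-up sample file that replays a few of the lines above:

```shell
#!/bin/sh
# Sketch: extract one reconfiguration episode from a DB alert log.
# /tmp/alert_sample.log is a made-up file with lines from the excerpts above.
cat > /tmp/alert_sample.log <<'EOF'
Thu Sep 26 11:51:53 2013
Reconfiguration started (old inc 6, new inc 8)
 Global Resource Directory frozen
 Set master node info
 Submitted all remote-enqueue requests
Reconfiguration complete
Thu Sep 26 11:52:03 2013
EOF
# print everything between the start and end markers, inclusive
sed -n '/Reconfiguration started/,/Reconfiguration complete/p' /tmp/alert_sample.log
```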

DB alert.log on grac2 reports that instance recovery for grac3 was started and all outstanding DRM actions are finished
Thu Sep 26 11:51:57 2013 [ alert_GRACE2_3.log ]  
  LMS 0: 0 GCS shadows cancelled, 0 closed, 0 Xw survived
  Set master node info
  Submitted all remote-enqueue requests
  Dwn-cvts replayed, VALBLKs dubious
  All grantable enqueues granted
  Post SMON to start 1st pass IR
  Submitted all GCS remote-cache requests
  Post SMON to start 1st pass IR
  Fix write in gcs resources
 Reconfiguration complete
 Instance recovery: looking for dead threads
 Beginning instance recovery of 1 threads
 Started redo scan
 Completed redo scan
  read 16 KB redo, 33 data blocks need recovery
 Started redo application at
  Thread 1: logseq 116, block 7933
 Recovery of Online Redo Log: Thread 1 Group 2 Seq 116 Reading mem 0
   Mem# 0: +DATA/grace2/onlinelog/group_2.262.821450273
 Completed redo application of 0.01MB
 Completed instance recovery at
  Thread 1: logseq 116, block 7965, scn 11572979
  26 data blocks read, 45 data blocks written, 16 redo k-bytes read
 Thread 1 advanced to log sequence 117 (thread recovery)
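
The reported redo volume can be cross-checked against the start and end positions: recovery ran from block 7933 to block 7965 of log sequence 116, i.e. 32 redo log blocks, and 32 x 512 bytes = 16 KB, matching "read 16 KB redo" above. A tiny sketch of that arithmetic (the 512-byte redo log block size is an assumption; it varies by platform):

```shell
#!/bin/sh
# Cross-check of the redo span scanned during instance recovery.
# Block numbers come from the alert.log excerpt above; the 512-byte redo
# log block size is an assumption, not taken from the log.
start_block=7933     # "Started redo application at ... block 7933"
end_block=7965       # "Completed instance recovery at ... block 7965"
redo_blocksize=512   # bytes per redo log block (assumed)
echo "$(( (end_block - start_block) * redo_blocksize / 1024 )) KB redo read"
# prints: 16 KB redo read
```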

DB alert.log on grac1 reports node grac3 as dead
 Thu Sep 26 11:52:03 2013 [ alert_GRACE2_1.log ]  
 minact-scn: master found reconf/inst-rec before recscn scan old-inc#:8 new-inc#:8
 minact-scn: master continuing after IR
 minact-scn: Master considers inst:2 dead
 [crsd(3685)]CRS-5504:Node down event reported for node 'grac3'.
 Restarting dead background process DIA0

DB alert.log on grac2 reports that grac3 was removed from the cluster
 Thu Sep 26 11:52:03 2013 [ alert_GRACE2_3.log ]   
 DIA0 started with pid=10, OS id=12045  
 [crsd(3685)]CRS-2773:Server 'grac3' has been removed from pool 'ora.SrvPool1'.

CRS alert.log on grac1 reports that grac3 has rejoined the cluster
 2013-09-26 11:54:35.751 [ alertgrac1.log ]   
 [cssd(2929)]CRS-1601:CSSD Reconfiguration complete. Active nodes are grac1 grac2 grac3 .
 [crsd(3685)]CRS-2772:Server 'grac3' has been assigned to pool 'ora.SrvPool1'.
 [/u01/app/11203/grid/bin/oraagent.bin(3824)]CRS-5016:Process "/u01/app/11203/grid/bin/lsnrctl" spawned by agent "/u01/app/11203/grid/bin/oraagent.bin" 
 for action "check" failed: details at "(:CLSN00010:)" in "/u01/app/11203/grid/log/grac1/agent/crsd/oraagent_grid/oraagent_grid.log"
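
The CRS-1601 lines are the easiest way to follow cluster membership over time, since each one lists the active nodes after a CSSD reconfiguration. A sketch over a made-up sample file with the two membership states seen in this post:

```shell
#!/bin/sh
# Sketch: follow cluster membership via CSSD reconfiguration (CRS-1601) lines.
# /tmp/crs_sample.log is a made-up file with lines from the excerpts above.
cat > /tmp/crs_sample.log <<'EOF'
[cssd(2929)]CRS-1601:CSSD Reconfiguration complete. Active nodes are grac1 grac2 .
[cssd(2929)]CRS-1601:CSSD Reconfiguration complete. Active nodes are grac1 grac2 grac3 .
EOF
# keep only the membership part of each CRS-1601 message
grep -o 'Active nodes are .*' /tmp/crs_sample.log
```

The first line is the membership right after the eviction (grac1 grac2), the second after grac3 rejoined.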
2013-09-26 11:54:35.751 [ alert_GRACE2_2.log ]  
 Starting ORACLE instance (normal)
 Private Interface 'eth1:1' configured from GPnP for use as a private interconnect.
   [name='eth1:1', type=1, ip=169.254.146.89, mac=08-00-27-8b-27-0b, net=169.254.0.0/16, mask=255.255.0.0, use=haip:cluster_interconnect/62]
 Public Interface 'eth0' configured from GPnP for use as a public interface.
   [name='eth0', type=1, ip=192.168.1.63, mac=08-00-27-05-d7-51, net=192.168.1.0/24, mask=255.255.255.0, use=public/1]
 Public Interface 'eth0:1' configured from GPnP for use as a public interface.
   [name='eth0:1', type=1, ip=192.168.1.109, mac=08-00-27-05-d7-51, net=192.168.1.0/24, mask=255.255.255.0, use=public/1]
 Public Interface 'eth0:2' configured from GPnP for use as a public interface.
   [name='eth0:2', type=1, ip=192.168.1.120, mac=08-00-27-05-d7-51, net=192.168.1.0/24, mask=255.255.255.0, use=public/1]
 Picked latch-free SCN scheme 3
 Using LOG_ARCHIVE_DEST_1 parameter default value as /u01/app/oracle/product/11203/racdb/dbs/arch
 Autotune of undo retention is turned on.
 Starting up:
 Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
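
Building the merged timeline shown in this post by hand can be sketched in a few lines of shell: normalize the two timestamp styles ("Thu Sep 26 11:51:53 2013" in the DB alert logs, "2013-09-26 11:51:49.890" in the CRS alert log) to a sortable ISO form and sort. This assumes GNU date; the /tmp/evict_demo files are made up for the demo, and in real logs the message lines fail to parse as dates, so the loop skips them:

```shell
#!/bin/sh
# Sketch: merge timestamp lines from several alert logs into one timeline.
# Assumes GNU date; directory and file contents are made up for the demo.
mkdir -p /tmp/evict_demo
cat > /tmp/evict_demo/alertgrac1.log <<'EOF'
2013-09-26 11:51:49.890
2013-09-26 11:54:35.751
EOF
cat > /tmp/evict_demo/alert_GRACE2_1.log <<'EOF'
Thu Sep 26 11:51:53 2013
Thu Sep 26 11:52:03 2013
EOF
for f in /tmp/evict_demo/*.log; do
  while IFS= read -r line; do
    # normalize both timestamp styles to a sortable ISO form;
    # lines that date(1) cannot parse are skipped
    ts=$(date -d "$line" '+%Y-%m-%d %H:%M:%S' 2>/dev/null) || continue
    printf '%s  [ %s ]\n' "$ts" "$(basename "$f")"
  done < "$f"
done | sort
```

The output interleaves CRS and DB events in timestamp order, which is exactly the reading order used throughout this post.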
