Preparing and testing configuration
Change Request for GRID version 11.2.0.4
grac41: switching CI from eth2 192.168.2.101 to eth3 192.168.3.101
grac42: switching CI from eth2 192.168.2.102 to eth3 192.168.3.102
Check RAC nodes and CRS status
[root@grac41 Desktop]# olsnodes -n -i
grac41 1 192.168.1.250
grac42 2 192.168.1.249
[root@grac41 Desktop]# crsctl check cluster -all
**************************************************************
grac41:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
grac42:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
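Optionally, take a full resource baseline before touching the network so you can compare the state after the switch (output omitted here):
[root@grac41 Desktop]# crsctl stat res -t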
[root@grac41 Desktop]# oifcfg getif
eth1 192.168.1.0 global public
eth2 192.168.2.0 global cluster_interconnect
[root@grac42 ~]# oifcfg getif
eth1 192.168.1.0 global public
eth2 192.168.2.0 global cluster_interconnect
Verify that the new interface is ready on all nodes and check that the MTU sizes are equal
OLD CI (from grac41):
eth2 Link encap:Ethernet HWaddr 08:00:27:DF:79:B9
inet addr:192.168.2.101 Bcast:192.168.2.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
NEW CI (from grac41):
eth3 Link encap:Ethernet HWaddr 08:00:27:2E:59:28
inet addr:192.168.3.101 Bcast:192.168.3.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
--> MTU size is 1500 on both devices, and the new CI network addresses are configured and ready
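To compare the MTU across all nodes in one step instead of reading ifconfig output node by node, a loop like the following can be used (a sketch, assuming passwordless root ssh between the nodes):
[root@grac41 ~]# for h in grac41 grac42; do ssh $h "/sbin/ip -o link show eth3"; done   # every node should report the same "mtu 1500"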
Testing connectivity
[root@grac41 network-scripts]# ssh grac41 "/bin/ping -s 1500 -c 10 -I 192.168.3.101 192.168.3.102"
[root@grac41 network-scripts]# ssh grac42 "/bin/ping -s 1500 -c 10 -I 192.168.3.102 192.168.3.101"
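Note that with -s 1500 the ICMP payload plus 28 bytes of IP/ICMP headers exceeds the 1500-byte MTU, so these packets travel fragmented; they still prove connectivity. To verify that the new path carries a full-sized frame without fragmentation, a stricter variant with Linux iputils ping is (1472 = 1500 - 20 IP header - 8 ICMP header):
[root@grac41 network-scripts]# ssh grac41 "/bin/ping -M do -s 1472 -c 10 -I 192.168.3.101 192.168.3.102"   # -M do forbids fragmentation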
Add the new CI interface with oifcfg setif
Stop the newly configured interfaces on all nodes
[root@grac41 network-scripts]# ifconfig eth3 down
[root@grac42 network-scripts]# ifconfig eth3 down
[root@grac41 network-scripts]# oifcfg setif -global eth3/192.168.3.0:cluster_interconnect
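Because the interface is added with -global, the setting is stored cluster-wide in the GPnP profile, so setif only needs to be issued on one node. To inspect the stored value directly, the profile can be dumped as the Grid user (a sketch; the profile comes back as one long XML line, look for the Network element with Use="cluster_interconnect"):
[grid@grac41 ~]$ gpnptool get 2>/dev/null | grep cluster_interconnect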
Verify the change on all nodes
[root@grac41 network-scripts]# oifcfg getif
eth1 192.168.1.0 global public
eth2 192.168.2.0 global cluster_interconnect
eth3 192.168.3.0 global cluster_interconnect
[root@grac42 network-scripts]# oifcfg getif
eth1 192.168.1.0 global public
eth2 192.168.2.0 global cluster_interconnect
eth3 192.168.3.0 global cluster_interconnect
Stop and disable CRS on all nodes
[root@grac41 network-scripts]# crsctl stop crs
[root@grac41 network-scripts]# crsctl disable crs
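The same stop/disable must be run on grac42 as well. To drive both nodes from one place, a loop such as the following works (a sketch, assuming passwordless root ssh and that $GRID_HOME points to the Grid Infrastructure home):
[root@grac41 ~]# for h in grac41 grac42; do ssh $h "$GRID_HOME/bin/crsctl stop crs && $GRID_HOME/bin/crsctl disable crs"; done   # $GRID_HOME is an assumed variable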
Restart the newly configured CI interface on all nodes and rerun the ping tests
[root@grac41 network-scripts]# ifconfig eth3 up
[root@grac42 network-scripts]# ifconfig eth3 up
[root@grac41 network-scripts]# ssh grac41 "/bin/ping -s 1500 -c 10 -I 192.168.3.101 192.168.3.102"
[root@grac41 network-scripts]# ssh grac42 "/bin/ping -s 1500 -c 10 -I 192.168.3.102 192.168.3.101"
Enable and restart CRS on all nodes:
[root@grac41 network-scripts]# crsctl enable crs
CRS-4622: Oracle High Availability Services autostart is enabled.
[root@grac41 network-scripts]# crsctl start crs
CRS-4123: Oracle High Availability Services has been started.
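crsctl start crs returns as soon as Oracle High Availability Services is up, not when the full stack is online. A small poll loop makes sure both nodes are back before continuing (a sketch; 2 is the expected node count):
[root@grac41 network-scripts]# until [ "$(crsctl check cluster -all 2>/dev/null | grep -c CRS-4537)" -eq 2 ]; do sleep 10; done   # wait until CRS reports online on 2 nodes
[root@grac41 network-scripts]# crsctl check cluster -all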
Note: at this stage our CI uses both interfaces, eth2 and eth3!
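To cross-check which interconnect the database instances actually picked up, gv$cluster_interconnects can be queried (a sketch; on 11.2 with Redundant Interconnect/HAIP the view typically shows 169.254.x.x HAIP addresses riding on the configured interfaces rather than the physical IPs):
[oracle@grac41 ~]$ sqlplus -s / as sysdba <<'EOF'
select inst_id, name, ip_address, source from gv$cluster_interconnects order by inst_id;
exit
EOF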
Remove the old interface if required
[root@grac41 network-scripts]# oifcfg delif -global eth2/192.168.2.0
[root@grac42 ~]# oifcfg getif
eth1 192.168.1.0 global public
eth3 192.168.3.0 global cluster_interconnect
[root@grac41 network-scripts]# oifcfg getif
eth1 192.168.1.0 global public
eth3 192.168.3.0 global cluster_interconnect
Stop the old CI interfaces
[root@grac41 ~]# ifconfig eth2 down
[root@grac42 ~]# ifconfig eth2 down
--> Clusterware should remain up and running
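As a final sanity check, confirm that the stack is still healthy and that CSS now references only the new private network (a sketch; the log path follows the 11.2 $GRID_HOME/log/<node>/cssd layout and the exact messages vary by version):
[root@grac41 ~]# crsctl check cluster -all
[grid@grac41 ~]$ grep '192.168.3' $GRID_HOME/log/grac41/cssd/ocssd.log | tail -3   # $GRID_HOME assumed to be the Grid home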
Reference
- How to Modify Private Network Information in Oracle Clusterware (Doc ID 283684.1)