Check ASM and RAC status
[grid@gract1 ~]$ olsnodes -n -s -a
gract1 1 Active Hub
gract2 2 Active Hub
gract3 3 Active Hub
[grid@gract1 ~]$ srvctl status database -d cdb
Instance cdb1 is running on node gract1
Instance cdb2 is running on node gract2
Instance cdb3 is running on node gract3
[grid@gract1 ~]$ srvctl status asm
ASM is running on gract3,gract2
[grid@gract1 ~]$ srvctl config database -d cdb
Database unique name: cdb
Database name: cdb
Oracle home: /u01/app/oracle/product/121/racdb
Oracle user: oracle
Spfile: +DATA/cdb/spfilecdb.ora
Password file: +DATA/cdb/orapwcdb
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: cdb
Database instances: cdb1,cdb2,cdb3
Disk Groups: DATA
Mount point paths:
Services: hr
Type: RAC
Start concurrency:
Stop concurrency:
Database is administrator managed
[grid@gract1 ~]$ asmcmd showclustermode
ASM cluster : Flex mode enabled
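With Flex ASM an ASM instance does not have to run on every Hub node - here only gract2 and gract3 host an ASM instance. The configured ASM cardinality can be checked quickly; a minimal sketch (not part of the original run):
[grid@gract1 ~]$ srvctl config asm           # shows the configured ASM instance count
[grid@gract1 ~]$ srvctl status asm -detail   # shows on which nodes ASM is currently running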
Unmount your ACFS filesystems to avoid 'device busy' errors during the CRS restart
--> For details see the following note.
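A minimal sketch for finding and unmounting mounted ACFS filesystems before the upgrade (the mount point below is a placeholder):
# run as root on each node; the mount point is an example only
[root@gract1 ~]# /sbin/acfsutil registry        # list registered ACFS file systems
[root@gract1 ~]# mount -t acfs                  # show currently mounted ACFS file systems
[root@gract1 ~]# umount /u01/acfs_mount         # unmount each ACFS mount point before the upgrade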
Test your Perl and ASM configuration by running the following command on all cluster nodes
$ /u01/app/121/grid/perl/bin/perl -w -I /u01/app/121/grid/perl/lib/5.14.1 -I /u01/app/121/grid/perl/lib/site_perl/5.14.1
-I /u01/app/121/grid/lib -I /u01/app/121/grid/lib/asmcmd -I /u01/app/121/grid/rdbms/lib/asmcmd /u01/app/121/grid/bin/asmcmdcore spget
--> Here you can fix errors such as wrong settings for LD_LIBRARY_PATH ....
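If the Perl test fails with library loading errors, a typical fix is to point the grid user's environment at the current Grid home; a minimal sketch (paths as used in this cluster):
# example environment settings for the grid user; adjust to your Grid home
export ORACLE_HOME=/u01/app/121/grid
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export PATH=$ORACLE_HOME/bin:$PATH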
Install and run orachk/raccheck tool
Install the latest support bundle: DBSupportBundle_v1_3_6.zip
Unzip the bundle and prepare an output directory
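A minimal sketch for unpacking the bundle (the location of the downloaded zip is an assumption):
[grid@gract1 ~]$ mkdir DBSupportBundle
[grid@gract1 ~]$ cd DBSupportBundle
[grid@gract1 ~/DBSupportBundle]$ unzip /tmp/DBSupportBundle_v1_3_6.zip     # downloaded zip location is an assumption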
[grid@gract1 ~/DBSupportBundle]$ mkdir orachk_OUT
[grid@gract1 ~/DBSupportBundle]$ chmod 777 orachk_OUT
Run orachk as user oracle
Check orachk version
[oracle@gract1 DBSupportBundle]$ orachk -v
ORACHK VERSION: 2.2.5_20140530
[oracle@gract1 DBSupportBundle]$ export RAT_OUTPUT=/home/grid/DBSupportBundle/orachk_OUT
[oracle@gract1 DBSupportBundle]$ ./orachk -u -o pre
Enter upgrade target version (valid versions are 11.2.0.3.0, 11.2.0.4.0, 12.1.0.1.0, 12.1.0.2.0):- 12.1.0.2.0
CRS stack is running and CRS_HOME is not set. Do you want to set CRS_HOME to /u01/app/121/grid?[y/n][y]
Checking ssh user equivalency settings on all nodes in cluster
Node gract2 is configured for ssh user equivalency for oracle user
Node gract3 is configured for ssh user equivalency for oracle user
Searching for running databases . . . . .
List of running databases registered in OCR
1. cdb
2. None of above
Select databases from list for checking best practices. For multiple databases, select 1 for All or comma separated number like 1,2 etc [1-2][1].
Checking Status of Oracle Software Stack - Clusterware, ASM, RDBMS
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
-------------------------------------------------------------------------------------------------------
Oracle Stack Status
-------------------------------------------------------------------------------------------------------
Host Name CRS Installed ASM HOME RDBMS Installed CRS UP ASM UP RDBMS UP DB Instance Name
-------------------------------------------------------------------------------------------------------
gract1 Yes N/A Yes Yes No Yes cdb1
gract2 Yes N/A Yes Yes Yes Yes cdb2
gract3 Yes N/A Yes Yes Yes Yes cdb3
-------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------
Installed components summary
---------------------------------------------------------------------------------------------------------------------------------
GI_HOME ORACLE_HOME Database Names
---------------------------------------------------------------------------------------------------------------------------------
/u01/app/121/grid - 12.1.0.1.0 /u01/app/oracle/product/121/racdb - 12.1.0.1.0 cdb PDB1
---------------------------------------------------------------------------------------------------------------------------------
...
Detailed report (html) - /home/grid/DBSupportBundle/orachk_OUT/orachk_gract1_PDB1_080814_105958/orachk_gract1_PDB1_080814_105958.html
UPLOAD(if required) - /home/grid/DBSupportBundle/orachk_OUT/orachk_gract1_PDB1_080814_105958.zip
[oracle@gract1 DBSupportBundle]$ ls orachk_OUT
orachk_gract1_PDB1_080814_105958 orachk_gract1_PDB1_080814_105958.zip
--> Fix any ERROR or WARNING messages now before moving forward
Create new clusterware location and verify CRS installation with cluvfy
# mkdir -p /u01/app/12102/grid
# chown -R grid:oinstall /u01/app/12102/grid
# chmod -R 775 /u01/app/12102/grid
--> Run these commands on all cluster nodes
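Instead of logging in to each node, the directory setup can be scripted from one node; a sketch, assuming root ssh access to all cluster nodes:
# run as root; node names as in this cluster
for node in gract1 gract2 gract3; do
   ssh $node "mkdir -p /u01/app/12102/grid && \
              chown -R grid:oinstall /u01/app/12102/grid && \
              chmod -R 775 /u01/app/12102/grid"
done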
Download the newest cluvfy tool and prepare it
[grid@gract1 ~]$ mkdir cluvfy
[grid@gract1 ~]$ cd cluvfy
[grid@gract1 ~/cluvfy]$ unzip /media/sf_Kits/CLUVFY/cvupack_Linux_x86_64.zip
Check cluvfy version
[grid@gract1 ~/cluvfy]$ ./bin/cluvfy -version
12.1.0.1.0 Build 112713x8664
Running cluvfy
Check current installation for any errors
[grid@gract1 ~/cluvfy]$ ./bin/cluvfy stage -post crsinst -n gract1,gract2,gract3
..
Checking Flex Cluster node role configuration...
Flex Cluster node role configuration check passed
Post-check for cluster services setup was successful.
Verify new CRS location and run cluvfy with -pre crsinst -upgrade .. rolling
[grid@gract1 ~/cluvfy]$ ./bin/cluvfy stage -pre crsinst -upgrade -n gract1,gract2,gract3 -rolling -src_crshome $GRID_HOME
-dest_crshome /u01/app/12102/grid -dest_version 12.1.0.2.0 -verbose
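If the pre-check reports fixable failures, cluvfy can also generate a fixup script; a sketch (adding the -fixup flag to this pre-check is an assumption about your cluvfy version):
[grid@gract1 ~/cluvfy]$ ./bin/cluvfy stage -pre crsinst -upgrade -n gract1,gract2,gract3 -rolling \
      -src_crshome $GRID_HOME -dest_crshome /u01/app/12102/grid -dest_version 12.1.0.2.0 -fixup -verbose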
Install new Clusterware version – start OUI as user grid
[grid@gract1 grid]$ pwd
/media/sf_Kits/12.1.0.2/grid
[grid@gract1 grid]$ ./runInstaller
-> Upgrade Oracle Grid Infrastructure for a Cluster
If OUI hangs at the 7% progress mark while processing the following step:
Checking: Verify that the ASM instance was configured using an existing ASM parameter file
--> please read the following note
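The condition behind this OUI check can be verified manually: the ASM instance must be using a server parameter file. A quick sketch (run as the grid user with the +ASM1 environment set):
[grid@gract1 ~]$ . oraenv          # set the environment to +ASM1
[grid@gract1 ~]$ asmcmd spget      # must return the ASM spfile path; an empty result means no spfile is in use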
Run rootupgrade.sh on node gract1
[root@gract1 ~]# /u01/app/12102/grid/rootupgrade.sh
..
Monitor the cluster state while rootupgrade.sh is running
From local Node gract1
[grid@gract1 ~]$ crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [12.1.0.1.0]
[grid@gract1 ~]$ crsctl query crs softwareversion
Oracle Clusterware version on node [gract1] is [12.1.0.2.0]
From Remote Node gract3
[grid@gract3 ~]$ srvctl status database -d cdb
Instance cdb1 is not running on node gract1
Instance cdb2 is running on node gract2
Instance cdb3 is running on node gract3
[grid@gract3 ~]$ srvctl status asm -detail
ASM is running on gract3,gract2
ASM is enabled.
[grid@gract3 ~]$ asmcmd showclusterstate
In Rolling Upgrade
--> cdb1 and +ASM1 are not available - the cluster mode is: In Rolling Upgrade
2 ASM instances and 2 RAC instances are still available
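The status commands above can be wrapped in a small loop to watch the cluster from a remote node while rootupgrade.sh runs; a minimal sketch (run as the grid user, e.g. on gract3):
# poll the cluster state every 30 seconds; stop with Ctrl-C
while true; do
    date
    srvctl status database -d cdb
    srvctl status asm -detail
    asmcmd showclusterstate
    sleep 30
done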
Full Output from rootupgrade.sh script on node gract1
[root@gract1 ~]# /u01/app/12102/grid/rootupgrade.sh
...
Using configuration parameter file: /u01/app/12102/grid/crs/install/crsconfig_params
2014/08/09 13:22:43 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.
TFA-00001: Failed to start Oracle Trace File Analyzer (TFA) daemon. Please check TFA logs.
2014/08/09 13:24:53 CLSRSC-4005: Failed to patch Oracle Trace File Analyzer (TFA) Collector. Grid Infrastructure operations will continue.
2014/08/09 13:25:03 CLSRSC-464: Starting retrieval of the cluster configuration data
2014/08/09 13:25:13 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.
2014/08/09 13:25:13 CLSRSC-363: User ignored prerequisites during installation
2014/08/09 13:25:34 CLSRSC-515: Starting OCR manual backup.
2014/08/09 13:25:39 CLSRSC-516: OCR manual backup successful.
2014/08/09 13:25:45 CLSRSC-468: Setting Oracle Clusterware and ASM to rolling migration mode
2014/08/09 13:25:45 CLSRSC-482: Running command: '/u01/app/121/grid/bin/crsctl start rollingupgrade 12.1.0.2.0'
CRS-1131: The cluster was successfully set to rolling upgrade mode.
2014/08/09 13:26:06 CLSRSC-482: Running command: '/u01/app/12102/grid/bin/asmca -silent -upgradeNodeASM -nonRolling false -oldCRSHome /u01/app/121/grid -oldCRSVersion 12.1.0.1.0 -nodeNumber 1 -firstNode true -startRolling false'
ASM configuration upgraded in local node successfully.
2014/08/09 13:26:16 CLSRSC-469: Successfully set Oracle Clusterware and ASM to rolling migration mode
2014/08/09 13:26:16 CLSRSC-466: Starting shutdown of the current Oracle Grid Infrastructure stack
2014/08/09 13:29:17 CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed.
OLR initialization - successful
2014/08/09 13:37:40 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.conf'
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2014/08/09 13:45:39 CLSRSC-472: Attempting to export the OCR
2014/08/09 13:45:40 CLSRSC-482: Running command: 'ocrconfig -upgrade grid oinstall'
2014/08/09 13:47:31 CLSRSC-473: Successfully exported the OCR
2014/08/09 13:47:40 CLSRSC-486:
At this stage of upgrade, the OCR has changed.
Any attempt to downgrade the cluster after this point will require a complete cluster outage to restore the OCR.
2014/08/09 13:47:40 CLSRSC-541:
To downgrade the cluster:
1. All nodes that have been upgraded must be downgraded.
2014/08/09 13:47:40 CLSRSC-542:
2. Before downgrading the last node, the Grid Infrastructure stack on all other cluster nodes must be down.
2014/08/09 13:47:40 CLSRSC-543:
3. The downgrade command must be run on the node gract2 with the '-lastnode' option to restore global configuration data.
2014/08/09 13:48:06 CLSRSC-343: Successfully started Oracle Clusterware stack
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 12c Release 1.
Successfully taken the backup of node specific configuration in OCR.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2014/08/09 13:48:58 CLSRSC-474: Initiating upgrade of resource types
2014/08/09 13:50:01 CLSRSC-482: Running command: 'upgrade model -s 12.1.0.1.0 -d 12.1.0.2.0 -p first'
2014/08/09 13:50:01 CLSRSC-475: Upgrade of resource types successfully initiated.
2014/08/09 13:50:14 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
--> Note the downgrade hints above
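The TFA-00001 / CLSRSC-4005 messages in the log are not fatal - the script itself states that Grid Infrastructure operations will continue. TFA can be checked and restarted after the upgrade; a sketch, assuming the tfactl binary in the new Grid home:
# run as root after the upgrade
[root@gract1 ~]# /u01/app/12102/grid/bin/tfactl print status      # check whether the TFA daemon is running
[root@gract1 ~]# /u01/app/12102/grid/bin/tfactl start             # restart TFA if it is not running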
After rootupgrade.sh finishes on gract1, both cdb1 and +ASM1 are available again
[grid@gract3 ~]$ srvctl status database -d cdb
Instance cdb1 is running on node gract1
Instance cdb2 is running on node gract2
Instance cdb3 is running on node gract3
[grid@gract3 ~]$ srvctl status asm -detail
ASM is running on gract3,gract2,gract1
ASM is enabled.
[grid@gract3 ~]$ asmcmd showclusterstate
In Rolling Upgrade
Run rootupgrade.sh on the 2nd node: gract2
Cluster status while rootupgrade.sh is running
[grid@gract3 ~]$ srvctl status database -d cdb
Instance cdb1 is running on node gract1
Instance cdb2 is not running on node gract2
Instance cdb3 is running on node gract3
[grid@gract3 ~]$ srvctl status asm -detail
ASM is running on gract3,gract1
ASM is enabled.
[grid@gract3 ~]$ asmcmd showclusterstate
In Rolling Upgrade
Full Output from rootupgrade.sh script on node gract2
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/12102/grid/crs/install/crsconfig_params
2014/08/09 13:53:41 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.
TFA-00001: Failed to start Oracle Trace File Analyzer (TFA) daemon. Please check TFA logs.
2014/08/09 13:55:50 CLSRSC-4005: Failed to patch Oracle Trace File Analyzer (TFA) Collector. Grid Infrastructure operations will continue.
2014/08/09 13:55:57 CLSRSC-464: Starting retrieval of the cluster configuration data
2014/08/09 13:56:12 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.
2014/08/09 13:56:12 CLSRSC-363: User ignored prerequisites during installation
ASM configuration upgraded in local node successfully.
2014/08/09 13:56:46 CLSRSC-466: Starting shutdown of the current Oracle Grid Infrastructure stack
2014/08/09 14:00:03 CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed.
OLR initialization - successful
2014/08/09 14:00:58 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.conf'
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2014/08/09 14:05:56 CLSRSC-343: Successfully started Oracle Clusterware stack
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 12c Release 1.
Successfully taken the backup of node specific configuration in OCR.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2014/08/09 14:06:39 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Status after running rootupgrade.sh on gract2
[grid@gract2 ~]$ srvctl status database -d cdb
Instance cdb1 is running on node gract1
Instance cdb2 is running on node gract2
Instance cdb3 is running on node gract3
[grid@gract2 ~]$ srvctl status asm -detail
ASM is running on gract3,gract2,gract1
ASM is enabled.
[grid@gract2 ~]$ asmcmd showclusterstate
In Rolling Upgrade
[grid@gract2 ~]$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [12.1.0.1.0]
[grid@gract2 ~]$ crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [12.1.0.1.0]
[grid@gract2 ~]$ crsctl query crs softwareversion
Oracle Clusterware version on node [gract2] is [12.1.0.2.0]
--> All cluster instances are up; the software version on gract2 is already 12.1.0.2.0
Run rootupgrade.sh on the last node: gract3
[root@gract3 ~]# /u01/app/12102/grid/rootupgrade.sh
...
Status while rootupgrade.sh is running on the last node gract3
[grid@gract2 ~]$ srvctl status database -d cdb
Instance cdb1 is running on node gract1
Instance cdb2 is running on node gract2
Instance cdb3 is not running on node gract3
[grid@gract2 ~]$ srvctl status asm -detail
ASM is running on gract2,gract1
ASM is enabled.
[grid@gract2 ~]$ asmcmd showclusterstate
In Rolling Upgrade
Full Output from rootupgrade.sh script on node gract3
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/12102/grid/crs/install/crsconfig_params
2014/08/09 14:01:01 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.
TFA-00001: Failed to start Oracle Trace File Analyzer (TFA) daemon. Please check TFA logs.
2014/08/09 14:03:06 CLSRSC-4005: Failed to patch Oracle Trace File Analyzer (TFA) Collector. Grid Infrastructure operations will continue.
2014/08/09 14:03:12 CLSRSC-464: Starting retrieval of the cluster configuration data
2014/08/09 14:03:30 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.
2014/08/09 14:03:30 CLSRSC-363: User ignored prerequisites during installation
ASM configuration upgraded in local node successfully.
2014/08/09 14:03:59 CLSRSC-466: Starting shutdown of the current Oracle Grid Infrastructure stack
2014/08/09 14:06:01 CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed.
OLR initialization - successful
2014/08/09 14:06:52 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.conf'
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2014/08/09 14:10:56 CLSRSC-343: Successfully started Oracle Clusterware stack
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 12c Release 1.
Successfully taken the backup of node specific configuration in OCR.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2014/08/09 14:11:11 CLSRSC-478: Setting Oracle Clusterware active version on the last node to be upgraded
2014/08/09 14:11:11 CLSRSC-482: Running command: '/u01/app/12102/grid/bin/crsctl set crs activeversion'
Started to upgrade the Oracle Clusterware. This operation may take a few minutes.
Started to upgrade the CSS.
The CSS was successfully upgraded.
Started to upgrade Oracle ASM.
Started to upgrade the CRS.
The CRS was successfully upgraded.
Successfully upgraded the Oracle Clusterware.
Oracle Clusterware operating version was successfully set to 12.1.0.2.0
2014/08/09 14:12:27 CLSRSC-479: Successfully set Oracle Clusterware active version
2014/08/09 14:14:25 CLSRSC-476: Finishing upgrade of resource types
2014/08/09 14:14:36 CLSRSC-482: Running command: 'upgrade model -s 12.1.0.1.0 -d 12.1.0.2.0 -p last'
2014/08/09 14:14:36 CLSRSC-477: Successfully completed upgrade of resource types
PRCN-3004 : Listener MGMTLSNR already exists
2014/08/09 14:15:46 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
--> The upgrade script detects the last node and runs crsctl set crs activeversion
The last node upgrades the resource types by running: upgrade model -s 12.1.0.1.0 -d 12.1.0.2.0 -p last
Status after running rootupgrade.sh on gract3
[grid@gract3 ~]$ srvctl status database -d cdb
Instance cdb1 is running on node gract1
Instance cdb2 is running on node gract2
Instance cdb3 is running on node gract3
[grid@gract3 ~]$ srvctl status asm -detail
ASM is running on gract3,gract2,gract1
ASM is enabled.
[grid@gract3 ~]$ asmcmd showclusterstate
Normal
[grid@gract3 ~]$ crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [12.1.0.1.0]
[grid@gract3 ~]$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [12.1.0.2.0]
[grid@gract3 ~]$ crsctl query crs softwareversion
Oracle Clusterware version on node [gract3] is [12.1.0.2.0]
--> Clusterware is in normal state and was successfully upgraded to 12.1.0.2
--> Go back to OUI and continue installation
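The version queries can be cross-checked on every node with a small script; a sketch (node names and the new Grid home path as used in this cluster):
# run as the grid user; verify the software version on each node and the cluster-wide active version
for node in gract1 gract2 gract3; do
    echo "== $node =="
    ssh $node /u01/app/12102/grid/bin/crsctl query crs softwareversion
done
/u01/app/12102/grid/bin/crsctl query crs activeversion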