Upgrade 12.1.0.1 Oracle Multitenant database to 12.1.0.2

Overview

  • Basically there are two techniques to upgrade an Oracle Multitenant environment:
    • Everything at Once
    • One at a Time — via unplug/plug
  • Even though plugging a 12.1.0.1 PDB into a 12.1.0.2 CDB will take some time, installing the 12.1.0.2 CDB first has some advantages:
    • Intensive testing can occur with that 12.1.0.2 CDB, which will become our production database later
    • If problems show up during the upgrade despite intensive testing, we can easily switch back to our 12.1.0.1 env.
    • In that switch-back case you simply need to plug your PDB back into your 12.1.0.1 CDB (see the sketch below)
  • This blog covers the unplug/plug approach 
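
A minimal sketch of that fallback, assuming the unplug manifest /home/oracle/RAC/UPGRADE/pdb1.xml used later in
this post, the old 12.1.0.1 home /u01/app/oracle/product/121/racdb, and that the PDB datafiles have not yet been
modified by catupgrd.sql:
export ORACLE_HOME=/u01/app/oracle/product/121/racdb     # old 12.1.0.1 home
export ORACLE_SID=cdb1
$ORACLE_HOME/bin/sqlplus / as sysdba <<'EOF'
-- re-attach the PDB to the 12.1.0.1 CDB using the existing datafiles
create pluggable database pdb1 using '/home/oracle/RAC/UPGRADE/pdb1.xml' nocopy;
alter pluggable database pdb1 open instances=all;
EOF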

Install 12.1.0.2 software  ( no impact to our production env )

Create a new ORACLE_HOME on all cluster nodes and install the 12.1.0.2 software (a loop over all nodes is sketched after the commands below)
[root@gract1 ~]# mkdir -p  /u01/app/oracle/product/12102/racdb
[root@gract1 ~]# chown oracle:oinstall  /u01/app/oracle/product/12102/racdb
[root@gract1 ~]# chmod 775 /u01/app/oracle/product/12102/racdb
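
Instead of repeating the three commands on every node, a small loop (a sketch, using the node names of this cluster)
can do it in one go:
for node in gract1 gract2 gract3; do
  ssh root@${node} "mkdir -p /u01/app/oracle/product/12102/racdb && \
                    chown oracle:oinstall /u01/app/oracle/product/12102/racdb && \
                    chmod 775 /u01/app/oracle/product/12102/racdb"
done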

Run OUI and install 12.1.0.2 using the software-only option 
[oracle@gract1 database]$ pwd
/media/sf_Kits/12.1.0.2/database
[oracle@gract1 database]$ ./runInstaller
  --> Select database software only 
   --> Select RAC database

Run any fixup scripts if there are fixable errors
[root@gract1 ~]# /tmp/CVU_12.1.0.2.0_oracle/runfixup.sh
All Fix-up operations were completed successfully.

[root@gract1 ~]# ssh gract2
[root@gract2 ~]# /tmp/CVU_12.1.0.2.0_oracle/runfixup.sh
All Fix-up operations were completed successfully.

[root@gract1 ~]# ssh gract3
[root@gract3 ~]# /tmp/CVU_12.1.0.2.0_oracle/runfixup.sh
All Fix-up operations were completed successfully.

Create a new CDB database ( no impact to our production env )

After OUI finishes run /u01/app/oracle/product/12102/racdb/root.sh on all nodes 
--> Use dbca to create a new database
  --> Create new database
   --> Advanced mode 
    --> RAC Database - Admin managed
     --> Create an empty Container database : dbname : cdbn

Run preupgrade scripts in 12.1.0.1 PDB (  no impact to our production env )

Copy preupgrd.sql and utluppkg.sql from the rdbms/admin directory of the new 12.1.0.2 Oracle home to a directory 
that is accessible when you connect to your source database (the database to be upgraded), preferably a temp 
directory such as /tmp, and run preupgrd.sql from there. 

[oracle@gract1 UPGRADE]$ cp  /u01/app/oracle/product/12102/racdb/rdbms/admin/preupgrd.sql .
[oracle@gract1 UPGRADE]$ cp  /u01/app/oracle/product/12102/racdb/rdbms/admin/utluppkg.sql .

Switch to your source 12.1.0.1 Oracle Home.
 - When running preupgrd.sql in a CDB, make sure all the PDBs are open. 
 - To open all the PDBs:

$ sqlplus  sys/sys@cdb as sysdba
SQL>  alter pluggable database all open;
SQL>  select INST_ID, CON_ID, DBID, CON_UID, GUID, NAME, OPEN_MODE, RESTRICTED  from gv$pdbs where NAME='PDB1';
   INST_ID     CON_ID        DBID    CON_UID GUID                 NAME    OPEN_MODE  RES
---------- ---------- ---------- ---------- -------------------------------- ---------- ---------- ---
     1        3 3362522988 3362522988 FFE30B05B94B1D25E0436F01A8C05EFE PDB1    READ WRITE NO
     2        3 3362522988 3362522988 FFE30B05B94B1D25E0436F01A8C05EFE PDB1    READ WRITE NO
     3        3 3362522988 3362522988 FFE30B05B94B1D25E0436F01A8C05EFE PDB1    READ WRITE NO

Run the preupgrd script on PDB1 and prepare for the plugin later 
[oracle@gract1 UPGRADE]$ sqlplus sys/sys as sysdba
SQL> alter session set container=PDB1;  
SQL> @preupgrd
           ====>> PRE-UPGRADE RESULTS for PDB1 <<====
ACTIONS REQUIRED:
1. Review results of the pre-upgrade checks:
 /u01/app/oracle/cfgtoollogs/cdb/preupgrade/preupgrade.log
2. Execute in the SOURCE environment BEFORE upgrade:
 /u01/app/oracle/cfgtoollogs/cdb/preupgrade/preupgrade_fixups.sql
3. Execute in the NEW environment AFTER upgrade:
 /u01/app/oracle/cfgtoollogs/cdb/preupgrade/postupgrade_fixups.sql
Summary from preupgrade.log 
[Pre-Upgrade Recommendations] : as SYSDBA run : EXECUTE dbms_stats.gather_dictionary_stats;
[Post-Upgrade Recommendations] : as SYSDBA run : EXECUTE dbms_stats.gather_dictionary_stats;
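
A sketch of how to run the recommended dictionary statistics gathering in the source PDB before the upgrade
(assuming the 12.1.0.1 environment used above); the same call is recommended again in the new environment after
the upgrade:
export ORACLE_HOME=/u01/app/oracle/product/121/racdb
export ORACLE_SID=cdb1
$ORACLE_HOME/bin/sqlplus / as sysdba <<'EOF'
alter session set container=PDB1;
-- pre-upgrade recommendation from preupgrade.log
execute dbms_stats.gather_dictionary_stats;
EOF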

MANUAL ACTION SUGGESTED
 After your database is upgraded and open in normal mode you must run 
 rdbms/admin/catuppst.sql which executes several required tasks and completes
 the upgrade process.

 You should follow that with the execution of rdbms/admin/utlrp.sql, and a
 comparison of invalid objects before and after the upgrade using
 rdbms/admin/utluiobj.sql

 If needed you may want to upgrade your timezone data using the process
 described in My Oracle Support note 1509653.1
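
To decide whether the timezone data upgrade from note 1509653.1 is needed at all, the timezone file version
currently in use can be checked first (a sketch, with the source environment set as in the previous step):
sqlplus / as sysdba <<'EOF'
alter session set container=PDB1;
-- timezone file version currently used by the PDB
select filename, version from v$timezone_file;
EOF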

Unplug the PDB and plug it into your new 12.1.0.2 CDB (downtime starts here!!)

SQL> alter session set container=CDB$ROOT; 
SQL> alter pluggable database PDB1 close immediate instances=all;
SQL> alter pluggable database PDB1 unplug into '/home/oracle/RAC/UPGRADE/pdb1.xml' ;
Pluggable database altered.

Connect to the new CDB cdbn
SQL> alter session set container=CDB$ROOT;
SQL> SET SERVEROUTPUT ON
     DECLARE
       compatible CONSTANT VARCHAR2(3) := CASE DBMS_PDB.CHECK_PLUG_COMPATIBILITY(
                                                  pdb_descr_file => '/home/oracle/RAC/UPGRADE/pdb1.xml',
                                                  pdb_name       => 'PDB1')
                                           WHEN TRUE THEN 'YES' ELSE 'NO'
                                           END;
     BEGIN
       DBMS_OUTPUT.PUT_LINE(compatible);
     END;
     /
NO  
PL/SQL procedure successfully completed.
--> The compatibility check returns "NO" here - but the plugin operation will still work

Check pdb_plug_in_violations
SQL> select message, status from pdb_plug_in_violations where type like '%ERR%';
MESSAGE                                                        STATUS
--------------------------------------------------------------------------------------------------- ---------
PDB's version does not match CDB's version: PDB's version 12.1.0.0.0. CDB's version 12.1.0.2.0.      PENDING
APEX mismatch: PDB installed version 4.2.0.00.27 CDB installed version 4.2.5.00.08                   PENDING

As the datafiles stay in the same ASM disk group, we don't need file_name_convert when running create pluggable database  
SQL> create pluggable database pdb1 using '/home/oracle/RAC/UPGRADE/pdb1.xml'; 
Pluggable database created.
      Here is a sample with file_name_convert:
      SQL>  create pluggable database pdb1 using '/stage/pdb1.xml' file_name_convert=('/oradata/CDB1/pdb1', '/oradata/CDB2/pdb1');

Open the PDB in UPGRADE mode 
SQL>  alter pluggable database PDB1 open upgrade;
Warning: PDB altered with errors.
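
Before starting catctl.pl it is worth confirming that the PDB really came up in upgrade mode despite the warning;
a quick check (sketch) against the new 12.1.0.2 CDB - the expected OPEN_MODE is MIGRATE:
sqlplus / as sysdba <<'EOF'
select inst_id, name, open_mode from gv$pdbs where name = 'PDB1';
EOF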

Note: the following step will take some time:
[oracle@gract1 racdb]$ $ORACLE_HOME/perl/bin/perl $ORACLE_HOME/rdbms/admin/catctl.pl -d $ORACLE_HOME/rdbms/admin -c 'PDB1' catupgrd.sql
Argument list for [/u01/app/oracle/product/12102/racdb/rdbms/admin/catctl.pl]
SQL Process Count     n = 0
SQL PDB Process Count N = 0
Input Directory       d = /u01/app/oracle/product/12102/racdb/rdbms/admin
...
Display Phases        y = 0
Child Process         I = 0

catctl.pl version: 12.1.0.2.0
Oracle Base           = /u01/app/oracle

Analyzing file /u01/app/oracle/product/12102/racdb/rdbms/admin/catupgrd.sql
Log files in /u01/app/oracle/product/12102/racdb
catcon: ALL catcon-related output will be written to catupgrd_catcon_22075.lst
catcon: See catupgrd*.log files for output generated by scripts
catcon: See catupgrd_*.lst files for spool files, if any
Number of Cpus        = 1
Parallel PDB Upgrades = 2
SQL PDB Process Count = 2
SQL Process Count     = 0
New SQL Process Count = 1
[CONTAINER NAMES]
CDB$ROOT
PDB$SEED
PDB1
PDB Inclusion:[PDB1] Exclusion:[]
Starting
[/u01/app/oracle/product/12102/racdb/perl/bin/perl /u01/app/oracle/product/12102/racdb/rdbms/admin/catctl.pl -d 
          /u01/app/oracle/product/12102/racdb/rdbms/admin -c 'PDB1' -I -i pdb1 -n 2 catupgrd.sql]
Argument list for [/u01/app/oracle/product/12102/racdb/rdbms/admin/catctl.pl]
SQL Process Count     n = 2
SQL PDB Process Count N = 0
Input Directory       d = /u01/app/oracle/product/12102/racdb/rdbms/admin
...
Display Phases        y = 0
Child Process         I = 1
catctl.pl version: 12.1.0.2.0
Oracle Base           = /u01/app/oracle
Analyzing file /u01/app/oracle/product/12102/racdb/rdbms/admin/catupgrd.sql
Log files in /u01/app/oracle/product/12102/racdb
catcon: ALL catcon-related output will be written to catupgrdpdb1_catcon_22568.lst
catcon: See catupgrdpdb1*.log files for output generated by scripts
catcon: See catupgrdpdb1_*.lst files for spool files, if any
Number of Cpus        = 1
SQL PDB Process Count = 2
SQL Process Count     = 2

[CONTAINER NAMES]
CDB$ROOT
PDB$SEED
PDB1
PDB Inclusion:[PDB1] Exclusion:[]

------------------------------------------------------
Phases [0-73]
Container Lists Inclusion:[PDB1] Exclusion:[]
Serial   Phase #: 0 Files: 1     Time: 91s   PDB1
Serial   Phase #: 1 Files: 5     Time: 254s  PDB1
Restart  Phase #: 2 Files: 1     Time: 0s    PDB1
....
Serial   Phase #:73 Files: 1     Time: 0s    PDB1
Grand Total Time: 7957s PDB1
LOG FILES: (catupgrdpdb1*.log)
Upgrade Summary Report Located in:
/u01/app/oracle/product/12102/racdb/cfgtoollogs/cdbn/upgrade/upg_summary.log
Total Upgrade Time:          [0d:2h:12m:37s]
     Time: 7965s For PDB(s)
Grand Total Time: 7965s 
LOG FILES: (catupgrd*.log)
Grand Total Upgrade Time:    [0d:2h:12m:45s]
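
Besides the summary report shown next, it is worth scanning the per-phase catupgrd logs for ORA- errors before
running the post-upgrade scripts; a simple check (sketch) based on the log location printed above:
cd /u01/app/oracle/product/12102/racdb
# any ORA- errors logged during the catalog upgrade of PDB1?
grep -n 'ORA-' catupgrdpdb1*.log | more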

Check Upgrade Log
[root@gract1 var]# more /u01/app/oracle/product/12102/racdb/cfgtoollogs/cdbn/upgrade/upg_summary.log
Oracle Database 12.1 Post-Upgrade Status Tool           08-10-2014 21:27:56
                             [PDB1:3]
Component                               Current         Version  Elapsed Time
Name                                    Status          Number   HH:MM:SS
Oracle Server                          UPGRADED      12.1.0.2.0  00:34:31
JServer JAVA Virtual Machine              VALID      12.1.0.2.0  00:21:14
Oracle Real Application Clusters          VALID      12.1.0.2.0  00:00:06
Oracle Workspace Manager                  VALID      12.1.0.2.0  00:07:25
OLAP Analytic Workspace                   VALID      12.1.0.2.0  00:01:41
Oracle OLAP API                           VALID      12.1.0.2.0  00:04:31
Oracle Label Security                     VALID      12.1.0.2.0  00:00:36
Oracle XDK                                VALID      12.1.0.2.0  00:06:49
Oracle Text                               VALID      12.1.0.2.0  00:01:08
Oracle XML Database                       VALID      12.1.0.2.0  00:03:09
Oracle Database Java Packages             VALID      12.1.0.2.0  00:01:40
Oracle Multimedia                         VALID      12.1.0.2.0  00:11:09
Spatial                                UPGRADED      12.1.0.2.0  00:15:27
Oracle Application Express                VALID     4.2.5.00.08  00:11:23
Oracle Database Vault                     VALID      12.1.0.2.0  00:03:03
Final Actions                                                    00:02:26
Post Upgrade                                                     00:00:18
Total Upgrade Time: 02:07:09 [PDB1]
PL/SQL procedure successfully completed.
Elapsed: 00:00:02.42

Execute post-upgrade scripts
SQL> @/u01/app/oracle/cfgtoollogs/cdb/preupgrade/postupgrade_fixups.sql
SQL> @?/rdbms/admin/catuppst.sql
SQL> @?/rdbms/admin/utlrp.sql

Check upgrade status 
Verify object status 
SQL>  @?/rdbms/admin/utluiobj.sql
Oracle Database 12.1 Post-Upgrade Invalid Objects Tool 08-11-2014 08:36:02
This tool lists post-upgrade invalid objects that were not invalid
prior to upgrade (it ignores pre-existing pre-upgrade invalid objects).
                           Owner                     Object Name                     Object Type
.
PL/SQL procedure successfully completed.
Check upgrade status 
SQL> select owner,count(*) from dba_objects where status !=  'VALID'  group by owner; 
no rows selected
SQL> select comp_name,version,status from dba_registry;

Verify the PDB open mode on all instances:
SQL> select INST_ID, CON_ID, DBID, CON_UID, GUID, NAME, OPEN_MODE, RESTRICTED  from gv$pdbs where NAME='PDB1';
   INST_ID     CON_ID        DBID    CON_UID GUID                 NAME    OPEN_MODE  RES
---------- ---------- ---------- ---------- -------------------------------- ---------- ---------- ---
     1        3 3362522988 1064331803 FFE30B05B94B1D25E0436F01A8C05EFE PDB1    READ WRITE NO
     2        3 3362522988 1064331803 FFE30B05B94B1D25E0436F01A8C05EFE PDB1    MOUNTED
     3        3 3362522988 1064331803 FFE30B05B94B1D25E0436F01A8C05EFE PDB1    MOUNTED

Open the PDB clusterwide
SQL>  alter pluggable database pdb1 open instances=all;
Pluggable database altered.
SQL> select INST_ID, CON_ID, DBID, CON_UID, GUID, NAME, OPEN_MODE, RESTRICTED  from gv$pdbs where NAME='PDB1';
   INST_ID     CON_ID        DBID    CON_UID GUID                 NAME    OPEN_MODE  RES
---------- ---------- ---------- ---------- -------------------------------- ---------- ---------- ---
     1        3 3362522988 1064331803 FFE30B05B94B1D25E0436F01A8C05EFE PDB1    READ WRITE NO
     3        3 3362522988 1064331803 FFE30B05B94B1D25E0436F01A8C05EFE PDB1    READ WRITE NO
     2        3 3362522988 1064331803 FFE30B05B94B1D25E0436F01A8C05EFE PDB1    READ WRITE NO

After all PDBs are migrated, consider dropping the old 12.1.0.1 CDB and PDBs, including the software installation
Before dropping, check the datafiles the new CDB uses: 
SQL> select file_name, con_id from cdb_data_files;
FILE_NAME                                                 CON_ID
---------------------------------------------------------------------------------------------------- ----------
+DATA/CDBN/DATAFILE/system.302.855237255                                      1
+DATA/CDBN/DATAFILE/sysaux.303.855237163                                      1
+DATA/CDBN/DATAFILE/undotbs1.316.855237383                                    1
+DATA/CDBN/DATAFILE/users.318.855237381                                       1
+DATA/CDBN/DATAFILE/undotbs2.297.855237793                                      1
+DATA/CDBN/DATAFILE/undotbs3.298.855237797                                      1
+DATA/CDBN/FFE30B05B94B1D25E0436F01A8C05EFE/DATAFILE/system.324.855252739                      3
+DATA/CDBN/FFE30B05B94B1D25E0436F01A8C05EFE/DATAFILE/sysaux.325.855252377                      3
+DATA/CDBN/FFE30B05B94B1D25E0436F01A8C05EFE/DATAFILE/users.326.855251235                      3

OLD CDB datafiles 
[grid@gract1 ~]$ asmcmd ls -l DATA/CDB/DATAFILE/
Type      Redund  Striped  Time             Sys  Name
DATAFILE  MIRROR  COARSE   AUG 10 13:00:00  Y    SYSAUX.283.854809845
DATAFILE  MIRROR  COARSE   AUG 10 13:00:00  Y    SYSTEM.273.854810097
DATAFILE  MIRROR  COARSE   AUG 10 13:00:00  Y    UNDOTBS1.279.854810307
DATAFILE  MIRROR  COARSE   AUG 10 13:00:00  Y    UNDOTBS2.294.854810989
DATAFILE  MIRROR  COARSE   AUG 10 13:00:00  Y    UNDOTBS3.286.854810995
DATAFILE  MIRROR  COARSE   AUG 10 13:00:00  Y    USERS.295.854810303

[grid@gract1 ~]$ asmcmd  ls -l  DATA/CDB/FFE30B05B94B1D25E0436F01A8C05EFE//DATAFILE/
Type      Redund  Striped  Time             Sys  Name
DATAFILE  MIRROR  COARSE   AUG 10 18:00:00  Y    SYSAUX.278.854811821
DATAFILE  MIRROR  COARSE   AUG 10 18:00:00  Y    SYSTEM.272.854811641
DATAFILE  MIRROR  COARSE   AUG 10 17:00:00  Y    USERS.270.854812071

--> After verifying that our new CDB cdbn does not reference any old 12.1.0.1 file,
    you can delete your 12.1.0.1 CDB, PDBs and software kit (a cleanup sketch follows)
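
One possible way to do that cleanup (a sketch only - verify the datafile check above first; dbca drops the database
including its CRS resource, and the deinstall tool removes the old software):
export ORACLE_HOME=/u01/app/oracle/product/121/racdb
export PATH=$ORACLE_HOME/bin:$PATH
# drop the old 12.1.0.1 CDB (and with it any PDBs still plugged into it)
dbca -silent -deleteDatabase -sourceDB cdb -sysDBAUserName sys -sysDBAPassword sys
# remove the old 12.1.0.1 RDBMS software
$ORACLE_HOME/deinstall/deinstall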

 

Reference

  • https://blogs.oracle.com/UPGRADE/entry/upgrade_pdbs_one_at_a
  • https://blogs.oracle.com/UPGRADE/entry/upgrade_pdbs_everything_at_once1   

     Thanks Mike for the above articles – they helped me a lot!

Upgrading from 12.1.0.1 to 12.1.0.2 using a FLEX cluster hangs with the progress bar frozen at 7%

OUI hangs during the prerequisite checks with the progress level frozen at 7%

OUI step in progress: Checking 'Verify that the ASM instance was configured using an existing ASM parameter file'

Verify that asmcmd spget works. 
If asmcmd spget hangs, try to kill the process that is running asmcmd spget in parallel 
[root@gract1 /]# ps -elf  | grep asmcmd
0 S grid     15820     1  1  80   0 - 351661 pipe_w 16:34 ?       00:00:00 oracle+ASM1_asmcmd (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))]
[root@gract1 /]# kill -9 15820
After that  asmcmd spget should work

Strace output from above hang scenario 
[grid@gract1 ~/DBSupportBundle]$ asmcmd spget
+DATA/gract/ASMPARAMETERFILE/registry.253.827098053
stat("/tmp/pipe_501_sysasm", {st_mode=S_IFIFO|0777, st_size=0, ...}) = 0
open("expor/tmp/pipe_501_sysasm", O_WRONLY|O_CREAT|O_TRUNC, 0666^C <unfinished ...>

If the installation still hangs, the next step is to test your installation env by running:
$ /u01/app/121/grid/perl/bin/perl -w -I /u01/app/121/grid/perl/lib/5.14.1 -I /u01/app/121/grid/perl/lib/site_perl/5.14.1 
   -I /u01/app/121/grid/lib -I /u01/app/121/grid/lib/asmcmd -I /u01/app/121/grid/rdbms/lib/asmcmd /u01/app/121/grid/bin/asmcmdcore spget
--> Here you may fix errors like wrong settings for LD_LIBRARY_PATH, perl problems, ...

If OUI still hangs, check for defunct asmcmd daemon processes
[root@gract1 log]# ps -elf | grep asmcmd
0 S grid      9087  3028  0  80   0 - 65796 hrtime 11:43 pts/8    00:00:00 /u01/app/121/grid/perl/bin/perl -w -I /u01/app/121/grid/perl/lib/5.14.1 
                                                                            -I /u01/app/121/grid/perl/lib/site_perl/5.14.1 -I /u01/app/121/grid/lib 
                                                                            -I /u01/app/121/grid/lib/asmcmd -I /u01/app/121/grid/rdbms/lib/asmcmd 
                                                                             /u01/app/121/grid/bin/asmcmdcore spget
1 Z grid      9092  9087  0  80   0 -     0 exit   11:43 ?        00:00:00 [asmcmd daemon] <defunct>
0 S root     10571 25378  0  80   0 - 25824 pipe_w 12:05 pts/11   00:00:00 grep asmcmd
[root@gract1 log]# kill -9 9087
After killing the parent process (9087 in this example) OUI should move forward and process the next prerequisite checks!
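
A small helper (sketch, run as root) that automates the check described above: find 'asmcmd daemon' children that
are already <defunct> and kill their asmcmdcore parent:
ps -eo pid,ppid,stat,cmd | grep '[a]smcmd daemon' | while read pid ppid stat cmd; do
  if echo "$stat" | grep -q Z; then
    echo "defunct asmcmd daemon $pid - killing parent $ppid"
    kill -9 "$ppid"
  fi
done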

After all prerequisite checks have finished, run the following command on all nodes to verify that the ASM parameter file is valid
[grid@gract1 ~]$ /u01/app/121/grid/perl/bin/perl -w -I /u01/app/121/grid/perl/lib/5.14.1 -I /u01/app/121/grid/perl/lib/site_perl/5.14.1 
   -I /u01/app/121/grid/lib -I /u01/app/121/grid/lib/asmcmd -I /u01/app/121/grid/rdbms/lib/asmcmd /u01/app/121/grid/bin/asmcmdcore spget
+DATA/gract/ASMPARAMETERFILE/registry.253.827098053

Related installer error:
PRVG-4568 : An ASM instance was found to be configured but the ASM parameter file does not exist at location "/u01/app/121/grid/dbs/initnull.ora" 
            on the node "gract1" on which upgrade is requested.  
           Cause:  The indicated ASM parameter file did not exist at the identified location.  
          Action:  Ensure that the ASM instance is configured using an existing ASM parameter file, SPFILE or PFILE, on the indicated node

Summary 
  - Don't run asmcmd commands in parallel with 12.1.0.1 (see the bugs referenced below)
  - Check for hanging asmcmd processes whose child shows up as <defunct> (and kill them) 
  - After killing such a process OUI will report an error once the prerequisite checks finish
  - Test the ASM spfile usage manually after the OUI prerequisite checks have finished, then ignore the failed
    prerequisite check to continue the installation

 

Reference

  • Bug 16875041 : ASMCMD -P LS COMMAND HANG OR FAILED WITH UNEXPECTED EOF ERROR
  • Bug 18630276 : ASMCMD MKDG HANG AND ASMCMD PERFORMANCE ISSUE WHILE SWINGBENCH WORKLOAD RUNNING

Upgrade FLEX ASM cluster from 12.1.0.1 to 12.1.0.2

Check  ASM and RAC status

[grid@gract1 ~]$ olsnodes -n -s -a
gract1    1    Active    Hub
gract2    2    Active    Hub
gract3    3    Active    Hub

[grid@gract1 ~]$ srvctl status  database -d  cdb
Instance cdb1 is running on node gract1
Instance cdb2 is running on node gract2
Instance cdb3 is running on node gract3

[grid@gract1 ~]$ srvctl status  asm
ASM is running on gract3,gract2

[grid@gract1 ~]$ srvctl config database -d cdb
Database unique name: cdb
Database name: cdb
Oracle home: /u01/app/oracle/product/121/racdb
Oracle user: oracle
Spfile: +DATA/cdb/spfilecdb.ora
Password file: +DATA/cdb/orapwcdb
Domain: 
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: cdb
Database instances: cdb1,cdb2,cdb3
Disk Groups: DATA
Mount point paths: 
Services: hr
Type: RAC
Start concurrency: 
Stop concurrency: 
Database is administrator managed

[grid@gract1 ~]$ asmcmd showclustermode 
ASM cluster : Flex mode enabled

Unmount your ACFS filesystems to avoid 'device busy' errors during the CRS restart
--> For details see the following note. 

Test your Perl and ASM configuration by running the following command on all cluster nodes:
$ /u01/app/121/grid/perl/bin/perl -w -I /u01/app/121/grid/perl/lib/5.14.1 -I /u01/app/121/grid/perl/lib/site_perl/5.14.1 
   -I /u01/app/121/grid/lib -I /u01/app/121/grid/lib/asmcmd -I /u01/app/121/grid/rdbms/lib/asmcmd /u01/app/121/grid/bin/asmcmdcore spget
--> Here you may fix errors like wrong settings for LD_LIBRARY_PATH, ...

Install and run orachk/raccheck tool

Install latest support bundle :  DBSupportBundle_v1_3_6.zip

Unzip and Prepare directory
[grid@gract1 ~/DBSupportBundle]$ mkdir orachk_OUT
[grid@gract1 ~/DBSupportBundle]$ chmod 777 orachk_OUT

Run orachk as user oracle 

Check orachk version 
[oracle@gract1 DBSupportBundle]$ orachk -v
ORACHK  VERSION: 2.2.5_20140530

[oracle@gract1 DBSupportBundle]$ setenv RAT_OUTPUT /home/grid/DBSupportBundle/orachk_OUT
[oracle@gract1 DBSupportBundle]$ ./orachk -u -o pre
Enter upgrade target version (valid versions are 11.2.0.3.0, 11.2.0.4.0, 12.1.0.1.0, 12.1.0.2.0):- 12.1.0.2.0
CRS stack is running and CRS_HOME is not set. Do you want to set CRS_HOME to /u01/app/121/grid?[y/n][y]
Checking ssh user equivalency settings on all nodes in cluster
Node gract2 is configured for ssh user equivalency for oracle user
Node gract3 is configured for ssh user equivalency for oracle user
Searching for running databases . . . . .
List of running databases registered in OCR
1. cdb
2. None of above
Select databases from list for checking best practices. For multiple databases, select 1 for All or comma separated number like 1,2 etc [1-2][1].
Checking Status of Oracle Software Stack - Clusterware, ASM, RDBMS

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
-------------------------------------------------------------------------------------------------------
                                                 Oracle Stack Status                            
-------------------------------------------------------------------------------------------------------
Host Name  CRS Installed  ASM HOME       RDBMS Installed  CRS UP    ASM UP    RDBMS UP  DB Instance Name
-------------------------------------------------------------------------------------------------------
gract1      Yes             N/A             Yes             Yes        No       Yes      cdb1      
gract2      Yes             N/A             Yes             Yes        Yes      Yes      cdb2      
gract3      Yes             N/A             Yes             Yes        Yes      Yes      cdb3      
-------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------
                                       Installed components summary                             
---------------------------------------------------------------------------------------------------------------------------------
GI_HOME                                  ORACLE_HOME                                                  Database Names                
---------------------------------------------------------------------------------------------------------------------------------
/u01/app/121/grid - 12.1.0.1.0           /u01/app/oracle/product/121/racdb - 12.1.0.1.0               cdb PDB1                      
---------------------------------------------------------------------------------------------------------------------------------
...
Detailed report (html) - /home/grid/DBSupportBundle/orachk_OUT/orachk_gract1_PDB1_080814_105958/orachk_gract1_PDB1_080814_105958.html
UPLOAD(if required) - /home/grid/DBSupportBundle/orachk_OUT/orachk_gract1_PDB1_080814_105958.zip

[oracle@gract1 DBSupportBundle]$ ls orachk_OUT
orachk_gract1_PDB1_080814_105958  orachk_gract1_PDB1_080814_105958.zip
--> Fix now any ERROR or WARNING messages before moving forward

Create new clusterware location and verify CRS installation with cluvfy

# mkdir -p /u01/app/12102/grid
# chown -R grid:oinstall /u01/app/12102/grid
# chmod -R 775 /u01/app/12102/grid
--> Run these commands on all cluster nodes 

Download the newest cluvfy tool and prepare cluvfy
[grid@gract1 ~/cluvfy]$ mkdir cluvfy
[grid@gract1 ~/cluvfy]$ cd cluvfy 
[grid@gract1 ~/cluvfy]$ unzip /media/sf_Kits/CLUVFY/cvupack_Linux_x86_64.zip

Check cluvfy version
[grid@gract1 ~/cluvfy]$  ./bin/cluvfy -version
12.1.0.1.0 Build 112713x8664

Running cluvfy 

Check current installation for any errors
[grid@gract1 ~/cluvfy]$ ./bin/cluvfy stage -post crsinst  -n gract1,gract2,gract3
..
Checking Flex Cluster node role configuration...
Flex Cluster node role configuration check passed
Post-check for cluster services setup was successful.

Verify new CRS location and run cluvfy with -pre crsinst -upgrade .. rolling 

[grid@gract1 ~/cluvfy]$  ./bin/cluvfy stage -pre crsinst -upgrade -n gract1,gract2,gract3 -rolling -src_crshome $GRID_HOME 
     -dest_crshome /u01/app/12102/grid -dest_version 12.1.0.2.0 -verbose

Install new Clusterware version  – start OUI  as user grid

[grid@gract1 grid]$ pwd
/media/sf_Kits/12.1.0.2/grid
[grid@gract1 grid]$ ./runInstaller
 -> Upgrade Oracle Grid Infrastructure for a Cluster

If OUI hangs with the progress level at 7% while processing the following step:
  Checking 'Verify that the ASM instance was configured using an existing ASM parameter file'
--> please read the note on this hang earlier in this post

Run rootupgrade.sh on node gract1
[root@gract1 ~]# /u01/app/12102/grid/rootupgrade.sh
..
Monitor the cluster state while rootupgrade.sh is running

From local Node gract1
[grid@gract1 ~]$  crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [12.1.0.1.0]
[grid@gract1 ~]$  crsctl query crs softwareversion  
Oracle Clusterware version on node [gract1] is [12.1.0.2.0]

From Remote Node gract3
[grid@gract3 ~]$   srvctl status  database -d cdb
Instance cdb1 is not running on node gract1
Instance cdb2 is running on node gract2
Instance cdb3 is running on node gract3
[grid@gract3 ~]$  srvctl status asm -detail
ASM is running on gract3,gract2
ASM is enabled.
[grid@gract3 ~]$  asmcmd showclusterstate
In Rolling Upgrade
--> cdb1 and +ASM1 are not available - cluster mode is: In Rolling Upgrade
    2 ASM instances and 2 RAC instances are still available (a small monitoring loop is sketched below)
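
A trivial monitoring loop (sketch, assuming the grid user's environment is set) that can be left running on a node
which is not currently being upgraded while rootupgrade.sh works through the cluster:
while true; do
  date
  srvctl status database -d cdb
  srvctl status asm -detail
  asmcmd showclusterstate
  sleep 60
done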

Full Output from rootupgrade.sh script  on node gract1
[root@gract1 ~]# /u01/app/12102/grid/rootupgrade.sh
...
Using configuration parameter file: /u01/app/12102/grid/crs/install/crsconfig_params
2014/08/09 13:22:43 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.
TFA-00001: Failed to start Oracle Trace File Analyzer (TFA) daemon. Please check TFA logs.
2014/08/09 13:24:53 CLSRSC-4005: Failed to patch Oracle Trace File Analyzer (TFA) Collector. Grid Infrastructure operations will continue.
2014/08/09 13:25:03 CLSRSC-464: Starting retrieval of the cluster configuration data
2014/08/09 13:25:13 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.
2014/08/09 13:25:13 CLSRSC-363: User ignored prerequisites during installation
2014/08/09 13:25:34 CLSRSC-515: Starting OCR manual backup.
2014/08/09 13:25:39 CLSRSC-516: OCR manual backup successful.
2014/08/09 13:25:45 CLSRSC-468: Setting Oracle Clusterware and ASM to rolling migration mode
2014/08/09 13:25:45 CLSRSC-482: Running command: '/u01/app/121/grid/bin/crsctl start rollingupgrade 12.1.0.2.0'
CRS-1131: The cluster was successfully set to rolling upgrade mode.
2014/08/09 13:26:06 CLSRSC-482: Running command: '/u01/app/12102/grid/bin/asmca -silent -upgradeNodeASM -nonRolling false -oldCRSHome /u01/app/121/grid -oldCRSVersion 12.1.0.1.0 -nodeNumber 1 -firstNode true -startRolling false'
ASM configuration upgraded in local node successfully.
2014/08/09 13:26:16 CLSRSC-469: Successfully set Oracle Clusterware and ASM to rolling migration mode
2014/08/09 13:26:16 CLSRSC-466: Starting shutdown of the current Oracle Grid Infrastructure stack
2014/08/09 13:29:17 CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed.
OLR initialization - successful
2014/08/09 13:37:40 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.conf'
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2014/08/09 13:45:39 CLSRSC-472: Attempting to export the OCR
2014/08/09 13:45:40 CLSRSC-482: Running command: 'ocrconfig -upgrade grid oinstall'
2014/08/09 13:47:31 CLSRSC-473: Successfully exported the OCR
2014/08/09 13:47:40 CLSRSC-486: 
 At this stage of upgrade, the OCR has changed.
 Any attempt to downgrade the cluster after this point will require a complete cluster outage to restore the OCR.
2014/08/09 13:47:40 CLSRSC-541: 
 To downgrade the cluster: 
 1. All nodes that have been upgraded must be downgraded.
2014/08/09 13:47:40 CLSRSC-542: 
 2. Before downgrading the last node, the Grid Infrastructure stack on all other cluster nodes must be down.
2014/08/09 13:47:40 CLSRSC-543: 
 3. The downgrade command must be run on the node gract2 with the '-lastnode' option to restore global configuration data.
2014/08/09 13:48:06 CLSRSC-343: Successfully started Oracle Clusterware stack
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 12c Release 1.
Successfully taken the backup of node specific configuration in OCR. 
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2014/08/09 13:48:58 CLSRSC-474: Initiating upgrade of resource types
2014/08/09 13:50:01 CLSRSC-482: Running command: 'upgrade model  -s 12.1.0.1.0 -d 12.1.0.2.0 -p first'
2014/08/09 13:50:01 CLSRSC-475: Upgrade of resource types successfully initiated.
2014/08/09 13:50:14 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
--> Note the downgrade hints above
After rootupgrade.sh finishes on gract1, both cdb1 and +ASM1 are available again
[grid@gract3 ~]$ srvctl status  database -d cdb
Instance cdb1 is running on node gract1
Instance cdb2 is running on node gract2
Instance cdb3 is running on node gract3
[grid@gract3 ~]$  srvctl status asm -detail
ASM is running on gract3,gract2,gract1
ASM is enabled.

[grid@gract3 ~]$  asmcmd showclusterstate
In Rolling Upgrade

Run rootupgrade.sh on 2nd node : gract2
Cluster status during running  rootupgrade.sh 
[grid@gract3 ~]$  srvctl status  database -d cdb
Instance cdb1 is running on node gract1
Instance cdb2 is not running on node gract2
Instance cdb3 is running on node gract3
[grid@gract3 ~]$ srvctl status asm -detail
ASM is running on gract3,gract1
ASM is enabled.
[grid@gract3 ~]$ asmcmd showclusterstate
In Rolling Upgrade

Full Output from rootupgrade.sh script  on node gract2
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/12102/grid/crs/install/crsconfig_params
2014/08/09 13:53:41 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.
TFA-00001: Failed to start Oracle Trace File Analyzer (TFA) daemon. Please check TFA logs.
2014/08/09 13:55:50 CLSRSC-4005: Failed to patch Oracle Trace File Analyzer (TFA) Collector. Grid Infrastructure operations will continue.
2014/08/09 13:55:57 CLSRSC-464: Starting retrieval of the cluster configuration data
2014/08/09 13:56:12 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.
2014/08/09 13:56:12 CLSRSC-363: User ignored prerequisites during installation
ASM configuration upgraded in local node successfully.
2014/08/09 13:56:46 CLSRSC-466: Starting shutdown of the current Oracle Grid Infrastructure stack
2014/08/09 14:00:03 CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed.
OLR initialization - successful
2014/08/09 14:00:58 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.conf'
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2014/08/09 14:05:56 CLSRSC-343: Successfully started Oracle Clusterware stack
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 12c Release 1.
Successfully taken the backup of node specific configuration in OCR. 
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2014/08/09 14:06:39 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

Status after running rootupgrade.sh on gract2
[grid@gract2 ~]$  srvctl status  database -d cdb
Instance cdb1 is running on node gract1
Instance cdb2 is running on node gract2
Instance cdb3 is running on node gract3
[grid@gract2 ~]$ srvctl status asm -detail
ASM is running on gract3,gract2,gract1
ASM is enabled.
[grid@gract2 ~]$ asmcmd showclusterstate
In Rolling Upgrade


[grid@gract2 ~]$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [12.1.0.1.0]
[grid@gract2 ~]$ crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [12.1.0.1.0]
[grid@gract2 ~]$  crsctl query crs softwareversion 
Oracle Clusterware version on node [gract2] is [12.1.0.2.0]
--> All cluster instances are up; the software version is 12.1.0.2.0

Running rootupgrade.sh on last node : gract3
[root@gract3 ~]#  /u01/app/12102/grid/rootupgrade.sh
...
Status during running rootupgrade.sh on last node gract3
[grid@gract2 ~]$  srvctl status  database -d cdb
Instance cdb1 is running on node gract1
Instance cdb2 is running on node gract2
Instance cdb3 is not running on node gract3


[grid@gract2 ~]$ srvctl status asm -detail
ASM is running on gract2,gract1
ASM is enabled.
[grid@gract2 ~]$ asmcmd showclusterstate
In Rolling Upgrade

Full Output from rootupgrade.sh script  on node gract3
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/12102/grid/crs/install/crsconfig_params
2014/08/09 14:01:01 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.
TFA-00001: Failed to start Oracle Trace File Analyzer (TFA) daemon. Please check TFA logs.
2014/08/09 14:03:06 CLSRSC-4005: Failed to patch Oracle Trace File Analyzer (TFA) Collector. Grid Infrastructure operations will continue.
2014/08/09 14:03:12 CLSRSC-464: Starting retrieval of the cluster configuration data
2014/08/09 14:03:30 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.
2014/08/09 14:03:30 CLSRSC-363: User ignored prerequisites during installation
ASM configuration upgraded in local node successfully.
2014/08/09 14:03:59 CLSRSC-466: Starting shutdown of the current Oracle Grid Infrastructure stack
2014/08/09 14:06:01 CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed.
OLR initialization - successful
2014/08/09 14:06:52 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.conf'
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2014/08/09 14:10:56 CLSRSC-343: Successfully started Oracle Clusterware stack
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 12c Release 1.
Successfully taken the backup of node specific configuration in OCR. 
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2014/08/09 14:11:11 CLSRSC-478: Setting Oracle Clusterware active version on the last node to be upgraded
2014/08/09 14:11:11 CLSRSC-482: Running command: '/u01/app/12102/grid/bin/crsctl set crs activeversion'
Started to upgrade the Oracle Clusterware. This operation may take a few minutes.
Started to upgrade the CSS.
The CSS was successfully upgraded.
Started to upgrade Oracle ASM.
Started to upgrade the CRS.
The CRS was successfully upgraded.
Successfully upgraded the Oracle Clusterware.
Oracle Clusterware operating version was successfully set to 12.1.0.2.0
2014/08/09 14:12:27 CLSRSC-479: Successfully set Oracle Clusterware active version
2014/08/09 14:14:25 CLSRSC-476: Finishing upgrade of resource types
2014/08/09 14:14:36 CLSRSC-482: Running command: 'upgrade model  -s 12.1.0.1.0 -d 12.1.0.2.0 -p last'
2014/08/09 14:14:36 CLSRSC-477: Successfully completed upgrade of resource types
PRCN-3004 : Listener MGMTLSNR already exists
2014/08/09 14:15:46 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

--> The upgrade script detects the last node and runs crsctl set crs activeversion 
    The last node upgrades the resources by running: upgrade model  -s 12.1.0.1.0 -d 12.1.0.2.0 -p last

Status after running rootupgrade.sh on gract3
[grid@gract3 ~]$ srvctl status  database -d cdb
Instance cdb1 is running on node gract1
Instance cdb2 is running on node gract2
Instance cdb3 is running on node gract3


[grid@gract3 ~]$ srvctl status asm -detail
ASM is running on gract3,gract2,gract1
ASM is enabled.
[grid@gract3 ~]$  asmcmd showclusterstate
Normal
[grid@gract3 ~]$ crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [12.1.0.1.0]
[grid@gract3 ~]$  crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [12.1.0.2.0]
[grid@gract3 ~]$ crsctl query crs softwareversion 
Oracle Clusterware version on node [gract3] is [12.1.0.2.0]
--> The clusterware is in normal state and was successfully upgraded to 12.1.0.2 (verify all nodes with the loop sketched below)
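
To verify the versions on every node in one go, a small loop (sketch, assuming the new grid home /u01/app/12102/grid)
can be used:
for node in gract1 gract2 gract3; do
  echo "=== ${node} ==="
  ssh grid@${node} "/u01/app/12102/grid/bin/crsctl query crs softwareversion; \
                    /u01/app/12102/grid/bin/crsctl query crs activeversion"
done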

--> Go back to OUI and continue installation 


Debug Upgrade Problems

Important Notes

  • How to Apply a Grid Infrastructure Patch Before root script (root.sh or rootupgrade.sh) is Executed? (Doc ID 1410202.1)
  • How to Configure or Re-configure Grid Infrastructure With config.sh/config.bat (Doc ID 1354258.1)
  • Things to Consider Before Upgrading to 11.2.0.2 Grid Infrastructure/ASM (Doc ID 1312225.1)
  •  rootupgrade.sh (and root.sh for that matter) are restartable since 11.2.0.2 at least
  • A great deal of logging is available in $GRID_HOME/cfgtoollogs/crsconfig/rootcrs_<hostname>.log. Logfiles:
    • Checkpoint file  : /u01/app/grid/Clusterware/ckptGridHA_grac31.xml
    • Log File         :  $GRID_HOME/cfgtoollogs/crsconfig/rootcrs_grac31.log
  • How to Proceed When Upgrade to 11.2 Grid Infrastructure Cluster Fails (Doc ID 1364947.1)
  • Be careful before you remove the OLD binaries – if you need to drop old resources you may still need these binaries

PRCD-1229 error when you try to delete a resource created under 12.1.0.1 with the 12.1.0.2 srvctl

[oracle@gract1 ~]$  srvctl  remove service -db cdb -s hr -force
PRCD-1229 : An attempt to access configuration of database cdb was rejected because its version 
12.1.0.1.0 differs from the  program version 12.1.0.2.0. 
Instead run the program from /u01/app/oracle/product/121/racdb.
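
The workaround follows directly from the error text: run srvctl from the old 12.1.0.1 home that still owns the
resource (a sketch):
export ORACLE_HOME=/u01/app/oracle/product/121/racdb     # old 12.1.0.1 home
$ORACLE_HOME/bin/srvctl remove service -db cdb -s hr -force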

Check that ASM and the database are running

[oracle@grac41 ~/DBSupportBundle134]$ srvctl status  database -d grac4
Instance grac41 is running on node grac41
Instance grac42 is running on node grac42
Instance grac43 is running on node grac43

[oracle@grac41 ~/DBSupportBundle134]$ srvctl status  asm
ASM is running on grac42,grac43,grac41

Run orachk as user oracle and cluvfy as user grid

[oracle@grac41 ~/DBSupportBundle134]$ cd /home/oracle/DBSupportBundle134
[oracle@grac41 ~/DBSupportBundle134]$ ./orachk -u -o pre 

[grid@grac41 /]$  cluvfy stage -pre crsinst -upgrade -n grac41,grac42,grac43 -rolling -src_crshome $GRID_HOME 
                 -dest_crshome /u01/app/grid_new -dest_version 12.1.0.1.0  -fixup -fixupdir /tmp -verbose

 

Install software

  • To upgrade to this release, you must install the Oracle Grid Infrastructure and Oracle Database software into a new Oracle home instead of applying a patch set to the existing Oracle home.
  • This is referred to as an out-of-place upgrade and is different from patch set releases for earlier releases of Oracle Database, where the patch set was always installed in place.

Problems before running rootupgrade.sh

  • Troubleshoot 11gR2 Grid Infrastructure/RAC Database runInstaller Issues (Doc ID 1056322.1)
  • Top 11gR2 Grid Infrastructure Upgrade Issues (Doc ID 1366558.1)

 

Problems after running rootupgrade.sh

  • How to Proceed When Upgrade to 11.2 Grid Infrastructure Cluster Fails (Doc ID 1364947.1)
  • Step 1: Identify cause of rootupgrade.sh failure
Identify the cause of the rootupgrade.sh failure by reviewing the logs in $NEW_HOME/cfgtoollogs/crsconfig,
$NEW_HOME/log, $CIL/logs and $ORACLE_BASE/cfgtoollogs/asmca. Once the root cause is identified and the issue is resolved,
proceed with the steps below.
  • Step 2: Rerun rootupgrade.sh
rootupgrade.sh is restartable when upgrading to 11.2.0.2 or above. After the issue identified in Step 1 has been 
resolved, execute "$NEW_HOME/rootupgrade.sh" as root (even if the cause is unclear, it is still recommended to 
re-run rootupgrade.sh at least once, about 10 minutes after the failure). It will continue from the last failed step. 
If it succeeds, continue with your planned upgrade procedure; continue with the rest of the steps in this document 
only if this step fails (a rerun sketch follows).
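
A typical rerun then looks like this (sketch, as root, with the new grid home and log locations used in this post):
# watch the rootcrs log in one session ...
tail -f /u01/app/12102/grid/cfgtoollogs/crsconfig/rootcrs_gract1.log
# ... and rerun the script in another one; it continues at the last failed step
/u01/app/12102/grid/rootupgrade.sh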

For complete deconfiguration steps read  Note  1364947.1
  • How To Proceed After The Failed Upgrade or Deconfiguration Of The Grid Infrastructure Software In Standalone Environments (Doc ID 1121573.1)

Reference

  • Troubleshoot 11gR2 Grid Infrastructure/RAC Database runInstaller Issues (Doc ID 1056322.1)
  • Top 11gR2 Grid Infrastructure Upgrade Issues (Doc ID 1366558.1)
  • How to Proceed When Upgrade to 11.2 Grid Infrastructure Cluster Fails (Doc ID 1364947.1)
  • How To Proceed After The Failed Upgrade or Deconfiguration Of The Grid Infrastructure Software In Standalone Environments (Doc ID 1121573.1)
  • Things to Consider Before Upgrading to 11.2.0.3/11.2.0.4 Grid Infrastructure/ASM (Doc ID 1363369.1)
  • How to Apply a Grid Infrastructure Patch Before root script (root.sh or rootupgrade.sh) is Executed? (Doc ID 1410202.1)
  • How to Configure or Re-configure Grid Infrastructure With config.sh/config.bat (Doc ID 1354258.1)
  • Oracle® Grid Infrastructure Installation Guide 11g Release 2 (11.2) for Linux
      Appendix F: How to Upgrade to Oracle Grid Infrastructure 11g Release 2
  • http://gavinsoorma.com/2012/07/upgrading-11gr2-rac-grid-infrastructure-to-11-2-0-3/       (Grid Upgrade 11.2.0.1 -> 11.2.0.4) 
  • http://appsdbaworkshop.blogspot.co.uk/2014/03/upgrade-oracle-grid-infrastructure-from.html    (Grid Upgrade 11.2.0.1 -> 11.2.0.4)