Oracle RAC Assessment Report

System Health Score is 89 out of 100

Cluster Summary

Cluster Name: grac4
OS/Kernel Version: LINUX X86-64 OELRHEL 6 2.6.39-400.109.6.el6uek.x86_64
CRS Home - Version: /u01/app/11204/grid - 11.2.0.4.0
DB Home - Version - Names: /u01/app/oracle/product/11204/racdb - 11.2.0.4.0 - grac4
Number of nodes: 3
   Database Servers: 3
raccheck Version: 2.2.3.2_20131213
Collection: raccheck_grac41_grac4_022214_095210.zip
Collection Date: 22-Feb-2014 09:53:40

Note! This version of raccheck is considered valid for 19 days from today or until a new version is available.


Findings Needing Attention

FAIL, WARNING, ERROR and INFO finding details should be reviewed in the context of your environment.

NOTE: Any recommended change should be applied to, and thoroughly tested (functionality and load) in, one or more non-production environments before applying the change to a production environment.

Database Server

Check Id | Status | Type | Message | Status On
DC4495442D7A0CEBE04313C0E50A76E8 | FAIL | OS Check | Package unixODBC-devel-2.2.14-11.el6-x86_64 is recommended but NOT installed | All Database Servers
C1D1B240993425B8E0431EC0E50AFEF5 | FAIL | OS Check | Package unixODBC-devel-2.2.14-11.el6-i686 is recommended but NOT installed | All Database Servers
C1D0BD14BF4A3BCEE0431EC0E50A9DB5 | FAIL | OS Check | Package unixODBC-2.2.14-11.el6-i686 is recommended but NOT installed | All Database Servers
CCF6F44765861F7AE0431EC0E50A72AD | FAIL | OS Check | Operating system hugepages count does not satisfy total SGA requirements | All Database Servers
951C025701C65CC5E040E50A1EC0371F | WARNING | OS Check | OSWatcher is not running as is recommended. | grac43
841A3A9F4A73AC6AE040E50A1EC03FC0 | WARNING | OS Check | Shell limit soft nproc for DB is NOT configured according to recommendation | grac42, grac43
70CFB24C11B52EF5E040E50A1EC03ED0 | WARNING | OS Check | Open files limit (ulimit -n) for current user is NOT set to recommended value >= 65536 or unlimited | grac42, grac43
9AA08EB2573A36C6E040E50A1EC02BD9 | WARNING | OS Check | kernel parameter rp_filter is set to 1. | All Database Servers
E10E99868C34569BE04313C0E50A44C1 | WARNING | OS Check | vm.min_free_kbytes should be set as recommended. | All Database Servers
D35CE19AE68165F3E0431EC0E50A4C09 | WARNING | OS Check | Redo log write time is more than 500 milliseconds | All Database Servers
9DAFD1040CA9389FE040E50A1EC0307C | WARNING | OS Check | Shell limit hard stack for GI is NOT configured according to recommendation | All Database Servers
8C9D63D9441C1F52E040E50A1EC0211F | WARNING | OS Check | NIC bonding is NOT configured for public network (VIP) | All Database Servers
5EA8F4C6C6BDF8F0E0401490CACF067F | WARNING | OS Check | NIC bonding is not configured for interconnect | All Database Servers
841F8C3E78906005E040E50A1EC00357 | WARNING | OS Check | Shell limit hard nproc for GI is NOT configured according to recommendation | All Database Servers
841E706550995C68E040E50A1EC05EFB | WARNING | OS Check | Shell limit hard nofile for GI is NOT configured according to recommendation | All Database Servers
841D87785594F263E040E50A1EC020D6 | WARNING | OS Check | Shell limit soft nofile for GI is NOT configured according to recommendation | All Database Servers
841C7DEB776DB4BBE040E50A1EC0782E | WARNING | OS Check | Shell limit soft nproc for GI is NOT configured according to recommendation | All Database Servers
834835A4EC032658E040E50A1EC056F6 | WARNING | OS Check | /tmp is NOT on a dedicated filesystem | All Database Servers
833F68D88AE57B7CE040E50A1EC02BE7 | WARNING | SQL Check | One or more redo log groups are NOT multiplexed | All Databases
833F12C25516ACAFE040E50A1EC020F7 | WARNING | SQL Check | Controlfile is NOT multiplexed | All Databases
DC3D819F5D2A50FEE04312C0E50AFF9F | INFO | OS Check | Parallel Execution Health-Checks and Diagnostics Reports | All Database Servers
D957C871B811597AE04312C0E50A91BF | INFO | ASM Check | One or more disks found which are not part of any disk group | All ASM Instances
BBB4357BF09B79D6E0431EC0E50AFB57 | INFO | OS Check | Information about hanganalyze and systemstate dump | All Database Servers
5E4956EE574FB034E0401490CACF2F84 | INFO | OS Check | Jumbo frames (MTU >= 8192) are not configured for interconnect | All Database Servers
83D8032AFDE57746E040E50A1EC00806 | INFO | OS Check | Hugepages configuration is NOT correct | All Database Servers
8343C0D6A9D8702BE040E50A1EC045C8 | INFO | SQL Check | Some data or temp files are not autoextensible | All Databases
831B9FABDB6CFCB4E040E50A1EC034C0 | INFO | OS Check | audit_file_dest has audit files older than 30 days for grac4 | All Database Servers
6890329C1FFFCEDDE040E50A1EC02FED | INFO | OS Check | At some times checkpoints are not being completed | All Database Servers
6556EAA74E28214FE0401490CACF6C89 | INFO | OS Check | $CRS_HOME/log/hostname/client directory has too many older log files. | All Database Servers


MAA Scorecard

Status | Type | Message | Status On

DATABASE FAILURE PREVENTION BEST PRACTICES: PASS
Description
Oracle database can be configured with best practices that are applicable to all Oracle databases, including single-instance, Oracle RAC databases, Oracle RAC One Node databases, and the primary and standby databases in Oracle Data Guard configurations. Key HA Benefits:
  • Improved recoverability
  • Improved stability

Best Practices
PASS | SQL Check | All tablespaces are locally managed tablespaces | All Databases
PASS | SQL Check | All tablespaces are using Automatic segment space management | All Databases
PASS | SQL Check | Default temporary tablespace is set | All Databases
PASS | SQL Check | Database Archivelog Mode is set to ARCHIVELOG | All Databases
PASS | SQL Check | The SYS and SYSTEM userids have a default tablespace of SYSTEM | All Databases
COMPUTER FAILURE PREVENTION BEST PRACTICES: FAIL
Description
Oracle RAC and Oracle Clusterware allow Oracle Database to run any packaged or custom application across a set of clustered servers. This capability provides server-side high availability and scalability. If a clustered server fails, Oracle Database continues running on the surviving servers. When more processing power is needed, you can add another server without interrupting access to data. Key HA Benefits:
  • Zero database downtime for node and instance failures.
  • Application brownout can be zero or seconds, compared to minutes or even an hour with third-party cold cluster failover solutions.
  • Oracle RAC and Oracle Clusterware support rolling upgrade for most hardware and software changes, excluding Oracle RDBMS patch sets and new database releases.
Best Practices
WARNING | SQL Parameter Check | fast_start_mttr_target should be greater than or equal to 300 | All Instances
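A finding like this is typically resolved by setting the parameter cluster-wide. The SQL*Plus sketch below is illustrative only: 300 seconds is the minimum this check expects, not a value tuned for this cluster's workload, so validate it against your recovery-time objectives first.

```sql
-- Illustrative sketch, not taken from this report's environment.
ALTER SYSTEM SET fast_start_mttr_target=300 SCOPE=BOTH SID='*';

-- Verify the effective value on every instance:
SELECT inst_id, value FROM gv$parameter WHERE name = 'fast_start_mttr_target';
```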
DATA CORRUPTION PREVENTION BEST PRACTICES: FAIL
Description
The MAA recommended way to achieve the most comprehensive data corruption prevention and detection is to use Oracle Active Data Guard and configure the DB_BLOCK_CHECKING, DB_BLOCK_CHECKSUM, and DB_LOST_WRITE_PROTECT database initialization parameters on the primary database and any Data Guard standby databases. Key HA Benefits:
  • Application downtime can be reduced from hours and days to seconds to no downtime.
  • Prevention, quick detection and fast repair of data block corruptions.
  • With Active Data Guard, data block corruptions can be repaired automatically.
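As a sketch of how the three parameters named above are typically set: the MEDIUM/FULL/TYPICAL values below are common MAA recommendations, not values taken from this cluster, and DB_BLOCK_CHECKING in particular adds CPU overhead that should be measured in a non-production environment first.

```sql
-- Hedged example: confirm the recommended values for your release and
-- measure the overhead before applying in production.
ALTER SYSTEM SET db_block_checking='MEDIUM' SCOPE=BOTH SID='*';
ALTER SYSTEM SET db_block_checksum='FULL' SCOPE=BOTH SID='*';
ALTER SYSTEM SET db_lost_write_protect='TYPICAL' SCOPE=BOTH SID='*';
```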

Best Practices
WARNING | OS Check | Database parameter DB_BLOCK_CHECKING on PRIMARY is NOT set to the recommended value. | All Database Servers
PASS | SQL Check | The data files are all recoverable | All Databases
PASS | SQL Check | No reported block corruptions in V$DATABASE_BLOCK_CORRUPTIONS | All Databases
LOGICAL CORRUPTION PREVENTION BEST PRACTICES: FAIL
Description
Oracle Flashback Technology enables fast logical failure repair. Oracle recommends that you use automatic undo management with sufficient space to attain your desired undo retention guarantee, enable Oracle Flashback Database, and allocate sufficient space and I/O bandwidth in the fast recovery area. Application monitoring is required for early detection. Effective and fast repair comes from leveraging and rehearsing the most common application-specific logical failures and using the different flashback features effectively (e.g., flashback query, flashback version query, flashback transaction query, flashback transaction, flashback drop, flashback table, and flashback database). Key HA Benefits:
  • With application monitoring and rehearsed repair actions using flashback technologies, application downtime can be reduced from hours or days to the time it takes to detect the logical inconsistency.
  • Fast repair for logical failures caused by malicious or accidental DML or DDL operations.
  • Fast point-in-time repair at the appropriate level of granularity: transaction, table, or database.
Questions to consider: Can your application or monitoring infrastructure detect logical inconsistencies? Is your operations team prepared to use the various flashback technologies to repair quickly and efficiently? Are security practices enforced to prevent the unauthorized privileges that can result in logical inconsistencies?
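For reference, enabling Flashback Database on 11.2 can be sketched as follows. This assumes a fast recovery area (db_recovery_file_dest and db_recovery_file_dest_size) is already configured, and the retention target shown is illustrative, not a recommendation for this cluster.

```sql
-- Illustrative sketch; requires a configured fast recovery area.
ALTER SYSTEM SET db_flashback_retention_target=1440 SCOPE=BOTH SID='*';  -- minutes
ALTER DATABASE FLASHBACK ON;  -- possible with the database open in 11.2

SELECT flashback_on FROM v$database;  -- should now report YES
```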
Best Practices
FAIL | SQL Check | Flashback on PRIMARY is not configured | All Databases
PASS | SQL Parameter Check | RECYCLEBIN on PRIMARY is set to the recommended value | All Instances
PASS | SQL Parameter Check | Database parameter UNDO_RETENTION on PRIMARY is not null | All Instances
DATABASE/CLUSTER/SITE FAILURE PREVENTION BEST PRACTICES: FAIL
Description
Oracle Active Data Guard (Oracle Database 11g and higher) is the real-time data protection and availability solution that eliminates single points of failure by maintaining one or more synchronized physical replicas of the production database. If an unplanned outage of any kind impacts the production database, applications and users can quickly fail over to a synchronized standby, minimizing downtime and preventing data loss. An Active Data Guard standby can be used to offload read-only applications, ad-hoc queries, and backups from the primary database, or be dual-purposed as a test system while it provides disaster protection. An Active Data Guard standby can also be used to minimize downtime for planned maintenance when upgrading to new Oracle Database patch sets and releases, and for select migrations. For zero data loss protection and the fastest recovery time, deploy a local Data Guard standby database with Data Guard Fast-Start Failover and integrated client failover. For protection against outages impacting both the primary and the local standby, the entire data center, or a broad geography, deploy a second Data Guard standby database at a remote location. Key HA Benefits:
  • With Active Data Guard (11g Release 2 and higher) and real-time apply, data block corruptions can be repaired automatically, and downtime can be reduced from hours or days of application impact to zero downtime with zero data loss.
  • With MAA best practices, Data Guard Fast-Start Failover (typically to a local standby), and integrated client failover, downtime from database, cluster, and site failures can be reduced from hours or days to seconds or minutes.
  • With a remote standby database (disaster recovery site), you have protection from complete site failures.
In all cases, the Active Data Guard instances can be active and used for other activities.
Data Guard can also reduce risks and downtime for planned maintenance activities by using database rolling upgrade with transient logical standby, standby-first patch apply, and database migrations. Active Data Guard provides optimal data protection by using physical replication and comprehensive Oracle validation to maintain an exact byte-for-byte copy of the primary database that can be open read-only to offload reporting, ad-hoc queries, and backups. For other advanced replication requirements, where read-write access to a replica database is required while it is being synchronized with the primary database, see Oracle GoldenGate logical replication. Oracle GoldenGate can be used to support heterogeneous database platforms and database releases, to provide an effective read-write full or subset logical replica, and to reduce or eliminate downtime for application, database, or system changes. The main trade-off of Oracle GoldenGate's flexible logical replication solution is the additional administration required of application developers and database administrators.
Best Practices
FAIL | SQL Check | Primary database is NOT protected with Data Guard (standby database) for real-time data protection and availability | All Databases
CLIENT FAILOVER OPERATIONAL BEST PRACTICES: PASS
Description
A highly available architecture requires the ability of the application tier to transparently fail over to a surviving instance or database advertising the required service. This ensures that applications are generally available or minimally impacted in the event of node failure, instance failure, or database failures. Oracle listeners can be configured to throttle incoming connections to avoid logon storms after a database node or instance failure. The connection rate limiter feature in the Oracle Net Listener enables a database administrator (DBA) to limit the number of new connections handled by the listener.
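The connection rate limiter described above is configured in listener.ora. The sketch below is illustrative only: the listener name, VIP host name, and limit values are assumptions for this example, not settings taken from this cluster.

```
# Cap new connections handled per second for this listener (value illustrative).
CONNECTION_RATE_LISTENER=10

LISTENER =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = tcp)(HOST = grac41-vip)(PORT = 1521)(RATE_LIMIT = yes)))
```

Endpoints with RATE_LIMIT = yes share the listener-wide rate; a numeric RATE_LIMIT on an endpoint would set a per-endpoint rate instead.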
Best Practices
PASS | OS Check | Clusterware is running | All Database Servers
ORACLE RECOVERY MANAGER (RMAN) BEST PRACTICES: PASS
Description
Oracle Recovery Manager (RMAN) is an Oracle Database utility to manage database backup and, more importantly, the recovery of the database. RMAN eliminates operational complexity while providing superior performance and availability of the database. RMAN determines the most efficient method of executing the requested backup, restoration, or recovery operation and then submits these operations to the Oracle Database server for processing. RMAN and the server automatically identify modifications to the structure of the database and dynamically adjust the required operation to adapt to the changes. RMAN has many unique HA capabilities that can be challenging or impossible for third-party backup and restore utilities to deliver, such as:
  • In-depth Oracle data block checks during every backup or restore operation
  • Efficient block media recovery
  • Automatic recovery through complex database state changes such as resetlogs or past Data Guard role transitions
  • Fast incremental backup and restore operations
  • Integrated retention policies and backup file management with Oracle’s fast recovery area
  • Online backups without the need to put the database or data file in hot backup mode.
RMAN backups are strategic to MAA so that a damaged database (the complete database or a subset such as a data file, tablespace, log file, or controlfile) can be recovered; for the fastest recovery, however, use Data Guard or GoldenGate. RMAN operations are also important for detecting any corrupted blocks in data files that are not frequently accessed.
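A minimal sketch of these practices at the RMAN prompt follows; the retention window and backup strategy shown are illustrative, not this cluster's actual configuration, and BACKUP VALIDATE reads the database to detect corrupt blocks without producing backup pieces.

```
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;
RMAN> CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;
RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE PLUS ARCHIVELOG;
RMAN> BACKUP VALIDATE CHECK LOGICAL DATABASE;
```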
Best Practices
PASS | OS Check | control_file_record_keep_time is within recommended range [1-9] for grac4 | All Database Servers
PASS | SQL Check | RMAN controlfile autobackup is set to ON | All Databases
PASS | SQL Check | Fast Recovery Area (FRA) has sufficient reclaimable space | All Databases
OPERATIONAL BEST PRACTICES: INFO
Description
Operational best practices are an essential prerequisite to high availability. The integration of Oracle Maximum Availability Architecture (MAA) operational and configuration best practices with Oracle Exadata Database Machine (Exadata MAA) provides the most comprehensive high availability solution available for the Oracle Database.
Best Practices
DATABASE CONSOLIDATION BEST PRACTICES: INFO
Description
Database consolidation requires additional planning and management to ensure HA requirements are met.
Best Practices


GRID and RDBMS patch recommendation Summary report

Summary Report for "grac41"

Clusterware patches
Total patches | Applied on CRS | Applied on RDBMS | Applied on ASM
0 | 0 | 0 | 0

RDBMS homes patches
Total patches | Applied on RDBMS | Applied on ASM | ORACLE_HOME
0 | 0 | 0 | /u01/app/oracle/product/11204/racdb

Summary Report for "grac42"

Clusterware patches
Total patches | Applied on CRS | Applied on RDBMS | Applied on ASM
0 | 0 | 0 | 0

RDBMS homes patches
Total patches | Applied on RDBMS | Applied on ASM | ORACLE_HOME
0 | 0 | 0 | /u01/app/oracle/product/11204/racdb

Summary Report for "grac43"

Clusterware patches
Total patches | Applied on CRS | Applied on RDBMS | Applied on ASM
0 | 0 | 0 | 0

RDBMS homes patches
Total patches | Applied on RDBMS | Applied on ASM | ORACLE_HOME
0 | 0 | 0 | /u01/app/oracle/product/11204/racdb


GRID and RDBMS patch recommendation Detailed report

Detailed report for "grac41"




0 Recommended CRS patches for 112040 from /u01/app/11204/grid

0 Recommended RDBMS patches for 112040 from /u01/app/oracle/product/11204/racdb

Detailed report for "grac42"




0 Recommended CRS patches for 112040 from /u01/app/11204/grid

0 Recommended RDBMS patches for 112040 from /u01/app/oracle/product/11204/racdb

Detailed report for "grac43"




0 Recommended CRS patches for 112040 from /u01/app/11204/grid

0 Recommended RDBMS patches for 112040 from /u01/app/oracle/product/11204/racdb

Findings Passed

Database Server

Check Id | Status | Type | Message | Status On
E7E213A6D9151EEBE04312C0E50A85C3 | PASS | OS Check | Berkeley Database location points to correct GI_HOME | All Database Servers
E6862960C4CD3E9DE04313C0E50A9C21 | PASS | ASM Check | Linux Disk I/O Scheduler is configured as recommended | All ASM Instances
E47ECDCFE09A122CE04313C0E50A35EC | PASS | OS Check | There are no duplicate parameter entries in the database init.ora(spfile) file | All Database Servers
E47EBE3023936D3CE04313C0E50A7A0E | PASS | ASM Check | There are no duplicate parameter entries in the ASM init.ora(spfile) file | All ASM Instances
E1DF2A6140395D42E04312C0E50A0A6C | PASS | ASM Check | All diskgroups from v$asm_diskgroups are registered in clusterware registry | All ASM Instances
E18D7F9837B7754EE04313C0E50AD4AA | PASS | OS Check | Package cvuqdisk-1.0.9-1-x86_64 meets or exceeds recommendation | All Database Servers
E1500ADF060A3EA2E04313C0E50A3676 | PASS | OS Check | OLR Integrity check Succeeded | All Database Servers
E12A91DC10F31AD7E04312C0E50A6361 | PASS | OS Check | pam_limits configured properly for shell limits | All Database Servers
D348A289DD032396E0431EC0E50A26D5 | PASS | OS Check | OCR and Voting disks are stored in ASM | All Database Servers
D0C2640EBA071F73E0431EC0E50AA159 | PASS | OS Check | System clock is synchronized to hardware clock at system shutdown | All Database Servers
DCB4C2CB907F4C76E04312C0E50A7667 | PASS | OS Check | Linux transparent huge pages are disabled | All Database Servers
DC28F07D94FD1B10E04313C0E50A9FD8 | PASS | OS Check | TFA Collector is installed and running | All Database Servers
DBC2C9218542349FE04312C0E50AC1E9 | PASS | OS Check | No clusterware resources are in unknown state | All Database Servers
D9A5C0E2DE430A85E04312C0E50AC8B0 | PASS | ASM Check | No corrupt ASM header blocks indicated in ASM alert log (ORA-15196 errors) | All ASM Instances
D112D25A574F13DCE0431EC0E50A55CD | PASS | OS Check | Grid infrastructure network broadcast requirements are met | All Database Servers
CB5BD768E88F7F71E0431EC0E50A346F | PASS | OS Check | Package libgcc-4.4.4-13.el6-x86_64 meets or exceeds recommendation | All Database Servers
6B515A724AB85906E040E50A1EC039F6 | PASS | SQL Check | No read/write errors found for ASM disks | All Databases
C1D39B834AA46E44E0431EC0E50A5366 | PASS | OS Check | Package sysstat-9.0.4-11.el6-x86_64 meets or exceeds recommendation | All Database Servers
C1D39B834AA36E44E0431EC0E50A5366 | PASS | OS Check | Package libgcc-4.4.4-13.el6-i686 meets or exceeds recommendation | All Database Servers
C1D34D17A4F45402E0431EC0E50A5DD9 | PASS | OS Check | Package binutils-2.20.51.0.2-5.11.el6-x86_64 meets or exceeds recommendation | All Database Servers
C1D348AB978E3873E0431EC0E50A19F0 | PASS | OS Check | Package glibc-2.12-1.7.el6-x86_64 meets or exceeds recommendation | All Database Servers
C1D30E313A4C0B0BE0431EC0E50A1931 | PASS | OS Check | Package libstdc++-4.4.4-13.el6-x86_64 meets or exceeds recommendation | All Database Servers
C1D2A95C2BF31FE4E0431EC0E50AB101 | PASS | OS Check | Package libstdc++-4.4.4-13.el6-i686 meets or exceeds recommendation | All Database Servers
C1D29B4860DA19C2E0431EC0E50AFB36 | PASS | OS Check | Package glibc-2.12-1.7.el6-i686 meets or exceeds recommendation | All Database Servers
C1D29B4860D919C2E0431EC0E50AFB36 | PASS | OS Check | Package gcc-4.4.4-13.el6-x86_64 meets or exceeds recommendation | All Database Servers
C1D1CC4D830F3B90E0431EC0E50A559F | PASS | OS Check | Package make-3.81-19.el6 meets or exceeds recommendation | All Database Servers
C1D1CC4D830E3B90E0431EC0E50A559F | PASS | OS Check | Package libstdc++-devel-4.4.4-13.el6-i686 meets or exceeds recommendation | All Database Servers
C1D1BA6C1CD213F9E0431EC0E50A8B9C | PASS | OS Check | Package libaio-devel-0.3.107-10.el6-x86_64 meets or exceeds recommendation | All Database Servers
C1D1BA6C1CD013F9E0431EC0E50A8B9C | PASS | OS Check | Package libaio-0.3.107-10.el6-x86_64 meets or exceeds recommendation | All Database Servers
C1D1B240991A25B8E0431EC0E50AFEF5 | PASS | OS Check | Package compat-libstdc++-33-3.2.3-69.el6-i686 meets or exceeds recommendation | All Database Servers
C1D1973D1B4C0EA1E0431EC0E50A9108 | PASS | OS Check | Package glibc-devel-2.12-1.7.el6-x86_64 meets or exceeds recommendation | All Database Servers
C1D15659D96376CBE0431EC0E50A74F5 | PASS | OS Check | Package glibc-devel-2.12-1.7.el6-i686 meets or exceeds recommendation | All Database Servers
C1D15659D96276CBE0431EC0E50A74F5 | PASS | OS Check | Package compat-libcap1-1.10-1-x86_64 meets or exceeds recommendation | All Database Servers
C1D0EE98B4BC4083E0431EC0E50ADCB2 | PASS | OS Check | Package ksh-20100621-12.el6-x86_64 meets or exceeds recommendation | All Database Servers
C1D0BD14BF493BCEE0431EC0E50A9DB5 | PASS | OS Check | Package libaio-0.3.107-10.el6-i686 meets or exceeds recommendation | All Database Servers
C1D0BD14BF483BCEE0431EC0E50A9DB5 | PASS | OS Check | Package libstdc++-devel-4.4.4-13.el6-x86_64 meets or exceeds recommendation | All Database Servers
C1CF431B59054969E0431EC0E50A9B88 | PASS | OS Check | Package gcc-c++-4.4.4-13.el6-x86_64 meets or exceeds recommendation | All Database Servers
C1CF431B59034969E0431EC0E50A9B88 | PASS | OS Check | Package compat-libstdc++-33-3.2.3-69.el6-x86_64 meets or exceeds recommendation | All Database Servers
C1CEC9D9E9432BDFE0431EC0E50AF329 | PASS | OS Check | Package libaio-devel-0.3.107-10.el6-i686 meets or exceeds recommendation | All Database Servers
89130F49748E6CC7E040E50A1EC07A44 | PASS | OS Check | Remote listener is set to SCAN name | All Database Servers
65F8FA5F9B838079E040E50A1EC059DC | PASS | OS Check | Value of remote_listener parameter is able to tnsping | All Database Servers
D6972E101386682AE0431EC0E50A9FD9 | PASS | OS Check | No tnsname alias is defined as scanname:port | All Database Servers
D6972E101384682AE0431EC0E50A9FD9 | PASS | OS Check | ezconnect is configured in sqlnet.ora | All Database Servers
BEAE25E17C4130E4E0431EC0E50A8C3F | PASS | SQL Parameter Check | Database Parameter parallel_execution_message_size is set to the recommended value | All Instances
B6457DE59F9D457EE0431EC0E50A1DD2 | PASS | SQL Parameter Check | Database parameter CURSOR_SHARING is set to recommended value | All Instances
B167E5248D476B74E0431EC0E50A3E27 | PASS | SQL Check | All bigfile tablespaces have non-default maxbytes values set | All Databases
AD6481CF9BDD6058E040E50A1EC021EC | PASS | OS Check | umask for RDBMS owner is set to 0022 | All Database Servers
9DEBED7B8DAB583DE040E50A1EC01BA0 | PASS | ASM Check | ASM Audit file destination file count <= 100,000 | All ASM Instances
64DC3E59CB88B984E0401490CACF1104 | PASS | SQL Parameter Check | asm_power_limit is set to recommended value of 1 | All Instances
9661449019481CF6E040E50A1EC01682 | PASS | OS Check | NTP is running with correct setting | All Database Servers
951C025701C65CC5E040E50A1EC0371F | PASS | OS Check | OSWatcher is running | grac41, grac42
90DCECE833790E9DE040E50A1EC0750A | PASS | OS Check | CSS reboottime is set to the default value of 3 | All Database Servers
90DCB860F9380638E040E50A1EC07248 | PASS | OS Check | CSS disktimeout is set to the default value of 200 | All Database Servers
8E1B5EE973BAA8C6E040E50A1EC0622E | PASS | OS Check | ohasd Log Ownership is Correct (root root) | All Database Servers
8E1A46CB0BDA0608E040E50A1EC022CD | PASS | OS Check | ohasd/orarootagent_root Log Ownership is Correct (root root) | All Database Servers
8E197A76D887BAC4E040E50A1EC07E0B | PASS | OS Check | crsd/orarootagent_root Log Ownership is Correct (root root) | All Database Servers
8E19457488167806E040E50A1EC00310 | PASS | OS Check | crsd Log Ownership is Correct (root root) | All Database Servers
CB94D8434AA02210E0431EC0E50A7C40 | PASS | SQL Parameter Check | Database Parameter memory_target is set to the recommended value | All Instances
898E1DF96754C57FE040E50A1EC03224 | PASS | ASM Check | CRS version is higher than or equal to ASM version. | All ASM Instances
8915B823FCEBC259E040E50A1EC04AD6 | PASS | OS Check | Local listener init parameter is set to local node VIP | All Database Servers
8914F5D0A9AB85BAE040E50A1EC04A31 | PASS | OS Check | Number of SCAN listeners is equal to the recommended number of 3. | All Database Servers
87604C73D768DF7AE040E50A1EC0566B | PASS | OS Check | All voting disks are online | All Database Servers
90E150135F6859C4E040E50A1EC01FF5 | PASS | OS Check | CSS misscount is set to the default value of 30 | All Database Servers
85F282CFD5DADCB4E040E50A1EC01BC9 | PASS | SQL Check | All redo log files are of same size | All Databases
856A9B77AF14DD9FE040E50A1EC00285 | PASS | OS Check | SELinux is not being Enforced. | All Database Servers
8529D3798EA039F3E040E50A1EC07218 | PASS | OS Check | Public interface is configured and exists in OCR | All Database Servers
84C193C69EE36512E040E50A1EC06466 | PASS | OS Check | ip_local_port_range is configured according to recommendation | All Database Servers
84BE8B9C4817090DE040E50A1EC07DB8 | PASS | OS Check | kernel.shmmax parameter is configured according to recommendation | All Database Servers
84BE4DE1F00AD833E040E50A1EC07771 | PASS | OS Check | Kernel Parameter fs.file-max configuration meets or exceeds recommendation | All Database Servers
8449C298FC0EF19CE040E50A1EC00965 | PASS | OS Check | Shell limit hard stack for DB is configured according to recommendation | All Database Servers
841FD604C3C8F2B1E040E50A1EC0122F | PASS | OS Check | Free space in /tmp directory meets or exceeds recommendation of minimum 1GB | All Database Servers
841F0977B92F0185E040E50A1EC070BB | PASS | OS Check | Shell limit soft nofile for DB is configured according to recommendation | All Database Servers
841E706550975C68E040E50A1EC05EFB | PASS | OS Check | Shell limit hard nproc for DB is configured according to recommendation | All Database Servers
841A3A9F4A74AC6AE040E50A1EC03FC0 | PASS | OS Check | Shell limit hard nofile for DB is configured according to recommendation | All Database Servers
841A3A9F4A73AC6AE040E50A1EC03FC0 | PASS | OS Check | Shell limit soft nproc for DB is configured according to recommendation | grac41
83C301ACFF203C9BE040E50A1EC067EB | PASS | OS Check | Linux Swap Configuration meets or exceeds Recommendation | All Database Servers
833D92F95B0A5CB6E040E50A1EC06498 | PASS | SQL Parameter Check | remote_login_passwordfile is configured according to recommendation | All Instances
7EDE9EBEC9429FBAE040E50A1EC03AED | PASS | OS Check | $ORACLE_HOME/bin/oradism ownership is root | All Database Servers
7EDDA570A1827FBAE040E50A1EC02EB1 | PASS | OS Check | $ORACLE_HOME/bin/oradism setuid bit is set | All Database Servers
77029A014E159389E040E50A1EC02060 | PASS | SQL Check | Avg message sent queue time on ksxp is <= recommended | All Databases
770244572FC70393E040E50A1EC01299 | PASS | SQL Check | Avg message sent queue time is <= recommended | All Databases
7701CFDB2F6EF98EE040E50A1EC00573 | PASS | SQL Check | Avg message received queue time is <= recommended | All Databases
7674FEDB08C2FDA2E040E50A1EC0156F | PASS | SQL Check | No Global Cache lost blocks detected | All Databases
7674C09669C5BCE6E040E50A1EC011E5 | PASS | SQL Check | Failover method (SELECT) and failover mode (BASIC) are configured properly | All Databases
70CFB24C11B52EF5E040E50A1EC03ED0 | PASS | OS Check | Open files limit (ulimit -n) for current user is set to recommended value >= 65536 or unlimited | grac41
670FE09A93E12317E040E50A1EC018E9 | PASS | SQL Check | Avg GC CURRENT Block Receive Time Within Acceptable Range | All Databases
670FE09A93E02317E040E50A1EC018E9 | PASS | SQL Check | Avg GC CR Block Receive Time Within Acceptable Range | All Databases
66FEB2848B21DB24E040E50A1EC00A0C | PASS | SQL Check | Tablespace allocation type is SYSTEM for all appropriate tablespaces for grac4 | All Databases
66EBC49E368387CAE040E50A1EC03B98 | PASS | OS Check | background_dump_dest does not have any files older than 30 days | All Database Servers
66EABE4A113A3B1EE040E50A1EC006B2 | PASS | OS Check | Alert log is not too big | All Database Servers
66EAB3BB6CF79C54E040E50A1EC06084 | PASS | OS Check | No ORA-07445 errors found in alert log | All Database Servers
66E70B43167837ABE040E50A1EC02FEA | PASS | OS Check | No ORA-00600 errors found in alert log | All Database Servers
66E6B013BAE3EFBEE040E50A1EC01F87 | PASS | OS Check | user_dump_dest does not have trace files older than 30 days | All Database Servers
66E59E657BFC85F4E040E50A1EC0501D | PASS | OS Check | core_dump_dest does not have too many older core dump files | All Database Servers
669862F59599CA2AE040E50A1EC018FD | PASS | OS Check | Kernel Parameter SEMMNS OK | All Database Servers
66985D930D2DF070E040E50A1EC019EB | PASS | OS Check | Kernel Parameter kernel.shmmni OK | All Database Servers
6697946779AC8AD3E040E50A1EC03C0E | PASS | OS Check | Kernel Parameter SEMMSL OK | All Database Servers
6696C7B368784A66E040E50A1EC01B92 | PASS | OS Check | Kernel Parameter SEMMNI OK | All Database Servers
66959FC16B423896E040E50A1EC07CDC | PASS | OS Check | Kernel Parameter SEMOPM OK | All Database Servers
6694F204EE47A92DE040E50A1EC07145 | PASS | OS Check | Kernel Parameter kernel.shmall OK | All Database Servers
65E6F4BD15BB92EBE040E50A1EC04384 | PASS | SQL Parameter Check | Remote listener parameter is set to achieve load balancing and failover | All Instances
6580DCAAE8A28F5BE0401490CACF6186 | PASS | OS Check | The number of async IO descriptors is sufficient (/proc/sys/fs/aio-max-nr) | All Database Servers
65414495B2047F26E0401490CACF0FED | PASS | OS Check | OCR is being backed up daily | All Database Servers
6050196F644254BDE0401490CACF203D | PASS | OS Check | net.core.rmem_max is Configured Properly | All Database Servers
60500BAFB377E3ADE0401490CACF2245 | PASS | SQL Parameter Check | Instance is using spfile | All Instances
5E5B7EEA0010DC6BE0401490CACF3B82 | PASS | OS Check | Interconnect is configured on non-routable network addresses | All Database Servers
5DC7EBCB6B72E046E0401490CACF321A | PASS | OS Check | None of the hostnames contains an underscore character | All Database Servers
5ADE14B5205111D1E0401490CACF673B | PASS | OS Check | net.core.rmem_default Is Configured Properly | All Database Servers
5ADD88EC8E0AFF2EE0401490CACF0C10 | PASS | OS Check | net.core.wmem_max Is Configured Properly | All Database Servers
5ADCECF64757E914E0401490CACF4BBD | PASS | OS Check | net.core.wmem_default Is Configured Properly | All Database Servers
595A436B3A7172FDE0401490CACF5BA5 | PASS | OS Check | ORA_CRS_HOME environment variable is not set | All Database Servers
4B8B98A9C9644FADE0401490CACF6528 | PASS | SQL Check | SYS.AUDSES$ sequence cache size >= 10,000 | All Databases
4B881724781BB7BEE0401490CACF59FD | PASS | SQL Check | SYS.IDGEN1$ sequence cache size >= 1,000 | All Databases

Cluster Wide

Check Id | Status | Type | Message | Status On
9EC93C7514C11512E040E50A1EC048DD | PASS | Cluster Wide Check | All nodes are using same NTP server across cluster | Cluster Wide
8FC4FA469BAA945EE040E50A1EC06AC6 | PASS | Cluster Wide Check | Time zone matches for root user across cluster | Cluster Wide
8FC307D9A9CEF95FE040E50A1EC01580 | PASS | Cluster Wide Check | Time zone matches for GI/CRS software owner across cluster | Cluster Wide
8BEFCB0B4C9DBF5CE040E50A1EC03B14 | PASS | Cluster Wide Check | Operating system version matches across cluster. | Cluster Wide
8BEFA88017530395E040E50A1EC05E99 | PASS | Cluster Wide Check | OS Kernel version (uname -r) matches across cluster. | Cluster Wide
8955120D63FCAC2DE040E50A1EC006CA | PASS | Cluster Wide Check | Clusterware active version matches across cluster. | Cluster Wide
895255E0D2A63C8CE040E50A1EC00A43 | PASS | Cluster Wide Check | RDBMS software version matches across cluster. | Cluster Wide
88704DB19306DC92E040E50A1EC02C92 | PASS | Cluster Wide Check | Timezone matches for current user across cluster. | Cluster Wide
7E8D719B61F43773E040E50A1EC029C0 | PASS | Cluster Wide Check | Public network interface names are the same across cluster | Cluster Wide
7E40D02BD3C22C5AE040E50A1EC033F5 | PASS | Cluster Wide Check | GI/CRS software owner UID matches across cluster | Cluster Wide
7E3FAC1843F137ABE040E50A1EC0139B | PASS | Cluster Wide Check | RDBMS software owner UID matches across cluster | Cluster Wide
7E2DCCF1429A6A8FE040E50A1EC05FE6 | PASS | Cluster Wide Check | Private interconnect interface names are the same across cluster | Cluster Wide

Top

Best Practices and Other Recommendations

Best Practices and Other Recommendations are generally items documented in various sources which could be overlooked. raccheck assesses them and calls attention to any findings.


Top

Same NTP server across cluster

Success FactorMAKE SURE MACHINE CLOCKS ARE SYNCHRONIZED ON ALL NODES USING NTP
Recommendation
 Make sure machine clocks are synchronized on all nodes to the same NTP source.

NOTE: raccheck expects the NTP time source to be the same across the cluster based on the NTP server IP address.  In cases where the customer is using a fault tolerant configuration for NTP servers and the customer is certain that the configuration is correct and the same time source is being utilized then a finding for this check can be ignored.

Implement NTP (Network Time Protocol) on all nodes.
Prevents evictions and helps to facilitate problem diagnosis.

Also use the -x option (i.e. ntpd -x, xntpd -x) if available to prevent time from moving backwards in large jumps. Slewing breaks a time correction into many small adjustments so that it does not impact CRS. Enterprise Linux: see /etc/sysconfig/ntpd; Solaris: set "slewalways yes" and "disable pll" in /etc/inet/ntp.conf.
For example:
       # Drop root to id 'ntp:ntp' by default.
       OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
       # Set to 'yes' to sync hw clock after successful ntpdate
       SYNC_HWCLOCK=no
       # Additional options for ntpdate
       NTPDATE_OPTIONS=""

Time servers operate in a pyramid structure in which the top of the NTP stack is usually an external time source (such as a GPS clock). Time then trickles down through the network switch stack to the connected servers.
This NTP stack acts as the NTP server; ensuring that all RAC nodes act as clients of this server in slewing mode keeps each time change to a minute amount.

Changes in global time that reconcile atomic-clock accuracy with the Earth's rotational wobble are thus accounted for with minimal effect. This is sometimes referred to as the "leap second" epoch (for example, one second was inserted between UTC 12/31/2008 23:59:59 and 01/01/2009 00:00:00).
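As a quick sanity check, the presence of the -x slewing option can be confirmed from the OPTIONS line in /etc/sysconfig/ntpd. The snippet below is a minimal sketch; the sample OPTIONS line stands in for reading the real file:

```shell
# Sketch: verify ntpd was configured with the -x slewing option on
# Enterprise Linux. The sample line below stands in for the real
# /etc/sysconfig/ntpd contents.
options_line='OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"'
if printf '%s\n' "$options_line" | grep -q -- '-x'; then
  slew_status="slewing enabled (-x present)"
else
  slew_status="WARNING: -x option missing"
fi
echo "$slew_status"
```

On a live system the sample assignment would be replaced by reading the OPTIONS line from /etc/sysconfig/ntpd.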

 
Links
Needs attention on-
Passed onCluster Wide

Status on Cluster Wide:
PASS => All nodes are using same NTP server across cluster


grac41 =
grac42 =
grac43 =
Top

Root time zone

Success FactorMAKE SURE MACHINE CLOCKS ARE SYNCHRONIZED ON ALL NODES USING NTP
Recommendation
 Make sure machine clocks are synchronized on all nodes to the same NTP source.
Implement NTP (Network Time Protocol) on all nodes.
Prevents evictions and helps to facilitate problem diagnosis.

Also use the -x option (i.e. ntpd -x, xntpd -x) if available to prevent time from moving backwards in large jumps. Slewing breaks a time correction into many small adjustments so that it does not impact CRS. Enterprise Linux: see /etc/sysconfig/ntpd; Solaris: set "slewalways yes" and "disable pll" in /etc/inet/ntp.conf.
For example:
       # Drop root to id 'ntp:ntp' by default.
       OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
       # Set to 'yes' to sync hw clock after successful ntpdate
       SYNC_HWCLOCK=no
       # Additional options for ntpdate
       NTPDATE_OPTIONS=""

Time servers operate in a pyramid structure in which the top of the NTP stack is usually an external time source (such as a GPS clock). Time then trickles down through the network switch stack to the connected servers.
This NTP stack acts as the NTP server; ensuring that all RAC nodes act as clients of this server in slewing mode keeps each time change to a minute amount.

Changes in global time that reconcile atomic-clock accuracy with the Earth's rotational wobble are thus accounted for with minimal effect. This is sometimes referred to as the "leap second" epoch (for example, one second was inserted between UTC 12/31/2008 23:59:59 and 01/01/2009 00:00:00).

More information can be found in Note 759143.1, "NTP leap second event causing Oracle Clusterware node reboot", which is linked to this Success Factor.

 
Needs attention on-
Passed onCluster Wide

Status on Cluster Wide:
PASS => Time zone matches for root user across cluster


grac41 = CET
grac42 = CET
grac43 = CET
Top

GI/CRS software owner time zone

Success FactorMAKE SURE MACHINE CLOCKS ARE SYNCHRONIZED ON ALL NODES USING NTP
Recommendation
 Benefit / Impact:

Clusterware deployment requirement

Risk:

Potential cluster instability

Action / Repair:

Oracle Clusterware requires the same time zone setting on all cluster nodes. During installation, the installation process picks up the time zone setting of the Grid installation owner on the node where OUI runs, and uses that on all nodes as the default TZ setting for all processes managed by Oracle Clusterware. This default is used for databases, Oracle ASM, and any other managed processes.

If for whatever reason the time zones have gotten out of sync, the configuration should be corrected. Consult Oracle Support about the proper method for correcting the time zones.
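On 11.2 installations the time zone recorded by Clusterware can be inspected in $GRID_HOME/crs/install/s_crsconfig_&lt;node&gt;_env.txt (file location assumed here). A minimal sketch, with sample file content standing in for the real file:

```shell
# Sketch: extract the TZ value Clusterware recorded at install time.
# The here-string below is SAMPLE content standing in for
# $GRID_HOME/crs/install/s_crsconfig_<node>_env.txt (path assumed for 11.2).
env_file_content="TZ=CET
NLS_LANG=AMERICAN_AMERICA.AL32UTF8"
recorded_tz=$(printf '%s\n' "$env_file_content" | sed -n 's/^TZ=//p')
echo "Clusterware-recorded TZ: $recorded_tz"
```

Comparing this recorded value across all nodes (and against the OS setting) mirrors what this check verifies.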
 
Needs attention on-
Passed onCluster Wide

Status on Cluster Wide:
PASS => Time zone matches for GI/CRS software owner across cluster


grac41 = CET
grac42 = CET
grac43 = CET
Top

Operating System Version comparison

Recommendation
 Operating system versions should match on each node of the cluster
 
Needs attention on-
Passed onCluster Wide

Status on Cluster Wide:
PASS => Operating system version matches across cluster.


grac41 = 64
grac42 = 64
grac43 = 64
Top

Kernel version comparison across cluster

Recommendation
 Benefit / Impact:

Stability, Availability, Standardization

Risk:

Potential cluster instability due to a kernel version mismatch on cluster nodes.
If the kernel versions do not match, some incompatibility could exist that would
make diagnosing problems difficult, or bugs fixed in the later kernel could still
be present on some nodes but not on others.

Action / Repair:

Unless a rolling upgrade of the cluster node kernels is in progress, the kernel
versions are expected to match across the cluster. If they do not, it is assumed
that a mistake has been made and overlooked. The purpose of this check is to
bring the situation to the customer's attention for action and remedy.
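The comparison this check performs can be sketched as follows; the sample `uname -r` output per node stands in for the real remote calls (e.g. via ssh to each node):

```shell
# Sketch: compare `uname -r` output collected from each cluster node.
# The versions variable holds SAMPLE per-node output in place of real
# remote calls such as: ssh <node> uname -r
versions="2.6.39-400.109.6.el6uek.x86_64
2.6.39-400.109.6.el6uek.x86_64
2.6.39-400.109.6.el6uek.x86_64"
unique_count=$(printf '%s\n' "$versions" | sort -u | wc -l)
if [ "$unique_count" -eq 1 ]; then
  kernel_check="PASS: kernel versions match across cluster"
else
  kernel_check="FAIL: $unique_count distinct kernel versions found"
fi
echo "$kernel_check"
```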
 
Needs attention on-
Passed onCluster Wide

Status on Cluster Wide:
PASS => OS Kernel version(uname -r) matches across cluster.


grac41 = 2639-4001096el6uekx86_64
grac42 = 2639-4001096el6uekx86_64
grac43 = 2639-4001096el6uekx86_64
Top

Clusterware version comparison

Recommendation
 Benefit / Impact:

Stability, Availability, Standardization

Risk:

Potential cluster instability due to a clusterware version mismatch on cluster nodes.
If the clusterware versions do not match, some incompatibility could exist that would
make diagnosing problems difficult, or bugs fixed in the later clusterware version
could still be present on some nodes but not on others.

Action / Repair:

Unless a rolling upgrade of the clusterware is in progress, the clusterware
versions are expected to match across the cluster. If they do not, it is assumed
that a mistake has been made and overlooked. The purpose of this check is to
bring the situation to the customer's attention for action and remedy.
 
Needs attention on-
Passed onCluster Wide

Status on Cluster Wide:
PASS => Clusterware active version matches across cluster.


grac41 = 112040
grac42 = 112040
grac43 = 112040
Top

RDBMS software version comparison

Recommendation
 Benefit / Impact:

Stability, Availability, Standardization

Risk:

Potential database or application instability due to a version mismatch among related
database homes. If the versions of related RDBMS homes on the cluster nodes do not
match, some incompatibility could exist that would make diagnosing problems difficult,
or bugs fixed in the later RDBMS version could still be present on some nodes but not on others.

Action / Repair:

Related database homes are expected to have matching RDBMS versions across the cluster.
If the versions of related RDBMS homes do not match, it is assumed that a mistake has
been made and overlooked. The purpose of this check is to bring the situation to the
customer's attention for action and remedy.
 
Needs attention on-
Passed onCluster Wide

Status on Cluster Wide:
PASS => RDBMS software version matches across cluster.


grac41 = 112040
grac42 = 112040
grac43 = 112040
Top

Timezone for current user

Success FactorMAKE SURE MACHINE CLOCKS ARE SYNCHRONIZED ON ALL NODES USING NTP
Recommendation
 Benefit / Impact:

Clusterware deployment requirement

Risk:

Potential cluster instability

Action / Repair:

Oracle Clusterware requires the same time zone setting on all cluster nodes. During installation, the installation process picks up the time zone setting of the Grid installation owner on the node where OUI runs, and uses that on all nodes as the default TZ setting for all processes managed by Oracle Clusterware. This default is used for databases, Oracle ASM, and any other managed processes.

If for whatever reason the time zones have gotten out of sync, the configuration should be corrected. Consult Oracle Support about the proper method for correcting the time zones.
 
Needs attention on-
Passed onCluster Wide

Status on Cluster Wide:
PASS => Timezone matches for current user across cluster.


grac41 = CET
grac42 = CET
grac43 = CET
Top

GI/CRS - Public interface name check (VIP)

Success FactorMAKE SURE NETWORK INTERFACES HAVE THE SAME NAME ON ALL NODES
Recommendation
 Benefit / Impact:

Stability, Availability, Standardization

Risk:

Potential application instability due to incorrectly named network interfaces used for node VIP.

Action / Repair:

Oracle Clusterware requires that the public network interfaces used for
the node VIPs be named the same on all nodes of the cluster.
 
Needs attention on-
Passed onCluster Wide

Status on Cluster Wide:
PASS => Public network interface names are the same across cluster


grac41 = eth1
grac42 = eth1
grac43 = eth1
Top

GI/CRS software owner across cluster

Success FactorENSURE EACH ORACLE/ASM USER HAS A UNIQUE UID ACROSS THE CLUSTER
Recommendation
 Benefit / Impact:

Availability, stability

Risk:

Potential OCR logical corruptions and permission problems accessing OCR keys, which are difficult to diagnose, when multiple O/S users share the same UID.

Action / Repair:

For GI/CRS, ASM and RDBMS software owners ensure one unique user ID with a single name is in use across the cluster.
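The shared-UID condition described above can be detected with a sketch like the following; the sample /etc/passwd lines stand in for the real file (the "backup" account is a hypothetical offender added for illustration):

```shell
# Sketch: detect multiple O/S users sharing one UID -- the condition this
# check guards against. The passwd_sample lines are SAMPLE data in place of
# the real /etc/passwd; "backup" is a hypothetical duplicate-UID account.
passwd_sample="grid:x:501:501::/home/grid:/bin/bash
oracle:x:54321:54321::/home/oracle:/bin/bash
backup:x:501:501::/home/backup:/bin/bash"
dup_uids=$(printf '%s\n' "$passwd_sample" | cut -d: -f3 | sort | uniq -d)
if [ -n "$dup_uids" ]; then
  echo "WARNING: UID(s) shared by multiple users: $dup_uids"
fi
```

Run against the real /etc/passwd on each node, an empty result means no UID is shared.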
 
Needs attention on-
Passed onCluster Wide

Status on Cluster Wide:
PASS => GI/CRS software owner UID matches across cluster


grac41 = 501
grac42 = 501
grac43 = 501
Top

RDBMS software owner UID across cluster

Success FactorENSURE EACH ORACLE/ASM USER HAS A UNIQUE UID ACROSS THE CLUSTER
Recommendation
 Benefit / Impact:

Availability, stability

Risk:

Potential OCR logical corruptions and permission problems accessing OCR keys, which are difficult to diagnose, when multiple O/S users share the same UID.

Action / Repair:

For GI/CRS, ASM and RDBMS software owners ensure one unique user ID with a single name is in use across the cluster.
 
Needs attention on-
Passed onCluster Wide

Status on Cluster Wide:
PASS => RDBMS software owner UID matches across cluster


grac41 = 54321
grac42 = 54321
grac43 = 54321
Top

GI/CRS - Private interconnect interface name check

Success FactorMAKE SURE NETWORK INTERFACES HAVE THE SAME NAME ON ALL NODES
Recommendation
 Benefit / Impact:

Stability, Availability, Standardization

Risk:

Potential cluster or application instability due to incorrectly named network interfaces.

Action / Repair:

Oracle Clusterware requires that the network interfaces used for
the cluster interconnect be named the same on all nodes of the cluster.
 
Needs attention on-
Passed onCluster Wide

Status on Cluster Wide:
PASS => Private interconnect interface names are the same across cluster


grac41 = eth2
grac42 = eth2
grac43 = eth2
Top

OSWatcher status

Success FactorINSTALL AND RUN OSWATCHER PROACTIVELY FOR OS RESOURCE UTILIZATION DIAGNOSIBILITY
Recommendation
 Operating System Watcher (OSW) is a collection of UNIX shell scripts intended to collect and archive operating system and network metrics to aid in diagnosing performance issues. OSW is designed to run continuously and write the metrics to ASCII files saved to an archive directory. The amount of archived data saved and the frequency of collection are controlled by user parameters set when starting OSW.
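The detection logic behind this check can be sketched as below; a sample ps line (taken from the node data in this section) stands in for real `ps -ef` output:

```shell
# Sketch: OSWatcher counts as running when ps output contains an osw
# process. The ps_sample line is SAMPLE data standing in for `ps -ef`.
ps_sample="root 15941 1 0 Feb21 pts/1 00:00:12 /bin/sh ./OSWatcher.sh 60 1 gzip"
if printf '%s\n' "$ps_sample" | grep -i osw | grep -v grep >/dev/null; then
  osw_status="OSWatcher running"
else
  osw_status="OSWatcher NOT running"
fi
echo "$osw_status"
```

This mirrors the `ps -ef | grep -i osw | grep -v grep` probe shown in the grac43 data below.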
 
Links
Needs attention ongrac43
Passed ongrac41, grac42

Status on grac41:
PASS => OSWatcher is running


DATA FROM GRAC41 - OSWATCHER STATUS 



root     27547 16314  0 Feb20 pts/1    00:04:30 /usr/lib64/firefox/firefox profile/grac41.example.com_rac_perf_grac41/OSW_profile.htm

Status on grac42:
PASS => OSWatcher is running


DATA FROM GRAC42 - OSWATCHER STATUS 



root     15941     1  0 Feb21 pts/1    00:00:12 /bin/sh ./OSWatcher.sh 60 1 gzip
root     16650 15941  0 Feb21 pts/1    00:00:03 /bin/sh ./OSWatcherFM.sh 1 /u01/app/11204/grid/oswbb/archive

Status on grac43:
WARNING => OSWatcher is not running as is recommended.


DATA FROM GRAC43 - OSWATCHER STATUS 



"ps -ef | grep -i osw|grep -v grep" returned no rows which means OSWatcher is not running
Top

DB shell limits soft nproc

Recommendation
 This recommendation represents a change or deviation from the documented values and should be considered a temporary measure until the code addresses the problem in a more permanent way.

Problem Statement: 
------------------ 
The soft limit of nproc is not adjusted at runtime by the database. As a 
result, if that limit is reached, the database may become unstable since it 
will fail to fork additional processes. 

Workaround: 
----------- 
Ensure that the soft limit for nproc in /etc/security/limits.conf is set high 
enough to accommodate the maximum number of concurrent threads on the system 
for the given workload. If in doubt, set it to the hard limit. For example: 

oracle  soft    nproc   16384 
oracle  hard    nproc   16384

The soft nproc shell limit for the Oracle DB software install owner as defined in /etc/security/limits.conf should be >= 2047.  So the above advice of setting soft nproc = hard nproc = 16384 should be considered a temporary proactive measure to avoid the possibility of the database not being able to fork enough processes.
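A minimal sketch of validating the soft nproc entry against the 2047 minimum; the sample limits.conf line stands in for the real /etc/security/limits.conf:

```shell
# Sketch: check the oracle soft nproc value against the >= 2047 minimum.
# limits_line is SAMPLE data in place of the real /etc/security/limits.conf.
limits_line="oracle  soft    nproc   16384"
soft_nproc=$(printf '%s\n' "$limits_line" \
  | awk '$1=="oracle" && $2=="soft" && $3=="nproc" {print $4}')
if [ "$soft_nproc" -ge 2047 ]; then
  nproc_check="PASS: soft nproc = $soft_nproc"
else
  nproc_check="WARNING: soft nproc = $soft_nproc (< 2047)"
fi
echo "$nproc_check"
```

On grac42 and grac43 the same logic applied to their "oracle soft nproc 2047" entries would pass the documented minimum but not the 16384 workaround recommended above.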
 
Links
Needs attention ongrac42, grac43
Passed ongrac41

Status on grac41:
PASS => Shell limit soft nproc for DB is configured according to recommendation


DATA FROM GRAC41 - DB SHELL LIMITS SOFT NPROC 



oracle soft nproc 16384

Status on grac42:
WARNING => Shell limit soft nproc for DB is NOT configured according to recommendation


DATA FROM GRAC42 - DB SHELL LIMITS SOFT NPROC 



oracle soft nproc 2047

Status on grac43:
WARNING => Shell limit soft nproc for DB is NOT configured according to recommendation


DATA FROM GRAC43 - DB SHELL LIMITS SOFT NPROC 



oracle soft nproc 2047
Top

User Open File Limit

Recommendation
 Please consult the section "Configure Oracle Installation Owner Shell Limits" in the Oracle Database Installation Guide for Linux.
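A minimal sketch of the check itself, to be run as the Oracle software owner; the PASS/WARNING outcome depends on the shell's actual limit:

```shell
# Sketch: compare the current shell's open-files limit (ulimit -n) to the
# recommendation of >= 65536 or unlimited.
nofile=$(ulimit -n)
if [ "$nofile" = "unlimited" ] || [ "$nofile" -ge 65536 ]; then
  nofile_check="PASS: open files limit = $nofile"
else
  nofile_check="WARNING: open files limit = $nofile (< 65536)"
fi
echo "$nofile_check"
```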
 
Needs attention ongrac42, grac43
Passed ongrac41

Status on grac41:
PASS => Open files limit (ulimit -n) for current user is set to recommended value >= 65536 or unlimited


DATA FROM GRAC41 - USER OPEN FILE LIMIT 



core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 33880
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65536
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 16384
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

Status on grac42:
WARNING => Open files limit (ulimit -n) for current user is NOT set to recommended value >= 65536 or unlimited


DATA FROM GRAC42 - USER OPEN FILE LIMIT 



core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 29448
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 2047
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

Status on grac43:
WARNING => Open files limit (ulimit -n) for current user is NOT set to recommended value >= 65536 or unlimited


DATA FROM GRAC43 - USER OPEN FILE LIMIT 



core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 29448
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 2047
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
Top

Verify Berkeley Database location for Cloned GI homes

Recommendation
 Benefit / Impact:

After cloning a Grid Home, the Berkeley Database configuration file ($GI_HOME/crf/admin/crf<node>.ora) in the new home should not point to the previous GI home from which it was cloned. During previous patch set updates, Berkeley Database configuration files were found still pointing to the 'before (previously cloned from) home'. Due to an invalid cloning procedure, the Berkeley Database location of the 'new home' was not updated during the out-of-place bundle patching procedure.

Risk:

Berkeley Database configurations still pointing to the old GI home will cause GI upgrades to 11.2.0.3 to fail, with error messages in the $GRID_HOME/log/crflogd/crflogdOUT.log logfile.

Action / Repair:

Detect:

cat $GI_HOME/crf/admin/crf`hostname -s`.ora | grep CRFHOME | grep $GI_HOME | wc -l

cat $GI_HOME/crf/admin/crf`hostname -s`.ora | grep BDBLOC | egrep "default|$GI_HOME" | wc -l

For each of the above commands, a return of '1' indicates that the CRFHOME or BDBLOC entry in the crf.ora file correctly references the current GI_HOME; a return of '0' indicates that it does not.

To solve this, manually edit $GI_HOME/crf/admin/crf<node>.ora in the cloned Grid Infrastructure home and change the values of BDBLOC and CRFHOME so that none of them point to the previous GI_HOME but to the current home. The same change needs to be made on all nodes in the cluster. It is recommended to set BDBLOC to "default". This needs to be done prior to the upgrade.
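A minimal sketch of the detect step, applying the same greps to sample crf.ora content (path and values taken from the node data in this section):

```shell
# Sketch: verify that CRFHOME and BDBLOC in crf.ora reference the current
# GI home. crf_ora holds SAMPLE file content in place of
# $GI_HOME/crf/admin/crf<node>.ora; each count should be 1 when correct.
GI_HOME=/u01/app/11204/grid
crf_ora="BDBLOC=default
CRFHOME=/u01/app/11204/grid"
crfhome_ok=$(printf '%s\n' "$crf_ora" | grep CRFHOME | grep -c "$GI_HOME")
bdbloc_ok=$(printf '%s\n' "$crf_ora" | grep BDBLOC | grep -Ec "default|$GI_HOME")
echo "CRFHOME matches: $crfhome_ok, BDBLOC valid: $bdbloc_ok"
```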
 
Links
Needs attention on-
Passed ongrac41, grac42, grac43

Status on grac41:
PASS => Berkeley Database location points to correct GI_HOME


DATA FROM GRAC41 - VERIFY BERKELEY DATABASE LOCATION FOR CLONED GI HOMES 



BDBLOC=default
MYNAME=grac41
CLUSTERNAME=grac4
USERNAME=grid
CRFHOME=/u01/app/11204/grid
grac41 5=127.0.0.1 0
grac41 1=127.0.0.1 0
grac41 2=192.168.2.101 61021
grac42 5=127.0.0.1 0
grac42 1=127.0.0.1 0
grac42 0=192.168.2.102 61020
grac43 5=127.0.0.1 0
grac43 1=127.0.0.1 0
grac43 0=192.168.2.103 61020
grac42 2=192.168.2.102 61021
grac43 2=192.168.2.103 61021
Click for more data

Status on grac42:
PASS => Berkeley Database location points to correct GI_HOME


DATA FROM GRAC42 - VERIFY BERKELEY DATABASE LOCATION FOR CLONED GI HOMES 



BDBLOC=default
MYNAME=grac42
CLUSTERNAME=grac4
USERNAME=grid
CRFHOME=/u01/app/11204/grid
grac42 5=127.0.0.1 0
grac42 1=127.0.0.1 0
grac42 0=192.168.2.102 61020
grac41 5=127.0.0.1 0
grac41 1=127.0.0.1 0
grac41 2=192.168.2.101 61021
grac42 2=192.168.2.102 61021
grac43 5=127.0.0.1 0
grac43 1=127.0.0.1 0
grac43 0=192.168.2.103 61020
grac43 2=192.168.2.103 61021
Click for more data

Status on grac43:
PASS => Berkeley Database location points to correct GI_HOME


DATA FROM GRAC43 - VERIFY BERKELEY DATABASE LOCATION FOR CLONED GI HOMES 



BDBLOC=default
MYNAME=grac43
CLUSTERNAME=grac4
USERNAME=grid
CRFHOME=/u01/app/11204/grid
grac43 5=127.0.0.1 0
grac43 1=127.0.0.1 0
grac43 0=192.168.2.103 61020
grac42 5=127.0.0.1 0
grac42 1=127.0.0.1 0
grac42 0=192.168.2.102 61020
grac41 2=192.168.2.101 61021
grac41 5=127.0.0.1 0
grac41 1=127.0.0.1 0
grac42 2=192.168.2.102 61021
grac43 2=192.168.2.103 61021
Click for more data
Top

Disk I/O Scheduler on Linux

Recommendation
 Starting with the 2.6 kernel, for example in Red Hat Enterprise Linux 4 or 5, the I/O scheduler, which controls the way the kernel commits reads and writes to disk, can be selected at boot time. For more information on the various I/O schedulers, see "Choosing an I/O Scheduler for Red Hat Enterprise Linux 4 and the 2.6 Kernel".

The Completely Fair Queuing (CFQ) scheduler is the default algorithm in Red Hat Enterprise Linux 4. It is suitable for a wide variety of applications and provides a good compromise between throughput and latency. In comparison to the CFQ algorithm, the Deadline scheduler caps the maximum latency per request while maintaining good disk throughput, which is best for disk-intensive database applications. Hence, the Deadline scheduler is recommended for database systems.

Action / Repair:

Red Hat Enterprise Linux 5 and above allows users to change the I/O scheduler dynamically, for example by executing:

echo deadline > /sys/block/<sdx>/queue/scheduler
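The active scheduler is the bracketed entry in /sys/block/&lt;device&gt;/queue/scheduler. A minimal sketch of parsing it, with a sample line standing in for the real sysfs file:

```shell
# Sketch: extract the active scheduler (the bracketed entry) from the
# format used by /sys/block/<device>/queue/scheduler. sched_line is SAMPLE
# data in place of reading the real sysfs file.
sched_line="noop anticipatory deadline [cfq]"
active=$(printf '%s\n' "$sched_line" | sed 's/.*\[\(.*\)\].*/\1/')
echo "active scheduler: $active"
```

On a live node, looping over /sys/block/sd*/queue/scheduler and checking that the bracketed entry is "deadline" reproduces this check.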

 
Links
Needs attention on-
Passed ongrac41, grac42, grac43

Status on grac41:
PASS => Linux Disk I/O Scheduler is configured as recommended


DATA FROM GRAC41 - DISK I/O SCHEDULER ON LINUX 




Status on grac42:
PASS => Linux Disk I/O Scheduler is configured as recommended


DATA FROM GRAC42 - DISK I/O SCHEDULER ON LINUX 




Status on grac43:
PASS => Linux Disk I/O Scheduler is configured as recommended


DATA FROM GRAC43 - DISK I/O SCHEDULER ON LINUX 



Top

Verify no multiple parameter entries in database init.ora(spfile)

Recommendation
 
 
Needs attention on-
Passed ongrac41, grac42, grac43

Status on grac41:
PASS => There are no duplicate parameter entries in the database init.ora(spfile) file


DATA FROM GRAC41 - GRAC4 DATABASE - VERIFY NO MULTIPLE PARAMETER ENTRIES IN DATABASE INIT.ORA(SPFILE) 



grac42.__db_cache_size=838860800
grac41.__db_cache_size=905969664
grac43.__db_cache_size=889192448
grac42.__java_pool_size=16777216
grac43.__java_pool_size=16777216
grac41.__java_pool_size=16777216
grac42.__large_pool_size=33554432
grac43.__large_pool_size=33554432
grac41.__large_pool_size=33554432
grac41.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
grac42.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
grac43.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
grac42.__pga_aggregate_target=452984832
grac43.__pga_aggregate_target=452984832
grac41.__pga_aggregate_target=452984832
grac42.__sga_target=1342177280
Click for more data

Status on grac42:
PASS => There are no duplicate parameter entries in the database init.ora(spfile) file


DATA FROM GRAC42 - GRAC4 DATABASE - VERIFY NO MULTIPLE PARAMETER ENTRIES IN DATABASE INIT.ORA(SPFILE) 



grac42.__db_cache_size=838860800
grac41.__db_cache_size=905969664
grac43.__db_cache_size=889192448
grac42.__java_pool_size=16777216
grac43.__java_pool_size=16777216
grac41.__java_pool_size=16777216
grac42.__large_pool_size=33554432
grac43.__large_pool_size=33554432
grac41.__large_pool_size=33554432
grac41.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
grac42.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
grac43.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
grac42.__pga_aggregate_target=452984832
grac43.__pga_aggregate_target=452984832
grac41.__pga_aggregate_target=452984832
grac42.__sga_target=1342177280
Click for more data

Status on grac43:
PASS => There are no duplicate parameter entries in the database init.ora(spfile) file


DATA FROM GRAC43 - GRAC4 DATABASE - VERIFY NO MULTIPLE PARAMETER ENTRIES IN DATABASE INIT.ORA(SPFILE) 



grac42.__db_cache_size=838860800
grac41.__db_cache_size=905969664
grac43.__db_cache_size=889192448
grac42.__java_pool_size=16777216
grac43.__java_pool_size=16777216
grac41.__java_pool_size=16777216
grac42.__large_pool_size=33554432
grac43.__large_pool_size=33554432
grac41.__large_pool_size=33554432
grac41.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
grac42.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
grac43.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
grac42.__pga_aggregate_target=452984832
grac43.__pga_aggregate_target=452984832
grac41.__pga_aggregate_target=452984832
grac42.__sga_target=1342177280
Click for more data
Top

Verify no multiple parameter entries in ASM init.ora(spfile)

Recommendation
 
 
Needs attention on-
Passed ongrac41, grac42, grac43

Status on grac41:
PASS => There are no duplicate parameter entries in the ASM init.ora(spfile) file


DATA FROM GRAC41 - VERIFY NO MULTIPLE PARAMETER ENTRIES IN ASM INIT.ORA(SPFILE) 



+ASM1.__oracle_base='/u01/app/grid'#ORACLE_BASE set from in memory value
+ASM2.__oracle_base='/u01/app/grid'#ORACLE_BASE set from in memory value
+ASM1.asm_diskgroups='OCR','SSD','FRA'#Manual Mount
+ASM3.asm_diskgroups='OCR','SSD','FRA'#Manual Mount
+ASM2.asm_diskgroups='SSD','FRA'#Manual Mount
*.asm_diskstring='/dev/asm*'
*.asm_power_limit=1
*.diagnostic_dest='/u01/app/grid'
*.instance_type='asm'
*.large_pool_size=12M
*.remote_login_passwordfile='EXCLUSIVE'

Status on grac42:
PASS => There are no duplicate parameter entries in the ASM init.ora(spfile) file


DATA FROM GRAC42 - VERIFY NO MULTIPLE PARAMETER ENTRIES IN ASM INIT.ORA(SPFILE) 



+ASM1.__oracle_base='/u01/app/grid'#ORACLE_BASE set from in memory value
+ASM2.__oracle_base='/u01/app/grid'#ORACLE_BASE set from in memory value
+ASM1.asm_diskgroups='OCR','SSD','FRA'#Manual Mount
+ASM3.asm_diskgroups='OCR','SSD','FRA'#Manual Mount
+ASM2.asm_diskgroups='SSD','FRA'#Manual Mount
*.asm_diskstring='/dev/asm*'
*.asm_power_limit=1
*.diagnostic_dest='/u01/app/grid'
*.instance_type='asm'
*.large_pool_size=12M
*.remote_login_passwordfile='EXCLUSIVE'

Status on grac43:
PASS => There are no duplicate parameter entries in the ASM init.ora(spfile) file


DATA FROM GRAC43 - VERIFY NO MULTIPLE PARAMETER ENTRIES IN ASM INIT.ORA(SPFILE) 



+ASM1.__oracle_base='/u01/app/grid'#ORACLE_BASE set from in memory value
+ASM2.__oracle_base='/u01/app/grid'#ORACLE_BASE set from in memory value
+ASM1.asm_diskgroups='OCR','SSD','FRA'#Manual Mount
+ASM3.asm_diskgroups='OCR','SSD','FRA'#Manual Mount
+ASM2.asm_diskgroups='SSD','FRA'#Manual Mount
*.asm_diskstring='/dev/asm*'
*.asm_power_limit=1
*.diagnostic_dest='/u01/app/grid'
*.instance_type='asm'
*.large_pool_size=12M
*.remote_login_passwordfile='EXCLUSIVE'
Top

Verify control_file_record_keep_time value is in recommended range

Success FactorORACLE RECOVERY MANAGER(RMAN) BEST PRACTICES
Recommendation
 Benefit / Impact:

When a Recovery Manager catalog is not used, the initialization parameter control_file_record_keep_time controls the period of time for which circular reuse records are maintained within the database control file. RMAN repository records are kept in circular reuse records. The optimal setting is the maximum number of days in the past required to restore and recover a specific database without the use of an RMAN recovery catalog. Setting this parameter within the recommended range (1 to 9 days) has been shown to address most recovery scenarios by ensuring that archive log and backup records are not prematurely aged out, which would make database recovery much more challenging.

The impact of verifying that control_file_record_keep_time is in the recommended range is minimal. Increasing this value increases the size of the control file and possibly the query time for backup metadata and archive data.

Risk:

If control_file_record_keep_time is set to 0, no RMAN repository records are retained in the control file, which makes database recovery much more challenging if an RMAN recovery catalog is not available.

If control_file_record_keep_time is set too high, problems can arise with space management within the control file, expansion of the control file, and control file contention.


Action / Repair:

To verify that control_file_record_keep_time is within the recommended range, as the owner userid of the Oracle home with the environment properly set for the target database, execute the following command set:

CF_RECORD_KEEP_TIME="";
CF_RECORD_KEEP_TIME=$(echo -e "set heading off feedback off\n select value from V\$PARAMETER where name = 'control_file_record_keep_time';" | $ORACLE_HOME/bin/sqlplus -s "/ as sysdba");
if [[ $CF_RECORD_KEEP_TIME -ge "1" && $CF_RECORD_KEEP_TIME -le "9" ]]
then echo -e "\nPASS:  control_file_record_keep_time is within recommended range [1-9]:" $CF_RECORD_KEEP_TIME;
elif [ $CF_RECORD_KEEP_TIME -eq "0" ]
then echo -e "\nFAIL:  control_file_record_keep_time is set to zero:" $CF_RECORD_KEEP_TIME;
else echo -e "\nWARNING:  control_file_record_keep_time is not within recommended range [1-9]:" $CF_RECORD_KEEP_TIME;
fi;

The expected output should be:

PASS:  control_file_record_keep_time is within recommended range [1-9]: 7

If the output is not as expected, investigate and correct the condition(s).

NOTE: The use of an RMAN recovery catalog is recommended as the best way to avoid the loss of RMAN metadata because of overwritten control file records.
 
Links
Needs attention on-
Passed ongrac41

Status on grac41:
PASS => control_file_record_keep_time is within recommended range [1-9] for grac4


DATA FROM GRAC41 - GRAC4 DATABASE - VERIFY CONTROL_FILE_RECORD_KEEP_TIME VALUE IS IN RECOMMENDED RANGE 



control_file_record_keep_time = 7
Top

Verify rman controlfile autobackup is set to ON

Success FactorORACLE RECOVERY MANAGER(RMAN) BEST PRACTICES
Recommendation
 Benefit / Impact:

The control file is a binary file that records the physical structure of the database and contains important metadata required to recover the database. The database cannot start up or stay up unless all control files are valid. When a Recovery Manager catalog is not used, the control file is needed for database recovery because it contains all backup and recovery metadata.

The impact of verifying and setting "CONTROLFILE AUTOBACKUP" to "ON" is minimal. 

Risk:

When a Recovery Manager catalog is not used, loss of the control file results in loss of all backup and recovery metadata, which makes the database recovery operation much more challenging.

Action / Repair:

To verify that RMAN "CONTROLFILE AUTOBACKUP" is set to "ON", as the owner userid of the oracle home with the environment properly set for the target database, execute the following command set:

RMAN_AUTOBACKUP_STATUS="";
RMAN_AUTOBACKUP_STATUS=$(echo -e "set heading off feedback off\n select value from V\$RMAN_CONFIGURATION where name = 'CONTROLFILE AUTOBACKUP';" | $ORACLE_HOME/bin/sqlplus -s "/ as sysdba");
if [ -n "$RMAN_AUTOBACKUP_STATUS" ] && [ "$RMAN_AUTOBACKUP_STATUS" = "ON" ]
then echo -e "\nPASS:  RMAN CONTROLFILE AUTOBACKUP is set to \"ON\":" $RMAN_AUTOBACKUP_STATUS;
else
echo -e "\nFAIL:  RMAN CONTROLFILE AUTOBACKUP should be set to \"ON\":" $RMAN_AUTOBACKUP_STATUS;
fi;

The expected output should be:

PASS:  RMAN CONTROLFILE AUTOBACKUP is set to "ON": ON

If the output is not as expected, investigate and correct the condition(s).

For additional information, review information on CONFIGURE syntax in Oracle® Database Backup and Recovery Reference 11g Release 2 (11.2).

RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;

NOTE: Oracle MAA also recommends periodically backing up the controlfile to trace as additional backup.

SQL> ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
 
Needs attention on: -
Passed on: grac4

Status on grac4:
PASS => RMAN controlfile autobackup is set to ON


DATA FOR GRAC4 FOR VERIFY RMAN CONTROLFILE AUTOBACKUP IS SET TO ON 




rman_configuration = ON                                                         
Top

Verify the Fast Recovery Area (FRA) has reclaimable space

Success Factor: ORACLE RECOVERY MANAGER (RMAN) BEST PRACTICES
Recommendation
 Benefit / Impact:

Oracle's Fast Recovery Area (FRA) manages archivelog files, flashback logs, and RMAN backups.
Before RMAN's space management can clean up files according to your configured retention and
deletion policies, the database needs to be backed up periodically. Without these backups, the FRA
can run out of available space, resulting in a database hang because the database cannot archive locally.

The impact of verifying that the Fast Recovery Area (FRA) has reclaimable space is minimal.

Risk:

If the Fast Recovery Area (FRA) space management function has no space available to reclaim, the database may hang because it cannot archive a log to the FRA.

Action / Repair:

To verify that the FRA space management function is not blocked, as the owner userid of the oracle home with the environment properly set for the target database, execute the following command set:

PROBLEM_FILE_TYPES_PRESENT=$(echo -e "set heading off feedback off\n select count(*) from V\$FLASH_RECOVERY_AREA_USAGE where file_type in ('ARCHIVED LOG', 'BACKUP PIECE', 'IMAGE COPY') and number_of_files > 0 ;" | $ORACLE_HOME/bin/sqlplus -s "/ as sysdba");
RMAN_BACKUP_WITHIN_30_DAYS=$(echo -e "set heading off feedback off\n select count(*) from V\$BACKUP_SET where completion_time > sysdate-30;" | $ORACLE_HOME/bin/sqlplus -s "/ as sysdba");
if [ $PROBLEM_FILE_TYPES_PRESENT -eq "0" ]
then echo -e "\nThis check is not applicable because file types 'ARCHIVED LOG', 'BACKUP PIECE', or 'IMAGE COPY' are not present in V\$FLASH_RECOVERY_AREA_USAGE";
else if [[ $PROBLEM_FILE_TYPES_PRESENT -ge "1" && $RMAN_BACKUP_WITHIN_30_DAYS -ge "1" ]]
then echo -e "\nPASS:  FRA space management problem file types are present with an RMAN backup completion within the last 30 days."
else echo -e "\nFAIL:  FRA space management problem file types are present without an RMAN backup completion within the last 30 days."
fi;
fi;

The expected output should be:

PASS:  FRA space management problem file types are present with an RMAN backup completion within the last 30 days.

If the output is not as expected, investigate and correct the condition(s).
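If the check fails because no recent backup exists, taking a backup and applying the configured retention policy gives the FRA space management function files it can reclaim. A hedged sketch; the 7-day recovery window is an example, not a mandated policy, and the environment is assumed to be set for the target database:

```shell
# Run as the oracle home owner. DELETE NOPROMPT OBSOLETE removes backups
# that fall outside the configured retention policy, making their FRA
# space reclaimable.
$ORACLE_HOME/bin/rman target / <<EOF
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;
BACKUP DATABASE PLUS ARCHIVELOG;
DELETE NOPROMPT OBSOLETE;
EOF
```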
 
Links
Needs attention on: -
Passed on: grac4

Status on grac4:
PASS => Fast Recovery Area (FRA) has sufficient reclaimable space


DATA FOR GRAC4 FOR VERIFY THE FAST RECOVERY AREA (FRA) HAS RECLAIMABLE SPACE 




rman_backup_within_30_days = 3                                                  
Top

Registered diskgroups in clusterware registry

Recommendation
 Benefit / Impact:

Risk:

Action / Repair:
 
Needs attention on: -
Passed on: grac41, grac42, grac43

Status on grac41:
PASS => All diskgroups from v$asm_diskgroups are registered in clusterware registry


DATA FROM GRAC41 - REGISTERED DISKGROUPS IN CLUSTERWARE REGISTRY 



Diskgroups from v$asm_diskgroups:-

DATA
FRA
OCR
SSD

Diskgroups from Clusterware resources:-

DATA
FRA
OCR
SSD

Status on grac42:
PASS => All diskgroups from v$asm_diskgroups are registered in clusterware registry


DATA FROM GRAC42 - REGISTERED DISKGROUPS IN CLUSTERWARE REGISTRY 



Diskgroups from v$asm_diskgroups:-

DATA
FRA
OCR
SSD

Diskgroups from Clusterware resources:-

DATA
FRA
OCR
SSD

Status on grac43:
PASS => All diskgroups from v$asm_diskgroups are registered in clusterware registry


DATA FROM GRAC43 - REGISTERED DISKGROUPS IN CLUSTERWARE REGISTRY 



Diskgroups from v$asm_diskgroups:-

DATA
FRA
OCR
SSD

Diskgroups from Clusterware resources:-

DATA
FRA
OCR
SSD
Top

rp_filter for bonded private interconnects

Recommendation
 As a consequence of having rp_filter set to 1, interconnect packets may be blocked or discarded.

To fix this problem, follow the MOS note linked below.
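For reference, on kernels that enforce strict reverse path filtering, the common remediation is to set rp_filter to 2 (loose mode) or 0 on the private interconnect interfaces only. A sketch of the /etc/sysctl.conf entries; eth1 and eth2 are assumptions for illustration — substitute the actual interconnect interfaces for this cluster (see oifcfg getif):

```shell
# /etc/sysctl.conf fragment -- eth1/eth2 are placeholders for the
# private interconnect interfaces; apply with "sysctl -p" or a reboot.
net.ipv4.conf.eth1.rp_filter = 2
net.ipv4.conf.eth2.rp_filter = 2
```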
 
Links
Needs attention on: grac41, grac42, grac43
Passed on: -

Status on grac41:
WARNING => kernel parameter rp_filter is set to 1.


DATA FROM GRAC41 - RP_FILTER FOR BONDED PRIVATE INTERCONNECTS 



net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.lo.rp_filter = 1
net.ipv4.conf.eth0.rp_filter = 1
net.ipv4.conf.eth1.rp_filter = 1
net.ipv4.conf.eth2.rp_filter = 1
net.ipv4.conf.eth3.rp_filter = 1
net.ipv4.conf.virbr0.rp_filter = 1
net.ipv4.conf.virbr0-nic.rp_filter = 1

Status on grac42:
WARNING => kernel parameter rp_filter is set to 1.


DATA FROM GRAC42 - RP_FILTER FOR BONDED PRIVATE INTERCONNECTS 



net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.lo.rp_filter = 1
net.ipv4.conf.eth0.rp_filter = 1
net.ipv4.conf.eth1.rp_filter = 1
net.ipv4.conf.eth2.rp_filter = 1
net.ipv4.conf.eth3.rp_filter = 1
net.ipv4.conf.virbr0.rp_filter = 1
net.ipv4.conf.virbr0-nic.rp_filter = 1

Status on grac43:
WARNING => kernel parameter rp_filter is set to 1.


DATA FROM GRAC43 - RP_FILTER FOR BONDED PRIVATE INTERCONNECTS 



net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.lo.rp_filter = 1
net.ipv4.conf.eth0.rp_filter = 1
net.ipv4.conf.eth1.rp_filter = 1
net.ipv4.conf.eth2.rp_filter = 1
net.ipv4.conf.eth3.rp_filter = 1
net.ipv4.conf.virbr0.rp_filter = 1
net.ipv4.conf.virbr0-nic.rp_filter = 1
Top

Check for parameter cvuqdisk|1.0.9|1|x86_64

Recommendation
 Install the operating system package cvuqdisk. Without cvuqdisk, Cluster Verification Utility cannot discover shared disks, and you receive the error message "Package cvuqdisk not installed" when you run Cluster Verification Utility. Use the cvuqdisk rpm appropriate for your hardware (for example, x86_64 or i386).
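If cvuqdisk is missing, it ships in the rpm directory of the Grid Infrastructure installation media and must be installed as root. A hedged sketch; the staging path and the oinstall group name below are environment-specific assumptions:

```shell
# CVUQDISK_GRP must name the Oracle inventory group before installing;
# the group (oinstall) and media path below are assumptions -- adjust
# for your environment.
export CVUQDISK_GRP=oinstall
rpm -iv /u01/stage/grid/rpm/cvuqdisk-1.0.9-1.rpm
```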
 
Needs attention on: -
Passed on: grac41, grac42, grac43

Status on grac41:
PASS => Package cvuqdisk-1.0.9-1-x86_64 meets or exceeds recommendation

cvuqdisk|1.0.9|1|x86_64

Status on grac42:
PASS => Package cvuqdisk-1.0.9-1-x86_64 meets or exceeds recommendation

cvuqdisk|1.0.9|1|x86_64

Status on grac43:
PASS => Package cvuqdisk-1.0.9-1-x86_64 meets or exceeds recommendation

cvuqdisk|1.0.9|1|x86_64
Top

OLR Integrity

Recommendation
 Any kind of OLR corruption should be remedied before attempting an upgrade; otherwise, the 11.2 GI rootupgrade.sh fails with "Invalid OLR" during the upgrade.
 
Links
Needs attention on: -
Passed on: grac41, grac42, grac43

Status on grac41:
PASS => OLR Integrity check Succeeded


DATA FROM GRAC41 FOR OLR INTEGRITY 



Status of Oracle Local Registry is as follows :
	 Version                  :          3
	 Total space (kbytes)     :     262120
	 Used space (kbytes)      :       2668
	 Available space (kbytes) :     259452
	 ID                       : 1855884304
	 Device/File Name         : /u01/app/11204/grid/cdata/grac41.olr
                                    Device/File integrity check succeeded

	 Local registry integrity check succeeded

	 Logical corruption check succeeded


Status on grac42:
PASS => OLR Integrity check Succeeded


DATA FROM GRAC42 FOR OLR INTEGRITY 



Status of Oracle Local Registry is as follows :
	 Version                  :          3
	 Total space (kbytes)     :     262120
	 Used space (kbytes)      :       2668
	 Available space (kbytes) :     259452
	 ID                       :  742976850
	 Device/File Name         : /u01/app/11204/grid/cdata/grac42.olr
                                    Device/File integrity check succeeded

	 Local registry integrity check succeeded

	 Logical corruption check succeeded


Status on grac43:
PASS => OLR Integrity check Succeeded


DATA FROM GRAC43 FOR OLR INTEGRITY 



Status of Oracle Local Registry is as follows :
	 Version                  :          3
	 Total space (kbytes)     :     262120
	 Used space (kbytes)      :       2668
	 Available space (kbytes) :     259452
	 ID                       : 1666050145
	 Device/File Name         : /u01/app/11204/grid/cdata/grac43.olr
                                    Device/File integrity check succeeded

	 Local registry integrity check succeeded

	 Logical corruption check succeeded

Top

pam_limits check

Recommendation
 This is required to make the shell limits work properly and applies to 10g, 11g and 12c.

Add the following line to the /etc/pam.d/login file, if it does not already exist:

session    required     pam_limits.so

 
Links
Needs attention on: -
Passed on: grac41, grac42, grac43

Status on grac41:
PASS => pam_limits configured properly for shell limits


DATA FROM GRAC41 - PAM_LIMITS CHECK 



#%PAM-1.0
auth [user_unknown=ignore success=ok ignore=ignore default=bad] pam_securetty.so
auth       include      system-auth
account    required     pam_nologin.so
account    include      system-auth
password   include      system-auth
# pam_selinux.so close should be the first session rule
session    required     pam_selinux.so close
session    required     pam_loginuid.so
session    optional     pam_console.so
# pam_selinux.so open should only be followed by sessions to be executed in the user context
session    required     pam_selinux.so open
session    required     pam_namespace.so
session    optional     pam_keyinit.so force revoke
session    include      system-auth
-session   optional     pam_ck_connector.so

Status on grac42:
PASS => pam_limits configured properly for shell limits


DATA FROM GRAC42 - PAM_LIMITS CHECK 



#%PAM-1.0
auth [user_unknown=ignore success=ok ignore=ignore default=bad] pam_securetty.so
auth       include      system-auth
account    required     pam_nologin.so
account    include      system-auth
password   include      system-auth
# pam_selinux.so close should be the first session rule
session    required     pam_selinux.so close
session    required     pam_loginuid.so
session    optional     pam_console.so
# pam_selinux.so open should only be followed by sessions to be executed in the user context
session    required     pam_selinux.so open
session    required     pam_namespace.so
session    optional     pam_keyinit.so force revoke
session    include      system-auth
-session   optional     pam_ck_connector.so

Status on grac43:
PASS => pam_limits configured properly for shell limits


DATA FROM GRAC43 - PAM_LIMITS CHECK 



#%PAM-1.0
auth [user_unknown=ignore success=ok ignore=ignore default=bad] pam_securetty.so
auth       include      system-auth
account    required     pam_nologin.so
account    include      system-auth
password   include      system-auth
# pam_selinux.so close should be the first session rule
session    required     pam_selinux.so close
session    required     pam_loginuid.so
session    optional     pam_console.so
# pam_selinux.so open should only be followed by sessions to be executed in the user context
session    required     pam_selinux.so open
session    required     pam_namespace.so
session    optional     pam_keyinit.so force revoke
session    include      system-auth
-session   optional     pam_ck_connector.so
Top

Verify vm.min_free_kbytes

Recommendation
  Benefit / Impact:

Maintaining vm.min_free_kbytes=524288 (512MB) helps a Linux system reclaim memory faster and avoid LowMem pressure issues, which can lead to node evictions, other outages, or performance problems.

The impact of verifying vm.min_free_kbytes=524288 is minimal. Adjusting the parameter requires editing the /etc/sysctl.conf file and rebooting the system. It is possible, but not recommended (especially for a system already under LowMem pressure), to modify the setting interactively; even then, a reboot should still be performed to make sure the setting is retained.

Risk:

Exposure to unexpected node eviction and reboot.

Action / Repair:

To verify that vm.min_free_kbytes is properly set to 524288, execute the following commands and confirm both report the same expected value:

/sbin/sysctl -n vm.min_free_kbytes

cat /proc/sys/vm/min_free_kbytes

If the output is not as expected, investigate and correct the condition. For example, if the value is incorrect in /etc/sysctl.conf and the value in active memory matches that incorrect value, simply edit /etc/sysctl.conf to include the line "vm.min_free_kbytes = 524288" and reboot the node.
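The comparison raccheck performs reduces to a numeric check of the active value against the recommendation. A minimal sketch of that logic, using the value this cluster reports (67584) as a stand-in for reading /proc/sys/vm/min_free_kbytes:

```shell
# CURRENT stands in for $(cat /proc/sys/vm/min_free_kbytes); 67584 is
# the value the data sections of this report show for all three nodes.
CURRENT=67584
RECOMMENDED=524288
if [ "$CURRENT" -ge "$RECOMMENDED" ]; then
  echo "PASS: vm.min_free_kbytes = $CURRENT"
else
  echo "WARNING: vm.min_free_kbytes = $CURRENT (recommended: $RECOMMENDED)"
fi
```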
 
Links
Needs attention on: grac41, grac42, grac43
Passed on: -

Status on grac41:
WARNING => vm.min_free_kbytes should be set as recommended.


DATA FROM GRAC41 - VERIFY VM.MIN_FREE_KBYTES 



Value in sysctl = 67584

Value in active memory (from /proc/sys/vm/min_free_kbytes) = 67584

Status on grac42:
WARNING => vm.min_free_kbytes should be set as recommended.


DATA FROM GRAC42 - VERIFY VM.MIN_FREE_KBYTES 



Value in sysctl = 67584

Value in active memory (from /proc/sys/vm/min_free_kbytes) = 67584

Status on grac43:
WARNING => vm.min_free_kbytes should be set as recommended.


DATA FROM GRAC43 - VERIFY VM.MIN_FREE_KBYTES 



Value in sysctl = 67584

Value in active memory (from /proc/sys/vm/min_free_kbytes) = 67584
Top

Verify data files are recoverable

Success Factor: DATA CORRUPTION PREVENTION BEST PRACTICES
Recommendation
 Benefit / Impact:

When you perform a DML or DDL operation using the NOLOGGING or UNRECOVERABLE clause, database backups made prior to the unrecoverable operation are invalidated and new backups are required. You can specify the SQL ALTER DATABASE or SQL ALTER TABLESPACE statement with the FORCE LOGGING clause to override the NOLOGGING setting; however, this statement will not repair a database that is already invalid.

Risk:

Changes under NOLOGGING will not be available after executing database recovery from a backup made prior to the unrecoverable change.

Action / Repair:

To verify that the data files are recoverable, execute the following SQL*Plus command as the userid that owns the oracle home for the database:
select file#, unrecoverable_time, unrecoverable_change# from v$datafile where unrecoverable_time is not null;
If there are any unrecoverable actions, the output will be similar to:
     FILE# UNRECOVER UNRECOVERABLE_CHANGE#
---------- --------- ---------------------
        11 14-JAN-13               8530544
If nologging changes have occurred and the data must be recoverable, a backup of the datafiles containing the nologging operations should be taken immediately. Please consult the Backup and Recovery User's Guide for specific steps to resolve files that have unrecoverable changes.

The standard best practice is to enable FORCE LOGGING at the database level (ALTER DATABASE FORCE LOGGING;) to ensure that all transactions are recoverable. However, placing a database in force logging mode for ETL operations can lead to unnecessary database overhead. MAA best practices call for isolating data that does not need to be recoverable. Such data would include:

Data resulting from temporary loads
Data resulting from transient transformations
Any non critical data

To reduce unnecessary redo generation, do the following:

Specify FORCE LOGGING for all tablespaces that you explicitly wish to protect (ALTER TABLESPACE <tablespace_name> FORCE LOGGING;)
Specify NO FORCE LOGGING for those tablespaces that do not need protection (ALTER TABLESPACE <tablespace_name> NO FORCE LOGGING;).
Disable force logging at the database level (ALTER DATABASE NO FORCE LOGGING;); otherwise the database-level setting overrides the tablespace settings.

Once the above is complete, redo logging will function as follows:

Explicit no logging operations on objects in the no logging tablespace will not generate the normal redo (a small amount of redo is always generated for no logging operations to signal that a no logging operation was performed).

All other operations on objects in the no logging tablespace will generate the normal redo.
Operations performed on objects in the force logging tablespaces always generate normal redo.

Note: Please seek Oracle Support assistance to mitigate this problem. Under their guidance, the following commands can help validate and identify corrupted blocks.

              oracle> dbv file=<data_file_returned_by_above_command> userid=sys/******
              RMAN> validate check logical database;
              SQL> select COUNT(*) from v$database_block_corruption;

 
Links
Needs attention on: -
Passed on: grac4

Status on grac4:
PASS => The data files are all recoverable


DATA FOR GRAC4 FOR VERIFY DATA FILES ARE RECOVERABLE 




Query returned no rows which is expected when the SQL check passes.

Top

Check for parameter unixODBC-devel|2.2.14|11.el6|x86_64

Recommendation
 Please review MOS Note 169706.1 - Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
 
Needs attention on: grac41, grac42, grac43
Passed on: -

Status on grac41:
FAIL => Package unixODBC-devel-2.2.14-11.el6-x86_64 is recommended but NOT installed


Status on grac42:
FAIL => Package unixODBC-devel-2.2.14-11.el6-x86_64 is recommended but NOT installed


Status on grac43:
FAIL => Package unixODBC-devel-2.2.14-11.el6-x86_64 is recommended but NOT installed

Top

OCR and Voting file location

Recommendation
 Starting with Oracle 11gR2, our recommendation is to use Oracle ASM to store the OCR and Voting Disks. With an appropriate redundancy level (HIGH or NORMAL) for the ASM disk group being used, Oracle creates the required number of Voting Disks as part of the installation.
 
Links
Needs attention on: -
Passed on: grac41, grac42, grac43

Status on grac41:
PASS => OCR and Voting disks are stored in ASM


DATA FROM GRAC41 - OCR AND VOTING FILE LOCATION 



Status of Oracle Cluster Registry is as follows :
	 Version                  :          3
	 Total space (kbytes)     :     262120
	 Used space (kbytes)      :       4108
	 Available space (kbytes) :     258012
	 ID                       :  630679368
	 Device/File Name         :       +OCR
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

Status on grac42:
PASS => OCR and Voting disks are stored in ASM


DATA FROM GRAC42 - OCR AND VOTING FILE LOCATION 



Status of Oracle Cluster Registry is as follows :
	 Version                  :          3
	 Total space (kbytes)     :     262120
	 Used space (kbytes)      :       4108
	 Available space (kbytes) :     258012
	 ID                       :  630679368
	 Device/File Name         :       +OCR
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

Status on grac43:
PASS => OCR and Voting disks are stored in ASM


DATA FROM GRAC43 - OCR AND VOTING FILE LOCATION 



Status of Oracle Cluster Registry is as follows :
	 Version                  :          3
	 Total space (kbytes)     :     262120
	 Used space (kbytes)      :       4108
	 Available space (kbytes) :     258012
	 ID                       :  630679368
	 Device/File Name         :       +OCR
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured
Top

Parallel Execution Health-Checks and Diagnostics Reports

Recommendation
 This audit check captures information related to Oracle Parallel Query (PQ), DOP, PQ/PX statistics, Database Resource Plans, Consumer Groups, etc. This is primarily for consumption by the Oracle Support team; however, customers may also review it to identify and troubleshoot related problems.
For every database, there will be a zip file of the format <pxhcdr_DBNAME_HOSTNAME_DBVERSION_DATE_TIMESTAMP.zip> in the raccheck output directory.
 
Needs attention on: grac41
Passed on: -
Top

Hardware clock synchronization

Recommendation
 The /etc/init.d/halt file is called when the system is rebooted or halted. This file must contain instructions to synchronize the system time to the hardware clock.

It should contain a command like:

[ -x /sbin/hwclock ] && action $"Syncing hardware clock to system time" /sbin/hwclock $CLOCKFLAGS
 
Needs attention on: -
Passed on: grac41, grac42, grac43

Status on grac41:
PASS => System clock is synchronized to hardware clock at system shutdown


DATA FROM GRAC41 - HARDWARE CLOCK SYNCHRONIZATION 



[ -x /sbin/hwclock -a -e /dev/rtc ] && action $"Syncing hardware clock to system time" /sbin/hwclock --systohc

Status on grac42:
PASS => System clock is synchronized to hardware clock at system shutdown


DATA FROM GRAC42 - HARDWARE CLOCK SYNCHRONIZATION 



[ -x /sbin/hwclock -a -e /dev/rtc ] && action $"Syncing hardware clock to system time" /sbin/hwclock --systohc

Status on grac43:
PASS => System clock is synchronized to hardware clock at system shutdown


DATA FROM GRAC43 - HARDWARE CLOCK SYNCHRONIZATION 



[ -x /sbin/hwclock -a -e /dev/rtc ] && action $"Syncing hardware clock to system time" /sbin/hwclock --systohc
Top

Verify transparent hugepages are disabled

Recommendation
 Benefit / Impact:

Linux transparent huge pages are enabled by default in OEL 6 and SuSE 11, which might cause soft lockups of CPUs and make the system unresponsive, which in turn can cause node eviction.

Risk:

Because Transparent HugePages are known to cause unexpected node reboots and performance problems with RAC, Oracle strongly advises disabling them. In addition, Transparent HugePages may cause unexpected performance problems or delays even in a single-instance database environment. As such, Oracle recommends disabling Transparent HugePages on all database servers running Oracle.


Action / Repair:

To turn this feature off, add the following lines to /etc/rc.local:

echo never > /sys/kernel/mm/transparent_hugepage/enabled 
echo never > /sys/kernel/mm/transparent_hugepage/defrag 
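To confirm the setting took effect after a reboot, the active mode can be read back from the same sysfs files; the value shown in brackets is the active one and should read [never] in both (paths are standard on OEL 6, but verify on your kernel):

```shell
# The value enclosed in [brackets] is the active setting.
cat /sys/kernel/mm/transparent_hugepage/enabled
cat /sys/kernel/mm/transparent_hugepage/defrag
```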
 
Links
Needs attention on: -
Passed on: grac41, grac42, grac43

Status on grac41:
PASS => Linux transparent huge pages are disabled


DATA FROM GRAC41 - VERIFY TRANSPARENT HUGEPAGES ARE DISABLED 



AnonHugePages:    372736 kB

Status on grac42:
PASS => Linux transparent huge pages are disabled


DATA FROM GRAC42 - VERIFY TRANSPARENT HUGEPAGES ARE DISABLED 



AnonHugePages:    337920 kB

Status on grac43:
PASS => Linux transparent huge pages are disabled


DATA FROM GRAC43 - VERIFY TRANSPARENT HUGEPAGES ARE DISABLED 



AnonHugePages:    198656 kB
Top

TFA Collector status

Recommendation
 TFA Collector (aka TFA) is a diagnostic collection utility to simplify diagnostic data collection on Oracle Clusterware/Grid Infrastructure and RAC systems. TFA is similar to the diagcollection utility packaged with Oracle Clusterware in that it collects and packages diagnostic data; however, TFA is MUCH more powerful than diagcollection with its ability to centralize and automate the collection of diagnostic information. This helps speed up the data collection and upload process with Oracle Support, minimizing delays in data requests and analysis.
TFA provides the following key benefits:
  - Encapsulates diagnostic data collection for all CRS/GI and RAC components on all cluster nodes into a single command executed from a single node
  - Ability to "trim" diagnostic files during data collection to reduce data upload size
  - Ability to isolate diagnostic data collection to a given time period
  - Ability to centralize collected diagnostic output to a single server in the cluster
  - Ability to isolate diagnostic collection to a particular product component, e.g. ASM, RDBMS, Clusterware
  - Optional Real Time Scan of Alert Logs for conditions indicating a problem (DB Alert Logs, ASM Alert Logs, Clusterware Alert Logs, etc.)
  - Optional Automatic Data Collection based off of Real Time Scan findings
  - Optional On Demand Scan (user initiated) of all log and trace files for conditions indicating a problem
  - Optional Automatic Data Collection based off of On Demand Scan findings
 
Links
Needs attention on: -
Passed on: grac41, grac42, grac43

Status on grac41:
PASS => TFA Collector is installed and running


DATA FROM GRAC41 - TFA COLLECTOR STATUS 




-rwxr-xr-x. 1 root root 10435 Sep 12 12:50 /etc/init.d/init.tfa 

-rw-r--r--. 1 root root 5 Oct 19 08:30 /u01/app/11204/grid/tfa/grac41/tfa_home/internal/.pidfile

Status on grac42:
PASS => TFA Collector is installed and running


DATA FROM GRAC42 - TFA COLLECTOR STATUS 




-rwxr-xr-x. 1 root root 10435 Sep 12 13:05 /etc/init.d/init.tfa 

-rw-r--r--. 1 root root 5 Oct 29 10:07 /u01/app/11204/grid/tfa/grac42/tfa_home/internal/.pidfile

Status on grac43:
PASS => TFA Collector is installed and running


DATA FROM GRAC43 - TFA COLLECTOR STATUS 




-rwxr-xr-x. 1 root root 10435 Sep 24 18:10 /etc/init.d/init.tfa 

-rw-r--r--. 1 root root 5 Oct 29 10:07 /u01/app/11204/grid/tfa/grac43/tfa_home/internal/.pidfile
Top

Clusterware resource status

Recommendation
 Resources were found to be in an UNKNOWN state on the system. Having resources in this state often results in issues when upgrading. It is recommended to correct resources in an UNKNOWN state prior to upgrading.

 
Links
Needs attention on: -
Passed on: grac41, grac42, grac43

Status on grac41:
PASS => No clusterware resource are in unknown state


DATA FROM GRAC41 - CLUSTERWARE RESOURCE STATUS 



Oracle Clusterware active version on the cluster is [11.2.0.4.0] 
Oracle Clusterware version on node [grac41] is [11.2.0.4.0]
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       grac41                                       
               ONLINE  ONLINE       grac42                                       
               ONLINE  ONLINE       grac43                                       
ora.FRA.dg

Status on grac42:
PASS => No clusterware resource are in unknown state


DATA FROM GRAC42 - CLUSTERWARE RESOURCE STATUS 



Oracle Clusterware active version on the cluster is [11.2.0.4.0] 
Oracle Clusterware version on node [grac42] is [11.2.0.4.0]
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       grac41                                       
               ONLINE  ONLINE       grac42                                       
               ONLINE  ONLINE       grac43                                       
ora.FRA.dg

Status on grac43:
PASS => No clusterware resource are in unknown state


DATA FROM GRAC43 - CLUSTERWARE RESOURCE STATUS 



Oracle Clusterware active version on the cluster is [11.2.0.4.0] 
Oracle Clusterware version on node [grac43] is [11.2.0.4.0]
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       grac41                                       
               ONLINE  ONLINE       grac42                                       
               ONLINE  ONLINE       grac43                                       
ora.FRA.dg
Top

ORA-15196 errors in ASM alert log

Recommendation
 An ORA-15196 error means ASM encountered an invalid metadata block. Please see the trace file referenced next to the ORA-15196 error in the ASM alert log for more information. If this is an old error, you can ignore this finding; otherwise, open a service request with Oracle Support to find the cause and fix it.


 
Needs attention on: -
Passed on: grac41, grac42, grac43

Status on grac41:
PASS => No corrupt ASM header blocks indicated in ASM alert log (ORA-15196 errors)


DATA FROM GRAC41 - ORA-15196 ERRORS IN ASM ALERT LOG 




Status on grac42:
PASS => No corrupt ASM header blocks indicated in ASM alert log (ORA-15196 errors)


DATA FROM GRAC42 - ORA-15196 ERRORS IN ASM ALERT LOG 




Status on grac43:
PASS => No corrupt ASM header blocks indicated in ASM alert log (ORA-15196 errors)


DATA FROM GRAC43 - ORA-15196 ERRORS IN ASM ALERT LOG 



Top

Disks without Disk Group

Recommendation
 The GROUP_NUMBER and DISK_NUMBER columns in GV$ASM_DISK are only valid if the disk is part of a disk group currently mounted by the instance. Otherwise, GROUP_NUMBER will be 0, and DISK_NUMBER will be a unique value with respect to the other disks that also have a group number of 0. Run the following query to find the disks that are not part of any disk group.

select name,path,HEADER_STATUS,GROUP_NUMBER  from gv$asm_disk where group_number=0;
 
Needs attention on: grac41, grac42, grac43
Passed on: -

Status on grac41:
INFO => One or more disks found which are not part of any disk group


DATA FROM GRAC41 - DISKS WITHOUT DISK GROUP 




PATH
--------------------------------------------------------------------------------
/dev/asmdisk3_test
/dev/asmdisk2_test
/dev/asmdisk1_test
/dev/asmdisk2_test
/dev/asmdisk3_test
/dev/asmdisk1_test
/dev/asmdisk1_test
/dev/asmdisk2_test
/dev/asmdisk3_test

9 rows selected.


Status on grac42:
INFO => One or more disks found which are not part of any disk group


DATA FROM GRAC42 - DISKS WITHOUT DISK GROUP 




PATH
--------------------------------------------------------------------------------
/dev/asmdisk3_test
/dev/asmdisk2_test
/dev/asmdisk1_test
/dev/asmdisk1_test
/dev/asmdisk2_test
/dev/asmdisk3_test
/dev/asmdisk2_test
/dev/asmdisk3_test
/dev/asmdisk1_test

9 rows selected.


Status on grac43:
INFO => One or more disks found which are not part of any disk group


DATA FROM GRAC43 - DISKS WITHOUT DISK GROUP 




PATH
--------------------------------------------------------------------------------
/dev/asmdisk2_test
/dev/asmdisk3_test
/dev/asmdisk1_test
/dev/asmdisk3_test
/dev/asmdisk2_test
/dev/asmdisk1_test
/dev/asmdisk1_test
/dev/asmdisk2_test
/dev/asmdisk3_test

9 rows selected.

Top

Redo log file write time latency

Recommendation
 When the latency hits 500ms, a Warning message is written to the lgwr trace file(s). For example:

Warning: log write elapsed time 564ms, size 2KB

Even though this threshold is very high, and latencies below it can already impact application performance, these warnings are still worth capturing and reporting so that necessary action can be taken. The performance impact of LGWR latency includes commit delays, Broadcast-on-Commit delays, and so on.
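As a rough cross-check of the trace-file warnings above, average LGWR disk-write latency can be estimated from the standard dynamic performance views (a diagnostic sketch; interpretation depends on workload):

```sql
-- Approximate average redo write latency per instance, in milliseconds.
-- 'log file parallel write' is the event LGWR waits on while writing the
-- online redo logs; averages approaching hundreds of ms corroborate the
-- "log write elapsed time" warnings in the lgwr trace files.
SELECT inst_id,
       total_waits,
       ROUND(time_waited_micro / NULLIF(total_waits, 0) / 1000, 1) AS avg_write_ms
FROM   gv$system_event
WHERE  event = 'log file parallel write';
```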
 
Links
Needs attention on: grac41, grac42, grac43
Passed on: -

Status on grac41:
WARNING => Redo log write time is more than 500 milliseconds


DATA FROM GRAC41 - GRAC4 DATABASE - REDO LOG FILE WRITE TIME LATENCY 



Warning: log write elapsed time 2703ms, size 1KB
Warning: log write elapsed time 639ms, size 3KB
Warning: log write elapsed time 615ms, size 2KB
Warning: log write elapsed time 507ms, size 1KB
Warning: log write elapsed time 521ms, size 1KB
Warning: log write elapsed time 665ms, size 1KB

Status on grac42:
WARNING => Redo log write time is more than 500 milliseconds


DATA FROM GRAC42 - GRAC4 DATABASE - REDO LOG FILE WRITE TIME LATENCY 



Warning: log write elapsed time 765ms, size 2KB
Warning: log write elapsed time 528ms, size 2KB
Warning: log write elapsed time 511ms, size 1KB
Warning: log write elapsed time 790ms, size 2KB
Warning: log write elapsed time 504ms, size 2KB
Warning: log write elapsed time 575ms, size 2KB
Warning: log write elapsed time 520ms, size 1KB
Warning: log write elapsed time 508ms, size 2KB
Warning: log write elapsed time 529ms, size 2KB
Warning: log write elapsed time 503ms, size 1KB
Warning: log write elapsed time 544ms, size 2KB
Warning: log write elapsed time 529ms, size 2KB
Warning: log write elapsed time 563ms, size 2KB

Status on grac43:
WARNING => Redo log write time is more than 500 milliseconds


DATA FROM GRAC43 - GRAC4 DATABASE - REDO LOG FILE WRITE TIME LATENCY 



Warning: log write elapsed time 665ms, size 4KB
Warning: log write elapsed time 526ms, size 2KB
Warning: log write elapsed time 535ms, size 2KB
Warning: log write elapsed time 558ms, size 2KB
Warning: log write elapsed time 771ms, size 2KB
Warning: log write elapsed time 715ms, size 2KB
Warning: log write elapsed time 544ms, size 2KB
Warning: log write elapsed time 626ms, size 3KB
Warning: log write elapsed time 695ms, size 3KB
Warning: log write elapsed time 757ms, size 2KB
Warning: log write elapsed time 586ms, size 2KB
Warning: log write elapsed time 702ms, size 2KB
Warning: log write elapsed time 651ms, size 3KB
Warning: log write elapsed time 621ms, size 2KB
Warning: log write elapsed time 545ms, size 1KB
Warning: log write elapsed time 726ms, size 3KB
(additional output truncated)
Top

Broadcast Requirements for Networks

Success Factor: USE SEPARATE SUBNETS FOR INTERFACES CONFIGURED FOR REDUNDANT INTERCONNECT (HAIP)
Recommendation
 All public and private interconnect network interfaces should be able to arping all remote nodes in the cluster.

For example, using the public network interface, arping a remote node with the following command; the output should include "Received 1 response(s)":

/sbin/arping -b -f -c 1 -w 1 -I eth1 nodename2

Here eth1 is the public network interface and nodename2 is the second node in the cluster.

 
Links
Needs attention on: -
Passed on: grac41, grac42, grac43

Status on grac41:
PASS => Grid Infrastructure network broadcast requirements are met


DATA FROM GRAC41 FOR BROADCAST REQUIREMENTS FOR NETWORKS 



ARPING 192.168.1.102 from 192.168.1.101 eth1
Unicast reply from 192.168.1.102 [08:00:27:15:73:CD]  0.859ms
Sent 1 probes (1 broadcast(s))
Received 1 response(s)
ARPING 192.168.1.102 from 192.168.2.101 eth2
Unicast reply from 192.168.1.102 [08:00:27:CA:E7:A7]  0.933ms
Sent 1 probes (1 broadcast(s))
Received 1 response(s)
ARPING 192.168.1.103 from 192.168.1.101 eth1
Unicast reply from 192.168.1.103 [08:00:27:94:AA:5E]  1.070ms
Sent 1 probes (1 broadcast(s))
Received 1 response(s)
ARPING 192.168.1.103 from 192.168.2.101 eth2
Unicast reply from 192.168.1.103 [08:00:27:B8:B4:00]  1.634ms
Sent 1 probes (1 broadcast(s))
Received 1 response(s)

Status on grac42:
PASS => Grid Infrastructure network broadcast requirements are met


DATA FROM GRAC42 FOR BROADCAST REQUIREMENTS FOR NETWORKS 



ARPING 192.168.1.101 from 192.168.1.102 eth1
Unicast reply from 192.168.1.101 [08:00:27:1E:7D:B0]  1.123ms
Sent 1 probes (1 broadcast(s))
Received 1 response(s)
ARPING 192.168.1.101 from 192.168.2.102 eth2
Unicast reply from 192.168.1.101 [08:00:27:97:59:C3]  1.091ms
Sent 1 probes (1 broadcast(s))
Received 1 response(s)
ARPING 192.168.1.103 from 192.168.1.102 eth1
Unicast reply from 192.168.1.103 [08:00:27:94:AA:5E]  1.072ms
Sent 1 probes (1 broadcast(s))
Received 1 response(s)
ARPING 192.168.1.103 from 192.168.2.102 eth2
Unicast reply from 192.168.1.103 [08:00:27:B8:B4:00]  0.923ms
Sent 1 probes (1 broadcast(s))
Received 1 response(s)

Status on grac43:
PASS => Grid Infrastructure network broadcast requirements are met


DATA FROM GRAC43 FOR BROADCAST REQUIREMENTS FOR NETWORKS 



ARPING 192.168.1.101 from 192.168.1.103 eth1
Unicast reply from 192.168.1.101 [08:00:27:1E:7D:B0]  0.758ms
Sent 1 probes (1 broadcast(s))
Received 1 response(s)
ARPING 192.168.1.101 from 192.168.2.103 eth2
Unicast reply from 192.168.1.101 [08:00:27:97:59:C3]  1.110ms
Sent 1 probes (1 broadcast(s))
Received 1 response(s)
ARPING 192.168.1.102 from 192.168.1.103 eth1
Unicast reply from 192.168.1.102 [08:00:27:15:73:CD]  0.892ms
Sent 1 probes (1 broadcast(s))
Received 1 response(s)
ARPING 192.168.1.102 from 192.168.2.103 eth2
Unicast reply from 192.168.1.102 [08:00:27:CA:E7:A7]  1.327ms
Sent 1 probes (1 broadcast(s))
Received 1 response(s)
Top

Primary database protection with Data Guard

Success Factor: DATABASE/CLUSTER/SITE FAILURE PREVENTION BEST PRACTICES
Recommendation
 Oracle 11g and higher Active Data Guard is the real-time data protection and availability solution that eliminates single point of failure by maintaining one or more synchronized physical replicas of the production database. If an unplanned outage of any kind impacts the production database, applications and users can quickly failover to a synchronized standby, minimizing downtime and preventing data loss. An Active Data Guard standby can be used to offload read-only applications, ad-hoc queries, and backups from the primary database or be dual-purposed as a test system at the same time it provides disaster protection. An Active Data Guard standby can also be used to minimize downtime for planned maintenance when upgrading to new Oracle Database patch sets and releases and for select migrations.  
 
For zero data loss protection and fastest recovery time, deploy a local Data Guard standby database with Data Guard Fast-Start Failover and integrated client failover. For protection against outages impacting both the primary and the local standby or the entire data center, or a broad geography, deploy a second Data Guard standby database at a remote location.

Key HA Benefits:

With Oracle 11g release 2 and higher Active Data Guard and real time apply, data block corruptions can be repaired automatically and downtime can be reduced from hours and days of application impact to zero downtime with zero data loss.

With MAA best practices, Data Guard Fast-Start Failover (typically a local standby) and integrated client failover, downtime from database, cluster and site failures can be reduced from hours and days to seconds and minutes.

With remote standby database (Disaster Recovery Site), you have protection from complete site failures.

In all cases, the Active Data Guard instances can be active and used for other activities.

Data Guard can reduce risks and downtime for planned maintenance activities by using Database rolling upgrade with transient logical standby, standby-first patch apply and database migrations.

Active Data Guard provides optimal data protection by using physical replication and comprehensive Oracle validation to maintain an exact byte-for-byte copy of the primary database that can be open read-only to offload reporting, ad-hoc queries and backups. For other advanced replication requirements where read-write access to a replica database is required while it is being synchronized with the primary database, see Oracle GoldenGate logical replication. Oracle GoldenGate can be used to support heterogeneous database platforms and database releases, to provide an effective read-write full or subset logical replica, and to reduce or eliminate downtime for application, database or system changes. The main trade-off of Oracle GoldenGate's flexible logical replication solution is the additional administration required of application developers and database administrators.
 
Links
Needs attention on: grac4
Passed on: -

Status on grac4:
FAIL => Primary database is NOT protected with Data Guard (standby database) for real-time data protection and availability


DATA FOR GRAC4 FOR PRIMARY DATABASE PROTECTION WITH DATA GUARD 



Top

Locally managed tablespaces

Success Factor: DATABASE FAILURE PREVENTION BEST PRACTICES
Recommendation
 To reduce contention on the data dictionary and rollback data, and to reduce the amount of generated redo, locally managed tablespaces should be used rather than dictionary-managed tablespaces. Please refer to the notes referenced below for more information about the benefits of locally managed tablespaces and how to migrate a tablespace from dictionary managed to locally managed.
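Any remaining dictionary-managed tablespaces can be identified with a query like the following (a sketch against the standard DBA_TABLESPACES view):

```sql
-- Lists dictionary-managed tablespaces; no rows returned means all
-- tablespaces are locally managed.
SELECT tablespace_name, extent_management
FROM   dba_tablespaces
WHERE  extent_management = 'DICTIONARY';
```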
 
Links
Needs attention on: -
Passed on: grac4

Status on grac4:
PASS => All tablespaces are locally managed tablespace


DATA FOR GRAC4 FOR LOCALLY MANAGED TABLESPACES 




dictionary_managed_tablespace_count = 0                                         
Top

Automatic segment storage management

Success Factor: DATABASE FAILURE PREVENTION BEST PRACTICES
Recommendation
 Starting with Oracle 9i, Automatic Segment Space Management (ASSM) can be used by specifying the SEGMENT SPACE MANAGEMENT clause, set to AUTO, in the CREATE TABLESPACE statement. Implementing the ASSM feature allows Oracle to use bitmaps to manage the free space within segments. The bitmap describes the status of each data block within a segment with respect to the amount of space available in the block for inserting rows. Because the current status of the space available in a data block is reflected in the bitmap, Oracle can manage free space automatically with ASSM. ASSM tablespaces automate freelist management and remove the requirement (and ability) to specify PCTUSED, FREELISTS, and FREELIST GROUPS storage parameters for individual tables and indexes created in these tablespaces.
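Tablespaces not yet using ASSM can be listed with a query along these lines (a sketch; SYSTEM is excluded because it always uses manual segment space management):

```sql
-- Permanent tablespaces not using ASSM; no rows returned means all
-- checked tablespaces use automatic segment space management.
SELECT tablespace_name, segment_space_management
FROM   dba_tablespaces
WHERE  contents = 'PERMANENT'
AND    tablespace_name <> 'SYSTEM'
AND    segment_space_management <> 'AUTO';
```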
 
Links
Needs attention on: -
Passed on: grac4

Status on grac4:
PASS => All tablespaces are using Automatic segment storage management


DATA FOR GRAC4 FOR AUTOMATIC SEGMENT STORAGE MANAGEMENT 




Query returned no rows which is expected when the SQL check passes.

Top

Default Temporary Tablespace

Success Factor: DATABASE FAILURE PREVENTION BEST PRACTICES
Recommendation
 It is recommended to set a default temporary tablespace at the database level to achieve optimal performance for queries that require sorting data.
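The current setting can be confirmed from the standard DATABASE_PROPERTIES view:

```sql
-- Shows the database-level default temporary tablespace.
SELECT property_value AS default_temp_tablespace
FROM   database_properties
WHERE  property_name = 'DEFAULT_TEMP_TABLESPACE';
```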
 
Links
Needs attention on: -
Passed on: grac4

Status on grac4:
PASS => Default temporary tablespace is set


DATA FOR GRAC4 FOR DEFAULT TEMPORARY TABLESPACE 




DEFAULT_TEMP_TABLESPACE                                                         
TEMP                                                                            
                                                                                
Top

Archivelog Mode

Success Factor: DATABASE FAILURE PREVENTION BEST PRACTICES
Recommendation
 Running the database in ARCHIVELOG mode and using database FORCE LOGGING mode are prerequisites for database recovery operations. The ARCHIVELOG mode enables online database backup and is necessary to recover the database to a point in time later than what has been restored. Features such as Oracle Data Guard and Flashback Database require that the production database run in ARCHIVELOG mode.
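Both prerequisites can be verified in one query against V$DATABASE:

```sql
-- Confirms archiving and force logging status; expected values for a
-- protected production database are ARCHIVELOG and YES.
SELECT log_mode, force_logging
FROM   v$database;
```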
 
Links
Needs attention on: -
Passed on: grac4

Status on grac4:
PASS => Database Archivelog Mode is set to ARCHIVELOG


DATA FOR GRAC4 FOR ARCHIVELOG MODE 




Archivelog Mode = ARCHIVELOG                                                    
Top

Check for parameter libgcc|4.4.4|13.el6|x86_64

Recommendation
 Please review MOS Note 169706.1 - Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
 
Needs attention on: -
Passed on: grac41, grac42, grac43

Status on grac41:
PASS => Package libgcc-4.4.4-13.el6-x86_64 meets or exceeds recommendation

libgcc|4.4.7|3.el6|x86_64

Status on grac42:
PASS => Package libgcc-4.4.4-13.el6-x86_64 meets or exceeds recommendation

libgcc|4.4.7|3.el6|x86_64

Status on grac43:
PASS => Package libgcc-4.4.4-13.el6-x86_64 meets or exceeds recommendation

libgcc|4.4.7|3.el6|x86_64
Top

ASM disk read write error

Recommendation
 Read errors can be the result of a loss of access to the entire disk or media corruptions on an otherwise healthy disk. ASM tries to recover from read errors caused by corrupted sectors on a disk. When a read error by the database or ASM triggers the ASM instance to attempt bad block remapping, ASM reads a good copy of the extent and copies it to the disk that had the read error.

If the write to the same location succeeds, then the underlying allocation unit (sector) is deemed healthy. This might be because the underlying disk did its own bad block reallocation.

If the write fails, ASM attempts to write the extent to a new allocation unit on the same disk. If this write succeeds, the original allocation unit is marked as unusable. If the write fails, the disk is taken offline.

One unique benefit of ASM-based mirroring is that the database instance is aware of the mirroring. For many types of logical corruptions, such as a bad checksum or incorrect System Change Number (SCN), the database instance reads the mirror side looking for valid content and, if it finds it, proceeds without errors. If the process in the database that encountered the read error is in a position to obtain the appropriate locks to ensure data consistency, it writes the correct data to all mirror sides.

When encountering a write error, a database instance sends the ASM instance a disk offline message.

If the database can successfully complete a write to at least one extent copy and receive acknowledgment of the offline disk from ASM, the write is considered successful.

If the writes to all mirror sides fail, the database takes the appropriate action in response to a write error, such as taking the tablespace offline.

When the ASM instance receives a write error message from a database instance, or when an ASM instance encounters a write error itself, the ASM instance attempts to take the disk offline. ASM consults the Partner Status Table (PST) to see whether any of the disk's partners are offline. If too many partners are already offline, ASM forces the dismounting of the disk group. Otherwise, ASM takes the disk offline.

The ASMCMD remap command was introduced to address situations where a range of bad sectors exists on a disk and must be corrected before ASM or database I/O can proceed.
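Accumulated I/O errors per disk are exposed in the standard GV$ASM_DISK view and can be checked with a query like this:

```sql
-- Disks reporting read or write errors; no rows returned means no
-- errors have been recorded for any ASM disk.
SELECT inst_id, group_number, disk_number, path, read_errs, write_errs
FROM   gv$asm_disk
WHERE  read_errs > 0 OR write_errs > 0;
```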
 
Needs attention on: -
Passed on: grac4

Status on grac4:
PASS => No read/write errors found for ASM disks


DATA FOR GRAC4 FOR ASM DISK READ WRITE ERROR 




                0                  0                                            
Top

Block Corruptions

Success Factor: DATA CORRUPTION PREVENTION BEST PRACTICES
Recommendation
 The V$DATABASE_BLOCK_CORRUPTION view displays blocks marked corrupt by Oracle Database components such as RMAN commands, ANALYZE, dbv, SQL queries, and so on. Any process that encounters a corrupt block records the block corruption in this view. Repair techniques include block media recovery, restoring data files, recovering with incremental backups, and block newing. Block media recovery can repair physical corruptions, but not logical corruptions. It is also recommended to use the RMAN "CHECK LOGICAL" option to check for data block corruptions periodically. Please consult the Oracle Database Backup and Recovery User's Guide for repair instructions.
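The view can be inspected directly:

```sql
-- Currently recorded corrupt blocks; no rows returned means none are known.
SELECT file#, block#, blocks, corruption_change#, corruption_type
FROM   v$database_block_corruption;
```

Note that the view is populated by processes that actually read the blocks; a periodic RMAN BACKUP VALIDATE CHECK LOGICAL DATABASE run is one way to refresh it.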
 
Needs attention on: -
Passed on: grac4

Status on grac4:
PASS => No reported block corruptions in V$DATABASE_BLOCK_CORRUPTIONS


DATA FOR GRAC4 FOR BLOCK CORRUPTIONS 




0 block_corruptions found in v$database_block_corruptions                       
Top

Check for parameter sysstat|9.0.4|11.el6|x86_64

Recommendation
 Please review MOS Note 169706.1 - Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
 
Needs attention on: -
Passed on: grac41, grac42, grac43

Status on grac41:
PASS => Package sysstat-9.0.4-11.el6-x86_64 meets or exceeds recommendation

sysstat|9.0.4|20.el6|x86_64

Status on grac42:
PASS => Package sysstat-9.0.4-11.el6-x86_64 meets or exceeds recommendation

sysstat|9.0.4|20.el6|x86_64

Status on grac43:
PASS => Package sysstat-9.0.4-11.el6-x86_64 meets or exceeds recommendation

sysstat|9.0.4|20.el6|x86_64
Top

Check for parameter libgcc|4.4.4|13.el6|i686

Recommendation
 Please review MOS Note 169706.1 - Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
 
Needs attention on: -
Passed on: grac41, grac42, grac43

Status on grac41:
PASS => Package libgcc-4.4.4-13.el6-i686 meets or exceeds recommendation

libgcc|4.4.7|3.el6|x86_64

Status on grac42:
PASS => Package libgcc-4.4.4-13.el6-i686 meets or exceeds recommendation

libgcc|4.4.7|3.el6|x86_64

Status on grac43:
PASS => Package libgcc-4.4.4-13.el6-i686 meets or exceeds recommendation

libgcc|4.4.7|3.el6|x86_64
Top

Check for parameter binutils|2.20.51.0.2|5.11.el6|x86_64

Recommendation
 Please review MOS Note 169706.1 - Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
 
Needs attention on: -
Passed on: grac41, grac42, grac43

Status on grac41:
PASS => Package binutils-2.20.51.0.2-5.11.el6-x86_64 meets or exceeds recommendation

binutils-devel|2.20.51.0.2|5.36.el6|x86_64
binutils|2.20.51.0.2|5.36.el6|x86_64

Status on grac42:
PASS => Package binutils-2.20.51.0.2-5.11.el6-x86_64 meets or exceeds recommendation

binutils-devel|2.20.51.0.2|5.36.el6|x86_64
binutils|2.20.51.0.2|5.36.el6|x86_64

Status on grac43:
PASS => Package binutils-2.20.51.0.2-5.11.el6-x86_64 meets or exceeds recommendation

binutils-devel|2.20.51.0.2|5.36.el6|x86_64
binutils|2.20.51.0.2|5.36.el6|x86_64
Top

Check for parameter glibc|2.12|1.7.el6|x86_64

Recommendation
 Please review MOS Note 169706.1 - Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
 
Needs attention on: -
Passed on: grac41, grac42, grac43

Status on grac41:
PASS => Package glibc-2.12-1.7.el6-x86_64 meets or exceeds recommendation

glibc-headers|2.12|1.107.el6_4.4|x86_64
glibc-common|2.12|1.107.el6_4.4|x86_64
glibc|2.12|1.107.el6_4.4|x86_64
glibc-devel|2.12|1.107.el6_4.4|x86_64

Status on grac42:
PASS => Package glibc-2.12-1.7.el6-x86_64 meets or exceeds recommendation

glibc-headers|2.12|1.107.el6_4.4|x86_64
glibc-common|2.12|1.107.el6_4.4|x86_64
glibc|2.12|1.107.el6_4.4|x86_64
glibc-devel|2.12|1.107.el6_4.4|x86_64

Status on grac43:
PASS => Package glibc-2.12-1.7.el6-x86_64 meets or exceeds recommendation

glibc-headers|2.12|1.107.el6_4.4|x86_64
glibc-common|2.12|1.107.el6_4.4|x86_64
glibc|2.12|1.107.el6_4.4|x86_64
glibc-devel|2.12|1.107.el6_4.4|x86_64
Top

Check for parameter libstdc++|4.4.4|13.el6|x86_64

Recommendation
 Please review MOS Note 169706.1 - Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
 
Needs attention on: -
Passed on: grac41, grac42, grac43

Status on grac41:
PASS => Package libstdc++-4.4.4-13.el6-x86_64 meets or exceeds recommendation

libstdc++-devel|4.4.7|3.el6|x86_64
libstdc++|4.4.7|3.el6|x86_64
compat-libstdc++-33|3.2.3|69.el6|x86_64

Status on grac42:
PASS => Package libstdc++-4.4.4-13.el6-x86_64 meets or exceeds recommendation

libstdc++-devel|4.4.7|3.el6|x86_64
libstdc++|4.4.7|3.el6|x86_64
compat-libstdc++-33|3.2.3|69.el6|x86_64

Status on grac43:
PASS => Package libstdc++-4.4.4-13.el6-x86_64 meets or exceeds recommendation

libstdc++-devel|4.4.7|3.el6|x86_64
libstdc++|4.4.7|3.el6|x86_64
compat-libstdc++-33|3.2.3|69.el6|x86_64
Top

Check for parameter libstdc++|4.4.4|13.el6|i686

Recommendation
 Please review MOS Note 169706.1 - Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
 
Needs attention on: -
Passed on: grac41, grac42, grac43

Status on grac41:
PASS => Package libstdc++-4.4.4-13.el6-i686 meets or exceeds recommendation

libstdc++-devel|4.4.7|3.el6|x86_64
libstdc++|4.4.7|3.el6|x86_64
compat-libstdc++-33|3.2.3|69.el6|x86_64

Status on grac42:
PASS => Package libstdc++-4.4.4-13.el6-i686 meets or exceeds recommendation

libstdc++-devel|4.4.7|3.el6|x86_64
libstdc++|4.4.7|3.el6|x86_64
compat-libstdc++-33|3.2.3|69.el6|x86_64

Status on grac43:
PASS => Package libstdc++-4.4.4-13.el6-i686 meets or exceeds recommendation

libstdc++-devel|4.4.7|3.el6|x86_64
libstdc++|4.4.7|3.el6|x86_64
compat-libstdc++-33|3.2.3|69.el6|x86_64
Top

Check for parameter glibc|2.12|1.7.el6|i686

Recommendation
 Please review MOS Note 169706.1 - Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
 
Needs attention on: -
Passed on: grac41, grac42, grac43

Status on grac41:
PASS => Package glibc-2.12-1.7.el6-i686 meets or exceeds recommendation

glibc-headers|2.12|1.107.el6_4.4|x86_64
glibc-common|2.12|1.107.el6_4.4|x86_64
glibc|2.12|1.107.el6_4.4|x86_64
glibc-devel|2.12|1.107.el6_4.4|x86_64

Status on grac42:
PASS => Package glibc-2.12-1.7.el6-i686 meets or exceeds recommendation

glibc-headers|2.12|1.107.el6_4.4|x86_64
glibc-common|2.12|1.107.el6_4.4|x86_64
glibc|2.12|1.107.el6_4.4|x86_64
glibc-devel|2.12|1.107.el6_4.4|x86_64

Status on grac43:
PASS => Package glibc-2.12-1.7.el6-i686 meets or exceeds recommendation

glibc-headers|2.12|1.107.el6_4.4|x86_64
glibc-common|2.12|1.107.el6_4.4|x86_64
glibc|2.12|1.107.el6_4.4|x86_64
glibc-devel|2.12|1.107.el6_4.4|x86_64
Top

Check for parameter gcc|4.4.4|13.el6|x86_64

Recommendation
 Please review MOS Note 169706.1 - Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
 
Needs attention on: -
Passed on: grac41, grac42, grac43

Status on grac41:
PASS => Package gcc-4.4.4-13.el6-x86_64 meets or exceeds recommendation

gcc|4.4.7|3.el6|x86_64
gcc-gfortran|4.4.7|3.el6|x86_64
gcc-c++|4.4.7|3.el6|x86_64

Status on grac42:
PASS => Package gcc-4.4.4-13.el6-x86_64 meets or exceeds recommendation

gcc|4.4.7|3.el6|x86_64
gcc-gfortran|4.4.7|3.el6|x86_64
gcc-c++|4.4.7|3.el6|x86_64

Status on grac43:
PASS => Package gcc-4.4.4-13.el6-x86_64 meets or exceeds recommendation

gcc|4.4.7|3.el6|x86_64
gcc-gfortran|4.4.7|3.el6|x86_64
gcc-c++|4.4.7|3.el6|x86_64
Top

Check for parameter make|3.81|19.el6|

Recommendation
 Please review MOS Note 169706.1 - Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
 
Needs attention on: -
Passed on: grac41, grac42, grac43

Status on grac41:
PASS => Package make-3.81-19.el6 meets or exceeds recommendation

make|3.81|20.el6|x86_64

Status on grac42:
PASS => Package make-3.81-19.el6 meets or exceeds recommendation

make|3.81|20.el6|x86_64

Status on grac43:
PASS => Package make-3.81-19.el6 meets or exceeds recommendation

make|3.81|20.el6|x86_64
Top

Check for parameter libstdc++-devel|4.4.4|13.el6|i686

Recommendation
 Please review MOS Note 169706.1 - Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
 
Needs attention on: -
Passed on: grac41, grac42, grac43

Status on grac41:
PASS => Package libstdc++-devel-4.4.4-13.el6-i686 meets or exceeds recommendation

libstdc++-devel|4.4.7|3.el6|x86_64

Status on grac42:
PASS => Package libstdc++-devel-4.4.4-13.el6-i686 meets or exceeds recommendation

libstdc++-devel|4.4.7|3.el6|x86_64

Status on grac43:
PASS => Package libstdc++-devel-4.4.4-13.el6-i686 meets or exceeds recommendation

libstdc++-devel|4.4.7|3.el6|x86_64
Top

Check for parameter libaio-devel|0.3.107|10.el6|x86_64

Recommendation
 Please review MOS Note 169706.1 - Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
 
Needs attention on: -
Passed on: grac41, grac42, grac43

Status on grac41:
PASS => Package libaio-devel-0.3.107-10.el6-x86_64 meets or exceeds recommendation

libaio-devel|0.3.107|10.el6|x86_64

Status on grac42:
PASS => Package libaio-devel-0.3.107-10.el6-x86_64 meets or exceeds recommendation

libaio-devel|0.3.107|10.el6|x86_64

Status on grac43:
PASS => Package libaio-devel-0.3.107-10.el6-x86_64 meets or exceeds recommendation

libaio-devel|0.3.107|10.el6|x86_64
Top

Check for parameter libaio|0.3.107|10.el6|x86_64

Recommendation
 Please review MOS Note 169706.1 - Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
 
Needs attention on: -
Passed on: grac41, grac42, grac43

Status on grac41:
PASS => Package libaio-0.3.107-10.el6-x86_64 meets or exceeds recommendation

libaio|0.3.107|10.el6|x86_64
libaio-devel|0.3.107|10.el6|x86_64

Status on grac42:
PASS => Package libaio-0.3.107-10.el6-x86_64 meets or exceeds recommendation

libaio|0.3.107|10.el6|x86_64
libaio-devel|0.3.107|10.el6|x86_64

Status on grac43:
PASS => Package libaio-0.3.107-10.el6-x86_64 meets or exceeds recommendation

libaio|0.3.107|10.el6|x86_64
libaio-devel|0.3.107|10.el6|x86_64
Top

Check for parameter unixODBC-devel|2.2.14|11.el6|i686

Recommendation
 Please review MOS Note 169706.1 - Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
 
Needs attention on: grac41, grac42, grac43
Passed on: -

Status on grac41:
FAIL => Package unixODBC-devel-2.2.14-11.el6-i686 is recommended but NOT installed


Status on grac42:
FAIL => Package unixODBC-devel-2.2.14-11.el6-i686 is recommended but NOT installed


Status on grac43:
FAIL => Package unixODBC-devel-2.2.14-11.el6-i686 is recommended but NOT installed

Top

Check for parameter compat-libstdc++-33|3.2.3|69.el6|i686

Recommendation
 Please review MOS Note 169706.1 - Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
 
Needs attention on: -
Passed on: grac41, grac42, grac43

Status on grac41:
PASS => Package compat-libstdc++-33-3.2.3-69.el6-i686 meets or exceeds recommendation

compat-libstdc++-33|3.2.3|69.el6|x86_64

Status on grac42:
PASS => Package compat-libstdc++-33-3.2.3-69.el6-i686 meets or exceeds recommendation

compat-libstdc++-33|3.2.3|69.el6|x86_64

Status on grac43:
PASS => Package compat-libstdc++-33-3.2.3-69.el6-i686 meets or exceeds recommendation

compat-libstdc++-33|3.2.3|69.el6|x86_64
Top

Check for parameter glibc-devel|2.12|1.7.el6|x86_64

Recommendation
 Please review MOS Note 169706.1 - Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
 
Needs attention on: -
Passed on: grac41, grac42, grac43

Status on grac41:
PASS => Package glibc-devel-2.12-1.7.el6-x86_64 meets or exceeds recommendation

glibc-devel|2.12|1.107.el6_4.4|x86_64

Status on grac42:
PASS => Package glibc-devel-2.12-1.7.el6-x86_64 meets or exceeds recommendation

glibc-devel|2.12|1.107.el6_4.4|x86_64

Status on grac43:
PASS => Package glibc-devel-2.12-1.7.el6-x86_64 meets or exceeds recommendation

glibc-devel|2.12|1.107.el6_4.4|x86_64
Top

Check for parameter glibc-devel|2.12|1.7.el6|i686

Recommendation
 Please review MOS Note 169706.1 - Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
 
Needs attention on: -
Passed on: grac41, grac42, grac43

Status on grac41:
PASS => Package glibc-devel-2.12-1.7.el6-i686 meets or exceeds recommendation

glibc-devel|2.12|1.107.el6_4.4|x86_64

Status on grac42:
PASS => Package glibc-devel-2.12-1.7.el6-i686 meets or exceeds recommendation

glibc-devel|2.12|1.107.el6_4.4|x86_64

Status on grac43:
PASS => Package glibc-devel-2.12-1.7.el6-i686 meets or exceeds recommendation

glibc-devel|2.12|1.107.el6_4.4|x86_64
Top

Check for parameter compat-libcap1|1.10|1|x86_64

Recommendation
 Please review MOS Note 169706.1 - Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
 
Needs attention on: -
Passed on: grac41, grac42, grac43

Status on grac41:
PASS => Package compat-libcap1-1.10-1-x86_64 meets or exceeds recommendation

compat-libcap1|1.10|1|x86_64

Status on grac42:
PASS => Package compat-libcap1-1.10-1-x86_64 meets or exceeds recommendation

compat-libcap1|1.10|1|x86_64

Status on grac43:
PASS => Package compat-libcap1-1.10-1-x86_64 meets or exceeds recommendation

compat-libcap1|1.10|1|x86_64
Top

Check for parameter ksh|20100621|12.el6|x86_64

Recommendation
 Please review MOS Note 169706.1 - Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
 
Needs attention on: -
Passed on: grac41, grac42, grac43

Status on grac41:
PASS => Package ksh-20100621-12.el6-x86_64 meets or exceeds recommendation

ksh|20100621|19.el6_4.4|x86_64

Status on grac42:
PASS => Package ksh-20100621-12.el6-x86_64 meets or exceeds recommendation

ksh|20100621|19.el6_4.4|x86_64

Status on grac43:
PASS => Package ksh-20100621-12.el6-x86_64 meets or exceeds recommendation

ksh|20100621|19.el6_4.4|x86_64
Top

Check for parameter unixODBC|2.2.14|11.el6|i686

Recommendation
 Please review MOS Note 169706.1 - Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
 
Needs attention on grac41, grac42, grac43
Passed on -

Status on grac41:
FAIL => Package unixODBC-2.2.14-11.el6-i686 is recommended but NOT installed


Status on grac42:
FAIL => Package unixODBC-2.2.14-11.el6-i686 is recommended but NOT installed


Status on grac43:
FAIL => Package unixODBC-2.2.14-11.el6-i686 is recommended but NOT installed

Top

Check for parameter libaio|0.3.107|10.el6|i686

Recommendation
 Please review MOS Note 169706.1 - Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
 
Needs attention on -
Passed on grac41, grac42, grac43

Status on grac41:
PASS => Package libaio-0.3.107-10.el6-i686 meets or exceeds recommendation

libaio|0.3.107|10.el6|x86_64
libaio-devel|0.3.107|10.el6|x86_64

Status on grac42:
PASS => Package libaio-0.3.107-10.el6-i686 meets or exceeds recommendation

libaio|0.3.107|10.el6|x86_64
libaio-devel|0.3.107|10.el6|x86_64

Status on grac43:
PASS => Package libaio-0.3.107-10.el6-i686 meets or exceeds recommendation

libaio|0.3.107|10.el6|x86_64
libaio-devel|0.3.107|10.el6|x86_64
Top

Check for parameter libstdc++-devel|4.4.4|13.el6|x86_64

Recommendation
 Please review MOS Note 169706.1 - Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
 
Needs attention on -
Passed on grac41, grac42, grac43

Status on grac41:
PASS => Package libstdc++-devel-4.4.4-13.el6-x86_64 meets or exceeds recommendation

libstdc++-devel|4.4.7|3.el6|x86_64

Status on grac42:
PASS => Package libstdc++-devel-4.4.4-13.el6-x86_64 meets or exceeds recommendation

libstdc++-devel|4.4.7|3.el6|x86_64

Status on grac43:
PASS => Package libstdc++-devel-4.4.4-13.el6-x86_64 meets or exceeds recommendation

libstdc++-devel|4.4.7|3.el6|x86_64
Top

Check for parameter gcc-c++|4.4.4|13.el6|x86_64

Recommendation
 Please review MOS Note 169706.1 - Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
 
Needs attention on -
Passed on grac41, grac42, grac43

Status on grac41:
PASS => Package gcc-c++-4.4.4-13.el6-x86_64 meets or exceeds recommendation

gcc-c++|4.4.7|3.el6|x86_64

Status on grac42:
PASS => Package gcc-c++-4.4.4-13.el6-x86_64 meets or exceeds recommendation

gcc-c++|4.4.7|3.el6|x86_64

Status on grac43:
PASS => Package gcc-c++-4.4.4-13.el6-x86_64 meets or exceeds recommendation

gcc-c++|4.4.7|3.el6|x86_64
Top

Check for parameter compat-libstdc++-33|3.2.3|69.el6|x86_64

Recommendation
 Please review MOS Note 169706.1 - Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
 
Needs attention on -
Passed on grac41, grac42, grac43

Status on grac41:
PASS => Package compat-libstdc++-33-3.2.3-69.el6-x86_64 meets or exceeds recommendation

compat-libstdc++-33|3.2.3|69.el6|x86_64

Status on grac42:
PASS => Package compat-libstdc++-33-3.2.3-69.el6-x86_64 meets or exceeds recommendation

compat-libstdc++-33|3.2.3|69.el6|x86_64

Status on grac43:
PASS => Package compat-libstdc++-33-3.2.3-69.el6-x86_64 meets or exceeds recommendation

compat-libstdc++-33|3.2.3|69.el6|x86_64
Top

Check for parameter libaio-devel|0.3.107|10.el6|i686

Recommendation
 Please review MOS Note 169706.1 - Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
 
Needs attention on -
Passed on grac41, grac42, grac43

Status on grac41:
PASS => Package libaio-devel-0.3.107-10.el6-i686 meets or exceeds recommendation

libaio-devel|0.3.107|10.el6|x86_64

Status on grac42:
PASS => Package libaio-devel-0.3.107-10.el6-i686 meets or exceeds recommendation

libaio-devel|0.3.107|10.el6|x86_64

Status on grac43:
PASS => Package libaio-devel-0.3.107-10.el6-i686 meets or exceeds recommendation

libaio-devel|0.3.107|10.el6|x86_64
Top

Remote listener set to scan name

Recommendation
 For Oracle Database 11g Release 2, the REMOTE_LISTENER parameter should be set to the SCAN. This allows the instances to register with the SCAN Listeners to provide information on what services are being provided by the instance, the current load, and a recommendation on how many incoming connections should be directed to the instance.
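
As a hedged illustration, the current setting can be compared against the SCAN reported by `srvctl config scan` (run as SYSDBA on any instance; this is a check only, not a repair):

```sql
-- REMOTE_LISTENER should resolve to scanname:port
SELECT value AS remote_listener
FROM   v$parameter
WHERE  name = 'remote_listener';
```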
 
Links
Needs attention on -
Passed on grac41, grac42, grac43

Status on grac41:
PASS => Remote listener is set to SCAN name


DATA FROM GRAC41 - GRAC4 DATABASE - REMOTE LISTENER SET TO SCAN NAME 



remote listener name = grac4-scan.grid4.example.com 

scan name =  grac4-scan.grid4.example.com

Status on grac42:
PASS => Remote listener is set to SCAN name


DATA FROM GRAC42 - GRAC4 DATABASE - REMOTE LISTENER SET TO SCAN NAME 



remote listener name = grac4-scan.grid4.example.com 

scan name =  grac4-scan.grid4.example.com

Status on grac43:
PASS => Remote listener is set to SCAN name


DATA FROM GRAC43 - GRAC4 DATABASE - REMOTE LISTENER SET TO SCAN NAME 



remote listener name = grac4-scan.grid4.example.com 

scan name =  grac4-scan.grid4.example.com
Top

tnsping to remote listener parameter

Recommendation
 If the value of the remote_listener parameter is set to a non-pingable TNS address, instances will not be cross-registered and the load will not be balanced across the cluster. In case of a node or instance failure, connections may not fail over to a surviving node. See the links for more information about remote_listener, load balancing, and failover.

 
Links
Needs attention on -
Passed on grac41, grac42, grac43

Status on grac41:
PASS => Value of remote_listener parameter is able to tnsping


DATA FROM GRAC41 - GRAC4 DATABASE - TNSPING TO REMOTE LISTENER PARAMETER 




TNS Ping Utility for Linux: Version 11.2.0.4.0 - Production on 22-FEB-2014 09:57:38

Copyright (c) 1997, 2013, Oracle.  All rights reserved.

Used parameter files:

Used HOSTNAME adapter to resolve the alias
Attempting to contact (DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=))(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.1.170)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.1.165)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.1.168)(PORT=1521)))
OK (0 msec)

Status on grac42:
PASS => Value of remote_listener parameter is able to tnsping


DATA FROM GRAC42 - GRAC4 DATABASE - TNSPING TO REMOTE LISTENER PARAMETER 




TNS Ping Utility for Linux: Version 11.2.0.4.0 - Production on 22-FEB-2014 10:08:24

Copyright (c) 1997, 2013, Oracle.  All rights reserved.

Used parameter files:

Used HOSTNAME adapter to resolve the alias
Attempting to contact (DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=))(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.1.170)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.1.165)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.1.168)(PORT=1521)))
OK (0 msec)

Status on grac43:
PASS => Value of remote_listener parameter is able to tnsping


DATA FROM GRAC43 - GRAC4 DATABASE - TNSPING TO REMOTE LISTENER PARAMETER 




TNS Ping Utility for Linux: Version 11.2.0.4.0 - Production on 22-FEB-2014 10:26:47

Copyright (c) 1997, 2013, Oracle.  All rights reserved.

Used parameter files:

Used HOSTNAME adapter to resolve the alias
Attempting to contact (DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=))(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.1.165)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.1.170)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.1.168)(PORT=1521)))
OK (0 msec)
Top

tnsname alias defined as scanname:port

Recommendation
 Benefit / Impact:

No local tnsnames alias in $ORACLE_HOME/network/admin/tnsnames.ora should be defined with the same name as scan name:port.

Risk:

An alias named scan name:port can disturb instance registration with the listener services, so failover and load balancing may not work as expected.

Action / Repair:

Rename any scan name:port tnsalias to a different name.
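
A minimal tnsnames.ora sketch of the repair; the alias name below is purely illustrative — the connect data is kept, only the alias is renamed so it no longer matches the SCAN name:

```
# after the rename: alias no longer matches scanname:port; connect data unchanged
GRAC4_SCAN_SVC =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = grac4-scan.grid4.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = grac4))
  )
```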
 
Needs attention on -
Passed on grac41, grac42, grac43

Status on grac41:
PASS => No tnsname alias is defined as scanname:port


DATA FROM GRAC41 - GRAC4 DATABASE - TNSNAME ALIAS DEFINED AS SCANNAME:PORT 



scan name = grac4-scan.grid4.example.com

 /u01/app/oracle/product/11204/racdb/network/admin/tnsnames.ora file is 


GRAC4 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = grac4-scan.grid4.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = grac4)
    )
  )

GRAC41 =
  (DESCRIPTION =
Click for more data

Status on grac42:
PASS => No tnsname alias is defined as scanname:port


DATA FROM GRAC42 - GRAC4 DATABASE - TNSNAME ALIAS DEFINED AS SCANNAME:PORT 



scan name = grac4-scan.grid4.example.com

 /u01/app/oracle/product/11204/racdb/network/admin/tnsnames.ora file is 


GRAC4 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = grac4-scan.grid4.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = grac4)
    )
  )

Status on grac43:
PASS => No tnsname alias is defined as scanname:port


DATA FROM GRAC43 - GRAC4 DATABASE - TNSNAME ALIAS DEFINED AS SCANNAME:PORT 



scan name = grac4-scan.grid4.example.com

 /u01/app/oracle/product/11204/racdb/network/admin/tnsnames.ora file is 


GRAC4 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = grac4-scan.grid4.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = grac4)
    )
  )
Top

ezconnect configuration in sqlnet.ora

Recommendation
 EZCONNECT eliminates the need for service name lookups in tnsnames.ora files when connecting to an Oracle database across a TCP/IP network. In fact, no naming or directory system is required when using this method. It extends the functionality of the host naming method by enabling clients to connect to a database with an optional port and service name in addition to the host name of the database.
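
For example, with EZCONNECT present in NAMES.DIRECTORY_PATH, a client can connect with only host, optional port, and service name and no tnsnames.ora entry; the user below is illustrative, while the SCAN and service names are taken from this report:

```sql
-- EZConnect syntax: [//]host[:port][/service_name]
CONNECT system@"//grac4-scan.grid4.example.com:1521/grac4"
```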
 
Links
Needs attention on -
Passed on grac41, grac42, grac43

Status on grac41:
PASS => ezconnect is configured in sqlnet.ora


DATA FROM GRAC41 - EZCONNECT CONFIGURATION IN SQLNET.ORA 




NAMES.DIRECTORY_PATH= (TNSNAMES, EZCONNECT)

ADR_BASE = /u01/app/grid


Status on grac42:
PASS => ezconnect is configured in sqlnet.ora


DATA FROM GRAC42 - EZCONNECT CONFIGURATION IN SQLNET.ORA 




NAMES.DIRECTORY_PATH= (TNSNAMES, EZCONNECT)

ADR_BASE = /u01/app/grid


Status on grac43:
PASS => ezconnect is configured in sqlnet.ora


DATA FROM GRAC43 - EZCONNECT CONFIGURATION IN SQLNET.ORA 




NAMES.DIRECTORY_PATH= (TNSNAMES, EZCONNECT)

ADR_BASE = /u01/app/grid

Top

Check for parameter parallel_execution_message_size

Success Factor: CONFIGURE PARALLEL_EXECUTION_MESSAGE_SIZE FOR BETTER PARALLELISM PERFORMANCE
Recommendation
 Critical

Benefit / Impact: 

Experience and testing have shown that certain database initialization parameters should be set at specific values. These are the best practice values set at deployment time. By setting these database initialization parameters as recommended, known problems may be avoided and performance maximized.
The parameters are common to all database instances. The impact of setting these parameters is minimal.
The performance related settings provide guidance to maintain highest stability without sacrificing performance. Changing the default performance settings can be done after careful performance evaluation and clear understanding of the performance impact.

Risk: 

If the database initialization parameters are not set as recommended, a variety of issues may be encountered, depending upon which initialization parameter is not set as recommended, and the actual set value.

Action / Repair: 

PARALLEL_EXECUTION_MESSAGE_SIZE = 16384 improves Parallel Query performance.
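
This parameter is static, so a hedged sketch of applying the recommendation cluster-wide writes it to the spfile; it takes effect after a (rolling) instance restart:

```sql
-- Static parameter: requires SCOPE=SPFILE and an instance restart
ALTER SYSTEM SET parallel_execution_message_size = 16384 SCOPE=SPFILE SID='*';
```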
 
Links
Needs attention on -
Passed on grac41, grac42, grac43

Status on grac41:
PASS => Database Parameter parallel_execution_message_size is set to the recommended value

grac41.parallel_execution_message_size = 16384                                  

Status on grac42:
PASS => Database Parameter parallel_execution_message_size is set to the recommended value

grac42.parallel_execution_message_size = 16384                                  

Status on grac43:
PASS => Database Parameter parallel_execution_message_size is set to the recommended value

grac43.parallel_execution_message_size = 16384                                  
Top

Hang and Deadlock material

Recommendation
 Ways to troubleshoot database hang and deadlocks:- 

1. V$Wait_Chains - The DB (the dia0 background process) samples local hanganalyze every 3 seconds and global hanganalyze every 10 seconds and stores it in memory.  V$Wait_Chains is an interface to this "hanganalyze cache": at any moment you can query v$wait_chains and see what hanganalyze knows about the current wait chains.  In 11.2, with a live hang this is the first thing you can use to identify the blockers and final blockers. For more information see NOTE:1428210.1 - Troubleshooting Database Contention With V$Wait_Chains.

2. Procwatcher - In v11, this script samples v$wait_chains every 90 seconds and collects interesting info about the processes involved in wait chains (short stacks, current wait, current SQL, recent ASH data, locks held, locks waiting for, latches held, etc.).  This script works in RAC and non-RAC and is a proactive way to trap hang data even if you can't predict when the problem will happen.  Some very large customers are proactively using, or planning to use, this script on hundreds of systems to catch session contention.  For more information see NOTE:459694.1 - Procwatcher: Script to Monitor and Examine Oracle DB and Clusterware Processes and NOTE:1352623.1 - How To Troubleshoot Database Contention With Procwatcher.

3. Hanganalyze Levels - Hanganalyze format and output is completely different starting in version 11.  In general we recommend getting hanganalyze dumps at level 3. Make sure you always get a global hanganalyze in RAC.

4. Systemstate Levels - With a large SGA and a large number of processes, systemstate dumps at level 266 or 267 can dump a HUGE amount of data and take even hours to dump on large systems.  That situation should be avoided.  One lightweight alternative is a systemstate dump at level 258.  This is basically a level 2 systemstate plus short stacks and is much cheaper than level 266 or 267 and level 258 still has the most important info that support engineers typically look at like process info, latch info, wait events, short stacks, and more at a fraction of the cost.

Note that bugs 11800959 and 11827088 have significant impact on systemstate dumps.  If not on 11.2.0.3+ or a version that has both fixes applied, systemstate dumps at levels 10, 11, 266, and 267 can be VERY expensive in RAC.  In versions < 11.2.0.3 without these fixes applied, systemstate dumps at level 258 would typically be advised. 

NOTE:1353073.1 - Exadata Diagnostic Collection Guide: although written for Exadata, many of the concepts for hang detection and analysis are the same for normal RAC systems.

5. Hang Management and LMHB provide good proactive hang-related data.  For Hang Management see NOTE:1270563.1 - Hang Manager 11.2.0.2.
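
A hedged sketch of the v$wait_chains query described in item 1 above (column list abbreviated; run as SYSDBA on a live 11.2 instance):

```sql
-- Final blockers appear at the head of a chain; blocker_sid shows who blocks whom
SELECT chain_id, sid, blocker_sid, wait_event_text, in_wait_secs
FROM   v$wait_chains
ORDER  BY chain_id;
```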
 
Links
Needs attention on grac41
Passed on -
Top

Check for parameter recyclebin

Success Factor: LOGICAL CORRUPTION PREVENTION BEST PRACTICES
Recommendation
 Benefit / Impact: 
  
Experience and testing have shown that certain database initialization parameters should be set at specific values. These are the best practice values set at deployment time. By setting these database initialization parameters as recommended, known problems may be avoided and performance maximized.
The parameters are common to all database instances. The impact of setting these parameters is minimal. The performance related settings provide guidance to maintain highest stability without sacrificing performance. Changing the default performance settings can be done after careful performance evaluation and clear understanding of the performance impact.
  
Risk: 
  
If the database initialization parameters are not set as recommended, a variety of issues may be encountered, depending upon which initialization  parameter is not set as recommended, and the actual set value. 
  
Action / Repair: 
  
"RECYCLEBIN = ON" provides higher availability by enabling the Flashback Drop  feature. "ON" is the default value and should not be changed. 

 
Needs attention on -
Passed on grac41, grac42, grac43

Status on grac41:
PASS => RECYCLEBIN on PRIMARY is set to the recommended value

grac41.recyclebin = on                                                          

Status on grac42:
PASS => RECYCLEBIN on PRIMARY is set to the recommended value

grac42.recyclebin = on                                                          

Status on grac43:
PASS => RECYCLEBIN on PRIMARY is set to the recommended value

grac43.recyclebin = on                                                          
Top

Check for parameter cursor_sharing

Recommendation
 We recommend that customers discontinue setting cursor_sharing = SIMILAR due to the many problematic situations customers have experienced using it. The ability to set this will be removed in version 12 of the Oracle Database (the settings of EXACT and FORCE will remain available). Instead, we recommend the use of Adaptive Cursor Sharing in 11g.
 
Links
Needs attention on -
Passed on grac41, grac42, grac43

Status on grac41:
PASS => Database parameter CURSOR_SHARING is set to recommended value

grac41.cursor_sharing = EXACT                                                   

Status on grac42:
PASS => Database parameter CURSOR_SHARING is set to recommended value

grac42.cursor_sharing = EXACT                                                   

Status on grac43:
PASS => Database parameter CURSOR_SHARING is set to recommended value

grac43.cursor_sharing = EXACT                                                   
Top

Check for parameter fast_start_mttr_target

Success Factor: COMPUTER FAILURE PREVENTION BEST PRACTICES
Recommendation
 Benefit / Impact:

To optimize run time performance for write/redo generation intensive workloads.  Increasing fast_start_mttr_target from the default will reduce checkpoint writes from DBWR processes, making more room for LGWR IO.

Risk:

There are performance implications if this is set too aggressively (a lower setting is more aggressive), but this is a trade-off between performance and availability.  This trade-off and the type of workload need to be evaluated, and a decision made whether the default is needed to meet RTO objectives.  fast_start_mttr_target should be set to the desired RTO (Recovery Time Objective) while still maintaining performance SLAs, so this needs to be evaluated on a case-by-case basis.

Action / Repair:

Consider increasing fast_start_mttr_target to 300 (five minutes) from the default. The trade-off is that instance recovery will run longer, so if instance recovery is more important than performance, then keep fast_start_mttr_target at the default.

Keep in mind that an application with inadequately sized redo logs will likely not see an effect from this change due to frequent log switches, so follow best practices for sizing redo logs.

Considerations for direct writes in a data warehouse type of application: even though direct operations aren't using the buffer cache, fast_start_mttr_target is very effective at controlling crash recovery time because it ensures adequate checkpointing for the few buffers that are resident (e.g., undo segment headers).
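
fast_start_mttr_target is dynamic, so a hedged sketch of the recommended change (evaluate against your RTO and workload first):

```sql
-- Dynamic parameter: takes effect immediately on all instances
ALTER SYSTEM SET fast_start_mttr_target = 300 SCOPE=BOTH SID='*';
```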
 
Needs attention on grac41, grac42, grac43
Passed on -

Status on grac41:
WARNING => fast_start_mttr_target should be greater than or equal to 300.

grac41.fast_start_mttr_target = 0                                               

Status on grac42:
WARNING => fast_start_mttr_target should be greater than or equal to 300.

grac42.fast_start_mttr_target = 0                                               

Status on grac43:
WARNING => fast_start_mttr_target should be greater than or equal to 300.

grac43.fast_start_mttr_target = 0                                               
Top

Check for parameter undo_retention

Success Factor: LOGICAL CORRUPTION PREVENTION BEST PRACTICES
Recommendation
 Oracle Flashback Technology enables fast logical failure repair. Oracle recommends that you use automatic undo management with sufficient space to attain your desired undo retention guarantee, enable Oracle Flashback Database, and allocate sufficient space and I/O bandwidth in the fast recovery area.  Application monitoring is required for early detection.  Effective and fast repair comes from leveraging and rehearsing the most common application-specific logical failures and using the different flashback features effectively (e.g. flashback query, flashback version query, flashback transaction query, flashback transaction, flashback drop, flashback table, and flashback database).

Key HA Benefits:

With application monitoring and rehearsed repair actions with flashback technologies, application downtime can be reduced from hours or days to the time it takes to detect the logical inconsistency.

Fast repair for logical failures caused by malicious or accidental DML or DDL operations.

Effect fast point-in-time repair at the appropriate level of granularity: transaction, table, or database.
 
Questions:

Can your application or monitoring infrastructure detect logical inconsistencies?

Is your operations team prepared to use various flashback technologies to repair quickly and efficiently?

Are security practices enforced to prevent unauthorized privileges that can result in logical inconsistencies?
 
Needs attention on -
Passed on grac41, grac42, grac43

Status on grac41:
PASS => Database parameter UNDO_RETENTION on PRIMARY is not null

grac41.undo_retention = 900                                                     

Status on grac42:
PASS => Database parameter UNDO_RETENTION on PRIMARY is not null

grac42.undo_retention = 900                                                     

Status on grac43:
PASS => Database parameter UNDO_RETENTION on PRIMARY is not null

grac43.undo_retention = 900                                                     
Top

Verify all "BIGFILE" tablespaces have non-default "MAXBYTES" values set

Recommendation
 Benefit / Impact:

"MAXBYTES" is the SQL attribute that expresses the "MAXSIZE" value that is used in the DDL command to set "AUTOEXTEND" to "ON". By default, for a bigfile tablespace, the value is "3.5184E+13", or "35184372064256". The benefit of having "MAXBYTES" set at a non-default value for "BIGFILE" tablespaces is that a runaway operation or heavy simultaneous use (e.g., temp tablespace) cannot take up all the space in a diskgroup.

The impact of verifying that "MAXBYTES" is set to a non-default value is minimal. The impact of setting the "MAXSIZE" attribute to a non-default value varies depending upon whether it is done during database creation, file addition to a tablespace, or added to an existing file.

Risk:

The risk of running out of space in a diskgroup varies by application and cannot be quantified here. A diskgroup running out of space may impact the entire database as well as ASM operations (e.g., rebalance operations).

Action / Repair:

To obtain a list of file numbers and bigfile tablespaces that have the "MAXBYTES" attribute at the default value, enter the following sqlplus command logged into the database as sysdba:
select file_id, a.tablespace_name, autoextensible, maxbytes
from (select file_id, tablespace_name, autoextensible, maxbytes from dba_data_files where autoextensible='YES' and maxbytes = 35184372064256) a, (select tablespace_name from dba_tablespaces where bigfile='YES') b
where a.tablespace_name = b.tablespace_name
union
select file_id,a.tablespace_name, autoextensible, maxbytes
from (select file_id, tablespace_name, autoextensible, maxbytes from dba_temp_files where autoextensible='YES' and maxbytes = 35184372064256) a, (select tablespace_name from dba_tablespaces where bigfile='YES') b
where a.tablespace_name = b.tablespace_name;

The output should be: no rows returned

If you see output similar to:

   FILE_ID TABLESPACE_NAME                AUT   MAXBYTES
---------- ------------------------------ --- ----------
         1 TEMP                           YES 3.5184E+13
         3 UNDOTBS1                       YES 3.5184E+13
         4 UNDOTBS2                       YES 3.5184E+13

Investigate and correct the condition.
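
A hedged repair sketch; the file numbers and MAXSIZE limits below are placeholders that must be sized for your diskgroups:

```sql
-- Cap autoextension for a bigfile datafile and tempfile (placeholder values)
ALTER DATABASE DATAFILE 3 AUTOEXTEND ON MAXSIZE 100G;
ALTER DATABASE TEMPFILE 1 AUTOEXTEND ON MAXSIZE 32G;
```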
 
Needs attention on -
Passed on grac4

Status on grac4:
PASS => All bigfile tablespaces have non-default maxbytes values set


DATA FOR GRAC4 FOR VERIFY ALL "BIGFILE" TABLESPACES HAVE NON-DEFAULT "MAXBYTES" VALUES SET 




Query returned no rows which is expected when the SQL check passes.

Top

Clusterware status

Success Factor: CLIENT FAILOVER OPERATIONAL BEST PRACTICES
Recommendation
 Oracle clusterware is required for complete client failover integration.  Please consult the following whitepaper for further information.
 
Links
Needs attention on -
Passed on grac41, grac42, grac43

Status on grac41:
PASS => Clusterware is running


DATA FROM GRAC41 - CLUSTERWARE STATUS 



--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       grac41                                       
               ONLINE  ONLINE       grac42                                       
               ONLINE  ONLINE       grac43                                       
ora.FRA.dg
               ONLINE  ONLINE       grac41                                       
               ONLINE  ONLINE       grac42                                       
               ONLINE  ONLINE       grac43                                       
ora.LISTENER.lsnr
               ONLINE  ONLINE       grac41                                       
               ONLINE  ONLINE       grac42                                       
Click for more data

Status on grac42:
PASS => Clusterware is running


DATA FROM GRAC42 - CLUSTERWARE STATUS 



--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       grac41                                       
               ONLINE  ONLINE       grac42                                       
               ONLINE  ONLINE       grac43                                       
ora.FRA.dg
               ONLINE  ONLINE       grac41                                       
               ONLINE  ONLINE       grac42                                       
               ONLINE  ONLINE       grac43                                       
ora.LISTENER.lsnr
               ONLINE  ONLINE       grac41                                       
               ONLINE  ONLINE       grac42                                       
Click for more data

Status on grac43:
PASS => Clusterware is running


DATA FROM GRAC43 - CLUSTERWARE STATUS 



--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       grac41                                       
               ONLINE  ONLINE       grac42                                       
               ONLINE  ONLINE       grac43                                       
ora.FRA.dg
               ONLINE  ONLINE       grac41                                       
               ONLINE  ONLINE       grac42                                       
               ONLINE  ONLINE       grac43                                       
ora.LISTENER.lsnr
               ONLINE  ONLINE       grac41                                       
               ONLINE  ONLINE       grac42                                       
Click for more data
Top

Flashback database on primary

Success Factor: LOGICAL CORRUPTION PREVENTION BEST PRACTICES
Recommendation
 Oracle Flashback Technology enables fast logical failure repair. Oracle recommends that you use automatic undo management with sufficient space to attain your desired undo retention guarantee, enable Oracle Flashback Database, and allocate sufficient space and I/O bandwidth in the fast recovery area.  Application monitoring is required for early detection.  Effective and fast repair comes from leveraging and rehearsing the most common application-specific logical failures and using the different flashback features effectively (e.g. flashback query, flashback version query, flashback transaction query, flashback transaction, flashback drop, flashback table, and flashback database).

Key HA Benefits:

With application monitoring and rehearsed repair actions with flashback technologies, application downtime can be reduced from hours or days to the time it takes to detect the logical inconsistency.

Fast repair for logical failures caused by malicious or accidental DML or DDL operations.

Effect fast point-in-time repair at the appropriate level of granularity: transaction, table, or database.
 
Questions:

Can your application or monitoring infrastructure detect logical inconsistencies?

Is your operations team prepared to use various flashback technologies to repair quickly and efficiently?

Are security practices enforced to prevent unauthorized privileges that can result in logical inconsistencies?
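
To address the FAIL below, a hedged sketch of enabling Flashback Database on this cluster; the FRA size is a placeholder, and the '+FRA' diskgroup is taken from the clusterware resources shown elsewhere in this report. From 11.2 onward the database can remain open:

```sql
-- Size the FRA before setting the destination (placeholder size)
ALTER SYSTEM SET db_recovery_file_dest_size = 100G SCOPE=BOTH SID='*';
ALTER SYSTEM SET db_recovery_file_dest = '+FRA' SCOPE=BOTH SID='*';
ALTER DATABASE FLASHBACK ON;  -- no mount-state restart needed in 11.2+
```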
 
Links
Needs attention on grac4
Passed on -

Status on grac4:
FAIL => Flashback on PRIMARY is not configured


DATA FOR GRAC4 FOR FLASHBACK DATABASE ON PRIMARY 




Flashback status = NO                                                           
Top

Database init parameter DB_BLOCK_CHECKING

Recommendation
 Critical

Benefit / Impact:

Initially db_block_checking is set to off due to potential performance impact. Performance testing is particularly important given that overhead is incurred on every block change. Block checking typically causes 1% to 10% overhead, but for update and insert intensive applications (such as Redo Apply at a standby database) the overhead can be much higher. OLTP compressed tables also require additional checks that can result in higher overhead depending on the frequency of updates to those tables. Workload specific testing is required to assess whether the performance overhead is acceptable.


Risk:

If the database initialization parameters are not set as recommended, a variety of issues may be encountered, depending upon which initialization parameter is not set as recommended, and the actual set value.

Action / Repair:

Based on performance testing results, set the primary or standby database to either MEDIUM or FULL depending on the impact. If performance concerns prevent setting DB_BLOCK_CHECKING to either FULL or MEDIUM at a primary database, then it becomes even more important to enable this at the standby database. This protects the standby database from logical corruption that would go undetected at the primary database.
For higher data corruption detection and prevention, enable this setting, but performance impacts vary per workload, so evaluate the performance impact.
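
db_block_checking is dynamic, so a hedged sketch of enabling it after performance testing:

```sql
-- Takes effect immediately; choose MEDIUM or FULL based on measured overhead
ALTER SYSTEM SET db_block_checking = 'MEDIUM' SCOPE=BOTH SID='*';
```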

 
Links
Needs attention on grac41, grac42, grac43
Passed on -

Status on grac41:
WARNING => Database parameter DB_BLOCK_CHECKING on PRIMARY is NOT set to the recommended value.


DATA FROM GRAC41 - GRAC4 DATABASE - DATABASE INIT PARAMETER DB_BLOCK_CHECKING 



DB_BLOCK_CHECKING = FALSE

Status on grac42:
WARNING => Database parameter DB_BLOCK_CHECKING on PRIMARY is NOT set to the recommended value.


DATA FROM GRAC42 - GRAC4 DATABASE - DATABASE INIT PARAMETER DB_BLOCK_CHECKING 



DB_BLOCK_CHECKING = FALSE

Status on grac43:
WARNING => Database parameter DB_BLOCK_CHECKING on PRIMARY is NOT set to the recommended value.


DATA FROM GRAC43 - GRAC4 DATABASE - DATABASE INIT PARAMETER DB_BLOCK_CHECKING 



DB_BLOCK_CHECKING = FALSE
Top

umask setting for RDBMS owner

Recommendation
 
 
Links
Needs attention on -
Passed on grac41, grac42, grac43

Status on grac41:
PASS => umask for RDBMS owner is set to 0022


DATA FROM GRAC41 - UMASK SETTING FOR RDBMS OWNER 



0022

Status on grac42:
PASS => umask for RDBMS owner is set to 0022


DATA FROM GRAC42 - UMASK SETTING FOR RDBMS OWNER 



0022

Status on grac43:
PASS => umask for RDBMS owner is set to 0022


DATA FROM GRAC43 - UMASK SETTING FOR RDBMS OWNER 



0022
Top

Manage ASM Audit File Directory Growth with cron

Recommendation
 Benefit / Impact:

The audit file destination directories for an ASM instance can grow to contain a very large number of files if they are not regularly maintained. Use the Linux cron(8) utility and the find(1) command to manage the number of files in the audit file destination directories.

The impact of using cron(8) and find(1) to manage the number of files in the audit file destination directories is minimal.

Risk:

Having a very large number of files can cause the file system to run out of free disk space or inodes, or can cause Oracle to run very slowly due to file system directory scaling limits, which can have the appearance that the ASM instance is hanging on startup.

Action / Repair:

Refer to MOS Note 1298957.1. 
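As a sketch of what such a cron job could run (the audit path and 30-day retention below are assumptions; see the MOS note for the supported procedure):

```shell
# Deletes ASM audit files older than a retention period, as a cron job could.
purge_audit_files() {
    # $1 = audit directory, $2 = retention in days
    find "$1" -maxdepth 1 -type f -name '*.aud' -mtime +"$2" -delete
}
# Example crontab entry (daily at 02:30) -- path and retention are illustrative:
#   30 2 * * * find /u01/app/11204/grid/rdbms/audit -name '*.aud' -mtime +30 -delete
```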
 
Links
Needs attention on-
Passed ongrac41, grac42, grac43

Status on grac41:
PASS => ASM Audit file destination file count <= 100,000


DATA FROM GRAC41 - MANAGE ASM AUDIT FILE DIRECTORY GROWTH WITH CRON 



Number of audit files at /u01/app/11204/grid/rdbms/audit = 9309

Status on grac42:
PASS => ASM Audit file destination file count <= 100,000


DATA FROM GRAC42 - MANAGE ASM AUDIT FILE DIRECTORY GROWTH WITH CRON 



Number of audit files at /u01/app/11204/grid/rdbms/audit = 3793

Status on grac43:
PASS => ASM Audit file destination file count <= 100,000


DATA FROM GRAC43 - MANAGE ASM AUDIT FILE DIRECTORY GROWTH WITH CRON 



Number of audit files at /u01/app/11204/grid/rdbms/audit = 3984
Top

GI shell limits hard stack

Recommendation
 The hard stack shell limit for the Oracle Grid Infrastructure software install owner should be >= 10240.

What's being checked here is the /etc/security/limits.conf file as documented in 11gR2 Grid Infrastructure Installation Guide, section 2.15.3 Setting Resource Limits for the Oracle Software Installation Users.  

If the /etc/security/limits.conf file is not configured as described in the documentation, check the hard stack configuration while logged into the software owner account (e.g. grid):

$ ulimit -Hs
10240

As long as the hard stack limit is 10240 or above, the configuration should be OK.
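The check described above can be scripted; this is a minimal sketch of that ulimit comparison, run as the GI software owner:

```shell
# Compares the hard stack ulimit against the >= 10240 KB recommendation.
# Corresponding /etc/security/limits.conf line (illustrative): grid hard stack 10240
check_hard_stack() {
    hs=$(ulimit -Hs)
    if [ "$hs" = "unlimited" ] || [ "$hs" -ge 10240 ]; then
        echo "OK (hard stack = $hs)"
    else
        echo "TOO LOW (hard stack = $hs, want >= 10240)"
    fi
}
```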

 
Links
Needs attention on grac41, grac42, grac43
Passed on -

Status on grac41:
WARNING => Shell limit hard stack for GI is NOT configured according to recommendation


DATA FROM GRAC41 FOR GRID INFRASTRUCTURE USER SHELL LIMITS CONFIGURATION 



Soft limits(ulimit -Sa) 

 


Hard Limits(ulimit -Ha)



Status on grac42:
WARNING => Shell limit hard stack for GI is NOT configured according to recommendation


DATA FROM GRAC42 FOR GRID INFRASTRUCTURE USER SHELL LIMITS CONFIGURATION 



Soft limits(ulimit -Sa) 

 


Hard Limits(ulimit -Ha)



Status on grac43:
WARNING => Shell limit hard stack for GI is NOT configured according to recommendation


DATA FROM GRAC43 FOR GRID INFRASTRUCTURE USER SHELL LIMITS CONFIGURATION 



Soft limits(ulimit -Sa) 

 


Hard Limits(ulimit -Ha)


Top

Check for parameter asm_power_limit

Recommendation
ASM_POWER_LIMIT specifies the maximum power for disk rebalancing on an Automatic Storage Management instance. The higher the limit, the faster rebalancing completes; lower values take longer but consume fewer processing and I/O resources.

Syntax to specify the power limit while adding or dropping a disk: alter diskgroup <diskgroup_name> add disk '/dev/raw/raw37' rebalance power 10;
 
Needs attention on -
Passed on +ASM1, +ASM2, +ASM3

Status on +ASM1:
PASS => asm_power_limit is set to recommended value of 1

+ASM1.asm_power_limit = 1                                                       

Status on +ASM2:
PASS => asm_power_limit is set to recommended value of 1

+ASM2.asm_power_limit = 1                                                       

Status on +ASM3:
PASS => asm_power_limit is set to recommended value of 1

+ASM3.asm_power_limit = 1                                                       
Top

NTP with correct setting

Success Factor: MAKE SURE MACHINE CLOCKS ARE SYNCHRONIZED ON ALL NODES USING NTP
Recommendation
 Make sure machine clocks are synchronized on all nodes to the same NTP source.
Implement NTP (Network Time Protocol) on all nodes.
Prevents evictions and helps to facilitate problem diagnosis.

Also use the -x option (i.e. ntpd -x, xntp -x) if available, to prevent time from moving backwards in large amounts. Slewing spreads a correction across multiple small changes so that they do not impact CRS. Enterprise Linux: see /etc/sysconfig/ntpd; Solaris: set "slewalways yes" and "disable pll" in /etc/inet/ntp.conf. 
For example:
       # Drop root to id 'ntp:ntp' by default.
       OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
       # Set to 'yes' to sync hw clock after successful ntpdate
       SYNC_HWCLOCK=no
       # Additional options for ntpdate
       NTPDATE_OPTIONS=""
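As a quick sketch, presence of the -x option in an OPTIONS line like the one above can be checked mechanically (the file path shown in the usage comment is the Enterprise Linux default; other platforms differ as noted above):

```shell
# Returns success if the ntpd OPTIONS line in the given sysconfig file
# includes the -x slew option.
ntp_slew_configured() {
    # Usage: ntp_slew_configured /etc/sysconfig/ntpd
    grep -Eq '^OPTIONS=.*-x' "$1"
}
```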

Time servers operate in a pyramid structure in which the top of the NTP stack is usually an external time source (such as a GPS clock). Time then trickles down through the network switch stack to the connected servers.  
With this NTP stack acting as the NTP server, and all the RAC nodes acting as slewing clients of it, time changes are kept to a minute amount.

Changes in global time that reconcile atomic-clock accuracy with the Earth's rotational wobble are thus accounted for with minimal effect. This is sometimes referred to as the "leap second" event (for example, one second was inserted between UTC 12/31/2008 23:59:59 and 01/01/2009 00:00:00).

More information can be found in Note 759143.1,
"NTP leap second event causing Oracle Clusterware node reboot",
which is linked to this Success Factor.

 
Links
Needs attention on -
Passed on grac41, grac42, grac43

Status on grac41:
PASS => NTP is running with correct setting


DATA FROM GRAC41 - NTP WITH CORRECT SETTING 



ntp       3513     1  0 Feb20 ?        00:00:13 ntpd -x -u ntp:ntp -p /var/run/ntpd.pid

Status on grac42:
PASS => NTP is running with correct setting


DATA FROM GRAC42 - NTP WITH CORRECT SETTING 



ntp       3545     1  0 Feb18 ?        00:00:29 ntpd -x -u ntp:ntp -p /var/run/ntpd.pid

Status on grac43:
PASS => NTP is running with correct setting


DATA FROM GRAC43 - NTP WITH CORRECT SETTING 



ntp       3466     1  0 Feb18 ?        00:00:31 ntpd -x -u ntp:ntp -p /var/run/ntpd.pid
Top

Jumbo frames configuration for interconnect

Success Factor: USE JUMBO FRAMES IF SUPPORTED AND POSSIBLE IN THE SYSTEM
Recommendation
A performance improvement can be seen with an MTU frame size of approximately 9000. Check with your SA and network admin first and, if possible, configure jumbo frames for the interconnect. Depending upon your network gear, the supported frame sizes may vary between NICs and switches; the highest setting supported by BOTH devices should be considered. Please see the referenced notes below for detail specific to your platform.

To validate whether jumbo frames are configured correctly end to end (i.e. NICs and switches), run the following commands as root. Invoking ping using a specific interface requires root.

Set CRS_HOME to your GI or clusterware home, for example: export CRS_HOME=/u01/app/12.1.0/grid

/bin/ping -s 8192 -c 2 -M do -I `$CRS_HOME/bin/oifcfg getif -type cluster_interconnect|tail -1|awk '{print $1}'` hostname 

Substitute your frame size as required for 8192 in the above command.  The actual frame size varies from one networking vendor to another.

If you get errors similar to the following, then jumbo frames are not configured properly for your frame size:

From 192.168.122.186 icmp_seq=1 Frag needed and DF set (mtu = 1500)
From 192.168.122.186 icmp_seq=1 Frag needed and DF set (mtu = 1500)

--- rws3060018.us.oracle.com ping statistics ---
0 packets transmitted, 0 received, +2 errors


If jumbo frames are configured properly for your frame size, you should obtain output similar to the following:

8192 bytes from hostname (10.208.111.43): icmp_seq=1 ttl=64 time=0.683 ms
8192 bytes from hostname(10.208.111.43): icmp_seq=2 ttl=64 time=0.243 ms

--- hostname ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.243/0.463/0.683/0.220 ms
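Independent of the ping test, the configured MTU of a candidate interface can be read from sysfs; this sketch assumes a Linux system and an interface name such as eth2:

```shell
# Prints the configured MTU of an interface; jumbo frames typically mean 9000.
nic_mtu() {
    # Usage: nic_mtu eth2
    cat "/sys/class/net/$1/mtu"
}
```

Note this only shows the local NIC setting; the ping test above remains necessary to confirm the switches in the path also support the frame size.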
 
Links
Needs attention on grac41, grac42, grac43
Passed on -

Status on grac41:
INFO => Jumbo frames (MTU >= 8192) are not configured for interconnect


DATA FROM GRAC41 - JUMBO FRAMES CONFIGURATION FOR INTERCONNECT 



eth2      Link encap:Ethernet  HWaddr 08:00:27:97:59:C3  
          inet addr:192.168.2.101  Bcast:192.168.2.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe97:59c3/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:9042844 errors:0 dropped:0 overruns:0 frame:0
          TX packets:7092094 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:7273108314 (6.7 GiB)  TX bytes:4193427394 (3.9 GiB)


Status on grac42:
INFO => Jumbo frames (MTU >= 8192) are not configured for interconnect


DATA FROM GRAC42 - JUMBO FRAMES CONFIGURATION FOR INTERCONNECT 



eth2      Link encap:Ethernet  HWaddr 08:00:27:CA:E7:A7  
          inet addr:192.168.2.102  Bcast:192.168.2.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:feca:e7a7/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:15102388 errors:0 dropped:0 overruns:0 frame:0
          TX packets:17808180 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:7513706167 (6.9 GiB)  TX bytes:11192708958 (10.4 GiB)


Status on grac43:
INFO => Jumbo frames (MTU >= 8192) are not configured for interconnect


DATA FROM GRAC43 - JUMBO FRAMES CONFIGURATION FOR INTERCONNECT 



eth2      Link encap:Ethernet  HWaddr 08:00:27:B8:B4:00  
          inet addr:192.168.2.103  Bcast:192.168.2.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:feb8:b400/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:18151378 errors:0 dropped:0 overruns:0 frame:0
          TX packets:20113980 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:9278137421 (8.6 GiB)  TX bytes:12254660753 (11.4 GiB)

Top

CSS reboot time

Success Factor: UNDERSTAND CSS TIMEOUT COMPUTATION IN ORACLE CLUSTERWARE
Recommendation
 Reboottime (default 3 seconds) is the amount of time allowed for a node to complete a reboot after the CSS daemon has been evicted.
 
Links
Needs attention on -
Passed on grac41

Status on grac41:
PASS => CSS reboottime is set to the default value of 3


DATA FROM GRAC41 - CSS REBOOT TIME 



CRS-4678: Successful get reboottime 3 for Cluster Synchronization Services.
Top

CSS disktimeout

Success Factor: UNDERSTAND CSS TIMEOUT COMPUTATION IN ORACLE CLUSTERWARE
Recommendation
 The maximum amount of time allowed for a voting file I/O to complete; if this time is exceeded the voting disk will be marked as offline.  Note that this is also the amount of time that will be required for initial cluster formation, i.e. when no nodes have previously been up and in a cluster.
 
Links
Needs attention on -
Passed on grac41

Status on grac41:
PASS => CSS disktimeout is set to the default value of 200


DATA FROM GRAC41 - CSS DISKTIMEOUT 



CRS-4678: Successful get disktimeout 200 for Cluster Synchronization Services.
Top

ohasd Log File Ownership

Success Factor: VERIFY OWNERSHIP OF IMPORTANT CLUSTERWARE LOG FILES NOT CHANGED INCORRECTLY
Recommendation
Due to Bug 9837321, or if for any other reason the ownership of certain clusterware-related log files is changed incorrectly, important diagnostics may not be available when needed by Support. These logs are rotated periodically to keep them from growing unmanageably large. If the ownership of the files is incorrect when it is time to rotate the logs, that operation can fail; while that does not affect the operation of the clusterware itself, it does affect the logging and therefore problem diagnostics. So it would be wise to verify that the ownership of the following files is root:root:

$ls -l $GRID_HOME/log/`hostname`/crsd/*
$ls -l $GRID_HOME/log/`hostname`/ohasd/*
$ls -l $GRID_HOME/log/`hostname`/agent/crsd/orarootagent_root/*
$ls -l $GRID_HOME/log/`hostname`/agent/ohasd/orarootagent_root/*

If any of those files' ownership is NOT root:root then you should change the ownership of the files individually or as follows (as root):

# chown root:root $GRID_HOME/log/`hostname`/crsd/*
# chown root:root $GRID_HOME/log/`hostname`/ohasd/*
# chown root:root $GRID_HOME/log/`hostname`/agent/crsd/orarootagent_root/*
# chown root:root $GRID_HOME/log/`hostname`/agent/ohasd/orarootagent_root/*
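To find which files would need the chown treatment, a small sketch (directory layout as above; GRID_HOME is assumed to be set in the environment):

```shell
# Lists files under a log directory that are NOT owned by root:root,
# i.e. the candidates for the chown commands shown above.
non_root_owned() {
    # Usage: non_root_owned "$GRID_HOME/log/$(hostname)/ohasd"
    find "$1" -type f ! \( -user root -group root \)
}
```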
 
Links
Needs attention on -
Passed on grac41, grac42, grac43

Status on grac41:
PASS => ohasd Log Ownership is Correct (root root)


DATA FROM GRAC41 - OHASD LOG FILE OWNERSHIP 



total 112908
-rw-r--r--. 1 root root 10534966 Feb 19 04:35 ohasd.l01
-rw-r--r--. 1 root root 10493807 Feb 12 15:58 ohasd.l02
-rw-r--r--. 1 root root 10539954 Feb  8 03:39 ohasd.l03
-rw-r--r--. 1 root root 10523588 Jan  2 10:07 ohasd.l04
-rw-r--r--. 1 root root 10498382 Dec 27 22:16 ohasd.l05
-rw-r--r--. 1 root root 10497844 Dec 25 01:41 ohasd.l06
-rw-r--r--. 1 root root 10578777 Dec  5 06:35 ohasd.l07
-rw-r--r--. 1 root root 10530025 Dec  2 08:55 ohasd.l08
-rw-r--r--. 1 root root 10528072 Nov 29 11:42 ohasd.l09
-rw-r--r--. 1 root root 10509922 Nov 23 17:17 ohasd.l10
-rw-r--r--. 1 root root 10284910 Feb 22 09:56 ohasd.log
-rw-r--r--. 1 root root    27653 Feb 20 14:49 ohasdOUT.log

Status on grac42:
PASS => ohasd Log Ownership is Correct (root root)


DATA FROM GRAC42 - OHASD LOG FILE OWNERSHIP 



total 103032
-rw-r--r--. 1 root root 10578855 Feb 22 09:22 ohasd.l01
-rw-r--r--. 1 root root 10524410 Feb 18 23:05 ohasd.l02
-rw-r--r--. 1 root root 10547008 Feb 12 01:31 ohasd.l03
-rw-r--r--. 1 root root 10536529 Feb  7 20:13 ohasd.l04
-rw-r--r--. 1 root root 10503100 Dec 31 05:34 ohasd.l05
-rw-r--r--. 1 root root 10561483 Dec 27 06:14 ohasd.l06
-rw-r--r--. 1 root root 10511464 Dec 23 19:11 ohasd.l07
-rw-r--r--. 1 root root 10574165 Dec  4 07:13 ohasd.l08
-rw-r--r--. 1 root root 10490115 Dec  1 03:23 ohasd.l09
-rw-r--r--. 1 root root 10486458 Nov 27 12:05 ohasd.l10
-rw-r--r--. 1 root root   113475 Feb 22 10:06 ohasd.log
-rw-r--r--. 1 root root     6937 Feb 16 10:37 ohasdOUT.log

Status on grac43:
PASS => ohasd Log Ownership is Correct (root root)


DATA FROM GRAC43 - OHASD LOG FILE OWNERSHIP 



total 107452
-rw-r--r--. 1 root root 10576044 Feb 21 04:12 ohasd.l01
-rw-r--r--. 1 root root 10515709 Feb 15 18:56 ohasd.l02
-rw-r--r--. 1 root root 10508702 Feb 10 20:40 ohasd.l03
-rw-r--r--. 1 root root 10494491 Feb  4 17:37 ohasd.l04
-rw-r--r--. 1 root root 10488817 Dec 28 11:29 ohasd.l05
-rw-r--r--. 1 root root 10511128 Dec 25 13:40 ohasd.l06
-rw-r--r--. 1 root root 10573644 Dec  5 14:56 ohasd.l07
-rw-r--r--. 1 root root 10569504 Dec  2 12:51 ohasd.l08
-rw-r--r--. 1 root root 10528018 Nov 29 14:24 ohasd.l09
-rw-r--r--. 1 root root 10510620 Nov 24 10:15 ohasd.l10
-rw-r--r--. 1 root root  4670299 Feb 22 10:24 ohasd.log
-rw-r--r--. 1 root root     6516 Feb 16 10:40 ohasdOUT.log
Top

ohasd/orarootagent_root Log File Ownership

Success Factor: VERIFY OWNERSHIP OF IMPORTANT CLUSTERWARE LOG FILES NOT CHANGED INCORRECTLY
Recommendation
Due to Bug 9837321, or if for any other reason the ownership of certain clusterware-related log files is changed incorrectly, important diagnostics may not be available when needed by Support. These logs are rotated periodically to keep them from growing unmanageably large. If the ownership of the files is incorrect when it is time to rotate the logs, that operation can fail; while that does not affect the operation of the clusterware itself, it does affect the logging and therefore problem diagnostics. So it would be wise to verify that the ownership of the following files is root:root:

$ls -l $GRID_HOME/log/`hostname`/crsd/*
$ls -l $GRID_HOME/log/`hostname`/ohasd/*
$ls -l $GRID_HOME/log/`hostname`/agent/crsd/orarootagent_root/*
$ls -l $GRID_HOME/log/`hostname`/agent/ohasd/orarootagent_root/*

If any of those files' ownership is NOT root:root then you should change the ownership of the files individually or as follows (as root):

# chown root:root $GRID_HOME/log/`hostname`/crsd/*
# chown root:root $GRID_HOME/log/`hostname`/ohasd/*
# chown root:root $GRID_HOME/log/`hostname`/agent/crsd/orarootagent_root/*
# chown root:root $GRID_HOME/log/`hostname`/agent/ohasd/orarootagent_root/*
 
Links
  • Oracle Bug # 9837321 - OWNERSHIP OF CRSD TRACES GOT CHANGE FROM ROOT TO ORACLE BY PATCHING SCRIPT
Needs attention on -
Passed on grac41, grac42, grac43

Status on grac41:
PASS => ohasd/orarootagent_root Log Ownership is Correct (root root)


DATA FROM GRAC41 - OHASD/ORAROOTAGENT_ROOT LOG FILE OWNERSHIP 



total 106728
-rw-r--r--. 1 root root 10501763 Feb 21 09:45 orarootagent_root.l01
-rw-r--r--. 1 root root 10495898 Feb 17 09:40 orarootagent_root.l02
-rw-r--r--. 1 root root 10502759 Feb 11 12:00 orarootagent_root.l03
-rw-r--r--. 1 root root 10503576 Feb  7 09:10 orarootagent_root.l04
-rw-r--r--. 1 root root 10505734 Jan  1 08:46 orarootagent_root.l05
-rw-r--r--. 1 root root 10492888 Dec 27 21:01 orarootagent_root.l06
-rw-r--r--. 1 root root 10498214 Dec 25 07:09 orarootagent_root.l07
-rw-r--r--. 1 root root 10543674 Dec  5 17:13 orarootagent_root.l08
-rw-r--r--. 1 root root 10532023 Dec  3 03:34 orarootagent_root.l09
-rw-r--r--. 1 root root 10535872 Nov 30 12:54 orarootagent_root.l10
-rw-r--r--. 1 root root  4101004 Feb 22 09:56 orarootagent_root.log
-rw-r--r--. 1 root root        5 Feb 20 14:50 orarootagent_root.pid
-rw-r--r--. 1 root root        0 Sep 12 12:53 orarootagent_rootOUT.log

Status on grac42:
PASS => ohasd/orarootagent_root Log Ownership is Correct (root root)


DATA FROM GRAC42 - OHASD/ORAROOTAGENT_ROOT LOG FILE OWNERSHIP 



total 108416
-rw-r--r--. 1 root root 10540354 Feb 21 00:35 orarootagent_root.l01
-rw-r--r--. 1 root root 10485842 Feb 16 10:37 orarootagent_root.l02
-rw-r--r--. 1 root root 10535289 Feb 11 13:00 orarootagent_root.l03
-rw-r--r--. 1 root root 10487940 Feb  9 10:52 orarootagent_root.l04
-rw-r--r--. 1 root root 10486310 Feb  2 10:36 orarootagent_root.l05
-rw-r--r--. 1 root root 10511526 Dec 29 07:39 orarootagent_root.l06
-rw-r--r--. 1 root root 10516538 Dec 26 18:03 orarootagent_root.l07
-rw-r--r--. 1 root root 10496715 Dec 23 15:58 orarootagent_root.l08
-rw-r--r--. 1 root root 10523931 Dec  4 11:51 orarootagent_root.l09
-rw-r--r--. 1 root root 10494273 Dec  1 16:40 orarootagent_root.l10
-rw-r--r--. 1 root root  5855426 Feb 22 10:06 orarootagent_root.log
-rw-r--r--. 1 root root        5 Feb 16 10:37 orarootagent_root.pid
-rw-r--r--. 1 root root       32 Feb  8 08:23 orarootagent_rootOUT.log

Status on grac43:
PASS => ohasd/orarootagent_root Log Ownership is Correct (root root)


DATA FROM GRAC43 - OHASD/ORAROOTAGENT_ROOT LOG FILE OWNERSHIP 



total 107072
-rw-r--r--. 1 root root 10541021 Feb 21 07:54 orarootagent_root.l01
-rw-r--r--. 1 root root 10491824 Feb 16 17:24 orarootagent_root.l02
-rw-r--r--. 1 root root 10509819 Feb 11 07:26 orarootagent_root.l03
-rw-r--r--. 1 root root 10497883 Feb  7 04:50 orarootagent_root.l04
-rw-r--r--. 1 root root 10487866 Dec 30 20:17 orarootagent_root.l05
-rw-r--r--. 1 root root 10520770 Dec 27 01:16 orarootagent_root.l06
-rw-r--r--. 1 root root 10500280 Dec 23 21:58 orarootagent_root.l07
-rw-r--r--. 1 root root 10523099 Dec  4 18:38 orarootagent_root.l08
-rw-r--r--. 1 root root 10524282 Dec  2 02:11 orarootagent_root.l09
-rw-r--r--. 1 root root 10506884 Nov 29 13:42 orarootagent_root.l10
-rw-r--r--. 1 root root  4468312 Feb 22 10:24 orarootagent_root.log
-rw-r--r--. 1 root root        5 Feb 16 10:41 orarootagent_root.pid
-rw-r--r--. 1 root root        0 Sep 15 17:13 orarootagent_rootOUT.log
Top

crsd/orarootagent_root Log File Ownership

Success Factor: VERIFY OWNERSHIP OF IMPORTANT CLUSTERWARE LOG FILES NOT CHANGED INCORRECTLY
Recommendation
Due to Bug 9837321, or if for any other reason the ownership of certain clusterware-related log files is changed incorrectly, important diagnostics may not be available when needed by Support. These logs are rotated periodically to keep them from growing unmanageably large. If the ownership of the files is incorrect when it is time to rotate the logs, that operation can fail; while that does not affect the operation of the clusterware itself, it does affect the logging and therefore problem diagnostics. So it would be wise to verify that the ownership of the following files is root:root:

$ls -l $GRID_HOME/log/`hostname`/crsd/*
$ls -l $GRID_HOME/log/`hostname`/ohasd/*
$ls -l $GRID_HOME/log/`hostname`/agent/crsd/orarootagent_root/*
$ls -l $GRID_HOME/log/`hostname`/agent/ohasd/orarootagent_root/*

If any of those files' ownership is NOT root:root then you should change the ownership of the files individually or as follows (as root):

# chown root:root $GRID_HOME/log/`hostname`/crsd/*
# chown root:root $GRID_HOME/log/`hostname`/ohasd/*
# chown root:root $GRID_HOME/log/`hostname`/agent/crsd/orarootagent_root/*
# chown root:root $GRID_HOME/log/`hostname`/agent/ohasd/orarootagent_root/*
 
Links
Needs attention on -
Passed on grac41, grac42, grac43

Status on grac41:
PASS => crsd/orarootagent_root Log Ownership is Correct (root root)


DATA FROM GRAC41 - CRSD/ORAROOTAGENT_ROOT LOG FILE OWNERSHIP 



total 107804
-rw-r--r--. 1 root root 10580837 Feb 22 04:48 orarootagent_root.l01
-rw-r--r--. 1 root root 10580887 Feb 21 17:06 orarootagent_root.l02
-rw-r--r--. 1 root root 10580417 Feb 21 05:25 orarootagent_root.l03
-rw-r--r--. 1 root root 10508740 Feb 20 17:46 orarootagent_root.l04
-rw-r--r--. 1 root root 10581192 Feb 19 16:41 orarootagent_root.l05
-rw-r--r--. 1 root root 10581719 Feb 19 03:22 orarootagent_root.l06
-rw-r--r--. 1 root root 10580622 Feb 18 15:26 orarootagent_root.l07
-rw-r--r--. 1 root root 10552668 Feb 16 20:22 orarootagent_root.l08
-rw-r--r--. 1 root root 10583365 Feb 15 16:03 orarootagent_root.l09
-rw-r--r--. 1 root root 10534969 Feb 13 18:49 orarootagent_root.l10
-rw-r--r--. 1 root root  4648817 Feb 22 09:56 orarootagent_root.log
-rw-r--r--. 1 root root        5 Feb 20 14:54 orarootagent_root.pid
-rw-r--r--. 1 root root        0 Sep 12 16:27 orarootagent_rootOUT.log

Status on grac42:
PASS => crsd/orarootagent_root Log Ownership is Correct (root root)


DATA FROM GRAC42 - CRSD/ORAROOTAGENT_ROOT LOG FILE OWNERSHIP 



total 110988
-rw-r--r--. 1 root root 10582964 Feb 22 01:05 orarootagent_root.l01
-rw-r--r--. 1 root root 10585660 Feb 21 13:02 orarootagent_root.l02
-rw-r--r--. 1 root root 10584960 Feb 21 00:58 orarootagent_root.l03
-rw-r--r--. 1 root root 10585415 Feb 20 12:28 orarootagent_root.l04
-rw-r--r--. 1 root root 10586490 Feb 19 10:03 orarootagent_root.l05
-rw-r--r--. 1 root root 10587565 Feb 18 21:48 orarootagent_root.l06
-rw-r--r--. 1 root root 10584781 Feb 18 09:32 orarootagent_root.l07
-rw-r--r--. 1 root root 10497873 Feb 16 12:08 orarootagent_root.l08
-rw-r--r--. 1 root root 10580379 Feb 15 08:37 orarootagent_root.l09
-rw-r--r--. 1 root root 10486582 Feb 13 11:21 orarootagent_root.l10
-rw-r--r--. 1 root root  7913714 Feb 22 10:06 orarootagent_root.log
-rw-r--r--. 1 root root        5 Feb 16 10:39 orarootagent_root.pid
-rw-r--r--. 1 root root        0 Sep 12 16:29 orarootagent_rootOUT.log

Status on grac43:
PASS => crsd/orarootagent_root Log Ownership is Correct (root root)


DATA FROM GRAC43 - CRSD/ORAROOTAGENT_ROOT LOG FILE OWNERSHIP 



total 104660
-rw-r--r--. 1 root root 10585366 Feb 22 08:53 orarootagent_root.l01
-rw-r--r--. 1 root root 10586693 Feb 21 20:38 orarootagent_root.l02
-rw-r--r--. 1 root root 10584319 Feb 21 08:24 orarootagent_root.l03
-rw-r--r--. 1 root root 10585201 Feb 20 20:11 orarootagent_root.l04
-rw-r--r--. 1 root root 10586449 Feb 19 18:38 orarootagent_root.l05
-rw-r--r--. 1 root root 10587067 Feb 19 04:59 orarootagent_root.l06
-rw-r--r--. 1 root root 10585590 Feb 18 16:44 orarootagent_root.l07
-rw-r--r--. 1 root root 10554512 Feb 16 21:20 orarootagent_root.l08
-rw-r--r--. 1 root root 10585414 Feb 15 16:57 orarootagent_root.l09
-rw-r--r--. 1 root root 10537109 Feb 13 19:42 orarootagent_root.l10
-rw-r--r--. 1 root root  1321660 Feb 22 10:24 orarootagent_root.log
-rw-r--r--. 1 root root        5 Feb 16 10:42 orarootagent_root.pid
-rw-r--r--. 1 root root        0 Sep 16 05:26 orarootagent_rootOUT.log
Top

crsd Log File Ownership

Success Factor: VERIFY OWNERSHIP OF IMPORTANT CLUSTERWARE LOG FILES NOT CHANGED INCORRECTLY
Recommendation
The CRSD trace files should be owned by "root:root", but due to Bug 9837321 the application of a patch may have changed the trace file ownership for patching without changing it back.
 
Links
  • Oracle Bug # 9837321 - 10g & 11g :Configuration of TAF(Transparent Application Failover) and Load Balancing - OWNERSHIP OF CRSD TRACES GOT CHANGE FROM ROOT TO ORACLE BY PATCHING SCRIPT
Needs attention on -
Passed on grac41, grac42, grac43

Status on grac41:
PASS => crsd Log Ownership is Correct (root root)


DATA FROM GRAC41 - CRSD LOG FILE OWNERSHIP 



total 104072
-rw-r--r--. 1 root root 10531007 Feb 22 04:23 crsd.l01
-rw-r--r--. 1 root root 10571736 Feb 19 09:08 crsd.l02
-rw-r--r--. 1 root root 10515983 Feb 17 09:26 crsd.l03
-rw-r--r--. 1 root root 10518474 Feb 14 11:01 crsd.l04
-rw-r--r--. 1 root root 10493807 Feb 10 09:20 crsd.l05
-rw-r--r--. 1 root root 10488775 Feb  2 09:22 crsd.l06
-rw-r--r--. 1 root root 10556675 Dec 28 18:35 crsd.l07
-rw-r--r--. 1 root root 10566768 Dec 27 08:10 crsd.l08
-rw-r--r--. 1 root root 10550407 Dec 24 00:33 crsd.l09
-rw-r--r--. 1 root root 10501445 Dec  5 13:22 crsd.l10
-rw-r--r--. 1 root root  1204952 Feb 22 09:56 crsd.log
-rw-r--r--. 1 root root     6490 Feb 20 14:51 crsdOUT.log

Status on grac42:
PASS => crsd Log Ownership is Correct (root root)


DATA FROM GRAC42 - CRSD LOG FILE OWNERSHIP 



total 108588
-rw-r--r--. 1 root root 10572300 Feb 21 17:25 crsd.l01
-rw-r--r--. 1 root root 10525094 Feb 19 07:45 crsd.l02
-rw-r--r--. 1 root root 10568919 Feb 12 02:00 crsd.l03
-rw-r--r--. 1 root root 10532915 Feb 10 20:26 crsd.l04
-rw-r--r--. 1 root root 10565201 Feb  7 20:56 crsd.l05
-rw-r--r--. 1 root root 10563285 Jan  3 17:04 crsd.l06
-rw-r--r--. 1 root root 10493609 Dec 30 19:27 crsd.l07
-rw-r--r--. 1 root root 10562954 Dec 26 20:21 crsd.l08
-rw-r--r--. 1 root root 10524917 Dec 25 09:56 crsd.l09
-rw-r--r--. 1 root root 10566257 Dec  6 05:32 crsd.l10
-rw-r--r--. 1 root root  5646504 Feb 22 10:06 crsd.log
-rw-r--r--. 1 root root     7172 Feb 16 10:38 crsdOUT.log

Status on grac43:
PASS => crsd Log Ownership is Correct (root root)


DATA FROM GRAC43 - CRSD LOG FILE OWNERSHIP 



total 110824
-rw-r--r--. 1 root root 10515239 Feb 18 22:40 crsd.l01
-rw-r--r--. 1 root root 10487125 Feb 12 00:46 crsd.l02
-rw-r--r--. 1 root root 10501320 Feb  6 22:37 crsd.l03
-rw-r--r--. 1 root root 10492751 Dec 27 19:58 crsd.l04
-rw-r--r--. 1 root root 10504211 Dec 23 15:06 crsd.l05
-rw-r--r--. 1 root root 10503376 Dec  3 05:36 crsd.l06
-rw-r--r--. 1 root root 10562384 Dec  1 07:55 crsd.l07
-rw-r--r--. 1 root root 10563676 Nov 29 20:46 crsd.l08
-rw-r--r--. 1 root root 10497446 Nov 27 18:09 crsd.l09
-rw-r--r--. 1 root root 10511633 Nov 18 16:57 crsd.l10
-rw-r--r--. 1 root root  8269487 Feb 22 10:24 crsd.log
-rw-r--r--. 1 root root     5393 Feb 16 10:42 crsdOUT.log
Top

VIP NIC bonding config.

Success Factor: CONFIGURE NIC BONDING FOR 10G VIP (LINUX)
Recommendation
To avoid a single point of failure for VIPs, Oracle highly recommends configuring a redundant network for VIPs using NIC bonding. See the note below for more information on how to configure bonding on Linux.
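A minimal sketch of a Linux bonding setup for the public network; the device names, IP address, and bonding mode below are assumptions for illustration, and the referenced note should be followed for the supported procedure:

```
# /etc/sysconfig/network-scripts/ifcfg-bond0 (illustrative)
DEVICE=bond0
IPADDR=192.168.1.101
NETMASK=255.255.255.0
ONBOOT=yes
BONDING_OPTS="mode=active-backup miimon=100"

# /etc/sysconfig/network-scripts/ifcfg-eth1 (slave interface; illustrative)
DEVICE=eth1
MASTER=bond0
SLAVE=yes
ONBOOT=yes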
 
Links
Needs attention on grac41, grac42, grac43
Passed on -

Status on grac41:
WARNING => NIC bonding is NOT configured for public network (VIP)


DATA FROM GRAC41 - VIP NIC BONDING CONFIG. 



eth1      Link encap:Ethernet  HWaddr 08:00:27:1E:7D:B0  
          inet addr:192.168.1.101  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe1e:7db0/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:578751 errors:0 dropped:0 overruns:0 frame:0
          TX packets:617182 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:101445936 (96.7 MiB)  TX bytes:158762988 (151.4 MiB)


Status on grac42:
WARNING => NIC bonding is NOT configured for public network (VIP)


DATA FROM GRAC42 - VIP NIC BONDING CONFIG. 



eth1      Link encap:Ethernet  HWaddr 08:00:27:15:73:CD  
          inet addr:192.168.1.102  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe15:73cd/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:937102 errors:0 dropped:0 overruns:0 frame:0
          TX packets:865307 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:218782625 (208.6 MiB)  TX bytes:122338424 (116.6 MiB)


Status on grac43:
WARNING => NIC bonding is NOT configured for public network (VIP)


DATA FROM GRAC43 - VIP NIC BONDING CONFIG. 



eth1      Link encap:Ethernet  HWaddr 08:00:27:94:AA:5E  
          inet addr:192.168.1.103  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe94:aa5e/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:696964 errors:0 dropped:0 overruns:0 frame:0
          TX packets:659950 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:152668965 (145.5 MiB)  TX bytes:91081587 (86.8 MiB)

Top

Interconnect NIC bonding config.

Success Factor: CONFIGURE NIC BONDING FOR 10G VIP (LINUX)
Recommendation
To avoid a single point of failure for the interconnect, Oracle highly recommends configuring a redundant network for the interconnect using NIC bonding. See the note below for more information on how to configure bonding on Linux.

NOTE: If you are on 11.2.0.2 or above and HAIP is in use with two or more interfaces, this finding can be ignored.
 
Links
Needs attention on grac41, grac42, grac43
Passed on -

Status on grac41:
WARNING => NIC bonding is not configured for interconnect


DATA FROM GRAC41 - INTERCONNECT NIC BONDING CONFIG. 



eth2  192.168.2.0  global  cluster_interconnect

Status on grac42:
WARNING => NIC bonding is not configured for interconnect


DATA FROM GRAC42 - INTERCONNECT NIC BONDING CONFIG. 



eth2  192.168.2.0  global  cluster_interconnect

Status on grac43:
WARNING => NIC bonding is not configured for interconnect


DATA FROM GRAC43 - INTERCONNECT NIC BONDING CONFIG. 



eth2  192.168.2.0  global  cluster_interconnect
Top

Verify operating system hugepages count satisfies total SGA requirements

Recommendation
 Benefit / Impact:

Properly configuring operating system hugepages on Linux and setting the database initialization parameter "use_large_pages" to "only" results in more efficient use of memory and reduced paging.
The impact of validating that the total current hugepages are greater than or equal to estimated requirements for all currently active SGAs is minimal. The impact of corrective actions will vary depending on the specific configuration, and may require a reboot of the database server.

Risk:

The risk of not correctly configuring operating system hugepages in advance of setting the database initialization parameter "use_large_pages" to "only" is that if not enough huge pages are configured, some databases will not start after you have set the parameter.

Action / Repair:

Pre-requisite: All database instances that are supposed to run concurrently on a database server must be up and running for this check to be accurate.

NOTE: Please refer to below referenced My Oracle Support notes for additional details on configuring hugepages.

NOTE: If you have not reviewed the below referenced My Oracle Support notes and followed their guidance BEFORE using the database parameter "use_large_pages=only", this check will pass the environment, but you will still not be able to start instances once the configured pool of operating system hugepages has been consumed by instance startups. If that happens, you will need to change the "use_large_pages" initialization parameter to one of the other values, restart the instance, and follow the instructions in the below referenced My Oracle Support notes. The brute-force alternative is to increase the hugepage count until the newest instance starts, and then adjust the hugepage count once you can see the estimated requirements for all currently active SGAs.
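The pass/fail arithmetic behind this check can be sketched in shell. The SGA total below is a hypothetical figure chosen so the result matches the 644-page estimate reported in the data sections that follow; on a real system, derive the SGA requirements per the referenced My Oracle Support notes.

```shell
# Sketch only: example values, not read from a live system.
sga_total_kb=1318912   # hypothetical total of all active SGAs, in KB
hugepage_kb=2048       # Hugepagesize reported in /proc/meminfo
# Round up so the hugepage pool fully covers the SGA total
pages=$(( (sga_total_kb + hugepage_kb - 1) / hugepage_kb ))
echo "HugePages needed: $pages"
```

With these example numbers the requirement works out to 644 pages of 2MB each.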
 
Links
Needs attention ongrac41, grac42, grac43
Passed on-

Status on grac41:
FAIL => Operating system hugepages count does not satisfy total SGA requirements


DATA FROM GRAC41 - VERIFY OPERATING SYSTEM HUGEPAGES COUNT SATISFIES TOTAL SGA REQUIREMENTS 




Total current hugepages (0) should be greater than or equal to
estimated requirements for all currently active SGAs (644).


Status on grac42:
FAIL => Operating system hugepages count does not satisfy total SGA requirements


DATA FROM GRAC42 - VERIFY OPERATING SYSTEM HUGEPAGES COUNT SATISFIES TOTAL SGA REQUIREMENTS 




Total current hugepages (0) should be greater than or equal to
estimated requirements for all currently active SGAs (644).


Status on grac43:
FAIL => Operating system hugepages count does not satisfy total SGA requirements


DATA FROM GRAC43 - VERIFY OPERATING SYSTEM HUGEPAGES COUNT SATISFIES TOTAL SGA REQUIREMENTS 




Total current hugepages (0) should be greater than or equal to
estimated requirements for all currently active SGAs (644).

Top

Check for parameter memory_target

Recommendation
 It is recommended to use hugepages for efficient use of memory and reduced paging. Hugepages cannot be configured if the database is using Automatic Memory Management (AMM). To benefit from hugepages, it is recommended to disable AMM by unsetting the following init parameters:
MEMORY_TARGET
MEMORY_MAX_TARGET
 
Needs attention on-
Passed ongrac41, grac42, grac43

Status on grac41:
PASS => Database Parameter memory_target is set to the recommended value

grac41.memory_target = 0                                                        

Status on grac42:
PASS => Database Parameter memory_target is set to the recommended value

grac42.memory_target = 0                                                        

Status on grac43:
PASS => Database Parameter memory_target is set to the recommended value

grac43.memory_target = 0                                                        
Top

Linux Hugepages Configuration

Recommendation
 It is beneficial and considered a best practice to use the Hugepages feature of Linux for databases.  The use of AMM is not compatible with Hugepages, so if any database is using AMM then the Hugepages feature will not be used by Oracle for that database.

The entire SGAs of ALL the databases not using AMM should fit into the memory allocated for Hugepages.  Using Hugepages results in an overall more stable system and more efficient memory management for Oracle.  Hugepages at 2MB consume far less kernel memory, so the in-kernel lists that track pages are shorter.  Hugepages also cannot be swapped out, so under memory pressure the kernel does not spend CPU cycles managing those pages.  As a result, using hugepages for the database means far less contention and overhead, both in CPU time spent in the kernel walking page lists and in allocated kernel structures.

To check the configuration, run the following command:

$ cat /proc/meminfo |grep Huge
HugePages_Total:     0
HugePages_Free:      0
HugePages_Rsvd:      0
Hugepagesize:     2048 kB

The product of HugePages_Total * Hugepagesize should be larger than the sum of the SGAs of all the databases which are not using AMM.   The size of a database SGA can be derived using the following query:

select sum(value)/1024+128 from v$sga;

In this case 128k was added to the SGA size for a little extra headroom within the Hugepages memory allocation.
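The comparison described above can be sketched as follows, using the sample /proc/meminfo values shown earlier and the SGA sum reported for grac41 in the data section below (on a live system, take the SGA sum from the v$sga query):

```shell
# Sketch: compare the hugepage pool against the SGA total.
hugepages_total=0        # HugePages_Total from /proc/meminfo
hugepagesize_kb=2048     # Hugepagesize from /proc/meminfo
sga_sum_kb=1304988       # v$sga query result (+128k headroom), from grac41 data
pool_kb=$(( hugepages_total * hugepagesize_kb ))
if [ "$pool_kb" -ge "$sga_sum_kb" ]; then
  echo "hugepage pool covers all SGAs"
else
  echo "hugepage pool too small: ${pool_kb}k < ${sga_sum_kb}k"
fi
```

With a zero-page pool, as on these nodes, the check necessarily reports the pool as too small.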


 
Links
Needs attention ongrac41
Passed on-

Status on grac41:
INFO => Hugepages configuration is NOT Correct


DATA FROM GRAC41 - GRAC4 DATABASE - LINUX HUGEPAGES CONFIGURATION 



SGA + 128k =    1304988
Hugepage Size = 2048
Hugepages = 0
Hugepage Pool = 0
Top

CRS and ASM version comparison

Recommendation
 You should always run a CRS version equal to or higher than the ASM version. Running a higher ASM version than CRS is an unsupported configuration and may lead to issues.
 
Needs attention on-
Passed ongrac41, grac42, grac43

Status on grac41:
PASS => CRS version is higher or equal to ASM version.


DATA FROM GRAC41 - CRS AND ASM VERSION COMPARISON 



CRS_ACTIVE_VERSION = 112040 
ASM Version = 112040

Status on grac42:
PASS => CRS version is higher or equal to ASM version.


DATA FROM GRAC42 - CRS AND ASM VERSION COMPARISON 



CRS_ACTIVE_VERSION = 112040 
ASM Version = 112040

Status on grac43:
PASS => CRS version is higher or equal to ASM version.


DATA FROM GRAC43 - CRS AND ASM VERSION COMPARISON 



CRS_ACTIVE_VERSION = 112040 
ASM Version = 112040
Top

Local listener set to node VIP

Recommendation
 The LOCAL_LISTENER parameter should be set to the node VIP. If you need fully qualified domain names, ensure that LOCAL_LISTENER is set to the fully qualified domain name (node-vip.mycompany.com). By default a local listener is created during cluster configuration that runs out of the grid infrastructure home and listens on the specified port(default is 1521) of the node VIP.
 
Links
Needs attention on-
Passed ongrac41, grac42, grac43

Status on grac41:
PASS => Local listener init parameter is set to local node VIP


DATA FROM GRAC41 - GRAC4 DATABASE - LOCAL LISTENER SET TO NODE VIP 



Local Listener= (ADDRESS=(PROTOCOL=TCP)(HOST=192.168.1.160)(PORT=1521)) VIP Names=192.168.1.160 VIP IPs=192.168.1.160

Status on grac42:
PASS => Local listener init parameter is set to local node VIP


DATA FROM GRAC42 - GRAC4 DATABASE - LOCAL LISTENER SET TO NODE VIP 



Local Listener= (ADDRESS=(PROTOCOL=TCP)(HOST=192.168.1.178)(PORT=1521)) VIP Names=192.168.1.178 VIP IPs=192.168.1.178

Status on grac43:
PASS => Local listener init parameter is set to local node VIP


DATA FROM GRAC43 - GRAC4 DATABASE - LOCAL LISTENER SET TO NODE VIP 



Local Listener= (ADDRESS=(PROTOCOL=TCP)(HOST=192.168.1.176)(PORT=1521)) VIP Names=192.168.1.176 VIP IPs=192.168.1.176
Top

Number of SCAN listeners

Recommendation
 Benefit / Impact:

Application scalability and/or availability

Risk:

Potential reduced scalability and/or availability of applications

Action / Repair:

The recommended number of SCAN listeners is 3.  See the referenced document for more details.
 
Links
Needs attention on-
Passed ongrac41, grac42, grac43

Status on grac41:
PASS => Number of SCAN listeners is equal to the recommended number of 3.


DATA FROM GRAC41 - NUMBER OF SCAN LISTENERS 



SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1521
SCAN Listener LISTENER_SCAN2 exists. Port: TCP:1521
SCAN Listener LISTENER_SCAN3 exists. Port: TCP:1521

Status on grac42:
PASS => Number of SCAN listeners is equal to the recommended number of 3.


DATA FROM GRAC42 - NUMBER OF SCAN LISTENERS 



SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1521
SCAN Listener LISTENER_SCAN2 exists. Port: TCP:1521
SCAN Listener LISTENER_SCAN3 exists. Port: TCP:1521

Status on grac43:
PASS => Number of SCAN listeners is equal to the recommended number of 3.


DATA FROM GRAC43 - NUMBER OF SCAN LISTENERS 



SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1521
SCAN Listener LISTENER_SCAN2 exists. Port: TCP:1521
SCAN Listener LISTENER_SCAN3 exists. Port: TCP:1521
Top

Voting disk status

Success FactorUSE EXTERNAL OR ORACLE PROVIDED REDUNDANCY FOR OCR
Recommendation
 Benefit / Impact:

Stability, Availability

Risk:

Cluster instability

Action / Repair:

Voting disks that are not online would indicate a problem with the clusterware
and should be investigated as soon as possible.  All voting disks are expected to be ONLINE.

Use the following command to list the status of the voting disks

$CRS_HOME/bin/crsctl query css votedisk|sed 's/^ //g'|grep ^[0-9]

The output should look similar to the following, with one row per voting disk; all disks should indicate ONLINE:

1. ONLINE   192c8f030e5a4fb3bf77e43ad3b8479a (o/192.168.10.102/DBFS_DG_CD_02_sclcgcel01) [DBFS_DG]
2. ONLINE   2612d8a72d194fa4bf3ddff928351c41 (o/192.168.10.104/DBFS_DG_CD_02_sclcgcel03) [DBFS_DG]
3. ONLINE   1d3cceb9daeb4f0bbf23ee0218209f4c (o/192.168.10.103/DBFS_DG_CD_02_sclcgcel02) [DBFS_DG]
 
Needs attention on-
Passed ongrac41

Status on grac41:
PASS => All voting disks are online


DATA FROM GRAC41 - VOTING DISK STATUS 



##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   b0e94e5d83054fe9bf58b6b98bfacd65 (/dev/asmdisk5_udev_sdf1) [OCR]
 2. ONLINE   88c2a08b4c8c4f85bf0109e0990388e4 (/dev/asmdisk6_udev_sdg1) [OCR]
 3. ONLINE   1108f9a41e814fb2bfed879ff0039dd0 (/dev/asmdisk7_udev_sdh1) [OCR]
Located 3 voting disk(s).
Top

css misscount

Success FactorUNDERSTAND CSS TIMEOUT COMPUTATION IN ORACLE CLUSTERWARE
Recommendation
 The CSS misscount parameter represents the maximum time, in seconds, that a network heartbeat can be missed before entering into a cluster reconfiguration to evict the node
 
Links
Needs attention on-
Passed ongrac41

Status on grac41:
PASS => CSS misscount is set to the default value of 30


DATA FROM GRAC41 - CSS MISSCOUNT 



CRS-4678: Successful get misscount 30 for Cluster Synchronization Services.
Top

Same size of redo log files

Recommendation
 Having redo log files of different sizes can lead to a database hang; it is best practice to keep all redo log files the same size. Run the following query to find the size of each member:
column member format a50
select f.member,l.bytes/1024/1024 as "Size in MB" from v$log l,v$logfile f where l.group#=f.group#;
Resizing redo logs so they are all the same size does not require database downtime.
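A quick uniformity check over the "Size in MB" column of that query can be sketched in shell; the sizes below are hypothetical stand-ins for the query output:

```shell
# Sketch: verify all redo log members report the same size.
sizes="50 50 50 50 50 50"   # hypothetical "Size in MB" values, one per member
distinct=$(printf '%s\n' $sizes | sort -u | wc -l)
if [ "$distinct" -eq 1 ]; then
  echo "all redo log members are the same size"
else
  echo "found $distinct distinct sizes - resize to match"
fi
```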
 
Links
Needs attention on-
Passed ongrac4

Status on grac4:
PASS => All redo log files are of same size


DATA FOR GRAC4 FOR SAME SIZE OF REDO LOG FILES 




         1           .048828125                                                 
         2           .048828125                                                 
         3           .048828125                                                 
         4           .048828125                                                 
         5           .048828125                                                 
         6           .048828125                                                 
Top

SELinux status

Success FactorRPM THROWS ERROR WITH SELINUX ENABLED
Recommendation
 On the RHEL4 U3 x86_64 2.6.9-34.ELsmp kernel, when SELinux is enabled, rpm
installation gives the error:
'scriptlet failed, exit status 255'

The default SELinux settings are used:
# cat /etc/sysconfig/selinux
SELINUX=enforcing
SELINUXTYPE=targeted
e.g. on installing the asm rpms:
# rpm -ivh *.rpm
Preparing...                ###########################################
[100%]
  1:oracleasm-support      ########################################### [33%]
  error: %post(oracleasm-support-2.0.2-1.x86_64) scriptlet failed, exit status 255
  2:oracleasm-2.6.9-34.ELsm########################################### [67%]
  error: %post(oracleasm-2.6.9-34.ELsmp-2.0.2-1.x86_64) scriptlet failed, exit status 255
   3:oracleasmlib           ###########################################  [100%]

However, the asm rpms do get installed:
# rpm -qa | grep asm
oracleasm-support-2.0.2-1
oracleasmlib-2.0.2-1
oracleasm-2.6.9-34.ELsmp-2.0.2-1

There is no error during oracleasm configure or createdisk. Also, oracleasm is able to start on reboot, and the tests done around RAC/ASM seem to be fine.

# rpm -q -a | grep -i selinux
selinux-policy-targeted-1.17.30-2.126
selinux-policy-targeted-sources-1.17.30-2.126
libselinux-1.19.1-7
libselinux-1.19.1-7

Solution
--
If the machine was installed with 'selinux --disabled', it is possible that the SELinux-related pre/post activities were not performed during installation, and as a result the extended attribute does not get set for /bin/*sh.

1. Ensure that the kickstart config file does not have 'selinux --disabled'.
Also, not specifying selinux in the config file will default to 'selinux --enforcing' and the extended attribute will get set for /bin/*sh.
OR
2. If the machine has been installed with 'selinux --disabled', then perform the below step manually:
# setfattr -n security.selinux --value="system_u:object_r:shell_exec_t\000" /bin/sh
 
Links
Needs attention on-
Passed ongrac41, grac42, grac43

Status on grac41:
PASS => SELinux is not being Enforced.


DATA FROM GRAC41 - SELINUX STATUS 



Permissive

Status on grac42:
PASS => SELinux is not being Enforced.


DATA FROM GRAC42 - SELINUX STATUS 



Permissive

Status on grac43:
PASS => SELinux is not being Enforced.


DATA FROM GRAC43 - SELINUX STATUS 



Permissive
Top

Public interface existence

Recommendation
 It is important to ensure that your public interface is properly marked as public and not private. This can be checked with the oifcfg getif command. If it is inadvertently marked private, you can get errors such as "OS system dependent operation:bind failed with status" and "OS failure message: Cannot assign requested address". It can be corrected with a command like: oifcfg setif -global eth0/<public IP address>:public
 
Needs attention on-
Passed ongrac41, grac42, grac43

Status on grac41:
PASS => Public interface is configured and exists in OCR


DATA FROM GRAC41 - PUBLIC INTERFACE EXISTENCE 



eth1  192.168.1.0  global  public
eth2  192.168.2.0  global  cluster_interconnect

Status on grac42:
PASS => Public interface is configured and exists in OCR


DATA FROM GRAC42 - PUBLIC INTERFACE EXISTENCE 



eth1  192.168.1.0  global  public
eth2  192.168.2.0  global  cluster_interconnect

Status on grac43:
PASS => Public interface is configured and exists in OCR


DATA FROM GRAC43 - PUBLIC INTERFACE EXISTENCE 



eth1  192.168.1.0  global  public
eth2  192.168.2.0  global  cluster_interconnect
Top

ip_local_port_range

Recommendation
 Starting with Oracle Clusterware 11gR1, ip_local_port_range should be between 9000 (minimum) and 65500 (maximum).
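Validating the setting can be sketched as follows; the sample string stands in for the output of `cat /proc/sys/net/ipv4/ip_local_port_range`, and the exact-match comparison against 9000/65500 is an assumption for illustration:

```shell
# Sketch: check the ephemeral port range against the recommendation.
range="9000 65500"   # stand-in for: cat /proc/sys/net/ipv4/ip_local_port_range
set -- $range
if [ "$1" -eq 9000 ] && [ "$2" -eq 65500 ]; then
  echo "ip_local_port_range OK: $1 $2"
else
  echo "ip_local_port_range outside recommendation: $1 $2"
fi
```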
 
Links
Needs attention on-
Passed ongrac41, grac42, grac43

Status on grac41:
PASS => ip_local_port_range is configured according to recommendation


DATA FROM GRAC41 - IP_LOCAL_PORT_RANGE 



minimum port range = 9000
maximum port range = 65500

Status on grac42:
PASS => ip_local_port_range is configured according to recommendation


DATA FROM GRAC42 - IP_LOCAL_PORT_RANGE 



minimum port range = 9000
maximum port range = 65500

Status on grac43:
PASS => ip_local_port_range is configured according to recommendation


DATA FROM GRAC43 - IP_LOCAL_PORT_RANGE 



minimum port range = 9000
maximum port range = 65500
Top

kernel.shmmax

Recommendation
 Benefit / Impact:

Optimal system memory management.

Risk:

In an Oracle RDBMS application, setting kernel.shmmax too high is not needed and could enable configurations that may leave inadequate system memory for other necessary functions.

Action / Repair:

Oracle Support officially recommends a "minimum" for SHMMAX of 1/2 of physical RAM. However, many Oracle customers choose a higher fraction, at their discretion.  Setting the kernel.shmmax as recommended only causes a few more shared memory segments to be used for whatever total SGA that you subsequently configure in Oracle.
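The half-of-RAM comparison can be sketched with the values reported for grac41 in the data section below (on a live system, read kernel.shmmax with sysctl and total memory from /proc/meminfo):

```shell
# Values taken from the grac41 data section (bytes).
shmmax=4398046511104       # kernel.shmmax actual
total_mem=4458795008       # total system memory
half_mem=$(( total_mem / 2 ))
if [ "$shmmax" -ge "$half_mem" ]; then
  echo "kernel.shmmax meets the 1/2-of-RAM minimum"
else
  echo "kernel.shmmax below the 1/2-of-RAM minimum"
fi
```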
 
Links
Needs attention on-
Passed ongrac41, grac42, grac43

Status on grac41:
PASS => kernel.shmmax parameter is configured according to recommendation


DATA FROM GRAC41 - KERNEL.SHMMAX 




NOTE: All results reported in bytes

kernel.shmmax actual = 4398046511104
total system memory = 4458795008
1/2 total system memory = 2229397504

Status on grac42:
PASS => kernel.shmmax parameter is configured according to recommendation


DATA FROM GRAC42 - KERNEL.SHMMAX 




NOTE: All results reported in bytes

kernel.shmmax actual = 4398046511104
total system memory = 3877883904
1/2 total system memory = 1938941952

Status on grac43:
PASS => kernel.shmmax parameter is configured according to recommendation


DATA FROM GRAC43 - KERNEL.SHMMAX 




NOTE: All results reported in bytes

kernel.shmmax actual = 4398046511104
total system memory = 3877883904
1/2 total system memory = 1938941952
Top

Check for parameter fs.file-max

Recommendation
 - In 11g we introduced automatic memory management which requires more file descriptors than previous versions.

- At a _MINIMUM_ we require 512*PROCESSES (init parameter) file descriptors per database instance + some for the OS and other non-oracle processes

- Since we cannot know at install time how many database instances a customer may run, how many PROCESSES they may configure for those instances, whether they will use automatic memory management, how many non-Oracle processes may run, or how many file descriptors those will require, we recommend setting the file descriptor limit to a very high number (6553600) to minimize the potential for running out.

- Setting fs.file-max "too high" doesn't hurt anything because file descriptors are allocated dynamically as needed up to the limit of fs.file-max

- Oracle is not aware of any customers having problems from setting fs.file-max "too high", but customers have had problems from setting it too low.  A problem from having too few file descriptors is preventable.

- As for a formula, given 512*PROCESSES (as a minimum) fs.file-max should be a sufficiently high number to minimize the chance that ANY customer would suffer an outage from having fs.file-max set too low.  At a limit of 6553600 customers are likely to have other problems to worry about before they hit that limit. 

- If an individual customer wants to deviate from fs.file-max = 6553600 then they are free to do so based on their knowledge of their environment and implementation as long as they make sure they have enough file descriptors to cover all their database instances, other non-oracle processes and the OS.
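The 512*PROCESSES minimum described above can be sketched as follows; the per-instance PROCESSES values are hypothetical:

```shell
# Hypothetical PROCESSES settings for two instances on one server.
processes_list="300 500"
min_fd=0
for p in $processes_list; do
  min_fd=$(( min_fd + 512 * p ))   # 512 descriptors per PROCESSES slot
done
echo "minimum descriptors for instances alone: $min_fd"
```

Descriptors for the OS and non-Oracle processes come on top of this figure, which is why the blanket 6553600 recommendation leaves so much headroom.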
 
Links
Needs attention on-
Passed ongrac41, grac42, grac43

Status on grac41:
PASS => Kernel Parameter fs.file-max configuration meets or exceeds recommendation

fs.file-max = 6815744

Status on grac42:
PASS => Kernel Parameter fs.file-max configuration meets or exceeds recommendation

fs.file-max = 6815744

Status on grac43:
PASS => Kernel Parameter fs.file-max configuration meets or exceeds recommendation

fs.file-max = 6815744
Top

DB shell limits hard stack

Recommendation
 The hard stack shell limit for the Oracle DB software install owner as defined in /etc/security/limits.conf should be >= 10240.

What's being checked here is the /etc/security/limits.conf file as documented in 11gR2 Grid Infrastructure  Installation Guide, section 2.15.3 Setting Resource Limits for the Oracle Software Installation Users.  

If the /etc/security/limits.conf file is not configured as described in the documentation, then check the hard stack configuration while logged into the software owner account (e.g. oracle):

$ ulimit -Hs
10240

As long as the hard stack limit is 10240 or above then the configuration should be ok.
 
Links
Needs attention on-
Passed ongrac41, grac42, grac43

Status on grac41:
PASS => Shell limit hard stack for DB is configured according to recommendation


DATA FROM GRAC41 - DB SHELL LIMITS HARD STACK 



oracle hard stack 32768

Status on grac42:
PASS => Shell limit hard stack for DB is configured according to recommendation


DATA FROM GRAC42 - DB SHELL LIMITS HARD STACK 



oracle hard stack 32768

Status on grac43:
PASS => Shell limit hard stack for DB is configured according to recommendation


DATA FROM GRAC43 - DB SHELL LIMITS HARD STACK 



oracle hard stack 32768
Top

/tmp directory free space

Recommendation
 There should be a minimum of 1GB of free space in the /tmp directory
 
Links
Needs attention on-
Passed ongrac41, grac42, grac43

Status on grac41:
PASS => Free space in /tmp directory meets or exceeds recommendation of minimum 1GB


DATA FROM GRAC41 - /TMP DIRECTORY FREE SPACE 



Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg_oel64-lv_root
                       38G   22G   15G  61% /

Status on grac42:
PASS => Free space in /tmp directory meets or exceeds recommendation of minimum 1GB


DATA FROM GRAC42 - /TMP DIRECTORY FREE SPACE 



Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg_oel64-lv_root
                       38G   21G   16G  58% /

Status on grac43:
PASS => Free space in /tmp directory meets or exceeds recommendation of minimum 1GB


DATA FROM GRAC43 - /TMP DIRECTORY FREE SPACE 



Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg_oel64-lv_root
                       38G   20G   17G  54% /
Top

Top

GI shell limits hard nproc

Recommendation
 The hard nproc shell limit for the Oracle GI software install owner should be >= 16384.
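A generic comparison of a shell limit against its recommended floor can be sketched as follows; the thresholds are the ones stated in this report, while the current values are hypothetical (on a live system, read them from ulimit -Ha / -Sa for the GI owner):

```shell
# Sketch: compare a limit value against a recommended minimum.
check_limit() {  # args: name current minimum
  if [ "$2" = "unlimited" ] || [ "$2" -ge "$3" ]; then
    echo "PASS: $1=$2 (>= $3)"
  else
    echo "WARNING: $1=$2 (< $3)"
  fi
}
check_limit "hard nproc"  16384 16384   # recommendation from this section
check_limit "hard nofile" 4096  65536   # hypothetical too-low current value
```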
 
Links
Needs attention ongrac41, grac42, grac43
Passed on-

Status on grac41:
WARNING => Shell limit hard nproc for GI is NOT configured according to recommendation


DATA FROM GRAC41 FOR GRID INFRASTRUCTURE USER SHELL LIMITS CONFIGURATION 



Soft limits(ulimit -Sa) 

 


Hard Limits(ulimit -Ha)



Status on grac42:
WARNING => Shell limit hard nproc for GI is NOT configured according to recommendation


DATA FROM GRAC42 FOR GRID INFRASTRUCTURE USER SHELL LIMITS CONFIGURATION 



Soft limits(ulimit -Sa) 

 


Hard Limits(ulimit -Ha)



Status on grac43:
WARNING => Shell limit hard nproc for GI is NOT configured according to recommendation


DATA FROM GRAC43 FOR GRID INFRASTRUCTURE USER SHELL LIMITS CONFIGURATION 



Soft limits(ulimit -Sa) 

 


Hard Limits(ulimit -Ha)


Top

DB shell limits soft nofile

Recommendation
 The soft nofile shell limit for the Oracle DB software install owner as defined in /etc/security/limits.conf should be >= 1024.
 
Links
Needs attention on-
Passed ongrac41, grac42, grac43

Status on grac41:
PASS => Shell limit soft nofile for DB is configured according to recommendation


DATA FROM GRAC41 - DB SHELL LIMITS SOFT NOFILE 



oracle soft nofile 65536

Status on grac42:
PASS => Shell limit soft nofile for DB is configured according to recommendation


DATA FROM GRAC42 - DB SHELL LIMITS SOFT NOFILE 



oracle soft nofile 1024

Status on grac43:
PASS => Shell limit soft nofile for DB is configured according to recommendation


DATA FROM GRAC43 - DB SHELL LIMITS SOFT NOFILE 



oracle soft nofile 1024
Top

GI shell limits hard nofile

Recommendation
 The hard nofile shell limit for the Oracle GI software install owner should be >= 65536
 
Links
Needs attention ongrac41, grac42, grac43
Passed on-

Status on grac41:
WARNING => Shell limit hard nofile for GI is NOT configured according to recommendation


DATA FROM GRAC41 FOR GRID INFRASTRUCTURE USER SHELL LIMITS CONFIGURATION 



Soft limits(ulimit -Sa) 

 


Hard Limits(ulimit -Ha)



Status on grac42:
WARNING => Shell limit hard nofile for GI is NOT configured according to recommendation


DATA FROM GRAC42 FOR GRID INFRASTRUCTURE USER SHELL LIMITS CONFIGURATION 



Soft limits(ulimit -Sa) 

 


Hard Limits(ulimit -Ha)



Status on grac43:
WARNING => Shell limit hard nofile for GI is NOT configured according to recommendation


DATA FROM GRAC43 FOR GRID INFRASTRUCTURE USER SHELL LIMITS CONFIGURATION 



Soft limits(ulimit -Sa) 

 


Hard Limits(ulimit -Ha)


Top

DB shell limits hard nproc

Recommendation
 The hard nproc shell limit for the Oracle DB software install owner as defined in /etc/security/limits.conf should be >= 16384.
 
Links
Needs attention on-
Passed ongrac41, grac42, grac43

Status on grac41:
PASS => Shell limit hard nproc for DB is configured according to recommendation


DATA FROM GRAC41 - DB SHELL LIMITS HARD NPROC 



oracle hard nproc 16384

Status on grac42:
PASS => Shell limit hard nproc for DB is configured according to recommendation


DATA FROM GRAC42 - DB SHELL LIMITS HARD NPROC 



oracle hard nproc 16384

Status on grac43:
PASS => Shell limit hard nproc for DB is configured according to recommendation


DATA FROM GRAC43 - DB SHELL LIMITS HARD NPROC 



oracle hard nproc 16384
Top

GI shell limits soft nofile

Recommendation
 The soft nofile shell limit for the Oracle GI software install owner should be >= 1024.
 
Links
Needs attention ongrac41, grac42, grac43
Passed on-

Status on grac41:
WARNING => Shell limit soft nofile for GI is NOT configured according to recommendation


DATA FROM GRAC41 FOR GRID INFRASTRUCTURE USER SHELL LIMITS CONFIGURATION 



Soft limits(ulimit -Sa) 

 


Hard Limits(ulimit -Ha)



Status on grac42:
WARNING => Shell limit soft nofile for GI is NOT configured according to recommendation


DATA FROM GRAC42 FOR GRID INFRASTRUCTURE USER SHELL LIMITS CONFIGURATION 



Soft limits(ulimit -Sa) 

 


Hard Limits(ulimit -Ha)



Status on grac43:
WARNING => Shell limit soft nofile for GI is NOT configured according to recommendation


DATA FROM GRAC43 FOR GRID INFRASTRUCTURE USER SHELL LIMITS CONFIGURATION 



Soft limits(ulimit -Sa) 

 


Hard Limits(ulimit -Ha)


Top

GI shell limits soft nproc

Recommendation
 The soft nproc shell limit for the Oracle GI software install owner should be >= 2047.
 
Links
Needs attention ongrac41, grac42, grac43
Passed on-

Status on grac41:
WARNING => Shell limit soft nproc for GI is NOT configured according to recommendation


DATA FROM GRAC41 FOR GRID INFRASTRUCTURE USER SHELL LIMITS CONFIGURATION 



Soft limits(ulimit -Sa) 

 


Hard Limits(ulimit -Ha)



Status on grac42:
WARNING => Shell limit soft nproc for GI is NOT configured according to recommendation


DATA FROM GRAC42 FOR GRID INFRASTRUCTURE USER SHELL LIMITS CONFIGURATION 



Soft limits(ulimit -Sa) 

 


Hard Limits(ulimit -Ha)



Status on grac43:
WARNING => Shell limit soft nproc for GI is NOT configured according to recommendation


DATA FROM GRAC43 FOR GRID INFRASTRUCTURE USER SHELL LIMITS CONFIGURATION 



Soft limits(ulimit -Sa) 

 


Hard Limits(ulimit -Ha)


Top

DB shell limits hard nofile

Recommendation
 The hard nofile shell limit for the Oracle DB software install owner as defined in /etc/security/limits.conf should be >= 65536.
 
Links
Needs attention on-
Passed ongrac41, grac42, grac43

Status on grac41:
PASS => Shell limit hard nofile for DB is configured according to recommendation


DATA FROM GRAC41 - DB SHELL LIMITS HARD NOFILE 



oracle hard nofile 65536

Status on grac42:
PASS => Shell limit hard nofile for DB is configured according to recommendation


DATA FROM GRAC42 - DB SHELL LIMITS HARD NOFILE 



oracle hard nofile 65536

Status on grac43:
PASS => Shell limit hard nofile for DB is configured according to recommendation


DATA FROM GRAC43 - DB SHELL LIMITS HARD NOFILE 



oracle hard nofile 65536
Top

Linux Swap Size

Success FactorCORRECTLY SIZE THE SWAP SPACE
Recommendation
 The following table describes the relationship between installed RAM and the configured swap space requirement:

Note:
On Linux, the Hugepages feature allocates non-swappable memory for large page tables using memory-mapped files. If you enable Hugepages, then you should deduct the memory allocated to Hugepages from the available RAM before calculating swap space.

RAM between 1 GB and 2 GB: Swap 1.5 times the size of RAM (minus memory allocated to Hugepages)

RAM between 2 GB and 16 GB: Swap equal to the size of RAM (minus memory allocated to Hugepages)

RAM (minus memory allocated to Hugepages) more than 16 GB: Swap 16 GB

In other words, the maximum swap size for Linux that Oracle would recommend is 16 GB.
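The sizing rule above can be sketched as a small shell function; sizes are in MB, and the caller is assumed to have already deducted hugepages memory from RAM:

```shell
# Sketch of the swap sizing table (sizes in MB).
recommended_swap_mb() {
  ram_mb=$1
  if [ "$ram_mb" -le 2048 ]; then
    echo $(( ram_mb * 3 / 2 ))        # 1-2 GB RAM: 1.5x RAM
  elif [ "$ram_mb" -le 16384 ]; then
    echo "$ram_mb"                    # 2-16 GB RAM: equal to RAM
  else
    echo 16384                        # >16 GB RAM: cap at 16 GB
  fi
}
recommended_swap_mb 4096    # prints 4096
```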
 
Links
Needs attention on-
Passed ongrac41, grac42, grac43

Status on grac41:
PASS => Linux Swap Configuration meets or exceeds Recommendation


DATA FROM GRAC41 - LINUX SWAP SIZE 



Total memory on system (Physical RAM - Huge Pages Size) = 4354292
Swap memory found on system = 5210108
Recommended Swap = 4354292

Status on grac42:
PASS => Linux Swap Configuration meets or exceeds Recommendation


DATA FROM GRAC42 - LINUX SWAP SIZE 



Total memory on system (Physical RAM - Huge Pages Size) = 3786996
Swap memory found on system = 5210108
Recommended Swap = 3786996

Status on grac43:
PASS => Linux Swap Configuration meets or exceeds Recommendation


DATA FROM GRAC43 - LINUX SWAP SIZE 



Total memory on system (Physical RAM - Huge Pages Size) = 3786996
Swap memory found on system = 5210108
Recommended Swap = 3786996
Top

/tmp on dedicated filesystem

Recommendation
 It is a best practice to locate the /tmp directory on a dedicated filesystem; otherwise accidentally filling up /tmp could fill up the root (/) filesystem, and other file management on root (logs, traces, etc.) could likewise exhaust /tmp, leading to availability problems.  For example, Oracle creates socket files in /tmp.  Make sure 1GB of free space is maintained in /tmp.
 
Needs attention ongrac41, grac42, grac43
Passed on-

Status on grac41:
WARNING => /tmp is NOT on a dedicated filesystem


DATA FROM GRAC41 - /TMP ON DEDICATED FILESYSTEM 




Status on grac42:
WARNING => /tmp is NOT on a dedicated filesystem


DATA FROM GRAC42 - /TMP ON DEDICATED FILESYSTEM 




Status on grac43:
WARNING => /tmp is NOT on a dedicated filesystem


DATA FROM GRAC43 - /TMP ON DEDICATED FILESYSTEM 



Top

Non-autoextensible data and temp files

Recommendation
 Benefit / Impact:

The benefit of having "AUTOEXTEND" on is that applications may avoid out of space errors.
The impact of verifying that the "AUTOEXTEND" attribute is "ON" is minimal. The impact of setting "AUTOEXTEND" to "ON" varies depending upon if it is done during database creation, file addition to a tablespace, or added to an existing file.

Risk:

The risk of running out of space in either the tablespace or diskgroup varies by application and cannot be quantified here. A tablespace that runs out of space will interfere with an application, and a diskgroup running out of space could impact the entire database as well as ASM operations (e.g., rebalance operations).

Action / Repair:

To obtain a list of tablespaces that are not set to "AUTOEXTEND", enter the following sqlplus command logged into the database as sysdba:
select file_id, file_name, tablespace_name from dba_data_files where autoextensible <>'YES'
union
select file_id, file_name, tablespace_name from dba_temp_files where autoextensible <> 'YES'; 
The output should be:
no rows selected
If any rows are returned, investigate and correct the condition.
NOTE: Configuring "AUTOEXTEND" to "ON" requires comparing space utilization growth projections at the tablespace level to space available in the diskgroups to permit the expected projected growth while retaining sufficient storage space in reserve to account for ASM rebalance operations that occur either as a result of planned operations or component failure. The resulting growth targets are implemented with the "MAXSIZE" attribute that should always be used in conjunction with the "AUTOEXTEND" attribute. The "MAXSIZE" settings should allow for projected growth while minimizing the prospect of depleting a disk group. The "MAXSIZE" settings will vary by customer and a blanket recommendation cannot be given here.

NOTE: When configuring a file for "AUTOEXTEND" to "ON", the size specified for the "NEXT" attribute should cover all disks in the diskgroup to optimize balance. For example, with a 4MB AU size and 168 disks, the size of the "NEXT" attribute should be a multiple of 672M (4*168).
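As a hedged illustration of the notes above, the statements below show the general syntax only; the file names, "NEXT" and "MAXSIZE" values are placeholders and must be derived from your own growth projections and diskgroup capacity:

```sql
-- Placeholder example: enable AUTOEXTEND with a bounded MAXSIZE.
-- With a 4MB AU size and 168 disks, NEXT is a multiple of 672M (4*168).
ALTER DATABASE DATAFILE '+DATA/grac4/datafile/example_file.dbf'
  AUTOEXTEND ON NEXT 672M MAXSIZE 32G;

-- Temp files use the TEMPFILE keyword instead:
ALTER DATABASE TEMPFILE '+DATA/grac4/tempfile/example_temp.dbf'
  AUTOEXTEND ON NEXT 672M MAXSIZE 16G;
```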
 
Needs attention on: grac4
Passed on: -

Status on grac4:
INFO => Some data or temp files are not autoextensible


DATA FOR GRAC4 FOR NON-AUTOEXTENSIBLE DATA AND TEMP FILES 




/u01/oradata/grac4_dnfs_ts.dbf                                                  
Top

Non-multiplexed redo logs

Recommendation
 The online redo logs of an Oracle database are critical to availability and recoverability and should always be multiplexed even in cases where fault tolerance is provided at the storage level.
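A minimal sketch of adding a second member to each redo log group, assuming a second diskgroup named +FRA exists (adjust the group numbers to match V$LOG for your database):

```sql
-- +FRA is an assumed diskgroup name; repeat for every group in V$LOG.
ALTER DATABASE ADD LOGFILE MEMBER '+FRA' TO GROUP 1;
ALTER DATABASE ADD LOGFILE MEMBER '+FRA' TO GROUP 2;
-- ... groups 3 through 6 likewise
```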
 
Needs attention on: grac4
Passed on: -

Status on grac4:
WARNING => One or more redo log groups are NOT multiplexed


DATA FOR GRAC4 FOR NON-MULTIPLEXED REDO LOGS 




         1          1                                                           
         6          1                                                           
         2          1                                                           
         4          1                                                           
         5          1                                                           
         3          1                                                           
Top

Multiplexed controlfiles

Recommendation
 The controlfile of an Oracle database is critical to availability and recoverability and should always be multiplexed even in cases where fault tolerance is provided at the storage level.
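One common approach on ASM, sketched here with an assumed second diskgroup +FRA (verify the exact procedure for your release with Oracle documentation or support before using it), is to update the spfile and then let RMAN create the copy while the database is down:

```sql
-- Sketch only: point the spfile at both controlfile locations.
ALTER SYSTEM SET control_files =
  '+DATA/grac4/controlfile/current.260.826111693', '+FRA'
  SCOPE=SPFILE SID='*';
-- Then SHUTDOWN IMMEDIATE, STARTUP NOMOUNT, and in RMAN:
--   RESTORE CONTROLFILE FROM '+DATA/grac4/controlfile/current.260.826111693';
-- before mounting and opening the database again.
```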
 
Needs attention on: grac4
Passed on: -

Status on grac4:
WARNING => Controlfile is NOT multiplexed


DATA FOR GRAC4 FOR MULTIPLEXED CONTROLFILES 




+DATA/grac4/controlfile/current.260.826111693                                   
Top

Check for parameter remote_login_passwordfile

Recommendation
 For security reasons, remote_login_passwordfile should be set to SHARED or EXCLUSIVE; the two are functionally equivalent.
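If an instance were found with a different value, it could be corrected with something like the following; remote_login_passwordfile is a static parameter, so the change takes effect at the next restart:

```sql
ALTER SYSTEM SET remote_login_passwordfile=EXCLUSIVE SCOPE=SPFILE SID='*';
```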
 
Links
Needs attention on: -
Passed on: grac41, grac42, grac43

Status on grac41:
PASS => remote_login_passwordfile is configured according to recommendation

grac41.remote_login_passwordfile = EXCLUSIVE                                    

Status on grac42:
PASS => remote_login_passwordfile is configured according to recommendation

grac42.remote_login_passwordfile = EXCLUSIVE                                    

Status on grac43:
PASS => remote_login_passwordfile is configured according to recommendation

grac43.remote_login_passwordfile = EXCLUSIVE                                    
Top

Check audit_file_dest

Recommendation
 Old audit files should be cleaned out of audit_file_dest regularly; otherwise the ORACLE_BASE mount point may run out of space and it may become impossible to collect diagnostic information when a failure occurs.
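As one possible approach on 11g and later, old OS audit files can be purged from within the database using the DBMS_AUDIT_MGMT package (a simple cron job using find on each node is an equally valid alternative; check the package documentation for your release before relying on this sketch):

```sql
-- Sketch: purge OS audit trail files (the .aud files under audit_file_dest).
BEGIN
  DBMS_AUDIT_MGMT.CLEAN_AUDIT_TRAIL(
    audit_trail_type        => DBMS_AUDIT_MGMT.AUDIT_TRAIL_OS,
    use_last_arch_timestamp => FALSE);  -- FALSE purges regardless of age
END;
/
```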
 
Needs attention on: grac41, grac42, grac43
Passed on: -

Status on grac41:
INFO => audit_file_dest has audit files older than 30 days for grac4


DATA FROM GRAC41 - GRAC4 DATABASE - CHECK AUDIT_FILE_DEST 



Number of audit files last modified over 30 days ago at /u01/app/oracle/admin/grac4/adump = 3466

Status on grac42:
INFO => audit_file_dest has audit files older than 30 days for grac4


DATA FROM GRAC42 - GRAC4 DATABASE - CHECK AUDIT_FILE_DEST 



Number of audit files last modified over 30 days ago at /u01/app/oracle/admin/grac4/adump = 770

Status on grac43:
INFO => audit_file_dest has audit files older than 30 days for grac4


DATA FROM GRAC43 - GRAC4 DATABASE - CHECK AUDIT_FILE_DEST 



Number of audit files last modified over 30 days ago at /u01/app/oracle/admin/grac4/adump = 668
Top

oradism executable ownership

Success Factor: VERIFY OWNERSHIP OF ORADISM EXECUTABLE IF LMS PROCESS NOT RUNNING IN REAL TIME
Recommendation
 Benefit / Impact:

The oradism executable is invoked after database startup to change the scheduling priority of LMS and other database background processes to the realtime scheduling class in order to maximize the ability of these key processes to be scheduled on the CPU in a timely way at times of high CPU utilization.

Risk:

The oradism executable should be owned by root and the owner s-bit should be set, e.g. -rwsr-x---, where the s is the setuid bit (s-bit) for root. If the LMS process is not running at the proper scheduling priority, it can lead to instance evictions due to IPC send timeouts or ORA-29740 errors. oradism must be owned by root with its s-bit set in order to be able to change the scheduling priority. If oradism is not owned by root or the s-bit is not set, something must have gone wrong in the installation process, or the ownership or permissions were changed afterwards.

Action / Repair:

Please check with Oracle Support to determine the best course to take for your platform to correct the problem.
 
Needs attention on: -
Passed on: grac41, grac42, grac43

Status on grac41:
PASS => $ORACLE_HOME/bin/oradism ownership is root


DATA FROM GRAC41 - /U01/APP/ORACLE/PRODUCT/11204/RACDB DATABASE_HOME - ORADISM EXECUTABLE OWNERSHIP 



-rwsr-x---. 1 root oinstall 71790 Aug 24 10:51 /u01/app/oracle/product/11204/racdb/bin/oradism

Status on grac42:
PASS => $ORACLE_HOME/bin/oradism ownership is root


DATA FROM GRAC42 - /U01/APP/ORACLE/PRODUCT/11204/RACDB DATABASE_HOME - ORADISM EXECUTABLE OWNERSHIP 



-rwsr-x---. 1 root oinstall 71790 Sep 14 11:19 /u01/app/oracle/product/11204/racdb/bin/oradism

Status on grac43:
PASS => $ORACLE_HOME/bin/oradism ownership is root


DATA FROM GRAC43 - /U01/APP/ORACLE/PRODUCT/11204/RACDB DATABASE_HOME - ORADISM EXECUTABLE OWNERSHIP 



-rwsr-x---. 1 root oinstall 71790 Sep 15 17:19 /u01/app/oracle/product/11204/racdb/bin/oradism
Top

oradism executable permission

Success Factor: VERIFY OWNERSHIP OF ORADISM EXECUTABLE IF LMS PROCESS NOT RUNNING IN REAL TIME
Recommendation
 Benefit / Impact:

The oradism executable is invoked after database startup to change the scheduling priority of LMS and other database background processes to the realtime scheduling class in order to maximize the ability of these key processes to be scheduled on the CPU in a timely way at times of high CPU utilization.

Risk:

The oradism executable should be owned by root and the owner s-bit should be set, e.g. -rwsr-x---, where the s is the setuid bit (s-bit) for root. If the LMS process is not running at the proper scheduling priority, it can lead to instance evictions due to IPC send timeouts or ORA-29740 errors. oradism must be owned by root with its s-bit set in order to be able to change the scheduling priority. If oradism is not owned by root or the s-bit is not set, something must have gone wrong in the installation process, or the ownership or permissions were changed afterwards.

Action / Repair:

Please check with Oracle Support to determine the best course to take for your platform to correct the problem.
 
Needs attention on: -
Passed on: grac41, grac42, grac43

Status on grac41:
PASS => $ORACLE_HOME/bin/oradism setuid bit is set


DATA FROM GRAC41 - /U01/APP/ORACLE/PRODUCT/11204/RACDB DATABASE_HOME - ORADISM EXECUTABLE PERMISSION 



-rwsr-x---. 1 root oinstall 71790 Aug 24 10:51 /u01/app/oracle/product/11204/racdb/bin/oradism

Status on grac42:
PASS => $ORACLE_HOME/bin/oradism setuid bit is set


DATA FROM GRAC42 - /U01/APP/ORACLE/PRODUCT/11204/RACDB DATABASE_HOME - ORADISM EXECUTABLE PERMISSION 



-rwsr-x---. 1 root oinstall 71790 Sep 14 11:19 /u01/app/oracle/product/11204/racdb/bin/oradism

Status on grac43:
PASS => $ORACLE_HOME/bin/oradism setuid bit is set


DATA FROM GRAC43 - /U01/APP/ORACLE/PRODUCT/11204/RACDB DATABASE_HOME - ORADISM EXECUTABLE PERMISSION 



-rwsr-x---. 1 root oinstall 71790 Sep 15 17:19 /u01/app/oracle/product/11204/racdb/bin/oradism
Top

Avg message sent queue time on ksxp

Recommendation
 Avg message sent queue time on ksxp (ms) should be very low; average numbers are usually below 2 ms on most systems. Higher averages usually mean the system is approaching interconnect or CPU capacity, or else there may be an interconnect problem. The higher the average is above 2 ms, the more severe the problem is likely to be.

Interconnect performance should be investigated further by analysis using AWR and ASH reports and other network diagnostic tools.  
 
Needs attention on: -
Passed on: grac4

Status on grac4:
PASS => Avg message sent queue time on ksxp is <= recommended


DATA FOR GRAC4 FOR AVG MESSAGE SENT QUEUE TIME ON KSXP 




avg_message_sent_queue_time_on_ksxp_in_ms = 0                                   
Top

Avg message sent queue time (ms)

Recommendation
 Avg message sent queue time (ms) as derived from AWR should be very low; average numbers are usually below 2 ms on most systems. Higher averages usually mean the system is approaching interconnect or CPU capacity, or else there may be an interconnect problem. The higher the average is above 2 ms, the more severe the problem is likely to be.

Interconnect performance should be investigated further by analysis using AWR and ASH reports and other network diagnostic tools. 
 
Needs attention on: -
Passed on: grac4

Status on grac4:
PASS => Avg message sent queue time is <= recommended


DATA FOR GRAC4 FOR AVG MESSAGE SENT QUEUE TIME (MS) 




avg_message_sent_queue_time_in_ms = 0                                           
Top

Avg message received queue time

Recommendation
 Avg message receive queue time (ms) as derived from AWR should be very low; average numbers are usually below 2 ms on most systems. Higher averages usually mean the system is approaching interconnect or CPU capacity, or else there may be an interconnect problem. The higher the average is above 2 ms, the more severe the problem is likely to be.

Interconnect performance should be investigated further by analysis using AWR and ASH reports and other network diagnostic tools. 
 
Needs attention on: -
Passed on: grac4

Status on grac4:
PASS => Avg message received queue time is <= recommended


DATA FOR GRAC4 FOR AVG MESSAGE RECEIVED QUEUE TIME 




avg_message_received_queue_time_in_ms = 0                                       
Top

GC block lost

Success Factor: GC LOST BLOCK DIAGNOSTIC GUIDE
Recommendation
 The RDBMS reports global cache lost blocks statistics ("gc cr block lost" and/or "gc current block lost") which could indicate a negative impact on interconnect performance and global cache processing. 

The vast majority of escalations attributed to RDBMS global cache lost blocks can be directly related to faulty or misconfigured interconnects. This guide serves as a starting point for evaluating common (and sometimes obvious) causes.

<b> 1. Is Jumbo Frames configured? </b>

A jumbo frame is a packet of around 9000 bytes; packets of around 5000 bytes are called mini jumbo frames. All servers, switches and routers in operation must be configured to support the same packet size.

Primary Benefit: performance
Secondary Benefit: cluster stability due to lower IP overhead and fewer misses for network heartbeat check-ins.

<b> 2. What is the configured MTU size for each interconnect interface and interconnect switch ports? </b>

The MTU is the "Maximum Transmission Unit" or the frame size.  The default is 1500 bytes for Ethernet.

<b> 3. Do you observe frame loss at the OS, NIC or switch layer?  </b> netstat, ifconfig, ethtool, switch port stats would help you determine that.

Using netstat -s look for:
x fragments dropped after timeout
x packet reassembles failed

<b> 4. Are network card speeds forced to full duplex? </b>

<b> 5. Are network card speed and mode (autonegotiate, fixed full duplex, etc) identical on all nodes and switch? </b>

<b> 6. Is the PCI bus at the same speed on all nodes that the NIC (Network Interface Cards) are using?  </b>

<b> 7. Have you modified the ring buffers away from default for the interconnect NIC for all nodes? </b>

<b> 8. Have you measured interconnect capacity and are you saturating available bandwidth? </b>

Remember that all network values are averaged over a time period.  Best to keep the average time period as small as possible so that spikes of activity are not masked out.

<b> 9. Are the CPUs overloaded (i.e., load average > 20 on newer Intel architectures) on the nodes that exhibit block loss? </b> The "uptime" command will display load average information on most platforms.

<b> 10. Have you modified transmit and receive (tx/rx) UDP buffer queue size for the OS from recommended settings?  </b>
          Send and receive queues should be the same size. 
          Queue max and default should be the same size. 
          Recommended queue size = 4194304 (4 megabytes). 
                  
<b> 11. What is the NIC driver version and is it the same on all nodes? </b>

<b> 12. Is the NIC driver NAPI (New Application Program Interface) enabled on all nodes (recommended)? </b>

<b> 13. What is the % of block loss compared to total gc block processing for that node? </b> View AWR reports for peak load periods.

Total # of blocks lost:
SQL> select INST_ID, NAME, VALUE from gv$sysstat where name like 'global cache %lost%' and value > 0;

<b> 14. Is flow control enabled (tx & rx) for switch and NIC? </b>  It's not just the servers that need the transmission to pause (Xoff) but also the network equipment.

<b> 15. Is QoS configured on the interconnect network? </b> Using QoS (Quality of Service) is not advised on the network segment over which the RAC private interconnect communicates with the other nodes of the cluster. This includes the server, switch and DNS (or any other device connected to this segment of the network).
In one case on AIX, a QoS service was turned on but not configured on a Cisco 3750 switch, causing an excessive number of "gc cr block lost" and other GC waits; these waits caused application performance issues.
 
Links
Needs attention on: -
Passed on: grac4

Status on grac4:
PASS => No Global Cache lost blocks detected


DATA FOR GRAC4 FOR GC BLOCK LOST 




No of GC lost block in last 24 hours = 0                                        
Top

Session Failover configuration

Success Factor: CONFIGURE ORACLE NET SERVICES LOAD BALANCING PROPERLY TO DISTRIBUTE CONNECTIONS
Recommendation
 Benefit / Impact:

Higher application availability

Risk:

Application availability problems in case of failed nodes or database instances

Action / Repair:

Application connection failover and load balancing are highly recommended for OLTP environments but may not apply to DSS workloads. DSS application customers may want to ignore this warning.


The following query will identify the application user sessions that do not have basic connection failover configured:

select username, sid, serial#,process,failover_type,failover_method FROM gv$session where upper(failover_method) != 'BASIC' and upper(failover_type) !='SELECT' and upper(username) not in ('SYS','SYSTEM','SYSMAN','DBSNMP');

 
Links
Needs attention on: -
Passed on: grac4

Status on grac4:
PASS => Failover method (SELECT) and failover mode (BASIC) are configured properly


DATA FOR GRAC4 FOR SESSION FAILOVER CONFIGURATION 




Query returned no rows which is expected when the SQL check passes.

Top

Redo log Checkpoint not complete

Recommendation
 If checkpoints are not being completed, the database may hang or experience performance degradation. Under this circumstance the alert log will contain "checkpoint not complete" messages, and it is recommended that the online redo logs be recreated with a larger size.
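A hedged sketch of recreating the logs at a larger size; the group numbers, the 1G size and the +DATA diskgroup are placeholders, and at least two groups per thread must remain available at all times:

```sql
-- Add new, larger groups first (repeat per thread/group as needed):
ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 7 ('+DATA') SIZE 1G;
ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 8 ('+DATA') SIZE 1G;
-- Once an old group shows INACTIVE in V$LOG (force log switches and
-- checkpoints as needed), drop it:
ALTER SYSTEM ARCHIVE LOG CURRENT;
ALTER DATABASE DROP LOGFILE GROUP 1;
```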
 
Links
Needs attention on: grac41, grac42, grac43
Passed on: -

Status on grac41:
INFO => At some times checkpoints are not being completed


DATA FROM GRAC41 - GRAC4 DATABASE - REDO LOG CHECKPOINT NOT COMPLETE 



checkpoint not complete messages in /u01/app/oracle/diag/rdbms/grac4/grac41/trace/alert_grac41.log = 160

Status on grac42:
INFO => At some times checkpoints are not being completed


DATA FROM GRAC42 - GRAC4 DATABASE - REDO LOG CHECKPOINT NOT COMPLETE 



checkpoint not complete messages in /u01/app/oracle/diag/rdbms/grac4/grac42/trace/alert_grac42.log = 82

Status on grac43:
INFO => At some times checkpoints are not being completed


DATA FROM GRAC43 - GRAC4 DATABASE - REDO LOG CHECKPOINT NOT COMPLETE 



checkpoint not complete messages in /u01/app/oracle/diag/rdbms/grac4/grac43/trace/alert_grac43.log = 123
Top

Avg GC Current Block Receive Time

Recommendation
 The average gc current block receive time should typically be less than 15 milliseconds depending on your system configuration and volume.  This is the average latency of a current request round-trip from the requesting instance to the holding instance and back to the requesting instance.

Use the following query to determine the average gc current block receive time for each instance.

set numwidth 20 
column "AVG CURRENT BLOCK RECEIVE TIME (ms)" format 9999999.9 
select b1.inst_id, ((b1.value / decode(b2.value,0,1,b2.value)) * 10) "AVG CURRENT BLOCK RECEIVE TIME (ms)" 
from gv$sysstat b1, gv$sysstat b2 
where b1.name = 'gc current block receive time' and 
b2.name = 'gc current blocks received' and b1.inst_id = b2.inst_id ;
 
Needs attention on: -
Passed on: grac4

Status on grac4:
PASS => Avg GC CURRENT Block Receive Time Within Acceptable Range


DATA FOR GRAC4 FOR AVG GC CURRENT BLOCK RECEIVE TIME 




avg_gc_current_block_receive_time_15ms_exceeded = 0                             
Top

Avg GC CR Block Receive Time

Recommendation
 The average gc cr block receive time should typically be less than 15 milliseconds depending on your system configuration and volume.  This is the average latency of a consistent-read request round-trip from the requesting instance to the holding instance and back to the requesting instance.

Use the following query to determine the average gc cr block receive time for each instance.

set numwidth 20 
column "AVG CR BLOCK RECEIVE TIME (ms)" format 9999999.9 
select b1.inst_id, ((b1.value / decode(b2.value,0,1,b2.value)) * 10) "AVG CR BLOCK RECEIVE TIME (ms)" 
from gv$sysstat b1, gv$sysstat b2 
where b1.name = 'gc cr block receive time' and 
b2.name = 'gc cr blocks received' and b1.inst_id = b2.inst_id ;
 
Needs attention on: -
Passed on: grac4

Status on grac4:
PASS => Avg GC CR Block Receive Time Within Acceptable Range


DATA FOR GRAC4 FOR AVG GC CR BLOCK RECEIVE TIME 




avg_gc_cr_block_receive_time_15ms_exceeded = 0                                  
Top

Tablespace allocation type

Recommendation
 It is recommended that for all locally managed tablespaces the allocation type specified be SYSTEM to allow Oracle to automatically determine extent size based on the data profile.
 
Links
Needs attention on: -
Passed on: grac4

Status on grac4:
PASS => Tablespace allocation type is SYSTEM for all appropriate tablespaces for grac4


DATA FOR GRAC4 FOR TABLESPACE ALLOCATION TYPE 




Query returned no rows which is expected when the SQL check passes.

Top

Old trace files in background dump destination

Recommendation
 Old trace files should be cleaned out of the background dump destination regularly; otherwise the ORACLE_BASE mount point may run out of space and it may become impossible to collect diagnostic information when a failure occurs.
 
Needs attention on: -
Passed on: grac41, grac42, grac43

Status on grac41:
PASS => background_dump_dest does not have any files older than 30 days


DATA FROM GRAC41 - GRAC4 DATABASE - OLD TRACE FILES IN BACKGROUND DUMP DESTINATION 



bdump dest files older than 30 days = 0

Status on grac42:
PASS => background_dump_dest does not have any files older than 30 days


DATA FROM GRAC42 - GRAC4 DATABASE - OLD TRACE FILES IN BACKGROUND DUMP DESTINATION 



bdump dest files older than 30 days = 0

Status on grac43:
PASS => background_dump_dest does not have any files older than 30 days


DATA FROM GRAC43 - GRAC4 DATABASE - OLD TRACE FILES IN BACKGROUND DUMP DESTINATION 



bdump dest files older than 30 days = 0
Top

Alert log file size

Recommendation
 If the alert log file is larger than 50 MB, it should be rolled over to a new file and the old file should be backed up.
 
Needs attention on: -
Passed on: grac41, grac42, grac43

Status on grac41:
PASS => Alert log is not too big


DATA FROM GRAC41 - GRAC4 DATABASE - ALERT LOG FILE SIZE 



-rw-r-----. 1 oracle asmadmin 1192252 Feb 22 06:01 /u01/app/oracle/diag/rdbms/grac4/grac41/trace/alert_grac41.log

Status on grac42:
PASS => Alert log is not too big


DATA FROM GRAC42 - GRAC4 DATABASE - ALERT LOG FILE SIZE 



-rw-r-----. 1 oracle asmadmin 937641 Feb 22 05:59 /u01/app/oracle/diag/rdbms/grac4/grac42/trace/alert_grac42.log

Status on grac43:
PASS => Alert log is not too big


DATA FROM GRAC43 - GRAC4 DATABASE - ALERT LOG FILE SIZE 



-rw-r-----. 1 oracle asmadmin 945222 Feb 22 06:01 /u01/app/oracle/diag/rdbms/grac4/grac43/trace/alert_grac43.log
Top

Check ORA-07445 errors

Recommendation
 ORA-07445 errors may indicate database block corruption or other serious issues. See the trace file referenced next to the ORA-07445 error in the alert log for more information. If you are not able to resolve the problem, please open a service request with Oracle Support.
 
Needs attention on: -
Passed on: grac41, grac42, grac43

Status on grac41:
PASS => No ORA-07445 errors found in alert log


DATA FROM GRAC41 - GRAC4 DATABASE - CHECK ORA-07445 ERRORS 




Status on grac42:
PASS => No ORA-07445 errors found in alert log


DATA FROM GRAC42 - GRAC4 DATABASE - CHECK ORA-07445 ERRORS 




Status on grac43:
PASS => No ORA-07445 errors found in alert log


DATA FROM GRAC43 - GRAC4 DATABASE - CHECK ORA-07445 ERRORS 



Top

Check ORA-00600 errors

Recommendation
 ORA-00600 errors may indicate database block corruption or other serious issues. See the trace file referenced next to the ORA-00600 error in the alert log for more information. If you are not able to resolve the problem, please open a service request with Oracle Support.
 
Needs attention on: -
Passed on: grac41, grac42, grac43

Status on grac41:
PASS => No ORA-00600 errors found in alert log


DATA FROM GRAC41 - GRAC4 DATABASE - CHECK ORA-00600 ERRORS 




Status on grac42:
PASS => No ORA-00600 errors found in alert log


DATA FROM GRAC42 - GRAC4 DATABASE - CHECK ORA-00600 ERRORS 




Status on grac43:
PASS => No ORA-00600 errors found in alert log


DATA FROM GRAC43 - GRAC4 DATABASE - CHECK ORA-00600 ERRORS 



Top

Check user_dump_destination

Recommendation
 Old trace files should be cleaned out of the user dump destination regularly; otherwise the ORACLE_BASE mount point may run out of space and it may become impossible to collect diagnostic information when a failure occurs.
 
Needs attention on: -
Passed on: grac41, grac42, grac43

Status on grac41:
PASS => user_dump_dest does not have trace files older than 30 days


DATA FROM GRAC41 - GRAC4 DATABASE - CHECK USER_DUMP_DESTINATION 



0 files found at /u01/app/oracle/diag/rdbms/grac4/grac41/trace which are older than 30 days

Status on grac42:
PASS => user_dump_dest does not have trace files older than 30 days


DATA FROM GRAC42 - GRAC4 DATABASE - CHECK USER_DUMP_DESTINATION 



0 files found at /u01/app/oracle/diag/rdbms/grac4/grac42/trace which are older than 30 days

Status on grac43:
PASS => user_dump_dest does not have trace files older than 30 days


DATA FROM GRAC43 - GRAC4 DATABASE - CHECK USER_DUMP_DESTINATION 



0 files found at /u01/app/oracle/diag/rdbms/grac4/grac43/trace which are older than 30 days
Top

Check core_dump_destination

Recommendation
 Old core dump files should be cleaned out of the core dump destination regularly; otherwise the ORACLE_BASE mount point may run out of space and it may become impossible to collect diagnostic information when a failure occurs.
 
Needs attention on: -
Passed on: grac41, grac42, grac43

Status on grac41:
PASS => core_dump_dest does not have too many older core dump files


DATA FROM GRAC41 - GRAC4 DATABASE - CHECK CORE_DUMP_DESTINATION 



0 files found at /u01/app/oracle/diag/rdbms/grac4/grac41/cdump which are older than 30 days

Status on grac42:
PASS => core_dump_dest does not have too many older core dump files


DATA FROM GRAC42 - GRAC4 DATABASE - CHECK CORE_DUMP_DESTINATION 



0 files found at /u01/app/oracle/diag/rdbms/grac4/grac42/cdump which are older than 30 days

Status on grac43:
PASS => core_dump_dest does not have too many older core dump files


DATA FROM GRAC43 - GRAC4 DATABASE - CHECK CORE_DUMP_DESTINATION 



0 files found at /u01/app/oracle/diag/rdbms/grac4/grac43/cdump which are older than 30 days
Top

Check for parameter semmns

Recommendation
 SEMMNS should be set >= 32000
 
Links
Needs attention on: -
Passed on: grac41, grac42, grac43

Status on grac41:
PASS => Kernel Parameter SEMMNS OK

semmns = 32000

Status on grac42:
PASS => Kernel Parameter SEMMNS OK

semmns = 32000

Status on grac43:
PASS => Kernel Parameter SEMMNS OK

semmns = 32000
Top

Check for parameter kernel.shmmni

Recommendation
 kernel.shmmni should be >= 4096
 
Links
Needs attention on: -
Passed on: grac41, grac42, grac43

Status on grac41:
PASS => Kernel Parameter kernel.shmmni OK

kernel.shmmni = 4096

Status on grac42:
PASS => Kernel Parameter kernel.shmmni OK

kernel.shmmni = 4096

Status on grac43:
PASS => Kernel Parameter kernel.shmmni OK

kernel.shmmni = 4096
Top

Check for parameter semmsl

Recommendation
 SEMMSL should be set >= 250
 
Links
Needs attention on: -
Passed on: grac41, grac42, grac43

Status on grac41:
PASS => Kernel Parameter SEMMSL OK

semmsl = 250

Status on grac42:
PASS => Kernel Parameter SEMMSL OK

semmsl = 250

Status on grac43:
PASS => Kernel Parameter SEMMSL OK

semmsl = 250
Top

Check for parameter semmni

Recommendation
 SEMMNI should be set >= 128
 
Links
Needs attention on: -
Passed on: grac41, grac42, grac43

Status on grac41:
PASS => Kernel Parameter SEMMNI OK

semmni = 128

Status on grac42:
PASS => Kernel Parameter SEMMNI OK

semmni = 128

Status on grac43:
PASS => Kernel Parameter SEMMNI OK

semmni = 128
Top

Check for parameter semopm

Recommendation
 SEMOPM should be set >= 100 
 
Links
Needs attention on: -
Passed on: grac41, grac42, grac43

Status on grac41:
PASS => Kernel Parameter SEMOPM OK

semopm = 100

Status on grac42:
PASS => Kernel Parameter SEMOPM OK

semopm = 100

Status on grac43:
PASS => Kernel Parameter SEMOPM OK

semopm = 100
Top

Check for parameter kernel.shmall

Recommendation
 Starting with Oracle 10g, kernel.shmall should be set >= 2097152.
 
Links
Needs attention on: -
Passed on: grac41, grac42, grac43

Status on grac41:
PASS => Kernel Parameter kernel.shmall OK

kernel.shmall = 1073741824

Status on grac42:
PASS => Kernel Parameter kernel.shmall OK

kernel.shmall = 1073741824

Status on grac43:
PASS => Kernel Parameter kernel.shmall OK

kernel.shmall = 1073741824
Top

Verify sys and system users default tablespace is system

Success Factor: DATABASE FAILURE PREVENTION BEST PRACTICES
Recommendation
 Benefit / Impact:

It is recommended to keep the default tablespace for the SYS and SYSTEM schemas mapped to SYSTEM. All standard dictionary objects, as well as those for any added options, will then be located in the same place, with no risk of recording dictionary data in other datafiles.

Risk:

If the default tablespace for SYS and SYSTEM is not set to SYSTEM, data dictionary objects can be created in other locations and cannot be controlled during database maintenance activities. Due to this, there is a potential risk of severe data dictionary corruption that may entail time-consuming recovery steps.

Action / Repair:

If the SYS or SYSTEM schema has a default tablespace other than SYSTEM, it is recommended to follow the instructions given in NoteID? : 1111111.2

SQL> SELECT username, default_tablespace
     FROM dba_users
     WHERE username in ('SYS','SYSTEM');

If DEFAULT_TABLESPACE is anything other than the SYSTEM tablespace, modify the default tablespace to SYSTEM using the command below.
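The command itself is not shown in this report; for illustration, the change would typically look like the following (run as SYSDBA):

```sql
ALTER USER SYS DEFAULT TABLESPACE SYSTEM;
ALTER USER SYSTEM DEFAULT TABLESPACE SYSTEM;
```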
 
Needs attention on: -
Passed on: grac4

Status on grac4:
PASS => The SYS and SYSTEM userids have a default tablespace of SYSTEM


DATA FOR GRAC4 FOR VERIFY SYS AND SYSTEM USERS DEFAULT TABLESPACE IS SYSTEM 




SYSTEM                                                                          
SYSTEM                                                                          
Top

Check for parameter remote_listener

Recommendation
 Using the remote_listener init parameter, instances can register with listeners on remote nodes; that way you achieve load balancing and failover if the local listener or node goes down.
 
Needs attention on: -
Passed on: grac41, grac42, grac43

Status on grac41:
PASS => Remote listener parameter is set to achieve load balancing and failover

grac41.remote_listener = grac4-scan.grid4.example.com:1521                      

Status on grac42:
PASS => Remote listener parameter is set to achieve load balancing and failover

grac42.remote_listener = grac4-scan.grid4.example.com:1521                      

Status on grac43:
PASS => Remote listener parameter is set to achieve load balancing and failover

grac43.remote_listener = grac4-scan.grid4.example.com:1521                      
Top

maximum parallel asynch io

Recommendation
 A message in the alert.log similar to the one below indicates that /proc/sys/fs/aio-max-nr is too low. You should set this to 1048576 proactively, and increase it further if you still get a similar message. A problem in this area could lead to availability issues.

Warning: OS async I/O limit 128 is lower than recovery batch 1024
 
Links
Needs attention on: -
Passed on: grac41, grac42, grac43

Status on grac41:
PASS => The number of async IO descriptors is sufficient (/proc/sys/fs/aio-max-nr)


DATA FROM GRAC41 - MAXIMUM PARALLEL ASYNCH IO 



aio-max-nr = 1048576

Status on grac42:
PASS => The number of async IO descriptors is sufficient (/proc/sys/fs/aio-max-nr)


DATA FROM GRAC42 - MAXIMUM PARALLEL ASYNCH IO 



aio-max-nr = 1048576

Status on grac43:
PASS => The number of async IO descriptors is sufficient (/proc/sys/fs/aio-max-nr)


DATA FROM GRAC43 - MAXIMUM PARALLEL ASYNCH IO 



aio-max-nr = 1048576
Top

Old log files in client directory in crs_home

Recommendation
 Having many old log files in the $CRS_HOME/log/hostname/client directory can cause CRS performance issues, so delete log files older than 15 days.
 
Needs attention on: grac41, grac42, grac43
Passed on: -

Status on grac41:
INFO => $CRS_HOME/log/hostname/client directory has too many older log files.


DATA FROM GRAC41 - OLD LOG FILES IN CLIENT DIRECTORY IN CRS_HOME 



315 files in /u01/app/11204/grid/log/grac41/client directory are older than 15 days

Status on grac42:
INFO => $CRS_HOME/log/hostname/client directory has too many older log files.


DATA FROM GRAC42 - OLD LOG FILES IN CLIENT DIRECTORY IN CRS_HOME 



99 files in /u01/app/11204/grid/log/grac42/client directory are older than 15 days

Status on grac43:
INFO => $CRS_HOME/log/hostname/client directory has too many older log files.


DATA FROM GRAC43 - OLD LOG FILES IN CLIENT DIRECTORY IN CRS_HOME 



69 files in /u01/app/11204/grid/log/grac43/client directory are older than 15 days
Top

OCR backup

Success Factor: USE EXTERNAL OR ORACLE PROVIDED REDUNDANCY FOR OCR
Recommendation
 Oracle Clusterware automatically creates OCR backups every four hours. At any one time, Oracle Database retains the last three backup copies of the OCR. The CRSD process that creates the backups also creates and retains an OCR backup for each full day and at the end of each week.
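The backup listing shown in the data below is the output of `ocrconfig -showbackup`. As a sketch of the related commands (run from the Grid home, typically as root; the backup directory path is a placeholder):

```shell
ocrconfig -showbackup              # list automatic and manual OCR backups
ocrconfig -manualbackup            # take an on-demand backup before major changes
ocrconfig -backuploc /path/to/dir  # optionally relocate the automatic backup directory
```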
 
Needs attention on: -
Passed on: grac41

Status on grac41:
PASS => OCR is being backed up daily


DATA FROM GRAC41 - OCR BACKUP 




grac42     2014/02/22 07:01:46     /u01/app/11204/grid/cdata/grac4/backup00.ocr

grac42     2014/02/22 03:01:45     /u01/app/11204/grid/cdata/grac4/backup01.ocr

grac42     2014/02/21 23:01:44     /u01/app/11204/grid/cdata/grac4/backup02.ocr

grac42     2014/02/21 03:01:36     /u01/app/11204/grid/cdata/grac4/day.ocr

grac42     2014/02/09 18:02:59     /u01/app/11204/grid/cdata/grac4/week.ocr
PROT-25: Manual backups for the Oracle Cluster Registry are not available

Check for parameter net.core.rmem_max

Success Factor: VALIDATE UDP BUFFER SIZE FOR RAC CLUSTER (LINUX)
Recommendation
 Summary of settings:

net.core.rmem_default = 262144
net.core.rmem_max = 2097152 (10g)
net.core.rmem_max = 4194304 (11g and above)

net.core.wmem_default = 262144
net.core.wmem_max = 1048576
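For 11g and above, those recommendations correspond to an /etc/sysctl.conf fragment like the following (apply with `sysctl -p` as root):

```
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
```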
 
Links
Needs attention on: -
Passed on: grac41, grac42, grac43

Status on grac41:
PASS => net.core.rmem_max is Configured Properly

net.core.rmem_max = 4194304

Status on grac42:
PASS => net.core.rmem_max is Configured Properly

net.core.rmem_max = 4194304

Status on grac43:
PASS => net.core.rmem_max is Configured Properly

net.core.rmem_max = 4194304

Check for parameter spfile

Recommendation
 Oracle recommends using a single spfile for all instances of a clustered database. With an spfile, the DBA can change many parameters dynamically.
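A quick way to confirm this on each node (a sketch; assumes sqlplus is on the PATH with OS authentication for the local instance):

```shell
# Every instance should report the same shared spfile stored in ASM.
echo "show parameter spfile" | sqlplus -s / as sysdba
# per this report, each instance shows: +DATA/grac4/spfilegrac4.ora
```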
 
Links
Needs attention on: -
Passed on: grac41, grac42, grac43

Status on grac41:
PASS => Instance is using spfile

grac41.spfile = +DATA/grac4/spfilegrac4.ora                                     

Status on grac42:
PASS => Instance is using spfile

grac42.spfile = +DATA/grac4/spfilegrac4.ora                                     

Status on grac43:
PASS => Instance is using spfile

grac43.spfile = +DATA/grac4/spfilegrac4.ora                                     

Non-routable network for interconnect

Success Factor: USE NON-ROUTABLE NETWORK ADDRESSES FOR PRIVATE INTERCONNECT
Recommendation
 The interconnect should be configured on a non-routable private LAN; the interconnect IP addresses should not be accessible from outside that LAN.
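The interface assignments can be verified with oifcfg from the Grid home; the cluster_interconnect entry should be on an RFC 1918 (non-routable) subnet:

```shell
# Show cluster network interface assignments. In this cluster the
# interconnect is eth2 on 192.168.2.0, a non-routable subnet.
oifcfg getif
```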
 
Links
Needs attention on: -
Passed on: grac41, grac42, grac43

Status on grac41:
PASS => Interconnect is configured on non-routable network addresses


DATA FROM GRAC41 - NON-ROUTABLE NETWORK FOR INTERCONNECT 



eth2  192.168.2.0  global  cluster_interconnect

Status on grac42:
PASS => Interconnect is configured on non-routable network addresses


DATA FROM GRAC42 - NON-ROUTABLE NETWORK FOR INTERCONNECT 



eth2  192.168.2.0  global  cluster_interconnect

Status on grac43:
PASS => Interconnect is configured on non-routable network addresses


DATA FROM GRAC43 - NON-ROUTABLE NETWORK FOR INTERCONNECT 



eth2  192.168.2.0  global  cluster_interconnect

Hostname Formatting

Success Factor: DO NOT USE UNDERSCORE IN HOST OR DOMAIN NAME
Recommendation
 Underscores should not be used in a host or domain name, per RFC 952 (the DoD Internet host table specification). The same applies to net, host, gateway, and domain names.
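A minimal sketch of the check itself, using this cluster's hostnames:

```shell
# Return success if the given name contains an underscore (disallowed by RFC 952).
has_underscore() {
  case "$1" in
    *_*) return 0 ;;
    *)   return 1 ;;
  esac
}

for h in grac41 grac42 grac43; do
  if has_underscore "$h"; then echo "FAIL: $h"; else echo "PASS: $h"; fi
done
# prints PASS for all three nodes
```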


 
Links
Needs attention on: -
Passed on: grac41, grac42, grac43

Status on grac41:
PASS => None of the hostnames contains an underscore character


DATA FROM GRAC41 - HOSTNAME FORMATING 



grac41
grac42
grac43

Status on grac42:
PASS => None of the hostnames contains an underscore character


DATA FROM GRAC42 - HOSTNAME FORMATING 



grac41
grac42
grac43

Status on grac43:
PASS => None of the hostnames contains an underscore character


DATA FROM GRAC43 - HOSTNAME FORMATING 



grac41
grac42
grac43

Check for parameter net.core.rmem_default

Success Factor: VALIDATE UDP BUFFER SIZE FOR RAC CLUSTER (LINUX)
Recommendation
 Summary of settings:

net.core.rmem_default = 262144
net.core.rmem_max = 2097152 (10g)
net.core.rmem_max = 4194304 (11g and above)

net.core.wmem_default = 262144
net.core.wmem_max = 1048576
 
Links
Needs attention on: -
Passed on: grac41, grac42, grac43

Status on grac41:
PASS => net.core.rmem_default Is Configured Properly

net.core.rmem_default = 262144

Status on grac42:
PASS => net.core.rmem_default Is Configured Properly

net.core.rmem_default = 262144

Status on grac43:
PASS => net.core.rmem_default Is Configured Properly

net.core.rmem_default = 262144

Check for parameter net.core.wmem_max

Success Factor: VALIDATE UDP BUFFER SIZE FOR RAC CLUSTER (LINUX)
Recommendation
 Summary of settings:

net.core.rmem_default = 262144
net.core.rmem_max = 2097152 (10g)
net.core.rmem_max = 4194304 (11g and above)

net.core.wmem_default = 262144
net.core.wmem_max = 1048576
 
Links
Needs attention on: -
Passed on: grac41, grac42, grac43

Status on grac41:
PASS => net.core.wmem_max Is Configured Properly

net.core.wmem_max = 1048576

Status on grac42:
PASS => net.core.wmem_max Is Configured Properly

net.core.wmem_max = 1048576

Status on grac43:
PASS => net.core.wmem_max Is Configured Properly

net.core.wmem_max = 1048576

Check for parameter net.core.wmem_default

Success Factor: VALIDATE UDP BUFFER SIZE FOR RAC CLUSTER (LINUX)
Recommendation
 Summary of settings:

net.core.rmem_default = 262144
net.core.rmem_max = 2097152 (10g)
net.core.rmem_max = 4194304 (11g and above)

net.core.wmem_default = 262144
net.core.wmem_max = 1048576
 
Links
Needs attention on: -
Passed on: grac41, grac42, grac43

Status on grac41:
PASS => net.core.wmem_default Is Configured Properly

net.core.wmem_default = 262144

Status on grac42:
PASS => net.core.wmem_default Is Configured Properly

net.core.wmem_default = 262144

Status on grac43:
PASS => net.core.wmem_default Is Configured Properly

net.core.wmem_default = 262144

CRS HOME env variable

Success Factor: AVOID SETTING ORA_CRS_HOME ENVIRONMENT VARIABLE
Recommendation
 Benefit / Impact:

Avoids unexpected results when running various Oracle utilities.

Risk:

Setting this variable can cause problems for various Oracle components, and it is never necessary for CRS programs because they all have wrapper scripts.

Action / Repair:

 Unset ORA_CRS_HOME in the execution environment. If a variable is needed for automation purposes or convenience, use a different variable name (e.g., GI_HOME).
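For example, in the shell profile (GI_HOME is an illustrative name, not an Oracle-defined variable; the path is this cluster's Grid home):

```shell
# Ensure ORA_CRS_HOME is not set; use a differently named variable if a
# pointer to the Grid home is needed for scripts.
unset ORA_CRS_HOME
export GI_HOME=/u01/app/11204/grid
```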
 
Links
Needs attention on: -
Passed on: grac41, grac42, grac43

Status on grac41:
PASS => ORA_CRS_HOME environment variable is not set


DATA FROM GRAC41 - CRS HOME ENV VARIABLE 




ORA_CRS_HOME environment variable not set


Status on grac42:
PASS => ORA_CRS_HOME environment variable is not set


DATA FROM GRAC42 - CRS HOME ENV VARIABLE 




ORA_CRS_HOME environment variable not set


Status on grac43:
PASS => ORA_CRS_HOME environment variable is not set


DATA FROM GRAC43 - CRS HOME ENV VARIABLE 




ORA_CRS_HOME environment variable not set


AUDSES$ sequence cache size

Success Factor: CACHE APPLICATION SEQUENCES AND SOME SYSTEM SEQUENCES FOR BETTER PERFORMANCE
Recommendation
 Use a large cache value, perhaps 10,000 or more. NOORDER is most effective, but it affects strict ordering: sequence numbers may not be returned in strict time order. Problems have been reported with AUDSES$ and ORA_TQ_BASE$, which are both internal sequences. Caching matters particularly when the order of an application sequence is unimportant, or when the sequence is used during the login process and can therefore be involved in a login storm. Sequences that must be presented in a particular order should not be cached, but where order does not matter, caching them improves performance. Contention on uncached sequences also manifests as waits in the row cache for "dc_sequences", the row cache type for sequences.

For Oracle Applications this can cause significant issues, especially with transactional sequences. Please see the attached note, which applies to:

Oracle General Ledger - Version 11.5.0 to 11.5.10
Oracle Payables - Version 11.5.0 to 11.5.10
Oracle Receivables - Version 11.5.10.2
Information in this document applies to any platform.
ARXTWAI, ARXRWMAI

Increase the IDGEN1$ cache to a value of 1000 (see notes below); this is the default as of 11.2.0.1.
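As a sketch (assumes sqlplus is on the PATH with SYSDBA access; alter SYS-owned sequences only per Oracle Support guidance), the cache sizes can be checked and raised like this:

```shell
sqlplus -s / as sysdba <<'EOF'
select sequence_name, cache_size
  from dba_sequences
 where sequence_owner = 'SYS'
   and sequence_name in ('AUDSES$', 'IDGEN1$');
-- raise a cache that is below the recommendation, e.g.:
-- alter sequence sys.audses$ cache 10000;
EOF
```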
 
Links
Needs attention on: -
Passed on: grac4

Status on grac4:
PASS => SYS.AUDSES$ sequence cache size >= 10,000


DATA FOR GRAC4 FOR AUDSES$ SEQUENCE CACHE SIZE 




audses$.cache_size = 10000                                                      

IDGEN$ sequence cache size

Success Factor: CACHE APPLICATION SEQUENCES AND SOME SYSTEM SEQUENCES FOR BETTER PERFORMANCE
Recommendation
 Sequence contention (SQ enqueue) can occur if the SYS.IDGEN1$ sequence cache is not set to 1000. This condition can lead to performance issues in RAC. 1000 is the default starting with version 11.2.0.1.
 
Links
Needs attention on: -
Passed on: grac4

Status on grac4:
PASS => SYS.IDGEN1$ sequence cache size >= 1,000


DATA FOR GRAC4 FOR IDGEN$ SEQUENCE CACHE SIZE 




idgen1$.cache_size = 1000                                                       