Sunday, December 3, 2023

Liquibase Usage

 

  1. Changeset Identifier:

    • David:96A507E7-F45F-4937-BF8C-5165201BB7CD
      • The changeset identifier uniquely identifies this changeset. It typically includes the author's name (David in this case) and a universally unique identifier (UUID) to ensure uniqueness.
  2. endDelimiter:

    • GO
      • The endDelimiter attribute specifies the delimiter that marks the end of the SQL statements within a changeset. In this case, it is set to GO. This is commonly used in SQL Server scripts.
  3. splitStatements:

    • true
      • The splitStatements attribute determines whether Liquibase should split the SQL into separate statements at the specified endDelimiter. When set to true, Liquibase treats each batch between delimiters as a separate SQL statement (an example SQL body follows this list).
  4. stripComments:

    • false
      • The stripComments attribute controls whether Liquibase should remove comments from the SQL statements. When set to false, comments in the SQL script are retained.
  5. runAlways:

    • true
      • The runAlways attribute indicates that this changeset should be executed every time Liquibase runs, regardless of whether the changeset has been run before or not.
  6. runOnChange:

    • true
      • The runOnChange attribute specifies that the changeset should be executed if the changeset file has changed since the last execution. This is useful for scenarios where you want to rerun the changeset if it has been modified.
  7. failOnError:

    • false
      • The failOnError attribute determines whether Liquibase should halt the execution of the entire migration if an error occurs during the execution of this specific changeset. When set to false, Liquibase will log the error but continue with subsequent changesets.
       
    <changeSet id="David:96A507E7-F45F-4937-BF8C-5165201BB7CD"
               author="David"
               endDelimiter="GO"
               splitStatements="true"
               stripComments="false"
               runAlways="true"
               runOnChange="true"
               failOnError="false">
        <!-- Your SQL changes go here -->
    </changeSet>
     
  8. The NEWID() function in SQL Server generates a new uniqueidentifier (UUID) value. To generate a UUID for use in a Liquibase changeset identifier, run the following SQL query:

SELECT NEWID() AS GeneratedUUID;
GeneratedUUID
------------------------------------
96A507E7-F45F-4937-BF8C-5165201BB7CD

Replace 96A507E7-F45F-4937-BF8C-5165201BB7CD in the changeset id with the UUID returned by the query so that each changeset gets a unique identifier. The value will differ each time you run the query, but any generated UUID is equally valid and unique.
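
For example, with endDelimiter="GO" and splitStatements="true", the SQL placed inside the changeset (where the "Your SQL changes go here" comment sits) can contain several GO-delimited batches, each of which Liquibase executes as a separate statement. A minimal sketch follows; the dbo.OrderStatus table and its seed rows are invented for illustration, and the script is written to be re-runnable because runAlways and runOnChange are both true:

IF OBJECT_ID('dbo.OrderStatus', 'U') IS NULL
    CREATE TABLE dbo.OrderStatus (Id INT PRIMARY KEY, Name NVARCHAR(50) NOT NULL);
GO

-- kept in the deployed SQL because stripComments="false": seed the lookup rows idempotently
MERGE dbo.OrderStatus AS t
USING (VALUES (1, N'New'), (2, N'Shipped')) AS s (Id, Name)
    ON t.Id = s.Id
WHEN NOT MATCHED THEN
    INSERT (Id, Name) VALUES (s.Id, s.Name);
GO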

Tuesday, October 8, 2019

stats collection job using dbms_scheduler


set serveroutput on

BEGIN
     DBMS_SCHEDULER.CREATE_JOB (
          job_name => 'RAJ_STATS_REFRESH'
          ,job_type => 'PLSQL_BLOCK'
          ,job_action => 'Begin dbms_stats.gather_schema_stats(ownname => ''RAJ'', cascade => true); end;'
          ,start_date => '30-JAN-19 10.00.00PM US/Pacific'
          ,repeat_interval => 'FREQ=DAILY; INTERVAL=1'
          ,enabled => TRUE
          ,comments => 'Refreshes the RAJ Schema stats every night at 10 PM'
          );
END;
/
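
To kick off an immediate test run after creating the job (a quick sketch; DBMS_SCHEDULER.RUN_JOB is the standard call for this, and the job name matches the one created above):

exec DBMS_SCHEDULER.RUN_JOB(job_name => 'RAJ_STATS_REFRESH', use_current_session => TRUE);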


col JOB_NAME form a30
col STATE form a10
col SOURCE form a5

SELECT JOB_NAME,STATE,LAST_START_DATE,LAST_RUN_DURATION, NEXT_RUN_DATE FROM DBA_SCHEDULER_JOBS  WHERE JOB_NAME = 'RAJ';

select JOB_NAME, FAILURE_COUNT, LAST_START_DATE, LAST_RUN_DURATION from dba_scheduler_jobs WHERE JOB_NAME = 'RAJ';

SELECT JOB_NAME FROM DBA_SCHEDULER_JOBS WHERE JOB_NAME = 'RAJ';

--select * from dba_scheduler_job_run_details where job_name = 'RAJ';

col status form a10
col ACTUAL_START_DATE form a40
col RUN_DURATION form a15
select JOB_NAME, STATUS,  ERROR#, ACTUAL_START_DATE, RUN_DURATION from dba_scheduler_job_run_details where job_name ='RAJ';




SELECT JOB_NAME,STATE,LAST_START_DATE,LAST_RUN_DURATION, NEXT_RUN_DATE FROM DBA_SCHEDULER_JOBS  WHERE JOB_NAME = 'RAJ';

JOB_NAME   STATE      LAST_START_DATE                          LAST_RUN_DURATION  NEXT_RUN_DATE
---------- ---------- ---------------------------------------- -----------------  ---------------------------------------
RAJ        RUNNING    03-FEB-19 12.55.00.156811 PM US/PACIFIC                      03-FEB-19 12.55.00.100000 PM US/PACIFIC


--- Rollback:


> conn / as sysdba
Connected.
12:21:46 SYS@RAJ> BEGIN
12:21:51   2  DBMS_SCHEDULER.DROP_JOB( JOB_NAME => 'RAJ');
12:21:56   3   END;
12:21:59   4  /

PL/SQL procedure successfully completed.

How to purge a single SQL cursor from the shared pool


select address, hash_value from v$sqlarea
where sql_text = 'select count(c2) from skew where c1 = :bind';

ADDRESS  HASH_VALUE
-------- ----------
27308318 2934790721

exec DBMS_SHARED_POOL.PURGE ('00000008BB871740, 3842003817', 'C');
Run this on your database, passing the cursor's ADDRESS and HASH_VALUE as a single 'address, hash_value' string, with 'C' indicating a cursor.


The reason is that this SQL has produced a huge number of child cursor versions; this appears to be a bug on CS, since on other databases we see only 1 or 2 versions of the same statement.
10:03:04 SQL> select parsing_schema_name, version_count from v$sqlarea where sql_id='ak6up2gkh0nv9';

PARSING_SCHEMA_NAME            VERSION_COUNT
------------------------------ -------------
OPS$ORACLE                             57448


ak6up2gkh0nv9 select a.MRP, b.InTraf from (select decode(count(1), 0, 'N', 'Y') MRP from v$session where program like '%MRP%' and type='BACKGROUND') a, (select count(1) InTraf from gv$session where machine like '%occ%' or machine like '%paypal.com%' or machine like '%etl%') b
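
To see why so many child cursors piled up for this statement, each child's sharing-failure flags can be inspected (a hedged aside; every column set to 'Y' in this view is a reason that child cursor could not be shared):

select * from v$sql_shared_cursor where sql_id = 'ak6up2gkh0nv9';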



=== verification


select ADDRESS, HASH_VALUE from V$SQLAREA where SQL_ID = 'ak6up2gkh0nv9';

This should return no rows once the cursor has been purged.
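
Alternatively, the lookup and the purge can be combined in a single PL/SQL block (a minimal sketch for the same SQL_ID; DBMS_SHARED_POOL.PURGE takes the cursor's 'address, hash_value' string plus the 'C' flag for a cursor):

DECLARE
  v_address    v$sqlarea.address%TYPE;
  v_hash_value v$sqlarea.hash_value%TYPE;
BEGIN
  -- look up the parent cursor for the SQL_ID we want to purge
  SELECT address, hash_value
    INTO v_address, v_hash_value
    FROM v$sqlarea
   WHERE sql_id = 'ak6up2gkh0nv9';

  -- 'C' tells DBMS_SHARED_POOL.PURGE the object is a cursor
  DBMS_SHARED_POOL.PURGE(RAWTOHEX(v_address) || ', ' || v_hash_value, 'C');
END;
/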



Saturday, October 13, 2018

what happens when you de-config your RAC Cluster.. (from the last node)


 /u01/home/grid/12.2.0.1/crs/install/rootcrs.sh -deconfig -force -lastnode
Using configuration parameter file: /u01/home/grid/12.2.0.1/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/home/oracle/crsdata/orclhostdb02/crsconfig/crsdeconfig_orclhostdb02_2018-10-11_04-00-02PM.log
2018/10/11 16:00:05 CLSRSC-332: CRS resources for listeners are still configured
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'orclhostdb02'
CRS-2673: Attempting to stop 'ora.crsd' on 'orclhostdb02'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on server 'orclhostdb02'
CRS-2673: Attempting to stop 'ora.ORCL_FRA.dg' on 'orclhostdb02'
CRS-2677: Stop of 'ora.ORCL_FRA.dg' on 'orclhostdb02' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'orclhostdb02'
CRS-2677: Stop of 'ora.asm' on 'orclhostdb02' succeeded
CRS-2673: Attempting to stop 'ora.ASMNET1LSNR_ASM.lsnr' on 'orclhostdb02'
CRS-2677: Stop of 'ora.ASMNET1LSNR_ASM.lsnr' on 'orclhostdb02' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'orclhostdb02' has completed
CRS-2677: Stop of 'ora.crsd' on 'orclhostdb02' succeeded
CRS-2673: Attempting to stop 'ora.storage' on 'orclhostdb02'
CRS-2673: Attempting to stop 'ora.crf' on 'orclhostdb02'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'orclhostdb02'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'orclhostdb02'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'orclhostdb02'
CRS-2677: Stop of 'ora.drivers.acfs' on 'orclhostdb02' succeeded
CRS-2677: Stop of 'ora.crf' on 'orclhostdb02' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'orclhostdb02' succeeded
CRS-2677: Stop of 'ora.storage' on 'orclhostdb02' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'orclhostdb02'
CRS-2677: Stop of 'ora.mdnsd' on 'orclhostdb02' succeeded
CRS-2677: Stop of 'ora.asm' on 'orclhostdb02' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'orclhostdb02'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'orclhostdb02' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'orclhostdb02'
CRS-2673: Attempting to stop 'ora.evmd' on 'orclhostdb02'
CRS-2677: Stop of 'ora.ctssd' on 'orclhostdb02' succeeded
CRS-2677: Stop of 'ora.evmd' on 'orclhostdb02' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'orclhostdb02'
CRS-2677: Stop of 'ora.cssd' on 'orclhostdb02' succeeded
CRS-2673: Attempting to stop 'ora.driver.afd' on 'orclhostdb02'
CRS-2673: Attempting to stop 'ora.gipcd' on 'orclhostdb02'
CRS-2677: Stop of 'ora.driver.afd' on 'orclhostdb02' succeeded
CRS-2677: Stop of 'ora.gipcd' on 'orclhostdb02' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'orclhostdb02' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-2672: Attempting to start 'ora.driver.afd' on 'orclhostdb02'
CRS-2672: Attempting to start 'ora.evmd' on 'orclhostdb02'
CRS-2672: Attempting to start 'ora.mdnsd' on 'orclhostdb02'
CRS-2676: Start of 'ora.driver.afd' on 'orclhostdb02' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'orclhostdb02'
CRS-2676: Start of 'ora.cssdmonitor' on 'orclhostdb02' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'orclhostdb02' succeeded
CRS-2676: Start of 'ora.evmd' on 'orclhostdb02' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'orclhostdb02'
CRS-2676: Start of 'ora.gpnpd' on 'orclhostdb02' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'orclhostdb02'
CRS-2676: Start of 'ora.gipcd' on 'orclhostdb02' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'orclhostdb02'
CRS-2672: Attempting to start 'ora.diskmon' on 'orclhostdb02'
CRS-2676: Start of 'ora.diskmon' on 'orclhostdb02' succeeded
CRS-2676: Start of 'ora.cssd' on 'orclhostdb02' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'orclhostdb02'
CRS-2672: Attempting to start 'ora.ctssd' on 'orclhostdb02'
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'orclhostdb02'
CRS-2676: Start of 'ora.crf' on 'orclhostdb02' succeeded
CRS-2676: Start of 'ora.ctssd' on 'orclhostdb02' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'orclhostdb02' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'orclhostdb02'
CRS-2676: Start of 'ora.asm' on 'orclhostdb02' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'orclhostdb02'
CRS-2676: Start of 'ora.storage' on 'orclhostdb02' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'orclhostdb02'
CRS-2676: Start of 'ora.crsd' on 'orclhostdb02' succeeded
CRS-2673: Attempting to stop 'ora.crsd' on 'orclhostdb02'
CRS-2677: Stop of 'ora.crsd' on 'orclhostdb02' succeeded
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'orclhostdb02'
CRS-2673: Attempting to stop 'ora.ctssd' on 'orclhostdb02'
CRS-2673: Attempting to stop 'ora.evmd' on 'orclhostdb02'
CRS-2673: Attempting to stop 'ora.storage' on 'orclhostdb02'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'orclhostdb02'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'orclhostdb02'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'orclhostdb02'
CRS-2677: Stop of 'ora.ctssd' on 'orclhostdb02' succeeded
CRS-2677: Stop of 'ora.evmd' on 'orclhostdb02' succeeded
CRS-2677: Stop of 'ora.storage' on 'orclhostdb02' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'orclhostdb02'
CRS-2677: Stop of 'ora.drivers.acfs' on 'orclhostdb02' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'orclhostdb02' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'orclhostdb02' succeeded
CRS-2677: Stop of 'ora.asm' on 'orclhostdb02' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'orclhostdb02'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'orclhostdb02' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'orclhostdb02'
CRS-2677: Stop of 'ora.cssd' on 'orclhostdb02' succeeded
CRS-2673: Attempting to stop 'ora.crf' on 'orclhostdb02'
CRS-2673: Attempting to stop 'ora.driver.afd' on 'orclhostdb02'
CRS-2677: Stop of 'ora.driver.afd' on 'orclhostdb02' succeeded
CRS-2677: Stop of 'ora.crf' on 'orclhostdb02' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'orclhostdb02'
CRS-2677: Stop of 'ora.gipcd' on 'orclhostdb02' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'orclhostdb02' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-2672: Attempting to start 'ora.driver.afd' on 'orclhostdb02'
CRS-2672: Attempting to start 'ora.evmd' on 'orclhostdb02'
CRS-2672: Attempting to start 'ora.mdnsd' on 'orclhostdb02'
CRS-2676: Start of 'ora.driver.afd' on 'orclhostdb02' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'orclhostdb02'
CRS-2676: Start of 'ora.cssdmonitor' on 'orclhostdb02' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'orclhostdb02' succeeded
CRS-2676: Start of 'ora.evmd' on 'orclhostdb02' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'orclhostdb02'
CRS-2676: Start of 'ora.gpnpd' on 'orclhostdb02' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'orclhostdb02'
CRS-2676: Start of 'ora.gipcd' on 'orclhostdb02' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'orclhostdb02'
CRS-2672: Attempting to start 'ora.diskmon' on 'orclhostdb02'
CRS-2676: Start of 'ora.diskmon' on 'orclhostdb02' succeeded
CRS-2676: Start of 'ora.cssd' on 'orclhostdb02' succeeded
ASM de-configuration trace file location: /u01/home/oracle/cfgtoollogs/asmca/asmcadc_clean2018-10-11_04-03-06-PM.log
ASM Clean Configuration START
ASM Clean Configuration END

ASM instance deleted successfully. Check /u01/home/oracle/cfgtoollogs/asmca/asmcadc_clean2018-10-11_04-03-06-PM.log for details.

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'orclhostdb02'
CRS-2673: Attempting to stop 'ora.evmd' on 'orclhostdb02'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'orclhostdb02'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'orclhostdb02'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'orclhostdb02'
CRS-2677: Stop of 'ora.drivers.acfs' on 'orclhostdb02' succeeded
CRS-2677: Stop of 'ora.evmd' on 'orclhostdb02' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'orclhostdb02'
CRS-2677: Stop of 'ora.mdnsd' on 'orclhostdb02' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'orclhostdb02' succeeded
CRS-2677: Stop of 'ora.cssd' on 'orclhostdb02' succeeded
CRS-2673: Attempting to stop 'ora.driver.afd' on 'orclhostdb02'
CRS-2673: Attempting to stop 'ora.gipcd' on 'orclhostdb02'
CRS-2677: Stop of 'ora.driver.afd' on 'orclhostdb02' succeeded
CRS-2677: Stop of 'ora.gipcd' on 'orclhostdb02' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'orclhostdb02' has completed
CRS-4133: Oracle High Availability Services has been stopped.
2018/10/11 16:04:25 CLSRSC-4006: Removing Oracle Trace File Analyzer (TFA) Collector.
2018/10/11 16:04:38 CLSRSC-4007: Successfully removed Oracle Trace File Analyzer (TFA) Collector.
2018/10/11 16:04:40 CLSRSC-336: Successfully deconfigured Oracle Clusterware stack on this node
2018/10/11 16:04:40 CLSRSC-559: Ensure that the GPnP profile data under the 'gpnp' directory in /u01/home/grid/12.2.0.1 is deleted on each node before using the software in the current Grid Infrastructure home for reconfiguration.

=== LOG FILE :

 tail -f /u01/home/oracle/crsdata/orclhostdb02/crsconfig/crsdeconfig_orclhostdb02_2018-10-11_04-00-02PM.log

2018-10-11 16:00:02: Checking parameters from paramfile /u01/home/grid/12.2.0.1/crs/install/crsconfig_params to validate installer variables
2018-10-11 16:00:02: Skipping validation for ODA_CONFIG
2018-10-11 16:00:02: Skipping validation for OPC_CLUSTER_TYPE
2018-10-11 16:00:02: Skipping validation for OPC_NAT_ADDRESS
2018-10-11 16:00:02: The configuration parameter file /u01/home/grid/12.2.0.1/crs/install/crsconfig_params  is valid
2018-10-11 16:00:02: ### Printing the configuration values from files:
2018-10-11 16:00:02:    /u01/home/grid/12.2.0.1/crs/install/crsconfig_params
2018-10-11 16:00:02:    /u01/home/grid/12.2.0.1/crs/install/s_crsconfig_defs
2018-10-11 16:00:02: AFD_CONF=true
2018-10-11 16:00:02: APPLICATION_VIP=
2018-10-11 16:00:02: ASMCA_ARGS=
2018-10-11 16:00:02: ASM_CONFIG=near
2018-10-11 16:00:02: ASM_CREDENTIALS=
2018-10-11 16:00:02: ASM_DISCOVERY_STRING=/dev/mapper
2018-10-11 16:00:02: ASM_SPFILE=
2018-10-11 16:00:02: ASM_UPGRADE=false
2018-10-11 16:00:02: BIG_CLUSTER=true
2018-10-11 16:00:02: CDATA_AUSIZE=4
2018-10-11 16:00:02: CDATA_BACKUP_AUSIZE=
2018-10-11 16:00:02: CDATA_BACKUP_DISKS=
2018-10-11 16:00:02: CDATA_BACKUP_DISK_GROUP=
2018-10-11 16:00:02: CDATA_BACKUP_FAILURE_GROUPS=
2018-10-11 16:00:02: CDATA_BACKUP_QUORUM_GROUPS=
2018-10-11 16:00:02: CDATA_BACKUP_REDUNDANCY=
2018-10-11 16:00:02: CDATA_BACKUP_SITES=
2018-10-11 16:00:02: CDATA_BACKUP_SIZE=
2018-10-11 16:00:02: CDATA_DISKS=/dev/mapper/DENHPE20450_2_29p1
2018-10-11 16:00:02: CDATA_DISK_GROUP=ORCL_FRA
2018-10-11 16:00:02: CDATA_FAILURE_GROUPS=
2018-10-11 16:00:02: CDATA_QUORUM_GROUPS=
2018-10-11 16:00:02: CDATA_REDUNDANCY=EXTERNAL
2018-10-11 16:00:02: CDATA_SITES=
2018-10-11 16:00:02: CDATA_SIZE=
2018-10-11 16:00:02: CLSCFG_MISSCOUNT=
2018-10-11 16:00:02: CLUSTER_CLASS=STANDALONE
2018-10-11 16:00:02: CLUSTER_GUID=
2018-10-11 16:00:02: CLUSTER_NAME=ORCL-DEN-DB
2018-10-11 16:00:02: CLUSTER_TYPE=DB
2018-10-11 16:00:02: CRFHOME=/u01/home/grid/12.2.0.1
2018-10-11 16:00:02: CRS_LIMIT_CORE=unlimited
2018-10-11 16:00:02: CRS_LIMIT_MEMLOCK=unlimited
2018-10-11 16:00:02: CRS_LSNR_STACK=32768
2018-10-11 16:00:02: CRS_NODEVIPS='orclhostdb02-vip.mattew.com/255.255.254.0/bond0,orclhostdb01-vip.mattew.com/255.255.254.0/bond0'
2018-10-11 16:00:02: CRS_STORAGE_OPTION=1
2018-10-11 16:00:02: CSS_LEASEDURATION=400
2018-10-11 16:00:02: DC_HOME=
2018-10-11 16:00:02: DIRPREFIX=
2018-10-11 16:00:02: DISABLE_OPROCD=0
2018-10-11 16:00:02: EXTENDED_CLUSTER=false
2018-10-11 16:00:02: EXTENDED_CLUSTER_SITES=ORCL-DEN-DB
2018-10-11 16:00:02: EXTERNAL_ORACLE=/opt/oracle
2018-10-11 16:00:02: EXTERNAL_ORACLE_BIN=/opt/oracle/bin
2018-10-11 16:00:02: GIMR_CONFIG=local
2018-10-11 16:00:02: GIMR_CREDENTIALS=
2018-10-11 16:00:02: GNS_ADDR_LIST=
2018-10-11 16:00:02: GNS_ALLOW_NET_LIST=
2018-10-11 16:00:02: GNS_CONF=false
2018-10-11 16:00:02: GNS_CREDENTIALS=
2018-10-11 16:00:02: GNS_DENY_ITF_LIST=
2018-10-11 16:00:02: GNS_DENY_NET_LIST=
2018-10-11 16:00:02: GNS_DOMAIN_LIST=
2018-10-11 16:00:02: GNS_TYPE=
2018-10-11 16:00:02: GPNPCONFIGDIR=/u01/home/grid/12.2.0.1
2018-10-11 16:00:02: GPNPGCONFIGDIR=/u01/home/grid/12.2.0.1
2018-10-11 16:00:02: GPNP_PA=
2018-10-11 16:00:02: HUB_NODE_LIST=orclhostdb02,orclhostdb01
2018-10-11 16:00:02: HUB_NODE_VIPS=orclhostdb01-vip.mattew.com,orclhostdb02-vip.mattew.com
2018-10-11 16:00:02: HUB_SIZE=32
2018-10-11 16:00:02: ID=/etc/init.d
2018-10-11 16:00:02: INIT=/sbin/init
2018-10-11 16:00:02: INITCTL=/sbin/initctl
2018-10-11 16:00:02: INSTALL_NODE=orclhostdb02.mattew.com
2018-10-11 16:00:02: ISROLLING=true
2018-10-11 16:00:02: IT=/etc/inittab
2018-10-11 16:00:02: JLIBDIR=/u01/home/grid/12.2.0.1/jlib
2018-10-11 16:00:02: JREDIR=/u01/home/grid/12.2.0.1/jdk/jre/
2018-10-11 16:00:02: LANGUAGE_ID=AMERICAN_AMERICA.AL32UTF8
2018-10-11 16:00:02: LISTENER_USERNAME=oracle
2018-10-11 16:00:02: MGMT_DB=true
2018-10-11 16:00:02: MSGFILE=/var/adm/messages
2018-10-11 16:00:02: NETWORKS="bond0"/10.57.238.0:public,"eth2"/191.155.1.0:cluster_interconnect,"eth3"/191.155.2.0:asm,"eth3"/191.155.2.0:cluster_interconnect
2018-10-11 16:00:02: NEW_HOST_NAME_LIST=
2018-10-11 16:00:02: NEW_NODEVIPS='orclhostdb02-vip.mattew.com/255.255.254.0/bond0,orclhostdb01-vip.mattew.com/255.255.254.0/bond0'
2018-10-11 16:00:02: NEW_NODE_NAME_LIST=
2018-10-11 16:00:02: NEW_PRIVATE_NAME_LIST=
2018-10-11 16:00:02: NODE_NAME_LIST=orclhostdb02,orclhostdb01
2018-10-11 16:00:02: OCRCONFIG=/etc/oracle/ocr.loc
2018-10-11 16:00:02: OCRCONFIGDIR=/etc/oracle
2018-10-11 16:00:02: OCRID=
2018-10-11 16:00:02: OCRLOC=ocr.loc
2018-10-11 16:00:02: OCR_LOCATIONS=
2018-10-11 16:00:02: ODA_CONFIG=
2018-10-11 16:00:02: OLASTGASPDIR=/etc/oracle/lastgasp
2018-10-11 16:00:02: OLD_CRS_HOME=
2018-10-11 16:00:02: OLRCONFIG=/etc/oracle/olr.loc
2018-10-11 16:00:02: OLRCONFIGDIR=/etc/oracle
2018-10-11 16:00:02: OLRLOC=olr.loc
2018-10-11 16:00:02: OPC_CLUSTER_TYPE=
2018-10-11 16:00:02: OPC_NAT_ADDRESS=
2018-10-11 16:00:02: OPROCDCHECKDIR=/etc/oracle/oprocd/check
2018-10-11 16:00:02: OPROCDDIR=/etc/oracle/oprocd
2018-10-11 16:00:02: OPROCDFATALDIR=/etc/oracle/oprocd/fatal
2018-10-11 16:00:02: OPROCDSTOPDIR=/etc/oracle/oprocd/stop
2018-10-11 16:00:02: ORACLE_BASE=/u01/home/oracle
2018-10-11 16:00:02: ORACLE_HOME=/u01/home/grid/12.2.0.1
2018-10-11 16:00:02: ORACLE_OWNER=oracle
2018-10-11 16:00:02: ORA_ASM_GROUP=dba
2018-10-11 16:00:02: ORA_DBA_GROUP=dba
2018-10-11 16:00:02: PING_TARGETS=
2018-10-11 16:00:02: PRIVATE_NAME_LIST=
2018-10-11 16:00:02: RCALLDIR=/etc/rc.d/rc0.d /etc/rc.d/rc1.d /etc/rc.d/rc2.d /etc/rc.d/rc3.d /etc/rc.d/rc4.d /etc/rc.d/rc5.d /etc/rc.d/rc6.d
2018-10-11 16:00:02: RCKDIR=/etc/rc.d/rc0.d /etc/rc.d/rc1.d /etc/rc.d/rc2.d /etc/rc.d/rc4.d /etc/rc.d/rc6.d
2018-10-11 16:00:02: RCSDIR=/etc/rc.d/rc3.d /etc/rc.d/rc5.d
2018-10-11 16:00:02: RC_KILL=K15
2018-10-11 16:00:02: RC_KILL_OLD=K96
2018-10-11 16:00:02: RC_KILL_OLD2=K19
2018-10-11 16:00:02: RC_START=S96
2018-10-11 16:00:02: REUSEDG=false
2018-10-11 16:00:02: RHP_CONF=false
2018-10-11 16:00:02: RIM_NODE_LIST=
2018-10-11 16:00:02: SCAN_NAME=ORCL-DEN-DB.db.mattew.com
2018-10-11 16:00:02: SCAN_PORT=2115
2018-10-11 16:00:02: SCRBASE=/etc/oracle/scls_scr
2018-10-11 16:00:02: SILENT=true
2018-10-11 16:00:02: SO_EXT=so
2018-10-11 16:00:02: SRVCFGLOC=srvConfig.loc
2018-10-11 16:00:02: SRVCONFIG=/var/opt/oracle/srvConfig.loc
2018-10-11 16:00:02: SRVCONFIGDIR=/var/opt/oracle
2018-10-11 16:00:02: SYSTEMCTL=/usr/bin/systemctl
2018-10-11 16:00:02: SYSTEMD_SYSTEM_DIR=/etc/systemd/system
2018-10-11 16:00:02: TZ=US/Pacific
2018-10-11 16:00:02: UPSTART_INIT_DIR=/etc/init
2018-10-11 16:00:02: USER_IGNORED_PREREQ=true
2018-10-11 16:00:02: VNDR_CLUSTER=false
2018-10-11 16:00:02: VOTING_DISKS=
2018-10-11 16:00:02: ### Printing other configuration values ###
2018-10-11 16:00:02: CLSCFG_EXTRA_PARMS=
2018-10-11 16:00:02: DECONFIG=1
2018-10-11 16:00:02: FORCE=1
2018-10-11 16:00:02: HAS_GROUP=dba
2018-10-11 16:00:02: HAS_USER=root
2018-10-11 16:00:02: HOST=orclhostdb02
2018-10-11 16:00:02: LASTNODE=1
2018-10-11 16:00:02: OLR_DIRECTORY=/u01/home/grid/12.2.0.1/cdata
2018-10-11 16:00:02: OLR_LOCATION=/u01/home/grid/12.2.0.1/cdata/orclhostdb02.olr
2018-10-11 16:00:02: ORA_CRS_HOME=/u01/home/grid/12.2.0.1
2018-10-11 16:00:02: SIHA=0
2018-10-11 16:00:02: SUCC_REBOOT=0
2018-10-11 16:00:02: SUPERUSER=root
2018-10-11 16:00:02: addfile=/u01/home/grid/12.2.0.1/crs/install/crsconfig_addparams
2018-10-11 16:00:02: cluutil_trc_suff_pp=0
2018-10-11 16:00:02: crscfg_trace=1
2018-10-11 16:00:02: crscfg_trace_file=/u01/home/oracle/crsdata/orclhostdb02/crsconfig/crsdeconfig_orclhostdb02_2018-10-11_04-00-02PM.log
2018-10-11 16:00:02: old_nodevips=
2018-10-11 16:00:02: osdfile=/u01/home/grid/12.2.0.1/crs/install/s_crsconfig_defs
2018-10-11 16:00:02: parameters_valid=1
2018-10-11 16:00:02: paramfile=/u01/home/grid/12.2.0.1/crs/install/crsconfig_params
2018-10-11 16:00:02: platform_family=unix
2018-10-11 16:00:02: pp_srvctl_trc_suff=0
2018-10-11 16:00:02: srvctl_trc_suff=0
2018-10-11 16:00:02: srvctl_trc_suff_pp=0
2018-10-11 16:00:02: stackStartLevel=11
2018-10-11 16:00:02: user_is_superuser=1
2018-10-11 16:00:02: ### Printing of configuration values complete ###
2018-10-11 16:00:02: Save the ASM password file location: +ORCL_FRA/orapwASM
2018-10-11 16:00:02: Print system environment variables:
2018-10-11 16:00:02: CVS_RSH = ssh
2018-10-11 16:00:02: EDITOR = vi
2018-10-11 16:00:02: G_BROKEN_FILENAMES = 1
2018-10-11 16:00:02: HOME = /root
2018-10-11 16:00:02: LANG = en_US.UTF-8
2018-10-11 16:00:02: LD_LIBRARY_PATH = /u01/home/grid/12.2.0.1/lib:
2018-10-11 16:00:02: LESSOPEN = ||/usr/bin/lesspipe.sh
2018-10-11 16:00:02: LOGNAME = root
2018-10-11 16:00:02: LS_COLORS = rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=01;05;37;41:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arj=01;31:*.taz=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lz=01;31:*.xz=01;31:*.bz2=01;31:*.tbz=01;31:*.tbz2=01;31:*.bz=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.rar=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=01;36:*.au=01;36:*.flac=01;36:*.mid=01;36:*.midi=01;36:*.mka=01;36:*.mp3=01;36:*.mpc=01;36:*.ogg=01;36:*.ra=01;36:*.wav=01;36:*.axa=01;36:*.oga=01;36:*.spx=01;36:*.xspf=01;36:
2018-10-11 16:00:02: MAIL = /var/mail/root
2018-10-11 16:00:02: ORACLE_BASE = /u01/home/oracle
2018-10-11 16:00:02: ORACLE_HOME = /u01/home/grid/12.2.0.1
2018-10-11 16:00:02: PATH = /usr/lib64/qt-3.3/bin:/sbin:/bin:/usr/sbin:/usr/bin
2018-10-11 16:00:02: PWD = /u01/home/grid/12.2.0.1/addnode
2018-10-11 16:00:02: QTDIR = /usr/lib64/qt-3.3
2018-10-11 16:00:02: QTINC = /usr/lib64/qt-3.3/include
2018-10-11 16:00:02: QTLIB = /usr/lib64/qt-3.3/lib
2018-10-11 16:00:02: SHELL = /bin/bash
2018-10-11 16:00:02: SHLVL = 2
2018-10-11 16:00:02: SUDO_COMMAND = /bin/bash
2018-10-11 16:00:02: SUDO_GID = 533
2018-10-11 16:00:02: SUDO_UID = 969
2018-10-11 16:00:02: SUDO_USER = oracle
2018-10-11 16:00:02: TERM = xterm
2018-10-11 16:00:02: TZ = US/Pacific
2018-10-11 16:00:02: USER = root
2018-10-11 16:00:02: USERNAME = root
2018-10-11 16:00:02: _ = /u01/home/grid/12.2.0.1/perl/bin/perl
2018-10-11 16:00:02: Perform initialization tasks before configuring ACFS
2018-10-11 16:00:02: Executing pwdx 24744 >/dev/null 2>&1
2018-10-11 16:00:02: Executing cmd: pwdx 24744 >/dev/null 2>&1
2018-10-11 16:00:02: Executing pwdx 24754 >/dev/null 2>&1
2018-10-11 16:00:02: Executing cmd: pwdx 24754 >/dev/null 2>&1
2018-10-11 16:00:02: Executing pwdx 24771 >/dev/null 2>&1
2018-10-11 16:00:02: Executing cmd: pwdx 24771 >/dev/null 2>&1
2018-10-11 16:00:02: Executing pwdx 24773 >/dev/null 2>&1
2018-10-11 16:00:02: Executing cmd: pwdx 24773 >/dev/null 2>&1
2018-10-11 16:00:02: Executing pwdx 24775 >/dev/null 2>&1
2018-10-11 16:00:02: Executing cmd: pwdx 24775 >/dev/null 2>&1
2018-10-11 16:00:02: Executing pwdx 7771 >/dev/null 2>&1
2018-10-11 16:00:02: Executing cmd: pwdx 7771 >/dev/null 2>&1
2018-10-11 16:00:02: Running /u01/home/grid/12.2.0.1/bin/acfsdriverstate installed -s
2018-10-11 16:00:02: Executing cmd: /u01/home/grid/12.2.0.1/bin/acfsdriverstate installed -s
2018-10-11 16:00:03: acfs is installed
2018-10-11 16:00:03: Running /u01/home/grid/12.2.0.1/bin/acfsdriverstate loaded -s
2018-10-11 16:00:03: Executing cmd: /u01/home/grid/12.2.0.1/bin/acfsdriverstate loaded -s
2018-10-11 16:00:03: acfs is loaded
2018-10-11 16:00:03: Executing cmd: /sbin/acfsutil info fs "/u01/home/grid/12.2.0.1/addnode" -o mountpoint
2018-10-11 16:00:03: Command output:
>  acfsutil info fs: ACFS-03037: not an ACFS file system
>End Command output
2018-10-11 16:00:03: Running /u01/home/grid/12.2.0.1/bin/acfsdriverstate installed -s
2018-10-11 16:00:03: Executing cmd: /u01/home/grid/12.2.0.1/bin/acfsdriverstate installed -s
2018-10-11 16:00:03: acfs is installed
2018-10-11 16:00:03: Running /u01/home/grid/12.2.0.1/bin/acfsdriverstate loaded -s
2018-10-11 16:00:03: Executing cmd: /u01/home/grid/12.2.0.1/bin/acfsdriverstate loaded -s
2018-10-11 16:00:03: acfs is loaded
2018-10-11 16:00:03: Executing cmd: /sbin/acfsutil info fs "/home/rmattewada" -o mountpoint
2018-10-11 16:00:03: Command output:
>  acfsutil info fs: ACFS-03037: not an ACFS file system
>End Command output
2018-10-11 16:00:03: Performing few checks before running scripts
2018-10-11 16:00:03: Attempt to get current working directory
2018-10-11 16:00:03: Running as user oracle: pwd
2018-10-11 16:00:03: s_run_as_user2: Running /bin/su oracle -c ' echo CLSRSC_START; pwd '
2018-10-11 16:00:03: Removing file /tmp/6nsnisflxU
2018-10-11 16:00:03: Successfully removed file: /tmp/6nsnisflxU
2018-10-11 16:00:03: pipe exit code: 0
2018-10-11 16:00:03: /bin/su successfully executed

2018-10-11 16:00:03: The current working directory: /u01/home/grid/12.2.0.1/addnode
2018-10-11 16:00:03: Change working directory to safe directory /u01/home/grid/12.2.0.1
2018-10-11 16:00:03: Pre-checks for running the rootcrs script passed.
2018-10-11 16:00:03: Deconfiguring Oracle Clusterware on this node
2018-10-11 16:00:03: Executing the [DeconfigValidate] step with checkpoint [null] ...
2018-10-11 16:00:03: Perform initialization tasks before configuring OLR
2018-10-11 16:00:03: Perform initialization tasks before configuring OCR
2018-10-11 16:00:03: Perform initialization tasks before configuring CHM
2018-10-11 16:00:03: Perform prechecks for deconfiguration
2018-10-11 16:00:03: options=-force -lastnode
2018-10-11 16:00:03: Validate crsctl command
2018-10-11 16:00:03: Validating /u01/home/grid/12.2.0.1/bin/crsctl
2018-10-11 16:00:03: Executing the [DeconfigResources] step with checkpoint [null] ...
2018-10-11 16:00:03: Verifying the existence of CRS resources used by Oracle RAC databases
2018-10-11 16:00:03: Check if CRS is running
2018-10-11 16:00:03: Configured CRS Home: /u01/home/grid/12.2.0.1
2018-10-11 16:00:03: Running /u01/home/grid/12.2.0.1/bin/crsctl check crs
2018-10-11 16:00:03: Executing cmd: /u01/home/grid/12.2.0.1/bin/crsctl check crs
2018-10-11 16:00:03: Command output:
>  CRS-4638: Oracle High Availability Services is online
>  CRS-4537: Cluster Ready Services is online
>  CRS-4529: Cluster Synchronization Services is online
>  CRS-4533: Event Manager is online
>End Command output
2018-10-11 16:00:03: Validate srvctl command
2018-10-11 16:00:03: Validating /u01/home/grid/12.2.0.1/bin/srvctl
2018-10-11 16:00:03: Remove listener resource...
2018-10-11 16:00:03: Invoking "/u01/home/grid/12.2.0.1/bin/srvctl config listener"
2018-10-11 16:00:03: trace file=/u01/home/oracle/crsdata/orclhostdb02/crsconfig/srvmcfg1.log
2018-10-11 16:00:03: Executing cmd: /u01/home/grid/12.2.0.1/bin/srvctl config listener
2018-10-11 16:00:05: Command output:
>  Name: LISTENER
>  Type: Database Listener
>  Network: 1, Owner: oracle
>  Home: <CRS home>
>  End points: TCP:1521
>  Listener is enabled.
>  Listener is individually enabled on nodes:
>  Listener is individually disabled on nodes:
>End Command output
2018-10-11 16:00:05: Executing cmd: /u01/home/grid/12.2.0.1/bin/clsecho -p has -f clsrsc -m 332
2018-10-11 16:00:05: Command output:
>  CLSRSC-332: CRS resources for listeners are still configured
>End Command output
2018-10-11 16:00:05: CLSRSC-332: CRS resources for listeners are still configured
2018-10-11 16:00:05: Invoking "/u01/home/grid/12.2.0.1/bin/srvctl stop listener -f"
2018-10-11 16:00:05: trace file=/u01/home/oracle/crsdata/orclhostdb02/crsconfig/srvmcfg2.log
2018-10-11 16:00:05: Executing cmd: /u01/home/grid/12.2.0.1/bin/srvctl stop listener -f
2018-10-11 16:00:06: Invoking "/u01/home/grid/12.2.0.1/bin/srvctl remove listener -a -f"
2018-10-11 16:00:06: trace file=/u01/home/oracle/crsdata/orclhostdb02/crsconfig/srvmcfg3.log
2018-10-11 16:00:06: Executing cmd: /u01/home/grid/12.2.0.1/bin/srvctl remove listener -a -f
2018-10-11 16:00:07: Remove Resources
2018-10-11 16:00:07: Validate srvctl command
2018-10-11 16:00:07: Validating /u01/home/grid/12.2.0.1/bin/srvctl
2018-10-11 16:00:07: Removing CVU ...
2018-10-11 16:00:07: Invoking "/u01/home/grid/12.2.0.1/bin/srvctl stop cvu -f"
2018-10-11 16:00:07: trace file=/u01/home/oracle/crsdata/orclhostdb02/crsconfig/srvmcfg4.log
2018-10-11 16:00:07: Running as user oracle: /u01/home/grid/12.2.0.1/bin/srvctl stop cvu -f
2018-10-11 16:00:07: s_run_as_user2: Running /bin/su oracle -c ' echo CLSRSC_START; /u01/home/grid/12.2.0.1/bin/srvctl stop cvu -f '
2018-10-11 16:00:08: Removing file /tmp/YA97nA_460
2018-10-11 16:00:08: Successfully removed file: /tmp/YA97nA_460
2018-10-11 16:00:08: pipe exit code: 0
2018-10-11 16:00:08: /bin/su successfully executed

2018-10-11 16:00:08:
2018-10-11 16:00:08: Stop CVU ... success
2018-10-11 16:00:08: Invoking "/u01/home/grid/12.2.0.1/bin/srvctl remove cvu -f"
2018-10-11 16:00:08: trace file=/u01/home/oracle/crsdata/orclhostdb02/crsconfig/srvmcfg5.log
2018-10-11 16:00:08: Running as user oracle: /u01/home/grid/12.2.0.1/bin/srvctl remove cvu -f
2018-10-11 16:00:08: s_run_as_user2: Running /bin/su oracle -c ' echo CLSRSC_START; /u01/home/grid/12.2.0.1/bin/srvctl remove cvu -f '
2018-10-11 16:00:09: Removing file /tmp/5BPar1j9Z5
2018-10-11 16:00:09: Successfully removed file: /tmp/5BPar1j9Z5
2018-10-11 16:00:09: pipe exit code: 0
2018-10-11 16:00:09: /bin/su successfully executed

2018-10-11 16:00:09:
2018-10-11 16:00:09: Remove CVU ... success
2018-10-11 16:00:09: Removing scan....
2018-10-11 16:00:09: Invoking "/u01/home/grid/12.2.0.1/bin/srvctl stop scan_listener -f"
2018-10-11 16:00:09: trace file=/u01/home/oracle/crsdata/orclhostdb02/crsconfig/srvmcfg6.log
2018-10-11 16:00:09: Running as user oracle: /u01/home/grid/12.2.0.1/bin/srvctl stop scan_listener -f
2018-10-11 16:00:09: s_run_as_user2: Running /bin/su oracle -c ' echo CLSRSC_START; /u01/home/grid/12.2.0.1/bin/srvctl stop scan_listener -f '
2018-10-11 16:00:10: Removing file /tmp/I3RuKfu02B
2018-10-11 16:00:10: Successfully removed file: /tmp/I3RuKfu02B
2018-10-11 16:00:10: pipe exit code: 0
2018-10-11 16:00:10: /bin/su successfully executed

2018-10-11 16:00:10:
2018-10-11 16:00:10: Invoking "/u01/home/grid/12.2.0.1/bin/srvctl remove scan_listener -y -f"
2018-10-11 16:00:10: trace file=/u01/home/oracle/crsdata/orclhostdb02/crsconfig/srvmcfg7.log
2018-10-11 16:00:10: Running as user oracle: /u01/home/grid/12.2.0.1/bin/srvctl remove scan_listener -y -f
2018-10-11 16:00:10: s_run_as_user2: Running /bin/su oracle -c ' echo CLSRSC_START; /u01/home/grid/12.2.0.1/bin/srvctl remove scan_listener -y -f '
2018-10-11 16:00:11: Removing file /tmp/tQkdKmj9Wr
2018-10-11 16:00:11: Successfully removed file: /tmp/tQkdKmj9Wr
2018-10-11 16:00:11: pipe exit code: 0
2018-10-11 16:00:11: /bin/su successfully executed

2018-10-11 16:00:11:
2018-10-11 16:00:11: Invoking "/u01/home/grid/12.2.0.1/bin/srvctl stop scan -f"
2018-10-11 16:00:11: trace file=/u01/home/oracle/crsdata/orclhostdb02/crsconfig/srvmcfg8.log
2018-10-11 16:00:11: Executing cmd: /u01/home/grid/12.2.0.1/bin/srvctl stop scan -f
2018-10-11 16:00:14: Invoking "/u01/home/grid/12.2.0.1/bin/srvctl remove scan -y -f"
2018-10-11 16:00:14: trace file=/u01/home/oracle/crsdata/orclhostdb02/crsconfig/srvmcfg9.log
2018-10-11 16:00:14: Executing cmd: /u01/home/grid/12.2.0.1/bin/srvctl remove scan -y -f
2018-10-11 16:00:16: Removing nodeapps...
2018-10-11 16:00:16: Invoking "/u01/home/grid/12.2.0.1/bin/srvctl config nodeapps"
2018-10-11 16:00:16: trace file=/u01/home/oracle/crsdata/orclhostdb02/crsconfig/srvmcfg10.log
2018-10-11 16:00:16: Executing cmd: /u01/home/grid/12.2.0.1/bin/srvctl config nodeapps
2018-10-11 16:00:19: Command output:
>  Network 1 exists
>  Subnet IPv4: 10.57.238.0/255.255.254.0/bond0, static
>  Subnet IPv6:
>  Ping Targets:
>  Network is enabled
>  Network is individually enabled on nodes:
>  Network is individually disabled on nodes:
>  VIP exists: network number 1, hosting node orclhostdb02
>  VIP Name: orclhostdb02-vip.mattew.com
>  VIP IPv4 Address: 10.57.239.67
>  VIP IPv6 Address:
>  VIP is enabled.
>  VIP is individually enabled on nodes:
>  VIP is individually disabled on nodes:
>  ONS exists: Local port 6100, remote port 6200, EM port 2016, Uses SSL true
>  ONS is enabled
>  ONS is individually enabled on nodes:
>  ONS is individually disabled on nodes:
>End Command output
2018-10-11 16:00:19: Invoking "/u01/home/grid/12.2.0.1/bin/srvctl stop nodeapps -n orclhostdb02 -f"
2018-10-11 16:00:19: trace file=/u01/home/oracle/crsdata/orclhostdb02/crsconfig/srvmcfg11.log
2018-10-11 16:00:19: Executing cmd: /u01/home/grid/12.2.0.1/bin/srvctl stop nodeapps -n orclhostdb02 -f
2018-10-11 16:00:24: Invoking "/u01/home/grid/12.2.0.1/bin/srvctl remove nodeapps -y -f"
2018-10-11 16:00:24: trace file=/u01/home/oracle/crsdata/orclhostdb02/crsconfig/srvmcfg12.log
2018-10-11 16:00:24: Executing cmd: /u01/home/grid/12.2.0.1/bin/srvctl remove nodeapps -y -f
2018-10-11 16:00:26: Deconfiguring Oracle ASM or shared filesystem storage ...
2018-10-11 16:00:26: Stopping Oracle Clusterware ...
2018-10-11 16:00:26: Executing cmd: /u01/home/grid/12.2.0.1/bin/crsctl stop crs -f
2018-10-11 16:00:52: Command output:
>  CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'orclhostdb02'
>  CRS-2673: Attempting to stop 'ora.crsd' on 'orclhostdb02'
>  CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on server 'orclhostdb02'
>  CRS-2673: Attempting to stop 'ora.ORCL_FRA.dg' on 'orclhostdb02'
>  CRS-2677: Stop of 'ora.ORCL_FRA.dg' on 'orclhostdb02' succeeded
>  CRS-2673: Attempting to stop 'ora.asm' on 'orclhostdb02'
>  CRS-2677: Stop of 'ora.asm' on 'orclhostdb02' succeeded
>  CRS-2673: Attempting to stop 'ora.ASMNET1LSNR_ASM.lsnr' on 'orclhostdb02'
>  CRS-2677: Stop of 'ora.ASMNET1LSNR_ASM.lsnr' on 'orclhostdb02' succeeded
>  CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'orclhostdb02' has completed
>  CRS-2677: Stop of 'ora.crsd' on 'orclhostdb02' succeeded
>  CRS-2673: Attempting to stop 'ora.storage' on 'orclhostdb02'
>  CRS-2673: Attempting to stop 'ora.crf' on 'orclhostdb02'
>  CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'orclhostdb02'
>  CRS-2673: Attempting to stop 'ora.gpnpd' on 'orclhostdb02'
>  CRS-2673: Attempting to stop 'ora.mdnsd' on 'orclhostdb02'
>  CRS-2677: Stop of 'ora.drivers.acfs' on 'orclhostdb02' succeeded
>  CRS-2677: Stop of 'ora.crf' on 'orclhostdb02' succeeded
>  CRS-2677: Stop of 'ora.gpnpd' on 'orclhostdb02' succeeded
>  CRS-2677: Stop of 'ora.storage' on 'orclhostdb02' succeeded
>  CRS-2673: Attempting to stop 'ora.asm' on 'orclhostdb02'
>  CRS-2677: Stop of 'ora.mdnsd' on 'orclhostdb02' succeeded
>  CRS-2677: Stop of 'ora.asm' on 'orclhostdb02' succeeded
>  CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'orclhostdb02'
>  CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'orclhostdb02' succeeded
>  CRS-2673: Attempting to stop 'ora.ctssd' on 'orclhostdb02'
>  CRS-2673: Attempting to stop 'ora.evmd' on 'orclhostdb02'
>  CRS-2677: Stop of 'ora.ctssd' on 'orclhostdb02' succeeded
>  CRS-2677: Stop of 'ora.evmd' on 'orclhostdb02' succeeded
>  CRS-2673: Attempting to stop 'ora.cssd' on 'orclhostdb02'
>  CRS-2677: Stop of 'ora.cssd' on 'orclhostdb02' succeeded
>  CRS-2673: Attempting to stop 'ora.driver.afd' on 'orclhostdb02'
>  CRS-2673: Attempting to stop 'ora.gipcd' on 'orclhostdb02'
>  CRS-2677: Stop of 'ora.driver.afd' on 'orclhostdb02' succeeded
>  CRS-2677: Stop of 'ora.gipcd' on 'orclhostdb02' succeeded
>  CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'orclhostdb02' has completed
>  CRS-4133: Oracle High Availability Services has been stopped.
>End Command output
2018-10-11 16:00:52: The return value of stop of CRS: 0
2018-10-11 16:00:52: Executing cmd: /u01/home/grid/12.2.0.1/bin/crsctl check crs
2018-10-11 16:00:52: Command output:
>  CRS-4639: Could not contact Oracle High Availability Services
>End Command output
2018-10-11 16:00:52: Oracle CRS stack has been shut down
2018-10-11 16:00:52: Checking if OCR is on ASM
2018-10-11 16:00:52: Retrieving OCR main disk location
2018-10-11 16:00:52: Opening file /etc/oracle/ocr.loc
2018-10-11 16:00:52: Value (+ORCL_FRA/ORCL-DEN-DB/OCRFILE/registry.255.989245853) is set for key=ocrconfig_loc
2018-10-11 16:00:52: Retrieving OCR mirror disk location
2018-10-11 16:00:52: Opening file /etc/oracle/ocr.loc
2018-10-11 16:00:52: Value () is set for key=ocrmirrorconfig_loc
2018-10-11 16:00:52: Retrieving OCR loc3 disk location
2018-10-11 16:00:52: Opening file /etc/oracle/ocr.loc
2018-10-11 16:00:52: Value () is set for key=ocrconfig_loc3
2018-10-11 16:00:52: Retrieving OCR loc4 disk location
2018-10-11 16:00:52: Opening file /etc/oracle/ocr.loc
2018-10-11 16:00:52: Value () is set for key=ocrconfig_loc4
2018-10-11 16:00:52: Retrieving OCR loc5 disk location
2018-10-11 16:00:52: Opening file /etc/oracle/ocr.loc
2018-10-11 16:00:52: Value () is set for key=ocrconfig_loc5
2018-10-11 16:00:52: OCR is on ASM
2018-10-11 16:00:52: De-configuring ASM...
2018-10-11 16:00:52: Executing /u01/home/grid/12.2.0.1/bin/crsctl start crs -excl
2018-10-11 16:00:52: Executing cmd: /u01/home/grid/12.2.0.1/bin/crsctl start crs -excl
2018-10-11 16:01:57: Command output:
>  CRS-4123: Oracle High Availability Services has been started.
>  CRS-2672: Attempting to start 'ora.driver.afd' on 'orclhostdb02'
>  CRS-2672: Attempting to start 'ora.evmd' on 'orclhostdb02'
>  CRS-2672: Attempting to start 'ora.mdnsd' on 'orclhostdb02'
>  CRS-2676: Start of 'ora.driver.afd' on 'orclhostdb02' succeeded
>  CRS-2672: Attempting to start 'ora.cssdmonitor' on 'orclhostdb02'
>  CRS-2676: Start of 'ora.cssdmonitor' on 'orclhostdb02' succeeded
>  CRS-2676: Start of 'ora.mdnsd' on 'orclhostdb02' succeeded
>  CRS-2676: Start of 'ora.evmd' on 'orclhostdb02' succeeded
>  CRS-2672: Attempting to start 'ora.gpnpd' on 'orclhostdb02'
>  CRS-2676: Start of 'ora.gpnpd' on 'orclhostdb02' succeeded
>  CRS-2672: Attempting to start 'ora.gipcd' on 'orclhostdb02'
>  CRS-2676: Start of 'ora.gipcd' on 'orclhostdb02' succeeded
>  CRS-2672: Attempting to start 'ora.cssd' on 'orclhostdb02'
>  CRS-2672: Attempting to start 'ora.diskmon' on 'orclhostdb02'
>  CRS-2676: Start of 'ora.diskmon' on 'orclhostdb02' succeeded
>  CRS-2676: Start of 'ora.cssd' on 'orclhostdb02' succeeded
>  CRS-2672: Attempting to start 'ora.crf' on 'orclhostdb02'
>  CRS-2672: Attempting to start 'ora.ctssd' on 'orclhostdb02'
>  CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'orclhostdb02'
>  CRS-2676: Start of 'ora.crf' on 'orclhostdb02' succeeded
>  CRS-2676: Start of 'ora.ctssd' on 'orclhostdb02' succeeded
>  CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'orclhostdb02' succeeded
>  CRS-2672: Attempting to start 'ora.asm' on 'orclhostdb02'
>  CRS-2676: Start of 'ora.asm' on 'orclhostdb02' succeeded
>  CRS-2672: Attempting to start 'ora.storage' on 'orclhostdb02'
>  CRS-2676: Start of 'ora.storage' on 'orclhostdb02' succeeded
>  CRS-2672: Attempting to start 'ora.crsd' on 'orclhostdb02'
>  CRS-2676: Start of 'ora.crsd' on 'orclhostdb02' succeeded
>End Command output
2018-10-11 16:01:57: The return value of blocking start of CRS: 0
2018-10-11 16:01:57: Executing cmd: /u01/home/grid/12.2.0.1/bin/crsctl check crs
2018-10-11 16:01:57: Command output:
>  CRS-4638: Oracle High Availability Services is online
>  CRS-4692: Cluster Ready Services is online in exclusive mode
>  CRS-4529: Cluster Synchronization Services is online
>End Command output
2018-10-11 16:01:57: Oracle CRS stack completely started and running
2018-10-11 16:01:57: Oracle CRS home = /u01/home/grid/12.2.0.1
2018-10-11 16:01:57: GPnP host = orclhostdb02
2018-10-11 16:01:57: Oracle GPnP home = /u01/home/grid/12.2.0.1/gpnp
2018-10-11 16:01:57: Oracle GPnP local home = /u01/home/grid/12.2.0.1/gpnp/orclhostdb02
2018-10-11 16:01:57: GPnP directories verified.
2018-10-11 16:01:57: Try to read ASM mode from the global stage profile
2018-10-11 16:01:57: gpnptool: run /u01/home/grid/12.2.0.1/bin/gpnptool getpval -p="/u01/home/grid/12.2.0.1/gpnp/profiles/peer/profile.xml" -o="/tmp/qA8N6FCnY1" -asm_m
2018-10-11 16:01:57: Running as user oracle: /u01/home/grid/12.2.0.1/bin/gpnptool getpval -p="/u01/home/grid/12.2.0.1/gpnp/profiles/peer/profile.xml" -o="/tmp/qA8N6FCnY1" -asm_m
2018-10-11 16:01:57: s_run_as_user2: Running /bin/su oracle -c ' echo CLSRSC_START; /u01/home/grid/12.2.0.1/bin/gpnptool getpval -p="/u01/home/grid/12.2.0.1/gpnp/profiles/peer/profile.xml" -o="/tmp/qA8N6FCnY1" -asm_m '
2018-10-11 16:01:57: Removing file /tmp/Q8Itzuuyvf
2018-10-11 16:01:57: Successfully removed file: /tmp/Q8Itzuuyvf
2018-10-11 16:01:57: pipe exit code: 0
2018-10-11 16:01:57: /bin/su successfully executed

2018-10-11 16:01:57: gpnptool: rc=0
2018-10-11 16:01:57: gpnptool output:

2018-10-11 16:01:57: Removing file /tmp/qA8N6FCnY1
2018-10-11 16:01:57: Successfully removed file: /tmp/qA8N6FCnY1
2018-10-11 16:01:57: ASM mode = remote
2018-10-11 16:01:57: ASM mode = remote
2018-10-11 16:01:57: Executing '/u01/home/grid/12.2.0.1/bin/crsctl stop resource ora.crsd -init -f'
2018-10-11 16:01:57: Executing /u01/home/grid/12.2.0.1/bin/crsctl stop resource ora.crsd -init -f
2018-10-11 16:01:57: Executing cmd: /u01/home/grid/12.2.0.1/bin/crsctl stop resource ora.crsd -init -f
2018-10-11 16:01:58: Command output:
>  CRS-2673: Attempting to stop 'ora.crsd' on 'orclhostdb02'
>  CRS-2677: Stop of 'ora.crsd' on 'orclhostdb02' succeeded
>End Command output
2018-10-11 16:01:58: The return value of stop of ora.crsd: 0
2018-10-11 16:01:58: Executing cmd: /u01/home/grid/12.2.0.1/bin/crsctl check crs
2018-10-11 16:01:58: Command output:
>  CRS-4638: Oracle High Availability Services is online
>  CRS-4535: Cannot communicate with Cluster Ready Services
>  CRS-4529: Cluster Synchronization Services is online
>  CRS-4533: Event Manager is online
>End Command output
2018-10-11 16:01:58: Attempt to bounce ohasd
2018-10-11 16:01:58: Executing cmd: /u01/home/grid/12.2.0.1/bin/crsctl stop crs -f
2018-10-11 16:02:21: Command output:
>  CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'orclhostdb02'
>  CRS-2673: Attempting to stop 'ora.ctssd' on 'orclhostdb02'
>  CRS-2673: Attempting to stop 'ora.evmd' on 'orclhostdb02'
>  CRS-2673: Attempting to stop 'ora.storage' on 'orclhostdb02'
>  CRS-2673: Attempting to stop 'ora.mdnsd' on 'orclhostdb02'
>  CRS-2673: Attempting to stop 'ora.gpnpd' on 'orclhostdb02'
>  CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'orclhostdb02'
>  CRS-2677: Stop of 'ora.ctssd' on 'orclhostdb02' succeeded
>  CRS-2677: Stop of 'ora.evmd' on 'orclhostdb02' succeeded
>  CRS-2677: Stop of 'ora.storage' on 'orclhostdb02' succeeded
>  CRS-2673: Attempting to stop 'ora.asm' on 'orclhostdb02'
>  CRS-2677: Stop of 'ora.drivers.acfs' on 'orclhostdb02' succeeded
>  CRS-2677: Stop of 'ora.mdnsd' on 'orclhostdb02' succeeded
>  CRS-2677: Stop of 'ora.gpnpd' on 'orclhostdb02' succeeded
>  CRS-2677: Stop of 'ora.asm' on 'orclhostdb02' succeeded
>  CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'orclhostdb02'
>  CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'orclhostdb02' succeeded
>  CRS-2673: Attempting to stop 'ora.cssd' on 'orclhostdb02'
>  CRS-2677: Stop of 'ora.cssd' on 'orclhostdb02' succeeded
>  CRS-2673: Attempting to stop 'ora.crf' on 'orclhostdb02'
>  CRS-2673: Attempting to stop 'ora.driver.afd' on 'orclhostdb02'
>  CRS-2677: Stop of 'ora.driver.afd' on 'orclhostdb02' succeeded
>  CRS-2677: Stop of 'ora.crf' on 'orclhostdb02' succeeded
>  CRS-2673: Attempting to stop 'ora.gipcd' on 'orclhostdb02'
>  CRS-2677: Stop of 'ora.gipcd' on 'orclhostdb02' succeeded
>  CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'orclhostdb02' has completed
>  CRS-4133: Oracle High Availability Services has been stopped.
>End Command output
2018-10-11 16:02:21: The return value of stop of CRS: 0
2018-10-11 16:02:21: Executing cmd: /u01/home/grid/12.2.0.1/bin/crsctl check crs
2018-10-11 16:02:21: Command output:
>  CRS-4639: Could not contact Oracle High Availability Services
>End Command output
2018-10-11 16:02:21: Oracle CRS stack has been shut down
2018-10-11 16:02:21: Executing cmd: /u01/home/grid/12.2.0.1/bin/crsctl start crs -noautostart
2018-10-11 16:02:36: Command output:
>  CRS-4123: Oracle High Availability Services has been started.
>End Command output
2018-10-11 16:02:36: Return value of start of CRS with '-noautostart': 0
2018-10-11 16:02:36: Executing cmd: /u01/home/grid/12.2.0.1/bin/crsctl check has
2018-10-11 16:02:36: Command output:
>  CRS-4638: Oracle High Availability Services is online
>End Command output
2018-10-11 16:02:36: Oracle High Availability Services is online
2018-10-11 16:02:36: Disable ASM to avoid race issue between ASM agent and ASMCA.
2018-10-11 16:02:36: Executing cmd: /u01/home/grid/12.2.0.1/bin/crsctl modify resource ora.asm -attr "ENABLED@SERVERNAME(orclhostdb02)=0" -init
2018-10-11 16:02:36: Successfully disabled ASM resource.
2018-10-11 16:02:36: Executing cmd: /u01/home/grid/12.2.0.1/bin/ocrcheck -config -debug
2018-10-11 16:02:36: Command output:
>  Oracle Cluster Registry configuration is :
>  PROT-709:     Device/File Name         : +ORCL_FRA
>End Command output
2018-10-11 16:02:36: Parse the output for diskgroups with OCR
2018-10-11 16:02:36: LINE: PROT-709:     Device/File Name         : +ORCL_FRA
2018-10-11 16:02:36: OCR DG: +ORCL_FRA
2018-10-11 16:02:36: OCR DG name: ORCL_FRA
2018-10-11 16:02:36: Diskgoups with OCR: ORCL_FRA
2018-10-11 16:02:36: Configured CRS Home: /u01/home/grid/12.2.0.1
2018-10-11 16:02:36: Configured CRS Home: /u01/home/grid/12.2.0.1
2018-10-11 16:02:36: Executing cmd: /u01/home/grid/12.2.0.1/bin/crsctl check css
2018-10-11 16:02:36: Command output:
>  CRS-4530: Communications failure contacting Cluster Synchronization Services daemon
>End Command output
2018-10-11 16:02:36: Starting CSS exclusive
2018-10-11 16:02:36: Starting CSS in exclusive mode
2018-10-11 16:02:36: Configured CRS Home: /u01/home/grid/12.2.0.1
2018-10-11 16:02:36: Executing cmd: /u01/home/grid/12.2.0.1/bin/crsctl start crs -excl -cssonly
2018-10-11 16:03:01: Command output:
>  CRS-2672: Attempting to start 'ora.driver.afd' on 'orclhostdb02'
>  CRS-2672: Attempting to start 'ora.evmd' on 'orclhostdb02'
>  CRS-2672: Attempting to start 'ora.mdnsd' on 'orclhostdb02'
>  CRS-2676: Start of 'ora.driver.afd' on 'orclhostdb02' succeeded
>  CRS-2672: Attempting to start 'ora.cssdmonitor' on 'orclhostdb02'
>  CRS-2676: Start of 'ora.cssdmonitor' on 'orclhostdb02' succeeded
>  CRS-2676: Start of 'ora.mdnsd' on 'orclhostdb02' succeeded
>  CRS-2676: Start of 'ora.evmd' on 'orclhostdb02' succeeded
>  CRS-2672: Attempting to start 'ora.gpnpd' on 'orclhostdb02'
>  CRS-2676: Start of 'ora.gpnpd' on 'orclhostdb02' succeeded
>  CRS-2672: Attempting to start 'ora.gipcd' on 'orclhostdb02'
>  CRS-2676: Start of 'ora.gipcd' on 'orclhostdb02' succeeded
>  CRS-2672: Attempting to start 'ora.cssd' on 'orclhostdb02'
>  CRS-2672: Attempting to start 'ora.diskmon' on 'orclhostdb02'
>  CRS-2676: Start of 'ora.diskmon' on 'orclhostdb02' succeeded
>  CRS-2676: Start of 'ora.cssd' on 'orclhostdb02' succeeded
>End Command output
2018-10-11 16:03:01: CRS-2672: Attempting to start 'ora.driver.afd' on 'orclhostdb02'
2018-10-11 16:03:01: CRS-2672: Attempting to start 'ora.evmd' on 'orclhostdb02'
2018-10-11 16:03:01: CRS-2672: Attempting to start 'ora.mdnsd' on 'orclhostdb02'
2018-10-11 16:03:01: CRS-2676: Start of 'ora.driver.afd' on 'orclhostdb02' succeeded
2018-10-11 16:03:01: CRS-2672: Attempting to start 'ora.cssdmonitor' on 'orclhostdb02'
2018-10-11 16:03:01: CRS-2676: Start of 'ora.cssdmonitor' on 'orclhostdb02' succeeded
2018-10-11 16:03:01: CRS-2676: Start of 'ora.mdnsd' on 'orclhostdb02' succeeded
2018-10-11 16:03:01: CRS-2676: Start of 'ora.evmd' on 'orclhostdb02' succeeded
2018-10-11 16:03:01: CRS-2672: Attempting to start 'ora.gpnpd' on 'orclhostdb02'
2018-10-11 16:03:01: CRS-2676: Start of 'ora.gpnpd' on 'orclhostdb02' succeeded
2018-10-11 16:03:01: CRS-2672: Attempting to start 'ora.gipcd' on 'orclhostdb02'
2018-10-11 16:03:01: CRS-2676: Start of 'ora.gipcd' on 'orclhostdb02' succeeded
2018-10-11 16:03:01: CRS-2672: Attempting to start 'ora.cssd' on 'orclhostdb02'
2018-10-11 16:03:01: CRS-2672: Attempting to start 'ora.diskmon' on 'orclhostdb02'
2018-10-11 16:03:01: CRS-2676: Start of 'ora.diskmon' on 'orclhostdb02' succeeded
2018-10-11 16:03:01: CRS-2676: Start of 'ora.cssd' on 'orclhostdb02' succeeded
2018-10-11 16:03:01: Querying CSS vote disks
2018-10-11 16:03:01: Voting disk is : ##  STATE    File Universal Id                File Name Disk group
2018-10-11 16:03:01: Voting disk is : --  -----    -----------------                --------- ---------
2018-10-11 16:03:01: Voting disk is :  1. ONLINE   23fe2bc898014f0dbf816e24462c2a62 (AFD:DENHPE20450_2_29) [ORCL_FRA]
2018-10-11 16:03:01: Voting disk is : Located 1 voting disk(s).
2018-10-11 16:03:01: Diskgroups found: ORCL_FRA
2018-10-11 16:03:01: The diskgroup to store voting files: ORCL_FRA
2018-10-11 16:03:01: All diskgroups used by Clusterware: ORCL_FRA
2018-10-11 16:03:01: Dropping the diskgroups: ORCL_FRA ...
2018-10-11 16:03:01: Configured CRS Home: /u01/home/grid/12.2.0.1
2018-10-11 16:03:01: Configured CRS Home: /u01/home/grid/12.2.0.1
2018-10-11 16:03:01: Executing cmd: /u01/home/grid/12.2.0.1/bin/crsctl check css
2018-10-11 16:03:01: Command output:
>  CRS-4529: Cluster Synchronization Services is online
>End Command output
2018-10-11 16:03:01: Querying CSS vote disks
2018-10-11 16:03:01: Voting disk is : ##  STATE    File Universal Id                File Name Disk group
2018-10-11 16:03:01: Voting disk is : --  -----    -----------------                --------- ---------
2018-10-11 16:03:01: Voting disk is :  1. ONLINE   23fe2bc898014f0dbf816e24462c2a62 (AFD:DENHPE20450_2_29) [ORCL_FRA]
2018-10-11 16:03:01: Voting disk is : Located 1 voting disk(s).
2018-10-11 16:03:01: Diskgroups found: ORCL_FRA
2018-10-11 16:03:01: Executing cmd: /u01/home/grid/12.2.0.1/bin/crsctl delete css votedisk '+ORCL_FRA'
2018-10-11 16:03:01: Command output:
>  CRS-4611: Successful deletion of voting disk +ORCL_FRA.
>End Command output
2018-10-11 16:03:01: keep DG = 0
2018-10-11 16:03:01: Running as user oracle: /u01/home/grid/12.2.0.1/bin/asmca -silent -deleteLocalASM -diskGroups ORCL_FRA
2018-10-11 16:03:01:   Invoking "/u01/home/grid/12.2.0.1/bin/asmca -silent -deleteLocalASM -diskGroups ORCL_FRA " as user "oracle"
2018-10-11 16:03:01: Executing /bin/su oracle -c "/u01/home/grid/12.2.0.1/bin/asmca -silent -deleteLocalASM -diskGroups ORCL_FRA "
2018-10-11 16:03:01: Executing cmd: /bin/su oracle -c "/u01/home/grid/12.2.0.1/bin/asmca -silent -deleteLocalASM -diskGroups ORCL_FRA "

2018-10-11 16:03:54: Command output:
>  ASM de-configuration trace file location: /u01/home/oracle/cfgtoollogs/asmca/asmcadc_clean2018-10-11_04-03-06-PM.log
>  ASM Clean Configuration START
>  ASM Clean Configuration END
>
>  ASM instance deleted successfully. Check /u01/home/oracle/cfgtoollogs/asmca/asmcadc_clean2018-10-11_04-03-06-PM.log for details.
>
>End Command output
2018-10-11 16:03:54: Running as user oracle: /u01/home/grid/12.2.0.1/bin/kfod op=disableremote
2018-10-11 16:03:54: s_run_as_user2: Running /bin/su oracle -c ' echo CLSRSC_START; /u01/home/grid/12.2.0.1/bin/kfod op=disableremote '
2018-10-11 16:03:55: Removing file /tmp/1BzT3RtlvY
2018-10-11 16:03:55: Successfully removed file: /tmp/1BzT3RtlvY
2018-10-11 16:03:55: pipe exit code: 0
2018-10-11 16:03:55: /bin/su successfully executed

2018-10-11 16:03:55: kfod op=disableremote rc: 0
2018-10-11 16:03:55: Successfully disabled remote ASM
2018-10-11 16:03:55: disable remote asm success
2018-10-11 16:03:55: see asmca logs at /u01/home/oracle/cfgtoollogs/asmca for details
2018-10-11 16:03:55: Perform initialization tasks before configuring ASM
2018-10-11 16:03:55: Skip deconfiguring audit log redirection becuase DSC is not configured
2018-10-11 16:03:55: de-configuration of ASM ... success
2018-10-11 16:03:55: Configured CRS Home: /u01/home/grid/12.2.0.1
2018-10-11 16:03:55: Configured CRS Home: /u01/home/grid/12.2.0.1
2018-10-11 16:03:55: Executing cmd: /u01/home/grid/12.2.0.1/bin/crsctl check css
2018-10-11 16:03:55: Command output:
>  CRS-4529: Cluster Synchronization Services is online
>End Command output
2018-10-11 16:03:55: Querying CSS vote disks
2018-10-11 16:03:55: Voting disk is : Located 0 voting disk(s).
2018-10-11 16:03:55: Vote disks found:
2018-10-11 16:03:55: Reset voting disks
2018-10-11 16:03:55: Executing cmd: /u01/home/grid/12.2.0.1/bin/crsctl stop crs -f
2018-10-11 16:04:02: Command output:
>  CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'orclhostdb02'
>  CRS-2673: Attempting to stop 'ora.evmd' on 'orclhostdb02'
>  CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'orclhostdb02'
>  CRS-2673: Attempting to stop 'ora.mdnsd' on 'orclhostdb02'
>  CRS-2673: Attempting to stop 'ora.gpnpd' on 'orclhostdb02'
>  CRS-2677: Stop of 'ora.drivers.acfs' on 'orclhostdb02' succeeded
>  CRS-2677: Stop of 'ora.evmd' on 'orclhostdb02' succeeded
>  CRS-2673: Attempting to stop 'ora.cssd' on 'orclhostdb02'
>  CRS-2677: Stop of 'ora.mdnsd' on 'orclhostdb02' succeeded
>  CRS-2677: Stop of 'ora.gpnpd' on 'orclhostdb02' succeeded
>  CRS-2677: Stop of 'ora.cssd' on 'orclhostdb02' succeeded
>  CRS-2673: Attempting to stop 'ora.driver.afd' on 'orclhostdb02'
>  CRS-2673: Attempting to stop 'ora.gipcd' on 'orclhostdb02'
>  CRS-2677: Stop of 'ora.driver.afd' on 'orclhostdb02' succeeded
>  CRS-2677: Stop of 'ora.gipcd' on 'orclhostdb02' succeeded
>  CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'orclhostdb02' has completed
>  CRS-4133: Oracle High Availability Services has been stopped.
>End Command output
2018-10-11 16:04:02: The return value of stop of CRS: 0
2018-10-11 16:04:02: Executing cmd: /u01/home/grid/12.2.0.1/bin/crsctl check crs
2018-10-11 16:04:02: Command output:
>  CRS-4639: Could not contact Oracle High Availability Services
>End Command output
2018-10-11 16:04:02: Oracle CRS stack has been shut down
2018-10-11 16:04:12: Reset OCR
2018-10-11 16:04:12: Removing OLR file: /u01/home/grid/12.2.0.1/cdata/orclhostdb02.olr
2018-10-11 16:04:12: Removing file /u01/home/grid/12.2.0.1/cdata/orclhostdb02.olr
2018-10-11 16:04:12: Successfully removed file: /u01/home/grid/12.2.0.1/cdata/orclhostdb02.olr
2018-10-11 16:04:12: Removing file /etc/oracle/olr.loc
2018-10-11 16:04:12: Successfully removed file: /etc/oracle/olr.loc
2018-10-11 16:04:12: Retrieving OCR main disk location
2018-10-11 16:04:12: Opening file /etc/oracle/ocr.loc
2018-10-11 16:04:12: Value (+ORCL_FRA/ORCL-DEN-DB/OCRFILE/registry.255.989245853) is set for key=ocrconfig_loc
2018-10-11 16:04:12: Retrieving OCR mirror disk location
2018-10-11 16:04:12: Opening file /etc/oracle/ocr.loc
2018-10-11 16:04:12: Value () is set for key=ocrmirrorconfig_loc
2018-10-11 16:04:12: Retrieving OCR loc3 disk location
2018-10-11 16:04:12: Opening file /etc/oracle/ocr.loc
2018-10-11 16:04:12: Value () is set for key=ocrconfig_loc3
2018-10-11 16:04:12: Retrieving OCR loc4 disk location
2018-10-11 16:04:12: Opening file /etc/oracle/ocr.loc
2018-10-11 16:04:12: Value () is set for key=ocrconfig_loc4
2018-10-11 16:04:12: Retrieving OCR loc5 disk location
2018-10-11 16:04:12: Opening file /etc/oracle/ocr.loc
2018-10-11 16:04:12: Value () is set for key=ocrconfig_loc5
2018-10-11 16:04:12: Removing file /etc/oracle/ocr.loc
2018-10-11 16:04:12: Successfully removed file: /etc/oracle/ocr.loc
2018-10-11 16:04:12: Executing the [DeconfigCleanup] step with checkpoint [null] ...
2018-10-11 16:04:12: Running /u01/home/grid/12.2.0.1/bin/acfshanfs installed -nfsv4lock
2018-10-11 16:04:12: Executing cmd: /u01/home/grid/12.2.0.1/bin/acfshanfs installed -nfsv4lock
2018-10-11 16:04:12: Command output:
>  ACFS-9204: false
>End Command output
2018-10-11 16:04:12: acfshanfs is not installed
2018-10-11 16:04:12: Executing step deconfiguration ACFS on the last node
2018-10-11 16:04:12: Executing cmd: /u01/home/grid/12.2.0.1/bin/acfsdriverstate supported
2018-10-11 16:04:14: Command output:
>  ACFS-9200: Supported
>End Command output
2018-10-11 16:04:14: acfs is supported
2018-10-11 16:04:14: Running /u01/home/grid/12.2.0.1/bin/acfsdriverstate installed
2018-10-11 16:04:14: Executing cmd: /u01/home/grid/12.2.0.1/bin/acfsdriverstate installed
2018-10-11 16:04:15: Command output:
>  ACFS-9203: true
>End Command output
2018-10-11 16:04:15: acfs is installed
2018-10-11 16:04:15: Not using checkpoint for USM driver uninstall
2018-10-11 16:04:15: Stopping ora.drivers.acfs if it exists, so that it doesn't race.
2018-10-11 16:04:15: isACFSSupported: 1
2018-10-11 16:04:15: Executing cmd: /u01/home/grid/12.2.0.1/bin/crsctl stat res ora.drivers.acfs -init
2018-10-11 16:04:15: Command output:
>  CRS-4047: No Oracle Clusterware components configured.
>  CRS-4000: Command Status failed, or completed with errors.
>End Command output
2018-10-11 16:04:15: Executing /u01/home/grid/12.2.0.1/bin/acfsroot uninstall -t2
2018-10-11 16:04:15: Executing cmd: /u01/home/grid/12.2.0.1/bin/acfsroot uninstall -t2
2018-10-11 16:04:21: Command output:
>  ACFS-9176: Entering 'get ora home'
>  ACFS-9500: Location of Oracle Home is '/u01/home/grid/12.2.0.1' as determined from the internal configuration data
>  ACFS-9182: Variable 'ORACLE_HOME' has value '/u01/home/grid/12.2.0.1'
>  ACFS-9177: Return from 'get ora home'
>  ACFS-9176: Entering 'ga admin name'
>  ACFS-9176: Entering 'va admin group'
>  ACFS-9178: Return code = 0
>  ACFS-9177: Return from 'va admin group'
>  ACFS-9178: Return code = dba
>  ACFS-9177: Return from 'ga admin name'
>  ACFS-9505: Using acfsutil executable from location: '/u01/home/grid/12.2.0.1/usm/install/cmds/bin/acfsutil'
>  ACFS-9176: Entering 'uninstall'
>  ACFS-9176: Entering 'lc check any driver'
>  ACFS-9155: Checking for existing 'oracleoks.ko' driver installation.
>  ACFS-9178: Return code = 1
>  ACFS-9177: Return from 'lc check any driver'
>  ACFS-9312: Existing ADVM/ACFS installation detected.
>  ACFS-9176: Entering 'uld usm drvs'
>  WARNING: Deprecated config file /etc/modprobe.conf, all config files belong into /etc/modprobe.d/.
>  WARNING: Deprecated config file /etc/modprobe.conf, all config files belong into /etc/modprobe.d/.
>  ACFS-9178: Return code = USM_SUCCESS
>  ACFS-9177: Return from 'uld usm drvs'
>  ACFS-9314: Removing previous ADVM/ACFS installation.
>  ACFS-9315: Previous ADVM/ACFS components successfully removed.
>  ACFS-9178: Return code = USM_SUCCESS
>  ACFS-9177: Return from 'uninstall'
>  ACFS-9176: Entering 'acroot ex'
>  ACFS-9178: Return code = 0
>  ACFS-9177: Return from 'acroot ex'
>End Command output
2018-10-11 16:04:21: /u01/home/grid/12.2.0.1/bin/acfsroot uninstall -t2 ... success
2018-10-11 16:04:21: ACFS drivers uninstall completed
2018-10-11 16:04:21: Running /u01/home/grid/12.2.0.1/bin/okadriverstate installed
2018-10-11 16:04:21: Executing cmd: /u01/home/grid/12.2.0.1/bin/okadriverstate installed
2018-10-11 16:04:21: Command output:
>  OKA-9204: false
>End Command output
2018-10-11 16:04:21: OKA is not installed
2018-10-11 16:04:21: Running /u01/home/grid/12.2.0.1/bin/afddriverstate installed
2018-10-11 16:04:21: Executing cmd: /u01/home/grid/12.2.0.1/bin/afddriverstate installed
2018-10-11 16:04:21: Command output:
>  AFD-9203: AFD device driver installed status: 'true'
>End Command output
2018-10-11 16:04:21: AFD Driver is installed
2018-10-11 16:04:21: AFD Library is present
2018-10-11 16:04:21: AFD is installed
2018-10-11 16:04:21: Removing /etc/oracleafd.conf
2018-10-11 16:04:21: Init file = afd
2018-10-11 16:04:21: Removing "afd" from RC dirs
2018-10-11 16:04:21: Removing file /etc/rc.d/rc0.d/K15afd
2018-10-11 16:04:21: Successfully removed file: /etc/rc.d/rc0.d/K15afd
2018-10-11 16:04:21: Removing file /etc/rc.d/rc1.d/K15afd
2018-10-11 16:04:21: Successfully removed file: /etc/rc.d/rc1.d/K15afd
2018-10-11 16:04:21: Removing file /etc/rc.d/rc2.d/K15afd
2018-10-11 16:04:21: Successfully removed file: /etc/rc.d/rc2.d/K15afd
2018-10-11 16:04:21: Removing file /etc/rc.d/rc3.d/S96afd
2018-10-11 16:04:21: Successfully removed file: /etc/rc.d/rc3.d/S96afd
2018-10-11 16:04:21: Removing file /etc/rc.d/rc4.d/K15afd
2018-10-11 16:04:21: Successfully removed file: /etc/rc.d/rc4.d/K15afd
2018-10-11 16:04:21: Removing file /etc/rc.d/rc5.d/S96afd
2018-10-11 16:04:21: Successfully removed file: /etc/rc.d/rc5.d/S96afd
2018-10-11 16:04:21: Removing file /etc/rc.d/rc6.d/K15afd
2018-10-11 16:04:21: Successfully removed file: /etc/rc.d/rc6.d/K15afd
2018-10-11 16:04:21: Executing cmd: /bin/rpm -q sles-release
2018-10-11 16:04:21: Command output:
>  package sles-release is not installed
>End Command output
2018-10-11 16:04:21: Removing /etc/init.d/afd
2018-10-11 16:04:21: Executing /u01/home/grid/12.2.0.1/bin/afdroot uninstall
2018-10-11 16:04:21: Executing cmd: /u01/home/grid/12.2.0.1/bin/afdroot uninstall
2018-10-11 16:04:25: Command output:
>  AFD-632: Existing AFD installation detected.
>  WARNING: Deprecated config file /etc/modprobe.conf, all config files belong into /etc/modprobe.d/.
>  AFD-634: Removing previous AFD installation.
>  AFD-635: Previous AFD components successfully removed.
>End Command output
2018-10-11 16:04:25: /u01/home/grid/12.2.0.1/bin/afdroot uninstall ... success
2018-10-11 16:04:25: ASM Filter driver uninstall completed
2018-10-11 16:04:25: Either /etc/oracle/olr.loc does not exist or is not readable
2018-10-11 16:04:25: Make sure the file exists and it has read and execute access
2018-10-11 16:04:25: Info: No ora file present at  /crf/admin/crforclhostdb02.ora
2018-10-11 16:04:25: CHM repository path not found
2018-10-11 16:04:25: Executing cmd: /u01/home/grid/12.2.0.1/bin/clsecho -p has -f clsrsc -m 4006
2018-10-11 16:04:25: Command output:
>  CLSRSC-4006: Removing Oracle Trace File Analyzer (TFA) Collector.
>End Command output
2018-10-11 16:04:25: CLSRSC-4006: Removing Oracle Trace File Analyzer (TFA) Collector.
2018-10-11 16:04:25: Executing cmd: /u01/home/grid/12.2.0.1/tfa/orclhostdb02/tfa_home/bin/uninstalltfa -silent -local -crshome /u01/home/grid/12.2.0.1
2018-10-11 16:04:38: Command output:
>
>  TFA will be uninstalled on node orclhostdb02 :
>
>  Removing TFA from orclhostdb02...
>
>  Stopping TFA Support Tools...
>
>  Stopping TFA in orclhostdb02...
>
>  Shutting down TFA
>  oracle-tfa stop/waiting
>  . . . . .
>  Killing TFA running with pid 11759
>  . . .
>  Successfully shutdown TFA..
>
>  Deleting TFA support files on orclhostdb02:
>  Removing /u01/home/oracle/tfa/orclhostdb02/database...
>  Removing /u01/home/oracle/tfa/orclhostdb02/log...
>  Removing /u01/home/oracle/tfa/orclhostdb02/output...
>  Removing /u01/home/oracle/tfa/orclhostdb02...
>  Removing /u01/home/oracle/tfa...
>  Removing /etc/rc.d/rc0.d/K17init.tfa
>  Removing /etc/rc.d/rc1.d/K17init.tfa
>  Removing /etc/rc.d/rc2.d/K17init.tfa
>  Removing /etc/rc.d/rc4.d/K17init.tfa
>  Removing /etc/rc.d/rc6.d/K17init.tfa
>  Removing /etc/init.d/init.tfa...
>  Removing /u01/home/grid/12.2.0.1/bin/tfactl...
>  Removing /u01/home/grid/12.2.0.1/tfa/orclhostdb02...
>  Removing /u01/home/grid/12.2.0.1/tfa...
>
>End Command output
2018-10-11 16:04:38: Executing cmd: /u01/home/grid/12.2.0.1/bin/clsecho -p has -f clsrsc -m 4007
2018-10-11 16:04:38: Command output:
>  CLSRSC-4007: Successfully removed Oracle Trace File Analyzer (TFA) Collector.
>End Command output
2018-10-11 16:04:38: CLSRSC-4007: Successfully removed Oracle Trace File Analyzer (TFA) Collector.
2018-10-11 16:04:38: Remove init resources
2018-10-11 16:04:38: itab entries=cssd|evmd|crsd|ohasd
2018-10-11 16:04:38: Check if the startup mechanism upstart is being used
2018-10-11 16:04:38: Executing cmd: /bin/rpm -qf /sbin/init
2018-10-11 16:04:38: Command output:
>  upstart-0.6.5-16.el6.x86_64
>End Command output
2018-10-11 16:04:38: Executing cmd: /sbin/initctl list
2018-10-11 16:04:38: Command output:
>  rc stop/waiting
>  tty (/dev/tty3) start/running, process 21616
>  tty (/dev/tty2) start/running, process 21614
>  tty (/dev/tty1) start/running, process 21612
>  tty (/dev/tty6) start/running, process 21623
>  tty (/dev/tty5) start/running, process 21621
>  tty (/dev/tty4) start/running, process 21618
>  plymouth-shutdown stop/waiting
>  control-alt-delete stop/waiting
>  rcS-emergency stop/waiting
>  readahead-collector stop/waiting
>  kexec-disable stop/waiting
>  quit-plymouth stop/waiting
>  rcS stop/waiting
>  prefdm stop/waiting
>  init-system-dbus stop/waiting
>  ck-log-system-restart stop/waiting
>  readahead stop/waiting
>  ck-log-system-start stop/waiting
>  splash-manager stop/waiting
>  start-ttys stop/waiting
>  readahead-disable-services stop/waiting
>  ck-log-system-stop stop/waiting
>  rcS-sulogin stop/waiting
>  serial stop/waiting
>  oracle-ohasd start/running, process 15538
>End Command output
2018-10-11 16:04:38: Service [oracle-ohasd] running.

2018-10-11 16:04:38: Executing cmd: /sbin/initctl stop oracle-ohasd
2018-10-11 16:04:38: Command output:
>  oracle-ohasd stop/waiting
>End Command output
2018-10-11 16:04:38: Glob file list = /etc/init/oracle-ohasd.conf
2018-10-11 16:04:38: Removing file /etc/init/oracle-ohasd.conf
2018-10-11 16:04:38: Successfully removed file: /etc/init/oracle-ohasd.conf
2018-10-11 16:04:38: Removing script for Oracle Cluster Ready services
2018-10-11 16:04:38: Removing /etc/init.d/init.evmd file
2018-10-11 16:04:38: Removing /etc/init.d/init.crsd file
2018-10-11 16:04:38: Removing /etc/init.d/init.cssd file
2018-10-11 16:04:38: Removing /etc/init.d/init.crs file
2018-10-11 16:04:38: Removing /etc/init.d/init.ohasd file
2018-10-11 16:04:38: Removing file /etc/init.d/init.ohasd
2018-10-11 16:04:38: Successfully removed file: /etc/init.d/init.ohasd
2018-10-11 16:04:38: Init file = ohasd
2018-10-11 16:04:38: Removing "ohasd" from RC dirs
2018-10-11 16:04:38: Removing file /etc/rc.d/rc0.d/K15ohasd
2018-10-11 16:04:38: Successfully removed file: /etc/rc.d/rc0.d/K15ohasd
2018-10-11 16:04:38: Removing file /etc/rc.d/rc1.d/K15ohasd
2018-10-11 16:04:38: Successfully removed file: /etc/rc.d/rc1.d/K15ohasd
2018-10-11 16:04:38: Removing file /etc/rc.d/rc2.d/K15ohasd
2018-10-11 16:04:38: Successfully removed file: /etc/rc.d/rc2.d/K15ohasd
2018-10-11 16:04:38: Removing file /etc/rc.d/rc3.d/S96ohasd
2018-10-11 16:04:38: Successfully removed file: /etc/rc.d/rc3.d/S96ohasd
2018-10-11 16:04:38: Removing file /etc/rc.d/rc4.d/K15ohasd
2018-10-11 16:04:38: Successfully removed file: /etc/rc.d/rc4.d/K15ohasd
2018-10-11 16:04:38: Removing file /etc/rc.d/rc5.d/S96ohasd
2018-10-11 16:04:38: Successfully removed file: /etc/rc.d/rc5.d/S96ohasd
2018-10-11 16:04:38: Removing file /etc/rc.d/rc6.d/K15ohasd
2018-10-11 16:04:38: Successfully removed file: /etc/rc.d/rc6.d/K15ohasd
2018-10-11 16:04:38: Init file = init.crs
2018-10-11 16:04:38: Removing "init.crs" from RC dirs
2018-10-11 16:04:38: Cleaning up SCR settings in /etc/oracle/scls_scr
2018-10-11 16:04:38: Cleaning oprocd directory, and log files
2018-10-11 16:04:38: Cleaning up Network socket directories
2018-10-11 16:04:38: Unlinking file : /var/tmp/.oracle/mdnsd
2018-10-11 16:04:38: Unlinking file : /var/tmp/.oracle/mdnsd.pid
2018-10-11 16:04:38: Unlinking file : /var/tmp/.oracle/npohasd
2018-10-11 16:04:38: Unlinking file : /var/tmp/.oracle/ora_gipc_css_ctrllcl_orclhostdb02_ORCL-DEN-DB
2018-10-11 16:04:38: Unlinking file : /var/tmp/.oracle/ora_gipc_css_ctrllcl_orclhostdb02_ORCL-DEN-DB
2018-10-11 16:04:38: Unlinking file : /var/tmp/.oracle/ora_gipc_css_ctrllcl_orclhostdb02_ORCL-DEN-lock
2018-10-11 16:04:38: Unlinking file : /var/tmp/.oracle/ora_gipc_css_ctrllcl_orclhostdb02_ORCL-DEN-lock
2018-10-11 16:04:38: Unlinking file : /var/tmp/.oracle/ora_gipc_GPNPD_orclhostdb02
2018-10-11 16:04:38: Unlinking file : /var/tmp/.oracle/ora_gipc_GPNPD_orclhostdb02_lock
2018-10-11 16:04:38: Unlinking file : /var/tmp/.oracle/ora_gipc_orclhostdb02_CSSD
2018-10-11 16:04:38: Unlinking file : /var/tmp/.oracle/ora_gipc_orclhostdb02_CSSD
2018-10-11 16:04:38: Unlinking file : /var/tmp/.oracle/ora_gipc_orclhostdb02_CSSD_lock
2018-10-11 16:04:38: Unlinking file : /var/tmp/.oracle/ora_gipc_orclhostdb02_CSSD_lock
2018-10-11 16:04:38: Unlinking file : /var/tmp/.oracle/ora_gipc_orclhostdb02_EVMD
2018-10-11 16:04:38: Unlinking file : /var/tmp/.oracle/ora_gipc_orclhostdb02_EVMD
2018-10-11 16:04:38: Unlinking file : /var/tmp/.oracle/ora_gipc_orclhostdb02_EVMD_lock
2018-10-11 16:04:38: Unlinking file : /var/tmp/.oracle/ora_gipc_orclhostdb02_EVMD_lock
2018-10-11 16:04:38: Unlinking file : /var/tmp/.oracle/ora_gipc_orclhostdb02_GIPCD
2018-10-11 16:04:38: Unlinking file : /var/tmp/.oracle/ora_gipc_orclhostdb02_GIPCD_lock
2018-10-11 16:04:38: Unlinking file : /var/tmp/.oracle/ora_gipc_orclhostdb02_GPNPD
2018-10-11 16:04:38: Unlinking file : /var/tmp/.oracle/ora_gipc_orclhostdb02_GPNPD_lock
2018-10-11 16:04:38: Unlinking file : /var/tmp/.oracle/ora_gipc_orclhostdb02_INIT
2018-10-11 16:04:38: Unlinking file : /var/tmp/.oracle/ora_gipc_orclhostdb02_INIT_lock
2018-10-11 16:04:38: Unlinking file : /var/tmp/.oracle/sAevm
2018-10-11 16:04:38: Unlinking file : /var/tmp/.oracle/sAevm_lock
2018-10-11 16:04:38: Unlinking file : /var/tmp/.oracle/sOCSSD_LL_orclhostdb02_
2018-10-11 16:04:38: Unlinking file : /var/tmp/.oracle/sOCSSD_LL_orclhostdb02__lock
2018-10-11 16:04:38: Unlinking file : /var/tmp/.oracle/sOCSSD_LL_orclhostdb02_ORCL-DEN-DB
2018-10-11 16:04:38: Unlinking file : /var/tmp/.oracle/sOCSSD_LL_orclhostdb02_ORCL-DEN-lock
2018-10-11 16:04:38: Unlinking file : /var/tmp/.oracle/sOHASD_IPC_SOCKET_11
2018-10-11 16:04:38: Unlinking file : /var/tmp/.oracle/sOHASD_IPC_SOCKET_11_lock
2018-10-11 16:04:38: Unlinking file : /var/tmp/.oracle/sOHASD_UI_SOCKET
2018-10-11 16:04:38: Unlinking file : /var/tmp/.oracle/sOHASD_UI_SOCKET_lock
2018-10-11 16:04:38: Unlinking file : /var/tmp/.oracle/sOracle_CSS_LclLstnr_ORCL-DEN-1
2018-10-11 16:04:38: Unlinking file : /var/tmp/.oracle/sOracle_CSS_LclLstnr_ORCL-DEN-1_lock
2018-10-11 16:04:38: Unlinking file : /var/tmp/.oracle/sprocr_local_conn_0_PROL
2018-10-11 16:04:38: Unlinking file : /var/tmp/.oracle/sprocr_local_conn_0_PROL_lock
2018-10-11 16:04:38: Unlinking file : /var/tmp/.oracle/sSYSTEM.evm.acceptor.auth
2018-10-11 16:04:38: Unlinking file : /var/tmp/.oracle/sSYSTEM.evm.acceptor.auth_lock
2018-10-11 16:04:38: Root script is not invoked as part of deinstall. /u01/home/oracle/oradiag_root, /etc/oracle/maps, and /etc/oracle/setasmgid are not removed
2018-10-11 16:04:38: removing all contents under /u01/home/grid/12.2.0.1/gpnp/profiles/peer
2018-10-11 16:04:38: removing all contents under /u01/home/grid/12.2.0.1/gpnp/wallets/peer
2018-10-11 16:04:38: removing all contents under /u01/home/grid/12.2.0.1/gpnp/wallets/prdr
2018-10-11 16:04:38: removing all contents under /u01/home/grid/12.2.0.1/gpnp/wallets/pa
2018-10-11 16:04:38: removing all contents under /u01/home/grid/12.2.0.1/gpnp/wallets/root
2018-10-11 16:04:38: Executing /etc/init.d/ohasd deinstall
2018-10-11 16:04:38: Executing cmd: /etc/init.d/ohasd deinstall
2018-10-11 16:04:38: Removing file /etc/init.d/ohasd
2018-10-11 16:04:38: Successfully removed file: /etc/init.d/ohasd
2018-10-11 16:04:38: Remove /var/tmp/.oracle
2018-10-11 16:04:38: Remove /tmp/.oracle
2018-10-11 16:04:38: Remove /etc/oracle/lastgasp
2018-10-11 16:04:38: Removing file /etc/oracle/ocr.loc.orig
2018-10-11 16:04:38: Successfully removed file: /etc/oracle/ocr.loc.orig
2018-10-11 16:04:38: Removing file /etc/oracle/olr.loc.orig
2018-10-11 16:04:38: Successfully removed file: /etc/oracle/olr.loc.orig
2018-10-11 16:04:38: Removing the local checkpoint file /u01/home/oracle/crsdata/orclhostdb02/crsconfig/ckptGridHA_orclhostdb02.xml
2018-10-11 16:04:38: Removing file /u01/home/oracle/crsdata/orclhostdb02/crsconfig/ckptGridHA_orclhostdb02.xml
2018-10-11 16:04:38: Successfully removed file: /u01/home/oracle/crsdata/orclhostdb02/crsconfig/ckptGridHA_orclhostdb02.xml
2018-10-11 16:04:38: Removing the local checkpoint index file /u01/home/oracle/orclhostdb02/checkpoints/crsconfig/index.xml
2018-10-11 16:04:38: Removing file /u01/home/oracle/orclhostdb02/checkpoints/crsconfig/index.xml
2018-10-11 16:04:38: Successfully removed file: /u01/home/oracle/orclhostdb02/checkpoints/crsconfig/index.xml
2018-10-11 16:04:38: Removing the global checkpoint file /u01/home/oracle/crsdata/@global/crsconfig/ckptGridHA_global.xml
2018-10-11 16:04:38: Invoking "/u01/home/grid/12.2.0.1/bin/cluutil -rmfile /u01/home/oracle/crsdata/@global/crsconfig/ckptGridHA_global.xml orclhostdb02,orclhostdb01"
2018-10-11 16:04:38: trace file=/u01/home/oracle/crsdata/orclhostdb02/crsconfig/cluutil1.log
2018-10-11 16:04:38: Running as user oracle: /u01/home/grid/12.2.0.1/bin/cluutil -rmfile /u01/home/oracle/crsdata/@global/crsconfig/ckptGridHA_global.xml orclhostdb02,orclhostdb01
2018-10-11 16:04:38: s_run_as_user2: Running /bin/su oracle -c ' echo CLSRSC_START; /u01/home/grid/12.2.0.1/bin/cluutil -rmfile /u01/home/oracle/crsdata/@global/crsconfig/ckptGridHA_global.xml orclhostdb02,orclhostdb01 '
2018-10-11 16:04:39: Removing file /tmp/bhyNtg7QkG
2018-10-11 16:04:39: Successfully removed file: /tmp/bhyNtg7QkG
2018-10-11 16:04:39: pipe exit code: 0
2018-10-11 16:04:39: /bin/su successfully executed

2018-10-11 16:04:39: Removing the global checkpoint index file /u01/home/oracle/crsdata/@global/crsconfig/index.xml
2018-10-11 16:04:39: Invoking "/u01/home/grid/12.2.0.1/bin/cluutil -rmfile /u01/home/oracle/crsdata/@global/crsconfig/index.xml orclhostdb02,orclhostdb01"
2018-10-11 16:04:39: trace file=/u01/home/oracle/crsdata/orclhostdb02/crsconfig/cluutil2.log
2018-10-11 16:04:39: Running as user oracle: /u01/home/grid/12.2.0.1/bin/cluutil -rmfile /u01/home/oracle/crsdata/@global/crsconfig/index.xml orclhostdb02,orclhostdb01
2018-10-11 16:04:39: s_run_as_user2: Running /bin/su oracle -c ' echo CLSRSC_START; /u01/home/grid/12.2.0.1/bin/cluutil -rmfile /u01/home/oracle/crsdata/@global/crsconfig/index.xml orclhostdb02,orclhostdb01 '
2018-10-11 16:04:40: Removing file /tmp/xvreVHVspJ
2018-10-11 16:04:40: Successfully removed file: /tmp/xvreVHVspJ
2018-10-11 16:04:40: pipe exit code: 0
2018-10-11 16:04:40: /bin/su successfully executed

2018-10-11 16:04:40: Removing the 'crsgenconfig_params' file /u01/home/grid/12.2.0.1/crs/install/crsgenconfig_params
2018-10-11 16:04:40: Invoking "/u01/home/grid/12.2.0.1/bin/cluutil -rmfile /u01/home/grid/12.2.0.1/crs/install/crsgenconfig_params orclhostdb02,orclhostdb01"
2018-10-11 16:04:40: trace file=/u01/home/oracle/crsdata/orclhostdb02/crsconfig/cluutil3.log
2018-10-11 16:04:40: Running as user oracle: /u01/home/grid/12.2.0.1/bin/cluutil -rmfile /u01/home/grid/12.2.0.1/crs/install/crsgenconfig_params orclhostdb02,orclhostdb01
2018-10-11 16:04:40: s_run_as_user2: Running /bin/su oracle -c ' echo CLSRSC_START; /u01/home/grid/12.2.0.1/bin/cluutil -rmfile /u01/home/grid/12.2.0.1/crs/install/crsgenconfig_params orclhostdb02,orclhostdb01 '
2018-10-11 16:04:40: Removing file /tmp/t758L2EZbs
2018-10-11 16:04:40: Successfully removed file: /tmp/t758L2EZbs
2018-10-11 16:04:40: pipe exit code: 0
2018-10-11 16:04:40: /bin/su successfully executed

2018-10-11 16:04:40: removing cvuqdisk rpm
2018-10-11 16:04:40: Executing /bin/rpm -e --allmatches cvuqdisk
2018-10-11 16:04:40: Executing cmd: /bin/rpm -e --allmatches cvuqdisk
2018-10-11 16:04:40: Successfully deconfigured Oracle Clusterware stack on this node
2018-10-11 16:04:40: Executing cmd: /u01/home/grid/12.2.0.1/bin/clsecho -p has -f clsrsc -m 336
2018-10-11 16:04:40: Command output:
>  CLSRSC-336: Successfully deconfigured Oracle Clusterware stack on this node
>End Command output
2018-10-11 16:04:40: CLSRSC-336: Successfully deconfigured Oracle Clusterware stack on this node
2018-10-11 16:04:40: Executing cmd: /u01/home/grid/12.2.0.1/bin/clsecho -p has -f clsrsc -m 559 "/u01/home/grid/12.2.0.1"
2018-10-11 16:04:40: Command output:
>  CLSRSC-559: Ensure that the GPnP profile data under the 'gpnp' directory in /u01/home/grid/12.2.0.1 is deleted on each node before using the software in the current Grid Infrastructure home for reconfiguration.
>End Command output
2018-10-11 16:04:40: CLSRSC-559: Ensure that the GPnP profile data under the 'gpnp' directory in /u01/home/grid/12.2.0.1 is deleted on each node before using the software in the current Grid Infrastructure home for reconfiguration.
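
For reference, a deconfiguration trace like the one above is typically produced by running the Clusterware deconfig script as root on the node; a sketch of the usual invocation (the -lastnode flag applies only when this is the final node being removed from the cluster):

# as root, from the Grid home that is being deconfigured
/u01/home/grid/12.2.0.1/crs/install/rootcrs.sh -deconfig -force -lastnode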

Oracle 12c Grid Installation without GUI

Oracle 12c Grid Installation

NOTE: This post is meant to provide a hands-on walkthrough. Although the log files include some extraneous data, I have posted them in full so that anyone who wants to know what happens behind the scenes can see all the details.

High Level Steps

1. Pre-checks, such as:
    sudo access to root
    SCAN VIP and GoldenGate VIP
    public and private interfaces identified
    RAM settings and any other prerequisites
    You can run cluvfy to identify failures, and you can also run cluvfy in fixup mode to generate the fixes (see the sketch after this list).
2. Copy your 12c tar files:
    RDBMS binaries to the Oracle home
    Grid binaries to the Grid home
3. Set up passwordless SSH for the oracle user.
4. Create the required directories and grant the appropriate privileges,
    for example: chown -R "oracle:dba" $ORACLE_BASE $ORACLE_HOME $GRID_HOME
5. You need a disk group for storing the OCR and voting disks.
    As root, load AFD and ask your sysadmin to provide the disks for the disk groups.
6. Clone the Grid binaries using clone.pl (see the sketch after this list).
7. Decide whether you need the UDP or RDS protocol; the steps are provided below.
8. A sample grid response file is given below; use one such file for the configuration.
9. Run config.sh as the oracle user (on the first node only).
10. Run root.sh as root on all nodes.
11. Run gridSetup.sh.
12. Run runInstaller.
13. Clone the RDBMS binaries and relink.
14. Set up the listeners.
15. Create the disk groups and adjust the ASM parameters.
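
To make a few of these steps more concrete, here is a minimal sketch of what steps 1, 6 and 8-11 can look like from the command line. Treat it as an illustration only: the node names are the ones used later in this post, but the response file path (/u01/stage/grid.rsp) and the ORACLE_HOME_NAME value are placeholders, and the exact flags should be checked against the 12.2 documentation for your patch level.

# step 1: run cluvfy with -fixup to generate root-executable fixup scripts
cd $GRID_HOME
./runcluvfy.sh stage -pre crsinst -n orcldb01,orcldb02 -fixup -verbose

# step 6: clone the grid binaries that were un-tarred into $GRID_HOME
perl $GRID_HOME/clone/bin/clone.pl \
    ORACLE_HOME=$GRID_HOME \
    ORACLE_HOME_NAME=OraGrid12201 \
    ORACLE_BASE=$ORACLE_BASE

# steps 8-11: configure the cloned home silently using a response file,
# then run root.sh as root on each node when the installer asks for it
cd $GRID_HOME
./gridSetup.sh -silent -responseFile /u01/stage/grid.rsp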

Set up passwordless SSH for the oracle user: as oracle on all nodes, generate a key pair and exchange the public keys.
cd ~oracle
ssh-keygen -t rsa (accept the defaults and no passphrase)
cd /u01/app/oracle/.ssh
cat id_rsa.pub > authorized_keys
Copy the other nodes' public keys into the authorized_keys file
Then ssh to the other nodes to confirm that passwordless SSH is working (one way to do the exchange is sketched below).
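
One way to do the key exchange (a sketch, assuming ssh-copy-id is available; the hostnames are the ones used in this post):

# on orcldb01, as oracle: generate the key pair and push the public key to the other node
ssh-keygen -t rsa
ssh-copy-id oracle@orcldb02

# repeat the same two commands on orcldb02 towards orcldb01,
# then verify in both directions as shown next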


Make sure you can log in without a password from node 1:
orcldb01$ ssh orcldb02
orcldb01$ ssh orcldb02-priv
orcldb01$ ssh orcldb02.mattew.com

and vice versa.

--Run cluvfy

bash-4.1$ cd $GRID_HOME
bash-4.1$  ./runcluvfy.sh stage -pre crsinst -n orcldb01,orcldb02 -verbose          

Verifying Physical Memory ...
  Node Name     Available                 Required                  Status  
  ------------  ------------------------  ------------------------  ----------
  orcldb02     377.5119GB (3.95849944E8KB)  8GB (8388608.0KB)         passed  
  orcldb01     377.5119GB (3.95849944E8KB)  8GB (8388608.0KB)         passed  
Verifying Physical Memory ...PASSED
Verifying Available Physical Memory ...
  Node Name     Available                 Required                  Status  
  ------------  ------------------------  ------------------------  ----------
  orcldb02     375.726GB (3.93977244E8KB)  50MB (51200.0KB)          passed  
  orcldb01     373.5914GB (3.91739012E8KB)  50MB (51200.0KB)          passed  
Verifying Available Physical Memory ...PASSED
Verifying Swap Size ...
  Node Name     Available                 Required                  Status  
  ------------  ------------------------  ------------------------  ----------
  orcldb02     20GB (2.0971512E7KB)      16GB (1.6777216E7KB)      passed  
  orcldb01     20GB (2.0971508E7KB)      16GB (1.6777216E7KB)      passed  
Verifying Swap Size ...PASSED
Verifying Free Space: orcldb02:/usr,orcldb02:/var,orcldb02:/etc,orcldb02:/sbin,orcldb02:/tmp ...
  Path              Node Name     Mount point   Available     Required      Status    
  ----------------  ------------  ------------  ------------  ------------  ------------
  /usr              orcldb02     /             511.9414GB    25MB          passed    
  /var              orcldb02     /             511.9414GB    5MB           passed    
  /etc              orcldb02     /             511.9414GB    25MB          passed    
  /sbin             orcldb02     /             511.9414GB    10MB          passed    
  /tmp              orcldb02     /             511.9414GB    1GB           passed    
Verifying Free Space: orcldb02:/usr,orcldb02:/var,orcldb02:/etc,orcldb02:/sbin,orcldb02:/tmp ...PASSED
Verifying Free Space: orcldb01:/usr,orcldb01:/var,orcldb01:/etc,orcldb01:/sbin,orcldb01:/tmp ...
  Path              Node Name     Mount point   Available     Required      Status    
  ----------------  ------------  ------------  ------------  ------------  ------------
  /usr              orcldb01     /             522.0166GB    25MB          passed    
  /var              orcldb01     /             522.0166GB    5MB           passed    
  /etc              orcldb01     /             522.0166GB    25MB          passed    
  /sbin             orcldb01     /             522.0166GB    10MB          passed    
  /tmp              orcldb01     /             522.0166GB    1GB           passed    
Verifying Free Space: orcldb01:/usr,orcldb01:/var,orcldb01:/etc,orcldb01:/sbin,orcldb01:/tmp ...PASSED
Verifying User Existence: oracle ...
  Node Name     Status                    Comment              
  ------------  ------------------------  ------------------------
  orcldb02     passed                    exists(969)          
  orcldb01     passed                    exists(969)          

  Verifying Users With Same UID: 969 ...PASSED
Verifying User Existence: oracle ...PASSED
Verifying Group Existence: asmadmin ...
  Node Name     Status                    Comment              
  ------------  ------------------------  ------------------------
  orcldb02     failed                    does not exist        
  orcldb01     failed                    does not exist        
Verifying Group Existence: asmadmin ...FAILED (PRVG-10461)
Verifying Group Existence: dba ...
  Node Name     Status                    Comment              
  ------------  ------------------------  ------------------------
  orcldb02     passed                    exists                
  orcldb01     passed                    exists                
Verifying Group Existence: dba ...PASSED
Verifying Group Existence: asmdba ...
  Node Name     Status                    Comment              
  ------------  ------------------------  ------------------------
  orcldb02     failed                    does not exist        
  orcldb01     failed                    does not exist        
Verifying Group Existence: asmdba ...FAILED (PRVG-10461)
Verifying Group Membership: asmadmin ...
  Node Name         User Exists   Group Exists  User in Group  Status        
  ----------------  ------------  ------------  ------------  ----------------
  orcldb02         yes           no            no            failed        
  orcldb01         yes           no            no            failed        
Verifying Group Membership: asmadmin ...FAILED (PRVG-10460)
Verifying Group Membership: dba(Primary) ...
  Node Name         User Exists   Group Exists  User in Group  Primary       Status    
  ----------------  ------------  ------------  ------------  ------------  ------------
  orcldb02         yes           yes           yes           yes           passed    
  orcldb01         yes           yes           yes           yes           passed    
Verifying Group Membership: dba(Primary) ...PASSED
Verifying Group Membership: asmdba ...
  Node Name         User Exists   Group Exists  User in Group  Status        
  ----------------  ------------  ------------  ------------  ----------------
  orcldb02         yes           no            no            failed        
  orcldb01         yes           no            no            failed        
Verifying Group Membership: asmdba ...FAILED (PRVG-10460)
Verifying Run Level ...
  Node Name     run level                 Required                  Status  
  ------------  ------------------------  ------------------------  ----------
  orcldb02     3                         3,5                       passed  
  orcldb01     3                         3,5                       passed  
Verifying Run Level ...PASSED
Verifying Architecture ...
  Node Name     Available                 Required                  Status  
  ------------  ------------------------  ------------------------  ----------
  orcldb02     x86_64                    x86_64                    passed  
  orcldb01     x86_64                    x86_64                    passed  
Verifying Architecture ...PASSED
Verifying OS Kernel Version ...
  Node Name     Available                 Required                  Status  
  ------------  ------------------------  ------------------------  ----------
  orcldb02     4.1.12-37.4.1.el6uek.x86_64  2.6.39-400.211.1          passed  
  orcldb01     4.1.12-37.4.1.el6uek.x86_64  2.6.39-400.211.1          passed  
Verifying OS Kernel Version ...PASSED
Verifying OS Kernel Parameter: semmsl ...
  Node Name         Current       Configured    Required      Status        Comment  
  ----------------  ------------  ------------  ------------  ------------  ------------
  orcldb01         250           250           250           passed        
  orcldb02         250           250           250           passed        
Verifying OS Kernel Parameter: semmsl ...PASSED
Verifying OS Kernel Parameter: semmns ...
  Node Name         Current       Configured    Required      Status        Comment  
  ----------------  ------------  ------------  ------------  ------------  ------------
  orcldb01         32000         32000         32000         passed        
  orcldb02         32000         32000         32000         passed        
Verifying OS Kernel Parameter: semmns ...PASSED
Verifying OS Kernel Parameter: semopm ...
  Node Name         Current       Configured    Required      Status        Comment  
  ----------------  ------------  ------------  ------------  ------------  ------------
  orcldb01         100           100           100           passed        
  orcldb02         100           100           100           passed        
Verifying OS Kernel Parameter: semopm ...PASSED
Verifying OS Kernel Parameter: semmni ...
  Node Name         Current       Configured    Required      Status        Comment  
  ----------------  ------------  ------------  ------------  ------------  ------------
  orcldb01         128           128           128           passed        
  orcldb02         128           128           128           passed        
Verifying OS Kernel Parameter: semmni ...PASSED
Verifying OS Kernel Parameter: shmmax ...
  Node Name         Current       Configured    Required      Status        Comment  
  ----------------  ------------  ------------  ------------  ------------  ------------
  orcldb01         4398046511104  4398046511104  202675171328  passed        
  orcldb02         4398046511104  4398046511104  202675171328  passed        
Verifying OS Kernel Parameter: shmmax ...PASSED
Verifying OS Kernel Parameter: shmmni ...
  Node Name         Current       Configured    Required      Status        Comment  
  ----------------  ------------  ------------  ------------  ------------  ------------
  orcldb01         4096          4096          4096          passed        
  orcldb02         4096          4096          4096          passed        
Verifying OS Kernel Parameter: shmmni ...PASSED
Verifying OS Kernel Parameter: shmall ...
  Node Name         Current       Configured    Required      Status        Comment  
  ----------------  ------------  ------------  ------------  ------------  ------------
  orcldb01         4294967296    4294967296    39584994      passed        
  orcldb02         4294967296    4294967296    39584994      passed        
Verifying OS Kernel Parameter: shmall ...PASSED
Verifying OS Kernel Parameter: file-max ...
  Node Name         Current       Configured    Required      Status        Comment  
  ----------------  ------------  ------------  ------------  ------------  ------------
  orcldb01         6815744       6815744       6815744       passed        
  orcldb02         6815744       6815744       6815744       passed        
Verifying OS Kernel Parameter: file-max ...PASSED
Verifying OS Kernel Parameter: ip_local_port_range ...
  Node Name         Current       Configured    Required      Status        Comment  
  ----------------  ------------  ------------  ------------  ------------  ------------
  orcldb01         between 9000 & 65500  between 9000 & 65500  between 9000 & 65535  passed        
  orcldb02         between 9000 & 65500  between 9000 & 65500  between 9000 & 65535  passed        
Verifying OS Kernel Parameter: ip_local_port_range ...PASSED
Verifying OS Kernel Parameter: rmem_default ...
  Node Name         Current       Configured    Required      Status        Comment  
  ----------------  ------------  ------------  ------------  ------------  ------------
  orcldb01         262144        262144        262144        passed        
  orcldb02         262144        262144        262144        passed        
Verifying OS Kernel Parameter: rmem_default ...PASSED
Verifying OS Kernel Parameter: rmem_max ...
  Node Name         Current       Configured    Required      Status        Comment  
  ----------------  ------------  ------------  ------------  ------------  ------------
  orcldb01         4194304       4194304       4194304       passed        
  orcldb02         4194304       4194304       4194304       passed        
Verifying OS Kernel Parameter: rmem_max ...PASSED
Verifying OS Kernel Parameter: wmem_default ...
  Node Name         Current       Configured    Required      Status        Comment  
  ----------------  ------------  ------------  ------------  ------------  ------------
  orcldb01         262144        262144        262144        passed        
  orcldb02         262144        262144        262144        passed        
Verifying OS Kernel Parameter: wmem_default ...PASSED
Verifying OS Kernel Parameter: wmem_max ...
  Node Name         Current       Configured    Required      Status        Comment  
  ----------------  ------------  ------------  ------------  ------------  ------------
  orcldb01         1048576       1048576       1048576       passed        
  orcldb02         1048576       1048576       1048576       passed        
Verifying OS Kernel Parameter: wmem_max ...PASSED
Verifying OS Kernel Parameter: aio-max-nr ...
  Node Name         Current       Configured    Required      Status        Comment  
  ----------------  ------------  ------------  ------------  ------------  ------------
  orcldb01         1048576       1048576       1048576       passed        
  orcldb02         1048576       1048576       1048576       passed        
Verifying OS Kernel Parameter: aio-max-nr ...PASSED
Verifying OS Kernel Parameter: panic_on_oops ...
  Node Name         Current       Configured    Required      Status        Comment  
  ----------------  ------------  ------------  ------------  ------------  ------------
  orcldb01         1             1             1             passed        
  orcldb02         1             1             1             passed        
Verifying OS Kernel Parameter: panic_on_oops ...PASSED
Verifying Package: binutils-2.20.51.0.2 ...
  Node Name     Available                 Required                  Status  
  ------------  ------------------------  ------------------------  ----------
  orcldb02     binutils-2.20.51.0.2-5.44.el6  binutils-2.20.51.0.2      passed  
  orcldb01     binutils-2.20.51.0.2-5.44.el6  binutils-2.20.51.0.2      passed  
Verifying Package: binutils-2.20.51.0.2 ...PASSED
Verifying Package: compat-libcap1-1.10 ...
  Node Name     Available                 Required                  Status  
  ------------  ------------------------  ------------------------  ----------
  orcldb02     compat-libcap1-1.10-1     compat-libcap1-1.10       passed  
  orcldb01     compat-libcap1-1.10-1     compat-libcap1-1.10       passed  
Verifying Package: compat-libcap1-1.10 ...PASSED
Verifying Package: compat-libstdc++-33-3.2.3 (x86_64) ...
  Node Name     Available                 Required                  Status  
  ------------  ------------------------  ------------------------  ----------
  orcldb02     compat-libstdc++-33(x86_64)-3.2.3-69.el6  compat-libstdc++-33(x86_64)-3.2.3  passed  
  orcldb01     compat-libstdc++-33(x86_64)-3.2.3-69.el6  compat-libstdc++-33(x86_64)-3.2.3  passed  
Verifying Package: compat-libstdc++-33-3.2.3 (x86_64) ...PASSED
Verifying Package: libgcc-4.4.7 (x86_64) ...
  Node Name     Available                 Required                  Status  
  ------------  ------------------------  ------------------------  ----------
  orcldb02     libgcc(x86_64)-4.4.7-17.el6  libgcc(x86_64)-4.4.7      passed  
  orcldb01     libgcc(x86_64)-4.4.7-17.el6  libgcc(x86_64)-4.4.7      passed  
Verifying Package: libgcc-4.4.7 (x86_64) ...PASSED
Verifying Package: libstdc++-4.4.7 (x86_64) ...
  Node Name     Available                 Required                  Status  
  ------------  ------------------------  ------------------------  ----------
  orcldb02     libstdc++(x86_64)-4.4.7-17.el6  libstdc++(x86_64)-4.4.7   passed  
  orcldb01     libstdc++(x86_64)-4.4.7-17.el6  libstdc++(x86_64)-4.4.7   passed  
Verifying Package: libstdc++-4.4.7 (x86_64) ...PASSED
Verifying Package: libstdc++-devel-4.4.7 (x86_64) ...
  Node Name     Available                 Required                  Status  
  ------------  ------------------------  ------------------------  ----------
  orcldb02     libstdc++-devel(x86_64)-4.4.7-17.el6  libstdc++-devel(x86_64)-4.4.7  passed  
  orcldb01     libstdc++-devel(x86_64)-4.4.7-17.el6  libstdc++-devel(x86_64)-4.4.7  passed  
Verifying Package: libstdc++-devel-4.4.7 (x86_64) ...PASSED
Verifying Package: sysstat-9.0.4 ...
  Node Name     Available                 Required                  Status  
  ------------  ------------------------  ------------------------  ----------
  orcldb02     sysstat-9.0.4-31.el6      sysstat-9.0.4             passed  
  orcldb01     sysstat-9.0.4-31.el6      sysstat-9.0.4             passed  
Verifying Package: sysstat-9.0.4 ...PASSED
Verifying Package: ksh ...
  Node Name     Available                 Required                  Status  
  ------------  ------------------------  ------------------------  ----------
  orcldb02     ksh                       ksh                       passed  
  orcldb01     ksh                       ksh                       passed  
Verifying Package: ksh ...PASSED
Verifying Package: make-3.81 ...
  Node Name     Available                 Required                  Status  
  ------------  ------------------------  ------------------------  ----------
  orcldb02     make-3.81-23.el6          make-3.81                 passed  
  orcldb01     make-3.81-23.el6          make-3.81                 passed  
Verifying Package: make-3.81 ...PASSED
Verifying Package: glibc-2.12 (x86_64) ...
  Node Name     Available                 Required                  Status  
  ------------  ------------------------  ------------------------  ----------
  orcldb02     glibc(x86_64)-2.12-1.192.el6  glibc(x86_64)-2.12        passed  
  orcldb01     glibc(x86_64)-2.12-1.192.el6  glibc(x86_64)-2.12        passed  
Verifying Package: glibc-2.12 (x86_64) ...PASSED
Verifying Package: glibc-devel-2.12 (x86_64) ...
  Node Name     Available                 Required                  Status  
  ------------  ------------------------  ------------------------  ----------
  orcldb02     glibc-devel(x86_64)-2.12-1.192.el6  glibc-devel(x86_64)-2.12  passed  
  orcldb01     glibc-devel(x86_64)-2.12-1.192.el6  glibc-devel(x86_64)-2.12  passed  
Verifying Package: glibc-devel-2.12 (x86_64) ...PASSED
Verifying Package: libaio-0.3.107 (x86_64) ...
  Node Name     Available                 Required                  Status  
  ------------  ------------------------  ------------------------  ----------
  orcldb02     libaio(x86_64)-0.3.107-10.el6  libaio(x86_64)-0.3.107    passed  
  orcldb01     libaio(x86_64)-0.3.107-10.el6  libaio(x86_64)-0.3.107    passed  
Verifying Package: libaio-0.3.107 (x86_64) ...PASSED
Verifying Package: libaio-devel-0.3.107 (x86_64) ...
  Node Name     Available                 Required                  Status  
  ------------  ------------------------  ------------------------  ----------
  orcldb02     libaio-devel(x86_64)-0.3.107-10.el6  libaio-devel(x86_64)-0.3.107  passed  
  orcldb01     libaio-devel(x86_64)-0.3.107-10.el6  libaio-devel(x86_64)-0.3.107  passed  
Verifying Package: libaio-devel-0.3.107 (x86_64) ...PASSED
Verifying Package: nfs-utils-1.2.3-15 ...
  Node Name     Available                 Required                  Status  
  ------------  ------------------------  ------------------------  ----------
  orcldb02     nfs-utils-1.2.3-70.0.1.el6  nfs-utils-1.2.3-15        passed  
  orcldb01     nfs-utils-1.2.3-70.0.1.el6  nfs-utils-1.2.3-15        passed  
Verifying Package: nfs-utils-1.2.3-15 ...PASSED
Verifying Package: e2fsprogs-1.42.8 ...
  Node Name     Available                 Required                  Status  
  ------------  ------------------------  ------------------------  ----------
  orcldb02     e2fsprogs-1.42.8-1.0.2.el6  e2fsprogs-1.42.8          passed  
  orcldb01     e2fsprogs-1.42.8-1.0.2.el6  e2fsprogs-1.42.8          passed  
Verifying Package: e2fsprogs-1.42.8 ...PASSED
Verifying Package: e2fsprogs-libs-1.42.8 (x86_64) ...
  Node Name     Available                 Required                  Status  
  ------------  ------------------------  ------------------------  ----------
  orcldb02     e2fsprogs-libs(x86_64)-1.42.8-1.0.2.el6  e2fsprogs-libs(x86_64)-1.42.8  passed  
  orcldb01     e2fsprogs-libs(x86_64)-1.42.8-1.0.2.el6  e2fsprogs-libs(x86_64)-1.42.8  passed  
Verifying Package: e2fsprogs-libs-1.42.8 (x86_64) ...PASSED
Verifying Package: smartmontools-5.43-1 ...
  Node Name     Available                 Required                  Status  
  ------------  ------------------------  ------------------------  ----------
  orcldb02     smartmontools-5.43-1.el6  smartmontools-5.43-1      passed  
  orcldb01     smartmontools-5.43-1.el6  smartmontools-5.43-1      passed  
Verifying Package: smartmontools-5.43-1 ...PASSED
Verifying Package: net-tools-1.60-110 ...
  Node Name     Available                 Required                  Status  
  ------------  ------------------------  ------------------------  ----------
  orcldb02     net-tools-1.60-110.0.1.el6_2  net-tools-1.60-110        passed  
  orcldb01     net-tools-1.60-110.0.1.el6_2  net-tools-1.60-110        passed  
Verifying Package: net-tools-1.60-110 ...PASSED
Verifying Port Availability for component "Oracle Notification Service (ONS)" ...
  Node Name         Port Number   Protocol      Available     Status        
  ----------------  ------------  ------------  ------------  ----------------
  orcldb02         6200          TCP           yes           successful    
  orcldb01         6200          TCP           yes           successful    
  orcldb02         6100          TCP           yes           successful    
  orcldb01         6100          TCP           yes           successful    
Verifying Port Availability for component "Oracle Notification Service (ONS)" ...PASSED
Verifying Port Availability for component "Oracle Cluster Synchronization Services (CSSD)" ...
  Node Name         Port Number   Protocol      Available     Status        
  ----------------  ------------  ------------  ------------  ----------------
  orcldb02         42424         TCP           yes           successful    
  orcldb01         42424         TCP           yes           successful    
Verifying Port Availability for component "Oracle Cluster Synchronization Services (CSSD)" ...PASSED
Verifying Users With Same UID: 0 ...PASSED
Verifying Current Group ID ...PASSED
Verifying Root user consistency ...
  Node Name                             Status                
  ------------------------------------  ------------------------
  orcldb02                             passed                
  orcldb01                             passed                
Verifying Root user consistency ...PASSED
Verifying Package: cvuqdisk-1.0.10-1 ...
  Node Name     Available                 Required                  Status  
  ------------  ------------------------  ------------------------  ----------
  orcldb02     cvuqdisk-1.0.10-1         cvuqdisk-1.0.10-1         passed  
  orcldb01     cvuqdisk-1.0.10-1         cvuqdisk-1.0.10-1         passed  
Verifying Package: cvuqdisk-1.0.10-1 ...PASSED
Verifying Node Connectivity ...
  Verifying Hosts File ...
  Node Name                             Status                
  ------------------------------------  ------------------------
  orcldb01                             passed                
  orcldb02                             passed                
  Verifying Hosts File ...PASSED

Interface information for node "orcldb02"

 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth2   191.167.1.6     191.167.1.0     0.0.0.0         10.55.238.1     00:10:E0:BD:B4:5C 1500
 eth3   191.167.2.6     191.167.2.0     0.0.0.0         10.55.238.1     00:10:E0:BD:B4:5D 1500
 bond0  10.55.239.66    10.55.238.0     0.0.0.0         10.55.238.1     00:10:E0:BD:B4:5A 1500

Interface information for node "orcldb01"

 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth2   191.167.1.5     191.167.1.0     0.0.0.0         10.55.238.1     00:10:E0:BD:B5:2E 1500
 eth3   191.167.2.5     191.167.2.0     0.0.0.0         10.55.238.1     00:10:E0:BD:B5:2F 1500
 bond0  10.55.239.64    10.55.238.0     0.0.0.0         10.55.238.1     00:10:E0:BD:B5:2C 1500

Check: MTU consistency of the subnet "191.167.2.0".

  Node              Name          IP Address    Subnet        MTU          
  ----------------  ------------  ------------  ------------  ----------------
  orcldb02         eth3          191.167.2.6   191.167.2.0   1500          
  orcldb01         eth3          191.167.2.5   191.167.2.0   1500          

Check: MTU consistency of the subnet "191.167.1.0".

  Node              Name          IP Address    Subnet        MTU          
  ----------------  ------------  ------------  ------------  ----------------
  orcldb02         eth2          191.167.1.6   191.167.1.0   1500          
  orcldb01         eth2          191.167.1.5   191.167.1.0   1500          

Check: MTU consistency of the subnet "10.55.238.0".

  Node              Name          IP Address    Subnet        MTU          
  ----------------  ------------  ------------  ------------  ----------------
  orcldb02         bond0         10.55.239.66  10.55.238.0   1500          
  orcldb01         bond0         10.55.239.64  10.55.238.0   1500          
  Verifying Check that maximum (MTU) size packet goes through subnet ...PASSED

  Source                          Destination                     Connected?    
  ------------------------------  ------------------------------  ----------------
  orcldb01[eth3:191.167.2.5]     orcldb02[eth3:191.167.2.6]     yes          

  Source                          Destination                     Connected?    
  ------------------------------  ------------------------------  ----------------
  orcldb01[eth2:191.167.1.5]     orcldb02[eth2:191.167.1.6]     yes          

  Source                          Destination                     Connected?    
  ------------------------------  ------------------------------  ----------------
  orcldb01[bond0:10.55.239.64]   orcldb02[bond0:10.55.239.66]   yes          
  Verifying subnet mask consistency for subnet "191.167.2.0" ...PASSED
  Verifying subnet mask consistency for subnet "191.167.1.0" ...PASSED
  Verifying subnet mask consistency for subnet "10.55.238.0" ...PASSED
Verifying Node Connectivity ...PASSED
Verifying Multicast check ...
Checking subnet "191.167.2.0" for multicast communication with multicast group "224.0.0.251"
Verifying Multicast check ...PASSED
Verifying Network Time Protocol (NTP) ...
  Verifying '/etc/ntp.conf' ...
  Node Name                             File exists?          
  ------------------------------------  ------------------------
  orcldb02                             yes                  
  orcldb01                             yes                  

  Verifying '/etc/ntp.conf' ...PASSED
  Verifying '/var/run/ntpd.pid' ...
  Node Name                             File exists?          
  ------------------------------------  ------------------------
  orcldb02                             yes                  
  orcldb01                             yes                  

  Verifying '/var/run/ntpd.pid' ...PASSED
  Verifying Daemon 'ntpd' ...
  Node Name                             Running?              
  ------------------------------------  ------------------------
  orcldb02                             yes                  
  orcldb01                             yes                  

  Verifying Daemon 'ntpd' ...PASSED
  Verifying NTP daemon or service using UDP port 123 ...
  Node Name                             Port Open?            
  ------------------------------------  ------------------------
  orcldb02                             yes                  
  orcldb01                             yes                  

  Verifying NTP daemon or service using UDP port 123 ...PASSED
  Verifying NTP daemon is synchronized with at least one external time source ...PASSED
Verifying Network Time Protocol (NTP) ...PASSED
Verifying Same core file name pattern ...PASSED
Verifying User Mask ...
  Node Name     Available                 Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  orcldb02     0022                      0022                      passed  
  orcldb01     0022                      0022                      passed  
Verifying User Mask ...PASSED
Verifying User Not In Group "root": oracle ...
  Node Name     Status                    Comment              
  ------------  ------------------------  ------------------------
  orcldb02     passed                    does not exist        
  orcldb01     passed                    does not exist        
Verifying User Not In Group "root": oracle ...PASSED
Verifying Time zone consistency ...PASSED
Verifying resolv.conf Integrity ...
  Verifying (Linux) resolv.conf Integrity ...
  Node Name                             Status                
  ------------------------------------  ------------------------
  orcldb01                             passed                
  orcldb02                             passed                

  checking response for name "orcldb02" from each of the name servers
  specified in "/etc/resolv.conf"

  Node Name     Source                    Comment                   Status  
  ------------  ------------------------  ------------------------  ----------
  orcldb02     10.55.64.23               IPv4                      passed  
  orcldb02     10.55.64.26               IPv4                      passed  

  checking response for name "orcldb01" from each of the name servers
  specified in "/etc/resolv.conf"

  Node Name     Source                    Comment                   Status  
  ------------  ------------------------  ------------------------  ----------
  orcldb01     10.55.64.23               IPv4                      passed  
  orcldb01     10.55.64.26               IPv4                      passed  
  Verifying (Linux) resolv.conf Integrity ...PASSED
Verifying resolv.conf Integrity ...PASSED
Verifying DNS/NIS name service ...PASSED
Verifying Domain Sockets ...PASSED
Verifying /boot mount ...PASSED
Verifying Daemon "avahi-daemon" not configured and running ...
  Node Name     Configured                Status                
  ------------  ------------------------  ------------------------
  orcldb02     no                        passed                
  orcldb01     no                        passed                

  Node Name     Running?                  Status                
  ------------  ------------------------  ------------------------
  orcldb02     no                        passed                
  orcldb01     no                        passed                
Verifying Daemon "avahi-daemon" not configured and running ...PASSED
Verifying Daemon "proxyt" not configured and running ...
  Node Name     Configured                Status                
  ------------  ------------------------  ------------------------
  orcldb02     no                        passed                
  orcldb01     no                        passed                

  Node Name     Running?                  Status                
  ------------  ------------------------  ------------------------
  orcldb02     no                        passed                
  orcldb01     no                        passed                
Verifying Daemon "proxyt" not configured and running ...PASSED
Verifying User Equivalence ...PASSED
Verifying /dev/shm mounted as temporary file system ...PASSED
Verifying File system mount options for path /var ...PASSED
Verifying zeroconf check ...PASSED
Verifying ASM Filter Driver configuration ...WARNING (PRVE-10237, PRVE-10239)

Pre-check for cluster services setup was unsuccessful on all the nodes.


Failures were encountered during execution of CVU verification request "stage -pre crsinst".

Verifying Group Existence: asmadmin ...FAILED
orcldb02: PRVG-10461 : Group "asmadmin" selected for privileges "OSASM" does
           not exist on node "orcldb02".

orcldb01: PRVG-10461 : Group "asmadmin" selected for privileges "OSASM" does
           not exist on node "orcldb01".

Verifying Group Existence: asmdba ...FAILED
orcldb02: PRVG-10461 : Group "asmdba" selected for privileges "OSDBA" does not
           exist on node "orcldb02".

orcldb01: PRVG-10461 : Group "asmdba" selected for privileges "OSDBA" does not
           exist on node "orcldb01".

Verifying Group Membership: asmadmin ...FAILED
orcldb02: PRVG-10460 : User "oracle" does not belong to group "asmadmin"
           selected for privileges "OSASM" on node "orcldb02".

orcldb01: PRVG-10460 : User "oracle" does not belong to group "asmadmin"
           selected for privileges "OSASM" on node "orcldb01".

Verifying Group Membership: asmdba ...FAILED
orcldb02: PRVG-10460 : User "oracle" does not belong to group "asmdba"
           selected for privileges "OSDBA" on node "orcldb02".

orcldb01: PRVG-10460 : User "oracle" does not belong to group "asmdba"
           selected for privileges "OSDBA" on node "orcldb01".

Verifying ASM Filter Driver configuration ...WARNING
orcldb02: PRVE-10237 : Existence of files
           "/lib/modules/4.1.12-32.el6uek.x86_64/extra/oracle/oracleafd.ko,/lib/
           modules/4.1.12-37.4.1.el6uek.x86_64/weak-updates/oracle/oracleafd.ko,
           /opt/oracle/extapi/64/asm/orcl/1/libafd12.so" is not expected on
           node "orcldb02" before Clusterware installation or upgrade.
orcldb02: PRVE-10239 : ASM Filter Driver "oracleafd" is not expected to be
           loaded on node "orcldb02" before Clusterware installation or
           upgrade.

orcldb01: PRVE-10237 : Existence of files
           "/lib/modules/4.1.12-32.el6uek.x86_64/extra/oracle/oracleafd.ko,/lib/
           modules/4.1.12-37.4.1.el6uek.x86_64/weak-updates/oracle/oracleafd.ko,
           /opt/oracle/extapi/64/asm/orcl/1/libafd12.so" is not expected on
           node "orcldb01" before Clusterware installation or upgrade.
orcldb01: PRVE-10239 : ASM Filter Driver "oracleafd" is not expected to be
           loaded on node "orcldb01" before Clusterware installation or
           upgrade.


CVU operation performed:      stage -pre crsinst
Date:                         Oct 12, 2018 11:32:46 AM
CVU home:                     /u01/app/grid/12.2.0.1/
User:                         oracle
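Note that the only hard failures in the pre-check above are the asmadmin/asmdba group checks (PRVG-10461, PRVG-10460), plus a warning about leftover ASM Filter Driver files (PRVE-10237, PRVE-10239). In this build both OSASM and OSDBA are later mapped to the existing dba group in the response file (see grid12c.rsp below), which is presumably why the configuration is later run with -ignorePrereqFailure. If dedicated role groups were wanted instead, a minimal sketch of the fix (run as root on both nodes; the GID values are examples only) would be:

# hypothetical remediation for PRVG-10461 / PRVG-10460 -- create the role
# groups and add the oracle user to them (repeat on orcldb01 and orcldb02)
groupadd -g 54327 asmdba
groupadd -g 54329 asmadmin
usermod -a -G asmdba,asmadmin oracle

# quick look at the ASM Filter Driver leftovers flagged by PRVE-10237/10239
lsmod | grep -i oracleafd
ls -l /opt/oracle/extapi/64/asm/orcl/1/libafd12.so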




bash-4.1$  ./runcluvfy.sh comp nodecon -i bond0,eth2,eth3  -n orcldb01,orcldb02 -verbose

Verifying Node Connectivity ...
  Verifying Hosts File ...
  Node Name                             Status                
  ------------------------------------  ------------------------
  orcldb01                             passed                
  orcldb02                             passed                
  Verifying Hosts File ...PASSED

Interface information for node "orcldb02"

 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 bond0  10.55.239.66    10.55.238.0     0.0.0.0         10.55.238.1     00:10:E0:BD:B4:5A 1500
 eth2   191.167.1.6     191.167.1.0     0.0.0.0         10.55.238.1     00:10:E0:BD:B4:5C 1500
 eth3   191.167.2.6     191.167.2.0     0.0.0.0         10.55.238.1     00:10:E0:BD:B4:5D 1500

Interface information for node "orcldb01"

 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 bond0  10.55.239.64    10.55.238.0     0.0.0.0         10.55.238.1     00:10:E0:BD:B5:2C 1500
 eth2   191.167.1.5     191.167.1.0     0.0.0.0         10.55.238.1     00:10:E0:BD:B5:2E 1500
 eth3   191.167.2.5     191.167.2.0     0.0.0.0         10.55.238.1     00:10:E0:BD:B5:2F 1500

Check: MTU consistency of the subnet "191.167.2.0".

  Node              Name          IP Address    Subnet        MTU          
  ----------------  ------------  ------------  ------------  ----------------
  orcldb02         eth3          191.167.2.6   191.167.2.0   1500          
  orcldb01         eth3          191.167.2.5   191.167.2.0   1500          

Check: MTU consistency of the subnet "191.167.1.0".

  Node              Name          IP Address    Subnet        MTU          
  ----------------  ------------  ------------  ------------  ----------------
  orcldb02         eth2          191.167.1.6   191.167.1.0   1500          
  orcldb01         eth2          191.167.1.5   191.167.1.0   1500          

Check: MTU consistency of the subnet "10.55.238.0".

  Node              Name          IP Address    Subnet        MTU          
  ----------------  ------------  ------------  ------------  ----------------
  orcldb02         bond0         10.55.239.66  10.55.238.0   1500          
  orcldb01         bond0         10.55.239.64  10.55.238.0   1500          

  Source                          Destination                     Connected?    
  ------------------------------  ------------------------------  ----------------
  orcldb01[eth3:191.167.2.5]     orcldb02[eth3:191.167.2.6]     yes          

  Source                          Destination                     Connected?    
  ------------------------------  ------------------------------  ----------------
  orcldb01[eth2:191.167.1.5]     orcldb02[eth2:191.167.1.6]     yes          

  Source                          Destination                     Connected?    
  ------------------------------  ------------------------------  ----------------
  orcldb01[bond0:10.55.239.64]   orcldb02[bond0:10.55.239.66]   yes          
  Verifying Check that maximum (MTU) size packet goes through subnet ...PASSED
  Verifying subnet mask consistency for subnet "191.167.2.0" ...PASSED
  Verifying subnet mask consistency for subnet "191.167.1.0" ...PASSED
  Verifying subnet mask consistency for subnet "10.55.238.0" ...PASSED
Verifying Node Connectivity ...PASSED
Verifying Multicast check ...
Checking subnet "191.167.2.0" for multicast communication with multicast group "224.0.0.251"
Verifying Multicast check ...PASSED

Verification of node connectivity was successful.

CVU operation performed:      node connectivity
Date:                         Oct 12, 2018 11:36:19 AM
CVU home:                     /u01/app/grid/12.2.0.1/
User:                         oracle
bash-4.1$
bash-4.1$ netstat -nr
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
0.0.0.0         10.55.238.1     0.0.0.0         UG        0 0          0 bond0
10.55.238.0     0.0.0.0         255.255.254.0   U         0 0          0 bond0
191.167.1.0     0.0.0.0         255.255.255.0   U         0 0          0 eth2
191.167.2.0     0.0.0.0         255.255.255.0   U         0 0          0 eth3
bash-4.1$
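Besides cluvfy, the interconnect paths can be spot-checked by hand using the interface details shown above; a quick sanity check from orcldb01 (addresses taken from the interface listing, -I binds ping to a specific local interface) might look like:

# manual cross-check of each private/public path from orcldb01 to orcldb02
ping -c 2 -I eth2  191.167.1.6
ping -c 2 -I eth3  191.167.2.6
ping -c 2 -I bond0 10.55.239.66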
bash-4.1$ ./runcluvfy.sh comp nodereach -n orcldb02

Verifying Node Reachability ...PASSED

Verification of node reachability was successful.

CVU operation performed:      node reachability
Date:                         Oct 12, 2018 11:38:11 AM
CVU home:                     /u01/app/grid/12.2.0.1/
User:                         oracle
bash-4.1$

bash-4.1$ ./runcluvfy.sh comp nodereach -n orcldb02 -verbose  

Verifying Node Reachability ...
  Destination Node                      Reachable?            
  ------------------------------------  ------------------------
  orcldb02                             yes                  
Verifying Node Reachability ...PASSED

Verification of node reachability was successful.

CVU operation performed:      node reachability
Date:                         Oct 12, 2018 11:39:05 AM
CVU home:                     /u01/app/grid/12.2.0.1/
User:                         oracle

bash-4.1$ ls /u01/app/oracle
admin  audit  bin  crsdata  def  diag  log  logs  lost+found  product   utils


bash-4.1$
bash-4.1$ sudo chown -R oracle:dba /u01/app/oracle/diag
bash-4.1$
bash-4.1$ cd /u01/app/grid/12.2.0.1
bash-4.1$
bash-4.1$ id
uid=969(oracle) gid=533(dba) groups=533(dba),54321(oinstall)
bash-4.1$
bash-4.1$ /u01/app/grid/12.2.0.1/perl/bin/perl /u01/app/grid/12.2.0.1/clone/bin/clone.pl -silent ORACLE_BASE=/u01/app/oracle ORACLE_HOME=/u01/app/grid/12.2.0.1 ORACLE_HOME_NAME=OraGrid122  INVENTORY_LOCATION=/u01/app/oracle/oraInventory LOCAL_NODE=orcldb01  CRS=TRUE
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 500 MB.   Actual 509781 MB    Passed
Checking swap space: must be greater than 500 MB.   Actual 20479 MB    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2018-10-12_11-59-40AM.
Please wait ...[WARNING] [INS-32008] Oracle base location can't be same as the user home directory.
   CAUSE: The specified Oracle base is same as the user home directory.
   ACTION: Provide an Oracle base location other than the user home directory.
You can find the log of this install session at:
 /u01/app/oracle/oraInventory/logs/cloneActions2018-10-12_11-59-40AM.log
..................................................   5% Done.
..................................................   10% Done.
..................................................   15% Done.
..................................................   20% Done.
..................................................   25% Done.
..................................................   30% Done.
..................................................   35% Done.
..................................................   40% Done.
..................................................   45% Done.
..................................................   50% Done.
..................................................   55% Done.
..................................................   60% Done.
..................................................   65% Done.
..................................................   70% Done.
..................................................   75% Done.
..................................................   80% Done.
..................................................   85% Done.
..........
Copy files in progress.

Copy files successful.

Link binaries in progress.

Link binaries successful.

Setup files in progress.

Setup files successful.

Setup Inventory in progress.

Setup Inventory successful.

Finish Setup successful.
The cloning of OraGrid122 was successful.
Please check '/u01/app/oracle/oraInventory/logs/cloneActions2018-10-12_11-59-40AM.log' for more details.

Setup Oracle Base in progress.

Setup Oracle Base successful.
..................................................   95% Done.

As a root user, execute the following script(s):
        1. /u01/app/oracle/oraInventory/orainstRoot.sh
        2. /u01/app/grid/12.2.0.1/root.sh



..................................................   100% Done.
bash-4.1$
bash-4.1$ # now do it on the other node as well
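For reference, the clone step on the second node is the same command with LOCAL_NODE switched; a sketch, assuming the grid home has already been staged at the same path on orcldb02:

# same clone.pl invocation as above, adjusted for orcldb02
/u01/app/grid/12.2.0.1/perl/bin/perl /u01/app/grid/12.2.0.1/clone/bin/clone.pl -silent \
    ORACLE_BASE=/u01/app/oracle \
    ORACLE_HOME=/u01/app/grid/12.2.0.1 \
    ORACLE_HOME_NAME=OraGrid122 \
    INVENTORY_LOCATION=/u01/app/oracle/oraInventory \
    LOCAL_NODE=orcldb02 \
    CRS=TRUE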


bash-4.1$ sudo bash
[root@orcldb01 12.2.0.1]#
[root@orcldb01 12.2.0.1]#  /u01/app/oracle/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oracle/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oracle/oraInventory to dba.
The execution of the script is complete.
[root@orcldb01 12.2.0.1]# id
uid=0(root) gid=0(root) groups=0(root)
[root@orcldb01 12.2.0.1]#
[root@orcldb01 12.2.0.1]# exit
bash-4.1$
bash-4.1$ id
uid=969(oracle) gid=533(dba) groups=533(dba),54321(oinstall)


# Since we are using straight Ethernet cables and no InfiniBand, I am converting the interconnect protocol from RDS to UDP.

bash-4.1$ # now do it on the other node
bash-4.1$
bash-4.1$ $ORACLE_HOME/bin/skgxpinfo -v
Oracle RDS/IP (generic)
bash-4.1$ $GRID_HOME/bin/skgxpinfo -v          
Oracle UDP/IP (generic)
bash-4.1$  make -f $ORACLE_HOME/rdbms/lib/ins_rdbms.mk ipc_g ioracle
rm -f /u01/app/oracle/product/12.2.0.1/lib/libskgxp12.so
cp /u01/app/oracle/product/12.2.0.1/lib//libskgxpg.so /u01/app/oracle/product/12.2.0.1/lib/libskgxp12.so
chmod 755 /u01/app/oracle/product/12.2.0.1/bin

 - Linking Oracle
rm -f /u01/app/oracle/product/12.2.0.1/rdbms/lib/oracle
/u01/app/oracle/product/12.2.0.1/bin/orald  -o /u01/app/oracle/product/12.2.0.1/rdbms/lib/oracle -m64 -z noexecstack -Wl,--disable-new-dtags -L/u01/app/oracle/product/12.2.0.1/rdbms/lib/ -L/u01/app/oracle/product/12.2.0.1/lib/ -L/u01/app/oracle/product/12.2.0.1/lib/stubs/   -Wl,-E /u01/app/oracle/product/12.2.0.1/rdbms/lib/opimai.o /u01/app/oracle/product/12.2.0.1/rdbms/lib/ssoraed.o /u01/app/oracle/product/12.2.0.1/rdbms/lib/ttcsoi.o -Wl,--whole-archive -lperfsrv12 -Wl,--no-whole-archive /u01/app/oracle/product/12.2.0.1/lib/nautab.o /u01/app/oracle/product/12.2.0.1/lib/naeet.o /u01/app/oracle/product/12.2.0.1/lib/naect.o /u01/app/oracle/product/12.2.0.1/lib/naedhs.o /u01/app/oracle/product/12.2.0.1/rdbms/lib/config.o  -ldmext -lserver12 -lodm12 -lofs -lcell12 -lnnet12 -lskgxp12 -lsnls12 -lnls12  -lcore12 -lsnls12 -lnls12 -lcore12 -lsnls12 -lnls12 -lxml12 -lcore12 -lunls12 -lsnls12 -lnls12 -lcore12 -lnls12 -lclient12  -lvsn12 -lcommon12 -lgeneric12 -lknlopt `if /usr/bin/ar tv /u01/app/oracle/product/12.2.0.1/rdbms/lib/libknlopt.a | grep xsyeolap.o > /dev/null 2>&1 ; then echo "-loraolap12" ; fi` -lskjcx12 -lslax12 -lpls12  -lrt -lplp12 -ldmext -lserver12 -lclient12  -lvsn12 -lcommon12 -lgeneric12 `if [ -f /u01/app/oracle/product/12.2.0.1/lib/libavserver12.a ] ; then echo "-lavserver12" ; else echo "-lavstub12"; fi` `if [ -f /u01/app/oracle/product/12.2.0.1/lib/libavclient12.a ] ; then echo "-lavclient12" ; fi` -lknlopt -lslax12 -lpls12  -lrt -lplp12 -ljavavm12 -lserver12  -lwwg  `cat /u01/app/oracle/product/12.2.0.1/lib/ldflags`    -lncrypt12 -lnsgr12 -lnzjs12 -ln12 -lnl12 -lngsmshd12 -lnro12 `cat /u01/app/oracle/product/12.2.0.1/lib/ldflags`    -lncrypt12 -lnsgr12 -lnzjs12 -ln12 -lnl12 -lngsmshd12 -lnnzst12 -lzt12 -lztkg12 -lmm -lsnls12 -lnls12  -lcore12 -lsnls12 -lnls12 -lcore12 -lsnls12 -lnls12 -lxml12 -lcore12 -lunls12 -lsnls12 -lnls12 -lcore12 -lnls12 -lztkg12 `cat /u01/app/oracle/product/12.2.0.1/lib/ldflags`    -lncrypt12 -lnsgr12 -lnzjs12 -ln12 -lnl12 -lngsmshd12 -lnro12 `cat /u01/app/oracle/product/12.2.0.1/lib/ldflags`    -lncrypt12 -lnsgr12 -lnzjs12 -ln12 -lnl12 -lngsmshd12 -lnnzst12 -lzt12 -lztkg12   -lsnls12 -lnls12  -lcore12 -lsnls12 -lnls12 -lcore12 -lsnls12 -lnls12 -lxml12 -lcore12 -lunls12 -lsnls12 -lnls12 -lcore12 -lnls12 `if /usr/bin/ar tv /u01/app/oracle/product/12.2.0.1/rdbms/lib/libknlopt.a | grep "kxmnsd.o" > /dev/null 2>&1 ; then echo " " ; else echo "-lordsdo12 -lserver12"; fi` -L/u01/app/oracle/product/12.2.0.1/ctx/lib/ -lctxc12 -lctx12 -lzx12 -lgx12 -lctx12 -lzx12 -lgx12 -lordimt12 -lclsra12 -ldbcfg12 -lhasgen12 -lskgxn2 -lnnzst12 -lzt12 -lxml12 -lgeneric12 -locr12 -locrb12 -locrutl12 -lhasgen12 -lskgxn2 -lnnzst12 -lzt12 -lxml12 -lgeneric12  -lgeneric12 -lorazip -loraz -llzopro5 -lorabz2 -lipp_z -lipp_bz2 -lippdcemerged -lippsemerged -lippdcmerged  -lippsmerged -lippcore  -lippcpemerged -lippcpmerged  -lsnls12 -lnls12  -lcore12 -lsnls12 -lnls12 -lcore12 -lsnls12 -lnls12 -lxml12 -lcore12 -lunls12 -lsnls12 -lnls12 -lcore12 -lnls12 -lsnls12 -lunls12  -lsnls12 -lnls12  -lcore12 -lsnls12 -lnls12 -lcore12 -lsnls12 -lnls12 -lxml12 -lcore12 -lunls12 -lsnls12 -lnls12 -lcore12 -lnls12 -lasmclnt12 -lcommon12 -lcore12  -laio -lons  -lfthread12   `cat /u01/app/oracle/product/12.2.0.1/lib/sysliblist` -Wl,-rpath,/u01/app/oracle/product/12.2.0.1/lib -lm    `cat /u01/app/oracle/product/12.2.0.1/lib/sysliblist` -ldl -lm   -L/u01/app/oracle/product/12.2.0.1/lib `test -x /usr/bin/hugeedit -a -r /usr/lib64/libhugetlbfs.so && test -r 
/u01/app/oracle/product/12.2.0.1/rdbms/lib/shugetlbfs.o && echo -Wl,-zcommon-page-size=2097152 -Wl,-zmax-page-size=2097152 -lhugetlbfs`
test ! -f /u01/app/oracle/product/12.2.0.1/bin/oracle || (\
           mv -f /u01/app/oracle/product/12.2.0.1/bin/oracle /u01/app/oracle/product/12.2.0.1/bin/oracleO &&\
           chmod 600 /u01/app/oracle/product/12.2.0.1/bin/oracleO )
mv /u01/app/oracle/product/12.2.0.1/rdbms/lib/oracle /u01/app/oracle/product/12.2.0.1/bin/oracle
chmod 6751 /u01/app/oracle/product/12.2.0.1/bin/oracle
bash-4.1$
bash-4.1$
bash-4.1$ $ORACLE_HOME/bin/skgxpinfo -v                            
Oracle UDP/IP (generic)
bash-4.1$ $GRID_HOME/bin/skgxpinfo -v                            
Oracle UDP/IP (generic)
bash-4.1$
bash-4.1$

/*
If you ever need to convert back from UDP to RDS, follow the steps below:

export ORACLE_HOME=/u01/app/grid/12.2.0.1
cd $ORACLE_HOME/rdbms/lib;
make -f ins_rdbms.mk ipc_rds ioracle
*/
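To confirm the relink took effect in both homes on both nodes, skgxpinfo can be checked remotely as well; a minimal sketch, relying on the SSH user equivalence that cluvfy already verified and on the same home paths used above:

# report the IPC protocol compiled into each home on each node
for node in orcldb01 orcldb02; do
  for home in /u01/app/oracle/product/12.2.0.1 /u01/app/grid/12.2.0.1; do
    echo "== $node : $home =="
    ssh $node $home/bin/skgxpinfo -v
  done
done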



bash-4.1$ cp /tmp/grid12c.raj  /u01/app/grid/12.2.0.1/install/response/grid12c.rsp
bash-4.1$ scp  /u01/app/grid/12.2.0.1/install/response/grid12c.rsp orcldb02:/u01/app/grid/12.2.0.1/install/response/grid12c.rsp
grid12c.rsp                                                                                                                                                     100%   34KB  34.1KB/s   00:00  


bash-4.1$
bash-4.1$ cat  /u01/app/grid/12.2.0.1/install/response/grid12c.rsp
###############################################################################
## Copyright(c) Oracle Corporation 1998,2017. All rights reserved.           ##
##                                                                           ##
## Specify values for the variables listed below to customize                ##
## your installation.                                                        ##
##                                                                           ##
## Each variable is associated with a comment. The comment                   ##
## can help to populate the variables with the appropriate                   ##
## values.                                                                   ##
##                                                                           ##
## IMPORTANT NOTE: This file contains plain text passwords and               ##
## should be secured to have read permission only by oracle user             ##
## or db administrator who owns this installation.                           ##
##                                                                           ##
###############################################################################

###############################################################################
##                                                                           ##
## Instructions to fill this response file                                   ##
## To install and configure 'Grid Infrastructure for Cluster'                ##
##  - Fill out sections A,B,C,D,E,F and G                                    ##
##  - Fill out section G if OCR and voting disk should be placed on ASM      ##
##                                                                           ##
## To install and configure 'Grid Infrastructure for Standalone server'      ##
##  - Fill out sections A,B and G                                            ##
##                                                                           ##
## To install software for 'Grid Infrastructure'                             ##
##  - Fill out sections A,B and C                                            ##
##                                                                           ##
## To upgrade clusterware and/or Automatic storage management of earlier     ##
## releases                                                                  ##
##  - Fill out sections A,B,C,D and H                                        ##
##                                                                           ##
###############################################################################

#------------------------------------------------------------------------------
# Do not change the following system generated value.
#------------------------------------------------------------------------------
oracle.install.responseFileVersion=/oracle/install/rspfmt_crsinstall_response_schema_v12.2.0

###############################################################################
#                                                                             #
#                          SECTION A - BASIC                                  #
#                                                                             #
###############################################################################


#-------------------------------------------------------------------------------
# Specify the location which holds the inventory files.
# This is an optional parameter if installing on
# Windows based Operating System.
#-------------------------------------------------------------------------------
INVENTORY_LOCATION=/u01/app/oracle/oraInventory

#-------------------------------------------------------------------------------
# Specify the installation option.
# Allowed values: CRS_CONFIG or HA_CONFIG or UPGRADE or CRS_SWONLY or HA_SWONLY
#   - CRS_CONFIG : To configure Grid Infrastructure for cluster
#   - HA_CONFIG  : To configure Grid Infrastructure for stand alone server
#   - UPGRADE    : To upgrade clusterware software of earlier release
#   - CRS_SWONLY : To install clusterware files only (can be configured for cluster
#                  or stand alone server later)
#   - HA_SWONLY  : To install clusterware files only (can be configured for stand
#                  alone server later. This is only supported on Windows.)
#-------------------------------------------------------------------------------
oracle.install.option=CRS_CONFIG

#-------------------------------------------------------------------------------
# Specify the complete path of the Oracle Base.
#-------------------------------------------------------------------------------
ORACLE_BASE=/u01/app/oracle
#-------------------------------------------------------------------------------
# Specify the complete path of the Oracle Home.
#-------------------------------------------------------------------------------

################################################################################
#                                                                              #
#                              SECTION B - GROUPS                              #
#                                                                              #
#   The following three groups need to be assigned for all GI installations.   #
#   OSDBA and OSOPER can be the same or different.  OSASM must be different    #
#   than the other two.                                                        #
#   The value to be specified for OSDBA, OSOPER and OSASM group is only for    #
#   Unix based Operating System.                                               #
#                                                                              #
################################################################################
#-------------------------------------------------------------------------------
# The OSDBA_GROUP is the OS group which is to be granted SYSDBA privileges.
#-------------------------------------------------------------------------------
oracle.install.asm.OSDBA=dba

#-------------------------------------------------------------------------------
# The OSOPER_GROUP is the OS group which is to be granted SYSOPER privileges.
# The value to be specified for OSOPER group is optional.
# Value should not be provided if configuring Client Cluster - i.e. storageOption=CLIENT_ASM_STORAGE.
#-------------------------------------------------------------------------------
oracle.install.asm.OSOPER=

#-------------------------------------------------------------------------------
# The OSASM_GROUP is the OS group which is to be granted SYSASM privileges. This
# must be different than the previous two.
#-------------------------------------------------------------------------------
oracle.install.asm.OSASM=dba

################################################################################
#                                                                              #
#                           SECTION C - SCAN                                   #
#                                                                              #
################################################################################
#-------------------------------------------------------------------------------
# Specify a name for SCAN
#-------------------------------------------------------------------------------
oracle.install.crs.config.gpnp.scanName=orcl-den-live.db.mattew.com

#-------------------------------------------------------------------------------
# Specify an unused port number for the SCAN service
#-------------------------------------------------------------------------------

oracle.install.crs.config.gpnp.scanPort=1521

################################################################################
#                                                                              #
#                           SECTION D - CLUSTER & GNS                         #
#                                                                              #
################################################################################
#-------------------------------------------------------------------------------
# Specify the required cluster configuration
# Allowed values: STANDALONE, DOMAIN, MEMBERDB, MEMBERAPP
#-------------------------------------------------------------------------------
oracle.install.crs.config.ClusterConfiguration=

#-------------------------------------------------------------------------------
# Specify 'true' if you would like to configure the cluster as Extended, else
# specify 'false'
#
# Applicable only for STANDALONE and DOMAIN cluster configuration
#-------------------------------------------------------------------------------
oracle.install.crs.config.configureAsExtendedCluster=false


#-------------------------------------------------------------------------------
# Specify the Member Cluster Manifest file
#
# Applicable only for MEMBERDB and MEMBERAPP cluster configuration
#-------------------------------------------------------------------------------
oracle.install.crs.config.memberClusterManifestFile=

#-------------------------------------------------------------------------------
# Specify a name for the Cluster you are creating.
#
# The maximum length allowed for clustername is 15 characters. The name can be
# any combination of lower and uppercase alphabets (A - Z), (0 - 9), hyphen(-)
# and underscore(_).
#
# Applicable only for STANDALONE and DOMAIN cluster configuration
#-------------------------------------------------------------------------------
oracle.install.crs.config.clusterName=orcl-den-live

#-------------------------------------------------------------------------------
# Applicable only for STANDALONE, DOMAIN, MEMBERDB cluster configuration.
# Specify 'true' if you would like to configure Grid Naming Service(GNS), else
# specify 'false'
#-------------------------------------------------------------------------------
oracle.install.crs.config.gpnp.configureGNS=false

#-------------------------------------------------------------------------------
# Applicable only for STANDALONE and DOMAIN cluster configuration if you choose to configure GNS.
# Specify 'true' if you would like to assign SCAN name VIP and Node VIPs by DHCP
# , else specify 'false'
#-------------------------------------------------------------------------------
oracle.install.crs.config.autoConfigureClusterNodeVIP=

#-------------------------------------------------------------------------------
# Applicable only if you choose to configure GNS.
# Specify the type of GNS configuration for cluster
# Allowed values are: CREATE_NEW_GNS and USE_SHARED_GNS
# Only USE_SHARED_GNS value is allowed for MEMBERDB cluster configuration.
#-------------------------------------------------------------------------------
oracle.install.crs.config.gpnp.gnsOption=

#-------------------------------------------------------------------------------
# Applicable only if SHARED_GNS is being configured for cluster
# Specify the path to the GNS client data file
#-------------------------------------------------------------------------------
oracle.install.crs.config.gpnp.gnsClientDataFile=

#-------------------------------------------------------------------------------
# Applicable only for STANDALONE and DOMAIN cluster configuration if you choose to
# configure GNS for this cluster oracle.install.crs.config.gpnp.gnsOption=CREATE_NEW_GNS
# Specify the GNS subdomain and an unused virtual hostname for GNS service
#-------------------------------------------------------------------------------
oracle.install.crs.config.gpnp.gnsSubDomain=
oracle.install.crs.config.gpnp.gnsVIPAddress=

#-------------------------------------------------------------------------------
# Specify the list of sites - only if configuring an Extended Cluster
#-------------------------------------------------------------------------------
oracle.install.crs.config.sites=

#-------------------------------------------------------------------------------
# Specify the list of nodes that have to be configured to be part of the cluster.
#
# The list should be a comma-separated list of tuples.  Each tuple should be a
# colon-separated string that contains
# - 1 field if configuring an Application Cluster, or
# - 3 fields if configuring a Flex Cluster
# - 4 fields if configuring an Extended Cluster
#
# The fields should be ordered as follows:
# 1. The first field should be the public node name.
# 2. The second field should be the virtual host name
#    (Should be specified as AUTO if you have chosen 'auto configure for VIP'
#     i.e. autoConfigureClusterNodeVIP=true)
# 3. The third field indicates the role of node (HUB,LEAF). This has to
#    be provided only if Flex Cluster is being configured.
#    For Extended Cluster only HUB should be specified for all nodes
# 4. The fourth field indicates the site designation for the node. To be specified only if configuring an Extended Cluster.
#    The 2nd and 3rd fields are not applicable if configuring an Application Cluster
#
# Examples
# For configuring Application Cluster: oracle.install.crs.config.clusterNodes=node1,node2
# For configuring Flex Cluster: oracle.install.crs.config.clusterNodes=node1:node1-vip:HUB,node2:node2-vip:LEAF
# For configuring Extended Cluster: oracle.install.crs.config.clusterNodes=node1:node1-vip:HUB:site1,node2:node2-vip:HUB:site2
# You can specify a range of nodes in the tuple using colon separated fields of format
# hostnameprefix:lowerbound-upperbound:hostnamesuffix:vipsuffix:role of node
#
#oracle.install.crs.config.clusterNodes=orcldb01.mattew.com:orcldb01-vip.mattew.com:HUB,orcldb02.mattew.com:orcldb02-vip.mattew.com:HUB
#-------------------------------------------------------------------------------
oracle.install.crs.config.clusterNodes=orcldb01:orcldb01-vip:HUB,orcldb02:orcldb02-vip:HUB

#-------------------------------------------------------------------------------
# The value should be a comma-separated list of strings where each string is as shown below
# InterfaceName:SubnetAddress:InterfaceType
# where InterfaceType can be either "1", "2", "3", "4", or "5"
# InterfaceType stand for the following values
#   - 1 : PUBLIC
#   - 2 : PRIVATE
#   - 3 : DO NOT USE
#   - 4 : ASM
#   - 5 : ASM & PRIVATE
#
# For example: eth0:140.87.24.0:1,eth1:10.2.1.0:2,eth2:140.87.52.0:3
#
#-------------------------------------------------------------------------------
oracle.install.crs.config.networkInterfaceList=bond0:10.55.238.0:1,eth2:191.167.1.0:2,eth3:191.167.2.0:5

#------------------------------------------------------------------------------
# Create a separate ASM DiskGroup to store GIMR data.
# Specify 'true' if you would like to separate GIMR data with clusterware data,
# else specify 'false'
# Value should be 'true' for DOMAIN cluster configurations
# Value can be true/false for STANDALONE cluster configurations.
#------------------------------------------------------------------------------
oracle.install.asm.configureGIMRDataDG=

################################################################################
#                                                                              #
#                              SECTION E - STORAGE                             #
#                                                                              #
################################################################################

#-------------------------------------------------------------------------------
# Specify the type of storage to use for Oracle Cluster Registry(OCR) and Voting
# Disks files
#   - FLEX_ASM_STORAGE
#   - CLIENT_ASM_STORAGE
#
# Applicable only for MEMBERDB cluster configuration
#-------------------------------------------------------------------------------
oracle.install.crs.config.storageOption=
################################################################################
#                                                                              #
#                               SECTION F - IPMI                               #
#                                                                              #
################################################################################

#-------------------------------------------------------------------------------
# Specify 'true' if you would like to configure Intelligent Power Management interface
# (IPMI), else specify 'false'
#-------------------------------------------------------------------------------
oracle.install.crs.config.useIPMI=

#-------------------------------------------------------------------------------
# Applicable only if you choose to configure IPMI
# i.e. oracle.install.crs.config.useIPMI=true
# Specify the username and password for using IPMI service
#-------------------------------------------------------------------------------
oracle.install.crs.config.ipmi.bmcUsername=
oracle.install.crs.config.ipmi.bmcPassword=
################################################################################
#                                                                              #
#                                SECTION G - ASM                               #
#                                                                              #
################################################################################

#-------------------------------------------------------------------------------
# ASM Storage Type
# Allowed values are : ASM and ASM_ON_NAS
# ASM_ON_NAS applicable only if
# oracle.install.crs.config.ClusterConfiguration=STANDALONE
#-------------------------------------------------------------------------------
oracle.install.asm.storageOption=ASM

#-------------------------------------------------------------------------------
# NAS location to create ASM disk group for storing OCR/VDSK
# Specify the NAS location where you want the ASM disk group to be created
# to be used to store OCR/VDSK files
# Applicable only if oracle.install.asm.storageOption=ASM_ON_NAS
#-------------------------------------------------------------------------------
oracle.install.asmOnNAS.ocrLocation=
#------------------------------------------------------------------------------
# Create a separate ASM DiskGroup on NAS to store GIMR data
# Specify 'true' if you would like to separate GIMR data with clusterware data, else
# specify 'false'
# Applicable only if oracle.install.asm.storageOption=ASM_ON_NAS
#------------------------------------------------------------------------------
oracle.install.asmOnNAS.configureGIMRDataDG=

#-------------------------------------------------------------------------------
# NAS location to create ASM disk group for storing GIMR data
# Specify the NAS location where you want the ASM disk group to be created
# to be used to store the GIMR database
# Applicable only if oracle.install.asm.storageOption=ASM_ON_NAS
# and oracle.install.asmOnNAS.configureGIMRDataDG=true
#-------------------------------------------------------------------------------
oracle.install.asmOnNAS.gimrLocation=

#-------------------------------------------------------------------------------
# Password for SYS user of Oracle ASM
#-------------------------------------------------------------------------------
oracle.install.asm.SYSASMPassword=syspassword123

#-------------------------------------------------------------------------------
# The ASM DiskGroup
#
# Example: oracle.install.asm.diskGroup.name=data
#
#-------------------------------------------------------------------------------
oracle.install.asm.diskGroup.name=ORCL_FRA

#-------------------------------------------------------------------------------
# Redundancy level to be used by ASM.
# It can be one of the following
#   - NORMAL
#   - HIGH
#   - EXTERNAL
#   - FLEX
#   - EXTENDED (required if oracle.install.crs.config.ClusterConfiguration=EXTENDED)
# Example: oracle.install.asm.diskGroup.redundancy=NORMAL
#
#-------------------------------------------------------------------------------
oracle.install.asm.diskGroup.redundancy=EXTERNAL

#-------------------------------------------------------------------------------
# Allocation unit size to be used by ASM.
# It can be one of the following values
#   - 1
#   - 2
#   - 4
#   - 8
#   - 16
# Example: oracle.install.asm.diskGroup.AUSize=4
# size unit is MB
#
#-------------------------------------------------------------------------------
oracle.install.asm.diskGroup.AUSize=4

#-------------------------------------------------------------------------------
# Failure Groups for the disk group
# If configuring for Extended cluster specify as list of "failure group name:site"
# tuples.
# Else just specify as list of failure group names
#-------------------------------------------------------------------------------
oracle.install.asm.diskGroup.FailureGroups=

#-------------------------------------------------------------------------------
# List of disks and their failure groups to create a ASM DiskGroup
# (Use this if each of the disks have an associated failure group)
# Failure Groups are not required if oracle.install.asm.diskGroup.redundancy=EXTERNAL
# Example:
#     For Unix based Operating System:
#     oracle.install.asm.diskGroup.disksWithFailureGroupNames=/oracle/asm/disk1,FGName,/oracle/asm/disk2,FGName
#     For Windows based Operating System:
#     oracle.install.asm.diskGroup.disksWithFailureGroupNames=\\.\ORCLDISKDATA0,FGName,\\.\ORCLDISKDATA1,FGName
#
#-------------------------------------------------------------------------------
oracle.install.asm.diskGroup.disksWithFailureGroupNames=

#-------------------------------------------------------------------------------
# List of disks to create a ASM DiskGroup
# (Use this variable only if failure groups configuration is not required)
# Example:
#     For Unix based Operating System:
#     oracle.install.asm.diskGroup.disks=/oracle/asm/disk1,/oracle/asm/disk2
#     For Windows based Operating System:
#     oracle.install.asm.diskGroup.disks=\\.\ORCLDISKDATA0,\\.\ORCLDISKDATA1
#
#-------------------------------------------------------------------------------
#oracle.install.asm.diskGroup.disks=/dev/mapper/DENHPE20450_1_5ap1
# example (if using AFD): oracle.install.asm.diskGroup.disks=AFD:DENHPE20450_1_5A
#ORCL_lvs_live2
oracle.install.asm.diskGroup.disks=/dev/mapper/DENHPE20450_2_29p1

#-------------------------------------------------------------------------------
# List of failure groups to be marked as QUORUM.
# Quorum failure groups contain only voting disk data, no user data is stored
# Example:
#       oracle.install.asm.diskGroup.quorumFailureGroupNames=FGName1,FGName2
#-------------------------------------------------------------------------------
oracle.install.asm.diskGroup.quorumFailureGroupNames=
#-------------------------------------------------------------------------------
# The disk discovery string to be used to discover the disks used to create an ASM DiskGroup
#
# Example:
#     For Unix based Operating System:
#     oracle.install.asm.diskGroup.diskDiscoveryString=/oracle/asm/*
#     For Windows based Operating System:
#     oracle.install.asm.diskGroup.diskDiscoveryString=\\.\ORCLDISK*
#
#-------------------------------------------------------------------------------
oracle.install.asm.diskGroup.diskDiscoveryString=/dev/mapper

#-------------------------------------------------------------------------------
# Password for ASMSNMP account
# ASMSNMP account is used by Oracle Enterprise Manager to monitor Oracle ASM instances
#-------------------------------------------------------------------------------
oracle.install.asm.monitorPassword=syspassword123

#-------------------------------------------------------------------------------
# GIMR Storage data ASM DiskGroup
# Applicable only when
# oracle.install.asm.configureGIMRDataDG=true
# Example: oracle.install.asm.GIMRDG.name=MGMT
#
#-------------------------------------------------------------------------------
oracle.install.asm.gimrDG.name=

#-------------------------------------------------------------------------------
# Redundancy level to be used by ASM.
# It can be one of the following
#   - NORMAL
#   - HIGH
#   - EXTERNAL
#   - FLEX
#   - EXTENDED (only if oracle.install.crs.config.ClusterConfiguration=EXTENDED)
# Example: oracle.install.asm.gimrDG.redundancy=NORMAL
#
#-------------------------------------------------------------------------------
oracle.install.asm.gimrDG.redundancy=

#-------------------------------------------------------------------------------
# Allocation unit size to be used by ASM.
# It can be one of the following values
#   - 1
#   - 2
#   - 4
#   - 8
#   - 16
# Example: oracle.install.asm.gimrDG.AUSize=4
# size unit is MB
#
#-------------------------------------------------------------------------------
oracle.install.asm.gimrDG.AUSize=

#-------------------------------------------------------------------------------
# Failure Groups for the GIMR storage data ASM disk group
# If configuring for Extended cluster specify as list of "failure group name:site"
# tuples.
# Else just specify as list of failure group names
#-------------------------------------------------------------------------------
oracle.install.asm.gimrDG.FailureGroups=

#-------------------------------------------------------------------------------
# List of disks and their failure groups to create GIMR data ASM DiskGroup
# (Use this if each of the disks have an associated failure group)
# Failure Groups are not required if oracle.install.asm.gimrDG.redundancy=EXTERNAL
# Example:
#     For Unix based Operating System:
#     oracle.install.asm.gimrDG.disksWithFailureGroupNames=/oracle/asm/disk1,FGName,/oracle/asm/disk2,FGName
#     For Windows based Operating System:
#     oracle.install.asm.gimrDG.disksWithFailureGroupNames=\\.\ORCLDISKDATA0,FGName,\\.\ORCLDISKDATA1,FGName
#
#-------------------------------------------------------------------------------
oracle.install.asm.gimrDG.disksWithFailureGroupNames=

#-------------------------------------------------------------------------------
# List of disks to create GIMR data ASM DiskGroup
# (Use this variable only if failure groups configuration is not required)
# Example:
#     For Unix based Operating System:
#     oracle.install.asm.gimrDG.disks=/oracle/asm/disk1,/oracle/asm/disk2
#     For Windows based Operating System:
#     oracle.install.asm.gimrDG.disks=\\.\ORCLDISKDATA0,\\.\ORCLDISKDATA1
#
#-------------------------------------------------------------------------------
oracle.install.asm.gimrDG.disks=

#-------------------------------------------------------------------------------
# List of failure groups to be marked as QUORUM.
# Quorum failure groups contain only voting disk data, no user data is stored
# Example:
#       oracle.install.asm.gimrDG.quorumFailureGroupNames=FGName1,FGName2
#-------------------------------------------------------------------------------
oracle.install.asm.gimrDG.quorumFailureGroupNames=

#-------------------------------------------------------------------------------
# Configure AFD - ASM Filter Driver
# Applicable only for FLEX_ASM_STORAGE option
# Specify 'true' if you want to configure AFD, else specify 'false'
#-------------------------------------------------------------------------------
oracle.install.asm.configureAFD=true
#-------------------------------------------------------------------------------
# Configure RHPS - Rapid Home Provisioning Service
# Applicable only for DOMAIN cluster configuration
# Specify 'true' if you want to configure RHP service, else specify 'false'
#-------------------------------------------------------------------------------
oracle.install.crs.configureRHPS=

################################################################################
#                                                                              #
#                             SECTION H - UPGRADE                              #
#                                                                              #
################################################################################
#-------------------------------------------------------------------------------
# Specify whether to ignore down nodes during upgrade operation.
# Value should be 'true' to ignore down nodes otherwise specify 'false'
#-------------------------------------------------------------------------------
oracle.install.crs.config.ignoreDownNodes=
################################################################################
#                                                                              #
#                               MANAGEMENT OPTIONS                             #
#                                                                              #
################################################################################

#-------------------------------------------------------------------------------
# Specify the management option to use for managing Oracle Grid Infrastructure
# Options are:
# 1. CLOUD_CONTROL - If you want to manage your Oracle Grid Infrastructure with Enterprise Manager Cloud Control.
# 2. NONE   -If you do not want to manage your Oracle Grid Infrastructure with Enterprise Manager Cloud Control.
#-------------------------------------------------------------------------------
oracle.install.config.managementOption=

#-------------------------------------------------------------------------------
# Specify the OMS host to connect to Cloud Control.
# Applicable only when oracle.install.config.managementOption=CLOUD_CONTROL
#-------------------------------------------------------------------------------
oracle.install.config.omsHost=

#-------------------------------------------------------------------------------
# Specify the OMS port to connect to Cloud Control.
# Applicable only when oracle.install.config.managementOption=CLOUD_CONTROL
#-------------------------------------------------------------------------------
oracle.install.config.omsPort=

#-------------------------------------------------------------------------------
# Specify the EM Admin user name to use to connect to Cloud Control.
# Applicable only when oracle.install.config.managementOption=CLOUD_CONTROL
#-------------------------------------------------------------------------------
oracle.install.config.emAdminUser=

#-------------------------------------------------------------------------------
# Specify the EM Admin password to use to connect to Cloud Control.
# Applicable only when oracle.install.config.managementOption=CLOUD_CONTROL
#-------------------------------------------------------------------------------
oracle.install.config.emAdminPassword=
################################################################################
#                                                                              #
#                      Root script execution configuration                     #
#                                                                              #
################################################################################

#-------------------------------------------------------------------------------------------------------
# Specify the root script execution mode.
#
#   - true  : To execute the root script automatically by using the appropriate configuration methods.
#   - false : To execute the root script manually.
#
# If this option is selected, password should be specified on the console.
#-------------------------------------------------------------------------------------------------------
oracle.install.crs.rootconfig.executeRootScript=

#--------------------------------------------------------------------------------------
# Specify the configuration method to be used for automatic root script execution.
#
# Following are the possible choices:
#   - ROOT
#   - SUDO
#--------------------------------------------------------------------------------------
oracle.install.crs.rootconfig.configMethod=
#--------------------------------------------------------------------------------------
# Specify the absolute path of the sudo program.
#
# Applicable only when SUDO configuration method was chosen.
#--------------------------------------------------------------------------------------
oracle.install.crs.rootconfig.sudoPath=

#--------------------------------------------------------------------------------------
# Specify the name of the user who is in the sudoers list.
#
# Applicable only when SUDO configuration method was chosen.
#--------------------------------------------------------------------------------------
oracle.install.crs.rootconfig.sudoUserName=
#--------------------------------------------------------------------------------------
# Specify the nodes batch map.
#
# This should be a comma separated list of node:batch pairs.
# During upgrade, you can sequence the automatic execution of root scripts
# by pooling the nodes into batches.
# A maximum of three batches can be specified.
# Installer will execute the root scripts on all the nodes in one batch before
# proceeding to next batch.
# Root script execution on the local node must be in Batch 1.
# Only one type of node role can be used for each batch.
# Root script execution should be done first in all HUB nodes and then, when
# existent, in all the LEAF nodes.
#
# Examples:
# 1. oracle.install.crs.config.batchinfo=HUBNode1:1,HUBNode2:2,HUBNode3:2,LEAFNode4:3
# 2. oracle.install.crs.config.batchinfo=HUBNode1:1,LEAFNode2:2,LEAFNode3:2,LEAFNode4:2
# 3. oracle.install.crs.config.batchinfo=HUBNode1:1,HUBNode2:1,LEAFNode3:2,LEAFNode4:3
#
# Applicable only for UPGRADE install option.
#--------------------------------------------------------------------------------------
oracle.install.crs.config.batchinfo=
################################################################################
#                                                                              #
#                           APPLICATION CLUSTER OPTIONS                        #
#                                                                              #
################################################################################

#-------------------------------------------------------------------------------
# Specify the Virtual hostname to configure virtual access for your Application
# The value to be specified for Virtual hostname is optional.
#-------------------------------------------------------------------------------
oracle.install.crs.app.applicationAddress=
oracle.install.crs.config.clusterNodes=orcldb01.mattew.com:orcldb01-vip.mattew.com:HUB,orcldb02.mattew.com:orcldb02-vip.mattew.com:HUB
bash-4.1$
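With a template this long it is easy to overlook a setting, so it can help to list only the values that are actually set; this is plain grep, nothing installer-specific:

# show only the effective (non-comment, non-blank) lines of the response file
grep -Ev '^[[:space:]]*(#|$)' /u01/app/grid/12.2.0.1/install/response/grid12c.rsp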
bash-4.1$
bash-4.1$ id
uid=969(oracle) gid=533(dba) groups=533(dba),54321(oinstall)
bash-4.1$
bash-4.1$ id
uid=969(oracle) gid=533(dba) groups=533(dba),54321(oinstall)
bash-4.1$
bash-4.1$ cd /u01/app/grid/12.2.0.1
bash-4.1$
bash-4.1$ l /dev/dm*
brw-rw---- 1 oracle dba  251, 25 Oct 12 08:13 /dev/dm-25
brw-rw---- 1 oracle dba  251, 26 Oct 12 08:13 /dev/dm-26
brw-rw---- 1 oracle dba  251, 28 Oct 12 08:13 /dev/dm-28
brw-rw---- 1 oracle dba  251, 27 Oct 12 08:13 /dev/dm-27
brw-rw---- 1 oracle dba  251, 30 Oct 12 08:13 /dev/dm-30
brw-rw---- 1 oracle dba  251, 29 Oct 12 08:13 /dev/dm-29
brw-rw---- 1 oracle dba  251, 32 Oct 12 08:13 /dev/dm-32
brw-rw---- 1 oracle dba  251, 31 Oct 12 08:13 /dev/dm-31
brw-rw---- 1 oracle dba  251, 36 Oct 12 08:13 /dev/dm-36
brw-rw---- 1 oracle dba  251, 35 Oct 12 08:13 /dev/dm-35
brw-rw---- 1 oracle dba  251, 34 Oct 12 08:13 /dev/dm-34
brw-rw---- 1 oracle dba  251, 33 Oct 12 08:13 /dev/dm-33
brw-rw---- 1 oracle dba  251, 37 Oct 12 08:13 /dev/dm-37
brw-rw---- 1 oracle dba  251, 40 Oct 12 08:13 /dev/dm-40
brw-rw---- 1 oracle dba  251, 39 Oct 12 08:13 /dev/dm-39
brw-rw---- 1 oracle dba  251, 38 Oct 12 08:13 /dev/dm-38
brw-rw---- 1 oracle dba  251, 44 Oct 12 08:13 /dev/dm-44
brw-rw---- 1 oracle dba  251, 43 Oct 12 08:13 /dev/dm-43
brw-rw---- 1 oracle dba  251, 42 Oct 12 08:13 /dev/dm-42
brw-rw---- 1 oracle dba  251, 41 Oct 12 08:13 /dev/dm-41
brw-rw---- 1 oracle dba  251, 46 Oct 12 08:13 /dev/dm-46
brw-rw---- 1 oracle dba  251, 45 Oct 12 08:13 /dev/dm-45
brw-rw---- 1 oracle dba  251, 57 Oct 12 08:13 /dev/dm-57
brw-rw---- 1 oracle dba  251, 82 Oct 12 08:13 /dev/dm-82
brw-rw---- 1 oracle dba  251, 84 Oct 12 08:13 /dev/dm-84
brw-rw---- 1 oracle dba  251, 88 Oct 12 08:13 /dev/dm-88
brw-rw---- 1 oracle dba  251, 51 Oct 12 08:13 /dev/dm-51
brw-rw---- 1 oracle dba  251, 83 Oct 12 08:13 /dev/dm-83
brw-rw---- 1 oracle dba  251, 87 Oct 12 08:13 /dev/dm-87
brw-rw---- 1 oracle dba  251, 56 Oct 12 08:13 /dev/dm-56
brw-rw---- 1 oracle dba  251, 50 Oct 12 08:13 /dev/dm-50
brw-rw---- 1 oracle dba  251, 86 Oct 12 08:13 /dev/dm-86
brw-rw---- 1 oracle dba  251, 47 Oct 12 08:13 /dev/dm-47
brw-rw---- 1 oracle dba  251, 55 Oct 12 08:13 /dev/dm-55
brw-rw---- 1 oracle dba  251, 73 Oct 12 08:13 /dev/dm-73
brw-rw---- 1 oracle dba  251, 49 Oct 12 08:13 /dev/dm-49
brw-rw---- 1 oracle dba  251, 78 Oct 12 08:13 /dev/dm-78
brw-rw---- 1 oracle dba  251, 81 Oct 12 08:13 /dev/dm-81
brw-rw---- 1 oracle dba  251, 85 Oct 12 08:13 /dev/dm-85
brw-rw---- 1 oracle dba  251, 72 Oct 12 08:13 /dev/dm-72
brw-rw---- 1 oracle dba  251, 74 Oct 12 08:13 /dev/dm-74
brw-rw---- 1 oracle dba  251, 63 Oct 12 08:13 /dev/dm-63
brw-rw---- 1 oracle dba  251, 48 Oct 12 08:13 /dev/dm-48
brw-rw---- 1 oracle dba  251, 80 Oct 12 08:13 /dev/dm-80
brw-rw---- 1 oracle dba  251, 77 Oct 12 08:13 /dev/dm-77
brw-rw---- 1 oracle dba  251, 71 Oct 12 08:13 /dev/dm-71
brw-rw---- 1 oracle dba  251, 79 Oct 12 08:13 /dev/dm-79
brw-rw---- 1 oracle dba  251, 70 Oct 12 08:13 /dev/dm-70
brw-rw---- 1 oracle dba  251, 66 Oct 12 08:13 /dev/dm-66
brw-rw---- 1 oracle dba  251, 76 Oct 12 08:13 /dev/dm-76
brw-rw---- 1 oracle dba  251, 60 Oct 12 08:13 /dev/dm-60
brw-rw---- 1 oracle dba  251, 75 Oct 12 08:13 /dev/dm-75
brw-rw---- 1 oracle dba  251, 69 Oct 12 08:13 /dev/dm-69
brw-rw---- 1 oracle dba  251, 59 Oct 12 08:13 /dev/dm-59
brw-rw---- 1 oracle dba  251,  8 Oct 12 08:13 /dev/dm-8
brw-rw---- 1 oracle dba  251, 53 Oct 12 08:13 /dev/dm-53
brw-rw---- 1 root   disk 251,  4 Oct 12 08:13 /dev/dm-4
brw-rw---- 1 root   disk 251,  2 Oct 12 08:13 /dev/dm-2
brw-rw---- 1 root   disk 251,  1 Oct 12 08:13 /dev/dm-1
brw-rw---- 1 root   disk 251,  0 Oct 12 08:13 /dev/dm-0
brw-rw---- 1 oracle dba  251,  6 Oct 12 08:13 /dev/dm-6
brw-rw---- 1 oracle dba  251,  5 Oct 12 08:13 /dev/dm-5
brw-rw---- 1 root   disk 251,  3 Oct 12 08:13 /dev/dm-3
brw-rw---- 1 oracle dba  251,  7 Oct 12 08:13 /dev/dm-7
brw-rw---- 1 oracle dba  251,  9 Oct 12 08:13 /dev/dm-9
brw-rw---- 1 oracle dba  251, 11 Oct 12 08:13 /dev/dm-11
brw-rw---- 1 oracle dba  251, 10 Oct 12 08:13 /dev/dm-10
brw-rw---- 1 oracle dba  251, 62 Oct 12 08:13 /dev/dm-62
brw-rw---- 1 oracle dba  251, 68 Oct 12 08:13 /dev/dm-68
brw-rw---- 1 oracle dba  251, 52 Oct 12 08:13 /dev/dm-52
brw-rw---- 1 oracle dba  251, 14 Oct 12 08:13 /dev/dm-14
brw-rw---- 1 oracle dba  251, 13 Oct 12 08:13 /dev/dm-13
brw-rw---- 1 oracle dba  251, 12 Oct 12 08:13 /dev/dm-12
brw-rw---- 1 oracle dba  251, 67 Oct 12 08:13 /dev/dm-67
brw-rw---- 1 oracle dba  251, 16 Oct 12 08:13 /dev/dm-16
brw-rw---- 1 oracle dba  251, 15 Oct 12 08:13 /dev/dm-15
brw-rw---- 1 oracle dba  251, 18 Oct 12 08:13 /dev/dm-18
brw-rw---- 1 oracle dba  251, 17 Oct 12 08:13 /dev/dm-17
brw-rw---- 1 oracle dba  251, 61 Oct 12 08:13 /dev/dm-61
brw-rw---- 1 oracle dba  251, 20 Oct 12 08:13 /dev/dm-20
brw-rw---- 1 oracle dba  251, 19 Oct 12 08:13 /dev/dm-19
brw-rw---- 1 oracle dba  251, 22 Oct 12 08:13 /dev/dm-22
brw-rw---- 1 oracle dba  251, 21 Oct 12 08:13 /dev/dm-21
brw-rw---- 1 oracle dba  251, 24 Oct 12 08:13 /dev/dm-24
brw-rw---- 1 oracle dba  251, 23 Oct 12 08:13 /dev/dm-23
brw-rw---- 1 oracle dba  251, 65 Oct 12 08:13 /dev/dm-65
brw-rw---- 1 oracle dba  251, 54 Oct 12 08:13 /dev/dm-54
brw-rw---- 1 oracle dba  251, 64 Oct 12 08:13 /dev/dm-64
brw-rw---- 1 oracle dba  251, 58 Oct 12 08:13 /dev/dm-58
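The dm devices owned by oracle:dba above are the multipath devices set aside for ASM; the root:disk ones are the OS volumes. One common way to make that ownership survive reboots (when ASMLib or the ASM Filter Driver is not managing the disks) is a udev rule keyed on the device-mapper name. A minimal sketch, assuming multipath aliases beginning with asm_ are defined in /etc/multipath.conf; the alias prefix and rule file name are assumptions, not taken from this host:

# Assumed file name and alias prefix; adjust to the multipath aliases actually in use.
cat > /etc/udev/rules.d/99-oracle-asm.rules <<'EOF'
ENV{DM_NAME}=="asm_*", OWNER="oracle", GROUP="dba", MODE="0660"
EOF

# Reload and re-trigger udev so the rule applies without a reboot.
udevadm control --reload-rules
udevadm trigger --type=devices --action=change

In this particular install the disks end up labeled with the ASM Filter Driver (the AFD: names that appear later in this post), which presents and protects the devices itself, so the udev route is only one option.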
bash-4.1$
bash-4.1$
bash-4.1$ /u01/app/grid/12.2.0.1/crs/config/config.sh -silent -ignorePrereqFailure -responseFile /u01/app/grid/12.2.0.1/install/response/grid12c.rsp
[WARNING] [INS-30011] The SYS password entered does not conform to the Oracle recommended standards.
   CAUSE: Oracle recommends that the password entered should be at least 8 characters in length, contain at least 1 uppercase character, 1 lower case character and 1 digit [0-9].
   ACTION: Provide a password that conforms to the Oracle recommended standards.
[WARNING] [INS-13013] Target environment does not meet some mandatory requirements.
   CAUSE: Some of the mandatory prerequisites are not met. See logs for details. /u01/app/oracle/oraInventory/logs/configActions2018-10-12_12-09-07-PM.log
   ACTION: Identify the list of failed prerequisite checks from the log: /u01/app/oracle/oraInventory/logs/configActions2018-10-12_12-09-07-PM.log. Then either from the log file or from installation manual find the appropriate configuration to meet the prerequisites and fix it manually.

As a root user, execute the following script(s):
        1. /u01/app/grid/12.2.0.1/root.sh

Execute /u01/app/grid/12.2.0.1/root.sh on the following nodes:
[orcldb01, orcldb02]

Run the script on the local node first. After successful completion, you can start the script in parallel on all other nodes.

Successfully Setup Software.
As install user, execute the following command to complete the configuration.
        /u01/app/grid/12.2.0.1/gridSetup.sh -executeConfigTools -responseFile /u01/app/grid/12.2.0.1/install/response/grid12c.rsp [-silent]
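To recap the order the installer expects (a sketch assembled from the messages above, not output from this run):

# 1. As root on the first node (orcldb01), then on orcldb02 only after node 1 completes:
/u01/app/grid/12.2.0.1/root.sh

# 2. As the install user (oracle), once root.sh has succeeded on all nodes:
/u01/app/grid/12.2.0.1/gridSetup.sh -executeConfigTools \
    -responseFile /u01/app/grid/12.2.0.1/install/response/grid12c.rsp -silent   # -silent is optional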


bash-4.1$ pwd
/u01/app/grid/12.2.0.1
bash-4.1$
bash-4.1$ id
uid=969(oracle) gid=533(dba) groups=533(dba),54321(oinstall)
bash-4.1$
bash-4.1$ sudo bash
[root@orcldb01 12.2.0.1]#
[root@orcldb01 12.2.0.1]# # this is node 1
[root@orcldb01 12.2.0.1]#
[root@orcldb01 12.2.0.1]# /u01/app/grid/12.2.0.1/root.sh
Check /u01/app/grid/12.2.0.1/install/root_orcldb01_2018-10-12_12-13-00-086520483.log for the output of root script

[root@orcldb01 12.2.0.1]# id
uid=0(root) gid=0(root) groups=0(root)
[root@orcldb01 12.2.0.1]# pwd
/u01/app/grid/12.2.0.1



[root@orcldb01 12.2.0.1]# echo " out put of above mentioned log file:
> bash-4.1$ tail -f /u01/app/grid/12.2.0.1/install/root_orcldb01_2018-10-12_12-13-00-086520483.log
> Entries will be added to the /etc/oratab file as needed by
> Database Configuration Assistant when a database is created
> Finished running generic part of root script.
> Now product-specific root actions will be performed.
> Relinking oracle with rac_on option
> Using configuration parameter file: /u01/app/grid/12.2.0.1/crs/install/crsconfig_params
> The log of current session can be found at:
>   /u01/app/oracle/crsdata/orcldb01/crsconfig/rootcrs_orcldb01_2018-10-12_12-13-08AM.log
> 2018/10/12 12:13:10 CLSRSC-594: Executing installation step 1 of 19: 'SetupTFA'.
> 2018/10/12 12:13:10 CLSRSC-4001: Installing Oracle Trace File Analyzer (TFA) Collector.
> 2018/10/12 12:13:35 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
> 2018/10/12 12:13:35 CLSRSC-594: Executing installation step 2 of 19: 'ValidateEnv'.
> 2018/10/12 12:13:38 CLSRSC-363: User ignored prerequisites during installation
> 2018/10/12 12:13:38 CLSRSC-594: Executing installation step 3 of 19: 'CheckFirstNode'.
> 2018/10/12 12:13:40 CLSRSC-594: Executing installation step 4 of 19: 'GenSiteGUIDs'.
> 2018/10/12 12:13:41 CLSRSC-594: Executing installation step 5 of 19: 'SaveParamFile'.
> 2018/10/12 12:13:46 CLSRSC-594: Executing installation step 6 of 19: 'SetupOSD'.
> 2018/10/12 12:13:48 CLSRSC-594: Executing installation step 7 of 19: 'CheckCRSConfig'.
> 2018/10/12 12:13:49 CLSRSC-594: Executing installation step 8 of 19: 'SetupLocalGPNP'.
> 2018/10/12 12:14:07 CLSRSC-594: Executing installation step 9 of 19: 'ConfigOLR'.
> 2018/10/12 12:14:13 CLSRSC-594: Executing installation step 10 of 19: 'ConfigCHMOS'.
> 2018/10/12 12:14:13 CLSRSC-594: Executing installation step 11 of 19: 'CreateOHASD'.
> 2018/10/12 12:14:17 CLSRSC-594: Executing installation step 12 of 19: 'ConfigOHASD'.
> 2018/10/12 12:14:32 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.conf'
> 2018/10/12 12:14:53 CLSRSC-594: Executing installation step 13 of 19: 'InstallAFD'.
> 2018/10/12 12:15:05 CLSRSC-594: Executing installation step 14 of 19: 'InstallACFS'.
> CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'orcldb01'
> CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'orcldb01' has completed
> CRS-4133: Oracle High Availability Services has been stopped.
> CRS-4123: Oracle High Availability Services has been started.
> 2018/10/12 12:15:38 CLSRSC-594: Executing installation step 15 of 19: 'InstallKA'.
> 2018/10/12 12:15:41 CLSRSC-594: Executing installation step 16 of 19: 'InitConfig'.
> CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'orcldb01'
> CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'orcldb01' has completed
> CRS-4133: Oracle High Availability Services has been stopped.
> CRS-4123: Oracle High Availability Services has been started.
> CRS-2672: Attempting to start 'ora.driver.afd' on 'orcldb01'
> CRS-2672: Attempting to start 'ora.evmd' on 'orcldb01'
> CRS-2672: Attempting to start 'ora.mdnsd' on 'orcldb01'
> CRS-2676: Start of 'ora.driver.afd' on 'orcldb01' succeeded
> CRS-2672: Attempting to start 'ora.cssdmonitor' on 'orcldb01'
> CRS-2676: Start of 'ora.cssdmonitor' on 'orcldb01' succeeded
> CRS-2676: Start of 'ora.mdnsd' on 'orcldb01' succeeded
> CRS-2676: Start of 'ora.evmd' on 'orcldb01' succeeded
> CRS-2672: Attempting to start 'ora.gpnpd' on 'orcldb01'
> CRS-2676: Start of 'ora.gpnpd' on 'orcldb01' succeeded
> CRS-2672: Attempting to start 'ora.gipcd' on 'orcldb01'
> CRS-2676: Start of 'ora.gipcd' on 'orcldb01' succeeded
> CRS-2672: Attempting to start 'ora.cssd' on 'orcldb01'
> CRS-2672: Attempting to start 'ora.diskmon' on 'orcldb01'
> CRS-2676: Start of 'ora.diskmon' on 'orcldb01' succeeded
> CRS-2676: Start of 'ora.cssd' on 'orcldb01' succeeded
>
> Disk label(s) created successfully. Check /u01/app/oracle/cfgtoollogs/asmca/asmca-181012PM121622.log for details.
> Disk groups created successfully. Check /u01/app/oracle/cfgtoollogs/asmca/asmca-181012PM121622.log for details.
>
>
> 2018/10/12 12:17:03 CLSRSC-482: Running command: '/u01/app/grid/12.2.0.1/bin/ocrconfig -upgrade oracle dba'
> CRS-2672: Attempting to start 'ora.crf' on 'orcldb01'
> CRS-2672: Attempting to start 'ora.storage' on 'orcldb01'
> CRS-2676: Start of 'ora.storage' on 'orcldb01' succeeded
> CRS-2676: Start of 'ora.crf' on 'orcldb01' succeeded
> CRS-2672: Attempting to start 'ora.crsd' on 'orcldb01'
> CRS-2676: Start of 'ora.crsd' on 'orcldb01' succeeded
> CRS-4256: Updating the profile
> Successful addition of voting disk de18d48391b74f7bbf1393c285e166be.
> Successfully replaced voting disk group with +ORCL_FRA.
> CRS-4256: Updating the profile
> CRS-4266: Voting file(s) successfully replaced
> ##  STATE    File Universal Id                File Name Disk group
> --  -----    -----------------                --------- ---------
>  1. ONLINE   de18d48391b74f7bbf1393c285e166be (AFD:DENHPE20450_2_29) [ORCL_FRA]
> Located 1 voting disk(s).
> CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'orcldb01'
> CRS-2673: Attempting to stop 'ora.crsd' on 'orcldb01'
> CRS-2677: Stop of 'ora.crsd' on 'orcldb01' succeeded
> CRS-2673: Attempting to stop 'ora.storage' on 'orcldb01'
> CRS-2673: Attempting to stop 'ora.crf' on 'orcldb01'
> CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'orcldb01'
> CRS-2673: Attempting to stop 'ora.gpnpd' on 'orcldb01'
> CRS-2673: Attempting to stop 'ora.mdnsd' on 'orcldb01'
> CRS-2677: Stop of 'ora.drivers.acfs' on 'orcldb01' succeeded
> CRS-2677: Stop of 'ora.crf' on 'orcldb01' succeeded
> CRS-2677: Stop of 'ora.gpnpd' on 'orcldb01' succeeded
> CRS-2677: Stop of 'ora.storage' on 'orcldb01' succeeded
> CRS-2673: Attempting to stop 'ora.asm' on 'orcldb01'
> CRS-2677: Stop of 'ora.mdnsd' on 'orcldb01' succeeded
> CRS-2677: Stop of 'ora.asm' on 'orcldb01' succeeded
> CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'orcldb01'
> CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'orcldb01' succeeded
> CRS-2673: Attempting to stop 'ora.ctssd' on 'orcldb01'
> CRS-2673: Attempting to stop 'ora.evmd' on 'orcldb01'
> CRS-2677: Stop of 'ora.ctssd' on 'orcldb01' succeeded
> CRS-2677: Stop of 'ora.evmd' on 'orcldb01' succeeded
> CRS-2673: Attempting to stop 'ora.cssd' on 'orcldb01'
> CRS-2677: Stop of 'ora.cssd' on 'orcldb01' succeeded
> CRS-2673: Attempting to stop 'ora.driver.afd' on 'orcldb01'
> CRS-2673: Attempting to stop 'ora.gipcd' on 'orcldb01'
> CRS-2677: Stop of 'ora.driver.afd' on 'orcldb01' succeeded
> CRS-2677: Stop of 'ora.gipcd' on 'orcldb01' succeeded
> CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'orcldb01' has completed
> CRS-4133: Oracle High Availability Services has been stopped.
> 2018/10/12 12:18:03 CLSRSC-594: Executing installation step 17 of 19: 'StartCluster'.
> CRS-4123: Starting Oracle High Availability Services-managed resources
> CRS-2672: Attempting to start 'ora.mdnsd' on 'orcldb01'
> CRS-2672: Attempting to start 'ora.evmd' on 'orcldb01'
> CRS-2676: Start of 'ora.mdnsd' on 'orcldb01' succeeded
> CRS-2676: Start of 'ora.evmd' on 'orcldb01' succeeded
> CRS-2672: Attempting to start 'ora.gpnpd' on 'orcldb01'
> CRS-2676: Start of 'ora.gpnpd' on 'orcldb01' succeeded
> CRS-2672: Attempting to start 'ora.gipcd' on 'orcldb01'
> CRS-2676: Start of 'ora.gipcd' on 'orcldb01' succeeded
> CRS-2672: Attempting to start 'ora.cssdmonitor' on 'orcldb01'
> CRS-2676: Start of 'ora.cssdmonitor' on 'orcldb01' succeeded
> CRS-2672: Attempting to start 'ora.cssd' on 'orcldb01'
> CRS-2672: Attempting to start 'ora.diskmon' on 'orcldb01'
> CRS-2676: Start of 'ora.diskmon' on 'orcldb01' succeeded
> CRS-2676: Start of 'ora.cssd' on 'orcldb01' succeeded
> CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'orcldb01'
> CRS-2672: Attempting to start 'ora.ctssd' on 'orcldb01'
> CRS-2676: Start of 'ora.ctssd' on 'orcldb01' succeeded
> CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'orcldb01' succeeded
> CRS-2672: Attempting to start 'ora.asm' on 'orcldb01'
> CRS-2676: Start of 'ora.asm' on 'orcldb01' succeeded
> CRS-2672: Attempting to start 'ora.storage' on 'orcldb01'
> CRS-2676: Start of 'ora.storage' on 'orcldb01' succeeded
> CRS-2672: Attempting to start 'ora.crf' on 'orcldb01'
> CRS-2676: Start of 'ora.crf' on 'orcldb01' succeeded
> CRS-2672: Attempting to start 'ora.crsd' on 'orcldb01'
> CRS-2676: Start of 'ora.crsd' on 'orcldb01' succeeded
> CRS-6023: Starting Oracle Cluster Ready Services-managed resources
> CRS-6017: Processing resource auto-start for servers: orcldb01
> CRS-6016: Resource auto-start has completed for server orcldb01
> CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
> CRS-4123: Oracle High Availability Services has been started.
> 2018/10/12 12:19:42 CLSRSC-343: Successfully started Oracle Clusterware stack
> 2018/10/12 12:19:42 CLSRSC-594: Executing installation step 18 of 19: 'ConfigNode'.
> CRS-2672: Attempting to start 'ora.ASMNET1LSNR_ASM.lsnr' on 'orcldb01'
> CRS-2676: Start of 'ora.ASMNET1LSNR_ASM.lsnr' on 'orcldb01' succeeded
> CRS-2672: Attempting to start 'ora.asm' on 'orcldb01'
> CRS-2676: Start of 'ora.asm' on 'orcldb01' succeeded
> CRS-2672: Attempting to start 'ora.ORCL_FRA.dg' on 'orcldb01'
> CRS-2676: Start of 'ora.ORCL_FRA.dg' on 'orcldb01' succeeded
> 2018/10/12 12:21:14 CLSRSC-594: Executing installation step 19 of 19: 'PostConfig'.
> 2018/10/12 12:21:36 CLSRSC-175: Failed to write the checkpoint 'ROOTCRS_FIRSTNODE' with status 'SUCCESS' (error code 1)
> Died at /u01/app/grid/12.2.0.1/crs/install/crsutils.pm line 13160.
> The command '/u01/app/grid/12.2.0.1/perl/bin/perl -I/u01/app/grid/12.2.0.1/perl/lib -I/u01/app/grid/12.2.0.1/crs/install /u01/app/grid/12.2.0.1/crs/install/rootcrs.pl ' execution failed^C
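root.sh got to step 19 of 19 and then failed while writing the ROOTCRS_FIRSTNODE checkpoint (CLSRSC-175), so most of the stack is already up on this node. A triage sketch before deciding whether to rerun anything; the crsdata path is the one printed in the log above, treat it as an assumption on other hosts:

# First real error in the rootcrs log referenced above:
grep -i -E 'error|fail|died' /u01/app/oracle/crsdata/orcldb01/crsconfig/rootcrs_orcldb01_2018-10-12_12-13-08AM.log | head

# The checkpoint files are normally kept under this crsconfig directory; confirm oracle can write there:
ls -ld /u01/app/oracle/crsdata/orcldb01/crsconfig

# Confirm the clusterware stack is actually healthy:
/u01/app/grid/12.2.0.1/bin/crsctl check crs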
[root@orcldb01 12.2.0.1]#
[root@orcldb01 12.2.0.1]#
[root@orcldb01 12.2.0.1]# ^C
[root@orcldb01 12.2.0.1]# id
uid=0(root) gid=0(root) groups=0(root)
[root@orcldb01 12.2.0.1]# pwd
/u01/app/grid/12.2.0.1
[root@orcldb01 12.2.0.1]# cd $ORACLE_HOME
[root@orcldb01 ~]# . .pro
bash: .pro: No such file or directory
[root@orcldb01 ~]# . .profile
bash: .profile: No such file or directory
[root@orcldb01 ~]# pwd
/root
[root@orcldb01 ~]#
[root@orcldb01 ~]# exit
bash-4.1$ id
uid=969(oracle) gid=533(dba) groups=533(dba),54321(oinstall)
bash-4.1$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details    
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       orcldb01                STABLE
ora.ORCL_FRA.dg
               ONLINE  ONLINE       orcldb01                STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       orcldb01                STABLE
ora.net1.network
               ONLINE  ONLINE       orcldb01                STABLE
ora.ons
               ONLINE  ONLINE       orcldb01                STABLE
ora.proxy_advm
               OFFLINE OFFLINE      orcldb01                STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       orcldb01                STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       orcldb01                STABLE
ora.MGMTLSNR
      1        OFFLINE OFFLINE                               STABLE
ora.asm
      1        ONLINE  ONLINE       orcldb01                Started,STABLE
      2        OFFLINE OFFLINE                               STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cvu
      1        ONLINE  ONLINE       orcldb01                STABLE
ora.orcldb01.vip
      1        ONLINE  ONLINE       orcldb01                STABLE
ora.qosmserver
      1        ONLINE  ONLINE       orcldb01                STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       orcldb01                STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       orcldb01                STABLE
--------------------------------------------------------------------------------
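Despite the checkpoint error, crsctl shows the stack serving resources on orcldb01 only, which is expected because root.sh has not been run on orcldb02 yet. A couple of standard checks to see which nodes have joined (sketch):

# Daemon-level health of the local stack:
crsctl check crs

# Cluster-wide check across all nodes that have joined:
crsctl check cluster -all

# List cluster nodes with node numbers and status:
olsnodes -n -s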
bash-4.1$ pwd
/u01/app/grid/12.2.0.1
bash-4.1$  ./runcluvfy.sh stage -pre crsinst -n orcldb01,orcldb02  -fixup -verbose

Verifying Physical Memory ...
  Node Name     Available                 Required                  Status  
  ------------  ------------------------  ------------------------  ----------
  orcldb02     377.5119GB (3.95849944E8KB)  8GB (8388608.0KB)         passed  
  orcldb01     377.5119GB (3.95849944E8KB)  8GB (8388608.0KB)         passed  
Verifying Physical Memory ...PASSED
Verifying Available Physical Memory ...
  Node Name     Available                 Required                  Status  
  ------------  ------------------------  ------------------------  ----------
  orcldb02     375.7066GB (3.93956892E8KB)  50MB (51200.0KB)          passed  
  orcldb01     371.7328GB (3.89790072E8KB)  50MB (51200.0KB)          passed  
Verifying Available Physical Memory ...PASSED
Verifying Swap Size ...
  Node Name     Available                 Required                  Status  
  ------------  ------------------------  ------------------------  ----------
  orcldb02     20GB (2.0971512E7KB)      16GB (1.6777216E7KB)      passed  
  orcldb01     20GB (2.0971508E7KB)      16GB (1.6777216E7KB)      passed  
Verifying Swap Size ...PASSED
Verifying Free Space: orcldb02:/usr,orcldb02:/var,orcldb02:/etc,orcldb02:/sbin,orcldb02:/tmp ...
  Path              Node Name     Mount point   Available     Required      Status    
  ----------------  ------------  ------------  ------------  ------------  ------------
  /usr              orcldb02     /             511.9434GB    25MB          passed    
  /var              orcldb02     /             511.9434GB    5MB           passed    
  /etc              orcldb02     /             511.9434GB    25MB          passed    
  /sbin             orcldb02     /             511.9434GB    10MB          passed    
  /tmp              orcldb02     /             511.9434GB    1GB           passed    
Verifying Free Space: orcldb02:/usr,orcldb02:/var,orcldb02:/etc,orcldb02:/sbin,orcldb02:/tmp ...PASSED
Verifying Free Space: orcldb02:/u01/app/grid/12.2.0.1 ...
  Path              Node Name     Mount point   Available     Required      Status    
  ----------------  ------------  ------------  ------------  ------------  ------------
  /u01/app/grid/12.2.0.1  orcldb02     /u01/app/grid  618.2217GB    6.9GB         passed    
Verifying Free Space: orcldb02:/u01/app/grid/12.2.0.1 ...PASSED
Verifying Free Space: orcldb01:/usr,orcldb01:/var,orcldb01:/etc,orcldb01:/sbin,orcldb01:/tmp ...
  Path              Node Name     Mount point   Available     Required      Status    
  ----------------  ------------  ------------  ------------  ------------  ------------
  /usr              orcldb01     /             521.9189GB    25MB          passed    
  /var              orcldb01     /             521.9189GB    5MB           passed    
  /etc              orcldb01     /             521.9189GB    25MB          passed    
  /sbin             orcldb01     /             521.9189GB    10MB          passed    
  /tmp              orcldb01     /             521.9189GB    1GB           passed    
Verifying Free Space: orcldb01:/usr,orcldb01:/var,orcldb01:/etc,orcldb01:/sbin,orcldb01:/tmp ...PASSED
Verifying Free Space: orcldb01:/u01/app/grid/12.2.0.1 ...
  Path              Node Name     Mount point   Available     Required      Status    
  ----------------  ------------  ------------  ------------  ------------  ------------
  /u01/app/grid/12.2.0.1  orcldb01     /u01/app/grid  605.2422GB    6.9GB         passed    
Verifying Free Space: orcldb01:/u01/app/grid/12.2.0.1 ...PASSED
Verifying User Existence: oracle ...
  Node Name     Status                    Comment              
  ------------  ------------------------  ------------------------
  orcldb02     passed                    exists(969)          
  orcldb01     passed                    exists(969)          

  Verifying Users With Same UID: 969 ...PASSED
Verifying User Existence: oracle ...PASSED
Verifying Group Existence: asmadmin ...
  Node Name     Status                    Comment              
  ------------  ------------------------  ------------------------
  orcldb02     failed                    does not exist        
  orcldb01     failed                    does not exist        
Verifying Group Existence: asmadmin ...FAILED (PRVG-10461)
Verifying Group Existence: dba ...
  Node Name     Status                    Comment              
  ------------  ------------------------  ------------------------
  orcldb02     passed                    exists                
  orcldb01     passed                    exists                
Verifying Group Existence: dba ...PASSED
Verifying Group Existence: asmdba ...
  Node Name     Status                    Comment              
  ------------  ------------------------  ------------------------
  orcldb02     failed                    does not exist        
  orcldb01     failed                    does not exist        
Verifying Group Existence: asmdba ...FAILED (PRVG-10461)
Verifying Group Membership: asmadmin ...
  Node Name         User Exists   Group Exists  User in Group  Status        
  ----------------  ------------  ------------  ------------  ----------------
  orcldb02         yes           no            no            failed        
  orcldb01         yes           no            no            failed        
Verifying Group Membership: asmadmin ...FAILED (PRVG-10460)
Verifying Group Membership: dba(Primary) ...
  Node Name         User Exists   Group Exists  User in Group  Primary       Status    
  ----------------  ------------  ------------  ------------  ------------  ------------
  orcldb02         yes           yes           yes           yes           passed    
  orcldb01         yes           yes           yes           yes           passed    
Verifying Group Membership: dba(Primary) ...PASSED
Verifying Group Membership: asmdba ...
  Node Name         User Exists   Group Exists  User in Group  Status        
  ----------------  ------------  ------------  ------------  ----------------
  orcldb02         yes           no            no            failed        
  orcldb01         yes           no            no            failed        
Verifying Group Membership: asmdba ...FAILED (PRVG-10460)
Verifying Run Level ...
  Node Name     run level                 Required                  Status  
  ------------  ------------------------  ------------------------  ----------
  orcldb02     3                         3,5                       passed  
  orcldb01     3                         3,5                       passed  
Verifying Run Level ...PASSED
Verifying Hard Limit: maximum open file descriptors ...
  Node Name         Type          Available     Required      Status        
  ----------------  ------------  ------------  ------------  ----------------
  orcldb02         hard          65536         65536         passed        
  orcldb01         hard          65536         65536         passed        
Verifying Hard Limit: maximum open file descriptors ...PASSED
Verifying Soft Limit: maximum open file descriptors ...
  Node Name         Type          Available     Required      Status        
  ----------------  ------------  ------------  ------------  ----------------
  orcldb02         soft          65536         1024          passed        
  orcldb01         soft          65536         1024          passed        
Verifying Soft Limit: maximum open file descriptors ...PASSED
Verifying Hard Limit: maximum user processes ...
  Node Name         Type          Available     Required      Status        
  ----------------  ------------  ------------  ------------  ----------------
  orcldb02         hard          16384         16384         passed        
  orcldb01         hard          16384         16384         passed        
Verifying Hard Limit: maximum user processes ...PASSED
Verifying Soft Limit: maximum user processes ...
  Node Name         Type          Available     Required      Status        
  ----------------  ------------  ------------  ------------  ----------------
  orcldb02         soft          16384         2047          passed        
  orcldb01         soft          16384         2047          passed        
Verifying Soft Limit: maximum user processes ...PASSED
Verifying Soft Limit: maximum stack size ...
  Node Name         Type          Available     Required      Status        
  ----------------  ------------  ------------  ------------  ----------------
  orcldb02         soft          10240         10240         passed        
  orcldb01         soft          10240         10240         passed        
Verifying Soft Limit: maximum stack size ...PASSED
Verifying Architecture ...
  Node Name     Available                 Required                  Status  
  ------------  ------------------------  ------------------------  ----------
  orcldb02     x86_64                    x86_64                    passed  
  orcldb01     x86_64                    x86_64                    passed  
Verifying Architecture ...PASSED
Verifying OS Kernel Version ...
  Node Name     Available                 Required                  Status  
  ------------  ------------------------  ------------------------  ----------
  orcldb02     4.1.12-37.4.1.el6uek.x86_64  2.6.39-400.211.1          passed  
  orcldb01     4.1.12-37.4.1.el6uek.x86_64  2.6.39-400.211.1          passed  
Verifying OS Kernel Version ...PASSED
Verifying OS Kernel Parameter: semmsl ...
  Node Name         Current       Configured    Required      Status        Comment  
  ----------------  ------------  ------------  ------------  ------------  ------------
  orcldb01         250           250           250           passed        
  orcldb02         250           250           250           passed        
Verifying OS Kernel Parameter: semmsl ...PASSED
Verifying OS Kernel Parameter: semmns ...
  Node Name         Current       Configured    Required      Status        Comment  
  ----------------  ------------  ------------  ------------  ------------  ------------
  orcldb01         32000         32000         32000         passed        
  orcldb02         32000         32000         32000         passed        
Verifying OS Kernel Parameter: semmns ...PASSED
Verifying OS Kernel Parameter: semopm ...
  Node Name         Current       Configured    Required      Status        Comment  
  ----------------  ------------  ------------  ------------  ------------  ------------
  orcldb01         100           100           100           passed        
  orcldb02         100           100           100           passed        
Verifying OS Kernel Parameter: semopm ...PASSED
Verifying OS Kernel Parameter: semmni ...
  Node Name         Current       Configured    Required      Status        Comment  
  ----------------  ------------  ------------  ------------  ------------  ------------
  orcldb01         128           128           128           passed        
  orcldb02         128           128           128           passed        
Verifying OS Kernel Parameter: semmni ...PASSED
Verifying OS Kernel Parameter: shmmax ...
  Node Name         Current       Configured    Required      Status        Comment  
  ----------------  ------------  ------------  ------------  ------------  ------------
  orcldb01         4398046511104  4398046511104  202675171328  passed        
  orcldb02         4398046511104  4398046511104  202675171328  passed        
Verifying OS Kernel Parameter: shmmax ...PASSED
Verifying OS Kernel Parameter: shmmni ...
  Node Name         Current       Configured    Required      Status        Comment  
  ----------------  ------------  ------------  ------------  ------------  ------------
  orcldb01         4096          4096          4096          passed        
  orcldb02         4096          4096          4096          passed        
Verifying OS Kernel Parameter: shmmni ...PASSED
Verifying OS Kernel Parameter: shmall ...
  Node Name         Current       Configured    Required      Status        Comment  
  ----------------  ------------  ------------  ------------  ------------  ------------
  orcldb01         4294967296    4294967296    39584994      passed        
  orcldb02         4294967296    4294967296    39584994      passed        
Verifying OS Kernel Parameter: shmall ...PASSED
Verifying OS Kernel Parameter: file-max ...
  Node Name         Current       Configured    Required      Status        Comment  
  ----------------  ------------  ------------  ------------  ------------  ------------
  orcldb01         6815744       6815744       6815744       passed        
  orcldb02         6815744       6815744       6815744       passed        
Verifying OS Kernel Parameter: file-max ...PASSED
Verifying OS Kernel Parameter: ip_local_port_range ...
  Node Name         Current       Configured    Required      Status        Comment  
  ----------------  ------------  ------------  ------------  ------------  ------------
  orcldb01         between 9000 & 65500  between 9000 & 65500  between 9000 & 65535  passed        
  orcldb02         between 9000 & 65500  between 9000 & 65500  between 9000 & 65535  passed        
Verifying OS Kernel Parameter: ip_local_port_range ...PASSED
Verifying OS Kernel Parameter: rmem_default ...
  Node Name         Current       Configured    Required      Status        Comment  
  ----------------  ------------  ------------  ------------  ------------  ------------
  orcldb01         262144        262144        262144        passed        
  orcldb02         262144        262144        262144        passed        
Verifying OS Kernel Parameter: rmem_default ...PASSED
Verifying OS Kernel Parameter: rmem_max ...
  Node Name         Current       Configured    Required      Status        Comment  
  ----------------  ------------  ------------  ------------  ------------  ------------
  orcldb01         4194304       4194304       4194304       passed        
  orcldb02         4194304       4194304       4194304       passed        
Verifying OS Kernel Parameter: rmem_max ...PASSED
Verifying OS Kernel Parameter: wmem_default ...
  Node Name         Current       Configured    Required      Status        Comment  
  ----------------  ------------  ------------  ------------  ------------  ------------
  orcldb01         262144        262144        262144        passed        
  orcldb02         262144        262144        262144        passed        
Verifying OS Kernel Parameter: wmem_default ...PASSED
Verifying OS Kernel Parameter: wmem_max ...
  Node Name         Current       Configured    Required      Status        Comment  
  ----------------  ------------  ------------  ------------  ------------  ------------
  orcldb01         1048576       1048576       1048576       passed        
  orcldb02         1048576       1048576       1048576       passed        
Verifying OS Kernel Parameter: wmem_max ...PASSED
Verifying OS Kernel Parameter: aio-max-nr ...
  Node Name         Current       Configured    Required      Status        Comment  
  ----------------  ------------  ------------  ------------  ------------  ------------
  orcldb01         1048576       1048576       1048576       passed        
  orcldb02         1048576       1048576       1048576       passed        
Verifying OS Kernel Parameter: aio-max-nr ...PASSED
Verifying OS Kernel Parameter: panic_on_oops ...
  Node Name         Current       Configured    Required      Status        Comment  
  ----------------  ------------  ------------  ------------  ------------  ------------
  orcldb01         1             1             1             passed        
  orcldb02         1             1             1             passed        
Verifying OS Kernel Parameter: panic_on_oops ...PASSED
Verifying Package: binutils-2.20.51.0.2 ...
  Node Name     Available                 Required                  Status  
  ------------  ------------------------  ------------------------  ----------
  orcldb02     binutils-2.20.51.0.2-5.44.el6  binutils-2.20.51.0.2      passed  
  orcldb01     binutils-2.20.51.0.2-5.44.el6  binutils-2.20.51.0.2      passed  
Verifying Package: binutils-2.20.51.0.2 ...PASSED
Verifying Package: compat-libcap1-1.10 ...
  Node Name     Available                 Required                  Status  
  ------------  ------------------------  ------------------------  ----------
  orcldb02     compat-libcap1-1.10-1     compat-libcap1-1.10       passed  
  orcldb01     compat-libcap1-1.10-1     compat-libcap1-1.10       passed  
Verifying Package: compat-libcap1-1.10 ...PASSED
Verifying Package: compat-libstdc++-33-3.2.3 (x86_64) ...
  Node Name     Available                 Required                  Status  
  ------------  ------------------------  ------------------------  ----------
  orcldb02     compat-libstdc++-33(x86_64)-3.2.3-69.el6  compat-libstdc++-33(x86_64)-3.2.3  passed  
  orcldb01     compat-libstdc++-33(x86_64)-3.2.3-69.el6  compat-libstdc++-33(x86_64)-3.2.3  passed  
Verifying Package: compat-libstdc++-33-3.2.3 (x86_64) ...PASSED
Verifying Package: libgcc-4.4.7 (x86_64) ...
  Node Name     Available                 Required                  Status  
  ------------  ------------------------  ------------------------  ----------
  orcldb02     libgcc(x86_64)-4.4.7-17.el6  libgcc(x86_64)-4.4.7      passed  
  orcldb01     libgcc(x86_64)-4.4.7-17.el6  libgcc(x86_64)-4.4.7      passed  
Verifying Package: libgcc-4.4.7 (x86_64) ...PASSED
Verifying Package: libstdc++-4.4.7 (x86_64) ...
  Node Name     Available                 Required                  Status  
  ------------  ------------------------  ------------------------  ----------
  orcldb02     libstdc++(x86_64)-4.4.7-17.el6  libstdc++(x86_64)-4.4.7   passed  
  orcldb01     libstdc++(x86_64)-4.4.7-17.el6  libstdc++(x86_64)-4.4.7   passed  
Verifying Package: libstdc++-4.4.7 (x86_64) ...PASSED
Verifying Package: libstdc++-devel-4.4.7 (x86_64) ...
  Node Name     Available                 Required                  Status  
  ------------  ------------------------  ------------------------  ----------
  orcldb02     libstdc++-devel(x86_64)-4.4.7-17.el6  libstdc++-devel(x86_64)-4.4.7  passed  
  orcldb01     libstdc++-devel(x86_64)-4.4.7-17.el6  libstdc++-devel(x86_64)-4.4.7  passed  
Verifying Package: libstdc++-devel-4.4.7 (x86_64) ...PASSED
Verifying Package: sysstat-9.0.4 ...
  Node Name     Available                 Required                  Status  
  ------------  ------------------------  ------------------------  ----------
  orcldb02     sysstat-9.0.4-31.el6      sysstat-9.0.4             passed  
  orcldb01     sysstat-9.0.4-31.el6      sysstat-9.0.4             passed  
Verifying Package: sysstat-9.0.4 ...PASSED
Verifying Package: ksh ...
  Node Name     Available                 Required                  Status  
  ------------  ------------------------  ------------------------  ----------
  orcldb02     ksh                       ksh                       passed  
  orcldb01     ksh                       ksh                       passed  
Verifying Package: ksh ...PASSED
Verifying Package: make-3.81 ...
  Node Name     Available                 Required                  Status  
  ------------  ------------------------  ------------------------  ----------
  orcldb02     make-3.81-23.el6          make-3.81                 passed  
  orcldb01     make-3.81-23.el6          make-3.81                 passed  
Verifying Package: make-3.81 ...PASSED
Verifying Package: glibc-2.12 (x86_64) ...
  Node Name     Available                 Required                  Status  
  ------------  ------------------------  ------------------------  ----------
  orcldb02     glibc(x86_64)-2.12-1.192.el6  glibc(x86_64)-2.12        passed  
  orcldb01     glibc(x86_64)-2.12-1.192.el6  glibc(x86_64)-2.12        passed  
Verifying Package: glibc-2.12 (x86_64) ...PASSED
Verifying Package: glibc-devel-2.12 (x86_64) ...
  Node Name     Available                 Required                  Status  
  ------------  ------------------------  ------------------------  ----------
  orcldb02     glibc-devel(x86_64)-2.12-1.192.el6  glibc-devel(x86_64)-2.12  passed  
  orcldb01     glibc-devel(x86_64)-2.12-1.192.el6  glibc-devel(x86_64)-2.12  passed  
Verifying Package: glibc-devel-2.12 (x86_64) ...PASSED
Verifying Package: libaio-0.3.107 (x86_64) ...
  Node Name     Available                 Required                  Status  
  ------------  ------------------------  ------------------------  ----------
  orcldb02     libaio(x86_64)-0.3.107-10.el6  libaio(x86_64)-0.3.107    passed  
  orcldb01     libaio(x86_64)-0.3.107-10.el6  libaio(x86_64)-0.3.107    passed  
Verifying Package: libaio-0.3.107 (x86_64) ...PASSED
Verifying Package: libaio-devel-0.3.107 (x86_64) ...
  Node Name     Available                 Required                  Status  
  ------------  ------------------------  ------------------------  ----------
  orcldb02     libaio-devel(x86_64)-0.3.107-10.el6  libaio-devel(x86_64)-0.3.107  passed  
  orcldb01     libaio-devel(x86_64)-0.3.107-10.el6  libaio-devel(x86_64)-0.3.107  passed  
Verifying Package: libaio-devel-0.3.107 (x86_64) ...PASSED
Verifying Package: nfs-utils-1.2.3-15 ...
  Node Name     Available                 Required                  Status  
  ------------  ------------------------  ------------------------  ----------
  orcldb02     nfs-utils-1.2.3-70.0.1.el6  nfs-utils-1.2.3-15        passed  
  orcldb01     nfs-utils-1.2.3-70.0.1.el6  nfs-utils-1.2.3-15        passed  
Verifying Package: nfs-utils-1.2.3-15 ...PASSED
Verifying Package: e2fsprogs-1.42.8 ...
  Node Name     Available                 Required                  Status  
  ------------  ------------------------  ------------------------  ----------
  orcldb02     e2fsprogs-1.42.8-1.0.2.el6  e2fsprogs-1.42.8          passed  
  orcldb01     e2fsprogs-1.42.8-1.0.2.el6  e2fsprogs-1.42.8          passed  
Verifying Package: e2fsprogs-1.42.8 ...PASSED
Verifying Package: e2fsprogs-libs-1.42.8 (x86_64) ...
  Node Name     Available                 Required                  Status  
  ------------  ------------------------  ------------------------  ----------
  orcldb02     e2fsprogs-libs(x86_64)-1.42.8-1.0.2.el6  e2fsprogs-libs(x86_64)-1.42.8  passed  
  orcldb01     e2fsprogs-libs(x86_64)-1.42.8-1.0.2.el6  e2fsprogs-libs(x86_64)-1.42.8  passed  
Verifying Package: e2fsprogs-libs-1.42.8 (x86_64) ...PASSED
Verifying Package: smartmontools-5.43-1 ...
  Node Name     Available                 Required                  Status  
  ------------  ------------------------  ------------------------  ----------
  orcldb02     smartmontools-5.43-1.el6  smartmontools-5.43-1      passed  
  orcldb01     smartmontools-5.43-1.el6  smartmontools-5.43-1      passed  
Verifying Package: smartmontools-5.43-1 ...PASSED
Verifying Package: net-tools-1.60-110 ...
  Node Name     Available                 Required                  Status  
  ------------  ------------------------  ------------------------  ----------
  orcldb02     net-tools-1.60-110.0.1.el6_2  net-tools-1.60-110        passed  
  orcldb01     net-tools-1.60-110.0.1.el6_2  net-tools-1.60-110        passed  
Verifying Package: net-tools-1.60-110 ...PASSED
Verifying Users With Same UID: 0 ...PASSED
Verifying Current Group ID ...PASSED
Verifying Root user consistency ...
  Node Name                             Status                
  ------------------------------------  ------------------------
  orcldb02                             passed                
  orcldb01                             passed                
Verifying Root user consistency ...PASSED
Verifying Package: cvuqdisk-1.0.10-1 ...
  Node Name     Available                 Required                  Status  
  ------------  ------------------------  ------------------------  ----------
  orcldb02     cvuqdisk-1.0.10-1         cvuqdisk-1.0.10-1         passed  
  orcldb01     cvuqdisk-1.0.10-1         cvuqdisk-1.0.10-1         passed  
Verifying Package: cvuqdisk-1.0.10-1 ...PASSED
Verifying Node Connectivity ...
  Verifying Hosts File ...
  Node Name                             Status                
  ------------------------------------  ------------------------
  orcldb01                             passed                
  orcldb02                             passed                
  Verifying Hosts File ...PASSED

Interface information for node "orcldb02"

 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 bond0  10.55.239.66    10.55.238.0     0.0.0.0         10.55.238.1     00:10:E0:BD:B4:5A 1500
 eth2   191.167.1.6     191.167.1.0     0.0.0.0         10.55.238.1     00:10:E0:BD:B4:5C 1500
 eth3   191.167.2.6     191.167.2.0     0.0.0.0         10.55.238.1     00:10:E0:BD:B4:5D 1500

Interface information for node "orcldb01"

 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 bond0  10.55.239.64    10.55.238.0     0.0.0.0         10.55.238.1     00:10:E0:BD:B5:2C 1500
 bond0  10.55.239.65    10.55.238.0     0.0.0.0         10.55.238.1     00:10:E0:BD:B5:2C 1500
 bond0  10.55.239.131   10.55.238.0     0.0.0.0         10.55.238.1     00:10:E0:BD:B5:2C 1500
 bond0  10.55.239.132   10.55.238.0     0.0.0.0         10.55.238.1     00:10:E0:BD:B5:2C 1500
 eth2   191.167.1.5     191.167.1.0     0.0.0.0         10.55.238.1     00:10:E0:BD:B5:2E 1500
 eth3   191.167.2.5     191.167.2.0     0.0.0.0         10.55.238.1     00:10:E0:BD:B5:2F 1500

Check: MTU consistency on the private interfaces of subnet "191.167.2.0,191.167.1.0"

  Node              Name          IP Address    Subnet        MTU          
  ----------------  ------------  ------------  ------------  ----------------
  orcldb02         eth3          191.167.2.6   191.167.2.0   1500          
  orcldb01         eth3          191.167.2.5   191.167.2.0   1500          
  orcldb02         eth2          191.167.1.6   191.167.1.0   1500          
  orcldb01         eth2          191.167.1.5   191.167.1.0   1500          

Check: MTU consistency of the subnet "10.55.238.0".

  Node              Name          IP Address    Subnet        MTU          
  ----------------  ------------  ------------  ------------  ----------------
  orcldb02         bond0         10.55.239.66  10.55.238.0   1500          
  orcldb01         bond0         10.55.239.64  10.55.238.0   1500          
  orcldb01         bond0         10.55.239.65  10.55.238.0   1500          
  orcldb01         bond0         10.55.239.131  10.55.238.0   1500          
  orcldb01         bond0         10.55.239.132  10.55.238.0   1500          

  Source                          Destination                     Connected?    
  ------------------------------  ------------------------------  ----------------
  orcldb01[eth3:191.167.2.5]     orcldb02[eth3:191.167.2.6]     yes          

  Source                          Destination                     Connected?    
  ------------------------------  ------------------------------  ----------------
  orcldb01[eth2:191.167.1.5]     orcldb02[eth2:191.167.1.6]     yes          

  Source                          Destination                     Connected?    
  ------------------------------  ------------------------------  ----------------
  orcldb01[bond0:10.55.239.64]   orcldb02[bond0:10.55.239.66]   yes          
  orcldb01[bond0:10.55.239.64]   orcldb01[bond0:10.55.239.65]   yes          
  orcldb01[bond0:10.55.239.64]   orcldb01[bond0:10.55.239.131]  yes          
  orcldb01[bond0:10.55.239.64]   orcldb01[bond0:10.55.239.132]  yes          
  orcldb02[bond0:10.55.239.66]   orcldb01[bond0:10.55.239.65]   yes          
  orcldb02[bond0:10.55.239.66]   orcldb01[bond0:10.55.239.131]  yes          
  orcldb02[bond0:10.55.239.66]   orcldb01[bond0:10.55.239.132]  yes          
  orcldb01[bond0:10.55.239.65]   orcldb01[bond0:10.55.239.131]  yes          
  orcldb01[bond0:10.55.239.65]   orcldb01[bond0:10.55.239.132]  yes          
  orcldb01[bond0:10.55.239.131]  orcldb01[bond0:10.55.239.132]  yes          
  Verifying Check that maximum (MTU) size packet goes through subnet ...PASSED
  Verifying subnet mask consistency for subnet "191.167.2.0" ...PASSED
  Verifying subnet mask consistency for subnet "191.167.1.0" ...PASSED
  Verifying subnet mask consistency for subnet "10.55.238.0" ...PASSED
Verifying Node Connectivity ...PASSED
Verifying Multicast check ...
Checking subnet "191.167.1.0" for multicast communication with multicast group "224.0.0.251"

Checking subnet "191.167.2.0" for multicast communication with multicast group "224.0.0.251"
Verifying Multicast check ...PASSED
Verifying ASM Integrity ...
  Verifying Node Connectivity ...
    Verifying Hosts File ...
  Node Name                             Status                
  ------------------------------------  ------------------------
  orcldb01                             passed                
    Verifying Hosts File ...PASSED

Interface information for node "orcldb01"

 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 bond0  10.55.239.64    10.55.238.0     0.0.0.0         10.55.238.1     00:10:E0:BD:B5:2C 1500
 bond0  10.55.239.65    10.55.238.0     0.0.0.0         10.55.238.1     00:10:E0:BD:B5:2C 1500
 bond0  10.55.239.131   10.55.238.0     0.0.0.0         10.55.238.1     00:10:E0:BD:B5:2C 1500
 bond0  10.55.239.132   10.55.238.0     0.0.0.0         10.55.238.1     00:10:E0:BD:B5:2C 1500
 eth2   191.167.1.5     191.167.1.0     0.0.0.0         10.55.238.1     00:10:E0:BD:B5:2E 1500
 eth3   191.167.2.5     191.167.2.0     0.0.0.0         10.55.238.1     00:10:E0:BD:B5:2F 1500

Check: MTU consistency on the private interfaces of subnet "191.167.2.0,191.167.1.0"

  Node              Name          IP Address    Subnet        MTU          
  ----------------  ------------  ------------  ------------  ----------------
  orcldb01         eth3          191.167.2.5   191.167.2.0   1500          
  orcldb01         eth2          191.167.1.5   191.167.1.0   1500          

Check: MTU consistency of the subnet "10.55.238.0".

  Node              Name          IP Address    Subnet        MTU          
  ----------------  ------------  ------------  ------------  ----------------
  orcldb01         bond0         10.55.239.64  10.55.238.0   1500          
  orcldb01         bond0         10.55.239.65  10.55.238.0   1500          
  orcldb01         bond0         10.55.239.131  10.55.238.0   1500          
  orcldb01         bond0         10.55.239.132  10.55.238.0   1500          

  Source                          Destination                     Connected?    
  ------------------------------  ------------------------------  ----------------
  orcldb01[bond0:10.55.239.64]   orcldb01[bond0:10.55.239.65]   yes          
  orcldb01[bond0:10.55.239.64]   orcldb01[bond0:10.55.239.131]  yes          
  orcldb01[bond0:10.55.239.64]   orcldb01[bond0:10.55.239.132]  yes          
  orcldb01[bond0:10.55.239.65]   orcldb01[bond0:10.55.239.131]  yes          
  orcldb01[bond0:10.55.239.65]   orcldb01[bond0:10.55.239.132]  yes          
  orcldb01[bond0:10.55.239.131]  orcldb01[bond0:10.55.239.132]  yes          
    Verifying Check that maximum (MTU) size packet goes through subnet ...PASSED
  Verifying Node Connectivity ...PASSED
Verifying ASM Integrity ...PASSED
Verifying Network Time Protocol (NTP) ...
  Verifying '/etc/ntp.conf' ...
  Node Name                             File exists?          
  ------------------------------------  ------------------------
  orcldb02                             yes                  
  orcldb01                             yes                  

  Verifying '/etc/ntp.conf' ...PASSED
  Verifying '/var/run/ntpd.pid' ...
  Node Name                             File exists?          
  ------------------------------------  ------------------------
  orcldb02                             yes                  
  orcldb01                             yes                  

  Verifying '/var/run/ntpd.pid' ...PASSED
  Verifying Daemon 'ntpd' ...
  Node Name                             Running?              
  ------------------------------------  ------------------------
  orcldb02                             yes                  
  orcldb01                             yes                  

  Verifying Daemon 'ntpd' ...PASSED
  Verifying NTP daemon or service using UDP port 123 ...
  Node Name                             Port Open?            
  ------------------------------------  ------------------------
  orcldb02                             yes                  
  orcldb01                             yes                  

  Verifying NTP daemon or service using UDP port 123 ...PASSED
  Verifying NTP daemon is synchronized with at least one external time source ...PASSED
Verifying Network Time Protocol (NTP) ...PASSED
Verifying Same core file name pattern ...PASSED
Verifying User Mask ...
  Node Name     Available                 Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  orcldb02     0022                      0022                      passed  
  orcldb01     0022                      0022                      passed  
Verifying User Mask ...PASSED
Verifying User Not In Group "root": oracle ...
  Node Name     Status                    Comment              
  ------------  ------------------------  ------------------------
  orcldb02     passed                    does not exist        
  orcldb01     passed                    does not exist        
Verifying User Not In Group "root": oracle ...PASSED
Verifying Time zone consistency ...PASSED
Verifying resolv.conf Integrity ...
  Verifying (Linux) resolv.conf Integrity ...
  Node Name                             Status                
  ------------------------------------  ------------------------
  orcldb01                             passed                

  checking response for name "orcldb01" from each of the name servers
  specified in "/etc/resolv.conf"

  Node Name     Source                    Comment                   Status  
  ------------  ------------------------  ------------------------  ----------
  orcldb01     10.55.64.23               IPv4                      passed  
  orcldb01     10.55.64.26               IPv4                      passed  
  Verifying (Linux) resolv.conf Integrity ...PASSED
Verifying resolv.conf Integrity ...PASSED
Verifying DNS/NIS name service ...PASSED
Verifying Domain Sockets ...FAILED (PRVG-11750)
Verifying /boot mount ...PASSED
Verifying Daemon "avahi-daemon" not configured and running ...
  Node Name     Configured                Status                
  ------------  ------------------------  ------------------------
  orcldb02     no                        passed                
  orcldb01     no                        passed                

  Node Name     Running?                  Status                
  ------------  ------------------------  ------------------------
  orcldb02     no                        passed                
  orcldb01     no                        passed                
Verifying Daemon "avahi-daemon" not configured and running ...PASSED
Verifying Daemon "proxyt" not configured and running ...
  Node Name     Configured                Status                
  ------------  ------------------------  ------------------------
  orcldb02     no                        passed                
  orcldb01     no                        passed                

  Node Name     Running?                  Status                
  ------------  ------------------------  ------------------------
  orcldb02     no                        passed                
  orcldb01     no                        passed                
Verifying Daemon "proxyt" not configured and running ...PASSED
Verifying loopback network interface address ...PASSED
Verifying Grid Infrastructure home path: /u01/app/grid/12.2.0.1 ...
  Verifying '/u01/app/grid/12.2.0.1' ...FAILED (PRVG-11931)
Verifying Grid Infrastructure home path: /u01/app/grid/12.2.0.1 ...FAILED (PRVG-11931)
Verifying User Equivalence ...PASSED
Verifying /dev/shm mounted as temporary file system ...PASSED
Verifying File system mount options for path /var ...PASSED
Verifying zeroconf check ...PASSED
Verifying ASM Filter Driver configuration ...WARNING (PRVE-10237, PRVE-10239)

Pre-check for cluster services setup was unsuccessful on all the nodes.


Failures were encountered during execution of CVU verification request "stage -pre crsinst".

Verifying Group Existence: asmadmin ...FAILED
orcldb02: PRVG-10461 : Group "asmadmin" selected for privileges "OSASM" does
           not exist on node "orcldb02".

orcldb01: PRVG-10461 : Group "asmadmin" selected for privileges "OSASM" does
           not exist on node "orcldb01".

Verifying Group Existence: asmdba ...FAILED
orcldb02: PRVG-10461 : Group "asmdba" selected for privileges "OSDBA" does not
           exist on node "orcldb02".

orcldb01: PRVG-10461 : Group "asmdba" selected for privileges "OSDBA" does not
           exist on node "orcldb01".

Verifying Group Membership: asmadmin ...FAILED
orcldb02: PRVG-10460 : User "oracle" does not belong to group "asmadmin"
           selected for privileges "OSASM" on node "orcldb02".

orcldb01: PRVG-10460 : User "oracle" does not belong to group "asmadmin"
           selected for privileges "OSASM" on node "orcldb01".

Verifying Group Membership: asmdba ...FAILED
orcldb02: PRVG-10460 : User "oracle" does not belong to group "asmdba"
           selected for privileges "OSDBA" on node "orcldb02".

orcldb01: PRVG-10460 : User "oracle" does not belong to group "asmdba"
           selected for privileges "OSDBA" on node "orcldb01".

Verifying Domain Sockets ...FAILED
orcldb01: PRVG-11750 : File "/var/tmp/.oracle/ora_gipc_orcldb01_INIT" exists
           on node "orcldb01".
orcldb01: PRVG-11750 : File "/var/tmp/.oracle/ora_gipc_monitor_ag_orcldb01_"
           exists on node "orcldb01".
orcldb01: PRVG-11750 : File
           "/var/tmp/.oracle/ora_gipc_css_ctrllcl_orcldb01_orcl-den-live"
           exists on node "orcldb01".
orcldb01: PRVG-11750 : File
           "/var/tmp/.oracle/ora_gipc_sorcldb01oracleorcl-den-liveCRFM_MIIPC_
           lock" exists on node "orcldb01".
orcldb01: PRVG-11750 : File "/var/tmp/.oracle/ora_gipc_orcldb01_EVMD" exists
           on node "orcldb01".
orcldb01: PRVG-11750 : File
           "/var/tmp/.oracle/ora_gipc_agent_ag_orcldb01__lock" exists on node
           "orcldb01".
orcldb01: PRVG-11750 : File "/var/tmp/.oracle/ora_gipc_GPNPD_orcldb01" exists
           on node "orcldb01".
orcldb01: PRVG-11750 : File "/var/tmp/.oracle/ora_gipc_orcldb01_CSSD" exists
           on node "orcldb01".
orcldb01: PRVG-11750 : File "/var/tmp/.oracle/ora_gipc_orcldb01_CRSD" exists
           on node "orcldb01".
orcldb01: PRVG-11750 : File "/var/tmp/.oracle/ora_gipc_orcldb01_CTSSD_lock"
           exists on node "orcldb01".
orcldb01: PRVG-11750 : File "/var/tmp/.oracle/ora_gipc_orcldb01_CSSD_lock"
           exists on node "orcldb01".
orcldb01: PRVG-11750 : File "/var/tmp/.oracle/ora_gipc_orcldb01_GIPCD" exists
           on node "orcldb01".
orcldb01: PRVG-11750 : File
           "/var/tmp/.oracle/ora_gipc_css_ctrllcl_orcldb01_orcl-den-live_lock
           " exists on node "orcldb01".
orcldb01: PRVG-11750 : File "/var/tmp/.oracle/ora_gipc_gipcd_orcldb01" exists
           on node "orcldb01".
orcldb01: PRVG-11750 : File "/var/tmp/.oracle/ora_gipc_orcldb01_MOND" exists
           on node "orcldb01".
orcldb01: PRVG-11750 : File "/var/tmp/.oracle/ora_gipc_orcldb01_GPNPD" exists
           on node "orcldb01".
orcldb01: PRVG-11750 : File "/var/tmp/.oracle/npohasd" exists on node
           "orcldb01".
orcldb01: PRVG-11750 : File
           "/var/tmp/.oracle/ora_gipc_sorcldb01oracleorcl-den-liveCRFM_CLIIPC
           " exists on node "orcldb01".
orcldb01: PRVG-11750 : File
           "/var/tmp/.oracle/ora_gipc_sorcldb01oracleorcl-den-liveCRFM_MIIPC"
            exists on node "orcldb01".
orcldb01: PRVG-11750 : File "/var/tmp/.oracle/ora_gipc_orcldb01_LOGD" exists
           on node "orcldb01".
orcldb01: PRVG-11750 : File "/var/tmp/.oracle/ora_gipc_orcldb01_MDNSD" exists
           on node "orcldb01".
orcldb01: PRVG-11750 : File "/var/tmp/.oracle/mdnsd.pid" exists on node
           "orcldb01".
orcldb01: PRVG-11750 : File "/var/tmp/.oracle/ora_gipc_agent_ag_orcldb01_"
           exists on node "orcldb01".
orcldb01: PRVG-11750 : File
           "/var/tmp/.oracle/ora_gipc_sorcldb01oracleorcl-den-liveCRFM_CLIIPC
           _lock" exists on node "orcldb01".
orcldb01: PRVG-11750 : File "/var/tmp/.oracle/ora_gipc_orcldb01_MDNSD_lock"
           exists on node "orcldb01".
orcldb01: PRVG-11750 : File "/var/tmp/.oracle/ora_gipc_orcldb01_GIPCD_lock"
           exists on node "orcldb01".
orcldb01: PRVG-11750 : File "/var/tmp/.oracle/ora_gipc_orcldb01_GPNPD_lock"
           exists on node "orcldb01".
orcldb01: PRVG-11750 : File "/var/tmp/.oracle/ora_gipc_orcldb01_CTSSD" exists
           on node "orcldb01".
orcldb01: PRVG-11750 : File "/var/tmp/.oracle/ora_gipc_orcldb01_EVMD_lock"
           exists on node "orcldb01".
orcldb01: PRVG-11750 : File
           "/var/tmp/.oracle/ora_gipc_monitor_ag_orcldb01__lock" exists on
           node "orcldb01".
orcldb01: PRVG-11750 : File "/var/tmp/.oracle/mdnsd" exists on node
           "orcldb01".
orcldb01: PRVG-11750 : File "/var/tmp/.oracle/ora_gipc_orcldb01_LOGD_lock"
           exists on node "orcldb01".
orcldb01: PRVG-11750 : File "/var/tmp/.oracle/ora_gipc_orcldb01_CRSD_lock"
           exists on node "orcldb01".
orcldb01: PRVG-11750 : File "/var/tmp/.oracle/ora_gipc_GPNPD_orcldb01_lock"
           exists on node "orcldb01".
orcldb01: PRVG-11750 : File
           "/var/tmp/.oracle/ora_gipc_sorcldb01oracleorcl-den-liveCRFM_SIPC"
           exists on node "orcldb01".
orcldb01: PRVG-11750 : File "/var/tmp/.oracle/ora_gipc_gipcd_orcldb01_lock"
           exists on node "orcldb01".
orcldb01: PRVG-11750 : File
           "/var/tmp/.oracle/ora_gipc_sorcldb01oracleorcl-den-liveCRFM_SIPC_l
           ock" exists on node "orcldb01".
orcldb01: PRVG-11750 : File "/var/tmp/.oracle/ora_gipc_orcldb01_MOND_lock"
           exists on node "orcldb01".
orcldb01: PRVG-11750 : File "/var/tmp/.oracle/ora_gipc_orcldb01_INIT_lock"
           exists on node "orcldb01".

Verifying Grid Infrastructure home path: /u01/app/grid/12.2.0.1 ...FAILED
  Verifying '/u01/app/grid/12.2.0.1' ...FAILED
  orcldb01: PRVG-11931 : Path "/u01/app/grid/12.2.0.1" is not writeable on node
             "orcldb01".

Verifying ASM Filter Driver configuration ...WARNING
orcldb02: PRVE-10237 : Existence of files
           "/lib/modules/4.1.12-32.el6uek.x86_64/extra/oracle/oracleafd.ko,/lib/
           modules/4.1.12-37.4.1.el6uek.x86_64/weak-updates/oracle/oracleafd.ko,
           /opt/oracle/extapi/64/asm/orcl/1/libafd12.so" is not expected on
           node "orcldb02" before Clusterware installation or upgrade.
orcldb02: PRVE-10239 : ASM Filter Driver "oracleafd" is not expected to be
           loaded on node "orcldb02" before Clusterware installation or
           upgrade.

orcldb01: PRVE-10237 : Existence of files
           "/lib/modules/4.1.12-32.el6uek.x86_64/extra/oracle/oracleafd.ko,/lib/
           modules/4.1.12-37.4.1.el6uek.x86_64/weak-updates/oracle/oracleafd.ko,
           /opt/oracle/extapi/64/asm/orcl/1/libafd12.so" is not expected on
           node "orcldb01" before Clusterware installation or upgrade.
orcldb01: PRVE-10239 : ASM Filter Driver "oracleafd" is not expected to be
           loaded on node "orcldb01" before Clusterware installation or
           upgrade.


CVU operation performed:      stage -pre crsinst
Date:                         Oct 12, 2018 12:27:44 PM
CVU home:                     /u01/app/grid/12.2.0.1/
User:                         oracle
******************************************************************************************
Fix up could not be generated for the following fixable prerequisites
******************************************************************************************
Check: Group Membership: asmadmin
Failed on nodes: orcldb02,orcldb01
ERROR:
PRVF-7730 : Fixup cannot be generated for user "oracle", group "asmadmin", on node "orcldb02" because the group is not defined locally on the node
PRVF-7730 : Fixup cannot be generated for user "oracle", group "asmadmin", on node "orcldb01" because the group is not defined locally on the node


******************************************************************************************
Following is the list of fixable prerequisites selected to fix in this session
******************************************************************************************
--------------                ---------------     ----------------  
Check failed.                 Failed on nodes     Reboot required?  
--------------                ---------------     ----------------  
Group Existence: asmadmin     orcldb02,          no                
                              orcldb01                            
Group Existence: asmdba       orcldb02,          no                
                              orcldb01                            


Execute "/tmp/CVU_12.2.0.1.0_oracle/runfixup.sh" as root user on nodes "orcldb02,orcldb01" to perform the fix up operations manually

Press ENTER key to continue after execution of "/tmp/CVU_12.2.0.1.0_oracle/runfixup.sh" has completed on nodes "orcldb02,orcldb01"
^Cbash-4.1$
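CVU could not generate a fixup here because the asmadmin and asmdba groups are not defined locally on the nodes (PRVF-7730), so they have to be created by hand before the check can pass. A minimal sketch of that manual step, assuming the default group names from the check output (run as root on both orcldb01 and orcldb02; GIDs and the group source are site-specific assumptions):

# create the missing OS groups and add the oracle user to them
groupadd asmadmin
groupadd asmdba
usermod -a -G asmadmin,asmdba oracle
# verify the memberships picked up by a new session
id oracle
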
bash-4.1$
bash-4.1$  /u01/app/grid/12.2.0.1/gridSetup.sh -executeConfigTools -responseFile /u01/app/grid/12.2.0.1/install/response/grid12c.rsp [-silent]
ERROR: Unable to verify the graphical display setup. This application requires X display. Make sure that xdpyinfo exist under PATH variable.

[-silent]
Usage:  gridSetup.sh [<flag>] [<option>]
Following are the possible flags:
        -help - display help.
        -silent - run in silent mode. The inputs can be a response file or a list of command line variable value pairs.
                [-lenientInstallMode - perform the best effort installation by automatically ignoring invalid data in input parameters.]
                [-ignorePrereqFailure - ignore all prerequisite checks failures.]
        -responseFile - specify the complete path of the response file to use.
        -logLevel - enable the log of messages up to the priority level provided in this argument. Valid options are: severe, warning, info, config, fine, finer, finest.
        -executePrereqs | -executeConfigTools | -createGoldImage | -switchGridHome
        -executePrereqs - execute the prerequisite checks only.
        -executeConfigTools - execute the config tools for an installed home.
                -responseFile - specify the complete path of the response file to use.
                [-all - execute all the config tools for an installed home, including the config tools that have already succeeded.]
                [-skipStackCheck - skip the stack status check.]
        -createGoldImage - create a gold image from the current Oracle home.
                -destinationLocation - specify the complete path to where the created gold image will be located.
                [-exclFiles - specify the complete paths to the files to be excluded from the new gold image.]
        -switchGridHome - change the Oracle Grid Infrastructure home path.
        -debug - run in debug mode.
        -printdiskusage - log the debug information for the disk usage.
        -printmemory - log the debug information for the memory usage.
        -printtime - log the debug information for the time usage.
        -waitForCompletion - wait for the completion of the installation, instead of spawning the installer and returning the console prompt.
        -noconfig - do not execute the config tools.
        -noconsole - suppress the display of messages in the console. The console is not allocated.
        -skipPrereqs - skip the prerequisite checks.
        -ignoreInternalDriverError - ignore any internal driver errors.
        -noCopy - add a new node to the existing configuration without copying the software on to the specified nodes.
        -version - get the product version.



bash-4.1$
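The X display error above came from passing the optional flag literally as "[-silent]" (with the square brackets), so the installer fell back to expecting a GUI. The working invocation, used later in this same session, simply drops the brackets:

/u01/app/grid/12.2.0.1/gridSetup.sh -executeConfigTools -responseFile /u01/app/grid/12.2.0.1/install/response/grid12c.rsp -silent
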
bash-4.1$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details    
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       orcldb01                STABLE
ora.ORCL_FRA.dg
               ONLINE  ONLINE       orcldb01                STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       orcldb01                STABLE
ora.net1.network
               ONLINE  ONLINE       orcldb01                STABLE
ora.ons
               ONLINE  ONLINE       orcldb01                STABLE
ora.proxy_advm
               OFFLINE OFFLINE      orcldb01                STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       orcldb01                STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       orcldb01                STABLE
ora.MGMTLSNR
      1        OFFLINE OFFLINE                               STABLE
ora.asm
      1        ONLINE  ONLINE       orcldb01                Started,STABLE
      2        OFFLINE OFFLINE                               STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cvu
      1        ONLINE  ONLINE       orcldb01                STABLE
ora.orcldb01.vip
      1        ONLINE  ONLINE       orcldb01                STABLE
ora.qosmserver
      1        ONLINE  ONLINE       orcldb01                STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       orcldb01                STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       orcldb01                STABLE
--------------------------------------------------------------------------------
bash-4.1$ hostname
orcldb01
bash-4.1$ history|grep chown
  267  chown -R oracle:dba /u01/app/grid
  269  sudo chown -R oracle:dba /u01/app/grid
  569  chown -R oracle:dba /u01/app/oracle
  598  chown oracle:dba /u01/app/grid/12.2.0.1/install/response/grid12c.rsp
  599  sudo  chown oracle:dba /u01/app/grid/12.2.0.1/install/response/grid12c.rsp
  891  sudo chown -R oracle:dba /u01/app/grid
  892   sudo chown -R oracle:dba /u01/app/oracle
  909  chown -R oracle:dba ./12.2.0.1
  912  chown -R oracle:dba ./12.2.0.1
  944  sudo chown -R oracle:dba /u01/app/oracle/diag
  976  history|grep chown
bash-4.1$ ls -ltr  /u01/app/grid/12.2.0.1/crs/install/rootcrs.pl

-rwxr-xr-x 1 root dba 19442 Jul  2  2016 /u01/app/grid/12.2.0.1/crs/install/rootcrs.pl
bash-4.1$ ls -l /u01/app
total 16
drwx------   2 bb     bb   4096 Apr 16 14:18 bb
drwxr-xr-x   2 root   root 4096 Oct 12 07:24 dselvarajan
drwxr-xr-x.  4 root   dba  4096 Sep 12 18:12 grid
drwxr-xr-x. 21 oracle dba  4096 Oct 12 12:17 oracle


bash-4.1$  ls -l /u01/app/oracle
total 96
drwxr-xr-x   2 oracle dba   4096 Oct  2 12:47 abc
drwxr-xr-x.  4 oracle dba   4096 Oct 12 12:16 admin
drwxr-xr-x.  3 oracle dba   4096 Jan 11  2017 audit
drwxrwxrwx.  2 oracle dba   4096 Jan 12  2017 bin
drwxrwxr-x   5 oracle dba   4096 Oct 12 12:16 cfgtoollogs
drwxr-xr-x   2 oracle dba   4096 Oct 12 12:00 checkpoints
drwxrwxr-x   6 oracle dba   4096 Oct 12 12:13 crsdata
-rw-r--r--   1 oracle dba      0 Oct  9 01:49 def
drwxrwxr-x  21 oracle dba   4096 Oct 12 12:00 diag
drwxr-xr-x   3 oracle dba   4096 Oct 12 12:17 diagsnap
drwxr-xr-x   3 root   root  4096 Oct 11 23:09 log
drwxr-xr-x.  2 oracle dba   4096 Jan 12  2017 logs
drwx------.  2 oracle dba  16384 Jan 11  2017 lost+found
drwxr-xr-x   3 oracle dba   4096 Oct 12 12:13 orcldb01
drwxrwx---   4 oracle dba   4096 Oct 12 12:09 oraInventory
drwxr-xr-x.  3 oracle dba   4096 Sep 12 18:38 product
-rwxrwxrwx   1 oracle dba  12768 Oct  2 12:59 rhist.txt
drwxr-x--x   4 root   root  4096 Oct 12 12:13 tfa
drwxr-xr-x.  3 oracle dba   4096 Jan 12  2017 utils
bash-4.1$ ls -l /u01/app/grid
total 12362836
drwxr-xr-x  88 root   dba        4096 Oct 12 12:13 12.2.0.1
-rwxr-xr-x   1 oracle dba 12659517440 Sep 12 18:10 Grid_12201_Vanilla_Linux.tar
drwx------.  2 oracle dba       16384 Jan 11  2017 lost+found
bash-4.1$ sudo chown -R oracle:dba /u01/app/grid
bash-4.1$ ls -l /u01/app/grid                
total 12362836
drwxr-xr-x  88 oracle dba        4096 Oct 12 12:13 12.2.0.1
-rwxr-xr-x   1 oracle dba 12659517440 Sep 12 18:10 Grid_12201_Vanilla_Linux.tar
drwx------.  2 oracle dba       16384 Jan 11  2017 lost+found
bash-4.1$ ls -ltr  /u01/app/grid/12.2.0.1/crs/install/rootcrs.pl
-rwxr-xr-x 1 oracle dba 19442 Jul  2  2016 /u01/app/grid/12.2.0.1/crs/install/rootcrs.pl
bash-4.1$ id
uid=969(oracle) gid=533(dba) groups=533(dba),54321(oinstall)
bash-4.1$ sudo bash
[root@orcldb01 12.2.0.1]#
[root@orcldb01 12.2.0.1]# /u01/app/grid/12.2.0.1/perl/bin/perl -I/u01/app/grid/12.2.0.1/perl/lib -I/u01/app/grid/12.2.0.1/crs/install /u01/app/grid/12.2.0.1/crs/install/rootcrs.pl
Using configuration parameter file: /u01/app/grid/12.2.0.1/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/oracle/crsdata/orcldb01/crsconfig/rootcrs_orcldb01_2018-10-12_01-14-47PM.log
2018/10/12 13:14:50 CLSRSC-594: Executing installation step 1 of 19: 'SetupTFA'.
2018/10/12 13:14:50 CLSRSC-4001: Installing Oracle Trace File Analyzer (TFA) Collector.
2018/10/12 13:14:50 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
2018/10/12 13:14:51 CLSRSC-594: Executing installation step 2 of 19: 'ValidateEnv'.
2018/10/12 13:14:52 CLSRSC-363: User ignored prerequisites during installation
2018/10/12 13:14:52 CLSRSC-594: Executing installation step 3 of 19: 'CheckFirstNode'.
2018/10/12 13:14:55 CLSRSC-594: Executing installation step 4 of 19: 'GenSiteGUIDs'.
2018/10/12 13:14:55 CLSRSC-594: Executing installation step 5 of 19: 'SaveParamFile'.
2018/10/12 13:14:56 CLSRSC-594: Executing installation step 6 of 19: 'SetupOSD'.
2018/10/12 13:14:59 CLSRSC-594: Executing installation step 7 of 19: 'CheckCRSConfig'.
2018/10/12 13:14:59 CLSRSC-594: Executing installation step 8 of 19: 'SetupLocalGPNP'.
2018/10/12 13:15:01 CLSRSC-594: Executing installation step 9 of 19: 'ConfigOLR'.
2018/10/12 13:15:02 CLSRSC-594: Executing installation step 10 of 19: 'ConfigCHMOS'.
2018/10/12 13:15:02 CLSRSC-594: Executing installation step 11 of 19: 'CreateOHASD'.
2018/10/12 13:15:03 CLSRSC-594: Executing installation step 12 of 19: 'ConfigOHASD'.
2018/10/12 13:15:04 CLSRSC-594: Executing installation step 13 of 19: 'InstallAFD'.
2018/10/12 13:15:05 CLSRSC-594: Executing installation step 14 of 19: 'InstallACFS'.
2018/10/12 13:15:07 CLSRSC-594: Executing installation step 15 of 19: 'InstallKA'.
2018/10/12 13:15:08 CLSRSC-594: Executing installation step 16 of 19: 'InitConfig'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'orcldb01'
CRS-2673: Attempting to stop 'ora.crsd' on 'orcldb01'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on server 'orcldb01'
CRS-2673: Attempting to stop 'ora.ORCL_FRA.dg' on 'orcldb01'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN2.lsnr' on 'orcldb01'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'orcldb01'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'orcldb01'
CRS-2673: Attempting to stop 'ora.cvu' on 'orcldb01'
CRS-2673: Attempting to stop 'ora.qosmserver' on 'orcldb01'
CRS-2677: Stop of 'ora.ORCL_FRA.dg' on 'orcldb01' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'orcldb01'
CRS-2677: Stop of 'ora.cvu' on 'orcldb01' succeeded
CRS-2677: Stop of 'ora.LISTENER_SCAN2.lsnr' on 'orcldb01' succeeded
CRS-2673: Attempting to stop 'ora.scan2.vip' on 'orcldb01'
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'orcldb01' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'orcldb01'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'orcldb01' succeeded
CRS-2673: Attempting to stop 'ora.orcldb01.vip' on 'orcldb01'
CRS-2677: Stop of 'ora.asm' on 'orcldb01' succeeded
CRS-2673: Attempting to stop 'ora.ASMNET1LSNR_ASM.lsnr' on 'orcldb01'
CRS-2677: Stop of 'ora.scan2.vip' on 'orcldb01' succeeded
CRS-2677: Stop of 'ora.qosmserver' on 'orcldb01' succeeded
CRS-2677: Stop of 'ora.scan1.vip' on 'orcldb01' succeeded
CRS-2677: Stop of 'ora.ASMNET1LSNR_ASM.lsnr' on 'orcldb01' succeeded
CRS-2677: Stop of 'ora.orcldb01.vip' on 'orcldb01' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'orcldb01'
CRS-2677: Stop of 'ora.ons' on 'orcldb01' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'orcldb01'
CRS-2677: Stop of 'ora.net1.network' on 'orcldb01' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'orcldb01' has completed
CRS-2677: Stop of 'ora.crsd' on 'orcldb01' succeeded
CRS-2673: Attempting to stop 'ora.storage' on 'orcldb01'
CRS-2673: Attempting to stop 'ora.crf' on 'orcldb01'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'orcldb01'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'orcldb01'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'orcldb01'
CRS-2677: Stop of 'ora.drivers.acfs' on 'orcldb01' succeeded
CRS-2677: Stop of 'ora.crf' on 'orcldb01' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'orcldb01' succeeded
CRS-2677: Stop of 'ora.storage' on 'orcldb01' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'orcldb01'
CRS-2677: Stop of 'ora.mdnsd' on 'orcldb01' succeeded
CRS-2677: Stop of 'ora.asm' on 'orcldb01' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'orcldb01'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'orcldb01' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'orcldb01'
CRS-2673: Attempting to stop 'ora.evmd' on 'orcldb01'
CRS-2677: Stop of 'ora.ctssd' on 'orcldb01' succeeded
CRS-2677: Stop of 'ora.evmd' on 'orcldb01' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'orcldb01'
CRS-2677: Stop of 'ora.cssd' on 'orcldb01' succeeded
CRS-2673: Attempting to stop 'ora.driver.afd' on 'orcldb01'
CRS-2673: Attempting to stop 'ora.gipcd' on 'orcldb01'
CRS-2677: Stop of 'ora.driver.afd' on 'orcldb01' succeeded
CRS-2677: Stop of 'ora.gipcd' on 'orcldb01' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'orcldb01' has completed
CRS-4133: Oracle High Availability Services has been stopped.
2018/10/12 13:15:39 CLSRSC-594: Executing installation step 17 of 19: 'StartCluster'.
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'orcldb01'
CRS-2672: Attempting to start 'ora.evmd' on 'orcldb01'
CRS-2676: Start of 'ora.mdnsd' on 'orcldb01' succeeded
CRS-2676: Start of 'ora.evmd' on 'orcldb01' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'orcldb01'
CRS-2676: Start of 'ora.gpnpd' on 'orcldb01' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'orcldb01'
CRS-2676: Start of 'ora.gipcd' on 'orcldb01' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'orcldb01'
CRS-2676: Start of 'ora.cssdmonitor' on 'orcldb01' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'orcldb01'
CRS-2672: Attempting to start 'ora.diskmon' on 'orcldb01'
CRS-2676: Start of 'ora.diskmon' on 'orcldb01' succeeded
CRS-2676: Start of 'ora.cssd' on 'orcldb01' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'orcldb01'
CRS-2672: Attempting to start 'ora.ctssd' on 'orcldb01'
CRS-2676: Start of 'ora.ctssd' on 'orcldb01' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'orcldb01' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'orcldb01'
CRS-2676: Start of 'ora.asm' on 'orcldb01' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'orcldb01'
CRS-2676: Start of 'ora.storage' on 'orcldb01' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'orcldb01'
CRS-2676: Start of 'ora.crf' on 'orcldb01' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'orcldb01'
CRS-2676: Start of 'ora.crsd' on 'orcldb01' succeeded
CRS-6023: Starting Oracle Cluster Ready Services-managed resources
CRS-2664: Resource 'ora.ORCL_FRA.dg' is already running on 'orcldb01'
CRS-6017: Processing resource auto-start for servers: orcldb01
CRS-2672: Attempting to start 'ora.scan2.vip' on 'orcldb01'
CRS-2672: Attempting to start 'ora.scan1.vip' on 'orcldb01'
CRS-2672: Attempting to start 'ora.orcldb01.vip' on 'orcldb01'
CRS-2672: Attempting to start 'ora.cvu' on 'orcldb01'
CRS-2672: Attempting to start 'ora.ons' on 'orcldb01'
CRS-2672: Attempting to start 'ora.qosmserver' on 'orcldb01'
CRS-2676: Start of 'ora.cvu' on 'orcldb01' succeeded
CRS-2676: Start of 'ora.scan2.vip' on 'orcldb01' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN2.lsnr' on 'orcldb01'
CRS-2676: Start of 'ora.scan1.vip' on 'orcldb01' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on 'orcldb01'
CRS-2676: Start of 'ora.orcldb01.vip' on 'orcldb01' succeeded
CRS-2672: Attempting to start 'ora.LISTENER.lsnr' on 'orcldb01'
CRS-2676: Start of 'ora.ons' on 'orcldb01' succeeded
CRS-2676: Start of 'ora.LISTENER_SCAN2.lsnr' on 'orcldb01' succeeded
CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'orcldb01' succeeded
CRS-2676: Start of 'ora.LISTENER.lsnr' on 'orcldb01' succeeded
CRS-2676: Start of 'ora.qosmserver' on 'orcldb01' succeeded
CRS-6016: Resource auto-start has completed for server orcldb01
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2018/10/12 13:17:13 CLSRSC-343: Successfully started Oracle Clusterware stack
2018/10/12 13:17:13 CLSRSC-594: Executing installation step 18 of 19: 'ConfigNode'.
2018/10/12 13:17:16 CLSRSC-594: Executing installation step 19 of 19: 'PostConfig'.
2018/10/12 13:17:23 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

[root@orcldb01 12.2.0.1]# pwd
/u01/app/grid/12.2.0.1
[root@orcldb01 12.2.0.1]# cd bin/
[root@orcldb01 bin]# ./crs
crsboot_diags.sh    crsctl.bin          crs_getperm         crs_profile.bin     crs_relocate.bin    crs_setperm.bin     crs_stat.bin        crs_unregister    
crscdpd             crsd                crs_getperm.bin     crs_register        crsrename           crs_start           crs_stop            crs_unregister.bin
crscdpd.bin         crsd.bin            crskeytoolctl       crs_register.bin    crsrename.pl        crs_start.bin       crs_stop.bin        crswrapexece.pl  
crsctl              crsdiag.pl          crs_profile         crs_relocate        crs_setperm         crs_stat            crstmpl.scr      


[root@orcldb01 bin]# ./crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details    
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       orcldb01                STABLE
ora.ORCL_FRA.dg
               ONLINE  ONLINE       orcldb01                STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       orcldb01                STABLE
ora.net1.network
               ONLINE  ONLINE       orcldb01                STABLE
ora.ons
               ONLINE  ONLINE       orcldb01                STABLE
ora.proxy_advm
               OFFLINE OFFLINE      orcldb01                STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       orcldb01                STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       orcldb01                STABLE
ora.MGMTLSNR
      1        OFFLINE OFFLINE                               STABLE
ora.asm
      1        ONLINE  ONLINE       orcldb01                Started,STABLE
      2        OFFLINE OFFLINE                               STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cvu
      1        ONLINE  ONLINE       orcldb01                STABLE
ora.orcldb01.vip
      1        ONLINE  ONLINE       orcldb01                STABLE
ora.qosmserver
      1        ONLINE  ONLINE       orcldb01                STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       orcldb01                STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       orcldb01                STABLE
--------------------------------------------------------------------------------

[root@orcldb01 bin]# pwd
/u01/app/grid/12.2.0.1/bin
[root@orcldb01 bin]# ORACLE_BASE=/u01/app/oracle
[root@orcldb01 bin]# export ORACLE_BASE
[root@orcldb01 bin]#
[root@orcldb01 bin]# GRID_HOME=/u01/app/grid/12.2.0.1
[root@orcldb01 bin]# export GRID_HOME
[root@orcldb01 bin]#
[root@orcldb01 bin]# ORACLE_HOME=$ORACLE_BASE/product/12.2.0.1
[root@orcldb01 bin]# export ORACLE_HOME
[root@orcldb01 bin]#
[root@orcldb01 bin]# PATH=$ORACLE_HOME/bin:$PATH:$GRID_HOME/bin:/usr/sbin
[root@orcldb01 bin]# export PATH
[root@orcldb01 bin]#
[root@orcldb01 bin]# LD_LIBRARY_PATH=$ORACLE_HOME/lib:$ORACLE_HOME/jdbc/lib:$LD_LIBRARY_PATH
[root@orcldb01 bin]# export LD_LIBRARY_PATH
[root@orcldb01 bin]#
[root@orcldb01 bin]# ORACLE_HOME=$GRID_HOME
[root@orcldb01 bin]# export ORACLE_HOME
[root@orcldb01 bin]#
[root@orcldb01 bin]# cd $GRID_HOME/bin
[root@orcldb01 bin]# ./asmcmd
ASMCMD> afd_lsdsk
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
DENHPE20450_2_1F            ENABLED   /dev/mapper/DENHPE20450_2_1fp1
DENHPE20450_2_20            ENABLED   /dev/mapper/DENHPE20450_2_20p1
DENHPE20450_2_21            ENABLED   /dev/mapper/DENHPE20450_2_21p1
DENHPE20450_2_22            ENABLED   /dev/mapper/DENHPE20450_2_22p1
DENHPE20450_2_23            ENABLED   /dev/mapper/DENHPE20450_2_23p1
DENHPE20450_2_24            ENABLED   /dev/mapper/DENHPE20450_2_24p1
DENHPE20450_2_25            ENABLED   /dev/mapper/DENHPE20450_2_25p1
DENHPE20450_2_26            ENABLED   /dev/mapper/DENHPE20450_2_26p1
DENHPE20450_2_27            ENABLED   /dev/mapper/DENHPE20450_2_27p1
DENHPE20450_2_28            ENABLED   /dev/mapper/DENHPE20450_2_28p1
DENHPE20450_2_29            ENABLED   /dev/mapper/DENHPE20450_2_29p1
DENHPE20450_2_2A            ENABLED   /dev/mapper/DENHPE20450_2_2ap1
DENHPE20450_2_2B            ENABLED   /dev/mapper/DENHPE20450_2_2bp1
DENHPE20450_2_2C            ENABLED   /dev/mapper/DENHPE20450_2_2cp1
DENHPE20450_2_2D            ENABLED   /dev/mapper/DENHPE20450_2_2dp1
DENHPE20450_2_2E            ENABLED   /dev/mapper/DENHPE20450_2_2ep1
DENHPE20450_2_2F            ENABLED   /dev/mapper/DENHPE20450_2_2fp1
DENHPE20450_2_4A            ENABLED   /dev/mapper/DENHPE20450_2_4ap1
DENHPE20450_2_4B            ENABLED   /dev/mapper/DENHPE20450_2_4bp1
DENHPE20450_2_4C            ENABLED   /dev/mapper/DENHPE20450_2_4cp1
DENHPE20450_2_4D            ENABLED   /dev/mapper/DENHPE20450_2_4dp1
DENHPE20450_2_4E            ENABLED   /dev/mapper/DENHPE20450_2_4ep1
DENHPE20450_2_4F            ENABLED   /dev/mapper/DENHPE20450_2_4fp1
DENHPE20450_2_50            ENABLED   /dev/mapper/DENHPE20450_2_50p1
DENHPE20450_2_51            ENABLED   /dev/mapper/DENHPE20450_2_51p1
DENHPE20450_2_52            ENABLED   /dev/mapper/DENHPE20450_2_52p1
DENHPE20450_2_53            ENABLED   /dev/mapper/DENHPE20450_2_53p1
DENHPE20450_2_54            ENABLED   /dev/mapper/DENHPE20450_2_54p1
DENHPE20450_2_55            ENABLED   /dev/mapper/DENHPE20450_2_55p1
DENHPE20450_2_56            ENABLED   /dev/mapper/DENHPE20450_2_56p1
DENHPE20450_2_57            ENABLED   /dev/mapper/DENHPE20450_2_57p1
DENHPE20450_2_58            ENABLED   /dev/mapper/DENHPE20450_2_58p1
DENHPE20450_2_59            ENABLED   /dev/mapper/DENHPE20450_2_59p1
DENHPE20450_2_5A            ENABLED   /dev/mapper/DENHPE20450_2_5ap1
DENHPE20450_2_5B            ENABLED   /dev/mapper/DENHPE20450_2_5bp1
DENHPE20450_2_5C            ENABLED   /dev/mapper/DENHPE20450_2_5cp1
DENHPE20450_2_5D            ENABLED   /dev/mapper/DENHPE20450_2_5dp1
DENHPE20450_2_5E            ENABLED   /dev/mapper/DENHPE20450_2_5ep1
DENHPE20450_2_61            ENABLED   /dev/mapper/DENHPE20450_2_61p1
DENHPE20450_2_62            ENABLED   /dev/mapper/DENHPE20450_2_62p1
DENHPE20450_2_63            ENABLED   /dev/mapper/DENHPE20450_2_63p1
DENHPE20450_2_64            ENABLED   /dev/mapper/DENHPE20450_2_64p1
ASMCMD> exit
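As an aside (not part of the original session), asmcmd can also report whether the AFD driver is loaded and which discovery string it is using, assuming the stock 12.2 AFD subcommands:

asmcmd afd_state
asmcmd afd_dsget
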
[root@orcldb01 bin]# crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details    
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       orcldb01                STABLE
ora.ORCL_FRA.dg
               ONLINE  ONLINE       orcldb01                STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       orcldb01                STABLE
ora.net1.network
               ONLINE  ONLINE       orcldb01                STABLE
ora.ons
               ONLINE  ONLINE       orcldb01                STABLE
ora.proxy_advm
               OFFLINE OFFLINE      orcldb01                STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       orcldb01                STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       orcldb01                STABLE
ora.MGMTLSNR
      1        OFFLINE OFFLINE                               STABLE
ora.asm
      1        ONLINE  ONLINE       orcldb01                Started,STABLE
      2        OFFLINE OFFLINE                               STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cvu
      1        ONLINE  ONLINE       orcldb01                STABLE
ora.orcldb01.vip
      1        ONLINE  ONLINE       orcldb01                STABLE
ora.qosmserver
      1        ONLINE  ONLINE       orcldb01                STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       orcldb01                STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       orcldb01                STABLE
--------------------------------------------------------------------------------
[root@orcldb01 bin]#  chown -R oracle:dba /u01/app/oracle/diag
[root@orcldb01 bin]# id
uid=0(root) gid=0(root) groups=0(root)
[root@orcldb01 bin]# /u01/app/grid/12.2.0.1/install/root_orcldb02_2018-10-12_13-44-22-375468355.log^C
[root@orcldb01 bin]#
[root@orcldb01 bin]#
[root@orcldb01 bin]# /u01/app/grid/12.2.0.1/root.sh
Check /u01/app/grid/12.2.0.1/install/root_orcldb01_2018-10-12_13-45-27-220385416.log for the output of root script
[root@orcldb01 bin]# cat  /u01/app/grid/12.2.0.1/install/root_orcldb01_2018-10-12_13-45-27-220385416.log
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/grid/12.2.0.1
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/app/grid/12.2.0.1/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/oracle/crsdata/orcldb01/crsconfig/rootcrs_orcldb01_2018-10-12_01-45-27PM.log
2018/10/12 13:45:30 CLSRSC-594: Executing installation step 1 of 19: 'SetupTFA'.
2018/10/12 13:45:31 CLSRSC-4001: Installing Oracle Trace File Analyzer (TFA) Collector.
2018/10/12 13:45:31 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
2018/10/12 13:45:31 CLSRSC-594: Executing installation step 2 of 19: 'ValidateEnv'.
2018/10/12 13:45:33 CLSRSC-456: The Oracle Grid Infrastructure has already been configured.
[root@orcldb01 bin]# packet_write_wait: Connection to 10.55.239.64 port 22: Broken pipe
[rmattewada@intermediatehost ~]$ timed out waiting for input: auto-logout




bash-4.1$ /u01/app/grid/12.2.0.1/gridSetup.sh -executeConfigTools -responseFile /u01/app/grid/12.2.0.1/install/response/grid12c.rsp -silent
Launching Oracle Grid Infrastructure Setup Wizard...

You can find the logs of this session at:
/u01/app/oracle/oraInventory/logs/GridSetupActions2018-10-12_03-06-45PM

SEVERE:Remote 'UpdateNodeList' failed on nodes: 'orcldb01'. Refer to '/u01/app/oracle/oraInventory/logs/UpdateNodeList2018-10-12_03-06-45PM.log' for details.
It is recommended that the following command needs to be manually run on the failed nodes:
 /u01/app/grid/12.2.0.1/oui/bin/runInstaller -updateNodeList -setCustomNodelist -noClusterEnabled ORACLE_HOME=/u01/app/grid/12.2.0.1 "CLUSTER_NODES={orcldb01,orcldb02}" "NODES_TO_SET={orcldb01,orcldb02}" CRS=true  "INVENTORY_LOCATION=/u01/app/oracle/oraInventory"  LOCAL_NODE=<node on which command is to be run>.
Please refer 'UpdateNodeList' logs under central inventory of remote nodes where failure occurred for more details.
[WARNING] [INS-10016] Installer failed to update the cluster related details, for this Oracle home, in the inventory on all/some of the nodes
   ACTION: You may chose to retry the operation, without continuing further. Alternatively you can refer to information given below and manually execute the mentioned commands on the failed nodes now or later to update the inventory.
*MORE DETAILS*

Execute the following command on node(s) [orcldb01]:
/u01/app/grid/12.2.0.1/oui/bin/runInstaller -jreLoc /u01/app/grid/12.2.0.1/jdk/jre -paramFile /u01/app/grid/12.2.0.1/oui/clusterparam.ini -silent -ignoreSysPrereqs -updateNodeList -bigCluster ORACLE_HOME=/u01/app/grid/12.2.0.1 CLUSTER_NODES=<Local Node> "NODES_TO_SET={orcldb01,orcldb02}" -invPtrLoc "/u01/app/grid/12.2.0.1/oraInst.loc" -local CRS=true  -doNotUpdateNodeList




^Cbash-4.1$ /u01/app/grid/12.2.0.1/gridSetup.sh -executeConfigTools -responseFile /u01/app/grid/12.2.0.1/install/response/grid12c.rsp -silent
Launching Oracle Grid Infrastructure Setup Wizard...

You can find the logs of this session at:
/u01/app/oracle/oraInventory/logs/GridSetupActions2018-10-12_03-11-22PM

SEVERE:Remote 'UpdateNodeList' failed on nodes: 'orcldb01'. Refer to '/u01/app/oracle/oraInventory/logs/UpdateNodeList2018-10-12_03-11-22PM.log' for details.
It is recommended that the following command needs to be manually run on the failed nodes:
 /u01/app/grid/12.2.0.1/oui/bin/runInstaller -updateNodeList -setCustomNodelist -noClusterEnabled ORACLE_HOME=/u01/app/grid/12.2.0.1 "CLUSTER_NODES={orcldb01,orcldb02}" "NODES_TO_SET={orcldb01,orcldb02}" CRS=true  "INVENTORY_LOCATION=/u01/app/oracle/oraInventory"  LOCAL_NODE=<node on which command is to be run>.
Please refer 'UpdateNodeList' logs under central inventory of remote nodes where failure occurred for more details.
[WARNING] [INS-10016] Installer failed to update the cluster related details, for this Oracle home, in the inventory on all/some of the nodes
   ACTION: You may chose to retry the operation, without continuing further. Alternatively you can refer to information given below and manually execute the mentioned commands on the failed nodes now or later to update the inventory.
*MORE DETAILS*

Execute the following command on node(s) [orcldb01]:
/u01/app/grid/12.2.0.1/oui/bin/runInstaller -jreLoc /u01/app/grid/12.2.0.1/jdk/jre -paramFile /u01/app/grid/12.2.0.1/oui/clusterparam.ini -silent -ignoreSysPrereqs -updateNodeList -bigCluster ORACLE_HOME=/u01/app/grid/12.2.0.1 CLUSTER_NODES=<Local Node> "NODES_TO_SET={orcldb01,orcldb02}" -invPtrLoc "/u01/app/grid/12.2.0.1/oraInst.loc" -local CRS=true  -doNotUpdateNodeList

[WARNING] [INS-43080] Some of the configuration assistants failed, were cancelled or skipped.
   ACTION: Refer to the logs or contact Oracle Support Services.
bash-4.1$
bash-4.1$
bash-4.1$ /u01/app/grid/12.2.0.1/gridSetup.sh -executeConfigTools -responseFile /u01/app/grid/12.2.0.1/install/response/grid12c.rsp -silent
Launching Oracle Grid Infrastructure Setup Wizard...

^Cbash-4.1$
bash-4.1$
bash-4.1$
bash-4.1$
bash-4.1$ /u01/app/grid/12.2.0.1/oui/bin/runInstaller -updateNodeList -setCustomNodelist -noClusterEnabled ORACLE_HOME=/u01/app/grid/12.2.0.1 "CLUSTER_NODES={orcldb01}" "NODES_TO_SET={orcldb01,orcldb02}" CRS=true  "INVENTORY_LOCATION=/u01/app/oracle/oraInventory"
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 20479 MB    Passed
bash-4.1$ /u01/app/grid/12.2.0.1/oui/bin/runInstaller -updateNodeList -setCustomNodelist -noClusterEnabled ORACLE_HOME=/u01/app/grid/12.2.0.1 "CLUSTER_NODES={orcldb01}" "NODES_TO_SET={orcldb01,orcldb02}" CRS=true  "INVENTORY_LOCATION=/u01/app/oracle/oraInventory"
  /u01/app/grid/12.2.0.1/oui/bin/runInstaller -updateNodeList -setCustomNodelist -noClusterEnabled ORACLE_HOME=/u01/app/grid/12.2.0.1 "CLUSTER_NODES={orcldb01,orcldb02}" "NODES_TO_SET={orcldb01,orcldb02}" CRS=true  "INVENTORY_LOCATION=/u01/app/oracle/oraInventory"
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 20479 MB    Passed

bash-4.1$   /u01/app/grid/12.2.0.1/oui/bin/runInstaller -updateNodeList -setCustomNodelist -noClusterEnabled ORACLE_HOME=/u01/app/grid/12.2.0.1 "CLUSTER_NODES={orcldb01,orcldb02}" "NODES_TO_SET={orcldb01,orcldb02}" CRS=true  "INVENTORY_LOCATION=/u01/app/oracle/oraInventory"
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 20479 MB    Passed
bash-4.1$
bash-4.1$ /u01/app/grid/12.2.0.1/oui/bin/runInstaller -jreLoc /u01/app/grid/12.2.0.1/jdk/jre -paramFile /u01/app/grid/12.2.0.1/oui/clusterparam.ini -silent -ignoreSysPrereqs -updateNodeList -bigCluster ORACLE_HOME=/u01/app/grid/12.2.0.1 CLUSTER_NODES={orcldb01,orcldb02}  "NODES_TO_SET={orcldb01,orcldb02}" -invPtrLoc "/u01/app/grid/12.2.0.1/oraInst.loc" -local CRS=true  -doNotUpdateNodeList
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 20479 MB    Passed
bash-4.1$
bash-4.1$
bash-4.1$ id
uid=969(oracle) gid=533(dba) groups=533(dba),54321(oinstall)
bash-4.1$ export ORACLE_HOME=/u01/app/oracle/product/12.2.0.1
bash-4.1$
bash-4.1$
bash-4.1$ === RDBMS NOW
bash: ===: command not found
bash-4.1$
bash-4.1$
bash-4.1$ ORACLE_HOME/perl/bin/perl $ORACLE_HOME/clone/bin/clone.pl ORACLE_BASE="/u01/app/oracle/" ORACLE_HOME="/u01/app/oracle/product/12.2.0.1" ORACLE_HOME_NAME=OraDB122
bash: ORACLE_HOME/perl/bin/perl: No such file or directory
bash-4.1$
bash-4.1$
bash-4.1$
bash-4.1$
bash-4.1$ $ORACLE_HOME/perl/bin/perl $ORACLE_HOME/clone/bin/clone.pl ORACLE_BASE="/u01/app/oracle/" ORACLE_HOME="/u01/app/oracle/product/12.2.0.1" ORACLE_HOME_NAME=OraDB122
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 500 MB.   Actual 509669 MB    Passed
Checking swap space: must be greater than 500 MB.   Actual 20479 MB    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2018-10-12_03-21-05PM. Please wait ...[WARNING] [INS-32008] Oracle base location cant be same as the user home directory.
   CAUSE: The specified Oracle base is same as the user home directory.
   ACTION: Provide an Oracle base location other than the user home directory.
[WARNING] [INS-32056] The specified Oracle Base contains the existing Central Inventory location: /u01/app/oracle/oraInventory.
   ACTION: Oracle recommends that the Central Inventory location is outside the Oracle Base directory. Specify a different location for the Oracle Base.
You can find the log of this install session at:
 /u01/app/oracle/oraInventory/logs/cloneActions2018-10-12_03-21-05PM.log
..................................................   5% Done.
..................................................   10% Done.
..................................................   15% Done.
..................................................   20% Done.
..................................................   25% Done.
..................................................   30% Done.
..................................................   35% Done.
..................................................   40% Done.
..................................................   45% Done.
..................................................   50% Done.
..................................................   55% Done.
..................................................   60% Done.
..................................................   65% Done.
..................................................   70% Done.
..................................................   75% Done.
..................................................   80% Done.
..................................................   85% Done.
..........
Copy files in progress.

Copy files successful.

Link binaries in progress.

Link binaries successful.

Setup files in progress.

Setup files successful.

Setup Inventory in progress.

Setup Inventory successful.

Finish Setup successful.
The cloning of OraDB122 was successful.
Please check '/u01/app/oracle/oraInventory/logs/cloneActions2018-10-12_03-21-05PM.log' for more details.

Setup Oracle Base in progress.

Setup Oracle Base successful.
..................................................   95% Done.

As a root user, execute the following script(s):
        1. /u01/app/oracle/product/12.2.0.1/root.sh



..................................................   100% Done.
bash-4.1$
bash-4.1$ sudo bash
[root@orcldb01 oracle]# id
uid=0(root) gid=0(root) groups=0(root)
[root@orcldb01 oracle]#
[root@orcldb01 oracle]# /u01/app/oracle/product/12.2.0.1/root.sh
Check /u01/app/oracle/product/12.2.0.1/install/root_orcldb01_2018-10-12_15-22-27-192745681.log for the output of root script
[root@orcldb01 oracle]# cat /u01/app/oracle/product/12.2.0.1/install/root_orcldb01_2018-10-12_15-22-27-192745681.log
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/product/12.2.0.1
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
[root@orcldb01 oracle]# srvctl config mgmtdb
bash: srvctl: command not found
[root@orcldb01 oracle]# pwd
/u01/app/oracle
[root@orcldb01 oracle]# id
uid=0(root) gid=0(root) groups=0(root)
[root@orcldb01 oracle]#



bash-4.1$ id
uid=969(oracle) gid=533(dba) groups=533(dba),54321(oinstall)

bash-4.1$
bash-4.1$ /u01/app/oracle/product/12.2.0.1/oui/bin/runInstaller -updateNodeList -setCustomNodelist -noClusterEnabled ORACLE_HOME=/u01/app/oracle/product/12.2.0.1 "CLUSTER_NODES={orcldb01,orcldb02}" "NODES_TO_SET={orcldb01,orcldb02}"  "INVENTORY_LOCATION=/u01/app/oracle/oraInventory"
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 20479 MB    Passed
bash-4.1$
bash-4.1$ $GRID_HOME/bin/srvctl modify mgmtlsnr -endpoints TCP:2008

bash-4.1$
bash-4.1$ $GRID_HOME/bin/srvctl modify listener -l ASMNET1LSNR_ASM  -endpoints TCP:2001
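The two endpoint changes above can be confirmed afterwards with srvctl config; this is not shown in the original session, and the syntax below assumes the standard 12.2 options:

$GRID_HOME/bin/srvctl config mgmtlsnr
$GRID_HOME/bin/srvctl config listener -l ASMNET1LSNR_ASM
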
bash-4.1$ id
uid=969(oracle) gid=533(dba) groups=533(dba),54321(oinstall)
bash-4.1$ pwd
/u01/app/oracle
bash-4.1$ . .profile
oracle   23867     1  0 13:16 ?        00:00:00 asm_pmon_+ASM1
bash-4.1$ env|grep ORA
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/product/12.2.0.1
bash-4.1$ export ORACLE_SID=+ASM1



bash-4.1$ export ORACLE_SID=+ASM1 ; export ORACLE_HOME=/u01/app/grid/12.2.0.1 ; sqlplus / as sysasm

SQL*Plus: Release 12.2.0.1.0 Production on Fri Oct 12 15:34:50 2018

Copyright (c) 1982, 2016, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> select instance_number, instance_name, host_name  from v$instance;

INSTANCE_NUMBER INSTANCE_NAME   HOST_NAME
--------------- --------------- -------------------------
              1 +ASM1           orcldb01


SQL> Disconnected from Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
bash-4.1$
bash-4.1$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details    
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       orcldb01                STABLE
               ONLINE  ONLINE       orcldb02                STABLE
ora.ORCL_FRA.dg
               ONLINE  ONLINE       orcldb01                STABLE
               ONLINE  ONLINE       orcldb02                STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       orcldb01                STABLE
               ONLINE  ONLINE       orcldb02                STABLE
ora.net1.network
               ONLINE  ONLINE       orcldb01                STABLE
               ONLINE  ONLINE       orcldb02                STABLE
ora.ons
               ONLINE  ONLINE       orcldb01                STABLE
               ONLINE  ONLINE       orcldb02                STABLE
ora.proxy_advm
               OFFLINE OFFLINE      orcldb01                STABLE
               OFFLINE OFFLINE      orcldb02                STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       orcldb02                STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       orcldb01                STABLE
ora.MGMTLSNR
      1        OFFLINE OFFLINE                               STABLE
ora.asm
      1        ONLINE  ONLINE       orcldb01                Started,STABLE
      2        ONLINE  ONLINE       orcldb02                Started,STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cvu
      1        ONLINE  ONLINE       orcldb01                STABLE
ora.orcldb01.vip
      1        ONLINE  ONLINE       orcldb01                STABLE
ora.orcldb02.vip
      1        ONLINE  ONLINE       orcldb02                STABLE
ora.mgmtdb
      1        OFFLINE OFFLINE                               STABLE
ora.qosmserver
      1        ONLINE  ONLINE       orcldb01                STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       orcldb02                STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       orcldb01                STABLE
--------------------------------------------------------------------------------
bash-4.1$
bash-4.1$ cat /var/tmp/ASM/Capadd_10102018/createdgs.sql
CREATE DISKGROUP ORCL_ARCHIVE EXTERNAL REDUNDANCY DISK
'AFD:DENHPE20450_2_27',
'AFD:DENHPE20450_2_28',
'AFD:DENHPE20450_2_2a'
attribute 'compatible.asm' = '12.2.0.1.0', 'compatible.rdbms' = '12.2.0.1.0', 'au_size' = '64M';

CREATE DISKGROUP ORCL_DATA EXTERNAL REDUNDANCY DISK
'AFD:DENHPE20450_2_1f',
'AFD:DENHPE20450_2_20',
'AFD:DENHPE20450_2_21',
'AFD:DENHPE20450_2_22',
'AFD:DENHPE20450_2_23',
'AFD:DENHPE20450_2_24',
'AFD:DENHPE20450_2_25',
'AFD:DENHPE20450_2_26',
'AFD:DENHPE20450_2_58',
'AFD:DENHPE20450_2_59',
'AFD:DENHPE20450_2_5a',
'AFD:DENHPE20450_2_5b',
'AFD:DENHPE20450_2_5c',
'AFD:DENHPE20450_2_5d',
'AFD:DENHPE20450_2_5e'
attribute 'compatible.asm' = '12.2.0.1.0', 'compatible.rdbms' = '12.2.0.1.0', 'au_size' = '64M';

/* provided in the grid.rsp file - it's already mounted
CREATE DISKGROUP ORCL_FRA EXTERNAL REDUNDANCY DISK
'AFD:DENHPE20450_2_29'
attribute 'compatible.asm' = '12.2.0.1.0', 'compatible.rdbms' = '12.2.0.1.0', 'au_size' = '64M';
*/

CREATE DISKGROUP ORCL_REDO1A EXTERNAL REDUNDANCY DISK
'AFD:DENHPE20450_2_2c'
attribute 'compatible.asm' = '12.2.0.1.0', 'compatible.rdbms' = '12.2.0.1.0', 'au_size' = '64M';

CREATE DISKGROUP ORCL_REDO1B EXTERNAL REDUNDANCY DISK
'AFD:DENHPE20450_2_2d'
attribute 'compatible.asm' = '12.2.0.1.0', 'compatible.rdbms' = '12.2.0.1.0', 'au_size' = '64M';

CREATE DISKGROUP ORCL_REDO2A EXTERNAL REDUNDANCY DISK
'AFD:DENHPE20450_2_2e'
attribute 'compatible.asm' = '12.2.0.1.0', 'compatible.rdbms' = '12.2.0.1.0', 'au_size' = '64M';

CREATE DISKGROUP ORCL_REDO2B EXTERNAL REDUNDANCY DISK
'AFD:DENHPE20450_2_2f'
attribute 'compatible.asm' = '12.2.0.1.0', 'compatible.rdbms' = '12.2.0.1.0', 'au_size' = '64M';

CREATE DISKGROUP OGG EXTERNAL REDUNDANCY DISK
'AFD:DENHPE20450_2_2b'
attribute 'compatible.asm' = '12.2.0.1.0', 'compatible.rdbms' = '12.2.0.1.0', 'au_size' = '64M';


CREATE DISKGROUP HAIP_DATA EXTERNAL REDUNDANCY DISK
'AFD:DENHPE20450_2_4a',
'AFD:DENHPE20450_2_4b',
'AFD:DENHPE20450_2_4c',
'AFD:DENHPE20450_2_4d',
'AFD:DENHPE20450_2_4e',
'AFD:DENHPE20450_2_4f',
'AFD:DENHPE20450_2_50',
'AFD:DENHPE20450_2_51',
'AFD:DENHPE20450_2_52',
'AFD:DENHPE20450_2_53',
'AFD:DENHPE20450_2_62',
'AFD:DENHPE20450_2_63'
attribute 'compatible.asm' = '12.2.0.1.0', 'compatible.rdbms' = '12.2.0.1.0', 'au_size' = '64M';


CREATE DISKGROUP HAIP_FRA EXTERNAL REDUNDANCY DISK
'AFD:DENHPE20450_2_64'
attribute 'compatible.asm' = '12.2.0.1.0', 'compatible.rdbms' = '12.2.0.1.0', 'au_size' = '64M';

CREATE DISKGROUP HAIP_ARCHIVE EXTERNAL REDUNDANCY DISK
'AFD:DENHPE20450_2_61'
attribute 'compatible.asm' = '12.2.0.1.0', 'compatible.rdbms' = '12.2.0.1.0', 'au_size' = '64M';

CREATE DISKGROUP HAIP_REDO1A EXTERNAL REDUNDANCY DISK
'AFD:DENHPE20450_2_54'
attribute 'compatible.asm' = '12.2.0.1.0', 'compatible.rdbms' = '12.2.0.1.0', 'au_size' = '64M';


CREATE DISKGROUP HAIP_REDO2A EXTERNAL REDUNDANCY DISK
'AFD:DENHPE20450_2_55'
attribute 'compatible.asm' = '12.2.0.1.0', 'compatible.rdbms' = '12.2.0.1.0', 'au_size' = '64M';


CREATE DISKGROUP HAIP_REDO2B EXTERNAL REDUNDANCY DISK
'AFD:DENHPE20450_2_56'
attribute 'compatible.asm' = '12.2.0.1.0', 'compatible.rdbms' = '12.2.0.1.0', 'au_size' = '64M';


CREATE DISKGROUP HAIP_REDO1B EXTERNAL REDUNDANCY DISK
'AFD:DENHPE20450_2_57'
attribute 'compatible.asm' = '12.2.0.1.0', 'compatible.rdbms' = '12.2.0.1.0', 'au_size' = '64M';
bash-4.1$
bash-4.1$
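Before running createdgs.sql it can be worth checking, from the ASM instance, which AFD-labeled disks are still unallocated; a minimal pre-check sketch (not part of the original session), assuming the standard v$asm_disk columns:

-- disks not yet assigned to any disk group
select path, header_status, os_mb
  from v$asm_disk
 where group_number = 0
 order by path;
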
asm parameters (to set in the ASM spfile):

alter system set "_disable_image_check"=TRUE SID='*' scope=spfile;
alter system set "_disable_system_state"=0 SID='*' scope=spfile;
alter system set asm_diskstring='AFD:*' SID='*' scope=spfile;
alter system set asm_power_limit=1 SID='*' scope=spfile;
alter system set audit_syslog_level='LOCAL1.WARNING' SID='*' scope=spfile;
alter system set large_pool_size=104857600 SID='*' scope=spfile;
alter system set local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=HOST-VIP with FQDN)(PORT=2201))))' SID='+ASM1' scope=spfile;
<<ADD LOCAL LISTENER ENTRY FOR ALL ASM INSTANCES>>
alter system set processes=1000 SID='*' scope=spfile;
alter system set remote_login_passwordfile='EXCLUSIVE' SID='*' scope=spfile;
alter system set shared_pool_reserved_size=209715200 SID='*' scope=spfile;
alter system set shared_pool_size=1073741824 SID='*' scope=spfile;



bash-4.1$ export ORACLE_SID=+ASM1 ; export ORACLE_HOME=/u01/app/grid/12.2.0.1 ; sqlplus / as sysasm

SQL*Plus: Release 12.2.0.1.0 Production on Fri Oct 12 16:22:08 2018

Copyright (c) 1982, 2016, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> alter system set local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=orcldb01-vip.mattew.com)(PORT=2201))))' SID='+ASM1'  scope=spfile;

System altered.

SQL> Disconnected from Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
bash-4.1$
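The parameter list noted earlier calls for a local_listener entry per ASM instance, but only the +ASM1 statement is shown in this session. The corresponding entry for the second instance would presumably look like the following (hypothetical, mirroring the +ASM1 entry; the instance name and VIP host are assumptions):

-- hypothetical second-instance entry, run from a +ASM2 session on orcldb02
alter system set local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=orcldb02-vip.mattew.com)(PORT=2201))))' SID='+ASM2' scope=spfile;
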
bash-4.1$  srvctl stop asm -node orcldb01 -stopoption IMMEDIATE -force
bash-4.1$ srvctl start asm -node orcldb01
PRCR-1013 : Failed to start resource ora.asm
PRCR-1064 : Failed to start resource ora.asm on node orcldb01
CRS-0184 : Cannot communicate with the CRS daemon.
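When srvctl reports CRS-0184 like this after the forced ASM stop, a quick way to see which clusterware daemons are actually still up on the local node (before digging into the resource listings below) is crsctl itself; an aside, assuming the standard checks:

crsctl check crs
crsctl check cluster -all
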


bash-4.1$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details    
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       orcldb01                STABLE
               ONLINE  ONLINE       orcldb02                STABLE
ora.ORCL_FRA.dg
               OFFLINE OFFLINE      orcldb01                STABLE
               ONLINE  ONLINE       orcldb02                STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       orcldb01                STABLE
               ONLINE  ONLINE       orcldb02                STABLE
ora.net1.network
               ONLINE  ONLINE       orcldb01                STABLE
               ONLINE  ONLINE       orcldb02                STABLE
ora.ons
               ONLINE  ONLINE       orcldb01                STABLE
               ONLINE  ONLINE       orcldb02                STABLE
ora.proxy_advm
               OFFLINE OFFLINE      orcldb01                STABLE
               OFFLINE OFFLINE      orcldb02                STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       orcldb02                STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       orcldb01                STABLE
ora.MGMTLSNR
      1        OFFLINE OFFLINE                               STABLE
ora.asm
      1        OFFLINE OFFLINE                               Instance Shutdown,ST
                                                             ABLE
      2        ONLINE  ONLINE       orcldb02                Started,STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cvu
      1        ONLINE  ONLINE       orcldb01                STABLE
ora.orcldb01.vip
      1        ONLINE  ONLINE       orcldb01                STABLE
ora.orcldb02.vip
      1        ONLINE  ONLINE       orcldb02                STABLE
ora.mgmtdb
      1        OFFLINE OFFLINE                               STABLE
ora.qosmserver
      1        ONLINE  ONLINE       orcldb01                STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       orcldb02                STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       orcldb01                STABLE
--------------------------------------------------------------------------------





bash-4.1$ crsctl stat res -t -init
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details    
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
      1        ONLINE  ONLINE       orcldb01                Started,STABLE
ora.cluster_interconnect.haip
      1        ONLINE  ONLINE       orcldb01                STABLE
ora.crf
      1        ONLINE  ONLINE       orcldb01                STABLE
ora.crsd
      1        ONLINE  ONLINE       orcldb01                STABLE
ora.cssd
      1        ONLINE  ONLINE       orcldb01                STABLE
ora.cssdmonitor
      1        ONLINE  ONLINE       orcldb01                STABLE
ora.ctssd
      1        ONLINE  ONLINE       orcldb01                OBSERVER,STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.driver.afd
      1        ONLINE  ONLINE       orcldb01                STABLE
ora.drivers.acfs
      1        ONLINE  ONLINE       orcldb01                STABLE
ora.evmd
      1        ONLINE  ONLINE       orcldb01                STABLE
ora.gipcd
      1        ONLINE  ONLINE       orcldb01                STABLE
ora.gpnpd
      1        ONLINE  ONLINE       orcldb01                STABLE
ora.mdnsd
      1        ONLINE  ONLINE       orcldb01                STABLE
ora.storage
      1        ONLINE  ONLINE       orcldb01                STABLE
--------------------------------------------------------------------------------
bash-4.1$
bash-4.1$
bash-4.1$ export ORACLE_SID=+ASM1 ; export ORACLE_HOME=/u01/app/grid/12.2.0.1 ; sqlplus / as sysasm

SQL*Plus: Release 12.2.0.1.0 Production on Fri Oct 12 16:32:57 2018

Copyright (c) 1982, 2016, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> select instance_number, instance_name, host_name  from v$instance;

INSTANCE_NUMBER INSTANCE_NAME   HOST_NAME
--------------- --------------- -------------------------
              1 +ASM1           orcldb01


SQL> !pwd                                      
/u01/app/oracle

SQL> spool dg_creation.log
SQL>
SQL>
SQL> @/var/tmp/ASM/Capadd_10102018/createdgs.sql

Diskgroup created.


Diskgroup created.


Diskgroup created.


Diskgroup created.


Diskgroup created.


Diskgroup created.


Diskgroup created.


Diskgroup created.


Diskgroup created.


Diskgroup created.


Diskgroup created.


Diskgroup created.


Diskgroup created.


Diskgroup created.

SQL> set lines 200
SQL> @asmdg

NAME                           ALLOCATION_UNIT_SIZE STATE                             TYPE                 TOTAL_MB    FREE_MB
------------------------------ -------------------- --------------------------------- ------------------ ---------- ----------
ORCL_FRA                                    4194304 MOUNTED                           EXTERN                 557052     552772
ORCL_ARCHIVE                               67108864 MOUNTED                           EXTERN                1670976    1669824
ORCL_DATA                                  67108864 MOUNTED                           EXTERN                8354880    8351424
ORCL_REDO1A                                67108864 MOUNTED                           EXTERN                  69568      68800
ORCL_REDO1B                                67108864 MOUNTED                           EXTERN                  69568      68800
ORCL_REDO2A                                67108864 MOUNTED                           EXTERN                  69568      68800
ORCL_REDO2B                                67108864 MOUNTED                           EXTERN                  69568      68800
OGG                                        67108864 MOUNTED                           EXTERN                 556992     556224
HAIP_DATA                                  67108864 MOUNTED                           EXTERN                6683904    6681024
HAIP_FRA                                   67108864 MOUNTED                           EXTERN                 556992     556224
HAIP_ARCHIVE                               67108864 MOUNTED                           EXTERN                 556992     556224
HAIP_REDO1A                                67108864 MOUNTED                           EXTERN                  69568      68800
HAIP_REDO2A                                67108864 MOUNTED                           EXTERN                  69568      68800
HAIP_REDO2B                                67108864 MOUNTED                           EXTERN                  69568      68800
HAIP_REDO1B                                67108864 MOUNTED                           EXTERN                  69568      68800

15 rows selected.
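The @asmdg script run above is not included in this post; a minimal sketch that would produce the same columns, assuming it simply queries v$asm_diskgroup:

-- asmdg.sql (hypothetical reconstruction)
select name, allocation_unit_size, state, type, total_mb, free_mb
  from v$asm_diskgroup
 order by group_number;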

Once done, mount the disk groups on the remaining nodes, as they will be in DISMOUNTED mode there. You can mount them one by one, or simply run "alter diskgroup all mount;" on each remaining ASM instance, as sketched below.
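A hedged sketch of that final step on the second node (the instance name +ASM2 and the grid environment settings are assumptions based on the rest of this session):

# on orcldb02, as the grid software owner (hypothetical follow-up)
export ORACLE_SID=+ASM2 ; export ORACLE_HOME=/u01/app/grid/12.2.0.1 ; sqlplus / as sysasm
SQL> alter diskgroup all mount;
SQL> select name, state from v$asm_diskgroup order by name;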