- Published Tuesday 29 January 2013 19:55 by Administrator
All the technical articles are gathered here, sorted by category:
- Security
- Backup/Restore
- Grid Infrastructure
- Data Guard / Standby
- Migration
- Tuning and Performance
- Edition, version, option, usage, license
testbed 11gR2 RAC: Appendix 3 - crsctl
- Published Monday 28 January 2013 20:12 by Administrator
[grid@racdb1 ~]$ crsctl status res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
ONLINE ONLINE racdb1
ONLINE ONLINE racdb2
ora.OCRVOT.dg
ONLINE ONLINE racdb1
ONLINE ONLINE racdb2
ora.asm
ONLINE ONLINE racdb1 Started
ONLINE ONLINE racdb2 Started
ora.gsd
OFFLINE OFFLINE racdb1
OFFLINE OFFLINE racdb2
ora.net1.network
ONLINE ONLINE racdb1
ONLINE ONLINE racdb2
ora.ons
ONLINE ONLINE racdb1
ONLINE ONLINE racdb2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE racdb2
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE racdb1
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE racdb1
ora.cvu
1 ONLINE ONLINE racdb1
ora.oc4j
1 ONLINE ONLINE racdb1
ora.racdb1.vip
1 ONLINE ONLINE racdb1
ora.racdb2.vip
1 ONLINE ONLINE racdb2
ora.scan1.vip
1 ONLINE ONLINE racdb2
ora.scan2.vip
1 ONLINE ONLINE racdb1
ora.scan3.vip
1 ONLINE ONLINE racdb1
All resources are ONLINE except gsd, which is OFFLINE. This is expected: gsd only serves pre-10g (9i) databases, and we have none.
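Rather than scanning the whole table by eye, the capture can be filtered for anything not ONLINE. A minimal sketch, run here against an inlined shortened sample of the output above (in practice you would pipe `crsctl status res -t` directly):

```shell
# Count OFFLINE entries in a captured "crsctl status res -t" output.
# The sample below is a shortened excerpt; live usage would be:
#   crsctl status res -t | grep -c OFFLINE
sample='ora.gsd
      OFFLINE OFFLINE racdb1
      OFFLINE OFFLINE racdb2
ora.ons
      ONLINE  ONLINE  racdb1'
offline_count=$(printf '%s\n' "$sample" | grep -c 'OFFLINE')
echo "$offline_count"   # -> 2
```

Here the only OFFLINE lines belong to gsd, which confirms the note above.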
testbed 11gR2 RAC: Appendix 2 - root.sh
- Published Monday 28 January 2013 19:44 by Administrator
On both servers, as root:
# /oracle/oraInventory/orainstRoot.sh
Changing permissions of /oracle/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /oracle/oraInventory to oinstall.
The execution of the script is complete.
Then on racdb1, as root:
# /oracle/11.2.0.3/grid/root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /oracle/11.2.0.3/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /oracle/11.2.0.3/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
OLR initialization - successful
root wallet
root wallet cert
root cert export
peer wallet
profile reader wallet
pa wallet
peer wallet keys
pa wallet keys
peer cert request
pa cert request
peer cert
pa cert
peer root cert TP
profile reader root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa cert TP
profile reader peer cert TP
peer user cert
pa user cert
Adding Clusterware entries to upstart
CRS-2672: Attempting to start 'ora.mdnsd' on 'racdb1'
CRS-2676: Start of 'ora.mdnsd' on 'racdb1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'racdb1'
CRS-2676: Start of 'ora.gpnpd' on 'racdb1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'racdb1'
CRS-2672: Attempting to start 'ora.gipcd' on 'racdb1'
CRS-2676: Start of 'ora.cssdmonitor' on 'racdb1' succeeded
CRS-2676: Start of 'ora.gipcd' on 'racdb1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'racdb1'
CRS-2672: Attempting to start 'ora.diskmon' on 'racdb1'
CRS-2676: Start of 'ora.diskmon' on 'racdb1' succeeded
CRS-2676: Start of 'ora.cssd' on 'racdb1' succeeded
ASM created and started successfully.
Disk Group OCRVOT created successfully.
clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4256: Updating the profile
Successful addition of voting disk 1331634fde7d4f0bbf8ecbd8f5347205.
Successful addition of voting disk 69baf93c03064f34bf52858994864fdc.
Successful addition of voting disk 9ab32db118e34f4dbfd26e20d1657dd9.
Successfully replaced voting disk group with +OCRVOT.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 1331634fde7d4f0bbf8ecbd8f5347205 (/dev/oracleasm/disks/OCRVOT1) [OCRVOT]
2. ONLINE 69baf93c03064f34bf52858994864fdc (/dev/oracleasm/disks/OCRVOT2) [OCRVOT]
3. ONLINE 9ab32db118e34f4dbfd26e20d1657dd9 (/dev/oracleasm/disks/OCRVOT3) [OCRVOT]
Located 3 voting disk(s).
CRS-2672: Attempting to start 'ora.asm' on 'racdb1'
CRS-2676: Start of 'ora.asm' on 'racdb1' succeeded
CRS-2672: Attempting to start 'ora.OCRVOT.dg' on 'racdb1'
CRS-2676: Start of 'ora.OCRVOT.dg' on 'racdb1' succeeded
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Once the script has completed successfully, run it on racdb2, still as root:
# /oracle/11.2.0.3/grid/root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /oracle/11.2.0.3/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /oracle/11.2.0.3/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
OLR initialization - successful
Adding Clusterware entries to upstart
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node racdb1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
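After both root.sh runs, a quick sanity check is to confirm that all three voting disks are ONLINE. A minimal sketch, parsing an inlined copy of the votedisk listing printed above (live usage would parse `crsctl query css votedisk` instead):

```shell
# Count ONLINE voting disks in a captured votedisk listing
# (the sample reproduces the layout printed by root.sh above).
votedisks=' 1. ONLINE 1331634fde7d4f0bbf8ecbd8f5347205 (/dev/oracleasm/disks/OCRVOT1) [OCRVOT]
 2. ONLINE 69baf93c03064f34bf52858994864fdc (/dev/oracleasm/disks/OCRVOT2) [OCRVOT]
 3. ONLINE 9ab32db118e34f4dbfd26e20d1657dd9 (/dev/oracleasm/disks/OCRVOT3) [OCRVOT]'
online=$(printf '%s\n' "$votedisks" | grep -c ' ONLINE ')
echo "$online"   # -> 3
```

Three ONLINE entries matches the "Located 3 voting disk(s)" message in the root.sh output.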
testbed 11gR2 RAC: Appendix 1 - cluvfy
- Published Friday 25 January 2013 21:15 by Administrator
# /oracle/sources/grid/runcluvfy.sh stage -pre crsinst -n racdb1,racdb2 -verbose
Performing pre-checks for cluster services setup
Checking node reachability...
Check: Node reachability from node "racdb1"
Destination Node Reachable?
------------------------------------ ------------------------
racdb1 yes
racdb2 yes
Result: Node reachability check passed from node "racdb1"
Checking user equivalence...
Check: User equivalence for user "grid"
Node Name Status
------------------------------------ ------------------------
racdb2 passed
racdb1 passed
Result: User equivalence check passed for user "grid"
Checking node connectivity...
Checking hosts config file...
Node Name Status
------------------------------------ ------------------------
racdb2 passed
racdb1 passed
Verification of the hosts config file successful
Interface information for node "racdb2"
Name IP Address Subnet Gateway Def. Gateway HW Address MTU
------ --------------- --------------- --------------- --------------- ----------------- ------
eth0 10.10.0.11 10.10.0.0 0.0.0.0 10.10.0.1 08:00:27:00:B4:F4 1500
eth1 10.10.1.2 10.10.1.0 0.0.0.0 10.10.0.1 08:00:27:B6:CB:FA 1500
Interface information for node "racdb1"
Name IP Address Subnet Gateway Def. Gateway HW Address MTU
------ --------------- --------------- --------------- --------------- ----------------- ------
eth0 10.10.0.10 10.10.0.0 0.0.0.0 10.10.0.1 08:00:27:63:63:64 1500
eth1 10.10.1.1 10.10.1.0 0.0.0.0 10.10.0.1 08:00:27:BD:66:5E 1500
Check: Node connectivity of subnet "10.10.0.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
racdb2[10.10.0.11] racdb1[10.10.0.10] yes
Result: Node connectivity passed for subnet "10.10.0.0" with node(s) racdb2,racdb1
Check: TCP connectivity of subnet "10.10.0.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
racdb1:10.10.0.10 racdb2:10.10.0.11 passed
Result: TCP connectivity check passed for subnet "10.10.0.0"
Check: Node connectivity of subnet "10.10.1.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
racdb2[10.10.1.2] racdb1[10.10.1.1] yes
Result: Node connectivity passed for subnet "10.10.1.0" with node(s) racdb2,racdb1
Check: TCP connectivity of subnet "10.10.1.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
racdb1:10.10.1.1 racdb2:10.10.1.2 passed
Result: TCP connectivity check passed for subnet "10.10.1.0"
Interfaces found on subnet "10.10.0.0" that are likely candidates for VIP are:
racdb2 eth0:10.10.0.11
racdb1 eth0:10.10.0.10
Interfaces found on subnet "10.10.1.0" that are likely candidates for a private interconnect are:
racdb2 eth1:10.10.1.2
racdb1 eth1:10.10.1.1
Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "10.10.0.0".
Subnet mask consistency check passed for subnet "10.10.1.0".
Subnet mask consistency check passed.
Result: Node connectivity check passed
Checking multicast communication...
Checking subnet "10.10.0.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "10.10.0.0" for multicast communication with multicast group "230.0.1.0" passed.
Checking subnet "10.10.1.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "10.10.1.0" for multicast communication with multicast group "230.0.1.0" passed.
Check of multicast communication passed.
Checking ASMLib configuration.
Node Name Status
------------------------------------ ------------------------
racdb2 passed
racdb1 passed
Result: Check for ASMLib configuration passed.
Check: Total memory
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
racdb2 2.7919GB (2927556.0KB) 1.5GB (1572864.0KB) passed
racdb1 2.7861GB (2921412.0KB) 1.5GB (1572864.0KB) passed
Result: Total memory check passed
Check: Available memory
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
racdb2 2.5199GB (2642344.0KB) 50MB (51200.0KB) passed
racdb1 2.4222GB (2539892.0KB) 50MB (51200.0KB) passed
Result: Available memory check passed
Check: Swap space
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
racdb2 4GB (4194300.0KB) 2.7919GB (2927556.0KB) passed
racdb1 4GB (4194300.0KB) 2.7861GB (2921412.0KB) passed
Result: Swap space check passed
Check: Free disk space for "racdb2:/tmp"
Path Node Name Mount point Available Required Status
---------------- ------------ ------------ ------------ ------------ ------------
/tmp racdb2 / 6.7842GB 1GB passed
Result: Free disk space check passed for "racdb2:/tmp"
Check: Free disk space for "racdb1:/tmp"
Path Node Name Mount point Available Required Status
---------------- ------------ ------------ ------------ ------------ ------------
/tmp racdb1 / 7.0146GB 1GB passed
Result: Free disk space check passed for "racdb1:/tmp"
Check: User existence for "grid"
Node Name Status Comment
------------ ------------------------ ------------------------
racdb2 passed exists(1100)
racdb1 passed exists(1100)
Checking for multiple users with UID value 1100
Result: Check for multiple users with UID value 1100 passed
Result: User existence check passed for "grid"
Check: Group existence for "oinstall"
Node Name Status Comment
------------ ------------------------ ------------------------
racdb2 passed exists
racdb1 passed exists
Result: Group existence check passed for "oinstall"
Check: Group existence for "dba"
Node Name Status Comment
------------ ------------------------ ------------------------
racdb2 passed exists
racdb1 passed exists
Result: Group existence check passed for "dba"
Check: Membership of user "grid" in group "oinstall" [as Primary]
Node Name User Exists Group Exists User in Group Primary Status
---------------- ------------ ------------ ------------ ------------ ------------
racdb2 yes yes yes yes passed
racdb1 yes yes yes yes passed
Result: Membership check for user "grid" in group "oinstall" [as Primary] passed
Check: Membership of user "grid" in group "dba"
Node Name User Exists Group Exists User in Group Status
---------------- ------------ ------------ ------------ ----------------
racdb2 yes yes no failed
racdb1 yes yes no failed
Result: Membership check for user "grid" in group "dba" failed
Check: Run level
Node Name run level Required Status
------------ ------------------------ ------------------------ ----------
racdb2 5 3,5 passed
racdb1 5 3,5 passed
Result: Run level check passed
Check: Hard limits for "maximum open file descriptors"
Node Name Type Available Required Status
---------------- ------------ ------------ ------------ ----------------
racdb2 hard 65536 65536 passed
racdb1 hard 65536 65536 passed
Result: Hard limits check passed for "maximum open file descriptors"
Check: Soft limits for "maximum open file descriptors"
Node Name Type Available Required Status
---------------- ------------ ------------ ------------ ----------------
racdb2 soft 1024 1024 passed
racdb1 soft 1024 1024 passed
Result: Soft limits check passed for "maximum open file descriptors"
Check: Hard limits for "maximum user processes"
Node Name Type Available Required Status
---------------- ------------ ------------ ------------ ----------------
racdb2 hard 16384 16384 passed
racdb1 hard 16384 16384 passed
Result: Hard limits check passed for "maximum user processes"
Check: Soft limits for "maximum user processes"
Node Name Type Available Required Status
---------------- ------------ ------------ ------------ ----------------
racdb2 soft 2047 2047 passed
racdb1 soft 2047 2047 passed
Result: Soft limits check passed for "maximum user processes"
Check: System architecture
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
racdb2 x86_64 x86_64 passed
racdb1 x86_64 x86_64 passed
Result: System architecture check passed
Check: Kernel version
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
racdb2 2.6.39-200.24.1.el6uek.x86_64 2.6.32 passed
racdb1 2.6.39-200.24.1.el6uek.x86_64 2.6.32 passed
Result: Kernel version check passed
Check: Kernel parameter for "semmsl"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
racdb2 250 250 250 passed
racdb1 250 250 250 passed
Result: Kernel parameter check passed for "semmsl"
Check: Kernel parameter for "semmns"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
racdb2 32000 32000 32000 passed
racdb1 32000 32000 32000 passed
Result: Kernel parameter check passed for "semmns"
Check: Kernel parameter for "semopm"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
racdb2 100 100 100 passed
racdb1 100 100 100 passed
Result: Kernel parameter check passed for "semopm"
Check: Kernel parameter for "semmni"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
racdb2 128 128 128 passed
racdb1 128 128 128 passed
Result: Kernel parameter check passed for "semmni"
Check: Kernel parameter for "shmmax"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
racdb2 4294967295 4294967295 1498908672 passed
racdb1 4294967295 4294967295 1495762944 passed
Result: Kernel parameter check passed for "shmmax"
Check: Kernel parameter for "shmmni"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
racdb2 4096 4096 4096 passed
racdb1 4096 4096 4096 passed
Result: Kernel parameter check passed for "shmmni"
Check: Kernel parameter for "shmall"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
racdb2 2097152 2097152 2097152 passed
racdb1 2097152 2097152 2097152 passed
Result: Kernel parameter check passed for "shmall"
Check: Kernel parameter for "file-max"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
racdb2 6815744 6815744 6815744 passed
racdb1 6815744 6815744 6815744 passed
Result: Kernel parameter check passed for "file-max"
Check: Kernel parameter for "ip_local_port_range"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
racdb2 between 9000.0 & 65500.0 between 9000.0 & 65500.0 between 9000.0 & 65500.0 passed
racdb1 between 9000.0 & 65500.0 between 9000.0 & 65500.0 between 9000.0 & 65500.0 passed
Result: Kernel parameter check passed for "ip_local_port_range"
Check: Kernel parameter for "rmem_default"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
racdb2 262144 262144 262144 passed
racdb1 262144 262144 262144 passed
Result: Kernel parameter check passed for "rmem_default"
Check: Kernel parameter for "rmem_max"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
racdb2 4194304 4194304 4194304 passed
racdb1 4194304 4194304 4194304 passed
Result: Kernel parameter check passed for "rmem_max"
Check: Kernel parameter for "wmem_default"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
racdb2 262144 262144 262144 passed
racdb1 262144 262144 262144 passed
Result: Kernel parameter check passed for "wmem_default"
Check: Kernel parameter for "wmem_max"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
racdb2 1048576 1048576 1048576 passed
racdb1 1048576 1048576 1048576 passed
Result: Kernel parameter check passed for "wmem_max"
Check: Kernel parameter for "aio-max-nr"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
racdb2 1048576 1048576 1048576 passed
racdb1 1048576 1048576 1048576 passed
Result: Kernel parameter check passed for "aio-max-nr"
Check: Package existence for "binutils"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
racdb2 binutils-2.20.51.0.2-5.34.el6 binutils-2.20.51.0.2 passed
racdb1 binutils-2.20.51.0.2-5.34.el6 binutils-2.20.51.0.2 passed
Result: Package existence check passed for "binutils"
Check: Package existence for "compat-libcap1"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
racdb2 compat-libcap1-1.10-1 compat-libcap1-1.10 passed
racdb1 compat-libcap1-1.10-1 compat-libcap1-1.10 passed
Result: Package existence check passed for "compat-libcap1"
Check: Package existence for "compat-libstdc++-33(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
racdb2 compat-libstdc++-33(x86_64)-3.2.3-69.el6 compat-libstdc++-33(x86_64)-3.2.3 passed
racdb1 compat-libstdc++-33(x86_64)-3.2.3-69.el6 compat-libstdc++-33(x86_64)-3.2.3 passed
Result: Package existence check passed for "compat-libstdc++-33(x86_64)"
Check: Package existence for "libgcc(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
racdb2 libgcc(x86_64)-4.4.6-4.el6 libgcc(x86_64)-4.4.4 passed
racdb1 libgcc(x86_64)-4.4.6-4.el6 libgcc(x86_64)-4.4.4 passed
Result: Package existence check passed for "libgcc(x86_64)"
Check: Package existence for "libstdc++(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
racdb2 libstdc++(x86_64)-4.4.6-4.el6 libstdc++(x86_64)-4.4.4 passed
racdb1 libstdc++(x86_64)-4.4.6-4.el6 libstdc++(x86_64)-4.4.4 passed
Result: Package existence check passed for "libstdc++(x86_64)"
Check: Package existence for "libstdc++-devel(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
racdb2 libstdc++-devel(x86_64)-4.4.6-4.el6 libstdc++-devel(x86_64)-4.4.4 passed
racdb1 libstdc++-devel(x86_64)-4.4.6-4.el6 libstdc++-devel(x86_64)-4.4.4 passed
Result: Package existence check passed for "libstdc++-devel(x86_64)"
Check: Package existence for "sysstat"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
racdb2 sysstat-9.0.4-20.el6 sysstat-9.0.4 passed
racdb1 sysstat-9.0.4-20.el6 sysstat-9.0.4 passed
Result: Package existence check passed for "sysstat"
Check: Package existence for "gcc"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
racdb2 gcc-4.4.6-4.el6 gcc-4.4.4 passed
racdb1 gcc-4.4.6-4.el6 gcc-4.4.4 passed
Result: Package existence check passed for "gcc"
Check: Package existence for "gcc-c++"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
racdb2 gcc-c++-4.4.6-4.el6 gcc-c++-4.4.4 passed
racdb1 gcc-c++-4.4.6-4.el6 gcc-c++-4.4.4 passed
Result: Package existence check passed for "gcc-c++"
Check: Package existence for "ksh"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
racdb2 ksh-20100621-16.el6 ksh-20100621 passed
racdb1 ksh-20100621-16.el6 ksh-20100621 passed
Result: Package existence check passed for "ksh"
Check: Package existence for "make"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
racdb2 make-3.81-20.el6 make-3.81 passed
racdb1 make-3.81-20.el6 make-3.81 passed
Result: Package existence check passed for "make"
Check: Package existence for "glibc(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
racdb2 glibc(x86_64)-2.12-1.80.el6 glibc(x86_64)-2.12 passed
racdb1 glibc(x86_64)-2.12-1.80.el6 glibc(x86_64)-2.12 passed
Result: Package existence check passed for "glibc(x86_64)"
Check: Package existence for "glibc-devel(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
racdb2 glibc-devel(x86_64)-2.12-1.80.el6 glibc-devel(x86_64)-2.12 passed
racdb1 glibc-devel(x86_64)-2.12-1.80.el6 glibc-devel(x86_64)-2.12 passed
Result: Package existence check passed for "glibc-devel(x86_64)"
Check: Package existence for "libaio(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
racdb2 libaio(x86_64)-0.3.107-10.el6 libaio(x86_64)-0.3.107 passed
racdb1 libaio(x86_64)-0.3.107-10.el6 libaio(x86_64)-0.3.107 passed
Result: Package existence check passed for "libaio(x86_64)"
Check: Package existence for "libaio-devel(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
racdb2 libaio-devel(x86_64)-0.3.107-10.el6 libaio-devel(x86_64)-0.3.107 passed
racdb1 libaio-devel(x86_64)-0.3.107-10.el6 libaio-devel(x86_64)-0.3.107 passed
Result: Package existence check passed for "libaio-devel(x86_64)"
Checking for multiple users with UID value 0
Result: Check for multiple users with UID value 0 passed
Check: Current group ID
Result: Current group ID check passed
Starting check for consistency of primary group of root user
Node Name Status
------------------------------------ ------------------------
racdb2 passed
racdb1 passed
Check for consistency of root user's primary group passed
Starting Clock synchronization checks using Network Time Protocol(NTP)...
NTP Configuration file check started...
Network Time Protocol(NTP) configuration file not found on any of the nodes. Oracle Cluster Time Synchronization Service(CTSS) can be used instead of NTP for time synchronization on the cluster nodes
No NTP Daemons or Services were found to be running
Result: Clock synchronization check using Network Time Protocol(NTP) passed
Checking Core file name pattern consistency...
Core file name pattern consistency check passed.
Checking to make sure user "grid" is not in "root" group
Node Name Status Comment
------------ ------------------------ ------------------------
racdb2 passed does not exist
racdb1 passed does not exist
Result: User "grid" is not part of "root" group. Check passed
Check default user file creation mask
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
racdb2 0022 0022 passed
racdb1 0022 0022 passed
Result: Default user file creation mask check passed
Checking consistency of file "/etc/resolv.conf" across nodes
Checking the file "/etc/resolv.conf" to make sure only one of domain and search entries is defined
File "/etc/resolv.conf" does not have both domain and search entries defined
Checking if domain entry in file "/etc/resolv.conf" is consistent across the nodes...
domain entry in file "/etc/resolv.conf" is consistent across nodes
Checking if search entry in file "/etc/resolv.conf" is consistent across the nodes...
search entry in file "/etc/resolv.conf" is consistent across nodes
Checking file "/etc/resolv.conf" to make sure that only one search entry is defined
All nodes have one search entry defined in file "/etc/resolv.conf"
Checking all nodes to make sure that search entry is "localdomain" as found on node "racdb2"
All nodes of the cluster have same value for 'search'
Checking DNS response time for an unreachable node
Node Name Status
------------------------------------ ------------------------
racdb2 failed
racdb1 failed
PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: racdb2,racdb1
File "/etc/resolv.conf" is not consistent across nodes
Check: Time zone consistency
Result: Time zone consistency check passed
Pre-check for cluster services setup was unsuccessful on all the nodes.
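When the pre-check ends "unsuccessful", it is convenient to pull out just the failing checks before deciding what to fix and rerunning. A minimal sketch over an inlined excerpt of the cluvfy output above (live usage would redirect `runcluvfy.sh` output to a file and grep that):

```shell
# Extract failure indicators from a captured runcluvfy.sh log.
# The excerpt below reproduces lines from the run above.
cluvfy_log='Result: Membership check for user "grid" in group "dba" failed
Result: Run level check passed
PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: racdb2,racdb1
Pre-check for cluster services setup was unsuccessful on all the nodes.'
failure_count=$(printf '%s\n' "$cluvfy_log" | grep -cE 'failed|PRVF-')
echo "$failure_count"   # -> 2
```

Here the two hits are the "grid in dba" membership failure and the PRVF-5636 DNS timeout, which are exactly the items behind the unsuccessful verdict.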