Sunday 26 May 2013

Oracle 11gR2 Grid Infrastructure implementation for Real Application Cluster (RAC)

Requirements:
OS: Red Hat Enterprise Linux 5 on all machines participating in the setup.
DB: Oracle 11g R2
To be consistent with the rest of the article, the following information should be set during the installation:

RAC1:
  Hostname: trnbank4
  IP Address eth0: 10.40.0.147 (public address)
  IP Address eth1: 172.16.100.147 (private address)
RAC2:
  Hostname: trnbank5
  IP Address eth0: 10.40.0.148 (public address)
  IP Address eth1: 172.16.100.148 (private address)

Media Required:
·         Oracle 11gR2 Grid Infrastructure media is required for the cluster installation.
·         Oracle 11gR2 Database media is required for the Oracle Database installation.
·         The Oracle ASMLib RPMs are required for ASM configuration.

SAN Storage:
We use 100 GB of shared SAN storage presented to both the rac1 and rac2 machines.
·         2 partitions of 20 GB for the DATA disk group.
·         2 partitions of 20 GB for the FRA disk group.
·         Partitions of 2.5 GB for the VOTING disk group.

SCAN Configuration:
The Single Client Access Name (SCAN) should really be defined in DNS or GNS and resolve, round-robin, to three addresses on the same subnet as the public and virtual IPs. In this article I've defined it as a single IP address in the "/etc/hosts" file, which is wrong and will cause the cluster verification to fail, but it allows me to complete the install without the presence of a DNS.
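For reference, a minimal sketch of what the proper DNS setup might look like, assuming a BIND-style zone file and three hypothetical addresses on the public subnet (only 10.40.0.155 exists in this setup; .156 and .157 are made up for illustration):

; rac-scan resolves round-robin to three addresses
rac-scan    IN A    10.40.0.155
rac-scan    IN A    10.40.0.156
rac-scan    IN A    10.40.0.157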

Note:
1. Swap space should be twice the size of RAM.
2. There should be two Ethernet cards (eth0 for the public network and eth1 for the private interconnect).
3. The root filesystem should have at least 30 GB of free space.
4. Three voting disks with a minimum of 2 GB each, in a VOTING disk group with normal redundancy.
5. Public and virtual IP addresses must be on the same subnet, while the public and private IP subnets must be different.
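A few quick commands to verify these prerequisites on each node (standard Linux tools; the output format varies slightly by release):

# grep -E 'MemTotal|SwapTotal' /proc/meminfo     # swap should be roughly twice RAM
# df -h /                                        # at least 30 GB free on root
# ip addr show eth0 && ip addr show eth1         # public and private interfaces present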

Oracle Installation Prerequisites

Note: Throughout this document, "all nodes" means both machines described in the setup above. "All RAC nodes" means only the RAC nodes, excluding the storage machine.

Step: RPM Installation

Once the basic installation is complete, install the following packages whilst logged in as the root user. This includes the 64-bit and 32-bit versions of some packages.

yum install binutils-2.17.50.0.6 compat-libstdc++-33-3.2.3 compat-libstdc++-33-3.2.3 \
  elfutils-libelf-0.125 elfutils-libelf-devel-0.125 elfutils-libelf-devel-static-0.125 \
  gcc-4.1.2 gcc-c++-4.1.2 glibc-2.5-24 glibc-2.5-24 glibc-common-2.5 \
  glibc-devel-2.5 glibc-devel-2.5 glibc-headers-2.5 ksh-20060214 \
  libaio-0.3.106 libaio-0.3.106 libaio-devel-0.3.106 libaio-devel-0.3.106 \
  libgcc-4.1.2 libgcc-4.1.2 libstdc++-4.1.2 libstdc++-4.1.2 libstdc++-devel-4.1.2 \
  make-3.81 pdksh-5.2.14 sysstat-7.0.2 unixODBC-2.2.11 unixODBC-2.2.11 \
  unixODBC-devel-2.2.11 unixODBC-devel-2.2.11
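A simple sanity check afterwards is to query a handful of the packages and confirm they report as installed:

# rpm -q binutils gcc glibc-devel libaio-devel libstdc++-devel sysstat unixODBC-devel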


Configure Name Resolution

DNS is configured to resolve the following names. However, the local /etc/hosts file is also configured, in case DNS becomes unavailable for any reason.


All nodes:
If you are not using DNS, the "/etc/hosts" file must contain the following information.

~]# vi /etc/hosts

127.0.0.1       localhost.localdomain   localhost

# Public Network - (eth0)
10.40.0.147    trnbank4           trnbank4
10.40.0.148    trnbank5           trnbank5

# Private Interconnect - (eth1)
172.16.100.147    trnbank4-priv      trnbank4-priv
172.16.100.148    trnbank5-priv      trnbank5-priv

# Public Virtual IP (VIP) addresses - (eth0:1)
10.40.0.149    trnbank4-vip       trnbank4-vip
10.40.0.150    trnbank5-vip       trnbank5-vip

# SCAN
10.40.0.155   rac-scan    rac-scan
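Once the file is in place on all nodes, a basic connectivity check from each node is worthwhile (note that the VIP and SCAN addresses will only respond after Grid Infrastructure is up):

# ping -c 2 trnbank4
# ping -c 2 trnbank5
# ping -c 2 trnbank4-priv
# ping -c 2 trnbank5-priv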

Set Resource Limits for Oracle Software Installation Users

Step:
Add or amend the following lines in the "/etc/sysctl.conf" file.
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 536870912
kernel.shmmni = 4096
# semaphores: semmsl, semmns, semopm, semmni
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default=262144
net.core.rmem_max=4194304
net.core.wmem_default=262144
net.core.wmem_max=1048576

Run the following command to change the current kernel parameters.
/sbin/sysctl -p

Add the following lines to the "/etc/security/limits.conf" file.
oracle               soft    nproc   2047
oracle               hard    nproc   16384
oracle               soft    nofile  1024
oracle               hard    nofile  65536
Add the following lines to the "/etc/pam.d/login" file, if it does not already exist.
session    required     pam_limits.so
Disable Secure Linux (SELinux) by editing the "/etc/selinux/config" file, making sure the SELINUX flag is set as follows.
SELINUX=disabled

Alternatively, this alteration can be done using the GUI tool (System > Administration > Security Level and Firewall). Click on the SELinux tab and disable the feature.
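After editing the file (and rebooting, if SELinux was previously enforcing), the state can be confirmed with:

# sestatus
SELinux status:                 disabled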

Configure Cluster Time Synchronization Service - (CTSS)

Either configure NTP properly, or make sure it is not configured at all so that the Oracle Cluster Time Synchronization Service (ctssd) can synchronize the times of the RAC nodes in active mode. In this case we will deconfigure NTP.
# service ntpd stop
Shutting down ntpd:                                        [  OK  ]
# chkconfig ntpd off
# mv /etc/ntp.conf /etc/ntp.conf.org
# rm /var/run/ntpd.pid
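Once Grid Infrastructure is installed, you can confirm that CTSS has taken over time synchronization in active mode (the message text may vary slightly by patch level):

$ crsctl check ctss
CRS-4701: The Cluster Time Synchronization Service is in Active mode.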

Start the Name Service Cache Daemon (nscd).
chkconfig --level 35 nscd on
service nscd start

Create Groups and User for Grid Infrastructure

Create the new groups and users.
groupadd -g 1000 oinstall
groupadd -g 1200 dba
groupadd -g 1400 asmadmin
groupadd -g 1600 asmdba
useradd -u 1100 -g oinstall -G dba,asmadmin,asmdba oracle
passwd oracle
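A quick check that the user and group memberships match what was intended:

# id oracle
uid=1100(oracle) gid=1000(oinstall) groups=1000(oinstall),1200(dba),1400(asmadmin),1600(asmdba)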

Create Required Directories

On both RAC1 and RAC2 create the directories in which the Oracle software will be installed.
mkdir -p /u01/app/11.2.0/grid
mkdir -p /u01/app/oracle/product/11.2.0/db_1
chown -R oracle:oinstall /u01
chmod -R 775 /u01

Create Login Profile for oracle User Account

Log in as the oracle user and add the following lines at the end of the .bash_profile file on RAC1 (trnbank4).

#####################################################
#Oracle Settings
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR

ORACLE_HOSTNAME=trnbank4; export ORACLE_HOSTNAME
ORACLE_UNQNAME=racdb; export ORACLE_UNQNAME
GRID_HOME=/u01/app/11.2.0/grid; export GRID_HOME
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1; export ORACLE_HOME
ORACLE_SID=racdb1; export ORACLE_SID
ORACLE_TERM=xterm; export ORACLE_TERM
PATH=/usr/sbin:$PATH; export PATH
PATH=$ORACLE_HOME/bin:$PATH; export PATH
PATH=$GRID_HOME/bin:$PATH; export PATH

LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH

if [ $USER = "oracle" ]; then
  if [ $SHELL = "/bin/ksh" ]; then
    ulimit -p 16384
    ulimit -n 65536
  else
    ulimit -u 16384 -n 65536
  fi
fi
#####################################################

Log in as the oracle user and add the following lines at the end of the .bash_profile file on RAC2 (trnbank5).

#####################################################
# Oracle Settings
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR

ORACLE_HOSTNAME=trnbank5; export ORACLE_HOSTNAME
ORACLE_UNQNAME=racdb; export ORACLE_UNQNAME
GRID_HOME=/u01/app/11.2.0/grid; export GRID_HOME
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1; export ORACLE_HOME
ORACLE_SID=racdb2; export ORACLE_SID
ORACLE_TERM=xterm; export ORACLE_TERM
PATH=/usr/sbin:$PATH; export PATH
PATH=$ORACLE_HOME/bin:$PATH; export PATH
PATH=$GRID_HOME/bin:$PATH; export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH

if [ $USER = "oracle" ]; then
  if [ $SHELL = "/bin/ksh" ]; then
    ulimit -p 16384
    ulimit -n 65536
  else
    ulimit -u 16384 -n 65536
  fi
fi
#####################################################
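After saving the profile on each node, re-source it and confirm the key variables point where you expect (node 1 shown here; node 2 should report racdb2):

$ . ~/.bash_profile
$ echo $ORACLE_HOME $GRID_HOME $ORACLE_SID
/u01/app/oracle/product/11.2.0/db_1 /u01/app/11.2.0/grid racdb1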

Create Partitions on SAN Storage

Use the Linux fdisk command to create the partitions:
[root@trnbank4 ~]# fdisk -l

Now create partitions on the un-partitioned SAN disks.

[root@trnbank4 ~]# fdisk /dev/sdb
[root@trnbank4 ~]# partprobe
[root@trnbank4 ~]# fdisk -l
Disk /dev/sda: 42.9 GB, 42949672960 bytes
255 heads, 63 sectors/track, 5221 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   83  Linux
/dev/sda2              14        5221    41833260   8e  Linux LVM

Disk /dev/sdb: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1        2610    20964793+  83  Linux

Create the remaining partitions in the same way.
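For reference, a typical interactive fdisk sequence to create one primary partition spanning a whole disk looks roughly like this (your cylinder values will differ per disk):

[root@trnbank4 ~]# fdisk /dev/sdc
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-2610, default 1): <Enter>
Last cylinder or +size or +sizeM or +sizeK (1-2610, default 2610): <Enter>
Command (m for help): w
[root@trnbank4 ~]# partprobe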

Install and Configure ASMLib

[root@trnbank4 Oracleasm_Lib-RHEL5-64]# pwd
/backup/Oracleasm_Lib-RHEL5-64
[root@trnbank4 Oracleasm_Lib-RHEL5-64]# ls
oracleasm-2.6.18-194.el5-2.0.5-1.el5.x86_64.rpm
oracleasm-2.6.18-194.el5debug-2.0.5-1.el5.x86_64.rpm
oracleasm-2.6.18-194.el5-debuginfo-2.0.5-1.el5.x86_64.rpm
oracleasm-2.6.18-194.el5xen-2.0.5-1.el5.x86_64.rpm
oracleasmlib-2.0.4-1.el5.x86_64.rpm
oracleasm-support-2.1.4-1.el5.x86_64.rpm

[root@trnbank4 Oracleasm_Lib-RHEL5-64]# rpm -Uvh oracleasm*
[root@trnbank4 ~]# rpm -qa | grep oracleasm
[root@trnbank4 ~]# /etc/init.d/oracleasm configure -i
Configuring the Oracle ASM library driver.
 
This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.
 
Default user to own the driver interface []: oracle
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Fix permissions of Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration:           [  OK  ]
Loading module "oracleasm":                                [  OK  ]
Mounting ASMlib driver filesystem:                         [  OK  ]
Scanning system for ASM disks:                             [  OK  ]
 
[root@trnbank4 ~]# /etc/init.d/oracleasm status
Checking if ASM is loaded:                                 [  OK  ]
Checking if /dev/oracleasm is mounted:                     [  OK  ]
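ASMLib must be installed and configured the same way on the second RAC node, otherwise the ASM disks will not be visible there:

[root@trnbank5 ~]# rpm -Uvh oracleasm*
[root@trnbank5 ~]# /etc/init.d/oracleasm configure -i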

Create ASM Disks for Oracle

[root@trnbank4 ~]# fdisk -l

Disk /dev/sda: 42.9 GB, 42949672960 bytes
255 heads, 63 sectors/track, 5221 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
 
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   83  Linux
/dev/sda2              14        5221    41833260   8e  Linux LVM
 
Disk /dev/sdb: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
 
   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1        2610    20964793+  83  Linux
 
Disk /dev/sdc: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
 
   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1        2610    20964793+  83  Linux
 
Disk /dev/sdd: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
 
   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1        2610    20964793+  83  Linux
 
Disk /dev/sde: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
 
   Device Boot      Start         End      Blocks   Id  System
/dev/sde1               1        2610    20964793+  83  Linux
 
Disk /dev/sdf: 5368 MB, 5368709120 bytes
255 heads, 63 sectors/track, 652 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
 
   Device Boot      Start         End      Blocks   Id  System
/dev/sdf1               1         317     2546271   83  Linux
/dev/sdf2             318         634     2546302+  83  Linux
 
Disk /dev/sdg: 5368 MB, 5368709120 bytes
255 heads, 63 sectors/track, 652 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
 
   Device Boot      Start         End      Blocks   Id  System
/dev/sdg1               1         317     2546271   83  Linux
/dev/sdg2             318         634     2546302+  83  Linux
 
[root@trnbank4 ~]# /etc/init.d/oracleasm createdisk DATA01 /dev/sdb1
[root@trnbank4 ~]# /etc/init.d/oracleasm createdisk DATA02 /dev/sdc1
[root@trnbank4 ~]# /etc/init.d/oracleasm createdisk FRA01 /dev/sdd1
[root@trnbank4 ~]# /etc/init.d/oracleasm createdisk FRA02 /dev/sde1
[root@trnbank4 ~]# /etc/init.d/oracleasm createdisk VOT01 /dev/sdf1
[root@trnbank4 ~]# /etc/init.d/oracleasm createdisk VOT02 /dev/sdf2
[root@trnbank4 ~]# /etc/init.d/oracleasm createdisk VOT03 /dev/sdg1
[root@trnbank4 ~]# /etc/init.d/oracleasm createdisk VOT04 /dev/sdg2
[root@trnbank4 ~]# /etc/init.d/oracleasm listdisks
DATA01
DATA02
FRA01
FRA02
VOT01
VOT02
VOT03
VOT04

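The disk headers written on node 1 are typically picked up on the other node by running a scan there first (a standard ASMLib step):

[root@trnbank5 ~]# /etc/init.d/oracleasm scandisks
Scanning system for ASM disks:                             [  OK  ]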
Also check on node 2:
[root@trnbank5 ~]# /etc/init.d/oracleasm listdisks
DATA01
DATA02
FRA01
FRA02
VOT01
VOT02
VOT03
VOT04

Install the cvuqdisk Package for Linux

Install the operating system package cvuqdisk to both Oracle RAC nodes. Without cvuqdisk, Cluster Verification Utility cannot discover shared disks, and you will receive the error message "Package cvuqdisk not installed" when the Cluster Verification Utility is run (either manually or at the end of the Oracle grid infrastructure installation). Use the cvuqdisk RPM for your hardware architecture (for example, x86_64 or i386).
The cvuqdisk RPM can be found on the Oracle grid infrastructure installation media in the rpm directory.
 
# Install the following package from the Oracle grid media.
[root@trnbank4 rpm]# pwd
/u01/media/Oracle-DataGrid-11gR2/rpm
[root@trnbank4 rpm]# ls
cvuqdisk-1.0.7-1.rpm
[root@trnbank4 rpm]# rpm -Uvh cvuqdisk*
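Oracle's installation guide also has you point the CVUQDISK_GRP environment variable at the oraInventory group before installing the package; if it is unset, the default is oinstall, which matches this setup:

[root@trnbank4 rpm]# export CVUQDISK_GRP=oinstall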


Shut down unnecessary services on all nodes (whichever of the following are present):

chkconfig --level 35 sendmail off
chkconfig --level 35 pcmcia off
chkconfig --level 35 cups off
chkconfig --level 35 hpoj off
chkconfig --level 35 iptables off
chkconfig --level 35 exim off
chkconfig --level 35 postfix off
chkconfig --level 35 FreeWnn off
chkconfig --level 35 httpd off
chkconfig --level 35 rhnsd off
chkconfig --level 35 smartd off
chkconfig --level 35 canna off
chkconfig --level 35 iiim off

Configuring Passwordless SSH on Cluster Nodes

To configure passwordless SSH, complete the following on both Oracle RAC nodes.
[root@trnbank4 ~]# su - oracle
[root@trnbank5 ~]# su - oracle
[oracle@trnbank4 ~]$ mkdir ~/.ssh
[oracle@trnbank5 ~]$ mkdir ~/.ssh
[oracle@trnbank4 ~]$ chmod 700 ~/.ssh
[oracle@trnbank5 ~]$ chmod 700 ~/.ssh
[oracle@trnbank4 ~]$ /usr/bin/ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_dsa): [Enter]
Enter passphrase (empty for no passphrase): [Enter]
Enter same passphrase again: [Enter]
Your identification has been saved in /home/oracle/.ssh/id_dsa.
Your public key has been saved in /home/oracle/.ssh/id_dsa.pub.
The key fingerprint is:
57:21:d7:d5:54:29:4c:12:40:23:36:e9:6e:2f:e6:40 oracle@trnbank4
[oracle@trnbank5 ~]$ /usr/bin/ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_dsa): [Enter]
Enter passphrase (empty for no passphrase): [Enter]
Enter same passphrase again: [Enter]
Your identification has been saved in /home/oracle/.ssh/id_dsa.
Your public key has been saved in /home/oracle/.ssh/id_dsa.pub.
The key fingerprint is:
58:25:d7:d5:54:29:4c:12:40:23:36:e9:6e:2f:e6:90 oracle@trnbank5
[oracle@trnbank4 ~]$ touch ~/.ssh/authorized_keys
[oracle@trnbank5 ~]$ touch ~/.ssh/authorized_keys
[oracle@trnbank4 ~]$ ls -l ~/.ssh
[oracle@trnbank5 ~]$ ls -l ~/.ssh
[oracle@trnbank4 ~]$ ssh trnbank4 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
[oracle@trnbank4 ~]$ ssh trnbank5 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
[oracle@trnbank4 ~]$ ls -l ~/.ssh
[oracle@trnbank4 ~]$ scp ~/.ssh/authorized_keys trnbank5:.ssh/authorized_keys
[oracle@trnbank4 ~]$ chmod 600 ~/.ssh/authorized_keys
[oracle@trnbank5 ~]$ chmod 600 ~/.ssh/authorized_keys
If SSH is configured correctly, you will be able to use the ssh and scp commands without being prompted for a password or pass phrase from the terminal session:
[oracle@trnbank4 ~]$ ssh trnbank4 "date;hostname"
[oracle@trnbank4 ~]$ ssh trnbank5 "date;hostname"
[oracle@trnbank5 ~]$ ssh trnbank4 "date;hostname"
[oracle@trnbank5 ~]$ ssh trnbank5 "date;hostname"

Verify Oracle Clusterware Requirements with CVU- (optional)

[oracle@trnbank4 ~]$ cd /u01/media/Oracle-DataGrid-11gR2/
[oracle@trnbank4 Oracle-DataGrid-11gR2]$ ./runcluvfy.sh stage -pre crsinst -fixup -n trnbank4,trnbank5 -verbose

Install Oracle Grid Infrastructure for a Cluster

[oracle@trnbank4 ~]$ cd /u01/media/Oracle-DataGrid-11gR2/
[oracle@trnbank4 Oracle-DataGrid-11gR2]$ ls
[oracle@trnbank4 Oracle-DataGrid-11gR2]$ ./runInstaller

Click on Add to add the second cluster node.


Configure SSH connectivity if it is not already configured, and test it.

Post-installation Tasks for Oracle Grid Infrastructure for a Cluster

Run the following scripts on both nodes as the root user.
[root@trnbank4 ~]# /u01/app/oraInventory/orainstRoot.sh
[root@trnbank5 ~]# /u01/app/oraInventory/orainstRoot.sh
[root@trnbank4 ~]# /u01/app/11.2.0/grid/root.sh

[root@trnbank5 ~]# /u01/app/11.2.0/grid/root.sh

 
After the root scripts complete successfully, click OK.

Verify Oracle Clusterware Installation

Check CRS Status

[oracle@trnbank4 ~]$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

Check Clusterware Resources

[oracle@trnbank4 ~]$ crs_stat -t -v

Check Cluster Nodes

[oracle@trnbank4 ~]$ olsnodes -n
trnbank4        1
trnbank5        2

Check Oracle TNS Listener Process on Both Nodes

[oracle@trnbank4 ~]$ ps -ef | grep lsnr | grep -v 'grep' | grep -v 'ocfs' | awk '{print $9}'
LISTENER
LISTENER_SCAN1

Confirming Oracle ASM Function for Oracle Clusterware Files

[oracle@trnbank4 ~]$ srvctl status asm -a
ASM is running on trnbank4,trnbank5
ASM is enabled.
[oracle@trnbank5 ~]$ srvctl status asm -a
ASM is running on trnbank4,trnbank5
ASM is enabled.

Check Oracle Cluster Registry (OCR)

[oracle@trnbank4 ~]$ ocrcheck
[oracle@trnbank5 ~]$ ocrcheck

Check Voting Disk

[oracle@trnbank4 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   0cb7d89cecd74fa5bf68fc4b24745e3e (ORCL:VOT01) [VOTING]
Located 1 voting disk(s).
[oracle@trnbank5 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   0cb7d89cecd74fa5bf68fc4b24745e3e (ORCL:VOT01) [VOTING]
Located 1 voting disk(s).

Check SCAN Resolution

[oracle@trnbank4 ~]$ dig rac-scan
[oracle@trnbank5 ~]$ dig rac-scan
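Because the SCAN is resolved through /etc/hosts in this article rather than DNS, dig will return nothing useful; getent consults the hosts file as well, so it is the more telling check here:

[oracle@trnbank4 ~]$ getent hosts rac-scan
10.40.0.155     rac-scan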

Create ASM Disk Groups for Data and Fast Recovery Area

Create the two additional ASM disk groups, DATA and FRA, as the oracle user:
[oracle@trnbank4 ~]$ asmca

Now install the Oracle Database 11gR2 software only:

[oracle@trnbank4 ~]$ cd /u01/media/Oracle-Database-11gR2_Linux-64/
[oracle@trnbank4 Oracle-Database-11gR2_Linux-64]$ ls
doc  install  response  rpm  runInstaller  sshsetup  stage  welcome.html
[oracle@trnbank4 Oracle-Database-11gR2_Linux-64]$ ./runInstaller

Execute the root.sh script on both nodes as the root user.
[root@trnbank4 ~]# /u01/app/oracle/product/11.2.0/db_1/root.sh
[root@trnbank5 ~]# /u01/app/oracle/product/11.2.0/db_1/root.sh
Click Finish and the installation will complete.

Create the Oracle Cluster Database

[oracle@trnbank4 ~]$ dbca &

When the DBCA has completed, you will have a fully functional Oracle RAC 11g release 2 cluster running!

Verify Oracle Grid Infrastructure and Database Configuration

[oracle@trnbank4 ~]$ crsctl check cluster
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

All Oracle Instances - (Database Status)

[oracle@trnbank4 ~]$ srvctl status database -d racdb
Instance racdb1 is running on node trnbank4
Instance racdb2 is running on node trnbank5

Single Oracle Instance - (Status of Specific Instance)

[oracle@trnbank4 ~]$ srvctl status instance -d racdb -i racdb1
Instance racdb1 is running on node trnbank4
[oracle@trnbank5 ~]$ srvctl status instance -d racdb -i racdb2
Instance racdb2 is running on node trnbank5

Node Applications - (Status)

[oracle@trnbank4 ~]$ srvctl status nodeapps

List all Configured Databases

[oracle@trnbank4 ~]$ srvctl config database
racdb

ASM - (Status)

[oracle@trnbank4 ~]$ srvctl status asm
ASM is running on trnbank4,trnbank5

TNS listener - (Status)

[oracle@trnbank4 ~]$ srvctl status listener
Listener LISTENER is enabled
Listener LISTENER is running on node(s): trnbank4,trnbank5

SCAN - (Status)

[oracle@trnbank4 ~]$ srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node trnbank4

VIP - (Status of Specific Node)

[oracle@trnbank4 ~]$ srvctl status vip -n trnbank4
VIP trnbank4-vip is enabled
VIP trnbank4-vip is running on node: trnbank4

All running instances in the cluster - (SQL)

SQL> SELECT
            inst_id
          , instance_number inst_no
          , instance_name inst_name
          , parallel
          , status
          , database_status db_status
          , active_state state
          , host_name host
       FROM gv$instance
      ORDER BY inst_id;

   INST_ID    INST_NO INST_NAME        PAR STATUS       DB_STATUS         STATE     HOST
---------- ---------- ---------------- --- ------------ ----------------- --------- ---------
         1          1 racdb1           YES OPEN         ACTIVE            NORMAL    trnbank4
         2          2 racdb2           YES OPEN         ACTIVE            NORMAL    trnbank5

ASM Disk Volumes - (SQL)

SQL> SELECT path
FROM   v$asm_disk; 

PATH
-----------------------------------------------------------------------------
ORCL:DATA01
ORCL:DATA02
ORCL:FRA01
ORCL:FRA02
ORCL:VOT01

Starting / Stopping the Cluster

Stopping the Oracle Clusterware Stack on the Local Server
[root@trnbank4 ~]#  /u01/app/11.2.0/grid/bin/crsctl stop cluster
[root@trnbank4 ~]# /u01/app/11.2.0/grid/bin/crsctl stop cluster -all

Starting the Oracle Clusterware Stack on the Local Server

Use the "crsctl start cluster" command on racnode1 to start the Oracle Clusterware stack:
[root@trnbank4 ~]# /u01/app/11.2.0/grid/bin/crsctl start cluster
[root@trnbank4 ~]# /u01/app/11.2.0/grid/bin/crsctl start cluster -all
You can also start the Oracle Clusterware stack on one or more named servers in the cluster by listing the servers separated by a space:
[root@trnbank4 ~]# /u01/app/11.2.0/grid/bin/crsctl start cluster -n trnbank4 trnbank5

Instance Start and Stop

[oracle@trnbank4 ~]$ srvctl start instance -d racdb -i racdb1
[oracle@trnbank5 ~]$ srvctl start instance -d racdb -i racdb2
[oracle@trnbank4 ~]$ srvctl stop instance -d racdb -i racdb1
[oracle@trnbank5 ~]$ srvctl stop instance -d racdb -i racdb2

Start/Stop All Instances with SRVCTL

Finally, you can start/stop all instances and associated services using the following:
[oracle@trnbank4 ~]$ srvctl stop database -d racdb
[oracle@trnbank4 ~]$ srvctl start database -d racdb

TNS Configuration for TAF
RACDB =
  (DESCRIPTION =
    (ENABLE = BROKEN)
    (FAILOVER = ON)
    (LOAD_BALANCE = ON)
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = rac-scan)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = racdb)
      (FAILOVER_MODE =
        (TYPE = SELECT)(METHOD = BASIC)(RETRIES = 250)(DELAY = 5)
      )
    )
  )
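After connecting through this alias, a common way to confirm that TAF is in effect is to query the failover columns in gv$session (a sketch; filter on your own application user as appropriate):

SQL> SELECT inst_id, username, failover_type, failover_method, failed_over
       FROM gv$session
      WHERE username = 'SYSTEM';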

-----------------------------------------------------------------End--------------------------------------------------------------------