Oracle Forum - The Knowledge Center for Oracle Professionals - Looking Beyond the Possibilities

Full Version: Step By Step: Install and Set Up Oracle 11g R2 RAC on Oracle Enterprise Linux 5.5 (32 bit)
This document shows, step by step, how to install and set up a 3-node 11g R2 RAC cluster. The setup uses an IP-based iSCSI Openfiler SAN as the shared storage subsystem. It does not have IPMI or Grid Naming Service (GNS) configured; the SCAN is resolved through DNS.

Hardware Used in setting up 3-node 11g R2 RAC using iSCSI SAN (Openfiler):

Total Machines: 5 (3 for RAC nodes + 1 for NAS + 1 for DNS)
Network Switches: 3 (for Public, Private and Shared Storage)
Extra Network Adaptors: 7 (6 for RAC nodes (2 for each node) and one for Storage Server)
Network cables: 11 (9 for RAC nodes (3 for each node), one for Shared Storage and 1 for DNS server)
External USB HD: 1 (1 TB)

Machines Specifications:
DELL OPTIPLEX GX620
CPU: Intel 3800MHz
RAM: 4084MB
HD: 250GB
DVD, 10/100 NIC, 8 MB VRAM

Network Adaptor Specifications:
Linksys EG1032 Instant Gigabit Network Adapter
Network Switch Specifications:
D-Link 24-Port Rackmountable Gigabit Switch
Network Cables Specifications:
25-Foot Cat6 Snagless Patch Cable – (Blue, Black and Grey)

Software Used for the 3-node RAC Setup using NAS (Openfiler):

NAS Storage Solution: Openfiler 2.3 (2.6.26.8-1.0.11.smp.pae.gcc3.4.x86.i686)
Operating System: Oracle Enterprise Linux 5.5 (2.6.18-194.el5PAE)
Clusterware: Oracle 11g R2 Grid Infrastructure (11.2.0.1)
Oracle RAC: Oracle RDBMS 11g R2 (11.2.0.1)

3-Node RAC Setup

Operating System: Oracle Enterprise Linux 5.5 (2.6.18-194.el5PAE):
Server: All the RAC Nodes + DNS server

Grid Infrastructure Software (Clusterware + ASM 11.2.0.1):
Server: All the RAC Nodes
ORACLE_BASE: /u01/app/grid
ORACLE_HOME: /u01/app/grid11201
Owner: grid (Primary Group: oinstall, Secondary Group: asmadmin, asmdba)
Permissions: 755
OCR/Voting Disk Storage Type: ASM
Oracle Inventory Location: /u01/app/oraInventory

Oracle Database Software (RAC 11.2.0.1):

Server: All the RAC Nodes
ORACLE_BASE: /u01/app/oracle
ORACLE_HOME: /u01/app/oracle/db11201
Owner: oracle (Primary Group: oinstall, Secondary Group: asmdba, dba)
Permissions: 755
Oracle Inventory Location: /u01/app/oraInventory

Database Name: labdb
Listener: LAB_LISTENER (TCP:1525)

Openfiler 2.3:
Server: single dedicated server acting as NAS.
OS: Openfiler 2.3 (2.6.26.8-1.0.11.smp.pae.gcc3.4.x86.i686).

3-Node RAC Architecture:

Machine      Public Name        Private Name   VIP Name
RAC Node1    node1.hingu.net    node1-prv      node1-vip.hingu.net
RAC Node2    node2.hingu.net    node2-prv      node2-vip.hingu.net
RAC Node3    node3.hingu.net    node3-prv      node3-vip.hingu.net
Storage      nas-server         N/A            N/A
DNS server   lab-dns            N/A            N/A


SCAN IPs:
192.168.2.151
192.168.2.152
192.168.2.153


SCAN: lab-scan.hingu.net
Cluster Name: lab
Public Network: 192.168.2.0/eth2
Private network (cluster Interconnect): 192.168.0.0/eth0
Private network (Storage Network): 192.168.1.0/eth1

Machine      Public IP       Private IP     VIP             Storage IP
RAC Node1    192.168.2.1     192.168.0.1    192.168.2.51    192.168.1.1
RAC Node2    192.168.2.2     192.168.0.2    192.168.2.52    192.168.1.2
RAC Node3    192.168.2.3     192.168.0.3    192.168.2.53    192.168.1.3
Storage      N/A             N/A            N/A             192.168.1.101
DNS server   192.168.2.200   N/A            N/A             N/A






[Image: image003.jpg]



The installation is divided into 3 main categories:

Pre-installation tasks:

Server Hardware Requirements:

Each node in the cluster must meet the requirements below:

At least 1024 x 768 display resolution, so that OUI displays correctly
1 GB of space in the /tmp directory
5.5 GB of space for the Oracle Grid Infrastructure home
At least 2.5 GB of RAM and equivalent swap space (for a 32-bit installation, as in my case)
All the RAC nodes must share the same instruction-set architecture. For a test RAC setup, it is possible to mix Intel 32-bit and AMD 32-bit servers with different memory sizes and CPU speeds.

Installation of OEL5.5 (On All the RAC Nodes and DNS Host):
The selections below were made during the installation of OEL 5.5 on node 1 (node1.hingu.net). The same process was followed to install OEL 5.5 on all the remaining RAC nodes and the DNS host (lab-dns).
The hostname/IP information was chosen appropriately for each node from the architecture diagram.

Insert Installation Media #1:
Testing the CD Media: Skip
Language: English
Keyboard: U.S. English
Partition Option: “Remove all Partitions on selected drives and create default layout”
Boot Loader: “The GRUB boot loader will be installed on /dev/sda”

Network Devices:

Active on Boot   Device   IPv4/Netmask                IPv6/Prefix
Yes              eth0     192.168.0.1/255.255.255.0   Auto
Yes              eth1     192.168.1.1/255.255.255.0   Auto
Yes              eth2     192.168.2.1/255.255.255.0   Auto


Hostname: Set manually → node1.hingu.net
Ignore both of the warning messages at this point
Region: America/New York
System Clock Uses UTC (checked)
Root Password → enter the root password
Additional Tasks on top of the Default Installation: checked “Software Development” and “Web Server”
Customize Now (selected)
(Below are the extra selections on top of the default selected packages)
Applications → Authoring and Publishing (checked)
Development → Development Libraries →
libstdc++44-devel
Development → Java Development
Development → Legacy Software Development
Servers → checked all the servers
Servers → Legacy Network Server →
bootparamd, rsh-server, rusers, rusers-server, telnet-server
Servers → Network Servers →
dhcp, dhcpv6, dnsmasq, ypserv
Servers → Server Configuration Tools →
checked all
Base System → Administration Tools →
checked all
Base System → Base →
device-mapper-multipath, iscsi-initiator-utils
Base System → Legacy Software Support →
openmotif22
Base System → System Tools →
OpenIPMI-gui, lsscsi, oracle*, sysstat, tsclient

Post Installation Steps:
(1) Yes to License Agreement.
(2) Disable the firewall
(3) Disable SELinux
(4) Disable kdump
(5) Set the clock
(6) Finish

Installation of Openfiler 2.3:

Version: Openfiler V 2.3 (2.6.26.8-1.0.11.smp.pae.gcc3.4.x86.i686)

Minimum Required RPMs for OEL 5.5 (All the 3 RAC Nodes):

binutils-2.17.50.0.6
compat-libstdc++-33-3.2.3
elfutils-libelf-0.125
elfutils-libelf-devel-0.125
elfutils-libelf-devel-static-0.125
gcc-4.1.2
gcc-c++-4.1.2
glibc-2.5-24
glibc-common-2.5
glibc-devel-2.5
glibc-headers-2.5
kernel-headers-2.6.18
ksh-20060214
libaio-0.3.106
libaio-devel-0.3.106
libgcc-4.1.2
libgomp-4.1.2
libstdc++-4.1.2
libstdc++-devel-4.1.2
make-3.81
numactl-devel-0.9.8.i386
sysstat-7.0.2
unixODBC-2.2.11
unixODBC-devel-2.2.11

The command below verifies whether the specified RPMs are installed. Any missing RPMs can be installed from the OEL media pack.

rpm -q binutils compat-libstdc++-33 elfutils-libelf elfutils-libelf-devel elfutils-libelf-devel-static \
gcc gcc-c++ glibc glibc-common glibc-devel glibc-headers kernel-headers ksh libaio libaio-devel \
libgcc libgomp libstdc++ libstdc++-devel make numactl-devel sysstat unixODBC unixODBC-devel
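To make missing packages stand out, the same check can be wrapped in a small loop (a convenience sketch using the package list above; rpm -q matches on the base package name):

```shell
# Report only the prerequisite RPMs that rpm cannot find.
for p in binutils compat-libstdc++-33 elfutils-libelf elfutils-libelf-devel \
         elfutils-libelf-devel-static gcc gcc-c++ glibc glibc-common glibc-devel \
         glibc-headers kernel-headers ksh libaio libaio-devel libgcc libgomp \
         libstdc++ libstdc++-devel make numactl-devel sysstat unixODBC unixODBC-devel
do
  rpm -q "$p" >/dev/null 2>&1 || echo "MISSING: $p"
done
```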


I had to install the below extra RPMs:

numactl-devel → located on the 3rd CD of the OEL 5.5 media pack
oracleasmlib → available from Oracle's ASMLib download page

Network Configuration for RAC Nodes/NAS Server/DNS Host:

Public IPs, VIPs, and SCAN VIPs are resolved by DNS. The private IPs for the cluster interconnect are resolved through /etc/hosts. The hostname, along with the public/private and NAS networks, was configured at OEL installation time. The final network configuration files are listed here.

(a) hostname:

For Node node1:
[root@node1 ~]# hostname node1.hingu.net

node1.hingu.net: /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=yes
HOSTNAME=node1.hingu.net

For Node node2:
[root@node2 ~]# hostname node2.hingu.net

node2.hingu.net: /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=yes
HOSTNAME=node2.hingu.net

For Node node3:
[root@node3 ~]# hostname node3.hingu.net

node3.hingu.net: /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=yes
HOSTNAME=node3.hingu.net

For Node nas-server:
[root@nas-server ~]# hostname nas-server

nas-server: /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=nas-server

For Node lab-dns:
[root@lab-dns ~]# hostname lab-dns

lab-dns.hingu.net: /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=yes
HOSTNAME=lab-dns.hingu.net

(b) Private Network for Cluster Interconnect:

node1.hingu.net: /etc/sysconfig/network-scripts/ifcfg-eth0
# Linksys Gigabit Network Adapter
DEVICE=eth0
BOOTPROTO=static
BROADCAST=192.168.0.255
HWADDR=00:22:6B:BF:4E:60
IPADDR=192.168.0.1
IPV6INIT=yes
IPV6_AUTOCONF=yes
NETMASK=255.255.255.0
NETWORK=192.168.0.0
ONBOOT=yes

node2.hingu.net: /etc/sysconfig/network-scripts/ifcfg-eth0
# Linksys Gigabit Network Adapter
DEVICE=eth0
BOOTPROTO=static
BROADCAST=192.168.0.255
HWADDR=00:22:6B:BF:4E:4B
IPADDR=192.168.0.2
IPV6INIT=yes
IPV6_AUTOCONF=yes
NETMASK=255.255.255.0
NETWORK=192.168.0.0
ONBOOT=yes

node3.hingu.net: /etc/sysconfig/network-scripts/ifcfg-eth0
# Linksys Gigabit Network Adapter
DEVICE=eth0
BOOTPROTO=static
BROADCAST=192.168.0.255
HWADDR=00:22:6B:BF:4E:49
IPADDR=192.168.0.3
IPV6INIT=yes
IPV6_AUTOCONF=yes
NETMASK=255.255.255.0
NETWORK=192.168.0.0
ONBOOT=yes

(c) Public Network:

node1.hingu.net: /etc/sysconfig/network-scripts/ifcfg-eth2
# Broadcom Corporation NetXtreme BCM5751 Gigabit Ethernet PCI Express
DEVICE=eth2
BOOTPROTO=static
BROADCAST=192.168.2.255
HWADDR=00:18:8B:04:6A:62
IPADDR=192.168.2.1
IPV6INIT=yes
IPV6_AUTOCONF=yes
NETMASK=255.255.255.0
NETWORK=192.168.2.0
ONBOOT=yes

node2.hingu.net: /etc/sysconfig/network-scripts/ifcfg-eth2
# Broadcom Corporation NetXtreme BCM5751 Gigabit Ethernet PCI Express
DEVICE=eth2
BOOTPROTO=static
BROADCAST=192.168.2.255
HWADDR=00:18:8B:24:F8:58
IPADDR=192.168.2.2
IPV6INIT=yes
IPV6_AUTOCONF=yes
NETMASK=255.255.255.0
NETWORK=192.168.2.0
ONBOOT=yes

node3.hingu.net: /etc/sysconfig/network-scripts/ifcfg-eth2
# Broadcom Corporation NetXtreme BCM5751 Gigabit Ethernet PCI Express
DEVICE=eth2
BOOTPROTO=static
BROADCAST=192.168.2.255
HWADDR=00:19:B9:0C:E6:EF
IPADDR=192.168.2.3
IPV6INIT=yes
IPV6_AUTOCONF=yes
NETMASK=255.255.255.0
NETWORK=192.168.2.0
ONBOOT=yes

lab-dns.hingu.net: /etc/sysconfig/network-scripts/ifcfg-eth0
# Broadcom Corporation NetXtreme BCM5751 Gigabit Ethernet PCI Express
DEVICE=eth0
BOOTPROTO=static
BROADCAST=192.168.2.255
HWADDR=00:13:72:A1:E9:1B
IPADDR=192.168.2.200
NETMASK=255.255.255.0
NETWORK=192.168.2.0
ONBOOT=yes

(d) Private Network for Shared Storage:

node1.hingu.net: /etc/sysconfig/network-scripts/ifcfg-eth1
# Linksys Gigabit Network Adapter
DEVICE=eth1
BOOTPROTO=static
BROADCAST=192.168.1.255
HWADDR=00:22:6B:BF:4E:60
IPADDR=192.168.1.1
IPV6INIT=yes
IPV6_AUTOCONF=yes
NETMASK=255.255.255.0
NETWORK=192.168.1.0
ONBOOT=yes

node2.hingu.net: /etc/sysconfig/network-scripts/ifcfg-eth1
# Linksys Gigabit Network Adapter
DEVICE=eth1
BOOTPROTO=static
BROADCAST=192.168.1.255
HWADDR=00:22:6B:BF:45:13
IPADDR=192.168.1.2
IPV6INIT=yes
IPV6_AUTOCONF=yes
NETMASK=255.255.255.0
NETWORK=192.168.1.0
ONBOOT=yes

node3.hingu.net: /etc/sysconfig/network-scripts/ifcfg-eth1
# Linksys Gigabit Network Adapter
DEVICE=eth1
BOOTPROTO=static
BROADCAST=192.168.1.255
HWADDR=00:22:6B:BF:4E:48
IPADDR=192.168.1.3
IPV6INIT=yes
IPV6_AUTOCONF=yes
NETMASK=255.255.255.0
NETWORK=192.168.1.0
ONBOOT=yes


nas-server.hingu.net: /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
BOOTPROTO=static
BROADCAST=192.168.1.255
HWADDR=00:22:6B:BF:43:D6
IPADDR=192.168.1.101
NETMASK=255.255.255.0
NETWORK=192.168.1.0
ONBOOT=yes
TYPE=Ethernet

(e) /etc/hosts files:

node1.hingu.net: /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 node1.hingu.net node1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6


##=======================================
# Private Network for Cluster Interconnect
##=======================================

192.168.0.1 node1-prv
192.168.0.2 node2-prv
192.168.0.3 node3-prv

##=======================================
##=======================================

node2.hingu.net: /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 node2.hingu.net node2 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6


##=======================================
# Private Network for Cluster Interconnect
##=======================================

192.168.0.1 node1-prv
192.168.0.2 node2-prv
192.168.0.3 node3-prv

##=======================================
##=======================================

node3.hingu.net: /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 node3.hingu.net node3 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6


##=======================================
# Private Network for Cluster Interconnect
##=======================================

192.168.0.1 node1-prv
192.168.0.2 node2-prv
192.168.0.3 node3-prv

##=======================================
##=======================================

lab-dns.hingu.net: /etc/hosts
## Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 lab-dns.hingu.net lab-dns localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6

Configure DNS server for SCAN VIPs, Node VIPs and Node IPs:

DNS Server: lab-dns.hingu.net

RPMs required in setting up DNS server:

ypbind-1.19-12.el5
bind-utils-9.3.6-4.P1.el5_4.2
bind-9.3.6-4.P1.el5_4.2
system-config-bind-4.0.3-4.0.1.el5
bind-libs-9.3.6-4.P1.el5_4.2
bind-chroot-9.3.6-4.P1.el5_4.2

Configurations files modified/created to set up DNS:

lab-dns.hingu.net

/var/named/chroot/etc/named.conf (modified)
/var/named/chroot/var/named/hingu.net.zone (created)
/var/named/chroot/var/named/2.168.192.in-addr.arpa.zone (created)
/var/named/chroot/var/named/1.168.192.in-addr.arpa.zone (created)

On node1, node2 and node3

/etc/resolv.conf (modified)

/var/named/chroot/etc/named.conf

// Enterprise Linux BIND Configuration Tool
//
// Default initial "Caching Only" name server configuration
//

options {
directory "/var/named";
dump-file "/var/named/data/cache_dump.db";
statistics-file "/var/named/data/named_stats.txt";
};


// Zone for this RAC configuration is hingu.net

zone "hingu.net" in {
type master;
file "hingu.net.zone";
allow-update { none; };
};

// For reverse lookups

zone "2.168.192.in-addr.arpa" in {
type master;
file "2.168.192.in-addr.arpa.zone";
allow-update { none; };
};

zone "1.168.192.in-addr.arpa" in {
type master;
file "1.168.192.in-addr.arpa.zone";
allow-update { none; };
};

include "/etc/rndc.key";

/var/named/chroot/var/named/hingu.net.zone

$TTL 1d
hingu.net. IN SOA lab-dns.hingu.net. root.hingu.net. (
100 ; se = serial number
8h ; ref = refresh
5m ; ret = update retry
3w ; ex = expiry
3h ; min = minimum
)

IN NS lab-dns.hingu.net.

; DNS server

lab-dns IN A 192.168.2.200

; RAC Nodes Public name

node1 IN A 192.168.2.1
node2 IN A 192.168.2.2
node3 IN A 192.168.2.3

; RAC Nodes Public VIPs

node1-vip IN A 192.168.2.51
node2-vip IN A 192.168.2.52
node3-vip IN A 192.168.2.53

; 3 SCAN VIPs

lab-scan IN A 192.168.2.151
lab-scan IN A 192.168.2.152
lab-scan IN A 192.168.2.153

; Storage Network

nas-server IN A 192.168.1.101
node1-nas IN A 192.168.1.1
node2-nas IN A 192.168.1.2
node3-nas IN A 192.168.1.3


/var/named/chroot/var/named/2.168.192.in-addr.arpa.zone

$TTL 1d
@ IN SOA lab-dns.hingu.net. root.hingu.net. (
100 ; se = serial number
8h ; ref = refresh
5m ; ret = update retry
3w ; ex = expiry
3h ; min = minimum
)

IN NS lab-dns.hingu.net.

; DNS machine name in reverse
200 IN PTR lab-dns.hingu.net.

; RAC Nodes Public Name in Reverse

1 IN PTR node1.hingu.net.
2 IN PTR node2.hingu.net.
3 IN PTR node3.hingu.net.

; RAC Nodes Public VIPs in Reverse

51 IN PTR node1-vip.hingu.net.
52 IN PTR node2-vip.hingu.net.
53 IN PTR node3-vip.hingu.net.

; RAC Nodes SCAN VIPs in Reverse

151 IN PTR lab-scan.hingu.net.
152 IN PTR lab-scan.hingu.net.
153 IN PTR lab-scan.hingu.net.

/var/named/chroot/var/named/1.168.192.in-addr.arpa.zone

$TTL 1d
@ IN SOA lab-dns.hingu.net. root.hingu.net. (
100 ; se = serial number
8h ; ref = refresh
5m ; ret = update retry
3w ; ex = expiry
3h ; min = minimum
)

IN NS lab-dns.hingu.net.

; Storage Network Reverse Lookup

101 IN PTR nas-server.hingu.net.
1 IN PTR node1-nas.hingu.net.
2 IN PTR node2-nas.hingu.net.
3 IN PTR node3-nas.hingu.net.

/etc/resolv.conf (on RAC nodes):

search hingu.net
nameserver 192.168.2.200

Start the DNS Service (named):

service named start
chkconfig --level 35 named on

Verify the DNS Setup:

[Image: image004.jpg]
NOTE: nslookup for lab-scan should return names in random order every time.
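The same checks can be run from the command line on any RAC node; dig is part of the bind-utils package listed earlier (a verification sketch, not part of the original setup):

```shell
# Forward lookup of a public name (should return 192.168.2.1)
nslookup node1.hingu.net
# Reverse lookup of a VIP (should return node1-vip.hingu.net)
nslookup 192.168.2.51
# SCAN lookup: all three SCAN IPs should come back, in rotating order
dig +short lab-scan.hingu.net
```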

Enable Name Service Cache Daemon nscd: (On all the RAC Nodes)

chkconfig --level 35 nscd on
service nscd start

Creating Oracle Users/Groups/Permissions and Installation Paths: (On all the RAC Nodes):

userdel oracle
groupdel oinstall
groupdel dba
groupadd -g 1000 oinstall
groupadd -g 1020 asmadmin
groupadd -g 1021 asmdba
groupadd -g 1031 dba
useradd -u 1100 -g oinstall -G asmadmin,asmdba grid
useradd -u 1101 -g oinstall -G dba,asmdba oracle
mkdir -p /u01/app/grid11201
mkdir -p /u01/app/grid
chown -R grid:oinstall /u01
mkdir -p /u01/app/oracle
chown oracle:oinstall /u01/app/oracle
chmod -R 775 /u01
passwd grid
passwd oracle
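A quick way to confirm the accounts match the planned layout (a spot-check sketch):

```shell
# Both users should share the oinstall primary group (gid 1000),
# with grid in asmadmin/asmdba and oracle in dba/asmdba.
id grid
id oracle
```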

Install cvuqdisk Package: (On all the RAC Nodes):

This package is located in the rpm directory on the Grid Infrastructure media and needs to be installed after the group oinstall is created. In my case, as this was a fresh install of 11g R2 on new hardware, no older version of cvuqdisk was present. If one is present, the older version needs to be removed first.

export CVUQDISK_GRP=oinstall
echo $CVUQDISK_GRP
rpm -ivh cvuqdisk-1.0.7-1.rpm

[root@node1 rpm]# pwd
/home/grid/11gR2_for_OEL5/grid11201/grid/rpm
[root@node1 rpm]# export CVUQDISK_GRP=oinstall
[root@node1 rpm]# echo $CVUQDISK_GRP
oinstall
[root@node1 rpm]# rpm -ivh cvuqdisk-1.0.7-1.rpm
Preparing... ########################################### [100%]
1:cvuqdisk ########################################### [100%]
[root@node1 rpm]# rpm -qa | grep cvuqdisk
cvuqdisk-1.0.7-1
[root@node1 rpm]#

Network Time Protocol Setting (On all the RAC Nodes):

In this installation, Oracle's Cluster Time Synchronization Service (CTSS) is used instead of the Linux-provided ntpd. So, ntpd needs to be deactivated and removed to avoid any possibility of it conflicting with CTSS.

# /sbin/service ntpd stop
# chkconfig ntpd off
# mv /etc/ntp.conf /etc/ntp.conf.org

Also remove the following file:

/var/run/ntpd.pid
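To confirm ntpd is fully out of the picture before the Grid install (so that CTSS starts in active mode), something like the following can be checked (a sketch using EL5's service/chkconfig tooling):

```shell
# ntpd should be stopped and disabled in every runlevel
service ntpd status
chkconfig --list ntpd
# /etc/ntp.conf should no longer exist (it was moved to ntp.conf.org)
test ! -f /etc/ntp.conf && echo "ntp.conf is out of the way"
```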

Configure Grid Infrastructure as well as Oracle RAC Owner’s User Environment (grid and oracle):

(a) Set the umask to 022 by putting below line into these users’ (grid and oracle) .bash_profile files:

umask 022

Then sourced the .bash_profile and verified that the correct value of umask is displayed.

[grid@node1 ~]$ . .bash_profile
[grid@node1 ~]$ umask

(b) Setting up X11 forwarding:

Created the file ~/.ssh/config to disable X11 forwarding by placing the lines below in it.

Host *
ForwardX11 no

(c) Suppressed terminal output on STDOUT and STDERR to prevent installation errors:
Modified the file ~/.bashrc (or ~/.cshrc for the C shell) with the entry below.

Bourne, Bash, or Korn shell:

if [ -t 0 ]; then
stty intr ^C
fi

C shell:

test -t 0
if ($status == 0) then
stty intr ^C
endif

(d) Increased the Shell Limits:
Recommended:

Resource                Soft Limit   Hard Limit
Processes               2047         16384
Open File Descriptors   1024         65536
Stack                   10240        10240 - 32768


Set:

Resource                Soft Limit   Hard Limit
Processes               131072       131072
Open File Descriptors   131072       131072
Stack                   32768        32768


Added the following lines to the /etc/security/limits.conf file:

oracle soft nofile 131072
oracle hard nofile 131072
oracle soft nproc 131072
oracle hard nproc 131072
oracle soft core unlimited
oracle hard core unlimited
oracle soft memlock 3500000
oracle hard memlock 3500000
# Recommended stack hard limit 32MB for oracle installations
# oracle hard stack 32768

grid soft nofile 131072
grid hard nofile 131072
grid soft nproc 131072
grid hard nproc 131072
grid soft core unlimited
grid hard core unlimited
grid soft memlock 3500000
grid hard memlock 3500000
# Recommended stack hard limit 32MB for grid installations
# grid hard stack 32768

Added the following line in the /etc/pam.d/login file, if it does not already exist:

session required /lib/security/pam_limits.so

For the Bourne, Bash, or Korn shell, add the following lines to the /etc/profile:

if [ $USER = "oracle" ]; then
if [ $SHELL = "/bin/ksh" ]; then
ulimit -p 131072
ulimit -n 131072
else
ulimit -u 131072 -n 131072
fi
fi

if [ $USER = "grid" ]; then
if [ $SHELL = "/bin/ksh" ]; then
ulimit -p 131072
ulimit -n 131072
else
ulimit -u 131072 -n 131072
fi
fi


For the C shell (csh or tcsh), add the following lines to the /etc/csh.login.

if ( $USER == "oracle" ) then
limit maxproc 131072
limit descriptors 131072
endif

if ( $USER == "grid" ) then
limit maxproc 131072
limit descriptors 131072
endif
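After editing limits.conf and the profile files, the effective limits can be spot-checked from a fresh login shell (a verification sketch; both values should report 131072):

```shell
# -n = open file descriptors, -u = max user processes
su - grid   -c 'ulimit -n; ulimit -u'
su - oracle -c 'ulimit -n; ulimit -u'
```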


(e) Set the below Kernel Parameters with recommended range in /etc/sysctl.conf
These parameters were already set by the installation of the oracle-validated package.

/etc/sysctl.conf

# Kernel sysctl configuration file for Oracle Enterprise Linux
#
# For binary values, 0 is disabled, 1 is enabled. See sysctl(8) and
# sysctl.conf(5) for more details.

# Controls IP packet forwarding
net.ipv4.ip_forward = 0

# Controls source route verification
net.ipv4.conf.default.rp_filter = 1

# Do not accept source routing
net.ipv4.conf.default.accept_source_route = 0

# Controls the System Request debugging functionality of the kernel
kernel.sysrq = 1

# Controls whether core dumps will append the PID to the core filename
# Useful for debugging multi-threaded applications
kernel.core_uses_pid = 1

# Controls the use of TCP syncookies
net.ipv4.tcp_syncookies = 1

# Controls the maximum size of a message, in bytes
kernel.msgmnb = 65536

# Controls the default maxmimum size of a mesage queue
kernel.msgmax = 8192

# Controls the maximum shared segment size, in bytes
kernel.shmmax = 4294967295

# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 1073741824
# For 11g, recommended value for file-max is 6815744
fs.file-max = 6815744
# For 10g, uncomment 'fs.file-max 327679', comment other entries for this parameter and re-run sysctl -p
# fs.file-max:327679
kernel.msgmni = 2878
kernel.sem = 250 32000 100 142
kernel.shmmni = 4096
net.core.rmem_default = 262144
# For 11g, recommended value for net.core.rmem_max is 4194304
net.core.rmem_max = 4194304
# For 10g, uncomment 'net.core.rmem_max 2097152', comment other entries for this parameter and re-run sysctl -p
# net.core.rmem_max=2097152
net.core.wmem_default = 262144
# For 11g, recommended value for wmem_max is 1048576
net.core.wmem_max = 1048576
# For 10g, uncomment 'net.core.wmem_max 262144', comment other entries for this parameter and re-run sysctl -p
# net.core.wmem_max:262144
fs.aio-max-nr = 3145728
# For 11g, recommended value for ip_local_port_range is 9000 65500
net.ipv4.ip_local_port_range = 9000 65500
# For 10g, uncomment 'net.ipv4.ip_local_port_range 1024 65000', comment other entries for this parameter and re-run sysctl -p
# net.ipv4.ip_local_port_range:1024 65000
# Added min_free_kbytes 50MB to avoid OOM killer on EL4/EL5
vm.min_free_kbytes = 51200
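The edited file can be loaded into the running kernel without a reboot, and individual values spot-checked (a sketch; parameter names and values as set above):

```shell
# Apply /etc/sysctl.conf to the running kernel
sysctl -p
# Spot-check a few of the Oracle-related values
sysctl -n fs.file-max                   # should now report 6815744
sysctl -n net.ipv4.ip_local_port_range  # should report 9000 65500
sysctl -n kernel.sem
```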

(f) Repeated this process for all the remaining nodes in the cluster.

SSH user Equivalency configuration (grid and oracle):

On All the Cluster Nodes:

su - oracle
mkdir ~/.ssh
chmod 700 ~/.ssh

Generate the RSA and DSA keys:

/usr/bin/ssh-keygen -t rsa
/usr/bin/ssh-keygen -t dsa

On node1:

touch ~/.ssh/authorized_keys
cd ~/.ssh

(a) Add these Keys to the Authorized_keys file.

cat id_rsa.pub >> authorized_keys
cat id_dsa.pub >> authorized_keys

(b) Send this file to node2.

scp authorized_keys node2:.ssh/

On node2:

(a) Add these Keys to the Authorized_keys file.

cd ~/.ssh
cat id_rsa.pub >> authorized_keys
cat id_dsa.pub >> authorized_keys

(b) Send this file to node3.

scp authorized_keys node3:.ssh/
On node3:

(a) Add these Keys to the Authorized_keys file.

cd ~/.ssh
cat id_rsa.pub >> authorized_keys
cat id_dsa.pub >> authorized_keys

(b) Send this file to node1 and node2.

scp authorized_keys node1:.ssh/
scp authorized_keys node2:.ssh/

On All the Nodes:

chmod 600 ~/.ssh/authorized_keys

ssh node1 date
ssh node2 date
ssh node3 date
ssh node1.hingu.net date
ssh node2.hingu.net date
ssh node3.hingu.net date
ssh node1-prv date
ssh node2-prv date
ssh node3-prv date

Entered 'yes' and continued when prompted
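The per-host checks above can also be driven by a loop; BatchMode makes any host that still prompts for a password fail immediately instead of hanging (a convenience sketch using the host names from this setup):

```shell
# Verify passwordless SSH to every name the installer may use.
for h in node1 node2 node3 \
         node1.hingu.net node2.hingu.net node3.hingu.net \
         node1-prv node2-prv node3-prv
do
  ssh -o BatchMode=yes "$h" date || echo "FAILED: $h"
done
```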

Repeat the Above process for user grid:

Configure the Shared Storage for 11g R2 Grid Infrastructure and RAC Database:

Volume Group: grid
Physical Volume: /dev/sda5 (the 1st extended partition on the last physical partition of the local disk /dev/sda on nas-server)
Logical Volumes: asmdisk01, asmdisk02, asmdisk03

Configure the iSCSI Disk Devices for Oracle ASM with ASMLib:

(a) Partition the disk devices (from one node only): format these disks to contain a single primary partition each, to represent them at the time of creating the ASM disks using oracleasm.

[root@node1 ~]# fdisk /dev/sdb

The number of cylinders for this disk is set to 24992.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-24992, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-24992, default 24992):
Using default value 24992

Command (m for help): p

Disk /dev/sdb: 26.2 GB, 26206011392 bytes
64 heads, 32 sectors/track, 24992 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

Device Boot Start End Blocks Id System
/dev/sdb1 1 24992 25591792 83 Linux

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@node1 ~]# fdisk /dev/sdc

The number of cylinders for this disk is set to 25024.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-25024, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-25024, default 25024):
Using default value 25024

Command (m for help): p

Disk /dev/sdc: 26.2 GB, 26239565824 bytes
64 heads, 32 sectors/track, 25024 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

Device Boot Start End Blocks Id System
/dev/sdc1 1 25024 25624560 83 Linux

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@node1 ~]# fdisk /dev/sdd

The number of cylinders for this disk is set to 25248.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-25248, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-25248, default 25248):
Using default value 25248

Command (m for help): p

Disk /dev/sdd: 26.4 GB, 26474446848 bytes
64 heads, 32 sectors/track, 25248 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

Device Boot Start End Blocks Id System
/dev/sdd1 1 25248 25853936 83 Linux

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@node1 ~]#

[Image: image018.jpg]

(b) Refreshed the kernel on the remaining nodes with the latest partition table using partprobe.

[root@node1 ~]# partprobe
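Since the partitions were created on node1 only, partprobe must also run on the other nodes; with SSH equivalency in place this can be driven from node1 (a sketch assuming root SSH equivalency, which this setup does not configure; otherwise run partprobe locally on each node):

```shell
# Re-read the new partition tables on the remaining RAC nodes.
for h in node2 node3; do
  ssh "$h" /sbin/partprobe
done
```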

(c) Verified that the RPMs below are installed before configuring the ASM driver.

oracleasm-2.6.18-194.el5-2.0.5-1.el5
oracleasm-support-2.1.3-1.el5
oracleasm-2.6.18-194.el5xen-2.0.5-1.el5
oracleasm-2.6.18-194.el5PAE-2.0.5-1.el5
oracleasm-2.6.18-194.el5debug-2.0.5-1.el5
oracleasmlib-2.0.4-1.el5

(d) Configured the ASMLib (All the RAC nodes):

[root@node1 ~]# oracleasm configure -i

(e) Loaded the ASMLib Module (All the RAC nodes):

[root@node1 ~]# oracleasm init

(f) Created the ASM disks using oracleasm: (ONLY from one of the RAC nodes)

oracleasm createdisk DSK01 /dev/sdb1
oracleasm createdisk DSK02 /dev/sdc1
oracleasm createdisk DSK03 /dev/sdd1
oracleasm scandisks
oracleasm listdisks


(g) On the Remaining RAC nodes, simply scanned the ASM disks to instantiate these newly created disks

oracleasm scandisks
oracleasm listdisks

(h) Verified that these ASM disk can be discovered by ASM Libraries (oracleasmlib) on all the RAC Nodes during Installation of Grid Infrastructure.

/usr/sbin/oracleasm-discover 'ORCL:*'

[grid@node1 ~]$ /usr/sbin/oracleasm-discover 'ORCL:*'
Using ASMLib from /opt/oracle/extapi/32/asm/orcl/1/libasm.so
[ASM Library - Generic Linux, version 2.0.4 (KABI_V2)]
Discovered disk: ORCL:DSK01 [51183584 blocks (26205995008 bytes), maxio 512]
Discovered disk: ORCL:DSK02 [51249120 blocks (26239549440 bytes), maxio 512]
Discovered disk: ORCL:DSK03 [51707872 blocks (26474430464 bytes), maxio 512]
[grid@node1 ~]$

With this, the pre-installation steps are complete, and we are ready to install the 11g R2 Grid Infrastructure software.



Get ready with the RAC Architecture Diagram and RAC Setup Information about Cluster Name, SCAN Name, SCAN VIPs, Node Names, VIPs, and Inventory Location.

3-Node RAC Architecture:

Machine      Public Name        Private Name   VIP Name
RAC Node1    node1.hingu.net    node1-prv      node1-vip.hingu.net
RAC Node2    node2.hingu.net    node2-prv      node2-vip.hingu.net
RAC Node3    node3.hingu.net    node3-prv      node3-vip.hingu.net


SCAN: lab-scan.hingu.net
Cluster Name: lab
Public Network: 192.168.2.0/eth2
Private network: 192.168.0.0/eth0



Grid Infrastructure Software (Clusterware + ASM 11.2.0.1):
Server: All the RAC Nodes
ORACLE_BASE: /u01/app/grid
ORACLE_HOME: /u01/app/grid11201
Owner: grid (Primary Group: oinstall, Secondary Group: asmadmin, asmdba)
Permissions: 755
OCR/Voting Disk Storage Type: ASM
Oracle Inventory Location: /u01/app/oraInventory

Start the runInstaller from Grid Infrastructure Software Location:

[grid@node1 grid]$ /home/grid/11gR2_for_OEL5/grid11201/grid/runInstaller

Grid Infrastructure installation process:

Installation Option:
Install and Configure Grid Infrastructure for a Cluster
Installation Type:
Advanced Installation
Product Language:
English
Grid Plug and Play:
Cluster Name: lab
SCAN Name: lab-scan.hingu.net
SCAN Port: 1521
Configure GNS: Unchecked
Cluster Node Information:
Entered the Hostname and VIP names of the Cluster Nodes.
Checked the SSH connectivity
Network Interface Usage:
OUI picked up all the interfaces correctly. I did not have to make any changes here.
Storage Option:
Automatic Storage Management (ASM)
Create ASM Disk Group:
Disk Group Name: GIS_FILES
Redundancy: External
Candidate Disk: ORCL:DSK01
ASM Password:
Use Same Password for these accounts. (Ignored password warnings.)
Failure Isolation:
Do not use Intelligent Platform Management Interface (IPMI)
Operating System Groups:
ASM Database Administrator (OSDBA) Group: asmdba
ASM Instance Administrator Operator (OSOPER) Group: oinstall
ASM Instance Administrator (OSASM) Group: asmadmin
Installation Location:
Oracle Base: /u01/app/grid
Software Location: /u01/app/grid11201
Create Inventory:
Inventory Directory: /u01/app/oraInventory
Prerequisite Checks:
Ignored the NTP check failure so that Oracle’s Cluster Time Synchronization Service (CTSS) is used instead. The clocks on all 3 nodes were set to the same time before the installation.
Summary Screen:
Verified the information here and pressed “Finish” to start installation.

At the end of the installation, the two scripts below need to be executed as the root user.

/u01/app/oraInventory/orainstRoot.sh
/u01/app/grid11201/root.sh

After these scripts complete successfully, the 11g R2 High Availability Services (CRS, CSS and EVMD) should all be up and running.
Verify the status of the installation using the commands below.

crsctl check cluster -all
crs_stat -t -v
crsctl check ctss

The GSD and OC4J resources are disabled by default. Enable GSD only if a 9i database is managed by this cluster.

srvctl enable nodeapps -g
srvctl start nodeapps -n node1
srvctl start nodeapps -n node2
srvctl start nodeapps -n node3
srvctl enable oc4j
srvctl start oc4j


Install 11g R2 RAC Software

Start the runInstaller from 11g R2 Real Application Cluster (RAC) Software Location:

[oracle@node1 ~]$ /home/oracle/db11201/database/runInstaller

Real Application Cluster installation process:
Configure Security Updates:
Email: abdul.rehman@raqmiyat.com
Ignored the “Connection Failed” alert.
Installation Option:
Install database software only
Node Selection:
Selected all the nodes (node1, node2 and node3)
Product Language:
English
Database Edition:
Enterprise Edition
Installation Location:
Oracle Base: /u01/app/oracle
Software Location: /u01/app/oracle/db11201
Operating System Groups:
Database Administrator (OSDBA) Group: dba
Database Operator (OSOPER) Group: oinstall
Network Interface Usage:
OUI picked up all the interfaces correctly. I did not have to make any changes here.
Summary Screen:
Verified the information here and pressed “Finish” to start installation.

At the end of the installation, the script below needs to be executed on all the nodes as the root user.

/u01/app/oracle/db11201/root.sh





Run the root.sh on all the RAC nodes.

[Image: image013.jpg]

At this point, the 11g R2 (11.2.0.1) Real Application Clusters installation has completed successfully. Now it’s time to create new ASM disk groups and a listener for the new database.
Add Listener:
Listener name: LAB_LISTENER
Endpoint: TCP:1525

Invoke the netca utility under $ORACLE_HOME/bin to create the listener, providing the above information in the appropriate screens. The screenshots of this process are below.
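For reference, a sketch of the kind of entry netca generates in $ORACLE_HOME/network/admin/listener.ora on node1. This is illustrative only: the VIP host name is taken from the architecture table above, and netca also registers the listener as a CRS resource, which a hand-edited file would not do.

```
# Hypothetical listener.ora fragment on node1 (not a substitute for
# running netca, which also registers the listener with CRS).
LAB_LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = IPC)(KEY = LAB_LISTENER))
      (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip.hingu.net)(PORT = 1525))
    )
  )
```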



Create ASM Disk Groups:

Disk Group Names: FRA and DATA
ASM Disks: DSK02 and DSK03, respectively
Redundancy Level: External

Connect as the grid user and invoke the asmca utility under $GRID_HOME/bin to create these disk groups. The screenshots of this process are below.
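As an alternative to the asmca screens, the same disk groups can be created from SQL*Plus connected to the ASM instance as SYSASM. A sketch, assuming the disks are the ASMLib-labeled devices ORCL:DSK02 and ORCL:DSK03:

```sql
-- Run while connected to the ASM instance (e.g. +ASM1) as SYSASM.
-- External redundancy means ASM does no mirroring; protection is
-- left to the underlying storage.
CREATE DISKGROUP FRA  EXTERNAL REDUNDANCY DISK 'ORCL:DSK02';
CREATE DISKGROUP DATA EXTERNAL REDUNDANCY DISK 'ORCL:DSK03';
```

Disk groups created this way are mounted only on the local instance until mounted on (or registered with) the other nodes, which is one reason asmca is the more convenient tool in a cluster.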





Create 11g R2 RAC database using dbca:

DB Name: labdb

Connect as the oracle user and invoke the dbca utility under $ORACLE_HOME/bin to create a new database. After the database is created successfully, modify the LOCAL_LISTENER parameter on all the RAC nodes to point to port 1525, then test connectivity to the database. The HOST and PORT values in the TNS alias (connect string) should be set to lab-scan.hingu.net and 1521 respectively; this way, all a client needs to know is the cluster host and port (lab-scan.hingu.net:1521). The database instances register with the SCAN listeners (port 1521) running on the SCAN VIPs via the REMOTE_LISTENER parameter, and a client connect request arriving at a SCAN listener is redirected to the least-loaded database instance offering the service name specified in the connect string. In short, each database instance registers with both its LOCAL_LISTENER (the regular database listener) and the REMOTE_LISTENER (the SCAN).
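The LOCAL_LISTENER change described above can be made from SQL*Plus as the oracle user. A sketch for instance labdb1 on node1; repeat with the matching VIP host name and SID on the other nodes (the VIP names come from the architecture table, the port from the listener created earlier):

```sql
-- Point instance labdb1 at the local database listener on port 1525.
-- SCOPE=BOTH updates both the running instance and the spfile.
ALTER SYSTEM SET LOCAL_LISTENER =
  '(ADDRESS=(PROTOCOL=TCP)(HOST=node1-vip.hingu.net)(PORT=1525))'
  SCOPE=BOTH SID='labdb1';
```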

The screenshots of this process are below.


[Image: image041.jpg]
[Image: image042.jpg]

[Image: image043.jpg]

[grid@node1 ~]$ lsnrctl status listener_scan1

LSNRCTL for Linux: Version 11.2.0.1.0 - Production on 03-SEP-2011 22:30:53

Copyright © 1991, 2009, Oracle. All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------
Alias LISTENER_SCAN1
Version TNSLSNR for Linux: Version 11.2.0.1.0 - Production
Start Date 03-SEP-2011 22:23:08
Uptime 0 days 0 hr. 7 min. 45 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /u01/app/grid11201/network/admin/listener.ora
Listener Log File /u01/app/grid11201/log/diag/tnslsnr/node1/listener_scan1/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.2.152)(PORT=1521)))
Services Summary...
Service "labdb.hingu.net" has 3 instance(s).
Instance "labdb1", status READY, has 1 handler(s) for this service...
Instance "labdb2", status READY, has 1 handler(s) for this service...
Instance "labdb3", status READY, has 1 handler(s) for this service...
Service "labdbXDB.hingu.net" has 3 instance(s).
Instance "labdb1", status READY, has 1 handler(s) for this service...
Instance "labdb2", status READY, has 1 handler(s) for this service...
Instance "labdb3", status READY, has 1 handler(s) for this service...
The command completed successfully
[grid@node1 ~]$

[oracle@node1 ~]$ lsnrctl status lab_listener

LSNRCTL for Linux: Version 11.2.0.1.0 - Production on 03-SEP-2011 22:36:55

Copyright © 1991, 2009, Oracle. All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LAB_LISTENER)))
STATUS of the LISTENER
------------------------
Alias LAB_LISTENER
Version TNSLSNR for Linux: Version 11.2.0.1.0 - Production
Start Date 03-SEP-2011 22:23:08
Uptime 0 days 0 hr. 13 min. 47 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /u01/app/oracle/db11201/network/admin/listener.ora
Listener Log File /u01/app/oracle/diag/tnslsnr/node1/lab_listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LAB_LISTENER)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=127.0.0.1)(PORT=1525)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.2.51)(PORT=1525)))
Services Summary...
Service "labdb.hingu.net" has 1 instance(s).
Instance "labdb1", status READY, has 1 handler(s) for this service...
Service "labdbXDB.hingu.net" has 1 instance(s).
Instance "labdb1", status READY, has 1 handler(s) for this service...
The command completed successfully
[oracle@node1 ~]$
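A client-side tnsnames.ora alias matching the description above might look like the following sketch. The alias name LABDB is arbitrary; the host, port and service name are the SCAN values used in this setup:

```
# Hypothetical client tnsnames.ora entry -- clients only need the
# SCAN name and port; the SCAN listener redirects to an instance.
LABDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = lab-scan.hingu.net)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = labdb.hingu.net)
    )
  )
```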


[Image: image044.jpg]


With this, the Installation of 11g R2 RAC on Linux (OEL 5.5) is completed successfully. I hope you find this article helpful.