3. Agenda
• Target and Resources
• Session 1
– Install Oracle Linux on VirtualBox
– Prepare it for Oracle Installation
– Clone a second node and make it work
• Session 2 (and possibly a Session 3)
– IP planning and Configure DNS
– Install Grid Infrastructure
– Install Oracle RDBMS and Create Database
– Verifying & Exploring RAC
• New topic volunteering opportunities
4. Target
• To build a two-node Oracle 12c RAC hosted on
Oracle Linux 6.5 and virtualized with Oracle VirtualBox
[Diagram: two nodes, racnd1 (public 192.168.1.10, VIP 192.168.1.11) and racnd2 (public 192.168.1.20, VIP 192.168.1.21), joined by a private network (interconnect) and a public network; ASM instances ASM1/ASM2 and database instances orcl1/orcl2 on the nodes; cluster SCAN orcl-scan (192.168.1.30/31/32) fronting service orcl, all on a VirtualBox virtual network.]
• Solution: VirtualBox, no NTP, simple storage
5. Resources
• \\somewhere\RACLab
– GridAndORA: contains the Oracle 12c database and Grid
Infrastructure installation disks
– OraLinux: Oracle Unbreakable Enterprise Linux 6.5
– VirtualBox: the Oracle VirtualBox installer for Windows
– RAC2NodesExample: a completed installation
• \\somewhere\VMWare Images\RAC12C
– A two-node RAC built in VMware Workstation.
– More complex, with iSCSI as the shared storage, an NTP
setup, and OpenFiler as the NTP server and host of the
shared storage
7. Session 1
– Install Oracle Linux on VirtualBox
– Install Guest Additions
– Prepare it for Oracle Installation
– Prepare shared storage
– Clone a second node and make it work
• Remove the cloned copies of the shared disks
• Attach the shared disks
• Reconfigure network interfaces
8. Prepare directories and resources
– Create directories D:\VBVMs and D:\R12cResource
– Copy oracleLinux6.5_V41362-01 to
D:\R12cResource; copy all files under
\\somewhere\RACLab\GridAndORA to
D:\R12cResource
– Extract all the zip files in D:\R12cResource. You will
get database (the RDBMS installer) and grid (the Grid
Infrastructure installer)
9. Install Oracle Linux on VirtualBox
• Start VirtualBox.
• Change the default machine folder to D:\VBVMs
• Click New to create a new virtual machine
10. Install Oracle Linux
• Memory size: 2560 MB
• Hard drive: create a virtual hard drive now:
VDI, dynamically allocated, 30 GB
• Network: make the first two adapters host-only, the
third NAT. Enable all three
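For reference, the same VM can be created from the host's command line. A minimal sketch using VBoxManage (shipped with VirtualBox); the OS type, controller name, and host-only adapter name are assumptions that may differ on your host:

VBoxManage createvm --name racnd1 --ostype Oracle_64 --register
VBoxManage modifyvm racnd1 --memory 2560
# 30 GB dynamically allocated VDI attached to a SATA controller
VBoxManage createhd --filename "D:\VBVMs\racnd1\racnd1.vdi" --size 30720 --format VDI
VBoxManage storagectl racnd1 --name SATA --add sata
VBoxManage storageattach racnd1 --storagectl SATA --port 0 --device 0 --type hdd --medium "D:\VBVMs\racnd1\racnd1.vdi"
# first two adapters host-only, third NAT, all enabled
VBoxManage modifyvm racnd1 --nic1 hostonly --hostonlyadapter1 "VirtualBox Host-Only Ethernet Adapter"
VBoxManage modifyvm racnd1 --nic2 hostonly --hostonlyadapter2 "VirtualBox Host-Only Ethernet Adapter"
VBoxManage modifyvm racnd1 --nic3 nat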
12. Install Oracle Linux
• Skip media checking
• Name it racnd1
• Network:
– eth0: connect automatically, manual IP 192.168.1.10/255.255.255.0
– eth1: connect automatically, manual IP 192.168.2.10/255.255.255.0
– eth2: do not connect automatically, DHCP
• Install as Desktop so that an X Window server is installed. Check
HA if you like.
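The manual settings chosen in the installer end up in the interface config files. A sketch of what eth0 should look like afterwards (the generated file also carries HWADDR/UUID lines, omitted here):

# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
ONBOOT=yes            # "connect automatically"
BOOTPROTO=none        # static/manual addressing
IPADDR=192.168.1.10
NETMASK=255.255.255.0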
13. Install VirtualBox Guest Additions
• After rebooting, log on as root.
• Enable eth2 and connect to the Internet
– yum install gcc kernel-uek-devel-3.8.13-16.2.1.el6uek.x86_64
– Reboot
• Devices > Insert Guest Additions CD image…; run it and let it
finish. Eject the CD.
• The mouse cursor is no longer captured
• Shared folders should be supported now
14. Prepare Oracle Installation
• Enable eth2
• yum install oracle-rdbms-server-12cR1-preinstall
– this downloads all the dependencies required by the
Oracle installation and creates the oracle user and the
oinstall and dba groups. The log file is at
/var/log/oracle-rdbms-server-12cR1-preinstall/results/orakernel.log
• Verify users and groups are created
– less /etc/passwd and less /etc/group
• passwd oracle
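A quick sanity check that the preinstall RPM really created the accounts; a sketch (exact UIDs/GIDs will vary):

grep '^oracle:' /etc/passwd
grep -E '^(oinstall|dba):' /etc/group
id oracle    # should show oinstall as the primary group and dba among the groups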
15. Prepare Oracle Installation
• Create the Oracle base directory
– mkdir -p /u01/app/oracle
– chown -R oracle:oinstall /u01
• Define the environment variable
– vi /home/oracle/.bash_profile
• export ORACLE_BASE=/u01/app/oracle
16. Prepare Shared Storage
• shutdown -h now
• Configure racnd1’s storage to have two shared
disks: ASM1 and ASM2
– mkdir D:\VBVMs\SharedDisks
– Create a new disk under the SATA controller
– VDI, fixed size, 6 GB
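If you prefer the command line, a sketch of the same two disks with VBoxManage; note the fixed-size VDIs must also be marked shareable before racnd2 attaches them later (the GUI offers the same via the Virtual Media Manager):

VBoxManage createhd --filename "D:\VBVMs\SharedDisks\ASM1.vdi" --size 6144 --format VDI --variant Fixed
VBoxManage createhd --filename "D:\VBVMs\SharedDisks\ASM2.vdi" --size 6144 --format VDI --variant Fixed
VBoxManage modifyhd "D:\VBVMs\SharedDisks\ASM1.vdi" --type shareable
VBoxManage modifyhd "D:\VBVMs\SharedDisks\ASM2.vdi" --type shareable
VBoxManage storageattach racnd1 --storagectl SATA --port 1 --device 0 --type hdd --medium "D:\VBVMs\SharedDisks\ASM1.vdi"
VBoxManage storageattach racnd1 --storagectl SATA --port 2 --device 0 --type hdd --medium "D:\VBVMs\SharedDisks\ASM2.vdi"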
18. Prepare Shared Storage
• fdisk -l to verify sdb and sdc are available
• Create a partition on each new disk
– fdisk /dev/sdb
• n, p, 1, accept default, accept default, w
– fdisk /dev/sdc
• n, p, 1, accept default, accept default, w
– fdisk -l to verify
19. Prepare Shared Storage
• Make persistent storage references
– echo options=-g > /etc/scsi_id.config
– less /etc/scsi_id.config to confirm
– prepare /etc/udev/rules.d/99-ora-asm-devices.rules by running
the following script (each rule must stay on a single line in the
rules file; \$parent is escaped so udev, not the shell, expands it):
i=1
cmd="/sbin/scsi_id -g -u -d"
for disk in sdb sdc ; do
cat <<EOF >> /etc/udev/rules.d/99-ora-asm-devices.rules
KERNEL=="sd?1", BUS=="scsi", PROGRAM=="$cmd /dev/\$parent", RESULT=="`$cmd /dev/$disk`", NAME="asm-disk$i", OWNER="oracle", GROUP="dba", MODE="0660"
EOF
i=$(($i+1))
done
20. Prepare Shared Storage
• Refresh udev
– /sbin/partprobe /dev/sdb1 /dev/sdc1
– /sbin/udevadm test /block/sdb/sdb1
– /sbin/udevadm test /block/sdc/sdc1
– /sbin/udevadm control --reload-rules
– /sbin/start_udev
– ls /dev/asm*; you should now see the two disks
21. Disable Features/Services
• Disable SELinux
– Edit /etc/sysconfig/selinux
• SELINUX=disabled
• Disable NTP so that the OUI will decide to use Cluster
Time Synchronization Service (CTSS), which is part
of Oracle Clusterware.
– mv /etc/ntp.conf /etc/ntp.conf.orig
• Disable firewall
– /etc/rc.d/init.d/iptables status
– /etc/rc.d/init.d/iptables stop
– chkconfig iptables off
22. Clone a VM
• Now we have a VM that is almost fully prepared
for Oracle installation
• Clone it so that we save time preparing
the second node.
– Machine > Clone…, Full clone. This will take a while
– Name it racnd2, with the MAC addresses reinitialized
– Reconfigure storage
• Remove the cloned copies of the disks and attach the shared disks
• Remove the newly cloned disks permanently from Virtual
Media Manager
23. Clone a VM
• Power on and configure the network
– Remove the ones cloned from racnd1
• System eth0, eth1, eth2
– Auto eth3: manual IP 192.168.1.20/255.255.255.0
– Auto eth4: manual IP 192.168.2.20/255.255.255.0
– Auto eth5: do not connect automatically, DHCP
– Change the hostname to racnd2
• /etc/sysconfig/network
– HOSTNAME=racnd2
– rm /etc/udev/rules.d/70-persistent-net.rules
• Reboot to automatically regenerate the rule file
– Verify the shared storage is still available
• ls /dev/asm*
24. Session 2
– IP planning
– Configure DNS
– Install Grid Infrastructure
– Install Oracle RDBMS and Create Database
– Verification: get your feet wet with RAC
25. IP Planning
• racnd1
– 192.168.1.10 racnd1
– 192.168.2.10 racnd1-priv
– 192.168.1.11 racnd1-vip
• racnd2
– 192.168.1.20 racnd2
– 192.168.2.20 racnd2-priv
– 192.168.1.21 racnd2-vip
Single client access name:
orcl-scan
• 192.168.1.30 orcl-scan
• 192.168.1.31 orcl-scan
• 192.168.1.32 orcl-scan
• Virtual IP addresses must be in the same network as the public IP addresses. They
are used by the Clusterware to implement its failover mechanism, e.g. in tnsnames.ora
• The private IP is for the cluster’s internal communications (Cache Fusion, the interconnect).
• The public IP is a normal IP address; DBAs use it to access the server
• The SCAN is a single service name known by clients, a name that represents the
cluster.
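To make the roles concrete, a sketch of how a client uses each kind of address (EZConnect syntax, as used later on slide 49; scott/tiger is the demo account):

sqlplus scott/tiger@orcl-scan.localdomain/orcl    # SCAN: one name for the whole cluster
sqlplus scott/tiger@racnd1-vip.localdomain/orcl   # VIP: relocates on node failure, so clients get a fast error instead of a TCP timeout
ssh oracle@racnd1                                 # public name/IP: administrative access
# 192.168.2.x (racnd1-priv, racnd2-priv) carries only the interconnect; clients never use it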
26. Configure DNS
• Make racnd1 the DNS server
– Enable Internet access by enabling eth2
– yum install bind-libs bind bind-utils
– /etc/named.conf
• gedit /etc/named.conf
• Replace the content with the content below
• Save and close
//
// named.conf
//
// Provided by Red Hat bind package to configure the ISC BIND named(8) DNS
// server as a caching only nameserver (as a localhost DNS resolver only).
//
// See /usr/share/doc/bind*/sample/ for example named configuration files.
//
options {
    listen-on port 53 { 127.0.0.1; 192.168.1.10; };
    listen-on-v6 port 53 { ::1; };
    directory "/var/named";
    dump-file "/var/named/data/cache_dump.db";
    statistics-file "/var/named/data/named_stats.txt";
    memstatistics-file "/var/named/data/named_mem_stats.txt";
    recursion yes;
    dnssec-enable yes;
    dnssec-validation yes;
    dnssec-lookaside auto;
    /* Path to ISC DLV key */
    bindkeys-file "/etc/named.iscdlv.key";
    managed-keys-directory "/var/named/dynamic";
};
logging {
    channel default_debug {
        file "data/named.run";
        severity dynamic;
    };
};
zone "localdomain." IN {
    type master;
    file "localdomain.zone";
    allow-update { none; };
};
zone "1.168.192.in-addr.arpa." IN {
    type master;
    file "1.168.192.in-addr.arpa";
    allow-update { none; };
};
include "/etc/named.rfc1912.zones";
include "/etc/named.root.key";
27. Configure DNS
• Make racnd1 the DNS server (continued)
– gedit /var/named/localdomain.zone
• Fill it with the first zone file below.
– gedit /var/named/1.168.192.in-addr.arpa
• Fill it with the second zone file below
$TTL 86400
@ IN SOA localhost root.localhost (
42 ; serial (d. adams)
3H ; refresh
15M ; retry
1W ; expiry
1D ) ; minimum
IN NS localhost
localhost IN A 127.0.0.1
racnd1 IN A 192.168.1.10
racnd2 IN A 192.168.1.20
racnd1-priv IN A 192.168.2.10
racnd2-priv IN A 192.168.2.20
racnd1-vip IN A 192.168.1.11
racnd2-vip IN A 192.168.1.21
orcl-scan IN A 192.168.1.30
orcl-scan IN A 192.168.1.31
orcl-scan IN A 192.168.1.32
$TTL 1H
@ IN SOA racnd1.localdomain. root.racnd1.localdomain. ( 2
3H
1H
1W
1H )
1.168.192.in-addr.arpa. IN NS racnd1.localdomain.
10 IN PTR racnd1.localdomain.
20 IN PTR racnd2.localdomain.
11 IN PTR racnd1-vip.localdomain.
21 IN PTR racnd2-vip.localdomain.
30 IN PTR orcl-scan.localdomain.
31 IN PTR orcl-scan.localdomain.
32 IN PTR orcl-scan.localdomain.
28. Configure DNS
• Make racnd1 the DNS server (continued)
– Start DNS
• service named start
– Start it automatically at boot
• chkconfig named on
• Make both racnd1 and racnd2 use the DNS
– Edit each node’s adapter on the 192.168.1 network
to have 192.168.1.10 as its name server.
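On the Linux side that amounts to a DNS entry in each node's public adapter config. A sketch, with a quick test (DNS1/DOMAIN are standard ifcfg keys; the file is ifcfg-eth0 on racnd1 and the eth3 config on racnd2):

# append to /etc/sysconfig/network-scripts/ifcfg-eth0
#   DNS1=192.168.1.10
#   DOMAIN=localdomain
service network restart
nslookup orcl-scan.localdomain   # should return 192.168.1.30/.31/.32
nslookup 192.168.1.11            # reverse lookup: racnd1-vip.localdomain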
30. Test SSH
• Query the ssh packages
– rpm -qa --queryformat "%{NAME}-%{VERSION}-
%{RELEASE} (%{ARCH})\n" | grep ssh
• On racnd1, as oracle
– ssh racnd2
• On racnd2, as oracle
– ssh racnd1
• Make sure ssh is working. If not, correct it.
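The Grid installer can set up SSH user equivalence for you (slide 32), but if you want it in place beforehand, a manual sketch, run as oracle on each node:

ssh-keygen -t rsa            # accept the defaults, empty passphrase
ssh-copy-id oracle@racnd1    # enter oracle's password once per node
ssh-copy-id oracle@racnd2
ssh racnd2 date              # must now run without a password prompt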
31. Install Grid Infrastructure
• Configure the shared folder
– In VirtualBox, in racnd1’s settings dialog, add
D:\R12cResource
– Reboot racnd1 and log on as oracle
– su - and mount the shared folder
• mkdir -p /media/R12cResource
• mount -t vboxsf R12cResource /media/R12cResource
• exit
– cd /media/R12cResource/grid
– ./runInstaller
32. Install Grid Infrastructure
• Skip software updates
• Install and configure Oracle Grid Infrastructure for a cluster
• Configure a standard cluster
• Typical installation (otherwise you have to be familiar with
GNS first)
• SCAN name: orcl-scan
• Add racnd2
• Set up SSH connectivity
– Provide the password, set up, then test
• Identify the network interfaces to make sure the private and
public networks are associated with the corresponding
interfaces
37. Install Grid Infrastructure
• Accept the default inventory directory
• Automatically run configuration scripts
• Fix the cvuqdisk-1.0.9-1 package and check again.
• Agree to run the fix-up scripts
• Ignore the other (two) warnings in the prerequisite
checks
• Review the summary and install.
– Slow process
– tail -f /u01/app/oraInventory/logs/installActionsyyyy-
mm-dd_xx-xx-xxPM.log
38. Install Grid Infrastructure
• Simple verification
– ps -ef | grep asm
– export ORACLE_SID=+ASM1; export
ORACLE_HOME=/u01/app/12.1.0/grid
– cd /u01/app/12.1.0/grid/bin
– ./asmca to view the ASM instances and the disk group
– ./crsctl stat res -t to view cluster component status
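A few more health checks worth knowing, run from the same bin directory; a sketch using the stock 12c tools:

./crsctl check cluster -all    # CRS/CSS/EVM daemons on every node
./srvctl status asm            # which nodes have an ASM instance up
./srvctl config scan           # the SCAN name and its three VIPs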
39. Install Oracle 12c RDBMS and create database
On racnd1
– cd /media/R12cResource/database
– ./runInstaller
– Do not provide an email address; skip software updates
– Create and configure database
– Server class
– Oracle RAC database installation
– Admin managed
– Choose all the nodes
– Typical install
47. Connect to a cluster
• This can be done on any machine that is not part of the
cluster but is on the same network as the cluster’s public
network
– From a third VM, if you have enough resources
– From the VMs’ host, if an Oracle client is installed
– From one of the nodes, using the cluster service name orcl. This is
the simplest method, but you have to trust that it works as if you were
connecting from a third computer
• I adopt method 2
– http://www.oracle.com/technetwork/topics/winx64soft-
089540.html
• instantclient-basiclite-windows.x64-12.1.0.1.0
• instantclient-sqlplus-windows.x64-12.1.0.1.0
• Unzip them to the same folder
48. Configure host network
• Open the properties of the VirtualBox Host-Only Network adapter in
Network Connections
• Bring up the IPv4 properties
– Change the IP to 192.168.1.1 and the DNS server to 192.168.1.10
– Under DNS, add localdomain as the DNS suffix
– Test the network:
• ping 192.168.1.10
• ping racnd1.localdomain
• ping racnd1
• ping orcl-scan
– Verify the SCAN is running (on a node, from the grid home’s bin):
• ./srvctl status scan
• ./srvctl status scan_listener
49. Connect from host
• Using a basic connection descriptor
– sqlplus scott/tiger@192.168.1.30/orcl
– sqlplus scott/tiger@orcl-scan/orcl
– But connecting to an individual instance will fail,
because the local listener is not directly accessible
from a remote client; the connection has to go through
the SCAN listener first. The SCAN listener picks the
instance with the lighter workload to serve the client
• sqlplus scott/tiger@192.168.1.10/orcl1
• sqlplus scott/tiger@192.168.1.10/orcl
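To see which instance a SCAN connection actually landed on, a small check that works for any account (no V$ privileges needed):

sqlplus scott/tiger@orcl-scan/orcl
SQL> select sys_context('userenv','instance_name') from dual;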
50. Failover is not working?
• Start one node and connect to it from the client. Then start the
second node.
• Now shut down the first node; querying the database returns an
end-of-file error
• Restart the first node and re-issue the query; it reports that you
are not connected to Oracle.
51. Transparent Application Failover
• Applications and users are automatically and
transparently reconnected to another system,
applications and queries continue
uninterrupted, and the login context is
maintained. But in-flight transactions are rolled
back, and transaction management has to be done
inside the application.
52. Configure TAF
• It is done through client-side configuration. I guess the SCAN is
only meaningful here in the sense that new connections can still
reach the same database as long as one instance is still running.
• tnsnames.ora
raclab =
  (DESCRIPTION =
    (FAILOVER = ON)
    (LOAD_BALANCE = OFF)
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.11)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.21)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = orcl)
      (FAILOVER_MODE =
        (TYPE = SELECT)
        (METHOD = PRECONNECT)
        (BACKUP = 192.168.1.21)
      )
    )
  )
53. Test Failover 1
• sqlplus sys@raclab as sysdba
– Check which instance is serving (use the query below)
• On racnd1, abort the instance
srvctl status database -d orcl
srvctl stop instance -d orcl -i orcl1 -o abort
• Rerun the query and check which node is serving. It should
fail over to racnd2
• Restart instance orcl1, then abort instance orcl2
srvctl start instance -d orcl -i orcl1
srvctl stop instance -d orcl -i orcl2 -o abort
• Rerun the query and check which node is serving. It should
go back to racnd1
COLUMN instance_name FORMAT a13
COLUMN host_name FORMAT a9
COLUMN failover_method FORMAT a15
COLUMN failed_over FORMAT a11
SELECT DISTINCT
v.instance_name AS instance_name,
v.host_name AS host_name,
s.failover_type AS failover_type,
s.failover_method AS failover_method,
s.failed_over AS failed_over
FROM v$instance v, v$session s
WHERE s.username = 'SYS';
54. Test Failover 2
• Configure another TNS name, orcllab, without
TAF enabled (no FAILOVER_MODE clause)
• Create the test package under scott
• Run this script to retrieve results from a slow
query
select * from table(AMATest.f_SlowResponse (restSecond =>
1));
• After a failover, the non-failover connection returns no rows
• The failover connection returns the rows normally, just
taking longer
CREATE OR REPLACE PACKAGE AMATest
IS
  TYPE recRow IS RECORD
  (
    f1 varchar2(30),
    f2 number
  );
  TYPE RecTable IS TABLE OF recRow;
  FUNCTION f_SlowResponse(restSecond int)
    RETURN AMATest.RecTable PIPELINED DETERMINISTIC;
END AMATest;
/
CREATE OR REPLACE PACKAGE BODY AMATest
IS
  /*
  example:
  select * from table(AMATest.f_SlowResponse (restSecond => 2));
  */
  FUNCTION f_SlowResponse(restSecond int)
    RETURN AMATest.RecTable PIPELINED DETERMINISTIC
  IS
  BEGIN
    FOR aRec IN (select ename, deptno from scott.emp)
    LOOP
      PIPE ROW(aRec);
      dbms_lock.sleep(restSecond);
    END LOOP;
    RETURN;
  END;
END AMATest;
/
orcllab =
  (DESCRIPTION =
    (FAILOVER = ON)
    (LOAD_BALANCE = OFF)
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.30)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = orcl)
    )
  )
Unlike traditional Unix systems, where the device nodes in the /dev directory were a static set of files, the Linux udev device manager dynamically provides only the nodes for the devices actually present on a system.
udev supports persistent device naming, which does not depend on, for example, the order in which devices are plugged into the system. The default udev setup provides persistent names for storage devices: a hard disk is recognized by its unique filesystem ID, the name of the disk, and the physical location of the hardware it is connected to.
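The persistent names are easy to inspect on any Linux system; a quick sketch:

ls -l /dev/disk/by-id/      # stable IDs derived from the hardware/WWN
ls -l /dev/disk/by-uuid/    # filesystem UUIDs
ls -l /dev/disk/by-path/    # physical bus/port location
# each entry is a symlink back to the kernel name (sda, sdb1, ...) currently assigned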