2. Agenda – part 1
• Sun Cluster Architecture and Algorithms
• Sun Cluster 3.2 New Features Deep Dive
> New Command Line Interface (CLI)
> Quorum Server
> Service Level Management
> Dual Partition Software Update (aka Quantum Leap)
> Solaris Containers extended support
> Agent development facilities
Solaris Cluster Roadshow 2
4. Outline
• Introduction
• Solaris Cluster Building Blocks
> HA Infrastructure
> Resource Management Infrastructure
> Agent Development
> Manageability
> Disaster Recovery
• Test Infrastructure
• Availability Characterization
• Summary
5. Solaris Cluster (SC)
Provides the software for Service Availability, Data
Integrity, Business Continuity, and Disaster Recovery
"Availability is our customers' most critical requirement."
– Sun Cluster VOC Survey
"Fifty percent of enterprises that lack a recovery plan go out of business within one year of a significant disaster."
– Gartner Group
6. Solaris Cluster
– Provides business continuity within the datacenter or across the planet
– Meets a wide range of availability needs
[Diagram: the availability spectrum]
• Single Node Sun Cluster – single server
• Sun Cluster Local Cluster – local data center
• Sun Cluster Campus/Metro Cluster – hundreds of km
• Sun Cluster Geographic Edition – unlimited distance (e.g., US to Singapore)
7. Solaris Cluster
• Also known as Sun Cluster (SC)
• Sun's High Availability (HA) product
• Integrated with the Solaris Operating System (SPARC, x64)
> Allows the infrastructure to be resilient to load
> Exploits kernel hooks for faster reconfiguration
> Both lead to higher and more predictable availability
• Supports both traditional failover and scalable HA
• History
> SC 3.2 FCS – Dec. 2006
> SC 3.0 FCS – Dec. 2000, with several updates/releases in between
> Prior products: HA 1.x and SC 2.y, in the 1990s
8. SC Stack
[Diagram: the SC stack – applications and HA services run on top of the cluster infrastructure and operating system on Nodes 1–4, with shared network and storage]
9. SC Architecture
[Diagram: SC architecture]
• User-visible services (userland): SCM, commands & libraries, RGM, failover data services, scalable data services, Oracle RAC
• Internal infrastructure (kernel): global file systems and devices (SVM), global networking, CCR, and the cluster communications and high-availability infrastructure
10. SC Algorithms
• Heartbeats
> Monitor nodes in the cluster over the private network, triggering reconfiguration when nodes join or leave
> Resilient to load
• Membership
> Establishes a clusterwide-consistent cluster membership
> Coordinates reconfiguration of the other layers
• Cluster Configuration Repository (CCR)
> Global repository, with a local copy on each node
> Updates are made atomically
> Nodes can join and leave arbitrarily
11. SC Algorithms
• Quorum
> Prevents partitions (split brain, amnesia) in the cluster
– Protects against data corruption
> Uses a majority voting scheme
– Two-node clusters require a quorum device (an external tie-breaker)
• Disk Fencing
> Used to preserve data integrity
> Non-cluster nodes are fenced off from updating any shared data
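The majority-vote rule behind quorum can be sketched in a few lines of shell (a hypothetical model, not SC source code): a partition may continue only if it holds a strict majority of all configured votes.

```shell
#!/bin/sh
# Hypothetical sketch of the quorum majority rule. Each node contributes
# one vote; a quorum device contributes an extra tie-breaking vote. A
# partition survives only with a strict majority of the total votes.
has_quorum() {
    partition_votes=$1
    total_votes=$2
    # strict majority: partition_votes > total_votes / 2
    [ $((partition_votes * 2)) -gt "$total_votes" ]
}
```

This shows why a two-node cluster needs the external tie-breaker: a lone surviving node holds only 1 of 2 votes, so `has_quorum 1 2` fails and neither half of a split brain can proceed; with a quorum device the survivor plus the device hold 2 of 3 votes and `has_quorum 2 3` succeeds.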
12. SC Algorithms
• Membership changes trigger the algorithms of the upper layers, including:
> ORB, Replica Framework
> CCR
> Global File System (PxFS)
> Global Device Service
> Global Networking
> Resource Group Manager (in user space)
13. SC Algorithms
• Resource Group Manager (RGM)
> Rich and extensible framework for plugging applications into Sun Cluster
> The application is wrapped by an RGM resource, which supplies methods for controlling the application
– Start, Stop, Monitor, Validate
> Closely related resources are placed in Resource Groups (RGs)
– Example – HA-NFS: the RG has 3 resources: NFS, IP, Storage
– An RG is the basic failover unit
> Supports both failover and scalable RGs
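The HA-NFS example can be sketched with the SC 3.2 CLI; the group, resource, and mount-point names below are illustrative, not from the slides:

```
node# clresourcegroup create nfs-rg
node# clreslogicalhostname create -g nfs-rg nfs-lh
node# clresource create -g nfs-rg -t SUNW.HAStoragePlus \
          -p FilesystemMountPoints=/global/nfs nfs-stor
node# clresource create -g nfs-rg -t SUNW.nfs \
          -p Resource_dependencies=nfs-stor nfs-rs
node# clresourcegroup online -M nfs-rg
```

Because the three resources live in one RG, the RGM brings them online together on one node and fails them over as a unit.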
14. SC Algorithms
• Resource Group Manager (RGM) – continued
> Support for rich dependencies between Resources and RGs
> Additional semantics for inter-RG dependencies
> Solaris SMF support (SC 3.2)
– Wraps an SMF manifest with an RGM resource
– Leverages the SMF delegated restarter interface
– Enables reuse of customer and ISV SMF manifests
– After too many local (same-host) restarts in a time period, recovery is "escalated" by failing over to another host
– Configuration to control inter-host failover is attached to the RG
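The escalation policy above can be sketched in shell (a hypothetical model, not SC code; Retry_count and Retry_interval are real SC property names, but this logic is illustrative):

```shell
#!/bin/sh
# Hypothetical sketch of escalated recovery: restart the service locally
# up to RETRY_COUNT times within RETRY_INTERVAL seconds; one more failure
# inside the window escalates to an inter-host failover.
RETRY_COUNT=2
RETRY_INTERVAL=370
failures=""

on_failure() {
    now=$1
    kept=""
    count=0
    # keep only failure timestamps still inside the sliding window
    for t in $failures; do
        if [ $((now - t)) -lt "$RETRY_INTERVAL" ]; then
            kept="$kept $t"
            count=$((count + 1))
        fi
    done
    failures="$kept $now"
    count=$((count + 1))
    if [ "$count" -le "$RETRY_COUNT" ]; then
        decision="restart-local"
    else
        decision="failover"
    fi
}
```

Three failures within 370 seconds produce a `failover` decision; once the old failures age out of the window, recovery drops back to local restarts.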
15. Data Services
• Failover service
> The service is hosted by a primary node in the cluster, with backup capability on one or more secondary nodes
> Exactly one service instance is active at a time
• Scalable service
> The service is hosted by several nodes in the cluster at the same time, with backup capability on zero or more nodes
16. Failover Service
[Diagram: a failover DB service – the DB runs on Node 1 and fails over to Node 2; Nodes 1–4 share the network and storage]
17. Scalable Service
[Diagram: a scalable Web service – Web instances run on several nodes at the same time while a failover DB runs on another node; all six nodes share the network and storage]
18. Data Services Development
• Several choices are available
> Generic Data Service (GDS)
> Data Service Development Library (DSDL)
> RGM Application Programming Interface (API)
19. Large Portfolio of Supported Applications
Web Tier / Presentation
– HA Sun Java System Web Server
– HA Sun Java System Messaging Server
– HA Sun Java System Message Queue
– HA Sun Java System Calendar Server
– HA Sun Java System Instant Messaging Server
– Scalable Sun Java System Web Server
– HA Apache Web/Proxy Server
– HA Apache Tomcat
– Scalable Apache Web/Proxy Server
Business Logic Tier
– HA Sun Java System App Server PE/SE
– HA Sun Java System App Server EE
– HA Sun Java System Directory Server
– HA Agfa IMPAX
– HA BEA WebLogic Server
– Scalable Broadvision One-To-One
– HA IBM WebSphere MQ
– HA IBM WebSphere MQ Integrator
– IBM Lotus Notes+
– HA Oracle Application Server
– HA SAP liveCache
– HA SAP J2EE Engine
– HA SAP Enqueue Server
– Scalable SAP
– HA Siebel
– HA SWIFT Alliance Access
Database Tier
– Oracle Parallel Server
– HA Oracle 9i and Oracle 9i RAC
– HA Oracle 10g and Oracle 10g RAC
– HA Oracle E-Business Suite
– HA Oracle
– HA Sybase Adaptive Server
– IBM DB2+
– Informix+
– HA MySQL
– HA SAP MaxDB
– HA PostgreSQL
Management Infrastructure Tier
– HA Sun N1 Grid Engine
– HA Sun N1 Service Provisioning System
– HA DNS, HA NFS
– HA DHCP
– HA Kerberos
– IBM Tivoli+
– Mainframe Rehosting (MTP)
– HA Samba
– HA Solstice Backup
– HA Solaris Container
– HA Symantec NetBackup
+ developed/supported/delivered by 3rd party
And much more through our Professional Services teams
20. SC Manageability
• New and improved Command Line Interface (CLI) in SC 3.2
• Sun Cluster Manager (SCM) – the SC GUI
• Data Services Configuration Wizards – for a set of the most common data services
• Service Level Management in SC 3.2
• Upgrade options – Live Upgrade, Quantum Leap
21. SC Geographic Edition
Multi-Cluster and Multi-Site capability
• N+1 multi-site support – one site (e.g., Geneva) backs up multiple cities (Paris, Berlin, Rome) using one-way data replication
• Active–Active configuration – each site backs up the other (e.g., New York and London) using bi-directional data replication
22. SCATE*
SC Automated Test Environment
• A suite of automated tests and tools
• Distributed test development framework
> Fault injection (FI) framework
– Source-code-based FI (white box)
– System FI (black box)
• Distributed test execution framework
> Client/server architecture
> Easy to plug in new test suites
*2002 Sun Chairman's Award for Innovation
23. More on SCATE
• Test Assets
> 50+ automated test suites, each with hundreds of tests
> 500+ fault points in the product
• 350,000+ functional tests, 45,000+ faults injected
• External Qualifications
> Enable internal partners and 3rd-party vendors to qualify their hardware (e.g., storage) and software (e.g., agents)
SCATE has been extended into CTI (Common Test Interface), which is being used for Solaris test development.
24. Availability Characterization
• Extensive SC work on availability modeling, measurement, and improvement
> Code instrumentation and detailed measurements taken
> Leading to code improvements
• Goals: faster and more predictable failover times
• Availability measurements are part of release testing: no regressions in failover times
• Important for meeting customer Service Level Objectives
• Application failover time and customer workload are key
26. SC Strengths
• Tight integration with Solaris – faster failure detection -> faster recovery -> higher availability
• Robust HA infrastructure – resilient to single points of failure, and also to many multiple points of failure
• No-data-corruption guarantee – many protection mechanisms in SC (membership, quorum, fencing) enable this
• Flexibility across the stack – a flexible platform for developing HA applications, and a broad range of configuration choices
• Data-driven availability prediction – provides a mathematical basis for offering SLAs
• Rich portfolio of supported applications
• Simple, powerful disaster recovery solution
• Sophisticated, industry-leading test framework, used both inside and outside Sun
• ...
27. What's New in Solaris
Cluster 3.2?
Solaris Cluster Engineering Staff
Sun Microsystems
28. Agenda
• This Presentation
• New Command Line Interface (CLI)
• Quorum Server
• Service Level Management
• Quantum Leap Upgrade
29. This presentation
• Introduces the new features in SC 3.2
• Not much detail, just overview information
• Want more? Sign up for training
31. New CLI: Benefits
• Object-Oriented
• Easy-to-Remember Command Names
• Easy-to-Remember Subcommands
• Consistent use of Subcommands and Options
• Helpful Help
• Configuration Replication
• Existing CLI still available
> All existing commands continue to work
> Retraining not required
32. New CLI: Example 1
Examples – Object-Oriented

Create a resource group (object type is "resource group"; the object is rg1):
node# clresourcegroup create rg1

Display the status of a resource group:
node# clresourcegroup status rg1
... <status is listed> ...

Display the status of all cluster objects using the umbrella command (object type is "cluster"; the implied object is this cluster):
node# cluster status
... <status is listed> ...
33. New CLI
Benefit – Configuration Replication
• Ability to easily replicate configurations
> Most commands support an export subcommand
– Outputs the cluster configuration to XML
> Most create subcommands accept an input (-i) option
– Uses an XML file as input for creating the objects in the operand list
– Command-line options override config file content
• Possible future enhancements
> A single command to import an entire config
> Apply changes to already-existing objects
34. New CLI: Example 2
Example – Configuration Replication

Export the entire cluster configuration to an XML config file:
node# cluster export > cluster_config.xml

Delete all resources and resource groups from the cluster (the -F (force) option first deletes any resources; the "+" operand wildcards to mean all objects of this type):
node# clresourcegroup delete -F +

Rebuild the groups and resources from the XML config file (the -a option causes clresource to first create the resource groups):
node# clresource create -a -i cluster_config.xml +
36. Quorum Server Overview
• SC 3.2 introduces a new type of quorum device
• A Quorum Server is a Quorum Device
> Runs on a host external to the cluster
– The external host may be part of a cluster
– But it may not be part of the cluster for which it provides quorum
– Only Solaris 9 and 10 are supported
> Can act as a quorum device for multiple clusters
> A Quorum Server is identified by:
– IP address
– Port number
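Configuring one might look like the following sketch; the address, port, and device name are illustrative, and the exact syntax should be checked against the clquorumserver(1CL) and clquorum(1CL) man pages:

```
On the quorum-server host, start the quorum server:
qshost# clquorumserver start 9000

On one cluster node, register it as a quorum device:
node# clquorum add -t quorum_server -p qshost=192.168.10.50 -p port=9000 qd1
```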
37. Quorum Server Overview (2)
• Network Connectivity
> Clusters and the QS may be on different subnetworks
> May be used in campus cluster configurations
• Interoperability – Cluster and QS Host
> May run different OS releases
> The quorum server and cluster need not be the same architecture
38. Quorum Server Installation
• The Quorum Server is part of the Java Enterprise System
> Availability Services
– Sun Cluster Quorum Server
• Quorum Server software is distributed separately, because the software resides on different machines
• The Quorum Server must be installed and configured before the cluster can configure it as a quorum device
40. Service Level Management in SC 3.2
• System Resource Utilization Status Monitoring/Telemetry
> Monitor node/zone and RG CPU, memory, and swap usage
> Monitor disk and network adapter I/O usage
> Threshold monitoring triggers a utilization status change event
> Historical data can be viewed as graphs or exported to other applications
• CPU allocation/prioritization for cluster Resource Groups
> Solaris Resource Management integration for the CPU resource
– CPU shares per RG
– Dedicated CPU (processor set): dynamically calculates the CPU requirement for all RGs running in the same zone container, and attempts to allocate and bind CPUs (processors) to the zone container based on that calculation, if there is enough CPU resource
41. Benefit of the Utilization Status Monitoring
• Know the headroom of the cluster nodes; helps consolidate the data services/RGs running on the cluster nodes
• Know how data services/RGs are performing, and spot unhealthy conditions in data services (such as memory leaks); helps you do a planned switchover instead of a failover
• Plan future system resources based on weekly statistics
44. Agenda
• This Presentation
• New Command Line Interface (CLI)
• Quorum Server
• Service Level Management
• Quantum Leap Upgrade
45. What is Quantum Leap?
• Quantum Leap (QL) is a fast cluster-upgrade technology
• Divide the cluster into two partitions
> Exactly two, no more, no fewer
• Upgrade one partition at a time
• Quickly move applications from the old-version partition to the new-version partition
> Use the new partition for production while upgrading the old one
• The marketing name is "Dual Partition Software Swap"
> Could be used to downgrade as well, but that is untested
46. Advantages of Quantum Leap
• Quantum Leap provides the means to upgrade the entire software stack on a cluster with only a small outage
> OS, cluster, applications, etc., may be upgraded
> The outage is similar to an application switchover
• Rolling Upgrade can only upgrade the cluster software
• Quantum Leap dramatically reduces the cost of engineering development and testing
47. What things are supported by QL?
• Upgrade from SC 3.1 FCS and all updates to SC 3.2
• Upgrade from S8/S9/S10 to S9u7/S10u3 and later
• Installing future patches and updates
• Upgrading other software:
> applications – Oracle, SAP, etc.
> volume managers – SVM, VxVM
> file systems – VxFS
• Can use QL for upgrades with or without SC changes
49. Solaris Cluster support for S10
containers
• Available since Sun Cluster 3.1 8/05: the HA Container Agent
> A zone is a resource; the zone can fail over between nodes
> All RGs are configured in the global zone
> The RG contains a zone resource and an application resource
• Significantly enhanced in Sun Cluster 3.2: "Zone Nodes"
> Zones are virtual nodes
> Multiple RGs can fail over independently between zones
• Both approaches coexist and can be combined in Sun Cluster 3.2
50. Why use S10 containers with Solaris
Cluster?
• Combines the benefits of clustering and containers
> Solaris Cluster provides high availability and load balancing
> Containers provide application isolation, fault containment, and control of system resource allocation
• Each application can run in its own zone
• Upon failure, the application and/or zone can fail over to another node
51. Zone-Nodes Provide
• Application isolation
• Ability to exploit a Sun Cluster agent to monitor an application running within a zone
• Ability to run most SC resource types (application and agent) unmodified in a non-global zone
• Ability to run multiple resource groups in the same zone that fail over independently
• Ability to dynamically create/destroy zones
> using the usual Solaris tools
> automatic discovery by the RGM
52. Zone-Nodes Provide (cont.)
• Support for an unbounded number of zones
• Support for resource groups that fail over between zones on the same node
> Does not really provide high availability
> Supports prototyping of cluster services
• Support for data services developed using the Generic Data Service (GDS), Agent Builder, or the Data Service Development Library (DSDL)
53. Sun Cluster components in zones
[Diagram: Sun Cluster components in zones. On each of Node 1 and Node 2, the global zone runs the Sun Cluster infrastructure (RGM, FED, CCR, UCMM, PMF) and hosts RG1; non-global zones z1, z2, and z3 each run PMF, libscha, and libdsdev and host RG2, RG3/RG5, and RG4 respectively]
54. Zone isolation/security
• Zone isolation is incomplete
• A user running in a non-global zone can "see" resource groups configured in other zones
• A user running in a non-global zone cannot modify or affect the behavior of RGs in other zones unless those RGs list the non-global zone in their Nodelist property
• Some admin commands are not permitted to run in a non-global zone: RT register, RG create, ...
• Cluster administration is most easily done from the global zone
• Security will be enhanced by the "Clusterized Zones" (RAC in zones) project in a future release
55. How to Use Zone-Nodes
• "Logical nodename": nodename:zonename or nodename
• (old) Nodelist=node1,node2,node3
• (new) Nodelist=node1:zoneA,node2:zoneA,node3:zoneA
• Also permitted:
> RG runs in a different zone name per node:
– Nodelist=node1:zoneA,node2:zoneB,node3:zoneC
> RG runs in multiple zones on a single physical node:
– Nodelist=node1:zoneA,node1:zoneB,node1:zoneC
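Creating an RG whose node list names zones uses the ordinary SC 3.2 CLI; the node, zone, and group names here are illustrative:

```
node# clresourcegroup create -n node1:zoneA,node2:zoneA,node3:zoneA web-rg
node# clresourcegroup status web-rg
```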
56. Zones support in Sun Cluster Manager
57. Data Services supported in non-global zones
• Combined with the HA Container Agent:
> Apache Tomcat, MySQL, Samba, IBM WebSphere MQ, PostgreSQL, N1 Grid Service Provisioning System
• Using zone nodes with Sun Cluster 3.2:
> all agents that are supported with the HA Container Agent, plus:
> JES Application Server, JES Web Server, JES MQ Server, DNS, Apache, Kerberos, HA Oracle, Oracle E-Business Suite, Oracle 9iAS, GDS, IBM WebSphere MQ Broker (pending IBM support confirmation)
• Refer to the Configuration Guide for the latest agent support info
58. Competition
• Veritas Cluster Server (VCS) offers a Container Agent
> VCS has made HA Oracle and SAP container-aware
• SC 3.1 8/05 has functionality similar to the VCS Container Agent
> Some of our agents are container-aware in 3.1 8/05
• SC 3.2 surpasses VCS with Zone Nodes plus the HA Container Agent
> Starting with 3.2, many of our standard agents are container-aware (an order of magnitude more than VCS)
> All our GDS custom agents can be container-aware
> Application failover between zones can be tested in a single-node cluster for development purposes
60. Introduction
• Solaris Cluster 3.2 has an extensive portfolio of supported applications
> Agents are available on the Solaris Cluster DVD or for download
> Most JES applications ship with SC agents
• APIs and tools are available for custom agents
• Talk outline
> Application Characteristics
> SC Resource Management Model
> Available APIs
> Solaris Cluster Agent Builder Tool
> Hands-on exercise developing a custom agent
61. Application Characteristics
• Crash tolerance
> Must be able to restart correctly after an unclean shutdown
> Sometimes requires cleanup of lock/PID files
• Independence from the server hostname
> THAT changes with a failover!
> New feature in SC 3.2 to override application hostname resolution:
– export LD_PRELOAD=/usr/cluster/lib/libschost.so.1
– export SC_LHOSTNAME="myHAhostname"
– see libschost.so.1(1) for details
> Should be able to coexist with multi-homed hosts
• Multi-hosted data
> The application should not hard-code data paths
> Sometimes symbolic links can be used as a workaround
62. Resource Management Model
• Key concepts
> Resource Type (RT): a representation of an HA entity
– Examples: the HA-Oracle RT, an HA filesystem
> Resource: a specific instance of an RT
– Example: the Oracle HR database
> Resource Group (RG): a collection of resources
– Example: an RG containing
– a failover filesystem resource
– a failover IP address (aka a LogicalHostname resource)
– a failover Oracle database instance
> Resources can have dependencies between them
– Facilitates proper startup/shutdown sequencing
– Dependencies come in flavors such as strong/weak/restart
– Works across different cluster nodes
• Implemented by the Resource Group Manager (RGM)
63. Example of a Failover RG
Name: Oracle-rg
Maximum_primaries: 1
Resources:
  Type: LogicalHostname
  Hostname: ora-1

  Type: HAStoragePlus
  Name: hafs1
  FilesystemMountPoints: /global/ora-1

  Type: SUNW.Oracle_server
  Oracle_home: /global/ora-1/oracle/
  Oracle_SID: D001
  Resource_dependencies: hafs1
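Building this group with the SC 3.2 CLI might look like the following sketch; the resource names (ora-1-rs, ora-db-rs) are illustrative, while the type and property names follow the slide:

```
node# clresourcegroup create Oracle-rg
node# clreslogicalhostname create -g Oracle-rg -h ora-1 ora-1-rs
node# clresource create -g Oracle-rg -t SUNW.HAStoragePlus \
          -p FilesystemMountPoints=/global/ora-1 hafs1
node# clresource create -g Oracle-rg -t SUNW.Oracle_server \
          -p Oracle_home=/global/ora-1/oracle/ -p Oracle_SID=D001 \
          -p Resource_dependencies=hafs1 ora-db-rs
node# clresourcegroup online -M Oracle-rg
```

The Resource_dependencies setting ensures the RGM mounts the filesystem before starting the database, and stops them in the reverse order.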
64. Developing Resource Types
(aka HA Agents)
• Manages an application such as Oracle, Apache, NFS, etc.
• Implements callback methods
> to Start, Stop, and Monitor the application
• Manages specific properties needed to manage the application
> e.g., the value of ORACLE_HOME, or the timeout value for a specific task
> optionally implements methods to VALIDATE and UPDATE these properties
• Supplies a Resource Type Registration (RTR) file to specify the above information
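A minimal RTR file might look like the following sketch; the type name, vendor ID, paths, method names, and the Confdir property are all illustrative, and the full grammar is in the Sun Cluster Data Services Developer's Guide:

```
# RTR file for a hypothetical resource type
RESOURCE_TYPE = "myapp";
VENDOR_ID = EXMP;
RT_BASEDIR = /opt/EXMPmyapp/bin;

START = myapp_start;           # callback: start the application
STOP = myapp_stop;             # callback: stop the application
MONITOR_START = myapp_probe;   # callback: start the fault monitor

# Parameter table: an extension property the callbacks can query
{
  PROPERTY = Confdir;
  EXTENSION;
  STRING;
  TUNABLE = AT_CREATION;
  DESCRIPTION = "Application configuration directory";
}
```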
65. Available APIs
• Sun Cluster High Availability API (SCHA API)
– Querying properties: scha_resource_get(1HA)
– Taking action on failures: scha_control(1HA)
– Managing status: scha_resource_setstatus(1HA)
– Available in C and CLI form
• PMF (Process Monitoring Facility)
– Manages application processes
– Quick restart after failures
– Guaranteed stop
– CLI interface: pmfadm(1M)
• Running arbitrary commands under a timeout
– hatimerun(1M)
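Typical use of the two CLIs might look like this sketch (the tag and paths are illustrative; see pmfadm(1M) and hatimerun(1M) for the full option lists):

```
Start a daemon under PMF, allowing up to 2 restarts per 60 minutes:
node# pmfadm -c myapp,svc -n 2 -t 60 /opt/myapp/bin/appd

Stop it, with a guaranteed stop of the whole process tree:
node# pmfadm -s myapp,svc TERM

Run a health probe, but give up if it takes more than 30 seconds:
node# hatimerun -t 30 /opt/myapp/bin/probe
```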
66. Available APIs (contd...)
• Data Services Development Library (DSDL)
– Brings together the SCHA API, PMF, and hatimerun
– Provides an integrated fault monitor model
– Local restarts after application failures; repeated failures lead to inter-node failover
– Provides APIs for application fault monitoring
– scds_fm_net_connect(3HA)
– scds_simple_probe(3HA)
– scds_timerun(3HA)
– Available only in C
• Generic Data Service (GDS)
– Layered on top of the DSDL
– Allows developers to plug in simple scripts to create RTs
– Customers love it because there is very little code to own and maintain
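With GDS, creating an RT reduces to supplying scripts; in the following sketch the group, resource, and script paths are illustrative, while Start_command, Probe_command, and Port_list are standard SUNW.gds properties:

```
node# clresourcetype register SUNW.gds
node# clresourcegroup create app-rg
node# clresource create -g app-rg -t SUNW.gds \
          -p Start_command="/opt/myapp/bin/start" \
          -p Probe_command="/opt/myapp/bin/probe" \
          -p Port_list=8080/tcp myapp-rs
node# clresourcegroup online -M app-rg
```

GDS supplies the stop, restart, and escalation logic itself, which is why there is so little custom code to maintain.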
67. Solaris Cluster Agent Builder
• A GUI-based code generation tool for RTs
• You only specify how to start your app
> Uses the Sun Cluster Process Monitoring Facility (PMF) to monitor the application processes
> Reliable application shutdown
> Tunable local restart and failover decisions
> Can perform simple TCP-handshake-level monitoring
> Optionally specify a monitor script for detailed application health checks
• Can generate the agent in ksh, C, or GDS form
• Creates a Solaris package for the agent
71. Agent Development Exercise
• Double-check that Apache is set up correctly
– Start it on BOTH nodes with /usr/apache2/bin/apachectl start
– Check /var/apache2/logs/error_log in case of failures
– Start a web browser and connect to Apache
– Stop Apache
• Start scdsbuilder and create a custom Apache agent
– Remember to set your DISPLAY shell variable
– Suggest using GDS, Failover, and Network aware
– Use your own start, stop, validate, and probe scripts
– The package is created in <working directory>/pkg
– pkgadd it on all cluster nodes
72. Exercise contd...
• Deploy your agent
– Run /ws/galileo/tools/bin/labinfo -v $your-cluster-name
– Look under the headings "Failover ADDRESSES" and "Shared ADDRESSES" to find the available HA hostnames
– Make sure the HA address you are going to use is not already configured
– Run /opt/$pkgname/util/startapache -h <logical-hostname> -p 80/tcp
• Test your agent
– Kill the Apache processes; they should be restarted
– Reboot the node running Apache; it should fail over to the other node
74. For further information
• Check out the Solaris Cluster blogs: http://blogs.sun.com/SC
• Discuss Solaris Cluster at the Sun Developer Network (SDN): http://forum.java.sun.com/forum.jspa?forumID=842
• Check out http://www.sun.com/cluster
• Download Solaris Cluster software at http://www.sun.com/download
• Get trained in Solaris Cluster at http://www.sun.com/training/catalog/server/cluster.xml
• SC documentation at http://docs.sun.com
> SC 3.2 Documentation Center: http://docs.sun.com/app/docs/doc/820-0335/