HP best practices for Microsoft Exchange Server 2000 and
2003 Cluster deployments




Executive summary
Introduction
Overview of Microsoft Exchange Server clusters
  Clusters explained
  Microsoft cluster server
  Shared-nothing cluster model
  Resources and resource groups
  Key cluster terminology
  Microsoft cluster service components
    The cluster service
    The resource monitor
    The resource DLL
  Cluster failover
  Cluster-aware vs. non cluster-aware applications
    Active/active vs. active/passive
  Exchange virtual servers
  Clustering support for Microsoft Exchange Server components
    Key component: Exchange resource DLL (EXRES.DLL)
    Key component: Exchange cluster admin DLL (EXCLUADM.DLL)
    Microsoft Exchange Server cluster resource dependencies
    Microsoft Exchange virtual servers
What’s new in Microsoft Exchange Server 2003
Planning your Microsoft Exchange 2003 cluster deployment
  Best practices for hardware
    Check for hardware compatibility
    Choose standardized configurations
    Hardware redundancy
  Best practices for cluster configuration
    Understanding resource dependency
    Planning cluster groups
    Planning quorum type
  Storage planning
    Storage design is critical
    Placement of Microsoft Windows and Microsoft Exchange components
    Microsoft Windows Server
    Microsoft Exchange Server 2003 binaries
    Storage groups, databases, and transaction logs
    Assigning drive letters and labels
    Choosing RAID levels for Microsoft Exchange components
    Allocate dedicated spindles to each Microsoft Exchange virtual server
    Using mount points
    Use a consistent naming standard for folders and databases
    Obtain TCP/IP addresses prior to installation
    Cluster naming conventions
  Microsoft Exchange Server roles
    Implement dedicated Microsoft Exchange clusters
    Assign specific roles to clusters for enterprise deployments
    First server and site replication services (SRS) server
    Implement clusters as mailbox servers and public folder servers
    Clusters and active/active scalability limits
What’s new in Microsoft Windows Server 2003
Best practices for server configuration
    Build redundancy into your Microsoft Windows Server infrastructure
  Cluster service account best practices
    Create one cluster service account per cluster
    Use a consistent naming convention for cluster service accounts
    Do not use the cluster service account to log on interactively
    Delegate Microsoft Exchange permissions
  Upgrade to the latest service pack on each node
    Change the temp folder used by the cluster service account
  Network configuration
    Separate private and public networks
    Do not use DHCP for cluster network connections
    Label network connections
    Modify the binding order on network connections
    Disable unnecessary protocols and services on the cluster interconnect connection
    Set correct settings for cluster communications
    NIC teaming configuration
    IPSec
  Geographically-dispersed clusters
  Set staggered boot delays on each node
  OS tuning for large memory
    More than 1GB of RAM in Microsoft Windows 2000 Advanced Server
    More than 1GB of RAM in Microsoft Windows Server 2003
  Cluster validation testing
    Planned failover
    Unplanned failover
    Power outage test
  Best practices for Microsoft Exchange Server installation
    Prerequisites
    Document your cluster installation
    Front-end / back-end architectures
  Upgrading Microsoft Exchange 5.5 clusters
  Upgrading Microsoft Exchange 2000 clusters
    Removing Microsoft Exchange 2000 tuning parameters
  Before upgrading or refreshing hardware
Best practices for systems management
  Active Directory
    Create a separate OU for virtual servers and cluster network names
  Cluster configuration
    Capture configuration information
    Non-critical cluster resources
    Majority node set
  Training and expertise
  Microsoft Windows Server resource kits
  Securing your cluster
    Preventing viral attacks
  Standardizing configuration
    Hardware configuration
    Device drivers
    Using QFECHECK to verify hotfix installations
  Microsoft Exchange Server service packs
    Upgrade to the latest Microsoft Exchange Server service pack
    Recommended procedure for Microsoft Exchange service packs
    How to avoid reboots during a rolling upgrade
  Third-party products
  Use cluster administrator to perform cluster operations
  Use system manager to change Microsoft Exchange virtual servers
  Failovers
    Planned failovers
    Tips for reducing planned failover time
    Unplanned failovers
    Tips for reducing unplanned failover time
    Managing storage
  Performance monitoring
    Operations checks
    Event logs
    Monitoring mail queues
    Monitoring virtual memory
Best practices for disaster recovery
  What should I back up?
  Checking backups
  Microsoft Cluster Tool
  Restoring quorum
Conclusion
For more information
  Utilities
  Microsoft Knowledge Base articles
Executive summary
Microsoft® Exchange Server cluster deployments are more complex than single-server
implementations. Clusters depend on complex interactions among hardware, the operating system,
applications, and administrators, so careful attention to detail is needed to ensure that a Microsoft
Exchange Server cluster is properly planned, installed, and configured. The aim of this whitepaper is
to present HP best practices for Microsoft Exchange Server cluster deployments. The following areas
are addressed:

• Overview of Microsoft Exchange Server clusters
• What’s new in Microsoft Exchange Server 2003
• Planning your Microsoft Exchange Server cluster deployment
• What’s new in Microsoft Windows® Server 2003
• Best Practices for Microsoft Windows Server configuration
• Best Practices for Systems Management
• Best Practices for Disaster Recovery


Introduction
This document provides best practices for successful Microsoft Exchange Server 2003 cluster
deployments. These best practices have been derived from the experience of HP engineers in
deployments across a wide range of customer environments. This paper is an update to the original
whitepaper on best practices for Microsoft Exchange Server 2000 deployments, and there are
references in this document to both Microsoft Exchange Server 2003 and 2000. The original
document is available on HP ActiveAnswers, http://h71019.www7.hp.com/1-6-100-225-1-00.htm,
so if you are only considering a Microsoft Exchange 2000 deployment, you should consult that
document at http://activeanswers.compaq.com/aa_downloads/6/100/225/1/42311.pdf.
However, if you wish to understand the differences and the advantages of a Microsoft Exchange
2003 cluster, then this document will cover those differences. A primary consideration is whether you
plan to run Microsoft Windows Server 2003. Microsoft Exchange 2000 can not run on Microsoft
Windows Server 2003, yet Microsoft Exchange 2003 can run on either Microsoft Windows Server
2000 or 2003.


This paper addresses the following areas:

• Overview of Microsoft Exchange Server clusters: Covers concepts such as Microsoft Cluster Server,
  cluster resource groups, cluster terminology, cluster failover and failback operations, and how
  Microsoft Exchange Server interacts with Microsoft Cluster Server.
• What’s new in Microsoft Exchange Server 2003: Focuses on the features and changes in the new
  product that affect clustering.
• Planning your Microsoft Exchange Server cluster: Best practices for choosing hardware for your
  cluster, designing storage for Microsoft Exchange, cluster naming conventions and
  recommendations for your Microsoft Windows Server infrastructure.
• What’s new in Microsoft Windows Server 2003: Specifically, what features and changes are in the
  new product and how they affect clustering.
• Best Practices for Microsoft Windows Server configuration: This section looks at configuring
  Microsoft Windows Server optimally for Microsoft Exchange Server 2003 clusters. Detailed
  recommendations on network configuration, failover testing, and securing your Microsoft Exchange
  Server 2003 cluster from viruses are presented.


• Best Practices for Systems Management: Covers areas such as System Manager training, best
  practices for Service Pack upgrades, tips for reducing downtime during failover/failback
  operations, and performance monitoring.
• Best Practices for Disaster Recovery: Covers cluster-specific disaster recovery concepts, including
  how to use Resource Kit tools such as the Cluster Recovery utility and the cluster diagnostic tools.


Overview of Microsoft Exchange Server clusters
Clusters explained
A server cluster is a group of servers configured to work together to provide a single system with
high availability for data and applications. When an application or hardware component fails on a
standalone server, service cannot be restored until the problem is resolved. When a failure occurs on
a server within a cluster, resources and applications can be redirected to another server in the cluster,
which takes over the workload.

Microsoft cluster server
Microsoft introduced clustering support with the release of Microsoft Windows NT 4.0 Enterprise
Edition in 1997. Microsoft Cluster Server (MSCS) could be used to connect two NT 4.0 servers and
provide high availability to file and print services and applications. Note the following differences
and progression:

• Microsoft Cluster Server in Microsoft Windows NT 4.0 is limited to a maximum of two servers
  (nodes) in a cluster.
• Microsoft Cluster Server in Microsoft Windows 2000 Advanced Server is also limited to two
  nodes per cluster; however, each node can host an active Microsoft Exchange Virtual Server
  (more detail on the design limitations of doing so appears later).
• Microsoft Windows 2000 Datacenter Server supports a maximum of four nodes per cluster
  and can host N+I active virtual servers, for example, three active and one passive.
• Microsoft Windows Server 2003 Enterprise Edition supports a maximum of eight nodes per cluster
  and can host N+I active virtual servers, for example, six active and two passive.
• Microsoft Windows Server 2003 Datacenter Edition also supports a maximum of eight nodes per
  cluster.

See http://www.microsoft.com/windowsserver2003/evaluation/features/compareeditions.mspx for a
Microsoft Windows Server 2003 product comparison.
For a thorough discussion of what Microsoft Cluster Server can and cannot do, see:
http://www.microsoft.com/technet/prodtechnol/windowsserver2003/support/SerCsFAQ.asp.


Shared-nothing cluster model
Microsoft Cluster Server uses the shared-nothing cluster model. With shared-nothing, each server owns
and manages local devices as specific cluster resources. Devices that are common to the cluster and
physically available to all nodes are owned and managed by only one node at a time. For resources
to change ownership, a complex reservation and contention protocol is followed and implemented by
cluster services running on each node. The shared-nothing model dictates that while several nodes in
the cluster may have access to a device or resource, the resource is owned and managed by only one
system at a time. (In a Microsoft Cluster Server cluster, a resource is defined as any physical or logical
component that can be brought online and taken offline, managed in a cluster, hosted by only one
node at a time, and moved between nodes.)


Each node has its own memory, system disk, operating system, and subset of the cluster’s resources. If
a node fails, the other node takes ownership of the failed node’s resources (this process is known as
failover). Microsoft Cluster Server then registers the network address for the resource on the new node
so that client traffic is routed to the system that is available and now owns the resource. When the
failed node is later brought back online, Microsoft Cluster Server can be configured to redistribute
resources and client requests appropriately (this process is known as failback).

Resources and resource groups
The basic unit of management in a Microsoft cluster is the resource. Resources are logical or physical
entities or units of management within a cluster system that can be changed in state from online to
offline, are manageable by the cluster services, and are owned by one cluster node at a time. Cluster
resources include entities such as hardware devices like network interface cards and disks or logical
entities like server name, network name, IP address, and services. Resources are defined within the
cluster manager and selected from a list of pre-defined choices. Microsoft Windows Server 2003
provides a set of resource DLLs (for example, for file and print shares and for generic services or
applications), and cluster-aware applications provide their own resource DLLs and resource monitors.
A cluster typically contains both physical and logical resources. Within the Microsoft
Cluster Server framework, shown in Figure 1, resources are grouped into logical units of management
and dependency called resource groups. A resource group is usually comprised of both logical and
physical resources such as virtual server name, IP address, and disk resources. Resource groups can
also just include cluster-specific resources used for managing the cluster itself. The key in the shared-
nothing model is that a resource can only be owned by one node at a time. Furthermore, the
resources that are part of the resource group owned by a cluster node must exist on that node only.
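
As an illustration of how resources and groups are created programmatically, the following minimal
sketch uses the Win32 Cluster API (clusapi.h). The group and resource names are hypothetical, and
in practice an IP Address resource also needs its private properties (address, subnet mask, and
network) set before it can be brought online.

    /* Minimal sketch: create a resource group and add an IP Address
     * resource to it. Link against clusapi.lib; names are hypothetical. */
    #include <windows.h>
    #include <clusapi.h>

    int main(void)
    {
        HCLUSTER hCluster = OpenCluster(NULL);   /* NULL = local cluster */
        if (hCluster == NULL)
            return 1;

        /* A resource group is the unit of failover. */
        HGROUP hGroup = CreateClusterGroup(hCluster, L"EVS1 Group");
        if (hGroup != NULL) {
            /* "IP Address" is one of the resource types built into MSCS.
             * Its private properties must still be set before onlining. */
            HRESOURCE hIp = CreateClusterResource(hGroup, L"EVS1 IP Address",
                                                  L"IP Address", 0);
            if (hIp != NULL)
                CloseClusterResource(hIp);
            CloseClusterGroup(hGroup);
        }
        CloseCluster(hCluster);
        return 0;
    }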




Figure 1. Basic 2-Node Microsoft Cluster Server Configuration. [Diagram: clients connect to Node A
and Node B; data and executable files reside on shared cluster drives, while each node keeps its
system files on a local drive.]
The shared-nothing model prevents different nodes within the cluster from simultaneous ownership of
resource groups or resources within a resource group. As mentioned earlier, resource groups also
maintain a dependency relationship between different resources contained within each group. This is
because resources in a cluster very often depend upon the existence of other resources in order to
function or start. For example, a virtual server or network name must have a valid IP Address in order
for clients to access that resource. Therefore, in order for the network name or virtual server to start (or
not fail), the IP Address it depends upon must be available. This is known as resource dependency.
Within a resource group, the dependencies among resources can be quite simple or very complex.
Resource dependencies are maintained in the properties of each resource and allow the cluster
service to manage how resources are taken offline and brought on line. Also, resource dependencies
cannot extend beyond the context of the resource group to which they belong. For example, a virtual
server cannot have a dependency upon an IP Address that exists within a resource group other than
its own resource group.
This restriction exists because resource groups within a cluster can be brought online and
offline and moved from node to node independently of one another. Each resource group also
maintains a cluster-wide policy that specifies its Preferred Owner (the node in the
cluster it prefers to run on) and its Possible Owners (the nodes that it can fail over to in the event of a
failure condition).
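
To make resource dependency concrete, the following minimal sketch uses the Win32 Cluster API
(clusapi.h) to declare that a network name resource depends on an IP address resource in the same
resource group; the resource names are hypothetical examples.

    /* Minimal sketch, assuming the Platform SDK Cluster API (clusapi.h)
     * and linking against clusapi.lib. Resource names are hypothetical. */
    #include <windows.h>
    #include <clusapi.h>
    #include <stdio.h>

    int main(void)
    {
        HCLUSTER hCluster = OpenCluster(NULL);   /* NULL = local cluster */
        if (hCluster == NULL) {
            fprintf(stderr, "OpenCluster failed: %lu\n", GetLastError());
            return 1;
        }

        /* Both resources must already exist in the same resource group. */
        HRESOURCE hNetName = OpenClusterResource(hCluster, L"EVS1 Network Name");
        HRESOURCE hIpAddr  = OpenClusterResource(hCluster, L"EVS1 IP Address");
        if (hNetName && hIpAddr) {
            /* The network name cannot come online until the IP address is online. */
            DWORD status = AddClusterResourceDependency(hNetName, hIpAddr);
            printf("AddClusterResourceDependency returned %lu\n", status);
        }

        if (hIpAddr)  CloseClusterResource(hIpAddr);
        if (hNetName) CloseClusterResource(hNetName);
        CloseCluster(hCluster);
        return 0;
    }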




Resources are the fundamental unit of management within a Microsoft Windows cluster, so it is
important to understand how they function. Resource groups logically organize these resources and
make failover between nodes easier to manage.

Key cluster terminology
There has been much confusion in the world of cluster technology, not just over basic architectural
implementations but also over terminology. Table 1 below highlights some key terminology for
Microsoft Cluster Server.

Table 1. Microsoft Cluster Server Key Terminology



  Term                      Definition/Description

  Resource                  Smallest unit that can be defined, monitored and maintained by the cluster. Examples are
                            Physical Disk, IP Address, Network Name, File Share, Print Spool, Generic Service and
                            Application. Resources are grouped together into a Resource group. The cluster uses the
                            state of each resource to determine whether a failover is needed.

  Resource Group            A collection of resources that logically represents a client/server function. The smallest unit
                            that can failover between nodes.

  Resource Dependency       A resource property defining its relationship to other resources. A resource is brought
                            online after any resource that it depends on. A resource is taken offline before any
                            resources that it depends on. All dependent resources must failover together.

  Quorum Resource           Stores the cluster log data and application data from the registry used to transfer state
                            information between nodes. Used by the cluster service to determine which node can
                            continue running when nodes cannot communicate. Note that the type of quorum
                            resource, and therefore the type of the cluster (either a quorum-device cluster or a
                            majority-of-nodes cluster), is established during cluster setup and cannot be changed
                            later.

  Active/Passive            Term used for Service Failover mode where service is defined as a resource using the
                            generic resource DLL. Failover Manager limits application operation to only one node at
                            a time.

  Active/Active             More comprehensive failover capability, also known as Resource Failover mode. Utilizes
                            ISV-developed resource DLLs that are “cluster aware”. Allows the service to operate on
                            multiple nodes; individual resource instances fail over instead of the entire service.

  Membership                Term used to describe the orderly addition and removal of active nodes to and from the
                            cluster.

  Global Update             Global update propagates cluster configuration changes to all members. The cluster
                            registry is maintained through this mechanism and all activities are atomic, ordered, and
                            tolerant to failures.

  Cluster Registry          Separate from the NT registry, the cluster registry maintains configuration updates on
                            members, resources, parameters and other configuration information and is stored on the
                            cluster quorum disk. Loaded under HKLM\Cluster.

  Virtual Server            The network resource used by clients for the Exchange cluster resource group – a
                            combination or collection of configuration information and resources such as network
                            name and IP address resources. Can refer to a Microsoft Cluster Server virtual server or a
                            logical set of services provided by Internet Information Server (IIS).

  Physical Machine          The physical hardware device that comprises an individual cluster node. Can host
                            multiple virtual servers and resources.


Microsoft cluster service components
Microsoft Cluster Service is implemented as a set of independent, somewhat isolated components
(device drivers and services). This set of components layers on top of the Microsoft
Windows Server operating system and acts as a service. By using this design approach, Microsoft
avoided many complexities that other design approaches might have introduced, such as
system scheduling and processing dependencies between the cluster service and the operating
system. When layered on Microsoft Windows Server, the cluster service provides some basic
functions that the operating system needs in order to support clustering. These basic functions include
dynamic network resource support, file system support for disk mounting and unmounting, and shared
resource support for the I/O subsystem. Table 2 below provides a brief overview of each of these
components.

Table 2. Microsoft Cluster Service Components


  Component                                     Role/Function

  Node Manager                                  Maintains resource group ownership of cluster nodes
                                                based on resource group node preferences and the
                                                availability of cluster nodes.

  Resource Monitor                              Utilizes the cluster resource API and RPCs to maintain
                                                communication with the resource DLLs. Each monitor runs
                                                as a separate process.

  Failover Manager                              Works in conjunction with the resource monitors to
                                                manage resource functions within the cluster such as
                                                failovers and restarts.

  Checkpoint Manager                            Maintains and updates application states and registry
                                                keys on the cluster quorum resource.

  Configuration Database Manager                Maintains and ensures coherency of the cluster database
                                                on each cluster node that includes important cluster
                                                information such as node membership, resources,
                                                resource groups, and resource types.

  Event Processor                               Processes events relating to state changes and requests
                                                from cluster resources and applications.

  Membership Manager                            Manages cluster node membership and polls cluster
                                                nodes to determine state.

  Event Log Replication Manager                 Replicates system event log entries across all cluster
                                                nodes.

  Global Update Manager                         Provides updates to the Configuration Database
                                                Manager to ensure cluster configuration integrity and
                                                consistency.

  Object Manager                                Provides management of all cluster service objects and
                                                the interface for cluster administration.

  Log Manager                                   Works with the Checkpoint Manager to ensure that the
                                                recovery log on the cluster quorum disk is current and
                                                consistent.




For discussions pertaining to Microsoft Exchange Server, there are three key components in Microsoft
Cluster Service to consider:

The cluster service
The Cluster Service (which is actually a group of components consisting of the Event Processor, the
Failover Manager/Resource Manager, the Global Update Manager, and so forth) is the core
component of Microsoft Cluster Server. The Cluster Service controls cluster activities and performs
such tasks as coordinating event notification, facilitating communication between cluster components,
handling failover operations, and managing the configuration. Each cluster node runs its own Cluster
Service.

The resource monitor
The Resource Monitor is an interface between the Cluster Service and the cluster resources, and runs
as an independent process. The Cluster Service uses the Resource Monitor to communicate with the
resource DLLs. The DLL handles all communication with the resource, thus shielding the Cluster Service
from resources that misbehave or stop functioning. Multiple copies of the Resource Monitor can be
running on a single node, thereby providing a means by which unpredictable resources can be
isolated from other resources.

The resource DLL
The third key Microsoft Cluster Server component is the resource DLL. The Resource Monitor and
resource DLL communicate using the Microsoft Cluster Server Cluster Resource API, which is a
collection of entry points, callback functions, and related structures and macros used to manage
resources. Applications that implement their own resource DLLs to communicate with the Cluster
Service and that use the Cluster API to request and update cluster information are defined as cluster-
aware applications. Applications and services that do not use the Cluster or Resource APIs and cluster
control code functions are unaware of clustering and have no knowledge that Microsoft Cluster Server
is running. These cluster-unaware applications are generally managed as generic applications or
services. Both cluster-aware and cluster-unaware applications run on a cluster node and can be
managed as cluster resources. However, only cluster-aware applications can take advantage of
features offered by Microsoft Cluster Server through the Cluster API. Cluster-aware applications can
report status upon request to the Resource Monitor, respond to requests to be brought online or taken
offline gracefully, and respond more accurately to IsAlive and LooksAlive requests issued by the
cluster service. Cluster-aware applications should also implement Cluster Administrator extension
DLLs, which contain implementations of interfaces from the Cluster Administrator extension API. A
Cluster Administrator extension DLL allows an application to be configured into the Cluster
Administrator tool (CluAdmin.exe). Implementing custom resource and Cluster Administrator extension
DLLs allows for specialized management of the application and its related resources, and enables the
system administrator to install and configure the application more easily.
As discussed earlier, to the Cluster Service, a resource is any physical or logical component that can
be managed. Examples of resources are disks, network names, IP addresses, databases, IIS Web
roots, application programs, and any other entity that can be brought online and taken offline.
Resources are organized by type. Resource types include physical hardware (such as disk drives) and
logical items (such as IP addresses, file shares, and generic applications). Every resource uses a
resource DLL, a largely passive translation layer between the Resource Monitor and the resource. The
Resource Monitor calls the entry point functions of the resource DLL to check the status of the resource
and to bring the resource online and offline. The resource DLL is responsible for communicating with
its resource through any convenient IPC mechanism to implement these methods. Applications or
services that do not provide their own resource DLLs can still be configured into the cluster
environment. Microsoft Cluster Server includes a generic resource DLL, and the Cluster Service treats
these applications or services as generic, cluster-unaware applications or services. However, if an
application or service needs to take full advantage of a clustered environment, it must implement a
custom resource DLL that can interact with the Cluster Service and use the full set of features
provided by Microsoft Cluster Service.
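
As a sketch of what a custom resource DLL implements, the following outlines the two Resource API
health-check entry points declared in resapi.h. The health check shown (querying a Win32 service)
is an assumed example, and a complete resource DLL also implements Startup, Open, Online,
Offline, Terminate, and Close, registering them in a CLRES_FUNCTION_TABLE.

    /* Minimal sketch of the Resource API polling entry points (resapi.h).
     * The service name "MyService" is a hypothetical example. */
    #include <windows.h>
    #include <resapi.h>

    /* LooksAlive: called frequently, so it should be a cheap, cursory check. */
    BOOL WINAPI MyResourceLooksAlive(RESID ResourceId)
    {
        (void)ResourceId;  /* identifies per-resource state created in Open */
        return TRUE;       /* cursory check only; defer real work to IsAlive */
    }

    /* IsAlive: called less often; performs a thorough health check. */
    BOOL WINAPI MyResourceIsAlive(RESID ResourceId)
    {
        SERVICE_STATUS status;
        BOOL healthy = FALSE;
        (void)ResourceId;

        SC_HANDLE hScm = OpenSCManagerW(NULL, NULL, SC_MANAGER_CONNECT);
        if (hScm == NULL)
            return FALSE;

        /* Query the Win32 service this resource is assumed to control. */
        SC_HANDLE hSvc = OpenServiceW(hScm, L"MyService", SERVICE_QUERY_STATUS);
        if (hSvc != NULL) {
            if (QueryServiceStatus(hSvc, &status))
                healthy = (status.dwCurrentState == SERVICE_RUNNING);
            CloseServiceHandle(hSvc);
        }
        CloseServiceHandle(hScm);

        /* Returning FALSE tells the Resource Monitor to restart or fail over. */
        return healthy;
    }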

Cluster failover
With Microsoft Cluster Server, two types of failover are supported: Resource and Service Failover.
Both allow for increased system availability. More comprehensive in capabilities, the Resource
Failover mode takes advantage of cluster APIs that enable applications to be “cluster aware.” This is
provided via a Resource DLL that can be configured to allow customizable failover of the application.
Resource DLLs provide a means for Microsoft Cluster Server to manage resources. They define
resource abstractions, interfaces, and management. In a resource failover mode of operation, it is
assumed that the service is running on both nodes of the Microsoft Cluster Server cluster (also known
as “Active/Active”) and that a specific resource – such as a database, virtual server, or an IP address
– fails over, not the entire service. Many applications, from independent software vendors as well as
those from Microsoft, do not have resource DLLs available that enable them to be cluster aware. To
offset this, Microsoft has provided a generic service resource DLL, which provides basic functionality
to these applications running on Microsoft Cluster Service. The generic resource DLL provides for the
Service Failover mode and limits the application to running on one node only (also known as
“Active/Passive”). In a Service Failover mode, a service is defined to Microsoft Cluster Server as a
resource. Once defined, the Microsoft Cluster Server Failover Manager ensures that the service is
running on only one node of the cluster at any given time. The service is part of a resource group that
uses a common name throughout the cluster. As such, all services running in the resource group are
available to any network clients using the common name.
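
A planned failover of an entire resource group can also be initiated programmatically rather than
through Cluster Administrator. The following minimal sketch moves a group to another node using the
Win32 Cluster API; the group and node names are hypothetical.

    /* Minimal sketch of a planned failover: move a resource group to
     * another node (clusapi.h). Names are hypothetical examples. */
    #include <windows.h>
    #include <clusapi.h>
    #include <stdio.h>

    int main(void)
    {
        HCLUSTER hCluster = OpenCluster(NULL);   /* local cluster */
        if (hCluster == NULL)
            return 1;

        HGROUP hGroup = OpenClusterGroup(hCluster, L"EVS1 Group");
        HNODE  hNode  = OpenClusterNode(hCluster, L"NODEB");
        if (hGroup && hNode) {
            /* Takes the group offline on the current owner, transfers
             * ownership, and brings it online on NODEB, honoring the
             * resource dependencies within the group. */
            DWORD status = MoveClusterGroup(hGroup, hNode);
            printf("MoveClusterGroup returned %lu\n", status);
        }

        if (hNode)  CloseClusterNode(hNode);
        if (hGroup) CloseClusterGroup(hGroup);
        CloseCluster(hCluster);
        return 0;
    }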

Cluster-aware vs. non cluster-aware applications
Cluster-aware applications provide the highest levels of functionality and availability in a Microsoft
Cluster Server environment. The application and the cluster services can exchange feedback that
facilitates optimal operation, and as much application state as possible is preserved
during failover. Examples of cluster-aware applications are Microsoft SQL Server, SAP R/3, Baan,
PeopleSoft, Oracle, and Microsoft Exchange Server 2003 Enterprise Edition. Non cluster-aware
applications have several limitations discussed previously. The application and the cluster software
cannot communicate with each other. Any communication that occurs is limited to that provided by
the generic resource DLL provided with Microsoft Cluster Server. Examples of non cluster-aware
applications are file and print services, Microsoft Internet Information Server, and Microsoft Exchange
Server 5.5 Enterprise Edition. The cluster software has no application awareness and simply
understands that a generic service or group of services and resources must be treated as a failover
unit.

Active/active vs. active/passive
When deploying cluster solutions with Microsoft Windows Server, the level of functionality and
flexibility that an application can enjoy in a clustered environment directly relates to whether it
supports active/passive or active/active configuration. Active/active means that an application
provides functionality on all nodes in the cluster at the same time. This means that the application
services are running and servicing users from each node in the cluster. To do this, an application must
have support for communicating with the cluster services via its own resource DLL (cluster-aware).
Also, the application must be architected in such a way that specific resource units can be treated
independently and failed over to other nodes. Per the discussions above, this requires specific support
from the application vendor (whether Microsoft or third-party vendors) in order for the application to
run in an active/active cluster configuration. For active/passive configurations, the application is
either limited architecturally (cannot run on two active nodes), has no specific resource DLL
support, or both. In an active/passive 2-node configuration, the application runs on only one cluster
node at a time. The application may or may not be cluster aware (see above). In some applications
capable of active/active, such as Microsoft Exchange Server 2000 and 2003, there may be
limitations on running in active/active versus active/passive (covered later).
The discussions above on the failover types (Service and Resource Failover) as well as the differences
between cluster-aware and non cluster-aware applications are distinct from active/active and
active/passive configurations. These terms are often confused, but an active/passive cluster
application can be cluster aware, with application services running on every node, even those that
are not actively providing end-user functionality. However, an active/active application must be
cluster aware and a non cluster-aware application must be deployed in an active/passive
configuration.

Exchange virtual servers
From an administrative perspective, all components required to provide services and a unit of failover
are grouped into a Microsoft Exchange Virtual Server (EVS). One or more Microsoft Exchange Virtual
Servers can exist in the cluster, and each virtual server runs on one of the nodes in the cluster.
Microsoft Exchange Server 2000/2003 can support multiple virtual servers on a single node,
although this may be less than desirable – see the discussion of active/active design and failover.
A Microsoft Exchange Virtual Server, at a minimum, includes a storage group and the required
protocols. From the viewpoint of Microsoft Cluster Server, a Microsoft Exchange Virtual Server exists
as a resource in each cluster resource group. If you have multiple Microsoft Exchange Virtual Servers
that share the same physical disk resource (i.e., each has a storage group that resides on the same
disk device), they must all exist within the same resource group and cannot be split into separate
resource groups. This administrative restriction ensures that the resources and virtual servers all fail
over as a single unit and that resource group integrity is maintained. Clients connect to the virtual
servers the same way that they would connect to a stand-alone server. The cluster service monitors the
virtual servers in the cluster. In the event of a failure, the cluster service restarts or moves the affected
virtual servers to a healthy node. For planned outages, the administrator can manually move the
virtual servers to other nodes. In either event, the client sees an interruption of service only during the
brief time that the virtual server is in an online/offline pending state.
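
Monitoring tools can observe these state transitions through the Cluster API. The following minimal
sketch queries the state and current owner node of a single resource; the resource name shown is a
hypothetical example.

    /* Minimal sketch: query the state and current owner node of a
     * cluster resource (clusapi.h). The resource name is hypothetical. */
    #include <windows.h>
    #include <clusapi.h>
    #include <stdio.h>

    int main(void)
    {
        HCLUSTER hCluster = OpenCluster(NULL);
        if (hCluster == NULL)
            return 1;

        HRESOURCE hRes = OpenClusterResource(hCluster, L"EVS1 System Attendant");
        if (hRes != NULL) {
            WCHAR node[64], group[64];
            DWORD cchNode = 64, cchGroup = 64;

            /* Returns values such as ClusterResourceOnline,
             * ClusterResourceOffline, or ClusterResourceFailed. */
            CLUSTER_RESOURCE_STATE state =
                GetClusterResourceState(hRes, node, &cchNode, group, &cchGroup);

            wprintf(L"Resource state %d, owned by node %s (group %s)\n",
                    (int)state, node, group);
            CloseClusterResource(hRes);
        }
        CloseCluster(hCluster);
        return 0;
    }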

Clustering support for Microsoft Exchange Server components
Clustering support differs according to the component; not all components of Microsoft Exchange
Server 2003 are supported in a clustered environment. The following table details which
components are supported and, in some cases, the type of clustering they are capable of supporting:

Table 3. Microsoft Exchange Server Component Cluster Support

Source: Deploying Exchange Server Clusters, and What's New in Exchange Server 2003 -- Microsoft
  Microsoft Exchange          Cluster             Microsoft Exchange 2000         Microsoft Exchange 2003
  Server Component            Functionality

  Exchange System             Active/Active       Each Microsoft Exchange         Same as Microsoft
  Attendant                                       Virtual Server is created by    Exchange 2000
                                                  the System Attendant
                                                  resource when configured.

  Information Store           Active/Active       Each cluster node is limited    Same as Microsoft
                                                  to 4 Storage Groups.            Exchange 2000

  Message Transfer Agent      Active/Passive      The MTA will be in only one     Same as Microsoft
  (MTA)                                           cluster group. One MTA          Exchange 2000
                                                  instance per cluster.



 POP3 Protocol              Active/Active       Multiple virtual servers per   Same as Microsoft
                                                node.                          Exchange 2000

 IMAP Protocol              Active/Active       Multiple virtual servers per   Same as Microsoft
                                                node.                          Exchange 2000

 SMTP Protocol              Active/Active       Multiple virtual servers per   Same as Microsoft
                                                node.                          Exchange 2000

 HTTP DAV Protocol          Active/Active       Multiple virtual servers per   Same as Microsoft
                                                node.                          Exchange 2000

 NNTP Protocol              Active/Active       Multiple virtual servers per   Same as Microsoft
                                                node.                          Exchange 2000

 MS Search Server           Active/Active       One Instance per virtual       Same as Microsoft
                                                server                         Exchange 2000

 Site Replication Service   Active/Passive      Not Supported in a cluster     Same as Microsoft
                                                                               Exchange 2000

 MSMail Connector           Active/Passive      Not Supported in a cluster     Microsoft Exchange
                                                                               2000 only

 cc: Mail Connector         Active/Passive      Not Supported in a cluster     Microsoft Exchange
                                                                               2000 only

 Lotus Notes Connector      Active/Passive      Not Supported in a cluster     Same as Microsoft
                                                                               Exchange 2000

 Novell GroupWise           Active/Passive      Not Supported in a cluster     Same as Microsoft
 Connector                                                                     Exchange 2000

 SNADS Connector            Active/Passive      Not Supported in a cluster     Microsoft Exchange
                                                                               2000 only

 PROFS Connector            Active/Passive      Not Supported in a cluster     Microsoft Exchange
                                                                               2000 only

 Active Directory           Active/Passive      Not Supported in a cluster     Same as Microsoft
 Connector                                                                     Exchange 2000

 Key Management             Active/Passive      Not Supported in a cluster     Microsoft Exchange
 Service                                                                       2000 only

 Chat Service               Active/Passive      Not Supported in a cluster     Microsoft Exchange
                                                                               2000 only

 Conferencing Manager       Active/Passive      Not Supported in a cluster     Microsoft Exchange
 Services                                                                      2000 only

 Video Conferencing         Active/Passive      Not Supported in a cluster     Microsoft Exchange
 Service                                                                       2000 only



Microsoft Exchange Server provides its core features and support for Microsoft Cluster Server via two
key components as shown in Figure 2. These are the Microsoft Exchange Cluster Administration DLL
and the Exchange Resource DLL.



Figure 2. How Microsoft Exchange interfaces with Microsoft Cluster Services




Key component: Exchange resource DLL (EXRES.DLL)
Recall from the earlier discussion of cluster-aware versus non cluster-aware applications that the
existence of an application-specific resource DLL is the key differentiator for cluster-aware
applications. Recall also that Microsoft Exchange 5.5 did not provide its own resource DLL and made
use of the generic resource DLL provided with Microsoft Cluster Server.
For Microsoft Exchange Server 2003, Microsoft developers took the extra time and effort to
guarantee full cluster functionality. The result of that effort is the Microsoft Exchange resource DLL,
EXRES.DLL. This DLL is installed when the Microsoft Exchange Server 2003 setup
application detects that it is operating in a clustered environment. EXRES.DLL acts as a direct
resource monitor interface between the cluster services and Microsoft Exchange Server 2003 by
implementing the Microsoft Cluster Services API set. Table 4 below shows the typical interactions and
indications that EXRES.DLL provides between Microsoft Exchange resources and cluster services.

Table 4. Microsoft Exchange Resource DLL (EXRES.DLL) Interactions and Functions



  Interaction/Indicator             Function

  Online/Offline                    Microsoft Exchange Virtual Server Resource is running, stopped, or in
                                    an idle state.

  Online/Offline Pending            In the process or service is in the state of starting or shutting down.

  Looks Alive/Is Alive              Resource polling functions to determine whether resource should be
                                    restarted or failed. Can be configured in cluster administrator on a per-
                                    resource basis.

  Failed                            The resource failed on the Is Alive call and was not able to be restarted
                                    (restart failed).

  Restart                           Resource has failed on the Is Alive call and is directed to attempt a restart.




Key component: Exchange cluster admin DLL (EXCLUADM.DLL)
In order for Microsoft Exchange resources to be configured and controlled by the Cluster
Administrator, there must be an enabler for Microsoft Exchange services to communicate with the
Cluster Administrator and for the Cluster Administrator program to provide Microsoft Exchange-
specific configuration parameters and screens. The Microsoft Exchange Cluster Administration DLL or
EXCLUADM.DLL provides this support. The Microsoft Exchange Cluster Admin DLL provides the
necessary wizard screens when configuring Microsoft Exchange resources in Cluster Administrator
and presents Microsoft Exchange resources that can be added as resources in the cluster such as the
Microsoft Exchange System Attendant. The cluster administration DLL is a key component in the
configuration and management of Microsoft Exchange services in the cluster; it is not required for
resource monitoring, restart, or failover actions, which the Microsoft Exchange Resource DLL
(EXRES.DLL) performs.

Microsoft Exchange Server cluster resource dependencies
Figure 3 illustrates a tree structure of Microsoft Exchange 2000 cluster resource dependencies and
Figure 4 shows the newer Microsoft Exchange Server 2003 cluster resource dependencies. Microsoft
Exchange services must have certain resources as predecessors before they can be brought online as
a cluster resource. By default, Microsoft Exchange Server 2003 installs nine resources, which together
form the Microsoft Exchange virtual server, into the cluster resource group being configured in Cluster
Administrator. Table 5 provides a brief description of each resource and its function.




Figure 3. Microsoft Exchange Server 2000 Cluster Resource Dependencies




Figure 4. Microsoft Exchange Server 2003 Cluster Resource Dependencies.




When you make a change to Storage Group configuration (including Recovery Storage Groups,
covered later in this document), System Manager displays the prompt shown in Figure 5 below to
ensure proper disk resource dependencies.



Figure 5. Prompt in System Manager to Ensure Proper Disk Resource Dependencies.




Note: As shown in Table 5 below, Microsoft Exchange
Server 2003 changes the Resource dependencies so that the
Internet protocols shown above are dependent directly on
the System Attendant instead of the Store.



Table 5. Microsoft Exchange Server 2000 and 2003 Cluster Resources




  Resource                Role

  System Attendant        Foundation Microsoft Exchange resource that must exist prior to other resources
                          being added to the cluster.
                          Resource dependency: Network Name, all physical disks for that virtual server

  Information Store       Virtual server instance for the STORE.EXE process and its presentation to MAPI
                          clients and other services.
                          Resource dependency: System Attendant

  Routing Service         Microsoft Exchange Server 2003 Routing Service virtual server instance.
                          Resource dependency: System Attendant

  MTA Service             Message Transfer Agent virtual server. Exists only on one cluster node
                          (active/passive). Provided for legacy messaging connector support and routing to
                          Microsoft Exchange 5.5 environments.
                          Resource dependency: System Attendant

  MS Search Service       Microsoft Search engine virtual server instance. Provides the Microsoft Exchange
                          content indexing service for clients. Can be removed if content indexing is not
                          needed.
                          Resource dependency: Microsoft Exchange 2003 – System Attendant;
                          Microsoft Exchange 2000 – Information Store

  SMTP Service            SMTP virtual server instance. Provides Internet protocol email sending functionality
                          and message routing within Microsoft Exchange 2000/2003.
                          Resource dependency: Microsoft Exchange 2003 – System Attendant;
                          Microsoft Exchange 2000 – Information Store

  HTTP Virtual Server     HTTP-DAV protocol virtual server. Provides Web/browser-based client access to the
                          information store.
                          Resource dependency: Microsoft Exchange 2003 – System Attendant;
                          Microsoft Exchange 2000 – Information Store

  POP3 Service            POP3 protocol virtual server. Provides POP3 client access to the information store.
                          Can be removed if that protocol is not needed.
                          Resource dependency: Microsoft Exchange 2003 – System Attendant;
                          Microsoft Exchange 2000 – Information Store

  IMAP Service            IMAP protocol virtual server. Provides IMAP client access to the information store.
                          Can be removed if that protocol is not needed.
                          Resource dependency: Microsoft Exchange 2003 – System Attendant;
                          Microsoft Exchange 2000 – Information Store

When configuring cluster resources for Microsoft Exchange, four prerequisites must be satisfied; each
is explained in more detail below:

1. Microsoft Exchange Server 2003 must be installed on all cluster nodes where Microsoft Exchange
   Virtual Servers will run.
2. The Microsoft Distributed Transaction Coordinator (MSDTC) is required as a cluster resource and
   must be configured before Microsoft Exchange 2003 can be installed or Microsoft Exchange
   2000 can be upgraded.
3. The network name must be created. As this resource is dependent upon the IP address, the IP
   address must be assigned and the resource created first.
4. The final step that must be accomplished before creating the Microsoft Exchange System Attendant
   is to create the physical disk resources that are required by the virtual server you are configuring.
   At a minimum, there must be at least one physical disk resource configured for Microsoft Exchange
   virtual servers to be added to the cluster configuration.
When Microsoft Exchange cluster resources start and stop (change states), they must do so in order of
resource dependency. This means that on startup, resources start in forward order of resource
dependence (bottom to top in Figures 3 and 4). When resources are stopped (or a resource group is
taken offline), resources are shut down in reverse order of dependence (top to bottom in Figures 3
and 4). When configuring cluster resources, an understanding of the resource dependencies for
Microsoft Exchange Server clustering makes the task simpler.
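The following minimal sketch (Python; the dependency map is a condensed, hypothetical rendering of
Table 5 and the figures above) illustrates how a dependency-ordered start sequence, and its reverse
shutdown sequence, can be derived with a topological sort:

# Hypothetical sketch: bring resources online in dependency order and
# offline in reverse; the map below condenses the Table 5 dependencies.
DEPENDS_ON = {
    "IP Address": [],
    "Network Name": ["IP Address"],
    "Physical Disk": [],
    "System Attendant": ["Network Name", "Physical Disk"],
    "Information Store": ["System Attendant"],
    "SMTP": ["System Attendant"],    # Exchange 2003; 2000 depends on the Store
    "HTTP": ["System Attendant"],
}

def online_order(deps):
    # Topological sort: dependencies come online before their dependents.
    order, seen = [], set()
    def visit(r):
        if r in seen:
            return
        seen.add(r)
        for d in deps[r]:
            visit(d)
        order.append(r)
    for r in deps:
        visit(r)
    return order

start = online_order(DEPENDS_ON)
stop = list(reversed(start))   # offline proceeds in reverse dependency order
print("online: ", start)
print("offline:", stop)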

Microsoft Exchange virtual servers
The virtual server is the key unit of client access for Microsoft Exchange Server 2003 services running
in a clustered environment. Virtual servers exist for several different Microsoft Exchange Server
services (as shown in Table 5). The virtual server is the name by which clients, resources, and other
services access individual Microsoft Exchange services within the cluster. In Microsoft Exchange


Server 2003, a virtual server contains resources for Internet client protocols (SMTP, IMAP, POP3, and
HTTP-DAV), the Information Store (for MAPI clients), the MTA service, and the Microsoft Search
Service. The virtual server takes on the name property of the network name resource that is configured
prior to configuring Microsoft Exchange resources. For example, if you configure the network name
resource as “EXVS1,” each Exchange virtual server resource configured in the cluster resource group
for that virtual server will respond to that virtual server name when used by clients and other services
and resources.
Microsoft Exchange virtual server resources all fail over in the cluster as a managed unit. This
means that the entire cluster resource group containing the virtual server resources is failed over
together from one node in the cluster to another. One or more Microsoft Exchange information
storage groups can be configured as part of a Microsoft Exchange Server 2003 virtual server;
however, a storage group can only belong to one virtual server. When deploying Microsoft
Exchange Server 2003 clusters, ensure that you familiarize yourself with virtual servers and how they
are used.
Microsoft Exchange Server 2000/2003 can support multiple virtual servers on a single node. In an
active/active design, one node must be capable of taking on the workload of the other in the event of
a failover. This limits the workload of each node (in a 2-node cluster), as you must leave ample
headroom for the failover to occur. In light of this, Microsoft places restrictions on active/active
design workloads; see Table 8 and the section preceding it for the restrictions. Clusters of more than
two nodes must be designed in an N+I active/passive configuration.
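As a worked example of the headroom rule (the figures below are assumptions for illustration, not
Microsoft's published limits):

# Hypothetical headroom check for a 2-node active/active design: after a
# failover the surviving node carries both workloads, so each node's normal
# load must leave room for the other's. The 80% ceiling is an assumption.
safe_ceiling = 0.80
node_a_load = 0.35   # example steady-state utilization figures
node_b_load = 0.40

survivor_load = node_a_load + node_b_load
print(f"post-failover load on surviving node: {survivor_load:.0%}")
if survivor_load > safe_ceiling:
    print("insufficient headroom: reduce per-node load or go active/passive")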


What’s new in Microsoft Exchange Server 2003
From a clustering standpoint, the most relevant changes are:

• Support for Microsoft Windows Server 2003 features such as up to 8-node clusters, volume mount
  points, and new quorum resource models.
• Additional prerequisite checking during Microsoft Exchange installation
• Flattening of the resource dependency tree allowing for greater flexibility and shorter resource
  failover times
• The Microsoft Exchange 2003 permissions model has changed, for greater security

  – IPSec support for front-end to back-end server connections
  – Cluster service account requires no Microsoft Exchange-specific permissions
  – Support for Kerberos (enabled by default)
• Many Microsoft Exchange 2000 tuning parameters are no longer necessary, as Microsoft Exchange
  2003 tunes itself more accurately on large-memory and multi-processor servers
• The virtual memory management process is improved, to more efficiently reuse blocks of memory
  and thus reduce fragmentation
• On failover, Microsoft Exchange services are stopped on the passive node, which frees up memory
  and prevents fragmentation
• IMAP4 and POP3 resources are not added automatically when you create a Microsoft Exchange
  Virtual Server (but are upgraded if existing in Microsoft Exchange 2000)
• The Setup routine no longer prompts that a cluster has been detected (previously this halted setup
  until the prompt was acknowledged)

See the Microsoft document What's New in Microsoft Exchange Server 2003,
http://www.microsoft.com/technet/prodtechnol/exchange/2003/library/newex03.mspx for a
more comprehensive list.


See also HP’s paper Exchange Server 2003 Clusters on Windows Server 2003
http://h71028.www7.hp.com/aa_downloads/6/100/225/1/65234.pdf


Planning your Microsoft Exchange 2003 cluster deployment
Planning is critical for successful Microsoft Exchange Server 2003 cluster deployments. If you do not
plan your implementation carefully, you may have to tear down your cluster implementation and re-
implement it, including backup and restoration of critical data. The following areas should be
addressed:

• Server Hardware
• Storage Planning
• Naming Standards
• Microsoft Exchange Server roles
• Microsoft Windows Server infrastructure
• Cluster Service account

In addition to the following extensive list, you may also wish to see the Microsoft checklist
Preparation for installing a cluster at http://go.microsoft.com/fwlink/?LinkId=16302.



Best practices for hardware
Check for hardware compatibility
The hardware used in a cluster must be listed in the Microsoft Windows Catalogs. The
Windows Catalogs are replacing the Hardware Compatibility List (HCL). The catalogs and HCL can
be found at http://www.microsoft.com/whdc/hcl/default.mspx. If you implement hardware that is
not in the catalog, Microsoft will not support your cluster configuration.

Choose standardized configurations
You should configure all nodes in a cluster with identical specifications for memory, disks, CPUs, etc.
Implementing nodes of varying specifications can lead to inconsistent performance levels as Microsoft
Exchange Virtual Servers are moved between nodes. HP makes it easier to choose hardware by
offering a wide selection of supported cluster configurations (see
http://www.hp.com/servers/proliant/highavailability).

You may hear references to deployments of non-standardized configurations; however, be certain of
the reasons before using varying hardware platforms. For example, within Microsoft's internal
clustering deployment, several passive nodes are used as failover targets during rolling upgrades.
These are desirable mainly because of the frequency with which new builds are applied (Microsoft
tests its newest versions by applying them to production during the development release cycle). In
addition, varying hardware platforms can be used in a TEST cluster for evaluating new technology,
such as the difference between 4-way and 6-way platforms of varying processor speeds, cache
sizes, and so on.
In addition to standardized hardware (from the Microsoft Windows Catalogs), Microsoft specifies that
many configuration settings must be the same. For example, all network adapters on the same cluster
network must use identical communication settings such as speed, duplex mode, flow control, and
media type. The adapters connected to the same switch must have identical port settings on the
switch. Ideally, the NICs for each network should be of identical hardware model and firmware
revision in each server.




In addition, each cluster network must be configured as a single IP subnet that is distinct from the
subnets of the other cluster networks. The addresses may be assigned by DHCP, but manual
configuration with static addresses is preferred, and the use of Automatic Private IP Addressing
(APIPA) is not supported.

Hardware redundancy
The key design principle of any cluster deployment is to provide high availability. In the event of a
hardware failure, a failover operation will move resources from the failed node to another node in the
cluster. During failovers of Microsoft Exchange, users will not be able to access e-mail folders for a
short time as resources are taken offline on the failed node and brought online on the other node. For
each node in the cluster, implement redundant hardware components in order to reduce the impact of
a hardware failure and thus avoid a failover. Examples of components where redundancy can be
implemented are as follows:

• Redundant Networks - Two or more independent networks must be used to connect the nodes of a
  cluster. A cluster connected by only one network is not a supported configuration. In addition, each
  network must fail independently of the others: no single component failure, whether of a Network
  Interface Card (NIC) or a switch, should cause both cluster networks to fail. This rules out using a
  single multi-port NIC for both cluster networks.
• Network Adapter or Network Interface Card (NIC) Teaming - NIC Teaming is a feature enabled on
  HP ProLiant servers by the addition of Support Pack software. Teaming allows multiple network
  adapters to act as, and be configured as, a single virtual adapter. This allows for the failure of a
  single NIC, cable, or switch port without any interruption of service; when the team is split across
  multiple network switches, a switch failure can also be tolerated. There are three types of teaming:
  Network Fault Tolerant (NFT), which handles faults but provides no additional throughput; Transmit
  Load Balancing (TLB), which can provide multiple outbound connections; and Switch-Assisted Load
  Balancing (SALB), which requires specific network switches, such as Cisco Fast EtherChannel. Use
  NFT teaming on the public (client) network in a cluster; however, do not use NIC teaming for the
  private (heartbeat) network in a cluster, as it is not supported by Microsoft (see Q254101). For
  more information on NIC Teaming, see HP.com - ProLiant networking - teaming, at:
  http://h18000.www1.hp.com/products/servers/networking/teaming.html.
• Power supplies and fans – Connect redundant power supplies to separate Power Distribution Units
  (PDU). If both power supplies are connected to the same PDU, and the PDU fails, power will be lost.
• Host Bus Adapter/Array Controllers – If you have implemented a Storage Area Network (SAN) with
  your cluster, implement redundancy into the connections between your nodes and the SAN, so that
  failures of the HBA, Fibre Channel connections, and SAN switches can be handled without the
  need to induce cluster node failover. HBA redundancy is controlled by HP StorageWorks Secure
  Path. For more information see: HP StorageWorks Secure Path for Windows: business continuity software -
  overview & features, at http://h18006.www1.hp.com/products/sanworks/secure-path/spwin.html.




Note: The latest version of HP StorageWorks Secure Path
adds distributed port balancing across HBAs in a cluster.
Previous versions did not load-balance, and required
manually setting the paths (which was not persistent across
reboots). If you are using an older version or are not sure,
check the SAN throughput at the switch ports to see that
they are both active.



Best practices for cluster configuration
Understanding resource dependency
In the earlier section on resource dependencies, it was made clear that there is a specific sequence in
which resources are brought online or taken offline, as dictated by the resource dependency. It is
critical in your design to understand that the Microsoft Exchange System Attendant resource is
dependent on all physical disks for that virtual server. This means that an entire Microsoft Exchange
virtual server can be affected by the failure of any disk resource, be it a database or log file drive. For
very large servers in a SAN environment, this exposes all users on that server to downtime for any
disk resource failure, which is quite undesirable. It is essential that all drives be protected from failure
through RAID and redundant paths, and that the number of disk resources be limited in order to
reduce the risk of a drive failure impacting the Microsoft Exchange virtual server. For example, a very
large Microsoft Exchange server with disks for the log files of four (4) Storage Groups, separate disks
for the SMTP mailroot, and a disk for each of twenty (20) databases would perhaps be best split into
two or more virtual servers.

Planning cluster groups
Create a cluster group to contain only the quorum disk resource, cluster name, and cluster IP address.
The benefit of separating the quorum resource from the Microsoft Exchange cluster groups is that the
cluster will remain online if a resource in a Microsoft Exchange cluster group fails. The cluster
resource group owning the quorum can be run on the passive node, which isolates it from the load of
the active nodes. If the active node experiences a hard fault resulting in node failover, the failover
may be quicker because the cluster resource group is already online on the surviving node.

Planning quorum type
Deciding on the quorum type is important, as the type of quorum resource determines the type of
cluster, and the decision between the three types must be made before cluster installation. The cluster
type is either a quorum-device cluster (using either a local quorum or a single quorum device on a
shared SCSI bus or Fibre Channel) or a majority-of-nodes cluster. A quorum-device cluster is defined
by a quorum using either the Physical Disk resource type or the local quorum resource type (in either
Microsoft Windows 2000 or 2003). The local quorum resource is often used for the purpose of
setting up a single-node cluster for testing and development. In Microsoft Windows Server 2003, the
Cluster service automatically creates the local quorum resource if it does not detect a shared disk (you
do not need to specify a special switch as in the Microsoft Windows 2000 procedure). In Microsoft
Windows Server 2003 you may also create a local quorum resource after installation on a cluster
that uses a single quorum device (e.g., a multi-node cluster). This can be beneficial if you need to
replace or make other repairs to the shared disk subsystem (e.g., rebuild the drive array in a manner
that is data destructive).



Figure 6. Selecting Quorum type during Cluster Setup




A majority-of-nodes cluster is only available in Microsoft Windows 2003 and is defined by a
majority node set quorum, where each node maintains its own copy of the cluster configuration data.
Determination of which node owns the quorum, and whether the cluster is even operational, is done
through access to file shares. This design helps in geographically dispersed clusters and addresses the
‘split-brain’ issue (where more than one node could gain exclusive access to a quorum device,
because that device is replicated to more than one location). Since this design is much more complex,
it is not advised unless you work carefully with the team or engineer providing the storage
architecture. Microsoft states that you should use a majority node set cluster only in targeted scenarios
with a cluster solution offered by your Original Equipment Manufacturer (OEM), Independent
Software Vendor (ISV), or Independent Hardware Vendor (IHV)¹.



Storage planning
Storage design is critical
Before building a cluster, you should try to estimate storage requirements. Microsoft Exchange
databases expand over time and you may start to run out of disk space if you have not implemented
sufficient capacity. Having spare capacity is also useful for performing database maintenance and
disaster recovery exercises. Adding additional disk space to a cluster may require future downtime if
your array controller does not support dynamic volume expansion. Before you can size mailbox
stores, you must answer the following questions:

• How many users need mailboxes?
• What is the standard mailbox quota?
• How many servers will be deployed?
• How many Storage Groups will be deployed per server?
• How many databases per Storage Group?
• Will the mailbox stores contain legacy e-mail and, if so, how much?
• What is the acceptable time to restore a database in a disaster recovery scenario?
• What is the classification of mail usage (light, medium, heavy)?
• How long is deleted mail retained?
• What mail clients are used: POP, IMAP, HTTP, or MAPI? (This affects whether mail will be
  primarily stored in the EDB or STM files.)

1 http://www.microsoft.com/technet/prodtechnol/windowsserver2003/proddocs/entserver/sag_mscs2planning_32.asp

To assist customers in sizing storage, HP has developed a Microsoft Exchange Storage Planning
Calculator which is available for download from http://www.hp.com/solutions/activeanswers,
specifically HP ProLiant Storage Planning Calculator for Microsoft Exchange 2000,
http://h71019.www7.hp.com/1%2C1027%2C2400-6-100-225-1%2C00.htm.
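As a rough illustration of the arithmetic involved (all figures below are assumptions; the HP calculator
models many more factors, such as deleted-item retention, I/O rates, and recovery windows):

# Hypothetical back-of-envelope sizing; every input figure is an assumption.
users = 3000
quota_mb = 100            # standard mailbox quota per user
overhead = 1.4            # assumed factor for retention, whitespace, growth
storage_groups = 4
dbs_per_sg = 5

total_mb = users * quota_mb * overhead
per_db_mb = total_mb / (storage_groups * dbs_per_sg)
print(f"total store capacity needed: {total_mb / 1024:,.0f} GB")
print(f"average size per database:   {per_db_mb / 1024:,.1f} GB")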

Placement of Microsoft Windows and Microsoft Exchange components
As part of the storage design process, you will need to consider the placement of the following
Microsoft Windows and Microsoft Exchange components, listed in Table 6, below.

Table 6. Placement of Microsoft Windows and Microsoft Exchange components.


Component                                               Default location (disk\path)

Microsoft Windows Server binaries                       %systemroot%

Microsoft Windows Server pagefile                       Typically C:

Quorum disk (for clusters)                              Administrator defined

Microsoft Exchange Server 2003 binaries                 %ProgramFiles%\Exchsrvr, typically on C:

Microsoft Exchange Server Storage Groups                Administrator defined – default for cluster is on
                                                        ‘Data Directory’

● Microsoft Exchange Server mail stores (EDB            Administrator defined – default for cluster is on
  and STM files)                                        ‘Data Directory’

● Microsoft Exchange Server transaction logs            Administrator defined – default for cluster is on
                                                        ‘Data Directory’

● Public Folder stores (EDB and STM files)              See ‘Data Directory’ below

Microsoft Exchange Server 2003 SMTP mailroot            See ‘Data Directory’ below
folders (pickup, queue, and badmail)

Microsoft Exchange Server 2003 Message                  See ‘Data Directory’ below
Transfer Agent (MTA) work directory

Microsoft Exchange Server 2003 message                  Administrator defined – see the FTI whitepaper
tracking logs                                           on HP ActiveAnswers

Full Text Indexing (FTI) files                          Administrator defined

Legacy e-mail connectors                                Administrator defined

Microsoft Windows Server
Microsoft Windows Server should be implemented on a mirrored (RAID 1) drive. For clusters holding
large numbers of mailboxes, Microsoft recommends deploying a dedicated mirrored drive for the
pagefile to achieve improved performance. A write-caching controller can help improve the
performance of these disks (and it should be battery-backed for protection against data loss during
power disruptions).




Microsoft Exchange Server 2003 binaries
Microsoft Exchange Server 2003 can be installed on to the same mirrored drive as Microsoft
Windows Server with little impact to performance. A separate logical volume or disk array can be
used for convenience if desired.

Storage groups, databases, and transaction logs
A storage group is a Microsoft Exchange management entity within the Store process that controls a
number of databases that use a common set of transaction logs. Microsoft Exchange databases are
composed of:

• EDB files (*.EDB). The header and property information of all messages is held in the EDB file.
• Streaming database files (*.STM) – the primary store for Internet clients/protocols such as
  Microsoft Outlook Express/IMAP. Content in the streaming file is stored by the Microsoft Exchange
  Installable File System (IFS) in native format, so messages with attachments in audio/video format do
  not require conversion when stored and accessed with IMAP clients. Automatic format conversion to
  the EDB file takes place when MAPI (Microsoft Outlook) clients try to access content information in
  the streaming file.
• Transaction Logs (*.LOG) – All changes to the database are first written to the transaction log, then
  to database pages cached in memory (IS buffers), and finally (asynchronously), to the database
  files on disk.
• Checkpoint files (*.CHK) – Checkpoint files keep track of which transactions have been committed
  to disk.
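The sketch below (Python) illustrates this write path in miniature: records land in the log first, cached
pages are updated in memory, and the database file and checkpoint advance lazily. It is a generic
illustration of write-ahead logging, not ESE's actual implementation:

# Hypothetical sketch of the write path described above.
log = []          # *.LOG  - written first, sequentially
page_cache = {}   # IS buffers in memory
database = {}     # *.EDB  - updated lazily
checkpoint = 0    # *.CHK  - last log record known to be in the database

def write(page, value):
    log.append((page, value))   # 1. transaction log record hits disk first
    page_cache[page] = value    # 2. cached database page updated in memory

def lazy_flush():
    # 3. background flush pushes cached pages to the database file and
    #    advances the checkpoint; after a crash, replay starts from here.
    global checkpoint
    database.update(page_cache)
    checkpoint = len(log)

write("page-10", "message header")
write("page-11", "message body")
lazy_flush()
print(database, "checkpoint at log record", checkpoint)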

Figure 7 shows two storage groups. Storage Group 1 has two sets of EDB and streaming databases;
Storage Group 2 has three sets. As mentioned previously, there is one set of transaction logs per
storage group.




Figure 7. Storage Groups




Assigning drive-letters and labels
Microsoft Knowledge Base article Q318534 describes some best practices for assigning drive letters
on a cluster server. Microsoft recommends using Q as the quorum drive and letters R through Z for
drives in the shared storage. This has some advantages:

• Additional disks can be added to storage on the local node using drive letters E and F (by
  convention, drives C and D are used in local nodes for the Microsoft Windows Server/Microsoft
  Exchange Server 2003 binaries)
• It reduces the risk of an administrator mapping a drive on a cluster node and causing a drive-letter
  conflict when a failover occurs.

Another good tip from Q318534: label the drives to match the drive letter. For example, label the V
drive DRIVEV, as shown in Figure 8. In a disaster recovery situation the drive letter information might
get erased; using meaningful labels makes it easy to determine that DRIVEV was originally assigned
the drive letter V.




Figure 8. Setting a drive label to match the drive letter.




Assign Q: to the quorum drive
By convention, the quorum resource is usually placed on a partition with the drive-letter Q. You should
assign a separate physical disk in the shared storage to the quorum. This practice makes it easier to
manage the cluster and your storage. If you share the physical drive with another server/application
and you want to perform maintenance on the drive, you will have to take the cluster offline and move
the quorum resource.

M: drive
In Microsoft Exchange 2000, the Microsoft Exchange Installable File System (ExIFS) created an M:
drive as part of the installation. It was important in Microsoft Exchange 2000 that you did not assign
the drive letter M: to any drives in the shared storage. If a partition had been assigned the letter M:,
or a network drive had been mapped using M:, the Microsoft Exchange 2000 installation would fail.
A new installation of Microsoft Exchange 2003 does not explicitly mount the ExIFS as drive M:, but
the drive can be exposed via a registry key². Servers upgraded from Microsoft Exchange 2000,
however, may still expose the ExIFS as an M: drive.

² Requires creating a string value ‘DriveLetter’ under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\EXIFS\Parameters.
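Footnote 2 above describes the registry change for upgraded servers. A minimal Windows-only
sketch using Python's standard winreg module follows; treat it as illustrative only, edit the registry at
your own risk, and apply the setting only where Microsoft documents it:

# Hypothetical sketch; key path and value name come from footnote 2.
import winreg

key_path = r"SYSTEM\CurrentControlSet\Services\EXIFS\Parameters"
with winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE, key_path) as key:
    # REG_SZ value that re-exposes the ExIFS as the given drive letter
    winreg.SetValueEx(key, "DriveLetter", 0, winreg.REG_SZ, "M")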

Assigning drive-letters to storage groups and databases
In some scenarios, Microsoft Exchange cluster administrators have exhausted all available drive
letters, especially on clusters with multiple storage groups, databases, and virtual servers. For
example, an administrator decides to allocate a drive letter to each Microsoft Exchange database on
a virtual server with four storage groups. In each storage group, there are six databases. In this
scenario, 24 drive letters will have to be assigned. Drives A, C, Q, and M may already be allocated
to the local storage, the quorum disk, and the ExIFS respectively, so there are insufficient drive letters
available in this scheme.
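The arithmetic, sketched in Python (the reserved letters follow the example above):

# Hypothetical count showing why one letter per database cannot scale.
import string

storage_groups, dbs_per_sg = 4, 6
needed = storage_groups * dbs_per_sg            # 24 letters for databases alone
reserved = set("ACQM")                          # local storage, quorum, ExIFS
available = [l for l in string.ascii_uppercase if l not in reserved]
print(f"letters needed: {needed}, letters available: {len(available)}")  # 24 vs 22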
Microsoft Windows Server 2003 lifts the restrictions on drive letters by providing support for mount
points in clusters. Mount points are used as an alternative to drive letters when formatting a partition;
instead of selecting a drive letter, a folder in which to mount the drive is selected on a ‘parent’ drive.
The new drive is then accessed via this folder. For example, in Figure 9 below, Disk 25 is mounted as
the folder S:\Logs (where S: is Disk 21 in Logical Disk Manager). Writes to S:\Logs go directly to
Disk 25, as shown in Logical Disk Manager below.



Figure 9. Sample Disk Management view showing Disk 25 as Mount Point




See the section on mount points for more detailed information on proper configuration.
Another way to avoid drive-letter restrictions is as follows: do not dedicate volumes to individual
Microsoft Exchange databases. Create partitions large enough to hold multiple Microsoft Exchange
databases, and group all databases from the same storage group on the same partitions. Note that if
a partition contains databases from multiple storage groups and the disk resource goes offline, it will
dismount databases in all of the storage groups on that disk partition.




Choosing RAID levels for Microsoft Exchange components
All EDB and streaming databases in the same Storage Group should be placed on the same drive. If
the files are spread across multiple volumes and one volume fails, the other databases would still be
affected: at a minimum, the stores will go offline and then come back online if possible. Choose RAID
0+1 (or RAID 5 only if assigning sufficient physical spindles for both storage space and performance)
for EDB and streaming databases, and keep the STM and EDB files on the same partition. This makes
it easier to locate and manage database files; also, if you place STM and EDB files on different
partitions and lose one or the other, both must be restored. Use the HP storage planning calculator
for Microsoft Exchange whenever possible to ensure that the design includes enough physical
spindles.
Transaction logs should be placed on a mirrored (RAID 1) drive. Transaction logs allow you to replay
transactions that have taken place since the last backup. They should be placed on drives separate
from the databases to facilitate recovery and for performance reasons.
SMTP mailroot, message transfer agent, message tracking logs
The SMTP folders, Message Transfer Agent, and message tracking logs should be placed on a
mirrored drive, separate from the databases and transaction logs. The SMTP virtual server can achieve
better performance when spread over multiple disks using RAID 1 (2 disks) or RAID 0+1 (4 or more
disks in an even number). The HP Enterprise Virtual Array can spread the virtual disk over many
physical spindles for high performance, and also provide RAID protection as vraid1 with hot sparing.
Keeping these components separate from the databases also makes it easier to evaluate the
performance of the database and queue drives. Note that the SMTP mailroot is placed on the
designated data drive in a cluster (which is external storage by definition), and that the steps for
relocating the SMTP mailroot for Microsoft Exchange 2000 described in Q318230 (XCON: How to
Change the Exchange 2000 SMTP Mailroot Directory Location) were not supported in a cluster.
Microsoft Exchange 2003 exposes the locations of the SMTP Badmail and Queue directories in the
Microsoft Exchange System Manager, as shown in Figure 10 below. These are properties of each
SMTP virtual server, and will be grayed out unless you run the Microsoft Exchange System Manager
directly on that server.




Figure 10. Microsoft Exchange 2003 System Manager SMTP property for SMTP Badmail and Queue directories




Allocate dedicated spindles to each Microsoft Exchange virtual server
Each disk in the SAN should be allocated to one Microsoft Exchange virtual server and one storage
group. Do not split spindles across multiple Microsoft Exchange virtual servers and cluster groups.
That way, if you need to take a disk offline for maintenance, only one Microsoft Exchange virtual
server is impacted.

Using mount points
As stated earlier, mount points are used as an alternative to drive letters when formatting a partition;
instead of selecting a drive letter, a folder in which to mount the drive is selected on a ‘parent’ drive.
The new drive is then accessed via this folder; for example, writes to S:\Logs (where S: is disk 21 in
Logical Disk Manager) go directly to disk 25, as shown in Figure 9. Microsoft Windows Server 2003
eases restrictions on assigning drive letters by providing support for mount points in clusters.

Resource dependencies
Mount points are a physical disk resource type and should be created as dependent on the parent
resource (the drive that is assigned a letter). If the parent resource goes offline, the junction point for
the volume mount point (VMP), which is a folder on the parent resource, is no longer available, and
writes to the VMP will fail. Thus, it is critical that the mount point be gracefully taken offline first,
forcing all outstanding writes to complete.




Recovering drives with mount points
If it is necessary to replace or recover the parent drive resource, the mount point must be re-associated
with the folder on the parent drive. This is done by selecting the mount point volume in Disk
Management (Diskmgmt.msc) and selecting Change Drive Letter and Paths… Figure 11 below shows
browsing for the folder to associate with a volume mount point.



Figure 11. Re-associating an Existing Volume Mount Point with a Folder in Disk Manager




Warning: If using a Volume Mount Point for Microsoft
Exchange Storage Group logs and the parent drive
resource has been replaced, DO NOT bring the
Information Store resource online, or it will create the log
folder on the drive (as a folder, not a mount point). This
folder will contain newly created Microsoft Exchange
transaction logs, which must be removed.



Use a consistent naming standard for folders and databases
You should use a consistent naming standard for Microsoft Exchange folders and databases. It makes
it easier to determine which Storage Group a database belongs to. A suggested naming convention is
shown in Table 7. In this example, one could determine from the name of the file that the mailbox
store EDB file V:\exchsrvr\SG2_MDBDATA\SG2Mailstore3.EDB is mailbox store 3 and is owned by
Storage Group 2.


On clusters with multiple Microsoft Exchange virtual servers you should extend the naming standard to
include the virtual server. For example, the use of VS1 in the filename
V:\exchsrvr\SG2_VS1_MDBDATA\SG2VS1Mailstore3.EDB denotes that this database belongs to
virtual server 1.
All Microsoft Exchange components should be placed in folder trees that have ‘Exchsrvr’ or
‘Exchange’ as the root folder name. This makes it easier for the administrator to determine that
Microsoft Exchange components are stored in the folders.
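A small, hypothetical helper (Python) shows how paths following this convention, including the
virtual-server element and the Table 7 layout below, could be generated consistently:

# Hypothetical helper following the convention described above.
def mailstore_paths(drive: str, sg: int, vs: int, store: int):
    folder = f"{drive}:\\exchsrvr\\SG{sg}_VS{vs}_MDBDATA"
    name = f"SG{sg}VS{vs}Mailstore{store}"
    return [f"{folder}\\{name}.{ext}" for ext in ("EDB", "STM")]

for path in mailstore_paths("V", sg=2, vs=1, store=3):
    print(path)
# V:\exchsrvr\SG2_VS1_MDBDATA\SG2VS1Mailstore3.EDB
# V:\exchsrvr\SG2_VS1_MDBDATA\SG2VS1Mailstore3.STM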

Table 7. Example Naming standard for Microsoft Exchange folders




Component                                           Folder Name

Microsoft Exchange binaries                         D:\exchsrvr\BIN

Message Transfer Agent                              R:\exchsrvr\MTADATA

Message Transfer Agent work directory               R:\exchsrvr\MTADATA

SMTP Mailroot                                       R:\exchsrvr\Mailroot

Message Tracking Logs                               R:\exchsrvr\Yourservername.log

Storage Group 1

SG1 Transaction Logs                                S:\exchsrvr\SG1_TRANSLOGS

Database folder                                     T:\exchsrvr\SG1_MDBDATA

SG1Mailstore1                                       T:\exchsrvr\SG1_MDBDATA\SG1Mailstore1.EDB
                                                    T:\exchsrvr\SG1_MDBDATA\SG1Mailstore1.STM

SG1Mailstore2                                       T:\exchsrvr\SG1_MDBDATA\SG1Mailstore2.EDB
                                                    T:\exchsrvr\SG1_MDBDATA\SG1Mailstore2.STM

SG1Mailstore3                                       T:\exchsrvr\SG1_MDBDATA\SG1Mailstore3.EDB
                                                    T:\exchsrvr\SG1_MDBDATA\SG1Mailstore3.STM

SG1Mailstore4                                       T:\exchsrvr\SG1_MDBDATA\SG1Mailstore4.EDB
                                                    T:\exchsrvr\SG1_MDBDATA\SG1Mailstore4.STM

Storage Group 2

SG2 Transaction Logs                                U:\exchsrvr\SG2_TRANSLOGS

Database folder                                     V:\exchsrvr\SG2_MDBDATA

SG2Mailstore1                                       V:\exchsrvr\SG2_MDBDATA\SG2Mailstore1.EDB
                                                    V:\exchsrvr\SG2_MDBDATA\SG2Mailstore1.STM

SG2Mailstore2                                       V:\exchsrvr\SG2_MDBDATA\SG2Mailstore2.EDB
                                                    V:\exchsrvr\SG2_MDBDATA\SG2Mailstore2.STM

SG2Mailstore3                                       V:\exchsrvr\SG2_MDBDATA\SG2Mailstore3.EDB
                                                    V:\exchsrvr\SG2_MDBDATA\SG2Mailstore3.STM

SG2Mailstore4                                       V:\exchsrvr\SG2_MDBDATA\SG2Mailstore4.EDB
                                                    V:\exchsrvr\SG2_MDBDATA\SG2Mailstore4.STM



Obtain TCP/IP addresses prior to installation
For a two-node Microsoft Exchange Server 2003 active/passive cluster, you will need to assign IP
addresses for the following network resources:

• Node 1 (Public Network)
• Node 2 (Public Network)
• Microsoft Exchange Virtual Server
• Cluster IP Address
• Node 1 (Private Network)
• Node 2 (Private Network)

In Active/Active deployments, you will need to assign an IP address for a second virtual server. If you
plan to implement HP ProLiant Integrated Lights-Out (iLO) or Remote Insight Lights-Out Edition boards
for remote management of the cluster nodes, be sure to allocate an additional IP address for each
card.

Cluster naming conventions
Given the additional complexity and terminology of clusters, it is a good idea to choose a consistent
naming convention to aid understanding. In a two-node Microsoft Exchange cluster, you will need to
assign names to cluster groups, node names, and virtual servers, as shown in Cluster Administrator in
Figure 12. Here is a suggested naming convention:

• Cluster Node 1                                  XXXCLNODE1
• Cluster Node 2                                  XXXCLNODE2
• Cluster Network Name                            XXXCLUS1
• Microsoft Exchange Group Name                   XXXEXCGRP1
• Microsoft Exchange Virtual Server Name          XXXEXCVS1


XXX             –   represents a site or company code to match your naming convention
CL              –   represents a cluster node/name
EXCGRP          –   represents a Microsoft Exchange cluster group
EXCVS           –   represents a Microsoft Exchange virtual server name


Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...shyamraj55
 
Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)Allon Mureinik
 
The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024Rafal Los
 
Scaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organizationScaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organizationRadu Cotescu
 
Key Features Of Token Development (1).pptx
Key  Features Of Token  Development (1).pptxKey  Features Of Token  Development (1).pptx
Key Features Of Token Development (1).pptxLBM Solutions
 
Azure Monitor & Application Insight to monitor Infrastructure & Application
Azure Monitor & Application Insight to monitor Infrastructure & ApplicationAzure Monitor & Application Insight to monitor Infrastructure & Application
Azure Monitor & Application Insight to monitor Infrastructure & ApplicationAndikSusilo4
 
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...Patryk Bandurski
 
Unblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen FramesUnblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen FramesSinan KOZAK
 
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time AutomationFrom Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time AutomationSafe Software
 
Beyond Boundaries: Leveraging No-Code Solutions for Industry Innovation
Beyond Boundaries: Leveraging No-Code Solutions for Industry InnovationBeyond Boundaries: Leveraging No-Code Solutions for Industry Innovation
Beyond Boundaries: Leveraging No-Code Solutions for Industry InnovationSafe Software
 
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure serviceWhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure servicePooja Nehwal
 
SIEMENS: RAPUNZEL – A Tale About Knowledge Graph
SIEMENS: RAPUNZEL – A Tale About Knowledge GraphSIEMENS: RAPUNZEL – A Tale About Knowledge Graph
SIEMENS: RAPUNZEL – A Tale About Knowledge GraphNeo4j
 
IAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI SolutionsIAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI SolutionsEnterprise Knowledge
 
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptxHampshireHUG
 

Dernier (20)

Pigging Solutions in Pet Food Manufacturing
Pigging Solutions in Pet Food ManufacturingPigging Solutions in Pet Food Manufacturing
Pigging Solutions in Pet Food Manufacturing
 
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmaticsKotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
 
Understanding the Laravel MVC Architecture
Understanding the Laravel MVC ArchitectureUnderstanding the Laravel MVC Architecture
Understanding the Laravel MVC Architecture
 
08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men
 
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
 
AI as an Interface for Commercial Buildings
AI as an Interface for Commercial BuildingsAI as an Interface for Commercial Buildings
AI as an Interface for Commercial Buildings
 
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
 
Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)
 
The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024
 
Scaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organizationScaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organization
 
Key Features Of Token Development (1).pptx
Key  Features Of Token  Development (1).pptxKey  Features Of Token  Development (1).pptx
Key Features Of Token Development (1).pptx
 
Azure Monitor & Application Insight to monitor Infrastructure & Application
Azure Monitor & Application Insight to monitor Infrastructure & ApplicationAzure Monitor & Application Insight to monitor Infrastructure & Application
Azure Monitor & Application Insight to monitor Infrastructure & Application
 
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
 
Unblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen FramesUnblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen Frames
 
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time AutomationFrom Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
 
Beyond Boundaries: Leveraging No-Code Solutions for Industry Innovation
Beyond Boundaries: Leveraging No-Code Solutions for Industry InnovationBeyond Boundaries: Leveraging No-Code Solutions for Industry Innovation
Beyond Boundaries: Leveraging No-Code Solutions for Industry Innovation
 
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure serviceWhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
 
SIEMENS: RAPUNZEL – A Tale About Knowledge Graph
SIEMENS: RAPUNZEL – A Tale About Knowledge GraphSIEMENS: RAPUNZEL – A Tale About Knowledge Graph
SIEMENS: RAPUNZEL – A Tale About Knowledge Graph
 
IAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI SolutionsIAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI Solutions
 
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
 

HP Best practices for deploying Microsoft Exchange Server 2000 and 2003 clusters

  • 1. HP best practices for Microsoft Exchange Server 2000 and 2003 Cluster deployments Executive summary............................................................................................................................... 4 Introduction......................................................................................................................................... 4 Overview of Microsoft Exchange Server clusters ...................................................................................... 5 Clusters explained............................................................................................................................ 5 Microsoft cluster server ..................................................................................................................... 5 Shared-nothing cluster model............................................................................................................. 5 Resources and resource groups.......................................................................................................... 6 Key cluster terminology ..................................................................................................................... 8 Microsoft cluster service components .................................................................................................. 9 The cluster service ...................................................................................................................... 10 The resource monitor .................................................................................................................. 10 The resource DLL ........................................................................................................................ 10 Cluster failover .............................................................................................................................. 11 Cluster-aware vs. non cluster-aware applications ............................................................................... 11 Active/active vs. active/passive................................................................................................... 11 Exchange virtual servers.................................................................................................................. 12 Clustering support for Microsoft Exchange Server components............................................................. 12 Key component: Exchange resource DLL (EXRES.DLL) ...................................................................... 14 Key component: Exchange cluster admin DLL (EXCLUADM.DLL)........................................................ 15 Microsoft Exchange Server cluster resource dependencies............................................................... 15 Microsoft Exchange virtual servers................................................................................................ 18 What’s new in Microsoft Exchange Server 2003................................................................................... 19 Planning your Microsoft Exchange 2003 cluster deployment................................................................... 20 Best practices for hardware............................................................................................................. 20 Check for hardware compatibility................................................................................................. 
      Choose standardized configurations ............................................................................ 20
      Hardware redundancy ................................................................................................ 21
   Best practices for cluster configuration ............................................................................ 22
      Understanding resource dependency ........................................................................... 22
      Planning cluster groups .............................................................................................. 22
      Planning quorum type ................................................................................................ 22
   Storage planning ........................................................................................................... 23
      Storage design is critical ............................................................................................ 23
      Placement of Microsoft Windows and Microsoft Exchange components ........................... 24
      Microsoft Windows Server .......................................................................................... 24
      Microsoft Exchange Server 2003 binaries .................................................................... 25
      Storage groups, databases, and transaction logs .......................................................... 25
      Assigning drive-letters and labels ................................................................................ 26
      Choosing RAID levels for Microsoft Exchange components ............................................ 29
      Allocate dedicated spindles to each Microsoft Exchange virtual server ............................ 30
      Using mount points ..................................................................................................... 30
      Use a consistent naming standard for folders and databases ......................................... 31
      Obtain TCP/IP addresses prior to installation ................................................................ 33
      Cluster naming conventions ........................................................................................ 33
   Microsoft Exchange Server roles ..................................................................................... 34
      Implement dedicated Microsoft Exchange clusters ......................................................... 34
      Assign specific roles to clusters for enterprise deployments ............................................ 34
      First server and site replication services (SRS) server .................................................... 35
      Implement clusters as mailbox servers and public folder servers ..................................... 35
      Clusters and active/active scalability limits .................................................................. 35
What’s new in Microsoft Windows Server 2003 .................................................................. 36
Best practices for server configuration ................................................................................ 36
      Build redundancy into your Microsoft Windows Server infrastructure ............................... 37
   Cluster service account best practices ............................................................................. 37
      Create one cluster service account per cluster ............................................................... 37
      Use a consistent naming convention for cluster service accounts ..................................... 37
      Do not use the cluster service account to logon interactively ........................................... 38
      Delegate Microsoft Exchange permissions .................................................................... 38
   Upgrade to the latest service pack on each node .............................................................. 38
   Change the temp folder used by the cluster service account .............................................. 38
   Network configuration ................................................................................................... 40
      Separate private and public networks .......................................................................... 40
      Do not use DHCP for cluster network connections ......................................................... 40
      Label network connections .......................................................................................... 41
      Modify the binding order on network connections ......................................................... 41
      Disable unnecessary protocols, services on the cluster interconnect connection ................ 42
      Set correct settings for cluster communications .............................................................. 43
      NIC teaming configuration ......................................................................................... 44
      IPSec ........................................................................................................................ 45
   Geographically-dispersed clusters ................................................................................... 45
   Set staggered boot delays on each node ......................................................................... 45
   OS tuning for large memory ........................................................................................... 46
      More than 1GB of RAM in Microsoft Windows 2000 Advanced Server .......................... 46
      More than 1GB of RAM in Microsoft Windows 2003 Server ......................................... 46
   Cluster validation testing ............................................................................................... 47
      Planned failover ......................................................................................................... 47
      Unplanned failover ..................................................................................................... 47
      Power outage test ...................................................................................................... 47
   Best practices for Microsoft Exchange Server installation ................................................... 47
      Prerequisites .............................................................................................................. 47
      Document your cluster installation ................................................................................ 48
      Front-end / back-end architectures ............................................................................... 49
   Upgrading Microsoft Exchange 5.5 clusters ..................................................................... 50
   Upgrading Microsoft Exchange 2000 clusters .................................................................. 50
      Removing Microsoft Exchange 2000 tuning parameters ................................................. 51
   Before upgrading or refreshing hardware ........................................................................ 51
Best practices for systems management ............................................................................... 51
   Active Directory ............................................................................................................ 52
      Create a separate OU for virtual servers and cluster network names ............................... 52
   Cluster configuration ..................................................................................................... 52
      Capture configuration information ............................................................................... 52
      Non-critical cluster resources ....................................................................................... 52
      Majority node set ....................................................................................................... 53
   Training and expertise ................................................................................................... 53
   Microsoft Windows Server resource kits .......................................................................... 54
   Securing your cluster ..................................................................................................... 54
      Preventing viral attacks ............................................................................................... 55
   Standardizing configuration ........................................................................................... 56
      Hardware configuration .............................................................................................. 57
      Device drivers ............................................................................................................ 57
      Using QFECHECK to verify hotfix installations ............................................................... 59
   Microsoft Exchange Server service packs ......................................................................... 59
      Upgrade to the latest Microsoft Exchange Server service pack ....................................... 59
      Recommended procedure for Microsoft Exchange service packs ..................................... 59
      How to avoid reboots during a rolling upgrade ............................................................ 60
   Third-party products ...................................................................................................... 60
   Use cluster administrator to perform cluster operations ...................................................... 60
   Use system manager to change Microsoft Exchange virtual servers ..................................... 61
   Failovers ...................................................................................................................... 62
      Planned failovers ....................................................................................................... 62
      Tips for reducing planned failover time ........................................................................ 62
      Unplanned failovers ................................................................................................... 63
      Tips for reducing unplanned failover time ..................................................................... 64
      Managing storage ..................................................................................................... 64
   Performance monitoring ................................................................................................. 65
      Operations checks ..................................................................................................... 65
      Event logs ................................................................................................................. 65
      Monitoring mail queues .............................................................................................. 66
      Monitoring virtual memory .......................................................................................... 68
Best practices for disaster recovery ..................................................................................... 69
      What should I backup? .............................................................................................. 70
      Checking backups ..................................................................................................... 71
      Microsoft Cluster Tool ................................................................................................. 72
      Restoring Quorum ...................................................................................................... 73
Conclusion ....................................................................................................................... 74
For more information ......................................................................................................... 75
      Utilities ..................................................................................................................... 75
      Microsoft Knowledge base articles .............................................................................. 75
Executive summary
Microsoft® Exchange Server cluster deployments have additional complexity over single-server implementations. Clusters depend on complex interaction between hardware, operating system, applications, and administrators. Attention to detail is needed to ensure that the Microsoft Exchange Server cluster implementation is properly planned, installed, and configured. The aim of this whitepaper is to present HP best practices for Microsoft Exchange Server cluster deployments. The following areas are addressed:
• Overview of Microsoft Exchange Server clusters
• What’s new in Microsoft Exchange Server 2003
• Planning your Microsoft Exchange Server cluster deployment
• What’s new in Microsoft Windows® Server 2003
• Best Practices for Microsoft Windows Server configuration
• Best Practices for Systems Management
• Best Practices for Disaster Recovery

Introduction
This document provides best practices for successful Microsoft Exchange Server 2003 cluster deployments. These best practices have been derived from the experience of HP engineers in deployments across a wide range of customer environments. This paper is an update to the original whitepaper on best practices for Microsoft Exchange Server 2000 deployments, and there are references in this document to both Microsoft Exchange Server 2003 and 2000. The original document is available on HP ActiveAnswers, http://h71019.www7.hp.com/1-6-100-225-1-00.htm, so if you are only considering a Microsoft Exchange 2000 deployment, you should consult that document at http://activeanswers.compaq.com/aa_downloads/6/100/225/1/42311.pdf. However, if you wish to understand the differences and the advantages of a Microsoft Exchange 2003 cluster, then this document will cover those differences. A primary consideration is whether you plan to run Microsoft Windows Server 2003: Microsoft Exchange 2000 cannot run on Microsoft Windows Server 2003, yet Microsoft Exchange 2003 can run on either Microsoft Windows 2000 Server or Microsoft Windows Server 2003.

This paper addresses the following areas:
• Overview of Microsoft Exchange Server clusters: Covers concepts such as Microsoft Cluster Server, cluster resource groups, cluster terminology, cluster failover and failback operations, and how Microsoft Exchange Server interacts with Microsoft Cluster Server.
• What’s new in Microsoft Exchange Server 2003: Focuses on the features and changes in the new product that affect clustering.
• Planning your Microsoft Exchange Server cluster: Best practices for choosing hardware for your cluster, designing storage for Microsoft Exchange, cluster naming conventions, and recommendations for your Microsoft Windows Server infrastructure.
• What’s new in Microsoft Windows Server 2003: Specifically, what features and changes are in the new product and how they affect clustering.
• Best Practices for Microsoft Windows Server configuration: This section looks at configuring Microsoft Windows Server optimally for Microsoft Exchange Server 2003 clusters. Detailed recommendations on network configuration, failover testing, and securing your Microsoft Exchange Server 2003 cluster from viruses are presented.
• Best Practices for Systems Management: Covers areas such as System Manager training, best practices for service pack upgrades, tips for reducing downtime during failover/failback operations, and performance monitoring.
• Best Practices for Disaster Recovery: Covers cluster-specific disaster recovery concepts, including how to use Resource Kit tools such as the cluster recovery utility and the cluster diagnostic tools.

Overview of Microsoft Exchange Server clusters

Clusters explained
A cluster consists of a group of servers that are configured to work together to provide a single system with high availability to data and applications. When an application or hardware component fails on a standalone server, service cannot be restored until the problem is resolved. When a failure occurs on a server within a cluster, resources and applications can be redirected to another server in the cluster, which takes over the workload.

Microsoft cluster server
Microsoft introduced clustering support with the release of Microsoft Windows NT 4.0 Enterprise Edition in 1997. Microsoft Cluster Server (MSCS) could be used to connect two NT 4.0 servers and provide high availability to file and print services and applications. Note the following differences and progression:
• Microsoft Cluster Server in Microsoft Windows NT 4.0 is limited to a maximum of two servers (nodes) in a cluster.
• Microsoft Cluster Server in Microsoft Windows 2000 Advanced Server was also limited to two nodes per cluster; however, each node could host an active Microsoft Exchange Virtual Server (more detail on the design limitations of doing so later).
• Microsoft Windows 2000 Datacenter Server supports a maximum of four nodes per cluster and can host N+I active virtual servers, for example, three active and one passive.
• Microsoft Windows Server 2003 Enterprise Edition supports a maximum of eight nodes per cluster and can host N+I active virtual servers, for example, six active and two passive.
• Microsoft Windows Server 2003 Datacenter Edition also supports a maximum of eight nodes per cluster.
See http://www.microsoft.com/windowsserver2003/evaluation/features/compareeditions.mspx for a Microsoft Windows Server 2003 product comparison. For a thorough discussion of what Microsoft Cluster Server can and cannot do, see http://www.microsoft.com/technet/prodtechnol/windowsserver2003/support/SerCsFAQ.asp.

Shared-nothing cluster model
Microsoft Cluster Server uses the shared-nothing cluster model. With shared-nothing, each server owns and manages local devices as specific cluster resources. Devices that are common to the cluster and physically available to all nodes are owned and managed by only one node at a time. For resources to change ownership, a complex reservation and contention protocol is followed and implemented by cluster services running on each node. The shared-nothing model dictates that while several nodes in the cluster may have access to a device or resource, the resource is owned and managed by only one system at a time. (In a Microsoft Cluster Server cluster, a resource is defined as any physical or logical component that can be brought online and taken offline, managed in a cluster, hosted by only one node at a time, and moved between nodes.)
Each node has its own memory, system disk, operating system, and subset of the cluster’s resources. If a node fails, the other node takes ownership of the failed node’s resources (this process is known as failover). Microsoft Cluster Server then registers the network address for the resource on the new node so that client traffic is routed to the system that is available and now owns the resource. When the failed node is later brought back online, Microsoft Cluster Server can be configured to redistribute resources and client requests appropriately (this process is known as failback).

Resources and resource groups
The basic unit of management in a Microsoft cluster is the resource. Resources are logical or physical entities or units of management within a cluster system that can be changed in state from online to offline, are manageable by the cluster services, and are owned by one cluster node at a time. Cluster resources include hardware devices such as network interface cards and disks, and logical entities such as server names, network names, IP addresses, and services. Resources are defined within the cluster manager and selected from a list of pre-defined choices. Microsoft Windows Server 2003 provides a set of resource DLLs (such as file and print shares and generic services or applications), and cluster-aware applications provide their own resource DLLs and resource monitors. A cluster typically contains both physical and logical resources. Within the Microsoft Cluster Server framework, shown in Figure 1, resources are grouped into logical units of management and dependency called resource groups. A resource group is usually comprised of both logical and physical resources, such as the virtual server name, IP address, and disk resources. Resource groups can also include only cluster-specific resources used for managing the cluster itself. The key in the shared-nothing model is that a resource can be owned by only one node at a time. Furthermore, the resources that are part of the resource group owned by a cluster node must exist on that node only.
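Since resource-group ownership is the heart of the shared-nothing model, it can help to see how that ownership is surfaced programmatically. The following minimal C sketch, which assumes the Platform SDK Cluster API headers (clusapi.h) and clusapi.lib are available, lists each resource group and the node that currently owns it; error handling is abbreviated, so treat it as an illustration rather than production code.

```c
/* List each resource group and its current owner node (shared-nothing:
 * exactly one node owns a group at any moment). Minimal sketch only. */
#include <windows.h>
#include <clusapi.h>
#include <stdio.h>

int wmain(void)
{
    HCLUSTER hCluster = OpenCluster(NULL);          /* NULL = local cluster */
    if (hCluster == NULL) return 1;

    HCLUSENUM hEnum = ClusterOpenEnum(hCluster, CLUSTER_ENUM_GROUP);
    if (hEnum == NULL) { CloseCluster(hCluster); return 1; }

    WCHAR name[256], node[256];
    DWORD i = 0, type, cchName = 256, cchNode;
    while (ClusterEnum(hEnum, i, &type, name, &cchName) == ERROR_SUCCESS) {
        HGROUP hGroup = OpenClusterGroup(hCluster, name);
        if (hGroup != NULL) {
            cchNode = 256;
            CLUSTER_GROUP_STATE state =
                GetClusterGroupState(hGroup, node, &cchNode);
            wprintf(L"Group \"%s\" (state %d) owned by node %s\n",
                    name, (int)state, node);
            CloseClusterGroup(hGroup);
        }
        cchName = 256;          /* reset buffer length for next iteration */
        i++;
    }
    ClusterCloseEnum(hEnum);
    CloseCluster(hCluster);
    return 0;
}
```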
Figure 1. Basic 2-node Microsoft Cluster Server configuration: clients connect over the network to Node A and Node B, each of which keeps its system files on a local drive, while data and executable files reside on shared cluster drives.

The shared-nothing model prevents different nodes within the cluster from simultaneous ownership of resource groups or resources within a resource group. As mentioned earlier, resource groups also maintain a dependency relationship between the different resources contained within each group. This is because resources in a cluster very often depend upon the existence of other resources in order to function or start. For example, a virtual server or network name must have a valid IP address in order for clients to access that resource. Therefore, in order for the network name or virtual server to start (or not fail), the IP address it depends upon must be available. This is known as resource dependency. Within a resource group, the dependencies among resources can be quite simple or very complex. Resource dependencies are maintained in the properties of each resource and allow the cluster service to manage how resources are taken offline and brought online. Resource dependencies cannot extend beyond the context of the resource group to which they belong; for example, a virtual server cannot have a dependency upon an IP address that exists within a resource group other than its own. This restriction exists because resource groups within a cluster can be brought online and offline and moved from node to node independently of one another. Each resource group also maintains a cluster-wide policy that specifies its Preferred Owner (the node in the cluster it prefers to run on) and its Possible Owners (the nodes that it can fail over to in the event of a failure condition).
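To make the network name/IP address example concrete, the following minimal C sketch uses the Cluster API to add such a dependency. The resource names "EVS1 Network Name" and "EVS1 IP Address" are hypothetical, and both resources must already live in the same resource group, per the restriction described above.

```c
/* Make a Network Name resource depend on an IP Address resource so the
 * cluster always brings the IP address online first. Names are examples. */
#include <windows.h>
#include <clusapi.h>

DWORD AddNameToIpDependency(void)
{
    DWORD rc;
    HCLUSTER hCluster = OpenCluster(NULL);
    if (hCluster == NULL) return GetLastError();

    HRESOURCE hName = OpenClusterResource(hCluster, L"EVS1 Network Name");
    HRESOURCE hIp   = OpenClusterResource(hCluster, L"EVS1 IP Address");
    if (hName != NULL && hIp != NULL) {
        /* Fails if the resources are in different groups, enforcing the
         * same-group restriction described above. */
        rc = AddClusterResourceDependency(hName, hIp);
    } else {
        rc = GetLastError();
    }
    if (hName != NULL) CloseClusterResource(hName);
    if (hIp != NULL)   CloseClusterResource(hIp);
    CloseCluster(hCluster);
    return rc;
}
```

The same dependency can be created interactively in Cluster Administrator or, from the command line, with something along the lines of: cluster res "EVS1 Network Name" /AddDep:"EVS1 IP Address".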
Resources are the fundamental unit of management within a Microsoft Windows cluster. As such, it is important that you understand how they function and operate. Resource groups are used to logically organize these resources and enable easier management of failover between nodes.

Key cluster terminology
Cluster technology has generated much confusion, not just over basic architectural implementations but also over terminology. Table 1 below highlights some key terminology for Microsoft Cluster Server.

Table 1. Microsoft Cluster Server key terminology

Resource: The smallest unit that can be defined, monitored, and maintained by the cluster. Examples are a physical disk, IP address, network name, file share, print spool, and generic service or application. Resources are grouped together into a resource group. The cluster uses the state of each resource to determine whether a failover is needed.

Resource group: A collection of resources that logically represents a client/server function; the smallest unit that can fail over between nodes.

Resource dependency: A resource property defining its relationship to other resources. A resource is brought online after any resource that it depends on and taken offline before any resource that it depends on. All dependent resources must fail over together.

Quorum resource: Stores the cluster log data and application data from the registry used to transfer state information between nodes. Used by the cluster service to determine which node can continue running when nodes cannot communicate. Note that the type of quorum resource, and therefore the type of the cluster (either a quorum-device cluster or a majority-of-nodes cluster), is established during cluster setup and cannot be changed later.

Active/passive: Term used for the service failover mode, where a service is defined as a resource using the generic resource DLL. The Failover Manager limits application operation to only one node at a time.

Active/active: A more comprehensive failover capability, also known as the resource failover mode. It uses ISV-developed resource DLLs that are “cluster aware,” allowing a service to operate on multiple nodes; individual resource instances fail over instead of the entire service.

Membership: The orderly addition and removal of active nodes to and from the cluster.

Global update: Propagates cluster configuration changes to all members. The cluster registry is maintained through this mechanism, and all activities are atomic, ordered, and tolerant of failures.

Cluster registry: Separate from the Windows NT registry, the cluster registry maintains configuration updates on members, resources, parameters, and other configuration information. It is stored on the cluster quorum disk and loaded under HKLM\Cluster.

Virtual server: The network resource used by clients for the Exchange cluster resource group; a combination or collection of configuration information and resources such as network name and IP address resources. Can refer to a Microsoft Cluster Server virtual server or a logical set of services provided by Internet Information Server (IIS).

Physical machine: The physical hardware device that comprises an individual cluster node. It can host multiple virtual servers and resources.
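As an aside on the cluster registry entry above, the hive that MSCS loads under HKLM\Cluster can be inspected read-only with ordinary Win32 registry calls. The short C sketch below lists the per-resource GUID subkeys under HKLM\Cluster\Resources; it is for illustration only, and configuration changes should always be made through Cluster Administrator or the Cluster API, never by editing the hive directly.

```c
/* Read-only walk of the local cluster hive: one GUID subkey per resource
 * lives under HKLM\Cluster\Resources. Inspection only; never edit by hand. */
#include <windows.h>
#include <stdio.h>

int wmain(void)
{
    HKEY hKey;
    if (RegOpenKeyExW(HKEY_LOCAL_MACHINE, L"Cluster\\Resources",
                      0, KEY_READ, &hKey) != ERROR_SUCCESS)
        return 1;                       /* not a cluster node, or no access */

    WCHAR sub[256];
    DWORD i = 0, cch = 256;
    while (RegEnumKeyExW(hKey, i, sub, &cch,
                         NULL, NULL, NULL, NULL) == ERROR_SUCCESS) {
        wprintf(L"Resource key: %s\n", sub);
        cch = 256;                      /* reset buffer length each call */
        i++;
    }
    RegCloseKey(hKey);
    return 0;
}
```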
Microsoft cluster service components
Microsoft Cluster Service is implemented as a set of independent and somewhat isolated components (device drivers and services). This set of components layers on top of the Microsoft Windows Server operating system and acts as a service. By using this design approach, Microsoft avoided many complexities that might have been encountered in other design approaches, such as system scheduling and processing dependencies between the cluster service and the operating system. When layered on Microsoft Windows Server, the cluster service provides some basic functions that the operating system needs in order to support clustering. These basic functions include dynamic network resource support, file system support for disk mounting and un-mounting, and shared resource support for the I/O subsystem. Table 2 below provides a brief overview of each of these components.

Table 2. Microsoft Cluster Service components

Node Manager: Maintains resource group ownership of cluster nodes based on resource group node preferences and the availability of cluster nodes.

Resource Monitor: Utilizes the cluster resource API and RPCs to maintain communication with the resource DLLs. Each monitor runs as a separate process.

Failover Manager: Works in conjunction with the resource monitors to manage resource functions within the cluster, such as failovers and restarts.

Checkpoint Manager: Maintains and updates application states and registry keys on the cluster quorum resource.

Configuration Database Manager: Maintains and ensures coherency of the cluster database on each cluster node, which includes important cluster information such as node membership, resources, resource groups, and resource types.

Event Processor: Processes events relating to state changes and requests from cluster resources and applications.

Membership Manager: Manages cluster node membership and polls cluster nodes to determine state.

Event Log Replication Manager: Replicates system event log entries across all cluster nodes.

Global Update Manager: Provides updates to the Configuration Database Manager to ensure cluster configuration integrity and consistency.

Object Manager: Provides management of all cluster service objects and the interface for cluster administration.

Log Manager: Works with the Checkpoint Manager to ensure that the recovery log on the cluster quorum disk is current and consistent.
For discussions pertaining to Microsoft Exchange Server, there are three key components of Microsoft Cluster Service to consider.

The cluster service
The Cluster Service (which is actually a group of components consisting of the Event Processor, the Failover Manager/Resource Manager, the Global Update Manager, and so forth) is the core component of Microsoft Cluster Server. The Cluster Service controls cluster activities and performs such tasks as coordinating event notification, facilitating communication between cluster components, handling failover operations, and managing the configuration. Each cluster node runs its own Cluster Service.

The resource monitor
The Resource Monitor is an interface between the Cluster Service and the cluster resources, and it runs as an independent process. The Cluster Service uses the Resource Monitor to communicate with the resource DLLs. The DLL handles all communication with the resource, thus shielding the Cluster Service from resources that misbehave or stop functioning. Multiple copies of the Resource Monitor can be running on a single node, thereby providing a means by which unpredictable resources can be isolated from other resources.

The resource DLL
The third key Microsoft Cluster Server component is the resource DLL. The Resource Monitor and resource DLL communicate using the Microsoft Cluster Server Cluster Resource API, which is a collection of entry points, callback functions, and related structures and macros used to manage resources. Applications that implement their own resource DLLs to communicate with the Cluster Service and that use the Cluster API to request and update cluster information are defined as cluster-aware applications. Applications and services that do not use the Cluster or Resource APIs and cluster control code functions are unaware of clustering and have no knowledge that Microsoft Cluster Server is running. These cluster-unaware applications are generally managed as generic applications or services. Both cluster-aware and cluster-unaware applications run on a cluster node and can be managed as cluster resources. However, only cluster-aware applications can take advantage of features offered by Microsoft Cluster Server through the Cluster API. Cluster-aware applications can report status upon request to the Resource Monitor, respond to requests to be brought online or taken offline gracefully, and respond more accurately to IsAlive and LooksAlive requests issued by the cluster services. Cluster-aware applications should also implement Cluster Administrator extension DLLs, which contain implementations of interfaces from the Cluster Administrator extension API. A Cluster Administrator extension DLL allows an application to be configured into the Cluster Administrator tool (CluAdmin.exe). Implementing custom resource and Cluster Administrator extension DLLs allows for specialized management of the application and its related resources, and it enables the system administrator to install and configure the application more easily. As discussed earlier, to the Cluster Service, a resource is any physical or logical component that can be managed. Examples of resources are disks, network names, IP addresses, databases, IIS web roots, application programs, and any other entity that can be brought online and taken offline. Resources are organized by type. Resource types include physical hardware (such as disk drives) and logical items (such as IP addresses, file shares, and generic applications).
Every resource uses a resource DLL, a largely passive translation layer between the Resource Monitor and the resource. The Resource Monitor calls the entry point functions of the resource DLL to check the status of the resource and to bring the resource online and offline. The resource DLL is responsible for communicating with its resource through any convenient IPC mechanism to implement these methods. Applications or services that do not provide their own resource DLLs can still be configured into the cluster environment. Microsoft Cluster Server includes a generic resource DLL, and the Cluster Service treats these applications or services as generic, cluster-unaware applications or services. However, if an application or service needs to take full advantage of a clustered environment, it must implement a custom resource DLL that can interact with the Cluster Service and exploit the full set of features provided by Microsoft Cluster Service.
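To illustrate what implementing a custom resource DLL means in practice, the C skeleton below sketches the per-resource entry points declared in resapi.h that the Resource Monitor calls. A real resource DLL also exports a Startup routine that hands the Resource Monitor a function table referencing these entry points; the stub bodies here are hypothetical placeholders, not code from EXRES.DLL or any shipping DLL.

```c
/* Skeleton of the entry points a custom resource DLL exposes to the
 * Resource Monitor (see resapi.h). Stub bodies are illustrative only. */
#include <windows.h>
#include <resapi.h>

RESID WINAPI MyOpen(LPCWSTR ResourceName, HKEY ResourceKey,
                    RESOURCE_HANDLE ResourceHandle)
{
    /* Allocate per-resource context; the RESID identifies it in later calls. */
    return (RESID)1;
}

DWORD WINAPI MyOnline(RESID ResourceId, PHANDLE EventHandle)
{
    /* Start the application or service this resource represents. */
    return ERROR_SUCCESS;
}

DWORD WINAPI MyOffline(RESID ResourceId)
{
    /* Stop the application or service gracefully. */
    return ERROR_SUCCESS;
}

BOOL WINAPI MyLooksAlive(RESID ResourceId)
{
    /* Cheap, frequent health check, e.g., is the process still running? */
    return TRUE;
}

BOOL WINAPI MyIsAlive(RESID ResourceId)
{
    /* Thorough, less frequent check, e.g., does the service respond? */
    return TRUE;
}

VOID WINAPI MyTerminate(RESID ResourceId)
{
    /* Immediate, forceful stop (used when Offline fails or times out). */
}

VOID WINAPI MyClose(RESID ResourceId)
{
    /* Free the per-resource context allocated in MyOpen. */
}
```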
Cluster failover
With Microsoft Cluster Server, two types of failover are supported: resource failover and service failover. Both allow for increased system availability. The more comprehensive resource failover mode takes advantage of cluster APIs that enable applications to be “cluster aware.” This is provided via a resource DLL that can be configured to allow customizable failover of the application. Resource DLLs provide a means for Microsoft Cluster Server to manage resources; they define resource abstractions, interfaces, and management. In the resource failover mode of operation, it is assumed that the service is running on both nodes of the Microsoft Cluster Server cluster (also known as “active/active”) and that a specific resource, such as a database, virtual server, or IP address, fails over rather than the entire service. Many applications, from independent software vendors as well as from Microsoft, do not have resource DLLs available that enable them to be cluster aware. To offset this, Microsoft provides a generic service resource DLL, which gives basic functionality to these applications running on Microsoft Cluster Service. The generic resource DLL provides for the service failover mode and limits the application to running on one node only (also known as “active/passive”). In the service failover mode, a service is defined to Microsoft Cluster Server as a resource. Once defined, the Microsoft Cluster Server Failover Manager ensures that the service is running on only one node of the cluster at any given time. The service is part of a resource group that uses a common name throughout the cluster, so all services running in the resource group are available to any network clients using the common name.
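For a planned failover, an administrator would normally move the resource group from Cluster Administrator, but the operation reduces to one Cluster API call, as the minimal C sketch below shows. The group name "EVS1 Group" and node name "NODE2" are hypothetical; MoveClusterGroup returns ERROR_IO_PENDING when the move has been initiated but is still in progress.

```c
/* Planned failover: move a resource group (e.g., an Exchange Virtual
 * Server's group) to another node. Group and node names are examples. */
#include <windows.h>
#include <clusapi.h>

DWORD MoveGroupToNode(void)
{
    DWORD rc;
    HCLUSTER hCluster = OpenCluster(NULL);
    if (hCluster == NULL) return GetLastError();

    HGROUP hGroup = OpenClusterGroup(hCluster, L"EVS1 Group");
    HNODE  hNode  = OpenClusterNode(hCluster, L"NODE2");
    if (hGroup != NULL && hNode != NULL) {
        /* Passing a NULL node instead asks the cluster to pick the best
         * available node from the group's preferred-owners list. */
        rc = MoveClusterGroup(hGroup, hNode);
    } else {
        rc = GetLastError();
    }
    if (hNode != NULL)  CloseClusterNode(hNode);
    if (hGroup != NULL) CloseClusterGroup(hGroup);
    CloseCluster(hCluster);
    return rc;
}
```

From the command line, the equivalent would be along the lines of: cluster group "EVS1 Group" /moveto:NODE2.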
Cluster-aware vs. non cluster-aware applications
Cluster-aware applications provide the highest levels of functionality and availability in a Microsoft Cluster Server environment. The application and the cluster services can exchange feedback that facilitates optimal operation, and as much application state as possible is preserved during failover. Examples of cluster-aware applications are Microsoft SQL Server, SAP R/3, Baan, PeopleSoft, Oracle, and Microsoft Exchange Server 2003 Enterprise Edition. Non cluster-aware applications have the limitations discussed previously: the application and the cluster software cannot communicate with each other, and any communication that occurs is limited to that provided by the generic resource DLL supplied with Microsoft Cluster Server. Examples of non cluster-aware applications are file and print services, Microsoft Internet Information Server, and Microsoft Exchange Server 5.5 Enterprise Edition. The cluster software has no application awareness and simply understands that a generic service or group of services and resources must be treated as a failover unit.

Active/active vs. active/passive
When deploying cluster solutions with Microsoft Windows Server, the level of functionality and flexibility that an application can enjoy in a clustered environment directly relates to whether it supports an active/passive or active/active configuration. Active/active means that an application provides functionality on all nodes in the cluster at the same time; the application services are running and servicing users from each node in the cluster. To do this, an application must support communicating with the cluster services via its own resource DLL (cluster-aware). Also, the application must be architected in such a way that specific resource units can be treated independently and failed over to other nodes. Per the discussions above, this requires specific support from the application vendor (whether Microsoft or a third-party vendor) in order for the application to run in an active/active cluster configuration. For active/passive configurations, the application is either limited architecturally (it cannot run on two active nodes), has no specific resource DLL support, or both. In an active/passive two-node configuration, the application runs on only one cluster node at a time.
The application may or may not be cluster-aware (see above). Some applications capable of active/active operation, such as Microsoft Exchange Server 2000 and 2003, carry limitations when run active/active rather than active/passive (covered later). The discussions above of failover types (service and resource failover) and of cluster-aware versus non cluster-aware applications are distinct from active/active and active/passive configurations. These terms are often confused: an active/passive cluster application can still be cluster-aware, with application services running on every node, even nodes that are not actively providing end-user functionality. However, an active/active application must be cluster-aware, and a non cluster-aware application must be deployed in an active/passive configuration.

Exchange virtual servers
From an administrative perspective, all components required to provide services and act as a unit of failover are grouped into a Microsoft Exchange Virtual Server (EVS). One or more EVSs can exist in the cluster, and each virtual server runs on one of the nodes in the cluster. Microsoft Exchange Server 2000/2003 can support multiple virtual servers on a single node, although this may be less than desirable (see the discussion of active/active design and failover). At a minimum, an EVS includes a storage group and the required protocols. From the viewpoint of Microsoft Cluster Server, an EVS exists as a set of resources in a cluster resource group. If multiple EVSs share the same physical disk resource (that is, each has a storage group residing on the same disk device), they must all exist within the same resource group and cannot be split into separate resource groups. This administrative restriction ensures that the resources and virtual servers fail over as a single unit and that resource group integrity is maintained; a minimal model of this rule follows Table 3 below.

Clients connect to virtual servers the same way they would connect to a stand-alone server. The cluster service monitors the virtual servers in the cluster; in the event of a failure, it restarts or moves the affected virtual servers to a healthy node. For planned outages, the administrator can move the virtual servers to other nodes manually. In either case, clients see an interruption of service only during the brief time that the virtual server is in an online/offline pending state.

Clustering support for Microsoft Exchange Server components
Clustering support differs by component; not all components of Microsoft Exchange Server 2003 are currently supported in a clustered environment. The following table details which components are supported and, where applicable, the type of clustering they can support.

Table 3. Microsoft Exchange Server Component Cluster Support (source: Deploying Exchange Server Clusters and What's New in Exchange Server 2003, Microsoft)

Microsoft Exchange Server Component | Cluster Functionality | Microsoft Exchange 2000 | Microsoft Exchange 2003
Exchange System Attendant | Active/Active | Each Microsoft Exchange Virtual Server is created by the System Attendant resource when configured. | Same as Microsoft Exchange 2000
Information Store | Active/Active | Each cluster node is limited to 4 storage groups. | Same as Microsoft Exchange 2000
Message Transfer Agent (MTA) | Active/Passive | The MTA will be in only one cluster group; one MTA instance per cluster. | Same as Microsoft Exchange 2000
POP3 Protocol | Active/Active | Multiple virtual servers per node. | Same as Microsoft Exchange 2000
IMAP Protocol | Active/Active | Multiple virtual servers per node. | Same as Microsoft Exchange 2000
SMTP Protocol | Active/Active | Multiple virtual servers per node. | Same as Microsoft Exchange 2000
HTTP DAV Protocol | Active/Active | Multiple virtual servers per node. | Same as Microsoft Exchange 2000
NNTP Protocol | Active/Active | Multiple virtual servers per node. | Same as Microsoft Exchange 2000
MS Search Server | Active/Active | One instance per virtual server. | Same as Microsoft Exchange 2000
Site Replication Service | Active/Passive | Not supported in a cluster. | Same as Microsoft Exchange 2000
MSMail Connector | Active/Passive | Not supported in a cluster. | Microsoft Exchange 2000 only
cc:Mail Connector | Active/Passive | Not supported in a cluster. | Microsoft Exchange 2000 only
Lotus Notes Connector | Active/Passive | Not supported in a cluster. | Same as Microsoft Exchange 2000
Novell GroupWise Connector | Active/Passive | Not supported in a cluster. | Same as Microsoft Exchange 2000
SNADS Connector | Active/Passive | Not supported in a cluster. | Microsoft Exchange 2000 only
PROFS Connector | Active/Passive | Not supported in a cluster. | Microsoft Exchange 2000 only
Active Directory Connector | Active/Passive | Not supported in a cluster. | Same as Microsoft Exchange 2000
Key Management Service | Active/Passive | Not supported in a cluster. | Microsoft Exchange 2000 only
Chat Service | Active/Passive | Not supported in a cluster. | Microsoft Exchange 2000 only
Conferencing Manager Services | Active/Passive | Not supported in a cluster. | Microsoft Exchange 2000 only
Video Conferencing Service | Active/Passive | Not supported in a cluster. | Microsoft Exchange 2000 only
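The resource-group rule described above can be made concrete with a small sketch. This is a hypothetical Python model, not the Microsoft Cluster Server API: the class and function names are invented for illustration, and real failover is driven by the resource monitor and resource DLLs described below.

import typing

class ResourceGroup:
    """Toy model of a cluster resource group: the unit of failover."""
    def __init__(self, name: str, disks: typing.Iterable[str], owner_node: str):
        self.name = name
        self.disks = set(disks)      # physical disk resources in the group
        self.owner_node = owner_node

    def fail_over(self, target_node: str) -> None:
        # The whole group moves together; resources cannot stay behind.
        print(f"{self.name}: offline on {self.owner_node}, online on {target_node}")
        self.owner_node = target_node

def validate_groups(groups: list) -> None:
    """Reject configurations where two groups claim the same disk."""
    seen = {}
    for g in groups:
        for disk in g.disks:
            if disk in seen:
                raise ValueError(
                    f"Disk {disk} is shared by {seen[disk]} and {g.name}; "
                    "EVSs sharing a disk must live in one resource group")
            seen[disk] = g.name

evs1 = ResourceGroup("EVS1", disks={"R:", "S:"}, owner_node="NODE1")
evs2 = ResourceGroup("EVS2", disks={"S:"}, owner_node="NODE2")  # shares S:
try:
    validate_groups([evs1, evs2])
except ValueError as err:
    print(err)
evs1.fail_over("NODE2")   # everything in EVS1 moves as one unit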
Microsoft Exchange Server provides its core features and support for Microsoft Cluster Server via two key components, shown in Figure 2: the Microsoft Exchange Cluster Administration DLL and the Exchange Resource DLL.

Figure 2. How Microsoft Exchange interfaces with Microsoft Cluster Services

Key component: Exchange resource DLL (EXRES.DLL)
Recall from the earlier discussion of cluster-aware versus non cluster-aware applications that the existence of an application-specific resource DLL is the key differentiator for cluster-aware applications, and that Microsoft Exchange 5.5 did not provide its own resource DLL, relying instead on the generic resource DLL included with Microsoft Cluster Server. For Microsoft Exchange Server 2000 and 2003, Microsoft developers took the extra time and effort to provide full cluster functionality. The result is the Microsoft Exchange resource DLL, EXRES.DLL, which is installed when the setup application detects that it is operating in a clustered environment. EXRES.DLL acts as a direct resource monitor interface between the cluster service and Microsoft Exchange Server by implementing the Microsoft Cluster Service API set. Table 4 shows the typical interactions and indications that EXRES.DLL provides between Microsoft Exchange resources and the cluster service; a simplified sketch of this polling contract follows the table.

Table 4. Microsoft Exchange Resource DLL (EXRES.DLL) Interactions and Functions

Interaction/Indicator | Function
Online/Offline | The Microsoft Exchange Virtual Server resource is running, stopped, or in an idle state.
Online/Offline Pending | The process or service is starting up or shutting down.
Looks Alive/Is Alive | Resource polling functions that determine whether a resource should be restarted or failed. Configurable in Cluster Administrator on a per-resource basis.
Failed | The resource failed the Is Alive call and could not be restarted (restart failed).
Restart | The resource failed the Is Alive call and is directed to attempt a restart.
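The Looks Alive/Is Alive contract in Table 4 amounts to a polling loop. The Python sketch below is purely illustrative: the real entry points are the LooksAlive and IsAlive calls that the resource monitor makes into EXRES.DLL, and the real thresholds are the per-resource settings in Cluster Administrator. The function names and the simulated restart here are assumptions for the example.

import random

def looks_alive(resource: dict) -> bool:
    # Cheap, frequent check (e.g. "is the service process present?").
    return resource["process_running"]

def is_alive(resource: dict) -> bool:
    # Thorough, less frequent check (e.g. "does the store answer a request?").
    return resource["process_running"] and resource["responds_to_rpc"]

def restart(resource: dict) -> bool:
    # Stand-in for a real service restart; succeeds half the time here.
    resource["responds_to_rpc"] = True
    return random.random() > 0.5

def poll(resource: dict, restart_attempts: int = 3) -> str:
    """Return the action the resource monitor would take (Table 4 states)."""
    if looks_alive(resource) and is_alive(resource):
        return "online"
    for attempt in range(restart_attempts):           # 'Restart'
        resource["process_running"] = restart(resource)
        if resource["process_running"] and is_alive(resource):
            return f"online (restarted after {attempt + 1} attempt(s))"
    return "failed: fail the group over to another node"   # 'Failed'

store = {"process_running": False, "responds_to_rpc": False}
print(poll(store))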
Key component: Exchange cluster admin DLL (EXCLUADM.DLL)
For Microsoft Exchange resources to be configured and controlled through Cluster Administrator, there must be an enabler that lets Microsoft Exchange services communicate with Cluster Administrator, and lets the Cluster Administrator program present Microsoft Exchange-specific configuration parameters and screens. The Microsoft Exchange cluster administration DLL, EXCLUADM.DLL, provides this support. It supplies the wizard screens used when configuring Microsoft Exchange resources in Cluster Administrator and presents the Microsoft Exchange resources that can be added to the cluster, such as the Microsoft Exchange System Attendant. The cluster administration DLL is a key component in the configuration and management of Microsoft Exchange services in the cluster; it is not required for resource monitoring, restart, or failover actions, which are performed by the Microsoft Exchange resource DLL (EXRES.DLL).

Microsoft Exchange Server cluster resource dependencies
Figure 3 illustrates the tree structure of Microsoft Exchange 2000 cluster resource dependencies, and Figure 4 shows the newer Microsoft Exchange Server 2003 cluster resource dependencies. Microsoft Exchange services must have certain resources in place as predecessors before they can be brought online as cluster resources. By default, Microsoft Exchange Server 2003 installs nine resources (virtual server instances) into the cluster resource group being configured in Cluster Administrator. Table 5 provides a brief description of each resource and its function.
Figure 3. Microsoft Exchange Server 2000 Cluster Resource Dependencies

Figure 4. Microsoft Exchange Server 2003 Cluster Resource Dependencies
When you make a change to storage group configuration (including Recovery Storage Groups, covered later in this document), System Manager displays the prompt shown in Figure 5 below to ensure proper disk resource dependencies.

Figure 5. Prompt in System Manager to Ensure Proper Disk Resource Dependencies

Note: As shown in Table 5 below, Microsoft Exchange Server 2003 changes the resource dependencies so that the Internet protocols are dependent directly on the System Attendant instead of on the Information Store.

Table 5. Microsoft Exchange Server 2000 and 2003 Cluster Resources

Resource | Role
System Attendant | Foundation Microsoft Exchange resource that must exist prior to other resources being added to the cluster. Resource dependency: Network Name and all physical disks for that virtual server.
Information Store | Virtual server instance for the STORE.EXE process and its presentation to MAPI clients and other services. Resource dependency: System Attendant.
Routing Service | Microsoft Exchange Server 2003 routing service virtual server instance. Resource dependency: System Attendant.
MTA Service | Message Transfer Agent virtual server. Exists on only one cluster node (active/passive). Provided for legacy messaging connector support and routing to Microsoft Exchange 5.5 environments. Resource dependency: System Attendant.
MS Search Service | Microsoft Search engine virtual server instance. Provides the Microsoft Exchange content indexing service for clients. Can be removed if not needed. Resource dependency: Microsoft Exchange 2003 - System Attendant; Microsoft Exchange 2000 - Information Store.
SMTP Service | SMTP virtual server instance. Provides Internet-protocol email sending and message routing within Microsoft Exchange 2000/2003. Resource dependency: Microsoft Exchange 2003 - System Attendant; Microsoft Exchange 2000 - Information Store.
HTTP Virtual Server | HTTP-DAV protocol virtual server. Provides Web/browser-based client access to the Information Store. Resource dependency: Microsoft Exchange 2003 - System Attendant; Microsoft Exchange 2000 - Information Store.
POP3 Service | POP3 protocol virtual server. Provides POP3 client access to the Information Store. Can be removed if that protocol is not needed. Resource dependency: Microsoft Exchange 2003 - System Attendant; Microsoft Exchange 2000 - Information Store.
IMAP Service | IMAP protocol virtual server. Provides IMAP client access to the Information Store. Can be removed if that protocol is not needed. Resource dependency: Microsoft Exchange 2003 - System Attendant; Microsoft Exchange 2000 - Information Store.

When configuring cluster resources for Microsoft Exchange, four prerequisites must be satisfied (each is explained in more detail later):
1. Microsoft Exchange Server 2003 must be installed on all cluster nodes where Microsoft Exchange Virtual Servers will run.
2. The Microsoft Distributed Transaction Coordinator (MSDTC) is required as a cluster resource and must be configured before Microsoft Exchange 2003 can be installed or Microsoft Exchange 2000 can be upgraded.
3. The network name must be created. Because this resource depends on the IP address, the IP address must be assigned and its resource created first.
4. Finally, before creating the Microsoft Exchange System Attendant, create the physical disk resources required by the virtual server you are configuring. At a minimum, one physical disk resource must be configured for Microsoft Exchange virtual servers to be added to the cluster configuration.

When Microsoft Exchange cluster resources start and stop (change states), they must do so in order of resource dependency. On startup, resources start in forward order of dependence (bottom to top in the dependency trees of Figures 3 and 4); when resources are stopped, or a resource group is taken offline, they shut down in reverse order of dependence (top to bottom). Understanding these dependencies makes configuring cluster resources for Microsoft Exchange simpler; the sketch below derives the start and stop order from the dependency table.
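Because the dependencies form a tree, the start order is simply a topological sort and the stop order is its reverse. The following Python sketch (using the standard-library graphlib module, Python 3.9 or later) encodes the Microsoft Exchange 2003 dependencies from Table 5; the IP Address, Network Name, and Physical Disk entries are the standard cluster resources described in the prerequisites above.

from graphlib import TopologicalSorter  # Python 3.9+

# resource -> resources it depends on (these must be online first)
depends_on = {
    "IP Address": [],
    "Physical Disk": [],
    "Network Name": ["IP Address"],
    "System Attendant": ["Network Name", "Physical Disk"],
    "Information Store": ["System Attendant"],
    "Routing Service": ["System Attendant"],
    "MTA Service": ["System Attendant"],
    "MS Search Service": ["System Attendant"],
    "SMTP Service": ["System Attendant"],
    "HTTP Virtual Server": ["System Attendant"],
    "POP3 Service": ["System Attendant"],
    "IMAP Service": ["System Attendant"],
}

start_order = list(TopologicalSorter(depends_on).static_order())
print("start:", start_order)                  # dependencies come online first
print("stop: ", list(reversed(start_order)))  # shutdown runs in reverse order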
Microsoft Exchange virtual servers
The virtual server is the key unit of client access for Microsoft Exchange Server 2003 services running in a clustered environment. Virtual servers exist for several different Microsoft Exchange Server services (as shown in Table 5), and the virtual server name is how clients, resources, and other services access individual Microsoft Exchange services within the cluster. In Microsoft Exchange Server 2003, a virtual server contains resources for the Internet client protocols (SMTP, IMAP, POP3, and HTTP-DAV), the Information Store (for MAPI clients), the MTA service, and the Microsoft Search Service. The virtual server takes on the name property of the network name resource that is configured prior to configuring the Microsoft Exchange resources. For example, if you configure the network name resource as "EXVS1," each Exchange virtual server resource configured in that cluster resource group will respond to that virtual server name when used by clients and other services and resources. Microsoft Exchange virtual server resources fail over in the cluster as a managed unit: the entire cluster resource group containing the virtual server resources is failed over together from one node to another. One or more Microsoft Exchange information storage groups can be configured as part of a Microsoft Exchange Server 2003 virtual server; however, a storage group can belong to only one virtual server. When deploying Microsoft Exchange Server 2003 clusters, ensure that you familiarize yourself with virtual servers and how they are used.

Microsoft Exchange Server 2000/2003 can support multiple virtual servers on a single node. In an active/active design, one node must be capable of taking on the workload of the other in the event of a failover. This limits the workload of each node (in a two-node cluster), as you must leave ample headroom for the failover to occur. In light of this, Microsoft places restrictions on active/active design workloads; see Table 8 and the section preceding it for the restrictions. Clusters of more than two nodes must be designed in an N+I active/passive configuration.

What's new in Microsoft Exchange Server 2003
From a clustering standpoint, the most relevant changes are:
• Support for Microsoft Windows Server 2003 features such as 8-node clusters, volume mount points, and new quorum resource models
• Additional prerequisite checking during Microsoft Exchange installation
• Flattening of the resource dependency tree, allowing greater flexibility and shorter resource failover times
• A changed Microsoft Exchange 2003 permissions model, for greater security:
– IPSec support for front-end to back-end server connections
– The cluster service account requires no Microsoft Exchange-specific permissions
– Support for Kerberos (enabled by default)
• Many Microsoft Exchange 2000 tuning parameters are no longer necessary, as Microsoft Exchange 2003 tunes itself more accurately on large-memory and multiprocessor servers
• Improved virtual memory management that reuses blocks of memory more efficiently, reducing fragmentation
• On failover, Microsoft Exchange services are stopped on the passive node, which frees memory and prevents fragmentation
• IMAP4 and POP3 resources are not added automatically when you create a Microsoft Exchange Virtual Server (but are upgraded if they exist in Microsoft Exchange 2000)
• A change in the Setup routine so that it no longer prompts that a cluster has been detected (previously this halted setup until acknowledged)

See the Microsoft document What's New in Microsoft Exchange Server 2003, http://www.microsoft.com/technet/prodtechnol/exchange/2003/library/newex03.mspx, for a more comprehensive list.
See also HP's paper Exchange Server 2003 Clusters on Windows Server 2003, http://h71028.www7.hp.com/aa_downloads/6/100/225/1/65234.pdf.

Planning your Microsoft Exchange 2003 cluster deployment
Planning is critical to a successful Microsoft Exchange Server 2003 cluster deployment. If you do not plan your implementation carefully, you may have to tear down your cluster implementation and re-implement it, including backup and restoration of critical data. The following areas should be addressed:
• Server hardware
• Storage planning
• Naming standards
• Microsoft Exchange Server roles
• Microsoft Windows Server infrastructure
• Cluster service account

In addition to the extensive list that follows, you may also wish to see the checklist Preparation for installing a cluster at http://go.microsoft.com/fwlink/?LinkId=16302.

Best practices for hardware

Check for hardware compatibility
The hardware used in a cluster must be listed in the Microsoft Windows Catalogs, which are replacing the Hardware Compatibility List (HCL). The catalogs and HCL can be found at http://www.microsoft.com/whdc/hcl/default.mspx. If you implement hardware that is not in the catalog, Microsoft will not support your cluster configuration.

Choose standardized configurations
You should configure all nodes in a cluster with identical specifications for memory, disks, CPUs, and so on. Implementing nodes of varying specifications can lead to inconsistent performance levels as Microsoft Exchange Virtual Servers move between nodes. HP makes it easier to choose hardware by offering a wide selection of supported cluster configurations (see http://www.hp.com/servers/proliant/highavailability).

You may hear of deployments with non-standardized configurations; however, be certain of the reasons before using varying hardware platforms. For example, within Microsoft's internal clustering deployment, several passive nodes are used as failover targets during rolling upgrades. These are desirable mainly because of the frequency with which new builds are applied (Microsoft frequently tests the newest versions, applying them to production during the development release cycle). Varying hardware platforms can also be useful in a TEST cluster for evaluating new technology, such as the difference between 4-way and 6-way platforms of varying processor speeds and cache sizes.

In addition to standardized hardware (from the Microsoft Windows Catalogs), Microsoft specifies that many configuration settings must be the same. For example, all network adapters on the same cluster network must use identical communication settings, such as speed, duplex mode, flow control, and media type, and adapters connected to the same switch must have identical port settings on the switch. Ideally, the NICs for each network should be identical hardware models at the same firmware revision level in each server.
In addition, each cluster network must be configured as a single IP subnet, distinct from the subnets of the other cluster networks. The addresses may be assigned by DHCP, but manual configuration with static addresses is preferred, and the use of Automatic Private IP Addressing (APIPA) is not supported.

Hardware redundancy
The key design principle of any cluster deployment is to provide high availability. In the event of a hardware failure, a failover operation moves resources from the failed node to another node in the cluster. During a failover of Microsoft Exchange, users cannot access e-mail folders for a short time while resources are taken offline on the failed node and brought online on the other node. For each node in the cluster, implement redundant hardware components in order to reduce the impact of a hardware failure and thus avoid a failover. Examples of components where redundancy can be implemented are as follows:

• Redundant networks - Two or more independent networks must connect the nodes of a cluster; a cluster connected by only one network is not a supported configuration. In addition, each network must fail independently of every other: no single component failure, whether a Network Interface Card (NIC) or a switch, should cause both cluster networks to fail. This also rules out using a single multi-port NIC for both cluster networks.

• Network adapter or Network Interface Card (NIC) teaming - NIC teaming is a feature enabled on HP ProLiant servers by the addition of Support Pack software. Teaming allows multiple network cards to act as, and be configured as, a single virtual card. This allows a single NIC, cable, or switch port to fail without any interruption of service; when the team is split across multiple network switches, a switch failure can also be handled. There are three types of teaming: Network Fault Tolerance (NFT), which handles faults but provides no additional throughput; Transmit Load Balancing (TLB), which can provide multiple outbound connections; and Switch-Assisted Load Balancing (SALB), which requires specific network switches, such as Cisco Fast EtherChannel. Use NFT teaming on the public (client) network in a cluster; however, do not use NIC teaming for the private (heartbeat) network, as it is not supported by Microsoft (see Q254101). For more information on NIC teaming, see HP.com - ProLiant networking - teaming, at http://h18000.www1.hp.com/products/servers/networking/teaming.html.

• Power supplies and fans - Connect redundant power supplies to separate Power Distribution Units (PDUs). If both power supplies are connected to the same PDU and the PDU fails, power will be lost.

• Host Bus Adapters/array controllers - If you have implemented a Storage Area Network (SAN) with your cluster, build redundancy into the connections between your nodes and the SAN, so that failures of HBAs, Fibre Channel connections, and SAN switches can be handled without inducing a cluster node failover. HBA redundancy is controlled by HP StorageWorks Secure Path. For more information, see HP StorageWorks Secure Path for Windows: business continuity software - overview & features, at http://h18006.www1.hp.com/products/sanworks/secure-path/spwin.html.
Note: The latest version of HP StorageWorks Secure Path adds distributed port balancing across HBAs in a cluster. Previous versions did not load-balance and required setting the paths manually (a setting that was not persistent across reboots). If you are using an older version, or are not sure, check the SAN throughput at the switch ports to verify that both paths are active.

Best practices for cluster configuration

Understanding resource dependency
As the earlier section on resource dependencies made clear, there is a specific sequence in which resources are brought online or taken offline, dictated by the resource dependencies. It is critical to understand in your design that the Microsoft Exchange System Attendant resource depends on all physical disks for that virtual server. This means an entire Microsoft Exchange virtual server can be affected by the failure of any disk resource, whether a database or a log file drive. For very large servers in a SAN environment, this exposes all users on the server to downtime for any disk resource failure, which is quite undesirable. It is essential that all drives be protected from failure through RAID and redundant paths, and that the number of disk resources be limited in order to reduce the risk of a drive failure impacting the Microsoft Exchange virtual server. For example, a very large Microsoft Exchange server with separate disks for four storage groups, their log files, the SMTP mailroot, and each of twenty databases might be better split into two or more virtual servers.

Planning cluster groups
Create a cluster group that contains only the quorum disk resource, cluster name, and cluster IP address. The benefit of separating the quorum resource from the Microsoft Exchange cluster groups is that the cluster remains online if a resource fails in a Microsoft Exchange cluster group. The cluster resource group owning the quorum can be run on the passive node, which isolates it from the load on the active nodes. If an active node experiences a hard fault resulting in node failover, the failover may be quicker because that cluster resource group is already online on the surviving node.

Planning quorum type
Deciding on the quorum type is important, because the type of quorum resource determines the type of cluster, and the decision between the three types must be made before cluster installation. The cluster is either a quorum-device cluster (using either a local quorum or a single quorum device on a shared SCSI bus or Fibre Channel) or a majority-of-nodes cluster.

A quorum-device cluster is defined by a quorum using either the Physical Disk resource type or the local quorum resource type (in either Microsoft Windows 2000 or 2003). The local quorum resource is often used to set up a single-node cluster for testing and development. In Microsoft Windows Server 2003, the Cluster service automatically creates the local quorum resource if it does not detect a shared disk (you do not need to specify a special switch as in the Microsoft Windows 2000 procedure). You may also create the local quorum resource after installation of a quorum on a single quorum device (for example, in a multi-node cluster) in Microsoft Windows Server 2003. This can be beneficial if you need to replace or make other repairs on the shared disk subsystem (for example, rebuild the drive array in a manner that is data destructive).
Figure 6. Selecting Quorum type during Cluster Setup

A majority-of-nodes cluster is available only in Microsoft Windows Server 2003 and is defined by a majority node set quorum, in which each node maintains its own copy of the cluster configuration data. Determining which node owns the quorum, and whether the cluster is even operational, is done through access to file shares. This design helps in geographically dispersed clusters and addresses the 'split-brain' issue (where more than one node could gain exclusive access to a quorum device, because that device is replicated to more than one location). Since this design is much more complex, it is not advised unless you work carefully with the team or engineer providing the storage architecture. Microsoft states that you should use a majority node set cluster only in targeted scenarios, with a cluster solution offered by your Original Equipment Manufacturer (OEM), Independent Software Vendor (ISV), or Independent Hardware Vendor (IHV) [1]. The arithmetic behind the majority requirement is sketched below.

[1] http://www.microsoft.com/technet/prodtechnol/windowsserver2003/proddocs/entserver/sag_mscs2planning_32.asp
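The majority requirement is simple arithmetic, shown below as an illustrative Python sketch; note the practical consequence for small clusters.

def majority(n_nodes: int) -> int:
    """Nodes that must remain in communication for the cluster to run."""
    return n_nodes // 2 + 1

for n in (2, 3, 4, 5, 8):
    print(f"{n} nodes: need {majority(n)} up; "
          f"tolerates {n - majority(n)} node failure(s)")
# A 2-node majority-of-nodes cluster tolerates no failures at all,
# which is one reason a quorum-device cluster is the usual choice there.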
Storage planning

Storage design is critical
Before building a cluster, you should estimate your storage requirements. Microsoft Exchange databases expand over time, and you may start to run out of disk space if you have not implemented sufficient capacity. Spare capacity is also useful for performing database maintenance and disaster recovery exercises. Adding disk space to a cluster may require future downtime if your array controller does not support dynamic volume expansion. Before you can size mailbox stores, you must answer the following questions:
• How many users need mailboxes?
• What is the standard mailbox quota?
• How many servers will be deployed?
• How many storage groups will be deployed per server?
• How many databases per storage group?
• Will the mailbox stores contain legacy e-mail, and if so, how much?
• What is the acceptable time to restore a database in a disaster recovery scenario?
• What is the classification of mail usage (light, medium, heavy)?
• How long is deleted mail retained?
• What mail clients are used: POP, IMAP, HTTP, or MAPI? (This affects whether mail is stored primarily in the EDB or STM files.)

To assist customers in sizing storage, HP has developed a Microsoft Exchange Storage Planning Calculator, available for download from http://www.hp.com/solutions/activeanswers; see HP ProLiant Storage Planning Calculator for Microsoft Exchange 2000, http://h71019.www7.hp.com/1%2C1027%2C2400-6-100-225-1%2C00.htm. A rough back-of-envelope sketch of the same arithmetic follows.
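As a rough illustration only, the core arithmetic driven by the questions above looks like the Python sketch below. The multipliers and rates are invented placeholders, not HP or Microsoft guidance; use the storage planning calculator for real designs.

def mailbox_store_gb(users: int, quota_mb: int, overhead: float = 1.4) -> float:
    # 'overhead' is an assumed factor for deleted-item retention,
    # database white space, and growth headroom.
    return users * quota_mb * overhead / 1024

users_per_sg = 1000   # example inputs only
quota_mb = 100
dbs_per_sg = 4

sg_size_gb = mailbox_store_gb(users_per_sg, quota_mb)
print(f"Storage group: ~{sg_size_gb:.0f} GB "
      f"(~{sg_size_gb / dbs_per_sg:.0f} GB per database)")

# Sanity-check restore time against the recovery SLA question above.
restore_rate_gb_per_hr = 40   # assumption: measure your own backup system
print(f"Worst-case single-database restore: "
      f"~{sg_size_gb / dbs_per_sg / restore_rate_gb_per_hr:.1f} h")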
Placement of Microsoft Windows and Microsoft Exchange components
As part of the storage design process, you will need to consider the placement of the Microsoft Windows and Microsoft Exchange components listed in Table 6, below.

Table 6. Placement of Microsoft Windows and Microsoft Exchange components

Component | Default location (disk/path)
Microsoft Windows Server binaries | %systemroot%
Microsoft Windows Server pagefile | Typically C:
Quorum disk (for clusters) | Administrator defined
Microsoft Exchange Server 2003 binaries | %ProgramFiles%\Exchsrvr, typically on C:
Microsoft Exchange Server storage groups | Administrator defined; cluster default is the 'Data Directory'
Microsoft Exchange Server mail stores (EDB and STM files) | Administrator defined; cluster default is the 'Data Directory'
Microsoft Exchange Server transaction logs | Administrator defined; cluster default is the 'Data Directory'
Public folder stores (EDB and STM files) | See 'Data Directory' above
Microsoft Exchange Server 2003 SMTP mailroot folders (pickup, queue, and badmail) | See 'Data Directory' above
Microsoft Exchange Server 2003 Message Transfer Agent (MTA) work directory | See 'Data Directory' above
Microsoft Exchange Server 2003 message tracking logs | Administrator defined; see the FTI whitepaper on HP ActiveAnswers
Full Text Indexing (FTI) files | Administrator defined
Legacy e-mail connectors | —

Microsoft Windows Server
Microsoft Windows Server should be implemented on a mirrored (RAID 1) drive. For clusters holding large numbers of mailboxes, Microsoft recommends deploying a dedicated mirrored drive for the pagefile to achieve improved performance. A write-caching controller can help improve the performance of these disks (and it should be battery-backed for protection against data loss during power disruptions).

Microsoft Exchange Server 2003 binaries
Microsoft Exchange Server 2003 can be installed onto the same mirrored drive as Microsoft Windows Server with little impact on performance. A separate logical volume or disk array can be used for convenience if desired.

Storage groups, databases, and transaction logs
A storage group is a Microsoft Exchange management entity within the Store process that controls a number of databases sharing a common set of transaction logs. Microsoft Exchange databases are composed of:
• EDB files (*.EDB) - The header and property information of all messages is held in the EDB file.
• Streaming database files (*.STM) - The primary store for Internet clients/protocols such as Microsoft Outlook Express/IMAP. Content in the streaming file is stored by the Microsoft Exchange Installable File System (IFS) in native format, so messages with attachments in audio/video format require no conversion when stored and accessed by IMAP clients. Automatic format conversion into the EDB file takes place when MAPI (Microsoft Outlook) clients access content in the streaming file.
• Transaction logs (*.LOG) - All changes to the database are written first to the transaction log, then to database pages cached in memory (IS buffers), and finally (asynchronously) to the database files on disk; a toy illustration of this ordering follows this list.
• Checkpoint files (*.CHK) - Checkpoint files keep track of which transactions have been committed to disk.
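The logging sequence above is essentially write-ahead logging. The following toy Python sketch (not ESE internals; all names are invented for illustration) shows why the log plus the checkpoint are sufficient to recover what the in-memory cache loses in a crash.

log, page_cache, database = [], {}, {}
checkpoint = 0   # index into 'log'; everything before it is on disk

def commit(txn_id: int, page: str, value: str) -> None:
    log.append((txn_id, page, value))   # 1. transaction log (synchronous)
    page_cache[page] = value            # 2. cached database pages (IS buffers)

def flush_lazily() -> None:
    """3. Asynchronous writer moving committed pages into the database file."""
    global checkpoint
    for txn_id, page, value in log[checkpoint:]:
        database[page] = value
    checkpoint = len(log)               # the .CHK file tracks this position

commit(1, "page-A", "message 1")
commit(2, "page-B", "message 2")
# A crash here loses page_cache, but replaying log[checkpoint:] rebuilds it.
flush_lazily()
print(database, "checkpoint at", checkpoint)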
Figure 7 shows two storage groups. Storage Group 1 has two sets of EDB and streaming databases; Storage Group 2 has three. As mentioned previously, there is one set of transaction logs per storage group.

Figure 7. Storage Groups

Assigning drive letters and labels
Microsoft Knowledge Base article Q318534 describes best practices for assigning drive letters on a cluster server. Microsoft recommends using Q as the quorum drive and letters R through Z for drives in the shared storage. This has some advantages:
• Additional disks can be added to storage on the local node using drive letters E and F (by convention, drives C and D are used on local nodes for the Microsoft Windows Server and Microsoft Exchange Server 2003 binaries).
• It reduces the risk of an administrator mapping a drive on a cluster node and causing a drive-letter conflict when a failover occurs.

Another good tip from Q318534: label the drives to match the drive letter. For example, label the V drive DRIVEV, as shown in Figure 8. In a disaster recovery situation the drive letter information might be erased; using meaningful labels makes it easy to determine that DRIVEV was originally assigned the drive letter V.
Figure 8. Setting a drive label to match the drive letter

Assign Q: to the quorum drive
By convention, the quorum resource is usually placed on a partition with the drive letter Q. You should assign a separate physical disk in the shared storage to the quorum; this practice makes it easier to manage the cluster and your storage. If you share the physical drive with another server or application and you want to perform maintenance on the drive, you will have to take the cluster offline and move the quorum resource.

M: drive
In Microsoft Exchange 2000, the Microsoft Exchange Installable File System (ExIFS) created an M: drive as part of the installation, so it was important not to assign the drive letter M: to any drive in the shared storage. If a partition had been assigned the letter M:, or a network drive had been mapped as M:, the Microsoft Exchange 2000 installation would fail. A new installation of Microsoft Exchange 2003 does not explicitly mount the ExIFS as drive M:, though it can be exposed via a registry key [2]. Servers upgraded from Microsoft Exchange 2000, however, may still expose the ExIFS as an M: drive.

Assigning drive letters to storage groups and databases
In some scenarios, Microsoft Exchange cluster administrators have exhausted all available drive letters, especially on clusters with multiple storage groups, databases, and virtual servers. For example, suppose an administrator decides to allocate a drive letter to each Microsoft Exchange database on a virtual server with four storage groups of six databases each: 24 drive letters would have to be assigned. Drives A, C, Q, and M may already be allocated to the local storage, the quorum disk, and the ExIFS respectively, so there are insufficient drive letters available in this scheme.

[2] Requires creating a string value 'DriveLetter' under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\EXIFS\Parameters.
Microsoft Windows Server 2003 lifts the restriction on drive letters by providing support for mount points in clusters. Mount points are used as an alternative to drive letters when formatting a partition: instead of selecting a drive letter, you select a folder on a 'parent' drive in which to mount the new drive, and the new drive is then accessed via that folder. For example, in Figure 9 below, Disk 25 is mounted as the folder S:\Logs (where S: is Disk 21 in Logical Disk Manager). Writes to S:\Logs go directly to Disk 25, as shown in Logical Disk Manager below.

Figure 9. Sample Disk Management view showing Disk 25 as Mount Point

See the section on mount points for more detailed information on proper configuration.

Another way to avoid drive-letter restrictions is to not dedicate volumes to individual Microsoft Exchange databases: create partitions large enough to hold multiple Microsoft Exchange databases and group all databases from the same storage group on the same partitions. Note that if a partition contains databases from multiple storage groups and the disk resource goes offline, databases in all of the storage groups on that partition will be dismounted.
Choosing RAID levels for Microsoft Exchange components
All EDB and streaming databases in the same storage group should be placed on the same drive. If they are spread across multiple volumes and one volume fails, the databases on the surviving volumes are still affected: at a minimum, the stores will go offline and then come back online if possible. Choose RAID 0+1 for EDB and streaming databases (or RAID 5 only if assigning sufficient physical spindles for both storage space and performance), and keep the STM and EDB files on the same partition. This makes it easier to locate and manage the database files; moreover, if you place the STM and EDB files on different partitions and lose one or the other, both must be restored. Use the HP storage planning calculator for Microsoft Exchange whenever possible to ensure that the design includes enough physical spindles.

Transaction logs should be placed on a mirrored (RAID 1) drive. Transaction logs allow you to replay transactions that have taken place since the last backup; they should be placed on drives separate from the databases, both to facilitate recovery and for performance reasons.

SMTP mailroot, message transfer agent, message tracking logs
The SMTP folders, Message Transfer Agent, and message tracking logs should be placed on a mirrored drive, separate from the databases and transaction logs. The SMTP virtual server can achieve better performance when spread over multiple disks using RAID 1 (two disks) or RAID 0+1 (an even number of four or more disks). The HP Enterprise Virtual Array can spread the virtual disk over many physical spindles for high performance while also providing RAID protection as Vraid1 with hot sparing. Keeping these components separate from the databases also makes it easier to evaluate the performance of the database and queue drives. Note that in a cluster the SMTP mailroot is placed on the designated data drive (which is external storage by definition), and that the steps for relocating the SMTP mailroot for Microsoft Exchange 2000 described in Q318230, XCON: How to Change the Exchange 2000 SMTP Mailroot Directory Location, were not supported in a cluster. Microsoft Exchange 2003 exposes the locations of the SMTP Badmail and Queue directories in the Microsoft Exchange System Manager, as shown in Figure 10 below. These are properties of each SMTP virtual server and are grayed out unless you run the Microsoft Exchange System Manager directly on that server. A rough sketch of the capacity trade-off between the RAID levels mentioned above follows.
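To see the capacity side of the RAID 0+1 versus RAID 5 trade-off, here is a small illustrative Python sketch; the disk counts and sizes are arbitrary examples, not sizing guidance.

def usable_gb(raid: str, n_disks: int, disk_gb: int) -> int:
    if raid in ("RAID1", "RAID0+1"):
        return n_disks // 2 * disk_gb    # half the spindles hold mirror copies
    if raid == "RAID5":
        return (n_disks - 1) * disk_gb   # one disk's worth of parity
    raise ValueError(f"unknown RAID level: {raid}")

for raid in ("RAID0+1", "RAID5"):
    print(f"{raid}: {usable_gb(raid, n_disks=8, disk_gb=72)} GB usable "
          f"from 8 x 72 GB spindles")
# RAID 5 yields more space from the same spindles, but each random write
# costs extra I/Os (read data and parity, write data and parity), so
# database volumes typically need more RAID 5 spindles for the same load.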
Figure 10. Microsoft Exchange 2003 System Manager SMTP properties for the SMTP Badmail and Queue directories

Allocate dedicated spindles to each Microsoft Exchange virtual server
Each disk in the SAN should be allocated to one Microsoft Exchange virtual server and one storage group. Do not split spindles across multiple Microsoft Exchange virtual servers and cluster groups; then, if you need to take a disk offline for maintenance, only one Microsoft Exchange virtual server is impacted.

Using mount points
As stated earlier, mount points are used as an alternative to drive letters when formatting a partition: instead of selecting a drive letter, you select a folder on a 'parent' drive in which to mount the new drive, and the drive is then accessed via this folder. For example, writes to S:\Logs (where S: is Disk 21 in Logical Disk Manager) go directly to Disk 25, as shown in Figure 9. Microsoft Windows Server 2003 eases the restrictions on assigning drive letters by providing support for mount points in clusters.

Resource dependencies
Mount points are a physical disk resource type and should be created as dependent on the parent resource (the drive that is assigned a letter). If the parent resource goes offline, the junction point for the volume mount point (VMP), which is a folder on the parent resource, is no longer available, and writes to the VMP will fail. It is therefore critical that the mount point be gracefully taken offline first, forcing all outstanding writes to complete.
Recovering drives with mount points
If it is necessary to replace or recover the parent drive resource, the mount point must be re-associated with the folder on the parent drive. This is done by selecting the mount point volume in Disk Manager (Diskmgmt.msc) and selecting Change Drive Letter and Paths... Figure 11 below shows browsing for the folder with which to associate a volume mount point.

Figure 11. Re-associating an Existing Volume Mount Point with a Folder in Disk Manager

Warning: If a volume mount point is used for Microsoft Exchange storage group logs and the parent drive resource has been replaced, DO NOT bring the Information Store resource online, or it will create the Log folder on the drive (as a folder, not a mount point). That folder will contain newly created Microsoft Exchange transaction logs, which must be removed.

Use a consistent naming standard for folders and databases
You should use a consistent naming standard for Microsoft Exchange folders and databases; it makes it easy to determine which storage group a database belongs to. A suggested naming convention is shown in Table 7. In this example, one can tell from the name of the file that the mailbox store EDB file V:\exchsrvr\SG2_MDBDATA\SG2Mailstore3.EDB is mailbox store 3 and is owned by Storage Group 2.

On clusters with multiple Microsoft Exchange virtual servers, extend the naming standard to include the virtual server. For example, the use of VS1 in the filename V:\exchsrvr\SG2_VS1_MDBDATA\SG2VS1Mailstore3.EDB denotes that this database belongs to virtual server 1. All Microsoft Exchange components should be placed in folder trees that have 'Exchsrvr' or 'Exchange' as the root folder name, making it easy for the administrator to determine that Microsoft Exchange components are stored in those folders.

Table 7. Example naming standard for Microsoft Exchange folders

Component | Folder name
Microsoft Exchange binaries | D:\exchsrvr\BIN
Message Transfer Agent | R:\exchsrvr\MTADATA
Message Transfer Agent work directory | R:\exchsrvr\MTADATA
SMTP mailroot | R:\exchsrvr\Mailroot
Message tracking logs | R:\exchsrvr\Yourservername.log
Storage Group 1:
SG1 transaction logs | S:\exchsrvr\SG1_TRANSLOGS
Database folder | T:\exchsrvr\SG1_MDBDATA
SG1Mailstore1 | T:\exchsrvr\SG1_MDBDATA\SG1Mailstore1.EDB and .STM
SG1Mailstore2 | T:\exchsrvr\SG1_MDBDATA\SG1Mailstore2.EDB and .STM
SG1Mailstore3 | T:\exchsrvr\SG1_MDBDATA\SG1Mailstore3.EDB and .STM
SG1Mailstore4 | T:\exchsrvr\SG1_MDBDATA\SG1Mailstore4.EDB and .STM
Storage Group 2:
SG2 transaction logs | U:\exchsrvr\SG2_TRANSLOGS
Database folder | V:\exchsrvr\SG2_MDBDATA
SG2Mailstore1 | V:\exchsrvr\SG2_MDBDATA\SG2Mailstore1.EDB and .STM
SG2Mailstore2 | V:\exchsrvr\SG2_MDBDATA\SG2Mailstore2.EDB and .STM
SG2Mailstore3 | V:\exchsrvr\SG2_MDBDATA\SG2Mailstore3.EDB and .STM
SG2Mailstore4 | V:\exchsrvr\SG2_MDBDATA\SG2Mailstore4.EDB and .STM

Obtain TCP/IP addresses prior to installation
For a two-node Microsoft Exchange Server 2003 active/passive cluster, you will need to assign IP addresses for the following network resources:
• Node 1 (public network)
• Node 2 (public network)
• Microsoft Exchange Virtual Server
• Cluster IP address
• Node 1 (private network)
• Node 2 (private network)

In active/active deployments, you will need to assign an IP address for a second virtual server. If you plan to implement HP ProLiant Integrated Lights-Out (iLO) or Remote Insight Lights-Out Edition boards for remote management of the cluster nodes, be sure to allocate an additional IP address for each card.

Cluster naming conventions
Given the additional complexity and terminology of clusters, it is a good idea to choose a consistent naming convention to aid understanding. In a two-node Microsoft Exchange cluster, you will need to assign names to cluster groups, node names, and virtual servers, as shown in Cluster Administrator in Figure 12. Here is a suggested naming convention (a small helper that generates these names follows below):
• Cluster node 1: XXXCLNODE1
• Cluster node 2: XXXCLNODE2
• Cluster network name: XXXCLUS1
• Microsoft Exchange group name: XXXEXCGRP1
• Microsoft Exchange virtual server name: XXXEXCVS1

XXX represents a site or company code to match your naming convention; CL represents a cluster node/name; EXCGRP represents a Microsoft Exchange cluster group; EXCVS represents a Microsoft Exchange virtual server name.
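The conventions in Table 7 and above are regular enough to generate. The helper below is a hypothetical Python sketch: the templates mirror the examples in the text and should be adjusted to your own site codes and drive letters.

def db_paths(drive: str, sg: int, store: int, vs: str = "") -> list:
    """Return EDB/STM paths following the Table 7 convention."""
    vs_part = f"{vs}_" if vs else ""     # e.g. 'VS1_' on multi-EVS clusters
    folder = f"{drive}:\\exchsrvr\\SG{sg}_{vs_part}MDBDATA"
    base = f"SG{sg}{vs}Mailstore{store}"
    return [f"{folder}\\{base}.EDB", f"{folder}\\{base}.STM"]

def cluster_names(site: str) -> dict:
    """Return the suggested cluster names for a site code such as 'XXX'."""
    return {
        "nodes": [f"{site}CLNODE1", f"{site}CLNODE2"],
        "cluster": f"{site}CLUS1",
        "exchange group": f"{site}EXCGRP1",
        "virtual server": f"{site}EXCVS1",
    }

print(db_paths("V", sg=2, store=3))             # single-EVS layout
print(db_paths("V", sg=2, store=3, vs="VS1"))   # multi-EVS layout
print(cluster_names("XXX"))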