VMware vSphere 5.0 Clustering Technical Deepdive
Copyright © 2011 by Duncan Epping and Frank Denneman.

All rights reserved. No part of this book shall be reproduced, stored in a retrieval system, or transmitted by any means, electronic, mechanical, or
otherwise, without written permission from the publisher. No patent liability is assumed with respect to the use of the information contained herein.
Although every precaution has been taken in the preparation of this book, the publisher and authors assume no responsibility for errors or
omissions. Neither is any liability assumed for damages resulting from the use of the information contained herein.

All terms mentioned in this book that are known to be trademarks or service marks have been appropriately capitalized.

Use of a term in this book should not be regarded as affecting the validity of any trademark or service mark.

Version: 1.0
About the Authors
Duncan Epping is a Principal Architect working for VMware as part of Technical Marketing. Duncan specializes in vSphere HA, Storage DRS,
Storage I/O Control and vSphere Architecture. Duncan was among the first VMware Certified Design Experts (VCDX 007). Duncan is the owner of
Yellow-Bricks.com, one of the leading VMware/virtualization blogs worldwide (Voted number 1 virtualization blog for the 4th consecutive time on
vsphere-land.com.) and lead-author of the "vSphere Quick Start Guide" and co-author of "Foundation for Cloud Computing with VMware vSphere
4", “Cloud Computing with VMware vCloud Director” and “VMware vSphere 4.1 HA and DRS technical deepdive”. He can be followed on twitter at
http://twitter.com/DuncanYB.

Frank Denneman is a Consulting Architect working for VMware as part of the Professional Services Organization. Frank works primarily with
large Enterprise customers and Service Providers. He is focused on designing large vSphere Infrastructures and specializes in Resource
Management, vSphere DRS and storage. Frank is among the first VMware Certified Design Experts (VCDX 029). Frank is the owner of
FrankDenneman.nl which has recently been voted number 6 worldwide on vsphere-land.com and co-author of “VMware vSphere 4.1 HA and DRS
technical deepdive”. He can be followed on twitter at http://twitter.com/FrankDenneman.
Table of Contents
Acknowledgements
Foreword

Introduction to vSphere High Availability
vSphere 5.0
What’s New?
What is Required for HA to Work?
Prerequisites
Firewall Requirements
Configuring VMware High Availability

Components of High Availability
HOSTD Agent
vCenter

Fundamental Concepts
Master Agent
Slaves
Files for both Slave and Master
Heartbeating
Isolated versus Partitioned
Virtual Machine Protection

Restarting Virtual Machines
Restart Priority and Order
Restart Retries
Failed Host
Isolation Response and Detection
Selecting an Additional Isolation Address
Failure Detection Time
Restarting Virtual Machines
Corner Case Scenario: Split-Brain

Adding Resiliency to HA (Network Redundancy)
Link State Tracking

Admission Control
Admission Control Policy
Admission Control Mechanisms
Unbalanced Configurations and Impact on Slot Calculation
Percentage of Cluster Resources Reserved
Failover Hosts
Decision Making Time
Host Failures Cluster Tolerates
Percentage as Cluster Resources Reserved
Specify Failover Hosts
Recommendations
Selecting the Right Percentage

VM and Application Monitoring
Why Do You Need VM/Application Monitoring?
VM Monitoring Implementation Details
Screenshots
Application Monitoring

Advanced Options
Summarizing
Introduction to vSphere DRS
Cluster Level Resource Management
Requirements
DRS Cluster Settings
Automation Level
vCenter sizing
DRS Cluster configuration

vMotion and EVC
vMotion
Enhanced vMotion Capability

DRS Resource Entitlement
Resource Scheduler Architecture
Dynamic Entitlement
Resource Allocation Settings
Shares
Limits
Tying It All Together

Resource Pools and Controls
Root Resource Pool
Resource Pools
Resource Pool Resource Allocation Settings
Shares
The Resource Pool Priority-Pie Paradox
From Resource Pool Setting to Host-Local Resource Allocation
Resource Pool-Level Reservation
Memory Overhead Reservation
Expandable Reservation
Reservations are not Limits.
Limits
Expandable Reservation and Limits

Calculating DRS Recommendations
When is DRS Invoked?
Recommendation Calculation
Imbalance Calculation
GetBestMove
Cost-Benefit and Risk Analysis Criteria
The Biggest Bang for the Buck
Calculating the Migration Recommendation Priority Level
Guidance

Guiding DRS Recommendations
Virtual Machine Size and Initial Placement
MaxMovesPerHost
Rules
Impact of Rules on Organization
Virtual Machine Automation Level

Introduction to DPM
Enable DPM

Calculating DPM Recommendations
Evaluating Resource Utilization
Power-Off recommendations
Power-On recommendations
Recommendation Classifications

Guiding DPM Recommendations
DPM WOL Magic Packet
Baseboard Management Controller
Protocol Selection Order
DPM Scheduled Tasks

Summarizing
Introduction to vSphere Storage DRS
Requirements

Datastore Clusters
Creating a Datastore Cluster
Datastore Cluster Creation and Considerations
Tools assisting Storage DRS

Datastore Cluster Settings
Automation Level
SDRS Runtime Rules
Advanced Options
Select Hosts and Clusters
Ready to Complete

Storage vMotion and SIOC
Storage vMotion
Storage I/O Control
Queuing Internals
Local Disk Scheduler
Datastore-Wide Disk Scheduler
Latency Threshold Recommendations
Injector

Calculating SDRS Recommendations
When is Storage DRS Invoked?
Initial Placement
Recommendation Calculation
Load balancing
Space Load Balancing
IO load balancing
Load Imbalance Recommendations

Guiding SDRS Recommendations
SDRS Recommendation
SDRS Virtual Machine Automation Level
Partially Connected Versus Fully Connected Datastore Clusters
Affinity and Anti-Affinity rules
Impact on Storage vMotion Efficiency
SDRS Scheduling
Maintenance Mode

Summarizing
Integration
HA and Stateless ESXi
HA and SDRS
Storage vMotion and HA
HA and DRS
Summarizing
Acknowledgements
The authors of this book work for VMware. The opinions expressed here are the authors’ personal opinions. Content published was
not approved in advance by VMware and does not necessarily reflect the views and opinions of VMware. This is the authors’ book,
not a VMware book.

First of all we would like to thank our VMware management team (Charu Chaubal, Kaushik Banerjee, Neil O’Donoghue and Bogomil
Balkansky) for supporting us on this and other projects.

A special thanks goes out to our technical reviewers and editors: Doug Baer, Keith Farkas and Elisha Ziskind (HA Engineering), Anne Holler, Irfan Ahmad and Rajesekar Shanmugam (DRS and SDRS Engineering), Puneet Zaroo (VMkernel scheduling), Ali Mashtizadeh and Gabriel Tarasuk-Levin (vMotion and Storage vMotion Engineering), Doug Fawley and Divya Ranganathan (EVC Engineering). Thanks for keeping us honest and
contributing to this book.

A very special thanks to our families for supporting this project. Without your support we could not have done this.

We would like to dedicate this book to everyone who made VMware what it is today. You have not only changed our lives but also revolutionized a complete industry.

Duncan Epping and Frank Denneman
Foreword
What are you reading? It’s a question that may come up as you’re reading this book in a public place like a coffee shop or airport. How do you
answer?

If you’re reading this book then you’re part of an elite group that “gets” virtualization. Chances are the person asking you the question has no idea
how to spell VMware, let alone how powerful server and desktop virtualization have become in the datacenter over the past several years.

My first introduction to virtualization was way back in 1999 with VMware Workstation. I was a sales engineer at the time and in order to effectively
demonstrate our products I had to carry 2 laptops (yes, TWO 1999-era laptops) PLUS a projector PLUS a 4-port hub and Ethernet cables. Imagine how relieved my back and shoulders were when I discovered VMware Workstation and I could finally get rid of that 2nd laptop and associated
cabling. It was awesome to say the least!

Here we are, over a decade after the introduction of VMware Workstation. The topic of this book is HA & DRS; who would have thought of such
advanced concepts way back in 1999? Back then I was just happy to lighten my load…now look at where we are. Thanks to virtualization and
VMware we now have capabilities in the datacenter that have never existed before (sorry mainframe folks, I mean never before possible on x86).

Admit it, we’re geeks!

You may never see this book on the New York Times Bestseller list but that's OK! Virtualization and VMware are very narrow topics, HA & DRS even more narrow. You're reading this because of your thirst for knowledge and I have to admit, Duncan and Frank deliver that knowledge like no others before them.

Both Duncan and Frank have taken time out of their busy work schedules and dedicated themselves to not only blog about VMware virtualization,
but also write and co-author several books, including this one. It takes not only dedication, but also passion for the technology to continue to
produce great content.

The first edition, “VMware vSphere 4.1 HA and DRS technical deepdive” has been extremely popular, so popular in fact, that this book is all about
HA and DRS in vSphere 5.0. Not only the traditional DRS everyone has known for years but also Storage DRS. Storage DRS is something brand
new with vSphere 5.0 and it’s great to have this deep technical reference as it’s released even though it’s only a small part of VMware vSphere.
Use the information in the following pages not only to learn about this exciting new technology, but also to implement the best practices in your own
environment.

Finally, be sure to reach out to both Duncan and Frank either via their blogs, twitter or in person and thank them for getting what’s in their heads
onto the pages of this book. If you’ve ever tried writing (or blogging) then you know it takes time, dedication and passion.

Doug Hazelman
Senior Director, Product Strategy
Veeam Software

P.S. In case you didn’t know, Duncan keeps an album of book pictures that people have sent him on his blog’s Facebook page. So go ahead and
get creative and take a picture of this book and share it with him, just don’t get too creative… http://vee.am/HADRS
Part I

vSphere High Availability
Chapter 1
Introduction to vSphere High Availability
Availability has traditionally been one of the most important aspects when providing services. When providing services on a shared platform like VMware vSphere, the impact of downtime grows exponentially and, as such, VMware engineered a feature called VMware vSphere High Availability. VMware vSphere High Availability, hereafter simply referred to as HA, provides a simple and cost effective solution to increase
availability for any application running in a virtual machine regardless of its operating system. It is configured using a couple of simple steps through
vCenter Server (vCenter) and as such provides a uniform and simple interface. HA enables you to create a cluster out of multiple ESXi or ESX
servers. We will use ESXi in this book when referring to either, as ESXi is the standard going forward. This will enable you to protect virtual
machines and hence their workloads. In the event of a failure of one of the hosts in the cluster, impacted virtual machines are automatically restarted
on other ESXi hosts within that same VMware vSphere Cluster (cluster).

Figure 1: High Availability in action

On top of that, in the case of a Guest OS level failure, HA can restart the failed Guest OS. This feature is called VM Monitoring, but is sometimes
also referred to as VM-HA. This might sound fairly complex but again can be implemented with a single click.

Figure 2: OS Level HA just a single click

Unlike many other clustering solutions, HA is a fairly simple solution to implement and literally enabled within 5 clicks. On top of that, HA is widely
adopted and used in all situations. However, HA is not a 1:1 replacement for solutions like Microsoft Clustering Services (MSCS). The main difference between MSCS and HA is that MSCS was designed to protect stateful, cluster-aware applications, while HA was designed to protect any virtual machine regardless of the type of application within.

In the case of HA, a failover incurs downtime as the virtual machine is literally restarted on one of the remaining nodes in the cluster, whereas MSCS transitions the service to one of the remaining nodes in the cluster when a failure occurs. Contrary to what many believe, MSCS does not guarantee that there is no downtime during a transition. On top of that, your application needs to be cluster-aware and stateful in order to get the most out of this mechanism, which limits the number of workloads that could really benefit from this type of clustering.

One might ask why would you want to use HA when a virtual machine is restarted and service is temporarily lost. The answer is simple; not all virtual
machines (or services) need 99.999% uptime. For many services the type of availability HA provides is more than sufficient. On top of that, many
applications were never designed to run on top of an MSCS cluster. This means that there is no guarantee of availability or data consistency if an
application is clustered with MSCS but is not cluster-aware.

In addition, MSCS clustering can be complex and requires special skills and training. One example is managing patches and updates/upgrades in
an MSCS environment; this could even lead to more downtime if not operated correctly and definitely complicates operational procedures. HA
however reduces complexity, costs (associated with downtime and MSCS), resource overhead and unplanned downtime for minimal additional
costs. It is important to note that HA, contrary to MSCS, does not require any changes to the guest as HA is provided on the hypervisor level. Also,
VM Monitoring does not require any additional software or OS modifications except for VMware Tools, which should be installed anyway as a best
practice. In case even higher availability is required, VMware also provides a level of application awareness through Application Monitoring, which
has been leveraged by partners like Symantec to enable application level resiliency and could be used by in-house development teams to increase
resiliency for their application.

HA has proven itself over and over again and is widely adopted within the industry; if you are not using it today, hopefully you will be convinced after
reading this section of the book.
vSphere 5.0
Before we dive into the main constructs of HA and describe all the choices one has to make when configuring HA, we will first briefly touch on
what’s new in vSphere 5.0 and describe the basic requirements and steps needed to enable HA. The focus of this book is vSphere 5.0 and the
enhancements made to increase stability of your virtualized infrastructure. We will also emphasize the changes made which removed the historical
constraints. We will, however, still discuss features and concepts that have been around since vCenter 2.x to ensure that reading this book provides
understanding of every aspect of HA.
What’s New?
Those who have used HA in the past and have already played around with it in vSphere 5.0 might wonder what has changed. Looking at the
vSphere 5.0 Client, changes might not be obvious except for the fact that configuring HA takes substantially less time and some new concepts like
datastore heartbeats have been introduced.

Do not assume that this is it; underneath the covers HA has been completely redesigned and developed from the ground up. This is the reason
enabling or reconfiguring HA literally takes seconds today instead of minutes with previous versions.

With the redesign of the HA stack, some very welcome changes have been introduced to complement the already extensive capabilities of HA.
Some of the key components of HA have changed completely and new functionality has been added. We have listed some of these changes below
for your convenience and we will discuss them in more detail in the appropriate chapters.
New HA Agent - Fault Domain Manager (FDM) is the name of the agent. HA has been rewritten from the ground up and FDM replaces the
legacy AAM agent.
No dependency on DNS – HA has been written to use IP only to avoid any dependency on DNS.
Primary node concept – The primary/secondary node mechanism has been completely removed to lift all limitations (maximum of 5 primary
nodes with vSphere 4.1 and before) associated with it.
Supports management network partitions – Capable of having multiple “master nodes” when multiple network partitions exist.
Enhanced isolation validation - Avoids false positives when the complete management network has failed.
Datastore heartbeating – This additional level of heartbeating reduces chances of false positives by using the storage layer to validate the
state of the host and to avoid unnecessary downtime when there’s a management network interruption.
Enhanced Admission Control Policies:
The Host Failures based admission control allows for more than 4 hosts to be specified (31 is the maximum).
The Percentage based admission control policy allows you to specify percentages for both CPU and memory separately.
The Failover Host based admission control policy allows you to specify multiple designated failover hosts.
What is Required for HA to Work?
Each feature or product has very specific requirements and HA is no different. Knowing the requirements of HA is part of the basics we have to
cover before diving into some of the more complex concepts. For those who are completely new to HA, we will also show you how to configure it.
Prerequisites
Before enabling HA, it is highly recommended to validate that the environment meets all the prerequisites. We have also included recommendations
from an infrastructure perspective that will enhance resiliency.

Requirements:
Minimum of two ESXi hosts
Minimum of 3GB memory per host to install ESXi and enable HA
VMware vCenter Server
Shared Storage for virtual machines
Pingable gateway or other reliable address

Recommendation:
Redundant Management Network (not a requirement, but highly recommended)
Multiple shared datastores
Firewall Requirements
The following table contains the ports that are used by HA for communication. If your environment contains firewalls external to the host, ensure these ports are opened for HA to function correctly. HA will open the required ports on the ESX or ESXi firewall. Please note that this is the first substantial difference: as of vSphere 5.0, HA uses only a single port, compared to multiple ports pre-vSphere 5.0, and ESXi 5.0 is enhanced with a firewall.

Table 1: High Availability port settings
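If you want to verify the HA firewall configuration without browsing the host's security profile in the vSphere Client, the ruleset can also be queried through the vSphere API. The following is a minimal, illustrative pyVmomi (Python) sketch rather than an official procedure; the vCenter address, the credentials and the ruleset key "fdm" are assumptions that you should verify against your own environment and against Table 1.

# Minimal pyVmomi sketch: list the HA (FDM) firewall ruleset on every host.
# The connection details and the ruleset key "fdm" are assumptions; verify them.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only: skips certificate validation
si = SmartConnect(host="vcenter.local", user="administrator",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.HostSystem], True)
for host in view.view:
    for ruleset in host.configManager.firewallSystem.firewallInfo.ruleset:
        if ruleset.key == "fdm":  # assumed key for the HA agent ruleset
            ports = [(rule.port, rule.protocol, str(rule.direction))
                     for rule in ruleset.rule]
            print(host.name, "fdm enabled:", ruleset.enabled, ports)
Disconnect(si)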
Configuring VMware High Availability
HA can be configured with the default settings within a couple of clicks. The following steps will show you how to create a cluster and enable HA,
including VM Monitoring, as we feel this is the bare minimum configuration.
Each of the settings and the design decisions associated with these steps will be described in more depth in the following chapters.

1. Select the Hosts & Clusters view.
2. Right-click the Datacenter in the Inventory tree and click New Cluster.
3. Give the new cluster an appropriate name. We recommend at a minimum including the location of the cluster and a sequence number, e.g., ams-hadrs-001.
4. In the Cluster Features section of the page, select Turn On VMware HA and click Next.
5. Ensure Host Monitoring Status and Admission Control are enabled and click Next.
6. Leave Cluster Default Settings as they are and click Next.
7. Enable VM Monitoring Status by selecting “VM and Application Monitoring” and click Next.
8. Leave VMware EVC set to the default and click Next.
9. Leave the Swapfile Policy set to default and click Next.
10. Click Finish to complete the creation of the cluster.

Figure 3: Ready to complete the New Cluster Wizard

When the HA cluster has been created, the ESXi hosts can be added to the cluster simply by dragging them into the cluster, if they were already
added to vCenter, or by right-clicking the cluster and selecting “Add Host”.

When an ESXi host is added to the newly-created cluster, the HA agent will be loaded and configured. Once this has completed, HA will enable
protection of the workloads running on this ESXi host.
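For those who prefer to automate this, the same configuration can be created through the vSphere API. The following pyVmomi (Python) sketch mirrors the wizard steps above under a few stated assumptions: the vCenter address, credentials, datacenter name ("DC01"), cluster name and ESXi host details are placeholders, and error handling is omitted for brevity.

# Minimal pyVmomi sketch: create a cluster with HA, admission control and
# VM/Application Monitoring enabled, then add a host. Names are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.local", user="administrator",
                  pwd="password", sslContext=ctx)
dc = si.RetrieveContent().searchIndex.FindByInventoryPath("DC01")

das = vim.cluster.DasConfigInfo(
    enabled=True,                        # Turn On VMware HA
    hostMonitoring="enabled",            # Host Monitoring Status
    admissionControlEnabled=True,        # Admission Control
    vmMonitoring="vmAndAppMonitoring")   # VM and Application Monitoring
cluster = dc.hostFolder.CreateClusterEx(
    name="ams-hadrs-001", spec=vim.cluster.ConfigSpecEx(dasConfig=das))

# Equivalent of "Add Host"; an sslThumbprint may be required in your environment.
connect_spec = vim.host.ConnectSpec(hostName="esxi01.local", userName="root",
                                    password="password", force=True)
cluster.AddHost_Task(spec=connect_spec, asConnected=True)
Disconnect(si)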

As we have clearly demonstrated, HA is a simple clustering solution that will allow you to protect virtual machines against host failure and operating
system failure in literally minutes. Understanding the architecture of HA will enable you to reach that extra 9 when it comes to availability. The
following chapters will discuss the architecture and fundamental concepts of HA. We will also discuss all decision-making moments to ensure you
will configure HA in such a way that it meets the requirements of your or your customer’s environment.
Chapter 2
Components of High Availability
Now that we know what the prerequisites are and how to configure HA, the next step is describing the components that form HA. Keep in mind that this is still a “high level” overview. There is more under the covers that we will explain in the following chapters. The following diagram depicts a two-host cluster and shows the key HA components.

Figure 4: Components of High Availability

As you can clearly see, there are three major components that form the foundation for HA as of vSphere 5.0:
FDM
hostd
vCenter

The first and probably the most important component that forms HA is FDM (Fault Domain Manager). This is the HA agent, and has replaced what
was once known as AAM (Legato’s Automated Availability Manager).

The FDM Agent is responsible for many tasks such as communicating host resource information, virtual machine states and HA properties to other
hosts in the cluster. FDM also handles heartbeat mechanisms, virtual machine placement, virtual machine restarts, logging and much more. We are
not going to discuss all of this in-depth separately as we feel that this will complicate things too much.

FDM, in our opinion, is one of the most important agents on an ESXi host, when HA is enabled, of course, and we are assuming this is the case.
The engineers recognized this importance and added an extra level of resiliency to HA. Contrary to AAM, FDM uses a single-process agent.
However, FDM spawns a watchdog process. In the unlikely event of an agent failure, the watchdog functionality will pick up on this and restart the
agent to ensure HA functionality remains without anyone ever noticing it failed. The agent is also resilient to network interruptions and “all path
down” (APD) conditions. Inter-host communication automatically uses another communication path (if the host is configured with redundant
management networks) in the case of a network failure.

As of vSphere 5.0, HA is no longer dependent on DNS as it works with IP addresses only. This is one of the major improvements that FDM
brought. This also means that the character limit that HA imposed on the FQDN has been lifted. (Pre-vSphere 5.0, FQDNs were limited to 27
characters.) This does not mean that ESXi hosts need to be registered with their IP addresses in vCenter; it is still a best practice to register ESXi
hosts by FQDN in vCenter. Although HA does not depend on DNS anymore, remember that many other services do. On top of that, monitoring and
troubleshooting will be much easier when hosts are correctly registered within vCenter and have a valid FQDN.
Basic design principle
Although HA is not dependent on DNS anymore, it is still recommended to register the hosts with their FQDN.

Another major change that FDM brings is logging. Some of you might have never realized this and some of you might have discovered it the hard
way: prior to vSphere 5.0, the HA log files were not sent to syslog.

vSphere 5.0 brings a standardized logging mechanism where a single log file has been created for all operational log messages; it is called
fdm.log. This log file is stored under /var/log/ as depicted in Figure 5.

Figure 5: HA log file

Basic design principle
Ensure syslog is correctly configured and log files are offloaded to a safe location to offer the possibility of performing a root cause
analysis in case disaster strikes.
HOSTD Agent
One of the most crucial agents on a host is hostd. This agent is responsible for many of the tasks we take for granted like powering on VMs. FDM
talks directly to hostd and vCenter, so it is not dependent on vpxa, like in previous releases. This is, of course, to avoid any unnecessary overhead
and dependencies, making HA more reliable than ever before and enabling HA to respond faster to power-on requests. That ultimately results in
higher VM uptime.

When, for whatever reason, hostd is unavailable or not yet running after a restart, the host will not participate in any FDM-related processes. FDM
relies on hostd for information about the virtual machines that are registered to the host, and manages the virtual machines using hostd APIs. In
short, FDM is dependent on hostd and if hostd is not operational, FDM halts all functions and waits for hostd to become operational.
vCenter
That brings us to our final component, the vCenter Server. vCenter is the core of every vSphere Cluster and is responsible for many tasks these
days. For our purposes, the following are the most important and the ones we will discuss in more detail:
Deploying and configuring HA Agents
Communication of cluster configuration changes
Protection of virtual machines

vCenter is responsible for pushing out the FDM agent to the ESXi hosts when applicable. Prior to vSphere 5, the push of these agents would be
done in a serial fashion. With vSphere 5.0, this is done in parallel to allow for faster deployment and configuration of multiple hosts in a cluster.
vCenter is also responsible for communicating configuration changes in the cluster to the host which is elected as the master. We will discuss this
concept of master and slaves in the following chapter. Examples of configuration changes are modification or addition of an advanced setting or
the introduction of a new host into the cluster.

As of vSphere 5.0, HA also leverages vCenter to retrieve information about the status of virtual machines and, of course, vCenter is used to display the protection status (Figure 6) of virtual machines. (What “virtual machine protection” actually means will be discussed in Chapter 3.) On top of that, vCenter is responsible for the protection and unprotection of virtual machines. This not only applies to user-initiated power-offs or power-ons of virtual machines, but also to the case where an ESXi host is disconnected from vCenter, at which point vCenter will request the master HA agent to unprotect the affected virtual machines.

Figure 6: Virtual machine protection state

Although HA is configured by vCenter and exchanges virtual machine state information with it, vCenter is not involved when HA responds to failures. It is comforting to know that in case of a failure of the host containing the virtualized vCenter Server, HA takes care of the failure and restarts the vCenter Server on another host, including all other configured virtual machines from that failed host.

There is a corner case scenario with regards to vCenter failure: if the ESXi hosts are so called “stateless hosts” and Distributed vSwitches are
used for the management network, virtual machine restarts will not be attempted until vCenter is restarted. For stateless environments, vCenter and
Auto Deploy availability is key as the ESXi hosts literally depend on them.

If vCenter is unavailable, it will not be possible to make changes to the configuration of the cluster. vCenter is the source of truth for the set of virtual
machines that are protected, the cluster configuration, the virtual machine-to-host compatibility information, and the host membership. So, while HA,
by design, will respond to failures without vCenter, HA relies on vCenter to be available to configure or monitor the cluster.
When a virtual vCenter Server has been implemented, we recommend setting the correct HA restart priorities for it. Although vCenter Server is not required for virtual machine restarts, there are multiple components that rely on vCenter and, as such, a speedy recovery is desired. When configuring your vCenter virtual machine with a high priority for restarts, remember to include all services on which your vCenter Server depends for a successful restart: DNS, MS AD and MS SQL (or any other database server you are using).
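If you want to script this rather than use the client, the per-virtual machine restart priority can be set by reconfiguring the cluster. The snippet below is a hedged pyVmomi (Python) sketch, assuming you already hold references to the cluster and the vCenter Server virtual machine objects; treat it as an illustration rather than a finished tool.

# Minimal pyVmomi sketch: give a (vCenter) virtual machine a high HA restart
# priority. 'cluster' and 'vcenter_vm' are assumed to be objects you retrieved.
from pyVmomi import vim

def set_restart_priority(cluster, vcenter_vm, priority="high"):
    settings = vim.cluster.DasVmSettings(restartPriority=priority)
    vm_config = vim.cluster.DasVmConfigInfo(key=vcenter_vm, dasSettings=settings)
    vm_spec = vim.cluster.DasVmConfigSpec(operation="add", info=vm_config)
    spec = vim.cluster.ConfigSpecEx(dasVmConfigSpec=[vm_spec])
    return cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)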

Basic design principle
In stateless environments, ensure vCenter and Auto Deploy are highly available as recovery time of your virtual machines might be
dependent on them.

Understand the impact of virtualizing vCenter. Ensure it has high priority for restarts and ensure that services which vCenter Server depends on are
available: DNS, AD and database.
Chapter 3
Fundamental Concepts
Now that you know about the components of HA, it is time to start talking about some of the fundamental concepts of HA clusters:
Master / Slave Nodes
Heartbeating
Isolated vs Network partitioned
Virtual Machine Protection

Everyone who has implemented vSphere knows that multiple hosts can be configured into a cluster. A cluster can best be seen as a collection of
resources. These resources can be carved up with the use of VMware Distributed Resource Scheduler (DRS) into separate pools of resources or
used to increase availability by enabling HA.

With vSphere 5.0, a lot has changed when it comes to HA. For example, an HA cluster used to consist of two types of nodes. A node could either
be a primary or a secondary node. This concept was introduced due to the dependency on AAM and allowed scaling up to 32 hosts per cluster.
FDM has changed this game completely and removed the whole concept of primary and secondary nodes. (For more details about the legacy
(AAM) node mechanism we would like to refer you to the vSphere 4.1 HA and DRS Technical Deepdive.)

One of the most crucial parts of an HA design used to be the design considerations around HA primary nodes and the maximum of 5 per cluster.
These nodes became the core of every HA implementation and literally were responsible for restarting VMs. Without at least one primary node
surviving the failure, a restart of VMs was not possible. This led to some limitations from an architectural perspective and was also one of the
drivers for VMware to rewrite vSphere HA.

The vSphere 5.0 architecture introduces the concept of master and slave HA agents. Except during network partitions, which are discussed later,
there is only one master HA agent in a cluster. Any agent can serve as a master, and all others are considered its slaves. A master agent is in
charge of monitoring the health of virtual machines for which it is responsible and restarting any that fail. The slaves are responsible for forwarding
information to the master agents and restarting any virtual machines at the direction of the master. Another thing that has changed is that the HA
agent, regardless of its role as master or slave, implements the VM/App monitoring feature; with AAM, this feature was part of VPXA.
Master Agent
As stated, one of the primary tasks of the master is to keep track of the state of the virtual machines it is responsible for and to take action when
appropriate. It is important to realize that a virtual machine can only be the responsibility of a single master. We will discuss the scenario where
multiple masters can exist in a single cluster in one of the following sections, but for now let’s talk about a cluster with a single master. A master will
claim responsibility for a virtual machine by taking “ownership” of the datastore on which the virtual machine’s configuration file is stored.

Basic design principle
To maximize the chance of restarting virtual machines after a failure we recommend masking datastores on a cluster basis. Although
sharing of datastores across clusters will work, it will increase complexity from an administrative perspective.

That is not all, of course. The HA master is also responsible for exchanging state information with vCenter. This means that it will not only receive
but also send information to vCenter when required. The HA master is also the host that initiates the restart of virtual machines when a host has
failed. You may immediately want to ask what happens when the master is the one that fails, or, more generically, which of the hosts can become the master and when is it elected?

Election
A master is elected by a set of HA agents whenever the agents are not in network contact with a master. A master election thus occurs when HA is
first enabled on a cluster and when the host on which the master is running:
fails,
becomes network partitioned or isolated,
is disconnected from VC,
is put into maintenance or standby mode,
or when HA is reconfigured on the host.

The HA master election takes approximately 15 seconds and is conducted using UDP. While HA won’t react to failures during the election, once a
master is elected, failures detected before and during the election will be handled. The election process is simple but robust. The host that is
participating in the election with the greatest number of connected datastores will be elected master. If two or more hosts have the same number of
datastores connected, the one with the highest Managed Object Id will be chosen. This, however, is done lexically, meaning that 99 beats 100 as 9 is larger than 1. For each host, the HA state of the host will be shown on the Summary tab. This includes the role, as depicted in Figure 7 where the
host is a master host.
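The following few lines of Python are purely an illustration of the tie-breaker just described: the number of connected datastores is compared first, and a tie is broken on the Managed Object Id compared as a string, which is why 99 beats 100. The host names and datastore counts are made up.

# (MoID, number of connected datastores) per host; values are illustrative only.
hosts = [("host-100", 6), ("host-99", 6), ("host-42", 5)]
master = max(hosts, key=lambda h: (h[1], h[0]))  # datastore count, then lexical MoID
print(master)  # ('host-99', 6) because '9' sorts after '1'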

After a master is elected, each slave that has management network connectivity with it will set up a single secure, encrypted TCP connection to the
master. This secure connection is SSL-based. One thing to stress here though is that slaves do not communicate with each other after the master
has been elected unless a re-election of the master needs to take place.

Figure 7: Master Agent
As stated earlier, when a master is elected it will try to acquire ownership of all of the datastores it can directly access or access by proxying
requests to one of the slaves connected to it using the management network. It does this by locking a file called “protectedlist” that is stored on the
datastores in an existing cluster. The master will also attempt to take ownership of any datastores it discovers along the way, and it will periodically
retry any it could not take ownership of previously.

The naming format and location of this file is as follows:
/<root of datastore>/.vSphere-HA/<cluster-specific-directory>/protectedlist

For those wondering how “<cluster-specific-directory>” is constructed:

<uuid of VC>-<number part of the MoID of the cluster>-<random 8 char string>-<name of the host running VC>

The master uses this protectedlist file to store the inventory. It keeps track of which virtual machines are protected by HA. Calling it an inventory
might be slightly overstating: it is a list of protected virtual machines. The master distributes this inventory across all datastores in use by the virtual
machines in the cluster. Figure 8 shows an example of this file on one of the datastores.

Figure 8: Protectedlist file
Now that we know the master locks a file on the datastore and that this file stores inventory details, what happens when the master is isolated or
fails? If the master fails, the answer is simple: the lock will expire and the new master will relock the file if the datastore is accessible to it.

In the case of isolation, this scenario is slightly different, although the result is similar. The master will release the lock it has on the file on the
datastore to ensure that when a new master is elected it can claim the responsibility for the virtual machines on these datastores by locking the file
on the datastore. If, by any chance, a master should fail right at the moment that it became isolated, the restart of the virtual machines will be
delayed until a new master has been elected. In a scenario like this, accuracy and the fact that virtual machines are restarted is more important than
a short delay.

Let’s assume for a second that your master has just failed. What will happen and how do the slaves know that the master has failed? vSphere 5.0
uses a network heartbeat mechanism. If the slaves have received no network heartbeats from the master, the slaves will try to elect a new master.
This new master will read the required information and will restart the virtual machines within 10 seconds. There is more to this process but we will
discuss that in Chapter 4.

Restarting virtual machines is not the only responsibility of the master. It is also responsible for monitoring the state of the slave hosts. If a slave fails
or becomes isolated, the master will determine which virtual machines must be restarted. When virtual machines need to be restarted, the master
is also responsible for determining the placement of those virtual machines. It uses a placement engine that will try to distribute the virtual machines
to be restarted evenly across all available hosts.

All of these responsibilities are really important, but without a mechanism to detect a slave has failed, the master would be useless. Just like the
slaves receive heartbeats from the master, the master receives heartbeats from the slaves so it knows they are alive.
Slaves
A slave has substantially fewer responsibilities than a master: a slave monitors the state of the virtual machines it is running and informs the master
about any changes to this state.

The slave also monitors the health of the master by monitoring heartbeats. If the master becomes unavailable, the slaves initiate and participate in
the election process. Last but not least, the slaves send heartbeats to the master so that the master can detect outages.

Figure 9: Slave Agent
Files for both Slave and Master
Both the master and slave use files not only to store state, but also as a communication mechanism. We’ve already seen the protectedlist file
(Figure 8) used by the master to store the list of protected virtual machines. We will now discuss the files that are created by both the master and
the slaves. Remote files are files stored on a shared datastore and local files are files that are stored in a location only directly accessible to that
host.

Remote Files
The set of powered on virtual machines is stored in a per-host “poweron” file. (See Figure 8 for an example of these files.) It should be noted that,
because a master also hosts virtual machines, it also creates a “poweron” file.

The naming scheme for this file is as follows:
host-<number>-poweron

Tracking virtual machine power-on state is not the only thing the “poweron” file is used for. This file is also used by the slaves to inform the master
that it is isolated: the top line of the file will either contain a 0 or a 1. A 0 means not-isolated and a 1 means isolated. The master will inform vCenter
about the isolation of the host.

Local Files
As mentioned before, when HA is configured on a host, the host will store specific information about its cluster locally.

Figure 10: Locally stored files

Each host, including the master, will store data locally. The data that is locally stored is important state information, namely the VM-to-host compatibility matrix, cluster configuration, and host membership list. This information is persisted locally on each host. Updates to this information are sent to the master by vCenter and propagated by the master to the slaves. Although we expect that most of you will never touch these files – and we highly recommend against modifying them – we do want to explain how they are used:
clusterconfig
This file is not human-readable. It contains the configuration details of the cluster.
compatlist
This file is not human-readable. It contains the actual compatibility info matrix for every HA protected virtual machine and lists all the hosts with
which it is compatible.
fdm.cfg
This file contains the configuration settings around logging. For instance, the level of logging and syslog details are stored in here.
hostlist
A list of hosts participating in the cluster, including hostname, IP addresses, MAC addresses and heartbeat datastores.
Heartbeating
We mentioned it a couple of times already in this chapter, and it is an important mechanism that deserves its own section: heartbeating.
Heartbeating is the mechanism used by HA to validate whether a host is alive. With the introduction of vSphere 5.0, not only did the heartbeating mechanism change slightly, but an additional heartbeating mechanism was also introduced. Let's discuss traditional network heartbeating first.

Network Heartbeating
vSphere 5.0 introduces some changes to the well-known heartbeat mechanism. As vSphere 5.0 doesn’t use the concept of primary and secondary
nodes, there is no reason for hundreds of heartbeat combinations. As of vSphere 5.0, each slave will send a heartbeat to its master and the master
sends a heartbeat to each of the slaves. These heartbeats are sent by default every second.

When a slave isn't receiving any heartbeats from the master, it will try to determine whether it is Isolated; we will discuss “states” in more detail later on in this chapter.

Basic design principle
Network heartbeating is key for determining the state of a host. Ensure the management network is highly resilient to enable proper
state determination.

Datastore Heartbeating
Those familiar with HA prior to vSphere 5.0 hopefully know that virtual machine restarts were always attempted, even if only the heartbeat network
was isolated and the virtual machines were still running on the host. As you can imagine, this added an unnecessary level of stress to the host. This
has been mitigated by the introduction of the datastore heartbeating mechanism. Datastore heartbeating adds a new level of resiliency and
prevents unnecessary restart attempts from occurring.

Datastore heartbeating enables a master to more correctly determine the state of a host that is not reachable via the management network. The
new datastore heartbeat mechanism is only used in case the master has lost network connectivity with the slaves. The datastore heartbeat
mechanism is then used to validate whether a host has failed or is merely isolated/network partitioned. Isolation will be validated through the
“poweron” file which, as mentioned earlier, will be updated by the host when it is isolated. Without the “poweron” file, there is no way for the master
to validate isolation. Let that be clear! Based on the results of checks of both files, the master will determine the appropriate action to take. If the
master determines that a host has failed (no datastore heartbeats), the master will restart the failed host’s virtual machines. If the master
determines that the slave is Isolated or Partitioned, it will only take action when it is appropriate to do so. That means the master will only initiate restarts when virtual machines are down or have been powered off / shut down by a triggered isolation response; we will discuss this in more detail in Chapter 4.

By default, HA picks 2 heartbeat datastores – it will select datastores that are available on all hosts, or as many as possible. Although it is possible
to configure an advanced setting (das.heartbeatDsPerHost) to allow for more datastores for datastore heartbeating we do not recommend
configuring this option as the default should be sufficient for every scenario.
The selection process gives preference to VMFS datastores over NFS ones, and seeks to choose datastores that are backed by different storage
arrays or NFS servers. If desired, you can also select the heartbeat datastores yourself. We, however, recommend letting vCenter deal with this
operational “burden” as vCenter uses a selection algorithm to select heartbeat datastores that are presented to all hosts. This however is not a
guarantee that vCenter can select datastores which are connected to all hosts.
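Should you ever have a justified reason to deviate from the default of two heartbeat datastores, the advanced option can be set by reconfiguring the cluster. The snippet below is a hedged pyVmomi (Python) sketch, assuming 'cluster' is a ClusterComputeResource object you have already retrieved; as stated above, we recommend leaving the default in place.

# Minimal pyVmomi sketch: set the das.heartbeatDsPerHost advanced option.
from pyVmomi import vim

def set_heartbeat_ds_per_host(cluster, count=3):
    option = vim.option.OptionValue(key="das.heartbeatDsPerHost", value=str(count))
    das = vim.cluster.DasConfigInfo(option=[option])
    spec = vim.cluster.ConfigSpecEx(dasConfig=das)
    return cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)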

Figure 11: Selecting the heartbeat datastores
The question now arises: what, exactly, is this datastore heartbeating and which datastore is used for this heartbeating? Let’s answer which
datastore is used for datastore heartbeating first as we can simply show that with a screenshot, Figure 12. vSphere 5.0 has brought some new
capabilities to the “Cluster Status” feature on the Cluster’s Summary tab. This now shows you which datastores are being used for heartbeating
and which hosts are using which specific datastore(s). In addition, it displays how many virtual machines are protected and how many hosts are
connected to the master.
Figure 12: Validating the heartbeat datastores

How does this heartbeating mechanism work? HA leverages an existing VMFS file system mechanism. The mechanism uses a so called
“heartbeat region” which is updated as long as the file exists. On VMFS datastores, HA will simply check whether the heartbeat region has been
updated. In order to update a datastore heartbeat region, a host needs to have at least one open file on the volume. HA ensures there is at least
one file open on this volume by creating a file specifically for datastore heartbeating. In other words, a per-host file is created on the designated
heartbeating datastores, as shown in Figure 13. The naming scheme for this file is as follows:
host-<number>-hb
Figure 13: Heartbeat file
On NFS datastores, each host will write to its heartbeat file once every 5 seconds, ensuring that the master will be able to check host state. The
master will simply validate this by checking the time-stamp of the file.

Realize that in the case of a converged network environment, the effectiveness of datastore heartbeating will vary depending on the type of failure.
For instance, a NIC failure could impact both network and datastore heartbeating. If, for whatever reason, the datastore or NFS share becomes
unavailable or is removed from the cluster, HA will detect this and select a new datastore or NFS share to use for the heartbeating mechanism.

Basic design principle
Datastore heartbeating adds a new level of resiliency but is not the be-all end-all. In converged networking environments, the use of
datastore heartbeating adds little value due to the fact that a NIC failure may result in both the network and storage becoming unavailable.
Isolated versus Partitioned
We’ve already briefly touched on it and it is time to have a closer look. As of vSphere 5.0 HA, a new cluster node state called Partitioned exists.
What is this exactly and when is a host Partitioned rather than Isolated? Before we explain this, we want to point out that there is a difference between the state as reported by the master and the state as observed by an administrator, and the characteristics each has.

First of all, a host is considered to be either Isolated or Partitioned when it loses network access to a master but has not failed. To help explain the
difference, we've listed both states and the associated criteria below:
Isolated
Is not receiving heartbeats from the master
Is not receiving any election traffic
Cannot ping the isolation address
Partitioned
Is not receiving heartbeats from the master
Is receiving election traffic
(at some point a new master will be elected at which the state will be reported to vCenter)

In the case of an Isolation, a host is separated from the master and the virtual machines running on it might be restarted, depending on the selected
isolation response and the availability of a master. It could occur that multiple hosts are fully isolated at the same time.

When multiple hosts are isolated but can still communicate amongst each other over the management networks, we call this a network partition.
When a network partition exists, a master election process will be issued so that a host failure or network isolation within this partition will result in
appropriate action on the impacted virtual machine(s). Figure 14 shows possible ways in which an Isolation or a Partition can occur.

Figure 14: Isolated versus Partitioned

If a cluster is partitioned in multiple segments, each partition will elect its own master, meaning that if you have 4 partitions your cluster will have 4
masters. When the network partition is corrected, any of the four masters will take over the role and be responsible for the cluster again. It should be
noted that a master could claim responsibility for a virtual machine that lives in a different partition. If this occurs and the virtual machine happens to
fail, the master will be notified through the datastore communication mechanism.
This still leaves open the question of how the master determines whether a host has Failed, is Partitioned or has become Isolated. This is where the new
datastore heartbeat mechanism comes in to play.

When the master stops receiving network heartbeats from a slave, it will check for host “liveness” for the next 15 seconds. Before the host is
declared failed, the master will validate if it has actually failed or not by doing additional liveness checks. First, the master will validate if the host is
still heartbeating to the datastore. Second, the master will ping the management IP-address of the host. If both are negative, the host will be
declared Failed. This doesn’t necessarily mean the host has PSOD’ed; it could be the network is unavailable, including the storage network, which
would make this host Isolated from an administrator’s perspective but Failed from an HA perspective. As you can imagine, however, there are various combinations possible. The following table depicts these combinations, including the “state”.

Table 2: Host states

HA will trigger an action based on the state of the host. When the host is marked as Failed, a restart of the virtual machines will be initiated. When
the host is marked as Isolated, the master might initiate the restarts. As mentioned earlier, this is a substantial change compared to HA prior to
vSphere 5.0 when restarts were always initiated, regardless of the state of the virtual machines or hosts. The one thing to keep in mind when it
comes to isolation response is that a virtual machine will only be shut down or powered off when the isolated host knows there is a master out there
that has taken ownership for the virtual machine or when the isolated host loses access to the home datastore of the virtual machine.

For example, if a host is isolated and runs two virtual machines, stored on separate datastores, the host will validate if it can access each of the
home datastores of those virtual machines. If it can, the host will validate whether a master owns these datastores. If no master owns the
datastores, the isolation response will not be triggered and restarts will not be initiated. If the host does not have access to the datastore, for
instance, during an “All Paths Down” condition, HA will trigger the isolation response to ensure the “original” virtual machine is powered down and
will be safely restarted. This is to avoid so-called “split-brain” scenarios.

To reiterate, as this is a major change compared to all previous versions of HA, the remaining hosts in the cluster will only be requested to restart
virtual machines when the master has detected that either the host has failed or has become isolated and the isolation response was triggered. If
the term isolation response is not clear yet, don’t worry as we will discuss it in more depth in Chapter 4.
Virtual Machine Protection
The way virtual machines are protected has changed substantially in vSphere 5.0. Prior to vSphere 5.0, virtual machine protection was handled by
vpxd which notified AAM through a vpxa module called vmap. With vSphere 5.0, virtual machine protection happens on several layers but is
ultimately the responsibility of vCenter. We have explained this briefly but want to expand on it a bit more to make sure everyone understands the
dependency on vCenter when it comes to protecting virtual machines. We do want to stress that this only applies to protecting virtual machines;
virtual machine restarts in no way require vCenter to be available at the time.

When the state of a virtual machine changes, vCenter will direct the master to enable or disable HA protection for that virtual machine. Protection,
however, is only guaranteed when the master has committed the change of state to disk. The reason for this, of course, is that a failure of the
master would result in the loss of any state changes that exist only in memory. As pointed out earlier, this state is distributed across the datastores
and stored in the “protectedlist” file.

When the power state change of a virtual machine has been committed to disk, the master will inform vCenter Server so that the change in status is
visible both for the user in vCenter and for other processes like monitoring tools.

To clarify the process, we have created a workflow diagram (Figure 15) of the protection of a virtual machine from the point it is powered on through
vCenter:

Figure 15: VM protection workflow

But what about “unprotection?” When a virtual machine is powered off, it must be removed from the protectedlist. We have documented this
workflow in Figure 16.

Figure 16: Unprotection workflow
Chapter 4
Restarting Virtual Machines
In the previous chapter, we have described most of the lower level fundamental concepts of HA. We have shown you that multiple new mechanisms
have been introduced to increase resiliency and reliability of HA. Reliability of HA in this case mostly refers to restarting virtual machines, as that
remains HA’s primary task.

HA will respond when the state of a host has changed, or, better said, when the state of one or more virtual machines has changed. There are
multiple scenarios in which HA will attempt to restart a virtual machine of which we have listed the most common below:

Failed host
Isolated host
Failed guest Operating System

Depending on the type of failure, but also depending on the role of the host, the process will differ slightly. Changing the process results in slightly
different recovery timelines. There are many different scenarios and there is no point in covering all of them, so we will try to describe the most
common scenario and include timelines where possible.

Before we dive into the different failure scenarios, we want to emphasize a couple of very substantial changes compared to vSphere pre-5.0 with
regards to restart priority and retries. These apply to every situation we will describe.
Restart Priority and Order
Prior to vSphere 5.0, HA would take the priority of the virtual machine into account when a restart of multiple virtual machines was required. This, by
itself, has not changed; HA will still take the configured priority of the virtual machine in to account. However, with vSphere 5.0, new types of virtual
machines have been introduced: Agent Virtual Machines. These virtual machines typically offer a “service” to other virtual machines and, as such,
take precedence during the restart procedure as the “regular” virtual machines may rely on them. A good example of an agent virtual machine is a
vShield Endpoint virtual machine which offers anti-virus services. These agent virtual machines are considered top priority virtual machines.

Prioritization is done by each host and not globally. Each host that has been requested to initiate restart attempts will attempt to restart all top
priority virtual machines before attempting to start any other virtual machines. If the restart of a top priority virtual machine fails, it will be retried after
a delay. In the meantime, however, HA will continue powering on the remaining virtual machines. Keep in mind that some virtual machines might be
dependent on the agent virtual machines. You should document which virtual machines are dependent on which agent virtual machines and document the process to start up these services in the right order in case the automatic restart of an agent virtual machine fails.

Basic design principle
Virtual machines can be dependent on the availability of agent virtual machines or other virtual machines. Although HA will do its best to
ensure all virtual machines are started in the correct order, this is not guaranteed. Document the proper recovery process.

Besides agent virtual machines, HA also prioritizes FT secondary machines. We have listed the full order in which virtual machines will be restarted
below:

Agent virtual machines
FT secondary virtual machines
Virtual Machines configured with a high restart priority
Virtual Machines configured with a medium restart priority
Virtual Machines configured with a low restart priority
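
To make the ordering concrete, below is a minimal Python sketch of the restart order listed above. The VM names and the priority labels in the dictionaries are hypothetical and purely illustrative; they are not part of any VMware API.

RESTART_ORDER = ["agent", "ft-secondary", "high", "medium", "low"]

def restart_rank(vm):
    # Lower rank means the per-host restart logic attempts this VM earlier.
    return RESTART_ORDER.index(vm["priority"])

vms_to_restart = [
    {"name": "app01", "priority": "medium"},
    {"name": "vshield-endpoint01", "priority": "agent"},
    {"name": "db01", "priority": "high"},
    {"name": "web01", "priority": "low"},
]

for vm in sorted(vms_to_restart, key=restart_rank):
    print(vm["name"])   # vshield-endpoint01, db01, app01, web01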

Now that we have briefly touched on it, we would also like to address “restart retries” and parallelization of restarts as that more or less dictates how
long it could take before all virtual machines of a failed or isolated host are restarted.
Restart Retries
The number of retries is configurable as of vCenter 2.5 U4 with the advanced option “das.maxvmrestartcount”. The default value is 5. Prior to
vCenter 2.5 U4, HA would keep retrying forever which could lead to serious problems. This scenario is described in KB article 1009625 where
multiple virtual machines would be registered on multiple hosts simultaneously, leading to a confusing and an inconsistent state.
(http://kb.vmware.com/kb/1009625)

Note:
Prior to vSphere 5.0 “das.maxvmrestartcount” did not include the initial restart. Meaning that the total amount of restarts was 6. As of vSphere
5.0 the initial restart is included in the value.

HA will try to start the virtual machine on one of your hosts in the affected cluster; if this is unsuccessful on that host, the restart count will be
increased by 1. Before we go into the exact timeline, let it be clear that T0 is the point at which the master initiates the first restart attempt. This by
itself could be 30 seconds after the virtual machine has failed. The elapsed time between the failure of the virtual machine and the restart, though,
will depend on the scenario of the failure, which we will discuss in this chapter.

As said, prior to vSphere 5, the actual number of restart attempts was 6, as it excluded the initial attempt. With vSphere 5.0 the default is 5. There
are specific times associated with each of these attempts. The following bullet list will clarify this concept. The ‘m’ stands for “minutes” in this list.
T0 – Initial Restart
T2m – Restart retry 1
T6m – Restart retry 2
T14m – Restart retry 3
T30m – Restart retry 4

Figure 17: High Availability restart timeline
As shown above and clearly depicted in Figure 17, a successful power-on attempt could take up to ~30 minutes in the case where multiple power-on
attempts are unsuccessful. This is, however, not an exact science. For instance, there is a 2-minute waiting period between the initial restart and
the first restart retry. HA will start the 2-minute wait as soon as it has detected that the initial attempt has failed. So, in reality, T2 could be T2 minutes and 8
seconds. Another important fact that we want to emphasize is that if a different master claims responsibility for the virtual machine during this
sequence, the sequence will restart.
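
As a quick illustration of this schedule, the following Python sketch reproduces the default retry timeline (5 attempts in total, with the delay doubling between retries). This is our own toy model of the timing described above, not HA code; in reality each retry also shifts by however long the previous attempt takes to be reported as failed.

def retry_schedule(max_attempts=5, first_delay_minutes=2):
    times = [0]                      # T0: initial restart attempt
    delay = first_delay_minutes
    while len(times) < max_attempts:
        times.append(times[-1] + delay)
        delay *= 2                   # 2, 4, 8, 16 minutes between attempts
    return times

print(retry_schedule())              # [0, 2, 6, 14, 30] -> T0, T2m, T6m, T14m, T30m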

Let’s give an example to clarify the scenario in which a master fails during a restart sequence:

Cluster: 4 Host (esxi01, esxi02, esxi03, esxi04)
Master: esxi01

The host “esxi02” is running a single virtual machine called “vm01” and it fails. The master, esxi01, will try to restart it but the attempt
fails. It will try restarting “vm01” up to 5 times but, unfortunately, on the 4th try, the master also fails. An election occurs and “esxi03” becomes
the new master. It will now initiate the restart of “vm01”, and if that restart fails, it will retry up to 4 more times, for a total of 5 attempts including the
initial restart.

Be aware, though, that a successful restart might never occur if the restart count is reached and all five restart attempts (the default value) were
unsuccessful.

When it comes to restarts, one thing that is very important to realize is that HA will not issue more than 32 concurrent power-on tasks on a given
host. To make that more clear, let’s use the example of a two host cluster: if a host fails which contained 33 virtual machines and all of these had the
same restart priority, 32 power-on attempts would be initiated. The 33rd power-on attempt will only be initiated when one of those 32 attempts has
completed, regardless of the success or failure of that attempt.

Now, here comes the gotcha. If there are 32 low-priority virtual machines to be powered on and a single high-priority virtual machine, the power on
attempt for the low-priority virtual machines will not be issued until the power on attempt for the high priority virtual machine has completed. Let it be
absolutely clear that HA does not wait to restart the low-priority virtual machines until the high-priority virtual machines are started, it waits for the
issued power on attempt to be reported as “completed”. In theory, this means that if the power on attempt fails, the low-priority virtual machines
could be powered on before the high priority virtual machine.
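
The sketch below models this per-host cap: at most 32 power-on tasks are in flight at once, and the next task is only issued when an earlier one completes, successfully or not. It is a toy simulation of the behavior described above, not actual HA code.

from collections import deque

MAX_CONCURRENT_POWER_ONS = 32

def issue_power_ons(vm_names):
    pending = deque(vm_names)                 # already ordered by restart priority
    in_flight = []
    issued_order = []
    while pending or in_flight:
        while pending and len(in_flight) < MAX_CONCURRENT_POWER_ONS:
            vm = pending.popleft()
            in_flight.append(vm)              # power-on task issued
            issued_order.append(vm)
        in_flight.pop(0)                      # a task completes (success or failure)
    return issued_order

order = issue_power_ons([f"vm{i:02d}" for i in range(1, 34)])
print(order[-1])                              # vm33 is only issued once a slot frees up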

Basic design principle
Configuring restart priority of a virtual machine is not a guarantee that virtual machines will actually be restarted in this order. Ensure
proper operational procedures are in place for restarting services or virtual machines in the appropriate order in the event of a failure.

Now that we know how virtual machine restart priority and restart retries are handled, it is time to look at the different scenarios.
Failed host
Failure of a master
Failure of a slave
Isolated host and response
Failed Host
Prior to vSphere 5.0, the restart of virtual machines from a failed host was straightforward. With the introduction of master/slave hosts and heartbeat
datastores in vSphere 5.0, the restart procedure has also changed, and with it the associated timelines. There is a clear distinction between the
failure of a master versus the failure of a slave. We want to emphasize this because the time it takes before a restart attempt is initiated differs
between these two scenarios. Let’s start with the most common failure, that of a host failing, but note that failures generally occur infrequently. In
most environments, hardware failures are very uncommon to begin with. Just in case it happens, it doesn’t hurt to understand the process and its
associated timelines.

The Failure of a Slave
This is a fairly complex scenario compared to how HA handled host failures prior to vSphere 5.0. Part of this complexity comes from the
introduction of a new heartbeat mechanism. Actually, there are two different scenarios: one where heartbeat datastores are configured and one
where heartbeat datastores are not configured. Keeping in mind that this is an actual failure of the host, the timeline is as follows:
T0 – Slave failure
T3s – Master begins monitoring datastore heartbeats for 15 seconds
T10s – The host is declared unreachable and the master will ping the management network of the failed host. This is a continuous ping for 5
seconds
T15s – If no heartbeat datastores are configured, the host will be declared dead
T18s – If heartbeat datastores are configured, the host will be declared dead

The master monitors the network heartbeats of a slave. When the slave fails, these heartbeats will no longer be received by the master. We have
defined this as T0. After 3 seconds (T3s), the master will start monitoring for datastore heartbeats and it will do this for 15 seconds. At the 10th
second (T10s), when no network or datastore heartbeats have been detected, the host will be declared “unreachable”. The master will also start
pinging the management network of the failed host at the 10th second and it will do so for 5 seconds. If no heartbeat datastores were configured,
the host will be declared “dead” at the 15th second (T15s) and VM restarts will be initiated by the master. If heartbeat datastores have been
configured, the host will be declared dead at the 18th second (T18s) and restarts will be initiated. We realize that this can be confusing and hope
the timeline depicted in Figure 18 makes it easier to digest.

Figure 18: Restart timeline slave failure
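
The same timeline can be expressed as a small Python sketch, purely as a reading aid; the offsets come straight from the description above.

def slave_failure_timeline(heartbeat_datastores_configured):
    events = [
        (0,  "slave failure: network heartbeats to the master stop"),
        (3,  "master starts a 15 second window of datastore heartbeat monitoring"),
        (10, "host declared unreachable; master pings its management network for 5 seconds"),
    ]
    declared_dead_at = 18 if heartbeat_datastores_configured else 15
    events.append((declared_dead_at, "host declared dead; restarts initiated"))
    return events

for offset, event in slave_failure_timeline(heartbeat_datastores_configured=True):
    print(f"T{offset}s: {event}")
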
That leaves us with the question of what happens in the case of the failure of a master.

The Failure of a Master
In the case of a master failure, the process and the associated timeline are slightly different. The reason being that there needs to be a master
before any restart can be initiated. This means that an election will need to take place amongst the slaves. The timeline is as follows:
T0 – Master failure.
T10s – Master election process initiated.
T25s – New master elected and reads the protectedlist.
T35s – New master initiates restarts for all virtual machines on the protectedlist which are not running.

Slaves receive network heartbeats from their master. If the master fails (let’s define this as T0), the slaves detect this when the network heartbeats
cease to be received. As every cluster needs a master, the slaves will initiate an election at T10s. The election process takes 15s to complete,
which brings us to T25s. At T25s, the new master reads the protectedlist. This list contains all the virtual machines which are protected by HA. At
T35s, the master initiates the restart of all virtual machines that are protected but not currently running. The timeline depicted in Figure 19 hopefully
clarifies the process.
Figure 19: Restart timeline master failure
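
For completeness, here is the equivalent sketch for a master failure; the extra delay compared to a slave failure is the election that has to complete before a new master can read the protectedlist and act.

MASTER_FAILURE_TIMELINE = [
    (0,  "master failure: network heartbeats to the slaves stop"),
    (10, "slaves initiate a master election"),
    (25, "new master elected; it reads the protectedlist"),
    (35, "new master initiates restarts of protected virtual machines that are not running"),
]

for offset, event in MASTER_FAILURE_TIMELINE:
    print(f"T{offset}s: {event}")
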
Besides the failure of a host, there is another reason for restarting virtual machines: an isolation event.
Isolation Response and Detection
Before we will discuss the timeline and the process around the restart of virtual machines after an isolation event, we will discuss Isolation
Response and Isolation Detection. One of the first decisions that will need to be made when configuring HA is the “Isolation Response”.

Isolation Response
The Isolation Response refers to the action that HA takes for its virtual machines when the host has lost its connection with the network and the
remaining nodes in the cluster. This does not necessarily mean that the whole network is down; it could just be the management network ports of
this specific host. Today there are three isolation responses: “Power off”, “Leave powered on” and “Shut down”. This isolation response answers
the question, “what should a host do with the virtual machines it manages when it detects that it is isolated from the network?” Let’s discuss these
three options more in-depth:
Power off – When isolation occurs, all virtual machines are powered off. It is a hard stop, or to put it bluntly, the “virtual” power cable of the
virtual machine will be pulled out
Shut down – When isolation occurs, all virtual machines running on the host will be shut down using a guest-initiated shutdown through
VMware Tools. If this is not successful within 5 minutes, a “power off” will be executed. This time out value can be adjusted by setting the
advanced option das.isolationShutdownTimeout. If VMware Tools is not installed, a “power off” will be initiated immediately
Leave powered on – When isolation occurs on the host, the state of the virtual machines remains unchanged

This setting can be changed on the cluster settings under virtual machine options (Figure 20).

Figure 20: Cluster default settings
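
To summarize the three options, the sketch below expresses them as a small decision helper. The response names and the default timeout match the description above; the function itself, its return values and the parameter layout are ours for illustration only.

def isolation_actions(response, tools_installed, shutdown_timeout_s=300):
    if response == "Leave powered on":
        return []                                     # virtual machine state unchanged
    if response == "Power off":
        return ["hard power off"]
    if response == "Shut down":
        if not tools_installed:
            return ["hard power off"]                 # no VMware Tools installed
        return [f"guest shutdown (up to {shutdown_timeout_s}s, das.isolationShutdownTimeout)",
                "hard power off if the guest has not stopped in time"]
    raise ValueError(f"unknown isolation response: {response}")

print(isolation_actions("Shut down", tools_installed=True))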

The default setting for the isolation response has changed multiple times over the last couple of years and this has caused some confusion.
Up to ESXi3.5 U2 / vCenter 2.5 U2 the default isolation response was “Power off”
With ESXi3.5 U3 / vCenter 2.5 U3 this was changed to “Leave powered on”
With vSphere 4.0 it was changed to “Shut down”
With vSphere 5.0 it has been changed to “Leave powered on”

Keep in mind that these changes are only applicable to newly created clusters. When creating a new cluster, it may be required to change the
default isolation response based on the configuration of existing clusters and/or your customer’s requirements, constraints and expectations. When
upgrading an existing cluster, it might be wise to apply the latest default values. You might wonder why the default has changed once again. There
was a lot of feedback from customers that “Leave powered on” was the desired default value.

Basic design principle
Before upgrading an environment to later versions, ensure you validate the best practices and default settings. Document them,
including justification, to ensure all people involved understand your reasons.

The question remains, which setting should be used? The obvious answer applies here; it depends. We prefer “Leave powered on” because it
eliminates the chances of having a false positive and its associated down time. One of the problems that people have experienced in the past is
that HA triggered its isolation response when the full management network went down, basically resulting in the power off (or shutdown) of every
single virtual machine with none being restarted. With vSphere 5.0, this problem has been mitigated. HA will validate whether virtual machine restarts can
be attempted – there is no reason to incur any down time unless absolutely necessary. It does this by validating that a master owns the datastore
the virtual machine is stored on. Of course, the isolated host can only validate this if it has access to the datastores. In a converged network
environment with iSCSI storage, for instance, it would be impossible to validate this during a full isolation as the validation would fail due to the
inaccessible datastore from the perspective of the isolated host.

We feel that changing the isolation response is most useful in environments where a failure of the management network is likely correlated with a
failure of the virtual machine network(s). If the failure of the management network won’t likely correspond with the failure of the virtual machine
networks, isolation response would cause unnecessary downtime as the virtual machines can continue to run without management network
connectivity to the host.

The question that we haven’t answered yet is how HA knows which virtual machines have been powered off due to the triggered isolation response
and why the isolation response is more reliable than with previous versions of HA. Previously, HA did not care and would always try to restart the
virtual machines according to the last known state of the host. That is no longer the case with vSphere 5.0. Before the isolation response is
triggered, the isolated host will verify whether a master is responsible for the virtual machine. As mentioned earlier, it does this by validating if a
master owns the home datastore of the virtual machine. When isolation response is triggered, the isolated host removes the virtual machines which
are powered off or shutdown from the “poweron” file. The master will recognize that the virtual machines have disappeared and initiate a restart. On
top of that, when the isolation response is triggered, it will create a per-virtual machine file under a “poweredoff” directory which indicates for the
master that this virtual machine was powered down as a result of a triggered isolation response. This information will be read by the master node
when it initiates the restart attempt in order to guarantee that only virtual machines that were powered off / shut down by HA will be restarted by HA.

This is, however, only one part of the increased reliability of HA. Reliability has also been improved with respect to “isolation detection,” which will
be described in the following section.

Isolation Detection
We have explained what the options are to respond to an isolation event and what happens when the selected response is triggered. However, we
have not extensively discussed how isolation is detected. The mechanism is fairly straightforward and works with heartbeats, as earlier explained.
There are, however, two scenarios again, and the process and associated timelines differ for each of them:
Isolation of a slave
Isolation of a master

Before we explain the differences in process between both scenarios, we want to make sure it is clear that a change in state will result in the
isolation response not being triggered in either scenario. Meaning that if a single ping is successful or the host observes election traffic and is
elected a master, the isolation response will not be triggered, which is exactly what you want as avoiding down time is at least as important as
recovering from down time.

Isolation of a Slave
The isolation detection mechanism has changed substantially since previous versions of vSphere. The main difference is the fact that HA triggers a
master election process before it declares a host isolated. In this timeline, “s” refers to seconds.
T0 – Isolation of the host (slave)
T10s – Slave enters “election state”
T25s – Slave elects itself as master
T25s – Slave pings “isolation addresses”
T30s – Slave declares itself isolated and “triggers” isolation response
After the completion of this sequence, the master will learn the slave was isolated through the “poweron” file as mentioned earlier, and will restart
virtual machines based on the information provided by the slave.

Isolation of a Master
In the case of the isolation of a master, this timeline is a bit less complicated because there is no need to go through an election process. In this
timeline, “s” refers to seconds.
T0 – Isolation of the host (master)
T0 – Master pings “isolation addresses”
T5s – Master declares itself isolated and “triggers” isolation response

Additional Checks
Before a host triggers the isolation response, it will ping the default isolation address which is the gateway specified for the management network.
HA gives you the option to define one or multiple additional isolation addresses using an advanced setting. This advanced setting is called
das.isolationaddress and could be used to reduce the chances of having a false positive. We recommend setting an additional isolation address
when a secondary management network is configured. If required, you can configure up to 10 additional isolation addresses. A secondary
management network will more than likely be on a different subnet and it is recommended to specify an additional isolation address which is part of
that subnet (Figure 21).

Figure 21: Isolation Address
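
A heavily simplified sketch of this check is shown below: the host pings the default gateway of the management network plus any addresses configured through das.isolationaddress, and a single successful reply is enough to conclude it is not isolated. The ping() helper and the IP addresses are placeholders; this is not how the agent is implemented internally.

def ping(address):
    # Placeholder: send an ICMP echo to the address with the tooling of your choice.
    return False

def host_is_isolated(default_gateway, additional_isolation_addresses=()):
    candidates = [default_gateway] + list(additional_isolation_addresses)
    # One successful reply is enough: the isolation response is not triggered.
    return not any(ping(address) for address in candidates)

# Example: management gateway plus the gateway of a secondary management
# network (hypothetical addresses).
print(host_is_isolated("192.168.1.1", ["10.0.0.1"]))
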
Selecting an Additional Isolation Address
A question asked by many people is which address should be specified for this additional isolation verification. We generally recommend an
isolation address close to the hosts to avoid too many network hops and an address that would correlate with the liveness of the virtual machine
network. In many cases, the most logical choice is the physical switch to which the host is directly connected. Basically, use the gateway for
whatever subnet your management network is on. Another usual suspect would be a router or any other reliable and pingable device on the same
subnet. However, when you are using IP-based shared storage like NFS or iSCSI, another good choice would be the IP-address of the storage
device.

Basic design principle
Select a reliable secondary isolation address. Try to minimize the number of “hops” between the host and this address.
Failure Detection Time
Those who are familiar with vSphere 4.x or VI 3.x will probably wonder by now what happened to the concept of “Failure Detection Time”. Prior to
vSphere 5.0, “das.failuredetectiontime” was probably the most used advanced setting within vSphere. As of vSphere 5.0, it is no longer possible to
configure this advanced setting. Let there be no misunderstanding here: it has been completely removed and it is not possible anymore to influence
the timelines for Failure Detection and Isolation Response. This advanced setting is no longer supported because of the additional resiliency
provided by both datastore heartbeating and the additional isolation checks.
Restarting Virtual Machines
The most important procedure has not yet been explained: restarting virtual machines. We have dedicated a full section to this concept as, again,
substantial changes have been introduced in vSphere 5.0.

We have explained the difference in behavior from a timing perspective for restarting virtual machines in the case of both master node and slave
node failures. For now, let’s assume that a slave node has failed. When the master node declares the slave node as Partitioned or Isolated, it
determines which virtual machines were running on the host at the time of isolation by reading the “poweron” file. If the host was not Partitioned or
Isolated before the failure, the master uses cached data of the virtual machines that were last running on the host before the failure occurred.

Before the master will proceed with initiating restarts, it will wait for roughly 10 seconds. It does this in order to aggregate virtual machines from
possibly other failed hosts. Before it will initiate the restart attempts, though, the master will first validate that it owns the home datastores of the
virtual machines it needs to restart. If, by any chance, the master node does not have a lock on a datastore, it will filter out those particular virtual
machines. At this point, all virtual machines having a restart priority of “disabled” are filtered out.

Now that HA knows which virtual machines it should restart, it is time to decide where the virtual machines are placed. HA will take multiple things in
to account:
CPU and memory reservation, including the memory overhead of the virtual machine
Unreserved capacity of the hosts in the cluster
Restart priority of the virtual machine relative to the other virtual machines that need to be restarted
Virtual-machine-to-host compatibility set
The number of dvPorts required by a virtual machine and the number available on the candidate hosts
The maximum number of vCPUs and virtual machines that can be run on a given host
Restart latency

Restart latency refers to the amount of time it takes to initiate virtual machine restarts. This means that virtual machine restarts will be distributed by
the master across multiple hosts to avoid a boot storm, and thus a delay, on a single host.
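
As a rough illustration of this filtering, consider the toy placement function below. The attribute names and the simple “most unreserved memory first” tie-breaker are our own inventions for the sake of the example; they are not HA’s actual placement algorithm.

def place(vm, hosts):
    candidates = [
        host for host in hosts
        if vm["name"] in host["compatible_vms"]               # VM-to-host compatibility set
        and host["unreserved_mem_mb"] >= vm["mem_reservation_mb"] + vm["mem_overhead_mb"]
        and host["unreserved_cpu_mhz"] >= vm["cpu_reservation_mhz"]
        and host["free_dvports"] >= vm["dvports_needed"]      # dvPort availability
        and host["running_vms"] < host["max_vms"]             # per-host maximums
    ]
    if not candidates:
        return None                                           # goes to the pending placement list
    # Prefer the host with the most headroom to spread restarts and avoid a boot storm.
    return max(candidates, key=lambda host: host["unreserved_mem_mb"])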

If a placement is found, the master will send each target host the set of virtual machines it needs to restart. If this list exceeds 32 virtual machines,
HA will limit the number of concurrent power on attempts to 32. If a virtual machine successfully powers on, the node on which the virtual machine
was powered on will inform the master of the change in power state. The master will then remove the virtual machine from the restart list.

If a placement cannot be found, the master will place the virtual machine on a “pending placement list” and will retry placement of the virtual machine
when one of the following conditions changes:
A new virtual-machine-to-host compatibility list is provided by vCenter
A host reports that its unreserved capacity has increased
A host (re)joins the cluster (For instance, when a host is taken out of maintenance mode, a host is added to a cluster, etc.)
A new failure is detected and virtual machines have to be failed over
A failure occurred when failing over a virtual machine

But what about DRS? Wouldn’t DRS be able to help during the placement of virtual machines when all else fails? It does. The master node will
report to vCenter the set of virtual machines that still need placement, as is the case today. If DRS is enabled, this information will be used in an
attempt to have DRS make capacity available. This is described more in-depth in Chapter 27.
Corner Case Scenario: Split-Brain
In the past (pre-vSphere 4.1), split-brain scenarios could occur. A split-brain in this case means that a virtual machine would be powered up
simultaneously on two different hosts. That would be possible in the scenario where the isolation response was set to “leave powered on” and
network based storage, like NFS or iSCSI, was used. This situation could occur during a full network isolation, which may result in the lock on the
virtual machine’s VMDK being lost, enabling HA to actually power up the virtual machine. As the virtual machine was not powered off on its original
host (isolation response set to “leave powered on”), it would exist in memory on the isolated host and in memory with a disk lock on the host that
was requested to restart the virtual machine.

vSphere 4.1 and vSphere 5.0 brought multiple enhancements to avoid scenarios like these. Keep in mind that they truly are corner case scenarios
which are very unlikely to occur in most environments. In case it does happen, HA relies on the “lost lock detection” mechanism to mitigate this
scenario. In short, as of version 4.0 Update 2, ESXi detects that the lock on the VMDK has been lost and issues a question whether the virtual
machine should be powered off; HA automatically answers the question with Yes. However, you will only see this question if you directly connect to
the ESXi host during the failure. HA will generate an event for this auto-answered question though, which is viewable within vCenter. Below you can
find a screenshot of this question.

Figure 22: Virtual machine message

As stated above, as of ESXi 4 update 2, the question will be auto-answered and the virtual machine will be powered off to recover from the split
brain scenario.

The question still remains: in the case of an isolation with iSCSI or NFS, should you power off virtual machines or leave them powered on?

As just explained, HA will automatically power off your original virtual machine when it detects a split-brain scenario. As such, it is perfectly safe to
use the default isolation response of “Leave VM Powered On” and this is also what we recommend. We do, however, recommend increasing
heartbeat network resiliency to avoid getting in to this situation. We will discuss the options you have for enhancing Management Network resiliency
in the next chapter.
Chapter 5
Adding Resiliency to HA
In the previous chapter we extensively covered both Isolation Detection which triggers the selected Isolation Response and the impact of a false
positive. The Isolation Response enables HA to restart virtual machines when “Power off” or “Shut down” has been selected and the host becomes
isolated from the network. However, this also means that it is possible that, without proper redundancy, the Isolation Response may be
unnecessarily triggered. This leads to downtime and should be prevented.

To increase resiliency for networking, VMware implemented the concept of NIC teaming in the hypervisor for both VMkernel and VM networking.
When discussing HA, this is especially important for the Management Network.
“NIC teaming is the process of grouping together
several physical NICs into one single logical NIC,
which can be used for network fault tolerance
and load balancing.”

Using this mechanism, it is possible to add redundancy to the Management Network to decrease the chances of an isolation event. This is, of
course, also possible for other “Portgroups” but that is not the topic of this chapter or book. Another option is configuring an additional Management
Network by enabling the “management network” tick box on another VMkernel port. A little understood fact is that if there are multiple VMkernel
networks on the same subnet, HA will use all of them for management traffic, even if only one is specified for management traffic!
Although there are many configurations possible and supported, we recommend a simple but highly resilient configuration. We have included the
vMotion (VMkernel) network in our example as combining the Management Network and the vMotion network on a single vSwitch is the most
commonly used configuration and an industry accepted best practice.

Requirements:
2 physical NICs
VLAN trunking

Recommended:
2 physical switches
If available, enable “link state tracking” to ensure link failures are reported

The vSwitch should be configured as follows:
vSwitch0: 2 Physical NICs (vmnic0 and vmnic1)
2 Portgroups (Management Network and vMotion VMkernel)
Management Network active on vmnic0 and standby on vmnic1
vMotion VMkernel active on vmnic1 and standby on vmnic0
Failback set to No

Each portgroup has a VLAN ID assigned and runs dedicated on its own physical NIC; only in the case of a failure is it switched over to the standby
NIC. We highly recommend setting failback to “No” (on the NIC Teaming tab) to avoid the chance of an unwanted isolation event, which can occur when a physical switch routes
no traffic during boot but the ports are reported as “up”.
Pros: Only 2 NICs in total are needed for the Management Network and vMotion VMkernel, especially useful in blade server environments. Easy to
configure.

Cons: Just a single active path for heartbeats.
The following diagram depicts this active/standby scenario:

Figure 23: Active-Standby Management Network design
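
Purely as documentation, the recommended layout can be captured as plain data, for example in Python; the vmnic names come from the description above, while the VLAN IDs are arbitrary examples.

vswitch0 = {
    "uplinks": ["vmnic0", "vmnic1"],
    "portgroups": {
        "Management Network": {
            "vlan": 10,                    # example VLAN ID
            "active": ["vmnic0"],
            "standby": ["vmnic1"],
            "failback": False,
        },
        "vMotion": {
            "vlan": 20,                    # example VLAN ID
            "active": ["vmnic1"],
            "standby": ["vmnic0"],
            "failback": False,
        },
    },
}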

To increase resiliency, we also recommend implementing the following advanced settings and using NIC ports on different PCI busses – preferably
NICs of a different make and model. When using a different make and model, even a driver failure could be mitigated.

Advanced Settings:
das.isolationaddressX = <ip-address>

The isolation address setting is discussed in more detail in Chapter 4. In short; it is the IP address that the HA agent pings to identify if the host is
completely isolated from the network or just not receiving any heartbeats. If multiple VMkernel networks on different subnets are used, it is
recommended to set an isolation address per network to ensure that each of these will be able to validate isolation of the host.

Prior to vSphere 5.0, it was also recommended to change “das.failuredetectiontime”. This advanced setting has been deprecated, as discussed in
Chapter 4.

Basic design principle
Take advantage of some of the basic features vSphere has to offer like NIC teaming. Combining different physical NICs will increase
overall resiliency of your solution.
Link State Tracking
This was already briefly mentioned in the list of recommendations, but this feature is something we would like to emphasize. We have noticed that
people often forget about this even though many switches offer this capability, especially in blade server environments.

Link state tracking will mirror the state of an upstream link to a downstream link. Let’s clarify that with a diagram.

Figure 24: Link State tracking mechanism

Figure 24 depicts a scenario where an uplink of a “Core Switch” has failed. Without Link State Tracking, the connection from the “Edge Switch” to
vmnic0 will be reported as up. With Link State Tracking enabled, the state of the link on the “Edge Switch” will reflect the state of the link of the
“Core Switch” and as such be marked as “down”. You might wonder why this is important but think about it for a second. Many features that
vSphere offers rely on networking and so do your virtual machines. In the case where the state is not reflected, some functionality might just fail, for
instance network heartbeating could fail if it needs to flow through the core switch. We call this a ‘black hole’ scenario: the host sends traffic down a
path that it believes is up, but the traffic never reaches its destination due to the failed upstream link.

Basic design principle
Know your network environment, talk to the network administrators and ensure advanced features like Link State Tracking are used
when possible to increase resiliency.
Chapter 6
Admission Control
Admission Control is more than likely the most misunderstood concept vSphere holds today and because of this it is often disabled. However,
Admission Control is a must when availability needs to be guaranteed and isn’t that the reason for enabling HA in the first place?

What is HA Admission Control about? Why does HA contain this concept called Admission Control? The “Availability Guide” a.k.a HA bible states
the following:
“vCenter Server uses admission control to ensure
that sufficient resources are available in a cluster
to provide failover protection and to ensure that
virtual machine resource reservations are respected.”

Please read that quote again and especially the first two words. Indeed it is vCenter Server that is responsible for Admission Control, contrary to
what many believe. Although this might seem like a trivial fact it is important to understand that this implies that Admission Control will not disallow
HA initiated restarts. HA initiated restarts are done on a host level and not through vCenter.

As said, Admission Control guarantees that capacity is available for an HA initiated failover by reserving resources within a cluster. It calculates the
capacity required for a failover based on available resources. In other words, if a host is placed into maintenance mode or disconnected, it is taken
out of the equation. This also implies that if a host has failed or is not responding but has not been removed from the cluster, it is still included in the
equation. “Available Resources” indicates that the virtualization overhead has already been subtracted from the total amount.

To give an example: VMkernel memory is subtracted from the total amount of memory to obtain the amount of memory available for virtual machines.

There is one gotcha with Admission Control that we want to bring to your attention before drilling into the different policies. When Admission Control
is enabled, HA will in no way violate availability constraints. This means that it will always ensure multiple hosts are up and running and this applies
for manual maintenance mode actions and, for instance, to VMware Distributed Power Management. So, if a host is stuck trying to enter
Maintenance Mode, remember that it might be HA which is not allowing Maintenance Mode to proceed as it would violate the Admission Control
Policy. In this situation, users can manually vMotion virtual machines off the host or temporarily disable admission control to allow the operation to
proceed.

With vSphere 4.1 and prior, disabling Admission Control while DPM was enabled could lead to a serious impact on availability. When
Admission Control was disabled, DPM could place all hosts except for 1 in standby mode to reduce total power consumption. This could lead to
issues in the event that this single host would fail. As of vSphere 5.0, this behavior has changed: when DPM is enabled, HA will ensure that there
are always at least two hosts powered up for failover purposes.

As of vSphere 4.1, DPM is also smart enough to take hosts out of standby mode to ensure enough resources are available to provide for HA
initiated failovers. If by any chance the resources are not available, HA will wait for these resources to be made available by DPM and then attempt
the restart of the virtual machines. In other words, the retry count (5 retries by default) is not wasted in scenarios like these.

If you are still using an older version of vSphere or, god forbid, VI3, please understand that you could end up with all but one ESXi host placed in
standby mode, which could lead to potential issues when that particular host fails or resources are scarce as there will be no host available to
power on your virtual machines. This situation is described in the following knowledge base article: http://kb.vmware.com/kb/1007006.
Admission Control Policy
The Admission Control Policy dictates the mechanism that HA uses to guarantee enough resources are available for an HA initiated failover. This
section gives a general overview of the available Admission Control Policies. The impact of each policy is described in the following section,
including our recommendation. HA has three mechanisms to guarantee enough capacity is available to respect virtual machine resource
reservations.

Figure 25: Admission control policy

Below we have listed all three options currently available as the Admission Control Policy. Each option has a different mechanism to ensure
resources are available for a failover and each option has its caveats.
Admission Control Mechanisms
Each Admission Control Policy has its own Admission Control mechanism. Understanding each of these Admission Control mechanisms is
important to appreciate the impact each one has on your cluster design. For instance, setting a reservation on a specific virtual machine can have
an impact on the achieved consolidation ratio. This section will take you on a journey through the trenches of Admission Control Policies and their
respective mechanisms and algorithms.

Host Failures Cluster Tolerates
The Admission Control Policy that has been around the longest is the “Host Failures Cluster Tolerates” policy. It is also historically the least
understood Admission Control Policy due to its complex admission control mechanism.

Although the “Host Failures Cluster Tolerates” Admission Control Policy mechanism itself hasn’t changed, a limitation has been removed. Pre-vSphere
5.0, the maximum host failures that could be tolerated was 4, due to the primary/secondary node mechanism. In vSphere 5.0, this mechanism has
been replaced with a master/slave node mechanism and it is possible to plan for N-1 host failures. In the case of a 32 host cluster, you could
potentially set “Host failures the cluster tolerates” to 31.

Figure 26: A new maximum for Host Failures

The so-called “slots” mechanism is used when the “Host failures cluster tolerates” has been selected as the Admission Control Policy. The details
of this mechanism have changed several times in the past and it is one of the most restrictive policies; more than likely, it is also the least
understood.

Slots dictate how many virtual machines can be powered on before vCenter starts yelling “Out Of Resources!” Normally, a slot represents one
virtual machine. Admission Control does not limit HA in restarting virtual machines, it ensures enough unfragmented resources are available to
power on all virtual machines in the cluster by preventing “over-commitment”. Technically speaking “over-commitment” is not the correct terminology
as Admission Control ensures virtual machine reservations can be satisfied and that all virtual machines’ initial memory overhead requirements are
met. Although we have already touched on this, it doesn’t hurt to repeat it as it is one of those myths that keeps coming back: HA initiated failovers
are not subject to the Admission Control Policy. Admission Control is done by vCenter Server. HA initiated restarts, in a normal scenario, are
executed directly on the ESXi host without the use of vCenter. The corner-case is where HA requests DRS (DRS is a vCenter task!) to defragment
resources but that is beside the point. Even if resources are low and vCenter would complain, it couldn’t stop the restart from happening.

Let’s dig in to this concept we have just introduced, slots.
“A slot is defined as a logical
representation of the memory and CPU resources
that satisfy the requirements for any
powered-on virtual machine in the cluster.”
In other words a slot is the worst case CPU and memory reservation scenario in a cluster. This directly leads to the first “gotcha.”

HA uses the highest CPU reservation of any given virtual machine and the highest memory reservation of any given VM in the cluster. If no
reservation of higher than 32 MHz is set, HA will use a default of 32 MHz for CPU. Note that this behavior has changed: pre-vSphere 5.0 the default
value was 256 MHz. This has changed as some felt that 256 MHz was too aggressive. If no memory reservation is set, HA will use a default of
0MB+memory overhead for memory. (See the VMware vSphere Resource Management Guide for more details on memory overhead per virtual
machine configuration.) The following example will clarify what “worst-case” actually means.

Example: If virtual machine “VM1” has 2GHz of CPU reserved and 1024MB of memory reserved and virtual machine “VM2” has 1GHz of CPU
reserved and 2048MB of memory reserved the slot size for memory will be 2048MB (+ its memory overhead) and the slot size for CPU will be
2GHz. It is a combination of the highest reservation of both virtual machines that leads to the total slot size. Reservations defined at the Resource
Pool level however, will not affect HA slot size calculations.

Basic design principle
Be really careful with reservations, if there’s no need to have them on a per virtual machine basis; don’t configure them, especially when
using host failures cluster tolerates. If reservations are needed, resort to resource pool based reservations.

Now that we know the worst-case scenario is always taken into account when it comes to slot size calculations, we will describe what dictates the
amount of available slots per cluster as that ultimately dictates how many virtual machines can be powered on in your cluster.

First, we will need to know the slot size for memory and CPU, next we will divide the total available CPU resources of a host by the CPU slot size
and the total available memory resources of a host by the memory slot size. This leaves us with a total number of slots for both memory and CPU
for a host. The most restrictive number (worst-case scenario) is the number of slots for this host. In other words, when you have 25 CPU slots but
only 5 memory slots, the amount of available slots for this host will be 5 as HA always takes the worst case scenario into account to “guarantee” all
virtual machines can be powered on in case of a failure or isolation.
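
The arithmetic can be summarized in a short sketch using the VM1/VM2 example from earlier. The per-VM memory overhead value and the host sizes below are placeholders; in a real cluster the overhead depends on the virtual machine configuration (see the Resource Management Guide).

CPU_DEFAULT_MHZ = 32      # vSphere 5.0 default when no CPU reservation is set

vms = [
    {"name": "VM1", "cpu_reservation_mhz": 2000, "mem_reservation_mb": 1024, "mem_overhead_mb": 100},
    {"name": "VM2", "cpu_reservation_mhz": 1000, "mem_reservation_mb": 2048, "mem_overhead_mb": 100},
]

# Slot size: the highest CPU reservation and the highest memory reservation
# (plus overhead) of any virtual machine in the cluster.
cpu_slot_mhz = max(CPU_DEFAULT_MHZ, max(vm["cpu_reservation_mhz"] for vm in vms))
mem_slot_mb = max(vm["mem_reservation_mb"] + vm["mem_overhead_mb"] for vm in vms)

def slots_for_host(available_cpu_mhz, available_mem_mb):
    # The most restrictive of the two resources determines the host's slot count.
    return min(available_cpu_mhz // cpu_slot_mhz, available_mem_mb // mem_slot_mb)

print(cpu_slot_mhz, mem_slot_mb)                 # 2000 MHz, 2148 MB (example overhead)
print(slots_for_host(20000, 16384))              # min(10, 7) = 7 slots for this host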

The question we receive a lot is how do I know what my slot size is? The details around slot sizes can be monitored on the HA section of the
Cluster’s summary tab by clicking the “Advanced Runtime Info” line when the “Host Failures” Admission Control Policy is configured.

Figure 27: High Availability cluster summary tab

Clicking “Advanced Runtime Info” will show the specifics of the slot size and more useful details such as the number of slots available, as depicted in
Figure 28. Please note that although the number of vCPUs is listed in the Slot size section, it is not factored in by default. Although this used to be
the case pre-VI 3.5 Update 2, it often led to an overly conservative result and as such has been changed.

Figure 28: High Availability advanced runtime info
WSO2
 
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers:  A Deep Dive into Serverless Spatial Data and FMECloud Frontiers:  A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
Safe Software
 

Recently uploaded (20)

Boost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdfBoost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdf
 
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
 
WSO2's API Vision: Unifying Control, Empowering Developers
WSO2's API Vision: Unifying Control, Empowering DevelopersWSO2's API Vision: Unifying Control, Empowering Developers
WSO2's API Vision: Unifying Control, Empowering Developers
 
Finding Java's Hidden Performance Traps @ DevoxxUK 2024
Finding Java's Hidden Performance Traps @ DevoxxUK 2024Finding Java's Hidden Performance Traps @ DevoxxUK 2024
Finding Java's Hidden Performance Traps @ DevoxxUK 2024
 
"I see eyes in my soup": How Delivery Hero implemented the safety system for ...
"I see eyes in my soup": How Delivery Hero implemented the safety system for ..."I see eyes in my soup": How Delivery Hero implemented the safety system for ...
"I see eyes in my soup": How Delivery Hero implemented the safety system for ...
 
ICT role in 21st century education and its challenges
ICT role in 21st century education and its challengesICT role in 21st century education and its challenges
ICT role in 21st century education and its challenges
 
Elevate Developer Efficiency & build GenAI Application with Amazon Q​
Elevate Developer Efficiency & build GenAI Application with Amazon Q​Elevate Developer Efficiency & build GenAI Application with Amazon Q​
Elevate Developer Efficiency & build GenAI Application with Amazon Q​
 
Navigating the Deluge_ Dubai Floods and the Resilience of Dubai International...
Navigating the Deluge_ Dubai Floods and the Resilience of Dubai International...Navigating the Deluge_ Dubai Floods and the Resilience of Dubai International...
Navigating the Deluge_ Dubai Floods and the Resilience of Dubai International...
 
Web Form Automation for Bonterra Impact Management (fka Social Solutions Apri...
Web Form Automation for Bonterra Impact Management (fka Social Solutions Apri...Web Form Automation for Bonterra Impact Management (fka Social Solutions Apri...
Web Form Automation for Bonterra Impact Management (fka Social Solutions Apri...
 
Biography Of Angeliki Cooney | Senior Vice President Life Sciences | Albany, ...
Biography Of Angeliki Cooney | Senior Vice President Life Sciences | Albany, ...Biography Of Angeliki Cooney | Senior Vice President Life Sciences | Albany, ...
Biography Of Angeliki Cooney | Senior Vice President Life Sciences | Albany, ...
 
Platformless Horizons for Digital Adaptability
Platformless Horizons for Digital AdaptabilityPlatformless Horizons for Digital Adaptability
Platformless Horizons for Digital Adaptability
 
Apidays New York 2024 - APIs in 2030: The Risk of Technological Sleepwalk by ...
Apidays New York 2024 - APIs in 2030: The Risk of Technological Sleepwalk by ...Apidays New York 2024 - APIs in 2030: The Risk of Technological Sleepwalk by ...
Apidays New York 2024 - APIs in 2030: The Risk of Technological Sleepwalk by ...
 
Apidays New York 2024 - Passkeys: Developing APIs to enable passwordless auth...
Apidays New York 2024 - Passkeys: Developing APIs to enable passwordless auth...Apidays New York 2024 - Passkeys: Developing APIs to enable passwordless auth...
Apidays New York 2024 - Passkeys: Developing APIs to enable passwordless auth...
 
Vector Search -An Introduction in Oracle Database 23ai.pptx
Vector Search -An Introduction in Oracle Database 23ai.pptxVector Search -An Introduction in Oracle Database 23ai.pptx
Vector Search -An Introduction in Oracle Database 23ai.pptx
 
Modular Monolith - a Practical Alternative to Microservices @ Devoxx UK 2024
Modular Monolith - a Practical Alternative to Microservices @ Devoxx UK 2024Modular Monolith - a Practical Alternative to Microservices @ Devoxx UK 2024
Modular Monolith - a Practical Alternative to Microservices @ Devoxx UK 2024
 
FWD Group - Insurer Innovation Award 2024
FWD Group - Insurer Innovation Award 2024FWD Group - Insurer Innovation Award 2024
FWD Group - Insurer Innovation Award 2024
 
Architecting Cloud Native Applications
Architecting Cloud Native ApplicationsArchitecting Cloud Native Applications
Architecting Cloud Native Applications
 
Repurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost Saving
Repurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost SavingRepurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost Saving
Repurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost Saving
 
AWS Community Day CPH - Three problems of Terraform
AWS Community Day CPH - Three problems of TerraformAWS Community Day CPH - Three problems of Terraform
AWS Community Day CPH - Three problems of Terraform
 
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers:  A Deep Dive into Serverless Spatial Data and FMECloud Frontiers:  A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
 

  • 7. Acknowledgements The authors of this book work for VMware. The opinions expressed here are the authors’ personal opinions. Content published was not approved in advance by VMware and does not necessarily reflect the views and opinions of VMware. This is the authors’ book, not a VMware book. First of all we would like to thank our VMware management team (Charu Chaubal, Kaushik Banerjee, Neil O’Donoghue and Bogomil Balkansky) for supporting us on this and other projects. A special thanks goes out to our technical reviewers and editors: Doug Baer, Keith Farkas and Elisha Ziskind (HA Engineering), Anne Holler, Irfan Ahmad and Rajesekar Shanmugam (DRS and SDRS Engineering), Puneet Zaroo (VMkernel scheduling), Ali Mashtizadeh and Gabriel Tarasuk-Levin (vMotion and Storage vMotion Engineering), Doug Fawley and Divya Ranganathan (EVC Engineering). Thanks for keeping us honest and contributing to this book. A very special thanks to our families for supporting this project. Without your support we could not have done this. We would like to dedicate this book to everyone who made VMware what it is today. You have not only changed our lives but also revolutionized a complete industry. Duncan Epping and Frank Denneman
  • 8. Foreword What are you reading? It’s a question that may come up as you’re reading this book in a public place like a coffee shop or airport. How do you answer? If you’re reading this book then you’re part of an elite group that “gets” virtualization. Chances are the person asking you the question has no idea how to spell VMware, let alone how powerful server and desktop virtualization have become in the datacenter over the past several years. My first introduction to virtualization was way back in 1999 with VMware Workstation. I was a sales engineer at the time and in order to effectively demonstrate our products I had to carry 2 laptops (yes, TWO 1999 era laptops) PLUS a projector PLUS a 4 port hub and Ethernet cables. Imagine how relieved my back and shoulders were when I discovered VMware Workstation and I could finally get rid of that 2nd laptop and associated cabling. It was awesome to say the least! Here we are, over a decade after the introduction of VMware Workstation. The topic of this book is HA & DRS, who would have thought of such advanced concepts way back in 1999? Back then I was just happy to lighten my load…now look at where we are. Thanks to virtualization and VMware we now have capabilities in the datacenter that have never existed before (sorry mainframe folks, I mean never before possible on x86). Admit it, we’re geeks! You may never see this book on the New York Times Bestseller list but that’s OK! Virtualization and VMware are very narrow topics, HA & DRS even more narrow. You’re reading this because of your thirst for knowledge and I have to admit, Duncan and Frank deliver that knowledge like no others before them. Both Duncan and Frank have taken time out of their busy work schedules and dedicated themselves to not only blog about VMware virtualization, but also write and co-author several books, including this one. It takes not only dedication, but also passion for the technology to continue to produce great content. The first edition, “VMware vSphere 4.1 HA and DRS technical deepdive” has been extremely popular, so popular in fact, that this book is all about HA and DRS in vSphere 5.0. Not only the traditional DRS everyone has known for years but also Storage DRS. Storage DRS is something brand new with vSphere 5.0 and it’s great to have this deep technical reference as it’s released even though it’s only a small part of VMware vSphere. Use the information in the following pages not only to learn about this exciting new technology, but also to implement the best practices in your own environment. Finally, be sure to reach out to both Duncan and Frank either via their blogs, twitter or in person and thank them for getting what’s in their heads onto the pages of this book. If you’ve ever tried writing (or blogging) then you know it takes time, dedication and passion. Doug Hazelman Senior Director, Product Strategy Veeam Software P.S. In case you didn’t know, Duncan keeps an album of book pictures that people have sent him on his blog’s Facebook page. So go ahead and get creative and take a picture of this book and share it with him, just don’t get too creative… http://vee.am/HADRS
  • 9. Part I vSphere High Availability
  • 11. Introduction to vSphere High Availability Availability has traditionally been one of the most important aspects when providing services. When providing services on a shared platform like VMware vSphere, the impact of downtime exponentially grows and as such VMware engineered a feature called VMware vSphere High Availability. VMware vSphere High Availability, hereafter simply referred to as HA, provides a simple and cost effective solution to increase availability for any application running in a virtual machine regardless of its operating system. It is configured using a couple of simple steps through vCenter Server (vCenter) and as such provides a uniform and simple interface. HA enables you to create a cluster out of multiple ESXi or ESX servers. We will use ESXi in this book when referring to either, as ESXi is the standard going forward. This will enable you to protect virtual machines and hence their workloads. In the event of a failure of one of the hosts in the cluster, impacted virtual machines are automatically restarted on other ESXi hosts within that same VMware vSphere Cluster (cluster). Figure 1: High Availability in action On top of that, in the case of a Guest OS level failure, HA can restart the failed Guest OS. This feature is called VM Monitoring, but is sometimes also referred to as VM-HA. This might sound fairly complex but again can be implemented with a single click. Figure 2: OS Level HA just a single click Unlike many other clustering solutions, HA is a fairly simple solution to implement and literally enabled within 5 clicks. On top of that, HA is widely adopted and used in all situations. However, HA is not a 1:1 replacement for solutions like Microsoft Clustering Services (MSCS). The main difference between MSCS and HA is that MSCS was designed to protect stateful cluster-aware applications while HA was designed to protect any virtual machine regardless of the type of application within. In the case of HA, a failover incurs downtime as the virtual machine is literally restarted on one of the remaining nodes in the cluster, whereas MSCS transitions the service to one of the remaining nodes in the cluster when a failure occurs. Contrary to what many believe, MSCS does not guarantee that there is no downtime during a transition. On top of that, your application needs to be cluster-aware and stateful in order to get the most out of this mechanism, which limits the number of workloads that could really benefit from this type of clustering. One might ask why you would want to use HA when a virtual machine is restarted and service is temporarily lost. The answer is simple; not all virtual machines (or services) need 99.999% uptime. For many services the type of availability HA provides is more than sufficient. On top of that, many applications were never designed to run on top of an MSCS cluster. This means that there is no guarantee of availability or data consistency if an application is clustered with MSCS but is not cluster-aware. In addition, MSCS clustering can be complex and requires special skills and training. One example is managing patches and updates/upgrades in a MSCS environment; this could even lead to more downtime if not operated correctly and definitely complicates operational procedures. HA however reduces complexity, costs (associated with downtime and MSCS), resource overhead and unplanned downtime for minimal additional
  • 12. costs. It is important to note that HA, contrary to MSCS, does not require any changes to the guest as HA is provided on the hypervisor level. Also, VM Monitoring does not require any additional software or OS modifications except for VMware Tools, which should be installed anyway as a best practice. In case even higher availability is required, VMware also provides a level of application awareness through Application Monitoring, which has been leveraged by partners like Symantec to enable application level resiliency and could be used by in-house development teams to increase resiliency for their application. HA has proven itself over and over again and is widely adopted within the industry; if you are not using it today, hopefully you will be convinced after reading this section of the book.
  • 13. vSphere 5.0 Before we dive into the main constructs of HA and describe all the choices one has to make when configuring HA, we will first briefly touch on what’s new in vSphere 5.0 and describe the basic requirements and steps needed to enable HA. The focus of this book is vSphere 5.0 and the enhancements made to increase stability of your virtualized infrastructure. We will also emphasize the changes made which removed the historical constraints. We will, however, still discuss features and concepts that have been around since vCenter 2.x to ensure that reading this book provides understanding of every aspect of HA.
  • 14. What’s New? Those who have used HA in the past and have already played around with it in vSphere 5.0 might wonder what has changed. Looking at the vSphere 5.0 Client, changes might not be obvious except for the fact that configuring HA takes substantially less time and some new concepts like datastore heartbeats have been introduced. Do not assume that this is it; underneath the covers HA has been completely redesigned and developed from the ground up. This is the reason enabling or reconfiguring HA literally takes seconds today instead of minutes with previous versions. With the redesign of the HA stack, some very welcome changes have been introduced to complement the already extensive capabilities of HA. Some of the key components of HA have changed completely and new functionality has been added. We have listed some of these changes below for your convenience and we will discuss them in more detail in the appropriate chapters.
New HA Agent – Fault Domain Manager (FDM) is the name of the agent. HA has been rewritten from the ground up and FDM replaces the legacy AAM agent.
No dependency on DNS – HA has been written to use IP only to avoid any dependency on DNS.
Primary node concept – The primary/secondary node mechanism has been completely removed to lift all limitations (maximum of 5 primary nodes with vSphere 4.1 and before) associated with it.
Supports management network partitions – Capable of having multiple “master nodes” when multiple network partitions exist.
Enhanced isolation validation – Avoids false positives when the complete management network has failed.
Datastore heartbeating – This additional level of heartbeating reduces chances of false positives by using the storage layer to validate the state of the host and to avoid unnecessary downtime when there’s a management network interruption.
Enhanced Admission Control Policies:
The Host Failures based admission control allows for more than 4 hosts to be specified (31 is the maximum).
The Percentage based admission control policy allows you to specify percentages for both CPU and memory separately.
The Failover Host based admission control policy allows you to specify multiple designated failover hosts.
  • 15. What is Required for HA to Work? Each feature or product has very specific requirements and HA is no different. Knowing the requirements of HA is part of the basics we have to cover before diving into some of the more complex concepts. For those who are completely new to HA, we will also show you how to configure it.
  • 16. Prerequisites Before enabling HA it is highly recommended to validate that the environment meets all the prerequisites. We have also included recommendations from an infrastructure perspective that will enhance resiliency.
Requirements:
Minimum of two ESXi hosts
Minimum of 3GB memory per host to install ESXi and enable HA
VMware vCenter Server
Shared Storage for virtual machines
Pingable gateway or other reliable address
Recommendation:
Redundant Management Network (not a requirement, but highly recommended)
Multiple shared datastores
  • 17. Firewall Requirements The following table contains the ports that are used by HA for communication. If your environment contains firewalls external to the host, ensure these ports are opened for HA to function correctly. HA will open the required ports on the ESX or ESXi firewall. Please note that this is the first substantial difference: as of vSphere 5.0, HA uses only a single port, compared to the multiple ports used pre-vSphere 5.0, and ESXi 5.0 is enhanced with a firewall. Table 1: High Availability port settings
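If you want to verify the host firewall configuration from a script rather than the UI, the pyVmomi sketch below lists the HA-related ruleset on an ESXi host. It is a minimal sketch, assuming an existing connection and HostSystem object, and assuming the ruleset key is fdm (which is how the Fault Domain Manager rules typically appear on ESXi 5.x); adjust the key for your environment.

```python
# Minimal sketch, assuming an existing pyVmomi connection and a HostSystem object (host).
# The ruleset key 'fdm' is an assumption based on how the HA agent rules typically show
# up in the ESXi 5.x firewall; verify the key in your own environment.
from pyVmomi import vim  # pip install pyvmomi

def show_ha_ruleset(host: vim.HostSystem, key: str = "fdm"):
    fw = host.configManager.firewallSystem
    for ruleset in fw.firewallInfo.ruleset:
        if ruleset.key == key:
            print("Ruleset:", ruleset.key, "enabled:", ruleset.enabled)
            for rule in ruleset.rule:
                print("  ", rule.protocol, rule.direction, rule.port)
            return ruleset
    print("No ruleset named", key, "found on", host.name)
```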
  • 18. Configuring VMware High Availability HA can be configured with the default settings within a couple of clicks. The following steps will show you how to create a cluster and enable HA, including VM Monitoring, as we feel this is the bare minimum configuration. Each of the settings and the design decisions associated with these steps will be described in more depth in the following chapters.
1. Select the Hosts & Clusters view.
2. Right-click the Datacenter in the Inventory tree and click New Cluster.
3. Give the new cluster an appropriate name. We recommend at a minimum including the location of the cluster and a sequence number ie. ams-hadrs-001.
4. In the Cluster Features section of the page, select Turn On VMware HA and click Next.
5. Ensure Host Monitoring Status and Admission Control is enabled and click Next.
6. Leave Cluster Default Settings for what they are and click Next.
7. Enable VM Monitoring Status by selecting “VM and Application Monitoring” and click Next.
8. Leave VMware EVC set to the default and click Next.
9. Leave the Swapfile Policy set to default and click Next.
10. Click Finish to complete the creation of the cluster.
Figure 3: Ready to complete the New Cluster Wizard
When the HA cluster has been created, the ESXi hosts can be added to the cluster simply by dragging them into the cluster, if they were already added to vCenter, or by right clicking the cluster and selecting “Add Host”. When an ESXi host is added to the newly-created cluster, the HA agent will be loaded and configured. Once this has completed, HA will enable protection of the workloads running on this ESXi host. As we have clearly demonstrated, HA is a simple clustering solution that will allow you to protect virtual machines against host failure and operating system failure in literally minutes. Understanding the architecture of HA will enable you to reach that extra 9 when it comes to availability. The following chapters will discuss the architecture and fundamental concepts of HA. We will also discuss all decision-making moments to ensure you will configure HA in such a way that it meets the requirements of your or your customer’s environment.
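For those who prefer to script the same configuration, the pyVmomi sketch below creates a cluster with HA, host monitoring, admission control and VM/Application Monitoring enabled, roughly mirroring the wizard defaults above. It is a sketch only: it assumes an existing connection and Datacenter object, the cluster name ams-hadrs-001 is just the naming example from the steps, and the property names should be validated against your SDK version.

```python
# Sketch only: creates a cluster with HA enabled, mirroring the wizard defaults above.
# Assumes an existing pyVmomi connection and a Datacenter object (datacenter).
from pyVmomi import vim

def create_ha_cluster(datacenter: vim.Datacenter, name: str = "ams-hadrs-001"):
    das = vim.cluster.DasConfigInfo()
    das.enabled = True                       # Turn On VMware HA
    das.hostMonitoring = "enabled"           # Host Monitoring Status
    das.admissionControlEnabled = True       # Admission Control
    das.vmMonitoring = "vmAndAppMonitoring"  # VM and Application Monitoring

    spec = vim.cluster.ConfigSpecEx(dasConfig=das)
    return datacenter.hostFolder.CreateClusterEx(name=name, spec=spec)
```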
  • 21. Components of High Availability Now that we know what the pre-requisites are and how to configure HA the next steps will be describing which components form HA. Keep in mind that this is still a “high level” overview. There is more under the cover that we will explain in following chapters. The following diagram depicts a two-host cluster and shows the key HA components. Figure 4: Components of High Availability As you can clearly see, there are three major components that form the foundation for HA as of vSphere 5.0:
FDM
hostd
vCenter
The first and probably the most important component that forms HA is FDM (Fault Domain Manager). This is the HA agent, and has replaced what was once known as AAM (Legato’s Automated Availability Manager). The FDM Agent is responsible for many tasks such as communicating host resource information, virtual machine states and HA properties to other hosts in the cluster. FDM also handles heartbeat mechanisms, virtual machine placement, virtual machine restarts, logging and much more. We are not going to discuss all of this in-depth separately as we feel that this will complicate things too much. FDM, in our opinion, is one of the most important agents on an ESXi host, when HA is enabled, of course, and we are assuming this is the case. The engineers recognized this importance and added an extra level of resiliency to HA. Contrary to AAM, FDM uses a single-process agent. However, FDM spawns a watchdog process. In the unlikely event of an agent failure, the watchdog functionality will pick up on this and restart the agent to ensure HA functionality remains without anyone ever noticing it failed. The agent is also resilient to network interruptions and “all path down” (APD) conditions. Inter-host communication automatically uses another communication path (if the host is configured with redundant management networks) in the case of a network failure. As of vSphere 5.0, HA is no longer dependent on DNS as it works with IP addresses only. This is one of the major improvements that FDM brought. This also means that the character limit that HA imposed on the FQDN has been lifted. (Pre-vSphere 5.0, FQDNs were limited to 27 characters.) This does not mean that ESXi hosts need to be registered with their IP addresses in vCenter; it is still a best practice to register ESXi hosts by FQDN in vCenter. Although HA does not depend on DNS anymore, remember that many other services do. On top of that, monitoring and troubleshooting will be much easier when hosts are correctly registered within vCenter and have a valid FQDN.
  • 22. Basic design principle Although HA is not dependent on DNS anymore, it is still recommended to register the hosts with their FQDN. Another major change that FDM brings is logging. Some of you might have never realized this and some of you might have discovered it the hard way: prior to vSphere 5.0, the HA log files were not sent to syslog. vSphere 5.0 brings a standardized logging mechanism where a single log file has been created for all operational log messages; it is called fdm.log. This log file is stored under /var/log/ as depicted in Figure 5. Figure 5: HA log file Basic design principle Ensure syslog is correctly configured and log files are offloaded to a safe location to offer the possibility of performing a root cause analysis in case disaster strikes.
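To act on that design principle from a script, the hedged pyVmomi sketch below points an ESXi 5.x host at a remote syslog target through its advanced options, so that fdm.log and the other log files are offloaded. It assumes an existing HostSystem object, that the option key is Syslog.global.logHost (the ESXi 5.x advanced setting for the remote syslog target as far as we recall), and the target hostname is a placeholder.

```python
# Sketch: offload ESXi logs (including fdm.log) to a remote syslog host.
# Assumes a pyVmomi HostSystem object (host); the option key 'Syslog.global.logHost'
# is the ESXi 5.x advanced setting for the remote syslog target as far as we recall.
from pyVmomi import vim

def set_remote_syslog(host: vim.HostSystem, target: str = "udp://syslog.example.com:514"):
    opt_mgr = host.configManager.advancedOption
    change = vim.option.OptionValue(key="Syslog.global.logHost", value=target)
    opt_mgr.UpdateOptions(changedValue=[change])
```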
  • 23. HOSTD Agent One of the most crucial agents on a host is hostd. This agent is responsible for many of the tasks we take for granted like powering on VMs. FDM talks directly to hostd and vCenter, so it is not dependent on vpxa, like in previous releases. This is, of course, to avoid any unnecessary overhead and dependencies, making HA more reliable than ever before and enabling HA to respond faster to power-on requests. That ultimately results in higher VM uptime. When, for whatever reason, hostd is unavailable or not yet running after a restart, the host will not participate in any FDM-related processes. FDM relies on hostd for information about the virtual machines that are registered to the host, and manages the virtual machines using hostd APIs. In short, FDM is dependent on hostd and if hostd is not operational, FDM halts all functions and waits for hostd to become operational.
  • 24. vCenter That brings us to our final component, the vCenter Server. vCenter is the core of every vSphere Cluster and is responsible for many tasks these days. For our purposes, the following are the most important and the ones we will discuss in more detail:
Deploying and configuring HA Agents
Communication of cluster configuration changes
Protection of virtual machines
vCenter is responsible for pushing out the FDM agent to the ESXi hosts when applicable. Prior to vSphere 5, the push of these agents would be done in a serial fashion. With vSphere 5.0, this is done in parallel to allow for faster deployment and configuration of multiple hosts in a cluster. vCenter is also responsible for communicating configuration changes in the cluster to the host which is elected as the master. We will discuss this concept of master and slaves in the following chapter. Examples of configuration changes are modification or addition of an advanced setting or the introduction of a new host into the cluster. As of vSphere 5.0, HA also leverages vCenter to retrieve information about the status of virtual machines and, of course, vCenter is used to display the protection status (Figure 6) of virtual machines. (What “virtual machine protection” actually means will be discussed in Chapter 3.) On top of that, vCenter is responsible for the protection and unprotection of virtual machines. This not only applies to user-initiated power-offs or power-ons of virtual machines, but also in the case where an ESXi host is disconnected from vCenter at which point vCenter will request the master HA agent to unprotect the affected virtual machines. Figure 6: Virtual machine protection state Although HA is configured by vCenter and exchanges virtual machine state information with it, vCenter is not involved when HA responds to failure. It is comforting to know that in case of a host failure containing the virtualized vCenter Server, HA takes care of the failure and restarts the vCenter Server on another host, including all other configured virtual machines from that failed host. There is a corner case scenario with regards to vCenter failure: if the ESXi hosts are so called “stateless hosts” and Distributed vSwitches are used for the management network, virtual machine restarts will not be attempted until vCenter is restarted. For stateless environments, vCenter and Auto Deploy availability is key as the ESXi hosts literally depend on them. If vCenter is unavailable, it will not be possible to make changes to the configuration of the cluster. vCenter is the source of truth for the set of virtual machines that are protected, the cluster configuration, the virtual machine-to-host compatibility information, and the host membership. So, while HA, by design, will respond to failures without vCenter, HA relies on vCenter to be available to configure or monitor the cluster.
  • 25. When a virtual vCenter server has been implemented, we recommend setting the correct HA restart priorities for it. Although vCenter Server is not required for HA to restart virtual machines, there are multiple components that rely on vCenter and, as such, a speedy recovery is desired. When configuring your vCenter virtual machine with a high priority for restarts, remember to include all services on which your vCenter server depends for a successful restart: DNS, MS AD and MS SQL (or any other database server you are using). Basic design principle In stateless environments, ensure vCenter and Auto Deploy are highly available as recovery time of your virtual machines might be dependent on them. Understand the impact of virtualizing vCenter. Ensure it has high priority for restarts and ensure that services which vCenter Server depends on are available: DNS, AD and database.
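As an illustration of how such a per-VM restart priority could be set through the API, here is a hedged pyVmomi sketch. It assumes existing cluster and VM objects, and the type and property names (DasVmConfigSpec, DasVmSettings) are recalled from the vSphere 5.x API, so treat it as a starting point rather than a definitive recipe.

```python
# Sketch: give the (virtual) vCenter Server VM a high HA restart priority.
# Assumes pyVmomi objects for the cluster and the vCenter VM; type and property
# names are recalled from the vSphere 5.x API and should be verified.
from pyVmomi import vim

def set_restart_priority(cluster: vim.ClusterComputeResource,
                         vm: vim.VirtualMachine, priority: str = "high"):
    vm_settings = vim.cluster.DasVmSettings(restartPriority=priority)
    vm_config = vim.cluster.DasVmConfigInfo(key=vm, dasSettings=vm_settings)
    vm_spec = vim.cluster.DasVmConfigSpec(operation="add", info=vm_config)

    spec = vim.cluster.ConfigSpecEx(dasVmConfigSpec=[vm_spec])
    return cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
```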
  • 27. Fundamental Concepts Now that you know about the components of HA, it is time to start talking about some of the fundamental concepts of HA clusters:
Master / Slave Nodes
Heartbeating
Isolated vs Network partitioned
Virtual Machine Protection
Everyone who has implemented vSphere knows that multiple hosts can be configured into a cluster. A cluster can best be seen as a collection of resources. These resources can be carved up with the use of VMware Distributed Resource Scheduler (DRS) into separate pools of resources or used to increase availability by enabling HA. With vSphere 5.0, a lot has changed when it comes to HA. For example, an HA cluster used to consist of two types of nodes. A node could either be a primary or a secondary node. This concept was introduced due to the dependency on AAM and allowed scaling up to 32 hosts per cluster. FDM has changed this game completely and removed the whole concept of primary and secondary nodes. (For more details about the legacy (AAM) node mechanism we would like to refer you to the vSphere 4.1 HA and DRS Technical Deepdive.) One of the most crucial parts of an HA design used to be the design considerations around HA primary nodes and the maximum of 5 per cluster. These nodes became the core of every HA implementation and literally were responsible for restarting VMs. Without at least one primary node surviving the failure, a restart of VMs was not possible. This led to some limitations from an architectural perspective and was also one of the drivers for VMware to rewrite vSphere HA. The vSphere 5.0 architecture introduces the concept of master and slave HA agents. Except during network partitions, which are discussed later, there is only one master HA agent in a cluster. Any agent can serve as a master, and all others are considered its slaves. A master agent is in charge of monitoring the health of virtual machines for which it is responsible and restarting any that fail. The slaves are responsible for forwarding information to the master agents and restarting any virtual machines at the direction of the master. Another thing that has changed is that the HA agent, regardless of its role as master or slave, implements the VM/App monitoring feature; with AAM, this feature was part of VPXA.
  • 28. Master Agent As stated, one of the primary tasks of the master is to keep track of the state of the virtual machines it is responsible for and to take action when appropriate. It is important to realize that a virtual machine can only be the responsibility of a single master. We will discuss the scenario where multiple masters can exist in a single cluster in one of the following sections, but for now let’s talk about a cluster with a single master. A master will claim responsibility for a virtual machine by taking “ownership” of the datastore on which the virtual machine’s configuration file is stored. Basic design principle To maximize the chance of restarting virtual machines after a failure we recommend masking datastores on a cluster basis. Although sharing of datastores across clusters will work, it will increase complexity from an administrative perspective. That is not all, of course. The HA master is also responsible for exchanging state information with vCenter. This means that it will not only receive but also send information to vCenter when required. The HA master is also the host that initiates the restart of virtual machines when a host has failed. You may immediately want to ask what happens when the master is the one that fails, or, more generically, which of the hosts can become the master and when is it elected? Election A master is elected by a set of HA agents whenever the agents are not in network contact with a master. A master election thus occurs when HA is first enabled on a cluster and when the host on which the master is running: fails, becomes network partitioned or isolated, is disconnected from VC, is put into maintenance or standby mode, or when HA is reconfigured on the host. The HA master election takes approximately 15 seconds and is conducted using UDP. While HA won’t react to failures during the election, once a master is elected, failures detected before and during the election will be handled. The election process is simple but robust. The host that is participating in the election with the greatest number of connected datastores will be elected master. If two or more hosts have the same number of datastores connected, the one with the highest Managed Object Id will be chosen. This however is done lexically; meaning that 99 beats 100 as 9 is larger than 1. For each host, the HA State of the host will be shown on the Summary tab. This includes the role as depicted in Figure 7 where the host is a master host. After a master is elected, each slave that has management network connectivity with it will setup a single secure, encrypted, TCP connection to the master. This secure connection is SSL-based. One thing to stress here though is that slaves do not communicate with each other after the master has been elected unless a re-election of the master needs to take place. Figure 7: Master Agent
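To make the tie-break rule above concrete, here is a minimal Python sketch of the selection ordering as described (an illustration with made-up data, not FDM source code): the host with the most connected datastores wins, and ties are broken by a plain lexical string comparison of the Managed Object Id, which is why "host-99" beats "host-100".

```python
# Illustration of the election ordering described above; hypothetical data, not FDM code.
def elect_master(hosts):
    """hosts: list of (moid, connected_datastore_count) tuples."""
    # Most datastores wins; ties broken by a lexical (string) compare of the MOID.
    return max(hosts, key=lambda h: (h[1], h[0]))

candidates = [("host-100", 8), ("host-99", 8), ("host-101", 6)]
print(elect_master(candidates))  # ('host-99', 8): '9' sorts after '1', so 99 "beats" 100
```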
  • 29. As stated earlier, when a master is elected it will try to acquire ownership of all of the datastores it can directly access or access by proxying requests to one of the slaves connected to it using the management network. It does this by locking a file called “protectedlist” that is stored on the datastores in an existing cluster. The master will also attempt to take ownership of any datastores it discovers along the way, and it will periodically retry any it could not take ownership of previously. The naming format and location of this file is as follows: /<root of datastore>/.vSphere-HA/<cluster-specific-directory>/protectedlist For those wondering how “<cluster-specific-directory>” is constructed: <uuid of VC>-<number part of the MoID of the cluster>-<random 8 char string>-<name of the host running VC> The master uses this protectedlist file to store the inventory. It keeps track of which virtual machines are protected by HA. Calling it an inventory might be slightly overstating: it is a list of protected virtual machines. The master distributes this inventory across all datastores in use by the virtual machines in the cluster. Figure 8 shows an example of this file on one of the datastores. Figure 8: Protectedlist file
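Purely as an illustration of the naming convention above, the snippet below assembles a protectedlist path from hypothetical values; the datastore name, vCenter UUID, cluster MoID, random string and vCenter host name are all made up.

```python
# Hypothetical example of the protectedlist path layout described above.
datastore_root = "/vmfs/volumes/datastore-001"       # root of the datastore (made-up name)
cluster_dir = "-".join([
    "421d4a6c-1111-2222-3333-444455556666",           # UUID of the vCenter instance (made up)
    "26",                                             # number part of the cluster's MoID (e.g. domain-c26)
    "a1b2c3d4",                                       # random 8-character string
    "vcenter01",                                      # name of the host running vCenter (made up)
])
path = f"{datastore_root}/.vSphere-HA/{cluster_dir}/protectedlist"
print(path)
```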
  • 30. Now that we know the master locks a file on the datastore and that this file stores inventory details, what happens when the master is isolated or fails? If the master fails, the answer is simple: the lock will expire and the new master will relock the file if the datastore is accessible to it. In the case of isolation, this scenario is slightly different, although the result is similar. The master will release the lock it has on the file on the datastore to ensure that when a new master is elected it can claim the responsibility for the virtual machines on these datastores by locking the file on the datastore. If, by any chance, a master should fail right at the moment that it became isolated, the restart of the virtual machines will be delayed until a new master has been elected. In a scenario like this, accuracy and the fact that virtual machines are restarted is more important than a short delay. Let’s assume for a second that your master has just failed. What will happen and how do the slaves know that the master has failed? vSphere 5.0 uses a network heartbeat mechanism. If the slaves have received no network heartbeats from the master, the slaves will try to elect a new master. This new master will read the required information and will restart the virtual machines within 10 seconds. There is more to this process but we will discuss that in Chapter 4. Restarting virtual machines is not the only responsibility of the master. It is also responsible for monitoring the state of the slave hosts. If a slave fails or becomes isolated, the master will determine which virtual machines must be restarted. When virtual machines need to be restarted, the master is also responsible for determining the placement of those virtual machines. It uses a placement engine that will try to distribute the virtual machines to be restarted evenly across all available hosts. All of these responsibilities are really important, but without a mechanism to detect a slave has failed, the master would be useless. Just like the slaves receive heartbeats from the master, the master receives heartbeats from the slaves so it knows they are alive.
  • 31. Slaves A slave has substantially fewer responsibilities than a master: a slave monitors the state of the virtual machines it is running and informs the master about any changes to this state. The slave also monitors the health of the master by monitoring heartbeats. If the master becomes unavailable, the slaves initiate and participate in the election process. Last but not least, the slaves send heartbeats to the master so that the master can detect outages. Figure 9: Slave Agent
  • 32. Files for both Slave and Master Both the master and slave use files not only to store state, but also as a communication mechanism. We’ve already seen the protectedlist file (Figure 8) used by the master to store the list of protected virtual machines. We will now discuss the files that are created by both the master and the slaves. Remote files are files stored on a shared datastore and local files are files that are stored in a location only directly accessible to that host. Remote Files The set of powered on virtual machines is stored in a per-host “poweron” file. (See Figure 8 for an example of these files.) It should be noted that, because a master also hosts virtual machines, it also creates a “poweron” file. The naming scheme for this file is as follows: host-<number>-poweron Tracking virtual machine power-on state is not the only thing the “poweron” file is used for. This file is also used by the slaves to inform the master that it is isolated: the top line of the file will either contain a 0 or a 1. A 0 means not-isolated and a 1 means isolated. The master will inform vCenter about the isolation of the host. Local Files As mentioned before, when HA is configured on a host, the host will store specific information about its cluster locally. Figure 10: Locally stored files Each host, including the master, will store data locally. The data that is locally stored is important state information. Namely, the VM-to-host compatibility matrix, cluster configuration, and host membership list. This information is persisted locally on each host. Updates to this information are sent to the master by vCenter and propagated by the master to the slaves. Although we expect that most of you will never touch these files – and we highly recommend against modifying them – we do want to explain how they are used: clusterconfig This file is not human-readable. It contains the configuration details of the cluster. compatlist
  • 33. This file is not human-readable. It contains the actual compatibility info matrix for every HA protected virtual machine and lists all the hosts with which it is compatible. fdm.cfg This file contains the configuration settings around logging. For instance, the level of logging and syslog details are stored in here. hostlist A list of hosts participating in the cluster, including hostname, IP addresses, MAC addresses and heartbeat datastores.
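Since the isolation flag in the per-host “poweron” file described above is simply the first line of a text file (1 for isolated, 0 for not-isolated), a trivial sketch such as the following could be used to inspect it when troubleshooting; the function and the example path are our own illustration, not a VMware-provided tool.

```python
# Sketch: read the isolation flag from a host's "poweron" file on a heartbeat datastore.
def host_reports_isolated(poweron_path: str) -> bool:
    """Return True if the first line is 1 (isolated), per the description above."""
    with open(poweron_path) as f:
        return f.readline().strip().startswith("1")

# Example usage (path is entirely made up):
# host_reports_isolated("/vmfs/volumes/ds01/.vSphere-HA/<cluster-dir>/host-123-poweron")
```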
  • 34. Heartbeating We mentioned it a couple of times already in this chapter, and it is an important mechanism that deserves its own section: heartbeating. Heartbeating is the mechanism used by HA to validate whether a host is alive. With the introduction of vSphere 5.0, not only did the heartbeating mechanism slightly change, an additional heartbeating mechanism was introduced. Let’s discuss traditional network heartbeating first. Network Heartbeating vSphere 5.0 introduces some changes to the well-known heartbeat mechanism. As vSphere 5.0 doesn’t use the concept of primary and secondary nodes, there is no reason for 100s of heartbeat combinations. As of vSphere 5.0, each slave will send a heartbeat to its master and the master sends a heartbeat to each of the slaves. These heartbeats are sent by default every second. When a slave isn’t receiving any heartbeats from the master, it will try to determine whether it is Isolated– we will discuss “states” in more detail later on in this chapter. Basic design principle Network heartbeating is key for determining the state of a host. Ensure the management network is highly resilient to enable proper state determination. Datastore Heartbeating Those familiar with HA prior to vSphere 5.0 hopefully know that virtual machine restarts were always attempted, even if only the heartbeat network was isolated and the virtual machines were still running on the host. As you can imagine, this added an unnecessary level of stress to the host. This has been mitigated by the introduction of the datastore heartbeating mechanism. Datastore heartbeating adds a new level of resiliency and prevents unnecessary restart attempts from occurring. Datastore heartbeating enables a master to more correctly determine the state of a host that is not reachable via the management network. The new datastore heartbeat mechanism is only used in case the master has lost network connectivity with the slaves. The datastore heartbeat mechanism is then used to validate whether a host has failed or is merely isolated/network partitioned. Isolation will be validated through the “poweron” file which, as mentioned earlier, will be updated by the host when it is isolated. Without the “poweron” file, there is no way for the master to validate isolation. Let that be clear! Based on the results of checks of both files, the master will determine the appropriate action to take. If the master determines that a host has failed (no datastore heartbeats), the master will restart the failed host’s virtual machines. If the master determines that the slave is Isolated or Partitioned, it will only take action when it is appropriate to take action. With that meaning that the master will only initiate restarts when virtual machines are down or powered down / shut down by a triggered isolation response, but we will discuss this in more detail in Chapter 4. By default, HA picks 2 heartbeat datastores – it will select datastores that are available on all hosts, or as many as possible. Although it is possible to configure an advanced setting (das.heartbeatDsPerHost) to allow for more datastores for datastore heartbeating we do not recommend configuring this option as the default should be sufficient for every scenario. The selection process gives preference to VMFS datastores over NFS ones, and seeks to choose datastores that are backed by different storage arrays or NFS servers. If desired, you can also select the heartbeat datastores yourself. 
We, however, recommend letting vCenter deal with this operational “burden” as vCenter uses a selection algorithm to select heartbeat datastores that are presented to all hosts. This however is not a guarantee that vCenter can select datastores which are connected to all hosts. Figure 11: Selecting the heartbeat datastores
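If you do want to inspect or pin the heartbeat datastore selection from a script rather than the UI, the hedged pyVmomi sketch below shows one way to do it. The property names (hBDatastoreCandidatePolicy, heartbeatDatastore) and the policy value are recalled from the vSphere 5.x API and should be verified against your SDK; the datastore objects are assumed to exist.

```python
# Sketch: inspect the current heartbeat datastore settings and optionally pin preferred datastores.
# Assumes pyVmomi objects for the cluster and the chosen datastores; property names are
# recalled from the vSphere 5.x API and should be double-checked against your SDK.
from pyVmomi import vim

def show_heartbeat_config(cluster: vim.ClusterComputeResource):
    das = cluster.configurationEx.dasConfig
    print("Policy:", das.hBDatastoreCandidatePolicy)
    print("User-selected datastores:", [ds.name for ds in das.heartbeatDatastore])

def prefer_heartbeat_datastores(cluster, ds_list):
    das = vim.cluster.DasConfigInfo()
    das.hBDatastoreCandidatePolicy = "allFeasibleDsWithUserPreference"
    das.heartbeatDatastore = ds_list   # e.g. two vim.Datastore objects
    spec = vim.cluster.ConfigSpecEx(dasConfig=das)
    return cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
```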
  • 35. The question now arises: what, exactly, is this datastore heartbeating and which datastore is used for this heartbeating? Let’s answer which datastore is used for datastore heartbeating first as we can simply show that with a screenshot, Figure 12. vSphere 5.0 has brought some new capabilities to the “Cluster Status” feature on the Cluster’s Summary tab. This now shows you which datastores are being used for heartbeating and which hosts are using which specific datastore(s). In addition, it displays how many virtual machines are protected and how many hosts are connected to the master. Figure 12: Validating the heartbeat datastores How does this heartbeating mechanism work? HA leverages an existing VMFS file system mechanism. The mechanism uses a so called “heartbeat region” which is updated as long as the file exists. On VMFS datastores, HA will simply check whether the heartbeat region has been updated. In order to update a datastore heartbeat region, a host needs to have at least one open file on the volume. HA ensures there is at least one file open on this volume by creating a file specifically for datastore heartbeating. In other words, a per-host file is created on the designated heartbeating datastores, as shown in Figure 13. The naming scheme for this file is as follows: host-<number>-hb Figure 13: Heartbeat file
  • 36. On NFS datastores, each host will write to its heartbeat file once every 5 seconds, ensuring that the master will be able to check host state. The master will simply validate this by checking the time-stamp of the file. Realize that in the case of a converged network environment, the effectiveness of datastore heartbeating will vary depending on the type of failure. For instance, a NIC failure could impact both network and datastore heartbeating. If, for whatever reason, the datastore or NFS share becomes unavailable or is removed from the cluster, HA will detect this and select a new datastore or NFS share to use for the heartbeating mechanism. Basic design principle Datastore heartbeating adds a new level of resiliency but is not the be-all end-all. In converged networking environments, the use of datastore heartbeating adds little value due to the fact that a NIC failure may result in both the network and storage becoming unavailable.
  • 37. Isolated versus Partitioned We’ve already briefly touched on it and it is time to have a closer look. As of vSphere 5.0 HA, a new cluster node state called Partitioned exists. What is this exactly and when is a host Partitioned rather than Isolated? Before we explain this, we want to point out that there is a difference between the state as reported by the master and the state as observed by an administrator. First of all, a host is considered to be either Isolated or Partitioned when it loses network access to a master but has not failed. To help explain the difference, we’ve listed both states and the associated criteria below:
Isolated
Is not receiving heartbeats from the master
Is not receiving any election traffic
Cannot ping the isolation address
Partitioned
Is not receiving heartbeats from the master
Is receiving election traffic (at some point a new master will be elected at which point the state will be reported to vCenter)
In the case of an Isolation, a host is separated from the master and the virtual machines running on it might be restarted, depending on the selected isolation response and the availability of a master. It could occur that multiple hosts are fully isolated at the same time. When multiple hosts are isolated but can still communicate amongst each other over the management networks, we call this a network partition. When a network partition exists, a master election process will be issued so that a host failure or network isolation within this partition will result in appropriate action on the impacted virtual machine(s). Figure 14 shows possible ways in which an Isolation or a Partition can occur. Figure 14: Isolated versus Partitioned If a cluster is partitioned in multiple segments, each partition will elect its own master, meaning that if you have 4 partitions your cluster will have 4 masters. When the network partition is corrected, any of the four masters will take over the role and be responsible for the cluster again. It should be noted that a master could claim responsibility for a virtual machine that lives in a different partition. If this occurs and the virtual machine happens to fail, the master will be notified through the datastore communication mechanism.
• 38. This still leaves open the question of how the master determines whether a host has Failed, is Partitioned or has become Isolated. This is where the new datastore heartbeat mechanism comes into play. When the master stops receiving network heartbeats from a slave, it will check for host "liveness" for the next 15 seconds. Before the host is declared failed, the master will validate whether it has actually failed by doing additional liveness checks. First, the master will validate if the host is still heartbeating to the datastore. Second, the master will ping the management IP-address of the host. If both are negative, the host will be declared Failed. This doesn't necessarily mean the host has PSOD'ed; it could be that the network is unavailable, including the storage network, which would make this host Isolated from an administrator's perspective but Failed from an HA perspective. As you can imagine, however, there are various combinations possible. The following table depicts these combinations, including the "state".

Table 2: Host states

HA will trigger an action based on the state of the host. When the host is marked as Failed, a restart of the virtual machines will be initiated. When the host is marked as Isolated, the master might initiate the restarts. As mentioned earlier, this is a substantial change compared to HA prior to vSphere 5.0, when restarts were always initiated, regardless of the state of the virtual machines or hosts.

The one thing to keep in mind when it comes to isolation response is that a virtual machine will only be shut down or powered off when the isolated host knows there is a master out there that has taken ownership of the virtual machine, or when the isolated host loses access to the home datastore of the virtual machine. For example, if a host is isolated and runs two virtual machines, stored on separate datastores, the host will validate whether it can access each of the home datastores of those virtual machines. If it can, the host will validate whether a master owns these datastores. If no master owns the datastores, the isolation response will not be triggered and restarts will not be initiated. If the host does not have access to the datastore, for instance during an "All Paths Down" condition, HA will trigger the isolation response to ensure the "original" virtual machine is powered down and can be safely restarted. This is to avoid so-called "split-brain" scenarios. To reiterate, as this is a major change compared to all previous versions of HA, the remaining hosts in the cluster will only be requested to restart virtual machines when the master has detected that either the host has failed or has become isolated and the isolation response was triggered. If the term isolation response is not clear yet, don't worry as we will discuss it in more depth in Chapter 4.
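The liveness checks described above can be summarized as a small decision function. The sketch below is a simplified model, not the agent's actual implementation: the input flags and the order of the checks are assumptions made for illustration, and the host_reported_isolation flag stands in for the information an isolated host leaves behind (for example via the poweron file discussed later).

```python
def classify_host_state(network_heartbeat, datastore_heartbeat,
                        responds_to_ping, host_reported_isolation):
    """
    Simplified model of how a master could classify a slave, based on the
    checks described in the text. Field names and the exact decision order
    are assumptions for illustration; the real agent's logic is internal.
    """
    if network_heartbeat:
        return "Live"
    # No network heartbeats: perform the additional liveness checks.
    if not datastore_heartbeat and not responds_to_ping:
        return "Failed"            # restarts of its VMs will be initiated
    if host_reported_isolation:
        return "Isolated"          # host flagged itself (e.g. via the poweron file)
    return "Partitioned"           # host is alive but unreachable from the master


# A few example combinations:
for combo in [(True, False, False, False),
              (False, False, False, False),
              (False, True, False, True),
              (False, True, True, False)]:
    print(combo, "->", classify_host_state(*combo))
```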
  • 39. Virtual Machine Protection The way virtual machines are protected has changed substantially in vSphere 5.0. Prior to vSphere 5.0, virtual machine protection was handled by vpxd which notified AAM through a vpxa module called vmap. With vSphere 5.0, virtual machine protection happens on several layers but is ultimately the responsibility of vCenter. We have explained this briefly but want to expand on it a bit more to make sure everyone understands the dependency on vCenter when it comes to protecting virtual machines. We do want to stress that this only applies to protecting virtual machines; virtual machine restarts in no way require vCenter to be available at the time. When the state of a virtual machine changes, vCenter will direct the master to enable or disable HA protection for that virtual machine. Protection, however, is only guaranteed when the master has committed the change of state to disk. The reason for this, of course, is that a failure of the master would result in the loss of any state changes that exist only in memory. As pointed out earlier, this state is distributed across the datastores and stored in the “protectedlist” file. When the power state change of a virtual machine has been committed to disk, the master will inform vCenter Server so that the change in status is visible both for the user in vCenter and for other processes like monitoring tools. To clarify the process, we have created a workflow diagram (Figure 15) of the protection of a virtual machine from the point it is powered on through vCenter: Figure 15: VM protection workflow But what about “unprotection?” When a virtual machine is powered off, it must be removed from the protectedlist. We have documented this workflow in Figure 16. Figure 16: Unprotection workflow
• 42. Restarting Virtual Machines

In the previous chapter, we have described most of the lower level fundamental concepts of HA. We have shown you that multiple new mechanisms have been introduced to increase resiliency and reliability of HA. Reliability of HA in this case mostly refers to restarting virtual machines, as that remains HA's primary task. HA will respond when the state of a host has changed, or, better said, when the state of one or more virtual machines has changed. There are multiple scenarios in which HA will attempt to restart a virtual machine, of which we have listed the most common below:

- Failed host
- Isolated host
- Failed guest operating system

Depending on the type of failure, but also depending on the role of the host, the process will differ slightly. Changing the process results in slightly different recovery timelines. There are many different scenarios and there is no point in covering all of them, so we will try to describe the most common scenario and include timelines where possible. Before we dive into the different failure scenarios, we want to emphasize a couple of very substantial changes compared to vSphere pre-5.0 with regards to restart priority and retries. These apply to every situation we will describe.
• 43. Restart Priority and Order

Prior to vSphere 5.0, HA would take the priority of the virtual machine into account when a restart of multiple virtual machines was required. This, by itself, has not changed; HA will still take the configured priority of the virtual machine into account. However, with vSphere 5.0, a new type of virtual machine has been introduced: Agent Virtual Machines. These virtual machines typically offer a "service" to other virtual machines and, as such, take precedence during the restart procedure as the "regular" virtual machines may rely on them. A good example of an agent virtual machine is a vShield Endpoint virtual machine, which offers anti-virus services. These agent virtual machines are considered top priority virtual machines.

Prioritization is done by each host and not globally. Each host that has been requested to initiate restart attempts will attempt to restart all top priority virtual machines before attempting to start any other virtual machines. If the restart of a top priority virtual machine fails, it will be retried after a delay. In the meantime, however, HA will continue powering on the remaining virtual machines. Keep in mind that some virtual machines might be dependent on the agent virtual machines. You should document which virtual machines are dependent on which agent virtual machines, and document the process to start up these services in the right order in case the automatic restart of an agent virtual machine fails.

Basic design principle: Virtual machines can be dependent on the availability of agent virtual machines or other virtual machines. Although HA will do its best to ensure all virtual machines are started in the correct order, this is not guaranteed. Document the proper recovery process.

Besides agent virtual machines, HA also prioritizes FT secondary machines. We have listed the full order in which virtual machines will be restarted below:

- Agent virtual machines
- FT secondary virtual machines
- Virtual machines configured with a high restart priority
- Virtual machines configured with a medium restart priority
- Virtual machines configured with a low restart priority

Now that we have briefly touched on it, we would also like to address "restart retries" and parallelization of restarts, as that more or less dictates how long it could take before all virtual machines of a failed or isolated host are restarted.
• 44. Restart Retries

The number of retries is configurable as of vCenter 2.5 U4 with the advanced option "das.maxvmrestartcount". The default value is 5. Prior to vCenter 2.5 U4, HA would keep retrying forever, which could lead to serious problems. This scenario is described in KB article 1009625, where multiple virtual machines would be registered on multiple hosts simultaneously, leading to a confusing and inconsistent state. (http://kb.vmware.com/kb/1009625)

Note: Prior to vSphere 5.0, "das.maxvmrestartcount" did not include the initial restart, meaning that the total number of restart attempts was 6. As of vSphere 5.0, the initial restart is included in the value.

HA will try to start the virtual machine on one of your hosts in the affected cluster; if this is unsuccessful on that host, the restart count is increased by 1. Before we go into the exact timeline, let it be clear that T0 is the point at which the master initiates the first restart attempt. This by itself could be 30 seconds after the virtual machine has failed. The elapsed time between the failure of the virtual machine and the restart, though, will depend on the scenario of the failure, which we will discuss in this chapter. As said, prior to vSphere 5.0 the actual number of restart attempts was 6, as it excluded the initial attempt. With vSphere 5.0 the default is 5. There are specific times associated with each of these attempts. The following list clarifies the concept; the 'm' stands for "minutes".

- T0 – Initial restart
- T2m – Restart retry 1
- T6m – Restart retry 2
- T14m – Restart retry 3
- T30m – Restart retry 4

Figure 17: High Availability restart timeline
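The timeline above follows a simple doubling pattern: the wait between attempts grows from 2 to 4 to 8 to 16 minutes. The sketch below reproduces that schedule. The assumption that the doubling would continue for a raised das.maxvmrestartcount is an extrapolation from the documented values, not something stated in the text.

```python
def restart_schedule(max_restart_count=5, first_wait_minutes=2):
    """
    Reproduce the documented retry timeline: the wait between attempts
    doubles (2, 4, 8, 16 minutes), giving T0, T2m, T6m, T14m, T30m with the
    default das.maxvmrestartcount of 5. Continuing the doubling for larger
    counts is an assumption made for this sketch.
    """
    schedule, elapsed, wait = [0], 0, first_wait_minutes
    for _ in range(max_restart_count - 1):
        elapsed += wait
        schedule.append(elapsed)
        wait *= 2
    return schedule

print(restart_schedule())   # [0, 2, 6, 14, 30]  (minutes after T0)
```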
• 45. As shown above and clearly depicted in Figure 17, a successful power-on attempt could take up to ~30 minutes in the case where multiple power-on attempts are unsuccessful. This is, however, not exact science. For instance, there is a 2-minute waiting period between the initial restart and the first restart retry. HA will start the 2-minute wait as soon as it has detected that the initial attempt has failed. So, in reality, T2m could be 2 minutes and 8 seconds. Another important fact that we want to emphasize is that if a different master claims responsibility for the virtual machine during this sequence, the sequence will restart.

Let's give an example to clarify the scenario in which a master fails during a restart sequence:

Cluster: 4 hosts (esxi01, esxi02, esxi03, esxi04)
Master: esxi01

The host "esxi02" is running a single virtual machine called "vm01" and it fails. The master, esxi01, will try to restart it but the attempt fails. It will try restarting "vm01" up to 5 times but, unfortunately, on the 4th try, the master also fails. An election occurs and "esxi03" becomes the new master. It will now initiate the restart of "vm01", and if that restart fails it will retry up to 4 more times, for a total, including the initial restart, of 5. Be aware, though, that a successful restart might never occur if the restart count is reached and all five restart attempts (the default value) were
• 46. unsuccessful. When it comes to restarts, one thing that is very important to realize is that HA will not issue more than 32 concurrent power-on tasks on a given host. To make that more clear, let's use the example of a two-host cluster: if a host which contained 33 virtual machines fails and all of these had the same restart priority, 32 power-on attempts would be initiated. The 33rd power-on attempt will only be initiated when one of those 32 attempts has completed, regardless of the success or failure of that attempt. Now, here comes the gotcha. If there are 32 low-priority virtual machines to be powered on and a single high-priority virtual machine, the power-on attempt for the low-priority virtual machines will not be issued until the power-on attempt for the high-priority virtual machine has completed. Let it be absolutely clear that HA does not wait to restart the low-priority virtual machines until the high-priority virtual machines are started; it waits for the issued power-on attempt to be reported as "completed". In theory, this means that if the power-on attempt fails, the low-priority virtual machines could be powered on before the high-priority virtual machine. A small sketch of this ordering logic follows the scenario list below.

Basic design principle: Configuring restart priority of a virtual machine is not a guarantee that virtual machines will actually be restarted in this order. Ensure proper operational procedures are in place for restarting services or virtual machines in the appropriate order in the event of a failure.

Now that we know how virtual machine restart priority and restart retries are handled, it is time to look at the different scenarios:

- Failed host
  - Failure of a master
  - Failure of a slave
- Isolated host and response
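The sketch below models the ordering and concurrency rules just described: virtual machines are grouped by restart priority and power-on attempts are issued in groups of at most 32, with a lower priority level only starting once the attempts of the higher level have completed. Modeling the 32-task limit as fixed batches rather than a sliding window is a simplification, and the priority labels and VM names are purely illustrative.

```python
from itertools import groupby

# Restart order documented above; rank 0 is restarted first.
PRIORITY_RANK = {"agent": 0, "ft-secondary": 1, "high": 2, "medium": 3, "low": 4}
MAX_CONCURRENT_POWER_ONS = 32  # documented per-host cap

def power_on_waves(vms):
    """
    Yield successive waves of power-on attempts for one host.
    `vms` is a list of (vm_name, priority) tuples. A wave must complete
    (successfully or not) before the next wave is issued, which is why a
    failed high-priority attempt can let low-priority VMs start earlier.
    """
    ordered = sorted(vms, key=lambda vm: PRIORITY_RANK[vm[1]])
    for _, group in groupby(ordered, key=lambda vm: PRIORITY_RANK[vm[1]]):
        names = [name for name, _ in group]
        for i in range(0, len(names), MAX_CONCURRENT_POWER_ONS):
            yield names[i:i + MAX_CONCURRENT_POWER_ONS]

# Hypothetical example: 1 high-priority VM and 40 low-priority VMs.
example = [("db01", "high")] + [("web%02d" % i, "low") for i in range(40)]
for wave in power_on_waves(example):
    print(len(wave), "attempts:", wave[:3], "...")
```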
• 47. Failed Host

Prior to vSphere 5.0, the restart of virtual machines from a failed host was straightforward. With the introduction of master/slave hosts and heartbeat datastores in vSphere 5.0, the restart procedure has also changed, and with it the associated timelines. There is a clear distinction between the failure of a master and the failure of a slave. We want to emphasize this because the time it takes before a restart attempt is initiated differs between these two scenarios. Let's start with the most common failure, that of a host failing, but note that host failures generally occur infrequently; in most environments, hardware failures are very uncommon to begin with. Just in case it happens, it doesn't hurt to understand the process and its associated timelines.

The Failure of a Slave

This is a fairly complex scenario compared to how HA handled host failures prior to vSphere 5.0. Part of this complexity comes from the introduction of a new heartbeat mechanism. Actually, there are two different scenarios: one where heartbeat datastores are configured and one where heartbeat datastores are not configured. Keeping in mind that this is an actual failure of the host, the timeline is as follows:

- T0 – Slave failure
- T3s – Master begins monitoring datastore heartbeats for 15 seconds
- T10s – The host is declared unreachable and the master will ping the management network of the failed host. This is a continuous ping for 5 seconds
- T15s – If no heartbeat datastores are configured, the host will be declared dead
- T18s – If heartbeat datastores are configured, the host will be declared dead

The master monitors the network heartbeats of a slave. When the slave fails, these heartbeats will no longer be received by the master. We have defined this as T0. After 3 seconds (T3s), the master will start monitoring for datastore heartbeats and it will do this for 15 seconds. On the 10th second (T10s), when no network or datastore heartbeats have been detected, the host will be declared as "unreachable". The master will also start pinging the management network of the failed host at the 10th second and it will do so for 5 seconds. If no heartbeat datastores were configured, the host will be declared "dead" at the 15th second (T15s) and VM restarts will be initiated by the master. If heartbeat datastores have been configured, the host will be declared dead at the 18th second (T18s) and restarts will be initiated. We realize that this can be confusing and hope the timeline depicted in Figure 18 makes it easier to digest.

Figure 18: Restart timeline slave failure
• 48. That leaves us with the question of what happens in the case of the failure of a master.

The Failure of a Master

In the case of a master failure, the process and the associated timeline are slightly different. The reason is that there needs to be a master before any restart can be initiated. This means that an election will need to take place amongst the slaves. The timeline is as follows:

- T0 – Master failure
- T10s – Master election process initiated
- T25s – New master elected and reads the protectedlist
- T35s – New master initiates restarts for all virtual machines on the protectedlist which are not running

Slaves receive network heartbeats from their master. If the master fails, let's define this as T0, the slaves detect this when the network heartbeats cease to be received. As every cluster needs a master, the slaves will initiate an election at T10s. The election process takes 15 seconds to complete, which brings us to T25s. At T25s, the new master reads the protectedlist. This list contains all the virtual machines which are protected by HA. At T35s, the master initiates the restart of all virtual machines that are protected but not currently running. The timeline depicted in Figure 19 hopefully clarifies the process.

Figure 19: Restart timeline master failure
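Both timelines can be condensed into a small lookup that answers the question: how long after the failure (T0) does the master begin initiating restarts? The function below simply encodes the documented values; it is an illustration, not a measurement, and actual restart completion will of course take longer.

```python
def time_to_restart_initiation(failed_role, heartbeat_datastores_configured):
    """
    Seconds between a host failure (T0) and the point at which the (new)
    master begins restart attempts, per the timelines documented above.
    """
    if failed_role == "slave":
        # Declared dead at T15s without heartbeat datastores, T18s with them.
        return 18 if heartbeat_datastores_configured else 15
    if failed_role == "master":
        # T10s election starts, T25s new master reads the protectedlist,
        # T35s restarts are initiated.
        return 35
    raise ValueError("failed_role must be 'slave' or 'master'")

for role in ("slave", "master"):
    for hb in (False, True):
        print(role, hb, "->", time_to_restart_initiation(role, hb), "seconds")
```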
  • 49. Besides the failure of a host, there is another reason for restarting virtual machines: an isolation event.
• 50. Isolation Response and Detection

Before we discuss the timeline and the process around the restart of virtual machines after an isolation event, we will discuss Isolation Response and Isolation Detection. One of the first decisions that will need to be made when configuring HA is the "Isolation Response".

Isolation Response

The Isolation Response refers to the action that HA takes for its virtual machines when the host has lost its connection with the network and the remaining nodes in the cluster. This does not necessarily mean that the whole network is down; it could just be the management network ports of this specific host. Today there are three isolation responses: "Power off", "Leave powered on" and "Shut down". This isolation response answers the question, "what should a host do with the virtual machines it manages when it detects that it is isolated from the network?" Let's discuss these three options more in-depth:

- Power off – When isolation occurs, all virtual machines are powered off. It is a hard stop, or to put it bluntly, the "virtual" power cable of the virtual machine will be pulled out
- Shut down – When isolation occurs, all virtual machines running on the host will be shut down using a guest-initiated shutdown through VMware Tools. If this is not successful within 5 minutes, a "power off" will be executed. This timeout value can be adjusted by setting the advanced option das.isolationShutdownTimeout. If VMware Tools is not installed, a "power off" will be initiated immediately
- Leave powered on – When isolation occurs on the host, the state of the virtual machines remains unchanged

This setting can be changed in the cluster settings under virtual machine options (Figure 20).

Figure 20: Cluster default settings

The default setting for the isolation response has changed multiple times over the last couple of years and this has caused some confusion:

- Up to ESXi 3.5 U2 / vCenter 2.5 U2 the default isolation response was "Power off"
- With ESXi 3.5 U3 / vCenter 2.5 U3 this was changed to "Leave powered on"
- With vSphere 4.0 it was changed to "Shut down"
- With vSphere 5.0 it has been changed to "Leave powered on"

Keep in mind that these changes are only applicable to newly created clusters. When creating a new cluster, it may be required to change the default isolation response based on the configuration of existing clusters and/or your customer's requirements, constraints and expectations. When upgrading an existing cluster, it might be wise to apply the latest default values. You might wonder why the default has changed once again. There was a lot of feedback from customers that "Leave powered on" was the desired default value.

Basic design principle: Before upgrading an environment to later versions, ensure you validate the best practices and default settings. Document them, including justification, to ensure all people involved understand your reasons.

The question remains, which setting should be used? The obvious answer applies here; it depends. We prefer "Leave powered on" because it
• 51. eliminates the chances of having a false positive and its associated downtime. One of the problems that people experienced in the past is that HA triggered its isolation response when the full management network went down, basically resulting in the power-off (or shutdown) of every single virtual machine with none being restarted. With vSphere 5.0, this problem has been mitigated. HA will validate whether virtual machine restarts can be attempted – there is no reason to incur any downtime unless absolutely necessary. It does this by validating that a master owns the datastore the virtual machine is stored on. Of course, the isolated host can only validate this if it has access to the datastores. In a converged network environment with iSCSI storage, for instance, it would be impossible to validate this during a full isolation, as the validation would fail due to the inaccessible datastore from the perspective of the isolated host.

We feel that changing the isolation response is most useful in environments where a failure of the management network is likely correlated with a failure of the virtual machine network(s). If the failure of the management network won't likely correspond with the failure of the virtual machine networks, the isolation response would cause unnecessary downtime as the virtual machines can continue to run without management network connectivity to the host.

The question that we haven't answered yet is how HA knows which virtual machines have been powered off due to the triggered isolation response and why the isolation response is more reliable than with previous versions of HA. Previously, HA did not care and would always try to restart the virtual machines according to the last known state of the host. That is no longer the case with vSphere 5.0. Before the isolation response is triggered, the isolated host will verify whether a master is responsible for the virtual machine. As mentioned earlier, it does this by validating if a master owns the home datastore of the virtual machine. When the isolation response is triggered, the isolated host removes the virtual machines which are powered off or shut down from the "poweron" file. The master will recognize that the virtual machines have disappeared and initiate a restart. On top of that, when the isolation response is triggered, it will create a per-virtual machine file under a "poweredoff" directory which indicates for the master that this virtual machine was powered down as a result of a triggered isolation response. This information will be read by the master node when it initiates the restart attempt in order to guarantee that only virtual machines that were powered off / shut down by HA will be restarted by HA.

This is, however, only one part of the increased reliability of HA. Reliability has also been improved with respect to "isolation detection," which will be described in the following section.

Isolation Detection

We have explained what the options are to respond to an isolation event and what happens when the selected response is triggered. However, we have not extensively discussed how isolation is detected. The mechanism is fairly straightforward and works with heartbeats, as explained earlier.
There are, however, two scenarios again, and the process and associated timelines differ for each of them:

- Isolation of a slave
- Isolation of a master

Before we explain the differences in process between both scenarios, we want to make sure it is clear that a change in state will result in the isolation response not being triggered in either scenario. This means that if a single ping is successful or the host observes election traffic and is elected a master, the isolation response will not be triggered, which is exactly what you want, as avoiding downtime is at least as important as recovering from downtime.

Isolation of a Slave

The isolation detection mechanism has changed substantially since previous versions of vSphere. The main difference is the fact that HA triggers a master election process before it will declare a host isolated. In this timeline, "s" refers to seconds.

- T0 – Isolation of the host (slave)
- T10s – Slave enters "election state"
- T25s – Slave elects itself as master
- T25s – Slave pings "isolation addresses"
- T30s – Slave declares itself isolated and "triggers" isolation response
• 52. After the completion of this sequence, the master will learn the slave was isolated through the "poweron" file as mentioned earlier, and will restart virtual machines based on the information provided by the slave.

Isolation of a Master

In the case of the isolation of a master, this timeline is a bit less complicated because there is no need to go through an election process. In this timeline, "s" refers to seconds.

- T0 – Isolation of the host (master)
- T0 – Master pings "isolation addresses"
- T5s – Master declares itself isolated and "triggers" isolation response

Additional Checks

Before a host triggers the isolation response, it will ping the default isolation address, which is the gateway specified for the management network. HA gives you the option to define one or multiple additional isolation addresses using an advanced setting. This advanced setting is called das.isolationaddress and can be used to reduce the chances of having a false positive. We recommend setting an additional isolation address when a secondary management network is configured. If required, you can configure up to 10 additional isolation addresses. A secondary management network will more than likely be on a different subnet and it is recommended to specify an additional isolation address which is part of that subnet (Figure 21).

Figure 21: Isolation Address
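The "pings isolation addresses" step boils down to: try the default gateway plus every configured das.isolationaddress, and only consider the host isolated when none of them respond. The sketch below illustrates that logic using the operating system's ping command; it assumes a Linux-style ping binary, and the addresses shown are placeholders rather than recommendations.

```python
import subprocess

def can_ping(address, timeout_seconds=1):
    """Single ICMP echo; assumes a Linux-style `ping` binary is available."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", str(timeout_seconds), address],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return result.returncode == 0

def appears_isolated(default_gateway, additional_isolation_addresses=()):
    """
    Illustrative check only: a host would consider itself isolated when the
    default isolation address (the management network gateway) and every
    additional das.isolationaddress entry fail to respond. A single
    successful ping is enough to avoid triggering the isolation response.
    """
    candidates = [default_gateway, *additional_isolation_addresses]
    return not any(can_ping(addr) for addr in candidates)

# Placeholder addresses; substitute your own gateway / isolation addresses.
print(appears_isolated("192.168.1.1", ["192.168.2.1"]))
```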
• 53. Selecting an Additional Isolation Address

A question asked by many people is which address should be specified for this additional isolation verification. We generally recommend an isolation address close to the hosts to avoid too many network hops and an address that would correlate with the liveness of the virtual machine network. In many cases, the most logical choice is the physical switch to which the host is directly connected. Basically, use the gateway for whatever subnet your management network is on. Another usual suspect would be a router or any other reliable and pingable device on the same subnet. However, when you are using IP-based shared storage like NFS or iSCSI, another good choice would be the IP-address of the storage device.

Basic design principle: Select a reliable secondary isolation address. Try to minimize the number of "hops" between the host and this address.
  • 54. Failure Detection Time Those who are familiar with vSphere 4.x or VI 3.x will probably wonder by now what happened to the concept of “Failure Detection Time”. Prior to vSphere 5.0, “das.failuredetectiontime” was probably the most used advanced setting within vSphere. As of vSphere 5.0, it is no longer possible to configure this advanced setting. Let there be no misunderstanding here: it has been completely removed and it is not possible anymore to influence the timelines for Failure Detection and Isolation Response. This advanced setting is no longer supported because of the additional resiliency provided by both datastore heartbeating and the additional isolation checks.
• 55. Restarting Virtual Machines

The most important procedure has not yet been explained: restarting virtual machines. We have dedicated a full section to this concept as, again, substantial changes have been introduced in vSphere 5.0. We have explained the difference in behavior from a timing perspective for restarting virtual machines in the case of both master node and slave node failures. For now, let's assume that a slave node has failed. When the master node declares the slave node as Partitioned or Isolated, it determines which virtual machines were running on the host at the time of isolation by reading the "poweron" file. If the host was not Partitioned or Isolated before the failure, the master uses cached data about the virtual machines that were last running on the host before the failure occurred.

Before the master proceeds with initiating restarts, it will wait for roughly 10 seconds. It does this in order to aggregate virtual machines from possibly other failed hosts. Before it initiates the restart attempts, though, the master will first validate that it owns the home datastores of the virtual machines it needs to restart. If, by any chance, the master node does not have a lock on a datastore, it will filter out those particular virtual machines. At this point, all virtual machines having a restart priority of "disabled" are also filtered out.

Now that HA knows which virtual machines it should restart, it is time to decide where the virtual machines are placed. HA will take multiple things into account:

- CPU and memory reservation, including the memory overhead of the virtual machine
- Unreserved capacity of the hosts in the cluster
- Restart priority of the virtual machine relative to the other virtual machines that need to be restarted
- Virtual-machine-to-host compatibility set
- The number of dvPorts required by a virtual machine and the number available on the candidate hosts
- The maximum number of vCPUs and virtual machines that can be run on a given host
- Restart latency

Restart latency refers to the amount of time it takes to initiate virtual machine restarts. This means that virtual machine restarts will be distributed by the master across multiple hosts to avoid a boot storm, and thus a delay, on a single host.

If a placement is found, the master will send each target host the set of virtual machines it needs to restart. If this list exceeds 32 virtual machines, HA will limit the number of concurrent power-on attempts to 32. If a virtual machine successfully powers on, the node on which the virtual machine was powered on will inform the master of the change in power state. The master will then remove the virtual machine from the restart list. If a placement cannot be found, the master will place the virtual machine on a "pending placement list" and will retry placement of the virtual machine when one of the following conditions changes:

- A new virtual-machine-to-host compatibility list is provided by vCenter
- A host reports that its unreserved capacity has increased
- A host (re)joins the cluster (for instance, when a host is taken out of maintenance mode, a host is added to a cluster, etc.)
- A new failure is detected and virtual machines have to be failed over
- A failure occurred when failing over a virtual machine

But what about DRS? Wouldn't DRS be able to help during the placement of virtual machines when all else fails? It does. The master node will report to vCenter the set of virtual machines that still need placement, as is the case today.
If DRS is enabled, this information will be used in an attempt to have DRS make capacity available. This is described more in-depth in Chapter 27.
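A simplified version of the placement filtering described above is sketched below: given one virtual machine and a list of candidate hosts, it drops the VM if the master holds no lock on its home datastore or if its restart priority is disabled, and then keeps only the hosts that satisfy the compatibility, capacity, dvPort and VM-count constraints. All field names are illustrative assumptions; the real agent evaluates these constraints internally and additionally spreads restarts across hosts to limit restart latency.

```python
def candidate_hosts_for_vm(vm, hosts, master_owned_datastores):
    """Return the names of hosts on which this VM could be restarted."""
    if vm["home_datastore"] not in master_owned_datastores:
        return []  # master holds no lock on the datastore; VM is filtered out
    if vm["restart_priority"] == "disabled":
        return []  # restart disabled; VM is filtered out

    def fits(host):
        # vCPU limits are omitted for brevity; the text lists them as well.
        return (host["name"] in vm["compatible_hosts"]
                and host["unreserved_cpu_mhz"] >= vm["cpu_reservation_mhz"]
                and host["unreserved_mem_mb"] >= vm["mem_reservation_mb"] + vm["mem_overhead_mb"]
                and host["free_dvports"] >= vm["required_dvports"]
                and host["running_vms"] < host["max_vms"])

    return [host["name"] for host in hosts if fits(host)]

# Hypothetical input data for illustration only.
vm = {"home_datastore": "ds01", "restart_priority": "medium",
      "compatible_hosts": {"esxi03", "esxi04"}, "cpu_reservation_mhz": 1000,
      "mem_reservation_mb": 2048, "mem_overhead_mb": 150, "required_dvports": 2}
hosts = [{"name": "esxi03", "unreserved_cpu_mhz": 8000, "unreserved_mem_mb": 16000,
          "free_dvports": 64, "running_vms": 20, "max_vms": 512},
         {"name": "esxi04", "unreserved_cpu_mhz": 500, "unreserved_mem_mb": 1024,
          "free_dvports": 64, "running_vms": 20, "max_vms": 512}]
print(candidate_hosts_for_vm(vm, hosts, {"ds01", "ds02"}))   # ['esxi03']
```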
• 57. Corner Case Scenario: Split-Brain

In the past (pre-vSphere 4.1), split-brain scenarios could occur. A split-brain in this case means that a virtual machine would be powered up simultaneously on two different hosts. That would be possible in the scenario where the isolation response was set to "Leave powered on" and network-based storage, like NFS or iSCSI, was used. This situation could occur during a full network isolation, which may result in the lock on the virtual machine's VMDK being lost, enabling HA to actually power up the virtual machine. As the virtual machine was not powered off on its original host (isolation response set to "Leave powered on"), it would exist in memory on the isolated host and in memory with a disk lock on the host that was requested to restart the virtual machine.

vSphere 4.1 and vSphere 5.0 brought multiple enhancements to avoid scenarios like these. Keep in mind that they truly are corner case scenarios which are very unlikely to occur in most environments. In case it does happen, HA relies on the "lost lock detection" mechanism to mitigate this scenario. In short, as of ESXi 4.0 Update 2, ESXi detects that the lock on the VMDK has been lost and issues a question asking whether the virtual machine should be powered off; HA automatically answers the question with Yes. However, you will only see this question if you directly connect to the ESXi host during the failure. HA will generate an event for this auto-answered question, though, which is viewable within vCenter. Below you can find a screenshot of this question.

Figure 22: Virtual machine message

As stated above, as of ESXi 4.0 Update 2, the question will be auto-answered and the virtual machine will be powered off to recover from the split-brain scenario. The question still remains: in the case of an isolation with iSCSI or NFS, should you power off virtual machines or leave them powered on? As just explained, HA will automatically power off your original virtual machine when it detects a split-brain scenario. As such, it is perfectly safe to use the default isolation response of "Leave VM Powered On" and this is also what we recommend. We do, however, recommend increasing heartbeat network resiliency to avoid getting into this situation. We will discuss the options you have for enhancing Management Network resiliency in the next chapter.
• 59. Adding Resiliency to HA

In the previous chapter we extensively covered Isolation Detection, which triggers the selected Isolation Response, and the impact of a false positive. The Isolation Response enables HA to restart virtual machines when "Power off" or "Shut down" has been selected and the host becomes isolated from the network. However, this also means that it is possible that, without proper redundancy, the Isolation Response may be unnecessarily triggered. This leads to downtime and should be prevented.

To increase resiliency for networking, VMware implemented the concept of NIC teaming in the hypervisor for both VMkernel and VM networking. When discussing HA, this is especially important for the Management Network. "NIC teaming is the process of grouping together several physical NICs into one single logical NIC, which can be used for network fault tolerance and load balancing." Using this mechanism, it is possible to add redundancy to the Management Network to decrease the chances of an isolation event. This is, of course, also possible for other "Portgroups" but that is not the topic of this chapter or book. Another option is configuring an additional Management Network by enabling the "management network" tick box on another VMkernel port. A little-understood fact is that if there are multiple VMkernel networks on the same subnet, HA will use all of them for management traffic, even if only one is specified for management traffic!

Although there are many configurations possible and supported, we recommend a simple but highly resilient configuration. We have included the vMotion (VMkernel) network in our example as combining the Management Network and the vMotion network on a single vSwitch is the most commonly used configuration and an industry accepted best practice.

Requirements:
- 2 physical NICs
- VLAN trunking

Recommended:
- 2 physical switches
- If available, enable "link state tracking" to ensure link failures are reported

The vSwitch should be configured as follows:
- vSwitch0: 2 physical NICs (vmnic0 and vmnic1)
- 2 Portgroups (Management Network and vMotion VMkernel)
- Management Network active on vmnic0 and standby on vmnic1
- vMotion VMkernel active on vmnic1 and standby on vmnic0
- Failback set to No

Each portgroup has a VLAN ID assigned and runs dedicated on its own physical NIC; only in the case of a failure is it switched over to the standby NIC. We highly recommend setting failback to "No" to avoid the chance of an unwanted isolation event, which can occur when a physical switch routes no traffic during boot but the ports are reported as "up". (NIC Teaming Tab)
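To make the recommended layout explicit, the snippet below captures it as a small data structure and checks the invariants described above: each portgroup uses a different active NIC, its standby NIC is another uplink of the vSwitch, and failback is disabled. This is purely a documentation and validation aid with assumed VLAN IDs; it is not something HA or ESXi consumes.

```python
RECOMMENDED_VSWITCH0 = {
    "uplinks": ["vmnic0", "vmnic1"],
    "portgroups": {
        "Management Network": {"active": "vmnic0", "standby": "vmnic1",
                               "failback": False, "vlan": 10},   # VLAN IDs are placeholders
        "vMotion":            {"active": "vmnic1", "standby": "vmnic0",
                               "failback": False, "vlan": 20},
    },
}

def validate_layout(vswitch):
    """Check the invariants of the recommended active/standby design."""
    problems = []
    for name, pg in vswitch["portgroups"].items():
        if pg["active"] == pg["standby"]:
            problems.append(f"{name}: active and standby must be different NICs")
        if pg["failback"]:
            problems.append(f"{name}: failback should be set to No")
        for nic in (pg["active"], pg["standby"]):
            if nic not in vswitch["uplinks"]:
                problems.append(f"{name}: {nic} is not an uplink of this vSwitch")
    actives = {pg["active"] for pg in vswitch["portgroups"].values()}
    if len(actives) < len(vswitch["portgroups"]):
        problems.append("each portgroup should run on its own dedicated active NIC")
    return problems or ["layout matches the recommended design"]

print(validate_layout(RECOMMENDED_VSWITCH0))
```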
• 60. Pros:
- Only 2 NICs in total are needed for the Management Network and vMotion VMkernel, which is especially useful in blade server environments
- Easy to configure

Cons:
- Just a single active path for heartbeats

The following diagram depicts this active/standby scenario:

Figure 23: Active-Standby Management Network design

To increase resiliency, we also recommend implementing the following advanced settings and using NIC ports on different PCI buses – preferably NICs of a different make and model. When using a different make and model, even a driver failure could be mitigated.

Advanced Settings:
das.isolationaddressX = <ip-address>

The isolation address setting is discussed in more detail in Chapter 4. In short, it is the IP address that the HA agent pings to identify if the host is completely isolated from the network or just not receiving any heartbeats. If multiple VMkernel networks on different subnets are used, it is recommended to set an isolation address per network to ensure that each of these will be able to validate isolation of the host. Prior to vSphere 5.0, it was also recommended to change "das.failuredetectiontime". This advanced setting has been deprecated, as discussed in Chapter 4.

Basic design principle: Take advantage of some of the basic features vSphere has to offer like NIC teaming. Combining different physical NICs will increase overall resiliency of your solution.
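If you automate cluster configuration, the das.isolationaddressX advanced options can also be set programmatically. Below is a hedged pyVmomi sketch under the following assumptions: `si` is an already-established ServiceInstance connection, the cluster name and IP addresses are placeholders to be replaced, and error handling and task monitoring are omitted for brevity. Verify the behavior against your pyVmomi and vCenter versions before relying on it.

```python
# Sketch only: sets das.isolationaddressX advanced options on an HA cluster.
from pyVmomi import vim

def set_isolation_addresses(si, cluster_name, addresses):
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    cluster = next(c for c in view.view if c.name == cluster_name)

    options = [vim.option.OptionValue(key="das.isolationaddress%d" % i, value=addr)
               for i, addr in enumerate(addresses)]
    spec = vim.cluster.ConfigSpecEx(
        dasConfig=vim.cluster.DasConfigInfo(option=options))
    # modify=True merges this change with the existing cluster configuration.
    return cluster.ReconfigureComputeResource_Task(spec, modify=True)

# Example with placeholder values:
# set_isolation_addresses(si, "Cluster01", ["192.168.1.1", "192.168.2.1"])
```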
• 61. Link State Tracking

This was already briefly mentioned in the list of recommendations, but this feature is something we would like to emphasize. We have noticed that people often forget about this even though many switches offer this capability, especially in blade server environments. Link state tracking will mirror the state of an upstream link to a downstream link. Let's clarify that with a diagram.

Figure 24: Link State tracking mechanism

Figure 24 depicts a scenario where an uplink of a "Core Switch" has failed. Without Link State Tracking, the connection from the "Edge Switch" to vmnic0 will be reported as up. With Link State Tracking enabled, the state of the link on the "Edge Switch" will reflect the state of the link of the "Core Switch" and as such be marked as "down". You might wonder why this is important, but think about it for a second. Many features that vSphere offers rely on networking, and so do your virtual machines. In the case where the state is not reflected, some functionality might just fail; for instance, network heartbeating could fail if it needs to flow through the core switch. We call this a 'black hole' scenario: the host sends traffic down a path that it believes is up, but the traffic never reaches its destination due to the failed upstream link.

Basic design principle: Know your network environment, talk to the network administrators and ensure advanced features like Link State Tracking are used when possible to increase resiliency.
• 63. Admission Control

Admission Control is more than likely the most misunderstood concept vSphere holds today and because of this it is often disabled. However, Admission Control is a must when availability needs to be guaranteed – and isn't that the reason for enabling HA in the first place?

What is HA Admission Control about? Why does HA contain this concept called Admission Control? The "Availability Guide", a.k.a. the HA bible, states the following:

"vCenter Server uses admission control to ensure that sufficient resources are available in a cluster to provide failover protection and to ensure that virtual machine resource reservations are respected."

Please read that quote again and especially the first two words. Indeed it is vCenter Server that is responsible for Admission Control, contrary to what many believe. Although this might seem like a trivial fact, it is important to understand that this implies that Admission Control will not disallow HA initiated restarts. HA initiated restarts are done on a host level and not through vCenter.

As said, Admission Control guarantees that capacity is available for an HA initiated failover by reserving resources within a cluster. It calculates the capacity required for a failover based on available resources. In other words, if a host is placed into maintenance mode or disconnected, it is taken out of the equation. This also implies that if a host has failed or is not responding but has not been removed from the cluster, it is still included in the equation. "Available Resources" indicates that the virtualization overhead has already been subtracted from the total amount. To give an example: VMkernel memory is subtracted from the total amount of memory to obtain the amount of memory available for virtual machines.

There is one gotcha with Admission Control that we want to bring to your attention before drilling into the different policies. When Admission Control is enabled, HA will in no way violate availability constraints. This means that it will always ensure multiple hosts are up and running, and this applies to manual maintenance mode actions and, for instance, to VMware Distributed Power Management. So, if a host is stuck trying to enter Maintenance Mode, remember that it might be HA which is not allowing Maintenance Mode to proceed as it would violate the Admission Control Policy. In this situation, users can manually vMotion virtual machines off the host or temporarily disable admission control to allow the operation to proceed.

With vSphere 4.1 and prior, disabling Admission Control while DPM was enabled could lead to a serious impact on availability. When Admission Control was disabled, DPM could place all hosts except for one in standby mode to reduce total power consumption. This could lead to issues in the event that this single host would fail. As of vSphere 5.0, this behavior has changed: when DPM is enabled, HA will ensure that there are always at least two hosts powered up for failover purposes. As of vSphere 4.1, DPM is also smart enough to take hosts out of standby mode to ensure enough resources are available to provide for HA initiated failovers. If by any chance the resources are not available, HA will wait for these resources to be made available by DPM and then attempt the restart of the virtual machines. In other words, the retry count (5 retries by default) is not wasted in scenarios like these.
If you are still using an older version of vSphere or, god forbid, VI3, please understand that you could end up with all but one ESXi host placed in standby mode, which could lead to potential issues when that particular host fails or resources are scarce as there will be no host available to power on your virtual machines. This situation is described in the following knowledge base article: http://kb.vmware.com/kb/1007006.
• 64. Admission Control Policy

The Admission Control Policy dictates the mechanism that HA uses to guarantee enough resources are available for an HA initiated failover. This section gives a general overview of the available Admission Control Policies. The impact of each policy is described in the following section, including our recommendation. HA has three mechanisms to guarantee enough capacity is available to respect virtual machine resource reservations.

Figure 25: Admission control policy

Below we have listed all three options currently available as the Admission Control Policy. Each option has a different mechanism to ensure resources are available for a failover and each option has its caveats.
• 65. Admission Control Mechanisms

Each Admission Control Policy has its own Admission Control mechanism. Understanding each of these Admission Control mechanisms is important to appreciate the impact each one has on your cluster design. For instance, setting a reservation on a specific virtual machine can have an impact on the achieved consolidation ratio. This section will take you on a journey through the trenches of Admission Control Policies and their respective mechanisms and algorithms.

Host Failures Cluster Tolerates

The Admission Control Policy that has been around the longest is the "Host Failures Cluster Tolerates" policy. It is also historically the least understood Admission Control Policy due to its complex admission control mechanism. Although the "Host Failures Cluster Tolerates" admission control mechanism itself hasn't changed, a limitation has been removed. Pre-vSphere 5.0, the maximum number of host failures that could be tolerated was 4, due to the primary/secondary node mechanism. In vSphere 5.0, this mechanism has been replaced with a master/slave node mechanism and it is possible to plan for N-1 host failures. In the case of a 32-host cluster, you could potentially set "Host failures the cluster tolerates" to 31.

Figure 26: A new maximum for Host Failures

The so-called "slots" mechanism is used when "Host failures the cluster tolerates" has been selected as the Admission Control Policy. The details of this mechanism have changed several times in the past and it is one of the most restrictive policies; more than likely, it is also the least understood. Slots dictate how many virtual machines can be powered on before vCenter starts yelling "Out Of Resources!" Normally, a slot represents one virtual machine. Admission Control does not limit HA in restarting virtual machines; it ensures enough unfragmented resources are available to power on all virtual machines in the cluster by preventing "over-commitment". Technically speaking, "over-commitment" is not the correct terminology as Admission Control ensures virtual machine reservations can be satisfied and that all virtual machines' initial memory overhead requirements are met. Although we have already touched on this, it doesn't hurt repeating it as it is one of those myths that keeps coming back: HA initiated failovers are not prone to the Admission Control Policy. Admission Control is done by vCenter Server. HA initiated restarts, in a normal scenario, are executed directly on the ESXi host without the use of vCenter. The corner case is where HA requests DRS (DRS is a vCenter task!) to defragment resources, but that is beside the point. Even if resources are low and vCenter would complain, it couldn't stop the restart from happening.

Let's dig into this concept we have just introduced: slots. "A slot is defined as a logical representation of the memory and CPU resources that satisfy the requirements for any powered-on virtual machine in the cluster." In other words, a slot is the worst-case CPU and memory reservation scenario in a cluster. This directly leads to the first "gotcha." HA uses the highest CPU reservation of any given virtual machine and the highest memory reservation of any given VM in the cluster. If no reservation higher than 32 MHz is set, HA will use a default of 32 MHz for CPU. Note that this behavior has changed: pre-vSphere 5.0 the default
• 66. value was 256 MHz. This has changed as some felt that 256 MHz was too aggressive. If no memory reservation is set, HA will use a default of 0 MB plus memory overhead for memory. (See the VMware vSphere Resource Management Guide for more details on memory overhead per virtual machine configuration.) The following example will clarify what "worst-case" actually means.

Example: If virtual machine "VM1" has 2 GHz of CPU reserved and 1024 MB of memory reserved, and virtual machine "VM2" has 1 GHz of CPU reserved and 2048 MB of memory reserved, the slot size for memory will be 2048 MB (plus its memory overhead) and the slot size for CPU will be 2 GHz. It is a combination of the highest reservation of both virtual machines that leads to the total slot size. Reservations defined at the Resource Pool level, however, will not affect HA slot size calculations.

Basic design principle: Be really careful with reservations; if there's no need to have them on a per-virtual machine basis, don't configure them, especially when using the Host Failures Cluster Tolerates policy. If reservations are needed, resort to resource pool based reservations.

Now that we know the worst-case scenario is always taken into account when it comes to slot size calculations, we will describe what dictates the number of available slots per cluster, as that ultimately dictates how many virtual machines can be powered on in your cluster. First, we will need to know the slot size for memory and CPU. Next, we will divide the total available CPU resources of a host by the CPU slot size and the total available memory resources of a host by the memory slot size. This leaves us with a total number of slots for both memory and CPU for a host. The most restrictive number (worst-case scenario) is the number of slots for this host. In other words, when you have 25 CPU slots but only 5 memory slots, the number of available slots for this host will be 5, as HA always takes the worst-case scenario into account to "guarantee" all virtual machines can be powered on in case of a failure or isolation.

The question we receive a lot is: how do I know what my slot size is? The details around slot sizes can be monitored in the HA section of the Cluster's Summary tab by clicking the "Advanced Runtime Info" line when the "Host Failures" Admission Control Policy is configured.

Figure 27: High Availability cluster summary tab

Clicking "Advanced Runtime Info" will show the specifics of the slot size and more useful details, such as the number of slots available, as depicted in Figure 28. Please note that although the number of vCPUs is listed in the Slot size section, it is not factored in by default. Although this used to be the case prior to VI 3.5 Update 2, it often led to an overly conservative result and as such has been changed.

Figure 28: High Availability advanced runtime info
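The slot arithmetic described above is easy to express in a few lines. The sketch below computes the worst-case slot size from per-VM reservations (with the 32 MHz CPU floor), then derives a host's slot count as the more restrictive of its CPU and memory slot counts. The reservation and overhead numbers are illustrative; the full admission control calculation also accounts for the number of host failures the cluster tolerates, which is outside the scope of this sketch.

```python
import math

def slot_size(vms, cpu_default_mhz=32):
    """
    Worst-case slot size as described above: the highest CPU reservation and
    the highest memory reservation (+ overhead) of any powered-on VM, with
    32 MHz as the CPU floor when no larger reservation exists.
    """
    cpu_slot = max([cpu_default_mhz] + [vm["cpu_reservation_mhz"] for vm in vms])
    mem_slot = max(vm["mem_reservation_mb"] + vm["mem_overhead_mb"] for vm in vms)
    return cpu_slot, mem_slot

def slots_per_host(host_available_cpu_mhz, host_available_mem_mb, cpu_slot, mem_slot):
    """A host's slot count is the more restrictive of its CPU and memory slot counts."""
    return min(math.floor(host_available_cpu_mhz / cpu_slot),
               math.floor(host_available_mem_mb / mem_slot))

# Example from the text: VM1 reserves 2 GHz / 1024 MB, VM2 reserves 1 GHz / 2048 MB.
# The memory overhead values are placeholders.
vms = [{"cpu_reservation_mhz": 2000, "mem_reservation_mb": 1024, "mem_overhead_mb": 100},
       {"cpu_reservation_mhz": 1000, "mem_reservation_mb": 2048, "mem_overhead_mb": 150}]
cpu_slot, mem_slot = slot_size(vms)                       # 2000 MHz, 2198 MB
print(cpu_slot, mem_slot)
print(slots_per_host(20000, 32000, cpu_slot, mem_slot))   # min(10, 14) = 10
```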