WHITE PAPER

DELL EMC XTREMIO X2 ALL-FLASH STORAGE WITH MICROSOFT WINDOWS SERVER AND HYPER-V 2016

Abstract
This white paper describes the components, design, functionality, and advantages of hosting Microsoft Windows Server and Hyper-V 2016-based enterprise infrastructures on the Dell EMC XtremIO X2 All-Flash storage array.

January 2018

© 2018 Dell Inc. or its subsidiaries.
Contents

Abstract
Executive Summary
    Audience
    Business Case
    Overview
Solution Overview
Dell EMC XtremIO X2 for Hyper-V Environments
    XtremIO X2 Overview
    Architecture
    Multi-Dimensional Scaling
    XIOS and the I/O Flow
        XtremIO Write I/O Flow
        XtremIO Read I/O Flow
    System Features
    XtremIO Management Server
XtremIO X2 Integration with Microsoft Technologies
    Offloaded Data Transfer (ODX)
    Resilient File System (ReFS)
    MPIO Best Practices
    PowerPath Multipathing with XtremIO
    System Center Virtual Machine Manager
    Failover Clustering
    Use of Cluster Shared Volumes in a Failover Cluster
    Storage Quality of Service
    Space Reclamation
    Virtual Hard Disks
        Virtual Hard Disk Format
    Using XtremIO X2 for Hyper-V VMs and Storage Migration
EMC Storage Integrator (ESI) 5.1
XtremLib PowerShell Module 2.0
Enabling Integrated Copy Data Management with XtremIO X2 and AppSync 3.5
Conclusion
References
How to Learn More
Executive Summary
Microsoft® Hyper-V® and Dell EMC XtremIO X2™ are feature-rich solutions that together provide a diverse range of
configurations to solve key business objectives such as performance and resiliency.
This white paper describes the components, design, and functionality of a Hyper-V Cluster managed by System Center
VMM 2016 running consolidated virtualized enterprise applications hosted on DELL EMC XtremIO X2 All-Flash storage
array.
It discusses and highlights the advantages to users whose enterprise IT operations and applications are already
virtualized, or who are considering hosting virtualized enterprise application deployments on DELL EMC XtremIO X2 All-
Flash array. The primary issues examined include:
• Performance of consolidated virtualized enterprise applications
• Management and monitoring efficiencies
• Business continuity and copy data management considerations
Audience
This white paper is intended for those actively involved in the management, deployment, and assessment of storage
arrays and virtualized solutions in an organization. This audience comprises, amongst others: storage and virtualization
administrators involved in management and operational activities, data center architects with responsibilities for planning
and designing data center infrastructures, as well as system administrators and storage engineers in charge of deployment
and integration.
Business Case
Data centers are mission-critical in any large enterprise and typically cost tens to hundreds of millions of dollars to build.
As the data center is one of the most financially concentrated assets of any business, ensuring the appropriate
infrastructural elements and functionality are present is fundamental to the success of any organization. Failure to do so
will mean that an appropriate return on investment may not be realized and the ability of the business to execute at
desired levels will be in jeopardy.
Enterprise data center infrastructures typically comprise a mix of networking infrastructure, compute and data storage
layers, as well as the management harness required to ensure desired interoperability, management and orchestration.
The decisions faced when choosing the appropriate solutions regarding the various infrastructure components are non-
trivial and have the potential to impact a business long after the initial purchase or integration phase has been completed.
For this reason, it is imperative that those components which are evaluated and eventually utilized as the building blocks
within a modern data center are verified for both functionality and reliability.
Storage performs a key function within any enterprise data center environment, as almost all applications will require
some form of persistent storage to perform their designated tasks. The ever-increasing importance of the storage layer
within the modern data center can be considered in terms of the continuing improvements in performance and capacity
densities of modern data-aware storage platforms. This increased density of modern storage capabilities enables ongoing
consolidation of business-critical workloads in the modern storage array, thereby increasing their overall prominence as a
focal point for the success of the data center.
The Dell EMC XtremIO X2 All-Flash storage arrays can service millions of IOPS while maintaining consistently low
latency. At the same time, since customers are consolidating and replacing multiple legacy arrays with higher
performing and higher density modern All-Flash storage arrays, these systems also need to provide excellent data
protection and extreme high availability. The Dell EMC XtremIO X2 platform is one such storage solution that satisfies
multiple use cases, due to its flash-centric architecture and ability to seamlessly integrate and scale in unison as the data
center grows to meet ever increasing data storage and access requirements.
Overview
This paper primarily focuses on the Microsoft Hyper-V based data center. Microsoft recently released Windows Server
2016 and Windows Hyper-V 2016. These releases include a large number of new features designed to enhance the
capabilities and integration options available to enterprise data centers. A number of these improvements will be
referenced when discussing the building blocks of a Microsoft All-Flash data center. In addition, a number of
corresponding improvements relevant to the end users of the Dell EMC XtremIO X2 array in relation to Microsoft based
infrastructures will be discussed throughout this document.
With XtremIO X2, data center administrators have the ability to consolidate entire environments of transactional
databases, data warehouses, and business applications into a much smaller footprint and still deliver consistent and
predictable performance to the supported applications. XtremIO X2 allows IT departments to rapidly deliver I/O throughput
and capacity to the enterprise data center, helps reduce database sprawl, supports rapid provisioning, and creates
automated centralized management for databases and virtual workloads. This paper describes these various enterprise
functionalities along with the multiple integration mechanisms available to adopters of Microsoft modern data centers that
choose to adopt Dell EMC’s best of breed All-Flash storage. Combinations of these capabilities will be discussed
throughout the document.
This white paper explains to those involved in the planning or management of a Microsoft data center how XtremIO X2
All-Flash arrays fit in the enterprise solution, and what they should be aware of in relation to available design and
operational efficiencies. We ask the reader to consider this information and use it to make appropriate choices in the
design and deployment of their own data center infrastructures.
Potential challenges surrounding data storage and security prevent many organizations from tapping into the promise of
the storage utility provided by public cloud providers. That is not to say that these organizations will forgo the obvious
benefits of cloud deployment models and orchestration capabilities; it only means that they will look for internal resources
to deliver similar efficiencies with quantifiable gains. To realize these benefits, IT organizations are tasked with the
identification and validation of components and solutions to satisfy the internal drive for greater efficiencies and increased
business value.
Scale, budget, and functionality will be the determining factors in making the correct choice for any IT infrastructure
project. That said, once the required deterministic performance levels are understood, the choices become increasingly
straightforward. For enterprise architectures designed to host business critical applications and end-user environments,
the obvious choice is to examine and plan for the integration of shared All-Flash storage.
As time passes, a deeper understanding of the benefits All-Flash storage provides for enterprise environments is
becoming more prevalent across the industry. Early adopters and recent arrivals to the All-Flash revolution are now the
majority, with more choosing to deploy All-Flash every day. There are many reasons for this, and although each
deployment has its unique considerations, the underlying motive is that All-Flash storage solves today's modern
data center storage problems and offers significant new efficiencies. Intelligent All-Flash storage offers capabilities which
improve existing processes and define tomorrow’s business workflows. As the clear market leader for All-Flash storage,
XtremIO X2 leads this revolution, and also carries the mantle as Dell EMC's (and the wider storage industry’s) fastest-
growing product ever.
Throughout this paper, information concerning the Dell EMC XtremIO X2 All-Flash array’s ability to meet and exceed the
requirements of a modern Microsoft virtualized data center will be highlighted, explained, and demonstrated. As more IT
organizations choose to deploy their data center solutions using Microsoft virtualization technologies, Dell EMC XtremIO
X2 continues to expand the available integration capabilities for these environments, while still providing best-in-class
performance, reliability, and storage management simplicity. The following sections describe the relevant technologies,
highlight the most important features and functionality, and show the reader how these can combine to deliver value to a
customer’s data center and business.
Solution Overview
The Windows Server platform leverages Hyper-V as its virtualization technology. Initially offered with Windows Server 2008,
Hyper-V has matured with each release to include many new features and enhancements.
Microsoft Hyper-V has evolved to become a mature, robust, proven virtualization platform. At a basic level, it is a layer of
software that presents the physical host server hardware resources in an optimized and virtualized manner to one or more
guest virtual machines (VMs). Hyper-V hosts (also referred to as nodes when clustered) can greatly enhance utilization of
physical hardware (such as processors, memory, NICs, and power supplies) by allowing many VMs to share these
resources at the same time. Hyper-V Manager and related management tools such as Failover Cluster Manager,
Microsoft System Center Virtual Machine Manager (SCVMM), and PowerShell, offer administrators greater control and
flexibility for managing host and VM resources.
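All of these management tools are scriptable. As a minimal illustration, the following sketch uses the in-box Hyper-V PowerShell module to provision and start a VM on a Cluster Shared Volume; the VM name, paths, sizes, and switch name are hypothetical placeholders rather than values from this solution:

    # Create a dynamically expanding VHDX on a CSV (path is hypothetical).
    New-VHD -Path 'C:\ClusterStorage\Volume1\vm01.vhdx' -SizeBytes 60GB -Dynamic

    # Create a Generation 2 VM attached to that disk and to an existing virtual switch.
    New-VM -Name 'vm01' -Generation 2 -MemoryStartupBytes 4GB `
        -VHDPath 'C:\ClusterStorage\Volume1\vm01.vhdx' -SwitchName 'vSwitch-Prod'

    # Assign two virtual processors and power the VM on.
    Set-VMProcessor -VMName 'vm01' -Count 2
    Start-VM -Name 'vm01'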
The solution shown in Figure 1 represents a four-node Hyper-V virtualized distributed data center environment managed
by System Center VMM and connected to Cluster Shared Volumes. The consolidated virtualized enterprise applications
run on the production site, which includes Oracle and Microsoft SQL database workloads, as well as additional data
warehousing profiles. For purposes of the solution proposed and described in this white paper, we will consider a pseudo-
company for which these are the primary workloads, and are considered essential to the continued fulfillment of key
business operational objectives. They should, at all times, behave in a consistent, reliable, and expected manner. In case
of a hardware failure, a redundant system should become operational with minimal service disruption.
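As a sketch of how such an environment might be stood up with the in-box FailoverClusters PowerShell module (node names and the cluster address are hypothetical; the disks are assumed to be XtremIO volumes already presented to all four nodes):

    # Form the four-node failover cluster.
    New-Cluster -Name 'HV-CLUSTER' -Node 'hv-node1','hv-node2','hv-node3','hv-node4' `
        -StaticAddress '192.168.10.50'

    # Add the shared disks visible to all nodes, then promote one to a Cluster Shared Volume.
    Get-ClusterAvailableDisk | Add-ClusterDisk
    Add-ClusterSharedVolume -Name 'Cluster Disk 1'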
The following sections describe the hardware layer of our solution, providing an in-depth view of our XtremIO X2 array
and the features and benefits it provides to Hyper-V environments, including the software layer providing configuration
details for Hyper-V environments, and Dell EMC XtremIO X2 tools for Microsoft environments such as ESI, AppSync, and
PowerShell modules.
Figure 1. Design Architecture Hyper-V Cluster Managed by System Center VMM 2016
Table 1. Solution Hardware

• Dell EMC XtremIO X2 (quantity: 1): Two Storage Controllers (SCs), each with two dual-socket Haswell CPUs and 346GB RAM; DAE configured with 18 x 400GB SSDs. Notes: X2-S model, 18 x 400GB drives.
• Brocade 6510 SAN switch (quantity: 1): 32/16 Gbps FC switch. Notes: 2 switches per site, in a dual FC fabric configuration.
• Mellanox MSX1016 10GbE (quantity: 1): 10/1 Gbps Ethernet switch. Notes: infrastructure Ethernet switch.
• PowerEdge FC630 (quantity: 4): Intel Xeon CPU E5-2695 v4 @ 2.10GHz, 524 GB RAM. Notes: Windows Server 2016 v1607 including the Hyper-V role.
Table 2. Solution Software

• MSSQL Server 2017 VM (for SCVMM), quantity: 2. Configuration: 8 vCPU, 16 GB memory, 100 GB VHDX.
• System Center VMM 2016 VM, quantity: 1. Configuration: 4 vCPU, 16 GB memory, 80 GB VHDX.
• Oracle 12c VM (DW workload), quantity: 4. Configuration: 8 vCPU, 16 GB memory, 256 GB VHDX.
• MSSQL Server 2017 VM (OLTP workload), quantity: 4. Configuration: 4 vCPU, 8 GB memory, 256 GB VHDX.
• Windows 10 VM (VDI workload), quantity: 12. Configuration: 2 vCPU, 4 GB memory, 60 GB VHDX.
• PowerShell Plugin for XtremIO X2 2.0.3, quantity: 1.
• Dell EMC ESI Plugin 5.1, quantity: 1.
Dell EMC XtremIO X2 for Hyper-V Environments
Dell EMC's XtremIO X2 is an enterprise-class scalable All-Flash storage array that provides rich data services with high
performance. It is designed from the ground up to unlock flash technology's full performance potential by uniquely
leveraging the characteristics of SSDs and uses advanced inline data reduction methods to reduce the physical data that
must be stored on the disks.
The XtremIO X2 storage system uses industry-standard components and proprietary intelligent software to deliver
unparalleled levels of performance, achieving consistent low latency for up to millions of IOPS. It comes with a simple,
easy-to-use interface for storage administrators and fits a wide variety of use cases for customers in need of a fast and
efficient storage system for their data centers, requiring very little pre-provisioning preparation or planning.
The XtremIO X2 storage system serves many use cases in the IT world, due to its high performance and advanced abilities.
One major use case is virtualized environments and cloud computing. Figure 2 shows XtremIO X2's incredible
performance in an intensive, live Hyper-V production environment. We can see extremely high IOPS (~1.6M)
handled by a four X-Brick XtremIO X2 cluster, with latency mostly below 1 msec. In addition, we can see an impressive
data reduction factor of 6.6:1 (2.8:1 for deduplication and 2.4:1 for compression) which lowers the physical footprint of the
solution.
Figure 2. Intensive Hyper-V Production Environment Workload from an XtremIO X2 Array Perspective
XtremIO leverages flash to deliver value across multiple dimensions:
• Performance (consistent low-latency and up to millions of IOPS)
• Scalability (using a scale-out and scale-up architecture)
• Storage efficiency (using data reduction techniques such as deduplication, compression and thin-provisioning)
• Data Protection (with a proprietary flash-optimized algorithm called XDP)
• Environment Consolidation (using XtremIO Virtual Copies or Microsoft ODX)
• Integration between Microsoft and Dell EMC storage technologies, providing ease of management, backup,
recovery, and application awareness
• Rapid deployment and protection of Hyper-V environments
XtremIO X2 Overview
XtremIO X2 is the new generation of Dell EMC's All-Flash Array storage system. It adds enhancements and flexibility to
the proven efficiency and performance of the previous generation of storage arrays. Features such as scale-up for a more
flexible system, a write boost for a more responsive and higher-performing storage array, NVRAM for improved data
availability, and a new web-based UI for managing the storage array and monitoring its alerts and performance stats, all
add the extra value and advancements required in the evolving world of computer infrastructure.
The XtremIO X2 Storage Array uses building blocks called X-Bricks. Each X-Brick has its own compute, bandwidth and
storage resources, and can be clustered together with additional X-Bricks to grow in both performance and capacity
(scale-out). Each X-Brick can also grow individually in terms of capacity, with an option to expand to up to 72 SSDs per
X-Brick (scale-up).
XtremIO architecture is based on a metadata-centric content-aware system, which helps streamline data operations
efficiently without requiring any post-write data transfer for any maintenance reason (data protection, data reduction, etc. –
all done inline). The system uniformly distributes the data across all SSDs in all X-Bricks in the system using unique
fingerprints of the incoming data, and controls access using metadata tables. This contributes to an extremely balanced
system across all X-Bricks in terms of compute power, storage bandwidth and capacity.
Using the same unique fingerprints, XtremIO provides exceptional, always-on inline data deduplication abilities, which
highly benefits virtualized environments. Together with its data compression and thin provisioning capabilities (both also
inline and always-on), it achieves unparalleled data reduction rates.
System operation is controlled by storage administrators via a stand-alone dedicated Linux-based server called the
XtremIO Management Server (XMS). An intuitive user interface is used to manage and monitor the storage cluster and its
performance. The XMS can be either a physical or a virtual server and can manage multiple XtremIO clusters.
With its intelligent architecture, XtremIO provides a storage system that is easy to set-up, needs zero tuning by the client,
and does not require complex capacity or data protection planning, which is provided by the system autonomously.
Architecture
The XtremIO X2 Storage System is comprised of a set of X-Bricks that together form a cluster. This is the basic building
block of an XtremIO array. There are two types of X2 X-Bricks available: X2-S and X2-R. X2-S is for environments whose
storage needs are more I/O intensive than capacity intensive, as they use smaller SSDs and less RAM. An effective use
of the X2-S type is for environments that have high data reduction ratios (high compression ratio or a lot of duplicated
data) which significantly lower the capacity footprint of the data. X2-R X-Bricks clusters are made for capacity intensive
environments, with bigger disks, more RAM and a bigger expansion potential in future releases. The two X-Brick types
cannot be mixed together in a single system, so the decision of which type is suitable for your environment must be made
in advance.
Each X-Brick is comprised of:
• Two 1U Storage Controllers (SCs) with:
o Two dual socket Haswell CPUs
o 346GB RAM (for X2-S) or 1TB RAM (for X2-R)
o Two 1/10GbE iSCSI ports
o Two user interface interchangeable ports (either 4/8/16Gb FC or 1/10GbE iSCSI)
o Two 56Gb/s InfiniBand ports
o One 100/1000/10000 Mb/s management port
o One 1Gb/s IPMI port
o Two redundant power supply units (PSUs)
• One 2U Disk Array Enclosure (DAE) containing:
o Up to 72 SSDs of sizes 400GB (for X2-S) or 1.92TB (for X2-R)
o Two redundant SAS interconnect modules
o Two redundant power supply units (PSUs)
Figure 3. An XtremIO X2 X-Brick
The Storage Controllers on each X-Brick are connected to their DAE via redundant SAS interconnects.
The XtremIO X2 storage array can have one or multiple X-Bricks. Multiple X-Bricks are clustered together into an XtremIO
X2 array, using an InfiniBand switch and the Storage Controllers' InfiniBand ports for back-end connectivity between
Storage Controllers and DAEs across all X-Bricks in the cluster. The system uses the Remote Direct Memory Access
(RDMA) protocol for this back-end connectivity, ensuring a highly-available, ultra-low latency network for communication
between all components of the cluster. The InfiniBand switches are the same size (1U) for both X2-S and X2-R cluster
types, but include 12 ports for X2-S and 36 ports for X2-R. By leveraging RDMA, an XtremIO X2 system is essentially a
single shared-memory space spanning all of its Storage Controllers.
The 1Gb/s management port is configured with an IPv4 address. The XMS, which is the cluster's management software,
communicates with the Storage Controllers via this management interface, sending storage management requests such
as creating an XtremIO X2 volume, mapping a volume to an Initiator Group, and other similar operations.
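The same kinds of requests can be scripted against the XMS RESTful API. The sketch below creates a volume with PowerShell's Invoke-RestMethod; the XMS address is hypothetical, and the endpoint and body field names are assumptions based on the XMS REST API v2 conventions, so verify them against the XtremIO RESTful API guide for your XIOS version:

    $xms  = 'https://xms.example.com'   # hypothetical XMS address
    $cred = Get-Credential              # XMS administrator credentials

    # Create a 2TB volume (endpoint and field names assumed from REST API v2).
    $body = @{ 'vol-name' = 'hv-csv-01'; 'vol-size' = '2T' } | ConvertTo-Json
    Invoke-RestMethod -Uri "$xms/api/json/v2/types/volumes" -Method Post `
        -Credential $cred -ContentType 'application/json' -Body $body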
The 1Gb/s IPMI port interconnects the X-Brick's two Storage Controllers. IPMI connectivity is strictly within the bounds
of an X-Brick, and is never connected to an IPMI port of a Storage Controller in another X-Brick in the cluster.
Multi-Dimensional Scaling
With X2, an XtremIO cluster has both scale-out and scale-up capabilities, which enables a flexible growth capability
adapted to the customer's unique workload and needs. Scale-out is implemented by adding X-Bricks to an existing
cluster. The addition of an X-Brick to an existing cluster linearly increases its compute power, bandwidth and capacity.
Each X-Brick that is added to the cluster includes two Storage Controllers, each with its CPU power, RAM and FC/iSCSI
ports to service the clients of the environment, together with a DAE with SSDs to increase the capacity provided by the
cluster. Adding an X-Brick to scale-out an XtremIO cluster is intended for environments that grow both in capacity and
performance needs, such as in the case of an increase in the number of active users and their required data, or a
database which grows in data and complexity.
An XtremIO cluster can start with any number of X-Bricks as per the system’s initial requirements, and can currently grow
to up to 4 X-Bricks (for both X2-S and X2-R). Future code upgrades of XtremIO X2 will allow up to 8 supported X-Bricks
for X2-R arrays.
Figure 4. Scale Out Capabilities – Single to Multiple X2 X-Brick Clusters
Scale-up of an XtremIO cluster is implemented by adding SSDs to existing DAEs in the cluster. This is intended for
environments that need to grow in capacity but not in performance. As an example, this may occur when the same
number of users need to store an increasing amount of data, or when data usage growth reaches the capacity limits, but
not the performance limits, of the current infrastructure.
Each DAE can hold up to 72 SSDs, and is divided into up to two groups of SSDs called Data Protection Groups (DPGs).
Each DPG can hold a minimum of 18 SSDs and can grow in increments of 6 SSDs up to a maximum of 36 SSDs, thus
supporting configurations of 18, 24, 30 or 36 SSDs per DPG, with up to 2 DPGs per DAE.
SSDs are 400GB per drive for X2-S clusters and 1.92TB per drive for X2-R clusters. Future releases will allow customers
to populate their X2-R clusters with 3.84TB sized drives, doubling the physical capacity available in their clusters.
Figure 5. Multi-Dimensional scaling
XIOS and the I/O Flow
Each Storage Controller within the XtremIO cluster runs a specially-customized lightweight Linux-based operating system
as the base platform of the array. The XtremIO Operating System (XIOS) handles all activities within a Storage Controller
and runs on top of the Linux-based operating system. XIOS is optimized for handling high I/O rates and manages the
system's functional modules, RDMA communication, monitoring, and more.
Figure 6. X-Brick Components
XIOS has a proprietary process scheduling-and-handling algorithm designed to meet the specific requirements of a
content-aware, low-latency, and high-performing storage system. It provides efficient scheduling and data access, full
exploitation of CPU resources, optimized inter-sub-process communication, and minimized dependency between sub-
processes running on different sockets.
The XtremIO Operating System gathers a variety of metadata tables on incoming data including the data fingerprint, its
location in the system, mappings, and reference counts. The metadata is used as the primary source of information for
performing system operations such as uniformly laying out incoming data, implementing inline data reduction services,
and accessing data on read requests. The metadata is also used to optimize integration and communication between the
storage system and external applications (such as VMware XCOPY and Microsoft ODX).
Regardless of which Storage Controller receives an I/O request from the host, multiple Storage Controllers on multiple X-
Bricks interact to process the request. The data layout in the XtremIO X2 system ensures that all components share the
load and participate evenly in processing I/O operations.
An important functionality of XIOS is its data reduction capabilities. This is achieved by using inline data deduplication and
compression. Data deduplication and data compression complement each other. Data deduplication removes
redundancies, whereas data compression compresses the already deduplicated data before it is written to the flash
media. XtremIO is an always-on, thin-provisioned storage system that never writes a block of zeros to the disks, further
increasing storage savings.
XtremIO connects with existing SANs through 16Gb/s Fibre Channel or 10Gb/s Ethernet iSCSI to service hosts' I/O
requests.
XtremIO Write I/O Flow
In a write operation to the storage array, the incoming data stream reaches any one of the Active-Active Storage
Controllers and is broken into data blocks. For every data block, the array fingerprints the data with a unique identifier and
stores it in the cluster's mapping table. The mapping table maps between the host Logical Block Addresses (LBA) and the
block fingerprints, and between each block fingerprint and its physical location in the array (the DAE, SSD and the block
location offset). The block fingerprint has two objectives:
1. To determine if the block is a duplicate of a block that already exists in the array
2. To distribute blocks uniformly across the cluster (the array divides the list of potential fingerprints among Storage
Controllers in the array and gives each Storage Controller a range of fingerprints to control.)
The mathematical process that calculates the fingerprints results in a uniform distribution of fingerprint values thus evenly
distributing blocks across all Storage Controllers in the cluster.
A write operation works as follows:
1. A new write request reaches the cluster.
2. The new write is broken into data blocks.
3. For each data block:
A. A fingerprint is calculated for the block.
B. An LBA-to-fingerprint mapping is created for this write request.
C. The fingerprint is checked to see if it already exists in the array.
1. If it exists, the reference count for this fingerprint is incremented by one.
2. If it does not exist:
a. A location is chosen on the array where the block will be written (distributed uniformly across the array
according to fingerprint value).
b. A fingerprint-to-physical location mapping is created.
c. The data is compressed.
d. The data is written.
e. The reference count for the fingerprint is set to one.
Deduplicated writes are, of course, much faster than writes of unique new data. Once the array identifies a write as a duplicate, it
updates the LBA-to-fingerprint mapping for the write and updates the reference count for this fingerprint. No further data is
written to the array and the operation is completed quickly, adding an extra benefit to inline deduplication.
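The following PowerShell sketch models this flow conceptually: hashtables stand in for the LBA-to-fingerprint and fingerprint-to-physical mapping tables, and SHA-256 stands in for the array's fingerprint function. It illustrates the logic described above and is not XIOS code:

    $sha      = [System.Security.Cryptography.SHA256]::Create()
    $LbaToFp  = @{}   # host LBA -> block fingerprint
    $FpToLoc  = @{}   # block fingerprint -> physical location
    $RefCount = @{}   # block fingerprint -> number of references

    function Write-Block($Lba, [byte[]]$Data) {
        # Fingerprint the block and record the LBA-to-fingerprint mapping.
        $fp = [BitConverter]::ToString($sha.ComputeHash($Data))
        $LbaToFp[$Lba] = $fp
        if ($RefCount.ContainsKey($fp)) {
            $RefCount[$fp]++                # duplicate: bump the count, write nothing
        } else {
            # New block: choose a location by fingerprint value (uniform placement),
            # then compress, write, and set the reference count to one.
            $FpToLoc[$fp]  = "ssd:$($fp.Substring(0, 2))"
            $RefCount[$fp] = 1
        }
    }

    $block = [System.Text.Encoding]::UTF8.GetBytes('example block')
    Write-Block 0    $block
    Write-Block 8192 $block   # duplicate data: only the reference count changes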
Figure 7 shows an example of an incoming data stream which contains duplicate blocks with identical fingerprints.
Figure 7. Incoming Data Stream Example with Duplicate Blocks
As mentioned, fingerprints also determine where each block is written in the array. Figure 8 shows the incoming stream,
after duplicates have been removed, being written to the array. Blocks are routed to the correct Storage Controller
according to their fingerprint values, which ensures a uniform distribution of the data across the cluster. The blocks are
transferred to their destinations in the array using Remote Direct Memory Access (RDMA) via the low-latency InfiniBand
network.
Figure 8. Incoming Deduplicated Data Stream Written to the Storage Controllers
The actual write of the data blocks to the SSDs is done asynchronously. At the time of the application write, the system
places the data blocks in the in-memory write buffer and protects them using journaling to local and remote NVRAMs. Once data
is written to the local NVRAM and replicated to a remote one, the Storage Controller returns an acknowledgment to the
host. This guarantees a quick response to the host, ensures low-latency of I/O traffic, and preserves the data in case of
system failure (power-related or any other). When enough blocks are collected in the buffer (to fill up a full stripe), the
system writes them to the SSDs on the DAE. Figure 9 shows data being written to the DAEs after a full stripe of data
blocks is collected in each Storage Controller.
Figure 9. Full Stripe of Blocks Written to the DAEs
XtremIO Read I/O Flow
In a read operation, the system first performs a look-up of the logical address in the LBA-to-fingerprint mapping. The
fingerprint found is then looked up in the fingerprint-to-physical mapping and the data is retrieved from the right physical
location. As with writes, the read load is also evenly shared across the cluster, as blocks are evenly distributed, and all
volumes are accessible across all X-Bricks. If the requested block size is larger than the data block size, the system
performs parallel data block reads across the cluster and assembles them into bigger blocks before returning them to the
application. A compressed data block is decompressed before it is delivered to the host.
XtremIO has a memory-based read cache in each Storage Controller. The read cache is organized by content fingerprint.
Blocks whose contents are more likely to be read are placed in the read cache for fast retrieval.
A read operation works as follows:
1. A new read request reaches the cluster.
2. The read request is analyzed to determine the LBAs for all data blocks and a buffer is created to hold the data.
3. For each LBA:
A. The LBA-to-fingerprint mapping is checked to find the fingerprint of each data block to be read.
B. The fingerprint-to-physical location mapping is checked to find the physical location of each of the data
blocks.
C. The requested data block is read from its physical location (the read cache or its place on disk) and
transmitted, via RDMA over InfiniBand, to the buffer created in step 2 on the Storage Controller that
processes the request.
4. The system assembles the requested read from all data blocks transmitted to the buffer and sends it back to the
host.
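A correspondingly minimal, self-contained PowerShell sketch of this read path (the table contents are illustrative only):

    # Conceptual mapping tables, pre-populated with a single block.
    $LbaToFp = @{ 4096 = 'CA38C90' }         # LBA -> fingerprint
    $FpToLoc = @{ 'CA38C90' = 'ssd:CA' }     # fingerprint -> physical location

    function Read-Block($Lba) {
        $fp  = $LbaToFp[$Lba]    # step 3A: LBA-to-fingerprint lookup
        $loc = $FpToLoc[$fp]     # step 3B: fingerprint-to-physical lookup
        # Step 3C: the block would now be read from $loc (cache or SSD) and decompressed.
        "block $fp read from $loc"
    }

    Read-Block 4096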
System Features
The XtremIO X2 Storage Array offers a wide range of built-in features that require no special license. The architecture and
implementation of these features are unique to XtremIO and are designed around the capabilities and limitations of flash
media. Key features included in the system are listed below.
Inline Data Reduction
XtremIO's unique Inline Data Reduction is achieved by these two mechanisms: Inline Data Deduplication and Inline Data
Compression.
Data Deduplication
Inline Data Deduplication is the removal of duplicate I/O blocks from a stream of data prior to it being written to the flash
media. XtremIO inline deduplication is always on, meaning no configuration is needed for this important feature. The
deduplication is at a global level, meaning no duplicate blocks are written over the entire array. As an inline and global
process, no resource-consuming background processes or additional reads and writes (which are mainly associated with
post-processing deduplication) are necessary for the feature's activity, thus increasing SSD endurance and eliminating
performance degradation.
As mentioned earlier, deduplication on XtremIO is performed using the content's fingerprints (see XtremIO Write I/O Flow
on page 12). The fingerprints are also used for uniform distribution of data blocks across the array, which provides
inherent load balancing for performance and enhances flash wear-level efficiency, since the data never needs to be
rewritten or rebalanced.
XtremIO uses a content-aware, globally deduplicated Unified Data Cache for highly efficient data deduplication. The
system's unique content-aware storage architecture provides a substantially larger cache size with a small DRAM
allocation. Therefore, XtremIO is the ideal solution for difficult data access patterns, such as "boot storms" that are
common in Hyper-V environments.
XtremIO has excellent data deduplication ratios, especially for virtualized environments. With it, SSD usage is smarter,
flash longevity is maximized, the logical storage capacity is multiplied, and total cost of ownership is reduced.
Figure 10 shows the CPU utilization of our Storage Controllers during a Hyper-V production workload. When new blocks
are written to the system, the hash calculation is distributed across all Storage Controllers. The excellent synergy across
the X2 cluster is evident: all the Active-Active Storage Controllers' CPUs share the load, and CPU utilization remains
virtually equal across all of them for the entire workload.
Figure 10. XtremIO X2 CPU Utilization
Data Compression
Inline data compression is the compression of data prior to it being written to the flash media. XtremIO automatically
compresses data after all duplications are removed, ensuring that the compression is performed only for unique data
blocks. The compression is performed in real-time and not as a post-processing operation. This way, it does not overuse
the SSDs or impact performance. Compression rates depend on the type of data written.
Data Compression complements data deduplication in many cases, and saves storage capacity by storing only unique
data blocks in the most efficient manner.
Data compression is always inline and is never performed as a post-processing activity. Therefore, the data is written only
once. It increases the overall endurance of the flash array's SSDs.
In a Hyper-V environment, deduplication dramatically reduces the required capacity for the virtual servers. In addition,
compression reduces the specific user data. As a result, an increased number of virtual servers can be managed by a
single X-Brick.
Using the two data reduction techniques, less physical capacity is required to store the data, increasing the storage
array's efficiency and dramatically reducing the $/GB cost of storage, even when compared to hybrid storage systems.
Figure 11 shows the benefits and capacity savings for the deduplication-compression combination.
Figure 11. Data Deduplication and Data Compression Demonstrated
In the above example, the twelve data blocks written by the host are first deduplicated to four data blocks, demonstrating
a 3:1 data deduplication ratio. Following the data deduplication process, the four data blocks are then each compressed,
by a ratio of 2:1, resulting in a total data reduction ratio of 6:1.
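The overall ratio is simply the product of the two stages: total data reduction = deduplication ratio x compression ratio, here 3 x 2 = 6, i.e. 6:1. The 6.6:1 overall factor reported in Figure 2 reflects its 2.8:1 deduplication and 2.4:1 compression factors in the same way, allowing for rounding of the displayed values.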
Thin Provisioning
XtremIO storage is natively thin provisioned, using a small internal block size. All volumes in the system are thin
provisioned, meaning that the system consumes capacity only when it is needed. No storage space is ever pre-allocated
before writing.
Because of XtremIO's content-aware architecture, blocks can be stored at any location in the system (using the metadata
to reference their location), and the data is written only when unique blocks are received. Therefore, as opposed to disk-
oriented architecture, no space creeping or garbage collection is necessary on XtremIO, volume fragmentation does not
occur in the array, and no defragmentation utilities are needed.
This feature on XtremIO enables consistent performance and data management across the entire life cycle of a volume,
regardless of the system capacity utilization or the write patterns of clients.
This characteristic allows frequent manual and automatic reclamation of unused space directly from NTFS/ReFS
and virtual machines (see the example following the list), thus providing the following benefits:
• The allocated disks can be used optimally, and the actual space reports are more accurate.
• More efficient snapshots, as blocks that are no longer needed are not protected by additional snapshots.
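On a Windows Server 2016 host, such a reclamation pass can be triggered on demand with the in-box Storage cmdlets; a minimal sketch (the drive letter is hypothetical):

    # Send TRIM/UNMAP hints for the free space on volume E: so the thin-provisioned
    # XtremIO volume behind it can reclaim the blocks.
    Optimize-Volume -DriveLetter E -ReTrim -Verbose

    # Verify that automatic delete notifications (TRIM) are enabled (0 = enabled).
    fsutil behavior query DisableDeleteNotify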
Integrated Copy Data Management
XtremIO pioneered the concept of integrated Copy Data Management (iCDM) – the ability to consolidate both primary
data and its associated copies on the same scale-out, All-Flash array for unprecedented agility and efficiency.
XtremIO is one of a kind in its ability to consolidate multiple workloads and entire business processes safely and
efficiently, providing organizations with a new level of agility and self-service for on-demand procedures. XtremIO provides
consolidation, supports on-demand copy operations at scale, and still maintains delivery of all performance SLAs in a
consistent and predictable way.
Consolidation of primary data and its copies in the same array has numerous benefits:
• It can make development and testing activities up to 50% faster, quickly creating copies of production code for
development and testing purposes, and then refreshing the output back into production in the same array for the
full cycle of code upgrades. This dramatically reduces complexity and infrastructure needs, as well as
development risks, and increases the quality of the product.
• Production data can be extracted and pushed to all downstream analytics applications on-demand as a simple in-
memory operation. Data can be copied with high performance and with the same SLA as production copies,
without compromising production SLAs. XtremIO offers this on-demand as both self-service and automated
workflows for both application and infrastructure teams.
• Operations such as patches, upgrades and tuning tests can be made quickly using copies of production data.
Diagnosing application and database problems can be done using these copies, and applying the changes back
to production can also be done by returning copies. The same approach can be used for testing new technologies
and combining them in production environments.
• iCDM can also be used for data protection purposes, as it enables creating many point-in-time copies at short
time intervals for recovery. Application integration and orchestration policies can be set to auto-manage data
protection, using different SLAs.
XtremIO Virtual Copies
For all iCDM purposes, XtremIO uses its own implementation of snapshots called XtremIO Virtual Copies (XVCs). XVCs
are created by capturing the state of data in volumes at a particular point in time, and allowing users to access that data
when needed, regardless of the state of the source volume (even deletion). They allow any access type and can be taken
either from a source volume or another Virtual Copy.
XtremIO's Virtual Copy technology is implemented by leveraging the content-aware capabilities of the system, and is
optimized for SSDs, with a unique metadata tree structure that directs I/O to the right timestamp of the data. This allows
efficient copy creation that can sustain high performance, while maximizing the media endurance.
Figure 12. A Metadata Tree Structure Example of XVCs
When creating a Virtual Copy, the system only generates a pointer to the ancestor metadata of the actual data in the
system, making the operation very quick. This operation does not have any impact on the system and does not consume
any capacity when created, unlike traditional snapshots, which may need to reserve space or copy the metadata for each
snapshot. Virtual Copy capacity consumption occurs only when changes are made to any copy of the data. Then, the
system updates the metadata of the changed volume to reflect the new write, and stores its blocks in the system using the
standard write flow process.
The system supports the creation of Virtual Copies on a single or set of volumes. All Virtual Copies of the volumes in the
set are cross-consistent and contain the exact same point-in-time. This can be done manually by selecting a set of
volumes for copying, or by placing volumes in a Consistency Group and making copies of that Consistency Group.
Virtual Copy deletions are lightweight and proportional only to the amount of changed blocks between the entities. The
system uses its content-aware capabilities to handle copy deletions. Each data block has a counter that indicates the
number of instances of that block in the system. If a block is referenced from some copy of the data, it will not be deleted.
Any block whose counter value reaches zero is marked as deleted and will be overwritten when new unique data enters
the system.
With XVCs, XtremIO's iCDM offers the following tools and workflows to provide the consolidation capabilities:
• Consistency Groups (CG) – Grouping of volumes to allow Virtual Copies to be taken on a group of volumes as a
single entity.
• Snapshot Sets – A group of Virtual Copies of volumes taken together using CGs or a group of manually-chosen
volumes.
• Protection Copies – Immutable read-only copies created for data protection and recovery purposes.
• Protection Scheduler – Used for local protection of a volume or a CG. It can be defined using intervals of
seconds/minutes/hours or can be set using a specific time of day or week. It has a retention policy based on the
number of copies required or the configured age limit of the oldest snapshot.
• Restore from Protection – Restore a production volume or CG from one of its descendant snapshot sets.
• Repurposing Copies – Virtual Copies configured with changing access types (read-write / read-only / no-access)
for alternating purposes.
• Refresh a Repurposing Copy – Refresh a Virtual Copy of a volume or a CG from the parent object or other related
copies with relevant updated data. It does not require volume provisioning changes for the refresh to take effect,
but can be discovered only by host-side logical volume management operations.
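As an illustration, a protection copy of a Consistency Group can also be requested programmatically. The endpoint, field names, and object names below are assumptions based on the XMS RESTful API v2 and should be verified against the XtremIO RESTful API guide:

    $xms  = 'https://xms.example.com'   # hypothetical XMS address
    $cred = Get-Credential

    # Snapshot an existing Consistency Group into a named Snapshot Set
    # (CG name, set name, and body field names are assumed).
    $body = @{ 'consistency-group-id' = 'cg-hyperv-prod'
               'snapshot-set-name'    = 'cg-hyperv-prod.snap.001' } | ConvertTo-Json
    Invoke-RestMethod -Uri "$xms/api/json/v2/types/snapshots" -Method Post `
        -Credential $cred -ContentType 'application/json' -Body $body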
XtremIO Data Protection
XtremIO Data Protection (XDP) provides the storage system with highly efficient "self-healing" double-parity data
protection. It requires very little capacity overhead and metadata space, and does not require dedicated spare drives for
rebuilds. Instead, XDP leverages the "hot space" concept, where any free space available in the array can be utilized for
failed drive reconstructions. The system always reserves sufficient distributed capacity for performing at least a single
drive rebuild. In the rare case of a double SSD failure, the second drive will be rebuilt only if there is enough space to
rebuild it as well, or when one of the failed SSDs is replaced.
The XDP algorithm provides:
• N+2 drives protection
• Capacity overhead of only 5.5%-11% (depending on the number of disks in the protection group)
• 60% more write-efficient than RAID1
• Superior flash endurance to any RAID algorithm, due to the smaller number of writes and even distribution of data
• Automatic rebuilds that are faster than traditional RAID algorithms
As shown in Figure 13, XDP uses a variation of N+2 row and diagonal parity which provides protection from two
simultaneous SSD errors. An X-Brick DAE may contain up to 72 SSDs organized in two Data Protection Groups (DPGs).
XDP is managed independently on the DPG level. A DPG of 36 SSDs will result in capacity overhead of only 5.5% for its
data protection needs.
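These figures are consistent with dedicating the equivalent of two parity drives per DPG, so the overhead for a DPG of N SSDs is approximately 2/N: 2/36 ≈ 5.5% for the largest DPG, and 2/18 ≈ 11% for the smallest.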
Figure 13. N+2 Row and Diagonal Parity
Data at Rest Encryption
Where needed, Data at Rest Encryption (DARE) provides a solution for securing critical data even when the media is
removed from the array. XtremIO arrays utilize a high-performance inline encryption technique to ensure that all data
stored on the array is unusable if the SSD media is removed. This prevents unauthorized access in the event of theft or
loss during transport, and makes it possible to return/replace failed components containing sensitive data. DARE has
been established as a mandatory requirement in several industries, such as health care, banking, and government
institutions.
At the heart of XtremIO's DARE solution lies the use of the Self-Encrypting Drive (SED) technology. An SED uses
dedicated hardware to encrypt and decrypt data as it is written to or read from the drive. Offloading the encryption task to
the SSDs enables XtremIO to maintain the same software architecture whether encryption is enabled or disabled on the
array. All XtremIO's features and services (including Inline Data Reduction, XtremIO Data Protection, Thin Provisioning,
XtremIO Virtual Copies, etc.) are available on an encrypted cluster as well as on a non-encrypted cluster, and
performance is not impacted when using encryption.
A unique Data Encryption Key (DEK) is created during the drive manufacturing process, and does not leave the drive at
any time. The DEK can be erased or changed, rendering its current data unreadable forever. To ensure that only
authorized hosts can access the data on the SED, the DEK is protected by an Authentication Key (AK) that resides on the
Storage Controller. Without the AK, the DEK is encrypted and cannot be used to encrypt or decrypt data.
Figure 14. Data at Rest Encryption in XtremIO
Write Boost
In the new X2 storage array, the write flow algorithm was significantly redesigned to improve array performance, following
the rise in compute power and disk speeds, and taking into account common applications' I/O patterns and block sizes. As
described earlier for the write I/O flow, the commit to the host is now asynchronous to the actual writing of the blocks to
disk. The commit is sent after the changes are written to local and remote NVRAMs for protection, and are written to the
disk only later, at a time that best optimizes the system's activity. In addition to the shortened procedure from write to
commit, the new algorithm addresses an issue relevant to many applications and clients: a high percentage of small I/Os
creating load on the storage system and increasing latency, especially for larger I/O blocks. Examining customers'
applications and I/O patterns, it was found that many I/Os from common applications arrive in small blocks, under 16KB,
creating high loads on the storage array. Figure 15 shows the block size histogram from the entire XtremIO installed
base; the prevalence of blocks smaller than 16KB is clearly evident. The new algorithm addresses this issue by
aggregating small writes into bigger blocks in the array before writing them to disk, making them less demanding on the
system, which can now handle larger I/Os faster. The test results for the improved algorithm were impressive: in several
cases latency improved by around 400%, allowing XtremIO X2 to address application latency requirements of
0.5 msec or lower.
Figure 15. XtremIO Install Base Block Size Histogram
XtremIO Management Server
The XtremIO Management Server (XMS) is the component that manages XtremIO clusters (up to 8 clusters). It is
preinstalled with CLI, GUI and RESTful API interfaces, and can be installed on a dedicated physical server or a virtual
machine.
The XMS manages the cluster through the management ports on both Storage Controllers of the first X-Brick in the
cluster, using a standard TCP/IP connection for communication. It is not part of the XtremIO data path, and thus can be
disconnected from an XtremIO cluster without jeopardizing I/O tasks. A failure on the XMS only affects monitoring and
configuration activities, such as creating and attaching volumes. A virtual XMS is naturally less vulnerable to such failures.
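Because monitoring data is exposed over the same RESTful API, basic health and capacity information can also be polled without the GUI. A minimal sketch (the XMS address is hypothetical, and the endpoint and response shape are assumptions based on the REST API v2 conventions):

    $xms  = 'https://xms.example.com'   # hypothetical XMS address
    $cred = Get-Credential

    # List the clusters managed by this XMS (endpoint and 'clusters' property assumed).
    $response = Invoke-RestMethod -Uri "$xms/api/json/v2/types/clusters" -Method Get -Credential $cred
    $response.clusters | Select-Object -First 1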
The GUI is based on a new Web User Interface (WebUI), which is accessible via any browser, and provides easy-to-use
tools for performing most system operations (certain management operations must be performed using the CLI). Some of
these features are described in the following sections.
Dashboard
The Dashboard window presents a main overview of the cluster. It has three panels:
• Health - the main overview of the system's health status, alerts, etc.
• Performance (shown in Figure 16) – the main overview of the system's overall performance and top used
Volumes and Initiator Groups.
• Capacity (shown in Figure 17) – the main overview of the system's physical capacity and data savings.
Figure 16. XtremIO WebUI – Dashboard – Performance Panel
Figure 17. XtremIO WebUI – Dashboard – Capacity Panel
The main Navigation menu bar is located on the left side of the UI, allowing users to easily select the desired XtremIO
management actions. The main menu options are: Dashboard, Notifications, Configuration, Reports, Hardware, and
Inventory.
Notifications
From the Notifications menu, we can navigate to the Events window (shown in Figure 18) and the Alerts window,
showing major and minor issues related to the cluster's health and operations.
Figure 18. XtremIO WebUI – Notifications – Events Window
Configuration
The Configuration window displays the cluster's logical components: Volumes (shown in Figure 19), Consistency Groups,
Snapshot Sets, Initiator Groups, Initiators, and Protection Schedulers. Through this window we can create and modify
these entities, using the action panel on the top right side.
Figure 19. XtremIO WebUI – Configuration
Reports
From the Reports menu, we can navigate to different windows showing graphs and data related to different aspects of the
system's activities, mainly system performance and resource utilization. The menu includes the following options:
Overview, Performance, Blocks, Latency, CPU Utilization, Capacity, Savings, Endurance, SSD Balance, Usage, and User
Defined reports. We can view reports at different resolutions of time and components: choosing specific entities in the
"Select Entity" option (shown in Figure 20) that appears at the top of the Reports menus, or selecting predefined and
custom days and times for which to review reports (shown in Figure 21).
Figure 20. XtremIO WebUI – Reports – Selecting Specific Entities to View
Figure 21. XtremIO WebUI – Reports – Selecting Specific Times to View
The Overview window shows basic reports on the system, including performance, weekly I/O patterns and storage
capacity information. The Performance window shows extensive performance reports such as Bandwidth, IOPS and
Latency information. The Blocks window shows block distribution and statistics of I/Os within the system. The Latency
window (shown in Figure 22) shows Latency reports, with different dimensions such as block sizes and IOPS metrics. The
CPU Utilization window shows CPU utilization of all Storage Controllers in the system.
Figure 22. XtremIO WebUI – Reports – Latency Window
The Capacity window (shown in Figure 23) shows capacity statistics and the change in storage capacity over time. The
Savings window shows Data Reduction statistics and change over time. The Endurance window shows SSD's
endurance status and statistics. The SSD Balance window shows the data distribution over the SSDs. The Usage
window shows Bandwidth and IOPS usage, both overall and divided into reads and writes. The User Defined window
allows users to define their own reports to view.
Figure 23. XtremIO WebUI – Reports – Capacity Window
Hardware
In the Hardware menu, we can see a visual overview of the cluster and its X-Bricks. When viewing the FRONT panel, we
can choose and highlight any component of the X-Brick and view information about it in the Information panel on the right.
In Figure 24 we can see extended information on Storage Controller 1 in X-Brick 1; we can also drill down to more detailed
information such as local disks and status LEDs. Clicking the "OPEN DAE" button shows a visual illustration of the
X-Brick's DAE and its SSDs, with additional information on each SSD and Row Controller.
Figure 24. XtremIO WebUI – Hardware – Front Panel
In the BACK panel, we can view an illustration of the rear of the X-Brick and see all physical connections to the X-Brick
together with an internal view. Rear connections include FC connections, Power, iSCSI, SAS, Management, IPMI and
InfiniBand. The view can be filtered by the "Show Connections" list at the top right. An example of this view is seen in
Figure 25.
Figure 25. XtremIO WebUI – Hardware – Back Panel – Show Connections
Inventory
In the Inventory menu, we can see all components of the system with relevant information including: XMS, Clusters, X-
Bricks, Storage Controllers, Local Disks, Storage Controller PSUs, XEnvs, Data Protection Groups, SSDs, DAEs, DAE
Controllers, DAE PSUs, DAE Row Controllers, InfiniBand Switches and NVRAMs.
As mentioned, other interfaces are available to monitor and manage an XtremIO cluster through the XMS server. The
system's Command Line Interface (CLI) can be used for everything the GUI covers and more; a RESTful API, also
preinstalled in the system, allows clusters to be managed with HTTP-based commands; and a PowerShell API module
lets administrators use the Windows PowerShell console to administer XtremIO clusters.
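As a minimal sketch of the RESTful interface, the following PowerShell snippet lists the cluster's volumes. The XMS host
name and credentials are placeholders, and the endpoint path follows the XtremIO REST API v2 convention; consult the
array's RESTful API guide for the exact schema, and note that certificate-validation handling is omitted here.

    # A minimal sketch: query the XMS RESTful API for the list of volumes.
    # "xms.example.com" and the credentials are assumptions for this example.
    $cred = Get-Credential
    $uri  = "https://xms.example.com/api/json/v2/types/volumes"
    $resp = Invoke-RestMethod -Uri $uri -Method Get -Credential $cred
    # Print the volume names returned by the XMS
    $resp.volumes | Select-Object -ExpandProperty name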
XtremIO X2 Integration with Microsoft Technologies
This section summarizes the Microsoft platform integration capabilities offered by the Dell EMC XtremIO X2 storage
system. These include XtremIO’s PowerShell library for integration into Microsoft automation frameworks, SCVMM and
EMC Storage Integrator (ESI) for centralized storage management, as well as Offloaded Data Transfer (ODX) support for
improved performance in Microsoft enterprise environments. These enhancements are offered by XtremIO X2 to create a
seamless and efficient management experience for infrastructures built upon Microsoft Hyper-V virtualization
technologies.
Offloaded Data Transfer (ODX)
The Offloaded Data Transfer (ODX) feature was first made available with the release of Windows Server 2012. This
feature allows large segments of data, and even entire virtual machines, to be moved or copied at speeds significantly
faster than legacy methods. By offloading the file copy or transfer operation to the storage array, ODX helps minimize
fabric and host level contention of available resources, thus increasing realized performance capabilities and reducing
overall I/O latencies.
ODX uses a token-based mechanism for reading and writing data on intelligent storage arrays. Instead of routing the data
through the host, a small token is copied between the source and destination. The token serves as a confirmed point-in-
time representation of the data. XtremIO implements ODX support using the array’s native XVC functionality. When an
ODX token is created for a given file, the array creates a read-only snapshot of the data segment, thus preserving the
point in time of the data being copied. The array then performs an internal copy operation to duplicate the data as
requested, and the original file is then made available at the destination without the need to route data over the network or
through the host(s).
Figure 26. Implementation Sequence of Offloaded Data Transfer (ODX) Activities on Dell EMC XtremIO Platform
The host-initiated ODX action leverages a built-in capability of the supporting storage array: the host informs the storage
layer of what data needs to be copied and to where. The storage confirms that the request is possible and then performs
the desired copy or repeated-write activity without sending or receiving all the data over the network connecting the
host(s) and the storage.
For a storage array like XtremIO X2, whose architecture is custom-built to exploit the capabilities of flash storage via a
content-addressable indirection engine, this activity completes extremely quickly, since it is an in-memory metadata
update operation.
ODX is enabled by default and can be used for any file copy operation where the file is greater than 256 KB in size. ODX
works on NTFS-partitioned disks, and does not support files that are compressed, encrypted, or protected with BitLocker.
Windows Server or Hyper-V will automatically detect whether the source and target storage volumes support ODX. If
not, the operation reverts to legacy copy methods without intervention from the user, and the data is transmitted through
the host.
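The host-side ODX switch is the Microsoft-documented FilterSupportedFeaturesMode registry value; a value of 0 (or an
absent value) means ODX is enabled, and 1 disables it. A quick check in PowerShell, shown here as a sketch, might
look as follows:

    # 0 (or absent) = ODX enabled at the host; 1 = ODX disabled
    Get-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem" |
        Select-Object FilterSupportedFeaturesMode
    # Disable ODX only for troubleshooting, e.g. to compare against legacy copy behavior
    Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem" `
        -Name FilterSupportedFeaturesMode -Value 1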
ODX is particularly useful for copying large files between file shares, deploying virtual machines from templates, and
performing live migration of virtual machine storage between supporting volumes. In addition, end users benefit from
ODX when creating fixed-size VHDX files, as it eliminates the repeated zero writes otherwise required.
To support ODX operations, virtual SCSI adapters with VHDX files, pass-through disks, or connectivity via virtual Fibre
Channel adapters are required.
When deploying virtual machines within SCVMM from a Library template, administrators can note if ODX is supported by
observing if the Create Virtual Machine job invokes ‘Rapid deploy using SAN copy’ as shown in Figure 27. In addition,
virtual machine creation time measured in seconds as opposed to minutes also indicates that ODX was used.
Figure 27. SCVMM "Cloning Virtual Machine" Operation Utilizing ODX
Below is a comparison of cloning a 150GB virtual machine with and without ODX. We can see the outstanding
improvement in copy rates when running on top of Dell EMC XtremIO X2’s ODX implementation.
Figure 28. An Example of Copy Performance for ODX-Enabled Operation
Figure 29. An Example of Copy Performance for a Non-ODX Copy Task
Resilient File System (ReFS)
The Resilient File System (ReFS) is Microsoft's newest file system, designed to maximize data availability, scale efficiently
to large data sets across diverse workloads, and provide data integrity by means of resiliency to corruption. It seeks to
address an expanding set of storage scenarios and establish a foundation for future innovations.
Key benefits:
• ReFS introduces new features that can precisely detect and fix corruptions while remaining online, helping to
provide increased integrity and availability for data.
• Integrity streams - ReFS uses checksums for metadata and optionally for file data, giving ReFS the ability to
reliably detect corruptions (a PowerShell sketch follows this list).
• Storage Spaces integration - When used in conjunction with a mirror or parity space, ReFS can automatically
repair detected corruptions using the alternate copy of the data provided by Storage Spaces. Repair processes
are both localized to the area of corruption and performed online, requiring no volume downtime.
• Salvaging data - If a volume becomes corrupted and an alternate copy of the corrupted data doesn't exist, ReFS
removes the corrupt data from the namespace. ReFS generally keeps the volume online while it handles most
non-correctable corruptions, although there are rare cases that require ReFS to take the volume offline.
• Proactive error correction - In addition to validating data before reads and writes, ReFS introduces a data integrity
scanner, known as a scrubber. This scrubber periodically scans the volume, identifying latent corruptions and
proactively triggering a repair of corrupt data.
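Integrity streams can be inspected and toggled per file or directory with the built-in Windows Storage cmdlets; the
volume and path below are illustrative:

    # Check whether integrity streams are enabled for a file on an ReFS volume
    Get-FileIntegrity -FileName "V:\VMs\template.vhdx"
    # Enable integrity streams (checksums on file data) for that file
    Set-FileIntegrity -FileName "V:\VMs\template.vhdx" -Enable $true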
With Windows Server 2016, Microsoft introduced a new ReFS feature called Accelerated VHDX Operations. ReFS serves
as Microsoft's data center file system of the future, with improved VHDX and VHD functions such as:
• Creating and extending a virtual hard disk
• Merging checkpoints (previously called Hyper-V snapshots)
• Supporting backups based on production checkpoints
One of the core features of ReFS is the use of metadata to protect data integrity. This metadata is used when creating or
extending a virtual hard disk; instead of zeroing out new data blocks on disk, the file system will write metadata. Thus,
when an application such as Hyper-V asks to read zeroed-out blocks, the file system checks the metadata, and responds
with “nothing to see here.”
Checkpoints can be costly in terms of IOPS, impacting all virtual machines on that LUN. When a checkpoint is merged,
the last modified blocks of an AVHD/X file are written back into the parent VHD/X file. Hyper-V will use checkpoints to
perform consistent backups (not relying on VSS at the volume level) to achieve greater scalability. However, merging
many checkpoints is costly. Instead, ReFS will perform a metadata operation and delete unwanted data. This means that
no data actually moves on the volume, and merging a checkpoint will be much quicker and have less impact on services
hosted on that disk.
Formatting ReFS with a 64 KB allocation unit size is recommended for optimal Hyper-V operation.
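For example, a new Hyper-V volume can be formatted accordingly; the drive letter and label here are illustrative:

    # Format with ReFS and a 64 KB allocation unit, as recommended for Hyper-V
    Format-Volume -DriveLetter V -FileSystem ReFS -AllocationUnitSize 65536 `
        -NewFileSystemLabel "HyperV-ReFS"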
Figure 30 and Figure 31 show the huge difference in the allocation of a VHDX disk on an ReFS file system versus an
NTFS file system.
Figure 30. An Example of VHDX File Creation on an NTFS File System
Figure 31. An Example of Accelerated VHDX Operations for VHDX File Creation on an ReFS File System
MPIO Best Practices
The Windows Server operating system and Hyper-V 2012 (or later) natively support MPIO with the built-in Device Specific
Module (DSM) that is bundled with the OS. Although the basic functionality offered with the Microsoft DSM is supported,
Dell EMC recommends the use of Dell EMC PowerPath™ MPIO management on server hosts and VMs instead of the
Microsoft DSM. Dell EMC PowerPath is a server-resident software solution designed to enhance performance and
application availability as follows:
• Combines automatic load balancing, path failover, and multiple-path I/O capabilities into one integrated package.
• Enhances application availability by providing load balancing, automatic path failover, and recovery functionality.
• Supports servers, including cluster servers, connected to Dell EMC and third-party arrays.
Windows and Hyper-V Hosts will default to Round Robin with Dell EMC XtremIO X2 storage, unless the administrator sets
a different default MPIO policy on the host.
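When PowerPath is not deployed and the native DSM is used, the host-side setup can be sketched with the built-in
MPIO PowerShell module. The vendor and product identifiers below match the XtremApp device string in Figure 32, but
should be verified against Dell EMC's host connectivity guide:

    # Install the native MPIO feature (a reboot may be required)
    Install-WindowsFeature -Name Multipath-IO
    # Claim XtremIO devices for the Microsoft DSM (IDs assumed per the XtremApp device string)
    New-MSDSMSupportedHW -VendorId "XtremIO" -ProductId "XtremApp"
    # Set and confirm the global default load-balance policy (RR = Round Robin)
    Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR
    Get-MSDSMGlobalDefaultLoadBalancePolicy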
Figure 32. XtremIO XtremApp Multi-Path Disk Device Properties
PowerPath Multipathing with XtremIO
XtremIO supports multipathing using EMC PowerPath on Windows. PowerPath versions 5.7 SP2 and above provide a
Loadable Array Module (LAM) for XtremIO array devices. With this support, XtremIO devices running version 2.2 and
above are managed under the XtremIO class.
PowerPath provides enhanced path management capabilities for up to 32 paths per logical device, as well as intelligent
dynamic I/O load-balancing functionalities specifically designed to work within the Microsoft Multipathing I/O (MPIO)
framework.
Having multiple paths enables the host to access a storage device even if a specific path is unavailable. Multiple paths
share the I/O traffic to a storage device, using intelligent load-balancing policies which enhance I/O performance and
increase application availability. EMC PowerPath is the recommended multipathing choice.
PowerPath features include:
• Multiple paths - provide higher availability and I/O performance. This includes support on Server Core and
Hyper-V (available in Windows Server 2008 and later).
• Running PowerPath in Hyper-V VMs (guest operating systems), PowerPath supports:
o iSCSI through software initiator
o Virtual Fibre Channel for Hyper-V (available in Windows Server 2012 and above) that provides the guest
operating system with unmediated access to a SAN through vHBA
• Path management insight capabilities - PowerPath characterizes I/O patterns and aids in diagnosing I/O
problems due to flaky paths or unexpected latency values. Metrics are provided on:
o Read and write - in MB/second per LUN
o Latency distribution - the high and low watermarks per path
o Retries - the number of failed I/Os on a specific path
• Autostandby - automatically detects intermittent I/O failures and places paths into autostandby (also known as
flaky paths).
• PowerPath Migration Enabler - a host-based migration tool that allows migrating data between storage systems
and supports migration in an MSCS environment (for Windows 2008 and later). PowerPath Migration Enabler
works in conjunction with the host operating system (also called Host Copy) and other underlying technologies
such as Open Replicator (OR).
• Remote monitoring and management
o PowerPath Management Appliance 2.2 (PPMA 2.2)
o Systems Management Server (SMS)
o Microsoft Operations Manager
System Center Virtual Machine Manager
Microsoft’s System Center Virtual Machine Manager (SCVMM) is widely used as the primary management console for
larger Microsoft Hyper-V solutions. It can be used for deployments of all sizes, but its true value as a control platform
becomes obvious when providing a means to efficiently oversee and manage multiple servers, clusters, virtual machines,
network components, and physical resources within a virtualized environment. From the management console, you can
discover, deploy, or migrate existing virtual machines between physical servers or failover clusters. This functionality can
be used to dynamically manage physical and virtual resources within the system and to allow allocation and assignment
of virtualized resources to meet ever-changing business needs.
VMM Components:
• VMM management server - The computer on which the VMM service runs. It processes commands and controls
communications with the VMM database, the library server, and virtual machine hosts.
• VMM database - A Microsoft SQL Server database that stores VMM configuration information such as profiles
and virtual machine and service templates.
• VMM console - The program that provides access to a VMM management server in order to centrally view and
manage physical and virtual resources, such as virtual machine hosts, virtual machines, services, and library
resources.
• VMM library and VMM library server - The catalog of resources (for example, virtual hard disks, templates, and
profiles) that are used to deploy virtual machines and services. A library server hosts shared folders that are used
to store file-based resources in the VMM library.
• VMM command shell - The Windows PowerShell-based command shell that makes available the cmdlets that
perform all functions in VMM.
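As a small illustration of the command shell, the following connects to a VMM management server and lists the virtual
machines it manages; the server name is a placeholder:

    # Connect to the VMM management server (illustrative FQDN)
    Get-SCVMMServer -ComputerName "vmm01.example.com"
    # List managed VMs with their state and current Hyper-V host
    Get-SCVirtualMachine | Select-Object Name, Status, VMHost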
Figure 33. System Center Virtual Machine Manager Components
Using System Center Virtual Machine Manager allows us to manage multiple Hyper-V servers from one central location,
and to create a Microsoft cluster that contains multiple Hyper-V servers connected to shared cluster disks exposed from
the XtremIO X2 storage array. This mechanism allows virtual machines to be distributed among the various XtremIO
LUNs and Hyper-V servers, providing maximum flexibility. It also provides the ability to recover virtual machines
following a physical server crash or a maintenance activity, and to migrate a VM from one LUN to another, or from one
physical Hyper-V host to another, without downtime, all while delivering excellent performance for mission-critical
applications.
The following examples show SCVMM managing multiple virtual database servers and enterprise applications, each
generating tens of thousands of IOPS, all under a unified interface that also incorporates the physical Hyper-V servers in
the cluster.
Figure 34. System Center Virtual Machine Manager Manages Multiple Virtual Machines from Several Hyper-V Hosts
In Figure 35, we can see the IOPS and latency statistics for an intensive Hyper-V virtual machines workload. The graph
shows that IOPS are well over 200K, while the latency for all I/O operations remains below 0.6 msec, yielding
excellent application performance.
Figure 35. XtremIO X2 Overall Performance – Intensive Hyper-V Virtual Machines Workload
Figure 36 shows latency vs. IOPS during an intensive workload. Latency is mostly below 0.5 msec, with occasional
higher peaks that remain below 0.6 msec.
Figure 36. XtremIO X2 Latency vs. IOPS–Intensive Hyper-V Virtual Machines Workload
Figure 37 shows the storage capacity efficiency that can be achieved for Hyper-V virtual machines on XtremIO X2. We
can see an impressive data reduction factor of 6.1:1 (2.7:1 from deduplication and 2.3:1 from compression), which
lowers the data capacity footprint for all the virtual machines to just 449.92GB.
Figure 37. XtremIO X2 Data Savings – Intensive Hyper-V Virtual Machines Workload
Figure 38 shows the CPU utilization of the Storage Controllers during an intensive Hyper-V virtual machines workload. We
can see the excellent synergy across the X2 cluster, with all of the Active-Active Storage Controllers' CPUs equally
sharing the load for the entire process.
Figure 38. XtremIO X2 CPU Utilization – Intensive Hyper-V Virtual Machines Workload
Failover Clustering
Failover Clustering is a Windows Server feature that enables grouping multiple servers into a fault-tolerant cluster, and
provides new and improved features for software-defined data center customers and workloads running on physical
hardware or on virtual machines.
A failover cluster is a group of independent servers that work together to increase the availability and scalability of
clustered roles (formerly called clustered applications and services). The clustered servers (called nodes) are connected
by physical cables and by software. If one or more of the cluster nodes fail, other nodes begin to provide service (a
process known as failover). In addition, the clustered roles are proactively monitored to verify that they are working
properly. If they are not working, they are restarted or moved to another node.
Connecting our Hyper-V hosts to the same XtremIO X2 LUNs allows us to convert them from NTFS/ReFS to Cluster
Shared Volumes (CSV). This provides CSV functionality with a consistent, distributed namespace that clustered roles can
use to access shared storage from all nodes. With the Failover Clustering feature, users experience minimal disruption
in service.
Failover Clustering has many practical applications, including:
• Continuously available LUNs for applications such as Microsoft SQL Server and Hyper-V virtual machines
• Highly-available clustered roles that run on physical servers or on virtual machines installed on servers running
Hyper-V
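As a minimal sketch, a Hyper-V failover cluster backed by shared XtremIO LUNs can be validated and created with the
FailoverClusters PowerShell module; the node names and cluster address below are illustrative:

    # Validate the candidate nodes and their shared storage
    Test-Cluster -Node "HV-Node1","HV-Node2"
    # Form the cluster (name and static address are placeholders)
    New-Cluster -Name "HVCluster" -Node "HV-Node1","HV-Node2" -StaticAddress "10.0.0.50"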
Use of Cluster Shared Volumes in a Failover Cluster
Cluster Shared Volumes (CSV) enable multiple nodes in a failover cluster to simultaneously have read-write access to the
same LUN (disk) that is provisioned as an NTFS volume (in Windows Server 2012 R2, disks can be provisioned as
Resilient File System (ReFS) or NTFS). With CSVs, clustered roles can fail over quickly from one node to another without
requiring a change in drive ownership, or dismounting and remounting a volume. CSVs also help simplify the management
of a potentially large number of LUNs in a failover cluster.
CSVs provide a general-purpose, clustered file system layered on top of NTFS or ReFS, and bind clustered Virtual Hard
Disk (VHD/VHDX) files for clustered Hyper-V virtual machines.
This mechanism allows virtual machines to be distributed among the various XtremIO LUNs and Hyper-V servers. It
provides maximum flexibility, the ability to recover virtual machines after physical server crashes or maintenance
activities, and the ability to migrate a VM from one LUN to another, or from one physical Hyper-V host to another,
without downtime, all while delivering excellent performance for mission-critical applications.
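Converting a shared XtremIO LUN into a CSV is a two-step operation in PowerShell; the disk name shown is the default
cluster-assigned name and may differ in practice:

    # Add any shared disks that the cluster can see but has not yet claimed
    Get-ClusterAvailableDisk | Add-ClusterDisk
    # Promote a clustered disk to a Cluster Shared Volume
    Add-ClusterSharedVolume -Name "Cluster Disk 1"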
Figure 39. CSV XtremIO X2 LUNs Connected to Hyper-V Hosts
Storage Quality of Service
Storage Quality of Service (QoS) in Windows Server 2016 provides a way to centrally monitor and manage storage
performance for virtual machines using Hyper-V. The feature automatically improves the fairness of storage resource
sharing between multiple virtual machines, and allows policy-based minimum and maximum performance goals to be
configured in units of normalized IOPS.
We can use Storage QoS in Windows Server 2016 to accomplish the following:
• Mitigate noisy neighbor issues - By default, Storage QoS ensures that a single virtual machine cannot consume
all storage resources and starve other virtual machines of storage bandwidth.
• Manage Storage I/O based on workload business needs - Storage QoS policies define and enforce minimum and
maximum performance metrics for virtual machines. This provides consistent performance to virtual machines,
even in dense and overprovisioned environments. If policies cannot be met, alerts are available to track when
VMs are out of policy or are assigned invalid policies.
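Policies are created and assigned with the Storage QoS cmdlets introduced in Windows Server 2016; the policy values
and VM name below are illustrative:

    # Create a policy on the cluster: 500-5,000 normalized IOPS per flow
    New-StorageQosPolicy -Name "Silver" -MinimumIops 500 -MaximumIops 5000
    # Attach the policy to a VM's virtual hard disks
    $policy = Get-StorageQosPolicy -Name "Silver"
    Get-VM -Name "SQL01" | Get-VMHardDiskDrive |
        Set-VMHardDiskDrive -QoSPolicyID $policy.PolicyId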
Figure 40. SCVMM Components
After a Failover Cluster is created and a CSV disk is configured, Storage QoS Resource is displayed as a Cluster Core
Resource and is visible in both Failover Cluster Manager and Windows PowerShell. The intent is that the failover cluster
system will manage this resource with no manual action required.
Figure 41. Storage QoS Resource Displayed as a Cluster Core Resource in Failover Cluster Manager
Space Reclamation
Space reclamation is a key concern for thinly provisioned storage arrays operating in virtualized environments. Space
reclamation refers to the ability of the host operating system to return unused capacity to the storage array following the
removal/deletion of large files, virtual hard disks, or virtual machines.
Windows Server and Windows Hyper-V are capable of identifying the provisioning type and the UNMAP (or TRIM)
capability of a disk. Space reclamation can be triggered by file deletion, a file system level trim, or a storage optimization
operation. You can check whether your disks are identified as thin by examining the Logical Units under a specific Storage
Pool within SCVMM or, alternatively, by using the “Defragment and Optimize your Drives” option from the ‘Administrative
Tools’ section of the host’s Control Panel (if using a desktop-based installation).
Figure 42. Verifying XtremIO Storage Volume Presentation Type and Associated Space Efficiency via the Microsoft Operating
System Level "Optimize Drives" Option
At the host level, the TRIM operation and SCSI UNMAP commands will be passed directly to the array as long as TRIM is
enabled. This can be checked using the command shown in Figure 43. A value of 1 indicates that the feature is disabled,
and 0 indicates that it is enabled. TRIM is enabled by default unless an administrator disables it.
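The check referenced in Figure 43 uses the standard Windows fsutil utility:

    # 0 = TRIM/UNMAP enabled (delete notifications sent to the array); 1 = disabled
    fsutil behavior query DisableDeleteNotify
    # Re-enable delete notifications if they were turned off
    fsutil behavior set DisableDeleteNotify 0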
Figure 43. Verifying Operating System Support for TRIM/UNMAP
When a large file is deleted at the host or guest level on a supporting file system, UNMAP commands are invoked and
sent to the array. This operation does not impact the host, but it does generate significant bandwidth usage internal to
the storage array in order to zero out the reclaimed space.
For an intelligent data-aware storage array such as XtremIO X2, the zero data will not be written to disk, but instead the
address space allocated to the file will be released and flagged as containing null data. This operation will consume a
portion of available storage controller compute resources, but the impact of this is minimal.
For environments where multiple users have full control over large host-layer file spaces that cohabit on the storage array
with production systems, it may be wise to disable this feature and instead run storage optimization tasks during periodic,
pre-defined maintenance windows, minimizing any potential impact on hosted applications' realized latencies.
Once the Optimize-Volume command is issued, the resulting UNMAP commands sent to the array produce the high
bandwidth shown in Figure 44. This space reclamation operation used approximately 15% of an idle XtremIO cluster’s
compute resources; by design, this usage will be lower on a system that is already highly utilized.
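The reclamation shown in Figure 44 can be reproduced with the built-in Storage cmdlet; the drive letter is illustrative:

    # Manually trigger space reclamation (retrim) on a thinly provisioned volume
    Optimize-Volume -DriveLetter E -ReTrim -Verbose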
Figure 44. A Manually-Invoked Storage Space Reclamation Initiated using the Microsoft 'Optimize-Volume' Command, as Viewed
from XtremIO X2 Performance Graphs
Virtual Hard Disks
A virtual hard disk is a set of data blocks stored as a regular Windows file in the host operating system, with a .vhd,
.vhdx, or .vhds extension. It is important to understand the different format and type options for virtual hard disks
and how they integrate with Dell EMC XtremIO X2.
Virtual Hard Disk Format
There are three different kinds of virtual hard disk formats that are supported with either VM generation:
• VHD is supported with all Hyper-V versions and is limited to a maximum size of 2 TB. This is now considered a
legacy format (use VHDX instead for new VM deployments).
• VHDX is supported with Windows Server 2012 (or newer) Hyper-V. The VHDX format offers better resiliency in
the event of a power loss, better performance, and supports a maximum size of 64 TB. VHD files can be
converted to the VHDX format using tools such as Hyper-V Manager or PowerShell.
• VHDS (VHD Set) is supported on Windows Server 2016 (or newer) Hyper-V. VHDS is for virtual hard disks that
are shared by two or more guest VMs in support of highly-available (HA) guest VM clustering configurations.
Figure 45. Virtual Hard Disk Formats
In addition to the format, a virtual hard disk can be designated as fixed, dynamically expanding, or differencing.
Figure 46. Virtual Hard Disk Types
The dynamically expanding disk type will work well for most workloads on Dell EMC XtremIO X2. Since Dell EMC
XtremIO X2 arrays leverage thin provisioning, only data that is actually written to a virtual hard disk will consume space on
the array regardless of the disk type (fixed, dynamic, or differencing). As a result, determining the best disk type when
using XtremIO as a storage backend is mostly a function of the workload as opposed to how it will impact storage
utilization. For workloads generating very high I/O, such as Microsoft SQL Server databases, Microsoft recommends
using the fixed size virtual hard disk type for optimal performance.
A fixed virtual hard disk consumes the full amount of space from the perspective of the host server. For a dynamic virtual
hard disk, the space consumed is equal to the amount of data on the virtual disk (plus some metadata overhead), and is
more space efficient from the perspective of the host. From the perspective of the guest VM, either type of Virtual Hard
Disk shown in this example will present a full 60 GB of available space to the guest.
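The following sketch creates the two 60 GB disk variants from this example with the Hyper-V PowerShell module; the
paths are illustrative, and on XtremIO X2 both will consume array capacity only as data is actually written:

    # Dynamically expanding disk: the file grows as the guest writes data
    New-VHD -Path "C:\ClusterStorage\Volume1\VM01\data-dynamic.vhdx" -SizeBytes 60GB -Dynamic
    # Fixed disk: fully allocated up front (fast on ODX- or ReFS-backed volumes)
    New-VHD -Path "C:\ClusterStorage\Volume1\VM01\data-fixed.vhdx" -SizeBytes 60GB -Fixed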
There are some performance and management best practices to keep in mind when choosing the right kind of virtual hard
disk type for your environment.
Fixed-size virtual hard disks:
• Are recommended for virtual hard disks that are expected to experience a high level of disk activity, such as Microsoft SQL
Server, Microsoft Exchange, or OS page or swap files. For many workloads, the performance difference between
fixed and dynamic will be negligible. When formatted, they take up the full amount of space on the host server
volume.
• Are less susceptible to fragmentation at the host level.
• Take longer to copy (for example, from one host server to another over the network) because the file size is the
same as the formatted size.
• With pre-2012 versions of Hyper-V, provisioning of fixed virtual hard disks may require significant time due to lack
of native Offloaded Data Transfer (ODX) support. With Windows Server 2012 (or newer), provisioning time for
fixed virtual hard disks is significantly reduced when ODX is supported and enabled.
Dynamically expanding virtual hard disks:
• Are recommended for most virtual hard disks, except in cases of workloads with very high disk I/O.
• Require slightly more CPU and I/O overhead than fixed-size virtual hard disks, because they grow as data is
written. This usually does not impact the workload except in cases where I/O demands are very high, and the
effect is minimized even further on Dell EMC XtremIO X2 due to the higher performance of All-Flash disk pools.
• Are more susceptible to fragmentation at the host level.
• Consume very little space (for some metadata) when initially formatted, and expand as new data is written to
them by the guest VM.
• Take less time to copy to other locations than a fixed size disk because only the actual data is copied. For
example, the time required to copy a 500 GB dynamically expanding virtual hard disk that contains 20 GB of data
will be that needed to copy 20 GB of data - not 500 GB.
• Allow the host server volume to be over-provisioned. In this case, best practice is to configure alerting on the host
server to avoid unintentionally running out of space in the volume.
Differencing virtual hard disks:
• Offer some storage savings by allowing multiple Hyper-V guest VMs with identical operating systems to share a
common boot virtual hard disk.
• Require all children to use the same virtual hard disk format as the parent.
• Require new data to be written to the child virtual hard disk.
• Are created for each native Hyper-V based snapshot of a Hyper-V guest VM in order to freeze the changed data
since the last snapshot, and allow new data to be written to a new virtual hard disk file. Creating native Hyper-V
based snapshots of a Hyper-V guest VM can increase the CPU usage of storage I/O, but will probably not affect
performance noticeably unless the guest VM experiences very high I/O demands.
• Can result in performance impacts to the Hyper-V guest VM. This impact is a result of maintaining a long chain of
native Hyper-V based snapshots of the guest VM which requires reading from the virtual hard disk and checking
for the requested blocks in a chain of many differencing virtual hard disks.
• Should not be used at all or at least kept to a minimum with native Hyper-V based snapshots of Hyper-V guests,
in order to maintain optimal disk I/O performance. With Dell EMC XtremIO X2, native Hyper-V snapshots can be
minimized or even avoided altogether by leveraging array-based storage snapshots. Administrators can leverage
array-based snapshots to recover VMs and replicate data to other locations for archive or recovery.
Because of thin provisioning, space on the XtremIO X2 array is consumed only when actual data is written regardless of
the type of virtual hard disk. Choosing dynamic over fixed virtual hard disks does not improve storage space utilization on
Dell EMC XtremIO X2 arrays. Other factors such as the I/O performance of the workload would be primary considerations
when determining the type of virtual hard disk in your environment.
Using XtremIO X2 for Hyper-V VMs and Storage Migration
Microsoft provides native tools to move or migrate VMs with Windows Server 2012 and 2016 Hyper-V, so there are fewer
use cases for using SAN-based snapshots for such moves. When a guest VM is migrated live from one node to another
node within the same Hyper-V cluster configuration, no data needs to be copied or moved because all nodes in that
cluster have shared access to the underlying cluster shared volumes (CSV).
However, when an administrator needs to migrate a guest VM from one Volume to another, the data (the virtual hard
disks) must be copied to the target Volume.
When moving VMs between different XtremIO X2 volumes, the Dell EMC XtremIO X2 array leverages ODX commands,
which provide exceptional transfer rates for migrating or cloning virtual machines.
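Such a move can be initiated from SCVMM (Figure 47) or directly with the Hyper-V PowerShell module; the VM name
and target path here are placeholders:

    # Live storage migration between CSVs backed by different XtremIO X2 volumes
    Move-VMStorage -VMName "VM01" -DestinationStoragePath "C:\ClusterStorage\Volume2\VM01"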
Figure 47. Live Storage Migration between XtremIO X2 Volumes Initiated from SCVMM
Figure 48. Live Storage Migration between XtremIO X2 Volumes Leveraging ODX on the Arrays Side
EMC Storage Integrator (ESI) 5.1
The ESI for Windows Suite is designed for Microsoft administrators with responsibilities for the management and
monitoring of storage platforms and hosted applications. This software enables administrators to view, provision, and
manage block and file storage on supported Dell EMC storage systems for use with Microsoft Server and Hyper-V. For
Hyper-V virtual machines, you can create virtual hard disk (VHD and VHDX) files and pass-through SCSI disks. You can
also create host disks and cluster shared volumes. Control of the managed environment is possible using either an ESI-
specific PowerShell CLI (Command Line Interface) or the ESI GUI (Graphical User Interface), which provides a dashboard
interface for oversight of all managed components. You can run the ESI GUI as a stand-alone tool or as part of a
Microsoft Management Console (MMC) snap-in on Microsoft operating systems.
The latest release (at the time of writing this document) of ESI is version 5.1. With this release, ESI has increased
capabilities for management of XtremIO XVC volumes. This capability allows administrators to create both writable and
read-only instantaneous copies of their data and also refresh existing snapshot copies as required. ESI 5.1 officially
supports Microsoft Server 2016 and Microsoft Hyper-V 2016.
The ESI suite is comprised of the following core software components:
• ESI Installer and PowerShell Toolkit: The ESI install toolkit provides the central control layer and adapters for the
integration structure, and the ESI PowerShell kit offers scripting capabilities specific to available ESI functionality.
Both the ESI service and the PowerShell kit are installed simultaneously on the assigned ESI machine.
• ESI Service and SCOM Management Packs: The management pack binaries are installed on the Microsoft
Systems Center Operations Manager (SCOM) management group and provide integration between the ESI
Service and SCOM environmental monitoring.
• ESI Graphical User Interface: The GUI provides a management interface to oversee and manage all ESI
integrated components.
Connecting XtremIO X2 to ESI allows viewing and managing all XtremIO objects directly from ESI as shown in Figure 49.
Figure 49. XtremIO X2 Objects from ESI View
Attaching the Microsoft cluster to ESI allows viewing and managing all cluster objects directly from ESI, including hosts,
mappings, disks, and connections between the cluster volumes and LUNs in the storage array.
Figure 50. Managing Microsoft Cluster Resources and Volumes From the ESI Plugin
A powerful ability of ESI is the option to assign a LUN directly from the storage to the Hyper-V servers. This process
creates the disks on the XtremIO X2 storage array, maps them to all Hyper-V servers in the cluster, performs a rescan,
creates the file system, adds the disks to the cluster, and converts them to CSVs, all in one simple operation, as shown in
Figure 51.
Figure 51. Allocating CSV Volumes Directly from XtremIO X2 Arrays Using the ESI Plugin
Connecting the Hyper-V servers as Hypervisors to ESI provides information about all the disks that are connected to
these servers and all virtual machines and virtual disks which reside on top of them, as shown in Figure 52.
Figure 52. ESI Virtual Machines Disks Inventory
Another interesting feature of ESI is its support for adding a pass-through or VHDX disk directly from the XtremIO X2
X-Brick to a specific virtual machine. This process will create a new LUN in the storage device, map it to the relevant
Hyper-V server, and connect it to the virtual machine as shown in Figure 53.
Microsoft Hyper-V 2016 with Dell EMC XtremIO X2 White Paper
Microsoft Hyper-V 2016 with Dell EMC XtremIO X2 White Paper
Microsoft Hyper-V 2016 with Dell EMC XtremIO X2 White Paper
Microsoft Hyper-V 2016 with Dell EMC XtremIO X2 White Paper
Microsoft Hyper-V 2016 with Dell EMC XtremIO X2 White Paper
Microsoft Hyper-V 2016 with Dell EMC XtremIO X2 White Paper
Microsoft Hyper-V 2016 with Dell EMC XtremIO X2 White Paper
Microsoft Hyper-V 2016 with Dell EMC XtremIO X2 White Paper

Contenu connexe

Similaire à Microsoft Hyper-V 2016 with Dell EMC XtremIO X2 White Paper

Dell EMC XtremIO & Stratoscale White Paper
Dell EMC XtremIO & Stratoscale White PaperDell EMC XtremIO & Stratoscale White Paper
Dell EMC XtremIO & Stratoscale White Paper
Itzik Reich
 
Dondi J Vigesaa Resume Latest
Dondi J Vigesaa Resume LatestDondi J Vigesaa Resume Latest
Dondi J Vigesaa Resume Latest
Dondi Vigesaa
 
Sample_Blueprint-Fault_Tolerant_NAS
Sample_Blueprint-Fault_Tolerant_NASSample_Blueprint-Fault_Tolerant_NAS
Sample_Blueprint-Fault_Tolerant_NAS
Mike Alvarado
 
IBM BCFC White Paper - Why Choose IBM BladeCenter Foundation for Cloud
IBM BCFC White Paper - Why Choose IBM BladeCenter Foundation for CloudIBM BCFC White Paper - Why Choose IBM BladeCenter Foundation for Cloud
IBM BCFC White Paper - Why Choose IBM BladeCenter Foundation for Cloud
IBM India Smarter Computing
 
sp_p_wp_2013_v1_vmware_technology_stack___opportunities_for_isv_s_final
sp_p_wp_2013_v1_vmware_technology_stack___opportunities_for_isv_s_finalsp_p_wp_2013_v1_vmware_technology_stack___opportunities_for_isv_s_final
sp_p_wp_2013_v1_vmware_technology_stack___opportunities_for_isv_s_final
Kunal Khairnar
 
Managing Complexity in the x86 Data Center: The User Experience
Managing Complexity in the x86 Data Center: The User ExperienceManaging Complexity in the x86 Data Center: The User Experience
Managing Complexity in the x86 Data Center: The User Experience
IBM India Smarter Computing
 
Resume_Mike_Llado_current2
Resume_Mike_Llado_current2Resume_Mike_Llado_current2
Resume_Mike_Llado_current2
Michael Llado
 
The Small and Medium Enterprises utilize some important criteria when acquiri...
The Small and Medium Enterprises utilize some important criteria when acquiri...The Small and Medium Enterprises utilize some important criteria when acquiri...
The Small and Medium Enterprises utilize some important criteria when acquiri...
IBM India Smarter Computing
 

Similaire à Microsoft Hyper-V 2016 with Dell EMC XtremIO X2 White Paper (20)

Dell EMC XtremIO & Stratoscale White Paper
Dell EMC XtremIO & Stratoscale White PaperDell EMC XtremIO & Stratoscale White Paper
Dell EMC XtremIO & Stratoscale White Paper
 
Dell: Why Virtualization
Dell: Why VirtualizationDell: Why Virtualization
Dell: Why Virtualization
 
Dondi J Vigesaa Resume Latest
Dondi J Vigesaa Resume LatestDondi J Vigesaa Resume Latest
Dondi J Vigesaa Resume Latest
 
Sample_Blueprint-Fault_Tolerant_NAS
Sample_Blueprint-Fault_Tolerant_NASSample_Blueprint-Fault_Tolerant_NAS
Sample_Blueprint-Fault_Tolerant_NAS
 
Dell EMC OEM Storage Brochure
Dell EMC OEM Storage BrochureDell EMC OEM Storage Brochure
Dell EMC OEM Storage Brochure
 
Customer relationship management performance: Microsoft Dynamics on the Dell ...
Customer relationship management performance: Microsoft Dynamics on the Dell ...Customer relationship management performance: Microsoft Dynamics on the Dell ...
Customer relationship management performance: Microsoft Dynamics on the Dell ...
 
Virtualization Performance on the IBM PureFlex System
Virtualization Performance on the IBM PureFlex SystemVirtualization Performance on the IBM PureFlex System
Virtualization Performance on the IBM PureFlex System
 
DELL EMC XTREMIO X2 CSI PLUGIN INTEGRATION WITH PIVOTAL PKS A Detailed Review
DELL EMC XTREMIO X2 CSI PLUGIN INTEGRATION WITH PIVOTAL PKS A Detailed ReviewDELL EMC XTREMIO X2 CSI PLUGIN INTEGRATION WITH PIVOTAL PKS A Detailed Review
DELL EMC XTREMIO X2 CSI PLUGIN INTEGRATION WITH PIVOTAL PKS A Detailed Review
 
Red Hat Storage combined with Commvault Simpana
Red Hat Storage combined with Commvault SimpanaRed Hat Storage combined with Commvault Simpana
Red Hat Storage combined with Commvault Simpana
 
IBM BCFC White Paper - Why Choose IBM BladeCenter Foundation for Cloud
IBM BCFC White Paper - Why Choose IBM BladeCenter Foundation for CloudIBM BCFC White Paper - Why Choose IBM BladeCenter Foundation for Cloud
IBM BCFC White Paper - Why Choose IBM BladeCenter Foundation for Cloud
 
sp_p_wp_2013_v1_vmware_technology_stack___opportunities_for_isv_s_final
sp_p_wp_2013_v1_vmware_technology_stack___opportunities_for_isv_s_finalsp_p_wp_2013_v1_vmware_technology_stack___opportunities_for_isv_s_final
sp_p_wp_2013_v1_vmware_technology_stack___opportunities_for_isv_s_final
 
Managing Complexity in the x86 Data Center: The User Experience
Managing Complexity in the x86 Data Center: The User ExperienceManaging Complexity in the x86 Data Center: The User Experience
Managing Complexity in the x86 Data Center: The User Experience
 
Resume_Mike_Llado_current2
Resume_Mike_Llado_current2Resume_Mike_Llado_current2
Resume_Mike_Llado_current2
 
White Paper: Sizing EMC VNX Series for VDI Workload — An Architectural Guidel...
White Paper: Sizing EMC VNX Series for VDI Workload — An Architectural Guidel...White Paper: Sizing EMC VNX Series for VDI Workload — An Architectural Guidel...
White Paper: Sizing EMC VNX Series for VDI Workload — An Architectural Guidel...
 
Data Lake Protection - A Technical Review
Data Lake Protection - A Technical ReviewData Lake Protection - A Technical Review
Data Lake Protection - A Technical Review
 
Dell SalesPlayBook.pdf
Dell SalesPlayBook.pdfDell SalesPlayBook.pdf
Dell SalesPlayBook.pdf
 
The Small and Medium Enterprises utilize some important criteria when acquiri...
The Small and Medium Enterprises utilize some important criteria when acquiri...The Small and Medium Enterprises utilize some important criteria when acquiri...
The Small and Medium Enterprises utilize some important criteria when acquiri...
 
The IBM zEnterprise EC12
The IBM zEnterprise EC12The IBM zEnterprise EC12
The IBM zEnterprise EC12
 
Solution Brief HPE StoreOnce backup with Veeam
Solution Brief HPE StoreOnce backup with VeeamSolution Brief HPE StoreOnce backup with Veeam
Solution Brief HPE StoreOnce backup with Veeam
 
White Paper: Rethink Storage: Transform the Data Center with EMC ViPR Softwar...
White Paper: Rethink Storage: Transform the Data Center with EMC ViPR Softwar...White Paper: Rethink Storage: Transform the Data Center with EMC ViPR Softwar...
White Paper: Rethink Storage: Transform the Data Center with EMC ViPR Softwar...
 

Plus de Itzik Reich

Plus de Itzik Reich (8)

Reference architecture xtrem-io-x2-with-citrix-xendesktop-7-16
Reference architecture xtrem-io-x2-with-citrix-xendesktop-7-16Reference architecture xtrem-io-x2-with-citrix-xendesktop-7-16
Reference architecture xtrem-io-x2-with-citrix-xendesktop-7-16
 
Best practices for running Microsoft sql server on xtremIO X2_h16920
Best practices for running Microsoft sql server on xtremIO X2_h16920Best practices for running Microsoft sql server on xtremIO X2_h16920
Best practices for running Microsoft sql server on xtremIO X2_h16920
 
consolidating and protecting virtualized enterprise environments with Dell EM...
consolidating and protecting virtualized enterprise environments with Dell EM...consolidating and protecting virtualized enterprise environments with Dell EM...
consolidating and protecting virtualized enterprise environments with Dell EM...
 
Itzik Reich-EMC World 2015-Best Practices for running virtualized workloads o...
Itzik Reich-EMC World 2015-Best Practices for running virtualized workloads o...Itzik Reich-EMC World 2015-Best Practices for running virtualized workloads o...
Itzik Reich-EMC World 2015-Best Practices for running virtualized workloads o...
 
VMUG ISRAEL November 2012, EMC session by Itzik Reich
VMUG ISRAEL November 2012, EMC session by Itzik ReichVMUG ISRAEL November 2012, EMC session by Itzik Reich
VMUG ISRAEL November 2012, EMC session by Itzik Reich
 
Bca1931 final
Bca1931 finalBca1931 final
Bca1931 final
 
Vce vdi reference_architecture_knowledgeworkerenvironments
Vce vdi reference_architecture_knowledgeworkerenvironmentsVce vdi reference_architecture_knowledgeworkerenvironments
Vce vdi reference_architecture_knowledgeworkerenvironments
 
Emc world svpg68_2011_05_06_final
Emc world svpg68_2011_05_06_finalEmc world svpg68_2011_05_06_final
Emc world svpg68_2011_05_06_final
 

Dernier

Modular Monolith - a Practical Alternative to Microservices @ Devoxx UK 2024
Modular Monolith - a Practical Alternative to Microservices @ Devoxx UK 2024Modular Monolith - a Practical Alternative to Microservices @ Devoxx UK 2024
Modular Monolith - a Practical Alternative to Microservices @ Devoxx UK 2024
Victor Rentea
 
Finding Java's Hidden Performance Traps @ DevoxxUK 2024
Finding Java's Hidden Performance Traps @ DevoxxUK 2024Finding Java's Hidden Performance Traps @ DevoxxUK 2024
Finding Java's Hidden Performance Traps @ DevoxxUK 2024
Victor Rentea
 

Dernier (20)

AXA XL - Insurer Innovation Award Americas 2024
AXA XL - Insurer Innovation Award Americas 2024AXA XL - Insurer Innovation Award Americas 2024
AXA XL - Insurer Innovation Award Americas 2024
 
presentation ICT roal in 21st century education
presentation ICT roal in 21st century educationpresentation ICT roal in 21st century education
presentation ICT roal in 21st century education
 
Exploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone ProcessorsExploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone Processors
 
Spring Boot vs Quarkus the ultimate battle - DevoxxUK
Spring Boot vs Quarkus the ultimate battle - DevoxxUKSpring Boot vs Quarkus the ultimate battle - DevoxxUK
Spring Boot vs Quarkus the ultimate battle - DevoxxUK
 
Apidays New York 2024 - Passkeys: Developing APIs to enable passwordless auth...
Apidays New York 2024 - Passkeys: Developing APIs to enable passwordless auth...Apidays New York 2024 - Passkeys: Developing APIs to enable passwordless auth...
Apidays New York 2024 - Passkeys: Developing APIs to enable passwordless auth...
 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected Worker
 
MINDCTI Revenue Release Quarter One 2024
MINDCTI Revenue Release Quarter One 2024MINDCTI Revenue Release Quarter One 2024
MINDCTI Revenue Release Quarter One 2024
 
Exploring Multimodal Embeddings with Milvus
Exploring Multimodal Embeddings with MilvusExploring Multimodal Embeddings with Milvus
Exploring Multimodal Embeddings with Milvus
 
Navigating the Deluge_ Dubai Floods and the Resilience of Dubai International...
Navigating the Deluge_ Dubai Floods and the Resilience of Dubai International...Navigating the Deluge_ Dubai Floods and the Resilience of Dubai International...
Navigating the Deluge_ Dubai Floods and the Resilience of Dubai International...
 
DEV meet-up UiPath Document Understanding May 7 2024 Amsterdam
DEV meet-up UiPath Document Understanding May 7 2024 AmsterdamDEV meet-up UiPath Document Understanding May 7 2024 Amsterdam
DEV meet-up UiPath Document Understanding May 7 2024 Amsterdam
 
Web Form Automation for Bonterra Impact Management (fka Social Solutions Apri...
Web Form Automation for Bonterra Impact Management (fka Social Solutions Apri...Web Form Automation for Bonterra Impact Management (fka Social Solutions Apri...
Web Form Automation for Bonterra Impact Management (fka Social Solutions Apri...
 
Modular Monolith - a Practical Alternative to Microservices @ Devoxx UK 2024
Modular Monolith - a Practical Alternative to Microservices @ Devoxx UK 2024Modular Monolith - a Practical Alternative to Microservices @ Devoxx UK 2024
Modular Monolith - a Practical Alternative to Microservices @ Devoxx UK 2024
 
FWD Group - Insurer Innovation Award 2024
FWD Group - Insurer Innovation Award 2024FWD Group - Insurer Innovation Award 2024
FWD Group - Insurer Innovation Award 2024
 
Strategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
Strategize a Smooth Tenant-to-tenant Migration and Copilot TakeoffStrategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
Strategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
 
Apidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, Adobe
Apidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, AdobeApidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, Adobe
Apidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, Adobe
 
Finding Java's Hidden Performance Traps @ DevoxxUK 2024
Finding Java's Hidden Performance Traps @ DevoxxUK 2024Finding Java's Hidden Performance Traps @ DevoxxUK 2024
Finding Java's Hidden Performance Traps @ DevoxxUK 2024
 
AWS Community Day CPH - Three problems of Terraform
AWS Community Day CPH - Three problems of TerraformAWS Community Day CPH - Three problems of Terraform
AWS Community Day CPH - Three problems of Terraform
 
2024: Domino Containers - The Next Step. News from the Domino Container commu...
2024: Domino Containers - The Next Step. News from the Domino Container commu...2024: Domino Containers - The Next Step. News from the Domino Container commu...
2024: Domino Containers - The Next Step. News from the Domino Container commu...
 
Emergent Methods: Multi-lingual narrative tracking in the news - real-time ex...
Emergent Methods: Multi-lingual narrative tracking in the news - real-time ex...Emergent Methods: Multi-lingual narrative tracking in the news - real-time ex...
Emergent Methods: Multi-lingual narrative tracking in the news - real-time ex...
 
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
 

Microsoft Hyper-V 2016 with Dell EMC XtremIO X2 White Paper

  • 1. DELL EMC XtremIO X2 All-Flash Storage with Microsoft Windows Server and Hyper-V 2016 © 2018 Dell Inc. or its subsidiaries. DELL EMC XTREMIO X2 ALL-FLASH STORAGE WITH MICROSOFT WINDOWS SERVER AND HYPER-V 2016 Abstract This white paper describes the components, design, functionality, and advantages of hosting Microsoft Windows Server and Hyper-V-2016-based enterprise infrastructures on Dell EMC XtremIO X2 All-Flash storage array. January, 2018 WHITE PAPER
Executive Summary

Microsoft® Hyper-V® and Dell EMC XtremIO X2™ are feature-rich solutions that together provide a diverse range of configurations to solve key business objectives such as performance and resiliency. This white paper describes the components, design, and functionality of a Hyper-V cluster managed by System Center VMM 2016 running consolidated virtualized enterprise applications hosted on a Dell EMC XtremIO X2 All-Flash storage array. It discusses and highlights the advantages to users whose enterprise IT operations and applications are already virtualized, or who are considering hosting virtualized enterprise application deployments on the Dell EMC XtremIO X2 All-Flash array. The primary issues examined include:

• Performance of consolidated virtualized enterprise applications
• Management and monitoring efficiencies
• Business continuity and copy data management considerations

Audience

This white paper is intended for those actively involved in the management, deployment, and assessment of storage arrays and virtualized solutions in an organization. This audience comprises, amongst others: storage and virtualization administrators involved in management and operational activities, data center architects with responsibilities for planning and designing data center infrastructures, and system administrators and storage engineers in charge of deployment and integration.

Business Case

Data centers are mission-critical in any large enterprise and typically cost tens to hundreds of millions of dollars to build. As the data center is one of the most financially concentrated assets of any business, ensuring that the appropriate infrastructural elements and functionality are present is fundamental to the success of any organization. Failure to do so means that an appropriate return on investment may not be realized, and the ability of the business to execute at desired levels will be in jeopardy.

Enterprise data center infrastructures typically comprise a mix of networking infrastructure, compute and data storage layers, and the management harness required to ensure the desired interoperability, management, and orchestration. The decisions faced when choosing the appropriate solutions for the various infrastructure components are non-trivial and have the potential to impact a business long after the initial purchase or integration phase has been completed. For this reason, it is imperative that the components evaluated and eventually utilized as building blocks within a modern data center are verified for both functionality and reliability.

Storage performs a key function within any enterprise data center environment, as almost all applications require some form of persistent storage to perform their designated tasks. The ever-increasing importance of the storage layer within the modern data center can be seen in the continuing improvements in performance and capacity densities of modern data-aware storage platforms. This increased density enables ongoing consolidation of business-critical workloads onto the modern storage array, increasing its overall prominence as a focal point for the success of the data center. The Dell EMC XtremIO X2 All-Flash storage array can service millions of IOPS while maintaining consistently low latency.
At the same time, since customers are consolidating and replacing multiple legacy arrays with higher-performing and higher-density modern All-Flash storage arrays, these systems also need to provide excellent data protection and extremely high availability. The Dell EMC XtremIO X2 platform is one such storage solution that satisfies multiple use cases, due to its flash-centric architecture and its ability to seamlessly integrate and scale in unison as the data center grows to meet ever-increasing data storage and access requirements.
Overview

This paper primarily focuses on the Microsoft Hyper-V based data center. Microsoft recently released Windows Server 2016 and Windows Hyper-V 2016. These releases include a large number of new features designed to enhance the capabilities and integration options available to enterprise data centers. A number of these improvements are referenced when discussing the building blocks of a Microsoft All-Flash data center. In addition, a number of corresponding improvements relevant to end users of the Dell EMC XtremIO X2 array in relation to Microsoft-based infrastructures are discussed throughout this document.

With XtremIO X2, data center administrators have the ability to consolidate entire environments of transactional databases, data warehouses, and business applications into a much smaller footprint and still deliver consistent and predictable performance to the supported applications. XtremIO X2 allows IT departments to rapidly deliver I/O throughput and capacity to the enterprise data center, helps reduce database sprawl, supports rapid provisioning, and enables automated centralized management for databases and virtual workloads. This paper describes these enterprise capabilities, along with the multiple integration mechanisms available to adopters of Microsoft modern data centers who choose Dell EMC's best-of-breed All-Flash storage. Combinations of these capabilities are discussed throughout the document.

This white paper explains to those involved in the planning or management of a Microsoft data center how XtremIO X2 All-Flash arrays fit into the enterprise solution, and what they should be aware of in relation to available design and operational efficiencies. We ask the reader to consider this information and use it to make appropriate choices in the design and deployment of their own data center infrastructures.

Potential challenges surrounding data storage and security prevent many organizations from tapping into the promise of the storage utility provided by public cloud providers. That is not to say that these organizations will forego the obvious benefits of cloud deployment models and orchestration capabilities; it only means that they will look for internal resources to deliver similar efficiencies with quantifiable gains. To realize these benefits, IT organizations are tasked with the identification and validation of components and solutions to satisfy the internal drive for greater efficiencies and increased business value.

Scale, budget, and functionality will be the determining factors in making the correct choice for any IT infrastructure project. That said, once the required deterministic performance levels are understood, the choices become increasingly straightforward. For enterprise architectures designed to host business-critical applications and end-user environments, the obvious choice is to examine and plan for the integration of shared All-Flash storage. As time passes, a deeper understanding of the benefits All-Flash storage provides for enterprise environments is becoming more prevalent across the industry. Early adopters and recent arrivals to the All-Flash revolution are now the majority, with more choosing to deploy All-Flash every day.
There are many reasons for this, and although each deployment has its unique considerations, the underlying motive is that All-Flash storage solves today's data center storage problems and offers significant new efficiencies. Intelligent All-Flash storage offers capabilities which improve existing processes and define tomorrow's business workflows. As the clear market leader for All-Flash storage, XtremIO X2 leads this revolution, and also carries the mantle as Dell EMC's (and the wider storage industry's) fastest-growing product ever.

Throughout this paper, the Dell EMC XtremIO X2 All-Flash array's ability to meet and exceed the requirements of a modern Microsoft virtualized data center is highlighted, explained, and demonstrated. As more IT organizations choose to deploy their data center solutions using Microsoft virtualization technologies, Dell EMC XtremIO X2 continues to expand its integration capabilities for these environments, while still providing best-in-class performance, reliability, and storage management simplicity. The following sections describe the relevant technologies, highlight the most important features and functionality, and show the reader how these can combine to deliver value to a customer's data center and business.
Solution Overview

The Windows Server platform leverages Hyper-V for virtualization technology. Initially offered with Windows Server 2008, Hyper-V has matured with each release to include many new features and enhancements, evolving into a mature, robust, proven virtualization platform. At a basic level, it is a layer of software that presents the physical host server hardware resources in an optimized and virtualized manner to one or more guest virtual machines (VMs). Hyper-V hosts (also referred to as nodes when clustered) can greatly enhance utilization of physical hardware (such as processors, memory, NICs, and power supplies) by allowing many VMs to share these resources at the same time. Hyper-V Manager and related management tools, such as Failover Cluster Manager, Microsoft System Center Virtual Machine Manager (SCVMM), and PowerShell, offer administrators greater control and flexibility for managing host and VM resources.

The solution shown in Figure 1 represents a four-node Hyper-V virtualized distributed data center environment managed by System Center VMM and connected to Cluster Shared Volumes. The consolidated virtualized enterprise applications run on the production site, which includes Oracle and Microsoft SQL database workloads, as well as additional data warehousing profiles. For the purposes of the solution described in this white paper, we consider a pseudo-company for which these are the primary workloads, considered essential to the continued fulfillment of key business operational objectives. They should, at all times, behave in a consistent, reliable, and expected manner. In case of a hardware failure, a redundant system should become operational with minimal service disruption.

The following sections describe the hardware layer of our solution, providing an in-depth view of our XtremIO X2 array and the features and benefits it provides to Hyper-V environments, as well as the software layer, providing configuration details for Hyper-V environments and Dell EMC XtremIO X2 tools for Microsoft environments such as ESI, AppSync, and PowerShell modules.

Figure 1. Design Architecture: Hyper-V Cluster Managed by System Center VMM 2016
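To illustrate the PowerShell-based host and VM management mentioned above, the following is a minimal sketch using the Hyper-V module that ships with the role. The host name, VM name, and Cluster Shared Volume path are hypothetical placeholders, not names from this solution:

    # Query VM state and resource usage on a cluster node (host name is illustrative)
    Get-VM -ComputerName "HV-Node1" |
        Select-Object Name, State, CPUUsage, MemoryAssigned

    # Provision a Generation 2 VM backed by a VHDX on a Cluster Shared Volume
    # (sizing loosely mirrors the OLTP VM profile in Table 2 below)
    New-VM -Name "SQL-VM01" -MemoryStartupBytes 8GB -Generation 2 `
           -NewVHDPath "C:\ClusterStorage\Volume1\SQL-VM01.vhdx" `
           -NewVHDSizeBytes 256GB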
Table 1. Solution Hardware

• DELL EMC XtremIO X2 (quantity: 1). Configuration: two Storage Controllers (SCs), each with two dual-socket Haswell CPUs and 346GB RAM; DAEs configured with 18 x 400GB SSDs. Notes: XtremIO X2-S, 18 x 400GB drives.
• Brocade 6510 SAN switch (quantity: 1). Configuration: 32/16 Gbps FC switches. Notes: 2 switches per site, dual FC fabric configuration.
• Mellanox MSX1016 10GbE (quantity: 1). Configuration: 10/1 Gbps Ethernet switches. Notes: infrastructure Ethernet switch.
• PowerEdge FC630 (quantity: 4). Configuration: Intel Xeon CPU E5-2695 v4 @ 2.10GHz, 524 GB. Notes: Windows 2016 v1607 including the Hyper-V role.

Table 2. Solution Software

• MSSQL Server 2017 VM (for SCVMM) (quantity: 2): 8 vCPU, 16 GB memory, 100 GB VHDX
• System Center VMM 2016 VM (quantity: 1): 4 vCPU, 16 GB memory, 80 GB VHDX
• Oracle 12c VM (DW workload) (quantity: 4): 8 vCPU, 16 GB memory, 256 GB VHDX
• MSSQL Server 2017 VM (OLTP workload) (quantity: 4): 4 vCPU, 8 GB memory, 256 GB VHDX
• Windows 10 VM (VDI workload) (quantity: 12): 2 vCPU, 4 GB memory, 60 GB VHDX
• PowerShell Plugin for XtremIO X2 2.0.3 (quantity: 1)
• Dell EMC ESI Plugin 5.1 (quantity: 1)
Dell EMC XtremIO X2 for Hyper-V Environments

Dell EMC's XtremIO X2 is an enterprise-class scalable All-Flash storage array that provides rich data services with high performance. It is designed from the ground up to unlock flash technology's full performance potential by uniquely leveraging the characteristics of SSDs, and it uses advanced inline data reduction methods to reduce the physical data that must be stored on the disks.

The XtremIO X2 storage system uses industry-standard components and proprietary intelligent software to deliver unparalleled levels of performance, achieving consistent low latency for up to millions of IOPS. It comes with a simple, easy-to-use interface for storage administrators and fits a wide variety of use cases for customers in need of a fast and efficient storage system for their data centers, requiring very little pre-provisioning preparation or planning.

The XtremIO X2 storage system serves many use cases in the IT world, due to its high performance and advanced abilities. One major use case is virtualized environments and cloud computing. Figure 2 shows XtremIO X2's performance in an intensive, live Hyper-V production environment. We can see extremely high IOPS (~1.6M) handled by a four-X-Brick XtremIO X2 cluster with latency mostly below 1 msec. In addition, we can see an impressive data reduction factor of 6.6:1 (2.8:1 for deduplication and 2.4:1 for compression), which lowers the physical footprint of the solution.

Figure 2. Intensive Hyper-V Production Environment Workload from an XtremIO X2 Array Perspective

XtremIO leverages flash to deliver value across multiple dimensions:
• Performance (consistent low latency and up to millions of IOPS)
• Scalability (using a scale-out and scale-up architecture)
• Storage efficiency (using data reduction techniques such as deduplication, compression, and thin provisioning)
• Data protection (with a proprietary flash-optimized algorithm called XDP)
• Environment consolidation (using XtremIO Virtual Copies or Microsoft ODX)
• Integration between Microsoft and Dell EMC storage technologies, providing ease of management, backup, recovery, and application awareness
• Rapid deployment and protection of Hyper-V environments
XtremIO X2 Overview

XtremIO X2 is the new generation of Dell EMC's All-Flash Array storage system. It adds enhancements and flexibility to the high proficiency and performance of the previous generation. Features such as scale-up for a more flexible system, Write Boost for a more responsive, higher-performing storage array, NVRAM for improved data availability, and a new web-based UI for managing the storage array and monitoring its alerts and performance statistics all add the extra value and advancement required in the evolving world of computer infrastructure.

The XtremIO X2 Storage Array uses building blocks called X-Bricks. Each X-Brick has its own compute, bandwidth, and storage resources, and can be clustered together with additional X-Bricks to grow in both performance and capacity (scale-out). Each X-Brick can also grow individually in terms of capacity, with an option to add up to 72 SSDs in each X-Brick.

The XtremIO architecture is based on a metadata-centric, content-aware system, which streamlines data operations efficiently without requiring any post-write data transfer for maintenance purposes (data protection, data reduction, and so on are all done inline). The system uniformly distributes the data across all SSDs in all X-Bricks in the system using unique fingerprints of the incoming data, and controls access using metadata tables. This contributes to an extremely balanced system across all X-Bricks in terms of compute power, storage bandwidth, and capacity. Using the same unique fingerprints, XtremIO provides exceptional, always-on inline data deduplication, which highly benefits virtualized environments. Together with its data compression and thin provisioning capabilities (both also inline and always-on), it achieves unparalleled data reduction rates.

System operation is controlled by storage administrators via a stand-alone dedicated Linux-based server called the XtremIO Management Server (XMS). An intuitive user interface is used to manage and monitor the storage cluster and its performance. The XMS can be either a physical or a virtual server and can manage multiple XtremIO clusters. With its intelligent architecture, XtremIO provides a storage system that is easy to set up, needs zero tuning by the client, and does not require complex capacity or data protection planning, as this is handled autonomously by the system.

Architecture

The XtremIO X2 Storage System is comprised of a set of X-Bricks that together form a cluster. This is the basic building block of an XtremIO array. There are two types of X2 X-Bricks available: X2-S and X2-R. X2-S is for environments whose storage needs are more I/O-intensive than capacity-intensive, as it uses smaller SSDs and less RAM. An effective use of the X2-S type is for environments that have high data reduction ratios (a high compression ratio or a lot of duplicated data), which significantly lower the capacity footprint of the data. X2-R X-Brick clusters are made for capacity-intensive environments, with bigger disks, more RAM, and bigger expansion potential in future releases. The two X-Brick types cannot be mixed together in a single system, so the decision of which type is suitable for your environment must be made in advance.
Each X-Brick is comprised of:
• Two 1U Storage Controllers (SCs), each with:
  o Two dual socket Haswell CPUs
  o 346GB RAM (for X2-S) or 1TB RAM (for X2-R)
  o Two 1/10GbE iSCSI ports
  o Two user interface interchangeable ports (either 4/8/16Gb FC or 1/10GbE iSCSI)
  o Two 56Gb/s InfiniBand ports
  o One 100/1000/10000 Mb/s management port
  o One 1Gb/s IPMI port
  o Two redundant power supply units (PSUs)
• One 2U Disk Array Enclosure (DAE), containing:
  o Up to 72 SSDs of size 400GB (for X2-S) or 1.92TB (for X2-R)
  o Two redundant SAS interconnect modules
  o Two redundant power supply units (PSUs)

Figure 3. An XtremIO X2 X-Brick

The Storage Controllers on each X-Brick are connected to their DAE via redundant SAS interconnects. The XtremIO X2 storage array can have one or multiple X-Bricks. Multiple X-Bricks are clustered together into an XtremIO X2 array, using an InfiniBand switch and the Storage Controllers' InfiniBand ports for back-end connectivity between Storage Controllers and DAEs across all X-Bricks in the cluster. The system uses the Remote Direct Memory Access (RDMA) protocol for this back-end connectivity, ensuring a highly available, ultra-low-latency network for communication between all components of the cluster. The InfiniBand switches are the same size (1U) for both X2-S and X2-R cluster types, but include 12 ports for X2-S and 36 ports for X2-R. By leveraging RDMA, an XtremIO X2 system is essentially a single shared-memory space spanning all of its Storage Controllers.

The 1Gb/s management port is configured with an IPv4 address. The XMS, which is the cluster's management software, communicates with the Storage Controllers via this management interface, sending storage management requests such as creating an XtremIO X2 volume or mapping a volume to an Initiator Group. The 1Gb/s IPMI port interconnects the X-Brick's two Storage Controllers. IPMI connectivity is strictly within the bounds of an X-Brick, and is never connected to an IPMI port of a Storage Controller in another X-Brick in the cluster.

Multi-Dimensional Scaling

With X2, an XtremIO cluster has both scale-out and scale-up capabilities, enabling flexible growth adapted to the customer's unique workload and needs. Scale-out is implemented by adding X-Bricks to an existing cluster. The addition of an X-Brick to an existing cluster linearly increases its compute power, bandwidth, and capacity. Each X-Brick added to the cluster includes two Storage Controllers, each with its own CPU power, RAM, and FC/iSCSI ports to service the clients of the environment, together with a DAE with SSDs to increase the capacity provided by the cluster.

Adding an X-Brick to scale out an XtremIO cluster is intended for environments that grow in both capacity and performance needs, such as an increase in the number of active users and their required data, or a database which grows in data and complexity. An XtremIO cluster can start with any number of X-Bricks as per the system's initial requirements, and can currently grow to up to 4 X-Bricks (for both X2-S and X2-R). Future code upgrades of XtremIO X2 will allow up to 8 supported X-Bricks for X2-R arrays.
Figure 4. Scale-Out Capabilities: Single to Multiple X2 X-Brick Clusters

Scale-up of an XtremIO cluster is implemented by adding SSDs to existing DAEs in the cluster. This is intended for cases where there is a need to grow in capacity but not in performance. As an example, this may occur when the same number of users need to store an increasing amount of data, or when data usage growth reaches the capacity limits, but not the performance limits, of the current infrastructure.

Each DAE can hold up to 72 SSDs, and is divided into up to two groups of SSDs called Data Protection Groups (DPGs). Each DPG can hold a minimum of 18 SSDs and can grow by increments of 6 SSDs up to a maximum of 36 SSDs, thus supporting configurations of 18, 24, 30, or 36 SSDs per DPG, with up to 2 DPGs in a DAE. SSDs are 400GB per drive for X2-S clusters and 1.92TB per drive for X2-R clusters. Future releases will allow customers to populate their X2-R clusters with 3.84TB drives, doubling the physical capacity available in their clusters.

Figure 5. Multi-Dimensional Scaling
XIOS and the I/O Flow

Each Storage Controller within the XtremIO cluster runs a specially customized, lightweight, Linux-based operating system as the base platform of the array. The XtremIO Operating System (XIOS) handles all activities within a Storage Controller and runs on top of the Linux-based operating system. XIOS is optimized for handling high I/O rates and manages the system's functional modules, RDMA communication, monitoring, and more.

Figure 6. X-Brick Components

XIOS has a proprietary process scheduling-and-handling algorithm designed to meet the specific requirements of a content-aware, low-latency, high-performing storage system. It provides efficient scheduling and data access, full exploitation of CPU resources, optimized inter-sub-process communication, and minimized dependency between sub-processes running on different sockets.

The XtremIO Operating System gathers a variety of metadata tables on incoming data, including the data fingerprint, its location in the system, mappings, and reference counts. The metadata is used as the primary source of information for performing system operations such as uniformly laying out incoming data, implementing inline data reduction services, and accessing data on read requests. The metadata is also used to optimize integration and communication between the storage system and external applications (such as VMware XCOPY and Microsoft ODX).

Regardless of which Storage Controller receives an I/O request from the host, multiple Storage Controllers on multiple X-Bricks interact to process the request. The data layout in the XtremIO X2 system ensures that all components share the load and participate evenly in processing I/O operations.

An important functionality of XIOS is its data reduction capability, achieved using inline data deduplication and compression. Data deduplication and data compression complement each other: data deduplication removes redundancies, whereas data compression compresses the already deduplicated data before it is written to the flash media. XtremIO is an always-on, thin-provisioned storage system that realizes further storage savings by never writing a block of zeros to the disks.

XtremIO connects with existing SANs through 16Gb/s Fibre Channel or 10Gb/s Ethernet iSCSI to service hosts' I/O requests.
XtremIO Write I/O Flow

In a write operation to the storage array, the incoming data stream reaches any one of the Active-Active Storage Controllers and is broken into data blocks. For every data block, the array fingerprints the data with a unique identifier and stores it in the cluster's mapping table. The mapping table maps between the host Logical Block Addresses (LBAs) and the block fingerprints, and between the block fingerprints and their physical locations in the array (the DAE, SSD, and block location offset). The block fingerprint has two objectives:

1. To determine whether the block is a duplicate of a block that already exists in the array
2. To distribute blocks uniformly across the cluster (the array divides the list of potential fingerprints among the Storage Controllers, giving each Storage Controller a range of fingerprints to control)

The mathematical process that calculates the fingerprints results in a uniform distribution of fingerprint values, thus evenly distributing blocks across all Storage Controllers in the cluster.

A write operation works as follows:
1. A new write request reaches the cluster.
2. The new write is broken into data blocks.
3. For each data block:
   A. A fingerprint is calculated for the block.
   B. An LBA-to-fingerprint mapping is created for this write request.
   C. The fingerprint is checked to see if it already exists in the array.
      • If it exists, the reference count for this fingerprint is incremented by one.
      • If it does not exist:
        a. A location is chosen on the array where the block will be written (distributed uniformly across the array according to fingerprint value).
        b. A fingerprint-to-physical location mapping is created.
        c. The data is compressed.
        d. The data is written.
        e. The reference count for the fingerprint is set to one.

Deduplicated writes are naturally much faster than original writes. Once the array identifies a write as a duplicate, it updates the LBA-to-fingerprint mapping for the write and updates the reference count for this fingerprint. No further data is written to the array and the operation completes quickly, adding an extra benefit to inline deduplication.
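The following is a minimal PowerShell sketch of the content-aware write flow described above, using in-memory hashtables for the two mapping tables. It is purely illustrative (XtremIO's internal hash and data structures are proprietary); SHA-1 merely stands in for the fingerprint function:

    # Illustrative only: the two mapping tables described above
    $LbaToFingerprint   = @{}   # host LBA -> content fingerprint
    $FingerprintToBlock = @{}   # fingerprint -> reference count and location

    function Write-Block {
        param([long]$Lba, [byte[]]$Data)

        # Fingerprint the block (SHA-1 here purely for illustration)
        $sha = [System.Security.Cryptography.SHA1]::Create()
        $fp  = [BitConverter]::ToString($sha.ComputeHash($Data))

        # Record the LBA-to-fingerprint mapping for this write
        $LbaToFingerprint[$Lba] = $fp

        if ($FingerprintToBlock.ContainsKey($fp)) {
            # Duplicate block: increment the reference count; no data is written
            $FingerprintToBlock[$fp].RefCount++
        }
        else {
            # Unique block: a location is derived from the fingerprint value,
            # then the block would be compressed and written with RefCount = 1
            $FingerprintToBlock[$fp] = @{ RefCount = 1; Location = $fp.Substring(0, 2) }
        }
    }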
Figure 7 shows an example of an incoming data stream which contains duplicate blocks with identical fingerprints.

Figure 7. Incoming Data Stream Example with Duplicate Blocks

As mentioned, fingerprints also help to decide where to write the block in the array. Figure 8 shows the incoming stream, after duplicates were removed, as it is being written to the array. The blocks are distributed to the correct Storage Controllers according to their fingerprint values, which ensures a uniform distribution of the data across the cluster. The blocks are transferred to their destinations in the array using Remote Direct Memory Access (RDMA) via the low-latency InfiniBand network.

Figure 8. Incoming Deduplicated Data Stream Written to the Storage Controllers

The actual write of the data blocks to the SSDs is done asynchronously. At the time of the application write, the system places the data blocks in the in-memory write buffer and protects them using journaling to local and remote NVRAMs. Once the data is written to the local NVRAM and replicated to a remote one, the Storage Controller returns an acknowledgment to the host. This guarantees a quick response to the host, ensures low latency of I/O traffic, and preserves the data in case of a system failure (power-related or other). When enough blocks are collected in the buffer (to fill a full stripe), the system writes them to the SSDs on the DAE. Figure 9 shows data being written to the DAEs after a full stripe of data blocks is collected in each Storage Controller.
Figure 9. Full Stripe of Blocks Written to the DAEs

XtremIO Read I/O Flow

In a read operation, the system first performs a lookup of the logical address in the LBA-to-fingerprint mapping. The fingerprint found is then looked up in the fingerprint-to-physical mapping, and the data is retrieved from the correct physical location. As with writes, the read load is also evenly shared across the cluster, as blocks are evenly distributed and all volumes are accessible across all X-Bricks. If the requested block size is larger than the data block size, the system performs parallel data block reads across the cluster and assembles them into bigger blocks before returning them to the application. A compressed data block is decompressed before it is delivered to the host.

XtremIO has a memory-based read cache in each Storage Controller. The read cache is organized by content fingerprint. Blocks whose contents are more likely to be read are placed in the read cache for fast retrieval.

A read operation works as follows:
1. A new read request reaches the cluster.
2. The read request is analyzed to determine the LBAs of all data blocks, and a buffer is created to hold the data.
3. For each LBA:
   A. The LBA-to-fingerprint mapping is checked to find the fingerprint of each data block to be read.
   B. The fingerprint-to-physical location mapping is checked to find the physical location of each of the data blocks.
   C. The requested data block is read from its physical location (the read cache or its place on disk) and transmitted, via RDMA over InfiniBand, to the buffer created in step 2 on the Storage Controller that processes the request.
4. The system assembles the requested read from all data blocks transmitted to the buffer and sends it back to the host.
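Continuing the illustrative sketch from the write flow above, a read is essentially two table lookups followed by the data fetch:

    function Read-Block {
        param([long]$Lba)

        # LBA -> fingerprint, then fingerprint -> physical location
        $fp = $LbaToFingerprint[$Lba]
        if (-not $fp) { throw "No mapping for LBA $Lba" }
        $block = $FingerprintToBlock[$fp]

        # In the real array the block is fetched from the read cache or SSD,
        # decompressed, and assembled into the host buffer via RDMA
        "LBA $Lba -> fingerprint $fp -> location $($block.Location)"
    }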
System Features

The XtremIO X2 Storage Array offers a wide range of built-in features that require no special license. The architecture and implementation of these features is unique to XtremIO and is designed around the capabilities and limitations of flash media. Key features included in the system are described below.

Inline Data Reduction

XtremIO's unique Inline Data Reduction is achieved by two mechanisms: Inline Data Deduplication and Inline Data Compression.

Data Deduplication

Inline Data Deduplication is the removal of duplicate I/O blocks from a stream of data before it is written to the flash media. XtremIO inline deduplication is always on, meaning no configuration is needed for this important feature. The deduplication is global, meaning no duplicate blocks are written anywhere in the entire array. As an inline and global process, no resource-consuming background processes or additional reads and writes (which are mainly associated with post-processing deduplication) are necessary, thus increasing SSD endurance and eliminating performance degradation.

As mentioned earlier, deduplication on XtremIO is performed using the content's fingerprints (see XtremIO Write I/O Flow above). The fingerprints are also used for uniform distribution of data blocks across the array, which provides inherent load balancing for performance and enhances flash wear-level efficiency, since the data never needs to be rewritten or rebalanced.

XtremIO uses a content-aware, globally deduplicated Unified Data Cache for highly efficient data deduplication. The system's unique content-aware storage architecture provides a substantially larger cache size with a small DRAM allocation. Therefore, XtremIO is an ideal solution for difficult data access patterns, such as the "boot storms" that are common in Hyper-V environments.

XtremIO has excellent data deduplication ratios, especially for virtualized environments. With it, SSD usage is smarter, flash longevity is maximized, the logical storage capacity is multiplied, and total cost of ownership is reduced. Figure 10 shows the CPU utilization of our Storage Controllers during a Hyper-V production workload. When new blocks are written to the system, the hash calculation is distributed across all Storage Controllers. The excellent synergy across the X2 cluster can be seen: all the Active-Active Storage Controllers' CPUs share the load, and CPU utilization is virtually equal across all of them for the entire workload.

Figure 10. XtremIO X2 CPU Utilization

Data Compression

Inline data compression is the compression of data before it is written to the flash media. XtremIO automatically compresses data after all duplications are removed, ensuring that compression is performed only on unique data blocks. The compression is performed in real time and not as a post-processing operation. This way, it does not overuse the SSDs or impact performance. Compression rates depend on the type of data written.
Data compression complements data deduplication in many cases, and saves storage capacity by storing only unique data blocks in the most efficient manner. Data compression is always inline and is never performed as a post-processing activity. Therefore, the data is written only once, which increases the overall endurance of the flash array's SSDs.

In a Hyper-V environment, deduplication dramatically reduces the required capacity for the virtual servers, while compression further reduces the specific user data. As a result, an increased number of virtual servers can be managed by a single X-Brick. Using the two data reduction techniques, less physical capacity is required to store the data, increasing the storage array's efficiency and dramatically reducing the $/GB cost of storage, even when compared to hybrid storage systems. Figure 11 shows the benefits and capacity savings of the deduplication-compression combination.

Figure 11. Data Deduplication and Data Compression Demonstrated

In the above example, the twelve data blocks written by the host are first deduplicated to four data blocks, demonstrating a 3:1 data deduplication ratio. Following the data deduplication process, the four data blocks are then each compressed by a ratio of 2:1, resulting in a total data reduction ratio of 6:1. Only this reduced data is written to the flash media.

Thin Provisioning

XtremIO storage is natively thin provisioned, using a small internal block size. All volumes in the system are thin provisioned, meaning that the system consumes capacity only when it is needed. No storage space is ever pre-allocated before writing.

Because of XtremIO's content-aware architecture, blocks can be stored at any location in the system (using the metadata to reference their location), and the data is written only when unique blocks are received. Therefore, as opposed to disk-oriented architectures, no space creeping or garbage collection is necessary on XtremIO, volume fragmentation does not occur in the array, and no defragmentation utilities are needed.

This feature enables consistent performance and data management across the entire life cycle of a volume, regardless of the system capacity utilization or the write patterns of clients. It allows frequent manual and automatic reclamation of unused space directly from NTFS/ReFS and virtual machines (see the example after this list), providing the following benefits:
• The allocated disks can be used optimally, and the actual space reports are more accurate.
• More efficient snapshots, as blocks that are no longer needed are not protected by additional snapshots.
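As a brief example of the manual reclamation mentioned above, on a Windows Server 2016 host the built-in Optimize-Volume cmdlet can send TRIM/UNMAP hints for the free NTFS/ReFS space down to the thin-provisioned array; the drive letter below is illustrative:

    # Send TRIM/UNMAP hints for the free space on volume D: so the array
    # can return the unused blocks to its free pool
    Optimize-Volume -DriveLetter D -ReTrim -Verbose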
Integrated Copy Data Management

XtremIO pioneered the concept of integrated Copy Data Management (iCDM): the ability to consolidate both primary data and its associated copies on the same scale-out All-Flash array for unprecedented agility and efficiency. XtremIO is one of a kind in its ability to consolidate multiple workloads and entire business processes safely and efficiently, providing organizations with a new level of agility and self-service for on-demand procedures. XtremIO provides consolidation, supports on-demand copy operations at scale, and still maintains delivery of all performance SLAs in a consistent and predictable way.

Consolidation of primary data and its copies on the same array has numerous benefits:
• It can make development and testing activities up to 50% faster, by quickly creating copies of production code for development and testing purposes, and then refreshing the output back into production on the same array for the full cycle of code upgrades. This dramatically reduces complexity, infrastructure needs, and development risks, and increases the quality of the product.
• Production data can be extracted and pushed to all downstream analytics applications on demand as a simple in-memory operation. Data can be copied with high performance and with the same SLA as production copies, without compromising production SLAs. XtremIO offers this on demand, as both self-service and automated workflows, for both application and infrastructure teams.
• Operations such as patches, upgrades, and tuning tests can be performed quickly using copies of production data. Diagnosing application and database problems can be done using these copies, and the changes can be applied back to production by returning copies. The same approach can be used for testing new technologies and integrating them into production environments.
• iCDM can also be used for data protection purposes, as it enables creating many point-in-time copies at short intervals for recovery. Application integration and orchestration policies can be set to auto-manage data protection, using different SLAs.

XtremIO Virtual Copies

For all iCDM purposes, XtremIO uses its own implementation of snapshots, called XtremIO Virtual Copies (XVCs). XVCs are created by capturing the state of data in volumes at a particular point in time, allowing users to access that data when needed, regardless of the state of the source volume (even deletion). They allow any access type and can be taken from either a source volume or another Virtual Copy.

XtremIO's Virtual Copy technology is implemented by leveraging the content-aware capabilities of the system, and is optimized for SSDs, with a unique metadata tree structure that directs I/O to the right timestamp of the data. This allows efficient copy creation that can sustain high performance, while maximizing media endurance.

Figure 12. A Metadata Tree Structure Example of XVCs
When creating a Virtual Copy, the system only generates a pointer to the ancestor metadata of the actual data in the system, making the operation very quick. This operation has no impact on the system and does not consume any capacity when created, unlike traditional snapshots, which may need to reserve space or copy the metadata for each snapshot. Virtual Copy capacity consumption occurs only when changes are made to a copy of the data. The system then updates the metadata of the changed volume to reflect the new write, and stores its blocks in the system using the standard write flow process.

The system supports the creation of Virtual Copies on a single volume or a set of volumes. All Virtual Copies of the volumes in a set are cross-consistent and contain the exact same point in time. This can be done manually, by selecting a set of volumes for copying, or by placing volumes in a Consistency Group and making copies of that Consistency Group.

Virtual Copy deletions are lightweight and proportional only to the amount of changed blocks between the entities. The system uses its content-aware capabilities to handle copy deletions. Each data block has a counter that indicates the number of instances of that block in the system. If a block is referenced by some copy of the data, it is not deleted. Any block whose counter value reaches zero is marked as deleted and is overwritten when new unique data enters the system.

With XVCs, XtremIO's iCDM offers the following tools and workflows to provide its consolidation capabilities:
• Consistency Groups (CGs): grouping of volumes, allowing Virtual Copies to be taken on a group of volumes as a single entity.
• Snapshot Sets: a group of Virtual Copies of volumes taken together, using CGs or a group of manually chosen volumes.
• Protection Copies: immutable read-only copies created for data protection and recovery purposes.
• Protection Scheduler: used for local protection of a volume or a CG. It can be defined using intervals of seconds/minutes/hours, or set to a specific time of day or week. It has a retention policy based on the number of copies required or the configured age limit of the oldest snapshot.
• Restore from Protection: restores a production volume or CG from one of its descendant Snapshot Sets.
• Repurposing Copies: Virtual Copies configured with changing access types (read-write / read-only / no-access) for alternating purposes.
• Refresh a Repurposing Copy: refreshes a Virtual Copy of a volume or a CG from the parent object or other related copies with relevant updated data. The refresh does not require volume provisioning changes to take effect, but it can be discovered only by host-side logical volume management operations.

XtremIO Data Protection

XtremIO Data Protection (XDP) provides the storage system with highly efficient, "self-healing", double-parity data protection. It requires very little capacity overhead and metadata space, and does not require dedicated spare drives for rebuilds. Instead, XDP leverages the "hot space" concept, where any free space available in the array can be utilized for failed drive reconstructions. The system always reserves sufficient distributed capacity for performing at least a single drive rebuild.
In the rare case of a double SSD failure, the second drive is rebuilt only if there is enough space to rebuild it as well, or when one of the failed SSDs is replaced.

The XDP algorithm provides:
• N+2 drive protection
• Capacity overhead of only 5.5%-11% (depending on the number of disks in the protection group; see the back-of-envelope check after this list)
• 60% more write efficiency than RAID 1
• Superior flash endurance to any RAID algorithm, due to the smaller number of writes and the even distribution of data
• Automatic rebuilds that are faster than traditional RAID algorithms
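The overhead range cited above can be sanity-checked with a quick calculation, assuming roughly two parity drives' worth of capacity per Data Protection Group, as implied by N+2 protection:

    # Overhead ~= parity capacity / DPG size, for each supported DPG size
    foreach ($ssds in 18, 24, 30, 36) {
        "{0} SSDs per DPG -> {1:P1} capacity overhead" -f $ssds, (2 / $ssds)
    }
    # 18 -> ~11.1%, 36 -> ~5.6%, matching the 5.5%-11% range cited above
    # (allowing for rounding)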
As shown in Figure 13, XDP uses a variation of N+2 row and diagonal parity, which provides protection from two simultaneous SSD errors. An X-Brick DAE may contain up to 72 SSDs, organized in two Data Protection Groups (DPGs). XDP is managed independently at the DPG level. A DPG of 36 SSDs results in a capacity overhead of only 5.5% for its data protection needs.

Figure 13. N+2 Row and Diagonal Parity

Data at Rest Encryption

Where needed, Data at Rest Encryption (DARE) provides a solution for securing critical data even when the media is removed from the array. XtremIO arrays utilize a high-performance inline encryption technique to ensure that all data stored on the array is unusable if the SSD media is removed. This prevents unauthorized access in the event of theft or loss during transport, and makes it possible to return or replace failed components containing sensitive data. DARE has been established as a mandatory requirement in several industries, such as health care, banking, and government institutions.

At the heart of XtremIO's DARE solution lies the use of Self-Encrypting Drive (SED) technology. An SED uses dedicated hardware to encrypt and decrypt data as it is written to or read from the drive. Offloading the encryption task to the SSDs enables XtremIO to maintain the same software architecture whether encryption is enabled or disabled on the array. All of XtremIO's features and services (including Inline Data Reduction, XtremIO Data Protection, Thin Provisioning, XtremIO Virtual Copies, etc.) are available on an encrypted cluster as well as on a non-encrypted cluster, and performance is not impacted when using encryption.

A unique Data Encryption Key (DEK) is created during the drive manufacturing process and does not leave the drive at any time. The DEK can be erased or changed, rendering its current data unreadable forever. To ensure that only authorized hosts can access the data on the SED, the DEK is protected by an Authentication Key (AK) that resides on the Storage Controller. Without the AK, the DEK is encrypted and cannot be used to encrypt or decrypt data.
Figure 14. Data at Rest Encryption in XtremIO

Write Boost

In the new X2 storage array, the write flow algorithm was significantly improved to increase array performance, following the rise in compute power and disk speeds, and taking into account common applications' I/O patterns and block sizes. As described earlier for the write I/O flow, the commit to the host is now asynchronous to the actual writing of the blocks to disk. The commit is sent after the changes are written to the local and remote NVRAMs for protection; the blocks are written to disk only later, at a time that best optimizes the system's activity.

In addition to the shortened procedure from write to commit, the new algorithm addresses an issue relevant to many applications and clients: a high percentage of small I/Os creating load on the storage system and increasing latency, especially for bigger I/O blocks. Examining customers' applications and I/O patterns, it was found that many I/Os from common applications come in small blocks, under 16K pages, creating high loads on the storage array. Figure 15 shows the block size histogram from the entire XtremIO installed base; the percentage of blocks smaller than 16KB is highly evident. The new algorithm addresses this issue by aggregating small writes into bigger blocks in the array before writing them to disk, making them less demanding on the system, which can now handle larger I/Os faster. The test results for the improved algorithm are impressive: in several cases the latency improvement is around 400%, allowing XtremIO X2 to address application latency requirements of 0.5 msec or lower.

Figure 15. XtremIO Installed Base Block Size Histogram
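The aggregation idea described above can be sketched as follows; the stripe size and I/O sizes are arbitrary assumptions for illustration, and the array's actual buffer and stripe management are internal to XIOS:

    # Buffer incoming small writes and flush once a full stripe accumulates
    $stripeKB = 128
    $bufferKB = 0
    foreach ($ioKB in 4, 8, 4, 16, 4, 8, 64, 4, 8, 4, 16, 4) {
        $bufferKB += $ioKB
        if ($bufferKB -ge $stripeKB) {
            "Writing one full $stripeKB KB stripe to the DAE"
            $bufferKB -= $stripeKB
        }
    }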
XtremIO Management Server

The XtremIO Management Server (XMS) is the component that manages XtremIO clusters (up to 8 clusters). It is preinstalled with CLI, GUI, and RESTful API interfaces, and can be installed on a dedicated physical server or a virtual machine. The XMS manages the cluster through the management ports on both Storage Controllers of the first X-Brick in the cluster, using a standard TCP/IP connection for communication. It is not part of the XtremIO data path, and thus can be disconnected from an XtremIO cluster without jeopardizing I/O tasks. A failure of the XMS only affects monitoring and configuration activities, such as creating and attaching volumes. A virtual XMS is naturally less vulnerable to such failures.

The GUI is based on a new Web User Interface (WebUI), which is accessible via any browser and provides easy-to-use tools for performing most system operations (certain management operations must be performed using the CLI). Some of its features are described in the following sections.

Dashboard

The Dashboard window presents a main overview of the cluster. It has three panels:
• Health: the main overview of the system's health status, alerts, etc.
• Performance (shown in Figure 16): the main overview of the system's overall performance and the top used Volumes and Initiator Groups.
• Capacity (shown in Figure 17): the main overview of the system's physical capacity and data savings.

Figure 16. XtremIO WebUI – Dashboard – Performance Panel
Figure 17. XtremIO WebUI – Dashboard – Capacity Panel

The main navigation menu bar is located on the left side of the UI, allowing users to easily select the desired XtremIO management actions. The main menu options are: Dashboard, Notifications, Configuration, Reports, Hardware, and Inventory.

Notifications

From the Notifications menu, we can navigate to the Events window (shown in Figure 18) and the Alerts window, showing major and minor issues related to the cluster's health and operations.

Figure 18. XtremIO WebUI – Notifications – Events Window
Configuration

The Configuration window displays the cluster's logical components: Volumes (shown in Figure 19), Consistency Groups, Snapshot Sets, Initiator Groups, Initiators, and Protection Schedulers. Through this window, we can create and modify these entities, using the action panel at the top right.

Figure 19. XtremIO WebUI – Configuration

Reports

From the Reports menu, we can navigate to different windows showing graphs and data related to different aspects of the system's activities, mainly system performance and resource utilization. The menu includes the following options: Overview, Performance, Blocks, Latency, CPU Utilization, Capacity, Savings, Endurance, SSD Balance, Usage, and User Defined reports. We can view reports using different resolutions of time and components: choosing specific entities with the "Select Entity" option (shown in Figure 20) that appears at the top of the Reports menus, or selecting predefined and custom days and times for which to review reports (shown in Figure 21).
Figure 20. XtremIO WebUI – Reports – Selecting Specific Entities to View

Figure 21. XtremIO WebUI – Reports – Selecting Specific Times to View

The Overview window shows basic reports on the system, including performance, weekly I/O patterns, and storage capacity information. The Performance window shows extensive performance reports such as Bandwidth, IOPS, and Latency information. The Blocks window shows block distribution and statistics of I/Os within the system. The Latency window (shown in Figure 22) shows latency reports along different dimensions, such as block sizes and IOPS metrics. The CPU Utilization window shows the CPU utilization of all Storage Controllers in the system.
Figure 22. XtremIO WebUI – Reports – Latency Window

The Capacity window (shown in Figure 23) shows capacity statistics and the change in storage capacity over time. The Savings window shows data reduction statistics and their change over time. The Endurance window shows SSD endurance status and statistics. The SSD Balance window shows the data distribution across the SSDs. The Usage window shows Bandwidth and IOPS usage, both overall and split into reads and writes. The User Defined window allows users to define their own reports.

Figure 23. XtremIO WebUI – Reports – Capacity Window

Hardware

In the Hardware menu, we can see a visual overview of the cluster and its X-Bricks. When viewing the FRONT panel, we can select and highlight any component of an X-Brick and view information about it in the Information panel on the right. In Figure 24 we can see extended information on Storage Controller 1 in X-Brick 1; more detailed information, such as local disks and status LEDs, is also available. Clicking the "OPEN DAE" button shows a visual illustration of the X-Brick's DAE and its SSDs, with additional information on each SSD and Row Controller.
Figure 24. XtremIO WebUI – Hardware – Front Panel

In the BACK panel, we can view an illustration of the rear of the X-Brick and see all of its physical connections together with an internal view. Rear connections include FC, Power, iSCSI, SAS, Management, IPMI, and InfiniBand. The view can be filtered using the "Show Connections" list at the top right. An example of this view is shown in Figure 25.

Figure 25. XtremIO WebUI – Hardware – Back Panel – Show Connections

Inventory

In the Inventory menu, we can see all components of the system with their relevant information, including: XMS, Clusters, X-Bricks, Storage Controllers, Local Disks, Storage Controller PSUs, XEnvs, Data Protection Groups, SSDs, DAEs, DAE Controllers, DAE PSUs, DAE Row Controllers, InfiniBand Switches, and NVRAMs.

As mentioned, other interfaces are available to monitor and manage an XtremIO cluster through the XMS server. The system's Command Line Interface (CLI) can be used for everything the GUI supports and more; a RESTful API, also preinstalled, allows clusters to be managed with HTTP-based commands; and a PowerShell API Module makes it possible to administer XtremIO clusters from a Windows PowerShell console.
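As a brief illustration of the RESTful API, the following PowerShell sketch lists volume objects. This is a minimal sketch, assuming the REST API v2 endpoint layout; the XMS hostname is a placeholder, and exact paths and property names should be verified against the XtremIO RESTful API guide for your XMS version.

```powershell
# Minimal sketch: list XtremIO volumes via the XMS RESTful API.
# "xms.example.com" is a placeholder; the /api/json/v2/types/... layout
# follows REST API v2 conventions and should be verified per XMS version.
$cred = Get-Credential   # an XMS account, e.g. 'admin'
$base = 'https://xms.example.com/api/json/v2/types'

# The top-level query returns an index of volume objects with href links.
# On PowerShell 6+ add -SkipCertificateCheck if the XMS certificate is self-signed.
$index = Invoke-RestMethod -Uri "$base/volumes" -Credential $cred

foreach ($v in $index.volumes) {
    # Each href points to the full object; 'content' holds its properties
    (Invoke-RestMethod -Uri $v.href -Credential $cred).content |
        Select-Object name, 'vol-size'
}
```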
XtremIO X2 Integration with Microsoft Technologies

This section summarizes the Microsoft platform integration capabilities offered by the Dell EMC XtremIO X2 storage system. These include XtremIO's PowerShell library for integration into Microsoft automation frameworks, SCVMM and EMC Storage Integrator (ESI) for centralized storage management, and Offloaded Data Transfer (ODX) support for improved performance in Microsoft enterprise environments. These enhancements are offered by XtremIO X2 to create a seamless and efficient management experience for infrastructures built upon Microsoft Hyper-V virtualization technologies.

Offloaded Data Transfer (ODX)

The Offloaded Data Transfer (ODX) feature was first made available with the release of Windows Server 2012. It allows large segments of data, and even entire virtual machines, to be moved or copied significantly faster than with legacy methods. By offloading the file copy or transfer operation to the storage array, ODX helps minimize fabric- and host-level contention for available resources, thus increasing realized performance and reducing overall I/O latencies.

ODX uses a token-based mechanism for reading and writing data on intelligent storage arrays. Instead of routing the data through the host, a small token is copied between the source and destination. The token serves as a confirmed point-in-time representation of the data. XtremIO implements ODX support using the array's native XVC functionality. When an ODX token is created for a given file, the array creates a read-only snapshot of the data segment, preserving the point in time of the data being copied. The array then performs an internal copy operation to duplicate the data as requested, and the original file is made available at the destination without the need to route data over the network or through the host(s).

Figure 26. Implementation Sequence of Offloaded Data Transfer (ODX) Activities on Dell EMC XtremIO Platform

The host-initiated ODX action leverages a built-in capability of the supporting storage array to inform the storage layer of what data needs to be copied and to where. The storage confirms that the request is possible and then performs the desired copy or repeated write activity without the need to send or receive all the data over the network connecting the host(s) and storage. For a storage array like XtremIO X2, whose architecture is custom-built to exploit the capabilities of flash storage via a content-addressable indirection engine, this activity completes extremely quickly since it is an in-memory metadata update operation.
ODX is enabled by default and can be used for any file copy operation where the file is greater than 256 KB in size. ODX works on NTFS-partitioned disks, and does not support files that are compressed, encrypted, or protected by BitLocker. Windows Server or Hyper-V automatically detects whether the source and target storage volumes support ODX. If not, the operation falls back to legacy copy methods without intervention from the user, and data is transmitted via the host.

ODX is particularly useful for copying large files between file shares, deploying virtual machines from templates, and performing live migration of virtual machine storage between supporting volumes. In addition, end users benefit from ODX eliminating repeated zero writes when creating fixed-size VHDX files. ODX operations require virtual SCSI adapters with VHDX files, pass-through disks, or connectivity via virtual Fibre Channel adapters.

When deploying virtual machines within SCVMM from a Library template, administrators can tell whether ODX is supported by observing whether the Create Virtual Machine job invokes 'Rapid deploy using SAN copy', as shown in Figure 27. In addition, virtual machine creation time measured in seconds rather than minutes also indicates that ODX was used.

Figure 27. SCVMM "Cloning Virtual Machine" Operation Utilizing ODX
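Administrators who want to confirm a host's ODX status can query the FilterSupportedFeaturesMode registry value from PowerShell; a value of 0 (the default) means ODX is enabled, and 1 means copies revert to legacy behavior. A quick sketch:

```powershell
# Check whether ODX is enabled on this host (0 = enabled, 1 = disabled)
Get-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem' |
    Select-Object FilterSupportedFeaturesMode

# Re-enable ODX if it has been turned off (run elevated)
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem' `
    -Name FilterSupportedFeaturesMode -Value 0
```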
Below is a comparison of cloning a 150 GB virtual machine with and without ODX. We can see the outstanding improvement in copy rates when running on top of Dell EMC XtremIO X2's ODX implementation.

Figure 28. An Example of Copy Performance for ODX-Enabled Operation

Figure 29. An Example of Copy Performance for a Non-ODX Copy Task

Resilient File System (ReFS)

The Resilient File System (ReFS) is Microsoft's newest file system, designed to maximize data availability, scale efficiently to large data sets across diverse workloads, and provide data integrity by means of resiliency to corruption. It seeks to address an expanding set of storage scenarios and establish a foundation for future innovations. ReFS introduces features that can precisely detect and fix corruptions while remaining online, helping to provide increased integrity and availability for data. Key capabilities include:

• Integrity streams – ReFS uses checksums for metadata and optionally for file data, giving ReFS the ability to reliably detect corruptions.
• Storage Spaces integration – When used in conjunction with a mirror or parity space, ReFS can automatically repair detected corruptions using the alternate copy of the data provided by Storage Spaces. Repair processes are both localized to the area of corruption and performed online, requiring no volume downtime.
• Salvaging data – If a volume becomes corrupted and an alternate copy of the corrupted data does not exist, ReFS removes the corrupt data from the namespace. ReFS generally keeps the volume online while it handles most non-correctable corruptions, although rare cases require ReFS to take the volume offline.
• Proactive error correction – In addition to validating data before reads and writes, ReFS introduces a data integrity scanner, known as a scrubber. The scrubber periodically scans the volume, identifying latent corruptions and proactively triggering a repair of corrupt data.
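As a minimal sketch of preparing an XtremIO X2 LUN as an ReFS volume for Hyper-V (the disk number, drive letter, and label are assumptions for illustration), the 64 KB allocation unit size used here matches the recommendation discussed below:

```powershell
# Sketch: bring a newly presented XtremIO X2 LUN online and format it as ReFS.
# Disk number 2, drive letter V:, and the volume label are assumptions.
Initialize-Disk -Number 2 -PartitionStyle GPT
New-Partition -DiskNumber 2 -UseMaximumSize -DriveLetter V
Format-Volume -DriveLetter V -FileSystem ReFS `
    -AllocationUnitSize 64KB -NewFileSystemLabel 'HyperV-VMs'
```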
Microsoft introduced a new feature called Accelerated VHDX Operations in ReFS with Windows Server 2016. ReFS serves as Microsoft's data center file system of the future, with improved VHDX and VHD functions such as:

• Creating and extending a virtual hard disk
• Merging checkpoints (previously called Hyper-V snapshots)
• Supporting backups based on production checkpoints

One of the core features of ReFS is the use of metadata to protect data integrity. This metadata is used when creating or extending a virtual hard disk; instead of zeroing out new data blocks on disk, the file system writes metadata. Thus, when an application such as Hyper-V asks to read zeroed-out blocks, the file system checks the metadata and responds with "nothing to see here."

Checkpoints can be costly in terms of IOPS, impacting all virtual machines on that LUN. When a checkpoint is merged, the last modified blocks of an AVHD/X file are written back into the parent VHD/X file. Hyper-V uses checkpoints to perform consistent backups (not relying on VSS at the volume level) to achieve greater scalability. However, merging many checkpoints is costly. Instead, ReFS performs a metadata operation and deletes the unwanted data. This means that no data actually moves on the volume, so merging a checkpoint is much quicker and has less impact on services hosted on that disk.

Formatting ReFS with a 64 KB allocation unit size is recommended for optimal Hyper-V operation. Figure 30 and Figure 31 show the large difference between the allocation of a VHDX disk on an ReFS file system versus an NTFS file system.

Figure 30. An Example of VHDX File Creation on an NTFS File System

Figure 31. An Example of Accelerated VHDX Operations for VHDX File Creation on an ReFS File System

MPIO Best Practices

The Windows Server operating system and Hyper-V 2012 (or later) natively support MPIO with the built-in Device Specific Module (DSM) that is bundled with the OS. Although the basic functionality offered with the Microsoft DSM is supported, Dell EMC recommends the use of Dell EMC PowerPath™ MPIO management on server hosts and VMs instead of the Microsoft DSM. Dell EMC PowerPath is a server-resident software solution designed to enhance performance and application availability as follows:

• Combines automatic load balancing, path failover, and multiple-path I/O capabilities into one integrated package.
• Enhances application availability by providing load balancing, automatic path failover, and recovery functionality.
• Supports servers, including cluster servers, connected to Dell EMC and third-party arrays.
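For hosts that remain on the in-box Microsoft DSM rather than PowerPath, the sketch below claims XtremIO devices and sets Round Robin as the default policy. The 'XtremIO'/'XtremApp' identifiers mirror the device identity shown in Figure 32, but they are assumptions here; verify them on your hosts with Get-MPIOAvailableHW before claiming devices.

```powershell
# Sketch: configure the native Microsoft DSM for XtremIO devices.
# Enable the MPIO feature if it is not already installed (reboot required)
Enable-WindowsOptionalFeature -Online -FeatureName MultiPathIO

# Claim XtremIO devices under MSDSM; IDs should match Get-MPIOAvailableHW
New-MSDSMSupportedHW -VendorId 'XtremIO' -ProductId 'XtremApp'

# Confirm and, if needed, set Round Robin (RR) as the global default policy
Get-MSDSMGlobalDefaultLoadBalancePolicy
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR
```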
Windows and Hyper-V hosts default to Round Robin with Dell EMC XtremIO X2 storage, unless the administrator sets a different default MPIO policy on the host.

Figure 32. XtremIO XtremApp Multi-Path Disk Device Properties

PowerPath Multipathing with XtremIO

XtremIO supports multipathing using EMC PowerPath on Windows. PowerPath versions 5.7 SP2 and above provide a Loadable Array Module (LAM) for XtremIO array devices. With this support, XtremIO devices running versions 2.2 and above are managed under the XtremIO class.

PowerPath provides enhanced path management capabilities for up to 32 paths per logical device, as well as intelligent dynamic I/O load-balancing functionality specifically designed to work within the Microsoft Multipath I/O (MPIO) framework. Having multiple paths enables the host to access a storage device even if a specific path is unavailable. Multiple paths also share the I/O traffic to a storage device, using intelligent load-balancing policies that enhance I/O performance and increase application availability. EMC PowerPath is the recommended multipathing choice. PowerPath features include:

• Multiple paths – provides higher availability and I/O performance, including support on Server Core and Hyper-V (available in Windows Server 2008 and later).
• Support for PowerPath in Hyper-V VMs (guest operating systems), including:
  o iSCSI through the software initiator
  o Virtual Fibre Channel for Hyper-V (available in Windows Server 2012 and above), which provides the guest operating system with unmediated access to a SAN through a vHBA
• Path management insight capabilities – PowerPath characterizes I/O patterns and aids in diagnosing I/O problems due to flaky paths or unexpected latency values. Metrics are provided on:
  o Read and write – in MB/second per LUN
  o Latency distribution – the high and low watermarks per path
  o Retries – the number of failed I/Os on a specific path
• Autostandby – automatically detects intermittent I/O failures and places affected paths into autostandby (also known as flaky paths).
• PowerPath Migration Enabler – a host-based migration tool that allows migrating data between storage systems and supports migration in an MSCS environment (for Windows 2008 and later). PowerPath Migration Enabler works in conjunction with the host operating system (also called Host Copy) and other underlying technologies such as Open Replicator (OR).
• Remote monitoring and management:
  o PowerPath Management Appliance 2.2 (PPMA 2.2)
  o Systems Management Server (SMS)
  o Microsoft Operations Manager

System Center Virtual Machine Manager

Microsoft's System Center Virtual Machine Manager (SCVMM) is widely used as the primary management console for larger Microsoft Hyper-V solutions. It can be used for deployments of all sizes, but its true value as a control platform becomes obvious when it provides a means to efficiently oversee and manage multiple servers, clusters, virtual machines, network components, and physical resources within a virtualized environment. From the management console, you can discover, deploy, or migrate existing virtual machines between physical servers or failover clusters. This functionality can be used to dynamically manage physical and virtual resources within the system and to allow allocation and assignment of virtualized resources to meet ever-changing business needs.

VMM components:

• VMM management server – the computer on which the VMM service runs. It processes commands and controls communications with the VMM database, the library server, and virtual machine hosts.
• VMM database – a Microsoft SQL Server database that stores VMM configuration information such as profiles and virtual machine and service templates.
• VMM console – the program that provides access to a VMM management server in order to centrally view and manage physical and virtual resources, such as virtual machine hosts, virtual machines, services, and library resources.
• VMM library and VMM library server – the catalog of resources (for example, virtual hard disks, templates, and profiles) that are used to deploy virtual machines and services. A library server hosts shared folders that are used to store file-based resources in the VMM library.
• VMM command shell – the Windows PowerShell-based command shell that makes available the cmdlets that perform all functions in VMM.
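As a small taste of the command shell (a sketch only; the VMM server name is a placeholder, and the cmdlets come from the virtualmachinemanager module installed with the VMM console):

```powershell
# Sketch: connect to a VMM management server and inventory the fabric.
Import-Module virtualmachinemanager
Get-SCVMMServer -ComputerName 'vmm01.example.com'

# List managed Hyper-V hosts and the virtual machines placed on them
Get-SCVMHost | Select-Object Name, OverallState
Get-SCVirtualMachine | Select-Object Name, HostName, Status
```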
Figure 33. System Center Virtual Machine Manager Components

Using System Center Virtual Machine Manager allows us to manage multiple Hyper-V servers from one central location, and to create a Microsoft cluster that contains multiple Hyper-V servers connected to shared cluster disks exposed from the XtremIO X2 storage array. This mechanism allows virtual machines to be distributed between the various XtremIO LUNs and Hyper-V servers, providing maximum flexibility. It also provides the ability to recover virtual machines following a physical server crash or a maintenance activity. In addition, it provides the ability to migrate a VM from one LUN to another, or from one physical Hyper-V host to another, without downtime, all while delivering excellent performance for mission-critical applications.

The following examples show the management of multiple virtual database servers and enterprise applications, each generating tens of thousands of IOPS, all managed through SCVMM under a unified interface that incorporates the physical Hyper-V servers in the cluster.
Figure 34. System Center Virtual Machine Manager Manages Multiple Virtual Machines from Several Hyper-V Hosts

In Figure 35, we can see the IOPS and latency statistics for an intensive Hyper-V virtual machine workload. The graph shows that IOPS are well over 200K, while the latency for all I/O operations remains below 0.6 msec, yielding excellent application performance.

Figure 35. XtremIO X2 Overall Performance – Intensive Hyper-V Virtual Machines Workload

Figure 36 shows latency versus IOPS during the intensive workload. Latency is mostly below 0.5 msec, with occasional higher peaks that always remain below 0.6 msec.
Figure 36. XtremIO X2 Latency vs. IOPS – Intensive Hyper-V Virtual Machines Workload

Figure 37 shows the storage capacity efficiency that can be achieved for Hyper-V virtual machines on XtremIO X2. We can see an impressive data reduction factor of 6.1:1 (2.7:1 from deduplication and 2.3:1 from compression) that lowers the data capacity footprint to just 449.92 GB for all the virtual machines.

Figure 37. XtremIO X2 Data Savings – Intensive Hyper-V Virtual Machines Workload

Figure 38 shows the CPU utilization of the Storage Controllers during the intensive Hyper-V virtual machine workload. We can see the excellent synergy across the X2 cluster, with the CPUs of all Active-Active Storage Controllers equally sharing the load throughout the entire process.
Figure 38. XtremIO X2 CPU Utilization – Intensive Hyper-V Virtual Machines Workload

Failover Clustering

Failover Clustering is a Windows Server feature that enables grouping multiple servers into a fault-tolerant cluster, and provides new and improved features for software-defined data center customers and workloads running on physical hardware or on virtual machines.

A failover cluster is a group of independent servers that work together to increase the availability and scalability of clustered roles (formerly called clustered applications and services). The clustered servers (called nodes) are connected by physical cables and by software. If one or more of the cluster nodes fails, other nodes begin to provide its services (a process known as failover). In addition, the clustered roles are proactively monitored to verify that they are working properly. If they are not working, they are restarted or moved to another node.

Connecting our Hyper-V hosts to the same XtremIO X2 LUNs allows us to convert them from NTFS/ReFS volumes to Cluster Shared Volumes (CSV). This provides CSV functionality with a consistent, distributed namespace that clustered roles can use to access shared storage from all nodes. With the Failover Clustering feature, users experience a minimum of disruption in service.

Failover Clustering has many practical applications, including:

• Continuously available LUNs for applications such as Microsoft SQL Server and Hyper-V virtual machines
• Highly available clustered roles that run on physical servers or on virtual machines installed on servers running Hyper-V

Use of Cluster Shared Volumes in a Failover Cluster

Cluster Shared Volumes (CSV) enable multiple nodes in a failover cluster to simultaneously have read-write access to the same LUN (disk) that is provisioned as an NTFS volume (in Windows Server 2012 R2 and later, disks can be provisioned as Resilient File System (ReFS) or NTFS). With CSVs, clustered roles can fail over quickly from one node to another without requiring a change in drive ownership, or dismounting and remounting a volume. CSVs also help simplify the management of a potentially large number of LUNs in a failover cluster. CSVs provide a general-purpose, clustered file system layered on top of NTFS or ReFS, and bind clustered Virtual Hard Disk (VHD/VHDX) files for clustered Hyper-V virtual machines.
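From PowerShell, converting a shared XtremIO X2 LUN into a CSV is a short operation. The sketch below assumes the LUN has already been presented to every node; the clustered disk name ('Cluster Disk 1') is whatever name the cluster assigned, as shown by Get-ClusterResource.

```powershell
# Sketch: add a shared XtremIO X2 LUN to the cluster and convert it to a CSV.
Get-ClusterAvailableDisk | Add-ClusterDisk

# The CSV then appears on every node under C:\ClusterStorage\
Add-ClusterSharedVolume -Name 'Cluster Disk 1'
Get-ClusterSharedVolume
```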
The CSV mechanism allows virtual machines to be distributed between the various XtremIO LUNs and Hyper-V servers. This provides maximum flexibility, the ability to recover virtual machines after physical server crashes or maintenance activities, and the ability to migrate a VM from one LUN to another, or from one physical Hyper-V host to another, without downtime, all while delivering excellent performance for mission-critical applications.

Figure 39. CSV XtremIO X2 LUNs Connected to Hyper-V Hosts

Storage Quality of Service

Storage Quality of Service (QoS) in Windows Server 2016 provides a way to centrally monitor and manage storage performance for virtual machines using Hyper-V. The feature automatically improves the fairness of storage resource sharing between multiple virtual machines, and allows policy-based minimum and maximum performance goals to be configured in units of normalized IOPS.

We can use Storage QoS in Windows Server 2016 to accomplish the following:

• Mitigate noisy neighbor issues – by default, Storage QoS ensures that a single virtual machine cannot consume all storage resources and starve other virtual machines of storage bandwidth.
• Manage storage I/O based on workload business needs – Storage QoS policies define and enforce minimum and maximum performance metrics for virtual machines. This provides consistent performance to virtual machines, even in dense and overprovisioned environments. If policies cannot be met, alerts are available to track when VMs are out of policy or are assigned invalid policies.

Figure 40. SCVMM Components
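A brief sketch of defining and applying such a policy on a cluster node follows; the policy name, IOPS limits, and VM name are assumptions for illustration.

```powershell
# Sketch: create a Storage QoS policy and apply it to a VM's virtual disks.
$policy = New-StorageQosPolicy -Name 'Gold' -MinimumIops 1000 -MaximumIops 10000

# Attach the policy to every virtual hard disk of the VM
Get-VM -Name 'SQL-VM01' | Get-VMHardDiskDrive |
    Set-VMHardDiskDrive -QoSPolicyID $policy.PolicyId

# Review live, normalized IOPS per flow against configured policies
Get-StorageQosFlow | Sort-Object InitiatorIOPS -Descending | Format-Table
```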
After a failover cluster is created and a CSV disk is configured, the Storage QoS Resource is displayed as a Cluster Core Resource and is visible in both Failover Cluster Manager and Windows PowerShell. The intent is for the failover cluster to manage this resource with no manual action required.

Figure 41. Storage QoS Resource Displayed as a Cluster Core Resource in Failover Cluster Manager

Space Reclamation

Space reclamation is a key concern for thinly provisioned storage arrays operating in virtualized environments. It refers to the ability of the host operating system to return unused capacity to the storage array following the removal/deletion of large files, virtual hard disks, or virtual machines. Windows Server and Windows Hyper-V are capable of identifying the provisioning type and the UNMAP (or TRIM) capability of a disk. Space reclamation can be triggered by file deletion, a file-system-level trim, or a storage optimization operation.

You can check whether your disks are identified as thin by examining the Logical Units under a specific Storage Pool within SCVMM, or alternatively, by using the "Defragment and Optimize your Drives" option from the Administrative Tools section of the host's Control Panel (if using a desktop-based installation).

Figure 42. Verifying XtremIO Storage Volume Presentation Type and Associated Space Efficiency via the Microsoft Operating System Level "Optimize Drives" Option

At the host level, TRIM and SCSI UNMAP commands are passed directly to the array as long as TRIM is enabled. This can be checked using the command shown in Figure 43: a value of 1 indicates that the feature is disabled, and 0 indicates that it is enabled. TRIM is enabled by default unless an administrator disables it.
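For reference, the text equivalents of the check shown in Figure 43 and of a manual reclamation pass can be run from an elevated prompt (the drive letter is an assumption):

```powershell
# 0 = TRIM/UNMAP commands are passed through to the array; 1 = disabled
fsutil behavior query DisableDeleteNotify

# Manually trigger space reclamation on a thin-provisioned volume
Optimize-Volume -DriveLetter V -ReTrim -Verbose
```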
Figure 43. Verifying Operating System Support for TRIM/UNMAP

When a large file is deleted at the host or guest level on a supporting file system, UNMAP commands are invoked and sent to the array. This operation does not impact the host, but it does invoke large bandwidth resource usage internally on the storage array in order to zero out the reclaimed space. For an intelligent, data-aware storage array such as XtremIO X2, the zero data is not written to disk; instead, the address space allocated to the file is released and flagged as containing null data. This operation consumes a portion of the available Storage Controller compute resources, but the impact is minimal. For environments where multiple users have full control over large host-layer file spaces that cohabit on the storage array with production systems, it may be wise to disable this feature and instead run storage optimization tasks during predefined maintenance windows, minimizing any potential impact on the latencies experienced by hosted applications.

Once the Optimize-Volume command is issued, the UNMAP command sent to the array results in the high bandwidth shown in Figure 44. This space reclamation operation used approximately 15% of an idle XtremIO cluster's compute resources; by design, this usage will be lower on a system that is already highly utilized.

Figure 44. A Manually-Invoked Storage Space Reclamation Initiated using the Microsoft 'Optimize-Volume' Command, as Viewed from XtremIO X2 Performance Graphs

Virtual Hard Disks

A virtual hard disk is a set of data blocks stored as a regular Windows file with a .vhd, .vhdx, or .vhds extension on the host operating system. It is important to understand the different format and type options for virtual hard disks and how they integrate with Dell EMC XtremIO X2.

Virtual Hard Disk Format

There are three different virtual hard disk formats, supported with either VM generation:

• VHD is supported with all Hyper-V versions and is limited to a maximum size of 2 TB. This is now considered a legacy format (use VHDX instead for new VM deployments).
• VHDX is supported with Windows Server 2012 (or newer) Hyper-V. The VHDX format offers better resiliency in the event of a power loss, better performance, and supports a maximum size of 64 TB. VHD files can be converted to the VHDX format using tools such as Hyper-V Manager or PowerShell.
• VHDS (VHD Set) is supported on Windows Server 2016 (or newer) Hyper-V. VHDS is for virtual hard disks that are shared by two or more guest VMs in support of highly available (HA) guest VM clustering configurations.

Figure 45. Virtual Hard Disk Formats

In addition to the format, a virtual hard disk can be designated as fixed, dynamically expanding, or differencing.

Figure 46. Virtual Hard Disk Types

The dynamically expanding disk type works well for most workloads on Dell EMC XtremIO X2. Since XtremIO X2 arrays leverage thin provisioning, only data that is actually written to a virtual hard disk consumes space on the array, regardless of the disk type (fixed, dynamic, or differencing). As a result, determining the best disk type when using XtremIO as a storage backend is mostly a function of the workload, rather than of how it will impact storage utilization. For workloads generating very high I/O, such as Microsoft SQL Server databases, Microsoft recommends using the fixed-size virtual hard disk type for optimal performance.

A fixed virtual hard disk consumes the full amount of space from the perspective of the host server. For a dynamic virtual hard disk, the space consumed is equal to the amount of data on the virtual disk (plus some metadata overhead), which is more space efficient from the perspective of the host. From the perspective of the guest VM, either type of virtual hard disk shown in this example presents a full 60 GB of available space to the guest. There are some performance and management best practices to keep in mind when choosing the right virtual hard disk type for your environment, as detailed in the lists below.
Fixed-size virtual hard disks:

• Are recommended for virtual hard disks expected to experience a high level of disk activity, such as Microsoft SQL Server, Microsoft Exchange, or OS page/swap files. For many workloads, the performance difference between fixed and dynamic is negligible.
• Take up the full amount of space on the host server volume when formatted.
• Are less susceptible to fragmentation at the host level.
• Take longer to copy (for example, from one host server to another over the network) because the file size is the same as the formatted size.
• May require significant provisioning time with pre-2012 versions of Hyper-V, due to the lack of native Offloaded Data Transfer (ODX) support. With Windows Server 2012 (or newer), provisioning time for fixed virtual hard disks is significantly reduced when ODX is supported and enabled.

Dynamically expanding virtual hard disks:

• Are recommended for most virtual hard disks, except in cases of workloads with very high disk I/O.
• Require slightly more CPU and I/O overhead than fixed-size virtual hard disks because they grow as data is written. This usually does not impact the workload except where I/O demands are very high, and is minimized even further on Dell EMC XtremIO X2 due to the high performance of All-Flash disk pools.
• Are more susceptible to fragmentation at the host level.
• Consume very little space (only some metadata) when initially created, and expand as new data is written to them by the guest VM.
• Take less time to copy to other locations than a fixed-size disk because only the actual data is copied. For example, the time required to copy a 500 GB dynamically expanding virtual hard disk that contains 20 GB of data is the time needed to copy 20 GB of data, not 500 GB.
• Allow the host server volume to be over-provisioned. In this case, the best practice is to configure alerting on the host server to avoid unintentionally running out of space in the volume.

Differencing virtual hard disks:

• Offer some storage savings by allowing multiple Hyper-V guest VMs with identical operating systems to share a common boot virtual hard disk.
• Require all children to use the same virtual hard disk format as the parent.
• Require new data to be written to the child virtual hard disk.
• Are created for each native Hyper-V-based snapshot of a guest VM, in order to freeze the data changed since the last snapshot and allow new data to be written to a new virtual hard disk file. Creating native Hyper-V-based snapshots of a guest VM can increase the CPU usage of storage I/O, but will probably not affect performance noticeably unless the guest VM experiences very high I/O demands.
• Can result in performance impacts to the Hyper-V guest VM. This impact results from maintaining a long chain of native Hyper-V-based snapshots, which requires reading from the virtual hard disk and checking for the requested blocks in a chain of many differencing virtual hard disks.
• Should be avoided, or at least kept to a minimum, in order to maintain optimal disk I/O performance. With Dell EMC XtremIO X2, native Hyper-V snapshots can be minimized or even avoided altogether by leveraging array-based storage snapshots.
Administrators can leverage array-based snapshots to recover VMs and replicate data to other locations for archive or recovery. Because of thin provisioning, space on the XtremIO X2 array is consumed only when actual data is written, regardless of the type of virtual hard disk. Choosing dynamic over fixed virtual hard disks therefore does not improve storage space utilization on XtremIO X2 arrays; other factors, such as the I/O performance of the workload, are the primary considerations when determining the virtual hard disk type for your environment.
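The two common choices can be created as follows (a sketch; the paths and sizes are illustrative). On an ODX-enabled XtremIO X2 volume, even the fixed disk is provisioned quickly because the zero-fill is offloaded to the array:

```powershell
# Fixed VHDX: full size allocated at creation (zero-fill offloaded via ODX)
New-VHD -Path 'C:\ClusterStorage\Volume1\SQL-Data.vhdx' -SizeBytes 60GB -Fixed

# Dynamic VHDX: consumes space only as the guest writes data
New-VHD -Path 'C:\ClusterStorage\Volume1\App-Data.vhdx' -SizeBytes 60GB -Dynamic
```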
Using XtremIO X2 for Hyper-V VMs and Storage Migration

Microsoft provides native tools to move or migrate VMs with Windows Server 2012 and 2016 Hyper-V, so there are fewer use cases for using SAN-based snapshots for such moves. When a guest VM is live-migrated from one node to another within the same Hyper-V cluster, no data needs to be copied or moved, because all nodes in the cluster have shared access to the underlying Cluster Shared Volumes (CSV). However, when an administrator needs to migrate a guest VM from one volume to another, the data (the virtual hard disks) must be copied to the target volume. When moving VMs between different XtremIO X2 volumes, the array leverages ODX commands, which provide exceptional transfer rates for migrating or cloning virtual machines.

Figure 47. Live Storage Migration between XtremIO X2 Volumes Initiated from SCVMM

Figure 48. Live Storage Migration between XtremIO X2 Volumes Leveraging ODX on the Array Side

EMC Storage Integrator (ESI) 5.1

The ESI for Windows Suite is designed for Microsoft administrators responsible for the management and monitoring of storage platforms and hosted applications. This software enables administrators to view, provision, and manage block and file storage on supported Dell EMC storage systems for use with Microsoft Server and Hyper-V. For Hyper-V virtual machines, you can create virtual hard disk (VHD and VHDX) files and pass-through SCSI disks. You can also create host disks and cluster shared volumes.

Control of the managed environment is possible using either an ESI-specific PowerShell CLI (Command Line Interface) or the ESI GUI (Graphical User Interface), which provides a dashboard interface for oversight of all managed components. You can run the ESI GUI as a stand-alone tool or as part of a Microsoft Management Console (MMC) snap-in on Microsoft operating systems.

The latest release of ESI (at the time of writing) is version 5.1. With this release, ESI has increased capabilities for managing XtremIO XVC volumes. This allows administrators to create both writable and read-only instantaneous copies of their data, and to refresh existing snapshot copies as required. ESI 5.1 officially supports Microsoft Server 2016 and Microsoft Hyper-V 2016.
The ESI suite comprises the following core software components:

• ESI Installer and PowerShell Toolkit – the ESI install toolkit provides the central control layer and adapters for the integration structure, and the ESI PowerShell kit offers scripting capabilities specific to available ESI functionality. Both the ESI service and the PowerShell kit are installed simultaneously on the assigned ESI machine.
• ESI Service and SCOM Management Packs – the management pack binaries are installed on the Microsoft System Center Operations Manager (SCOM) management group and provide integration between the ESI Service and SCOM environmental monitoring.
• ESI Graphical User Interface – the GUI provides a management interface to oversee and manage all ESI-integrated components.

Connecting XtremIO X2 to ESI allows all XtremIO objects to be viewed and managed directly from ESI, as shown in Figure 49.

Figure 49. XtremIO X2 Objects from ESI View

Attaching the Microsoft cluster to ESI allows viewing and managing all cluster objects directly from ESI, including hosts, mappings, disks, and connections between the cluster volumes and LUNs in the storage array.

Figure 50. Managing Microsoft Cluster Resources and Volumes From the ESI Plugin

A powerful capability of ESI is the option to assign a LUN directly from the storage to the Hyper-V servers. This process creates the disks on the XtremIO X2 storage array, maps them to all Hyper-V servers in the cluster, performs a rescan, creates the file system, adds the disks to the cluster, and converts them to CSVs, all in one simple operation, as shown in Figure 51.
Figure 51. Allocating CSV Volumes Directly from XtremIO X2 Arrays Using the ESI Plugin

Connecting the Hyper-V servers as hypervisors to ESI provides information about all the disks connected to these servers, and about all the virtual machines and virtual disks that reside on top of them, as shown in Figure 52.

Figure 52. ESI Virtual Machines Disks Inventory

Another useful feature of ESI is its support for adding a pass-through or VHDX disk directly from the XtremIO X2 X-Brick to a specific virtual machine. This process creates a new LUN on the storage array, maps it to the relevant Hyper-V server, and connects it to the virtual machine, as shown in Figure 53.