White Paper




SIZING EMC® VNX™ SERIES FOR VDI
WORKLOAD
An Architectural Guideline




                EMC Solutions Group

                Abstract

                This white paper provides storage sizing guidelines to implement virtual
                desktop infrastructure in VNX unified storage systems.


                September 2012
Copyright © 2012 EMC Corporation. All rights reserved. Published in the USA.

Published September 2012

EMC believes the information in this publication is accurate as of its publication
date. The information is subject to change without notice.

The information in this publication is provided as is. EMC Corporation makes no
representations or warranties of any kind with respect to the information in this
publication, and specifically disclaims implied warranties of merchantability or
fitness for a particular purpose. Use, copying, and distribution of any EMC
software described in this publication requires an applicable software license.

EMC2, EMC, and the EMC logo are registered trademarks or trademarks of EMC
Corporation in the United States and other countries. All other trademarks used
herein are the property of their respective owners.

For the most up-to-date regulatory document for your product line, go to the
technical documentation and advisories section on EMC Online Support.

Part Number H11096




                        Sizing EMC VNX Series for VDI Workload — White Paper        2
Table of contents
  Executive summary ............................................................................................. 4

  Introduction ...................................................................................................... 5
     Scope...................................................................................................................................... 5
     Audience ................................................................................................................................. 5
     Terminology ............................................................................................................................. 5

  VDI technology overview....................................................................................... 6
     Overview.................................................................................................................................. 6
     Citrix XenDesktop ..................................................................................................................... 6
     VMware View ......................................................................................................................... 11

  VDI I/O patterns ............................................................................................... 15
     Overview................................................................................................................................ 15
     Measuring desktop performance.............................................................................................. 15
     Optimizing desktops............................................................................................................... 15
     VDI workloads ........................................................................................................................ 16

  Sizing VNX series for VDI workload ........................................................................ 22
     Overview................................................................................................................................ 22
     Sizing with building block ....................................................................................................... 22
     Backend array ........................................................................................................................ 24
     Data Mover ............................................................................................................................ 25
     FAST Cache ............................................................................................................................ 26

  Deployment considerations for VDI sizing ............................................................... 28
     Overview................................................................................................................................ 28
     Sizing for heavier desktop workload......................................................................................... 28
     Concurrency ........................................................................................................................... 28
     Persistent/dedicated vs. non-persistent/pooled mode .............................................................. 28
     User data and desktop files ..................................................................................................... 28
     Multiple master images........................................................................................................... 29
     Running other applications...................................................................................................... 29
     VMware View Storage Accelerator ............................................................................................ 29
     VMware vSphere Memory Overcommitment .............................................................................. 30
     Applying the sizing guidelines ................................................................................................. 30

  Conclusion...................................................................................................... 35
     Summary ............................................................................................................................... 35

  References...................................................................................................... 37
     EMC documents ..................................................................................................................... 37


Executive summary
            This white paper provides sizing guidelines to choose the appropriate storage
            resources to implement virtual desktop infrastructure (VDI). The sizing guidelines are
             for EMC® VNX™ unified storage arrays. This paper also provides information about
             how VDI architecture uses storage systems. These guidelines help implementation
             engineers choose the appropriate VNX system for their VDI environment.




Introduction
               Today, many businesses add desktop virtualization to their IT infrastructure. VDI
               provides better desktop security, rapid provisioning of applications, reliable desktop
               patch deployment, and remote access across a multitude of devices.

               A well thought-out design and implementation plan is critical to building a successful
               VDI environment that provides predictable performance to end users and scalability
               for desktop administrators. When designing a VDI environment, consider the profile
               of the end users, the service level agreements (SLAs) that must be fulfilled, and the
               desired user experience. When implementing VDI, the CPU, memory, network
               utilization, and storage are shared among virtual desktops. The design should ensure
               that all desktops are given enough resources at all times.

Scope          It is assumed that the reader is familiar with the concepts and operations related to
               VDI technologies and their use in information infrastructure. This paper discusses
               multiple EMC, VMware, and Citrix products. It also outlines some of the
               general architectural designs. Refer to the documentation of the specific product for
               detailed information on installation and administration.


Audience       This white paper is intended for EMC employees, partners, and customers. This
               includes IT planners, virtualization architects and administrators, and any other IT
               professionals involved in evaluating, acquiring, managing, operating, or designing
               VDI leveraging EMC technologies.

Terminology    This paper includes the following terminology.

               Table 1.     Terminology
                Term                      Definition
                Storage Processor (SP)    A hardware component that performs and manages backend
                                          array storage operations.
                Login VSI                 A third-party benchmarking tool developed by Login
                                          Consultants. This tool simulates real-world VDI workload by
                                          using an AutoIT script and determines the maximum system
                                          capacity based on users’ response time.
                Storage pool              An aggregation of storage disks configured with a particular
                                          RAID type.




VDI technology overview
Overview            VDI has many moving components and requires the involvement of several IT
                    departments to be successful. Before sizing the storage, it is important to understand
                    how the VDI architecture works and how each component uses the storage system.
                    This section explains the two popular desktop virtualization environments—Citrix®
                     XenDesktop® and VMware® View™.

Citrix XenDesktop   Citrix XenDesktop transforms Windows desktops into an on-demand service for any
                    user, any device, anywhere. XenDesktop quickly and securely delivers any type of
                    virtual desktop application to the latest PCs, Macs, tablets, smartphones, laptops,
                    and thin clients with a high-definition user experience.

                    XenDesktop has two configuration methods:

                          Machine Creation Services (MCS)
                          Provisioning Services (PVS)

                    Machine Creation Services

                     MCS is a provisioning mechanism introduced in XenDesktop 5. It is integrated with
                     the XenDesktop management interface, XenDesktop Studio, to provision, manage,
                     and decommission desktops throughout the desktop lifecycle from a centralized
                     point of management. Figure 1 shows the key components of the XenDesktop
                     infrastructure with MCS.




                    Figure 1.   Citrix XenDesktop MCS architecture diagram



The Web interface provides the user interface to the XenDesktop environment.
The end user uses the web interface to authenticate and receive login access
information to access the virtual desktop directly.
XenDesktop Controller orchestrates the VDI environment. It authenticates
users, brokers connections between users and their virtual desktops, monitors
the state of the virtual desktops, and starts/stops desktops based on the
demand and administrative configuration.
The License Server validates and manages the licenses of each XenDesktop
component.
The AD/DNS/DHCP server provides the following:
    IP address to virtual desktops using DHCP
    Secure communication between users and virtual desktops using Active
     Directory
    IP host name resolution using DNS

The Database Server stores information about the XenDesktop environment
configuration, virtual desktops, and their status.
The Hypervisor hosts the virtual desktops. It has built-in capabilities to manage
and configure virtual desktops. XenDesktop Controller uses the hypervisor’s
built-in features through MCS.
The XenDesktop Agent provides a communication channel between
XenDesktop Controller and virtual desktops. It also provides a direct
connection between virtual desktops and end users.
The Storage Array provides storage to the database and the hypervisor.




Figure 2 shows the storage mapping of virtual desktops in MCS.




Figure 2.    XenDesktop MCS virtual desktop storage

A master image is a tuned desktop used to create new base images. Administrators
can have multiple versions of the master image by taking snapshots at different
configurations. A master image can be placed on its own datastore.

The base image is a point-in-time copy of the master image. One base image copy is
placed on every datastore allocated to host virtual desktops. The base image is
read-only and common to all the virtual desktops created on the same datastore;
therefore, all read operations from the virtual desktops are redirected to the
base image.

A differencing disk is a thinly provisioned disk used to capture changes made to the
virtual desktop operating system. One differencing disk is created for each virtual
desktop. When a virtual desktop is created with a dedicated option, the differencing
disk preserves the changes made to the virtual desktop. However, the differencing
disk is recreated every time the desktop is restarted in pooled virtual desktops.

A 16 MB identity disk is created for each virtual desktop to store the user and
machine identity information. During restart/refresh, the identity disk is preserved
regardless of whether the desktop is deployed with the dedicated or the pooled
option.




Provisioning Services

PVS uses streaming technology to provision virtual desktops. PVS streams a single
shared desktop image across all the virtual desktops. This approach enables
organizations to manage the virtual desktop environment using fewer disk images.
Figure 3 shows the key components in XenDesktop with PVS.




Figure 3.   XenDesktop PVS architecture diagram

PVS requires a similar environment as MCS. However, it requires two additional
servers to stream the desktop image and deliver it to the virtual desktops.

       The TFTP Server is used by the virtual desktop to boot from the network and
       download the bootstrap file. The bootstrap file has the information to access
       the PVS server and stream the appropriate desktop image.

       The PVS server is used to stream the desktop image to the virtual desktops.
       The PVS server has a special storage location called vDisk that stores all the
       streaming images.

       The DHCP server provides the IP address and PXE boot information to virtual
       desktops using DHCP.




Figure 4 shows the storage mapping of virtual desktops in PVS.




Figure 4.   XenDesktop PVS virtual desktop storage

Similar to MCS, the master image is a tuned desktop used to create a new base
image. The master image can be placed on a separate datastore. The master image is
accessed only when PVS creates/updates the base image.

The PVS server extracts the master image and creates a base image on the vDisk
datastore. The base image is streamed to virtual desktops and set to the read-only
mode. The PVS server uses a Citrix proprietary version control mechanism to keep
multiple versions of the base image.

A write-cache disk (similar to MCS differencing disk) is created for each virtual
desktop. The write-cache disk is thinly provisioned and is used to capture changes
made to the virtual desktop operating system.

Personal vDisk

Personal vDisk is a new feature introduced in XenDesktop 5.6. Personal vDisk
preserves the customization settings and user-installed applications in a pooled
desktop by redirecting the changes from the user’s pooled virtual machine to a
separate disk called Personal vDisk. The personal vDisk can be deployed with MCS
and PVS configurations.


Figure 5 shows the storage mapping of virtual desktops using personal vDisk in MCS.




              Figure 5.   XenDesktop Personal vDisk with MCS virtual desktop storage

              The personal vDisk stores users’ personal and application data of the virtual desktop.
              The users’ personal data is visible to the end user as drive P. However, the
              application data is hidden. During a desktop session, the content of the personal
               vDisk application data is blended with the content from the base virtual machine and
               differencing disk (in the case of PVS, with the base image and write-cache disk)
              to provide a unified experience to the end user as drive C. For better end-user access
              time, place the personal vDisk on a separate datastore.


VMware View   VMware View provides rich and personalized virtual desktops to end users. With
              VMware View, administrators can virtualize the operating system, applications, and
              user data while gaining control, efficiency, and security by having desktop data in a
              data center. VMware View has several components that work together to deliver a
              robust VDI environment.




Figure 6 shows the typical VMware View VDI components.




Figure 6.   VMware View architecture diagram

      VMware View Manager orchestrates the VDI environment. It authenticates
      users, assigns virtual desktops to users, monitors the state of the virtual
      desktops, and starts/stops desktops based on the demand and the
      administrative configuration.
      The DHCP server provides the IP address to virtual desktops.
      The Database Server stores the VMware View, vCenter, and virtual desktop
      configuration information in a database.
      The VMware Virtual Infrastructure hosts the virtual desktops. VMware View
      Composer uses the built-in capabilities of VMware Virtual Infrastructure to
      manage and configure virtual desktops.
      The View Agent provides communication between View Manager and the virtual
      desktops. It also provides a direct connection between virtual desktops and
      end users through VMware View Client.
      The View Client (user endpoint) communicates with View Manager and View
      Agent to authenticate and connect to the virtual desktop.
      The Storage Array provides storage to database and VMware virtual
      infrastructure.




Figure 7 shows the different storage components in each virtual desktop.




Figure 7.    VMware View architecture virtual desktop storage

A base image is a tuned desktop that will be used to create new replica images.
Administrators can have multiple versions of the base image by taking snapshots at
different configurations. The base image can be placed on its own datastore.

During virtual desktop creation, View Composer first ensures that each datastore
has its own replica. A replica is a thinly provisioned full copy of
the base image. To create a replica, the administrator has to select a point-in-time
copy of the base image. If a separate replica datastore is selected, one replica is
created for every one thousand virtual desktops. The replica disk is read-only and
common to all the virtual desktops; therefore, the read operations of virtual
machines to their OS files are redirected to the replica disk.

After the replica is created, linked clones are created for each virtual machine. A
linked clone is an empty disk when the virtual desktop is created. It is used as a place
holder to store all the changes made to the virtual desktop operating system.

A disposable disk is created to store the virtual machine’s page file, Windows system
temporary files, and VMware log files. This content is deleted when the virtual
machine is restarted. The disposable disks are placed on the same datastore as the
linked clones.




When virtual desktops are created with the dedicated option, View Composer creates
a separate persistent disk for each virtual desktop. The persistent disk stores all the
user profile and user data information for the virtual desktops. The persistent disk
can be detached and attached to any virtual desktop to access data.

During the virtual machine creation, an internal disk of 16 MB is created for each
virtual desktop. It is used to store the personalized information such as computer
name, domain name, username, and password. When the virtual desktop is
recomposed or restarted, the data in the internal disk is preserved. The data is
written during the initial creation of the virtual desktop, when the virtual
machine joins a domain, and when its password is reset.




VDI I/O patterns
Overview            Sizing storage for VDI is one of the most complicated and critical tasks in the
                    design and implementation process. Storage sizing becomes complex when the
                    storage requirements of the desktop users are not fully understood. A desktop user
                    has requirements for both storage capacity and performance. Sizing is often
                    considered only from the aspect of storage capacity, which leads to poor
                    performance for end users.

Measuring desktop From a user's perspective, good performance of a virtual desktop is the ability to
performance       complete any desktop operation in a reasonable amount of time. This means the
                  storage system that supports the virtual desktop must be able to deliver the data
                  (read/write operations) quickly. Therefore, the correct way to size storage is to
                  calculate the IOPS of each virtual desktop and design the storage to meet the IOPS
                  requirements within a reasonable response time.

Optimizing          It is highly recommended to optimize the OS image used for the master/base image.
desktops            The following data shows the effect of optimization during a steady state workload.
                    In the optimized image used in this case, several Windows features and services,
                    such as Windows 7 themes, Windows Search indexing, BitLocker Drive Encryption,
                    and Windows Defender, are disabled to reduce resource utilization.




                    Figure 8.    Desktop IOPS: optimized vs. non-optimized

                     Figure 8 shows that during steady state, the IOPS of optimized desktops reduces by
                    15 percent. When planning the VDI implementation, the administrator must decide
                    whether saving IOPS is worth disabling some of the Windows features.




VDI workloads   The I/O workloads of virtual desktops vary greatly during the production cycle. The
                four common workloads during the virtual desktop production cycle are:

                      Boot storm
                      Login storm
                      Steady state
                      Virus scan

                 For a successful deployment, the storage system must be designed to satisfy all of
                 these workloads.

                Boot storm: Boot storms occur when all virtual desktops are powered on
                simultaneously. The boot storm puts a heavy load on the storage system because all
                the virtual desktops are competing for the shared storage resources. To minimize the
                heavy load on the storage system, it is recommended to boot the virtual desktops
                during non-peak hours and boot a few virtual desktops at a time.

                 The I/O requirement to boot a single virtual desktop depends on the desktop
                 image and the boot time. Use the following formula to calculate the IOPS
                 requirement to boot a single virtual desktop:

                   Required IOPS per desktop = Data required to boot / (Boot time × I/O size)

                The data required to boot depends on the desktop image. Test results show that a
                Windows 7 desktop image with Microsoft Office installed requires about 130 MB
                (133120 KB) data to boot and register a virtual desktop. The average I/O size during
                 boot storm is about 4 KB. Plugging in these values shows that about 70 IOPS are
                 required to boot a virtual desktop in eight minutes (480 seconds).
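The formula above can be sketched in a few lines; the 133,120 KB image size, 4 KB I/O size, and 480-second window are the test values quoted in this section.

```python
def boot_iops_per_desktop(boot_data_kb, boot_time_s, io_size_kb):
    """IOPS needed to boot one desktop within the given time window."""
    return boot_data_kb / (boot_time_s * io_size_kb)

# Windows 7 image with Office: ~130 MB (133120 KB) of boot data,
# 4 KB average I/O size, eight-minute (480 s) boot window.
iops = boot_iops_per_desktop(133120, 480, 4)
print(round(iops))  # ~69, consistent with the ~70 IOPS figure above
```

The same function can be reused to trade boot window against IOPS: halving the boot time doubles the per-desktop IOPS requirement.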

                Login storm: Login storms occur when users log in to the virtual desktops at the same
                time. Unlike boot storms, login storms cannot be avoided in an environment where
                end users start work at the same time.




Figure 9.   Average desktop IOPS during login

Figure 9 shows the IOPS required during login time. The login IOPS are measured on a
physical and virtual desktop for comparison. The data shows that both desktops
require similar IOPS during a login session.

Steady state: Steady state occurs when users interact with the desktop. The IOPS for
steady state varies because user activities differ. The Login VSI medium workload
was used to measure the IOPS during steady state, and the results were compared
between physical and virtual desktops.




Figure 10. Desktop IOPS during steady state



Figure 10 shows the IOPS required during steady state. Both physical and virtual
desktops show similar IOPS requirements. Maximum IOPS occurred at the beginning of
the Login VSI test when applications were launched.

Figure 11 shows how the virtual desktop I/Os are served by the hypervisor and VNX
system. The desktops are provisioned through VMware View or Citrix XenDesktop
with MCS. In this example, the storage system is provisioned to the hypervisor
through NFS. FAST Cache on page 26 provides more details on EMC FAST™ Cache.




Figure 11. IOPS flow for VMware View and XenDesktop with MCS

The virtual desktops generated an average of 8.2 IOPS during steady state. The
hypervisor converts the virtual desktop IOPS into NFS IOPS. During the conversion, 20
percent more IOPS are observed due to NFS metadata. Even though NFS comes with
an overhead, it has the benefit of being simple and easy to manage in a VDI
environment. In high-end VNX platforms, the Data Mover has a large cache, and some of
the NFS I/Os are served by this cache. This reduces the I/O going to the storage
processor and compensates for the overhead due to NFS protocol.

The Data Mover sends the NFS IOPS to the storage processor. The storage processor
has DRAM cache as well as FAST Cache configured for the desktop storage pool.
Testing shows that more than 90 percent of the IOPS are served by the DRAM and
FAST Cache. Only 10 percent of the IOPS are served by the SAS disks. Note that the
number of I/Os served by DRAM and FAST Cache depends on the virtual desktop


workload type and the cache size. The VNX system must be configured with
appropriate Flash drives to optimize the FAST Cache utilization.
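As a rough sketch, the measured ratios above (8.2 IOPS per desktop, 20 percent NFS overhead, about 90 percent absorbed by DRAM and FAST Cache) can be combined to estimate the load that reaches the SAS disks; the 1,000-desktop count below is illustrative, not from the source.

```python
def sas_disk_iops(desktops, iops_per_desktop=8.2,
                  nfs_overhead=0.20, cache_hit_ratio=0.90):
    """Estimate steady-state IOPS reaching the SAS tier (MCS/View over NFS)."""
    nfs_iops = desktops * iops_per_desktop * (1 + nfs_overhead)
    return nfs_iops * (1 - cache_hit_ratio)

# 1000 desktops: 9840 NFS IOPS, of which roughly 10% miss the caches.
print(round(sas_disk_iops(1000)))  # ~984
```

Note how sensitive the disk load is to the cache-hit assumption: dropping absorption from 90 to 80 percent doubles the IOPS the SAS spindles must serve.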

Figure 12 shows how the virtual desktop I/Os are served by the hypervisor and VNX
system in XenDesktop with PVS. In this example, the storage system is provisioned to
the hypervisor through NFS.




Figure 12. IOPS flow for XenDesktop with PVS

Similar to VMware View and XenDesktop with MCS, the virtual desktops generated an
average of 8.2 IOPS during steady state. The PVS server streamed 16 percent of the
steady state desktop IOPS. The remaining 84 percent are served by the VNX storage.

The hypervisor converts 84 percent of the virtual desktop storage IOPS into NFS IOPS.
During the conversion, 20 percent more IOPS are observed due to NFS metadata. The
Data Mover sends the NFS IOPS to the storage processor. The storage processor has
DRAM cache as well as FAST Cache configured for the desktop storage pool. Testing
shows that 88 percent of the storage IOPS are served by the DRAM and FAST Cache.
Only 12 percent of the IOPS are served by the SAS disks.
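The PVS path can be sketched the same way; the ratios (16 percent streamed by the PVS server, 20 percent NFS overhead, 88 percent cache absorption) come from the text above, and the desktop count is illustrative.

```python
def pvs_sas_disk_iops(desktops, iops_per_desktop=8.2, pvs_streamed=0.16,
                      nfs_overhead=0.20, cache_hit_ratio=0.88):
    """Estimate steady-state IOPS reaching the SAS tier (XenDesktop PVS over NFS)."""
    storage_iops = desktops * iops_per_desktop * (1 - pvs_streamed)
    nfs_iops = storage_iops * (1 + nfs_overhead)
    return nfs_iops * (1 - cache_hit_ratio)

# 1000 desktops: the PVS server absorbs 16% of desktop IOPS up front.
print(round(pvs_sas_disk_iops(1000)))  # ~992
```

Despite PVS offloading a share of the reads, the slightly lower cache-hit ratio leaves the SAS tier with a load similar to the MCS case.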




Figure 13 shows the virtual desktop IOPS distribution for VMware View and Citrix
XenDesktop with MCS and PVS provisioning.




Figure 13. IOPS distribution for VMware View and XenDesktop with MCS and PVS

Note: The pie chart percentage distribution does not take into account the NFS
      overhead mentioned in these examples.

Virus scan: Virus scan occurs during full antivirus scan of virtual desktops. The I/O
requirement to conduct the virus scan depends on the desktop image and the time to
complete the full scan.




Figure 14. Desktop IOPS requirement for virus scan

Figure 14 shows the trend line of required IOPS with the time to complete the scan.
The trend line is based on the Windows 7 desktop image with Microsoft Office
installed. The virus scan requires very high IOPS per desktop. To avoid any impact on
end-user experience, it is recommended to run virus scans during non-peak hours.
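The same style of calculation as the boot-storm formula applies to virus scans. The source gives no scan data size, so every number below is purely illustrative.

```python
def scan_iops_per_desktop(scan_data_kb, scan_time_s, io_size_kb):
    """IOPS needed to complete a full antivirus scan within the given window."""
    return scan_data_kb / (scan_time_s * io_size_kb)

# Hypothetical example: scanning 20 GB of data in 2 hours with 32 KB I/Os.
print(round(scan_iops_per_desktop(20 * 1024 * 1024, 7200, 32)))  # ~91
```

Shrinking the scan window drives the per-desktop IOPS up sharply, which is why the paper recommends scheduling scans off-peak.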




Sizing VNX series for VDI workload
Overview         The VNX series delivers a single-box block and file solution, which offers a centralized
                 point of management for distributed environments. This makes it possible to
                 dynamically grow, share, and cost-effectively manage multiprotocol file systems and
                 multiprotocol block access. Administrators can take advantage of the simultaneous
                 support of NFS and CIFS protocols, enabling Windows and Linux/UNIX clients to
                 share files by using the sophisticated file-locking mechanisms of VNX for File, and
                 use VNX for Block for high-bandwidth or latency-sensitive applications. The VNX
                 series unified storage offers the following four models.




                 Figure 15. VNX series unified storage systems

                  These four models differ in the number of disks they can support. Customers can
                  configure a VNX system with 5 to 1,000 disks, which helps them select the right
                  VNX system for their environment. The VNX series is built for speed, delivering
                  robust performance and efficiency to support the provisioning of virtual desktops
                  (thin or thick). The VNX series is optimized for virtualization: it supports all
                  leading hypervisors and simplifies desktop creation and storage configuration. The
                  sizing guidelines in this paper are based on the desktop profile described in
                  VDI I/O patterns on page 15; additional storage resources must be allocated for
                  environments with heavier desktop profiles. Using these guidelines, VDI can be
                  deployed with FC or NFS connectivity for the virtual desktop datastores.

                 The next few sections give details on selecting the right VNX system components for a
                 VDI environment.


Sizing with      Sizing a VNX storage system to meet virtual desktop IOPS requirements is a
building block   complicated process. When an I/O reaches the VNX storage, it is served by several
                 components, such as the Data Mover (NFS), backend dynamic random access
                 memory (DRAM) cache, FAST Cache, and disks. To reduce this complexity, a building
                 block approach is used. A building block is a set of spindles used to support a
                 certain number of virtual desktops.



The following building block configurations are recommended for all VDI deployments on VNX systems.
Table 2.    VNX series building block configuration – XenDesktop with MCS
 Number of desktops        SSD drives          15K SAS drives
 1000                      2 (RAID 1)          20 (RAID 5)

Table 3.    VNX series building block configuration – XenDesktop with PVS
 Number of desktops        SSD drives          15K SAS drives
 1000                      2 (RAID 1)          16 (RAID 10)

Table 4.    VNX series building block configuration – VMware View
 Number of desktops        SSD drives          15K SAS drives
 1000                      2 (RAID 1)          15 (RAID 5)

This basic building block can be used to scale users by multiplying the number of
drives for every 1000 desktops. The solid state drives (SSD) are used for FAST Cache.
The SAS drives are used to store virtual desktops.
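The scaling rule above can be expressed as a minimal Python sketch (an illustrative helper, not part of any EMC tooling; the per-block drive counts come from Tables 2 through 4, and hotspares are excluded here as in those tables):

```python
import math

# Per-1,000-desktop building blocks from Tables 2-4 (drive counts per block)
BUILDING_BLOCKS = {
    "XenDesktop MCS": {"ssd": 2, "sas": 20},
    "XenDesktop PVS": {"ssd": 2, "sas": 16},
    "VMware View":    {"ssd": 2, "sas": 15},
}

def drives_for(method, desktops):
    """Scale the building block: one block per 1,000 desktops (rounded up)."""
    blocks = math.ceil(desktops / 1000)
    bb = BUILDING_BLOCKS[method]
    return {"blocks": blocks, "ssd": blocks * bb["ssd"], "sas": blocks * bb["sas"]}

print(drives_for("VMware View", 3000))  # {'blocks': 3, 'ssd': 6, 'sas': 45}
```

Hotspare drives must still be added on top of these counts, as shown in the worked examples later in this paper.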

Storage disks come in different sizes. The maximum size of the virtual desktop
depends on the capacity of the storage disk and the RAID type.

Table 5 shows the maximum available desktop space when selecting different disk
sizes.

Table 5.    Maximum storage capacity per virtual desktop
 Desktop provisioning method       300 GB, 15K RPM SAS       600 GB, 15K RPM SAS
 VMware View                       3 GB (RAID 5)             6 GB (RAID 5)
 Citrix XenDesktop (MCS)           4 GB (RAID 5)             8 GB (RAID 5)
 Citrix XenDesktop (PVS)           2 GB (RAID 10)            4 GB (RAID 10)

The maximum desktop space mentioned in Table 5 includes the virtual desktop
vswap file. Therefore, the space available to the end user equals the maximum
desktop space shown in Table 5 minus the vswap file. The size of the vswap file is
typically equal to the amount of memory allocated to the virtual desktop; however,
it can be reduced with a memory reservation. For example, a desktop with 1 GB RAM
and a 50 percent memory reservation creates a vswap file of 0.5 GB.
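The vswap arithmetic above can be sketched as follows (Python, illustrative function names; the 6 GB figure in the usage example is the VMware View value for 600 GB drives from Table 5):

```python
def vswap_size_gb(ram_gb, memory_reservation_pct):
    """vswap file size: allocated RAM minus the reserved portion."""
    return ram_gb * (1 - memory_reservation_pct / 100.0)

def usable_desktop_space_gb(max_desktop_space_gb, ram_gb, memory_reservation_pct=0):
    """End-user space = maximum desktop space (Table 5) minus the vswap file."""
    return max_desktop_space_gb - vswap_size_gb(ram_gb, memory_reservation_pct)

# Example from the text: 1 GB RAM with a 50% reservation -> 0.5 GB vswap
print(vswap_size_gb(1, 50))               # 0.5
# A View desktop on 600 GB drives (6 GB maximum) leaves 5.5 GB for the user
print(usable_desktop_space_gb(6, 1, 50))  # 5.5
```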

When using these building blocks, it is important to note that CPU, memory, and
network must be properly sized to meet the virtual desktops' requirements. A wrong
sizing choice for any of these resources significantly increases the storage IOPS
requirement and invalidates the building block recommendation.

                The following sections provide additional recommendations for selecting other VNX
                components.

Backend array   The VNX backend array provides block storage connectivity to the Data Movers and
                other servers. Each VNX backend array has two storage processors for high
                availability and fault tolerance. Customers can select different backend array
                configurations depending on their performance and capacity needs. Table 6 shows
                the backend array configuration for different VNX systems.

                Table 6.       VNX series backend array configuration
                 Configuration               VNX5300          VNX5500           VNX5700          VNX7500
                 Min. form factor            7U               7U-9U             8U-11U           8U-15U
                 Max. drives                 125              250               500              1000
                 Drive types                 3.5" Flash, 15K SAS, 7.2K NL-SAS, and 2.5" 10K SAS (all models)
                 CPU/cores/memory (per SP)   1.6 GHz/4/8 GB   2.13 GHz/4/12 GB  2.4 GHz/4/18 GB  2.8 GHz/6/24 GB
                 Protocols                   FC, iSCSI, FCoE (all models)

                All VNX backend arrays support the FC, iSCSI, and FCoE protocols. Storage
                processor CPU and memory increase on the higher-end backend arrays to support
                higher drive counts. Administrators can select the backend storage based on
                the number of drives needed for the VDI implementation.




                Figure 16. Backend array IOPS scaling with CPU utilization



Figure 16 shows the scalability of VNX systems. As the VDI workload increases on
              higher-end backend arrays, CPU utilization does not increase significantly, which
              shows that the high-end backend arrays of the VNX series scale up well with larger
              VDI workloads. However, when the virtual desktop load on the same backend array
              is increased, the scale-up is not linear. Table 7 shows the recommended maximum
              number of virtual desktops for different backend arrays.

             Table 7.     Recommended maximum virtual desktops for backend arrays
              Backend array             Maximum virtual desktops
              VNX5300                   1,500
              VNX5500                   3,000
              VNX5700                   4,500
              VNX7500                   7,500
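Selecting the smallest backend array that covers a given desktop count from Table 7 can be sketched as follows (Python, illustrative helper):

```python
# Recommended maximum virtual desktops per backend array, from Table 7
BACKEND_MAX = [("VNX5300", 1500), ("VNX5500", 3000),
               ("VNX5700", 4500), ("VNX7500", 7500)]

def smallest_backend(desktops):
    """Pick the smallest backend array whose recommended maximum covers the load."""
    for model, cap in BACKEND_MAX:
        if desktops <= cap:
            return model
    raise ValueError("load exceeds a single VNX7500; split across multiple arrays")

print(smallest_backend(3000))  # VNX5500
print(smallest_backend(5000))  # VNX7500
```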



Data Mover   Data Movers are used when a VDI implementation uses NFS datastores to store
             virtual desktops and CIFS shares to store user data. A Data Mover is an independent
             server running the EMC proprietary operating system, Data Access in Real Time
             (DART). Each Data Mover has multiple network ports, network identities, and
             connections to the backend storage array. In many ways, a Data Mover operates as
             an independent file server, bridging the LAN and the backend storage array. The
             VNX system has one or more Data Movers installed in its frame.

              Table 8.    VNX series Data Mover configuration
               Configuration                       VNX5300           VNX5500           VNX5700          VNX7500
               Data Movers                         1 or 2            1 to 3            2 to 4           2 to 8
               CPU/cores/memory (per Data Mover)   2.13 GHz/4/6 GB   2.13 GHz/4/12 GB  2.4 GHz/4/12 GB  2.8 GHz/6/24 GB
               Protocols                           NFS, CIFS, MPFS, pNFS (all models)

             To ensure high availability, VNX supports a configuration in which one Data Mover
             acts as a standby for one or more active Data Movers. When an active Data Mover
             fails, the standby quickly takes over the identity and storage tasks of the failed Data
             Mover.




Figure 17. CPU utilization for VDIs on a single Data Mover

              Figure 17 shows that on a single Data Mover, CPU utilization increases linearly as
              virtual desktops are added. Testing shows that performance does not scale linearly
              when virtual desktops are spread across multiple Data Movers. It is recommended to
              support no more than 1,500 virtual desktops per Data Mover when implementing
              over a NAS protocol.

             Table 9 shows the recommended virtual desktops with active Data Movers.

              Table 9.    Recommended virtual desktops with active Data Movers
               Number of active Data Movers      Maximum virtual desktops
               1                                 1,500
               2                                 3,000
               3                                 4,500
               5                                 7,500
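The 1,500-desktops-per-Data-Mover rule, plus the standby unit recommended for high availability, can be sketched as follows (Python, illustrative):

```python
import math

def data_movers(desktops, per_dm=1500):
    """Active Data Movers at 1,500 desktops each, plus one standby for HA."""
    active = math.ceil(desktops / per_dm)
    return {"active": active, "standby": 1, "total": active + 1}

print(data_movers(3000))  # {'active': 2, 'standby': 1, 'total': 3}
```

This matches worked Example 1 later in the paper, where 2 active Data Movers plus 1 standby are chosen.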


FAST Cache   EMC FAST Cache uses SSDs to add an extra layer of cache between the DRAM cache
             and the disk drives. The FAST Cache feature is available on all VNX systems. FAST
             Cache works by examining 64 KB chunks of data in FAST Cache-enabled objects on
             the array. Frequently accessed data is copied to FAST Cache, and subsequent
             accesses to that data chunk are serviced by FAST Cache. This enables immediate
             promotion of very active data to the Flash drives.

             This extended read/write cache is an ideal caching mechanism for the VDI
             environment because the desktop image and other active user data are so frequently
             accessed that the data is serviced directly from the SSDs without accessing the
             slower drives at the lower storage tier. FAST Cache can be enabled on the desktop
             and user data storage pools.



Figure 18. FAST Cache IOPS with increasing drives

Figure 18 shows FAST Cache IOPS as SSDs are added on different VNX platforms.
FAST Cache IOPS consist of SSD IOPS and backend array DRAM IOPS. Higher-end
VNX platforms have more backend array DRAM. When scaling virtual desktops on
low-end VNX systems, additional SSDs are needed to compensate for the smaller
DRAM and maintain the same FAST Cache hit ratio. The number of additional FAST
Cache drives required varies across the VNX series backend arrays because different
amounts of DRAM are installed on their SPs. Table 10 shows the thresholds beyond
which additional SSDs are required when scaling a VDI workload.

Table 10.   Additional FAST Cache requirement threshold
 Backend array    Additional FAST Cache requirement
 VNX5300          > 1,000 virtual desktops
 VNX5500          > 2,000 virtual desktops
 VNX5700          > 3,000 virtual desktops
 VNX7500          > 5,000 virtual desktops

It is recommended to add a pair of SSDs for every 1,000 virtual desktops deployed
beyond the threshold for the additional FAST Cache requirement.
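The threshold rule can be sketched as follows (Python, illustrative; thresholds are taken from Table 10):

```python
import math

# Desktops beyond which extra FAST Cache SSDs are needed, from Table 10
FAST_CACHE_THRESHOLD = {"VNX5300": 1000, "VNX5500": 2000,
                        "VNX5700": 3000, "VNX7500": 5000}

def extra_fast_cache_drives(model, desktops):
    """One pair of SSDs for every 1,000 desktops beyond the platform threshold."""
    over = max(0, desktops - FAST_CACHE_THRESHOLD[model])
    return 2 * math.ceil(over / 1000)

print(extra_fast_cache_drives("VNX5500", 3000))  # 2 (one extra pair)
print(extra_fast_cache_drives("VNX7500", 5000))  # 0 (at the threshold)
```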




Deployment considerations for VDI sizing
Overview             VDI can be implemented in several ways to meet end-user requirements. This section
                     covers a few key areas that must be considered before VDI deployment because they
                     affect the storage requirements.

Sizing for heavier   The sizing guideline in this paper is based on the Login VSI medium workload, which
desktop workload     is considered a typical office workload. However, some customer environments may
                     have more active user profiles.

                     Suppose a company has 500 users and, because of corporate applications, each user
                     generates 24 IOPS instead of the 8 IOPS used in the sizing guideline. This customer
                     needs 12,000 IOPS (500 users * 24 IOPS per virtual desktop). One building block
                     would be underpowered because it is rated at 8,000 IOPS (1,000 desktops * 8 IOPS
                     per virtual desktop). Therefore, the customer must use a two-building-block
                     configuration to satisfy the IOPS requirement for this environment.
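The adjustment for heavier workloads reduces to a single ceiling division (Python, illustrative; 8,000 IOPS per block is the rating used in the example above):

```python
import math

def blocks_for_iops(users, iops_per_user, block_iops=8000):
    """Building blocks needed when per-user IOPS exceeds the sizing baseline."""
    return math.ceil(users * iops_per_user / block_iops)

print(blocks_for_iops(500, 24))  # 2, matching the two-building-block example
```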

Concurrency          The sizing guidelines in this paper assume that all desktop users are active at all
                     times. In other words, in a single-building-block architecture, all 1,000 desktops
                     generate workload in parallel, are booted at the same time, and so on. If a customer
                     expects to have 1,500 users but only 50 percent of them will be logged on at any
                     given time due to time zone differences or alternating shifts, the 750 active users out
                     of the total 1,500 can be supported by a single building block.
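The concurrency adjustment is a simple percentage of the user population (Python, illustrative):

```python
def active_users(total_users, concurrency_pct):
    """Desktops generating load at any given time; size blocks against this number."""
    return int(total_users * concurrency_pct / 100)

print(active_users(1500, 50))  # 750, within one building block's 1,000 desktops
```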

Persistent/dedicat   Virtual desktops can be created either in a persistent/dedicated mode or a non-
ed vs. non-          persistent/pooled mode. In the persistent/dedicated environment, the user data,
persistent/pooled    user installed applications, and customizations are preserved during log out/restart.
mode                 However, in the non-persistent/pooled environment, the user data or customizations
                     are lost during log out/restart.

                     The storage requirement for both environments is similar at the beginning of the
                     deployment. However, persistent/dedicated desktops grow over time when patches
                     and applications are installed. This reduces the number of I/Os that can be served by
                     EMC FAST Cache. More I/Os are passed on to the lower storage tier. This increases
                     the response time and affects the end-user experience. To maintain the same level of
                     end-user experience, configure additional FAST Cache on the persistent/dedicated
                     configurations.

User data and        A virtual desktop consists of Windows OS files, applications files, user data, and user
desktop files        customizations. Windows OS files and applications files are very frequently accessed
                     and they do not grow rapidly. The user data and user customizations files are not
                     accessed heavily. However, the user data grows over a period of time. To meet these
                     two different storage requirements, it is best to separate them and place them in two
                     different storage tiers.




Table 11 shows the recommended building block user data configuration.

                   Table 11.   User data building block disk configuration
                    Number of      7.2K NL-SAS       Maximum user data per desktop
                    desktops       drives            1 TB drives    2 TB drives    3 TB drives
                    1000           16 (RAID 6)       10 GB          20 GB          30 GB

                   There are several tools and methods to separate user data and user settings from
                   the desktop files. Microsoft roaming profiles and folder redirection are among the
                   easier ways to separate these and place them in two different storage tiers. In
                   addition, several third-party applications, such as XenDesktop Profile Manager,
                   VMware Persona Management, and AppSense, can separate and move the user data
                   to a lower storage tier. It is recommended to use SAS drives for desktop files and
                   NL-SAS drives for user data.
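Choosing the user data drive size from Table 11 can be sketched as follows (Python, illustrative; the block count comes from the IOPS sizing, and the hotspare ratio of roughly 1 per 30 drives is an assumption matching the worked example later in this paper):

```python
import math

def user_data_config(blocks, desktops, per_user_gb):
    """Smallest NL-SAS drive size (Table 11: 1/2/3 TB -> 10/20/30 GB per user per
    block) whose total capacity covers the need; 16 drives per block + hotspares."""
    need_gb = desktops * per_user_gb
    for drive_tb, gb_per_user in ((1, 10), (2, 20), (3, 30)):
        if blocks * 1000 * gb_per_user >= need_gb:
            active = 16 * blocks
            return {"drive_tb": drive_tb, "drives": active + math.ceil(active / 30)}
    raise ValueError("requirement exceeds the 3 TB tier; add building blocks")

print(user_data_config(3, 1200, 25))  # {'drive_tb': 1, 'drives': 50}
```

The usage line reproduces worked Example 1: three building blocks of 1 TB drives provide 30 TB for 1,200 users at 25 GB each, with 48 active drives plus 2 hotspares.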

Multiple master   When virtual desktops are deployed using Citrix XenDesktop (MCS) or VMware View,
images            the master image is copied to create a replica/base image. The replica/base image is
                  common to many virtual desktops. Because it is accessed heavily, its data resides in
                  FAST Cache. When multiple master images are used to deploy virtual desktops, one
                  replica/base image must be created for each master image. This creates space
                  contention in FAST Cache, and not all the replica/base images are promoted to FAST
                  Cache, which significantly reduces storage performance and impacts the end-user
                  experience. To avoid FAST Cache contention, add a pair of FAST Cache drives for
                  every additional eight replica/base images on a building block.
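One reading of this rule (a pair of SSDs per building block for every eight images beyond the first eight, which is how the worked examples later in the paper apply it) can be sketched as follows (Python, illustrative):

```python
import math

def extra_ssd_for_replicas(images, blocks):
    """A pair of SSDs per building block for every additional 8 replica/base
    images beyond the first 8 (interpretation of the guideline above)."""
    extra_pairs_per_block = math.ceil(max(0, images - 8) / 8)
    return 2 * extra_pairs_per_block * blocks

print(extra_ssd_for_replicas(15, 3))  # 6, matching Example 1's Task 8
print(extra_ssd_for_replicas(5, 5))   # 0, matching Example 2's Task 8
```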

Running other     To enhance the desktop experience, customers may deploy virtual desktops
applications      alongside other applications such as Citrix XenApp and VMware vShield. However,
                  these applications require additional storage and server resources. Follow the
                  application guidelines and add storage resources based on vendor
                  recommendations.

                  For example, when Citrix XenApp is used to stream applications to virtual desktops,
                  it creates additional I/Os. To keep the same level of user experience, add ten SAS
                  drives (eight for a RAID 10 configuration) to the virtual desktop storage pools for
                  every building block.

VMware View       VMware View Storage Accelerator reduces the storage load associated with virtual
Storage           desktops by caching the common blocks of desktop images into local vSphere host
Accelerator       memory. The Accelerator leverages a VMware vSphere 5.0 platform feature called
                  Content Based Read Cache (CBRC) that is implemented inside the vSphere
                  hypervisor. When enabled for the View virtual desktop pools, the host hypervisor
                  scans the storage disk blocks to generate digests of the block contents. When these
                  blocks are read into the hypervisor, they are cached in the host-based CBRC.
                  Subsequent reads of blocks with the same digest will be served from the in-memory
                  cache directly.

                  When VMware View Storage Accelerator is implemented in VDI, it reduces the read-
                  intensive I/Os such as boot storm, login storm, and antivirus scan to the VNX system.
                  However, it does not reduce the storage I/O during the steady state. The storage must



be configured with the recommended number of building blocks to provide optimum
                     end-user experience.

VMware vSphere       In a virtual environment, it is common to provision virtual machines with more
Memory               memory than the hypervisor physically has, due to budget constraints. The memory
Overcommitment       overcommitment technique takes advantage of the fact that each virtual machine
                     does not fully utilize the amount of memory allocated to it, so it makes business
                     sense to oversubscribe memory usage to some degree. The administrator is
                     responsible for proactively monitoring the oversubscription rate so that the
                     bottleneck does not shift away from the server and become a burden on the storage
                     subsystem.

                     If VMware vSphere runs out of memory for the guest operating systems, paging takes
                     place, resulting in extra I/O activity to the vswap files. If the storage subsystem is
                     sized correctly, occasional spikes in vswap activity may not cause performance
                     issues, because transient bursts of load can be absorbed. However, if the memory
                     oversubscription rate is so high that the storage subsystem is severely impacted by
                     a continuing overload of vswap activity, more disks must be added to meet the
                     increased performance demand. At that point, it is up to the administrator to decide
                     whether it is more cost effective to add physical memory to the server or to increase
                     the amount of storage.

                     If the administrator decides to increase the storage amount, then the planned
                     number of desktops needs to be adjusted. For example, if customers have 1500
                     desktops to be virtualized, but each one needs 2 GB memory instead of 1 GB, then
                     plan for 3000 desktops and choose three building block storage configuration to
                     accommodate additional storage requirement.
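The adjustment described above scales the planned desktop count by the memory ratio (Python, illustrative; the 1 GB baseline matches the sizing profile used in this paper):

```python
def adjusted_desktops(planned, ram_gb_per_desktop, baseline_ram_gb=1):
    """Scale the planned desktop count by the memory ratio when sizing storage
    for vswap-driven I/O (the baseline profile assumes 1 GB per desktop)."""
    return planned * ram_gb_per_desktop // baseline_ram_gb

print(adjusted_desktops(1500, 2))  # 3000 -> three building blocks
```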

Applying the sizing It is possible that a customer environment does not exactly match the specification of
guidelines          the profile mentioned in this white paper. In such cases, the sizing guidelines must
                    be adjusted to meet the customer profile. Consider the following examples:

                     Example 1: VDI deployment using VMware View through NFS

                     A customer environment has 1,200 users, and each user generates an average of 20
                     IOPS. Because of the heavy user activity, each virtual desktop is provisioned with
                     4 GB of RAM with a 50 percent memory reservation. The customer wants to deploy
                     VMware View with persistent desktops, and each desktop must have at least 10 GB
                     of desktop space. The environment has 15 departments, each with its own custom
                     master/base image. The customer wants to provision the storage through NFS,
                     redirect the user data to a CIFS share, and provide 25 GB of user data space for
                     each desktop.

                     Table 12.   Customer 1 environment characteristics
                       Characteristics                        Value
                       VDI implementation type                VMware View
                       Number of desktops                     1200
                       Steady state IOPS per desktop          20
                       Concurrency                            100%



 Desktop capacity                         10 GB
 Protocol used (NFS or FC)                NFS

 User data per desktop                    25 GB
 List of applications running             None
 Number of master/base images             15

Table 13.    Customer 1 sizing tasks
Task 1: Calculate the number of desktop drives = 7 Flash drives + 47 SAS drives
    Active desktops at any given time = 1200 * 100% (concurrency)= 1200
    Customer IOPS required for this environment = 1200 * 20 = 24,000 IOPS
    IOPS per building block = 8,200 IOPS (1000 users * 8.2 steady state IOPS )
     Number of building blocks required for customer = 24,000/8,200 = 3
 Based on the recommendation in Table 4, one VMware View building block requires
 two Flash drives and 15 SAS drives.
 Total number of drives required = 3 * (2 Flash drives + 15 SAS drives) + hotspares
  = 6 Flash drives + 45 SAS drives + (1 Flash drive and 2 SAS drives for hotspares)
  = 7 Flash drives + 47 SAS drives
 Task 2: Calculate additional storage requirement to meet other applications = 0
 None
 Task 3: Choose the appropriate desktop drive to meet the capacity = 600 GB drive
 Desktop space required = 1200 desktops * (10 GB desktop space + vswap file)
                        = 1200 * (10 GB + 4 GB of RAM * 50% memory reservation)
                        = 14.4 TB
 Based on the recommendation in Table 5, one building block's available capacity is
 3 TB (3 GB available space per desktop * 1000 desktops) for 300 GB drives and 6 TB
 (6 GB available space per desktop * 1000 desktops) for 600 GB drives.
 Based on the Task 1 calculation, we need three building blocks. With 300 GB drives,
 the maximum capacity available for three building blocks is 9 TB (3 TB * 3 building
 blocks), which is not enough to meet the storage need of 14.4 TB. Therefore, the
 customer needs to choose the 600 GB drive (6 TB * 3 building blocks = 18 TB) to
 meet the desktop space requirement.
 Task 4: Calculate the number of Data Movers required = 3
 Based on the recommendation in Table 9, one Data Mover can support 12,300 IOPS (1500
 users * 8.2 average IOPS).
 Required Data Mover = Customer IOPS/Max IOPS supported by a Data Mover
 Required Data Mover = 24,000 IOPS / 12,300 IOPS per Data Mover = 2
 To provide high availability, one Data Mover must act as stand-by. Therefore, three Data
 Movers are required.
 Task 5: Choose the appropriate user data drives to meet the capacity = 50, 1 TB
 NL-SAS


User space required = 1200 desktops * 25 GB user space
                     = 30 TB
 Based on the recommendation in Table 11, one building block's available capacity is
 10 TB (10 GB available space per user * 1000 desktops) for 1 TB drives, 20 TB for
 2 TB drives, and 30 TB for 3 TB drives.
 Based on the Task 1 calculation, we need three building blocks. With 1 TB drives,
 the maximum capacity available for three building blocks is 30 TB (10 TB * 3
 building blocks), which is enough to meet the storage need of 30 TB.
 Required drives = number of building blocks * 16 NL-SAS drives (Table 11) +
 hotspares
                 = 3 * 16 + hotspares
                 = 48 + 2 = 50 NL-SAS drives
 Task 6: Choose the appropriate VNX = VNX5500
    Number of building blocks required = 3 (Task 1)
    Equivalent number of users = 3,000 (number of building blocks * users per
    building block)
    Number of Data Movers required = 3 (Task 4)
 The smallest VNX platform that supports 3,000 users (Table 7) and three Data
 Movers (Table 8) is the VNX5500.
 Task 7: Additional Flash drives due to VNX platform = 2 Flash drives
 Based on the recommendation in Table 10, the VNX5500 requires additional Flash
 drives when more than 2,000 desktops (two building blocks) are deployed.
 The customer environment requires three building blocks (3,000 desktops), which is
 1,000 desktops over the threshold. Therefore, one additional pair of Flash drives is
 required.
 Task 8: Additional Flash drives due to multiple Master/Base Images = 6 Flash
 drives
 The customer uses 15 master/base images for the VDI environment.
 A pair of Flash drives must be added per building block for every additional eight
 master/base images. Therefore, the customer needs to add three pairs of Flash
 drives (one pair per building block), since the environment requires three building
 blocks.
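The arithmetic in Tasks 1, 4, 7, and 8 can be reproduced with a short script (Python, illustrative; the constants are taken from the tasks and tables cited above):

```python
import math

# Example 1 inputs (VMware View over NFS)
users, iops, block_iops = 1200, 20, 8200
blocks = math.ceil(users * iops / block_iops)               # Task 1: building blocks
flash = blocks * 2 + 1                                      # 2 SSDs/block + 1 hotspare
sas = blocks * 15 + 2                                       # 15 SAS/block + 2 hotspares
dms = math.ceil(users * iops / (1500 * 8.2)) + 1            # Task 4: active + 1 standby
fc_extra = 2 * max(0, blocks - 2)                           # Task 7: VNX5500 threshold
replica_extra = 2 * blocks * math.ceil(max(0, 15 - 8) / 8)  # Task 8: 15 master images
print(blocks, flash, sas, dms, fc_extra, replica_extra)     # 3 7 47 3 2 6
```

The Flash total in Table 14 follows from these results: 6 active (Task 1) + 2 (Task 7) + 6 (Task 8) + 1 hotspare = 15 drives.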

Table 14.    Customer 1 components required
  Components                              Quantity
 VNX5500                                1
 600 GB, 15K SAS drives                 47 (45 active + 2 hotspare)
 100 GB, Flash drives                   15 (14 active + 1 hotspare)
 1 TB, 7.2K RPM NL-SAS drives           50 (48 active + 2 hotspare)
 Data Movers                            3 (2 active + 1 stand-by)
 License                                NFS, CIFS

Example 2: VDI deployment using Citrix XenDesktop with PVS through FC

A customer environment has 5000 users and each user generates an average of 16
IOPS. Due to shift schedule, only 50% of the desktops are active at any given time.


Each virtual desktop is provisioned with 1 GB of RAM. The customer wants to deploy
VDI with Citrix XenDesktop with PVS, and each desktop must have at least 5 GB of
desktop space. The environment uses five different master/base images on the PVS
server. The customer wants to provision the VDI storage through FC and deploy
XenApp to stream applications. A NAS system to which the user data is redirected is
already in place.

Table 15.    Customer 2 environment characteristics
  Characteristics                         Value
  VDI Implementation type                 Citrix XenDesktop with PVS
  Number of desktops                      5000
  Steady state IOPS per desktop           16
  Concurrency                             50%
  Desktop capacity                        5 GB
  Protocol used (NFS or FC)               FC

  User data per desktop                   0
  List of applications running            XenApp
  Number of Master/Base Image             5

Table 16.    Customer 2 sizing tasks
 Task 1: Calculate the number of desktop drives = 11 Flash drives + 83 SAS drives
    Active desktops at any given time = 5000 * 50% (concurrency) = 2500
    Customer IOPS required for this environment = 2500 * 16 = 40,000 IOPS
    IOPS per building block = 8,200 IOPS (1000 users * 8.2 steady state IOPS )
    Number of building blocks required for customer = 40,000/8,200 = 5
 Based on the recommendation in Table 3, one building block Citrix XenDesktop with PVS
 requires two Flash drives and 16 SAS drives.
 Total number of drives required = 5* (two Flash drives + 16 SAS drives) + hotspares
  = 10 Flash drives + 80 SAS drives + (One Flash drive and three SAS drives) for hotspare
  = 11 Flash drives + 83 SAS drives
 Task 2: Calculate additional storage requirement to meet other applications = 41
 SAS drives
 XenApp requires an additional eight SAS drives for each RAID 10 building block.
 Drives required to host XenApp = number of building blocks * 8 SAS drives +
 hotspare
 = 40 SAS drives + 1 hotspare
 = 41 SAS drives
 Task 3: Choose the appropriate desktop drive to meet the capacity = 600 GB drive
 Desktop space required = 5000 desktops * (5 GB desktop space + vswap file)
                        = 5000 * (5 GB + 1 GB of RAM)
                        = 30 TB
 Based on the recommendation in Table 5, one building block's available capacity is
 2 TB (2 GB available space per desktop * 1000 desktops) for 300 GB drives and 4 TB
 (4 GB available space per desktop * 1000 desktops) for 600 GB drives.
 With the XenApp requirement of adding eight drives per building block, one building
 block's available capacity is 3 TB (2 TB + 1 TB from the additional drives) for 300 GB
 drives and 6 TB (4 TB + 2 TB from the additional drives) for 600 GB drives.
 Based on the Task 1 calculation, we need five building blocks. With 300 GB drives,
 the maximum capacity available for five building blocks is 15 TB (3 TB * 5 building
 blocks), which is not enough to meet the storage need of 30 TB. Therefore, the
 customer needs to choose the 600 GB drive (6 TB * 5 building blocks = 30 TB) to
 meet the desktop space requirement.
 Task 4: Calculate the number of Data Movers required = 0
 None. This is an FC implementation.
 Task 5: Choose the appropriate user data drive to meet the capacity = 0
 None. NAS is already available.
 Task 6: Choose the appropriate VNX = VNX5500
   Number of building blocks required = 5 (Task 1)
   Number of equivalent users = 5000 (number of building blocks * number of users per
   building block)
   Number of Data Movers required = 0 (Task 4)
   The smallest VNX platform that supports 5000 desktops (Table 7) is the VNX7500.
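The Task 6 platform choice amounts to a threshold lookup. The desktop limits below are implied by the sizing summary tables later in this paper (Tables 18 through 20); the function and list names are illustrative.

```python
# Maximum supported desktop count per platform, as implied by the
# sizing summary tables (Tables 18-20).
PLATFORM_LIMITS = [
    (1500, "VNX5300"),
    (3000, "VNX5500"),
    (4500, "VNX5700"),
    (7500, "VNX7500"),
]

def smallest_platform(desktops):
    """Return the smallest VNX platform that supports the desktop count."""
    for limit, platform in PLATFORM_LIMITS:
        if desktops <= limit:
            return platform
    raise ValueError("desktop count exceeds the largest listed platform")

print(smallest_platform(5000))  # → VNX7500
```

For 5000 desktops the lookup lands on the VNX7500, in agreement with the selection above.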
Task 7: Additional Flash drives due to VNX platform = 0 Flash drives
Based on the recommendation in Table 10, the VNX7500 requires additional Flash drives only when more than 5000 desktops (five building blocks) are deployed.
The customer environment requires exactly five building blocks. Therefore, no additional Flash drives are required.
Task 8: Additional Flash drives due to multiple Master/Base Images = 0 Flash drives
The customer uses five master/base images for the VDI environment.
A pair of Flash drives must be added for every additional eight master/base images per building block. Five images fall within the first eight, so no additional Flash drives are needed for this environment.
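Pulling the tasks together, the Customer 2 bill of materials can be sketched as below. The per-block drive counts (2 Flash + 16 SAS for desktops, 8 SAS for XenApp) come from the tasks above; the one-spare-per-30-drives ratio is an assumption that reproduces the hotspare totals in Table 17.

```python
import math

def customer2_bom(blocks=5):
    """Illustrative drive bill of materials for the Customer 2 example."""
    flash = blocks * 2                   # Task 1: two Flash drives per block
    sas = blocks * 16 + blocks * 8       # Task 1 desktop SAS + Task 2 XenApp SAS
    flash_spares = math.ceil(flash / 30) # assumed 1 hotspare per 30 drives
    sas_spares = math.ceil(sas / 30)
    return {"Flash": flash + flash_spares, "SAS": sas + sas_spares}

print(customer2_bom())  # → {'Flash': 11, 'SAS': 124}
```

The result matches Table 17: 124 SAS drives (120 active + 4 hotspares) and 11 Flash drives (10 active + 1 hotspare).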

Table 17.    Customer 2 components required

Components                      Quantity
VNX7500                         1
600 GB, 15K SAS drives          124 (120 active + 4 hotspares)
100 GB Flash drives             11 (10 active + 1 hotspare)
1 TB, 7.2K RPM NL-SAS drives    0
Data Movers                     0




Conclusion
Summary             The sizing guidelines in this white paper show how to choose the appropriate VNX
                    series system and components for different VDI environments. The following tables
                    show the recommended configurations for different VDI workloads.

Table 18.    Sizing summary for Citrix XenDesktop with MCS

No. of     VNX       Data Movers        SSD drives               15K SAS drives           7.2K NL-SAS drives
desktops   system    (Active+standby)   Active  Hotspare  Total  Active  Hotspare  Total  Active  Hotspare  Total
500        VNX5300   2 (1+1)            2       1         3      20      1         21     16      1         17
1000       VNX5300   2 (1+1)            2       1         3      20      1         21     16      1         17
1500       VNX5300   2 (1+1)            6       1         7      40      2         42     32      2         34
2000       VNX5500   3 (2+1)            4       1         5      40      2         42     32      2         34
2500       VNX5500   3 (2+1)            8       1         9      60      2         62     48      2         50
3000       VNX5500   3 (2+1)            8       1         9      60      2         62     48      2         50
3500       VNX5700   4 (3+1)            10      1         11     80      3         83     64      3         67
4000       VNX5700   4 (3+1)            10      1         11     80      3         83     64      3         67
4500       VNX5700   4 (3+1)            14      1         15     100     4         104    80      3         83
5000       VNX7500   5 (4+1)            10      1         11     100     4         104    80      3         83
5500       VNX7500   5 (4+1)            14      1         15     120     4         124    96      4         100
6000       VNX7500   5 (4+1)            14      1         15     120     4         124    96      4         100
6500       VNX7500   6 (5+1)            18      1         19     140     5         145    112     4         116
7000       VNX7500   6 (5+1)            18      1         19     140     5         145    112     4         116
7500       VNX7500   6 (5+1)            22      1         23     160     6         166    128     5         133
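For automation, the rows of Table 18 can be encoded as data and resolved with a simple lookup. The sketch below includes only a few representative rows for brevity; the names and the rounding-up behavior (a count between rows maps to the next listed row) are illustrative, not part of the paper's method.

```python
# Selected rows from Table 18 (XenDesktop with MCS):
# (max desktops, platform, data movers, total SSD, total SAS, total NL-SAS)
MCS_SIZING = [
    (1500, "VNX5300", "2 (1+1)", 7, 42, 34),
    (3000, "VNX5500", "3 (2+1)", 9, 62, 50),
    (4500, "VNX5700", "4 (3+1)", 15, 104, 83),
    (7500, "VNX7500", "6 (5+1)", 23, 166, 133),
]

def mcs_config(desktops):
    """Return the first encoded table row that covers the desktop count."""
    for row in MCS_SIZING:
        if desktops <= row[0]:
            return row
    raise ValueError("desktop count beyond the table's range")

print(mcs_config(2500))  # resolves to the 3000-desktop VNX5500 row
```

Because only a subset of rows is encoded, an intermediate desktop count resolves to the next larger configuration; encoding all 500-desktop increments from Table 18 would remove that coarseness.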


Table 19.    Sizing summary for VMware View

No. of     VNX       Data Movers        SSD drives               15K SAS drives           7.2K NL-SAS drives
desktops   system    (Active+standby)   Active  Hotspare  Total  Active  Hotspare  Total  Active  Hotspare  Total
500        VNX5300   2 (1+1)            2       1         3      15      1         16     16      1         17
1000       VNX5300   2 (1+1)            2       1         3      15      1         16     16      1         17
1500       VNX5300   2 (1+1)            6       1         7      30      1         31     32      2         34
2000       VNX5500   3 (2+1)            4       1         5      30      1         31     32      2         34
2500       VNX5500   3 (2+1)            8       1         9      45      2         47     48      2         50
3000       VNX5500   3 (2+1)            8       1         9      45      2         47     48      2         50
3500       VNX5700   4 (3+1)            10      1         11     60      2         62     64      3         67
4000       VNX5700   4 (3+1)            10      1         11     60      2         62     64      3         67
4500       VNX5700   4 (3+1)            14      1         15     75      3         78     80      3         83
5000       VNX7500   5 (4+1)            10      1         11     75      3         78     80      3         83
5500       VNX7500   5 (4+1)            14      1         15     90      3         93     96      4         100
6000       VNX7500   5 (4+1)            14      1         15     90      3         93     96      4         100
6500       VNX7500   6 (5+1)            18      1         19     105     4         109    112     4         116
7000       VNX7500   6 (5+1)            18      1         19     105     4         109    112     4         116
7500       VNX7500   6 (5+1)            22      1         23     120     4         124    128     5         133


Table 20.    Sizing summary for Citrix XenDesktop with PVS

No. of     VNX       Data Movers        SSD drives               15K SAS drives           7.2K NL-SAS drives
desktops   system    (Active+standby)   Active  Hotspare  Total  Active  Hotspare  Total  Active  Hotspare  Total
500        VNX5300   2 (1+1)            2       1         3      16      1         17     16      1         17
1000       VNX5300   2 (1+1)            2       1         3      16      1         17     16      1         17
1500       VNX5300   2 (1+1)            6       1         7      32      2         34     32      2         34
2000       VNX5500   3 (2+1)            4       1         5      32      2         34     32      2         34
2500       VNX5500   3 (2+1)            8       1         9      48      2         50     48      2         50
3000       VNX5500   3 (2+1)            8       1         9      48      2         50     48      2         50
3500       VNX5700   4 (3+1)            10      1         11     64      3         67     64      3         67
4000       VNX5700   4 (3+1)            10      1         11     64      3         67     64      3         67
4500       VNX5700   4 (3+1)            14      1         15     80      3         83     80      3         83
5000       VNX7500   5 (4+1)            10      1         11     80      3         83     80      3         83
5500       VNX7500   5 (4+1)            14      1         15     96      4         100    96      4         100
6000       VNX7500   5 (4+1)            14      1         15     96      4         100    96      4         100
6500       VNX7500   6 (5+1)            18      1         19     112     4         116    112     4         116
7000       VNX7500   6 (5+1)            18      1         19     112     4         116    112     4         116
7500       VNX7500   6 (5+1)            22      1         23     128     5         133    128     5         133
References
EMC documents       The following documents, located on EMC Online Support, provide additional,
                    relevant information. Access to these documents depends on your login
                    credentials. If you do not have access to a document, contact your EMC
                    representative:

                       EMC Infrastructure for VMware View 5.0, EMC VNX Series (NFS), VMware vSphere 5.0, VMware View 5.0, VMware View Persona Management, and VMware View Composer 2.7 — Reference Architecture
                       Infrastructure for VMware View 5.0 — EMC VNX Series (NFS), VMware vSphere 5.0, VMware View 5.0, and VMware View Composer 2.7 — Reference Architecture
                       EMC Infrastructure for VMware View 5.0 — EMC VNX Series (NFS), VMware vSphere 5.0, VMware View 5.0, and VMware View Composer 2.7 — Proven Solutions Guide
                       EMC Performance Optimization for Microsoft Windows XP for the Virtual Desktop Infrastructure — Applied Best Practices
                       Deploying Microsoft Windows 7 Virtual Desktops with VMware View — Applied Best Practices Guide
                       EMC Infrastructure for Virtual Desktops Enabled by EMC VNX Series (FC), VMware vSphere 4.1, and Citrix XenDesktop 5 — Reference Architecture
                       EMC Infrastructure for Virtual Desktops Enabled by EMC VNX Series (FC), VMware vSphere 4.1, and Citrix XenDesktop 5 — Proven Solution Guide
                       EMC Infrastructure for Virtual Desktops Enabled by EMC VNX Series (NFS), Cisco UCS, VMware vSphere 4.1, and Citrix XenDesktop 5 — Reference Architecture
                       EMC Infrastructure for Citrix XenDesktop 5.5 (PVS) — EMC VNX Series (NFS), Cisco UCS, Citrix XenDesktop 5.5 (PVS), XenApp 6.5, and XenServer 6 — Reference Architecture
                       EMC Infrastructure for Citrix XenDesktop 5.5 (PVS) — EMC VNX Series (NFS), Cisco UCS, Citrix XenDesktop 5.5 (PVS), XenApp 6.5, and XenServer 6 — Proven Solution Guide
                       EMC Infrastructure for Citrix XenDesktop 5.5 — EMC VNX Series (NFS), Cisco UCS, Citrix XenDesktop 5.5, XenApp 6.5, and XenServer 6 — Reference Architecture
                       EMC Infrastructure for Citrix XenDesktop 5.5 — EMC VNX Series (NFS), Cisco UCS, Citrix XenDesktop 5.5, XenApp 6.5, and XenServer 6 — Proven Solutions Guide
                       EMC Infrastructure for Virtual Desktops Enabled by EMC VNX Series (NFS), VMware vSphere 4.1, and Citrix XenDesktop 5 — Proven Solution Guide




Transforming Desktop Virtualization with Citrix XenDesktop and EMC XtremIO
 
Citrix ready-webinar-xtremio
Citrix ready-webinar-xtremioCitrix ready-webinar-xtremio
Citrix ready-webinar-xtremio
 
EMC FORUM RESEARCH GLOBAL RESULTS - 10,451 RESPONSES ACROSS 33 COUNTRIES
EMC FORUM RESEARCH GLOBAL RESULTS - 10,451 RESPONSES ACROSS 33 COUNTRIES EMC FORUM RESEARCH GLOBAL RESULTS - 10,451 RESPONSES ACROSS 33 COUNTRIES
EMC FORUM RESEARCH GLOBAL RESULTS - 10,451 RESPONSES ACROSS 33 COUNTRIES
 
EMC with Mirantis Openstack
EMC with Mirantis OpenstackEMC with Mirantis Openstack
EMC with Mirantis Openstack
 
Modern infrastructure for business data lake
Modern infrastructure for business data lakeModern infrastructure for business data lake
Modern infrastructure for business data lake
 
Force Cyber Criminals to Shop Elsewhere
Force Cyber Criminals to Shop ElsewhereForce Cyber Criminals to Shop Elsewhere
Force Cyber Criminals to Shop Elsewhere
 
Pivotal : Moments in Container History
Pivotal : Moments in Container History Pivotal : Moments in Container History
Pivotal : Moments in Container History
 
Data Lake Protection - A Technical Review
Data Lake Protection - A Technical ReviewData Lake Protection - A Technical Review
Data Lake Protection - A Technical Review
 
Mobile E-commerce: Friend or Foe
Mobile E-commerce: Friend or FoeMobile E-commerce: Friend or Foe
Mobile E-commerce: Friend or Foe
 
Virtualization Myths Infographic
Virtualization Myths Infographic Virtualization Myths Infographic
Virtualization Myths Infographic
 
Intelligence-Driven GRC for Security
Intelligence-Driven GRC for SecurityIntelligence-Driven GRC for Security
Intelligence-Driven GRC for Security
 
The Trust Paradox: Access Management and Trust in an Insecure Age
The Trust Paradox: Access Management and Trust in an Insecure AgeThe Trust Paradox: Access Management and Trust in an Insecure Age
The Trust Paradox: Access Management and Trust in an Insecure Age
 
EMC Technology Day - SRM University 2015
EMC Technology Day - SRM University 2015EMC Technology Day - SRM University 2015
EMC Technology Day - SRM University 2015
 
EMC Academic Summit 2015
EMC Academic Summit 2015EMC Academic Summit 2015
EMC Academic Summit 2015
 
Data Science and Big Data Analytics Book from EMC Education Services
Data Science and Big Data Analytics Book from EMC Education ServicesData Science and Big Data Analytics Book from EMC Education Services
Data Science and Big Data Analytics Book from EMC Education Services
 
Using EMC Symmetrix Storage in VMware vSphere Environments
Using EMC Symmetrix Storage in VMware vSphere EnvironmentsUsing EMC Symmetrix Storage in VMware vSphere Environments
Using EMC Symmetrix Storage in VMware vSphere Environments
 
Using EMC VNX storage with VMware vSphereTechBook
Using EMC VNX storage with VMware vSphereTechBookUsing EMC VNX storage with VMware vSphereTechBook
Using EMC VNX storage with VMware vSphereTechBook
 

Dernier

Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...Neo4j
 
Top 5 Benefits OF Using Muvi Live Paywall For Live Streams
Top 5 Benefits OF Using Muvi Live Paywall For Live StreamsTop 5 Benefits OF Using Muvi Live Paywall For Live Streams
Top 5 Benefits OF Using Muvi Live Paywall For Live StreamsRoshan Dwivedi
 
The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024Rafal Los
 
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...apidays
 
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc
 
Kalyanpur ) Call Girls in Lucknow Finest Escorts Service 🍸 8923113531 🎰 Avail...
Kalyanpur ) Call Girls in Lucknow Finest Escorts Service 🍸 8923113531 🎰 Avail...Kalyanpur ) Call Girls in Lucknow Finest Escorts Service 🍸 8923113531 🎰 Avail...
Kalyanpur ) Call Girls in Lucknow Finest Escorts Service 🍸 8923113531 🎰 Avail...gurkirankumar98700
 
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time AutomationFrom Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time AutomationSafe Software
 
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure serviceWhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure servicePooja Nehwal
 
08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking MenDelhi Call girls
 
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptxHampshireHUG
 
CNv6 Instructor Chapter 6 Quality of Service
CNv6 Instructor Chapter 6 Quality of ServiceCNv6 Instructor Chapter 6 Quality of Service
CNv6 Instructor Chapter 6 Quality of Servicegiselly40
 
Scaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organizationScaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organizationRadu Cotescu
 
GenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day PresentationGenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day PresentationMichael W. Hawkins
 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerThousandEyes
 
Developing An App To Navigate The Roads of Brazil
Developing An App To Navigate The Roads of BrazilDeveloping An App To Navigate The Roads of Brazil
Developing An App To Navigate The Roads of BrazilV3cube
 
Boost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivityBoost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivityPrincipled Technologies
 
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking MenDelhi Call girls
 
A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)Gabriella Davis
 
Breaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountBreaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountPuma Security, LLC
 
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...Miguel Araújo
 

Dernier (20)

Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...
 
Top 5 Benefits OF Using Muvi Live Paywall For Live Streams
Top 5 Benefits OF Using Muvi Live Paywall For Live StreamsTop 5 Benefits OF Using Muvi Live Paywall For Live Streams
Top 5 Benefits OF Using Muvi Live Paywall For Live Streams
 
The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024
 
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
 
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
 
Kalyanpur ) Call Girls in Lucknow Finest Escorts Service 🍸 8923113531 🎰 Avail...
Kalyanpur ) Call Girls in Lucknow Finest Escorts Service 🍸 8923113531 🎰 Avail...Kalyanpur ) Call Girls in Lucknow Finest Escorts Service 🍸 8923113531 🎰 Avail...
Kalyanpur ) Call Girls in Lucknow Finest Escorts Service 🍸 8923113531 🎰 Avail...
 
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time AutomationFrom Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
 
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure serviceWhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
 
08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men
 
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
 
CNv6 Instructor Chapter 6 Quality of Service
CNv6 Instructor Chapter 6 Quality of ServiceCNv6 Instructor Chapter 6 Quality of Service
CNv6 Instructor Chapter 6 Quality of Service
 
Scaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organizationScaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organization
 
GenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day PresentationGenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day Presentation
 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected Worker
 
Developing An App To Navigate The Roads of Brazil
Developing An App To Navigate The Roads of BrazilDeveloping An App To Navigate The Roads of Brazil
Developing An App To Navigate The Roads of Brazil
 
Boost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivityBoost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivity
 
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
 
A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)
 
Breaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountBreaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path Mount
 
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
 

White Paper: Sizing EMC VNX Series for VDI Workload — An Architectural Guideline

Table of contents

Executive summary ................................................................ 4
Introduction ..................................................................... 5
    Scope ........................................................................ 5
    Audience ..................................................................... 5
    Terminology .................................................................. 5
VDI technology overview .......................................................... 6
    Overview ..................................................................... 6
    Citrix XenDesktop ............................................................ 6
    VMware View ................................................................. 11
VDI I/O patterns ................................................................ 15
    Overview .................................................................... 15
    Measuring desktop performance ............................................... 15
    Optimizing desktops ......................................................... 15
    VDI workloads ............................................................... 16
Sizing VNX series for VDI workload .............................................. 22
    Overview .................................................................... 22
    Sizing with building block .................................................. 22
    Backend array ............................................................... 24
    Data Mover .................................................................. 25
    FAST Cache .................................................................. 26
Deployment considerations for VDI sizing ........................................ 28
    Overview .................................................................... 28
    Sizing for heavier desktop workload ......................................... 28
    Concurrency ................................................................. 28
    Persistent/dedicated vs. non-persistent/pooled mode ......................... 28
    User data and desktop files ................................................. 28
    Multiple master images ...................................................... 29
    Running other applications .................................................. 29
    VMware View Storage Accelerator ............................................. 29
    VMware vSphere Memory Overcommitment ........................................ 30
    Applying the sizing guidelines .............................................. 30
Conclusion ...................................................................... 35
    Summary ..................................................................... 35
References ...................................................................... 37
    EMC documents ............................................................... 37
Executive summary

This white paper provides sizing guidelines for choosing the appropriate storage resources to implement a virtual desktop infrastructure (VDI) on EMC® VNX™ unified storage arrays. It also explains how a VDI architecture uses the storage system. These guidelines help implementation engineers choose the appropriate VNX system for their VDI environment.
Introduction

Today, many businesses add desktop virtualization to their IT infrastructure. VDI provides better desktop security, rapid provisioning of applications, reliable desktop patch deployment, and remote access across a multitude of devices.

A well-thought-out design and implementation plan is critical to building a successful VDI environment that provides predictable performance to end users and scalability for desktop administrators. When designing a VDI environment, consider the profile of the end users, the service level agreements (SLAs) that must be fulfilled, and the desired user experience. When implementing VDI, the CPU, memory, network, and storage resources are shared among virtual desktops. The design should ensure that all desktops are given enough resources at all times.

Scope

This paper assumes that the reader is familiar with the concepts and operations related to VDI technologies and their use in an information infrastructure. The paper discusses multiple EMC, VMware, and Citrix products and outlines some general architectural designs. Refer to the documentation of each specific product for detailed information on installation and administration.

Audience

This white paper is intended for EMC employees, partners, and customers, including IT planners, virtualization architects and administrators, and any other IT professionals involved in evaluating, acquiring, managing, operating, or designing VDI leveraging EMC technologies.

Terminology

This paper includes the following terminology.

Table 1. Terminology

Storage Processor (SP): A hardware component that performs and manages backend array storage operations.

Login VSI: A third-party benchmarking tool developed by Login Consultants. This tool simulates real-world VDI workload by using an AutoIT script and determines the maximum system capacity based on users' response time.

Storage pool: An aggregation of storage disks configured with a particular RAID type.
VDI technology overview

Overview

VDI has many moving components and requires the involvement of several IT departments to be successful. Before sizing the storage, it is important to understand how the VDI architecture works and how each component uses the storage system. This section explains the two popular desktop virtualization environments: Citrix® XenDesktop® and VMware® View™.

Citrix XenDesktop

Citrix XenDesktop transforms Windows desktops into an on-demand service for any user, any device, anywhere. XenDesktop quickly and securely delivers any type of virtual desktop application to the latest PCs, Macs, tablets, smartphones, laptops, and thin clients with a high-definition user experience.

XenDesktop has two configuration methods:

• Machine Creation Services (MCS)
• Provisioning Services (PVS)

Machine Creation Services

MCS is a provisioning mechanism introduced in XenDesktop 5. It is integrated with the XenDesktop management interface, XenDesktop Studio, to provision, manage, and decommission desktops throughout the desktop lifecycle from a centralized point of management. Figure 1 shows the key components of a XenDesktop infrastructure with MCS.

Figure 1. Citrix XenDesktop MCS architecture diagram
The Web Interface provides the user interface to the XenDesktop environment. End users authenticate through the Web Interface and receive the login access information needed to connect directly to their virtual desktops.

The XenDesktop Controller orchestrates the VDI environment. It authenticates users, brokers connections between users and their virtual desktops, monitors the state of the virtual desktops, and starts and stops desktops based on demand and administrative configuration.

The License Server validates and manages the licenses of each XenDesktop component.

The AD/DNS/DHCP server provides the following:

• IP addresses to virtual desktops, using DHCP
• Secure communication between users and virtual desktops, using Active Directory
• IP host name resolution, using DNS

The Database Server stores information about the XenDesktop environment configuration, the virtual desktops, and their status.

The Hypervisor hosts the virtual desktops. It has built-in capabilities to manage and configure virtual desktops; the XenDesktop Controller uses these built-in features through MCS.

The XenDesktop Agent provides a communication channel between the XenDesktop Controller and the virtual desktops. It also provides a direct connection between virtual desktops and end users.

The Storage Array provides storage to the database and the hypervisor.
Figure 2 shows the storage mapping of virtual desktops in MCS.

Figure 2. XenDesktop MCS virtual desktop storage

A master image is a tuned desktop used to create new base images. Administrators can keep multiple versions of the master image by taking snapshots at different configurations. A master image can be placed on its own datastore.

The base image is a point-in-time copy of the master image. One copy of the base image is placed on every datastore allocated to host virtual desktops. The base image is read-only and is common to all the virtual desktops created on the same datastore; therefore, all read operations from those virtual desktops are redirected to the base image.

A differencing disk is a thinly provisioned disk used to capture changes made to the virtual desktop operating system. One differencing disk is created for each virtual desktop. When a virtual desktop is created with the dedicated option, the differencing disk preserves the changes made to the virtual desktop. For pooled virtual desktops, however, the differencing disk is recreated every time the desktop is restarted.

A 16 MB identity disk is created for each virtual desktop to store the user and machine identity information. During a restart or refresh, the identity disk is preserved regardless of whether the desktop is deployed with the dedicated or the pooled option.
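As a rough illustration of how these per-desktop components add up, the sketch below estimates the capacity of one MCS datastore. The base image size, expected differencing-disk growth, and headroom factor are hypothetical inputs for illustration only, not EMC-published sizing figures; the 16 MB identity disk is the one value taken from the description above.

```python
# Hypothetical back-of-the-envelope estimate of MCS datastore capacity.
# Input figures are illustrative assumptions, not vendor sizing numbers.

IDENTITY_DISK_GB = 16 / 1024  # each desktop carries a 16 MB identity disk


def mcs_datastore_gb(desktops, base_image_gb, diff_growth_gb, headroom=0.2):
    """Capacity needed for one MCS datastore: one read-only base image
    copy, plus a differencing disk and an identity disk per desktop,
    with free-space headroom on top."""
    raw = base_image_gb + desktops * (diff_growth_gb + IDENTITY_DISK_GB)
    return raw * (1 + headroom)


# e.g. 500 pooled desktops sharing a 25 GB base image, assuming each
# differencing disk grows to about 2 GB between restarts
estimate = mcs_datastore_gb(500, 25, 2.0)
print(round(estimate, 1))
```

Because the differencing disks are thin, this is a worst-case figure; pooled desktops that are restarted frequently reclaim that space each time.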
Provisioning Services

PVS uses streaming technology to provision virtual desktops. PVS streams a single shared desktop image across all the virtual desktops, which enables organizations to manage the virtual desktop environment with fewer disk images. Figure 3 shows the key components in XenDesktop with PVS.

Figure 3. XenDesktop PVS architecture diagram

PVS requires an environment similar to MCS but adds two servers to stream the desktop image and deliver it to the virtual desktops.

The TFTP Server is used by the virtual desktops to boot from the network and download the bootstrap file. The bootstrap file contains the information needed to access the PVS server and stream the appropriate desktop image.

The PVS Server streams the desktop image to the virtual desktops. The PVS server has a special storage location, called the vDisk, that stores all the streaming images.

The DHCP server provides the IP address and PXE boot information to virtual desktops.
Figure 4 shows the storage mapping of virtual desktops in PVS.

Figure 4. XenDesktop PVS virtual desktop storage

As with MCS, the master image is a tuned desktop used to create a new base image. The master image can be placed on a separate datastore and is accessed only when PVS creates or updates the base image. The PVS server extracts the master image and creates a base image on the vDisk datastore. The base image is streamed to the virtual desktops and set to read-only mode. The PVS server uses a Citrix proprietary version-control mechanism to keep multiple versions of the base image.

A write-cache disk (similar to the MCS differencing disk) is created for each virtual desktop. The write-cache disk is thinly provisioned and is used to capture changes made to the virtual desktop operating system.

Personal vDisk

Personal vDisk is a new feature introduced in XenDesktop 5.6. It preserves customization settings and user-installed applications in a pooled desktop by redirecting changes from the user's pooled virtual machine to a separate disk called the personal vDisk. The personal vDisk can be deployed with both MCS and PVS configurations.
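As with the MCS layout, the PVS storage split described above (retained base-image versions in the vDisk store, one thin write-cache disk per desktop) lends itself to a rough capacity sketch. The retained-version count and per-desktop write-cache growth below are illustrative assumptions, not Citrix or EMC recommendations.

```python
# Hypothetical PVS capacity split; all inputs are illustrative.

def pvs_storage_gb(desktops, image_gb, image_versions, write_cache_gb):
    """Return (vDisk store GB, write-cache datastore GB).

    The vDisk store holds every retained version of the streamed base
    image; the write-cache datastore holds one write-cache disk per
    desktop (thin, so this is the fully grown worst case).
    """
    vdisk_store = image_versions * image_gb
    write_cache_store = desktops * write_cache_gb
    return vdisk_store, write_cache_store


# e.g. 1,000 desktops streaming a 25 GB image with 3 retained versions
# and up to 4 GB of write-cache growth each
vdisk_gb, cache_gb = pvs_storage_gb(1000, 25, 3, 4)
print(vdisk_gb, cache_gb)
```

The asymmetry is the point of PVS: image capacity scales with the number of retained versions, not with the number of desktops.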
Figure 5 shows the storage mapping of virtual desktops using personal vDisk in MCS.

Figure 5. XenDesktop personal vDisk with MCS virtual desktop storage

The personal vDisk stores the user's personal data and application data for the virtual desktop. The personal data is visible to the end user as drive P, while the application data is hidden. During a desktop session, the application data on the personal vDisk is blended with the content from the base virtual machine and the differencing disk (with PVS, it is blended with the base image and the write-cache disk) to present a unified drive C to the end user. For better end-user access times, place the personal vDisk on a separate datastore.

VMware View

VMware View provides rich, personalized virtual desktops to end users. With VMware View, administrators can virtualize the operating system, applications, and user data while gaining control, efficiency, and security by keeping desktop data in the data center. VMware View has several components that work together to deliver a robust VDI environment.
Figure 6 shows the typical VMware View VDI components.

Figure 6. VMware View architecture diagram

VMware View Manager orchestrates the VDI environment. It authenticates users, assigns virtual desktops to users, monitors the state of the virtual desktops, and starts and stops desktops based on demand and administrative configuration.

The DHCP server provides the IP address to virtual desktops.

The Database Server stores the VMware View, vCenter, and virtual desktop configuration information in a database.

The VMware Virtual Infrastructure hosts the virtual desktops. VMware View Composer uses the built-in capabilities of the VMware Virtual Infrastructure to manage and configure virtual desktops.

The View Agent provides communication between View Manager and the virtual desktops. It also provides a direct connection between virtual desktops and end users through the VMware View Client.

The View Client (user endpoint) communicates with View Manager and the View Agent to authenticate and connect to the virtual desktop.

The Storage Array provides storage to the database and the VMware Virtual Infrastructure.
Figure 7 shows the different storage components in each virtual desktop.

Figure 7. VMware View architecture virtual desktop storage

A base image is a tuned desktop that is used to create new replica images. Administrators can keep multiple versions of the base image by taking snapshots at different configurations. The base image can be placed on its own datastore.

During virtual desktop creation, View Composer first ensures that each datastore has its own replica. A replica is a thinly provisioned full copy of the base image. To create a replica, the administrator selects a point-in-time copy of the base image. If a separate replica datastore is selected, one replica is created for every one thousand virtual desktops. The replica disk is read-only and is common to all the virtual desktops; the read operations of the virtual machines against their OS files are therefore redirected to the replica disk.

After the replica is created, a linked clone is created for each virtual machine. A linked clone is an empty disk when the virtual desktop is created; it serves as a placeholder to store all changes made to the virtual desktop operating system. A disposable disk is created to store the virtual machine's page file, Windows system temporary files, and VMware log files. This content is deleted when the virtual machine is restarted. The disposable disks are placed on the same datastore as the linked clones.
When virtual desktops are created with the dedicated option, View Composer creates a separate persistent disk for each virtual desktop. The persistent disk stores all the user profile and user data information for the virtual desktop. A persistent disk can be detached and attached to any virtual desktop to access its data.

During virtual machine creation, an internal disk of 16 MB is created for each virtual desktop. It stores personalization information such as the computer name, domain name, username, and password. The data is written during the initial creation of the virtual desktop, when the virtual machine joins a domain, and at password reset. When the virtual desktop is recomposed or restarted, the data in the internal disk is preserved.
VDI I/O patterns

Overview

Sizing storage for VDI is one of the most complicated and critical parts of the design and implementation process. Storage sizing becomes complex when the storage requirements of the desktop users are not fully understood. A desktop user has requirements for both storage capacity and performance. Sizing is often considered only from the capacity aspect, which leads to undesirable performance for end users.

Measuring desktop performance

From a user's perspective, good performance of a virtual desktop is the ability to complete any desktop operation in a reasonable amount of time. This means the storage system that supports the virtual desktop must be able to deliver the data (read/write operations) quickly. Therefore, the correct way to size storage is to calculate the IOPS of each virtual desktop and design the storage to meet the IOPS requirement at a reasonable response time.

Optimizing desktops

It is highly recommended to optimize the OS image used for the master/base image. The following data shows the effect of optimization during a steady state workload. The optimized image used in this case has several Windows features and services, such as Windows 7 themes, the Windows Search index, BitLocker Drive Encryption, and Windows Defender, disabled to reduce resource utilization.

Figure 8. Desktop IOPS: optimized vs. non-optimized

Figure 8 shows that during steady state, the IOPS of optimized desktops is reduced by 15 percent. When planning the VDI implementation, the administrator must decide whether saving IOPS is worth disabling some of the Windows features.
VDI workloads

The I/O workloads of virtual desktops vary greatly during the production cycle. The four common workloads during the virtual desktop production cycle are:

Boot storm
Login storm
Steady state
Virus scan

For a successful deployment, the storage system must be designed to satisfy all of these workloads.

Boot storm: Boot storms occur when all virtual desktops are powered on simultaneously. The boot storm puts a heavy load on the storage system because all the virtual desktops compete for the shared storage resources. To minimize this load, it is recommended to boot the virtual desktops during non-peak hours and to boot a few virtual desktops at a time. The I/O requirement to boot a single virtual desktop depends on the desktop image and the boot time. Use the following formula to calculate the IOPS required to boot a single virtual desktop:

Required IOPS per desktop = Data required to boot / (Boot time * I/O size)

The data required to boot depends on the desktop image. Test results show that a Windows 7 desktop image with Microsoft Office installed requires about 130 MB (133,120 KB) of data to boot and register a virtual desktop. The average I/O size during a boot storm is about 4 KB. Plugging in these values shows that about 70 IOPS are required to boot a virtual desktop in eight minutes (480 seconds).

Login storm: Login storms occur when users log in to their virtual desktops at the same time. Unlike boot storms, login storms cannot be avoided in an environment where end users start work at the same time.
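The boot-storm formula above can be sketched in a few lines of Python. The values are the measured figures quoted in the text; the function name is illustrative:

```python
# Boot-storm sizing formula from the text:
#   required IOPS per desktop = data required to boot / (boot time * I/O size)

def boot_storm_iops(boot_data_kb: float, boot_time_s: float, io_size_kb: float) -> float:
    """IOPS needed to boot one desktop within the given boot window."""
    return boot_data_kb / (boot_time_s * io_size_kb)

iops = boot_storm_iops(boot_data_kb=133_120,  # ~130 MB Windows 7 + Office image
                       boot_time_s=480,        # eight-minute boot window
                       io_size_kb=4)           # average boot-storm I/O size
print(round(iops))  # -> 69, i.e. about 70 IOPS per desktop
```

Halving the boot window to four minutes doubles the per-desktop IOPS requirement, which is why staggered boots during non-peak hours are recommended.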
Figure 9. Average desktop IOPS during login

Figure 9 shows the IOPS required during login. The login IOPS were measured on a physical and a virtual desktop for comparison. The data shows that both desktops require similar IOPS during a login session.

Steady state: Steady state occurs while users interact with the desktop. The IOPS for steady state vary because user activities differ. The Login VSI medium workload was used to measure the IOPS during steady state, and the IOPS were compared between physical and virtual desktops.

Figure 10. Desktop IOPS during steady state
Figure 10 shows the IOPS required during steady state. Both physical and virtual desktops show a similar IOPS requirement. The maximum IOPS occurred at the beginning of the Login VSI test, when applications were launched.

Figure 11 shows how the virtual desktop I/Os are served by the hypervisor and the VNX system. The desktops are provisioned through VMware View or Citrix XenDesktop with MCS. In this example, the storage system is provisioned to the hypervisor through NFS. FAST Cache on page 26 provides more details on EMC FAST™ Cache.

Figure 11. IOPS flow for VMware View and XenDesktop with MCS

The virtual desktops generated an average of 8.2 IOPS during steady state. The hypervisor converts the virtual desktop IOPS into NFS IOPS. During the conversion, 20 percent more IOPS are observed due to NFS metadata. Even though NFS comes with this overhead, it has the benefit of being simple and easy to manage in a VDI environment. In high-end VNX platforms, the Data Mover has a large cache, and some of the NFS I/Os are served from this cache. This reduces the I/O going to the storage processor and compensates for the overhead of the NFS protocol.

The Data Mover sends the NFS IOPS to the storage processor, which has DRAM cache as well as FAST Cache configured for the desktop storage pool. Testing shows that more than 90 percent of the IOPS are served by the DRAM and FAST Cache; only 10 percent of the IOPS are served by the SAS disks. Note that the number of I/Os served by DRAM and FAST Cache depends on the virtual desktop workload type and the cache size. The VNX system must be configured with appropriate Flash drives to optimize the FAST Cache utilization.

Figure 12 shows how the virtual desktop I/Os are served by the hypervisor and the VNX system in XenDesktop with PVS. In this example, the storage system is provisioned to the hypervisor through NFS.

Figure 12. IOPS flow for XenDesktop with PVS

As with VMware View and XenDesktop with MCS, the virtual desktops generated an average of 8.2 IOPS during steady state. The PVS server streamed 16 percent of the steady state desktop IOPS; the remaining 84 percent were served by the VNX storage. The hypervisor converts that 84 percent of the virtual desktop storage IOPS into NFS IOPS, and during the conversion, 20 percent more IOPS are observed due to NFS metadata. The Data Mover sends the NFS IOPS to the storage processor, which has DRAM cache as well as FAST Cache configured for the desktop storage pool. Testing shows that 88 percent of the storage IOPS are served by the DRAM and FAST Cache; only 12 percent of the IOPS are served by the SAS disks.
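The steady-state I/O flow described above can be expressed as simple per-desktop arithmetic. The constants below are the measured averages quoted in the text (8.2 desktop IOPS, 20 percent NFS metadata overhead, 16 percent of PVS I/O streamed by the PVS server); the variable names are illustrative:

```python
DESKTOP_IOPS = 8.2            # average steady-state IOPS per desktop
NFS_OVERHEAD = 1.20           # hypervisor adds ~20% NFS metadata IOPS
PVS_STREAMED_FRACTION = 0.16  # share of desktop I/O served by the PVS server

# VMware View / XenDesktop with MCS: all desktop I/O reaches the array.
mcs_nfs_iops = DESKTOP_IOPS * NFS_OVERHEAD

# XenDesktop with PVS: only the non-streamed 84% reaches the array.
pvs_nfs_iops = DESKTOP_IOPS * (1 - PVS_STREAMED_FRACTION) * NFS_OVERHEAD

print(f"MCS/View NFS IOPS per desktop: {mcs_nfs_iops:.2f}")  # ~9.84
print(f"PVS NFS IOPS per desktop:      {pvs_nfs_iops:.2f}")  # ~8.27
```

Of the IOPS that reach the storage processor, roughly 90 percent (MCS/View) or 88 percent (PVS) are absorbed by DRAM and FAST Cache, leaving only the remainder for the SAS disks.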
Figure 13 shows the virtual desktop IOPS distribution for VMware View and for Citrix XenDesktop with MCS and PVS provisioning.

Figure 13. IOPS distribution for VMware View and XenDesktop with MCS and PVS

Note: The pie chart percentage distribution does not take into account the NFS overhead mentioned in these examples.

Virus scan: Virus scans occur during a full antivirus scan of the virtual desktops. The I/O requirement for a virus scan depends on the desktop image and the time allowed to complete the full scan.
Figure 14. Desktop IOPS requirement for virus scan

Figure 14 shows the trend line of required IOPS against the time to complete the scan. The trend line is based on a Windows 7 desktop image with Microsoft Office installed. A virus scan requires very high IOPS per desktop. To avoid any impact on the end-user experience, it is recommended to run virus scans during non-peak hours.
Sizing VNX series for VDI workload

Overview

The VNX series delivers a single-box block and file solution, which offers a centralized point of management for distributed environments. This makes it possible to dynamically grow, share, and cost-effectively manage multiprotocol file systems and multiprotocol block access. Administrators can take advantage of simultaneous support for the NFS and CIFS protocols, enabling Windows and Linux/UNIX clients to share files using the sophisticated file-locking mechanisms of VNX for File, and can use VNX for Block for high-bandwidth or latency-sensitive applications.

The VNX series unified storage offers the following four models.

Figure 15. VNX series unified storage systems

These four options are based on the number of disks each model can support. Customers can configure a VNX system with 5 to 1,000 disks, which helps them select the right VNX system for their environment. The VNX series is built for speed, delivering robust performance and efficiency to support the provisioning of virtual desktops (thin or thick). The VNX series is optimized for virtualization: it supports all leading hypervisors and simplifies desktop creation and storage configuration.

The sizing guidelines are based on the desktop profile described in VDI I/O patterns on page 15. Additional storage resources must be allocated for environments with heavier desktop profiles. Using these guidelines, VDI can be deployed with FC or NFS connectivity for the virtual desktop datastores. The next few sections give details on selecting the right VNX system components for a VDI environment.

Sizing with building blocks

Sizing a VNX storage system to meet virtual desktop IOPS is a complicated process. When an I/O reaches the VNX storage, it is served by several components, such as the Data Mover (NFS), backend dynamic random access memory (DRAM) cache, FAST Cache, and disks. To reduce the complexity, a building block approach is used.
A building block is a set of spindles used to support a certain number of virtual desktops.
The following building block configurations are recommended for all VDI VNX systems.

Table 2. VNX series building block configuration - XenDesktop with MCS
Number of desktops   SSD drives   15K SAS drives
1000                 2 (RAID 1)   20 (RAID 5)

Table 3. VNX series building block configuration - XenDesktop with PVS
Number of desktops   SSD drives   15K SAS drives
1000                 2 (RAID 1)   16 (RAID 10)

Table 4. VNX series building block configuration - VMware View
Number of desktops   SSD drives   15K SAS drives
1000                 2 (RAID 1)   15 (RAID 5)

This basic building block can be used to scale users by multiplying the number of drives for every 1,000 desktops. The solid state drives (SSDs) are used for FAST Cache; the SAS drives store the virtual desktops.

Storage disks come in different sizes. The maximum size of the virtual desktop depends on the capacity of the storage disk and the RAID type. Table 5 shows the maximum available desktop space for different disk sizes.

Table 5. Maximum storage capacity per virtual desktop
Desktop provisioning method   300 GB, 15K RPM SAS   600 GB, 15K RPM SAS
VMware View                   3 GB (RAID 5)         6 GB (RAID 5)
Citrix XenDesktop (MCS)       4 GB (RAID 5)         8 GB (RAID 5)
Citrix XenDesktop (PVS)       2 GB (RAID 10)        4 GB (RAID 10)

The maximum desktop space in Table 5 includes the virtual desktop vswap file. Therefore, the space available to the end user equals the maximum desktop space shown in Table 5 minus the vswap file. The size of the vswap file is typically equal to the size of the memory allocated to the virtual desktop; however, it can be reduced with a memory reservation. For example, a desktop with 1 GB of RAM and a 50 percent memory reservation creates a vswap file of 0.5 GB.

When using these building blocks, it is important to note that CPU, memory, and network must also be properly sized to meet the virtual desktop requirements. A wrong sizing choice for these resources significantly increases the storage IOPS requirement and invalidates the building block recommendation. The following sections provide additional recommendations for selecting other VNX components.

Backend array

The VNX backend array provides block storage connectivity to the Data Movers and other servers. Each VNX backend array has two storage processors for high availability and fault tolerance. Customers can select different backend array configurations depending on their performance and capacity needs. Table 6 shows the backend array configuration for different VNX systems.

Table 6. VNX series backend array configuration
Configuration               VNX5300          VNX5500           VNX5700          VNX7500
Min. form factor            7U               7U-9U             8U-11U           8U-15U
Max. drives                 125              250               500              1000
Drive types                 3.5" Flash, 15K SAS, and 7.2K NL-SAS; 2.5" 10K SAS (all models)
CPU/cores/memory (per SP)   1.6 GHz/4/8 GB   2.13 GHz/4/12 GB  2.4 GHz/4/18 GB  2.8 GHz/6/24 GB
Protocols                   FC, iSCSI, FCoE (all models)

All VNX backend arrays support the FC, iSCSI, and FCoE protocols. The storage processor CPU and memory are increased on high-end backend arrays to support higher drive counts. Administrators can select the backend storage based on the number of drives needed for the VDI implementation.

Figure 16. Backend array IOPS scaling with CPU utilization
Figure 16 shows the scalability of VNX systems. When the VDI workload is increased on higher backend arrays, the CPU utilization does not increase significantly. This shows that the high-end backend arrays of the VNX series scale up well with higher VDI workloads. When the virtual desktop load on the same backend array is increased, the scale-up is not linear. Table 7 shows the recommended maximum number of virtual desktops for different backend arrays.

Table 7. Recommended maximum virtual desktops for backend arrays
Backend array   Maximum virtual desktops
VNX5300         1,500
VNX5500         3,000
VNX5700         4,500
VNX7500         7,500

Data Mover

Data Movers are used when the VDI implementation uses NFS datastores to store virtual desktops and CIFS shares to store user data. A Data Mover is an independent server running the EMC proprietary operating system Data Access in Real Time (DART). Each Data Mover has multiple network ports, network identities, and connections to the backend storage array. In many ways, a Data Mover operates as an independent file server, bridging the LAN and the backend storage array. A VNX system has one or more Data Movers installed in its frame.

Table 8. VNX series Data Mover configuration
Configuration                       VNX5300           VNX5500           VNX5700          VNX7500
Data Movers                         1 or 2            1, 2, or 3        2, 3, or 4       2 to 8
CPU/cores/memory (per Data Mover)   2.13 GHz/4/6 GB   2.13 GHz/4/12 GB  2.4 GHz/4/12 GB  2.8 GHz/6/24 GB
Protocols                           NFS, CIFS, MPFS, pNFS (all models)

To ensure high availability, VNX supports a configuration in which one Data Mover acts as a standby for one or more active Data Movers. When an active Data Mover fails, the standby quickly takes over the identity and storage tasks of the failed Data Mover.
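The Data Mover count for an NFS deployment follows from the paper's guideline of 1,500 desktops (about 12,300 NFS IOPS at 8.2 IOPS per desktop) per active Data Mover, plus one standby for failover. A minimal sketch, with an illustrative function name:

```python
import math

# Guideline figures from the text: at most 1,500 desktops per active Data
# Mover, i.e. roughly 1,500 * 8.2 = 12,300 IOPS, plus one standby Data Mover.
IOPS_PER_DATA_MOVER = 1_500 * 8.2  # ~12,300 IOPS

def data_movers_needed(total_iops: float) -> int:
    """Active Data Movers to carry the load, plus one standby."""
    active = math.ceil(total_iops / IOPS_PER_DATA_MOVER)
    return active + 1  # one standby for failover

# Example 1 later in the paper: 1,200 desktops * 20 IOPS = 24,000 IOPS.
print(data_movers_needed(24_000))  # -> 3 (2 active + 1 standby)
```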
Figure 17. CPU utilization for VDIs on a single Data Mover

Figure 17 shows that on a single Data Mover, the CPU utilization increases linearly as virtual desktops are added. Testing shows that virtual desktop capacity does not scale linearly with the number of Data Movers. It is recommended to plan for 1,500 virtual desktops per Data Mover when implementing over a NAS protocol. Table 9 shows the recommended number of virtual desktops for a given number of active Data Movers.

Table 9. Recommended virtual desktops with active Data Movers
Number of active Data Movers   Maximum virtual desktops
1                              1,500
2                              3,000
3                              4,500
5                              7,500

FAST Cache

EMC FAST Cache uses SSDs to add an extra layer of cache between the DRAM cache and the disk drives. The FAST Cache feature is available on all VNX systems. FAST Cache works by examining 64 KB chunks of data in FAST Cache-enabled objects on the array. Frequently accessed data is copied to the FAST Cache, and subsequent accesses to that data chunk are serviced by FAST Cache. This enables immediate promotion of very active data to the Flash drives. This extended read/write cache is an ideal caching mechanism for the VDI environment because the desktop image and other active user data are accessed so frequently that the data is serviced directly from the SSDs without accessing the slower drives in the lower storage tier. FAST Cache can be enabled on both the desktop and user data storage pools.
Figure 18. FAST Cache IOPS with increasing drives

Figure 18 shows FAST Cache IOPS as SSDs are added on different VNX platforms. FAST Cache performance consists of SSD IOPS and backend array DRAM IOPS. Higher VNX platforms have more backend array DRAM. When scaling virtual desktops on a VNX system, low-end systems need additional SSDs to compensate for the smaller DRAM in order to maintain the same FAST Cache hit ratio. The additional FAST Cache drive requirement varies across the VNX series backend arrays because they have different amounts of DRAM installed on the storage processors. Table 10 shows the thresholds at which additional SSDs are required when scaling the VDI workload.

Table 10. Additional FAST Cache requirement threshold
Backend array   Additional FAST Cache requirement
VNX5300         > 1,000 virtual desktops
VNX5500         > 2,000 virtual desktops
VNX5700         > 3,000 virtual desktops
VNX7500         > 5,000 virtual desktops

It is recommended to add a pair of SSDs for every 1,000 virtual desktops after the threshold for additional FAST Cache is reached.
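The rule above (one extra SSD pair per 1,000 desktops beyond the per-platform threshold in Table 10) can be sketched as a small helper; the function name is illustrative:

```python
import math

# Thresholds from Table 10: desktops beyond which extra FAST Cache is needed.
FAST_CACHE_THRESHOLD = {
    "VNX5300": 1_000,
    "VNX5500": 2_000,
    "VNX5700": 3_000,
    "VNX7500": 5_000,
}

def extra_fast_cache_drives(platform: str, desktops: int) -> int:
    """Additional FAST Cache SSDs, beyond the building-block SSDs."""
    over = desktops - FAST_CACHE_THRESHOLD[platform]
    if over <= 0:
        return 0
    return 2 * math.ceil(over / 1_000)  # one pair per extra 1,000 desktops

print(extra_fast_cache_drives("VNX5500", 3_000))  # -> 2 (one extra pair)
```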
Deployment considerations for VDI sizing

Overview

VDI can be implemented in several ways to meet end-user requirements. This section covers a few key areas that must be considered before VDI deployment because they affect the storage requirements.

Sizing for heavier desktop workloads

The sizing guideline in this paper is based on the Login VSI medium workload, which is considered a typical office workload. However, some customer environments have more active user profiles. Suppose a company has 500 users and, because of corporate applications, each user generates 24 IOPS instead of the 8 IOPS used in the sizing guideline. This customer needs 12,000 IOPS (500 users * 24 IOPS per virtual desktop). In this case, one building block is underpowered because it is rated for 8,000 IOPS (1,000 desktops * 8 IOPS per virtual desktop). The customer must therefore use a two building block configuration to satisfy the IOPS requirement of this environment.

Concurrency

The sizing guidelines in this paper assume that all desktop users are active at all times. In other words, in a single building block architecture, all 1,000 desktops generate workload in parallel, are booted at the same time, and so on. If a customer expects to have 1,500 users, but only 50 percent of them are logged on at any given time due to time zone differences or alternating shifts, the 750 active users out of the total 1,500 can be supported by a single building block architecture.

Persistent/dedicated vs. non-persistent/pooled mode

Virtual desktops can be created either in a persistent/dedicated mode or in a non-persistent/pooled mode. In the persistent/dedicated environment, user data, user-installed applications, and customizations are preserved across log out and restart. In the non-persistent/pooled environment, the user data and customizations are lost at log out or restart.
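The building-block arithmetic in the two scenarios above can be sketched as follows. This is a minimal sketch: the 8,200 IOPS block rating (1,000 desktops * 8.2 steady-state IOPS) follows the paper's worked examples, while the heavier-workload scenario above rounds it to 8,000 for simplicity:

```python
import math

BLOCK_IOPS = 8_200   # IOPS one building block is rated for
BLOCK_USERS = 1_000  # desktops one building block supports

def building_blocks(users: int, iops_per_user: float, concurrency: float) -> int:
    """Building blocks needed for the concurrently active workload."""
    active = users * concurrency
    needed_iops = active * iops_per_user
    # A block must satisfy both the IOPS and the desktop-count limit.
    return math.ceil(max(needed_iops / BLOCK_IOPS, active / BLOCK_USERS))

print(building_blocks(500, 24, 1.0))    # heavier workload -> 2 blocks
print(building_blocks(1500, 8.2, 0.5))  # 50% concurrency  -> 1 block
```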
The storage requirement for both environments is similar at the beginning of the deployment. However, persistent/dedicated desktops grow over time as patches and applications are installed. This reduces the number of I/Os that can be served by EMC FAST Cache, so more I/Os are passed on to the lower storage tier, which increases the response time and affects the end-user experience. To maintain the same level of end-user experience, configure additional FAST Cache for persistent/dedicated configurations.

User data and desktop files

A virtual desktop consists of Windows OS files, application files, user data, and user customizations. The Windows OS files and application files are accessed very frequently but do not grow rapidly. The user data and user customization files are not accessed heavily, but the user data grows over time. To meet these two different storage requirements, it is best to separate them and place them in two different storage tiers.
Table 11 shows the recommended building block configuration for user data.

Table 11. User data building block disk configuration
Number of desktops   7.2K NL-SAS drives   Maximum user data per desktop (1 TB / 2 TB / 3 TB drives)
1000                 16 (RAID 6)          10 GB / 20 GB / 30 GB

There are several tools and methods to separate the user data and user settings from the desktop files. Microsoft roaming profiles with folder redirection is one of the easier ways to separate these and place them in two different storage tiers. In addition to these tools, several third-party applications, such as XenDesktop Profile Manager, VMware Persona Management, and AppSense, can separate and move the user data onto a lower storage tier. It is recommended to use SAS drives for desktop files and NL-SAS drives for user data.

Multiple master images

When virtual desktops are deployed using Citrix XenDesktop (MCS) or VMware View, the master image is copied to create a replica/base image. The replica/base image is common to many virtual desktops; because it is accessed heavily, its data is promoted to FAST Cache. When multiple master images are used to deploy virtual desktops, one replica/base image must be created for each master image. This creates space contention in FAST Cache, and not all of the replica/base images are promoted to FAST Cache, which significantly reduces storage performance and impacts the end-user experience. To avoid FAST Cache contention, add a pair of FAST Cache drives for every additional eight replicas/base images in a building block.

Running other applications

To enhance the desktop experience, virtual desktops may be deployed with other applications such as Citrix XenApp and VMware vShield. However, these applications require additional storage and server resources. Follow the application guidelines and add storage resources based on the vendor recommendations. For example, when Citrix XenApp is used to stream applications to virtual desktops, it creates additional I/Os.
To maintain the same level of user experience, add ten SAS drives (eight for a RAID 10 configuration) to the virtual desktop storage pools of every building block.

VMware View Storage Accelerator

VMware View Storage Accelerator reduces the storage load associated with virtual desktops by caching the common blocks of desktop images in local vSphere host memory. The Accelerator leverages a VMware vSphere 5.0 platform feature called Content Based Read Cache (CBRC), which is implemented inside the vSphere hypervisor. When CBRC is enabled for View virtual desktop pools, the host hypervisor scans the storage disk blocks to generate digests of the block contents. When these blocks are read into the hypervisor, they are cached in the host-based CBRC, and subsequent reads of blocks with the same digest are served directly from the in-memory cache.

When VMware View Storage Accelerator is implemented in VDI, it reduces the read-intensive I/O of boot storms, login storms, and antivirus scans on the VNX system. However, it does not reduce the storage I/O during steady state. The storage must still be configured with the recommended number of building blocks to provide an optimum end-user experience.

VMware vSphere Memory Overcommitment

In a virtual environment, it is common to provision virtual machines with more memory than the hypervisor physically has due to budget constraints. The memory overcommitment technique takes advantage of the fact that each virtual machine does not fully utilize the amount of memory allocated to it, so it makes business sense to oversubscribe memory usage to some degree. The administrator has the responsibility to proactively monitor the oversubscription rate so that the bottleneck does not shift away from the server and become a burden on the storage subsystem.

If VMware vSphere runs out of memory for the guest operating systems, paging takes place, resulting in extra I/O activity to the vswap files. If the storage subsystem is sized correctly, occasional spikes due to vswap activity may not cause performance issues, as transient bursts of load can be absorbed. However, if the memory oversubscription rate is so high that the storage subsystem is severely impacted by a continuing overload of vswap activity, more disks will need to be added to meet the increased performance demand. At this juncture, it is up to the administrator to decide whether it is more cost effective to add physical memory to the servers or to increase the amount of storage. If the administrator decides to increase the storage, then the planned number of desktops needs to be adjusted. For example, if a customer has 1,500 desktops to virtualize, but each one needs 2 GB of memory instead of 1 GB, then plan for 3,000 desktops and choose a three building block storage configuration to accommodate the additional storage requirement.

Applying the sizing guidelines

It is possible that a customer environment does not exactly match the specification of the profile mentioned in this white paper.
In such cases, the sizing guidelines must be adjusted to meet the customer profile. Consider the following examples.

Example 1: VDI deployment using VMware View through NFS

A customer environment has 1,200 users, and each user generates an average of 20 IOPS. Because of the heavy user activity, each virtual desktop is provisioned with 4 GB of RAM with a 50 percent memory reservation. The customer wants to deploy VDI with VMware View using persistent desktops, and each desktop must have at least 10 GB of desktop space. The environment has 15 different departments, and each department has its own custom master/base image. The customer wants to provision the storage through NFS, redirect the user data to a CIFS share, and provide 25 GB of user data space for each desktop.

Table 12. Customer 1 environment characteristics
Characteristics                Value
VDI implementation type        VMware View
Number of desktops             1200
Steady state IOPS per desktop  20
Concurrency                    100%
Desktop capacity               10 GB
Protocol used (NFS or FC)      NFS
User data per desktop          25 GB
List of applications running   None
Number of master/base images   15

Table 13. Customer 1 sizing tasks

Task 1: Calculate the number of desktop drives = 7 Flash drives + 47 SAS drives
Active desktops at any given time = 1200 * 100% (concurrency) = 1200
Customer IOPS required for this environment = 1200 * 20 = 24,000 IOPS
IOPS per building block = 8,200 IOPS (1000 users * 8.2 steady state IOPS)
Number of building blocks required = 24,000 / 8,200 = 3 (rounded up)
Based on the recommendation in Table 4, one VMware View building block requires two Flash drives and 15 SAS drives.
Total number of drives required = 3 * (2 Flash drives + 15 SAS drives) + hot spares
= 6 Flash drives + 45 SAS drives + (1 Flash drive and 2 SAS drives as hot spares)
= 7 Flash drives + 47 SAS drives

Task 2: Calculate additional storage requirement for other applications = 0
None

Task 3: Choose the appropriate desktop drive to meet the capacity = 600 GB drives
Desktop space required = 1200 desktops * (10 GB desktop space + vswap file)
= 1200 * (10 GB + 4 GB of RAM * 50% memory reservation) = 14.4 TB
Based on the recommendation in Table 5, one building block provides 3 TB of available capacity (3 GB per desktop * 1000 desktops) with 300 GB drives, and 6 TB (6 GB per desktop * 1000 desktops) with 600 GB drives. Based on the Task 1 calculation, three building blocks are needed. With 300 GB drives, the maximum capacity available from three building blocks is 9 TB (3 TB * 3 building blocks), which is not enough to meet the 14.4 TB requirement. Therefore, the customer must choose 600 GB drives (6 TB * 3 building blocks = 18 TB) to meet the desktop space requirement.

Task 4: Calculate the number of Data Movers required = 3
Based on the recommendation in Table 9, one Data Mover can support 12,300 IOPS (1500 users * 8.2 average IOPS).
Required Data Movers = customer IOPS / maximum IOPS supported by a Data Mover
= 24,000 IOPS / 12,300 IOPS per Data Mover = 2 (rounded up)
To provide high availability, one Data Mover must act as a standby. Therefore, three Data Movers are required.

Task 5: Choose the appropriate user data drives to meet the capacity = 50 drives (1 TB NL-SAS)
User space required = 1200 desktops * 25 GB user space = 30 TB
Based on the recommendation in Table 11, one building block provides 10 TB of available capacity (10 GB per user * 1000 desktops) with 1 TB drives, 20 TB (20 GB per user * 1000 desktops) with 2 TB drives, and 30 TB (30 GB per user * 1000 desktops) with 3 TB drives. Based on the Task 1 calculation, three building blocks are needed. With 1 TB drives, the maximum capacity available from three building blocks is 30 TB (10 TB * 3 building blocks), which is enough to meet the 30 TB requirement.
Required drives = number of building blocks * 16 NL-SAS drives (Table 11) + hot spares
= 3 * 16 + hot spares = 48 + 2 = 50 NL-SAS drives

Task 6: Choose the appropriate VNX = VNX5500
Number of building blocks required = 3 (Task 1)
Equivalent number of users = 3,000 (number of building blocks * number of users per building block)
Number of Data Movers required = 3 (Task 4)
The smallest VNX platform that supports 3,000 users (Table 7) and three Data Movers (Table 8) is the VNX5500.

Task 7: Additional Flash drives due to the VNX platform = 2 Flash drives
Based on the recommendation in Table 10, the VNX5500 requires additional Flash drives when more than 2,000 desktops (two building blocks) are deployed. The customer environment requires three building blocks. Therefore, one additional pair of Flash drives is required.

Task 8: Additional Flash drives due to multiple master/base images = 6 Flash drives
The customer uses 15 master/base images for the VDI environment. A pair of Flash drives must be added for every additional eight master/base images per building block. Because the customer environment requires three building blocks, three pairs of Flash drives must be added.

Table 14. Customer 1 components required
Components                      Quantity
VNX5500                         1
600 GB, 15K SAS drives          47 (45 active + 2 hot spares)
100 GB Flash drives             15 (14 active + 1 hot spare)
1 TB, 7.2K RPM NL-SAS drives    50 (48 active + 2 hot spares)
Data Movers                     3 (2 active + 1 standby)
License                         NFS, CIFS

Example 2: VDI deployment using Citrix XenDesktop with PVS through FC

A customer environment has 5,000 users, and each user generates an average of 16 IOPS. Due to the shift schedule, only 50 percent of the desktops are active at any given time.
Each of their virtual desktops is provisioned with 1 GB of RAM. They want to deploy VDI with Citrix XenDesktop with PVS, and each desktop must have at least 5 GB of desktop space. The customer environment uses five different master/base images on the PVS server. They want to provision VDI storage through FC and to deploy XenApp to stream their applications. They already have a NAS system to which they redirect the user data.

Table 15. Customer 2 environment characteristics
Characteristic                    Value
VDI implementation type           Citrix XenDesktop with PVS
Number of desktops                5000
Steady state IOPS per desktop     16
Concurrency                       50%
Desktop capacity                  5 GB
Protocol used (NFS or FC)         FC
User data per desktop             0
List of applications running      XenApp
Number of master/base images      5

Table 16. Customer 2 sizing tasks
Task 1: Calculate the number of desktop drives = 11 Flash drives + 83 SAS drives
Active desktops at any given time = 5000 * 50% (concurrency) = 2500
Customer IOPS required for this environment = 2500 * 16 = 40,000 IOPS
IOPS per building block = 8,200 IOPS (1000 users * 8.2 steady-state IOPS)
Number of building blocks required = 40,000 / 8,200 = 5
Based on the recommendation in Table 3, one Citrix XenDesktop with PVS building block requires two Flash drives and 16 SAS drives.
Total number of drives required = 5 * (2 Flash drives + 16 SAS drives) + hotspares = 10 Flash drives + 80 SAS drives + (1 Flash drive and 3 SAS drives as hotspares) = 11 Flash drives + 83 SAS drives

Task 2: Calculate the additional storage required for other applications = 41 SAS drives
XenApp requires an additional eight SAS drives for each RAID 10 building block.
Drives required to host XenApp = Number of building blocks * 8 SAS drives + hotspare = 40 SAS drives + 1 hotspare = 41 SAS drives

Task 3: Choose the appropriate desktop drive to meet the capacity = 600 GB drives
Desktop space required = 5000 desktops * (5 GB desktop space + vswap file) = 5000 * (5 GB + 1 GB of RAM)
= 30 TB
Based on the recommendation in Table 5, the available capacity of one building block is 2 TB (2 GB available space per desktop * 1000 desktops) for 300 GB drives and 4 TB (4 GB available space per desktop * 1000 desktops) for 600 GB drives. With the XenApp requirement of adding eight drives per building block, the available capacity of one building block becomes 3 TB (2 TB + 1 TB from the additional drives) for 300 GB drives and 6 TB (4 TB + 2 TB from the additional drives) for 600 GB drives.
Based on the Task 1 calculation, five building blocks are needed. With 300 GB drives, the maximum capacity available across five building blocks is 15 TB (3 TB * 5 building blocks), which is not enough to meet the storage need of 30 TB. Therefore, the customer needs to choose 600 GB drives (6 TB * 5 building blocks = 30 TB) to meet the desktop space requirement.

Task 4: Calculate the number of Data Movers required = 0
None. This is an FC implementation.

Task 5: Choose the appropriate user data drives to meet the capacity = 0
None. NAS is already available.

Task 6: Choose the appropriate VNX = VNX7500
Number of building blocks required = 5 (Task 1)
Equivalent number of users = 5000 (number of building blocks * number of users per building block)
Number of Data Movers required = 0 (Task 4)
The smallest VNX platform that supports 5000 desktops (Table 7) is the VNX7500.

Task 7: Additional Flash drives due to the VNX platform = 0 Flash drives
Based on the recommendation in Table 10, the VNX7500 requires additional Flash drives when more than 5000 desktops (five building blocks) are deployed. The customer environment requires exactly five building blocks. Therefore, no additional Flash drives are required.

Task 8: Additional Flash drives due to multiple master/base images = 0 Flash drives
The customer uses five master/base images for their VDI environment. A pair of Flash drives must be added for every additional eight master/base images per building block.
Therefore, no additional Flash drives are needed for this environment.

Table 17. Customer 2 components required
Components                        Quantity
VNX7500                           1
600 GB, 15K SAS drives            124 (120 active + 4 hotspare)
100 GB Flash drives               11 (10 active + 1 hotspare)
1 TB, 7.2K RPM NL-SAS drives      0
Data Movers                       0
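The building-block arithmetic in both worked examples can be sketched in a few lines of Python. This is an illustrative sketch only: the per-block constants (8.2 steady-state IOPS per desktop, 12,300 IOPS per Data Mover, two Flash and 16 SAS drives per block, usable TB per block, and the hotspare counts) are taken from the tables referenced above and hard-coded here as assumptions.

```python
import math

def building_blocks(desktops, iops_per_desktop, concurrency=1.0,
                    block_iops=1000 * 8.2):
    """1000-desktop building blocks needed to absorb the steady-state IOPS."""
    return math.ceil(desktops * concurrency * iops_per_desktop / block_iops)

def data_movers(customer_iops, iops_per_dm=12_300):
    """Active NFS Data Movers plus one stand-by for high availability."""
    return math.ceil(customer_iops / iops_per_dm) + 1

# Customer 1: 1200 desktops at 20 IOPS each (24,000 IOPS total), NFS
print(building_blocks(1200, 20))   # 3 building blocks
print(data_movers(24_000))         # 3 Data Movers (2 active + 1 stand-by)

# Customer 2: 5000 desktops at 16 IOPS each, 50% concurrency, FC (no Data Movers)
blocks = building_blocks(5000, 16, concurrency=0.5)
flash = blocks * 2 + 1   # 2 Flash drives per block + 1 hotspare (per the example)
sas = blocks * 16 + 3    # 16 SAS drives per block + 3 hotspares (per the example)

# Task 3 capacity check: usable TB per block, including the 8 extra XenApp
# drives, for each candidate drive size (figures assumed from Table 5)
per_block_tb = {"300 GB": 3, "600 GB": 6}
need_tb = 5000 * (5 + 1) / 1000    # 5 GB desktop space + 1 GB vswap = 30 TB
drive = next(d for d, tb in per_block_tb.items() if tb * blocks >= need_tb)
print(blocks, flash, sas, drive)   # 5 11 83 600 GB
```

Running this reproduces the Customer 1 results (3 building blocks, 3 Data Movers) and the Customer 2 results (5 building blocks, 11 Flash drives, 83 SAS drives, 600 GB drives).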
Conclusion

Summary
The sizing guidelines in this white paper show how to choose the appropriate VNX series system and components for different VDI environments. The following tables show the recommended configurations for different VDI workloads. Drive counts are listed as Active/Hotspare/Total.

Table 18. Sizing summary for Citrix XenDesktop with MCS
No. of     VNX        Data Movers         SSD        15K SAS      7.2K NL-SAS
desktops   system     (Active+standby)    drives     drives       drives
500        VNX5300    2 (1+1)             2/1/3      20/1/21      16/1/17
1000       VNX5300    2 (1+1)             2/1/3      20/1/21      16/1/17
1500       VNX5300    2 (1+1)             6/1/7      40/2/42      32/2/34
2000       VNX5500    3 (2+1)             4/1/5      40/2/42      32/2/34
2500       VNX5500    3 (2+1)             8/1/9      60/2/62      48/2/50
3000       VNX5500    3 (2+1)             8/1/9      60/2/62      48/2/50
3500       VNX5700    4 (3+1)             10/1/11    80/3/83      64/3/67
4000       VNX5700    4 (3+1)             10/1/11    80/3/83      64/3/67
4500       VNX5700    4 (3+1)             14/1/15    100/4/104    80/3/83
5000       VNX7500    5 (4+1)             10/1/11    100/4/104    80/3/83
5500       VNX7500    5 (4+1)             14/1/15    120/4/124    96/4/100
6000       VNX7500    5 (4+1)             14/1/15    120/4/124    96/4/100
6500       VNX7500    6 (5+1)             18/1/19    140/5/145    112/4/116
7000       VNX7500    6 (5+1)             18/1/19    140/5/145    112/4/116
7500       VNX7500    6 (5+1)             22/1/23    160/6/166    128/5/133

Table 19. Sizing summary for VMware View
No. of     VNX        Data Movers         SSD        15K SAS      7.2K NL-SAS
desktops   system     (Active+standby)    drives     drives       drives
500        VNX5300    2 (1+1)             2/1/3      15/1/16      16/1/17
1000       VNX5300    2 (1+1)             2/1/3      15/1/16      16/1/17
1500       VNX5300    2 (1+1)             6/1/7      30/1/31      32/2/34
2000       VNX5500    3 (2+1)             4/1/5      30/1/31      32/2/34
2500       VNX5500    3 (2+1)             8/1/9      45/2/47      48/2/50
3000       VNX5500    3 (2+1)             8/1/9      45/2/47      48/2/50
3500       VNX5700    4 (3+1)             10/1/11    60/2/62      64/3/67
4000       VNX5700    4 (3+1)             10/1/11    60/2/62      64/3/67
4500       VNX5700    4 (3+1)             14/1/15    75/3/78      80/3/83
5000       VNX7500    5 (4+1)             10/1/11    75/3/78      80/3/83
5500       VNX7500    5 (4+1)             14/1/15    90/3/93      96/4/100
6000       VNX7500    5 (4+1)             14/1/15    90/3/93      96/4/100
6500       VNX7500    6 (5+1)             18/1/19    105/4/109    112/4/116
7000       VNX7500    6 (5+1)             18/1/19    105/4/109    112/4/116
7500       VNX7500    6 (5+1)             22/1/23    120/4/124    128/5/133

Table 20. Sizing summary for Citrix XenDesktop with PVS
No. of     VNX        Data Movers         SSD        15K SAS      7.2K NL-SAS
desktops   system     (Active+standby)    drives     drives       drives
500        VNX5300    2 (1+1)             2/1/3      16/1/17      16/1/17
1000       VNX5300    2 (1+1)             2/1/3      16/1/17      16/1/17
1500       VNX5300    2 (1+1)             6/1/7      32/2/34      32/2/34
2000       VNX5500    3 (2+1)             4/1/5      32/2/34      32/2/34
2500       VNX5500    3 (2+1)             8/1/9      48/2/50      48/2/50
3000       VNX5500    3 (2+1)             8/1/9      48/2/50      48/2/50
3500       VNX5700    4 (3+1)             10/1/11    64/3/67      64/3/67
4000       VNX5700    4 (3+1)             10/1/11    64/3/67      64/3/67
4500       VNX5700    4 (3+1)             14/1/15    80/3/83      80/3/83
5000       VNX7500    5 (4+1)             10/1/11    80/3/83      80/3/83
5500       VNX7500    5 (4+1)             14/1/15    96/4/100     96/4/100
6000       VNX7500    5 (4+1)             14/1/15    96/4/100     96/4/100
6500       VNX7500    6 (5+1)             18/1/19    112/4/116    112/4/116
7000       VNX7500    6 (5+1)             18/1/19    112/4/116    112/4/116
7500       VNX7500    6 (5+1)             22/1/23    128/5/133    128/5/133
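The platform selection in the summary tables can also be read off programmatically. The snippet below is a hypothetical helper, not an official sizing API: the thresholds are transcribed from Table 18 (Citrix XenDesktop with MCS) and would need their own lists for the VMware View and PVS tables.

```python
# Thresholds transcribed from Table 18; each entry is
# (max desktops covered, VNX system, Data Movers as "active+standby").
MCS_SUMMARY = [
    (1500, "VNX5300", "1+1"),
    (3000, "VNX5500", "2+1"),
    (4500, "VNX5700", "3+1"),
    (6000, "VNX7500", "4+1"),
    (7500, "VNX7500", "5+1"),
]

def recommend(desktops):
    """Smallest VNX platform from Table 18 that covers the desktop count."""
    for limit, system, movers in MCS_SUMMARY:
        if desktops <= limit:
            return system, movers
    raise ValueError("beyond the published sizing tables")

print(recommend(3000))   # ('VNX5500', '2+1')
print(recommend(5200))   # ('VNX7500', '4+1')
```

Because each table row covers a 500-desktop step, any count up to the row's limit maps to that row's platform and Data Mover configuration.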
References

EMC documents
The following documents, located on EMC Online Support, provide additional and relevant information. Access to these documents depends on your login credentials. If you do not have access to a document, contact your EMC representative:

EMC Infrastructure for VMware View 5.0, EMC VNX Series (NFS), VMware vSphere 5.0, VMware View 5.0, VMware View Persona Management, and VMware View Composer 2.7 — Reference Architecture
EMC Infrastructure for VMware View 5.0 — EMC VNX Series (NFS), VMware vSphere 5.0, VMware View 5.0, and VMware View Composer 2.7 — Reference Architecture
EMC Infrastructure for VMware View 5.0 — EMC VNX Series (NFS), VMware vSphere 5.0, VMware View 5.0, and VMware View Composer 2.7 — Proven Solutions Guide
EMC Performance Optimization for Microsoft Windows XP for the Virtual Desktop Infrastructure — Applied Best Practices
Deploying Microsoft Windows 7 Virtual Desktops with VMware View — Applied Best Practices Guide
EMC Infrastructure for Virtual Desktops Enabled by EMC VNX Series (FC), VMware vSphere 4.1, and Citrix XenDesktop 5 — Reference Architecture
EMC Infrastructure for Virtual Desktops Enabled by EMC VNX Series (FC), VMware vSphere 4.1, and Citrix XenDesktop 5 — Proven Solution Guide
EMC Infrastructure for Virtual Desktops Enabled by EMC VNX Series (NFS), Cisco UCS, VMware vSphere 4.1, and Citrix XenDesktop 5 — Reference Architecture
EMC Infrastructure for Citrix XenDesktop 5.5 (PVS) — EMC VNX Series (NFS), Cisco UCS, Citrix XenDesktop 5.5 (PVS), XenApp 6.5, and XenServer 6 — Reference Architecture
EMC Infrastructure for Citrix XenDesktop 5.5 (PVS) — EMC VNX Series (NFS), Cisco UCS, Citrix XenDesktop 5.5 (PVS), XenApp 6.5, and XenServer 6 — Proven Solution Guide
EMC Infrastructure for Citrix XenDesktop 5.5 — EMC VNX Series (NFS), Cisco UCS, Citrix XenDesktop 5.5, XenApp 6.5, and XenServer 6 — Reference Architecture
EMC Infrastructure for Citrix XenDesktop 5.5 — EMC VNX Series (NFS), Cisco UCS, Citrix XenDesktop 5.5, XenApp 6.5, and XenServer 6 — Proven Solutions Guide
EMC Infrastructure for Virtual Desktops Enabled by EMC VNX Series (NFS), VMware vSphere 4.1, and Citrix XenDesktop 5 — Proven Solution Guide