White Paper




STORAGE TIERING FOR VMWARE
ENVIRONMENTS DEPLOYED ON
EMC SYMMETRIX VMAX WITH ENGINUITY 5876
The use of FAST VP (Virtual Pools) in VMware environments




                   Abstract
                   As a business’s virtualization storage needs continue to expand,
                   the challenge of where to put data throughout its lifecycle is
                   ever present. With EMC’s extended Fully Automated Storage
                   Tiering with Virtual Pools (FAST VP) functionality, this problem is
                   addressed through the automation of data movement to the
                   right disk tier, at the right time. This white paper will
                   demonstrate how this technology can be used effectively in an
                   environment virtualized using VMware® technologies.

                   September 2012
Copyright © 2012 EMC Corporation. All Rights Reserved.

EMC believes the information in this publication is accurate as of
its publication date. The information is subject to change
without notice.

The information in this publication is provided “as is”. EMC
Corporation makes no representations or warranties of any kind
with respect to the information in this publication, and
specifically disclaims implied warranties of merchantability or
fitness for a particular purpose.

Use, copying, and distribution of any EMC software described in
this publication requires an applicable software license.

For the most up-to-date listing of EMC product names, see EMC
Corporation Trademarks on EMC.com.

VMware, ESX, ESXi, vMotion, and vSphere are registered
trademarks or trademarks of VMware, Inc. in the United States
and/or other jurisdictions. All other trademarks used herein are
the property of their respective owners.

Part Number h8101.4




               Storage Tiering for VMware Environments Deployed on   2
                           EMC Symmetrix VMAX with Enginuity 5876
Table of Contents
Executive summary.................................................................................................. 7
   Audience ............................................................................................................................ 8
   Terminology ....................................................................................................................... 8
Symmetrix VMAX using Enginuity 5876 .................................................................... 8
EMC Unisphere for VMAX ......................................................................................... 9
EMC Virtual Storage Integrator ............................................................................... 10
Oracle Applications ............................................................................................... 12
SwingBench .......................................................................................................... 12
Symmetrix Virtual Provisioning .............................................................................. 13
Federated Tiered Storage (FTS): Overview ............................................................... 14
Fully Automated Storage Tiering (FAST) .................................................................. 15
   FAST and Fully Automated Storage Tiering with Virtual Pools (FAST VP) ............................. 15
   FAST managed objects ..................................................................................................... 15
   FAST VP components ........................................................................................................ 16
FAST VP allocation by FAST Policy .......................................................................... 17
FAST VP SRDF coordination .................................................................................... 18
Working with Virtual LUN VP Mobility in VMware environments ............................... 18
   Manual tiering .................................................................................................................. 20
     Pinning a device in FAST/FAST VP ................................................................................. 22
   Changing disk and RAID type ............................................................................................ 27
FAST VP and Oracle Applications 12 ...................................................................... 33
   Applications Architecture ................................................................................................. 33
   Working with FAST VP and Oracle Applications on VMware infrastructure ......................... 34
   Oracle Applications Tablespace Model ............................................................................. 35
   Oracle Applications implementation................................................................................. 36
   Use case implementation ................................................................................................. 37
   Static tablespace placement ............................................................................................ 39
   Oracle Application deployment ........................................................................................ 39
   Hardware layout ............................................................................................................... 40
FAST VP configuration ............................................................................................ 40
      Configuring FAST VP ..................................................................................................... 44
Oracle Applications case study using FAST VP performance monitoring ................... 56
   Order Entry application ..................................................................................................... 59
   FAST VP Results ................................................................................................................ 64
   FAST VP, manual tiering, and real-world considerations.................................................... 66




Conclusion ............................................................................................................ 67
References ............................................................................................................ 68




List of Figures

Figure 1. Version 1.1 of Unisphere for VMAX .......................................................................................... 10
Figure 2. VSI 5.3 features ....................................................................................................................... 11
Figure 3. VSI 5.3 Storage Viewer feature................................................................................................. 12
Figure 4. Thin devices and thin pools containing data devices ............................................................... 13
Figure 5. FAST managed objects ............................................................................................................ 16
Figure 6. FAST VP components ............................................................................................................... 17
Figure 7. Thin LUN distribution across pools viewed through SYMCLI ..................................................... 20
Figure 8. Thin LUN distribution across pools viewed through Unisphere ................................................. 21
Figure 9. Thin LUN displayed in VSI ........................................................................................................ 21
Figure 10. Pinning a device in Unisphere................................................................................................ 23
Figure 11. Pinning a device through SYMCLI ........................................................................................... 24
Figure 12. Validating the migration ........................................................................................................ 25
Figure 13. Executing the migration ......................................................................................................... 25
Figure 14. Completing and terminating the migration ............................................................................. 26
Figure 15. The thin LUN reallocated to a single pool ............................................................................... 27
Figure 16. The reallocated LUN in Unisphere .......................................................................................... 27
Figure 17. Thin LUN 26B in a RAID 6 configuration .................................................................................. 28
Figure 18. Thin LUN 26B located in a SATA pool ..................................................................................... 29
Figure 19. FC_Pool thin pool containing the Fibre Channel disk .............................................................. 30
Figure 20. Validate the migration ........................................................................................................... 30
Figure 21. Query the migration session .................................................................................................. 31
Figure 22. Verify and terminate the migration ......................................................................................... 31
Figure 23. Thin LUN 26B located in the FC pool ...................................................................................... 32
Figure 24. Thin LUN 26B in a RAID 1 configuration .................................................................................. 33
Figure 25. Oracle Applications architecture ............................................................................................ 34
Figure 26. Promotion of the application from SATA to FC and EFD over time ............................................ 38
Figure 27. Physical/virtual environment diagram ................................................................................... 40
Figure 28. Enabling FAST VP using EMC Unisphere for VMAX .................................................................. 41
Figure 29. Determining the state of the FAST VP engine using SYMCLI .................................................... 42
Figure 30. Diskgroup summary .............................................................................................................. 42
Figure 31. Thin pools on the Symmetrix VMAX........................................................................................ 43
Figure 32. Storage tier listing ................................................................................................................. 44
Figure 33. Creating a storage tier in Unisphere ....................................................................................... 45
Figure 34. Storage group for FAST VP in EMC Unisphere ......................................................................... 46
Figure 35. Storage group for FAST VP as viewed from VMware ESXi using EMC VSI .................................. 46
Figure 36. Creation of a FAST VP policy through CLI ................................................................................ 47
Figure 37. Listing the FAST VP policy in CLI ............................................................................................. 48
Figure 38. Creating a FAST VP policy in Unisphere .................................................................................. 49
Figure 39. Associating a storage group to a FAST VP policy in CLI............................................................ 50
Figure 40. Associating a storage group to a FAST policy in Unisphere ..................................................... 51
Figure 41. FAST VP policy association with demand detail ...................................................................... 52
Figure 42. FAST VP policy management .................................................................................................. 53



Figure 43. FAST VP general settings in Unisphere ................................................................................... 54
Figure 44. Setting FAST VP performance and movement time windows ................................................... 55
Figure 45. Using symvm to translate Linux database mounts on the FASTDB VM .................................... 57
Figure 46. Virtual disk mapping in the EMC VSI Storage Viewer feature .................................................. 58
Figure 47. Thin device bound to the pool containing SATA drives ........................................................... 59
Figure 48. Total size of the Order Entry application ................................................................................. 60
Figure 49. Order Entry benchmark .......................................................................................................... 61
Figure 50. FAST VP move mode .............................................................................................................. 62
Figure 51. Initial track movement for the database thin device ............................................................... 63
Figure 52. Completed track movement for the database thin device ....................................................... 63
Figure 53. FAST policy demand usage ................................................................................................... 64
Figure 54. Mid-run of the Order Entry benchmark ................................................................................... 65
Figure 55. Transaction response time comparing pre-FAST VP and post-FAST VP movement ................... 66




Executive summary
            Unlike storage arrays of the past, today’s enterprise-class storage arrays contain
            multiple drive types and protection methodologies. This gives the storage
            administrator, server administrator, and application administrator the challenge of
            selecting the correct storage configuration, or storage class, for each application
            being deployed. The trend toward virtualizing the entire environment to optimize IT
            infrastructures often exacerbates the problem by consolidating multiple disparate
            applications on a small number of large devices. Given this challenge, it is not
            uncommon that a single storage type (such as Fibre Channel drives) best suited for
            the most demanding application, is selected for all virtual machine deployments,
            effectively assigning all applications, regardless of their performance requirements,
            to the same tier. This traditional approach is wasteful since all applications and data
            are not equally performance-critical to the business. Furthermore, within applications
            themselves, particularly those reliant upon databases, there is also the opportunity to
            further diversify the storage make-up. Making use of high-density low-cost SATA
             drives for less active applications or data, FC drives for moderately active data, and
             Enterprise Flash Drives for highly active data allows for efficient use of storage
            resources, reducing the overall cost and the number of drives necessary for the virtual
            infrastructure. This in turn also helps to reduce energy requirements and floor space,
            which are both cost-saving items to the business.
            To achieve this “tiered” storage approach in a proactive way for VMware
            environments it is possible to use Symmetrix® Enhanced Virtual LUN Technology to
            move devices between drive types and RAID protections seamlessly inside the
             storage array. Symmetrix Virtual LUN technology is nondisruptive to applications
             and transparent to the user. It preserves the devices’ identity and
            therefore there is no need to change anything in the virtual infrastructure, from
            VMware® ESX® hosts to virtual machines. Canonical names, file system mount
            points, volume manager settings, and even scripts do not need to be altered. It also
            preserves any TimeFinder® or Symmetrix Remote Data Facility (SRDF®) business
            continuity aspects even as the data migration takes place. In a very similar way, this
            approach to storage tiering can be automated using Fully Automated Storage Tiering,
             or FAST. FAST is available for thick devices as FAST DP (Disk Provisioning) and thin
             devices as FAST VP (Virtual Pools). 1 FAST and FAST VP both use policies to
            manage sets of devices and the allocation of their data on available storage tiers.
            Based on the policy guidance and the actual workload profile over time, the FAST
            controller will recommend or execute automatically the movement of the managed
            devices between the storage tiers, even at the sub-LUN level.
            This white paper describes a tiered storage architecture for an application running on
            VMware virtual machines in a VMware virtual infrastructure, and how volumes on that


1
 The term FAST is typically used in place of FAST DP. In addition, the engine and controller of the technology are
frequently referred to by the term FAST alone, though they also apply to FAST VP.




storage can be moved around nondisruptively using FAST VP technology, resulting in
the right data on the right storage tier at the right time.

Audience
This white paper is intended for VMware administrators, server administrators, and
storage administrators responsible for creating, managing, and using VMFS
datastores and RDMs, as well as their underlying storage devices, for their VMware
vSphere™ environments attached to a Symmetrix VMAX™ storage array running
Enginuity™ 5876. The white paper assumes the reader is familiar with Oracle
databases and applications, VMware environments, EMC Symmetrix, and the related
software.

Terminology
Term                    Definition
Device                  LUN, logical volume
Volume                  LUN, logical volume
Symmwin Disk Group      A collection of physical disks that have the same physical
                        characteristics
Unisphere               Unisphere for VMAX
SYMCLI/CLI              Solutions Enabler's Command Line Interface
Metavolume              A collection of Symmetrix devices that represent one device at the host
                        level

Acronym/Abbreviation    Definition
LUN                     Logical Unit Number
VLUN                    Virtual LUN
TDEV                    Symmetrix Thin Device
FAST VP                 Fully Automated Storage Tiering with Virtual Pools
SG                      Storage Group
RDM                     Raw Device Mapping
VMFS                    VMware Virtual Machine File System




Symmetrix VMAX using Enginuity 5876
Enginuity 5876 carries the extended and systematic feature development forward
from previous Symmetrix generations. This means all of the reliability, availability,
and serviceability features, all of the interoperability and host operating systems
coverage, and all of the application software capabilities developed by EMC and its
partners continue to perform productively and seamlessly even as underlying
technology is refreshed.




EMC Unisphere for VMAX
Beginning with Enginuity 5876, Symmetrix Management Console has been
transformed into EMC® Unisphere™ for VMAX™ (hereafter referred to simply as
Unisphere), which offers big-button navigation and streamlined operations to simplify
and reduce the time required to manage a data center. Unisphere for VMAX simplifies
storage management under a common framework, incorporating Symmetrix
Performance Analyzer which previously required a separate interface. You can use
Unisphere to:
•   Manage user accounts and roles
•   Perform configuration operations (create volumes, mask volumes, set
    Symmetrix attributes, set volume attributes, set port flags, and create
    SAVE volume pools)
•   Manage volumes (change volume configuration, set volume status, and
    create/dissolve meta volumes)
•   Manage Fully Automated Storage Tiering (FAST™, FAST VP)
•   Perform and monitor replication operations (TimeFinder®/Snap, TimeFinder/VP
    Snap, TimeFinder/Clone, Symmetrix Remote Data Facility (SRDF®), Open
    Replicator for Symmetrix (ORS))
•   Manage advanced Symmetrix features, such as:
    o Fully Automated Storage Tiering (FAST)
    o Fully Automated Storage Tiering for virtual pools (FAST VP)
    o Enhanced Virtual LUN Technology
    o Auto-provisioning Groups
    o Virtual Provisioning
    o Federated Live Migration
    o Federated Tiered Storage (FTS)
•   Monitor alerts
In addition, with the Performance monitoring option, Unisphere for VMAX provides
tools for performing analysis and historical trending of Symmetrix system
performance data. You can use the performance option to:
•   Monitor performance and capacity over time
•   Drill-down through data to investigate issues
•   View graphs detailing system performance
•   Set performance thresholds and alerts
•   View high frequency metrics in real time



•   Perform root cause analysis
•   View Symmetrix system heat maps
•   Execute scheduled and ongoing reports (queries), and export that data to a file
•   Utilize predefined dashboards for many of the system components
•   Customize your own dashboard templates
The new GUI dashboard is presented in Figure 1.




Figure 1. Version 1.1 of Unisphere for VMAX

Unisphere for VMAX, shown in the preceding figure, can be run on a number of
different kinds of open systems hosts, physical or virtual. Unisphere for VMAX is also
available as a virtual appliance for ESX version 4.0 (and later) in the VMware
infrastructure. For more details please visit Powerlink® at http://Powerlink.EMC.com.


EMC Virtual Storage Integrator
EMC Virtual Storage Integrator (VSI) for vSphere Client version 5.x provides multiple
feature sets including: Storage Viewer (SV), Path Management, Unified Storage




Management, and SRDF SRA Utilities. Storage Viewer functionality extends the
VMware vSphere Client to facilitate the discovery and identification of EMC
Symmetrix, VPLEX™, CLARiiON®, Isilon®, VNX® and Celerra® storage devices that are
allocated to VMware ESX/ESXi™ hosts and virtual machines. Unified Storage
Management simplifies the provisioning of Symmetrix VMAX virtual pooled storage
for data centers, ESX servers, clusters, and resource pools. Path Management allows
the user to control how datastores are accessed, while the SRA Utilities provide a
framework for working with the SRDF SRA adapter in VMware vCenter Site Recovery
Manager environments. These features are shown installed in Figure 2.




Figure 2. VSI 5.3 features

VSI for vSphere Client presents the underlying storage details to the virtual datacenter
administrator, merging the data of several different storage mapping tools into a few,
seamless vSphere Client views. VSI enables you to resolve the underlying storage of
Virtual Machine File System (VMFS) and Network File System (NFS) datastores and
virtual disks, as well as raw device mappings (RDM). In addition, you are presented
with lists of storage arrays and devices that are accessible to the ESX(i) hosts in the
virtual datacenter.
One of these features, the Storage Viewer, is displayed in Figure 3, which
demonstrates how to obtain detailed information about a LUN.




Figure 3. VSI 5.3 Storage Viewer feature


Oracle Applications
Oracle Applications is a tightly integrated family of Financial, ERP, CRM, and
manufacturing application products that share a common look and feel. Using the
menus and windows of Oracle Applications, users have access to all the functions
they need to manage their business information. Oracle Applications is highly
responsive to users, supporting a multi-window GUI that provides users with full
point-and-click capability. In addition, Oracle Applications offers many other features
such as field-to-field validation and a list of values to help users simplify data entry
and maintain the integrity of the data they enter.


SwingBench
SwingBench© is a GUI tool developed in Java by Dominic Giles of the Oracle Database
Solutions Group. The tool is designed to generate a simulated multi-user workload
and provide a graphical indication of system throughput and response times.
Benchmarks provide a good substitute for what would otherwise be the daunting task
of gathering hundreds of application users and training them to perform a
preconfigured set of tasks.




There are four benchmarks included with SwingBench: Order Entry (jdbc), Order Entry
(PL/SQL), Calling Circle, and a set of PL/SQL stubs that allow users to create their own
benchmark.


Symmetrix Virtual Provisioning
Symmetrix Virtual Provisioning™, introduced with Enginuity 5773, provides a new type
of host-accessible device, called a thin device, that can be used in many of the same
ways that regular, host-accessible Symmetrix devices have traditionally been used.
Unlike regular Symmetrix devices, thin devices do not need to have physical storage
completely allocated at the time the devices are created and presented to a host. The
physical storage that is used to supply drive space for a thin device comes from a
shared thin pool that has been associated with the thin device. A thin pool consists
of internal Symmetrix devices, called data devices, that are dedicated to providing
the actual physical storage used by thin devices.
first created, thin devices are not associated with any particular thin pool. An
operation referred to as “binding” must be performed to associate a thin device with
a thin pool.
Figure 4 depicts the relationships between thin devices and their associated thin
pools. There are nine thin devices associated with thin pool A and three thin devices
associated with thin pool B.




Figure 4. Thin devices and thin pools containing data devices




When a write is performed to a portion of the thin device, the Symmetrix allocates a
minimum allotment of physical storage from the pool and maps that storage to a
region of the thin device including the area targeted by the write. The storage
allocation operations are performed in small units of storage called a “thin extent.” A
round-robin mechanism is used to balance the allocation of thin extents across all of
the data devices in the pool that are enabled and that have remaining unused
capacity. A thin extent size is comprised of twelve 64 KB tracks (768 KB). That means
that the initial bind of a thin device to a pool causes one extent, or 12 tracks, to be
allocated per thin device.
When a read is performed on a thin device, the data being read is retrieved from the
appropriate data device in the storage pool to which the thin device is bound. Reads
directed to an area of a thin device that has not been mapped do not trigger
allocation operations; reading an unmapped block simply returns a block in which
every byte is zero. When more storage is required to
service existing or future thin devices, data devices can be added to existing thin
pools. New thin devices can also be created and associated with existing thin pools.
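The allocate-on-write and zero-on-read behavior described above can be illustrated with a small model. The following Python sketch is illustrative only (the class names are invented, and real extent mapping happens inside Enginuity); it captures the 768 KB thin extent size, the round-robin spread of extents across data devices, allocation on first write, and zero-filled reads of unmapped areas:

```python
from itertools import cycle

TRACK_KB = 64
TRACKS_PER_EXTENT = 12
EXTENT_KB = TRACK_KB * TRACKS_PER_EXTENT  # 768 KB, per the text above


class ThinPool:
    """A pool of data devices; extents are handed out round-robin."""
    def __init__(self, data_devices):
        self._rr = cycle(data_devices)   # round-robin over enabled devices

    def allocate_extent(self):
        return next(self._rr)            # data device backing the new extent


class ThinDevice:
    """Host-visible thin device: physical storage is mapped only on first write."""
    def __init__(self, pool):
        self.pool = pool
        self.extents = {}                # extent index -> [data device, data]

    def write(self, extent_idx, data):
        if extent_idx not in self.extents:
            dev = self.pool.allocate_extent()   # allocate-on-write
            self.extents[extent_idx] = [dev, data]
        else:
            self.extents[extent_idx][1] = data

    def read(self, extent_idx):
        if extent_idx not in self.extents:
            return b"\x00"               # unmapped reads return zeros
        return self.extents[extent_idx][1]
```

With two data devices in the pool, successive first writes to new extents land on alternating data devices, while a read of a never-written extent triggers no allocation at all.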
Prior to Enginuity 5875, a thin device could only be bound to, and have extents
allocated in, a single thin pool. This thin pool can, in turn, only contain Symmetrix
data devices of a single RAID protection type, and a single drive technology (and
single rotation speed in the case of FC and SATA drives). Starting with Enginuity
5875, a thin device will still only be considered bound to a single thin pool but may
have extents allocated in multiple pools within a single Symmetrix. A thin device may
also be moved to a different thin pool, without any loss of data or data access, by
using Virtual LUN VP Mobility. Virtual LUN VP Mobility provides the ability to migrate a
thin device from one thin pool to another. If the LUN to move is part of a FAST VP
Policy, it may only be moved to one of the thin pools in the policy.
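That last constraint can be stated as a simple predicate. The sketch below is purely illustrative (the function and pool names are invented, not Solutions Enabler objects): a thin device under a FAST VP policy may only be migrated to a thin pool belonging to that policy, while an unmanaged device may target any pool.

```python
def valid_vlun_vp_target(policy_pools, target_pool):
    """Check whether a Virtual LUN VP move to target_pool is permitted.

    policy_pools: the set of thin pool names in the FAST VP policy managing
    the device, or None if the device is not under a FAST VP policy.
    """
    if policy_pools is None:
        return True                      # unmanaged devices may move anywhere
    return target_pool in policy_pools   # managed devices stay within the policy
```

For example, a device under a three-tier policy can move freely between its EFD, FC, and SATA pools, but a move to a pool outside the policy would be rejected.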


Federated Tiered Storage (FTS): Overview
Introduced with Enginuity 5876, Federated Tiered Storage (FTS) allows LUNs that exist
on external arrays to be used to provide physical storage for Symmetrix VMAX arrays.
The external LUNs can be used as raw storage space for the creation of Symmetrix
devices in the same way internal Symmetrix physical drives are used. These devices
are referred to as eDisks. Data on the external LUNs can also be preserved and
accessed through Symmetrix devices. This allows the use of Symmetrix Enginuity
functionality, such as local replication, remote replication, storage tiering, data
management, and data migration with data that resides on external arrays.




Fully Automated Storage Tiering (FAST)
Fully Automated Storage Tiering (FAST) automates the identification of data volumes
for the purposes of relocating application data across different performance/capacity
tiers within an array, or to an external array using Federated Tiered Storage (FTS).2
The primary benefits of FAST include:
•   Eliminating the need to manually re-tier applications when performance objectives
    change over time
•   Automating the process of identifying data that can benefit from Enterprise Flash
    Drives or that can be kept on higher-capacity, less-expensive SATA drives without
    impacting performance
•   Improving application performance at the same cost, or providing the same
    application performance at lower cost. Cost is defined as acquisition (both
    hardware and software), space/energy, and management expense
•   Optimizing and prioritizing business applications, allowing customers to
    dynamically allocate resources within a single array
•   Delivering greater flexibility in meeting different price/performance ratios
    throughout the lifecycle of the information stored

FAST and Fully Automated Storage Tiering with Virtual Pools (FAST VP)
EMC Symmetrix FAST (FAST DP) and FAST VP automate the identification of data
volumes for the purposes of relocating application data across different
performance/capacity tiers within an array. FAST operates on standard Symmetrix
devices. Data movements executed between tiers are performed at the full volume
level. FAST VP operates on virtual devices. As such, data movement execution can be
performed at the sub-LUN level, and a single thin device may have extents allocated
across multiple thin pools within the array. Because FAST DP and FAST VP support
different device types – standard and virtual, respectively – they both can operate
simultaneously within a single array.3 Aside from some shared configuration
parameters, the management and operation of each are separate.

FAST managed objects
There are three main elements related to the use of both FAST and FAST VP on
Symmetrix VMAX, graphically depicted in Figure 5. These are:
•   Storage tier — A shared resource with common technologies

2 Other than the brief overview provided, Federated Tiered Storage will not be addressed in this particular white paper. For
more information on FTS, refer to the Design and Implementation Best Practices for EMC Symmetrix Federated Tiered Storage
(FTS) technical note available at http://Powerlink.EMC.com.
3 This holds true for all the Symmetrix family except the VMAXe/VMAX 10K, which only supports FAST VP, being a completely
thin-provisioned array.




•   FAST policy — Manages a set of tier usage rules that provide guidelines for data
    placement and movement across Symmetrix tiers to achieve service levels for
    one or more storage groups
•   Storage group — A logical grouping of devices for common management




Figure 5. FAST managed objects

Each of the three managed objects can be created and managed by using either
Unisphere for VMAX (Unisphere) or the Solutions Enabler Command Line Interface
(SYMCLI).
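As a mental model, the three managed objects can be sketched as simple records. The field names below are illustrative, not the actual Solutions Enabler object model; the final check reflects the general FAST VP rule that a policy's tier usage limits must total at least 100 percent so the storage group's capacity always fits somewhere.

```python
from dataclasses import dataclass, field

@dataclass
class Tier:
    name: str
    technology: str          # e.g. "EFD", "FC", "SATA"
    protection: str          # e.g. "RAID-5", "RAID-1", "RAID-6"

@dataclass
class Policy:
    name: str
    # tier name -> maximum % of a storage group's capacity allowed on that tier
    tier_limits: dict = field(default_factory=dict)

@dataclass
class StorageGroup:
    name: str
    devices: list = field(default_factory=list)
    policy: str = ""         # name of the associated FAST policy

efd = Tier("EFD_Tier", "EFD", "RAID-5")
gold = Policy("Gold", {"EFD_Tier": 10, "FC_Tier": 50, "SATA_Tier": 100})
sg = StorageGroup("Oracle_SG", devices=["017B"], policy="Gold")

# Tier limits must total at least 100% so the group always fits somewhere.
assert sum(gold.tier_limits.values()) >= 100
```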

FAST VP components
There are two components of FAST VP – the FAST controller and the Symmetrix
microcode or Enginuity.
The FAST controller is a service that runs on the Symmetrix VMAX service processor.
The Symmetrix microcode is a part of the Enginuity operating environment that
controls components within the array. When FAST VP is active, both components
participate in the execution of two algorithms – the intelligent tiering algorithm and
the allocation compliance algorithm – to determine appropriate data placement.
The intelligent tiering algorithm uses performance data collected by the microcode, as
well as supporting calculations performed by the FAST controller, to issue data
movement requests to the Virtual LUN (VLUN) VP data movement engine.
The allocation compliance algorithm enforces the upper limits of storage capacity that
can be used in each tier by a given storage group by also issuing data movement
requests to the Virtual LUN (VLUN) VP data movement engine to satisfy the capacity
compliance.
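The capacity-compliance idea can be illustrated with a rough sketch (a hypothetical helper, not EMC code): the amount a storage group holds on a tier beyond its policy's upper limit is what generates the movement requests.

```python
def compliance_excess_gb(used_on_tier_gb, tier_limit_pct, group_capacity_gb):
    """GB the compliance algorithm would need to move off a tier so the
    storage group respects its per-tier upper limit (illustrative only)."""
    allowed_gb = group_capacity_gb * tier_limit_pct / 100.0
    return max(0.0, used_on_tier_gb - allowed_gb)

# A 1,000 GB group limited to 10% on the EFD tier but holding 150 GB
# there is 50 GB out of compliance.
assert compliance_excess_gb(150, 10, 1000) == 50.0
```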
Performance time windows can be defined to specify when the FAST controller should
collect performance data, upon which analysis is performed to determine the
appropriate tier for devices. By default, this will occur 24 hours a day. Defined data
movement windows determine when to execute the data movements necessary to
move data between tiers. Data movements performed by the microcode are achieved
by moving allocated extents between tiers. The size of data movement can be as
small as 768 KB, representing a single allocated thin device extent, but more typically
will be an entire extent group, which is 7,680 KB in size (10 thin extents).
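The movement granularity quoted above works out as a quick calculation (illustrative only; `extent_groups_for` is a hypothetical helper, not a SYMCLI function):

```python
EXTENT_KB = 768                      # one thin device extent
EXTENTS_PER_GROUP = 10               # extents in an extent group
GROUP_KB = EXTENT_KB * EXTENTS_PER_GROUP

def extent_groups_for(allocated_gb):
    """Whole extent groups covering an allocation (ceiling division)."""
    allocated_kb = allocated_gb * 1024 * 1024
    return -(-allocated_kb // GROUP_KB)

assert GROUP_KB == 7680              # 7,680 KB per extent group
assert extent_groups_for(1) == 137   # ~137 groups per allocated GB
```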




FAST VP has two modes of operation: Automatic and Off. When operating in Automatic
mode, data analysis and data movements will occur continuously during the defined
data movement windows. In Off mode, performance statistics will continue to be
collected, but no data analysis or data movements will take place.
Figure 6 shows the FAST controller operation.




Figure 6. FAST VP components

Note: For more information on FAST VP specifically please see the technical note FAST
VP for EMC Symmetrix VMAX Theory and Best Practices for Planning and Performance
available at http://Powerlink.EMC.com.


FAST VP allocation by FAST Policy
A new feature for FAST VP in 5876 is the ability for a device to allocate new extents
from any thin pool participating in the FAST VP Policy. When this feature is enabled,
FAST VP will attempt to allocate new extents in the most appropriate tier, based upon
performance metrics. If those performance metrics are unavailable it will default to
allocating in the pool to which the device is bound. If, however, the chosen pool is




full, regardless of performance metrics, then FAST VP will allocate from one of the
other thin pools in the policy. As long as there is space available in one of the thin
pools, new extent allocations will be successful.
This new feature is enabled at the Symmetrix array level and applies to all devices
managed by FAST VP. The feature cannot, therefore, be applied to some FAST VP
policies and not others. By default it is disabled and any new allocations will come
from the pool to which the device is bound.
A pinned device is not considered to have performance metrics available and
therefore new allocations will be done in the pool to which the device is bound.
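The allocation behavior described above can be summarized in a simplified Python model (hypothetical names, not the actual Enginuity logic):

```python
def allocation_pool(feature_enabled, metrics_available, best_pool,
                    bound_pool, free_space):
    """Pick the thin pool for a new extent allocation. free_space maps
    each pool in the FAST VP policy to whether it has space (sketch)."""
    # Feature off, no metrics, or a pinned device: use the bound pool.
    candidate = best_pool if (feature_enabled and metrics_available) else bound_pool
    if free_space.get(candidate):
        return candidate
    # Chosen pool is full: any other policy pool with space will do.
    for pool, has_space in free_space.items():
        if has_space:
            return pool
    return None   # every pool in the policy is full; allocation fails

pools = {"EFD_Pool": False, "FC_Pool": True, "SATA_Pool": True}
assert allocation_pool(True, True, "EFD_Pool", "SATA_Pool", pools) == "FC_Pool"
assert allocation_pool(False, True, "EFD_Pool", "SATA_Pool", pools) == "SATA_Pool"
```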


FAST VP SRDF coordination
The use of FAST VP with SRDF devices is fully supported; however, FAST VP operates
within a single array and therefore will only affect the RDF devices on that array.
Previously there was no coordination of data movement between the two sides of an
RDF pair: each device's extents would move according to the manner in which they
were accessed on that array, source or target.
For instance, an R1 device will typically be subject to a read/write workload, while the
R2 will only experience the writes that are propagated across the link from the R1.
Because the reads to the R1 are not propagated to the R2, FAST VP on the R2 side will
make its decisions based solely on the writes and therefore the R2 data will likely not
be moved to the same tiers, in the same amounts, as on the R1.
To rectify this problem, EMC introduced FAST VP SRDF coordination in 5876. FAST VP
SRDF coordination allows the R1 performance metrics to be transmitted across the
link and used by the FAST VP engine on the R2 array to make promotion and demotion
decisions.
FAST VP SRDF coordination is enabled or disabled on the storage group that is
associated with the FAST VP policy. The default state is disabled.
FAST VP SRDF coordination is supported for single and concurrent SRDF pairings (R1
and R11 devices) in any mode of operation: synchronous, asynchronous, or adaptive
copy. FAST VP SRDF coordination is not supported for SRDF/Star, SRDF/EDP, or
Cascaded SRDF, including R21 and R22 devices.
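The effect of coordination on the R2's view of the workload can be shown with a toy calculation (hypothetical function and numbers; the actual metrics FAST VP exchanges across the link are internal):

```python
def r2_visible_workload(r2_writes, r1_reads=0, coordination=False):
    """I/O rate the R2-side FAST VP engine bases tiering decisions on.
    Without coordination it sees only the propagated writes; with SRDF
    coordination the R1's read workload is transmitted and counted too."""
    return r2_writes + (r1_reads if coordination else 0)

# A read-heavy workload: 900 reads and 100 writes per interval on the R1.
assert r2_visible_workload(100) == 100                        # writes only
assert r2_visible_workload(100, r1_reads=900, coordination=True) == 1000
```

Without coordination the R2 sees only a tenth of the true workload in this example, which is why its extents would land on lower tiers than the R1's.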


Working with Virtual LUN VP Mobility in VMware environments
Symmetrix Virtual LUN technology enables the seamless movement of volumes within
a Symmetrix without disrupting hosts, applications, or replication sessions. Earlier
versions of Virtual LUN permitted the relocation of fully provisioned (thick) FBA and
CKD devices across drive types (capacity or rotational speed) and RAID protection
types. VLUN VP
provides “thin-to-thin” mobility, enabling users to meet tiered storage requirements
by migrating thin FBA LUNs between virtual pools in the same array. Virtual LUN VP
Mobility gives administrators the option to “re-tier” a thin volume or set of thin



volumes by moving them between thin pools in a given FAST or FAST VP
configuration. This manual “override” option helps FAST/FAST VP users respond
rapidly to changing performance requirements or unexpected events.
Virtual LUN VP (VLUN) migrations are session-based – each session may contain
multiple devices to be migrated at the same time. There may also be multiple
concurrent migration sessions. At the time of execution of a migration, a migration
session name is specified. This session name is subsequently used for monitoring
and managing the migration.
While an entire thin device will be specified for migration, only thin device extents
that are allocated will be relocated. Thin device extents that have been allocated, but
not written to (for example, pre-allocated tracks), will be relocated but will not cause
any actual data to be copied. New extent allocations that occur as a result of a host
write to the thin device during the migration will be satisfied from the migration target
pool.
When using VLUN VP mobility with FAST VP, the destination pool must be part of the
FAST VP policy. While a VLUN migration is active, FAST VP will not attempt to make
any changes. Once the migration is complete, however, all the tracks will be
available for re-tiering. To prevent movements post-migration, the relocated device(s)
can be pinned.
The advances in VLUN enable customers to move Symmetrix thin devices from one
thin pool to another thin pool on the same Symmetrix without disrupting user
applications and with minimal impact to host I/O. Users may move thin devices
between thin pools to:
•   Change the disk media on which the thin devices are stored
•   Change the thin device underlying RAID protection level
•   Consolidate a thin device that was managed by FAST VP to a single thin pool
•   Move all extents from a thin device that are in one thin pool to another thin pool
Beginning with 5876, VLUN VP Mobility users have the option to move only part of the
thin device from one source pool to one destination pool. This feature could be very
useful, for instance, in an environment where a subset of financial data is heavily
accessed each month for reporting purposes. If all that financial data is stored on a
particular LUN under FAST VP control, over the course of the month the data will most
likely be re-tiered up to EFD due to access patterns. Once the month’s reporting is
complete, however, that data is now aged, and the next month’s data is paramount.
Rather than wait for FAST VP to down-tier the data, through the up-tiering of the new
month’s data, the user can simply move all tracks from that LUN from EFD down to FC
or SATA, freeing up all the EFD space for promotion of this month’s data.
In a VMware environment, a customer may have any number of use cases for VLUN.
For instance, if a customer elects to not use FAST VP, manual tiering is achievable
through VLUN. A heavily used datastore residing on SATA drives may require the
ability to provide improved performance. That thin device underlying the datastore




could be manually moved to FC or EFD. Conversely, a datastore residing on FC that
houses data needing to be archived could be moved to a SATA device which,
although a lower-performing disk tier, has a much lower cost per GB.
In a FAST VP environment, customers may wish to circumvent the automatic process
in cases where they know all the data on a thin device has changed its function.
Take, for instance, the previous example of archiving. A datastore that contains
information that has now been designated as archive could be removed from FAST VP
control and then migrated to the thin pool composed of the disk technology most
suited for archived data, SATA.
Following are two examples of using VLUN in a VMware environment:
•   Manual tiering
•   Changing disk and RAID type

Manual tiering
Two screenshots, Figure 7 (SYMCLI, or CLI) and Figure 8 (the Unisphere GUI), show
the distribution of a thin device, or TDEV, across three thin pools representing three
different disk technologies, captured at different points during track movement.




Figure 7. Thin LUN distribution across pools viewed through SYMCLI




Figure 8. Thin LUN distribution across pools viewed through Unisphere

This TDEV is currently under FAST VP control and is presented to an ESXi cluster (Figure 9).




Figure 9. Thin LUN displayed in VSI

It has been determined that the data on this TDEV needs to be archived, and therefore
the desire is to have all of it placed on SATA technology, both because the
performance requirements match that disk type and to reduce the cost to the
business of keeping this data. Because of the archiving requirement, it will not be
necessary to keep this TDEV under FAST VP control going forward; however, the
business cannot rule out that future developments will change the requirements for
the data. Also, removing this TDEV from FAST VP control would require removing the
policy associated with its storage group. So rather than remove the




existing FAST VP policy, the user can prevent the FAST engine from looking at the
device’s statistics and making changes by a simple process known as “pinning.” So
before beginning the VLUN migration of the data back to the SATA pool, the device
should be pinned.

Pinning a device in FAST/FAST VP
Pinning a device can be done in Unisphere or through the CLI. In Unisphere, shown in
Figure 10, one navigates through the array/storage/volumes path. The user then
highlights the device and clicks the double-arrow icon on the bottom menu. From the
pop-up menu, the user selects “Pin”. By pinning the device there is no concern that
once the migration completes, the FAST engine will recommence movement of the
extent groups belonging to that device. Note that even if a device is not pinned prior
to migration, the FAST engine will not attempt any movements while the device is
under migration; however, if it is not pinned, data movements may begin again
immediately following termination of the migration session.




Figure 10. Pinning a device in Unisphere
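The movement rules just described reduce to a simple predicate (an illustrative model of the behavior described above, not SYMAPI logic verbatim):

```python
def fast_may_move_extents(pinned, migration_active):
    """FAST VP never moves a device's extents while a VLUN migration is
    active, and never moves a pinned device's extents. An unpinned
    device becomes movable again as soon as the migration session is
    terminated (sketch of the rules above)."""
    return not pinned and not migration_active

assert fast_may_move_extents(pinned=False, migration_active=True) is False
assert fast_may_move_extents(pinned=True, migration_active=False) is False
assert fast_may_move_extents(pinned=False, migration_active=False) is True
```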




For the CLI, use symdev to pin the device as in Figure 11:




Figure 11. Pinning a device through SYMCLI

Once the device is pinned, the migration can begin. VLUN migrations, just like thick
LUN migrations, use the SYMCLI command symmigrate. For a thin device migration,
a target pool needs to be supplied. Start by validating the migration. Although
this is an optional step, it is recommended to ensure the task is permitted. Create a
text file that contains the device(s) that are to be migrated back to the single pool.
For this migration the thin pool is named SATA_Pool. Note that even though this
example is moving data for device 17B back to the pool to which it is bound, it is




possible to move the device to a different pool and symmigrate will bind the device
to that pool, completely transparent to the hosts accessing the device. Recall,
however, if that device is under FAST VP control, only a thin pool in the policy can be
the migration target pool. The contents of the text file used to perform the migration
are shown in Figure 12. The figure also shows the SYMCLI command to validate the
proposed migration.




Figure 12. Validating the migration

Once validated, the migration can start, as shown in Figure 13. While the migration is
in progress, the session can be queried, which is also shown.




Figure 13. Executing the migration




Once the migration is complete, the status will change from “SyncInProg” to
“Migrated”. When the session is in the Migrated state, it needs to be terminated to
end the VLUN migration, as seen in Figure 14.




Figure 14. Completing and terminating the migration
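The session states seen in this walkthrough can be modeled as a small state machine. State names follow the symmigrate output shown; the transitions here are a simplification of the full session lifecycle:

```python
# validate -> establish (SyncInProg) -> Migrated -> terminate
TRANSITIONS = {
    ("SyncInProg", "copy_complete"): "Migrated",
    ("Migrated", "terminate"): "Terminated",
}

def advance(state, event):
    # Unknown or invalid events leave the session state unchanged.
    return TRANSITIONS.get((state, event), state)

state = "SyncInProg"
state = advance(state, "terminate")      # cannot terminate mid-copy
assert state == "SyncInProg"
state = advance(state, "copy_complete")
assert state == "Migrated"
state = advance(state, "terminate")
assert state == "Terminated"
```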

Viewing the TDEV now with SYMCLI in Figure 15 or with Unisphere in Figure 16, one
sees that all data has been returned to the SATA_Pool thin pool.




Figure 15. The thin LUN reallocated to a single pool




Figure 16. The reallocated LUN in Unisphere

Changing disk and RAID type
This example will show how a TDEV can be moved from one type of disk technology
and RAID configuration to another disk technology and RAID configuration. One




reason a customer might perform this type of migration is for a device not under FAST
VP control whose business requirements dictate a change in performance
characteristics for the application(s) residing on it. In this example the
migration will change the tier of the TDEV from a SATA RAID 6 thin pool to an FC RAID
1 thin pool. Figure 17 and Figure 18 show the device, 26B, residing on SATA disk
in a RAID 6 configuration, bound to thin pool SATA_Pool.




Figure 17. Thin LUN 26B in a RAID 6 configuration




Figure 18. Thin LUN 26B located in a SATA pool

The TDEV will now be migrated from thin pool SATA_Pool to FC_Pool, which resides
on FC technology, as seen in Figure 19.




Figure 19. FC_Pool thin pool containing the Fibre Channel disk

First, the device for the migration is validated as seen in Figure 20.




Figure 20. Validate the migration

Once the validation completes successfully, the migration can follow and the process
can be queried for status as demonstrated in Figure 21.




Figure 21. Query the migration session

In this case, with the device having only part of its 100 GB allocated, the migration
completes quickly. If there is any question as to whether the session is complete, run
the SYMCLI verify action first to confirm the session is in the Migrated state, then
terminate it as in Figure 22.




Figure 22. Verify and terminate the migration




Recall that all this migration activity has been transparent to the user and
nondisruptive to the application. If one now views the configuration of device 26B in
Unisphere, as highlighted in Figure 23, it shows that it indeed has changed RAID
configuration and disk technology.




Figure 23. Thin LUN 26B located in the FC pool

By running a refresh in EMC’s Virtual Storage Integrator as in Figure 24, one can see
that the thin device reflects the new configuration.




Figure 24. Thin LUN 26B in a RAID 1 configuration


FAST VP and Oracle Applications 12
Applications Architecture
The Oracle Applications Architecture is a framework for multi-tiered, distributed
computing that supports Oracle Applications products. In this model, various servers
or services are distributed among three levels, or tiers.
A tier is a logical grouping of services, potentially spread across more than one
physical or virtual machine. The three-tier architecture that comprises an Oracle E-
Business Suite installation is made up of the database tier, which supports and
manages the Oracle database; the application tier, which supports and manages the
various Applications components, and is sometimes known as the middle tier; and
the desktop tier, which provides the user interface through an add-on component to a
standard web browser.
The simplest architecture for Oracle Applications is to have all tiers, except the
desktop tier, installed on a single server. This configuration might be acceptable in a
development environment, but for production environments scaling would quickly



become an issue. To mimic a more realistic production environment, therefore, the
FAST VP testing environment is built with separate application and database tiers,
as shown in Figure 25. A third desktop
tier houses the SwingBench application, representing the users accessing the
system.




Figure 25. Oracle Applications architecture



Working with FAST VP and Oracle Applications on VMware infrastructure
As already mentioned, the diversity of Oracle Applications, with hundreds of different
modules within a single product, makes deploying them appropriately on the right
tier of storage a daunting task. Implementing them in a VMware environment that
utilizes FAST VP will demonstrate how a customer can achieve proper performance
and cost savings at the same time. For this study, the
latest Oracle Applications release 12 was installed and configured. There are a few
benefits to using the latest release. First, Oracle pre-packages release 12 with
version 11g of the Oracle database. Version 11g is Oracle’s latest database release




and represents a significant advancement over 10g in performance and functionality.
Second, Oracle has now divorced itself from the practice of having two tablespaces,
and hence at least two datafiles, per application. Prior to release 12, each
Applications module had its own set of tablespaces and datafiles, one for the data
and one for the index. With over 200 schemas, managing a database of over 400
tablespaces and datafiles was, and is, a sizable undertaking. The new approach that
Oracle uses in release 12 is called the Oracle Applications Tablespace Model, or
OATM.

Oracle Applications Tablespace Model
Oracle Applications release 12 utilizes as the standard a modern infrastructure for
tablespace management, the Oracle Applications Tablespace Model (OATM). The
OATM is similar to the traditional model in retaining the system, undo, and temporary
tablespaces. The key difference is that Applications products in an OATM
environment share a much smaller number of tablespaces, rather than having their
own dedicated tablespaces.
Applications schema objects are allocated to the shared tablespaces based on two
main factors: the type of data they contain, and I/O characteristics such as size, life
span, access methods, and locking granularity. For example, tables that contain seed
data are allocated to a different tablespace from the tables that contain transactional
data. In addition, while most indexes are held in the same tablespace as the base
table, indexes on transaction tables are held in a single tablespace dedicated to such
indexes.
The OATM provides a variety of benefits, summarized in the list below and discussed
in more detail later:
•   Simplifies maintenance and recovery by using far fewer tablespaces than the
    older model
•   Makes best use of the restricted number of raw devices available in Oracle Real
    Applications Cluster (Oracle RAC) and other environments, where every
    tablespace requires its own raw device
•   Utilizes locally managed tablespaces, enabling more precise control over unused
    space and hence reducing fragmentation
•   Takes advantage of automatic segment space management, eliminating the need
    for manual space management tasks
•   Increases block-packing compared to the older model, reducing the overall
    number of buffer gets and improving runtime performance
•   Maximizes usefulness of wide disk stripe configurations
The OATM uses locally managed tablespaces, which enables extent4 sizes either to
be determined automatically (autoallocate), or for all extents to be made the same,

4 An extent is a set of contiguous blocks allocated in the database (in this case the datafile associated with the tablespace).




user-specified size (uniform). This choice of extent management types means that
locally managed tablespaces offer greater flexibility than the dictionary-managed
tablespaces used in the traditional tablespace model. However, when using uniform
extents with locally managed tablespaces, the extent size must be chosen with care:
Too small a size can have an adverse effect on space management and performance.
A further benefit of locally managed tablespaces, and hence use of OATM, is the
introduction of automatic segment space management, a simpler and more efficient
way of managing space within a segment. It can require more space but eliminates
the need for traditional manual segment space management tasks, such as specifying
and tuning schema object storage parameters like PCTUSED. This and related
storage parameters are only used to determine space allocation for objects in
dictionary-managed tablespaces, and have no meaning in the context of locally
managed tablespaces.

Oracle Applications implementation
A customer implementation of Oracle Applications is not a quick process. The
installation itself is only the first part of what can be an endeavor lasting many
months or longer. Although there are almost 200 application modules in Oracle
Applications, customers rarely, if ever, use all of them. They use a selection of them,
or perhaps a bundle such as Financials or CRM. These modules are then typically
implemented in a phased approach. The transition from an existing
applications system or implementation of a new system takes time. How that system
eventually will be used, and more importantly how that database will be accessed,
presents a real challenge for the system administrator and database administrator.
These individuals are tasked with providing the right performance for the right
application at the right price. In other words, both performance optimization and cost
optimization are extremely important to them; however, obtaining the balance
between the two is not an easy task. The three disk technologies covered in this
paper — SATA, FC, and EFD — have differing performance characteristics and very
different costs. For instance, how will they decide what part of the database belongs
on the various disk technologies that represent the cost and performance balancing
act they attempt each day? This is made even more difficult under the new Oracle
Applications Tablespace Model. Oracle’s new model certainly does a good job of
high-level database object consolidation – fewer tablespaces and fewer datafiles – but
since the application modules are no longer separated into individual tablespaces
and datafiles, there really is no practical way to put different modules on different
tiers of storage – until now. FAST VP is the perfect complement to the manner in
which Oracle implements the database in release 12 of the Oracle application suite.
In fact, the entire database of user data can be placed on a single mount point on a
VMware virtual disk and yet still be spread over the appropriate disk technologies
that match the business requirements. The simplicity of this deployment model is
enabled by FAST VP. The following section presents one scenario on how this might
be done in a production environment.




Use case implementation
In this example, Oracle Applications release 12 was installed using Oracle’s pre-
configured and seeded sample database, the Vision database, all on a VMware
infrastructure. Each of the two tiers in the implementation was a virtual machine. The
entire installation is known as the Vision Demo system. The Vision Demo system
installation includes the licensing of all of Oracle’s application modules along with a
database that contains data for these modules. Such a system lets customers learn
how to use Oracle Applications and provides a good foundation for the type of
testing documented herein.

In this FAST VP use case environment, all
products begin their storage lifecycle on an inexpensive tier, SATA. Although these
drives are slower than Fibre Channel or Flash drives, they can easily meet the
performance needs required for the early phases of implementation. A customer may
spend many months converting data, entering financial charts of accounts, and doing
other pre-production tasks. Due to the low I/O and less stringent response time
requirements during this period it is unnecessary to use a storage tier that has better
performance characteristics than SATA. This is also a great cost-saver to a company
since SATA has a lower cost per GB than either Fibre Channel drives or EFDs.
If, however, a customer was taking an existing Oracle Applications environment and
moving it under FAST VP control, they might want to start their database at a higher
tier of storage like Fibre Channel, to avoid any performance implications while the
FAST VP engine was determining where best to place the data. Recall that FAST VP
will both promote and demote data so eventually the data will be placed on the
correct tier of disk. The caveat, of course, is that there needs to be sufficient Fibre
Channel disk to support the entire database at the point FAST VP is implemented.
So once the implementation phase comes to a close, and application modules are
brought live, how will FAST VP recognize the need to move the data representing
those modules from SATA to a higher-performing storage tier? Let’s take the example
of a Financials implementation. A customer “goes live” with accounts receivable,
accounts payable and general ledger. Currently these modules exist in single
tablespaces spread across a few datafiles that are stored on a single Linux mount
point, created on a virtual disk in a VMware VMFS datastore. That datastore is
created on a single thin LUN that is bound to a pool of SATA disks. As that thin LUN is
bound to a pool of SATA disks, the data in the modules is actually spread across all
those pooled disks. In a traditional storage implementation, this might seem an
impossible task: with the data spread across so many different disks, how would one
find the most heavily accessed data and move it to a different storage tier? The
true genius of FAST VP is that it needs no knowledge of the application or of how it
is accessed; it works with complete user transparency. Here is how it is
accomplished: on the Symmetrix
the storage administrator creates a set of tiers, each representing a different type of
disk in the array – for example, SATA, Fibre Channel, and Flash – each with its own
type of RAID protection: RAID 6, RAID 1, and RAID 5 respectively. Each of those
tiers is then associated with a thin pool that contains one of the aforementioned disk
types. In this example, one of those tiers is associated with the SATA pool in which
the database containing the Financial modules is located. A policy is then set up that
dictates how much space is to be made available in each tier of disk. As the
production workload ramps up in these Financial modules, FAST VP is gathering
statistics on how the data is being accessed in those thin pools. As I/O increases
across the Financial modules, FAST VP is able to determine that portions of the data
located in the SATA pool need to be up-tiered – to Fibre Channel or Flash or both.
Once determined, the data is moved automatically as represented in Figure 26 – no
user intervention is required.
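The promote-and-demote behavior described above can be sketched in a few lines of Python. This is a conceptual illustration only, not the FAST VP algorithm: the extent names, extent counts, and tier capacities below are invented, and the real engine weighs far more than a single I/O count per extent.

```python
def place_extents(extent_io_counts, tier_capacities):
    """Assign the hottest extents to the fastest tiers.

    extent_io_counts: {extent_id: io_count over the performance window}
    tier_capacities:  ordered {tier_name: max_extents}, fastest tier first;
                      the last (slowest) tier absorbs whatever remains.
    Returns {extent_id: tier_name}.
    """
    ranked = sorted(extent_io_counts, key=extent_io_counts.get, reverse=True)
    tiers = list(tier_capacities.items())
    placement, idx, used = {}, 0, 0
    for ext in ranked:
        # Step down to the next (slower) tier once the current one is full.
        while idx < len(tiers) - 1 and used >= tiers[idx][1]:
            idx, used = idx + 1, 0
        placement[ext] = tiers[idx][0]
        used += 1
    return placement

# Example: EFD holds 1 extent, FC holds 2, SATA takes the rest.
heat = {"e1": 900, "e2": 40, "e3": 500, "e4": 5, "e5": 2, "e6": 120}
print(place_extents(heat, {"EFD": 1, "FC": 2, "SATA": 10**9}))
```

Re-running the same placement with updated counts demotes extents whose activity has fallen off, which is the behavior the paper describes for data slated for archiving.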




Figure 26. Promotion of the application from SATA to FC and EFD over time

The amount of data moved will represent only that data that is being heavily accessed
within the application modules. So though one of these Financial application
modules may be many gigabytes in size as determined by the tables and indexes that
make it up, FAST VP is only going to move the data that is being heavily accessed.
Since only a portion of the data is moving, less disk space of the higher and more
expensive tiers is being used, thus not only saving money but leaving more available
space for other heavily accessed applications. As other Oracle Applications
modules are brought live, they too will benefit from FAST VP. Conversely, it should
not be forgotten that FAST VP will also demote data. That
financial data that FAST VP moved to the higher tier may be slated for archiving next
month. When access patterns/workloads change, FAST VP will recognize it and move
the data accordingly, in this case back to SATA. In the end the customer will benefit
by having the right data placed on the right storage type at the right time and at the
right cost.

Static tablespace placement
As Oracle Applications is powered by an Oracle database, there are a number of
database objects that are in all Oracle databases: temp files, system files, undo files,
and redo logs. These tablespaces and their respective datafiles are a small part of
the database but are essential components that are accessed, in the case of the redo
logs, constantly. These Oracle tablespaces are not part of the Oracle Applications
Tablespace Model. Unlike the individual application modules, therefore, these Oracle
datafiles and logfiles can be placed on different mount points and/or different
disk technologies from the start. Thus one may, in fact, choose not to make these
part of a FAST VP policy and instead place them on high-performing disk permanently.
In general, these tablespaces and logfiles do not grow significantly in size as
compared to the user data portion of the database, nor do their performance
characteristics change drastically over time. In addition, when using EMC’s
replication technologies, it is always best practice to separate the redo logs and temp
tablespaces at the very least (for details on Oracle running on EMC systems please
see the TechBook Oracle Databases on EMC Symmetrix Storage Systems on
www.EMC.com). Although it is possible to follow the same strategy presented here
and put all components on SATA and in a FAST VP policy, a production
implementation will access these files frequently and thus the data would be moved
to higher tiers. Given the limited amount of disk space the files occupy and the
relative certainty of their access patterns, having the FAST VP engine analyze this data
is unnecessary, and in fact adds overhead. The following study puts this into
practice, separating out these files, both because of the explanation above and also
to accurately account for the sub-LUN movements of the application module.

Oracle Application deployment
The VMware environment deployed in this study consists of three ESXi 5.0 servers
with a total of four virtual machines listed in Table 1. The environment is managed by
a VMware vCenter Server. Figure 27 is a visual representation of the environment.
Table 1. Example environment

Server              Name            Model       OS & Version     CPUs   RAM (GB)    Disk
Database Tier       fastdb          VMware VM   OEL 5 64-bit     4      16          SAN
Applications Tier   fastapp         VMware VM   OEL 5 64-bit     1      8           SAN
Management Server   fastmgmt        VMware VM   Win2008 64-bit   1      4           SAN
Virtual Center      sibu_infra_vc   VMware VM   Win2008 64-bit   2      4           Local/SAN
EMC Symmetrix       000198700046    VMAX 10K    5876 microcode   -      43 usable   62 TB Total

Hardware layout




Figure 27. Physical/virtual environment diagram


FAST VP configuration
The first step to showcase the FAST VP functionality in a virtualized Oracle
Applications environment is to ensure that FAST VP is enabled. This can be done
through the use of management tools for Symmetrix – Solution Enabler CLI or
Unisphere. The process of enabling FAST VP using the Unisphere interface is shown
in Figure 28, and the result of the change is shown in Figure 29 by utilizing the
SYMCLI(CLI) interface.

Figure 28. Enabling FAST VP using EMC Unisphere for VMAX

Figure 29. Determining the state of the FAST VP engine using SYMCLI

After enabling the engine, a number of steps follow: creation of FAST VP tiers,
storage groups, and FAST VP policies. The Unisphere application provides users with
an easy-to-understand interface to perform these activities; however, the objects
can also be created using the command line (SYMCLI). Figure 30 contains a list of
the disk groups that show the disk technologies available in Symmetrix VMAX 10K.
As seen in the figure, the array has Fibre Channel, SATA, and EFD (or Flash) drives.




Figure 30. Diskgroup summary

In order to use FAST VP, at least two different disk technologies are required. From
these disks, the thin pools can be built. To demonstrate the use of FAST VP in an
Oracle Applications environment, three pools were built in this environment to match
the three different disk technologies: FC_Pool, SATA_Pool, and EFD_Pool. The
detailed procedure used for the creation of the thin pools is not included herein as
the Virtual Provisioning feature has been available since release 5773 of the
microcode. The pools are shown in Figure 31.




Figure 31. Thin pools on the Symmetrix VMAX

The thin pools are backed by data devices that are configured as follows:
   •   EFD_Pool - 64 x 15 GB RAID 5 (3+1)
   •   FC_Pool – 200 x 30 GB RAID 1
   •   SATA_Pool – 200 x 50 GB RAID 6 (6+2)
It is important to use enough devices to ensure that the data on the TDEVs is striped
wide, thereby avoiding hotspots.
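As a quick check of what the layout above implies, the pool capacities are simple products of device count and device size. The sketch below assumes each data device's listed size is its usable (post-protection) capacity, which is how thin pool capacity is described in this paper.

```python
# Usable capacity implied by each thin pool's data-device layout,
# assuming each listed data-device size is usable (post-protection) space.
pools = {
    "EFD_Pool":  (64, 15),   # 64 x 15 GB, RAID 5 (3+1)
    "FC_Pool":   (200, 30),  # 200 x 30 GB, RAID 1
    "SATA_Pool": (200, 50),  # 200 x 50 GB, RAID 6 (6+2)
}
capacity_gb = {name: count * size for name, (count, size) in pools.items()}
print(capacity_gb)  # {'EFD_Pool': 960, 'FC_Pool': 6000, 'SATA_Pool': 10000}
```

The arithmetic makes the sizing intent visible: SATA provides roughly ten times the capacity of EFD, matching the expectation that most data will settle on the cheapest tier.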
The allocation of disk space among the thin pools, however, is not simply based upon
the available disk in the VMAX. The majority of data in the Vision demo environment
will not be accessed frequently, and therefore, whether it starts on SATA or not,
FAST VP will ensure that a large portion of it ends up there. This is one reason it
is logical to place the entire database on SATA from the start, and hence why the
SATA pool is the largest. This also makes sense from a cost perspective, since both
FC and particularly EFD are more expensive than SATA.

Configuring FAST VP
The five steps to configure FAST VP are as follows:

Step 1 - Create storage tiers
Three storage tiers were used in the environment: one for 15k rpm Fibre Channel
drives, one for Flash drives, and one for 7,200 rpm SATA drives. For simplicity's
sake, they are named
FC_Tier, EFD_Tier, and SATA_Tier. The CLI command to create the EFD_Tier storage tier
is shown in Figure 32.




Figure 32. Storage tier listing

Figure 33 is the dialog box to create storage tiers in Unisphere. The storage tiers
created for this use case are listed.


Figure 33. Creating a storage tier in Unisphere



Step 2 – Create the storage group
This again can be done via the CLI or Unisphere. In many customer environments a
storage group already exists for mapping and masking storage to the hosts. In this
environment there are two storage groups that represent the Vision database, each
with a single LUN. As can be seen in Figure 34, the storage group dsib1115_WP_sg
contains one device, 17B, which is associated with a FAST VP policy. This device
contains all the user data. The other group dsib1115_WP2_sg contains device 183
which is not part of a FAST VP policy as it is the location of temp files, system files,
undo files, and redo logs from the database.

Figure 34. Storage group for FAST VP in EMC Unisphere

A view of the storage group as seen from VMware ESXi using EMC Virtual Storage
Integrator (VSI) is displayed in Figure 35.




Figure 35. Storage group for FAST VP as viewed from VMware ESXi using EMC VSI


Step 3 - Create a FAST VP policy
The CLI syntax for creating the policy is included below in Figure 36. The GUI interface
for creating the FAST VP policy can be seen in Figure 38. The policy here is set up
such that all devices hosting the Vision database can exist on SATA (100 percent).
Recall that in the environment presented in this paper, all applications start on SATA.
For this to occur, the policy has to allow 100 percent of the storage to reside on SATA
drives. If this is set to a percentage of storage that is less than the size of the TDEVs in
the policy, the storage group will not be compliant with the FAST VP policy and FAST
will perform a compliance move although the performance characteristics may not
warrant such a move. The other tiers are set to 18% for FC and 3% for EFD in order to
more realistically represent a customer’s environment.




Figure 36. Creation of a FAST VP policy through CLI

The policy percentages of disk technologies used in this use case speak to two
realities: first that both EFD and FC are more expensive mediums than SATA, and
second, and more importantly, that only a small percentage of an application or
database is going to be accessed regularly. FAST is designed to make use of the disk
provided to it. If cost were not a concern, the best policy to institute would be
100/100/100, which would give FAST free rein to use as much of each tier as it needed.
Unfortunately, in the real world cost is one of the prime concerns, and as a result
customers are more likely to have smaller amounts of FC and EFD in their Symmetrix
than SATA. This leaves less to dedicate to a FAST VP policy; however, the good news
is that most data in applications and databases is rarely accessed, so it is unlikely
that large amounts of very fast disk such as EFD will be required.
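As a sketch of what these percentages mean in capacity terms, the snippet below applies the 100/18/3 policy to a 400 GB storage group (the size of the thin device used in this paper). The compliance check mirrors the rule described above; it is illustrative arithmetic, not Enginuity's actual logic.

```python
def tier_limits_gb(sg_size_gb, policy_pct):
    """Per-tier capacity limits implied by a FAST VP policy's percentages."""
    return {tier: sg_size_gb * pct / 100 for tier, pct in policy_pct.items()}

limits = tier_limits_gb(400, {"SATA": 100, "FC": 18, "EFD": 3})
print(limits)  # {'SATA': 400.0, 'FC': 72.0, 'EFD': 12.0}

def compliant(usage_gb, limits):
    """True when every tier's allocation stays within its policy limit."""
    return all(usage_gb.get(tier, 0) <= cap for tier, cap in limits.items())

# Roughly the post-move allocation observed later in this paper's test.
print(compliant({"SATA": 270, "FC": 9, "EFD": 11}, limits))  # True
```

Note that the 3 percent EFD limit works out to 12 GB, which is consistent with the roughly 11 GB the test later places on Flash.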
The result of the policy creation is shown in Figure 37.

Figure 37. Listing the FAST VP policy in CLI


Figure 38. Creating a FAST VP policy in Unisphere

Step 4 - Associate the storage group with the new policy
To associate the storage group from Figure 34 with the newly created policy, the CLI
command is:




Figure 39. Associating a storage group to a FAST VP policy in CLI

This can also be accomplished in Unisphere as shown in Figure 40. At the point of
associating a storage group, the user can check a box to enable RDF coordination as
explained in the section FAST VP SRDF coordination.

Figure 40. Associating a storage group to a FAST policy in Unisphere

One can now list the details of the association, including in what way the storage
group complies with the policy. This is shown in Figure 41. From the output we can
see that the total amount of space that the current storage group can “demand” is
400 GB, or the size of the thin device 17B, though the current allocation is only 290
GB.

Figure 41. FAST VP policy association with demand detail

Similar demand details can be obtained from the Unisphere interface as shown in
Figure 42.




Figure 42. FAST VP policy management

Step 5 – Configure a performance and move window
After the storage group is associated with the FAST VP policy, two time windows need
to be set up. One window dictates when the FAST VP algorithms observe the
performance of the devices, and the other window specifies when the generated
moves may be executed. Although this can be configured through the CLI, the
Unisphere GUI interface is much easier to navigate and was utilized in this study. The
window setups are shown in Figure 43 and Figure 44. Because of the nature of the
use case and the limited testing windows, the “Time to Sample before First Analysis”
was set to 2 hours, and the “Workload Analysis Period” to 1 week. When setting the
time for performance and movement, the local time of the machine is used; however,
all times are converted to UTC so that the windows run as the user intended, whether
the administrator is on the East Coast or the West Coast. To accommodate the
testing, the performance and move windows were both set to 24 hours a day. In a
customer environment, the performance window should be set to match
the hours the data will be accessed, while the move window should be set to a time
period of less activity on the system.

Figure 43. FAST VP general settings in Unisphere

Figure 44. Setting FAST VP performance and movement time windows

With the general FAST VP environment configured, the testing can proceed.

Oracle Applications case study using FAST VP performance
monitoring
For the purposes of this example, the following assumptions were made about the
Oracle E-Business Suite 12 implementation.

The use case employs the Order Entry module (schema SOE) as the basis for
demonstrating an Oracle Applications implementation. The other modules will be
implemented at a future date, and thus are not accessed during the testing. Order
Entry will have 200 active users on the system during normal business hours, as the
environment is exposed to the web as a B2B application.
As mentioned earlier, the customer user data portion of the Vision database is
configured on a single mount point that is actually a virtual disk in a VMware virtual
machine with Oracle Enterprise Linux as the guest operating system. Drilling down
into the database VM itself, FASTDB, one can use the Solutions Enabler command
symvm, introduced in version 7.2, on the Linux OS to show how the local file
systems map to the VMFS datastores and ultimately to the Symmetrix. The database
device /dev/sdd was partitioned using fdisk into a single partition on which the ext3
filesystem was created (mkfs.ext3). This mount houses the database user data.
Similarly, device /dev/sde was partitioned and contains the database system files.
Figure 45 demonstrates the use of the symvm command to map the local file system
to the VMFS datastore and then further to show the Symmetrix device that backs the
VMFS datastore.

Figure 45. Using symvm to translate Linux database mounts on the FASTDB VM

To see this information at a high level, use VSI Storage Viewer as shown in Figure 46.
VSI will include important details not seen with the symvm command such as the
RAID configuration, thin pools, metavolume type (if applicable), and storage group.

Figure 46. Virtual disk mapping in the EMC VSI Storage Viewer feature

Viewing the path management owner of the device in VSI in Figure 46, one can see it
is managed by PowerPath®. EMC PowerPath/VE was installed on all hosts for load
balancing, failover, and high availability. It is a best practice to use PowerPath/VE in
a VMware infrastructure running on EMC Symmetrix.
The TDEV is not the only device bound to the thin pool SATA_Pool as shown in Figure
47. Though it is not required, customers may find it easier to manage and keep
track of the TDEVs under FAST VP control by creating thin pools dedicated to FAST
VP.
In some views, VSI Storage Viewer includes a column for RAID as in Figure 46. For
devices under FAST VP control, the algorithm that VSI uses may show the RAID
configuration of any of the thin pools in the policy.

Figure 47. Thin device bound to the pool containing SATA drives

Now that all the specific FAST VP setup activities are complete, the promotion of the
Oracle Applications module to a live state can begin.

Order Entry application
As mentioned, the Order Entry schema is owned by the user SOE. The total size of the
SOE application is about 26 GB as shown in Figure 48.

Figure 48. Total size of the Order Entry application

SwingBench was used to simulate the workload by executing those transactions that
are most common in Order Entry: adding customers, searching for products, ordering
products, searching for orders, and processing orders. The SwingBench Order Entry
benchmark is designed to hit the majority of data in the schema.
The benchmark was run for about an hour, with the default setup designed to mimic
an hour in a regular business day for a customer. In the screenshot shown in Figure
49, the benchmark is in mid-run, with approximately 200 users connected and
generating an average of 12,173 transactions a minute.

Figure 49. Order Entry benchmark

As the parameters for FAST are set to analyze two hours’ worth of statistics (as
previously shown in Figure 44), movement began shortly after that time. In real
customer environments, depending on the settings used for the performance window,
it is reasonable for changes to take several hours or, in some instances, days. In a
production environment it would be most advantageous for a customer to set the
performance window to the span of time in a day during which business activity takes
place. It is important that nonproductive hours are not included in the performance
window as this could pollute the performance statistics that the FAST controller uses,
and may result in incorrect placement of data. This is a noteworthy point since,
unlike FAST DP for thick devices, FAST VP gives customers no option to review or
approve the recommendations made by the controller. If FAST determines that
tracks need to be moved, and the FAST engine is set to automatic as in this case
study (Figure 50), the moves take place automatically in the background during the
move window.

Figure 50. FAST VP move mode

Per the FAST settings, about an hour after the benchmark completes, the FAST engine
begins moving tracks from the SATA_Pool thin pool to the two other configured pools,
FC_Pool and EFD_Pool. Note the subtlety of how the FAST engine works: because it
operates on the array at the sub-LUN level, FAST has no knowledge of the application.
It simply uses the access patterns to determine where to place the data. In this
manner, the Order Entry application data ends up on three separate tiers. Figure 51
catches the movement just as it is beginning.

Figure 51. Initial track movement for the database thin device

After a time (about an hour), as seen in Figure 52, the initial movement of data is
complete, with all three thin pools containing some portion of device 17B.




Figure 52. Completed track movement for the database thin device

If one views the FAST VP policy in the CLI, the usage for each tier is listed. Based on
how SwingBench has accessed the user data, FAST has moved some of that data from
SATA to EFD and FC. The output demonstrates that 11 GB of the Order Entry
application has been re-tiered to EFD while 9 GB was placed on FC. These increases
led to a decrease in the SATA tier equal to the sum of the other two tiers, 20 GB.
This is depicted in Figure 53.




Figure 53. FAST policy demand usage

Note also that there is still growth possible in all storage tiers, indicating that
FAST could have utilized more EFD or FC but the access patterns did not warrant it.

FAST VP Results
Once the move is complete, the FAST VP mode is set to off to prevent additional
movement during the second test, since the performance and movement windows are
24 hours. The Oracle Vision database is then refreshed from a TimeFinder clone
backup to ensure the post-move test is the same as the pre-test.5 The benchmark is
re-run to see if there is a noticeable difference in any of the measurable statistics.
Figure 54 shows a graph of the mid-run of the benchmark executed after the FAST VP
optimizations. For each of the SwingBench runs, the loader gathers statistics on each
of the five transaction types, providing the minimum, maximum, and average
transaction response times. These statistics are saved to an XML file at the end of
each run.
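Because the statistics land in XML, comparing runs can be scripted. The element and attribute names below are hypothetical (the paper does not reproduce SwingBench's actual file layout), but the approach of pulling per-transaction averages out of such a file with the standard library carries over to the real format.

```python
import xml.etree.ElementTree as ET

# Hypothetical sample in the spirit of a SwingBench results file; the real
# element and attribute names may differ.
SAMPLE = """\
<Results>
  <TransactionResult name="Browse Orders" min="4" max="210" average="36"/>
  <TransactionResult name="New Order"     min="6" max="300" average="51"/>
</Results>"""

def avg_response_ms(xml_text):
    """Extract {transaction name: average response time in ms} from a run."""
    root = ET.fromstring(xml_text)
    return {t.get("name"): float(t.get("average"))
            for t in root.iter("TransactionResult")}

print(avg_response_ms(SAMPLE))  # {'Browse Orders': 36.0, 'New Order': 51.0}
```

Running this once per results file yields two dictionaries that can be plotted side by side, which is effectively how the comparison graph in the next section was produced.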




5
    Restoring the clone data does not impact the location of the tracks on the disk tiers.

Figure 54. Mid-run of the Order Entry benchmark

By plotting the average response times for each run on a single graph, the pre- and
post-FAST VP runs can be compared. Figure 55 is a composite graph containing the
pre- and post-FAST VP transaction response times for each of the five Order Entry
functions. The black lines represent the pre-FAST VP environment while the green
lines represent the post-FAST VP environment.

Figure 55. Transaction response time comparing pre-FAST VP and post-FAST VP
movement

Reviewing the graph, the post-FAST VP environment shows clear gains over the
pre-FAST VP environment in every type of transaction. Some transaction types show a
more pronounced benefit, such as Browse Orders, which went from 36 milliseconds to
23 milliseconds, but the results are better for each one. This test, of course, is
just a microcosm of what is possible in a large enterprise environment. Reduced
transaction times mean more work can be accomplished by the existing hardware and
software, and that users have a better experience when accessing the application or
database. More importantly, FAST VP will continue to work throughout the lifecycle
of the applications and databases under its control, up-tiering or down-tiering
according to how the data is accessed and efficiently making use of the storage
within the FAST VP policy.
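Using the one pair of figures quoted above (Browse Orders at 36 ms before the moves and 23 ms after), the relative gain works out as below. The helper is plain arithmetic, not part of any FAST VP tooling.

```python
# Relative response-time improvement for a transaction type, using the
# before/after averages quoted in the text for Browse Orders.
def improvement_pct(before_ms, after_ms):
    return round((before_ms - after_ms) / before_ms * 100, 1)

print(improvement_pct(36, 23))  # 36.1
```

A reduction of roughly a third in average response time, achieved with no manual data placement, is the headline result of the test.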

FAST VP, manual tiering, and real-world considerations
Navigating the varied applications and databases in a customer environment in order
to manually tier them across different disk technologies is a difficult task. If
resources are at a premium it moves from difficult to near impossible. An application
or database is not a single entity. Invariably there will be some data more heavily
accessed than other data, and thus placing an entire application or database on a
single disk technology will either waste money (FC, EFD) or limit performance (SATA).
If all those applications and databases were static entities with the same
performance requirements, manually tiering them might be a feasible undertaking; but
they are not static. They have a lifecycle. The simple Order Entry use case
conducted for this paper is evidence of that.
At the end of the test, despite its nominal size, the Order Entry application is now
spread across three tiers of storage, with the most heavily accessed data being
placed on EFD, the next most accessed on Fibre Channel, and the least accessed (or never accessed) data remaining on
SATA. All three tiers are utilized based upon how the data was accessed during the
test, and most importantly, no manual tiering was required to achieve the result. FAST
VP took over the tiering of the application or database in this case, and did so using
real-time disk metrics. Unlike manual tiering, which may be a single event, FAST VP
will continue to gather metrics and make additional changes over the lifecycle of the
application, thereby minimizing cost and maximizing performance.
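
The metrics-driven placement described above can be sketched as a toy model: score each extent by its recent access rate and re-evaluate placement periodically. This is only an illustration under stated assumptions; the tier thresholds and field names below are hypothetical, and the real FAST VP algorithm inside Enginuity weighs far richer performance metrics against the capacity limits defined in the FAST policy.

```python
# Toy model of frequency-based storage tiering (illustrative only; not the
# actual FAST VP algorithm). Thresholds and field names are hypothetical.
from dataclasses import dataclass


@dataclass
class Extent:
    extent_id: int
    reads_per_hour: float  # recent access rate gathered by monitoring


def assign_tier(extent: Extent) -> str:
    """Place the hottest extents on EFD, warm extents on FC, cold on SATA."""
    if extent.reads_per_hour >= 100:
        return "EFD"
    if extent.reads_per_hour >= 10:
        return "FC"
    return "SATA"


# Re-evaluating placement on a schedule mirrors how an automated policy keeps
# tiering aligned with the application's changing access pattern over its lifecycle.
workload = [Extent(1, 350.0), Extent(2, 42.0), Extent(3, 0.5)]
placement = {e.extent_id: assign_tier(e) for e in workload}
# placement -> {1: 'EFD', 2: 'FC', 3: 'SATA'}
```

As the access rates in `workload` change between evaluation cycles, extents migrate between tiers automatically, which is the behavior a one-time manual tiering exercise cannot provide.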


Conclusion
Using Virtual LUN and FAST VP technologies from EMC, it is possible to properly tier
applications running in a vSphere environment without the complications of many
virtual disks on many datastores or the use of raw device mappings. All the benefits
of storage tiering in an EMC Symmetrix VMAX and VMware vSphere infrastructure can
even be achieved using a single datastore on a single thin LUN, sacrificing neither
manageability nor performance. Data movement can be completely automated,
eliminating the need for time-consuming analysis by database and IT staff. Because
only as much of each disk tier as necessary is used, and because most data in large
databases supporting various applications is infrequently accessed, the majority of
that data can remain on cost-effective SATA. These benefits make FAST a wise
investment for any company.




References
The following are available on the Oracle Technology Network (http://otn.oracle.com/):
•   Oracle Applications Installation Guide: Using Rapid Install Release 12
•   Oracle Applications Concepts Release 12
The following are available on EMC’s Powerlink website:
•   Oracle Databases on EMC Symmetrix Storage Systems TechBook
•   Implementing Fully Automated Storage Tiering with Virtual Pools (FAST VP) for EMC
    Symmetrix VMAX Series Arrays
•   FAST VP for EMC® Symmetrix® VMAX® Theory and Best Practices for Planning
    and Performance
•   VSI for VMware vSphere Storage Viewer Version 5.3 Product Guide
•   Unisphere for VMAX Installation Guide
•   EMC Solutions Enabler Symmetrix Array Controls CLI Product Guide
•   EMC Solutions Enabler Symmetrix Array Management CLI Product Guide
•   EMC Solutions Enabler Symmetrix CLI Command Reference HTML Help
•   EMC Solutions Enabler Installation Guide




                                     Storage Tiering for VMware Environments Deployed on   68
                                                 EMC Symmetrix VMAX with Enginuity 5876

Contenu connexe

Tendances

H10986 emc its-oracle-br-wp
H10986 emc its-oracle-br-wpH10986 emc its-oracle-br-wp
H10986 emc its-oracle-br-wp
smdsamee384
 

Tendances (20)

TechBook: Using EMC VNX Storage with VMware vSphere
TechBook: Using EMC VNX Storage with VMware vSphereTechBook: Using EMC VNX Storage with VMware vSphere
TechBook: Using EMC VNX Storage with VMware vSphere
 
EMC Cisco SAP HANA Appliance Disaster Tolerance
EMC Cisco SAP HANA Appliance Disaster ToleranceEMC Cisco SAP HANA Appliance Disaster Tolerance
EMC Cisco SAP HANA Appliance Disaster Tolerance
 
TechBook: Using EMC Symmetrix Storage in VMware vSphere Environments
TechBook: Using EMC Symmetrix Storage in VMware vSphere Environments  TechBook: Using EMC Symmetrix Storage in VMware vSphere Environments
TechBook: Using EMC Symmetrix Storage in VMware vSphere Environments
 
Reference Architecture: EMC Infrastructure for VMware View 5.1 EMC VNX Series...
Reference Architecture: EMC Infrastructure for VMware View 5.1 EMC VNX Series...Reference Architecture: EMC Infrastructure for VMware View 5.1 EMC VNX Series...
Reference Architecture: EMC Infrastructure for VMware View 5.1 EMC VNX Series...
 
consolidating and protecting virtualized enterprise environments with Dell EM...
consolidating and protecting virtualized enterprise environments with Dell EM...consolidating and protecting virtualized enterprise environments with Dell EM...
consolidating and protecting virtualized enterprise environments with Dell EM...
 
Networking for Storage Virtualization and EMC RecoverPoint TechBook
Networking for Storage Virtualization and EMC RecoverPoint TechBook Networking for Storage Virtualization and EMC RecoverPoint TechBook
Networking for Storage Virtualization and EMC RecoverPoint TechBook
 
White Paper: EMC Compute-as-a-Service
White Paper: EMC Compute-as-a-Service   White Paper: EMC Compute-as-a-Service
White Paper: EMC Compute-as-a-Service
 
Using EMC Symmetrix Storage in VMware vSphere Environments
Using EMC Symmetrix Storage in VMware vSphere EnvironmentsUsing EMC Symmetrix Storage in VMware vSphere Environments
Using EMC Symmetrix Storage in VMware vSphere Environments
 
Using EMC Symmetrix Storage in VMware vSphere Environments
Using EMC Symmetrix Storage in VMware vSphere EnvironmentsUsing EMC Symmetrix Storage in VMware vSphere Environments
Using EMC Symmetrix Storage in VMware vSphere Environments
 
White Paper - EMC IT's Oracle Backup and Recovery-4X Cheaper, 8X Faster, and ...
White Paper - EMC IT's Oracle Backup and Recovery-4X Cheaper, 8X Faster, and ...White Paper - EMC IT's Oracle Backup and Recovery-4X Cheaper, 8X Faster, and ...
White Paper - EMC IT's Oracle Backup and Recovery-4X Cheaper, 8X Faster, and ...
 
H10986 emc its-oracle-br-wp
H10986 emc its-oracle-br-wpH10986 emc its-oracle-br-wp
H10986 emc its-oracle-br-wp
 
H4160 emc solutions for oracle database
H4160 emc solutions for oracle databaseH4160 emc solutions for oracle database
H4160 emc solutions for oracle database
 
Emc cla rii on fibre channel storage fundamentals
Emc cla rii on fibre channel storage fundamentalsEmc cla rii on fibre channel storage fundamentals
Emc cla rii on fibre channel storage fundamentals
 
DDoS Secure: VMware Virtual Edition Installation Guide
DDoS Secure: VMware Virtual Edition Installation GuideDDoS Secure: VMware Virtual Edition Installation Guide
DDoS Secure: VMware Virtual Edition Installation Guide
 
8000 guide
8000 guide8000 guide
8000 guide
 
Thin Reclamation Using Veritas Storage Foundation Enterprise HA from Symantec...
Thin Reclamation Using Veritas Storage Foundation Enterprise HA from Symantec...Thin Reclamation Using Veritas Storage Foundation Enterprise HA from Symantec...
Thin Reclamation Using Veritas Storage Foundation Enterprise HA from Symantec...
 
TechBook: DB2 for z/OS Using EMC Symmetrix Storage Systems
TechBook: DB2 for z/OS Using EMC Symmetrix Storage Systems  TechBook: DB2 for z/OS Using EMC Symmetrix Storage Systems
TechBook: DB2 for z/OS Using EMC Symmetrix Storage Systems
 
Introduction to the EMC XtremIO All-Flash Array
Introduction to the EMC XtremIO All-Flash ArrayIntroduction to the EMC XtremIO All-Flash Array
Introduction to the EMC XtremIO All-Flash Array
 
VMware Networking 5.0
VMware Networking 5.0VMware Networking 5.0
VMware Networking 5.0
 
White Paper: Introduction to VFCache
White Paper: Introduction to VFCache   White Paper: Introduction to VFCache
White Paper: Introduction to VFCache
 

En vedette

Data Center Storge Architecture comparison EMC VMAX vs HUAWEI 18000 series
Data Center  Storge Architecture comparison EMC VMAX vs HUAWEI 18000 seriesData Center  Storge Architecture comparison EMC VMAX vs HUAWEI 18000 series
Data Center Storge Architecture comparison EMC VMAX vs HUAWEI 18000 series
Lachezar Georgiev
 
роль методиста по профориентации в формировании готовности
роль методиста по профориентации в формировании готовностироль методиста по профориентации в формировании готовности
роль методиста по профориентации в формировании готовности
Татьяна Глинская
 
Target audience feedback
Target audience feedbackTarget audience feedback
Target audience feedback
harryronchetti
 
Introduction - Lab Report
Introduction - Lab ReportIntroduction - Lab Report
Introduction - Lab Report
Quanina Quan
 
Facebook vs Twitter
Facebook vs TwitterFacebook vs Twitter
Facebook vs Twitter
Siti Rizki
 
Market structures project and quiz
Market structures project and quizMarket structures project and quiz
Market structures project and quiz
Travis Klein
 

En vedette (18)

White Paper: EMC VNXe Data Protection — A Detailed Review
White Paper: EMC VNXe Data Protection — A Detailed Review   White Paper: EMC VNXe Data Protection — A Detailed Review
White Paper: EMC VNXe Data Protection — A Detailed Review
 
emc vnx unisphere
emc vnx unisphereemc vnx unisphere
emc vnx unisphere
 
EMC Vnx master-presentation
EMC Vnx master-presentationEMC Vnx master-presentation
EMC Vnx master-presentation
 
Data Center Storge Architecture comparison EMC VMAX vs HUAWEI 18000 series
Data Center  Storge Architecture comparison EMC VMAX vs HUAWEI 18000 seriesData Center  Storge Architecture comparison EMC VMAX vs HUAWEI 18000 series
Data Center Storge Architecture comparison EMC VMAX vs HUAWEI 18000 series
 
Emc vnx2 technical deep dive workshop
Emc vnx2 technical deep dive workshopEmc vnx2 technical deep dive workshop
Emc vnx2 technical deep dive workshop
 
New microsoft office word document
New microsoft office word documentNew microsoft office word document
New microsoft office word document
 
роль методиста по профориентации в формировании готовности
роль методиста по профориентации в формировании готовностироль методиста по профориентации в формировании готовности
роль методиста по профориентации в формировании готовности
 
Target audience feedback
Target audience feedbackTarget audience feedback
Target audience feedback
 
Penelitian
PenelitianPenelitian
Penelitian
 
Introduction - Lab Report
Introduction - Lab ReportIntroduction - Lab Report
Introduction - Lab Report
 
RSA Incident Response Threat Emerging Threat Profile: Shell_Crew
 RSA Incident Response Threat Emerging Threat Profile: Shell_Crew RSA Incident Response Threat Emerging Threat Profile: Shell_Crew
RSA Incident Response Threat Emerging Threat Profile: Shell_Crew
 
Deployment Day Session 2 MDT 2012 Advanced
Deployment Day Session 2 MDT 2012 AdvancedDeployment Day Session 2 MDT 2012 Advanced
Deployment Day Session 2 MDT 2012 Advanced
 
White Paper: Using VMware Storage APIs for Array Integration with EMC Symmetr...
White Paper: Using VMware Storage APIs for Array Integration with EMC Symmetr...White Paper: Using VMware Storage APIs for Array Integration with EMC Symmetr...
White Paper: Using VMware Storage APIs for Array Integration with EMC Symmetr...
 
BIOENERGY TECHNOLOGY STATUS IN THAILAND: CHALLENGES AND OPPORTUNITIES
BIOENERGY TECHNOLOGY STATUS IN THAILAND: CHALLENGES AND OPPORTUNITIESBIOENERGY TECHNOLOGY STATUS IN THAILAND: CHALLENGES AND OPPORTUNITIES
BIOENERGY TECHNOLOGY STATUS IN THAILAND: CHALLENGES AND OPPORTUNITIES
 
Facebook vs Twitter
Facebook vs TwitterFacebook vs Twitter
Facebook vs Twitter
 
Wed thurs reform
Wed thurs reformWed thurs reform
Wed thurs reform
 
Beautiful 1
Beautiful 1 Beautiful 1
Beautiful 1
 
Market structures project and quiz
Market structures project and quizMarket structures project and quiz
Market structures project and quiz
 

Similaire à White Paper: Storage Tiering for VMware Environments Deployed on EMC Symmetrix VMAX with Enginuity 5875

V mware implementation with ibm system storage ds4000 ds5000 redp4609
V mware implementation with ibm system storage ds4000 ds5000 redp4609V mware implementation with ibm system storage ds4000 ds5000 redp4609
V mware implementation with ibm system storage ds4000 ds5000 redp4609
Banking at Ho Chi Minh city
 
V mware v-sphere-replication-overview
V mware v-sphere-replication-overviewV mware v-sphere-replication-overview
V mware v-sphere-replication-overview
Firman Indrianto
 

Similaire à White Paper: Storage Tiering for VMware Environments Deployed on EMC Symmetrix VMAX with Enginuity 5875 (20)

White Paper: Using VMware Storage APIs for Array Integration with EMC Symmetr...
White Paper: Using VMware Storage APIs for Array Integration with EMC Symmetr...White Paper: Using VMware Storage APIs for Array Integration with EMC Symmetr...
White Paper: Using VMware Storage APIs for Array Integration with EMC Symmetr...
 
Using EMC VNX storage with VMware vSphereTechBook
Using EMC VNX storage with VMware vSphereTechBookUsing EMC VNX storage with VMware vSphereTechBook
Using EMC VNX storage with VMware vSphereTechBook
 
Reference Architecture: EMC Hybrid Cloud with VMware
Reference Architecture: EMC Hybrid Cloud with VMwareReference Architecture: EMC Hybrid Cloud with VMware
Reference Architecture: EMC Hybrid Cloud with VMware
 
White Paper: New Features in EMC Enginuity 5876 for Mainframe Environments
White Paper: New Features in EMC Enginuity 5876 for Mainframe Environments  White Paper: New Features in EMC Enginuity 5876 for Mainframe Environments
White Paper: New Features in EMC Enginuity 5876 for Mainframe Environments
 
What's New in VMware vSphere 5.0 - Storage
What's New in VMware vSphere 5.0 - StorageWhat's New in VMware vSphere 5.0 - Storage
What's New in VMware vSphere 5.0 - Storage
 
V mware implementation with ibm system storage ds4000 ds5000 redp4609
V mware implementation with ibm system storage ds4000 ds5000 redp4609V mware implementation with ibm system storage ds4000 ds5000 redp4609
V mware implementation with ibm system storage ds4000 ds5000 redp4609
 
Reference Architecture: EMC Infrastructure for VMware View 5.1 EMC VNX Series...
Reference Architecture: EMC Infrastructure for VMware View 5.1 EMC VNX Series...Reference Architecture: EMC Infrastructure for VMware View 5.1 EMC VNX Series...
Reference Architecture: EMC Infrastructure for VMware View 5.1 EMC VNX Series...
 
Cloud Foundry Platform as a Service on Vblock System
Cloud Foundry Platform as a Service on Vblock SystemCloud Foundry Platform as a Service on Vblock System
Cloud Foundry Platform as a Service on Vblock System
 
TechBook: IMS on z/OS Using EMC Symmetrix Storage Systems
TechBook: IMS on z/OS Using EMC Symmetrix Storage SystemsTechBook: IMS on z/OS Using EMC Symmetrix Storage Systems
TechBook: IMS on z/OS Using EMC Symmetrix Storage Systems
 
Whitepaper
WhitepaperWhitepaper
Whitepaper
 
V mware v-sphere-replication-overview
V mware v-sphere-replication-overviewV mware v-sphere-replication-overview
V mware v-sphere-replication-overview
 
White Paper: Using VPLEX Metro with VMware High Availability and Fault Tolera...
White Paper: Using VPLEX Metro with VMware High Availability and Fault Tolera...White Paper: Using VPLEX Metro with VMware High Availability and Fault Tolera...
White Paper: Using VPLEX Metro with VMware High Availability and Fault Tolera...
 
Networker integration for optimal performance
Networker integration for optimal performanceNetworker integration for optimal performance
Networker integration for optimal performance
 
IBM Storwize 7000 Unified, SONAS, and VMware Site Recovery Manager: An overvi...
IBM Storwize 7000 Unified, SONAS, and VMware Site Recovery Manager: An overvi...IBM Storwize 7000 Unified, SONAS, and VMware Site Recovery Manager: An overvi...
IBM Storwize 7000 Unified, SONAS, and VMware Site Recovery Manager: An overvi...
 
White Paper: EMC Compute-as-a-Service — EMC Ionix IT Orchestrator, VCE Vblock...
White Paper: EMC Compute-as-a-Service — EMC Ionix IT Orchestrator, VCE Vblock...White Paper: EMC Compute-as-a-Service — EMC Ionix IT Orchestrator, VCE Vblock...
White Paper: EMC Compute-as-a-Service — EMC Ionix IT Orchestrator, VCE Vblock...
 
How to backup and restore a vm using veeam
How to backup and restore a vm using veeamHow to backup and restore a vm using veeam
How to backup and restore a vm using veeam
 
White paper: EMC Performance Optimization for Microsoft FAST Search Server 20...
White paper: EMC Performance Optimization for Microsoft FAST Search Server 20...White paper: EMC Performance Optimization for Microsoft FAST Search Server 20...
White paper: EMC Performance Optimization for Microsoft FAST Search Server 20...
 
Db2 virtualization
Db2 virtualizationDb2 virtualization
Db2 virtualization
 
Practical Guide to Business Continuity & Disaster Recovery
Practical Guide to Business Continuity & Disaster RecoveryPractical Guide to Business Continuity & Disaster Recovery
Practical Guide to Business Continuity & Disaster Recovery
 
White Paper: EMC Infrastructure for Microsoft Private Cloud
White Paper: EMC Infrastructure for Microsoft Private Cloud White Paper: EMC Infrastructure for Microsoft Private Cloud
White Paper: EMC Infrastructure for Microsoft Private Cloud
 

Plus de EMC

Modern infrastructure for business data lake
Modern infrastructure for business data lakeModern infrastructure for business data lake
Modern infrastructure for business data lake
EMC
 
Virtualization Myths Infographic
Virtualization Myths Infographic Virtualization Myths Infographic
Virtualization Myths Infographic
EMC
 
Data Science and Big Data Analytics Book from EMC Education Services
Data Science and Big Data Analytics Book from EMC Education ServicesData Science and Big Data Analytics Book from EMC Education Services
Data Science and Big Data Analytics Book from EMC Education Services
EMC
 

Plus de EMC (20)

INDUSTRY-LEADING TECHNOLOGY FOR LONG TERM RETENTION OF BACKUPS IN THE CLOUD
INDUSTRY-LEADING  TECHNOLOGY FOR LONG TERM RETENTION OF BACKUPS IN THE CLOUDINDUSTRY-LEADING  TECHNOLOGY FOR LONG TERM RETENTION OF BACKUPS IN THE CLOUD
INDUSTRY-LEADING TECHNOLOGY FOR LONG TERM RETENTION OF BACKUPS IN THE CLOUD
 
Cloud Foundry Summit Berlin Keynote
Cloud Foundry Summit Berlin Keynote Cloud Foundry Summit Berlin Keynote
Cloud Foundry Summit Berlin Keynote
 
EMC GLOBAL DATA PROTECTION INDEX
EMC GLOBAL DATA PROTECTION INDEX EMC GLOBAL DATA PROTECTION INDEX
EMC GLOBAL DATA PROTECTION INDEX
 
Transforming Desktop Virtualization with Citrix XenDesktop and EMC XtremIO
Transforming Desktop Virtualization with Citrix XenDesktop and EMC XtremIOTransforming Desktop Virtualization with Citrix XenDesktop and EMC XtremIO
Transforming Desktop Virtualization with Citrix XenDesktop and EMC XtremIO
 
Citrix ready-webinar-xtremio
Citrix ready-webinar-xtremioCitrix ready-webinar-xtremio
Citrix ready-webinar-xtremio
 
EMC FORUM RESEARCH GLOBAL RESULTS - 10,451 RESPONSES ACROSS 33 COUNTRIES
EMC FORUM RESEARCH GLOBAL RESULTS - 10,451 RESPONSES ACROSS 33 COUNTRIES EMC FORUM RESEARCH GLOBAL RESULTS - 10,451 RESPONSES ACROSS 33 COUNTRIES
EMC FORUM RESEARCH GLOBAL RESULTS - 10,451 RESPONSES ACROSS 33 COUNTRIES
 
EMC with Mirantis Openstack
EMC with Mirantis OpenstackEMC with Mirantis Openstack
EMC with Mirantis Openstack
 
Modern infrastructure for business data lake
Modern infrastructure for business data lakeModern infrastructure for business data lake
Modern infrastructure for business data lake
 
Force Cyber Criminals to Shop Elsewhere
Force Cyber Criminals to Shop ElsewhereForce Cyber Criminals to Shop Elsewhere
Force Cyber Criminals to Shop Elsewhere
 
Pivotal : Moments in Container History
Pivotal : Moments in Container History Pivotal : Moments in Container History
Pivotal : Moments in Container History
 
Data Lake Protection - A Technical Review
Data Lake Protection - A Technical ReviewData Lake Protection - A Technical Review
Data Lake Protection - A Technical Review
 
Mobile E-commerce: Friend or Foe
Mobile E-commerce: Friend or FoeMobile E-commerce: Friend or Foe
Mobile E-commerce: Friend or Foe
 
Virtualization Myths Infographic
Virtualization Myths Infographic Virtualization Myths Infographic
Virtualization Myths Infographic
 
Intelligence-Driven GRC for Security
Intelligence-Driven GRC for SecurityIntelligence-Driven GRC for Security
Intelligence-Driven GRC for Security
 
The Trust Paradox: Access Management and Trust in an Insecure Age
The Trust Paradox: Access Management and Trust in an Insecure AgeThe Trust Paradox: Access Management and Trust in an Insecure Age
The Trust Paradox: Access Management and Trust in an Insecure Age
 
EMC Technology Day - SRM University 2015
EMC Technology Day - SRM University 2015EMC Technology Day - SRM University 2015
EMC Technology Day - SRM University 2015
 
EMC Academic Summit 2015
EMC Academic Summit 2015EMC Academic Summit 2015
EMC Academic Summit 2015
 
Data Science and Big Data Analytics Book from EMC Education Services
Data Science and Big Data Analytics Book from EMC Education ServicesData Science and Big Data Analytics Book from EMC Education Services
Data Science and Big Data Analytics Book from EMC Education Services
 
2014 Cybercrime Roundup: The Year of the POS Breach
2014 Cybercrime Roundup: The Year of the POS Breach2014 Cybercrime Roundup: The Year of the POS Breach
2014 Cybercrime Roundup: The Year of the POS Breach
 
EMC Isilon Best Practices for Hadoop Data Storage
EMC Isilon Best Practices for Hadoop Data StorageEMC Isilon Best Practices for Hadoop Data Storage
EMC Isilon Best Practices for Hadoop Data Storage
 

Dernier

Finding Java's Hidden Performance Traps @ DevoxxUK 2024
Finding Java's Hidden Performance Traps @ DevoxxUK 2024Finding Java's Hidden Performance Traps @ DevoxxUK 2024
Finding Java's Hidden Performance Traps @ DevoxxUK 2024
Victor Rentea
 

Dernier (20)

Artificial Intelligence Chap.5 : Uncertainty
Artificial Intelligence Chap.5 : UncertaintyArtificial Intelligence Chap.5 : Uncertainty
Artificial Intelligence Chap.5 : Uncertainty
 
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemkeProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
 
Apidays New York 2024 - Passkeys: Developing APIs to enable passwordless auth...
Apidays New York 2024 - Passkeys: Developing APIs to enable passwordless auth...Apidays New York 2024 - Passkeys: Developing APIs to enable passwordless auth...
Apidays New York 2024 - Passkeys: Developing APIs to enable passwordless auth...
 
Rising Above_ Dubai Floods and the Fortitude of Dubai International Airport.pdf
Rising Above_ Dubai Floods and the Fortitude of Dubai International Airport.pdfRising Above_ Dubai Floods and the Fortitude of Dubai International Airport.pdf
Rising Above_ Dubai Floods and the Fortitude of Dubai International Airport.pdf
 
Corporate and higher education May webinar.pptx
Corporate and higher education May webinar.pptxCorporate and higher education May webinar.pptx
Corporate and higher education May webinar.pptx
 
Boost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdfBoost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdf
 
Six Myths about Ontologies: The Basics of Formal Ontology
Six Myths about Ontologies: The Basics of Formal OntologySix Myths about Ontologies: The Basics of Formal Ontology
Six Myths about Ontologies: The Basics of Formal Ontology
 
Finding Java's Hidden Performance Traps @ DevoxxUK 2024
Finding Java's Hidden Performance Traps @ DevoxxUK 2024Finding Java's Hidden Performance Traps @ DevoxxUK 2024
Finding Java's Hidden Performance Traps @ DevoxxUK 2024
 
Mcleodganj Call Girls 🥰 8617370543 Service Offer VIP Hot Model
Mcleodganj Call Girls 🥰 8617370543 Service Offer VIP Hot ModelMcleodganj Call Girls 🥰 8617370543 Service Offer VIP Hot Model
Mcleodganj Call Girls 🥰 8617370543 Service Offer VIP Hot Model
 
DBX First Quarter 2024 Investor Presentation
DBX First Quarter 2024 Investor PresentationDBX First Quarter 2024 Investor Presentation
DBX First Quarter 2024 Investor Presentation
 
Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...
Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...
Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...
 
Repurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost Saving
Repurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost SavingRepurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost Saving
Repurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost Saving
 
[BuildWithAI] Introduction to Gemini.pdf
[BuildWithAI] Introduction to Gemini.pdf[BuildWithAI] Introduction to Gemini.pdf
[BuildWithAI] Introduction to Gemini.pdf
 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected Worker
 
Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...
Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...
Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...
 
Introduction to Multilingual Retrieval Augmented Generation (RAG)
Introduction to Multilingual Retrieval Augmented Generation (RAG)Introduction to Multilingual Retrieval Augmented Generation (RAG)
Introduction to Multilingual Retrieval Augmented Generation (RAG)
 
MINDCTI Revenue Release Quarter One 2024
MINDCTI Revenue Release Quarter One 2024MINDCTI Revenue Release Quarter One 2024
MINDCTI Revenue Release Quarter One 2024
 
Polkadot JAM Slides - Token2049 - By Dr. Gavin Wood
Polkadot JAM Slides - Token2049 - By Dr. Gavin WoodPolkadot JAM Slides - Token2049 - By Dr. Gavin Wood
Polkadot JAM Slides - Token2049 - By Dr. Gavin Wood
 
Biography Of Angeliki Cooney | Senior Vice President Life Sciences | Albany, ...
Biography Of Angeliki Cooney | Senior Vice President Life Sciences | Albany, ...Biography Of Angeliki Cooney | Senior Vice President Life Sciences | Albany, ...
Biography Of Angeliki Cooney | Senior Vice President Life Sciences | Albany, ...
 
DEV meet-up UiPath Document Understanding May 7 2024 Amsterdam
DEV meet-up UiPath Document Understanding May 7 2024 AmsterdamDEV meet-up UiPath Document Understanding May 7 2024 Amsterdam
DEV meet-up UiPath Document Understanding May 7 2024 Amsterdam
 

White Paper: Storage Tiering for VMware Environments Deployed on EMC Symmetrix VMAX with Enginuity 5875

  • 1. White Paper STORAGE TIERING FOR VMWARE ENVIRONMENTS DEPLOYED ON EMC SYMMETRIX VMAX WITH ENGINUITY 5876 The use of FAST VP (Virtual Pools) in VMware environments Abstract As a business’s virtualization storage needs continue to expand, the challenge of where to put data throughout its lifecycle is ever present. With EMC’s extended Fully Automated Storage Tiering with Virtual Pools (FAST VP) functionality, this problem is addressed through the automation of data movement to the right disk tier, at the right time. This white paper will demonstrate how this technology can be used effectively in an environment virtualized using VMware® technologies. September 2012
  • 2. Copyright © 2012 EMC Corporation. All Rights Reserved. EMC believes the information in this publication is accurate of its publication date. The information is subject to change without notice. The information in this publication is provided “as is”. EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license. For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com. VMware, ESXi, ESXi, vMotion, and vSphere are registered trademarks or trademarks of VMware, Inc. in the United States and/or other jurisdictions. All other trademarks used herein are the property of their respective owners. Part Number h8101.4 Storage Tiering for VMware Environments Deployed on 2 EMC Symmetrix VMAX with Enginuity 5876
  • 3. Table of Contents Executive summary.................................................................................................. 7 Audience ............................................................................................................................ 8 Terminology ....................................................................................................................... 8 Symmetrix VMAX using Enginuity 5876 .................................................................... 8 EMC Unisphere for VMAX ......................................................................................... 9 EMC Virtual Storage Integrator ............................................................................... 10 Oracle Applications ............................................................................................... 12 SwingBench .......................................................................................................... 12 Symmetrix Virtual Provisioning .............................................................................. 13 Federated Tiered Storage (FTS): Overview ............................................................... 14 Fully Automated Storage Tiering (FAST) .................................................................. 15 FAST and Fully Automated Storage Tiering with Virtual Pools (FAST VP) ............................. 15 FAST managed objects ..................................................................................................... 15 FAST VP components ........................................................................................................ 16 FAST VP allocation by FAST Policy .......................................................................... 17 FAST VP SRDF coordination .................................................................................... 18 Working with Virtual LUN VP Mobility in VMware environments ............................... 
18
Manual tiering ........................................................ 20
Pinning a device in FAST/FAST VP ...................................... 22
Changing disk and RAID type ........................................... 27
FAST VP and Oracle Applications 12 .................................... 33
Applications Architecture ............................................. 33
Working with FAST VP and Oracle Applications on VMware infrastructure . 34
Oracle Applications Tablespace Model .................................. 35
Oracle Applications implementation .................................... 36
Use case implementation ............................................... 37
Static tablespace placement ........................................... 39
Oracle Application deployment ......................................... 39
Hardware layout ....................................................... 40
FAST VP configuration ................................................. 40
Configuring FAST VP ................................................... 44
Oracle Applications case study using FAST VP performance monitoring ... 56
Order Entry application ............................................... 59
FAST VP Results ....................................................... 64
FAST VP, manual tiering, and real-world considerations ................ 66
Conclusion ............................................................ 67
References ............................................................ 68
List of Figures

Figure 1. Version 1.1 of Unisphere for VMAX ........................... 10
Figure 2. VSI 5.3 features ............................................ 11
Figure 3. VSI 5.3 Storage Viewer feature .............................. 12
Figure 4. Thin devices and thin pools containing data devices ......... 13
Figure 5. FAST managed objects ........................................ 16
Figure 6. FAST VP components .......................................... 17
Figure 7. Thin LUN distribution across pools viewed through SYMCLI .... 20
Figure 8. Thin LUN distribution across pools viewed through Unisphere . 21
Figure 9. Thin LUN displayed in VSI ................................... 21
Figure 10. Pinning a device in Unisphere .............................. 23
Figure 11. Pinning a device through SYMCLI ............................ 24
Figure 12. Validating the migration ................................... 25
Figure 13. Executing the migration .................................... 25
Figure 14. Completing and terminating the migration ................... 26
Figure 15. The thin LUN reallocated to a single pool .................. 27
Figure 16. The reallocated LUN in Unisphere ........................... 27
Figure 17. Thin LUN 26B in a RAID 6 configuration ..................... 28
Figure 18. Thin LUN 26B located in a SATA pool ........................ 29
Figure 19. FC_Pool thin pool containing the Fibre Channel disk ........ 30
Figure 20. Validate the migration ..................................... 30
Figure 21. Query the migration session ................................ 31
Figure 22. Verify and terminate the migration ......................... 31
Figure 23. Thin LUN 26B located in the FC pool ........................ 32
Figure 24. Thin LUN 26B in a RAID 1 configuration ..................... 33
Figure 25. Oracle Applications architecture ........................... 34
Figure 26. Promotion of the application from SATA to FC and EFD over time ... 38
Figure 27. Physical/virtual environment diagram ....................... 40
Figure 28. Enabling FAST VP using EMC Unisphere for VMAX .............. 41
Figure 29. Determining the state of the FAST VP engine using SYMCLI ... 42
Figure 30. Diskgroup summary .......................................... 42
Figure 31. Thin pools on the Symmetrix VMAX ........................... 43
Figure 32. Storage tier listing ....................................... 44
Figure 33. Creating a storage tier in Unisphere ....................... 45
Figure 34. Storage group for FAST VP in EMC Unisphere ................. 46
Figure 35. Storage group for FAST VP as viewed from VMware ESXi using EMC VSI ... 46
Figure 36. Creation of a FAST VP policy through CLI ................... 47
Figure 37. Listing the FAST VP policy in CLI .......................... 48
Figure 38. Creating a FAST VP policy in Unisphere ..................... 49
Figure 39. Associating a storage group to a FAST VP policy in CLI ..... 50
Figure 40. Associating a storage group to a FAST policy in Unisphere .. 51
Figure 41. FAST VP policy association with demand detail .............. 52
Figure 42. FAST VP policy management .................................. 53
Figure 43. FAST VP general settings in Unisphere ...................... 54
Figure 44. Setting FAST VP performance and movement time windows ...... 55
Figure 45. Using symvm to translate Linux database mounts on the FASTDB VM ... 57
Figure 46. Virtual disk mapping in the EMC VSI Storage Viewer feature . 58
Figure 47. Thin device bound to the pool containing SATA drives ....... 59
Figure 48. Total size of the Order Entry application .................. 60
Figure 49. Order Entry benchmark ...................................... 61
Figure 50. FAST VP move mode .......................................... 62
Figure 51. Initial track movement for the database thin device ........ 63
Figure 52. Completed track movement for the database thin device ...... 63
Figure 53. FAST policy demand usage ................................... 64
Figure 54. Mid-run of the Order Entry benchmark ....................... 65
Figure 55. Transaction response time comparing pre-FAST VP and post-FAST VP movement ... 66
Executive summary

Unlike storage arrays of the past, today's enterprise-class storage arrays contain multiple drive types and protection methodologies. This gives the storage administrator, server administrator, and application administrator the challenge of selecting the correct storage configuration, or storage class, for each application being deployed. The trend toward virtualizing the entire environment to optimize IT infrastructures often exacerbates the problem by consolidating multiple disparate applications on a small number of large devices.

Given this challenge, it is not uncommon for a single storage type (such as Fibre Channel drives), best suited for the most demanding application, to be selected for all virtual machine deployments, effectively assigning all applications, regardless of their performance requirements, to the same tier. This traditional approach is wasteful, since not all applications and data are equally performance-critical to the business. Furthermore, within applications themselves, particularly those reliant upon databases, there is also the opportunity to further diversify the storage make-up.

Making use of high-density, low-cost SATA drives for the less active applications or data, FC drives for the moderately active, and Enterprise Flash Drives for the very active allows for efficient use of storage resources, reducing the overall cost and the number of drives necessary for the virtual infrastructure. This in turn also helps to reduce energy requirements and floor space, both cost-saving items for the business.

To achieve this "tiered" storage approach in a proactive way for VMware environments, it is possible to use Symmetrix® Enhanced Virtual LUN Technology to move devices between drive types and RAID protections seamlessly inside the storage array. Symmetrix Virtual LUN technology is nondisruptive to applications and transparent to the user.
It preserves the devices' identity, and therefore there is no need to change anything in the virtual infrastructure, from VMware® ESX® hosts to virtual machines. Canonical names, file system mount points, volume manager settings, and even scripts do not need to be altered. It also preserves any TimeFinder® or Symmetrix Remote Data Facility (SRDF®) business continuity operations even as the data migration takes place.

In a very similar way, this approach to storage tiering can be automated using Fully Automated Storage Tiering, or FAST. FAST is available for thick devices as FAST DP (Disk Provisioning) and for thin devices as FAST VP (Virtual Provisioning).¹ FAST and FAST VP both use policies to manage sets of devices and the allocation of their data on available storage tiers. Based on the policy guidance and the actual workload profile over time, the FAST controller will recommend, or execute automatically, the movement of the managed devices between the storage tiers, even at the sub-LUN level.

This white paper describes a tiered storage architecture for an application running on VMware virtual machines in a VMware virtual infrastructure, and how volumes on that storage can be moved around nondisruptively using FAST VP technology, resulting in the right data on the right storage tier at the right time.

¹ Typically the term FAST is substituted for FAST DP. In addition, the engine and controller of the technology are frequently preceded by the term FAST only, though they also apply to FAST VP.

Audience

This white paper is intended for VMware administrators, server administrators, and storage administrators responsible for creating, managing, and using VMFS datastores and RDMs, as well as their underlying storage devices, for their VMware vSphere™ environments attached to a Symmetrix VMAX™ storage array running Enginuity™ 5876. The white paper assumes the reader is familiar with Oracle databases and applications, VMware environments, EMC Symmetrix, and the related software.

Terminology

Term                  Definition
Device                LUN, logical volume
Volume                LUN, logical volume
Symmwin Disk Group    A collection of physical disks that have the same physical characteristics
Unisphere             Unisphere for VMAX
SYMCLI/CLI            Solutions Enabler's Command Line Interface
Metavolume            A collection of Symmetrix devices that represent one device at the host level

Acronym/Abbreviation  Definition
LUN                   Logical Unit Number
VLUN                  Virtual LUN
TDEV                  Symmetrix Thin Device
FAST VP               Fully Automated Storage Tiering with Virtual Pools
SG                    Storage Group
RDM                   Raw Device Mapping
VMFS                  VMware Virtual Machine File System

Symmetrix VMAX using Enginuity 5876

Enginuity 5876 carries the extended and systematic feature development forward from previous Symmetrix generations. This means all of the reliability, availability, and serviceability features, all of the interoperability and host operating system coverage, and all of the application software capabilities developed by EMC and its partners continue to perform productively and seamlessly even as the underlying technology is refreshed.
EMC Unisphere for VMAX

Beginning with Enginuity 5876, Symmetrix Management Console has been transformed into EMC® Unisphere™ for VMAX™ (hereafter known simply as Unisphere), which offers big-button navigation and streamlined operations to simplify and reduce the time required to manage a data center. Unisphere for VMAX simplifies storage management under a common framework, incorporating Symmetrix Performance Analyzer, which previously required a separate interface.

You can use Unisphere to:
• Manage user accounts and roles
• Perform configuration operations (create volumes, mask volumes, set Symmetrix attributes, set volume attributes, set port flags, and create SAVE volume pools)
• Manage volumes (change volume configuration, set volume status, and create/dissolve meta volumes)
• Manage Fully Automated Storage Tiering (FAST™, FAST VP)
• Perform and monitor replication operations (TimeFinder®/Snap, TimeFinder/VP Snap, TimeFinder/Clone, Symmetrix Remote Data Facility (SRDF®), Open Replicator for Symmetrix (ORS))
• Manage advanced Symmetrix features, such as:
  o Fully Automated Storage Tiering (FAST)
  o Fully Automated Storage Tiering for virtual pools (FAST VP)
  o Enhanced Virtual LUN Technology
  o Auto-provisioning Groups
  o Virtual Provisioning
  o Federated Live Migration
  o Federated Tiered Storage (FTS)
• Monitor alerts

In addition, with the Performance monitoring option, Unisphere for VMAX provides tools for performing analysis and historical trending of Symmetrix system performance data. You can use the performance option to:
• Monitor performance and capacity over time
• Drill down through data to investigate issues
• View graphs detailing system performance
• Set performance thresholds and alerts
• View high-frequency metrics in real time
• Perform root cause analysis
• View Symmetrix system heat maps
• Execute scheduled and ongoing reports (queries), and export that data to a file
• Utilize predefined dashboards for many of the system components
• Customize your own dashboard templates

The new GUI dashboard is presented in Figure 1.

Figure 1. Version 1.1 of Unisphere for VMAX

Unisphere for VMAX, shown in the preceding figure, can be run on a number of different kinds of open systems hosts, physical or virtual. Unisphere for VMAX is also available as a virtual appliance for ESX version 4.0 (and later) in the VMware infrastructure. For more details please visit Powerlink® at http://Powerlink.EMC.com.

EMC Virtual Storage Integrator

EMC Virtual Storage Integrator (VSI) for vSphere Client version 5.x provides multiple feature sets including: Storage Viewer (SV), Path Management, Unified Storage Management, and SRDF SRA Utilities. Storage Viewer functionality extends the VMware vSphere Client to facilitate the discovery and identification of EMC Symmetrix, VPLEX™, CLARiiON®, Isilon®, VNX®, and Celerra® storage devices that are allocated to VMware ESX/ESXi™ hosts and virtual machines. Unified Storage Management simplifies the provisioning of Symmetrix VMAX virtual pooled storage for data centers, ESX servers, clusters, and resource pools. Path Management allows the user to control how datastores are accessed, while the SRA Utilities provide a framework for working with the SRDF SRA adapter in VMware vCenter Site Recovery Manager environments. These features are shown installed in Figure 2.

Figure 2. VSI 5.3 features

VSI for vSphere Client presents the underlying storage details to the virtual datacenter administrator, merging the data of several different storage mapping tools into a few seamless vSphere Client views. VSI enables you to resolve the underlying storage of Virtual Machine File System (VMFS) and Network File System (NFS) datastores and virtual disks, as well as raw device mappings (RDM). In addition, you are presented with lists of storage arrays and devices that are accessible to the ESX(i) hosts in the virtual datacenter. One of these features, the Storage Viewer, is displayed in Figure 3, demonstrating how to obtain detailed information about a LUN.
Figure 3. VSI 5.3 Storage Viewer feature

Oracle Applications

Oracle Applications is a tightly integrated family of Financial, ERP, CRM, and manufacturing application products that share a common look and feel. Using the menus and windows of Oracle Applications, users have access to all the functions they need to manage their business information. Oracle Applications is highly responsive to users, supporting a multi-window GUI that provides users with full point-and-click capability. In addition, Oracle Applications offers many other features, such as field-to-field validation and lists of values, to help users simplify data entry and maintain the integrity of the data they enter.

SwingBench

SwingBench is a GUI tool developed in Java by Dominic Giles of the Oracle Database Solutions Group. The tool is designed to generate a simulated multi-user workload and provide a graphical indication of system throughput and response times. Benchmarks provide a good substitute for what otherwise would be a daunting task of gathering hundreds of application users and training them to perform a preconfigured set of tasks.
There are four benchmarks included with SwingBench: Order Entry (JDBC), Order Entry (PL/SQL), Calling Circle, and a set of PL/SQL stubs that allow users to create their own benchmark.

Symmetrix Virtual Provisioning

Symmetrix Virtual Provisioning™, starting in Enginuity 5773, introduced a new type of host-accessible device called a thin device that can be used in many of the same ways that regular, host-accessible Symmetrix devices have traditionally been used. Unlike regular Symmetrix devices, thin devices do not need to have physical storage completely allocated at the time the devices are created and presented to a host. The physical storage that is used to supply drive space for a thin device comes from a shared thin pool that has been associated with the thin device.

A thin pool is comprised of internal Symmetrix devices called data devices that are dedicated to the purpose of providing the actual physical storage used by thin devices. When they are first created, thin devices are not associated with any particular thin pool. An operation referred to as "binding" must be performed to associate a thin device with a thin pool.

Figure 4 depicts the relationships between thin devices and their associated thin pools. There are nine thin devices associated with thin pool A and three thin devices associated with thin pool B.

Figure 4. Thin devices and thin pools containing data devices
When a write is performed to a portion of the thin device, the Symmetrix allocates a minimum allotment of physical storage from the pool and maps that storage to a region of the thin device, including the area targeted by the write. These storage allocation operations are performed in small units of storage called "thin extents." A round-robin mechanism is used to balance the allocation of thin extents across all of the data devices in the pool that are enabled and that have remaining unused capacity. A thin extent comprises twelve 64 KB tracks (768 KB). This means that the initial bind of a thin device to a pool causes one extent, or 12 tracks, to be allocated per thin device.

When a read is performed on a thin device, the data being read is retrieved from the appropriate data device in the storage pool to which the thin device is bound. Reads directed to an area of a thin device that has not been mapped do not trigger allocation operations. The result of reading an unmapped block is that a block in which each byte is equal to zero is returned.

When more storage is required to service existing or future thin devices, data devices can be added to existing thin pools. New thin devices can also be created and associated with existing thin pools.

Prior to Enginuity 5875, a thin device could only be bound to, and have extents allocated in, a single thin pool. This thin pool can, in turn, only contain Symmetrix data devices of a single RAID protection type and a single drive technology (and single rotational speed in the case of FC and SATA drives). Starting with Enginuity 5875, a thin device is still only considered bound to a single thin pool but may have extents allocated in multiple pools within a single Symmetrix. A thin device may also be moved to a different thin pool, without any loss of data or data access, by using Virtual LUN VP Mobility.
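The allocation behavior described above can be sketched as a simple model. This is illustrative only, not EMC code: it shows write-triggered allocation of 768 KB thin extents spread round-robin across a pool's data devices, the one-extent allocation at bind time, and zero-filled reads of unmapped regions. All pool and device names are hypothetical.

```python
# Illustrative model (not EMC code) of thin-device extent allocation:
# writes trigger allocation of 768 KB thin extents, balanced round-robin
# across the pool's enabled data devices; reads of unmapped regions
# return zeros without allocating anything.

TRACK_KB = 64
TRACKS_PER_EXTENT = 12
EXTENT_KB = TRACK_KB * TRACKS_PER_EXTENT  # 768 KB per thin extent

class ThinPool:
    def __init__(self, data_devices):
        self.data_devices = data_devices  # hypothetical data device names
        self.next_device = 0              # round-robin cursor

    def allocate_extent(self):
        device = self.data_devices[self.next_device]
        self.next_device = (self.next_device + 1) % len(self.data_devices)
        return device

class ThinDevice:
    def __init__(self, pool):
        self.pool = pool
        self.extent_map = {}              # extent index -> backing data device
        # The initial bind allocates one extent (12 tracks) for the device.
        self.extent_map[0] = pool.allocate_extent()

    def write(self, offset_kb):
        idx = offset_kb // EXTENT_KB
        if idx not in self.extent_map:    # allocate only on first write
            self.extent_map[idx] = self.pool.allocate_extent()

    def read(self, offset_kb, length):
        idx = offset_kb // EXTENT_KB
        if idx in self.extent_map:
            return b"<data>"              # placeholder for real track data
        return bytes(length)              # unmapped reads return all zeros

pool = ThinPool(["DATA_0", "DATA_1", "DATA_2"])
tdev = ThinDevice(pool)
tdev.write(800)   # 800 KB falls in the second 768 KB extent, allocating it
```

After the write, the device holds two extents, on `DATA_0` and `DATA_1` respectively, illustrating the round-robin balancing across data devices.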
Virtual LUN VP Mobility provides the ability to migrate a thin device from one thin pool to another. If the LUN to be moved is part of a FAST VP policy, it may only be moved to one of the thin pools in the policy.

Federated Tiered Storage (FTS): Overview

Introduced with Enginuity 5876, Federated Tiered Storage (FTS) allows LUNs that exist on external arrays to be used to provide physical storage for Symmetrix VMAX arrays. The external LUNs can be used as raw storage space for the creation of Symmetrix devices in the same way internal Symmetrix physical drives are used. These devices are referred to as eDisks. Data on the external LUNs can also be preserved and accessed through Symmetrix devices. This allows the use of Symmetrix Enginuity functionality, such as local replication, remote replication, storage tiering, data management, and data migration, with data that resides on external arrays.
Fully Automated Storage Tiering (FAST)

Fully Automated Storage Tiering (FAST) automates the identification of data volumes for the purposes of relocating application data across different performance/capacity tiers within an array, or to an external array using Federated Tiered Storage (FTS).²

The primary benefits of FAST include:
• Eliminating the need to manually re-tier applications when performance objectives change over time
• Automating the process of identifying data that can benefit from Enterprise Flash Drives, or that can be kept on higher-capacity, less-expensive SATA drives without impacting performance
• Improving application performance at the same cost, or providing the same application performance at lower cost. Cost is defined as acquisition (both hardware and software), space/energy, and management expense
• Optimizing and prioritizing business applications, allowing customers to dynamically allocate resources within a single array
• Delivering greater flexibility in meeting different price/performance ratios throughout the lifecycle of the information stored

FAST and Fully Automated Storage Tiering with Virtual Pools (FAST VP)

EMC Symmetrix FAST (FAST DP) and FAST VP automate the identification of data volumes for the purposes of relocating application data across different performance/capacity tiers within an array. FAST operates on standard Symmetrix devices; data movements executed between tiers are performed at the full-volume level. FAST VP operates on virtual devices. As such, data movement can be performed at the sub-LUN level, and a single thin device may have extents allocated across multiple thin pools within the array. Because FAST DP and FAST VP support different device types – standard and virtual, respectively – they can both operate simultaneously within a single array.³ Aside from some shared configuration parameters, the management and operation of each are separate.
FAST managed objects

There are three main elements related to the use of both FAST and FAST VP on Symmetrix VMAX, graphically depicted in Figure 5. These are:
• Storage tier — A shared resource with common technologies

² Other than the brief overview provided, Federated Tiered Storage will not be addressed in this particular white paper. For more information on FTS, refer to the Design and Implementation Best Practices for EMC Symmetrix Federated Tiered Storage (FTS) technical note available at http://Powerlink.EMC.com.
³ This holds true for all of the Symmetrix family except the VMAXe/VMAX 10K, which only supports FAST VP, being a completely thin-provisioned array.
• FAST policy — Manages a set of tier usage rules that provide guidelines for data placement and movement across Symmetrix tiers, to achieve service levels, for one or more storage groups
• Storage group — A logical grouping of devices for common management

Figure 5. FAST managed objects

Each of the three managed objects can be created and managed by using either Unisphere for VMAX (Unisphere) or the Solutions Enabler Command Line Interface (SYMCLI).

FAST VP components

There are two components of FAST VP – the FAST controller and the Symmetrix microcode, or Enginuity. The FAST controller is a service that runs on the Symmetrix VMAX service processor. The Symmetrix microcode is a part of the Enginuity operating environment that controls components within the array. When FAST VP is active, both components participate in the execution of two algorithms – the intelligent tiering algorithm and the allocation compliance algorithm – to determine appropriate data placement.

The intelligent tiering algorithm uses performance data collected by the microcode, as well as supporting calculations performed by the FAST controller, to issue data movement requests to the Virtual LUN (VLUN) VP data movement engine. The allocation compliance algorithm enforces the upper limits of storage capacity that can be used in each tier by a given storage group, also by issuing data movement requests to the VLUN VP data movement engine.

Performance time windows can be defined to specify when the FAST controller should collect performance data, upon which analysis is performed to determine the appropriate tier for devices. By default, this will occur 24 hours a day. Defined data movement windows determine when to execute the data movements necessary to move data between tiers. Data movements performed by the microcode are achieved by moving allocated extents between tiers.
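The interplay of the two window types described above can be sketched as follows. This is an illustrative model, not EMC code: performance collection runs inside the defined performance windows (all day by default), while data movement requests are only issued inside the defined movement windows. The specific window values are hypothetical.

```python
# Illustrative sketch (not EMC code) of FAST VP time-window gating:
# statistics are collected during performance windows, and extent
# movements are only executed during data movement windows.

from datetime import datetime

def in_window(now, windows):
    """windows: list of (start_hour, end_hour) tuples, end exclusive."""
    return any(start <= now.hour < end for start, end in windows)

PERF_WINDOWS = [(0, 24)]           # default: collect performance data 24 hours a day
MOVE_WINDOWS = [(20, 24), (0, 6)]  # hypothetical: only move data overnight

now = datetime(2012, 9, 1, 22, 30)
collect = in_window(now, PERF_WINDOWS)  # stats are collected at 22:30
move = in_window(now, MOVE_WINDOWS)     # 22:30 falls in the 20:00-24:00 window
```

A midday timestamp would still permit collection (the performance window covers the whole day) while suppressing movements, matching the default behavior described in the text.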
The size of a data movement can be as small as 768 KB, representing a single allocated thin device extent, but more typically will be an entire extent group, which is 7,680 KB in size (10 thin extents).
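The allocation compliance algorithm mentioned above can be illustrated with a small sketch. This is not EMC code: it simply shows the idea of per-tier capacity caps from a policy being checked against a storage group's current allocations, with any excess flagged for movement. The policy percentages and tier names are hypothetical.

```python
# Illustrative sketch (not EMC code) of the allocation compliance idea:
# a FAST VP policy caps the percentage of a storage group's allocated
# capacity that may reside on each tier; capacity over the cap is
# flagged for movement to another tier.

def compliance_excess_gb(policy_limits, allocations_gb):
    """Return GB per tier that exceed the policy's capacity limits.

    policy_limits  -- tier name -> max percent of the group's total capacity
    allocations_gb -- tier name -> GB currently allocated on that tier
    """
    total_gb = sum(allocations_gb.values())
    excess = {}
    for tier, pct in policy_limits.items():
        cap_gb = total_gb * pct / 100.0
        used_gb = allocations_gb.get(tier, 0)
        if used_gb > cap_gb:
            excess[tier] = used_gb - cap_gb
    return excess

# A hypothetical three-tier policy: at most 10% EFD, 40% FC, 100% SATA.
policy = {"EFD": 10, "FC": 40, "SATA": 100}
allocated = {"EFD": 200, "FC": 300, "SATA": 500}  # 1,000 GB total
print(compliance_excess_gb(policy, allocated))    # EFD is 100 GB over its cap
```

In this example the 200 GB on EFD exceeds the 10 percent (100 GB) cap, so 100 GB would be queued for demotion, which is the kind of movement request the compliance algorithm issues to the VLUN VP engine.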
FAST VP has two modes of operation: Automatic or Off. When operating in Automatic mode, data analysis and data movements occur continuously during the defined data movement windows. In Off mode, performance statistics continue to be collected, but no data analysis or data movements take place. Figure 6 shows the FAST controller operation.

Figure 6. FAST VP components

Note: For more information on FAST VP specifically, please see the technical note FAST VP for EMC Symmetrix VMAX Theory and Best Practices for Planning and Performance available at http://Powerlink.EMC.com.

FAST VP allocation by FAST Policy

A new feature for FAST VP in 5876 is the ability for a device to allocate new extents from any thin pool participating in the FAST VP policy. When this feature is enabled, FAST VP will attempt to allocate new extents in the most appropriate tier, based upon performance metrics. If those performance metrics are unavailable, it will default to allocating in the pool to which the device is bound. If, however, the chosen pool is
full, regardless of performance metrics, then FAST VP will allocate from one of the other thin pools in the policy. As long as there is space available in one of the thin pools, new extent allocations will succeed.

This new feature is enabled at the Symmetrix array level and applies to all devices managed by FAST VP. The feature cannot, therefore, be applied to some FAST VP policies and not others. By default it is disabled, and any new allocations will come from the pool to which the device is bound. A pinned device is not considered to have performance metrics available, and therefore new allocations will be done in the pool to which the device is bound.

FAST VP SRDF coordination

The use of FAST VP with SRDF devices is fully supported; however, FAST VP operates within a single array, and therefore will only impact the RDF devices on that array. Previously there was no coordination of the data movement between RDF pairs. Each device's extents would move according to the manner in which they were accessed on that array, source or target. For instance, an R1 device will typically be subject to a read/write workload, while the R2 will only experience the writes that are propagated across the link from the R1. Because the reads to the R1 are not propagated to the R2, FAST VP on the R2 side will make its decisions based solely on the writes, and therefore the R2 data will likely not be moved to the same tiers, in the same amounts, as on the R1.

To rectify this problem, EMC introduced FAST VP SRDF coordination in 5876. FAST VP SRDF coordination allows the R1 performance metrics to be transmitted across the link and used by the FAST VP engine on the R2 array to make promotion and demotion decisions. FAST VP SRDF coordination is enabled or disabled at the storage group that is associated with the FAST VP policy. The default state is disabled.
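The allocation order described for the "allocation by FAST policy" feature can be summarized in a short sketch. This is illustrative only, not EMC code: prefer the most appropriate tier when metrics exist, fall back to the bound pool when they do not (for example, a pinned device), and fall back to any pool in the policy with free space when the chosen pool is full. Pool names and the rank/free-space representation are hypothetical.

```python
# Illustrative sketch (not EMC code) of new-extent pool selection when
# "allocation by FAST policy" is enabled: best tier by metrics, else the
# bound pool, else any pool in the policy with free space.

def choose_allocation_pool(pools, bound_pool, metrics_available, feature_enabled):
    """pools: dicts like {"name": "EFD_Pool", "free_gb": 5, "rank": 0},
    where rank 0 is the most appropriate tier by performance metrics."""
    if not feature_enabled or not metrics_available:
        preferred = bound_pool  # default behavior: allocate from bound pool
    else:
        preferred = min(pools, key=lambda p: p["rank"])
    if preferred["free_gb"] > 0:
        return preferred["name"]
    # Chosen pool is full: any other pool in the policy with space will do.
    for p in pools:
        if p["free_gb"] > 0:
            return p["name"]
    raise RuntimeError("no free space in any pool of the policy")

efd = {"name": "EFD_Pool", "free_gb": 0, "rank": 0}    # full
fc = {"name": "FC_Pool", "free_gb": 100, "rank": 1}
sata = {"name": "SATA_Pool", "free_gb": 500, "rank": 2}
pools = [efd, fc, sata]

choose_allocation_pool(pools, sata, metrics_available=True, feature_enabled=True)
# EFD is preferred by metrics but full, so the allocation falls back to FC_Pool.
```

With the feature disabled, or for a pinned device without metrics, the same call degrades to the bound pool, matching the default behavior described in the text.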
FAST VP SRDF coordination is supported for single and concurrent SRDF pairings (R1 and R11 devices) in any mode of operation: synchronous, asynchronous, or adaptive copy. FAST VP SRDF coordination is not supported for SRDF/Star, SRDF/EDP, or Cascaded SRDF, including R21 and R22 devices.

Working with Virtual LUN VP Mobility in VMware environments

Symmetrix Virtual LUN technology enables the seamless movement of volumes within a Symmetrix without disrupting the hosts, applications, or replication sessions. Prior versions permitted the relocation of fully provisioned (thick) FBA and CKD devices across drive types (capacity or rotational speed) and RAID protection types. VLUN VP provides "thin-to-thin" mobility, enabling users to meet tiered storage requirements by migrating thin FBA LUNs between virtual pools in the same array. Virtual LUN VP Mobility gives administrators the option to "re-tier" a thin volume or set of thin
  • 19. volumes by moving them between thin pools in a given FAST or FAST VP configuration. This manual “override” option helps FAST/FAST VP users respond rapidly to changing performance requirements or unexpected events. Virtual LUN VP (VLUN) migrations are session-based – each session may contain multiple devices to be migrated at the same time. There may also be multiple concurrent migration sessions. At the time of execution of a migration, a migration session name is specified. This session name is subsequently used for monitoring and managing the migration. While an entire thin device will be specified for migration, only thin device extents that are allocated will be relocated. Thin device extents that have been allocated, but not written to (for example, pre-allocated tracks), will be relocated but will not cause any actual data to be copied. New extent allocations that occur as a result of a host write to the thin device during the migration will be satisfied from the migration target pool. When using VLUN VP mobility with FAST VP, the destination pool must be part of the FAST VP policy. While a VLUN migration is active, FAST VP will not attempt to make any changes. Once the migration is complete, however, all the tracks will be available for re-tiering. To prevent movements post-migration, the relocated device(s) can be pinned. The advances in VLUN enable customers to move Symmetrix thin devices from one thin pool to another thin pool on the same Symmetrix without disrupting user applications and with minimal impact to host I/O. 
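The migration rules above (only allocated extents are relocated; writes arriving mid-migration are allocated from the target pool) can be sketched as a toy model. This is purely illustrative; the actual extent relocation is performed inside Enginuity, and the class and names below are invented for the example:

```python
# Toy model of VLUN VP migration behavior for a thin device (hypothetical
# illustration): only allocated extents are relocated, and host writes
# that arrive during the migration allocate directly from the target pool.

class ThinDevice:
    def __init__(self, total_extents, allocated, pool):
        self.pool_of = {e: pool for e in allocated}   # extent -> pool
        self.total_extents = total_extents            # device size in extents
        self.migrating_to = None

    def start_migration(self, target_pool):
        self.migrating_to = target_pool

    def relocate_next(self):
        """Relocate one not-yet-moved allocated extent to the target pool."""
        for e, p in self.pool_of.items():
            if p != self.migrating_to:
                self.pool_of[e] = self.migrating_to
                return e
        return None        # nothing left to move: session is "Migrated"

    def host_write(self, extent):
        if extent not in self.pool_of:
            # A new allocation during migration comes from the target pool.
            self.pool_of[extent] = self.migrating_to or "bound_pool"

# 100-extent thin device with only three extents allocated in SATA_Pool:
dev = ThinDevice(total_extents=100, allocated=[0, 1, 2], pool="SATA_Pool")
dev.start_migration("FC_Pool")
dev.host_write(50)                       # new host write mid-migration
while dev.relocate_next() is not None:
    pass
# Only the 4 allocated extents consume pool space; all now sit in FC_Pool.
```

Note that the unallocated 96 extents never consume space in either pool, which is why migrating a mostly-empty thin device completes quickly.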
Users may move thin devices between thin pools to:

• Change the disk media on which the thin devices are stored
• Change the thin device's underlying RAID protection level
• Consolidate a thin device that was managed by FAST VP to a single thin pool
• Move all of a thin device's extents from one thin pool to another

Beginning with Enginuity 5876, VLUN VP Mobility users have the option to move only part of a thin device from one source pool to one destination pool. This feature can be very useful, for instance, in an environment where a subset of financial data is heavily accessed each month for reporting purposes. If all that financial data is stored on a particular LUN under FAST VP control, over the course of the month the data will most likely be re-tiered up to EFD due to access patterns. Once the month's reporting is complete, however, that data has aged and the next month's data is paramount. Rather than wait for FAST VP to down-tier the data through the up-tiering of the new month's data, the user can simply move all tracks for that LUN from EFD down to FC or SATA, freeing up all the EFD space for promotion of the current month's data.

In a VMware environment, a customer may have any number of use cases for VLUN. For instance, if a customer elects not to use FAST VP, manual tiering is achievable through VLUN. A heavily used datastore residing on SATA drives may need improved performance; the thin device underlying that datastore could be manually moved to FC or EFD. Conversely, a datastore residing on FC that houses data needing to be archived could be moved to a SATA device which, although a lower-performing disk tier, has a much smaller cost per GB. In a FAST VP environment, customers may wish to circumvent the automatic process in cases where they know all the data on a thin device has changed its function. Take, for instance, the previous example of archiving: a datastore containing information now designated as archive could be removed from FAST VP control and migrated to the thin pool built on the disk technology most suited for archived data, SATA.

Following are two examples of using VLUN in a VMware environment:

• Manual tiering
• Changing disk and RAID type

Manual tiering

Two screenshots, Figure 7 (the SYMCLI output) and Figure 8 (the Unisphere GUI), show the distribution of a thin device, or TDEV, across three thin pools representing three different disk technologies, captured at different times in the track movement.

Figure 7. Thin LUN distribution across pools viewed through SYMCLI
Figure 8. Thin LUN distribution across pools viewed through Unisphere

This TDEV is currently under FAST VP control and is presented to an ESXi cluster (Figure 9).

Figure 9. Thin LUN displayed in VSI

It has been determined that the data on this TDEV needs to be archived, so the desire is to place all of it on SATA technology, both because the performance requirements match that disk type and to reduce the cost to the business of keeping this data. Because of the archiving requirement, it will not be necessary to keep this TDEV under FAST VP control going forward; however, the business cannot be certain that future developments will not change the requirements of the data on the disk. Also, removing this TDEV from FAST VP control would require removing the policy associated with its storage group. So rather than remove the existing FAST VP policy, the user can prevent the FAST engine from examining the device's statistics and making changes through a simple process known as "pinning." Before beginning the VLUN migration of the data back to the SATA pool, therefore, the device should be pinned.

Pinning a device in FAST/FAST VP

Pinning a device can be done in Unisphere or through the CLI. In Unisphere, shown in Figure 10, navigate through the array/storage/volumes path, highlight the device, and click the double-arrow icon on the bottom menu; from the pop-up menu, select "Pin". By pinning the device there is no concern that once the migration completes, the FAST engine will recommence movement of the extent groups belonging to that device. Note that even if a device is not pinned prior to migration, the FAST engine will not attempt any movements while the device is under migration; however, if it is not pinned, data movements may begin again immediately following termination of the migration session.
Figure 10. Pinning a device in Unisphere
For the CLI, use symdev to pin the device, as in Figure 11:

Figure 11. Pinning a device through SYMCLI

Once the device is pinned, the migration can begin. VLUN migrations, just like thick LUN migrations, use the SYMCLI command symmigrate. For thin device migration, a target pool must be supplied. Start by validating the migration; although this is an optional step, it is recommended to ensure the task is permitted. Create a text file that contains the device(s) to be migrated back to the single pool. For this migration the thin pool is named SATA_Pool. Note that even though this example moves the data for device 17B back to the pool to which it is bound, it is possible to move the device to a different pool, and symmigrate will bind the device to that pool, completely transparently to the hosts accessing the device. Recall, however, that if the device is under FAST VP control, only a thin pool in the policy can be the migration target pool. The contents of the text file used to perform the migration are shown in Figure 12, along with the SYMCLI command to validate the proposed migration.

Figure 12. Validating the migration

Once validated, the migration can be started as shown in Figure 13. While the migration is in process, the session can be queried, which is also shown.

Figure 13. Executing the migration
Once the migration is complete, the status will change from "SyncInProg" to "Migrated". When it is in the migrated state, the session should be terminated, ending the VLUN migration, as seen in Figure 14.

Figure 14. Completing and terminating the migration

Viewing the TDEV now with SYMCLI in Figure 15, or with Unisphere in Figure 16, one sees that all data has been returned to the SATA_Pool thin pool.
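The pin/validate/establish/query/terminate sequence walked through in the figures lends itself to scripting. The sketch below builds the command lines and runs them only if the SYMCLI binaries are present. Assumptions to note: the Symmetrix ID, device file, and session name are hypothetical, and the exact symmigrate and symdev option spellings should be verified against your Solutions Enabler version (symmigrate -h) before use.

```python
# Sketch of scripting the VLUN migration workflow shown in the figures.
# The symmigrate/symdev option spellings follow the examples in this
# paper but should be verified against your Solutions Enabler release.
import shutil
import subprocess

SID = "046"            # hypothetical Symmetrix ID
DEV_FILE = "devs.txt"  # text file listing the device(s) to migrate
SESSION = "mig_17B"    # session name used to monitor/manage the migration
TARGET_POOL = "SATA_Pool"

def migrate_cmds(action: str) -> list:
    """Build a symmigrate command line for one step of the session."""
    cmd = ["symmigrate", "-sid", SID, "-name", SESSION, "-f", DEV_FILE]
    if action in ("validate", "establish"):
        # Target pool is only needed when defining/starting the migration.
        cmd += ["-tgt_pool", "-pool", TARGET_POOL]
    return cmd + [action]

def run_workflow():
    # Pin first so FAST VP does not re-tier the device after migration.
    subprocess.run(["symdev", "-sid", SID, "pin", "017B", "-nop"], check=True)
    for step in ("validate", "establish", "query", "verify", "terminate"):
        subprocess.run(migrate_cmds(step), check=True)

if shutil.which("symmigrate"):   # only run on a host with SYMCLI installed
    run_workflow()
```

In practice the query step would be repeated in a loop until the session reports "Migrated" before issuing the terminate.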
Figure 15. The thin LUN reallocated to a single pool

Figure 16. The reallocated LUN in Unisphere

Changing disk and RAID type

This example shows how a TDEV can be moved from one disk technology and RAID configuration to another. One reason a customer might perform this type of migration is for a device not under FAST VP control, when business requirements demand a change in the performance characteristics of the application(s) residing on the device. In this example the migration will change the tier of the TDEV from a SATA RAID 6 thin pool to an FC RAID 1 thin pool. Figure 17 and Figure 18 show the device, 26B, residing on SATA disk in a RAID 6 configuration, bound to thin pool SATA_Pool.

Figure 17. Thin LUN 26B in a RAID 6 configuration
Figure 18. Thin LUN 26B located in a SATA pool

The TDEV will now be migrated from thin pool SATA_Pool to FC_Pool, which is built on FC technology, as seen in Figure 19.
Figure 19. FC_Pool thin pool containing the Fibre Channel disk

First, the device for the migration is validated, as seen in Figure 20.

Figure 20. Validate the migration

Once the validation completes successfully, the migration can follow, and the process can be queried for status as demonstrated in Figure 21.
Figure 21. Query the migration session

In this case, with the device having only part of its 100 GB allocated, the migration completes quickly. If there is any question as to whether the session is complete, run the SYMCLI verify command first to show the session is migrated, then a terminate, as in Figure 22.

Figure 22. Verify and terminate the migration
Recall that all this migration activity has been transparent to the user and nondisruptive to the application. Viewing the configuration of device 26B in Unisphere, as highlighted in Figure 23, shows that it has indeed changed RAID configuration and disk technology.

Figure 23. Thin LUN 26B located in the FC pool

By running a refresh in EMC's Virtual Storage Integrator, as in Figure 24, one can see that the thin device reflects the new configuration.
Figure 24. Thin LUN 26B in a RAID 1 configuration

FAST VP and Oracle Applications 12

Applications Architecture

The Oracle Applications Architecture is a framework for multi-tiered, distributed computing that supports Oracle Applications products. In this model, various servers or services are distributed among three levels, or tiers. A tier is a logical grouping of services, potentially spread across more than one physical or virtual machine. The three-tier architecture that comprises an Oracle E-Business Suite installation is made up of the database tier, which supports and manages the Oracle database; the application tier, which supports and manages the various Applications components and is sometimes known as the middle tier; and the desktop tier, which provides the user interface through an add-on component to a standard web browser. The simplest architecture for Oracle Applications is to have all tiers except the desktop tier installed on a single server. This configuration might be acceptable in a development environment, but in production environments scaling would quickly become an issue. In order to mimic a more realistic production environment, therefore, the architecture of the FAST VP testing environment is built with a separate physical application tier and database tier, as shown in Figure 25. A third desktop tier houses the SwingBench application, representing the users accessing the system.

Figure 25. Oracle Applications architecture

Working with FAST VP and Oracle Applications on VMware infrastructure

As already mentioned, because of the diversity of Oracle Applications, with hundreds of different modules within a single product, deploying them appropriately on the right tier of storage is a daunting task. Implementing them in a VMware environment that utilizes FAST VP demonstrates how a customer can achieve proper performance and cost savings at the same time. For this study, the latest Oracle Applications release, 12, was installed and configured. There are two principal benefits to using this release. First, Oracle pre-packages release 12 with version 11g of the Oracle database; version 11g is Oracle's latest database release and represents a significant advancement over 10g in performance and functionality. Second, Oracle has moved away from the practice of having two tablespaces, and hence at least two datafiles, per application. Prior to release 12, each Applications module had its own set of tablespaces and datafiles, one for the data and one for the index; with over 200 schemas, managing a database of over 400 tablespaces and datafiles was, and is, a sizable undertaking. The new approach that Oracle uses in release 12 is called the Oracle Applications Tablespace Model, or OATM.

Oracle Applications Tablespace Model

Oracle Applications release 12 utilizes as its standard a modern infrastructure for tablespace management, the Oracle Applications Tablespace Model (OATM). The OATM is similar to the traditional model in retaining the system, undo, and temporary tablespaces. The key difference is that Applications products in an OATM environment share a much smaller number of tablespaces, rather than having their own dedicated tablespaces. Applications schema objects are allocated to the shared tablespaces based on two main factors: the type of data they contain, and I/O characteristics such as size, life span, access methods, and locking granularity. For example, tables that contain seed data are allocated to a different tablespace from tables that contain transactional data. In addition, while most indexes are held in the same tablespace as their base table, indexes on transaction tables are held in a single tablespace dedicated to such indexes.
The OATM provides a variety of benefits, summarized in the list below and discussed in more detail later:

• Simplifies maintenance and recovery by using far fewer tablespaces than the older model
• Makes best use of the restricted number of raw devices available in Oracle Real Application Clusters (Oracle RAC) and other environments where every tablespace requires its own raw device
• Utilizes locally managed tablespaces, enabling more precise control over unused space and hence reducing fragmentation
• Takes advantage of automatic segment space management, eliminating the need for manual space management tasks
• Increases block-packing compared to the older model, reducing the overall number of buffer gets and improving runtime performance
• Maximizes the usefulness of wide disk stripe configurations

The OATM uses locally managed tablespaces, which enable extent sizes either to be determined automatically (autoallocate) or for all extents to be made the same, user-specified size (uniform). (An extent is a set of contiguous blocks allocated in the database, in this case in the datafile associated with the tablespace.) This choice of extent management types means that locally managed tablespaces offer greater flexibility than the dictionary-managed tablespaces used in the traditional tablespace model. However, when using uniform extents with locally managed tablespaces, the extent size must be chosen with care: too small a size can have an adverse effect on space management and performance. A further benefit of locally managed tablespaces, and hence of OATM, is the introduction of automatic segment space management, a simpler and more efficient way of managing space within a segment. It can require more space, but it eliminates the need for traditional manual segment space management tasks such as specifying and tuning schema object storage parameters like PCTUSED. This and related storage parameters are only used to determine space allocation for objects in dictionary-managed tablespaces, and have no meaning in the context of locally managed tablespaces.

Oracle Applications implementation

A customer implementation of Oracle Applications is not a quick process. The installation itself is only the first part of what can be an endeavor lasting many months or longer. Although there are almost 200 application modules in Oracle Applications, customers rarely, if ever, use all of them. They use a selection, or perhaps a bundle such as Financials or CRM, and these modules are then implemented (typically) in a phased approach. The transition from an existing applications system, or the implementation of a new one, takes time. How that system eventually will be used, and more importantly how that database will be accessed, presents a real challenge for the system administrator and database administrator. These individuals are tasked with providing the right performance for the right application at the right price.
In other words, both performance optimization and cost optimization are extremely important to them; however, striking the balance between the two is not an easy task. The three disk technologies covered in this paper (SATA, FC, and EFD) have differing performance characteristics and very different costs. How, for instance, will these administrators decide which parts of the database belong on which disk technologies, the cost-and-performance balancing act they attempt each day? This is made even more difficult under the new Oracle Applications Tablespace Model. Oracle's new model does a good job of high-level database object consolidation, with fewer tablespaces and fewer datafiles, but since the application modules are no longer separated into individual tablespaces and datafiles, there has been no practical way to put different modules on different tiers of storage, until now. FAST VP is the perfect complement to the manner in which Oracle implements the database in release 12 of the Oracle application suite. In fact, the entire database of user data can be placed on a single mount point on a VMware virtual disk and yet still be spread over the disk technologies that match the business requirements. The simplicity of this deployment model is enabled by FAST VP. The following section presents one scenario of how this might be done in a production environment.
Use case implementation

In this example, Oracle Applications release 12 was installed using Oracle's pre-configured and seeded sample database, the Vision database, all on a VMware infrastructure. Each of the two tiers in the implementation was a virtual machine. The entire installation is known as the Vision Demo system. The Vision Demo system installation includes the licensing of all of Oracle's application modules along with a database that contains data for these modules. Such a system allows customers to learn how to use Oracle Applications and provides a good foundation for the type of testing documented herein. In this FAST VP use case environment, all products begin their storage lifecycle on an inexpensive tier, SATA. Although these drives are slower than Fibre Channel or Flash drives, they can easily meet the performance needs of the early phases of an implementation. A customer may spend many months converting data, entering financial charts of accounts, and doing other pre-production tasks. Due to the low I/O and less stringent response time requirements during this period, it is unnecessary to use a storage tier with better performance characteristics than SATA. This is also a great cost saver for a company, since SATA has a lower cost per GB than either Fibre Channel drives or EFDs. If, however, a customer were moving an existing Oracle Applications environment under FAST VP control, they might want to start the database on a higher tier of storage, such as Fibre Channel, to avoid any performance implications while the FAST VP engine determines where best to place the data. Recall that FAST VP will both promote and demote data, so eventually the data will be placed on the correct tier of disk. The caveat, of course, is that there must be sufficient Fibre Channel disk to support the entire database at the point FAST VP is implemented.
So once the implementation phase comes to a close and application modules are brought live, how will FAST VP recognize the need to move the data representing those modules from SATA to a higher-performing storage tier? Take the example of a Financials implementation. A customer "goes live" with accounts receivable, accounts payable, and general ledger. Currently these modules exist in single tablespaces spread across a few datafiles that are stored on a single Linux mount point, created on a virtual disk in a VMware VMFS datastore. That datastore is created on a single thin LUN bound to a pool of SATA disks, so the data in the modules is actually spread across all those pooled disks. In a traditional storage implementation this might seem an impossible task: with the data spread across so many different disks, how would one find the most accessed data and move it to a different storage tier? The true genius of FAST VP is that from the module or user perspective, there is no need to know anything about how the application is accessed. In other words, FAST VP works with complete user transparency. Here is how it is accomplished: on the Symmetrix, the storage administrator creates a set of tiers, each representing a different type of disk in the array (for example SATA, Fibre Channel, and Flash), each able to have its own type of RAID protection (RAID 6, RAID 1, and RAID 5, respectively). Each of those tiers is then associated with a thin pool that contains one of the aforementioned disk types. In this example, one of those tiers is associated with the SATA pool in which the database containing the Financial modules is located. A policy is then set up that dictates how much space is to be made available on each tier of disk.

As the production workload ramps up in these Financial modules, FAST VP gathers statistics on how the data is being accessed in those thin pools. As I/O increases across the Financial modules, FAST VP is able to determine that portions of the data located in the SATA pool need to be up-tiered, to Fibre Channel, Flash, or both. Once determined, the data is moved automatically, as represented in Figure 26; no user intervention is required.

Figure 26. Promotion of the application from SATA to FC and EFD over time

The amount of data moved represents only the data that is being heavily accessed within the application modules. So though one of these Financial application modules may be many gigabytes in size, as determined by the tables and indexes that make it up, FAST VP is only going to move the data that is being heavily accessed. Since only a portion of the data moves, less disk space on the higher, more expensive tiers is used, not only saving money but leaving more space available for other heavily accessed applications. As other applications (for example, GL or other Oracle Applications modules) are brought live, they too will benefit from FAST VP. Conversely, it should not be forgotten that FAST VP will also demote. The financial data that FAST VP moved to the higher tier may be slated for archiving next month; when access patterns and workloads change, FAST VP will recognize it and move the data accordingly, in this case back to SATA. In the end the customer benefits by having the right data placed on the right storage type at the right time and at the right cost.
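The sub-LUN behavior described above can be sketched as follows. This is a toy model, not FAST VP's actual algorithm (real promotion decisions use weighted per-extent-group I/O statistics and are bounded by the per-tier capacities the policy allows):

```python
# Toy model of sub-LUN tiering (illustrative simplification of FAST VP):
# only the hottest extents of a LUN are promoted, within the capacity
# each tier contributes under the policy.

def place_extents(io_per_extent, tier_capacity):
    """Assign extents to tiers, hottest first, honoring tier capacities.

    io_per_extent: {extent_id: recent I/O count}
    tier_capacity: list of (tier_name, max_extents), fastest tier first
    Returns {extent_id: tier_name}.
    """
    placement = {}
    # Rank extents from hottest to coldest.
    ranked = sorted(io_per_extent, key=io_per_extent.get, reverse=True)
    for tier, capacity in tier_capacity:
        for ext in ranked[:capacity]:
            placement[ext] = tier
        ranked = ranked[capacity:]
    return placement

# 10-extent LUN: extents 0 and 1 are hot (month-end Financials activity).
io_stats = {e: (1000 if e < 2 else 5) for e in range(10)}
tiers = [("EFD", 1), ("FC", 3), ("SATA", 100)]   # policy-limited capacities
placement = place_extents(io_stats, tiers)
# Hottest extent lands on EFD, the next three on FC, the rest stay on SATA.
```

Because only the hot fraction of the module moves up, the expensive tiers hold a small slice of the database while the bulk remains on SATA, which is the economic argument made in the text.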
Static tablespace placement

As Oracle Applications is powered by an Oracle database, there are a number of database objects found in all Oracle databases: temp files, system files, undo files, and redo logs. These tablespaces and their respective datafiles make up a small part of the database but are essential components that are accessed, in the case of the redo logs, constantly. These Oracle tablespaces are not part of the Oracle Applications Tablespace Model. Unlike individual application modules, therefore, it is possible to place these Oracle datafiles and logfiles on different mount points and/or different disk technologies from the start. Thus one may choose not to make them part of a FAST VP policy and instead place them on high-performing disk permanently. In general, these tablespaces and logfiles do not grow significantly in size compared to the user data portion of the database, nor do their performance characteristics change drastically over time. In addition, when using EMC's replication technologies, it is always best practice to separate at least the redo logs and temp tablespaces (for details on Oracle running on EMC systems, see the TechBook Oracle Databases on EMC Symmetrix Storage Systems on www.EMC.com). Although it is possible to follow the same strategy presented here and put all components on SATA under a FAST VP policy, a production implementation will access these files frequently, and the data would therefore be moved to higher tiers. Given the limited disk space these files occupy and the relative certainty of their access patterns, having the FAST VP engine analyze this data is unnecessary and in fact adds overhead. The following study puts this into practice, separating out these files, both for the reasons above and to accurately account for the sub-LUN movements of the application module.
Oracle Application deployment

The VMware environment deployed in this study consists of three ESXi 5.0 servers with a total of four virtual machines, listed in Table 1. The environment is managed by a VMware vCenter Server. Figure 27 is a visual representation of the environment.

Table 1. Example environment

Server              Name            Model       OS & Version      CPUs   RAM (GB)   Disk
Database Tier       fastdb          VMware VM   OEL 5 64-bit      4      16         SAN
Applications Tier   fastapp         VMware VM   OEL 5 64-bit      1      8          SAN
Management Server   fastmgmt        VMware VM   Win2008 64-bit    1      4          SAN
Virtual Center      sibu_infra_vc   VMware VM   Win2008 64-bit    2      4          Local/SAN
EMC Symmetrix       000198700046    VMAX 10K    5876 microcode    43     -          62 TB usable total
Hardware layout

Figure 27. Physical/virtual environment diagram

FAST VP configuration

The first step in showcasing FAST VP functionality in a virtualized Oracle Applications environment is to ensure that FAST VP is enabled. This can be done through either of the management tools for Symmetrix: the Solutions Enabler CLI (SYMCLI) or Unisphere. The process of enabling FAST VP using the Unisphere interface is shown in Figure 28, and the result of the change is shown in Figure 29 using the SYMCLI interface.
Figure 28. Enabling FAST VP using EMC Unisphere for VMAX
Figure 29. Determining the state of the FAST VP engine using SYMCLI

After enabling the engine, a number of steps follow: creation of FAST VP tiers, storage groups, and FAST VP policies. Unisphere provides an easy-to-understand interface for these activities; however, the objects can also be created using the command line (SYMCLI). Figure 30 contains a list of the disk groups, showing the disk technologies available in the Symmetrix VMAX 10K. As seen in the figure, the array has Fibre Channel, SATA, and EFD (Flash) drives.

Figure 30. Diskgroup summary
In order to use FAST VP, at least two different disk technologies are required. From these disks the thin pools can be built. To demonstrate the use of FAST VP in an Oracle Applications environment, three pools were built, matching the three disk technologies: FC_Pool, SATA_Pool, and EFD_Pool. The detailed procedure for creating thin pools is not included here, as the Virtual Provisioning feature has been available since release 5773 of the microcode. The pools are shown in Figure 31.

Figure 31. Thin pools on the Symmetrix VMAX

The thin pools are backed by data devices configured as follows:

• EFD_Pool - 64 x 15 GB RAID 5 (3+1)
• FC_Pool - 200 x 30 GB RAID 1
• SATA_Pool - 200 x 50 GB RAID 6 (6+2)

It is important to use enough devices to ensure that the data on the TDEVs is striped wide, thereby avoiding hotspots. The allocation of disk space among the thin pools, however, is not based simply on the disk available in the VMAX. The majority of the data in the Vision demo environment will not be accessed frequently, so whether it starts on SATA or not, FAST VP will ensure that the larger portion of it ends up there. This is one of the reasons it is logical to place the entire database on SATA from the start, and hence why the SATA pool is the largest. This is also for the best from a cost perspective, since both FC and particularly EFD are more expensive than SATA.
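The usable capacity of each pool follows directly from the data device counts above. A data device's size is already its usable size on its RAID group, so no further RAID overhead needs to be subtracted:

```python
# Usable capacity of each thin pool = data device count x device size.
# Data device sizes already reflect usable space on their RAID groups,
# so RAID overhead is not subtracted again here.

pools = {
    "EFD_Pool":  (64,  15),   # 64 devices x 15 GB, RAID 5 (3+1)
    "FC_Pool":   (200, 30),   # 200 devices x 30 GB, RAID 1
    "SATA_Pool": (200, 50),   # 200 devices x 50 GB, RAID 6 (6+2)
}

capacity_gb = {name: count * size for name, (count, size) in pools.items()}
# EFD_Pool: 960 GB, FC_Pool: 6000 GB, SATA_Pool: 10000 GB
```

As the numbers show, the SATA pool is by far the largest, consistent with the decision to land the entire database there initially.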
Configuring FAST VP

The five steps to configure FAST VP are as follows:

Step 1 - Create storage tiers

Three storage tiers were used in the environment: one for 15k Fibre Channel drives, one for Flash drives, and one for 7,200 rpm SATA drives. For simplicity's sake, they are named FC_Tier, EFD_Tier, and SATA_Tier. The CLI command to create the EFD_Tier storage tier is shown in Figure 32.

Figure 32. Storage tier listing

Figure 33 shows the dialog box used to create storage tiers in Unisphere, with the storage tiers created for this use case listed.
Figure 33. Creating a storage tier in Unisphere

Step 2 - Create the storage group

This again can be done via the CLI or Unisphere. In many customer environments a storage group already exists for mapping and masking storage to the hosts. In this environment there are two storage groups representing the Vision database, each with a single LUN. As can be seen in Figure 34, the storage group dsib1115_WP_sg contains one device, 17B, which is associated with a FAST VP policy; this device contains all the user data. The other group, dsib1115_WP2_sg, contains device 183, which is not part of a FAST VP policy because it holds the temp files, system files, undo files, and redo logs of the database.
Figure 34. Storage group for FAST VP in EMC Unisphere

A view of the storage group as seen from VMware ESXi using EMC Virtual Storage Integrator (VSI) is displayed in Figure 35.

Figure 35. Storage group for FAST VP as viewed from VMware ESXi using EMC VSI
Step 3 - Create a FAST VP policy

The CLI syntax for creating the policy is shown in Figure 36. The GUI interface for creating the FAST VP policy can be seen in Figure 38. The policy here is set up such that all devices hosting the Vision database can exist entirely on SATA (100 percent). Recall that in the environment presented in this paper, all applications start on SATA. For this to occur, the policy has to allow 100 percent of the storage to reside on SATA drives. If the SATA percentage is set lower than the size of the TDEVs in the policy, the storage group will not be compliant with the FAST VP policy, and FAST will perform a compliance move even though the performance characteristics may not warrant one. The other tiers are set to 18 percent for FC and 3 percent for EFD to more realistically represent a customer's environment.

Figure 36. Creation of a FAST VP policy through CLI

The policy percentages of disk technologies used in this use case speak to two realities: first, that both EFD and FC are more expensive media than SATA; and second, and more importantly, that only a small percentage of an application or database is going to be accessed regularly. FAST is designed to make use of the disk provided to it. If cost were not a concern, the best policy to institute would be 100/100/100, which would give FAST full rein to use as much of each tier as it needed. Unfortunately, in the real world cost is one of the prime concerns, and as a result customers are likely to have smaller amounts of FC and EFD in their Symmetrix than SATA. This leaves less to dedicate to a FAST VP policy; the good news, however, is that most data in applications and databases is rarely accessed, so it is unlikely that large amounts of very fast disk such as EFD will be required. The result of the policy creation is shown in Figure 37.
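The compliance rule described above can be sketched briefly. This is an illustrative simplification, assuming per-tier caps are expressed as percentages of the storage group's logical capacity; it is not the actual FAST controller logic.

```python
# Sketch of FAST VP policy compliance: a storage group can only be
# fully placed if its per-tier percentage caps sum to at least 100
# percent of the group's logical capacity. The 100/18/3 split matches
# the policy used in this use case.
policy = {"SATA_Tier": 100, "FC_Tier": 18, "EFD_Tier": 3}

def is_compliant(policy_percents):
    """True if the tier caps together can hold the whole storage group."""
    return sum(policy_percents.values()) >= 100
```

Under this rule, dropping the SATA cap below 100 percent while leaving FC and EFD small (say 80/10/3) would leave nowhere for all the data to reside, triggering the compliance move the text warns about.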
Figure 37. Listing the FAST VP policy in CLI
Figure 38. Creating a FAST VP policy in Unisphere
Step 4 - Associate the storage group with the new policy

To associate the storage group from Figure 34 with the newly created policy, the CLI command is:

Figure 39. Associating a storage group to a FAST VP policy in CLI

This can also be accomplished in Unisphere as shown in Figure 40. At the point of associating a storage group, the user can check a box to enable RDF coordination, as explained in the section FAST VP SRDF coordination.
Figure 40. Associating a storage group to a FAST policy in Unisphere

One can now list the details of the association, including how the storage group complies with the policy. This is shown in Figure 41. From the output we can see that the total amount of space that the current storage group can "demand" is 400 GB, the size of the thin device 17B, though the current allocation is only 290 GB.
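The demand figures can be reproduced from the policy percentages. The sketch below is arithmetic only; the 400 GB and 290 GB figures come from the association output above.

```python
# Per-tier demand limits for the storage group, derived from the
# 100/18/3 policy percentages and the 400 GB logical size of thin
# device 17B.
SG_SIZE_GB = 400     # total "demand": the size of TDEV 17B
ALLOCATED_GB = 290   # currently allocated thin pool space

policy = {"SATA_Tier": 100, "FC_Tier": 18, "EFD_Tier": 3}
limits_gb = {tier: SG_SIZE_GB * pct // 100 for tier, pct in policy.items()}
# SATA may hold all 400 GB, FC up to 72 GB, and EFD up to 12 GB.
```

Because the device is thin, the allocated 290 GB, not the full 400 GB, is what FAST VP actually has to place across the tiers.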
Figure 41. FAST VP policy association with demand detail
Similar demand details can be obtained from the Unisphere interface as shown in Figure 42.

Figure 42. FAST VP policy management

Step 5 – Configure a performance and move window

After the storage group is associated with the FAST VP policy, two time windows need to be set up. One window dictates when the FAST VP algorithms observe the performance of the devices, and the other specifies when the generated moves may be executed. Although this can be configured through the CLI, the Unisphere GUI is much easier to navigate and was utilized in this study. The window setups are shown in Figure 43 and Figure 44. Because of the nature of the use case and the limited testing windows, the "Time to Sample before First Analysis" was set to 2 hours, and the "Workload Analysis Period" to 1 week. When setting the time for performance and movement, the local time of the machine is used; however, all times are converted to UTC to ensure that the windows take effect as the user intended regardless of the system's time zone. The performance and move windows were likewise set to accommodate the testing, and are therefore open 24 hours a day. In a customer environment, the performance window should be set to match the hours the data will be accessed, while the move window should be set to a period of lower activity on the system.
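The local-time-to-UTC behavior described above can be illustrated in a few lines. The offset and window start time below are hypothetical examples, not values from this study.

```python
from datetime import datetime, timedelta, timezone

# Window times are entered in the machine's local time but stored in
# UTC. Minimal sketch of that conversion using a fixed UTC offset; the
# offset (-4, US Eastern daylight time) and the 22:00 window start are
# hypothetical examples.
def window_start_utc(local_dt, utc_offset_hours):
    tz = timezone(timedelta(hours=utc_offset_hours))
    return local_dt.replace(tzinfo=tz).astimezone(timezone.utc)

# A move window starting at 22:00 local time on the US East Coast
# becomes 02:00 UTC the following day.
start = window_start_utc(datetime(2012, 9, 15, 22, 0), -4)
```

Storing the windows in UTC means an administrator on the West Coast and one on the East Coast both see the window fire at the wall-clock time they configured.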
Figure 43. FAST VP general settings in Unisphere
Figure 44. Setting FAST VP performance and movement time windows

With the general FAST VP environment configured, the testing can proceed.
Oracle Applications case study using FAST VP performance monitoring

For the purposes of this example, the following assumptions were made about the Oracle E-Business Suite 12 implementation. The use case uses the Order Entry (schema SOE) module as the basis for demonstrating the implementation of Oracle Applications. The other modules will be implemented at a future date and thus are not accessed during the testing. Order Entry has 200 active users on the system during normal business hours, as the environment is open through the web as a business-to-business (B2B) application. As mentioned earlier, the customer user data portion of the Vision database is configured on a single mount point that is actually a virtual disk in a VMware virtual machine with Oracle Enterprise Linux as the guest operating system. Drilling down into the database VM itself, FASTDB, one can use the Solutions Enabler command symvm, introduced in version 7.2, on the Linux OS to show how the local file systems map to the VMFS datastores and ultimately to the Symmetrix. The database device /dev/sdd was partitioned using fdisk into a single partition on which the ext3 file system was created (mkfs.ext3). This mount houses the database user data. Similarly, device /dev/sde was partitioned and contains the database system files. Figure 45 demonstrates the use of the symvm command to map the local file system to the VMFS datastore and, further, to show the Symmetrix device that backs the VMFS datastore.
Figure 45. Using symvm to translate Linux database mounts on the FASTDB VM

To see this information at a higher level, use VSI Storage Viewer as shown in Figure 46. VSI includes important details not seen with the symvm command, such as the RAID configuration, thin pools, metavolume type (if applicable), and storage group.
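The mapping chain that symvm and VSI Storage Viewer resolve can be modeled as a simple lookup. The datastore and VMDK names below are hypothetical; the guest devices /dev/sdd and /dev/sde and the Symmetrix devices 17B (user data) and 183 (system files) come from the use case above.

```python
# Illustrative model of the mapping chain: guest block device ->
# virtual disk (VMDK) -> VMFS datastore -> Symmetrix thin device.
# VMDK and datastore names are hypothetical placeholders.
MAPPING = {
    "/dev/sdd": {"vmdk": "FASTDB_1.vmdk", "datastore": "FAST_DS1", "symdev": "17B"},
    "/dev/sde": {"vmdk": "FASTDB_2.vmdk", "datastore": "FAST_DS2", "symdev": "183"},
}

def symmetrix_device(guest_device):
    """Return the Symmetrix device backing a guest block device."""
    return MAPPING[guest_device]["symdev"]
```

Being able to walk this chain is what lets an administrator confirm that a given Linux mount point is, or is not, under FAST VP control.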
Figure 46. Virtual disk mapping in the EMC VSI Storage Viewer feature

Viewing the path management owner of the device in VSI in Figure 46, one can see it is managed by PowerPath®. EMC PowerPath/VE was installed on all hosts for load balancing, failover, and high availability. It is a best practice to use PowerPath/VE in a VMware infrastructure running on EMC Symmetrix. The TDEV is not the only device bound to the thin pool SATA_Pool, as shown in Figure 47. Though it is not required, customers may find it easier to manage and keep track of TDEVs under FAST VP control by creating thin pools dedicated to FAST VP. In some views, VSI Storage Viewer includes a column for RAID, as in Figure 46. For devices under FAST VP control, the algorithm that VSI uses may show the RAID configuration of any of the thin pools in the policy.
Figure 47. Thin device bound to the pool containing SATA drives

Now that all the specific FAST VP setup activities are complete, the promotion of the Oracle Applications module to a live state can begin.

Order Entry application

As mentioned, the Order Entry schema is owned by the user SOE. The total size of the SOE application is about 26 GB, as shown in Figure 48.
Figure 48. Total size of the Order Entry application

SwingBench was used to simulate the workload by executing the transactions that are most common in Order Entry: adding customers, searching for products, ordering products, searching for orders, and processing orders. The SwingBench Order Entry benchmark is designed to hit the majority of data in the schema. The benchmark was run for about an hour, with the default setup designed to mimic an hour of a regular business day for a customer. In the screenshot shown in Figure 49, the benchmark is in mid-run, with approximately 200 users connected and generating an average of 12,173 transactions a minute.
Figure 49. Order Entry benchmark

As the parameters for FAST are set to analyze two hours' worth of statistics (as previously shown in Figure 44), movement began shortly after that time. In real customer environments, depending on the settings used for the performance window, it is reasonable for changes to take several hours or, in some instances, days. In a production environment it would be most advantageous for a customer to set the performance window to the span of time in a day during which business activity takes place. It is important that nonproductive hours are not included in the performance window, as this could pollute the performance statistics that the FAST controller uses and may result in incorrect placement of data. This is a noteworthy point since, unlike FAST DP for thick devices, FAST VP offers customers no opportunity to approve or even review the recommendations made by the controller. If FAST determines that tracks need to be moved, and the FAST engine is set to automatic as in this case study (Figure 50), the moves take place automatically in the background during the move window.
Figure 50. FAST VP move mode

Per the FAST settings, about an hour after the benchmark completes, the FAST engine begins moving tracks from the SATA_Pool thin pool to the two other configured pools, FC_Pool and EFD_Pool. Note the subtlety of how the FAST engine works. Since it operates on the array at the sub-LUN level, FAST has no knowledge of the application. It simply uses the access patterns to determine where to place the data. In this manner, the Order Entry application data ends up on three separate tiers. Figure 51 catches the movement just as it is beginning.
Figure 51. Initial track movement for the database thin device

After about an hour, as seen in Figure 52, the initial movement of data is complete, with all three thin pools containing some portion of device 17B.

Figure 52. Completed track movement for the database thin device

If one views the FAST VP policy in the CLI, the usage for each tier is listed. Based on how SwingBench accessed the user data, FAST has moved some of that data from SATA to EFD and FC. The output demonstrates that 11 GB of the Order Entry application has been re-tiered to EFD while 9 GB was placed on FC. These increases are matched by a decrease of the SATA tier by the sum of the two other tiers, 20 GB. This is depicted in Figure 53.

Figure 53. FAST policy demand usage

Note also that there is still room for growth in all storage tiers, indicating that FAST could have utilized more EFD or FC, but the access patterns did not warrant it.

FAST VP Results

Once the move is complete, the FAST VP mode is set to off to prevent additional movement during the second test, since the performance and movement windows span 24 hours. The Oracle Vision database is then refreshed from a TimeFinder clone backup to ensure the post-move test is the same as the pre-move test.5 The benchmark is re-run to see if there is a noticeable difference in any of the measurable statistics. Figure 54 shows a graph of the mid-run of the benchmark executed post-FAST VP optimization. For each of the SwingBench runs, the loader gathers statistics on each of the five transaction types, providing the minimum, maximum, and average transaction response times. These statistics are saved to an XML file at the end of each run.

5 Restoring the clone data does not impact the location of the tracks on the disk tiers.
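The tier movements reported above are internally consistent, as a quick accounting check shows. The 11 GB and 9 GB figures come from the policy demand output, and the roughly 26 GB schema size from Figure 48; this is arithmetic only.

```python
# Accounting check on the first FAST VP move: 11 GB promoted to EFD
# and 9 GB to FC, with the SATA tier shrinking by their sum.
moved_gb = {"EFD_Tier": 11, "FC_Tier": 9}
sata_decrease_gb = sum(moved_gb.values())   # 20 GB off SATA

# Fraction of the ~26 GB Order Entry (SOE) schema promoted to faster
# tiers during the run.
promoted_fraction = round(sata_decrease_gb / 26, 2)
```

Roughly three quarters of the schema was promoted here because the SwingBench benchmark deliberately touches most of the data; in a typical production workload the promoted fraction would be far smaller.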
Figure 54. Mid-run of the Order Entry benchmark

By simply transposing the average response times for each run onto a graph, the pre- and post-FAST VP runs can be compared. Figure 55 is a composite graph containing the pre- and post-FAST VP test results of the transaction response times for each of the five Order Entry functions. The black lines represent the pre-FAST VP environment while the green lines represent the post-FAST VP environment.
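Given the per-transaction averages in the SwingBench XML output, the gain can be quantified. The sketch below assumes only the Browse Orders figures quoted in the text (36 ms before, 23 ms after); the other transaction types' values would be read from the XML result files.

```python
# Percentage improvement in average transaction response time between
# the pre- and post-FAST VP runs. Only the Browse Orders numbers
# (36 ms -> 23 ms) are taken from the text.
def improvement_pct(pre_ms, post_ms):
    return round((pre_ms - post_ms) / pre_ms * 100, 1)

browse_orders = improvement_pct(36, 23)   # roughly a 36 percent gain
```

The same function applied to each transaction type would reproduce the per-function comparison plotted in Figure 55.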
Figure 55. Transaction response time comparing pre-FAST VP and post-FAST VP movement

Reviewing the graph, the post-FAST VP environment shows clear gains over the pre-FAST VP environment in every type of transaction. Some transaction types show a more pronounced benefit, such as Browse Orders, which went from 36 milliseconds to 23 milliseconds, but the results are undeniably better for each one. This test, of course, is just a microcosm of what is possible in a large enterprise environment. Reduced transaction times mean more work can be accomplished by the existing hardware and software, and that users have a better experience when accessing the application or database. More importantly, FAST VP will continue to work throughout the lifecycle of the applications and databases under its control, up-tiering or down-tiering according to how the data is accessed and efficiently making use of the storage within the FAST VP policy.

FAST VP, manual tiering, and real-world considerations

Navigating the varied applications and databases in a customer environment in order to manually tier them across different disk technologies is a difficult task. If resources are at a premium, it moves from difficult to near impossible. An application or database is not a single entity. Invariably some data will be more heavily accessed than other data, so placing an entire application or database on a single disk technology will either waste money (FC, EFD) or limit performance (SATA). If all those applications and databases were static entities with the same performance requirements, it might be feasible to tier them manually; but they are not. They are living; or rather, they have a lifecycle. The simple Order Entry use case conducted for this paper is evidence of that.
At the end of the test, despite its nominal size, the Order Entry application is spread across three tiers of storage, with the most heavily accessed data placed on EFD, the next on Fibre Channel, and the least accessed (or never accessed) remaining on SATA. All three tiers are utilized based upon how the data was accessed during the test, and most importantly, no manual tiering was required to achieve the result. FAST VP took over the tiering of the application or database in this case, and did so using real-time disk metrics. Unlike manual tiering, which may be a single event, FAST VP will continue to gather metrics and make additional changes over the lifecycle of the application, thereby mitigating cost and maximizing performance.

Conclusion

Using Virtual LUN and FAST VP technologies from EMC, it is possible to properly tier applications running in a vSphere environment without the complications of many virtual disks on many datastores or the use of raw device mappings. All the benefits of storage tiering in an EMC Symmetrix VMAX and VMware vSphere infrastructure can be achieved using even a single datastore on a single thin LUN, sacrificing neither manageability nor performance. Data movement can be completely automated, eliminating the need for time-consuming analysis by database and IT staff. Because only what is needed of each disk tier is used, and because most data in large databases supporting various applications is not frequently accessed, the majority can remain on cost-effective SATA. These benefits make FAST a wise investment for any company.
References

The following are available on the Oracle Technology Network (http://otn.oracle.com/):
• Oracle Applications Installation Guide: Using Rapid Install Release 12
• Oracle Applications Concepts Release 12

The following are available on EMC's Powerlink website:
• Oracle Databases on EMC Symmetrix Storage Systems TechBook
• Implementing Fully Automated Storage Tiering with Virtual Pools (FAST VP) for EMC Symmetrix VMAX Series Arrays
• FAST VP for EMC® Symmetrix® VMAX® Theory and Best Practices for Planning and Performance
• VSI for VMware vSphere: Storage Viewer Version 5.3 Product Guide
• Unisphere for VMAX Installation Guide
• EMC Solutions Enabler Symmetrix Array Controls CLI Product Guide
• EMC Solutions Enabler Symmetrix Array Management CLI Product Guide
• EMC Solutions Enabler Symmetrix CLI Command Reference HTML Help
• EMC Solutions Enabler Installation Guide