VMware vSphere 5 and IBM XIV Gen3 end-to-end virtualization

Lab report: vSphere 5, vMotion, HA, SDRS, I/O Control, vCenter, VAAI and VASA




Contents

1. Executive summary
2. Introduction
   2.1. VMware vSphere 5 features and benefits
   2.2. Introduction to new XIV Gen3 features
   2.3. Testing goals
   2.4. Description of the equipment
3. Test structure
   3.1. Hardware setup
      3.1.1. Fibre Channel configuration
      3.1.2. iSCSI configuration
      3.1.3. VMware vSphere
   3.2. VMware 5.0 environment software setup and installation
      3.2.1. VMware 5.0 Configuration
      3.2.2. VM OS software
      3.2.3. Testing software
4. Test procedures
   4.1. Iometer for performance testing
      4.1.1. Disk and network controller performance
      4.1.2. Bandwidth and latency capabilities of buses
   4.2. vSphere vMotion
      4.2.1. vSphere vMotion - Transfer time of VMs to a local disk (DAS)
      4.2.2. vSphere vMotion - Transfer times of VMs to XIV LUN (SAN)
   4.3. vSphere High Availability
   4.4. vSphere Storage Distributed Resource Scheduler
   4.5. Profile-Driven Storage
   4.6. vSphere Storage I/O Control
   4.7. vCenter
   4.8. VMware vSphere Storage API Program
      4.8.1. vSphere Storage APIs for Array Integration (VAAI)
      4.8.2. vStorage APIs for Storage Awareness (VASA)
5. Conclusion
Appendix A (Iometer for performance testing)
Appendix B (vSphere vMotion)
Appendix C (Transfer times of VMs to XIV LUNs (SAN))
Appendix D (vSphere High Availability)
Appendix E (vSphere Storage DRS)
Appendix F (Profile-Driven Storage)
Appendix G (Storage I/O Control)
Trademarks and special notices
1. Executive summary
   The value of server virtualization is well understood today. Customers implement
   server virtualization to increase server utilization, handle peak loads efficiently,
   decrease total cost of ownership (TCO), and streamline server landscapes.

   Similarly, storage virtualization helps to address the same challenges as server
   virtualization. Storage virtualization also expands beyond the boundaries of physical
   resources and helps to control how IT infrastructures adjust to rapidly changing
   business demands. Storage virtualization benefits customers through improved
   physical resource utilization and improved hardware efficiency, as well as reduced
   power and cooling expenses. In addition, consolidation of resources obtained
   through virtualization offers measurable returns on investment for today’s
   businesses. Finally, virtualization serves as one of the key enablers of cloud
   solutions, which are designed to deliver services economically and on demand.

   The features of VMware vSphere 5.0 and IBM XIV® Gen3 storage together build a
   powerful end-to-end virtualized infrastructure, covering not only servers and storage
   but also end-to-end infrastructure management, leading to more efficient and
   higher-performing applications.

   VMware is a leading provider of virtualization software. VMware vSphere 5 is
   the first version of VMware vSphere built exclusively on ESXi, a hypervisor purpose-
   built for virtualization that runs independently from a general purpose operating
   system. With an ultra-thin architecture, ESXi delivers industry-leading performance,
   reliability and scalability all within a footprint of less than 100 MB. The result is
   streamlined deployment and configuration as well as simplified patching, updating
   and better security.

   The IBM XIV Storage System Gen3 uses an advanced storage fabric architecture
   built for today's dynamic data centers with an eye towards tomorrow. With industry-
   leading storage software and a high-speed InfiniBand fabric, the XIV Gen3 delivers
   the storage features and performance demanded by VMware infrastructures, including:

              •   Automation and simplicity
              •   Multi-level integration with vSphere
              •   Centralized management in vCenter
              •   vStorage APIs for Array Integration (VAAI)
              •   vStorage APIs for Storage Awareness integration (VASA)
              •   Storage Replication Adapter (SRA) for Site Recovery Manager (SRM)
               •   Engineering-level collaboration for vSphere 5 and beyond

   The global partnership between IBM and VMware, coupled with the forward-thinking
   architecture of the IBM XIV Gen3 Storage System, provides a solid foundation for
   virtual infrastructures today and into the future. On top of this solid foundation,
   VMware vSphere 5.0 and IBM XIV Gen3 complement each other to create a strong
   virtualization environment. Evidence of how seamlessly these features work together
   to provide this powerful virtualized environment is found in the following sections.
   Testing details can be found in Appendices A through G.


2. Introduction
   2.1. VMware vSphere 5 features and benefits
              Enhancements and new features in VMware vSphere 5 are designed to
              help deliver improved application performance and availability for all
              business-critical applications. VMware vSphere 5 introduces advanced
              automation capabilities including:

           • Four times larger virtual machines (VMs) scale to support any
             application. With VMware vSphere 5, VMware helps make it easier for
             customers to virtualize. VMware vSphere 5 is capable of running VMs
             four times more powerful than VMware vSphere 4, supporting up to 1
             terabyte of memory and up to 32 virtual processors. These VMs are able
             to process in excess of 1 million I/O operations per second, helping
             surpass current requirements of the most resource-intensive applications.
             For example, VMware vSphere 5 is able to support a database that
             processes more than two billion transactions per day.
            • Updates to vSphere High Availability (HA) offer reliable protection
              against unplanned downtime. VMware vSphere 5 features a new HA
              architecture that is easier to set up than the previous vSphere 4.1
              release (customers can get their applications set up with HA in minutes),
              is more scalable, and offers availability guarantees.
            • Intelligent Policy Management: Three new automation
             advancements deliver cloud agility. VMware vSphere 5 introduces
             three new features that automate datacenter resource management to
             help IT respond to the business faster while reducing operating expenses.
             These features deliver intelligent policy management: A “set it and forget
             it” approach to data center resource management. Customers define the
             policy and establish the operating parameters, and VMware vSphere 5
             does the rest. VMware vSphere 5 intelligent policy management features
             include:
              • Auto-Deploy enables automatic server deployment "on the fly" and,
                   for example, reduces the time that it takes to deploy a non-virtualized
                   data center with 40 servers from 20 hours to 10 minutes. After the servers
                  are up and running, Auto-Deploy also automates the patching
                  process, making it possible to instantly apply patches to many servers
                  at once.
             • Profile-Driven Storage reduces the number of steps required to
                  select storage resources by grouping storage according to user-
                  defined policies (for example, gold, silver, bronze, and so on). During
                  the provisioning process, customers simply select a level of service
                  for the VM, and VMware vSphere automatically uses the storage
                  resources that best align with that level of service.
             • Storage Distributed Resource Scheduler (DRS) extends the
                  automated load-balancing capabilities that VMware first introduced in



2006 with DRS to include storage characteristics. After a customer
                 has set the storage policy of a VM, Storage DRS automatically
                 manages the placement and balancing of the VM across storage
                 resources. By automating the ongoing resource allocations, Storage
                 DRS eliminates the need for IT to monitor or intervene, while ensuring
                 the VM maintains the service level defined by its policy.

   2.2. Introduction to new XIV Gen3 features
      The XIV Storage System has received rapid market success with thousands of
      installations in diverse industries worldwide, including financial services,
      healthcare, energy, education and manufacturing. IBM XIV integrates easily with
      virtualization, email, database, analytics and data protection solutions from IBM,
      SAP, Oracle, SAS, VMware, Symantec and others.

      The XIV Gen3 model exemplifies the XIV series’ evolutionary capability: Each
      hardware component has been upgraded with the latest technologies, while the
      core of the architecture remains intact. The XIV Gen3 model gives applications a
      tremendous performance boost, helping customers meet increasing demands
      with fewer servers and networks.

      The XIV Storage System series common features enable it to:
         • Self-tune and deliver consistently high performance with automated
            balanced data placement across all key system resources, eliminating hot
            spots
         • Provide unprecedented data protection and availability through active-
            active N+1 redundancy of system components and rapid self-healing (<
            60 minutes for 2 TB drives)
          • Enable unmatched ease of management through automated tasks and
            an intuitive user interface
         • Help promote low TCO enabled by high-density disks and optimal
            utilization
         • Offer seamless and easy-to-use integrated application solutions with the
            leading host platforms and business applications

      XIV Gen3 adds ultra-performance capabilities to the XIV series compared to its
      previous generation by providing:
         • Up to 4 times the throughput, cutting time and boosting performance for
             business intelligence, archiving and other extremely demanding
             applications
          • Up to 3 times faster response time, enabling faster transaction
             processing and greater scalability with online transaction processing
             (OLTP), database and email applications
         • Power to serve even more applications from a single system with a
             comprehensive hardware upgrade that includes InfiniBand inter-module
             connect, larger cache, faster disk controllers, increased processing
             power, and more fibre-channel (FC) and iSCSI connectivity.
         • Option for future upgradeability to solid-state drive (SSD) caching for
             breakthrough SSD performance levels at a fraction of typical SSD storage
             costs, combined with very high-density drives helping achieve even lower
             TCO.


2.3. Testing goals
       The purpose of the following test cases is to show that VMware vSphere 5 and
       the IBM XIV Storage System Gen3 storage solution seamlessly complement
       each other as an efficient storage virtualization solution.

       The testing in this paper is for proof of concept and should not be used as a
       performance statement.


   2.4. Description of the equipment
       The test setup utilizes the following IBM equipment:
          • (3) IBM System x® 3650 M3 servers
          • (2) IBM System Storage® SAN24B-4 Express switches
           • (3) QLogic QLE2562 HBAs
          • IBM XIV Storage System Gen3 series hardware, Firmware Version 11.0


3. Test structure
   3.1.       Hardware setup
        Figure 1 shows the vSphere 5.0 System x and XIV reference architecture diagram.




Figure 1. vSphere 5.0 System x and XIV reference architecture diagram

         3.1.1. Fibre Channel configuration
             •     (3) IBM x3650 M3 servers
              •     (2) SAN24B-4 Express switches (8 Gbps) (SAN A and SAN B)
              •     (3) QLogic QLE2562 HBAs (8 Gbps)

          3.1.2. iSCSI configuration
              •     (2) IBM x3650 M3 servers
              •     (1) 1 Gbps Ethernet switch

         3.1.3. VMware vSphere
             •     (1) VMware vSphere VM (Microsoft® Windows® 2008 R2)

    3.2. VMware 5.0 environment software setup and installation

        3.2.1. VMware 5.0 Configuration
             • VMware 5.0 Enterprise Plus

        3.2.2. VM OS software
              • Windows 2008 R2
              • Red Hat Enterprise Linux (RHEL) 6.0


3.2.3. Testing software
              Iometer for I/O testing
               Note: Iometer is downloaded from www.iometer.org and distributed under
               the terms of the Intel Open Source License. The iomtr_kstat kernel
               modules, as well as other future independent components, are distributed
               under the terms of the GNU Public License.


4. Test procedures
   4.1. Iometer for performance testing
      When implementing storage, whether the storage is directly attached to a server
      (direct-attach storage or DAS), connected to a file-based network (network-
      attached storage or NAS), or resides on its own dedicated storage network
      (storage area network or SAN — Fibre Channel or iSCSI), it is important to
      understand storage performance. Without this information, managing growth
      becomes difficult. Iometer can help deliver this critical performance data to help
      you make better decisions about the storage needed or whether the current
      storage solution can handle an increased load.
      4.1.1. Disk and network controller performance

            The following two tests show the possible throughput of a three-VM setup
            and the IBM XIV Gen3 storage array configuration without any special
            tuning. See “Appendix A (Iometer for performance testing)” for test
            procedures.

       Test object       Performance of disk and network controllers.
        Setup             (3) VMs, each with (1) processor and 4 GB memory; (3) 40 GB XIV LUNs for test
        Test steps        Install Windows 2008 R2
                          Install Iometer
                          Set up test with Iometer: 40 workers,
                                  8 KB block size, 30% writes and 70% reads
                          Run time: 1 hour
                         See “Appendix A (Iometer for performance testing)”
       Results            VM (1) 76737 IOPS
                          VM (2) 77296 IOPS
                          VM (3) 72248 IOPS
       Test notes        *This is not a performance measurement test.

       4.1.2. Bandwidth and latency capabilities of buses

        Test object       Bandwidth and latency capabilities of buses
        Setup             (3) VMs, each with (1) processor and 4 GB memory; (3) 40 GB XIV LUNs for test
        Test steps        Install Windows 2008 R2
                          Install Iometer
                          Set up test with Iometer: 40 workers,



                               8 KB block size, 30% writes and 70% reads
                               Run time: 1 hour
                              See “Appendix A (Iometer for performance testing)”
           Results             VM (1) 588 MBps, 0.4641 ms average latency
                               VM (2) 603 MBps, 0.0257 ms average latency
                               VM (3) 565 MBps, 0.8856 ms average latency
          Test notes          *This is not a performance measurement test.

                  The Iometer testing shows that the IBM XIV Gen3 performed
                  exceptionally well, sustaining more than 70,000 IOPS per VM with
                  average latencies well below 1 ms. Figure 2 shows the Iometer
                  measured performance results for VM1.




Figure 2. Iometer VM1 results for 40 workers
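
As a quick consistency check on the two result sets above, throughput at a fixed block size should be roughly IOPS multiplied by the block size. The short Python sketch below is an illustration added for this report's numbers, not part of the test procedure:

    # Consistency check: at a fixed block size, throughput (MBps) ~= IOPS x block size.
    # The IOPS and reported MBps figures are taken from the two result tables above.
    BLOCK_SIZE_KB = 8

    results = {
        "VM1": (76737, 588),  # (measured IOPS, reported MBps)
        "VM2": (77296, 603),
        "VM3": (72248, 565),
    }

    for vm, (iops, reported) in results.items():
        expected = iops * BLOCK_SIZE_KB / 1024  # KB/s -> MB/s
        print(f"{vm}: {iops} IOPS x {BLOCK_SIZE_KB} KB ~= {expected:.0f} MBps "
              f"(reported {reported} MBps)")

For VM1, 76,737 IOPS at 8 KB works out to roughly 600 MBps, in line with the reported 588 MBps; this also confirms that the bandwidth figures are megabytes, not megabits, per second.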

    4.2. vSphere vMotion
        VMware vSphere vMotion technology enables live migration of VMs from server
        to server.

       This test demonstrates the difference in transfer times when moving VMs to
       local server disks (DAS) versus moving VMs to the IBM XIV Gen3 (SAN).
       This demonstration also shows that the XIV Gen3 can move data at computer
       bus speeds.
       4.2.1. vSphere vMotion - Transfer time of VMs to a local disk (DAS)
          Test object        Transfer time of VMs to local disk
          Setup              VM Size 14.44 GB
          Test steps         See “Appendix B (vSphere vMotion)”
          Results            10 minutes 3 seconds
          Test notes         None


        4.2.2. vSphere vMotion - Transfer times of VMs to XIV LUN (SAN)

          Test Object         Transfer time of VMs to XIV LUN


Setup         VM Size 14.44 GB
           Test steps    See "Appendix C (Transfer times of VMs to XIV LUNs (SAN))"
          Results       1 minute 31 seconds
          Test notes    None

      Overall test results: For the two tested VMs, transferring all data from the server
      to XIV was 6.7x faster than from the server to the local disk for the tested
      configuration, demonstrating the synergy between XIV and vSphere vMotion.
      See “Appendix B (vSphere vMotion)” and “Appendix C (Transfer times of VMs to
      XIV LUNs (SAN))” for test details.
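
For readers who want to reproduce this comparison programmatically rather than with the stopwatch used in Appendices B and C, the sketch below times the same "Change datastore" migration through the vSphere API. It assumes the pyVmomi Python SDK; the hostname, credentials, and object names are placeholders rather than values from this lab.

    # Minimal sketch: time a Storage vMotion via the vSphere API (assumed pyVmomi).
    import ssl
    import time

    from pyVim.connect import SmartConnect, Disconnect
    from pyVim.task import WaitForTask
    from pyVmomi import vim

    def find_by_name(content, vimtype, name):
        # Return the first managed object of the given type with the given name.
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vimtype], True)
        try:
            return next(obj for obj in view.view if obj.name == name)
        finally:
            view.Destroy()

    ctx = ssl._create_unverified_context()  # lab use only; verify certificates in production
    si = SmartConnect(host="vcenter.example.com", user="administrator",
                      pwd="password", sslContext=ctx)
    content = si.RetrieveContent()

    vm = find_by_name(content, vim.VirtualMachine, "New Virtual Machine")
    ds = find_by_name(content, vim.Datastore, "XIV_ISVX8_X9")

    # Relocate only the VM's storage (the API equivalent of "Change datastore").
    start = time.time()
    WaitForTask(vm.RelocateVM_Task(spec=vim.vm.RelocateSpec(datastore=ds)))
    print(f"Migration completed in {time.time() - start:.0f} seconds")

    Disconnect(si)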

   4.3.        vSphere High Availability
       The vSphere High Availability (HA) feature delivers the reliability and
       dependability needed by many applications running in virtual machines,
       independent of the operating system and applications running within them. vSphere
       HA provides uniform, cost-effective failover protection against hardware and
       operating system failures within VMware virtualized IT environments.


      Test object       Failover of an ESX server
      Setup             See “Appendix D ( vSphere High Availability)”
      Test steps        See “Appendix D ( vSphere High Availability)”
       Results           When encountering a test-induced failure, the VM moved to a
                         new ESXi host and the storage seamlessly moved with it.
       Test notes        None

       This test shows that the High Availability feature works seamlessly with the IBM
       XIV Gen3: upon a failure, the VM automatically moves to a new ESXi host and the
       storage seamlessly moves with it, as shown in Figure 3.
       See "Appendix D (vSphere High Availability)" for test details.
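
Appendix D drives this behavior through the vSphere Client. As a hedged illustration only, the equivalent cluster setting can also be applied through the SDK; the pyVmomi sketch below assumes a connection and cluster lookup as in the earlier vMotion sketch, and the admission-control values are placeholder choices, not the lab's exact settings.

    # Minimal sketch: enable vSphere HA on an existing cluster (assumed pyVmomi).
    from pyVim.task import WaitForTask
    from pyVmomi import vim

    def enable_ha(cluster: vim.ClusterComputeResource) -> None:
        # Turn on HA with admission control sized for one host failure.
        spec = vim.cluster.ConfigSpecEx(
            dasConfig=vim.cluster.DasConfigInfo(
                enabled=True,
                admissionControlEnabled=True,
                failoverLevel=1,  # tolerate one host failure
            )
        )
        WaitForTask(cluster.ReconfigureComputeResource_Task(spec, modify=True))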




Figure 3. Demonstrating HA feature: VM moves to new ESXi host along with storage



   4.4. vSphere Storage Distributed Resource Scheduler

       The vSphere Storage Distributed Resource Scheduler (SDRS) aggregates
       storage resources from several storage volumes into a single pool and simplifies
       storage management. SDRS intelligently places workloads on storage volumes
       during provisioning based on the available storage resources, and performs
       ongoing load balancing between volumes so that space and I/O bottlenecks are
       avoided according to predefined rules that reflect business needs and changing
       priorities.

       Test object       Testing aggregated storage resources of several storage
                         volumes.
       Setup
       Test steps        See “Appendix E (vSphere Storage DRS)”
        Results           Passed; storage bottleneck avoided
       Test notes        None

       When run without SDRS, a storage bottleneck occurs. When SDRS is running,
       the system performs a task to load balance the disks. An imbalance on the
       datastore triggers the Storage DRS recommendation to migrate a virtual
       machine, and Storage DRS makes multiple recommendations to solve the
       datastore imbalance (Figure 4). See "Appendix E (vSphere Storage DRS)" for test details.
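
Appendix E builds the datastore cluster through the New Datastore Cluster wizard. For illustration, a minimal pyVmomi sketch of the same setup is shown below; this is assumed API usage, the pod name and datastore list are placeholders, and "automated" corresponds to the wizard's "Fully Automated" choice.

    # Minimal sketch: create a datastore cluster and enable Storage DRS (assumed pyVmomi).
    from pyVim.task import WaitForTask
    from pyVmomi import vim

    def create_sdrs_pod(content, datacenter, name, datastores):
        # Create the StoragePod (datastore cluster) under the datacenter.
        pod = datacenter.datastoreFolder.CreateStoragePod(name=name)
        # Turn on Storage DRS in fully automated mode.
        spec = vim.storageDrs.ConfigSpec(
            podConfigSpec=vim.storageDrs.PodConfigSpec(
                enabled=True,
                defaultVmBehavior="automated",
            )
        )
        WaitForTask(content.storageResourceManager
                    .ConfigureStorageDrsForPod_Task(pod=pod, spec=spec, modify=True))
        # Move the chosen XIV-backed datastores into the new cluster.
        WaitForTask(pod.MoveIntoFolder_Task(list=datastores))
        return pod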




Figure 4. Storage DRS recommendations solve a datastore imbalance.

   4.5. Profile-Driven Storage
      Profile-Driven Storage enables easy and accurate selection of the correct
      datastore on which to deploy VMs. The selection of the datastore is based on the
      capabilities of that datastore. Then, throughout the lifecycle of the VM, a
      database administrator (DBA) can manually check to ensure that the underlying
      storage is still compatible, that is, it has the correct capabilities. This means that
      if the VM is cold-migrated or migrated using Storage vMotion, administrators can
      ensure that the VM moves to storage that meets the same characteristics and
      requirements of the original source “profile.” If the VM is moved without checking
      the capabilities of the destination storage, the compliance of the VM's physical
       storage characteristics can still be checked from the user interface at any time,
       and the administrator can take corrective actions if the VM is no longer on a
       datastore that meets its storage requirements.

       Test object       Deploying VMs on Profile-Driven Storage
       Setup
       Test steps        See “Appendix F (Profile-Driven Storage)”
        Result            This test demonstrates that with Profile-Driven Storage, a user
                          is able to ensure physical storage characteristics remain
                          consistent across migrations of a VM
       Test notes

      This test shows that the Profile-Driven Storage feature works with IBM XIV Gen3
      to help ensure VM storage profiles meet requirements as shown in Figures 5 and
      6. See “Appendix F (Profile-Driven Storage)” for test details.


Figure 5. The VM storage profile is now compliant.




              Figure 6. VM storage profile

   4.6. vSphere Storage I/O Control
       VMware vSphere 5.0 extends Storage I/O Control to provide cluster-wide I/O
       sharing and limits for datastores. This feature helps ensure that no single virtual
       machine is able to create a bottleneck in any IT environment, regardless
       of the type of shared storage used. Storage I/O Control automatically throttles a
       VM that is consuming a disproportionate amount of I/O bandwidth when the
       configured latency threshold has been exceeded. This enables other virtual
       machines using the same datastore to receive their fair share of I/O performance.
       Storage DRS and Storage I/O Control work together to prevent violations of
       service-level agreements while providing long-term and short-term I/O
       distribution balance.

       Test object        Test cluster-wide I/O sharing and limits for datastores
       Setup
       Test steps         See “Appendix G (Storage I/O Control)” for test details



           Results           Observed a gradual increase in the IOPS for the VM with 2000
                             shares and a gradual decrease in IOPS for the VM with 1000
                             shares.
           Test notes        The test results showed that more resources needed to be
                             allocated to one VM to balance the workload. VMware throttled
                             the I/O of the higher-IOPS VM to give more I/O to the slower
                             VM.

      This test shows that the Storage I/O Control feature works within VMware 5.0
      with no changes to the IBM XIV Gen3. See “Appendix G (Storage I/O Control)”
      for test details.
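
In this test the share values (2000 versus 1000) were assigned through the vSphere Client. As an illustration, the pyVmomi sketch below applies custom shares to a VM's first virtual disk; the helper name and values mirror the test but are an assumed usage of the API, not an IBM-provided script.

    # Minimal sketch: assign custom Storage I/O Control shares to a disk (assumed pyVmomi).
    from pyVim.task import WaitForTask
    from pyVmomi import vim

    def set_disk_shares(vm: vim.VirtualMachine, shares: int) -> None:
        # Find the VM's first virtual disk and give it a custom share count.
        disk = next(dev for dev in vm.config.hardware.device
                    if isinstance(dev, vim.vm.device.VirtualDisk))
        disk.storageIOAllocation = vim.StorageResourceManager.IOAllocationInfo(
            shares=vim.SharesInfo(level="custom", shares=shares))
        change = vim.vm.device.VirtualDeviceSpec(operation="edit", device=disk)
        WaitForTask(vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change])))

    # For example: set_disk_shares(vm_high, 2000); set_disk_shares(vm_low, 1000)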

   4.7.        vCenter
       VMware vCenter Server is a tool that manages multiple host servers that run
       VMs. It enables the provisioning of new server VMs, the migration of VMs
       between host servers, and the creation of a library of standardized VM
       templates. You can install plug-ins to add several other features, for example:
       VASA for discovery of storage topology, capability, and event and alert status;
       SRM for disaster recovery automation exploiting storage business-continuity
       features.
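
vCenter's functions are also exposed programmatically through the SDK environment described in the next section. As a small, hedged example of that automation (hostname and credentials are placeholders), the following pyVmomi sketch connects to vCenter and lists each datastore with its capacity, much like the vCenter storage views:

    # Minimal sketch: connect to vCenter and list datastore capacity (assumed pyVmomi).
    import ssl

    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab use only
    si = SmartConnect(host="vcenter.example.com", user="administrator",
                      pwd="password", sslContext=ctx)
    content = si.RetrieveContent()

    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    for ds in view.view:
        free_gb = ds.summary.freeSpace / 2**30
        cap_gb = ds.summary.capacity / 2**30
        print(f"{ds.name}: {free_gb:.1f} GB free of {cap_gb:.1f} GB")
    view.Destroy()
    Disconnect(si)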

   4.8. VMware vSphere Storage API Program
      VMware vSphere provides an API and software development kit (SDK)
      environment to allow customers and independent software vendors to enhance
      and extend the functionality and control of vSphere. VMware has created several
      storage virtualization APIs that help address storage functionality and control.

      4.8.1. vSphere Storage APIs for Array Integration (VAAI)

             Virtualization administrators look for ways to improve the scalability,
             performance, and efficiency of their vSphere infrastructure. One way is by
             utilizing storage integration with VMware vStorage APIs for Array Integration
             (VAAI). VAAI is a set of APIs, or primitives, that allow vSphere infrastructures
             to offload the processing of data-related tasks that can burden a VMware
             ESX server. Utilizing a storage platform like XIV with VAAI enabled can
             provide significant improvements in vSphere performance, scalability, and
             availability. This capability was initially a private API requiring a plug-in in
             vSphere 4.1, but with vSphere 5.0, it is now part of the T10 SCSI standard.

             The VAAI driver for XIV enables the following primitives:

                    •   Full copy (also known as hardware copy offload):
                           o Benefit: Considerable boost in system performance and fast
                               completion of copy operations; minimizes host processing
                               and network traffic
                    •   Hardware-assisted locking (also known as atomic test and set):
                        Replacement of the SCSI-2 lock/reservation in Virtual Machine File
                        System (VMFS)
                           o Benefit: Significantly improves scalability and performance


•   Block zeroing (also known as write same)
                        o Benefit: Reduces the amount of processor effort, and
                            input/output operations per second (IOPS) required to write
                            zeroes across an entire EagerZeroedThick (EZT) Virtual
                            Machine Disk (VMDK)

              The XIV Storage System now provides full support for VAAI. The following
              sections describe each of these primitives.


          •     Full copy
                Tasks such as VM provisioning and VM migration are part of everyday
                activities of most VMware administrators. As the virtual environment
                continues to scale, it is important to monitor the overall impact that these
                activities have on the VMware infrastructure.

                 Toggle hardware-assisted copy by changing the
                 DataMover.HardwareAcceleratedMove parameter in the Advanced
                 Settings tab in vSphere Virtual Center (set to 1 to enable, 0 to disable).
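
                 The same parameter can also be set per ESX host from the SDK; a minimal pyVmomi sketch is shown below (assumed API usage, not part of the lab procedure).

    # Minimal sketch: toggle the full-copy primitive on one host (assumed pyVmomi).
    from pyVmomi import vim

    def set_hardware_accelerated_move(host: vim.HostSystem, enabled: bool) -> None:
        # The advanced option expects an integer value: 1 = enabled, 0 = disabled.
        option = vim.option.OptionValue(
            key="DataMover.HardwareAcceleratedMove", value=int(enabled))
        host.configManager.advancedOption.UpdateOptions(changedValue=[option])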

                When the value for hardware acceleration is 1, the data path changes for
                tasks such as Storage vMotion, as illustrated in Figure 7.
                       Figure 7: VAAI Full copy primitive


              In this instance, the ESX server is removed from the data path of the data
              copy when hardware copy is enabled. Removing copy transactions from
              the server workload greatly increases the speed of these copy functions
              while reducing the impact to the ESX server.

              How effective is the VAAI full copy offload process?

                 During IBM lab testing, data retrieved from the VMware monitoring tool
                 esxtop showed that commands per second on the ESX host were
                 reduced by a factor of 10. Copy time reduction varies depending on the
                 VM but is usually significant (over 50% for most profiles).




               A few examples of this performance boost at customer data centers are
               shown in Table 1.


               Customer              Test        Before VAAI    After VAAI    Time reduction
               Major financial       2 VMs       433 sec        180 sec       59%
               Electric company      2 VMs       944 sec        517 sec       45%
               Petroleum company     40 VMs      1 hour         20 min        67%

               Table 1: Field results for VAAI full copy


            Full copy effect: Thousands of commands and IOPS on the ESX server are
            freed up for other tasks, promoting greater scalability.


              Hardware-assisted locking (atomic test and set)

              Just as important as the demonstrated effect of hardware-assisted copy,
              the hardware-assisted locking primitive also greatly enhances VMware
              cluster scalability and disk operations for clustered file system (VMFS)
              with tighter granularity and efficiency.

               It is important to understand why locking occurs in the first place. For
               block storage environments, VMware datastores are formatted with
               VMFS. VMFS is a clustered file system that uses Small Computer System
               Interface (SCSI) reservations to handle distributed lock management.
               When an ESX server changes the metadata of the file system, the SCSI
               reservation process ensures that shared resources do not overlap with
               other connected ESX hosts by obtaining exclusive access to the logical
               unit number (LUN).

                 A SCSI reservation is created on VMFS when (not a complete list):
                    • A Virtual Machine Disk (VMDK) is first created
                    • A VMDK is deleted
                    • A VMDK is migrated
                    • A VMDK is created from a template
                    • A template is created from a VMDK
                    • VM snapshots are created or deleted
                    • A VM is switched on or off

              Although normal I/O operations do not require this mechanism, these
              boundary conditions have become more common as features such as
              vMotion with Distributed Resource Scheduler (DRS) are used more
               frequently. This SCSI reservation design led early storage area


network (SAN) best practices for vSphere to dictate a limit in cluster size
              for block storage (about 8 to 10 ESX hosts).

               With hardware-assisted locking, as shown in Figure 8, LUN locking
               processing is transferred to the storage system. This reduces the number
               of commands required to access a lock, allows locks to be more
               granular, and leads to better scalability of the virtual infrastructure.
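
               For reference, each primitive maps to a host advanced setting, and their state can be checked from the SDK; a minimal pyVmomi sketch follows (assumed usage). A value of 1 means the primitive is enabled:

    # Minimal sketch: query the three VAAI-related host settings (assumed pyVmomi).
    from pyVmomi import vim

    VAAI_OPTIONS = [
        "DataMover.HardwareAcceleratedMove",  # full copy
        "VMFS3.HardwareAcceleratedLocking",   # hardware-assisted locking (ATS)
        "DataMover.HardwareAcceleratedInit",  # block zeroing (write same)
    ]

    def show_vaai_status(host: vim.HostSystem) -> None:
        adv = host.configManager.advancedOption
        for key in VAAI_OPTIONS:
            for opt in adv.QueryOptions(name=key):
                print(f"{host.name}: {opt.key} = {opt.value}")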




              Figure 8: VAAI Atomic test and set primitive


              Hardware-assisted locking effect: Hardware-assisted locking will
              increase VMs per data store, ESX servers per data store, and overall
              performance. This functionality coupled with 60 processors and 360 GB
              of cache memory for the XIV Storage System Gen3 helps provide better
              consolidation, density, and performance capabilities for the most
              demanding virtual infrastructures.

              Block zeroing (write same)
              Block zeroing, as shown in Figure 9, is designed to reduce the amount of
              processor and storage I/O utilization required to write zeroes across an
              entire EZT VMDK when it is created. With the block zeroing primitive,
               zeroing operations for EZT VMDK files are offloaded to the XIV Storage
               System without the host having to issue several commands.
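
               For illustration, the pyVmomi sketch below adds an EZT disk to a VM; creating a disk this way is what triggers the write-same offload when block zeroing is enabled. This is assumed API usage, and the controller lookup and unit number are placeholder choices.

    # Minimal sketch: add an eagerzeroedthick (EZT) disk to a VM (assumed pyVmomi).
    from pyVim.task import WaitForTask
    from pyVmomi import vim

    def add_ezt_disk(vm: vim.VirtualMachine, size_gb: int) -> None:
        # Reuse the VM's existing SCSI controller for the new disk.
        controller = next(dev for dev in vm.config.hardware.device
                          if isinstance(dev, vim.vm.device.VirtualSCSIController))
        disk = vim.vm.device.VirtualDisk(
            controllerKey=controller.key,
            unitNumber=1,  # assumes unit 0 is the boot disk
            capacityInKB=size_gb * 1024 * 1024,
            backing=vim.vm.device.VirtualDisk.FlatVer2BackingInfo(
                diskMode="persistent",
                thinProvisioned=False,
                eagerlyScrub=True,  # eagerlyScrub + thick provisioning = EZT
            ),
        )
        change = vim.vm.device.VirtualDeviceSpec(
            operation="add", fileOperation="create", device=disk)
        WaitForTask(vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change])))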




Figure 9. The VAAI write same or block zeroing primitive

              Block zeroing effect: Block zeroing reduces overhead and provides
              better performance for creating EZT virtual disks. With XIV, EZT volumes
              are available immediately through fast write caching and de-staging.

               VAAI support on XIV storage systems liberates valuable compute
               resources in the virtual infrastructure. Offloading processor- and disk-
               intensive activities from the ESX server to the storage system provides
               significant improvements in vSphere performance, scalability and
               availability.

               Note: Before installing the VAAI driver for the XIV storage system, ensure
               that the installed microcode is 10.2.4a or higher. For vSphere 5.x and later,
               the VAAI driver is no longer required for IBM storage.

      4.8.2. vStorage APIs for Storage Awareness (VASA)
           The IBM Storage provider for VMware VASA, illustrated in Figure 10,
           provides even more real-time information about the XIV Storage System.
           VMware vStorage APIs for Storage Awareness (VASA) enable vCenter to
           see the capabilities of storage array LUNs and corresponding datastores.
           With visibility into capabilities underlying a datastore, it is much easier to
           select the appropriate disk for virtual machine placement. The IBM XIV
            Storage System VASA provider for VMware vCenter adds:
               • Real-time disk status
               • Real-time alerts and events from the XIV Storage System to vCenter
               • Support for multiple vCenter consoles and multiple XIV Storage Systems
               • Continuous monitoring through storage monitoring service (SMS) for vSphere
               • Foundation for future functions such as SDRS and policy-driven storage deployment




Figure 10. VASA block diagram

             Adding VASA support, available in vSphere 5, gives VMware and cloud
             administrators insights that lead to improved availability, performance,
             and management of the storage infrastructure.
            In addition to VASA, the XIV Storage System also provides a vCenter Plug-
            in for vSphere 4 and vSphere 5, which extends management of the storage
            to provisioning, mapping, and monitoring of replication, snapshots, and
            capacity.


5. Conclusion

    As demonstrated through this set of IBM functional tests, VMware vSphere 5 and the
   IBM XIV Storage System Gen3 storage solution seamlessly complement each other
   as an efficient storage virtualization solution. Evaluation testing verified that VMware
   vSphere 5 and the IBM XIV Storage System Gen3 consistently performed as
   expected. The test setup and results can be further evaluated by exploring
   Appendices A through G.

   The release of VMware vSphere 5 is accompanied by many new and improved
   features. VMware vSphere Storage Distributed Resource Scheduler (SDRS)
   aggregates storage resources from several storage volumes into a single pool, and
    simplifies storage management. Profile-Driven Storage enables easy and accurate
    selection of the correct datastore on which to deploy virtual machines. Storage I/O
    Control provides cluster-wide I/O sharing and limits for datastores. VAAI, integrated
    into vSphere 5, provides enhanced performance via storage array exploitation
    without the need for a plug-in. VASA delivers real-time VMware administrator
    discovery of storage: capacity, capabilities, events and alerts. With the addition of
    these new features, IT professionals can realize more efficient utilization of storage
    resources to help achieve higher productivity at reduced costs.




For more information regarding VMware vSphere 5 and the IBM XIV Storage System
   Gen3, reference the following links:

   VMware:
   www.vmware.com/products/vsphere/overview.html

   IBM XIV Storage System Gen3
   ibm.com/systems/storage/disk/xiv/resources.html

   Iometer
    Iometer is downloaded from www.iometer.org/ and distributed under the terms of the
    Intel Open Source License. The iomtr_kstat kernel module, as well as other future
    independent components, is distributed under the terms of the GNU Public License.




Appendix A (Iometer for performance testing)

      1. Test objective: Performance of VMware vSphere 5.0 using XIV disk and
         network controllers.

      2. Setup Steps: Create 3 New Virtual Machines on vSphere

          2.1. Download Windows 2008 R2 from the Microsoft website
          www.microsoft.com/en-us/server-cloud/windows-server/2008-r2-trial.aspx.

           2.2. Download the MS 2008 R2 ISO to the vSphere machine.

          2.3. On the vSphere 5.0 machine, open vSphere.




          2.4. Right Click on ESX server and Select “New Virtual Machine.”




          2.5. Select “Name:” Type a name for Virtual Machine; for the tested
          configuration, the name used was “New Virtual Machine.”




2.6. Select “Next.”



          2.7. Select VM Storage.




          2.8. Select “Next.”


          2.9. Select Guest Operating System “Windows” Version type.




2.10. Select “Next.”

          2.11. Select Create Network Connections.
          2.12. Set “How many NICs do you want to connect” to “1.”
          2.13. Select NIC 1.
           2.14. Select the adapter; for this test, "E1000."




          2.15. Select “Next.”

          2.16. Select “Virtual disk size:”




2.17. Select “Next.”
          2.18. Select “Finish” to finish the VM creation.

          2.19. Select the Virtual Machine just created.




          2.20. Right Click on VM.




          2.21. Select “Open Console.”
          2.22. Select “Power on” (Green Arrow).



2.23. Select “CD tool.”




          2.24. Select “Connect to ISO image on local disk.”




          2.25. Select WS 2008 R2 ISO.



2.26. Select “Open.”

           2.27. After completing the Windows Server installation, assign an IP address.




          2.28. Right Click on VM.

          2.29. Select “Open Console.”




           2.30. Run Windows updates and Windows activation.
           2.31. Shut down the Windows server.
           2.32. Install test hard drives (XIV Gen3).
          2.33. Right click on VM.




2.34. Select “Edit Settings”




          2.35. Select “Add”




2.36. Select “Hard Disk”




          2.37. Select “Next” and Select “Next”



          2.38. Select “Disk Size” 40 GB
          2.39. Select “Specify a datastore or datastore cluster:”
          2.40. Select “Browse”




           2.41. Select the appropriate disk volume; in this case, "XIV-ISVX8_X9."




          2.42. Select “OK”




          2.43. Select “Next”




2.44. Select “Next”




          2.45. Select “Finish”


          2.46. Start the VM Select “Power on” (Green Arrow)




2.47. Select “VM.”




          2.48. Select “Guest.”




2.49. Select “Send Crtl+Alt+del.”




          2.50. Enter password

          2.51. Select VM.




          2.52. Select “Guest.”




2.53. Select “Install/Upgrade VMware Tools.”




          2.54. To add newly created disk to Windows server, select “Start.”




          2.55. Right Click “My Computer.”




          2.56. Select “Manage.”



          2.57. Select “Offline disk.”




2.58. Right Click, and select “Online.”




           2.59. Right-click on the volume and select "New Simple Volume."

           Log in to the VM.




          2.60. Select “Next.”
          2.61. Select “Assign Drive” and select “Next.”




2.62. Select “Volume label,” in this case disk 3, and select “Next.”




          2.63. Select “Finish.”




           2.64. Finished.

           2.65. Repeat the above procedure a total of three times to create disk 1,
           disk 2 and disk 3.

           2.66. Now connect to the VM using Remote Desktop (RDP):

           2.67. Download Iometer from this website:
           http://www.Iometer.org/doc/downloads.html

           2.68. Download version 2006.07.27 (or the latest version): the Windows i386
           installer and prebuilt binaries.

          2.69. Download Iometer to the desktop.

          2.70. Double click on Iometer-2006.07.27.win32.i386-setup.

          2.71. Select “Run.”




          2.72. Select “Next.”



          2.73. Read License Agreement.




2.74. Select “I Agree” and select “Next” to choose the components to install.

          2.75. Select “Install.”




          2.76. Select “Finish” to finish installing Iometer.




       3. Test steps to create 3 VMs and test performance via Iometer
          3.1. To run Iometer, select Windows "Start."




          3.2. Select “All Programs.”




3.3. Select “Iometer 2006.07.27” or the latest version available.




          3.4. Select “Iometer”




           3.5. Select "+" under "All Managers."
           3.6. Create a worker; select "Worker 1."




          3.7. Select desired drive to use, in this case, E: disk 1.




           3.8. Add network targets.




3.9. Select “Worker 2.”




          3.10. Select Network from the Network targets tab.




          3.11. Select “Access Specifications.”



3.12. Select “New.”




          3.13. Select “Name.”
          3.14. Create test name.
           3.15. Select "Transfer Request Size" and set to "2 KB."
           3.16. Change to 8 KB to mimic a SQL server workload.
          3.17. Select “Percent Read/Write Distribution.”
          3.18. Change specification to 30% Write and 70% Read.




3.19. After Changes, Select “OK.”
          3.20. Scroll down to find test name.




          3.21. Select test name.
          3.22. Select “Add.”




3.23. Select “Test Setup.”




          3.24. Select “Test Description.”
          3.25. Type test name.
          3.26. Select “Run Time.”
          3.27. Set to 1 hour.

          3.28. Select “Results Display.”

          3.29. Select “Update Frequency (seconds).”
          3.30. Set Update Frequency to 1 second to view results.




3.31. Select Start (Green Flag).
          3.32. Select “File name.”




          3.33. Select “Save.”

          The test will run for 1 hour.




3.34. Start Results.


      4.   Iometer performance results




               (3) VMs, (1) CPU, 4 GB memory, and (3) 40 GB XIV LUNs were used for
               this test.

              The results screen shows the achieved IOPS, throughput and CPU
              utilization for VM1; the tests were repeated for VM2 and VM3. These
              tests showed the possible throughput of 3 VMs and the IBM XIV Gen3
              storage array configuration without any special tuning. The 3 VMs
              averaged approximately 75,000 IOPS with <0.5ms latency.




Appendix B (vSphere vMotion)

       1. Test object: vSphere vMotion - Transfer time of VMs to local disk (VMware 5.0)

      2. Setup steps: This section demonstrates vMotion using local disk

               2.1. Download a stopwatch from http://download.cnet.com/Stop-
               Watch/3000-2350_4-10773544.html?tag=mncol;5 and install it.

              Screen Setup for test:




      3. Test Steps: Test transfer time to migrate data to local disk
         3.1. Select Virtual Machine (VM).




3.2. Right click on VM.
              3.3. Select “Migrate.”




              3.4. Select “Change datastore” and select “Next.”




3.5. Select a Local Datastore “ISVX8-local-0” and select “Next.”




              Start of test
               3.6. Start the stopwatch; select "Restart."




3.7. At the Completion of the test, select “Pause.”




              End of the test

      4. Results:
            The recorded transfer time migrating VMs to local disk (Vmware 5.0) was
            10 min 3 seconds.




Appendix C (Transfer times of VMs to XIV LUNs (SAN))
       1. Test object: Transfer times of VMs to XIV LUNs (SAN)

       2. Setup steps: This section demonstrates vMotion using XIV
            2.1. Download a stop watch from http://download.cnet.com/Stop-
            Watch/3000-2350_4-10773544.html?tag=mncol;5 and install.

              Screen Setup for test




       3. Test steps: Test transfer time to migrate data to XIV disk.
            3.1. Select Virtual Machine (VM).




              3.2. Right click on VM.
              3.3. Select “Migrate.”




3.4. Select “Change datastore” and select “Next.”




               3.5. Select the XIV LUN "XIV_ISVX8_X9" and select "Next."



Start of test
              3.6. Start the Stopwatch
              3.7. Select “Finish”




3.8. At the Completion of the test, select “Pause” and record the total
              migration time.




              End of test

       4. Results:
             The recorded transfer time migrating VMs to the XIV Gen3 (VMware 5.0) was
             1 minute 31 seconds.

              For the two tested VMs, transferring all data from the server to XIV was
              6.7 times faster than from the server to the local disk for the tested
              configuration, demonstrating the efficiency and synergy using XIV and
              vSphere vMotion.




Appendix D (vSphere High Availability)

       1. Test object: vSphere High Availability - Failover of an ESX server

       2. Setup steps: Create a VMware vSphere 5.0 cluster environment

              2.1. In the VMware cluster environment, select a VM that is not Fault
              Tolerant.

              2.2. Right Click on the VM.




              2.3. Select “Edit Settings.”

               2.4. Ensure that the VM uses an XIV Gen3 hard disk as in the
               example below; select “OK.”




2.5. Right click on VM.
              2.6. Select “Fault Tolerance.”




              2.7. Select “Turn On Fault Tolerance.”




2.8. Select “Yes.”




              2.9. Results




              Fault Tolerance is now active.




2.10. Right click on VM.
              2.11. Select “Power” and “Power On.”




               2.12. Setup is complete. (A scripted equivalent of enabling Fault
               Tolerance is sketched below.)
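
             As a hedged aside, the Fault Tolerance setup above can also be
             performed through the vSphere API. This sketch assumes the pyVmomi
             session and find_by_name() helper from the Appendix B sketch; the
             VM name is a placeholder, and the “Test Failover” action in the
             test steps below remains a client operation.

```python
from pyVim.task import WaitForTask
from pyVmomi import vim

vm = find_by_name(vim.VirtualMachine, "TestVM")   # placeholder VM name

# Create and register the FT secondary (the API equivalent of
# "Turn On Fault Tolerance"); omitting the host argument lets the
# cluster choose placement for the secondary.
WaitForTask(vm.CreateSecondaryVM_Task())

# Power on the primary if it is not already running (steps 2.10-2.11).
if vm.runtime.powerState != vim.VirtualMachine.PowerState.poweredOn:
    WaitForTask(vm.PowerOnVM_Task())
```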




      3. Test Steps:
             3.1. Right Click on VM.
             3.2. Select “Fault Tolerance.”


3.3. Make note of the “Host and Storage.”
              3.4. Select “Test Failover.”




             Observe that “VM and Storage” has moved to a new host.
      4. Results:

               The VM moved to a new ESXi host, and the storage seamlessly moved
               with it.


Appendix E (vSphere Storage DRS)
      1. Test object: vSphere Storage DRS

      2. Setup steps: Demonstrate SDRS starting from the VMware vSphere 5.0
         home screen

              2.1. Select “Inventory.”
               2.2. Select “Datastores and Datastore Clusters.”




2.3. Right click on “Datacenter” and select “New Datastore Cluster.”








2.4. Enter the datastore cluster name.




              2.5. Select “Turn on Storage DRS,” and select “Next.”
              2.6. Select “Fully Automated” and select “Next.”



2.7. Select “Show Advanced Options.”
               2.8. Review the settings (use the defaults), and select “Next.”




              2.9. Select “Cluster,” and select “Next.”




2.10. Select the datastore to use, then select “Next.”




              2.11. Review results under “Ready to Complete.”




2.12. Select “Finish.”

               The new datastore cluster shows that all operations completed
               successfully.




2.13. Build a new virtual machine.
              2.14. Right Click on “Cluster.”




              2.15. Select “New Virtual Machine,” then select “Next.”




2.16. Name the virtual machine and select “Next.”




               2.17. Select a host, and then select “Next.”




2.18. Select the datastore cluster, then select “Next.”




              2.19. Select Guest Operating System, and select “Next.”




              2.20. Select “Create Network Connections,” and select “Next.”




2.21. Specify the virtual disk size, and select “Next.”




              2.22. Select “Show all storage recommendations.”




2.23. Select “Continue.”

              2.24. Select “Apply Recommendations.”




2.25. Observe that “Apply Storage DRS recommendations” has
              completed.

              Exploring the Datastore Cluster




               2.26. Select “Datastores and Datastore Clusters” from the vSphere
               Home screen.

              2.27. Select datastore.




2.28. Right click.




              2.29. Right click on new VM created.
              2.30. Select “Migrate.”




              2.31. Select “Change datastore,” and select “Next.”




2.32. Select “XIV_ISVX8_X9” and select “Next.”




              2.33. Select “Finish.”




SDRS setup is complete. (A scripted equivalent of the datastore cluster setup in steps 2.3–2.12 is sketched below.)
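
             The datastore cluster creation can likewise be scripted. A minimal
             sketch, assuming the pyVmomi session, content object, and
             find_by_name() helper from the Appendix B sketch; the datacenter,
             cluster, and datastore names are placeholders for this lab.

```python
from pyVim.task import WaitForTask
from pyVmomi import vim

datacenter = find_by_name(vim.Datacenter, "Datacenter")

# A datastore cluster is a StoragePod created in the datacenter's datastore folder.
pod = datacenter.datastoreFolder.CreateStoragePod(name="XIV-Datastore-Cluster")

# Move the member datastores into the pod (a StoragePod is also a folder).
members = [find_by_name(vim.Datastore, n)
           for n in ("XIV_ISVX8_X9", "XIV_ISVX8_X10")]
WaitForTask(pod.MoveIntoFolder_Task(list=members))

# Enable Storage DRS on the pod in fully automated mode (steps 2.5-2.6).
sdrs_spec = vim.storageDrs.ConfigSpec(
    podConfigSpec=vim.storageDrs.PodConfigSpec(
        enabled=True, defaultVmBehavior="automated"))
srm = content.storageResourceManager
WaitForTask(srm.ConfigureStorageDrsForPod_Task(pod=pod, spec=sdrs_spec,
                                               modify=True))
```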

      3. Test Steps:
             3.1. Select the datastore cluster.
            3.2. Select “Run Storage DRS.”




The “Relocate virtual machine” task shows a status of “Completed.”

      4. Results: Storage DRS (SDRS)
         When an imbalance occurs on a datastore, Storage DRS recommends
         migrating one or more virtual machines, and it will make multiple
         recommendations if needed to resolve the imbalance.
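
         For reference, the “Run Storage DRS” action in step 3.2 corresponds to
         refreshing and applying the pod's recommendations through the API. A
         hedged continuation of the previous sketch, reusing its pod and srm
         variables:

```python
from pyVim.task import WaitForTask

# Recompute Storage DRS recommendations for the pod, then apply any pending ones.
srm.RefreshStorageDrsRecommendation(pod=pod)
keys = [rec.key for rec in pod.podStorageDrsEntry.recommendation]
if keys:
    WaitForTask(srm.ApplyStorageDrsRecommendation_Task(key=keys))
```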




Appendix F (Profile-Driven Storage)
      1. Test object: Profile-Driven Storage

      2. Setup steps: This test demonstrates Profile-Driven Storage

               2.1. Select “VM Storage Profiles” from the vSphere Home window.




              2.2. Select “Enable VM Storage Profiles.”




              2.3. Select “Enable.”




2.4. Note that VM Storage Profile Status is enabled and select “Close.”




              2.5. Select “Manage Storage Capabilities.”




2.6. Select “Add.”




              2.7. Select “Name,” type “Gold.”
              2.8. Select “Description,” type “Gold Storage Capability.”




               2.9. Select “OK” and “Close.”




2.10. Note the Recent Tasks pane, and select “Home.”




               2.11. Select “Datastores and Datastore Clusters” from the vSphere
               Home window.




               2.12. Choose the disk for the user-defined storage capability:
               2.13. Select the disk and right click.




              2.14. Select “Assign User-Defined Storage Capability.”



2.15. Select the “Name” pull down, select “Gold,” and select “OK.”




              2.16. Select “Summary.”




2.17. Select “Home.”
              2.18. Select “VM Storage Profiles” from the vSphere Home screen.




              2.19. Select “Create VM Storage Profile.”




               2.20. Select “Name” and type “Gold Profile.”
               2.21. Select “Description” and type “Storage profile for VMs that
               should reside on Gold storage,” then select “Next.”




2.22. Select “Gold,” and select “Next.”




2.23. Select “Finish.”

              2.24. Select “Gold Profile.”
              2.25. Select “Summary.”




              2.26. Observe the settings for later comparison.




3. Test Steps:
            3.1. Assign a VM storage profile to a VM.
            3.2. Select “Home.”




3.3. Select “Hosts and Clusters.”




              3.4. Select a VM.




              3.5. Right click the VM.
              3.6. Select “VM Storage Profile.”
              3.7. Select “Manage Profiles.”




3.8. Select “Home VM Storage Profile.”
              3.9. Select “Gold Profile” from pull down menu.
               3.10. Select “Propagate to disks,” and select “OK.”




3.11. Observe the setting in “VM Storage Profiles for virtual disks” for
               future use and select “OK.”




               3.12. Observe in the VM Storage Profiles section that the profile is
               “Noncompliant,” because the storage characteristics of the target
               datastore do not meet the profile’s requirements.




              3.13. Right click on VM.




3.14. Select “Migrate.”




              3.15. Select “Change datastore,” and select “Next.”




3.16. Select “VM Storage Profile.”
              3.17. Select “Gold Profile.”
               3.18. Select a compatible datastore, and select “Next.”




              Note: the VM is being migrated.




              3.19. Select “Refresh.”

      4. Results: Profile-Driven Storage
             Note that the VM storage profile is now Compliant.




Gold VM Storage Profile:




               This test demonstrates that, with Profile-Driven Storage, a user can
               ensure that physical storage characteristics remain consistent
               across VM migrations.
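
               As a hedged illustration only: later vSphere releases (5.5 and
               beyond) let a relocation carry the VM storage profile directly in
               the API, which corresponds to steps 3.14 through 3.18. The profile
               ID is a placeholder that would normally come from the
               storage-policy (SPBM) service; the session and find_by_name()
               helper are from the Appendix B sketch.

```python
from pyVim.task import WaitForTask
from pyVmomi import vim

vm = find_by_name(vim.VirtualMachine, "TestVM")       # placeholder VM name
ds = find_by_name(vim.Datastore, "XIV_ISVX8_X9")      # a Gold-capable datastore

# Attach the storage profile to the relocation so the destination is
# provisioned against the same profile; the ID is a placeholder.
gold = vim.vm.DefinedProfileSpec(profileId="<gold-profile-id>")
spec = vim.vm.RelocateSpec(datastore=ds, profile=[gold])
WaitForTask(vm.RelocateVM_Task(spec=spec))
```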




Appendix G (Storage I/O Control)
      1. Test object: Storage I/O Control

      2. Setup steps: Create a VM with 2 hard drives to demonstrate Storage I/O
         Control




              2.1. Start VM.
               2.2. Use Remote Desktop (RDP) to connect to the VM.
              2.3. Install Iometer from http://www.Iometer.org/doc/downloads.html




              2.4. Once installed, run Iometer.




2.5. Select “Worker 1.”




              2.6. Select “E: disk1.”
              2.7. Select “Access Specifications,” and select “New.”




2.8. Set “Transfer Request Size” to 10 Megabytes, 2 Kilobytes, 0 Bytes.
               2.9. Set “Percent Read/Write Distribution” to 75% Write / 25% Read
               and select “OK” (these settings provide a heavier load on the VM).




              2.10. Select “Untitled 1” under Global Access Specifications.




2.11. Select “Results Display.”

               2.12. Set “Update Frequency” to 1 second.
               2.13. Select the green flag to start.




              2.14. Select “Save” to save results.




2.15. Return to vSphere.
              2.16. Select “Home.”
              2.17. Select “Datastores and Datastore Clusters.”




               2.18. Select the host running the VM.
               Note that Storage I/O Control is “Disabled.”




2.19. Select “Properties.”
              2.20. Set “Storage I/O Control” to “Enabled.”
              2.21. Select “Advanced.”




              2.22. Select “OK.”




2.23. Select “OK.”




              2.24. Select “Close.”




               2.25. Go to the VM used for testing and select “Edit Settings.”
              2.26. Select “Resources.”




2.27. Select “Disk,” and select “OK.”




              Note: Storage I/O Control (SIOC) is set on Disk 2.




2.28. Set Hard disk 2 “Shares” to “High” and “Limit – IOPS” to 100. (A scripted equivalent of steps 2.19–2.28 is sketched below.)
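
               Steps 2.19 through 2.28 can also be scripted. A minimal sketch
               under the same assumptions as the Appendix B sketch (pyVmomi
               session, content object, and find_by_name() helper already
               established; the VM and datastore names are placeholders):

```python
from pyVim.task import WaitForTask
from pyVmomi import vim

# Enable Storage I/O Control on the datastore (steps 2.19-2.24).
ds = find_by_name(vim.Datastore, "XIV_ISVX8_X9")      # placeholder datastore name
srm = content.storageResourceManager
iorm_spec = vim.StorageResourceManager.IORMConfigSpec(enabled=True)
WaitForTask(srm.ConfigureDatastoreIORM_Task(datastore=ds, spec=iorm_spec))

# Set shares and an IOPS limit on the VM's second disk (steps 2.25-2.28).
vm = find_by_name(vim.VirtualMachine, "TestVM")       # placeholder VM name
disk = [d for d in vm.config.hardware.device
        if isinstance(d, vim.vm.device.VirtualDisk)][1]   # Hard disk 2
disk.storageIOAllocation = vim.StorageResourceManager.IOAllocationInfo(
    shares=vim.SharesInfo(level=vim.SharesInfo.Level.high),  # "High" = 2000 shares
    limit=100)                                               # IOPS cap
change = vim.vm.device.VirtualDeviceSpec(
    device=disk, operation=vim.vm.device.VirtualDeviceSpec.Operation.edit)
WaitForTask(vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change])))
```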




      3. Test Steps: Demonstrate Storage I/O Control



3.1. Now observe SIOC enforcing the IOPS limit. Go back to the
               vSphere Client Performance tab or the virtual machine’s Iometer
               results to see the number of IOPS currently being generated. The
               value for this exercise is approximately 500–600 IOPS.

              3.2. Go to the VM running Iometer.
              3.3. Stop Iometer.
              3.4. Change “# of Outstanding I/Os” to 65.
              3.5. Restart Iometer.




              3.6. Go to Results Display.




4. Results: Storage I/O Control




               Implementing the Storage I/O Control settings shows a gradual
               movement toward the share-based priorities. The test demonstrates a
               gradual increase in IOPS for the virtual machine with 2,000 shares
               and a gradual decrease in IOPS for the virtual machine with 1,000
               shares. This completes the evaluation of Storage I/O Control.




Trademarks and special notices
© Copyright IBM Corporation 2011. All Rights Reserved.
References in this document to IBM products or services do not imply that IBM intends
to make them available in every country.
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of
International Business Machines Corporation in the United States, other countries, or
both. If these and other IBM trademarked terms are marked on their first occurrence in
this information with a trademark symbol (® or ™), these symbols indicate U.S.
registered or common law trademarks owned by IBM at the time this information was
published. Such trademarks may also be registered or common law trademarks in other
countries. A current list of IBM trademarks is available on the Web at "Copyright and
trademark information" at www.ibm.com/legal/copytrade.shtml.
Java and all Java-based trademarks and logos are trademarks or registered trademarks
of Oracle and/or its affiliates.
Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft
Corporation in the United States, other countries, or both.
Other company, product, or service names may be trademarks or service marks of
others.
Information is provided "AS IS" without warranty of any kind.
All customer examples described are presented as illustrations of how those customers
have used IBM products and the results they may have achieved. Actual environmental
costs and performance characteristics may vary by customer.
Information concerning non-IBM products was obtained from a supplier of these
products, published announcement material, or other publicly available sources and
does not constitute an endorsement of such products by IBM. Sources for non-IBM list
prices and performance numbers are taken from publicly available information, including
vendor announcements and vendor worldwide homepages. IBM has not tested these
products and cannot confirm the accuracy of performance, capability, or any other
claims related to non-IBM products. Questions on the capability of non-IBM products
should be addressed to the supplier of those products.
All statements regarding IBM future direction and intent are subject to change or
withdrawal without notice, and represent goals and objectives only. Contact your local
IBM office or IBM authorized reseller for the full text of the specific Statement of
Direction.
Some information addresses anticipated future capabilities. Such information is not
intended as a definitive statement of a commitment to specific levels of performance,
function or delivery schedules with respect to any future products. Such commitments
are only made in IBM product announcements. The information is presented here to
communicate IBM's current investment and development activities as a good faith effort
to help with our customers' future planning.


Performance is based on measurements and projections using standard IBM
benchmarks in a controlled environment. The actual throughput or performance that any
user will experience will vary depending upon considerations such as the amount of
multiprogramming in the user's job stream, the I/O configuration, the storage
configuration, and the workload processed. Therefore, no assurance can be given that
an individual user will achieve throughput or performance improvements equivalent to
the ratios stated here.
Photographs shown are of engineering prototypes. Changes may be incorporated in
production models.
Any references in this information to non-IBM websites are provided for convenience
only and do not in any manner serve as an endorsement of those websites. The
materials at those websites are not part of the materials for this IBM product and use of
those websites is at your own risk.




IBM Corporation 2011                                                               102 | P a g e

Contenu connexe

Tendances

Advantages of HyperV over vSphere 5.1
Advantages of HyperV over vSphere 5.1Advantages of HyperV over vSphere 5.1
Advantages of HyperV over vSphere 5.1uNIX Jim
 
IBM Tivoli Storage Manager Data Protection for VMware - PCTY 2011
IBM Tivoli Storage Manager Data Protection for VMware - PCTY 2011IBM Tivoli Storage Manager Data Protection for VMware - PCTY 2011
IBM Tivoli Storage Manager Data Protection for VMware - PCTY 2011IBM Sverige
 
VMware Recovery: 77x Faster! NEW ESG Lab Review, with Veeam Backup & Replication
VMware Recovery: 77x Faster! NEW ESG Lab Review, with Veeam Backup & ReplicationVMware Recovery: 77x Faster! NEW ESG Lab Review, with Veeam Backup & Replication
VMware Recovery: 77x Faster! NEW ESG Lab Review, with Veeam Backup & ReplicationSuministros Obras y Sistemas
 
White paper: IBM FlashSystems in VMware Environments
White paper: IBM FlashSystems in VMware EnvironmentsWhite paper: IBM FlashSystems in VMware Environments
White paper: IBM FlashSystems in VMware EnvironmentsthinkASG
 
Total cost comparison: VMware vSphere vs. Microsoft Hyper-V
Total cost comparison: VMware vSphere vs. Microsoft Hyper-VTotal cost comparison: VMware vSphere vs. Microsoft Hyper-V
Total cost comparison: VMware vSphere vs. Microsoft Hyper-VPrincipled Technologies
 
Managing the HCI stack: A comparison of two approaches with Dell EMC VxRail a...
Managing the HCI stack: A comparison of two approaches with Dell EMC VxRail a...Managing the HCI stack: A comparison of two approaches with Dell EMC VxRail a...
Managing the HCI stack: A comparison of two approaches with Dell EMC VxRail a...Principled Technologies
 
VMware vSphere Vs. Microsoft Hyper-V: A Technical Analysis
VMware vSphere Vs. Microsoft Hyper-V: A Technical AnalysisVMware vSphere Vs. Microsoft Hyper-V: A Technical Analysis
VMware vSphere Vs. Microsoft Hyper-V: A Technical AnalysisCorporate Technologies
 
Hyper-V vs. vSphere: Understanding the Differences
Hyper-V vs. vSphere: Understanding the DifferencesHyper-V vs. vSphere: Understanding the Differences
Hyper-V vs. vSphere: Understanding the DifferencesSolarWinds
 
VMware vSphere 6.0 - Troubleshooting Training - Day 1
VMware vSphere 6.0 - Troubleshooting Training - Day 1VMware vSphere 6.0 - Troubleshooting Training - Day 1
VMware vSphere 6.0 - Troubleshooting Training - Day 1Sanjeev Kumar
 
VMWARE VS MS-HYPER-V
VMWARE VS MS-HYPER-VVMWARE VS MS-HYPER-V
VMWARE VS MS-HYPER-VDavid Ramirez
 
VMware vSphere Version Comparison 4.0 to 6.5
VMware  vSphere Version Comparison 4.0 to 6.5VMware  vSphere Version Comparison 4.0 to 6.5
VMware vSphere Version Comparison 4.0 to 6.5Sabir Hussain
 
Ibm smart cloud entry+ for system x administrator guide
Ibm smart cloud entry+ for system x administrator guideIbm smart cloud entry+ for system x administrator guide
Ibm smart cloud entry+ for system x administrator guideIBM India Smarter Computing
 
VMware vSphere 5 seminar
VMware vSphere 5 seminarVMware vSphere 5 seminar
VMware vSphere 5 seminarMarkiting_be
 
Virtual Infrastructure Overview
Virtual Infrastructure OverviewVirtual Infrastructure Overview
Virtual Infrastructure Overviewvalerian_ceaus
 
Using EMC VNX storage with VMware vSphereTechBook
Using EMC VNX storage with VMware vSphereTechBookUsing EMC VNX storage with VMware vSphereTechBook
Using EMC VNX storage with VMware vSphereTechBookEMC
 

Tendances (20)

Advantages of HyperV over vSphere 5.1
Advantages of HyperV over vSphere 5.1Advantages of HyperV over vSphere 5.1
Advantages of HyperV over vSphere 5.1
 
Whitepaper
WhitepaperWhitepaper
Whitepaper
 
IBM Tivoli Storage Manager Data Protection for VMware - PCTY 2011
IBM Tivoli Storage Manager Data Protection for VMware - PCTY 2011IBM Tivoli Storage Manager Data Protection for VMware - PCTY 2011
IBM Tivoli Storage Manager Data Protection for VMware - PCTY 2011
 
VMware Recovery: 77x Faster! NEW ESG Lab Review, with Veeam Backup & Replication
VMware Recovery: 77x Faster! NEW ESG Lab Review, with Veeam Backup & ReplicationVMware Recovery: 77x Faster! NEW ESG Lab Review, with Veeam Backup & Replication
VMware Recovery: 77x Faster! NEW ESG Lab Review, with Veeam Backup & Replication
 
White paper: IBM FlashSystems in VMware Environments
White paper: IBM FlashSystems in VMware EnvironmentsWhite paper: IBM FlashSystems in VMware Environments
White paper: IBM FlashSystems in VMware Environments
 
Vsp 40 admin_guide
Vsp 40 admin_guideVsp 40 admin_guide
Vsp 40 admin_guide
 
Total cost comparison: VMware vSphere vs. Microsoft Hyper-V
Total cost comparison: VMware vSphere vs. Microsoft Hyper-VTotal cost comparison: VMware vSphere vs. Microsoft Hyper-V
Total cost comparison: VMware vSphere vs. Microsoft Hyper-V
 
VMware vSphere5.1 Training
VMware vSphere5.1 TrainingVMware vSphere5.1 Training
VMware vSphere5.1 Training
 
Vmware inter
Vmware interVmware inter
Vmware inter
 
Managing the HCI stack: A comparison of two approaches with Dell EMC VxRail a...
Managing the HCI stack: A comparison of two approaches with Dell EMC VxRail a...Managing the HCI stack: A comparison of two approaches with Dell EMC VxRail a...
Managing the HCI stack: A comparison of two approaches with Dell EMC VxRail a...
 
Vm Vs Hyperv
Vm Vs HypervVm Vs Hyperv
Vm Vs Hyperv
 
VMware vSphere Vs. Microsoft Hyper-V: A Technical Analysis
VMware vSphere Vs. Microsoft Hyper-V: A Technical AnalysisVMware vSphere Vs. Microsoft Hyper-V: A Technical Analysis
VMware vSphere Vs. Microsoft Hyper-V: A Technical Analysis
 
Hyper-V vs. vSphere: Understanding the Differences
Hyper-V vs. vSphere: Understanding the DifferencesHyper-V vs. vSphere: Understanding the Differences
Hyper-V vs. vSphere: Understanding the Differences
 
VMware vSphere 6.0 - Troubleshooting Training - Day 1
VMware vSphere 6.0 - Troubleshooting Training - Day 1VMware vSphere 6.0 - Troubleshooting Training - Day 1
VMware vSphere 6.0 - Troubleshooting Training - Day 1
 
VMWARE VS MS-HYPER-V
VMWARE VS MS-HYPER-VVMWARE VS MS-HYPER-V
VMWARE VS MS-HYPER-V
 
VMware vSphere Version Comparison 4.0 to 6.5
VMware  vSphere Version Comparison 4.0 to 6.5VMware  vSphere Version Comparison 4.0 to 6.5
VMware vSphere Version Comparison 4.0 to 6.5
 
Ibm smart cloud entry+ for system x administrator guide
Ibm smart cloud entry+ for system x administrator guideIbm smart cloud entry+ for system x administrator guide
Ibm smart cloud entry+ for system x administrator guide
 
VMware vSphere 5 seminar
VMware vSphere 5 seminarVMware vSphere 5 seminar
VMware vSphere 5 seminar
 
Virtual Infrastructure Overview
Virtual Infrastructure OverviewVirtual Infrastructure Overview
Virtual Infrastructure Overview
 
Using EMC VNX storage with VMware vSphereTechBook
Using EMC VNX storage with VMware vSphereTechBookUsing EMC VNX storage with VMware vSphereTechBook
Using EMC VNX storage with VMware vSphereTechBook
 

Similaire à VMware vSphere 5 and IBM XIV Gen3 end-to-end virtualization lab report

Why Choose VMware for Server Virtualization
Why Choose VMware for Server VirtualizationWhy Choose VMware for Server Virtualization
Why Choose VMware for Server VirtualizationVMware
 
Whitepaper Server Virtualisation And Storage Management
Whitepaper   Server Virtualisation And Storage ManagementWhitepaper   Server Virtualisation And Storage Management
Whitepaper Server Virtualisation And Storage ManagementAlan McSweeney
 
sp_p_wp_2013_v1_vmware_technology_stack___opportunities_for_isv_s_final
sp_p_wp_2013_v1_vmware_technology_stack___opportunities_for_isv_s_finalsp_p_wp_2013_v1_vmware_technology_stack___opportunities_for_isv_s_final
sp_p_wp_2013_v1_vmware_technology_stack___opportunities_for_isv_s_finalKunal Khairnar
 
SAP with IBM Tivoli FlashCopy Manager for VMware and IBM XIV and IBM Storwize...
SAP with IBM Tivoli FlashCopy Manager for VMware and IBM XIV and IBM Storwize...SAP with IBM Tivoli FlashCopy Manager for VMware and IBM XIV and IBM Storwize...
SAP with IBM Tivoli FlashCopy Manager for VMware and IBM XIV and IBM Storwize...IBM India Smarter Computing
 
White Paper: Using VMware Storage APIs for Array Integration with EMC Symmetr...
White Paper: Using VMware Storage APIs for Array Integration with EMC Symmetr...White Paper: Using VMware Storage APIs for Array Integration with EMC Symmetr...
White Paper: Using VMware Storage APIs for Array Integration with EMC Symmetr...EMC
 
Virtualization Performance on the IBM PureFlex System
Virtualization Performance on the IBM PureFlex SystemVirtualization Performance on the IBM PureFlex System
Virtualization Performance on the IBM PureFlex SystemIBM India Smarter Computing
 
Ds v sphere-enterprise-ent-plus
Ds v sphere-enterprise-ent-plusDs v sphere-enterprise-ent-plus
Ds v sphere-enterprise-ent-plusChau Tuan Nguyen
 
Practical Guide to Business Continuity & Disaster Recovery
Practical Guide to Business Continuity & Disaster RecoveryPractical Guide to Business Continuity & Disaster Recovery
Practical Guide to Business Continuity & Disaster Recoveryatif_kamal
 
Vm Ware X Xen Server
Vm Ware X Xen ServerVm Ware X Xen Server
Vm Ware X Xen ServerAndre Flor
 
Reference Architecture: EMC Infrastructure for VMware View 5.1 EMC VNX Series...
Reference Architecture: EMC Infrastructure for VMware View 5.1 EMC VNX Series...Reference Architecture: EMC Infrastructure for VMware View 5.1 EMC VNX Series...
Reference Architecture: EMC Infrastructure for VMware View 5.1 EMC VNX Series...EMC
 
Virtualization meisen 042811
Virtualization meisen 042811Virtualization meisen 042811
Virtualization meisen 042811Morty Eisen
 
Networker integration for optimal performance
Networker integration for optimal performanceNetworker integration for optimal performance
Networker integration for optimal performanceMohamed Sohail
 
Reference Architecture: EMC Infrastructure for VMware View 5.1 EMC VNX Series...
Reference Architecture: EMC Infrastructure for VMware View 5.1 EMC VNX Series...Reference Architecture: EMC Infrastructure for VMware View 5.1 EMC VNX Series...
Reference Architecture: EMC Infrastructure for VMware View 5.1 EMC VNX Series...EMC
 
SAP with IBM Tivoli FlashCopy Manager for VMware and IBM XIV and IBM Storwize...
SAP with IBM Tivoli FlashCopy Manager for VMware and IBM XIV and IBM Storwize...SAP with IBM Tivoli FlashCopy Manager for VMware and IBM XIV and IBM Storwize...
SAP with IBM Tivoli FlashCopy Manager for VMware and IBM XIV and IBM Storwize...IBM India Smarter Computing
 
Track 1 Virtualizing Critical Applications with VMWARE VISPHERE by Roshan Shetty
Track 1 Virtualizing Critical Applications with VMWARE VISPHERE by Roshan ShettyTrack 1 Virtualizing Critical Applications with VMWARE VISPHERE by Roshan Shetty
Track 1 Virtualizing Critical Applications with VMWARE VISPHERE by Roshan ShettyEMC Forum India
 
WHITE PAPER▶ Protecting VMware Environments with Backup Exec 15
WHITE PAPER▶ Protecting VMware Environments with Backup Exec 15WHITE PAPER▶ Protecting VMware Environments with Backup Exec 15
WHITE PAPER▶ Protecting VMware Environments with Backup Exec 15Symantec
 
What's New in VMware vSphere 5.0 - Storage
What's New in VMware vSphere 5.0 - StorageWhat's New in VMware vSphere 5.0 - Storage
What's New in VMware vSphere 5.0 - StorageVMware
 

Similaire à VMware vSphere 5 and IBM XIV Gen3 end-to-end virtualization lab report (20)

Why Choose VMware for Server Virtualization
Why Choose VMware for Server VirtualizationWhy Choose VMware for Server Virtualization
Why Choose VMware for Server Virtualization
 
Whitepaper Server Virtualisation And Storage Management
Whitepaper   Server Virtualisation And Storage ManagementWhitepaper   Server Virtualisation And Storage Management
Whitepaper Server Virtualisation And Storage Management
 
IBM XIV Gen3 Storage System
IBM XIV Gen3 Storage SystemIBM XIV Gen3 Storage System
IBM XIV Gen3 Storage System
 
SAP Solution On VMware - Best Practice Guide 2011
SAP Solution On VMware - Best Practice Guide 2011SAP Solution On VMware - Best Practice Guide 2011
SAP Solution On VMware - Best Practice Guide 2011
 
sp_p_wp_2013_v1_vmware_technology_stack___opportunities_for_isv_s_final
sp_p_wp_2013_v1_vmware_technology_stack___opportunities_for_isv_s_finalsp_p_wp_2013_v1_vmware_technology_stack___opportunities_for_isv_s_final
sp_p_wp_2013_v1_vmware_technology_stack___opportunities_for_isv_s_final
 
SAP with IBM Tivoli FlashCopy Manager for VMware and IBM XIV and IBM Storwize...
SAP with IBM Tivoli FlashCopy Manager for VMware and IBM XIV and IBM Storwize...SAP with IBM Tivoli FlashCopy Manager for VMware and IBM XIV and IBM Storwize...
SAP with IBM Tivoli FlashCopy Manager for VMware and IBM XIV and IBM Storwize...
 
White Paper: Using VMware Storage APIs for Array Integration with EMC Symmetr...
White Paper: Using VMware Storage APIs for Array Integration with EMC Symmetr...White Paper: Using VMware Storage APIs for Array Integration with EMC Symmetr...
White Paper: Using VMware Storage APIs for Array Integration with EMC Symmetr...
 
Virtualization Performance on the IBM PureFlex System
Virtualization Performance on the IBM PureFlex SystemVirtualization Performance on the IBM PureFlex System
Virtualization Performance on the IBM PureFlex System
 
Ds v sphere-enterprise-ent-plus
Ds v sphere-enterprise-ent-plusDs v sphere-enterprise-ent-plus
Ds v sphere-enterprise-ent-plus
 
Practical Guide to Business Continuity & Disaster Recovery
Practical Guide to Business Continuity & Disaster RecoveryPractical Guide to Business Continuity & Disaster Recovery
Practical Guide to Business Continuity & Disaster Recovery
 
Vm Ware X Xen Server
Vm Ware X Xen ServerVm Ware X Xen Server
Vm Ware X Xen Server
 
Reference Architecture: EMC Infrastructure for VMware View 5.1 EMC VNX Series...
Reference Architecture: EMC Infrastructure for VMware View 5.1 EMC VNX Series...Reference Architecture: EMC Infrastructure for VMware View 5.1 EMC VNX Series...
Reference Architecture: EMC Infrastructure for VMware View 5.1 EMC VNX Series...
 
Virtualization meisen 042811
Virtualization meisen 042811Virtualization meisen 042811
Virtualization meisen 042811
 
Networker integration for optimal performance
Networker integration for optimal performanceNetworker integration for optimal performance
Networker integration for optimal performance
 
Reference Architecture: EMC Infrastructure for VMware View 5.1 EMC VNX Series...
Reference Architecture: EMC Infrastructure for VMware View 5.1 EMC VNX Series...Reference Architecture: EMC Infrastructure for VMware View 5.1 EMC VNX Series...
Reference Architecture: EMC Infrastructure for VMware View 5.1 EMC VNX Series...
 
SAP with IBM Tivoli FlashCopy Manager for VMware and IBM XIV and IBM Storwize...
SAP with IBM Tivoli FlashCopy Manager for VMware and IBM XIV and IBM Storwize...SAP with IBM Tivoli FlashCopy Manager for VMware and IBM XIV and IBM Storwize...
SAP with IBM Tivoli FlashCopy Manager for VMware and IBM XIV and IBM Storwize...
 
Track 1 Virtualizing Critical Applications with VMWARE VISPHERE by Roshan Shetty
Track 1 Virtualizing Critical Applications with VMWARE VISPHERE by Roshan ShettyTrack 1 Virtualizing Critical Applications with VMWARE VISPHERE by Roshan Shetty
Track 1 Virtualizing Critical Applications with VMWARE VISPHERE by Roshan Shetty
 
4AA5-6907ENW
4AA5-6907ENW4AA5-6907ENW
4AA5-6907ENW
 
WHITE PAPER▶ Protecting VMware Environments with Backup Exec 15
WHITE PAPER▶ Protecting VMware Environments with Backup Exec 15WHITE PAPER▶ Protecting VMware Environments with Backup Exec 15
WHITE PAPER▶ Protecting VMware Environments with Backup Exec 15
 
What's New in VMware vSphere 5.0 - Storage
What's New in VMware vSphere 5.0 - StorageWhat's New in VMware vSphere 5.0 - Storage
What's New in VMware vSphere 5.0 - Storage
 

Plus de IBM India Smarter Computing

Using the IBM XIV Storage System in OpenStack Cloud Environments
Using the IBM XIV Storage System in OpenStack Cloud Environments Using the IBM XIV Storage System in OpenStack Cloud Environments
Using the IBM XIV Storage System in OpenStack Cloud Environments IBM India Smarter Computing
 
TSL03104USEN Exploring VMware vSphere Storage API for Array Integration on th...
TSL03104USEN Exploring VMware vSphere Storage API for Array Integration on th...TSL03104USEN Exploring VMware vSphere Storage API for Array Integration on th...
TSL03104USEN Exploring VMware vSphere Storage API for Array Integration on th...IBM India Smarter Computing
 
A Comparison of PowerVM and Vmware Virtualization Performance
A Comparison of PowerVM and Vmware Virtualization PerformanceA Comparison of PowerVM and Vmware Virtualization Performance
A Comparison of PowerVM and Vmware Virtualization PerformanceIBM India Smarter Computing
 
IBM pureflex system and vmware vcloud enterprise suite reference architecture
IBM pureflex system and vmware vcloud enterprise suite reference architectureIBM pureflex system and vmware vcloud enterprise suite reference architecture
IBM pureflex system and vmware vcloud enterprise suite reference architectureIBM India Smarter Computing
 

Plus de IBM India Smarter Computing (20)

Using the IBM XIV Storage System in OpenStack Cloud Environments
Using the IBM XIV Storage System in OpenStack Cloud Environments Using the IBM XIV Storage System in OpenStack Cloud Environments
Using the IBM XIV Storage System in OpenStack Cloud Environments
 
All-flash Needs End to End Storage Efficiency
All-flash Needs End to End Storage EfficiencyAll-flash Needs End to End Storage Efficiency
All-flash Needs End to End Storage Efficiency
 
TSL03104USEN Exploring VMware vSphere Storage API for Array Integration on th...
TSL03104USEN Exploring VMware vSphere Storage API for Array Integration on th...TSL03104USEN Exploring VMware vSphere Storage API for Array Integration on th...
TSL03104USEN Exploring VMware vSphere Storage API for Array Integration on th...
 
IBM FlashSystem 840 Product Guide
IBM FlashSystem 840 Product GuideIBM FlashSystem 840 Product Guide
IBM FlashSystem 840 Product Guide
 
IBM System x3250 M5
IBM System x3250 M5IBM System x3250 M5
IBM System x3250 M5
 
IBM NeXtScale nx360 M4
IBM NeXtScale nx360 M4IBM NeXtScale nx360 M4
IBM NeXtScale nx360 M4
 
IBM System x3650 M4 HD
IBM System x3650 M4 HDIBM System x3650 M4 HD
IBM System x3650 M4 HD
 
IBM System x3300 M4
IBM System x3300 M4IBM System x3300 M4
IBM System x3300 M4
 
IBM System x iDataPlex dx360 M4
IBM System x iDataPlex dx360 M4IBM System x iDataPlex dx360 M4
IBM System x iDataPlex dx360 M4
 
IBM System x3500 M4
IBM System x3500 M4IBM System x3500 M4
IBM System x3500 M4
 
IBM System x3550 M4
IBM System x3550 M4IBM System x3550 M4
IBM System x3550 M4
 
IBM System x3650 M4
IBM System x3650 M4IBM System x3650 M4
IBM System x3650 M4
 
IBM System x3500 M3
IBM System x3500 M3IBM System x3500 M3
IBM System x3500 M3
 
IBM System x3400 M3
IBM System x3400 M3IBM System x3400 M3
IBM System x3400 M3
 
IBM System x3250 M3
IBM System x3250 M3IBM System x3250 M3
IBM System x3250 M3
 
IBM System x3200 M3
IBM System x3200 M3IBM System x3200 M3
IBM System x3200 M3
 
IBM PowerVC Introduction and Configuration
IBM PowerVC Introduction and ConfigurationIBM PowerVC Introduction and Configuration
IBM PowerVC Introduction and Configuration
 
A Comparison of PowerVM and Vmware Virtualization Performance
A Comparison of PowerVM and Vmware Virtualization PerformanceA Comparison of PowerVM and Vmware Virtualization Performance
A Comparison of PowerVM and Vmware Virtualization Performance
 
IBM pureflex system and vmware vcloud enterprise suite reference architecture
IBM pureflex system and vmware vcloud enterprise suite reference architectureIBM pureflex system and vmware vcloud enterprise suite reference architecture
IBM pureflex system and vmware vcloud enterprise suite reference architecture
 
X6: The sixth generation of EXA Technology
X6: The sixth generation of EXA TechnologyX6: The sixth generation of EXA Technology
X6: The sixth generation of EXA Technology
 

Dernier

Streamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupStreamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupFlorian Wilhelm
 
Vector Databases 101 - An introduction to the world of Vector Databases
Vector Databases 101 - An introduction to the world of Vector DatabasesVector Databases 101 - An introduction to the world of Vector Databases
Vector Databases 101 - An introduction to the world of Vector DatabasesZilliz
 
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)Mark Simos
 
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage CostLeverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage CostZilliz
 
Gen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfGen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfAddepto
 
Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Scott Keck-Warren
 
What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024Stephanie Beckett
 
Commit 2024 - Secret Management made easy
Commit 2024 - Secret Management made easyCommit 2024 - Secret Management made easy
Commit 2024 - Secret Management made easyAlfredo García Lavilla
 
Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 365Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 3652toLead Limited
 
Human Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsHuman Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsMark Billinghurst
 
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
"Federated learning: out of reach no matter how close",Oleksandr LapshynFwdays
 
My Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationMy Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationRidwan Fadjar
 
Developer Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQLDeveloper Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQLScyllaDB
 
DevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache MavenDevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache MavenHervé Boutemy
 
Connect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationConnect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationSlibray Presentation
 
Unraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdfUnraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdfAlex Barbosa Coqueiro
 
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmaticsKotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmaticscarlostorres15106
 
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024BookNet Canada
 
Search Engine Optimization SEO PDF for 2024.pdf
Search Engine Optimization SEO PDF for 2024.pdfSearch Engine Optimization SEO PDF for 2024.pdf
Search Engine Optimization SEO PDF for 2024.pdfRankYa
 
Anypoint Exchange: It’s Not Just a Repo!
Anypoint Exchange: It’s Not Just a Repo!Anypoint Exchange: It’s Not Just a Repo!
Anypoint Exchange: It’s Not Just a Repo!Manik S Magar
 

Dernier (20)

Streamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupStreamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project Setup
 
Vector Databases 101 - An introduction to the world of Vector Databases
Vector Databases 101 - An introduction to the world of Vector DatabasesVector Databases 101 - An introduction to the world of Vector Databases
Vector Databases 101 - An introduction to the world of Vector Databases
 
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
 
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage CostLeverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
 
Gen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfGen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdf
 
Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024
 
What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024
 
Commit 2024 - Secret Management made easy
Commit 2024 - Secret Management made easyCommit 2024 - Secret Management made easy
Commit 2024 - Secret Management made easy
 
Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 365Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 365
 
Human Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsHuman Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR Systems
 
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
 
My Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationMy Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 Presentation
 
Developer Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQLDeveloper Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQL
 
DevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache MavenDevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache Maven
 
Connect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationConnect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck Presentation
 
Unraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdfUnraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdf
 
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmaticsKotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
 
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
 
Search Engine Optimization SEO PDF for 2024.pdf
Search Engine Optimization SEO PDF for 2024.pdfSearch Engine Optimization SEO PDF for 2024.pdf
Search Engine Optimization SEO PDF for 2024.pdf
 
Anypoint Exchange: It’s Not Just a Repo!
Anypoint Exchange: It’s Not Just a Repo!Anypoint Exchange: It’s Not Just a Repo!
Anypoint Exchange: It’s Not Just a Repo!
 

VMware vSphere 5 and IBM XIV Gen3 end-to-end virtualization lab report

  • 1. VMware vSphere 5 and IBM XIV Gen3 end-to-end virtualization Lab report: vSphere 5, vMotion, HA, SDRS, I/O Control, vCenter, VAAI and VASA IBM Corporation 2011 1|Page
  • 2. Contents 1. Executive summary .............................................................................................................. 4 2. Introduction ............................................................................................................................ 5 2.1. VMware vSphere 5 features and benefits ................................................................. 5 2.2. Introduction to new XIV Gen3 features ...................................................................... 6 2.3. Testing goals .................................................................................................................. 7 2.4. Description of the equipment....................................................................................... 7 3. Test structure......................................................................................................................... 7 3.1. Hardware setup.............................................................................................................. 7 3.1.2. ISCSI configuration..............................................................................................................8 3.1.3. VMware vSphere..................................................................................................................8 3.2. VMware 5.0 environment Software setup installation.............................................. 8 3.2.1. VMware 5.0 Configuration ..................................................................................................8 3.2.2. VM OS software ...................................................................................................................8 3.2.3. Testing software...................................................................................................................9 4. Test procedures .................................................................................................................... 9 4.1. Iometer for performance testing .................................................................................. 9 4.1.1. Disk and network controller performance.........................................................................9 4.1.2. Bandwidth and latency capabilities of buses...................................................................9 4.2. vSphere vMotion.......................................................................................................... 10 4.2.1. vSphere vMotion - Transfer time of VMs to a local disk (DAS) ..................................10 4.2.2. vSphere vMotion - Transfer times of VMs to XIV LUN (SAN).....................................10 4.3. vSphere High Availability............................................................................................ 11 4.5. Profile-Driven Storage ................................................................................................ 13 4.6. vSphere Storage I/O Control ..................................................................................... 14 4.7. vCenter.......................................................................................................................... 15 4.8. VMware vSphere Storage API Program .................................................................. 15 4.8.1. vSphere Storage APIs for Array Integration (VAAI) .....................................................15 • Full copy, Hardware-Assisted Locking, and Block Zeroing .........................................16 4.8.2. 
vStorage APIs for Storage Awareness (VASA).............................................................19 IBM Corporation 2011 2|Page
  • 3. 5. Conclusion ........................................................................................................................... 20 Appendix A (Iometer for performance testing) ...................................................................... 22 Appendix B (vSphere vMotion) ................................................................................................ 47 Appendix C (Transfer times of VMs to XIV LUNs (SAN))................................................... 51 Appendix D ( vSphere High Availability)................................................................................. 55 Appendix E (vSphere Storage DRS)....................................................................................... 59 Appendix F (Profile-Driven Storage) ....................................................................................... 75 Appendix G (Storage I/O Control) ........................................................................................... 90 Trademarks and special notices ............................................................................................ 101 IBM Corporation 2011 3|Page
  • 4. 1. Executive summary The values of server virtualization are well understood today. Customers implement server virtualization to increase server utilization, handle peak loads efficiently, decrease total cost of ownership (TCO), and streamline server landscapes. Similarly, storage virtualization helps to address the same challenges as server virtualization. Storage virtualization also expands beyond the boundaries of physical resources and helps to control how IT infrastructures adjust to rapidly changing business demands. Storage virtualization benefits customers through improved physical resource utilization and improved hardware efficiency, as well as reduced power and cooling expenses. In addition, consolidation of resources obtained through virtualization offers measurable returns on investment for today’s businesses. Finally, virtualization serves as one of the key enablers of cloud solutions, which are designed to deliver services economically and on demand. The features of VMware vSphere 5.0 and IBM XIV® Gen3 storage together build a powerful end to end virtualized infrastructure, covering not only servers and storage, but also end-to-end infrastructure management, leading to more-efficient and higher- performing applications. VMware is a leading manufacturer of virtualization software. VMware vSphere 5 is the first version of VMware vSphere built exclusively on ESXi, a hypervisor purpose- built for virtualization that runs independently from a general purpose operating system. With an ultra-thin architecture, ESXi delivers industry-leading performance, reliability and scalability all within a footprint of less than 100 MB. The result is streamlined deployment and configuration as well as simplified patching, updating and better security. The IBM XIV Storage System Gen3 uses an advanced storage fabric architecture built for today’s dynamic data centers with an eye towards tomorrow. With industry leading storage software and a high-speed InifiniBand fabric, the XIV Gen3 delivers storage features and performance demanded in VMware infrastructures including: • Automation and simplicity • Multi-level integration with vSphere • Centralized management in vCenter • vStorage APIs for Array Integration (VAAI) • vStorage APIs for Storage Awareness integration (VASA) • Storage Replication Adapter (SRA) for Site Recovery Manager (SRM) • Engineering-level collaboration for vSphere 5, and beyond A global partnership with IBM and VMware coupled with the forward thinking architecture of IBM XIV Gen3 Storage System provide a solid foundation for virtual infrastructures today and into the future. On top of this solid foundation, VMware vSphere 5.0 and IBM XIV Gen3 complement each other to create a strong virtualization environment. Evidence of how seamlessly these features work together IBM Corporation 2011 4|Page
  • 5. to provide this powerful virtualized environment are found in the following sections. Testing details can be found in Appendices A through G. 2. Introduction 2.1. VMware vSphere 5 features and benefits Enhancements and new features in VMware vSphere 5 are designed to help deliver improved application performance and availability for all business-critical applications. VMware vSphere 5 introduces advanced automation capabilities including: • Four times larger virtual machines (VMs) scale to support any application. With VMware vSphere 5, VMware helps make it easier for customers to virtualize. VMware vSphere 5 is capable of running VMs four times more powerful than VMware vSphere 4, supporting up to 1 terabyte of memory and up to 32 virtual processors. These VMs are able to process in excess of 1 million I/O operations per second, helping surpass current requirements of the most resource-intensive applications. For example, VMware vSphere 5 is able to support a database that processes more than two billion transactions per day. • Updates to vSphere High Availability (HA) offer reliable protection against unplanned downtime. VMware vSphere 5 features a new HA architecture that is easier to set up than with the previous vSphere 4.1 release (customers can get their applications set up with HA in minutes), is more scalable, and offers availability guarantees. • Intelligent Policy Management: Three new automation advancements deliver cloud agility. VMware vSphere 5 introduces three new features that automate datacenter resource management to help IT respond to the business faster while reducing operating expenses. These features deliver intelligent policy management: A “set it and forget it” approach to data center resource management. Customers define the policy and establish the operating parameters, and VMware vSphere 5 does the rest. VMware vSphere 5 intelligent policy management features include: • Auto-Deploy enables automatic server deployment “on the fly” and e.g. reduces the time that it takes to deploy a non-virtualized data center with 40 servers from 20 hours to 10 minutes. After the servers are up and running, Auto-Deploy also automates the patching process, making it possible to instantly apply patches to many servers at once. • Profile-Driven Storage reduces the number of steps required to select storage resources by grouping storage according to user- defined policies (for example, gold, silver, bronze, and so on). During the provisioning process, customers simply select a level of service for the VM, and VMware vSphere automatically uses the storage resources that best align with that level of service. • Storage Distributed Resource Scheduler (DRS) extends the automated load-balancing capabilities that VMware first introduced in IBM Corporation 2011 5|Page
2006 with DRS to include storage characteristics. After a customer has set the storage policy of a VM, Storage DRS automatically manages the placement and balancing of the VM across storage resources. By automating ongoing resource allocation, Storage DRS eliminates the need for IT to monitor or intervene, while ensuring the VM maintains the service level defined by its policy.

2.2. Introduction to new XIV Gen3 features

The XIV Storage System has achieved rapid market success, with thousands of installations in diverse industries worldwide, including financial services, healthcare, energy, education and manufacturing. IBM XIV integrates easily with virtualization, email, database, analytics and data protection solutions from IBM, SAP, Oracle, SAS, VMware, Symantec and others. The XIV Gen3 model exemplifies the XIV series' evolutionary capability: each hardware component has been upgraded with the latest technologies, while the core of the architecture remains intact. The XIV Gen3 model gives applications a tremendous performance boost, helping customers meet increasing demands with fewer servers and networks.

The common features of the XIV Storage System series enable it to:
• Self-tune and deliver consistently high performance with automated, balanced data placement across all key system resources, eliminating hot spots
• Provide unprecedented data protection and availability through active-active N+1 redundancy of system components and rapid self-healing (< 60 minutes for 2 TB drives)
• Enable unmatched ease of management through automated tasks and an intuitive user interface
• Help promote low TCO enabled by high-density disks and optimal utilization
• Offer seamless and easy-to-use integrated application solutions with the leading host platforms and business applications

XIV Gen3 adds ultra-performance capabilities to the XIV series compared to the previous generation by providing:
• Up to 4 times the throughput, cutting time and boosting performance for business intelligence, archiving and other extremely demanding applications
• Up to 3 times faster response time, enabling faster transaction processing and greater scalability with online transaction processing (OLTP), database and email applications
• Power to serve even more applications from a single system with a comprehensive hardware upgrade that includes InfiniBand inter-module interconnect, larger cache, faster disk controllers, increased processing power, and more Fibre Channel (FC) and iSCSI connectivity
• An option for future upgradeability to solid-state drive (SSD) caching for breakthrough SSD performance levels at a fraction of typical SSD storage costs, combined with very high-density drives helping achieve even lower TCO
2.3. Testing goals

The purpose of the following test cases is to show that VMware vSphere 5 and the IBM XIV Storage System Gen3 seamlessly complement each other as an efficient storage virtualization solution. The testing in this paper is a proof of concept and should not be used as a performance statement.

2.4. Description of the equipment

The test setup utilizes the following IBM equipment:
• (3) IBM System x® 3650 M3 servers
• (2) IBM System Storage® SAN24B-4 Express switches
• (3) QLogic QLE2562 HBAs
• IBM XIV Storage System Gen3 series hardware, firmware version 11.0

3. Test structure

3.1. Hardware setup

Figure 1 shows the vSphere 5.0 System x and XIV reference architecture diagram.
[Figure 1 (architecture diagram) omitted: the servers connect to the XIV over redundant FC fabrics and Ethernet.]
Figure 1. vSphere 5.0 System x and XIV reference architecture diagram

3.1.1. Fibre Channel configuration
• (3) IBM x3650 M3 servers
• (2) SAN24B-4 Express switches (8 Gb) (SAN A and SAN B)
• (3) QLogic QLE2562 HBAs (8 Gb)

3.1.2. iSCSI configuration
• (2) IBM x3650 M3 servers
• 1 Gb Ethernet switch

3.1.3. VMware vSphere
• (1) VMware vSphere VM (Microsoft® Windows® 2008 R2)

3.2. VMware 5.0 environment software setup and installation

3.2.1. VMware 5.0 configuration
• VMware 5.0 Enterprise Plus

3.2.2. VM OS software
• Windows 2008 R2
• Red Hat Enterprise Linux (RHEL) 6.0
3.2.3. Testing software

Iometer for I/O testing. Note: Iometer is downloaded from www.iometer.org and distributed under the terms of the Intel Open Source License. The iomtr_kstat kernel modules, as well as other future independent components, are distributed under the terms of the GNU Public License.

4. Test procedures

4.1. Iometer for performance testing

When implementing storage, whether the storage is directly attached to a server (direct-attached storage, or DAS), connected to a file-based network (network-attached storage, or NAS), or resides on its own dedicated storage network (storage area network, or SAN: Fibre Channel or iSCSI), it is important to understand storage performance. Without this information, managing growth becomes difficult. Iometer can help deliver this critical performance data to help you make better decisions about the storage needed, or about whether the current storage solution can handle an increased load.

4.1.1. Disk and network controller performance

The following two tests show the possible throughput of a three-VM setup and the IBM XIV Gen3 storage array configuration without any special tuning. See "Appendix A (Iometer for performance testing)" for test procedures.

Test object: Performance of disk and network controllers
Setup: (3) VMs, (1) processor, 4 GB memory, (3) 40 GB XIV LUNs for the test
Test steps: Install Windows 2008 R2; install Iometer; set up the test in Iometer with 40 workers, 8 KB block size, 30% writes and 70% reads, run time 1 hour. See "Appendix A (Iometer for performance testing)"
Results: VM (1) 76,737 IOPS; VM (2) 77,296 IOPS; VM (3) 72,248 IOPS
Test notes: This is not a performance measurement test.

4.1.2. Bandwidth and latency capabilities of buses

Test object: Bandwidth and latency capabilities of buses
Setup: (3) VMs, (1) processor, 4 GB memory, (3) 40 GB XIV LUNs for the test
Test steps: Install Windows 2008 R2; install Iometer; set up the test in Iometer with 40 workers,
8 KB block size, 30% writes and 70% reads, run time 1 hour. See "Appendix A (Iometer for performance testing)"
Results: VM (1) 588 MBps, 0.4641 ms average latency; VM (2) 603 MBps, 0.0257 ms average latency; VM (3) 565 MBps, 0.8856 ms average latency (Iometer reports throughput in megabytes per second)
Test notes: This is not a performance measurement test.

The Iometer testing shows that the IBM XIV Gen3 performed exceptionally well, in the 70,000+ IOPS range with average latency well below 1 ms. Figure 2 shows the Iometer measured performance results for VM1.

Figure 2. Iometer VM1 results for 40 workers

4.2. vSphere vMotion

VMware vSphere vMotion technology enables live migration of VMs from server to server. This test demonstrates the difference in transfer times between moving VMs to local server disks (DAS) and moving VMs to the IBM XIV Gen3 (SAN). It also shows that the XIV Gen3 can move data at computer bus speeds.

4.2.1. vSphere vMotion - Transfer time of VMs to a local disk (DAS)

Test object: Transfer time of VMs to local disk
Setup: VM size 14.44 GB
Test steps: See "Appendix B (vSphere vMotion)"
Results: 10 minutes 3 seconds
Test notes: None

4.2.2. vSphere vMotion - Transfer times of VMs to XIV LUN (SAN)

Test object: Transfer time of VMs to XIV LUN
Setup: VM size 14.44 GB
Test steps: See "Appendix C (Transfer times of VMs to XIV LUNs (SAN))"
Results: 1 minute 31 seconds
Test notes: None

Overall test results: For the two tested VMs, transferring all data from the server to the XIV was 6.7x faster than from the server to the local disk for the tested configuration, demonstrating the synergy between XIV and vSphere vMotion. See "Appendix B (vSphere vMotion)" and "Appendix C (Transfer times of VMs to XIV LUNs (SAN))" for test details.

4.3. vSphere High Availability

The vSphere High Availability (HA) feature delivers the reliability and dependability needed by many applications running on virtual machines, independent of the operating system and applications running within them. vSphere HA provides uniform, cost-effective failover protection against hardware and operating system failures within VMware virtualized IT environments.

Test object: Failover of an ESX server
Setup: See "Appendix D (vSphere High Availability)"
Test steps: See "Appendix D (vSphere High Availability)"
Results: When encountering a test-induced failure, the VM moved to a new ESXi host and the storage seamlessly moved with it.
Test notes: None

This test shows that the High Availability feature works seamlessly with the IBM XIV Gen3: on a failure, the VM automatically moves to a new ESXi host and the storage seamlessly moves with it, as shown in Figure 3. See "Appendix D (vSphere High Availability)" for test details.
Figure 3. Demonstrating the HA feature: the VM moves to a new ESXi host along with its storage

4.4. vSphere Storage Distributed Resource Scheduler

vSphere Storage Distributed Resource Scheduler (SDRS) aggregates storage resources from several storage volumes into a single pool and simplifies storage management. In addition to intelligently placing workloads on storage volumes during provisioning based on the available storage resources, SDRS performs ongoing load balancing between volumes to ensure that space and I/O bottlenecks are avoided, according to predefined rules that reflect business needs and changing priorities.

Test object: Testing aggregated storage resources of several storage volumes
Setup and test steps: See "Appendix E (vSphere Storage DRS)"
Results: Passed; storage bottleneck avoided
Test notes: None

When run without SDRS, a storage bottleneck occurs. When SDRS is running, the system performs a task to load-balance the disk. An imbalance on the datastore triggers a Storage DRS recommendation to migrate a virtual machine. Storage DRS makes multiple recommendations to solve this datastore imbalance. See "Appendix E (vSphere Storage DRS)" for test details.
Figure 4. Storage DRS recommendations solve a datastore imbalance

4.5. Profile-Driven Storage

Profile-Driven Storage enables easy and accurate selection of the correct datastore on which to deploy VMs. The selection of the datastore is based on the capabilities of that datastore. Then, throughout the lifecycle of the VM, an administrator can manually check that the underlying storage is still compatible, that is, that it has the correct capabilities. This means that if the VM is cold-migrated or migrated using Storage vMotion, administrators can ensure that the VM moves to storage that meets the same characteristics and requirements as the original source "profile." If the VM is moved without checking the capabilities of the destination storage, the compliance of the VM's physical storage characteristics can still be checked from the user interface at any time, and the administrator can take corrective action if the VM is no longer on a datastore that meets its storage requirements.

Test object: Deploying VMs on Profile-Driven Storage
Setup and test steps: See "Appendix F (Profile-Driven Storage)"
Results: This test demonstrates that with Profile-Driven Storage, a user is able to ensure that physical storage characteristics remain consistent across migrations of a VM.
Test notes: This test shows that the Profile-Driven Storage feature works with IBM XIV Gen3 to help ensure VM storage profiles meet requirements, as shown in Figures 5 and 6.

See "Appendix F (Profile-Driven Storage)" for test details.
Figure 5. The VM storage profile is now compliant
Figure 6. VM storage profile

4.6. vSphere Storage I/O Control

VMware vSphere 5.0 extends Storage I/O Control to provide cluster-wide I/O sharing and limits for datastores. This feature helps ensure that no single virtual machine can create a bottleneck in the IT environment, regardless of the type of shared storage used. Storage I/O Control automatically throttles a VM that is consuming a disproportionate amount of I/O bandwidth when the configured latency threshold has been exceeded, enabling other virtual machines using the same datastore to receive their fair share of I/O performance. Storage DRS and Storage I/O Control work together to prevent violations of service-level agreements while providing long-term and short-term I/O distribution balance.

Test object: Test cluster-wide I/O sharing and limits for datastores
Setup and test steps: See "Appendix G (Storage I/O Control)" for test details
Results: Observed a gradual increase in the IOPS for the VM with 2000 shares and a gradual decrease in IOPS for the VM with 1000 shares.
Test notes: The test results showed that more resources needed to be allocated to one VM to balance the workload. VMware throttled the I/O of the higher-IOPS VM to give more I/O to the slower VM.

This test shows that the Storage I/O Control feature works within VMware 5.0 with no changes to the IBM XIV Gen3. See "Appendix G (Storage I/O Control)" for test details.

4.7. vCenter

VMware vCenter Server is a tool that manages multiple host servers that run VMs. It enables the provisioning of new server VMs, the migration of VMs between host servers, and the creation of a library of standardized VM templates. You can install plug-ins to add several other features: for example, VASA for discovery of storage topology, capabilities, and event and alert status, and SRM for disaster recovery automation that exploits the storage system's business-continuity features.

4.8. VMware vSphere Storage API Program

VMware vSphere provides an API and software development kit (SDK) environment to allow customers and independent software vendors to enhance and extend the functionality and control of vSphere. VMware has created several storage virtualization APIs that help address storage functionality and control.

4.8.1. vSphere Storage APIs for Array Integration (VAAI)

Virtualization administrators look for ways to improve the scalability, performance, and efficiency of their vSphere infrastructure. One way is by utilizing storage integration with the VMware vStorage APIs for Array Integration (VAAI). VAAI is a set of APIs, or primitives, that allow vSphere infrastructures to offload the processing of data-related tasks that can otherwise burden a VMware ESX server. Utilizing a storage platform like XIV with VAAI enabled can provide significant improvements in vSphere performance, scalability, and availability. This capability initially required a plug-in for a private API in vSphere 4.1, but with vSphere 5.0 it follows the T10 SCSI standard. The VAAI driver for XIV enables the following primitives:

• Full copy (also known as hardware copy offload)
  o Benefit: Considerable boost in system performance and fast completion of copy operations; minimizes host processing and network traffic
• Hardware-assisted locking (also known as atomic test and set): replacement of the SCSI-2 lock/reservation in the Virtual Machine File System (VMFS)
  o Benefit: Significantly improves scalability and performance
• Block zeroing (also known as write same)
  o Benefit: Reduces the amount of processor effort and the input/output operations per second (IOPS) required to write zeroes across an entire EagerZeroedThick (EZT) Virtual Machine Disk (VMDK)

The XIV Storage System now provides full support for VAAI. The following sections describe each of these primitives.

Full copy

Tasks such as VM provisioning and VM migration are part of the everyday activities of most VMware administrators. As the virtual environment continues to scale, it is important to monitor the overall impact that these activities have on the VMware infrastructure. Toggle hardware-assisted copy by changing the DataMover.HardwareAcceleratedMove parameter in the Advanced Settings tab in vSphere Virtual Center (set to 1 to enable, 0 to disable). When the value for hardware acceleration is 1, the data path changes for tasks such as Storage vMotion, as illustrated in Figure 7.

[Figure 7 omitted: with full copy, the host sends a copy instruction and the array moves the data itself.]
Figure 7: VAAI full copy primitive

In this instance, the ESX server is removed from the data path of the data copy when hardware copy is enabled. Removing copy transactions from the server workload greatly increases the speed of these copy functions while reducing the impact on the ESX server. How effective is the VAAI full copy offload process? During IBM lab testing, data retrieved from the VMware monitoring tool esxtop showed that commands per second on the ESX host were reduced by a factor of 10. Copy time reduction varies depending on the VM but is usually significant (over 50% for most profiles).
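For administrators who prefer scripting to the Advanced Settings dialog, the sketch below shows one way the same parameter could be inspected and toggled using the open-source pyVmomi library. This is an illustrative sketch only, not part of the lab procedure: the vCenter address and credentials are placeholders, and option value types can vary between vSphere versions, so treat it as a starting point.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Illustrative only: the connection details below are placeholders.
ctx = ssl._create_unverified_context()   # lab shortcut; verify certificates in production
si = SmartConnect(host='vcenter.example.com', user='administrator',
                  pwd='password', sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        opts = host.configManager.advancedOption
        current = opts.QueryOptions('DataMover.HardwareAcceleratedMove')
        print(host.name, current[0].value)
        # Set the value to 1 to enable hardware-assisted copy, 0 to disable it.
        # (Depending on the vSphere version, the value may need to be an int or a long.)
        opts.UpdateOptions(changedValue=[vim.option.OptionValue(
            key='DataMover.HardwareAcceleratedMove', value=1)])
    view.Destroy()
finally:
    Disconnect(si)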
A few examples of this performance boost at customer data centers are shown in Table 1.

Customer            Test     Before VAAI   After VAAI   Time reduction
Major financial     2 VMs    433 sec       180 sec      59%
Electric company    2 VMs    944 sec       517 sec      45%
Petroleum company   40 VMs   1 hour        20 min       67%

Table 1: Field results for VAAI full copy

Full copy effect: Thousands of commands and IOPS on the ESX server are freed up for other tasks, promoting greater scalability.

Hardware-assisted locking (atomic test and set)

Just as important as the demonstrated effect of hardware-assisted copy, the hardware-assisted locking primitive also greatly enhances VMware cluster scalability and disk operations for the clustered file system (VMFS), with tighter granularity and efficiency. It is important to understand why locking occurs in the first place. For block storage environments, VMware datastores are formatted with VMFS. VMFS is a clustered file system that uses Small Computer System Interface (SCSI) reservations to handle distributed lock management. When an ESX server changes the metadata of the file system, the SCSI reservation process ensures that shared resources do not overlap with other connected ESX hosts by obtaining exclusive access to the logical unit number (LUN). A SCSI reservation is created on VMFS when (not a complete list):
• A Virtual Machine Disk (VMDK) is first created
• A VMDK is deleted
• A VMDK is migrated
• A VMDK is created from a template
• A template is created from a VMDK
• VM snapshots are created or deleted
• A VM is switched on or off

Although normal I/O operations do not require this mechanism, these boundary conditions have become more common as features such as vMotion with the Distributed Resource Scheduler (DRS) are used more frequently. This SCSI reservation design led early storage area
network (SAN) best practices for vSphere to dictate a limit on cluster size for block storage (about 8 to 10 ESX hosts). With hardware-assisted locking, as shown in Figure 8, LUN-locking processing is transferred to the storage system. This reduces the number of commands required to access a lock, allows locks to be more granular, and leads to better scalability of the virtual infrastructure.

[Figure 8 omitted: many VMDKs share one VMFS datastore, locked at fine granularity by the array rather than by whole-LUN SCSI reservations.]
Figure 8: VAAI atomic test and set primitive

Hardware-assisted locking effect: Hardware-assisted locking increases VMs per datastore, ESX servers per datastore, and overall performance. This functionality, coupled with 60 processors and 360 GB of cache memory in the XIV Storage System Gen3, helps provide better consolidation, density, and performance capabilities for the most demanding virtual infrastructures.

Block zeroing (write same)

Block zeroing, as shown in Figure 9, is designed to reduce the amount of processor and storage I/O utilization required to write zeroes across an entire EZT VMDK when it is created. With the block zeroing primitive, zeroing operations for EZT VMDK files are offloaded to the XIV Storage System without the host having to issue several commands.

[Figure 9 omitted: with block zeroing enabled, the host issues one write-same command and the array writes the zeroes.]
Figure 9. The VAAI write same (block zeroing) primitive

Block zeroing effect: Block zeroing reduces overhead and provides better performance when creating EZT virtual disks. With XIV, EZT volumes are available immediately through fast write caching and de-staging.

VAAI support on XIV storage systems liberates valuable compute resources in the virtual infrastructure. Offloading processor- and disk-intensive activities from the ESX server to the storage system provides significant improvements in vSphere performance, scalability and availability.

Note: Before installing the VAAI driver for the XIV storage system, ensure that microcode 10.2.4a or later is installed. For vSphere 5.x and later, the VAAI driver is no longer required for IBM storage.

4.8.2. vStorage APIs for Storage Awareness (VASA)

The IBM storage provider for VMware VASA, illustrated in Figure 10, provides even more real-time information about the XIV Storage System. The VMware vStorage APIs for Storage Awareness (VASA) enable vCenter to see the capabilities of storage array LUNs and their corresponding datastores. With visibility into the capabilities underlying a datastore, it is much easier to select the appropriate disk for virtual machine placement. The IBM XIV Storage System VASA provider for VMware vCenter adds:
• Real-time disk status
• Real-time alerts and events from the XIV Storage System to vCenter
• Support for multiple vCenter consoles and multiple XIV Storage Systems
• Continuous monitoring through the storage monitoring service (SMS) for vSphere
• A foundation for future functions such as SDRS and policy-driven storage deployment
Figure 10. VASA block diagram

Adding VASA support, available in vSphere 5, gives VMware and cloud administrators insights that lead to improved availability, performance, and management of the storage infrastructure. In addition to VASA, the XIV Storage System also provides a vCenter plug-in for vSphere 4 and vSphere 5, which extends management of the storage to provisioning, mapping, and monitoring of replication, snapshots, and capacity.

5. Conclusion

As demonstrated through this set of IBM functional tests, VMware vSphere 5 and the IBM XIV Storage System Gen3 seamlessly complement each other as an efficient storage virtualization solution. Evaluation testing verified that VMware vSphere 5 and the IBM XIV Storage System Gen3 consistently performed as expected. The test setup and results can be further evaluated by exploring Appendices A through G.

The release of VMware vSphere 5 is accompanied by many new and improved features. VMware vSphere Storage Distributed Resource Scheduler (SDRS) aggregates storage resources from several storage volumes into a single pool and simplifies storage management. Profile-Driven Storage enables easy and accurate selection of the correct datastore on which to deploy virtual machines. Storage I/O Control provides cluster-wide I/O sharing and limits for datastores. VAAI, integrated into vSphere 5, provides enhanced performance through storage array exploitation without the need for a plug-in. VASA delivers real-time VMware administrator discovery of storage capacity, capabilities, events and alerts. With the addition of these new features, IT professionals can realize more efficient utilization of storage resources to help achieve higher productivity at reduced costs.
For more information regarding VMware vSphere 5 and the IBM XIV Storage System Gen3, refer to the following links:

VMware: www.vmware.com/products/vsphere/overview.html
IBM XIV Storage System Gen3: ibm.com/systems/storage/disk/xiv/resources.html
Iometer: Iometer is downloaded from www.iometer.org/ and distributed under the terms of the Intel Open Source License. The iomtr_kstat kernel module, as well as other future independent components, is distributed under the terms of the GNU Public License.
  • 22. Appendix A (Iometer for performance testing) 1. Test objective: Performance of VMware vSphere 5.0 using XIV disk and network controllers. 2. Setup Steps: Create 3 New Virtual Machines on vSphere 2.1. Download Windows 2008 R2 from the Microsoft website www.microsoft.com/en-us/server-cloud/windows-server/2008-r2-trial.aspx. 2.2. Download the MS 2008 R2 ISO to vSphere machine. 2.3. On the vSphere 5.0 machine, open vSphere. 2.4. Right Click on ESX server and Select “New Virtual Machine.” 2.5. Select “Name:” Type a name for Virtual Machine; for the tested configuration, the name used was “New Virtual Machine.” IBM Corporation 2011 22 | P a g e
  • 23. 2.6. Select “Next.” 2.7. Select VM Storage. 2.8. Select “Next.” 2.9. Select Guest Operating System “Windows” Version type. IBM Corporation 2011 23 | P a g e
  • 24. 2.10. Select “Next.” 2.11. Select Create Network Connections. 2.12. Set “How many NICs do you want to connect” to “1.” 2.13. Select NIC 1. 2.14. Select Adapter, for this test, “E1000. ” 2.15. Select “Next.” 2.16. Select “Virtual disk size:” IBM Corporation 2011 24 | P a g e
  • 25. 2.17. Select “Next.” 2.18. Select “Finish” to finish the VM creation. 2.19. Select the Virtual Machine just created. 2.20. Right Click on VM. 2.21. Select “Open Console.” 2.22. Select “Power on” (Green Arrow). IBM Corporation 2011 25 | P a g e
  • 26. 2.23. Select “CD tool.” 2.24. Select “Connect to ISO image on local disk.” 2.25. Select WS 2008 R2 ISO. IBM Corporation 2011 26 | P a g e
  • 27. 2.26. Select “Open.” 2.27. After executing windows server install, assign IP address. 2.28. Right Click on VM. 2.29. Select “Open Console.” 2.30. Run Windows updates, and Windows activation. 2.31. Shutdown Windows server. 2.32. Install test hard drives (XIV Gen3). 2.33. Right click on VM. IBM Corporation 2011 27 | P a g e
  • 28. 2.34. Select “Edit Settings” 2.35. Select “Add” IBM Corporation 2011 28 | P a g e
  • 29. 2.36. Select “Hard Disk” 2.37. Select “Next” and Select “Next” 2.38. Select “Disk Size” 40 GB 2.39. Select “Specify a datastore or datastore cluster:” 2.40. Select “Browse” IBM Corporation 2011 29 | P a g e
2.41. Select the appropriate disk volume; in this case it is "XIV-ISVX8_X9."
2.42. Select "OK."
2.43. Select "Next."
  • 31. 2.44. Select “Next” 2.45. Select “Finish” 2.46. Start the VM Select “Power on” (Green Arrow) IBM Corporation 2011 31 | P a g e
  • 32. 2.47. Select “VM.” 2.48. Select “Guest.” IBM Corporation 2011 32 | P a g e
2.49. Select "Send Ctrl+Alt+Del."
2.50. Enter the password.
2.51. Select the VM.
2.52. Select "Guest."
  • 34. 2.53. Select “Install/Upgrade VMware Tools.” 2.54. To add newly created disk to Windows server, select “Start.” 2.55. Right Click “My Computer.” 2.56. Select “Manage.” 2.57. Select “Offline disk.” IBM Corporation 2011 34 | P a g e
2.58. Right-click and select "Online."
2.59. Log in to the VM, right-click the volume, and select "New Simple Volume."
2.60. Select "Next."
2.61. Select "Assign Drive" and select "Next."
  • 36. 2.62. Select “Volume label,” in this case disk 3, and select “Next.” 2.63. Select “Finish.” IBM Corporation 2011 36 | P a g e
2.64. Finished.
2.65. Repeat the above procedure a total of three times to create disk 1, disk 2 and disk 3.
2.66. Now connect to the VM with remote desktop (RDP).
2.67. Download Iometer from this website: http://www.iometer.org/doc/downloads.html
2.68. Download version 2006.07.27 (or the latest version): the Windows i386 installer and prebuilt binaries.
2.69. Download Iometer to the desktop.
2.70. Double-click Iometer-2006.07.27.win32.i386-setup.
2.71. Select "Run."
2.72. Select "Next."
2.73. Read the license agreement.
  • 38. 2.74. Select “I Agree” and select “Next” to choose the components to install. 2.75. Select “Install.” 2.76. Select “Finish” to finish installing Iometer. 3. Test Steps to create 3VMs and test performance via Iometer 3.1. To Run Iometer, select windows “Start.” 3.2. Select “All Programs.” IBM Corporation 2011 38 | P a g e
  • 39. 3.3. Select “Iometer 2006.07.27” or the latest version available. 3.4. Select “Iometer” IBM Corporation 2011 39 | P a g e
3.5. Select "+" under "All Managers."
3.6. Create a worker; select "Worker 1."
3.7. Select the desired drive to use, in this case E: disk 1.
3.8. Add Network Targets: select the network targets to add.
  • 41. 3.9. Select “Worker 2.” 3.10. Select Network from the Network targets tab. 3.11. Select “Access Specifications.” IBM Corporation 2011 41 | P a g e
3.12. Select "New."
3.13. Select "Name."
3.14. Create a test name.
3.15. Select "Transfer Request Size" (set to "2 KB" by default).
3.16. Change it to 8 KB to mimic a SQL Server workload.
3.17. Select "Percent Read/Write Distribution."
3.18. Change the specification to 30% write and 70% read.
  • 43. 3.19. After Changes, Select “OK.” 3.20. Scroll down to find test name. 3.21. Select test name. 3.22. Select “Add.” IBM Corporation 2011 43 | P a g e
  • 44. 3.23. Select “Test Setup.” 3.24. Select “Test Description.” 3.25. Type test name. 3.26. Select “Run Time.” 3.27. Set to 1 hour. 3.28. Select “Results Display.” 3.29. Select “Update Frequency (seconds).” 3.30. Set Update Frequency to 1 second to view results. IBM Corporation 2011 44 | P a g e
  • 45. 3.31. Select Start (Green Flag). 3.32. Select “File name.” 3.33. Select “Save.” The test will run for 1 hour. IBM Corporation 2011 45 | P a g e
3.34. Start of results.

4. Iometer performance results

(3) VMs, (1) CPU, 4 GB memory, and (3) 40 GB XIV LUNs were used for this test. The results screen shows the achieved IOPS, throughput and CPU utilization for VM1; the tests were repeated for VM2 and VM3. These tests showed the possible throughput of 3 VMs and the IBM XIV Gen3 storage array configuration without any special tuning. The 3 VMs averaged approximately 75,000 IOPS with < 0.5 ms latency.
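As a quick sanity check on the units in these results (not part of the original procedure), the measured IOPS can be converted back to throughput for the 8 KB transfer size used in the test; the close agreement with the reported figures confirms that Iometer's throughput column is megabytes per second:

# Convert measured IOPS to throughput for the 8 KB transfer size used above.
BLOCK = 8 * 1024                                  # bytes per I/O
iops = {'VM1': 76737, 'VM2': 77296, 'VM3': 72248}
for vm, rate in iops.items():
    mbps = rate * BLOCK / 2**20                   # binary megabytes per second
    print(f'{vm}: {rate} IOPS -> {mbps:.0f} MBps')
# VM2: 77296 IOPS -> 604 MBps and VM3: 72248 IOPS -> 564 MBps, in line with the
# 603 MBps and 565 MBps Iometer reported (VM1's rate fluctuated during the run).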
Appendix B (vSphere vMotion)

1. Test object: vSphere vMotion - transfer time of VMs to local disk (VMware 5.0)

2. Setup steps: This section demonstrates vMotion using a local disk.
2.1. Download a stopwatch from http://download.cnet.com/Stop-Watch/3000-2350_4-10773544.html?tag=mncol;5 and install it.
Screen setup for the test:

3. Test steps: Test the transfer time to migrate data to local disk.
3.1. Select the virtual machine (VM).
  • 48. 3.2. Right click on VM. 3.3. Select “Migrate.” 3.4. Select “Change datastore” and select “Next.” IBM Corporation 2011 48 | P a g e
  • 49. 3.5. Select a Local Datastore “ISVX8-local-0” and select “Next.” Start of test 3.6. Start the Stopwatch; Select “Restart.” IBM Corporation 2011 49 | P a g e
3.7. At the completion of the test, select "Pause."
End of the test.

4. Results: The recorded transfer time migrating the VM to local disk (VMware 5.0) was 10 minutes 3 seconds.
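The same migration can also be driven programmatically rather than through the client. The following pyVmomi sketch is illustrative only and was not part of the lab procedure: the vCenter address and credentials are placeholders, while the VM and datastore names echo the ones used in this appendix.

import ssl, time
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def find_obj(content, vimtype, name):
    # Return the first managed object of the given type with the given name.
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(o for o in view.view if o.name == name)
    finally:
        view.Destroy()

ctx = ssl._create_unverified_context()            # lab shortcut only
si = SmartConnect(host='vcenter.example.com', user='administrator',
                  pwd='password', sslContext=ctx)
try:
    content = si.RetrieveContent()
    vm = find_obj(content, vim.VirtualMachine, 'New Virtual Machine')
    ds = find_obj(content, vim.Datastore, 'ISVX8-local-0')    # local (DAS) datastore
    task = vm.RelocateVM_Task(vim.vm.RelocateSpec(datastore=ds))
    start = time.time()
    while task.info.state not in (vim.TaskInfo.State.success,
                                  vim.TaskInfo.State.error):
        time.sleep(2)                             # poll the Storage vMotion task
    print(f'Migration finished in {time.time() - start:.0f} s: {task.info.state}')
finally:
    Disconnect(si)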
Appendix C (Transfer times of VMs to XIV LUNs (SAN))

1. Test object: Transfer times of VMs to XIV LUNs (SAN)

2. Setup steps: This section demonstrates vMotion using XIV.
2.1. Download a stopwatch from http://download.cnet.com/Stop-Watch/3000-2350_4-10773544.html?tag=mncol;5 and install it.
Screen setup for the test:

3. Test steps: Test the transfer time to migrate data to XIV disk.
3.1. Select the virtual machine (VM).
3.2. Right-click the VM.
3.3. Select "Migrate."
  • 52. 3.4. Select “Change datastore” and select “Next.” 3.5. Select the XIV LUN ”XIV_ISVX8_X9” and select “Next.” IBM Corporation 2011 52 | P a g e
  • 53. Start of test 3.6. Start the Stopwatch 3.7. Select “Finish” IBM Corporation 2011 53 | P a g e
3.8. At the completion of the test, select "Pause" and record the total migration time.
End of test.

4. Results: The recorded transfer time migrating the VM to XIV Gen3 (VMware 5.0) was 1 minute 31 seconds. For the two tested VMs, transferring all data from the server to the XIV was 6.7 times faster than from the server to the local disk for the tested configuration, demonstrating the efficiency and synergy of XIV and vSphere vMotion.
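A quick back-of-the-envelope check (illustrative, not part of the procedure) relates the two stopwatch readings to the reported speedup and to the effective transfer rates for the 14.44 GB VM:

VM_SIZE_GB = 14.44
das_seconds = 10 * 60 + 3        # Appendix B: 10 min 3 s to local disk
xiv_seconds = 1 * 60 + 31        # Appendix C: 1 min 31 s to the XIV LUN

print(f'speedup:  {das_seconds / xiv_seconds:.1f}x')            # ~6.6x; the reported
                                                                # 6.7x reflects rounding
print(f'DAS rate: {VM_SIZE_GB * 1024 / das_seconds:.0f} MB/s')  # ~25 MB/s
print(f'XIV rate: {VM_SIZE_GB * 1024 / xiv_seconds:.0f} MB/s')  # ~162 MB/s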
Appendix D (vSphere High Availability)

1. Test object: vSphere High Availability - failover of an ESX server

2. Setup steps: Create a VMware vSphere 5.0 cluster environment.
2.1. In the VMware cluster environment, select a VM that is not fault tolerant.
2.2. Right-click the VM.
2.3. Select "Edit Settings."
2.4. Ensure that the VM uses an XIV Gen3 hard disk, as in the example below; select "OK."
  • 56. 2.5. Right click on VM. 2.6. Select “Fault Tolerance.” 2.7. Select “Turn On Fault Tolerance.” IBM Corporation 2011 56 | P a g e
  • 57. 2.8. Select “Yes” 2.9. Results Fault Tolerance is now active. IBM Corporation 2011 57 | P a g e
2.10. Right-click the VM.
2.11. Select "Power" and "Power On."
2.12. Setup complete.

3. Test steps:
3.1. Right-click the VM.
3.2. Select "Fault Tolerance."
3.3. Make note of the host and storage.
3.4. Select "Test Failover." Observe that the VM and storage have moved to a new host.

4. Results: The VM moved to a new ESXi host and the storage seamlessly moved with it.

Appendix E (vSphere Storage DRS)

1. Test object: vSphere Storage DRS

2. Setup steps: Demonstrate SDRS from the VMware vSphere 5.0 startup screen.
2.1. Select "Inventory."
2.2. Select "Datastore and Datastore Cluster."
2.3. Right-click "Datacenter" and select "New Datastore Cluster."
  • 61. 2.4. Create the Datastore Cluster Name. 2.5. Select “Turn on Storage DRS,” and select “Next.” 2.6. Select “Fully Automated” and select “Next.” IBM Corporation 2011 61 | P a g e
  • 62. 2.7. Select “Show Advance Options.” 2.8. Review Settings (Use Defaults), and select “Next.” 2.9. Select “Cluster,” and select “Next.” IBM Corporation 2011 62 | P a g e
  • 63. 2.10. Select the datastore to use, then select “Next.” 2.11. Review results under “Ready to Complete.” IBM Corporation 2011 63 | P a g e
2.12. Select "Finish."
The new datastore cluster shows that all operations completed successfully.
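Before continuing, the utilization of the candidate datastores can be listed with pyVmomi, since space utilization is the metric Storage DRS compares against its space threshold (80% by default in vSphere 5). This sketch is illustrative and not part of the lab procedure; the connection details are placeholders.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

THRESHOLD = 0.80        # default Storage DRS space-utilization threshold

ctx = ssl._create_unverified_context()            # lab shortcut only
si = SmartConnect(host='vcenter.example.com', user='administrator',
                  pwd='password', sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    for ds in view.view:
        cap, free = ds.summary.capacity, ds.summary.freeSpace
        used = 1 - free / cap
        note = '  <-- above the SDRS space threshold' if used > THRESHOLD else ''
        print(f'{ds.name}: {used:.0%} used of {cap / 2**30:.0f} GiB{note}')
    view.Destroy()
finally:
    Disconnect(si)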
  • 65. 2.13. Build a new virtual machine. 2.14. Right Click on “Cluster.” 2.15. Select “New Virtual Machine,” then select “Next.” IBM Corporation 2011 65 | P a g e
  • 66. 2.16. Name the virtual machine and select “Next.” 2.17. Select host and then select “Next.” IBM Corporation 2011 66 | P a g e
  • 67. 2.18. Select datastore cluster, then select “Next.” 2.19. Select Guest Operating System, and select “Next.” 2.20. Select “Create Network Connections,” and select “Next.” IBM Corporation 2011 67 | P a g e
  • 68. 2.21. Specify the virtual disk size, and select “Next.” 2.22. Select “Show all storage recommendations.” IBM Corporation 2011 68 | P a g e
  • 69. 2.23. Select “Continue.” 2.24. Select “Apply Recommendations.” IBM Corporation 2011 69 | P a g e
  • 70. 2.25. Observe that “Apply Storage DRS recommendations” has completed. Exploring the Datastore Cluster 2.26. Select “Datastore and Datastore Cluster” from vSphere Home Screen. 2.27. Select datastore. IBM Corporation 2011 70 | P a g e
  • 71. 2.28. Right click. 2.29. Right click on new VM created. 2.30. Select “Migrate.” 2.31. Select “Change datastore,” and select “Next.” IBM Corporation 2011 71 | P a g e
  • 72. 2.32. Select “XIV_ISVX8_X9” and select “Next.” 2.33. Select “Finish.” IBM Corporation 2011 72 | P a g e
  • 73. SDRS set up completed. 3. Test Steps: 3.1. Select Datastore cluster. 3.2. Select “Run Storage DRS.” IBM Corporation 2011 73 | P a g e
"Relocate virtual machine" shows a test status of "Completed."

4. Results: Storage DRS (SDRS)
When an imbalance occurs on the datastore, Storage DRS recommends migrating a virtual machine. Storage DRS will make multiple recommendations to solve datastore imbalances.
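To make this behavior concrete, the toy model below mimics the kind of recommendation Storage DRS produces: when a datastore exceeds its space threshold, move a VMDK to the datastore with the most free space. It is a deliberate simplification, not VMware's actual algorithm; the second datastore name and all sizes are invented for illustration.

THRESHOLD = 0.80                    # default SDRS space-utilization threshold
datastores = {                      # name: (capacity in GB, list of VMDK sizes in GB)
    'XIV_ISVX8_X9':  (400, [120, 100, 90, 60]),
    'XIV_ISVX8_X10': (400, [40, 30]),   # hypothetical second datastore
}

def used_fraction(cap, vmdks):
    return sum(vmdks) / cap

for name, (cap, vmdks) in datastores.items():
    while used_fraction(cap, vmdks) > THRESHOLD:
        # Pick the destination datastore with the most free space.
        dest = max(datastores,
                   key=lambda d: datastores[d][0] - sum(datastores[d][1]))
        if dest == name:
            break                   # no better placement available
        vmdk = max(vmdks)           # move the largest disk off the hot datastore
        vmdks.remove(vmdk)
        datastores[dest][1].append(vmdk)
        print(f'Recommendation: migrate a {vmdk} GB VMDK from {name} to {dest}')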
  • 75. Appendix F (Profile-Driven Storage) 1. Test object: Profile-Driven Storage 2. Setup steps: This test demonstrates Profile-Driven Storage 2.1. Select “VM Storage Profile” from the Home vSphere window. 2.2. Select “Enable VM Storage Profiles.” 2.3. Select “Enable.” IBM Corporation 2011 75 | P a g e
  • 76. 2.4. Note that VM Storage Profile Status is enabled and select “Close.” 2.5. Select “Manage Storage Capabilities.” IBM Corporation 2011 76 | P a g e
  • 77. 2.6. Select “Add.” 2.7. Select “Name,” type “Gold.” 2.8. Select “Description,” type “Gold Storage Capability.” 2.9. Select “Ok” and “Close.” IBM Corporation 2011 77 | P a g e
  • 78. 2.10. Note: Recent Tasks and select “Home.” 2.11. Select “Datastores and Datastores Cluster” from the Home vSphere window. 2.12. Select disk choice for User-Defined Storage Capability: 2.13. Select disk and right click. 2.14. Select “Assign User-Defined Storage Capability.” IBM Corporation 2011 78 | P a g e
  • 79. 2.15. Select “Name” pull down, select “Gold,” and select “Ok.” 2.16. Select “Summary.” IBM Corporation 2011 79 | P a g e
  • 80. 2.17. Select “Home.” 2.18. Select “VM Storage Profiles” from the vSphere Home screen. 2.19. Select “Create VM Storage Profile.” 2.20. Select “Name” type: Gold Profile. 2.21. Select “Description” type: Storage Profile for VMs that should reside on Gold storage, and select “Next.” IBM Corporation 2011 80 | P a g e
  • 81. 2.22. Select “Gold,” and select “Next.” IBM Corporation 2011 81 | P a g e
  • 82. 2.23. Select “Finish.” 2.24. Select “Gold Profile.” 2.25. Select “Summary.” 2.26. Observe the settings for later comparison. IBM Corporation 2011 82 | P a g e
  • 83. 3. Test Steps: 3.1. Assign a VM storage profile to a VM. 3.2. Select “Home.” IBM Corporation 2011 83 | P a g e
  • 84. 3.3. Select “Hosts and Clusters.” 3.4. Select a VM. 3.5. Right click the VM. 3.6. Select “VM Storage Profile.” 3.7. Select “Manage Profiles.” IBM Corporation 2011 84 | P a g e
  • 85. 3.8. Select “Home VM Storage Profile.” 3.9. Select “Gold Profile” from pull down menu. 3.10. Select “Propagate to disks,” and select “Ok.” IBM Corporation 2011 85 | P a g e
  • 86. 3.11. Observe the setting in “VM Storage Profiles for virtual disks” for future use and select “Ok.” 3.12. Observe in VM Storage Profiles section the profile is “Noncompliant,” as the storage characteristics in the “to” storage do not meet the same requirements. 3.13. Right click on VM. IBM Corporation 2011 86 | P a g e
  • 87. 3.14. Select “Migrate.” 3.15. Select “Change datastore,” and select “Next.” IBM Corporation 2011 87 | P a g e
3.16. Select "VM Storage Profile."
3.17. Select "Gold Profile."
3.18. Select a compatible disk, and select "Next." Note: the VM is being migrated.
3.19. Select "Refresh."

4. Results: Profile-Driven Storage
Note that the VM storage profile is now compliant.
Gold VM storage profile: This test demonstrates that with Profile-Driven Storage, a user is able to ensure that physical storage characteristics remain consistent across VM migrations.
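The compliance behavior observed above can be summarized with a small model (an illustration, not the vSphere implementation): a VM's storage profile is compliant when every disk sits on a datastore that advertises the required user-defined capability. The datastore names echo those used in this report.

# Which user-defined storage capability each datastore advertises (step 2.15
# assigned "Gold" to the XIV datastore; the local datastore has none).
datastore_capabilities = {
    'XIV_ISVX8_X9': {'Gold'},
    'ISVX8-local-0': set(),
}

def profile_compliant(required, vm_disks):
    # vm_disks maps disk name -> datastore name.
    return all(required in datastore_capabilities[ds] for ds in vm_disks.values())

vm = {'Hard disk 1': 'ISVX8-local-0'}
print(profile_compliant('Gold', vm))    # False: noncompliant before migration
vm['Hard disk 1'] = 'XIV_ISVX8_X9'
print(profile_compliant('Gold', vm))    # True: compliant after Storage vMotion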
  • 90. Appendix G (Storage I/O Control) 1. Test object: Storage I/O Control 2. Setup steps: Create a VM with 2 hard drives to demonstrate Storage I/O Control 2.1. Start VM. 2.2. Use remote desktop (RDP) to go to the VM. 2.3. Install Iometer from http://www.Iometer.org/doc/downloads.html 2.4. Once installed, run Iometer. IBM Corporation 2011 90 | P a g e
  • 91. 2.5. Select “Worker 1.” 2.6. Select “E: disk1.” 2.7. Select “Access Specifications,” and select “New.” IBM Corporation 2011 91 | P a g e
  • 92. 2.8. Set “Transfer Request” Size to “10 Megabytes, 2 Kilobytes, 0 Bytes.” 2.9. Set “Percent Read/Write Distribution” to 75% Write / 25% Read and select “Ok” (these settings provide a heavier load on the VM). 2.10. Select “Untitled 1” under Global Access Specifications. IBM Corporation 2011 92 | P a g e
  • 93. 2.11. Select “Results Display.” 2.12. Select “Update Frequency” to “1.” 2.13. Select “Green flag” to start. 2.14. Select “Save” to save results. IBM Corporation 2011 93 | P a g e
  • 94. 2.15. Return to vSphere. 2.16. Select “Home.” 2.17. Select “Datastores and Datastore Clusters.” 2.18. Select the Host running the VM. Note Storage I/O Control is “Disabled” IBM Corporation 2011 94 | P a g e
  • 95. 2.19. Select “Properties.” 2.20. Set “Storage I/O Control” to “Enabled.” 2.21. Select “Advanced.” 2.22. Select “OK.” IBM Corporation 2011 95 | P a g e
  • 96. 2.23. Select “OK.” 2.24. Select “Close.” 2.25. Go to the VM used for testing and “Edit Settings.” 2.26. Select “Resources.” IBM Corporation 2011 96 | P a g e
  • 97. 2.27. Select “Disk,” and select “OK.” Note: Storage I/O Control (SIOC) is set on Disk 2. IBM Corporation 2011 97 | P a g e
  • 98. 2.28. Set Hard disk 2 “Share” to High and “Limit – IOPS” to 100. 3. Test Steps: Demonstrate Storage I/O Control IBM Corporation 2011 98 | P a g e
3.1. Now look at how SIOC enforces the IOPS limit. Go back to the vSphere Client Performance tab or the virtual machine's Iometer results to see the number of IOPS currently being generated. The value for this exercise is approximately 500–600 IOPS.
3.2. Go to the VM running Iometer.
3.3. Stop Iometer.
3.4. Change "# of Outstanding I/Os" to 65.
3.5. Restart Iometer.
3.6. Go to the Results Display.
4. Results: Storage I/O Control
With Storage I/O Control enabled, IOPS gradually shift toward the configured share priorities: the test demonstrates a gradual increase in IOPS for the virtual machine with 2000 shares and a gradual decrease in IOPS for the virtual machine with 1000 shares. This completes the evaluation of Storage I/O Control.
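The converging IOPS observed in this test follow from SIOC's proportional-share design: once datastore latency crosses the congestion threshold (30 ms by default in vSphere 5), device throughput is apportioned according to the configured shares. The toy calculation below illustrates the 2:1 split implied by 2000 versus 1000 shares; the total IOPS figure is invented for illustration.

def sioc_split(total_iops, shares):
    # Apportion datastore throughput in proportion to configured shares,
    # as SIOC does once the congestion (latency) threshold is exceeded.
    pool = sum(shares.values())
    return {vm: round(total_iops * s / pool) for vm, s in shares.items()}

print(sioc_split(9000, {'VM-A (2000 shares)': 2000,
                        'VM-B (1000 shares)': 1000}))
# {'VM-A (2000 shares)': 6000, 'VM-B (1000 shares)': 3000}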
  • 101. Trademarks and special notices © Copyright IBM Corporation 2011. All rights Reserved. References in this document to IBM products or services do not imply that IBM intends to make them available in every country. IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. If these and other IBM trademarked terms are marked on their first occurrence in this information with a trademark symbol (® or ™), these symbols indicate U.S. registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries. A current list of IBM trademarks is available on the Web at "Copyright and trademark information" at www.ibm.com/legal/copytrade.shtml. Java and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its affiliates. Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both. Other company, product, or service names may be trademarks or service marks of others. Information is provided "AS IS" without warranty of any kind. All customer examples described are presented as illustrations of how those customers have used IBM products and the results they may have achieved. Actual environmental costs and performance characteristics may vary by customer. Information concerning non-IBM products was obtained from a supplier of these products, published announcement material, or other publicly available sources and does not constitute an endorsement of such products by IBM. Sources for non-IBM list prices and performance numbers are taken from publicly available information, including vendor announcements and vendor worldwide homepages. IBM has not tested these products and cannot confirm the accuracy of performance, capability, or any other claims related to non-IBM products. Questions on the capability of non-IBM products should be addressed to the supplier of those products. All statements regarding IBM future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only. Contact your local IBM office or IBM authorized reseller for the full text of the specific Statement of Direction. Some information addresses anticipated future capabilities. Such information is not intended as a definitive statement of a commitment to specific levels of performance, function or delivery schedules with respect to any future products. Such commitments are only made in IBM product announcements. The information is presented here to communicate IBM's current investment and development activities as a good faith effort to help with our customers' future planning. IBM Corporation 2011 101 | P a g e
  • 102. Performance is based on measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput or performance that any user will experience will vary depending upon considerations such as the amount of multiprogramming in the user's job stream, the I/O configuration, the storage configuration, and the workload processed. Therefore, no assurance can be given that an individual user will achieve throughput or performance improvements equivalent to the ratios stated here. Photographs shown are of engineering prototypes. Changes may be incorporated in production models. Any references in this information to non-IBM websites are provided for convenience only and do not in any manner serve as an endorsement of those websites. The materials at those websites are not part of the materials for this IBM product and use of those websites is at your own risk. IBM Corporation 2011 102 | P a g e