VMware Cloud
Infrastructure
What’s new in vSphere 5
2010 – Cloud Infrastructure stack: vCloud Director, vShield Security, vCenter Management, vSphere hosts.
2011 – Cloud Infrastructure Launch: new releases across the same stack — vCloud Director, vShield Security, vCenter Management and vSphere.
Agenda

• vSphere 5.0 Platform
• vSphere 5.0 Networking
• vSphere 5.0 Availability
• vSphere 5.0 vMotion, DRS/DPM
• vCenter Server 5.0
• vSphere 5.0 vStorage
• vSphere 5.0 Storage Appliance (VSA)
• VMware vCenter Site Recovery Manager v5.0
vSphere 5.0 – Platform

•  Platform Enhancements
•  ESXi Firewall
•  Image Builder
•  Auto Deploy
New Virtual Machine Features
§ vSphere 5.0 supports the industry's most capable VMs
   VM Scalability
    •  32 virtual CPUs per VM
    •  1TB RAM per VM
    •  4x previous capabilities!
   Richer Desktop Experience
    •  3D graphics
   Broader Device Coverage
    •  Client-connected USB devices
    •  USB 3.0 devices
    •  Smart Card Readers for VM Console Access
    •  VM BIOS boot order config API and PowerCLI interface
    •  EFI BIOS
   Other new features
    •  UI for multi-core virtual CPUs
    •  Extended VMware Tools compatibility
    •  Support for Mac OS X servers
   Items which require HW version 8 are shown in blue on the original slide.
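§ As a minimal PowerCLI sketch of using the new maximums (VM name is hypothetical; the VM must be powered off for these changes):

   # Upgrade to hardware version 8, which the new maximums require
   Set-VM -VM (Get-VM "bigvm") -Version v8 -Confirm:$false
   # Scale up to 32 vCPUs and 1TB RAM (1048576 MB)
   Set-VM -VM (Get-VM "bigvm") -NumCpu 32 -MemoryMB 1048576 -Confirm:$false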
vSphere 5.0 – Platform

•  Platform Enhancements
•  ESXi Firewall
•  Image Builder
•  Auto Deploy
•  vSphere Update Manager
ESXi 5.0 Firewall Features
§ Capabilities
 •  ESXi 5.0 has a new firewall engine which is not based on iptables.
 •  The firewall is service-oriented and stateless.
 •  Users have the ability to restrict access to specific services based on
      IP address/subnet mask.
§ Management
 •  The GUI for configuring the firewall on ESXi 5.0 is similar to that used with
      the classic ESX firewall — customers familiar with the classic ESX firewall
      should not have any difficulty using the ESXi 5.0 version.
 •    There is a new esxcli interface (esxcfg-firewall is deprecated in ESXi 5.0).
 •    There is Host Profile support for the ESXi 5.0 firewall.
 •    Customers who upgrade from classic ESX to ESXi 5.0 will have their
      firewall settings preserved.
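§ For example, a minimal esxcli sketch (the ruleset name and subnet are illustrative):

   # List services and their enabled state
   esxcli network firewall ruleset list
   # Restrict the SSH service to a specific subnet
   esxcli network firewall ruleset set --ruleset-id=sshServer --allowed-all=false
   esxcli network firewall ruleset allowedip add --ruleset-id=sshServer --ip-address=192.168.1.0/24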
UI: Security Profile
§ The ESXi Firewall can be managed via the vSphere client.
§ Through the Configuration > Security Profile, one can
 observe the Enabled Incoming/Outgoing Services, the
 Opened Port List for each service & the Allowed IP List for
 each service.
UI: Security Profile > Services >
Properties
§ Through the Services Properties, one can configure whether a
 service should start automatically.
§ Services can also be stopped & started on-the-fly.
vSphere 5.0 – Platform

•  Platform Enhancements
•  ESXi Firewall
•  Image Builder
•  Auto Deploy
Composition of an ESXi Image
§ An ESXi image is composed of four kinds of components: the core hypervisor, CIM providers, plug-in components, and drivers.
ESXi Image Deployment
§ Challenges
 •  The standard ESXi image from the VMware download site is sometimes limited:
   •    It doesn't have all drivers or CIM providers for specific hardware.
   •    It doesn't contain vendor-specific plug-in components.
[Diagram: the standard ESXi ISO ships with only base drivers and base CIM providers, so a given server may hit a missing CIM provider or a missing driver.]
Building an Image
§ Image Builder runs on a Windows host with PowerCLI and the Image Builder snap-in, drawing on depots that contain image profiles plus ESXi, driver and OEM VIBs. The workflow:
 1. Start a PowerCLI session.
 2. Activate the Image Builder snap-in.
 3. Connect to depot(s).
 4. Clone and modify an existing image profile.
 5. Generate a new image – either an ISO image or a PXE-bootable image.
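§ A minimal PowerCLI sketch of steps 2–5 (depot paths, profile and VIB names are hypothetical):

   # Step 2: load the snap-in if PowerCLI has not already done so
   Add-PSSnapin VMware.ImageBuilder -ErrorAction SilentlyContinue

   # Step 3: connect to an ESXi depot and a vendor depot (hypothetical offline bundles)
   Add-EsxSoftwareDepot "C:\depots\ESXi500-depot.zip"
   Add-EsxSoftwareDepot "C:\depots\oem-drivers.zip"

   # Step 4: clone an existing profile and add a driver VIB
   $base = Get-EsxImageProfile "ESXi-5.0.0-*-standard" | Select-Object -First 1
   New-EsxImageProfile -CloneProfile $base -Name "ESXi50-custom" -Vendor "Example"
   Add-EsxSoftwarePackage -ImageProfile "ESXi50-custom" -SoftwarePackage "net-example-driver"

   # Step 5: export as an installable ISO, or as an offline bundle for PXE/Auto Deploy
   Export-EsxImageProfile -ImageProfile "ESXi50-custom" -ExportToIso -FilePath "C:\ESXi50-custom.iso"
   Export-EsxImageProfile -ImageProfile "ESXi50-custom" -ExportToBundle -FilePath "C:\ESXi50-custom.zip"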
vSphere 5.0 – Platform

•  Platform Enhancements
•  ESXi Firewall
•  Image Builder
•  Auto Deploy
vSphere 5.0 – Auto Deploy
Overview
•  Deploy and patch vSphere hosts in minutes using a new "on the fly" model.
•  Coordination with vSphere Host Profiles.
[Diagram: vCenter Server with Auto Deploy holds Image Profiles and Host Profiles and provisions a set of vSphere hosts.]
Benefits
•  Rapid provisioning: initial deployment and patching of hosts.
•  Centralized host and image management.
•  Reduced manual deployment and patch processes.
Deploying a Datacenter Has Just Gotten Much Easier
Before: 30 minutes per host, repeated for 40 hosts – total time: 20 hours!
After: total time: 10 minutes!
Auto Deploy Example – Initial Boot
[Diagram: a new host is provisioned by the Auto Deploy server (the "waiter"), which sits alongside DHCP and TFTP services and vCenter Server; vCenter holds the rules engine, the Image Profiles (built from ESXi, driver and OEM VIBs) and the Host Profiles.]
1) PXE boot server – the new host PXE boots; DHCP directs it to the TFTP server, from which it loads gPXE and issues an image request.
2) Contact Auto Deploy server – gPXE sends an HTTP boot request to the Auto Deploy server.
3) Determine Image Profile, Host Profile and cluster – the rules engine matches the host to Image Profile X, Host Profile 1 and Cluster B.
4) Push image to host, apply host profile – the selected image profile and host profile are pushed to the host and cached on it.
5) Place host into cluster – the host is added to the cluster chosen in step 3 (Cluster B in this example).
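§ The matching in step 3 is defined with Auto Deploy rules in PowerCLI; a minimal sketch (rule name, profile names, cluster and address range are hypothetical):

   # Hosts booting from this address range get the custom image, Host Profile 1 and Cluster B
   New-DeployRule -Name "rule-clusterB" `
     -Item "ESXi50-custom", (Get-VMHostProfile -Name "HostProfile1"), (Get-Cluster "Cluster B") `
     -Pattern "ipv4=192.168.10.10-192.168.10.250"
   # Activate the rule in the working rule set
   Add-DeployRule -DeployRule "rule-clusterB"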
vSphere 5.0 – Networking

•  LLDP
•  NetFlow
•  Port Mirror
•  NETIOC – New Traffic Types
What Is a Discovery Protocol?
(Link Layer Discovery Protocol)

§  A discovery protocol is a data link layer network protocol
 used to discover the capabilities of network devices.
§  A discovery protocol allows customers to automate the
 deployment process in a complex environment through its
 ability to:
  •  Discover the capabilities of network devices
  •  Discover the configuration of neighboring infrastructure
§  The vSphere infrastructure supports the following discovery
 protocols:
  •  CDP (Standard vSwitches & Distributed vSwitches)
  •  LLDP (Distributed vSwitches)
§  LLDP is a standards-based, vendor-neutral discovery protocol
 (IEEE 802.1AB).
LLDP Neighbour Info
§ Sample output using the lldpd utility.
vSphere 5.0 – Networking

•  LLDP
•  NetFlow
•  Port Mirror
•  NETIOC – New Traffic Types
What Is NetFlow?
§  NetFlow is a networking protocol that collects IP traffic info
  as records and sends them to third party collectors such as
  CA NetQoS, NetScout etc.
[Diagram: VM A and VM B attached to a vDS inside a host; VM traffic leaves over a trunk to a physical switch, while a NetFlow session exports flow records to a collector.]
§  The collector/analyzer reports on information such as:
 •  Current top flows consuming the most bandwidth
 •  Which flows are behaving irregularly
 •  Number of bytes a particular flow has sent and received in the past 24 hours
NetFlow with Third-Party Collectors
[Diagram: a vDS on a host exports NetFlow sessions covering both internal flows (VM to VM) and external flows (to external systems) to third-party collectors such as the NetScout nGenius Collector and the CA NetQoS Collector.]
vSphere 5.0 – Networking

•  LLDP
•  NetFlow
•  Port Mirror
•  NETIOC – New Traffic Types
What Is Port Mirroring (DVMirror)?
§  Port Mirroring is the capability on a network switch to send
 a copy of network packets seen on a switch port to a
 network monitoring device connected on another switch
 port.
§  Port Mirroring is also referred to as SPAN (Switched Port
 Analyzer) on Cisco Switches.
§  Port Mirroring overcomes the limitation of promiscuous
 mode.
  •  By providing granular control on which traffic can be monitored
   •    Ingress Source
   •    Egress Source

§  Helps in troubleshooting network issues by providing access
 to:
  •  Inter-VM traffic
  •  Intra-VM traffic
Port Mirror Traffic Flow When Mirror Destination Is a VM
[Diagrams: mirror flow vs. VM traffic for inter-VM traffic (ingress-source and egress-source cases, with packets copied across the vDS to the destination VM) and for intra-VM traffic, where the traffic passes through an external system before returning to the vDS.]
vSphere 5.0 – Networking

•  LLDP
•  NetFlow
•  Port Mirror
•  NETIOC – New Traffic Types
What Is Network I/O Control
 (NETIOC)?
§  Network I/O control is a traffic management feature of
 vSphere Distributed Switch (vDS).
§  In consolidated I/O (10 gig) deployments, this feature allows
 customers to:
  •  Allocate Shares and Limits to different traffic types.
  •  Provide Isolation
   •  One traffic type should not dominate others
  •  Guarantee Service Levels when different traffic types compete
§  Enhanced Network I/O Control — vSphere 5.0 builds on
 previous versions of the Network I/O Control feature by
 providing:
  •  User-defined network resource pools
  •  A new Host Based Replication (HBR) traffic type
  •  QoS tagging
NETIOC VM Groups
[Diagram: a VMware vNetwork Distributed Switch with Network I/O Control schedules user-defined VM resource pools (VMRG1, VMRG2, VMRG3) alongside vMotion, iSCSI, FT, NFS, HBR and VM traffic across 10 GigE uplinks, for a total bandwidth of 20 Gig.]
NETIOC VM Traffic
[Diagram: the server admin defines shares, limits and 802.1p tags per traffic type on a vNetwork Distributed Switch; a shaper and schedulers enforce them, with load-based teaming, limit enforcement per team and shares enforcement per uplink. "Pepsi" and "Coke" are user-defined VM groups.]

 Traffic    Shares   Limit (Mbps)   802.1p
 vMotion       5         150           1
 Mgmt         30          --          --
 NFS          10         250          --
 iSCSI        10          --           2
 FT           60          --          --
 HBR          10          --          --
 VM           20        2000           4
 Pepsi         5          --          --
 Coke         15          --          --
vSphere 5.0 – Availability
vSphere HA Primary Components
§ Every host runs an agent
  •  Referred to as 'FDM' or Fault Domain Manager.
  •  One of the agents within the cluster is chosen to assume the role of the Master.
   •  There is only one Master per cluster during normal operations.
  •  All other agents assume the role of Slaves.
§ There is no longer a Primary/Secondary concept with vSphere HA.
[Diagram: a four-host cluster (ESX 01–04) managed by vCenter.]
The Master Role
§ An FDM master monitors:
 •  ESX hosts and virtual machine availability.
 •  All Slave hosts. Upon a Slave host failure, protected VMs on that host will be restarted.
 •  The power state of all the protected VMs. Upon failure of a protected VM, the Master will restart it.
§ An FDM master manages:
 •  The list of hosts that are members of the cluster, updating this list as hosts are added or removed from the cluster.
 •  The list of protected VMs. The Master updates this list after each user-initiated power on or power off.
The Slave Role
§ A Slave monitors the runtime state of its locally running VMs and forwards any significant state changes to the Master.
§ It implements vSphere HA features that do not require central coordination, most notably VM Health Monitoring.
§ It monitors the health of the Master. If the Master should fail, it participates in the election process for a new master.
§ It maintains the list of powered-on VMs.
Storage Level Communications
§ One of the most exciting new features of vSphere HA is its ability to use the storage subsystem for communication.
§ The datastores used for this are referred to as 'Heartbeat Datastores'.
§ This provides increased communication redundancy.
§ Heartbeat datastores are used as a communication channel only when the management network is lost, such as in the case of isolation or network partitioning.
Storage Level Communications
§ Heartbeat Datastores allow a Master to:
 •  Monitor the availability of Slave hosts and the VMs running on them.
 •  Determine whether a host has become network isolated rather than network partitioned.
 •  Coordinate with other Masters: since a VM can be owned by only one master, masters coordinate VM ownership through datastore communication.
§ By default, vCenter will automatically pick 2 datastores. These 2 datastores can also be selected by the user.
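§ Enabling HA on a cluster is a one-liner in PowerCLI; a minimal sketch (cluster name hypothetical — heartbeat datastore preferences are then set in the cluster's Datastore Heartbeating settings):

   # Turn on vSphere HA for an existing cluster
   Set-Cluster -Cluster (Get-Cluster "Cluster A") -HAEnabled:$true -Confirm:$false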
vSphere 5.0 – vMotion, DRS/DPM
vSphere 5.0 – vMotion
§ The original vMotion keeps getting better!
§ Multi-NIC Support
  •  Supports up to four 10Gbps or sixteen 1Gbps NICs
       (each NIC must have its own IP address).
  •    A single vMotion can now scale over multiple NICs
       (load balanced across multiple NICs).
  •    Faster vMotion times allow for a higher number of concurrent vMotions.
§  Reduced Application Overhead
  •  The Slowdown During Page Send (SDPS) feature throttles busy VMs to reduce
       timeouts and improve success.
  •    Ensures less than one second of switchover time in almost all cases.
§ Support for higher-latency networks (up to ~10ms)
  •  Extends vMotion capabilities over slower networks.
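§ A minimal PowerCLI sketch of a two-NIC vMotion setup (host, switch and portgroup names are hypothetical; the port groups are assumed to exist, each pinned to a different active uplink):

   $vmhost = Get-VMHost "esx01.example.com"
   # One vMotion-enabled vmknic per port group
   New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch "vSwitch1" -PortGroup "vMotion-A" `
     -IP 10.0.1.11 -SubnetMask 255.255.255.0 -VMotionEnabled:$true
   New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch "vSwitch1" -PortGroup "vMotion-B" `
     -IP 10.0.2.11 -SubnetMask 255.255.255.0 -VMotionEnabled:$true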
Multi-NIC Throughput
[Chart: vMotion throughput in Gbps rising from one NIC to two NICs to three NICs*, on a scale of 0–30 Gbps.]
 * Limited by throughput of the PCI-E bus in this particular setup.
vSphere 5.0 – DRS/DPM
§ DRS/DPM improvements focus on cross-product integration.
  •  Introduce support for "Agent VMs."
      •    An Agent VM is a special-purpose VM tied to a specific ESXi host.
      •    An Agent VM cannot / should not be migrated by DRS or DPM.
      •    Special handling of Agent VMs is now afforded by DRS & DPM.
§ A DRS/DPM cluster hosting Agent VMs:
   •  Accounts for Agent VM reservations (even when powered off).
   •  Waits for Agent VMs to be powered on and ready before placing client VMs.
   •  Will not try to migrate an Agent VM (Agent VMs are pinned to their host).
§ Maintenance Mode / Standby Mode Support
  •  Agent VMs do not have to be evacuated for a host to enter
       maintenance or standby mode.
      •    When a host enters maintenance/standby mode, Agent VMs are powered off
           (after client VMs are evacuated).
      •    When a host exits maintenance/standby mode, Agent VMs are powered on
           (before client VMs are placed).
vSphere 5.0 – vCenter Server
vSphere Web Client Architecture
§ The vSphere Web Client runs within a browser.
§ A Flex client back-end application server provides a scalable back end.
§ The Query Service obtains live data from the core vCenter Server process.
§ It works against vCenter in either single or Linked Mode operation.
Extension Points
§ Extension points include the launchbar, tabs, inventory objects, portlets and sidebar extensions, plus the ability to create custom actions and add right-click extensions.
Features of the vSphere Web Client
§ Ready Access to Common Actions
 •  Quick access to common tasks provided out of the box
Introducing vCenter Server
 Appliance
§ The vCenter Server Appliance is the answer!
  •  Simplifies deployment and configuration
  •  Streamlines patching and upgrades
  •  Reduces the TCO for vCenter
§ Enables companies to respond to the business faster!
[Diagram: the VMware vCenter Server Virtual Appliance delivers Automation, Visibility and Scalability.]
Component Overview
§ The vCenter Server Appliance (VCSA) consists of:
  •  A pre-packaged 64-bit application running on SLES 11
   •    Distributed with sparse disks
   •    Disk footprint: 3.6GB distribution, ~5GB minimum deployed, ~80GB maximum deployed
   •    Memory footprint
  •  A built-in enterprise-level database, with optional support for a remote Oracle database
  •  Limits are the same for VC and VCSA:
   •    Embedded DB: 5 hosts / 50 VMs
   •    External DB: 300 hosts / 3000 VMs (64-bit)
  •  A web-based configuration interface
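§ The appliance deploys like any OVF; a minimal ovftool sketch (file and host names are hypothetical):

   ovftool --acceptAllEulas --name=vcsa01 \
     VMware-vCenter-Server-Appliance-5.0.0.ovf \
     vi://root@esx01.example.com/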
Feature Overview
§ The vCenter Server Appliance supports:
  •  The vSphere Web Client
  •  Authentication through AD and NIS
  •  Feature parity with vCenter Server on Windows, except:
      •  No Linked Mode support (it requires ADAM / AD LDS)
      •  No IPv6 support
      •  External DB support: Oracle is the only supported external DB for the first release
      •  No vCenter Heartbeat support (HA is provided through vSphere HA)
vSphere 5.0 – vStorage

• VMFS 5.0
• vStorage API for Array Integration
• Storage vMotion
• Storage I/O Control
• Storage DRS
• VMware API for Storage Awareness
• Profile Driven Storage
• FCoE – Fibre Channel over Ethernet
Introduction to VMFS-5
§ Enhanced Scalability
  •  Increases the size limits of the filesystem & supports much larger single-extent VMFS-5 volumes.
  •    Support for single-extent 64TB datastores.
§ Better Performance
  •  Uses the VAAI locking mechanism for more tasks.
§ Easier to manage and less overhead
  •  Space reclamation on thin-provisioned LUNs.
  •  Smaller sub-blocks.
  •  Unified block size.
VMFS-5 vs VMFS-3 Feature Comparison

 Feature                                           VMFS-3               VMFS-5
 2TB+ VMFS Volumes                                 Yes (using extents)  Yes
 Support for 2TB+ Physical RDMs                    No                   Yes
 Unified Block size (1MB)                          No                   Yes
 Atomic Test & Set Enhancements                    No                   Yes
 (part of VAAI, locking mechanism)
 Sub-blocks for space efficiency                   64KB (max ~3k)       8KB (max ~30k)
 Small file support                                No                   1KB
VMFS-3 to VMFS-5 Upgrade
§ The Upgrade to VMFS-5 option is clearly displayed in the vSphere Client under the Configuration > Storage view.
§ It is also displayed in the Datastores > Configuration view.
§ Upgrades are non-disruptive.
vSphere 5.0 – vStorage

• VMFS 5.0
• vStorage API for Array Integration
• Storage vMotion
• Storage I/O Control
• Storage DRS
• VMware API for Storage Awareness
• Profile Driven Storage
• FCoE – Fibre Channel over Ethernet
VAAI – Introduction
§ vStorage API for Array Integration = VAAI.
§ VAAI's main purpose is to leverage array capabilities:
  •  Offloading tasks to reduce overhead
  •  Benefiting from enhanced array mechanisms
§ The "traditional" VAAI primitives have been improved.
§ Multiple new primitives have been introduced.
§ Support for NAS!
[Diagram: without VAAI, an operation travels from the application through the hypervisor and fabric and back; with VAAI, the array copies data directly between LUN 01 and LUN 02.]
VAAI Primitive Updates in
 vSphere 5.0
§ vSphere 4.1 shipped a default plugin for Write Same because that primitive was fully T10 compliant; ATS and Full Copy, however, were not.
  •  The T10 organization is responsible for SCSI standardization (SCSI-3), a standard used by many storage vendors.
§ vSphere 5.0 integrates all three primitives, now T10 compliant, into the ESXi stack.
  •  This allows arrays that are T10 compliant to leverage these primitives with a default VAAI plugin in vSphere 5.0.
§ It should also be noted that the ATS primitive has been extended in vSphere 5.0 / VMFS-5 to cover even more operations, resulting in even better performance and greater scalability.
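§ On a host, a quick way to check which primitives a device reports is the esxcli VAAI status namespace (the device identifier below is hypothetical):

   esxcli storage core device vaai status get -d naa.600601601234567890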
Introducing VAAI NAS Primitives
§ These primitives enable hardware acceleration / offload
 features for NAS datastores.
§ The following primitives are defined for VAAI NAS:
  •  Full File Clone – Similar to the VMFS block cloning. Allows offline VMDKs
    to be cloned by the Filer.
   •    Note that hot migration via Storage vMotion on NAS is not hardware accelerated.
  •  Reserve Space – Allows creation of thick VMDK files on NAS.
§ NAS VAAI plugins are not shipped with ESXi 5.0. These
 plugins will be developed and distributed by the storage
 vendors, but signed by the VMware certification program.
VAAI NAS: Thick Disk Creation
§ Without the VAAI NAS primitives, only Thin format is
 available.
§ With the VAAI NAS primitives, Flat (thick), Flat pre-initialized
 (eager zeroed-thick) and Thin formats are available.
[Screenshots: disk provisioning options without VAAI (thin only) vs. with VAAI (thick formats available).]
Introducing VAAI Thin
 Provisioning
§ What are the driving factors behind VAAI TP?
  •  Provisioning new LUNs to a vSphere environment (cluster) is complicated.
§ Strategic Goal:
  •  We want to make the act of physical storage provisioning in a vSphere environment extremely rare.
  •    LUNs should be incredibly large address spaces & should be able to handle any VM workload.
§ VAAI TP features include:
  •  Dead space reclamation.
  •  Monitoring of the space.
VAAI Thin Provisioning – Dead
 Space Reclamation
§ Dead space is previously written blocks that are no longer used by the VM, for instance after a Storage vMotion.
§ vSphere conveys block information to the storage system via VAAI & the storage system reclaims the dead blocks.
  •  Storage vMotion, VM deletion and swap file deletion can trigger the thin LUN to free some physical space.
  •    ESXi 5.0 uses a standard SCSI command for dead space reclamation.
[Diagram: a Storage vMotion from VMFS volume A to VMFS volume B leaves dead blocks on volume A that the array can reclaim.]
Current "Out Of Space" User Experience
§ No space-related warnings.
§ No mitigation steps available.
§ Space exhaustion: VMs and LUN go offline.
"Out Of Space" User Experience with VAAI Extensions
§ Space exhaustion warning in the UI.
§ Storage vMotion-based evacuation, or add space.
§ On space exhaustion, affected VMs are paused while the LUN stays online & awaits space allocation.
vSphere 5.0 – vStorage

• VMFS 5.0
• vStorage API for Array Integration
• Storage vMotion
• Storage I/O Control
• Storage DRS
• VMware API for Storage Awareness
• Profile Driven Storage
• FCoE – Fibre Channel over Ethernet
Storage vMotion – Introduction
§ In vSphere 5.0, a number of new enhancements were made to Storage vMotion.
 •  Storage vMotion now works with virtual machines that have snapshots, which means coexistence with other VMware products & features such as VCB, VDR & HBR.
 •    Storage vMotion supports the relocation of linked clones.
 •    Storage vMotion has a new use case – Storage DRS – which uses Storage vMotion for Storage Maintenance Mode & Storage Load Balancing (space or performance).
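§ A Storage vMotion is a one-liner in PowerCLI; a minimal sketch (VM and datastore names are hypothetical):

   # Relocate a running VM's storage to another datastore
   Move-VM -VM (Get-VM "app01") -Datastore (Get-Datastore "Datastore2")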
Storage vMotion Architecture
Enhancements (1 of 2)
§ In vSphere 4.1, Storage vMotion uses the Changed Block Tracking
 (CBT) method to copy disk blocks between source & destination.
§ The main challenge in this approach is that the disk pre-copy phase
 can take a while to converge, and can sometimes result in Storage
 vMotion failures if the VM was running a very I/O intensive load.
§ Mirroring I/O between the source and the destination disks has
 significant gains when compared to the iterative disk pre-copy
 mechanism.
§ In vSphere 5.0, Storage vMotion uses a new mirroring architecture to
 provide the following advantages over previous versions:
  •  Guarantees migration success even when facing a slower destination.
  •  More predictable (and shorter) migration time.
Storage vMotion Architecture Enhancements (2 of 2)
[Diagram: in the VMkernel, a mirror driver below the VMM/guest mirrors guest OS writes to both the source and destination disks, while a userworld datamover copies the existing blocks from source to destination.]
vSphere 5.0 – vStorage

• VMFS 5.0
• vStorage API for Array Integration
• Storage vMotion
• Storage I/O Control
• Storage DRS
• VMware API for Storage Awareness
• Profile Driven Storage
• FCoE – Fibre Channel over Ethernet
Storage I/O Control Phase 2 and
 Refreshing Memory
§ In many customer environments, storage is mostly accessed from storage arrays over SAN, iSCSI or NAS.
§ One ESXi host can affect the I/O performance of others by issuing a large number of requests on behalf of one of its virtual machines.
§ Thus the throughput/bandwidth available to each ESXi host may vary drastically, leading to highly variable I/O performance for VMs.
§ To ensure stronger I/O guarantees, Storage I/O Control was implemented in vSphere 4.1 for block storage; it guarantees an allocation of I/O resources on a per-VM basis.
§ As of vSphere 5.0, SIOC is also supported for NFS-based storage!
§ This capability is essential to provide better performance for I/O-intensive and latency-sensitive applications such as database workloads, Exchange servers, etc.
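§ SIOC is enabled per datastore; a minimal PowerCLI sketch (datastore name and threshold are illustrative):

   # Enable Storage I/O Control with a 30ms congestion threshold
   Set-Datastore -Datastore (Get-Datastore "Datastore1") -StorageIOControlEnabled:$true `
     -CongestionThresholdMillisecond 30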
Storage I/O Control Refreshing Memory
[Diagram: "what you see" vs. "what you want to see" – an online store, Microsoft Exchange and a data-mining workload share an NFS/VMFS datastore; without SIOC the data-mining VM squeezes the VIP workloads, with SIOC the VIP workloads keep their share.]
vSphere 5.0 – vStorage

• VMFS 5.0
• vStorage API for Array Integration
• Storage vMotion
• Storage I/O Control
• Storage DRS
• VMware API for Storage Awareness
• Profile Driven Storage
• FCoE – Fibre Channel over Ethernet
What Does Storage DRS Solve?
§ Without Storage DRS:
  •  Identify the datastore with the most disk space and lowest latency.
  •  Validate which virtual machines are placed on the datastore and ensure
       there are no conflicts.
  •    Create Virtual Machine and hope for the best.
§ With Storage DRS:
  •  Automatic selection of the best placement for your VM.
  •  Advanced balancing mechanism to avoid storage performance
       bottlenecks or "out of space" problems.
  •    Affinity Rules.
What Does Storage DRS Provide?
§ Storage DRS provides the following:
 1.  Initial Placement of VMs and VMDKS based on available space and
       I/O capacity.
 2.    Load balancing between datastores in a datastore cluster via Storage
       vMotion based on storage space utilization.
 3.    Load balancing via Storage vMotion based on I/O metrics, i.e. latency.
§ Storage DRS also includes Affinity/Anti-Affinity Rules for VMs
 and VMDKs:
  •  VMDK Affinity – Keep a VM’s VMDKs together on the same datastore.
       This is the default affinity rule.
  •    VMDK Anti-Affinity – Keep a VM’s VMDKs separate on different datastores.
  •    Virtual Machine Anti-Affinity – Keep VMs separate on different datastores.
§ Affinity rules cannot be violated during normal operations.
Datastore Cluster
§ An integral part of SDRS is to create a group of datastores called a datastore cluster.
  •  Datastore Cluster without Storage DRS – simply a group of datastores.
  •  Datastore Cluster with Storage DRS – a load-balancing domain similar to a DRS cluster.
§ A datastore cluster without SDRS is just a datastore folder. It is the functionality provided by SDRS that makes it more than just a folder.
[Diagram: four 500GB datastores aggregated into a 2TB datastore cluster.]
Storage DRS Operations – Initial
 Placement (1 of 4)
§ Initial Placement – VM/VMDK create/clone/relocate.
 •  When creating a VM you select a datastore cluster rather than an individual datastore, and let SDRS choose the appropriate datastore.
 •    SDRS will select a datastore based on space utilization and I/O load.
 •    By default, all the VMDKs of a VM will be placed on the same datastore within a datastore cluster (VMDK Affinity Rule), but you can choose to have VMDKs assigned to different datastores.
[Diagram: a 2TB datastore cluster of four 500GB datastores with 300GB, 260GB, 265GB and 275GB available.]
Storage DRS Operations – Load
Balancing (2 of 4)
Load balancing – SDRS triggers on space usage & latency thresholds.
§ The algorithm makes migration recommendations when I/O response
 time and/or space utilization thresholds have been exceeded.
  •  Space utilization statistics are constantly gathered by vCenter; the default threshold is 80%.
  •    I/O load trend is currently evaluated every 8 hours based on the past day's history; the default threshold is 15ms.
§ Load Balancing is based on I/O workload and space which ensures
 that no datastore exceeds the configured thresholds.
§ Storage DRS will do a cost / benefit analysis!
§ For I/O load balancing Storage DRS leverages Storage I/O Control
 functionality.
Storage DRS Operations –
Thresholds (3 of 4)
Storage DRS Operations –
 Datastore Maintenance Mode
§ Datastore Maintenance Mode
 •  Evacuates all VMs & VMDKs from the selected datastore.
 •  Note that this action will not move VM Templates.
 •  Currently, SDRS only handles registered VMs.
[Diagram: VOL1 is placed in maintenance mode; its contents are evacuated to VOL2–VOL4 in the 2TB datastore cluster.]
Storage DRS Operations (4 of 4)
§ VMDK affinity
  •  Keep a virtual machine's VMDKs together on the same datastore.
  •  Maximizes VM availability when all disks are needed in order to run.
  •  On by default for all VMs.
§ VMDK anti-affinity
  •  Keep a VM's VMDKs on different datastores.
  •  Useful for separating log and data disks of database VMs.
  •  Can select all or a subset of a VM's disks.
§ VM anti-affinity
  •  Keep VMs on different datastores.
  •  Similar to DRS anti-affinity rules.
  •  Maximizes availability of a set of redundant VMs.
SDRS Scheduling
§ SDRS allows you to create a schedule to change its settings.
§ This can be useful for scenarios where you don't want VMs to migrate between datastores, or where I/O latency might temporarily rise and trigger spurious recommendations, e.g. during VM backups.
So What Does It Look Like?
Provisioning…
So What Does It Look Like? Load
 Balancing.
§ It will show “utilization before” and “after.”
§ There’s always the option to override the
 recommendations.
vSphere 5.0 – vStorage

• VMFS 5.0
• vStorage API for Array Integration
• Storage vMotion
• Storage I/O Control
• Storage DRS
• VMware API for Storage Awareness
• Profile Driven Storage
• FCoE – Fibre Channel over Ethernet
What Is vStorage APIs Storage
 Awareness (VASA)?
§ VASA is an extension of the vSphere Storage APIs: a set of vCenter-based extensions that allows storage arrays to integrate with vCenter for management functionality via server-side plug-ins, or Vendor Providers.
§ This in turn allows a vCenter administrator to be aware of the topology, capabilities, and state of the physical storage devices available to the cluster.
§ VASA enables several features:
  •    For example, it delivers system-defined (array-defined) capabilities that enable Profile-Driven Storage.
  •    It also provides array-internal information that helps several Storage DRS use cases work optimally with various arrays.
Storage Compliance
§ Once the VASA Provider has been successfully added to
 vCenter, the VM Storage Profiles should also display the
 storage capabilities provided to it by the Vendor Provider.




§ The above example contains a ‘mock-up’ of some possible
 Storage Capabilities as displayed in the VM Storage Profiles.
 These are retrieved from the Vendor Provider.
vSphere 5.0 – vStorage

• VMFS 5.0
• vStorage API for Array Integration
• Storage vMotion
• Storage I/O Control
• Storage DRS
• VMware API for Storage Awareness
• Profile Driven Storage
• FCoE – Fibre Channel over Ethernet
Why Profile Driven Storage? (1 of 2)
§ Problem Statement
 1.  It is difficult to manage datastores at scale
   •    Including: capacity planning, differentiated data services for each datastore, maintaining capacity headroom, etc.
 2.  It is difficult to correctly match VM SLA requirements to available storage
   •    Because: manually choosing between many datastores and storage tiers
   •    Because: VM requirements are not accurately known, or may change over the VM's lifecycle
§ Related trends
  •  Newly virtualized Tier-1 workloads need stricter VM storage SLA promises
   •  Because: other VMs can impact the performance SLA
  •  Scale-out storage mixes VMs with different SLAs on the same storage
Why Profile Driven Storage? (2 of 2)
Save OPEX by reducing repetitive planning and effort!
§ Minimize per-VM (or per VM request) “thinking” or planning
 for storage placement.
  •  Admin needs to plan for optimal space and I/O balancing for each VM.
  •  Admin needs to identify VM storage requirements and match to physical
    storage properties.
§ Increase probability of “correct” storage placement and use
 (minimize need for troubleshooting, minimize time for
 troubleshooting).
  •  Admin needs more insight into storage characteristics.
  •  Admin needs ability to custom-tag available storage.
  •  Admin needs easy means to identify incorrect VM storage placement
    (e.g. on incorrect datastore).
Save OPEX by Reducing Repetitive Planning and Effort!
§ Today: identify requirements → find the optimal datastore → create VM → periodically check compliance.
§ With Storage DRS: after an initial setup (identify storage characteristics, group datastores), each VM request becomes: identify requirements → create VM → periodically check compliance.
§ With Storage DRS + Profile Driven Storage: after an initial setup (discover storage characteristics, group datastores), each VM request becomes: select a VM storage profile → create VM.
Storage Capabilities & VM Storage Profiles
[Diagram: storage capabilities, surfaced by VASA or user-defined, are referenced by a VM Storage Profile; the profile is associated with VMs, which then show up as Compliant or Not Compliant.]
Selecting a Storage Profile
 During Provisioning




§ By selecting a VM Storage Profile, datastores are now split into Compatible & Incompatible.
§ The Celerra_NFS datastore is the only datastore which meets the GOLD profile requirements – i.e. it is the only datastore that has our user-defined storage capability associated with it.
VM Storage Profile Compliance




§ Policy Compliance is visible from the Virtual Machine
 Summary tab.
vSphere 5.0 – vStorage

• VMFS 5.0
• vStorage API for Array Integration
• Storage vMotion
• Storage I/O Control
• Storage DRS
• VMware API for Storage Awareness
• Profile Driven Storage
• FCoE – Fibre Channel over Ethernet
Introduction
§ Fibre Channel over Ethernet (FCoE) is an enhancement that extends Fibre Channel over Ethernet networks by combining two leading-edge technologies (FC and Ethernet).
§ The FCoE adapters that VMware supports generally fall into two categories: hardware FCoE adapters, and software FCoE adapters that use an FCoE-capable NIC.
  •  Hardware FCoE adapters have been supported since vSphere 4.0.
§ The FCoE-capable NICs are referred to as Converged Network Adapters (CNAs), which carry both network and storage traffic.
§ ESXi 5.0 uses FCoE adapters to access Fibre Channel storage.
Software FCoE Adapters (1 of 2)
§ A software FCoE adapter is software that performs some of the FCoE processing.
§ This adapter can be used with a number of NICs that support partial FCoE offload.
§ Unlike the hardware FCoE adapter, the software adapter needs to be activated, similar to software iSCSI.
Software FCoE Adapters (2 of 2)
§ Once the Software FCoE is enabled, a new adapter is
 created, and discovery of devices can now take place.
Conclusion
§ vSphere 5.0 has many new compelling storage features.
§ VMFS volumes can be larger than ever before.
  •  They can contain many more virtual machines due to VAAI
    enhancements and architectural changes.
§ Storage DRS and Profile Driven Storage will help solve
 traditional problems with virtual machine provisioning.
§ The administrative overhead will be greatly reduced.
  •  VASA surfacing storage characteristics.
  •  Creating Profiles through Profile Driven Storage.
  •  Combining multiple datastores in a large aggregate.
vSphere Storage Appliance (VSA)
Introduction (1 of 3)
§ In vSphere 5.0, VMware releases a new storage appliance called the VSA.
  •  VSA is an acronym for "vSphere Storage Appliance."
  •  This appliance is aimed at our SMB (Small-Medium Business) customers who may not be in a position to purchase a SAN or NAS array for their virtual infrastructure, and therefore do not have shared storage.
  •    It is the SMB market that we wish to go after with this product – our aim is to move these customers from Essentials to Essentials+.
  •    Without access to a SAN or NAS array, these SMB customers are excluded from many of the top features available in a VMware virtual infrastructure, such as vSphere HA & vMotion.
  •    Customers who decide to deploy a VSA can now benefit from many additional vSphere features without having to purchase a SAN or NAS device to provide them with shared storage.
Introduction (2 of 3)
[Diagram: three vSphere hosts, each running a VSA appliance and exporting an NFS datastore; the VSA Manager runs in the vSphere Client.]
§ Each ESXi server has a VSA deployed to it as a virtual machine.
§ The appliances use the available space on the local disk(s) of the ESXi servers & present one replicated NFS volume per ESXi server. This replication of storage makes the VSA very resilient to failures.
Introduction (3 of 3)
§ The NFS datastores exported from the VSA can now be used as shared storage on all of the ESXi servers in the same datacenter.
§ The VSA creates shared storage out of local storage for use by a specific set of hosts.
§ This means that vSphere HA & vMotion can now be made available on low-end (SMB) configurations, without external SAN or NAS servers.
§ There is a CAPEX saving for SMB customers, as there is no longer a need to purchase a dedicated SAN or NAS device to achieve shared storage.
§ There is also an OPEX saving, as the management of the VSA may be done by the vSphere Administrator; there is no need for dedicated SAN skills to manage the appliances.
Supported VSA Configurations
§ The vSphere Storage Appliance can be deployed in two configurations:
  •  2 x ESXi 5.0 servers
   •    Deploys 2 vSphere Storage Appliances, one per ESXi server, & a VSA Cluster Service on the vCenter Server
  •  3 x ESXi 5.0 servers
   •    Deploys 3 vSphere Storage Appliances, one per ESXi server
  •  Each of the servers must contain a new/vanilla install of ESXi 5.0.
  •  During the configuration, the user selects a datacenter and is then presented with a list of ESXi servers in that datacenter.
  •    The installer will check the compatibility of each of these physical hosts to make sure they are suitable for VSA deployment.
  •    The user must then select which compatible ESXi servers should participate in the VSA cluster, i.e. which servers will host VSA nodes.
  •    The installer then creates the storage cluster by aggregating and virtualizing each server's local storage to present a logical pool of shared storage.
Two Member VSA
[Diagram: a VSA cluster with 2 members, managed by the VSA Manager and the VSA Cluster Service on the vCenter Server; each member exports one datastore (VSA Datastore 1 and 2) and holds a replica of the other member's volume.]
Three Member VSA
[Diagram: a VSA cluster with 3 members, managed by the VSA Manager on the vCenter Server; each member exports one datastore (VSA Datastore 1–3) and holds the replica of another member's volume.]
VSA Manager
§ The VSA Manager helps an administrator perform the following tasks:
  •  Deploy vSphere Storage Appliance instances onto ESXi hosts to create a VSA cluster
  •  Automatically mount the NFS volumes that each vSphere Storage Appliance exports as datastores to the ESXi hosts
  •  Monitor, maintain, and troubleshoot a VSA cluster
Resilience
§ Many storage arrays are a single point of failure (SPOF) in
 customer environments.
§ VSA is very resilient to failures.
§ If a node in the VSA cluster fails, another node will seamlessly
  take over the role of presenting its NFS datastore.
§ The NFS datastore that was being presented from the failed node
  will now be presented from the node that holds its replica
  (mirror copy).
§ The new node will use the same NFS server IP address that the
  failed node was using for presentation, so that any VMs that
  reside on that NFS datastore will not be affected by the
  failover (a sketch of this behaviour follows).
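A small Python sketch of that failover behaviour, with invented node names, datastore names and IP addresses (illustrative pseudologic only, not the VSA implementation):

nodes = {
    "vsa1": {"VSA-Datastore-1": "10.0.0.11"},
    "vsa2": {"VSA-Datastore-2": "10.0.0.12"},
}
replica_holder = {"VSA-Datastore-1": "vsa2", "VSA-Datastore-2": "vsa1"}

def fail_over(failed: str) -> None:
    for datastore, ip in nodes[failed].items():
        survivor = replica_holder[datastore]
        # The surviving node promotes its mirror copy and takes over the same
        # NFS server IP, so ESXi hosts and their VMs see no change.
        print(f"{survivor}: promoting replica of {datastore}, exporting on {ip}")

fail_over("vsa1")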
What’s New in VMware vCenter
Site Recovery Manager v5.0 –
Technical
vCenter Site Recovery Manager
Ensures Simple, Reliable DR
§ Site Recovery Manager complements vSphere to provide the
  simplest and most reliable disaster protection and site migration
  for all applications
§ Provide cost-efficient replication of applications to the failover site
  •  Built-in vSphere Replication
  •  Broad support for storage-based replication
§ Simplify management of recovery and migration plans
  •  Replace manual runbooks with centralized recovery plans
  •  From weeks to minutes to set up a new plan
§ Automate failover and migration processes for reliable recovery
  •  Enable frequent non-disruptive testing
  •  Ensure fast, automated failover
  •  Automate failback processes
SRM Provides Broad Choice of
Replication Options
[Diagram: two sites, each with its own vCenter Server and Site Recovery Manager; VMs are replicated from the protected site to the recovery site either by vSphere Replication or by storage-based replication between arrays.]
vSphere Replication: simple, cost-efficient replication for Tier 2 applications and smaller sites
Storage-based replication: high-performance replication for business-critical applications in larger sites
SRM of Today’s High-Level Architecture
[Diagram: the “Protected” Site and the “Recovery” Site each run a vSphere Client with the SRM Plug-In, an SRM Server paired with a vCenter Server, and a Storage Replication Adapter (SRA); ESX hosts at both sites run on VMFS volumes, and the arrays' replication software replicates the SAN from the protected site to the recovery site.]
Technology – vSphere Replication
§ Adding native replication to SRM
  •  Virtual machines can be replicated regardless of what storage they live on
  •  Enables replication between heterogeneous datastores
  •  Replication is managed as a property of a virtual machine
  •  Efficient replication minimizes impact on VM workloads
  •  Provides a guest-level copy of the VM rather than a copy of the underlying storage
vSphere Replication Details
§ Replication Granularity per Virtual Machine
  •  Can opt to replicate all or a subset of the VM’s disks
  •  You can create the initial copy in any way you want – even via sneaker net!
  •  You have the option to place the replicated disks where you want.
  •  Disks are replicated in a group-consistent manner
§ Simplified Replication Management
  •  User selects destination location for target disks
  •  User selects Recovery Point Objective (RPO)
  •  User can supply the initial copy to save on bandwidth
§ Replication Specifics (see the sketch below)
  •  Changes on the source disks are tracked by ESX
  •  Deltas are sent to the remote site
  •  Does not use VMware snapshots
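As a rough illustration of those specifics, this Python sketch models change tracking and delta shipping against a user-selected RPO; the names and the block-set representation are assumptions for illustration, not the vSphere Replication engine:

from typing import Set

RPO_SECONDS = 15 * 60            # the user-selected Recovery Point Objective
dirty_blocks: Set[int] = set()   # blocks changed since the last shipped delta

def on_guest_write(block: int) -> None:
    # Changes are tracked at the source as the guest writes; no snapshots used.
    dirty_blocks.add(block)

def replication_cycle() -> None:
    # Run at least once per RPO_SECONDS so the recovery copy never lags
    # further behind the source than the RPO allows.
    deltas = sorted(dirty_blocks)
    dirty_blocks.clear()
    print(f"shipping {len(deltas)} changed blocks to the recovery site")

for block in (7, 42, 42, 99):    # repeated writes to block 42 ship only once
    on_guest_write(block)
replication_cycle()

Because only deltas cross the wire, repeated writes to the same block cost a single transfer per cycle.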
Replication UI
§ Select VMs to replicate from within the vSphere Client via
  right-click options
§ This can be done on one VM, or on multiple VMs at the same time!
vSphere Replication 1.0 Limitations
§ Focus on virtual disks of powered-on VMs.
  •  ISOs and floppy images are not replicated.
  •  Powered-off/suspended VMs are not replicated.
  •  Non-critical files are not replicated (e.g. logs, stats, swap, dumps).
§ vSR works at the virtual device layer.
  •  Independent of disk format specifics.
  •  Independent of primary-side snapshots.
  •  Snapshots work with vSR: the snapshot is replicated, but the VM is recovered with the snapshots collapsed.
  •  Physical RDMs are not supported.
§ FT, linked clones and VM templates are not supported with HBR.
§ Automated failback of vSR-protected VMs is not in this release,
  but will be supported in the future.
§ Virtual Hardware 7, or later, is required in the VM (these
  restrictions are codified in the sketch below).
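The list above can be codified as a simple eligibility check; the VM fields and the replicable function are illustrative, not an SRM or vSR API:

from dataclasses import dataclass

@dataclass
class VM:
    powered_on: bool
    hardware_version: int
    has_physical_rdm: bool = False
    fault_tolerant: bool = False
    linked_clone: bool = False
    template: bool = False

def replicable(vm: VM) -> bool:
    # Powered-off/suspended VMs and hardware versions below 7 are excluded.
    if not vm.powered_on or vm.hardware_version < 7:
        return False
    # Physical RDMs, FT, linked clones and templates are unsupported.
    return not (vm.has_physical_rdm or vm.fault_tolerant
                or vm.linked_clone or vm.template)

print(replicable(VM(powered_on=True, hardware_version=8)))   # True
print(replicable(VM(powered_on=True, hardware_version=4)))   # False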
SRM Architecture with vSphere Replication

[Diagram: the same two-site layout as before – vSphere Client with SRM Plug-In, SRM Server and vCenter Server at each site – extended with a vSphere Replication Management Server (vRMS) at both sites, a vSphere Replication Server (vRS) at the recovery site, and vSphere Replication Agents (vRA) on the protected ESX hosts, replicating VMs from the protected site's storage to the recovery site's storage.]
SRM Scalability

  Limit                                              Maximum   Enforced
  Protected virtual machines (total)                 3000      No
  Protected virtual machines in a single
  protection group                                   500       No
  Protection groups                                  250       No
  Simultaneously running recovery plans              30        No
  vSphere Replication protected virtual machines     500       No
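Since none of these maxima are enforced by SRM itself, a pre-flight check along these lines can warn when a design drifts past the supported limits; the dictionary keys are invented names, the values come from the table above:

SRM_MAXIMUMS = {
    "protected_vms_total": 3000,
    "protected_vms_per_protection_group": 500,
    "protection_groups": 250,
    "simultaneous_recovery_plans": 30,
    "vsphere_replicated_vms": 500,
}

def check_limits(current: dict) -> None:
    for key, maximum in SRM_MAXIMUMS.items():
        value = current.get(key, 0)
        if value > maximum:
            # SRM will not stop you (Enforced = No), so warn explicitly.
            print(f"WARNING: {key} = {value} exceeds supported maximum {maximum}")

check_limits({"protection_groups": 300, "protected_vms_total": 1200})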
Workflow
§ Currently SRM offers two workflows: DR event failover and test.
Planned Migration
§ New is Planned Migration: it shuts down the protected VMs and
  then synchronizes them.
§ Planned migration ensures application consistency and no data
  loss during migration (see the sketch below):
  •  Graceful shutdown of production VMs in an application-consistent state
  •  Data sync to complete replication of the VMs
  •  Recover fully replicated VMs
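The three steps map directly onto a sequence like this minimal Python sketch (class and function names are illustrative, not SRM's API):

class VM:
    def __init__(self, name: str):
        self.name = name
    def guest_shutdown(self) -> None:
        print(f"graceful, application-consistent shutdown of {self.name}")

def synchronize_replication() -> None:
    print("final data sync: replication of the quiesced VMs completes")

def planned_migration(vms) -> None:
    for vm in vms:                 # 1. graceful shutdown of production VMs
        vm.guest_shutdown()
    synchronize_replication()      # 2. data sync, so nothing is lost
    for vm in vms:                 # 3. recover fully replicated VMs
        print(f"recovering {vm.name} at the recovery site")

planned_migration([VM("db01"), VM("app01")])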
Failback
Description:
•  “Single button” to fail back all recovered VMs
•  Interfaces with storage to automatically reverse replication
•  Replays existing recovery plans – so new virtual machines are not part of the failback
Benefits:
•  Facilitates DR operations for enterprises that are mandated to perform a true failover as part of DR testing
•  Simplifies the recovery process after a disaster
[Diagram: reverse replication from Site B (Recovery) back to Site A (Primary).]
Failback
§ To fail back, you first perform a planned migration, followed by
  a reprotect. Then, to do the actual failback, you run a recovery
  (the full sequence is sketched after this walkthrough).
§ Below is a successful recovery of a planned migration.
Failback (continued)
§ Reprotect is now almost complete . . .
Failback (continued)
§ Replication now goes in reverse – to the protected side.
Failback (continued)
§ Now we are ready to fail over to our original side – the
  protected site!
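Putting the walkthrough together, the failback sequence reduces to three calls, sketched here with an invented RecoveryPlan stand-in (not the SRM API):

class RecoveryPlan:
    def run(self, mode: str) -> None:
        print(f"running recovery plan in {mode} mode")
    def reprotect(self) -> None:
        print("reprotect: reversing replication toward the original site")

def failback(plan: RecoveryPlan) -> None:
    plan.run(mode="planned_migration")  # step 1: planned migration
    plan.reprotect()                    # step 2: reprotect / reverse replication
    plan.run(mode="recovery")           # step 3: the actual failback

failback(RecoveryPlan())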
DR Event
Dependencies
§ There is more functionality to help manage multi-tier
  applications.
Dependencies (continued)
Dependencies (continued) – VM Startup Order
[Diagram: recovered VMs are sequenced across five startup groups (Group 1 through Group 5), ordering the master database, databases, app servers, Apache web servers, Exchange/mail sync and desktops; a sketch of group-ordered startup follows.]
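A minimal Python sketch of group-ordered startup; the assignment of VMs to the five groups below is invented for illustration:

startup_groups = [
    ["master-db"],                                  # Group 1
    ["db-replica-1", "db-replica-2"],               # Group 2
    ["app-server-1", "app-server-2", "exchange"],   # Group 3
    ["apache-1", "apache-2", "mail-sync"],          # Group 4
    ["desktop-1", "desktop-2"],                     # Group 5
]

for group_no, group in enumerate(startup_groups, start=1):
    for vm in group:
        print(f"powering on {vm}")
    # The whole group is awaited (e.g. VMware Tools heartbeats or a timeout)
    # before the next dependency group is started.
    print(f"group {group_no} is up; starting the next group")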

VMware vSphere 5 seminar

  • 2. 2010 vCloud Director vShield Security vCenter Management vSphere vSphere vSphere
  • 3. 2011 vCloud Director New vShield Security Cloud Infrastructure Launch vCenter Management vSphere vSphere vSphere
  • 4.
  • 5. Agenda • vSphere 5.0 Platform • vSphere 5.0 Networking • vSphere 5.0 Availability • vSphere 5.0 vMotion, DRS/DPM • vCenter Server 5.0 • vSphere 5.0 vStorage • vSphere 5.0 Storage Appliance (VSA) • VMware vCenter Site Recovery Manager v5.0
  • 6. vSphere 5.0 – Platform •  Platform Enhancements •  ESXi Firewall •  Image Builder •  Auto Deploy
  • 7. New Virtual Machine Features § vSphere 5.0 supports the industry’s most capable VM’s •  32 virtual CPUs per VM •  1TB RAM per VM •  4x previous capabilities! VM Scalability •  3D graphics Richer Desktop Experience •  Client-connected USB •  VM BIOS boot order config API devices and PowerCLI interface •  USB 3.0 devices •  EFI BIOS •  Smart Card Readers for Broader Device VM Console Access Coverage •  UI for multi-core virtual •  Support for Mac OS X Other new CPUs servers features •  Extended VMware Tools compatibility Items which require HW version 8 in blue
  • 8. vSphere 5.0 – Platform •  Platform Enhancements •  ESXi Firewall •  Image Builder •  Auto Deploy •  vSphere Update Manager
  • 9. ESXi 5.0 Firewall Features § Capabilities •  ESXi 5.0 has a new firewall engine which is not based on iptables. •  The firewall is service oriented, and is a stateless firewall. •  Users have the ability to restrict access to specific services based on IP address/Subnet Mask. § Management •  The GUI for configuring the firewall on ESXi 5.0 is similar to that used with the classic ESX firewall — customers familiar with the classic ESX firewall should not have any difficulty with using the ESXi 5.0 version. •  There is a new esxcli interface (esxcfg-firewall is deprecated in ESXi 5.0). •  There is Host Profile support for the ESXi 5.0 firewall. •  Customers who upgrade from Classic ESX to ESXi 5.0 will have their firewall settings preserved.
  • 10. UI: Security Profile § The ESXi Firewall can be managed via the vSphere client. § Through the Configuration > Security Profile, one can observe the Enabled Incoming/Outgoing Services, the Opened Port List for each service & the Allowed IP List for each service.
  • 11. UI: Security Profile > Services > Properties § Through the Services Properties, one can configure if a service should be automatically started. § Services can also be stopped & started on-the-fly.
  • 12. vSphere 5.0 – Platform •  Platform Enhancements •  ESXi Firewall •  Image Builder •  Auto Deploy
  • 13. Composition of an ESXi Image Core CIM Hypervisor Providers Plug-in Drivers Components
  • 14. ESXi Image Deployment § Challenges •  Standard ESXi image from VMware download site is sometimes limited •  Doesn’t have all drivers or CIM providers for specific hardware •  Doesn’t contain vendor specific plug-in components ? Missing CIM provider Missing driver Standard
 ESXi ISO •  Base providers •  Base drivers
  • 15. Building an Image Start PowerCLI session Windows Host with PowerCLI
 and Image Builder Snap-in
  • 16. Building an Image Activate Image Builder Snap-in Windows Host with PowerCLI
 and Image Builder Snap-in Image Builder
  • 17. Building an Image Depots Connect to depot(s) Image
 Profile Windows Host with PowerCLI
 and Image Builder Snap-in ESXi VIBs Image Driver Builder VIBs OEM VIBs
  • 18. Building an Image Depots Clone and modify existing Image Profile Image
 Profile Windows Host with PowerCLI
 and Image Builder Snap-in ESXi VIBs Image Driver Builder VIBs OEM VIBs
  • 19. Building an Image Depots Generate new image Image
 Profile Windows Host with PowerCLI
 and Image Builder Snap-in ESXi VIBs Image Driver Builder ISO Image VIBs PXE-bootable
 Image OEM VIBs
  • 20. vSphere 5.0 – Platform •  Platform Enhancements •  ESXi Firewall •  Image Builder •  Auto Deploy
  • 21. vSphere 5.0 – Auto Deploy Overview vCenter Server with Auto Deploy •  Deploy and patch vSphere hosts in minutes using a new “on the fly” model Image Host Profiles •  Coordination with vSphere Host Profiles Profiles Benefits • Rapid provisioning: initial deployment and patching of hosts vSphere vSphere vSphere • Centralized host and image management • Reduce manual deployment and patch processes
  • 22. Deploying a Datacenter Has Just Gotten Much Easier Before After Time: 30 Time: 30 Time: 30 mins mins mins …..Repeat 37 more times… Total time: 20 Total time: 10 Minutes! Hours!
  • 23. Auto Deploy Example – Initial Boot Provision new host vCenter Server Image
 Image
 Profile Image
 Host Profile Profile Host Profile Profile Host Profile Rules Engine ESXi VIBs Driver VIBs “Waiter” Auto 
 Deploy TFTP DHCP OEM VIBs
  • 24. Auto Deploy Example – Initial Boot 1) PXE Boot server vCenter Server Image
 Image
 Profile Image
 Host Profile Profile Host Profile Profile Host Profile Rules Engine ESXi VIBs Driver VIBs “Waiter” gPXE
 DHCP
 image Reques t Auto 
 Deploy TFTP DHCP OEM VIBs
  • 25. Auto Deploy Example – Initial Boot 2) Contact Auto Deploy Server vCenter Server Image
 Image
 Profile Image
 Host Profile Profile Host Profile Profile Host Profile Rules Engine ESXi VIBs 
 Driver b oot http est VIBs r eq u “Waiter” Auto 
 Deploy OEM VIBs Cluster A Cluster B
  • 26. Auto Deploy Example – Initial Boot 3) Determine Image Profile, Host Profile and cluster vCenter Server Image
 Image
 Profile Image
 Host Profile Profile Host Profile Profile Host Profile Rules Engine ESXi •  Image Profile X VIBs •  Host Profile 1 •  Cluster B Driver VIBs “Waiter” Auto 
 Deploy OEM VIBs Cluster A Cluster B
  • 27. Auto Deploy Example – Initial Boot 4) Push image to host, apply host profile vCenter Server Image
 Image
 Profile Image
 Host Profile Profile Host Profile Profile Host Profile Rules Engine ESXi Image Profile VIBs Host Profile Cache Driver VIBs “Waiter” Auto 
 Deploy OEM VIBs Cluster A Cluster B
  • 28. Auto Deploy Example – Initial Boot 5) Place host into cluster vCenter Server Image
 Image
 Profile Image
 Host Profile Profile Host Profile Profile Host Profile Rules Engine ESXi Image Profile VIBs Host Profile Cache Driver VIBs “Waiter” Auto 
 Deploy OEM VIBs Cluster A Cluster B
  • 29. vSphere 5.0 – Networking •  LLDP •  NetFlow •  Port Mirror •  NETIOC – New Traffic Types
  • 30. What Is Discovery Protocol? (Link Layer Discovery Protocol ) §  Discovery protocol is a data link layer network protocol used to discover capabilities of network devices. §  Discovery protocol allows customer to automate the deployment process in a complex environment through its ability to •  Discover capabilities of Network devices •  Discover configuration of neighboring infrastructure §  vSphere infrastructure supports following Discovery Protocol •  CDP (Standard vSwitches Distributed vSwitches) •  LLDP (Distributed vSwitches) §  LLDP is a standard based vendor neutral discovery protocol (802.1AB)
  • 31. LLDP Neighbour Info § Sample output using LLDPD Utility
  • 32. vSphere 5.0 – Networking •  LLDP •  NetFlow •  Port Mirror •  NETIOC – New Traffic Types
  • 33. What Is NetFlow? §  NetFlow is a networking protocol that collects IP traffic info as records and sends them to third party collectors such as CA NetQoS, NetScout etc. Legend : VM A VM B VM traffic NetFlow session Collecto Physical r switch vDS Host trun k §  The Collector/Analyzer report on various information such as: •  Current top flows consuming the most bandwidth •  Which flows are behaving irregularly •  Number of bytes a particular flow has sent and received in the past 24 hours
  • 34. NetFlow with Third-Party Collectors Legend : Net Scout 
 Internal flows nGenius 
 External flows Collector NetFlow session External Systems vDS Hos t CA NetQoS
 Collector
  • 35. vSphere 5.0 Networking •  LLDP •  NetFlow •  Port Mirror •  NETIOC – New Traffic Types
  • 36. What Is Port Mirroring (DVMirror)? §  Port Mirroring is the capability on a network switch to send a copy of network packets seen on a switch port to a network monitoring device connected on another switch port. §  Port Mirroring is also referred to as SPAN (Switched Port Analyzer) on Cisco Switches. §  Port Mirroring overcomes the limitation of promiscuous mode. •  By providing granular control on which traffic can be monitored •  Ingress Source •  Egress Source §  Helps in troubleshooting network issue by providing access to: •  Inter-VM traffic •  Intra-VM traffic
  • 37. Port Mirror Traffic Flow When Mirror Destination Is a VM Inter-VM traffic Ingress
 Egress
 Destinatio Destinatio Source Source n n vDS vDS Legend : Mirror Flow VM Traffic Intra-VM traffic Egress
 Ingress
 Destinatio Destinatio Source Source n n External External System System vDS vDS
  • 38. vSphere 5.0 Networking •  LLDP •  NetFlow •  Port Mirror •  NETIOC – New Traffic Types
  • 39. What Is Network I/O Control (NETIOC)? §  Network I/O control is a traffic management feature of vSphere Distributed Switch (vDS). §  In consolidated I/O (10 gig) deployments, this feature allows customers to: •  Allocate Shares and Limits to different traffic types. •  Provide Isolation •  One traffic type should not dominate others •  Guarantee Service Levels when different traffic types compete §  Enhanced Network I/O Control — vSphere 5.0 builds on previous versions of Network I/O Control feature by providing: •  User-defined network resource pools •  New Host Based Replication Traffic Type •  QoS tagging
  • 40. NETIOC VM Groups VMRG1 VMRG2 VMRG3 Total BW = 20 Gig User Defined RP vMotion iSCSI VMware vNetwork Distributed Switch HBR NFS FT VM
 Network I/O Control 10 GigE VMRG1 VMRG2 VMRG3
  • 41. NETIOC VM Traffic Pepsi VMs Coke VM vMotio HBR FT n Mgmt NFS iSCSI Server Admin vNetwork Distributed Portgroup Teaming Policy vNetwork Distributed Switch Load Based Shaper Teaming Traffic Shares Limit (Mbps) 802.1p vMotion 5 150 1 Scheduler Scheduler Mgmt 30 -- Limit enforcement 
 NFS 10 250 -- per team Shares enforcement iSCSI 10 2 per uplink FT 60 -- HBR 10 -- VM 20 2000 4 Pepsi 5 -- Coke 15 --
  • 42. vSphere 5.0 – Availability
  • 43. vSphere HA Primary Components § Every host runs a agent •  Referred to as ‘FDM’ or Fault Domain Manager •  One of the agents within the cluster is chosen to assume the role of the Master •  There is only one Master per cluster during normal ESX 01 ESX 03 operations •  All other agents assume the role of Slaves § There is no more Primary/ Secondary concept with vSphere HA ESX 02 ESX 04 vCenter
  • 44. The Master Role § An FDM master monitors: •  ESX hosts and Virtual Machine availability. •  All Slave hosts. Upon a Slave host failure, protected VMs on that host will be restarted. •  The power state of all the protected VMs. ESX 01 ESX 03 Upon failure of a protected VM, the Master will restart it. § An FDM master manages: •  The list of hosts that are members of the cluster, updating this list as hosts are added or removed from the cluster. •  The list of protected VMs. The Master updates this list after ESX 02 ESX 04 each user-initiated power on or power off. vCenter
  • 45. The Slave Role § An Slave monitors the runtime state of it’s locally running VMs and forwards any significant state changes to the Master. § It implements vSphere HA features ESX 01 ESX 03 that do not require central coordination, most notably VM Health Monitoring. § It monitors the health of the Master. If the Master should fail, it participates in the election process ESX 02 ESX 04 for a new master. § Maintains list of powered on VMs vCenter
  • 46. Storage Level Communications § One of the most exciting new features of vSphere HA is its ability to use a storage subsystem for communication. § The datastores used for this are referred to as ‘Heartbeat Datastores’. § This provides for increased communication ESX 01 ESX 03 redundancy. § Heartbeat datastores are used as a communication channel only when the management network is lost - such as in the case of isolation or network partitioning. ESX 02 ESX 04 vCenter
  • 47. Storage Level Communications § Heartbeat Datastores allow a Master to: •  Monitor availability of Slave hosts and the VMs running on them •  Determine whether a host has become network isolated rather than network ESX 01 ESX 03 partitioned. •  Coordinate with other Masters - since a VM can only be owned by only one master, masters will coordinate VM ownership thru datastore communication. •  By default, vCenter will automatically pick 2 datastores. These 2 ESX 02 ESX 04 datastores can also be selected by the user. vCenter
  • 48. vSphere 5.0 – vMotion, DRS/DPM
  • 49. vSphere 5.0 – vMotion § The original vMotion keeps getting better! § Multi-NIC Support •  Support up to four 10Gbps or sixteen 1Gbps NICs. (ea. NIC must have its own IP). •  Single vMotion can now scale over multiple NICs. (load balance across multiple NICs). •  Faster vMotion times allow for a higher number of concurrent vMotions. §  Reduced Application Overhead •  Slowdown During Page Send (SDPS) feature throttles busy VMs to reduce timeouts and improve success. •  Ensures less than 1 Second switchover time in almost all cases. § Support for higher latency networks (up to ~10ms) •  Extend vMotion capabilities over slower networks.
  • 50. Multi-NIC Throughput Multi-NIC 30 25 Throughput (Gbps) 20 15 10 5 0 One NIC Two NICs Three NICs* * Limited by throughput of PCI-E bus in this particular setup.
  • 51. vSphere 5.0 – DRS/DPM § DRS/DPM improvements focus on cross-product integration. •  Introduce support for “Agent VMs.” •  Agent VM is a special purpose VM tied to a specific ESXi host. •  Agent VM cannot / should not be migrated by DRS or DPM. •  Special handling of Agent VMs now afforded by DRS DPM. § A DRS/DPM cluster hosting Agent VMs. •  Accounts for Agent VM reservations (even when powered off). •  Waits for Agent VMs to be powered on and ready before placing client VMs. •  Will not try to migrate a Agent VM (Agent VMs pinned to their host). §  Maintenance Mode / Standby Mode Support • Agent VMs do not have to be evacuated for host to enter maintenance or standby mode. •  When host enters maintenance/standby mode, Agent VMs are powered off (after client VMs are evacuated). •  When host exits maintenance/standby mode, Agent VMs are powered on (before client VMs are placed).
  • 52. vSphere 5.0 – vCenter Server
  • 53. vSphere Web Client Architecture The vSphere Web Client runs within a browser Fx Application Server that Flex Client provides a Back End scalable back end The Query Service obtains live data vCenter in either Query from the core single or Service vCenter Server Linked mode process operation vCenter
  • 54. Extension Points Launchbar Tabs Inventory Objects Create custom actions Sidebar Extension Portlets Add right-click extensions
  • 55. Features of the vSphere Web Client § Ready Access to Common Actions •  Quick access to common tasks provided out of the box
  • 56. Introducing vCenter Server Appliance § The vCenter Server Appliance is the answer! •  Simplifies Deployment and Configuration •  Streamlines patching and upgrades •  Reduces the TCO for vCenter § Enables companies to respond to business faster! VMware vCenter Server Virtual Appliance Automatio Visibility n Scalability
  • 57. Component Overview § vCenter Server Appliance (VCSA) consists of: •  A pre-packaged 64 bit application running on SLES 11 •  Distributed with sparse disks •  Disk Footprint Distribution Min Deployed Max Deployed 3.6GB ~5GB ~80GB •  Memory Footprint •  A built in enterprise level database with optional support for a remote Oracle databases. •  Limits are the same for VC and VCSA •  Embedded DB •  5 hosts/50 VMs •  External DB •  300 hosts/3000 VMs (64 bit) •  A web-based configuration interface
  • 58. Feature Overview § vCenter Server Appliance supports: •  The vSphere Web Client •  Authentication through AD and NIS •  Feature parity with vCenter Server on Windows •  Except – •  Linked Mode support - Requires ADAM (AD LDS) •  IPv6 support •  External DB Support •  Oracle is the only supported external DB for the first release •  No vCenter Heartbeat support •  HA is provided through vSphere HA
  • 59. vSphere 5.0 – vStorage • VMFS 5.0 • vStorage API for Array Integration • Storage vMotion • Storage I/O Control • Storage DRS • VMware API for Storage Awareness • Profile Driven Storage • FCoE – Fiber Channel over Ethernet
  • 60. Introduction to VMFS-5 § Enhanced Scalability •  Increase the size limits of the filesystem support much larger single extent VMFS-5 volumes. •  Support for single extent 64TB Datastores. § Better Performance •  Uses VAAI locking mechanism with more tasks. § Easier to manage and less overhead •  Space reclamation on thin provisioned LUNs. •  Smaller sub blocks. •  Unified Block size.
  • 61. VMFS-5 vs VMFS-3 Feature Comparison Feature VMFS-3 VMFS-5 Yes 
 2TB+ VMFS Volumes Yes (using extents) Support for 2TB+ Physical RDMs No Yes Unified Block size (1MB) No Yes Atomic Test Set Enhancements
 No Yes (part of VAAI, locking mechanism) Sub-blocks for space efficiency 64KB (max ~3k) 8KB (max ~30k) Small file support No 1KB
  • 62. VMFS-3 to VMFS-5 Upgrade § The Upgrade to VMFS-5 is clearly displayed in the vSphere Client under Configuration Storage view. § It is also displayed in the Datastores Configuration view. § Non-disruptive upgrades.
  • 63. vSphere 5.0 – vStorage • VMFS 5.0 • vStorage API for Array Integration • Storage vMotion • Storage I/O Control • Storage DRS • VMware API for Storage Awareness • Profile Driven Storage • FCoE – Fiber Channel over Ethernet
  • 64. VAAI – Introduction § vStorage API for Array Integration = VAAI § VAAI’s main purpose is to leverage array capabilities. •  Offloading tasks to reduce overhead •  Benefit from enhanced mechanisms arrays mechanisms § The “traditional” VAAI primitives have been improved. § We have introduced multiple new primitives. Application § Support for NAS! VI-3 Hypervisor Non-VAAI Fabric Array VAAI LUN
 LUN
 01 02
  • 65. VAAI Primitive Updates in vSphere 5.0 § vSphere 4.1 has a default plugin shipping for Write Same as the primitive was fully T10 compliant, however ATS and Full Copy were not. •  The T10 organization is responsible for SCSI standardization (SCSI-3) and a standard used by many Storage Vendors. § vSphere 5.0 has all the 3 primitives which are T10 compliant integrated in the ESXi Stack. •  This allows for arrays which are T10 compliant leverage these primitives with a default VAAI plugin in vSphere 5.0. § It should also be noted that the ATS primitive has been extended in vSphere 5.0 / VMFS-5 to cover even more operations, resulting in even better performance and greater scalability.
  • 66. Introducing VAAI NAS Primitives § With this primitive, we will enable hardware acceleration/ offload features for NAS datastores. § The following primitives are defined for VAAI NAS: •  Full File Clone – Similar to the VMFS block cloning. Allows offline VMDKs to be cloned by the Filer. •  Note that hot migration via Storage vMotion on NAS is not hardware accelerated. •  Reserve Space – Allows creation of thick VMDK files on NAS. § NAS VAAI plugins are not shipped with ESXi 5.0. These plugins will be developed and distributed by the storage vendors, but signed by the VMware certification program.
  • 67. VAAI NAS: Thick Disk Creation § Without the VAAI NAS primitives, only Thin format is available. § With the VAAI NAS primitives, Flat (thick), Flat pre-initialized (eager zeroed-thick) and Thin formats are available. Non VAAI VAAI
  • 68. Introducing VAAI Thin Provisioning § What are the driving factors behind VAAI TP? •  Provisioning new LUNs to a vSphere environment (cluster) is complicated. § Strategic Goal: •  We want to make the act of physical storage provisioning in a vSphere environment extremely rare. •  LUNs should be an incredibly large address spaces should be able to handle any VM workload. § VAAI TP features include: •  Dead space reclamation. •  Monitoring of the space.
  • 69. VAAI Thin Provisioning – Dead Space Reclamation § Dead space is previously written blocks that are no longer used by the VM. For instance after a Storage vMotion. § vSphere conveys block information to storage system via VAAI storage system reclaims the dead blocks. •  Storage vMotion, VM deletion and swap file deletion can trigger vSphere Storage vMotion the thin LUN to free some physical space. •  ESXi 5.0 uses a standard SCSI command for dead space reclamation. VMFS volume A VMFS volume B
 • 70. Current “Out Of Space” User Experience (Diagram: today’s flow) •  No space-related warnings •  No mitigation steps available •  Space exhaustion: VMs and LUN go offline
 • 71. “Out Of Space” User Experience with VAAI Extensions (Diagram: the improved flow) •  Space exhaustion warning in the UI •  Storage vMotion based evacuation, or add space •  On space exhaustion, affected VMs are paused; the LUN stays online awaiting space allocation
 • 72. vSphere 5.0 – vStorage • VMFS 5.0 • vStorage API for Array Integration • Storage vMotion • Storage I/O Control • Storage DRS • VMware API for Storage Awareness • Profile Driven Storage • FCoE – Fibre Channel over Ethernet
 • 73. Storage vMotion – Introduction § In vSphere 5.0, a number of new enhancements were made to Storage vMotion. •  Storage vMotion will work with Virtual Machines that have snapshots, which means coexistence with other VMware products and features such as VCB, VDR and HBR. •  Storage vMotion will support the relocation of linked clones. •  Storage vMotion has a new use case – Storage DRS – which uses Storage vMotion for Storage Maintenance Mode and Storage Load Balancing (space or performance).
 • 74. Storage vMotion Architecture Enhancements (1 of 2) § In vSphere 4.1, Storage vMotion uses the Changed Block Tracking (CBT) method to copy disk blocks between source and destination. § The main challenge in this approach is that the disk pre-copy phase can take a while to converge, and can sometimes result in Storage vMotion failures if the VM was running a very I/O intensive load. § Mirroring I/O between the source and the destination disks has significant gains when compared to the iterative disk pre-copy mechanism. § In vSphere 5.0, Storage vMotion uses a new mirroring architecture to provide the following advantages over previous versions: •  Guarantees migration success even when facing a slower destination. •  More predictable (and shorter) migration time.
 • 75. Storage vMotion Architecture Enhancements (2 of 2) (Diagram: guest OS writes pass through a mirror driver below the VMM, which mirrors them to both the source and destination disks, while a userworld datamover in the VMkernel performs the bulk copy.)
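From PowerCLI, the mirror-mode migration is driven by the same cmdlet as before; changing only the datastore of a running VM triggers a Storage vMotion. A sketch with illustrative names:

    # Relocate a running VM's storage; keeping the host unchanged and
    # changing only the datastore makes this a Storage vMotion.
    Get-VM "web01" | Move-VM -Datastore (Get-Datastore "FastLUN01")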
 • 76. vSphere 5.0 – vStorage • VMFS 5.0 • vStorage API for Array Integration • Storage vMotion • Storage I/O Control • Storage DRS • VMware API for Storage Awareness • Profile Driven Storage • FCoE – Fibre Channel over Ethernet
 • 77. Storage I/O Control Phase 2 and Refreshing Memory § In many customer environments, storage is mostly accessed from storage arrays over SAN, iSCSI or NAS. § One ESXi host can affect the I/O performance of others by issuing a large number of requests on behalf of one of its virtual machines. § Thus the throughput/bandwidth available to each ESXi host may vary drastically, leading to highly variable I/O performance for VMs. § To ensure stronger I/O guarantees, we implemented Storage I/O Control in vSphere 4.1 for block storage, which guarantees an allocation of I/O resources on a per-VM basis. § As of vSphere 5.0 we also support SIOC for NFS based storage! § This capability is essential to provide better performance for I/O intensive and latency-sensitive applications such as database workloads, Exchange servers, etc.
 • 78. Storage I/O Control Refreshing Memory (Diagram: “what you see” versus “what you want to see”. An online store, Microsoft Exchange and a data-mining workload share an NFS/VMFS datastore; with SIOC, the VIP workloads receive their entitled share of I/O.)
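Enabling SIOC per datastore is scriptable too; a sketch assuming the PowerCLI 5.x parameters on Set-Datastore (the datastore name and threshold are illustrative):

    # Turn on Storage I/O Control and set the congestion latency
    # threshold above which per-VM shares are enforced.
    Get-Datastore "SharedDS01" |
        Set-Datastore -StorageIOControlEnabled $true -CongestionThresholdMillisecond 30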
 • 79. vSphere 5.0 – vStorage • VMFS 5.0 • vStorage API for Array Integration • Storage vMotion • Storage I/O Control • Storage DRS • VMware API for Storage Awareness • Profile Driven Storage • FCoE – Fibre Channel over Ethernet
  • 80. What Does Storage DRS Solve? § Without Storage DRS: •  Identify the datastore with the most disk space and lowest latency. •  Validate which virtual machines are placed on the datastore and ensure there are no conflicts. •  Create Virtual Machine and hope for the best. § With Storage DRS: •  Automatic selection of the best placement for your VM. •  Advanced balancing mechanism to avoid storage performance bottlenecks or “out of space” problems. •  Affinity Rules.
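The pre-SDRS “hope for the best” flow above often amounts to a manual free-space query like this sketch (all names illustrative; FreeSpaceGB is the PowerCLI 5.x property, older releases expose FreeSpaceMB):

    # Manual placement: pick the datastore with the most free space and
    # create the VM there, with no visibility into I/O load at all.
    $ds = Get-Datastore | Sort-Object FreeSpaceGB -Descending | Select-Object -First 1
    New-VM -Name "vm01" -VMHost (Get-VMHost "esx01.example.com") -Datastore $ds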
 • 81. What Does Storage DRS Provide? § Storage DRS provides the following: 1.  Initial placement of VMs and VMDKs based on available space and I/O capacity. 2.  Load balancing between datastores in a datastore cluster via Storage vMotion based on storage space utilization. 3.  Load balancing via Storage vMotion based on I/O metrics, i.e. latency. § Storage DRS also includes Affinity/Anti-Affinity Rules for VMs and VMDKs: •  VMDK Affinity – Keep a VM’s VMDKs together on the same datastore. This is the default affinity rule. •  VMDK Anti-Affinity – Keep a VM’s VMDKs separate on different datastores. •  Virtual Machine Anti-Affinity – Keep VMs separate on different datastores. § Affinity rules cannot be violated during normal operations.
 • 82. Datastore Cluster § An integral part of SDRS is to create a group of datastores called a datastore cluster. •  Datastore Cluster without Storage DRS – Simply a group of datastores. •  Datastore Cluster with Storage DRS – A load-balancing domain similar to a DRS Cluster. § A datastore cluster without SDRS is just a datastore folder; it is the functionality provided by SDRS which makes it more than just a folder. (Diagram: four 500GB datastores grouped into a 2TB datastore cluster.)
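Datastore-cluster cmdlets were not part of the 5.0 GA tooling; assuming a later PowerCLI release that provides them, creating such a cluster might look like this sketch (all names illustrative):

    # Group four datastores into a datastore cluster; with SDRS enabled
    # it becomes a load-balancing domain rather than a simple folder.
    $dsc = New-DatastoreCluster -Name "DSC-Gold" -Location (Get-Datacenter "DC01")
    Get-Datastore "VOL1","VOL2","VOL3","VOL4" | Move-Datastore -Destination $dsc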
 • 83. Storage DRS Operations – Initial Placement (1 of 4) § Initial Placement – VM/VMDK create/clone/relocate. •  When creating a VM you select a datastore cluster rather than an individual datastore, and let SDRS choose the appropriate datastore. •  SDRS will select a datastore based on space utilization and I/O load. •  By default, all the VMDKs of a VM will be placed on the same datastore within a datastore cluster (VMDK Affinity Rule), but you can choose to have VMDKs placed on different datastores. (Diagram: a 2TB datastore cluster of four 500GB datastores with 300GB, 260GB, 265GB and 275GB available.)
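With a cluster in place, provisioning targets the cluster instead of a member datastore; a sketch, again assuming a datastore-cluster aware PowerCLI release (names illustrative):

    # Let SDRS pick the member datastore: pass the datastore cluster,
    # not an individual datastore, as the placement target.
    New-VM -Name "app01" -VMHost (Get-VMHost "esx01.example.com") `
           -Datastore (Get-DatastoreCluster "DSC-Gold")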
 • 84. Storage DRS Operations – Load Balancing (2 of 4) § Load balancing – SDRS triggers on space usage and/or latency thresholds. § The algorithm makes migration recommendations when I/O response time and/or space utilization thresholds have been exceeded. •  Space utilization statistics are constantly gathered by vCenter; default threshold 80%. •  The I/O load trend is currently evaluated every 8 hours based on the past day’s history; default threshold 15ms. § Load balancing is based on I/O workload and space, which ensures that no datastore exceeds the configured thresholds. § Storage DRS will do a cost/benefit analysis! § For I/O load balancing, Storage DRS leverages Storage I/O Control functionality.
  • 85. Storage DRS Operations – Thresholds (3 of 4)
 • 86. Storage DRS Operations – Datastore Maintenance Mode § Datastore Maintenance Mode •  Evacuates all VMs and VMDKs from the selected datastore. •  Note that this action will not move VM Templates. •  Currently, SDRS only handles registered VMs. (Diagram: placing VOL1 of a 2TB datastore cluster, VOL1 through VOL4, into maintenance mode.)
 • 87. Storage DRS Operations (4 of 4) § VMDK affinity – Keep a Virtual Machine’s VMDKs together on the same datastore. Maximizes VM availability when all disks are needed in order to run. On by default for all VMs. § VMDK anti-affinity – Keep a VM’s VMDKs on different datastores. Useful for separating log and data disks of database VMs. Can select all or a subset of a VM’s disks. § VM anti-affinity – Keep VMs on different datastores. Similar to DRS anti-affinity rules. Maximizes availability of a set of redundant VMs.
 • 88. SDRS Scheduling § SDRS allows you to create a schedule to change its settings. This can be useful for scenarios where you don’t want VMs to migrate between datastores, or where I/O latency temporarily rises and would trigger false positives, e.g. during VM backups.
  • 89. So What Does It Look Like? Provisioning…
  • 90. So What Does It Look Like? Load Balancing. § It will show “utilization before” and “after.” § There’s always the option to override the recommendations.
 • 91. vSphere 5.0 – vStorage • VMFS 5.0 • vStorage API for Array Integration • Storage vMotion • Storage I/O Control • Storage DRS • VMware API for Storage Awareness • Profile Driven Storage • FCoE – Fibre Channel over Ethernet
 • 92. What Is vStorage APIs for Storage Awareness (VASA)? § VASA is an extension of the vSphere Storage APIs: vCenter-based extensions that allow storage arrays to integrate with vCenter for management functionality via server-side plug-ins or Vendor Providers. § This in turn allows a vCenter administrator to be aware of the topology, capabilities, and state of the physical storage devices available to the cluster. § VASA enables several features. •  For example, it delivers system-defined (array-defined) capabilities that enable Profile-Driven Storage. •  Another example: it provides array-internal information that helps several Storage DRS use cases work optimally with various arrays.
 • 93. Storage Compliancy § Once the VASA Provider has been successfully added to vCenter, the VM Storage Profiles should also display the storage capabilities surfaced by the Vendor Provider. § The above example contains a ‘mock-up’ of some possible Storage Capabilities as displayed in the VM Storage Profiles. These are retrieved from the Vendor Provider.
 • 94. vSphere 5.0 – vStorage • VMFS 5.0 • vStorage API for Array Integration • Storage vMotion • Storage I/O Control • Storage DRS • VMware API for Storage Awareness • Profile Driven Storage • FCoE – Fibre Channel over Ethernet
 • 95. Why Profile Driven Storage? (1 of 2) § Problem Statement 1.  Difficult to manage datastores at scale •  Including: capacity planning, differentiated data services for each datastore, maintaining capacity headroom, etc. 2.  Difficult to correctly match VM SLA requirements to available storage •  Because: manually choosing between many datastores and storage tiers •  Because: VM requirements are not accurately known, or may change over the VM’s lifecycle § Related trends •  Newly virtualized Tier-1 workloads need stricter VM storage SLA promises •  Because: other VMs can impact the performance SLA •  Scale-out storage mixes VMs with different SLAs on the same storage
  • 96. Why Profile Driven Storage? (2 of 2) Save OPEX by reducing repetitive planning and effort! § Minimize per-VM (or per VM request) “thinking” or planning for storage placement. •  Admin needs to plan for optimal space and I/O balancing for each VM. •  Admin needs to identify VM storage requirements and match to physical storage properties. § Increase probability of “correct” storage placement and use (minimize need for troubleshooting, minimize time for troubleshooting). •  Admin needs more insight into storage characteristics. •  Admin needs ability to custom-tag available storage. •  Admin needs easy means to identify incorrect VM storage placement (e.g. on incorrect datastore).
 • 97. Save OPEX by Reducing Repetitive Planning and Effort! § Today: identify requirements → find the optimal datastore → create VM → periodically check compliance. § Storage DRS: initial setup (identify storage characteristics, group datastores) → identify requirements → create VM → periodically check compliance. § Storage DRS + Profile Driven Storage: initial setup (discover storage characteristics, group datastores) → create VM and select a VM Storage Profile.
 • 98. Storage Capabilities & VM Storage Profiles (Diagram: storage capabilities are surfaced by VASA or user-defined; a VM Storage Profile references those capabilities; the profile is associated with a VM, and each VM is then reported as Compliant or Not Compliant.)
 • 99. Selecting a Storage Profile During Provisioning § By selecting a VM Storage Profile, datastores are now split into Compatible and Incompatible. § The Celerra_NFS datastore is the only datastore which meets the GOLD Profile requirements – i.e. it is the only datastore that has our user-defined storage capability associated with it.
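The same compatible/incompatible split later became scriptable through the Spbm cmdlets of newer PowerCLI releases; a sketch assuming those cmdlets are available (the profile name is illustrative):

    # List the datastores considered compatible with the "Gold" profile.
    $gold = Get-SpbmStoragePolicy -Name "Gold"
    Get-SpbmCompatibleStorage -StoragePolicy $gold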
  • 100. VM Storage Profile Compliance § Policy Compliance is visible from the Virtual Machine Summary tab.
 • 101. vSphere 5.0 – vStorage • VMFS 5.0 • vStorage API for Array Integration • Storage vMotion • Storage I/O Control • Storage DRS • VMware API for Storage Awareness • Profile Driven Storage • FCoE – Fibre Channel over Ethernet
 • 102. Introduction § Fibre Channel over Ethernet (FCoE) is an enhancement that extends Fibre Channel onto Ethernet networks by combining two leading-edge technologies (FC and Ethernet). § The FCoE adapters that VMware supports generally fall into two categories: hardware FCoE adapters, and software FCoE adapters which use an FCoE-capable NIC. •  Hardware FCoE adapters were supported as of vSphere 4.0. § The FCoE-capable NICs are referred to as Converged Network Adapters (CNAs), which carry both network and storage traffic. § ESXi 5.0 uses FCoE adapters to access Fibre Channel storage.
 • 103. Software FCoE Adapters (1 of 2) § A software FCoE adapter is a software component that performs some of the FCoE processing. § This adapter can be used with a number of NICs that support partial FCoE offload. § Unlike the hardware FCoE adapter, the software adapter needs to be activated, similar to Software iSCSI.
  • 104. Software FCoE Adapters (2 of 2) § Once the Software FCoE is enabled, a new adapter is created, and discovery of devices can now take place.
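Activation is also exposed through esxcli, reachable from PowerCLI via Get-EsxCli; a hedged sketch (host and NIC names are illustrative, and the generated method signatures can vary by PowerCLI build):

    # Equivalent to 'esxcli fcoe nic list' and
    # 'esxcli fcoe nic discover -n vmnic2' on the host.
    $esxcli = Get-EsxCli -VMHost (Get-VMHost "esx01.example.com")
    $esxcli.fcoe.nic.list()               # show FCoE-capable NICs
    $esxcli.fcoe.nic.discover("vmnic2")   # activate software FCoE on vmnic2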
 • 105. Conclusion § vSphere 5.0 has many new compelling storage features. § VMFS volumes can be larger than ever before. •  They can contain many more virtual machines due to VAAI enhancements and architectural changes. § Storage DRS and Profile Driven Storage will help solve traditional problems with virtual machine provisioning. § The administrative overhead will be significantly reduced: •  VASA surfacing storage characteristics. •  Creating profiles through Profile Driven Storage. •  Combining multiple datastores into a large aggregate.
 • 107. Introduction (1 of 3) § In vSphere 5.0, VMware releases a new storage appliance called VSA. •  VSA is an acronym for “vSphere Storage Appliance.” •  This appliance is aimed at our SMB (Small-Medium Business) customers who may not be in a position to purchase a SAN or NAS array for their virtual infrastructure, and therefore do not have shared storage. •  It is the SMB market that we wish to go after with this product; our aim is to move these customers from Essentials to Essentials+. •  Without access to a SAN or NAS array, these SMB customers are excluded from many of the top features available in a VMware Virtual Infrastructure, such as vSphere HA and vMotion. •  Customers who decide to deploy a VSA can now benefit from many additional vSphere features without having to purchase a SAN or NAS device to provide them with shared storage.
 • 108. Introduction (2 of 3) (Diagram: several ESXi hosts, each running a VSA appliance managed by the VSA Manager in the vSphere Client, together exporting NFS volumes.) § Each ESXi server has a VSA deployed to it as a Virtual Machine. § The appliances use the available space on the local disk(s) of the ESXi servers to present one replicated NFS volume per ESXi server. This replication of storage makes the VSA very resilient to failures.
 • 109. Introduction (3 of 3) § The NFS datastores exported from the VSA can now be used as shared storage on all of the ESXi servers in the same datacenter. § The VSA creates shared storage out of local storage for use by a specific set of hosts. § This means that vSphere HA and vMotion can now be made available on low-end (SMB) configurations, without external SAN or NAS servers. § There is a CAPEX saving achieved by SMB customers, as there is no longer a need to purchase a dedicated SAN or NAS device to achieve shared storage. § There is also an OPEX saving, as the management of the VSA may be done by the vSphere Administrator; there is no need for dedicated SAN skills to manage the appliances.
 • 110. Supported VSA Configurations § The vSphere Storage Appliance can be deployed in two configurations: •  2 x ESXi 5.0 servers configuration •  Deploys 2 vSphere Storage Appliances, one per ESXi server, plus a VSA Cluster Service on the vCenter server •  3 x ESXi 5.0 servers configuration •  Deploys 3 vSphere Storage Appliances, one per ESXi server •  Each of the servers must contain a new/vanilla install of ESXi 5.0. •  During the configuration, the user selects a datacenter. The user is then presented with a list of ESXi servers in that datacenter. •  The installer will check the compatibility of each of these physical hosts to make sure they are suitable for VSA deployment. •  The user must then select which compatible ESXi servers should participate in the VSA cluster, i.e. which servers will host VSA nodes. •  It then ‘creates’ the storage cluster by aggregating and virtualizing each server’s local storage to present a logical pool of shared storage.
 • 111. Two Member VSA (Diagram: the vCenter Server runs the VSA Manager and the VSA Cluster Service; each of the two VSA nodes exports one NFS datastore, Datastore 1 from Volume 1 and Datastore 2 from Volume 2, and each node also holds a replica of the other node’s volume.)
 • 112. Three Member VSA (Diagram: the vCenter Server runs the VSA Manager; each of the three VSA nodes exports one NFS datastore, Volumes 1–3 backing Datastores 1–3, and each node also holds a replica of another node’s volume.)
  • 113. VSA Manager § The VSA Manager helps an administrator perform the following tasks: •  Deploy vSphere Storage Appliance instances onto ESXi hosts to create a VSA cluster •  Automatically mount the NFS volumes that each vSphere Storage Appliance exports as datastores to the ESXi hosts •  Monitor, maintain, and troubleshoot a VSA cluster
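The VSA Manager performs these mounts automatically; the equivalent manual operation is an ordinary NFS mount, sketched here with illustrative names and addresses:

    # Mount the NFS volume exported by a VSA node as a datastore.
    Get-VMHost "esx01.example.com" |
        New-Datastore -Nfs -Name "VSADs-1" -NfsHost "10.0.0.101" -Path "/exports/VSADs-1"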
  • 114. Resilience § Many storage arrays are a single point of failure (SPOF) in customer environments. § VSA is very resilient to failures. § If a node fails in the VSA cluster, another node will seamlessly take over the role of presenting its NFS datastore. § The NFS datastore that was being presented from the failed node will now be presented from the node that holds its replica (mirror copy). § The new node will use the same NFS server IP address that the failed node was using for presentation, so that any VMs that reside on that NFS datastore will not be affected by the failover.
  • 115. What’s New in VMware vCenter Site Recovery Manager v5.0 – Technical
 • 116. vCenter Site Recovery Manager Ensures Simple, Reliable DR § Site Recovery Manager complements vSphere to provide the simplest and most reliable disaster protection and site migration for all applications. § Provides cost-efficient replication of applications to the failover site •  Built-in vSphere Replication •  Broad support for storage-based replication § Simplifies management of recovery and migration plans •  Replace manual runbooks with centralized recovery plans •  From weeks to minutes to set up a new plan § Automates failover and migration processes for reliable recovery •  Enable frequent non-disruptive testing •  Ensure fast, automated failover •  Automate failback processes
 • 117. SRM Provides Broad Choice of Replication Options (Diagram: protected and recovery sites, each with a vCenter Server and Site Recovery Manager; VMs replicate between the sites either via vSphere Replication or via storage-based replication.) § vSphere Replication: simple, cost-efficient replication for Tier 2 applications and smaller sites. § Storage-based replication: high-performance replication for business-critical applications in larger sites.
 • 118. SRM of Today’s High-Level Architecture (Diagram: the “Protected” and “Recovery” sites each run the vSphere Client with the SRM plug-in, a vCenter Server, and an SRM Server with a Storage Replication Adapter (SRA); array replication software replicates VMFS volumes across the SAN between the sites’ ESX hosts.)
 • 119. Technology – vSphere Replication § Adding native replication to SRM •  Virtual machines can be replicated regardless of what storage they live on •  Enables replication between heterogeneous datastores •  Replication is managed as a property of a virtual machine •  Efficient replication minimizes impact on VM workloads •  Provides a guest-level copy of the VM’s data, rather than a copy of the VM itself
 • 120. vSphere Replication Details § Replication granularity per Virtual Machine •  Can opt to replicate all or a subset of the VM’s disks •  You can create the initial copy in any way you want, even via sneaker-net! •  You have the option to place the replicated disks where you want. •  Disks are replicated in a group-consistent manner § Simplified replication management •  User selects the destination location for target disks •  User selects the Recovery Point Objective (RPO) •  User can supply the initial copy to save on bandwidth § Replication specifics •  Changes on the source disks are tracked by ESX •  Deltas are sent to the remote site •  Does not use VMware snapshots
 • 121. Replication UI § Select VMs to replicate from within the vSphere Client via right-click options. § You can do this on one VM, or on multiple VMs at the same time!
 • 122. vSphere Replication 1.0 Limitations § Focus on virtual disks of powered-on VMs. •  ISOs and floppy images are not replicated. •  Powered-off/suspended VMs are not replicated. •  Non-critical files are not replicated (e.g. logs, stats, swap, dumps). § vSR works at the virtual device layer. •  Independent of disk format specifics. •  Independent of primary-side snapshots. •  Snapshots work with vSR: the snapshot is replicated, but the VM is recovered with the snapshots collapsed. •  Physical RDMs are not supported. § FT, linked clones and VM templates are not supported with HBR. § Automated failback of vSR-protected VMs will not ship initially, but will be supported in the future. § Virtual Hardware 7, or later, in the VM is required.
 • 123. SRM Architecture with vSphere Replication (Diagram: as before, each site runs the vSphere Client with the SRM plug-in, a vCenter Server and an SRM Server; a vSphere Replication Management Server (vRMS) sits alongside each vCenter Server, a vSphere Replication Server (vRS) runs at the recovery site, and vSphere Replication Agents (vRA) on the protected ESX hosts send changed blocks to the recovery site’s storage.)
 • 124. SRM Scalability (maximums; none of these limits is enforced) •  Protected virtual machines, total: 3,000 •  Protected virtual machines in a single protection group: 500 •  Protection groups: 250 •  Simultaneously running recovery plans: 30 •  vSphere Replication protected virtual machines: 500
 • 125. Workflow § Currently we have two workflows: DR event failover, and test.
 • 126. Planned Migration § New is Planned Migration: it will shut down the protected VMs and then synchronize them! Planned migration ensures application consistency and no data loss during migration. •  Graceful shutdown of production VMs in an application-consistent state •  Data sync to complete replication of the VMs •  Recover the fully replicated VMs
 • 127. Failback § Description: •  “Single button” to failback all recovered VMs •  Interfaces with storage to automatically reverse replication •  Replays existing recovery plans, so new virtual machines are not part of failback § Benefits: •  Facilitates DR operations for enterprises that are mandated to perform a true failover as part of DR testing •  Simplifies the recovery process after a disaster (Diagram: reverse replication flowing from Site B (Recovery) back to Site A (Primary).)
 • 128. Failback § To failback, you first need to do a planned migration, followed by a reprotect. Then, to do the actual failback, you run a recovery. § Below is a successful recovery of a planned migration.
  • 129. Failback (continued) § Reprotect is now almost complete . . .
  • 130. Failback (continued) § Replication now goes in reverse – to the protected side.
  • 131. Failback (continued) § Now we are ready to failover to our original side – the protected site!
  • 133. Dependencies § There is more functionality to help manage multitier applications.
 • 135. Dependencies (continued) – VM Startup Order (Diagram: VMs such as the master database, dependent databases, Exchange, mail sync, app servers, Apache web servers and desktops arranged into startup-order Groups 1–5, so that dependencies start before the services that rely on them.)