Storage Changes in VMware vSphere 4.1
A quick review of changes and new features
Scott Lowe, VCDX 39
vSpecialist, EMC Corporation
Author, Mastering VMware vSphere 4
Blogger, http://blog.scottlowe.org
Storage Feature Summary
8 Gbps FC HBAs and expanded support for FCoE CNAs
New storage performance statistics
Storage I/O Control
vStorage APIs for Array Integration
iSCSI enhancements
New Storage Performance Statistics
More comprehensive performance statistics are available in both vCenter and esxtop/resxtop:
Historical and real-time via the vCenter GUI
Real-time via esxtop/resxtop for drill-down and scripting
Latency and throughput statistics are available for:
Datastore per host
Storage adapter and path per host
Datastore per VM
VMDK per VM
Greater feature parity across all storage protocols
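The real-time counters above can also be captured non-interactively for scripting. A minimal sketch, assuming a host named esx01 for the remote variant (the hostname, sample interval, and iteration count are illustrative):

```shell
# Locally on the ESX host: batch mode, 2-second samples, 30 iterations
esxtop -b -d 2 -n 30 > storage-stats.csv

# Remotely via resxtop (from vCLI/vMA), same batch options
resxtop --server esx01 -b -d 2 -n 30 > storage-stats.csv
```

The resulting CSV can then be post-processed with any tool that understands Windows Performance Monitor format.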
Storage I/O Control
The I/O Sharing Problem
[Figure: side-by-side views of a shared datastore, "What You See Now" vs. "What You Should See", with Microsoft Exchange, online store, and data mining VMs competing for I/O]
Solution: Storage I/O Control (SIOC)
[Figure: Microsoft Exchange, online store, and data mining VMs on a 32 GHz / 16 GB resource pool sharing Datastore A; each VM has CPU, memory, and now I/O shares — high for Exchange and the online store, low for data mining]
Setting I/O Controls
Enabling Storage I/O Control
[Screenshot: datastore properties dialog with Storage I/O Control set to Enabled]
Viewing Configuration Settings
Allocate I/O Resources
Shares translate into ESX I/O queue slots
VMs with more shares are allowed to send more I/Os at a time
Slot assignment is dynamic, based on VM shares and current load
The total number of slots available is dynamic, based on the level of congestion
[Figure: data mining, Microsoft Exchange, and online store VMs with I/Os in flight to the storage array]
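The proportional-share arithmetic behind slot assignment can be sketched in a few lines. This is an illustrative model, not VMware's implementation; the VM names, share values, and 32-slot pool are invented:

```python
def allocate_slots(shares: dict, total_slots: int) -> dict:
    """Split a pool of device-queue slots among VMs in proportion to
    their I/O shares; leftover slots go to the highest-share VMs."""
    total_shares = sum(shares.values())
    # Integer proportional split
    alloc = {vm: (s * total_slots) // total_shares for vm, s in shares.items()}
    # Hand out any remainder to the VMs with the largest shares
    leftover = total_slots - sum(alloc.values())
    for vm in sorted(shares, key=shares.get, reverse=True)[:leftover]:
        alloc[vm] += 1
    return alloc

# Exchange and the online store get high shares; data mining gets low shares
slots = allocate_slots({"exchange": 1000, "online-store": 1000, "data-mining": 250}, 32)
```

With these numbers, the data mining VM ends up with only a handful of the 32 slots, matching the behavior the slide describes.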
Congestion Triggers SIOC
Congestion signal: ESX-to-array response time above a threshold
Default threshold: 30 ms
Different defaults should be set for SSD and SATA datastores
Changing the default threshold (not usually recommended):
For a low-latency goal, set it lower if latency is critical for some VMs
For a high-throughput goal, set it close to the IOPS maximization point
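The "sustained latency, not a spike" behavior (see the editor's notes) can be illustrated with a small weighted-average detector. Only the 30 ms default comes from the slide; the smoothing factor and samples are invented for illustration:

```python
class CongestionDetector:
    """Toy model: throttle only when a smoothed latency exceeds the threshold."""
    def __init__(self, threshold_ms: float = 30.0, alpha: float = 0.2):
        self.threshold_ms = threshold_ms
        self.alpha = alpha          # weight given to the newest sample
        self.avg_ms = 0.0           # exponentially weighted average latency

    def observe(self, latency_ms: float) -> bool:
        """Feed one latency sample; return True when throttling should engage."""
        self.avg_ms = self.alpha * latency_ms + (1 - self.alpha) * self.avg_ms
        return self.avg_ms > self.threshold_ms

det = CongestionDetector()
spike = det.observe(100.0)      # a single spike does not trip the detector
for _ in range(10):
    tripped = det.observe(60.0)  # sustained high latency does
```

A single 100 ms spike leaves the average at 20 ms (below threshold), while ten consecutive 60 ms samples push it well past 30 ms.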
vStorage APIs for Array Integration
Storage Integration Points
[Diagram: the ESX storage stack (VMFS, NFS client, VMware LVM, datamover, NMP, network stack, HBA/NIC drivers, FC/FCoE, iSCSI, NFS) and its integration points with the VI client, vCenter, SRM, and the storage array, including:]
vStorage API for Data Protection (VDDK), cooperating with VSS via VMware Tools
vStorage API for Multipathing (NMP)
vStorage API for SRM (vendor-specific array APIs/management)
VAAI SCSI commands to the array (vendor-specific VAAI SCSI command support; future NFS offloads)
Vendor-specific plug-ins, e.g. EMC Virtual Storage Integrator, to view VMware-to-storage relationships, provision datastores more easily, and leverage array features (compression/dedupe; file, filesystem, and LUN snapshots)
Current VAAI Primitives
Hardware-Accelerated Locking = 10-100x better metadata scaling
Replaces LUN locking with extent-based locks for better granularity
Reduces the number of lock operations required by using one efficient SCSI command to perform the pre-lock, lock, and post-lock operations
Increases locking efficiency by an order of magnitude
Hardware-Accelerated Zero = 2-10x fewer I/O operations
Eliminates redundant and repetitive host-based write commands with optimized internal array commands
Hardware-Accelerated Copy = 2-10x better data movement
Leverages the array's native copy capability to move blocks
Hardware-Accelerated Locking
Without the API:
The host reserves the complete LUN in order to obtain a lock
Requires several SCSI commands
LUN-level locks affect adjacent hosts
With the API:
Locks occur at the block level
One efficient SCSI command: SCSI Compare and Swap (CAS)
Block-level locks have no effect on adjacent hosts
Use cases:
Bigger clusters with more VMs
View, Lab Manager, Project Redwood
More and faster VM snapshotting
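The CAS primitive is easy to model: atomically install a new lock owner only if the current value still matches what the host last read. A sketch of the semantics (the lock-table layout and host names are illustrative, not the on-disk VMFS format):

```python
import threading
from typing import Optional

class BlockLockTable:
    """Toy model of block-granular compare-and-swap locking."""
    def __init__(self) -> None:
        self._locks = {}                 # block number -> owning host, if any
        self._mutex = threading.Lock()   # stands in for the array's atomicity

    def compare_and_swap(self, block: int, expect: Optional[str], new: str) -> bool:
        """Install `new` as owner of `block` iff the current owner matches
        `expect` -- one atomic step, no LUN-wide reservation."""
        with self._mutex:
            if self._locks.get(block) != expect:
                return False             # another host won the race
            self._locks[block] = new
            return True

table = BlockLockTable()
ok1 = table.compare_and_swap(42, None, "host-a")  # host-a claims block 42
ok2 = table.compare_and_swap(42, None, "host-b")  # host-b loses: owner changed
ok3 = table.compare_and_swap(7, None, "host-b")   # other blocks are unaffected
```

The key property is the last line: because locks are per-block, host-b's work on block 7 proceeds even while block 42 is locked, which is exactly why adjacent hosts are no longer affected.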
Parameter for Hardware-Accelerated Locking Q: Can this be changed without a reboot? A: Yes!
Hardware-Accelerated Zero
Without the API:
SCSI WRITE: many identical small blocks of zeroes are moved from host to array, for many VMware I/O operations
Extra zeroes can be removed by EMC arrays after the fact via "zero reclaim"
New guest I/O to the VMDK is "pre-zeroed"
With the API:
SCSI WRITE SAME: one block of zeroes is moved from host to array and repeatedly written there
A thin-provisioned array skips the zeroes completely (before "zero reclaim")
Use cases:
Reduced I/O when writing to new blocks in the VMDK for any VM
Faster VM creation (particularly for FT-enabled VMs)
[Figure: repeated pairs of SCSI WRITE (zeroes) and SCSI WRITE (data) to the VMDK vs. a single SCSI WRITE SAME followed by SCSI WRITE (data)]
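The reduction in host-issued commands is easy to quantify with a back-of-the-envelope model; the 1 GiB region and 64 KiB transfer size below are illustrative, not from the slides:

```python
def zero_commands(region_bytes: int, io_size: int, write_same: bool) -> int:
    """How many SCSI commands the host issues to zero a region."""
    if write_same:
        return 1                          # one WRITE SAME; the array repeats it
    return -(-region_bytes // io_size)    # ceiling division: one WRITE per chunk

# Zeroing 1 GiB in 64 KiB writes vs. a single WRITE SAME
without_api = zero_commands(1 << 30, 64 * 1024, write_same=False)
with_api = zero_commands(1 << 30, 64 * 1024, write_same=True)
```

Here 16,384 host writes collapse into one command, and on a thin-provisioned array even that single command allocates nothing.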
Parameter for Hardware-Accelerated Zero Q: Can this be changed without a reboot? A: Yes!
Hardware-Accelerated Copy
Without the API:
SCSI READ (data moved from array to host), then SCSI WRITE (data moved from host to array), repeated
Long periods of large VMFS-level I/O, done via millions of small block operations
With the API:
SCSI EXTENDED COPY (data moved within the array), repeated
Order-of-magnitude reduction in I/O operations
Order-of-magnitude reduction in array IOPS
Use cases:
Storage VMotion
VM creation from template
[Figure: "Let's Storage VMotion" / "Give me a VM clone/deploy from template" — many SCSI READ and SCSI WRITE commands vs. a single SCSI EXTENDED COPY]
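The fabric-traffic difference can be modeled the same way; the 40 GiB VMDK size is illustrative:

```python
def svmotion_fabric_bytes(vmdk_bytes: int, xcopy: bool) -> int:
    """Bytes crossing the host<->array fabric to move one VMDK."""
    if xcopy:
        return 0                  # EXTENDED COPY: data moves inside the array
    return 2 * vmdk_bytes         # each block is read to the host, then written back

without_api = svmotion_fabric_bytes(40 << 30, xcopy=False)  # 80 GiB on the wire
with_api = svmotion_fabric_bytes(40 << 30, xcopy=True)      # only copy descriptors
```

The host still sends EXTENDED COPY descriptors, but the data payload never leaves the array, which is where the order-of-magnitude reductions come from.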
Parameter for Hardware-Accelerated Copy Q: Can this be changed without a reboot? A: Yes!
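All three VAAI primitives are controlled by host advanced settings that, as the slides note, can be changed without a reboot. A sketch using the option names I believe vSphere 4.1 uses (verify against your build before relying on them):

```shell
# Query the current values (1 = enabled, 0 = disabled)
esxcfg-advcfg -g /VMFS3/HardwareAcceleratedLocking
esxcfg-advcfg -g /DataMover/HardwareAcceleratedInit   # hardware-accelerated zero
esxcfg-advcfg -g /DataMover/HardwareAcceleratedMove   # hardware-accelerated copy

# Disable hardware-accelerated copy, then re-enable it; takes effect immediately
esxcfg-advcfg -s 0 /DataMover/HardwareAcceleratedMove
esxcfg-advcfg -s 1 /DataMover/HardwareAcceleratedMove
```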
iSCSI Enhancements
iSCSI Enhancements in vSphere 4.1
Boot from software iSCSI (iBFT)
iSCSI offloading
iSCSI session management
Additions have also been made to the CLI
Steps in Booting from Software iSCSI
1. Ensure that the NIC you wish to use supports iSCSI boot.
2. Ensure that the NIC has a supported firmware version.
3. Ensure that the iSCSI boot option is selected in the host BIOS for the NIC.
4. Configure the iSCSI boot parameters in the NIC BIOS.
5. Configure the iSCSI target to allow initiator (NIC) access.
6. Configure a LUN on the iSCSI target and present it to the initiator.
Q&A
Editor's notes

  1. The problem Storage I/O Control addresses is the situation where less important workloads take the majority of I/O bandwidth away from more important applications. In the case of the three applications shown here, data mining is hogging a majority of the storage I/O resource, and the two applications more important to business operations are getting less performance than they need. What one wants to see instead is a distribution of I/O aligned with the importance of each virtual machine, where the most business-critical applications get the I/O bandwidth they need to be responsive and the less critical data mining application takes less.
  2. I/O shares can be set at the virtual machine level, and although this capability has existed for a few previous releases, it was not enforced at a VMware cluster-wide level until release 4.1. Prior to 4.1, I/O shares and limits were enforced only among the virtual disks of a single VM or among the VMs on a single ESX server. With 4.1, these I/O shares are used to distribute I/O bandwidth across all the ESX servers that have access to the shared datastore.
  3. The ability to set shares for I/O is done via edit properties on the virtual machine. This screen shows two virtual disks and the ability to set priority and limits on the I/Os per second.
  4. Once shares are set on the virtual machines in a VMware cluster, one also needs to enable the "Storage I/O Control" option on the properties screen for each datastore on which Storage I/O Control should operate. The other requirement for Storage I/O Control to kick in is that congestion, measured in the form of latency, must exist on the datastore for a period of time before throttling begins. The example that comes to mind is a carpool lane: it is not typically enforced when there is little traffic on the highway, since it would be of limited value if you could travel at the same speed in the regular lanes. In much the same way, Storage I/O Control is not put into action while latency stays below a sustained value of 30 msec.
  5. One can then observe which VMs have which shares and limits set via the Virtual Machines tab for the datastore. As datastores are now objects managed by vCenter, there are several new views in vSphere that let you see which ESX servers are connected to a datastore and which VMs are sharing it. Many of these views also allow you to customize which columns are displayed and to create specific views to report on usage.
  6. The way these I/O shares are used to affect performance is that the queue depth for each ESX server is assigned and throttled to align with the shares assigned to each VM running on the collective pool of storage. In the case of the three VMs shown earlier, the data mining VM gets the smallest number of queue slots while the other two VMs get many more slots for their I/O.
  7. It is important to understand that SIOC does not kick in until congestion on the datastore stays above the 30 ms threshold for a period of time. A weighted average is used to determine that the latency is not just a minor spike that comes and goes quickly. This threshold value can be modified, but only with great care and consideration: if it is set too low, throttling may engage and disengage frequently; if too high, it might never kick in at all.