vStorage: Troubleshooting Performance Nathan Small, Staff Engineer, Global Support Services
Agenda vCenter Performance Charts & ESXTop Disk Alignment Hard Disks/Storage Array SCSI Reservations APD (All Paths Down) Issue Multipathing Considerations Communication
vCenter Performance Charts & ESXTop
Performance Charts Through the Overview section of the Performance tab of an ESX host, we can observe if a host is seeing recent disk latency issues. Unfortunately, this chart does not pinpoint which device or path is seeing this latency. For more granularity, we will use the “Advanced” charts:
Performance Charts If we switch to the “Storage path” option, we can see latency is coming from vmhba0:C0:T0:L0 for both read and write latency:
Performance Charts In order to see the latency statistics for a given LUN, we will need to go to the “Disk” view and then select “Chart Options”: Select the LUNs you wish to see the latency statistics for under “Objects”, then select the Physical device read and write latency “Counters”
Performance Charts With this granular view, we can see that the LUN with ID naa.60060480000190300919533030303134 had a latency spike up to 48ms at ~9:50am
ESXTop ESXTop is a great tool to use if you need performance captured during a specific timeframe (scripted batch mode) or if the ESX host is unavailable in vCenter/VI Client. To view the equivalent of the “Storage path” view, hit ‘d’:
ESXTop You can toggle the display fields by hitting ‘f’. By taking out unneeded fields (I/O stats), you can make the display screen far easier to read and work with:
ESXTop To view individual LUN statistics (with unique ID), hit ‘u’: As we can see, the default field size for “DEVICE” is not long enough to show the entire unique identifier. To change this field size, hit ‘L’:
ESXTop Expanding this field now shows the entire unique ID for the LUN and NFS datastores: Just as before, we should display only the fields we need to be concerned with by hitting ‘f’:
ESXTop This view now shows us similar output to the ‘d’ view except we see the statistics for a LUN (all paths).  *Note: The reason there are no statistics for the NFS mounts is that these are ISO and template repositories that are not currently in use by this host.
ESXTop To capture esxtop data, use ‘vm-support -S’. Here is an example for collecting these statistics at 2 second intervals over 20 seconds: # vm-support -S -i 2 -d 20
ESXTop To replay that captured data, extract the compressed file, change to the ‘snapshots’ directory, run the uncompress script, then run ‘esxtop’ in replay mode ‘-R’:
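As a rough sketch of the capture-and-replay workflow (the bundle, directory, and script names below are placeholders that vary by host name, date, and release):
# esxtop -b -d 2 -n 10 > /tmp/esxtop-capture.csv        (alternative: standalone esxtop batch capture to CSV)
# tar -xzf esx-<hostname>-<date>.tgz                    (extract the vm-support bundle)
# cd <extracted-bundle>/snapshots && ./<uncompress-script>
# esxtop -R /path/to/<extracted-bundle>                 (replay the captured data)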
DAVG/KAVG/GAVG/QAVG What is the DAVG? The DAVG field is the Device Average, which is the amount of time it takes for a SCSI command to leave the HBA, hit the array, and return completed. This is the most effective method of determining whether the performance issue being experienced is at the physical layer (switch/array). What are acceptable DAVG values? Optimal, sustained DAVG latency values would be between 0-5ms for 4/8Gb FC and 10Gb iSCSI/NFS. Seeing 5-10ms latency for 1/2Gb FC or 1Gb iSCSI/NFS is also acceptable. Latency values of 10-20ms could start to show some minor performance issues inside the VM, while values of 20-50ms would show noticeable to significant performance issues. Latency values of 50ms or higher would make the VMs almost unusable. How should I proceed when my DAVG is sustained at a high level? Engage the storage team immediately. They may already be aware of the latency due to increased load on the storage array (LUN replication, backups)
DAVG/KAVG/GAVG/QAVG What is the KAVG? The KAVG field is the Kernel Average, which is the amount of time a SCSI command spends in the vmkernel.  What are acceptable KAVG values? The KAVG should always be less than 1ms. How should I proceed when my KAVG is sustained at a high level? While it is rare to see a high KAVG value at all, it has been observed when the queue has been throttled back on the ESX host due to a TASK_SET_FULL condition on the array (KB 1008113), or when the queue depth has been reduced on the HBAs due to configuration restrictions (KB 1006001). It has also been observed temporarily on path failover; this is due to commands queuing during the path activation process.
DAVG/KAVG/GAVG/QAVG What is the GAVG? The GAVG field is the Device Average + Kernel Average + Queue Average, which is the latency perceived by the Guest OS or VM. What are acceptable GAVG values? As the KAVG and QAVG should always be less than 1ms, acceptable GAVG latency values would be the same as acceptable DAVG values. How should I proceed when my GAVG is sustained at a high level? The same steps should be followed for high DAVG values.
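As an illustrative worked example (made-up numbers, following the definitions above): if GAVG = 32ms with DAVG = 31ms and KAVG/QAVG under 1ms, the latency is accumulating at the fabric/array and the storage team should be engaged; if instead GAVG = 32ms with DAVG = 4ms and KAVG = 28ms, the time is being spent queued on the host side and the queue throttling conditions described for KAVG should be investigated.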
DAVG/KAVG/GAVG/QAVG What is the QAVG? The QAVG field is the Queue Average, which is the amount of time the SCSI command spends in the HBA driver. What are acceptable QAVG values? The QAVG should always be less than 1ms. How should I proceed when my QAVG is sustained at a high level? Just like the KAVG, the QAVG should not have a high latency value unless there is a legitimate reason for commands to remain queued on the host side.
Queue Depth and ESXTop Many people believe that increasing the queue depth will solve a performance issue or improve performance overall, however this can have the opposite effect (KB 1006001 & 1008113).  The queue depth should only be increased if the array vendor recommends doing so or the queue depth is being exhausted on the host side. As seen in a previous slide, we can observe the queue depth usage and activity:
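If the vendor does recommend a change, the queue depth is set through the HBA driver module options and requires a reboot to take effect. A minimal sketch for ESX 4.x, assuming a QLogic FC HBA (the module name, parameter name, and value are examples only and must be confirmed against the vendor documentation and the VMware KB on changing HBA queue depth for your driver/release; Emulex uses an analogous lpfc parameter):
# esxcfg-module -s ql2xmaxqdepth=64 qla2xxx
# esxcfg-boot -b
Then reboot the host and verify the new depth in the esxtop device view (DQLEN field).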
Agenda vCenter Performance Charts & ESXTop Disk Alignment Hard Disks/Storage Array SCSI Reservations APD (All Paths Down) Issue Multipathing Considerations Communication
Disk Alignment
Disk Alignment The concept of Disk Alignment is simple, yet the performance hit for misaligned I/O can be troublesome for a single, high disk I/O VM (SQL, Exchange, etc) or for the entire virtualized infrastructure on a particular storage array. It has long been a best practice recommendation from high I/O application vendors to align the data drive for the application to achieve the best I/O possible, as this will avoid any partial read or write operation. In many cases, this data drive would be a separate LUN either on local storage or on a shared storage array.
Disk Alignment In a SAN environment, the smallest hardware unit used by a SAN storage array to build a LUN out of multiple physical disks is called a chunk or a stripe. To optimize I/O, chunks are usually much larger than sectors. Thus a SCSI I/O request that intends to read a sector in reality reads one chunk. On top of this, in a Windows environment NTFS is formatted in blocks ranging from 1MB to 8MB. The file system used by the guest operating system optimizes I/O by grouping sectors into so-called clusters (allocation units). Older operating systems on x86 architectures create partitions with a master boot record (MBR) that consumes 63 sectors. This is due to legacy BIOS code from the PC that used cylinder, head, and sector addressing instead of logical block addressing (LBA). Without LBA, the first track is reserved for the boot code, and the first partition starts at cylinder 0, head 1, and sector 1. This is LBA 63 and is therefore unaligned.
Disk Alignment An unaligned partition results in a track crossing and an additional I/O, incurring a penalty on latency and throughput. The additional I/O (especially if small) can impact system resources significantly on some host types. An aligned partition ensures that the single I/O is serviced by a single device, eliminating the additional I/O and resulting in overall performance improvement. An unaligned structure may cause many additional I/O operations when only one cluster is read by the guest operating system.
Disk Alignment Beginning with ESX 3.0.x, or rather VMFS3, any newly created VMFS LUN will be aligned to a starting sector of 128, which is 64k (128 * 512 = 65536). As most arrays perform reads/writes in 4k, or a value divisible by 4k, this means that the VMFS partition is aligned with the underlying storage. When the partition for the VM is aligned in the same manner, I/O is aligned across the board. I/O improvements on a properly aligned Windows NTFS volume in a VMDK on a SAN LUN:
Disk Alignment The result of aligning I/O can have significant, measurable performance improvements for high disk I/O VMs:
Disk Alignment and NFS Does Disk Alignment only apply to VMFS and not NFS?  While the NFS volume does not have an additional VMFS layer to align, the NFS datastore is aligned to a 4k divisible offset when created and the array controllers still operate in 4k read/write operations. As such, there is still a need to align the OS and data drives of VMs residing on an NFS volume.
Verifying Disk Alignment How can I tell if the disks inside my VMs are aligned? This is the default partition offset of 63: 63 * 512 = 32256
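A quick way to check from inside the guest, as a sketch (these are standard OS tools; output formatting varies by OS version):
Windows:  wmic partition get Name, StartingOffset     (an offset of 32256 means the partition starts at sector 63 and is unaligned; 65536 is aligned)
Linux:    fdisk -lu /dev/sda                          (a partition start sector of 63 is unaligned; 128 or 2048 is aligned)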
Aligning Disks To align newly created disks (data disks), simply use diskpart (Windows) or fdisk (Linux). For more information, see: http://www.vmware.com/pdf/esx3_partition_align.pdf and http://media.netapp.com/documents/tr-3747.pdf Aligning existing disks gets tricky since the file system and data have already been written. Simply changing the offset of these existing partitions with diskpart or fdisk will render them useless. vSphere/ESX currently has no way of aligning disks, however 3rd party vendors have created utilities to do this: NetApp provides a utility called ‘mbrscan’, which will scan your VMs’ disks to see if they are aligned, and ‘mbralign’, which will align the disks of a VM (powered off VMs only). Vizioncore makes a product called vOptimizer that will align a VM’s disks to a 64k offset. Alternatively, their vConverter 5.0 product will convert physical or virtual machines and in the process align the disk offset to 64k. Paragon also makes a utility called Paragon Alignment Tool (PAT). This also comes bundled with their flagship product.
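For a newly created, empty data disk, a minimal alignment sketch (the disk number and device below are examples only; this is destructive to any existing data on that disk):
Windows (diskpart):
  select disk 1
  create partition primary align=64
Linux (fdisk in sector mode):
  # fdisk -u /dev/sdb
  Create a new primary partition, then use the expert menu (‘x’ -> ‘b’) to set the starting sector of partition 1 to 128, and ‘w’ to write the table.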
Planning Ahead The best way to plan ahead with Disk Alignment is to create templates for every OS you wish to deploy and align those before installing the OS. This way you can deploy VMs that will already be aligned.
Agenda vCenter Performance Charts & ESXTop Disk Alignment Hard Disks/Storage Array SCSI Reservations APD (All Paths Down) Issue Multipathing Considerations Communication
Hard Disks/Storage Array
SCSI vs. SATA The argument of which hard drive technology to use is a well known discussion, however the answer on which drives to use ultimately depends on the application or need. SCSI drives are more expensive than SATA drives, however this is due to the design and performance achieved with this technology. SCSI uses a tagged command queuing implementation which allows many commands to be outstanding. This design provides a significant performance gain, as the drives/controllers are able to order these commands in the most optimal execution manner possible. SCSI drives also use a processor for executing commands and handling the interface while a separate processor handles the head positioning through servos. SCSI can push a maximum of 640MB/second.
SCSI vs. SATA In the early years, SATA used Tagged Command Queuing (TCQ). This queuing technology was designed to help bridge the gap between SCSI and PATA/SATA drives, however there was too much overhead and TCQ simply wasn’t efficient enough. TCQ is sometimes referred to as interrupt queuing. SATA 2.0 introduced Native Command Queuing (NCQ), which is very similar to the outstanding request queuing that SCSI uses. This technology drastically reduces the number of interrupts that TCQ uses, due to the integrated first party DMA engine as well as the intelligent ordering of commands and interrupting when it is most optimal to do so. Due to NCQ and the throughput increases with the SATA 2.0 standard, SATA can perform well but not to the same level as SCSI. SATA 3Gb/s can push a maximum of 300MB/s.
When To Use SCSI SCSI drives are by far the most expensive choice, however you get what you pay for: Performance. There are no limitations on SCSI drive usage, however budgets are rarely unlimited, so SCSI drives are usually purchased for production servers/applications (SQL, MS Exchange, etc) or tier 1/2 storage. Because of the throughput capabilities of SCSI, these drives will handle I/O bursts far better than SATA, which would rely heavily on read/write cache to weather the storm. SCSI drives are recommended for use with VMware View environments for linked clone and fully deployed desktops. More on this later.
When To Use SATA While SATA drives are far cheaper than SCSI drives and usually higher in disk capacity, they simply do not perform at the same level as SCSI drives.  Installing more write cache in the storage array can assist SATA drives through heavy I/O periods however that would make write cache the last line of defence to an I/O storm or increasing I/O load over time. It may only be a matter of time before write cache fills up and then begins the battle of destaging. If SATA disks must be used, a configuration of RAID 10 for those disks is strongly recommended. This will not guarantee availability in the heaviest of I/O storms, but it is the most optimal RAID configuration while maintaining redundancy.
Agenda vCenter Performance Charts & ESXTop Disk Alignment Hard Disks/Storage Array SCSI Reservations APD (All Paths Down) Issue Multipathing Considerations Communication
SCSI Reservations
What are SCSI Reservations? SCSI Reservations refer to the act of an initiator using SCSI-2 commands 0x16 (RESERVE) and 0x17 (RELEASE) to lock a LUN for a specific operation. While a reservation is placed on a LUN, if another initiator attempts to perform any command on the reserved LUN other than an INQUIRY, REQUEST SENSE, or a PREVENT/ALLOW MEDIUM REMOVAL command, the command shall be rejected with RESERVATION CONFLICT device status. A reservation may only be released by the initiator that placed the reservation. A release command sent by another initiator will be ignored. Any initiator can clear the reservation by issuing a “BUS DEVICE RESET” command or a hard “RESET”.
SCSI Reservations Since VMFS-3 is basically a clustered file system, allowing simultaneous access from multiple ESX servers, we need a way of preserving the integrity of the file system when more than one host is updating it. VMFS-3 implements a locking protocol to prevent VMware cluster-aware applications from powering on (or otherwise sharing) the contents of a given virtual disk on more than one host at any given time. Part of this locking protocol is based on the notion of on-disk locks that protect the metadata on the volume. An ESX/ESXi host interested in locked access to metadata must atomically check the owner and lock-state fields in the lock, and if the lock is free, acquire it by writing out its own ID and lock state into the owner and lock-state fields.
SCSI Reservations The lock manager currently uses SCSI reservations to check the lock information, establish itself as the owner by writing to the relevant lock fields on disk if the lock is free, and then releases the SCSI reservation. ESX uses SCSI-2 non-persistent reservations which implies that only a single host can reserve the LUN in question at any one time. A reboot of that host will clear the reservation on the LUN (non-persistent). ESX 4.0 introduces limited support for SCSI-3 persistent reservations to work with Microsoft Windows 2008 Clustering, however the LSISAS controller must be used for the VMs. Only Windows 2k8 is supported with this controller. VMFS lock operations are still implemented using SCSI-2 non-persistent reservations, even in ESX 4.x.
A typical SCSI Reservation Conflict message
vmkernel: 0:12:44:32.598 cpu4:1046)SCSI: vm 1046: 5509: Sync CR at 64
vmkernel: 0:12:44:33.520 cpu4:1046)SCSI: vm 1046: 5509: Sync CR at 48
vmkernel: 0:12:44:34.512 cpu4:1046)SCSI: vm 1046: 5509: Sync CR at 32
vmkernel: 0:12:44:35.482 cpu4:1046)SCSI: vm 1046: 5509: Sync CR at 16
vmkernel: 0:12:44:36.490 cpu2:1046)SCSI: vm 1046: 5509: Sync CR at 0
vmkernel: 0:12:44:36.490 cpu2:1046)WARNING: SCSI: 5519: Failing I/O due to too many reservation conflicts
vmkernel: 0:12:44:36.490 cpu2:1046)WARNING: SCSI: 5615: status SCSI reservation conflict, rstatus 0xc0de01 for vmhba2:0:3. residual R 919, CR 0, ER 3
vmkernel: 0:12:44:36.490 cpu2:1046)FSS: 343: Failed with status 0xbad0022 for f530 28 2 45378334 5f8825b2 17007a7d 1d624ca4 4 1 0 0 0 0 0
"CR" stands for conflict retry. In this case an I/O is being retried due to reservation conflicts. The number at the end of the log statement is the number of retries left. In this case, all retries were exhausted so the I/O failed.
What causes a SCSI Reservation?  Administrative operations, such as creating or deleting a virtual disk, extending a VMFS volume, or creating or deleting snapshots, result in metadata updates to the file system using locks, and thus result in SCSI reservations.  Reservations are also generated when you expand a virtual disk, when a snapshot for a virtual machine disk increases in size, or when a thin provisioned virtual disk grows.  VMotion also uses SCSI Reservations, with a reservation placed first by the source ESX, which is subsequently released, and then the destination ESX places a SCSI Reservation on the LUN. Note that SCSI reservations are used to acquire or release a lock on a file but are not required when updating host heartbeats to maintain a lock on a file. For more information, see KB 1005009: http://kb.vmware.com/kb/1005009
What Should I do when I have SCSI Reservation Conflicts?  Sometimes SCSI Reservation Conflicts are temporary and may subside. If the SCSI Reservation Conflicts do not subside, issue a LUN “RESET” to the LUN to see if this resolves them. This can be done from an ESX host by issuing the following command: # vmkfstools -L lunreset /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx *NOTE: To be safe, issue the LUN reset command at least twice. If the command was successful, you should see messages in the vmkernel similar to the following: cpu1:1057)<6>scsi(2:0:3:73): DEVICE RESET SUCCEEDED. If a LUN reset resolves the issue then this points to a lost reservation as being the root cause of the problem. A lost reservation means that an initiator placed the reservation however it was not able to send the release (HBA goes into a fatal error state) or the release was dropped at some level yet a SUCCESS was returned to the initiator (blade environment).
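To confirm which device backs the affected VMFS datastore before issuing the reset, the datastore-to-device mapping can be listed first; a short sketch, using the naa ID from the earlier charts purely as an illustrative target:
# esxcfg-scsidevs -m
# vmkfstools -L lunreset /vmfs/devices/disks/naa.60060480000190300919533030303134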
What Should I do when I have SCSI Reservation Conflicts?  If the LUN reset does not resolve the SCSI reservation conflict, the following scenarios may apply: Array is overloaded:  Write pending at 100% Read/write cache exhausted Cache constantly destaging down to disk Array controller/port queue is full LUN Replication/Backups too demanding Host mode setting incorrect for initiator record on the array Array software/firmware bug (LUN flag setting, Storage pool leak, etc) Hardware Management Agents (HP Insight Manager) are locking the LUN The LUN reporting the SCSI reservation is presented to a non-ESX host The LUN reporting the SCSI reservation is an RDM LUN used by a VM *Remember: A SCSI reservation conflict is a symptom, not the problem.
Best Practices to avoid SCSI Reservation Conflicts Keep HBA, Blade Switch (if applicable), Fabric Switch, and Array firmware up to date. Firmware updates contain fixes! Determine whether any operations are already happening on a LUN before starting another operation that may cause a SCSI Reservation. Spread out VCB Backups, VMotion operations, Template deployments, etc. Choose one ESX Server as your deployment server to avoid conflicts with multiple ESX servers trying to deploy templates. Limit access to Administrative operations in vCenter so that you control who can enact an operation that could lead to potential reservation issues (snapshots).
Best Practices to avoid SCSI Reservation Conflicts Avoid boot storms by scheduling VM reboots so that there is only one reboot per LUN. In the case of VMware View desktops, choose to leave these desktops always powered on. Use care when scheduling backups, antivirus, or LUN replication.  Ensure that the storage array will not be overloaded with operations that could impact connected hosts. Verify that there is no other SCSI Reservation operation happening. Use the ATS (Atomic Test and Set) feature of VAAI, if available. More on this feature on the next slide.
Goodbye SCSI Reservations, Hello ATS! In vSphere 4.1 the following three offload primitives are supported – Hardware Assisted Locking, Full Copy, and Block Zero. These primitives are also known as Atomic Test and Set, Clone Blocks, and Write Same respectively. We will talk about Clone Blocks and Write Same in later slides. The Atomic Test and Set (ATS) primitive atomically modifies a sector on disk without the use of SCSI reservations and without needing to lock out other hosts from concurrent LUN access. This primitive also reduces the number of commands required to successfully acquire an on-disk lock.  Upon receiving an ATS command, the array should atomically check if the contents of the disk block at the specified logical block number are the same as the initiator-provided existing value, and if yes, replace it with the initiator-provided new value.
Goodbye SCSI Reservations, Hello ATS! Unlike SCSI reservations, ATS commands from other hosts should not be rejected with a device status of RESERVATION CONFLICT in the case of in-flight ATS commands from other hosts. Instead, these commands are queued as needed and processed in the same order. The ATS primitive can significantly improve I/O throughput from guest OS applications to a VMFS volume in the presence of metadata operations, and the number of concurrent cluster-aware VM operations that can be supported in the presence of these I/O intensive enterprise workloads. ATS will be used for the following operations: Every lock operation executed by the VMFS-3 lock manager (metadata) Power state operations (power on/off/checkpoint/resume, snapshot and consolidate) of VMs Cluster-wide operations like Storage vMotion, vMotion, DRS
Requirements for VAAI The following are required for VAAI: ESX/ESXi 4.1 Array with SCSI ANSI version 3 or higher Array firmware that supports VAAI primitives VAAI uses the following SCSI commands: ATS uses SCSI command 0x89 (COMPARE AND WRITE) Clone Blocks/Full Copy uses SCSI command 0x83 (EXTENDED COPY) Zero Blocks/Write Same uses SCSI command 0x93 (WRITE SAME) These VAAI operations are controlled by the following advanced settings: /VMFS3/HardwareAcceleratedLocking /DataMover/HardwareAcceleratedMove /DataMover/HardwareAcceleratedInit
Known Issues for VAAI VAAI is enabled by default, and the primitives will issue the specific SCSI commands to determine whether the LUN supports the commands or not. If the commands are not successful, the LUN will be marked as non-VAAI capable. The reason VAAI is enabled by default is due to NDU (non-disruptive upgrades) or initiator setting changes that could make the LUNs support VAAI. This default behavior has caused issues for some arrays or fabric configurations that do not support  the SCSI commands VAAI uses. Examples of this are the following: Outdated/unsupported array firmware does not handle SCSI commands correctly and returns unexpected responses Cisco SANTAP controllers do not support the commands and crash
Known Issues for VAAI To check VAAI status, run the command:
# esxcfg-scsidevs -l | egrep "Display Name:|VAAI Status:"
In the event that your array or fabric configuration does not support VAAI commands and is behaving oddly, disable the VAAI functions on the ESX hosts until a firmware update is available from the vendor. To disable the VAAI primitives, execute the following commands on each ESX host:
# esxcfg-advcfg -s 0 /DataMover/HardwareAcceleratedMove
# esxcfg-advcfg -s 0 /DataMover/HardwareAcceleratedInit
# esxcfg-advcfg -s 0 /VMFS3/HardwareAcceleratedLocking
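To confirm the current state of each option before and after making the change, the same advanced settings can be queried with the get flag (a minimal sketch; the command simply prints the current value):
# esxcfg-advcfg -g /VMFS3/HardwareAcceleratedLocking
# esxcfg-advcfg -g /DataMover/HardwareAcceleratedMove
# esxcfg-advcfg -g /DataMover/HardwareAcceleratedInit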
Agenda vCenter Performance Charts & ESXTop Disk Alignment Hard Disks/Storage Array SCSI Reservations APD (All Paths Down) Issue Multipathing Considerations Communication
APD (All Paths Down) Issue
What is the APD issue? APD (all paths down) is a condition where all paths to the storage device (LUN) are lost either temporarily or permanently. The most common cause of the APD condition is intentional changes in the SAN configuration, particularly removing the presentation of a LUN to a given host(s) or migrating off a SAN entirely. The APD state can be transient, but in most cases it will be permanent unless the removed LUN is presented back to the affected host, an advanced configuration option is used, or the host is rebooted to recover from the condition. The advanced configuration option was introduced in a patch after ESX 4.0 U1 was released (included in ESX 4.0 U2 and ESX 4.1): http://kb.vmware.com/kb/1016291 The advanced configuration option can be enabled with the following command: # esxcfg-advcfg -s 1 /VMFS3/FailVolumeOpenIfAPD
What is the impact of the APD issue? If a LUN has been removed from the ESX hosts improperly, this will lead to an APD state. While the LUN is in this state, running VMs may hang or pause momentarily due to the VMM requiring an fsLock. Issuing a rescan at this point will exacerbate the issue and cause more VMs to pause, and for a longer period of time. This hang/pause effect can cause severe issues for time-sensitive/transactional applications such as SAP, SQL, or MS Exchange. It also effectively pauses the network stack of Windows based OSes and can cause ICMP ping drops as a result. This issue is discussed in detail via the following KB link: http://kb.vmware.com/kb/1016626
How can I avoid the APD issue? Engineering has approved an official procedure for proper LUN removal in ESX 4.x.  This process involves creating LUN claim rules that will mask the LUNs from the vmkernel before they are removed from the host. After masking the LUN, a reclaim command is run which will unregister those LUNs from the vmkernel, allowing them to be safely removed. Once the LUNs are removed, the claim rules that were added for this procedure should be removed. The proper LUN removal procedure to avoid the APD condition is fully covered in the following KB article: http://kb.vmware.com/kb/1015084
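As a hedged outline of that masking procedure on ESX 4.x (the rule number, adapter, and channel/target/LUN values below are placeholders; follow KB 1015084 for the exact, supported steps for your environment):
# esxcli corestorage claimrule add --rule 192 -t location -A vmhba1 -C 0 -T 0 -L 20 -P MASK_PATH
# esxcli corestorage claimrule load
# esxcli corestorage claiming reclaim -d naa.xxxxxxxxxxxxxxxx
# esxcli corestorage claimrule run
Once the LUN has been unpresented on the array side, the temporary rule is deleted with ‘esxcli corestorage claimrule delete --rule 192’, followed by another load/run.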
Agenda vCenter Performance Charts & ESXTop Disk Alignment Hard Disks/Storage Array SCSI Reservations APD (All Paths Down) Issue Multipathing Considerations Communication
Multipathing Considerations
MRU (Most Recently Used) Path Selection Policy The MRU Path Selection Policy was designed for use with Active/Passive or Passive Not Ready (PNR) arrays. These arrays have a sense of LUN ownership, which means a LUN can only be accessed via one controller, not both. As such, Active/Passive arrays have far more failover conditions. The MRU policy selects the first working path discovered at system boot time. This would be the first HBA and first path discovered. This design has an inherent flaw, as it means that unless failover has occurred, all I/O to the LUNs will be going through the first HBA. If this path becomes unavailable, the ESX host switches to an alternative path and continues to use the new path while it is available. It does this because forcing the path back to the other array controller would cause the LUN to change ownership on the array.
Fixed Path Selection Policy Originally, the Fixed Path Selection Policy was primarily used by Active/Active arrays however this policy is now used by most ALUA compatible arrays as well as iSCSI arrays with a virtual port. The Fixed Path Policy uses the concept of a preferred path, which means that you can select which path you want to use for a LUN and that path will always be used unless a failover scenario occurs. Once the path is restored, however, the storage stack will automatically switch back to the preferred path.  Fixed was used in the ESX 2.x – 3.x days to load balance I/O from an ESX perspective. It is acceptable to use the MRU policy for Active/Active arrays since both controllers can serve up a LUN, however it is NOT acceptable or supported to use Fixed policy on an Active/Passive or Passive Not Ready (PNR) array as this will cause path thrashing to occur.
RoundRobin Path Selection Policy The RoundRobin Path Selection Policy is the only in-box policy that load balances I/O (Fixed doesn’t count). This policy will only use “Active”, “Optimized” paths by default as valid paths to switch to. No “Standby” paths will be used, as this would cause a trespassing effect/LUN ownership change to occur on Active/Passive or Passive Not Ready (PNR) arrays. This is not a problem for Active/Active arrays since they have no “Standby” paths. The load balancing mechanism will switch paths by one of two methods: an IOPS threshold (commands sent) or a throughput threshold (bytes transferred).
RoundRobin Path Selection Policy When setting the RoundRobin policy, the default value for IOPS will be 1000 commands and the default throughput threshold will be 10485760 bytes. VMware does not make recommendations for either the IOPS or the bytes value with RoundRobin. For information on the correct settings for RoundRobin use with a particular storage array, the storage array vendor MUST be contacted. The array vendor will know what settings are optimal for use, but they will also know what settings NOT to use. Some vendors recommend an ‘iops’ value of ‘1’, however this same setting on another array can have detrimental effects resulting in poor performance or even a possible production outage. The RoundRobin policy cannot be used on MSCS Quorum RDM LUNs.
Vendor Recommended Round Robin Settings
EMC DMX: http://www.emc.com/collateral/hardware/white-papers/h6531-using-vmware-vsphere-with-emc-symmetrix-wp.pdf
EMC Celerra: http://www.emc.com/collateral/software/technical-documentation/h5536-vmware-esx-srvr-using-emc-celerra-stor-sys-wp.pdf
NetApp: http://kb.vmware.com/kb/1010713 and http://media.netapp.com/documents/tr-3749.pdf
HP EVA: http://h20195.www2.hp.com/v2/GetPDF.aspx/4AA1-2185ENW.pdf
Setting Round Robin Settings From CLI
Setting an entire SATP (Storage Array Type Plugin) to Round Robin:
# esxcli nmp satp setdefaultpsp -s VMW_SATP_LSI -p VMW_PSP_RR
Setting the Round Robin PSP for a single LUN:
# esxcli nmp device setpolicy -d naa.600174d001000000010f003048318438 --psp VMW_PSP_RR
Command line option to set the iops value:
# esxcli nmp roundrobin setconfig -d naa.600174d001000000010f003048318438 --iops 10 --type iops
Command line option to set the bytes value:
# esxcli nmp roundrobin setconfig -d naa.600174d001000000010f003048318438 --bytes 11 --type bytes
Displaying Round Robin Settings From CLI
To display the Round Robin settings for a LUN:
# esxcli nmp device list -d naa.60a9800050334b356b4a51312f417541
    Device Display Name: NETAPP Fibre Channel Disk (naa.60a9800050334b356b4a51312f417541)
    Storage Array Type: VMW_SATP_ALUA
    Storage Array Type Device Config: {implicit_support=on;explicit_support=off;explicit_allow=on;alua_followover=on;{TPG_id=2,TPG_state=AO}{TPG_id=3,TPG_state=ANO}}
    Path Selection Policy: VMW_PSP_RR
    Path Selection Policy Device Config: {policy=rr,iops=1000,bytes=10485760,useANO=0;lastPathIndex=3: NumIOsPending=0,numBytesPending=0}
    Working Paths: vmhba2:C0:T2:L1, vmhba1:C0:T2:L1
Known Issues for Round Robin in ESX 4.x There are currently two known issues when using the Round Robin Path Selection Policy: After setting the IOPS value for Round Robin, the setting is not retained after a reboot and instead shows a much higher value. This is resolved in a patch for ESX 4.0: http://kb.vmware.com/kb/1017721 The SATP for EMC DMX (VMW_SATP_SYMM) does not distribute I/O evenly with Round Robin after a state change occurs. This state change includes adding a new path (FA) or losing a path and recovering. This is also resolved in patches for ESX 4.0 and ESX 4.1: ESX 4.0: http://kb.vmware.com/kb/1023759 ESX 4.1: http://kb.vmware.com/kb/1027013
Agenda vCenter Performance Charts & ESXTop Disk Alignment Hard Disks/Storage Array SCSI Reservations APD (All Paths Down) Issue Multipathing Considerations Communication
Communication
Communication Communication between various business units is required for the success of a company.  Many companies silo their teams into networking, storage, server/hardware, Windows/Linux administrators, Database/Exchange administrators, virtualization administrators, and so on. This silo design or syndrome has been responsible for lack of cooperation and communication between the siloed teams. There have been movements to try and break down these barriers however this is not an easy task.  Ultimately, the victims of this design may not be the people of the silo teams, but the business/company itself, which in turn affects customers directly.
Communication In recent years, virtualization has become a huge movement in datacenters and the criticality of virtualized servers has shifted from simple application servers to mission critical applications. This mission critical requirement is far more demanding from an I/O (CPU, memory, and disk) and disk capacity perspective. The need to have virtualization administrators and storage administrators communicate effectively has reached critical mass. More than ever, these two teams need to be on the same page at all times. The need to cross-pollinate knowledge between these teams has never been higher. Lack of effective communication between just these two silo groups has led some companies to experience production outages and scalability issues.
Improving Communication While change is not easy, there are some simple policies that can be adopted in order to break down the silo walls between the storage and virtualization administration teams: Cross-train between both teams (basic to intermediate) Meetings where topics that could affect the other silo should always have at least one representative from that silo that understands the discussion Regular meetings (weekly) between the two silos to discuss current and future projects Email distribution list that contains all parties of both silos Always consider the other party before any changes are made to the virtualization infrastructure, array configuration, or policies on either team
Questions?

Contenu connexe

Tendances

Deploying postgre sql on amazon ec2
Deploying postgre sql on amazon ec2 Deploying postgre sql on amazon ec2
Deploying postgre sql on amazon ec2
Denish Patel
 
Best Practices of HA and Replication of PostgreSQL in Virtualized Environments
Best Practices of HA and Replication of PostgreSQL in Virtualized EnvironmentsBest Practices of HA and Replication of PostgreSQL in Virtualized Environments
Best Practices of HA and Replication of PostgreSQL in Virtualized Environments
Jignesh Shah
 
Webinar Slides: Migrating to Galera Cluster
Webinar Slides: Migrating to Galera ClusterWebinar Slides: Migrating to Galera Cluster
Webinar Slides: Migrating to Galera Cluster
Severalnines
 
Replication Solutions for PostgreSQL
Replication Solutions for PostgreSQLReplication Solutions for PostgreSQL
Replication Solutions for PostgreSQL
Peter Eisentraut
 

Tendances (20)

Velocity 2010 - ATS
Velocity 2010 - ATSVelocity 2010 - ATS
Velocity 2010 - ATS
 
Deploying postgre sql on amazon ec2
Deploying postgre sql on amazon ec2 Deploying postgre sql on amazon ec2
Deploying postgre sql on amazon ec2
 
Out of the box replication in postgres 9.4(pg confus)
Out of the box replication in postgres 9.4(pg confus)Out of the box replication in postgres 9.4(pg confus)
Out of the box replication in postgres 9.4(pg confus)
 
Mike Resseler - Deduplication in windows server 2012 r2
Mike Resseler - Deduplication in windows server 2012 r2Mike Resseler - Deduplication in windows server 2012 r2
Mike Resseler - Deduplication in windows server 2012 r2
 
Problems with PostgreSQL on Multi-core Systems with MultiTerabyte Data
Problems with PostgreSQL on Multi-core Systems with MultiTerabyte DataProblems with PostgreSQL on Multi-core Systems with MultiTerabyte Data
Problems with PostgreSQL on Multi-core Systems with MultiTerabyte Data
 
Out of the box replication in postgres 9.4
Out of the box replication in postgres 9.4Out of the box replication in postgres 9.4
Out of the box replication in postgres 9.4
 
Best Practices of HA and Replication of PostgreSQL in Virtualized Environments
Best Practices of HA and Replication of PostgreSQL in Virtualized EnvironmentsBest Practices of HA and Replication of PostgreSQL in Virtualized Environments
Best Practices of HA and Replication of PostgreSQL in Virtualized Environments
 
PostgreSQL Extensions: A deeper look
PostgreSQL Extensions:  A deeper lookPostgreSQL Extensions:  A deeper look
PostgreSQL Extensions: A deeper look
 
Webinar Slides: Migrating to Galera Cluster
Webinar Slides: Migrating to Galera ClusterWebinar Slides: Migrating to Galera Cluster
Webinar Slides: Migrating to Galera Cluster
 
Mark Farnam : Minimizing the Concurrency Footprint of Transactions
Mark Farnam  : Minimizing the Concurrency Footprint of TransactionsMark Farnam  : Minimizing the Concurrency Footprint of Transactions
Mark Farnam : Minimizing the Concurrency Footprint of Transactions
 
Deep Dive into RDS PostgreSQL Universe
Deep Dive into RDS PostgreSQL UniverseDeep Dive into RDS PostgreSQL Universe
Deep Dive into RDS PostgreSQL Universe
 
Oscon 2010 - ATS
Oscon 2010 - ATSOscon 2010 - ATS
Oscon 2010 - ATS
 
Webinar slides: Top 9 Tips for building a stable MySQL Replication environment
Webinar slides: Top 9 Tips for building a stable MySQL Replication environmentWebinar slides: Top 9 Tips for building a stable MySQL Replication environment
Webinar slides: Top 9 Tips for building a stable MySQL Replication environment
 
Replication Solutions for PostgreSQL
Replication Solutions for PostgreSQLReplication Solutions for PostgreSQL
Replication Solutions for PostgreSQL
 
Best Practices with PostgreSQL on Solaris
Best Practices with PostgreSQL on SolarisBest Practices with PostgreSQL on Solaris
Best Practices with PostgreSQL on Solaris
 
PostgreSQL and Linux Containers
PostgreSQL and Linux ContainersPostgreSQL and Linux Containers
PostgreSQL and Linux Containers
 
High Availability PostgreSQL with Zalando Patroni
High Availability PostgreSQL with Zalando PatroniHigh Availability PostgreSQL with Zalando Patroni
High Availability PostgreSQL with Zalando Patroni
 
Streaming replication in PostgreSQL
Streaming replication in PostgreSQLStreaming replication in PostgreSQL
Streaming replication in PostgreSQL
 
Oaktable World 2014 Kevin Closson: SLOB – For More Than I/O!
Oaktable World 2014 Kevin Closson:  SLOB – For More Than I/O!Oaktable World 2014 Kevin Closson:  SLOB – For More Than I/O!
Oaktable World 2014 Kevin Closson: SLOB – For More Than I/O!
 
SQL Server vs Postgres
SQL Server vs PostgresSQL Server vs Postgres
SQL Server vs Postgres
 

En vedette

SIGGRAPH Asia 2008 Modern OpenGL
SIGGRAPH Asia 2008 Modern OpenGLSIGGRAPH Asia 2008 Modern OpenGL
SIGGRAPH Asia 2008 Modern OpenGL
Mark Kilgard
 

En vedette (7)

Master VMware Performance and Capacity Management
Master VMware Performance and Capacity ManagementMaster VMware Performance and Capacity Management
Master VMware Performance and Capacity Management
 
B-Plan Pitch Deck Template
B-Plan Pitch Deck TemplateB-Plan Pitch Deck Template
B-Plan Pitch Deck Template
 
Designing for Sensors 
& the Future of Experiences
Designing for Sensors 
& the Future of ExperiencesDesigning for Sensors 
& the Future of Experiences
Designing for Sensors 
& the Future of Experiences
 
SIGGRAPH Asia 2008 Modern OpenGL
SIGGRAPH Asia 2008 Modern OpenGLSIGGRAPH Asia 2008 Modern OpenGL
SIGGRAPH Asia 2008 Modern OpenGL
 
Iugu's pitch deck template
Iugu's pitch deck templateIugu's pitch deck template
Iugu's pitch deck template
 
QFI Open Quiz Prelims
QFI Open Quiz PrelimsQFI Open Quiz Prelims
QFI Open Quiz Prelims
 
Startup Investor Pitch Deck
Startup Investor Pitch DeckStartup Investor Pitch Deck
Startup Investor Pitch Deck
 

Similaire à vSphere vStorage: Troubleshooting Performance

Vstoragetamsupportday1 110311121032-phpapp02
Vstoragetamsupportday1 110311121032-phpapp02Vstoragetamsupportday1 110311121032-phpapp02
Vstoragetamsupportday1 110311121032-phpapp02
Suresh Kumar
 
San 101 basics of administrating a san
San 101 basics of administrating a sanSan 101 basics of administrating a san
San 101 basics of administrating a san
pineapplebed24
 
Perf Vsphere Storage Protocols
Perf Vsphere Storage ProtocolsPerf Vsphere Storage Protocols
Perf Vsphere Storage Protocols
Yanghua Zhang
 
VMware Performance Troubleshooting
VMware Performance TroubleshootingVMware Performance Troubleshooting
VMware Performance Troubleshooting
glbsolutions
 
General commands for navisphere cli
General commands for navisphere cliGeneral commands for navisphere cli
General commands for navisphere cli
msaleh1234
 
Rearchitecting Storage for Server Virtualization
Rearchitecting Storage for Server VirtualizationRearchitecting Storage for Server Virtualization
Rearchitecting Storage for Server Virtualization
Stephen Foskett
 
USING EMC FAST SUITE WITH SYBASE ASE ON EMC VNX STORAGE SYSTEMS
USING EMC FAST SUITE  WITH SYBASE ASE ON EMC VNX STORAGE SYSTEMSUSING EMC FAST SUITE  WITH SYBASE ASE ON EMC VNX STORAGE SYSTEMS
USING EMC FAST SUITE WITH SYBASE ASE ON EMC VNX STORAGE SYSTEMS
suri1155
 

Similaire à vSphere vStorage: Troubleshooting Performance (20)

Vstoragetamsupportday1 110311121032-phpapp02
Vstoragetamsupportday1 110311121032-phpapp02Vstoragetamsupportday1 110311121032-phpapp02
Vstoragetamsupportday1 110311121032-phpapp02
 
San 101 basics of administrating a san
San 101 basics of administrating a sanSan 101 basics of administrating a san
San 101 basics of administrating a san
 
VMworld 2013: Just Because You Could, Doesn't Mean You Should: Lessons Learne...
VMworld 2013: Just Because You Could, Doesn't Mean You Should: Lessons Learne...VMworld 2013: Just Because You Could, Doesn't Mean You Should: Lessons Learne...
VMworld 2013: Just Because You Could, Doesn't Mean You Should: Lessons Learne...
 
Perf Vsphere Storage Protocols
Perf Vsphere Storage ProtocolsPerf Vsphere Storage Protocols
Perf Vsphere Storage Protocols
 
Virtualization with Lenovo X6 Blade Servers: white paper
Virtualization with Lenovo X6 Blade Servers: white paperVirtualization with Lenovo X6 Blade Servers: white paper
Virtualization with Lenovo X6 Blade Servers: white paper
 
os
osos
os
 
2011 q1-indy-vmug
2011 q1-indy-vmug2011 q1-indy-vmug
2011 q1-indy-vmug
 
Presentation v mware v-sphere advanced troubleshooting by eric sloof
Presentation   v mware v-sphere advanced troubleshooting by eric sloofPresentation   v mware v-sphere advanced troubleshooting by eric sloof
Presentation v mware v-sphere advanced troubleshooting by eric sloof
 
Storage
StorageStorage
Storage
 
VMware Performance Troubleshooting
VMware Performance TroubleshootingVMware Performance Troubleshooting
VMware Performance Troubleshooting
 
Технологии работы с дисковыми хранилищами и файловыми системами Windows Serve...
Технологии работы с дисковыми хранилищами и файловыми системами Windows Serve...Технологии работы с дисковыми хранилищами и файловыми системами Windows Serve...
Технологии работы с дисковыми хранилищами и файловыми системами Windows Serve...
 
Vmfs
VmfsVmfs
Vmfs
 
Steps to identify ONTAP latency related issues
Steps to identify ONTAP latency related issuesSteps to identify ONTAP latency related issues
Steps to identify ONTAP latency related issues
 
Presentation v mware performance overview
Presentation   v mware performance overviewPresentation   v mware performance overview
Presentation v mware performance overview
 
Next-Generation Best Practices for VMware and Storage
Next-Generation Best Practices for VMware and StorageNext-Generation Best Practices for VMware and Storage
Next-Generation Best Practices for VMware and Storage
 
IBM SAN Volume Controller Performance Analysis
IBM SAN Volume Controller Performance AnalysisIBM SAN Volume Controller Performance Analysis
IBM SAN Volume Controller Performance Analysis
 
3487570
34875703487570
3487570
 
General commands for navisphere cli
General commands for navisphere cliGeneral commands for navisphere cli
General commands for navisphere cli
 
Rearchitecting Storage for Server Virtualization
Rearchitecting Storage for Server VirtualizationRearchitecting Storage for Server Virtualization
Rearchitecting Storage for Server Virtualization
 
USING EMC FAST SUITE WITH SYBASE ASE ON EMC VNX STORAGE SYSTEMS
USING EMC FAST SUITE  WITH SYBASE ASE ON EMC VNX STORAGE SYSTEMSUSING EMC FAST SUITE  WITH SYBASE ASE ON EMC VNX STORAGE SYSTEMS
USING EMC FAST SUITE WITH SYBASE ASE ON EMC VNX STORAGE SYSTEMS
 

Plus de ProfessionalVMware

ProfessionalVMware VCAP BrownBag Section 2
ProfessionalVMware VCAP BrownBag Section 2ProfessionalVMware VCAP BrownBag Section 2
ProfessionalVMware VCAP BrownBag Section 2
ProfessionalVMware
 

Plus de ProfessionalVMware (13)

#vBrownBag OpenStack - Review & Kickoff for Phase 2
#vBrownBag OpenStack - Review & Kickoff for Phase 2#vBrownBag OpenStack - Review & Kickoff for Phase 2
#vBrownBag OpenStack - Review & Kickoff for Phase 2
 
Portland VMware User Conference 2013 - Afternoon Keynote
Portland VMware User Conference 2013 - Afternoon KeynotePortland VMware User Conference 2013 - Afternoon Keynote
Portland VMware User Conference 2013 - Afternoon Keynote
 
Couch to open_stack_keystone
Couch to open_stack_keystoneCouch to open_stack_keystone
Couch to open_stack_keystone
 
Vagrant
VagrantVagrant
Vagrant
 
vCloud Architecture BrownBag
vCloud Architecture BrownBagvCloud Architecture BrownBag
vCloud Architecture BrownBag
 
BrownBag - vCloud Networking
BrownBag - vCloud NetworkingBrownBag - vCloud Networking
BrownBag - vCloud Networking
 
ProfessionalVMware BrownBag - SMB Design
ProfessionalVMware BrownBag - SMB DesignProfessionalVMware BrownBag - SMB Design
ProfessionalVMware BrownBag - SMB Design
 
Wade Holmes vCloud Architecture Toolkit
Wade Holmes vCloud Architecture ToolkitWade Holmes vCloud Architecture Toolkit
Wade Holmes vCloud Architecture Toolkit
 
ProfessionalVMware BrownBag (Jason Boche) - VCAP-DCD Objective 1
ProfessionalVMware BrownBag (Jason Boche) - VCAP-DCD Objective 1ProfessionalVMware BrownBag (Jason Boche) - VCAP-DCD Objective 1
ProfessionalVMware BrownBag (Jason Boche) - VCAP-DCD Objective 1
 
VMworld 2011 - PowerCLI 101
VMworld 2011 - PowerCLI 101VMworld 2011 - PowerCLI 101
VMworld 2011 - PowerCLI 101
 
ProfessionalVMware VCAP BrownBag Section 2
ProfessionalVMware VCAP BrownBag Section 2ProfessionalVMware VCAP BrownBag Section 2
ProfessionalVMware VCAP BrownBag Section 2
 
VCAP-DCA Lightning Round Q&A
VCAP-DCA Lightning Round Q&AVCAP-DCA Lightning Round Q&A
VCAP-DCA Lightning Round Q&A
 
Vcap dca section 1
Vcap dca section 1Vcap dca section 1
Vcap dca section 1
 

Dernier

+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
?#DUbAI#??##{{(☎️+971_581248768%)**%*]'#abortion pills for sale in dubai@
 
Artificial Intelligence: Facts and Myths
Artificial Intelligence: Facts and MythsArtificial Intelligence: Facts and Myths
Artificial Intelligence: Facts and Myths
Joaquim Jorge
 
Histor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slideHistor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slide
vu2urc
 

Dernier (20)

Tech Trends Report 2024 Future Today Institute.pdf
Tech Trends Report 2024 Future Today Institute.pdfTech Trends Report 2024 Future Today Institute.pdf
Tech Trends Report 2024 Future Today Institute.pdf
 
Advantages of Hiring UIUX Design Service Providers for Your Business
Advantages of Hiring UIUX Design Service Providers for Your BusinessAdvantages of Hiring UIUX Design Service Providers for Your Business
Advantages of Hiring UIUX Design Service Providers for Your Business
 
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
 
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemkeProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
 
Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024
 
Apidays New York 2024 - The value of a flexible API Management solution for O...
Apidays New York 2024 - The value of a flexible API Management solution for O...Apidays New York 2024 - The value of a flexible API Management solution for O...
Apidays New York 2024 - The value of a flexible API Management solution for O...
 
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
 
Real Time Object Detection Using Open CV
Real Time Object Detection Using Open CVReal Time Object Detection Using Open CV
Real Time Object Detection Using Open CV
 
GenAI Risks & Security Meetup 01052024.pdf
GenAI Risks & Security Meetup 01052024.pdfGenAI Risks & Security Meetup 01052024.pdf
GenAI Risks & Security Meetup 01052024.pdf
 
The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024
 
[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdf[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdf
 
Tata AIG General Insurance Company - Insurer Innovation Award 2024
Tata AIG General Insurance Company - Insurer Innovation Award 2024Tata AIG General Insurance Company - Insurer Innovation Award 2024
Tata AIG General Insurance Company - Insurer Innovation Award 2024
 
Strategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
Strategize a Smooth Tenant-to-tenant Migration and Copilot TakeoffStrategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
Strategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
 
Artificial Intelligence: Facts and Myths
Artificial Intelligence: Facts and MythsArtificial Intelligence: Facts and Myths
Artificial Intelligence: Facts and Myths
 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected Worker
 
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time AutomationFrom Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
 
A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)
 
Histor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slideHistor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slide
 
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
 
🐬 The future of MySQL is Postgres 🐘
🐬  The future of MySQL is Postgres   🐘🐬  The future of MySQL is Postgres   🐘
🐬 The future of MySQL is Postgres 🐘
 

vSphere vStorage: Troubleshooting Performance

  • 1. vStorage: Troubleshooting Performance Nathan Small, Staff Engineer, Global Support Services
  • 2. Agenda vCenter Performance Charts & ESXTop Disk Alignment Hard Disks/Storage Array SCSI Reservations APD (All Paths Down) Issue MultipathingConsiderations Communication
  • 4. Performance Charts Through the Overview section of the Performance tab of an ESX host, we can observe if a host is seeing recent disk latency issues. Unfortunately, this chart does not pinpoint which device or path is seeing this latency. For a more granularity, we will use “Advanced”
  • 5. Performance Charts If we switch to the “Storage path” option, we can see latency is coming from vmhba0:C0:T0:L0 for both read and write latency:
  • 6. Performance Charts In order to see the latency statistics for a given LUN, we will need to go to the “Disk” view and then select “Chart Options”: Select the LUNs you wish to see the latency statistics for under “Objects”, then select the Physical device read and write latency “Counters”
  • 7. Performance Charts With this granular view, we can see that the LUN with ID naa.60060480000190300919533030303134 had a latency spike up to 48ms at ~9:50am
  • 8. ESXTop ESXTop is a great tool to use if you need performance captured during a specific timeframe (scripted batch mode) or if the ESX host is unavailable in vCenter/VI Client. To view the equivalent of the “Storage path” view, hit ‘d’:
  • 9. ESXTop You can toggle the display fields by hitting ‘f’. By taking out unneeded fields (I/O stats), you can make the display screen far easier to read and work with:
  • 10. ESXTop To view individual LUN statistics (with unique ID), hit ‘u’: As we can see, the default field size for “DEVICE” is not long enough to show the entire unique identifier. To change this field size, hit ‘L’:
  • 11. ESXTop Expanding this field now shows the entire unique ID for the LUN and NFS datastores: Just as before, we should display only the fields we need to be concerned with by hitting ‘f’:
  • 12. ESXTop This view now shows us similar output to the ‘d’ view except we see the statistics for a LUN (all paths). *Note: The reason there are no statistics for the NFS mounts is due to the fact that these are ISO and template repositories that are not in use currently by this host.
  • 13. ESXTop To capture esxtop data, use ‘vm-support -S’. Here is an example for collecting these statistics at 2 second intervals over 20 seconds: # vm-support –S –i 2 –d 20
  • 14. ESXTop To replay that capture data, extract the compressed file, change to the ‘snaphots’ directory, run the uncompress script, then run ‘esxtop’ in replay mode ‘-R’:
  • 15. DAVG/KAVG/GAVG/QAVG What is the DAVG? The DAVG field is the Device Average, which is the amount of time it takes for a SCSI command to leave the HBA, hit the array, and return completed. This is the most effective method of determining if the performance issue being experienced is at the physical layer (switch/array). What are acceptable DAVG values? Optimal, sustained DAVG latency values would be between 0-5ms for 4/8GB FC and 10GB iSCSI/NFS. Seeing 5-10ms latency for 1/2GB FC or 1GB iSCSI/NFS is also acceptable. Seeing latency values of 10-20ms could start to show some minor performance issues inside the VM. while values of 20-50ms would show noticeable to significant performance issues. Latency values of 50ms or higher would make the VMs almost unusable. How should I proceed when my DAVG is sustained at a high level? Engage the storage team immediately. They may already be aware of the latency due to increased load on the storage array (LUN replication, backups)
  • 16. DAVG/KAVG/GAVG/QAVG What is the KAVG? The KAVG field is the Kernel Average, which is the amount of time a SCSI command spends in the vmkernel. What are acceptable KAVG values? The KAVG should always be less than 1ms. How should I proceed when my KAVG is sustained at a high level? While rare to see the KAVG value high at all, it has been observed when the queue has been throttled back on the ESX host due to a TASK_SET_FULL condition on the array (KB 1008113), or the queue depth has been reduced on the HBAs due to configuration restrictions (KB 1006001). It has also been observed temporarily on path failover. This is due to commands queuing during the path activation process.
• 17. DAVG/KAVG/GAVG/QAVG What is the GAVG? The GAVG field is the Guest Average, the sum of the Device Average and Kernel Average (the Queue Average is already included in the KAVG), which is the latency perceived by the Guest OS or VM. What are acceptable GAVG values? As the KAVG and QAVG should always be less than 1ms, acceptable GAVG latency values are the same as acceptable DAVG values. How should I proceed when my GAVG is sustained at a high level? Follow the same steps as for high DAVG values.
• 18. DAVG/KAVG/GAVG/QAVG What is the QAVG? The QAVG field is the Queue Average, which is the amount of time the SCSI command spends in the HBA driver. What are acceptable QAVG values? The QAVG should always be less than 1ms. How should I proceed when my QAVG is sustained at a high level? Just like the KAVG, the QAVG should not show a high latency value unless there is a legitimate reason for commands to remain queued on the host side.
• 19. Queue Depth and ESXTop Many people believe that increasing the queue depth will solve a performance issue or improve performance overall; however, this can have the opposite effect (KB 1006001 & 1008113). The queue depth should only be increased if the array vendor recommends doing so or the queue depth is being exhausted on the host side. As seen in a previous slide, we can observe the queue depth usage and activity:
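For illustration only (this sketch assumes a QLogic HBA using the qla2xxx driver; Emulex and other drivers use different module names and option strings, and the change only takes effect after a reboot), the HBA queue depth on ESX 4.x could be inspected and adjusted along these lines:
# esxcfg-module -g qla2xxx (show the current option string for the module)
# esxcfg-module -s ql2xmaxqdepth=64 qla2xxx (set the per-LUN queue depth to 64)
Before changing anything, the DQLEN, ACTV, QUED, and %USD fields in the esxtop ‘u’ view show whether the existing queue depth is actually being exhausted.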
• 20. Agenda vCenter Performance Charts & ESXTop Disk Alignment Hard Disks/Storage Array SCSI Reservations APD (All Paths Down) Issue Multipathing Considerations Communication
• 22. Disk Alignment The concept of Disk Alignment is simple, yet the performance hit for misaligned I/O can be troublesome for a single, high disk I/O VM (SQL, Exchange, etc.) or for the entire virtualized infrastructure on a particular storage array. It has long been a best practice recommendation from high I/O application vendors to align the data drive for the application to achieve the best I/O possible, as this avoids any partial read or write operation. In many cases, this data drive would be a separate LUN either on local storage or on a shared storage array.
• 23. Disk Alignment In a SAN environment, the smallest hardware unit used by a SAN storage array to build a LUN out of multiple physical disks is called a chunk or a stripe. To optimize I/O, chunks are usually much larger than sectors. Thus a SCSI I/O request that intends to read a sector in reality reads one chunk. On top of this, VMFS is formatted in blocks ranging from 1MB to 8MB. The file system used by the guest operating system optimizes I/O by grouping sectors into so-called clusters (allocation units). Older operating systems on x86 architectures create partitions with a master boot record (MBR) that consumes 63 sectors. This is due to legacy BIOS code from the PC that used cylinder, head, and sector addressing instead of logical block addressing (LBA). Without LBA, the first track is reserved for the boot code, and the first partition starts at cylinder 0, head 1, and sector 1. This is LBA 63 and is therefore unaligned.
• 24. Disk Alignment An unaligned partition results in a track crossing and an additional I/O, incurring a penalty on latency and throughput. The additional I/O (especially if small) can impact system resources significantly on some host types. An aligned partition ensures that a single I/O is serviced by a single device, eliminating the additional I/O and resulting in overall performance improvement. An unaligned structure may cause many additional I/O operations when only one cluster is read by the guest operating system.
  • 25. Disk Alignment Beginning with ESX 3.0.x, or rather VMFS3, any newly created VMFS LUN will be aligned to a starting sector of 128, which is 64k (128 * 512 = 65536). As most arrays perform reads/writes in 4k, or a value divisible by 4k, this means that the VMFS partition is aligned with the underlying storage. When the partition for the VM is aligned in the same manner, I/O is aligned across the board. I/O improvements on a properly aligned Windows NTFS volume in a VMDK on a SAN LUN:
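As a quick check from the service console (assuming the LUN shows up as /dev/sdb; device names will differ per host), the VMFS partition start sector can be listed in sectors:
# fdisk -lu /dev/sdb
A VMFS3 partition created through VirtualCenter/vCenter on ESX 3.x/4.x should show a start sector of 128.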
• 26. Disk Alignment Aligning I/O can yield significant, measurable performance improvements for high disk I/O VMs:
  • 27. Disk Alignment and NFS Does Disk Alignment only apply to VMFS and not NFS? While the NFS volume does not have an additional VMFS layer to align, the NFS datastore is aligned to a 4k divisible offset when created and the array controllers still operate in 4k read/write operations. As such, there is still a need to align the OS and data drives of VMs residing on an NFS volume.
• 28. Verifying Disk Alignment How can I tell if the disks inside my VMs are aligned? This is the default partition offset of 63 sectors, which is unaligned: 63 * 512 = 32256
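A hedged example of checking this from inside the guest (assuming a Windows guest with WMIC available, or a Linux guest):
C:\> wmic partition get BlockSize, StartingOffset, Name, Index
# fdisk -lu /dev/sda
A StartingOffset of 32256 (63 * 512) indicates a misaligned partition, while 65536 (128 * 512) indicates an aligned one; in the fdisk output, look for a Start value of 63 versus 128 (or another value divisible by 8).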
• 29. Aligning Disks To align newly created disks (data disks), simply use diskpart (Windows) or fdisk (Linux). For more information, see: http://www.vmware.com/pdf/esx3_partition_align.pdf and http://media.netapp.com/documents/tr-3747.pdf Aligning existing disks gets tricky since the file system and data have already been written; simply changing the offset of these existing partitions with diskpart or fdisk will render them useless. vSphere/ESX currently has no way of aligning disks; however, 3rd party vendors have created utilities to do this: NetApp provides a utility called ‘mbrscan’, which will scan your VMs’ disks to see if they are aligned, and ‘mbralign’, which will align the disks of a VM (powered-off VMs only). Vizioncore makes a product called vOptimizer that will align a VM’s disks to a 64k offset. Alternatively, their vConverter 5.0 product will convert physical or virtual machines and in the process align the disk offset to 64k. Paragon also makes a utility called Paragon Alignment Tool (PAT). This also comes bundled with their flagship product.
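A minimal sketch of aligning a newly added, empty data disk (the disk number is an example; this creates a new partition and must not be run against a disk that already contains data):
DISKPART> select disk 1
DISKPART> create partition primary align=64
On Linux, the VMware alignment document linked above describes the equivalent with fdisk: create the partition, then use expert mode (‘x’) and ‘b’ to move the beginning of the partition to sector 128.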
• 30. Planning Ahead The best way to plan ahead with Disk Alignment is to create templates for every OS you wish to deploy and align those before installing the OS. This way you can deploy VMs that will already be aligned.
• 31. Agenda vCenter Performance Charts & ESXTop Disk Alignment Hard Disks/Storage Array SCSI Reservations APD (All Paths Down) Issue Multipathing Considerations Communication
• 33. SCSI vs. SATA The argument over which hard drive technology to use is a well-known discussion; however, the answer on which drives to use ultimately depends on the application or need. SCSI drives are more expensive than SATA drives, but this is due to the design and performance achieved with this technology. SCSI uses a tagged command queuing implementation which allows many commands to be outstanding. This design provides a significant performance gain, as the drives/controllers are able to order these commands in the most optimal execution order possible. SCSI drives also use a processor for executing commands and handling the interface, while a separate processor handles the head positioning through servos. SCSI can push a maximum of 640MB/s.
• 34. SCSI vs. SATA In the early years, SATA used Tagged Command Queuing (TCQ). This queuing technology was designed to help bridge the gap between SCSI and PATA/SATA drives; however, there was too much overhead and TCQ simply wasn’t efficient enough. TCQ is sometimes referred to as interrupt queuing. SATA 2.0 introduced Native Command Queuing (NCQ), which is very similar to the outstanding request queuing that SCSI uses. This technology drastically reduces the number of interrupts that TCQ generates, due to the integrated first-party DMA engine as well as the intelligent ordering of commands and interrupting when it is most optimal to do so. Due to NCQ and the throughput increases of the SATA 2.0 standard, SATA can perform well, but not to the same level as SCSI. SATA 3Gb/s can push a maximum of 300MB/s.
• 35. When To Use SCSI SCSI drives are by far the most expensive choice; however, you get what you pay for: performance. There are no limitations on SCSI drive usage, but budgets are rarely unlimited, so SCSI drives are usually purchased for production servers/applications (SQL, MS Exchange, etc.) or tier 1/2 storage. Because of the throughput capabilities of SCSI, these drives will handle I/O bursts far better than SATA, which would rely heavily on read/write cache to weather the storm. SCSI drives are recommended for use with VMware View environments for linked clone and fully deployed desktops. More on this later.
• 36. When To Use SATA While SATA drives are far cheaper than SCSI drives and usually higher in disk capacity, they simply do not perform at the same level as SCSI drives. Installing more write cache in the storage array can help SATA drives through heavy I/O periods; however, that makes write cache the last line of defense against an I/O storm or an I/O load that increases over time. It may only be a matter of time before the write cache fills up and the battle of destaging begins. If SATA disks must be used, a RAID 10 configuration for those disks is strongly recommended. This will not guarantee availability in the heaviest of I/O storms, but it is the most optimal RAID configuration that still maintains redundancy.
• 37. Agenda vCenter Performance Charts & ESXTop Disk Alignment Hard Disks/Storage Array SCSI Reservations APD (All Paths Down) Issue Multipathing Considerations Communication
• 39. What are SCSI Reservations? SCSI Reservations refer to the act of an initiator using the SCSI-2 commands 0x16 (RESERVE) and 0x17 (RELEASE) to lock a LUN for a specific operation. While a reservation is placed on a LUN, if another initiator attempts to perform any command on the reserved LUN other than an INQUIRY, REQUEST SENSE, or PREVENT/ALLOW MEDIUM REMOVAL command, the command is rejected with a RESERVATION CONFLICT device status. A reservation may only be released by the initiator that placed it; a release command sent by another initiator will be ignored. Any initiator can clear the reservation by issuing a “BUS DEVICE RESET” command or a hard “RESET”.
  • 40. SCSI Reservations Since VMFS-3 is basically a clustered file system, allowing simultaneous access from multiple ESX servers, we need a way of preserving the integrity of the file system when more than one host is updating it. VMFS-3 implements a locking protocol to prevent VMware cluster-aware applications from powering on (or otherwise sharing) the contents of a given virtual disk on more than one host at any given time. Part of this locking protocol is based on the notion of on-disk locks that protect the metadata on the volume. An ESX/ESXi host interested in locked access to metadata must atomically check the owner and lock-state fields in the lock, and if the lock is free, acquire it by writing out its own ID and lock state into the owner and lock-state fields.
• 41. SCSI Reservations The lock manager currently uses SCSI reservations to check the lock information, establish itself as the owner by writing to the relevant lock fields on disk if the lock is free, and then release the SCSI reservation. ESX uses SCSI-2 non-persistent reservations, which means that only a single host can reserve the LUN in question at any one time. A reboot of that host will clear the reservation on the LUN (non-persistent). ESX 4.0 introduces limited support for SCSI-3 persistent reservations to work with Microsoft Windows 2008 Clustering; however, the LSI Logic SAS controller must be used for the VMs, and only Windows 2008 is supported with this controller. VMFS lock operations are still implemented using SCSI-2 non-persistent reservations, even in ESX 4.x.
• 42. A typical SCSI Reservation Conflict message
vmkernel: 0:12:44:32.598 cpu4:1046)SCSI: vm 1046: 5509: Sync CR at 64
vmkernel: 0:12:44:33.520 cpu4:1046)SCSI: vm 1046: 5509: Sync CR at 48
vmkernel: 0:12:44:34.512 cpu4:1046)SCSI: vm 1046: 5509: Sync CR at 32
vmkernel: 0:12:44:35.482 cpu4:1046)SCSI: vm 1046: 5509: Sync CR at 16
vmkernel: 0:12:44:36.490 cpu2:1046)SCSI: vm 1046: 5509: Sync CR at 0
vmkernel: 0:12:44:36.490 cpu2:1046)WARNING: SCSI: 5519: Failing I/O due to too many reservation conflicts
vmkernel: 0:12:44:36.490 cpu2:1046)WARNING: SCSI: 5615: status SCSI reservation conflict, rstatus 0xc0de01 for vmhba2:0:3. residual R 919, CR 0, ER 3
vmkernel: 0:12:44:36.490 cpu2:1046)FSS: 343: Failed with status 0xbad0022 for f530 28 2 45378334 5f8825b2 17007a7d 1d624ca4 4 1 0 0 0 0 0
"CR" stands for conflict retry. In this case an I/O is being retried due to reservation conflicts. The number at the end of the log statement is the number of retries left. In this case, all retries were exhausted so the I/O failed.
• 43. What causes a SCSI Reservation? Administrative operations, such as creating or deleting a virtual disk, extending a VMFS volume, or creating or deleting snapshots, result in metadata updates to the file system using locks, and thus result in SCSI reservations. Reservations are also generated when you expand a virtual disk, when a snapshot for a virtual machine disk increases in size, or when a thin provisioned virtual disk grows. VMotion also uses SCSI Reservations: a reservation is placed first by the source ESX host and subsequently released, and then the destination ESX host places a SCSI Reservation on the LUN. Note that SCSI reservations are used to acquire or release a lock on a file but are not required when updating host heartbeats to maintain a lock on a file. For more information, see KB 1005009: http://kb.vmware.com/kb/1005009
• 44. What Should I do when I have SCSI Reservation Conflicts? Sometimes SCSI Reservation Conflicts are temporary and may subside. If the SCSI Reservation Conflicts do not subside, issue a LUN “RESET” to the LUN to see if this resolves them. This can be done from an ESX host by issuing the following command: # vmkfstools -L lunreset /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx *NOTE: To be safe, issue the LUN reset command at least twice. If the command was successful, you should see messages in the vmkernel log similar to the following: cpu1:1057)<6>scsi(2:0:3:73): DEVICE RESET SUCCEEDED. If a LUN reset resolves the issue, then this points to a lost reservation as the root cause of the problem. A lost reservation means that an initiator placed the reservation but was not able to send the release (HBA goes into a fatal error state), or the release was dropped at some level yet a SUCCESS was returned to the initiator (blade environment).
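To find the naa identifier of the affected datastore before issuing the reset, the VMFS-to-device mapping can be listed on ESX 4.x (a sketch; the exact output format varies by version):
# esxcfg-scsidevs -m
Match the datastore name in the output to its naa.xxx device and use that device path with the vmkfstools lunreset command above.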
  • 45. What Should I do when I have SCSI Reservation Conflicts? If the LUN reset does not resolve the SCSI reservation conflict, the following scenarios may apply: Array is overloaded: Write pending at 100% Read/write cache exhausted Cache constantly destaging down to disk Array controller/port queue is full LUN Replication/Backups too demanding Host mode setting incorrect for initiator record on the array Array software/firmware bug (LUN flag setting, Storage pool leak, etc) Hardware Management Agents (HP Insight Manager) are locking the LUN The LUN reporting the SCSI reservation is presented to a non-ESX host The LUN reporting the SCSI reservation is an RDM LUN used by a VM *Remember: A SCSI reservation conflict is a symptom, not the problem.
• 46. Best Practices to avoid SCSI Reservation Conflicts Keep HBA, Blade Switch (if applicable), Fabric Switch, and Array firmware up to date. Firmware updates contain fixes! Determine whether any operations are already happening on a LUN before starting another operation that may cause a SCSI Reservation. Spread out VCB Backups, VMotion operations, Template deployments, etc. Choose one ESX Server as your deployment server to avoid conflicts with multiple ESX servers trying to deploy templates. Limit access to Administrative operations in vCenter so that you control who can enact an operation that could lead to potential reservation issues (snapshots).
• 47. Best Practices to avoid SCSI Reservation Conflicts Avoid boot storms by scheduling VM reboots so that there is only one reboot per LUN. In the case of VMware View desktops, choose to leave these desktops always powered on. Use care when scheduling backups, antivirus scans, or LUN replication. Ensure that the storage array will not be overloaded with operations that could impact the hosts connected to it. Verify that no other SCSI Reservation operation is happening. Use the ATS (Atomic Test and Set) feature of VAAI, if available. More on this feature on the next slide.
• 48. Goodbye SCSI Reservations, Hello ATS! In vSphere 4.1 the following three offload primitives are supported: Hardware Assisted Locking, Full Copy, and Block Zero. These primitives are also known as Atomic Test and Set, Clone Blocks, and Write Same respectively. We will talk about Clone Blocks and Write Same in later slides. The Atomic Test and Set (ATS) primitive atomically modifies a sector on disk without the use of SCSI reservations and without needing to lock out other hosts from concurrent LUN access. This primitive also reduces the number of commands required to successfully acquire an on-disk lock. Upon receiving an ATS command, the array should atomically check whether the contents of the disk block at the specified logical block number are the same as the initiator-provided existing value and, if so, replace them with the initiator-provided new value.
• 49. Goodbye SCSI Reservations, Hello ATS! Unlike SCSI reservations, ATS commands from other hosts should not be rejected with a device status of RESERVATION CONFLICT when there are in-flight ATS commands from other hosts. Instead, these commands are queued as needed and processed in order. The ATS primitive can significantly improve I/O throughput from guest OS applications to a VMFS volume in the presence of metadata operations, and it increases the number of concurrent cluster-aware VM operations that can be supported in the presence of these I/O intensive enterprise workloads. ATS will be used for the following operations: Every lock operation executed by the VMFS-3 lock manager (metadata) Power state operations (power on/off/checkpoint/resume, snapshot and consolidate) of VMs Cluster-wide operations like Storage vMotion, vMotion, DRS
  • 50. Requirements for VAAI The following are required for VAAI: ESX/ESXi 4.1 Array with SCSI ANSI version 3 or higher Array firmware that supports VAAI primitives VAAI uses the following SCSI commands: ATS uses SCSI command 0x89 (COMPARE AND WRITE) Clone Blocks/Full Copy uses SCSI command 0x83 (EXTENDED COPY) Zero Blocks/Write Same uses SCSI command 0x93 (WRITE SAME) These VAAI operations are controlled by the following advanced settings: /VMFS3/HardwareAcceleratedLocking /DataMover/HardwareAcceleratedMove /DataMover/HardwareAcceleratedInit
  • 51. Known Issues for VAAI VAAI is enabled by default, and the primitives will issue the specific SCSI commands to determine whether the LUN supports the commands or not. If the commands are not successful, the LUN will be marked as non-VAAI capable. The reason VAAI is enabled by default is due to NDU (non-disruptive upgrades) or initiator setting changes that could make the LUNs support VAAI. This default behavior has caused issues for some arrays or fabric configurations that do not support the SCSI commands VAAI uses. Examples of this are the following: Outdated/unsupported array firmware does not handle SCSI commands correctly and returns unexpected responses Cisco SANTAP controllers do not support the commands and crash
• 52. Known Issues for VAAI To check VAAI status, run the command: # esxcfg-scsidevs -l | egrep "Display Name:|VAAI Status:" In the event that your array or fabric configuration does not support the VAAI commands and is behaving oddly, disable the VAAI functions on the ESX hosts until a firmware update is available from the vendor. To disable the VAAI primitives, execute the following commands on each ESX host:
# esxcfg-advcfg -s 0 /DataMover/HardwareAcceleratedMove
# esxcfg-advcfg -s 0 /DataMover/HardwareAcceleratedInit
# esxcfg-advcfg -s 0 /VMFS3/HardwareAcceleratedLocking
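To confirm the current state of these options before and after a change (and to re-enable VAAI later once supported firmware is in place), the same advanced settings can be read back; for example:
# esxcfg-advcfg -g /VMFS3/HardwareAcceleratedLocking
# esxcfg-advcfg -g /DataMover/HardwareAcceleratedMove
# esxcfg-advcfg -g /DataMover/HardwareAcceleratedInit
Re-enabling uses the same -s syntax with a value of 1.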
• 53. Agenda vCenter Performance Charts & ESXTop Disk Alignment Hard Disks/Storage Array SCSI Reservations APD (All Paths Down) Issue Multipathing Considerations Communication
  • 54. APD (All Paths Down) Issue
• 55. What is the APD issue? APD (all paths down) is a condition where all paths to the storage device (LUN) are lost either temporarily or permanently. The most common cause of the APD condition is intentional changes in the SAN configuration, particularly removing the presentation of a LUN to a given host(s) or migrating off a SAN entirely. The APD state can be transient, but in most cases it will be permanent unless the removed LUN is presented back to the affected host, an advanced configuration option is used, or the host is rebooted to recover from the condition. The advanced configuration option was introduced in a patch after ESX 4.0 U1 was released (included in ESX 4.0 U2 and ESX 4.1): http://kb.vmware.com/kb/1016291 The advanced configuration option can be enabled with the following command: # esxcfg-advcfg -s 1 /VMFS3/FailVolumeOpenIfAPD
• 56. What is the impact of the APD issue? If a LUN has been removed from the ESX hosts improperly, this will lead to an APD state. While the LUN is in this state, running VMs may hang or pause momentarily because the VMM requires an fsLock. Issuing a rescan at this point will exacerbate the issue and cause more VMs to pause, and for a longer period of time. This hang/pause effect can cause severe issues for time-sensitive/transactional applications such as SAP, SQL, or MS Exchange. It also effectively pauses the network stack of Windows-based OSes and can cause ICMP ping drops as a result. This issue is discussed in detail in the following KB article: http://kb.vmware.com/kb/1016626
  • 57. How can I avoid the APD issue? Engineering has approved an official procedure for proper LUN removal in ESX 4.x. This process involves creating LUN claim rules that will mask the LUNs from the vmkernel before they are removed from the host. After masking the LUN, a reclaim command is run which will unregister those LUNs from the vmkernel, allowing them to be safely removed. Once the LUNs are removed, the claim rules that were added for this procedure should be removed. The proper LUN removal procedure to avoid the APD is fully covered in the following KB article:http://kb.vmware.com/kb/1015084
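The KB article above has the authoritative procedure; as a rough sketch of the kind of commands involved on ESX 4.x (the rule number, adapter/channel/target/LUN values, and the naa identifier are placeholders):
# esxcli corestorage claimrule list
# esxcli corestorage claimrule add -r 192 -t location -A vmhba2 -C 0 -T 0 -L 20 -P MASK_PATH
# esxcli corestorage claimrule load
# esxcli corestorage claiming reclaim -d naa.xxxxxxxxxxxxxxxx
Once the LUN has been unpresented on the array, the temporary MASK_PATH rule should be deleted again and the claim rules reloaded.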
• 58. Agenda vCenter Performance Charts & ESXTop Disk Alignment Hard Disks/Storage Array SCSI Reservations APD (All Paths Down) Issue Multipathing Considerations Communication
• 60. MRU (Most Recently Used) Path Selection Policy The MRU Path Selection Policy was designed for use with Active/Passive or Passive Not Ready (PNR) arrays. These arrays have a sense of LUN ownership, which means a LUN can only be accessed via one controller, not both. As such, Active/Passive arrays have far more failover conditions. The MRU policy selects the first working path discovered at system boot time, which would be the first HBA and first path discovered. This design has an inherent flaw, as it means that unless a failover has occurred, all I/O to the LUNs will be going through the first HBA. If this path becomes unavailable, the ESX host switches to an alternative path and continues to use the new path while it is available. It does this because forcing the path back to the other array controller would cause the LUN to change ownership on the array.
• 61. Fixed Path Selection Policy Originally, the Fixed Path Selection Policy was primarily used by Active/Active arrays; however, this policy is now used by most ALUA-compatible arrays as well as iSCSI arrays with a virtual port. The Fixed Path Policy uses the concept of a preferred path, which means that you can select which path you want to use for a LUN and that path will always be used unless a failover scenario occurs. Once the path is restored, however, the storage stack will automatically switch back to the preferred path. Fixed was used in the ESX 2.x - 3.x days to load balance I/O from an ESX perspective. It is acceptable to use the MRU policy for Active/Active arrays since both controllers can serve up a LUN; however, it is NOT acceptable or supported to use the Fixed policy on an Active/Passive or Passive Not Ready (PNR) array, as this will cause path thrashing to occur.
• 62. RoundRobin Path Selection Policy The RoundRobin Path Selection Policy is the only in-box policy that load balances I/O (Fixed doesn’t count). This policy will only use “Active”, “Optimized” paths by default as valid paths to switch to. No “Standby” paths will be used, as this would cause a trespass/LUN ownership change on Active/Passive or Passive Not Ready (PNR) arrays. This is not a problem for Active/Active arrays since they have no “Standby” paths. The load balancing mechanism will switch paths by one of two methods: an IOPS threshold (commands sent) or a throughput threshold (bytes transferred).
• 63. RoundRobin Path Selection Policy When setting the RoundRobin policy, the default value for IOPS is 1000 commands and the default throughput threshold is 10485760 bytes. VMware does not recommend specific settings for either the IOPS or the bytes value with RoundRobin. For information on the correct settings for RoundRobin use with a particular storage array, the storage array vendor MUST be contacted. The array vendor will know which settings are optimal to use, but they will also know which settings NOT to use. Some vendors recommend an ‘iops’ value of ‘1’; however, this same setting on another array can have detrimental effects, resulting in poor performance or even a possible production outage. The RoundRobin policy cannot be used on MSCS Quorum RDM LUNs.
• 64. Vendor Recommended Round Robin Settings EMC DMX: http://www.emc.com/collateral/hardware/white-papers/h6531-using-vmware-vsphere-with-emc-symmetrix-wp.pdf EMC Celerra: http://www.emc.com/collateral/software/technical-documentation/h5536-vmware-esx-srvr-using-emc-celerra-stor-sys-wp.pdf NetApp: http://kb.vmware.com/kb/1010713 http://media.netapp.com/documents/tr-3749.pdf HP EVA: http://h20195.www2.hp.com/v2/GetPDF.aspx/4AA1-2185ENW.pdf
• 65. Setting Round Robin Settings From CLI Setting an entire SATP (Storage Array Type Plugin) to Round Robin:
# esxcli nmp satp setdefaultpsp -s VMW_SATP_LSI -p VMW_PSP_RR
Setting the Round Robin PSP for a single LUN:
# esxcli nmp device setpolicy -d naa.600174d001000000010f003048318438 --psp VMW_PSP_RR
Command line option to set the iops value:
# esxcli nmp roundrobin setconfig -d naa.600174d001000000010f003048318438 --iops 10 --type iops
Command line option to set the bytes value:
# esxcli nmp roundrobin setconfig -d naa.600174d001000000010f003048318438 --bytes 11 --type bytes
• 66. Displaying Round Robin Settings From CLI To display the Round Robin settings for a LUN:
# esxcli nmp device list -d naa.60a9800050334b356b4a51312f417541
Device Display Name: NETAPP Fibre Channel Disk (naa.60a9800050334b356b4a51312f417541)
Storage Array Type: VMW_SATP_ALUA
Storage Array Type Device Config: {implicit_support=on;explicit_support=off;explicit_allow=on;alua_followover=on;{TPG_id=2,TPG_state=AO}{TPG_id=3,TPG_state=ANO}}
Path Selection Policy: VMW_PSP_RR
Path Selection Policy Device Config: {policy=rr,iops=1000,bytes=10485760,useANO=0;lastPathIndex=3: NumIOsPending=0,numBytesPending=0}
Working Paths: vmhba2:C0:T2:L1, vmhba1:C0:T2:L1
• 67. Known Issues for Round Robin in ESX 4.x There are currently two known issues when using the Round Robin Path Selection Policy: After setting the IOPS value for Round Robin, the setting is not retained after a reboot and instead shows a much higher value. This is resolved in a patch for ESX 4.0: http://kb.vmware.com/kb/1017721 The SATP for EMC DMX (VMW_SATP_SYMM) does not distribute I/O evenly with Round Robin after a state change occurs. This state change includes adding a new path (FA) or losing a path and recovering. This is also resolved in patches for ESX 4.0 and ESX 4.1: ESX 4.0: http://kb.vmware.com/kb/1023759 ESX 4.1: http://kb.vmware.com/kb/1027013
• 68. Agenda vCenter Performance Charts & ESXTop Disk Alignment Hard Disks/Storage Array SCSI Reservations APD (All Paths Down) Issue Multipathing Considerations Communication
• 70. Communication Communication between various business units is required for the success of a company. Many companies silo their teams into networking, storage, server/hardware, Windows/Linux administrators, Database/Exchange administrators, virtualization administrators, and so on. This silo design, or syndrome, has been responsible for a lack of cooperation and communication between the siloed teams. There have been movements to try to break down these barriers; however, this is not an easy task. Ultimately, the victims of this design may not be the people on the silo teams but the business/company itself, which in turn affects customers directly.
• 71. Communication In recent years, virtualization has become a huge movement in datacenters, and the criticality of virtualized servers has shifted from simple application servers to mission-critical applications. This mission-critical requirement is far more demanding from an I/O (CPU, memory, and disk) and disk capacity perspective. The need to have virtualization administrators and storage administrators communicate effectively has reached critical mass. More than ever, these two teams need to be on the same page at all times, and the need to cross-pollinate knowledge between them has never been higher. Lack of effective communication between just these two silo groups has led some companies to experience production outages and scalability issues.
• 72. Improving Communication While change is not easy, there are some simple policies that can be adopted in order to break down the silo walls between the storage and virtualization administration teams: Cross-train between both teams (basic to intermediate) Meetings covering topics that could affect the other silo should always include at least one representative from that silo who understands the discussion Hold regular meetings (weekly) between the two silos to discuss current and future projects Maintain an email distribution list that contains all parties of both silos Always consider the other party before any changes are made to the virtualization infrastructure, array configuration, or policies on either team