VMUG vSphere Storage (Rev E)
1. vSphere 4.0 Storage: Features and Enhancements Nathan Small Staff Engineer Rev E Last updated 3rd August 2009 VMware Inc
29. GUI Changes - Display Device Info Note that there are no further references to vmhbaC:T:L. Unique device identifiers such as the NAA id are now used.
30. GUI Changes - Display HBA Configuration Info Again, notice the use of NAA ids rather than vmhbaC:T:L.
31. GUI Changes - Display Path Info Note the reference to the PSP & SATP. Note the (I/O) status designating the active path.
36. New Storage Alarms New Datastore-specific Alarms. New VM-specific Alarms. This alarm allows the tracking of Thin Provisioned disks. This alarm will trigger if a datastore becomes unavailable to the host. This alarm will trigger if a snapshot delta file becomes too large.
44. Mounting A Snapshot The original volume is still presented to the ESX. Snapshot: notice that the volume label is the same as the original volume.
62. Comparison: Volume Grow & Add Extent
Must power off VMs: Volume Grow - No; Add Extent - No
Can be done on a newly provisioned LUN: Volume Grow - No; Add Extent - Yes
Can be done on an existing array-expanded LUN: Volume Grow - Yes; Add Extent - Yes (but not allowed through the GUI)
Limits: Volume Grow - an extent can be grown any number of times, up to 2TB; Add Extent - a datastore can have up to 32 extents, each up to 2TB
Results in creation of a new partition: Volume Grow - No; Add Extent - Yes
VM availability impact: Volume Grow - none, if the datastore has only one extent; Add Extent - introduces a dependency on the first extent
63. Volume Grow GUI Enhancements Here I am choosing the same device on which the VMFS is installed; there is currently 4GB free. This option expands the VMFS using free space on the current device. Notice that the current extent capacity is 1GB.
64. VMFS Grow - Expansion Options
LUN provisioned at the array: Dynamic LUN Expansion
VMFS volume/datastore provisioned for ESX: VMFS Volume Grow
Virtual disk provisioned for the VM: VMDK Hot Extend
76. Other Storage Features/Enhancements (ctd) ESX 3.x boot time LUN selection – which sd device represents an iSCSI disk and which represents an FC disk?
77. Other Storage Features/Enhancements (ctd) ESX 4.0 boot time LUN selection. Hopefully this will address incorrect LUN selections during install/upgrade.
The C in vmhbaN:C:T:L:P is for channel. Typically we only see channel 0 in FC & iSCSI. Other identifiers might include a T10 name, as observed with the OpenFiler appliance. We are moving away from the Controller:Target:LUN:Partition notation and using the NAA id more. MPX = Multipath X device = unknown (CD-ROMs, some local storage types too).
The SDK for third parties to write their own code for the PSA is called the VMKDH. This eliminates the recertification issue for the storage vendors. EMC will be first for GA (PowerPath). Remember that the PSA and the Multipathing Plugin (MPP) simply take over the tasks that were carried out by the SCSI mid-layer in previous ESX versions.
The multipath plug-ins are VMkernel modules, e.g. NMP. Advanced functions like the quantumSched are now a part of the PSA, such as how much time VMs can spend on the run queue. Note that this is still SCSI mid-layer functionality, except we now split the operations between the PSA & the MPP.
Multipathing Plugin (MPP) Point 2: the PSA discovers IBM on 2 paths and EMC on 4 paths and determines which MPP each should be given to. The 4 paths for one array are collapsed and presented as one array.
NMP Native Multipathing Plugin
3rd point: At this point in time, there are no known partners working on SATPs.
The esxcli command also appears in the RCLI for VI4.
The new Pluggable Storage Architecture (PSA) also allows third-party plug-ins to take control of the entire path failover and load balancing operations, replacing VMware's NMP. Also in ESX 4, there are two ways to set a PSP for a device:
1. By setting the default PSP for a given SATP in a claim rule.
2. By setting the managing PSP for a given device, like so:
# esxcli nmp device setpolicy --device naa.50060160ffe0000150060160ffe00001 --psp PSP_RR
There is currently no way to set PSPs by vendor/model strings. As for config options, there is currently no way to provide PSP-wide config options in a claim rule the way there is for a SATP. However, the per-path and per-device configuration you set using esxcli, such as:
# esxcli nmp psp setconfig --device naa.50060160ffe0000150060160ffe00001 --config iops=213
is persistent across reboots, so you only need to set it up once per device. In fact, once all your devices are claimed, you can simply write a shell script to set the per-device configuration for all of your devices, as sketched below.
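As a rough, unsupported sketch of that scripting idea: the loop below walks the NAA-named devices reported by esxcli and applies the same per-device option to each. It assumes the device names printed by esxcli nmp device list appear at the start of their own lines, and the iops value is only the example figure from above; adjust both for your environment.
for dev in $(esxcli nmp device list | grep '^naa\.'); do
    # apply the per-device PSP option shown earlier to every NAA device
    esxcli nmp psp setconfig --device "$dev" --config "iops=213"
done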
Round Robin is the only Path Selection Policy which load balances. It has been around in experimental form since ESX 3.0 but is finally supported in ESX 4.0. It only uses active paths, so it is of most use on Active/Active arrays. On Active/Passive arrays, however, it will load balance between the ports of the same Storage Processor.
"mpx" is a VMware specific namespace. It is not an acronym - it could roughly stand for "Mult Pathing x". The mpx name space is used when no other valid namespace can be obtained from the LUN, such as NAA. It is not globally unique and is not persistent across reboots. Typically only local devices will not have NAA, IQN, etc namespaces and so have names starting with "mpx.". The Storage Array Type Device Config: {navireg ipfilter} config settings are specific to the SATP_CX, SATP_INV, and SATP_ALUA_CX. The accepted values are: navireg_on, navireg_off, ipfilter_on, ipfilter_off navireg_on starts automatic registration of the device with Navisphere navireg_off stops the automatic registration of the device ipfilter_on stops the sending of the host name for Navisphere registration, used if host is known as "localhost" ipfilter_off enables the sending of the host name during Navisphere registration # esxcli nmp satp getconfig -d naa.60060160432017003461b060f9f6da11 {navireg ipfilter}
The following slides show output that is collected in the vm-support dumps.
The vicfg-mpath command also takes --username & --password as arguments.
For FC arrays, this will show World Wide Names (WWNs)
To my knowledge, there is only one MPP under development at present (Sep 2008), and that is EMC Powerpath version 5.4.
No one is currently developing SATPs or PSPs, but this is the framework for new arrays so they don't have to be qualified by VMware. Third-party PSPs and SATPs are to be treated like third-party software and can be unloaded in order to troubleshoot; a reboot is required after unloading the module.
Port binding is a mechanism of identifying certain VMkernel ports for use by the iSCSI storage stack. Port binding is necessary to enable storage multipathing policies, such as VMware round-robin load balancing, MRU, or fixed-path, to apply to iSCSI NIC ports and paths. Port binding does not work in combination with IPv6. When users configure port binding they expect to see additional paths for each bound VMkernel NIC. However, when they configure the array under an IPv6 global scope address the additional paths will not be established. Users only see paths established on the IPv6-routable VMkernel NIC. For instance, if users have two target portals and two VMkernel NICs, they see four paths when using IPv4. They see only two paths when using IPv6. Because there are no paths for failover, path policy setup does not make sense. Workaround: use IPv4 and port binding, or configure the storage array and the ESX/ESXi host with local scope IPv6 addresses on the same subnet (switch segment). You cannot currently use global scope IPv6 with port binding.
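For reference, port binding in ESX/ESXi 4.0 is done from the command line by binding VMkernel NICs to the software iSCSI adapter. This is a minimal sketch; vmk1, vmk2 and vmhba33 are placeholder names for your iSCSI VMkernel ports and your software iSCSI adapter.
# esxcli swiscsi nic add -n vmk1 -d vmhba33
# esxcli swiscsi nic add -n vmk2 -d vmhba33
# esxcli swiscsi nic list -d vmhba33
The list command confirms which VMkernel NICs are currently bound; after a rescan of the adapter you should see the additional paths.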
Taken from the SAN Compatibility Guide
Animation: Click 1 – Highlight the Storage Link in Hardware Configuration Click 2 – Highlight the datastores available on this host Click 3 – Highlight the details for a particular datastore Click 4 – NAA id banner launches - NAA – Network Address Authority
Animation: Click 1 – Highlight the Storage Adapter Link in Hardware Configuration Click 2 – Highlight one particular adapter available on this host, in this case vmhba33. Note the NMP on the bottom right
Animation: Click 1: Select a datastore on your ESX, click on the Properties link Click 2: This launches the VMFS properties window Click 3: Highlight the ‘Manage Paths’ button which, when clicked, launches the Manage Path window Click 4: The Manage Path window opens which displays the PSP, SATP and other pathing info used by these paths and LUN.
Animation for Datacenter wide rescan: Click 1: Select your DataCenter, right click and select Rescan for Datastores Click 2: Acknowledge that the rescan may take a long time Click 3: Scan for either New Storage devices, New VMFS volumes or both Click 4: Notice the Rescan tasks for each ESX appearing in the Recent Tasks window in vCenter
Identifiers that are persistent and globally unique: t10, eui, naa, rtp1, tpg, lug, md5, sns. This started in 3.5, where we began using the NAA id to identify each LUN.
This output was achieved by creating a real snapshot of an EMC Clariion LUN which has a VMFS volume and presenting it back to the same ESX. I tried changing the original LUN's ID, but the ESX handled this without a problem since we reference the LUN by the NAA.
We have got rid of DisallowSnapshotLUN but we retain EnableResignature purely to support SRM. If you do use the GUI, be aware that changes made with the CLI utility, or through a VI Client attached directly to a host, are not reflected automatically in VC. This causes an issue for customers whose environment is running with DisallowSnapshotLUN = 0. There is a solution for this covered in the coming slides.
The equivalent RCLI for ESXi is vicfg-volume
Note that after you have unmounted the persistently mounted LUN, these entries stay in esx.conf, except that the forceMount option is set to false.
/fs/vmfs[48d247dd-7971f45b-5ee4-0019993032e1]/forceMountedLvm/forceMount = "false"
/fs/vmfs[48d247dd-7971f45b-5ee4-0019993032e1]/forceMountedLvm/lvmName = "48d247da-b18fd17c-1da1-0019993032e1"
/fs/vmfs[48d247dd-7971f45b-5ee4-0019993032e1]/forceMountedLvm/readOnly = "false"
Readonly is for NFS volumes, but it will not be displayed in the UI.
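As an illustration of the unmount step referenced above (assuming esxcfg-volume offers a -u/--umount switch alongside the -l/-m/-M/-r options shown on a later slide), unmounting the force-mounted volume by its UUID would look like this:
# esxcfg-volume -u 48d247dd-7971f45b-5ee4-0019993032e1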
It will let you know if the volume can be resignatured, and whether it resides on or is visible to the same ESX host. It will not allow you to mount the snapshot if you try.
Attempting to mount a snapshot volume on the same ESX as the original results in the following message in /var/log/vmkernel:
Oct 1 18:11:00 cs-tse-f116 vmkernel: 4:05:33:23.794 cpu5:4111)LVM: MountSnapshotVolume:9536: Volume 48d247da-b18fd17c-1da1-0019993032e1 is already mounted
This appears if attempting the operation via the GUI or CLI. The GUI doesn't show any error or event if this operation is attempted – one simply sees the volume not becoming available. From the CLI, the following is observed:
[root@cs-tse-f116 ~]# esxcfg-volume -l
VMFS3 UUID/label: 48d247dd-7971f45b-5ee4-0019993032e1/cormac_grow_vol
Can mount: Yes
Can resignature: Yes
Extent name: naa.6006016043201700f30570ed09f6da11:1 range: 0 - 15103 (MB)
[root@cs-tse-f116 ~]# esxcfg-volume -m 48d247dd-7971f45b-5ee4-0019993032e1
Mounting volume 48d247dd-7971f45b-5ee4-0019993032e1
Error: SysinfoException: Node (5015) ; Status(bad0007)= Bad parameter; Message= Module: lvmdriver Instance(0): Input(3) 48d247da-b18fd17c-1da1-0019993032e1 rw naa.6006016043201700f30570ed09f6da11:1
[root@cs-tse-f116 ~]# esxcfg-volume -M 48d247dd-7971f45b-5ee4-0019993032e1
Persistently mounting volume 48d247dd-7971f45b-5ee4-0019993032e1
Error: SysinfoException: Node (5015) ; Status(bad0007)= Bad parameter; Message= Module: lvmdriver Instance(0): Input(3) 48d247da-b18fd17c-1da1-0019993032e1 rw naa.6006016043201700f30570ed09f6da11:1
[root@cs-tse-f116 ~]# esxcfg-volume -r 48d247dd-7971f45b-5ee4-0019993032e1
Resignaturing volume 48d247dd-7971f45b-5ee4-0019993032e1
I am working with our engineering team on a permanent fix
The first point regarding GUI support is important since previously, SVMotion operations could only be initiated via the Remote CLI in ESX 3.5. The final point, the ability to move an individual disk without moving the VM's home, was not available in previous versions of SVMotion in ESX 3.5. SVMotion of a powered-on VM with an RDM preserves the RDM. Cold migration of a powered-off VM with an RDM still converts the RDM to a flat VMDK file.
This works with non-pass-through RDMs only. Cold migration of an RDM when the VM is powered off converts it to a VMDK. If the VM is powered on, the RDM is preserved. You can also use the GUI to do Storage VMotions on older ESX 3.x hosts, but this continues to use the old snapshot mechanism.
There is still no plan to include an svmotion executable on the Service Console of the ESX.
Previously with RDMs, we had to remove the mapping file and remake it – now this is updated ‘on-the-fly’
VM availability impact refers to what happens to availability if we use the feature. With Volume Grow, availability is no different. With extents, there is a dependency on the first extent, so that if the first extent goes, they all go, i.e. we lose access to the whole volume.
This option first became available in ESX/ESXi 3.5 U2. There is currently no shrinking capability for a VMDK.
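As a hedged illustration of growing a virtual disk from the command line (the VMDK path below is hypothetical; in vSphere 4 the same grow can also be done from the GUI while the VM is running):
# vmkfstools -X 20G /vmfs/volumes/datastore1/myvm/myvm.vmdk
This extends the VMDK to 20GB; the guest OS still has to grow its own filesystem before it can use the new space.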
There should be esxcli and esxcfg-volumes RCLI commands in RC & GA….
esxcfg-mpath used to show the size of the LUN in earlier ESX versions. It does not in ESX 4.0. This is why the output above is useful if you wish to identify a LUN based on size.
vmkfstools -G takes the same partition as both the source and destination, i.e. you are growing the partition onto itself.
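A minimal sketch of what that looks like, reusing the NAA id from the earlier snapshot example as a placeholder for the VMFS partition being grown:
# vmkfstools -G /vmfs/devices/disks/naa.6006016043201700f30570ed09f6da11:1 /vmfs/devices/disks/naa.6006016043201700f30570ed09f6da11:1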
There is no VMFS upgrade to worry about. Older versions will stay as they are. Newly formatted volumes from ESX 4.x boxes will get the new version. Newly formatted volumes from ESX 3.x.x boxes will get the old version. All VMFS-3.xx versions are supported by all ESX 3.x.x hosts. The two enhancements that took us from 3.31 to 3.33 are both applicable to the as-yet-unreleased VMFS-4, so nothing to report there. The following is true though (as has always been, since we throw in optimizations with every ESX release): ESX 4.x.x boxes will work more efficiently on any given VMFS-3.xx volume than ESX 3.x.x boxes working on the same volume.
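If you want to check which VMFS minor version a given datastore carries, a simple query (the datastore name here is a placeholder) is:
# vmkfstools -P /vmfs/volumes/datastore1
The first line of the output reports the file system version (for example VMFS-3.31 or VMFS-3.33).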
The new in-guest virtualization-optimized SCSI driver has been developed to counter performance competition from Hyper-V.
Paravirtual SCSI adapters are supported on the following guest operating systems: Windows 2008, Windows 2003, Red Hat Enterprise Linux (RHEL) 5.
The following features are not supported with Paravirtual SCSI adapters: boot disks, Record/Replay, Fault Tolerance, MSCS Clustering.
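For illustration only, adding a PVSCSI controller to a VM comes down to a pair of lines in the .vmx file, normally set through the vSphere Client by choosing the "VMware Paravirtual" SCSI controller type; the controller number below is hypothetical.
scsi1.present = "TRUE"
scsi1.virtualDev = "pvscsi"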