vSphere on NAS Design Considerations
1. Before we start
• Get involved! Audience participation is encouraged and requested.
• If you use Twitter, feel free to tweet about this session (use hashtag #VMUG or handle @SeattleVMUG)
• I encourage you to take photos or videos of today’s session and share them online
• This presentation will be made available online after the event
2. Design Considerations for vSphere on NFS
Discussing some design considerations for using vSphere with NFS
Scott Lowe, VCDX 39
vExpert, Author, Blogger, Geek
http://blog.scottlowe.org / Twitter: @scott_lowe
4. Agenda
• Some NFS basics
• Some link aggregation basics
• NFS bandwidth
• Link redundancy
• NFS and iSCSI interaction
• Routed NFS access
• Other considerations
5. Some NFS Basics
• All versions of ESX/ESXi use NFSv3 over TCP
• NFSv3 uses a single TCP session for data transfer
• This single session originates from one VMkernel port and terminates at the NAS IP interface/export
• vSphere 5 adds support for DNS round robin, but still uses a single TCP session and only resolves the DNS name once
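The round-robin point above is worth sketching: even if DNS hands out a different address on each lookup, the host resolves the name once at mount time and keeps one TCP session to that one address. A toy model (hostname and IPs are made up):

```python
from itertools import cycle

# Hypothetical round-robin DNS: each lookup returns the next address
# in the record set, as a DNS server would for a round-robin A record.
RECORDS = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]
_rr = cycle(RECORDS)

def resolve(hostname: str) -> str:
    """Return the next address for the (made-up) NAS hostname."""
    return next(_rr)

# ESX/ESXi resolves the datastore's hostname once at mount time...
mount_ip = resolve("nas.example.com")

# ...so every subsequent NFS operation reuses the same session/address:
io_targets = [mount_ip for _ in range(5)]
print(io_targets)  # all five operations hit 10.0.0.11
```

Round robin therefore helps spread *different datastores* across NAS interfaces, not traffic within one datastore.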
6. Some Link Aggregation Basics
• Requires unique hash values to place flows on a link in the bundle
• Identical hash values will always result in the same link being
selected
• Does provide link redundancy
• Doesn’t increase per-flow bandwidth, only aggregate bandwidth
• Need special support to avoid single point of failure (SPoF)
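The hashing behavior above can be sketched in a few lines of Python. This is a toy model, not a switch implementation: real gear hashes MAC/IP/port tuples in hardware, and the CRC here is purely illustrative.

```python
import zlib

def select_link(src_ip: str, dst_ip: str, n_links: int) -> int:
    """Pick an uplink in the bundle by hashing the flow tuple,
    roughly how an IP-hash load-balancing policy behaves (sketch)."""
    key = f"{src_ip}->{dst_ip}".encode()
    return zlib.crc32(key) % n_links

# One VMkernel IP talking to one NAS IP is a single flow: the hash is
# constant, so the same link in the bundle is always chosen.
links = {select_link("192.168.1.10", "192.168.1.200", 4) for _ in range(100)}
print(len(links))  # 1 -- one flow never spreads across the bundle

# Many distinct flows *can* land on different links (aggregate bandwidth).
many = {select_link(f"192.168.1.{h}", "192.168.1.200", 4) for h in range(10, 60)}
print(len(many))  # typically more than 1 across 50 distinct flows
```

This is exactly why link aggregation raises aggregate bandwidth but not per-flow bandwidth.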
7. NFS Bandwidth
• Can’t use link aggregation to increase per-datastore bandwidth
• Can’t use DNS round robin to increase per-datastore bandwidth
• Can’t use multiple VMkernel NICs to increase per-datastore bandwidth
• Must move to a faster network transport (from 1Gb to 10Gb Ethernet, for example)
• That being said, most workloads are not bandwidth constrained
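Put as back-of-the-envelope arithmetic (a sketch of the reasoning on this slide, with illustrative numbers):

```python
def per_datastore_bandwidth_gbps(link_gbps: float, links_in_bundle: int) -> float:
    """A single NFSv3 datastore rides one TCP flow, and one flow maps to
    one link in the bundle -- so its ceiling is one link's speed."""
    return link_gbps  # the bundle size doesn't help a single flow

def aggregate_bandwidth_gbps(link_gbps: float, links_in_bundle: int) -> float:
    """Many flows (many datastores/hosts) can spread across the bundle."""
    return link_gbps * links_in_bundle

print(per_datastore_bandwidth_gbps(1, 4))   # 1  -- a 4x1Gb LAG still caps one datastore at ~1Gb
print(per_datastore_bandwidth_gbps(10, 1))  # 10 -- a faster transport is the only per-datastore fix
print(aggregate_bandwidth_gbps(1, 4))       # 4  -- the LAG helps across many flows
```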
8. Link Redundancy
• No concept of multipathing; link redundancy must be managed
at the network layer
• No concept of multiple active “paths” per datastore
• Link aggregation helps but is not required
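Handling redundancy at the network layer means failover is an ordered teaming decision, not storage multipathing. A minimal sketch (uplink names are made up; real NIC teaming does this in the vSwitch):

```python
def active_uplink(uplinks, link_up):
    """Return the first healthy uplink from an ordered active/standby
    list, the way NIC-teaming failover selects a path (sketch)."""
    for nic in uplinks:
        if link_up.get(nic, False):
            return nic
    raise RuntimeError("all uplinks down -- datastore unreachable")

team = ["vmnic0", "vmnic1"]
print(active_uplink(team, {"vmnic0": True, "vmnic1": True}))   # vmnic0
print(active_uplink(team, {"vmnic0": False, "vmnic1": True}))  # vmnic1 after failover
```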
9. NFS and iSCSI Interaction
• iSCSI traffic is generally “pinned” to specific uplinks via port binding/multipathing configuration; not so for NFS traffic
• Traffic could “cross” uplinks under certain configurations
• Need to keep separate with:
• Per-port group failover configurations
• Separate vSwitches
• Separate IP subnets for iSCSI and NFS traffic
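The separate-subnet approach is easy to sanity-check with the standard library (the VMkernel addresses below are made-up examples):

```python
import ipaddress

def same_subnet(ip_a: str, ip_b: str, prefix: int = 24) -> bool:
    """True if two interface IPs fall in the same /prefix network."""
    net_a = ipaddress.ip_network(f"{ip_a}/{prefix}", strict=False)
    net_b = ipaddress.ip_network(f"{ip_b}/{prefix}", strict=False)
    return net_a == net_b

# Hypothetical VMkernel addressing: iSCSI and NFS on separate subnets,
# so the IP routing table keeps each traffic type on its intended uplinks.
iscsi_vmk, nfs_vmk = "10.10.10.21", "10.10.20.21"
print(same_subnet(iscsi_vmk, nfs_vmk))  # False -- separated as intended
```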
10. Routed NFS Access
• Supported as of vSphere 5.0 U1
• Be sure to use an FHRP (HSRP or VRRP) for gateway redundancy and apply QoS where needed
• Can’t use IPv6 or vSphere Distributed Switch (VDS)
• Be sure latency won’t be an issue (WAN routing not supported)
• More information available at http://blogs.vmware.com/vsphere/2012/06/vsphere-50-u1-now-supports-routed-nfs-storage-access.html
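"Routed" here simply means the NFS server sits outside the VMkernel port's subnet, so traffic must pass through a gateway. A quick check with illustrative addresses:

```python
import ipaddress

def is_routed(vmk_ip: str, nfs_server_ip: str, prefix: int = 24) -> bool:
    """True when the NFS server is outside the VMkernel port's subnet,
    i.e. traffic must traverse a gateway (sketch; example addresses)."""
    vmk_net = ipaddress.ip_network(f"{vmk_ip}/{prefix}", strict=False)
    return ipaddress.ip_address(nfs_server_ip) not in vmk_net

print(is_routed("10.1.1.50", "10.1.1.200"))  # False -- same L2 segment, no gateway involved
print(is_routed("10.1.1.50", "10.2.0.200"))  # True  -- needs an HSRP/VRRP-backed gateway
```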
11. Other Considerations
• Thin-provisioned VMDKs: need the VAAI-NFS plugin to do thick-provisioned VMDKs
• Datastore sizing: SCSI locking not an issue, but still need to
consider:
• Underlying disk architectures/layout and IOPS requirements
• Ability to meet RPO/RTO
• Jumbo frames: can be useful, but not necessarily required
• ESXi configuration recommendations: follow vendor-provided recommended practices
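For the IOPS side of datastore sizing, a common rough formula weights writes by the RAID write penalty. A sketch with purely illustrative numbers (check your array vendor's guidance for real values):

```python
def backend_iops(disks: int, iops_per_disk: int, write_pct: float,
                 raid_write_penalty: int) -> float:
    """Rough usable IOPS for a disk pool, discounting raw spindle IOPS
    by the RAID write penalty (sketch; inputs are assumptions)."""
    raw = disks * iops_per_disk
    return raw / (write_pct * raid_write_penalty + (1 - write_pct))

# e.g. 24 x 10k SAS (~150 IOPS each), 30% writes, RAID-5 penalty of 4
print(round(backend_iops(24, 150, 0.30, 4)))  # 1895
```

The point of the slide stands: size the datastore around what the underlying disks can deliver and what your RPO/RTO requires, not around SCSI locking.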
14. Coming to VMworld?
• If you’re coming to VMworld (and you should be!), consider bringing your spouse/partner with you!
• Spousetivities will be offering planned, organized activities for spouses/partners/friends traveling with VMworld conference attendees
• See http://spousetivities.com for more information
Speaker notes:
ESX/ESXi will use only a single VMkernel port because of IP routing behaviors.

Refer back to networking basics (same hash values) and NFS basics (single TCP session).

Refer to NFS basics (single TCP session, resolves name once).

Refer to NFS basics (one VMkernel NIC due to IP routing table, single TCP session).