Sirius Computer Solutions
Virtual Machine Compatibility w/ ESXi 6.0
128 vCPUs / 4TB of RAM
Server Compatibility
Support for Windows Server 2012 R2 and SQL 2012
Failover Clustering and AlwaysOn Availability Group support
IPv6 Support Enhancements
Platform Features
Platform Features
Metric Windows Appliance
Hosts / vCenter 1,000 1,000
Powered on VMs / vCenter 10,000 10,000
Hosts / Cluster 64 64
VMs / Cluster 6,000 6,000
Linked Mode Support? Yes Yes
vCenter Features
Windows install now supports embedded Postgres and external SQL Server/Oracle DBs
vCSA supports embedded Postgres and external Oracle DBs
Platform Services Controller
Platform Services Controller (PSC) provides the following services:
vCenter Single Sign-On (SSO)
License Service
Lookup Service
VMware Directory Service
VMware Certificate Authority (VMCA)
Platform Services Controller (continued)
Deployment Options
Embedded and External
External has built-in replication
Enables highly resilient configurations:
(N) vCenters => (1) PSC
(1) vCenter => (N) PSCs
(N) vCenters => (N) PSCs
Certificates
VMware Certificate Authority (VMCA)
Provisions ESXi hosts and vCenter Server/services with certificates signed by the VMCA
Not required – you may use your own CA
Can be used in the following modes:
Default – self-signed VMCA root; default certs installed and easily regenerated
Enterprise – VMCA acts as a subordinate CA to your enterprise CA
Custom – VMCA is disabled and you issue your own certs; adds complexity
Certificates (continued)
VMware Endpoint Certificate Store (VECS)
Stores all certificates and keys used for vCenter and its services – not optional
ESXi certificates are stored locally
Use vecs-cli to manage certificates and keys
Linked Mode
vSphere 5.5 vSphere 6.0
Windows Yes Yes
Appliance No Yes
Single Inventory View Yes Yes
Single Inventory Search Yes Yes
Replication Mechanism Microsoft ADAM Native
Roles and Permissions Yes Yes
Licenses Yes Yes
Policies No Yes
Tags No Yes
vMotion
Cross vSwitch vMotion
Does not change the VM's IP; the source and destination switches must have L2 adjacency
Supported Functionality:
vSS => vSS
vSS => vDS
vDS => vDS
vDS => vSS is not supported
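The path rules above are easy to misremember; as an illustrative sketch (plain Python, not a VMware API), they can be encoded in a lookup:

```python
# Supported cross vSwitch vMotion paths in vSphere 6.0:
# vSS => vSS, vSS => vDS, and vDS => vDS are allowed; vDS => vSS is not.
SUPPORTED_PATHS = {("vSS", "vSS"), ("vSS", "vDS"), ("vDS", "vDS")}

def vmotion_path_supported(src: str, dst: str) -> bool:
    """Return True if a cross vSwitch vMotion from src to dst is supported."""
    return (src, dst) in SUPPORTED_PATHS
```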
vMotion (continued)
Cross vCenter vMotion
Changes compute, storage, network, and vCenter simultaneously
Does not require shared storage
250 Mbps of bandwidth per concurrent vMotion is required
Same SSO domain only from the GUI; a different SSO domain is possible via the API
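The 250 Mbps requirement implies a simple sizing rule of thumb; a sketch, assuming bandwidth is the only constraint:

```python
PER_VMOTION_MBPS = 250  # bandwidth required per concurrent vMotion

def max_concurrent_vmotions(link_mbps: float) -> int:
    """Number of simultaneous cross vCenter vMotions a link can sustain."""
    return int(link_mbps // PER_VMOTION_MBPS)
```

For example, a 1 Gbps link sustains four concurrent migrations under this rule.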
vMotion (continued)
Misc Notes
The vMotion network can now cross L3 boundaries and can use its own TCP/IP stack
A provisioning network can be defined for virtual machine cold migration, cloning, and snapshots, as well as network file copy traffic during long-distance vMotions
Up to 150 ms RTT is supported for long-distance vMotion
vMotion of MSCS cluster-across-box (CAB) configurations with physical mode RDMs is now supported
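A pre-migration RTT sanity check can be sketched as follows; illustrative only, the 150 ms figure being the vSphere 6.0 support limit noted above:

```python
RTT_LIMIT_MS = 150  # vSphere 6.0 long-distance vMotion limit (previously 10 ms)

def long_distance_vmotion_ok(measured_rtt_ms: float) -> bool:
    """True if the measured round-trip time is within the supported limit."""
    return measured_rtt_ms <= RTT_LIMIT_MS
```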
Content Library
Provides content management and synchronization – one central location to manage all content.
Content types: ISO images, scripts, VM templates, vApps
Content Library
Uses publisher / subscriber model
vCenter => vCenter
vCloud Director => vCenter
Max of 256 content items per library and 10 simultaneous copies
Syncs every 24 hours
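The library limits above lend themselves to simple admission checks; a sketch with constants taken from the limits above (function names are illustrative, not a VMware API):

```python
MAX_ITEMS_PER_LIBRARY = 256   # content items per library
MAX_CONCURRENT_COPIES = 10    # simultaneous copy operations

def can_add_item(current_item_count: int) -> bool:
    """True if one more item fits within the 256-item library limit."""
    return current_item_count < MAX_ITEMS_PER_LIBRARY

def can_start_copy(active_copy_count: int) -> bool:
    """True if another copy can start within the 10-copy limit."""
    return active_copy_count < MAX_CONCURRENT_COPIES
```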
Network I/O Control (NIOC) v3
Applied at the vNIC level
Bandwidth is guaranteed to the virtual interface
Reservation set on the vNIC in the VM properties
Applied at the Distributed Port Group level
Bandwidth is guaranteed to a distributed port group
Reservation set on the VDS port group
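Whichever level reservations are applied at, they must collectively fit within the underlying capacity; a minimal sanity check (illustrative, not part of any VMware tooling):

```python
def reservations_fit(reservations_mbps: list, capacity_mbps: float) -> bool:
    """True if the combined bandwidth reservations fit within the capacity
    of the underlying uplink or distributed port group."""
    return sum(reservations_mbps) <= capacity_mbps
```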
Virtual Volumes
Per-VM granularity for data services and policy instead of per-LUN granularity.
Every VMDK is an object that you can apply policies and settings to, stored in a non-partitioned storage pool.
Requires array support; some implementations are native to the array, others run as an appliance that vCenter communicates with.
VMFS is not used – the objects that comprise the VM are stored directly on the array, so there is no need for constructs like VAAI UNMAP.
Virtual Volumes (continued)
VASA Provider – provides out-of-band communication between vCenter and the storage system; again, can be native or an appliance. Think control plane.
Protocol Endpoint – an I/O proxy that directs I/O from the host to the appropriate Virtual Volumes stored on a storage container. Takes the form of a LUN or mount point but is not used for actual data storage. NFS 4.1 is not supported.
Storage Container – a pool of raw storage used to store Virtual Volumes; not a LUN, managed by the array.
Storage Policy Based Management – allows VMs to be placed on storage that meets the requirements of that particular VM.
Virtual Volume Datastore – the logical representation of a storage container in vSphere.
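Policy-to-capability matching can be sketched as a dictionary comparison; purely illustrative (the capability names are made up, and real SPBM matching is performed by vCenter and the VASA provider):

```python
def matching_containers(policy: dict, containers: dict) -> list:
    """Return the names of storage containers whose advertised capabilities
    satisfy every requirement in the VM's storage policy."""
    return [name for name, capabilities in containers.items()
            if policy.items() <= capabilities.items()]
```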
VSAN 6.0
All-flash configurations now supported
100K IOPS per host
Up to 64 nodes
Max VMDK size of 62 TB
Ability to evacuate a single disk (in 5.5 you had to evacuate an entire disk group)
VSAN 6.0 (continued)
Fault domains now available
Allow for controlled placement of replicas and witnesses
Can align with rack boundaries to account for rack failures
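The placement idea can be sketched in a few lines; illustrative only, since VSAN's actual placement logic is internal to the cluster:

```python
def place_replicas(replica_count: int, fault_domains: list) -> dict:
    """Map each replica to a distinct fault domain (e.g. a rack) so that a
    single rack failure cannot take out more than one copy of the data."""
    if replica_count > len(fault_domains):
        raise ValueError("need at least one fault domain per replica")
    return dict(zip(range(replica_count), fault_domains))
```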
Direct-attached JBOD is now supported for certain blade systems
Check the HCL – just UCS at last glance
Controller-based D@RE and hardware checksum
Check the HCL for both
High Availability
VM Component Protection
Restarts VMs during All Paths Down (APD) / Permanent Device Loss (PDL) situations
Prior versions had PDL support via the CLI but no APD support as related to VM restart
64 hosts now supported in an HA cluster
8,000 VMs per cluster
Fault Tolerance
Feature vSphere 5.5 vSphere 6.0
Max vCPUs 1 4
RAM 64 GB 64 GB
Virtual Disk Type Eager Zeroed Thick Thin, Thick, Eager Zeroed Thick
Snapshots/VADP No Yes
Storage Redundancy No (shared VMDKs) Yes (individual VMDKs)
Paravirtual Device Support No Yes
Max FT VMs Per Host 4 4 VMs or 8 vCPUs Total
Network Requirement 1Gbps 10Gbps
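The per-host limit in 6.0 is "whichever comes first"; a sketch of that admission rule (illustrative, not VMware's actual admission control):

```python
MAX_FT_VMS_PER_HOST = 4
MAX_FT_VCPUS_PER_HOST = 8

def ft_admission_ok(existing_ft_vcpu_counts: list, new_vm_vcpus: int) -> bool:
    """True if another FT-protected VM fits within the vSphere 6.0 per-host
    limits of 4 FT VMs or 8 FT vCPUs, whichever is reached first."""
    return (len(existing_ft_vcpu_counts) + 1 <= MAX_FT_VMS_PER_HOST
            and sum(existing_ft_vcpu_counts) + new_vm_vcpus <= MAX_FT_VCPUS_PER_HOST)
```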
vSphere Replication
End-to-end network compression
Network traffic isolation
Separate replication traffic onto a dedicated VMkernel port
Linux filesystem quiescing now supported
vSphere Replication (continued)
Full sync is now faster, as it is based upon actual allocation rather than the configured size
Storage vMotion of a replica is supported without a full sync
The vSphere Replication virtual appliance now runs on SLES 11 SP3 with IPv6 support
vSphere 6.0 Gotchas
NFS 4.1
Many issues are highlighted in the release notes; review them before using it.
Interoperability concerns with VVols
vSphere Plugins
Plugins and anything interacting with the API (VDI, failover software) must be 6.0 compatible prior to upgrade
Tags
Domain users with full privileges cannot assign tags; you must use the SSO admin
vSphere 6.0 Gotchas (continued)
SSO
Changing the Administrator password after SSO is deployed causes the Component Manager service to hang; no workaround.
ESXi Hosts
ESXi hosts randomly fail or disconnect when running >300 VMs in a cluster.
Third Party Components
The Nexus 1000v is not fully certified and AVS mode will not be supported. Host-based flash caching plugins like PernixData are not currently supported.
vSphere 6.0 Gotchas (continued)
vCenter Installation
The FQDN/IP used when configuring the vCenter Server MUST match what is used for the PSC.
vCenter Service Account
Avoid special characters such as @ in the password for the service account used to run vCenter; installation may fail.
Active Directory
ESXi hosts may drop off of the domain following upgrade to 6.0.
vSphere 6.0 Gotchas (continued)
IPv6
Still not 100% supported, check official docs or
https://www.edge-cloud.net/2015/03/ipv6-in-vsphere-6/
vCenter External Database
Oracle 11g and 12c are deprecated. They will work for 6.0, but continued support is not guaranteed.
What’s it all about?
• July 14th 2015 – Windows Server 2003 is officially End of Life
• Similar to the Windows XP EOL last year
• Only a couple of months remaining before extended support is required
Operational Impact
• No technical support
• Migrations required
» Application compatibility concerns
• Exceptions have to be managed
• Custom support is available
» Expensive
» Negotiated with Microsoft
Security Impact
• No security patches henceforth
» 37 critical updates were released in 2013 under extended support. None following EOL.
• Ripe target for attacks
• Exceptions must be mitigated
Compliance Impact
• Nearly all regulations require a patched, supported OS.
• Ex. PCI DSS Req 6.2
» Ensure that all system components and software are protected from known vulnerabilities by installing applicable vendor-supplied security patches. Install critical security patches within one month of release.
• Compensating controls for any exceptions
What can we do?
• Migrate to a supported OS
» Best option
• Alternatives
» Segment off behind separate firewall
» Harden OS
How can Sirius help?
• Microsoft Services
• Virtual Infrastructure
• Compensating Controls
» Cisco Security (ASA w/ Firepower)
» Palo Alto Networks
» Bit 9 / Carbon Black
Anybody out here approaching any of these maximums?
More at https://www.vmware.com/pdf/vsphere6/r60/vsphere-60-configuration-maximums.pdf
vSphere 6 only supports processors released after June 2006. If you try to use any processors made prior, the install will PSOD.
If your Server is older than 1 January 2012 there is a high likelihood it may not be on the HCL.
Lots of this has worked for a while but is now finally supported
AOAG – Huge!
https://www.edge-cloud.net/2015/03/ipv6-in-vsphere-6/
10X # of hosts
More than 3X the # of VMs
vCSA is also an interactive installer now rather than an OVF
Still no embedded VUM :(
Limited to vSphere HA
PSC takes up about 30 GB on External deployment
Command Line Tool
Is anyone using 3rd party certs? Probably not… VMware certificates are terrible to replace.
Well now it’s better. Mostly.
30 vCenters with separate certificates oh god… heck even the VCAP DCA was a pain.
VMCA Default: VMCA uses a self-signed root certificate. It issues certificates to vCenter, ESXi, etc and manages these certificates. These certificates have a chain of trust that stops at the VMCA root certificate. VMCA is not a general purpose CA and its use is limited to VMware components.
VMCA Enterprise: VMCA is used as a subordinate CA and is issued subordinate CA signing certificate. It can now issue certificates that trust up to the enterprise CA’s root certificate. If you have already issued certs using VMCA Default and replace VMCA’s root cert with a CA signing cert then all certificates issued will be regenerated and pushed out to the components.
Custom: In this scenario VMCA is completely bypassed. This scenario is for those customers that want to issue and/or install their own certificates. You will need to issue a cert for every component, not unlike you do today for 5.5 when using 3rd party certs. And all of those certs (except for host certs) need to be installed into VECS.
VMware Endpoint Certificate Store (VECS) serves as a local (client-side) repository for certificates, private keys, and other certificate information that can be stored in a keystore. You can decide not to use VMCA as your certificate authority and certificate signer, but you must use VECS to store all vCenter certificates, keys, and so on. ESXi certificates are stored locally on each host and not in VECS.
VECS runs as part of the VMware Authentication Framework Daemon (VMAFD). VECS runs on every embedded deployment, Platform Services Controller node, and management node and holds the keystores that contain the certificates and keys.
VECS polls VMware Directory Service (vmdir) periodically for updates to the TRUSTED_ROOTS store. You can also explicitly manage certificates and keys in VECS using vecs-cli commands. See the vecs-cli Command Reference.
VECS includes the following stores.
Can even run mixed linked mode – thanks to PSC
Enabled by default (called enhanced linked mode)
Useful during datacenter migrations when you are moving between clusters
Imagine eventually SSO limitation will be removed from GUI option
Used to be 10ms
CAB vMotion is a big deal
Simple concept, a bit clunky to administer
Can run NIOC v2 or v3, but cannot go back once upgraded to v3
NIOC v3 does not support CoS tagging or user-defined resource pools
Changing policies on a per-VM basis allows the array to select the correct storage container to meet those capabilities, be they performance related, availability related, or otherwise.
Capabilities exposed via VASA, could include dedupe, compression, encryption, flash acceleration, replication, more.
Limited array support thus far
VASA provider, or a Virtual Volume Storage Provider (let’s call it the VP) is a software component that acts as a storage Awareness Service for vSphere and mediates out-of-band communication between vCenter and a storage system. The VP can take different forms; some array vendors have it embedded in the storage controller while others run it in an appliance. An administrator needs to add details of the VP to vCenter server. This is usually something as simple as providing a URL to the VP, along with some credentials. This information should come from the array vendor documentation.
Protocol Endpoint is a logical I/O proxy presented to a host to communicate with Virtual Volumes stored on a Storage Container. When a virtual machine on the host performs an I/O operation, the protocol endpoint directs the I/O to the appropriate virtual volume. This is a LUN on block storage arrays, and a mount point on NAS arrays. These must be pre-configured on the array, and then presented to all the ESXi hosts that wish to use VVols. These are discovered or mounted to the ESXi hosts just like a datastore. However they are not used for storage, just communication.
Storage Container is a pool of raw storage capacity or an aggregation of storage capabilities that a storage system can provide to virtual volumes. It is not a LUN! However, this is where the Virtual Volumes are created.
Storage Policy Based Management, through VM Storage Policies, is used for virtual machine provisioning to match storage capabilities to application requirements. The location, layout and storage capabilities of a VM depends on the storage policy associated with the VM.
Virtual Volume Datastore is a vSphere representation of a Storage Container. When setting up Virtual Volumes, a Virtual Volume datastore is created to introduce the Storage Container to vSphere.
Virtual Volumes (VVols) are stored natively inside a storage system that is connected through block or file protocols. They are exported as objects by a compliant storage system and are managed entirely by hardware on the storage array. Virtual Volumes are an encapsulation of virtual machine files, virtual disks, and their derivatives.
Anyone using VSAN? Why… why not?
Anyone encountered APD?
How big are your clusters?
No vVol Support
Conflicting reports on SMP-FT for vCenter server. Your mileage may vary.
Microsoft quote about custom support: “Custom Support costs can vary, depending on specific customer needs, such as the number of server instances requiring continued support. We recommend customers work with their Microsoft Account Representative to determine applicable pricing for their environment.”
Price reportedly ~$600 / device / year, could be a lot more.
The Register article: “One running several thousand Windows Server 2003 machines will pay ‘high single-digit millions’ for the first year and ‘high double-digit millions’ in year two, our source - who wished to remain anonymous - said.”
This example is from a blog article: http://blogs.technet.com/b/uktechnet/archive/2014/06/25/are-you-ready-to-migrate-windows-server-2003-end-of-life-is-coming-on-the-july-14th-2015.aspx