If you are a professional who works with VMware virtualization and your last contact with Microsoft virtualization was a few years ago, this is a fast way to refresh that knowledge. In this one-hour session you will get condensed, up-to-date knowledge about Microsoft's virtualization offering: the Hyper-V hypervisor in version 3, shipped with Windows Server 2012. We will cover various technical aspects of virtualization, compare the two technologies (Microsoft and VMware), and show how the solutions you already know look in a different incarnation.
Microsoft Virtualization for VMware Professionals
1. Why Hyper-V?
Microsoft Virtualization for VMware Professionals
Comparing Windows Server 2012 Hyper-V with VMware vSphere 5.1
Mariusz Kędziora
Technical Evangelist | Microsoft
http://ewangelista.it | mariusz.kedziora@microsoft.com
3. Hyper-V basics
Hyper-V = Microsoft's virtualization platform / hypervisor
It comes in two options:
Windows Server 2012 Hyper-V (a server role, available "in box")
Hyper-V Server (the standalone hypervisor)
[Diagram: multiple VMs running on the Hyper-V hypervisor on top of Windows Server 2012]
Client Hyper-V (virtualization built into Windows 8 Pro/Enterprise)
4. Microsoft Hyper-V Server 2012
A free solution
Contains:
The hypervisor
The Windows Server driver model
The key virtualization components
Minimal footprint
No GUI and no full Windows
Yet it has all the virtualization capabilities
available in Windows Server 2012 with Hyper-V
5. Hardware requirements
x64 platform
Hardware support:
Hardware-assisted virtualization (AMD-V or Intel VT)
Hardware-enforced Data Execution Prevention (DEP)
Minimum requirements
Hardly relevant, since we will be running many VMs there!
Still: 512 MB RAM, 1.4 GHz CPU, 1 network adapter
6. Supported guest operating systems (servers)
Windows Server
2012, 2008 R2 (SP1), 2008 R2, 2008 (SP2), 2003 R2 (SP2), 2003 (SP2)
Non-Microsoft systems*
CentOS* (5.7, 5.8, 6.0-6.3)
Red Hat Enterprise Linux* (5.7, 5.8, 6.0-6.3)
SUSE Linux Enterprise Server 11 SP2
Open SUSE 12.1
Ubuntu 12.04
* Requires downloading the (free) integration components
Client systems are supported too! (from XP SP3; Linux such as…)
7. Hyper-V history
2008.06: v1.0 – Windows Server 2008; Hyper-V Server 2008 (+6 months)
2009.10: v2.0 – Windows Server 2008 R2; Hyper-V Server 2008 R2
2011.02: v2.1 – Service Pack 1 for Windows Server 2008 R2
2012.09: v3.0 – Windows Server 2012; Hyper-V Server 2012
9. Glossary: Basic terms
VMware | Microsoft
VI Client | Hyper-V Manager
vCenter | System Center Virtual Machine Manager (SCVMM)
VMware Tools | Integration Components
Service Console | Parent Partition
Consolidated Backup | System Center Data Protection Manager (DPM)
Distributed Power Management | Core Parking & Dynamic Optimization
Standard/Distributed Switch | Virtual Switch
Converter | SCVMM P2V / V2V
Update Manager | Virtual Machine Servicing Tool (VMST)
10. Basic management
Hyper-V Manager:
A central place for management
Insight into virtual machines
Configuration (e.g. of the switch)
Initiating VM actions
Available remotely (RSAT)
PowerShell:
A scripting language and environment
Everything in the GUI is also in PowerShell
Task automation (scripts)
Syntax completion and built-in help
A large base of ready-made scripts
Remote access too: classic remoting or PowerShell Web Access
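As a quick, illustrative sketch of the "everything in the GUI is also in PowerShell" point, a few cmdlets from the Hyper-V module that mirror common Hyper-V Manager actions (VM, switch and host names here are placeholders; this requires a Windows Server 2012 host with the Hyper-V role, so treat it as a sketch rather than a runnable script):

```powershell
# List all VMs on the local host (the Hyper-V Manager inventory view)
Get-VM

# Start and gracefully stop a VM ("Web01" is a placeholder name)
Start-VM -Name "Web01"
Stop-VM -Name "Web01"

# Create an external virtual switch bound to a physical NIC
New-VMSwitch -Name "ExternalSwitch" -NetAdapterName "Ethernet"

# The same cmdlets work remotely, e.g. via PowerShell remoting
Invoke-Command -ComputerName "HyperVHost1" -ScriptBlock { Get-VM }
```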
13. Glossary: Storage and resources
VMware | Microsoft
VMDK (Virtual Machine Disk) | VHD/VHDX (Virtual Hard Disk)
Raw Device Mapping | Pass-Through Disk
Storage vMotion | Live Storage Migration
Thin Provisioning | Dynamic Disk
Volume / Extent Grow | Expand Disk / Volume
VM SCSI Boot | VM IDE Boot
Hot Add Disks, Storage, Memory | Hot Add Disks, Storage, Memory
Memory Overcommit | Dynamic Memory*
14. Scalability: What changed
System Resource | Hyper-V (2008 R2) | Hyper-V (2012) | Change
Host:
Logical Processors | 64 | 320 | 5×
Host Physical Memory | 1TB | 4TB | 4×
Virtual CPUs per Host | 512 | 2,048 | 4×
VM:
Virtual CPUs per VM | 4 | 64 | 16×
Memory per VM | 64GB | 1TB | 16×
Active VMs per Host | 384 | 1,024 | 2.7×
Guest NUMA | No | Yes | -
Cluster:
Maximum Nodes | 16 | 64 | 4×
Maximum VMs | 1,000 | 8,000 | 8×
15. Windows Server 2012 editions (and Hyper-V)
Windows Server 2012 Datacenter (high virtualization density):
All features and capabilities
Clusters, RAM, CPU, etc.
Live Migration, Hyper-V Manager
Licensed per CPU (in packs of 2)
Number of OSEs: unlimited
Windows Server 2012 Standard (low density or no virtualization):
All features and capabilities
Clusters, RAM, CPU, etc.
Live Migration, Hyper-V Manager
Licensed per CPU (in packs of 2)
Number of OSEs: 2
16. Buy a single license and…
Standard or Datacenter
Either edition can run up to 1,024 virtual machines per host – regardless of edition!
But the purchase also includes Windows Server 2012 guest (OSE) licenses:
Standard: 2 licenses
Datacenter: ∞ licenses
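As a back-of-the-envelope sketch of the licensing arithmetic above (the function name and the simplification to a single two-CPU host are mine, not from the deck):

```powershell
# Rough sketch: Windows Server 2012 Standard licenses needed on one
# two-CPU host to cover N Windows Server guest VMs (OSEs).
# Each Standard license covers 2 CPUs and grants 2 OSEs;
# one Datacenter license covers 2 CPUs with unlimited OSEs.
function Get-StandardLicenseCount([int]$GuestVMs) {
    [math]::Ceiling($GuestVMs / 2)
}

Get-StandardLicenseCount 8    # 8 guest VMs -> 4 Standard licenses
```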
17. Comparison with VMware
System Resource | Hyper-V (2012) | vSphere Hypervisor | vSphere 5.1 Enterprise Plus
Host:
Logical Processors | 320 | 160 | 160
Host Physical Memory | 4TB | 32GB¹ | 2TB
Virtual CPUs per Host | 2,048 | 2,048 | 2,048
VM:
Virtual CPUs per VM | 64 | 8 | 64²
Memory per VM | 1TB | 32GB¹ | 1TB
Active VMs per Host | 1,024 | 512 | 512
Guest NUMA | Yes | Yes | Yes
Cluster:
Maximum Nodes | 64 | N/A³ | 32
Maximum VMs | 8,000 | N/A³ | 4,000
¹ The host is capped at 32GB, so a single VM can also receive at most 32GB.
² vSphere 5.1 Enterprise Plus is the only edition supporting 64 vCPUs.
The Enterprise edition supports 32 vCPUs per VM; the remaining editions support 8 vCPUs per VM.
³ Clustering/high-availability options are available only with the purchase of vSphere, in all its editions.
Sources for vSphere Hypervisor / vSphere 5.x Ent+:
- http://www.vmware.com/pdf/vsphere5/r51/vsphere-51-configuration-maximums.pdf
- https://www.vmware.com/files/pdf/techpaper/Whats-New-VMware-vSphere-51-Platform-Technical-Whitepaper.pdf
- http://www.vmware.com/products/vsphere-hypervisor/faq.html
20. Comparison with VMware
Capability | Hyper-V (2012) | vSphere Hypervisor | vSphere 5.1 Enterprise Plus
Virtual Fibre Channel | Yes | Yes | Yes
3rd Party Multipathing (MPIO) | Yes | No | Yes (VAMP)¹
Native 4-KB Disk Support | Yes | No | No
Maximum Virtual Disk Size | 64TB VHDX (32× larger) | 2TB VMDK | 2TB VMDK
Maximum Pass-Through Disk Size | 256TB+² | 64TB | 64TB
Offloaded Data Transfer | Yes | No | Yes (VAAI)³
Boot from USB Disk | Yes⁴ | Yes | Yes
Storage Pooling | Yes | No | No
¹ vStorage API for Multipathing (VAMP) is available only in vSphere 5.1 Enterprise & Enterprise Plus.
² The maximum disk size depends on how much the guest OS can handle.
The latest Windows Server 2012 can handle 256TB disks.
³ vStorage API for Array Integration (VAAI) is available only in vSphere 5.1 Enterprise & Enterprise Plus.
⁴ Hyper-V Server 2012 only.
Sources for vSphere Hypervisor / vSphere 5.x Ent+:
- http://www.vmware.com/pdf/vsphere5/r51/vsphere-51-configuration-maximums.pdf
- http://www.vmware.com/products/vsphere/buy/editions_comparison.html
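The 64TB VHDX and "Expand Disk" rows above map onto two Hyper-V module cmdlets; a hedged sketch (the path and sizes are placeholders, and this requires a Windows Server 2012 host):

```powershell
# Create a large, dynamically expanding VHDX
# (VHDX supports up to 64TB vs 2TB for the older VHD format)
New-VHD -Path "D:\VHDs\bigdata.vhdx" -SizeBytes 4TB -Dynamic

# Grow an existing virtual disk (the "Expand Disk" operation)
Resize-VHD -Path "D:\VHDs\bigdata.vhdx" -SizeBytes 8TB
```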
22. Comparison with VMware
Capability | Hyper-V (2012) | vSphere Hypervisor | vSphere 5.1 Enterprise Plus
Dynamic Memory | Yes | Yes | Yes
Resource Metering | Yes | Yes¹ | Yes
Quality of Service | Yes | No | Yes²
Data Center Bridging (DCB) | Yes | Yes | Yes
¹ Without vCenter, Resource Metering in vSphere Hypervisor is available only per individual host.
² Quality of Service (QoS) is available only in vSphere 5.1 Enterprise & Enterprise Plus – that is, only in the top editions.
Sources for vSphere Hypervisor / vSphere 5.x Ent+:
- http://www.vmware.com/pdf/vsphere5/r51/vsphere-51-configuration-maximums.pdf
- http://www.vmware.com/products/vsphere/buy/editions_comparison.html
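To illustrate the Resource Metering row above, the Hyper-V module exposes it through a small set of cmdlets (VM name is a placeholder; requires a Windows Server 2012 Hyper-V host):

```powershell
# Turn on resource metering for a VM ("Web01" is a placeholder)
Enable-VMResourceMetering -VMName "Web01"

# Later, read the accumulated CPU, memory, disk and network usage
Measure-VM -VMName "Web01"

# Reset the counters, e.g. when a billing period ends
Reset-VMResourceMetering -VMName "Web01"
```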
24. Glossary: Security and networking
VMware | Microsoft
Direct Path I/O | SR-IOV*
Standard Switch | Hyper-V Extensible Switch
25. Hyper-V Extensible Switch
Deeper integration with virtual machines:
ARP/ND Poisoning Protection
DHCP Guard Protection
PVLANs
Trunk Mode to Virtual Machines
Monitoring & Port Mirroring
Virtual Port ACLs
Windows PowerShell & WMI Management
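The "PowerShell & WMI Management" bullet above means the switch's security features can be driven from the command line; a hedged sketch with placeholder VM names and an illustrative subnet, assuming a Windows Server 2012 Hyper-V host:

```powershell
# Harden a VM's virtual NIC: block rogue DHCP servers and
# router advertisements coming from the guest
Set-VMNetworkAdapter -VMName "Web01" -DhcpGuard On -RouterGuard On

# Mirror the VM's traffic to a monitoring VM
Set-VMNetworkAdapter -VMName "Web01" -PortMirroring Source
Set-VMNetworkAdapter -VMName "Monitor01" -PortMirroring Destination

# Add a simple virtual port ACL denying traffic to a subnet
Add-VMNetworkAdapterAcl -VMName "Web01" -RemoteIPAddress 10.0.0.0/8 `
    -Direction Both -Action Deny
```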
26. Extending the Extensible Switch
Extension categories: packet inspection, packet filtering, network forwarding, intrusion detection.
Example partners:
Cisco – Nexus 1000V, UCS VM-FEX
5nine – Security Manager
NEC – OpenFlow
InMon – sFlow
Many partners ship extensions for the switch.
27. Comparison with VMware
Capability | Hyper-V (2012) | vSphere Hypervisor | vSphere 5.1 Enterprise Plus
Extensible vSwitch | Yes | No | Replaceable¹
Confirmed Partner Extensions | 5 | No | 2
Private Virtual LAN (PVLAN) | Yes | No | Yes¹
ARP Spoofing Protection | Yes | No | vCNS/Partner²
DHCP Snooping Protection | Yes | No | vCNS/Partner²
Virtual Port ACLs | Yes | No | vCNS/Partner²
Trunk Mode to Virtual Machines | Yes | No | Yes³
Port Monitoring | Yes | Per Port Group | Yes³
Port Mirroring | Yes | Per Port Group | Yes³
(The Hyper-V switch is open and extensible; the vSphere switch is closed – replaceable, not extensible.)
¹ The vSphere Distributed Switch (required for PVLAN) is available only in the Enterprise Plus edition.
It can also be replaced with partner solutions (Cisco/IBM), but not extended.
² ARP Spoofing, DHCP Snooping Protection & Virtual Port ACLs require the App component of VMware vCloud
Networking & Security (vCNS) or partner solutions (additional purchases required).
³ VLAN trunking for vNICs, Port Monitoring and Mirroring (at a granular level) require the vSphere Distributed
Switch, which is available only in the Enterprise Plus edition.
Sources for vSphere Hypervisor / vSphere 5.x Ent+:
- http://www.vmware.com/products/cisco-nexus-1000V/overview.html
- http://www-03.ibm.com/systems/networking/switches/virtual/dvs5000v/
- http://www.vmware.com/technical-resources/virtualization-topics/virtual-networking/distributed-virtual-switches.html
- http://www.vmware.com/files/pdf/techpaper/Whats-New-VMware-vSphere-51-Network-Technical-Whitepaper.pdf
- http://www.vmware.com/products/vshield-app/features.html
- http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9902/data_sheet_c78-492971.html
28. Network performance
Hyper-V takes advantage of hardware innovations:
Dynamic VMq – dynamically spreads the processing of virtual machines' network traffic across multiple CPUs
IPsec Task Offload – moves IPsec processing from the virtual machine to the physical network adapter for better performance
SR-IOV support – presents SR-IOV functions of the physical adapter directly to the virtual machine
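The SR-IOV and IPsec offload features above are configured per switch and per virtual NIC; a hedged sketch with placeholder names (SR-IOV must be enabled when the switch is created, and all of this assumes a Windows Server 2012 host with SR-IOV-capable hardware):

```powershell
# Create a virtual switch with SR-IOV enabled (set at creation time)
New-VMSwitch -Name "SriovSwitch" -NetAdapterName "Ethernet" -EnableIov $true

# Give a VM's virtual NIC an SR-IOV virtual function
Set-VMNetworkAdapter -VMName "Web01" -IovWeight 100

# Allow the VM to offload IPsec security associations to the NIC
Set-VMNetworkAdapter -VMName "Web01" `
    -IPsecOffloadMaximumSecurityAssociation 512
```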
29. Physical security
BitLocker – your data stays safe
Works on a local disk and on a cluster disk (CSV 2.0)
30. Comparison with VMware
Capability | Hyper-V (2012) | vSphere Hypervisor | vSphere 5.1 Enterprise Plus
Dynamic Virtual Machine Queue | Yes | NetQueue¹ | NetQueue¹
IPsec Task Offload | Yes | No | No
SR-IOV with Live Migration | Yes | No² | No²
Storage Encryption | Yes | No | No
(Hyper-V delivers this without sacrificing key capabilities.)
¹ VMware vSphere and vSphere Hypervisor support only VMq (NetQueue).
² VMware's SR-IOV implementation does not support vMotion, HA or Fault Tolerance.
It also requires the vSphere Distributed Switch (available only in the top edition).
DirectPath I/O, whilst not identical to SR-IOV, aims to provide virtual machines with more direct access to hardware devices, with network cards being a good example. Whilst on the surface this will boost VM networking performance and reduce the burden on host CPU cycles, in reality there are a number of caveats in using DirectPath I/O:
• Very small Hardware Compatibility List
• No Memory Overcommit
• No vMotion (unless running certain configurations of Cisco UCS)
• No Fault Tolerance
• No Network I/O Control
• No VM Snapshots (unless running certain configurations of Cisco UCS)
• No Suspend/Resume (unless running certain configurations of Cisco UCS)
• No VMsafe/Endpoint Security support
Sources for vSphere Hypervisor / vSphere 5.x Ent+:
- http://www.vmware.com/pdf/Perf_Best_Practices_vSphere5.1.pdf
35. Comparison with VMware
Capability | Hyper-V (2012) | vSphere Hypervisor | vSphere 5.1 Enterprise Plus
VM Live Migration | Yes | No¹ | Yes²
Simultaneous Live Migrations (1GbE) | Unlimited³ | N/A | 4
Simultaneous Live Migrations (10GbE) | Unlimited³ | N/A | 8
Live Storage Migration | Yes | No⁴ | Yes⁵
Shared Nothing Live Migration | Yes | No | Yes⁵
Network Virtualization | Yes | No | VXLAN⁶
(All of this without incurring additional costs.)
¹ Live Migration (vMotion) is unavailable in vSphere Hypervisor (vSphere 5.1 required).
² Live Migration (vMotion) and Shared Nothing Live Migration (Enhanced vMotion) are available
in the Essentials Plus edition and higher.
³ Within the technical limits of your network hardware.
⁴ Live Storage Migration (Storage vMotion) is unavailable in vSphere Hypervisor (vSphere 5.1 required).
⁵ Live Storage Migration (Storage vMotion) is available in the Standard, Enterprise & Enterprise Plus editions.
⁶ VXLAN is part of vCloud Networking & Security, which is available only at extra cost (outside vSphere).
It additionally requires the vSphere Distributed Switch, available only in vSphere 5.1 Enterprise Plus.
Sources for vSphere Hypervisor / vSphere 5.x Ent+:
- http://www.vmware.com/products/vsphere/buy/editions_comparison.html
- http://www.vmware.com/files/pdf/products/vcns/vCloud-Networking-and-Security-Overview-Whitepaper.pdf
- http://www.vmware.com/products/datacenter-virtualization/vcloud-network-security/features.html#vxlan
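The live migration and "shared nothing" rows above correspond to the Move-VM family of cmdlets; a hedged sketch with placeholder VM, host and path names (requires two Windows Server 2012 Hyper-V hosts with live migration enabled):

```powershell
# Live-migrate a running VM to another host; with -IncludeStorage no
# shared storage is required ("shared nothing" live migration moves
# the VM and its virtual disks together)
Move-VM -Name "Web01" -DestinationHost "HyperVHost2" `
    -IncludeStorage -DestinationStoragePath "D:\VMs\Web01"

# Move only the VM's storage while the VM keeps running
Move-VMStorage -VMName "Web01" -DestinationStoragePath "E:\VMs\Web01"
```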
37. Glossary: High availability
VMware | Microsoft
VMware HA (High Availability) | Failover Clustering
Fault Tolerance (FT) | Hyper-V Replica*
Site Recovery Manager (SRM) | Hyper-V Replica
vMotion | Live Migration
Primary Node | Coordinator Node
VM Affinity | VM Affinity
VMFS | Cluster Shared Volumes (CSV)
Distributed Resource Scheduler (DRS) | Dynamic Optimization
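Hyper-V Replica, listed above as the counterpart to SRM, is configured per VM; a hedged sketch with placeholder VM and server names (assumes two Windows Server 2012 hosts with replication enabled between them):

```powershell
# Enable replication of a VM to a replica server
Enable-VMReplication -VMName "Web01" -ReplicaServerName "DRHost" `
    -ReplicaServerPort 80 -AuthenticationType Kerberos

# Kick off the initial copy, then check replication health
Start-VMInitialReplication -VMName "Web01"
Get-VMReplication -VMName "Web01"
```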
41. Comparison with VMware
Capability | Hyper-V (2012) | vSphere Hypervisor | vSphere 5.1 Enterprise Plus
Incremental Backups | Yes | No | Yes¹
VM Replication | Yes | No | Yes²
NIC Teaming | Yes | Yes | Yes
Integrated High Availability | Yes | No³ | Yes⁴
Guest OS Application Monitoring | Yes | N/A | No⁵
Failover Prioritization | Yes | N/A | Yes⁶
Affinity & Anti-Affinity Rules | Yes | N/A | Yes⁶
Cluster-Aware Updating | Yes | N/A | Yes⁶
(Hyper-V includes application monitoring inside guest operating systems.)
¹ VMware Data Protection is available in the Essentials Plus edition and higher.
² vSphere Replication is available in the Essentials Plus edition and higher.
³ vSphere Hypervisor has no built-in HA capabilities (vSphere 5.1 required).
⁴ VMware HA is built into the Essentials Plus edition and higher.
⁵ VMware has a publicly available monitoring API, but the monitoring itself is not included.
⁶ These options are available in all editions that have High Availability enabled.
Sources for vSphere Hypervisor / vSphere 5.x Ent+:
- http://www.vmware.com/products/vsphere/buy/editions_comparison.html
- http://www.yellow-bricks.com/2011/08/11/vsphere-5-0-ha-application-monitoring-intro/
42. Comparison with VMware
Capability | Hyper-V (2012) | vSphere Hypervisor | vSphere 5.1 Enterprise Plus
Nodes per Cluster | 64 | N/A¹ | 32
VMs per Cluster | 8,000 | N/A¹ | 4,000
Max Size Guest Cluster (iSCSI) | 64 Nodes | 16 Nodes² | 16 Nodes²
Max Size Guest Cluster (Fibre) | 64 Nodes | 5 Nodes | 5 Nodes
Max Size Guest Cluster (File Based) | 64 Nodes | 0 Nodes³ | 0 Nodes³
Guest Clustering with Live Migration Support | Yes | N/A¹ | No⁴
Guest Clustering with Dynamic Memory Support | Yes | No⁵ | No⁵
(Hyper-V is the most flexible option, without sacrificing capabilities.)
¹ High Availability/vMotion/Clustering is unavailable in vSphere Hypervisor.
² Guest clusters can run on vSphere 5.1 using an in-guest iSCSI initiator
to connect to the SAN (just as they would on a physical cluster).
Guest OS support only up to Windows Server 2008 R2 means the maximum size is 16 nodes.
³ VMware does not support VM guest clustering using file-based storage (e.g. NFS).
⁴ VMware does not support vMotion or Storage vMotion for a machine that is part of a guest cluster.
⁵ VMware does not support using memory overcommit for a machine that is part of a guest cluster.
Sources for vSphere Hypervisor / vSphere 5.x Ent+:
- http://www.vmware.com/pdf/vsphere5/r51/vsphere-51-configuration-maximums.pdf
- http://pubs.vmware.com/vsphere-50/topic/com.vmware.ICbase/PDF/vsphere-esxi-vcenter-server-50-mscs-guide.pdf
- http://kb.vmware.com/kb/1037959
44. Windows Server 2012
Windows Server 2012 Datacenter (high virtualization density):
All features and capabilities
Clusters, RAM, CPU, etc.
Live Migration, Hyper-V Manager
Licensed per CPU (in packs of 2)
Number of OSEs: unlimited
Windows Server 2012 Standard (low density or no virtualization):
All features and capabilities
Clusters, RAM, CPU, etc.
Live Migration, Hyper-V Manager
Licensed per CPU (in packs of 2)
Number of OSEs: 2
45. Buy a single license and…
Standard or Datacenter
Either edition can run up to 1,024 virtual machines per host – regardless of edition!
But the purchase also includes Windows Server 2012 guest (OSE) licenses:
Standard: 2 licenses
Datacenter: ∞ licenses
46. Comparison with VMware – http://aka.ms/WhyHyperV-vsVMWare
Component | Resource | Windows Server 2012 | VMware ESXi 5.0 | vSphere 5.1 Ent+
Host | Logical processors | 320 | 160 (2×) | 160 (2×)
Host | Physical memory | 4TB | 32GB (125×) | 2TB (2×)
Host | Virtual CPUs per host | 2,048 | 2,048 (--) | 2,048 (--)
VM | Virtual CPUs per VM | 64 | 8 (8×) | 64 (--)
VM | Memory | 1TB | 32GB (125×) | 1TB (--)
VM | Active VMs | 1,024 | 512 (2×) | 512 (2×)
VM | NUMA | Yes | Yes (--) | Yes (--)
Cluster | Nodes | 64 | N/A (∞) | 32 (2×)
Cluster | Maximum number of VMs | 8,000 | N/A (∞) | 4,000 (2×)
Feature | Windows Server 2012 | VMware ESXi 5.0 | vSphere 5.1 Ent+
VM Live Migration | Yes | No | Yes
- Simultaneous Live Migrations (1GbE) | Unlimited | N/A | 4
- Simultaneous Live Migrations (10GbE) | Unlimited | N/A | 8
Live Storage Migration | Yes | No | Yes
Shared Nothing Live Migration | Yes | No | Yes
Network Virtualization | Yes | No | Partner
47. Do the math yourself
Two calculators:
Virtualization (small and midsize businesses): http://servervirtualization.cloudapp.net/
Private cloud (large enterprises): http://cloudeconomics.cloudapp.net/
You enter just two inputs: the number of VMs and the density.
The result: a report plus a cost comparison.
50. Remember: virtual reality
If we start with a physical mess and virtualize it,
we are certain to end up with a virtual mess.
Management is key.
Virtualization is not a cure for every ill.
A sensible approach:
Inventory – which servers exist and what they do
Lifecycle – provision machines sensibly
Planning – account for the machines' different workloads
Metadata – describe the machines (owner, etc.)
51. A private cloud built on Microsoft
Automation and monitoring of the entire data center
Application and process management
Heterogeneity + self-service
Management at many levels
A complete virtualization platform
Many elements available out of the box
Huge scalability (enterprise-ready)
52. Comparisons and sources…
Microsoft
Capability comparison: aka.ms/WhyHyperV-vsVMware
Performance comparison: none, due to the VMware EULA:
"2.4 Benchmarking. You may use the Software to conduct internal
performance testing and benchmarking studies, the results of which
only You may publish or publicly disseminate, provided that VMware
has reviewed and approved of the methodology, assumptions and other
parameters of your testing and studies."
53. Comparisons and sources…
VMware
Beware of emails with a link to a comparison PDF:
they compare the latest vSphere 5.1 against Windows Server 2008 R2.
54. A few other sources
UP2V
http://up2v.nl/2012/09/04/microsoft-windows-server-2012-hyper-v-compared-to-vmware-vsphere-5-1/
Miles Consulting
http://www.milesconsultingcorp.com/Hyper-V-versus-VMware-Comparison.aspx
Aidan Finn
http://www.aidanfinn.com/?p=13483
http://www.aidanfinn.com/?p=13856
55. Learning Windows Server 2012
Download Windows Server 2012: aka.ms/GetWS2012
Learning: aka.ms/LearnWS2012
Microsoft Virtual Academy: Windows Server 2012
Windows Server 2012 launch materials
TechNet: Getting started with Windows Server 2012
TechNet knowledge base: Windows Server 2012
Windows Server 2012 virtual labs
TechEd Europe 2012: Windows Server
IT Camps on Demand
56. Other links
Windows Server 2012 product pages
System Center 2012 product pages
A walkthrough of the VMware and Microsoft calculator results
Windows Server licensing in virtual environments
based on Microsoft, VMware and other solutions
A comparison of Windows Server 2003 R2, 2008 R2 and 2012
Before Hyper-V Server 2012

Hyper-V was first introduced as part of Windows Server 2008, but later that year, in October 2008, a standalone, free edition of Hyper-V, known as Microsoft Hyper-V Server 2008, was released. This standalone version contained only the core virtualization components and the Windows driver model, and as a result offered a reduced disk footprint and a reduced number of services running on the host OS.

In October 2009, just one year later, both Windows Server 2008 and Microsoft Hyper-V Server 2008 were updated with the R2 release, which introduced a number of compelling capabilities to help organizations reduce costs whilst increasing agility and flexibility. Key features introduced included:

Live Migration – enabling the movement of virtual machines (VMs) with no interruption or downtime
Cluster Shared Volumes – highly scalable and flexible use of shared storage (SAN) for VMs
Processor Compatibility – increased flexibility for Live Migration across hosts with differing CPU architectures
Hot Add Storage – flexibly add or remove storage to and from VMs
Improved Virtual Networking Performance – support for Jumbo Frames and Virtual Machine Queue (VMq)

Finally, in February 2011, with the addition of Service Pack 1 (SP1) for Hyper-V, Microsoft introduced two new key capabilities to help organizations realize even greater value from the platform:

Dynamic Memory – more efficient use of memory while maintaining consistent workload performance and scalability
RemoteFX – provides the richest virtualized Windows 7 experience for Virtual Desktop Infrastructure (VDI) deployments

And that brings us to today…

TRANSCRIPTION STARTS HERE

But it's important, before we look at Windows Server 2012 and Hyper-V Server 2012, to look at where Hyper-V has come from. Hyper-V was first introduced as part of Windows Server 2008, but later that year, in October 2008, a standalone free edition of Hyper-V, known as Microsoft Hyper-V Server 2008, was released.
This standalone version contained only the core virtualization components and the Windows driver model, and as a result offered a reduced disk footprint and a reduced number of services running on the host OS. In October 2009, just one year later, both Windows Server 2008 and Microsoft Hyper-V Server 2008 were updated with the R2 release, which introduced a number of compelling capabilities to help organizations reduce cost while increasing agility and flexibility. Key features included: live migration, which enabled the movement of running virtual machines from physical server to physical server with no interruption or downtime; cluster shared volumes, the highly scalable and flexible use of SAN storage; processor compatibility, to increase the flexibility of live migration across hosts with differing CPU architectures; hot-add of virtual machine storage, to flexibly add or remove storage to and from running virtual machines; and improved virtual networking performance, with support for things like jumbo frames and virtual machine queue. Finally, in February 2011, with the addition of Service Pack 1 for Hyper-V, Microsoft introduced two new key capabilities to help organizations realize even greater value from the platform: dynamic memory, which enables a more efficient use of memory while maintaining consistent workload performance and scalability, ultimately helping organizations see a greater level of density on the same-sized hardware; and additionally RemoteFX, which provides the richest virtualized Windows client experience for VDI deployments.
Virtualization technologies help customers lower costs and deliver greater agility and economies of scale. Either as a stand-alone product or an integrated part of Windows Server, Hyper-V is the leading virtualization platform for today and for the transformational opportunity of cloud computing.

With Hyper-V, it is now easier than ever for organizations to take advantage of the cost savings of virtualization and make optimum use of server hardware investments by consolidating multiple server roles as separate virtual machines running on a single physical machine. Customers can use Hyper-V to efficiently run multiple operating systems – Windows, Linux, and others – in parallel on a single server. Windows Server 2012 and Hyper-V Server 2012 extend this with more features, greater scalability and further inbuilt reliability mechanisms.

In the data center, on the desktop, and now in the cloud, the Microsoft virtualization platform, which is led by Hyper-V and management tools, simply makes more sense and offers better value for money when compared to the competition. This presentation will focus on comparing Windows Server 2012 Hyper-V and Hyper-V Server 2012 with the VMware vSphere Hypervisor and vSphere 5.1 Enterprise Plus, across four key areas:

Scalability, Performance & Density
Secure Multitenancy
Flexible Infrastructure
High Availability & Resiliency

TRANSCRIPTION STARTS HERE

And that brings us to today – but why Hyper-V? Well, virtualization technologies help customers lower cost and deliver greater agility and economies of scale. Either as a stand-alone product (Hyper-V Server) or an integrated part of Windows Server, Hyper-V is the fastest-growing virtualization platform for today, and can help you with your transformation as you move towards cloud computing.
With Hyper-V it's now easier than ever for organizations to take advantage of the cost savings of virtualization and make optimum use of server hardware investments by consolidating multiple server roles as separate virtual machines running on a single physical machine. Customers can use Hyper-V to efficiently run multiple operating systems – Windows, Linux and others – in parallel on a single server. With Windows Server 2012 and Hyper-V Server 2012 we extend this capability with more features, greater scale and further inbuilt reliability mechanisms. In the data center, on the desktop, and now in the cloud, the Microsoft virtualization platform, which is led by Hyper-V and System Center management tools, makes more sense and offers better value for your money when compared to the competitors. This presentation will focus on comparing Windows Server 2012 Hyper-V, and the free download that is Hyper-V Server 2012, with the VMware vSphere Hypervisor and vSphere 5.1 Enterprise Plus. And we'll focus on four key areas: scalability, performance and density; secure multi-tenancy; a flexible infrastructure; and high availability and resiliency.
And we'll start by looking at the scalability, performance and density comparisons.
Now, in the previous releases of both Windows Server 2008 R2 Hyper-V and Microsoft Hyper-V Server 2008 R2, we supported configuring virtual machines with a maximum of four virtual processors and up to 64 GB of memory. However, IT organizations increasingly want to use virtualization when they deploy mission-critical, tier-1 business applications. Large, demanding workloads such as online transaction processing (OLTP) databases and online transaction analysis (OLTA) solutions typically run on systems with 16 or even 32 processors and demand large amounts of memory. For this class of workloads, more virtual processors and larger amounts of virtual memory are a core requirement.

Hyper-V in Windows Server 2012 and Hyper-V Server 2012 greatly expands support for host processors and memory. New features include support for up to 64 virtual processors and 1TB of memory for Hyper-V guests, a new VHDX virtual hard disk format with larger disk capacity of up to 64TB, and additional resiliency. These features help ensure that the virtualization infrastructure can support the configuration of large, high-performance virtual machines for workloads that might need to scale up significantly.

Significant improvements have also been made across the board, with Hyper-V now supporting increased cluster sizes, from 16 nodes up to 64 nodes, and up to 8,000 VMs per cluster. Hyper-V now supports a significantly higher number of active virtual machines per host and, additionally, more advanced performance features such as in-guest Non-Uniform Memory Access (NUMA). This ensures customers can achieve the highest levels of scalability, performance and density for their mission-critical workloads.
Standard = $900; Datacenter = $4,800
In terms of comparison, the table shows that Hyper-V offers significantly greater scale across host, VM and cluster when compared with both the VMware vSphere Hypervisor and vSphere 5.1 Enterprise Plus. VMware positions the vSphere Hypervisor as a simple, entry-level solution designed to allow users to experience the benefits of VMware's virtualization technology; however, on closer examination, certain restrictions are imposed which prevent customers utilizing the solution at scale, meaning customers have to purchase, at significant cost, one of the more advanced vSphere editions. An example of this is the capping of physical memory on the vSphere Hypervisor at 32GB, limiting scalability and, as a result, the maximum virtual machine memory size.

Since the launch of vSphere 5.0, back in 2011, VMware has regularly discussed the inclusion of 32 virtual processors within a virtual machine, yet this was exclusive, at the time, to the Enterprise Plus edition of vSphere, and not available in the vSphere Hypervisor or the vSphere 5.0 Essentials, Essentials Plus, Standard and Enterprise editions, which were all capped at 8 virtual processors per virtual machine. With vSphere 5.1, however, the Enterprise edition can now deliver VMs with up to 32 vCPUs, and the Enterprise Plus edition 64 vCPUs. Compare this with Hyper-V in both Windows Server 2012 and Hyper-V Server 2012, and customers not only receive up to 64 virtual processors per VM, but this comes with no SKU-specific restrictions. Customers are free to run the most demanding of their workloads on Hyper-V, without additional costs or expensive upgrades.

The table also shows that both Windows Server 2012 Hyper-V (along with Hyper-V Server 2012) and vSphere 5.1 Enterprise Plus deliver up to 1TB of memory to an individual virtual machine. Whilst virtualization itself is an incredibly important aspect within the datacenter, resiliency and high availability of workloads is of equal importance.
The inclusion of Failover Clustering in Windows Server 2012 and Hyper-V Server 2012 enables customers to achieve massive scale, with an unparalleled number of nodes within a cluster and virtual machines per cluster. Unfortunately, the free vSphere Hypervisor alone doesn't provide any high availability or resiliency features, and customers must purchase vSphere 5.1 to unlock them – and even then, cluster sizes are restricted to 32 nodes and 4,000 VMs per cluster, which is considerably smaller than the 64 nodes and 8,000 VMs supported by Windows Server 2012 Hyper-V and Hyper-V Server 2012.
Now, both Windows Server 2012 Hyper-V and Hyper-V Server 2012 also introduce a number of enhanced storage capabilities to support the most intensive, mission-critical workloads. These capabilities include:

Virtual Fibre Channel – enables virtual machines to integrate directly into Fibre Channel Storage Area Networks (SANs), unlocking scenarios such as Fibre Channel-based Hyper-V guest clusters or increased performance for storage-intensive workloads running in a virtual environment.

Support for 4-KB disks in Hyper-V – support for the new advanced-format drives with 4,096-byte (4-KB) sectors lets customers take advantage of the emerging innovation in storage hardware for increased capacity and reliability. Hyper-V supports this from the word go.

And finally, as we touched on earlier, the new virtual hard disk format. This new format, called VHDX, is designed to better handle current and future workloads, and addresses the technological demands of an enterprise's evolving needs by increasing storage capacity, protecting data, improving performance (especially on 4-KB disks), and providing additional operation-enhancing features. The maximum size of a VHDX file is 64TB, compared with just 2TB in the previous release.
Storage Spaces transforms SAS and SATA disks into storage pools, from which logical disks, or storage spaces, can then be provisioned. These spaces can be given different levels of resiliency and performance, can be thinly or fully provisioned, and support advanced features such as trim. Storage Spaces enables you to deliver a new category of highly capable storage solutions to all Windows customer segments at a dramatically lower price point. At the same time, you can maximize operations by leveraging commodity storage to supply high-performance and feature-rich storage to servers, clusters, and applications alike.

With offloaded data transfer support, the Hyper-V host can concentrate on the processing needs of the application and workload, and offload any storage-related tasks to the SAN, increasing performance across the complete solution.

And finally, with its reduced footprint, Microsoft Hyper-V Server 2012 supports installation to USB media, providing more deployment flexibility, especially in scenarios such as diskless servers. This is specific to Hyper-V Server 2012.
Now we've talked about a number of those capabilities, but I think it's important to see some of them in action and show how they're set up and configured, so without further ado let's duck into a demo.

Okay, so here we are in our demo infrastructure, where we've got four physical Hyper-V hosts all running Microsoft Hyper-V Server 2012, and the first three, nodes 2, 3 and 4, are configured in a cluster. All three of these clustered hosts are also configured to support Fibre Channel, and if we look at node 4 specifically and go over to the Virtual SAN Manager: Hyper-V allows you to define virtual SANs on the host to accommodate scenarios where a single Hyper-V host is connected to different SANs through multiple Fibre Channel ports. A virtual SAN defines a named group of physical Fibre Channel ports from the host that are connected to the same physical SAN. For example, assume a Hyper-V host is connected to two SANs, a production SAN and a test SAN, through two physical Fibre Channel ports each. In this example you might configure two virtual SANs: one named Production, with two physical Fibre Channel ports connected, and one named Test, with the two remaining physical Fibre Channel ports connected. You can configure as many as four Fibre Channel adapters on a virtual machine and associate each one with a virtual SAN, so a single VM could actually be attached to four SANs. When we create a virtual Fibre Channel SAN, we're literally just ticking the boxes to represent the physical HBAs in the host that will allow our virtual machines to ultimately connect through to that SAN.
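The same configuration can be scripted. This is a hedged sketch using the WS2012 `New-VMSan` and `Add-VMFibreChannelHba` cmdlets; the SAN and VM names are illustrative, and it assumes the host's HBAs are visible to `Get-InitiatorPort`.

```powershell
# Define a virtual SAN from the host's physical Fibre Channel ports
$hba = Get-InitiatorPort | Where-Object ConnectionType -eq 'Fibre Channel'
New-VMSan -Name 'Production' -HostBusAdapter $hba

# Attach a virtual Fibre Channel adapter to a VM (the VM must be off);
# up to four adapters per VM, each associated with one virtual SAN.
Add-VMFibreChannelHba -VMName 'Demo6' -SanName 'Production'
```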
So we'll cancel out of there and look at the settings of a virtual machine we've already configured. This particular VM already has two Fibre Channel adapters; if we wanted more we'd use Add Hardware, but I can't add a Fibre Channel adapter while the VM is running, that's something I need to do while it's off. What you'll notice in this particular Fibre Channel adapter is that it's got two sets of addresses: address set A and address set B. To support live migration of VMs across Hyper-V hosts while maintaining Fibre Channel connectivity, two World Wide Names are configured for each Fibre Channel adapter, set A and set B, and Hyper-V automatically alternates between the set A and set B World Wide Node addresses during a live migration. This ensures that all LUNs are available on the destination host before the migration and that no downtime occurs during the migration. So when you're masking and zoning your LUNs on your Fibre Channel storage, you take both of these addresses into account, for this Fibre Channel adapter and for the other one.

But what does this look like inside the virtual machine? Let's take a quick look. I'm going to double-click on Demo 6 and go into that virtual machine, where I have a connection open already. If we look inside Device Manager you'll see, down under storage controllers, the Hyper-V Fibre Channel HBA, the representation of that HBA. You'll also see that because I've got two Fibre Channel adapters I've enabled MPIO, or multipathing, inside the virtual machine, and I've done that through the same configuration wizard I'd use if this were a physical host, so I've got that consistency. And by importing the relevant HP drivers, I've been able to see the actual HP multipath device that is in my environment.
So consistency, a consistent user experience. Now let me come out of there and show you this particular disk. You'll see it's a Fibre Channel disk, 50 GB in size, and we're consuming around 4 GB, maybe just less, on this drive. But if we drill down into that drive you'll see it actually holds about 126 GB worth of content, in this case virtual hard disk files. So I've got 126 GB worth of data on the drive, yet I'm only consuming around 3 to 4 GB. What's going on there?

Well, one of the new storage features within Windows Server 2012 is in-box deduplication. If we go to the local server and look at File and Storage Services and then Volumes, you'll see that for this Fibre Channel disk, where I've turned on deduplication, I'm at a deduplication rate of 97%. We've deduplicated more than a single virtual hard disk file would consume on its own, down to around 3 to 4 GB. That's a huge saving, 123 GB versus what we'd need if we didn't have deduplication, and it's a great example of using technology built into Windows Server 2012 to be more efficient and effective.

Now, we also have a new capability within Windows Server 2012 that I highlighted in the slides earlier on, called Storage Spaces. If I go to the Storage Pools section, you'll see that I haven't actually created a storage space yet; the default view just shows me my physical disks down here, and you'll notice this particular machine has three physical disks attached, each 64 TB in size. That's a huge amount of storage available to this machine.
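The in-box deduplication shown in the demo can also be driven from PowerShell. A minimal sketch, assuming the Data Deduplication feature is installed and using an illustrative volume letter:

```powershell
# The feature must be installed first:
#   Add-WindowsFeature FS-Data-Deduplication
Import-Module Deduplication

# Turn on deduplication for the volume holding the VHDX library
Enable-DedupVolume -Volume 'E:'

# Kick off an optimization job now rather than waiting for the schedule
Start-DedupJob -Volume 'E:' -Type Optimization

# Check the savings once the job has run
Get-DedupStatus -Volume 'E:' |
    Format-List FreeSpace, SavedSpace, OptimizedFilesCount
```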
But these are just regular disks at this point, they're not yet a storage space, so I'll create a new storage pool. We'll go Next, give it a name, we'll call it Space 1 for now, and it looks at the disks available in the primordial pool to use as part of this space. We'll go Next and select these physical disks; notice I can change the allocation as well, whether I want a particular disk to be a hot spare or part of the pool automatically. We'll go Next, and you'll see I'm going to have 192 TB of capacity, which is great, so Next and Create. It updates, finishes the process, and notice I've got the option to create a virtual disk when this wizard closes.

So we'll go ahead and create that virtual disk from the storage pool. It's important not to confuse this kind of virtual disk with a virtual hard disk; this is a virtual disk that could carry a volume used for a file share or some other workload. We'll go Next, choose the pool I've created, Space 1, go Next, give it a name, Disk 1, and go Next. Now, do I want this to be Simple, where data is striped across all three disks, giving me maximum capacity and throughput but no resiliency? Mirror would duplicate data across two or three physical disks, increasing reliability, but I'd sacrifice some capacity. Parity stripes data and parity information across the physical disks, increasing reliability while somewhat reducing capacity; I'm going to go with Parity on this one. Now, how do I want to provision this disk?
Fixed, which uses the complete space straight away, or thin provisioning? I'm going to go thin. Next; for size I'll choose a virtual disk size of, let's say, 100 TB; then Next and Create. It creates all of those elements, and the final step of the wizard is creating a volume, because what we've got at the moment presents to Windows as a blank 100 TB disk, part of a pool with a 192 TB maximum capacity, but without a volume we can't actually store data on it. So we'll go Close, make sure that tick box is ticked, and go Next. Which machine am I provisioning this to, and on which disk? You'll see the 100 TB virtual disk we just created, Disk 1, which Windows surfaces as Disk 5. Next; volume size, let's make it the full amount; drive letter, we'll leave the default; file system NTFS, volume label HugeDrive, all left as default; the wizard even asks whether to enable deduplication, which is built into Windows Server. Then Next and Create.

So very quickly we've taken three effectively blank disks, transformed them into a storage space, provisioned a virtual disk on that space, and created a 100 TB volume from it. Do we actually have 100 TB physically in my infrastructure to use? No, but this has all been thin provisioned. I'll close that out, and you'll see we've now got our storage space with a thin disk provisioned; it's currently been allocated about 2 GB of the 192 TB maximum. If we look at the volume view, there's our F: drive, HugeDrive, 100 TB; it knows it's thin provisioned, so it's aware of this kind of thing. And if I go to Windows Explorer now, there's our huge drive, 100 TB in size, consuming minimal amounts at this point, but as we copy data to it we'll see more.
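The whole wizard sequence above, pool, virtual disk and volume, can be sketched in PowerShell with the WS2012 Storage cmdlets. The pool, disk and label names follow the demo; the storage subsystem wildcard is an assumption about the default subsystem name.

```powershell
# Pool the available (primordial) disks into a storage space
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName 'Space 1' `
    -StorageSubSystemFriendlyName 'Storage Spaces*' -PhysicalDisks $disks

# Thin-provisioned, parity-resilient 100 TB virtual disk from the pool
New-VirtualDisk -StoragePoolFriendlyName 'Space 1' -FriendlyName 'Disk 1' `
    -ResiliencySettingName Parity -ProvisioningType Thin -Size 100TB

# Bring the new disk online and lay down an NTFS volume
Get-VirtualDisk -FriendlyName 'Disk 1' | Get-Disk |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel 'HugeDrive'
```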
But what can we do to expand the infrastructure? What if we want to add another 64 TB disk? To do that I'll minimize this particular VM and go over to Failover Cluster Manager, because this virtual machine is clustered and from an administrator's perspective I should really be doing everything from here. I'll go to the settings of the VM, and you'll see it's already got three 64 TB disks attached, the three disks we saw earlier. I'm going to go to the SCSI controller, Hard Drive, Add; remember, this virtual machine is running at this point. We'll create a new VHDX, Dynamic, so it starts small and grows, and call it Disk 4; it's a VHDX file, which gives us that increased capacity. I'll choose where to store it, alongside my other virtual disks in that folder, go Next, set the size to 64 TB, and go Finish and Apply.

If I go back to that virtual machine now and do a rescan for disks, there we find the 64 TB disk we've just created. What can I do with it? Well, I can go to my storage pools, select Space 1, and add the extra physical disk to it: tick, OK, and just like that we've increased the capacity of our storage pool to 256 TB. If I scroll down to our virtual disk, I could now extend it to something even greater and then increase the size of the volume as well. Notice I could also remove a disk if I wanted to; we'll go Yes, rebuild everything on the fly, Yes, OK. So very quickly we've added a disk to a storage space and removed a disk from a storage space: incredibly easy, simple and effective.
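The hot-add and pool-expansion steps can be sketched the same way; paths and names are illustrative, the first two commands run on the Hyper-V host and the last two inside the guest.

```powershell
# On the host: hot-add a new 64 TB dynamic VHDX to the running VM's
# SCSI controller (hot-add requires the SCSI controller, not IDE)
New-VHD -Path 'C:\VMs\Disks\Disk4.vhdx' -SizeBytes 64TB -Dynamic
Add-VMHardDiskDrive -VMName 'Demo6' -ControllerType SCSI `
    -Path 'C:\VMs\Disks\Disk4.vhdx'

# Inside the guest: rescan for the new disk, then grow the pool
Update-HostStorageCache
Add-PhysicalDisk -StoragePoolFriendlyName 'Space 1' `
    -PhysicalDisks (Get-PhysicalDisk -CanPool $true)
```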
Welcome back, and I hope you found the demo useful. But how does VMware compare with some of those things we talked about earlier on?

Well, as shown in the table, both Windows Server 2012 Hyper-V and the free Hyper-V Server 2012 provide a significant number of advantages over both the vSphere Hypervisor and vSphere 5.1 Enterprise Plus. Customers building virtualized infrastructures today require the highest levels of availability and performance, and wish to maximize the investment in their chosen technologies to help drive their business forward. With Microsoft, the ability to use Device Specific Modules (DSMs) produced by storage vendors, in conjunction with the Multipath I/O (MPIO) framework within Windows Server and Hyper-V Server, ensures that customers run their workloads on an optimized configuration from the start, as the storage vendor intended, providing the highest levels of performance and availability. This MPIO framework is built into the Windows Server platform at no cost. Unfortunately, the vSphere Hypervisor doesn't provide the ability to use these storage-vendor-specific optimizations; in fact, only the Enterprise and Enterprise Plus editions of vSphere 5.1, through a feature known as the vStorage APIs for Multipathing, provide this capability, meaning customers have to upgrade to higher, more costly editions in order to unlock the best performance from their storage investments.

When implementing a virtualized infrastructure, customers today look to the future to understand the new technology trends and innovations coming down the line. One of those innovations is the rapidly emerging Advanced Format drive, with a 4-KB physical sector size.
These disks bring an increase in performance and are natively supported by Windows Server 2012 Hyper-V and Hyper-V Server 2012, but unfortunately are not supported by the vSphere Hypervisor and vSphere 5.1, restricting future hardware upgrades.

As customers introduce larger, more powerful workloads into their virtual environments, the amount of data associated with those workloads will grow over time. Fortunately, Windows Server 2012 Hyper-V and Hyper-V Server 2012 support the quick and efficient creation of virtual disks of up to 64 TB in size, allowing huge databases, file repositories or document archives to be stored within individual disks. Whilst VMware's proprietary file system, VMFS-5, supports datastore sizes of 64 TB, the Virtual Machine Disk format (VMDK) is restricted to 2 TB, meaning customers have to use the less flexible, less portable Raw Device Mappings (RDMs). If customers do choose to implement RDMs, 64 TB is the maximum supported size; with Microsoft, however, Hyper-V places no specific maximum on the size of a pass-through disk. The maximum size is ultimately determined by what the guest operating system supports. This ensures that the largest data-driven workloads can be virtualized on Hyper-V with ease.

Now, we mentioned earlier a capability known as third-party multipathing, and how it enables customers to optimize their host-to-SAN integration and connectivity, maximizing their investment in both of these key elements of the virtualized infrastructure whilst providing the highest levels of performance and availability for their critical workloads. Offloaded Data Transfer (ODX) is a key capability of Windows Server 2012 Hyper-V and Hyper-V Server 2012, and is another of those features that enables organizations to maximize their investment in existing technologies.
By integrating Windows Server 2012 Hyper-V and Hyper-V Server 2012 with an ODX-capable storage array, many of the storage-related tasks that would normally use valuable CPU and network resources on the Hyper-V hosts, are offloaded to the array itself, executing much faster, increasing performance significantly, and unlocking extra resources on the hosts themselves. VMware offer a similar capability, known as the vStorage APIs for Array Integration, VAAI, but unfortunately, this capability is only available in the Enterprise and Enterprise Plus editions of vSphere 5.1, meaning customers, again, have to upgrade to higher editions to achieve higher performance from their hardware investments.
Now Windows Server 2012 Hyper-V and Hyper-V Server 2012 also introduce a number of enhanced resource management capabilities. These include: Dynamic Memory improvements, which dramatically increase virtual machine consolidation ratios and improve reliability for restart operations, leading to lower costs, especially in environments such as VDI that have many idle or low-load virtual machines. Resource Metering, which provides the ability to track and report the amount of data transferred per IP address or virtual machine, to help ensure accurate chargebacks. Quality of Service, which provides the ability to programmatically adhere to a service level agreement (SLA) by specifying the minimum bandwidth available to a virtual machine or port, and prevents latency issues by capping the maximum bandwidth a virtual machine or port can use. And finally, Data Center Bridging, which takes advantage of the latest innovation in hardware and reduces the cost and difficulty of maintaining separate network, management, live migration and storage traffic, by using a modern, converged infrastructure and hardware.
Now let's have a look at some of those things through a quick demonstration. So here we are back in the demo environment; this is a slightly different setup to show some different demonstrations and capabilities. First of all, I want to talk about the improvements in Dynamic Memory, which, as we discussed before, help organizations realize better density and a better return on investment. What we can see here is that one of my VMs, Demo01, has been enabled for Dynamic Memory, with a minimum and maximum between half a GB and one GB. The current memory Hyper-V has assigned to the VM is half a GB, the minimum, but the demand inside the guest OS is relatively low, actually lower than the minimum. You'll also notice this new option within Windows Server 2012 Hyper-V to specify the startup memory. Certain VMs need more memory at startup to get certain things done and started initially, and once they've calmed down they can drop their memory requirement below that startup figure, back towards the minimum. We've kept the two the same for this example, but it's up to you. What we've also got here is the opportunity to hot-add dynamic memory, to really make it dynamic and scale up the workload as it grows.
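The Dynamic Memory settings from the demo map onto the `Set-VMMemory` cmdlet; the VM name and sizes mirror the demo and are otherwise illustrative.

```powershell
# Enable Dynamic Memory with separate startup, minimum and maximum.
# Startup may exceed the minimum, since many guests need more memory
# to boot than they do once idle.
Set-VMMemory -VMName 'Demo01' -DynamicMemoryEnabled $true `
    -StartupBytes 512MB -MinimumBytes 512MB -MaximumBytes 1GB

# New in Windows Server 2012: raise the ceiling while the VM is running
Set-VMMemory -VMName 'Demo01' -MaximumBytes 2GB
```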
So we'll go into this particular virtual machine, Demo01, and run a little application that consumes memory, increasing the memory usage of our virtual machine. If we minimize back out of there, we'll see a warning: memory demand is higher than the 1 GB maximum. New in Windows Server 2012, I can go into the settings of this VM, go to Memory, and adjust the maximum on the fly to something more appropriate, so I'll choose 2 GB, Apply, OK. You'll see the assigned memory grow as Hyper-V starts to give it a little more, but demand keeps growing too, because that little application running inside is continuing to need more and more memory. We could go ahead and assign more, but in fact I'm just going to kill it. The memory demand now starts to shrink considerably, and Hyper-V will begin to reclaim the assigned memory; even though there's no contention, over time it will claim that memory back and give it to other virtual machines when they need it.

Now, we mentioned before that Resource Metering is a way to track certain metrics associated with virtual machines, even as they migrate and float around the infrastructure using the migration technologies. I want to show you how easy it is to set that up. In my Notepad file here I've got an example of a script you would write in PowerShell. First, on the host, you enable resource metering with a simple command; then you set your interval, how often you query and check; and finally the last command lets us actually query and report what's been used by the virtual machine. I'll just copy that and bring up PowerShell.
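The three steps described above look roughly like this with the WS2012 Hyper-V cmdlets; the VM name and interval are illustrative.

```powershell
# 1. Enable metering for one VM (or pipe Get-VM to cover them all)
Enable-VMResourceMetering -VMName 'Demo01'

# 2. Set how often accumulated metering data is saved on the host
Set-VMHost -ResourceMeteringSaveInterval (New-TimeSpan -Hours 1)

# 3. Query the collected figures; the output object can be piped on
#    to CSV, a database, or a chargeback system
Measure-VM -VMName 'Demo01' |
    Select-Object VMName, AvgCPU, AvgRAM, MaxRAM, TotalDisk
```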
And you'll see, for the virtual machines that have been running over that period of time, the average CPU in megahertz, the average memory and the maximum memory, and we're able to use that, particularly in solutions such as System Center, to build showback and chargeback models that represent the usage and give that information back to the business. Really easy, and because this is PowerShell it can be piped out to other systems as well.

So that was a little bit about Resource Metering; I want to finish on Quality of Service and show you how easy it is to define QoS for certain key workloads and applications. If I look inside this particular virtual machine, Demo001, I'm going to run a little client network application that fires traffic at a receiver, sending a certain amount of traffic in a certain amount of time. You'll see in this case that in five seconds it was able to send 230 MB of data at a rate of 385 Mb/sec. Now, if I press Continue, close that, come out of this virtual machine and look at its settings on the host: on the network adapter that's connected, we turn on bandwidth management and set a maximum of, say, 50 Mbps, so we're stopping this virtual machine being particularly noisy, which could impact other virtual machines. If you're interested in giving a certain VM an SLA, a minimum level of bandwidth, you can enter that in the minimum column as well. I'll click OK, go back to the virtual machine and run the same test again. This time it didn't take any longer, because the test always runs for a five-second interval, but you'll see the amount transferred was a lot smaller, at the much lower transfer rate we were enforcing through Quality of Service.
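The same bandwidth cap can be applied from PowerShell with `Set-VMNetworkAdapter`; the VM name is from the demo, the values are illustrative, and note the cmdlet takes bits per second rather than the Mbps shown in the GUI.

```powershell
# Cap the noisy VM at roughly 50 Mbps (value is in bits per second)
Set-VMNetworkAdapter -VMName 'Demo001' -MaximumBandwidth 50000000

# Or guarantee a floor for SLA scenarios; an absolute minimum requires
# the virtual switch to have been created in absolute bandwidth mode
Set-VMNetworkAdapter -VMName 'Demo001' -MinimumBandwidthAbsolute 10000000
```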
It’s a really simple demonstration but very, very powerful stuff available to customers of all shapes and sizes, very, very useful indeed. And with that, we’ll go back to the slides.
Welcome back, I hope you found the demo useful, but how does VMware compare with some of these capabilities? Well, as shown in the table, when it comes to memory management, Windows Server 2012 Hyper-V and Hyper-V Server 2012, along with both the VMware vSphere Hypervisor and vSphere 5.1, all provide techniques to better utilize virtual machine memory, increase density and maximize the return on investment. However, Microsoft's approach to memory management is different from VMware's. VMware claim that through their four memory management techniques, ballooning, transparent page sharing, compression and swapping, they can provide a virtual machine density greater than that of Hyper-V, yet in reality this simply isn't true. All four of these techniques only operate as a reactive measure, when the host is heavily laden and under memory pressure. Take Transparent Page Sharing (TPS): with the majority of hardware platforms now supporting higher-performance 2 MB large pages by default, TPS is unable to deduplicate memory pages as easily as it could before large pages, so the capability becomes significantly less useful. Under memory pressure, the ESXi host will break large memory pages down into smaller 4 KB pages, which it can then deduplicate, freeing up memory, but unfortunately this process doesn't occur without a cost to already limited host resources.
With compression and swapping, whilst both help to keep virtual machines operable, it's too little too late, with the performance of key workloads by that point severely degraded. With Dynamic Memory, Hyper-V works intuitively with the guest OS, delivering and reclaiming memory from the virtual machine in a way that is optimal for the guest OS, ensuring resources are provided appropriately and a consistent level of performance is achieved for key workloads, ultimately providing the highest levels of density and the greatest return on investment.

As we move towards more cloud-oriented infrastructures, especially in multi-tenant environments, hosting providers and enterprises must be able to measure the amount of data center resources (compute, network, and storage) consumed by each workload. These measurements can be used to charge external customers (known as chargeback), or for internal accounting (known as showback) in an enterprise's cross-departmental budget management scenarios. Resource Metering, a standard feature of Windows Server 2012 Hyper-V and Hyper-V Server 2012, when combined with new performance counters, exposes a wealth of information from which chargeback and showback models can be built. While the vSphere Hypervisor and vSphere 5.1 both enable the capturing of information within vCenter, organizations must purchase vCenter Chargeback Manager, at additional cost on top of vSphere 5.1, in order to utilize that information in a meaningful manner.

Whilst chargeback and showback are two important elements of a private cloud, ensuring service levels are met is equally important, whether the primary business is that of a hosting provider serving external customers, or an enterprise organization serving internal business units with chargeable resources.
Either way, ensuring the highest levels of performance is imperative, and with Windows Server 2012 Hyper-V and Hyper-V Server 2012, Quality of Service (QoS) is a standard feature, enabling organizations to ensure that Service Level Agreements (SLAs) for key workloads are met while, at the same time, intensive, noisy virtual machines don't consume more than their allocated allowance. With VMware, however, QoS is only available in the Enterprise Plus edition of vSphere 5.1, so customers who wish to implement stringent SLAs must upgrade, at additional cost, to VMware's highest edition.
Virtualized data centers are becoming more popular and practical every day. IT organizations and hosting providers have begun offering infrastructure as a service (IaaS), which provides more flexible, virtualized infrastructures to customers: "server instances on-demand." Because of this trend, IT organizations and hosting providers must offer customers enhanced security and isolation from one another. If a service provider's infrastructure is hosting two companies, the IT admin must help ensure that each company is provided its own privacy and security. Before Windows Server 2012, server virtualization provided isolation between VMs, but the network layer of the data center was still not fully isolated, and implied layer-2 connectivity between different workloads running over the same infrastructure.

For the hosting provider, isolation in the virtualized environment must be equal to the isolation in the physical data center, to meet customer expectations and not be a barrier to cloud adoption. Isolation is almost as important in an enterprise environment. Although all internal departments belong to the same organization, certain workloads and environments, such as finance and human resources systems, must still be isolated from each other. IT departments that offer private clouds and move to an Infrastructure as a Service model must consider this requirement and provide a way to isolate such highly sensitive workloads. Windows Server 2012 Hyper-V and Hyper-V Server 2012 deliver new security and isolation capabilities through the Hyper-V Extensible Switch.
Now with Windows Server 2012 and Hyper-V Server 2012, you can configure Hyper-V servers to enforce network isolation among any set of arbitrary isolation groups, which are typically defined for individual customers or sets of workloads. Windows Server 2012 Hyper-V and Hyper-V Server 2012 provide the isolation and security capabilities for multi-tenancy through the following features.

Virtual machine isolation with PVLANs. VLAN technology is traditionally used to subdivide a network and provide isolation for individual groups that share a single physical infrastructure. Windows Server 2012 introduces support for private VLANs (PVLANs), a technique used with VLANs that can provide isolation between two VMs on the same VLAN. When a VM doesn't need to communicate with other virtual machines, you can use PVLANs to isolate it from other virtual machines in your data center. Each virtual machine in a PVLAN is assigned one primary VLAN ID and one or more secondary VLAN IDs, and the secondary PVLANs can be put into one of three modes, which determine which other virtual machines on the PVLAN a virtual machine can talk to. If you want to isolate a virtual machine, put it in isolated mode. Promiscuous ports can exchange packets with any other port on the same primary VLAN ID, and community ports on the same secondary VLAN ID can exchange packets with each other at layer 2. So isolated, promiscuous and community: three choices there.

The Hyper-V Extensible Switch also protects against a malicious virtual machine stealing IP addresses from other virtual machines through ARP spoofing (also known as ARP poisoning) in IPv4. In this type of man-in-the-middle attack, a malicious virtual machine sends a fake ARP message, which associates its own MAC address with an IP address it doesn't own. Unsuspecting virtual machines then send the network traffic targeted at that IP address to the MAC address of the malicious virtual machine instead of the intended destination.
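The three PVLAN modes described above can be set per adapter with `Set-VMNetworkAdapterVlan`; the VM names and VLAN IDs here are illustrative.

```powershell
# Isolated: the VM can talk only to promiscuous ports on primary VLAN 10
Set-VMNetworkAdapterVlan -VMName 'TenantVM' -Isolated `
    -PrimaryVlanId 10 -SecondaryVlanId 200

# Promiscuous: sees traffic from all the listed secondary VLANs
Set-VMNetworkAdapterVlan -VMName 'RouterVM' -Promiscuous `
    -PrimaryVlanId 10 -SecondaryVlanIdList 200-210

# Community: members of secondary VLAN 201 can talk among themselves
Set-VMNetworkAdapterVlan -VMName 'AppVM' -Community `
    -PrimaryVlanId 10 -SecondaryVlanId 201
```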
For IPv6, Windows Server 2012 and Hyper-V Server 2012 provide equivalent protection against ND (Neighbor Discovery) spoofing.

DHCP Guard protection. In a DHCP environment, a rogue DHCP server could intercept client DHCP requests and provide incorrect address information, causing traffic to be routed to a malicious intermediary that sniffs all traffic before forwarding it to the legitimate destination. To protect against this man-in-the-middle attack, the Hyper-V administrator can designate which Hyper-V Extensible Switch ports may have DHCP servers connected to them; DHCP server traffic from other Extensible Switch ports is automatically dropped. The Hyper-V Extensible Switch thus protects against a rogue DHCP server attempting to hand out IP addresses that would cause traffic to be rerouted.

Virtual port ACLs for network isolation and metering. Port ACLs provide a mechanism for isolating networks and metering network traffic for a virtual port on the Hyper-V Extensible Switch. By using port ACLs, you can control which IP addresses or MAC addresses can or can't communicate with a virtual machine. For example, you can use port ACLs to enforce isolation of a virtual machine by letting it talk only to the Internet, or only to a predefined set of addresses. Using the metering capability, you can measure network traffic going to or from a specific IP address or MAC address, which lets you report on traffic sent to or received from the Internet or from network storage arrays. You can configure multiple port ACLs for a virtual port; each ACL consists of a source or destination network address and a permit, deny or meter action. The metering capability also reports the number of times traffic was attempted to or from a virtual machine from a restricted address.

Trunk mode to virtual machines is also very useful.
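DHCP Guard and port ACLs are both configured per virtual network adapter; a minimal sketch with illustrative VM names and addresses:

```powershell
# Drop DHCP-server traffic (and rogue router advertisements) coming
# from this tenant's adapter
Set-VMNetworkAdapter -VMName 'TenantVM' -DhcpGuard On -RouterGuard On

# Port ACLs: allow one subnet, meter everything outbound to anywhere
Add-VMNetworkAdapterAcl -VMName 'TenantVM' -RemoteIPAddress 10.0.10.0/24 `
    -Direction Both -Action Allow
Add-VMNetworkAdapterAcl -VMName 'TenantVM' -RemoteIPAddress 0.0.0.0/0 `
    -Direction Outbound -Action Meter
```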
A VLAN makes a set of host machines or virtual machines appear to be on the same local LAN, independent of their actual physical locations. With the Hyper-V Extensible Switch's trunk mode, traffic from multiple VLANs can now be directed to a single network adapter in a virtual machine that could previously receive traffic from only one VLAN. As a result, traffic from different VLANs is consolidated, and a virtual machine can listen in on multiple VLANs, which can help you shape network traffic and enforce multi-tenant security in your data center.

And finally, monitoring. Many physical switches can monitor the traffic flowing through specific ports; the Hyper-V Extensible Switch also provides this port mirroring. You can designate which virtual ports should be monitored and to which virtual port the monitored traffic should be delivered for further processing, for example to a security monitoring VM that looks for anomalous patterns in the traffic flowing through other virtual machines on the switch. In addition, you can diagnose network connectivity issues by monitoring traffic bound for a particular virtual switch port.

And all of this is managed through common interfaces. Windows Server 2012 provides Windows PowerShell cmdlets for the Hyper-V Extensible Switch that let you build command-line tools or automated scripts for setup, configuration, monitoring and troubleshooting; these cmdlets can also be run remotely. PowerShell also enables third parties to build their own tools to manage the Extensible Switch.
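Trunk mode and port mirroring are again per-adapter settings; VM names and VLAN ranges below are illustrative.

```powershell
# Trunk VLANs 10-20 into one virtual NIC; untagged frames map to VLAN 10
Set-VMNetworkAdapterVlan -VMName 'ApplianceVM' -Trunk `
    -AllowedVlanIdList 10-20 -NativeVlanId 10

# Port mirroring: copy traffic from a monitored VM's port to the port
# of a monitoring VM on the same virtual switch
Set-VMNetworkAdapter -VMName 'WebVM' -PortMirroring Source
Set-VMNetworkAdapter -VMName 'MonitorVM' -PortMirroring Destination
```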
Now many enterprises need the ability to extend virtual switch features with their own plug-ins to suit their environment. If you're in charge of making IT purchasing decisions at your company, you want to know that the virtualization platform you choose won't lock you in to a small set of compatible features, devices, or technologies. In Windows Server 2012 and Hyper-V Server 2012, the Hyper-V Extensible Switch provides the extensibility our partners are looking for. It is a layer-2 virtual network switch that provides programmatically managed and extensible capabilities to connect VMs to the physical network. It is an open platform that lets multiple vendors provide extensions written to standard Windows API frameworks. The reliability of extensions is strengthened through the Windows standard framework, the reduction of required third-party code for functions, and backing from the Windows Hardware Quality Labs certification program. The IT admin can manage the Hyper-V Extensible Switch and its extensions using PowerShell, programmatically with WMI, or through the Hyper-V Manager user interface. Several partners, as you can see on the screen, have already announced extensions for the Hyper-V Extensible Switch: Cisco with its Nexus 1000V and UCS Virtual Machine Fabric Extender (VM-FEX), NEC with its OpenFlow technology, 5nine with its innovative security technology, and finally InMon with a monitoring-focused sFlow product. These extensions span packet inspection, packet filtering, network forwarding, and intrusion detection, ensuring comprehensive levels of granularity and control for specific customer scenarios.
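The plug-in idea behind the Extensible Switch, where independent extensions each see a packet in turn and may inspect, modify, or drop it, can be sketched as a simple processing chain. All names here are invented for illustration; real extensions are NDIS filter drivers, not Python classes.

```python
class Extension:
    """Base extension: return the packet to pass it on, or None to drop it."""
    def process(self, packet):
        return packet

class MonitorExtension(Extension):
    """Passive inspection extension: observes every packet, never modifies it."""
    def __init__(self):
        self.seen = 0
    def process(self, packet):
        self.seen += 1
        return packet

class FirewallExtension(Extension):
    """Filtering extension: drops packets destined for blocked ports."""
    def __init__(self, blocked_ports):
        self.blocked = set(blocked_ports)
    def process(self, packet):
        return None if packet["dst_port"] in self.blocked else packet

def run_switch(extensions, packet):
    """Pass a packet through the extension stack; any extension may drop it."""
    for ext in extensions:
        packet = ext.process(packet)
        if packet is None:
            return None
    return packet

monitor = MonitorExtension()
firewall = FirewallExtension(blocked_ports={23})  # e.g. block telnet
stack = [monitor, firewall]
```

The point of the design is that the monitoring and filtering vendors never need to know about each other; each extension plugs into the same framework.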
But what about VMware? While VMware offers an advanced distributed network switch, it is unfortunately only available in the Enterprise Plus edition of vSphere 5.1, so customers wishing to take advantage of the increased granularity, management capability, and control have to upgrade to the highest edition, at substantial cost. The free VMware vSphere Hypervisor doesn't provide this capability at all. A key point to note, however, is that the vSphere vSwitch isn't open and extensible, but instead closed and replaceable. Until recently, Cisco was the only vendor to offer an alternative to the VMware vSphere Distributed Switch; IBM has now released an alternative as well. With Windows Server 2012 Hyper-V and Hyper-V Server 2012, by contrast, there is already commitment from four partners to deliver extended functionality across a variety of extension types, from packet inspection and filtering through to forwarding and intrusion detection, offering customers a greater set of choices for their specific environment. It's also important to note that so far, the approach from VMware's partners has been more about replacement than integration, with the Cisco Nexus 1000V and the IBM System Networking Distributed Virtual Switch 5000V both effectively replacing the inbox vSphere Distributed Switch. Many of the more advanced networking capabilities in Windows Server 2012 Hyper-V and Hyper-V Server 2012 are not present in the free vSphere Hypervisor, and even with vSphere 5.1 Enterprise Plus, key security protection capabilities such as ARP and ND spoofing protection, DHCP snooping protection and DHCP Guard, along with virtual port access control lists, are only available through the purchase of additional technologies on top of vSphere 5.1: either the App component of the vCloud Networking & Security product, or the network switch technologies from vendors such as Cisco.
This means that, again, customers have to add additional, costly technology in order to protect against these threats. With Hyper-V Extensible Switch trunk mode, traffic from multiple VLANs can be directed to a single network adapter; this capability also exists in the vSphere Enterprise Plus edition, but isn't available in the free vSphere Hypervisor. And finally, the Hyper-V Extensible Switch gives organizations the ability not only to monitor individual ports within a vSwitch, but also to mirror the traffic that is passing to an alternative location for further analysis. With the VMware vSphere Hypervisor, however, all traffic on a particular port group or vSwitch on which 'Promiscuous Mode' is enabled is exposed, posing a potential risk to the security of that network. That lack of granularity restricts its use in real-world environments, and means that customers who require the next level of protection have to upgrade to vSphere 5.1 Enterprise Plus, which has the Distributed Switch technology to provide the capability through features such as NetFlow and port mirroring.
Windows Server 2012 Hyper-V and Hyper-V Server 2012 also include a number of performance enhancements in the networking stack to help customers virtualize their most intensive workloads. Virtual Machine Queue (VMQ), introduced in Windows Server 2008 R2 Hyper-V, enables, when combined with VMQ-capable network hardware, a more streamlined and efficient delivery of packets from the external network to the virtual machine, reducing the overhead on the host operating system. In Windows Server 2012, this has been streamlined and improved considerably, with Dynamic Virtual Machine Queue spreading the processing of network traffic more intelligently across CPUs and cores in the host, resulting in higher networking performance. When it comes to security, many customers are familiar with IPsec, which protects network communications by authenticating and encrypting some or all of the contents of network packets. IPsec Task Offload in Windows Server 2012 leverages the hardware capabilities of server NICs to offload IPsec processing of traffic, significantly reducing the CPU overhead of IPsec encryption and decryption. In Windows Server 2012 it is extended to VMs as well: customers using VMs who want to protect their network traffic with IPsec can take advantage of the IPsec hardware offload capability, freeing up CPU cycles to perform more application-level work and leaving the per-packet encryption and decryption to hardware. And finally, when it comes to virtual networking, a primary goal is native I/O throughput. Windows Server 2012 Hyper-V and Hyper-V Server 2012 add the ability to assign SR-IOV functionality from physical devices directly through to virtual machines. This gives VMs the ability to bypass the software-based Hyper-V virtual switch for network data and directly address the NIC. As a result, CPU overhead and latency are reduced, with a corresponding rise in throughput.
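The "spreading processing more intelligently across CPUs" idea behind Dynamic VMQ can be illustrated with a tiny greedy balancer. This is a hypothetical model of the concept, not the actual Windows scheduling algorithm; function names and units are invented.

```python
def rebalance(queue_load, num_cpus):
    """Greedy spread: assign the busiest VM queues first, each to the
    currently least-loaded CPU, instead of pinning everything to one core."""
    cpu_load = [0] * num_cpus
    assignment = {}
    # Process queues from heaviest to lightest for a better greedy balance.
    for vm, load in sorted(queue_load.items(), key=lambda kv: -kv[1]):
        cpu = cpu_load.index(min(cpu_load))  # pick the least-loaded CPU
        assignment[vm] = cpu
        cpu_load[cpu] += load
    return assignment, cpu_load
```

With four VM queues and two cores, the heavy queue gets a core to itself while the lighter queues share the other, rather than all traffic landing on CPU 0.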
This is all available without sacrificing key Hyper-V features such as virtual machine live migration.
And when it comes to deployment of virtualization technologies, many are within secure datacenter environments, but what about those that aren't? Satellite offices, remote sites, home offices, and retail stores are all examples of environments that may not have the same levels of physical security as the enterprise datacenter, yet may still have physical servers running virtualization technologies. If those physical hosts were compromised, there could be very serious repercussions for the business. With Windows Server 2012 Hyper-V and Hyper-V Server 2012, BitLocker Drive Encryption is included to solve that very problem, allowing customers to encrypt all data stored on the host operating system volume and configured data volumes, along with any failover cluster disks, Cluster Shared Volumes, and so on. This ensures that environments large and small, implemented in less physically secure locations, can have the highest levels of data protection and compliance for their key workloads, at no additional cost.
Now, while VMware provides a capability known as NetQueue, VMware's own documentation, 'Performance Best Practices for VMware vSphere 5.1', notes that "On some 10 Gigabit Ethernet hardware network adapters, ESXi supports NetQueue, a technology that significantly improves performance of 10 Gigabit Ethernet network adapters in virtualized environments". What does this mean for customers whose servers don't have 10GigE? With Windows Server 2012 Hyper-V and Hyper-V Server 2012 and Dynamic VMQ, customers with existing 1 gigabit and 10 gigabit Ethernet adapters can flexibly use these advanced capabilities to improve performance and throughput while reducing the CPU burden on their Hyper-V hosts. When it comes to network security, specifically IPsec, VMware offers no offloading capabilities from the virtual machine through to the physical network interface, so in a densely populated environment, valuable host CPU cycles will be lost maintaining the desired security level. With Windows Server 2012 Hyper-V, the IPsec Task Offload capability moves that workload to a dedicated processor on the network adapter, enabling customers to make dramatically better use of the resources and bandwidth available. And as stated earlier, when it comes to virtual networking, a primary goal is native I/O. With SR-IOV, customers have the ability to directly address the physical NIC from within the virtual machine, reducing CPU overhead and latency while increasing throughput. In vSphere 5.1, VMware has introduced SR-IOV support, but it requires the vSphere Distributed Switch, a feature only found in the highest vSphere edition, meaning customers have to upgrade to take advantage of the higher levels of performance.
And also, VMware’s implementation of SR-IOV unfortunately doesn’t support other features such as vMotion, High Availability and Fault Tolerance, meaning customers who wish to take advantage of higher levels of performance, must sacrifice agility and resiliency. Prior to vSphere 5.1, VMware provided a feature that offered a similar capability to SR-IOV, and continues to offer this in 5.1. DirectPath I/O, a technology which binds a physical NIC up to a virtual machine, offers that same enhancement, to near native performance, however, unlike SR-IOV in Windows Server 2012 Hyper-V and Hyper-V Server 2012, a VM with DirectPath I/O enabled is restricted to that particular host, unless the customer is running a certain configuration of Cisco UCS. And there’s lots of other caveats, no Memory Overcommit, no vMotion , no Fault Tolerance, no Network I/O Control, No VM Snapshots.Now whilst DirectPath I/O may be attractive to customers from a performance perspective, VMware ask customers to sacrifice agility, losing vMotion in most cases, and scale, having to disable memory overcommit, along with a number of other vSphere features. No such restrictions are imposed when using SR-IOV with Windows Server 2012 Hyper-V, ensuring customers can combine the highest levels of performance with the flexibility they need for an agile, scalable infrastructure.Finally, when it comes to physical security, VMware has no capability within the vSphere Hypervisor or vSphere 5.1 that can enable the encryption of either VMFS, or the VMDK files themselves, and instead rely on hardware-based, or in-guest alternatives, which add cost, management overhead, and additional resource usage.
In this penultimate section, we'll take a look at the key capabilities within Windows Server 2012 Hyper-V and Hyper-V Server 2012 that enable organizations to be flexible and agile.
Now, to maintain optimal use of physical resources and to be able to easily add new virtual machines, IT must be able to move virtual machines whenever necessary without disrupting the business. The ability to move virtual machines across Hyper-V hosts was introduced in Windows Server 2008 R2 with a feature known as Live Migration. Windows Server 2012 Hyper-V and Hyper-V Server 2012 build on that feature with support for simultaneous live migrations: the ability to move several virtual machines at the same time, enabling a more agile, responsive infrastructure and, at the same time, more optimal usage of network bandwidth during the migration process. In addition, Windows Server 2012 Hyper-V and Hyper-V Server 2012 introduce live storage migration, which lets the IT admin move virtual hard disks that are attached to a running virtual machine. With this feature, IT can transfer virtual hard disks, with no downtime, to a new location for upgrading or migrating storage, performing back-end storage maintenance, or redistributing the storage load. The IT admin can perform this operation using a new wizard in Hyper-V Manager or the new Hyper-V cmdlets for Windows PowerShell. Live storage migration is available for both storage area network and file-based storage. And finally, with Windows Server 2012 Hyper-V and Hyper-V Server 2012, live migrations are no longer limited to a cluster; virtual machines can be migrated across cluster boundaries. With Shared-Nothing Live Migration, we can move a running virtual machine and its disks from physical server to physical server with nothing but an Ethernet cable. An example of this could be a developer working on a virtualized web server on his local Windows Server 2012 Hyper-V workstation.
And once testing’s complete this workload could be migrated live with no interruption from the local host system where the virtual machine resides on locally attached storage across the production cluster or the VM will reside at a high performance SAN storage. With Shared-Nothing live migration this migration is seamless with no interruption or downtime.
I want to take a moment to show you these capabilities in action, so let's take a look. Here we are again in my demo environment, and I'm going to bring up Failover Cluster Manager. You can see that I've selected a number of machines that are currently running on our cluster; it's a three-node cluster, remember, all three running Hyper-V Server 2012. These four virtual machines that I've selected, I'm going to right-click, Move, Live Migrate. Now notice what's new in Windows Server 2012: this guidance on where we're going to distribute the virtual machines to. You'll notice all the virtual machines are currently on node 4, and if I selected Best Possible Node, Hyper-V and the clustering management tools would move the virtual machines and distribute them a little more evenly. But I'm going to choose to move them all to a single node, and I'm going to choose node number 2, and we'll click OK. You'll see that all of those virtual machines are migrating simultaneously, as quickly as possible, utilizing that network bandwidth as much as possible. There's the final virtual machine going across; notice this final virtual machine is actually the one that's got our virtual Fibre Channel adapters attached to it as well, so that should have been moved across too. You'll notice that one's got a little bit more memory than the others, and therefore it took a little bit longer, but that was a very easy and simple demonstration showcasing multiple simultaneous virtual machine live migration. What I want to do now is select a slightly different VM, Demo number 5, and I'm going to choose Move again, but this time I'm going to choose virtual machine storage, so I'm going to do a live storage migration. We'll click virtual machine storage; notice I've got all of the things that make up a virtual machine, so the disk, snapshot, configuration files and so on, and I want to choose where I want to drop them.
Currently it’s located on volume 1, but I’m going to choose to just click and drag and move to volume 3. so we’ll go start, and off it goes so we’re starting that virtual machine storage migration and that’s going to take a little time to complete, it’s a decent size due to the amount of storage there to move and if I go over to Hyper-V Manager here, we can see as it stands on #2 there migrating storage, 17, 18% and it’s synchronizing, it’s shifting those virtual machine files around to a different location, in this case a different LUN on the same SAN. Now, one thing I also want to show, the final area of migration is the shared nothing live migration. Because what we’ve shown initially is how we can move a multiple virtual machine, multiple running virtual machines between cluster nodes, we’ve also shown the movement of the disks all of this is without interruption for those running virtual machines but now I want to choose to move a virtual machine between two physical servers but share nothing but an ethernet cable. Now over on node 3 here I’ve got a virtual machine and I’m going to right click, I’m going to click move, and notice I’ve got this move wizard here that’s going to ask my okay, what do you want to do, just move the disk or move the whole VM and it’s disks, everything about it? We’ll go with the whole VM, we’ll go next. What’s the destination, so I’m going to choose CON demo5 for this particular one and we’ll go next. The move options, do you want to move everything to one place, do you want to separate things out so move the one virtual disk to here, move the config file to here, I can separate all of that out or just move the running state of the VM but the virtual machine disks must be currently on shared storage, that’s not the case so we’re just going to move the top one. 
We’ll go next, I’ll choose a folder to place it on the target machine I’m just going to go there, that’s fine, the D drive and then we’ll go next and finish and that’s going to start and that's performing the move and that’s going to take a minute or two but don’t forget this is moving the running state and at the same time the virtual disks. So we’re literally picking up the whole of that machine and transporting it to another physical server. If you’ve been following along, nodes 2,3 and 4 is a cluster and Hyper-V 5 is a stand alone server. So we’re actually moving a VM in and out of a cluster in this particular example we’re moving it out of the cluster. So that’s the flexibility that Hyper-V in Share Nothing migration brings, moving VMs between stand alone servers, stand alone to cluster, cluster to cluster, cluster back to stand alone, complete flexibility utilizing Shared Nothing Live migration. And you’ll see it’s zooming through, zooming along, we’re just finishing wrapping up the process now and there, done, disappeared from node 3 and you’ll see it’s now here on node number 5 and you’ll see it’s still running there, you can make out in the little screen there. great stuff, great flexibility, agility and all in the box with Windows Server 2012 and Hyper-V.
Welcome back, I hope you enjoyed the demo. Now, isolating the virtual machines of different departments or customers can be a challenge on a shared network, and when those departments or customers must isolate entire networks of virtual machines, the challenge becomes even greater. Traditionally, VLANs are used to isolate networks, but VLANs are complex to manage on a large scale. There are a number of drawbacks to VLANs: cumbersome reconfiguration of production switches is required whenever virtual machines or isolation boundaries must be moved, and the frequent reconfiguration of the physical network to add or modify VLANs increases the risk of an inadvertent outage; VLANs have limited scalability, because typical switches support no more than 1,000 VLAN IDs, with a maximum of 4,000; and VLANs cannot span multiple subnets, which limits the number of nodes in a single VLAN and restricts the placement of virtual machines based on physical location. In addition to the drawbacks of VLANs, virtual machine IP address assignment presents other key issues when organizations move to the cloud: required renumbering of service workloads, policies that are tied to IP addresses, physical locations that determine virtual machine IP addresses, and the topological dependency of virtual machine deployment and traffic isolation. These are all considerations to make. The IP address is the fundamental address used for layer-3 network communications, because most network traffic is TCP/IP. Unfortunately, when workloads are moved to the cloud, their IP addresses must be changed to accommodate the physical and topological restrictions of the data center. Renumbering IP addresses is cumbersome because all associated policies based on those IP addresses must be updated. The physical layout of a data center influences the potential IP addresses for virtual machines that run on a specific server or blade connected to a specific rack in the data center.
A virtual machine that is provisioned and placed in the data center must adhere to the choices and restrictions regarding its IP address. The typical result is that data center administrators assign IP addresses to the VMs and force virtual machine owners to adjust all their policies that were based on the original IP address. This renumbering overhead is so high that many enterprises choose to deploy only new services into the cloud and leave legacy applications unchanged. But with Hyper-V Network Virtualization we're solving these problems. With this feature, IT can isolate network traffic from different business units or customers on a shared infrastructure without being required to use VLANs. Network Virtualization also lets IT move virtual machines as needed within the virtual infrastructure while preserving their virtual network assignments. Finally, IT can even use Hyper-V Network Virtualization to transparently integrate these private networks into a preexisting infrastructure on another site. It really enables complete flexibility, virtualizing the network and abstracting it away in the same way we virtualized servers and abstracted the physical hardware.
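The core mechanism can be sketched as a lookup table: each tenant keeps its own customer addresses (CA), and the virtualization layer maps a (tenant, CA) pair to the provider address (PA) of the host currently running the VM, so two tenants can reuse the same IP range on shared hardware. All addresses and names below are invented for illustration; this is the concept, not Hyper-V's actual mapping or encapsulation format.

```python
# (tenant, customer IP) -> provider IP of the Hyper-V host carrying that VM.
lookup = {
    ("BlueCorp", "10.1.1.5"): "192.168.50.11",
    ("RedCorp",  "10.1.1.5"): "192.168.50.12",  # same CA, different tenant and PA
}

def provider_address(tenant, customer_ip):
    """Where should an encapsulated packet for this tenant's VM be sent?
    Returns None when the tenant/address pair is unknown to the fabric."""
    return lookup.get((tenant, customer_ip))
```

When a VM moves to another host, only the mapping table changes; the tenant keeps its original IP addresses and policies, which is exactly the renumbering problem described above.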
In this scenario, Contoso Ltd. is a service provider that provides cloud services to businesses that need them. Blue Corp and Red Corp are two companies that want to move their Microsoft SQL Server infrastructures into the Contoso cloud, but they want to maintain their current IP addressing. With the new network virtualization feature of Hyper-V in Windows Server 2012, Contoso can do this, as shown in the figure.
But what about VMware? As shown in the table, the flexibility and agility provided by the inbox features of Windows Server 2012 Hyper-V and Hyper-V Server 2012 are simply unmatched by VMware. The VMware vSphere Hypervisor supports none of the capabilities required for an agile infrastructure today, meaning customers have to purchase a more expensive vSphere 5.1 edition. The vSphere 5.1 Essentials Plus edition and higher now support vMotion (virtual machine live migration), yet VMware restricts the number of simultaneous vMotions to 4 on 1GigE networks and 8 on 10GigE. With Windows Server 2012 Hyper-V and Hyper-V Server 2012, Microsoft supports an unlimited number of simultaneous live migrations, within the confines of what the networking hardware will support, with the process utilizing 100% of the available, dedicated live migration network to complete as quickly and efficiently as possible, with no interruption to the running virtual machines. Just like vMotion, Storage vMotion is unavailable in the VMware vSphere Hypervisor, and is restricted to the Standard, Enterprise, and Enterprise Plus editions of vSphere 5.1, available at considerable cost. In vSphere 5.1, VMware also introduced a feature known as Enhanced vMotion, which enables the migration of a virtual machine between two hosts without shared storage; this capability was already available in all editions of Hyper-V, in the form of Shared-Nothing Live Migration. And finally, with Hyper-V Network Virtualization, network traffic from different business units or customers can be isolated, even on a shared infrastructure, without the need for VLANs.
With VMware, however, to obtain any kind of functionality similar to what Network Virtualization can deliver, customers must first purchase the vCloud Networking & Security product, of which VXLAN is a component; and because VXLAN requires the vSphere Distributed Switch, customers must also upgrade to the Enterprise Plus edition of vSphere 5.1 to take advantage of it. Network Virtualization has some significant advantages over VXLAN, one in particular being better integration with existing hardware and software stacks, which is of particular importance when VMs need to communicate out of the hosts and into the physical network infrastructure: not all switches are VXLAN-aware, meaning this traffic cannot necessarily be handled effectively.
And so we come to our final part, where we compare and contrast some of the key features from a High Availability and resiliency perspective.
Now, virtualization can promote the high availability of mission-critical workloads in new and effective ways, and in Windows Server 2012 and Hyper-V Server 2012 there are a number of new enhancements that ensure key workloads are resilient and protected. Firstly, incremental backups: true differential disk backups of virtual hard disks help ensure that data is backed up and can be restored when necessary. This also reduces storage costs, because it backs up only what has changed, not the entire disk, and it's agentless. Secondly, Hyper-V Replica: asynchronous, application-consistent virtual machine replication is built in to Windows Server 2012 and Hyper-V Server 2012. It provides asynchronous replication of Hyper-V virtual machines between two locations, over commercial broadband, for business continuity and failure recovery. Hyper-V Replica works with any server vendor, any network vendor, and any storage vendor. And finally, NIC Teaming, which provides, in the box, increased reliability, performance, and throughput for virtual machines: should a network adapter fail, the traffic will still pass, because the team allows that to take place. Great flexibility, and that integration is in the box. We'll take a look at that in the demonstration.
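The incremental-backup idea above, backing up only what changed rather than the whole virtual disk, reduces to a simple diff over disk blocks. The sketch below is a conceptual illustration with invented block identifiers, not Hyper-V's actual change-tracking implementation.

```python
def incremental_backup(disk_blocks, previous_backup):
    """Return only the blocks whose content differs from the previous backup.

    disk_blocks:     {block_number: content} for the current disk state
    previous_backup: {block_number: content} as of the last backup
    """
    return {block: data for block, data in disk_blocks.items()
            if previous_backup.get(block) != data}
```

New blocks (never backed up) and modified blocks are included; unchanged blocks are skipped, which is where the storage savings come from.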
So let's have a look at some of this in action. In this first demonstration I want to take a quick look at the feature we called Hyper-V Replica earlier on. To enable it, I'm going to Failover Cluster Manager, because the virtual machines that I want to protect with Hyper-V Replica are running on the cluster, so I need to enable this feature on the VM from Failover Cluster Manager. What you'll also notice on this cluster is that I've got a little service, or role, running here called the Hyper-V Replica Broker, and that's an important service you need to enable on the cluster if you are going to be replicating in or out of the cluster. It's very simple to enable, all the documentation's on TechNet, but it's just a role that you turn on, on the cluster. Now that it's turned on, I can start to enable Hyper-V Replica for key virtual machines. To do that, I'm going to select this particular virtual machine, VM4, and right-click, Replication, Enable Replication, which brings up the Hyper-V replication wizard. First I can specify where I'm going to replicate this to, and I'm going to choose to replicate to a server outside of the cluster; it could be a server outside my environment, another datacenter, you decide. Depending on where I'm replicating to, I can choose whether I want to secure it with a certificate, what port I want to send this information over, and whether it's going to be compressed as it goes over the network. I'll click Next, and I can choose which virtual disks are going to be replicated; if one of them were, for example, a VHD used for a dedicated paging file, you may not want to replicate it, as that is unnecessary, in which case you could deselect it. But I'm just going to choose this particular VHDX file.
We'll click Next. How many recovery points do I want for this virtual machine? I'm going to say we'll keep only the latest recovery point, but I could add more, and I could also turn on the VSS capability for application-consistent snapshots or copies; that gives me an extra level of resiliency, but I do need additional storage for it. I'm going to go with only the latest for the time being, and we'll click Next. Now, I'm going to replicate 12 GB of data, and how do I want to get that there initially? Well, I could just send it over the network, I could send an initial copy using a USB disk or some other form of external media, or I could use something I've already got on that site, like a backup that's been restored, that I could then replicate onto, and I can choose when I'm going to start this replication. I'm going to choose right now, immediately, over the network, and we'll click Next, then Finish, and off that goes. It's started replicating now, and you can see here, initial replica in progress, sending initial replica. If I go over to Hyper-V Manager, because that's where node number 5 is, you'll see 'receiving changes' here; it's replicating across the network from A to B, in this case from the cluster to a host outside the cluster. While it's doing that, because it's going to take a few minutes, let's look at what the situation looks like from an actual failover perspective. We'll go back to Failover Cluster Manager and select Demo 3, which I've already set replication up for, and I'm now going to instigate a failover from A to B. But first, because I want to make sure this is a clean migration, one of the things a planned failover requires, the kind I run when I know in advance that I'm going to need to fail over in the event of a disaster, is that the virtual machine is shut down, or at least turned off. It doesn't have to be shut down cleanly, but you may want to do that.
I'm going to do that because I've got the opportunity; I've been warned of an impending natural disaster on the way, so I'll shut that VM down and give it time to shut down cleanly. I'll then right-click, Replication, Planned Failover, and that runs through a wizard. You'll see that the first prerequisite check is that the VM is turned off; whether you shut it down cleanly or switched off the virtual machine, as long as it's in the off state it's going to be able to continue. It also checks the configuration for allowing reverse replication, so that it will be able to replicate back this way once we've done the planned failover. We'll click Fail Over, and I'll jump over to Hyper-V Manager quickly, and you'll see it's just about starting that virtual machine now; it's off on the cluster, and now it's starting up in this environment, and this is flashing away to say everything was fine, completed successfully. So you'll see here it's now started up and running; it sent those final changes that we perhaps hadn't replicated prior to shutting down, and it's also replicating back the other way, which is really useful as well. Let's look quickly at some of the settings inside this VM concerned with replica. If I look at the settings for this VM, you'll see I've got this little option here called Failover TCP/IP: these are the settings applied when the VM fails over to the other site; these IP addresses are injected and used for that virtual machine once it's failed over. That's pretty useful, as it saves me from configuring those kinds of settings and IP addresses in a panic when we've actually gone into a full failover situation. So I'll cancel out of that.
Now, if I go back to the cluster, this one is off. If we had lost the other site, so lost the primary, this now being the secondary, I would have the option to do a failover, which basically says: okay, you've lost the main site, we've lost the primary, bring up the secondary. Simple as that. Test Failover, however, lets me spin up and bring these virtual machines up and running in an isolated environment, which means I can do that while the primary workload is still running.
Now when it comes to clustering, Windows Server 2012 and Hyper-V Server 2012 offer unmatched scale and flexibility for virtualized infrastructures. They support up to 64 physical nodes and up to 8,000 VMs in a single cluster, providing supreme scalability and flexibility for key virtualized workloads.

Windows Server 2012 and Hyper-V Server 2012 provide not only iSCSI guest clustering support, including multipathing, but also enable the use of Virtual Fibre Channel adapters within the virtual machine, allowing workloads to access storage area networks (SANs) over Fibre Channel fabric. In addition, Virtual Fibre Channel enables IT to cluster guest operating systems over Fibre Channel, providing high availability for workloads within VMs, and to utilize the built-in Windows Multipath I/O (MPIO) for high availability and load balancing on the storage path. By employing MPIO and Failover Clustering together as complementary technologies, users are able to mitigate the risk of a system outage at both the hardware and application levels. You can also create guest clusters using SMB file-based storage.

We want to make sure that the storage used for the cluster is secure, and Hyper-V, Failover Clustering and BitLocker now work in harmony to create an ideal, secure platform for private cloud infrastructure. Windows Server 2012 cluster disks encrypted with BitLocker Drive Encryption enable better physical security for deployments outside of secure datacenters, providing a critical safeguard for the cloud and helping protect against inadvertent data leaks.

And finally, CSV 2.0 has been greatly enhanced in a number of ways. From a usability standpoint, CSV is now a core Failover Clustering feature, with simplified administration and management. To support up to 64 nodes in a cluster, CSV has been improved in both performance and scalability.
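To illustrate the Virtual Fibre Channel capability described above, here is a hedged sketch of how a virtual SAN and a virtual HBA might be wired up with the Hyper-V cmdlets; the SAN name, VM name and the assumption that the host has an NPIV-capable physical HBA are all hypothetical:

```powershell
# Create a virtual SAN on the host, backed by a physical Fibre Channel
# initiator port (requires an NPIV-capable HBA on the Hyper-V host).
$hba = Get-InitiatorPort
New-VMSan -Name "Production-SAN" -HostBusAdapter $hba

# Attach a virtual Fibre Channel adapter to a guest cluster node, so the
# guest OS can see the SAN and participate in a Fibre Channel guest cluster.
Add-VMFibreChannelHba -VMName "GuestClusterNode1" -SanName "Production-SAN"
```

Inside the guest, MPIO and Failover Clustering would then be configured just as on a physical server.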
In terms of integrating with our partners, CSV has been specifically enhanced to work out of the box with storage filter drivers such as those used by anti-virus, data protection, backup and storage replication products, ensuring more seamless integration with existing investments.
There are three levels of availability provided in this release. Hyper-V and Failover Clustering work together to bring higher availability to workloads that do not themselves support clustering, by providing a lightweight, simple solution that monitors applications running in the VMs and integrates with the host. By monitoring services and event logs inside the virtual machine, Hyper-V and Failover Clustering can now detect whether the key services that a virtual machine provides are healthy, and take automatic corrective action such as restarting the VM or restarting a service within the VM. This is in addition to the existing virtual machine failover capabilities that apply should a host fail or the virtual machine itself become unresponsive.

Cluster-Aware Updating is an in-box, end-to-end solution for updating Hyper-V failover clusters, helping customers to preview, apply, and report on updates, all with zero downtime for the virtual machines.

Virtual machine priorities can now be set to control the order in which specific virtual machines fail over or start. This ensures higher-priority VMs are given the resources they need and lower-priority VMs are given resources as they become available. And finally, administrators can establish preferred hosts for certain virtual machines and, in conjunction with System Center 2012 SP1, define powerful rules to ensure that certain VMs always stay apart.
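The in-guest service monitoring and VM priorities described here can both be configured from the FailoverClusters PowerShell module. A sketch, assuming a clustered VM role named "Demo 06" as in the upcoming demo:

```powershell
# Enable service monitoring (the "third level of availability") for the
# Print Spooler service inside the clustered VM "Demo 06":
Add-ClusterVMMonitoredItem -VirtualMachine "Demo 06" -Service "Spooler"

# Set the failover/start priority of the clustered VM role
# (3000 = High, 2000 = Medium, 1000 = Low, 0 = No Auto Start):
(Get-ClusterGroup -Name "Demo 06").Priority = 3000
```

Both commands must run on a cluster node (or with remote cluster management configured), and the monitored service must meet the prerequisites mentioned in the demo.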
But I think it's important to take a look at some of these technologies and features in action, so that's what we'll do in this demo. In this final demonstration I want to look at some of the key resiliency capabilities that are new in Windows Server 2012, and the first one is the ability to offer extra levels of availability to your key applications and services. If I go into our cluster manager here and have a look at Demo 06, it looks like any normal virtual machine, but if I click Summary, you'll notice the monitored service listed here is Print Spooler. What's that about? Well, this VM has been enabled for service monitoring, that third level of availability. If I click More Actions, Configure Monitoring, you'll see I've ticked the box for Print Spooler; other services that I could protect, and other things installed inside the virtual machine, would appear here too, provided they meet the prerequisites. I'll cancel out of there because I've already enabled it. What that means is that if the Print Spooler has a problem... well, let me connect to that virtual machine now so we can look inside. We'll close the Storage Spaces dialog box we had open before, bring up Services, and we can see the Print Spooler currently running. If we look at its Properties, Recovery tab: first failure, restart the service; second failure, restart the service. This is standard Windows behaviour inside the guest operating system. Now I'm going to run a little process to kill the Print Spooler. It sounds very harsh and very cruel, but it's for the purposes of demonstration, so it should be fine. I run the kill and it's terminated. If I refresh, it's gone; if I refresh again, it's back, because Windows has kicked in and saved it.
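For reference, the two pieces of the demo inside the guest, killing the spooler and the per-service recovery actions, can both be done from the command line. A hedged sketch of what the demo is doing (run inside the guest OS, with admin rights):

```powershell
# Simulate a service failure: forcibly terminate the Print Spooler process.
taskkill /f /im spoolsv.exe

# The Recovery-tab behaviour seen in the Services dialog (restart on the
# first and second failure, take no action on the third) can be set via
# sc.exe; the spacing after each "=" is required by sc.exe's syntax.
sc.exe failure Spooler reset= 86400 actions= restart/60000/restart/60000//
```

With no third recovery action, the third kill leaves the service down, which is exactly what lets the cluster-level monitoring take over in the demo.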
If I do it again and hit refresh, it goes down and comes back. And if I do it a third time, it's gone, and no matter how often I refresh or hit F5, it's not coming back. Okay, so what's happened on the cluster? Let's minimize out of there and go back to the cluster; this will probably take a few seconds to pick up. What we have now within the clustered environment, if I make this a little bit bigger, is that the Hyper-V host is monitoring this specific service, and when it polls to check whether the service is still running, it encounters an error and puts an event in the event log at the cluster level. So a monitoring or management tool that is looking for that particular event can respond to it. Now, I said your monitoring or management tool can respond to that, but I can also choose to respond from within the cluster. If I look at Resources very briefly, in fact we just saw it flick over there, running the application in the VM is critical, and you'll see a problem has been noted. If I click Show Critical Events, it's been raised at the cluster level as well: today's date, 3:38, "Cluster resource on clustered role received a critical state notification", indicating an application or service inside the VM is unhealthy, with event ID 1250.
Now, if I click on Resources down here, right-click and look at Properties, in the Settings view you'll see I've unticked a little box. Heartbeat settings: "Enable heartbeat monitoring for the virtual machine" is ticked, that's fine, but "Enable automatic recovery for application health monitoring" is not. If I'd had that ticked, as soon as the error was detected Hyper-V and the cluster would have restarted the virtual machine, and if that didn't fix the problem they would have restarted the virtual machine on an alternative host. I chose to untick that box so I could show you what the event and the problem look like, but that's the kind of automation and remediation we're building into the product: had I ticked it, the VM would have been restarted. You might think, well, I might not want to restart that VM, because there might be another dependency somewhere else, and that's exactly why you bubble that critical event up into your more comprehensive management tools; System Center Operations Manager is a good example.

Now, the final demo I want to show you is still within this infrastructure, but it's a feature known as Cluster-Aware Updating. Cluster-Aware Updating is a new feature in 2012 that enables me to remediate complete cluster infrastructures, moving workloads around the cluster no matter what's on it: file services, Hyper-V VMs, DHCP services, whatever it may be. Notice that this was last run at 3am and it succeeded. I wasn't up at 3am to do this; I had it set up to automatically remediate my cluster. How do I do that? I configure the cluster's self-updating features. I can do this manually if I want to, but sometimes having it automatic is pretty beneficial.
I'll go Next, enable self-updating mode (if I want to do it manually I can disable this, and whenever I want to update the cluster as a whole I would come back to this UI and do it by hand). I'll go Next; I've chosen monthly, having just changed it from daily. Next: it's going to use these options, which I could tweak, such as reboot timeout, post-update scripts and so on. Next, choose which updates, Next, and Apply. And that's done. What's important here is that your cluster nodes are still going to pull from the central patching source, so if that's WSUS or Configuration Manager, they'll still use it. All Cluster-Aware Updating does is trigger a poll of what updates are available for each individual node and then orchestrate the delivery of those patches: moving the workloads off each cluster node, rebooting that host if necessary, and then moving the workloads back. If I preview updates for this cluster, because I've already run it every night for the last couple of weeks, there are probably not going to be any; and indeed, no updates found. If there were updates found, I could check them, see which ones are applicable, close this, click Apply, and it would orchestrate the whole process of deploying the patches to each cluster node, moving the VM workloads, and so on. And if I do want to see what's been happening in the past, I can generate a report. I could put in whatever start and end dates I'd like, but I'll just choose the last month, and you'll see for most nights zero updates; there were a couple pushed out on the 28th of November, and we can see specifically what they were for. There were a few more a bit later on: in December there were 27, so 9 apiece, and then finally, just a day or so ago, a security update as well.
All of this assumes the updates have been approved in WSUS, so that each cluster node can actually see them. When Cluster-Aware Updating triggers a node to scan for available updates, anything not approved in WSUS, Configuration Manager or whatever your patching tool is won't be visible to the cluster nodes, so it won't appear in this list and won't be pushed down. So that's a little bit about Cluster-Aware Updating: it orchestrates the whole process of virtual or physical cluster nodes being patched, rebooted and having their workloads shifted around, which is extremely powerful and great for businesses of all shapes and sizes. And with that, we'll go back to the slides.
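The Cluster-Aware Updating steps walked through above also have cmdlet equivalents in the ClusterAwareUpdating module. A sketch, with a hypothetical cluster name "HVCluster":

```powershell
# Enable the self-updating CAU clustered role, roughly matching the wizard
# (here scheduled for Sundays; the demo used a monthly schedule):
Add-CauClusterRole -ClusterName "HVCluster" -DaysOfWeek Sunday -Force

# Preview which updates each node would receive, without applying anything
# (equivalent to the "preview updates" button in the demo):
Invoke-CauScan -ClusterName "HVCluster"

# Run an updating pass on demand: drain each node, patch it, reboot if
# needed, and move workloads back, one node at a time.
Invoke-CauRun -ClusterName "HVCluster" -Force

# Report on past runs, like the monthly report generated in the demo:
Get-CauReport -ClusterName "HVCluster" -Last
```

As the transcript notes, CAU only orchestrates; the nodes still pull the actual updates from WSUS or Configuration Manager.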
Guest OS App Monitoring: http://blogs.msdn.com/b/clustering/archive/2012/04/18/10295158.aspx

Welcome back, I hope you found the demo useful. In terms of the VMware comparison, the table shows that when comparing the clustering and high availability capabilities of Windows Server 2012 Hyper-V and Hyper-V Server 2012 with the vSphere Hypervisor, the restrictions placed on VMware's free edition quickly become evident. Whilst the vSphere Hypervisor does support integrated NIC Teaming for network card resiliency, it lacks any other resiliency features, meaning that if customers were to virtualize important workloads on the platform, they would have to upgrade to a more expensive edition to get some sort of resiliency and protection for those workloads.

Windows Server 2012 Hyper-V and Hyper-V Server 2012, on the other hand, offer a number of resiliency and high availability features in the box. Integrated Failover Clustering provides the foundation for virtual machine resiliency upon host or virtual machine failure, and, as discussed, this release extends that native protection into the guest operating system, ensuring that if an app or a service starts to exhibit problems, corrective action can be taken. VMware offers an API to deliver similar functionality, but it stops there: customers can purchase 3rd-party technologies to provide the specific resiliency capabilities, but these come at additional expense and with an added level of complexity.

For customers looking for the highest levels of availability, not only within the datacenter but between datacenters, Hyper-V Replica, an in-box feature of Hyper-V in Windows Server 2012 and Hyper-V Server 2012, provides a streamlined, efficient and flexible way to asynchronously replicate virtual machines between sites and, in the event of a disaster, start the replicated VM on the alternative site in minutes.
Hyper-V Replica also lets customers not only perform planned and unplanned failovers, but also carry out non-disruptive testing on the DR site, a feature that is lacking in vSphere Replication unless customers purchase vCenter Site Recovery Manager at considerable cost. On top of that, vSphere Replication has no API, meaning 3rd parties cannot extend or plug into it; the reason is to ensure that customers wanting to automate and orchestrate the failover process purchase SRM. Contrast this with Hyper-V Replica, which provides a rich, comprehensive PowerShell interface for driving automated scenarios. For customers who have already invested in storage replication technologies through their SAN vendor, the improvements in Hyper-V and Failover Clustering in Windows Server 2012 and Hyper-V Server 2012 ensure streamlined integration to harness those investments.
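To give a feel for the PowerShell interface mentioned here, this is a hedged end-to-end sketch of enabling replication for a VM; the server names are hypothetical and Kerberos authentication over port 80 is assumed:

```powershell
# On the replica server: allow inbound replication.
Set-VMReplicationServer -ReplicationEnabled $true `
    -AllowedAuthenticationType Kerberos `
    -ReplicationAllowedFromAnyServer $true

# On the primary server: enable replication for a VM and start the
# initial copy to the replica server "replica01".
Enable-VMReplication -VMName "Demo-VM" -ReplicaServerName "replica01" `
    -ReplicaServerPort 80 -AuthenticationType Kerberos
Start-VMInitialReplication -VMName "Demo-VM"

# Check replication health at any time.
Measure-VMReplication -VMName "Demo-VM"
```

From here, the planned, unplanned and test failovers shown earlier in the demo can all be scripted with the same module.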
Now when it comes to cluster scalability, both from a physical cluster and a guest cluster perspective, Windows Server 2012 Hyper-V and Hyper-V Server 2012 lead the way in comparison with VMware. As shown in the table, Windows Server 2012 Hyper-V and Hyper-V Server 2012 offer double the number of nodes in an individual cluster compared with vSphere 5.1, and scale the number of VMs within an individual cluster to 8,000, again double that of vSphere 5.1. This provides large enterprises and service providers with unprecedented scale to run significant numbers of workloads.

Customers who have embraced the standalone vSphere Hypervisor don't have the ability to construct resilient clustered infrastructures unless they upgrade to a costly edition of vSphere 5.1; customers wishing to construct virtual machine guest clusters can, however, use the standalone vSphere Hypervisor or vSphere 5.1. Even so, VMware's support for guest clusters is severely lacking in comparison with Microsoft's flexible offerings. Customers who have invested in iSCSI storage can create guest clusters on the vSphere Hypervisor or on vSphere 5.1 using the in-guest iSCSI initiator, the same way they would when constructing a physical cluster; however, with vSphere 5.1, VMware supports only up to Windows Server 2008 R2 and is therefore restricted to 16-node guest clusters. For customers who have invested in file-based storage (NFS) with VMware, guest clusters inside virtual machines are unfortunately unsupported, and with VMware's virtual Fibre Channel implementation, presenting a Fibre Channel LUN directly to a VM, the size of the virtualized guest cluster is restricted to just 5 nodes.
Compare this with Windows Server 2012 Hyper-V and Hyper-V Server 2012, which, for a Windows Server 2012 guest cluster, support up to 64 nodes over iSCSI, Virtual Fibre Channel or SMB 3.0, for complete flexibility and unmatched scale. It's also important to note that whilst Windows Server 2012 Hyper-V and Hyper-V Server 2012 provide a significantly more comprehensive guest clustering capability than VMware in terms of storage integration and support, they also don't require customers to sacrifice other features and functionality to work effectively. A virtualized guest cluster on Windows Server 2012 Hyper-V and Hyper-V Server 2012 supports features such as virtual machine Live Migration, for flexibility and agility, and Dynamic Memory, to ensure the highest levels of density. Compare this with VMware, who, whilst restricting customers to a maximum of 16 nodes with iSCSI storage and only 5 with Fibre Channel storage, also prevent customers from migrating the guest cluster nodes using vMotion or migrating disks using Storage vMotion, and additionally direct customers to disable memory overcommit on those guest cluster nodes, sacrificing density. These are just two of the limitations of VMware vSphere guest clustering.
And so to wrap up.
Standard = $900
Datacenter = $4,800
In this presentation we have looked at a significant number of the new capabilities available within Windows Server 2012 Hyper-V and Hyper-V Server 2012, across 4 key investment areas: Scalability, Performance & Density; Secure Multitenancy; Flexible Infrastructure; and High Availability & Resiliency. Across each of these areas, we've detailed how Hyper-V offers more scale, a more comprehensive array of customer-driven features and capabilities, and a greater level of extensibility and flexibility than the vSphere Hypervisor or VMware vSphere 5.1. With features such as Hyper-V Replica, cluster sizes of up to 64 nodes and 8,000 VMs, Storage and Shared-Nothing Live Migration, the Hyper-V Extensible Switch, Network Virtualization, and powerful guest clustering capabilities, it's clear that Windows Server 2012 Hyper-V and Hyper-V Server 2012 offer the most comprehensive virtualization platform for the next generation of cloud-optimized infrastructures.
It is worth mentioning how the private cloud looks in Microsoft's eyes. Its foundation is the Windows Server 2012 discussed today, which provides a virtualization platform "in the box" and at the price of the operating system, while remaining highly scalable. The second element of the private cloud is the System Center family, which covers many aspects of such a cloud. With System Center it is possible to automate our datacenter, together with monitoring it, managing applications and processes, and even integrating with other virtualization solutions.