11. Type 2 architecture — the VMM runs on top of a host operating system (examples: Java VM, .NET CLR). Stack, bottom to top: hardware → operating system → VMM → Guest 1 / Guest 2 / Guest 3.
12. Hybrid architecture — the VMM runs alongside the host OS on the hardware (examples: Virtual PC 2007, Virtual Server 2005). Stack: hardware → OS + VMM → Guest 1 / Guest 2 / Guest 3.
13. Type 1 (hypervisor) architecture — the VMM runs directly on the hardware (example: Windows Server Hyper-V). Stack: hardware → VMM → Guest 1 / Guest 2 / Guest 3.
17. The Hyper-V architecture (diagram labels, reconstructed)
- Ring -1: the Windows hypervisor (Microsoft Hyper-V), running on "Designed for Windows" server hardware.
- Parent partition: Windows Server 2008 with the Windows kernel and ISV/IHV/OEM drivers in kernel mode (Ring 0), including the VSP (Virtualization Service Provider) attached to the VMBus; VM Service, WMI Provider, and VM Worker Processes in user mode (Ring 3).
- Child partition, supported Windows OS: Windows kernel with a VSC (Virtualization Service Client) communicating over VMBus.
- Child partition, Xen-enabled Linux kernel: Linux VSC plus a Hypercall Adapter over VMBus (Microsoft / Citrix (XenSource)).
- Child partition, non-hypervisor-aware OS: runs through emulation, with no VMBus.
Each partition hosts its own applications.
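The practical difference between the emulation path (for non-hypervisor-aware guests) and the VSC/VSP path over VMBus can be sketched as a toy model. All class names and step lists here are a simplified illustration, not real Hyper-V interfaces:

```python
# Toy model of the two Hyper-V I/O paths shown in the diagram.
# Real Hyper-V components (vmwp.exe, VSP/VSC drivers, VMBus) are native
# code; this only illustrates the relative length of the two paths.

class EmulatedPath:
    """Non-hypervisor-aware guest: every I/O is trapped and emulated."""
    def handle_io(self, request):
        return [
            "guest driver touches emulated device registers",
            "trap to the hypervisor",
            "switch to the VM worker process (user mode, parent partition)",
            "device emulation decodes the request",
            "request enters the parent partition's I/O stack",
            "completion routed back through the worker process to the guest",
        ]

class SyntheticPath:
    """Enlightened guest: the VSC sends the request over VMBus to the VSP."""
    def handle_io(self, request):
        return [
            "guest VSC builds the request",
            "request crosses VMBus to the parent's VSP",
            "request enters the parent partition's I/O stack",
            "completion signaled back over VMBus",
        ]

emulated = EmulatedPath().handle_io("read sector 42")
synthetic = SyntheticPath().handle_io("read sector 42")
print(f"emulated: {len(emulated)} steps, synthetic: {len(synthetic)} steps")
```

The point of the sketch is only that the synthetic path skips the user-mode emulation round trip, which is the architectural reason for its better performance discussed in the notes below.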
22. Microsoft's virtualization product family
Server virtualization, application virtualization, desktop virtualization, user-interface virtualization, and management.
Key strengths of Microsoft virtualization:
A complete product line: everything from the desktop and applications through to data-center virtualization.
A unified management platform: virtual machines and physical machines alike can be managed together with the System Center family of tools, fully compatible with one another.
Windows Server Hyper-V
What you see now is the full line-up of Microsoft's virtualization family: from server virtualization with Virtual Server and Hyper-V, to application virtualization with SoftGrid, to desktop virtualization with VPC, to user-interface virtualization with Terminal Services. All of these products share a unified management platform, System Center: physical machines as well as virtual machines running on Hyper-V, VPC, or Virtual Server, and even VMware virtual machines, can all be managed together with the System Center tools, and they are compatible with one another. System Center also provides quick migration, meaning that with a few simple operations a virtual machine can be moved rapidly between physical hosts, which makes disaster-recovery work much simpler and faster.
Differences between the IDE and SCSI drivers under Hyper-V

Hyper-V: How to get the most from your virtualized disk performance

There are two types of disk controllers that Hyper-V supports: SCSI and IDE. There are two IDE controllers and four SCSI controllers available. Each IDE controller can have two devices. You cannot boot from a SCSI controller, which means an IDE disk will be required; the boot disk will be IDE controller 0, device 0. If you want a CD-ROM, it will consume an IDE device slot. Each SCSI controller can support up to 255 devices.
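The topology limits just listed (two IDE controllers with two slots each, boot disk on IDE 0:0, up to four SCSI controllers with 255 devices each) can be expressed as a small configuration check. This is an illustrative sketch, not a real Hyper-V management API:

```python
# Illustrative check of Hyper-V's virtual storage topology limits:
# 2 IDE controllers x 2 devices; up to 4 SCSI controllers x 255 devices;
# the boot disk must sit on IDE controller 0, device 0.

IDE_CONTROLLERS, IDE_SLOTS = 2, 2
MAX_SCSI_CONTROLLERS, SCSI_SLOTS = 4, 255

def validate(ide, scsi):
    """ide/scsi: {(controller, device): name}. Returns a list of errors."""
    errors = []
    for (c, d) in ide:
        if not (0 <= c < IDE_CONTROLLERS and 0 <= d < IDE_SLOTS):
            errors.append(f"IDE {c}:{d} out of range")
    for (c, d) in scsi:
        if not (0 <= c < MAX_SCSI_CONTROLLERS and 0 <= d < SCSI_SLOTS):
            errors.append(f"SCSI {c}:{d} out of range")
    if ide.get((0, 0)) != "boot":
        errors.append("boot disk must be IDE 0:0 (cannot boot from SCSI)")
    return errors

ide = {(0, 0): "boot", (1, 0): "cdrom"}        # the CD-ROM consumes an IDE slot
scsi = {(0, d): f"data{d}" for d in range(10)}  # bulk data disks go on SCSI
print(validate(ide, scsi))  # → []
```

A configuration that tries to boot from SCSI, or that addresses IDE 0:2, would come back with errors instead of an empty list.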
Both SCSI and IDE can support pass-through, fixed, dynamic, sparse, and delta drives (see http://blogs.msdn.com/tvoellm/archive/2007/10/13/what-windows-server-virtualization-aka-viridian-storage-is-best-for-you.aspx). The difference lies in how the controllers are actually implemented: the IDE controller is emulated, whereas the SCSI controller is synthetic. So what does this mean?

The IDE controller implements a well-known IDE controller, which means there is extra processing before the I/O is sent to the disk. This processing occurs in vmwp.exe (a user-mode process that exists for each started VM; more on this in a later post). Once the IDE emulation is complete, the I/O is sent into the root partition's I/O stack. I/O completion requires a trip back through vmwp.exe.

The SCSI controller is not emulated. It uses VMBus (the Virtual Machine Bus; more on this in a later post). The I/Os pass from the child (aka guest) partition to the root over VMBus and enter the I/O stack. You can already see that one fewer process/machine context switch is required, because vmwp.exe does not get invoked. Once an I/O completes, its completion is sent back over VMBus. There is a lot more to how both the IDE and SCSI controllers work, but these descriptions should help you understand why SCSI controllers are the right choice for the best performance.

The boot question

Following on from my post last week, I had some good questions asking about the difference between the SCSI adapter in Virtual Server and the SCSI controller in Windows Server virtualization. In Virtual Server 2005, the best practice is to configure virtual machines to boot from the SCSI adapter for performance reasons. This is not the case in Windows Server virtualization; this post takes a dip into explaining why.

To start with, let's take a look at the SCSI adapter in Virtual Server. In common with all devices in Virtual Server, including the IDE controller, the SCSI adapter is an emulated device.
It emulates a real-world counterpart: a parallel SCSI adapter with the Adaptec 7870 chipset. It can support up to 7 storage devices (virtual hard disks).

You may be asking: if both the IDE controller and the SCSI adapter in Virtual Server are emulated, why does the SCSI adapter perform better? The answer is simple. It's due to a driver that is installed when the Virtual Machine Additions are installed inside a virtual machine. We have an optimized driver; if you take a look under Device Manager in a VM after the Additions are installed, you'll see that the driver is msvmscsi.sys.

The SCSI adapter in Virtual Server has another advantage over the IDE controller. The IDE controller can have VHDs connected up to 127GB in size; the SCSI adapter can have VHDs connected up to 2040GB in size (8GB short of 2TB). The IDE controller is not 48-bit LBA aware (http://www.48bitlba.com/), so the maximum theoretical capacity (if we allowed it) would be 137.4GB. The SCSI adapter also has a boot BIOS, which enables virtual machines to boot directly from VHDs connected to it after control has been passed from the virtual machine BIOS.

So keeping that in mind, let's compare and contrast the above with the IDE and SCSI controllers in Windows Server virtualization.

The IDE controller remains an emulated device, but with a couple of differences from the IDE controller in Virtual Server. It is now 48-bit LBA capable, which allows you to connect large VHDs, up to 2040GB, to it. The second difference is a filter driver we insert into the storage stack inside the guest, which effectively bypasses the emulation path for IDE, making it much higher performance. In fact, for I/O paths, the IDE controller with the filter driver performs equivalently to the SCSI controller in Windows Server virtualization. You can also attach pass-through disk storage to IDE, which was not possible in Virtual Server.

The SCSI controller in Windows Server virtualization is not an emulated device.
Instead, it is a "synthetic" device. It has no real-world counterpart; it is a virtual controller, and you can't go to a store and buy one for a physical machine. The controller allows up to 255 VHDs or pass-through storage devices per controller, while gaining improved performance over the emulated adapter in Virtual Server. (The why of this is architectural; I'll cover that another day.) As a "synthetic" device, it is not currently possible to boot directly from it until an operating system is available with a loader capable of reading from the drives/device; BIOS changes would also be required. That's definitely a topic for another day, though.

Hopefully that gives a bit more insight into why the best-practice recommendation of booting from SCSI in Virtual Server no longer applies in Windows Server virtualization, and why booting from IDE does not incur the same performance overhead as in Virtual Server.
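The capacity figures quoted in these posts follow directly from LBA address widths, and quick arithmetic confirms them:

```python
# 28-bit LBA (pre-48-bit IDE): 2^28 addressable 512-byte sectors.
SECTOR_BYTES = 512
lba28_bytes = (2 ** 28) * SECTOR_BYTES
print(f"28-bit LBA ceiling: {lba28_bytes / 10**9:.1f} GB")  # 137.4 GB (decimal)

# Virtual Server capped IDE VHDs at 127GB, well under that theoretical limit.
# The SCSI adapter (and 48-bit-LBA-capable IDE in Hyper-V) allows 2040GB VHDs,
# which is 8GB short of 2TB (2048GB), as the post notes.
print(2048 - 2040)  # → 8
```

This is why enabling 48-bit LBA on Hyper-V's IDE controller was enough to lift its VHD limit to the same 2040GB as the SCSI controller.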