2. Content list
Virtualization Introduction
Virtualization approaches
Types of Hypervisors
x86 architecture model: rings and the virtualized model
Parameters for computing the efficiency of a hypervisor
Open-source hypervisor descriptions: KVM and Xen
Comparison between KVM and Xen
Comparative analysis of KVM with Xen
Conclusion
Cons of KVM
3. What is Virtualization?
INTRODUCTION
Virtualization is making one computer appear to be multiple computers.
Virtualization is accomplished with a program called a hypervisor, while systems
running under a hypervisor are known as virtual machines (VMs).
4. Defining Virtualization Approaches
Virtualization can be achieved in two ways. The hypervisor descriptions that follow are based on
hardware virtualization.
a) Full Virtualization: In full virtualization, the entire system's resources are
abstracted by the virtualization software layer.
Fully virtualized workloads do not require any change or modification to their guest
operating systems.
Ex- VMware
b) Para Virtualization: In paravirtualization, only a portion of the system's
resources is abstracted by the virtualization software layer.
Paravirtualization requires that the guest operating system running on the host server be
modified so that it recognizes the virtualization software layer.
Ex- Initial version of Xen (Xen PV)
5. Types of hypervisor:
Type 1 Hypervisor
Type 1 Hypervisors run directly on the host's hardware to control the hardware and to
manage guest operating systems. A guest operating system thus runs on another level
above the hypervisor.
Examples of some popular Type 1 hypervisors are VMware's ESX Server, Sun's Logical
Domains Hypervisor, and Xen.
Figure 1: Type 1 hypervisor (source: Google Images)
6. Type 2 Hypervisor
A Type 2 hypervisor runs as a normal program inside a normal operating system, which is
known as the host; it sits at the third level above the hardware. Each guest OS runs as a
process in the host operating system, and these processes can be manipulated just like
any other process. Examples of some popular Type 2 hypervisors are VMware
Server, KVM, VirtualBox, etc.
Figure 2: Type 2 hypervisor (source: Google Images)
7. X86 architecture Virtualization Model
The x86 architecture is taken as the reference: a powerful, widely virtualized
computing architecture that scales to systems with more than eight processors
and high-speed CPUs.
The x86 architecture is divided into privilege rings; each ring has access to a
specific layer and set of privileges. User applications run in the least
privileged ring, Ring 3. The OS normally has access to Ring 0 and enjoys full
privileges to access the hardware, while Rings 1 and 2 are generally not used.
A user application residing in Ring 3 therefore cannot execute a system call or
instruction that is reserved for the OS.
9. Virtualized X86 Architecture Description
Under virtualization, guest VMs are moved to Ring 1 or above, and access to
privileged hardware calls is handled with binary translation: the privileged
instructions are trapped by a software interpreter layer known as the Virtual
Machine Monitor (VMM), or hypervisor, and converted into safe instructions
that can be virtualized. This technique allows the VMM to run in Ring 0 for
isolation and performance, while the guest operating system is moved to a
user-level ring with greater privilege than the applications in Ring 3 but
less privilege than the virtual machine monitor in Ring 0.
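The trap-and-translate mechanism described above can be sketched as a toy model. This is purely illustrative: the instruction names and the `vmm_execute` function are invented for the sketch and are not real x86 semantics.

```python
# Toy model of trap-and-translate: the VMM keeps Ring 0 for itself and
# intercepts privileged instructions issued by the deprivileged guest OS,
# emulating them safely on the guest's behalf.

PRIVILEGED = {"cli", "hlt", "mov_to_cr3"}  # instructions reserved for Ring 0

def vmm_execute(guest_ring, instruction):
    """Run one guest instruction under the VMM's supervision."""
    if instruction in PRIVILEGED and guest_ring != 0:
        # The guest OS was moved out of Ring 0, so the instruction traps
        # into the VMM, which converts it into a safe equivalent.
        return f"trapped {instruction}; VMM emulated it safely"
    return f"executed {instruction} directly"

# The guest OS now lives in Ring 1: its privileged instructions trap.
print(vmm_execute(1, "cli"))   # trapped cli; VMM emulated it safely
print(vmm_execute(3, "add"))   # executed add directly
```

Applications in Ring 3 keep running their unprivileged instructions directly, which is why this scheme preserves performance for ordinary workloads.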
11. There are various parameters by which the efficiency of a hypervisor is judged, such as:
(a) Scheduling Policy: Based on stability, security, and robustness; an aggregate measure of
CPU, memory, network, and disk I/O for a given virtual machine.
(b) Performance: Judged on the basis of efficiency and quality of service (QoS).
(c) Memory: The maximum memory resources allocated to the system to run both guest and
host.
(d) Security: Must be backed by a strong security policy, as the hypervisor is undoubtedly a
tempting target for hackers.
(e) Cost: Maintaining large enterprise data at low cost.
(f) Ease of Use: The hypervisor's design and controls must not be too complex for
administrators to handle.
(g) Management: Management complexity must be kept low to match the agility of the business
world.
(h) Recovery: Proper backup and recovery mechanisms provided by the software benefit both
applications and data.
12. Description of Open source Hypervisors
KVM Hypervisor
•KVM is a type-2 hypervisor, originally developed by Qumranet (now part of Red Hat). It is
based on the QEMU emulator and derives all its management tools from QEMU.
•KVM is a unique hypervisor. Versions from KVM-62 onward also support
paravirtualized Linux guests, although the initial prototype did not use this capability.
• The main focus of KVM development is using the x86 hardware virtualization (VT)
extensions, which allow virtual machines to execute privileged operations safely.
• KVM is implemented as a Linux kernel module and ships with Linux
distributions going forward, since no extra work is required for a Linux distribution to
add KVM.
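Since KVM depends on these hardware extensions, their presence on a Linux host can be detected from the CPU flags in `/proc/cpuinfo` ("vmx" for Intel VT-x, "svm" for AMD-V). The sketch below parses a hard-coded sample excerpt so it is self-contained; on a real host you would read the file itself.

```python
# Sketch: detect the hardware virtualization extensions KVM requires by
# scanning the CPU flags line of /proc/cpuinfo for "vmx" or "svm".

def virtualization_support(cpuinfo_text):
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = set(line.split(":", 1)[1].split())
            if "vmx" in flags:
                return "Intel VT-x"
            if "svm" in flags:
                return "AMD-V"
    return None  # no hardware virtualization support found

# Sample excerpt instead of reading /proc/cpuinfo, for illustration.
sample = "processor : 0\nflags : fpu vme de pse tsc msr pae vmx sse2"
print(virtualization_support(sample))  # Intel VT-x
```

If this returns `None`, KVM cannot run on that processor, which is exactly the limitation the comparison with Xen PV discusses later.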
13. •KVM supports the QEMU Copy-on-write (QCOW) disk image format, allowing it to
support a snapshot mode for its disk I/O operations.
• Multiple VMs can be run from one disk image, somewhat mitigating the huge storage
requirements associated with hosting a grid of VMs.
•KVM supports the standard Linux TUN/TAP model for Ethernet bridging.
•By using this model, each VM gets its own networking resources, making it
indistinguishable from a physical machine.
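The copy-on-write idea behind QCOW can be sketched as follows. This is a simplified in-memory model of the concept, not the QCOW file format: many VMs share a read-only base image, and each VM records only the sectors it modifies in its own overlay.

```python
# Toy copy-on-write disk: reads fall through to the shared base image
# unless the VM has written its own copy of that sector.

class CowDisk:
    def __init__(self, base):
        self.base = base      # shared, read-only base image (sector -> data)
        self.overlay = {}     # per-VM writes land here; base stays untouched

    def read(self, sector):
        return self.overlay.get(sector, self.base.get(sector))

    def write(self, sector, data):
        self.overlay[sector] = data  # copy-on-write: never modify the base

base_image = {0: "bootloader", 1: "kernel"}
vm_a, vm_b = CowDisk(base_image), CowDisk(base_image)
vm_a.write(1, "patched kernel")
print(vm_a.read(1))  # patched kernel
print(vm_b.read(1))  # kernel  (vm_b still sees the shared base)
```

Because each overlay holds only the differences from the base, a grid of mostly identical VMs consumes far less storage than full per-VM images.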
15. Xen Hypervisor
•Xen is a type-1 hypervisor, maintained by the Xen.org community and built independent of
any operating system.
•It is a completely separate layer from the operating system and hardware, and is
seen by the community and customers as an infrastructure virtualization platform
to build their solutions upon.
•In fact, the community is not in the business of building a complete solution, but
rather a platform for companies and users to leverage for their virtualization and
cloud solutions.
•The Xen hypervisor is found in many solutions today, from standard server
virtualization to cloud providers, grid computing platforms, networking
devices, etc.
•The Xen hypervisor is inserted between the server's hardware and the operating
system. This provides an abstraction layer that allows each physical server to run
one or more "virtual servers", effectively decoupling the operating system and its
applications from the underlying physical server.
16. •Xen is an open source virtual machine monitor for x86-compatible computers.
•Xen makes it possible for multiple guest operating systems to run on a single computer by using a software
layer called a hypervisor to mediate access to the real hardware.
To create a secure operating environment, the Xen hypervisor divides the VMs into two domains,
Domain0 (Dom0) and DomainU (DomU), according to their access privileges.
The Dom0 VM has the higher privileges and can access the hardware, whereas DomU VMs have lower
privileges and cannot directly access the hardware.
•Red Hat Inc. includes the Xen hypervisor as part of Red Hat Enterprise Linux (RHEL) software, describing this
combination as "integrated virtualization."
•Sun Microsystems provides support for Xen virtualization on Solaris 10, its version of the Unix operating
system. Other mainstream Linux distributions, including Debian and SuSE, have the
necessary kernel extensions available to serve as the base OS for Xen.
•Xen, which was released under the GNU General Public License, was originally a research project at the
University of Cambridge. XenSource, Inc., a company that supported the development of the open-source
project and enterprise applications of the software, was acquired by Citrix Systems in October 2007.
18. Comparison between KVM and Xen
•KVM is a hypervisor based on the Linux kernel: it inherits its memory
management, scheduling policy, and security from the kernel. Xen, by contrast, is a standalone
hypervisor that incorporates these three key features into the hypervisor itself.
•For example, the KVM hypervisor is supported by Red Hat, AMD, HP, IBM, Novell, SGI, and others,
whereas Xen hypervisors are currently used by
Cisco, Citrix, Fujitsu, Lenovo, Novell, Oracle, Samsung, and various cloud providers such as
Amazon, Cloud.com, GoGrid, and Rackspace.
•KVM is part of Linux and uses the regular Linux scheduler and memory management, which
makes KVM much smaller and simpler to use. Xen, on the other hand, is an external
hypervisor; it assumes control of the machine and divides resources among guests.
KVM runs only on processors with hardware virtualization (HVM) support, whereas Xen can also
run on processors without HVM support.
KVM is easy to use and provides more features, whereas Xen is powerful but requires a good
amount of knowledge to operate.
19. KVM needs hardware-assisted virtualization support (Intel VT-x, AMD-V), whereas Xen
PV does not; however, Xen PV cannot run operating systems without PV support (you cannot run
Windows on Xen PV).
KVM uses parts of the QEMU virtualization software to emulate hardware for
devices that do not use PV drivers in the guest system.
KVM is a module within the Linux kernel and uses the regular Linux memory manager and
scheduler, whereas Xen is an external hypervisor that takes control and divides the
resources among the guest machines.
KVM does not support paravirtualized guests, whereas Xen supports
paravirtualization, which is used in device drivers to improve input/output performance.
•KVM inherits a mature and proven memory manager, including support for NUMA and large-scale
systems, whereas the Xen hypervisor has had to build this support from scratch.
20. KVM Hypervisor analysis
Of the two hypervisors, KVM is the better option to select. The analysis below keeps
various points in mind.
Security: KVM follows standard Linux security features from the SELinux (Security-Enhanced Linux)
project, developed by the US National Security Agency. The sVirt project builds on the SELinux
infrastructure to provide a level of security and isolation unmatched in the industry.
Memory Management: NUMA support in KVM allows virtual machines to efficiently access
large amounts of memory. KVM inherits the powerful memory management features of Linux.
Memory page sharing is supported through a kernel feature called Kernel Same-page
Merging (KSM).
Hardware Support: Since KVM is part of Linux, it leverages all the hardware support of
Linux; any new features added to the Linux kernel are inherited by KVM.
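The KSM idea mentioned above can be sketched as a toy deduplicator. Real KSM scans and merges memory pages inside the kernel; this simplified model (the `merge_pages` function is invented for illustration) just collapses identical page contents into one shared copy and counts the savings.

```python
# Toy Kernel Same-page Merging: identical pages across VMs are merged
# into a single shared copy, reducing total memory use.

def merge_pages(vm_pages):
    """vm_pages: list of page contents across all VMs.
    Returns (shared_store, saved), where saved is the number of
    duplicate pages eliminated by merging."""
    shared = {}
    for page in vm_pages:
        shared[page] = shared.get(page, 0) + 1  # count copies per content
    saved = len(vm_pages) - len(shared)         # duplicates collapsed away
    return shared, saved

# Three VMs with mostly-zeroed pages share them after merging.
pages = ["zero" * 64, "libc-code", "zero" * 64, "zero" * 64, "app-data"]
_, saved = merge_pages(pages)
print(saved)  # 2  (three identical zero pages collapse into one)
```

Guests booted from the same OS image have many identical pages, which is why this style of merging pays off when consolidating VMs.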
21. Live Migration: KVM supports live migration, which provides the ability to move a running virtual
machine between physical hosts with no interruption to service. It can also save a virtual machine's
current state to disk, allowing it to be stored and resumed at a later time.
Storage: KVM is able to use any storage supported by Linux to store virtual machine
images, including local disks with IDE, SCSI, and SATA; network-attached storage (NAS) including
NFS and Samba/CIFS; and SANs with support for iSCSI and Fibre Channel, along with all the
features supported by any Linux storage device.
Guest Support: KVM supports a wide variety of guest operating systems, from mainstream
operating systems such as Linux and Windows to other platforms including
OpenBSD, FreeBSD, OpenSolaris, Solaris x86, and MS-DOS.
Device Drivers: The KVM hypervisor supports the VirtIO standard, developed by IBM and Red
Hat in conjunction with the Linux community, for paravirtualized device drivers and better guest
interoperability.
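The live-migration point above is commonly implemented with a pre-copy strategy, which can be sketched as a toy loop. This is a conceptual model, not KVM's actual algorithm: memory is copied while the VM runs, pages dirtied during a round are re-sent, and the VM is paused only when the remaining dirty set is small.

```python
# Toy pre-copy live migration: copy all pages while the VM runs, then
# repeatedly re-send pages dirtied during the previous round until the
# dirty set is small enough to transfer during a brief final pause.

def precopy_migrate(pages, dirty_rounds, threshold=2):
    """pages: dict page_id -> data on the source host.
    dirty_rounds: sets of page_ids dirtied during each copy round.
    Returns (destination_copy, rounds_used)."""
    dest = dict(pages)              # round 1: bulk copy, VM still running
    rounds = 1
    for dirtied in dirty_rounds:
        if len(dirtied) <= threshold:
            break                   # small enough: pause VM, send the rest
        for p in dirtied:
            dest[p] = pages[p]      # re-send pages dirtied while copying
        rounds += 1
    return dest, rounds

memory = {i: f"page-{i}" for i in range(8)}
dest, rounds = precopy_migrate(memory, [{0, 1, 2, 3}, {1, 2, 5}, {2}])
print(rounds)  # 3
```

The converging dirty set is what keeps the final pause, and hence the service interruption, negligibly short.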
22. Performance and Scalability: KVM inherits the performance and scalability of Linux, supporting
virtual machines with up to 16 virtual CPUs and 256 GB of RAM, and host systems with 256 cores
and over 1 TB of RAM.
Improved Response Time: In KVM, the time that elapses between a stimulus and the response is
decreased. Under this operating model, kernel processes that require a long CPU time slice are
divided into smaller components and scheduled/processed accordingly by the kernel.
Improved Scheduling and Resource Control: In the KVM model, a virtual machine is scheduled
and managed by the standard Linux kernel. The kernel is responsible for scheduling processes,
dividing long CPU time into smaller slices, so any request from a
virtual machine can be processed faster, significantly reducing application processing
latency and improving determinism.
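The slice-based scheduling idea above can be sketched as a toy round-robin scheduler. The slice length and workloads are invented for illustration; the point is that dividing a long CPU request into fixed slices lets the kernel interleave several VMs instead of letting one monopolize the CPU.

```python
# Toy round-robin scheduler: long CPU requests are chopped into
# fixed-size time slices so the kernel can interleave VMs fairly.

def schedule(requests, slice_ms=10):
    """requests: dict vm_name -> total CPU milliseconds needed.
    Returns the round-robin order of (vm, ms) slices executed."""
    remaining = dict(requests)
    order = []
    while any(ms > 0 for ms in remaining.values()):
        for vm in requests:                      # fixed round-robin order
            if remaining[vm] > 0:
                run = min(slice_ms, remaining[vm])
                order.append((vm, run))
                remaining[vm] -= run
    return order

timeline = schedule({"vm1": 25, "vm2": 10})
print(timeline)
# [('vm1', 10), ('vm2', 10), ('vm1', 10), ('vm1', 5)]
```

Note that vm2's short request finishes after one slice rather than waiting behind vm1's full 25 ms, which is the latency improvement the slide describes.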
23. Conclusion
For IT staff interested in zero-cost, Linux-friendly, feature-rich, and resource-efficient
virtualization, KVM has become the way to go.
The rapid maturation of KVM (Kernel-based Virtual Machine) over the last couple of years
constituted the first serious open-source challenge. Integrated into the Linux kernel, KVM
provides feature-rich and highly efficient virtualization, and things in virtualization land move
pretty fast.
As a boon for small vendors: considering the challenges of cloud computing for small vendors
with low hardware budgets, a KVM virtualization service provider (VSP) can sell compute power
without having to directly maintain each end user's particular application.
Virtual organizations (VOs) can purchase compute power from VSPs without having to worry
about hardware or software compatibility. A VO is free to develop a model cluster
locally, perhaps even on a personal workstation, test it, and then deploy it to a VSP's hardware
with reasonable assurances that the operating environment will be fully compatible.
24. Some Cons:
Perhaps the single major downside of KVM is that it requires a bit
more technical know-how, since some features can only be
configured by manually editing XML files.
As KVM and related tools continue to mature, expect that to change.