A descriptive guide to the concepts and procedures for working with Red Hat's virtualization technology, and a comparison with two of the most popular hypervisors: VMware and VirtualBox.
2. Agenda
• Physical vs. Logical (Demo)
• Virtualization: The What?
• Virtualization: The How?
• Virtualization: The Why?
• Red Hat Virtualization (Demo)
• Virtualization vs. Virtualization
• Future
3. Physical Vs Logical
A demonstration of the Logical Volume Manager
(LVM) is enough to give an insight into how
useful things become once storage is scaled beyond
physical boundaries…
4. • Commands required for configuration:
Create the required partitions:
#fdisk /dev/sda
Create the physical volumes:
#pvcreate /dev/sda{5,6,7}
Check the PV size, and note the small unusable portion of each partition:
#pvdisplay
Create a volume group from the three physical volumes thus formed:
#vgcreate vg0 /dev/sda{5,6,7}
#vgdisplay
Create a logical volume from the volume group thus formed:
#lvcreate -L 50M -n lv0 vg0
#lvdisplay
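The small "not usable" size that pvdisplay reports comes from LVM carving each PV into fixed-size physical extents (4 MiB by default in LVM2); any remainder that does not fill a whole extent is wasted. A minimal sketch of that arithmetic, assuming the default extent size (the helper name is ours, for illustration):

```shell
# usable_kib SIZE_KIB [PE_KIB] - round a PV size down to a whole number
# of physical extents; the remainder is the "not usable" space that
# pvdisplay reports. PE_KIB defaults to 4096 KiB (4 MiB), LVM2's default.
usable_kib() {
    pv_kib=$1
    pe_kib=${2:-4096}
    echo $(( (pv_kib / pe_kib) * pe_kib ))
}

usable_kib 1048570   # a partition 6 KiB short of 1 GiB -> prints 1044480
```

The same rounding explains why `vgdisplay` shows VG capacity as a whole count of extents.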
5. We can later extend this logical volume, depending
upon our use:
#lvextend -L +25M /dev/vg0/lv0
(If a filesystem already lives on the volume, it must be grown too, e.g. with resize2fs.)
Display the logical volume that now exists for use:
#ls /dev/vg*
Finally, after the logical volume is created,
make a filesystem on it:
#mkfs.ext3 -L /lvm_data /dev/vg0/lv0
Mount it on a directory to use it:
#mount /dev/vg0/lv0 /mnt
We are ready to store data in this logical volume.
7. In the Past
One operating system ran on one machine, so the OS
had complete control of the resources in that
machine. Various applications would run on that
machine, but these applications could affect each
other.
Machine utilisation was very low; most of the time it was
below 20%.
Even Now!!
Virtualization, in computing, is the creation of a virtual (rather
than actual) version of something, such as a hardware
platform, operating system, a storage device or network
resources.
The usual goal of virtualization is to centralize administrative
tasks while improving scalability and overall hardware-resource
utilization.
12. Virtual Machine Monitor (or Hypervisor)
Each virtual machine interfaces with its
host system via the virtual machine
monitor (VMM). As the primary link
between a VM and the host OS and
hardware, the VMM plays a crucial
role.
15. The OS and apps in a VM don't
know that the VMM exists or
that they share CPU
resources with other VMs.
The VMM should isolate
guest SW stacks from
one another.
The VMM should run
protected from all
guest software.
The VMM should present a
virtual platform
interface to guest SW.
16. x86 modes: Privilege Levels
• The x86 processor's segment-protection mechanism recognizes 4 privilege levels
(0 = highest, 3 = lowest).
• The center (reserved for the most privileged code) is used for the segments
containing the critical software, usually the kernel of an operating system.
• Outer rings are used for less critical software.
• The processor uses privilege levels to prevent a program or task operating at a
lesser privilege level from accessing a segment with a greater privilege, except
under controlled situations.
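The ring rule above reduces to simple arithmetic: for a data-segment access, the processor grants access only when the target segment's DPL is numerically greater than or equal to both the CPL and the selector's RPL (0 = most privileged, 3 = least). A hedged sketch of that comparison, not the processor's actual implementation:

```shell
# seg_access_ok CPL RPL DPL - emulate the numeric privilege comparison
# for a data-segment access: allowed only if DPL >= CPL and DPL >= RPL.
seg_access_ok() {
    cpl=$1; rpl=$2; dpl=$3
    if [ "$dpl" -ge "$cpl" ] && [ "$dpl" -ge "$rpl" ]; then
        echo allowed
    else
        echo denied
    fi
}

seg_access_ok 3 3 0   # ring-3 code touching a ring-0 segment -> denied
seg_access_ok 0 0 3   # ring-0 code touching a ring-3 data segment -> allowed
```

Note how an RPL of 3 denies access even for ring-0 code: that is exactly the "controlled situations" escape hatch the slide mentions.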
18. Full Virtualization
Complete simulation of the underlying
hardware.
Every salient feature of the hardware is
reflected into one of several virtual machines –
including the full instruction set, input/output
operations, interrupts, memory access, and
whatever other elements are used by the
software that runs on the bare machine, and
that is intended to run in a virtual machine.
19. Para Virtualization
Presents a software interface to virtual
machines that is similar, but not identical, to
that of the underlying hardware.
The intent of the modified interface is to
reduce the portion of the guest's execution
time spent performing operations which are
substantially more difficult to run in a virtual
environment compared to a non-virtualized
environment.
21. Major Hypervisors
Xen: University of Cambridge Computer Laboratory
Fully open sourced
Set of patches against the Linux kernel
VMware ESX: Closed source
Proprietary drivers
VirtualBox: a free hypervisor from Sun Microsystems
Limited functionality
KVM: Most used hypervisor in Linux
22. Xen Vs KVM
Xen
•Hypervisor that supports x86, x86_64, Itanium, and
ARM architectures.
•can run Linux, Windows, Solaris, and some of the
BSDs as guests on their supported CPU architectures.
•can do full virtualization on systems that support
virtualization extensions, but can also work as a
hypervisor on machines that don't have the
virtualization extensions.
23. • If you want to run a Xen host, you need to have
a kernel with Xen support.
• Since kernel 2.6.23, Linux has begun merging Xen
support into the mainline.
24. KVM
• Hypervisor that is in the mainline Linux kernel.
• runs on x86 and x86-64 systems with hardware
supporting virtualization extensions.
• KVM isn't an option on older CPUs made before
the virtualization extensions were developed, and
it rules out newer CPUs (like Intel's Atom CPUs)
that don't include virtualization extensions.
• If you're getting a recent Linux kernel, you've
already got KVM built in.
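A practical corollary of the slide: on Linux, the kvm module exposes /dev/kvm when it loads on a CPU with virtualization extensions, so the presence of that device node is a quick "can I run KVM here?" test. A small sketch (the helper name and message strings are our own):

```shell
# kvm_ready PATH - report whether the given device node exists; pass
# /dev/kvm on a real host. The node appears only when the kvm module is
# loaded on a CPU with virtualization extensions.
kvm_ready() {
    if [ -e "$1" ]; then
        echo "KVM usable"
    else
        echo "KVM not available"
    fi
}

kvm_ready /dev/kvm
```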
25. System Requirements
Xen para-virtualization requirements
•Para-virtualized guests require a Red Hat Enterprise
Linux 5 installation tree available over the network
using the NFS, FTP or HTTP protocols.
Xen full virtualization requirements
Full virtualization with the Xen Hypervisor requires:
•an Intel processor with the Intel VT extensions, or
•an AMD processor with the AMD-V extensions.
26. KVM requirements
The KVM hypervisor requires:
•an Intel processor with the Intel VT and the Intel
64 extensions, or
•an AMD processor with the AMD-V and the AMD64
extensions.
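The CPU requirements above can be checked from /proc/cpuinfo, where Intel VT shows up as the vmx flag and AMD-V as svm. A hedged sketch (the function name is ours; the file argument lets the check run against any saved cpuinfo listing):

```shell
# check_virt_flags FILE - scan a cpuinfo listing for the vmx (Intel VT)
# or svm (AMD-V) flag; on a live host pass /proc/cpuinfo.
check_virt_flags() {
    if grep -qwE 'vmx|svm' "$1"; then
        echo "virtualization extensions present"
    else
        echo "no virtualization extensions"
    fi
}

# e.g.: check_virt_flags /proc/cpuinfo
```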
28. Virtualization with Red Hat
Red Hat Virtualization provides a complete package of
almost all types of virtualization:
1. Server/operating system virtualization
Integrated into the kernel and OS platform (as KVM)
2. Storage virtualization: global data
Red Hat Global File System / LVM
3. System management, resource management, provisioning
Red Hat Network
4. Application environment consistency with non-virtualized
environments
29. Red Hat Enterprise Linux Advanced
Platform
• Server and storage virtualization
extends across multiple systems
[Diagram: hosts extend across shared storage]
30. Red Hat Enterprise Linux
Advanced Platform
A fully integrated server and storage virtualization environment
Multi Host/Instance Logical Volume Management
Multi Host/Instance Global File System
Multi Host/Instance Application Migration
Provides a complete virtualization platform
Server : Storage : Management
Simplifies deployment & manageability
Increases flexibility & scalability
Integrates server & storage virtualization
no special hardware
31. Installing an Operating System
Options Available:
1) GUI (Graphical User Interface)
2) CLI (Command Line Interface)
32. Basic Packages Installation
# yum groupinstall Virtualization
libvirt
qemu-kvm
python-virtinst
virt-manager
virt-viewer
The dependencies are configured automatically
during the installation process.
46. Virtualization – It's gonna be
even better!
• Multiple Hypervisor Support
(Xen, KVM, ....)
• Even better deployment
Cobbler – next-gen installation server
• More manageable
oVirt (free platform virtualization management web
application software developed by Red Hat)
47. VMware Vs. VirtualBox Vs. KVM
Domains for comparison:
•Device Support
•Ease of use
•Installation
•Administration
•Look & Feel
•Performance
•Licensing and Support
The VMM provides protection, networking, driver coordination, and resource management so that each virtual OS sees itself as running on a bare-metal server. It allows you to create, control, monitor, destroy, pause, or migrate virtual machines. The Xen hypervisor boots on "bare metal", loads Domain 0 through the multiboot standard, provides a "safe" interface for hardware access, and, as the virtual machine monitor, handles scheduling and the virtual CPU and memory.
PRIVILEGE LEVELS
The processor's segment-protection mechanism recognizes 4 privilege levels, numbered from 0 to 3; greater numbers mean lesser privileges. These levels of privilege can be interpreted as rings of protection. The center (reserved for the most privileged code, data, and stacks) is used for the segments containing the critical software, usually the kernel of an operating system. Outer rings are used for less critical software. (Systems that use only 2 of the 4 possible privilege levels should use levels 0 and 3.) The processor uses privilege levels to prevent a program or task operating at a lesser privilege level from accessing a segment with a greater privilege, except under controlled situations. When the processor detects a privilege-level violation, it generates a general-protection exception (#GP).
To carry out privilege-level checks between code segments and data segments, the processor recognizes the following three types of privilege levels:
• Current privilege level (CPL): the privilege level of the currently executing program or task, stored in bits 0 and 1 of the CS and SS segment registers. Normally, the CPL is equal to the privilege level of the code segment from which instructions are being fetched; the processor changes the CPL when program control is transferred to a code segment with a different privilege level.
• Descriptor privilege level (DPL): the privilege level of a segment or gate, stored in the DPL field of the segment or gate descriptor. When the currently executing code segment attempts to access a segment or gate, the DPL of the segment or gate is compared to the CPL and the RPL of the segment or gate selector. The DPL is interpreted differently depending on the type of segment or gate being accessed.
• Requested privilege level (RPL): an override privilege level assigned to segment selectors, stored in bits 0 and 1 of the segment selector. The processor checks the RPL along with the CPL to determine if access to a segment is allowed. Even if the program or task requesting access has sufficient privilege, access is denied if the RPL is not of sufficient privilege level; that is, if the RPL of a segment selector is numerically greater than the CPL, the RPL overrides the CPL, and vice versa. The RPL can be used to ensure that privileged code does not access a segment on behalf of an application program unless the program itself has access privileges for that segment (see Section 4.10.4, "Checking Caller Access Privileges (ARPL Instruction)", in the Intel manual).
Privilege levels are checked when the segment selector of a segment descriptor is loaded into a segment register. The checks used for data access differ from those used for transfers of program control among code segments, so the two kinds of access are considered separately.
Xen is a hypervisor that supports x86, x86_64, Itanium, and ARM architectures, and can run Linux, Windows, Solaris, and some of the BSDs as guests on their supported CPU architectures. It is supported by a number of companies, primarily Citrix, but is also used by Oracle for Oracle VM, and by others. Xen can do full virtualization on systems that support virtualization extensions, but can also work as a hypervisor on machines that don't have the virtualization extensions.
ARM is a 32-bit reduced instruction set computer (RISC) instruction set architecture (ISA) developed by ARM Holdings. It was named the Advanced RISC Machine and, before that, the Acorn RISC Machine. The ARM architecture is the most widely used 32-bit instruction set architecture in numbers produced. Originally conceived by Acorn Computers for use in its personal computers, the first ARM-based products were the Acorn Archimedes range introduced in 1987.
Itanium is a family of 64-bit Intel microprocessors that implement the Intel Itanium architecture (formerly called IA-64). Intel markets the processors for enterprise servers and high-performance computing systems. The architecture originated at Hewlett-Packard (HP), and was later jointly developed by HP and Intel.
Berkeley Software Distribution (BSD, sometimes called Berkeley Unix) is a Unix operating system derivative developed and distributed by the Computer Systems Research Group (CSRG) of the University of California, Berkeley, from 1977 to 1995. Today the term "BSD" is often used non-specifically to refer to any of the BSD descendants, which together form a branch of the family of Unix-like operating systems. Operating systems derived from the original BSD code remain actively developed and widely used.
KVM is a hypervisor that is in the mainline Linux kernel. Your host OS has to be Linux, obviously, but it supports Linux, Windows, Solaris, and BSD guests. It runs on x86 and x86-64 systems with hardware supporting virtualization extensions. This means that KVM isn't an option on older CPUs made before the virtualization extensions were developed, and it rules out newer CPUs (like Intel's Atom CPUs) that don't include virtualization extensions. For the most part, that isn't a problem for data centers that tend to replace hardware every few years anyway — but it means that KVM isn't an option on some of the niche systems like the SM10000 that are trying to utilize Atom CPUs in the data center.
• libvirt: contains the libvirt application programming interface (API) for abstracting away differences between Xen, KVM, and other virtualization technologies.
• qemu-kvm: contains KVM components associated with QEMU utilities.
• python-virtinst: contains commands such as virt-install (to create and manage virtual guests), virt-convert (to convert VMs into different formats), virt-image (to create VMs from image descriptors), and virt-clone (to create clone VMs from existing disk images).
• virt-manager: contains the virt-manager Virtual Machine Manager application, used to start, stop, and otherwise manage virtual guest operating systems; it can also display summary information and statistics about your guest VMs.
• virt-viewer: contains the virt-viewer Virtual Machine Viewer graphical client, which is used to connect to virtual machines via a VNC interface.
QEMU is a processor emulator that relies on dynamic binary translation to achieve a reasonable speed while being easy to port to new host CPU architectures. In conjunction with CPU emulation, it also provides a set of device models, allowing it to run a variety of unmodified guest operating systems; it can thus be viewed as a hosted virtual machine monitor.
Cobbler is a Linux provisioning server that centralizes and simplifies control of services including DHCP, TFTP, and DNS for the purpose of performing network-based operating-system installs. It can be configured for reinstallations and for virtualized guests using Xen, KVM or VMware. Cobbler interacts with the koan program for re-installation and virtualization support; koan and Cobbler use libvirt to integrate with different virtualization software. Cobbler builds on the Kickstart mechanism and offers installation profiles that can be applied to one or many machines. It also features integration with Yum to aid in machine installs.
oVirt is free platform virtualization management web application software developed by Red Hat. oVirt is built on libvirt, which allows it to manage virtual machines hosted on any supported backend, including KVM, Xen and VirtualBox. oVirt can handle multiple hosts; it communicates with its host servers through HTTP with XML-RPC, and is able to handle storage solutions such as NFS, iSCSI and local storage.
The enthusiast: From the enthusiast's standpoint, KVM would seem like the best choice, offering the most configuration options; enthusiasts will find plenty of new combinations of settings to experiment with. KVM's lack of end-user features and complexity of use also give it the flavour of being a tool for the elite, which the enthusiast is likely to find appealing. Next in line is VirtualBox, which offers fewer options, but still enough to keep a geek interested; simply reading the manual and following the forums will suffice. VMware Player is last in line for this category of users, since it offers very limited customisability.
The architect: For architects, the requirement determines the choice of component. For server virtualisation with an emphasis on performance and scale, KVM is the clear choice. For end users, VMware Player is the best choice, since it can run a VM authored on Workstation in a manner that makes it extremely easy to use, particularly for a user who's not tech-savvy. However, for prototyping and getting off the ground quickly, VirtualBox's superior feature set makes it the tool of choice.
The executive: VirtualBox is the product most likely to meet the most requirements at the least cost. If budget were not a constraint, VMware Player's paid version, VMware Workstation, could give VirtualBox a run for its money on features; VirtualBox provides many more features in its free version than VMware does. The exception is when the VM is authored elsewhere and VMware Player is used only for access; in that scenario, VMware Player is much easier to use than either of the other two. KVM is not really a solution for the executive at all.
The follower: From the follower's viewpoint, the ideal and often only supportable option is to use VMware Player to run existing VMs (created by Workstation). Next in line is VirtualBox. As with the executive, KVM is not an option for the follower.