1. Xen Hypervisor Deployment, Management, and
Cloud Computing Tools
Todd Deshane and Patrick F. Wilbur
Clarkson University
3. Copyright Notice
Copyright 2010, Todd Deshane and Patrick F. Wilbur.
Last modified: September 30, 2010 11:02 PM EST.
The Xen panda logo is property of Xen.org.
==
LICENSE:
Todd Deshane and Patrick F. Wilbur
Department of Mathematics and Computer Science
Clarkson University
Potsdam, NY USA
Current slides available at: http://cosi.clarkson.edu/docs/installingxen/
These slides and content are released under the Creative Commons Attribution-Share Alike 3.0 Unported
license, available online at http://creativecommons.org/licenses/by-sa/3.0/
You may share (copy, distribute, and transmit) this work, and remix (adapt) this work, as long as you
attribute this work to the author and share adapted works under the same or similar license by leaving this
entire notice in place (including the original author's name/contact information/URL and this license
notice).
4. About Us
Todd Deshane is a Ph.D. graduate of Clarkson University.
His research interests include information technology,
security, virtualization, and usability.
http://todddeshane.net
Patrick F. Wilbur is a Ph.D. student at Clarkson University.
His research interests include security, usability, policy,
and systems architecture.
http://pdub.net
5. Acknowledgments
This 2010 Xen Training / Tutorial, by Todd Deshane and
Patrick F. Wilbur, is derived from the 2009 Xen Training /
Tutorial as updated by Zach Shepherd and Jeanna Matthews
from the original version written by Zach Shepherd and Wenjin
Hu, originally derived from materials written by Todd Deshane
and Patrick F. Wilbur.
Portions of this work are inspired by Jeremy
Fitzhardinge's Pieces of Xen slides.
Variations of this work have been presented numerous times at
the USENIX Annual Technical Conference, USENIX LISA, and
various other locations across the United States.
6. Overview
● Session 1: Xen Introduction and Installing Xen
● Session 2: Guest Creation and Management
● Session 3: Xen in the Datacenter
● Session 4: Xen in the Cloud
9. Xen and the Art of Virtualization
● Xen is a virtualization system supporting both
paravirtualization and hardware-assisted full virtualization
● Name from neXt gENeration virtualization
● Initially created by University of Cambridge
Computer Laboratory
● Open source (licensed under GPLv2)
10. Xen Virtualization Basics
● A physical machine runs a program to manage virtual
machines (Virtual Machine Monitor or hypervisor)
● On the physical machine, there are one or more virtual
machines (domains) running
● A virtual machine is an encapsulated operating system
which can run applications as a physical machine
● The management virtual machine (Domain0) is responsible
for interacting with the hypervisor
● Other virtual machines are called guests
11. Ways to Use Virtualization
● Fully utilize hardware resources: consolidation of
workloads on single machine, exploitation of multiple cores
● Running heterogeneous environments on one
machine: different operating systems, different libraries
● Isolation: separate workloads that have different
requirements and/or to avoid attacks on one from
compromising another
● Manageability: rapid deployment and provisioning,
backup/disaster recovery, portability
12. Types of Virtualization
Emulation:
Fully-emulate the underlying hardware architecture
Full virtualization:
Simulate the base hardware architecture
Paravirtualization:
Abstract the base architecture
OS-level virtualization:
Shared kernel (and architecture), separate user spaces
13. Virtualization in Xen
Paravirtualization:
● Uses a modified Linux kernel
● Guest kernel is loaded from Dom0 (a kernel file in Dom0, or via Dom0's pygrub)
● Front-end and back-end virtual device model
● Cannot run Windows
● Guest "knows" it's a VM and talks with the hypervisor
Hardware-assisted full virtualization:
● Uses the same, normal, OS kernel
● Guest contains grub and kernel
● Normal device drivers
● Can run Windows
● Guest doesn't "know" it's a VM; hardware extensions trap and manage privileged operations
14. Reasons to Use Xen
Paravirtualization (PV):
● High performance (claim to fame)
● High scalability
● Uses a modified operating system
Hardware-assisted full virtualization (HVM):
● Co-evolution of hardware and software on x86 architecture
● Uses an unmodified operating system
15. Reasons to Use Xen
● Xen is powered by a growing and active community and a
diverse range of products and services
● Xen offers high performance and an isolating architecture
17. Xen: Hypervisor Role
● Thin, privileged abstraction layer between the hardware and
operating systems
● Defines the virtual machine that guest domains see instead
of physical hardware:
○ Grants portions of physical resources to each guest
○ Exports simplified devices to guests
○ Enforces isolation among guests
18. Xen: Domain0 (Dom0) Role
● Creates and manages guest VMs
xm (Xen management tool)
A client application to send commands to xend
● Interacts with the Xen hypervisor
xend (Xen daemon)
Daemon to communicate with the hypervisor
● Supplies device and I/O services:
○ Runs (backend) device drivers
○ Provides domain storage
19. Normal Linux Boot Process
BIOS
Master Boot Record (MBR)
GRUB
Kernel
Module
Linux
20. The Xen Boot Process
GRUB starts
Hypervisor starts (loaded via GRUB's kernel line)
Domain0 starts (its kernel loaded via GRUB's module line)
Xend starts (the Xen daemon)
Guest domains start (created with xm)
24. Installing Xen from a Package
● OpenSUSE: Install with YaST
http://www.susegeek.com/general/how-to-install-configure-xen-virtualization-in-opensuse-110/
● Gentoo: Install with portage
http://www.gentoo.org/doc/en/xen-guide.xml
● NetBSD: Xen package support as of BSD 4.0
http://www.netbsd.org/ports/xen/howto.html
25. Installing Xen from Source
Reasons to use the latest Xen version:
● Performance optimization, cutting-edge features
● Security and bug fixes
● Support for additional Dom0 OSes (Linux, Solaris, BSD)
● Ability to patch/customize
Xen4 installation instructions, including from source:
http://wiki.xensource.com/xenwiki/Xen4.0
26. Installing Xen from Source
New in Xen4:
● blktap2 for VHD image support, snapshots and cloning
● Primary graphics card GPU pass-through for high-performance
3D graphics and hardware-accelerated video
● TMEM allows improved utilization of unused (for example
page cache) PV guest memory
● Memory page sharing and paging to disk for HVM guests
● Copy-on-Write sharing of memory pages between VMs
27. Installing Xen from Source
Also new in Xen4:
● Netchannel2 for improved networking acceleration, smart
NICs, multi-queue support, SR-IOV functionality
● On-line resize of guest disks without reboot/shutdown
● Remus Fault Tolerance: live transactional synchronization
of VM state between physical servers
● RAS features: physical cpu/memory hotplug
28. GRUB Configuration
Sample Xen GRUB Configuration:
title Xen 3.4
root (hd0,0)
kernel /boot/xen-3.4.0.gz
module /boot/vmlinuz-2.6.18.8-xen root=/dev/sda1
module /boot/initrd.img-2.6.18.8-xen
Sample Normal Linux GRUB Configuration:
title Ubuntu 2.6.24-23
root (hd0,0)
kernel /boot/vmlinuz-2.6.24-23-generic root=/dev/sda1
initrd /boot/initrd.img-2.6.24-23-generic
29. Xend Configuration
Xen daemon's configuration in /etc/xen/xend-config.sxp :
● Configure Xen to listen for remote connections
● Set max/min Dom0 CPU and memory resources
● Set up the virtual network:
○ Bridging
○ Routing
○ NAT
● Configure live migration (enable and set relocation port)
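The options above could look like the following sketch of an xend-config.sxp fragment; the option names are real xend options, but the values shown are illustrative rather than defaults (written to /tmp here for inspection):

```shell
# Hedged sketch of an xend-config.sxp fragment; values are examples only.
cat > /tmp/xend-config.sxp <<'EOF'
# Listen for remote (HTTP) connections:
(xend-http-server yes)
(xend-port 8000)
# Dom0 resource limits:
(dom0-min-mem 196)
(dom0-cpus 0)
# Virtual network setup: network-bridge, network-route, or network-nat
(network-script network-bridge)
(vif-script vif-bridge)
# Live migration (relocation):
(xend-relocation-server yes)
(xend-relocation-port 8002)
EOF
grep -c '^(' /tmp/xend-config.sxp   # counts the 8 option lines
```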
31. Network Modes
Bridging mode:
Guest domains are transparently on the same
network as Dom0
Routing mode:
Guest domains sit behind Dom0 and packets are
relayed to the network by Dom0
NAT mode:
Guest domains hide behind Dom0 using Dom0's
IP for external traffic
44. Local Storage
Raw File:
● Use a filesystem within a single file
● Takes advantage of loopback devices
Partition:
● Use a partition on a local disk
● Can be physical partition or on an LVM volume
Partitioned File:
● Less common
● Treats a raw file as a disk (instead of single partition)
45. Local Storage: Raw File for PV
1. Allocate storage:
dd if=/dev/zero of=/path/to/image.img bs=1024k count=1024
2. Format:
mkfs.ext3 -F /path/to/image.img
3. Mount the storage:
mkdir /mnt/tmp; mount -o loop /path/to/new/image.img /mnt/tmp
4. Install the operating system (needs PV drivers):
debootstrap hardy /mnt/tmp or cp -a /* /mnt/tmp
46. Local Storage: Raw File for PV
5. Modify various files in guest filesystem and unmount:
e.g. /etc/fstab, /etc/hostname, /etc/network/interfaces
6. Create the guest configuration file for Xen to use
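Steps 1 through 5 can be sketched as a short script; this assumes a Debian/Ubuntu Dom0, uses an illustrative path and a deliberately small demo size, and leaves the root-only steps commented out:

```shell
# Illustrative path and a small demo size; the slides use count=1024 (1 GB).
IMG=/tmp/demo-guest.img
dd if=/dev/zero of="$IMG" bs=1024k count=16 2>/dev/null
# Format the file in place (skipped automatically if mkfs.ext3 is unavailable):
command -v mkfs.ext3 >/dev/null && mkfs.ext3 -F -q "$IMG"
# The remaining steps need root, so they are shown commented out:
#   mkdir -p /mnt/tmp && mount -o loop "$IMG" /mnt/tmp
#   debootstrap hardy /mnt/tmp      # or: cp -a /* /mnt/tmp
#   umount /mnt/tmp
ls -l "$IMG"
```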
47. Local Storage: Raw File for HVM
1. Allocate storage:
dd if=/dev/zero of=/path/to/image.img bs=1024k count=1024
2. Create the guest configuration file
3. Install the operating system
48. Guest Storage Configuration Options
Array of disk specifications:
'real device in Dom0, virtual device in DomU, access (r or w)'
SCSI (sd) and IDE (hd) examples:
disk = [ 'phy:sda,sda,w',
'phy:/dev/cdrom,hdc:cdrom,r' ]
disk = [ 'tap:aio:hdb1,hdb1,w',
'phy:/dev/LV/disk1,sda1,w' ]
Xen virtual device example:
disk = [ 'tap:aio:hdb1,xvdb1,w',
'phy:/dev/LV/disk1,xvda1,w' ]
49. General Guest Configuration Options
(For both PV and HVM guests)
name
● The name of the guest
● (defaults to configuration filename)
vcpus
● The number of virtual CPUs
● (defaults to 1)
memory
● The amount of memory (in MB)
● (defaults to 128)
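A minimal guest configuration using just these three options might look like this sketch (the guest name and values are made up):

```shell
# Writes an illustrative guest config; option names match xm's config format.
cat > /tmp/demo-guest.cfg <<'EOF'
name   = "demo-guest"   # defaults to the config filename if omitted
vcpus  = 2              # default: 1
memory = 256            # MB; default: 128
EOF
grep -c '=' /tmp/demo-guest.cfg   # the 3 option lines
```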
50. Guest Network Configuration
● Array of virtual interface network parameters specify
'MAC Address, IP Address,' for each interface
● Examples:
vif = [ '' ] # Default bridge, random MAC address
vif = [ 'mac=00:16:3e:36:a1:e9,ip=192.168.1.25,bridge=xenbr0' ]
51. Guest Network Configuration
Bridge mode networking (default in xend config):
Set vif statement in the DomU's configuration file
Routing mode networking (if chosen in xend config):
Set DomU's gateway (in guest OS's network configuration) to
Dom0's external IP (e.g. 192.0.32.10)
NAT mode networking (if chosen in xend config):
Set DomU's gateway (in guest OS's network configuration) to
Dom0's internal IP (e.g. 10.0.0.1)
53. HVM-specific Configuration Options
kernel
The location of the HVM loader
builder
Domain build function ("hvm" for an unmodified kernel)
device_model
Location of the device emulation tool (e.g. "qemu-dm")
boot
The boot order (CD-ROM, hard drive)
vnc
Enable a VNC server for the guest's display
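Putting the HVM options together, a configuration sketch might look like the following; the hvmloader and qemu-dm paths vary by distribution, and the disk/image names are illustrative:

```shell
# Hedged sketch of an HVM guest config; adjust paths for your distribution.
cat > /tmp/demo-hvm.cfg <<'EOF'
name         = "demo-hvm"
builder      = "hvm"
kernel       = "/usr/lib/xen/boot/hvmloader"   # path varies by distro
device_model = "/usr/lib/xen/bin/qemu-dm"      # path varies by distro
memory       = 512
disk         = ['tap:aio:/path/to/image.img,hda,w',
                'phy:/dev/cdrom,hdc:cdrom,r']
boot         = "dc"    # try CD-ROM (d) first, then hard disk (c)
vnc          = 1
EOF
grep -q 'builder' /tmp/demo-hvm.cfg && echo ok
```

After installation, flipping boot to "cd" (or just "c") makes the hard disk the first boot device, matching step 5 of the CD/.iso procedure.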
55. Installing HVM Guest OSes (CD/.iso)
1. Allocate disk image for the VM
2. Create HVM config. with CD/.iso as first boot device
3. Boot the guest:
xm create /path/to/guest.cfg
4. Follow normal installation process of guest OS
5. Change boot order in guest configuration file, reboot
56. PV-specific Configuration Options
kernel
Location of the Xen-modified kernel in Dom0's filesystem
ramdisk
Location of the initial RAM disk image in Dom0's filesystem
or:
bootloader
The location of the bootloader (e.g. pygrub)
57. PV-specific Configuration Options
root
The partition to use as root inside the guest
extra
The parameters appended to the kernel command line
(as would be normally set at the end of a kernel line)
vfb
Virtual framebuffer for PV guest to use in addition to console
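A PV configuration sketch using pygrub (image path and values are illustrative); with bootloader set, explicit kernel/ramdisk lines are unnecessary:

```shell
# Hedged sketch of a PV guest config using pygrub.
cat > /tmp/demo-pv.cfg <<'EOF'
name       = "demo-pv"
bootloader = "/usr/bin/pygrub"   # reads kernel/initrd out of the guest image
memory     = 256
disk       = ['tap:aio:/path/to/image.img,xvda,w']
root       = "/dev/xvda1 ro"
extra      = "console=hvc0"      # appended to the guest kernel command line
vif        = ['']
EOF
grep -q 'pygrub' /tmp/demo-pv.cfg && echo ok
```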
59. Installing PV Guest OSes
1. Allocate disk image for the guest VM
2. Mount and populate disk image with distro tools:
○ Stacklet Bundler
○ virt-install
○ virt-manager (discussed further later)
○ vmbuilder
○ debootstrap
○ The tool that comes with your favorite distro
3. Unmount image and create PV guest configuration
4. Boot the guest: xm create /path/to/guest.cfg
60. Pre-built Guest Images
Sources:
● http://stacklet.com
● http://rpath.com
● http://jumpbox.com
Advantages:
● Simple to download and extract the images
● Available with different distribution OSes and
pre-installed applications
61. P2V : Physical Machine to a VM
Conversion of a physical machine into a virtual machine
Scenarios:
● Virtualizing existing infrastructure
● Supporting legacy applications
● System administration benefits of virtualization
Available Tools:
● Use existing backup tools to create a file backup
● P2V LiveCD
● XenServer conversion tool
● Various third-party tools
62. Guest Access Methods
● The simplest way: console
xm console domU_name
● A better way: SSH directly to DomU
ssh user@xxx.xxx.xxx.xxx
● Simple graphics: SSH with X11 forwarding to DomU
ssh -X user@xxx.xxx.xxx.xxx
● Better graphics: SDL or VNC
○ Install vncviewer package
○ Enable the vnc or sdl option in the guest config file
70. Network Storage Options
ATA over Ethernet (AoE):
● Export block devices over the network
● Lightweight Ethernet layer protocol
● No built-in security
Internet Small Computer System Interface (iSCSI):
● Exports block devices over the network
● Network layer protocol
● Scales with network bandwidth
● Client and user-level security
Network File System (NFS):
● Exports file system over the network
● Network layer protocol
● Known performance issues as root file system
71. Network Storage Options
Network Block Device (NBD):
● Exports block devices over the network
● Network layer protocol
● Scales with network bandwidth
● Not recommended as root file system
Distributed Replicated Block Device (DRBD):
● Exports and shares block devices over the network
● Integration with Heartbeat
● No additional storage server necessary
72. Using AoE
1. Install required packages:
○ Install vblade on the storage server
○ Install aoe-tools and the aoe module in the Domain0
2. Export a guest image from the storage server:
vbladed 1 1 eth0 /dev/ (for partitions)
...
vbladed 1 1 eth0 /path/to/image.img (for files)
3. Point the guest configuration to the image:
disk = ['phy:etherd/e1.1,xvda1,w']
Notes:
● Remember that AoE provides no security
● Never use the same shelf/slot for two images
73. Using DRBD
1. Install required packages:
○ Ubuntu/Debian: drbd8-utils and drbd8-module
○ Red Hat/CentOS: drbd and drbd-km
2. Configure DRBD:
○ Mostly beyond the scope of this presentation
○ Disable sendpage in /etc/modprobe.d/drbd.conf :
options drbd disable_sendpage=1
3. Point the guest configuration to the image:
disk = [ 'drbd:resource,xvda,w' ]
Documentation:
http://www.drbd.org/users-guide/ch-xen.html
75. Guest Management Tools
Simplify:
● Creation of guest images
● Manipulation of guest domains
● Generation of guest configuration files
● Monitoring resource usage by guests
Popular tools:
● Convirt
○ Open-source
○ Third-party product and support
● Zentific
○ Open-source
○ Web-based tool
● Virtual Machine Manager
○ Open-source
○ Desktop tool
76. Convirt
● Designed for full datacenter management
● Allows for managing the complete lifecycle of Xen (and
KVM) guests and hosts
● Open-source with commercial support
86. Virtual Machine Manager
● Graphical user interface for managing virtual machines
● Allows for Xen guest performance monitoring, resource
allocation, and domain creation.
● Open source with Red Hat support
95. Xen Integration and Compatibility
libvirt:
Provides a uniform interface with different virtualization technologies
Mainline Virtualization API (pv_ops):
Provides a common paravirtualization interface in mainstream Linux kernel for
increased performance and capabilities
Open Virtual Machine Format (OVF):
Defines a set of metadata tags that can be used to deploy virtual environments
across multiple virtualization platforms
Xen API (XAPI)
97. Multiple Dom0 Network Interfaces
Motivation:
Segregate DomUs over different networks
Procedure:
1. Run network bridge script for each physical interface:
/etc/xen/scripts/network-bridge start vifnum=1
netdev=eth1 bridge=xenbr1
2. Configure the DomU's vif option for each bridge:
vif = ['bridge=xenbr1', ...]
98. Multiple DomU Network Interfaces
Motivation:
Allow a DomU to connect to different virtual bridges
Procedure:
Modify DomU configuration file:
vif = ['bridge=xenbr0', 'bridge=xenbr1', ...]
99. DomU Network Isolation
Motivation:
Isolate DomUs from external network, but allow them to
communicate with one another
Procedure:
1. Create a dummy bridge in Dom0 in network
configuration or with brctl
2. Configure DomUs to connect to that dummy bridge:
vif = ['bridge=dummy0']
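The bridge-creation step could be scripted as below; the commands are written to a file for inspection rather than executed, since they require root on a Dom0 (the dummy0 name follows the slide):

```shell
# Sketch: Dom0-side commands to create an isolated bridge, saved as a script.
cat > /tmp/make-dummy-bridge.sh <<'EOF'
#!/bin/sh
brctl addbr dummy0      # create a bridge not attached to any physical NIC
ip link set dummy0 up   # bring it up; with no uplink, traffic stays local
EOF
chmod +x /tmp/make-dummy-bridge.sh
```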
100. DomU Network Rate Limiting
Motivation:
Rate limiting for DomU network usage for better performance
isolation
Procedure:
Configure DomU's vif option with rate parameter :
vif = ['..., rate=50Kb/s']
104. Memory and Scalability
● Using memory overcommitment, more memory can be
allocated than is on the system
● Memory allocated to, but unused by, a VM is made
available for use by other VMs
● Reduces wasted resources, allowing greater scalability
● Risks poor performance due to swapping
109. Cold Relocation
Motivation:
Moving a guest between hosts without shared storage, or
between different architectures or hypervisor versions
Process:
1. Shut down a guest on the source host
2. Move the guest from one Domain0's file system to
another's by manually copying the guest's disk image
and configuration files
3. Start the guest on the destination host
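The three steps could be captured in a script like this sketch; the guest name web1, image path, and destination host2 are all hypothetical, and the script is written to a file rather than run, since it assumes two live Xen hosts:

```shell
# Sketch of a cold relocation, saved as a script; all names are hypothetical.
cat > /tmp/cold-relocate.sh <<'EOF'
#!/bin/sh
xm shutdown -w web1                             # 1. wait for clean shutdown
scp /xen/images/web1.img host2:/xen/images/     # 2. copy disk image...
scp /etc/xen/web1.cfg   host2:/etc/xen/         #    ...and config file
ssh host2 xm create /etc/xen/web1.cfg           # 3. start on destination
EOF
chmod +x /tmp/cold-relocate.sh
```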
110. Cold Relocation
Benefits:
● Hardware maintenance with less downtime
● Shared storage not required
● Domain0s can be different
● Easy to make multiple copies or duplicates of a guest
Limitations:
● A more manual process
● Services are down during the copy
111. Warm Migration
Motivation:
Move a guest between hosts when uptime is not critical
Command:
xm migrate domain_name destination_host
Result:
1. Pauses a guest's execution
2. Transfers guest's state across network to a new host
3. Resumes guest's execution on destination host
112. Warm Migration
Benefits:
● Guest and its processes remain running
● Less data transfer than live migration
Limitations:
● For a short time, the guest is not externally accessible
● Requires shared storage
● Network connections to and from guest are interrupted and
will probably time out
113. Live Migration
Motivation:
Load balancing, hardware maintenance, and
power management
Command:
xm migrate --live domain_name destination_host
Result:
1. Begins transferring guest's state to new host
2. Repeatedly copies dirtied guest memory (due to
continued execution) until complete
3. Re-routes network connections, and guest continues
executing with execution and network uninterrupted
114. Live Migration
Benefits:
● No downtime
● Network connections to and from guest often remain active
and uninterrupted
● Guest and its services remain available
Limitations:
● Requires shared storage
● Hosts must be on the same layer 2 network
● Sufficient spare resources needed on target machine
● Hosts must be similar
117. Xen Cloud Platform (XCP)
● Xen Cloud Platform (XCP) is a turnkey virtualization solution
that provides out-of-the-box virtualization/cloud computing
● XCP includes:
○ Open-source Xen hypervisor
○ Enterprise-level XenAPI (XAPI) management tool stack
○ Support for Open vSwitch (an open-source, standards-compliant virtual switch)
● XCP was originally derived from Citrix XenServer (a free
enterprise product), is open-source, and is free
● XCP promises to contain cutting-edge features that will
drive future developments of Citrix XenServer
118. XCP Features
● Fully-signed Windows PV drivers
● Single Root I/O Virtualization (SR-IOV) support
● Heterogeneous machine resource pool support
● Installation by templates for many different guest OSes
119. XCP XenAPI Management Tool Stack
● VM lifecycle: live snapshots, checkpoint, migration
● Resource pools: live relocation, auto configuration,
disaster recovery
● Flexible storage, networking, and power management
● Event tracking: progress, notification
● Upgrade and patching capabilities
● Real-time performance monitoring and alerting
134. Xen VNC Proxy (XVP)
● Web-based, open-source management for both
Citrix XenServer and Xen Cloud Platform
● VNC guest console via web browser
● Freely available as software or a virtual appliance:
http://www.xvpsource.org
140. Xen Cloud Control System (XCCS)
● "XCCS is a lightweight front end package for the excellent
Xen Cloud Platform cloud computing system. XCCS is
totally web based so any computer or smart phone with a
web browser can be used with it!"
● Open-source and freely available as software/appliance:
http://www.xencloudcontrol.com
151. Cloud Computing BoF
Tuesday, 8:00pm:
Open Source and Open Standards-based Cloud Computing
(Room: Willow Glen)
Todd Deshane and Patrick F. Wilbur, Clarkson University
Ben Pfaff, Nicira Networks
Jason Faulkner, Rackspace
In this session, we will describe some of the open source components available
to support hybrid (public/private) cloud computing. We have some interest and
expertise with various open source components, such as the hypervisor (Xen),
the infrastructure platform (the Xen Cloud Platform (XCP)), the virtual
networking switch layer (Open vSwitch), and the cloud computing software
(OpenStack). We invite others that are interested in learning about, describing
experiences with, and discussing the role open source and open standards-
based solutions play in the cloud.
152. Cloud Computing Sessions
● Wednesday, 4:00pm: Experiences with Eucalyptus:
Deploying an Open Source Cloud
● Thursday, 2:00pm: Flying Instruments-Only: Navigating
Legal and Security Issues from the Cloud
● Thursday, 4:00pm: RC2 -- A Living Lab for
Cloud Computing
● Thursday, 4:00pm: Panel: Legal and Privacy Issues in
Cloud Computing
153. Useful Resources and References
Community:
● Xen Mailing List: http://www.xen.org/community/
● Xen Wiki: http://wiki.xensource.com/xenwiki/
● Xen Blog: http://blog.xen.org
● http://wiki.xensource.com/xenwiki/XenCommonProblems
Books:
● The Definitive Guide to the Xen Hypervisor
● Running Xen: A Hands-On Guide to the Art of Virtualization
Discussion:
● http://www.xen.org/community/xenpapers.html
● Abstracts, slides, and videos from Xen Summits