OpenNebulaConf 2016 - Hypervisors and Containers Hands-on Workshop by Jaime Melis, OpenNebula

In this 90-minute hands-on workshop, some of the key contributors to OpenNebula will walk attendees through the configuration and integration aspects of the computing subsystem in OpenNebula. The session will also include lightning talks by community members describing aspects related to Hypervisors and Containers with OpenNebula:
● Deployment scenarios
● Integration
● Tuning & debugging
● Best practices

1. Hypervisors & Containers
Jaime Melis, OpenNebula Engineer // @j_melis // jmelis@opennebula.org
OpenNebulaConf 2016, 4th edition
2. Agenda
3. Introduction
Virtual Infra Management:
• Capacity management
• Multi-VM management
• Resource optimization
• HA and business continuity
OpenNebula Cloud Management:
• VDC multi-tenancy
• Simple cloud GUI and interfaces
• Service elasticity/provisioning
• Federation/hybrid
Hypervisors: KVM, vCenter (VMware)
4. Reference Architecture
5. Reference Architecture: Basic and Advanced Implementations
● Operating System: Supported OS (Ubuntu or CentOS/RHEL) in all machines, with the specific OpenNebula packages installed (Basic and Advanced)
● Hypervisor: KVM (Basic and Advanced)
● Networking: VLAN 802.1Q (Basic) / VXLAN (Advanced)
● Storage: Shared file system (NFS/GlusterFS) using the qcow2 format for Image and System Datastores (Basic) / Ceph cluster for Image Datastores and a separate shared FS for the System Datastore (Advanced)
● Authentication: Native authentication or Active Directory (Basic and Advanced)
6. Reference Architecture: Front-end Hardware Recommendations
            Basic            Advanced
Memory      2 GB             4 GB
CPU         1 CPU (2 cores)  2 CPU (4 cores)
Disk size   100 GB           500 GB
Network     2 NICs           2 NICs
7. Reference Architecture: Network Implementations
● Private Network: communication between VMs.
● Public Network: to serve VMs that need internet access.
● Service Network: for front-end and virtualization node communication (including inter-node communication for live migration), as well as for storage traffic.
● Storage Network: to serve the shared filesystem or the Ceph pools to the virtualization nodes.
8. Configuring Drivers
VM_MAD = [
    NAME = "kvm",
    SUNSTONE_NAME = "KVM",
    EXECUTABLE = "one_vmm_exec",
    ARGUMENTS = "-t 15 -r 0 kvm",
    DEFAULT = "vmm_exec/vmm_exec_kvm.conf",
    TYPE = "kvm",
    KEEP_SNAPSHOTS = "no",
    IMPORTED_VMS_ACTIONS = "terminate, terminate-hard, hold, release, suspend,
        resume, delete, reboot, reboot-hard, resched, unresched, disk-attach,
        disk-detach, nic-attach, nic-detach, snap-create, snap-delete"
]
9. Monitoring Hosts
10. Monitoring Hosts
Wed Oct 19 14:43:20 2016 [Z0][InM][D]: Monitoring host host01 (0)
Wed Oct 19 14:43:21 2016 [Z0][InM][D]: Host host01 (0) successfully monitored.
Wed Oct 19 14:43:31 2016 [Z0][InM][D]: Host host01 (0) successfully monitored.
Wed Oct 19 14:43:51 2016 [Z0][InM][D]: Host host01 (0) successfully monitored.
...
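The same information can be cross-checked from the front-end CLI; a minimal sketch using the host from the log above:

# Check that the host reaches the MONITORED state and inspect the values it reports
onehost list
onehost show host01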
11. Capacity
Attributes
● MEMORY
● CPU
● VCPU
Overcommitment
● RESERVED_CPU
● RESERVED_MEMORY
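A hedged illustration of how these attributes fit together; all values are illustrative, and the overcommitment semantics assumed here are that positive RESERVED_* values subtract from the capacity the scheduler sees while negative values add to it:

# VM template excerpt: half a physical CPU, exposed to the guest as 2 virtual CPUs
CPU    = "0.5"
VCPU   = "2"
MEMORY = "2048"   # MB

# Host template excerpt (edit with 'onehost update <host>'); assumption: -100
# adds one full core worth of schedulable capacity (overcommitment)
RESERVED_CPU = "-100"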
12. Cgroups
What is it for?
● Enforce the CPU share assigned to a VM
● A VM with CPU=0.5 gets half the CPU of a VM with CPU=1.0
● You can limit the total memory used by the VMs
How?
● Check your distro
● Configuration goes on the hosts (not on the front-end)
● There is a cgroups service
● Enable it in /etc/libvirt/qemu.conf
● Add libvirt to /etc/cgrules.conf
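A minimal sketch of the two files named above, assuming the libcgroup (cgconfig/cgred) tooling; the controller list and the destination group are illustrative:

# /etc/libvirt/qemu.conf -- let libvirt place qemu processes under these controllers
cgroup_controllers = [ "cpu", "cpuacct", "cpuset", "memory", "blkio" ]

# /etc/cgrules.conf -- format: <user>[:<process>] <controllers> <destination group>
*:libvirtd       cpu,memory      /virt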
13. Fast VM Deployments
● Libvirt listens by default on a unix socket
● No concurrent operations
/etc/one/sched.conf
# MAX_HOST: Maximum number of Virtual Machines dispatched to a given host
# in each scheduling action
# MAX_HOST = 1
● Enable the TCP socket in libvirtd.conf
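A hedged sketch of the libvirt side of this tip; these settings are only acceptable on a trusted management network (use SASL or TLS otherwise):

# /etc/libvirt/libvirtd.conf
listen_tls = 0
listen_tcp = 1
auth_tcp   = "none"   # no authentication: lab use only

# On some distros the daemon must also be started with --listen,
# e.g. LIBVIRTD_ARGS="--listen" in /etc/sysconfig/libvirtd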
14. RAW
If it's supported by Libvirt… it's supported by OpenNebula
RAW = [
    TYPE = "kvm",
    DATA = "<devices>
        <serial type='pty'><source path='/dev/pts/5'/><target port='0'/></serial>
        <console type='pty' tty='/dev/pts/5'><source path='/dev/pts/5'/><target port='0'/></console>
    </devices>"
]
The RAW data is passed verbatim to the Libvirt deployment file (XML).
15. Improve Performance: virtio
● Paravirtualized drivers
  ○ Network
  ○ Storage
Enable it by default:
/etc/one/vmm_exec/vmm_exec_kvm.conf
NIC = [ MODEL = "virtio" ]
/etc/one/oned.conf
DEFAULT_DEVICE_PREFIX = "vd"
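The same drivers can also be requested per VM instead of globally; a sketch with hypothetical image and network names:

# VM template excerpt
DISK = [ IMAGE = "centos7", DEV_PREFIX = "vd" ]    # disk shows up as a virtio device (vda, vdb, ...)
NIC  = [ NETWORK = "private", MODEL = "virtio" ]   # virtio-net interface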
16. Further Tips
KSM
● Kernel Samepage Merging
● Merges identical (private) memory pages across VMs
● Increases VM density
● Enabled by default in CentOS
SPICE
● Native in OpenNebula >= 4.12 (qxl display driver)
● Redirect printers, USB (mass-storage), audio
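Two small, hedged checks to go with these tips; the sysfs paths are the standard KSM interface and GRAPHICS is a plain VM template attribute:

# On the KVM host: is KSM running, and is it actually sharing pages?
cat /sys/kernel/mm/ksm/run            # 1 = running
cat /sys/kernel/mm/ksm/pages_shared

# VM template excerpt: request a SPICE display instead of VNC
GRAPHICS = [ TYPE = "spice", LISTEN = "0.0.0.0" ]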
17. Further Tips
Virsh Capabilities
● /usr/share/libvirt/cpu_map.xml
● OS = [ MACHINE = "..." ]
Cache
● Writethrough
  ○ host page cache on, guest disk write cache off
● Writeback
  ○ Good overall I/O performance
  ○ host page cache on, disk write cache on
● None
  ○ Good write performance
  ○ host page cache off, disk write cache on
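Both tips boil down to plain template attributes; a sketch in which the machine type and image name are illustrative:

# VM template excerpt
OS   = [ MACHINE = "pc-i440fx-2.3" ]                # a machine type reported by virsh capabilities
DISK = [ IMAGE = "centos7", CACHE = "writeback" ]   # or "writethrough" / "none"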
18. vCenter Approach
Virtual Infra Management:
• Capacity management
• Multi-VM management
• Resource optimization
• HA and business continuity
OpenNebula Cloud Management:
• VDC multi-tenancy
• Simple cloud GUI and interfaces
• Service elasticity/provisioning
• Federation/hybrid
Hypervisors: KVM, vCenter (VMware)
19. Reference Architecture
20. Reference Architecture: Summary of the Implementation
● Front-end: Supported OS (Ubuntu or CentOS/RHEL) with the specific OpenNebula packages installed
● Hypervisor: VMware vSphere (managed through vCenter)
● Networking: Standard and Distributed Switches (managed through vCenter)
● Storage: Local and Networked (FC, iSCSI, SAS) (managed through vCenter)
● Authentication: Native authentication or Active Directory
21. Configuring Drivers (Virtualization)
VM_MAD = [
    NAME = "vcenter",
    SUNSTONE_NAME = "VMWare vCenter",
    EXECUTABLE = "one_vmm_sh",
    ARGUMENTS = "-p -t 15 -r 0 vcenter -s sh",
    DEFAULT = "vmm_exec/vmm_exec_vcenter.conf",
    TYPE = "xml",
    KEEP_SNAPSHOTS = "yes",
    IMPORTED_VMS_ACTIONS = "terminate, terminate-hard, hold, release, suspend,
        resume, delete, reboot, reboot-hard, resched, unresched, poweroff,
        poweroff-hard, disk-attach, disk-detach, nic-attach, nic-detach,
        snap-create, snap-delete"
]
22. Configuring Drivers (Monitoring)
IM_MAD = [
    NAME = "vcenter",
    SUNSTONE_NAME = "VMWare vCenter",
    EXECUTABLE = "one_im_sh",
    ARGUMENTS = "-c -t 15 -r 0 vcenter"
]
23. vCenter Delegation
● VMs
● Templates
● Networks
24. Overview
Key Points
● VMware workflows
● Leverages vMotion, HA, DRS
● Templates and Networks must exist
● Each vCenter cluster is a Host
  ○ OpenNebula chooses the Host (vCenter cluster)
  ○ VMware DRS chooses the ESX host
● VMware tools in the guest OS
Limitations
● Security Groups
● Files passed in the Context
25. Connectivity
[Diagram: OpenNebula Frontend (VMM Driver) → VI API → vCenter → ESX Hosts; VNC → ESX Hosts]
26. Importing Clusters
● Use Sunstone to import vCenter Clusters
● The CLI tool also provides that functionality
● Manages subsequent import actions
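On the CLI side this is the onevcenter import tool; the exact flags below are an assumption based on the 5.x documentation, and the vCenter address and credentials are placeholders:

# Import vCenter clusters as OpenNebula hosts (run on the front-end)
onevcenter hosts --vcenter vcenter.example.com --vuser 'Administrator@vsphere.local' --vpass 'secret'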
27. Importing Templates
● A Template must be already defined in OpenNebula.
● It must contain all the basic information needed to deploy.
● During instantiation we can add an extra network, but not remove the existing ones.
28. Importing Templates
● The Template includes the vCenter UUID.
● "Keep VM Disks" is optional.
29. Importing Templates
● The user can be asked about the Resource Pool and Datastore.
30. Importing Networks
● The Network must exist in OpenNebula.
● When importing, we can assign an IP range for the Network.
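A hedged sketch of the address range (AR) attached to the imported Virtual Network; the addresses and size are illustrative:

AR = [
    TYPE = "IP4",
    IP   = "10.0.0.100",
    SIZE = "50"
]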
31. Importing VMs
● Wild VMs can be imported
● After importing, VMs can be managed by OpenNebula
● The following operations cannot be performed:
  ○ delete --recreate
  ○ undeploy
  ○ migrate
  ○ stop
32. Importing Datastores and VMDKs
● Available through CLI and Sunstone
● Same mechanism as with VMs, Networks and Templates
33. Importing Datastores and VMDKs
vCenter datastores supported in OpenNebula:
● Monitoring of Datastores and VMDKs
● VMDK creation
● VMDK upload
● VMDK cloning
● VMDK deletion
● Persistent VMDKs
VMDK hotplug supported:
● Attach disk
34. Contextualization
● Two supported contextualization methods:
  ○ vCenter Customizations
  ○ OpenNebula
● OpenNebula Contextualization works for both Windows and Linux.
● START_SCRIPT is supported
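A hedged sketch of an OpenNebula contextualization section in a VM template; the script body is illustrative:

CONTEXT = [
    NETWORK        = "YES",
    SSH_PUBLIC_KEY = "$USER[SSH_PUBLIC_KEY]",
    START_SCRIPT   = "yum -y install httpd && systemctl start httpd"
]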
35. Scheduling
● OpenNebula chooses a Host (vCenter Cluster)
● The specific ESX host is selected by vCenter (DRS)
● A specific Cluster can be forced:
SCHED_REQUIREMENTS = "NAME=\"<vcenter_cluster>\""
36. Docker / Docker Machine
37. Docker-Machine
● Official Docker project
● Transparently deploys your Docker host
● Supports multiple backends
● Switch between your Docker hosts
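Switching between hosts is a matter of re-pointing the Docker client; a short sketch in which the machine name is hypothetical:

docker-machine ls                          # list provisioned Docker hosts
eval $(docker-machine env my_docker_host)  # export DOCKER_HOST & friends for this shell
docker info                                # now talks to the selected host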
38. Boot2Docker
Lightweight Linux distribution based on Tiny Core Linux, made specifically to run Docker containers.
http://boot2docker.io
39. Requirements
● OpenNebula Cloud
● Image for Docker Engine (Boot2Docker) & Network
● Docker Client Tools & Docker Machine
● Docker Machine OpenNebula Plugin
  ○ github.com/OpenNebula/docker-machine-opennebula
40. Docker Machine OpenNebula Plugin
docker-machine create \
    --driver opennebula \
    --opennebula-network-name private \
    --opennebula-image-name boot2docker \
    --opennebula-b2d-size 18192 \
    my_docker_host
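Once the machine is up, the usual follow-up is to point the local client at it and run a test container; the image here is just an example:

eval $(docker-machine env my_docker_host)
docker run --rm hello-world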
41. Docker Swarm
● Native clustering for Docker
● Pools Docker hosts into a single, virtual Docker host
● Scales to multiple hosts
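A hedged sketch of how this looked with the 2016-era standalone Swarm on top of the same OpenNebula driver; the discovery token and machine names are illustrative:

TOKEN=$(docker run --rm swarm create)
docker-machine create --driver opennebula --opennebula-network-name private \
    --opennebula-image-name boot2docker \
    --swarm --swarm-master --swarm-discovery token://$TOKEN swarm-master
docker-machine create --driver opennebula --opennebula-network-name private \
    --opennebula-image-name boot2docker \
    --swarm --swarm-discovery token://$TOKEN swarm-node-01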
42. Rancher
● Complete Platform for Running Containers
● Entire software stack
● Supports Docker Machine provisioning
43. OpenNebulaConf 2016, 4th edition
Sponsors: Platinum, Gold, Silver, Community
THANKS!
