1. Linux Containers – NextGen Virtualization for Cloud (Benefit Realization)
Cloud Expo
June 10-12, 2014
New York City, NY
Boden Russell (brussell@us.ibm.com)
2. Why LXC: Performance
6/13/2014 2
Provision Time
– Manual: days
– VM: minutes
– LXC: seconds / ms
[Chart: Linpack performance @ 45000, GFlops vs. vCPUs]
3. Why LXC: Industry Uptrend
[Charts: Google Trends interest over time for "LXC" and for "docker"]
5. Why LXC: Lower TCO
Supported out of the box by a modern Linux kernel
Open source toolsets
Cloudy integration
6. Definitions
Linux Containers (LXC = LinuX Containers)
– Lightweight virtualization
– Realized using features provided by a modern Linux kernel
– VMs without the hypervisor (kind of)
Containerization of
– (Linux) Operating Systems
– Single or multiple applications
LXC as a technology ≠ LXC “tools”
7. Hypervisors vs. Linux Containers
[Diagram comparing three stacks:
– Type 1 Hypervisor: Hardware → Hypervisor → Virtual Machines, each with its own Operating System, bins / libs, and apps
– Type 2 Hypervisor: Hardware → Operating System → Hypervisor → Virtual Machines, each with its own Operating System, bins / libs, and apps
– Linux Containers: Hardware → Operating System → Containers, each with bins / libs and apps but no guest OS]
Containers share the OS kernel of the host and are therefore lightweight; every container on a host runs on that same kernel. Containers are isolated, but share the OS and, where appropriate, libs / bins.
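The shared-kernel point is easy to check from a shell: a container reports the same kernel release as its host, whereas a VM boots its own kernel. A minimal check (assumes a host with `uname`; the docker line requires a docker install and is shown commented out):

```shell
# Print the host kernel release; any container started on this host reports
# the identical string, because containers share the host kernel.
uname -r
# docker run --rm ubuntu uname -r   # same output as above (requires docker)
```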
9. LXC Technology Stack
[Stack diagram, bottom to top:
– Hardware
– Kernel space: architecture-dependent kernel code; kernel; system call interface; kernel features (cgroups, namespaces, chroots, LSM)
– User space: GLIBC / pseudo FS / user space tools & libs; Linux container tooling (lxc); Linux container commoditization; orchestration & management]
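The kernel-space building blocks above are visible on any modern Linux box through procfs, which makes it easy to see that cgroups and namespaces are ordinary kernel features rather than a separate hypervisor layer (assumes Linux with /proc mounted):

```shell
# Namespaces this shell is running in (mnt, uts, ipc, net, pid, ...).
ls /proc/self/ns
# cgroup controllers available in the kernel (cpu, memory, blkio, ...).
cat /proc/cgroups
# cgroup membership of the current process.
cat /proc/self/cgroup
```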
10. About This Benchmark
Use case perspective
– As an OpenStack Cloud user, I want an Ubuntu-based VM with MySQL… why would I choose docker LXC over a traditional hypervisor?
OpenStack “Cloudy” perspective
– LXC vs. traditional VM from a Cloudy (OpenStack) perspective
– VM operational times (boot, start, stop, snapshot)
– Compute node resource usage (per VM penalty); density factor
Guest runtime perspective
– CPU, memory, file I/O, MySQL OLTP, etc.
Why KVM?
– Exceptional performance
DISCLAIMERS
The tests herein are semi-active litmus tests – no in-depth tuning,
analysis, etc. More rigorous testing is warranted. These results do not
necessarily reflect your workload or exact performance, nor are they
guaranteed to be statistically sound.
11. Benchmark Environment Topology @ SoftLayer
[Diagram: two identical OpenStack deployments at SoftLayer, each with a controller (glance api / reg, nova api / cond / etc, keystone, cinder api / sch / vol, rally, …) and a compute node running dstat; one compute node is backed by docker lxc, the other by KVM]
12. STEADY STATE VM PACKING
OpenStack Cloudy Benchmark
13. Cloudy Performance: Steady State Packing
Benchmark scenario overview
– Pre-cache VM image on compute node prior to test
– Boot 15 VMs asynchronously in succession
– Wait for 5 minutes (to achieve steady-state on the
compute node)
– Delete all 15 VMs asynchronously in succession
Benchmark driver
– cpu_bench.py
High level goals
– Understand compute node characteristics under
steady-state conditions with 15 packed / active VMs
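The scenario above can be sketched with the nova CLI. This is a dry-run sketch: `run` only echoes each command, and the image/flavor names are hypothetical; change `run` to execute the commands against a real cloud.

```shell
# Dry-run sketch of the packing scenario; `run` echoes instead of executing.
run() { echo "+ $*"; }

# Boot 15 VMs asynchronously in succession (hypothetical image/flavor names;
# `nova boot` returns as soon as the request is accepted).
for i in $(seq 1 15); do
  run nova boot --image ubuntu-mysql --flavor m1.small "packed-vm-$i"
done

run sleep 300   # wait 5 minutes for the compute node to reach steady state

# Delete all 15 VMs asynchronously in succession.
for i in $(seq 1 15); do
  run nova delete "packed-vm-$i"
done
```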
[Chart: benchmark visualization, active VMs over time]
20. Cloudy Performance: Serial VM Boot
Benchmark scenario overview
– Pre-cache VM image on compute node prior to test
– Boot VM
– Wait for VM to become ACTIVE
– Repeat the above steps for a total of 15 VMs
– Delete all VMs
Benchmark driver
– OpenStack Rally
High level goals
– Understand compute node characteristics under
sustained VM boots
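Rally drives this scenario from a small task file. A sketch of what such a task might look like (the scenario and runner names follow Rally's 2014-era JSON task format; the image and flavor names are placeholders):

```json
{
  "NovaServers.boot_and_delete_server": [
    {
      "args": {
        "image": {"name": "ubuntu-mysql"},
        "flavor": {"name": "m1.small"}
      },
      "runner": {"type": "serial", "times": 15}
    }
  ]
}
```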
[Chart: benchmark visualization, active VMs over time]
21. Cloudy Performance: Serial VM Boot
Average server boot time: docker 3.53 s vs. KVM 5.78 s
KVM
28. SERIAL VM SOFT REBOOT
OpenStack Cloudy Benchmark
29. Cloudy Performance: Serial VM Reboot
Benchmark scenario overview
– Pre-cache VM image on compute node prior to test
– Boot a VM & wait for it to become ACTIVE
– Soft reboot the VM and wait for it to become ACTIVE
• Repeat reboot a total of 5 times
– Delete VM
– Repeat the above for a total of 5 VMs
Benchmark driver
– OpenStack Rally
High level goals
– Understand compute node characteristics under sustained VM reboots
[Chart: benchmark visualization, active VMs over time]
30. Cloudy Performance: Serial VM Reboot
6/13/2014 30
2.577879581
124.433239
0
20
40
60
80
100
120
140
docker KVM
TimeInSeconds
Average Server Reboot Time
docker
KVM
31. Cloudy Performance: Serial VM Reboot
6/13/2014 31
3.567586041
3.479760051
0
0.5
1
1.5
2
2.5
3
3.5
4
docker KVM
TimeInSeconds
Average Server Delete Time
docker
KVM
35. SNAPSHOT VM TO IMAGE
OpenStack Cloudy Benchmark
36. Cloudy Performance: Snapshot VM To Image
Benchmark scenario overview
– Pre-cache VM image on compute node prior to test
– Boot a VM
– Wait for it to become ACTIVE
– Snapshot the VM
– Wait for image to become ACTIVE
– Delete VM
Benchmark driver
– OpenStack Rally
High level goals
– Understand cloudy ops times from a user perspective
37. Cloudy Performance: Snapshot VM To Image
6/13/2014 37
36.88756394
48.02313805
0
10
20
30
40
50
60
docker KVM
TimeInSeconds
Average Snapshot Server Time
docker
KVM
48. Cloud Management Impacts on LXC
Docker container boot, CLI vs. Nova virt driver: docker cli 0.17 s vs. nova-docker 3.53 s
Cloud management often caps the true ops performance of LXC
49. Ubuntu MySQL Image Size
Ubuntu MySQL image size: docker 381.5 MB vs. KVM 1080 MB
Out of the box JeOS images for docker are lightweight
50. LXC In Summary
Near bare metal performance in the guest
Fast operations in the Cloud
– Often capped by Cloud management framework
Reduced resource consumption (CPU, MEM) on the compute
node – greater density
Out of the box smaller image footprint
51. LXC Gaps
There are gaps…
Lack of industry tooling / support
Live migration still a WIP
Full orchestration across resources (compute / storage / networking)
Security concerns
Not a well-known technology… yet
Integration with existing virtualization and Cloud tooling
Few, if any, industry standards
Missing skill set
Slower upstream support due to the kernel dev process
Memory / CPU proc FS not cgroup-aware yet
Etc.
52. LXC: Use Cases For Traditional VMs
There are still use cases where traditional VMs are warranted.
Virtualization of non-Linux OSes
– Windows
– AIX
– Etc.
LXC not supported on host
VM requires a unique kernel setup that is not applicable to
other VMs on the host (i.e. per-VM kernel config)
53. LXC Recommendations
Private environments (trusted code)
– App packaging / deployment / management / etc, devOps, Cloud, etc…
No additional worries about security
Public environments
– Single tenant
• Same restrictions as private envs; tenant trusted code
– Multi tenant
[Diagram: LXC security triangle, in which security measures (LSM, capabilities, seccomp, RO bind mounts, GRSEC, etc.) scale with privileges, multitenancy, and untrusted code]
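Several of the measures in the triangle map directly onto LXC container configuration. A hypothetical hardening snippet (keys follow the pre-2.0 lxc.conf naming; the paths and values are illustrative only):

```
# Drop capabilities the workload does not need.
lxc.cap.drop = sys_module sys_time sys_rawio
# Read-only bind mount into the container.
lxc.mount.entry = /opt/app opt/app none ro,bind 0 0
# Confine the container with an LSM (AppArmor) profile.
lxc.aa_profile = lxc-container-default
```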