A talk presented at the Moby Summit, Los Angeles (a co-located event with the Open Source Summit North America) on Thursday, September 14, 2017. The talk introduced bucketbench, an open source tool for benchmarking container runtimes in order to compare the performance impact of changes in the runtime, or of changes to the configuration, of Docker, runC, or containerd, the three runtimes currently supported by the bucketbench project.
3. But...
Standard container lifecycle operations are not sufficient for our performance guarantees! We cannot “docker build”, “docker run”, “docker rm” on each function invocation.
Containers make good sense as the function invocation vehicle.
5. docker
A complete container engine with lifecycle management, orchestration, a remote API (daemon model), plugin support, SDN networking, image building, and image registry/local cache management.
containerd
A high-performance, standards-based, lightweight container runtime with a gRPC API and daemon model. The 1.0 release adds complete lifecycle and image management in Q4 2017.
runc
An Open Container Initiative (OCI) compliant implementation of the runtime specification: a lightweight container executor with no networking, image registry, or image creation capability.
6. https://github.com/estesp/bucketbench
A Go-based framework for benchmarking container lifecycle operations (using concurrency and load) against docker, containerd (0.2.x and 1.0), and runc.

The YAML file provided via the --benchmark flag determines which container lifecycle commands to run against which container runtimes, specifying iterations and the number of concurrent threads. Results are displayed afterwards.

Usage:
  bucketbench run [flags]

Flags:
  -b, --benchmark string   YAML file with benchmark definition
  -h, --help               help for run
  -s, --skip-limit         Skip 'limit' benchmark run
  -t, --trace              Enable per-container tracing during benchmark runs

Global Flags:
  --log-level string   set the logging level (info,warn,err,debug) (default "warn")
HOW CAN WE BENCHMARK VARIOUS CONTAINER RUNTIME OPTIONS?
examples/basic.yaml
name: BasicBench
image: alpine:latest
rootfs: /home/estesp/containers/alpine
detached: true
drivers:
  - type: Docker
    threads: 5
    iterations: 15
  - type: Runc
    threads: 5
    iterations: 50
commands:
  - run
  - stop
  - remove
8. Architecture
Two key interfaces:
Driver: drives the container runtime
Bench: defines the container operations and provides results/statistics

type Driver interface {
    Type() Type
    Info() (string, error)
    Create(name, image string, detached bool, trace bool) (Container, error)
    Clean() error
    Run(ctr Container) (string, int, error)
    Stop(ctr Container) (string, int, error)
    Remove(ctr Container) (string, int, error)
    Pause(ctr Container) (string, int, error)
    Unpause(ctr Container) (string, int, error)
}

type Bench interface {
    Init(driverType driver.Type, binaryPath, imageInfo string, trace bool) error
    Validate() error
    Run(threads, iterations int) error
    Stats() []RunStatistics
    Elapsed() time.Duration
    State() State
    Type() Type
    Info() string
}
Driver implementations today support docker, containerd (1.0 via the gRPC Go client API; 0.2.x via the `ctr` binary), and runc. The framework can easily be extended to support any runtime that can implement the Driver interface.
9. Go tools: pprof, trace, block profiling.
Also useful: strace, flame graphs.
10. Discoveries
API overhead, libnetwork setup/teardown, and metadata sync/update (locking) all add to the differential from runc “bare” container start performance.
Filesystem setup is also measurable for a large number of layers, depending on the storage backend.
Network namespace creation/deletion has a significant impact under load:
● 300ms (and higher) delay in the network spin lock under multi-threaded contention
● Known issue: http://stackoverflow.com/questions/28818452/how-to-identify-performance-bottleneck-in-linux-system-call-unshareclone-newnet
11. Bucketbench: TODOs
1. Structured Output Format
   ○ JSON and/or CSV output
2. Other Driver Implementations
   ○ rkt? cri-o?
   ○ Drive via CRI versus clients?
3. Integrate with Trace/Debug Tooling
   ○ Randomized trace output (% of operations)
   ○ “Real” performance metrics/tooling?
12. Thank You!
1. Check out, critique, contribute to: http://github.com/estesp/bucketbench
2. Connect with me to ask questions, or provide your own perspective and findings, at @estesp on Twitter or estesp@gmail.com