Google has been running everything in containers for the past 15 years, but how do we orchestrate and manage all those containers? We've built and released Kubernetes (http://kubernetes.io), an open-source system based on years of running containers internally at Google. Join us for an introduction to containers and Kubernetes, followed by a hands-on workshop building and deploying your own Kubernetes cluster with multiple front-end, database, and caching instances.
Docker containers help solve the issue of process-level reproducibility by packaging your apps and their execution environments into containers. But once you have many containers running, you need to coordinate them across a cluster of machines while keeping them healthy and making sure they can find each other. This can quickly turn into an unmanageable mess! Wouldn't it be helpful if you could declare what you wanted, and then have the cluster assign the resources to get it done, recover from failures, and scale on demand? Kubernetes is here to help!
Key takeaways
- Gentle introduction into containers: why and how
- Learn how Google manages applications using containers
- Intro to Kubernetes: managing applications and services
- Build and deploy your own multi-tier application using Kubernetes
1. Containing Container Chaos with Kubernetes
Bret McGowen
Google
@bretmcg
Carter Morgan
Google
@_askcarter
Workshop setup: http://github.com/bretmcg/kubernetes-workshop
5. ca. 2002: Shared machines
- Chroots, ulimits, and nice
- Noisy neighbors: a real problem
- Limited our ability to share

App-specific machine pools
- Good fences make good neighbors
- Inefficient and painful to manage

The fleet got larger
- Inefficiency hurts more at scale
- Share harder!
6. ca. 2006: Google developed cgroups
- Inescapable resource isolation
- Enables better sharing
- Everything we do is about isolation; namespacing is secondary
- c.f. github.com/google/lmctfy

We evolved our system, made mistakes, and learned lessons.

Docker: the time is right to share our experiences, and to learn from yours.
7. Borg - Developer View

```
job hello_world = {
  runtime = { cell = 'ic' }            // Cell (cluster) to run in
  binary = '.../hello_world_webserver' // Program to run
  args = { port = '%port%' }           // Command-line parameters
  requirements = {                     // Resource requirements
    ram = 100M
    disk = 100M
    cpu = 0.1
  }
  replicas = 5                         // Number of tasks
}
```
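For comparison, here is a rough Kubernetes analogue of that Borg job, sketched as a Deployment manifest. The mapping is approximate, not one-to-one, and the names and image path are illustrative, not from the workshop:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 5                          # Borg: replicas = 5
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-world-webserver    # Borg: binary = hello_world_webserver
        image: example/hello-world-webserver:1.0  # illustrative image name
        resources:
          requests:                    # Borg: requirements block
            memory: 100Mi              # ram = 100M
            cpu: 100m                  # cpu = 0.1
            ephemeral-storage: 100Mi   # disk = 100M
```

Borg's cell selection has no direct field here; in Kubernetes you would pick a cluster when you run `kubectl apply`.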
8. Borg

[Diagram: web browsers and the borgcfg tool (fed by a config file) talk to a replicated BorgMaster, which exposes UI shards and link shards and keeps state in a persistent Paxos store; a scheduler places work onto Borglets running on each machine, which launch the binaries.]

What just happened?
16. @kubernetesio @bretmcg @_askcarter
Old Way: Virtual Machines
- Some isolation
- Inefficient
- Still highly coupled to the guest OS
- Hard to manage

[Diagram: each VM stacks an app and its libs on its own guest kernel.]
18. But what ARE they?
Containers share the same operating system kernel.
Container images are stateless and contain all dependencies:
▪ static, portable binaries
▪ constructed from layered filesystems
Containers provide isolation (from each other and from the host) of:
▪ Resources (CPU, RAM, disk, etc.)
▪ Users
▪ Filesystem
▪ Network
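The resource and user isolation described above is configurable per container. A minimal Pod spec sketch, with illustrative names and values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: isolated-demo
spec:
  securityContext:
    runAsUser: 1000           # user isolation: don't run as root
  containers:
  - name: app
    image: example/app:1.0    # illustrative image
    resources:
      requests:               # guaranteed share of node resources
        cpu: 250m
        memory: 64Mi
      limits:                 # hard ceiling, enforced by the kernel via cgroups
        cpu: 500m
        memory: 128Mi
```

Filesystem isolation comes from the container image itself, and network isolation from each Pod getting its own IP and network namespace.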
19. Why containers?
• Performance
• Repeatability
• Isolation
• Quality of service
• Accounting
• Portability

A fundamentally different way of managing applications: late binding vs. early binding.

Images by Connie Zhou
24. Now that we have containers...
Isolation: Keep jobs from interfering with each other
Scheduling: Where should my job be run?
Lifecycle: Keep my job running
Discovery: Where is my job now?
Constituency: Who is part of my job?
Scale-up: Making my jobs bigger or smaller
Auth{n,z}: Who can do things to my job?
Monitoring: What’s happening with my job?
Health: How is my job feeling?
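Several of these needs map directly onto fields of Kubernetes manifests. A sketch (names and ports are illustrative): labels answer constituency, a Service answers discovery, a replica count answers scale-up, and a liveness probe answers health.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello              # Discovery: a stable name and virtual IP
spec:
  selector:
    app: hello             # Constituency: members are whatever carries this label
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 3              # Scale-up: change this number (or attach an autoscaler)
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: example/hello:1.0   # illustrative image
        ports:
        - containerPort: 8080
        livenessProbe:     # Health: restart the container when this check fails
          httpGet:
            path: /healthz
            port: 8080
```

Scheduling and lifecycle are handled by the cluster itself, and isolation by the per-container resource requests and limits shown earlier.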
25. Kubernetes
Manage applications, not machines
- Open source container orchestrator
- Supports multiple cloud and bare-metal environments
- Inspired and informed by Google’s experiences and internal systems
26. Design principles
Declarative > imperative: State your desired results, let the system actuate
Control loops: Observe, rectify, repeat
Simple > Complex: Try to do as little as possible
Modularity: Components, interfaces, & plugins
Legacy compatible: Requiring apps to change is a non-starter
Network-centric: IP addresses are cheap
No grouping: Labels are the only groups
Bulk > hand-crafted: Manage your workload in bulk
Open > Closed: Open Source, standards, REST, JSON, etc.
71. Workload Portability
Goal: Write once, run anywhere*
Don’t force apps to know about concepts that are cloud-provider-specific.
Examples of this:
● Network model
● Ingress
● Service load-balancers
● PersistentVolumes
* approximately
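PersistentVolumes illustrate the portability goal well: the app asks for storage in provider-neutral terms, and the cluster maps the claim onto whatever its provider offers. A sketch (name and size are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi   # the app never names GCE PD, EBS, or NFS directly
```

A Pod then mounts the claim by name, so the same manifest works unchanged across clouds and bare metal.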
72. Community
- Top 0.01% of all GitHub projects
- 1200+ external projects based on k8s
- 690+ unique contributors

[Logo clouds: companies contributing to and companies using Kubernetes]
85. Deployments
Drive current state towards desired state

[Diagram: a Deployment with app: hello and replicas: 3 keeps three hello Pods running across Node1, Node2, and Node3.]