A talk I gave at Prairie Dev Con about how we deployed applications before Kubernetes, how to reason about what Kubernetes does, and why you should use it as a default.
10. Procedure to get a new app running
1. Spec server, order, wait
2. Talk to facilities about power
3. Talk to network about a port and an IP
4. Don’t forget to ask about capacity
5. Receive gear, unbox, rack, hook up. Find bandaids for cuts from cage nuts.
6. Install OS while in DC
7. Remote in to install software
11. Problems running an app on bare metal
● Hardware failure / App tied to specific hardware
● CPU and memory under-utilization
● CPU and memory over-utilization
● Scaled at least O(N)
● App was tied to hardware lifecycle
14. For a while things were good
● Easy to provision servers
● Easy to right-size memory/CPU
● Physical hardware independent of VM
● VMs on failed hardware could be restarted on another node
16. 1 VM == 1 OS to manage
Did this just get worse?
● OS updates
● Managing OS configs
● Need to fix in place
● Run OS services
18. Value
Value is a nebulous word we use to describe things that make the lives of our end users better.
Our customers want the results of our applications, not hosting.
19. Value?
Things that add value:
● Availability
● Quality (correctness)
● Functional improvements
● Speed
● Durability
Things that don’t add value:
● Hardware
● IP addresses
● Virtual machines
● Server names
20. Our goal is to do stuff that adds value
And minimize doing that which doesn’t.
22. We were still stuck with
● Managing OS patches and daemons
● Writing startup scripts to manage applications (systemd helped here)
● Fixed CPU/Memory
● Networking, names, IP addresses
● Upgrading/deploying in place
23. It’s almost like we want a mainframe
Except one that lets us take advantage of commodity hardware.
26. Containers can be fun
● Bundle just the app with the bare minimum it needs to run
● Run it on the host OS inside its own namespaces and cgroups, so the process doesn’t see the others
● Do some funky filesystem stuff so we can ship around and use zipped-up filesystem “layers”
● More funky networking stuff so every container has its own IP
30. Important Concepts
● K8s takes what you tell it, and tries to get the cluster to that state
● You describe that state through a series of Objects
● These objects let you describe your Infrastructure
● You feed K8s these objects through an API, or by using a command line tool
● The API speaks JSON; with the CLI we typically write YAML
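The bullets above can be sketched with a minimal object. This is a hypothetical example; the names and image are illustrative:

```yaml
# A minimal Pod object, written as YAML for the CLI.
# `kubectl apply -f pod.yaml` serializes it before sending
# it to the API server, which speaks JSON.
apiVersion: v1
kind: Pod
metadata:
  name: hello          # name is illustrative
spec:
  containers:
    - name: hello
      image: nginx     # image is illustrative
```

The same object could be written directly as JSON and sent to the API; the YAML form is just easier to read and write.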
31. K8s is more like a manager than an IC
● Container stuff delegated to Container Runtime (e.g. Docker, containerd)
● Network stuff delegated to Container Network Interface (Calico, Cilium, etc)
● Cloud stuff sent to cloud provider
● Running processes to Linux
● Process isolation to namespaces
● CPU throttling to cgroups
33. The Basic Objects
Pods correspond to a single instance of a container, or what we used to call a server. *
ReplicaSets manage a group of identical pods.
Deployments manage ReplicaSets to allow for rolling upgrades.
A typical app just needs a Deployment, and K8s creates the necessary ReplicaSets/Pods.
Endpoints are a list of pods that match a label expression, aka a “selector”.
Services provide an internal IP and service discovery for a set of Endpoints.
Most apps just want a Service, which will create the Endpoints.
35. Hey Kubernetes! Give me a deployment of my app and a service to go with it
Kubernetes will:
● Schedule your app across nodes
● Put a load balancer in front
● Restart failed apps
● Handle hardware failure
● Provide safe rolling upgrades
● Run health checks
● Offer service discovery
36. Do you have any idea how much software we wrote to do that on VMs?
37. Let’s Talk About YAML
● YAML is one way to ask for Objects
● They have a form that makes them easier to grok
● No, I’m not going to go over each one; this talk is almost over
38. The anatomy of an Object
● Which API group does this belong to, and what is it?
● A name, plus labels so other things can find it
● The “spec” is the details of the Object you want managed. Here we want 3 replicas, and to manage Pods with the label app: nginx
● A Deployment creates Pods (through ReplicaSets), so what do those Pods look like?
● The Pod template has labels, and its spec describes what the Pods look like
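The annotations above describe a Deployment along these lines. A minimal sketch; the Deployment name, image, and port are illustrative:

```yaml
apiVersion: apps/v1            # which API group, and what is it
kind: Deployment
metadata:
  name: nginx-deployment       # a name, plus labels for things to find it
  labels:
    app: nginx
spec:
  replicas: 3                  # we want 3 replicas...
  selector:
    matchLabels:
      app: nginx               # ...managing Pods with the label app: nginx
  template:                    # what do those Pods look like?
    metadata:
      labels:
        app: nginx             # the Pod template has labels
    spec:                      # its spec describes the Pods
      containers:
        - name: nginx
          image: nginx:1.25    # image tag is illustrative
          ports:
            - containerPort: 80
```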
39. Kubernetes likes labels
● Pods created will be labeled app: nginx
● The underlying ReplicaSet will manage pods with the label app: nginx
● The Service says: “I’ll spray connections to any pod with this label”
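The Service side of that label matching might look like this sketch, assuming the Pods carry the label app: nginx as above (the Service name and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx        # "spray connections to any pod with this label"
  ports:
    - port: 80        # the Service's internal port
      targetPort: 80  # the port the Pods listen on
```

Kubernetes builds the Endpoints for this Service automatically from whatever Pods currently match the selector.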
40. Just play with labels for:
● Canary deploys
● Blue/green deploys
● Internal/external pools
● A/B testing
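A canary deploy, for example, can be nothing more than label arithmetic. This is a sketch under assumed label names (app and track are illustrative, not a Kubernetes convention): two Deployments produce Pods that share app: myapp but differ in a track label, and a Service selects only the shared label, so traffic splits roughly by replica count.

```yaml
# Sketch: stable Pods carry {app: myapp, track: stable} with 9 replicas,
# canary Pods carry {app: myapp, track: canary} with 1 replica.
# This Service matches both pools (~90/10 split); adjust replica
# counts to shift traffic, or add track to the selector to pin it.
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp       # matches both stable and canary Pods
  ports:
    - port: 80
```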
43. K8s as the default
If you think “it’s easier to do this myself”, I don’t think you understand the scope.
After a couple of deployables, it’s just easier.
I haven’t even talked about the cloud/on-prem abstraction you get.
K8s too heavy? Look at K3s.
44. There must be a downside
Yes, we have to learn a lot of new things
In many cases we are abstracted from what’s actually going on
There is a dizzying array of projects to “help” us.
45. So remember this
● Infrastructure is hard. There are a lot of moving parts, literally and figuratively
● Kubernetes isn’t about YAML, it’s about reusable patterns
● Think of your end state, and look at how K8s gets you there
● If you plan on more than a couple of deployables, just start with K8s.
46. Thank you!
Northfield does a lot of K8s
At Super Bowl scale
Come work with me!
https://www.northfieldit.com/
Sean Walberg
Principal Engineer
Northfield IT
sean.walberg@northfieldit.com