Jorge Salamero discusses 15 failure points to monitor in Kubernetes:
1) Application metrics such as connections, response time, and errors
2) Node availability and resource usage of CPU, memory, and disk
3) Ensuring deployments are running the desired number of replicas and pods are not stuck or restarting
4) Monitoring pod status, restarts, and the health of the Kubernetes API server and services like KubeDNS
5) Validating Kubernetes configuration changes with tools that monitor deployment commands
5. The holy service metrics
- KPI / biz metrics / synthetic monitoring / user metrics
- Google SRE book: “The Four Golden Signals”: Latency, Traffic, Errors, Saturation
6. USE method
- Utilization (how busy the resource is; close to 100% means a bottleneck)
- Saturation (amount of work waiting in the queue)
- Errors
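The USE metrics map onto concrete PromQL queries against node_exporter. A rough sketch, assuming current node_exporter metric names (node_cpu_seconds_total, node_load1, node_network_receive_errs_total):

```
# Utilization: fraction of CPU time spent non-idle, per node
1 - avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m]))

# Saturation: 1-minute run queue length relative to core count
node_load1 / count by (instance) (node_cpu_seconds_total{mode="idle"})

# Errors: e.g. network receive errors per second
rate(node_network_receive_errs_total[5m])
```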
14. Kubernetes metadata: labels
- Pod: app: shopping, tier: api
- Pod: app: shopping, tier: db
- Pod: app: social, tier: api, role: search
- Pod: app: social, tier: api, role: search
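These labels surface as metric dimensions in Prometheus, so any metric can be sliced per service or tier. A sketch, where http_requests_total is a hypothetical application metric and app/tier come from the pod labels above:

```
# Requests per second, broken down by app and tier
sum by (app, tier) (rate(http_requests_total[5m]))

# Only the shopping API pods
sum(rate(http_requests_total{app="shopping", tier="api"}[5m]))
```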
17. Health vs state monitoring
- Health:
  - CPU, memory, disk
  - connections, response time, errors
18. Health vs state monitoring
- State (orchestration):
  - Are containers up and running properly?
19. Health vs state monitoring
- kube-state-metrics: calculates new metrics based on the state of Kubernetes resources
https://github.com/kubernetes/kube-state-metrics
https://sysdig.com/blog/introducing-kube-state-metrics/
20. Container scheduling
- Need to deploy a container: given the requirements, where can we run it?
  (let’s ignore affinity, taints and tolerations for now:
  https://sysdig.com/blog/kubernetes-scheduler/)
- capacity planning
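The “requirements” the scheduler works with are the resource requests in the pod spec, and capacity planning boils down to summing those requests against node capacity. A minimal, hypothetical container spec fragment:

```yaml
# Hypothetical container spec fragment: the scheduler only places the pod on
# a node with at least this much unreserved CPU and memory (the requests);
# the limits cap what the container may actually consume.
resources:
  requests:
    cpu: 250m       # a quarter of a core
    memory: 256Mi
  limits:
    cpu: 500m
    memory: 512Mi
```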
21. 4. node availability
Based on the host or the kubelet component status:

A single node not Ready:
kube_node_status_condition{condition="Ready",status="true"} == 0

More than one node not Ready, and more than 20% of all nodes not Ready:
count(kube_node_status_condition{condition="Ready",status="true"} == 0) > 1 and
(count(kube_node_status_condition{condition="Ready",status="true"} == 0) /
count(kube_node_status_condition{condition="Ready",status="true"})) > 0.2

More than 3% of kubelets down:
count(up{job="kubelet"} == 0) / count(up{job="kubelet"}) * 100 > 3

kube_node_status_condition covers: kube_node_status_ready,
kube_node_status_out_of_disk, kube_node_status_memory_pressure,
kube_node_status_disk_pressure, and kube_node_status_network_unavailable
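The kubelet query above can be wrapped as an alert in the same pre-2.0 Prometheus rule syntax the deck uses for pod restarts; the alert name and the FOR duration here are illustrative:

```
ALERT KubeletDown
  IF count(up{job="kubelet"} == 0) / count(up{job="kubelet"}) * 100 > 3
  FOR 10m
  LABELS { severity="critical" }
  ANNOTATIONS {
    summary = "More than 3% of kubelets are down",
  }
```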
26. 7. disk resources
Predict whether the disk will fill up within the next 2 hours:
predict_linear(node_filesystem_free[30m], 3600 * 2) < 0
kube_node_status_condition: kube_node_status_out_of_disk
But within containers this is still WIP, at least as of Kubernetes 1.8:
container_fs_* doesn’t work with PVs
https://github.com/kubernetes/kubernetes/pull/59170
https://github.com/kubernetes/kubernetes/pull/51553
https://kubernetes.io/docs/concepts/cluster-administration/controller-metrics/
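predict_linear() fits a least-squares line over the samples in the range and extrapolates it forward. A minimal Python sketch of that math, with hypothetical sample data (it ignores Prometheus details such as staleness handling and extrapolation to the range boundaries):

```python
def predict_linear(samples, horizon):
    """samples: list of (timestamp_seconds, value); horizon: seconds ahead.
    Returns the least-squares line's value at the newest sample time + horizon."""
    n = len(samples)
    ts = [t for t, _ in samples]
    vs = [v for _, v in samples]
    mean_t = sum(ts) / n
    mean_v = sum(vs) / n
    slope = (sum((t - mean_t) * (v - mean_v) for t, v in samples)
             / sum((t - mean_t) ** 2 for t in ts))
    intercept = mean_v - slope * mean_t
    return intercept + slope * (ts[-1] + horizon)

# Hypothetical data: a disk losing ~1 MiB/min with 100 MiB free at the start,
# sampled once per minute over 30 minutes. It empties in ~100 min, so the
# 2-hour prediction goes negative and the alert above would fire.
samples = [(60 * i, 100 * 2**20 - i * 2**20) for i in range(31)]
print(predict_linear(samples, 3600 * 2) < 0)  # → True
```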
33. Liveness probes
To know when to restart a container:
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
    httpHeaders:
    - name: X-Custom-Header
      value: Awesome
  initialDelaySeconds: 3
  periodSeconds: 3
34. Readiness probes
To know when a container is ready to start accepting traffic:
readinessProbe:
  exec:
    command:
    - cat
    - /tmp/healthy
  initialDelaySeconds: 5
  periodSeconds: 5
35. 11. pod status
kube_pod_status_phase: Pending|Running|Succeeded|Failed|Unknown
kube_pod_status_ready
kube_pod_status_scheduled
kube_pod_container_status_waiting
kube_pod_container_status_running
kube_pod_container_status_terminated
kube_pod_container_status_ready
36. 12. pod restarts
You can look at this as a metric or as an event. This alert fires when a pod restarts more than once every 5 minutes, sustained for an hour:
ALERT PodRestartingTooMuch
  IF rate(k8s_pod_status_restartCount[1m]) > 1/(5*60)
  FOR 1h
  LABELS { severity="warning" }
  ANNOTATIONS {
    summary = "Pod {{$labels.namespace}}/{{$labels.name}} restarting too much.",
    description = "Pod {{$labels.namespace}}/{{$labels.name}} restarting too much.",
  }
41. 14. KubeDNS / Istio
95th percentile of KubeDNS probe latency above 1 second (1000 ms):
histogram_quantile(0.95,
  sum(rate(kubedns_probe_kubedns_latency_ms_bucket[1m])) by (le,
  kubernetes_pod_name)) > 1000
All export native metrics in Prometheus format, just scrape them!
https://sysdig.com/blog/monitor-istio/
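histogram_quantile() finds the cumulative bucket that contains the target rank and linearly interpolates inside it. A small Python sketch with made-up latency buckets (real Prometheus operates on per-second rates of the _bucket series, not raw counts):

```python
def histogram_quantile(q, buckets):
    """buckets: sorted (upper_bound_ms, cumulative_count) pairs; last bound +Inf."""
    total = buckets[-1][1]
    rank = q * total
    prev_bound, prev_count = 0.0, 0
    for bound, count in buckets:
        if count >= rank:
            if bound == float("inf"):
                return prev_bound  # can't interpolate into the +Inf bucket
            # linear interpolation within this bucket
            return prev_bound + (bound - prev_bound) * (rank - prev_count) / (count - prev_count)
        prev_bound, prev_count = bound, count

# Hypothetical cumulative counts: 50 requests under 10 ms, 90 under 100 ms, ...
buckets = [(1.0, 0), (10.0, 50), (100.0, 90), (1000.0, 99), (float("inf"), 100)]
print(histogram_quantile(0.95, buckets))  # ≈ 600.0 ms, so the > 1000 alert would not fire
```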
42. What are we deploying?
- CI/CD and commits
- Manual deploys
You need to validate what you tell Kubernetes, too!
43. 15. monitor your commands
kubeval: validates YAML and JSON config files
https://github.com/garethr/kubeval
kubediff: shows differences between running state and version-controlled configuration
https://github.com/weaveworks/kubediff
Configuration reconciliation discussion:
https://github.com/kubernetes/kubernetes/issues/1702
Although this is getting automated too:
https://sysdig.com/blog/kubernetes-scaler/
44. Recap
1. connections per second
2. response time
3. errors
4. node availability
5. CPU resources
6. memory resources
7. disk and external resources