Kubernetes & OpenStack
2019.12
윤형기 hky@openwith.net
Agenda
Day    Module                     Topics
Day 1  Intro                      Introductions; course overview
       Background                 Containers & orchestration; YAML files
       K8S overview
       K8S hands-on (1)           Kubernetes overview and architecture; Pods; Replication
Day 2  K8S hands-on (2)           Service delivery; Volumes
       K8S hands-on (3)           ConfigMap; Deployment
Day 3  K8S wrap-up                K8S architecture / internals
       OpenStack intro and tour   OpenStack overview and architecture; installation and usage (tour)
Day 4  OpenStack Components       Core components (1) (Nova, Neutron); Core components (2) (Glance, Cinder, Keystone)
       Extensions                 Containers & cloud orchestration
       Wrap-up                    Review & closing
Background
Containers and Orchestration
Container Review
• Key concepts
– Process isolation using Linux namespaces
– Mount (mnt), Process ID (pid)
– Network (net), Inter-process communication (ipc)
– UTS, User ID (user)
– Limited resource allocation using cgroups
– Image layers
VMs vs. containers
• The Docker container platform
– Docker concepts
– Images
– Registries
– Containers
– Building and distributing Docker images
• Docker Engine and K8s
dockerd
→ containerd
→ containerd-shim
→ "sleep 60" (the desired process in the container)
How it all works in Kubernetes
• Container Runtime Interface (CRI)
• CNI (Container Network Interface)
• gRPC (gRPC Remote Procedure Calls)
Orchestration
• Concept
– "the automated configuration, coordination, and management of
computer systems and software" (Wikipedia)
• Key tasks of an orchestrator
– Reconciling the desired state
– Replicated and global services
– Service discovery
– Routing
– Load balancing
– Scaling
– Self-healing → zero downtime
– Affinity and location awareness
– Security
• Secure communication and cryptographic node identity
• Secure networks and network policies
• RBAC: Secrets, content trust, reverse uptime
– Introspection
YAML files
• Overview
– A human-readable data-serialization language
– A superset of JSON
– Outline indentation and whitespace
– The basic building block of YAML is the key-value pair
– https://yaml.org/spec/1.2/spec.html
• Basic Rules
• YAML is case sensitive.
• YAML does not allow use of tabs.
• Data Types
– YAML excels at working with mappings (hashes / dictionaries),
sequences (arrays / lists), and scalars (strings / numbers).
• Scalars
– Strings, numbers, a boolean property, integer (number).
– often called variables in programming.
– Most scalars are unquoted, but if you are typing a string that
uses punctuation and other elements that can be confused with
YAML syntax (dashes, colons, etc.) you may want to quote this
data using single ' or double " quotation marks.
• Double quotation marks allow you to use escape sequences to represent
ASCII and Unicode characters.
• Only 2 types of structures in YAML
– Lists
• literally a sequence of objects. For example:
• members of the list can also be maps:
– Maps
• name-value pairs
integer: 25
string: "25"
float: 25.0
boolean: Yes
---
apiVersion: v1
kind: Pod
metadata:
  name: rss-site
  labels:
    app: web
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "rss-site",
    "labels": {
      "app": "web"
    }
  }
}
args:
- sleep
- "1000"
- message
{
  "args": ["sleep", "1000", "message"]
}
---
apiVersion: v1
kind: Pod
metadata:
  name: rss-site
  labels:
    app: web
spec:
  containers:
  - name: front-end
    image: nginx
    ports:
    - containerPort: 80
  - name: rss-reader
    image: nickchase/rss-php-nginx:v1
    ports:
    - containerPort: 88
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "rss-site",
    "labels": {
      "app": "web"
    }
  },
  "spec": {
    "containers": [{
      "name": "front-end",
      "image": "nginx",
      "ports": [{
        "containerPort": 80
      }]
    },
    {
      "name": "rss-reader",
      "image": "nickchase/rss-php-nginx:v1",
      "ports": [{
        "containerPort": 88
      }]
    }]
  }
}
Pod in yaml
• YAML in Kubernetes
Kubernetes
Kubernetes Overview
Kubernetes overview
• Motivation
– From monolithic apps to microservices
• Splitting apps into microservices
– Consistent environments for applications
– DevOps (CI/CD) and NoOps
• Google's Borg
https://storage.googleapis.com/pub-tools-public-publication-data/pdf/43438.pdf
• Lineage
• Kubernetes from the top of a mountain
– Helps developers focus on the core app features
– Helps ops teams achieve better resource utilization
K8s exposes the whole datacenter as a single deployment platform.
• Kubernetes architecture
• Head Node
– API server
– Scheduler
– Controller manager
– etcd
– Sometimes:
• Kubelet
• Docker
• Worker Node
– Kubelet
– Kube-proxy
– docker
• k8s cluster is composed of nodes, split into 2 types:
– master node hosts the Kubernetes Control Plane
• controls and manages whole Kubernetes system
– Worker nodes run the actual applications you deploy
Components of a k8s cluster
– Control Plane
• Components:
– Kubernetes API Server
– Scheduler: schedules your apps (assigns a worker node to each
deployable component of your application)
– Controller Manager, which performs cluster-level functions, such as
replicating components, keeping track of worker nodes, handling node
failures, and so on
– etcd = a distributed data store that persistently stores cluster
configuration.
– (Worker) Nodes
• = machines that run containerized applications.
– Docker, rkt, or another container runtime, which runs your containers
– Kubelet, which talks to the API server and manages containers on its
node
– Kubernetes Service Proxy (kube-proxy), which load-balances network
traffic between application components
• Running applications on Kubernetes
– Running containers from a description
SwarmKit       Kubernetes      Description
Swarm          Cluster         Set of servers/nodes managed by the respective orchestrator.
Node           Cluster member  Single host (physical or virtual) which is a member of the swarm/cluster.
Manager node   Master          Node managing the swarm/cluster. This is the control plane.
Worker node    Node            Member of the swarm/cluster running application workload.
Container      Container**     Instance of a container image running on a node. (In a Kubernetes cluster, a container cannot run on its own, only inside a pod.)
Task           Pod             Instance of a service (Swarm) or ReplicaSet (Kubernetes) running on a node. A task manages a single container, while a Pod contains one to many containers that all share the same network namespace.
• Comparison of SwarmKit and Kubernetes
(continued below)
SwarmKit       Kubernetes      Description
Service        ReplicaSet      Defines and reconciles the desired state of an application service consisting of multiple instances.
Service        Deployment      A Deployment is a ReplicaSet augmented with rolling update and rollback capabilities.
Routing Mesh   Service         The Swarm Routing Mesh provides L4 routing and load balancing using IPVS.
Stack          Stack **        Definition of an application consisting of multiple (Swarm) services.
Network        Network policy  Swarm software-defined networks (SDNs) are used to firewall containers. Kubernetes only defines a single flat network.
• Deploying and updating applications with Kubernetes
– Deploy a first application
• Deploy the web component
• Deploy the database
• Streamline the deployment
– Zero-downtime deployments
• Rolling updates
• Blue–green deployments
– Kubernetes secrets
• Manually defining secrets
• Creating secrets with kubectl
• Using secrets in a pod
• Secret values in environment variables
Environment setup
• Minikube vs. multi-node
• Installation steps (kubeadm)
Docker and Kubernetes: a first taste
Building and running a container image
• Docker
– Hello World container
• Node.js app example
– Node.js app
– Dockerfile
– Build the image
• Image layers
– Run the container
– Inspect the container
– Stop & remove the container
– Push to a registry
$ docker build -t mykub .
$ docker inspect mykub-container
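A minimal sketch of the Dockerfile behind the docker build command above; the app.js file name and the node base image tag are assumptions for illustration:
# Minimal sketch (app.js and the node:12 base image are assumptions)
FROM node:12
ADD app.js /app.js
ENTRYPOINT ["node", "app.js"]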
Kubernetes cluster
• Minikube
– Local single-node Kubernetes cluster
– Running a Kubernetes cluster with Minikube
– Installing the Kubernetes client (kubectl)
• Verifying the installation
$ minikube start
Starting local Kubernetes cluster...
Starting VM...
SSH-ing files into VM...
...
Kubectl is now configured to use the cluster.
• Kubernetes clusters on a public cloud
– (GKE example)
• Procedure: …
• Creating a 3-node Kubernetes cluster
$ gcloud container clusters create mykub --num-nodes 3
--machine-type f1-micro
• Getting ready to use kubectl
– Creating an alias
– Configuring tab completion for kubectl
alias k=kubectl
Running an app on Kubernetes
• Deploying Node.js app
• Pods
– = group of one or more tightly related containers that always
run together on the same worker node and in the same Linux
namespace(s).
– Each pod is like a separate logical machine with its own IP,
hostname, processes, and so on, running a single application.
$ kubectl run mykub --image=hk/mykub --port=8080 --generator=run/v1
replicationcontroller "mykub" created
• Web application
– Exposed through a Service object.
• LoadBalancer-type service → external load balancer
– → connect to the pod through the load balancer's public IP.
• Creating the Service object
– Checking the Service via its external IP
• ReplicationController, Pod, and Service
– Creating a ReplicationController with the kubectl run command
• The ReplicationController creates the actual Pod object.
• Why Services are needed and what they do
– Pods are ephemeral → ever-changing pod IP addresses
– Services expose multiple pods at a single, constant IP:port pair.
• The IP is static for the lifetime of the service.
• Horizontal scaling
– Inspection
$ kubectl scale rc abc --replicas=3
$ kubectl describe pod abc
• Kubernetes dashboard
Pods
Creating pods
Using labels, annotations, and
namespaces
Terminating pods
Key topics
• Creating pods from YAML or JSON descriptors
• Using labels
– Listing subsets of pods through label selectors
– Controlling pod scheduling with labels and selectors
• Annotations
• Grouping resources with namespaces
• Terminating pods
Pods overview
Figure 3.1. All containers of a pod run on the same node.
A pod never spans two nodes.
• Why pods are needed (their role)
– Multiple containers
• Containers are designed to run a single process per container.
– A pod with multiple containers runs on a single worker node.
• Run closely related processes together and give them the same environment.
• Partial isolation between containers of the same pod
– Because all containers of a pod run under the same Network and
UTS namespaces (Linux namespaces), they all share the same hostname and NICs.
– All containers of a pod also run under the same IPC namespace.
• Containers share the same IP and port space
– Because containers in a pod run in the same Network namespace, they
share the same IP address and port space.
– Flat inter-pod network
• All pods in a k8s cluster reside in a single flat, shared network
address space → no NAT gateways between them.
• Deciding how to distribute containers across pods
– Split multi-tier apps into multiple pods
– Split into multiple pods to enable individual scaling
– Run containers in separate pods unless a specific reason
requires otherwise
Creating a pod from a YAML descriptor
• YAML descriptor of a pod
– Pod definition – 3 sections in Kubernetes resources:
• Metadata
• Spec
• Status
• Hands-on: creating a pod from a YAML descriptor
– The YAML file (a minimal sketch follows the commands below)
– Creating the pod
– Container port
– Sending requests to the pod
$ kubectl create -f mypod.yaml
$ kubectl explain pods
$ kubectl explain pod.spec
$ kubectl get pods
$ kubectl port-forward mypod 8888:8080
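A minimal sketch of what mypod.yaml (referenced by kubectl create -f above) might contain; the image name is an assumption carried over from the earlier slides:
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: mypod
    image: hk/mykub          # sample image from the earlier docker build slides (assumption)
    ports:
    - containerPort: 8080    # matches the port-forward target above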
Using labels
• Label concept
– An arbitrary key-value pair you attach to a resource
– → label selectors (resource filtering)
• Label selector
– Whether the resource
• Contains (or doesn’t contain) a label with a certain key
• Contains a label with a certain key and value
• Contains a label with a certain key, but with a value not equal to
the one you specify
– Label selectors:
• creation_method!=manual
• env in (prod,devel)
• env notin (prod,devel)
• Classifying worker nodes with labels
– Scheduling pods to specific nodes (see the sketch below)
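A minimal sketch of node labeling and node-selector scheduling; the label key gpu=true and the pod name are illustrative assumptions:
$ kubectl label node node1 gpu=true       # attach a label to a worker node
$ kubectl get nodes -l gpu=true           # list nodes via a label selector
apiVersion: v1
kind: Pod
metadata:
  name: mykub-gpu
spec:
  nodeSelector:
    gpu: "true"              # schedule only onto nodes carrying this label
  containers:
  - name: mykub
    image: hk/mykub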
Annotations
• Annotations
– Key-value pairs, like labels
– Can hold much larger pieces of information
– Used for tool-building or for introducing new features
• Adding, modifying, and inspecting annotations on objects
Namespaces
• Namespace
– Kubernetes namespaces provide a scope for object names.
• (cf. Linux namespaces, which isolate processes from each other)
– Purpose: split up resources in a multi-tenant environment
• Creating a namespace
• Grouping resources with namespaces
$ kubectl get ns
apiVersion: v1
kind: Namespace
metadata:
  name: custom-namespace
Terminating pods
• Deletion
– Deleting a pod by name
– Deleting pods using label selectors
– Deleting pods by deleting the whole namespace
– Deleting (almost) all pods
$ kubectl delete po mykub-gpu
pod "mykub-gpu" deleted
Deploying managed pods: Replication
• Keeping pods healthy
• ReplicationControllers
• ReplicaSets
• DaemonSets
• Completable tasks
• Periodic scheduling
Key topics
• Keeping pods healthy
• ReplicationControllers
• ReplicaSets
• DaemonSets
• Completable tasks
• Scheduling Jobs to run periodically
Keeping pods healthy
• Liveness probes overview
– Check whether a container is still alive through liveness probes.
• The probe runs periodically; the container is restarted if the probe fails.
– Mechanisms
• HTTP GET probe
• TCP Socket probe
• Exec probe
• Using liveness probes (a minimal sketch follows below)
– What a liveness probe should check
• Check only the internals of the app
– Do not implement retry loops inside the probe
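A minimal sketch of an HTTP GET liveness probe; the image name and the /healthz path are illustrative assumptions:
apiVersion: v1
kind: Pod
metadata:
  name: mykub-liveness
spec:
  containers:
  - name: mykub
    image: hk/mykub
    livenessProbe:
      httpGet:
        path: /healthz           # endpoint checking only the app's own health
        port: 8080
      initialDelaySeconds: 15    # give the app time to start before probing
      periodSeconds: 10          # probe every 10 seconds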
ReplicationControllers
• ReplicationController
– ensures its pods are always kept running.
– Features
• makes sure a pod is always running by starting a new pod.
• When a cluster node fails, it creates replacement replicas.
• enables horizontal scaling of pods—both manual and automatic
– 3 parts of a ReplicationController
• label selector
• replica count
• pod template
– YAML definition of a ReplicationController
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-rc
spec:
  replicas: 3
  selector:
    app: mykub
  template:
    metadata:
      labels:
        app: mykub
    spec:
      containers:
      - name: my-container
        image: openwith/rc
        ports:
        - containerPort: 8080
• Changing the pod template
• Horizontally scaling pods
– Scaling up a ReplicationController
– Scaling a ReplicationController by editing its definition
• Deleting a ReplicationController
$ kubectl scale rc mykub --replicas=10
$ kubectl edit rc mykub
$ kubectl edit rc my-rc
ReplicaSets
• ReplicaSet vs. ReplicationController
• A ReplicaSet has more expressive pod selectors.
– A YAML definition of a ReplicaSet
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-rs
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: openwith/rsapp
– A ReplicaSet's more expressive label selectors
• operators:
– In
– NotIn
– Exists
– DoesNotExist
selector:
  matchExpressions:
  - key: app
    operator: In
    values:
    - openwith
DaemonSets
• DaemonSet overview
• Runs exactly one pod on each node
• Used for infrastructure-related pods that perform system-level operations
– Running pods only on specific nodes with a DaemonSet (see the sketch below)
• node selector
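A minimal DaemonSet sketch with a node selector; the disk=ssd label and the monitoring image are illustrative assumptions:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ssd-monitor
spec:
  selector:
    matchLabels:
      app: ssd-monitor
  template:
    metadata:
      labels:
        app: ssd-monitor
    spec:
      nodeSelector:
        disk: ssd              # run only on nodes labeled disk=ssd
      containers:
      - name: main
        image: openwith/ssd-monitor    # hypothetical image name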
Completable tasks
• Background
• Continuous tasks
• Completable tasks
– The Job resource (a minimal sketch follows below)
• Functional programming
• Running job pods sequentially
• Running job pods in parallel
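A minimal Job sketch; the completions/parallelism values and the batch image are illustrative assumptions:
apiVersion: batch/v1
kind: Job
metadata:
  name: batch-job
spec:
  completions: 5          # run the pod to completion 5 times in total
  parallelism: 2          # at most 2 pods run at the same time
  template:
    metadata:
      labels:
        app: batch-job
    spec:
      restartPolicy: OnFailure     # Jobs cannot use the default Always policy
      containers:
      - name: main
        image: openwith/batch-job  # same image as the CronJob example below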
Scheduling Jobs to run periodically
• Creating a CronJob
• 5 entries:
– Minute
– Hour
– Day of month
– Month
– Day of week.
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: batch-job-every-15-min
spec:
  schedule: "0,15,30,45 * * * *"
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            app: periodic-batch-job
        spec:
          restartPolicy: OnFailure
          containers:
          - name: main
            image: openwith/batch-job
Services
Key topics
• Overview
• Exposing services to external clients
• The Ingress resource
• Signaling when a pod is ready to accept connections
• Discovering individual pods with headless services
Kubernetes Services overview
• Kubernetes Service overview
– A resource you create to make a single, constant point of
entry to a group of pods providing the same service.
– Each service has an IP address and port that never change
while the service exists.
– → enables clients to discover and talk to pods
• Service types
– Examples:
• External clients need to connect to frontend pods …
• Frontend pods need to connect to a backend database ...
Creating and using Services
• Creating Services
– Creation with kubectl expose
– Creation from a YAML descriptor
apiVersion: v1
kind: Service
metadata:
  name: mykub
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: mykub
• Using the Service
– Checking the created Service
– Remotely executing commands in a running container
– Session affinity
• Redirect all requests from a particular client to the same pod
• → set the sessionAffinity property to ClientIP (see the snippet below)
– Exposing multiple ports
– Named ports
$ kubectl exec mykub-7nog1 -- curl -s http://10.111.249.153
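A minimal sketch of session affinity added to the Service spec shown above:
apiVersion: v1
kind: Service
metadata:
  name: mykub
spec:
  sessionAffinity: ClientIP    # pin each client IP to the same backend pod
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: mykub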
Service Discovery
• Service discovery through environment variables
• Service discovery through DNS
– Using the FQDN
• (In the frontend-backend example) a frontend pod can connect to the
backend DB service by opening a connection to:
backend-database.default.svc.cluster.local
Connecting to services outside the cluster
• Connecting to services living outside the cluster
– Goal: keep service load balancing and service discovery while
redirecting to external IP(s):port(s)
• Service endpoints
• An Endpoints resource sits in between
• Manually configuring service endpoints
– Create both a Service and an Endpoints resource
– Creating an alias for an external service
• → an FQDN can be used
A service backed by two external endpoints
• Exposing services to external clients
– Set the service type to NodePort
– Set the service type to LoadBalancer, an extension of the NodePort type
– Create an Ingress resource to expose multiple services through a single IP
address
• Using a NodePort service (see the sketch below)
– → makes Kubernetes reserve a port on all its nodes and
forward incoming connections to the pods that are part of the service.
– Creating and inspecting a NodePort service
• Exposing a service through an external load balancer
• Other notes
– Preventing unnecessary network hops
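A minimal NodePort Service sketch; the nodePort value 30123 is an illustrative assumption:
apiVersion: v1
kind: Service
metadata:
  name: mykub-nodeport
spec:
  type: NodePort
  ports:
  - port: 80             # cluster-internal service port
    targetPort: 8080     # container port
    nodePort: 30123      # reserved on every node (must lie in 30000-32767)
  selector:
    app: mykub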
Ingress resources
• Motivation
– Exposing services externally through an Ingress resource
– Ingresses operate at the application layer (HTTP) and provide
features such as cookie-based session affinity
• Creation and usage
$ kubectl get ingresses
$ curl http://mykub.example.com
• Exposing multiple services through a single Ingress
• Mapping different services to different paths of the same host
• Handling TLS traffic with an Ingress
spec:
  rules:
  - host: foo.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: foo
          servicePort: 80
  - host: bar.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: bar
          servicePort: 80
• Readiness probes
– Periodically invoked to check whether a pod can receive requests
– Types: the same as for liveness probes
• Exec probe
• HTTP GET probe
• TCP Socket probe
• Using readiness probes (a minimal sketch follows below)
– Adding a readiness probe to a pod
– Changing the readiness status of an existing pod
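A minimal sketch of an exec readiness probe; the /var/ready marker file is an illustrative convention:
apiVersion: v1
kind: Pod
metadata:
  name: mykub-readiness
spec:
  containers:
  - name: mykub
    image: hk/mykub
    readinessProbe:
      exec:
        command:          # the pod is marked ready only while /var/ready exists
        - ls
        - /var/ready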
Headless services
• Purpose
– Discovering individual pods through a headless service
• Creating a headless service (see the sketch below)
– Set the clusterIP field in the service spec to None
• → K8s won't assign it a cluster IP
• Discovering pods through DNS
• DNS A records
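A minimal headless Service sketch (clusterIP: None); DNS A records then resolve the service name to the individual pod IPs:
apiVersion: v1
kind: Service
metadata:
  name: mykub-headless
spec:
  clusterIP: None        # headless: no cluster IP is allocated
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: mykub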
Volumes
Key topics
• Overview
• Using volumes
– Accessing files on the worker node's filesystem
• Decoupling storage from pods
– PersistentVolumes and PersistentVolumeClaims
• Other topics
– Dynamic provisioning of PersistentVolumes
Overview
• Kubernetes volumes
– A component of a pod and thus defined in the pod's
specification, much like containers.
– A volume is available to all containers in the pod, but it must
be mounted in each container that uses it.
– Example:
• Available volume types
– emptyDir
– hostPath
– gitRepo
– nfs
– cinder, cephfs, iscsi, flocker, glusterfs, quobyte, rbd, flexVolume,
vsphere-Volume, photonPersistentDisk, scaleIO
– configMap, secret, downwardAPI
– persistentVolumeClaim
– Vendor-specific
• gcePersistentDisk (GCE Persistent Disk),
awsElasticBlockStore (AWS EBS Volume),
azureDisk (MS Azure)
• https://kubernetes.io/docs/concepts/storage/volumes/#types-of-
volumes
Using volumes
• Sharing data between containers with a volume
– Using an emptyDir volume (see the sketch below)
– Using a Git repository
• Volume initialized by checking out the contents of a Git repository.
• Sidecar containers
– Containers that augment the operation of the pod's main container.
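A minimal sketch of two containers sharing an emptyDir volume; image names and mount paths are illustrative assumptions:
apiVersion: v1
kind: Pod
metadata:
  name: fortune
spec:
  volumes:
  - name: html
    emptyDir: {}                   # created empty with the pod, deleted with the pod
  containers:
  - name: content-generator
    image: openwith/fortune        # hypothetical image writing into /var/htdocs
    volumeMounts:
    - name: html
      mountPath: /var/htdocs
  - name: web-server
    image: nginx
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
      readOnly: true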
• Accessing files on the worker node's filesystem
• hostPath volumes
Decoupling storage from pods
• PersistentVolumes and PersistentVolumeClaims
Storage activity   Kubernetes storage primitive
Provisioning       PersistentVolume
Configuring        StorageClass
Attaching          PersistentVolumeClaim
Source: https://portworx.com/tutorial-kubernetes-persistent-volumes/
• PersistentVolumeClaim
– Created in a completely separate process from creating a pod
– The PersistentVolumeClaim stays available even if the pod is rescheduled
• Usage and benefits
• Dynamic provisioning of PersistentVolumes (see the sketch below)
– Instead of an administrator pre-provisioning PersistentVolumes, a
StorageClass is defined and new PersistentVolumes are created on
demand through PersistentVolumeClaims
– Requesting a storage class in the PersistentVolumeClaim
• Creating the PVC definition
– Dynamic provisioning without specifying a storage class
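A minimal PersistentVolumeClaim sketch requesting a storage class; the class name "fast" and the size are illustrative assumptions:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mykub-pvc
spec:
  storageClassName: fast     # triggers dynamic provisioning via this class
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi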
ConfigMaps and Secrets
Key topics
• Overview
– Configuring containerized applications
• General approaches
– Passing command-line arguments
– Setting environment variables for containers
• Using ConfigMaps
• Using Secrets
Overview
• Configuring containerized applications
– Command-line arguments
– Environment variables
– Regardless of whether you use a ConfigMap to store configuration data,
you can configure your apps by
• Passing command-line arguments to containers
• Setting custom environment variables for each container
• Mounting configuration files into containers through a special
type of volume
• Passing command-line arguments to containers
– Defining the command and arguments in Docker
• ENTRYPOINT and CMD in a Dockerfile
– ENTRYPOINT defines the executable invoked when the container starts.
– CMD specifies the arguments that get passed to the ENTRYPOINT.
– shell form vs. exec form
– shell form — for example, ENTRYPOINT node app.js
– exec form — for example, ENTRYPOINT ["node", "app.js"]
– Determines whether the specified command is invoked inside a shell or not.
• Kubernetes vs. Docker
Docker       Kubernetes   Description
ENTRYPOINT   command      The executable that runs inside the container
CMD          args         The arguments passed to the executable
• Setting environment variables for a container
– Specifying environment variables in the container definition
– Referring to other environment variables in a variable's value
ConfigMaps
• ConfigMap overview
– Decoupling configuration
– Keeping config in a separate, standalone object
– Moves the configuration out of the pod description.
• Creating a ConfigMap
– The kubectl create configmap command
• Creating a ConfigMap entry from the contents of a file
• Creating a ConfigMap from files in a directory
• Combining different options
– Passing a ConfigMap entry to a container as an environment variable (see the sketch below)
$ kubectl create configmap fortune-config --from-literal=sleep-interval=25
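A minimal sketch of consuming the fortune-config ConfigMap created above as an environment variable; the container image is an assumption:
apiVersion: v1
kind: Pod
metadata:
  name: fortune-env-from-configmap
spec:
  containers:
  - name: main
    image: openwith/fortune          # hypothetical image reading INTERVAL
    env:
    - name: INTERVAL
      valueFrom:
        configMapKeyRef:
          name: fortune-config       # the ConfigMap created with --from-literal above
          key: sleep-interval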
Secrets
• Overview
– Pass sensitive data to containers
– Key-value pairs
• Pass Secret entries to the container as environment variables
• Expose Secret entries as files in a volume
– On worker nodes, Secrets are kept in memory (tmpfs) only, never written to disk.
– etcd can store Secrets in encrypted form (encryption at rest).
• ConfigMaps vs. Secrets
• Using Secrets for binary data
– Base64 encoding
– The default token Secret
– Creating a Secret (see the sketch below)
• Serving HTTPS traffic
• Generating the certificate and private key files
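A sketch of generating a self-signed certificate and creating a Secret from it; file and secret names are illustrative assumptions:
$ openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout https.key -out https.cert -subj /CN=mykub.example.com
$ kubectl create secret generic fortune-https \
    --from-file=https.key --from-file=https.cert
$ kubectl describe secret fortune-https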
Deployments
Key topics
• Updating applications running in pods
• Using a ReplicationController
– Automatic rolling updates
• Declarative updates with a Deployment
Overview
• Overview
– Updating an application running in pods
– Two ways
• Delete all existing pods first, then start new ones.
• Start new ones → delete the old ones.
– Either by adding
all new pods and then deleting all the old ones
or
sequentially, by adding new pods and removing old ones gradually
• Deleting all existing pods first, then starting new ones
• Spinning up new pods, then deleting the old ones
– → a blue-green deployment
– Rolling updates
ReplicationController
• Overview
– Automatic rolling updates using a ReplicationController
• Process
– Run the initial version of the app
– Perform the rolling update with kubectl
The Service redirects requests to both old and new pods
during the rolling update.
Deployments
• Background
– Limitations of kubectl rolling-update
• (i) k8s modifies objects you created
• (ii) you have to explicitly tell the kubectl client to perform …
• (iii) it is imperative
• Deployment
– = a higher-level resource for deploying and updating applications
– → declarative updates
• instead of doing it through a ReplicationController or a
ReplicaSet, which are both considered lower-level concepts.
• Creating a Deployment (a minimal manifest sketch follows below)
– The Deployment manifest
– Creating the Deployment resource
• First delete the running ReplicationController and its pods
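A minimal Deployment manifest sketch; the image tag and replica count are illustrative assumptions:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mykub
spec:
  replicas: 3
  selector:
    matchLabels:
      app: mykub
  strategy:
    type: RollingUpdate          # default strategy; Recreate is the alternative
  template:
    metadata:
      labels:
        app: mykub
    spec:
      containers:
      - name: mykub
        image: hk/mykub:v1       # updating this tag triggers a rolling update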
• Updating a Deployment
– Deployment strategies
• Recreate strategy
• RollingUpdate strategy
– Triggering a rolling update
• Ways to modify Deployments and other resources
Method             Description
kubectl edit       Opens the object's manifest in the default editor.
kubectl patch      Modifies individual properties of the object.
                   e.g. kubectl patch deployment ~~
kubectl apply      Modifies the object by applying property values from a YAML or JSON file.
kubectl replace    Replaces the object with a new one from a YAML/JSON file.
kubectl set image  Changes the container image defined in a Pod, ReplicationController,
                   Deployment, DaemonSet, Job, or ReplicaSet.
• Rolling back a Deployment (see the commands below)
• Undoing a rollout
• Rollout history
• Pausing the rollout process
– Pausing the rollout
– Resuming the rollout
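A sketch of the rollback-related kubectl commands, reusing the mykub Deployment name from the manifest above as an assumption:
$ kubectl rollout status deployment mykub     # watch the progress of a rollout
$ kubectl rollout undo deployment mykub       # roll back to the previous revision
$ kubectl rollout history deployment mykub    # list recorded revisions
$ kubectl rollout undo deployment mykub --to-revision=1
$ kubectl rollout pause deployment mykub
$ kubectl rollout resume deployment mykub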
• Tuning the rollout
– Rollout rate (see the snippet below)
Property        What it does
maxSurge        How many pod instances you allow to exist above the
                desired replica count configured on the Deployment.
                Default: 25%
maxUnavailable  Determines how many pod instances can be unavailable
                relative to the desired replica count during the update.
                Default: 25%
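A minimal sketch of setting these two properties in a Deployment spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1            # at most 1 extra pod above the desired count
      maxUnavailable: 0      # never drop below the desired count during the update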
Kubernetes Wrap-up
Kubernetes Components
• Master Components
– Provide the cluster's control plane.
– Make global decisions about the cluster (e.g. scheduling), and
detect and respond to cluster events (e.g. starting up a
new pod when a replication controller's replicas field is
unsatisfied).
– kube-apiserver
– etcd
– kube-scheduler
– kube-controller-manager
– cloud-controller-manager
• kube-apiserver
– Component on the master that exposes the Kubernetes API. It
is the front-end for the Kubernetes control plane.
– It is designed to scale horizontally.
• etcd
– Consistent and highly-available key value store used as
Kubernetes’ backing store for all cluster data.
– If your Kubernetes cluster uses etcd as its backing store, make
sure you have a backup plan for that data.
– You can find in-depth information about etcd in the official
documentation.
• kube-scheduler
– Component on the master that watches newly created pods
that have no node assigned, and selects a node for them to
run on.
– Factors taken into account for scheduling decisions include
individual and collective resource requirements,
hardware/software/policy constraints, affinity and anti-affinity
specifications, data locality, inter-workload interference and
deadlines.
• kube-controller-manager
– Component on the master that runs controllers .
– Logically, each controller is a separate process, but to reduce
complexity, they are all compiled into a single binary and run
in a single process.
• Node Controller: Responsible for noticing and responding when
nodes go down.
• Replication Controller: Responsible for maintaining the correct
number of pods for every replication controller object in the
system.
• Endpoints Controller: Populates the Endpoints object (that is,
joins Services & Pods).
• Service Account & Token Controllers: Create default accounts
and API access tokens for new namespaces.
• cloud-controller-manager
– cloud-controller-manager runs controllers that interact with
the underlying cloud providers.
– cloud-controller-manager runs cloud-provider-specific
controller loops only. You must disable these controller loops
in the kube-controller-manager.
• Node Controller: For checking the cloud provider to determine if
a node has been deleted in the cloud after it stops responding
• Route Controller: For setting up routes in the underlying cloud
infrastructure
• Service Controller: For creating, updating and deleting cloud
provider load balancers
• Volume Controller: For creating, attaching, and mounting
volumes, and interacting with the cloud provider to orchestrate
volumes
• Node Components
– Run on every node and provide the Kubernetes runtime environment
– kubelet
• An agent that runs on each node in the cluster. It makes sure
that containers are running in a pod.
– kube-proxy
• kube-proxy is a network proxy that runs on each node in the
cluster.
– Container Runtime
• the software that is responsible for running containers.
• Kubernetes supports several container runtimes: Docker,
containerd, cri-o, rktlet and any implementation of the
Kubernetes CRI (Container Runtime Interface)
• Addons
– DNS
• While the other addons are not strictly required, all Kubernetes
clusters should have cluster DNS, as many examples rely on it.
• Cluster DNS
– Web UI (Dashboard)
• a general purpose, web-based UI for Kubernetes clusters.
– Container Resource Monitoring
• records generic time-series metrics about containers in a central
database, and provides a UI for browsing that data.
– Cluster-level Logging
• is responsible for saving container logs to a central log store with
search/browsing interface.
• Pods
– Comparing Docker container and Kubernetes pod networking
– Sharing the network namespace
– Pod life cycle
– Pod specification
– Pods and volumes
• Kubernetes API
– Kubernetes itself is decomposed into multiple components,
which interact through its API.
• API changes
• OpenAPI and Swagger definitions
• API versioning
• API groups
• Enabling API groups
• Enabling resources in the groups
• Self-Registration of Nodes
– When kubelet flag --register-node is true (default), the
kubelet will attempt to register itself with the API server.
– For self-registration, kubelet is started with options:
--kubeconfig
--cloud-provider
--register-node
--register-with-taints
--node-labels
--node-status-update-frequency
• Cluster nodes – On each node, 3 services need to run:
– Kubelet:
• = primary node agent.
• uses pod specifications to make sure all of the containers of the
corresponding pods are running and healthy.
• YAML or JSON files
– Container runtime:
• Manages and runs the individual containers of a pod.
• rkt or CRI-O can also be used.
– kube-proxy:
• runs as a daemon and is a simple network proxy and load
balancer for all application services running on that particular
node.
Kubernetes support in Docker
• Pods
– = atomic unit of deployment in Kubernetes.
– = abstraction of one or many co-located containers that share
the same Kernel namespaces, such as the network namespace.
– No equivalent exists in the Docker SwarmKit.
• Comparing Docker container and Kubernetes pod networking
• Communication between containers
Containers in Pod sharing network namespace
Containers in pods communicate via localhost
• Sharing the network namespace
• Pod life cycle
• Pod specification
• Pods and volumes
• Kubernetes ReplicaSet
• Defines and manages a collection of identical pods that are
running on different cluster nodes.
– A ReplicaSet defines which container images are used by the
containers running inside a pod and how many instances of the pod
will run in the cluster → the desired state.
– ReplicaSet specification
– Self-healing
• Kubernetes service
Kubernetes service providing stable endpoints
to clients
Service discovery
• Context-based routing
Context-based routing using a Kubernetes
ingress controller
Kubernetes internals
• Kubernetes architecture
• How controllers cooperate
• Running pod
• Inter-pod networking
Architecture
• Components of the Control Plane
• etcd distributed persistent storage
• API server
• Scheduler
• Controller Manager
• Components running on the worker nodes
• Kubelet
• Kubernetes Service Proxy (kube-proxy)
• Container Runtime (Docker, rkt, or others)
• Add-on components
• Kubernetes DNS server
• Dashboard
• Ingress controller
• Heapster (cluster-wide resource usage monitoring)
• CNI network plugin
• Distributed nature of Kubernetes components
– Checking the status of the Control Plane components
$ kubectl get componentstatuses
• etcd
– optimistic concurrency control
– How resources are stored in etcd
– Ensuring consistency when etcd is clustered
$ etcdctl ls /registry
• API server
– Roles
• Authenticating the client with authentication plugins
• Authorizing the client with authorization plugins
• Validating and/or Modifying resource in the request with
admission control plugins
• Scheduler
– scheduling algorithm
• 2 parts in selection of a node
– Filtering the list of all nodes to obtain a list of acceptable nodes the
pod can be scheduled to.
– Prioritizing the acceptable nodes and choosing the best one. If
multiple nodes have the highest score, round-robin is used to ensure
pods are deployed across all of them evenly.
Controllers
• Which components are involved
Running pod
Inter-pod networking
• Container Network Interface
– The CNI project makes it easier to connect containers into a network.
→ allows Kubernetes to be configured to use any CNI plugin,
including
• Calico
• Flannel
• Romana
• Weave Net
• And others
• https://kubernetes.io/docs/concepts/cluster-
administration/addons/.
OpenStack
OpenStack Overview
• OpenStack
– An IaaS platform
– A composable, open infrastructure that provides API-driven
access to compute, storage and networking resources.
– Open-source project, Apache 2.0 license
– Initiated by NASA (the Nebula project) and Rackspace
– Written in Python; stable releases every 6 months
Function        OpenStack project
Compute         Nova
Identity        Keystone
Network         Neutron
Storage         Glance, Cinder, Swift
Telemetry       Ceilometer
Orchestration   Heat
Dashboard       Horizon
• The foundation
– 10 000 users
• Cloud providers, Telco, Banks, Governments, etc
– 1000 organizations
• Red Hat, IBM, Rackspace, eNovance, etc
– 100 countries
• OpenStack and hypervisors
• Network services in OpenStack
– OpenStack manages several physical and virtual network
devices and virtual overlay networks.
– The various interfaces are abstracted by the OpenStack API.
– OpenStack can manage many types of network technology.
• Storage in OpenStack
– Block storage
– Object storage
OpenStack components
Project         Codename    Description
Compute         Nova        Manages VM resources, including CPU, memory, disk, and network interfaces.
Networking      Neutron     Provides resources used by the VM network interface, including IP addressing, routing, and SDN.
Object Storage  Swift       Provides object-level storage, accessible via a RESTful API.
Block Storage   Cinder      Provides block-level (traditional disk) storage to VMs.
Identity        Keystone    Manages role-based access control (RBAC) for OpenStack components. Provides authorization services.
Image Service   Glance      Manages VM disk images. Provides image delivery to VMs and snapshot (backup) services.
Dashboard       Horizon     Provides a web-based GUI for working with OpenStack.
Telemetry       Ceilometer  Collection service for metering and monitoring OpenStack components.
Orchestration   Heat        Template-based cloud application orchestration.
History of OpenStack
Series  Status       Initial Release Date      Next Phase
Ussuri  Development  2020-05-13 (estimated)    Maintained (estimated 2020-05-13)
Train   Maintained   2019-10-16                Extended Maintenance (estimated 2021-04-16)
Stein   Maintained   2019-04-10                Extended Maintenance (estimated 2020-10-10)
Rocky   Maintained   2018-08-30                Extended Maintenance (estimated 2020-02-24)
https://releases.openstack.org/
Default screens
OpenStack Dashboard
• 3 primary ways to interface with OpenStack:
• Dashboard
• CLI
• APIs
– Regardless of interface method, all interactions will make
their way back to the OpenStack APIs.
– Overview screen
• Access & Security screen
• Images & Snapshots screen
– <OpenStack image formats>
– RAW
– VHD (Virtual Hard Disk)
– VMDK (Virtual Machine Disk)
– VDI
(Virtual Disk Image or
VirtualBox Disk Image)
– ISO
– QCOW
(QEMU Copy On Write)
– AKI
– ARI
– AMI
• Volumes screen
– <Block vs. file vs. object storage>
• 3 categories of typical storage access methods’ :
– Block
– File
– Object
• Instances screen
• OpenStack deployment as a hotel.
• tenants as hotel rooms.
• Hotel OpenStack provides computational resources.
– Just as a hotel room is configurable, so are tenants.
» The number of resources (vCPU, RAM, storage, and the like), images (tenant-specific software
images), and the configuration of the network are all based on tenant-specific configurations.
• Users are independent of tenants, but users may hold roles for
specific tenants.
• Every time a new instance (VM) is created, it must be created in a
tenant.
Using OpenStack
Tenant model
• Tenant model operations
– Users and roles have one-to-many relationships with tenants.
– All resource configuration (users with roles, instances,
networks, and so on) is organized based on tenant separation.
– Roles are defined outside of tenants, but users are created
with an initial tenant assignment.
• Creating tenants, users, and roles (see the CLI sketch below)
– Creating a tenant
– Creating a user
– Assigning a role
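A sketch of the corresponding OpenStack CLI calls; project, user, and role names are illustrative assumptions (in newer releases "tenant" is exposed as "project"):
$ openstack project create --description "demo tenant" demo-project
$ openstack user create --project demo-project --password secret demo-user
$ openstack role add --project demo-project --user demo-user member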
• Tenant networks
– flat = absence of a virtual routing tier
– OpenStack tenant network
• An additional router residing within the virtual environment
→ separates the internal network (GENERAL_NETWORK) from the external
network (PUBLIC_NETWORK).
– Creating internal networks (see the CLI sketch below)
• An internal network works at OSI Layer 2, so it is the virtual
equivalent of providing a network switch used exclusively by a
particular tenant.
– Network (Neutron)
• GENERAL_NETWORK created for your tenant.
– Creating a router
• Adding the router to the subnet → creates a port on the local virtual switch
– Connecting the router to the public network
– Creating the external network
– Creating the external subnet
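A sketch of the network setup with the OpenStack CLI; the network, subnet, and router names plus the CIDR are illustrative assumptions:
$ openstack network create GENERAL_NETWORK
$ openstack subnet create --network GENERAL_NETWORK \
    --subnet-range 192.168.10.0/24 GENERAL_SUBNET
$ openstack router create GENERAL_ROUTER
$ openstack router add subnet GENERAL_ROUTER GENERAL_SUBNET
$ openstack router set --external-gateway PUBLIC_NETWORK GENERAL_ROUTER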
External gateways for tenants
OpenStack architecture
• Keystone
– Key functions
• Identity provider:
– An identity is represented as a user, in the form of a name and password.
• API client authentication:
– Keystone can do this using many third-party backends such as
LDAP and AD. Once authenticated, the user gets a token which
he/she can use to access the other OpenStack service APIs.
• Multitenant authorization:
– When a user accesses any OpenStack service, the service verifies the
role of the user and whether he/she can access the resource.
• Service discovery:
– Keystone manages a service catalog in which other services can
register their endpoints.
– Keystone components:
• Keystone API:
• Services:
• Identity:
• Resource:
• Assignment:
• Token:
• Catalog:
• Policy:
Nova
• Nova
– = the compute service, which provides a way to provision compute
instances, also known as virtual machines.
– Key functions
• Create and manage:
– Virtual machines
– Bare metal servers
– System containers
– Nova components communicate internally via RPC message passing.
– Nova components:
• Nova API:
• Placement API:
• Scheduler:
• Compute:
• Conductor:
• Database:
• Message queue:
• Network:
Neutron
• Neutron
– = network service providing networking options.
– uses plugins to provide different network configurations.
• Neutron components:
• Neutron server (neutron-server and neutron-*-plugin):
• Plugin agent (neutron-*-agent):
• DHCP agent (neutron-dhcp-agent):
• L3 agent (neutron-l3-agent):
• Network provider services (SDN server/services):
• Messaging queue:
• Database:
Cinder
• Cinder
– = the block storage service, which provides persistent block
storage resources for VMs in Nova.
• Cinder uses LVM or other plugin drivers to provide storage.
• Users can use Cinder to create, delete, and attach volumes.
• Advanced features include cloning, extending volumes, snapshots, and
writing images to volumes so they can be used as bootable persistent
storage for VMs and bare metal.
– Cinder components
• cinder-api:
• cinder-scheduler:
• cinder-volume:
• cinder-backup:
Glance
• Glance
– = image service which provides discovering, registering, and
retrieving abilities for disk and server images.
– Users can upload and discover data images and metadata
definitions that are meant to be used with other services.
• Glance is a central repository for managing images for VMs,
containers and bare metals.
• Glance has a RESTful API that allows for the querying of image
metadata as well as the retrieval of the actual image.
– Glance components:
• glance-api:
• glance-registry:
• Database:
• Storage repository
for image files:
• Metadata definition service:
Swift
• Swift
– = object store service used to store redundant, scalable data on
clusters of servers that are capable of storing petabytes of data.
– uses a distributed architecture with no central point of control.
– is ideal for storing unstructured data which can grow without
bounds and can be retrieved and updated.
– Data is written to multiple nodes that extend to different zones
for ensuring data replication and integrity across the cluster.
– Clusters can scale horizontally. In case of node failure, data is
replicated to other active nodes.
– Swift organizes data in a hierarchy: accounts store the list of
containers, containers store lists of objects, and objects store
the actual data along with its metadata.
– Swift components
• proxy-servers:
• Rings:
• Zones:
• Accounts:
• Containers:
• Objects:
• Partitions:
• Swift has many other services such as updaters, auditors, and
replicators which handle housekeeping tasks to deliver a
consistent object storage solution:
Wrap-up
Project                                 Codename    Description
Compute                                 Nova        Manages virtual machine (VM) resources, including CPU, memory, disk, and network interfaces
Networking                              Neutron     Provides resources used by VM network interfaces, including IP addressing, routing, and SDN
Object Storage                          Swift       Provides object-level storage accessible via RESTful APIs
Block Storage                           Cinder      Provides block-level (traditional disk) storage to VMs
Identity Service (shared service)       Keystone    Manages RBAC for OpenStack components; provides authorization services
Image Service (shared service)          Glance      Manages VM disk images; provides image delivery to VMs and snapshot (backup) services
Telemetry Service (shared service)      Ceilometer  Centralized collection for metering and monitoring OpenStack components
Orchestration Service (shared service)  Heat        Template-based cloud application orchestration for OpenStack environments
Database Service (shared service)       Trove       Provides users with relational and non-relational database services
Dashboard                               Horizon     Provides a web-based GUI for working with OpenStack
OpenStack and Containers
Overview
Major orchestration tools
• Capabilities
• Provision and managing hosts on which containers will run
• Pull the images from repository and instantiate containers
• Manage the life cycle of containers
• Schedule containers on hosts based on host's resource availability
• Start a new container when one dies
• Scale the containers to match the application's demand
• Provide networking between containers so that they can access
each other on different hosts
• Expose containers as services
• Health monitoring of the containers
• Upgrade the containers
• Docker Swarm
– Docker Swarm components
• Node
– Manager node
– Worker node
• Tasks
• Services
• Discovery service
• Scheduler
– Swarm mode
• Apache Mesos
– = cluster manager.
• Ex: Marathon runs containerized applications on the Mesos
cluster. Together, Mesos and Marathon become a container
orchestration engine like Swarm or Kubernetes.
– Apache Mesos components
• Master
• Slaves
• Frameworks
• Offer
• Tasks
• Zookeeper
Kubernetes
• Kubernetes architecture
– External request
– Master node
• kube-apiserver
• etcd
• kube-controller-manager
• kube-scheduler
– Worker nodes
• kubelet
• kube-proxy
• Container runtime
• supervisord
• fluentd
Containerization in
OpenStack
• Nova
– = a compute service that provides APIs to manage VMs.
– supports provisioning of machine containers using two libraries,
that is, LXC and OpenVZ (Virtuozzo).
• These container related libraries are supported by libvirt, which Nova
uses to manage virtual machines.
• Heat
– = an orchestration service.
– Users need to enable plugins for Docker orchestration in Heat.
• Magnum
– = a container infrastructure management service.
– Magnum provides APIs to deploy Kubernetes, Swarm, and Mesos
clusters on OpenStack infrastructure.
– Magnum uses Heat templates to deploy these clusters.
• Zun
– = a container management service that provides APIs to
manage the life cycle of containers in an OpenStack cloud.
– Currently, Zun provides the support to run containers on bare
metals, but in the future, it may provide the support to run
containers on virtual machines created by Nova.
– Zun uses Kuryr to provide neutron networking to containers. Zun
uses Cinder for providing persistent storage to containers.
• Kuryr
– = a Docker network plugin that provides networking services to
Docker containers using Neutron.
• Kolla
– = a project that deploys the OpenStack control plane
services within Docker containers.
– Kolla simplifies deployment and operations by packaging each
controller service as a micro-service inside a Docker container.
• Murano
– = provides an application catalog for app developers and
cloud administrators to publish cloud-ready applications in a
repository available within the OpenStack Dashboard (Horizon),
which can be run inside Docker or Kubernetes.
– → controls the full life cycle of applications.
• Fuxi
– = storage plugin for Docker containers that enables
containers to use Cinder volume and Manila share as
persistent storage inside them.
• OpenStack-Helm
– provides a framework for operators and developers to deploy
OpenStack on top of Kubernetes.
Kubernetes Plugin for OpenStack
• Targets the Kubernetes interface to the IaaS layer
– Supports plugging into many cloud providers: OpenStack, GCE, AWS,
Azure, ...
– Easily configurable
– Leveraged by key Kube components: kube-apiserver, kube-controller-manager,
kubelet
• Kubernetes plugin supports
– OpenStack Identity
– OpenStack Networking
– OpenStack Storage
• Code:
– gophercloud repo: https://github.com/rackspace/gophercloud
– Kubernetes repo: https://github.com/kubernetes/kubernetes
• pkg/cloudprovider/providers/openstack
• pkg/volume/cinder
– OpenStack repo: http://git.openstack.org/cgit/openstack/k8s-cloud-provider
• Magnum project delivers the integration of Kubernetes and
OpenStack
Identity Management Integration
• Keystone:
– robust identity service, fully populated by cloud provider
– provides multiple LDAPs and MS ADs integration and
federated identity support
• How Kubernetes services access OpenStack services
– Code: gophercloud package
– Establish session to access Neutron, Cinder, ...
– Keystone trust id for better security: automated by Magnum
Kubernetes as a Service
• OpenStack provides the blueprint for SDI, Container
Deployment + DevOps
Kops
• 개요
– A client for provisioning Kubernetes clusters
– Follows the Kubernetes design philosophy
• Declarative
• Operator controlled
– Support for most providers
– www.github.com/kubernetes/kops
• Motivations
– Instance Groups
• Multiple node flavors
• Multiple node images
• Declarative node taints/labels
– etcd
• backed by disks and supports snapshots
• spec by disk labels allowing migration
Major container-related packages
Magnum
• What is Magnum?
– An OpenStack API service that creates container clusters
– Uses Keystone credentials
– Lets you select the cluster type
– Supports multi-tenancy
– Can create multi-master clusters
• Key concepts
– COE
• Container Orchestration Engine
• Examples:
– Docker Swarm
– Kubernetes
– Apache Mesos
– DC/OS
– Magnum Cluster
– Cluster Template
– Native Client
• The client shipped with the COE (e.g. docker, kubectl)
• Not an OpenStack client
• Authenticates using TLS
– Key Magnum features
• Provides a standard API for complete life cycle management of COEs
• Supports multiple COEs such as Kubernetes, Swarm, Mesos, and DC/OS
• Supports the ability to scale a cluster up or down
• Supports multi-tenancy for container clusters
• Different choices of container cluster deployment models: VM or bare-metal
• Provides Keystone-based multi-tenant security and auth management
• Neutron based multi-tenant network control and isolation
• Supports Cinder to provide volume for containers
• Integrated with OpenStack
• Secure container cluster access (Transport Layer Security (TLS)) enabled
• Support for external infrastructure can also be used by the cluster, such as
DNS, public network, public discovery service, Docker registry, load balancer,
and so on
• Barbican provides the storage of secrets such as certificates used for TLS
within the cluster
• Kuryr-based networking for container-level isolation
• Components
– Magnum API
• = a WSGI server that serves the API requests users send.
• The Magnum API has controllers to handle requests for each of the
resources:
– Baymodel # Baymodel and Bay are being replaced by
– Bay # cluster template and cluster respectively.
– Certificate, Cluster, Cluster template
– Magnum services, Quota, Stats
• Each controller handles requests for a specific resource.
– Magnum conductor
• = an RPC server that provides coordination and database query
support for Magnum.
• It is stateless and horizontally scalable, meaning multiple instances of
the conductor service can run at the same time.
• The relationship between Magnum and Kubernetes
Zun
• Overview
– A container management service that provides APIs to manage
containers abstracted by different technologies at the
backend.
• Zun supports Docker as its container runtime.
• Zun integrates with many OpenStack services.
– Zun has various add-ons over Docker, which makes it a
powerful solution for container management.
• Key features
– Provides a standard API for container life cycle management
– Provides Keystone-based multi-tenant security and auth
management
– Supports Docker with runc and clear container for managing
containers
– Supports Cinder to provide volume for containers
– Kuryr-based networking for container-level isolation
– Supports container orchestration via Heat
– Container composition known as capsules lets user run multiple
containers with related resources as a single unit
– Supports the SR-IOV feature that enables the sharing of a
physical PCIe device to be shared across VMs and containers
– Supports interactive sessions with containers
– Zun allows users to run heavy workloads with dedicated
resources by exposing CPU sets
Kuryr
• Kuryr
– = a Docker network plugin that uses OpenStack Neutron to
provide networking services to Docker containers.
– maps container network abstractions to Neutron APIs.
• Security groups
• Subnet pools
• NAT (SNAT/DNAT, Floating IP)
• Port security (ARP spoofing)
• Quality of Service (QoS)
• Quota management
• Neutron pluggable IPAM
• Well-integrated COE load balancing via Neutron
• FWaaS for containers
• Kuryr architecture
– Mapping the Docker libnetwork to the neutron API
Murano
• Overview
– = the application catalog service
– → lets cloud-ready applications be easily deployed on OpenStack.
• It is an integration point for external applications and OpenStack,
with support for complete application life cycle management.
– Environment
– Package
– Session
– Deployments
– Bundle
– Categories
Kolla
• Overview
– Addresses the complexity of deploying and managing an OpenStack cloud
• Features
– Runs OpenStack services as containers
– Uses Ansible to install the container images and to deploy or
upgrade an OpenStack cluster
– Kolla containers are configured to store data on persistent
storage, which can then be mounted back onto the host
operating system and restored to protect against faults.
Wrap-up
Acknowledgement
• Figure sources include:
– www.kubernetes.io
– Marko Luksa, “Kubernetes in Action”, 2018
– www.openstack.org
 
Kubernetes Internals
Kubernetes InternalsKubernetes Internals
Kubernetes Internals
 

More from H K Yoon

AI 바이오 (4일차).pdf
AI 바이오 (4일차).pdfAI 바이오 (4일차).pdf
AI 바이오 (4일차).pdfH K Yoon
 
AI 바이오 (2_3일차).pdf
AI 바이오 (2_3일차).pdfAI 바이오 (2_3일차).pdf
AI 바이오 (2_3일차).pdfH K Yoon
 
Outlier Analysis.pdf
Outlier Analysis.pdfOutlier Analysis.pdf
Outlier Analysis.pdfH K Yoon
 
Nlp and transformer (v3s)
Nlp and transformer (v3s)Nlp and transformer (v3s)
Nlp and transformer (v3s)H K Yoon
 
Open source Embedded systems
Open source Embedded systemsOpen source Embedded systems
Open source Embedded systemsH K Yoon
 
빅데이터, big data
빅데이터, big data빅데이터, big data
빅데이터, big dataH K Yoon
 
Sensor web
Sensor webSensor web
Sensor webH K Yoon
 
Tm기반검색v2
Tm기반검색v2Tm기반검색v2
Tm기반검색v2H K Yoon
 

More from H K Yoon (8)

AI 바이오 (4일차).pdf
AI 바이오 (4일차).pdfAI 바이오 (4일차).pdf
AI 바이오 (4일차).pdf
 
AI 바이오 (2_3일차).pdf
AI 바이오 (2_3일차).pdfAI 바이오 (2_3일차).pdf
AI 바이오 (2_3일차).pdf
 
Outlier Analysis.pdf
Outlier Analysis.pdfOutlier Analysis.pdf
Outlier Analysis.pdf
 
Nlp and transformer (v3s)
Nlp and transformer (v3s)Nlp and transformer (v3s)
Nlp and transformer (v3s)
 
Open source Embedded systems
Open source Embedded systemsOpen source Embedded systems
Open source Embedded systems
 
빅데이터, big data
빅데이터, big data빅데이터, big data
빅데이터, big data
 
Sensor web
Sensor webSensor web
Sensor web
 
Tm기반검색v2
Tm기반검색v2Tm기반검색v2
Tm기반검색v2
 

Recently uploaded

Generative AI for Technical Writer or Information Developers
Generative AI for Technical Writer or Information DevelopersGenerative AI for Technical Writer or Information Developers
Generative AI for Technical Writer or Information DevelopersRaghuram Pandurangan
 
Rise of the Machines: Known As Drones...
Rise of the Machines: Known As Drones...Rise of the Machines: Known As Drones...
Rise of the Machines: Known As Drones...Rick Flair
 
Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 365Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 3652toLead Limited
 
unit 4 immunoblotting technique complete.pptx
unit 4 immunoblotting technique complete.pptxunit 4 immunoblotting technique complete.pptx
unit 4 immunoblotting technique complete.pptxBkGupta21
 
What is Artificial Intelligence?????????
What is Artificial Intelligence?????????What is Artificial Intelligence?????????
What is Artificial Intelligence?????????blackmambaettijean
 
Time Series Foundation Models - current state and future directions
Time Series Foundation Models - current state and future directionsTime Series Foundation Models - current state and future directions
Time Series Foundation Models - current state and future directionsNathaniel Shimoni
 
DSPy a system for AI to Write Prompts and Do Fine Tuning
DSPy a system for AI to Write Prompts and Do Fine TuningDSPy a system for AI to Write Prompts and Do Fine Tuning
DSPy a system for AI to Write Prompts and Do Fine TuningLars Bell
 
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024BookNet Canada
 
WordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your BrandWordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your Brandgvaughan
 
The Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and ConsThe Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and ConsPixlogix Infotech
 
Dev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio WebDev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio WebUiPathCommunity
 
Gen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfGen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfAddepto
 
TrustArc Webinar - How to Build Consumer Trust Through Data Privacy
TrustArc Webinar - How to Build Consumer Trust Through Data PrivacyTrustArc Webinar - How to Build Consumer Trust Through Data Privacy
TrustArc Webinar - How to Build Consumer Trust Through Data PrivacyTrustArc
 
What is DBT - The Ultimate Data Build Tool.pdf
What is DBT - The Ultimate Data Build Tool.pdfWhat is DBT - The Ultimate Data Build Tool.pdf
What is DBT - The Ultimate Data Build Tool.pdfMounikaPolabathina
 
From Family Reminiscence to Scholarly Archive .
From Family Reminiscence to Scholarly Archive .From Family Reminiscence to Scholarly Archive .
From Family Reminiscence to Scholarly Archive .Alan Dix
 
TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024Lonnie McRorey
 
Training state-of-the-art general text embedding
Training state-of-the-art general text embeddingTraining state-of-the-art general text embedding
Training state-of-the-art general text embeddingZilliz
 
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptxThe Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptxLoriGlavin3
 
SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024Lorenzo Miniero
 
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptx
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptxMerck Moving Beyond Passwords: FIDO Paris Seminar.pptx
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptxLoriGlavin3
 

Recently uploaded (20)

Generative AI for Technical Writer or Information Developers
Generative AI for Technical Writer or Information DevelopersGenerative AI for Technical Writer or Information Developers
Generative AI for Technical Writer or Information Developers
 
Rise of the Machines: Known As Drones...
Rise of the Machines: Known As Drones...Rise of the Machines: Known As Drones...
Rise of the Machines: Known As Drones...
 
Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 365Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 365
 
unit 4 immunoblotting technique complete.pptx
unit 4 immunoblotting technique complete.pptxunit 4 immunoblotting technique complete.pptx
unit 4 immunoblotting technique complete.pptx
 
What is Artificial Intelligence?????????
What is Artificial Intelligence?????????What is Artificial Intelligence?????????
What is Artificial Intelligence?????????
 
Time Series Foundation Models - current state and future directions
Time Series Foundation Models - current state and future directionsTime Series Foundation Models - current state and future directions
Time Series Foundation Models - current state and future directions
 
DSPy a system for AI to Write Prompts and Do Fine Tuning
DSPy a system for AI to Write Prompts and Do Fine TuningDSPy a system for AI to Write Prompts and Do Fine Tuning
DSPy a system for AI to Write Prompts and Do Fine Tuning
 
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
 
WordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your BrandWordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your Brand
 
The Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and ConsThe Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and Cons
 
Dev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio WebDev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio Web
 
Gen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfGen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdf
 
TrustArc Webinar - How to Build Consumer Trust Through Data Privacy
TrustArc Webinar - How to Build Consumer Trust Through Data PrivacyTrustArc Webinar - How to Build Consumer Trust Through Data Privacy
TrustArc Webinar - How to Build Consumer Trust Through Data Privacy
 
What is DBT - The Ultimate Data Build Tool.pdf
What is DBT - The Ultimate Data Build Tool.pdfWhat is DBT - The Ultimate Data Build Tool.pdf
What is DBT - The Ultimate Data Build Tool.pdf
 
From Family Reminiscence to Scholarly Archive .
From Family Reminiscence to Scholarly Archive .From Family Reminiscence to Scholarly Archive .
From Family Reminiscence to Scholarly Archive .
 
TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024
 
Training state-of-the-art general text embedding
Training state-of-the-art general text embeddingTraining state-of-the-art general text embedding
Training state-of-the-art general text embedding
 
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptxThe Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
 
SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024
 
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptx
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptxMerck Moving Beyond Passwords: FIDO Paris Seminar.pptx
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptx
 

Open stack and k8s(v4)

Kubernetes overview
• Why Kubernetes?
  – From monolithic apps to microservices
    • Splitting apps into microservices
  – A consistent environment for applications
  – DevOps (CI/CD) and NoOps
• Kubernetes from the top of a mountain
  – Helps developers focus on the core app features
  – Helps ops teams achieve better resource utilization
  – K8s exposes the whole datacenter as a single deployment platform.
• Kubernetes architecture
  – Head (master) node
    • API server
    • Scheduler
    • Controller manager
    • etcd
    • Sometimes: Kubelet, Docker
  – Worker node
    • Kubelet
    • Kube-proxy
    • docker
• A k8s cluster is composed of nodes, split into 2 types:
  – The master node hosts the Kubernetes Control Plane
    • Controls and manages the whole Kubernetes system
  – Worker nodes run the actual applications you deploy
• Components of a k8s cluster
  – Control Plane components:
    • Kubernetes API Server
    • Scheduler: schedules your apps (assigns a worker node to each deployable component of your application)
    • Controller Manager: performs cluster-level functions, such as replicating components, keeping track of worker nodes, handling node failures, and so on
    • etcd: a distributed data store that persistently stores the cluster configuration
  – (Worker) Nodes = machines that run containerized applications
    • Docker, rkt, or another container runtime, which runs your containers
    • Kubelet, which talks to the API server and manages containers on its node
    • Kubernetes Service Proxy (kube-proxy), which load-balances network traffic between application components
• Running an application on Kubernetes
  – Containers are run from a description (manifest) handed to Kubernetes
• Comparing SwarmKit and Kubernetes

  SwarmKit      Kubernetes      Description
  Swarm         Cluster         Set of servers/nodes managed by the respective orchestrator.
  Node          Cluster member  Single host (physical or virtual) which is a member of the swarm/cluster.
  Manager node  Master          Node managing the swarm/cluster. This is the control plane.
  Worker node   Node            Member of the swarm/cluster running application workload.
  Container     Container**     Instance of a container image running on a node. In a Kubernetes cluster, we cannot run a bare container.
  Task          Pod             Instance of a service (Swarm) or ReplicaSet (Kubernetes) running on a node. A task manages a single container, while a Pod contains one to many containers that all share the same network namespace.
  Service       ReplicaSet      Defines and reconciles the desired state of an application service consisting of multiple instances.
  Service       Deployment      A Deployment is a ReplicaSet augmented with rolling update and rollback capabilities.
  Routing Mesh  Service         The Swarm Routing Mesh provides L4 routing and load balancing using IPVS.
  Stack         Stack**         Definition of an application consisting of multiple (Swarm) services.
  Network       Network policy  Swarm software-defined networks (SDNs) are used to firewall containers. Kubernetes only defines a single flat network.
• Deploying and updating applications with Kubernetes
  – Deploy a first application
    • Deploy the web component
    • Deploy the database
    • Streamline the deployment
  – Zero-downtime deployments
    • Rolling updates
    • Blue-green deployment
  – Kubernetes secrets
    • Manually defining secrets
    • Creating secrets with kubectl
    • Using secrets in a pod
    • Secret values in environment variables

Environment setup
• Minikube vs. multi-node
• Installation steps (kubeadm)
Creating and running a container image
• Docker
  – Hello World container
• Node.js app example
  – The Node.js app
  – The Dockerfile
  – Building the image
    • Image layers
  – Running the container
  – Inspecting the container
  – Stopping & removing the container
  – Pushing to a registry

$ docker build -t mykub .
$ docker inspect mykub-container

Kubernetes cluster
• Minikube
  – Local single-node Kubernetes cluster
  – Running a Kubernetes cluster with Minikube
  – Installing the Kubernetes client (kubectl)
    • Verifying after installation

$ minikube start
Starting local Kubernetes cluster...
Starting VM...
SSH-ing files into VM...
...
Kubectl is now configured to use the cluster.

• Kubernetes clusters on a public cloud
  – (GKE example)
    • Steps: ...
    • Creating a 3-node Kubernetes cluster

$ gcloud container clusters create mykub --num-nodes 3 --machine-type f1-micro

• Preparing to use kubectl
  – Creating an alias
  – Configuring tab completion for kubectl

alias k=kubectl

Running an app on Kubernetes
• Deploying the Node.js app
• Pods
  – A pod is a group of one or more tightly related containers that always run together on the same worker node and in the same Linux namespace(s).
  – Each pod is like a separate logical machine with its own IP, hostname, processes, and so on, running a single application.

$ kubectl run mykub --image=hk/mykub --port=8080 --generator=run/v1
replicationcontroller "mykub" created
• Web application
  – Exposed through a Service object.
    • A LoadBalancer-type service provisions an external load balancer
    • Clients connect to the pods through the load balancer's public IP.
  – Creating the Service object (see the sketch below)
  – Verifying the Service through its external IP
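A minimal sketch of a LoadBalancer-type Service for the mykub pods used in the examples above; the service name is an assumption, and the external IP appears in kubectl get svc once the cloud provider has provisioned the load balancer:

apiVersion: v1
kind: Service
metadata:
  name: mykub-lb            # hypothetical service name
spec:
  type: LoadBalancer        # asks the cloud provider for an external load balancer
  ports:
  - port: 80                # port exposed on the load balancer
    targetPort: 8080        # container port of the mykub pods
  selector:
    app: mykub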
• ReplicationController, Pod, and Service
  – Creating a ReplicationController with the kubectl run command
    • The ReplicationController creates the actual Pod object.
  – Why a Service is needed, and what it does
    • Pods are ephemeral, so pod IP addresses keep changing
    • Services expose multiple pods at a single, constant IP:port pair.
      – The IP stays static during the lifetime of the service.
• Horizontal scaling
  – Inspection

$ kubectl scale rc abc --replicas=3
$ kubectl describe pod abc
Pods
• Creating pods
• Using labels, annotations, and namespaces
• Stopping and removing pods

Main topics
• Creating pods from YAML or JSON descriptors
• Using labels
  – Listing subsets of pods through label selectors
  – Using labels and selectors to constrain pod scheduling
• Annotations
• Grouping resources with namespaces
• Stopping and removing pods

Pods overview
(Figure 3.1) All containers of a pod run on the same node. A pod never spans two nodes.

• Why pods are needed
  – Multiple containers
    • Containers are designed to run a single process per container.
  – A pod with multiple containers runs on a single worker node.
    • Run closely related processes together and provide them the same environment.
  – Partial isolation between containers of the same pod
    • Because all containers of a pod run under the same Network and UTS namespaces (Linux namespaces), they all share the same hostname and network interfaces.
    • All containers of a pod run under the same IPC namespace.
  – Containers share the same IP and port space
    • Because containers in a pod run in the same Network namespace, they share the same IP address and port space.
  – Flat inter-pod network
    • All pods in a k8s cluster reside in a single flat, shared network-address space, so there are no NAT gateways between them.
• Spreading containers across pods appropriately
  – Split multi-tier apps into multiple pods
  – Split into multiple pods to enable individual scaling
  – Run containers in separate pods unless a specific reason requires otherwise

Creating pods from YAML descriptors
• The YAML descriptor of a pod
  – Pod definition
  – 3 main sections in Kubernetes resources:
    • metadata
    • spec
    • status
• Creating a pod from a YAML descriptor (hands-on)
  – The YAML file
  – Creating the pod
  – The container port
  – Sending requests to the pod

$ kubectl create -f mypod.yaml
$ kubectl explain pods
$ kubectl explain pod.spec
$ kubectl get pods
$ kubectl port-forward mypod 8888:8080

Using labels
• The label concept
  – An arbitrary key-value pair you attach to a resource
  – Used by label selectors (resource filtering).
• Label selectors
  – Select resources based on whether the resource
    • Contains (or doesn't contain) a label with a certain key
    • Contains a label with a certain key and value
    • Contains a label with a certain key, but with a value not equal to the one you specify
  – Example label selectors:
    • creation_method!=manual
    • env in (prod,devel)
    • env notin (prod,devel)
• Categorizing worker nodes with labels
  – Scheduling pods to specific nodes (see the sketch below)
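A minimal sketch of scheduling a pod to labeled nodes, assuming a node has already been labeled (for example with kubectl label node <node> gpu=true); the pod name and image are reused from earlier examples:

apiVersion: v1
kind: Pod
metadata:
  name: mykub-gpu            # hypothetical pod name
spec:
  nodeSelector:
    gpu: "true"              # pod is scheduled only to nodes labeled gpu=true
  containers:
  - name: main
    image: hk/mykub
    ports:
    - containerPort: 8080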
Annotations
• Annotations
  – Also key-value pairs
  – Can hold much larger pieces of information
  – Used for tool-building or for introducing new features
• Adding, modifying, and inspecting annotations on objects

Namespaces
• Namespace
  – Kubernetes namespaces provide a scope for object names.
    • (cf. Linux namespaces, which isolate processes from each other.)
  – Why: to split up resources in a multi-tenant environment
• Creating a namespace
• Grouping resources with namespaces

$ kubectl get ns

apiVersion: v1
kind: Namespace
metadata:
  name: custom-namespace

Stopping and removing pods
• Deleting pods
  – By pod name
  – By label selector
  – By deleting the whole namespace
  – Deleting all pods

$ kubectl delete po mykub-gpu
pod "mykub-gpu" deleted

Deploying managed pods: Replication
• Checking pod health
• ReplicationControllers
• ReplicaSets
• DaemonSets
• Completable tasks
• Scheduling Jobs to run periodically
Checking pod health
• Liveness probes overview
  – Check whether a container is still alive through liveness probes.
    • Probed periodically; the container is restarted if the probe fails.
  – Mechanisms
    • HTTP GET probe
    • TCP Socket probe
    • Exec probe
• Using liveness probes effectively
  – What a liveness probe should check
    • Check only the internals of the app
  – Don't implement retry loops inside the probe (a simple HTTP GET probe is sketched below)
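A minimal sketch of an HTTP GET liveness probe, assuming the app from the earlier examples serves HTTP on port 8080; the pod name and the /healthz path are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: mykub-liveness        # hypothetical pod name
spec:
  containers:
  - name: main
    image: hk/mykub
    livenessProbe:
      httpGet:                # the kubelet performs an HTTP GET against the container
        path: /healthz        # assumed health endpoint
        port: 8080
      initialDelaySeconds: 15 # wait before the first probe
      periodSeconds: 10       # probe every 10 seconds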
ReplicationControllers
• A ReplicationController
  – Ensures its pods are always kept running.
  – Features
    • Makes sure a pod is always running by starting a new pod when one disappears.
    • When a cluster node fails, it creates replacement replicas.
    • Enables horizontal scaling of pods, both manual and automatic.
  – The 3 parts of a ReplicationController
    • label selector
    • replica count
    • pod template
  – YAML definition of a ReplicationController:

apiVersion: v1
kind: ReplicationController
metadata:
  name: myRC
spec:
  replicas: 3
  selector:
    app: mykub
  template:
    metadata:
      labels:
        app: mykub
    spec:
      containers:
      - name: myContainer
        image: openwith/rc
        ports:
        - containerPort: 8080

• Changing the pod template
• Horizontally scaling pods
  – Scaling up a ReplicationController
  – Scaling a ReplicationController by editing its definition
• Deleting a ReplicationController

$ kubectl scale rc mykub --replicas=10
$ kubectl edit rc mykub
$ kubectl edit rc myRC
ReplicaSets
• ReplicaSet vs. ReplicationController
  – A ReplicaSet has more expressive pod selectors.
  – A YAML definition of a ReplicaSet:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myRS
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: openwith/RSapp

  – A ReplicaSet's more expressive label selector
    • Operators: In, NotIn, Exists, DoesNotExist

selector:
  matchExpressions:
  - key: app
    operator: In
    values:
    - openwith
DaemonSets
• DaemonSet overview
  – Runs exactly one pod on each node
  – Used for infrastructure-related pods that perform system-level operations.
  – Running pods only on specific nodes with a DaemonSet
    • node selector (see the sketch below)
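A minimal DaemonSet sketch; the node label disk=ssd and the image openwith/ssd-monitor are assumptions for illustration:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ssd-monitor            # hypothetical name
spec:
  selector:
    matchLabels:
      app: ssd-monitor
  template:
    metadata:
      labels:
        app: ssd-monitor
    spec:
      nodeSelector:
        disk: ssd              # run only on nodes labeled disk=ssd (assumption)
      containers:
      - name: main
        image: openwith/ssd-monitor   # hypothetical image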
Completable tasks
• Background
  – Continuous tasks vs. completable tasks
• Completable tasks
  – The Job resource
    • Functional programming (analogy)
    • Running Job pods sequentially
    • Running Job pods in parallel (see the sketch below)
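A minimal Job sketch with multiple completions and limited parallelism; the Job name is an assumption, and the image openwith/batch-job is reused from the CronJob example that follows:

apiVersion: batch/v1
kind: Job
metadata:
  name: multi-completion-job   # hypothetical name
spec:
  completions: 5               # run the pod to completion 5 times in total
  parallelism: 2               # at most 2 pods run at the same time
  template:
    metadata:
      labels:
        app: batch-job
    spec:
      restartPolicy: OnFailure # Jobs can't use the default Always policy
      containers:
      - name: main
        image: openwith/batch-job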
Scheduling Jobs to run periodically
• Creating a CronJob
• The schedule has 5 entries:
  – Minute
  – Hour
  – Day of month
  – Month
  – Day of week

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: batch-job-3-min
spec:
  schedule: "0,15,30,45 * * * *"
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            app: periodic-batch-job
        spec:
          restartPolicy: OnFailure
          containers:
          - name: main
            image: openwith/batch-job
Main topics
• Overview
• Exposing services to external clients
• The Ingress resource
• Signaling when a pod is ready to accept connections
• Discovering individual pods with a headless service

Kubernetes Services overview
• A Kubernetes Service
  – A resource you create to make a single, constant point of entry to a group of pods providing the same service.
  – Each service has an IP address and port that never change while the service exists.
  – Enables clients to discover and talk to the pods.
• Service types
• Example:
  – External clients need to connect to the frontend pods.
  – The frontend pods need to connect to a backend database.
Creating and using Services
• Creating Services
  – With kubectl expose
  – From a YAML descriptor:

apiVersion: v1
kind: Service
metadata:
  name: mykub
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: mykub
• Using the Service
  – Verifying the created Service
  – Remotely executing commands in a running container
  – Session affinity
    • Redirect all requests from a particular client to the same pod
    • Set the sessionAffinity property to ClientIP
  – Exposing multiple ports
  – Named ports

$ kubectl exec mykub-7nog1 -- curl -s http://10.111.249.153

Service discovery
• Discovering services through environment variables
• Discovering services through DNS
  – Using the FQDN
    • (In the frontend-backend example) the frontend pod can connect to the backend database service by opening a connection to: backend-database.default.svc.cluster.local

Connecting to services outside the cluster
• Connecting to services that live outside the cluster
  – Goal: achieve service load balancing and service discovery by redirecting to external IP(s):port(s)
• Service endpoints
  – The Endpoints resource sits in between the Service and its targets
• Manually configuring service endpoints
  – Create both a Service and an Endpoints resource
  – Creating an alias for an external service
    • An FQDN can then be used
  – (Figure) A service backed by two external endpoints

• Exposing services to external clients
  – Set the service type to NodePort
  – Set the service type to LoadBalancer, an extension of the NodePort type
  – Create an Ingress resource to expose multiple services through a single IP address
• Using a NodePort service
  – Makes Kubernetes reserve a port on all its nodes and forward incoming connections to the pods that are part of the service.
  – Creating and inspecting a NodePort service (see the sketch below)
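A minimal NodePort sketch for the same mykub pods; the nodePort value 30123 is an assumption (omit it to let Kubernetes pick one from the default range):

apiVersion: v1
kind: Service
metadata:
  name: mykub-nodeport
spec:
  type: NodePort
  ports:
  - port: 80            # the service's cluster-internal port
    targetPort: 8080    # the container port the traffic is forwarded to
    nodePort: 30123     # reserved on every node (assumed value)
  selector:
    app: mykub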
• Exposing a service through an external load balancer
• Other considerations
  – Preventing unnecessary network hops

The Ingress resource
• Why Ingress
  – Exposing services externally through an Ingress resource
  – Ingresses operate at the application layer (HTTP) and can provide features such as cookie-based session affinity
• Creating and using an Ingress

$ kubectl get ingresses
$ curl http://mykub.example.com
• Exposing multiple services through a single Ingress
  – Mapping different services to different paths of the same host
  – Handling TLS traffic with an Ingress (see the sketch below)

spec:
  rules:
  - host: foo.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: foo
          servicePort: 80
  - host: bar.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: bar
          servicePort: 80
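A minimal sketch of the TLS part, assuming a Secret named tls-secret holding the certificate and private key has already been created (for example with kubectl create secret tls); the host reuses the earlier curl example:

spec:
  tls:
  - hosts:
    - mykub.example.com        # host from the earlier example
    secretName: tls-secret     # assumed Secret containing tls.crt and tls.key
  rules:
  - host: mykub.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: mykub
          servicePort: 80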
• Readiness probes
  – Invoked periodically to determine whether a specific pod can receive requests
  – Types: the same as for liveness probes
    • Exec probe
    • HTTP GET probe
    • TCP Socket probe
• Using readiness probes
  – Adding a readiness probe to a pod (see the sketch below)
  – Modifying the readiness status of an existing pod
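A minimal sketch of adding a readiness probe to the pod template; the exec-based check of a marker file /var/ready is an assumption, often used so readiness can be toggled manually:

spec:
  containers:
  - name: main
    image: hk/mykub
    readinessProbe:
      exec:
        command:
        - ls
        - /var/ready      # the pod reports Ready only while this file exists (assumption)
      periodSeconds: 5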
Headless services
• Purpose
  – Discover individual pods through a headless service
• Creating a headless service (see the sketch below)
  – Set the clusterIP field to None in the service spec
    • Kubernetes won't assign it a cluster IP
• Discovering pods through DNS
• DNS A records
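A minimal headless-service sketch for the same mykub pods; DNS then returns A records for each ready pod instead of a single cluster IP (the service name is an assumption):

apiVersion: v1
kind: Service
metadata:
  name: mykub-headless
spec:
  clusterIP: None      # headless: no cluster IP is allocated
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: mykub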
Main topics
• Overview
• Using volumes
  – Accessing files on the worker node's filesystem
• Decoupling storage from pods
  – PersistentVolumes and PersistentVolumeClaims
• Other topics
  – Dynamic provisioning of PersistentVolumes

Overview
• Kubernetes volumes
  – A component of a pod, and thus defined in the pod's specification, much like containers.
  – A volume is available to all containers in the pod, but it must be mounted in each container that uses it.
  – Example:
• Available volume types
  – emptyDir
  – hostPath
  – gitRepo
  – nfs
  – cinder, cephfs, iscsi, flocker, glusterfs, quobyte, rbd, flexVolume, vsphereVolume, photonPersistentDisk, scaleIO
  – configMap, secret, downwardAPI
  – persistentVolumeClaim
  – Vendor-specific
    • gcePersistentDisk (GCE Persistent Disk), awsElasticBlockStore (AWS EBS Volume), azureDisk (MS Azure)
  – https://kubernetes.io/docs/concepts/storage/volumes/#types-of-volumes
Using volumes
• Sharing data between containers with a volume
  – Using an emptyDir volume (see the sketch below)
  – Using a Git repository
    • A volume initialized by checking out the contents of a Git repository.
    • Sidecar containers
      – A container that augments the operation of the main container of the pod.
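A minimal emptyDir sketch with two containers sharing one volume; the pod name, the writer image openwith/fortune, and the mount paths are assumptions for illustration:

apiVersion: v1
kind: Pod
metadata:
  name: fortune            # hypothetical pod name
spec:
  volumes:
  - name: html
    emptyDir: {}           # created empty when the pod starts, deleted with the pod
  containers:
  - name: html-generator
    image: openwith/fortune              # assumed writer image
    volumeMounts:
    - name: html
      mountPath: /var/htdocs             # writer's view of the shared volume
  - name: web-server
    image: nginx
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html   # reader's view of the same volume
      readOnly: true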
• Accessing files on the worker node's filesystem
  – The hostPath volume

Decoupling storage from pods
• PersistentVolumes and PersistentVolumeClaims

  Storage activity  Kubernetes storage primitive
  Provisioning      PersistentVolume
  Configuring       StorageClass
  Attaching         PersistentVolumeClaim

  (Source: https://portworx.com/tutorial-kubernetes-persistent-volumes/)

• A PersistentVolumeClaim
  – Created in a completely separate process from creating a pod
  – The PersistentVolumeClaim stays available even if the pod is rescheduled
• Benefits of using claims
• Dynamic provisioning of PersistentVolumes
  – Instead of the administrator pre-provisioning PersistentVolumes, a StorageClass is defined and a new PersistentVolume is created on demand through a PersistentVolumeClaim
  – Requesting a storage class in a PersistentVolumeClaim
    • Creating the PVC definition (see the sketch below)
  – Dynamic provisioning without specifying a storage class
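A minimal PVC sketch requesting dynamically provisioned storage; the claim name, the class name fast, and the requested size are assumptions:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mykub-pvc             # hypothetical claim name
spec:
  storageClassName: fast      # assumed StorageClass defined by the administrator
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi            # requested size (assumption)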
Main topics
• Overview
  – Configuring containerized applications
• General approaches
  – Passing command-line arguments
  – Setting environment variables for containers
• Using ConfigMaps
• Using Secrets

Overview
• Configuring containerized applications
  – Command-line arguments
  – Environment variables
  – Whether or not you use a ConfigMap to store configuration data, you can configure your apps by
    • Passing command-line arguments to containers
    • Setting custom environment variables for each container
    • Mounting configuration files into containers through a special type of volume
• Passing command-line arguments to containers
  – Defining the command and arguments in Docker
    • ENTRYPOINT and CMD in the Dockerfile
      – ENTRYPOINT defines the executable invoked when the container is started.
      – CMD specifies the arguments that get passed to the ENTRYPOINT.
    • shell form vs. exec form
      – shell form, for example: ENTRYPOINT node app.js
      – exec form, for example: ENTRYPOINT ["node", "app.js"]
      – Determines whether the specified command is invoked inside a shell or not.
• Kubernetes vs. Docker

  Docker      Kubernetes  Description
  ENTRYPOINT  command     The executable that runs inside the container
  CMD         args        The arguments passed to the executable
• Setting environment variables for a container
  – Specifying environment variables in the container definition (see the sketch below)
  – Referring to other environment variables in a variable's value
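A minimal sketch of both techniques; the variable names INTERVAL and MESSAGE and their values are assumptions:

spec:
  containers:
  - name: main
    image: hk/mykub
    env:
    - name: INTERVAL
      value: "30"                      # plain literal value (assumption)
    - name: MESSAGE
      value: "interval is $(INTERVAL)" # refers to the variable defined above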
ConfigMaps
• ConfigMap overview
  – Decoupling configuration from pods
  – Keeping config in a separate, standalone object
  – Moves the configuration out of the pod description.
• Creating a ConfigMap
  – The kubectl create configmap command
    • Creating a ConfigMap entry from the contents of a file
    • Creating a ConfigMap from files in a directory
    • Combining different options
  – Passing a ConfigMap entry to a container as an environment variable (see the sketch below)

$ kubectl create configmap fortune-config --from-literal=sleep-interval=25
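A minimal sketch of injecting the sleep-interval entry created above into a container as an environment variable; the variable name INTERVAL and the image are assumptions:

spec:
  containers:
  - name: main
    image: openwith/fortune          # assumed image
    env:
    - name: INTERVAL                 # env var name inside the container (assumption)
      valueFrom:
        configMapKeyRef:
          name: fortune-config       # the ConfigMap created with kubectl above
          key: sleep-interval        # the entry whose value is injected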
Secrets
• Overview
  – Used to pass sensitive data to containers
  – Key-value pairs
    • Pass Secret entries to the container as environment variables
    • Expose Secret entries as files in a volume
  – On the nodes, Secrets are always kept in memory only.
  – etcd stores Secrets in encrypted form
• Comparing ConfigMaps and Secrets
• Using Secrets for binary data
  – Base64 encoding
  – The default token Secret
• Creating a Secret
  – Serving HTTPS traffic
  – Generating the certificate and private key files (see the sketch below)
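A minimal Secret sketch; the Secret name and entry keys are assumptions, and the base64 strings are truncated placeholders, not real certificate data:

apiVersion: v1
kind: Secret
metadata:
  name: fortune-https             # hypothetical name
type: Opaque
data:
  https.cert: LS0tLS1CRUdJTi...   # base64-encoded certificate (placeholder)
  https.key: LS0tLS1CRUdJTi...    # base64-encoded private key (placeholder)
stringData:
  foo: plain text                 # stringData entries are provided unencoded and merged into data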
Main topics
• Updating applications running in pods
• Using a ReplicationController
  – Automatic rolling updates
• Declarative updates with Deployments

Overview
• Updating an application running in pods
  – 2 ways
    • Delete all existing pods first, then start new ones.
    • Start new ones, then delete the old ones.
      – Either by adding all the new pods and then deleting all the old ones, or sequentially, by adding new pods and removing old ones gradually
• Deleting all existing pods, then starting new ones
• Spinning up new pods, then deleting the old ones
  – A blue-green deployment
  – A rolling update

ReplicationController
• Overview
  – Automatic rolling updates using a ReplicationController
• Process
  – Run the initial version of the app
  – Perform the rolling update with kubectl
  – (Figure) The Service redirects requests to both old and new pods during the rolling update.

Deployments
• Background
  – Limitations of kubectl rolling-update
    • (i) Kubernetes modifies objects you created
    • (ii) You have to explicitly tell the kubectl client to perform the update
    • (iii) It's imperative
• A Deployment
  – A higher-level resource for deploying and updating applications
  – Enables declarative updates
    • Instead of doing it through a ReplicationController or a ReplicaSet, which are both considered lower-level concepts.
• Creating a Deployment
  – The Deployment manifest (see the sketch below)
  – Creating the Deployment resource
    • First delete the existing ReplicationController and pods
• Updating a Deployment
  – Deployment strategies
    • Recreate strategy
    • RollingUpdate strategy
  – Triggering a rolling update
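A minimal Deployment sketch with an explicit RollingUpdate strategy; the name and image are reused from earlier examples, and the maxSurge/maxUnavailable values simply restate the defaults listed in the table further below:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mykub
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%          # how many extra pods may exist above the desired count
      maxUnavailable: 25%    # how many pods may be unavailable during the update
  selector:
    matchLabels:
      app: mykub
  template:
    metadata:
      labels:
        app: mykub
    spec:
      containers:
      - name: main
        image: hk/mykub      # changing this image triggers a rolling update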
• Ways to modify Deployments and other resources

  Method             Description
  kubectl edit       Opens the object's manifest in the default editor.
  kubectl patch      Modifies individual properties of the object. Example: kubectl patch deployment ~~
  kubectl apply      Modifies the object by applying property values from a YAML or JSON file.
  kubectl replace    Replaces the object with a new one from a YAML/JSON file.
  kubectl set image  Changes the container image defined in a Pod, Deployment, ReplicaSet, and so on.
• Rolling back a Deployment
  – Undoing a rollout
  – Rollout history
• Pausing the rollout process
  – Pausing the rollout
  – Resuming the rollout
• Controlling the rollout rate

  Property        What it does
  maxSurge        How many pod instances you allow to exist above the desired replica count configured on the Deployment. Default: 25%.
  maxUnavailable  Determines how many pod instances can be unavailable relative to the desired replica count during the update. Default: 25%.
Kubernetes components
• Master components
  – Provide the cluster's control plane.
  – Make global decisions about the cluster (e.g. scheduling), and detect and respond to cluster events (e.g. starting up a new pod when a replication controller's replicas field is unsatisfied).
  – kube-apiserver
  – etcd
  – kube-scheduler
  – kube-controller-manager
  – cloud-controller-manager
• kube-apiserver
  – The component on the master that exposes the Kubernetes API. It is the front end for the Kubernetes control plane.
  – It is designed to scale horizontally.
• etcd
  – A consistent and highly available key-value store used as Kubernetes' backing store for all cluster data.
  – If your Kubernetes cluster uses etcd as its backing store, make sure you have a backup plan for that data.
  – You can find in-depth information about etcd in the official documentation.
• kube-scheduler
  – The component on the master that watches newly created pods that have no node assigned, and selects a node for them to run on.
  – Factors taken into account for scheduling decisions include individual and collective resource requirements, hardware/software/policy constraints, affinity and anti-affinity specifications, data locality, inter-workload interference, and deadlines.
• kube-controller-manager
  – The component on the master that runs controllers.
  – Logically, each controller is a separate process, but to reduce complexity they are all compiled into a single binary and run in a single process.
    • Node Controller: responsible for noticing and responding when nodes go down.
    • Replication Controller: responsible for maintaining the correct number of pods for every replication controller object in the system.
    • Endpoints Controller: populates the Endpoints object (that is, joins Services & Pods).
    • Service Account & Token Controllers: create default accounts and API access tokens for new namespaces.
• cloud-controller-manager
  – Runs controllers that interact with the underlying cloud providers.
  – Runs cloud-provider-specific controller loops only. You must disable these controller loops in the kube-controller-manager.
    • Node Controller: checks the cloud provider to determine whether a node has been deleted in the cloud after it stops responding
    • Route Controller: sets up routes in the underlying cloud infrastructure
    • Service Controller: creates, updates, and deletes cloud provider load balancers
    • Volume Controller: creates, attaches, and mounts volumes, and interacts with the cloud provider to orchestrate volumes
• Node components
  – Run on every node and provide the Kubernetes runtime environment
  – kubelet
    • An agent that runs on each node in the cluster. It makes sure that containers are running in a pod.
  – kube-proxy
    • A network proxy that runs on each node in the cluster.
  – Container runtime
    • The software that is responsible for running containers.
    • Kubernetes supports several container runtimes: Docker, containerd, cri-o, rktlet, and any implementation of the Kubernetes CRI (Container Runtime Interface).
• Addons
  – DNS
    • While the other addons are not strictly required, all Kubernetes clusters should have cluster DNS, as many examples rely on it.
  – Web UI (Dashboard)
    • A general-purpose, web-based UI for Kubernetes clusters.
  – Container resource monitoring
    • Records generic time-series metrics about containers in a central database and provides a UI for browsing that data.
  – Cluster-level logging
    • Responsible for saving container logs to a central log store with a search/browsing interface.
• Pods
  – Comparing Docker container and Kubernetes pod networking
  – Sharing the network namespace
  – Pod life cycle
  – Pod specification
  – Pods and volumes
• The Kubernetes API
  – Kubernetes itself is decomposed into multiple components, which interact through its API.
  – API changes
  – OpenAPI and Swagger definitions
  – API versioning
  – API groups
    • Enabling API groups
    • Enabling resources in the groups
• Self-registration of nodes
  – When the kubelet flag --register-node is true (the default), the kubelet will attempt to register itself with the API server.
  – For self-registration, the kubelet is started with options such as:
    • --kubeconfig
    • --cloud-provider
    • --register-node
    • --register-with-taints
    • --node-labels
    • --node-status-update-frequency
• Cluster nodes
  – On each node, 3 services need to run:
    • Kubelet
      – The primary node agent.
      – Uses pod specifications (YAML or JSON files) to make sure all of the containers of the corresponding pods are running and healthy.
    • Container runtime
      – Manages and runs the individual containers of a pod.
      – rkt or CRI-O can also be used.
    • kube-proxy
      – Runs as a daemon and is a simple network proxy and load balancer for all application services running on that particular node.
• Pods
  – The atomic unit of deployment in Kubernetes.
  – An abstraction of one or many co-located containers that share the same kernel namespaces, such as the network namespace.
  – No direct equivalent exists in Docker SwarmKit.
• Comparing Docker container and Kubernetes pod networking
  – Communication between containers
    • Containers in a pod share the network namespace
    • Containers in pods communicate via localhost
• Sharing the network namespace
• Pod life cycle
• Pod specification
• Pods and volumes
• Kubernetes ReplicaSet
  – Defines and manages a collection of identical pods that are running on different cluster nodes.
  – A ReplicaSet defines which container images are used by the containers running inside a pod and how many instances of the pod will run in the cluster: the desired state.
  – ReplicaSet specification
  – Self-healing
• Kubernetes Service
  – (Figure) A Kubernetes service providing stable endpoints to clients
  – (Figure) Service discovery
• Context-based routing
  – (Figure) Context-based routing using a Kubernetes ingress controller
Kubernetes internals
• Kubernetes architecture
• How controllers cooperate
• Running a pod
• Inter-pod networking
Architecture
• Components of the Control Plane
  – etcd distributed persistent storage
  – API server
  – Scheduler
  – Controller Manager
• Components running on the worker nodes
  – Kubelet
  – Kubernetes Service Proxy (kube-proxy)
  – Container runtime (Docker, rkt, or others)
• Add-on components
  – Kubernetes DNS server
  – Dashboard
  – Ingress controller
  – Heapster, which we'll talk about in chapter 14
  – CNI network plugin
• The distributed nature of Kubernetes components
  – Checking the status of the Control Plane components

$ kubectl get componentstatuses

• etcd
  – Optimistic concurrency control
  – How resources are stored in etcd
  – Ensuring consistency when etcd is clustered

$ etcdctl ls /registry

• API server
  – Responsibilities
    • Authenticating the client with authentication plugins
    • Authorizing the client with authorization plugins
    • Validating and/or modifying the resource in the request with admission control plugins
• Scheduler
  – The scheduling algorithm
    • 2 parts in the selection of a node:
      – Filtering the list of all nodes to obtain a list of acceptable nodes the pod can be scheduled to.
      – Prioritizing the acceptable nodes and choosing the best one. If multiple nodes have the highest score, round-robin is used to ensure pods are deployed across all of them evenly.
• Container Network Interface
  – The CNI project makes it easier to connect containers into a network. It allows Kubernetes to be configured to use any CNI plugin, including
    • Calico
    • Flannel
    • Romana
    • Weave Net
    • and others
  – https://kubernetes.io/docs/concepts/cluster-administration/addons/
• OpenStack
  – An IaaS platform
  – A composable, open infrastructure that provides API-driven access to compute, storage, and networking resources.
  – Open-source project: Apache 2.0 license
  – Initiated by Nebula (NASA) and Rackspace
  – Written in Python; stable releases every 6 months

  Function       OpenStack project
  Compute        Nova
  Identity       Keystone
  Network        Neutron
  Storage        Glance, Cinder, Swift
  Telemetry      Ceilometer
  Orchestration  Heat
  Dashboard      Horizon
• The foundation
  – 10,000 users
    • Cloud providers, telcos, banks, governments, etc.
  – 1,000 organizations
    • Red Hat, IBM, Rackspace, eNovance, etc.
  – 100 countries
• Network services in OpenStack
  – OpenStack manages several physical and virtual network devices and virtual overlay networks.
  – The various interfaces are abstracted by the OpenStack API.
  – OpenStack can manage many types of network technology.
• Storage in OpenStack
  – Block storage
  – Object storage
OpenStack components

  Project         Code name   Description
  Compute         Nova        Manages VM resources, including CPU, memory, disk, and network interfaces.
  Networking      Neutron     Provides resources used by the VM network interface, including IP addressing, routing, and SDN.
  Object Storage  Swift       Provides object-level storage, accessible via a RESTful API.
  Block Storage   Cinder      Provides block-level (traditional disk) storage to VMs.
  Identity        Keystone    Manages role-based access control (RBAC) for OpenStack components. Provides authorization services.
  Image Service   Glance      Manages VM disk images. Provides image delivery to VMs and snapshot (backup) services.
  Dashboard       Horizon     Provides a web-based GUI for working with OpenStack.
  Telemetry       Ceilometer  Collection for metering and monitoring OpenStack components.
  Orchestration   Heat        Template-based cloud application orchestration.

History of OpenStack

  Series  Status       Initial release date              Next phase
  Ussuri  Development  2020-05-13 (estimated, schedule)  Maintained (estimated 2020-05-13)
  Train   Maintained   2019-10-16                        Extended Maintenance (estimated 2021-04-16)
  Stein   Maintained   2019-04-10                        Extended Maintenance (estimated 2020-10-10)
  Rocky   Maintained   2018-08-30                        Extended Maintenance (estimated 2020-02-24)

  https://releases.openstack.org/
OpenStack Dashboard
• 3 primary ways to interface with OpenStack:
  – Dashboard
  – CLI
  – APIs
• Regardless of the interface method, all interactions make their way back to the OpenStack APIs.
• Access & Security screen
• Images & Snapshots screen
  – OpenStack image formats:
    • RAW
    • VHD (Virtual Hard Disk)
    • VMDK (Virtual Machine Disk)
    • VDI (Virtual Disk Image or VirtualBox Disk Image)
    • ISO
    • QCOW (QEMU Copy On Write)
    • AKI
    • ARI
    • AMI
• Volumes screen
  – Block vs. file vs. object storage
    • 3 categories of typical storage access methods:
      – Block
      – File
      – Object
• An OpenStack deployment as a hotel
  – Tenants as hotel rooms.
  – The hotel (OpenStack) provides computational resources.
    • Just as a hotel room is configurable, so are tenants.
      – The number of resources (vCPU, RAM, storage, and the like), images (tenant-specific software images), and the configuration of the network are all based on tenant-specific configurations.
  – Users are independent of tenants, but users may hold roles for specific tenants.
  – Every time a new instance (VM) is created, it must be created in a tenant.
Tenant model
• Tenant model operations
  – Users and roles have one-to-many relationships with tenants.
  – All resource configuration (users with roles, instances, networks, and so on) is organized based on tenant separation.
  – Roles are defined outside of tenants, but users are created with an initial tenant assignment.
• Creating tenants, users, and roles
  – Creating a tenant
  – Creating a user
  – Assigning a role
• Tenant networks
  – flat = absence of a virtual routing tier
  – OpenStack tenant network
    • An additional router residing within the virtual environment separates the internal network (GENERAL_NETWORK) from the external network (PUBLIC_NETWORK).
  – Creating internal networks
    • An internal network works at ISO Layer 2, so this is the virtual equivalent of providing a network switch to be used exclusively by a particular tenant.
  – Network (Neutron)
    • The GENERAL_NETWORK created for your tenant.
  – Creating a router
    • Adding the router to a subnet creates a port on the local virtual switch
  – Connecting the router to the public network
  – Creating the external network
  – Creating the external subnet
    • (Figure) External gateways for tenants
• Keystone
  – Main functions
    • Identity provider:
      – An identity is represented as a user in the form of a name and password.
    • API client authentication:
      – Keystone can authenticate against many third-party backends such as LDAP and AD. Once authenticated, the user gets a token which he/she can use to access other OpenStack service APIs.
    • Multitenant authorization:
      – When a user accesses any OpenStack service, the service verifies the role of the user and whether he/she can access the resource.
    • Service discovery:
      – Keystone manages a service catalog in which other services can register their endpoints.
  – Keystone components:
    • Keystone API
    • Services
    • Identity
    • Resource
    • Assignment
    • Token
    • Catalog
    • Policy

Nova
• Nova
  – A compute service that provides a way to provision compute instances, also known as virtual machines.
  – Main functions
    • Create and manage:
      – Virtual machines
      – Bare metal servers
      – System containers
    • Nova components communicate internally via RPC message-passing mechanisms.
  – Nova components:
    • Nova API
    • Placement API
    • Scheduler
    • Compute
    • Conductor
    • Database
    • Message queue
    • Network

Neutron
• Neutron
  – A network service providing networking options.
  – Uses plugins to provide different network configurations.
• Neutron components:
  – Neutron server (neutron-server and neutron-*-plugin)
  – Plugin agent (neutron-*-agent)
  – DHCP agent (neutron-dhcp-agent)
  – L3 agent (neutron-l3-agent)
  – Network provider services (SDN server/services)
  – Messaging queue
  – Database
Cinder
• Cinder
  – A block storage service that provides persistent block storage resources for VMs in Nova.
    • Cinder uses LVM or other plugin drivers to provide storage.
    • Users can use Cinder to create, delete, and attach volumes.
    • Advanced features such as cloning, extending volumes, snapshots, and writing images can be used as bootable persistent instances for VMs and bare metal.
  – Cinder components
    • cinder-api
    • cinder-scheduler
    • cinder-volume
    • cinder-backup
Glance
• Glance
  – An image service that provides discovery, registration, and retrieval of disk and server images.
  – Users can upload and discover data images and metadata definitions that are meant to be used with other services.
    • Glance is a central repository for managing images for VMs, containers, and bare metal.
    • Glance has a RESTful API that allows querying of image metadata as well as retrieval of the actual image.
  – Glance components:
    • glance-api
    • glance-registry
    • Database
    • Storage repository for image files
    • Metadata definition service

Swift
• Swift
  – An object store service used to store redundant, scalable data on clusters of servers that are capable of storing petabytes of data.
  – Uses a distributed architecture with no central point of control.
  – Ideal for storing unstructured data that can grow without bounds and can be retrieved and updated.
  – Data is written to multiple nodes that extend to different zones, ensuring data replication and integrity across the cluster.
  – Clusters can scale horizontally. In case of node failure, data is replicated to other active nodes.
  – Swift organizes data in a hierarchy: accounts store lists of containers, containers store lists of objects, and objects store the actual data with metadata.
  – Swift components
    • proxy-servers
    • Rings
    • Zones
    • Accounts
    • Containers
    • Objects
    • Partitions
    • Swift has many other services, such as updaters, auditors, and replicators, which handle housekeeping tasks to deliver a consistent object storage solution.

Wrap-up

  Project                                  Codename    Description
  Compute                                  Nova        Manages virtual machine (VM) resources, including CPU, memory, disk, and network interfaces
  Networking                               Neutron     Provides resources used by VM network interfaces, including IP addressing, routing, and SDN
  Object Storage                           Swift       Provides object-level storage accessible via RESTful APIs
  Block Storage                            Cinder      Provides block-level (traditional disk) storage to VMs
  Identity Service (shared service)        Keystone    Manages RBAC for OpenStack components; provides authorization services
  Image Service (shared service)           Glance      Manages VM disk images; provides image delivery to VMs and snapshot (backup) services
  Telemetry Service (shared service)       Ceilometer  Centralized collection for metering and monitoring OpenStack components
  Orchestration Service (shared service)   Heat        Template-based cloud application orchestration for OpenStack environments
  Database Service (shared service)        Trove       Provides users with relational and non-relational database services
  Dashboard                                Horizon     Provides a web-based GUI for working with OpenStack
Major orchestration tools
• Functions
  – Provision and manage the hosts on which containers will run
  – Pull images from a repository and instantiate containers
  – Manage the life cycle of containers
  – Schedule containers on hosts based on the hosts' resource availability
  – Start a new container when one dies
  – Scale the containers to match the application's demand
  – Provide networking between containers so that they can access each other on different hosts
  – Expose containers as services
  – Health monitoring of the containers
  – Upgrade the containers
• Docker Swarm
  – Docker Swarm components
    • Node
      – Manager node
      – Worker node
    • Tasks
    • Services
    • Discovery service
    • Scheduler
  – Swarm mode
• Apache Mesos
  – A cluster manager.
    • Example: Marathon runs containerized applications on the Mesos cluster. Together, Mesos and Marathon become a container orchestration engine like Swarm or Kubernetes.
  – Apache Mesos components
    • Master
    • Slaves
    • Frameworks
    • Offer
    • Tasks
    • Zookeeper

Kubernetes
• Kubernetes architecture
  – External request
  – Master node
    • kube-apiserver
    • etcd
    • kube-controller-manager
    • kube-scheduler
  – Worker nodes
    • kubelet
    • kube-proxy
    • Container runtime
    • supervisord
    • fluentd
  • 194. • Nova – = a compute service that provides APIs to manage VMs. – supports provisioning of machine containers using two libraries, that is, LXC and OpenVZ (Virtuozzo). • These container related libraries are supported by libvirt, which Nova uses to manage virtual machines. • Heat – = an orchestration service. – Users need to enable plugins for Docker orchestration in Heat. • Magnum – = a container infrastructure management service. – Magnum provides APIs to deploy Kubernetes, Swarm, and Mesos clusters on OpenStack infrastructure. – Magnum uses Heat templates to deploy these clusters.
  • 195. • Zun – = a container management service for that provides APIs to manage life cycle of containers in OpenStack's cloud. – Currently, Zun provides the support to run containers on bare metals, but in the future, it may provide the support to run containers on virtual machines created by Nova. – Zun uses Kuryr to provide neutron networking to containers. Zun uses Cinder for providing persistent storage to containers. • Kuryr – = a Docker network plugin that provides networking services to Docker containers using Neutron. • Kolla – = a project to which it deploys OpenStack Controller plane services within Docker containers. – Kolla simplifies deployment and operations by packaging each controller service as a micro-service inside a Docker container.
  • 196. • Murano – = provides an application catalog for app developers and cloud administrators to publish cloud-ready applications in a repository available within the OpenStack Dashboard (Horizon); the applications can be run inside Docker or Kubernetes. – Controls the full life cycle of applications. • Fuxi – = a storage plugin for Docker containers that enables containers to use Cinder volumes and Manila shares as persistent storage. • OpenStack-Helm – provides a framework for operators and developers to deploy OpenStack on top of Kubernetes.
  • 197. Kubernetes Plugin for OpenStack • Targets the Kubernetes interface to the IaaS layer – Supports plugging into many cloud providers: OpenStack, GCE, AWS, Azure, ... – Easily configurable – Leveraged by key Kubernetes components: kube-apiserver, kube-controller-manager, kubelet • The Kubernetes plugin supports – OpenStack Identity – OpenStack Networking – OpenStack Storage • Code: – gophercloud repo: https://github.com/rackspace/gophercloud – Kubernetes repo: https://github.com/kubernetes/kubernetes • pkg/cloudprovider/providers/openstack • pkg/volume/cinder – OpenStack repo: http://git.openstack.org/cgit/openstack/k8s-cloud-provider • The Magnum project delivers the integration of Kubernetes and OpenStack
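A minimal sketch of wiring the (legacy in-tree) OpenStack cloud provider into Kubernetes; the Keystone endpoint, credentials, and project names below are placeholders:
# /etc/kubernetes/cloud.conf
[Global]
auth-url = https://keystone.example.com:5000/v3
username = kube
password = secret
tenant-name = kube-project
domain-name = Default
region = RegionOne

[BlockStorage]
bs-version = v2            # Cinder API version used for dynamic volume provisioning

# then start kube-apiserver, kube-controller-manager, and kubelet with:
#   --cloud-provider=openstack --cloud-config=/etc/kubernetes/cloud.conf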
  • 198. Identity Management Integration • Keystone: – a robust identity service, fully populated by the cloud provider – provides integration with multiple LDAP and MS AD backends, plus federated identity support • How Kubernetes services access OpenStack services – Code: the gophercloud package – Establishes a session to access Neutron, Cinder, ... – Keystone trust IDs for better security: automated by Magnum
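A sketch of how a Go component might open such a session with gophercloud (using the current gophercloud import path rather than the older rackspace repo linked above; endpoint and credentials are placeholders, error handling abbreviated):
package main

import (
	"fmt"

	"github.com/gophercloud/gophercloud"
	"github.com/gophercloud/gophercloud/openstack"
)

func main() {
	opts := gophercloud.AuthOptions{
		IdentityEndpoint: "https://keystone.example.com:5000/v3",
		Username:         "kube",
		Password:         "secret",
		DomainName:       "Default",
		TenantName:       "kube-project",
	}
	// Authenticate against Keystone and obtain a token-backed provider client.
	provider, err := openstack.AuthenticatedClient(opts)
	if err != nil {
		panic(err)
	}
	region := gophercloud.EndpointOpts{Region: "RegionOne"}
	neutron, _ := openstack.NewNetworkV2(provider, region)     // Neutron service client
	cinder, _ := openstack.NewBlockStorageV2(provider, region) // Cinder service client
	fmt.Println("Neutron endpoint:", neutron.Endpoint)
	fmt.Println("Cinder endpoint:", cinder.Endpoint)
}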
  • 199. Kubernetes as a Service
  • 200. • OpenStack provides the blueprint for software-defined infrastructure (SDI): container deployment + DevOps
  • 201. Kops • Overview – A client for provisioning Kubernetes clusters – Follows the Kubernetes design philosophy • Declarative • Operator controlled – Support for most providers – www.github.com/kubernetes/kops • Motivations – Instance Groups • Multiple node flavors • Multiple node images • Declarative node taints/labels – etcd • backed by disks and supports snapshots • specified by disk labels, allowing migration
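A minimal sketch of the declarative cluster lifecycle with kops (AWS is assumed as the provider; the cluster name and S3 state store are placeholders):
export KOPS_STATE_STORE=s3://example-kops-state
kops create cluster --name=k8s.example.com \
  --zones=us-east-1a --node-count=2 --node-size=t2.medium   # writes the desired cluster spec to the state store
kops edit ig nodes --name=k8s.example.com                    # edit an instance group (flavors, images, labels) declaratively
kops update cluster --name=k8s.example.com --yes             # reconcile cloud resources to the desired state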
  • 203. Magnum • What is Magnum? – An OpenStack API service that creates container clusters – Uses Keystone credentials – Lets users select a cluster type – Supports multi-tenancy – Can create multi-master clusters
  • 204. • Key concepts – COE • Container Orchestration Engine • Examples: – Docker Swarm – Kubernetes – Apache Mesos – DC/OS
  • 207. – Native Client • The client distributed with the COE (e.g., docker, kubectl) • Not an OpenStack client • Authenticates using TLS
  • 208. – Key Magnum features • Provides a standard API for complete life cycle management of COEs • Supports multiple COEs such as Kubernetes, Swarm, Mesos, and DC/OS • Supports scaling a cluster up or down • Supports multi-tenancy for container clusters • Offers different container cluster deployment models: VM or bare metal • Provides Keystone-based multi-tenant security and auth management • Neutron-based multi-tenant network control and isolation • Supports Cinder to provide volumes for containers • Integrated with OpenStack • Secure container cluster access via Transport Layer Security (TLS) • External infrastructure can also be used by the cluster, such as DNS, public networks, public discovery services, a Docker registry, load balancers, and so on • Barbican provides storage for secrets such as the certificates used for TLS within the cluster • Kuryr-based networking for container-level isolation
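A sketch of the tenant workflow through the OpenStack CLI (the image, keypair, network, and flavor names are illustrative and depend on the cloud):
openstack coe cluster template create k8s-template \
  --image fedora-atomic-latest --keypair mykey \
  --external-network public --flavor m1.small \
  --network-driver flannel --coe kubernetes          # define the cluster blueprint

openstack coe cluster create demo-cluster \
  --cluster-template k8s-template --master-count 1 --node-count 2   # Heat builds the VMs and installs the COE

openstack coe cluster config demo-cluster            # writes TLS certs and a kubeconfig for the native client (kubectl)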
  • 209. • Components – Magnum API • = a WSGI server that serves the API requests that users send. • The Magnum API has a controller for each resource: – Baymodel, Bay (Baymodel and Bay are being replaced by cluster template and cluster, respectively) – Certificate, Cluster, Cluster template – Magnum services, Quota, Stats • Each controller handles requests for its specific resource. – Magnum conductor • = an RPC server that provides coordination and database query support for Magnum. • It is stateless and horizontally scalable, meaning multiple instances of the conductor service can run at the same time.
  • 212. Zun • Overview – A container management service that provides APIs to manage containers abstracted by different technologies at the backend. • Zun supports Docker as its container runtime. • Zun integrates with many OpenStack services – Zun has various add-ons over Docker, which make it a powerful solution for container management.
  • 213. • Key features – Provides a standard API for container life cycle management – Provides Keystone-based multi-tenant security and auth management – Supports Docker with runc and Clear Containers for managing containers – Supports Cinder to provide volumes for containers – Kuryr-based networking for container-level isolation – Supports container orchestration via Heat – Container composition, known as capsules, lets users run multiple containers with related resources as a single unit – Supports SR-IOV, which enables a physical PCIe device to be shared across VMs and containers – Supports interactive sessions with containers – Allows users to run heavy workloads with dedicated resources by exposing CPU sets
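A sketch of the basic container lifecycle through Zun's plugin for the OpenStack client (the container name and image are illustrative):
openstack appcontainer run --name web01 nginx   # create and start a container from the nginx image
openstack appcontainer list                     # list containers in the current project
openstack appcontainer show web01               # inspect status, addresses, and host placement
openstack appcontainer delete web01             # remove the container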
  • 214. Kuryr • Kuryr – = a Docker network plugin that uses OpenStack Neutron to provide networking services to Docker containers. – maps container network abstractions to Neutron APIs. • Security groups • Subnet pools • NAT (SNAT/DNAT, floating IPs) • Port security (ARP spoofing protection) • Quality of Service (QoS) • Quota management • Neutron pluggable IPAM • Well-integrated COE load balancing via Neutron • FWaaS for containers
  • 215. • Kuryr architecture – Maps Docker libnetwork to the Neutron API
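With the kuryr-libnetwork plugin installed on a Docker host, a Neutron-backed network is consumed through the normal Docker workflow (the subnet, gateway, and network name below are illustrative):
docker network create --driver kuryr --ipam-driver kuryr \
  --subnet 10.10.0.0/24 --gateway 10.10.0.1 kuryr_net   # creates a matching Neutron network and subnet
docker run --net kuryr_net -d nginx                      # the container is attached to a Neutron port on that network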
  • 216. Murano • Overview – = an application catalog service – Enables cloud-ready applications to be easily deployed on OpenStack. • It is an integration point for external applications and OpenStack, with support for complete application life cycle management. – Environment – Package – Session – Deployments – Bundle – Categories
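A sketch of publishing and consuming a catalog application with the Murano client (the package archive and environment name are placeholders):
murano package-import com.example.WebApp.zip   # publish an application package to the catalog
murano environment-create demo-env              # create an environment to deploy applications into
murano environment-list                         # confirm the environment and its deployment status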
  • 217. Kolla • Overview – Addresses the complexity of deploying and managing an OpenStack cloud • Features – Runs OpenStack services as containers – Uses Ansible to install container images and to deploy or upgrade an OpenStack cluster – Kolla containers are configured to store data on persistent storage, which can then be mounted back onto the host operating system and restored to protect against faults.
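A minimal kolla-ansible deployment sketch (the inventory file path is an assumption; globals.yml and passwords must already be prepared):
pip install kolla-ansible
kolla-ansible -i ./multinode bootstrap-servers   # prepare the target hosts (Docker, dependencies)
kolla-ansible -i ./multinode prechecks           # validate the environment before deploying
kolla-ansible -i ./multinode deploy              # start OpenStack control plane services as containers
kolla-ansible -i ./multinode post-deploy         # generate the admin credentials file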
  • 220. Acknowledgement • Figure sources include: – www.kubernetes.io – Marko Luksa, “Kubernetes in Action”, 2018 – www.openstack.org