5. Container Review
• Key concepts
– Process isolation using Linux namespaces
• Mount (mnt), Process ID (pid)
• Network (net), Inter-process communication (ipc)
• UTS, User ID (user)
– Limited resource allocation using cgroups
– Image layers
• VMs vs. containers
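As a quick illustration of the namespace concept above, every Linux process carries a set of namespace references that can be inspected under /proc (a minimal sketch; the exact set of entries varies by kernel version):

```shell
# List the namespaces the current shell process belongs to.
# Containers work by giving a process fresh instances of (some of) these.
# Typically includes: cgroup ipc mnt net pid user uts
ls /proc/self/ns
```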
6. • The Docker container platform
– Docker concepts
– Images
– Registries
– Containers
– Building and distributing Docker images
7. • Docker Engine and K8s
– dockerd
– containerd
– containerd-shim
– "sleep 60" (the desired process in the container)
8. How it all works in Kubernetes
– Container Runtime Interface (CRI)
– CNI (Container Network Interface)
– gRPC (Remote Procedure Calls)
9. Orchestration
• Concept
– the automated configuration, coordination, and management of
computer systems and software (Wikipedia)
• Main tasks of an orchestrator
– Reconciling the desired state
– Replicated and global services
– Service discovery
– Routing
– Load balancing
– Scaling
– Self-healing
– Zero downtime
– Affinity and location awareness
– Security
• Secure communication and cryptographic node identity
• Secure networks and network policies
• RBAC, Secrets, Content trust, Reverse uptime
– Introspection
10. YAML files
• Overview
– a human-readable data-serialization language
– a superset of JSON
– outline indentation and whitespace are significant
– the basic building block of YAML is the key-value pair
– https://yaml.org/spec/1.2/spec.html
• Basic Rules
• YAML is case sensitive.
• YAML does not allow use of tabs.
• Data Types
– YAML excels at working with mappings (hashes / dictionaries),
sequences (arrays / lists), and scalars (strings / numbers).
11. • Scalars
– Strings, numbers, booleans, integers.
– often called variables in programming.
– Most scalars are unquoted, but if you are typing a string that
uses punctuation and other elements that can be confused with
YAML syntax (dashes, colons, etc.) you may want to quote this
data using single ' or double " quotation marks.
• Double quotation marks allow you to use escape sequences to
represent ASCII and Unicode characters.
• Only 2 types of structures in YAML
– Lists
• literally a sequence of objects. For example:
• members of the list can also be maps:
– Maps
• name-value pairs
integer: 25
string: "25"
float: 25.0
boolean: Yes
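The two structures above can be nested freely; a small illustrative document (the names are made up for this example):

```yaml
# A map whose values include scalars, a list, and a nested map.
server:
  host: "web-01"      # quoted string (contains no YAML syntax, but quoting is safe)
  port: 8080          # integer
  debug: false        # boolean
  tags:               # list of scalars
    - frontend
    - production
  limits:             # nested map
    cpu: 2
    memory: 512
```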
21. • Kubernetes from the top of a mountain
– Help developers focus on the core app features
– Help ops teams achieve better resource utilization
K8s exposes the whole datacenter as a single deployment platform.
23. • A k8s cluster is composed of nodes, split into 2 types:
– master node hosts the Kubernetes Control Plane
• controls and manages whole Kubernetes system
– Worker nodes run the actual applications you deploy
Components of a k8s cluster
24. – Control Plane
• Components:
– Kubernetes API Server
– Scheduler, which schedules your apps (assigns a worker node to each
deployable component of your application)
– Controller Manager, which performs cluster-level functions, such as
replicating components, keeping track of worker nodes, handling node
failures, and so on
– etcd = a distributed data store that persistently stores cluster
configuration.
– (Worker) Nodes
• = machines that run containerized applications.
– Docker, rkt, or another container runtime, which runs your containers
– Kubelet, which talks to the API server and manages containers on its
node
– Kubernetes Service Proxy (kube-proxy), which load-balances network
traffic between application components
27. • Comparison of SwarmKit and Kubernetes
SwarmKit      Kubernetes       Description
Swarm         Cluster          Set of servers/nodes managed by the respective orchestrator.
Node          Cluster member   Single host (physical or virtual) which is a member of the
                               swarm/cluster.
Manager node  Master           Node managing the swarm/cluster. This is the control plane.
Worker node   Node             Member of the swarm/cluster running application workload.
Container     Container**      Instance of a container image running on a node. (** In a
                               Kubernetes cluster, we cannot run a container directly.)
Task          Pod              Instance of a service (Swarm) or ReplicaSet (Kubernetes) running
                               on a node. A task manages a single container, while a Pod contains
                               one to many containers that all share the same network namespace.
28. SwarmKit      Kubernetes       Description
Service       ReplicaSet       Defines and reconciles the desired state of an application
                               service consisting of multiple instances.
Service       Deployment       A deployment is a ReplicaSet augmented with rolling update
                               and rollback capabilities.
Routing Mesh  Service          The Swarm Routing Mesh provides L4 routing and load
                               balancing using IPVS.
Stack         Stack**          Definition of an application consisting of multiple (Swarm)
                               services.
Network       Network policy   Swarm software-defined networks (SDNs) are used to firewall
                               containers. Kubernetes only defines a single flat network.
29. • Application deployment and updates with Kubernetes
– Deploy a first application
• Deploy the web component
• Deploy the database
• Streamline the deployment
– Zero downtime deployments
• Rolling updates
• Blue–green deployment
– Kubernetes secrets
• Manually defining secrets
• Creating secrets with kubectl
• Using secrets in a pod
• Secret values in environment variables
32. Creating and running container images
• Docker
– Hello World container
• Node.js app example
– Node.js app
– Dockerfile
– Build image
• Image layers
– Running a container
– Inspecting a container
– Stopping & removing a container
– Pushing to a registry
$ docker build -t mykub .
$ docker inspect mykub-container
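The build step above assumes a Dockerfile in the current directory. A minimal sketch for the Node.js app (the `app.js` file name and the `node` base image tag are assumptions, following the pattern in the slides):

```dockerfile
# Start from a Node.js base image.
FROM node:12
# Copy the application source into the image (adds a new image layer).
ADD app.js /app.js
# Run the app when a container is started from this image.
ENTRYPOINT ["node", "app.js"]
```

With this file in place, `docker build -t mykub .` builds the image and `docker run --name mykub-container -p 8080:8080 -d mykub` would run it.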
33. Kubernetes cluster
• Minikube
– Local single-node Kubernetes cluster
– Running a Kubernetes cluster with Minikube
– Installing the Kubernetes client (kubectl)
• Verifying after installation
$ minikube start
Starting local Kubernetes cluster...
Starting VM...
SSH-ing files into VM...
...
Kubectl is now configured to use the cluster.
34. • Kubernetes clusters on a public cloud
– (the GKE case)
• Procedure: …
• Creating a 3-node Kubernetes cluster
$ gcloud container clusters create mykub --num-nodes 3 \
--machine-type f1-micro
35. • Preparing to use kubectl
– Creating an alias
– Configuring tab completion for kubectl
alias k=kubectl
36. Running an app on Kubernetes
• Deploying Node.js app
• Pods
– = group of one or more tightly related containers that always
run together on the same worker node and in the same Linux
namespace(s).
– Each pod is like a separate logical machine with its own IP,
hostname, processes, and so on, running a single application.
$ kubectl run mykub --image=hk/mykub --port=8080 --generator=run/v1
replicationcontroller "mykub" created
37. • Web application
– expose it through a Service object.
• A LoadBalancer-type service provisions an external load balancer.
– connect to the pod through the load balancer's public IP.
• Creating the Service object
– Verifying the Service through its external IP
38. • ReplicationController, Pod, and Service
– Creating a ReplicationController with the kubectl run command
• The ReplicationController creates the actual Pod object.
• Why Services are needed, and their role
– Pods are ephemeral, so pod IP addresses are ever-changing.
– Services expose multiple pods at a single constant IP:port pair.
• a static IP during the lifetime of the service.
42. Main topics
• Creating pods from YAML or JSON descriptors
• Using labels
– Listing subsets of pods through label selectors
– Managing pod scheduling with labels and selectors
• Annotations
• Grouping resources with namespaces
• Deleting pods
43. Overview of Pods
Figure 3.1. All containers of a pod run on the same node.
A pod never spans two nodes.
44. • Why pods are needed (their role)
– Multiple containers
• Containers are designed to run a single process per container.
– A pod with multiple containers runs on a single worker node.
• Run closely related processes together and provide them the same environment.
• Partial isolation between containers of the same pod
– As all containers of a pod run under the same Network and UTS
namespaces (Linux ns), they all share the same hostname and NICs.
– All containers of a pod run under the same IPC namespace.
• Containers share the same IP and port space
– Because containers in a pod run in the same Network namespace,
they share the same IP address and port space.
45. – Flat inter-pod network
• All pods in a k8s cluster reside in a single flat, shared, network-
address space. No NAT gateways between them.
46. • Distributing containers appropriately across pods
– Split multi-tier apps into multiple pods
– Split into multiple pods to enable individual scaling
– Run containers in separate pods, unless a specific reason
requires otherwise
47. Creating pods from a YAML descriptor
• YAML descriptor of a pod
– Pod definition – 3 sections in Kubernetes resources:
• Metadata
• Spec
• Status
48. • Hands-on: creating a pod from a YAML descriptor
– The YAML file
– Creating the pod
– The container port
– Sending requests to the pod
$ kubectl create -f mypod.yaml
$ kubectl explain pods
$ kubectl explain pod.spec
$ kubectl get pods
$ kubectl port-forward mypod 8888:8080
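A minimal `mypod.yaml` matching the commands above might look like this (the image name is an assumption carried over from the earlier slides):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: mykub
    image: hk/mykub
    ports:
    - containerPort: 8080   # the port the app listens on
```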
49. Using labels
• The label concept
– an arbitrary key-value pair you attach to a resource
– label selectors (resource filtering).
50. • Label selector
– Whether the resource
• Contains (or doesn’t contain) a label with a certain key
• Contains a label with a certain key and value
• Contains a label with a certain key, but with a value not equal to
the one you specify
– Label selectors:
• creation_method!=manual
• env in (prod,devel)
• env notin (prod,devel)
• Categorizing worker nodes with labels
– Scheduling pods to specific nodes
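Scheduling to specific nodes combines a node label with a pod-level nodeSelector; a sketch (the label key and value are hypothetical):

```yaml
# First label a node:  kubectl label node node1.example.com gpu=true
apiVersion: v1
kind: Pod
metadata:
  name: mykub-gpu
spec:
  nodeSelector:
    gpu: "true"        # only nodes carrying this label are eligible
  containers:
  - name: mykub
    image: hk/mykub
```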
51. Annotations
• Annotations
– = key-value pairs
– hold much larger pieces of information
– used for tool building or new features
• Adding, modifying, and inspecting an object's annotations
52. Namespace
• Namespace
– Kubernetes namespaces provide a scope for object names.
• (cf. Linux namespaces to isolate processes from each other.)
– Why needed: to split up resources in a multi-tenant environment
• Creating a namespace
• Grouping resources with namespaces
$ kubectl get ns
apiVersion: v1
kind: Namespace
metadata:
  name: custom-namespace
53. Deleting pods
• Deletion
– Deleting a pod by name
– Deleting pods using label selectors
– Deleting a whole namespace
– Deleting everything
$ kubectl delete po mykub-gpu
pod "mykub-gpu" deleted
55. Main topics
• Checking pod health
• ReplicationControllers
• ReplicaSets
• DaemonSets
• Jobs for completable tasks
• Scheduling Jobs to run periodically
56. Checking pod health
• Liveness probes overview
– check whether a container is still alive through liveness probes.
• Probe periodically and restart the container if the probe fails.
– Mechanisms
• HTTP GET probe
• TCP Socket probe
• Exec probe
• Using liveness probes
– What a liveness probe should check
• Check only the internals of the app
– Do not implement retry loops in probes
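An HTTP GET liveness probe as described above is declared in the pod spec; a sketch (the path and timing values are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mykub-liveness
spec:
  containers:
  - name: mykub
    image: hk/mykub
    livenessProbe:
      httpGet:                  # kubelet performs an HTTP GET periodically
        path: /health           # should check only the app's internals
        port: 8080
      initialDelaySeconds: 15   # give the app time to start first
```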
57. ReplicationControllers
• ReplicationController
– ensures its pods are always kept running.
– Features
• makes sure a pod is always running by starting a new pod.
• When a cluster node fails, it creates replacement replicas.
• enables horizontal scaling of pods—both manual and automatic
58. – 3 parts of a ReplicationController
• label selector
• replica count
• pod template
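The three parts appear directly in the manifest; a sketch of a ReplicationController (names reused from the earlier examples):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: mykub
spec:
  replicas: 3            # (2) replica count: desired number of pods
  selector:              # (1) label selector: which pods it manages
    app: mykub
  template:              # (3) pod template: used to create new pods
    metadata:
      labels:
        app: mykub       # must match the selector above
    spec:
      containers:
      - name: mykub
        image: hk/mykub
```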
68. Main topics
• Overview
• Exposing services to external clients
• The Ingress resource
• Signaling when a pod is ready to accept connections
• Discovering individual pods with a headless service
69. Overview of Kubernetes services
• Overview
– a resource you create to make a single, constant point of
entry to a group of pods providing the same service.
– Each service has an IP address and port that never change
while the service exists.
– enable clients to discover and talk to pods
• Service Types
70. – Examples:
• External clients need to connect to frontend pods …
• Frontend pods need to connect to backend database ...
71. Creating and using Services
• Creating Services
– creation via kubectl expose
– creation via a YAML descriptor
apiVersion: v1
kind: Service
metadata:
  name: mykub
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: mykub
72. • Using the Service
– Examining the created Service
– Remotely executing commands in running containers
– Session affinity
• Redirect all requests from a particular client to the same pod
• Set the sessionAffinity property to ClientIP
– Exposing multiple ports
– Using named ports
$ kubectl exec mykub-7nog1 -- curl -s http://10.111.249.153
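The sessionAffinity setting is a one-line addition to the Service spec; a sketch reusing the earlier Service:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mykub
spec:
  sessionAffinity: ClientIP   # route all requests from one client IP to the same pod
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: mykub
```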
73. Service Discovery
• Service discovery through environment variables
• Service discovery through DNS
– using the FQDN
• (frontend-backend 예에서) - frontend pod can connect to
backend DB service by opening a connection to:
backend-database.default.svc.cluster.local
74. Connecting to external services
• Connecting to services outside the cluster
– Goal: achieve service load balancing and service discovery
by redirecting to external IP(s):port(s)
• Service endpoints
• Endpoints Resource sits in between
75. • Manually configuring service endpoints
– Create both a Service and an Endpoints resource
– Creating an alias for an external service
• an FQDN can be used
A service backed by two external endpoints
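Manually configured endpoints pair a selector-less Service with an Endpoints object of the same name; a sketch for the two-external-endpoint case (the IP addresses are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: external-service   # no selector: endpoints are managed manually
spec:
  ports:
  - port: 80
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-service   # must match the Service name
subsets:
- addresses:
  - ip: 11.11.11.11        # first external endpoint (placeholder)
  - ip: 22.22.22.22        # second external endpoint (placeholder)
  ports:
  - port: 80
```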
76. • Exposing services to external clients
– Set the service type to NodePort
– Set the service type to LoadBalancer, an extension of the NodePort type
– Create an Ingress resource to expose multiple services through a
single IP address
77. • Using a NodePort service
– make Kubernetes reserve a port on all its nodes and
forward incoming connections to the pods that are part of the service.
– Creating and examining a NodePort service
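A NodePort service sketch (the nodePort value is illustrative; omitting it lets Kubernetes choose one):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mykub-nodeport
spec:
  type: NodePort
  ports:
  - port: 80           # the service's cluster-internal port
    targetPort: 8080   # the pods' port
    nodePort: 30123    # reserved on every node in the cluster
  selector:
    app: mykub
```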
78. • Exposing a service through an external load balancer
• Miscellaneous
– Preventing unnecessary network hops
79. Ingress resource
• Why needed
– Exposing services externally through an Ingress resource
– Ingresses operate at the application layer (HTTP) and provide
features such as cookie-based session affinity
• Creation and use
$ kubectl get ingresses
$ curl http://mykub.example.com
80. • Exposing multiple services through a single Ingress
• Mapping different services to different paths of the same host
• Handling TLS traffic with an Ingress
spec:
  rules:
  - host: foo.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: foo
          servicePort: 80
  - host: bar.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: bar
          servicePort: 80
81. • Readiness probes
– periodically check whether a pod can receive requests
– Types: same as for liveness probes
• Exec probe
• HTTP GET probe
• TCP Socket probe
82. • Using readiness probes
– Adding a readiness probe to a pod
– Modifying the readiness status of an existing pod
83. Headless services
• Purpose
– Discovering individual pods with a headless service
• Creating a headless service
– Set the clusterIP field to None in the Service spec
• K8s won't assign it a cluster IP
• Discovering pods through DNS
• DNS A records
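A headless service sketch; with clusterIP set to None, a DNS lookup for the service returns the individual pods' A records instead of a single cluster IP:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mykub-headless
spec:
  clusterIP: None      # headless: no cluster IP is assigned
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: mykub
```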
85. Main topics
• Overview
• Using volumes
– Accessing files on the worker node's filesystem
• Decoupling storage from pods
– PersistentVolumes and PersistentVolumeClaims
• Miscellaneous
– Dynamic provisioning of PersistentVolumes
86. Overview
• Kubernetes volumes
– a component of a pod and are thus defined in the pod’s
specification—much like containers.
– A volume is available to all containers in the pod, but it must
be mounted in each container.
– Example:
88. Using volumes
• Sharing data between containers with a volume
– using an emptyDir volume
– using a Git repository
• The volume is initialized by checking out the contents of a Git repository.
• Sidecar containers
– A container augmenting the operation of the main container of the pod.
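An emptyDir volume shared by a main container and a sidecar might look like this (the container images and mount paths are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: fortune
spec:
  containers:
  - name: html-generator        # main container writes content
    image: luksa/fortune        # illustrative image name
    volumeMounts:
    - name: html
      mountPath: /var/htdocs
  - name: web-server            # sidecar serves the same files
    image: nginx:alpine
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
      readOnly: true
  volumes:
  - name: html
    emptyDir: {}                # created empty when the pod starts
```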
91. • PersistentVolumeClaims
– creating one is a completely separate process from creating a pod
– the PersistentVolumeClaim stays available even if the pod is rescheduled
• Uses and benefits
92. • Dynamic provisioning of PersistentVolumes
– Instead of the administrator pre-provisioning PersistentVolumes,
specify a StorageClass and create new PersistentVolumes on
demand through PersistentVolumeClaims
– Requesting a storage class in a PersistentVolumeClaim
• Creating the PVC definition
– Dynamic provisioning without specifying a storage class
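A PVC requesting a named storage class; the class name is hypothetical, and omitting storageClassName falls back to the cluster's default class for dynamic provisioning:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mykub-pvc
spec:
  storageClassName: fast     # hypothetical StorageClass name
  resources:
    requests:
      storage: 1Gi           # size of the volume to provision
  accessModes:
  - ReadWriteOnce            # mountable read/write by a single node
```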
94. Main topics
• Overview
– Configuring containerized applications
• In general
– Passing command-line arguments
– Setting environment variables for containers
• Using ConfigMaps
• Using Secrets
95. Overview
• Configuring containerized applications
– Command-line arguments
– Environment variables
– Regardless of using a ConfigMap to store configuration data
or not, you can configure your apps by
• Passing command-line arguments to containers
• Setting custom environment variables for each container
• Mounting configuration files into containers through a special
type of volume
96. • Passing command-line arguments to containers
– Defining the command and arguments in Docker
• ENTRYPOINT and CMD in a Dockerfile
– ENTRYPOINT defines executable invoked when container is started.
– CMD specifies the arguments that get passed to the ENTRYPOINT.
– shell form vs. exec form
– shell form — for example, ENTRYPOINT node app.js
– exec form — for example, ENTRYPOINT ["node", "app.js"]
– determines whether the specified command is invoked inside a shell or not.
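Combining the two directives, CMD supplies a default argument that anything after the image name in `docker run` can override, while ENTRYPOINT stays fixed; a sketch (the `--port` argument is hypothetical and assumes the app parses it):

```dockerfile
FROM node:12
ADD app.js /app.js
# Fixed executable, exec form (no shell in between).
ENTRYPOINT ["node", "app.js"]
# Default argument; overridden by any arguments given to docker run.
CMD ["--port=8080"]
```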
97. • Kubernetes vs. Docker
Docker       Kubernetes   Description
ENTRYPOINT   command      The executable that runs inside the container
CMD          args         The arguments passed to the executable
98. • Setting environment variables for containers
– Specifying environment variables in the container definition
– Referring to other environment variables in a variable’s value
99. ConfigMap
• ConfigMaps overview
– Decoupling configuration
– Having config in a separate standalone object
– Move the configuration out of the pod description.
100. • Creating a ConfigMap
– the kubectl create configmap command
• Creating a ConfigMap entry from the contents of a file
• Creating a ConfigMap from files in a directory
• Combining different options
– Passing a ConfigMap entry to a container as an environment variable
$ kubectl create configmap fortune-config --from-literal=sleep-interval=25
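The created entry can be injected into a container as an environment variable via valueFrom; a sketch using the fortune-config map from the command above:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: fortune-env
spec:
  containers:
  - name: mykub
    image: hk/mykub
    env:
    - name: INTERVAL                 # variable name seen by the app
      valueFrom:
        configMapKeyRef:
          name: fortune-config       # the ConfigMap created above
          key: sleep-interval        # the entry within it
```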
103. Secrets
• Overview
– Pass sensitive data to containers
– key-value pairs
• Pass Secret entries to the container as environment variables
• Expose Secret entries as files in a volume
– Secrets are always stored in memory only.
– etcd stores Secrets in encrypted form
• Comparing ConfigMaps and Secrets
• Using Secrets for binary data
– Base64 encoding
104. – the default token Secret
– Creating a Secret
• Serving HTTPS traffic
• Generating the certificate and private key files
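Since Secret values are Base64-encoded rather than encrypted, the encoding used in a Secret manifest's data section can be produced and reversed with the base64 tool:

```shell
# Encode a value for use in a Secret manifest (GNU coreutils base64).
echo -n 'admin' | base64          # -n: no trailing newline in the encoding
# prints: YWRtaW4=

# Decode it back.
echo 'YWRtaW4=' | base64 --decode
# prints: admin
```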
106. Main topics
• Updating applications running in pods
• Using ReplicationControllers
– Automatic rolling updates
• Declarative updates with Deployments
107. Overview
• Overview
– Updating an application running in pods
– 2 ways
• Delete all existing pods first, then start new ones.
• Start new ones, then delete the old ones.
– Either by adding
all new pods and then deleting all the old ones
or
sequentially, by adding new pods and removing old ones gradually
112. Deployments
• Background
– Limitations of kubectl rolling-update
• (i) k8s modifies objects I created
• (ii) I had to explicitly say that the kubectl client should perform …
• (iii) It's imperative
• Deployment
– = a higher-level resource for deploying/updating applications
– declarative updates
• instead of doing it through a ReplicationController or a
ReplicaSet, which are both considered lower-level concepts.
113. • Creating a Deployment
– The Deployment manifest
– Creating the Deployment resource
• First delete the running ReplicationController and pods
• Updating a Deployment
– Deployment strategies
• Recreate strategy
• RollingUpdate strategy
– Triggering a rolling update
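A Deployment manifest sketch with an explicit RollingUpdate strategy (the image name and tag are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mykub
spec:
  replicas: 3
  selector:
    matchLabels:
      app: mykub
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most 1 extra pod during the update
      maxUnavailable: 0    # never drop below the desired count
  template:
    metadata:
      labels:
        app: mykub
    spec:
      containers:
      - name: mykub
        image: hk/mykub:v1   # changing this tag triggers a rolling update
```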
114. • Ways of modifying Deployments and other resources
Method             Description
kubectl edit       Opens the object's manifest in the default editor.
kubectl patch      Modifies individual properties of an object.
                   e.g. kubectl patch deployment ~~
kubectl apply      Modifies the object by applying property values from a
                   YAML or JSON file.
kubectl replace    Replaces the object with a new one from a YAML/JSON file.
kubectl set image  Changes the container image defined in a Pod, Deployment, etc.
115. • Rolling back a Deployment
• Undoing a rollout
• Rollout history
• Pausing the rollout process
– Pausing the rollout
– Resuming the rollout
116. • Controlling the rate of the rollout
– Rollout rate properties:
Property        What it does
maxSurge        How many pod instances you allow to exist above the
                desired replica count configured on the Deployment.
                Default: 25%.
maxUnavailable  Determines how many pod instances can be unavailable
                relative to the desired replica count during the update.
                Default: 25%.
118. Kubernetes Components
• Master Components
– provide the cluster’s control plane.
– make global decisions about the cluster (e.g. scheduling), and
they detect and respond to cluster events (e.g. starting up a
new pod when a replication controller's replicas field is
unsatisfied).
– kube-apiserver
– etcd
– kube-scheduler
– kube-controller-manager
– cloud-controller-manager
119. • kube-apiserver
– Component on the master that exposes the Kubernetes API. It
is the front-end for the Kubernetes control plane.
– It is designed to scale horizontally.
• etcd
– Consistent and highly-available key value store used as
Kubernetes’ backing store for all cluster data.
– If your Kubernetes cluster uses etcd as its backing store, make
sure you have a backup plan for those data.
– You can find in-depth information about etcd in the official
documentation.
120. • kube-scheduler
– Component on the master that watches newly created pods
that have no node assigned, and selects a node for them to
run on.
– Factors taken into account for scheduling decisions include
individual and collective resource requirements,
hardware/software/policy constraints, affinity and anti-affinity
specifications, data locality, inter-workload interference and
deadlines.
121. • kube-controller-manager
– Component on the master that runs controllers .
– Logically, each controller is a separate process, but to reduce
complexity, they are all compiled into a single binary and run
in a single process.
• Node Controller: Responsible for noticing and responding when
nodes go down.
• Replication Controller: Responsible for maintaining the correct
number of pods for every replication controller object in the
system.
• Endpoints Controller: Populates the Endpoints object (that is,
joins Services & Pods).
• Service Account & Token Controllers: Create default accounts
and API access tokens for new namespaces.
122. • cloud-controller-manager
– cloud-controller-manager runs controllers that interact with
the underlying cloud providers.
– cloud-controller-manager runs cloud-provider-specific
controller loops only. You must disable these controller loops
in the kube-controller-manager.
• Node Controller: For checking the cloud provider to determine if
a node has been deleted in the cloud after it stops responding
• Route Controller: For setting up routes in the underlying cloud
infrastructure
• Service Controller: For creating, updating and deleting cloud
provider load balancers
• Volume Controller: For creating, attaching, and mounting
volumes, and interacting with the cloud provider to orchestrate
volumes
123. • Node Components
– run on every node and provide the Kubernetes runtime environment
– kubelet
• An agent that runs on each node in the cluster. It makes sure
that containers are running in a pod.
– kube-proxy
• kube-proxy is a network proxy that runs on each node in the
cluster.
– Container Runtime
• the software that is responsible for running containers.
• Kubernetes supports several container runtimes: Docker,
containerd, cri-o, rktlet and any implementation of the
Kubernetes CRI (Container Runtime Interface)
124. • Addons
– DNS
• While the other addons are not strictly required, all Kubernetes
clusters should have cluster DNS, as many examples rely on it.
• Cluster DNS
– Web UI (Dashboard)
• a general purpose, web-based UI for Kubernetes clusters.
– Container Resource Monitoring
• records generic time-series metrics about containers in a central
database, and provides a UI for browsing that data.
– Cluster-level Logging
• is responsible for saving container logs to a central log store with
search/browsing interface.
125. • Pods
– Comparing Docker container and Kubernetes pod networking
– Sharing the network namespace
– Pod life cycle
– Pod specification
– Pods and volumes
126. • Kubernetes API
– Kubernetes itself is decomposed into multiple components,
which interact through its API.
• API changes
• OpenAPI and Swagger definitions
• API versioning
• API groups
• Enabling API groups
• Enabling resources in the groups
127. • Self-Registration of Nodes
– When kubelet flag --register-node is true (default), the
kubelet will attempt to register itself with the API server.
– For self-registration, kubelet is started with options:
--kubeconfig
--cloud-provider
--register-node
--register-with-taints
--node-labels
--node-status-update-frequency
128. • Cluster nodes – On each node, 3 services need to run:
– Kubelet:
• = primary node agent.
• uses pod specifications to make sure all of the containers of the
corresponding pods are running and healthy.
• defined via YAML or JSON files
– Container runtime:
• Manages and runs individual containers of a pod.
• rkt or CRI-O can also be used.
– kube-proxy:
• runs as a daemon and is a simple network proxy and load
balancer for all application services running on that particular
node.
130. • Pods
– = atomic unit of deployment in Kubernetes.
– = abstraction of one or many co-located containers that share
the same Kernel namespaces, such as the network namespace.
– No equivalent exists in the Docker SwarmKit.
131. • Comparing Docker container and Kubernetes pod networking
• Communication between containers
Containers in Pod sharing network namespace
Containers in pods communicate via localhost
132. • Sharing the network namespace
• Pod life cycle
• Pod specification
• Pods and volumes
133. • Kubernetes ReplicaSet
• defines and manages a collection of identical pods that are
running on different cluster nodes.
– a ReplicaSet defines which container images are used by the
containers running inside a pod and how many instances of the pod
will run in the cluster (the desired state).
– ReplicaSet specification
– Self-healing
137. Architecture
• Components of the Control Plane
• etcd distributed persistent storage
• API server
• Scheduler
• Controller Manager
• Components running on the worker nodes
• Kubelet
• Kubernetes Service Proxy (kube-proxy)
• Container Runtime (Docker, rkt, or others)
• Add-on components
• Kubernetes DNS server
• Dashboard
• Ingress controller
• Heapster, which we’ll talk about in chapter 14
• CNI network plugin
138. • Distributed nature of Kubernetes components
– Checking the status of the Control Plane components
$ kubectl get componentstatuses
139. • etcd
– optimistic concurrency control
– How resources are stored in etcd
– Ensuring consistency when etcd is clustered
$ etcdctl ls /registry
140. • API server
– 역할
• Authenticating the client with authentication plugins
• Authorizing the client with authorization plugins
• Validating and/or Modifying resource in the request with
admission control plugins
141. • Scheduler
– scheduling algorithm
• 2 parts in selection of a node
– Filtering the list of all nodes to obtain a list of acceptable nodes the
pod can be scheduled to.
– Prioritizing the acceptable nodes and choosing the best one. If
multiple nodes have the highest score, round-robin is used to ensure
pods are deployed across all of them evenly.
145. • Container Network Interface
– The CNI project makes it easier to connect containers into a network.
It allows Kubernetes to be configured to use any CNI plugin,
including:
• Calico
• Flannel
• Romana
• Weave Net
• And others
• https://kubernetes.io/docs/concepts/cluster-administration/addons/
148. • OpenStack
– an IaaS platform
– A composable, open infrastructure that provides API-driven
access to compute, storage and networking resources.
– Open-Source project: License Apache 2.0
– Initiated by Nebula (NASA) and Rackspace
– Written in Python, Stable releases every 6 months
Functions      OpenStack
Compute        Nova
Identity       Keystone
Network        Neutron
Storage        Glance, Cinder, Swift
Telemetry      Ceilometer
Orchestration  Heat
Dashboard      Horizon
149. • The foundation
– 10,000 users
• Cloud providers, telcos, banks, governments, etc.
– 1,000 organizations
• Red Hat, IBM, Rackspace, eNovance, etc.
– 100 countries
152. • Network services in OpenStack
– OpenStack manages several physical and virtual network
devices and virtual overlay networks.
– Various interfaces are abstracted by OpenStack API.
– OpenStack can manage many types of network technology.
154. OpenStack components
Project Codename Description
Compute Nova Manages VM resources, including CPU, memory, disk, and
network interfaces.
Networking Neutron Provides resources used by the VM network interface, including
IP addressing, routing, and SDN.
Object Storage Swift Provides object-level storage, accessible via a RESTful API.
Block Storage Cinder Provides block-level (traditional disk) storage to VMs.
Identity Keystone Manages role-based access control (RBAC) for OpenStack
components. Provides authorization services.
Image Service Glance Manages VM disk images. Provides image delivery to VMs and
snapshot (backup) services.
Dashboard Horizon Provides a web-based GUI for working with OpenStack.
Telemetry Ceilometer Collection for metering and monitoring OpenStack components.
Orchestration Heat Template-based cloud application orchestration
155. History of OpenStack
Series  Status       Initial Release Date    Next Phase
Ussuri  Development  2020-05-13 (estimated)  Maintained (estimated 2020-05-13)
Train   Maintained   2019-10-16              Extended Maintenance (estimated 2021-04-16)
Stein   Maintained   2019-04-10              Extended Maintenance (estimated 2020-10-10)
Rocky   Maintained   2018-08-30              Extended Maintenance (estimated 2020-02-24)
https://releases.openstack.org/
157. OpenStack Dashboard
• 3 primary ways to interface with OpenStack:
• Dashboard
• CLI
• APIs
– Regardless of interface method, all interactions will make
their way back to the OpenStack APIs.
160. • Images & Snapshots screen
– <OpenStack image formats>
– RAW
– VHD (Virtual Hard Disk)
– VMDK (Virtual Machine Disk)
– VDI
(Virtual Disk Image or
VirtualBox Disk Image)
– ISO
– QCOW
(QEMU Copy On Write)
– AKI
– ARI
– AMI
161. • Volumes screen
– <Block vs. file vs. object storage>
• 3 categories of typical storage access methods:
– Block
– File
– Object
163. • An OpenStack deployment as a hotel.
• Tenants as hotel rooms.
• The hotel (OpenStack) provides computational resources.
– Just as a hotel room is configurable, so are tenants.
» The number of resources (vCPU, RAM, storage, and the like), images (tenant-specific software
images), and the configuration of the network are all based on tenant-specific configurations.
• Users are independent of tenants, but users may hold roles for
specific tenants.
• Every time a new instance (VM) is created, it must be created in a
tenant.
165. Tenant model
• Tenant model operations
– Users and roles have one-to-many relationships with tenants.
– All resource configuration (users with roles, instances, networks, and
so on) is organized based on tenant separation.
– Roles are defined outside of tenants, but users are created with an
initial tenant assignment.
168. – Creating internal networks
• An internal network works at ISO Layer 2, so this is the virtual
equivalent of providing a network switch to be used exclusively
for a particular tenant.
– Network (Neutron)
• GENERAL_NETWORK created for your tenant.
169. – Creating a router
• Adding the router to a subnet creates a port on the local virtual switch.
174. • Keystone
– Main functions
• Identity provider:
– identity is represented as a user in the form of a name and password.
• API client authentication:
– Keystone can do this by using many third-party backends such as
LDAP and AD. Once authenticated, the user gets a token which
he/she can use to access other OpenStack service APIs.
• Multitenant authorization:
– When a user accesses any OpenStack service, the service verifies the
role of the user and whether he/she can access the resource.
• Service discovery:
– Keystone manages a service catalog in which other services can
register their endpoints.
176. Nova
• Nova
– = a compute service that provides a way to provision compute
instances, also known as virtual machines.
– Main functions
• Create and manage:
– Virtual machines
– Bare metal servers
– System containers
– components communicate internally via RPC message-passing mechanisms.
178. Neutron
• Neutron
– = network service providing networking options.
– uses plugins to provide different network configurations.
• Neutron components:
• Neutron server (neutron-server and neutron-*-plugin):
• Plugin agent (neutron-*-agent):
• DHCP agent (neutron-dhcp-agent):
• L3 agent (neutron-l3-agent):
• Network provider services (SDN server/services):
• Messaging queue:
• Database:
180. Cinder
• Cinder
– = a block storage service which provides persistent block
storage resources for VMs in Nova.
• Cinder uses LVM or other plugin drivers to provide storage.
• Users can use Cinder to create, delete, and attach volumes.
• Advanced features include cloning, extending volumes, snapshots,
and writing images, which can be used as bootable persistent
volumes for VMs and bare metal.
– Cinder components
• cinder-api:
• cinder-scheduler:
• cinder-volume:
• cinder-backup:
182. Glance
• Glance
– = image service which provides discovering, registering, and
retrieving abilities for disk and server images.
– Users can upload and discover data images and metadata
definitions that are meant to be used with other services.
• Glance is a central repository for managing images for VMs,
containers and bare metals.
• Glance has a RESTful API that allows for the querying of image
metadata as well as the retrieval of the actual image.
184. Swift
• Swift
– = object store service used to store redundant, scalable data on
clusters of servers that are capable of storing petabytes of data.
– uses a distributed architecture with no central point of control.
– is ideal for storing unstructured data which can grow without
bounds and can be retrieved and updated.
– Data is written to multiple nodes that extend to different zones
for ensuring data replication and integrity across the cluster.
– Clusters can scale horizontally. In case of node failure, data is
replicated to other active nodes.
– Swift organizes data in a hierarchy: accounts store the list of
containers, containers store lists of objects, and objects store the
actual data along with its metadata.
185. – Swift components
• proxy-servers:
• Rings:
• Zones:
• Accounts:
• Containers:
• Objects:
• Partitions:
• Swift has many other services, such as updaters, auditors, and
replicators, which handle housekeeping tasks to deliver a
consistent object storage solution.
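The rings and partitions listed above can be illustrated with a toy version of Swift's hash-ring lookup. The partition power, node names, and placement rule below are made up; a real ring also weights devices and spreads replicas across zones.

```python
import hashlib

# Sketch of how a Swift-style ring maps an object path to a partition
# and then to storage nodes. Values are illustrative.

PART_POWER = 4                               # 2**4 = 16 partitions
NODES = ["node-a", "node-b", "node-c", "node-d"]
REPLICAS = 3

def partition_for(path):
    digest = hashlib.md5(path.encode()).digest()
    top32 = int.from_bytes(digest[:4], "big")
    return top32 >> (32 - PART_POWER)        # keep the top PART_POWER bits

def nodes_for(path):
    part = partition_for(path)
    # Toy placement: consecutive nodes starting at (partition mod node count),
    # so the REPLICAS copies land on distinct nodes.
    return [NODES[(part + i) % len(NODES)] for i in range(REPLICAS)]

print(nodes_for("/account/container/object"))
```

Because the mapping is pure hashing, any proxy server can compute an object's location without a central lookup service, which is what makes the "no central point of control" claim work.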
186. Wrap-up
Project (Codename): Description
• Compute (Nova): Manages virtual machine (VM) resources, including
CPU, memory, disk, and network interfaces
• Networking (Neutron): Provides resources used by VM network
interfaces, including IP addressing, routing, and SDN
• Object Storage (Swift): Provides object-level storage accessible via
RESTful APIs
• Block Storage (Cinder): Provides block-level (traditional disk) storage
to VMs
• Identity Service (Keystone, shared service): Manages RBAC for
OpenStack components; provides authorization services
• Image Service (Glance, shared service): Manages VM disk images;
provides image delivery to VMs and snapshot (backup) services
• Telemetry Service (Ceilometer, shared service): Centralized collection
of metering and monitoring data for OpenStack components
• Orchestration Service (Heat, shared service): Template-based cloud
application orchestration for OpenStack environments
• Database Service (Trove, shared service): Provides users with
relational and non-relational database services
• Dashboard (Horizon): Provides a web-based GUI for working with
OpenStack
189. Key Orchestration Tools
• Features
• Provision and manage hosts on which containers will run
• Pull images from a repository and instantiate containers
• Manage the life cycle of containers
• Schedule containers on hosts based on each host's resource availability
• Start a new container when one dies
• Scale containers to match the application's demand
• Provide networking between containers so that they can access
each other on different hosts
• Expose containers as services
• Monitor the health of containers
• Upgrade containers
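Several of the features above (self-healing, scaling, starting a new container when one dies) boil down to a reconciliation loop that moves the observed state toward the desired state. A minimal sketch, where the container "runtime" is just a list:

```python
# Minimal sketch of an orchestrator's reconciliation loop: compare desired
# vs. observed replica counts and start/stop containers to close the gap.
# A real orchestrator would call a container runtime API instead.

def reconcile(desired, running):
    """Return the running set adjusted toward the desired replica count."""
    running = list(running)
    while len(running) < desired:                # self-healing / scale up
        running.append(f"replica-{len(running)}")
    while len(running) > desired:                # scale down
        running.pop()
    return running

state = ["replica-0", "replica-1"]
state.remove("replica-1")          # simulate a container dying
state = reconcile(3, state)        # the loop restores the desired state
print(len(state))                  # 3
```

Run periodically (or on events), this loop is the core of "reconciling the desired state" listed under Orchestration earlier.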
194. • Nova
– = a compute service that provides APIs to manage VMs.
– supports provisioning machine containers using two libraries:
LXC and OpenVZ (Virtuozzo).
• Both container libraries are supported through libvirt, which Nova
uses to manage virtual machines.
• Heat
– = an orchestration service.
– Users need to enable plugins for Docker orchestration in Heat.
• Magnum
– = a container infrastructure management service.
– Magnum provides APIs to deploy Kubernetes, Swarm, and Mesos
clusters on OpenStack infrastructure.
– Magnum uses Heat templates to deploy these clusters.
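Magnum's Heat-driven deployment can be pictured with a minimal HOT template. The fragment below is an illustrative sketch, not one of Magnum's actual cluster templates; the image, flavor, and network names are placeholders.

```yaml
heat_template_version: 2018-08-31

description: Minimal HOT sketch; resource property values are placeholders.

resources:
  node:
    type: OS::Nova::Server
    properties:
      name: cluster-node-0
      image: fedora-coreos        # placeholder image name
      flavor: m1.small            # placeholder flavor
      networks:
        - network: private        # placeholder Neutron network

outputs:
  node_ip:
    description: first address of the node
    value: { get_attr: [node, first_address] }
```

Magnum's real templates add load balancers, security groups, and per-COE software configuration, but the pattern is the same: declare resources, let Heat converge the stack.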
195. • Zun
– = a container management service that provides APIs to manage the
life cycle of containers in an OpenStack cloud.
– Currently, Zun supports running containers on bare metal; in the
future, it may support running containers on virtual machines
created by Nova.
– Zun uses Kuryr to provide Neutron networking to containers and
Cinder to provide persistent storage for them.
• Kuryr
– = a Docker network plugin that provides networking services to
Docker containers using Neutron.
• Kolla
– = a project that deploys OpenStack control plane services
within Docker containers.
– Kolla simplifies deployment and operations by packaging each
controller service as a micro-service inside a Docker container.
196. • Murano
– = provides an application catalog that lets app developers and
cloud administrators publish cloud-ready applications in a
repository available from the OpenStack Dashboard (Horizon);
the applications can run inside Docker or Kubernetes.
– controls the full life cycle of applications.
• Fuxi
– = a storage plugin for Docker containers that enables containers
to use Cinder volumes and Manila shares as persistent storage.
• OpenStack-Helm
– provides a framework for operators and developers to deploy
OpenStack on top of Kubernetes.
197. Kubernetes Plugin for OpenStack
• Targets the Kubernetes interface to the IaaS layer
– Supports plugging into many cloud providers: OpenStack, GCE, AWS,
Azure, ...
– Easily configurable
– Leveraged by key Kube components: kube-apiserver, kube-controller-manager,
kubelet
• Kubernetes plugin supports
– OpenStack Identity
– OpenStack Networking
– OpenStack Storage
• Code:
– gophercloud repo: https://github.com/rackspace/gophercloud
– Kubernetes repo: https://github.com/kubernetes/kubernetes
• pkg/cloudprovider/providers/openstack
• pkg/volume/cinder
– OpenStack repo: http://git.openstack.org/cgit/openstack/k8s-cloud-provider
• Magnum project delivers the integration of Kubernetes and
OpenStack
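Those components are typically pointed at OpenStack through an INI-style cloud-provider configuration. The fragment below is an illustrative sketch in the style of the legacy in-tree OpenStack provider's cloud.conf; the URL, credentials, region, and IDs are placeholders and must be replaced with values from the target cloud.

```ini
# cloud.conf (sketch) -- values are placeholders
[Global]
auth-url   = https://cloud.example:5000/v3
username   = k8s-service-user
password   = changeme
tenant-name = k8s-project
domain-name = Default
region      = RegionOne

[BlockStorage]
bs-version = v3                  # Cinder API version for volume operations

[LoadBalancer]
subnet-id = PLACEHOLDER-SUBNET   # Neutron subnet used for LB members
```

The `[Global]` section authenticates against Keystone; the other sections map to the Cinder and Neutron integrations mentioned above.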
198. Identity Management Integration
• Keystone:
– a robust identity service, fully populated by the cloud provider
– provides integration with multiple LDAPs and MS ADs, plus
federated identity support
• How Kubernetes services access OpenStack services
– Code: gophercloud package
– Establish session to access Neutron, Cinder, ...
– Keystone trust id for better security: automated by Magnum
203. Magnum
• What is Magnum?
– An OpenStack API service that creates container clusters
– Uses Keystone credentials
– Lets users choose the cluster type
– Supports multi-tenancy
– Can create multi-master clusters
207. – Native Client
• The client distributed with the COE (e.g., docker, kubectl)
• Not an OpenStack client
• Authenticates using TLS
208. – Key Magnum features
• Provides a standard API for complete life cycle management of COEs
• Supports multiple COEs such as Kubernetes, Swarm, Mesos, and DC/OS
• Supports the ability to scale a cluster up or down
• Supports multi-tenancy for container clusters
• Different choices of container cluster deployment models: VM or bare-metal
• Provides Keystone-based multi-tenant security and auth management
• Neutron based multi-tenant network control and isolation
• Supports Cinder to provide volume for containers
• Integrated with OpenStack
• Secure container cluster access (Transport Layer Security (TLS)) enabled
• The cluster can also use external infrastructure such as DNS, a public
network, a public discovery service, a Docker registry, a load balancer,
and so on
• Barbican provides the storage of secrets such as certificates used for TLS
within the cluster
• Kuryr-based networking for container-level isolation
209. • Components
– Magnum API
• = a WSGI server that serves the API requests that users send.
• Magnum API has many controllers, one handling requests for each
of these resources:
– Baymodel, Bay (deprecated: replaced by cluster template and
cluster, respectively)
– Certificate, Cluster, Cluster template
– Magnum services, Quota, Stats
• Each controller handles requests for a specific resource.
– Magnum conductor
• = an RPC server that provides coordination and database query
support for Magnum.
• is stateless and horizontally scalable, meaning multiple instances of
the conductor service can run at the same time.
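The claim that the conductor is stateless and horizontally scalable can be illustrated with several identical workers draining one shared queue: any worker can serve any request because no state lives in the worker itself. A toy sketch using threads in place of RPC servers:

```python
import queue
import threading

# Sketch of a stateless, horizontally scalable conductor: three identical
# workers consume one shared request queue. Request names are illustrative.

requests = queue.Queue()
results = []
lock = threading.Lock()

def conductor_worker():
    while True:
        req = requests.get()
        if req is None:                   # shutdown sentinel
            break
        with lock:
            results.append(f"handled:{req}")
        requests.task_done()

workers = [threading.Thread(target=conductor_worker) for _ in range(3)]
for w in workers:
    w.start()

for i in range(6):
    requests.put(f"cluster-create-{i}")
requests.join()                           # wait until all requests are handled

for _ in workers:                         # tell every worker to exit
    requests.put(None)
for w in workers:
    w.join()

print(len(results))   # 6
```

Adding capacity is just starting more identical workers on the same queue, which is exactly what "multiple instances of the conductor service can run at the same time" means.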
212. Zun
• Overview
– = a container management service that provides APIs to manage
containers abstracted by different backend technologies.
• Zun supports Docker as its container runtime.
• Zun integrates with many OpenStack services.
– Zun has various add-ons over Docker, which make it a
powerful solution for container management.
213. • Key features
– Provides a standard API for container life cycle management
– Provides Keystone-based multi-tenant security and auth
management
– Supports Docker with runc and Clear Containers for managing
containers
– Supports Cinder to provide volumes for containers
– Kuryr-based networking for container-level isolation
– Supports container orchestration via Heat
– Container composition, known as capsules, lets users run multiple
containers with related resources as a single unit
– Supports SR-IOV, which enables a physical PCIe device to be
shared across VMs and containers
– Supports interactive sessions with containers
– Allows users to run heavy workloads with dedicated resources
by exposing CPU sets
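The last point, dedicated resources via exposed CPU sets, can be sketched as a toy allocator that carves exclusive CPUs out of a shared pool. The class and container names below are hypothetical, not Zun's API.

```python
# Toy CPU-set allocator illustrating "dedicated resources via cpusets":
# each heavy workload gets exclusive host CPUs from a shared pool.

class CpusetAllocator:
    def __init__(self, total_cpus):
        self.free = set(range(total_cpus))
        self.assigned = {}                 # container -> set of cpu ids

    def allocate(self, container, ncpus):
        if ncpus > len(self.free):
            raise RuntimeError("not enough free CPUs for a dedicated set")
        cpus = {self.free.pop() for _ in range(ncpus)}
        self.assigned[container] = cpus
        return sorted(cpus)

    def release(self, container):
        self.free |= self.assigned.pop(container)

alloc = CpusetAllocator(total_cpus=8)
alloc.allocate("db", 4)       # ids could feed a --cpuset-cpus style option
alloc.allocate("web", 2)
print(len(alloc.free))        # 2
```

Because assigned CPUs leave the free pool entirely, no two containers ever share a pinned core, which is the point of exposing CPU sets for heavy workloads.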
214. Kuryr
• Kuryr
– = a Docker network plugin that uses OpenStack Neutron to
provide networking services to Docker containers.
– maps container network abstractions to Neutron APIs.
• Security groups
• Subnet pools
• NAT (SNAT/DNAT, Floating IP)
• Port security (ARP spoofing)
• Quality of Service (QoS)
• Quota management
• Neutron pluggable IPAM
• Well-integrated COE load balancing via Neutron
• FWaaS for containers
216. Murano
• Overview
– = an application catalog service
– allows cloud-ready applications to be deployed easily on OpenStack.
• It is an integration point for external applications and OpenStack,
with support for complete application life cycle management.
– Environment
– Package
– Session
– Deployments
– Bundle
– Categories
217. Kolla
• Overview
– Addresses the complexity of deploying and managing an
OpenStack cloud
• Features
– Runs OpenStack services as containers
– Uses Ansible to install container images and to deploy or
upgrade an OpenStack cluster
– Kolla containers are configured to store data on persistent
storage, which can then be mounted back onto the host
operating system and restored to protect against faults.
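A kolla-ansible deployment is driven by a globals file on the deploy host. The excerpt below is a minimal sketch assuming an Ubuntu base image and two NICs; the interface names and the VIP address are placeholders that must match the target hosts.

```yaml
# /etc/kolla/globals.yml (excerpt) -- values are illustrative placeholders
kolla_base_distro: "ubuntu"             # base OS of the service containers
network_interface: "eth0"               # management / API traffic
neutron_external_interface: "eth1"      # external / provider networks
kolla_internal_vip_address: "10.0.0.250"  # VIP for HA API endpoints
enable_cinder: "yes"                    # opt-in services are toggled here
```

With this in place, the Ansible playbooks pull the per-service container images and converge the cluster, which is the deploy/upgrade workflow described above.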