This is my presentation at DevNexus 2017 in Atlanta.
Containers are a default choice for packaging and deploying microservices.
You will understand why containers are a natural fit for microservices, the value a container platform brings to the table, and how to structure your microservices running as containers on an enterprise-ready Kubernetes platform, OpenShift. We will also look at a sample microservices application packaged and running as containers on this platform.
1. Presenter: Veer Muchandi
Title: Principal Architect - Container Solutions
Social Handle: @VeerMuchandi
Blogs: https://blog.openshift.com/author/veermuchandi/
2. Agenda
Why Containers for Microservices?
Value of Container Platform
How do containers run on a K8s/OpenShift cluster?
Structure of a “Containerized Microservice” on OpenShift
An example
Other OOTB features from a Container Platform
3. Microservices
“building applications as suites of services. As well as the fact that services are independently deployable and scalable, each service also provides a firm module boundary, even allowing for different services to be written in different programming languages. They can also be managed by different teams”
–Martin Fowler
5. A Typical Monolith
Multiple business functions all bundled up into a single large monolith.
Acknowledgements: Borrowed a few conventions from here
http://martinfowler.com/articles/microservices.html
6. Deployed as a single large deployment unit on the host:
- hard to change
- hard to manage
- causes slow cycles
So we want to break it up..
7. So we want to refactor each business function as an independent microservice.
So how does a microservice run on a host?
8. Microservices are typically very small. Even the smallest sized VM in your enterprise may be too big. With a single microservice per host, we will end up wasting a lot of resources.
So should we run a bunch of them on the box?
9. Well.. now all our eggs are in the same basket!!
Hmm.. let’s see. How about? …..
12. Welcome to the World of Containers!!
Multiple containers run on a host. Containers share the kernel of the host.
Docker containers have layers: infrastructure-as-code by default!!
It is not just your application, but all its dependencies included.
Let’s understand containers
14. Containers are Portable
So do you want to burst your microservices to other datacenters or clouds to meet your demand?
Containers run the same way across datacenters.. portability comes with the container format.
15. Polyglot
Again, multiple containers run on a host. Since dependencies are bundled in, each container can implement its own technologies without affecting other containers on the host.
Containers are naturally polyglot.
17. Containers Scale up fast and Scale down fast
Microservices need to scale up quickly. Containers provide that OOTB.
18. Application Upgrades, Security Fixes, Middleware, Base OS Upgrades
Upgrades are quick. They won’t affect other containers, as the changes are local to a container. Easy to roll back.. just bring up the previous container version!!
Meets this need... “Microservices can be changed quickly without affecting others!!!”
19. Container Registry
As a bonus, you get a registry to store your ready-to-run microservices. Just push to or pull from it.
24. Containers in Pods
Some pods may have more than one container.. that’s a special case though!!
All the containers in a pod die along with the pod. Usually these containers are tightly coupled and dependent on each other, like a master/slave or sidecar pattern.
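As a sketch, a multi-container pod with a sidecar might be declared like this (the names and images are hypothetical, not from the talk):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar        # hypothetical pod name
  labels:
    app: web
spec:
  containers:
  - name: web                   # main application container
    image: example/web:1.0      # hypothetical image
    ports:
    - containerPort: 8080
  - name: log-forwarder         # sidecar: shares the pod's lifecycle and network
    image: example/log-forwarder:1.0
```

Both containers share the pod’s IP and die together when the pod is deleted, which is exactly the tight relationship described above.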
25. Pod Scaling
When you scale up your application, you are scaling up pods. Each pod has its own IP.
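One way to express “how many pods” declaratively is a replica count on a Deployment; a minimal sketch (the image and labels are assumptions):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                   # three pods, each gets its own IP
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: example/web:1.0  # hypothetical image
```

Changing `replicas` and re-applying the manifest is all it takes to scale up or down.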
26. Nodes
Nodes are the application hosts that make up an OpenShift/K8s cluster. They run Docker and OpenShift. The master controls where the pods are deployed on the nodes, and ensures cluster health.
27. High Availability
When you scale up, pods are distributed across nodes following scheduler policies defined by the administrator. So even if a node fails, the application is still available.
28. Health Management
Not just that: if a pod dies for some reason, another pod will come up in its place.
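The platform can only replace an unhealthy container if it knows the container is unhealthy. One common way to signal that is a liveness probe; a minimal sketch (the image, path, and port are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    app: web
spec:
  containers:
  - name: web
    image: example/web:1.0      # hypothetical image
    livenessProbe:              # the container is restarted if this check keeps failing
      httpGet:
        path: /healthz          # assumed health endpoint
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 5
```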
29. Flexibility of architecture with OpenShift/K8s Services
Pods can be front-ended by a Service. A Service is a proxy.. every node knows about it, and the Service gets its own IP.
The Service knows which pods to front-end based on labels.
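A minimal Service sketch, assuming the pods carry an `app: web` label (names and ports are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web          # front-ends every pod carrying this label
  ports:
  - port: 80          # port the Service exposes
    targetPort: 8080  # port the pods actually listen on
```

Inside the cluster, the Service is also reachable by its DNS name, e.g. `web.<namespace>.svc.cluster.local`, so clients never need to track individual pod IPs.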
30. Built-in Service Discovery
Clients can talk to the Service, and the Service redirects the requests to the pods. The Service also gets a DNS name, so clients can discover it… built-in service discovery!!
31. Accessing your Application
When you want to expose a Service externally, e.g. for access via a browser using a URL, you create a “Route”.
The Route gets added to an HAProxy load balancer. You can also configure an F5 as the load balancer.
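A minimal Route sketch (the host name is an assumption, and the exact `apiVersion` depends on your OpenShift version):

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: web
spec:
  host: web.apps.example.com   # hypothetical external URL
  to:
    kind: Service
    name: web                  # the Service this Route exposes
```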
34. Microservice Structure on OpenShift/K8S
Microservice1 is made up of two K8s services/tiers. Each tier scales independently although part of the same microservice. Microservice1 is exposed via a Route, hence it can be used by external clients such as a browser.
Microservice2 is an internal service, only usable by other microservices running on the cluster, as it does not have a Route.
36. OpenShift Templates
OpenShift Templates enable easy deployment of a suite of microservices. You can also define the number of pods to run for each microservice and their resource requirements.
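A trimmed-down Template sketch showing how a replica count and resource requests can be set per microservice (names, images, and values are assumptions, and the `apiVersion` varies by OpenShift version):

```yaml
apiVersion: template.openshift.io/v1
kind: Template
metadata:
  name: microservices-suite     # hypothetical template name
objects:
- apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: web
  spec:
    replicas: 2                 # pods to run for this microservice
    selector:
      matchLabels:
        app: web
    template:
      metadata:
        labels:
          app: web
      spec:
        containers:
        - name: web
          image: example/web:1.0   # hypothetical image
          resources:
            requests:              # resource requirements per pod
              cpu: 100m
              memory: 256Mi
```

Processing the template (e.g. with `oc new-app -f template.yaml`) instantiates all the objects it defines in one step.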
43. API Gateway: upcoming image on OpenShift
Red Hat SSO, based on Keycloak
Fuse and AMQ supported
Middleware supported
Many supported OSS technologies
Jenkins CI/CD pipelines all run on OpenShift as containers
44. With all such features built into a Container Platform, it becomes a true Microservices Platform.