Microservices: Living Large in Your Castle Made of Sand
1. Microservices: Living Large in Your Castle Made of Sand
MongoDB Evenings Atlanta, October 2016
Brandon.Newell@MongoDB.com
Senior Solutions Architect
Twitter:
@virtual_newell @MongoDB #MDBEvenings
3. What is a Microservice?
• A self-contained process that delivers a specific capability for
the business
• Each service is developed, tested, and deployed
independently
• Communicate with other services via APIs
6. When are they used?
Multiple microservices work together as one ‘whole’
system to deliver a usable interface to users
7. Considerations for Microservices
• Monitoring
• Operational overhead
• Incorrect Service boundaries
• Plan for failures (distributed computing truth)
8. Let’s think about an online auction site
When we interact with the site, we see the interface...but so much more is going on
10. How many business processes can you see?
...and there’s even more going on
11. Microservices make it possible
[Diagram: four business processes (Item Listing, Shopping Cart, Seller/Buyer Comms, and Seller Reviews), each delivered by its own microservice]
12. What’s in a Microservice?
A small application stack designed with a very specific purpose in mind!
Each microservice can be designed in the way most suited to
addressing a particular business process
[Diagram: the Seller Reviews microservice built on NodeJS; the Item Listing microservice built on Java]
13. Microservices are flexible
Teams can choose the data model, language, and platforms that best
suit the needs of their assigned process.
[Diagram: the Seller Reviews microservice built on NodeJS; the Item Listing microservice built on Java]
14. Microservices make frequent releases easier
Each microservice team chooses their own pace and is not dependent
on other teams to release features or fix bugs.
[Diagram: the Seller Reviews (NodeJS) team ships new features ("We've got new stuff!") while the Item Listing (Java) team continues at its own pace ("No need to wait for us!")]
15. A note about the team...
A microservice team should have expertise at each layer of the
application:
• Developers
• DBA
• Sysadmin | VMAdmin
• Storage Admin
• Web Admin
• Security
16. Communication is not a barrier
Microservices almost always need to communicate with one another to
ensure a complete picture for the end user
This is done via HTTP or another common messaging protocol
[Diagram: the Seller Reviews (NodeJS) and Item Listing (Java) microservices exchanging data over HTTP]
17. Microservices scale readily
As your demand grows, you can add more of a microservice to meet
the demand.
[Diagram: a "Free Listing Weekend" drives demand, so additional Item Listing instances are deployed alongside the Shopping Cart, Seller/Buyer Comms, and Seller Reviews services]
18. Microservices are independent
Upgrades or changes can be done at the microservice level without
impacting the entire system
[Diagram: the Item Listing microservice is upgraded ("Time to upgrade!") while Shopping Cart, Seller/Buyer Comms, and Seller Reviews continue: business as usual]
19. Microservices are independent
If a microservice goes down, others can continue running
[Diagram: the Seller Reviews microservice is down, while Shopping Cart, Seller/Buyer Comms, and Item Listing keep running: business as usual]
20. Recap - Microservices
• Microservices are small applications with a very specific business purpose
• The business process dictates the choice of:
• Development Language
• Data Model
• Database
• Infrastructure
• Not dependent on one another for releases or upgrades
• Teams should be self-contained and have expertise at each layer of the stack
• Can scale as the demand on the business process grows
• Communicate with one another through common protocols like HTTP
22. Docker ecosystem
• Docker Machine: provisioning and managing your Dockerized hosts
• Docker Swarm: native clustering that turns a pool of Docker hosts into a single, virtual Docker host
• Docker Compose: define a multi-container application with all of its dependencies in a single file
24. Why use Docker?
• 44% of orgs adopting microservices
• 41% want application portability
• 13x improvement in release frequency
• 62% improvement in MTTR on software issues
• 60% using Docker to migrate to cloud
Reasons to run containers: speed, microservices architectures, efficiency, cloud
(The Docker Survey, 2016)
25. Why Docker Swarm?
Evaluating Container Platforms at Scale (1,000 EC2 instances in a cluster):
• What is their performance at scale?
• Can they operate at scale?
• What does it take to support them at scale?
Findings: 5x faster than Kubernetes to spin up a new container; 7x faster than Kubernetes to list all running containers
https://medium.com/on-docker/evaluating-container-platforms-at-scale-5e7b44d93f2c#.k2fxds8c2
https://www.docker.com/survey-2016
28. Deployment patterns: Replica Sets and Sharded Clusters
Redundancy and fault tolerance
• Deploy an odd number of voting members

Members   Majority required   Fault tolerance
3         2                   1
5         3                   2
7         4                   3

High availability and resource colocation
• Single member of a replica set per server
• Shards deployed as replica sets
• Ideally: primary / secondary / secondary
• Possible: primary / secondary / arbiter
[Diagram: three servers, each running a mongos, a config server (cfgsvr1, cfgsvr2, cfgsvr3), and one member of each of the three shard replica sets (RS1, RS2, RS3), with the primaries distributed across different servers]
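The ideal primary / secondary / secondary layout above is typically initiated once from the mongo shell. A hedged command sketch to run against a live deployment (the replica set name and hostnames here are assumptions, not part of the deck's recipe):

```javascript
// Run once, from a mongo shell connected to any one member.
// "RS1" and the server hostnames are illustrative.
rs.initiate({
  _id: "RS1",
  members: [
    { _id: 0, host: "server1:27017" },
    { _id: 1, host: "server2:27017" },
    { _id: 2, host: "server3:27017" }
  ]
})
```

Once initiated, the members hold an election and one becomes the primary; `rs.status()` shows the resulting PRIMARY / SECONDARY / SECONDARY state.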
29. Container "Striping"
[Diagram: three machines, each running Docker containers for several applications (APP1 through APP6); each application's primary and two secondaries are striped across the three machines]
• Maintain a primary and two secondaries for HA per application
• Linux cgroups or Docker containers are used to isolate RAM / CPU / Block IO for each mongod instance
• MongoDB Ops/Cloud Manager is key to success!
30. How successful customers use MongoDB with
Docker
• Case Studies @ https://www.mongodb.com/blog
• Whitepaper/Webinar
https://www.mongodb.com/collateral/microservices-containers-and-orchestration-explained
https://www.mongodb.com/webinar/enabling-microservices-from-startups-to-the-enterprise
31. MongoDB Atlas Features
Database as a service for MongoDB
• Automated: The easiest way to build, launch, and scale apps on MongoDB
• Flexible: The only database as a service with all you need for modern
applications
• Secured: Multiple levels of security available to give you peace of mind
• Scalable: Deliver massive scalability with zero downtime as you grow
• Highly available: Your deployments are fault-tolerant and self-healing by
default
• High performance: The performance you need for your most demanding
workloads
32. MongoDB Atlas Benefits
Database as a service for MongoDB
Run for You
• Spin up a cluster in seconds
• Replicated & always-on deployments
• Fully elastic: scale out or up in a few clicks with zero downtime
• Automatic patches & simplified upgrades for the newest MongoDB features
Safe & Secure
• Authenticated & encrypted
• Continuous backup with point-in-time recovery
• Fine-grained monitoring & custom alerts
No Lock-In
• On-demand pricing model; billed by the hour
• Multi-cloud support (AWS available with others coming soon)
• Part of a suite of products & services designed for all phases of your app; migrate easily to different environments (private cloud, on-prem, etc.) when needed
33. Share with us your use case of MongoDB & Docker:
http://bit.do/DockerMongoDB
Try it yourself!
https://github.com/sisteming/mongo-swarm
Hello valued customer! Can you share with me, in great detail, how you were able to successfully adopt and master Microservices methodologies and architecture that have provided you with unparalleled productivity and success?
We plan to share this information with the world at large including but not limited to individuals or companies that may be your competition…
Hudl refers to its teams as 'squads'
So now we know what a highly available MongoDB pattern looks like, and we will dive into how we can use Docker to implement these patterns.
As we will see shortly, in our recipe today we will run MongoDB in Docker containers, and we will use Docker Compose, Docker Machine, Swarm, and the Cloud Manager API to orchestrate and automate the deployment of a sharded cluster.
Generally, when we say "Docker" we are referring to the Docker daemon, but Docker has a whole ecosystem of tools to help us manage containers.
So for example, with Docker Machine we can provision and manage our Docker hosts under different providers, like AWS, VirtualBox, or others.
We then have Docker Swarm, which provides us with a clustering solution for running multiple nodes with the Docker daemon. We can then use filters and rules to orchestrate the deployment of containers onto each node of the cluster.
Docker Compose, in turn, can be used to define our patterns and multi-container deployments with just a YAML description. This way we can define services for replica sets or sharded clusters and deploy them easily, e.g. deploy a replica set with a single command.
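The "replica set with a single command" idea can be sketched as a Compose file. Everything here (file name, image tag, service names) is an assumption for illustration, not the deck's actual recipe:

```yaml
# docker-compose.yml (hypothetical): three mongod containers forming
# one replica set named rs0; Compose puts them on a shared network,
# so each member can reach the others by service name.
version: "2"
services:
  mongo1:
    image: mongo:3.2
    command: mongod --replSet rs0
  mongo2:
    image: mongo:3.2
    command: mongod --replSet rs0
  mongo3:
    image: mongo:3.2
    command: mongod --replSet rs0
```

`docker-compose up -d` then starts all three containers; the replica set itself still has to be initiated once (e.g. with `rs.initiate()` against one member) before it elects a primary.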
All these tools have the Docker API in common, so any tool that works with Docker can use Swarm to scale to multiple hosts.
4 – UNTIL SOME OF OUR SUCCESSFUL CUSTOMERS DISCOVERED DOCKER
The good news is that Docker can help us with this! For example, we can easily coordinate our containers to deploy a recommended highly available pattern, and we can then size each container, and therefore its instance and each cluster, to avoid resource allocation issues.
5 – THIS IS WHAT WE REALIZED
Mean Time to Repair
So probably most of you already know and use Docker, but in case there is someone who does not, I'll quickly introduce it.
If you ask Google "what is Docker", Google will tell you that a docker is a person employed in a port to load and unload ships. While this might seem unrelated, it is quite fitting, as our ships will be our servers, and Docker will allow us to load and unload containers onto them.
These containers are basically isolated processes in userspace that share the host kernel, and into these isolated processes we can deploy applications and their dependencies. The idea is that we can containerize a single application, including its dependencies, which allows us to run it everywhere: we just need a Docker daemon running, whether on our laptop, in a cloud infrastructure like AWS or Azure, or even on a Raspberry Pi.
One of the interesting things about Docker Swarm is the possibility of multi-host networking.
As we have containers on each of the Swarm nodes, they need to be able to connect with each other. For this reason, Swarm automatically creates an overlay container-to-container network.
The underlying technology is based on the Docker Swarm master and its service discovery (in this example we are using Consul as the discovery service).
With this, by referring to the hostname associated with each container, we can reach containers located on a different Swarm node. This is a key concept for deploying our MongoDB containers in a Swarm cluster, but once understood it is really easy to use and connect all the containers with each other.
Co-hosted using affinity
Centralized: MongoDB lives outside the Docker cluster. A common first step.
Co-hosted: MongoDB lives within the Docker cluster as its own container.
Co-resident: works for very light, agile applications that are self-sufficient.
When speaking about replica sets, and this is a general MongoDB best practice, we generally suggest a primary / secondary / secondary configuration as the recommended pattern.
With two secondaries, the primary replicates all operations to both, so we have two extra copies of our data; if we lose one member, we still have a replicated copy.
We always recommend an odd number of members, and this is strictly related to fault tolerance: more members means a higher majority and higher resiliency for our environment.
It is also essential to understand why we want each replica set member on a different server or instance: losing one of the servers then won't make our replica set unavailable.
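The "odd number of members" rule can be made concrete with a few lines of arithmetic. This is just the election math, not MongoDB code: electing a primary requires a strict majority of voting members, floor(n / 2) + 1 votes.

```javascript
// Majority needed for a primary election among n voting members.
function majority(members) {
  return Math.floor(members / 2) + 1;
}

// Fault tolerance: how many members can fail while a majority remains.
function faultTolerance(members) {
  return members - majority(members);
}

for (const n of [3, 4, 5, 6, 7]) {
  console.log(`${n} members: majority ${majority(n)}, tolerates ${faultTolerance(n)} failure(s)`);
}
// Note that 4 members tolerate only 1 failure, the same as 3, which is
// why an even member count adds cost without adding resiliency.
```

This is why the slides recommend 3, 5, or 7 voting members rather than an even count.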
Swisscom – largest communications provider in Switzerland, tens of thousands of microservices (Flocker+ScaleIO, Openstack, Cloud Foundry)
fuboTV – Soccer streaming service (Kubernetes), bursty 100x growth 10 mins before match
Hudl – adopted microservices all the way to the dev team model
UPS i-parcel