Docker is often introduced into production little by little, as a "new packaging format", rather than together with a Docker orchestrator.
Orchestrators such as Kubernetes or Mesos can be intimidating because of the potential complexity of deploying them and integrating them into an existing infrastructure.
Together we will discover Rancher, another orchestrator, easier to integrate yet just as powerful.
We will see how to use it without revolutionizing your infrastructure, and how to benefit from it in use cases such as the "rolling upgrade" of a service.
2. /us /me
Christophe Furmaniak
✔ Twitter: @cfurmaniak
✔ GitHub: looztra
✔ Architecture and DevOps Culture consultant at Zenika
✔ Formerly at Atos Multimedia, then Atos Worldline, then Worldline
✔ (Very) interested in configuration management, metrics, and the application build, delivery, and deployment cycle
✔ Fond of yak shaving
Yak shaving: any apparently useless activity which, by allowing you to overcome intermediate difficulties, allows you to solve a larger problem.
✔ Note: I look grumpy and my slides are not very pretty; I am aware of it.
4. Talk outline
✔ The guestbook application
✔ Containers the 'basic' way
✔ Level up: let's use an orchestrator
✔ Rancher?
✔ Demo: Rancher + containers the 'basic' way
✔ Demo: Guestbook v1 powered by Rancher
✔ Demo: Service upgrade, method 1 ('rolling upgrade')
✔ Demo: Service upgrade, method 2 ('blue/green')
5. Where are we?
✔ The guestbook application
✔ Containers the 'basic' way
✔ Level up: let's use an orchestrator
✔ Rancher?
✔ Demo: Rancher + containers the 'basic' way
✔ Demo: Guestbook v1 powered by Rancher
✔ Demo: Service upgrade, method 1 ('rolling upgrade')
✔ Demo: Service upgrade, method 2 ('blue/green')
6. The Guestbook application
✔ Inspired by the kubernetes/guestbook example
✔ A data storage layer made of:
✔ One Redis master (writes)
✔ Two Redis slaves (reads)
✔ An API service layer based on Spring Boot
✔ A front layer made of HTML+JS served by an nginx
10. Docker the 'basic' way
✔ Containers started with Ansible (for instance), with home-made scripts, or even by hand (ouch)
✔ No communication between containers hosted on 2 different hosts (e.g. pre-Docker 1.9)
✔ Hard-coded port mappings:
✔ docker run -d -p 8000:80 ns/frontend
✔ docker run -d -p 8080:8080 ns/api-server
✔ docker run -d -p 6379:6379 ns/redis-master
✔ docker run -d -p 6380:6379 ns/redis-slave
11. Docker the 'basic' way
✔ Manual assignment of containers to hosts
✔ How do you properly handle increasing load?
✔ How do you properly restart containers when something goes wrong?
✔ How do you handle host maintenance?
✔ At startup, 7 containers + 3 load balancers to manage:
✔ what if the application gets more complex?
✔ how do you handle updates?
13. Where are we?
✔ The guestbook application
✔ Containers the 'basic' way
✔ Level up: let's use an orchestrator
✔ Rancher?
✔ Demo: Rancher + containers the 'basic' way
✔ Demo: Guestbook v1 powered by Rancher
✔ Demo: Service upgrade, method 1 ('rolling upgrade')
✔ Demo: Service upgrade, method 2 ('blue/green')
14. An orchestrator?
Orchestration covers:
✔ Scheduling (placement)
✔ Error handling
✔ Host provisioning
✔ Services:
✔ Load balancing
✔ Discovery
✔ ...
See the Octo blog post "la bataille sanglante des orchestrateurs" (the bloody battle of the orchestrators)
15. In other words:
✔ An orchestrator:
✔ lets you scale your use of Docker to a larger environment
✔ brings load-balancing mechanisms and placement rules:
✔ container A must not run on the same host as container B
✔ container C must run on every existing host carrying the label "front"
✔ automatically restarts container D if it is no longer healthy
✔ optionally: handles network communication between the containers of the different hosts
17. Where are we?
✔ The guestbook application
✔ Containers the 'basic' way
✔ Level up: let's use an orchestrator
✔ Rancher?
✔ Demo: Rancher + containers the 'basic' way
✔ Demo: Guestbook v1 powered by Rancher
✔ Demo: Service upgrade, method 1 ('rolling upgrade')
✔ Demo: Service upgrade, method 2 ('blue/green')
18. Rancher by RancherLabs
✔ RancherLabs (http://rancher.com @Rancher_Labs):
✔ Founded by former Citrix Systems people
✔ Cupertino (California) + Mesa (Arizona)
✔ Rancher? The cattle vs. pets paradigm
✔ First commits on GitHub in November 2014
✔ 2 products: Rancher and RancherOS
● "Rancher is a complete platform for running Docker applications in production"
● "RancherOS is a 20mb Linux distro that runs the entire OS as Docker containers"
19. Rancher: features
✔ Cross-host container communication (overlay network using an IPsec tunnel)
✔ Container scheduling
✔ Service registration/discovery
✔ Load balancers
✔ Distributed DNS service + metadata service
✔ Health checks (TCP connection, HTTP 2xx/3xx)
✔ Service high availability
✔ Service upgrades ('rolling upgrade', 'blue/green')
✔ Rancher-compose
✔ Sidekicks (≈ Kubernetes pods)
20. Where are we?
✔ The guestbook application
✔ Containers the 'basic' way
✔ Level up: let's use an orchestrator
✔ Rancher?
✔ Demo: Rancher + containers the 'basic' way
✔ Demo: Guestbook v1 powered by Rancher
✔ Demo: Service upgrade, method 1 ('rolling upgrade')
✔ Demo: Service upgrade, method 2 ('blue/green')
21. Demo #1
✔ Rancher + containers in legacy mode
✔ Goals:
✔ Install Rancher
✔ View the existing containers (not managed by Rancher)
22. Where are we?
✔ The guestbook application
✔ Containers the 'basic' way
✔ Level up: let's use an orchestrator
✔ Rancher?
✔ Demo: Rancher + containers the 'basic' way
✔ Demo: Guestbook v1 powered by Rancher
✔ Demo: Service upgrade, method 1 ('rolling upgrade')
✔ Demo: Service upgrade, method 2 ('blue/green')
23. Demo #2
✔ The Guestbook v1 application deployed with Rancher
✔ Goals:
✔ A guestbook v1 deployed manually
Note: round-robin DNS pre-configured for the 2 "front" hosts
24. Where are we?
✔ The guestbook application
✔ Containers the 'basic' way
✔ Level up: let's use an orchestrator
✔ Rancher?
✔ Demo: Rancher + containers the 'basic' way
✔ Demo: Guestbook v1 powered by Rancher
✔ Demo: Service upgrade, method 1 ('rolling upgrade')
✔ Demo: Service upgrade, method 2 ('blue/green')
http://docs.rancher.com/rancher/concepts/#networking
NETWORKING
Rancher supports cross-host container communication by implementing a simple and secure overlay network using IPsec tunneling. To leverage this capability, a container launched through Rancher must select "Managed" for its network mode or, if launched through Docker, provide an extra label "--label io.rancher.container.network=true". Most of Rancher's network features, such as the load balancer or DNS service, require the container to be in the managed network.
Under Rancher’s network, a container will be assigned both a Docker bridge IP (172.17.0.0/16) and a Rancher managed IP (10.42.0.0/16) on the default docker0 bridge. Containers within the same environment are then routable and reachable via the managed network.
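As a sketch of the opt-in described above, the same label can be declared in a docker-compose file so that the containers of a service join the managed network; the service and image names below are hypothetical, and the label key is taken from the documentation above:

```yaml
# docker-compose.yml fragment (illustrative, not from the talk):
# the label asks Rancher to put this service's containers on the
# managed IPsec overlay network (10.42.0.0/16 by default).
frontend:
  image: ns/frontend
  labels:
    io.rancher.container.network: "true"
```

Once on the managed network, the container is reachable by other containers in the same environment without any host port mapping.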
SERVICE DISCOVERY
Rancher adopts the standard Docker Compose terminology for services and defines a basic service as one or more containers created from the same Docker image. Once a service (consumer) is linked to another service (producer) within the same stack, a DNS record mapped to each container instance is automatically created and discoverable by containers from the “consuming” service. Other benefits of creating a service under Rancher include:
Service High Availability (HA) - the ability to have Rancher automatically monitor container states and maintain a service’s desired scale.
Health Monitoring - the ability to set basic monitoring thresholds for container health.
Add Load Balancers - the ability to add a simple load balancer for your services using HAProxy.
Add External Services - the ability to add any-IP as a service to be discovered.
Add Service Alias - the ability to add a DNS record for your services to be discovered.
LOAD BALANCER
Rancher implements a managed load balancer using HAProxy that can be manually scaled to multiple hosts. A load balancer can be used to distribute network and application traffic to individual containers, either by adding them directly or by "linking" a basic service. A basic service that is "linked" will have all of its underlying containers automatically registered as load balancer targets by Rancher.
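A minimal sketch of such a "linked" load balancer in a Rancher 1.x-era compose file; the `rancher/load-balancer-service` image and the service names are recalled from that era and should be verified against the Rancher docs:

```yaml
# docker-compose.yml fragment: an HAProxy load balancer service
# linked to a basic service, so all of that service's containers
# are registered as targets automatically.
lb:
  image: rancher/load-balancer-service
  ports:
    - "80:80"          # listen on 80, forward to the linked service
  links:
    - frontend:frontend
frontend:
  image: ns/frontend   # hypothetical guestbook front image
```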
DISTRIBUTED DNS SERVICE
Rancher implements a distributed DNS service by using its own lightweight DNS server coupled with a highly available control plane. Each healthy container is automatically added to the DNS service when linked to another service or added to a Service Alias. When queried by the service name, the DNS service returns a randomized list of IP addresses of the healthy containers implementing that service.
By default, all services within the same stack are added to the DNS service without requiring explicit links.
You can resolve containers within the same stacks by the service names.
If you need a custom DNS name for your service, that is different from your service name, you will be required to use a link to get the custom DNS name.
Links are still required for load balancers to target services.
Links are still required if a Service Alias is used.
To make services resolvable that are in different stacks, you will need to link them explicitly.
Because Rancher’s overlay networking provides each container with a distinct IP address, you do not need to deal with port mappings and do not need to handle situations like duplicated services listening on different ports. As a result, a simple DNS service is adequate for handling service discovery.
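The same-stack resolution rules above can be sketched with a two-service stack loosely modeled on the guestbook application (service and image names are hypothetical):

```yaml
# docker-compose.yml fragment: two services in the same stack.
# 'api' can resolve 'redis-master' by service name through Rancher's
# distributed DNS, with no explicit link and no port mapping.
redis-master:
  image: ns/redis-master
api:
  image: ns/api-server
  environment:
    REDIS_HOST: redis-master   # resolved by the DNS service
```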
HEALTH CHECKS
Rancher implements a health monitoring system by running managed network agents across its hosts to coordinate the distributed health checking of containers and services. These network agents internally use HAProxy to validate the health status of your applications. When health checks are enabled, either on an individual container or on a service, each container is monitored by up to three network agents running on hosts separate from that container's parent host. The container is considered healthy if at least one HAProxy instance reports a "passed" health check.
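As an illustration, Rancher 1.x-era rancher-compose.yml files let you declare such a check per service; the exact keys below are recalled from that era and should be checked against the rancher-compose documentation:

```yaml
# rancher-compose.yml fragment: an HTTP health check for a service.
frontend:
  health_check:
    port: 80
    request_line: GET / HTTP/1.0   # HTTP probe; omit for a TCP check
    interval: 2000                 # ms between probes
    response_timeout: 2000         # ms before a probe counts as failed
    healthy_threshold: 2           # successes before marked healthy
    unhealthy_threshold: 3         # failures before marked unhealthy
```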
SERVICE HA
Rancher constantly monitors the state of the containers within a service and actively reconciles it to maintain the service's desired scale. Reconciliation can be triggered when there are fewer (or even more) healthy containers than the desired scale of your service, when a host becomes unavailable, or when a container fails or is unable to pass a health check.
SERVICE UPGRADE
Rancher supports the notion of service upgrades by allowing users to either load balance or apply a service alias for a given service. Leveraging either of these Rancher features creates a static destination for existing workloads that require that service. Once this is established, the underlying service can be cloned from Rancher as a new service, validated through isolated testing, and added to either the load balancer or the service alias when ready. The existing service can be removed when obsolete. All network or application traffic is then automatically distributed to the new service.
RANCHER COMPOSE
Rancher implements and ships a command-line tool called rancher-compose that is modeled after docker-compose. It takes in the same docker-compose.yml templates and deploys the stacks onto Rancher. The rancher-compose tool additionally takes in a rancher-compose.yml file, which extends docker-compose.yml to allow the specification of attributes such as scale, load balancing rules, health check policies, and external links not currently supported by docker-compose.
For more information, see rancher-compose.
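A sketch of the two-file pairing described above (service and image names are hypothetical; only `scale` is shown among the Rancher-only attributes):

```yaml
# docker-compose.yml: the standard Compose definition.
frontend:
  image: ns/frontend
```

```yaml
# rancher-compose.yml: extends the service above with attributes
# docker-compose does not know, here the desired number of containers.
frontend:
  scale: 3
```

Running rancher-compose with both files in the same directory would then deploy the stack with three `frontend` containers and keep that scale reconciled.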
STACKS
A Rancher stack mirrors the same concept as a docker-compose project. It represents a group of services that make up a typical application or workload.
CONTAINER SCHEDULING
Rancher supports container scheduling policies that are modeled closely after Docker Swarm. They include scheduling based on:
port conflicts
shared volumes
host tagging
shared network stack: --net=container:dependency
strict and soft affinity/anti-affinity rules by using both env var (Swarm) and labels (Rancher)
In addition, Rancher supports scheduling service triggers that allow users to specify rules, such as on “host add” or “host label”, to automatically scale services onto hosts with specific labels.
For more information on Container Scheduling and comparison matrix of Rancher’s scheduling and Docker Swarm, see rancher-compose
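The affinity/anti-affinity and host-tagging rules above can be expressed as container labels in the docker-compose file; the `io.rancher.scheduler.*` label keys below follow the Rancher 1.x syntax as remembered and should be verified against the scheduling docs:

```yaml
# docker-compose.yml fragment: scheduling hints via labels.
frontend:
  image: ns/frontend
  labels:
    # host tagging: run only on hosts carrying the label front=true
    io.rancher.scheduler.affinity:host_label: front=true
    # anti-affinity: avoid hosts already running a container
    # labeled app=frontend (hypothetical application label)
    io.rancher.scheduler.affinity:container_label_ne: app=frontend
    app: frontend
```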
SIDEKICKS
Rancher supports the colocation, scheduling, and lock step scaling of a set of services by allowing users to group these services by using the notion of sidekicks. A service with one or more sidekicks is typically created to support shared volumes (i.e. --volumes_from) and networking (i.e. --net=container) between containers.
For more information, see sidekicks with rancher-compose.
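A sketch of a sidekick declaration using the `io.rancher.sidekicks` label from the Rancher 1.x compose convention; the service and image names are hypothetical:

```yaml
# docker-compose.yml fragment: 'app' and its 'data' sidekick are
# co-located, scheduled together, and scaled in lock step.
app:
  image: ns/api-server
  labels:
    io.rancher.sidekicks: data   # declare 'data' as a sidekick
  volumes_from:
    - data                       # share the sidekick's volumes
data:
  image: ns/data-volume          # hypothetical data-only container
```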
METADATA SERVICES
Rancher offers a metadata service that exposes data about both your services and your containers through a simple HTTP-based API, which can be used to manage your running Docker instances. This data can include static information provided when creating your Docker containers or Rancher services, as well as runtime data such as discovery information about peer containers within the same service.