Docker is one of the hottest technologies at the moment, and one that is already dramatically changing the way we build, package and deploy applications. In this session we'll look at how a project set out to containerize most of its components and services while increasing the value delivered.
This presentation was delivered at Wildcard Conference 2015, on 16 May 2015, in Riga.
Start small – containerizing everything is a fair goal, but it takes time.
Editor's notes
We won’t cover the basics.
Docker has dramatically changed the way we package, deploy and run applications.
Millions of downloads of the engine, hundreds of millions of image pulls from the registry. Absolutely impressive for a two-year-old project.
This is going to be opinionated.
This is based on a real project, for a very traditional client.
How many of you are still working in a scenario where releases are deployed to production by handing a .zip package and a Word file with instructions to a third party for installation?
We don't have the luxury of working in the cloud yet, so for us containers are a much more granular way of deploying content than deploying to Tomcat. Containers let us, for example, easily deploy multiple releases of the same application to the same server, without having to resort to kludges.
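The slide doesn't show the commands, but the idea can be sketched as follows; the image name and ports are hypothetical:

```shell
# Two releases of the same (hypothetical) app running side by side on one host:
# each container gets its own name and host port – no Tomcat context-path kludges.
docker run -d --name myapp-1.4 -p 8081:8080 registry.example.com/myapp:1.4
docker run -d --name myapp-1.5 -p 8082:8080 registry.example.com/myapp:1.5
```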
This is how we segregate responsibilities between containers: if a function does not seem to belong in a container, it gets spun off into its own container.
This does not necessarily mean that one container runs a single process!
Use docker exec.
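With `docker exec` (available since Docker 1.3) you can get into a running container without baking an SSH daemon into the image; the container name here is hypothetical:

```shell
# Interactive shell inside a running container
docker exec -it myapp-1.5 bash

# One-off command, e.g. to check what's actually running in there
docker exec myapp-1.5 ps aux
```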
We don't really like data volumes and data-only containers.
Container hierarchies are necessary so that we can segregate responsibilities between containers; the smaller and the less they do, the better.
They also help keep build times short: creating a container by simply copying a WAR file into a prebuilt base image is far faster than building everything from scratch every time.
The hard part: changing one of the images at the very top of the hierarchy means rebuilding every image derived from it. For minor changes this is not a problem, since we can roll out updated containers at our own pace, but for critical updates such as security fixes it can be problematic.
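A minimal sketch of such a hierarchy, assuming hypothetical image names – a slow-moving base image and a fast, per-release child:

```dockerfile
# Base image: rebuilt rarely. JDK + Tomcat + common configuration.
FROM centos:6
RUN yum install -y java-1.7.0-openjdk tomcat
# ... hardening, users, company defaults ...

# --- separate Dockerfile, rebuilt on every release ---
# Child image: just drop the WAR onto the base – builds in seconds.
FROM registry.example.com/base/tomcat
COPY myapp.war /var/lib/tomcat/webapps/
```

The trade-off described above follows directly: a change to the base Dockerfile invalidates every child built on top of it.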
There is an existing investment in Puppet for automation, which is still in use.
However, containers are being slowly migrated to plain Dockerfiles.
We started with Artifactory as it’s our über-repository for all sorts of artifacts. With Docker 1.6, we decided to move to Docker registry 2.0.
While Jenkins builds containers, Rundeck is responsible for deploying them to hosts.
Integration between Jenkins and Rundeck has not been implemented yet, but it’s in the plans.
Remember the restart policies!
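For reference, the restart policies available in docker run at the time (image name hypothetical):

```shell
# Retry a crashed container up to 5 times
docker run -d --restart=on-failure:5 registry.example.com/myapp:1.5

# Always bring the container back, including after a daemon restart
docker run -d --restart=always registry.example.com/myapp:1.5
```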
Use USER in the Dockerfile, or -u with docker run.
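A minimal sketch of dropping root, with a hypothetical user name:

```dockerfile
# Create an unprivileged user and run the application as that user
RUN useradd -r appuser
USER appuser
```

The equivalent at run time, without touching the image, is `docker run -u appuser ...`.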
Don't keep state in the container:
Data volumes (though I don’t like them), or data-only containers
Write to host if you have to
Or do not containerize applications with tons of state (e.g. databases)
Put state in a networked filesystem
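The first two options in the list above look roughly like this; paths are hypothetical:

```shell
# Data volume managed by Docker (the option I don't like)
docker run -d -v /var/lib/mysql mysql:5.6

# Bind-mount a host directory instead, so the data lives on the host
docker run -d -v /srv/mysql-data:/var/lib/mysql mysql:5.6
```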
The darker side of Docker.
- Can’t sign containers
- Logging API was very much needed – and it was only just finished
Tags can be overwritten in a registry without anyone realizing. For example, it's hard to tell whether today's centos:6 is the same image as yesterday's when you make a new build. Solution: build your own base images, or keep a local copy.
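Keeping a local copy can be sketched as follows – pull once, then retag into your own registry under a date-stamped tag that you control (registry name and tag are hypothetical):

```shell
# Snapshot the upstream image so "centos:6" can't change under you
docker pull centos:6
docker tag centos:6 registry.example.com/base/centos:6-20150516
docker push registry.example.com/base/centos:6-20150516
```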
Docker is ready for production workloads; your expected level of integration and automation will define how much infrastructure “glue” you need to design and build on top – e.g. service discovery, automation, load balancing, etc.