Setting up a Docker environment for a Node.js application. Several components are used: Docker Machine, Dockerfile, Docker Compose, libnetwork, Docker Swarm, and load balancing with HAProxy / Interlock.
2. The base application
Quick introduction to Docker
The runtime environment
Build our application’s image
Publish the image to a Docker Registry
Link containers on a single Docker host
Container networking on a single Docker host
Container networking on multiple Docker hosts
Deployment on a Docker Swarm
3. Details
HTTP REST API - Node.js (Sails.js) / MongoDB
Prerequisites
nodejs 4.4.5 (LTS) - https://nodejs.org/en/
mongo 3.2 - https://docs.mongodb.org/manual/installation/
CRUD on a “Message” model
HTTP verb | URI         | Action
GET       | /message    | list all messages
GET       | /message/ID | get message with ID
POST      | /message    | create a new message
PUT       | /message/ID | modify message with ID
DELETE    | /message/ID | delete message with ID
5. Setup
usage of the Sails.js framework (the RoR of Node.js)
install Sails.js: sudo npm install sails -g (should install 0.12.3)
create the application: sails new messageApp && cd messageApp
link with a local MongoDB
usage of the sails-mongo ORM: npm install sails-mongo --save
change configuration
create API: sails generate api message
run the application: sails lift
config/models.js:
module.exports.models = {
  connection: 'mongo',
  migrate: 'safe'
};
config/connections.js:
module.exports.connections = {
  mongo: {
    adapter: 'sails-mongo',
    url: process.env.MONGO_URL || 'mongodb://localhost/messageApp'
  }
};
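As a quick sanity check that the MONGO_URL override in config/connections.js is picked up, a minimal sketch (the connection string below is just an example for a local MongoDB):
$ MONGO_URL=mongodb://localhost:27017/messageApp sails lift
# in another terminal, the API answers on the default Sails port
$ curl http://localhost:1337/message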
6. curl http://localhost:1337/message
curl -XPOST http://localhost:1337/message?text=hello
curl -XPOST http://localhost:1337/message?text=hola
curl http://localhost:1337/message
curl -XPUT http://localhost:1337/message/5638b363c5cd0825511690bd?text=hey
curl -XDELETE http://localhost:1337/message/5638b381c5cd0825511690be
curl http://localhost:1337/message
[
{
"text": "hello",
"createdAt": "2015-11-08T13:15:15.363Z",
"updatedAt": "2015-11-08T13:15:15.363Z",
"id": "5638b363c5cd0825511690bd"
},
{
"text": "hola",
"createdAt": "2015-11-08T13:15:45.774Z",
"updatedAt": "2015-11-08T13:15:45.774Z",
"id": "5638b381c5cd0825511690be"
}
]
Examples
[
{
"text": "hey",
"createdAt": "2015-11-08T13:15:15.363Z",
"updatedAt": "2015-11-08T13:19:40.179Z",
"id": "5638b363c5cd0825511690bd"
}
]
[ ]
⇒ CRUD API created with just a couple of commands with Sails.js
7. The base application
Quick introduction to Docker
The runtime environment
Build our application’s image
Publish the image to a Docker Registry
Link containers on a single Docker host
Container networking on a single Docker host
Container networking on multiple Docker hosts
Deployment on a Docker Swarm
8. Containers
[Diagram: several containers on a Linux host, each made of processes (nginx, ...), libraries (Node.js runtime, Debian libraries, …) and application code, isolated with cgroups + namespaces]
A container is a group of processes.
cgroups and namespaces are used to isolate the container from the outside:
- cgroups limit the resources (CPU, RAM, …)
- namespaces limit the visibility of the system (network, user, …)
9. Image
blueprint of a container
[Diagram: a container stack (processes such as nginx, libraries such as the Node.js runtime and Debian libraries, application code) isolated with cgroups + namespaces]
Dockerfile
text file describing the processes that will run in the container
Image
built from the instructions of the Dockerfile; an image consists of multiple read-only layers
Container
instance of an image
10. The base application
Quick introduction to Docker
The runtime environment
Build our application’s image
Publish the image to a Docker Registry
Link containers on a single Docker host
Container networking on a single Docker host
Container networking on multiple Docker hosts
Deployment on a Docker Swarm
11. Docker host
physical or virtual host running Docker Engine
easily created with Docker Machine or using Docker for Mac / Windows beta
a lot of drivers available with Docker Machine
Oracle Virtualbox
DigitalOcean
Amazon Web Services
Microsoft Azure
Google Compute Engine
...
12. Creation
locally with virtualbox driver
docker-machine create --driver virtualbox node1
setup in Docker host context
eval "$(docker-machine env node1)"
usage of regular Docker commands
get IP of newly created Docker host
docker-machine ip node1 (⇒ 192.168.99.100)
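A quick way to double-check the new Docker host and the shell context, as a sketch (hello-world is just a test image):
$ docker-machine ls                # node1 should be listed
$ docker info                      # now talks to the node1 engine
$ docker run --rm hello-world      # runs a throwaway test container on node1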
13. The base application
Quick introduction to Docker
The runtime environment
Build our application’s image
Publish the image to a Docker Registry
Link containers on a single Docker host
Container networking on a single Docker host
Container networking on multiple Docker hosts
Deployment on a Docker Swarm
14. One image for the application, one image for the database
avoid adding too many services in a single image
usage of 2 images to package the application
one image for the database
one image for the application
application: several possibilities
extend official Linux distribution image (Ubuntu, CentOS, ...) with Node.js runtime
usage of the official Node.js image (https://hub.docker.com/_/node/)
Database
usage of the official MongoDB image
15. Dockerfile
text file describing all the commands needed to create an image
Dockerfile for our application
usage of the official node:4.4.5 (LTS) image
copy application sources
install dependencies
expose port to the outside from the Docker host
default command run when instantiating the image
Create the image
docker build -t message-app .
List all images available on the Docker host
docker images
⇒ message-app image created
# Use node 4.4.5 LTS
FROM node:4.4.5
ENV LAST_UPDATED 20160605T165400
# Copy source code
COPY . /app
# Change working directory
WORKDIR /app
# Install dependencies
RUN npm install
# Listen on port 80 and expose it to the outside
ENV PORT 80
EXPOSE 80
# Launch application
CMD ["npm","start"]
Dockerfile
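To keep the build context small (and avoid copying a locally installed node_modules folder into the image before RUN npm install), a .dockerignore file can be placed next to the Dockerfile; a minimal sketch:
.dockerignore:
node_modules
.tmp
.git
*.log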
16. Let’s instantiate a container
$ docker run message-app
npm info it worked if it ends with ok
...
error: A hook (`orm`) failed to load!
error: Error: Failed to connect to MongoDB. Are you sure your configured Mongo instance is running?
Error details:
{ [MongoError: connect ECONNREFUSED 127.0.0.1:27017]
name: 'MongoError',
message: 'connect ECONNREFUSED 127.0.0.1:27017' }]
originalError:
{ [MongoError: connect ECONNREFUSED 127.0.0.1:27017]
name: 'MongoError',
message: 'connect ECONNREFUSED 127.0.0.1:27017' } }
18. The base application
Quick introduction to Docker
The runtime environment
Build our application’s image
Publish the image to a Docker Registry
Link containers on a single Docker host
Container networking on a single Docker host
Container networking on multiple Docker hosts
Deployment on a Docker Swarm
19. Why is that needed?
provide access to the packaged application
public or private access
possible to use tags to handle all the versions of the application
format ⇒ username/image:tag (note: official images do not have the username prefix, e.g. mongo, redis, ...)
mongo:3.2
lucj/message-app (same as lucj/message-app:latest)
a GitHub account can be linked to Docker Hub
build can be automatically triggered on a git push command
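For example, a hypothetical tagging workflow for versioning the application image (the tag names below are purely illustrative):
$ docker build -t lucj/message-app:1.0 .
$ docker tag lucj/message-app:1.0 lucj/message-app:latest
$ docker push lucj/message-app:1.0
$ docker push lucj/message-app:latest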
20. Creation of a repository on Docker Public Registry
hub.docker.com
list of user’s repositories
repository details
repository created
⇒ the newly created repository will contain all the versions of the application’s image
21. Publish image
the image needs to be created using the username of the Docker Hub account
docker build -t lucj/message-app .
authentication
docker login
publication
docker push lucj/message-app
the image (public) can now be used from any Docker host
docker pull lucj/message-app
docker run -dP lucj/message-app (will start with an error as no database information is provided)
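A sketch to inspect what happened on the pulling host (the container name msg is arbitrary):
$ docker run -dP --name msg lucj/message-app
$ docker port msg      # shows which random host port was mapped to port 80
$ docker logs msg      # shows the MongoDB connection error mentioned above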
22. The base application
Quick introduction to Docker
The runtime environment
Build our application’s image
Publish the image to a Docker Registry
Link containers on a single Docker host
Container networking on a single Docker host
Container networking on multiple Docker hosts
Deployment on a Docker Swarm
23. docker run --link
mongoDB container: docker run --name mongoDB -d mongo:3.0
container link to mongoDB: docker run -ti --link mongoDB:db busybox /bin/sh
what’s inside the second container?
/ # env
HOSTNAME=466ad6b628d1
DB_PORT=tcp://172.17.0.1:27017
DB_NAME=/furious_tesla/db
DB_PORT_27017_TCP_ADDR=172.17.0.1
DB_PORT_27017_TCP_PORT=27017
DB_PORT_27017_TCP_PROTO=tcp
DB_PORT_27017_TCP=tcp://172.17.0.1:27017
DB_ENV_MONGO_VERSION=3.0.7
...
Environment variables and /etc/hosts are automatically modified within the second container when the --link option is used
/ # cat /etc/hosts
172.17.0.5 466ad6b628d1
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
172.17.0.1 db c99a75a05c4a mongoDB
172.17.0.1 mongoDB
172.17.0.1 mongoDB.bridge
172.17.0.5 furious_tesla
172.17.0.5 furious_tesla.bridge
...
⇒ DB_PORT_27017_TCP_ADDR and DB_PORT_27017_TCP_PORT need to be used by the application to connect to the mongoDB container
24. Link application with database
[Diagram: the message-app container (port 80, host 172.17.0.28) is linked to the mongo container (port 27017) on the Docker host; DB_PORT_27017_TCP_ADDR and DB_PORT_27017_TCP_PORT are injected into the application container]
module.exports.connections = {
  someMongodbServer: {
    adapter: 'sails-mongo',
    host: process.env.DB_PORT_27017_TCP_ADDR || 'localhost',
    port: process.env.DB_PORT_27017_TCP_PORT || 27017,
    database: 'messageApp'
  }
};
modification of the config/connections.js file
⇒ connect to the MongoDB database using environment variables imported into the application container
25. Update application image
update Timestamp (LAST_UPDATED) within the application Dockerfile
ex: ENV LAST_UPDATED 20151108T203800
this invalidates the cache (the layers cached during the previous builds are not used)
creation and publication of the new image version
docker build -t lucj/message-app . && docker push lucj/message-app
run application container
docker run -p 8000:80 --link mongoDB:db lucj/message-app
⇒ application available on port 8000 on the Docker host (192.168.99.100)
[Diagram: the message-app container (port 80, host 172.17.0.28) is published on port 8000 of the Docker host and linked to the mongo container (port 27017)]
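A quick check of the linked setup, assuming the Docker host IP obtained earlier with docker-machine ip node1:
$ curl -XPOST http://192.168.99.100:8000/message?text=hello
$ curl http://192.168.99.100:8000/message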
27. The base application
Quick introduction to Docker
The runtime environment
Build our application’s image
Publish the image to a Docker Registry
Link containers on a single Docker host
Container networking on a single Docker host
Container networking on multiple Docker hosts
Deployment on a Docker Swarm
28. Default networks
3 default networks on node1 Docker host
$ docker network ls
NETWORK ID NAME DRIVER
d87b8fc4c466 bridge bridge
efaf610f57a5 host host
f7d0de539edd none null
By default, the Docker Engine attaches each container to the bridge network
29. Default bridge network
The Docker Engine attaches each container to the default bridge network
$ docker run --name mongo -d mongo:3.2
$ docker run --name box -d busybox top
$ docker network inspect --format='{{json .Containers}}' d87b8fc4c466 | python -m json.tool
{
"0b8fedf4613c7275d89861037ea1b23ad4d65ab10f16df67bf976d9cb5652311": {
"EndpointID":
"0cf0cd3b2e0438c6f68c6a1e2f7587b63c48bda74911af55d1040f0d2fb117d2",
"IPv4Address": "172.17.0.3/16",
"IPv6Address": "",
"MacAddress": "02:42:ac:11:00:03",
"Name": "mongo"
},
"6cb5e5f4a1bcc37925407b39f2dde41f2b370fc48a21f8289da91d17b3763a4c": {
"EndpointID":
"2a6412d3c3c25545a59ea148e317b2046965c0fe5c1eeae2c51f4f882aaa6b36",
"IPv4Address": "172.17.0.2/16",
"IPv6Address": "",
"MacAddress": "02:42:ac:11:00:02",
"Name": "box"
}
}
$ docker run -ti busybox /bin/sh
/ # ping mongo
ping: bad address 'mongo'
/ # ping box
ping: bad address 'box'
A container cannot be addressed by its name :(
30. User defined bridge network
Create a bridge network with Docker network commands
Run container in the new network
$ docker network create mongonet
ce9ea3b69d6ee2ecf56b40bd35b8a43f8505c8ca0473bc37bdede3711ecf60c1
$ docker network ls
NETWORK ID NAME DRIVER
d87b8fc4c466 bridge bridge
efaf610f57a5 host host
ce9ea3b69d6e mongonet bridge
f7d0de539edd none null
$ docker run --name mongo --net mongonet -d mongo:3.2
$ docker run --net mongonet -ti busybox /bin/sh
/ # ping -c 3 mongo
PING mongo (172.18.0.2): 56 data bytes
64 bytes from 172.18.0.2: seq=0 ttl=64 time=0.058 ms
64 bytes from 172.18.0.2: seq=1 ttl=64 time=0.085 ms
64 bytes from 172.18.0.2: seq=2 ttl=64 time=0.072 ms
--- mongo ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.058/0.071/0.085 ms
Containers can be addressed by their name through the DNS name server embedded in Docker 1.10+
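An already running container can also be attached to the user-defined network afterwards; for instance the box container created earlier (a sketch):
$ docker network connect mongonet box
$ docker exec box ping -c 1 mongo      # box can now resolve mongo by name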
31. Test our application
Run the db and application containers in the new bridge network
$ docker run --name mongo --net mongonet -d mongo:3.2
$ docker run --name app --net mongonet -p "8000:80" -d -e "MONGO_URL=mongodb://mongo/messageApp" message-app:v1
Use the mongo container’s name in the MONGO_URL environment variable
Test the HTTP REST API
$ curl -XPOST http://192.168.99.100:8000/message?text=hello
{
"text": "hello",
"createdAt": "2016-06-06T14:01:05.764Z",
"updatedAt": "2016-06-06T14:01:05.764Z",
"id": "57558221a4461312009ce88c"
}
$ curl -XGET http://192.168.99.100:8000/message
[
{
"text": "hello",
"createdAt": "2016-06-06T14:01:05.764Z",
"updatedAt": "2016-06-06T14:01:05.764Z",
"id": "57558221a4461312009ce88c"
}
]
Application container is connected to mongo container using container name
32. Packaging of the application with Docker Compose
packaging of a multi-container application in a single file
docker-compose.yml
database container
api container
version: '2'
services:
  mongo:
    image: mongo:3.2
    volumes:
      - mongo-data:/data/db
    expose:
      - "27017"
  app:
    image: message-app:v1
    ports:
      - "80"
    links:
      - mongo
    depends_on:
      - mongo
    environment:
      - MONGO_URL=mongodb://mongo/messageApp
volumes:
  mongo-data:
Internal port of app container is mapped to a random port on the host
Volume used to mount mongodb data folder
Application container is connected to mongo container using container name
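A few Docker Compose commands that help when trying this file out (a sketch; run from the directory containing docker-compose.yml):
$ docker-compose config          # validate the compose file
$ docker-compose up -d
$ docker-compose port app 80     # shows the random host port mapped to the app's port 80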
33. Lifecycle and scalability
lifecycle
docker-compose up (the -d option runs the application in the background)
docker-compose ps
docker-compose stop
scalability
docker-compose scale app=3
how are the new containers found?
⇒ need to use a load balancer that will be updated each time a container is created or removed
[Diagram: three message-app containers (internal port 80) mapped to host ports 32768 / 32769 / 32770, plus the mongo container (port 27017), on the Docker host]
34. Usage of dockercloud/haproxy image
listen to all Docker Engine events
http://docs.docker.com/engine/reference/commandline/events/
automatic update of the load balancer configuration when a container is created or removed
[Diagram: a dockercloud/haproxy container (port 80, published on host port 8000) linked to three message-app containers (port 80, IPs 172.17.0.30 / .31 / .32) and the mongo container (port 27017) on the Docker host]
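To get an idea of the events the proxy reacts to, the Docker CLI can stream them directly; a sketch:
$ docker events --filter event=start --filter event=die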
35. Adding load balancer to docker-compose.yml
Load balancer exposes port 8000 to the outside
App container only exposes port 80 internally
Services communicate with each other through their names (using the Docker Engine embedded DNS name server)
version: '2'
services:
  mongo:
    image: mongo:3.2
    volumes:
      - mongo-data:/data/db
    expose:
      - "27017"
  lbapp:
    image: dockercloud/haproxy
    links:
      - app
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - "8000:80"
  app:
    image: message-app
    expose:
      - "80"
    links:
      - mongo
    depends_on:
      - mongo
    environment:
      - MONGO_URL=mongodb://mongo/messageApp
volumes:
  mongo-data:
36. Test our application
Run the new version of the compose file
docker-compose up
docker-compose scale app=3
Test the HTTP REST API
$ curl -XPOST http://192.168.99.100:8000/message?text=hola
{
"text": "hola",
"createdAt": "2016-06-08T13:30:18.298Z",
"updatedAt": "2016-06-08T13:30:18.298Z",
"id": "57581deacde05a1200877fa2"
}
$ curl -XGET http://192.168.99.100:8000/message
[
{
"text": "hola",
"createdAt": "2016-06-08T13:30:18.298Z",
"updatedAt": "2016-06-08T13:30:18.298Z",
"id": "57581deacde05a1200877fa2"
}
]
37. The base application
Quick introduction to Docker
The runtime environment
Build our application’s image
Publish the image to a Docker Registry
Link containers on a single Docker host
Container networking on a single Docker host
Container networking on multiple Docker hosts
Deployment on a Docker Swarm
38. Prerequisite
Docker 1.9+
multihost networking available out of the box with libnetwork
need to set up a key-value store
eg: etcd / consul / zookeeper
keeps all the information regarding
networks / subnetworks
IP addresses of Docker hosts / containers
…
39. Creation of a key-value store
creation of a Docker host
docker-machine create -d virtualbox consul
switch to context of newly created machine
eval "$(docker-machine env consul)"
run container based on Consul image
docker run -d -p "8500:8500" -h "consul" progrium/consul -server -bootstrap
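To check that Consul is up before going further, its HTTP API can be queried from the local machine (a sketch):
$ curl http://$(docker-machine ip consul):8500/v1/status/leader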
40. Creation of the Docker hosts
$ docker-machine create \
  -d virtualbox \
  --engine-opt="cluster-store=consul://$(docker-machine ip consul):8500" \
  --engine-opt="cluster-advertise=eth1:2376" \
  host1
$ docker-machine create \
  -d virtualbox \
  --engine-opt="cluster-store=consul://$(docker-machine ip consul):8500" \
  --engine-opt="cluster-advertise=eth1:2376" \
  host2
$ docker $(docker-machine config host1) network ls
NETWORK ID NAME DRIVER
14753b15c63e bridge bridge
2cc7d35a48e3 none null
ad05eeca763a host host
$ docker $(docker-machine config host2) network ls
NETWORK ID NAME DRIVER
b7765c98adbf bridge bridge
48244d2fca3b none null
36a3858b68c8 host host
default networks available on each host (HOST1 / HOST2): bridge / none / host
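The cluster-store / cluster-advertise options can be verified on each host with docker info (the exact wording of the output depends on the Docker version); a sketch:
$ docker $(docker-machine config host1) info | grep -i cluster
$ docker $(docker-machine config host2) info | grep -i cluster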
41. Creation of an overlay network
creation of a network from host1
docker $(docker-machine config host1) network create -d overlay appnet
new network also visible from host2
$ docker $(docker-machine config host1) network ls
NETWORK ID NAME DRIVER
acd47b4c062d appnet overlay
14753b15c63e bridge bridge
2cc7d35a48e3 none null
ad05eeca763a host host
$ docker $(docker-machine config host2) network ls
NETWORK ID NAME DRIVER
acd47b4c062d appnet overlay
b7765c98adbf bridge bridge
48244d2fca3b none null
36a3858b68c8 host host
42. Creation of the containers
run mongo container on appnet network from host1
docker $(docker-machine config host1) run -d --name mongo --net=appnet mongo:3.0
run busybox container on appnet network from host2
docker $(docker-machine config host2) run -ti --name box --net=appnet busybox sh
“box” container can communicate with “mongo” container using its name, through the DNS name server embedded in Docker 1.10+
/ # ping mongo
PING mongo (10.0.0.2): 56 data bytes
64 bytes from 10.0.0.2: seq=0 ttl=64 time=0.553 ms
…
/ # ping mongo.appnet
PING mongo.appnet (10.0.0.2): 56 data bytes
64 bytes from 10.0.0.2: seq=0 ttl=64 time=0.474 ms
…
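The same idea applies to the application itself; a sketch that runs the API on host2 against the mongo container on host1, both attached to appnet (image name and port mapping as used earlier):
$ docker $(docker-machine config host2) run -d --name app --net appnet \
    -p 8000:80 -e "MONGO_URL=mongodb://mongo/messageApp" lucj/message-app
$ curl http://$(docker-machine ip host2):8000/message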
43. The base application
Quick introduction to Docker
The runtime environment
Build our application’s image
Publish the image to a Docker Registry
Link containers on a single Docker host
Container networking on a single Docker host
Container networking on multiple Docker hosts
Deployment on a Docker Swarm
44. Docker Swarm
Docker hosts cluster
one or several swarm masters (for HA)
orchestrator / scheduler
failover
one Swarm agent per node
easy to create with Docker Machine
integration of Docker Machine / Docker Compose / Docker Swarm
45. Creation of a key-value store
creation of a Docker host
docker-machine create -d virtualbox consul
switch to context of newly created machine
eval "$(docker-machine env consul)"
run container based on Consul image
docker run -d -p "8500:8500" -h "consul" progrium/consul -server -bootstrap
46. Creation of a swarm
$ docker-machine create \
  -d virtualbox \
  --swarm \
  --swarm-master \
  --swarm-discovery="consul://$(docker-machine ip consul):8500" \
  --engine-opt="cluster-store=consul://$(docker-machine ip consul):8500" \
  --engine-opt="cluster-advertise=eth1:2376" \
  demo0
$ docker-machine create \
  -d virtualbox \
  --swarm \
  --swarm-discovery="consul://$(docker-machine ip consul):8500" \
  --engine-opt="cluster-store=consul://$(docker-machine ip consul):8500" \
  --engine-opt="cluster-advertise=eth1:2376" \
  demo1
demo0: swarm master / demo1: swarm agent
$ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM
consul * virtualbox Running tcp://192.168.99.100:2376
demo0 - virtualbox Running tcp://192.168.99.101:2376 demo0 (master)
demo1 - virtualbox Running tcp://192.168.99.102:2376 demo1
⇒ 3 Docker hosts created (key-value store, Swarm master, Swarm node)
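A sketch to verify the swarm from the client side (classic Swarm): switch to the swarm context and look at docker info, which should list demo0 and demo1 as nodes:
$ eval $(docker-machine env --swarm demo0)
$ docker info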
47. Create DNS load balancer
nginx.conf:
user nginx;
worker_processes 2;
events {
  worker_connections 1024;
}
http {
  access_log /var/log/nginx/access.log;
  error_log /var/log/nginx/error.log;
  # 127.0.0.11 is the address of the Docker embedded DNS server
  resolver 127.0.0.11 valid=1s;
  server {
    listen 80;
    # apps is the name of the network alias in Docker
    set $alias "apps";
    location / {
      proxy_pass http://$alias;
    }
  }
}
Dockerfile:
FROM nginx:1.9
# forward request and error logs to docker log collector
RUN ln -sf /dev/stdout /var/log/nginx/access.log
RUN ln -sf /dev/stderr /var/log/nginx/error.log
COPY nginx.conf /etc/nginx/nginx.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
# Create image
$ docker build -t lucj/lb-dns .
# Publish image
$ docker push lucj/lb-dns
48. Update docker-compose.yml
version: '2'
services:
  mongo:
    image: mongo:3.2
    networks:
      - backend
    volumes:
      - mongo-data:/data/db
    expose:
      - "27017"
    environment:
      - "constraint:node==demo0"
  lbapp:
    image: lucj/lb-dns
    networks:
      - backend
    ports:
      - "8000:80"
    environment:
      - "constraint:node==demo0"
  app:
    image: lucj/message-app
    expose:
      - "80"
    environment:
      - MONGO_URL=mongodb://mongo/messageApp
      - "constraint:node==demo1"
    networks:
      backend:
        aliases:
          - apps
    depends_on:
      - lbapp
volumes:
  mongo-data:
networks:
  backend:
    driver: overlay
use the lb-dns load balancer
add constraints to choose the nodes
a new user-defined overlay network is created
no need to use links between containers
get the images from Docker Hub
49. Deployment and scaling of the application
switch to the swarm master context
eval $(docker-machine env --swarm demo0)
run application using networking option
docker-compose up
scaling
docker-compose scale app=5
messageApp API is available at http://192.168.99.101:8000/message (IP of the swarm master, port of the load balancer)
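A quick check of the deployed application, assuming the swarm master IP shown above (192.168.99.101):
$ docker-compose ps      # shows on which node each container was scheduled
$ curl -XPOST http://192.168.99.101:8000/message?text=hello
$ curl http://192.168.99.101:8000/message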
50. In summary
set up Docker for a simple Node.js / MongoDB application
created an image for the application
containing all the parts needed to run the application (Node.js runtime, libraries, application code)
portable image (dev / test / qa / prod) available through the Docker Hub
scalability of the application (API) on a cluster of Docker hosts
several Docker components well integrated together
51. Next
scalability of the database tier
add a web front-end that uses the API
add centralized log management
ELK stack (Elasticsearch / Logstash / Kibana)
add a monitoring solution for all the running containers
Add a TLS termination (using https-portal)