Jeremy Gimbel of Vector Media Group at ExpressionEngine Conference 2018
For years, I used MAMP and later Vagrant to run my local development environment. With MAMP, I was constantly cluttering my computer with additional dependencies and living in fear of what would happen when my code went live on staging and production servers wildly different from my local setup. Vagrant was a slight improvement, but the virtual machines were monolithic and hard to build. Like many, my first few attempts at Docker failed miserably and left me with more questions than I had going in, and very few answers. Through much research and the guiding voices of my colleagues, I’ve finally managed to wrangle the beast that is Docker into a development environment that is more flexible than ever before and yet easy to use. In this session I will walk attendees through the basics of Docker and the components of my Docker development environment, and help guide attendees around some of the pitfalls I came across while setting it up.
2. Obligatory “Who Am I” Slide
● Husband
● Father
● Based in Philadelphia
● Have worked with ExpressionEngine since 2010
● I was the MojoAddons guy
● Web Services Manager at Vector Media Group
9. What is this “Docker” you speak of?
Docker is a virtual computing platform.
The big idea behind it is containerization.
It creates a virtual host machine on your physical computer.
Application-specific containers run your web server stack.
10. My First Look at Docker
● Image
● Machine
● Docker Compose
● Registry
● Services
● Container
● Dockerfile
● Network
● Swarm
16. Docker Compose
Docker Compose is a tool within Docker that lets you define
and run multi-container applications.
Applications are defined inside a file called docker-compose.yml.
The containers inside an application are called services.
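As a sketch of what such a file looks like, here is a minimal docker-compose.yml for the Apache + PHP-FPM + Redis example mentioned in the speaker notes. The image tags and service names here are my own illustrative choices, not from the Dash repo:

```yaml
version: '3'
services:
  apache:
    image: httpd:2.4        # official Apache image
    ports:
      - "80:80"             # map container port 80 to the host
  php:
    image: php:7.2-fpm      # PHP-FPM service, reached by Apache over the network
  redis:
    image: redis:4          # in-memory cache/store
```

Each top-level key under `services` becomes one container when the application starts.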
25. Nginx Proxy? DNSMasq? Oh my!
[Diagram: Your Local Machine forwards ports 80, 443, and 3306 to the Docker Machine, which runs two applications side by side: Your Application (Apache + PHP-FPM) and Another Application (Nginx + PHP-FPM).]
26. Nginx Proxy? DNSMasq? Oh my!
[Diagram: the same setup with the Dash added. Nginx Proxy, DNSMasq, MySQL, MailHog, and PHPMyAdmin run inside the Docker Machine, sitting between Your Local Machine’s ports (80, 443, 3306) and the application containers: Your Application (Apache + PHP-FPM) and Another Application (Nginx + PHP-FPM).]
28. Setting Up Dash
1. Install Docker (duh)
2. Clone the Dash repo:
https://github.com/dreadfullyposh/dash
3. Add your Dash directory to your path
29. Adding Dash to your Path
1. Open Terminal
2. cd ~ <enter>
3. nano .bash_profile <enter>
4. Add a line like this: PATH=$PATH:/path/to/dash
<control-x> <y> <enter>
5. Exit Terminal
6. echo $PATH <enter> to confirm the PATH variable has been
updated.
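The steps above amount to appending one line to your shell profile. As a copy-pasteable sketch (the /path/to/dash location is a placeholder for wherever you cloned the repo):

```shell
# Add the Dash directory to PATH for the current session.
# Putting this same line in ~/.bash_profile makes it permanent.
export PATH="$PATH:/path/to/dash"

# Confirm the PATH was updated
echo "$PATH" | grep -q "/path/to/dash" && echo "dash is on the PATH"
```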
30. Adding Dash to your Path
At this point you should be able to type dev <enter> in your
terminal window and see a bunch of information display.
jeremy@mac ~> dev
Execute various commands within the developer environment
Usage:
dev [options] [COMMAND] [ARGS...]
31. Setting Up Dash
1. Install Docker (duh)
2. Clone the Dash repo: github.com/dreadfullyposh/dash
3. Add your Dash directory to your path
4. Create a Docker network for your Dash setup by running
docker network create dash
32. Setting Up Dash
1. Install Docker (duh)
2. Clone the Dash repo: github.com/dreadfullyposh/dash
3. Add your Dash directory to your path
4. Create a Docker network for your Dash setup by running
docker network create dash
5. Configure your Mac to resolve *.test with DNSMasq
33. Configure Resolver
1. Open Terminal
2. cd /etc <enter>
3. mkdir resolver <enter>
4. cd resolver <enter>
5. nano test <enter>
6. Type nameserver 127.0.0.1
7. <control-x> <y> <enter>
8. Restart, in honor of our Windows friends
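The resolver steps boil down to creating a single one-line file. Here is the same thing as a sketch; on a real Mac the target directory is /etc/resolver (which needs sudo), and the RESOLVER_DIR variable below is only so the sketch can run anywhere:

```shell
# On macOS the real target is /etc/resolver (requires sudo);
# a demo directory is used here so the sketch is safe to run.
RESOLVER_DIR="${RESOLVER_DIR:-/tmp/resolver-demo}"
mkdir -p "$RESOLVER_DIR"

# macOS will send DNS lookups for *.test to this nameserver
printf 'nameserver 127.0.0.1\n' > "$RESOLVER_DIR/test"

cat "$RESOLVER_DIR/test"
```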
34. Setting Up Dash
1. Install Docker (duh)
2. Clone the Dash repo: github.com/dreadfullyposh/dash
3. Add your Dash directory to your path
4. Create a Docker network for your Dash setup by running
docker network create dash
5. Configure your Mac to resolve *.test with DNSMasq
6. Setup SSL
35. Setup SSL
1. Open Terminal
2. cd /your/dash/directory/certs <enter>
3. ./generatecertificate.sh <enter>
4. Open Keychain Access
5. Drag the .crt file from the certs directory to the Keychain
Access window.
6. Double click the certificate
7. Edit the Trust option to Always
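For reference, a self-signed wildcard certificate like the one the repo's generatecertificate.sh produces can be created with a single openssl command. This is my own hedged approximation of what that script does; the actual script in the repo may use different options:

```shell
# Work in a scratch directory so nothing in the repo is touched
CERT_DIR="$(mktemp -d)"

# Self-signed wildcard certificate for *.dev.test (825 days validity)
openssl req -x509 -newkey rsa:2048 -nodes -days 825 \
  -keyout "$CERT_DIR/dev.test.key" -out "$CERT_DIR/dev.test.crt" \
  -subj "/CN=*.dev.test" 2>/dev/null

# Inspect the subject to confirm the wildcard was applied
openssl x509 -in "$CERT_DIR/dev.test.crt" -noout -subject
```

Because the certificate covers `*.dev.test`, every project hostname must end in `.dev.test` to be trusted.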
36. Start it Up
Open Terminal
dev dash up <enter>
jeremy@mac ~> dev dash up
Creating dash_dnsmasq_1 ... done
Creating dash_mysql_1 ... done
Creating dash_nginx_1 ... done
Creating dash_phpmyadmin_1 ... done
Creating dash_mailhog_1 ... done
37. But.. what did that do? - Looking at dash.yml
version: '3'
services:
  # Nginx Proxy
  nginx:
    build: ./docker/nginx
    ports:
      - "80:80"
38. Looking at dash.yml
      - "443:443"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ./certs:/etc/nginx/certs
    restart: always
  # DNSMasq Server
  dnsmasq:
55. Setting Up a Project
1. Copy example/docker-compose.yml to your project
directory.
2. Configure docker-compose.yml
3. Start project
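Based on the configuration described in the speaker notes (a webdevops php-apache image, a document root one level below the repo, a VIRTUAL_HOST for nginx-proxy, and the shared dash network), a configured project docker-compose.yml looks roughly like this. The image tag and hostname are illustrative; start from the repo's example file rather than this sketch:

```yaml
version: '3'
services:
  web:
    image: webdevops/php-apache:7.2     # other stacks: hub.docker.com/r/webdevops/
    environment:
      WEB_DOCUMENT_ROOT: /app/public    # keeps the repo above the document root
      VIRTUAL_HOST: myproject.dev.test  # nginx-proxy routes requests on this hostname
      HTTPS_METHOD: redirect            # force https; adjust for http-only or both
    volumes:
      - .:/app                          # map the project directory into the container
    networks:
      - dash

networks:
  dash:
    external: true                      # join the shared Dash network
```

The hostname must end in `.dev.test` so it matches the wildcard certificate from the SSL setup step.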
56. Start your Project
1. Open Terminal
2. cd /your/project/directory <enter>
3. dev up <enter>
jeremy@mac ~> dev up
dash_nginx_1 is up-to-date
dash_dnsmasq_1 is up-to-date
dash_mysql_1 is up-to-date
dash_mailhog_1 is up-to-date
dash_phpmyadmin_1 is up-to-date
Starting projectdir_web_1 ... done
57. Connecting to MySQL
Since MySQL is in the Dash, you can’t just connect to it via
localhost.
From Web Container
Host: mysql
Username: root
Password: root
From Local Computer (Sequel Pro)
Host: 127.0.0.1
Username: root
Password: root
58. Connecting to MailHog
With MailHog running, we have a fake SMTP server running
so we can monitor any messages sent from our application.
From Application
SMTP Server: mailhog
Port: 1025
Access MailHog From Local Machine
Host: https://mailhog.dev.test
59. Connecting to PHPMyAdmin
PHPMyAdmin is available in the Dash if you don’t want to
use Sequel Pro.
Access PHPMyAdmin From Local Machine
Host: https://phpmyadmin.dev.test
The username and password are already defined, so you
should be logged in automatically.
61. Gotchas
● Make sure you aren’t running MAMP or any other servers
on the same ports: 80, 443, 53, and 3306.
● Make sure you dev down your projects when you’re not
using them. The more containers you run, the more of
your computer’s resources will be tied up running Docker.
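A quick way to check those ports before starting Dash is a loop over lsof (a sketch; the lsof flags are as on macOS and most Linux systems):

```shell
# Report whether anything is already listening on the ports Dash maps
for port in 80 443 53 3306; do
  if lsof -nP -iTCP:"$port" -sTCP:LISTEN >/dev/null 2>&1; then
    echo "port $port is in use"
  else
    echo "port $port is free"
  fi
done
```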
62. Other Useful Tips
To shut down a project: dev down
To shut down the Dash: dev dash down
Make sure to download and install Kitematic, available from
the Docker menubar icon.
To run a command inside a container in your project:
dev exec servicename somecommand
63. Where to Go From Here
Now that you’ve got a base setup running, you can extend it
by adding more containers. Here are some ideas.
64. Where to Go From Here
Need to install Node dependencies? Add a node container.
Then run dev run node yarn install
node:
  image: node:8
  volumes:
    - .:/opt
  working_dir: /opt
65. Where to Go From Here
Need to share your development environment with others?
Install an ngrok container.
ngroktunnel:
  image: gtriggiano/ngrok-tunnel:development
  ports:
    - "4040"
  environment:
    NGROK_REGION: us
    TARGET_HOST: web
    TARGET_PORT: 80
    VIRTUAL_HOST: appname-ngrok.dev.vmg
  depends_on:
    - web
66.
67. Wait.. what were those links again?
Servers for Hackers Containers Courses
https://serversforhackers.com/t/containers
Dash Repo on Github
https://github.com/dreadfullyposh/dash
WebDevOps Docker Images
https://hub.docker.com/r/webdevops/
Also, please leave speaker feedback
https://eeconf.com/speaker-feedback
Editor's Notes
Hi, I’m Jeremy. And here’s my obligatory who am i slide
<click>
I’m not a husband.
<click>
Not a father. Sorry, I borrowed this template from every other tech conference speaker.
<click>
I live and work in Philadelphia.
<click>
I’m a bit of an EE veteran, though I consider myself an equal opportunity developer. I’ve been using EE since 2010.
<click>
I was also briefly known as the mojoaddons guy *laugh track*
<click>
I’m the Web Services Manager at Vector Media Group in New York.
Before we get into talking about Docker, let’s look at what I call the development environment progression.
This is the path I took to finally arrive at docker, and i’m sure many of you can commiserate
I think we all start the same way. Editing live on the server with FTP.
The Do it Live™ approach is pretty typical for a person just dipping their toes into development for the first time.
It’s scary, dangerous.. Generally awful. If this is your approach to web development, please stop.
Eventually we get frustrated with doing it live. Or we have a colossal disaster that forces us to change our ways.
So we look for ways to develop locally. I stumbled on MAMP pretty early on, but others just install server software locally. Either way it’s pretty much the same effect.
It works pretty well. It leads its way into better development practices like version control, deployment processes, etc.
But then you need some random PHP module. Anyone installed ionCube? Yeah, it’s fun.
Or you have a client that’s running nginx, or a different PHP version on their server and something doesn’t work right there “but it works for me.”
Or you upgrade OS X. Womp. Womp.
You can only have your local development environment implode on you so many times before you start thinking “there’s got to be a better way to do this.”
You might have started building your own Vagrant box, or you might have used someone else’s like Laravel Homestead or Scotchbox.
Now you’re cooking. You have a virtual machine based development environment that is portable and repeatable.
But...
… your development environment still doesn’t match your production servers. And your hard drive is 97.3% full. And if you want to swap out just one thing, like a PHP version then you’re in for quite a ride.
Now what?
That’s how I got to Docker. The promised land.
So before we get into the specifics here, let’s talk about what exactly IS Docker?
Docker is a virtual computing platform.
<click>
The big idea behind it is containerization
<click>
It creates a virtual host machine on your physical computer.
<click>
Application-specific containers run your web server stack. The containers are lighter weight than a full virtual machine since they don’t run a full OS on their own.
All that sounds good, but if you’ve taken a look at Docker before you start to see all these terms.
<click>
There are a lot of them
If you’re like me, you got bogged down with all of these different terms, what they mean and how they related to each other.
But it’s ok, we’ll get through this.
We have to.
Or else we’ll never make it to the after party...
How does Docker work?
Let's go through the basics.
The key thing you need to understand is that Docker works in layers.
<click>
Dockerfiles are the base layer. A Dockerfile is a set of instructions to build an image.
<click>
An image is built from a Dockerfile and is a package that contains everything needed to run a container.
<click>
A container is a running instance of an image.
But it’s not quite that simple.
<click>
Dockerfiles can actually be built on top of other Dockerfiles.
<click>
So for example, the official Dockerfile for Apache is based on a Debian Dockerfile, which is based on another Debian Dockerfile, which is based on Scratch (Docker’s base empty image).
The layers can be endless.
Next let’s look at docker compose.
Docker compose is a tool that lets you define multi-container applications.
These applications get defined in a docker-compose.yml file, which configures all the settings needed to run one or more containers that make up your application.
Important terminology sidenote: The containers defined in the application’s docker compose file are called services.
Here’s a sample application that you might define with docker compose.
You can see it has three containers.
One for Apache, one for PHP-FPM and one for Redis.
An application could have more or fewer containers depending on what you need.
We’ll look at how you actually define the application a bit later.
If you’re serious about docker, I’d suggest reading through some tutorials to continue to expand your knowledge on the core concepts.
The approach I’m about to show you is fairly self contained and automated, so it’ll shield you from having to know a lot of the details of dealing with Docker containers, but it’s really helpful background to have, especially as you start using the tool and want to expand your usage of it.
I definitely recommend checking out Servers for Hackers if you haven’t already. They have two series available about Docker for Development.
I’ll put this link up again later if you don’t catch it now
Now for the fun part.
Now that we have some docker basics under our belt, I’d like to unveil to you my docker-based local development environment
Dash
However before I do that, two brief disclaimers:
Dash is one way to use Docker for development. It’s certainly not the only way. But I feel like it’s a good approach for someone just coming from MAMP or a Vagrant box and wanting to make as simple a move as possible to Docker. Please don’t come at me if this isn’t how you’d do it.
I haven’t tested this outside of a Mac. I’m sure the core of it will work fine, but some of the finer details will need to be worked out on each operating system to make it work as smoothly as it does in OSX. Feel free to contribute to the repository!
That aside let’s go on.
Dash is an approach to using Docker
and an accompanying shell script, that was originally written by IFTTT
It creates a “dash”, which is a persistent stack of services used across all of your projects
<click>
and then you have project-specific services that you configure for each project you work on
<click>
The dash shell script is really just a wrapper for docker-compose which you install globally. It allows you to control both the dash and your projects with the same script, without having to change directories all the time in your terminal.
So you can see the dash here on the left and then your applications over here on the right.
As you take that in, I’m sure you’re looking at the containers in the dash and feeling a little concerned.
Nginx proxy? DNSMasq? What on earth are they for?
Let’s look at that a little closer
As I mentioned, my first efforts to use Docker for local dev failed. This happened because I wasn’t able to get my head around how to run multiple projects in parallel.
As we talked about earlier, all of your containers run inside the Docker machine. But only the machine has network access directly to your computer.
You can quickly see the problem is that you can only map a port to one container at a time. This doesn’t work so well when you have multiple applications running and needing to be accessed from the browser. Does anyone here have the luxury of working on just one project at a time?.. Didn’t think so.
Dash solves this by adding the nginx proxy into the mix.
The proxy will look at the hostname of the incoming request and send it to the appropriate container. So now we can run as many sites as we want and our dash will keep traffic moving to the right place based on the hostname.
It sounds complicated. But luckily someone else figured out how to do all of that automatically and packaged it up in a Docker container, so all we have to do is run it.
To make things even easier, we use DNSMasq in the Dash so that we don’t even have to edit the hosts file for all of these project hostnames.
Are you excited yet?
I’ll warn you now, there are quite a few steps in setting up dash, but it’s not hard per se.
I’m going to run through this very quickly in the interest of time, but every step needed to get started is documented in these slides, which I’ll make available.
So you can follow along with this at your own pace again later.
Let’s get started.
Obviously you need Docker installed.
Then clone this repo. (Link will be shown again later)
Then you need to add the Dash directory to your path.
Won’t go through all the details here, but it’s pretty easy to follow. Again, slides will be available later to help with this.
skip
Now we create a Docker network. Sounds complicated. It isn’t. Just type the command and hit enter. That’s about it.
This sets up a network inside your Docker machine that lets all of our containers talk to each other.
Next we need to make our Mac resolve all .test domains with the DNSMasq server in our Dash. This is one of the more magical bits of Dash, because as long as you’re using a hostname on the top-level domain you’ve mapped, it will already point to Dash.
This is fairly straightforward it just takes a few steps. I won’t go through them all now.
Don’t forget to restart by the way.
The last thing is setting up SSL.
This is optional, but it’s easy enough.
One thing to note here is that you can only create a wildcard certificate for a second level domain. So we’ll be securing dev.test. This means every project will need to have the hostname set to something.dev.test to work with the certificate.
It looks a little stupid, but it saves us from having to manually generate certificates for each project. Which will save you 10s of seconds in the long run.
Again it’s some pretty straightforward steps.
I’ve included a script in the Dash repo which will generate the certificates for you easily, you just need to run it.
Starting Dash is simple. Just run dev dash up.
<click>
And very untriumphantly, your dash will start.
In the background, your nginx-proxy, MySQL database, DNSMasq, MailHog, and PHPMyAdmin are now running in your Docker machine.
So that’s all fine and dandy, but what did it actually do?
Let’s take a quick look at dash.yml, which is the docker-compose file for Dash. This is all preconfigured for you in the repo, so if this doesn’t make any sense, you don’t need to worry about it.
This looks just like the docker-compose file we looked at earlier.
The first service is the nginx proxy. It’s building from a dockerfile. And it exposes port 80
And port 443
It maps the docker socket from the local machine to the nginx container. Which is the magical part of how nginx proxy works
And it maps the certificates directory.
Then we move on to dnsmasq
It exposes port 53
The command there tells it what domain to respond to.
Then we move to the mysql service container.
The important part of the mysql container is this volume mapping, which makes sure our databases are persistent. Without this, the database server would be wiped clean each time we rebuild the container.
Port 3306 is exposed so we can connect to the database from our local machine
Then next is mailhog, a utility that makes a mock smtp server for testing
It maps one port to our host machine and exposes a port internally
We make sure it depends on the nginx proxy. Adding this dependency will ensure that the nginx proxy starts first.
And then we set a hostname and tell nginx proxy we want it to proxy port 8025, which this image uses by default
The last service is PHPMyAdmin
Again we expose port 80 internally
And then make sure it depends on nginx proxy and mysql
Then we set a hostname and pass in the environment variables for the host, user and password
Last we tell dash to use the dash network.
Having the network is key to making sure that our Dash and our project containers can all talk to each other.
Because this is just a standard docker-compose file, you can add any other services you want, keeping in mind that the same core service containers will be shared across your projects.
With Dash running, we’re ready to setup a project.
Create a directory for your project. Or select an existing site directory.
Copy the example docker compose file from the example directory in your dash repo.
Open up the copied docker compose file and configure it for your project.
Let’s take a look at that.
The example docker-compose has just about everything you need.
I have it configured with a php and apache image. You can look at the other images from webdevops for other stacks. They tend to be very nice images that are super easy to configure right from the docker compose file.
Set the document root
/app is an arbitrary directory where this image points Apache by default.
We’re just moving that one level deeper to /app/public so we can have our repo above the document root.
This can be renamed or adjusted as needed based on your own directory structure
The virtual host entered here is what the nginx-proxy container uses to route traffic to this container.
This is a special environment variable that nginx-proxy is configured to look at to automate the configuration of the proxy.
The https method forces all traffic to be redirected to run over https. You can adjust this option if you need to support http only or both.
Last is our volumes.
This is where we map the current directory, usually the root of your site repository, to the app directory on the container.
Last we have the network, which just makes sure this container runs on the same docker network as our Dash
That’s pretty much it.
Of course you can get a lot more complicated.
The webdevops images are well documented with all of their environment variable options for adjusting Apache and PHP configuration.
And you can add any other application-specific services you need into the docker compose file.
So now we can fire it up.
If the dash services aren’t already running, they’ll start. Otherwise they’ll show as already up to date.
Then it starts the project’s containers.
Done.
Obviously as CMS developers, it’s important for us to be able to connect to the mysql database.
From your web server container, you use the mysql host name (which is the name of the service in your dash).
From your local computer, if you’re connecting with Sequel Pro or something like that, the mysql port is mapped so you can connect to it directly with 127.0.0.1
Sending out email from applications is also a common use case.
MailHog helps with this by creating a fake SMTP server we can send mail through and then access the messages through a web browser.
It’s pretty simple to configure an application to send through mailhog. And to access the control panel through the browser.
I’ve also included PHPMyadmin in the dash.
Just visit the phpmyadmin url and you should be taken right in.
That’s pretty much it for the basics.
Obviously there’s a lot more to learn, but this should be enough to get you going.
Once you start up your project, you should be able to access the virtual host url you chose from your browser.
A couple gotchas:
First, make sure you don’t have MAMP or any other server running on these ports. We have them mapped from the containers to the local machine, so they can’t be in use elsewhere.
While you can successfully run multiple projects at once, be careful about how many containers you try to run at once.
To shut down a project: dev down. To shut down the Dash: dev dash down. To view running containers and their output, use Kitematic, which can be helpful for debugging. To run a command inside a container in your project: dev exec servicename somecommand.
That was the base setup, getting you up and running with a simple Apache/PHP server.
But there are so many images out there that it’s easy to extend your setup just by adding them to your docker-compose file.
You could use a node image to use a consistent node version across environments and run npm or yarn.
The node container itself won’t stay running, so you can easily run dev run with a command to use it at any time.
You could also use an ngrok container to share your dev environment with others.
Opening the virtual host you specify in the browser will bring up the ngrok control panel to reveal your public ngrok url
Here are the links from earlier in my slides.
Thanks for listening. Please remember to leave speaker feedback on the ee conference website