"It worked on my machine!" How many times have you heard (or even said) this sentence? Keeping consistent environments across your development, test, and production systems can be a complex task. Enter containers! Containers offer a way to develop and test your application in the same environment in which it runs in production. Developers can use tools such as Docker Compose for local testing of complex applications; Jenkins and AWS CodePipeline for building and orchestration; and Amazon ECS to manage and scale their containers. Come to this session to learn how to build containers into your continuous deployment workflow, accelerating the testing and building phases and leading to more frequent software releases. Attendees will learn to use Docker containers to develop their applications and test locally with Docker Compose (or Amazon ECS local), integrate containers into builds, deploy complex applications on Amazon ECS, and orchestrate continuous development workflows with CodePipeline.
In this session we want to highlight how containers can help you deliver high-quality software. We will dive deep into a set of tools that you can use to apply the concepts we'll describe today. And, of course, this wouldn't be complete if we didn't show you how to actually do it, so we'll have a few demos along the way. So, let's get going!
I believe you are all familiar with the benefits of using containers, but here's a quick refresher. Containers are similar to hardware virtualization (like EC2); however, instead of partitioning a machine, containers isolate the processes running on a single operating system. Containers are portable: a container image is consistent and immutable, so no matter where I run it or when I start it, it's the same. Containers start quickly because the operating system is already running, and they also speed up the development process. Finally, containers are efficient. You can allocate exactly the resources you want: specific CPU, RAM, disk, and network. Since containers share the same OS kernel and libraries, they use fewer resources than running the same processes on separate virtual machines (a different way to get isolation).
That's great, but how can containers actually help with CD? Continuous delivery is all about reducing risk and delivering value faster by producing reliable software in short iterations. That means your software is deployable throughout its lifecycle; it means you can get fast, automated feedback on the production readiness of your software whenever you make a change; and it means you can perform push-button deployments of any version of the software to any environment. Containers reduce the risk of introducing errors because they provide a consistent and predictable environment throughout the software lifecycle, and given how lightweight they are, they can increase speed and agility.
As we said earlier, we'll be showing a few demos throughout the session. We will be using a sample Ruby on Rails application (an Instagram clone) fronted by an Nginx proxy, both running in Docker containers. The app will also be using a PostgreSQL database on Amazon RDS.
…. Amazon EC2 Container Service. Amazon ECS is a scalable container management service: it doesn't matter if you want to run tens or thousands of containers, Amazon ECS will seamlessly scale and provide consistent performance. ECS provides a set of schedulers that can be used to place containers on the cluster, but it also exposes the cluster state through a set of APIs that allow you to create your own scheduler. ECS is also highly integrated with other AWS services, e.g., ELB and CloudWatch. Just a quick reminder of some core ECS concepts: a cluster is a pool of resources; container instances are EC2 instances running the ECS agent; a task definition specifies which containers to run and the resources they need; and a task is a running instance of a task definition.
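To make the task definition concept concrete, a minimal one for a single web container might look like the following sketch (family name, image, and resource values are illustrative assumptions, not from the demo):

```json
{
  "family": "demo-web",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "myorg/web:42",
      "cpu": 256,
      "memory": 256,
      "portMappings": [{ "containerPort": 3000, "hostPort": 0 }]
    }
  ]
}
```

Registering this with ECS creates a versioned task definition; each `run-task` call against it starts a task, i.e., a running instance of this definition.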
For such an application, this is what the development/deployment workflow would typically look like: developers write code on their machines and push changes to a code repository; the push triggers a build, and artifacts are built; tests are run and, if all are green, the new version is deployed to production. An orchestration tool acts as the brain that knows how to move the code and builds from one stage to the next. We'll now dive deep into each stage and explore where and how containers can be used.
The first step of a development process is the source code.
This would be your local development machine. You write some code, test it locally, make some more changes. Once you’re happy with your changes you will push them to a code repository. This can be a distributed system, so multiple devs on the same team can work on the same project. What tools do we need to achieve this?
Let's start with the code repo, and the tool we'd like to highlight here is AWS CodeCommit, a fully managed Git repository. This means you don't need to host, maintain, back up, and scale your own source control servers. CodeCommit also encrypts your files in transit and at rest to provide a secure solution, and thanks to the integration with AWS Identity and Access Management (IAM) you can assign user-specific permissions to your repositories. Last but not least, CodeCommit is designed to keep your repositories highly available and accessible. Other tools provide similar functionality, such as GitHub or Bitbucket. For our demo, we'll be using GitHub.
When we talk about containers, we refer more and more often to Docker containers. Docker is available for Linux distributions with a recent kernel, and on Mac and Windows through Docker Toolbox. With Docker we can define the environment our application will be executed in, and specify any additional dependencies, using a Dockerfile.
In this example, we start from a Ruby base image and install some additional packages using the OS package manager. We then specify our app specific dependencies using a Gemfile and finally we copy our source code. This Dockerfile can now be used to build an image we can use to run our containers, and we can use the same image throughout the different lifecycle stages.
One of the interesting things about Docker is its growing tools ecosystem. One of these tools, Docker Compose, allows you to run complex applications made up of different components. You simply define each component's environment with a Dockerfile, specify how the components make up your application in a docker-compose.yml file, and finally, with a single command, docker-compose up, you can run all the services included in your app.
Here we have a sample docker-compose.yml file with two services: a proxy and a web app. The proxy service is built from the Dockerfile in the proxy directory; it exposes port 80 on the container as port 80 on the host, and it's linked to the web service (this allows us to refer to the web service container as 'web' from the proxy service container). The web service is also built from a Dockerfile, in the web directory; since it's a Rails app, we specify the command we want executed, and it exposes port 3000 to linked services only, not on the host machine.
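A minimal docker-compose.yml matching that description might look like this (a sketch in the Compose v1 format used at the time; directory names and the Rails command are assumptions):

```yaml
proxy:
  build: ./proxy        # Dockerfile for the Nginx proxy
  ports:
    - "80:80"           # host port 80 -> container port 80
  links:
    - web               # reachable by the hostname 'web' from this container
web:
  build: ./web          # Dockerfile for the Rails app
  command: bundle exec rails server -b 0.0.0.0
  expose:
    - "3000"            # visible to linked services only, not on the host
```

With this file in place, `docker-compose up` builds both images (if needed) and starts both containers with the linking and port mappings described above.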
If you are already using Compose, you'll be glad to hear that, as announced earlier today, we now have a tool that allows you to run your application both locally and on an ECS cluster using the same docker-compose.yml file: the Amazon ECS CLI. With the ECS CLI you can run the same Docker Compose commands (up, start, stop, and ps) both in your local environment and on Amazon ECS. The Amazon ECS CLI is available for download today, and it's open source, so we'd love to see you get involved.
'init' configures the CLI (similar to 'eb init'); 'compose up --local' runs the app on the dev machine.
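As a hedged sketch, the flow described above might look like the following on the command line (cluster and region names are placeholders, and the exact subcommand and flag names follow this talk's description; the ECS CLI interface has evolved since, so check the current docs):

```shell
# One-time setup: point the CLI at a cluster and region (similar to 'eb init')
ecs-cli configure --cluster demo-cluster --region us-east-1

# Run the Compose services locally on the dev machine
ecs-cli compose up --local

# Later, run the same docker-compose.yml on the ECS cluster (no --local)
ecs-cli compose up
ecs-cli compose ps
```

The point is that the same docker-compose.yml drives both the local and the cluster deployment, so what you test locally is what you run on ECS.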
Let’s switch to the demo and see these tools in action!
Now that we made some changes to our code, let’s have a look at the setup we have to build the new artifacts.
At this stage, containers will be used in two ways: to provide an execution environment for the build jobs and as an output of the build process itself. We’ll see how we can run our builds on an ECS cluster, but also how to produce container images that can then be used throughout the rest of the workflow.
Some of our partners have created integrated CD solutions with Amazon ECS. For this talk and demo we’ll be focusing on one of these solutions, Jenkins.
One tool that is frequently used for builds is Jenkins. Jenkins is an open source tool that can easily be extended using plugins. It provides a flexible environment to build your Ant- or Maven-based projects, but it can also, and this is quite interesting for us, be used to build Docker images. Not only that, but you can have Jenkins itself running inside a Docker container, so your whole CD workflow can be containerized.
A Jenkins plugin we want to highlight is the CloudBees Docker Build and Publish plugin. This is what we use to build our container images and push them to a Docker registry. We simply specify the repository name we want to push the image to, a tag for it (in this case we tag it with the Jenkins build number), and the registry we want to use. For our demo we will use Docker Hub. Here we are using the ….
…. Amazon EC2 Container Registry -- announced and available later this year -- sign up if you're interested. By using Amazon EC2 Container Registry you can use the familiar Docker CLI commands to push, pull, and manage your images. The service provides all of the benefits of the AWS ecosystem including fine-grained access controls through IAM Policies, CloudTrail logging for auditing, and seamless integration with Amazon EC2 Container Service.
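Since ECR works with the familiar Docker CLI, pushing an image would look roughly like the standard registry workflow sketched below (the repository URI is a made-up placeholder, and authentication of the Docker client against the registry is assumed to have happened already):

```shell
# Tag the locally built image with the registry's repository URI
docker tag myapp/web:42 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp-web:42

# Push it to the registry
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp-web:42

# Pull it from any host with access, e.g., an ECS container instance
docker pull 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp-web:42
```

Access to the repository is governed by IAM policies, so the same tag/push/pull commands work while fine-grained permissions are enforced server-side.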
Once our build is complete, we are ready to run some tests.
We will take our RSpec tests and turn them into acceptance tests that we can run against a live endpoint using a gem called capybara-webkit. Working with this gem requires installing some tricky dependencies, including Qt, Qt WebKit, and the headless X server Xvfb. Fortunately, thanks to Docker, we can encapsulate all of these into a container that can be run anywhere.
To fit our tests into our CD pipeline we need a test driver. Again we turn to Jenkins to execute our tests.
As we build more tests, the time taken to execute them will become large enough that we'll want to distribute their execution across many machines. ECS can help here. With the CloudBees Jenkins ECS plugin, you can also run your test jobs on an Amazon ECS cluster. The plugin simply connects to your ECS cluster, creates a new task definition for your test job, starts a new task, and tears everything down when the test completes.
This is the Dockerfile that sets up our Jenkins slave and that we'll use to run our tests.
We’ll get back to our demo in a short while.
Now that we have the build and test stages covered, all that is left to do is deploy our new version to our production environment. We will deploy our application to…
As mentioned earlier, with the Amazon ECS CLI you are able to run your application both locally and on ECS using the same docker-compose.yml file. Here, to run our app on the ECS cluster, we simply use the 'ecs-cli compose up' command; this converts the docker-compose.yml into an ECS task definition and runs a task with that definition. We can also inspect which containers are running using the 'ecs-cli compose ps' command. Note that here we didn't use the --local option.
AWS CodeDeploy fully automates your code deployments, allowing you to deploy reliably and rapidly. You can consistently deploy your application across your development, test, and production environments. It helps maximize your application availability during the software deployment process: it performs rolling updates across your instances and tracks application health according to configurable rules. CodeDeploy is platform and language agnostic and works with any application, so you can easily reuse your existing setup code. It can also integrate with your existing software release process or continuous delivery toolchain (e.g., Jenkins). CodeDeploy can be used to deploy to ECS using a shell script.
An easy way to deploy Docker containers within a pipeline is AWS Elastic Beanstalk. Beanstalk supports single-container deployment directly on an EC2 instance and multi-container deployment on ECS. The benefit of Beanstalk is that it can manage your resources (your database, ELB, and ECS cluster) and it also provides monitoring and logging for your app. It's also easy to set up multiple environments within one application, so you can have an integration stack that is similar to your production stack. Elastic Beanstalk is ideal if you want to leverage the benefits of containers but just want the simplicity of deploying applications from development to production by uploading a container image. You can work with Amazon ECS directly if you want more fine-grained control for custom application architectures.
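For the multi-container (ECS) flavor, a Beanstalk deployment is described by a Dockerrun.aws.json file. A minimal sketch for our proxy/web pair might look like this (image names, memory values, and ports are illustrative assumptions):

```json
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "proxy",
      "image": "myorg/proxy:latest",
      "memory": 128,
      "portMappings": [{ "hostPort": 80, "containerPort": 80 }],
      "links": ["web"]
    },
    {
      "name": "web",
      "image": "myorg/web:latest",
      "memory": 256
    }
  ]
}
```

Uploading a new version of this file (pointing at newly pushed image tags) is all a pipeline needs to do to roll a release through a Beanstalk environment.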
The final piece of our workflow is an orchestration tool…
… that knows how to get the code from the code repo, build our artifacts, test them and deploy. This tool is….
…AWS CodePipeline. With CodePipeline you can automate your software release process, allowing you to release new features to users very quickly. You can also model the different stages of your software release process with a graphical interface and by running each change through your standardized release process you can assure the quality of your code.
Let's now see how all these tools and services work together, end to end.
Finally, a few things you might want to take away from this session: use the ECS CLI to run your app both locally and on ECS, especially if you're already using Docker Compose; make your build environment highly scalable by running build jobs in containers; and finally, let CodePipeline orchestrate your workflow.
TurboCharge Your Continuous Delivery Pipeline with Containers - Pop-up Loft
Yaniv Donenfeld, Solutions Architect
Amazon Web Services
What to expect from the session
• Best practices for containers in continuous delivery
• Toolset to implement such solutions
Why use containers?
• Process isolation
Why use containers for continuous delivery?
• Roll out features as quickly as possible
• Predictable and reproducible environment
• Fast feedback
Demo application architecture
Ruby on Rails
Amazon EC2 Container Service
• Highly scalable container management service
• Easily manage clusters for any scale
• Flexible container placement
• Integrated with other AWS services
• Amazon ECS concepts
• Cluster and container instances
• Task definition and task
Development and deployment workflow
Code repository → Build
Docker and Docker Toolbox
• Docker (Linux > 3.10) or Docker Toolbox (OS X, Windows)
• Define app environment with Dockerfile
# Base image tag and the completed package list below are assumptions
FROM ruby:2.2
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev
RUN mkdir -p /opt/web
ADD Gemfile /tmp/
ADD Gemfile.lock /tmp/
WORKDIR /tmp
RUN bundle install
ADD . /opt/web
Define and run multi-container applications:
1. Define app environment with Dockerfile
2. Define the services that make up your app in docker-compose.yml
3. Run docker-compose up to start and run your entire app
rspec and capybara-webkit
feature 'Signing in' do
  scenario 'can sign in' do
    visit '/users/sign_in'  # sign-in path is an assumption
    fill_in 'Email', :with => 'firstname.lastname@example.org'
    fill_in 'Password', :with => 'password'
    click_button 'Log in'
    expect(page).to have_content('Signed in successfully.')
  end
end
• Run tests directly via Docker run
• Run tests in a Docker slave on Amazon ECS
Amazon ECS CLI
> ecs-cli up
> ecs-cli compose up
> ecs-cli ps
AWS Elastic Beanstalk
• Deploy and manage applications without
worrying about the infrastructure
• AWS Elastic Beanstalk manages your database,
Elastic Load Balancing (ELB), Amazon ECS
cluster, monitoring, and logging
• Docker support
• Single container (on Amazon EC2)
• Multi container (on Amazon ECS)