Learn about the transition Amazon made to a service-oriented architecture over a decade ago, and get an introduction to AWS CodeCommit, AWS CodePipeline, and AWS CodeDeploy, three new services born out of Amazon's internal DevOps experience.
2. What we’ll cover
The Amazon DevOps story
New AWS developer tools: AWS CodeCommit, AWS CodePipeline, AWS CodeDeploy
Q & A
3. What is DevOps?
DevOps = efficiencies that speed up this lifecycle
[Diagram: the software development lifecycle (plan, build, test, release, monitor); a delivery pipeline carries changes from developers to customers, and a feedback loop carries learning from customers back to developers]
26. AWS CodeCommit
Use standard Git tools
Scalability, availability, and durability of Amazon S3
Encryption at rest with customer-specific keys
[Diagram: developers run git pull/push against CodeCommit over SSH or HTTPS; Git objects are stored in Amazon S3, the Git index in Amazon DynamoDB, and the encryption key in AWS KMS]
28. AWS CodePipeline
Connect to best-of-breed tools
Accelerate your release process
Consistently verify each release
[Diagram: pipeline stages. Source: 1) Pull. Build: 1) Build, 2) Unit test. Beta: 1) Deploy, 2) UI test. Gamma: 1) Deploy, 2) Perf test. Production: 1) Deploy canary, 2) Deploy region 1, 3) Deploy region 2]
30. AWS CodeDeploy
Easy and reliable deployments
Scale with ease
Deploy to any server
[Diagram: CodeDeploy takes application revisions (v1, v2, v3) and rolls them out to deployment groups: Dev, Test, and Production]
welcome everyone
I’m Nico, SA Manager
- today, we're going to talk about DevOps at Amazon, and give you an inside peek at how Amazon develops our web applications and services
SECTION:
1. Amazon’s own DevOps transformation – changes to product delivery
2. Introduce 3 AWS services that are used internally
You should walk away with a high-level understanding of the different parts involved in a DevOps transformation, and an idea of how you could use our AWS Code services in your own DevOps processes
“DevOps”, what does that mean?
Let’s define it as a modern style of cloud development and delivery
Let’s talk about speed today
<POINT AT SLIDE> Here's the general development lifecycle for a web application or service
<DESCRIBE CYCLE>
Speed of the loop = agility – agility is the time from idea to feature
Faster = innovation
That’s the essence of DevOps – to make this process as efficient as possible, and speed up the learning cycle
Let’s look at Amazon.
We did not start out the way I just described
2001: Amazon.com was a monolith
Don’t get me wrong: It had multiple tiers each with many components, but they were so tightly-coupled that they behaved like a monolith
Problems: - fragile API contracts that kept changing, shared libraries, shared test and deployment procedures
this monolith-first architecture is not uncommon –
Growing startups and early projects make tradeoffs that optimize for short-term speed
That creates long-term issues: over time, the project is crushed by its own weight
- this happened to Amazon, and as we scaled the website and the team, we started to get bogged down
- to visualize why it was getting bogged down, let's look at the development lifecycle
Coordinate Development changes
“Merge Fridays” or even “merge weeks”
Re-build entire app
Run the whole test suite, re-deploy if issue
Amazon had a central team whose sole job it was to deploy this monolithic app into production
One-line change – go through whole process again.
For Amazon, the monolith was impossible to scale, so we made a couple of big changes
- one was architectural, and the other was organizational
Monolith -> SOA
Small Focused Single-purpose services
e.g. buy button on a product page, calculating taxes, etc.
- to give you an idea of the scope of these small services, I've included this graphic
- this is the constellation of services that deliver the Amazon.com website back in 2009, 6 years ago
- this term didn't exist back then, but today you'd call this a microservice architecture
Organizational changes:
Split up into small teams – “2-pizza teams” that would be split if they could no longer be fed with 2 pizzas
Decentralized/autonomy
OPERATIONS:
- if there was an operational issue in the middle of the night, the team was paged
- if a lack of tests broke customers, the team got a bunch of support tickets
MOTIVATION
Everyone had to handle their own deployment and they were struggling.
TOOLING GAP
- these new tools had some unique characteristics
- the tools had to be self-service, because there's no other way to be able to scale to that many customers
- the tools had to be technology agnostic, because the teams chose many different types of platforms and programming languages for their services
- the tools had to encourage best practices: while we allowed autonomy, we also wanted to support shared learning across the teams so everyone could improve
- and of course, in the service-oriented mindset, the tools were delivered as primitive services
Primitive: Apollo
12 years ago;
Name borrowed from NASA
Primitive: Pipelines
- started after we did a study of how long a software change took to go from developer check-in to running in production
- I'm not going to share any numbers, but let's say that it was embarrassing how long that took
- we found that it wasn't the builds, tests, or deployments that were taking so long, but rather the human processes that tied them all together
- one person would notify another person that a task was ready; eventually they'd see the request and batch it with others; finally they'd start a job and let it run; they'd come back later to see if it had completed successfully or needed to be re-run; then they'd finally route the task on to another group for the next job
- this process added in a ton of human delay, and for a company with an insane focus on efficiency, this was unacceptable
- since we're automating our fulfillment centers, we thought we should automate our software delivery
- we created Pipelines to automate that end-to-end release process, from code check-in to build to test to production
- this tool is used pervasively across Amazon, by well over 90% of the teams
- with these new tools, we completed the puzzle
- the teams were decoupled and they had the tools necessary to efficiently release on their own
- what does success look like?
- at Amazon in 2015, we ran over 64M deployments
- that's an average of 2 deployments every second
- after we tell customers the story of our DevOps transformation, they typically ask us how they can do the same
- I'm not going to over-simplify this, because the answer is genuinely complex
- this can involve organizational changes, cultural changes, and process changes
- plus there's no one right answer for these
- every company is going to tweak their approach to optimize for their own environment
- but there is one standard thing that every DevOps transformation needs, and that's an efficient and reliable continuous delivery pipeline
- that's the focus for the rest of this talk
- to set up a continuous delivery pipeline, you first need automated deployments
- that's because you're going to be deploying a lot – just remember that 64 million number for Amazon
- for every release you'll need to deploy to a test environment multiple times to work out bugs, then deploy to staging, and finally to production
- if you don't have these deployments automated, it's going to slow you down and discourage you from releasing frequently
- after you have automated deployments, you'll next want to automate your end-to-end release process
- this involves wiring together your source control, your builds, your deployments, and your tests
- you want to automate as much of this as possible, but it's OK to get started with some manual steps involved
- we have three new services that can help you set up your continuous delivery pipeline
- you can use CodeDeploy for your automated deployments and CodePipeline for release automation
- plus, if you want to move your source control to the cloud to have your entire pipeline in AWS, you can use CodeCommit to store your code
- that was a very quick introduction, but I wanted to leave plenty of time for the demonstration
- I'll now invite up Clare to give you a more in depth tour of these three services
- I'm now going to give you a quick intro to these services
- after that, Clare's going to come up and give you a full demonstration of them
- the first service is CodeCommit, where we implemented the Git protocol on top of Amazon S3 storage
- this means from the front-end, it behaves like any other Git source control system
- you'll use the same Git tools and issue the same Git commands that you do today
- on the backend though, we've taken a whole new approach
- rather than use a file-system based architecture, we built CodeCommit on top of Amazon S3 and DynamoDB
- this brings all the benefits of replicated cloud-based storage, plus some interesting bonus features
- one of those is that CodeCommit automatically encrypts all repositories using customer-specific keys
- this means that every customer will have their repositories encrypted differently in S3
- the next service is CodePipeline, which was inspired by our internal Pipelines service
- it allows you to completely model out your custom software release process
- you specify how you want your new code changes built and unit tested, how they should be deployed to pre-production test environments, how they should be validated with functional and performance tests, and ultimately how they should roll out to production
- you have complete control over the end-to-end workflow and how each step is performed
- you can connect to an AWS service like CodeDeploy, your own custom server like Jenkins, or even an integrated partner tool like GitHub
- it's completely extensible and allows anyone to plug in
- one of the great things about CodePipeline is how it integrates with our large ecosystem of developer tool partners
- you'll see in the upcoming demo how easy it is to discover and connect to these partner services, and include them as a step in your own release process
- after you set up your automated release workflow, you're free to push changes as often as you like
- CodePipeline will automatically marshal your code changes through your process as quickly as it can, while ensuring that they go through all of your quality checks
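To make the stage-by-stage flow concrete, here is a minimal Python sketch of the idea behind a release pipeline. This is purely illustrative, not the CodePipeline API: the stage names, action functions, and change dictionary are all made up for this example.

```python
# Conceptual sketch of what a release pipeline automates: an ordered list
# of stages, each containing actions, where a change only advances to the
# next stage if every action in the current stage succeeds.

def run_pipeline(stages, change):
    """Run `change` through each stage in order; stop at the first failure."""
    for stage_name, actions in stages:
        for action in actions:
            if not action(change):
                return f"stopped in {stage_name}"
    return "released to production"

# Toy actions standing in for build, test, and deploy steps.
stages = [
    ("Source", [lambda c: True]),                  # e.g. pull from source control
    ("Build",  [lambda c: c["compiles"],           # e.g. build
                lambda c: c["unit_tests_pass"]]),  # e.g. unit test
    ("Beta",   [lambda c: True,                    # e.g. deploy to test env
                lambda c: c["ui_tests_pass"]]),    # e.g. UI test
    ("Prod",   [lambda c: True]),                  # e.g. production deploy
]

good = {"compiles": True, "unit_tests_pass": True, "ui_tests_pass": True}
bad  = {"compiles": True, "unit_tests_pass": False, "ui_tests_pass": True}
print(run_pipeline(stages, good))  # released to production
print(run_pipeline(stages, bad))   # stopped in Build
```

The key property this models is the one from the notes above: every change passes through all of your quality checks automatically, and a failure anywhere halts the change before it can reach production.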
- the final service I'd like to introduce is CodeDeploy
- CodeDeploy is the externalization of our internal Apollo service, and it enables you to deploy just like Amazon
- you specify what version of your application to install on what group of servers, and CodeDeploy coordinates that rollout for you
- it has the same rolling update feature to deploy without downtime
- it has the health tracking feature to stop bad deployments before they take down your application
- all you do is define how to install your application on a single machine, and CodeDeploy can scale that across a fleet of hundreds of servers
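As a rough illustration of "define how to install your application on a single machine": CodeDeploy reads per-machine install instructions from an AppSpec file bundled with the application revision. A minimal sketch follows; the file paths and script names are placeholders, not from the talk:

```yaml
version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/my-app          # placeholder install location
hooks:
  AfterInstall:
    - location: scripts/install_dependencies.sh
  ApplicationStart:
    - location: scripts/start_server.sh
  ValidateService:
    - location: scripts/health_check.sh   # feeds the health tracking described above
```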
- when we launched CodeDeploy, it only supported deployments to Amazon EC2 instances
- but earlier this year, we released support for on-premises deployments
- this allows you to deploy to servers in your private data center, as well as VMs in other clouds
- as long as the machine can run our agent and make calls to our public service endpoint, you can deploy to it
- this means you can have a single tool to centralize the deployment for all of your applications to all of your different environments
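The rolling-update and health-tracking behavior described above can be sketched conceptually. This toy Python function is not the CodeDeploy API (all names here are invented for illustration): it updates one server at a time and halts the rollout once failures exceed a threshold, which is the essence of stopping a bad deployment before it takes down the whole fleet.

```python
# Conceptual sketch of a rolling deployment with health tracking:
# install the new revision server by server, aborting early if too
# many servers fail their health check.

def rolling_deploy(servers, install, max_failures=1):
    """Install the new revision on each server in turn.
    Abort once more than `max_failures` servers fail their health check."""
    failures, updated = 0, []
    for server in servers:
        if install(server):          # install + health check on one machine
            updated.append(server)
        else:
            failures += 1
            if failures > max_failures:
                return updated, "deployment stopped"   # bad deploy halted early
    return updated, "deployment succeeded"

# Toy installer: every server except web-3 comes up healthy.
healthy = lambda s: s != "web-3"

fleet = ["web-1", "web-2", "web-3", "web-4"]
print(rolling_deploy(fleet, healthy, max_failures=0))
# (['web-1', 'web-2'], 'deployment stopped')
```

Because only part of the fleet is updated at any moment, healthy servers keep serving traffic throughout, which is what makes deployment without downtime possible.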
- I demonstrated using a few partner solutions with our AWS Code services
- Here's the full list of partners who have integrated their tools with CodePipeline and CodeDeploy
- And this list is growing as we welcome more integrations into our tools suite
- Many of these partners have booths in the expo hall
- I encourage everyone to explore their solutions to see how they might benefit your cloud development projects
- We even have some partners here at this talk, so when we take Q&A (up front or out in the hallway) after the talk, you'll be able to ask questions of them as well