A presentation covering three new services from Amazon Web Services: the new Application Load Balancer (ALB), recent updates to the EC2 Container Service (ECS), and the new Kinesis Analytics.
6. We’ll start simple, but we’ll get progressively more technical. At a certain point, we’ll dive deep into the technical nuances of the topic. In such cases, look for the Nerd Alert ribbon.
Nerd Alert
7. Hi, I’m Josh Padnick.
‣ Published A Comprehensive Guide to Building a Scalable Web App on AWS, which received 500+ upvotes on Hacker News.
‣ Consulted on DevOps & AWS with ~25 companies worldwide, including Intel and Infusionsoft.
‣ Full-stack engineer for 10+ years.
‣ Co-founder at Gruntwork.
8. I work at Gruntwork.
‣ We set up software teams on AWS with DevOps best practices and world-class infrastructure.
‣ But we do it in about 2 weeks!
‣ The secret sauce is that we offer battle-tested, pre-written “Infrastructure Packages” for common AWS needs.
‣ Plus consulting & support as needed.
http://gruntwork.io
15. The Big Idea
If one VM goes down, we can just serve traffic from the other.
16. The Big Idea
But how do we route requests to more than one VM?
17. The Big Idea
We use a Load Balancer. This is sometimes called a Reverse Proxy.
18. The Big Idea
There are a few properties we want out of this load balancer:
19. The Big Idea
There are a few properties we want out of this generic load balancer:
‣ It should itself be highly available (HA)!
‣ It should elastically scale as we get more traffic.
‣ It should do a Health Check on each VM.
20. The Big Idea
Keep going…
‣ It should support the latest protocols (TCP, UDP, HTTP(S) 1.1, HTTP/2, WebSockets).
‣ It should log all requests.
‣ It should emit helpful metrics.
21. The Big Idea
Keep going…
‣ It should allow routing a single user to the same VM, but spread different users across different VMs (sticky sessions).
‣ It should route a request for /apples to one set of VMs and /oranges to another (path-based routing).
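The two behaviors above can be sketched as a toy router. This is purely an illustration of the concepts; the VM pool names, paths, and hashing scheme are assumptions, not how the ALB actually works internally:

```python
import hashlib

# Toy sketch of path-based routing plus sticky sessions.
# The VM pools and hashing scheme are illustrative assumptions.
ROUTES = {
    "/apples":  ["vm-a1", "vm-a2"],
    "/oranges": ["vm-o1", "vm-o2"],
}

def route(path, session_id):
    """Pick a VM for a request: match the path prefix, then pin the session."""
    for prefix, vms in ROUTES.items():
        if path.startswith(prefix):
            # Sticky sessions: the same session id always hashes to the same VM.
            digest = int(hashlib.sha256(session_id.encode()).hexdigest(), 16)
            return vms[digest % len(vms)]
    return None  # no routing rule matched

print(route("/apples/fuji", "user-42"))
```

Note that a real ALB pins sessions with a load-balancer-generated cookie rather than a hash, but the effect is the same: one user, one backend.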
22. The Big Idea
Keep going…
‣ It should have first-class support for routing to Docker containers in EC2 Container Service (ECS):
‣ Route to an app running in a container, not just to a VM.
‣ Route to multiple different containers on the same VM.
‣ Know about new containers when I launch them (service discovery).
Nerd Alert
23. In 2009, Amazon released the Elastic Load Balancer (ELB).
24. Nerd Alert
Old ELB was a Layer 4 Load Balancer
Open Systems Interconnection (OSI) Network Model
Layer 1/2: Physical / Data Link
Layer 3: Network (IP, ICMP)
Layer 4: Transport (TCP, UDP)
Layer 5: Session
Layer 6: Presentation (TLS)
Layer 7: Application (HTTP, FTP, DNS, SSH)
25. But there’s a problem…
Some of our feature asks are HTTP-specific.
‣ Helpful metrics like “Sum HTTP 5XX errors” only apply to HTTP traffic.
‣ Path-based routing requires inspecting the HTTP traffic.
26. But there’s a problem…
Some of our feature asks are Docker-specific.
‣ Route to more than one port on the same VM.
28. So AWS has released the new Application Load Balancer (ALB).
An updated load balancer opinionated for:
- modern apps built with HTTP
- Docker
30. Nerd Alert
ALB is a Layer 7 Load Balancer
Open Systems Interconnection (OSI) Network Model
Layer 1/2: Physical / Data Link
Layer 3: Network (IP, ICMP)
Layer 4: Transport (TCP, UDP)
Layer 5: Session
Layer 6: Presentation (TLS)
Layer 7: Application (HTTP, FTP, DNS, SSH)
31. Nerd Alert
ALB is a Layer 7 Load Balancer
Translation
‣ The ALB inspects HTTP traffic and makes routing decisions based on this.
‣ But the ALB doesn’t deal with OSI Layer 4 forwarding, so no raw TCP or UDP forwarding.
35. HTTP/2 Benefits
‣ Sends headers/cookies just once instead of on every request.
‣ Encodes all data in binary versus a textual format.
‣ Transmits all data over a single, multiplexed TCP connection versus multiple blocking connections in HTTP/1.1.
Nerd Alert
36. Your Backend App Can Still Speak HTTP/1.1
Nerd Alert
Note that HTTP/2 requires that you use HTTPS on the ALB.
38. Support for WebSockets
‣ A long-time ask for ELBs has been WebSocket support. ALBs now support this!
Nerd Alert
39. Content-Based Routing
‣ Route /blue to one service.
‣ Route /green to another service.
‣ Previously, this required two load balancers. Now, it requires just one!
40. Content-Based Routing
‣ LIMITATION: we don’t get path rewriting.
‣ So you can’t send /blue to /hello/blue unless your backend app handles that.
Nerd Alert
41. New Concepts in Elastic Load Balancing
‣ Target Groups
The Classic Load Balancer’s configuration includes which EC2 Instances it will route to.
42. New Concepts in Elastic Load Balancing
‣ Target Groups
With ALBs, the concept of a Load Balancer is separated from the concept of Target EC2 Instances: one ALB can route to one or more Target Groups.
43. New Concepts in Elastic Load Balancing
‣ Target Groups
Our ALB needs a list of “targets” where it can send traffic. We’ll group all such targets into a Target Group, which starts out empty.
44. New Concepts in Elastic Load Balancing
‣ Target Groups
Let’s add one Target: (i-123, port 8000). Notice we have both an instance id and a port number.
45. New Concepts in Elastic Load Balancing
‣ Target Groups
Let’s add a second Target: (i-123, port 8001). This target has the same instance id but a different port number.
46. New Concepts in Elastic Load Balancing
‣ Target Groups
Let’s add a third Target: (i-789, port 3034). The group now contains (i-123, port 8000), (i-123, port 8001), and (i-789, port 3034).
47. New Concepts in Elastic Load Balancing
‣ Target Groups
Our ALB will send traffic to any Healthy Target in the Target Group.
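A Target Group can be modeled as a list of (instance, port) pairs, with the load balancer picking only among healthy targets. A minimal sketch follows; the "healthy" flags are assumed here for illustration, where in reality the ALB's Health Checks set them:

```python
import random

# The Target Group built up in the slides: note i-123 appears
# twice, on two different ports.
target_group = [
    {"instance": "i-123", "port": 8000, "healthy": True},
    {"instance": "i-123", "port": 8001, "healthy": True},
    {"instance": "i-789", "port": 3034, "healthy": False},  # failed its health check
]

def pick_target(targets):
    """Send traffic to any Healthy Target in the group, or None if all are down."""
    healthy = [t for t in targets if t["healthy"]]
    return random.choice(healthy) if healthy else None

chosen = pick_target(target_group)
print(chosen["instance"], chosen["port"])
```

Because targets are (instance, port) pairs rather than bare instances, the same EC2 Instance can legitimately appear in the group more than once.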
48. New Concepts in Elastic Load Balancing
‣ Target Groups
Note that the Classic ELB does not use a Target Group and can only send to the same port on different EC2 Instances, e.g. (i-123, port 8000) and (i-789, port 8000).
49. New Concepts in Elastic Load Balancing
‣ Target Groups
The big takeaway is you can group your (micro)services into Target Groups, even if multiple target groups include the same EC2 Instance! For example, Service A and Service B can each be a Target Group spread across instances i-123, i-456, and i-789.
Nerd Alert
50. Content-Based Routing
‣ Route /blue to one Target Group.
‣ Route /green to another Target Group.
‣ Previously, this required two load balancers. Now, it requires just one!
51. Support for Container-Based Apps
‣ We often want to run the same Docker image on the same EC2 Instance on different ports.
‣ Target Groups mean the ALB can route to two different ports on the same server!
‣ This also means we can dynamically select our container ports in an EC2 Container Service Cluster!
Nerd Alert
53. Target Group metrics.
‣ We get CloudWatch Metrics on Target Groups.
‣ This is a nice way to get metrics specific to a
service.
Nerd Alert
54. Better metrics.
‣ Many new metrics on the ALB!
Nerd Alert
‣ ClientTLSNegotiationErrorCount
‣ TargetTLSNegotiationErrorCount
‣ TargetConnectionErrorCount
‣ TargetResponseTime
‣ NewConnectionCount
‣ ActiveConnectionCount
‣ RejectedConnectionCount
‣ ProcessedBytes
55. Other Cool Features
‣ Load-balancer-generated sticky-session cookies (client must support cookies).
‣ Slightly less expensive.
‣ Faster performance in general.
Nerd Alert
56. When to Use the ALB
‣ When running any HTTP-based service.
‣ When using WebSockets with a load balancer.
‣ When using Docker, especially with EC2
Container Service.
57. When to Use the Classic ELB
‣ You need OSI Layer 4 Routing (i.e. TCP / UDP)
‣ Your app listens on a protocol other than HTTP.
58. Alternatives to the ALB/ELB
‣ Set up your own load balancer using Nginx or HAProxy.
‣ But this means you need to build auto-scaling, auto-failover, and automated DNS updates, configure metrics and logging, manage upgrades, and a few more items.
‣ Conclusion: don’t do this unless you have to.
61. The Big Idea
I can offer you resource isolation.
And I can be launched in just minutes!
62. Limitations of a VM
But minutes could be an eternity. If deploying multiple times a day, we’re just waiting for VMs to launch. Building an Amazon Machine Image also takes on the order of minutes.
63. Limitations of a VM
And I can’t run that AMI locally. If I want to run the same “Golden Image” locally, I’m out of luck.
64. Sometimes a single app uses a tiny portion of available resources (e.g. Mem Usage: 12%, CPU Usage: 7%).
65. So it’d be nice if we could pack multiple apps in a single EC2 Instance (e.g. App 1 + App 2 + App 3 at Mem Usage: 85%, CPU Usage: 90%).
67. Why developers love containers.
‣ A container is just an isolated OS process, so it runs directly on your EC2 Instance.
‣ It’s similar to a “lightweight VM” and can start in milliseconds.
‣ You can run multiple containers on a single EC2 Instance.
‣ You can run the same Docker image on any platform.
‣ You can download pre-built Docker images for almost all common software.
68. So we want to run our apps as containers.
‣ But we don’t want to run containers on just a single EC2 Instance.
“If I go down, I’m taking all apps with me!”
69. We want to run multiple containers across
multiple EC2 Instances.
70. But running a “docker cluster” is hard.
We need…
‣ A way to bootstrap the cluster
‣ A container scheduler
‣ A Service Discovery solution
‣ A way to load balance to containers
‣ Auto-restart of failed containers
‣ Cluster-wide metrics
72. But my favorite solution is Amazon EC2 Container Service (ECS).
73. Benefits of ECS
‣ Built-in cluster bootstrapping
‣ Built-in scheduler
(with ability to use a custom scheduler)
‣ Built-in service discovery
‣ Built-in load balancer (ALB)
‣ Built-in auto-restart on failed containers
‣ NEW! Auto-scale your service
‣ NEW! Fine-grained AWS permissions on your service
74. What’s Missing from ECS
‣ Service-to-service authentication
‣ Run background jobs within the cluster (you can still do this with Lambdas run on cron schedules, though)
‣ DNS namespacing
‣ Built-in persistent volumes
‣ Built-in support for log aggregation (on services other than CloudWatch Logs)
75. Then why is it my favorite?
‣ Because most teams don’t need those features.
‣ If you’re ok with the limitations, ECS is easier to set up than anything else.
‣ The new ALB plus the new features we’ll talk about make this even more compelling.
87. IAM Roles for EC2 Instances
Previously, ECS Tasks could only get permissions to access other AWS resources (e.g. a file in S3) by using the IAM Role of the ECS Instance.
88. IAM Roles for EC2 Instances
This meant that the BLUE and YELLOW apps both got the same AWS permissions.
89. IAM Roles for ECS Tasks
With IAM Roles for ECS Tasks, now each ECS Task can get its own IAM Role!
90. IAM Roles for ECS Tasks
This means that each ECS Task can have its own set of permissions to other AWS resources (e.g. one task can access Bucket A while another accesses Bucket B).
91. How It Works
‣ When we create an ECS Task Definition, we can
now specify a Task Role.
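To make this concrete, here is a sketch of a Task Definition with a Task Role attached, expressed as a Python dict mirroring the JSON shape. The family name, role ARN, image URL, and ports are placeholders, not real resources; `taskRoleArn` is the field that grants the task its own IAM Role:

```python
# Sketch of an ECS Task Definition with a per-task IAM Role.
# All names, ARNs, and the image URL are illustrative placeholders.
task_definition = {
    "family": "blue-service",
    "taskRoleArn": "arn:aws:iam::123456789012:role/blue-task-role",
    "containerDefinitions": [
        {
            "name": "blue",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/blue:latest",
            "memory": 256,
            # hostPort 0 asks ECS for a dynamic host port, which pairs
            # nicely with ALB Target Groups (slide 51).
            "portMappings": [{"containerPort": 8000, "hostPort": 0}],
        }
    ],
}
print(task_definition["taskRoleArn"])
```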
93. ECS Service Auto-Scaling
Previously, we could auto-scale the ECS Instances
but not the ECS Tasks.
This meant that we could not auto-scale an ECS
Service without lots of hackery.
98. When you work with Docker, you need a place to store your Docker images.
‣ Classic Docker build pipeline example:
Git Commit to Master Branch → Build Docker Image → Push to Docker Registry
99. There are a few options for the Docker Registry:
‣ Docker Hub
‣ Quay.io by CoreOS
‣ Artifactory by JFrog
100. But there are some challenges.
‣ Docker Hub can sometimes be slow or
unreliable.
‣ Authenticating to any solution means you have
to store the credentials somewhere.
‣ Download speeds and proximity to the service
make a difference.
101. So Amazon has released EC2 Container Registry (ECR).
102. ECR Features
‣ Fully managed by Amazon
‣ Relatively fast
‣ Accessible by a typical Docker client
‣ Integrated with IAM Policies and IAM Users
103. ECR Limitations
‣ You can only store up to 1,000 images per Docker repo.
‣ Pricing model requires you to cull unused Docker images from the ECR repo.
‣ No hosting of public Docker images.
‣ Docker repo names can be awkwardly long.
104. But I still prefer ECR.
‣ One less vendor to deal with.
‣ One integrated security model.
‣ Repo limits are probably appropriate.
‣ Not hosting public repos gives clear separation
of public and private repos.
106. Big Idea
‣ As companies grow, they eventually evolve out
of the monolithic app and into a microservices
architecture.
Microservice A Microservice B
107. ‣ Usually, companies will start with two microservices.
‣ Then they’ll keep factoring out monolithic code into
more and more microservices.
108. ‣ Eventually, teams will want an individual
microservice to publish an event stream.
109. ‣ This way Microservice B can do something
when Microservice A publishes a certain event.
110. ‣ But if we have n services, and each service reads the event stream of the other n - 1 services, now we have a combinatorial explosion: n × (n - 1) connections.
112. ‣ What if instead all services published their event streams to a
central service.
‣ And all services read event streams from that same central service.
113. ‣ Now we have n connections, which is
manageable!
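The difference between the two topologies is easy to quantify. This short sketch compares the peer-to-peer connection count from slide 110 with the central-service count from slide 113:

```python
# With n services each reading the streams of the other n - 1
# services, connections grow quadratically; a central event
# service keeps them linear.

def peer_to_peer_connections(n):
    return n * (n - 1)

def central_service_connections(n):
    # Each service holds a single connection to the central service.
    return n

for n in (3, 10, 50):
    print(n, peer_to_peer_connections(n), central_service_connections(n))
# e.g. 10 services: 90 peer-to-peer connections vs 10 through a central service
```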
114. ‣ These are the insights that LinkedIn had around 2011 when it wrote
Apache Kafka.
‣ The central “event publishing service” would need to be:
‣ scalable
‣ resilient
‣ temporarily persist data to support consumers that go down
‣ not lose any data, even as data volume surges
115. ‣ The details are published in an epic blog post by LinkedIn engineer and Kafka author Jay Kreps.
116. ‣ It turns out the concept of a scalable,
performant, resilient centralized event stream
can apply to lots of domains!
‣ IoT events
‣ Logging events
‣ Social media clickstreams
‣ Basically, any real-time data source
117. ‣ But running a Kafka cluster is highly non-trivial.
‣ So AWS introduced its own version of Kafka and offered it as a managed service: Amazon Kinesis Streams.
118. ‣ At re:Invent 2014, Amazon shared a wicked cool example of how Major League Baseball was tracking data from the field and using it to generate stats, visualizations, and more.
120. ‣ But what happens after the data gets into Amazon Kinesis?
121. ‣ The answer is that we can have Kinesis
Consumers that periodically read the data.
Amazon Kinesis
Me Want Moar Data!
122. ‣ The consumer can then do anything with it:
‣ Store it in S3 for later retrieval.
‣ Store it in Redshift for later querying.
‣ Store it in a relational database.
‣ Or any other custom operation.
123. ‣ Previously, we had to write our own custom
worker to do any processing.
124. ‣ But what if we just want to query windows of
incoming data and write it to a database? Isn’t
that pretty common?
125. ‣ But now we don’t have to!
‣ That’s why Amazon has introduced:
Amazon Kinesis Analytics
126. Input - Query - Output
‣ Inputs
‣ Streaming Data Sources: Kinesis Streams, Kinesis Firehose
‣ Reference Data Source: Data in S3
‣ Query
‣ Write ANSI SQL against the data stream
‣ Outputs
‣ S3
‣ Redshift
‣ Kinesis Firehose (→ Amazon Elasticsearch)
‣ Kinesis Streams
127. Core Features
‣ Use Standard SQL to query data streams.
‣ Kinesis will inspect your data stream and automatically
create a baseline schema against which you can write
your queries.
‣ Built-in live SQL editor to test queries against live data.
‣ Pre-written queries for common use cases.
‣ Query continuously, by Tumbling Windows, or Sliding
Windows.
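A Tumbling Window groups the stream into fixed, non-overlapping time buckets and aggregates each bucket. Kinesis Analytics expresses this in SQL; here is a plain-Python sketch of the idea, using assumed sample records rather than a real stream:

```python
from collections import defaultdict

# Sample (timestamp_seconds, value) records -- assumed data for illustration.
stream = [(1, 10.0), (3, 12.0), (62, 11.0), (65, 9.0), (130, 8.0)]

def tumbling_window_avg(records, window_seconds=60):
    """Average the values inside each fixed, non-overlapping window."""
    buckets = defaultdict(list)
    for ts, value in records:
        buckets[ts // window_seconds].append(value)
    # Key each result by the window's start time.
    return {w * window_seconds: sum(v) / len(v) for w, v in sorted(buckets.items())}

print(tumbling_window_avg(stream))
# {0: 11.0, 60: 10.0, 120: 8.0}
```

A Sliding Window differs in that consecutive windows overlap, so each record can contribute to several aggregates.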
129. Business Problem
‣ Ice Cream shop
‣ IoT Enabled
‣ We track weight of each tub of ice cream
continuously as a way to know in real-time how
much ice cream we need to order.
‣ Our customer wants a slick real-time
dashboard of everything.
130. Architecture
‣ IoT Weight Monitors → Kinesis Streams → Kinesis Analytics → S3 Bucket
‣ ECS Cluster running an app to query the S3 data and return dashboard data, plus an app that serves static assets for a Single-Page App
‣ ALB in front of the ECS Cluster
‣ Users get dashboard updates with WebSockets
‣ RDS PostgreSQL
131. Caveats
‣ If you had a low enough volume of data, you could just have your sensors write directly to RDS Postgres and reduce lots of cost and complexity.
‣ But if you have enough data volume that you need the power of Kinesis, then this architecture makes sense.
‣ Querying S3 for real-time data is probably a bad idea, so it may make more sense to write a worker to read from S3 and write data to RDS Postgres, or to use Redshift.
‣ Serving a static web app from an ECS app isn’t bad, but using S3 (+ CloudFront) is more efficient (though also more complex to set up).
132. Thank you!
Want to keep up with the latest news on
DevOps, AWS, software infrastructure,
and Gruntwork?
http://www.gruntwork.io/newsletter/