2. Agenda
• Introduction
– NetflixOSS, Cloud Native with Operational
Excellence, and IBM Cloud Services Fabric
• Docker Local Port
• Docker Cloud Port
3. About Andrew
• IBM - Cloud Performance Architecture and Strategy
• How did I get into cloud?
– Performance led to cloud scale, which led to cloud platforms
– Created Mobile/Cloud Acme Air
– Cloud platforms led to NetflixOSS and to winning the Netflix Cloud Prize for best sample application
– Also ported to IBM Cloud - SoftLayer
– Two years focused on IBM Cloud Services Fabric and Operations
• RTP dad who enjoys technology as well as running, wine, and poker
@aspyker
ispyker.blogspot.com
4. About Sudhir
• Manages the Cloud Platform Infrastructure team at Netflix
• Many of these components have been open sourced under the NetflixOSS umbrella
• Sudhir is a weekend golfer and tries to make the most of the wonderful California weather and public courses
5. NetflixOSS on Github
• NetflixOSS is what it takes to run a cloud service and business with operational excellence
• netflix.github.io
– 40+ OSS projects
– Expanding every day
• Focusing more on interactive mid-tier server technology today
8. Elastic, Web and Hyper Scale
(Chart contrasting “Doing This” with “Not Doing That”; source: Programmableweb.com 2012)
9. Elastic, Web and Hyper Scale
(Diagram: load balancers in front of a front end API for browser and mobile clients, which calls the Authentication and Booking services, backed by temporal caching and durable storage)
Strategy → Benefit
• Make deployments automated → Impossible without automation
• Expose a well designed API to users → Offloads presentation complexity to clients
• Remove state from mid tier services → Allows easy elastic scale out (sketch below)
• Push temporal state to the client and caching tier → Leverages clients, avoids data tier overload
• Use partitioned data storage → Data design and storage scale with HA
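A minimal sketch of the “remove state from mid tier services” row, assuming a hypothetical CacheClient that stands in for the temporal caching tier; the handler keeps no per-user fields of its own, so any instance can serve any request and the tier can scale out freely.

// Illustrative stateless mid-tier handler; CacheClient is a hypothetical caching-tier interface.
public class BookingHandler {
    interface CacheClient {
        String get(String key);
        void put(String key, String value, int ttlSeconds);
    }

    private final CacheClient cache; // shared caching tier, not instance-local state

    BookingHandler(CacheClient cache) {
        this.cache = cache;
    }

    String handle(String sessionToken, String bookingRequest) {
        // Temporal state (the in-progress search) lives in the caching tier, not on this
        // instance, so the load balancer can send the next request to any other instance.
        String pendingSearch = cache.get("search:" + sessionToken);
        String confirmation = book(bookingRequest, pendingSearch);
        cache.put("booking:" + sessionToken, confirmation, 300);
        return confirmation;
    }

    private String book(String request, String context) {
        return "confirmed"; // durable write to the partitioned data store elided
    }
}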
11. Micro service Implementation
Highly Available Service Runtime Recipe
(Diagram: the Web App front end (REST services) calls the auth-service App Service through a Ribbon REST client with Eureka awareness, wrapped in Hystrix with a fallback implementation; the service registers via Karyon with clustered Eureka servers)
Implementation Detail → Benefits
• Decompose into micro services → Key user path always available; failure does not propagate across service boundaries
• Karyon with automatic Eureka registration → New instances are quickly found; failing individual instances disappear
• Ribbon client with Eureka awareness → Load balances and retries across instances with “smarts”; handles temporal instance failure
• Hystrix as dependency circuit breaker → Allows for fast failure; provides graceful cross-service degradation/recovery (sketch below)
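A minimal sketch of the Hystrix piece of this recipe, assuming a hypothetical AuthServiceCommand that wraps the Ribbon/REST call; the fallback runs when the auth-service dependency fails or its circuit is open.

import com.netflix.hystrix.HystrixCommand;
import com.netflix.hystrix.HystrixCommandGroupKey;

// Hypothetical wrapper around the auth-service call; names are illustrative.
public class AuthServiceCommand extends HystrixCommand<Boolean> {
    private final String token;

    public AuthServiceCommand(String token) {
        super(HystrixCommandGroupKey.Factory.asKey("auth-service"));
        this.token = token;
    }

    @Override
    protected Boolean run() throws Exception {
        // Normally a Ribbon REST client call that Eureka resolves to a healthy instance.
        return callAuthServiceOverRibbon(token);
    }

    @Override
    protected Boolean getFallback() {
        // Graceful degradation: treat the session as unauthenticated rather than failing the page.
        return Boolean.FALSE;
    }

    private Boolean callAuthServiceOverRibbon(String token) {
        return Boolean.TRUE; // REST call elided
    }
}

// Caller: new AuthServiceCommand(token).execute() fails fast when the circuit is open.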
12. IaaS High Availability
(Diagram: global load balancers route to the Dallas region; the Web App, Auth Service, and Booking Service are clustered across the DAL01, DAL05, and DAL06 datacenters behind local LBs, with Eureka and cluster auto recovery and scaling services)
Rule → Why?
• Always > 2 of everything → One is a SPOF; two doesn’t web-scale and makes DR recovery slow
• Including IaaS and cloud services → You’re only as strong as your weakest dependency
• Use auto scaler/recovery monitoring → Clusters guarantee availability and service latency
• Use application level health checks → An instance on the network != a healthy instance (sketch below)
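For the “instance on the network != healthy” rule, the auto recovery service should probe something the application itself answers. A minimal sketch of an app-level /healthcheck that verifies a downstream dependency before reporting healthy; it uses the plain JDK HttpServer, not the actual Karyon health check API, and the port and path are illustrative.

import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;

// Illustrative only: a real deployment would use Karyon's built-in health check support.
public class HealthCheckEndpoint {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8077), 0);
        server.createContext("/healthcheck", exchange -> {
            // Healthy only if the app can actually reach its dependencies (e.g. the data tier).
            boolean healthy = canReachDataStore();
            byte[] body = (healthy ? "OK" : "UNHEALTHY").getBytes();
            exchange.sendResponseHeaders(healthy ? 200 : 500, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        server.start();
    }

    private static boolean canReachDataStore() {
        return true; // ping Cassandra / caching tier, elided
    }
}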
13. The only proof is testing!
Chaos Testing
(Diagram: the same Dallas region topology with Chaos Gorilla taking out an entire datacenter while the global load balancers and the surviving datacenters keep serving)
Videos: bit.ly/noss-sl-blog, http://bit.ly/sl-gorilla
15. Continuous Delivery
(Diagram: Cluster v1 → Canary v2 → Cluster v2 rollout)
Step → Technology
• Developers test locally → Unit test frameworks
• Continuous build → Continuous build server based on Gradle builds
• Build “bakes” a full instance image → Imaginator (Aminator inspired) creates SoftLayer images
• Developers work across dev and test → Archaius allows for environment-based context (sketch below)
• Developers do canary tests and red/black deployments in prod → Asgard console provides a common devops approach to app clusters, security patterns, and visibility
(Diagram: continuous build server output is baked to SoftLayer image templates, or AMIs)
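A minimal sketch of the Archaius usage implied by “environment based context”: the same code reads a dynamic property whose value differs between dev, test, and prod. The property name and default are illustrative.

import com.netflix.config.DynamicPropertyFactory;
import com.netflix.config.DynamicStringProperty;

public class EnvironmentContextExample {
    // Resolved from the environment-specific configuration source; changes are picked up at runtime.
    private static final DynamicStringProperty AUTH_SERVICE_VIP =
            DynamicPropertyFactory.getInstance()
                    .getStringProperty("acmeair.auth-service.vip", "auth-service.dev.local");

    public static void main(String[] args) {
        // In test or prod the same code sees a different value without a rebuild.
        System.out.println("auth-service VIP: " + AUTH_SERVICE_VIP.get());
    }
}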
17. Operational Visibility
Visibility Point → Technology
• Basic IaaS instance monitoring → Not enough (not scalable, not app specific)
• User-like external monitoring → SaaS offerings or OSS like Uptime
• Service to service interconnects → Hystrix streams, Turbine aggregation, Hystrix dashboard
• Application centric metrics → Servo gauges, counters, and timers sent to a metrics store (sketch below)
• Remote logging → Logstash/Kibana
• Threshold monitoring and alerts → Services like PagerDuty for incident management
(Diagram: the Web App and Auth Service emit Servo metrics, Hystrix/Turbine streams, and Uptime checks into metric/event repositories and Logstash/Elasticsearch/Kibana, which feed incident management)
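A minimal sketch of the Servo side of “application centric metrics”, assuming the Servo Monitors helpers; the metric names are illustrative, and a separate poller/publisher would ship the registered monitors to the metrics store.

import com.netflix.servo.DefaultMonitorRegistry;
import com.netflix.servo.monitor.Counter;
import com.netflix.servo.monitor.Monitors;
import com.netflix.servo.monitor.Stopwatch;
import com.netflix.servo.monitor.Timer;
import java.util.concurrent.TimeUnit;

public class BookingMetrics {
    // Application-centric metrics, registered so a metrics poller can publish them.
    private static final Counter BOOKINGS = Monitors.newCounter("acmeair.bookings");
    private static final Timer BOOKING_LATENCY =
            Monitors.newTimer("acmeair.bookingLatency", TimeUnit.MILLISECONDS);

    static {
        DefaultMonitorRegistry.getInstance().register(BOOKINGS);
        DefaultMonitorRegistry.getInstance().register(BOOKING_LATENCY);
    }

    public void recordBooking(Runnable bookingCall) {
        Stopwatch stopwatch = BOOKING_LATENCY.start();
        try {
            bookingCall.run();
            BOOKINGS.increment();
        } finally {
            stopwatch.stop();
        }
    }
}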
18. Current IBM Cloud Services Fabric (currently VM based)
(Architecture diagram: 1. Eureka; 2. Local LBs and Global Load Balancers; 3. Region (us-south-1) with fabric services clustered across three datacenters (DAL01, DAL05, DAL06) and apps clustered across the same three DCs; 4. Cluster Auto Recovery and Scaling Services; 5. Asgard Service; 6. Imaginator Service; 7. Uptime Service; 8. Logstash/Kibana. Code and image build plus devops flow: your built code and tested base images with agents become your front end and mid tier services.)
21. Docker “Local” Setup
(Diagram: a docker-local region with datacenters docker-local-1a/1b/1c; users reach the Acme Air Web App and Auth Service, backed by a Cassandra node, through a Zuul load balancer; Eureka provides service discovery, Microscaler provides cluster auto recovery and scaling, Skydock/SkyDNS provide container DNS, and devops admins use the Asgard console. Blue and green boxes in the diagram are container instances.)
22. Why Docker for our work?
• Because we could, actually …
– To show Netflix cloud platform as portable to non-VM clouds
– Help with NetflixOSS understanding inside of IBM
• Local Testing – a “cloud in a box” that is more production-like
– Developers able to do larger scale testing
– Continuous build/test tool systems able to run at “scale”
• Public Cloud Support
– Understand how a container IaaS layer could be implemented
• So far this is a proof of concept; you can help continue it
– More on that later (hint: open source!)
23. Micro service Implementation
Two Service Location Technologies?
(Diagram: the same Web App front end to auth-service call path as slide 11, with Ribbon, Eureka, and Karyon, now running on a DockerHost where Skydock watches the Docker daemon’s event API and registers containers in SkyDNS alongside Eureka registration)
24. Service Location Lessons Learned
• Both did their job well
– SkyDNS/Skydock for basic container DNS
• Must be careful of DNS caching clients (see the sketch after this list)
– Eureka for application level routing
• Interesting to see the contrasts
– Intrusiveness (Eureka requires on instance/in app changes)
– Data available (DNS isn’t application aware)
– Application awareness (running container != healthy code)
• Points to value in “above IaaS” service location registration
– Transparent IaaS implementations struggle to be as application aware
• More information on my blog http://bit.ly/aws-sd-intr
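For the “careful of DNS caching clients” point: the JVM caches successful name lookups, so a long-running Java client can keep resolving a SkyDNS name to a container that has since been replaced. A minimal sketch of shortening that cache; the TTL values are illustrative.

import java.security.Security;

public class DnsCacheSettings {
    public static void main(String[] args) {
        // Shorten the JVM's positive DNS cache so renamed/replaced containers are re-resolved.
        // Must be set before the first lookup; values are in seconds.
        Security.setProperty("networkaddress.cache.ttl", "5");
        Security.setProperty("networkaddress.cache.negative.ttl", "0");
        // ... rest of the client that resolves SkyDNS names, elided ...
    }
}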
25. Instance Auto Recovery / Scaling
• Auto scaling serves three important functions (sketch below)
– Devops cluster rolling versions
– Auto recovery of instances due to failure
– Auto scaling due to load
• Various NetflixOSS auto scalers
– For NetflixOSS proper – Amazon Auto Scaler
– For SoftLayer port – RightScale Server Arrays
– For the Docker local port – we implemented “Microscaler”
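A highly simplified sketch of the auto recovery aspect (not Microscaler’s actual code): compare the desired size of a cluster with the count of healthy instances and start replacements through whatever IaaS API is available. The IaasClient interface is hypothetical.

import java.util.List;

// Illustrative reconciliation loop; Microscaler's real implementation differs.
public class RecoveryLoop {
    interface IaasClient {
        List<String> listHealthyInstances(String cluster);
        void startInstance(String cluster, String image);
    }

    private final IaasClient iaas;

    RecoveryLoop(IaasClient iaas) {
        this.iaas = iaas;
    }

    void reconcile(String cluster, String image, int desired) {
        int healthy = iaas.listHealthyInstances(cluster).size();
        // Auto recovery: replace instances that have failed; auto scaling would also adjust 'desired'.
        for (int i = healthy; i < desired; i++) {
            iaas.startInstance(cluster, image);
        }
    }
}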
26. Microscaler Agent Architecture
• OSS at http://github.com/EmergingTechnologyInstitute/microscaler
• The Microscaler service and agent run as containers
• Microscaler has a remote CLI client and a REST interface
• Note:
– No IBM support; an OSS proof-of-concept auto scaler needed for local usage
– Works well for small scale Docker local testing
(Diagram: a Microscaler Agent on each Dockerhost talks to the Docker remote API to manage Web App and Auth Service container instances, driven by the Microscaler service via REST or the CLI)
28. Working with the Docker remote API
• Microscaler and Asgard need to work against the “IaaS” API
– Docker remote API to the rescue
– Start and stop containers, query images and containers (example below)
• Exposed http://172.17.42.1:4243 to both
– Could (should) have used the Unix socket instead
– Be careful of security once you do this
• Found that this needs to be easily configurable
– Boot2docker and docker.io default to different addresses
• Found that the current API isn’t totally documented
– Advanced options not documented or shown in examples
– Open Source to the rescue (looked at service code)
– Need to work on submitting pull requests for documentation
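A minimal sketch of hitting the Docker remote API endpoint mentioned above: the /containers/json route lists running containers, and the address and port match the daemon exposed to Microscaler and Asgard (hence the security caveat).

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class ListContainers {
    public static void main(String[] args) throws Exception {
        // Same endpoint Microscaler and Asgard were pointed at; no TLS/auth on this socket.
        URL url = new URL("http://172.17.42.1:4243/containers/json");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        try (BufferedReader reader =
                     new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line); // JSON array describing the running containers
            }
        }
    }
}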
29. Region and Availability Zones
• Coded Microscaler to assign availability zones
– Via user_data in an environment variable (sketch below)
– Need metadata about deployment in Docker eventually?
• Tested Chaos Gorilla
– Stop all containers in a single availability zone
• Tested Split Brain Monkey
– Jepsen-inspired; used iptables to isolate the Docker network
• Eureka awareness of availability zones isn’t there yet
– Should be an easy change based on similar SoftLayer port
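A minimal sketch of how an instance can pick up the availability zone that Microscaler injects through user_data as an environment variable; the variable name and default are illustrative.

public class AvailabilityZoneConfig {
    public static void main(String[] args) {
        // Hypothetical variable name; Microscaler passes it via the container's user_data.
        String zone = System.getenv().getOrDefault("AVAILABILITY_ZONE", "docker-local-1a");
        System.out.println("Registering with Eureka in zone: " + zone);
        // ... pass the zone into the Eureka/instance metadata configuration, elided ...
    }
}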
30. Image management
• Docker and baked images are kindred spirits
• Using locally built images - Easy for a simple demo
• Haven’t yet pushed the images to dockerhub
• Considering Imaginator (Aminator) extension
– To allow Docker images to be built the same way we build VM images
– Considering http://www.packer.io/
– Or maybe the other way around?
• Dockerfiles for VM images?
31. Using Docker as an IaaS?
• We do all the bad things
– Our containers run multiple processes
– Our containers use unmanaged TCP ports
– Our containers run and allow ssh access
• Good
– Get all the benefits of Docker containers and images
– Only small changes to CSF/NetflixOSS cloud platform
• Bad
– Might not take full advantage of Docker
• Portability, container process optimizations, composability
• Considering more Docker centric approaches over time
32. Where can I play with this?
# on boot2docker or docker.io under virtual box Ubuntu
git clone http://github.com/EmergingTechnologyInstitute/acmeair-netflixoss-dockerlocal
cd bin
# please read http://bit.ly/aa-noss-dl-license
./acceptlicenses.sh
# get coffee (or favorite caffeinated drink), depending on download speed ~ 30 min
./buildsimages.sh
# this is FAST! – but wait for about eight minutes for cross topology registration
./startminimum.sh
# Route your network from guest to docker network (http://bit.ly/docker-tcpdirect)
./showipaddrs.sh
# Look at the environment (Zuul front end, Asgard console, Eureka console, etc.)
Browse to http://172.17.0.X
All Open Source Today!
33. Docker “Local” Setup (demo)
(Same docker-local architecture diagram as slide 21: Eureka service discovery, Zuul load balancer, Microscaler, Skydock/SkyDNS, Asgard devops console, and the Acme Air Web App, Auth Service, and Cassandra node containers across docker-local-1a/1b/1c; blue and green boxes are container instances)
Show demo here
36. Networking
• Docker starts the docker0 bridge to interconnect instances on a single host
• We assigned the bridge’s subnet to be a portable subnet on a VLAN within our SoftLayer account
– We routed all traffic to the actual private interface
• This allows networking to work seamlessly
– Between datacenters
– Across hardware firewall appliances
– To external load balancers
– To all other instances (VMs, bare metal) in SoftLayer
• This allowed for easy networking between multiple Docker hosts
37. Docker API and Multi-host
• Once you have multiple Docker hosts
– You have multiple Docker remote APIs
• Wrote an “API Proxy” to deal with this (a rough sketch follows below)
• Not the best solution in the world, but it worked
• Considering how this works with the existing IaaS API
– A single SoftLayer API handles bare metal and virtual machines
– How to keep the API Docker compatible
• Maybe other, more Docker centric approaches are coming?
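A rough sketch of the “API Proxy” idea (not the actual implementation): accept Docker remote API calls on one address and forward each request to the Docker host that owns the container, keeping the API shape Docker-compatible. The backend host name is hypothetical, and this GET-only version skips routing rules, request bodies, and streaming endpoints.

import com.sun.net.httpserver.HttpServer;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;

// Illustrative only; the real proxy needs per-container routing, error handling, and auth.
public class DockerApiProxy {
    public static void main(String[] args) throws Exception {
        HttpServer proxy = HttpServer.create(new InetSocketAddress(4243), 0);
        proxy.createContext("/", exchange -> {
            // Pick the backend Docker host; a real version would map container/instance -> host.
            String backend = "http://dockerhost-dal05:4243"; // hypothetical host name
            HttpURLConnection conn = (HttpURLConnection)
                    new URL(backend + exchange.getRequestURI()).openConnection();
            int status = conn.getResponseCode();
            exchange.sendResponseHeaders(status, 0);
            try (InputStream in = conn.getInputStream();
                 OutputStream out = exchange.getResponseBody()) {
                in.transferTo(out); // relay the backend's response body unchanged
            }
            exchange.close();
        });
        proxy.start();
    }
}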
38. Image Management
• Currently using standard Docker private registry
• Considering how this could be integrated with
SoftLayer Image management system
– Use optimized cross datacenter distribution network
– Expose Docker layered versions through console
• Again, it is important not to lose Docker’s value in image transparency and portability
39. Docker Cloud on IBM SoftLayer
(Diagram: Dockerhosts in the DAL05 and DAL06 datacenters on the SoftLayer private network run WebApp and AuthService container instances (i001–i004), plus Registry, Zuul, Eureka, Cassandra, Microscaler and Microscaler Agents, Skydock/SkyDNS, and Asgard, with an API Proxy in front of each host’s Docker remote API)
Demos 1-1 today or tomorrow at Jerry’s session