How to protect your application from outages and failures of cloud infrastructure: planning a disaster recovery architecture, and using Cloudify for cloud abstraction and monitoring.
1. Protect your app from Outages
Ron Zavner, Applications Architect at GigaSpaces
February 2013
2. AGENDA
AWS and outages
Outage impact
Disaster Recovery – it’s all about redundancy!
Cloudify as a solution for redundancy
Demo with Cloudify on EC2
® Copyright 2013 GigaSpaces Ltd. All Rights Reserved
3. AWS USAGE
• AWS – around 0.5M servers
• Facebook – less than 0.1M servers
• Google – around 1M servers
5. OUTAGE – APRIL 21, 2011
6. OUTAGE - JUNE 29, 2012
7. OUTAGE - OCTOBER 22, 2012
8. OUTAGE - CHRISTMAS EVE 2012
9. IS THAT WHAT YOU EXPECT?
99% uptime - 3.65 days downtime per year
99.9% uptime - 8.76 hours downtime per year
99.99% uptime - 53 minutes downtime per year
99.999% uptime - 5.26 minutes downtime per year
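The figures above are just the complement of the uptime fraction spread over a 365-day year; a quick sketch of the arithmetic:

```python
# Yearly downtime implied by an uptime percentage (365-day year).
MINUTES_PER_YEAR = 365 * 24 * 60


def downtime_minutes(uptime_pct):
    """Minutes of downtime per year for a given uptime percentage."""
    return (1 - uptime_pct / 100) * MINUTES_PER_YEAR


for pct in (99.0, 99.9, 99.99, 99.999):
    print(f"{pct}% uptime -> {downtime_minutes(pct):.1f} min/year")
```

For example, 99% uptime allows (1 - 0.99) x 525,600 = 5,256 minutes, i.e. the 3.65 days on the slide; 99.99% works out to about 52.6 minutes, rounded to 53 above.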
10. OUTAGE IMPACT – DESIGN FOR FAILURES
Outage could cost…
$89K per hour for Amadeus
$225K per hour for PayPal!
13. PREPARE FOR DISASTER RECOVERY
• Dedicated expert for DR architecture
• Define target recovery time & point
• Assume every tier can fail
• Use monitoring and alerts
• Document your operational processes
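The "monitoring and alerts" item can be sketched as a minimal liveness check; the health-endpoint URL and the alert callback below are hypothetical placeholders, not part of any specific product:

```python
# Minimal liveness-check sketch for the "use monitoring and alerts" step.
# The endpoint URL and alert hook are illustrative placeholders.
import urllib.request


def is_alive(url, timeout=5):
    """Return True if the health endpoint answers with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False


def check_and_alert(url, alert):
    """Fire the alert callback when the primary stops answering."""
    if not is_alive(url):
        alert(f"primary {url} failed liveness check")
        return False
    return True
```

In practice this loop would run from a separate failure domain (another zone or region), so that the monitor does not go down with the thing it monitors.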
19. CLOUDIFY POSITIONING IN THE CLOUD STACK
The cloud stack, from productivity down to control:
PaaS (high productivity) - Heroku, CloudFoundry, GAE, OpenShift
DevOps (automation) - RightScale, Enstratus, Puppet, Chef
IaaS (full control) - public clouds (AWS, Rackspace, ..) and private clouds (VMware, OpenStack, ..)
Cloudify aims for high productivity with full control.
24. DATA REPLICATION
• Cloudify Replicated MySQL Recipe
• Generic replication service using WAN Gateway
25. GENERIC REPLICATION SERVICE OVER WAN
Replication sites: London, New York, Hong Kong
In-Memory Speed · Scalable and Efficient · High Availability and Self-Healing
27. VERIFI (CURRENT) DEPLOYMENT ARCHITECTURE
Internet-facing deployment in a single availability region (US-West: Oregon), one EC2 instance per component:
- mod_cluster
- PostgreSQL (with data volume)
- Cassandra (with data volume)
- JBoss
(4 recipes)
28. TARGET ARCHITECTURE
Bootstrap two EC2 clouds in different regions and install the "verifi" application on each. The second cloud has a slightly modified (extended) PostgreSQL recipe so it acts as a slave, and runs no app servers. Upon a failure of the primary zone, the second cloud spins up instances of the app servers and turns its data instance into the master, then bootstraps another "slave" cloud in another zone.
Two mirrored stacks with replication between them, each component on its own EC2 instance:
- Availability region US-West (Oregon): Internet-facing mod_cluster, Postgres master with data volume, Cassandra, JBoss
- Availability region US-East (Virginia): mod_cluster, Postgres slave with data volume, Cassandra, JBoss
29. FAILOVER SCENARIO
Upon initial deployment, the primary deployment of the application is bootstrapped onto cloud #1 in region US-West (Oregon), running the app servers and a PostgreSQL master. Another, slightly modified application recipe is bootstrapped as cloud #2 in region US-East (Virginia), polling cloud #1 for failure (a liveness poll) and acting as a PostgreSQL DB slave.
When a region failure occurs: turn the Postgres slave in cloud #2 into the master and start app server instances there; then bootstrap another standby cloud (#3) in a different region (US-West, California) using the same application recipe used to bootstrap cloud #2. Cloud #3 now runs the liveness poll against cloud #2.
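The failover flow above can be sketched as a simple ordered procedure; the three operations are hypothetical callables standing in for the real cloud and database actions (promoting the Postgres slave, starting app servers, bootstrapping a new standby cloud):

```python
# Sketch of the failover ordering described above. The operations are
# caller-supplied callables; only the sequencing is taken from the slides.
def failover(promote_slave, start_app_servers, bootstrap_standby):
    """On primary-region failure: promote the slave DB, start app servers
    in the surviving region, then bootstrap a fresh standby cloud in
    another region. Returns the actions in the order they were taken."""
    actions = []
    promote_slave()
    actions.append("promote_slave_to_master")
    start_app_servers()
    actions.append("start_app_servers")
    bootstrap_standby()
    actions.append("bootstrap_new_standby")
    return actions
```

The ordering matters: the database must be writable (promoted) before the app servers come up against it, and only then is capacity spent on rebuilding the standby.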
30. DEMO ON EC2 - 5-MINUTE SETUP
/* Credentials - You must enter your
* cloud provider account credentials
*/
user="ENTER_USER_HERE"
apiKey="ENTER_API_KEY_HERE"
keyFile="ENTER_KEY_FILE_HERE"
keyPair="ENTER_KEY_PAIR_HERE"
// Advanced usage
hardwareId="m1.small"
locationId="us-east-1"
linuxImageId="us-east-1/ami-1624987f"
ubuntuImageId="us-east-1/ami-82fa58eb"
31. SUMMARY
AWS and outages
Outage impact
Disaster Recovery – it’s all about redundancy!
Cloning your environment – app stack
Cloning your DB – Replication
Cloudify as a solution for Redundancy
Use recipes to work on any cloud
Fast and customized data replication
Demo with Cloudify on EC2
32. QUESTIONS & ANSWERS
Thank You!
RonZ@gigaspaces.com
Speaker notes
A high-ranking Amazon executive said there are 60,000 different customers across the various Amazon Web Services, and most of them are not the startups normally associated with on-demand computing. Rather, the biggest customers in both number and amount of computing resources consumed are divisions of banks, pharmaceutical companies and other large corporations that try AWS once for a temporary project, and then get hooked. According to Statspotting.com in March 2012, a researcher estimated that Amazon Web Services is using at least 454,400 servers in seven data center hubs around the globe. Put in perspective: Google is powered by a million servers, maybe a little more than that; Amazon has half a million servers; and Facebook, the service that takes up one fourth of all our time online, is powered by fewer than 100,000 servers. Biggest customers: Pinterest, Instagram, Netflix, Heroku, Quora, Foursquare, etc. Amazon Web Services runs more than 835,000 requests per second for hundreds of thousands of customers in 190 countries, including 300 government agencies and 1,500 educational institutions.
The Amazon cloud proved itself in that sufficient resources were available worldwide, such that many well-prepared users could continue operating with relatively little downtime. But because Amazon's reliability has been incredible, many users were not well prepared, leading to widespread outages. The Amazon EC2 outage in April 2011 was the worst in cloud computing's history at the time. It made the front page of many news outlets, including the New York Times, probably because many people were shocked by how many web sites and services rely on EC2. Microsoft Azure has had outages too: on Dec 28, 2012, some owners of Microsoft's Xbox 360 game console were unable to access some of their cloud-based save storage files; on July 26, 2012, service for Microsoft's Windows Azure Europe region went down for more than two hours; and on Feb 29, 2012, the ultimate result was service impacts of 8-10 hours for users of Azure data centers in Dublin, Ireland, Chicago, and San Antonio.
Some parts of Amazon Web Services suffered a major outage. A portion of volumes utilizing the Elastic Block Store (EBS) service became "stuck" and were unable to fulfill read/write requests. It took at least two days for service to be fully restored. Reddit, one of the better-known sites to go down due to the error, said it has 700 EBS volumes with Amazon. Sites like Quora and Reddit were able to come back online in "read-only" mode, but users couldn't post new content for many hours.
For the second time in less than a month, Amazon's Northern Virginia data center suffered an outage, impacting many popular services such as Instagram, Pinterest and Netflix. Several websites that rely on Amazon Web Services were taken offline by a severe storm of historic proportions in the Northern Virginia area, where Amazon's largest datacenter is located. Amazon previously suffered an outage in its Northern Virginia facilities on June 14, 2012. A line of severe storms packing winds of up to 80 mph caused extensive damage and power outages in Virginia; Dominion Virginia Power crews were assessing damages and restoring power where safe to do so.
A major outage occurred, affecting many sites such as Reddit, Foursquare, Pinterest, and others. The cause was a latent bug in an operational data collection agent: a memory leak combined with a failed monitoring system took out Reddit and other major services. In a post Friday night, AWS explained that the problem arose after a simple replacement of a data collection server. After installation, the server did not propagate its DNS address correctly, so a fraction of servers did not get the message. Those servers kept trying to reach the old server, which led to a memory leak that went out of control due to the failure of an internal monitoring alarm. Eventually the system ground to a virtual stop and millions of customers felt the pain.
Amazon AWS again suffered an outage, causing websites such as Netflix instant video to be unavailable for some customers, particularly in the north-eastern US. Amazon later issued a statement detailing the issues with the Elastic Load Balancing service that led up to the outage. The disruption began shortly after noon Pacific time on December 24, when data was accidentally deleted by a developer during maintenance on the East Coast Elastic Load Balancing system, which is designed to distribute traffic volume among servers. "Netflix is designed to handle failure of all or part of a single availability zone in a region as we run across three zones and operate with no loss of functionality on two," the company said in a blog post this afternoon. "We are working on ways of extending our resiliency to handle partial or complete regional outages."
Fault-tolerant systems are measured by their uptime/downtime for end users. Amazon says it is "committed" to a 99.95 percent uptime.
Although AWS went offline for only a few hours, the downtime did have an impact on customers' businesses. There is no reliable data on the number of people affected by a cloud computing service outage, but it is estimated that the travel service provider Amadeus loses $89,000 per hour during any cloud computing outage, while PayPal loses around $225,000 per hour.
DR is the process and procedures you take to restore your system after a catastrophic event. Cloud infrastructure has made DR much easier and more affordable compared to previous options. But a cloud can also suffer from large-scale failures because of network, power or other IT failures. Application owners need to take responsibility for HA and DR; they can use multiple servers, availability zones, regions and even clouds. Zones within a region share a LAN, so they have high bandwidth, low latency and private IP access, while utilizing separate power resources. Regions are "islands": they share no resources.
Each cloud is unique in many aspects, offering a different API and functionality to manage its resources: a different set of available resources; different formats, encodings and versions; different security groups, machine images, snapshots, etc.
Make sure to have a dedicated expert to manage your DR architecture, processes and testing. Define what your target recovery time and recovery point are. Be pessimistic and design for failures: assume everything will fail and design a solution capable of handling it. Avoid single points of failure; all parts of your app should be highly available (across different AZs/regions/clouds): load balancers, app servers, web servers, message bus, database. Use monitoring and alerts for failover processes and for every change in state. Document your DR operational processes and automations. Try to "break" different parts of your application, in different ways: unplug the network, turn a machine off, etc. Then try it again.
Netflix has open sourced "Chaos Monkey," its tool designed to purposely cause failure in order to increase the resiliency of an application in Amazon Web Services (AWS). It's a timely move, as AWS has had its fair share of outages; with tools like Chaos Monkey, companies can be better prepared when a cloud infrastructure fails. In a blog post, Netflix says that this is the first of several tools it will open source to help companies better manage the services they run in cloud infrastructures. Next up is likely to be Janitor Monkey, which helps keep an environment tidy and costs down. Chaos Monkey has achieved its own fame for its innovative approach. According to Netflix, the tool "randomly disables production instances to make sure it can survive common types of failure without any customer impact. The name comes from the idea of unleashing a wild monkey with a weapon in your data center (or cloud region) to randomly shoot down instances and chew through cables — all the while we continue serving our customers without interruption."
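The principle behind Chaos Monkey (randomly disabling one production instance to verify the system survives) fits in a few lines; `chaos_strike` and its `terminate` callback below are illustrative names, not Netflix's actual API, which integrates with AWS Auto Scaling groups:

```python
# Illustrative Chaos Monkey-style sketch: randomly pick one instance from a
# group and "terminate" it via a caller-supplied function.
import random


def chaos_strike(instances, terminate, rng=random):
    """Terminate one randomly chosen instance and return its id."""
    if not instances:
        return None
    victim = rng.choice(instances)
    terminate(victim)
    return victim
```

Running this continuously against production forces the team to keep every tier redundant, which is exactly the design-for-failure discipline the earlier slides argue for.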
Netflix provides an excellent toolset for surviving outages at the operational level. In this part I want to zoom in on the design implications for our application. The core principle for surviving failure is actually fairly simple, and in fact applies to any system, not just the cloud, whether it happens to be an airplane, a missile or a car: in the end it's all about redundancy. The degree of tolerance is often determined by how many alternate systems or parts of the system we have in our design and how much they are separated from one another, and also by how fast we can detect the broken part in our system and make the switch. In software terms, the common parts that comprise our system fall into two main groups: the business logic and the data. Making a redundant software application that can survive failure is often based on setting up clones of both of those parts of our system.
We need abstraction; we don't want to be locked in. We want tools that offer this abstraction layer both for daily management and for DR. Such a tool should translate our architecture concepts into cloud-specific properties (using recipes). To clone our application's business logic, we need to ensure that all parts of our system run the exact same version of all our software components: not just the binaries but also the configuration, the scripts that run our application and, more importantly, all our post-deployment procedures such as failover, scaling and monitoring, which must also be kept consistent. Quite often, what makes cloning our business logic complex is that the information on how to run our application is scattered across many different sources, such as scripts, as well as the minds of the people who run those apps. To make the job of cloning our application much simpler, and thus more consistent, we need to capture all of the information for running our apps in one place. Configuration management tools such as Chef, Puppet and, in the case of Amazon, CloudFormation can help in this regard.
RDS read replica - Amazon RDS uses MySQL’s built-in replication functionality to create a special type of DB Instance called a Read Replica that allows you to elastically scale out beyond the capacity constraints of a single DB Instance for read-heavy database workloads. Once you create a Read Replica, database updates on the source DB Instance are replicated to the Read Replica using MySQL’s native, asynchronous replication. Since Read Replicas leverage standard MySQL replication, they may fall behind their sources, and they are therefore not intended to be used for enhancing fault tolerance in the event of source DB Instance failure or Availability Zone failure.
There are lots of patterns for avoiding failure, and it took Netflix a lot of development work to build a framework that handles them well. Most users and startups don't have the luxury of implementing them themselves; you need a tool that enables you to automate those patterns in a consistent way. Enter Cloudify.
Cloudify is an enterprise-class open source PaaS stack that sits between your application and your chosen cloud. It enables your application to concentrate on doing what it does best, leaving Cloudify to ensure that the resources it needs are available regardless of the cloud and stack used.
Any App, Any Stack — Move your application to the cloud without making any code changes, regardless of the application stack (Java/Spring, Java EE, Ruby on Rails, …), database store (relational such as MySQL or non-relational such as Apache Cassandra), or any other middleware components it uses. This enables you to achieve your objective of no code changes. To make setting all this up simpler, we tried to bake these patterns into ready-made tools, scripted as out-of-the-box recipes. The Cloudify recipes include: database cluster recipes with support for MySQL, MongoDB, Cassandra, PostgreSQL, etc.; integration with Chef and Puppet; automation of failover, scaling and continuous maintenance of your application; and application recipes that let you capture every aspect of running your application, including post-deployment aspects such as failover, scaling and monitoring.
Any Cloud — Move your application to any cloud environment, from any environment, at any time. Cloudify supports public clouds (Amazon EC2, Windows Azure, Rackspace, …) and private clouds (OpenStack, cloud.com, VMware vCenter, Citrix XenServer, …). Moreover, to support enterprises that want to deploy the same application in multiple environments (say, for cloud bursting), Cloudify's framework is designed to be flexible enough to handle any application stack, and yet isolate the application completely from the underlying cloud runtime. By hiding the APIs and configuration of a cloud from your application, your application can easily be moved from cloud to cloud. This enables you to achieve your objective of no lock-in.
Cloudify MySQL master/slave recipe: the first service instance becomes the master (automatically) and the other instances (two in our case) are the slaves. Generic replication service using the XAP WAN Gateway.
Monitor data-mutating SQL statements on the source site: turn on the MySQL query log, and write a listener ("Feeder") to intercept data-mutating SQL statements and write them to the GigaSpaces In-Memory Data Grid. Replicate data-mutating SQL statements over the WAN: I used GigaSpaces WAN Replication to replicate the SQL statements between the data grids of the primary and secondary sites in a real-time, transactional manner. Execute data-mutating SQL statements on the target site: write a listener ("Processor") to intercept incoming SQL statements on the data grid and execute them on the local MySQL DB. The network connectivity between the primary and secondary sites can be addressed in several ways, ranging from load balancing between the sites, through setting up a VPN between the sites, up to using designated products such as Cisco's Connected Cloud Solution.
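The Feeder side of this pipeline (filtering data-mutating statements out of the query log before handing them to the replication transport) can be sketched as follows; the `sink` callable stands in for the write to the GigaSpaces data grid, which is out of scope here:

```python
# Sketch of the "Feeder": keep only data-mutating statements from a query
# log and pass them to a replication sink. Transport is out of scope.
MUTATING = ("insert", "update", "delete", "replace")


def feed(log_lines, sink):
    """Send only data-mutating SQL statements to the replication sink;
    return the statements that were forwarded."""
    sent = []
    for line in log_lines:
        stmt = line.strip()
        if stmt.lower().startswith(MUTATING):
            sink(stmt)
            sent.append(stmt)
    return sent
```

The Processor side is the mirror image: it pops statements off the remote grid and executes them against the local MySQL instance, preserving order so the replica stays consistent.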
Cloud brings a lot of promise for making our business more agile. But the cloud has also become a huge shared infrastructure in which every failure has a much more significant impact on our business worldwide. The experience of the past year has taught us that even a robust cloud infrastructure such as Amazon's can fail. Through this experience we've learned that rather than relying on the infrastructure to prevent failure, we need to design our system to cope with failure and get used to failure as a way of life. Having said that, the investment required to build a robust application can be fairly large, and not something everyone can afford. Using tools like Cloudify, Chef, Puppet and, if you're a pure Amazon shop, Netflix <framework> can greatly reduce this effort by making a lot of those patterns pre-baked into recipes.