It's harder than ever to predict in advance the load your application will need to handle, so how do you design your architecture so you can afford to implement as you go and still be ready for whatever comes your way?
It's easy to focus on optimizing each part of your application, but it's your application architecture that determines the options you have for making big leaps in scalability.
In this talk we'll cover practical patterns you can build today to meet the needs of rapid development while still creating systems that can scale up and out. Specific code examples will focus on .NET, but the principles apply across many technologies. Real-world systems will be discussed based on our experience helping customers around the world optimize their enterprise applications.
2. Who Am I?
• Kendall Miller
• One of the Founders of Gibraltar Software
– Small Independent Software Vendor Founded in 2008
– Developers of VistaDB and Gibraltar
– Engineers, not Sales People
• Enterprise Systems Architect & Developer since 1995
• BSE in Computer Engineering, University of Illinois Urbana-Champaign (UIUC)
• Twitter: @KendallMiller
10. Specific Architectures
• Gossip
• Map Reduce
• Tree of Responsibility
• Stream Processing
• Scalable Storage
• Publish/Subscribe
• Distributed Queues
• Load Balancers + Shared Nothing Units
• Load Balancers + Stateless Nodes + Scalable Storage
• Content Addressable Networks
• General Peer to Peer
11. ACD/C
• Async – Do the work whenever
• Caching – Don’t do any work you don’t have to
• Distribution – Get as many people to do the work as you can
• Consistency – We all agree on these key things
12. Async
• Decouple operations so you do the minimum amount of work in performance-critical paths
• Queue work that can be completed later to smooth out load
• Speculative Execution
• Scheduled Requests (nightly processes)
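The queue-and-defer idea above can be sketched in a few lines. This is a minimal, language-neutral illustration in Python (the talk's code focus is .NET); `send_receipt_email` is a hypothetical slow task that here just records what it processed so the example is self-contained.

```python
import queue
import threading

work_queue = queue.Queue()
processed = []

def send_receipt_email(order_id):
    processed.append(order_id)  # stand-in for real slow work (email, reporting, ...)

def worker():
    # Background worker drains the queue whenever it gets a chance.
    while True:
        order_id = work_queue.get()
        if order_id is None:  # sentinel tells the worker to stop
            break
        send_receipt_email(order_id)
        work_queue.task_done()

def handle_order(order_id):
    # Performance-critical path: do the minimum, queue the rest for later.
    work_queue.put(order_id)
    return f"order {order_id} accepted"

t = threading.Thread(target=worker, daemon=True)
t.start()
for oid in (1, 2, 3):
    handle_order(oid)
work_queue.join()     # wait until the deferred work has drained
work_queue.put(None)  # stop the worker
t.join()
print(processed)      # [1, 2, 3]
```

The request path only pays for an enqueue; the queue also absorbs load spikes, since bursts of requests become a backlog the worker smooths out over time.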
13. Caching
• Save the results of earlier work nearby, where they are handy to use again later
• Apply in front of anything that’s time-consuming
• Easiest to apply from left (client) to right (storage)
• Simple strategies can be really effective (e.g. EF: dump the whole cache on update)
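The "dump all on update" strategy mentioned above can be sketched simply: cache every lookup, and on any write throw the entire cache away rather than tracking which entries went stale. A minimal Python illustration (class and names are my own, not from the talk):

```python
class DumpAllCache:
    """Cache results of an expensive lookup; clear everything on any update."""

    def __init__(self, load):
        self._load = load   # expensive authoritative lookup function
        self._cache = {}
        self.misses = 0

    def get(self, key):
        if key not in self._cache:
            self.misses += 1
            self._cache[key] = self._load(key)
        return self._cache[key]

    def invalidate(self):
        # Crude but correct by construction: dump everything on update.
        self._cache.clear()

db = {"sku-1": 10.0}
cache = DumpAllCache(lambda k: db[k])
cache.get("sku-1")
cache.get("sku-1")   # second read is served from cache
db["sku-1"] = 12.0
cache.invalidate()   # the write path dumps the whole cache
print(cache.get("sku-1"), cache.misses)   # 12.0 2
```

Because apps ask a lot of repeating questions and answers rarely change, even this blunt invalidation policy can eliminate most trips to authoritative storage.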
14. Why Caching?
• Loading the world is impractical
• Apps ask a lot of repeating questions.
– Stateless applications even more so
• Answers don’t change often
• Authoritative information is expensive
15. Distribution
• Distribute requests across multiple systems
• Classic web “Scale Out” approach
• The less state held, the easier it is to distribute work
– Distributed database = hard
– Distributed static content server = easy
• Request routing for distribution can serve other availability purposes
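The scale-out idea can be sketched as identical stateless nodes behind a round-robin router: because no node holds state, any node can serve any request. A minimal Python illustration (node names are hypothetical):

```python
import itertools

class Node:
    """A stateless worker; it keeps a counter only to show the distribution."""

    def __init__(self, name):
        self.name = name
        self.handled = 0

    def handle(self, request):
        self.handled += 1
        return f"{self.name}: {request}"

nodes = [Node("web-1"), Node("web-2"), Node("web-3")]
ring = itertools.cycle(nodes)

def route(request):
    # Round-robin routing: any node can serve any request.
    return next(ring).handle(request)

for i in range(6):
    route(f"GET /page/{i}")
print([n.handled for n in nodes])   # [2, 2, 2]
```

A real load balancer would add health checks and weighting, but the core property is the same: adding capacity is just adding another entry to the node list.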
16. Consistency
• The degree to which all parties observe the same state of the system at the same time
• Scaling inevitably requires compromise
– Absolute consistency forces one source of the truth and requires extensive locking to ensure parties agree
– The real world doesn’t require the consistency we tend to demand of our systems
17. Consistency Challenges
• Singleton Data Structures (order numbers, …)
• State held between the endpoints of a process
• Consistent results of queries across partitioned datasets
18. Typical Application
[Diagram: Client (Web Browser) → Server (Web Server) → Storage (Database)]
Contention points along the path: Session State, SSL Session, Log Contention, Memory Allocation/GC, Network Sockets, Request Queue, Transaction Isolation, Reader/Writer Locks, Singleton Data Structures
20. Distribution
• Session State and Identity need to be factored out
• Partition (Sticky Session) first, then stateless nodes
[Diagram: multiple Clients (Web Browsers) → multiple Servers (Web Servers) → shared Storage (Database)]
21. Partitioned Storage Zones
[Diagram: multiple Clients (Web Browsers) → Servers (Web Servers) grouped into zones, each zone with its own Storage (Database)]
22. Partitioned Storage Intra-Zone
[Diagram: multiple Clients (Web Browsers) → Servers (Web Servers) → separate storage partitions for Orders (e.g. Customer B), Products, and Inventory]
23. Asynchronous Processing
[Diagram: Servers (Web Servers) enqueue work into an Order Queue; an Order Processing Server drains the queue and updates the Orders, Products, and Inventory stores]
24. Fallacies of Distributed Computing
• The network is reliable
• Latency is zero
• Bandwidth is infinite
• The network is secure
• Topology doesn’t change
• There is one administrator
• Transport cost is zero
• The network is homogeneous
26. Fresh Problems: Partial Failures
1. Break the system into individual failure zones
2. Monitor each instance of each zone for problems
3. Route around bad instances
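The three steps above can be sketched as a router that only sends traffic to instances passing their health check. A minimal Python illustration; the `healthy` flag stands in for a real probe (e.g. an HTTP GET against a health endpoint):

```python
class Instance:
    """One deployed copy of a failure zone."""

    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy

    def health_check(self):
        # Real systems would probe the instance (e.g. GET /health).
        return self.healthy

def route(instances, request):
    # Step 3: route around bad instances by picking the first live one.
    for inst in instances:
        if inst.health_check():
            return f"{inst.name} served {request}"
    raise RuntimeError("no healthy instances in this zone")

zone = [Instance("web-1", healthy=False), Instance("web-2")]
print(route(zone, "GET /"))   # web-2 served GET /
```

The key design point is that partial failure becomes survivable only once the system is split into independently monitorable, independently routable pieces.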
28. Fresh Problems: Upgrades
[Diagram: the partitioned-zones layout from slide 21: multiple Clients (Web Browsers) → Servers (Web Servers) in zones, each zone with its own Storage (Database)]
29. Fresh Problems: Upgrades
1. Break the system into individual upgrade zones
2. Upgrade each zone – Drain & Stop, Upgrade, Verify
3. Cut traffic over to updated zones
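The zone-by-zone rolling upgrade above can be sketched as a simple loop. This is illustrative only; a real deployment would drain in-flight connections and verify each zone with smoke tests before restoring traffic:

```python
class Zone:
    """An independently upgradable slice of the system."""

    def __init__(self, name):
        self.name = name
        self.version = "v1"
        self.in_rotation = True

def rolling_upgrade(zones, new_version, verify):
    for zone in zones:
        zone.in_rotation = False      # drain & stop: no new traffic to this zone
        zone.version = new_version    # upgrade
        if not verify(zone):          # verify before restoring traffic
            raise RuntimeError(f"{zone.name} failed verification")
        zone.in_rotation = True       # cut traffic back over to the updated zone

zones = [Zone("zone-a"), Zone("zone-b")]
rolling_upgrade(zones, "v2", verify=lambda z: z.version == "v2")
print([(z.name, z.version, z.in_rotation) for z in zones])
```

Because only one zone is out of rotation at a time, the system keeps serving throughout the upgrade; the cost is a window where zones run mixed versions, which ties back to the deferred-failure and versioning complications noted in the speaker notes.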
What level of scaling are we talking about? Scaling is the ability to cope and perform under an increasing workload.
This is VISITORS per DAY: Microsoft.com 60M; Twitter.com 35M; Amazon.com 15M; Target.com 2M; DevExpress.com & Telerik.com 25K; Hanselman.com 12K; Gibraltar Software 1K
THIS IS NOT ABOUT ASYNC FOR FASTER PERCEIVED PERFORMANCE
Improve response under load. Do only the work you have to: up to 95% of the work on the typical site can be pulled from cache.
Add a reverse proxy (load balancer). Add additional middle-tier servers. Session state and identity need to be factored out. Partition (“sticky session”) first, then true load balancing with no state in the center.
Break down traffic by an easy-to-determine characteristic: customer, product category, etc. Add storage regions that are self-consistent. The exact mix of what data is in each container, and how you partition, can vary. Typically some parts, like Identity, may be shared. Cross-zone aggregation is slow, and a cross-zone coherency strategy is needed.
The middle tier routes storage requests based on an easy-to-determine characteristic. Consistency strategy complexity: reports may reflect delayed data, and different parties may not see the same view of the world.
Separate long-running, dangerous, or serialized tasks from general work. A workflow consistency strategy is required. Complications with deployment and versioning. Deferred failure scenarios.