Design Principles of Scalable, Distributed Systems


                                                 Tinniam V Ganesh
                                                 tvganesh.85@gmail.com



Distributed Systems
There are two classes of systems
- Monolithic
- Distributed




Traditional Client Server Architecture




[Figure: a Client communicating with a single Server]




Properties of Distributed Systems
Distributed Systems are made up of 100s of commodity servers
• No machine has complete information about the system state
• Machines make decisions based on local information
• Failure of one machine does not bring the whole system down
• There is no implicit assumption about a global clock




Characteristics of Distributed Systems

Distributed Systems are made up of
• Commodity Servers
• Large number of servers
• Servers crash, networks fail, and messages may be lost or never delivered
• New Servers can join without changing behavior




Examples of Distributed Systems
• Amazon’s e-retail store
• Google
• Yahoo
• Facebook
• Twitter
• YouTube
etc.




Key principles of distributed systems

•   Incremental scalability
•   Symmetry – All nodes are equal
•   Decentralization – No central control
•   Heterogeneity – work is distributed in proportion to the capabilities of each server




Transaction Processing System
•   Traditional databases have to ensure that transactions are consistent. A transaction
    must complete fully or not at all.
•   Successful transactions are committed.
•   Failed transactions are rolled back




ACID postulate
Transactions in traditional systems must have the following properties
Earlier Systems were designed for ACID properties
A – Atomic
C – Consistent
I – Isolated
D - Durable




ACID
Atomic – This property ensures that each transaction happens completely or not at all

Consistent - The transaction should maintain system invariants. For example, an internal
   bank transfer should leave the total amount of money in the bank the same before and
   after the transaction, even if the totals differ momentarily while the transfer is in flight

Isolated – Different transactions should be isolated or serializable. It must appear that
    transactions happen sequentially in some particular order

Durable – Once the transaction commits, its effects are permanent and survive subsequent
   failures.




Scaling
There are two types of scaling

Vertical scaling – scales up by adding a faster CPU, more memory and a larger database.
   It does not scale beyond a certain point
Horizontal scaling – scales out by adding more servers of similar capacity




System behavior on Scaling




[Figure: throughput (transactions per second) and response time plotted against load.]
Consistency and Replication

To increase reliability in the face of failures, data has to be replicated across multiple
    servers.
The problem with replicas is the need to keep them consistent




Reasons for Replication

Data is replicated in distributed systems for two reasons
- Reliability – ensuring that the data remains consistent across a majority of the replicas
   even when individual servers fail
- Performance – requests can be served from a replica closer to the user, which also
   provides geographical resiliency




Downside of Replication
•   Replication of data has several advantages, but the downside is the work of
    maintaining consistency
•   A modification to one copy makes it differ from the rest, and the update has to be
    propagated to all copies to ensure consistency




Synchronization
No machine has a view of the global system state

•   Problems in distributed systems:
•   How can processes synchronize?
•   Clocks on different machines will differ slightly
•   Is there a way to maintain a global view of the clock?
•   Can we order events causally?




Hypothetical situation
Consider a hypothetical situation with banks

 - A man deposits Rs 55,000/- at 10.00 am
 - The man withdraws Rs 20,000/- at 10.02 am
What happens if the two updates are applied in a different order?
- Operations must be idempotent. Idempotency means that an operation gives the same
  result no matter how many times it is performed.

eCommerce Site – Amazon
-add to shopping cart
-delete from shopping cart
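A minimal sketch of the idea, with a hypothetical shopping cart (adding or removing an item is naturally idempotent) and a counter (an increment is not, so a retried message double-counts):

```python
# Hypothetical Cart/Counter classes, purely for illustration.
class Cart:
    def __init__(self):
        self.items = set()

    def add(self, item):       # idempotent: applying it twice == applying it once
        self.items.add(item)

    def remove(self, item):    # idempotent: removing an absent item is a no-op
        self.items.discard(item)

class Counter:
    def __init__(self):
        self.value = 0

    def increment(self):       # NOT idempotent: a replayed message changes the result
        self.value += 1

cart = Cart()
cart.add("book"); cart.add("book")        # duplicate delivery is harmless
print(cart.items)                         # {'book'}

c = Counter()
c.increment(); c.increment()              # a retried increment double-counts
print(c.value)                            # 2
```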



Vector Clocks
Vector clocks are used to capture causality between different versions of the same
   object.
Amazon’s Dynamo uses vector clocks to reconcile different versions of an object and to
   determine the causal ordering of events.
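A minimal vector-clock sketch (illustrative only, not Dynamo's actual code): each version carries a counter per node, and comparing two clocks tells us whether one version causally descends from the other or whether they are concurrent and need reconciliation.

```python
# Vector clocks as plain dicts {node_name: counter}; node names are invented.
def increment(clock, node):
    clock = dict(clock)
    clock[node] = clock.get(node, 0) + 1
    return clock

def merge(a, b):
    """Clock to use after reconciling two versions: element-wise maximum."""
    return {n: max(a.get(n, 0), b.get(n, 0)) for n in set(a) | set(b)}

def descends(a, b):
    """True if the version with clock `a` causally follows (or equals) `b`."""
    return all(a.get(n, 0) >= c for n, c in b.items())

v1 = increment({}, "Sx")                   # write handled by node Sx -> {Sx: 1}
v2 = increment(v1, "Sy")                   # update of v1 at node Sy
v3 = increment(v1, "Sz")                   # concurrent update of v1 at node Sz

print(descends(v2, v1))                    # True  (v2 happened after v1)
print(descends(v2, v3), descends(v3, v2))  # False False -> concurrent, reconcile
print(merge(v2, v3))                       # {'Sx': 1, 'Sy': 1, 'Sz': 1}
```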




Vector Clocks
[Figure: clock values on three processes advancing at different rates, with incoming message timestamps either accepted as-is ('OK') or forcing the local clock to be adjusted forward ('Adjust').]
Dynamo’s reconciliation process




Problem with Relational Databases
Relational databases (RDBMS) give the user the ability to construct complex queries, but
   they do not scale well.
Problem
Performance deteriorates as the number of records reaches several million

Solution
To partition the database horizontally and distribute records across several servers.




No SQL Databases
•   Databases are horizontally partitioned
•   Simple queries based on gets() and sets()
•   Accesses are made on key/value pairs
•   Cannot do complex queries like joins
•   Database can contain several hundred million records




Databases that use Consistent Hashing
1. Cassandra
2. Amazon’s Dynamo
3. NoSQL
4. HBase
5. CouchDB
6. MongoDB




Hash Tables
  •   Records are distributed among many servers
  •   Distribution is based on keys, which are hashed
  •   Keys are 128 or 160 bits long
  •   Hash values fall into a range; the servers are visualized as lying on the circumference
      of a circle, moving clockwise.




Distributed Hash Table
•   Hashing a key determines which server it maps to; the servers are assumed to reside on
    the circumference of a circle
•   The highest key wraps around to the beginning of the circle
•   Movement around the circle is clockwise




Distributed Hash Table
An entity with key K falls under the jurisdiction of the node
  with the smallest id >= K

•   For example, suppose there are two nodes, one at position 50 and another at position 200.
•   A key/value pair whose key hash is 100 would go to node 200.
•   Another key whose hash is 30 would go to node 50 (see the sketch below).
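A minimal sketch of this rule (the ring size, hash function and helper names are invented for the example; the node positions 50 and 200 come from the slide):

```python
import bisect
import hashlib

RING_SIZE = 256        # illustrative; real systems use a 2**128 or 2**160 key space

def ring_hash(key):
    """Hash a key onto the ring."""
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % RING_SIZE

def lookup(node_positions, key_hash):
    """Return the node with the smallest id >= key_hash, wrapping around the circle."""
    nodes = sorted(node_positions)
    i = bisect.bisect_left(nodes, key_hash)
    return nodes[i % len(nodes)]               # past the highest node, wrap to the start

nodes = [50, 200]
print(lookup(nodes, 100))                      # 200
print(lookup(nodes, 30))                       # 50
print(lookup(nodes, 220))                      # 50 (wraps around the circle)
print(lookup(nodes, ring_hash("user:42")))     # whichever node owns that hash
```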




Consistent Hashing
A naïve approach with 8 nodes and 100 keys could use a simple modulo algorithm.
So key 18 would end up on node 2 and key 63 on node 7.
But how do we handle servers crashing or new servers joining the system?
Consistent hashing handles this issue




Consistent Hashing




Source: http://offthelip.org/
Distributed Hash Table




Consistent Hashing




                                                       Source: http://horicky.blogspot.in



Chord System
[Figure: a Chord ring showing the nodes and their finger tables, built from FTp[i] = succ(p + 2^(i-1)), and tracing how the key K = 26 is resolved around the ring.]
Process of determining node
To look up a key k, node p forwards the request to the node q with index j in p’s finger table
    such that
q = FTp[j] <= k < FTp[j+1]
To resolve k = 26, starting at node 1:
1. 26 > FT1[5] = 18, hence the request is forwarded to node 18
2. FT18[2] <= 26 < FT18[3], hence it is forwarded to node 20
3. FT20[1] <= 26 < FT20[2], hence it is forwarded to node 21
4. At node 21, 21 < 26 <= FT21[1] = 28, hence node 28 is responsible for key 26
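The sketch below reproduces this walk. It assumes the node set shown in the figure (1, 4, 9, 11, 14, 18, 20, 21 and 28 on a ring of size 2^5, the standard textbook example), builds each finger table from FTp[i] = succ(p + 2^(i-1)), and routes with the classic "closest preceding finger" rule:

```python
import bisect

M = 5
RING = 2 ** M
NODES = sorted([1, 4, 9, 11, 14, 18, 20, 21, 28])   # assumed from the figure

def succ(x):
    """First node at or after ring position x, wrapping around the ring."""
    i = bisect.bisect_left(NODES, x % RING)
    return NODES[i % len(NODES)]

# FTp[i] = succ(p + 2**(i-1)); FTp[1] is p's immediate successor.
FINGERS = {p: [succ(p + 2 ** (i - 1)) for i in range(1, M + 1)] for p in NODES}

def in_open(x, a, b):        # x lies in the clockwise interval (a, b)
    return (a < x < b) if a < b else (x > a or x < b)

def in_half(x, a, b):        # x lies in the clockwise interval (a, b]
    return (a < x <= b) if a < b else (x > a or x <= b)

def lookup(start, key):
    """Return (node responsible for key, route taken)."""
    p, path = start, [start]
    while not in_half(key, p, FINGERS[p][0]):        # key not yet owned by p's successor
        # forward to the closest finger preceding the key (scan the table backwards)
        p = next(f for f in reversed(FINGERS[p]) if in_open(f, p, key))
        path.append(p)
    return FINGERS[p][0], path

print(lookup(1, 26))   # (28, [1, 18, 20, 21]) -> node 28 is responsible for key 26
```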




Hashing efficiency of Chord System
The Chord system reaches the node responsible for a key in O(log n) steps.
There are other hashing schemes that reach it in O(1) steps, at the cost of each node
   keeping a larger local routing table.




Joining the Chord System

Suppose node p wants to join. It performs the following steps:
- It requests a lookup for succ(p+1)
- It inserts itself before this node




Maintaining consistency
Periodically each node checks its successor’s predecessor.
Node ‘q’ contacts succ(q+1) and requests it to return pred(succ(q+1))
If q = pred(succ(q+1)) then nothing has changed. If another value is returned, then q knows
     that a new node ‘p’ with q < p < succ(q+1) has joined the system, so q updates its
     finger table, setting FTq[1] = p




CAP Theorem
Databases that are designed based on ACID properties have poor availability.

Postulated by Eric Brewer of the University of California, Berkeley
At most two of the following three properties can be guaranteed simultaneously in a
   distributed system
C – Consistency
A – Availability
P – Partition Tolerance




CAP Theorem
•   Consistency – repeated reads return the same (latest) value
•   Availability – the system continues to respond even when servers crash
•   Partition Tolerance – the system continues to operate even when network failures
    partition the servers from one another




Real world examples of CAP Theorem

Amazon’s Dynamo chooses availability over consistency. Dynamo implements
   eventual consistency, where data becomes consistent over time
Google’s BigTable chooses consistency over availability

Consistency, Partition Tolerance (CP)
BigTable
HBase

Availability, Partition Tolerance (AP)
Dynamo
Voldemort
Cassandra



Consistency issues
Many commercial systems that replicate data perform synchronous replica coordination in
    order to provide strongly consistent data.
The downside of this approach is poor availability
These systems treat the data as unavailable if they are not able to ensure its
    consistency
For example:
If data is replicated on 5 servers and an update needs to be made, then the following
    has to be done
- Update all 5 copies
- Ensure all of them are successful
- If one of them fails roll back the updates on the other 4

If a read is done while one of the servers has failed, a strongly consistent system returns
     “data unavailable”, since correctness cannot be guaranteed.
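A toy sketch of this all-or-nothing update (the Replica class is hypothetical and stands in for a real replication protocol):

```python
class Replica:
    """Hypothetical replica that can be marked down to simulate a failure."""
    def __init__(self, name, healthy=True):
        self.name, self.healthy, self.data = name, healthy, {}

    def write(self, key, value):
        if not self.healthy:
            raise IOError(f"{self.name} is down")
        old = self.data.get(key)
        self.data[key] = value
        return old                               # returned so the caller can roll back

def replicated_write(replicas, key, value):
    applied = []                                 # (replica, previous value) pairs
    try:
        for r in replicas:
            applied.append((r, r.write(key, value)))
        return True                              # all copies updated successfully
    except IOError:
        for r, old in applied:                   # one failure -> undo the others
            if old is None:
                r.data.pop(key, None)
            else:
                r.data[key] = old
        return False

servers = [Replica(f"s{i}") for i in range(5)]
servers[3].healthy = False                       # one of the 5 replicas has failed
print(replicated_write(servers, "x", 42))        # False -> report "data unavailable"
print(any("x" in r.data for r in servers))       # False -> no partial update is visible
```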

Quorum Protocol
To maintain consistency, data is replicated on many servers.
For example, let us assume there are N servers in the system.
Typical algorithms require a write to reach more than N/2 servers, i.e. at least N/2 + 1.
Usually Nw > N/2
A write is successful only if it has been successfully committed on N/2 + 1 servers
This is known as the write quorum




Quorum Protocol
Similarly, reads are done from some number Nr of server replicas. This is
   known as the read quorum
Reads from the different servers are compared
A consistent design requires that Nw + Nr > N
With this you are assured of reading your own writes
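A toy sketch of the quorum rule (versions stand in for timestamps or vector clocks; the replica layout is invented): with N = 5, W = 3 and R = 3 we have W + R > N, so every read quorum overlaps every write quorum and sees the latest write.

```python
import random

N, W, R = 5, 3, 3
replicas = [{"version": 0, "value": None} for _ in range(N)]

def quorum_write(value, version):
    """Write to any W replicas; the write quorum is reached when all W succeed."""
    for i in random.sample(range(N), W):
        replicas[i] = {"version": version, "value": value}
    return True

def quorum_read():
    """Read from any R replicas and keep the value with the highest version."""
    chosen = random.sample(range(N), R)
    freshest = max((replicas[i] for i in chosen), key=lambda r: r["version"])
    return freshest["value"]

quorum_write("v1", version=1)
print(quorum_read())   # 'v1' -- guaranteed, because W + R > N forces an overlap
```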




Election Algorithm
Many distributed systems have one process that acts as a coordinator. If the
   coordinator crashes then an election takes place to identify the new
   coordinator
1. P sends an ELECTION message to all higher-numbered processes
2. If no one responds, P becomes the coordinator
3. If a higher-numbered process answers, it takes over the election
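These steps are essentially the Bully election algorithm. A minimal sketch (the `alive` predicate is a stand-in for message timeouts in a real system):

```python
def elect(p, processes, alive):
    """Process p starts an election; returns the id of the new coordinator."""
    higher = [q for q in processes if q > p and alive(q)]
    if not higher:
        return p                                     # nobody higher answered: p wins
    return elect(min(higher), processes, alive)      # a higher process takes over

processes = [1, 2, 3, 4, 5]
alive = lambda q: q != 5                             # the old coordinator (5) has crashed
print(elect(2, processes, alive))                    # 4 becomes the new coordinator
```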




Traditional Fault Tolerance
Traditional systems use redundancy to handle failures and to be fault tolerant, as shown
   below




[Figure: two redundant Active/Standby server pairs.]



Process Resilience
Handling failures in distributed systems is much more difficult because no machine has a
   view of the global state.




Byzantine Failures
Byzantine refers to the Byzantine Generals Problem, in which an army’s generals must
   unanimously decide whether to attack another army. The problem is complicated because
   the generals can only communicate through messengers and some of them may be traitors



Distributed systems are prone to a class of failures known as Byzantine failures
Omission failures – disk crashes, network congestion, failure to receive a request, etc.
Commission failures – the server behaves incorrectly, e.g. by corrupting its local
    state

Solution: to handle Byzantine failures where k processes are faulty, have a minimum of
    2k+1 processes, so that we are still left with k+1 correct replies even when k
    processes behave incorrectly
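A sketch of the 2k+1 arithmetic (the replies and the value of k are illustrative): with k faulty replies out of 2k + 1, the k + 1 correct replies still form a majority.

```python
from collections import Counter

def majority_reply(replies):
    """Return the value reported by a strict majority of replies, or None."""
    value, count = Counter(replies).most_common(1)[0]
    return value if count > len(replies) // 2 else None

k = 1                                              # faulty processes tolerated
replies = ["commit"] * (k + 1) + ["wrong"] * k     # 2k + 1 replies in total
print(majority_reply(replies))                     # 'commit' -- faulty replies are outvoted
```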



Checkpointing
In fault-tolerant distributed computing, backward error recovery requires that the
    system save its state at periodic intervals. We need to create a consistent
    global state, called a distributed snapshot.

In a distributed snapshot if a process P has recorded the receipt of a message then
    there should be a process Q that has sent a corresponding message.

Each process saves its state from time to time.
To recover we need to construct a consistent global state from these local states




Gossip Protocol
Used to handle server crashes and servers joining the system
Changes to the distributed system, such as membership changes, are spread in a
   manner similar to gossiping
- A server picks another server at random and sends it a message about a
   server crash or a server joining
- If the receiver has already received this message, the message is dropped
- The receiving server similarly gossips to other servers, and the system soon
   reaches a steady state
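A toy push-gossip simulation (the server count, fan-out and random seed are arbitrary): each informed server repeatedly tells a random peer about the membership change until everyone has heard it, which typically takes on the order of log(n) rounds.

```python
import random

def gossip_rounds(num_servers, fanout=1, seed=42):
    """Simulate rounds of push gossip until all servers know about a change."""
    rng = random.Random(seed)
    servers = list(range(num_servers))
    informed = {0}                                   # server 0 observed the change
    rounds = 0
    while len(informed) < num_servers:
        rounds += 1
        for s in list(informed):                     # every informed server gossips...
            for peer in rng.sample(servers, fanout): # ...to randomly chosen peers
                informed.add(peer)                   # duplicate messages are simply ignored
    return rounds

print(gossip_rounds(20))    # a handful of rounds, roughly O(log n)
```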




Sloppy Quorum
The quorum protocol is applied to the first N healthy nodes found while walking clockwise
    around the ring, rather than strictly to the first N nodes.

Data meant for node A is sent to node D if A is temporarily down.
Node D keeps a hinted handoff in its metadata and updates node A once A is back up.
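A minimal hinted-handoff sketch (the node names, preference list and data structures are invented for the example): a write intended for the down node A lands on a stand-in node, which records a hint and delivers it when A recovers.

```python
nodes = {
    "A": {"up": False, "data": {}, "hints": []},   # A is temporarily down
    "B": {"up": True,  "data": {}, "hints": []},
    "C": {"up": True,  "data": {}, "hints": []},
    "D": {"up": True,  "data": {}, "hints": []},
}
preference_list = ["A", "B", "C", "D"]             # ring order for this key

def sloppy_write(key, value, n=3):
    """Write to the first n healthy nodes; stand-ins remember the intended owner."""
    written = 0
    for name in preference_list:
        if written == n:
            break
        node = nodes[name]
        if not node["up"]:
            continue                               # skip the down node, keep walking
        node["data"][key] = value
        if name not in preference_list[:n]:        # this node is standing in for A
            node["hints"].append(("A", key, value))
        written += 1

def hinted_handoff():
    """When A comes back up, stand-ins deliver the writes they held for it."""
    nodes["A"]["up"] = True
    for node in nodes.values():
        for owner, key, value in node["hints"]:
            nodes[owner]["data"][key] = value
        node["hints"].clear()

sloppy_write("cart:42", ["book"])
hinted_handoff()
print(nodes["A"]["data"])    # {'cart:42': ['book']} -- A catches up after recovery
```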




Thank You !



                             Tinniam V Ganesh
                             tvganesh.85@gmail.com
                             Read my blogs: http://gigadom.wordpress.com/

                                                http://savvydom.wordpress.com/




