CHORD: A SCALABLE PEER-TO-PEER
LOOKUP SERVICE FOR INTERNET
APPLICATIONS

 Ion Stoica                 Robert Morris
 David Liben-Nowell         David R. Karger
 M. Frans Kaashoek          Frank Dabek
 Hari Balakrishnan

                      Presented by:
                 Chaitanya Pratap
                MCA (4th Semester)
         South Asian University, New Delhi.
INTRODUCTION
 A fundamental problem that confronts peer-to-peer
  applications is to efficiently locate the node that
  stores a particular data item.
 Chord provides support for just one operation:
  given a key, it maps the key onto a node.
 Data location can be easily implemented on top of
  Chord by associating a key with each data
  item, and storing the key/data item pair at the node
  to which the key maps.
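Data location on top of Chord can be sketched in a few lines. The toy Python below is not from the slides: `nodes`, `put`, `get`, and `lookup` are illustrative names, and a sorted dict of node IDs stands in for the ring. It hashes each item name into the identifier space and stores the key/data pair at the node the key maps to:

```python
import hashlib

M = 8  # identifier space is 0 .. 2^M - 1

def chord_key(name: str) -> int:
    """Hash an item name into the M-bit Chord identifier space."""
    return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big") % 2 ** M

# Toy "ring": each node ID maps to that node's local key/value store.
nodes = {32: {}, 90: {}, 140: {}, 200: {}}

def lookup(key: int) -> int:
    """Map a key onto the first node at or clockwise after it on the ring."""
    return next((n for n in sorted(nodes) if n >= key), min(nodes))

def put(name: str, value) -> None:
    """Store the key/data pair at the node the key maps to."""
    nodes[lookup(chord_key(name))][name] = value

def get(name: str):
    return nodes[lookup(chord_key(name))].get(name)

put("song.mp3", b"audio bytes")
assert get("song.mp3") == b"audio bytes"
```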
SYSTEM MODEL (FEATURES)
 Load balance: Chord acts as a distributed hash
  function, spreading keys evenly over the nodes. This
  provides a degree of natural load balance.
 Decentralization: Chord is fully distributed: no
  node is more important than any other. This
  improves robustness.
CONT…
 Scalability: The cost of a Chord lookup grows as
  the log of the number of nodes, so even very large
  systems are feasible.
 Availability: Chord automatically adjusts its internal
  tables to reflect newly joined nodes as well as node
  failures, ensuring that the node responsible for a
  key can always be found.
 Flexible naming: Chord places no constraints on
  the structure of the keys it looks up: the Chord key-
  space is flat.
THE BASE CHORD PROTOCOL
   The Chord protocol specifies how to find the
    locations of keys, how new nodes join the
    system, and how to recover from the failure (or
    planned departure) of existing nodes.
OVERVIEW
 Chord provides fast distributed computation of a
  hash function mapping keys to nodes responsible
  for them.
 When an N-th node joins (or leaves) the
  network, only an O(1/N) fraction of the keys is
  moved to a different location.
 A Chord node needs only a small amount of
  “routing” information about other nodes.
 In an N node network, each node maintains
  information only about O(log N) other nodes, and a
  lookup requires O(log N) messages.
 A join or leave requires O(log² N) messages.
CONSISTENT HASHING
 Consistent hashing is designed to let nodes enter
  and leave the network with minimal disruption.
 To maintain the consistent hashing mapping when a
  node n joins the network, certain keys previously
  assigned to n’s successor now become assigned to
  n.
 When node n leaves the network, all of its assigned
  keys are reassigned to n’s successor.
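A minimal sketch of this reassignment, assuming a sorted list of node IDs stands in for the ring (`successor` here is an illustrative helper, not Chord's RPC):

```python
from bisect import bisect_left

def successor(ring, key):
    """First node ID clockwise from key; ring is a sorted list of node IDs."""
    i = bisect_left(ring, key)
    return ring[i % len(ring)]          # wrap around past the last node

ring = [10, 40, 80]                     # node IDs on an 8-bit ring (0..255)
keys = [5, 25, 45, 200]

before = {k: successor(ring, k) for k in keys}   # 45 -> 80, 200 wraps to 10

# Node 50 joins: only keys previously held by 80 that now fall in (40, 50]
# become assigned to the new node.
ring = sorted(ring + [50])
after = {k: successor(ring, k) for k in keys}

moved = [k for k in keys if before[k] != after[k]]
print(moved)   # -> [45]
```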
SCALABLE KEY LOCATION
 This scheme has two important characteristics.
 First, each node stores information about only a
  small number of other nodes, and it knows more
  about nodes close to it on the identifier circle than
  about nodes farther away.
 Second, a node’s finger table generally does not
  contain enough information to determine the
  successor of an arbitrary key.
FINGER TABLE
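The finger table (shown as an image in the original slide) can be reconstructed from its definition in the paper: entry i of node n points to successor((n + 2^i) mod 2^M). A toy sketch on a 6-bit example ring (`build_fingers` and `ring` are illustrative names):

```python
M = 6                                            # 6-bit identifiers (0..63)
ring = [1, 8, 14, 21, 32, 38, 42, 48, 51, 56]    # sorted node IDs

def successor(key):
    """First node at or clockwise after key on the ring."""
    return next((n for n in ring if n >= key), ring[0])

def build_fingers(node):
    """finger[i] = successor((node + 2^i) mod 2^M), for i = 0 .. M-1."""
    return [successor((node + 2 ** i) % 2 ** M) for i in range(M)]

print(build_fingers(8))   # -> [14, 14, 14, 21, 32, 42]

# The table lets node 8 jump up to halfway around the ring in one hop,
# but it does not directly name the successor of an arbitrary key.
```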
NODE JOINS
 Initialize the predecessor and fingers of node n.
 Update the fingers and predecessors of existing
  nodes to reflect the addition of n.
 Notify the higher-layer software so that it can
  transfer state (e.g., values) associated with keys
  that node n is now responsible for.
IMPACT OF NODE JOINS ON
LOOKUPS
   Correctness
       If finger table entries are reasonably current
           Lookup finds the correct successor in O(log N) steps
       If successor pointers are correct but finger tables are
        incorrect
           Correct lookup but slower
       If incorrect successor pointers
           Lookup may fail
IMPACT OF NODE JOINS ON LOOKUPS
 Performance
     If stabilization is complete
         Lookup can be done in O(log N) time
     If stabilization is not complete
          Existing nodes’ finger tables may not reflect the new
           nodes
              Doesn’t significantly affect lookup speed
          Newly joined nodes can affect the lookup speed, if the
           new nodes’ IDs are in between the target and the target’s
           predecessor
             Lookup will have to be forwarded through the intervening
              nodes, one at a time
STABILIZATION AFTER JOIN
 n joins (between np and ns)
     predecessor = nil
     n acquires ns as its successor via some node n’
     n notifies ns that it is the new predecessor
     ns acquires n as its predecessor: pred(ns) = n (previously np)
 np runs stabilize
     np asks ns for its predecessor (now n)
     np acquires n as its successor: succ(np) = n (previously ns)
     np notifies n
     n acquires np as its predecessor
 All predecessor and successor pointers are now correct
 Fingers still need to be fixed, but old fingers will still work
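The pointer updates above can be sketched as a pair of stabilize/notify methods. This is a toy single-process model (`Node` and `in_open` are illustrative names; real Chord runs these as periodic remote calls):

```python
def in_open(x, a, b):
    """True if x lies in the open circular interval (a, b)."""
    return a < x < b if a < b else x > a or x < b

class Node:
    def __init__(self, ident):
        self.id = ident
        self.successor = self
        self.predecessor = None

    def stabilize(self):
        """Ask our successor for its predecessor; adopt it if it sits
        between us and the successor, then notify the successor."""
        x = self.successor.predecessor
        if x is not None and in_open(x.id, self.id, self.successor.id):
            self.successor = x
        self.successor.notify(self)

    def notify(self, cand):
        """cand believes it might be our predecessor."""
        if self.predecessor is None or in_open(
                cand.id, self.predecessor.id, self.id):
            self.predecessor = cand

# np(10) -> ns(50); node n(30) joins with ns as its successor.
np_, ns, n = Node(10), Node(50), Node(30)
np_.successor, ns.predecessor = ns, np_
n.successor = ns

n.stabilize()     # ns learns of n and adopts it as predecessor
np_.stabilize()   # np sees n via ns, adopts it as successor, notifies n
assert np_.successor is n and ns.predecessor is n and n.predecessor is np_
```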
SOURCE OF INCONSISTENCIES:
CONCURRENT OPERATIONS AND FAILURES


   The basic “stabilization” protocol keeps nodes’ successor
    pointers up to date, which is sufficient to guarantee
    correctness of lookups

   Those successor pointers can then be used to verify the finger
    table entries

   Every node runs stabilize periodically to find newly joined
    nodes




FAILURE RECOVERY
   Key step in failure recovery is maintaining correct successor pointers

   To help achieve this, each node maintains a successor-list of its r
    nearest successors on the ring

   If node n notices that its successor has failed, it replaces it with the
    first live entry in the list

   stabilize will correct finger table entries and successor-list entries
    pointing to the failed node

   Performance is sensitive to the frequency of node joins and leaves
    versus the frequency at which the stabilization protocol is invoked
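A sketch of the successor-list failover step (names are illustrative; a real node would also repopulate the shortened list from its new successor):

```python
R = 3   # keep r = 3 backup successors

class Node:
    def __init__(self, ident):
        self.id = ident
        self.alive = True
        self.successor_list = []        # r nearest successors on the ring

    @property
    def successor(self):
        return self.successor_list[0]

    def check_successor(self):
        """If the immediate successor has failed, fall back to the first
        live entry in the successor-list."""
        self.successor_list = [s for s in self.successor_list if s.alive]

a, b, c, d = Node(10), Node(20), Node(30), Node(40)
a.successor_list = [b, c, d][:R]

b.alive = False           # a's immediate successor fails
a.check_successor()
assert a.successor is c   # first live entry takes over
```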




SIMULATION
   Protocol Simulator:
     The Chord protocol can be implemented in an iterative
      or recursive style.
     In the iterative style, a node resolving a lookup initiates
      all communication: it asks a series of nodes for
      information from their finger tables, each time moving
      closer on the Chord ring to the desired successor.
     In the recursive style, each intermediate node forwards
      a request to the next node until it reaches the
      successor.
     The simulator implements the protocols in an iterative
      style.
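The iterative style can be sketched as a loop driven entirely by the querier (a toy model on a 6-bit example ring; in the real protocol each `fingers[n]` access would be a message asking node n for its finger table):

```python
M = 6
ring = [1, 8, 14, 21, 32, 38, 42, 48, 51, 56]

def succ(key):
    return next((n for n in ring if n >= key), ring[0])

# Precompute every node's finger table and immediate successor.
fingers = {n: [succ((n + 2 ** i) % 2 ** M) for i in range(M)] for n in ring}
successor = {n: succ((n + 1) % 2 ** M) for n in ring}

def in_open(x, a, b):
    """x in the open circular interval (a, b)."""
    return a < x < b if a < b else x > a or x < b

def in_half_open(x, a, b):
    """x in the half-open circular interval (a, b]."""
    return a < x <= b if a < b else x > a or x <= b

def iterative_lookup(start, key):
    """The querier drives each hop: it asks the current node for the
    finger closest to key, then contacts that node itself."""
    n, hops = start, [start]
    while not in_half_open(key, n, successor[n]):
        # Closest preceding finger; fall back to the successor if none.
        n = next((f for f in reversed(fingers[n]) if in_open(f, n, key)),
                 successor[n])
        hops.append(n)
    return successor[n], hops

print(iterative_lookup(8, 54))   # -> (56, [8, 42, 51])
```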
EXPERIMENTAL RESULTS


   Latency grows slowly with the total
    number of nodes



   Path length for lookups is about ½ log₂ N



   Chord is robust in the face of multiple
    node failures
CONCLUSION
 Efficient location of the node that stores a
  desired data item is a fundamental problem in
  P2P networks
 The Chord protocol solves it in an efficient,
  decentralized manner
    Routing information: O(log N) nodes
    Lookup: O(log N) messages
    Update: O(log² N) messages
 It also adapts dynamically to topology
  changes introduced during the run
FUTURE WORK
   Using Chord to detect and heal partitions whose
    nodes know of each other.
      Every node could know of the same small set of initial
       nodes
      Nodes could maintain long-term memory of a random
       set of nodes they have encountered earlier
   A malicious set of Chord participants could present an
    incorrect view of the Chord ring
        Node n periodically asks other nodes to do a lookup for
         n
REFERENCES
   Ion Stoica, Robert Morris, David Karger, M. Frans
    Kaashoek, and Hari Balakrishnan, “Chord: A Scalable
    Peer-to-Peer Lookup Service for Internet Applications.”
