Chord: A scalable peer-to-peer lookup service for internet applications
Tom Faulhaber

tom@infolace.com

Papers We Love SF, August 2016

My presentation on "Chord: A scalable peer-to-peer lookup service for internet applications" by Stoica, Morris, et al., SIGCOMM 2001

Chord Presentation at Papers We Love SF, August 2016

  1. Chord: A scalable peer-to-peer lookup service for internet applications. Tom Faulhaber, tom@infolace.com. Papers We Love SF, August 2016.
  2. Chord is a completely peer-to-peer distributed key management system that works under dynamic membership churn.
  3. Context
  4. Idea 1: Consistent Hashing
  5. Consistent Hashing • Map keys to an m-bit hash, e.g. with SHA-1. • Construct a ring with operations performed mod 2^m.
  6. Consistent Hashing • Map keys to an m-bit hash, e.g. with SHA-1. • Construct a ring with operations performed mod 2^m. • For example, take m = 3: this gives us 2^3 = 8 separate addresses.
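A minimal Python sketch of the ring mapping described in slides 5 and 6: a key (or a node's address) is hashed with SHA-1 and reduced mod 2^m. The key strings and the helper name `ring_id` are illustrative only; the m = 3 value mirrors the slide's example.

```python
import hashlib

M = 3  # the slide's example: 2**3 = 8 distinct ring positions

def ring_id(key: bytes, m: int = M) -> int:
    """Map a key (or a node's address) onto the identifier ring [0, 2**m)."""
    digest = hashlib.sha1(key).digest()
    return int.from_bytes(digest, "big") % (2 ** m)

print(ring_id(b"some-key"))       # an integer in 0..7 for m = 3
print(ring_id(b"10.0.0.1:4000"))  # node addresses land on the same ring
```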
  7. Nodes and Keys • Each node in the network has an address, typically addr = hash(ip). • We define succ(k), the successor of k, as min(n) such that n ≥ k (mod 2^m). • Key k is stored at node succ(k). • Each node n knows n' = succ(n + 1 mod 2^m). • succ(k) alone gives O(N) lookup performance, where N is the number of nodes.
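The O(N) bound on slide 7 comes from resolving succ(k) by walking node ids around the ring. A small illustrative Python sketch (a centralized toy with a global list of node ids, not the distributed protocol; the ids are the example layout used later in the deck):

```python
M = 6
RING = 2 ** M
NODE_IDS = sorted([7, 16, 42, 44, 50, 52, 3, 4])  # the deck's example layout

def succ(k: int) -> int:
    """First node id at or after k, wrapping around the ring: O(N) in the
    number of nodes, since in the worst case every id is examined."""
    for n in NODE_IDS:
        if n >= k % RING:
            return n
    return NODE_IDS[0]  # wrapped past the largest id

assert succ(51) == 52  # matches the lookup example later in the deck
assert succ(60) == 3   # wraps around to the smallest id
```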
  8. Idea 2: Finger Tables
  9. Finger tables • To move from O(N) to O(log N), Chord uses a “finger table” to track nodes around the ring. • Fundamental insight: dense information nearby, sparse information far away. • Table defined by: finger[k].start = (n + 2^(k-1)) mod 2^m for 1 ≤ k ≤ m; finger[k].interval = [finger[k].start, finger[k+1].start); finger[k].node = succ(finger[k].start). • Also track successor = finger[1].node and predecessor.
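A Python sketch of the finger-table definition on slide 9; `resolve_succ` stands in for whatever mechanism resolves a ring id to the responsible node, and the dict layout is mine, not the paper's.

```python
M = 6
RING = 2 ** M

def build_fingers(n: int, resolve_succ) -> list[dict]:
    """finger[k].start = (n + 2**(k-1)) mod 2**m, with the half-open interval
    [finger[k].start, finger[k+1].start) and node = succ(finger[k].start)."""
    fingers = []
    for k in range(1, M + 1):
        start = (n + 2 ** (k - 1)) % RING
        next_start = (n + 2 ** (k % M)) % RING  # finger[k+1].start, wrapping to finger[1]
        fingers.append({"start": start, "interval": (start, next_start),
                        "node": resolve_succ(start)})
    return fingers
```

For node α = 7 in the example layout that follows, this reproduces the "View from α" table; the slide's inclusive "end" column is one less than the interval's exclusive upper bound.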
  10. Example Layout: m = 6, 2^m = 64. Node locations: α 7, β 16, γ 42, δ 44, ε 50, ζ 52, η 3, θ 4. This table does not exist!
  11. The View from α. Finger table (k, start, end, node): (1, 8, 8, β), (2, 9, 10, β), (3, 11, 14, β), (4, 15, 22, β), (5, 23, 38, γ), (6, 39, 7, γ). Starting from α, retrieve key 51. First step: ask γ.
  12. The View from γ. Finger table (k, start, end, node): (1, 43, 43, δ), (2, 44, 45, δ), (3, 46, 49, ε), (4, 50, 57, ε), (5, 58, 9, η), (6, 10, 42, β). Second step: ask ε.
  13. The View from ε. Finger table (k, start, end, node): (1, 51, 51, ζ), (2, 52, 53, ζ), (3, 54, 57, η), (4, 58, 1, η), (5, 2, 17, η), (6, 18, 50, γ). Third step: ask ζ. At this point succ(51) = ζ, so ζ will have the key.
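Slides 11 through 13 trace Chord's hop-by-hop lookup: each node forwards the query to its closest finger that precedes the key. A compact, self-contained Python sketch of that routing on the same example layout (a centralized simplification, not the paper's distributed pseudocode):

```python
M, RING = 6, 64
IDS = sorted([3, 4, 7, 16, 42, 44, 50, 52])           # the example layout

def succ(k):                       # first node id clockwise from k
    return next((n for n in IDS if n >= k % RING), IDS[0])

def between(x, a, b):              # x in the ring interval (a, b]
    a, b, x = a % RING, b % RING, x % RING
    return (a < x <= b) if a < b else (x > a or x <= b)

def fingers(n):                    # finger[k].node for k = 1..m
    return [succ((n + 2 ** (k - 1)) % RING) for k in range(1, M + 1)]

def lookup(start, key):
    """Hop to the closest preceding finger until the key falls between the
    current node and its successor."""
    node, hops = start, [start]
    while not between(key, node, succ(node + 1)):
        node = next((f for f in reversed(fingers(node))
                     if between(f, node, key - 1)), succ(node + 1))
        hops.append(node)
    return succ(key), hops

print(lookup(7, 51))   # (52, [7, 42, 50]): α asks γ, γ asks ε, key lives at ζ
```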
  14. Idea 3: Handling Churn
  15. Joining the network. Once a node has assigned itself an id, n', it does 3 things: 1. Builds its finger table and predecessor, with entries (k, start, node): (1, n' + 1, succ(n' + 1)), (2, n' + 2, succ(n' + 2)), (3, n' + 4, succ(n' + 4)), (4, n' + 8, succ(n' + 8)), (5, n' + 16, succ(n' + 16)), ….
  16. Joining the network. Once a node has assigned itself an id, n', it does 3 things: 1. Builds its finger table and predecessor. 2. Updates other nodes whose finger tables should now point to n'. 3. Notifies upper layers of software that they need to move keys.
  17. Joining the network. Once a node has assigned itself an id, n', it does 3 things: 1. Builds its finger table and predecessor. 2. Updates other nodes whose finger tables should now point to n'. 3. Notifies upper layers of software that they need to move keys. Joins take O(log^2 N) messages; O(1/N) of the keys will be moved.
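The O(1/N) figure on slide 17 reflects the key hand-off in step 3: the new node takes from its successor only the keys that now fall between its predecessor and itself. A small sketch of that transfer (function and parameter names are mine; the ids reuse the m = 6 example layout):

```python
def keys_to_move(successor_keys, predecessor_id, new_id, ring=64):
    """Keys the successor hands to a newly joined node: those in the ring
    interval (predecessor_id, new_id]."""
    def between(x, a, b):
        a, b, x = a % ring, b % ring, x % ring
        return (a < x <= b) if a < b else (x > a or x <= b)
    return [k for k in successor_keys if between(k, predecessor_id, new_id)]

# If a node joins at id 48, its successor ε (50) hands over only the keys in
# (44, 48]; everything else stays put.
print(keys_to_move([45, 47, 49, 50], 44, 48))   # -> [45, 47]
```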
  18. Concurrency & Failure. Two basic mechanisms: 1. Every node periodically performs stabilization. 2. Each node maintains a successor list rather than a single successor. When a node fails, its keys are lost. Other mechanisms are used by higher levels to build resiliency, e.g. republishing or replication.
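A toy Python sketch of the two mechanisms on slide 18: periodic stabilization (adopt a closer successor if one has joined, then notify it) and a successor list to survive the failure of the immediate successor. The Node class and its fields are my own simplification, not the paper's pseudocode.

```python
class Node:
    def __init__(self, node_id, ring=64):
        self.id, self.ring = node_id, ring
        self.successor = self          # a node alone on the ring points at itself
        self.predecessor = None
        self.successor_list = []       # several successors, not just one
        self.alive = True

    def _between(self, x, a, b):       # x in the ring interval (a, b]
        a, b, x = a % self.ring, b % self.ring, x % self.ring
        return (a < x <= b) if a < b else (x > a or x <= b)

    def stabilize(self):
        """Run periodically: adopt a closer successor if one has appeared,
        then tell that successor about ourselves."""
        x = self.successor.predecessor
        if x and x.alive and self._between(x.id, self.id, self.successor.id):
            self.successor = x
        self.successor.notify(self)

    def notify(self, candidate):
        """Accept candidate as predecessor if it is closer than the current one."""
        if self.predecessor is None or self._between(
                candidate.id, self.predecessor.id, self.id):
            self.predecessor = candidate

    def live_successor(self):
        """When the immediate successor has failed, fall back to the next
        live entry in the successor list instead of losing the ring."""
        if self.successor.alive:
            return self.successor
        return next(s for s in self.successor_list if s.alive)
```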
  19. Related Work
  20. Related Work • Pastry • CAN • Kademlia • Tapestry
  21. Impact
  22. Impact • Research applications in domains such as distributed file systems, pub-sub, document sharing, search algorithms. • Basis for distributing data across nodes in systems like Cassandra without requiring a global index.
  23. The End!
