Distributed Shared Memory Systems
Arush Nagpal (101303034)
Ankit Gupta (101303023)
What is a Distributed System?
 A collection of independent computers, connected by a network, that appears to its users as a single coherent system
What is DSM?
 Distributed shared memory (DSM) implements the shared-memory model on top of a distributed system, which has no physical shared memory.
 The shared memory model provides a virtual address
space shared between all nodes.
 This overcomes the high cost of communication in distributed systems: DSM systems move data to the location of access.
[Diagram: three nodes (NODE 1, NODE 2, NODE 3), each with its own local memory and a memory mapping manager, all mapping into a single shared memory address space.]
Purpose Of DSM Research
 Building less expensive parallel machines
 Building larger parallel machines
 Eliminating the programming difficulty of Massively
Parallel Processing and Cluster architectures
Distributed Shared Memory Models
 Object Based DSM
 Variable Based DSM
 Page Based DSM
 Structured DSM
 Hardware Supported DSM
Object Based
 Probably the simplest way to implement DSM
 Shared data must be encapsulated in an object
 Shared data may only be accessed via the methods of the object (see the sketch below)
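A minimal sketch of the object-based idea, under stated assumptions: DSMRuntime and SharedCounter are hypothetical names, and the runtime here is a stand-in (a local lock) for the real locate/migrate/synchronize machinery. The only point being illustrated is that all shared state is encapsulated and every access goes through a method the runtime can interpose on.

```python
# Object-based DSM, toy sketch (hypothetical API, illustration only).
import threading


class DSMRuntime:
    """Stand-in for a DSM runtime: a real system would locate the object's
    current owner and migrate or synchronize it over the network."""

    def __init__(self):
        self._lock = threading.Lock()

    def acquire(self, obj_id):
        self._lock.acquire()     # placeholder for "bring the object here, exclusively"

    def release(self, obj_id):
        self._lock.release()     # placeholder for "publish updates / release ownership"


class SharedCounter:
    """Shared data is encapsulated; the only access path is through methods."""

    def __init__(self, runtime, obj_id):
        self._runtime, self._obj_id = runtime, obj_id
        self._value = 0          # encapsulated shared state

    def increment(self):
        self._runtime.acquire(self._obj_id)
        try:
            self._value += 1
        finally:
            self._runtime.release(self._obj_id)

    def read(self):
        self._runtime.acquire(self._obj_id)
        try:
            return self._value
        finally:
            self._runtime.release(self._obj_id)


if __name__ == "__main__":
    counter = SharedCounter(DSMRuntime(), obj_id="counter-1")
    counter.increment()
    print(counter.read())        # -> 1
```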
Variable Based
 Provides the finest distribution granularity (sharing at the level of individual variables)
 Closely integrated in the compiler
 May be hardware supported
Hardware Based DSM
 Uses hardware to eliminate software overhead
 May be hidden even from the operating system
 Usually provides sequential consistency
 May limit the size of the DSM system
Page Based DSM
 Builds on virtual memory and paging: a fault on a non-resident shared page triggers a fetch from a remote node (see the sketch below)
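A toy sketch of the page-based approach, illustration only: PageBasedDSM and remote_pages are hypothetical names, and the page fault that real systems take from the MMU is simulated here as a dictionary lookup followed by copying the whole page from a remote owner.

```python
# Page-based DSM, toy sketch (illustration only): access to a non-resident page
# raises a simulated "fault" and the whole page is fetched from a remote owner.

PAGE_SIZE = 4096


class PageBasedDSM:
    def __init__(self, node_id, remote_pages):
        self.node_id = node_id
        self.resident = {}                # page number -> bytearray (local copy)
        self.remote_pages = remote_pages  # stand-in for "ask the owner over the network"

    def _fault(self, page_no):
        # A real page-based DSM gets this fault from the MMU; here we simulate it.
        print(f"node {self.node_id}: page fault on page {page_no}, fetching remotely")
        self.resident[page_no] = bytearray(self.remote_pages[page_no])

    def read(self, addr):
        page_no, offset = divmod(addr, PAGE_SIZE)
        if page_no not in self.resident:
            self._fault(page_no)
        return self.resident[page_no][offset]

    def write(self, addr, value):
        page_no, offset = divmod(addr, PAGE_SIZE)
        if page_no not in self.resident:
            self._fault(page_no)          # a real system would also gain write ownership here
        self.resident[page_no][offset] = value


if __name__ == "__main__":
    remote = {0: bytes(PAGE_SIZE), 1: bytes(PAGE_SIZE)}
    dsm = PageBasedDSM(node_id=1, remote_pages=remote)
    dsm.write(4100, 7)                    # faults on page 1, then writes locally
    print(dsm.read(4100))                 # -> 7, served from the now-resident page
```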
Advantages of DSM
(Distributed Shared Memory)
 Data sharing is implicit, hiding data movement (as opposed to
‘Send’/‘Receive’ in message passing model)
 Passing data structures containing pointers is easier (in message passing
model data moves between different address spaces)
 Moving the entire block or object containing the referenced data takes advantage of locality of reference
 Less expensive to build than a tightly coupled multiprocessor system: off-the-shelf hardware, no expensive interface to shared physical memory
 Very large total physical memory for all nodes: Large programs can run more
efficiently
 Programs written for shared memory multiprocessors can be run on DSM
systems with minimum changes
Issues faced in development of DSM
 Granularity
 Structure of Shared memory
 Memory coherence and access synchronization
 Data location and access
 Replacement strategy
 Thrashing
Granularity
 Granularity is the amount of data sent with each
update
 If granularity is too small and a large amount of
contiguous data is updated, the overhead of
sending many small messages leads to less
efficiency
 If granularity is too large, a whole page (or more) would be sent for an update to a single byte, again reducing efficiency (see the worked example below)
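A back-of-the-envelope illustration of the trade-off, with assumed numbers (a 64-byte per-message header, 32-byte vs 4 KB sharing units); the figures are only meant to show how overhead shifts between "many small messages" and "large messages carrying unneeded data".

```python
# Granularity trade-off, worked example (assumed numbers, illustration only).

HEADER_BYTES = 64          # assumed fixed per-message overhead


def bytes_on_wire(update_bytes, unit_bytes):
    """Total bytes sent to propagate an update at a given granularity."""
    messages = -(-update_bytes // unit_bytes)   # ceiling division
    return messages * (HEADER_BYTES + unit_bytes), messages


# Case 1: a 64 KB contiguous update, fine (32 B) vs coarse (4 KB) units
print(bytes_on_wire(64 * 1024, 32))      # (196608, 2048): 2048 messages, ~3x the payload
print(bytes_on_wire(64 * 1024, 4096))    # (66560, 16):    16 messages, ~2% overhead

# Case 2: a single-byte update with a 4 KB unit: 4160 bytes moved for 1 useful byte
print(bytes_on_wire(1, 4096))            # (4160, 1)
```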
Structure of Shared Memory
 Structure refers to the layout of the shared data in
memory.
 Dependent on the type of applications that the DSM
system is intended to support.
 By data type
 As a database of shared items accessed associatively (e.g., a Linda-style tuple space; see the sketch below)
 No structuring (a flat linear array of words)
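One common reading of the "database" structuring is a Linda-style tuple space, where shared memory is a collection of tuples accessed by pattern rather than by address. The sketch below is a toy, in-process illustration of that idea under that assumption; TupleSpace, out, and rd are illustrative names, not an API from the slides.

```python
# "Database-style" structuring of shared memory, toy tuple-space sketch
# (illustration only): tuples are matched associatively, not addressed by location.

import threading


class TupleSpace:
    def __init__(self):
        self._tuples = []
        self._lock = threading.Lock()

    def out(self, *tup):
        """Deposit a tuple into the shared space."""
        with self._lock:
            self._tuples.append(tup)

    def rd(self, *pattern):
        """Read (without removing) the first tuple matching the pattern; None is a wildcard."""
        with self._lock:
            for tup in self._tuples:
                if len(tup) == len(pattern) and all(
                        p is None or p == v for p, v in zip(pattern, tup)):
                    return tup
        return None


space = TupleSpace()
space.out("temperature", "node-3", 21.5)
print(space.rd("temperature", None, None))   # ('temperature', 'node-3', 21.5)
```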
Replacement Strategy
 If the local memory of a node is full, a cache miss at that node requires not only fetching the accessed data block from a remote node but also a replacement.
 An existing data block must be evicted to make room for the new one.
- Example: LRU with access modes (sketched in code below)
Private (local) pages to be replaced before shared
ones
Private pages swapped to disk
Shared pages sent over network to owner
Read-only pages may be discarded (owners have a
copy)
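A sketch of "LRU with access modes" following the bullets above: private pages are replaced before shared ones, private victims go to local disk, shared victims go back to their owner, and read-only copies are simply discarded. LocalMemory and the mode names are illustrative, and the eviction order within the shared classes is an assumption.

```python
# Replacement strategy: LRU within access-mode classes (toy sketch, illustration only).

from collections import OrderedDict


class LocalMemory:
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()   # page_no -> mode; order approximates LRU -> MRU

    def touch(self, page_no, mode):
        """Record an access; evict first if a new page must enter a full memory."""
        if page_no in self.pages:
            self.pages.move_to_end(page_no)           # now most recently used
            return
        if len(self.pages) >= self.capacity:
            self._evict()
        self.pages[page_no] = mode

    def _evict(self):
        # Replace private pages before shared ones; within a class, pick the LRU page.
        for preferred in ("private", "shared-read-only", "shared"):
            for page_no, mode in self.pages.items():  # iterates LRU -> MRU
                if mode == preferred:
                    del self.pages[page_no]
                    self._dispose(page_no, mode)
                    return

    def _dispose(self, page_no, mode):
        if mode == "private":
            print(f"swap page {page_no} to local disk")
        elif mode == "shared-read-only":
            print(f"discard page {page_no} (the owner still has a copy)")
        else:
            print(f"send page {page_no} back to its owner over the network")


mem = LocalMemory(capacity=2)
mem.touch(1, "shared")
mem.touch(2, "private")
mem.touch(3, "shared-read-only")   # memory full: evicts page 2 (private, swapped to disk)
```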
Thrashing
 Thrashing occurs when data blocks move back and forth between nodes so frequently that network resources are exhausted, and more time is spent invalidating copies and sending updates than doing actual work.
 Based on system specifics, one should choose
write-update or write-invalidate to avoid
thrashing.
Memory Coherence and Access
Synchronization
 In a DSM system that allows replication of shared data
item, copies of shared data item may simultaneously be
available in the main memories of a number of nodes.
 The memory coherence problem deals with keeping a piece of shared data consistent when copies of it lie in the main memories of two or more nodes.
 DSM systems are based on
- Replicated shared data objects
- Concurrent access to data objects at many nodes
 Coherent memory: a mechanism that controls and synchronizes accesses is needed to maintain memory coherence
 Strict consistency: the value returned by a read operation is always the value of the most recent write to that location
 Write-invalidate: a write first invalidates all other copies
 Write-update: a write propagates the new value to every copy (both strategies are contrasted in the sketch after this list)
 Sequential consistency: A system is sequentially consistent if
- The result of any execution is the same as if the operations of all processors were executed in some sequential order, and
- The operations of each processor appear in this sequence in the order specified by its program
 Replication consistency:
- All copies of a memory location (replicas) eventually contain the same data once all writes issued by every processor have completed
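A toy, in-process contrast of the two propagation strategies named above; Node and the variable name x are illustrative, and there is no real network here. Under write-invalidate the writer removes the other replicas (readers must re-fetch later); under write-update it pushes the new value to every replica.

```python
# Write-invalidate vs write-update, toy sketch (illustration only).

class Node:
    def __init__(self, name):
        self.name, self.cache = name, {}


nodes = [Node("A"), Node("B"), Node("C")]
for n in nodes:
    n.cache["x"] = 0                      # everyone starts with a replica of x = 0


def write_invalidate(writer, var, value):
    for n in nodes:
        if n is not writer:
            n.cache.pop(var, None)        # other replicas become invalid
    writer.cache[var] = value


def write_update(writer, var, value):
    for n in nodes:
        n.cache[var] = value              # propagate the new value to every replica


write_invalidate(nodes[0], "x", 1)
print([n.cache.get("x") for n in nodes])  # [1, None, None]: B and C must re-fetch

write_update(nodes[0], "x", 2)
print([n.cache.get("x") for n in nodes])  # [2, 2, 2]
```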
Algorithms for implementing DSM
 The Central Server Algorithm
 The Migration Algorithm
 The Read-Replication Algorithm
 The Full-Replication Algorithm
The Central Server Algorithm
- Central server maintains all shared data
Read request: returns data item
Write request: updates data and returns
acknowledgement message
- Implementation
A timeout is used to resend a request if no acknowledgment arrives
Associated sequence numbers can be used to detect
duplicate write requests
If an application’s request to access shared data fails repeatedly, a failure condition is reported to the application (see the sketch below)
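A minimal sketch of the central-server scheme described above; CentralServer and Client are hypothetical names, and the timeout is simulated as a bounded retry loop rather than real network timers. Per-client sequence numbers let the server recognise and ignore a retransmitted write, and repeated failure is surfaced to the application.

```python
# Central-server algorithm, toy sketch (illustration only).

class CentralServer:
    def __init__(self):
        self.data = {}
        self.last_seq = {}               # client_id -> highest sequence number applied

    def read(self, key):
        return self.data.get(key)

    def write(self, client_id, seq, key, value):
        if seq <= self.last_seq.get(client_id, -1):
            return "ack (duplicate ignored)"   # retransmitted request: do not apply twice
        self.data[key] = value
        self.last_seq[client_id] = seq
        return "ack"


class Client:
    def __init__(self, client_id, server, max_retries=3):
        self.client_id, self.server, self.max_retries = client_id, server, max_retries
        self.seq = 0

    def write(self, key, value):
        self.seq += 1
        for attempt in range(self.max_retries):
            reply = self.server.write(self.client_id, self.seq, key, value)
            if reply:                    # in a real system, absence of a reply within
                return reply             # a timeout is what triggers the resend
        raise RuntimeError("shared-data access failed repeatedly")  # report to the application


server = CentralServer()
c = Client("node-1", server)
print(c.write("x", 42))                      # 'ack'
print(server.write("node-1", 1, "x", 99))    # duplicate of seq 1 -> ignored
print(server.read("x"))                      # 42
```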
The Migration Algorithm
- Operation
 Ship (migrate) the entire data object (page, block) containing the data item to the requesting location
 Allow only one node to access a shared data item at a time
- Advantages
 Takes advantage of the locality of reference
 DSM can be integrated with Virtual Memory at each node
To locate a remote data object:
 Use a location server
 Maintain hints at each node
 Broadcast query
- Issues
 Only one node can access a data object at a time
 Thrashing can occur: to reduce it, enforce a minimum time the data object must reside at a node (see the sketch below)
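A toy sketch of the migration algorithm above; MigratingBlock and min_residence are illustrative names, and the "network" migration is just an ownership change plus a printed message. The minimum residence time models the thrashing mitigation mentioned in the last bullet.

```python
# Migration algorithm, toy sketch (illustration only): the single copy of each
# block follows the node that accesses it, with a minimum residence time.

import time


class MigratingBlock:
    def __init__(self, data, owner, min_residence=0.05):
        self.data = data
        self.owner = owner                 # the one node currently holding the block
        self.arrived = time.monotonic()
        self.min_residence = min_residence

    def access(self, node):
        if node != self.owner:
            # Hold the block at its current node for a minimum time before migrating.
            wait = self.min_residence - (time.monotonic() - self.arrived)
            if wait > 0:
                time.sleep(wait)
            print(f"migrating block from {self.owner} to {node}")
            self.owner = node
            self.arrived = time.monotonic()
        return self.data                   # only the current owner ever touches the data


block = MigratingBlock(data=bytearray(4096), owner="node-A")
block.access("node-A")                     # already local: no migration
block.access("node-B")                     # migrates A -> B
block.access("node-A")                     # waits out the residence time, then migrates back
```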
The Read-Replication Algorithm
 Replicates data objects to multiple nodes
 DSM keeps track of location of data objects
 Multiple readers-one writer protocol
 After a write, all copies are invalidated or updated
 Examples of implementations:
 IVY (Integrated shared Virtual memory at Yale): the owner node of a data object knows all nodes that have copies
 MIRAGE: Developed at UCLA, kernel modified to support DSM
operation.
 Clouds: Georgia Institute of Technology
 Advantage
 Read-replication can lead to substantial performance improvements if the ratio of reads to writes is large (see the sketch below)
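A minimal sketch of the multiple-readers/one-writer scheme, in the IVY style mentioned above: the owner keeps a copy set of the nodes holding replicas, reads add a replica, and a write invalidates every other copy before proceeding. ReplicatedBlock and the node names are illustrative.

```python
# Read-replication (multiple readers, one writer), toy sketch (illustration only).

class ReplicatedBlock:
    def __init__(self, owner, value=0):
        self.owner = owner
        self.value = value
        self.copy_set = {owner}            # nodes currently holding a valid copy

    def read(self, node):
        self.copy_set.add(node)            # node fetches a read-only replica
        return self.value

    def write(self, node, value):
        for other in self.copy_set - {node}:
            print(f"invalidate copy at {other}")
        self.copy_set = {node}             # the writer is now the sole valid copy
        self.owner = node                  # and becomes the owner
        self.value = value


blk = ReplicatedBlock(owner="N1")
blk.read("N2")
blk.read("N3")                             # copy set is now {N1, N2, N3}
blk.write("N2", 10)                        # invalidates the copies at N1 and N3
print(blk.read("N3"), blk.copy_set)        # 10, and a copy set containing N2 and N3
```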
The Full-Replication Algorithm
- Extension of read-replication algorithm: multiple nodes can read and
multiple nodes can write (multiple-readers, multiple-writers protocol)
- Issue: consistency of data for multiple writers
- Solution: use of gap-free sequencer
• All writes sent to sequencer
• Sequencer assigns sequence number and sends write request to all
sites that have copies
• Each node performs writes according to sequence numbers
• A gap in the sequence numbers indicates a missing write request: the node asks for retransmission of the missing writes (see the sketch below)
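A toy, in-process sketch of full replication with a gap-free sequencer as described above; Replica and Sequencer are illustrative names, and a "lost" multicast is simulated with a drop_for parameter. Every write is numbered by the sequencer, replicas apply writes strictly in sequence order, and a gap is detected and repaired by retransmission.

```python
# Full-replication with a gap-free sequencer, toy sketch (illustration only).

class Replica:
    def __init__(self, name):
        self.name = name
        self.data = {}
        self.next_seq = 0
        self.pending = {}                  # out-of-order writes held until the gap closes

    def deliver(self, seq, key, value):
        self.pending[seq] = (key, value)
        while self.next_seq in self.pending:        # apply in sequence-number order
            k, v = self.pending.pop(self.next_seq)
            self.data[k] = v
            self.next_seq += 1

    def missing(self):
        """Sequence numbers this replica still needs (to request retransmission)."""
        return [s for s in range(self.next_seq, max(self.pending, default=-1) + 1)
                if s not in self.pending]


class Sequencer:
    def __init__(self, replicas):
        self.replicas, self.next_seq, self.log = replicas, 0, {}

    def write(self, key, value, drop_for=()):
        seq = self.next_seq
        self.next_seq += 1
        self.log[seq] = (key, value)                # kept so lost writes can be resent
        for r in self.replicas:
            if r.name not in drop_for:              # simulate a lost message
                r.deliver(seq, key, value)
        return seq

    def retransmit(self, replica, seq):
        replica.deliver(seq, *self.log[seq])


r1, r2 = Replica("R1"), Replica("R2")
seq = Sequencer([r1, r2])
seq.write("x", 1)
seq.write("x", 2, drop_for={"R2"})         # R2 misses sequence number 1
seq.write("y", 3)                          # R2 holds this back: gap at 1
print(r2.data, "missing:", r2.missing())   # {'x': 1} missing: [1]
seq.retransmit(r2, r2.missing()[0])        # R2 asks for the lost write
print(r2.data)                             # {'x': 2, 'y': 3}
```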
List of References
 Brian N. Bershad, Matthew J. Zekauskas, and Wayne A. Sawdon. The Distributed Shared Memory System. March 1993.
 John B. Carter (University of Utah), John K. Bennett, and Willy Zwaenepoel (Rice University). Techniques for Reducing Consistency-Related Communication in Distributed Shared-Memory Systems.
 Ajay Kshemkalyani and Mukesh Singhal. Distributed Shared Memory.
Any Questions?