All you didn’t know about the CAP theorem
CAP theorem
The theorem was presented at the Symposium on Principles of
Distributed Computing in 2000 by Eric Brewer.
In 2002, Seth Gilbert and Nancy Lynch of MIT published a
formal proof of Brewer's conjecture, rendering it a theorem.
According to Brewer, he only wanted to start a
conversation in the community, but his words were
taken further and treated as a theorem.
What stands behind the CAP?
The CAP theorem states that in a distributed system you can
choose only 2 out of 3 (a toy sketch of the choice follows after the definitions):
Consistency: every read returns the most recent write
Availability: every node (if not failed) always executes
queries (reads and writes)
Partition tolerance: even if the connections between nodes
are down, the other two promises (C & A) are kept.
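To make the 2-out-of-3 choice concrete, here is a toy Python sketch (my own illustration, not from the slides): a single register replicated on two nodes. The class and value names are hypothetical. During a partition, a read served by the cut-off replica must either answer with possibly stale data (giving up C) or refuse to answer (giving up A).

```python
# Toy model: one register on two replicas.
class Replica:
    def __init__(self):
        self.value = None


class TinyCluster:
    def __init__(self, prefer_consistency):
        self.primary = Replica()
        self.secondary = Replica()
        self.partitioned = False
        self.prefer_consistency = prefer_consistency  # True -> CP, False -> AP

    def write(self, value):
        self.primary.value = value
        if not self.partitioned:
            self.secondary.value = value  # replication reaches the other node

    def read_from_secondary(self):
        if self.partitioned and self.prefer_consistency:
            # CP: refuse to answer rather than risk returning stale data
            raise RuntimeError("unavailable: cannot confirm the latest value")
        return self.secondary.value  # AP: answer, possibly with stale data


cluster = TinyCluster(prefer_consistency=False)  # behave as an AP system
cluster.write("v1")
cluster.partitioned = True   # the link between the two nodes goes down
cluster.write("v2")          # only the primary sees this write
print(cluster.read_from_secondary())  # "v1": available, but not consistent
```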
AP Proof
Not consistent!
CP Proof
Not available!
CA Proof
No partition tolerance!
The CAP triangle
Let’s take a look at PostgreSQL
Master/slave architecture is one of the common solutions
A slave can be synced with the master asynchronously or synchronously
(a minimal configuration sketch follows below)
The transaction system uses two-phase commit to ensure consistency
If a partition occurs, you can’t talk to the server (in the basic case): the
system is not CAP-available.
So, it can’t continue working in case of a network partition, but it provides
strong consistency and high availability. It’s a CA system!
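As a rough illustration of the sync/async knob mentioned above, here is a hedged psycopg2 sketch against a hypothetical primary. The connection string and the standby name "replica1" are placeholders, and ALTER SYSTEM requires superuser rights; this is a sketch, not the configuration used in the talk.

```python
import psycopg2

conn = psycopg2.connect("dbname=app user=postgres host=primary.example")
conn.autocommit = True  # ALTER SYSTEM cannot run inside a transaction block
cur = conn.cursor()

# Commits must be confirmed by the named standby (synchronous replication);
# setting this to an empty string would make replication asynchronous again.
cur.execute("ALTER SYSTEM SET synchronous_standby_names = 'replica1'")
# Wait for the standby to flush WAL before reporting a commit as successful.
cur.execute("ALTER SYSTEM SET synchronous_commit = 'on'")
cur.execute("SELECT pg_reload_conf()")  # apply the changed settings
conn.close()
```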
Let’s take a look at MongoDB
MongoDB provides strong consistency, because it is a single-master system and
all writes go to the primary by default (a client-side sketch of the relevant
knobs follows below)
MongoDB provides automatic failover in case of a partition
If a partition occurs, it will stop accepting writes until it
believes that it can safely complete those writes.
So, it can continue working in case of a network partition, and it gives up
availability. It’s a CP system!
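For a sense of how this behaviour is usually controlled from the client side, here is a hedged pymongo sketch; the host names, the replica set "rs0", and the database/collection names are all placeholders, not something from the talk.

```python
from pymongo import MongoClient

client = MongoClient(
    "mongodb://node1.example,node2.example,node3.example/?replicaSet=rs0",
    w="majority",                 # a write is acknowledged by a majority of nodes
    readConcernLevel="majority",  # reads only see majority-committed data
)
db = client.appdb
db.orders.insert_one({"order_id": 1, "status": "created"})
print(db.orders.find_one({"order_id": 1}))
```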
Let’s take a look at the
Postgres + Salesforce + Heroku Connect system
It’s a master-master system
Heroku Connect is responsible for keeping the system consistent
The Salesforce data and our Postgres DB don’t know about each other
If a network partition occurs, both storages remain available; Heroku Connect
will keep trying to reconnect.
So, it can continue working in case of a network partition, and it gives up
consistency. It should be an AP system!
It is so easy! Now I know everything!
Actually, no.
There are a lot of problems with the CAP theorem:
CAP uses very narrow, far-from-the-real-world definitions
In practice, the choice is only between consistency and availability
Many systems are neither CAP-consistent nor CAP-available
Pure AP systems are useless
Pure CP systems might not behave as expected
What is wrong with the definitions
Consistency in CAP actually means linearizability (and it’s
really hard to achieve).
Availability in CAP is defined as “every request received by
a non-failing [database] node in the system must result in
a [non-error] response”, and it is not restricted by time.
The only fault considered by the CAP theorem is a network
partition.
Linearizability
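Since this slide is only a diagram, here is a toy Python illustration (my own, not from the talk) of a history that is not linearizable: a read that starts after a write has finished, yet still returns the old value. The check below is a simplified necessary condition that is only meant for this single-write example.

```python
# Each operation is (start_time, end_time, kind, value).
# Client A writes x = 1 and the write completes at t = 1.0; client B then
# starts a read at t = 2.0 and gets the old value 0.
history = [
    (0.0, 1.0, "write", 1),
    (2.0, 3.0, "read", 0),
]

def reads_see_completed_writes(history):
    """Simplified necessary condition for linearizability (single write):
    a read that starts after a write has finished must return its value."""
    for (w_start, w_end, w_kind, w_val) in history:
        if w_kind != "write":
            continue
        for (r_start, r_end, r_kind, r_val) in history:
            if r_kind == "read" and r_start > w_end and r_val != w_val:
                return False
    return True

print(reads_see_completed_writes(history))  # False -> not linearizable
```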
Why are node failures outside CAP?
By the definition of availability: “...every node (if not
failed) always...”
By the proof of CAP: the proof used by Gilbert and Lynch
relies on having code running on both sides of the
partition.
In some cases a partition will be equivalent to a failure,
but this equivalence is obtained by implementing
specific logic in all the nodes.
Of course, we should manage node failures, but CAP doesn’t
help us here.
AP / CP choice
Partition tolerance basically means that you’re communicating
over an asynchronous network that may delay or drop messages.
The internet and all our data centers have this property, so
you don’t really have any choice in this matter.
Many systems are only “P”
If you have one master and one slave, and you are
partitioned from the master, you can’t write, but you can read.
It’s not CAP-available.
OK, so it’s a CP system? But usually the sync between slave
and master is asynchronous, and there may be a gap between
the last sync and the partition, so you do not have
CAP-consistency either.
AP and CP problems
Pure AP is useless: a system may just return any random value and
it would still be an AP system (illustrated below)
Pure CP is useless too, because a partition in CAP has no
fixed duration, so the system provides only eventual
consistency, which is not the strong consistency that we want.
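Taking the first bullet literally, a hypothetical store that answers every read with a random value never refuses a request, so it is trivially CAP-available while being useless:

```python
import random

def useless_ap_read(key):
    # Always answers (CAP-available), never reflects any actual write.
    return random.randint(0, 9)

print(useless_ap_read("x"))  # some random digit, unrelated to anything written
```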
Try to digest it
How to describe a distributed system
Remember CAP’s narrow definitions, as they
are still widely used
Use the PACELC(A) theorem instead of CAP; it
adds a consistency/latency tradeoff
Describe how the ACID/BASE principles apply to
your system
Decide whether the system suits your needs,
considering the project you are working on.
Let’s take a look at PACELC(A)
The PACELC theorem was first described by Daniel J. Abadi from Yale University
and formalised in 2012.
IF there is a partition (P), how does the system trade off availability and
consistency (A and C)?
ELSE (E), when the system is running normally in the absence of partitions, how
does the system trade off latency (L) and consistency (C)?
As the PACELC theorem is based on CAP, it also uses CAP’s definitions
(a tiny labelling helper follows below).
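To spell out the notation, here is a small helper (my own illustration, not part of the theorem) that turns the two PACELC choices into a label:

```python
def pacelc_label(available_during_partition: bool, low_latency_normally: bool) -> str:
    """Spell out the two PACELC choices as a label like 'PA/EC'."""
    pa_or_pc = "PA" if available_during_partition else "PC"
    el_or_ec = "EL" if low_latency_normally else "EC"
    return f"{pa_or_pc}/{el_or_ec}"

# A system that stays available under partition but favours consistency over
# latency in normal operation:
print(pacelc_label(available_during_partition=True, low_latency_normally=False))  # PA/EC
```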
Systems described by PACELC(A)
Let’s take a look at ACID
ACID is a set of properties of database transactions.
Jim Gray defined these properties of a reliable transaction system in the late
1970s and developed technologies to achieve them automatically.
Long ago, database vendors introduced two-phase commit to provide ACID
across multiple database instances (a minimal coordinator sketch follows below).
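As a reminder of how the protocol works, here is a minimal, hypothetical two-phase-commit coordinator over in-memory participants; real database 2PC (e.g. XA, or Postgres’s PREPARE TRANSACTION) adds durable logs and crash recovery, which this sketch omits.

```python
class Participant:
    def __init__(self, name):
        self.name = name
        self.committed = False

    def prepare(self):
        # Phase 1: do the work tentatively and vote; a real node could vote "no".
        return True

    def commit(self):
        # Phase 2: make the tentative work permanent.
        self.committed = True

    def rollback(self):
        self.committed = False


def two_phase_commit(participants):
    if all(p.prepare() for p in participants):   # phase 1: collect votes
        for p in participants:                    # phase 2: unanimous yes -> commit
            p.commit()
        return "committed"
    for p in participants:                        # any "no" vote -> abort everywhere
        p.rollback()
    return "aborted"


print(two_phase_commit([Participant("db1"), Participant("db2")]))  # committed
```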
Let’s take a look at ACID
Atomicity. All of the operations in the transaction will complete, or none
will (a minimal transaction example follows below).
Consistency. The database will be in a consistent state when the transaction
begins and ends.
Isolation. The transaction will behave as if it is the only operation being
performed upon the database.
Durability. Once a transaction has been committed, it will remain so, even in
the event of power loss, crashes, or errors.
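To show atomicity in practice, here is a hedged psycopg2 sketch of a money transfer against a hypothetical accounts table; the connection string and schema are placeholders. Either both UPDATEs commit together or the whole transaction rolls back.

```python
import psycopg2

conn = psycopg2.connect("dbname=bank user=app host=db.example")
try:
    with conn:  # commits on success, rolls back if an exception is raised
        with conn.cursor() as cur:
            cur.execute(
                "UPDATE accounts SET balance = balance - 100 WHERE id = %s", (1,))
            cur.execute(
                "UPDATE accounts SET balance = balance + 100 WHERE id = %s", (2,))
finally:
    conn.close()
```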
CAP/ACID definition confusion
Consistency in ACID relates to data integrity,
whereas Consistency in CAP refers to atomic
consistency (linearizability), which is a
consistency model.
The term Isolation is not used in CAP, but its
definition in ACID is essentially what
linearizability stands for.
Availability is not part of the ACID definitions,
but when it appears in articles it means the
same as CAP-availability, except that it does
not require all non-failing nodes to respond.
Let’s take a look at BASE
Eventually consistent services are often classified
as providing BASE semantics, in contrast to
traditional ACID guarantees.
One of the earliest definitions of eventual
consistency dates back to 1988.
BASE essentially embraces the fact that true
consistency cannot be achieved in the real world,
and as such cannot be modelled in highly scalable
distributed systems.
What is BASE
Basic availability states that there will be a response to any request, but
that response could still be a “failure”, or the data may be in an
inconsistent or changing state
Soft state: the state of the system could change over time due to “eventual
consistency” changes
Eventual consistency states that the system will eventually become consistent;
the system continues to receive input and does not check the consistency
of every transaction before moving on to the next one (a toy anti-entropy
sketch follows below)
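The BASE properties can be pictured with a toy sketch (hypothetical, not from the slides): replicas accept writes locally (soft state) and a background anti-entropy pass later converges them with a last-write-wins rule on timestamps.

```python
import time

# Three replicas of a key-value store; each value is stored with a timestamp.
replicas = [{}, {}, {}]

def write(replica, key, value):
    replica[key] = (value, time.time())  # accept locally, no coordination

def anti_entropy(replicas):
    """Merge replicas: for each key keep the value with the newest timestamp."""
    merged = {}
    for r in replicas:
        for key, (value, ts) in r.items():
            if key not in merged or ts > merged[key][1]:
                merged[key] = (value, ts)
    for r in replicas:
        r.update(merged)

write(replicas[0], "cart", ["book"])         # clients may hit different replicas
write(replicas[1], "cart", ["book", "pen"])
anti_entropy(replicas)                        # background pass converges the replicas
print([r["cart"][0] for r in replicas])       # the same value on every replica
```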
Try to digest it too...
Fresh look at Postgres
Postgres allows multiple cluster configurations, so it’s really hard to
describe all of them. Let’s just take master-slave replication with the
Slony implementation.
The system works according to ACID (there are a couple of problems with two-phase
commit, but mostly it’s reliable)
In case of a partition, Slony will try to perform a switchover, and if all goes
well we have a new master and keep consistency
When there is no partition, Slony gives up latency and does everything to
approach strong consistency. Actually, ACID is the reason for the high latency
The system is considered PC/EC(A)
Fresh look at MongoDB
MongoDB is a NoSQL database.
MongoDB is ACID in a limited sense, at the document level
As a distributed system, it’s all about BASE
In case of no partition, the system guarantees reads and writes to be
consistent
If the master node fails or is partitioned from the rest of the system,
some data will not be replicated. The system elects a new master to remain
available for reads and writes (the new master and the old master are inconsistent)
The system is considered PA/EC(A), as most of the nodes remain CAP-available
in case of a partition.
Fresh look at the
Postgres + Salesforce + Heroku Connect system
Salesforce is an abstraction over an Oracle database (RDBMS) and Postgres is
an actual RDBMS. Heroku Connect is a tool to sync those two DBs. Let’s see what
we have…
Heroku Connect, Postgres and Salesforce each provide us ACID, but we can’t
operate on related entities across them
Surprise! The full system looks more like BASE.
It uses the streaming API to sync the entities.
In case of a partition, both systems will be available, but not consistent
In case of no partition, Heroku Connect tries to sync as much as possible.
Conclusion
Distributed systems can and should be understood through many metrics
and terms, but the starting point is their tradeoffs
It’s really difficult to classify an abstract system. You should decide what
kind of system you want, and then look at what you can achieve
The second point is the reason why finding good articles about this is not easy
either
Do not overwhelm yourself with that work; you would have to be a scientist to
do it 99% correctly
Links
https://dzone.com/articles/better-explaining-cap-theorem
https://martin.kleppmann.com/2015/05/11/please-stop-calling-databases-cp-or-ap.html
https://habrahabr.ru/post/231703/
http://blog.thislongrun.com/2015/03/the-confusing-cap-and-acid-wording.html
https://neo4j.com/blog/acid-vs-base-consistency-models-explained/
http://databases.about.com/od/databasetraining/a/databasesbegin.htm
https://brooker.co.za/blog/2014/07/16/pacelc.html
https://www.postgresql.org/files/developer/transactions.pdf
https://www.airpair.com/postgresql/posts/sql-vs-nosql-ko-postgres-vs-mongo
http://jennyxiaozhang.com/nosql-hbase-vs-cassandra-vs-mongodb/

Speaker notes

  1. Ask an expert in the audience to follow along closely and correct me at the end?
  2. Who has heard of the CAP theorem? And who has used it?
  3. What is still unclear?
  4. Linearizability gives isolation at the level of operations. This is a fairly expensive guarantee to provide, because it requires a lot of coordination. Even the CPU in your computer doesn’t provide linearizable access to your local RAM! On modern CPUs, you need to use an explicit memory barrier instruction in order to get linearizability. And even testing whether a system provides linearizability is tricky.
  5. Questions?
  6. Questions?
  7. Questions?