SCYLLA’S COMPACTION STRATEGIES
“Big Things” meetup @ Oracle, March 2019.
Nadav Har’El, ScyllaDB
Presenter bio
Nadav Har’El has had a diverse 20-year career in
computer programming and computer science. He
worked on high-performance scientific computing,
networking software, information retrieval,
virtualization and operating systems.
Today he works on Scylla.
Agenda
▪ What is compaction?
▪ Scylla’s compaction strategies:
o Size-Tiered
o Leveled
o Incremental
o Date-Tiered
o Time-Window
▪ Which should I use for my workload and why?
▪ Examples
What is compaction?
Scylla’s write path:
(Diagram: writes → commit log and memtable → flushed sstables → compaction)
(What is compaction?)
▪ Scylla’s write path:
o Updates are inserted into a memory table (“memtable”)
o Memtables are periodically flushed to a new sorted file (“sstable”)
▪ After a while, we have many separate sstables
o Different sstables may contain old and new values of the same cell
o Or different rows in the same partition
o Wastes disk space
o Slows down reads
▪ Compaction: read several sstables and output one (or more)
containing the merged and most recent information
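A minimal sketch of that merge step in Python, under the simplifying assumption that an sstable is just a dict mapping (partition, clustering) keys to (timestamp, value) cells, with None standing for a tombstone (this is a toy model, not Scylla's on-disk format):

    # Toy model: an "sstable" is {(partition_key, clustering_key): (write_timestamp, value)}.
    # Real sstables are sorted, immutable files; this only illustrates the merge semantics.
    def compact(sstables):
        """Merge several sstables, keeping only the newest version of each cell."""
        merged = {}
        for sstable in sstables:
            for key, (ts, value) in sstable.items():
                if key not in merged or ts > merged[key][0]:
                    merged[key] = (ts, value)
        # A cell whose newest version is a deletion (tombstone) can be dropped
        # (in reality only after a grace period, e.g. gc_grace_seconds).
        return {k: v for k, v in merged.items() if v[1] is not None}

    old = {("p1", "c1"): (100, "a"), ("p1", "c2"): (100, "b")}
    new = {("p1", "c1"): (200, "a2"), ("p1", "c2"): (250, None)}  # overwrite + delete
    print(compact([old, new]))  # {('p1', 'c1'): (200, 'a2')}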
What is compaction? (cont.)
▪ This technique of keeping sorted files and merging them is
well-known and often called Log-Structured Merge (LSM) Tree
▪ Published in 1996 (but differs from modern implementations).
▪ Earliest popular application that I know of is the Lucene search
engine, 1999
o High write performance.
o Immediately readable.
o Reasonable read performance.
Compaction Strategy
▪ Which sstables to compact, and when?
▪ This is called the compaction strategy
▪ The goal of the strategy is low amplification:
o Avoid compacting the same data again and again.
• write amplification
o Avoid read requests needing many sstables.
• read amplification
o Avoid overwritten/deleted/expired data staying on disk.
o Avoid excessive temporary disk space needs.
• space amplification
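Written as simple ratios (my own shorthand, not formal definitions), the three amplification metrics are:

    def write_amplification(user_bytes, compaction_bytes_written):
        # How many times each byte is written to disk before settling in its final sstable.
        return (user_bytes + compaction_bytes_written) / user_bytes

    def read_amplification(sstables_read_per_request):
        # How many sstables a single read request has to consult.
        return sstables_read_per_request

    def space_amplification(bytes_on_disk, live_data_bytes):
        # How much larger the on-disk footprint is than the logical data it holds.
        return bytes_on_disk / live_data_bytes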
Which compaction strategy shall I choose?
Strategy #1: Size-Tiered Compaction
▪ Cassandra’s oldest, and still default, compaction strategy.
▪ The default on Scylla too.
▪ Dates back to Google’s BigTable paper (2006).
o Idea used even earlier (e.g., Lucene, 1999).
Size-Tiered compaction strategy
(Size-Tiered compaction strategy)
▪ Each time when enough data is in the memory table, flush it to a
small sstable
▪ When several small sstables exist, compact them into one bigger
sstable
▪ When several bigger sstables exist, compact them into one very big
sstable
▪ …
▪ Each time one “size tier” has enough sstables, compact them into
one sstable in the (usually) next size tier
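A rough sketch of that trigger logic (illustrative only: real STCS buckets sstables of similar size and uses configurable min/max thresholds; the fixed tier sizes below are assumptions):

    import math

    def tier_of(size_bytes, base=10_000_000, ratio=4):
        """Map an sstable size to a tier: tier 0 holds flushed sstables of roughly
        `base` bytes, and each following tier is `ratio` times larger (made-up numbers)."""
        return int(math.log(max(size_bytes, base) / base, ratio))

    def pick_compaction(sstable_sizes, min_threshold=4):
        """Group sstables by tier and compact the first tier that has enough of them."""
        tiers = {}
        for size in sstable_sizes:
            tiers.setdefault(tier_of(size), []).append(size)
        for tier in sorted(tiers):
            if len(tiers[tier]) >= min_threshold:
                return tier, tiers[tier]  # merge these into one (usually next-tier) sstable
        return None

    print(pick_compaction([10e6, 11e6, 9e6, 10e6, 50e6]))  # tier 0 has 4 sstables -> compact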
Size-Tiered compaction - amplification
▪ Write amplification: O(log N)
o Most data ends up in the highest tier, so it has passed through O(log N) tiers
• Where N is (data size) / (flushed sstable size)
o This is asymptotically optimal
Size-Tiered compaction - amplification
Read amplification? O(logN) sstables, but:
▪ If workload writes a partition once and never modifies it:
o Eventually each partition’s data will be compacted into one sstable
o In-memory bloom filter will usually allow reading only one sstable
o Optimal
▪ But if workload continues to update the same partition:
o All sstables will contain updates to the same partition
o O(logN) reads per read request
o Reasonable, but not great
Size-Tiered compaction - amplification
▪ Space amplification
“storage space is most often the primary bottleneck when using Flash SSDs under typical production workloads at Facebook”
Facebook researchers, 2017: http://cidrdb.org/cidr2017/papers/p82-dong-cidr17.pdf
Size-Tiered compaction - amplification
Space amplification:
▪ Obsolete data in a huge sstable will remain for a very long time.
▪ Compaction needs a lot of temporary space:
o Worst-case, needs to merge all existing sstables into one and may need
half the disk to be empty for the merged result. (2x)
o Less of a problem in Scylla than Cassandra because of sharding.
o Still a risk - so best practice is to leave half the disk empty...
Size-Tiered compaction - amplification
Space amplification (continued):
▪ When the workload is overwrite-intensive, it is even worse:
o Cassandra waits until there are 4 large sstables.
o All 4 overwrote the same data, so the merged amount is the same as in 1 sstable
o 5-fold space amplification!
o Even worse: the same data may also appear in several tiers.
▪ Scylla reduces this space amplification vs. Cassandra:
o It waits for only two sstables of similar size!
o May wait for more depending on workload (automatic controller)
Strategy #2: Leveled Compaction
▪ Introduced in Cassandra 1.0, in 2011.
▪ Based on Google’s LevelDB (itself based on Google’s BigTable)
▪ Aims to reduce space and read amplifications.
▪ Replaces huge SSTables with runs:
o A run is a big SSTable split into small (160 MB by default) SSTables
o These have non-overlapping key ranges
o A huge SSTable must be rewritten as a whole, but in a run we can modify only
parts of it (individual sstables) while keeping the disjoint key requirement
▪ In the leveled compaction strategy:
Leveled compaction strategy
(Diagram: Level 0; Level 1, a run of 10 sstables; Level 2, a run of 100 sstables; ...)
(Leveled compaction strategy)
▪ SSTables are divided into “levels”:
o New SSTables (dumped from memtables) are created in Level 0
o Each higher level is a run of SSTables, with total run size growing exponentially:
• Level 1 is a run of 10 SSTables (of 160 MB each)
• Level 2 is a run of 100 SSTables (of 160 MB each)
• etc.
▪ When we have enough (e.g., 4) sstables in Level 0, we compact
them with all 10 sstables in Level 1
o We don't create one large sstable - rather, a run: we write one sstable and
when we reach the size limit (160 MB), we start a new sstable
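A sketch of what "writing a run" means here: the compaction output is cut into bounded-size sstables instead of one huge file (the row representation and sizes are made up for illustration):

    def write_run(sorted_rows, max_sstable_bytes=160 * 2**20):
        """Split sorted compaction output into a run of bounded-size sstables with
        non-overlapping key ranges. `sorted_rows` are (key, row_size_bytes) pairs."""
        run, current, current_bytes = [], [], 0
        for key, size in sorted_rows:                 # rows arrive in sorted key order
            if current and current_bytes + size > max_sstable_bytes:
                run.append(current)                   # seal this sstable, start the next
                current, current_bytes = [], 0
            current.append(key)
            current_bytes += size
        if current:
            run.append(current)
        return run                                    # each element is one small sstable

    rows = [(k, 40) for k in "abcdefgh"]
    print(write_run(rows, max_sstable_bytes=100))     # [['a','b'], ['c','d'], ['e','f'], ['g','h']]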
(Leveled compaction strategy)
▪ After the compaction of level 0 into level 1, level 1 may have more
than 10 sstables. We pick one and compact it into level 2:
o Take one sstable from level 1
o Look at its key range and find all sstables in level 2 which overlap with it
o Typically, there are about 12 of these
• The level 1 sstable spans roughly 1/10th of the keys, while each level 2
sstable spans 1/100th of the keys; so a level-1 sstable’s range roughly
overlaps 10 level-2 sstables plus two more on the edges
o As before, we compact the one sstable from level 1 and the 12 sstables from
level 2 and replace all of those with new sstables in level 2
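The sstable selection in that step can be sketched as follows (a simplification where every sstable is reduced to just its (min_key, max_key) range):

    def overlapping(l1_sstable, level2):
        """Return the level-2 sstables whose key ranges overlap the chosen level-1 sstable.
        Every sstable is modeled as a (min_key, max_key) pair."""
        lo, hi = l1_sstable
        return [s for s in level2 if not (s[1] < lo or s[0] > hi)]

    # 100 disjoint level-2 ranges covering keys 0..999: (0, 9), (10, 19), ...
    level2 = [(i * 10, i * 10 + 9) for i in range(100)]
    # A level-1 sstable spanning roughly 1/10th of the key space:
    picked = overlapping((95, 205), level2)
    print(len(picked))  # 12 -- about 10, plus the two partially-overlapping edge sstables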
(Leveled compaction strategy)
▪ After this compaction of level 1 into level 2, we may now have
excess sstables in level 2, so we merge them into level 3. Again, one
sstable from level 2 will need to be compacted with around 10
sstables from level 3.
Leveled compaction - amplification
▪ Space amplification:
o Because of the sstable counts, 90% of the data is in the deepest level (if it is full!)
o These sstables do not overlap, so that level cannot hold duplicate data!
o So at most, 10% of the space is wasted
• (Actually, today, up to 50% if the deepest level is not full. Can be improved.)
o Also, each compaction needs a constant (~12 × 160 MB) amount of temporary space
o Nearly optimal
Leveled compaction - amplification
▪ Read amplification:
o We have O(N) sstables!
o But in each level sstables have disjoint ranges (cached in memory)
o Worst-case, O(logN) sstables relevant to a partition - plus L0 size.
o Under some assumptions 10% space amplification implies: 90% of the reads
will need just one sstable!
o Nearly optimal
Leveled compaction - amplification
▪ Write amplification:
o Also O(logN) steps like size-tiered:
• Again, most of the data is in the deepest level k
• All data was written once in L0, then compacted into L1, …, Lk
o But... significantly more writes in each step!
• STCS: reads sstables of total size X, writes X (if there are no overwrites)
• LCS: one input sstable of size X (from Li) is merged with ~10X from Li+1, writing ~11X
o If there is enough writing that LCS can't keep up, its read and space advantages are lost
o If we also have cache-miss reads, they will get less disk bandwidth
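Back-of-the-envelope arithmetic for a single compaction step (the fan-ins of 4 and 10 are the typical values from the earlier slides; purely illustrative):

    # Size-tiered step: merge 4 sstables of size X each; with no overwrites we read 4X
    # and write 4X, so each byte participating in the step is written once.
    stcs_write_per_input_byte = (4 * 1) / (4 * 1)      # 1.0

    # Leveled step: one level-i sstable of size X is merged with ~10 overlapping
    # next-level sstables, so ~11X is rewritten to push X of data one level down.
    lcs_write_per_promoted_byte = (1 + 10) / 1          # 11.0

    print(stcs_write_per_input_byte, lcs_write_per_promoted_byte)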
Leveled compaction - write amplification
▪ Leveled compaction's write amplification is not only a problem with
100% write workloads...
▪ A workload can have just 10% writes, yet its amplified write volume can be so high that
o Uncached reads slowed down because we need the disk to write
o Compaction can’t keep up, uncompacted sstables pile up, even slower reads
▪ Leveled compaction is unsuitable for many workloads with a
non-negligible amount of writes even if they seem “read mostly”
Example 1 - write-only workload
▪ Write-only workload
o Cassandra-stress writing 30 million partitions (about 9 GB of data)
o Constant write rate 10,000 writes/second
o One shard
Example 1 (space amplification)
(Graph: disk usage over time for Example 1; leveled compaction stays within a
constant multiple of the flushed memtable / sstable size, while size-tiered peaks
at x2 space amplification)
Example 1 (write amplification)
▪ Amount of actual data collected: 8.8 GB
▪ Size-tiered compaction: 50 GB writes (4 tiers + commit log)
▪ Leveled compaction: 111 GB writes
Example 1 - conclusions
▪ Size-tiered compaction:
at some points needs twice the disk space
o In Scylla with many shards, the per-shard space peaks usually do not happen at the same time
▪ Leveled compaction:
more than double the amount of disk I/O
o Test used smaller-than-default sstables (10 MB) to illustrate the problem
o Same problem with default sstable size (160 MB) - with larger workloads
Can we create a new compaction strategy with
▪ Low write amplification of size-tiered compaction,
▪ Without its high temporary disk space usage during compaction?
Strategy #3: Incremental Compaction
▪ New in Scylla Enterprise 2019.1
▪ Improved Size-Tiered strategy, with ideas from Leveled strategy:
Strategy #3: Incremental Compaction
▪ Size-tiered compaction needs temporary space because we only
remove a huge sstable after we fully compact it.
▪ Let’s split each huge sstable into a run (as in LCS):
o Treat the entire run (not individual sstables) as a file for STCS
o Remove individual sstables as compacted. Low temporary space.
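A sketch of that idea (a toy model, not Scylla's implementation: each run is a list of fragment dicts whose keys are already sorted across the run, deduplication is omitted, and write_sstable / delete_sstable are hypothetical caller-provided callbacks):

    import heapq

    def incremental_compact(runs, write_sstable, delete_sstable, max_rows_per_sstable=1000):
        """Merge whole runs as size-tiered would, but emit the output as a run of small
        sstables and delete each input fragment as soon as it has been fully consumed,
        so the temporary disk space stays roughly constant."""
        def keys_of(run):
            for frag in run:
                yield from frag["keys"]   # keys sorted within and across the run's fragments
                delete_sstable(frag)      # consumed; a real system would first make sure
                                          # the covering output sstables are durable
        out = []
        for key in heapq.merge(*(keys_of(r) for r in runs)):
            out.append(key)
            if len(out) >= max_rows_per_sstable:
                write_sstable(out)        # seal one sstable of the output run
                out = []
        if out:
            write_sstable(out)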
Incremental compaction - amplification
▪ Space amplification:
o Small constant temporary space needs, even smaller than LCS
(M*S per parallel compaction, e.g., M=2, S=160 MB)
o Overwrite-mostly is still a worst case - but 2-fold, not 5-fold as in Cassandra.
▪ Write amplification:
o Same as size-tiered: O(logN), small constant
▪ Read amplification:
o Same as size-tiered: at worst O(logN) if updating the same partitions
Example 1, with incremental compaction
Example 2 - overwrite workload
▪ Write the same 4 million partitions 15 times
o cassandra-stress write n=4000000 -pop seq=1..4000000 -schema "replication(strategy=org.apache.cassandra.locator.SimpleStrategy,factor=1)"
o In this test cassandra-stress was not rate limited
o Again, small (10 MB) LCS sstables
▪ Necessary amount of sstable data: 1.2 GB
▪ STCS space amplification: x7.7 (with Cassandra’s default threshold of 4)
▪ LCS space amplification is lower: a constant multiple of the sstable size
▪ ICS with Scylla’s default threshold of 2: around x2
Example 2
(Graph: disk usage over time for Example 2; size-tiered peaks at x7.7 space amplification)
Example 3 - read+updates workload
▪ When workloads are read-mostly, read amplification is important
▪ When workloads also have updates to existing partitions
o With STCS, each partition ends up in multiple sstables
o Read amplification
▪ An example to simulate this:
o Do a write-only update workload (each partition written 4 times on average)
• cassandra-stress write n=4000000 -pop seq=1..1000000
o Now run a read-only workload
• cassandra-stress read n=1000000 -pop seq=1..1000000
• Measure the average number of bytes read per request
Example 3 - read+updates workload
▪ Size-tiered: 46,915 bytes read per request
o Optimal after major compaction - 11,979
▪ Leveled: 11,982
o Equal to optimal because in this case all sstables fit in L1...
▪ Increasing the number of partitions 8-fold:
o Size-tiered: 29,794 (luckier this time)
o Leveled: 16,713 (unlucky: only 0.5 of the data, not 0.9, is in L2)
▪ BUT: Remember that if we have a non-negligible amount of writes,
LCS write amplification may slow down reads
Example 3, and major compaction
▪ We saw that size-tiered major compaction reduces read
amplification
▪ It also reduces space amplification (expired/overwritten data)
▪ Major compaction only makes sense if there are very few writes
o But in that case, LCS’s write amplification is not a problem!
o So LCS is recommended instead of major compaction
• Easier to use
• No huge operations like major compaction (which you must schedule yourself)
• No 50%-free-disk worst-case requirement
• Good read amplification and space amplification
Strategy #4: Time-Window Compaction
▪ Introduced in Cassandra 3.0.8, designed for time-series data
▪ Replaces Date-Tiered compaction strategy of Cassandra 2.1
(which is also supported by Scylla, but not recommended)
Time-Window compaction strategy (cont.)
In a time-series use case:
▪ Clustering key and write time are correlated
▪ Data is added in time order, with only a few out-of-order writes, typically
off by just a few seconds
▪ Data is only deleted through expiration (TTL) or by deleting an
entire partition, usually the same TTL on all the data
▪ A query is a clustering-key range query on a given partition
Most common query: "values from the last hour/day/week"
Time-Window compaction strategy (cont.)
▪ Scylla remembers in memory the minimum and maximum
clustering key in each newly-flushed sstable
o Efficiently find only the sstables with data relevant to a query
▪ But most compaction strategies:
o Destroy this feature by merging “old” and “new” sstables
o Move all rows of a partition to the same sstable…
• But time series queries don’t need all rows of a partition, just rows in a
given time range
• Makes it impossible to expire whole old sstables when everything in them
has expired
• Read and write amplification (needless compactions)
Time-Window compaction strategy (cont.)
So TWCS:
▪ Divides time into “time windows”
o E.g., if typical query asks for 1 day of data, choose a time window of 1 day
▪ Divide sstables into time buckets, according to time window
▪ Compact using Size-Tiered strategy inside each time bucket
o If the 2-day-old window has just one big sstable and a repair creates an
additional tiny “old” sstable, the two will not get compacted
o A tradeoff: slows reads slightly, but avoids the write amplification problem of DTCS
▪ When the current time window ends, we run a major compaction of its bucket
o Except for small repair-produced sstables, we get 1 sstable per time window
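A sketch of the bucketing rule (window size, timestamps and the sstable representation are simplified assumptions):

    def window_of(max_write_timestamp, window_seconds=86400):
        """Assign an sstable to a time window (here one day) by its newest write timestamp."""
        return max_write_timestamp // window_seconds

    def group_into_windows(sstables, window_seconds=86400):
        """Group sstables by time window; size-tiered compaction then runs independently
        inside each window, and a closed window is major-compacted into one sstable."""
        windows = {}
        for s in sstables:
            windows.setdefault(window_of(s["max_ts"], window_seconds), []).append(s["name"])
        return windows

    ssts = [{"name": "a", "max_ts": 10_000}, {"name": "b", "max_ts": 90_000},
            {"name": "c", "max_ts": 95_000}]
    print(group_into_windows(ssts))  # {0: ['a'], 1: ['b', 'c']} -- two separate buckets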
Summary
Workload                      | Size-Tiered                  | Leveled                           | Incremental                               | Time-Window
Write-only                    | 2x peak space                | 2x writes                         | Best                                      | -
Overwrite                     | Huge peak space              | Write amplification               | High peak space, but not like size-tiered | -
Read-mostly, few updates      | Read amplification           | Best                              | Read amplification                        | -
Read-mostly, a lot of updates | Read and space amplification | Write amplification may overwhelm | Read amplification                        | -
Time series                   | Write, read, and space ampl. | Write and space amplification     | Write and read amplification              | Best
nyh@scylladb.com
Please stay in touch
Any questions?
For more information, see my blog posts:
1. Space Amplification in Size-Tiered Compaction:
https://www.scylladb.com/2018/01/17/compaction-series-space-amplification/
2. Write Amplification in Leveled Compaction:
https://www.scylladb.com/2018/01/31/compaction-series-leveled-compaction/
3. Soon to come: a post about Incremental Compaction.
Questions?
nyh@scylladb.com
Please stay in touch
Any questions?
Thank you for listening!