Agenda
• Customer Stories
• Sharding for Performance/Scale
  – When to shard?
  – How many shards do I need?
• Types of Sharding
• How to Pick a Shard Key
• Sharding for Other Reasons
Foursquare
• 50M users
• 6B check-ins to date (growing 6M per day)
• 55M points of interest / venues
• 1.7M merchants using the platform for marketing
• 300,000 operations per second
• 5.5B documents
Foursquare Clusters
• 11 MongoDB clusters
  – 8 are sharded
• Largest cluster has 15 shards (check-ins)
  – Sharded on user ID
Does one server/replica set…
• Have enough disk space to store all my data?
• Handle my query throughput (operations per second)?
• Respond to queries fast enough (latency)?

Server Specs
• Disk capacity
• Disk IOPS
• RAM
• Network
Disk Space: How Many Shards Do I Need?
• Sum of disk space across shards > required storage size

Example:
Storage size = 3 TB
Server disk capacity = 2 TB
→ 2 shards required
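The storage sizing above is a simple ceiling division; a quick sketch in Python using the example's numbers:

```python
import math

storage_tb = 3      # required storage size (from the example)
server_disk_tb = 2  # disk capacity of one server

# Round up: a fraction of a shard is not possible.
shards = math.ceil(storage_tb / server_disk_tb)
print(shards)  # 2
```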
RAM: How Many Shards Do I Need?
• Working set should fit in RAM
  – Sum of RAM across shards > working set
• Working set = indexes plus the set of documents accessed frequently
• Working set in RAM means:
  – Shorter latency
  – Higher throughput
RAM: How Many Shards Do I Need?
• Measuring index size and working set:
  – db.stats() – index size of each collection
  – db.serverStatus({ workingSet: 1 }) – working set size estimate

Example:
Working set = 428 GB
Server RAM = 128 GB
428 / 128 = 3.34 → 4 shards required
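The RAM sizing rounds up the same way; a quick check in Python with the example's numbers:

```python
import math

working_set_gb = 428  # estimated working set (from the example)
server_ram_gb = 128   # RAM available on one server

# 428 / 128 = 3.34, so round up to keep the working set in RAM.
shards = math.ceil(working_set_gb / server_ram_gb)
print(shards)  # 4
```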
Disk Throughput: How Many Shards Do I Need?
• Sum of IOPS across shards > required IOPS
• IOPS are difficult to estimate; a single write may:
  – Update the document
  – Update indexes
  – Append to the journal
  – Possibly write a log entry
• Best approach: build a prototype and measure

Example:
Required IOPS = 11,000
Server disk IOPS = 5,000
→ 3 shards required
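Once the required IOPS have been measured against a prototype, the shard count follows the same ceiling division:

```python
import math

required_iops = 11_000  # measured against a prototype (from the example)
server_iops = 5_000     # IOPS one server's disks can sustain

# 11000 / 5000 = 2.2, so round up.
shards = math.ceil(required_iops / server_iops)
print(shards)  # 3
```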
OPS: How Many Shards Do I Need?
• S = ops/sec of a single server
• G = required ops/sec
• N = # of shards
• Allowing ~30% for sharding overhead:
  G = N × S × 0.7, so N = G / (0.7 × S)

Example:
S = 4,000
G = 10,000
N = 10,000 / (0.7 × 4,000) = 3.57 → 4 shards
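This calculation differs from the earlier ones only in the 0.7 overhead factor; in Python, with the example's numbers:

```python
import math

S = 4_000       # ops/sec of a single server
G = 10_000      # required ops/sec
OVERHEAD = 0.7  # ~30% lost to sharding overhead (per the slide)

N = G / (OVERHEAD * S)  # 10000 / 2800 = 3.57...
shards = math.ceil(N)
print(shards)  # 4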
Range Sharding
• Data is partitioned into contiguous shard key ranges, one per shard:
  – Shard 1: keys 0..25
  – Shard 2: keys 26..50
  – Shard 3: keys 51..75
  – Shard 4: keys 76..100
• Provides read/write scalability
Tag-Aware Sharding
• Four shards, tagged Winter, Spring, Summer, and Fall
• Tag ranges map shard key ranges to tags:

  Shard Tag   Start     End
  Winter      23 Dec    21 Mar
  Spring      22 Mar    21 Jun
  Summer      21 Jun    23 Sep
  Fall        24 Sep    22 Dec
Shard Key Characteristics
• A good shard key has:
  – Sufficient cardinality
  – Distributed writes
  – Targeted reads ("query isolation")
• The shard key should be in every query if possible
  – Otherwise queries scatter-gather across all shards
• Choosing a good shard key is important!
  – It affects performance and scalability
  – Changing it later is expensive
Low-Cardinality Shard Key
• Induces "jumbo chunks" that cannot be split or migrated
• Example: a boolean field
(diagram: mongos routing to Shards 1..N, with a single chunk covering the range [a, b))
Reasons to Shard
• Scale
  – Data volume
  – Query volume
• Global deployment with local writes
  – Geography-aware sharding
• Tiered storage
• Fast backup restore
Tiered Storage
• Save hardware costs
• Put frequently accessed documents on fast servers
  – Infrequently accessed documents on less capable servers
• Use tag-aware sharding
(diagram: two "Current" shards on SSD, two "Archive" shards on HDD)
Fast Restore
• 40 TB database
• 2 shards of 20 TB each
• Challenge: cannot meet the restore SLA after data loss
(diagram: two 20 TB mongod shards)
Fast Restore
• 40 TB database
• 4 shards of 10 TB each
• Solution: reduces the restore time by 50%
(diagram: four 10 TB mongod shards)
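Since shards can be restored in parallel, restore time tracks the size of the largest shard. A sketch of the slide's 50% reduction, assuming an illustrative restore rate of 1 TB/hour (not a figure from the talk):

```python
RESTORE_RATE_TB_PER_HR = 1.0  # assumed rate, for illustration only

def restore_hours(total_tb: float, num_shards: int) -> float:
    # Shards restore in parallel, so the elapsed time is set by
    # the data volume on a single shard.
    per_shard_tb = total_tb / num_shards
    return per_shard_tb / RESTORE_RATE_TB_PER_HR

print(restore_hours(40, 2))  # 20.0 hours with 2 x 20 TB shards
print(restore_hours(40, 4))  # 10.0 hours with 4 x 10 TB shards
```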
Determining the # of Shards
• To determine the required # of shards, determine:
  – Storage requirements
  – Latency requirements
  – Throughput requirements
• Derive the total:
  – Disk capacity
  – Disk throughput
  – RAM
• Calculate the # of shards based on individual server specs
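Putting the four sizing calculations together: the tightest constraint determines the shard count, so take the maximum. A sketch in Python using the worked examples from the preceding slides (the `server` dict of per-server specs is an illustrative structure, not from the talk):

```python
import math

def shards_needed(storage_tb, working_set_gb, required_iops, required_ops,
                  server):
    """Return the shard count implied by the tightest of the four
    sizing dimensions. `server` holds one server's specs."""
    candidates = [
        math.ceil(storage_tb / server["disk_tb"]),
        math.ceil(working_set_gb / server["ram_gb"]),
        math.ceil(required_iops / server["iops"]),
        math.ceil(required_ops / (0.7 * server["ops"])),  # 30% sharding overhead
    ]
    return max(candidates)

# Per-server specs and requirements from the examples above:
server = {"disk_tb": 2, "ram_gb": 128, "iops": 5000, "ops": 4000}
print(shards_needed(3, 428, 11000, 10000, server))  # 4
```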
Get Expert Advice on Scaling. For Free.
For a limited time, if you're considering a commercial relationship with MongoDB, you can sign up for a free one-hour consult about scaling with one of our MongoDB Engineers.
Sign up: http://bit.ly/1rkXcfN
MongoDB provides horizontal scale-out for databases using a technique called sharding, which is transparent to applications. Sharding distributes data across multiple physical partitions called shards. Sharding allows MongoDB deployments to address the hardware limitations of a single server, such as bottlenecks in RAM or disk I/O, without adding complexity to the application.
MongoDB supports three types of sharding:
• Range-based Sharding. Documents are partitioned across shards according to the shard key value. Documents with shard key values “close” to one another are likely to be co-located on the same shard. This approach is well suited for applications that need to optimize range-based queries.
• Hash-based Sharding. Documents are uniformly distributed according to an MD5 hash of the shard key value. Documents with shard key values “close” to one another are unlikely to be co-located on the same shard. This approach guarantees a uniform distribution of writes across shards, but is less optimal for range-based queries.
• Tag-aware Sharding. Documents are partitioned according to a user-specified configuration that associates shard key ranges with shards. Users can optimize the physical location of documents for application requirements such as locating data in specific data centers.
MongoDB automatically balances the data in the cluster as the data grows or the size of the cluster increases or decreases.
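The contrast between range-based and hash-based placement described above can be sketched in Python. The 4-shard layout and the 0..100 key space mirror the earlier range-sharding diagram, and the MD5 digest follows the description of hash-based sharding; the functions themselves are illustrative, not MongoDB's actual routing code:

```python
import hashlib

NUM_SHARDS = 4  # illustrative cluster size

def range_shard(key: int) -> int:
    # Range sharding: contiguous key ranges (0..25, 26..50, ...)
    # map to the same shard, so adjacent keys are co-located.
    return min(key // 25, NUM_SHARDS - 1)

def hash_shard(key: int) -> int:
    # Hash sharding: an MD5 hash of the key decides placement,
    # so adjacent keys tend to land on different shards.
    digest = hashlib.md5(str(key).encode()).hexdigest()
    return int(digest, 16) % NUM_SHARDS

adjacent = [26, 27, 28, 29]
print([range_shard(k) for k in adjacent])  # [1, 1, 1, 1] -- all co-located
print([hash_shard(k) for k in adjacent])   # typically scattered across shards
```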