1. From 100s to 100s of Millions
July 2011
Erik Onnen
2. About Me
• Director of Platform Engineering at Urban Airship (.75 years)
• Previously Principal Engineer at Jive Software (3 years)
• 12 years of large-scale, distributed systems experience going back to CORBA
• Cassandra, HBase, Kafka and ZooKeeper contributor - most recently CASSANDRA-2463
3. In this Talk
• About Urban Airship
• Systems Overview
• A Tale of Storage Engines
• Our Cassandra Deployment
• Battle Scars
• Development Lessons Learned
• Operations Lessons Learned
• Looking Forward
4. What is an Urban Airship?
• Hosting for mobile services that developers should not build themselves
• Unified API for services across platforms
• SLAs for throughput and latency
11. By The Numbers
• Over 160 million active application installs use our system across over 80 million unique devices
• The freemium API peaks at 700 requests/second; the dedicated customer API peaks at 10K requests/second
• Over half of those requests are device check-ins
• Transactions - send push, check status, get content
• At any given point in time, we have ~1.1 million secure socket connections into our transactional core
• It took the company 6 months to deliver its first 1M messages; we just broke 4.2B
23. A Tale of Storage Engines
• “Is there a NoSQL system you guys don’t use?”
• Riak :)
• We do use:
• Cassandra
• HBase
• Redis
• MongoDB
• We’re converging on Cassandra + PostgreSQL for transactional data and HBase for the long haul
33. A Tale of Storage Engines
• PostgreSQL
• Bootstrapped the company on PostgreSQL in EC2
• Highly relational, large index model
• Layered in memcached
• Writes weren’t scaling after ~6 months
• Continued to use it for several silos of data, but we needed a way to grow more easily
41. A Tale of Storage Engines
• MongoDB
• Initially, we loved Mongo
• Document databases are cool
• BSON is nice
• As the data set grew, we learned a lot about MongoDB
• “MongoDB does not wait for a response by default when writing to the database.” (a write-concern sketch follows below)
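To make that default concrete, here is a minimal sketch of requesting acknowledged writes with the 2.x-era MongoDB Java driver; the database and collection names ("airship", "devices") are hypothetical, not Urban Airship's actual schema.

```java
// Hypothetical sketch: acknowledged writes with the 2.x-era MongoDB Java
// driver. Database/collection names ("airship", "devices") are made up.
import com.mongodb.BasicDBObject;
import com.mongodb.DBCollection;
import com.mongodb.Mongo;
import com.mongodb.WriteConcern;
import com.mongodb.WriteResult;

public class AcknowledgedWrite {
    public static void main(String[] args) throws Exception {
        Mongo mongo = new Mongo("localhost", 27017);
        DBCollection devices = mongo.getDB("airship").getCollection("devices");

        BasicDBObject doc = new BasicDBObject("deviceId", "abc123")
                .append("lastCheckin", System.currentTimeMillis());

        // The era's default (NORMAL) returned without waiting for the
        // server, silently dropping errors; SAFE blocks until the server
        // acknowledges the write and surfaces any error it reports.
        WriteResult result = devices.insert(doc, WriteConcern.SAFE);
        System.out.println("server-reported error: " + result.getError());
    }
}
```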
50. A Tale of Storage Engines
• MongoDB - Read/Write Problems
• Early days (1.2): one global lock (reads block writes and vice versa)
• Later, one read lock and one write lock per server
• Long-running queries were often devastating
• Replication would fall too far behind and stop
• No writes or updates
• Effectively a failure for most clients
• With sharding, queries for anything other than the shard key talk to every node in the cluster
57. A Tale of Storage Engines
• MongoDB - Update Problems
• Simple updates (e.g. counters) were fine
• Bigger updates commonly resulted in large scans of the collection depending on position == heavy disk I/O
• Updates frequently spill to the end of the collection datafile, leaving “holes” that are not sparse files
• Those “holes” get mmap’d even though they’re not used
• Updates that move data acquire multiple locks, commonly blocking other read/write operations
63. A Tale of Storage Engines
• MongoDB - Optimization Problems
• Compacting a collection locks the entire collection
• The read slave was too busy to serve as a backup; we needed moar RAM but were already on High-Memory EC2 instances with nowhere else to go
• Mongo mmaps everything - when your data set is bigger than RAM, you’d better have fast disks
• Until 1.8, no support for sparse indexes
69. A Tale of Storage Engines
• MongoDB - Ops Issues
• Lots of good information in mongostat
• Recovering a crashed system was effectively impossible without disabling indexes first (not the default)
• Replica sets never worked for us in testing; lots of inconsistencies in failure scenarios
• Scattered records led to lots of I/O, which hurt on bad disks (EC2)
77. Cassandra at Urban Airship
• Summer of 2010 - with no faith left in MongoDB, we started a migration to Cassandra
• Lots of L&P testing, client analysis, etc.
• December 2010 - Cassandra backed 85% of our Android stack’s persistence
• Six EC2 XLs, each serving:
• 30GB of data
• ~1,000 reads/second/node
• ~750 writes/second/node
86. Cassandra at Urban Airship
• Why Cassandra?
• Well suited to most of our data model (simple DAGs)
• Lots of UUIDs and hashes, which partition well
• Retrievals don’t need ordering beyond keys or TSD
• Rolling upgrades FTW
• Dynamic rebalancing and node addition
• Column TTLs are huge for us (a sketch follows below)
• Awesome community :)
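For the TTL point, a minimal sketch with the Hector client of that era; the cluster, keyspace, and column family names ("Test Cluster", "Airship", "DeviceTokens") are illustrative, not our actual schema.

```java
// Hypothetical sketch: writing a column with a TTL via Hector.
// Cluster/keyspace/column family names are illustrative only.
import me.prettyprint.cassandra.serializers.StringSerializer;
import me.prettyprint.hector.api.Cluster;
import me.prettyprint.hector.api.Keyspace;
import me.prettyprint.hector.api.factory.HFactory;
import me.prettyprint.hector.api.mutation.Mutator;

public class TtlColumnExample {
    public static void main(String[] args) {
        StringSerializer ss = StringSerializer.get();
        Cluster cluster = HFactory.getOrCreateCluster("Test Cluster", "localhost:9160");
        Keyspace keyspace = HFactory.createKeyspace("Airship", cluster);
        Mutator<String> mutator = HFactory.createMutator(keyspace, ss);

        // The createColumn overload that takes a TTL (in seconds) makes
        // Cassandra expire the column automatically, replacing a whole
        // class of scheduled cleanup jobs.
        int thirtyDays = 30 * 24 * 60 * 60;
        mutator.insert("device-abc123", "DeviceTokens",
                HFactory.createColumn("lastCheckin", "1311638400", thirtyDays, ss, ss));
    }
}
```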
94. Cassandra at Urban Airship
• Why Cassandra, cont’d?
• Particularly well suited to working around EC2 availability
• Needed a cross-AZ strategy - we had seen EBS issues in the past and didn’t trust fault containment within a zone
• Didn’t want locality of replication, so we needed to stripe across AZs
• Read repair and hinted handoff generally did the right thing when a node would flap (Ubuntu #708920)
• No SPoF
• Ability to alter consistency levels on a per-operation basis (a sketch follows below)
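Per-operation consistency is easiest to see in the raw Thrift API, where every call takes a ConsistencyLevel argument. A minimal sketch against the 0.8-era Thrift bindings; the keyspace and column family names are hypothetical.

```java
// Hypothetical sketch: choosing durability vs. latency per operation with
// the 0.8-era Thrift bindings. Keyspace/CF names are illustrative only.
import java.nio.ByteBuffer;
import org.apache.cassandra.thrift.Cassandra;
import org.apache.cassandra.thrift.Column;
import org.apache.cassandra.thrift.ColumnParent;
import org.apache.cassandra.thrift.ColumnPath;
import org.apache.cassandra.thrift.ConsistencyLevel;
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.transport.TFramedTransport;
import org.apache.thrift.transport.TSocket;

public class PerOperationConsistency {
    public static void main(String[] args) throws Exception {
        TFramedTransport transport = new TFramedTransport(new TSocket("localhost", 9160));
        Cassandra.Client client = new Cassandra.Client(new TBinaryProtocol(transport));
        transport.open();
        client.set_keyspace("Airship"); // hypothetical keyspace

        ByteBuffer key = ByteBuffer.wrap("device-abc123".getBytes("UTF-8"));
        Column column = new Column(ByteBuffer.wrap("status".getBytes("UTF-8")));
        column.setValue("active".getBytes("UTF-8"));
        column.setTimestamp(System.currentTimeMillis() * 1000);

        // Durable write: block until a quorum of replicas acknowledge...
        client.insert(key, new ColumnParent("DeviceTokens"), column, ConsistencyLevel.QUORUM);

        // ...then serve a latency-sensitive read from a single replica.
        ColumnPath path = new ColumnPath("DeviceTokens");
        path.setColumn("status".getBytes("UTF-8"));
        client.get(key, path, ConsistencyLevel.ONE);

        transport.close();
    }
}
```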
104. Battle Scars - Development
• Know your data model
• Creating indexes after the fact is a PITA
• Design around wide rows (see the bucketing sketch after this list)
• I/O problems
• Thrift problems
• Count problems
• Favor JSON over packed binaries if possible
• Careful with Thrift in the stack
• Don’t fear the StorageProxy
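One common way to design around unbounded wide rows is to bound them. A hypothetical sketch (again with Hector) that buckets time-series columns by day so no row grows without limit; the column family name ("Checkins") and key format are illustrative only, not Urban Airship's schema.

```java
// Hypothetical sketch: bound row width by bucketing columns per day, so a
// device's row holds one day of check-ins rather than its whole history.
import java.text.SimpleDateFormat;
import java.util.Date;
import me.prettyprint.cassandra.serializers.StringSerializer;
import me.prettyprint.hector.api.Keyspace;
import me.prettyprint.hector.api.factory.HFactory;
import me.prettyprint.hector.api.mutation.Mutator;

public class DayBucketedWrites {

    // Row key combines the device and the day, e.g. "abc123:20110715".
    static String rowKey(String deviceId, Date when) {
        return deviceId + ":" + new SimpleDateFormat("yyyyMMdd").format(when);
    }

    static void recordCheckin(Keyspace keyspace, String deviceId, Date when) {
        StringSerializer ss = StringSerializer.get();
        Mutator<String> mutator = HFactory.createMutator(keyspace, ss);
        // Column name is the check-in timestamp; the value is a placeholder.
        mutator.insert(rowKey(deviceId, when), "Checkins",
                HFactory.createColumn(String.valueOf(when.getTime()), "1", ss, ss));
    }
}
```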
114. Battle Scars - Development
• Assume failure in the client
• Read timeout vs. connection refused
• When maintaining your own indexes, try to clean up after failure
• Be ready to clean up inconsistencies anyway
• Verify client library assumptions and exception handling
• Retry now vs. retry later?
• Compensating action during failures?
• Don’t avoid the Cassandra code
• Embed it for testing (a sketch follows below)
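"Embed for testing" in practice: a minimal sketch assuming the test-support EmbeddedCassandraService class from the Cassandra source tree and a test cassandra.yaml on the classpath.

```java
// Hypothetical sketch: in-process Cassandra for integration tests, so
// real read/write paths are exercised without an external cluster.
import org.apache.cassandra.service.EmbeddedCassandraService;

public class EmbeddedCassandraTestBase {
    private static EmbeddedCassandraService cassandra;

    public static synchronized void startEmbedded() throws Exception {
        if (cassandra != null) return; // start the daemon once per JVM
        // Point the daemon at a test config before it initializes.
        System.setProperty("cassandra.config", "test-cassandra.yaml");
        cassandra = new EmbeddedCassandraService();
        cassandra.start();
        // Tests now hit localhost on the configured RPC port with the same
        // client code used in production.
    }
}
```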
123. Battle Scars - Ops
• Cassandra in EC2:
• Ensure the Dynamic Snitch is enabled
• Disk I/O
• Avoid EBS except for snapshot backups, or use S3
• Stripe ephemerals, not EBS volumes
• Avoid smaller instances altogether
• Don’t always assume traversing close-proximity AZs is more expensive
• Balance RAM cost vs. the cost of additional hosts and time spent with GC logs
132. Battle Scars - Ops
• Java Best Practices:
• All Java services are managed via the same set of scripts
• In most cases, operators don’t treat Cassandra differently from HBase
• Simple mechanism to take a thread or heap dump
• All logging is consistent - GC, application, stdout/stderr
• Init scripts use the same scripts operators do
• Bare metal will rock your world
• +UseLargePages will rock your world too
133. Battle Scars - Ops
[Charts: ParNew GC effectiveness (MB collected), mean ParNew collection time (ms), and ParNew collection count, comparing Bare Metal vs. EC2 XL]
142. Battle Scars - Ops
• Java Best Practices, cont’d:
• Get familiar with GC logs (-XX:+PrintGCDetails); an example flag set follows below
• Understand what degenerate CMS collection looks like
• We settled at -XX:CMSInitiatingOccupancyFraction=60
• Possibly experiment with the tenuring threshold
• When in doubt, take a thread dump
• TDA (http://java.net/projects/tda/)
• Eclipse MAT (http://www.eclipse.org/mat/)
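Pulling those flags together, an illustrative JVM option set: only -XX:CMSInitiatingOccupancyFraction=60, -XX:+PrintGCDetails, and -XX:+UseLargePages come from this deck; the companion CMS/logging flags and the log path are assumptions.

```
# Illustrative, not the deck's actual configuration:
-XX:+UseConcMarkSweepGC -XX:+UseParNewGC
-XX:CMSInitiatingOccupancyFraction=60 -XX:+UseCMSInitiatingOccupancyOnly
-XX:+UseLargePages
-XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/var/log/cassandra/gc.log
```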
156. Looking Forward
• Cassandra is a great hammer but not everything is a nail
• Coprocessors would be awesome (hint hint)
• Still spend too much time worrying about GC
• Glad to see the ecosystem around the product evolving
• CQL
• Pig
• Brisk
• Guardedly optimistic about off-heap data management
157. Thanks to
• jbellis, driftx
• Datastax
• Whoever wrote TDA
• SAP
158. Thanks!
• Urban Airship: http://urbanairship.com/
• We’re hiring! http://urbanairship.com/company/jobs/
• Me @eonnen or erik at