Breakthrough OLAP performance with Cassandra and Spark
Find out about breakthrough architectures for fast OLAP performance querying Cassandra data with Apache Spark, including a new open source project, FiloDB.

1. Breakthrough OLAP Performance with Cassandra and Spark. Evan Chan, August 2015.
2. Who am I? Distinguished Engineer, Tuplejump. @evanfchan, http://velvia.github.io. User of and contributor to Spark since 0.9, Cassandra since 0.6. Co-creator and maintainer of the Spark Job Server.
3. About Tuplejump. Tuplejump is a big data technology leader providing solutions for rapid insights from data. Calliope - the first Spark-Cassandra integration. Stargate - an open source Lucene indexer for Cassandra. SnackFS - an open source HDFS-compatible file system for Cassandra.
4. Didn't I attend the same talk last year? Similar title, but mostly new material. Will reveal new open source projects! :)
5. Problem Space. Need analytical database / queries on structured big data. Something SQL-like, very flexible and fast. Pre-aggregation too limiting. Fast data / constant updates. Ideally, want my queries to run over fresh data too.
6. Example: Video analytics. Typical collection and analysis of consumer events. 3 billion new events every day. Video publishers want updated stats, the sooner the better. Pre-aggregation only enables simple dashboard UIs. What if one wants to offer more advanced analysis, or a generic data query API? E.g., top countries filtered by device type, OS, browser.
7. Requirements. Scalable - rules out PostgreSQL, etc. Easy to update and ingest new data. Not traditional OLAP cubes - that's not what I'm talking about. Very fast for analytical queries - OLAP not OLTP. Extremely flexible queries. Preferably open source.
8. Parquet. Widely used, lots of support (Spark, Impala, etc.). Problem: Parquet is read-optimized, not easy to use for writes. Cannot support idempotent writes. Optimized for writing very large chunks, not small updates. Not suitable for time series, IoT, etc. Often needs multiple passes of jobs for compaction of small files, deduplication, etc. People really want a database-like abstraction, not a file format!
9. Turns out this has been solved before! Even Facebook uses Vertica.
10. MPP Databases. Easy writes plus fast queries, with constant transfers. Automatic query optimization by storing intermediate query projections. See the C-Store paper - Stonebraker, et al. (Brown Univ).
11. What's wrong with MPP Databases? Closed source. $$$. Usually don't scale horizontally that well (or cost is prohibitive).
12. Cassandra. Horizontally scalable. Very flexible data modelling (lists, sets, custom data types). Easy to operate. Perfect for ingestion of real time / machine data. Best of breed storage technology, huge community. BUT: Simple queries only. OLTP-oriented.
13. Apache Spark. Horizontally scalable, in-memory queries. Functional Scala transforms - map, filter, groupBy, sort, etc. SQL, machine learning, streaming, graph, R, many more plugins all on ONE platform - feed your SQL results to a logistic regression, easy! Huge number of connectors with every single storage technology.
14. Spark provides the missing fast, deep analytics piece of Cassandra!
15. Spark and Cassandra OLAP Architectures.
16. Separate Storage and Query Layers. Combine best of breed storage and query platforms. Take full advantage of evolution of each. Storage handles replication for availability. Query can replicate data for scaling read concurrency - independent!
17. Spark as Cassandra's Cache.
18. Spark SQL. Appeared with Spark 1.0. In-memory columnar store. Parquet, JSON, Cassandra connector, Avro, many more. SQL as well as DataFrames (Pandas-style) API. Indexing integrated into data sources (e.g. C* secondary indexes). Write custom functions in Scala... take that, Hive UDFs! Integrates well with MLbase, Scala/Java/Python.
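The slide mentions writing custom functions in Scala for Spark SQL. A minimal hedged sketch, assuming a Spark 1.3+ SQLContext and the gdelt table registered as in the next slide (the function and column names are illustrative, not from the talk):

    // Register a plain Scala function as a SQL UDF, then call it from SQL.
    sqlContext.udf.register("upperCode", (code: String) => code.toUpperCase)
    sqlContext.sql(
      "SELECT upperCode(actor1countrycode), count(*) FROM gdelt " +
      "GROUP BY upperCode(actor1countrycode)"
    ).show()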
19. Connecting Spark to Cassandra. Datastax's Spark Cassandra Connector. Tuplejump's Calliope. Get started in one line with spark-shell!

    bin/spark-shell --packages com.datastax.spark:spark-cassandra-connector_2.10:1.4.0-M3 --conf spark.cassandra.connection.host=127.0.0.1
20. Caching a SQL Table from Cassandra. DataFrames support in Cassandra Connector 1.4.0 (and 1.3.0):

    val sqlContext = new org.apache.spark.sql.SQLContext(sc)
    val df = sqlContext.read
      .format("org.apache.spark.sql.cassandra")
      .options(Map("table" -> "gdelt", "keyspace" -> "test"))
      .load()
    df.registerTempTable("gdelt")
    sqlContext.cacheTable("gdelt")
    sqlContext.sql("SELECT count(monthyear) FROM gdelt").show()

Spark does no caching by default - you will always be reading from C*!
21. How Spark SQL's Table Caching Works.
22. Spark Cached Tables can be Really Fast. GDELT dataset, 4 million rows, 60 columns, localhost:

    Method     Time (secs)
    Uncached   317
    Cached     0.38

Almost a 1000x speedup! On an 8-node EC2 c3.XL cluster, 117 million rows, can run common queries in 1-2 seconds against the cached dataset.
23. Tuning Connector Partitioning. spark.cassandra.input.split.size. Guideline: one split per partition, one partition per CPU core. Much more parallelism won't speed up job much, but will starve other C* requests.
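A hedged sketch of the tuning knob above, assuming Spark Cassandra Connector 1.4 where spark.cassandra.input.split.size is roughly the number of C* partitions packed into one Spark partition (the value below is illustrative):

    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setAppName("gdelt-olap")
      .set("spark.cassandra.connection.host", "127.0.0.1")
      // Fewer C* partitions per split => more Spark partitions (more parallelism);
      // aim for roughly one Spark partition per CPU core.
      .set("spark.cassandra.input.split.size", "10000")
    val sc = new SparkContext(conf)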
24. Lesson #1: Take Advantage of Spark Caching!
25. Problems with Cached Tables. Still have to read the data from Cassandra first, which is slow. Amount of RAM: your entire data + extra for conversion to cached table. Cached tables only live in Spark executors - by default tied to single context - not HA - once any executor dies, must re-read data from C*. Caching takes time: convert from RDD[Row] to compressed columnar format. Cannot easily combine new RDD[Row] with cached tables (and keep speed).
26. Problems with Cached Tables. If you don't have enough RAM, Spark can cache your tables partly to disk. This is still way, way faster than scanning an entire C* table. However, cached tables are still tied to a single Spark context/application. Also: rdd.cache() is NOT the same as SQLContext's cacheTable!
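A small sketch of that last point, assuming the gdelt table from slide 20 (illustrative only): rdd.cache() stores deserialized Row objects, while cacheTable builds Spark SQL's compressed, column-pruned in-memory cache.

    // NOT the same thing:
    val rowRdd = sqlContext.sql("SELECT * FROM gdelt").rdd
    rowRdd.cache()                    // caches JVM Row objects; no columnar compression

    sqlContext.cacheTable("gdelt")    // compressed columnar cache, prunes columns per query
    sqlContext.sql("SELECT count(monthyear) FROM gdelt").show()
    sqlContext.uncacheTable("gdelt")  // release the columnar cache when done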
27. What about C* Secondary Indexing? Spark-Cassandra Connector and Calliope can both reduce I/O by using Cassandra secondary indices. Does this work with caching? No, not really, because only the filtered rows would be cached. Subsequent queries against this limited cached table would not give you expected results.
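For reference, a hedged sketch of the connector's predicate pushdown, assuming the DataStax connector RDD API; the column used in where() must be a clustering column or have a C* secondary index, and the names here are illustrative:

    import com.datastax.spark.connector._

    val filtered = sc.cassandraTable("test", "gdelt")
      .select("globaleventid", "monthyear", "actor1name")   // only these columns are fetched
      .where("monthyear = ?", 201508)                       // pushed down to CQL
    filtered.count()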
28. Tachyon Off-Heap Caching.
29. Intro to Tachyon. Tachyon: an in-memory cache for HDFS and other binary data sources. Keeps data off-heap, so multiple Spark applications/executors can share data. Solves the HA problem for data.
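A hedged sketch of what off-heap caching looked like in Spark 1.x, where StorageLevel.OFF_HEAP was backed by Tachyon (the Tachyon URL config key changed across 1.x releases, so check the docs for your version):

    import org.apache.spark.storage.StorageLevel
    import com.datastax.spark.connector._

    val events = sc.cassandraTable("test", "gdelt")
    events.persist(StorageLevel.OFF_HEAP)   // serialized blocks live in Tachyon, outside the JVM heap
    events.count()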
30. Wait, wait, wait! What am I caching exactly? Tachyon is designed for caching files or binary blobs. A serialized form of CassandraRow/CassandraRDD? Raw output from the Cassandra driver? What you really want is this: Cassandra SSTable -> Tachyon (as row cache) -> CQL -> Spark.
31. "Bad programmers worry about the code. Good programmers worry about data structures." - Linus Torvalds. Are we really thinking holistically about data modelling, caching, and how it affects the entire systems architecture?
32. Efficient Columnar Storage in Cassandra. Wait, I thought Cassandra was columnar?
33. How Cassandra stores your CQL Tables. Suppose you had this CQL table:

    CREATE TABLE (
      department text,
      empId text,
      first text,
      last text,
      age int,
      PRIMARY KEY (department, empId)
    );
34. How Cassandra stores your CQL Tables.

    PartitionKey  01:first  01:last   01:age  02:first  02:last   02:age
    Sales         Bob       Jones     34      Susan     O'Connor  40
    Engineering   Dilbert   P         ?       Dogbert   Dog       1

Each row is stored contiguously. All columns in row 2 come after row 1. To analyze only age, C* still has to read every field.
35. Cassandra is really a row-based, OLTP-oriented datastore. Unless you know how to use it otherwise :)
36. "The traditional row-based data storage approach is dead." - Michael Stonebraker
37. Columnar Storage (Memory).

    Name column
      row index:  0  1
      value:      0  1    (dictionary codes)
    Dictionary: {0: "Barak", 1: "Hillary"}

    Age column
      row index:  0  1
      value:      46 66
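A tiny self-contained sketch of the dictionary encoding shown above (illustrative Scala, not FiloDB/Filo code):

    val names = Seq("Barak", "Hillary", "Barak", "Hillary")

    // Dictionary: each distinct string stored once, mapped to a small integer code
    val dict: Map[String, Int] = names.distinct.zipWithIndex.toMap
    val codes: Seq[Int] = names.map(dict)          // the Name column stores only the codes

    val reverse: Map[Int, String] = dict.map(_.swap)
    println(codes.map(reverse))                    // decode back to the original strings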
38. Columnar Storage (Cassandra). Review: each physical row in Cassandra (e.g. a "partition key") stores its columns together on disk.

    Schema CF:
      Rowkey  Type
      Name    StringDict
      Age     Int

    Data CF:
      Rowkey  0   1
      Name    0   1
      Age     46  66
39. Columnar Format solves I/O. Compression: dictionary compression - HUGE savings for low-cardinality string columns; RLE, other techniques. Reduce I/O: only columns needed for query are loaded from disk; batch multiple rows in one cell for efficiency (avoid cluster key overhead).
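As a companion to the dictionary sketch above, a minimal run-length encoding (RLE) sketch; it pays off for sorted or low-cardinality columns (illustrative only):

    // Collapse consecutive repeats into (value, runLength) pairs.
    def rle[T](values: Seq[T]): Seq[(T, Int)] =
      values.foldLeft(List.empty[(T, Int)]) {
        case ((v, n) :: rest, x) if v == x => (v, n + 1) :: rest   // extend the current run
        case (acc, x)                      => (x, 1) :: acc        // start a new run
      }.reverse

    rle(Seq("Sales", "Sales", "Sales", "Eng", "Eng"))
    // => List(("Sales", 3), ("Eng", 2))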
40. Columnar Format solves Caching. Use the same format on disk, in cache, in memory scan. Caching works a lot better when the cached object is the same!! No data format dissonance means bringing in new bits of data and combining with existing cached data is seamless.
41. So, why isn't everybody doing this? No columnar storage format designed to work with NoSQL stores. Efficient conversion to/from columnar format is a hard problem. Most infrastructure is still row oriented: Spark SQL/DataFrames based on RDD[Row]; Spark Catalyst is a row-oriented query parser.
42. "All hard work leads to profit, but mere talk leads to poverty." - Proverbs 14:23
43. Columnar Storage Performance Study. http://github.com/velvia/cassandra-gdelt
44. GDELT Dataset (Global Database of Events, Language, and Tone). 1979 to now. 60 columns, 250 million+ rows, 250GB+. Let's compare Cassandra I/O only, no caching or Spark.
45. The scenarios. 1. Narrow table - CQL table with one row per partition key. 2. Wide table - wide rows with 10,000 logical rows per partition key. 3. Columnar layout - 1000 rows per columnar chunk, wide rows, with dictionary compression. First 4 million rows, localhost, SSD, C* 2.0.9, LZ4 compression. Compaction performed before read benchmarks.
46. Query and ingest times.

    Scenario      Ingest     Read all columns  Read one column
    Narrow table  1927 sec   505 sec           504 sec
    Wide table    3897 sec   365 sec           351 sec
    Columnar      93 sec     8.6 sec           0.23 sec

On reads, using a columnar format is up to 2190x faster, while ingestion is 20-40x faster. Of course, real life perf gains will depend heavily on query, table width, etc. etc.
47. Disk space usage.

    Scenario      Disk used
    Narrow table  2.7 GB
    Wide table    1.6 GB
    Columnar      0.34 GB

The disk space usage helps explain some of the numbers.
48. Towards Extreme Query Performance.
49. The filo project is a binary data vector library designed for extreme read performance with minimal deserialization costs. http://github.com/velvia/filo. Designed for NoSQL, not a file format. Random or linear access, on or off heap. Missing value support. Scala only, but cross-platform support possible.
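To make "minimal deserialization costs" concrete, here is a conceptual sketch using plain java.nio (not the Filo API): values are read straight out of a binary buffer, with no per-element object allocation:

    import java.nio.{ByteBuffer, ByteOrder}

    val numValues = 1000
    val buf = ByteBuffer.allocateDirect(numValues * 4).order(ByteOrder.LITTLE_ENDIAN)
    (0 until numValues).foreach(i => buf.putInt(i * 4, i))   // fill the binary "vector"

    var total = 0L
    var i = 0
    while (i < numValues) {          // tight loop: no boxing, no deserialization
      total += buf.getInt(i * 4)
      i += 1
    }
    println(total)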
50. What is the ceiling? This Scala loop can read integers from a binary Filo blob at a rate of 2 billion integers per second - single threaded:

    def sumAllInts(): Int = {
      var total = 0
      for { i <- 0 until numValues optimized } {
        total += sc(i)
      }
      total
    }
51. Vectorization of Spark Queries. Project Tungsten: process many elements from the same column at once, keep data in L1/L2 cache. Coming in Spark 1.4 through 1.6.
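A toy sketch of the idea (not Tungsten itself): scanning one primitive column in a tight loop keeps data contiguous and hot in L1/L2 cache, versus chasing row objects around the heap:

    case class Person(first: String, last: String, age: Int)
    val rows: Array[Person] = Array.fill(1000000)(Person("a", "b", 40))
    val ages: Array[Int]    = rows.map(_.age)      // the age column, stored contiguously

    // Row-at-a-time: touches whole objects scattered on the heap
    val sumRows = rows.foldLeft(0L)((acc, p) => acc + p.age)

    // Column-at-a-time: sequential scan over a primitive array
    var sumCol = 0L
    var i = 0
    while (i < ages.length) { sumCol += ages(i); i += 1 }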
52. Hot Column Caching in Tachyon. Has a "table" feature, originally designed for Shark. Keep hot columnar chunks in shared off-heap memory for fast access.
53. Introducing FiloDB. http://github.com/velvia/FiloDB
54. What's in the name? Rich sweet layers of distributed, versioned database goodness.
55. Distributed. Apache Cassandra. Scale out with no SPOF. Cross-datacenter replication. Proven storage and database technology.
56. Versioned. Incrementally add a column or a few rows as a new version. Easily control what versions to query. Roll back changes inexpensively. Stream out new versions as continuous queries :)
57. Columnar. Parquet-style storage layout. Retrieve select columns and minimize I/O for OLAP queries. Add a new column without having to copy the whole table. Vectorization and lazy/zero serialization for extreme efficiency.
58. 100% Reactive. Built completely on the Typesafe Platform: Scala 2.10 and SBT; Spark (including custom data source); Akka Actors for rational scale-out concurrency; Futures for I/O; Phantom Cassandra client for reactive, type-safe C* I/O; Typesafe Config.
59. Spark SQL Queries!

    SELECT first, last, age FROM customers
    WHERE _version > 3 AND age < 40 LIMIT 100

Read from and write to Spark DataFrames. Append/merge to a FiloDB table from Spark Streaming.
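A hedged sketch of driving those queries from Scala, assuming FiloDB's Spark data source of that era is registered as "filodb.spark" with a "dataset" option (check the FiloDB README for exact option names; the customers dataset and someDf below are illustrative):

    val customers = sqlContext.read
      .format("filodb.spark")
      .option("dataset", "customers")
      .load()
    customers.registerTempTable("customers")

    sqlContext.sql(
      "SELECT first, last, age FROM customers WHERE _version > 3 AND age < 40 LIMIT 100"
    ).show()

    // Appending a DataFrame back into FiloDB (options needed to create a new dataset omitted)
    someDf.write
      .format("filodb.spark")
      .option("dataset", "customers")
      .mode(org.apache.spark.sql.SaveMode.Append)
      .save()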
60. FiloDB vs Parquet. Comparable read performance - with lots of space to improve. Assuming co-located Spark and Cassandra. On localhost, both subsecond for simple queries (GDELT 1979-1984). FiloDB has more room to grow - due to hot column caching and much less deserialization overhead. Lower memory requirement due to much smaller block sizes. Much better fit for IoT / machine / time-series applications. Limited support for types - array / set / map support not there, but will be added later.
61. Where FiloDB Fits In. Use regular C* denormalized tables for OLTP and single-key lookups. Use FiloDB for the remaining ad-hoc or more complex analytical queries. Simplify your analytics infrastructure! No need to export to Hadoop/Parquet/data warehouse. Use Spark and C* for both OLAP and OLTP! Perform ad-hoc OLAP analysis of your time-series, IoT data.
62. Simplify your Lambda Architecture... (https://www.mapr.com/developercentral/lambda-architecture)
63. With Spark, Cassandra, and FiloDB. Ma, where did all the components go? You mean I don't have to deal with Hadoop? Use Cassandra as a front end to store IoT data first.
64. Exactly-Once Ingestion from Kafka. New rows appended via Kafka. Writes are idempotent - no need to dedup! Converted to columnar chunks on ingest and stored in C*. Only necessary columnar chunks are read into Spark for minimal I/O.
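A hedged sketch of that ingestion path with Spark 1.4-era streaming APIs; the topic, JSON payload, and FiloDB options are illustrative assumptions. Because appends are keyed and idempotent, replaying a micro-batch after a failure does not create duplicate rows:

    import kafka.serializer.StringDecoder
    import org.apache.spark.streaming.{Seconds, StreamingContext}
    import org.apache.spark.streaming.kafka.KafkaUtils
    import org.apache.spark.sql.SaveMode

    val ssc = new StreamingContext(sc, Seconds(10))
    val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
      ssc, Map("metadata.broker.list" -> "localhost:9092"), Set("events"))

    stream.foreachRDD { rdd =>
      val df = sqlContext.read.json(rdd.map(_._2))   // parse each event from its JSON value
      df.write
        .format("filodb.spark")                      // idempotent append keyed by row key
        .option("dataset", "events")
        .mode(SaveMode.Append)
        .save()
    }
    ssc.start()
    ssc.awaitTermination()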
65. You can help! Send me your use cases for OLAP on Cassandra and Spark. Especially IoT and Geospatial. Email if you want to contribute.
66. Thanks... to the entire OSS community, but in particular: Lee Mighdoll, Nest/Google; Rohit Rai and Satya B., Tuplejump; my colleagues at Socrata. "If you want to go fast, go alone. If you want to go far, go together." -- African proverb
67. DEMO TIME. GDELT: Regular C* Tables vs FiloDB.
68. Extra Slides.
69. "When in doubt, use brute force." - Ken Thompson
70. Automatic Columnar Conversion using Custom Indexes. Write to Cassandra as you normally do. A custom indexer takes changes, then merges and compacts them into columnar chunks behind the scenes.
71. Implementing Lambda is Hard. Use real-time pipeline backed by a KV store for new updates. Lots of moving parts: key-value store, real time sys, batch, etc. Need to run similar code in two places. Still need to deal with ingesting data to Parquet/HDFS. Need to reconcile queries against two different places.
