
Multi-dimension aggregations using Spark and DataFrames


How Totango uses Apache Spark DataFrames to perform hundreds of aggregations at scale for Customer Success analytics



  1. Multidimensional Aggregations using Spark and DataFrames
     2015-11-10, Romi Kuntsman, Senior Big Data Engineer
  2. About me
     • Leading adoption of Apache Spark in Totango
     • Working with Spark for 1.5 years, since version 1.0
     • Passionate about actionable big data analytics
     • Working with web scale and cloud since 2008
     • Previously: Outbrain, Foresight, RockeTier, Mamram
     • B.Sc. in Bioinformatics from Open University
     • LinkedIn: https://il.linkedin.com/in/romik
     • email: romi@totango.com
  3. Agenda
     • Totango Data Flow Overview
     • Apache Spark DataFrames Introduction
     • Merging Multiple Results Efficiently
     • Open issues and questions
  4. Data Flow Overview
     “Numbers have an important story to tell. They rely on you to give them a voice.” – Stephen Few
  5. Let's talk about aggregations
     You've all done this...
     SELECT module, count(*) FROM activities GROUP BY module
  6. Aggregations with big data
     You've probably done or seen this before as well...
  7. Life isn't so simple
  8. Multiple levels of calculations
  9. Different points of view
     • First level aggregations (across last 7, 14, 30 days etc.)
       – Counts (per account, activity, module, user etc.)
       – Distinct counts (unique users in module etc.)
       – Sessions (multiple activities grouped by time proximity)
       – Activity days (how many days had any activity)
     • Higher level analytics:
       – Engagement Score (overall activity compared to others)
       – Change Metrics (how activity changes over time)
       – Account Health (good, average or poor)
     • And more...
  10. What do we need
     • An easy way to develop new aggregations
     • No boilerplate code, just business logic
     • Scalable and distributed
     • Accurate results (often underestimated)
     • Fast (short batch, but not realtime in this case)
     • Idempotent (same results on every run on the same input)
     • Multi-tenant (same computations on isolated datasets)
  11. Spark DataFrames
     “Simple things should be simple, complex things should be possible.” – Alan Kay
  12. Spark DataFrames
     • Table-like abstraction on top of Big Data
     • Able to scale from kilobytes to petabytes, node to cluster
     • Transformations available in code or SQL
     • User defined functions can add columns
     • Actively developed optimizer
     • Spark 1.3 (March 2015) - initially released
     • Spark 1.4 (June 2015) - mature and usable
     • Spark 1.5 (September 2015) - performance optimized
     • Spark 1.6 (not yet released) - more optimizations
  13. Look ma, no map reduce!
     • module counts:
       – events.groupBy("module").count()
     • module unique users:
       – events.groupBy("module", "user").count().groupBy("module").count()
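A minimal runnable Java sketch of these two aggregations, assuming an events DataFrame with module and user columns as on the slide; the Parquet path and app name are illustrative placeholders:

```java
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.SQLContext;

public class ModuleCounts {
    public static void main(String[] args) {
        JavaSparkContext sc = new JavaSparkContext(
                new SparkConf().setAppName("module-counts"));
        SQLContext sqlContext = new SQLContext(sc);

        // events has at least the columns: module, user
        DataFrame events = sqlContext.read().parquet("/path/to/events");

        // Count of events per module
        DataFrame moduleCounts = events.groupBy("module").count();

        // Distinct users per module: collapse (module, user) pairs first,
        // then count the surviving rows per module
        DataFrame moduleUniqueUsers = events.groupBy("module", "user").count()
                .groupBy("module").count();

        moduleCounts.show();
        moduleUniqueUsers.show();
    }
}
```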
  14. User defined function
     • activity days:
       – sqlContext.udf().register("date_to_days", new DateToDays(), ...)
       – eventsWithDays = sqlContext.sql("SELECT *, date_to_days(date) AS day FROM events")
       – eventsWithDays.groupBy("module", "day").count()
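A hedged Java sketch of the same idea with Spark 1.x's UDF registration, assuming the date column holds epoch milliseconds; the column semantics and day-number encoding are assumptions, not taken from the deck:

```java
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.SQLContext;
import org.apache.spark.sql.api.java.UDF1;
import org.apache.spark.sql.types.DataTypes;

import java.util.concurrent.TimeUnit;

public class ActivityDays {
    public static void registerAndUse(SQLContext sqlContext, DataFrame events) {
        // Convert an epoch-millis timestamp to a day number, so events can
        // be grouped per calendar day (assumed input format)
        sqlContext.udf().register("date_to_days", new UDF1<Long, Integer>() {
            @Override
            public Integer call(Long epochMillis) {
                return (int) TimeUnit.MILLISECONDS.toDays(epochMillis);
            }
        }, DataTypes.IntegerType);

        events.registerTempTable("events");
        DataFrame eventsWithDays = sqlContext.sql(
                "SELECT *, date_to_days(date) AS day FROM events");

        // Activity days: how many events each module had per day
        eventsWithDays.groupBy("module", "day").count().show();
    }
}
```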
  15. RDDs interoperate with DataFrames
     Note: sometimes we do need to go from a DataFrame to Java and back to accomplish some things:
     myRdd = dataframe.toJavaRDD().map(...).groupBy(...)
     newDataFrame = sqlContext.createDataFrame(myRdd, FooBar.class)
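A self-contained sketch of that round trip; the FooBar bean here is hypothetical, standing in for whatever the slide's FooBar actually holds, and the column positions read from the Row are assumptions:

```java
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SQLContext;

import java.io.Serializable;

public class RoundTrip {
    // Hypothetical bean; createDataFrame infers the schema from its getters
    public static class FooBar implements Serializable {
        private String module;
        private long count;
        public FooBar() {}
        public FooBar(String module, long count) { this.module = module; this.count = count; }
        public String getModule() { return module; }
        public void setModule(String module) { this.module = module; }
        public long getCount() { return count; }
        public void setCount(long count) { this.count = count; }
    }

    public static DataFrame roundTrip(SQLContext sqlContext, DataFrame df) {
        // Drop down to the RDD API for logic the DataFrame API can't express...
        JavaRDD<FooBar> rdd = df.toJavaRDD().map(new Function<Row, FooBar>() {
            @Override
            public FooBar call(Row row) {
                // Assumes column 0 is the module name and column 1 a count
                return new FooBar(row.getString(0), row.getLong(1));
            }
        });
        // ...then come back, letting Spark infer the schema from the bean
        return sqlContext.createDataFrame(rdd, FooBar.class);
    }
}
```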
  16. DataFrames vs. RDDs
     • Advantages: speed, ease of development
     • Disadvantages: less flexible, limited aggregations, strict simple schema
     • When going from a DataFrame to an RDD: toJavaRDD forces computation, and the Catalyst optimizer is lost in the transition
     • Future: may be replaced by UDAFs (user defined aggregate functions) in upcoming Spark releases
  17. Merge Multiple Results
  18. Merge the results
     We've calculated aggregations across various dimensions. Now it's time to collect them, grouped by entity (account, user, etc.).
  19. Partitioning scheme
     • RDD<Value> - not partitioned by key (there is no key...)
       → a union of many RDD results will shuffle everything
     • DataFrames are not partitioned by column (to be fixed...)
       → a union of many DataFrame results will shuffle everything
     • PairRDD<Key,Value> with partitionBy(partitioner) is partitioned
       → a union of many PairRDDs which used the same partitioner will be partitioned together!
     • Partitioner interface (the default HashPartitioner fits most cases):
       – int getPartition(key)
       – int numPartitions()
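A minimal Java sketch of this pattern, with two hypothetical per-entity result RDDs keyed by account id; the partition count of 64 is an arbitrary illustrative choice:

```java
import org.apache.spark.HashPartitioner;
import org.apache.spark.Partitioner;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.function.Function2;

public class MergeResults {
    public static JavaPairRDD<String, Long> merge(JavaPairRDD<String, Long> counts,
                                                  JavaPairRDD<String, Long> uniqueUsers) {
        // One shared partitioner: the same key always lands in the same partition
        Partitioner partitioner = new HashPartitioner(64);

        JavaPairRDD<String, Long> a = counts.partitionBy(partitioner);
        JavaPairRDD<String, Long> b = uniqueUsers.partitionBy(partitioner);

        // Both sides are co-partitioned, so the union keeps the partitioning
        // and the reduce does not trigger a full shuffle
        return a.union(b).reduceByKey(partitioner, new Function2<Long, Long, Long>() {
            @Override
            public Long call(Long x, Long y) {
                return x + y;
            }
        });
    }
}
```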
  20. Number of partitions
     • Processing always happens in chunks that must fit into one executor's memory
     • Too few partitions - some may not fit, and you get an OOM
     • Too many partitions - many small steps and a long overall runtime
     • In a multi-tenant environment, you have to find a formula by input size that works for everyone, from the smallest tenant to the largest
     • When re-partitioning, take note of the data being reshuffled
     • No magic formula for the optimal number of partitions :-(
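There is no magic formula, but one plausible shape for a size-based heuristic is sketched below; the target partition size and the clamping bounds are illustrative assumptions, not Totango's actual values:

```java
public final class Partitioning {
    // Illustrative assumption: aim for roughly 128 MB per partition
    private static final long TARGET_BYTES_PER_PARTITION = 128L * 1024 * 1024;
    private static final int MIN_PARTITIONS = 4;
    private static final int MAX_PARTITIONS = 2048;

    // Derive a partition count from the tenant's input size, clamped so the
    // smallest and largest tenants both stay within workable bounds
    public static int forInputSize(long inputBytes) {
        long raw = inputBytes / TARGET_BYTES_PER_PARTITION + 1;
        return (int) Math.min(MAX_PARTITIONS, Math.max(MIN_PARTITIONS, raw));
    }
}
```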
  21. Name your stages
     • Stages can be named with sparkContext.setCallSite(name) (set per thread)
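For example, assuming sc is the JavaSparkContext and events the DataFrame from the earlier sketches (the label text is made up):

```java
// Label the stages created on this thread; the label shows up in the Spark UI
sc.setCallSite("engagement-score: per-module counts");
DataFrame moduleCounts = events.groupBy("module").count();
moduleCounts.count(); // action; its stages appear under the label above
sc.clearCallSite();
```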
  22. To cache or not to cache
     • With RDDs, you cache at every intersection
     • With DataFrames, it's best to cache the input and then let the optimizer plan
     • Cache when dividing the input into subsections (like time slices)
     • When caching a DataFrame, you need to force a computation; otherwise only the LogicalPlan is cached and the optimizer decides what to do (for example, when we cache a time-based data subset)
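A short sketch of forcing materialization for a time-slice cache, reusing the assumed day/module/user columns from the earlier sketches:

```java
// cache() only marks the plan for caching; an action is needed to materialize it
DataFrame slice = events.filter("day >= 100 AND day < 107").cache();
slice.count(); // force computation so the cached data actually exists

// These aggregations now read the time slice from the cache instead of
// recomputing it from the raw input
DataFrame perModule = slice.groupBy("module").count();
DataFrame perUser = slice.groupBy("user").count();
```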
  23. More Spark gotchas...
     • When loading from Parquet, you can't partition by column hash, only by column value
     • Use Kryo for serialization (and register all classes)
     • Use the standalone shuffle service to avoid losing shuffles when a worker crashes (like in an OutOfMemory)
     We'll publish separate posts about these and others on our blog: http://labs.totango.com/
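A hedged configuration sketch covering the last two bullets; the app name and class list are illustrative, and FooBar is the hypothetical bean from the slide 15 sketch:

```java
import org.apache.spark.SparkConf;

SparkConf conf = new SparkConf()
        .setAppName("aggregations")
        // Kryo is faster and more compact than default Java serialization
        .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
        // Fail fast when a class was not registered, instead of silently
        // falling back to slower serialization
        .set("spark.kryo.registrationRequired", "true")
        // The external shuffle service keeps shuffle files available even
        // when an executor dies (e.g. from an OutOfMemoryError)
        .set("spark.shuffle.service.enabled", "true");
// Register every class that crosses the wire
conf.registerKryoClasses(new Class<?>[]{ FooBar.class });
```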
  24. Questions?
     • Check out our blog: http://labs.totango.com/
     • We're hiring! http://www.totango.com/jobs/
       – Backend / Big Data Engineers
       – DevOps
       – Application / FrontEnd
     • Stay in touch
       – romi@totango.com
       – https://il.linkedin.com/in/romik
