M7 and Apache Drill, Michael Hausenblas


  1. The MapR Big Data platform: M7 and Apache Drill. Michael Hausenblas, Chief Data Engineer EMEA, MapR. HUG France, Paris, 2013-06-05
  2. Which workloads do you encounter in your environment? (http://www.flickr.com/photos/kevinomara/2866648330/ licensed under CC BY-NC-ND 2.0)
  3. Batch processing … for recurring tasks such as large-scale data mining, ETL offloading/data-warehousing → for the batch layer in Lambda architecture
  4. OLTP … user-facing eCommerce transactions, real-time messaging at scale (FB), time-series processing, etc. → for the serving layer in Lambda architecture
  5. Stream processing … in order to handle stream sources such as social media feeds or sensor data (mobile phones, RFID, weather stations, etc.) → for the speed layer in Lambda architecture
  6. Search/Information Retrieval … retrieval of items from unstructured documents (plain text, etc.), semi-structured data formats (JSON, etc.), as well as data stores (MongoDB, CouchDB, etc.)
  7. MapR Big Data platform
  8. MAPR'S M7: HOW WE MADE HBASE ENTERPRISE-GRADE
  9. HBase Adoption
     • Used by 45% of Hadoop users
     • Database operations: large-scale key-value store, blob store, lightweight OLTP
     • Real-time analytics: logistics, shopping cart, billing, auction engine, log analysis
  10. HBase Issues
     • Reliability: compactions disrupt operations; very slow crash recovery; unreliable splitting
     • Business continuity: common hardware/software issues cause downtime; administration requires downtime; no point-in-time recovery; complex backup process
     • Performance: many bottlenecks result in low throughput; limited data locality; limited number of tables
     • Manageability: compactions, splits and merges must be done manually (in reality); basic operations like backup or table rename are complex
  11. M7: Enterprise-grade HBase (easy, dependable, fast): no RegionServers, no compactions, consistent low latency, snapshots, mirroring, instant recovery, no manual processes
  12. Unified Namespace
  13. Unified Namespace
      $ pwd
      /mapr/default/user/dave
      $ ls
      file1 file2 table1 table2
      $ hbase shell
      hbase(main):003:0> create '/user/dave/table3', 'cf1', 'cf2', 'cf3'
      0 row(s) in 0.1570 seconds
      $ ls
      file1 file2 table1 table2 table3
      $ hadoop fs -ls /user/dave
      Found 5 items
      -rw-r--r--   3 mapr mapr  16 2012-09-28 08:34 /user/dave/file1
      -rw-r--r--   3 mapr mapr  22 2012-09-28 08:34 /user/dave/file2
      trwxr-xr-x   3 mapr mapr   2 2012-09-28 08:32 /user/dave/table1
      trwxr-xr-x   3 mapr mapr   2 2012-09-28 08:33 /user/dave/table2
      trwxr-xr-x   3 mapr mapr   2 2012-09-28 08:38 /user/dave/table3
  14. Fewer Layers: MapR M7
  15. Eliminating Compactions

                              HBase-style                       LevelDB-style     M7
      Examples                BigTable, HBase, Cassandra, Riak  Cassandra, Riak   -
      WAF                     Low                               High              Low
      RAF                     High                              Low               Low
      I/O storms              Yes                               No                No
      Disk space overhead     High (2x)                         Low               Low
      Skewed data handling    Bad                               Good              Good
      Rewrite large values    Yes                               Yes               No

      Write-amplification factor (WAF): the ratio between writes to disk and application writes. Note that data must be rewritten in every indexed structure.
      Read-amplification factor (RAF): the ratio between reads from disk and application reads.
      Skewed data handling: when inserting values with similar keys (e.g., increasing keys, a trending topic), do other values also need to be rewritten?
  16. Portability (in both directions)
      • HBase applications work as-is with M7: no need to recompile, no vendor lock-in (see the client sketch below).
      • You can also run Apache HBase on an M7 cluster (recommended during a migration). Table names with a slash (/) are in M7, table names without a slash are in Apache HBase (this can be overridden to allow table-by-table migration).
      • Use the standard CopyTable tool to copy a table from HBase to M7 and vice versa:
        hbase org.apache.hadoop.hbase.mapreduce.CopyTable --new.name=/user/tshiran/mytable mytable
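      To make the portability point concrete: since M7 exposes the standard HBase API, a plain HBase client program should run unchanged once it is pointed at a path-style table name. Below is a minimal sketch against the HBase 0.94-era client API; the table path /user/dave/sales and the cell contents are made up for illustration and are not taken from the slides.

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.hbase.HBaseConfiguration;
      import org.apache.hadoop.hbase.client.Get;
      import org.apache.hadoop.hbase.client.HTable;
      import org.apache.hadoop.hbase.client.Put;
      import org.apache.hadoop.hbase.client.Result;
      import org.apache.hadoop.hbase.util.Bytes;

      public class M7PortabilityDemo {
          public static void main(String[] args) throws Exception {
              Configuration conf = HBaseConfiguration.create();
              // A table name with a leading slash resolves to an M7 table;
              // the same code runs against Apache HBase with a plain table name.
              HTable table = new HTable(conf, "/user/dave/sales"); // hypothetical path
              Put put = new Put(Bytes.toBytes("row1"));
              put.add(Bytes.toBytes("cf1"), Bytes.toBytes("name"), Bytes.toBytes("ACME Corp"));
              table.put(put);
              Result result = table.get(new Get(Bytes.toBytes("row1")));
              System.out.println(Bytes.toString(
                  result.getValue(Bytes.toBytes("cf1"), Bytes.toBytes("name"))));
              table.close();
          }
      }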
  17. MapR Big Data platform
  18. Apache Drill: interactive, ad-hoc query at scale
  19. How to do interactive ad-hoc query at scale? (http://www.flickr.com/photos/9479603@N02/4144121838/ licensed under CC BY-NC-ND 2.0)
  20. Impala: interactive, low-latency query (?)
  21. Use Case: Marketing Campaign
      • Jane, a marketing analyst
      • Determine target segments
      • Data from different sources
  22. Use Case: Logistics
      • Supplier tracking and performance
      • Queries:
        – Shipments from supplier 'ACM' in the last 24h
        – Shipments in region 'US' not from 'ACM'

      SUPPLIER_ID  NAME                 REGION
      ACM          ACME Corp            US
      GAL          GotALot Inc          US
      BAP          Bits and Pieces Ltd  Europe
      ZUP          Zu Pli               Asia

      { "shipment": 100123, "supplier": "ACM", "timestamp": "2013-02-01", "description": "first delivery today" },
      { "shipment": 100124, "supplier": "BAP", "timestamp": "2013-02-02", "description": "hope you enjoy it" }
      …
  23. Use Case: Crime Detection
      • Online purchases
      • Fraud, bilking, etc.
      • Batch-generated overview
      • Modes: explorative, alerts
  24. Requirements
      • Support for different data sources
      • Support for different query interfaces
      • Low-latency/real-time
      • Ad-hoc queries
      • Scalable, reliable
  25. And now for something completely different …
  26. Google's Dremel (http://research.google.com/pubs/pub36632.html)
      Sergey Melnik, Andrey Gubarev, Jing Jing Long, Geoffrey Romer, Shiva Shivakumar, Matt Tolton, Theo Vassilakis, Proc. of the 36th Int'l Conf on Very Large Data Bases (2010), pp. 330-339
      "Dremel is a scalable, interactive ad-hoc query system for analysis of read-only nested data. By combining multi-level execution trees and columnar data layout, it is capable of running aggregation queries over trillion-row tables in seconds. The system scales to thousands of CPUs and petabytes of data, and has thousands of users at Google. …"
  27. Google's Dremel: multi-level execution trees, columnar data layout
  28. Google's Dremel: nested data + schema, column-striped representation, map nested data to tables
  29. Google's Dremel: experiments with datasets & query performance
  30. Back to Apache Drill …
  31. Apache Drill: key facts
      • Inspired by Google's Dremel
      • Standard SQL 2003 support
      • Pluggable data sources
      • Nested data is a first-class citizen
      • Schema is optional
      • Community driven, open, hundreds involved
  32. High-level Architecture
  33. Principled Query Execution
      • Source query: what we want to do (analyst friendly)
      • Logical plan: what we want to do (language agnostic, computer friendly)
      • Physical plan: how we want to do it (the best way we can tell)
      • Execution plan: where we want to do it
  34. Principled Query Execution
      Source Query (SQL 2003, DrQL, MongoQL, DSL) → Parser (parser API) → Logical Plan → Optimizer (topology, CF, etc.) → Physical Plan → Execution (scanner API)
      query: [
        { @id: "log", op: "sequence", do: [
          { op: "scan", source: "logs" },
          { op: "filter", condition: "x > 3" },
          …
  35. Wire-level Architecture
      • Each node runs a Drillbit: maximize data locality
      • Co-ordination, query planning, execution, etc. are distributed
  36. Wire-level Architecture
      • Curator/ZooKeeper for ephemeral cluster membership info (a registration sketch follows below)
      • Distributed cache (Hazelcast) for metadata, locality information, etc.
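      As an aside on what "ephemeral cluster membership" buys you: with Curator, a process registers an EPHEMERAL znode that ZooKeeper removes automatically when the process's session ends, so the membership list stays current without manual cleanup. This is a rough sketch only; the path layout and payload are invented for illustration and are not Drill's actual registration code.

      import org.apache.curator.framework.CuratorFramework;
      import org.apache.curator.framework.CuratorFrameworkFactory;
      import org.apache.curator.retry.ExponentialBackoffRetry;
      import org.apache.zookeeper.CreateMode;

      public class MembershipSketch {
          public static void main(String[] args) throws Exception {
              CuratorFramework client = CuratorFrameworkFactory.newClient(
                      "zkhost:2181", new ExponentialBackoffRetry(1000, 3));
              client.start();
              // EPHEMERAL: ZooKeeper deletes this znode when the session ends,
              // so surviving nodes always see an up-to-date member list.
              client.create()
                    .creatingParentsIfNeeded()
                    .withMode(CreateMode.EPHEMERAL)
                    .forPath("/demo/drillbits/node-1", "host:port".getBytes());
              // List the currently registered members (hypothetical path layout).
              System.out.println(client.getChildren().forPath("/demo/drillbits"));
              Thread.sleep(60_000); // keep the session (and the znode) alive for a while
          }
      }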
  37. Wire-level Architecture
      • The originating Drillbit acts as foreman: it manages query execution, scheduling, locality information, etc.
      • Streaming data communication, avoiding SerDe
  38. Wire-level Architecture
      The foreman becomes the root of the multi-level execution tree; the leaves activate their storage engine interface.
  39. On the shoulders of giants …
      • Jackson for JSON SerDe for metadata
      • Typesafe HOCON for configuration and module management
      • Netty4 as core RPC engine, protobuf for communication
      • Vanilla Java, LArray and Netty ByteBuf for off-heap large data structures
      • Hazelcast for distributed cache
      • Netflix Curator on top of ZooKeeper for service registry
      • Optiq for SQL parsing and cost optimization
      • Parquet (http://parquet.io) as native columnar format
      • Janino for expression compilation
      • ASM for bytecode manipulation
      • Yammer Metrics for metrics
      • Guava extensively
      • Carrot HPPC for primitive collections
  40. Key features
      • Full SQL: ANSI SQL 2003
      • Nested data as a first-class citizen
      • Optional schema
      • Extensibility points …
  41. Extensibility Points
      • Source query → parser API
      • Custom operators, UDFs → logical plan
      • Serving tree, CF, topology → physical plan/optimizer
      • Data sources & formats → scanner API
      (Source Query → Parser → Logical Plan → Optimizer → Physical Plan → Execution)
  42. … and Hadoop?
      • How is it different from Hive, Cascading, etc.?
      • Complementary use cases*
      • Use Apache Drill to:
        – find records with specified conditions
        – aggregate under dynamic conditions
      • Use MapReduce for:
        – data mining with multiple iterations
        – ETL
      *) https://cloud.google.com/files/BigQueryTechnicalWP.pdf
  43. LET'S GET OUR HANDS DIRTY …
  44. Basic Demo: https://cwiki.apache.org/confluence/display/DRILL/Demo+HowTo
      data source: donuts.json
      {
        "id": "0001",
        "type": "donut",
        "ppu": 0.55,
        "batters": {
          "batter": [
            { "id": "1001", "type": "Regular" },
            { "id": "1002", "type": "Chocolate" },
            …
      logical plan: simple_plan.json
      query: [
        { op: "sequence",
          do: [
            { op: "scan", ref: "donuts", source: "local-logs", selection: { data: "activity" } },
            { op: "filter", expr: "donuts.ppu < 2.00" },
            …
      result: out.json
      { "sales" : 700.0, "typeCount" : 1, "quantity" : 700, "ppu" : 1.0 }
      { "sales" : 109.71, "typeCount" : 2, "quantity" : 159, "ppu" : 0.69 }
      { "sales" : 184.25, "typeCount" : 2, "quantity" : 335, "ppu" : 0.55 }
  45. SELECT t.cf1.name AS name, SUM(t.cf1.sales) AS total_sales
      FROM m7://cluster1/sales t
      GROUP BY name
      ORDER BY total_sales DESC
  46. sequence: [
        { op: scan, storageengine: m7, selection: { table: sales } },
        { op: project, projections: [ { ref: name, expr: cf1.name }, { ref: sales, expr: cf1.sales } ] },
        { op: segment, ref: by_name, exprs: [ name ] },
        { op: collapsingaggregate, target: by_name, carryovers: [ name ], aggregations: [ { ref: total_sales, expr: sum(sales) } ] },
        { op: order, ordering: [ { order: desc, expr: total_sales } ] },
        { op: store, storageengine: screen }
      ]
  47. { @id: 1, pop: m7scan, cluster: def, table: sales, cols: [cf1.name, cf2.name] }
      { @id: 2, op: hash-random-exchange, input: 1, expr: 1 }
      { @id: 3, op: sorting-hash-aggregate, input: 2, grouping: 1, aggr: [sum(2)], carry: [1], sort: ~aggr[0] }
      { @id: 4, op: screen, input: 3 }
  48. Execution Plan
      • Break the physical plan into fragments
      • Determine the degree of parallelization for each task based on estimated costs
      • Assign particular nodes based on affinity, load and topology (see the sketch below)
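      A purely illustrative sketch of these two steps (not Drill's actual planner code; the names and the cost model are invented): derive a degree of parallelism from an estimated fragment cost, then hand out tasks preferring data-local nodes and falling back to the least-loaded nodes.

      import java.util.ArrayList;
      import java.util.List;
      import java.util.Map;

      // Toy fragment planner: NOT Drill's implementation, only the idea on the slide.
      class FragmentPlannerSketch {

          // Degree of parallelism grows with the fragment's estimated cost,
          // capped by a maximum width per fragment.
          static int parallelism(double estimatedCost, double costPerTask, int maxWidth) {
              int width = (int) Math.ceil(estimatedCost / costPerTask);
              return Math.max(1, Math.min(maxWidth, width));
          }

          // Assign tasks to nodes: data-local nodes first (affinity),
          // then the remaining nodes ordered by current load.
          // Assumes at least one candidate node exists.
          static List<String> assign(int tasks, List<String> dataLocalNodes,
                                     Map<String, Integer> loadByNode) {
              List<String> candidates = new ArrayList<>(dataLocalNodes);
              loadByNode.entrySet().stream()
                      .sorted(Map.Entry.comparingByValue())
                      .map(Map.Entry::getKey)
                      .filter(n -> !candidates.contains(n))
                      .forEach(candidates::add);
              List<String> assignment = new ArrayList<>();
              for (int i = 0; i < tasks; i++) {
                  assignment.add(candidates.get(i % candidates.size()));
              }
              return assignment;
          }
      }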
  49. BE A PART OF IT!
  50. Status
      • Heavy development by multiple organizations
      • Available: logical plan (ADSP), reference interpreter, basic SQL parser, basic demo
  51. Status June 2013
      • Full SQL support (+ JDBC)
      • Physical plan
      • In-memory compressed data interfaces
      • Distributed execution
  52. Status June 2013
      • HBase and MySQL storage engines
      • User interfaces
  53. User Interfaces
  54. User Interfaces
      • API: DrillClient
        – Encapsulates endpoint discovery
        – Supports logical and physical plan submission, query cancellation, query status
        – Supports streaming return results
      • JDBC driver, converting JDBC into DrillClient communication (sketched below)
      • REST proxy for DrillClient
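      For the JDBC path, here is a minimal client sketch using only the standard java.sql API. The jdbc:drill:zk=... connection URL is an assumption about the driver's endpoint-discovery convention, not something stated on the slides; the query is the one from slide 45.

      import java.sql.Connection;
      import java.sql.DriverManager;
      import java.sql.ResultSet;
      import java.sql.Statement;

      public class DrillJdbcSketch {
          public static void main(String[] args) throws Exception {
              // Assumed URL format: the ZooKeeper quorum is used to discover Drillbit endpoints.
              try (Connection conn = DriverManager.getConnection("jdbc:drill:zk=zkhost:2181");
                   Statement stmt = conn.createStatement();
                   ResultSet rs = stmt.executeQuery(
                           "SELECT t.cf1.name AS name, SUM(t.cf1.sales) AS total_sales " +
                           "FROM m7://cluster1/sales t GROUP BY name ORDER BY total_sales DESC")) {
                  while (rs.next()) {
                      System.out.println(rs.getString("name") + "\t" + rs.getLong("total_sales"));
                  }
              }
          }
      }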
  55. User Interfaces
  56. Contributing
      Contributions appreciated (not only code drops) …
      • Test data & test queries
      • Use case scenarios (textual/SQL queries)
      • Documentation
      • Further schedule: Alpha in Q2, Beta in Q3
  57. Kudos to …
      • Julian Hyde, Pentaho
      • Lisen Mu, XingCloud
      • Tim Chen, Microsoft
      • Chris Merrick, RJMetrics
      • David Alves, UT Austin
      • Sree Vaadi, SSS
      • Jacques Nadeau, MapR
      • Ted Dunning, MapR
  58. Engage!
      • Follow @ApacheDrill on Twitter
      • Sign up for the mailing lists (user | dev): http://incubator.apache.org/drill/mailing-lists.html
      • Standing G+ hangouts every Tuesday at 18:00 CET: http://j.mp/apache-drill-hangouts
      • Keep an eye on http://drill-user.org/

Editor's notes

  • http://solr-vs-elasticsearch.com/
  • (This is an ASR-35 at a DEC mainframe; other console terminals used were Teletype Model 35s.) Allowing the user to issue ad-hoc queries is essential: often, the user might not necessarily know ahead of time what queries to issue. Also, one may need to react to changing circumstances. The lack of tools to perform interactive ad-hoc analysis at scale is a gap that Apache Drill fills.
  • Hive: compiles to MapReduce. Aster: external tables in an MPP database. Oracle/MySQL: export MapReduce results to the RDBMS. Drill, Impala, CitusDB: real-time.
  • Suppose a marketing analyst is trying to experiment with ways to target user segments for the next campaign. She needs access to web logs stored in Hadoop, and also needs to access user profiles stored in MongoDB as well as transaction data stored in a conventional database.
  • Geo-spatial + time series data with highly discriminative queries (timeframe, region, etc.)
  • Re ad-hoc: you might not know ahead of time what queries you will want to make. You may need to react to changing circumstances.
  • Two innovations: handle nested-data column style (column-striped representation) and multi-level execution trees
  • Repetition levels (r): at which repeated field in the field's path the value has repeated. Definition levels (d): how many fields in the path that could be undefined (because they are optional or repeated) are actually present. Only repeated fields increment the repetition level, and only non-required fields increment the definition level. Required fields are always defined and do not need a definition level. Non-repeated fields do not need a repetition level. An optional field requires one extra bit to store zero if it is NULL and one if it is defined. NULL values do not need to be stored, as the definition level captures this information.
  • Source query: a human-written (e.g., DSL) or tool-written (e.g., SQL/ANSI-compliant) query. The source query is parsed and transformed to produce the logical plan. Logical plan: the dataflow of what should logically be done. Typically, the logical plan lives in memory in the form of Java objects, but it also has a textual form. The logical plan is then transformed and optimized into the physical plan. The optimizer introduces parallel computation, taking topology into account, and handles columnar data to improve processing speed. The physical plan represents the actual structure of computation as it is done by the system: how physical and exchange operators should be applied, assignment to particular nodes and cores, and the actual query execution per node.
  • One Drillbit per node, to maximize data locality. Co-ordination, query planning, optimization, scheduling and execution are distributed. By default, Drillbits hold all roles; modules can optionally be disabled. Any node/Drillbit can act as the endpoint for a particular query.
  • ZooKeeper maintains ephemeral cluster membership information only. A small distributed cache utilizing embedded Hazelcast maintains information about individual queue depth, cached query plans, metadata, locality information, etc.
  • The originating Drillbit acts as foreman and manages all execution for its particular query, scheduling based on priority, queue depth and locality information. Drillbit data communication is streaming and avoids any serialization/deserialization.
  • Red: the originating Drillbit, which is the root of the multi-level execution tree, per query/job. Leaves use their storage engine interface to scan the respective data source (DB, file, etc.).
  • Relation of Drill to Hadoop. Hadoop = HDFS + MapReduce. Drill is for: finding particular records with specified conditions, for example request logs with a specified account ID; quick aggregation of statistics under dynamically changing conditions, for example getting a summary of request traffic volume from the previous night for a web application and drawing a graph from it; trial-and-error data analysis, for example identifying the cause of trouble and aggregating values by various conditions, including by hour, day, etc. MapReduce is for: executing complex data mining on Big Data which requires multiple iterations and passes of data processing with programmed algorithms; executing large join operations across huge datasets; exporting large amounts of data after processing.
  • Designed to be as easy as possible for language implementers to utilize. Don't constrain ourselves to a SQL-specific paradigm: support complex data type operators such as collapse and expand as well. Allow late typing.
  • Insert points of parallelization where the optimizer thinks they are necessary. Pick the right version of each operator. Apply projection and other push-down rules into capable operators.
