
M7 and Apache Drill, Michael Hausenblas

5,229 views

Published in: Technology


  1. The MapR Big Data platform: M7 and Apache Drill
     Michael Hausenblas, Chief Data Engineer EMEA, MapR
     HUG France, Paris, 2013-06-05
  2. Which workloads do you encounter in your environment?
     (Image: http://www.flickr.com/photos/kevinomara/2866648330/ licensed under CC BY-NC-ND 2.0)
  3. Batch processing … for recurring tasks such as large-scale data mining, ETL offloading/data-warehousing → for the batch layer in the Lambda architecture
  4. OLTP … user-facing eCommerce transactions, real-time messaging at scale (FB), time-series processing, etc. → for the serving layer in the Lambda architecture
  5. Stream processing … in order to handle stream sources such as social media feeds or sensor data (mobile phones, RFID, weather stations, etc.) → for the speed layer in the Lambda architecture
  6. Search/Information Retrieval … retrieval of items from unstructured documents (plain text, etc.), semi-structured data formats (JSON, etc.), as well as data stores (MongoDB, CouchDB, etc.)
  7. MapR Big Data platform
  8. MAPR’S M7: HOW WE MADE HBASE ENTERPRISE-GRADE
  9. HBase Adoption
     • Used by 45% of Hadoop users
     • Database operations: large-scale key-value store, blob store, lightweight OLTP
     • Real-time analytics: logistics, shopping cart, billing, auction engine, log analysis
  10. HBase Issues
     Reliability
     • Compactions disrupt operations
     • Very slow crash recovery
     • Unreliable splitting
     Business continuity
     • Common hardware/software issues cause downtime
     • Administration requires downtime
     • No point-in-time recovery
     • Complex backup process
     Performance
     • Many bottlenecks result in low throughput
     • Limited data locality
     • Limited # of tables
     Manageability
     • Compactions, splits and merges must be done manually (in reality)
     • Basic operations like backup or table rename are complex
  11. M7: Enterprise-grade HBase (EASY, DEPENDABLE, FAST)
     • No RegionServers
     • No compactions
     • Consistent low latency
     • Snapshots
     • Mirroring
     • Instant recovery
     • No manual processes
  12. Unified Namespace
  13. Unified Namespace
     $ pwd
     /mapr/default/user/dave
     $ ls
     file1 file2 table1 table2
     $ hbase shell
     hbase(main):003:0> create '/user/dave/table3', 'cf1', 'cf2', 'cf3'
     0 row(s) in 0.1570 seconds
     $ ls
     file1 file2 table1 table2 table3
     $ hadoop fs -ls /user/dave
     Found 5 items
     -rw-r--r--  3 mapr mapr 16 2012-09-28 08:34 /user/dave/file1
     -rw-r--r--  3 mapr mapr 22 2012-09-28 08:34 /user/dave/file2
     trwxr-xr-x  3 mapr mapr  2 2012-09-28 08:32 /user/dave/table1
     trwxr-xr-x  3 mapr mapr  2 2012-09-28 08:33 /user/dave/table2
     trwxr-xr-x  3 mapr mapr  2 2012-09-28 08:38 /user/dave/table3
  14. Fewer Layers: MapR M7
  15. Eliminating Compactions

                              HBase-style        LevelDB-style    M7
     Examples                 BigTable, HBase,   Cassandra, Riak
                              Cassandra, Riak
     WAF                      Low                High             Low
     RAF                      High               Low              Low
     I/O storms               Yes                No               No
     Disk space overhead      High (2x)          Low              Low
     Skewed data handling     Bad                Good             Good
     Rewrite large values     Yes                Yes              No

     Write-amplification factor (WAF): the ratio between writes to disk and application writes. Note that data must be rewritten in every indexed structure.
     Read-amplification factor (RAF): the ratio between reads from disk and application reads.
     Skewed data handling: when inserting values with similar keys (e.g., increasing keys, trending topic), do other values also need to be rewritten?
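
     To make the WAF definition concrete, a simple worked example (the figures are illustrative, not from the slides): if an application writes 1 GB of new data and background compactions later rewrite that data four more times, the disk sees 5 GB of writes in total, so WAF = 5 GB / 1 GB = 5.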
  16. Portability (In Both Ways)
     • HBase applications work as-is with M7
       – No need to recompile
       – No vendor lock-in
     • You can also run Apache HBase on an M7 cluster
       – Recommended during a migration
       – Table names with a slash (/) are in M7, table names without a slash are in Apache HBase (this can be overridden to allow table-by-table migration)
     • Use the standard CopyTable tool to copy a table from HBase to M7 and vice versa (see the sketch below)
       – hbase org.apache.hadoop.hbase.mapreduce.CopyTable --new.name=/user/tshiran/mytable mytable
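
     The slide shows only the HBase-to-M7 direction. Since CopyTable takes a source table name plus a --new.name destination, and the slash convention above decides where a table lives, the reverse migration should look analogous (the /user/tshiran path is the slide's example path; adjust to your cluster):

       # HBase -> M7: destination name starts with a slash, so it lands in M7
       hbase org.apache.hadoop.hbase.mapreduce.CopyTable \
         --new.name=/user/tshiran/mytable mytable

       # M7 -> HBase: destination name has no slash, so it lands in Apache HBase
       hbase org.apache.hadoop.hbase.mapreduce.CopyTable \
         --new.name=mytable /user/tshiran/mytable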
  17. MapR Big Data platform
  18. Apache Drill: interactive, ad-hoc query at scale
  19. How to do interactive ad-hoc query at scale?
     (Image: http://www.flickr.com/photos/9479603@N02/4144121838/ licensed under CC BY-NC-ND 2.0)
  20. Impala: interactive query (?), low latency
  21. Use Case: Marketing Campaign
     • Jane, a marketing analyst
     • Determine target segments
     • Data from different sources
  22. Use Case: Logistics
     • Supplier tracking and performance
     • Queries (see the SQL sketches below)
       – Shipments from supplier 'ACM' in last 24h
       – Shipments in region 'US' not from 'ACM'

     SUPPLIER_ID  NAME                 REGION
     ACM          ACME Corp            US
     GAL          GotALot Inc          US
     BAP          Bits and Pieces Ltd  Europe
     ZUP          Zu Pli               Asia

     {"shipment": 100123, "supplier": "ACM", "timestamp": "2013-02-01", "description": "first delivery today"},
     {"shipment": 100124, "supplier": "BAP", "timestamp": "2013-02-02", "description": "hope you enjoy it"}
     …
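
     The two queries listed on the slide might be expressed in Drill SQL roughly as follows. The shipments and suppliers source names are assumptions for illustration (the slide does not name its sources), timestamp is back-quoted because it is a reserved word, and the date literal stands in for "now minus 24 hours":

       -- Shipments from supplier 'ACM' in the last 24 hours
       SELECT s.shipment, s.`timestamp`, s.description
       FROM shipments s
       WHERE s.supplier = 'ACM'
         AND s.`timestamp` >= '2013-02-01';

       -- Shipments in region 'US' not from 'ACM'
       SELECT s.shipment, s.supplier
       FROM shipments s
       JOIN suppliers p ON s.supplier = p.supplier_id
       WHERE p.region = 'US'
         AND s.supplier <> 'ACM';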
  23. Use Case: Crime Detection
     • Online purchases
     • Fraud, bilking, etc.
     • Batch-generated overview
     • Modes
       – Explorative
       – Alerts
  24. Requirements
     • Support for different data sources
     • Support for different query interfaces
     • Low latency/real-time
     • Ad-hoc queries
     • Scalable, reliable
  25. And now for something completely different …
  26. Google's Dremel
     http://research.google.com/pubs/pub36632.html
     Sergey Melnik, Andrey Gubarev, Jing Jing Long, Geoffrey Romer, Shiva Shivakumar, Matt Tolton, Theo Vassilakis, Proc. of the 36th Intl. Conf. on Very Large Data Bases (2010), pp. 330-339
     "Dremel is a scalable, interactive ad-hoc query system for analysis of read-only nested data. By combining multi-level execution trees and columnar data layout, it is capable of running aggregation queries over trillion-row tables in seconds. The system scales to thousands of CPUs and petabytes of data, and has thousands of users at Google. …"
  27. Google's Dremel: multi-level execution trees, columnar data layout
  28. Google's Dremel: nested data + schema → column-striped representation; map nested data to tables
  29. Google's Dremel: experiments (datasets & query performance)
  30. Back to Apache Drill …
  31. Apache Drill: key facts
     • Inspired by Google's Dremel
     • Standard SQL 2003 support
     • Pluggable data sources
     • Nested data is a first-class citizen
     • Schema is optional (see the query sketch below)
     • Community driven, open, 100's involved
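
     As an illustration of "nested data is a first-class citizen" and "schema is optional": a query can address fields inside raw JSON directly, without declaring a schema first. A minimal sketch; the file path and field names are hypothetical, and the dfs.`...` notation follows the style Drill later settled on:

       SELECT t.id, t.batters.batter[0].type
       FROM dfs.`/data/donuts.json` t
       WHERE t.ppu < 1.00;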
  32. High-level Architecture
  33. Principled Query Execution
     • Source query: what we want to do (analyst friendly)
     • Logical plan: what we want to do (language agnostic, computer friendly)
     • Physical plan: how we want to do it (the best way we can tell)
     • Execution plan: where we want to do it
  34. Principled Query Execution
     Source Query → Parser → Logical Plan → Optimizer → Physical Plan → Execution
     • Parser API: accepts SQL 2003, DrQL, MongoQL, DSLs
     • Optimizer: informed by topology, CF, etc.
     • Scanner API: attaches the data sources at execution time
     Example logical plan fragment:
     query: [
       {@id: "log", op: "sequence", do: [
         {op: "scan", source: "logs"},
         {op: "filter", condition: "x > 3"},
         …
  35. Wire-level Architecture
     • Each node runs a Drillbit to maximize data locality
     • Coordination, query planning, execution, etc. are distributed
     (Diagram: four nodes, each running a Drillbit next to its storage process)
  36. Wire-level Architecture
     • Curator/ZooKeeper for ephemeral cluster membership info
     • Distributed cache (Hazelcast) for metadata, locality information, etc.
     (Diagram: the same nodes, each with a distributed cache, coordinated via Curator/ZooKeeper)
  37. Wire-level Architecture
     • Originating Drillbit acts as foreman: manages query execution, scheduling, locality information, etc.
     • Streaming data communication avoiding SerDe
     (Diagram: the foreman Drillbit coordinating the remaining nodes)
  38. Wire-level Architecture
     The foreman turns into the root of the multi-level execution tree; leaves activate their storage engine interface.
     (Diagram: foreman at the root, Drillbit nodes as leaves, coordinated via Curator/ZooKeeper)
  39. On the shoulders of giants …
     • Jackson for JSON SerDe for metadata
     • Typesafe HOCON for configuration and module management
     • Netty4 as core RPC engine, protobuf for communication
     • Vanilla Java, Larray and Netty ByteBuf for off-heap large data structures
     • Hazelcast for distributed cache
     • Netflix Curator on top of ZooKeeper for service registry
     • Optiq for SQL parsing and cost optimization
     • Parquet (http://parquet.io) as native columnar format
     • Janino for expression compilation
     • ASM for bytecode manipulation
     • Yammer Metrics for metrics
     • Guava extensively
     • Carrot HPPC for primitive collections
  40. Key features
     • Full SQL (ANSI SQL 2003)
     • Nested data as a first-class citizen
     • Optional schema
     • Extensibility points …
  41. Extensibility Points
     • Source query → parser API
     • Custom operators, UDFs → logical plan
     • Serving tree, CF, topology → physical plan/optimizer
     • Data sources & formats → scanner API
     (Pipeline: Source Query → Parser → Logical Plan → Optimizer → Physical Plan → Execution)
  42. … and Hadoop?
     • How is it different from Hive, Cascading, etc.?
     • Complementary use cases*
     • … use Apache Drill to
       – Find records with a specified condition
       – Aggregate under dynamic conditions
     • … use MapReduce for
       – Data mining with multiple iterations
       – ETL
     *) https://cloud.google.com/files/BigQueryTechnicalWP.pdf
  43. LET'S GET OUR HANDS DIRTY …
  44. Basic Demo
     https://cwiki.apache.org/confluence/display/DRILL/Demo+HowTo

     Data source: donuts.json
     {
       "id": "0001",
       "type": "donut",
       "ppu": 0.55,
       "batters": {
         "batter": [
           {"id": "1001", "type": "Regular"},
           {"id": "1002", "type": "Chocolate"},
           …

     Logical plan: simple_plan.json
     query: [
       {
         op: "sequence",
         do: [
           {op: "scan", ref: "donuts", source: "local-logs", selection: {data: "activity"}},
           {op: "filter", expr: "donuts.ppu < 2.00"},
           …

     Result: out.json
     {"sales": 700.0, "typeCount": 1, "quantity": 700, "ppu": 1.0}
     {"sales": 109.71, "typeCount": 2, "quantity": 159, "ppu": 0.69}
     {"sales": 184.25, "typeCount": 2, "quantity": 335, "ppu": 0.55}
  45. SELECT
       t.cf1.name AS name,
       SUM(t.cf1.sales) AS total_sales
     FROM m7://cluster1/sales t
     GROUP BY name
     ORDER BY total_sales DESC
  46. sequence: [
       {op: scan, storageengine: m7, selection: {table: sales}},
       {op: project, projections: [{ref: name, expr: cf1.name}, {ref: sales, expr: cf1.sales}]},
       {op: segment, ref: by_name, exprs: [name]},
       {op: collapsingaggregate, target: by_name, carryovers: [name],
        aggregations: [{ref: total_sales, expr: sum(sales)}]},
       {op: order, ordering: [{order: desc, expr: total_sales}]},
       {op: store, storageengine: screen}
     ]
  47. {@id: 1, pop: m7scan, cluster: def, table: sales, cols: [cf1.name, cf1.sales]}
     {@id: 2, op: hash-random-exchange, input: 1, expr: 1}
     {@id: 3, op: sorting-hash-aggregate, input: 2, grouping: 1, aggr: [sum(2)], carry: [1], sort: ~aggr[0]}
     {@id: 4, op: screen, input: 3}
  48. Execution Plan
     • Break the physical plan into fragments
     • Determine the degree of parallelization for each task based on estimated costs
     • Assign particular nodes based on affinity, load and topology
     (An illustrative walk-through follows below.)
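
     Applied to the physical plan sketched on slide 47, this could mean (an illustrative reading, not from the slides): the m7scan operator is broken into one leaf fragment per node holding a piece of the sales table, the sorting-hash-aggregate runs as intermediate fragments fed by the hash-random-exchange, and the screen operator runs as the root fragment on the foreman, which streams results back to the client.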
  49. BE A PART OF IT!
  50. Status
     • Heavy development by multiple organizations
     • Available:
       – Logical plan (ADSP)
       – Reference interpreter
       – Basic SQL parser
       – Basic demo
  51. Status: June 2013
     • Full SQL support (+ JDBC)
     • Physical plan
     • In-memory compressed data interfaces
     • Distributed execution
  52. Status: June 2013
     • HBase and MySQL storage engines
     • User interfaces
  53. User Interfaces
  54. User Interfaces
     • API: DrillClient
       – Encapsulates endpoint discovery
       – Supports logical and physical plan submission, query cancellation, query status
       – Supports streaming return results
     • JDBC driver, converting JDBC into DrillClient communication (see the sqlline sketch below)
     • REST proxy for DrillClient
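
     With the JDBC driver in place, the quickest way to try queries interactively is the sqlline shell bundled with Drill. A minimal sketch, assuming a ZooKeeper on localhost at the default port; the zk= value and the query are placeholders, not from the slides:

       $ bin/sqlline -u jdbc:drill:zk=localhost:2181
       0: jdbc:drill:zk=localhost:2181> SELECT * FROM donuts WHERE ppu < 1.00;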
  55. User Interfaces
  56. Contributing
     Contributions appreciated (not only code drops) …
     • Test data & test queries
     • Use case scenarios (textual/SQL queries)
     • Documentation
     • Further schedule
       – Alpha Q2
       – Beta Q3
  57. Kudos to …
     • Julian Hyde, Pentaho
     • Lisen Mu, XingCloud
     • Tim Chen, Microsoft
     • Chris Merrick, RJMetrics
     • David Alves, UT Austin
     • Sree Vaadi, SSS
     • Jacques Nadeau, MapR
     • Ted Dunning, MapR
  58. Engage!
     • Follow @ApacheDrill on Twitter
     • Sign up at the mailing lists (user | dev): http://incubator.apache.org/drill/mailing-lists.html
     • Standing G+ hangouts every Tuesday at 18:00 CET: http://j.mp/apache-drill-hangouts
     • Keep an eye on http://drill-user.org/
