Presented at JAX London 2013
Apache Drill is a distributed system for interactive ad-hoc query and analysis of large-scale datasets. It is the open source counterpart of Google's Dremel technology. Apache Drill is designed to scale to thousands of servers and to process petabytes of data in seconds, enabling SQL-on-Hadoop and supporting a variety of data sources.
Interactive Ad-Hoc Queries over Diverse Data with Apache Drill
1. Large-scale, interactive ad-hoc queries over different datastores with Apache Drill
Michael Hausenblas, Chief Data Engineer, MapR Technologies
JAX London, 2013-10-29
3. Batch processing
Apache Pig
Cascalog
… for recurring tasks such as large-scale data mining and ETL offloading/data-warehousing; the batch layer in the Lambda architecture
4. OLTP
… user-facing eCommerce transactions, real-time messaging at scale (FB), time-series processing, etc.; the serving layer in the Lambda architecture
5. Stream processing
… in order to handle stream sources such as social media feeds or sensor data (mobile phones, RFID, weather stations, etc.); the speed layer in the Lambda architecture
6. Search/Information Retrieval
… retrieval of items from unstructured documents (plain text, etc.), semi-structured data formats (JSON, etc.), as well as data stores (MongoDB, CouchDB, etc.)
7. But what about interactive ad-hoc queries at scale?
http://www.flickr.com/photos/9479603@N02/4144121838/ licensed under CC BY-NC-ND 2.0
9. Use Case: Marketing Campaign
• Jane, a marketing analyst
• Determine target segments
• Data from different sources
10. Use Case: Logistics
• Supplier tracking and performance
• Queries
– Shipments from supplier ‘ACM’ in last 24h
– Shipments in region ‘US’ not from ‘ACM’
SUPPLIER_ID | NAME                | REGION
ACM         | ACME Corp           | US
GAL         | GotALot Inc         | US
BAP         | Bits and Pieces Ltd | Europe
ZUP         | Zu Pli              | Asia
{
  "shipment": 100123,
  "supplier": "ACM",
  "timestamp": "2013-02-01",
  "description": "first delivery today"
},
{
  "shipment": 100124,
  "supplier": "BAP",
  "timestamp": "2013-02-02",
  "description": "hope you enjoy it"
}
…
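The two slide queries combine the relational supplier table with the JSON shipment feed. A minimal Python sketch of that combination, with the sample data inlined (the cutoff date and all variable names are illustrative assumptions, not Drill APIs):

```python
import json
from datetime import date

# Hypothetical in-memory stand-ins for the two sources on the slide:
# the relational supplier table and the JSON shipment feed.
suppliers = [
    {"SUPPLIER_ID": "ACM", "NAME": "ACME Corp", "REGION": "US"},
    {"SUPPLIER_ID": "GAL", "NAME": "GotALot Inc", "REGION": "US"},
    {"SUPPLIER_ID": "BAP", "NAME": "Bits and Pieces Ltd", "REGION": "Europe"},
    {"SUPPLIER_ID": "ZUP", "NAME": "Zu Pli", "REGION": "Asia"},
]
shipments = json.loads("""[
  {"shipment": 100123, "supplier": "ACM", "timestamp": "2013-02-01", "description": "first delivery today"},
  {"shipment": 100124, "supplier": "BAP", "timestamp": "2013-02-02", "description": "hope you enjoy it"}
]""")

region_of = {s["SUPPLIER_ID"]: s["REGION"] for s in suppliers}

# "Shipments from supplier 'ACM' in last 24h" -- the cutoff is fixed to suit the sample data.
cutoff = date(2013, 2, 1)
from_acm = [s for s in shipments
            if s["supplier"] == "ACM" and date.fromisoformat(s["timestamp"]) >= cutoff]

# "Shipments in region 'US' not from 'ACM'" -- joins the feed against the supplier table.
us_not_acm = [s for s in shipments
              if region_of[s["supplier"]] == "US" and s["supplier"] != "ACM"]
```

In the sample data only shipment 100123 matches the first query, and the second is empty (BAP ships from Europe); Drill expresses the same joins declaratively in SQL across both stores.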
14. Google’s Dremel
“
Dremel is a scalable, interactive ad-hoc
query system for analysis of read-only
nested data. By combining multi-level
execution trees and columnar data layout,
it is capable of running aggregation
queries over trillion-row tables in
seconds. The system scales to thousands of
CPUs and petabytes of data, and has
thousands of users at Google.
…
“
http://research.google.com/pubs/pub36632.html
Sergey Melnik, Andrey Gubarev, Jing Jing Long, Geoffrey Romer, Shiva Shivakumar, Matt Tolton, Theo Vassilakis, Proc. of the 36th Int'l Conf on Very Large Data Bases (2010), pp. 330-339
19. Apache Drill–key facts
• Inspired by Google's Dremel
• Standard SQL 2003 support
• Pluggable data sources
• Nested data is a first-class citizen
• Schema is optional
• Community driven, open, 100's involved
21. Principled Query Execution
• Source query—what we want to do (analyst friendly)
• Logical Plan—what we want to do (language agnostic, computer friendly)
• Physical Plan—how we want to do it (the best way we can tell)
• Execution Plan—where we want to do it
22. Principled Query Execution
Source Query (SQL 2003, DrQL, MongoQL, DSL)
→ Parser (parser API)
→ Logical Plan
→ Optimizer (topology, CF, etc.)
→ Physical Plan
→ Execution (scanner API)
Sample logical plan fragment:
query: [
  {
    @id: "log",
    op: "sequence",
    do: [
      { op: "scan", source: "logs" },
      { op: "filter", condition: "x > 3" },
      …
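To make the logical-plan stage concrete, here is a toy interpreter for a sequence of `scan` and `filter` ops in the style of the fragment on this slide. The op names mirror the slide; the interpreter, the `sources` dict, and the inline lambda are invented for illustration and are not Drill's actual engine:

```python
# Toy interpreter for a logical plan: a "sequence" of ops in which "scan"
# reads a named source and "filter" keeps rows matching a condition.
sources = {"logs": [{"x": 1}, {"x": 4}, {"x": 7}]}

plan = {
    "@id": "log",
    "op": "sequence",
    "do": [
        {"op": "scan", "source": "logs"},
        {"op": "filter", "condition": lambda row: row["x"] > 3},  # stands in for "x > 3"
    ],
}

def run_sequence(plan):
    rows = []
    for step in plan["do"]:
        if step["op"] == "scan":
            rows = list(sources[step["source"]])     # read everything from the source
        elif step["op"] == "filter":
            rows = [r for r in rows if step["condition"](r)]  # keep matching rows
    return rows

result = run_sequence(plan)  # rows with x > 3
```

The point of the real logical plan is exactly this separation: a language-agnostic dataflow that any parser front end (SQL 2003, DrQL, MongoQL) can target and the optimizer can rewrite.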
23. Wire-level Architecture
• Each node runs a Drillbit; maximize data locality
• Co-ordination, query planning, execution, etc. are distributed
• Any node can act as endpoint for a query—the foreman
[Diagram: four nodes, each running a Drillbit alongside its storage process]
24. Wire-level Architecture
• Curator/Zookeeper for ephemeral cluster membership info
• Distributed cache (Hazelcast) for metadata, locality information, etc.
[Diagram: Curator/ZK coordinating four nodes, each with a Drillbit, a distributed cache, and a storage process]
25. Wire-level Architecture
• Originating Drillbit acts as foreman: manages query execution, scheduling, locality information, etc.
• Streaming data communication, avoiding SerDe
[Diagram: as before, with the originating Drillbit acting as foreman for the other nodes]
26. Wire-level Architecture
The foreman becomes the root of the multi-level execution tree; the leaves activate their storage engine interface.
[Diagram: Curator/ZK alongside a tree of nodes rooted at the foreman]
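The multi-level tree can be sketched as partial aggregation: leaves scan their local partition, intermediate nodes merge partial results, and the foreman at the root emits the final answer. Everything below (data layout, tree shape, function names) is an invented illustration of the pattern, not Drill code:

```python
# Multi-level execution tree for a COUNT aggregate: leaves compute partial
# counts over their local partition, intermediate Drillbits merge them,
# and the foreman (root) produces the final result.
partitions = [[1, 2, 3], [4, 5], [6], [7, 8, 9, 10]]  # one list per leaf Drillbit

def leaf_count(partition):
    return len(partition)            # partial aggregate computed at a leaf

def merge(partials):
    return sum(partials)             # combine step at intermediate nodes and root

# Two intermediate nodes, each owning two leaves; the foreman merges their results.
intermediate = [merge(leaf_count(p) for p in partitions[:2]),
                merge(leaf_count(p) for p in partitions[2:])]
total = merge(intermediate)          # 10 rows across the cluster
```

This is the same shape Dremel describes: aggregation queries decompose into per-leaf scans plus cheap merge steps, which is why trillion-row aggregates can return in seconds.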
27. On the shoulders of giants …
• Jackson for JSON SerDe for metadata
• Typesafe HOCON for configuration and module management
• Netty4 as core RPC engine, protobuf for communication
• Vanilla Java, Larray and Netty ByteBuf for off-heap large data structures
• Hazelcast for distributed cache
• Netflix Curator on top of Zookeeper for service registry
• Optiq for SQL parsing and cost optimization
• Parquet (http://parquet.io) / ORC
• Janino for expression compilation
• ASM for bytecode manipulation
• Yammer Metrics for metrics
• Guava extensively
• Carrot HPC for primitive collections
28. Key features
• Full SQL – ANSI SQL 2003
• Nested data as a first-class citizen
• Optional schema
• Extensibility points …
29. Extensibility Points
• Source query → parser API
• Custom operators, UDFs → logical plan
• Serving tree, CF, topology → physical plan/optimizer
• Data sources & formats → scanner API
[Pipeline: Source Query → Parser → Logical Plan → Optimizer → Physical Plan → Execution]
30. User Interfaces
• API—DrillClient
– Encapsulates endpoint discovery
– Supports logical and physical plan submission,
query cancellation, query status
– Supports streaming return results
• JDBC driver, converting JDBC into DrillClient communication
• REST proxy for DrillClient
34. Demo: submitting physical plan in a 3-node cluster
Test 1: Scan JSON doc
$ bin/submit_plan -f sample-data/physical_json_scan_test1.json -t physical -zk 127.0.0.1:2181
Test 2: Scan Parquet doc
$ bin/submit_plan -f sample-data/parquet_scan_union_screen_physical.json -t physical -zk 127.0.0.1:2181
35. Demo: SQL on single node
$ ./bin/sqlline -u jdbc:drill:schema=parquet-local
0: jdbc:drill:schema=parquet-local> SELECT _MAP['N_REGIONKEY'] as regionKey, _MAP['N_NAME'] as name
FROM "sample-data/nation.parquet" WHERE cast(_MAP['N_NAME'] as varchar) < 'M';
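Conceptually, that query projects two fields out of each row's `_MAP` and keeps the nations whose name sorts before 'M'. A Python analogue over plain dicts (the three sample rows are invented; the real nation.parquet holds the 25 TPC-H nations):

```python
# Analogue of the sqlline query above: project N_REGIONKEY and N_NAME out of
# each row's map and keep names that sort lexicographically before 'M'.
rows = [
    {"N_REGIONKEY": 0, "N_NAME": "ALGERIA"},
    {"N_REGIONKEY": 1, "N_NAME": "ARGENTINA"},
    {"N_REGIONKEY": 3, "N_NAME": "UNITED KINGDOM"},
]
result = [{"regionKey": r["N_REGIONKEY"], "name": r["N_NAME"]}
          for r in rows if r["N_NAME"] < "M"]
```

The `cast(... as varchar)` in the SQL exists because the schema is optional: without it, Drill cannot know at parse time that `_MAP['N_NAME']` compares as a string.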
39. Status
• Heavy development by multiple organizations (MapR, Pentaho, Microsoft, ThoughtWorks, XingCloud, etc.)
• Currently more than 100k LOC
• M1 Alpha available via
http://www.apache.org/dyn/closer.cgi/incubator/drill/drill-1.0.0-m1-incubating/
40. Kudos to …
• Julian Hyde, Pentaho
• Lisen Mu, XingCloud
• Tim Chen, Microsoft
• Chris Merrick, RJMetrics
• David Alves, UT Austin
• Sree Vaadi, SSS
• Srihari Srinivasan, ThoughtWorks
• Alexandre Beche, CERN
• Jason Altekruse, MapR
• Ben Becker, MapR
• Jacques Nadeau, MapR
• Ted Dunning, MapR
• Keys Botzum, MapR
• Jason Frantz
• Ellen Friedman
• Chris Wensel, Concurrent
• Gera Shegalov, Oracle
• Ryan Rawson, Ohm Data
http://incubator.apache.org/drill/team.html
42. Engage!
• Follow @ApacheDrill on Twitter
• Sign up for the mailing lists (user | dev)
http://incubator.apache.org/drill/mailing-lists.html
• Standing G+ hangouts every Tuesday at 5pm GMT
http://j.mp/apache-drill-hangouts
• Keep an eye on http://drill-user.org/
Editor's notes
http://solr-vs-elasticsearch.com/
(This is an ASR-35 at a DEC mainframe; other console terminals used were Teletype Model 35s.) Allowing the user to issue ad-hoc queries is essential: often, the user might not know ahead of time what queries to issue, and may need to react to changing circumstances. The lack of tools to perform interactive ad-hoc analysis at scale is a gap that Apache Drill fills.
Hive: compiles to MapReduce. Aster: external tables in MPP. Oracle/MySQL: export MR results to the RDBMS. Drill, Impala, CitusDB: real-time.
Suppose a marketing analyst is experimenting with ways to target user segments for the next campaign. She needs access to web logs stored in Hadoop, to user profiles stored in MongoDB, and to transaction data stored in a conventional database.
Geo-spatial + time series data with highly discriminative queries (timeframe, region, etc.)
Re ad-hoc: you might not know ahead of time what queries you will want to make, and you may need to react to changing circumstances.
Two innovations: handling nested data column-style (column-striped representation) and multi-level execution trees.
Repetition levels (r): at what repeated field in the field's path the value has repeated. Definition levels (d): how many fields in the path that could be undefined (because they are optional or repeated) are actually present. Only repeated fields increment the repetition level; only non-required fields increment the definition level. Required fields are always defined and do not need a definition level; non-repeated fields do not need a repetition level. An optional field requires one extra bit: zero if it is NULL, one if it is defined. NULL values do not need to be stored, as the definition level captures this information.
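For the simplest case, a single repeated field at the top level, the encoding described above can be demonstrated directly. The records and field name below are invented for illustration; nested paths need higher maximum levels than this sketch shows:

```python
# Toy column-striping of one repeated field "values" (max r = 1, max d = 1):
# the first value in a record gets repetition level 0, later values get 1;
# a record with an empty list still contributes one (r=0, d=0) entry so
# that record boundaries can be reassembled from the column alone.
records = [
    {"values": [10, 20]},
    {"values": []},
    {"values": [30]},
]

column = []  # entries of (repetition_level, definition_level, value)
for rec in records:
    if not rec["values"]:
        column.append((0, 0, None))  # nothing defined below the record level
    else:
        for i, v in enumerate(rec["values"]):
            column.append((0 if i == 0 else 1, 1, v))
```

Reading the column back, every r=0 entry starts a new record, and d=0 entries carry no value, which is how NULLs and empty lists cost no value storage.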
Source query: a human-written (e.g. DSL) or tool-written (e.g. SQL/ANSI-compliant) query. The source query is parsed and transformed to produce the logical plan. Logical plan: a dataflow of what should logically be done. Typically the logical plan lives in memory in the form of Java objects, but it also has a textual form. The logical plan is then transformed and optimized into the physical plan. The optimizer introduces parallel computation, taking topology into account, and handles columnar data to improve processing speed. The physical plan represents the actual structure of computation as it is done by the system: how physical and exchange operators should be applied, assignment to particular nodes and cores, and the actual query execution per node.
Drillbits run per node to maximize data locality. Coordination, query planning, optimization, scheduling, and execution are distributed. By default, Drillbits hold all roles; modules can optionally be disabled. Any node/Drillbit can act as the endpoint for a particular query.
Zookeeper maintains ephemeral cluster membership information only. A small distributed cache utilizing embedded Hazelcast maintains information about individual queue depth, cached query plans, metadata, locality information, etc.
The originating Drillbit acts as foreman and manages all execution for its particular query, scheduling based on priority, queue depth, and locality information. Drillbit data communication is streaming and avoids any serialization/deserialization.
Red: the originating Drillbit is the root of the multi-level execution tree, per query/job. Leaves use their storage engine interface to scan the respective data source (DB, file, etc.).