Presto is a distributed SQL query engine that was developed by Facebook to make SQL queries scalable for large datasets. It translates SQL queries into multiple parallel tasks that can process data across many servers without using intermediate storage. This allows Presto to handle millions of records per second. Presto is now open source and used by many companies for interactive analysis of petabyte-scale datasets.
2. How do we make SQL scalable?
• Problem
  • Count accesses to each web page:
  • SELECT page, count(*) FROM weblog
    GROUP BY page
• Challenge
  • How do you process millions of records per second?
  • Making SQL scalable enough to handle large datasets
3. Hive
• Translates SQL into MapReduce (Hadoop) programs
• MapReduce:
  • Does the same job using many machines
[Diagram: a single-CPU job vs. distributed processing — input tables A and B on HDFS are split into blocks (A0–A2, B0–B3), passed through map, merge, and reduce phases, and the results are written back to HDFS]
4. SQL to MapReduce
• Mapping SQL stages onto a MapReduce program
• SELECT page, count(*) FROM weblog
  GROUP BY page
[Diagram: the query plan — TableScan(weblog) → GroupBy(hash(page)) → count(weblog of a page) → result — mapped onto the split/map/merge/reduce phases, with HDFS holding the input and intermediate data]
5. HDFS is the bottleneck
• HDFS (Hadoop Distributed File System)
  • Used for storing intermediate results
  • Provides fault tolerance, but is slow
[Diagram: the same MapReduce pipeline — TableScan(weblog) → GroupBy(hash(page)) → count(weblog of a page) → result — highlighting the HDFS writes between stages where intermediate results are stored]
6. Presto
• Distributed query engine developed by Facebook
• Uses HTTP for data transfer
• No intermediate storage like HDFS
• No fault-tolerance (but failure rate is less than 0.2%)
• Pipelining data transfer and data processing
[Diagram: the same query plan — TableScan(weblog) → GroupBy(hash(page)) → count(weblog of a page) → result — executed as a single pipeline over the split/map/merge/reduce phases, with no HDFS writes between stages]
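Presto's staged, pipelined plan for a query can be inspected with EXPLAIN; a minimal sketch, reusing the weblog query from the earlier slides (the table and column names are the deck's examples):

```sql
-- Show the distributed plan (stages and data exchanges) that
-- Presto will pipeline for the page-count query.
EXPLAIN (TYPE DISTRIBUTED)
SELECT page, count(*)
FROM weblog
GROUP BY page;
```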
7. Architecture Comparison

|                      | Hive                 | Presto                                            | Spark                                                  | BigQuery                      |
|----------------------|----------------------|---------------------------------------------------|--------------------------------------------------------|-------------------------------|
| Performance          | Slow                 | Fast                                              | Fast                                                   | Ultra fast (using many disks) |
| Intermediate storage | HDFS                 | None                                              | Memory/Disk                                            | Colossus (?)                  |
| Data transfer        | HTTP                 | HTTP                                              | HTTP                                                   | ?                             |
| Query execution      | Stage-wise MapReduce | Runs all stages at once (pipelining)              | Stage-wise                                             | ?                             |
| Fault tolerance      | Yes                  | None (but TD will retry the query from scratch)   | Yes, but limited                                       | ?                             |
| Multiple job support | Good (can handle many jobs) | Limited (~5 concurrent queries per account in TD) | Requires another resource manager (e.g., YARN, Mesos) | Limited (query queue)         |
8. Presto Usage Stats
• More than 99.8% of queries finish without any error
• ~90% of queries finish within 1 minute
• Treasure Data's Presto stats:
  • Processing more than 100,000 queries / day
  • Processing 15 trillion records / day
• Facebook's stats:
  • 30,000~100,000 queries / day
  • 1 trillion records / day
• Treasure Data is the No. 1 Presto user in the world
10. Presto Overview
• A distributed SQL engine developed by Facebook
  • For interactive analysis on petabyte-scale datasets
  • A replacement for Hive
  • Nov. 2013: open sourced on GitHub
  • Facebook now has 12 engineers working on Presto
• Code
  • In-memory query engine, written in Java
  • Based on ANSI SQL syntax
  • Isolates the query execution layer from the storage access layer
    • A connector provides the data access methods
    • Cassandra / Hive / JMX / Kafka / MySQL / PostgreSQL / MongoDB / System / TPCH connectors
    • td-presto is our connector for accessing PlazmaDB (a columnar MessagePack database)
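Because storage access is isolated behind connectors, one query can read from several of them at once via catalog-qualified table names. A sketch, assuming a `hive` and a `mysql` catalog are configured (the schema and table names below are illustrative, not from the deck):

```sql
-- The same SQL syntax reaches different storage systems through
-- catalog.schema.table names; each catalog is backed by a connector.
SELECT h.page, m.owner
FROM hive.web.weblog AS h
JOIN mysql.crm.page_owners AS m
  ON h.page = m.page;
```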
24. Utilizing Time Index
• Data is split into 1-hour partitions by time column-based partitioning
• TD_TIME_RANGE(time, '2015-09-29 02:00:00', '2015-09-29 03:00:00')
  • Hive/Presto reads only the matching 1-hour partitions (partial scan)
• TD_TIME_RANGE(non_time_column, '2015-09-29 02:00:00', '2015-09-29 03:00:00')
  • Hive/Presto scans the whole data set (full scan)
[Diagram: a timeline of 1-hour partitions (2015-09-29 01:00:00, 02:00:00, 03:00:00, …); the time-column query reads a single partition to produce its results, while the non-time-column query must read every partition]
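The partial-scan vs. full-scan contrast above can be written as two queries, using the TD_TIME_RANGE UDF from the slide (the table and the `updated_at` column are illustrative):

```sql
-- Partial scan: the predicate is on the time column, so only the
-- 1-hour partitions inside the range are read.
SELECT count(*) FROM weblog
WHERE TD_TIME_RANGE(time, '2015-09-29 02:00:00', '2015-09-29 03:00:00');

-- Full scan: the predicate is on a non-time column, so every
-- partition must be read.
SELECT count(*) FROM weblog
WHERE TD_TIME_RANGE(updated_at, '2015-09-29 02:00:00', '2015-09-29 03:00:00');
```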
25. Queries with huge results
• SELECT col1, col2, col3, … FROM …
  • Presto reads the query results back as JSON through a single-threaded task: slow
• INSERT INTO (table) SELECT col1, col2, …
  • or CREATE TABLE AS
  • Directly creates 1-hour partitions (msgpack.gz on Amazon S3) from the query results
  • Runs in parallel: fast
[Diagram: single-threaded JSON download of query results vs. parallel creation of 1-hour partitions on S3]
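The slow and fast paths above can be sketched as follows (`weblog_summary` is an illustrative table name):

```sql
-- Slow: a huge result set streams back as JSON through a
-- single-threaded output task.
SELECT col1, col2, col3 FROM weblog;

-- Fast: the results are written out as 1-hour partitions in parallel.
CREATE TABLE weblog_summary AS
SELECT col1, col2, col3 FROM weblog;

-- Or append into an existing table:
INSERT INTO weblog_summary
SELECT col1, col2, col3 FROM weblog;
```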
26. Memory-Consuming Operators
• DISTINCT col1, col2, … (duplicate elimination)
  • Needs to store the whole data set in a single node
• COUNT(DISTINCT col1), etc.
  • Use approx_distinct(col1) instead
• ORDER BY col1, col2, …
  • A single-node task (in Presto)
• UNION
  • Performs duplicate elimination (single node)
  • Use UNION ALL instead
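The two rewrites recommended above look like this in practice (`user_id` and the yearly tables are illustrative names):

```sql
-- Exact distinct count: gathers all values on a single node.
SELECT count(DISTINCT user_id) FROM weblog;

-- Approximate distinct count: runs distributed with bounded memory,
-- at the cost of a small standard error.
SELECT approx_distinct(user_id) FROM weblog;

-- UNION deduplicates rows on a single node; UNION ALL skips
-- deduplication and stays fully distributed.
SELECT page FROM weblog_2014
UNION ALL
SELECT page FROM weblog_2015;
```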
27. Finding bottlenecks
• Table scan range
  • Check the TD_TIME_RANGE condition
• DISTINCT
  • Duplicate elimination across all selected columns (single node)
  • Slow and memory consuming
• Huge result output
  • The output stage (Stage 0) becomes the bottleneck
  • Use DROP TABLE IF EXISTS …, then CREATE TABLE AS SELECT …
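Putting the checklist together, a query that sidesteps all three bottlenecks might look like this sketch (`page_stats` and `user_id` are illustrative names):

```sql
-- Recreate the result table instead of downloading a huge result set
-- through the single-threaded output stage.
DROP TABLE IF EXISTS page_stats;

CREATE TABLE page_stats AS
SELECT
  page,
  approx_distinct(user_id) AS users  -- distributed, unlike COUNT(DISTINCT ...)
FROM weblog
WHERE TD_TIME_RANGE(time, '2015-09-29 00:00:00', '2015-09-30 00:00:00')  -- partial scan
GROUP BY page;
```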