Building data products requires a Lambda Architecture to bridge batch and streaming processing. AirStream is a framework built on top of Apache Spark that allows users to easily build data products at Airbnb. It has proven Spark impactful and useful in production for mission-critical data products.
On the streaming side, hear how AirStream integrates multiple ecosystems with Spark Streaming, such as HBase, Elasticsearch, MySQL, DynamoDB, Memcache, and Redis. On the batch side, learn how to apply the same computation logic in Spark over large data sets from Hive and S3. The speakers will also walk through a few production use cases and share several best practices for managing Spark jobs in production.
9. Computation
Liyin Tang and Jingwei Lu
Streaming/Batch
process: [{
  name = process_example,
  type = sql,
  sql = """
    SELECT listing_id, checkin_date, context.source AS source
    FROM source_example
    WHERE user_id IS NOT NULL
  """
}]
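The `sql` process operator above runs a query against a named upstream source. As a minimal sketch of what that query computes, here is the same filter-and-project expressed with Python's built-in sqlite3 in place of Spark SQL; the nested field `context.source` is flattened to a plain `context_source` column because sqlite has no struct types, and all table contents are made up for illustration.

```python
import sqlite3

# Stand-in for the `source_example` input of the process operator.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE source_example "
    "(listing_id INTEGER, checkin_date TEXT, context_source TEXT, user_id INTEGER)"
)
conn.executemany(
    "INSERT INTO source_example VALUES (?, ?, ?, ?)",
    [
        (1, "2017-06-01", "web", 42),
        (2, "2017-06-02", "ios", None),      # dropped: NULL user_id
        (3, "2017-06-03", "android", 7),
    ],
)

# Same shape as the config's SQL: project three columns, filter NULL users.
rows = conn.execute(
    "SELECT listing_id, checkin_date, context_source AS source "
    "FROM source_example WHERE user_id IS NOT NULL"
).fetchall()
print(rows)  # [(1, '2017-06-01', 'web'), (3, '2017-06-03', 'android')]
```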
10. Sinks
Streaming
sink: [
  {
    name = sink_example
    input = process_example
    type = hbase_update
    hbase_table_name = test_table
    bulk_upload = false
  }
]
Batch
sink: [
  {
    name = sink_example
    input = process_example
    type = hbase_update
    hbase_table_name = test_table
    bulk_upload = true
  }
]
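The two sink configs differ only in the `bulk_upload` flag: the streaming job writes per-record puts, while the batch job stages everything and loads it in one shot. A minimal sketch of that dispatch, with an in-memory dict standing in for HBase and all function names assumed for illustration:

```python
# Hypothetical sketch: one sink config drives two write paths.
# bulk_upload = false -> per-record puts (streaming)
# bulk_upload = true  -> staged bulk load (batch, stands in for HFile + BulkLoad)
def run_sink(config, records, hbase):
    table = hbase.setdefault(config["hbase_table_name"], {})
    if config["bulk_upload"]:
        # Batch path: stage all rows, then load them at once.
        table.update(dict(records))
        return "bulkload"
    # Streaming path: issue one put per record.
    for key, value in records:
        table[key] = value
    return "puts"

hbase = {}
mode = run_sink(
    {"hbase_table_name": "test_table", "bulk_upload": True},
    [("row1", "a"), ("row2", "b")],
    hbase,
)
print(mode, hbase["test_table"])  # bulkload {'row1': 'a', 'row2': 'b'}
```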
12. Unified API through AirStream
• Declarative job configuration
• Streaming source vs. static source
• Computation operators and sinks can be shared by streaming and batch jobs
• Computation flow is shared by streaming and batch
• A single driver executes the job in both streaming and batch mode
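The bullets above can be sketched as a single driver whose computation flow is defined once, with only the source differing by mode. Everything here is illustrative, not AirStream's actual API:

```python
# Single-driver sketch: shared computation flow, mode-dependent source.
def computation_flow(records):
    # Shared operator: identical logic for a stream micro-batch
    # or a static batch load.
    return [r for r in records if r.get("user_id") is not None]

def run(mode):
    if mode == "streaming":
        source = [{"user_id": 1}, {"user_id": None}]   # stands in for a micro-batch
    else:
        source = [{"user_id": None}, {"user_id": 2}]   # stands in for a Hive/S3 scan
    return computation_flow(source)

print(run("streaming"))  # [{'user_id': 1}]
print(run("batch"))      # [{'user_id': 2}]
```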
14. AirStream: Shared Global State Store
[Diagram: multiple Spark Streaming and Spark Batch jobs reading and writing shared HBase tables]
15. Why HBase
• Well integrated with the Hadoop ecosystem
• Efficient API for streaming writes and bulk uploads
• Rich API for sequential scans and point lookups
• Merged view based on version
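The last bullet, the merged view based on version, is what lets streaming and batch writers share one table safely: each cell keeps multiple versioned values and a read returns the newest. A toy sketch of that semantics (assumed here, with a dict standing in for an HBase table):

```python
from collections import defaultdict

# rowkey -> list of (version, value); versions merge at read time.
store = defaultdict(list)

def put(rowkey, value, version):
    store[rowkey].append((version, value))

def get(rowkey):
    # Merged view: the highest version wins, regardless of write order.
    return max(store[rowkey])[1]

put("listing#1", "v-batch", version=100)    # e.g. written by a batch job
put("listing#1", "v-stream", version=200)   # e.g. written by a streaming job
print(get("listing#1"))  # v-stream
```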
16. Unified Write API
[Diagram: a DataFrame is re-partitioned to match HBase region boundaries — <Region 1, [RowKey, Value]> … <Region N, [RowKey, Value]> — then written either as streaming Puts or as HFiles via BulkLoad]
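The write path above re-partitions the DataFrame so each partition holds exactly the rowkeys of one HBase region before issuing Puts or generating HFiles. A minimal sketch of that region-aligned partitioning, with made-up region boundaries:

```python
import bisect

# Each region covers [start_key, next_start_key); boundaries are illustrative.
region_starts = ["", "g", "p"]

def region_for(rowkey):
    # Index of the last region start key <= rowkey.
    return bisect.bisect_right(region_starts, rowkey) - 1

def repartition(rows):
    # Group (rowkey, value) pairs by owning region, like a shuffle
    # keyed on region id.
    parts = {i: [] for i in range(len(region_starts))}
    for rowkey, value in rows:
        parts[region_for(rowkey)].append((rowkey, value))
    return parts

parts = repartition([("apple", 1), ("mango", 2), ("zebra", 3), ("grape", 4)])
print(parts)
# {0: [('apple', 1)], 1: [('mango', 2), ('grape', 4)], 2: [('zebra', 3)]}
```

Aligning partitions to regions means each task writes to a single region server, which is what makes both the Put path and the HFile/BulkLoad path efficient.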
17. Rich Read API
[Diagram: Spark Streaming/Batch jobs read HBase tables via Multi-Gets, Prefix Scans, and Time Range Scans]
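The three read patterns rely on HBase's sorted rowkey space. A toy sketch of each against an in-memory table, where the `<id>#<epoch>` key layout is an assumption made for illustration:

```python
# Sorted-rowkey store standing in for an HBase table.
table = {
    "user1#100": "a",
    "user1#200": "b",
    "user2#150": "c",
}
sorted_keys = sorted(table)

def multi_get(keys):
    # Point lookups for an explicit set of rowkeys.
    return {k: table[k] for k in keys if k in table}

def prefix_scan(prefix):
    # Sequential scan over a contiguous key range sharing a prefix.
    return {k: table[k] for k in sorted_keys if k.startswith(prefix)}

def time_range_scan(prefix, start, end):
    # Assumes a timestamp encoded in the rowkey after '#'.
    return {
        k: table[k]
        for k in sorted_keys
        if k.startswith(prefix) and start <= int(k.split("#")[1]) < end
    }

print(multi_get(["user1#100", "user2#150"]))  # {'user1#100': 'a', 'user2#150': 'c'}
print(prefix_scan("user1#"))                  # {'user1#100': 'a', 'user1#200': 'b'}
print(time_range_scan("user1#", 150, 250))    # {'user1#200': 'b'}
```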
23. Database Snapshot: Move Elephant
• Large amount of data: multiple large MySQL DBs
• Realtime-ness: minutes to hours of delay
• Transactions: need to preserve transactions across different tables
• Schema change: table schemas evolve
24. Binlog Replay on Spark
[Diagram: latency comparison between the existing seed/spinal tap pipeline (20+ hr, 4+ hr) and the AirStream job (5 mins, 15 mins, 1 hr)]
25. Lambda Architecture
• Streaming and batch share logic: binlog file reader, DDL processor, transaction processor, DML processor
• Merged by binlog position: <filenum, offset>
• Idempotent: the log can be replayed multiple times
• Schema changes: full schema change history
[Diagram: the binlog (realtime/history) of a MySQL instance, carrying DML, DDL, and XID events, flows through the Log Parser, Transaction Processor, Change Processor, and Schema Processor into HBase]
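The merge-by-binlog-position rule above is what makes replay idempotent: every mutation carries a `<filenum, offset>` position, and only a strictly newer position overwrites the stored value, so re-applying the same log is a no-op. A minimal sketch of that rule (names and data assumed for illustration):

```python
# rowkey -> (binlog position, value); position is <filenum, offset>.
state = {}

def apply_mutation(rowkey, value, position):
    current = state.get(rowkey)
    # Last-writer-wins by binlog position; duplicates are no-ops.
    if current is None or position > current[0]:
        state[rowkey] = (position, value)

log = [
    ("user:1", "alice", (3, 100)),
    ("user:1", "alicia", (3, 250)),
]

for entry in log:   # first replay
    apply_mutation(*entry)
for entry in log:   # second replay: idempotent, state unchanged
    apply_mutation(*entry)

print(state["user:1"])  # ((3, 250), 'alicia')
```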
41. Thrift -> DataFrame
https://github.com/airbnb/airbnb-spark-thrift
[Diagram: a Thrift event's class provides field metadata, which maps to a StructType; the Thrift object's field values map to a Row; rows form a DataFrame]
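The conversion above splits into two mappings: field metadata to a schema, and field values to a row. A rough sketch of that shape in Python (not the airbnb-spark-thrift implementation, which is Scala; the metadata and event below are made up):

```python
# Assumed Thrift field metadata: ordered (name, type) pairs.
field_meta = [("listing_id", "i64"), ("source", "string")]

def to_struct_type(meta):
    # Field metadata -> schema, like a Spark StructType's (name, type) fields.
    return [(name, ttype) for name, ttype in meta]

def to_row(obj, meta):
    # Thrift object -> ordered tuple of field values, like a Spark Row.
    return tuple(obj[name] for name, _ in meta)

event = {"listing_id": 42, "source": "ios"}   # stands in for a Thrift object
schema = to_struct_type(field_meta)
row = to_row(event, field_meta)
print(schema)  # [('listing_id', 'i64'), ('source', 'string')]
print(row)     # (42, 'ios')
```

Keeping the metadata ordered is the key design point: the same field order defines both the schema and every row, so rows line up with the StructType when the DataFrame is assembled.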