Unified Big Data Processing 
with Apache Spark 
Matei Zaharia 
@matei_zaharia
What is Apache Spark? 
Fast & general engine for big data processing 
Generalizes the MapReduce model to support more types 
of processing 
Most active open source project in big data
About Databricks 
Founded by the creators of Spark in 2013 
Continues to drive open source Spark development, 
and offers a cloud service (Databricks Cloud) 
Partners with Cloudera, MapR, Hortonworks, and DataStax 
to support Spark
Spark Community 
[Bar charts: commits and lines of code changed over the past 6 months for MapReduce, YARN, HDFS, Storm, and Spark.]
Community Growth 
[Chart: contributors per month to Spark, 2010 to 2014.]
2-3x more activity than Hadoop, Storm, 
MongoDB, NumPy, D3, Julia, …
Overview 
Why a unified engine? 
Spark execution model 
Why was Spark so general? 
What’s next
History: Cluster Programming Models 
2004
MapReduce 
A general engine for batch processing
Beyond MapReduce 
MapReduce was great for batch processing, but users 
quickly needed to do more: 
> More complex, multi-pass algorithms 
> More interactive ad-hoc queries 
> More real-time stream processing 
Result: many specialized systems for these workloads
Big Data Systems Today 
General batch processing: MapReduce 
Specialized systems for new workloads: Pregel, Giraph, Presto, Storm, Dremel, Drill, Impala, S4, …
Problems with Specialized Systems 
More systems to manage, tune, deploy 
Can’t combine processing types in one application 
> Even though many pipelines need to do this! 
> E.g. load data with SQL, then run machine learning 
In many pipelines, data exchange between 
engines is the dominant cost!
Big Data Systems Today 
General batch processing: MapReduce 
Specialized systems for new workloads: Pregel, Giraph, Presto, Storm, Dremel, Drill, Impala, S4, … 
Unified engine: ?
Overview 
Why a unified engine? 
Spark execution model 
Why was Spark so general? 
What’s next
Background 
Recall 3 workloads were issues for MapReduce: 
> More complex, multi-pass algorithms 
> More interactive ad-hoc queries 
> More real-time stream processing 
While these look different, all 3 need one thing that 
MapReduce lacks: efficient data sharing
Data Sharing in MapReduce 
[Diagram: each iteration reads its input from HDFS and writes its result back to HDFS; each interactive query (query 1, 2, 3, …) re-reads the input from HDFS to produce its result.]
Slow due to data replication and disk I/O
What We’d Like 
[Diagram: iterations pass data through distributed memory instead of HDFS; after one-time processing, queries 1, 2, 3, … all run against the in-memory data.]
10-100× faster than network and disk
Spark Model 
Resilient Distributed Datasets (RDDs) 
> Collections of objects that can be stored in memory or on disk 
across a cluster 
> Built via parallel transformations (map, filter, …) 
> Fault-tolerant without replication
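A minimal PySpark sketch of these properties (the input path is a placeholder): an RDD is built from parallel transformations and can be kept in memory, spilling to disk if needed.

from pyspark import SparkContext, StorageLevel

sc = SparkContext(appName="RDDSketch")

# Build an RDD via parallel transformations (placeholder input path).
nums = sc.textFile("hdfs://.../numbers.txt").map(lambda s: int(s)).filter(lambda n: n > 0)

# Store it in memory, spilling to disk if it does not fit.
nums.persist(StorageLevel.MEMORY_AND_DISK)

print(nums.count())   # first action computes and caches the partitions
print(nums.sum())     # later actions reuse the cached data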
Example: Log Mining 
Load error messages from a log into memory, then 
interactively search for various patterns 
lines = spark.textFile("hdfs://...")                      # base RDD
errors = lines.filter(lambda s: s.startswith("ERROR"))    # transformed RDD
messages = errors.map(lambda s: s.split("\t")[2])
messages.cache()

messages.filter(lambda s: "foo" in s).count()             # action
messages.filter(lambda s: "bar" in s).count()
. . .

[Diagram: the driver ships tasks to workers; each worker reads one block of the input, caches its partition of messages (Cache 1-3), and returns results to the driver.]

Full-text search of Wikipedia in <1 sec (vs 20 sec for on-disk data)
Fault Tolerance 
RDDs track lineage info to rebuild lost data 
file.map(lambda rec: (rec.type, 1)) 
.reduceByKey(lambda x, y: x + y) 
.filter(lambda (type, count): count > 10) 
[Lineage: input file → map → reduceByKey → filter]
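Not on the original slide, but as a concrete aside: the lineage Spark records can be inspected from the API, which is what makes rebuilding lost partitions possible. A minimal PySpark sketch (placeholder path):

pairs = (sc.textFile("hdfs://.../events.txt")
           .map(lambda line: (line.split(",")[0], 1))
           .reduceByKey(lambda x, y: x + y)
           .filter(lambda kv: kv[1] > 10))

# Print the lineage graph Spark would use to recompute lost partitions.
print(pairs.toDebugString())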
Example: Logistic Regression 
[Chart: running time (s) vs number of iterations (1, 5, 10, 20, 30) for Hadoop and Spark. Hadoop takes ~110 s per iteration; Spark takes ~80 s for the first iteration and ~1 s for later iterations.]
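A sketch of the kind of hand-rolled iterative logistic regression behind this benchmark (not the benchmark code itself; the path and the "label f1 f2 ..." data format are assumptions). Parsing and caching the points once is what makes every iteration after the first fast:

import numpy as np
from pyspark import SparkContext

sc = SparkContext(appName="LogRegSketch")

def parse(line):
    vals = [float(x) for x in line.split()]
    return (np.array(vals[1:]), vals[0])      # (features, label in {-1, +1})

# Parsed once, cached in memory, reused by every iteration.
points = sc.textFile("hdfs://.../lr_data.txt").map(parse).cache()

D = len(points.first()[0])                    # number of features
w = np.zeros(D)

for i in range(20):
    grad = points.map(lambda p:
        (1.0 / (1.0 + np.exp(-p[1] * w.dot(p[0]))) - 1.0) * p[1] * p[0]
    ).reduce(lambda a, b: a + b)
    w -= grad

print(w)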
Spark in Scala and Java 
// Scala:
val lines = sc.textFile(...)
lines.filter(s => s.contains("ERROR")).count()

// Java:
JavaRDD<String> lines = sc.textFile(...);
lines.filter(s -> s.contains("ERROR")).count();
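For comparison (not on the original slide), the same count in Python:

# Python (placeholder path, mirroring the snippets above):
lines = sc.textFile("hdfs://...")
lines.filter(lambda s: "ERROR" in s).count()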
How General Is It?
Libraries Built on Spark 
Spark Core 
> Spark SQL (relational) 
> Spark Streaming (real-time) 
> MLlib (machine learning) 
> GraphX (graph)
Spark SQL 
Represents tables as RDDs 
Tables = Schema + Data = SchemaRDD 
From Hive:
c = HiveContext(sc)
rows = c.sql("select text, year from hivetable")
rows.filter(lambda r: r.year > 2013).collect()

From JSON (tweets.json):
{"text": "hi",
 "user": {
    "name": "matei",
    "id": 123
 }}

c.jsonFile("tweets.json").registerTempTable("tweets")
c.sql("select text, user.name from tweets")
Spark Streaming 
[Diagram: the input stream is chopped up by time into a series of RDDs.]
Represents streams as a series of RDDs over time 
val spammers = sc.sequenceFile("hdfs://spammers.seq")
sc.twitterStream(...)
  .filter(t => t.text.contains("QCon"))
  .transform(tweets => tweets.map(t => (t.user, t)).join(spammers))
  .print()
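A rough sketch of the same series-of-RDDs idea in the Python Streaming API (added in Spark 1.2); the socket source and the "QCon" filter are placeholders, and there is no built-in Twitter source here:

from pyspark.streaming import StreamingContext

ssc = StreamingContext(sc, batchDuration=1)            # 1-second batches

lines = ssc.socketTextStream("localhost", 9999)        # placeholder source
lines.filter(lambda line: "QCon" in line).pprint()     # each batch is an RDD

ssc.start()
ssc.awaitTermination()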
MLlib 
Vectors, Matrices = RDD[Vector] 
Iterative computation 
points = sc.textFile("data.txt").map(parsePoint)
model = KMeans.train(points, 10)
model.predict(newPoint)
GraphX 
Represents graphs as RDDs of edges and vertices
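GraphX itself is a Scala API; purely as a conceptual sketch (the vertex and edge data are made up), the same edges-and-vertices representation in plain RDDs looks like this:

# Vertices: (vertex id, attribute). Edges: (src id, dst id, attribute).
vertices = sc.parallelize([(1, "alice"), (2, "bob"), (3, "carol")])
edges = sc.parallelize([(1, 2, "follows"), (2, 3, "follows")])

# Attach each edge's source-vertex attribute with a join.
by_src = edges.map(lambda e: (e[0], (e[1], e[2])))
print(by_src.join(vertices).collect())
# [(1, ((2, 'follows'), 'alice')), (2, ((3, 'follows'), 'bob'))]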
Combining Processing Types 
// Load data using SQL
val points = ctx.sql(
  "select latitude, longitude from historic_tweets")

// Train a machine learning model
val model = KMeans.train(points, 10)

// Apply it to a stream
sc.twitterStream(...)
  .map(t => (model.closestCenter(t.location), 1))
  .reduceByWindow("5s", _ + _)
Composing Workloads 
Separate systems: each stage (ETL, train, query) reads its input from HDFS and writes its output back to HDFS before the next stage starts. 
Spark: a single pipeline, HDFS read → ETL → train → query → HDFS write.
Performance vs Specialized Systems 
[Bar charts. SQL response time (sec): Hive, Impala (disk), Impala (mem), Spark (disk), Spark (mem). ML response time (min): Mahout, GraphLab, Spark. Streaming throughput (MB/s/node): Storm, Spark.]
On-Disk Performance: Petabyte Sort 
Spark beat last year’s Sort Benchmark winner, Hadoop, 
by 3× using 10× fewer machines 
               2013 Record (Hadoop)   Spark 100 TB   Spark 1 PB
Data Size      102.5 TB               100 TB         1000 TB
Time           72 min                 23 min         234 min
Nodes          2100                   206            190
Cores          50400                  6592           6080
Rate/Node      0.67 GB/min            20.7 GB/min    22.5 GB/min
tinyurl.com/spark-sort
Overview 
Why a unified engine? 
Spark execution model 
Why was Spark so general? 
What’s next
Why was Spark so General? 
In a world of growing data complexity, understanding 
this can help us design new tools / pipelines 
Two perspectives: 
> Expressiveness perspective 
> Systems perspective
1. Expressiveness Perspective 
Spark ≈ MapReduce + fast data sharing
1. Expressiveness Perspective 
MapReduce can emulate any distributed system! 
[Diagram: one MR step = local computation followed by all-to-all communication; emulating a system means chaining many such steps.] 
How to share data quickly across steps? Spark: RDDs 
How low is this latency? Spark: ~100 ms
2. Systems Perspective 
Main bottlenecks in clusters are network and I/O 
Any system that lets apps control these resources can 
match the speed of specialized ones 
In Spark: 
> Users control data partitioning & caching 
> We implement the data structures and algorithms of 
specialized systems within Spark records
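A small sketch of what controlling partitioning and caching looks like in practice (paths and partition count are placeholders): both datasets are hash-partitioned the same way and one of them is cached, so repeated joins reuse the same in-memory layout.

users = (sc.textFile("hdfs://.../users.csv")
           .map(lambda line: (line.split(",")[0], line))   # key by user id
           .partitionBy(64)        # choose the partitioning explicitly
           .cache())               # keep the partitioned data in memory

events = (sc.textFile("hdfs://.../events.csv")
            .map(lambda line: (line.split(",")[0], line))
            .partitionBy(64))      # same layout as users

print(users.join(events).count())  # repeated joins reuse the cached users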
Examples 
Spark SQL 
> A SchemaRDD holds records for each chunk of data 
(multiple rows), with columnar compression 
GraphX 
> GraphX represents graphs as an RDD of HashMaps so 
that it can join quickly against each partition
Result 
Spark can leverage most of the latest innovations in 
databases, graph processing, machine learning, … 
Users get a single API that composes very efficiently 
More info: tinyurl.com/matei-thesis
Overview 
Why a unified engine? 
Spark execution model 
Why was Spark so general? 
What’s next
What’s Next for Spark 
While Spark has been around since 2009, many pieces 
are just beginning 
300 contributors, 2 whole libraries new this year 
Big features in the works
Spark 1.2 (Coming in Dec) 
New machine learning pipelines API 
> Featurization & parameter search, similar to SciKit-Learn 
Python API for Spark Streaming 
Spark SQL pluggable data sources 
> Hive, JSON, Parquet, Cassandra, ORC, … 
Scala 2.11 support
Beyond Hadoop 
[Diagram: batch, interactive, and streaming applications ("your application here") sit on top of Spark's unified API, which runs over Hadoop, Cassandra, Mesos, public clouds, and more.]
Unified API across workloads, storage systems 
and environments
Learn More 
Downloads and tutorials: spark.apache.org 
Training: databricks.com/training (free videos) 
Databricks Cloud: databricks.com/cloud
www.spark-summit.org