© 2015 IBM Corporation
DSK-3576
Apache Spark:
What's Under the Hood?
Adarsh Pannu
Senior Technical Staff Member
IBM Analytics Platform
adarshrp@us.ibm.com
Abstract
This session covers Spark's
architectural components through
the life of simple Spark jobs. Those
already familiar with the basic
Spark API will gain deeper
knowledge of how Spark works with
the potential of becoming an
advanced Spark user, administrator
or contributor.
Outline
Why Understand Spark Internals? To Write Better Applications.
●  Simple Spark Application
●  Resilient Distributed Datasets
●  Cluster architecture
●  Job Execution through Spark components
●  Tasks & Scheduling
●  Shuffle
●  Memory management
●  Writing better Spark applications: Tips and Tricks
First Spark Application
On-Time Arrival Performance Dataset
Record of every US airline flight since
1980s.
Fields:
Year, Month, DayofMonth
UniqueCarrier,FlightNum
DepTime, ArrTime, ActualElapsedTime
ArrDelay, DepDelay
Origin, Dest, Distance
Cancelled, ...
“Where, When, How Long? ...”
First Spark Application (contd.)
Year,Month,DayofMonth,DayOfWeek,DepTime,CRSDepTime,ArrTime,CRSArrTime,UniqueCarrier,FlightNum,TailNum,ActualElapsedTime,CRSElapsedTime,AirTime,ArrDelay,DepDelay,Origin,Dest,Distance,TaxiIn,TaxiOut,Cancelled,CancellationCode,Diverted,CarrierDelay,WeatherDelay,NASDelay,SecurityDelay,LateAircraftDelay

2004,2,5,4,1521,1530,1837,1838,CO,65,N67158,376,368,326,-1,-9,EWR,LAX,2454,...

(Callouts on the sample row: Year, Month, DayofMonth; DepTime; UniqueCarrier; FlightNum; ActualElapsedTime; Origin; Dest; Distance)
First Spark Application (contd.)
Which airports handle the most airline carriers?
!  Small airports (e.g. Ithaca) are only served by a few airlines
!  Larger airports (e.g. Newark) handle dozens of carriers
In SQL, this translates to:
SELECT Origin, count(distinct UniqueCarrier)
FROM flights
GROUP BY Origin
// Read data rows
sc.textFile("hdfs:///.../flights").
   // 2004,2,5...,AA,..,EWR,...   2004,3,22,...,UA,...SFO,...   2004,3,22,...,AA,...EWR,..

// Extract Airport (key) and Airline (value)
map(row => (row.split(",")(16), row.split(",")(8))).
   // (EWR, AA)  (SFO, UA)  (EWR, UA)

// Group by Airport
groupByKey.
   // (EWR, [AA, UA])  (SFO, [UA])

// Discard duplicate pairs, and compute group size
mapValues(values => values.toSet.size).
   // (EWR, 2)  (SFO, 1)

// Return results to client
collect
   // [ (EWR, 2), (SFO, 1) ]
sc.textFile("hdfs:///.../flights").                      // Base RDD
map(row => (row.split(",")(16), row.split(",")(8))).     // Transformed RDDs
groupByKey.
mapValues(values => values.toSet.size).
collect                                                  // Action
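The same pipeline can be traced without a cluster: plain Scala collections support the analogous operations, which makes the data flow easy to inspect. A minimal sketch (the sample rows are made up, but column 16 is Origin and column 8 is UniqueCarrier as in the real dataset):

```scala
// Trace the pipeline on in-memory rows; List stands in for the RDD.
val rows = List(
  "2004,2,5,4,1521,1530,1837,1838,AA,65,N67158,376,368,326,-1,-9,EWR,LAX,2454",
  "2004,3,22,0,0,0,0,0,UA,10,N1,0,0,0,0,0,SFO,LAX,0",
  "2004,3,22,0,0,0,0,0,UA,11,N2,0,0,0,0,0,EWR,LAX,0"
)

val result = rows
  .map(row => (row.split(",")(16), row.split(",")(8)))        // (Origin, Carrier)
  .groupBy(_._1)                                              // groupByKey analogue
  .map { case (k, pairs) => (k, pairs.map(_._2).toSet.size) } // distinct carriers per airport
```

On this sample, `result` is `Map("EWR" -> 2, "SFO" -> 1)`, matching the collected output shown above.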
What’s an RDD?
CO780, IAH, MCI
CO683, IAH, MSY
CO1707, TPA, IAH
...
UA620, SJC, ORD
UA675, ORD, SMF
UA676, ORD, LGA
...
DL282, ATL, CVG
DL2128, CHS, ATL
DL2592, PBI, LGA
DL417, FLL, ATL
...
Resilient Distributed Datasets
•  Key abstraction in Spark
Immutable collection of objects
•  Distributed across machines
Can be operated on in parallel
Can hold any kind of data
Hadoop datasets
Parallelized Scala collections
RDBMS or NoSQL, ...
Can recover from failures, be cached, ...
Resilient Distributed Datasets (RDD)
1.  Set of partitions (“splits” in Hadoop)
2.  List of dependencies on parent RDDs
3.  Function to compute a partition given its parent(s)
4.  (Optional) Partitioner (hash, range)
5.  (Optional) Preferred location(s)
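These five properties correspond to members of Spark's internal `RDD` base class. A schematic, with signatures abridged from the Spark source (not runnable on its own):

```scala
// Schematic of Spark's RDD contract (abridged from RDD.scala)
abstract class RDD[T] {
  def compute(split: Partition, context: TaskContext): Iterator[T]         // 3. compute a partition
  protected def getPartitions: Array[Partition]                            // 1. set of partitions
  protected def getDependencies: Seq[Dependency[_]]                        // 2. parent dependencies
  val partitioner: Option[Partitioner] = None                              // 4. (optional) partitioner
  protected def getPreferredLocations(split: Partition): Seq[String] = Nil // 5. (optional) locations
}
```

Every concrete RDD (HadoopRDD, MapPartitionsRDD, ShuffledRDD, ...) is just an implementation of this interface.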
Lineage enables fault recovery and optimized execution.
We’ve written the code, what next?
Spark supports four different cluster managers:
●  Local: Useful only for development
●  Standalone: Bundled with Spark, doesn’t play well with other
applications
●  YARN
●  Mesos
Each mode has a similar “logical” architecture although physical
details differ in terms of which/where processes and threads are
launched.
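The cluster manager is chosen via the master URL passed to spark-submit. Illustrative invocations (host/port values and `app.jar` are placeholders; exact flags vary by Spark version):

```shell
spark-submit --master local[4]          app.jar   # Local: 4 threads in one JVM, development only
spark-submit --master spark://host:7077 app.jar   # Standalone manager bundled with Spark
spark-submit --master yarn              app.jar   # YARN (cluster taken from Hadoop config)
spark-submit --master mesos://host:5050 app.jar   # Mesos
```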
Spark Cluster Architecture: Logical View
Driver represents the application. It runs the main() function.
SparkContext is the main entry point for Spark functionality.
Represents the connection to a Spark cluster.
Executor runs tasks and keeps data in memory or disk storage
across them. Each application has its own executors.
[Diagram: Driver Program (SparkContext) → Cluster Manager → Executors, each with a Cache and running Tasks]
What’s Inside an Executor?
[Diagram: an Executor is a single JVM with internal threads. It holds running tasks and free task slots, cached RDD partitions (e.g. RDD1-1, RDD2-3), broadcast variables (Broadcast-1, Broadcast-2), and other global memory for shuffle, transport, GC, etc.]
Spark Cluster Architecture: Physical View
[Diagram: a Driver coordinating three nodes. Each node runs one Executor (a single JVM with internal threads) executing three tasks and caching RDD partitions: Executor 1 on Node 1 holds P1-P3, Executor 2 on Node 2 holds P5, P6, P1, and Executor 3 on Node 3 holds P3, P4, P2.]
Ok, can we get back to the code?
Spark Execution Model
sc.textFile("...").
map(row =>...).
groupByKey.
mapValues(...).
collect
Application Code → RDD DAG → DAG and Task Scheduler → Tasks on Executor(s)
Spark builds DAGs
DAG: Directed (arrows), Acyclic (no loops), Graph
•  Spark applications are
written in a functional
style.
•  Internally, Spark turns a
functional pipeline into
a graph of RDD
objects.
sc.textFile("hdfs:///.../flightdata").
map(row => (row.split(",")(16), row.split(",")(8))).
groupByKey.
mapValues(values => values.toSet.size).
collect
Spark builds DAGs (contd.)
HadoopRDD
MapPartitionsRDD
MapPartitionsRDD
ShuffleRDD
MapPartitionsRDD
(Dataset-level view and partition-level view)
•  Spark has a rich
collection of operations
that generally map to
RDD classes
•  HadoopRDD,
FilteredRDD, JoinRDD,
etc.
DAG Scheduler
Stage 1: textFile → map | Stage 2: groupByKey → mapValues → collect
•  Split the DAG into
stages
•  A stage marks the
boundaries of a
pipeline
•  One stage is
completed before
starting the next
stage
DAG Scheduler: Pipelining
2004,2,5...,AA,..,EWR,...  2004,3,22,...,UA,...SFO,...  2004,3,22,...,AA,...EWR,..
→ (EWR, AA) (SFO, UA) (EWR, UA)
→ (EWR, [AA, UA]) (SFO, [UA])
Task Scheduler (contd.)
textFile
map
Stage 1
•  Turn the Stages into Tasks
•  Task = Data + Computation
•  Ship tasks across the cluster
•  All RDDs in the stage have the same number of partitions
•  One task computes each pipeline
Task Scheduler (contd.)
[Diagram: Stage 1 (textFile → map) becomes four tasks, one per input partition of hdfs://.../flights. Each task pairs the stage's computation with one partition's data: Task 1 ↔ partition 1, ..., Task 4 ↔ partition 4.]
Task Scheduler (contd.)
[Diagram: with 3 cores and 4 HDFS partitions, tasks 1-3 run concurrently and task 4 starts once a core frees up. Each task executes the HadoopRDD → MapPartitionsRDD pipeline for its own partition.]
Task Scheduler: The Shuffle
Stage 1: textFile → map | Stage 2: groupByKey → mapValues → collect
•  The shuffle between the two stages is an all-to-all data movement
Task Scheduler: The Shuffle (contd.)
•  Redistributes data among partitions
•  Typically hash-partitioned but can have user-defined partitioner
•  Avoided when possible, if data is already properly partitioned
•  Partial aggregation reduces data movement
!  Similar to Map-Side Combine in MapReduce
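The partial-aggregation point can be sketched in plain Scala (no Spark needed; the lists stand in for partitions and the counts are made up). Each "map-side" partition combines its records locally before anything crosses the network, and the "reduce side" merges the partial results:

```scala
// Two input "partitions" of (airport, count) records.
val partition1 = List(("EWR", 1), ("EWR", 1), ("SFO", 1))
val partition2 = List(("EWR", 1), ("SFO", 1), ("SFO", 1))

// Map-side combine: each partition pre-aggregates locally,
// so one record per distinct key leaves the partition.
def combineLocally(p: List[(String, Int)]): Map[String, Int] =
  p.groupBy(_._1).map { case (k, vs) => (k, vs.map(_._2).sum) }

// Without partial aggregation, 6 records would be shuffled; with it, only 4.
val shuffled = List(combineLocally(partition1), combineLocally(partition2))

// Reduce side merges the partial sums.
val merged = shuffled.flatten
  .groupBy(_._1).map { case (k, vs) => (k, vs.map(_._2).sum) }
```

Here `merged` is `Map("EWR" -> 3, "SFO" -> 3)`, identical to aggregating all six records in one place, but with less data moved.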
Task Scheduler: The Shuffle (contd.)
•  Shuffle writes intermediate files to disk
•  These files are pulled by the next stage
•  Two algorithms: sort-based (new/default) and hash-based (older)
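The hash partitioning mentioned above decides which reduce-side partition receives each key. A minimal sketch of the idea (a simplified stand-in for Spark's HashPartitioner, not its actual code):

```scala
// Assign a key to one of numPartitions buckets by hashCode modulo,
// kept non-negative so it is a valid partition index.
def partitionFor(key: Any, numPartitions: Int): Int = {
  val raw = key.hashCode % numPartitions
  if (raw < 0) raw + numPartitions else raw
}

val parts = List("EWR", "SFO", "JFK").map(k => partitionFor(k, 2))
// Every occurrence of the same key maps to the same partition,
// so all values for a key meet in a single reduce-side task.
```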
groupByKey
•  After shuffle, each “reduce-side”
partition contains all groups of
the same key
•  Within each partition,
groupByKey builds an in-
memory hash map
•  EWR -> [UA, AA, ...]
•  SFO -> [UA, ...]
•  JFK -> [DL, UA, AA, DL, ...]
•  All values for a single key must fit in memory; the hash map as a whole can be spilled to disk in its entirety
Task Execution
•  Spark jobs can have any number
of stages (1, 2, ... N)
•  There’s a shuffle between stages
•  The last stage ends with an
“action” – sending results back to
client, writing files to disk, etc.
Writing better Spark applications
How can we optimize our application? Key considerations:
•  Partitioning – How many tasks in each stage?
•  Shuffle – How much data moved across nodes?
•  Memory pressure – Usually a byproduct of partitioning and
shuffling
•  Code path length – Is your code optimal? Are you using the best
available Spark API for the job?
Writing better Spark applications: Partitioning
•  Too few partitions?
!  Less concurrency
!  More susceptible to data skew
!  Increased memory pressure
•  Too many partitions?
!  Over-partitioning leads to very short-running tasks
!  Administrative overheads outweigh the benefits of parallelism
•  Need a reasonable number of partitions
!  Usually a function of the number of cores in the cluster (~2x is a good
rule of thumb)
!  Ensure task execution time > serialization time
Writing better Spark applications: Memory
•  Symptoms
!  Bad performance
!  Executor failures (OutOfMemory errors)
•  Resolution
!  Use GC and other traces to track memory usage
!  Give Spark more memory
!  Tweak memory distribution between User memory vs Spark
memory
!  Increase number of partitions
!  Look at your code!
Rewriting our application
sc.textFile("hdfs://localhost:9000/user/Adarsh/flights").
map(row => (row.split(",")(16), row.split(",")(8))).
groupByKey.
mapValues(values => values.toSet.size).
collect
Partitioning
Shuffle size
Memory pressure
Code path length
Rewriting our application (contd.)
sc.textFile("hdfs://localhost:9000/user/Adarsh/flights").
map(row => (row.split(",")(16), row.split(",")(8))).
repartition(16).
groupByKey.
mapValues(values => values.toSet.size).
collect
How many stages do you see?
Answer: 3
Rewriting our application (contd.)
sc.textFile("hdfs://localhost:9000/user/Adarsh/flights").
map(row => (row.split(",")(16), row.split(",")(8))).
repartition(16).
distinct.
groupByKey.
mapValues(values => values.toSet.size).
collect
How many stages do you see now?
Answer: 4
Rewriting our application (contd.)
sc.textFile("hdfs://localhost:9000/user/Adarsh/flights").
map(row => (row.split(",")(16), row.split(",")(8))).
distinct(numPartitions = 16).
groupByKey.
mapValues(values => values.size).
collect
Rewriting our application (contd.)
sc.textFile("hdfs://localhost:9000/user/Adarsh/flights").
map(row => {
val cols = row.split(",")
(cols(16), cols(8))
}).
distinct(numPartitions = 16).
groupByKey.
mapValues(values => values.size).
collect
Rewriting our application (contd.)
sc.textFile("hdfs://localhost:9000/user/Adarsh/flights").
map(row => {
val cols = row.split(",")
(cols(16), cols(8))
}).
distinct(numPartitions = 16).
map(e => (e._1, 1)).
reduceByKey(_ + _).
collect
Original
sc.textFile("hdfs://localhost:9000/user/Adarsh/flights").
map(row => (row.split(",")(16), row.split(",")(8))).
groupByKey.
mapValues(values => values.toSet.size).
collect
Revised
sc.textFile("hdfs://localhost:9000/user/Adarsh/flights").
map(row => {
val cols = row.split(",")
(cols(16), cols(8))
}).
distinct(numPartitions = 16).
map(e => (e._1, 1)).
reduceByKey(_ + _).
collect
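The two versions compute the same answer; only the shape of the work differs. That can be checked in plain Scala on a small made-up sample (column 16 = Origin, column 8 = UniqueCarrier, as before; `groupBy` stands in for groupByKey and a grouped sum stands in for reduceByKey):

```scala
val rows = List(
  "2004,2,5,4,0,0,0,0,AA,1,N1,0,0,0,0,0,EWR,LAX,1",
  "2004,2,6,5,0,0,0,0,AA,2,N2,0,0,0,0,0,EWR,SFO,1",
  "2004,2,7,6,0,0,0,0,UA,3,N3,0,0,0,0,0,EWR,ORD,1",
  "2004,2,8,0,0,0,0,0,UA,4,N4,0,0,0,0,0,SFO,LAX,1"
)
val pairs = rows.map { row =>
  val cols = row.split(",")
  (cols(16), cols(8))
}

// Original shape: group everything, then dedupe inside each group.
val original = pairs.groupBy(_._1)
  .map { case (k, vs) => (k, vs.map(_._2).toSet.size) }

// Revised shape: dedupe first, then count with a sum (reduceByKey analogue).
val revised = pairs.distinct
  .map(e => (e._1, 1))
  .groupBy(_._1)
  .map { case (k, vs) => (k, vs.map(_._2).sum) }
```

Both yield `Map("EWR" -> 2, "SFO" -> 1)`. The difference at scale is that the revised version never materializes a per-key value list, which is what kills the original under memory pressure.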
Original: OutOfMemory error after running for several minutes on my laptop.
Revised: completed in seconds.
Writing better Spark applications: Configuration
How many Executors per Node?
--num-executors OR spark.executor.instances
How many tasks can each Executor run simultaneously?
--executor-cores OR spark.executor.cores
How much memory does an Executor have?
--executor-memory OR spark.executor.memory
How is the memory divided inside an Executor?
spark.storage.memoryFraction and spark.shuffle.memoryFraction
How is the data stored? Partitioned? Compressed?
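Putting those knobs together in one submission (the sizes are illustrative, not recommendations; `spark.storage.memoryFraction` and `spark.shuffle.memoryFraction` are Spark 1.x properties, replaced by unified memory management in later releases):

```shell
spark-submit \
  --num-executors 6 \
  --executor-cores 4 \
  --executor-memory 8G \
  --conf spark.storage.memoryFraction=0.4 \
  --conf spark.shuffle.memoryFraction=0.4 \
  app.jar
```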
As you can see, writing Spark jobs is easy. However, doing so in
an efficient manner takes some know-how.
Want to learn more about Spark?
______________________________________________
Reference slides are at the end of this slide deck.
Notices and Disclaimers
Copyright © 2015 by International Business Machines Corporation (IBM). No part of this document may be reproduced or transmitted in any form
without written permission from IBM.
U.S. Government Users Restricted Rights - Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM.
Information in these presentations (including information relating to products that have not yet been announced by IBM) has been reviewed for
accuracy as of the date of initial publication and could include unintentional technical or typographical errors. IBM shall have no responsibility to
update this information. THIS DOCUMENT IS DISTRIBUTED "AS IS" WITHOUT ANY WARRANTY, EITHER EXPRESS OR IMPLIED. IN NO
EVENT SHALL IBM BE LIABLE FOR ANY DAMAGE ARISING FROM THE USE OF THIS INFORMATION, INCLUDING BUT NOT LIMITED TO,
LOSS OF DATA, BUSINESS INTERRUPTION, LOSS OF PROFIT OR LOSS OF OPPORTUNITY. IBM products and services are warranted
according to the terms and conditions of the agreements under which they are provided.
Any statements regarding IBM's future direction, intent or product plans are subject to change or withdrawal without notice.
Performance data contained herein was generally obtained in a controlled, isolated environments. Customer examples are presented as
illustrations of how those customers have used IBM products and the results they may have achieved. Actual performance, cost, savings or other
results in other operating environments may vary.
References in this document to IBM products, programs, or services does not imply that IBM intends to make such products, programs or
services available in all countries in which IBM operates or does business.
Workshops, sessions and associated materials may have been prepared by independent session speakers, and do not necessarily reflect the
views of IBM. All materials and discussions are provided for informational purposes only, and are neither intended to, nor shall constitute legal or
other guidance or advice to any individual participant or their specific situation.
It is the customer’s responsibility to insure its own compliance with legal requirements and to obtain advice of competent legal counsel as to the
identification and interpretation of any relevant laws and regulatory requirements that may affect the customer’s business and any actions the
customer may need to take to comply with such laws. IBM does not provide legal advice or represent or warrant that its services or products will
ensure that the customer is in compliance with any law.
Notices and Disclaimers (con’t)
Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly
available sources. IBM has not tested those products in connection with this publication and cannot confirm the accuracy of performance,
compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the
suppliers of those products. IBM does not warrant the quality of any third-party products, or the ability of any such third-party products to
interoperate with IBM’s products. IBM EXPRESSLY DISCLAIMS ALL WARRANTIES, EXPRESSED OR IMPLIED, INCLUDING BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.
The provision of the information contained herein is not intended to, and does not, grant any right or license under any IBM patents, copyrights,
trademarks or other intellectual property right.
•  IBM, the IBM logo, ibm.com, Aspera®, Bluemix, Blueworks Live, CICS, Clearcase, Cognos®, DOORS®, Emptoris®, Enterprise Document
Management System™, FASP®, FileNet®, Global Business Services ®, Global Technology Services ®, IBM ExperienceOne™, IBM
SmartCloud®, IBM Social Business®, Information on Demand, ILOG, Maximo®, MQIntegrator®, MQSeries®, Netcool®, OMEGAMON,
OpenPower, PureAnalytics™, PureApplication®, pureCluster™, PureCoverage®, PureData®, PureExperience®, PureFlex®, pureQuery®,
pureScale®, PureSystems®, QRadar®, Rational®, Rhapsody®, Smarter Commerce®, SoDA, SPSS, Sterling Commerce®, StoredIQ,
Tealeaf®, Tivoli®, Trusteer®, Unica®, urban{code}®, Watson, WebSphere®, Worklight®, X-Force® and System z® Z/OS, are trademarks of
International Business Machines Corporation, registered in many jurisdictions worldwide. Other product and service names might be
trademarks of IBM or other companies. A current list of IBM trademarks is available on the Web at "Copyright and trademark information" at:
www.ibm.com/legal/copytrade.shtml.
© 2015 IBM Corporation
Thank You
We Value Your Feedback!
Don’t forget to submit your Insight session and speaker
feedback! Your feedback is very important to us – we use it
to continually improve the conference.
Access your surveys at insight2015survey.com to quickly
submit your surveys from your smartphone, laptop or
conference kiosk.
Acknowledgements
Some of the material in this session was informed and inspired
by presentations done by:
Matei Zaharia (Creator of Apache Spark)
Reynold Xin
Aaron Davidson
Patrick Wendell
... and dozens of other Spark contributors.
Reference Slides
How deep do you want to go?

•  Intro: What is Spark? How does it relate to Hadoop? When would you use it? (1-2 hours)
•  Basic: Understand basic technology and write simple programs (1-2 days)
•  Intermediate: Start writing complex Spark programs even as you understand operational aspects (5-15 days, to weeks and months)
•  Expert: Become a Spark Black Belt! Know Spark inside out. (Months to years)
Intro Spark
Go through these additional presentations to understand the value of Spark. These
speakers also attempt to differentiate Spark from Hadoop, and enumerate its comparative
strengths. (Not much code here)
"  Turning Data into Value, Ion Stoica, Spark Summit 2013 Video & Slides 25 mins
https://spark-summit.org/2013/talk/turning-data-into-value
"  Spark: What’s in it for your business? Adarsh Pannu 60 mins
IBM Insight Conference 2015
"  How Companies are Using Spark, and Where the Edge in Big Data Will Be, Matei
Zaharia, Video & Slides 12 mins
http://conferences.oreilly.com/strata/strata2014/public/schedule/detail/33057
"  Spark Fundamentals I (Lesson 1 only), Big Data University <20 mins
https://bigdatauniversity.com/bdu-wp/bdu-course/spark-fundamentals/
Basic Spark
" Pick up some Scala through this article co-authored
by Scala’s creator, Martin Odersky. Link
http://www.artima.com/scalazine/articles/steps.html
Estimated time: 2 hours
Basic Spark (contd.)
"  Do these two courses. They cover Spark basics and include a
certification. You can use the supplied Docker images for all other
labs.
7 hours
Basic Spark (contd.)
"  Go to spark.apache.org and study the Overview and the
Spark Programming Guide. Many online courses borrow
liberally from this material. Information on this site is
updated with every new Spark release.
Estimated 7-8 hours.
Intermediate Spark
"  Stay at spark.apache.org. Go through the component specific
Programming Guides as well as the sections on Deploying and More.
Browse the Spark API as needed.
Estimated time 3-5 days and more.
Intermediate Spark (contd.)
•  Learn about the operational aspects of Spark:
"  Advanced Apache Spark (DevOps) 6 hours # EXCELLENT!
Video https://www.youtube.com/watch?v=7ooZ4S7Ay6Y
"  Tuning and Debugging Spark Slides 48 mins
Video https://www.youtube.com/watch?v=kkOG_aJ9KjQ
•  Gain a high-level understanding of Spark architecture:
"  Introduction to AmpLab Spark Internals, Matei Zaharia, 1 hr 15 mins
Video https://www.youtube.com/watch?v=49Hr5xZyTEA
"  A Deeper Understanding of Spark Internals, Aaron Davidson, 44 mins
Video https://www.youtube.com/watch?v=dmL0N3qfSc8
PDF https://spark-summit.org/2014/wp-content/uploads/2014/07/A-Deeper-Understanding-of-Spark-Internals-Aaron-Davidson.pdf
Intermediate Spark (contd.)
•  Experiment, experiment, experiment ...
"  Setup your personal 3-4 node cluster
"  Download some “open” data. E.g. “airline” data on
stat-computing.org/dataexpo/2009/
"  Write some code, make it run, see how it performs, tune it, trouble-shoot it
"  Experiment with different deployment modes (Standalone + YARN)
"  Play with different configuration knobs, check out dashboards, etc.
"  Explore all subcomponents (especially Core, SQL, MLLib)
Advanced Spark: Original Papers

Read the original academic papers
"  Resilient Distributed Datasets: A Fault-Tolerant Abstraction for In-Memory Cluster Computing, Matei Zaharia, et al.
"  Discretized Streams: An Efficient and Fault-Tolerant Model for Stream Processing on Large Clusters, Matei Zaharia, et al.
"  GraphX: A Resilient Distributed Graph System on Spark, Reynold S. Xin, et al.
"  Spark SQL: Relational Data Processing in Spark, Michael Armbrust, et al.
Advanced Spark: Enhance your Scala skills
This book by
Odersky is excellent
but it isn’t meant to
give you a quick
start. It’s deep stuff.
"  Use this as your
primary Scala text
"  Excellent MOOC by Odersky. Some of
the material is meant for CS majors.
Highly recommended for STC
developers.
35+ hours
Advanced Spark: Browse Conference Proceedings
Spark Summits cover technology and use cases. Technology is also covered in
various other places so you could consider skipping those tracks. Don’t forget to
check out the customer stories. That is how we learn about enablement
opportunities and challenges, and in some cases, we can see through the Spark
hype ☺
100+ hours of FREE videos and associated PDFs available on spark-summit.org. You don’t even have to pay the conference fee! Go back in time and “attend” these conferences!
Advanced Spark: Browse YouTube Videos
YouTube is full of training videos, some good, other not so
much. These are the only channels you need to watch though.
There is a lot of repetition in the material, and some of the
videos are from the conferences mentioned earlier.
Advanced Spark: Check out these books
Provides a good overview of Spark; much of the material is also available through the other sources previously mentioned.
Covers concrete statistical analysis /
machine learning use cases. Covers
Spark APIs and MLLib. Highly
recommended for data scientists.
Advanced Spark: Yes ... read the code
Even if you don’t intend to contribute to Spark, there are a ton of valuable
comments in the code that provide insights into Spark’s design and these will
help you write better Spark applications. Don’t be shy! Go to github.com/apache/spark and check it out.

 
Memory Management in Apache Spark
Memory Management in Apache SparkMemory Management in Apache Spark
Memory Management in Apache Spark
 

Similaire à Apache Spark: What's under the hood

OVERVIEW ON SPARK.pptx
OVERVIEW ON SPARK.pptxOVERVIEW ON SPARK.pptx
OVERVIEW ON SPARK.pptxAishg4
 
Fast Data Analytics with Spark and Python
Fast Data Analytics with Spark and PythonFast Data Analytics with Spark and Python
Fast Data Analytics with Spark and PythonBenjamin Bengfort
 
Spark Overview and Performance Issues
Spark Overview and Performance IssuesSpark Overview and Performance Issues
Spark Overview and Performance IssuesAntonios Katsarakis
 
Introduction to Apache Spark :: Lagos Scala Meetup session 2
Introduction to Apache Spark :: Lagos Scala Meetup session 2 Introduction to Apache Spark :: Lagos Scala Meetup session 2
Introduction to Apache Spark :: Lagos Scala Meetup session 2 Olalekan Fuad Elesin
 
Apache Spark for Beginners
Apache Spark for BeginnersApache Spark for Beginners
Apache Spark for BeginnersAnirudh
 
Apache Spark II (SparkSQL)
Apache Spark II (SparkSQL)Apache Spark II (SparkSQL)
Apache Spark II (SparkSQL)Datio Big Data
 
SCALABLE MONITORING USING PROMETHEUS WITH APACHE SPARK
SCALABLE MONITORING USING PROMETHEUS WITH APACHE SPARKSCALABLE MONITORING USING PROMETHEUS WITH APACHE SPARK
SCALABLE MONITORING USING PROMETHEUS WITH APACHE SPARKzmhassan
 
Scalable Monitoring Using Prometheus with Apache Spark Clusters with Diane F...
 Scalable Monitoring Using Prometheus with Apache Spark Clusters with Diane F... Scalable Monitoring Using Prometheus with Apache Spark Clusters with Diane F...
Scalable Monitoring Using Prometheus with Apache Spark Clusters with Diane F...Databricks
 
Architecting and productionising data science applications at scale
Architecting and productionising data science applications at scaleArchitecting and productionising data science applications at scale
Architecting and productionising data science applications at scalesamthemonad
 
A look under the hood at Apache Spark's API and engine evolutions
A look under the hood at Apache Spark's API and engine evolutionsA look under the hood at Apache Spark's API and engine evolutions
A look under the hood at Apache Spark's API and engine evolutionsDatabricks
 
Extreme Apache Spark: how in 3 months we created a pipeline that can process ...
Extreme Apache Spark: how in 3 months we created a pipeline that can process ...Extreme Apache Spark: how in 3 months we created a pipeline that can process ...
Extreme Apache Spark: how in 3 months we created a pipeline that can process ...Josef A. Habdank
 
Apache Spark Introduction.pdf
Apache Spark Introduction.pdfApache Spark Introduction.pdf
Apache Spark Introduction.pdfMaheshPandit16
 
Apache spark - Spark's distributed programming model
Apache spark - Spark's distributed programming modelApache spark - Spark's distributed programming model
Apache spark - Spark's distributed programming modelMartin Zapletal
 
Profiling & Testing with Spark
Profiling & Testing with SparkProfiling & Testing with Spark
Profiling & Testing with SparkRoger Rafanell Mas
 
Bring the Spark To Your Eyes
Bring the Spark To Your EyesBring the Spark To Your Eyes
Bring the Spark To Your EyesDemi Ben-Ari
 
A Java Implementer's Guide to Better Apache Spark Performance
A Java Implementer's Guide to Better Apache Spark PerformanceA Java Implementer's Guide to Better Apache Spark Performance
A Java Implementer's Guide to Better Apache Spark PerformanceTim Ellison
 

Similaire à Apache Spark: What's under the hood (20)

OVERVIEW ON SPARK.pptx
OVERVIEW ON SPARK.pptxOVERVIEW ON SPARK.pptx
OVERVIEW ON SPARK.pptx
 
Fast Data Analytics with Spark and Python
Fast Data Analytics with Spark and PythonFast Data Analytics with Spark and Python
Fast Data Analytics with Spark and Python
 
Apache Spark
Apache SparkApache Spark
Apache Spark
 
Spark Overview and Performance Issues
Spark Overview and Performance IssuesSpark Overview and Performance Issues
Spark Overview and Performance Issues
 
Introduction to Apache Spark :: Lagos Scala Meetup session 2
Introduction to Apache Spark :: Lagos Scala Meetup session 2 Introduction to Apache Spark :: Lagos Scala Meetup session 2
Introduction to Apache Spark :: Lagos Scala Meetup session 2
 
Apache Spark for Beginners
Apache Spark for BeginnersApache Spark for Beginners
Apache Spark for Beginners
 
Apache Spark II (SparkSQL)
Apache Spark II (SparkSQL)Apache Spark II (SparkSQL)
Apache Spark II (SparkSQL)
 
SCALABLE MONITORING USING PROMETHEUS WITH APACHE SPARK
SCALABLE MONITORING USING PROMETHEUS WITH APACHE SPARKSCALABLE MONITORING USING PROMETHEUS WITH APACHE SPARK
SCALABLE MONITORING USING PROMETHEUS WITH APACHE SPARK
 
Scalable Monitoring Using Prometheus with Apache Spark Clusters with Diane F...
 Scalable Monitoring Using Prometheus with Apache Spark Clusters with Diane F... Scalable Monitoring Using Prometheus with Apache Spark Clusters with Diane F...
Scalable Monitoring Using Prometheus with Apache Spark Clusters with Diane F...
 
Architecting and productionising data science applications at scale
Architecting and productionising data science applications at scaleArchitecting and productionising data science applications at scale
Architecting and productionising data science applications at scale
 
A look under the hood at Apache Spark's API and engine evolutions
A look under the hood at Apache Spark's API and engine evolutionsA look under the hood at Apache Spark's API and engine evolutions
A look under the hood at Apache Spark's API and engine evolutions
 
Extreme Apache Spark: how in 3 months we created a pipeline that can process ...
Extreme Apache Spark: how in 3 months we created a pipeline that can process ...Extreme Apache Spark: how in 3 months we created a pipeline that can process ...
Extreme Apache Spark: how in 3 months we created a pipeline that can process ...
 
Spark on YARN
Spark on YARNSpark on YARN
Spark on YARN
 
Apache Spark Introduction.pdf
Apache Spark Introduction.pdfApache Spark Introduction.pdf
Apache Spark Introduction.pdf
 
Hadoop and Spark
Hadoop and SparkHadoop and Spark
Hadoop and Spark
 
Apache spark - Spark's distributed programming model
Apache spark - Spark's distributed programming modelApache spark - Spark's distributed programming model
Apache spark - Spark's distributed programming model
 
Profiling & Testing with Spark
Profiling & Testing with SparkProfiling & Testing with Spark
Profiling & Testing with Spark
 
Bring the Spark To Your Eyes
Bring the Spark To Your EyesBring the Spark To Your Eyes
Bring the Spark To Your Eyes
 
A Java Implementer's Guide to Better Apache Spark Performance
A Java Implementer's Guide to Better Apache Spark PerformanceA Java Implementer's Guide to Better Apache Spark Performance
A Java Implementer's Guide to Better Apache Spark Performance
 
Spark architechure.pptx
Spark architechure.pptxSpark architechure.pptx
Spark architechure.pptx
 

Dernier

Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 365Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 3652toLead Limited
 
TrustArc Webinar - How to Build Consumer Trust Through Data Privacy
TrustArc Webinar - How to Build Consumer Trust Through Data PrivacyTrustArc Webinar - How to Build Consumer Trust Through Data Privacy
TrustArc Webinar - How to Build Consumer Trust Through Data PrivacyTrustArc
 
Powerpoint exploring the locations used in television show Time Clash
Powerpoint exploring the locations used in television show Time ClashPowerpoint exploring the locations used in television show Time Clash
Powerpoint exploring the locations used in television show Time Clashcharlottematthew16
 
What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024Stephanie Beckett
 
Hyperautomation and AI/ML: A Strategy for Digital Transformation Success.pdf
Hyperautomation and AI/ML: A Strategy for Digital Transformation Success.pdfHyperautomation and AI/ML: A Strategy for Digital Transformation Success.pdf
Hyperautomation and AI/ML: A Strategy for Digital Transformation Success.pdfPrecisely
 
DSPy a system for AI to Write Prompts and Do Fine Tuning
DSPy a system for AI to Write Prompts and Do Fine TuningDSPy a system for AI to Write Prompts and Do Fine Tuning
DSPy a system for AI to Write Prompts and Do Fine TuningLars Bell
 
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024BookNet Canada
 
How to write a Business Continuity Plan
How to write a Business Continuity PlanHow to write a Business Continuity Plan
How to write a Business Continuity PlanDatabarracks
 
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024BookNet Canada
 
The Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and ConsThe Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and ConsPixlogix Infotech
 
TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024Lonnie McRorey
 
Anypoint Exchange: It’s Not Just a Repo!
Anypoint Exchange: It’s Not Just a Repo!Anypoint Exchange: It’s Not Just a Repo!
Anypoint Exchange: It’s Not Just a Repo!Manik S Magar
 
Connect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationConnect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationSlibray Presentation
 
Streamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupStreamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupFlorian Wilhelm
 
Take control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test SuiteTake control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test SuiteDianaGray10
 
Unleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubUnleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubKalema Edgar
 
Commit 2024 - Secret Management made easy
Commit 2024 - Secret Management made easyCommit 2024 - Secret Management made easy
Commit 2024 - Secret Management made easyAlfredo García Lavilla
 
Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024Enterprise Knowledge
 
H2O.ai CEO/Founder: Sri Ambati Keynote at Wells Fargo Day
H2O.ai CEO/Founder: Sri Ambati Keynote at Wells Fargo DayH2O.ai CEO/Founder: Sri Ambati Keynote at Wells Fargo Day
H2O.ai CEO/Founder: Sri Ambati Keynote at Wells Fargo DaySri Ambati
 
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)Mark Simos
 

Dernier (20)

Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 365Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 365
 
TrustArc Webinar - How to Build Consumer Trust Through Data Privacy
TrustArc Webinar - How to Build Consumer Trust Through Data PrivacyTrustArc Webinar - How to Build Consumer Trust Through Data Privacy
TrustArc Webinar - How to Build Consumer Trust Through Data Privacy
 
Powerpoint exploring the locations used in television show Time Clash
Powerpoint exploring the locations used in television show Time ClashPowerpoint exploring the locations used in television show Time Clash
Powerpoint exploring the locations used in television show Time Clash
 
What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024
 
Hyperautomation and AI/ML: A Strategy for Digital Transformation Success.pdf
Hyperautomation and AI/ML: A Strategy for Digital Transformation Success.pdfHyperautomation and AI/ML: A Strategy for Digital Transformation Success.pdf
Hyperautomation and AI/ML: A Strategy for Digital Transformation Success.pdf
 
DSPy a system for AI to Write Prompts and Do Fine Tuning
DSPy a system for AI to Write Prompts and Do Fine TuningDSPy a system for AI to Write Prompts and Do Fine Tuning
DSPy a system for AI to Write Prompts and Do Fine Tuning
 
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
 
How to write a Business Continuity Plan
How to write a Business Continuity PlanHow to write a Business Continuity Plan
How to write a Business Continuity Plan
 
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
 
The Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and ConsThe Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and Cons
 
TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024
 
Anypoint Exchange: It’s Not Just a Repo!
Anypoint Exchange: It’s Not Just a Repo!Anypoint Exchange: It’s Not Just a Repo!
Anypoint Exchange: It’s Not Just a Repo!
 
Connect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationConnect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck Presentation
 
Streamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupStreamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project Setup
 
Take control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test SuiteTake control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test Suite
 
Unleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubUnleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding Club
 
Commit 2024 - Secret Management made easy
Commit 2024 - Secret Management made easyCommit 2024 - Secret Management made easy
Commit 2024 - Secret Management made easy
 
Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024
 
H2O.ai CEO/Founder: Sri Ambati Keynote at Wells Fargo Day
H2O.ai CEO/Founder: Sri Ambati Keynote at Wells Fargo DayH2O.ai CEO/Founder: Sri Ambati Keynote at Wells Fargo Day
H2O.ai CEO/Founder: Sri Ambati Keynote at Wells Fargo Day
 
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
 

Apache Spark: What's under the hood

  • 7.
// Read data rows
sc.textFile("hdfs:///.../flights").                      // 2004,2,5...,AA,..,EWR,...  2004,3,22,...,UA,...SFO,...  2004,3,22,...,AA,...EWR,..
// Extract Airport (key) and Airline (value)
map(row => (row.split(",")(16), row.split(",")(8))).     // (EWR, AA)  (SFO, UA)  (EWR, UA)
// Group by Airport
groupByKey.                                              // (EWR, [AA, UA])  (SFO, [UA])
// Discard duplicate pairs, and compute group size
mapValues(values => values.toSet.size).                  // (EWR, 2)  (SFO, 1)
// Return results to client
collect                                                  // [ (EWR, 2), (SFO, 1) ]
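The pipeline above can be dry-run without a cluster: Scala's collection API mirrors the RDD operations used here, so a plain-collections sketch (with made-up sample rows, not the real dataset) reproduces the result:

```scala
// Made-up rows in the dataset's column order: UniqueCarrier is column 8, Origin is column 16
val rows = List(
  "2004,2,5,4,1521,1530,1837,1838,AA,65,N67158,376,368,326,-1,-9,EWR,LAX,2454",
  "2004,3,22,1,0730,0735,1045,1050,UA,21,N12345,135,135,120,-5,-5,SFO,ORD,1846",
  "2004,3,22,1,0900,0910,1130,1140,UA,88,N54321,90,90,80,-10,-10,EWR,ORD,719"
)

// map: extract (Origin, UniqueCarrier) pairs
val pairs = rows.map { row => (row.split(",")(16), row.split(",")(8)) }

// groupByKey: collect all carriers per airport
val grouped = pairs.groupBy(_._1).map { case (k, vs) => (k, vs.map(_._2)) }

// mapValues: count distinct carriers per airport
val result = grouped.map { case (k, vs) => (k, vs.toSet.size) }
// two distinct carriers at EWR, one at SFO
```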
  • 8. sc.textFile("hdfs:///.../flights"). mapValues(values => values.toSet.size). map(row => (row.split(",")(16), row.split(",")(8))). groupByKey. collect Base RDD Transformed RDDs Action
  • 9. What’s an RDD? 8 CO780, IAH, MCI CO683, IAH, MSY CO1707, TPA, IAH ... UA620, SJC, ORD UA675, ORD, SMF UA676, ORD, LGA ... DL282, ATL, CVG DL2128, CHS, ATL DL2592, PBI, LGA DL417, FLL, ATL ... Resilient Distributed Datasets •  Key abstraction in Spark Immutable collection of objects •  Distributed across machines Can be operated on in parallel Can hold any kind of data Hadoop datasets Parallelized Scala collections RDBMS or No-SQL, ... Can recover from failures, be cached, ...
  • 10. Resilient Distributed Datasets (RDD) 1.  Set of partitions (“splits” in Hadoop) 2.  List of dependencies on parent RDDs 3.  Function to compute a partition given its parent(s) 4.  (Optional) Partitioner (hash, range) 5.  (Optional) Preferred location(s) 9 Lineage Optimized Execution
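The five properties amount to a small interface. A toy sketch of that contract (illustrative names only, not Spark's actual RDD class, whose real signatures differ):

```scala
case class Partition(index: Int)

// Toy version of the RDD contract listed on the slide
trait MiniRDD[T] {
  def partitions: Seq[Partition]                            // 1. set of partitions
  def dependencies: Seq[MiniRDD[_]]                         // 2. parent RDDs (lineage)
  def compute(p: Partition): Iterator[T]                    // 3. compute one partition
  def partitioner: Option[T => Int] = None                  // 4. optional partitioner
  def preferredLocations(p: Partition): Seq[String] = Nil   // 5. optional locality hints
}

// A leaf "RDD" over an in-memory collection, pre-split into partitions
class CollectionRDD[T](data: Seq[Seq[T]]) extends MiniRDD[T] {
  def partitions = data.indices.map(Partition(_))
  def dependencies = Nil
  def compute(p: Partition) = data(p.index).iterator
}

val rdd = new CollectionRDD(Seq(Seq(1, 2), Seq(3, 4)))
val total = rdd.partitions.flatMap(p => rdd.compute(p)).sum  // 10
```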
  • 11. We’ve written the code, what next? Spark supports four different cluster managers: ●  Local: Useful only for development ●  Standalone: Bundled with Spark, doesn’t play well with other applications ●  YARN ●  Mesos Each mode has a similar “logical” architecture although physical details differ in terms of which/where processes and threads are launched. 10
  • 12. Spark Cluster Architecture: Logical View 11 Driver represents the application. It runs the main() function. SparkContext is the main entry point for Spark functionality. Represents the connection to a Spark cluster. Executor runs tasks and keeps data in memory or disk storage across them. Each application has its own executors. Task Driver Program SparkContext Cluster Manager Executor Cache Task
  • 13. What’s Inside an Executor? 12 Task Task Task Internal Threads Single JVM Running Tasks RDD1-1 RDD2-3 Cached RDD partitions Shuffle, Transport, GC, ... Free Task Slots Broadcast-1 Broadcast-2 Other global memory
  • 14. Spark Cluster Architecture: Physical View 13 Task Task Task RDD P1 RDD P2 RDD P3 Internal Threads Node 1 Executor 1 Task Task Task RDD P5 RDD P6 RDD P1 Internal Threads Node 2 Executor 2 Task Task Task RDD P3 RDD P4 RDD P2 Internal Threads Node 3 Executor 3 Driver
  • 15. Ok, can we get back to the code? 14
  • 16. Spark Execution Model 15 sc.textFile(”..."). map(row =>...). groupByKey. mapValues(...). collect Application Code RDD DAG DAG and Task Scheduler Executor(s) Task Task Task Task
  • 17. Spark builds DAGs 16 Directed (arrows) Acyclic (no loops) Graph •  Spark applications are written in a functional style. •  Internally, Spark turns a functional pipeline into a graph of RDD objects. sc.textFile("hdfs:///.../flightdata"). mapValues(values => values.toSet.size). map(row => (row.split(",")(16), row.split(",")(8))). groupByKey. collect
  • 18. Spark builds DAGs (contd.) 17 HadoopRDD MapPartitionsRDD MapPartitionsRDD ShuffleRDD MapPartitionsRDD Data-set level View Partition level View •  Spark has a rich collection of operations that generally map to RDD classes •  HadoopRDD, FilteredRDD, JoinRDD, etc.
  • 19. DAG Scheduler 18 textFile mapValues map groupByKey Stage 1 Stage 2 collect •  Split the DAG into stages •  A stage marks the boundaries of a pipeline •  One stage is completed before starting the next stage
  • 20. DAG Scheduler: Pipelining 2004,2,5...,AA,..,EWR,... 2004,3,22,...,UA,...SFO,... 2004,3,22,...,AA,...EWR,.. (EWR, AA) (SFO, UA) (EWR, UA) (EWR, [AA, UA]) (SFO, UA)
  • 21. Task Scheduler (contd.) 20 textFile map Stage 1 •  Turn the Stages into Tasks •  Task = Data + Computation •  Ship tasks across the cluster •  All RDDs in the stage have the same number of partitions •  One task computes each pipeline
  • 22. Task Scheduler (contd.) 21 textFile map Stage 1 hdfs://.../flights (partition 1) hdfs://.../flights (partition 2) hdfs://.../flights (partition 3) hdfs://.../flights (partition 4) Computation Data Task 1 Task 2 Task 3 Task 4 1 2 3 4
  • 23. Task Scheduler (contd.) 22 •  3 cores •  4 partitions (timeline diagram: tasks 1–4, each a HadoopRDD → MapPartitionsRDD pipeline over one HDFS partition, scheduled over time on the 3 cores)
  • 24. Task Scheduler: The Shuffle 23 textFile mapValues map groupByKey Stage 1 Stage 2 collect •  All-to-all data movement
  • 25. Task Scheduler: The Shuffle (contd.) •  Redistributes data among partitions •  Typically hash-partitioned but can have user-defined partitioner •  Avoided when possible, if data is already properly partitioned •  Partial aggregation reduces data movement !  Similar to Map-Side Combine in MapReduce 24 1 2 3 1 2 4Stage 1 Stage 2
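Partial aggregation can be sketched with plain collections: each "map-side" partition pre-combines its own records first, so fewer records cross the network before the final merge (made-up data, two simulated partitions):

```scala
// Two simulated "map-side" partitions of (airport, 1) pairs
val part1 = List(("EWR", 1), ("SFO", 1), ("EWR", 1))
val part2 = List(("EWR", 1), ("JFK", 1))

// Pre-combine within one partition before shuffling (the map-side combine)
def combineLocally(part: List[(String, Int)]): Map[String, Int] =
  part.groupBy(_._1).map { case (k, vs) => (k, vs.map(_._2).sum) }

// Without partial aggregation all 5 records would be shuffled;
// with it, only 4 partial sums are shipped
val shuffled = List(part1, part2).map(combineLocally)

// Reduce side merges the partial sums per key
val merged = shuffled.flatten.groupBy(_._1).map { case (k, vs) => (k, vs.map(_._2).sum) }
```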
  • 26. Task Scheduler: The Shuffle (contd.) 25 •  Shuffle writes intermediate files to disk •  These files are pulled by the next stage •  Two algorithms: sort-based (new/default) and hash-based (older)
  • 27. groupByKey 26 mapValues groupByKey Stage 2 collect •  After shuffle, each “reduce-side” partition contains all records for its keys •  Within each partition, groupByKey builds an in-memory hash map •  EWR -> [UA, AA, ...] •  SFO -> [UA, ...] •  JFK -> [DL, UA, AA, DL, ...] •  A single key with all its values must fit in memory, though it can be spilled to disk in its entirety. Stage 1
  • 28. Task Execution 27 Stage 2 Stage 1 Stage N •  Spark jobs can have any number of stages (1, 2, ... N) •  There’s a shuffle between stages •  The last stage ends with an “action” – sending results back to client, writing files to disk, etc.
  • 32. Writing better Spark applications How can we optimize our application? Key considerations: •  Partitioning – How many tasks in each stage? •  Shuffle – How much data moved across nodes? •  Memory pressure – Usually a byproduct of partitioning and shuffling •  Code path length – Is your code optimal? Are you using the best available Spark API for the job? 31
  • 33. Writing better Spark applications: Partitioning •  Too few partitions? !  Less concurrency !  More susceptible to data skew !  Increased memory pressure •  Too many partitions? !  Over-partitioning leads to very short-running tasks !  Administrative inefficiencies outweigh benefits of parallelism •  Need a reasonable number of partitions !  Usually a function of number of cores in cluster (~2 times is a good rule of thumb) !  Ensure task execution time > serialization time 32
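The ~2x-cores rule of thumb from the last bullet, as a tiny helper (the factor of 2 comes from the slide; the cluster sizes below are placeholders):

```scala
// Rule-of-thumb starting point: ~2x the total cores in the cluster
def suggestedPartitions(nodes: Int, coresPerNode: Int, factor: Int = 2): Int =
  nodes * coresPerNode * factor

val n = suggestedPartitions(nodes = 10, coresPerNode = 8)  // 160
```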
  • 34. Writing better Spark applications: Memory •  Symptoms !  Bad performance !  Executor failures (OutOfMemory errors) •  Resolution !  Use GC and other traces to track memory usage !  Give Spark more memory !  Tweak memory distribution between User memory vs Spark memory !  Increase number of partitions !  Look at your code! 33
  • 35. Rewriting our application sc.textFile("hdfs://localhost:9000/user/Adarsh/flights"). map(row => (row.split(",")(16), row.split(",")(8))). groupByKey. mapValues(values => values.toSet.size). collect 34 Partitioning Shuffle size Memory pressure Code path length
  • 36. Rewriting our application (contd.) sc.textFile("hdfs://localhost:9000/user/Adarsh/flights"). map(row => (row.split(",")(16), row.split(",")(8))). repartition(16). groupByKey. mapValues(values => values.toSet.size). collect 35 Partitioning Shuffle size Memory pressure Code path length How many stages do you see? Answer: 3
  • 38. Rewriting our application (contd.) sc.textFile("hdfs://localhost:9000/user/Adarsh/flights"). map(row => (row.split(",")(16), row.split(",")(8))). repartition(16). distinct. groupByKey. mapValues(values => values.toSet.size). collect 37 Partitioning Shuffle size Memory pressure Code path length How many stages do you see now? Answer: 4
  • 40. Rewriting our application (contd.) sc.textFile("hdfs://localhost:9000/user/Adarsh/flights"). map(row => (row.split(",")(16), row.split(",")(8))). distinct(numPartitions = 16). groupByKey. mapValues(values => values.size). collect 39 Partitioning Shuffle size Memory pressure Code path length
  • 42. Rewriting our application (contd.) sc.textFile("hdfs://localhost:9000/user/Adarsh/flights"). map(row => { val cols = row.split(",") (cols(16), cols(8)) }). distinct(numPartitions = 16). groupByKey. mapValues(values => values.size). collect 41 Partitioning Shuffle size Memory pressure Code path length
  • 43. Rewriting our application (contd.) sc.textFile("hdfs://localhost:9000/user/Adarsh/flights"). map(row => { val cols = row.split(",") (cols(16), cols(8)) }). distinct(numPartitions = 16). map(e => (e._1, 1)). reduceByKey(_ + _). collect 42 Partitioning Shuffle size Memory pressure Code path length
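The groupByKey original and the reduceByKey rewrite compute the same answer, and that equivalence can be checked with plain Scala collections standing in for RDDs (made-up pairs):

```scala
val pairs = List(("EWR", "AA"), ("SFO", "UA"), ("EWR", "UA"), ("EWR", "AA"))

// Original shape: ship every pair, group per key, then count distinct values
val viaGroup = pairs.groupBy(_._1).map { case (k, vs) => (k, vs.map(_._2).toSet.size) }

// Revised shape: dedupe first, emit (key, 1), then sum — what reduceByKey(_ + _) does
val viaReduce = pairs.distinct
  .map { case (k, _) => (k, 1) }
  .groupBy(_._1)
  .map { case (k, vs) => (k, vs.map(_._2).sum) }

// Both agree: EWR -> 2, SFO -> 1
```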
  • 44. Original sc.textFile("hdfs://localhost:9000/user/Adarsh/flights"). map(row => (row.split(",")(16), row.split(",")(8))). groupByKey. mapValues(values => values.toSet.size). collect Revised sc.textFile("hdfs://localhost:9000/user/Adarsh/flights"). map(row => { val cols = row.split(",") (cols(16), cols(8)) }). distinct(numPartitions = 16). map(e => (e._1, 1)). reduceByKey(_ + _). collect 43 OutOfMemory Error after running for several minutes on my laptop Completed in seconds
  • 45. Writing better Spark applications: Configuration How many Executors per Node? --num-executors OR spark.executor.instances How many tasks can each Executor run simultaneously? --executor-cores OR spark.executor.cores How much memory does an Executor have? --executor-memory OR spark.executor.memory How is the memory divided inside an Executor? spark.storage.memoryFraction and spark.shuffle.memoryFraction How is the data stored? Partitioned? Compressed? 44
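These settings are typically passed to spark-submit. A sketch with placeholder values (the jar name is hypothetical, and the two memoryFraction settings belong to the legacy memory manager of this Spark vintage):

```shell
spark-submit \
  --num-executors 6 \
  --executor-cores 4 \
  --executor-memory 8g \
  --conf spark.storage.memoryFraction=0.6 \
  --conf spark.shuffle.memoryFraction=0.2 \
  my-flights-app.jar
```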
  • 46. As you can see, writing Spark jobs is easy. However, doing so in an efficient manner takes some know-how. Want to learn more about Spark? ______________________________________________ Reference slides are at the end of this slide deck. 45
  • 47. Notices and Disclaimers Copyright © 2015 by International Business Machines Corporation (IBM). No part of this document may be reproduced or transmitted in any form without written permission from IBM. U.S. Government Users Restricted Rights - Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM. Information in these presentations (including information relating to products that have not yet been announced by IBM) has been reviewed for accuracy as of the date of initial publication and could include unintentional technical or typographical errors. IBM shall have no responsibility to update this information. THIS DOCUMENT IS DISTRIBUTED "AS IS" WITHOUT ANY WARRANTY, EITHER EXPRESS OR IMPLIED. IN NO EVENT SHALL IBM BE LIABLE FOR ANY DAMAGE ARISING FROM THE USE OF THIS INFORMATION, INCLUDING BUT NOT LIMITED TO, LOSS OF DATA, BUSINESS INTERRUPTION, LOSS OF PROFIT OR LOSS OF OPPORTUNITY. IBM products and services are warranted according to the terms and conditions of the agreements under which they are provided. Any statements regarding IBM's future direction, intent or product plans are subject to change or withdrawal without notice. Performance data contained herein was generally obtained in controlled, isolated environments. Customer examples are presented as illustrations of how those customers have used IBM products and the results they may have achieved. Actual performance, cost, savings or other results in other operating environments may vary. References in this document to IBM products, programs, or services do not imply that IBM intends to make such products, programs or services available in all countries in which IBM operates or does business. Workshops, sessions and associated materials may have been prepared by independent session speakers, and do not necessarily reflect the views of IBM. 
All materials and discussions are provided for informational purposes only, and are neither intended to, nor shall constitute, legal or other guidance or advice to any individual participant or their specific situation. It is the customer’s responsibility to ensure its own compliance with legal requirements and to obtain advice of competent legal counsel as to the identification and interpretation of any relevant laws and regulatory requirements that may affect the customer’s business and any actions the customer may need to take to comply with such laws. IBM does not provide legal advice or represent or warrant that its services or products will ensure that the customer is in compliance with any law.
  • 48. Notices and Disclaimers (cont’d) Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products in connection with this publication and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products. IBM does not warrant the quality of any third-party products, or the ability of any such third-party products to interoperate with IBM’s products. IBM EXPRESSLY DISCLAIMS ALL WARRANTIES, EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. The provision of the information contained herein is not intended to, and does not, grant any right or license under any IBM patents, copyrights, trademarks or other intellectual property right. •  IBM, the IBM logo, ibm.com, Aspera®, Bluemix, Blueworks Live, CICS, Clearcase, Cognos®, DOORS®, Emptoris®, Enterprise Document Management System™, FASP®, FileNet®, Global Business Services ®, Global Technology Services ®, IBM ExperienceOne™, IBM SmartCloud®, IBM Social Business®, Information on Demand, ILOG, Maximo®, MQIntegrator®, MQSeries®, Netcool®, OMEGAMON, OpenPower, PureAnalytics™, PureApplication®, pureCluster™, PureCoverage®, PureData®, PureExperience®, PureFlex®, pureQuery®, pureScale®, PureSystems®, QRadar®, Rational®, Rhapsody®, Smarter Commerce®, SoDA, SPSS, Sterling Commerce®, StoredIQ, Tealeaf®, Tivoli®, Trusteer®, Unica®, urban{code}®, Watson, WebSphere®, Worklight®, X-Force® and System z® Z/OS, are trademarks of International Business Machines Corporation, registered in many jurisdictions worldwide. Other product and service names might be trademarks of IBM or other companies. 
A current list of IBM trademarks is available on the Web at "Copyright and trademark information" at: www.ibm.com/legal/copytrade.shtml.
  • 49. © 2015 IBM Corporation Thank You
  • 50. We Value Your Feedback! Don’t forget to submit your Insight session and speaker feedback! Your feedback is very important to us – we use it to continually improve the conference. Access your surveys at insight2015survey.com to quickly submit them from your smartphone, laptop or conference kiosk.
  • 51. Acknowledgements Some of the material in this session was informed and inspired by presentations by: Matei Zaharia (Creator of Apache Spark) Reynold Xin Aaron Davidson Patrick Wendell ... and dozens of other Spark contributors.
  • 53. How deep do you want to go? Intro: What is Spark? How does it relate to Hadoop? When would you use it? (1-2 hours) Basic: Understand basic technology and write simple programs (1-2 days) Intermediate: Start writing complex Spark programs even as you understand operational aspects (5-15 days, up to weeks and months) Expert: Become a Spark Black Belt! Know Spark inside out. (Months to years)
  • 54. Intro Spark Go through these additional presentations to understand the value of Spark. These speakers also attempt to differentiate Spark from Hadoop, and enumerate its comparative strengths. (Not much code here) "  Turning Data into Value, Ion Stoica, Spark Summit 2013 Video & Slides 25 mins https://spark-summit.org/2013/talk/turning-data-into-value "  Spark: What’s in it for your business? Adarsh Pannu 60 mins IBM Insight Conference 2015 "  How Companies are Using Spark, and Where the Edge in Big Data Will Be, Matei Zaharia, Video & Slides 12 mins http://conferences.oreilly.com/strata/strata2014/public/schedule/detail/33057 "  Spark Fundamentals I (Lesson 1 only), Big Data University <20 mins https://bigdatauniversity.com/bdu-wp/bdu-course/spark-fundamentals/
  • 55. Basic Spark " Pick up some Scala through this article co-authored by Scala’s creator, Martin Odersky. Link http://www.artima.com/scalazine/articles/steps.html Estimated time: 2 hours
  • 56. Basic Spark (contd.) "  Do these two courses. They cover Spark basics and include a certification. You can use the supplied Docker images for all other labs. 7 hours
  • 57. Basic Spark (contd.) "  Go to spark.apache.org and study the Overview and the Spark Programming Guide. Many online courses borrow liberally from this material. Information on this site is updated with every new Spark release. Estimated 7-8 hours.
  • 58. Intermediate Spark "  Stay at spark.apache.org. Go through the component-specific Programming Guides as well as the sections on Deploying and More. Browse the Spark API as needed. Estimated time: 3-5 days or more.
  • 59. Intermediate Spark (contd.) •  Learn about the operational aspects of Spark: "  Advanced Apache Spark (DevOps) 6 hours # EXCELLENT! Video https://www.youtube.com/watch?v=7ooZ4S7Ay6Y Slides https://www.youtube.com/watch?v=7ooZ4S7Ay6Y "  Tuning and Debugging Spark Slides 48 mins Video https://www.youtube.com/watch?v=kkOG_aJ9KjQ •  Gain a high-level understanding of Spark architecture: "  Introduction to AmpLab Spark Internals, Matei Zaharia, 1 hr 15 mins Video https://www.youtube.com/watch?v=49Hr5xZyTEA "  A Deeper Understanding of Spark Internals, Aaron Davidson, 44 mins Video https://www.youtube.com/watch?v=dmL0N3qfSc8 PDF https://spark-summit.org/2014/wp-content/uploads/2014/07/A-Deeper- Understanding-of-Spark-Internals-Aaron-Davidson.pdf
  • 60. Intermediate Spark (contd.) •  Experiment, experiment, experiment ... "  Set up your personal 3-4 node cluster "  Download some “open” data. E.g. “airline” data on stat-computing.org/dataexpo/2009/ "  Write some code, make it run, see how it performs, tune it, troubleshoot it "  Experiment with different deployment modes (Standalone + YARN) "  Play with different configuration knobs, check out dashboards, etc. "  Explore all subcomponents (especially Core, SQL, MLLib)
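As a starting point for that experimentation, the deck's running airline query (distinct carriers per origin airport, shown earlier in SQL) could be sketched in Scala roughly like this. This is a minimal sketch, not the deck's exact code: it assumes an existing SparkContext `sc`, a placeholder input path, and the column positions of the airline dataset layout shown earlier (Origin at index 16, UniqueCarrier at index 8).

```scala
// Sketch: for each origin airport, count the distinct carriers serving it.
// Assumes `sc` is a SparkContext and the path below is a placeholder.
val carriersPerAirport = sc.textFile("hdfs:///path/to/flights")
  .map(_.split(","))
  .map(fields => (fields(16), fields(8)))  // (Origin, UniqueCarrier)
  .distinct()                              // unique (airport, carrier) pairs
  .mapValues(_ => 1)
  .reduceByKey(_ + _)                      // carrier count per airport

// Busiest airports by carrier count first
carriersPerAirport.sortBy(-_._2).take(10).foreach(println)
```

Running a job like this on your own cluster, then watching it in the web UI and re-tuning it, is exactly the kind of experimentation the slide recommends.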
  • 61. Advanced Spark: Original Papers Read the original academic papers "  Resilient Distributed Datasets: A Fault-Tolerant Abstraction for In-Memory Cluster Computing, Matei Zaharia, et al. "  Discretized Streams: An Efficient and Fault-Tolerant Model for Stream Processing on Large Clusters, Matei Zaharia, et al. "  GraphX: A Resilient Distributed Graph System on Spark, Reynold S. Xin, et al. "  Spark SQL: Relational Data Processing in Spark, Michael Armbrust, et al.
  • 62. Advanced Spark: Enhance your Scala skills This book by Odersky is excellent, but it isn’t meant to give you a quick start. It’s deep stuff. "  Use this as your primary Scala text "  Excellent MOOC by Odersky. Some of the material is meant for CS majors. Highly recommended for STC developers. 35+ hours
  • 63. Advanced Spark: Browse Conference Proceedings Spark Summits cover technology and use cases. Technology is also covered in various other places, so you could consider skipping those tracks. Don’t forget to check out the customer stories. That is how we learn about enablement opportunities and challenges, and in some cases, we can see through the Spark hype ☺ 100+ hours of FREE videos and associated PDFs available on spark-summit.org. You don’t even have to pay the conference fee! Go back in time and “attend” these conferences!
  • 64. Advanced Spark: Browse YouTube Videos YouTube is full of training videos, some good, others not so much. These are the only channels you need to watch, though. There is a lot of repetition in the material, and some of the videos are from the conferences mentioned earlier.
  • 65. Advanced Spark: Check out these books Provides a good overview of Spark; much of the material is also available through the other sources previously mentioned. Covers concrete statistical analysis / machine learning use cases. Covers Spark APIs and MLLib. Highly recommended for data scientists.
  • 66. Advanced Spark: Yes ... read the code Even if you don’t intend to contribute to Spark, there are a ton of valuable comments in the code that provide insights into Spark’s design, and these will help you write better Spark applications. Don’t be shy! Go to github.com/apache/spark and check it out.