2. OUTLINE
History of "Big Data" engines
Apache Spark: What is it and what's special about it?
Apache Spark: What is it used for?
Apache Spark: API
Tools and software usually used with Apache Spark
Demo
7. WHY SPARK?
Most machine learning algorithms are iterative; each
iteration can improve the results
With a disk-based approach, each iteration's output is
written to disk, making the processing slow
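This advantage can be sketched with plain Scala (a hypothetical example, not Spark API): an iterative algorithm repeatedly refines an estimate over the same dataset, which Spark would keep in memory via rdd.cache() so that no iteration after the first has to re-read the input from disk.

```scala
// Stand-in for a cached dataset: the List stays in memory across passes.
// In Spark, calling .cache() on the RDD holding this data plays that role.
val data = List(4.0, 6.0, 8.0)

var estimate = 0.0
for (_ <- 1 to 100) {
  // Each iteration improves the estimate (here: converging on the mean).
  // With a disk-based engine, every pass would re-read the input first.
  estimate += 0.5 * (data.map(_ - estimate).sum / data.size)
}
// estimate converges toward the mean of the data, 6.0
```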
9. Spark is a distributed data processing engine
Started in 2009
Open source & written in Scala
Compatible with Hadoop's data
10. It runs in memory and on disk
Runs 10 to 100 times faster than Hadoop MapReduce
Applications can be written in Java, Scala, Python & R
Supports batch and near-real-time workflows (micro-batches)
13. CAPTURE AND EXTRACT DATA
Data can come from several sources:
Databases
Flat files
Web and mobile applications' logs
Data feeds from social media
IoT devices
15. TRANSFORM DATA
Data in an analytics pipeline needs transformation
Check and correct quality issues
Handle missing values
Cast fields into specific data types
Compute derived fields
Split or merge records for more granularity
Join with other datasets
Restructure data
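The steps above can be sketched on a plain Scala collection (record and field names are hypothetical); in Spark the same filter/map calls would run distributed on an RDD or DataFrame:

```scala
// Raw input arrives with every field as a string.
case class RawRecord(name: String, age: String)
// Cleaned output: typed fields plus a derived one.
case class CleanRecord(name: String, age: Int, isAdult: Boolean)

val raw = List(RawRecord("ana", "34"), RawRecord("bob", ""), RawRecord("eve", "17"))

val clean = raw
  .filter(_.age.nonEmpty)                 // handle missing values
  .map { r =>
    val age = r.age.toInt                 // cast the field into a specific type
    CleanRecord(r.name, age, isAdult = age >= 18) // compute a derived field
  }
```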
17. STORE DATA
Data can then be stored in several ways
As self-describing files (Parquet, JSON, XML)
SQL databases
Search databases (Elasticsearch, Solr)
Wide-column stores (HBase, Cassandra)
23. RESILIENT DISTRIBUTED DATASETS
RDDs are the fundamental data unit in Spark
Resilient: If data in memory is lost, it can be recreated
Distributed: Stored in memory across the cluster
Dataset: The initial data can come from a file or be
created programmatically
24. RDDs
An immutable, partitioned collection of elements
Basic operations: map, filter, reduce, persist
Several implementations: PairRDD, DoubleRDD,
SequenceFileRDD
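map, filter and reduce on an RDD have the same semantics as on a local Scala collection, so the basic operations can be sketched with a plain List (persist is Spark-only and is noted only in a comment):

```scala
val nums = List(1, 2, 3, 4, 5)

val doubled = nums.map(_ * 2)            // map: transform each element
val evens   = doubled.filter(_ % 4 == 0) // filter: keep matching elements
val total   = doubled.reduce(_ + _)      // reduce: combine elements into one value
// On an RDD, doubled.persist() would additionally keep the results in memory
// so later actions reuse them instead of recomputing.
```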
25. HISTORY
2011 (Spark release) - RDD API
2013 - introduction of the DataFrame API: adds the
concept of a schema and lets Spark manage it for
more efficient serialization and deserialization
2015 - introduction of the Dataset API
33. USING A NOTEBOOK
There are many Spark notebooks; we are going to use
http://spark-notebook.io/
sparknotebook
open http://localhost:9000/
34. AS A SPARK APPLICATION
By adding spark-core and other Spark modules as project
dependencies and using the Spark API inside the application
code
import org.apache.spark.{SparkConf, SparkContext}
def main(args: Array[String]): Unit = {
  val conf = new SparkConf()
    .setAppName("Sample Application")
    .setMaster("local")
  val sc = new SparkContext(conf)
  val logData = sc.textFile("/tmp/spark/README.md")
  // filter is called on logData, the RDD defined above
  val lines = logData.filter(line => line.contains("Spark"))
  lines.collect()
  sc.stop()
}
36. TERMINOLOGY
SparkContext: A connection to a Spark cluster
Worker node: A cluster node that runs application code
Task: A unit of work
Job: Consists of multiple tasks
Executor: A process on a worker node that runs
tasks
39. Simple: Uses many servers as one big computer
Reliable: Detects failures, has redundant storage
Fault-tolerant: Auto-retry, self-healing
Scalable: Scales (almost) linearly with disks and CPUs
43. Coordination: Needed when multiple nodes need to
work together
Examples:
Group membership
Locking
Leader election
Synchronization
Publisher/subscriber
44. APACHE MESOS
Mesos is built using the same principles as the Linux
kernel, only at a different level of abstraction.
The Mesos kernel runs on every machine and provides
applications (e.g., Hadoop, Spark, Kafka, Elasticsearch)
with APIs for resource management and scheduling
across entire datacenter and cloud environments.
45. A cluster manager that:
Runs distributed applications
Abstracts CPU, memory, storage, and other resources
Handles resource allocation
Handles applications' isolation
Has a Web UI for viewing the cluster's state
46. NOTEBOOKS
Spark Notebook: Allows performing reproducible
analysis with Scala, Apache Spark and more
Apache Zeppelin: A web-based notebook that enables
interactive data analytics
48. THE END
APACHE SPARK
Is a fast distributed data processing engine
Runs in memory
Can be used with Java, Scala, Python & R
Its main data structure is a Resilient Distributed
Dataset