The document discusses using Apache Spark and Apache Cassandra together for fast data analysis as an alternative to Hadoop. It provides examples of basic Spark operations on Cassandra tables like counting rows, filtering, joining with external data sources, and importing/exporting data. The document argues that Spark on Cassandra provides a simpler distributed processing framework compared to Hadoop.
Escape from Hadoop: Ultra Fast Data Analysis with Spark & Cassandra
1. Escape From Hadoop: Ultra Fast Data Analysis with Apache Cassandra & Spark
Slides by Kurt Russell Spitzer, presented by Piotr Kołaczkowski, DataStax
2. Why escape from Hadoop?
Hadoop has many moving pieces:
● MapReduce
● Single points of failure
● Lots of overhead
And there is a way out!
3. Spark Provides a Simple and Efficient Framework for Distributed Computations
Node roles: 2
In-memory caching: Yes!
Fault tolerance: Yes!
Great abstraction for datasets: the RDD (Resilient Distributed Dataset)!
[Diagram: a Spark Master coordinating several Spark Workers, each running a Spark Executor]
4. Spark is Compatible with HDFS, JDBC, Parquet, CSVs, …
AND APACHE CASSANDRA
5. Apache Cassandra is a Linearly Scaling and Fault-Tolerant NoSQL Database
Linearly scaling: the power of the database increases linearly with the number of machines. 2x machines = 2x throughput.
http://techblog.netflix.com/2011/11/benchmarking-cassandra-scalability-on.html
Fault tolerant: nodes down != database down; datacenter down != database down.
6. Apache Cassandra Architecture is Very Simple
Node roles: 1
Replication: tunable
Consistency: tunable
[Diagram: a client talking to a ring of identical C* nodes]
7. DataStax OSS Connector: Spark to Cassandra
https://github.com/datastax/spark-cassandra-connector
[Diagram: Cassandra keyspaces and tables map to Spark RDD[CassandraRow] and RDD[Tuple]]
Bundled and supported with DSE 4.5!
10. The Spark Cassandra Connector Uses the DataStax Java Driver to Read from and Write to C*
Each Spark Executor maintains a connection to the C* cluster through the DataStax Java Driver.
RDDs are read into different splits based on token ranges (e.g. tokens 1-1000, tokens 1001-2000, …).
[Diagram: the full token range divided among Spark Executors]
11. Co-locate Spark and C* for Best Performance
Running Spark Workers on the same nodes as your C* cluster will save network hops when reading and writing.
[Diagram: Spark Workers co-located with C* nodes, coordinated by a separate Spark Master]
12. Setting up C* and Spark
DSE ≥ 4.5.0: just start your nodes with
dse cassandra -k
Apache Cassandra: follow the excellent guide by Al Tobey:
http://tobert.github.io/post/2014-07-15-installing-cassandra-spark-stack.html
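Either way, a rough sketch of the shell side (the exact jar name and flags depend on your Spark and connector versions): start spark-shell with the connector on the classpath, point it at a C* node, and import the connector's implicits:

// Assumes spark-shell was launched with something like:
//   spark-shell --jars spark-cassandra-connector-assembly.jar \
//               --conf spark.cassandra.connection.host=127.0.0.1
// This import adds cassandraTable to SparkContext and saveToCassandra to RDDs.
import com.datastax.spark.connector._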
13. We Need a Distributed System for Analytics and Batch Jobs
But it doesn't have to be complicated!
14. Even count needs to be distributed
Ask me to write a MapReduce for word count, I dare you.
You could make this easier by adding yet another technology to your Hadoop stack (Hive, Pig, Impala), or we could just do one-liners on the Spark shell, as in the sketch below.
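For instance, word count really is a one-liner in the shell (split over lines here for readability; the input path is hypothetical):

// Classic word count in the Spark shell.
// The file path is made up; point textFile at any text file you have.
sc.textFile("file:///tmp/words.txt")
  .flatMap(_.split("\\s+"))
  .map(word => (word, 1))
  .reduceByKey(_ + _)
  .collect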
15. Basics: Getting a Table and Counting
CREATE KEYSPACE newyork WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1 };
USE newyork;
CREATE TABLE presidentlocations ( time int, location text, PRIMARY KEY (time) );
INSERT INTO presidentlocations (time, location) VALUES ( 1, 'White House' );
INSERT INTO presidentlocations (time, location) VALUES ( 2, 'White House' );
INSERT INTO presidentlocations (time, location) VALUES ( 3, 'White House' );
INSERT INTO presidentlocations (time, location) VALUES ( 4, 'White House' );
INSERT INTO presidentlocations (time, location) VALUES ( 5, 'Air Force 1' );
INSERT INTO presidentlocations (time, location) VALUES ( 6, 'Air Force 1' );
INSERT INTO presidentlocations (time, location) VALUES ( 7, 'Air Force 1' );
INSERT INTO presidentlocations (time, location) VALUES ( 8, 'NYC' );
INSERT INTO presidentlocations (time, location) VALUES ( 9, 'NYC' );
INSERT INTO presidentlocations (time, location) VALUES ( 10, 'NYC' );
scala> sc.cassandraTable("newyork","presidentlocations").count
res3: Long = 10
16. Basics: take() and toArray
scala> sc.cassandraTable("newyork","presidentlocations").take(1)
res2: Array[com.datastax.spark.connector.CassandraRow] = Array(CassandraRow{time: 9, location: NYC})
scala> sc.cassandraTable("newyork","presidentlocations").take(1)
res2: Array[com.datastax.spark.connector.CassandraRow] = Array(CassandraRow{time: 9, location: NYC})
scala> sc.cassandraTable("newyork","presidentlocations").toArray
res3: Array[com.datastax.spark.connector.CassandraRow] = Array(
CassandraRow{time: 9, location: NYC},
CassandraRow{time: 3, location: White House},
…,
CassandraRow{time: 6, location: Air Force 1})
17. Basics: Getting Row Values out of a CassandraRow
scala> sc.cassandraTable("newyork","presidentlocations").first.get[Int]("time")
res5: Int = 9
scala> sc.cassandraTable("newyork","presidentlocations").first.get[Int]("time")
res5: Int = 9
first returns a single CassandraRow object. Typed getters include get[Int], get[String], get[List[...]], …, and get[Any].
Got null? Use get[Option[Int]].
http://www.datastax.com/documentation/datastax_enterprise/4.5/datastax_enterprise/spark/sparkSupportedTypes.html
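A minimal sketch of the getters in action, assuming the presidentlocations table from the earlier slide:

// Typed getters on a CassandraRow; get[Option[...]] turns a C* null into None
// instead of throwing.
val row = sc.cassandraTable("newyork","presidentlocations").first
val time: Int = row.get[Int]("time")
val location: Option[String] = row.get[Option[String]]("location")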
18. Copy a Table
Say we want to restructure our table or add a new column:
CREATE TABLE characterlocations (
    time int,
    character text,
    location text,
    PRIMARY KEY (time, character)
);
scala> sc.cassandraTable("newyork","presidentlocations")
         .map( row => (
             row.get[Int]("time"),
             "president",
             row.get[String]("location")))
         .saveToCassandra("newyork","characterlocations")
cqlsh:newyork> SELECT * FROM characterlocations;
 time | character | location
------+-----------+-------------
    5 | president | Air Force 1
   10 | president | NYC
  …
19. Filter a Table
What if we want to filter based on a non-clustering key column?
scala> sc.cassandraTable("newyork","presidentlocations")
         .filter( _.getInt("time") > 7 )
         .toArray
res9: Array[com.datastax.spark.connector.CassandraRow] = Array(
    CassandraRow{time: 9, location: NYC},
    CassandraRow{time: 10, location: NYC},
    CassandraRow{time: 8, location: NYC}
)
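Note this filter runs in Spark after the rows are read. The connector also exposes a where method that appends a CQL predicate to the server-side SELECT; a sketch (whether C* accepts a given predicate depends on the table's key layout and indexes):

// Hypothetical sketch: push the predicate into the generated CQL query
// instead of filtering in Spark.
sc.cassandraTable("newyork","timelines")
  .where("time > ?", 7)
  .toArray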
20. Backfill a Table with a Different Key!
CREATE TABLE timelines (
    time int,
    character text,
    location text,
    PRIMARY KEY ((character), time)
);
If we actually want to have quick access to timelines, we need a C* table with a different structure.
sc.cassandraTable("newyork","characterlocations")
.saveToCassandra("newyork","timelines")
sc.cassandraTable("newyork","characterlocations")
.saveToCassandra("newyork","timelines")
cqlsh:newyork> select * from timelines;
character | time | location
-----------+------+-------------
president | 1 | White House
president | 2 | White House
president | 3 | White House
president | 4 | White House
president | 5 | Air Force 1
president | 6 | Air Force 1
president | 7 | Air Force 1
president | 8 | NYC
president | 9 | NYC
president | 10 | NYC
21. Import a CSV
I have some data in another source which I could really use in my Cassandra table.
sc.textFile("file:///home/pkolaczk/ReallyImportantDocuments/PlisskenLocations.csv")
  .map(_.split(","))
  .map(line => (line(0), line(1), line(2)))
  .saveToCassandra("newyork", "timelines", SomeColumns("character", "time", "location"))
cqlsh:newyork> select * from timelines where character = 'plissken';
character | time | location
-----------+------+-----------------
plissken | 1 | Federal Reserve
plissken | 2 | Federal Reserve
plissken | 3 | Federal Reserve
plissken | 4 | Court
plissken | 5 | Court
plissken | 6 | Court
plissken | 7 | Court
plissken | 8 | Stealth Glider
plissken | 9 | NYC
plissken | 10 | NYC
22. Perform a Join with MySQL
Maybe a little more than one line …
import java.sql._
import org.apache.spark.rdd.JdbcRDD

Class.forName("com.mysql.jdbc.Driver").newInstance()

val quotes = new JdbcRDD(
  sc,
  getConnection = () => DriverManager.getConnection("jdbc:mysql://localhost/escape_from_ny?user=root"),
  sql = "SELECT * FROM quotes WHERE ? <= ID AND ID <= ?",
  lowerBound = 0,
  upperBound = 100,
  numPartitions = 5,
  mapRow = (r: ResultSet) => (r.getInt(2), r.getString(3))
)

quotes: org.apache.spark.rdd.JdbcRDD[(Int, String)] = JdbcRDD[9] at JdbcRDD at <console>:23
23. Perform a Join with MySQL
Maybe a little more than one line …
val locations = sc.cassandraTable("newyork","timelines")
  .filter(_.getString("character") == "plissken")
  .map(row => (row.getInt("time"), row.getString("location")))
quotes.join(locations)
  .take(1)
  .foreach(println)
(5, (
Bob Hauk: There was an accident.
About an hour ago, a small jet went down inside New York City.
The President was on board.
Snake Plissken: The president of what?,
Court
))
24. Easy Objects with Case Classes
We have the technology to make this even easier!
case class TimelineRow(character: String, time: Int, location: String)

sc.cassandraTable[TimelineRow]("newyork","timelines")
  .filter(_.character == "plissken")
  .filter(_.time == 8)
  .toArray

res13: Array[TimelineRow] = Array(TimelineRow(plissken,8,Stealth Glider))
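The mapping works for writes too. A sketch under the same schema (the rows here are made up for illustration): an RDD of case class instances saves directly, with fields matched to columns by name:

// Hypothetical rows; TimelineRow fields map to the character, time and
// location columns of newyork.timelines by name.
val newRows = sc.parallelize(Seq(
  TimelineRow("brain", 1, "Federal Reserve"),
  TimelineRow("brain", 2, "Court")))
newRows.saveToCassandra("newyork","timelines")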
30. In-memory Processing
Call cache or persist(storageLevel) to store RDD data in memory.
val rdd = sc.cassandraTable("newyork","presidentlocations")
  .filter(...)
  .map(...)
  .cache    // cache applies to transformations; an action like reduce returns a value, not an RDD

rdd.first // slow: loads data from Cassandra, then keeps it in memory
rdd.first // fast: doesn't read from Cassandra, reads from memory
Multiple StorageLevels available:
● MEMORY_ONLY
● MEMORY_ONLY_SER
● MEMORY_AND_DISK
● MEMORY_AND_DISK_SER
● DISK_ONLY
Also replicated variants available: just append _2 to the constant name.
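For example, a minimal sketch of picking an explicit storage level (table name from the earlier slides):

import org.apache.spark.storage.StorageLevel

// Keep the rows serialized in memory, spilling to disk if they don't fit.
val timelines = sc.cassandraTable("newyork","timelines")
  .persist(StorageLevel.MEMORY_AND_DISK_SER)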