You’ve heard all of the hype, but how can SMACK work for you? In this all-star lineup, you will learn how to create a reactive, scalable, resilient, and performant data processing powerhouse. We will go through the basics of Akka, Kafka, and Mesos and then dive deep into putting them together in an end-to-end (and back again) distributed transaction. Distributed transactions mean producers waiting for one or more consumers to respond. On the backend, you will see how Apache Cassandra and Spark can be combined to add the massively scalable storage and data analysis needed for fast data pipelines. With these technologies as a foundation, you have the assurance that scale is never a problem and uptime is the default.
38. Kafka
[Diagram: the Collection API producer publishes Temp 1-5 to the Temperature topic and Precip 1-5 to the Precipitation topic; the Temperature Processor and Precipitation Processor consumers read them from brokers, where each topic is split into Partition 0 and Partition 1 and stored with Replication Factor = 2.]
39. Kafka
[Diagram: the same topology with additional consumer instances; multiple Temperature Processors and a Precipitation Processor each read their assigned partitions from the brokers, with Replication Factor = 2 for both topics.]
40. Guarantees
Order
• Messages are ordered as they are sent by the producer
• Consumers see messages in the order they were inserted by the producer
Durability
• Messages are delivered at least once
• With a replication factor of N, up to N-1 server failures can be tolerated without losing committed messages (see the producer sketch below)
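A minimal producer sketch of these guarantees, using the standard Kafka client from Scala; the broker addresses, topic name, and weather-station key are hypothetical:

import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

// Hypothetical broker list; acks=all makes the leader wait for the full in-sync
// replica set, so with replication factor N, up to N-1 broker failures can be
// tolerated without losing a committed message.
val props = new Properties()
props.put("bootstrap.servers", "broker1:9092,broker2:9092")
props.put("acks", "all")
props.put("retries", "3")
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")

val producer = new KafkaProducer[String, String](props)
// Messages with the same key land in the same partition, so a consumer sees
// them in the order this producer sent them.
producer.send(new ProducerRecord[String, String]("temperature", "wsid-725030", "72.5"))
producer.close()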
43. Akka in a nutshell
• Highly concurrent
• Reactive
• Fully distributed
• Completely elastic and resilient
[Diagram: four actors, each with its own mailbox; a minimal actor sketch follows below.]
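A minimal sketch of the actor/mailbox model in the diagram, assuming classic Akka actors as used later in this deck; the actor, system name, and station id are hypothetical:

import akka.actor.{Actor, ActorSystem, Props}

// Each actor has a mailbox; messages sent with ! are queued there and
// processed one at a time, asynchronously.
class StationGreeter extends Actor {
  def receive: Actor.Receive = {
    case wsid: String => println(s"received a reading from station $wsid")
  }
}

val system = ActorSystem("weather-demo")
val greeter = system.actorOf(Props[StationGreeter], "greeter")
greeter ! "725030:14732"  // hypothetical weather station id, enqueued in the actor's mailbox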
46. TemperatureActor
class TemperatureActor(sc: SparkContext, settings: WeatherSettings)
extends WeatherActor with ActorLogging {
def receive : Actor.Receive = {
case e: GetDailyTemperature => daily(e.day, sender)
case e: DailyTemperature => store(e)
case e: GetMonthlyHiLowTemperature => highLow(e, sender)
}
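A hypothetical client-side sketch of sending a request to this actor, assuming GetDailyTemperature wraps a Day(wsid, year, month, day) as implied by `e.day` above; the station id, timeout, and actor wiring are made up:

import akka.actor.{ActorSystem, Props}
import akka.pattern.ask
import akka.util.Timeout
import scala.concurrent.duration._

implicit val timeout = Timeout(5.seconds)

// `sc` and `settings` come from the surrounding application, as in the constructor above.
val system = ActorSystem("weather")
val temperatureActor = system.actorOf(Props(new TemperatureActor(sc, settings)), "temperature")

// Hypothetical request; the reply is the WeatherAggregate that `daily` pipes back to the sender.
val dailyTemperature = temperatureActor ? GetDailyTemperature(Day("725030:14732", 2014, 6, 15))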
47. TemperatureActor
/** Computes and sends the daily aggregation to the `requester` actor.
 * We aggregate this data on demand rather than in the stream.
 *
 * For the given day of the year, aggregates the 0 - 23 temp values into statistics
 * (high, low, mean, std, etc.) and persists them to the Cassandra daily temperature
 * table by weather station. Because of our Cassandra schema the rows come back
 * already sorted by most recent, so you don't need to do a sort in Spark.
 *
 * Because the government data is not by interval (window/slide) but by specific
 * date/time, we look for historic data for hours 0-23 that may or may not exist
 * yet, and compute stats on whatever does exist at the time of the request.
 */
def daily(day: Day, requester: ActorRef): Unit =
(for {
aggregate <- sc.cassandraTable[Double](keyspace, rawtable)
.select("temperature").where("wsid = ? AND year = ? AND month = ? AND day = ?",
day.wsid, day.year, day.month, day.day)
.collectAsync()
} yield forDay(day, aggregate)) pipeTo requester
48. TemperatureActor
/**
 * Only ever handles 0-23 small items, or fewer.
 */
private def forDay(key: Day, temps: Seq[Double]): WeatherAggregate =
if (temps.nonEmpty) {
val stats = StatCounter(temps)
val data = DailyTemperature(
key.wsid, key.year, key.month, key.day,
high = stats.max, low = stats.min,
mean = stats.mean, variance = stats.variance, stdev = stats.stdev)
self ! data
data
} else NoDataAvailable(key.wsid, key.year, classOf[DailyTemperature])
49. TemperatureActor
class TemperatureActor(sc: SparkContext, settings: WeatherSettings)
extends WeatherActor with ActorLogging {
def receive : Actor.Receive = {
case e: GetDailyTemperature => daily(e.day, sender)
case e: DailyTemperature => store(e)
case e: GetMonthlyHiLowTemperature => highLow(e, sender)
}
50. TemperatureActor
/** Asynchronously stores the daily temperature aggregates, triggered by
 * on-demand requests via the `forDay` function's `self ! data`, into the
 * daily temperature aggregation table.
 */
private def store(e: DailyTemperature): Unit =
sc.parallelize(Seq(e)).saveToCassandra(keyspace, dailytable)
54. Token
Server
• Consistent hash between -2^63 and 2^63 - 1
• Each node owns a range of those values
• The token is the beginning of that range up to the next node's token value
• Virtual Nodes break these down further
[Diagram: servers on a ring, each owning a token range of the data, starting at 0 …]
58. Table
CREATE TABLE weather_station (
id text,
name text,
country_code text,
state_code text,
call_sign text,
lat double,
long double,
elevation double,
PRIMARY KEY(id)
);
[Callouts: Table Name, Column Name, Column CQL Type, Primary Key Designation / Partition Key]
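A small read sketch against this table via the Spark Cassandra Connector, consistent with the connector usage later in the deck; the keyspace is assumed to be isd_weather_data and the station id is hypothetical:

import com.datastax.spark.connector._

// Look up a single weather station by its partition key `id`.
val station = sc.cassandraTable("isd_weather_data", "weather_station")
  .where("id = ?", "725030:14732")
  .collect()
  .headOption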
59. Queries supported
CREATE TABLE raw_weather_data (
wsid text,
year int,
month int,
day int,
hour int,
temperature double,
dewpoint double,
pressure double,
wind_direction int,
wind_speed double,
sky_condition int,
sky_condition_text text,
one_hour_precip double,
six_hour_precip double,
PRIMARY KEY ((wsid), year, month, day, hour)
) WITH CLUSTERING ORDER BY (year DESC, month DESC, day DESC, hour DESC);
Get weather data given:
• Weather Station ID
• Weather Station ID and Time
• Weather Station ID and Range of Time
(query sketches below)
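Sketches of those three access patterns using the Spark Cassandra Connector's where clauses, as the deck does elsewhere; the station id and dates are hypothetical:

import com.datastax.spark.connector._

// 1. By Weather Station ID (the full partition)
sc.cassandraTable("isd_weather_data", "raw_weather_data")
  .where("wsid = ?", "725030:14732")

// 2. By Weather Station ID and a specific time
sc.cassandraTable("isd_weather_data", "raw_weather_data")
  .where("wsid = ? AND year = ? AND month = ? AND day = ? AND hour = ?",
    "725030:14732", 2014, 6, 15, 12)

// 3. By Weather Station ID and a range of time (ranges are allowed on clustering columns)
sc.cassandraTable("isd_weather_data", "raw_weather_data")
  .where("wsid = ? AND year = ? AND month >= ? AND month <= ?", "725030:14732", 2014, 1, 6)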
64. Consistency level
Consistency Level: Number of Nodes Acknowledged
• One: one node (read repair triggered)
• Local One: one node in the local DC (read repair in the local DC)
• Quorum: 51% of replica nodes
• Local Quorum: 51% of replica nodes in the local DC
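A sketch of choosing these levels for the Spark jobs in this deck, assuming the Spark Cassandra Connector's consistency settings; the host and app name are hypothetical:

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("weather")
  .set("spark.cassandra.connection.host", "127.0.0.1")
  // Reads acknowledge one replica in the local DC; writes wait for 51% of
  // the replicas in the local DC.
  .set("spark.cassandra.input.consistency.level", "LOCAL_ONE")
  .set("spark.cassandra.output.consistency.level", "LOCAL_QUORUM")
val sc = new SparkContext(conf)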
81. Simple example
/** Spark Cassandra Connector import that provides `sc.cassandraTable` */
import com.datastax.spark.connector._

/** keyspace & table */
val tableRDD = sc.cassandraTable("isd_weather_data", "raw_weather_data")
/** get a simple count of all the rows in the raw_weather_data table */
val rowCount = tableRDD.count()
println(s"Total Rows in Raw Weather Table: $rowCount")
sc.stop()
[Diagram: a Spark Executor uses the Spark Connector to read SELECT * FROM isd_weather_data.raw_weather_data into Spark RDD partitions.]
82. Saving back the weather data
import org.apache.spark.sql.cassandra.CassandraSQLContext
import com.datastax.spark.connector._

val cc = new CassandraSQLContext(sc)
cc.setKeyspace("isd_weather_data")
cc.sql("""
  SELECT wsid, year, month, day, max(temperature) high, min(temperature) low
  FROM raw_weather_data
  WHERE month = 6
  AND temperature != 0.0
  GROUP BY wsid, year, month, day
""")
  .map { row => (row.getString(0), row.getInt(1), row.getInt(2), row.getInt(3), row.getDouble(4), row.getDouble(5)) }
  .saveToCassandra("isd_weather_data", "daily_aggregate_temperature")
92. Kafka on Mesos example
Scheduler
• Provides the operational automation for a Kafka Cluster
• Manages the changes to the broker's configuration
• Exposes a REST API for the CLI to use or any other client
• Runs on Marathon for high availability
Executor
• The executor interacts with the Kafka broker as an intermediary to the scheduler