For many businesses, the batch-oriented architecture of Big Data, where data is captured in large, scalable stores and processed later, is simply too slow. A new breed of “Fast Data” architectures has evolved to be stream-oriented: data is processed as it arrives, giving businesses a competitive advantage.
There are many stream processing tools, so which ones should you choose? It helps to consider several factors in the context of your applications:
* Low latency: How low is necessary?
* High volume: How high is required?
* Integration with other tools: Which ones and how?
* Data processing: What kinds? In bulk? As individual events?
In this talk by Dean Wampler, Ph.D., VP of Fast Data Engineering at Lightbend, we’ll look at the criteria you need to consider when selecting technologies, plus specific examples of how four streaming tools (Akka Streams, Kafka Streams, Apache Flink, and Apache Spark) serve particular needs and use cases when working with continuous streams of data.
Moving from Big Data to Fast Data? Here's How To Pick The Right Streaming Engine
1. Moving from Big Data to Fast Data? Here's
How To Pick The Right Streaming Engine
WEBINAR
Dean Wampler, Ph.D. (@deanwampler)
VP of Fast Data Engineering
2. Upgrade your grey matter!
Get the free O’Reilly book by Dr. Dean Wampler,
VP of Fast Data Engineering at Lightbend
bit.ly/lightbend-fast-data
27. [Diagram: “Collect data, then process.” Events at Server 1, Server 2, …, Server n accumulate over time (minutes 0, 1, 2, 3, …) and are then propagated to Analysis.]
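The accumulate-then-process pattern in the diagram above can be contrasted with per-event streaming in a short, tool-agnostic sketch. This is a minimal illustration in plain Python, not one of the four engines covered in the talk, and the event names are invented:

```python
from collections import Counter

# Hypothetical event stream (names are made up for illustration).
events = ["click", "view", "click", "buy", "click"]

# Batch / Big Data style: collect the whole batch first, analyze later.
def batch_analysis(collected):
    # Runs only once, after all data has accumulated.
    return Counter(collected)

# Streaming / Fast Data style: update the result on every arriving event.
def stream_analysis(stream):
    counts = Counter()
    for event in stream:       # each event is handled as it arrives
        counts[event] += 1
        yield dict(counts)     # an intermediate result is available immediately

batch_result = batch_analysis(events)
stream_results = list(stream_analysis(events))

# The final streaming result matches the batch result, but the streaming
# version produced an answer after every event, not only at the end.
assert batch_result == Counter(stream_results[-1])
```

The point of the comparison: both styles compute the same final answer, but the streaming version's latency per result is one event rather than one full batch, which is the trade-off the slide's timeline is illustrating.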