Nadav Wiener
Scala Tech Lead @ Riskified
Scala since 2007
Akka Streams since 2016
RISKIFIED
● 250 total employees in New York and Tel Aviv
● $64M in funding secured to date
● 1,000,000 global orders reviewed every day
● 1000 merchants, including several publicly traded companies
Time Windowing
Streaming Data Platforms
vs Libraries
Glazier: Event Time Windowing
Libraries
Spark /
Flink
Kafka
Streams
Akka
Streams
Monix /
fs2
Platforms
Poll
?
This was our
dilemma
Behavioral Data
Proxy
?
no proxy
?
Proxy
?
Gather lowest latencies
● per session
● per 10-second window
We want to:
Browser HTTP Server
latencies
Gather lowest latencies
● per session
● per 10-second window
Browser HTTP Server
latencies
write to journal
Gather lowest latencies
● per session
● per 10-second window
Browser HTTP Server
latencies
write to journal
10 second windows (for each user)
Lowest Latency
Stream
Processing
lowest
latencies
Database
consume
Time Windowing
Time Windowing
Platforms (Spark/Flink): 😀
Libraries (Akka Streams): 😕
Platforms vs Libraries?
Platforms are:
✔ Powerful
but:
✘ Big fish to catch
✘ Constraining
Platforms
Libraries
You are here
Spark /
Flink
Kafka
Streams
Akka
Streams
Monix /
fs2
Platforms
Libraries
Platforms
Libraries
You are here
Gather lowest latencies
● per session
● per 10-second window
Browser HTTP Server
latencies
lowest
latencies
Database
write to journal
consume
Stream
Processing
Take #1
case class LatencyEntry(sessionId: String,
                        latency: Duration)
Stream Processing Take #1

latencySource
  .groupBy(_.sessionId)
  .groupedWithin(10.seconds)
  .map(group => group.minBy(_.latency))
  .mergeSubstreams
  .to(databaseSink)

● Partition into per-session substreams
● Accumulate & emit every 10s
● Lowest latency in accumulated data
● Merge substreams & send to downstream db
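As a reference for what the pipeline is supposed to compute, here is a minimal plain-collections sketch (hypothetical, not the talk's Akka Streams code; it adds an explicit `timestampMs` field so windows can be computed exactly): the lowest latency per (session, 10-second window).

```scala
import scala.concurrent.duration._

// Hypothetical reference semantics: lowest latency per session per
// 10-second window, computed over in-memory data.
final case class LatencyEntry(sessionId: String, latency: Duration, timestampMs: Long)

def lowestPerWindow(entries: Seq[LatencyEntry],
                    spanMs: Long = 10000L): Map[(String, Long), LatencyEntry] =
  entries
    .groupBy(e => (e.sessionId, e.timestampMs - e.timestampMs % spanMs)) // (session, window start)
    .map { case (key, group) => key -> group.minBy(_.latency) }
```

The streaming code above only approximates this: `groupedWithin` batches by wall-clock arrival, not by when the latency was actually measured.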
But this is naive...
write to journal
Browser HTTP Server
latencies
1) Bring up only the HTTP server,
and wait for latencies to accumulate
1
1) Bring up only the HTTP server,
and wait for latencies to accumulate
2) Only then bring up stream processing
write to journal
consume
1 Browser HTTP Server
latencies
2
Database
Stream
Processing
lowest
latencies
1) Bring up only the HTTP server,
and wait for latencies to accumulate
2) Only then bring up stream processing
Instead of this:
10 second windows (for each user)
Lowest Latency
2
Database
write to journal
consume
1 Browser HTTP Server
latencies
lowest
latencies Stream
Processing
10 second windows (for each user)
Lowest Latency
We get this:
2
Database
write to journal
consume
1 Browser HTTP Server
latencies
lowest
latencies Stream
Processing
WE SHOULDN’T BE
LOOKING AT THE
CLOCK
Processing
Time
Event
Time
Database
write to journal
consume
Browser HTTP Server
latencies
lowest
latencies Stream
Processing
Event Time 😀
● Timestamp as payload
● Plays well with distributed systems
● Not available in libraries

Processing Time 😕
● Time derived from clock
● Less suitable for business logic
● Available in libraries
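The backlog problem can be made concrete with a toy calculation (hypothetical numbers): three latency events, seconds apart in event time, replayed in a burst by a catching-up consumer.

```scala
// Bucket timestamps into 10-second windows; key = window start.
def bucketByWindow(timestampsMs: Seq[Long], spanMs: Long): Map[Long, Seq[Long]] =
  timestampsMs.groupBy(t => t - t % spanMs)

// When the latencies actually happened (event time)...
val eventTimesMs   = Seq(1000L, 5000L, 12000L)
// ...versus when a consumer catching up on backlog sees them.
val arrivalTimesMs = Seq(100000L, 100001L, 100002L)
```

Bucketing by event time keeps the original two windows; bucketing by arrival time lumps everything into one.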
Event Time or Processing Time?
Glazier
Event time windowing library
Glazier
Tour of the API
Under the hood
Glazier |+| Akka Streams
Glazier
case class LatencyEntry(sessionId: String,
                        latency: Duration,
                        timestamp: Timestamp)
latencySource
  .timestampWith(_.timestamp)
  .keyBy(_.sessionId)
  .windowBy(Window.tumbling(10.seconds), maxLateness = 1.second)
  .reduce((a, b) => Seq(a, b).minBy(_.latency))
  .mergeSubstreams
  .to(databaseSink)

● Event time obtained from LatencyEntry.timestamp
● Partitioned by session id
● 10 second (event-time) windows
● 1 second grace period: events are not guaranteed to arrive in order, so windows stay around for late events
● Emit the lowest latency in each window, once it closes
● Merge window substreams
Windowing Functions

type WindowingFunction = Timestamp => immutable.Seq[Interval]

def tumbling(span: Span): WindowingFunction = { timestamp =>
  val elapsed = timestamp % span
  val start = timestamp - elapsed
  List(Interval(start, start + span))
}

(diagram: back-to-back, non-overlapping spans)
Windowing Functions

type WindowingFunction = Timestamp => immutable.Seq[Interval]

def sliding(span: Span, step: Span): WindowingFunction = ...

(diagram: overlapping windows of length span, a new one starting every step)
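A standalone sketch of both windowing functions, with plain `Long` millisecond timestamps instead of Glazier's `Timestamp`/`Span` types. `tumbling` mirrors the slide; `sliding` is a guessed implementation that assigns a timestamp to every step-aligned window covering it.

```scala
final case class Interval(start: Long, end: Long)
type WindowingFunction = Long => List[Interval]

def tumbling(span: Long): WindowingFunction = { timestamp =>
  val start = timestamp - timestamp % span // start of the window the timestamp falls into
  List(Interval(start, start + span))
}

def sliding(span: Long, step: Long): WindowingFunction = { timestamp =>
  val lastStart = timestamp - timestamp % step // latest window containing the timestamp
  Iterator.iterate(lastStart)(_ - step)        // walk backwards one step at a time
    .takeWhile(s => s >= 0 && s + span > timestamp)
    .toList.reverse
    .map(s => Interval(s, s + span))
}
```

For example, with `span = 10` and `step = 5`, timestamp 12 lands in the two windows starting at 5 and 10.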
State
Event
Active
Windows
Logical
Clock
Instructions
Active
Windows
Logical
Clock
State
Step
Windowing State
case class Step(presentTime: Timestamp,
windows: Map[Interval, Set[Any]])
Timekeeping
Newer events advance 'presentTime'
Active
Windows
Logical
Clock
case class Step(presentTime: Timestamp,
windows: Map[Interval, Set[Any]])
Timekeeping
Windows are represented as key sets, indexed by interval
Active
Windows
Logical
Clock
case class Step(presentTime: Timestamp,
windows: Map[Interval, Set[Any]])
Timekeeping
Newer events advance 'presentTime'
presentTime = 19s; windows: 0s…10s → {1, 2}, 10s…20s → {2, 5}
def step[A](event: Event[A]): State[Step, Vector[Instruction[A]]] =
  for {
    _ <- advanceClock(event.timestamp, maxLateness)
    closeInstructions <- closeWindows
    openInstructions <- openWindows(event)
    handleInstructions <- handleEvent(event)
  } yield closeInstructions ++ openInstructions ++ handleInstructions
Timekeeping
Event arrives: LatencyEntry(4, 100ms, 22s); presentTime → 22s; windows: 0s…10s → {1, 2}, 10s…20s → {2, 5}
Active Windows
Event: LatencyEntry(4, 100ms, 22s); presentTime = 22s
Before: 0s…10s → {1, 2}, 10s…20s → {2, 5}
After closing 0s…10s: 10s…20s → {2, 5}
Active Windows
Event: LatencyEntry(4, 100ms, 22s); presentTime = 22s; after opening 20s…30s: 10s…20s → {2, 5}, 20s…30s → {4}
Instructions
Event: LatencyEntry(4, 100ms, 22s); presentTime = 22s; windows: 10s…20s → {2, 5}, 20s…30s → {4}
closeInstructions ++ openInstructions ++ handleInstructions == List(
  WindowStatusChange(Window(1, Interval(0s, 10s)), Close),
  WindowStatusChange(Window(2, Interval(0s, 10s)), Close),
  WindowStatusChange(Window(4, Interval(20s, 30s)), Open),
  HandleEvent(Window(4, Interval(20s, 30s)),
              LatencyEntry(4, 100ms, 22s))
)
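The transition walked through above can be sketched without any dependencies. This is a hypothetical simplification (plain function instead of Glazier's cats `State`, invented type names, and an illustrative `maxLateness` large enough that the 10s…20s window survives the jump to 22s): advance the clock, close expired windows, open the event's window, and hand the event to it.

```scala
final case class Interval(start: Long, end: Long)
final case class StepState(presentTime: Long, windows: Map[Interval, Set[String]])

sealed trait Instruction
final case class CloseWindow(interval: Interval, keys: Set[String]) extends Instruction
final case class OpenWindow(interval: Interval, key: String) extends Instruction
final case class Handle(interval: Interval, key: String, latencyMs: Long) extends Instruction

def step(state: StepState, key: String, latencyMs: Long, timestamp: Long,
         span: Long, maxLateness: Long): (StepState, List[Instruction]) = {
  val clock = math.max(state.presentTime, timestamp) // the logical clock never moves backwards
  // a window closes once the clock has passed its end by more than maxLateness
  val (closed, open) = state.windows.partition { case (iv, _) => iv.end + maxLateness <= clock }
  val windowStart = timestamp - timestamp % span
  val target = Interval(windowStart, windowStart + span)
  val opens: List[Instruction] =
    if (open.get(target).exists(_.contains(key))) Nil else List(OpenWindow(target, key))
  val next = StepState(clock, open.updated(target, open.getOrElse(target, Set.empty[String]) + key))
  val closes = closed.toList.map { case (iv, ks) => CloseWindow(iv, ks) }
  (next, closes ++ opens ++ List(Handle(target, key, latencyMs)))
}
```

Running it on the example state reproduces the walkthrough: the 0s…10s window closes, 20s…30s opens for key 4, and the event is handled there.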
def fromFlow[A](glazier: Glazier[A], flow: Flow[A, …]): SubFlow[…] =
  flow
    .scan(Glazier.Empty)((state, event) => glazier.runStep(state, event))
    .mapConcat(_.instructions)
    .groupBy(_.window)
    .takeWhile {
      case WindowStatusChange(_, WindowStatus.Close) => false
      case _ => true
    }
    .collect { case HandleEvent(_, value) => value }
Glazier |+| Akka Streams
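What the `groupBy`/`takeWhile`/`collect` tail of that pipeline does to an instruction stream can be shown with plain collections (a hypothetical sketch with simplified types, no Akka): per window, keep handled events until that window's close instruction arrives.

```scala
// Simplified stand-ins for Glazier's instruction types.
sealed trait Instr { def window: String }
final case class HandleEvent(window: String, value: Int) extends Instr
final case class CloseWindow(window: String) extends Instr

// Demultiplex an instruction stream into per-window event lists,
// cutting each window off at its Close instruction.
def demux(instrs: List[Instr]): Map[String, List[Int]] =
  instrs
    .groupBy(_.window)
    .map { case (w, is) =>
      w -> is
        .takeWhile { case CloseWindow(_) => false; case _ => true }
        .collect { case HandleEvent(_, v) => v }
    }
```

In the Akka Streams version, each group is a live substream that completes at the close instruction instead of a list.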
latencySource
  .timestampWith(_.timestamp)
  .keyBy(_.sessionId)
  .windowBy(Window.tumbling(10.seconds), maxLateness = 1.second)
  .reduce((a, b) => Seq(a, b).minBy(_.latency))
  .mergeSubstreams

User code in windowed substream
Akka Streams Support
latencySource
  .assignTimestamps(_.timestamp)
  .keyBy(_.sessionId)
  .window(TumblingEventTimeWindows.of(Time.seconds(10)))
  .allowedLateness(Time.seconds(1))
  .reduceWith { case (r1, r2) => Seq(r1, r2).minBy(_.latency) }

latencySource
  .timestampWith(_.timestamp)
  .keyBy(_.sessionId)
  .windowBy(Window.tumbling(10.seconds), maxLateness = 1.second)
  .reduce((a, b) => Seq(a, b).minBy(_.latency))
  .mergeSubstreams
vs Flink API
Time Windowing
Spark/Flink: 😀
Akka Streams: 😕
Akka Streams with Glazier: 😀
Questions?
Takeaways
Platforms vs Libraries?

Platforms:
✔ Powerful
✘ Upfront investment
✘ Constraining

Libraries:
✔ Flexible
✘ Missing functionality
Platforms
Libraries
You are here
Platforms
Libraries
Platforms
Libraries
Significant overlap
Thank you for your time!
Glazier
https://github.com/riskified/glazier
“Streaming Microservices”, Dean Wampler
https://slideslive.com/38908773/kafkabased-microservices-with-akka-streams-and-kafka-streams
“Windowing data in Akka Streams”, Adam Warski
https://softwaremill.com/windowing-data-in-akka-streams/
Speaker notes
  1. Riskified is the world's leading eCommerce fraud prevention company. We use machine learning and behavioural analytics to protect our customers from online fraud.
  2. How many familiar with Akka Streams? How many with Kafka or Flink? Faced with a decision between the two?
  3. “One thing we analyze is page visits to shops: what pages, when, and in particular—latencies”
  4. Let’s run this through
  5. Whale: public domain. Butterfly: taken from Wikipedia; should be commons-*
  6. No copyright info, appears to be in public domain
  7. How many use: Spark? Flink? Kafka Streams? Akka Streams? Monix? Fs2? How many of you are evaluating these as alternatives?
  8. Check out our other sessions here and drop by our booth to learn more about us!
  9. “...By the time events reach our code, timing for 10s windows is inaccurate, Worse: catching up with backlog will flood 10s windows”