After building several data lakes and large business-intelligence pipelines, we now know that Scala and its principles were essential to the success of those large undertakings.
In this talk, we will go through seven key Scala-based architectures and methodologies used in real-life projects. More specifically, we will see the impact of these recipes on Spark performance, and how they enabled the rapid growth of those projects.
4. About Me
Jonathan WINANDY
Lead Data Engineer:
- Data Lake building,
- Audit / Coaching,
- Spark Training.
Founder of Univalence (BI / Big Data)
Co-Founder of CYM (IoT / Predictive Maintenance),
Craft Analytics† (BI / Big Data),
and Valwin (Health Care Data).
7. 1. It’s all about our Organisations
Data engineering is not about scaling computation.
8. 1. It’s all about our Organisations
Data engineering is not a support function for Data Scientists[1].
[1] whatever they are nowadays
9. 1. It’s all about our Organisations
Instead, Data engineering enables access to Data!
10. 1. It’s all about our Organisations
access to Data … in complex organisations.
[Diagram: you, between Product, Ops, BI, and Marketing, receiving data and producing new data.]
11. 1. It’s all about our Organisations
access to Data … in complex organisations.
[Diagram: a holding with Entity 1 … Entity N, each with its own Marketing and IT; data and new data flow between you and each entity.]
12. 1. It’s all about our Organisations
access to Data … in complex organisations.
It’s very frustrating!
We run a support-group meetup if you are interested: Paris Data Engineers!
13. 1. It’s all about our Organisations
Small tips:
- Only one Hadoop cluster (no TEST/REC/INT/PREPROD).
- No Air-Data-Eng, it helps no one.
- Radical transparency with other teams.
- Hack that sh**.
15. 2. Optimising our work
There are 3 key concerns governing our decisions:
- Lead time
- Impact
- Failure management
16. 2. Optimising our work
Lead time (noun): the period of time between the initial phase of a process and the emergence of results, as between the planning and completed manufacture of a product.
Short lead times are essential!
The Elastic stack helps a lot in this area.
17. 2. Optimising our work
Impact
To have impact, we have to analyse beyond immediate needs. That way, we’re able to provide solutions to entire classes of problems.
18. 2. Optimising our work
Failure management
Things fail, be prepared!
On the same morning, the RER A commuter line and our Hadoop job tracker can both fail.
Unprepared failures may pile up and lead to huge waste.
19. 2. Optimising our work
How to mitigate failure in 7 questions:
- “What is likely to fail?” $componentName_____
- “How? (root cause)”
- “Can we know if this will fail?”
- “Can we prevent this failure?”
- “What are the impacts?”
- “How do we fix it when it happens?”
- “Can we facilitate the fix today?”
22. 3. Staging the data
Data is moving around, freeze it!
Staging changed with Big Data. We moved from transient staging (FTP, NFS, etc.) to persistent staging in distributed solutions:
● In Streaming, we may retain logs in Kafka for several months.
● In Batch, staging in HDFS may retain source data for years.
23. 3. Staging the data
Modern staging anti-patterns:
● Dropping destination directories before moving the data.
● Having incomplete data visible.
● Short log retention in streams (=> new failure modes).
Modern staging should be seen as a persistent data structure.
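To make “persistent data structure” concrete, here is a minimal sketch (the helper and path layout are illustrative, not from the talk) that avoids both anti-patterns above: data is written out of sight, published atomically, and never overwritten.

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}
import org.apache.spark.sql.DataFrame

object Staging {
  // Write to a temporary directory, then publish with an atomic HDFS rename:
  // readers never see incomplete data, and staged days stay immutable.
  def stage(df: DataFrame, root: String, day: String): Unit = {
    val fs   = FileSystem.get(new Configuration())
    val tmp  = new Path(s"$root/_tmp/$day")
    val dest = new Path(s"$root/day=$day")

    df.write.parquet(tmp.toString)
    require(!fs.exists(dest), s"$dest is already staged, refusing to drop it")
    require(fs.rename(tmp, dest), s"failed to publish $dest")
  }
}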
26. 4. Using RDDs or Dataframes
Dataframes have great performance, but are untyped and foreign.
RDDs have a robust Scala API, but are a pain to map from data sources.
btw, SQL is AWESOME
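To illustrate “untyped” (a hypothetical snippet, not from the talk): a column typo in a dataframe expression still compiles and only fails at runtime, while the same mistake on an RDD of case classes is a compile error.

import org.apache.spark.rdd.RDD
import org.apache.spark.sql.DataFrame

case class Sale(id: String, amount: Double)

def untyped(df: DataFrame): DataFrame =
  df.select(df("amout") + 1)   // typo in the column name: compiles, fails at runtime

def typed(sales: RDD[Sale]): RDD[Double] =
  sales.map(_.amount + 1)      // the same typo here would not compile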
27. 4. Using RDDs or Dataframes
Dataframes            | RDDs
----------------------|-------------------
Predicate push down   | Types !!
Bare metal / unboxed  | Nested structures
Connectors            | Better unit tests
Pluggable Optimizer   | Less stages
SQL + Meta            | Scala * Scala
28. 4. Using RDDs or Dataframes
We should use RDDs in large ETL jobs (a sketch follows this list):
- Loading the data with dataframe APIs,
- Basic case class mapping (or better, Datasets),
- Typesafe transformations,
- Storing with dataframe APIs.
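A minimal sketch of that shape, against the Spark 2.x API (Event, the paths, and the aggregation are illustrative):

import org.apache.spark.rdd.RDD
import org.apache.spark.sql.SparkSession

case class Event(id: String, userId: String, amount: Double)

object EtlJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("etl").getOrCreate()
    import spark.implicits._

    // 1. Load with the dataframe API (connectors, predicate push down).
    val raw = spark.read.parquet("/staging/events/day=2016-10-01")

    // 2. Map to case classes early (here via a Dataset).
    val events: RDD[Event] = raw.as[Event].rdd

    // 3. Typesafe transformations on the RDD.
    val totals: RDD[(String, Double)] =
      events.map(e => (e.userId, e.amount)).reduceByKey(_ + _)

    // 4. Store with the dataframe API.
    totals.toDF("user_id", "total")
      .write.parquet("/warehouse/user_totals/day=2016-10-01")
  }
}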
29. 4. Using RDDs or Dataframes
Dataframes are perfect for:
- Exploration, drill down,
- Light jobs,
- Dynamic jobs.
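For example, a hypothetical drill-down needs no case classes at all; the schema comes from the files:

import org.apache.spark.sql.SparkSession

object Explore {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("explore").getOrCreate()
    import spark.implicits._

    // Top 20 countries by event count, straight from staged parquet files.
    spark.read.parquet("/staging/events")
      .groupBy("country").count()
      .orderBy($"count".desc)
      .show(20)
  }
}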
30. 4. Using RDDs or Dataframes
RDD-based jobs are like marine mammals.
32. 5. Cogroup all the things
The cogroup is the best operation to link data together.
It fundamentally changes the way we work with data.
33. 5. Cogroup all the things
join      (left: RDD[(K,A)], right: RDD[(K,B)]): RDD[(K, (A, B))]
leftJoin  (left: RDD[(K,A)], right: RDD[(K,B)]): RDD[(K, (A, Option[B]))]
rightJoin (left: RDD[(K,A)], right: RDD[(K,B)]): RDD[(K, (Option[A], B))]
outerJoin (left: RDD[(K,A)], right: RDD[(K,B)]): RDD[(K, (Option[A], Option[B]))]
cogroup   (left: RDD[(K,A)], right: RDD[(K,B)]): RDD[(K, (Seq[A], Seq[B]))]
groupBy   (rdd: RDD[(K,A)]): RDD[(K, Seq[A])]
With cogroup and groupBy, for a given key k: K, there is exactly one row with that key in the output dataset.
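As an illustration of linking with cogroup (User, Order, and the field names are hypothetical; Spark’s own cogroup returns Iterables rather than Seqs):

import org.apache.spark.rdd.RDD

case class User(id: String, name: String)
case class Order(userId: String, amount: Double)

// One output row per key, whatever the cardinality on either side:
// no duplicated users when a user has many orders, and no dropped keys.
def link(users: RDD[User], orders: RDD[Order])
    : RDD[(String, (Iterable[User], Iterable[Order]))] =
  users.keyBy(_.id).cogroup(orders.keyBy(_.userId))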
35. 5. Cogroup all the things
{ case (k, (s1, s2)) =>
    (k, (s1.map(fA).filter(pA),
         s2.map(fB).filter(pB))) }
CHECKPOINT
36. 5. Cogroup all the things
3k LoC, 30 minutes to run (non-blocking)
vs
15 LoC, 11 hours to run (blocking)
37. 5. Cogroup all the things
What about tests? Cogrouping allows us to have “ScalaCheck-like” tests, by minimising examples.
Test workflow (sketched after this list):
- Write a predicate to isolate the bug.
- Get the minimal cogrouped row.
- Output the row to test resources.
- Reproduce the bug.
- Write tests and fix the code.
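A sketch of the first steps, building on the link example above (all names are illustrative):

def minimalExample(users: RDD[User], orders: RDD[Order])
    : Array[(String, (Iterable[User], Iterable[Order]))] = {
  // 1. A predicate that isolates the bug, e.g. orders whose user is missing.
  def buggy(row: (String, (Iterable[User], Iterable[Order]))): Boolean = {
    val (_, (us, os)) = row
    us.isEmpty && os.nonEmpty
  }
  // 2. The minimal cogrouped row that reproduces the bug.
  link(users, orders).filter(buggy).take(1)
}
// 3. Serialise the row into src/test/resources, then reproduce the bug from
//    that file alone, write the test, and fix the code.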
45. 6. Inline data quality
https://github.com/ahoy-jon/autoBuild (presented in October 2015)
There are opportunities to make those approaches more “precepte-like”
(DAG of workflows, provenance of every field, structure tags).
47. 7. Create real programs
Most pipelines are designed as “stateless” computations: either they require no state (good), or they infer the current state from the filesystem’s state (bad).
48. 7. Create real programs
Solution: allow pipelines to access a commit log, to read about past executions and to push data for future executions.
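A minimal sketch of what such an API could look like (the trait and all names are hypothetical, not Kerguelen’s actual interface):

import java.time.Instant

case class RunRecord(job: String, inputs: Seq[String], output: String, at: Instant)

trait CommitLog {
  def lastSuccessfulRun(job: String): Option[RunRecord] // read about past executions
  def append(record: RunRecord): Unit                   // push data for future executions
}

def runDailyAgg(log: CommitLog): Unit = {
  // Decide what to process from the commit log, not from filesystem state.
  val since = log.lastSuccessfulRun("daily-agg").map(_.at)
  // ... process only inputs newer than `since` ...
  log.append(RunRecord("daily-agg", Seq("/staging/events"), "/warehouse/agg", Instant.now()))
}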
49. 7. Create real programs
In progress: project codename Kerguelen.
Multi-level abstractions / commit-log backed / API for jobs.
It allows the creation of jobs at different levels of concern:
- Level 1: name resolving
- Level 2: smart intermediaries (schema capture, stats, delta, …)
- Level 3: smart high-level scheduler (replay, load management, coherence)
- Level 4: “code as data” (=> continuous delivery, auto QA, automated production releases)