Video and slides synchronized; mp3 and slide download available at http://bit.ly/2lGNybu.
Stefan Krawczyk discusses how his team at Stitch Fix uses the cloud to enable over 80 data scientists to be productive. He also talks about prototyping ideas, algorithms, and analyses; how they set up and keep schemas in sync across Hive, Presto, Redshift, and Spark; and how they make access easy for their data scientists. Filmed at qconsf.com.
Stefan Krawczyk is Algo Dev Platform Lead at Stitch Fix, where he leads development of the algorithm development platform. He spent formative years at Stanford, LinkedIn, Nextdoor & Idibon, working on everything from growth engineering, product engineering, and data engineering to recommendation systems, NLP, data science, and business intelligence.
1. Data Science in the Cloud
Stefan Krawczyk
@stefkrawczyk
linkedin.com/in/skrawczyk
November 2016
32. This is Usually Nothing to Worry About
● OS handles correct access
● DB has ACID properties
33. This is Usually Nothing to Worry About
● OS handles correct access
● DB has ACID properties
● But it's easy to outgrow these options with big data or a big team.
34. S3
● Amazon's Simple Storage Service
● Infinite* storage
● Can write, read, and delete, BUT NOT append (see the sketch below)
● Looks like a file system*:
  ○ URIs: my.bucket/path/to/files/file.txt
● Scales well
* For all intents and purposes
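A minimal boto3 sketch of those semantics (bucket and key names are made up; boto3 is assumed to be installed and configured):

import boto3

s3 = boto3.client('s3')

# Write, then read back
s3.put_object(Bucket='my.bucket', Key='path/to/files/file.txt', Body=b'line 1\n')
body = s3.get_object(Bucket='my.bucket', Key='path/to/files/file.txt')['Body'].read()

# There is no append: to "append" you rewrite the complete object
s3.put_object(Bucket='my.bucket', Key='path/to/files/file.txt', Body=body + b'line 2\n')

# Delete
s3.delete_object(Bucket='my.bucket', Key='path/to/files/file.txt')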
35. Hive Metastore
● Hadoop service that stores:
  ○ Schema
  ○ Partition information, e.g. date
  ○ Data location for a partition
36. Hive Metastore
● Hadoop service that stores:
  ○ Schema
  ○ Partition information, e.g. date
  ○ Data location for a partition

sold_items in the Hive Metastore:

Partition   Location
20161001    s3://bucket/sold_items/20161001
...         ...
20161031    s3://bucket/sold_items/20161031
41. But if we're not careful
● S3 is eventually consistent
● These bugs are hard to track down
42. Hive Metastore to the Rescue
● Use the Hive Metastore to control the partition source of truth
● Principles:
  ○ Never delete
  ○ Always write to a new place each time a partition changes
● Stitch Fix solution:
  ○ Use an inner directory → called a Batch ID (see the sketch below)
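A minimal sketch of the pattern (the helper below is illustrative, not Stitch Fix's actual tooling): write each new version of a partition under a fresh, timestamped Batch ID directory, then repoint the Hive Metastore with standard Hive DDL.

import datetime

def new_partition_location(bucket, table, date):
    # Batch ID = timestamp of this write, e.g. 20161102234252
    batch_id = datetime.datetime.utcnow().strftime('%Y%m%d%H%M%S')
    return 's3://{}/{}/{}/{}/'.format(bucket, table, date, batch_id)

location = new_partition_location('bucket', 'sold_items', '20161031')
# 1) Write the partition's data under `location`; never touch old directories.
# 2) Flip the source of truth in the Hive Metastore:
ddl = ("ALTER TABLE sold_items PARTITION (ds='20161031') "
       "SET LOCATION '{}'".format(location))
print(ddl)

Because old batch directories are never deleted, a rollback is just another SET LOCATION pointing at a previous Batch ID.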
44. Batch ID Pattern

sold_items:

Date        Location
20161001    s3://bucket/sold_items/20161001/20161002002334/
...         ...
20161031    s3://bucket/sold_items/20161031/20161101002256/
45. Batch ID Pattern
● Overwriting a partition is just a matter of updating the location

sold_items:

Date        Location
20161001    s3://bucket/sold_items/20161001/20161002002334/
...         ...
20161031    s3://bucket/sold_items/20161031/20161101002256/
            → s3://bucket/sold_items/20161031/20161102234252/
46. Batch ID Pattern
● Overwriting a partition is just a matter of updating the location
● To the user this is a hidden inner directory

sold_items:

Date        Location
20161001    s3://bucket/sold_items/20161001/20161002002334/
...         ...
20161031    s3://bucket/sold_items/20161031/20161101002256/
            → s3://bucket/sold_items/20161031/20161102234252/
50. Batch ID Pattern Benefits
● Full partition history
  ○ Can roll back
    ■ Data Scientists are less afraid of mistakes
  ○ Can create audit trails more easily
    ■ What data changed and when
  ○ Can anchor downstream consumers to a particular Batch ID
52. Ad hoc Infra: In the Beginning...

              Env. Mgmt.   Scalability
Workstation   Low          Low
…             Medium       Medium
…             High         High

53. Ad hoc Infra: Evolution I

              Env. Mgmt.   Scalability
Workstation   Low          Low
…             Medium       Medium
…             High         High

54. Ad hoc Infra: Evolution II

              Env. Mgmt.   Scalability
Workstation   Low          Low
…             Medium       Medium
…             High         High

55. Ad hoc Infra: Evolution III

              Env. Mgmt.   Scalability
Workstation   Low          Low
…             Medium       Medium
…             Low          High
56. Why Does Docker Lower Overhead?
● Control of environment
  ○ Data Scientists don't need to worry about env.
● Isolation
  ○ Can host many Docker containers on a single machine
● Better host management
  ○ Allows central control of machine types
58. Our Docker Image
● Has:
  ○ Our internal API libraries
  ○ Jupyter Notebook:
    ■ PySpark
    ■ IPython
  ○ Python libs:
    ■ scikit-learn, numpy, scipy, pandas, etc.
  ○ RStudio
  ○ R libs:
    ■ dplyr, magrittr, ggplot2, lme4, boot, etc.
● Mounts user NFS
● User has terminal access to the file system via Jupyter for git, pip, etc.
62. Our Docker Problems So Far
● Docker tightly integrates with the Linux kernel.
  ○ Hypothesis:
    ■ Anything that makes uninterruptible calls to the kernel can:
      ● Break the ECS agent, because the container doesn't respond.
      ● Break isolation between containers.
    ■ E.g. mounting NFS
● Docker Hub:
  ○ Switched to Artifactory
70. Observations
● Naïve scheme of JSON + zlib works well:

import json
import zlib
...
# compress: json.dumps returns a str, zlib needs bytes
compressed = zlib.compress(json.dumps(value).encode('utf-8'))
# decompress: decode the bytes back before parsing
original = json.loads(zlib.decompress(compressed).decode('utf-8'))
71. Observations
● Naïve scheme of JSON + zlib works well:
● Double vs Float: do you really need to store that much precision?

import json
import zlib
...
# compress: json.dumps returns a str, zlib needs bytes
compressed = zlib.compress(json.dumps(value).encode('utf-8'))
# decompress: decode the bytes back before parsing
original = json.loads(zlib.decompress(compressed).decode('utf-8'))
72. Observations
● Naïve scheme of JSON + zlib works well:
● Double vs Float: do you really need to store that much precision? (see the sketch below)
● For more inspiration, look to columnar DBs and how they compress columns

import json
import zlib
...
# compress: json.dumps returns a str, zlib needs bytes
compressed = zlib.compress(json.dumps(value).encode('utf-8'))
# decompress: decode the bytes back before parsing
original = json.loads(zlib.decompress(compressed).decode('utf-8'))
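To make the precision point concrete, a small sketch with made-up data: truncating floats before serializing shrinks the JSON, and with it the compressed payload.

import json
import zlib

values = {'user_%d' % i: 0.12345678901234567 * i for i in range(1000)}
rounded = {k: round(v, 7) for k, v in values.items()}  # roughly float32 precision

full = zlib.compress(json.dumps(values).encode('utf-8'))
small = zlib.compress(json.dumps(rounded).encode('utf-8'))
print(len(full), len(small))  # the rounded payload compresses noticeably smaller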
73. To Batch or Not To Batch: When is batch inefficient?
74. Online & Streamed Computation
● Online:
  ○ Computation occurs synchronously, when needed.
● Streamed:
  ○ Computation is triggered by events.
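A toy sketch of the distinction (hypothetical functions, not Stitch Fix code): online computes in the request path, while streamed computes as events arrive and stores results for later reads.

def predict(features):
    # Stand-in for a real model
    return sum(features)

feature_store = {'user_1': [0.1, 0.2], 'user_2': [0.3, 0.4]}

# Online: compute synchronously when the request arrives.
def handle_request(user_id):
    return predict(feature_store[user_id])

# Streamed: compute when an event arrives; readers get precomputed results.
results = {}
def on_event(event):
    user_id, features = event
    feature_store[user_id] = features
    results[user_id] = predict(features)

on_event(('user_1', [0.5, 0.6]))
print(handle_request('user_2'), results['user_1'])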
75. Online & Streamed Computation
Very likely you start with a batch system.
76. Online & Streamed Computation
● Do you need to recompute:
  ○ features for all users?
  ○ predicted results for all users?
Very likely you start with a batch system.
77. Online & Streamed Computation
● Do you need to recompute:
  ○ features for all users?
  ○ predicted results for all users?
● Are you heavily dependent on your ETL running every night?
Very likely you start with a batch system.
78. Online & Streamed Computation
● Do you need to recompute:
  ○ features for all users?
  ○ predicted results for all users?
● Are you heavily dependent on your ETL running every night?
● Online vs Streamed depends on in-house factors:
  ○ Number of models
  ○ How often they change
  ○ Cadence of output required
  ○ In-house eng. expertise
  ○ etc.
Very likely you start with a batch system.
79. Online & Streamed Computation
● Do you need to recompute:
  ○ features for all users?
  ○ predicted results for all users?
● Are you heavily dependent on your ETL running every night?
● Online vs Streamed depends on in-house factors:
  ○ Number of models
  ○ How often they change
  ○ Cadence of output required
  ○ In-house eng. expertise
  ○ etc.
Very likely you start with a batch system.
We use an online system for recommendations.
84. Online/Streaming Thoughts
● Dedicated infrastructure → more room on batch infrastructure
  ○ Hopefully $$$ savings
  ○ Hopefully less stressed Data Scientists
85. Online/Streaming Thoughts
● Dedicated infrastructure → more room on batch infrastructure
  ○ Hopefully $$$ savings
  ○ Hopefully less stressed Data Scientists
● Requires better software engineering practices
  ○ Code portability/reuse
  ○ Designing APIs/tools Data Scientists will use
86. Online/Streaming Thoughts
● Dedicated infrastructure → more room on batch infrastructure
  ○ Hopefully $$$ savings
  ○ Hopefully less stressed Data Scientists
● Requires better software engineering practices
  ○ Code portability/reuse
  ○ Designing APIs/tools Data Scientists will use
● Prototyping on AWS Lambda & Kinesis was surprisingly quick (see the sketch below)
  ○ Need to compile C libs on an Amazon Linux instance
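For flavor, a minimal Python Lambda handler for Kinesis events (the scoring logic is a placeholder, not the talk's code; Kinesis delivers record data base64-encoded):

import base64
import json

def handler(event, context):
    # AWS invokes this with a batch of Kinesis records
    for record in event['Records']:
        payload = json.loads(base64.b64decode(record['kinesis']['data']).decode('utf-8'))
        # Placeholder for real model scoring on the decoded payload
        score = sum(payload.get('features', []))
        print(payload.get('user_id'), score)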
88. Ever:
● Had someone leave, and then nobody understands how they trained their models?

89. Ever:
● Had someone leave, and then nobody understands how they trained their models?
  ○ Or you didn't remember yourself?

90. Ever:
● Had someone leave, and then nobody understands how they trained their models?
  ○ Or you didn't remember yourself?
● Had performance dip in models and had trouble figuring out why?

91. Ever:
● Had someone leave, and then nobody understands how they trained their models?
  ○ Or you didn't remember yourself?
● Had performance dip in models and had trouble figuring out why?
  ○ Or not known what's changed between model deployments?

92. Ever:
● Had someone leave, and then nobody understands how they trained their models?
  ○ Or you didn't remember yourself?
● Had performance dip in models and had trouble figuring out why?
  ○ Or not known what's changed between model deployments?
● Wanted to compare model performance over time?

93. Ever:
● Had someone leave, and then nobody understands how they trained their models?
  ○ Or you didn't remember yourself?
● Had performance dip in models and had trouble figuring out why?
  ○ Or not known what's changed between model deployments?
● Wanted to compare model performance over time?
● Wanted to train a model in R/Python/Spark and then deploy it to a webserver?
95. Produce Model Artifacts
● Isn't that just saving the coefficients/model values?
96. Produce Model Artifacts
● Isn't that just saving the coefficients/model values?
  ○ NO!
97. Produce Model Artifacts
● Isn't that just saving the coefficients/model values?
  ○ NO!
● Why?
98. Produce Model Artifacts
● Isn't that just saving the coefficients/model values?
  ○ NO!
● Why?
99. Produce Model Artifacts
● Isn't that just saving the coefficients/model values?
  ○ NO!
● Why?
→ How do you deal with organizational drift?
100. Produce Model Artifacts
● Isn't that just saving the coefficients/model values?
  ○ NO!
● Why?
→ How do you deal with organizational drift?
→ Makes it easy to keep an archive and track changes over time
101. Produce Model Artifacts
● Isn't that just saving the coefficients/model values?
  ○ NO!
● Why?
→ How do you deal with organizational drift?
→ Makes it easy to keep an archive and track changes over time
→ Helps a lot with model debugging & diagnosis!
102. Produce Model Artifacts
● Isn't that just saving the coefficients/model values?
  ○ NO!
● Why?
→ How do you deal with organizational drift?
→ Makes it easy to keep an archive and track changes over time
→ Helps a lot with model debugging & diagnosis!
→ Can more easily use in downstream processes
103. Produce Model Artifacts
● Analogous to software libraries
● Packaging:
  ○ Zip/Jar file
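A minimal sketch of such packaging (an illustrative format, not Stitch Fix's actual artifact): bundle the serialized model together with metadata about how it was produced.

import json
import pickle
import time
import zipfile

def save_model_artifact(model, path, metadata):
    # One self-describing archive: the model plus the context to reproduce it
    with zipfile.ZipFile(path, 'w') as zf:
        zf.writestr('model.pkl', pickle.dumps(model))
        zf.writestr('metadata.json', json.dumps(metadata, indent=2))

metadata = {
    'trained_at': time.strftime('%Y-%m-%dT%H:%M:%S'),
    'training_data': 's3://bucket/sold_items/20161031/20161101002256/',
    'library_versions': {'scikit-learn': '0.18'},
}
save_model_artifact({'coef': [0.1, 0.2]}, 'model_v1.zip', metadata)

Anchoring the metadata to a Batch ID ties the model back to exactly the data it was trained on.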