Data science teams tend to put a lot of thought into developing a predictive model to address a business need, tackling data processing, model development, training, and validation. After validation, the model then often gets rolled out into production without much testing. While software engineering best practices have been around for a long time, until recently no formal guidelines existed for checking the code quality of a machine learning pipeline.
The talk will cover tips and best practices for writing more robust, production-ready predictive model pipelines. We know that code is never perfect; Irina will also share the pains and lessons learned from productionizing and maintaining four customer-facing models at four different companies: in online advertising, consulting, finance, and fashion.
3. Spectrum of (Select) Production Environments
Online Advertising*: Revenue $1M+/year; Near-real time
Consulting: Revenue $3M+/year; ASAP
Fashion: Customer Retention; Near-real time
Finance: Automation; Daily
Example (Fashion, sentiment analysis): "Love, love, love leather/faux leather jackets! Especially the blue!" → Positive sentiment
4. What ML Model Lifecycle Really Looks Like
[Cartoon panels: what customer described; what data looked like; what DS began to build; what budget could buy; what pivot looked like; what code got reused; what got tested [1]; what got pushed to prod; what got documented; what customer wanted]
5. Agenda
The lifecycle panels above map onto five steps:
Step 1: Establish Business Use Case (Appendix)
Step 2: Data QA (Other Talks)
Step 3: ML Development (Other Talks)
Step 4: Pre-Production
Step 5: Production
7. Pre-Production: Pay Down Tech Debt
Technical Debt: borrowing against the future to trade off code quality for speed of delivery now [2], [3]
• Incur debt: write code, including ML pipelines [4]
• Pay down debt: extensively test and refactor the pipeline end-to-end [5]
8. Pre-Production: Pay Down Tech Debt
→ Test 1: Joel’s Test: 12 Steps to Better Code [6]
• Spec?
• Source control? Best practices? [7]
• One-step build? Daily builds? CI?
• Bug database? Release schedule? QA?
• Fix bugs before writing new code?
[8]
9. Pre-Production: Pay Down Tech Debt
Test 1: Joel’s Test: 12 Steps to Better Code … in practice:
Consulting: Daily stand-up; Sprint planning; Version control*; Fix bugs first; Bugs emailed/db; Release on bugfix; One-step build; Atlassian suite; Virtual machines
Online Advertising*: Daily stand-up; Sprint planning; Bug database
Finance: Version control*; Fix bugs first; One-step build; Trello
Fashion: Weekly stand-up; Version control; Fix bugs first; Release on bugfix; One-step build; PRD, Trello; Virtual env, CircleCI, Docker
10. Pre-Production: Pay Down Tech Debt
→ Test 2: ML Test Score [9], [10]
• Data and feature quality
• Model development
• ML infrastructure
• Monitoring ML
[11]
11. Pre-Production: Pay Down Tech Debt
→ Other tips — ML:
• Choose the simplest model appropriate for the task and prod env
• Test the model against (simulated) “ground truth” or a 2nd implementation [12], as sketched after this list
• Evaluate effects of floating point [12]
• Validate the model beyond accuracy [13]
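One way to run the “(simulated) ground truth” test from the list above: generate synthetic data with known parameters and check that the fitted model recovers them. A minimal sketch assuming a scikit-learn linear model; the data, coefficients, and tolerance are illustrative, not from the talk:

    # Sketch: validate a model against simulated "ground truth".
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(42)

    # Simulate data whose true coefficients we control.
    true_coef = np.array([2.0, -1.0, 0.5])
    X = rng.normal(size=(1000, 3))
    y = X @ true_coef + rng.normal(scale=0.01, size=1000)

    model = LinearRegression().fit(X, y)

    # The pipeline should recover the known coefficients within tolerance;
    # a large gap signals a bug, not just low accuracy.
    assert np.allclose(model.coef_, true_coef, atol=0.05)

A second implementation (e.g., a hand-rolled least-squares fit) can be compared on the same data in the same way.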
12. Pre-Production: Pay Down Tech Debt
→ Other tips — Code:
• What is your production environment?
• Set up logging
• Add an else to every if, or try/except + error handling (sketched after this list)
• DRY → refactor
• Add regression tests
• Comment liberally (explain the “why”) and keep comments up to date [20]
[14]
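A minimal sketch of the logging and defensive-branching tips above; score() and the feature schema are hypothetical stand-ins:

    import logging

    logging.basicConfig(level=logging.INFO)
    logger = logging.getLogger(__name__)

    def score(value: float) -> float:
        # Hypothetical stand-in for a real model-scoring call.
        if value < 0:
            raise ValueError("negative price")
        return 0.1 * value

    def predict(features: dict) -> float:
        if "price" in features:
            value = features["price"]
        else:
            # Explicit else: fail loudly instead of silently defaulting.
            logger.error("missing 'price' in features: %s", features)
            raise KeyError("price")
        try:
            return score(value)
        except ValueError:
            # try/except + error: log the full traceback, then re-raise.
            logger.exception("scoring failed for value %r", value)
            raise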
13. Pre-Production: Pay Down Tech Debt
Test 2: ML Test Score … in practice:
– Data and Feature Quality –
Consulting: Minimal time to add new feature; Unsuitable features excluded
Online Advertising* / Fashion: Test input features (typing, pytest), as sketched below
Finance: Minimal time to add new feature; Privacy built-in
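A minimal sketch of the “test input features (typing, pytest)” item above; build_features() and its schema are hypothetical:

    import pandas as pd

    def build_features(raw: pd.DataFrame) -> pd.DataFrame:
        # Hypothetical feature builder: select and cast model inputs.
        return raw[["price", "quantity"]].astype({"price": float, "quantity": int})

    def test_build_features_types_and_ranges():
        raw = pd.DataFrame(
            {"price": ["9.99", "0.50"], "quantity": [1, 2], "note": ["a", "b"]}
        )
        feats = build_features(raw)
        assert list(feats.columns) == ["price", "quantity"]
        assert feats["price"].dtype == float
        assert (feats["price"] >= 0).all()

Run with pytest; the type hints can additionally be checked statically (e.g., with mypy).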
– Model Development –
Simulated ground truth
Baseline model + 2nd implementation
Rolling refresh
Performance overall + for users most likely to click
Proxy + actual metrics
Code review (PR)
Hyperparameter optimization (sklearn.GridSearchCV), as sketched below
Baseline model
Bias correction
Proxy + actual metrics
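The hyperparameter optimization item names scikit-learn’s grid search; a minimal sketch with an illustrative dataset, model, and grid (none of these are the talk’s actual setup):

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import GridSearchCV

    X, y = make_classification(n_samples=500, n_features=10, random_state=0)

    search = GridSearchCV(
        LogisticRegression(max_iter=1000),
        param_grid={"C": [0.01, 0.1, 1.0, 10.0]},
        cv=5,
        scoring="roc_auc",  # score on AUC, not plain accuracy
    )
    search.fit(X, y)
    print(search.best_params_, search.best_score_)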
14. Pre-Production: Pay Down Tech Debt
Test 2: ML Test Score … in practice (cont’d):
– ML Infrastructure –
Consulting: Loosely coupled fcns; Central repo for clients; Regression testing; One-step build, prod
Online Advertising*: Streaming
Fashion: Loosely coupled fcns; Streaming API (sanic); Integration test (pytest); One-step build, prod
Finance: Loosely coupled fcns; Streaming; One-step build, prod*; Reproducibility of training
– ML Monitoring –
Logging; Software + package versions check; Data availability check; Logging (logging); Evaluates empty + factual responses; Local = prod env (virtualenv, Docker); Comp time (timeit); Missing data check (two of these checks are sketched below)
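A minimal sketch of two of the monitoring checks above (package-version check and missing-data check); the pinned version, threshold, and data are illustrative assumptions:

    import logging
    import pandas as pd
    import sklearn

    logging.basicConfig(level=logging.INFO)
    logger = logging.getLogger("monitoring")

    EXPECTED_SKLEARN = "1.4.2"  # hypothetical pinned version

    def check_versions() -> None:
        # Warn when the deployed package drifts from the pinned version.
        if sklearn.__version__ != EXPECTED_SKLEARN:
            logger.warning("sklearn %s != pinned %s", sklearn.__version__, EXPECTED_SKLEARN)

    def check_missing(df: pd.DataFrame, max_missing_frac: float = 0.05) -> None:
        # Flag columns whose missing-data fraction exceeds the threshold.
        frac = df.isna().mean()
        bad = frac[frac > max_missing_frac]
        if not bad.empty:
            logger.error("columns above missing-data threshold:\n%s", bad)

    check_versions()
    check_missing(pd.DataFrame({"price": [1.0, None, 3.0], "qty": [1, 2, 3]}))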
15. Pre-Production: Pay Down Tech Debt
Test 2: ML Test Score … in practice (cont’d)
19. Post-Production: Keep Code Up and Running
→ Documentation + QA get cut first
→ Debugging, debugging, debugging → code is never perfect
→ Bugfix vs. feature
→ Post-mortems
→ Use the product
→ Support and training
[16]
21. Key Takeaways
→ Communication, tooling, logging, documentation, debugging
→ Automatically evaluate all components of the ML pipeline
→ High model accuracy is not always the answer
→ Scope down, then scale up
[18]
26. Appendix: Establish Business Use Case
→ Kick-off meeting with stakeholders:
• Discuss the use case, motivation, and scope
• Find out the format of the deliverable and how the team will use it
• Brainstorm and discuss potential solutions
• Iron out deadlines, checkpoints, and the ongoing support structure
• Ask about the prod env (if appropriate)
• Scope down, then scale up
• Close the meeting with a recap of action items
Key Takeaways: communication + clear expectations
[19]