The document discusses the steps involved in productionalizing machine learning models, from development through deployment and monitoring. It covers pre-production activities such as testing the model and code and paying down technical debt. Production involves deploying the code and monitoring performance. Post-production focuses on documentation, debugging, and keeping business and team goals aligned. Key takeaways are the importance of communication, the need to test all parts of the ML pipeline, and that high model accuracy alone is not sufficient.
3. Spectrum of (Select) Production Environments
Example input: “Love, love, love leather/faux leather jackets! Especially the blue!” → Positive sentiment

Industry              Motivation           Latency requirement
Fashion               Customer retention   Near-real time
Consulting            Revenue $3M+/year    ASAP
Online Advertising*   Revenue $1M+/year    Near-real time
Finance               Automation           Daily
4. What ML Model Lifecycle Really Looks Like
what customer described → what data looked like → what DS began to build → what budget could buy → what pivot looked like → what code got reused → what got tested [1] → what got pushed to prod → what got documented → what customer wanted
5. Agenda
Mapping the lifecycle onto steps:
• Step 1: Establish business use case (Appendix): what customer described, what customer wanted
• Step 2: Data QA (Other Talks): what data looked like
• Step 3: ML Development (Other Talks): what DS began to build, what budget could buy, what pivot looked like, what code got reused
• Step 4: Pre-prod: what got tested
• Step 5: Prod: what got pushed to prod, what got documented
[1]
7. Pre-Production: Pay Down Tech Debt
Technical debt: borrowing against the future to trade off code quality against speed of delivery now [2], [3]
• Incur debt: write code, including ML pipelines [4], [21]
• Pay down debt: extensively test and refactor the pipeline end-to-end [5]
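One way to start paying down that debt is an end-to-end regression test over the whole pipeline. The sketch below is illustrative only: `run_pipeline` is a hypothetical stand-in for a real pipeline, and the cleaning/scoring rules are made up.

```python
import math

def run_pipeline(records):
    """Stand-in pipeline: drop records with missing amounts, then score."""
    cleaned = [r for r in records if r.get("amount") is not None]
    return [min(1.0, r["amount"] / 100.0) for r in cleaned]

def test_pipeline_end_to_end():
    records = [{"amount": 50.0}, {"amount": None}, {"amount": 250.0}]
    scores = run_pipeline(records)
    assert len(scores) == 2                      # null record dropped
    assert all(0.0 <= s <= 1.0 for s in scores)  # scores stay in range
    assert math.isclose(scores[0], 0.5)          # known input, known output

test_pipeline_end_to_end()
```

The point is that the test exercises the full path from raw records to scores, so a refactor that changes behavior anywhere in the chain fails loudly.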
8. Pre-Production: Pay Down Tech Debt
→ Test 1: Joel’s Test: 12 Steps to Better Code [6]
• Spec?
• Source control? Best practices? [7]
• One-step build? Daily builds? CI?
• Bug database? Release schedule? QA?
• Fix bugs before writing new code?
[8]
9. Pre-Production: Pay Down Tech Debt
Test 1: Joel’s Test: 12 Steps to Better Code … in practice:

                 Consulting          Fashion                 Online Advertising*   Finance
Stand-up         Daily stand-up      Weekly stand-up         Daily stand-up        —
Planning         Sprint planning     —                       Sprint planning       —
Source control   Version control*    Version control         —                     Version control*
Bug priority     Fix bugs first      Fix bugs first          —                     Fix bugs first
Bug tracking     Bugs emailed/db     —                       Bug database          —
Releases         Release on bugfix   Release on bugfix       —                     —
Build            One-step build      One-step build          —                     One-step build
Tooling          Atlassian suite     PRD, Trello             —                     Trello
Environment      Virtual machines    Virtual env, Docker,    —                     —
                                     CircleCI, cookiecutter
11. Pre-Production: Pay Down Tech Debt
→ Test 2: ML Test Score [9], [10]
• Data and feature quality
• Model development
• ML infrastructure
• Monitoring ML
[11]
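The first ML Test Score category, data and feature quality, lends itself to automation. A minimal sketch of such checks is below; the column names (`age`, `click_rate`) and thresholds are made up for illustration, not taken from the rubric.

```python
import pandas as pd

def check_feature_quality(df: pd.DataFrame) -> list[str]:
    """Return a list of human-readable data-quality violations."""
    problems = []
    # missing-value check: flag a column with too many nulls
    if df["age"].isna().mean() > 0.05:
        problems.append("age: more than 5% missing values")
    # range check: a rate must lie in [0, 1]
    if not df["click_rate"].between(0.0, 1.0).all():
        problems.append("click_rate: values outside [0, 1]")
    # duplicate-row check
    if df.duplicated().any():
        problems.append("duplicate rows found")
    return problems

df = pd.DataFrame({"age": [34, 52, 29], "click_rate": [0.1, 0.9, 0.4]})
print(check_feature_quality(df))  # → []
```

Checks like these can run in CI (e.g. under pytest) so that a bad data refresh fails the build rather than silently degrading the model.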
12. Pre-Production: Pay Down Tech Debt
→ Other tips – ML:
• Choose the simplest model appropriate for the task and prod env
• Test the model against (simulated) “ground truth” or a 2nd implementation [12]
• Evaluate the effects of floating-point arithmetic [12]
• Model validation beyond AUC [13]
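Two of the tips above, testing against simulated ground truth / a second implementation and evaluating floating-point effects, can be combined in one check. A minimal sketch, using two independent least-squares solvers as the "model" and its second implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
true_w = np.array([0.5, -1.0, 2.0])
y = X @ true_w  # simulated ground truth, noise-free by construction

# Implementation 1: normal equations
w1 = np.linalg.solve(X.T @ X, X.T @ y)
# Implementation 2: least squares via SVD
w2, *_ = np.linalg.lstsq(X, y, rcond=None)

# Compare with a tolerance, not exact equality: the two solvers
# accumulate different floating-point rounding errors.
assert np.allclose(w1, w2, atol=1e-8)
assert np.allclose(w1, true_w, atol=1e-8)
```

Because the ground truth is simulated, the correct answer is known exactly, and any disagreement beyond the tolerance points to a bug or to numerically unstable code.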
13. Pre-Production: Pay Down Tech Debt
→ Other tips – Code:
• What is the production environment?
• Set up logging
• Add an else to every if, and try/except with an error message
• DRY → refactor
• Add regression tests
• Comment liberally (explain the “why”) and keep comments up to date [20]
• Lint
[14]
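The logging and error-handling tips above can be sketched together. The feature names and fallback values here are illustrative, not from the talk:

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

def encode_channel(channel: str) -> int:
    mapping = {"web": 0, "mobile": 1}
    if channel in mapping:
        return mapping[channel]
    else:
        # explicit else + logged error instead of silently falling through
        logger.error("unknown channel %r, defaulting to -1", channel)
        return -1

def parse_amount(raw: str) -> float:
    try:
        return float(raw)
    except ValueError:
        # try/except that logs the error rather than swallowing it
        logger.error("could not parse amount %r", raw)
        return 0.0

print(encode_channel("mobile"), parse_amount("not-a-number"))  # → 1 0.0
```

In production the logged errors become the paper trail for debugging; the explicit else and except branches make the failure modes visible instead of implicit.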
14. Pre-Production: Pay Down Tech Debt
Test 2: ML Test Score … in practice:

– Data and Feature Quality –
• Consulting: minimal time to add a new feature; unsuitable features excluded
• Fashion: test input features (typing, pytest)
• Finance: minimal time to add a new feature; privacy built-in

– Model Development – (across Consulting, Online Advertising*, Fashion, Finance)
• Baseline model; bias correction; proxy + actual metrics
• Simulated ground truth; baseline model + 2nd implementation
• Rolling refresh; performance overall + on those most likely to click; proxy + actual metrics
• Code review (PR); hyperparameter optimization (sklearn.GridSearchCV)
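The table's sklearn.GridSearchCV entry refers to scikit-learn's exhaustive hyperparameter search. A minimal sketch with toy data and a made-up parameter grid:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# Toy classification data stands in for real features/labels.
X, y = make_classification(n_samples=200, n_features=5, random_state=0)

grid = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={"C": [0.01, 0.1, 1.0, 10.0]},
    cv=5,
    scoring="roc_auc",  # proxy metric; track the actual business metric too
)
grid.fit(X, y)
print(grid.best_params_)
```

Fixing the grid, the cross-validation scheme, and the scoring metric in code makes the tuning step reproducible and reviewable in a PR, in line with the code-review entry in the same column.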
15. Pre-Production: Pay Down Tech Debt
Test 2: ML Test Score … in practice:

– ML Infrastructure –
• Consulting: loosely coupled fcns; central repo for clients; regression testing; one-step build, prod
• Online Advertising*: streaming
• Fashion: loosely coupled fcns; streaming API (sanic); integration test (pytest); one-step build, prod
• Finance: loosely coupled fcns; streaming; one-step build, prod*; reproducibility of training

– ML Monitoring –
• Missing data check; data availability check
• Logging (logging)
• Software + package versions check
• Evaluates empty + factual responses
• Local = prod env (virtualenv, Docker)
• Comp time (timeit)
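A few of the monitoring entries (data availability, missing-data check, logging, computation time) can be sketched in one lightweight check. The field name `price` and the 10% threshold are made up for illustration:

```python
import logging
import timeit

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("monitoring")

def monitor_batch(rows: list[dict]) -> bool:
    """Return True if the batch passes the monitoring checks."""
    if not rows:  # data availability check
        logger.error("no data received for this batch")
        return False
    missing = sum(1 for r in rows if r.get("price") is None)
    if missing / len(rows) > 0.1:  # missing-data check
        logger.error("%.0f%% of rows missing 'price'", 100 * missing / len(rows))
        return False
    return True

rows = [{"price": 9.99}, {"price": 4.50}]
# comp-time check in the spirit of the table's timeit entry
elapsed = timeit.timeit(lambda: monitor_batch(rows), number=100)
logger.info("100 checks took %.4fs", elapsed)
print(monitor_batch(rows))  # → True
```

Routing failures through the standard logging module means they show up in the same logs used for debugging, rather than only as a silent drop in model performance.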
16. Pre-Production: Pay Down Tech Debt
Test 2: ML Test Score … in practice (cont’d):
20. Post-Production: Keep Code Up and Running
→ Documentation + QA
→ Debugging, debugging, debugging
→ Bugfix vs. feature
→ Container management
→ Post-mortem
→ Use the product
→ Support and training
[16]
22. Key Takeaways
→ Communication, tooling, logging, documentation, debugging
→ Automatically evaluate all components of the ML pipeline
→ High model AUC is not always the answer
→ Scope down, then scale up
[18]
27. Appendix: Establish Business Use Case
→ Kick-off meeting with stakeholders:
• Discuss the use case, motivation, and scope
• Brainstorm and discuss potential solutions
• Agree on the format of the deliverable and the prod env (if appropriate)
• Iron out deadlines, checkpoints, and the ongoing support structure
• Scope down, then scale up
• Close the meeting with a recap of action items
Key takeaway: communication + clear expectations
[19]