Using the latest advancements from TensorFlow, including the Accelerated Linear Algebra (XLA) framework, the JIT/AOT compilers, and the Graph Transform Tool, Chris will demonstrate how to optimize, profile, and deploy TensorFlow models in a GPU-based production environment. This talk is 100% demo-based, uses open source tools, and is completely reproducible through Docker on your own GPU cluster.
https://github.com/fluxcapacitor/pipeline/gpu.ml
http://pipeline.io
3. INTRODUCTIONS: ME
§ Chris Fregly, Research Engineer @ PipelineIO in San Francisco
§ Formerly Netflix and Databricks
§ Advanced Spark and TensorFlow Meetup (Global)
Please Join Our 7,000 Members!!
4. INTRODUCTIONS: YOU
§ Software Engineer or Data Scientist interested in optimizing
and deploying TensorFlow models to production
§ Assume you have a working knowledge of TensorFlow
6. CONTENT BREAKDOWN
§ 50% Training Optimizations (TensorFlow, XLA, Tools)
§ 50% Deployment and Inference Optimizations (Serving)
§ Why Heavy Focus on Inference?
§ Training: boring batch, O(num_researchers)
§ Inference: exciting realtime, O(num_users_of_app)
§ We Use Simple Models to Highlight Optimizations
§ Warning: This is not introductory TensorFlow material!
7. 100% OPEN SOURCE CODE
§ https://github.com/fluxcapacitor/pipeline/
§ Please Star this Repo!
§ Slides, code, notebooks, and Docker images available here:
https://github.com/fluxcapacitor/pipeline/gpu.ml
8. HANDS-ON EXERCISES
§ Combo of Jupyter Notebooks and Command Line
§ Command Line through Jupyter Terminal
§ Some Exercises Based on Experimental Features
In Other Words, Some Code May Be Wonky!
9. YOU WILL LEARN…
§ TensorFlow Best Practices
§ To Inspect and Debug Models
§ To Distribute Training Across a Cluster
§ To Optimize Training with Queue Feeders
§ To Optimize Training with XLA JIT Compiler
§ To Optimize Inference with AOT and Graph Transform Tool (GTT)
§ Key Components of TensorFlow Serving
§ To Deploy Models with TensorFlow Serving
§ To Optimize Inference by Tuning TensorFlow Serving
10. AGENDA
§ Setup Environment
§ Train and Debug TensorFlow Model
§ Train with Distributed TensorFlow Cluster
§ Optimize Model with XLA JIT Compiler
§ Optimize Model with XLA AOT and Graph Transforms
§ Deploy Model to TensorFlow Serving Runtime
§ Optimize TensorFlow Serving Runtime
§ Wrap-up and Q&A
12. SETUP ENVIRONMENT
§ Step 1: Browse to the following:
http://[REDACTED]
§ Step 2: Browse to the following:
http://<ip-address>
Need Help?
Use the Chat!
16. GPU HALF-PRECISION SUPPORT
§ New Pascal P100: FP16, INT8
§ Flexible FP32 GPU Cores
§ Process 2x FP16 on 2-element vectors simultaneously
§ Half-precision is OK for Approximate Deep Learning!
17. GPU CUDA PROGRAMMING
§ Barbaric At Best!
§ CUDA Developers Must Be Very Aware of Hardware Specs
§ As Such, There are Many Great Debuggers/Profilers
Warning: Nvidia Can Change Hardware Specs Anytime
19. LET’S SEE WHAT THIS THING CAN DO!
§ Navigate to the following notebook:
01_Explore_GPU
§ https://github.com/fluxcapacitor/pipeline/gpu.ml/notebooks/
22. AGENDA
§ Setup Environment
§ Train and Debug TensorFlow Model
§ Train with Distributed TensorFlow Cluster
§ Optimize Model with XLA JIT Compiler
§ Optimize Model with XLA AOT and Graph Transforms
§ Deploy Model to TensorFlow Serving Runtime
§ Optimize TensorFlow Serving Runtime
§ Wrap-up and Q&A
23. TRAINING TERMINOLOGY
§ Tensors: N-Dimensional Arrays
§ ie. Scalar, Vector, Matrix
§ Operations: MatMul, Add, SummaryLog,…
§ Graph: Graph of Operations (DAG)
§ Session: Contains Graph(s)
§ Feeds: Feed inputs into Operation
§ Fetches: Fetch output from Operation
§ Variables: What we learn through training
§ aka “weights”, “parameters”
§ Devices: Hardware device on which we train
[Diagram: User Feeds Inputs → TensorFlow Performs Operations → TensorFlow Flows Tensors → TensorFlow Trains Variables → User Fetches Outputs]
with tf.device("/job:worker/task:0/gpu:0"):  # pin ops to a specific device on a specific worker
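To make the Feeds/Fetches terminology concrete, here is a minimal sketch (tensor names are illustrative):

import tensorflow as tf

x = tf.placeholder(tf.float32, name='x')   # fed by the user
y = tf.multiply(x, 2.0, name='y')          # operation in the graph

with tf.Session() as sess:
    result = sess.run(y, feed_dict={x: 3.0})  # feed 'x', fetch 'y'
    print(result)  # 6.0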
24. TRAINING DEVICES
§ cpu:0
§ By default, all CPUs
§ Requires extra config to target a specific CPU
§ gpu:0..n
§ Each GPU has a unique id
§ TF usually prefers a single GPU
§ xla_cpu:0, xla_gpu:0..n
§ “JIT Compiler Device”
§ Hints TF to attempt JIT Compile
with tf.device("/cpu:0"):
with tf.device("/gpu:0"):
with tf.device("/gpu:1"):
25. TRAINING METRICS: TENSORBOARD
§ Summary Ops
§ Event Files
/root/tensorboard/linear/<version>/events…
§ Tags
§ Organize data within the TensorBoard UI
loss_summary_op = tf.summary.scalar('loss', loss)
merge_all_summary_op = tf.summary.merge_all()
summary_writer = tf.summary.FileWriter(
    '/root/tensorboard/linear/<version>', graph=sess.graph)
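In the training loop, the merged summary op is then evaluated and written periodically. A minimal sketch, assuming train_op and step already exist:

# Inside the training loop (illustrative):
summary, _ = sess.run([merge_all_summary_op, train_op])
summary_writer.add_summary(summary, global_step=step)
summary_writer.flush()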
26. TRAINING ON EXISTING INFRASTRUCTURE
§ Data Processing
§ HDFS/Hadoop
§ Spark
§ Containers
§ Docker
§ Schedulers
§ Kubernetes
§ Mesos
<dependency>
<groupId>org.tensorflow</groupId>
<artifactId>tensorflow-hadoop</artifactId>
<version>1.0-SNAPSHOT</version>
</dependency>
https://github.com/tensorflow/ecosystem
27. FEED TRAINING DATA WITH QUEUES
§ Don’t Use feed_dict for Production Workloads!!
§ feed_dict Requires C++ <-> Python Serialization
§ Batch Retrieval is Single-threaded, Synchronous, SLOW!
§ Next batch can’t be retrieved until current batch is processed
§ CPUs and GPUs are Not Fully Utilized
§ Use Queues to Read and Pre-Process Data for GPUs!
§ Queues use CPUs for I/O, pre-processing, shuffling, …
§ GPUs stay saturated and focused on cool GPU stuff!!
28. DATA MOVEMENT WITH QUEUES
§ Queue Pulls Batch from Source (ie HDFS, Kafka)
§ Queue pre-processes and shuffles data
§ Queue should use only CPUs - no GPUs
§ Combine many small files into a few large TFRecord files
§ GPU Pulls Batch from Queue (CUDA Streams)
§ GPU pulls next batch while processing current batch
§ GPU Processes Next Batch Immediately
§ GPUs are fully utilized!
29. QUEUE CAPACITY PLANNING
§ batch_size
§ # of examples per batch (ie. 64 jpg)
§ Limited by GPU RAM
§ num_processing_threads
§ CPU threads pull and pre-process batches of data
§ Limited by CPU Cores
§ queue_capacity
§ Limited by CPU RAM (ie. 5 * batch_size)
Saturate those GPUs!
GPU pulls batches while processing the current batch
Async memory transfer with CUDA Streams
-- Thanks, Nvidia!! --
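Putting these three knobs together, a minimal TF 1.x queue-based input pipeline might look like this (a sketch; the HDFS path, feature keys, and sizes are hypothetical):

import tensorflow as tf

filename_queue = tf.train.string_input_producer(
    ['hdfs://namenode:8020/data/train.tfrecord'])

reader = tf.TFRecordReader()
_, serialized = reader.read(filename_queue)
features = tf.parse_single_example(serialized, features={
    'image': tf.FixedLenFeature([784], tf.float32),
    'label': tf.FixedLenFeature([], tf.int64),
})

# CPU threads pull, shuffle, and batch so the GPU never starves
image_batch, label_batch = tf.train.shuffle_batch(
    [features['image'], features['label']],
    batch_size=64,
    num_threads=4,            # num_processing_threads
    capacity=5 * 64,          # queue_capacity ~= 5 * batch_size
    min_after_dequeue=64)

# In the session, start the queue-runner threads before training:
# coord = tf.train.Coordinator()
# threads = tf.train.start_queue_runners(sess, coord=coord)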
30. DETECT UNDERUTILIZED CPUS, GPUS
§ Instrument training code to generate “timelines”
§ Analyze with Google Web Tracing Framework (WTF)
§ Monitor CPU with `top`, GPU with `nvidia-smi`
http://google.github.io/tracing-framework/
from tensorflow.python.client import timeline

# Run one step with full tracing to populate run_metadata
# (train_op is the step being profiled)
run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
run_metadata = tf.RunMetadata()
sess.run(train_op, options=run_options, run_metadata=run_metadata)

trace = timeline.Timeline(step_stats=run_metadata.step_stats)
with open('timeline.json', 'w') as trace_file:
    trace_file.write(
        trace.generate_chrome_trace_format(show_memory=True))
31. LET’S FEED A QUEUE FROM HDFS
§ Navigate to the following notebook:
02_Feed_Queue_HDFS
§ https://github.com/fluxcapacitor/pipeline/gpu.ml/notebooks/
34. TENSORFLOW MODEL
§ GraphDef
§ Architecture of your model (nodes, edges)
§ Metadata
§ Asset: Accompanying assets to your model
§ SignatureDef: Maps external to internal graph nodes for TF Serving
§ MetaGraph
§ Combines GraphDef and Metadata
§ Variables
§ Stored separately (checkpointed) during training
§ Allows training to continue from any checkpoint
§ Values are “frozen” into Constants when deployed for inference
[Diagram: GraphDef (x, W → mul; mul, b → add) plus Metadata (Assets, SignatureDef, Tags, Version) = MetaGraph]
42. AGENDA
§ Setup Environment
§ Train and Debug TensorFlow Model
§ Train with Distributed TensorFlow Cluster
§ Optimize Model with XLA JIT Compiler
§ Optimize Model with XLA AOT and Graph Transforms
§ Deploy Model to TensorFlow Serving Runtime
§ Optimize TensorFlow Serving Runtime
§ Wrap-up and Q&A
43. MULTI-GPU TRAINING (SINGLE NODE)
§ Variables stored on CPU (cpu:0)
§ Model graph (aka “replica”, “tower”) is copied to each GPU (gpu:0, gpu:1, …)
Multi-GPU Training Steps:
1. CPU transfers model to each GPU
2. CPU waits on all GPUs to finish batch
3. CPU copies all gradients back from all GPUs
4. CPU synchronizes and averages all gradients from GPUs
5. CPU updates GPUs with new variables/weights
6. Repeat Step 1 until reaching stop condition (ie. max_epochs)
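A simplified sketch of this tower pattern (model_fn, next_batch, and average_gradients are placeholders; TensorFlow's CIFAR-10 multi-GPU example shows a full implementation, including variable sharing across towers, which is omitted here for brevity):

with tf.device('/cpu:0'):                       # variables live on the CPU
    opt = tf.train.GradientDescentOptimizer(0.01)
    tower_grads = []
    for i in range(num_gpus):
        with tf.device('/gpu:%d' % i):          # one replica ("tower") per GPU
            loss = model_fn(next_batch())
            tower_grads.append(opt.compute_gradients(loss))
    # CPU averages gradients across towers, then applies one update
    train_op = opt.apply_gradients(average_gradients(tower_grads))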
44. DISTRIBUTED, MULTI-NODE TRAINING
§ TensorFlow Automatically Inserts Send and Receive Ops into Graph
§ Parameter Server Synchronously Aggregates Updates to Variables
§ Nodes with Multiple GPUs will Pre-Aggregate Before Sending to PS
[Diagram: example cluster layouts — one, two, or three Workers, each with one to four GPUs (gpu0..gpu3)]
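A minimal sketch of defining such a cluster in TF 1.x (host names, ports, and task indices are hypothetical):

cluster = tf.train.ClusterSpec({
    'ps':     ['ps0.example.com:2222'],
    'worker': ['worker0.example.com:2222', 'worker1.example.com:2222'],
})
server = tf.train.Server(cluster, job_name='worker', task_index=0)

# replica_device_setter places variables on the PS and ops on this worker
with tf.device(tf.train.replica_device_setter(
        worker_device='/job:worker/task:0', cluster=cluster)):
    # ... build the model graph here ...
    pass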
45. SYNCHRONOUS VS. ASYNCHRONOUS
§ Synchronous
§ Worker (“graph replica”, “tower”)
§ Reads same variables from Parameter Server in parallel
§ Computes gradients for variables using its partition of data
§ Sends gradients to central Parameter Server
§ Parameter Server
§ Aggregates (avg) gradients for each variable based on its portion of data
§ Applies gradients (+, -) to each variable
§ Broadcasts updated variables to each node in parallel
§ ^^ Repeat ^^
§ Asynchronous
§ Each node computes gradients independently
§ Reads stale values; does not synchronize with other nodes
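For the synchronous case, TF 1.x ships a wrapper optimizer that aggregates gradients across replicas before applying them. A sketch, assuming num_workers, loss, and global_step are defined:

opt = tf.train.GradientDescentOptimizer(0.01)
sync_opt = tf.train.SyncReplicasOptimizer(
    opt,
    replicas_to_aggregate=num_workers,   # wait for gradients from all workers
    total_num_replicas=num_workers)
train_op = sync_opt.minimize(loss, global_step=global_step)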
46. DATA PARALLEL VS MODEL PARALLEL
§ Data Parallel
§ Send exact same model to each worker/device
§ Each worker/device operates on its partition of data, same model
§ Similar to how Spark sends the same function to different workers
§ Each Spark worker operates on its partition of the data
§ Model Parallel
§ Send different partition of model to each worker/device
§ Each worker/device operates on all data, its partition of model
Very Difficult, But Necessary for Very Large Models That Exceed Single-Device RAM
47. LET’S TRAIN A DISTRIBUTED MODEL
§ Navigate to the following notebook:
06_Train_Distributed_Model
§ https://github.com/fluxcapacitor/pipeline/gpu.ml/notebooks/
48. AGENDA
§ Setup Environment
§ Train and Debug TensorFlow Model
§ Train with Distributed TensorFlow Cluster
§ Optimize Model with XLA JIT Compiler
§ Optimize Model with XLA AOT and Graph Transforms
§ Deploy Model to TensorFlow Serving Runtime
§ Optimize TensorFlow Serving Runtime
§ Wrap-up and Q&A
50. XLA HIGH LEVEL OPTIMIZER (HLO)
§ Define Graphs using HLO Language
§ Compiler Intermediate Representation (IR)
§ Independent of source and target language
§ XLA Emits Target-Independent, Optimized HLO
§ Backend Emits Target-Dependent, Optimized LLVM IR
§ LLVM Emits Native Code for Target
§ x86-64 and ARM64 CPU Instruction Sets
§ Nvidia GPU NVPTX
51. JIT COMPILER
§ Just-In-Time Compiler
§ Built on XLA Framework
§ Goals:
§ Reduce memory movement – especially useful on GPUs
§ Reduce overhead of multiple function calls
§ Similar to operator fusing in Spark 2.0
§ Unroll Loops, Fuse Operators, Fold Constants, …
§ Enable session-wide or `with jit_scope():`
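A sketch of both ways to enable JIT compilation in TF 1.x (x, w, and b are assumed to be defined elsewhere):

import tensorflow as tf

# Option 1: session-wide JIT
config = tf.ConfigProto()
config.graph_options.optimizer_options.global_jit_level = \
    tf.OptimizerOptions.ON_1
sess = tf.Session(config=config)

# Option 2: JIT only within a scope
jit_scope = tf.contrib.compiler.jit.experimental_jit_scope
with jit_scope():
    y = tf.matmul(x, w) + b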
52. VISUALIZING JIT COMPILER IN ACTION
[Screenshots: op timelines before and after enabling the JIT compiler]
Google Web Tracing Framework:
http://google.github.io/tracing-framework/
from tensorflow.python.client import timeline

trace = timeline.Timeline(step_stats=run_metadata.step_stats)
with open('timeline.json', 'w') as trace_file:
    trace_file.write(
        trace.generate_chrome_trace_format(show_memory=True))
54. LET’S TRAIN A MODEL (XLA + JIT)
§ Navigate to the following notebook:
07_Train_Model_XLA_JIT
§ https://github.com/fluxcapacitor/pipeline/gpu.ml/notebooks/
55. IT’S WORTH HIGHLIGHTING…
§ From now on, we will optimize models for inference only
§ In other words,
No More Training Optimizations!
58. AGENDA
§ Setup Environment
§ Train and Debug TensorFlow Model
§ Train with Distributed TensorFlow Cluster
§ Optimize Model with XLA JIT Compiler
§ Optimize Model with XLA AOT and Graph Transforms
§ Deploy Model to TensorFlow Serving Runtime
§ Optimize TensorFlow Serving Runtime
§ Wrap-up and Q&A
59. AOT COMPILER
§ Standalone, Ahead-Of-Time (AOT) Compiler
§ `tfcompile`
§ Built on XLA framework
§ Creates executable with minimal TensorFlow Runtime
§ Creates functions with feeds (input) and fetches (output)
§ Includes only dependencies needed by subgraph computation
§ Packaged as `cc_library` with header and object file
§ Commonly used on inference graph for mobile devices
§ CPU x86-64 and ARM only – currently, no GPU support
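`tfcompile` is typically driven through Bazel. A sketch of what a BUILD target might look like, assuming the tf_library macro from tensorflow/compiler/aot (file names and class name are hypothetical):

# BUILD file (Bazel)
load("//tensorflow/compiler/aot:tfcompile.bzl", "tf_library")

tf_library(
    name = "my_graph_aot",
    graph = "my_graph.pb",              # frozen GraphDef
    config = "my_graph.config.pbtxt",   # declares feeds and fetches
    cpp_class = "mynamespace::MyGraph", # generated C++ class name
)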
61. WEIGHT QUANTIZATION FOR INFERENCE
§ FP16 and INT8 have much lower precision than FP32
§ Weights are known at inference time
§ Linear quantization
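A minimal sketch of linear (affine) weight quantization to INT8, in plain NumPy for illustration:

import numpy as np

def quantize_linear(w, num_bits=8):
    # Map [w_min, w_max] linearly onto integers [0, 2^num_bits - 1]
    w_min, w_max = float(w.min()), float(w.max())
    scale = (w_max - w_min) / (2 ** num_bits - 1)
    q = np.round((w - w_min) / scale).astype(np.uint8)
    return q, scale, w_min

def dequantize_linear(q, scale, w_min):
    # Recover approximate FP32 weights at inference time
    return q.astype(np.float32) * scale + w_min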
62. ACTIVATION QUANTIZATION
§ Activations are not known without forward propagation
§ Depends on input during inference
§ Requires “calibration” from representative dataset
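A sketch of what calibration looks like (activation_tensor, x, and calibration_batches are hypothetical names):

# Record the observed activation range over a representative dataset;
# the min/max then parameterize the same linear quantization as above
act_min, act_max = float('inf'), float('-inf')
for batch in calibration_batches:
    a = sess.run(activation_tensor, feed_dict={x: batch})
    act_min = min(act_min, float(a.min()))
    act_max = max(act_max, float(a.max()))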
64. FUSED BATCH NORMALIZATION
§ What is Batch Normalization?
§ Each batch of data may have wildly different distributions
§ Normalize per batch (and layer)
§ Speeds up training dramatically
§ Weights are learned quicker
§ Final model is more accurate
Always Use Batch Normalization!
§ GTT Fuses Final Mean and Variance MatMul into Graph
z = tf.matmul(a_prev, W)
a = tf.nn.relu(z)
a_mean, a_var = tf.nn.moments(a, [0])
scale = tf.Variable(tf.ones([depth]))   # depth = number of channels
beta = tf.Variable(tf.zeros([depth]))
bn = tf.nn.batch_normalization(a, a_mean, a_var, beta, scale, 0.001)
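For reference, a GTT invocation that folds batch norms might look like this (a sketch; graph file names and input/output node names are hypothetical, and the transform_graph binary is built from tensorflow/tools/graph_transforms):

transform_graph \
  --in_graph=frozen_model.pb \
  --out_graph=optimized_model.pb \
  --inputs='x' \
  --outputs='y' \
  --transforms='fold_constants fold_batch_norms quantize_weights'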
65. LET’S OPTIMIZE MODEL WITH GTT
§ Navigate to the following notebook:
08_Optimize_Model_CPU
§ https://github.com/fluxcapacitor/pipeline/gpu.ml/notebooks/
66. LET’S OPTIMIZE MODEL WITH GTT
§ Navigate to the following notebook:
09_Optimize_Model_GPU
§ https://github.com/fluxcapacitor/pipeline/gpu.ml/notebooks/
69. AGENDA
§ Setup Environment
§ Train and Debug TensorFlow Model
§ Train with Distributed TensorFlow Cluster
§ Optimize Model with XLA JIT Compiler
§ Optimize Model with XLA AOT and Graph Transforms
§ Deploy Model to TensorFlow Serving Runtime
§ Optimize TensorFlow Serving Runtime
§ Wrap-up and Q&A
70. MODEL SERVING TERMINOLOGY
§ Inference
§ Only Forward Propagation through Network
§ Predict, Classify, Regress, …
§ Bundle
§ GraphDef, Variables, Metadata, …
§ Assets
§ ie. Map of ClassificationID -> String
§ {9283: “penguin”, 9284: “bridge”, …}
§ Version
§ Every Model Has a Version Number (Integers Only?!)
§ Version Policy
§ ie. Serve Only Latest (Highest), Serve both Latest and Previous, …
71. TENSORFLOW SERVING FEATURES
§ Low-latency or High-throughput Tuning
§ Supports Auto-Scaling
§ Different Models/Versions Served in Same Process
§ Custom Loaders beyond File-based
§ Custom Serving Models beyond HashMap and TensorFlow
§ Custom Version Policies for A/B and Bandit Tests
§ Drain Requests for Graceful Model Shutdown or Update
§ Extensible Request Batching Strategies for Different Use Cases and Hardware
§ Uses Highly-Efficient gRPC and Protocol Buffers
72. PREDICTION SERVICE
§ Predict (Original, Generic)
§ Input: List of Tensors
§ Output: List of Tensors
§ Classify
§ Input: List of `tf.Example` (key, value) pairs
§ Output: List of (class_label: String, score: float)
§ Regress
§ Input: List of `tf.Example` (key, value) pairs
§ Output: List of (label: String, score: float)
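A minimal Predict client sketch using the 2017-era gRPC beta API (host, port, model name, and input/output keys are hypothetical):

import tensorflow as tf
from grpc.beta import implementations
from tensorflow_serving.apis import predict_pb2, prediction_service_pb2

channel = implementations.insecure_channel('localhost', 9000)
stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)

request = predict_pb2.PredictRequest()
request.model_spec.name = 'linear'
request.inputs['x'].CopyFrom(
    tf.contrib.util.make_tensor_proto([1.0], shape=[1]))

response = stub.Predict(request, 10.0)  # 10-second timeout
print(response.outputs['y'])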
74. MULTI-HEADED INFERENCE
§ Inputs Propagated Forward Only Once
§ Multiple “Heads” of Model
§ Optimizes Bandwidth, CPU, Latency, Memory, Coolness
75. BUILD YOUR OWN MODEL SERVER (?!)
§ Overcome HTTP <-> gRPC limitation
§ Control Latency Requirements
§ Perform Batch Inference vs. Request/Response
§ Asynchronous Request Handling
§ Custom Request Batching
§ Wrap Circuit Breakers, Fallbacks
§ Mobile Inference Use Cases
§ Reduce Number of Moving Parts
#include "tensorflow_serving/model_servers/server_core.h"
…
class MyTensorFlowModelServer {
  void Start() {
    ServerCore::Options options;
    // set options (model name, path, etc.)
    std::unique_ptr<ServerCore> core;
    TF_CHECK_OK(
        ServerCore::Create(std::move(options), &core));
    // …
  }
};
Compile and link against libtensorflow.so
76. LET’S DEPLOY AND SERVE A MODEL
§ Navigate to the following notebook:
10_Deploy_Model_Server
§ https://github.com/fluxcapacitor/pipeline/gpu.ml/notebooks/
79. AGENDA
§ Setup Environment
§ Train and Debug TensorFlow Model
§ Train with Distributed TensorFlow Cluster
§ Optimize Model with XLA JIT Compiler
§ Optimize Model with XLA AOT and Graph Transforms
§ Deploy Model to TensorFlow Serving Runtime
§ Optimize TensorFlow Serving Runtime
§ Wrap-up and Q&A
80. REQUEST BATCH TUNING
§ max_batch_size
§ Enables throughput/latency tradeoff
§ Bound by RAM
§ batch_timeout_micros
§ Defines batch time window, latency upper-bound
§ Bound by RAM
§ num_batch_threads
§ Defines parallelism
§ Bound by CPU cores
§ max_enqueued_batches
§ Defines queue upper bound, throttling
§ Bound by RAM
Reaching either threshold will trigger a batch
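These knobs map to the batching parameters file passed to the model server (values shown are illustrative):

# batching_parameters.txt (text-format protobuf), passed via:
#   tensorflow_model_server --enable_batching \
#     --batching_parameters_file=batching_parameters.txt
max_batch_size { value: 32 }
batch_timeout_micros { value: 5000 }
num_batch_threads { value: 4 }
max_enqueued_batches { value: 8 }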
81. BATCH SCHEDULER STRATEGIES
§ BasicBatchScheduler
§ Best for homogeneous request types (ie. always classify or always regress)
§ Async callback when `max_batch_size` or `batch_timeout_micros` is reached
§ `BatchTask` encapsulates unit of work to be batched
§ SharedBatchScheduler
§ Best for heterogeneous request types, multi-step inference, ensembles, …
§ Groups BatchTasks into separate queues to form homogeneous batches
§ Processes batches fairly through interleaving
§ StreamingBatchScheduler
§ Mixed CPU/GPU/IO-bound workloads
§ Provides fine-grained control for complex, multi-phase inference logic
Must Experiment to Find Best Strategy for You!!
82. LET’S OPTIMIZE THE MODEL SERVER
§ Navigate to the following notebook:
11_Tune_Model_Server
§ https://github.com/fluxcapacitor/pipeline/gpu.ml/notebooks/
84. AGENDA
§ Setup Environment
§ Train and Debug TensorFlow Model
§ Train with Distributed TensorFlow Cluster
§ Optimize Model with XLA JIT Compiler
§ Optimize Model with XLA AOT and Graph Transforms
§ Deploy Model to TensorFlow Serving Runtime
§ Optimize TensorFlow Serving Runtime
§ Wrap-up and Q&A
85. YOU JUST LEARNED…
§ TensorFlow Best Practices
§ To Inspect and Debug Models
§ To Distribute Training Across a Cluster
§ To Optimize Training with Queue Feeders
§ To Optimize Training with XLA JIT Compiler
§ To Optimize Inference with AOT and Graph Transform Tool (GTT)
§ Key Components of TensorFlow Serving
§ To Deploy Models with TensorFlow Serving
§ To Optimize Inference by Tuning TensorFlow Serving