Deep neural networks (deep nets) are revolutionizing many machine learning (ML) applications. But there is a major bottleneck to broader adoption: the pain of model selection.
3. About us
▪ PhD students from ADALab at UCSD, advised by Prof. Arun Kumar
▪ Our research mission: democratize data science
▪ More:
Supun Nakandala: https://scnakandala.github.io/
Yuhao Zhang: https://yhzhang.info/
ADALab: https://adalabucsd.github.io/
5. Problem: Training Deep Nets is Painful!
▪ Batch size? 8, 16, 64, 256, ...
▪ Model architecture? 3-layer CNN, 5-layer CNN, LSTM, ...
▪ Learning rate? 0.1, 0.01, 0.001, 0.0001, ...
▪ Regularization? L2, L1, Dropout, BatchNorm, ...
4 × 4 × 4 × 4 = 256 different configurations!
Model performance = f(model architecture, hyperparameters, ...)
→ Trial and error
→ Need for speed → $$$ (distributed DL)
→ Need for better utilization of resources
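To make the combinatorics concrete, here is a minimal sketch (not from the talk) that enumerates a grid like the one above; the fourth architecture choice is a hypothetical stand-in for the slide's "…".

from itertools import product

batch_sizes = [8, 16, 64, 256]
model_archs = ['3-layer CNN', '5-layer CNN', 'LSTM', 'placeholder_arch']  # 4th choice is hypothetical
learning_rates = [0.1, 0.01, 0.001, 0.0001]
regularizers = ['L2', 'L1', 'Dropout', 'BatchNorm']

configs = list(product(batch_sizes, model_archs, learning_rates, regularizers))
print(len(configs))  # 4 * 4 * 4 * 4 = 256 configurations to try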
6. Outline
1. Background
a. Mini-batch SGD
b. Task Parallelism
c. Data Parallelism
2. Model Hopper Parallelism (MOP)
3. MOP on Apache Spark
a. Implementation
b. APIs
c. Tests
8. Introduction - mini-batch SGD
[Diagram: one mini-batch of rows (X1, X2, y) is sampled from the training data; the model is updated with the average of the gradients over the mini-batch, scaled by the learning rate η: updated model = model − η × (avg. of gradients).]
Mini-batch SGD is the most popular algorithm family for training deep nets.
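As a rough illustration (not from the talk), here is a minimal NumPy sketch of one mini-batch SGD step for a linear model, using the three example rows above as the mini-batch.

import numpy as np

def sgd_step(model, X_batch, y_batch, learning_rate=0.01):
    preds = X_batch @ model                                  # forward pass on the mini-batch
    avg_grad = X_batch.T @ (preds - y_batch) / len(y_batch)  # avg. of (squared-loss) gradients
    return model - learning_rate * avg_grad                  # updated model = model - η * avg. gradient

model = np.zeros(2)
X_batch = np.array([[1.1, 2.3], [0.9, 1.6], [0.6, 1.3]])     # one mini-batch
y_batch = np.array([0.0, 1.0, 1.0])
model = sgd_step(model, X_batch, y_batch)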
15. Data Parallelism - Problem Setting
▪ Models (tasks) are trained over data partitioned across workers
▪ High data scalability
16. Data Parallelism
[Diagram: a queue of models; each training step runs on one mini-batch or a full partition per worker, and updates are synchronized across workers.]
● Update only per epoch: bulk synchronous parallelism (model averaging)
○ Bad convergence
● Update per mini-batch: sync parameter server
○ + Async updates: async parameter server
○ + Decentralized: MPI allreduce (Horovod)
○ High communication cost
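As a hedged sketch (not from the talk) of the first bullet above, per-epoch model averaging simply averages the weights each worker learned on its own partition; this costs only one exchange per epoch but, as noted, can hurt convergence.

import numpy as np

def average_models(worker_weights):
    # bulk synchronous parallelism: one synchronization per epoch,
    # averaging the model copies trained independently on each partition
    return np.mean(worker_weights, axis=0)

# hypothetical weight vectors from 3 workers after one local epoch
worker_weights = [np.array([0.9, 1.1]), np.array([1.1, 0.9]), np.array([1.0, 1.0])]
global_model = average_models(worker_weights)  # [1.0, 1.0]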
17. Task Parallelism vs. Data Parallelism vs. Model Hopper Parallelism
Task Parallelism:
+ high throughput
- low data scalability
- memory/storage wastage
Data Parallelism:
+ high data scalability
- low throughput
- high communication cost
Model Hopper Parallelism (Cerebro):
+ high throughput
+ high data scalability
+ low communication cost
+ no memory/storage wastage
29. Implementation Details
▪ Spark DataFrames are converted to partitioned Parquet files and locally cached on the workers
▪ TensorFlow threads run training on the local data partitions
▪ Model hopping is implemented via a shared file system
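As a simplified illustration of the hopping described above (a sketch under assumptions, not Cerebro's actual code), the scheduler below rotates models across workers each sub-epoch, using a dict as a stand-in for the shared file system and assuming as many models as workers/partitions.

def train_one_epoch(shared_store, partitions, train_sub_epoch):
    # shared_store: dict model_id -> model state (stand-in for the shared file system)
    # partitions:   list of local data partitions, one per worker
    # train_sub_epoch(model, partition): trains the model on one partition and returns it
    model_ids = list(shared_store)
    assert len(model_ids) == len(partitions)  # simplifying assumption
    for sub_epoch in range(len(partitions)):
        for worker, partition in enumerate(partitions):
            model_id = model_ids[(worker + sub_epoch) % len(model_ids)]  # which model visits this worker
            model = shared_store[model_id]              # "load" the checkpoint
            model = train_sub_epoch(model, partition)   # train on the local partition
            shared_store[model_id] = model              # "save" it so it can hop on

Over the sub-epochs of one epoch, every model visits every partition exactly once; only model checkpoints move between workers, never the data.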
31. Example: Grid Search on Model Selection + Hyperparameter Search
▪ Two model architectures: {VGG16, ResNet50}
▪ Two learning rates: {1e-4, 1e-6}
▪ Two batch sizes: {32, 256}
32. Initialization
from pyspark.sql import SparkSession
import cerebro

spark = SparkSession.builder.master(...).getOrCreate()  # initialize spark
spark_backend = cerebro.backend.SparkBackend(
    spark_context=spark.sparkContext, num_workers=num_workers
)  # initialize cerebro
data_store = cerebro.storage.HDFSStore('hdfs://...')  # set the shared data storage
33. Define the Models
import tensorflow as tf

params = {'model_arch': ['vgg16', 'resnet50'],
          'learning_rate': [1e-4, 1e-6],
          'batch_size': [32, 256]}

def estimator_gen_fn(params):
    '''A model factory that returns an estimator,
    given the input hyperparameters as well as the model architecture.'''
    if params['model_arch'] == 'resnet50':
        model = ...  # tf.keras model
    elif params['model_arch'] == 'vgg16':
        model = ...  # tf.keras model
    optimizer = tf.keras.optimizers.Adam(lr=params['learning_rate'])  # choose optimizer
    loss = ...  # define loss
    estimator = cerebro.keras.SparkEstimator(model=model,
                                             optimizer=optimizer,
                                             loss=loss,
                                             batch_size=params['batch_size'])
    return estimator
34. Run Grid Search
df = ...  # read data in as a Spark DataFrame
grid_search = cerebro.tune.GridSearch(spark_backend,
                                      data_store,
                                      estimator_gen_fn,
                                      params,
                                      epoch=5,
                                      validation=0.2,
                                      feature_columns=['features'],
                                      label_columns=['labels'])
model = grid_search.fit(df)
36. Tests - Setups - Hardware
▪ 9-node cluster, 1 master + 8 workers
▪ On each node:
▪ Intel Xeon 10-core 2.20 GHz CPU x 2
▪ 192 GB RAM
▪ Nvidia P100 GPU x 1
42. Tests - Cerebro-Spark Gantt Chart
▪ Only overhead: stragglers randomly caused by TF 2.1 Keras model saving/loading; these overheads range from 1% to 300%
[Gantt chart: straggler tasks visible in the timeline.]