Notes from the 2016 Bay Area Deep Learning School

Slide-deck for the lunch talk at IBM Almaden Research Center on Oct 11, 2016.

Abstract: In this lunch talk, I will give a high-level summary of the Bay Area Deep Learning School, which was held at Stanford on Sept 24 and 25. The videos and slides of the lectures are available online at http://www.bayareadlschool.org/. I will also give a very brief introduction to deep learning.

1. Summary of Bay Area Deep Learning School
   Niketan Pansare
2. Agenda
   • Summary
   • Why is Deep Learning gaining popularity?
   • Introduction to Deep Learning
   • Case study of state-of-the-art networks
   • How to train them
   • Tricks of the trade
   • Overview of the existing deep learning stack
3. Summary
   • 1300 applicants for 500 spots (industry + academia)
   • Videos are online:
     • Day 1: https://www.youtube.com/watch?v=eyovmAtoUx0
     • Day 2: https://www.youtube.com/watch?v=9dXiAecyJrY
   • Mostly high-quality talks from different areas:
     • Computer Vision (Karpathy – OpenAI), Speech (Coates – Baidu), NLP (Socher – Salesforce, Quoc Le – Google), Unsupervised Learning (Salakhutdinov – CMU), Reinforcement Learning (Schulman – OpenAI)
     • Tools (TensorFlow/Theano/Torch)
     • Overview/vision talks (Ng, Bengio and Larochelle)
   • Networking:
     • Keras contributor (working in a startup): CNTK integration, potential for SystemML integration
     • TensorFlow users in Google
       • Discussion on “dynamic operator placement” described in the whitepaper
4. Why is Deep Learning gaining popularity?
5. Why is Deep Learning gaining popularity?
   • Efficacy of larger networks
   Reference: Andrew Ng (Spark Summit 2016)
6. Why is Deep Learning gaining popularity?
   • Efficacy of larger networks
     • Train large networks on large amounts of data (the relative ordering of models is not well defined for small data)
   Reference: Andrew Ng (Spark Summit 2016)
7. Why is Deep Learning gaining popularity?
   • Efficacy of larger networks
   • Large amount of data
     • Caltech101 dataset (by Fei-Fei Li), Google Street View House Numbers (SVHN) dataset, CIFAR-10 dataset, Flickr 30K Images
8. Why is Deep Learning gaining popularity?
   • Efficacy of larger networks
   • Large amount of data
   • Compute power necessary to train larger networks
     • VGG: ~2–3 weeks of training with 4 GPUs
     • ResNet-101: 2–3 weeks with 4 GPUs
     • Rocket Fuel*
9. Why is Deep Learning gaining popularity?
   • Efficacy of larger networks
   • Large amount of data
   • Compute power necessary to train larger networks
   • Techniques/algorithms/networks to deal with training issues
     • Non-linearities, batch normalization, dropout, ensembles
     • Will discuss these in detail later
10. Why is Deep Learning gaining popularity?
    • Efficacy of larger networks
    • Large amount of data
    • Compute power necessary to train larger networks
    • Techniques/algorithms/networks to deal with training issues
    • Success stories in vision, speech and text
11. Why is Deep Learning gaining popularity?
    • Efficacy of larger networks
    • Large amount of data
    • Compute power necessary to train larger networks
    • Techniques/algorithms/networks to deal with training issues
    • Success stories in vision, speech and text
    • No feature engineering
12. Why is Deep Learning gaining popularity?
    • Efficacy of larger networks
    • Large amount of data
    • Compute power necessary to train larger networks
    • Techniques/algorithms/networks to deal with training issues
    • Success stories in vision, speech and text
    • No feature engineering
    • Transfer learning + open source (network, learned weights, dataset as well as codebase)
      • https://github.com/BVLC/caffe/wiki/Model-Zoo
      • https://github.com/KaimingHe/deep-residual-networks
      • https://github.com/facebook/fb.resnet.torch
      • https://github.com/baidu-research/warp-ctc
      • https://github.com/NervanaSystems/ModelZoo
13. Why is Deep Learning gaining popularity?
    • Efficacy of larger networks
    • Large amount of data
    • Compute power necessary to train larger networks
    • Techniques/algorithms/networks to deal with training issues
    • Success stories in vision, speech and text
    • No feature engineering
    • Transfer learning + open source (network, learned weights, dataset as well as codebase)
    • Tooling support for rapid iteration/experimentation
      • Auto-differentiation, general-purpose optimizers (SGD variants)
      • Layered architecture
      • TensorBoard
14. Why is Deep Learning gaining popularity?
    • Efficacy of larger networks
    • Large amount of data
    • Compute power necessary to train larger networks
    • Techniques/algorithms/networks to deal with training issues
    • Success stories in vision, speech and text
    • No feature engineering
    • Transfer learning + open source (network, learned weights, dataset as well as codebase)
    • Tooling support for rapid iteration/experimentation
      • Auto-differentiation, general-purpose optimizers (SGD variants)
      • Layered architecture
      • TensorBoard
    Note: this talk will skip RNNs, LSTMs, CTC, parameter servers, and unsupervised and reinforcement deep learning.
15. Not covered in this talk
    • DL for speech (covers CTC + speech pipeline):
      • https://youtu.be/9dXiAecyJrY?t=3h49m40s
      • https://github.com/baidu-research/ba-dls-deepspeech
    • DL for NLP (covers word embeddings, RNN, LSTM, seq2seq):
      • https://youtu.be/eyovmAtoUx0?t=3h51m45s (Richard Socher)
      • https://youtu.be/9dXiAecyJrY?t=7h4m12s (Quoc Le)
    • Deep unsupervised learning (covers RBMs, autoencoders, …):
      • https://youtu.be/eyovmAtoUx0?t=7h7m54s
    • Deep reinforcement learning (covers Q-learning, policy gradients):
      • https://youtu.be/9dXiAecyJrY?t=7m43s
    • Tutorials (TensorFlow, Torch, Theano):
      • https://github.com/wolffg/tf-tutorial/
      • https://github.com/alexbw/bayarea-dl-summerschool
      • https://github.com/lamblin/bayareadlschool
16. Introduction to Deep Learning
17. Different abstractions for Deep Learning
    • Deep Learning pipeline
    • Deep Learning task (e.g. CNN + classifier => image captioning, localization, …)
    • Deep Neural Network (e.g. CNN, AlexNet, GoogLeNet, …)
    • Layer (e.g. convolution, pooling, …)
18. Common layers
    • Fully connected layer
    Reference: Convolutional Neural Networks for Visual Recognition. http://cs231n.github.io/
19. Common layers
    • Fully connected layer
    • Convolution layer
      • Fewer parameters than a fully connected layer (see the sketch below)
      • Useful to capture local features (spatially)
      • Output #channels = #filters
    Reference: Convolutional Neural Networks for Visual Recognition. http://cs231n.github.io/
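To make the parameter-count claim concrete, here is a back-of-the-envelope comparison (my own illustration, not from the deck; the 224x224x3 input and 64 outputs are arbitrary example sizes):

```python
# Hypothetical parameter-count comparison: fully connected vs. 3x3 convolution.
H, W, C_in, C_out, k = 224, 224, 3, 64, 3

fc_params   = H * W * C_in * C_out   # every input pixel connects to every output unit
conv_params = k * k * C_in * C_out   # one 3x3xC_in filter per output channel, reused at every location

print(fc_params)    # 9633792
print(conv_params)  # 1728
```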
20. Common layers
    • Fully connected layer
    • Convolution layer
    • Pooling layer
      • Useful to tolerate feature deformation such as local shifts
      • Output #channels = input #channels
    Reference: Convolutional Neural Networks for Visual Recognition. http://cs231n.github.io/
21. Common layers
    • Fully connected layer
    • Convolution layer
    • Pooling layer
    • Activations
      • Sigmoid
      • Tanh
      • ReLU
    Reference: Introduction to Feedforward Neural Networks - Larochelle. https://dl.dropboxusercontent.com/u/19557502/hugo_dlss.pdf, http://cs231n.stanford.edu/slides/winter1516_lecture5.pdf
22. Sigmoid
    • Squashes the neuron’s pre-activations into [0, 1]
    • Historically popular
    • Disadvantages:
      • Gradients tend to vanish as activations grow (i.e. saturated neurons)
      • Sigmoid outputs are not zero-centered
      • exp() is somewhat expensive to compute
    Reference: Introduction to Feedforward Neural Networks - Larochelle. http://cs231n.stanford.edu/slides/winter1516_lecture5.pdf
23. Tanh
    • Squashes the neuron’s pre-activations into [-1, 1]
    • Advantage:
      • Zero-centered
    • Disadvantages:
      • Gradients tend to vanish as activations grow
      • exp() is expensive to compute
    Reference: Introduction to Feedforward Neural Networks - Larochelle. http://cs231n.stanford.edu/slides/winter1516_lecture5.pdf
24. ReLU (Rectified Linear Units): max(0, a)
    • Bounded below by 0 (always non-negative)
    • Advantages:
      • Does not saturate (in the positive region)
      • Very computationally efficient
      • Converges much faster than sigmoid/tanh in practice (e.g. 6x)
    • Disadvantages:
      • Tends to blow up the activations
    • Alternatives (see the sketch below):
      • Leaky ReLU: max(0.001*a, a)
      • Parametric ReLU: max(alpha*a, a)
      • Exponential Linear Unit (ELU): a if a > 0; else alpha*(exp(a) - 1)
    Reference: Introduction to Feedforward Neural Networks - Larochelle. http://cs231n.stanford.edu/slides/winter1516_lecture5.pdf
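A minimal NumPy sketch of the activation functions discussed on the last few slides (my own code; the leaky slope 0.001 and the ELU form follow the values given above):

```python
import numpy as np

def sigmoid(a):                       # squashes pre-activations into [0, 1]; saturates for large |a|
    return 1.0 / (1.0 + np.exp(-a))

def tanh(a):                          # zero-centered, squashes into [-1, 1]; still saturates
    return np.tanh(a)

def relu(a):                          # max(0, a): cheap, does not saturate for a > 0
    return np.maximum(0.0, a)

def leaky_relu(a, slope=0.001):       # small negative slope instead of a hard zero
    return np.maximum(slope * a, a)

def elu(a, alpha=1.0):                # a if a > 0 else alpha * (exp(a) - 1)
    return np.where(a > 0, a, alpha * (np.exp(a) - 1.0))

a = np.array([-5.0, -1.0, 0.0, 1.0, 5.0])
print(sigmoid(a), tanh(a), relu(a), leaky_relu(a), elu(a), sep="\n")
```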
25. • According to Hinton, why did deep learning not catch on earlier?
      • Our labeled datasets were thousands of times too small.
      • Our computers were millions of times too slow.
      • We initialized the weights in a stupid way.
      • We used the wrong type of non-linearity (i.e. sigmoid/tanh).
    • Which non-linearity to use => ReLU, according to
      • LeCun: http://yann.lecun.com/exdb/publis/pdf/jarrett-iccv-09.pdf
      • Hinton: http://www.cs.toronto.edu/~fritz/absps/reluICML.pdf
      • Bengio: https://www.utc.fr/~bordesan/dokuwiki/_media/en/glorot10nipsworkshop.pdf
    • If not satisfied with ReLU:
      • Double-check the learning rates
      • Then try Leaky ReLU / ELU
      • Then try tanh, but don’t expect much
      • Don’t use sigmoid
    Reference: Introduction to Feedforward Neural Networks - Larochelle. http://cs231n.stanford.edu/slides/winter1516_lecture5.pdf
26. Common layers
    • Fully connected layer
    • Convolution layer
    • Pooling layer
    • Activations
    • SoftMax (see the sketch below)
      • Strictly positive
      • Sums to 1
      • Used for multi-class classification
      • Other losses: hinge, Euclidean, sigmoid cross-entropy, …
    Reference: Introduction to Feedforward Neural Networks - Larochelle. https://dl.dropboxusercontent.com/u/19557502/hugo_dlss.pdf
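A numerically stable way to compute the softmax is to subtract the per-row maximum before exponentiating; a short sketch, assuming scores arranged as (batch, num_classes):

```python
import numpy as np

def softmax(scores):
    # Subtracting the row max keeps exp() from overflowing and does not change
    # the result, because softmax is invariant to shifting all scores equally.
    shifted = scores - scores.max(axis=1, keepdims=True)
    e = np.exp(shifted)
    return e / e.sum(axis=1, keepdims=True)   # strictly positive, each row sums to 1

probs = softmax(np.array([[2.0, 1.0, 0.1]]))
print(probs, probs.sum())                      # approx [[0.659 0.242 0.099]] 1.0
```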
27. Common layers
    • Fully connected layer
    • Convolution layer
    • Pooling layer
    • Activations
    • SoftMax
    • Dropout (see the sketch below)
      • Idea: «cripple» the neural network by removing hidden units stochastically
      • Use a random mask: a different dropout probability could be used, but 0.5 usually works well
      • Beats regular backpropagation on many datasets, but is slower (~2x)
      • Helps to prevent overfitting
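The dropout idea in a few lines of NumPy, using the "inverted" formulation so no rescaling is needed at test time (function and argument names are mine):

```python
import numpy as np

def dropout(activations, p_drop=0.5, train=True):
    """Randomly zero units with probability p_drop during training ("cripple" the net)."""
    if not train:
        return activations                      # test time: keep all units, no mask
    mask = (np.random.rand(*activations.shape) >= p_drop) / (1.0 - p_drop)
    return activations * mask                   # dividing by (1 - p) keeps the expected value unchanged

h = np.random.randn(4, 8)
print(dropout(h, p_drop=0.5))
```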
28. Common layers
    • Normalization layers
    • Batch Normalization (BN)
      • Networks converge faster if inputs are whitened, i.e. linearly transformed to have zero mean and unit variance, and decorrelated
      • Ioffe and Szegedy, 2014 suggested also normalizing at the level of the hidden layers
      • BN: normalize each layer, for each mini-batch => addresses “internal covariate shift”
      • Greatly accelerates training + less sensitive to initialization + improves regularization
    • Two popular approaches for input normalization:
      • Subtract the mean image (e.g. AlexNet)
      • Subtract the per-channel mean (e.g. VGGNet)
    Reference: Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
29. Common layers
    • Normalization layers
    • Batch Normalization (BN)
      • Networks converge faster if inputs are whitened, i.e. linearly transformed to have zero mean and unit variance, and decorrelated
      • Ioffe and Szegedy, 2014 suggested also normalizing at the level of the hidden layers
      • BN: normalize each layer, for each mini-batch => addresses “internal covariate shift”
      • Greatly accelerates training + less sensitive to initialization + improves regularization
    Reference: Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
30. Common layers
    • Normalization layers
    • Batch Normalization (BN)
      • BN: normalize each layer, for each mini-batch (see the sketch below)
      • Greatly accelerates training + less sensitive to initialization + improves regularization
    • Variants compared in the paper’s experiments:
      • Inception: trained with initial learning rate 0.0015
      • BN-Baseline: same as Inception, with BN before each non-linearity
      • BN-x5 / BN-x30: initial learning rate increased by 5x (0.0075) and 30x (0.045)
      • BN-x5-Sigmoid: same as BN-x5, but with sigmoid instead of ReLU
    Reference: Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
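What "normalizing each layer, for each mini-batch" looks like at training time, in a minimal sketch (gamma and beta are the learned scale and shift from the BN paper; the array shapes are my own choice):

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    # x: (batch, features). Normalize each feature over the mini-batch, then let the
    # network undo the normalization if useful via learned gamma (scale) and beta (shift).
    mu  = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta

x = np.random.randn(32, 10) * 5 + 3             # badly scaled activations
y = batch_norm_forward(x, gamma=np.ones(10), beta=np.zeros(10))
print(y.mean(axis=0).round(3), y.std(axis=0).round(3))   # approx 0 mean, 1 std per feature
```

At test time, running averages of the mean and variance collected during training are used in place of the per-batch statistics.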
31. Common layers
    • Normalization layers
    • Batch Normalization (BN)
    • Local Response Normalization (LRN)
      • Normalizes each activation across n neighbouring channels at the same spatial position (see the sketch below)
      • Used in the AlexNet paper with k=2, alpha=10^-4, beta=0.75, n=5
      • Not common anymore
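For reference, the LRN scheme from the AlexNet paper divides each activation by a power of the sum of squared activations over n neighbouring channels at the same spatial position; a direct NumPy transcription under that reading (my own code):

```python
import numpy as np

def lrn(x, k=2.0, alpha=1e-4, beta=0.75, n=5):
    # x: (channels, height, width).
    # b_i = a_i / (k + alpha * sum over ~n channels centred on i of a_j^2) ** beta
    C = x.shape[0]
    out = np.empty_like(x)
    for i in range(C):
        lo, hi = max(0, i - n // 2), min(C, i + n // 2 + 1)
        denom = (k + alpha * (x[lo:hi] ** 2).sum(axis=0)) ** beta
        out[i] = x[i] / denom
    return out

print(lrn(np.random.randn(8, 4, 4)).shape)   # (8, 4, 4)
```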
32. Different abstractions for Deep Learning
    • Deep Learning pipeline
    • Deep Learning task (e.g. CNN + classifier => image captioning, localization, …)
    • Deep Neural Network (e.g. CNN, AlexNet, GoogLeNet, …)
    • Layer (e.g. convolution, pooling, …)
33. Convolutional Neural Networks
34. Convolutional Neural Networks
    • LeNet for OCR (90s)
    • AlexNet: compared to LeCun 1998, AlexNet used
      • More data: 10^6 vs. 10^3 images
      • GPU (~20x speedup) => almost 1B FLOPs for a single image
      • Deeper: more layers (8 weight layers)
      • Fancy regularization (dropout 0.5)
      • Fancy non-linearity (first use of ReLU, according to Karpathy)
    • Top-5 error on ImageNet (ILSVRC 2012 winner): 16.4%
    • Using ensembles (7 CNNs): 15.4%
35. Convolutional Neural Networks
    ZFNet [Zeiler and Fergus, 2013]
    • An improvement on AlexNet obtained by tweaking the architecture hyperparameters:
      • Expanding the size of the middle convolutional layers
        • CONV 3, 4, 5: 512, 1024, 512 filters instead of 384, 384, 256
      • Making the stride and filter size on the first layer smaller
        • CONV 1: changed from 11x11, stride 4 to 7x7, stride 2
    • Top-5 error on ImageNet (ILSVRC 2013 winner): 16.4% -> 14.8%
    Reference: http://cs231n.github.io/convolutional-networks/
36. Convolutional Neural Networks (VGGNet)
    • Homogeneous architecture
      • All convolution layers use small 3x3 filters (compared to AlexNet’s 11x11, 5x5 and 3x3 filters) with stride 1 (compared to AlexNet’s strides of 4 and 1)
      • Number of filters per stage: 64, 128, 256, 512, 512
    • Depth of the network is the critical component (19 layers)
    • Why 3x3 layers? (see the sketch below)
      • Stacked convolution layers have a large receptive field: two 3x3 layers => 5x5 receptive field, three 3x3 layers => 7x7 receptive field
      • More non-linearity
      • Fewer parameters to learn
    • Other details:
      • 5 max-pool layers (2x reduction each)
      • No normalization
      • 3 FC layers (instead of 2) => most of the parameters (102760448, 16777216, 409600)
    • ImageNet top-5 error (ILSVRC 2014 runner-up): 14.8% -> 7.3%
    Reference: https://arxiv.org/pdf/1509.07627.pdf, https://arxiv.org/pdf/1409.1556v6.pdf, https://www.youtube.com/watch?v=j1jIoHN3m0s
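A quick check of the receptive-field arithmetic behind the "why 3x3 layers" bullet: with stride-1 convolutions, each additional k x k layer grows the receptive field by k - 1 (a tiny helper of my own):

```python
def receptive_field(num_layers, k=3):
    # stride-1 convolutions only: each layer adds (k - 1) pixels to the receptive field
    return 1 + num_layers * (k - 1)

print(receptive_field(2), receptive_field(3))   # 5, 7
```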
37. New Lego brick or mini-network (Inception module)
    For Inception v4, see https://arxiv.org/abs/1602.07261
38. Convolutional Neural Networks
    GoogLeNet [Szegedy et al., 2014]
    • 9 inception modules
    • ILSVRC 2014 winner (6.7% top-5 error)
    • Only 5 million params! (uses average pooling instead of FC layers)
39. Convolutional Neural Networks
    Speed with Torch7 (GeForce GTX TITAN X, cuDNN), all times in milliseconds:

                           GoogLeNet   VGG_model_A   AlexNet
      updateOutput            130.76        162.74     27.65
      updateGradInput         197.86        167.05     24.32
      accGradParameters       142.15        199.49     28.99
      Forward                 130.76        162.74     27.65
      Backward                340.01        366.54     53.31
      TOTAL                   470.77        529.29     80.96

    • Compared to AlexNet, GoogLeNet has 12x fewer params, 2x more compute, 6.67% (vs. 16.4%) top-5 error
    • Compared to VGGNet, GoogLeNet has 36x fewer params, 22 layers (vs. 19), 6.67% (vs. 7.3%) top-5 error
    Reference: https://arxiv.org/pdf/1512.00567.pdf, https://github.com/soumith/convnet-benchmarks/blob/master/torch7/imagenet_winners/output.log
40. Analysis of errors: GoogLeNet vs. humans on the ImageNet dataset
    • Types of error that both GoogLeNet and humans are susceptible to:
      • Multiple objects (24% of GoogLeNet errors and 16% of human errors)
      • Incorrect annotations
    • Types of error that GoogLeNet is more susceptible to than humans:
      • Object small or thin (21% of GoogLeNet errors)
      • Image filters, e.g. distorted contrast/color distribution (13% of GoogLeNet errors and only 1 human error)
      • Abstract representations, e.g. shadow on the ground of a child on a swing (6% of GoogLeNet errors)
    • Types of error that humans are more susceptible to than GoogLeNet:
      • Fine-grained recognition, e.g. species of dogs (7% of GoogLeNet errors and 37% of human errors)
      • Insufficient training data
    Reference: http://arxiv.org/abs/1409.0575
41. Convolutional Neural Networks
42. New Lego brick (Residual block)
    • Shortcut connection to address underfitting due to vanishing gradients
      • Occurs even with batch normalization
    Reference: http://torch.ch/blog/2016/02/04/resnets.html
43. Convolutional Neural Networks
    • ResNet architecture
      • VGG-style design => just deep
      • All 3x3 convolutions
      • #Filters x2 (when the spatial size is halved)
    • Other remarks:
      • No max pooling (almost)
      • No FC layers
      • No dropout
      • See https://github.com/facebook/fb.resnet.torch
    Reference: http://image-net.org/challenges/talks/ilsvrc2015_deep_residual_learning_kaiminghe.pdf
44. Different abstractions for Deep Learning
    • Deep Learning pipeline
    • Deep Learning task (e.g. CNN + classifier => image captioning, localization, …)
    • Deep Neural Network (e.g. CNN, AlexNet, GoogLeNet, …)
    • Layer (e.g. convolution, pooling, …)
45. Addressing other tasks … SKIP THIS !!
    Reference: https://docs.google.com/presentation/d/1Q1CmVVnjVJM_9CDk3B8Y6MWCavZOtiKmOLQ0XB7s9Vg/edit#slide=id.g17e6880c10_0_926
46. Addressing other tasks … SKIP THIS !!
47. How to train a Deep Neural Network?
48. Training a Deep Neural Network
49. Training a Deep Neural Network
    • “Forward propagation”: compute a function via composition of linear transformations followed by element-wise non-linearities (see the sketch below)
    • “Backward propagation”: propagate errors backwards and update weights according to how much they contributed to the output
      • Special case of “automatic differentiation”, discussed in the next slides
    Reference: “You Should Be Using Automatic Differentiation” by Ryan Adams (Twitter)
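Forward propagation as "composition of linear transformations followed by element-wise non-linearities", sketched for a hypothetical two-layer network (layer sizes and names are mine):

```python
import numpy as np

def forward(x, W1, b1, W2, b2):
    # linear transform -> element-wise non-linearity -> linear transform
    h = np.maximum(0.0, x @ W1 + b1)   # ReLU hidden layer
    return h @ W2 + b2                 # class scores (fed into softmax + loss)

x = np.random.randn(8, 20)             # mini-batch of 8 examples, 20 features each
W1, b1 = np.random.randn(20, 50) * 0.01, np.zeros(50)
W2, b2 = np.random.randn(50, 10) * 0.01, np.zeros(10)
print(forward(x, W1, b1, W2, b2).shape)   # (8, 10)
```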
50. Training a Deep Neural Network
    • Training features and training labels
    • Goal: learn the weights
    • Define a loss function
    • For numerical stability and mathematical simplicity, we use the negative log-likelihood (often referred to as cross-entropy); see the sketch below
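A sketch of the negative log-likelihood (cross-entropy) loss computed from raw class scores, using the usual max-subtraction trick for numerical stability (helper name and shapes are my own):

```python
import numpy as np

def cross_entropy_loss(scores, labels):
    # scores: (batch, num_classes) raw outputs; labels: (batch,) integer class ids.
    # log-softmax computed stably, then pick out the log-probability of the true class.
    shifted = scores - scores.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

print(cross_entropy_loss(np.array([[2.0, 1.0, 0.1]]), np.array([0])))   # approx 0.417
```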
51. Training a Deep Neural Network
    • Using the loss function, we learn the weights
    • Learning is cast as optimization
    • Popular algorithm: Stochastic Gradient Descent (see the sketch below)
    • Needs to compute the gradients
    • And an initialization of the weights (covered later)
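Casting learning as optimization with mini-batch SGD, in skeleton form; loss_and_grad is a placeholder for whatever computes the loss and its gradient (everything here is illustrative):

```python
import numpy as np

def sgd(W, data, labels, loss_and_grad, lr=0.01, batch_size=64, epochs=10):
    n = len(data)
    for _ in range(epochs):
        perm = np.random.permutation(n)                  # shuffle once per epoch
        for start in range(0, n, batch_size):
            idx = perm[start:start + batch_size]
            _, grad = loss_and_grad(W, data[idx], labels[idx])
            W = W - lr * grad                            # step against the gradient
    return W
```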
52. Methods for differentiating functions
    • Evaluate the derivative of f(x) = sin(x − 3/x) at x = 0.01
    • Symbolic differentiation
      • Symbolically differentiate the function as an expression, and evaluate it at the required point
      • Low speed + difficult to convert a DNN into expressions
      • Symbolically, f'(x) = cos(x − 3/x)·(1 + 3/x²) … at x = 0.01 => −962.8192798
    • Numerical differentiation
      • Use finite differences (see the sketch below)
      • Generally bad numerical stability
    • Automatic/Algorithmic Differentiation (AD)
      • “Mechanically calculates derivatives as functions expressed as computer programs, at machine precision, and with complexity guarantees” – Barak Pearlmutter
      • Reverse-mode automatic differentiation is what is used in practice
    Reference: http://homes.cs.washington.edu/~naveenks/files/2009_Cranfield_PPT.pdf
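The worked example above can be checked numerically: a central finite difference at x = 0.01 agrees with the symbolic derivative, with the step size chosen by hand, which is exactly the numerical-stability issue the slide mentions:

```python
import math

f = lambda x: math.sin(x - 3.0 / x)

# symbolic: f'(x) = cos(x - 3/x) * (1 + 3/x^2)
x = 0.01
symbolic = math.cos(x - 3.0 / x) * (1.0 + 3.0 / x**2)

# numerical: central finite difference with a hand-picked step size
h = 1e-7
numerical = (f(x + h) - f(x - h)) / (2 * h)

print(symbolic, numerical)   # both approximately -962.82
```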
53. Examples of AD in practice
    • For Python and NumPy: https://github.com/HIPS/autograd (see the sketch below)
    • For Torch (developed by Twitter Cortex): https://github.com/twitter/torch-autograd/
    • See http://www.autodiff.org/ for more details
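With HIPS autograd, for instance, the same derivative is obtained mechanically via reverse-mode AD; grad is the library's documented entry point:

```python
import autograd.numpy as np   # thinly wrapped NumPy that records operations
from autograd import grad

f = lambda x: np.sin(x - 3.0 / x)
df = grad(f)                  # returns a function computing df/dx by reverse-mode AD

print(df(0.01))               # approximately -962.82, matching the symbolic result
```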
54. Reverse-mode AD (how it works)
    • Convert the algorithm into a sequence of assignments of basic operations
    • Apply the chain rule
    • Differentiate each basic operation in the reverse order, propagating from each variable to its parents (see the sketch below)
    Reference: https://justindomke.wordpress.com/2009/03/24/a-simple-explanation-of-reverse-mode-automatic-differentiation/
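A manual trace of reverse-mode AD for the same f(x) = sin(x - 3/x): the forward sweep breaks f into basic assignments, and the reverse sweep differentiates them in reverse order via the chain rule (the decomposition is my own):

```python
import math

def f_and_grad(x):
    # forward sweep: basic operations
    a = 3.0 / x          # a = 3/x
    b = x - a            # b = x - 3/x
    y = math.sin(b)      # y = sin(b)

    # reverse sweep: chain rule applied in reverse order
    dy_db = math.cos(b)                            # d sin(b)/db
    dy_da = dy_db * (-1.0)                         # b = x - a  =>  db/da = -1
    dy_dx = dy_db * 1.0 + dy_da * (-3.0 / x**2)    # contributions through b and through a
    return y, dy_dx

print(f_and_grad(0.01)[1])   # approximately -962.82
```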
55. Reverse-mode AD (how it works – NN)
    From "Neural Network with Torch" – Alex Wiltschko
56. Reverse-mode AD (how it works – NN)
    From "Neural Network with Torch" – Alex Wiltschko
57. Tricks of the Trade
    • Normalize your data
    • Use mini-batches instead of single-example SGD (leverage matrix-matrix operations)
    • Use momentum
    • Use adaptive learning rates (update rules sketched below):
      • Adagrad: learning rates are scaled by the square root of the cumulative sum of squared gradients
      • RMSProp: instead of a cumulative sum, use an exponential moving average
      • Adam: essentially combines RMSProp with momentum
    • Debug your gradients using the finite-difference method
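The update rules listed above, written out explicitly; hyperparameter names and default values are mine, set to commonly used choices rather than anything prescribed by the slides:

```python
import numpy as np

def momentum_step(w, grad, v, lr=0.01, mu=0.9):
    v = mu * v - lr * grad                              # velocity accumulates past gradients
    return w + v, v

def adagrad_step(w, grad, cache, lr=0.01, eps=1e-8):
    cache = cache + grad ** 2                           # cumulative sum of squared gradients
    return w - lr * grad / (np.sqrt(cache) + eps), cache

def rmsprop_step(w, grad, cache, lr=0.001, decay=0.9, eps=1e-8):
    cache = decay * cache + (1 - decay) * grad ** 2     # exponential moving average instead
    return w - lr * grad / (np.sqrt(cache) + eps), cache

def adam_step(w, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * grad                        # momentum-like first moment
    v = b2 * v + (1 - b2) * grad ** 2                   # RMSProp-like second moment
    m_hat, v_hat = m / (1 - b1 ** t), v / (1 - b2 ** t) # bias correction for early steps
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v
```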
58. Tricks of the Trade
    • Use momentum
    • Use adaptive learning rates:
      • Adagrad: learning rates are scaled by the square root of the cumulative sum of squared gradients
      • RMSProp: instead of a cumulative sum, use an exponential moving average
      • Adam: essentially combines RMSProp with momentum
59. Tricks of the Trade
    • Initialization matters
    • Assume a 10-layer FC network with tanh non-linearity
      • Initialize with zero mean & 0.01 std dev: does not work for deep networks
      • Initialize with zero mean & unit std dev: almost all neurons completely saturated at either -1 or 1; gradients will be all zero
    (Plots of per-layer activation mean and std dev omitted.)
60. Tricks of the Trade
    • Initialization matters
    • Assume a 10-layer FC network with tanh non-linearity
    • Xavier initialization [Glorot et al., 2010]:
      • Use zero mean and 1/fan_in variance
      • Works well for tanh, but not for ReLU
      • He et al. proposed using 2/fan_in instead (note the additional factor of 2); see the sketch below
    (Plots of per-layer activation mean and std dev omitted.)
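The two initialization schemes in code: Xavier scaling with variance 1/fan_in for tanh, and the He et al. variant with variance 2/fan_in for ReLU (a sketch; the layer sizes are arbitrary):

```python
import numpy as np

def xavier_init(fan_in, fan_out):
    # Glorot/Xavier: zero mean, variance 1/fan_in (works well with tanh)
    return np.random.randn(fan_in, fan_out) / np.sqrt(fan_in)

def he_init(fan_in, fan_out):
    # He et al.: variance 2/fan_in; the extra factor of 2 compensates for ReLU
    # zeroing out roughly half of the pre-activations
    return np.random.randn(fan_in, fan_out) * np.sqrt(2.0 / fan_in)

W_tanh = xavier_init(512, 512)
W_relu = he_init(512, 512)
print(W_tanh.std(), W_relu.std())   # approx 0.044 and 0.0625
```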
61. Tricks of the Trade
    • Initialization matters
    • Assume a 10-layer FC network with tanh non-linearity
    • Batch normalization reduces the strong dependence on initialization
62. Overview of the existing deep learning stack
63. Existing Deep Learning Stack
    • Framework: Caffe, Theano, Torch7, TensorFlow, DeepLearning4J, SystemML*
    • Library with commonly used building blocks: cuDNN; Aparapi (converts bytecode to OpenCL); ~CPU’s BLAS/LAPACK: cuBLAS, MAGMA, CULA, cuSPARSE, cuSOLVER, cuRAND, etc.
    • Driver/Toolkit: CUDA (preferred for Nvidia GPUs), OpenCL (portable)
    • Hardware:
      • CPU: multicore, task parallelism, minimize latency (e.g. Unsafe/DirectBuf/GC pauses/NIO)
      • GPU: data parallelism (single task), cost of moving data from CPU to GPU (kernel fusion?), maximize throughput
    • Rule of thumb: always use libraries! Caffe (GPU) gives 11x but Caffe (cuDNN) 14x speedup on AlexNet training (5 convolution + 3 fully connected layers)
    *Conditions apply: unified memory model since CUDA 6
64. Comparison of existing frameworks
    • Caffe: core language C++; bindings Python, MatLab; CPU yes; single GPU yes; multi GPU yes; distributed: see com.yahoo.ml.CaffeOnSpark. Comments: mostly for image classification; models/layers expressed in proto format.
    • Theano / PyLearn2: core language Python; CPU yes; single GPU yes; multi GPU in progress; distributed no. Comments: transparent use of GPU, auto-diff, general purpose, computation as a DAG.
    • Torch7: core language Lua; CPU yes; single GPU yes; multi GPU yes; distributed: see Twitter’s torch-distlearn. Comments: CTC implementation of Baidu’s Deep Speech on Torch7 is open-sourced; very efficient.
    • TensorFlow: core language C++; bindings Python; CPU yes; single GPU yes; multi GPU up to 4 GPUs; distributed version not open-sourced. Comments: slower than Theano/Torch, TensorBoard useful, computation as a DAG.
    • DL4J: core language Java; CPU yes; single GPU yes; multi GPU most likely; distributed yes. Comments: supports GPUs via CUDA, support for Hadoop/Spark.
    • SystemML: core language Java; bindings Python, Scala; CPU yes; single GPU in progress; multi GPU not yet; distributed yes.
    • Minerva/CXXNet (Smola): core language C++; bindings Python; CPU yes; single GPU yes; multi GPU yes; distributed yes. Comments: https://github.com/dmlc; Minerva ~ Theano and CXXNet ~ Caffe.
65. Thank You!!