Slide 2
Disclaimer
If any content in this presentation is yours but is not correctly referenced, or if it should be removed, please let me know and I will correct it.
Slide 3
Several kinds of architectures
1) Convolutional Neural Network (image classification)
2) AutoEncoder (unsupervised representation)
3) Recurrent Neural Network (sequences, NLP)
4) Probabilistic models: DBN, RBM (speech recognition, music genre classification)
5) Learning the distribution of the data (GAN, VAE)
Slide 4
Naive classification
Task: classify images of dogs and cats
1) Pictures are represented with d variables (pixels)
2) Decision is based on the neighbors
3) Separate the samples
[Mallat 2014]
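For illustration, a minimal sketch of this naive approach, assuming scikit-learn (the data below is a random stand-in, not an actual cats/dogs dataset): nearest-neighbor classification on raw pixel vectors.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# X: (n_images, d) flattened pixel vectors; y: 0 = cat, 1 = dog
rng = np.random.default_rng(0)
X = rng.random((100, 64 * 64))        # stand-in for real images
y = rng.integers(0, 2, size=100)

clf = KNeighborsClassifier(n_neighbors=5).fit(X, y)
print(clf.predict(X[:3]))             # decision based on the neighbors
```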
Slide 9
Deep representation: origins
• Theorem (Cybenko, 1989). A neural network with a single hidden layer is a universal "approximator": it can represent any continuous function on compact subsets of R^n. Two layers are enough… but the hidden layer size may be exponential.
• Theorem (Håstad, 1986; Bengio et al., 2007). Functions representable compactly with k layers may require exponential size with k-1 layers.
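To make the first theorem concrete, here is a minimal sketch of my own (numpy; not from the slides): a single hidden layer of tanh units, trained by plain gradient descent, approximating a continuous function on a compact interval.

```python
# A single-hidden-layer network approximating sin on [-pi, pi],
# illustrating the universal approximation theorem.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)  # compact subset of R
y = np.sin(x)                                        # continuous target

h = 50                                               # hidden layer size
W1 = rng.normal(size=(1, h)); b1 = rng.normal(size=h)
W2 = rng.normal(size=(h, 1)) * 0.1; b2 = np.zeros(1)

lr = 0.05
for _ in range(5000):
    a = np.tanh(x @ W1 + b1)          # hidden activations (sigmoidal)
    pred = a @ W2 + b2
    err = pred - y
    # backpropagation through the two layers
    gW2 = a.T @ err / len(x); gb2 = err.mean(0)
    da = err @ W2.T * (1 - a**2)
    gW1 = x.T @ da / len(x); gb1 = da.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print("max |f(x) - sin(x)| =", np.abs(pred - y).max())
```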
Slide 11
Deep representation by CNN
A cell is related to a subpart of the field of vision.
Two main kinds of cells:
1) S cells: extract the features
2) C cells: combine the features
Slide 14
Deep representation by CNN
Yann LeCun, [LeCun et al., 1998]
1. Subpart of the field of vision and translation invariance
2. S cells: convolution with filters
3. C cells: max pooling
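A minimal sketch of such an architecture, assuming PyTorch (layer sizes follow the classic LeNet-5 of LeCun et al., 1998; the code itself is not from the slides):

```python
import torch.nn as nn

class LeNet5(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5),   # S cells: convolution with filters
            nn.Tanh(),
            nn.MaxPool2d(2),                  # C cells: pooling
            nn.Conv2d(6, 16, kernel_size=5),
            nn.Tanh(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120), nn.Tanh(),
            nn.Linear(120, 84), nn.Tanh(),
            nn.Linear(84, n_classes),
        )

    def forward(self, x):                     # x: (batch, 1, 32, 32)
        return self.classifier(self.features(x))
```

The alternation of convolutions (S cells) and pooling (C cells) is what builds a translation-invariant deep representation.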
Slide 20
Quiz time!
Question: which parameters are learned in a neural network?
A) The weights on the connections?
B) The weights inside the neurons?
C) The pixels of the images?
D) The number of outputs of the network?
Answer quickly by tweeting to @TechConfQuiz
Slide 24
Combining the content of one image with the style of an artwork using a CNN
L. A. Gatys, A. S. Ecker, and M. Bethge, "Image Style Transfer Using Convolutional Neural Networks", Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016
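At the heart of the method is a style loss that matches feature correlations (Gram matrices) between CNN feature maps; a minimal sketch, assuming PyTorch (not the authors' code):

```python
import torch

def gram_matrix(feat):
    # feat: (channels, height, width) feature map from one CNN layer
    c, h, w = feat.shape
    f = feat.reshape(c, h * w)
    return f @ f.T / (c * h * w)   # channel-to-channel correlations

def style_loss(gen_feat, style_feat):
    # match feature correlations, not the features themselves
    return torch.mean((gram_matrix(gen_feat) - gram_matrix(style_feat)) ** 2)
```

The generated image is then optimized to minimize this style loss together with a content loss on deeper-layer features.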
Slide 35
Amazing, but… adversarial examples
- Rethink the generalization abilities of deep networks
- Adversarial examples are not only a threat
- Better understand the learning abilities of deep networks
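For context, a minimal sketch of one standard attack, the fast gradient sign method (FGSM) of Goodfellow et al., assuming PyTorch; the slides do not say which attack is used:

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.1):
    # x: input batch, y: true labels; returns an adversarial version of x
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # one step in the direction that increases the loss the most
    return (x + eps * x.grad.sign()).detach()
```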
Slide 39
Is it even possible?
'Real' error bounds are not tight.
The nature of your data matters.
Slide 40
Active learning: how to pick queries
- Uncertainty selection is irrelevant for CNNs (adversarial examples)
- Query examples close to the decision boundary
Margin-based theory (not tractable)
Slide 41
Adversarial Active Query (AAQ)
- Agnostic adversarial attacks
- AAQ: query the label of the sample with the smallest adversarial perturbation
- SAAQ: also query the adversarial version
Slide 42
Experiments
Test accuracy given the number of labelled examples
[Two plots: left, 0-1000 labelled examples, accuracy 0.2-1.0, curves random, aaq, bald, ceal, egl, uncertainty, saaq; right, 200-1000 labelled examples, accuracy 0.90-0.98, curves random, aaq, saaq]
MNIST, 10 gray-scale digit classes; a LeNet-5 CNN reaches 99.04% on the test set with |train| = 60,000 images.
2,870 queries are enough.
Slide 44
Experiments
Test accuracy given the number of labelled examples
[Plot: 200-1000 labelled examples, accuracy 0.70-1.00; curves random, aaq, saaq, aaq_transfer, uncertainty]
Slide 45
Conclusion
Impressive results
Wide range of applications
But:
- Security issues (adversarial examples)
- Ethics (recovering the training set)
- Understanding the learning mechanism better
Curious about adversarial examples?
http://www.telecom-valley.fr/wp-content/uploads/2017/05/DEBARD.pdf