SEQUENCE-TO-SEQUENCE LEARNING USING DEEP LEARNING FOR OPTICAL CHARACTER RECOGNITION
Advisor: Dr. Devinder Kaur
Presented by: Vishal Vijay Shankar Mishra
AGENDA
• Problem Statement
  - Converting Mathematical Equations into LaTeX Representation
• Approach (Deep Learning Techniques)
  - Convolutional Neural Network (CNN)
  - Recurrent Neural Network (RNN)
  - Long Short-Term Memory (LSTM)
  - Attention Model
• Introduction to CNN
  - Gist of Neural Networks
  - Architecture of CNN
• CNN Layers
  - Convolution Layer
  - Non-Linear Activation Layer (ReLU)
  - Pooling Layer
• Hyper-Parameters
• Introduction to RNN
  - Architecture of RNN
  - Working of RNN
  - RNN Example
• Drawback of RNN
• LSTM
  - Architecture of LSTM
  - Working of LSTM
  - LSTM Example
• Proposed Model
• Results and Future Work
• Conclusion
PROBLEM STATEMENT
• In this thesis, I have implemented sequence-to-sequence learning using deep learning for optical character recognition.
• Images of mathematical equations are converted into their LaTeX representation.
APPROACH (DEEP LEARNING TECHNIQUES)
• To accomplish this research work, I have used the following deep learning techniques:
  - Convolutional Neural Network (CNN)
  - Recurrent Neural Network (RNN)
  - Long Short-Term Memory (LSTM)
  - Attention Model
• The subsequent slides give the gist of each technique.
WHAT IS A DEEP NEURAL NETWORK?
• Deep neural networks are networks with more than two layers between input and output.
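For instance, a minimal sketch in PyTorch (the layer sizes are illustrative assumptions, not the thesis model):

```python
import torch.nn as nn

# A small "deep" network: more than two layers between input and output.
model = nn.Sequential(
    nn.Linear(784, 128), nn.ReLU(),   # hidden layer 1
    nn.Linear(128, 64), nn.ReLU(),    # hidden layer 2
    nn.Linear(64, 10),                # output layer
)
```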
WHY DO WE NEED DEEP NEURAL NETWORKS?
• Neural nets tend to be computationally expensive for data with simple patterns; in such cases a model like logistic regression or an SVM is the better choice.
• As pattern complexity increases, neural nets start to outperform other machine learning methods.
• At the highest levels of pattern complexity – for example, high-resolution images – neural nets with a small number of layers would require a number of nodes that grows exponentially with the number of unique patterns. Even then, the net would likely take excessive time to train, or simply fail to converge.
WHY CONVOLUTIONAL NEURAL NETWORK?
INTRODUCTION TO CNN
• Architecture of CNN
WORKING OF CNN
LAYERS IN CNN
CONVOLUTION LAYER
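A minimal NumPy sketch of what a convolution layer computes: slide a filter over the image and take a weighted sum at every position (the image and filter values below are made up for the example):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D convolution (cross-correlation): a weighted sum of each
    image patch the filter covers."""
    H, W = image.shape
    kH, kW = kernel.shape
    out = np.zeros((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kH, j:j + kW] * kernel)
    return out

# A vertical-edge filter applied to a toy 5x5 image.
image = np.array([[0, 0, 1, 1, 1]] * 5, dtype=float)
edge_filter = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
print(conv2d(image, edge_filter))   # strong responses where intensity changes
```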
NON-LINEAR ACTIVATION LAYER (RELU)
• ReLU is a non-linear activation function, used to apply element-wise non-linearity.
• The ReLU layer applies the function to each element, thresholding at zero: f(x) = max(0, x).
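A one-line sketch of this element-wise thresholding in NumPy:

```python
import numpy as np

def relu(x):
    # Element-wise max(0, x): negative activations are set to zero.
    return np.maximum(0, x)

print(relu(np.array([-2.0, -0.5, 0.0, 1.5, 3.0])))  # [0.  0.  0.  1.5 3. ]
```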
HYPER-PARAMETERS
• Convolution
  - Filter Size
  - Number of Filters
  - Padding
  - Stride
• Pooling
  - Filter Size
  - Stride
• Fully Connected
  - Number of Neurons
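To see how these hyper-parameters interact, recall that the output width of a convolution is (W − F + 2P) / S + 1 for input width W, filter size F, padding P, and stride S. A hedged PyTorch sketch (the specific sizes are illustrative, not the thesis configuration):

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(in_channels=1, out_channels=32,  # number of filters
                 kernel_size=3,                   # filter size F = 3
                 stride=1, padding=1)             # stride S = 1, padding P = 1
pool = nn.MaxPool2d(kernel_size=2, stride=2)      # pooling filter size and stride
fc   = nn.Linear(32 * 16 * 16, 128)               # number of neurons in the FC layer

x = torch.randn(1, 1, 32, 32)     # one 32x32 grayscale image
h = pool(torch.relu(conv(x)))     # conv keeps 32x32 ((32-3+2)/1+1); pool halves to 16x16
y = fc(h.flatten(start_dim=1))    # flatten the feature maps for the FC layer
print(y.shape)                    # torch.Size([1, 128])
```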
INTRODUCTION TO RNN
• RNNs are a type of artificial neural network designed to recognize patterns in sequences of data; they process data sequentially.
• Why can't we accomplish this task with a feed-forward network?
• The drawback of a feed-forward network is that it doesn't remember inputs over time.
• To process data sequentially, we need a network that behaves recurrently.
• Architecture of RNN: an RNN has a loop.
• RNNs are not all that different from ordinary neural networks: an RNN can be thought of as multiple copies of the same network, each passing a message to its successor. An unrolled RNN is shown below.
• In the last few years, there has been incredible success applying RNNs to a variety of problems: speech recognition, language modeling, translation, image captioning... the list goes on.
An Unrolled RNN
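A minimal NumPy sketch of one unrolled step, with the same weights reused at every step (the dimensions are illustrative assumptions):

```python
import numpy as np

def rnn_step(x_t, h_prev, W_xh, W_hh, b_h):
    """One recurrent step: the new hidden state mixes the current input
    with the previous hidden state (the 'loop')."""
    return np.tanh(W_xh @ x_t + W_hh @ h_prev + b_h)

# Illustrative sizes: 4-dim inputs, 3-dim hidden state.
rng = np.random.default_rng(0)
W_xh, W_hh, b_h = rng.normal(size=(3, 4)), rng.normal(size=(3, 3)), np.zeros(3)

h = np.zeros(3)
for x_t in rng.normal(size=(5, 4)):          # a sequence of 5 input vectors
    h = rnn_step(x_t, h, W_xh, W_hh, b_h)    # same weights at every time step
```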
DRAWBACK OF AN RNN
• RNNs have a long-term dependency problem: they don't remember inputs after a certain number of time steps.
• This problem occurs because gradients explode or vanish while performing backpropagation.
• I'll illustrate this problem with an example.
• Consider a language model trying to predict the next word based on the previous ones.
• For example, "the clouds are in the sky": to predict "sky" we don't need any further context. In such cases, where the gap between the relevant information and the place it's needed is small, RNNs can learn to use the past information.
• But there are also cases where we need more context from the input.
• For example, "I grew up in France ... I speak fluent French".
• Unfortunately, as that gap grows, RNNs become unable to learn to connect the information.
• This happens due to the vanishing and exploding gradient problems.
VANISHING GRADIENT
• Backpropagating through many time steps repeatedly multiplies the gradient by factors smaller than one, so it shrinks exponentially and early time steps stop learning.
EXPLODING GRADIENT
• When those factors are larger than one, the gradient grows exponentially and training becomes unstable.
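A toy arithmetic illustration of both effects: backpropagating through T time steps roughly multiplies the gradient by a per-step factor w, T times.

```python
# Repeated multiplication over 100 time steps:
w_small, w_large, T = 0.9, 1.1, 100
print(w_small ** T)   # ~2.7e-05: the gradient vanishes
print(w_large ** T)   # ~1.4e+04: the gradient explodes
```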
HOW TO OVERCOME THESE CHALLENGES?
• For the vanishing gradient we can use:
  - ReLU activation function: the gradient of ReLU is 1 for positive inputs, so it does not shrink the backpropagated signal.
  - LSTMs, GRUs: network architectures specially designed to combat this problem.
• For the exploding gradient we can use:
  - Gradient clipping: clip the gradient when it goes above a threshold, as sketched below.
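A hedged sketch of threshold clipping in PyTorch (the tiny model and the threshold of 1.0 are illustrative choices, not the thesis setup):

```python
import torch
import torch.nn as nn

# Tiny illustrative model and data.
model = nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(8, 10), torch.randn(8, 1)

loss = nn.functional.mse_loss(model(x), y)
optimizer.zero_grad()
loss.backward()
# Rescale all gradients so their global L2 norm is at most the threshold.
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
optimizer.step()
```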
LONG SHORT-TERM MEMORY (LSTM)
• Long Short-Term Memory networks, usually just called "LSTMs", are a special kind of RNN.
• They are capable of learning long-term dependencies.
ARCHITECTURE OF LSTM
• LSTMs differ from plain RNNs in that they maintain a cell state, which is what lets them handle long-term dependencies.
WORKING OF LSTM
• Step 1: The first step in the LSTM is to decide which information is no longer required and will be thrown away from the cell state. This decision is made by a neural network layer with a sigmoid activation function, called the forget gate layer:

$f_t = \sigma(W_f \cdot [h_{t-1}, x_t] + b_f)$

where $\sigma$ is the sigmoid function and
• $W_f$ = weight matrix
• $h_{t-1}$ = output from the previous time step
• $x_t$ = new input
• $b_f$ = bias
WORKING OF LSTM
• Step 2: The next step is to decide what new information we're going to store in the cell state. This comprises two parts: a neural network layer with a sigmoid activation, called the "input gate layer", decides which values will be updated, and a layer with a tanh activation creates a vector of new candidate values, $\tilde{C}_t$, that could be added to the state.
• In the next step, we'll combine these two to update the state.

$i_t = \sigma(W_i \cdot [h_{t-1}, x_t] + b_i)$
$\tilde{C}_t = \tanh(W_C \cdot [h_{t-1}, x_t] + b_C)$
WORKING OF LSTM
• Step 3: Now we update the old cell state $C_{t-1}$ into the new cell state $C_t$. First, we multiply the old state by $f_t$, forgetting the things we decided to forget earlier. Then we add $i_t * \tilde{C}_t$: the new candidate values, scaled by how much we decided to update each state value.

$C_t = f_t * C_{t-1} + i_t * \tilde{C}_t$
WORKING OF LSTM
• Step 4: We run a sigmoid layer that decides what parts of the cell state we're going to output. Then we put the cell state through tanh (pushing the values to be between -1 and 1) and multiply it by the output of the sigmoid gate, so that we only output the parts we decided to.

$o_t = \sigma(W_o \cdot [h_{t-1}, x_t] + b_o)$
$h_t = o_t * \tanh(C_t)$
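Putting the four steps together, a minimal NumPy sketch of one LSTM cell update (the weight shapes and sizes are illustrative assumptions, not the thesis configuration):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM update following Steps 1-4 above; each W[g] maps the
    concatenated [h_prev, x_t] vector to one gate."""
    z = np.concatenate([h_prev, x_t])
    f = sigmoid(W["f"] @ z + b["f"])        # Step 1: forget gate f_t
    i = sigmoid(W["i"] @ z + b["i"])        # Step 2: input gate i_t
    c_cand = np.tanh(W["c"] @ z + b["c"])   # Step 2: candidate values C~_t
    c = f * c_prev + i * c_cand             # Step 3: new cell state C_t
    o = sigmoid(W["o"] @ z + b["o"])        # Step 4: output gate o_t
    h = o * np.tanh(c)                      # Step 4: new hidden state h_t
    return h, c

# Illustrative sizes: 4-dim inputs, 3-dim hidden and cell states.
rng = np.random.default_rng(0)
W = {g: rng.normal(size=(3, 7)) for g in "fico"}  # 7 = 3 (hidden) + 4 (input)
b = {g: np.zeros(3) for g in "fico"}
h, c = np.zeros(3), np.zeros(3)
for x_t in rng.normal(size=(5, 4)):               # a sequence of 5 inputs
    h, c = lstm_step(x_t, h, c, W, b)
```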
PROPOSED LSTM VARIANT WITH PEEPHOLE CONNECTIONS
• A simple but very powerful extension to the conventional LSTM unit is to introduce weighted "peephole" connections from the cell state ($C_{t-1}$) to all the gates in the same memory unit. Peephole connections allow every gate to inspect the cell state even when the output gate is closed.
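In the notation above, each gate gains a term that reads the cell state. A sketch following the standard peephole formulation of Gers, Schraudolph, and Schmidhuber (the diagonal peephole weights $V_f$, $V_i$, $V_o$ are assumptions of this sketch; note the output gate peeks at the new state $C_t$):

$f_t = \sigma(W_f \cdot [h_{t-1}, x_t] + V_f \odot C_{t-1} + b_f)$
$i_t = \sigma(W_i \cdot [h_{t-1}, x_t] + V_i \odot C_{t-1} + b_i)$
$o_t = \sigma(W_o \cdot [h_{t-1}, x_t] + V_o \odot C_t + b_o)$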
STOCHASTIC "HARD" ATTENTION MODEL
• With an attention mechanism, the image is first divided into $n$ parts, and a CNN computes a representation $y_1, y_2, \ldots, y_n$ of each part. When the LSTM is generating a new word, the attention mechanism focuses on the relevant part of the image, so the decoder only uses specific parts of the image.
• In a stochastic "hard" attention mechanism, rather than using all the hidden states $y_t$ as input for decoding, the process computes the probability of each hidden state with respect to a location variable $s_t$ and samples one part accordingly. The gradients are obtained by reinforcement learning.
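A minimal sketch of the sampling step (the part representations and scores are placeholders; in the full model the scores come from the decoder state, and training uses a REINFORCE-style gradient estimator):

```python
import numpy as np

def hard_attention_step(parts, scores, rng):
    """Sample one image part to attend to: softmax the scores into
    probabilities, then draw the location variable s_t."""
    p = np.exp(scores - scores.max())
    p /= p.sum()                          # softmax over the n parts
    s_t = rng.choice(len(parts), p=p)     # sample the location variable
    return parts[s_t], s_t

rng = np.random.default_rng(0)
parts = rng.normal(size=(6, 128))   # n = 6 CNN part representations y_1..y_6
scores = rng.normal(size=6)         # placeholder relevance scores
context, s_t = hard_attention_step(parts, scores, rng)
print(s_t, context.shape)           # chosen part index and its feature vector
```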
PROPOSED MODEL
• Original Image
• Predicted LaTeX
• Rendered Predicted Image
Actual test results on the test set:
RESULTS
• The proposed method is compared with two previous methods, INFTY and WYGIWYS, on the basis of the BLEU (Bilingual Evaluation Understudy) metric and Exact Match. BLEU is a metric that evaluates the quality of the predicted LaTeX markup representation of the image. Exact Match is the percentage of images classified correctly.
• The proposed method scores better than the previous methods: it generated results close to 76%, the highest in this research area. Previously, the highest result was around 75%, achieved by the WYGIWYS (What You Get Is What You See) model. The BLEU and Exact Match scores of the proposed model are only slightly above the existing model's; however, this is a significant achievement considering the low GPU resources and small dataset.

Model            Preprocessing   BLEU    Exact Match
INFTY            -               51.20   15.60
WYGIWYS          Tokenize        73.71   74.46
Proposed Model   Tokenize        75.08   75.87
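For reference, BLEU can be computed with NLTK; a hedged sketch on toy LaTeX token sequences (not data from the thesis):

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# Toy reference / hypothesis token sequences, differing in the last token.
reference = [r"\frac { a } { b } + c".split()]
hypothesis = r"\frac { a } { b } + d".split()

smooth = SmoothingFunction().method1   # avoids zero scores on short sequences
score = sentence_bleu(reference, hypothesis, smoothing_function=smooth)
print(f"BLEU: {score:.2f}")
```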
Actual test results on the test set:
FUTURE WORK
• This research can be scaled from images of printed mathematical formulas to images of handwritten mathematical formulas. To recognize handwritten formulas, one could implement a bidirectional LSTM with a CNN.
• This model could be used to solve mathematical questions based on formulas.
• An API (Application Programming Interface) could be created to solve mathematical problems.
• Questions?
• Thank you!
SPEAKER NOTES
1. In addition, these types of networks don't take into account the relationship between space and the pixels in an image. Within images, nearby pixels are much more correlated than those further apart, and fully connected networks ignore this. Using this understanding of spatial relationships, we delete some connections.
2. So instead of a fully connected layer, the units in the hidden layer are now connected only to nearby pixels in the input layer.
3. Instead of weights, we now call these values filters.
4. Point 5: it is obvious that the next word is going to be "sky"; in such a case, RNNs can learn to use the past information.
5. Truncated BPTT; RMSprop to adjust the learning rate.