Introduction to Tensors
-Jayesh Sukdeo Patil
Introduction
• TensorFlow is a software library, or framework, designed by the Google team to implement machine learning and deep learning concepts in a straightforward manner. It combines the computational algebra of optimization techniques to make it easy to evaluate many mathematical expressions.
• Features of TensorFlow (a minimal example follows the list):
 It defines, optimizes and calculates mathematical expressions easily with the help of multidimensional arrays called tensors.
 It includes programming support for deep neural networks and machine learning techniques.
 It offers highly scalable computation across various data sets.
 TensorFlow supports GPU computing with automated device management, and it optimizes how memory and data are used.
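As a rough illustration (assuming a TensorFlow 2.x installation), the snippet below builds two small tensors and evaluates a couple of mathematical expressions over them; the values are made-up examples:

```python
import tensorflow as tf

a = tf.constant([[1.0, 2.0], [3.0, 4.0]])   # a 2x2 tensor
b = tf.constant([[5.0, 6.0], [7.0, 8.0]])

c = a * b + 2.0          # element-wise multiply, then add a scalar
d = tf.matmul(a, b)      # matrix multiplication

print(c.numpy())
print(d.numpy())
```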
Why is TensorFlow So
Popular?
• TensorFlow is well documented and includes plenty of machine learning libraries, offering important functionality and methods for working with them. TensorFlow is also known as a Google product. It includes a variety of machine learning and deep learning algorithms.
• TensorFlow can train and run deep neural
networks for handwritten digit classification,
image recognition, word embedding and creation
of various sequence models.
Tensor Data
Structure
• Tensors are the basic data structures in TensorFlow. Tensors represent the connecting edges in any flow diagram, called the Data Flow Graph. A tensor is a multidimensional array or list. Tensors are identified by the following three parameters:
 Rank : The unit of dimensionality described within a tensor is called its rank. It identifies the number of dimensions of the tensor; the rank can be described as the order, or number of dimensions, of the tensor.
 Shape : The number of rows and columns together define the shape of a tensor.
 Type : The data type assigned to the tensor's elements.
• A user needs to carry out the following activities to build a tensor (see the sketch below):
 Build an n-dimensional array
 Convert the n-dimensional array into a TensorFlow tensor.
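A minimal sketch of these two activities, assuming TensorFlow 2.x and NumPy: build an n-dimensional array, convert it to a tensor, then inspect its rank, shape and type:

```python
import numpy as np
import tensorflow as tf

# Build an n-dimensional array, then convert it into a TensorFlow tensor.
array = np.array([[1, 2, 3], [4, 5, 6]])
t = tf.convert_to_tensor(array, dtype=tf.float32)

print(tf.rank(t).numpy())   # rank: 2 (number of dimensions)
print(t.shape)              # shape: (2, 3) -> 2 rows, 3 columns
print(t.dtype)              # type: float32
```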
Mathematical
Foundations
• Vector: An array of numbers, either continuous or discrete, is called a vector. Machine learning algorithms deal with fixed-length vectors for better output generation. Because machine learning algorithms deal with multidimensional data, vectors play a crucial role.
• Scalar: A scalar is a single number; it includes only magnitude and no direction. With scalars, we are only concerned with the magnitude. Examples of scalars include the weight and height of children.
• Matrix: A matrix is a two-dimensional array arranged in rows and columns. The size of a matrix is defined by its row length and column length. The following figure shows the representation of a matrix.
• For a matrix with "m" rows and "n" columns, the representation is written as an "m×n matrix", which also defines the size of the matrix.
Mathematical Computations
• Addition of matrices: Two or more matrices can be added if they have the same dimensions. The addition is element-wise, position by position. Consider the example sketched after this list to see how these operations work.
• Subtraction of matrices: Subtraction of matrices operates in a similar fashion to addition. Two matrices can be subtracted provided their dimensions are equal.
• Multiplication of matrices: For two matrices A (m×n) and B (p×q) to be multipliable, n must equal p. The resulting matrix is C (m×q).
• Transpose of a matrix: The transpose of an m×n matrix A, usually written Aᵀ, is the n×m matrix obtained by turning its column vectors into row vectors.
• Dot product of vectors: Any vector of dimension n can be represented as a column matrix v ∈ R^(n×1); the dot product of two such vectors u and v is uᵀv, the sum of the products of their corresponding components.
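The computations above can be reproduced in a short sketch (assuming TensorFlow 2.x; the matrices A, B and vectors v, w are made-up examples):

```python
import tensorflow as tf

A = tf.constant([[1.0, 2.0], [3.0, 4.0]])      # 2x2 matrix
B = tf.constant([[5.0, 6.0], [7.0, 8.0]])      # 2x2 matrix
v = tf.constant([[1.0], [2.0]])                # 2x1 column vector
w = tf.constant([[3.0], [4.0]])                # 2x1 column vector

print(tf.add(A, B).numpy())                    # addition (same dimensions required)
print(tf.subtract(A, B).numpy())               # subtraction
print(tf.matmul(A, B).numpy())                 # multiplication: (m x n)(n x q) -> (m x q)
print(tf.transpose(A).numpy())                 # transpose: m x n -> n x m
print(tf.matmul(tf.transpose(v), w).numpy())   # dot product as v^T w
```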
Convolutional Neural Networks
• Convolutional neural networks are designed to process data through multiple layers of arrays. This type of neural network is used in applications such as image recognition and face recognition. The primary difference between a CNN and an ordinary neural network is that a CNN takes its input as a two-dimensional array and operates directly on the images, rather than relying on the separate feature extraction that other neural networks depend on.
• A convolutional neural network uses three basic ideas:
• Local receptive fields
• Convolution
• Pooling
• A CNN utilizes the spatial correlations that exist within the input data. Each layer connects its neurons to a small region of input neurons; this specific region is called the local receptive field. The hidden neurons process the input data inside this field and ignore changes outside its boundary.
• In this arrangement, each connection learns a weight of the hidden neuron as the receptive field is shifted across the input from one position to the next. This sliding operation is called "convolution".
• The mapping of connections from the input layer to the hidden feature map uses "shared weights", and the bias included is called the "shared bias". Convolutional neural networks also use pooling layers, which are positioned immediately after the convolutional layers. A pooling layer takes the feature map that comes out of the convolutional layer and prepares a condensed feature map, summarizing the neurons of the previous layer. A small CNN built from these ideas is sketched below.
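A minimal sketch of these ideas, assuming TensorFlow 2.x with Keras; the layer sizes and the 28×28 single-channel input shape are illustrative choices, not values from the slides:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    # Convolution: local receptive fields with shared weights and a shared bias.
    layers.Conv2D(32, (3, 3), activation="relu", input_shape=(28, 28, 1)),
    # Pooling: condenses the feature map produced by the convolution.
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),
])
model.summary()
```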
Recurrent Neural
Networks
• A recurrent neural network is a type of deep-learning-oriented algorithm that follows a sequential approach. In ordinary neural networks, each input and output is assumed to be independent of all the others; these networks are called recurrent because they perform their mathematical computations in a sequential manner, carrying information from one step to the next. Consider the following steps to train a recurrent neural network (a minimal sketch follows the list):
• Step 1: Input a specific example from the dataset.
• Step 2: The network takes the example and computes some calculations using randomly initialized variables.
• Step 3: A predicted result is then computed.
• Step 4: Comparing the generated result with the expected value produces an error.
• Step 5: To trace the error, it is propagated back through the same path, and the variables are adjusted.
• Step 6: Steps 1 to 5 are repeated until we are confident that the variables used to produce the output are properly tuned.
• Step 7: A systematic prediction is made by applying these variables to new, unseen input.
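The training steps above can be sketched roughly as follows, assuming TensorFlow 2.x with Keras; the toy data, sequence length and layer sizes are hypothetical:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

# Hypothetical toy data: 100 sequences of length 10 with 1 feature each.
x = np.random.rand(100, 10, 1).astype("float32")
y = np.random.rand(100, 1).astype("float32")

model = models.Sequential([
    layers.SimpleRNN(16, input_shape=(10, 1)),   # recurrent layer (steps 1-2)
    layers.Dense(1),                             # predicted result (step 3)
])

# Steps 4-6: the loss compares prediction and target, the error is
# back-propagated, and the variables are adjusted over repeated epochs.
model.compile(optimizer="adam", loss="mse")
model.fit(x, y, epochs=3, verbose=0)

# Step 7: apply the trained variables to new, unseen input.
prediction = model.predict(x[:1])
```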
TensorBoard
Visualization
• TensorFlow includes a visualization tool called TensorBoard. It is used for analyzing the Data Flow Graph and for understanding machine-learning models.
• An important feature of TensorBoard is its view of different types of statistics about the parameters and details of any graph, laid out vertically. A deep neural network graph can include up to 36,000 nodes.
• TensorBoard helps by collapsing these nodes into high-level blocks and highlighting identical structures. This allows better analysis of the graph, focusing on the primary sections of the computation graph. The TensorBoard visualization is highly interactive: a user can pan, zoom and expand the nodes to display details. A minimal usage sketch follows.
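A minimal sketch, assuming TensorFlow 2.x with Keras, of how a TensorBoard callback can log training statistics; `model`, `x`, `y` and the `logs` directory are placeholders:

```python
import tensorflow as tf

# Log training statistics so TensorBoard can display them.
tensorboard_cb = tf.keras.callbacks.TensorBoard(log_dir="logs")

# model.fit(x, y, epochs=5, callbacks=[tensorboard_cb])
# Then, from the command line:  tensorboard --logdir logs
```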
Keras
• Keras is a compact, easy-to-learn, high-level Python library that runs on top of the TensorFlow framework. It focuses on making deep learning techniques easy to apply, such as creating layers for neural networks while maintaining the concepts of shapes and mathematical details.
• A model can be created with either of the following two APIs:
 Sequential API
 Functional API
• Consider the following eight steps to create a deep learning model in Keras (a minimal end-to-end sketch follows the list):
 Load the data
 Preprocess the loaded data
 Define the model
 Compile the model
 Fit the model
 Evaluate the model
 Make the required predictions
 Save the model
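A rough end-to-end sketch of the eight steps, assuming a recent TensorFlow 2.x release with Keras and the Sequential API; the toy data, layer sizes and file name are illustrative:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# 1-2. Load and preprocess the data (hypothetical toy data here).
x = np.random.rand(200, 8).astype("float32")
y = np.random.randint(0, 2, size=(200, 1))

# 3. Define the model (Sequential API).
model = keras.Sequential([
    layers.Dense(16, activation="relu", input_shape=(8,)),
    layers.Dense(1, activation="sigmoid"),
])

# 4. Compile the model.
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# 5. Fit the model.
model.fit(x, y, epochs=5, batch_size=32, verbose=0)

# 6. Evaluate it.
loss, acc = model.evaluate(x, y, verbose=0)

# 7. Make the required predictions.
preds = model.predict(x[:5])

# 8. Save the model.
model.save("my_model.keras")
```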
• Optimizers are extended classes that include added information used to train a specific model.
• An optimizer class is initialized with the given parameters; note that no tensor is needed at that point.
• Optimizers are used to improve speed and performance when training a specific model.
• The following are some optimizers available in TensorFlow (a brief usage sketch follows the list):
 Stochastic Gradient Descent
 Momentum
 Nesterov momentum
 Adagrad
 Adadelta
 RMSProp
 Adam
 Adamax
 SMORMS3
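A brief sketch, assuming TensorFlow 2.x, of instantiating some of these optimizers from `tf.keras.optimizers`; the learning rates are illustrative, and SMORMS3 is not a built-in Keras optimizer, so it is omitted here:

```python
import tensorflow as tf

# The optimizer class is initialized with its parameters; no tensor is
# needed at this point.
sgd = tf.keras.optimizers.SGD(learning_rate=0.01)
momentum = tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9)
nesterov = tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9, nesterov=True)
adam = tf.keras.optimizers.Adam(learning_rate=0.001)
rmsprop = tf.keras.optimizers.RMSprop(learning_rate=0.001)

# The chosen optimizer is passed to the model when it is compiled:
# model.compile(optimizer=adam, loss="mse")
```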
Recommendations
for Neural Network
Training
• Back propagation is a simple method for computing partial derivatives (gradients). It uses the basic form of function composition, the chain rule, which makes it well suited to neural nets (a small example follows).
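A minimal sketch of automatic back propagation with TensorFlow 2.x's `tf.GradientTape`, applied here to a simple composed expression:

```python
import tensorflow as tf

x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = x * x + 2.0 * x        # a composed expression: y = x^2 + 2x

# Back propagation: the tape applies the chain rule to the composition
# and returns the partial derivative dy/dx (= 2x + 2 = 8 at x = 3).
dy_dx = tape.gradient(y, x)
print(dy_dx.numpy())
```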
Stochastic
Gradient Descent
• In stochastic gradient descent, a batch is the set of examples used to calculate the gradient in a single iteration. So far, it has been assumed that the batch is the entire data set. Working at Google scale is the best illustration of why this does not hold up: data sets often contain billions or even hundreds of billions of examples, so stochastic gradient descent instead computes the gradient from a small batch chosen at each iteration (a minimal sketch follows).
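A rough sketch of mini-batch stochastic gradient descent on a toy linear model, assuming TensorFlow 2.x; the data, batch size and learning rate are made-up values:

```python
import numpy as np
import tensorflow as tf

# Hypothetical data for a linear model y = 3x + 1.
x = np.random.rand(1000, 1).astype("float32")
y = 3.0 * x + 1.0

w = tf.Variable(0.0)
b = tf.Variable(0.0)
optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)

batch_size = 32
for step in range(100):
    idx = np.random.choice(len(x), batch_size)   # one random batch per iteration
    xb, yb = x[idx], y[idx]
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean(tf.square(w * xb + b - yb))
    grads = tape.gradient(loss, [w, b])
    optimizer.apply_gradients(zip(grads, [w, b]))
```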
Learning Rate
Decay
• Adapting the learning rate is one of the most important features of gradient descent optimization, and it is an important part of any TensorFlow implementation (a sketch of a decay schedule follows).
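One way to express this in TensorFlow 2.x with Keras is a learning rate schedule attached to the optimizer; the decay values below are illustrative:

```python
import tensorflow as tf

# Exponential decay: the learning rate shrinks as training progresses.
schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.1,
    decay_steps=1000,
    decay_rate=0.96,
)
optimizer = tf.keras.optimizers.SGD(learning_rate=schedule)
```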
Dropout
• Deep neural nets with a large number of parameters form powerful machine learning systems. However, overfitting is a serious problem in such networks; dropout addresses it by randomly dropping units during training (see the sketch below).
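A minimal sketch of a dropout layer in a Keras model, assuming TensorFlow 2.x; the layer sizes and the 0.5 rate are illustrative choices:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Dense(128, activation="relu", input_shape=(20,)),
    # Dropout randomly zeroes 50% of the activations during training,
    # which reduces overfitting in large networks.
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),
])
```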
Max Pooling
• Max pooling is a sample-based discretization process. The objective is to down-sample an input representation, reducing its dimensionality under the required assumptions by keeping only the maximum value in each window (see the sketch below).
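A small worked example, assuming TensorFlow 2.x, of max pooling a 4×4 input down to 2×2 with a 2×2 window:

```python
import tensorflow as tf

# A 4x4 input down-sampled to 2x2: each output value is the maximum
# of a 2x2 window of the input.
x = tf.constant([[1., 2., 3., 4.],
                 [5., 6., 7., 8.],
                 [9., 10., 11., 12.],
                 [13., 14., 15., 16.]])
x = tf.reshape(x, (1, 4, 4, 1))                    # (batch, height, width, channels)
pooled = tf.nn.max_pool2d(x, ksize=2, strides=2, padding="VALID")
print(tf.reshape(pooled, (2, 2)).numpy())          # [[ 6.  8.] [14. 16.]]
```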
Long Short Term
Memory (LSTM)
• An LSTM controls the decision on which inputs should be taken within the specified neuron (cell). It includes control over deciding what should be computed and what output should be generated (sketched below).
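A minimal sketch, assuming TensorFlow 2.x with Keras, of an LSTM layer inside a model; the sequence length and feature count are hypothetical:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# An LSTM layer decides, via its gates, which inputs to take in,
# what to keep in its internal state, and what to output.
model = models.Sequential([
    layers.LSTM(32, input_shape=(10, 8)),   # sequences of length 10, 8 features
    layers.Dense(1),
])
model.summary()
```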
References:
• https://www.tutorialspoint.com/tensorflow/tensorflow_tutorial.pdf
• https://www.youtube.com/watch?v=tXVNS-V39A0 - Explain
• https://www.youtube.com/watch?v=yjprpOoH5c8&t=78s - Intro
• https://www.youtube.com/watch?v=OmSLter9lyE - Architecture
• https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/basics.ipynb - Colab basics
• https://medium.com/nybles/create-your-first-image-recognition-classifier-using-cnn-keras-and-tensorflow-backend-6eaab98d14dd - Identify