Neural Networks
By: Eng. Ismail El-Gayar
Under the supervision of Prof. Dr. Sheren Youssef
Introduction
• Understanding the Brain
• Neural Networks as a Paradigm for Parallel Processing
• The Perceptron Network
• Training a Perceptron
• Multilayer Perceptrons
• Backpropagation Algorithm
  - Two-Class Discrimination
  - Multiclass Discrimination
  - Multiple Hidden Layers
• Training Procedures
  - Improving Convergence
  - Momentum
  - Adaptive Learning Rate
• Learning Time
  - Time Delay Neural Networks
  - Recurrent Networks
Massive parallelism
The brain, as an information- or signal-processing system, is composed of a large number of simple processing elements called neurons. These neurons are interconnected by numerous direct links called connections, and they cooperate with each other to perform parallel distributed processing (PDP) in order to solve desired computational tasks.
Connectionism
The brain is a highly interconnected system of neurons, in such a way that the state of one neuron affects the potential of a large number of other neurons to which it is connected according to connection weights or strengths. The key idea of this principle is that the functional capacity of biological neural nets is determined mostly not by single neurons but by their connections.
Associative distributed memory
Storage of information in the brain is thought to be concentrated in the synaptic connections of the brain's neural network, or, more precisely, in the pattern of these connections and in the strengths (weights) of the synaptic connections.
A process of pattern recognition and pattern manipulation is based on: how does our brain manipulate patterns?
Processing:-
The human brain contains a massively interconnected net of 10^10-10^11 (on the order of 10 billion) neurons.
The Biological Neuron:-
[Schematic model of a biological neuron: soma, dendrites, axon, and synapses; dendrites and synapses receive signals from the axons of other neurons.]
1. Soma, or cell body - a large, round central body in which almost all the logical functions of the neuron are realized.
2. Axon (output) - a nerve fibre attached to the soma which serves as the final output channel of the neuron. An axon is usually highly branched.
3. Dendrites (inputs) - a highly branching tree of fibres. These long, irregularly shaped nerve fibres (processes) are attached to the soma.
4. Synapses - specialized contacts on a neuron which are the termination points for the axons of other neurons.
Brain-Like Computer
A brain-like computer is a mathematical model of human-brain principles of computation. This computer consists of elements that can be called biological neuron prototypes, which are interconnected by direct links called connections and which cooperate to perform parallel distributed processing (PDP) in order to solve a desired computational task.
Neurons and Neural Net
The new paradigm of computing mathematics consists of the combination of such artificial neurons into an artificial neural net.
Artificial Neural Network - a Mathematical Paradigm of the Brain-Like Computer
NN as a model of a brain-like computer
• An artificial neural network (ANN) is a massively parallel distributed processor that has a natural propensity for storing experiential knowledge and making it available for use. This means that:
  - Knowledge is acquired by the network through a learning (training) process;
  - The strength of the interconnections between neurons is implemented by means of synaptic weights, which are used to store the knowledge.
The learning process is a procedure of adapting the weights with a learning algorithm in order to capture the knowledge. More mathematically, the aim of the learning process is to map a given relation between the inputs and output(s) of the network.
Brain
The human brain is still not well understood, and indeed its behavior is very complex!
There are about 10 billion neurons in the human cortex and 60 trillion synaptic connections.
The brain is a highly complex, nonlinear and parallel computer (information-processing system).
ANN as a Brain-Like Computer
Applications of Artificial Neural Networks
Artificial intellect with neural networks:
• Intelligent Control
• Technical Diagnostics
• Intelligent Data Analysis and Signal Processing
• Advanced Robotics
• Machine Vision
• Image & Pattern Recognition
• Intelligent Security Systems
• Intelligent Medical Devices
• Intelligent Expert Systems
Artificial Neural Networks
Perceptrons
• Multiple input nodes
• Single output node
• Takes a weighted sum of the inputs; call this S
• A unit function calculates the output for the network
Useful to study because:
• We can use perceptrons to build larger networks
• Perceptrons have limited representational abilities; we will look at concepts they can't learn later
Why neural network?
f(x_1, ..., x_n) - the unknown multi-factor decision rule
Learning process using a representative learning set:
(w_0, w_1, ..., w_n) - a set of weighting vectors is the result of the learning process
f^(x_1, ..., x_n) = P(w_0 + w_1 x_1 + ... + w_n x_n) - a partially defined function which is an approximation of the decision-rule function
Artificial Neuron
f is the function to be learned
x_1, ..., x_n are the inputs
φ is the activation function
z = w_0 + w_1 x_1 + ... + w_n x_n is the weighted sum
The neuron computes:
f(x_1, ..., x_n) = φ(w_0 + w_1 x_1 + ... + w_n x_n)
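As a minimal sketch in Python (the weights, inputs and activation below are made-up examples, not values from the slides):

```python
def neuron(xs, weights, phi):
    """Artificial neuron: apply the activation phi to the weighted sum
    z = w0 + w1*x1 + ... + wn*xn (weights[0] is the bias term w0)."""
    z = weights[0] + sum(w * x for w, x in zip(weights[1:], xs))
    return phi(z)

# With the identity activation, the neuron just returns the weighted sum:
z = neuron([1.0, -1.0], [0.5, 0.2, 0.3], lambda z: z)
# z = 0.5 + 0.2*1.0 + 0.3*(-1.0) = 0.4
```

Swapping in a different phi (step, sigmoid, ...) changes the unit's behavior without touching the weighted-sum part; the next slides list the common choices.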
Perceptrons:-
Output: computed using the hard-limit (hardlims) function, which outputs +1 or -1.
Simple Example:
Categorising Vehicles
Input to function: pixel data from vehicle images
Output: numbers: 1 for a car; 2 for a bus; 3 for a tank
[Four example vehicle images, categorised as: OUTPUT = 3 (tank), OUTPUT = 2 (bus), OUTPUT = 1 (car), OUTPUT = 1 (car)]
General Idea
[Network diagram: numeric input values (e.g. 1.1, 2.7, 3.0, -1.3) enter the INPUT LAYER, propagate through the HIDDEN LAYERS, and produce output values (e.g. 2.1, -1.2, 1.1) at the OUTPUT LAYER, one per category: Cat A, Cat B, Cat C.]
Values propagate through the network; each output value is calculated using all the input unit values.
The chosen category is the one with the largest output value (here, Cat A).
Calculation Example:-
Categorisation of 2x2 pixel black & white images into "bright" and "dark"
Representation of this rule:
• If the image contains 2, 3 or 4 white pixels, it is "bright"
• If it contains 0 or 1 white pixels, it is "dark"
Perceptron architecture:
• Four input units, one for each pixel
• One output unit: +1 for "bright", -1 for "dark"
Calculation Example:-
Example calculation: x1 = -1, x2 = 1, x3 = 1, x4 = -1 (white pixels encoded as +1, black as -1), with all four weights equal to 0.25 and a threshold of -0.1:
S = 0.25*(-1) + 0.25*(1) + 0.25*(1) + 0.25*(-1) = 0
Since 0 > -0.1, the output from the ANN is +1
So the image is categorised as "bright"
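The same calculation as a sketch in Python (the function name is ours, for illustration; the weights and threshold are the slide's):

```python
def perceptron_output(xs, weights, threshold):
    """Hard-limit unit: +1 if the weighted sum S exceeds the threshold,
    otherwise -1."""
    S = sum(w * x for w, x in zip(weights, xs))
    return 1 if S > threshold else -1

# Pixels encoded as +1 (white) / -1 (black), all four weights 0.25,
# threshold -0.1, exactly as in the example above:
out = perceptron_output([-1, 1, 1, -1], [0.25] * 4, -0.1)  # S = 0 > -0.1 -> +1
```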
Unit Functions
Linear functions:
• Simply output the weighted sum
Threshold functions:
• Output low values until the weighted sum gets over a threshold, then output high values
• Equivalent of the "firing" of neurons
Step function:
• Output +1 if S > threshold T
• Output -1 otherwise
Sigma function:
• Similar to the step function, but differentiable
[Plots: step function and sigma function]
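The two threshold-style unit functions can be sketched directly (function names are ours, for illustration):

```python
import math

def step(S, T=0.0):
    """Step (threshold) unit: output +1 if S > T, else -1 -- the 'firing'."""
    return 1 if S > T else -1

def sigma(S):
    """Sigma (sigmoid) function: a smooth, differentiable stand-in for the
    step function, squashing S into the interval (0, 1)."""
    return 1.0 / (1.0 + math.exp(-S))
```

step is what the worked examples below use; sigma is what gradient-based training (backpropagation) needs, since it has a derivative everywhere.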
Learning In Perceptron
Learning Process of ANN
• Learn from experience
• Learning algorithms
• Recognize patterns of activity
Involves 3 tasks:
1. Compute outputs
2. Compare outputs with desired targets
3. Adjust the weights and repeat the process
[Flowchart: compute output → is the desired output achieved? If yes, stop; if no, adjust the weights and repeat.]
Training a Perceptron:-
Weight update rule: Δw_i = η (t - o) x_i, where
η -> learning rate
t -> target output
o -> actual output
x -> input
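The update rule can be sketched in Python (the function name is ours, for illustration):

```python
def update_weights(weights, xs, target, output, eta=0.1):
    """Perceptron training rule: w_i' = w_i + eta * (t - o) * x_i.
    xs[0] is the constant bias input, fixed at +1."""
    return [w + eta * (target - output) * x for w, x in zip(weights, xs)]
```

Each weight moves in the direction that reduces the discrepancy between the target t and the output o; when t = o, nothing changes.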
Worked Example
Return to the "bright" and "dark" example
Use a learning rate of η = 0.1
Suppose we have set random weights (bias weight first):
w0 = -0.5, w1 = 0.7, w2 = -0.2, w3 = 0.1, w4 = 0.9
Worked Example
Use this training example, E, to update weights:
Here, x1 = -1, x2 = 1, x3 = 1, x4 = -1 as before
Propagate this information through the network:
 S = (-0.5 * 1) + (0.7 * -1) + (-0.2 * +1) + (0.1 * +1) + (0.9 * -1) = -2.2
Hence the network outputs o(E) = -1
But this should have been “bright”=+1
So t(E) = +1
Calculating the Error Values
Δ0 = η(t(E)-o(E))x0
= 0.1 * (1 - (-1)) * (1) = 0.1 * (2) = 0.2
Δ1 = η(t(E)-o(E))x1
= 0.1 * (1 - (-1)) * (-1) = 0.1 * (-2) = -0.2
Δ2 = η(t(E)-o(E))x2
= 0.1 * (1 - (-1)) * (1) = 0.1 * (2) = 0.2
Δ3 = η(t(E)-o(E))x3
= 0.1 * (1 - (-1)) * (1) = 0.1 * (2) = 0.2
Δ4 = η(t(E)-o(E))x4
= 0.1 * (1 - (-1)) * (-1) = 0.1 * (-2) = -0.2
Calculating the New Weights
w’0 = -0.5 + Δ0 = -0.5 + 0.2 = -0.3
w’1 = 0.7 + Δ1 = 0.7 + -0.2 = 0.5
w’2 = -0.2 + Δ2 = -0.2 + 0.2 = 0
w’3= 0.1 + Δ3 = 0.1 + 0.2 = 0.3
w’4 = 0.9 + Δ4 = 0.9 - 0.2 = 0.7
New Look Perceptron
Calculate for the example, E, again:
 S = (-0.3 * 1) + (0.5 * -1) + (0 * +1) + (0.3 * +1) + (0.7 * -1) = -1.2
The example still gets the wrong categorisation
But the value is closer to zero (from -2.2 to -1.2)
In a few epochs' time, this example will be correctly categorised
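Replaying the worked example end to end in Python (helper names are ours; the numbers are the slides'):

```python
def weighted_sum(weights, xs):
    # S = w0*1 + w1*x1 + ... + w4*x4 (xs[0] is the bias input +1)
    return sum(w * x for w, x in zip(weights, xs))

def train_step(weights, xs, target, eta=0.1, threshold=0.0):
    # Compute the output, then apply the perceptron update rule.
    o = 1 if weighted_sum(weights, xs) > threshold else -1
    return [w + eta * (target - o) * x for w, x in zip(weights, xs)]

xs = [1, -1, 1, 1, -1]                # bias input +1, then x1..x4
w = [-0.5, 0.7, -0.2, 0.1, 0.9]
print(round(weighted_sum(w, xs), 1))  # -2.2, so output -1 (wrong: should be +1)
w = train_step(w, xs, target=1)
print([round(wi, 1) for wi in w])     # [-0.3, 0.5, 0.0, 0.3, 0.7]
print(round(weighted_sum(w, xs), 1))  # -1.2: still wrong, but closer to zero
```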
Time delay neural network (TDNN):-
A TDNN is an alternative neural network architecture whose primary purpose is to work on continuous data.
The advantage of this architecture is that it adapts the network online, which is helpful in many real-time applications, like time-series prediction, online spell checking, continuous speech recognition, etc.
The architecture has a continuous input that is delayed and sent as an input to the neural network.
As an example, consider a feed-forward neural network being trained for time-series prediction. The desired output of the network is the present value of the time series, and the inputs to the neural network are the delayed time series (past values). Hence, the output of the neural network is the predicted next value in the time series, computed as a function of its past values.
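The delayed-input layout described above amounts to a windowing step over the series; a sketch (the function name is ours, for illustration):

```python
def make_delayed_samples(series, n_delays):
    """Turn a time series into (delayed past values -> present value)
    training pairs, the TDNN input/target layout described above."""
    samples = []
    for t in range(n_delays, len(series)):
        past = series[t - n_delays:t]      # delayed inputs x[t-n] .. x[t-1]
        samples.append((past, series[t]))  # target is the present value
    return samples

pairs = make_delayed_samples([1, 2, 3, 4, 5], n_delays=2)
# [([1, 2], 3), ([2, 3], 4), ([3, 4], 5)]
```

Each pair can then be fed to an ordinary feed-forward network, which is what makes the scheme easy to train online.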
TYPES OF ANN:-
feed-forward and feedback

4.1 Feed-forward networks
Feed-forward ANNs allow signals to travel one way only: from input to output. There is no feedback (no loops), i.e. the output of any layer does not affect that same layer. Feed-forward ANNs tend to be straightforward networks that associate inputs with outputs. They are extensively used in pattern recognition.
4.2 Feedback networks
Feedback networks can have signals travelling in both directions by introducing loops in the network. Feedback networks are very powerful and can get extremely complicated. Feedback networks are dynamic: their state changes continuously until they reach an equilibrium point.
Some Topologies of ANN:-
• Fully-connected feed-forward network
• Partially recurrent network
• Fully recurrent network
Recurrent Neural Networks:-
A recurrent neural network is a class of neural network where connections between units form a directed cycle. This creates an internal state of the network, which allows it to exhibit dynamic temporal behavior.
[Diagrams: partially recurrent network and fully recurrent network]
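A minimal sketch of the internal-state idea for a single recurrent unit (the weights here are illustrative assumptions, not from the slides):

```python
import math

def rnn_step(x, h_prev, w_in=0.8, w_rec=0.5):
    """One step of a one-unit recurrent network: the new hidden state
    depends on the current input AND the previous state (the directed
    cycle), which is what gives the network an internal state."""
    return math.tanh(w_in * x + w_rec * h_prev)

# Feed a single non-zero input, then zeros: the state decays but persists,
# i.e. the network "remembers" earlier inputs for a while.
h = 0.0
states = []
for x in [1.0, 0.0, 0.0]:
    h = rnn_step(x, h)
    states.append(h)
```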
Neural Networks

Contenu connexe

Tendances

Supervised Learning
Supervised LearningSupervised Learning
Supervised Learning
butest
 
Artificial Neural Network | Deep Neural Network Explained | Artificial Neural...
Artificial Neural Network | Deep Neural Network Explained | Artificial Neural...Artificial Neural Network | Deep Neural Network Explained | Artificial Neural...
Artificial Neural Network | Deep Neural Network Explained | Artificial Neural...
Simplilearn
 
Neural networks...
Neural networks...Neural networks...
Neural networks...
Molly Chugh
 

Tendances (20)

Artificial nueral network slideshare
Artificial nueral network slideshareArtificial nueral network slideshare
Artificial nueral network slideshare
 
Artificial neural network
Artificial neural networkArtificial neural network
Artificial neural network
 
Neural network final NWU 4.3 Graphics Course
Neural network final NWU 4.3 Graphics CourseNeural network final NWU 4.3 Graphics Course
Neural network final NWU 4.3 Graphics Course
 
Introduction Of Artificial neural network
Introduction Of Artificial neural networkIntroduction Of Artificial neural network
Introduction Of Artificial neural network
 
Artificial Intelligence: Artificial Neural Networks
Artificial Intelligence: Artificial Neural NetworksArtificial Intelligence: Artificial Neural Networks
Artificial Intelligence: Artificial Neural Networks
 
Artificial Neural Network
Artificial Neural NetworkArtificial Neural Network
Artificial Neural Network
 
Neural network & its applications
Neural network & its applications Neural network & its applications
Neural network & its applications
 
Neural networks.ppt
Neural networks.pptNeural networks.ppt
Neural networks.ppt
 
Intro to Neural Networks
Intro to Neural NetworksIntro to Neural Networks
Intro to Neural Networks
 
Adaptive Resonance Theory
Adaptive Resonance TheoryAdaptive Resonance Theory
Adaptive Resonance Theory
 
Supervised Learning
Supervised LearningSupervised Learning
Supervised Learning
 
Artificial Neural Network
Artificial Neural NetworkArtificial Neural Network
Artificial Neural Network
 
backpropagation in neural networks
backpropagation in neural networksbackpropagation in neural networks
backpropagation in neural networks
 
Artificial Neural Network | Deep Neural Network Explained | Artificial Neural...
Artificial Neural Network | Deep Neural Network Explained | Artificial Neural...Artificial Neural Network | Deep Neural Network Explained | Artificial Neural...
Artificial Neural Network | Deep Neural Network Explained | Artificial Neural...
 
Artificial neural networks
Artificial neural networksArtificial neural networks
Artificial neural networks
 
Introduction to Neural Networks
Introduction to Neural NetworksIntroduction to Neural Networks
Introduction to Neural Networks
 
Deep Learning - Convolutional Neural Networks
Deep Learning - Convolutional Neural NetworksDeep Learning - Convolutional Neural Networks
Deep Learning - Convolutional Neural Networks
 
Artificial Neural Network
Artificial Neural NetworkArtificial Neural Network
Artificial Neural Network
 
Neural networks...
Neural networks...Neural networks...
Neural networks...
 
Perceptron
PerceptronPerceptron
Perceptron
 

En vedette

Artificial neural network
Artificial neural networkArtificial neural network
Artificial neural network
DEEPASHRI HK
 
Dorso-Lateral Geniculate Nucleus and Parallel Processing
Dorso-Lateral Geniculate Nucleus and Parallel ProcessingDorso-Lateral Geniculate Nucleus and Parallel Processing
Dorso-Lateral Geniculate Nucleus and Parallel Processing
GauriSShrestha
 
Neural Networks
Neural Networks Neural Networks
Neural Networks
Eric Su
 
Visual perception 1
Visual perception 1Visual perception 1
Visual perception 1
cece2012
 
Information processing approach
Information processing approachInformation processing approach
Information processing approach
aj9ajeet
 
Neural networks
Neural networksNeural networks
Neural networks
Slideshare
 

En vedette (20)

neural network
neural networkneural network
neural network
 
Artificial neural network
Artificial neural networkArtificial neural network
Artificial neural network
 
Feature detection - Image Processing
Feature detection - Image ProcessingFeature detection - Image Processing
Feature detection - Image Processing
 
Artificial Neural Networks applications in Computer Aided Diagnosis. System d...
Artificial Neural Networks applications in Computer Aided Diagnosis. System d...Artificial Neural Networks applications in Computer Aided Diagnosis. System d...
Artificial Neural Networks applications in Computer Aided Diagnosis. System d...
 
Dorso-Lateral Geniculate Nucleus and Parallel Processing
Dorso-Lateral Geniculate Nucleus and Parallel ProcessingDorso-Lateral Geniculate Nucleus and Parallel Processing
Dorso-Lateral Geniculate Nucleus and Parallel Processing
 
Neural networks and deep learning
Neural networks and deep learningNeural networks and deep learning
Neural networks and deep learning
 
ARTIFICIAL INTELLIGENCE & NEURAL NETWORKS
ARTIFICIAL INTELLIGENCE & NEURAL NETWORKSARTIFICIAL INTELLIGENCE & NEURAL NETWORKS
ARTIFICIAL INTELLIGENCE & NEURAL NETWORKS
 
Why computer engineering
Why computer engineeringWhy computer engineering
Why computer engineering
 
Neural Networks
Neural Networks Neural Networks
Neural Networks
 
Final presentation engineering as a career version1.1
Final presentation engineering as a career version1.1Final presentation engineering as a career version1.1
Final presentation engineering as a career version1.1
 
Introductory Psychology: Sensation & Perception (Vision)
Introductory Psychology: Sensation & Perception (Vision)Introductory Psychology: Sensation & Perception (Vision)
Introductory Psychology: Sensation & Perception (Vision)
 
Visual perception 1
Visual perception 1Visual perception 1
Visual perception 1
 
Career opportunities for engineering students
Career opportunities for engineering studentsCareer opportunities for engineering students
Career opportunities for engineering students
 
Information processing approach
Information processing approachInformation processing approach
Information processing approach
 
Eye powerpoint
Eye powerpointEye powerpoint
Eye powerpoint
 
Vision ppt
Vision pptVision ppt
Vision ppt
 
Colour vision
Colour visionColour vision
Colour vision
 
Color Vision
Color VisionColor Vision
Color Vision
 
Web page concept final ppt
Web page concept  final pptWeb page concept  final ppt
Web page concept final ppt
 
Neural networks
Neural networksNeural networks
Neural networks
 

Similaire à Neural Networks

Neural Networks Ver1
Neural  Networks  Ver1Neural  Networks  Ver1
Neural Networks Ver1
ncct
 
NeuralProcessingofGeneralPurposeApproximatePrograms
NeuralProcessingofGeneralPurposeApproximateProgramsNeuralProcessingofGeneralPurposeApproximatePrograms
NeuralProcessingofGeneralPurposeApproximatePrograms
Mohid Nabil
 
Soft Computing-173101
Soft Computing-173101Soft Computing-173101
Soft Computing-173101
AMIT KUMAR
 

Similaire à Neural Networks (20)

Neural Networks Ver1
Neural  Networks  Ver1Neural  Networks  Ver1
Neural Networks Ver1
 
Artificial Neural Networks ppt.pptx for final sem cse
Artificial Neural Networks  ppt.pptx for final sem cseArtificial Neural Networks  ppt.pptx for final sem cse
Artificial Neural Networks ppt.pptx for final sem cse
 
Artificial Neural networks
Artificial Neural networksArtificial Neural networks
Artificial Neural networks
 
Artificial neural networks
Artificial neural networks Artificial neural networks
Artificial neural networks
 
ANN - UNIT 1.pptx
ANN - UNIT 1.pptxANN - UNIT 1.pptx
ANN - UNIT 1.pptx
 
Neural Network
Neural NetworkNeural Network
Neural Network
 
Neural network
Neural networkNeural network
Neural network
 
19_Learning.ppt
19_Learning.ppt19_Learning.ppt
19_Learning.ppt
 
Neural networks and deep learning
Neural networks and deep learningNeural networks and deep learning
Neural networks and deep learning
 
ANN.ppt
ANN.pptANN.ppt
ANN.ppt
 
Neural-Networks.ppt
Neural-Networks.pptNeural-Networks.ppt
Neural-Networks.ppt
 
NeuralProcessingofGeneralPurposeApproximatePrograms
NeuralProcessingofGeneralPurposeApproximateProgramsNeuralProcessingofGeneralPurposeApproximatePrograms
NeuralProcessingofGeneralPurposeApproximatePrograms
 
Towards neuralprocessingofgeneralpurposeapproximateprograms
Towards neuralprocessingofgeneralpurposeapproximateprogramsTowards neuralprocessingofgeneralpurposeapproximateprograms
Towards neuralprocessingofgeneralpurposeapproximateprograms
 
Soft Computing-173101
Soft Computing-173101Soft Computing-173101
Soft Computing-173101
 
BACKPROPOGATION ALGO.pdfLECTURE NOTES WITH SOLVED EXAMPLE AND FEED FORWARD NE...
BACKPROPOGATION ALGO.pdfLECTURE NOTES WITH SOLVED EXAMPLE AND FEED FORWARD NE...BACKPROPOGATION ALGO.pdfLECTURE NOTES WITH SOLVED EXAMPLE AND FEED FORWARD NE...
BACKPROPOGATION ALGO.pdfLECTURE NOTES WITH SOLVED EXAMPLE AND FEED FORWARD NE...
 
Neural networks of artificial intelligence
Neural networks of artificial  intelligenceNeural networks of artificial  intelligence
Neural networks of artificial intelligence
 
Basics of Artificial Neural Network
Basics of Artificial Neural Network Basics of Artificial Neural Network
Basics of Artificial Neural Network
 
Acem neuralnetworks
Acem neuralnetworksAcem neuralnetworks
Acem neuralnetworks
 
Extracted pages from Neural Fuzzy Systems.docx
Extracted pages from Neural Fuzzy Systems.docxExtracted pages from Neural Fuzzy Systems.docx
Extracted pages from Neural Fuzzy Systems.docx
 
Nn devs
Nn devsNn devs
Nn devs
 

Plus de Ismail El Gayar (6)

What is ETL?
What is ETL?What is ETL?
What is ETL?
 
Geographic Information System for Egyptian Railway System(GIS)
Geographic Information System for Egyptian Railway System(GIS)Geographic Information System for Egyptian Railway System(GIS)
Geographic Information System for Egyptian Railway System(GIS)
 
System science documentation
System science documentationSystem science documentation
System science documentation
 
Prolog & lisp
Prolog & lispProlog & lisp
Prolog & lisp
 
Parallel architecture &programming
Parallel architecture &programmingParallel architecture &programming
Parallel architecture &programming
 
Object oriented methodology & unified modeling language
Object oriented methodology & unified modeling languageObject oriented methodology & unified modeling language
Object oriented methodology & unified modeling language
 

Dernier

Artificial Intelligence: Facts and Myths
Artificial Intelligence: Facts and MythsArtificial Intelligence: Facts and Myths
Artificial Intelligence: Facts and Myths
Joaquim Jorge
 

Dernier (20)

Repurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost Saving
Repurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost SavingRepurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost Saving
Repurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost Saving
 
Powerful Google developer tools for immediate impact! (2023-24 C)
Powerful Google developer tools for immediate impact! (2023-24 C)Powerful Google developer tools for immediate impact! (2023-24 C)
Powerful Google developer tools for immediate impact! (2023-24 C)
 
Workshop - Best of Both Worlds_ Combine KG and Vector search for enhanced R...
Workshop - Best of Both Worlds_ Combine  KG and Vector search for  enhanced R...Workshop - Best of Both Worlds_ Combine  KG and Vector search for  enhanced R...
Workshop - Best of Both Worlds_ Combine KG and Vector search for enhanced R...
 
TrustArc Webinar - Unlock the Power of AI-Driven Data Discovery
TrustArc Webinar - Unlock the Power of AI-Driven Data DiscoveryTrustArc Webinar - Unlock the Power of AI-Driven Data Discovery
TrustArc Webinar - Unlock the Power of AI-Driven Data Discovery
 
Artificial Intelligence: Facts and Myths
Artificial Intelligence: Facts and MythsArtificial Intelligence: Facts and Myths
Artificial Intelligence: Facts and Myths
 
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
 
Manulife - Insurer Innovation Award 2024
Manulife - Insurer Innovation Award 2024Manulife - Insurer Innovation Award 2024
Manulife - Insurer Innovation Award 2024
 
A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?
 
Artificial Intelligence Chap.5 : Uncertainty
Artificial Intelligence Chap.5 : UncertaintyArtificial Intelligence Chap.5 : Uncertainty
Artificial Intelligence Chap.5 : Uncertainty
 
Boost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivityBoost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivity
 
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
 
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
 
HTML Injection Attacks: Impact and Mitigation Strategies
HTML Injection Attacks: Impact and Mitigation StrategiesHTML Injection Attacks: Impact and Mitigation Strategies
HTML Injection Attacks: Impact and Mitigation Strategies
 
presentation ICT roal in 21st century education
presentation ICT roal in 21st century educationpresentation ICT roal in 21st century education
presentation ICT roal in 21st century education
 
The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024
 
Polkadot JAM Slides - Token2049 - By Dr. Gavin Wood
Polkadot JAM Slides - Token2049 - By Dr. Gavin WoodPolkadot JAM Slides - Token2049 - By Dr. Gavin Wood
Polkadot JAM Slides - Token2049 - By Dr. Gavin Wood
 
A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)
 
Strategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
Strategize a Smooth Tenant-to-tenant Migration and Copilot TakeoffStrategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
Strategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
 
Automating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps ScriptAutomating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps Script
 
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemkeProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
 

Neural Networks

  • 1. BY: Eng.Ismail El-Gayar Under Supervision Of Prof.Dr. Sheren Youssef
  • 2. Introduction Understanding the Brain Neural Networks as a Paradigm for Parallel Processing • The Perceptron Network • Training a Perceptron • Multilayer Perceptrons • Backpropagation Algorithm Two-Class Discrimination Multiclass Discrimination Multiple Hidden Layers • Training Procedures Improving Convergence Momentum Adaptive Learning Rate • Learning Time Time Delay Neural Networks Recurrent Networks
  • 3. Massive parallelism Brain computer as an information or signal processing system, is composed of a large number of a simple processing elements, called neurons. These neurons are interconnected by numerous direct links, which are called connection, and cooperate which other to perform a parallel distributed processing (PDP) in order to soft a desired computation tasks. Connectionism Brain computer is a highly interconnected neurons system in such a way that the state of one neuron affects the potential of the large number of other neurons which are connected according to weights or strength. The key idea of such principle is the functional capacity of biological neural nets determs mostly not so of a single neuron but of its connections Associative distributed memory Storage of information in a brain is supposed to be concentrated in synaptic connections of brain neural network, or more precisely, in the pattern of these connections and strengths (weights) of the synaptic connections. A process of pattern recognition and pattern manipulation is based on: How our brain manipulates with patterns ? Processing:- 3 Human brain contains a massively interconnected net of 1010 -1011 (10 billion) neurons
  • 4. The Biological Neuron:- The schematic model of a biological neuron Synapses Dendrites Soma Axon Dendrit e from other Axon from other neuron 1. Soma or body cell - is a large, round central body in which almost all the logical functions of the neuron are realized. 2. The axon (output), is a nerve fibre attached to the soma which can serve as a final output channel of the neuron. An axon is usually highly branched. 3. The dendrites (inputs)- represent a highly branching tree of fibres. These long irregularly shaped nerve fibres (processes) are attached to the soma. 4. Synapses are specialized contacts on a neuron which are the termination points for the axons from other neurons.
  • 5.
  • 6. ? Brain-Like Computer Brain-like computer – is a mathematical model of humane-brain principles of computations. This computer consists of those elements which can be called the biological neuron prototypes, which are interconnected by direct links called connections and which cooperate to perform parallel distributed processing (PDP) in order to solve a desired computational task. Neurons and Neural Net The new paradigm of computing mathematics consists of the combination of such artificial neurons into some artificial neuron net. Artificial Neural Network – Mathematical Paradigms of Brain-Like Computer Brain-like Computer
  • 7. NN as an model of brain- like Computer  An artificial neural network (ANN) is a massively parallel distributed processor that has a natural propensity for storing experimental knowledge and making it available for use. It means that: Knowledge is acquired by the network through a learning (training) process;  The strength of the interconnections between neurons is implemented by means of the synaptic weights used to store the knowledge. The learning process is a procedure of the adapting the weights with a learning algorithm in order to capture the knowledge. On more mathematically, the aim of the learning process is to map a given relation between inputs and output (outputs) of the network. Brain The human brain is still not well understood and indeed its behavior is very complex! There are about 10 billion neurons in the human cortex and 60 trillion synapses of connections The brain is a highly complex, nonlinear and parallel computer (information- processing system) ANN as a Brain-Like Computer 7
  • 8. 8 Artificial Intellect with Neural Networks Intelligent Control Intelligent Control Technical Diagnistics Technical Diagnistics Intelligent Data Analysis and Signal Processing Intelligent Data Analysis and Signal Processing Advance Robotics Advance Robotics Machine Vision Machine Vision Image & Pattern Recognition Image & Pattern Recognition Intelligent Security Systems Intelligent Security Systems Intelligentl Medicine Devices Intelligentl Medicine Devices Intelligent Expert Systems Intelligent Expert Systems Applications of Artificial Neural Networks 8
  • 10. Perceptrons Multiple input nodes Single output node Takes a weighted sum of the inputs, call this S Unit function calculates the output for the network Useful to study because We can use perceptrons to build larger networks Perceptrons have limited representational abilities We will look at concepts they can’t learn later
  • 11. 1( ,..., )nf x x 0 1( , ,..., )nw w w - unknown multi-factor decision rule Learning process using a representative learning set - a set of weighting vectors is the result of the learning process 1 0 1 1 ˆ( ,..., ) ( ... ) n n n f x x P w w x w x = = + + + - a partially defined function, which is an approximation of the decision rule function 11 Why neural network?
  • 12. Artificial Neuron f is a function to be earned are the inputs φ is the activation function 1x nx 1( ,..., )nxf x. . . φ(z) 0 1 1 ... n nz w w x w x= + + + 1,..., nx x Z is the weighted sum 1 0 1 1( ,..., ) ( ... )n n nf x x F w w x w x= + + +
  • 14. Simple Example: Categorising Vehicles  Input to function: pixel data from vehicle images. Output: numbers: 1 for a car, 2 for a bus, 3 for a tank. (Four example input images are shown, with outputs 3, 2, 1 and 1.)
  • 15. General Idea  Input numbers are fed into the units of the INPUT LAYER, values propagate through the HIDDEN LAYERS (each unit's value is calculated using all the unit values in the previous layer), and the OUTPUT LAYER produces one output number per category (Cat A, Cat B, Cat C). Choose the category with the largest output value.
  • 16. Calculation Example:- Categorisation of 2x2-pixel black & white images into “bright” and “dark”. Representation of this rule: if the image contains 2, 3 or 4 white pixels, it is “bright”; if it contains 0 or 1 white pixels, it is “dark”. Perceptron architecture: four input units, one for each pixel; one output unit: +1 for “bright”, -1 for “dark”.
  • 17. Calculation Example:- Example calculation: x1=-1, x2=1, x3=1, x4=-1 S = 0.25*(-1) + 0.25*(1) + 0.25*(1) + 0.25*(-1) = 0 0 > -0.1, so the output from the ANN is +1 So the image is categorised as “bright”
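  This calculation can be sketched directly in Python (the weights 0.25 and threshold -0.1 are the slides' values; the function name is mine):

```python
def classify(pixels, weights=(0.25, 0.25, 0.25, 0.25), threshold=-0.1):
    # Weighted sum of the four pixel values (+1 = white, -1 = black)
    s = sum(w * x for w, x in zip(weights, pixels))
    return 1 if s > threshold else -1   # +1 = "bright", -1 = "dark"

print(classify([-1, 1, 1, -1]))  # S = 0 and 0 > -0.1, so "bright": prints 1
```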
  • 18. Unit Functions Linear Functions Simply output the weighted sum Threshold Functions Output low values  Until the weighted sum gets over a threshold  Then output high values  Equivalent of “firing” of neurons Step function: Output +1 if S > Threshold T Output –1 otherwise Sigma function: Similar to step function but differentiable Step Function Sigma Function
  • 20. Learning Process of ANN  Learn from experience: learning algorithms recognize patterns of activities. Learning involves 3 tasks: compute the outputs; compare the outputs with the desired targets; adjust the weights and repeat the process. Stop once the desired output is achieved.
  • 21. Training a Perceptron:- η -> learning rate; t -> target output; o -> actual output; x -> input. Each weight is updated by Δi = η(t - o)xi.
  • 22. Worked Example Return to the “bright” and “dark” example Use a learning rate of η = 0.1 Suppose we have set random weights:
  • 23. Worked Example Use this training example, E, to update weights: Here, x1 = -1, x2 = 1, x3 = 1, x4 = -1 as before Propagate this information through the network:  S = (-0.5 * 1) + (0.7 * -1) + (-0.2 * +1) + (0.1 * +1) + (0.9 * -1) = -2.2 Hence the network outputs o(E) = -1 But this should have been “bright”=+1 So t(E) = +1
  • 24. Calculating the Error Values Δ0 = η(t(E)-o(E))x0 = 0.1 * (1 - (-1)) * (1) = 0.1 * (2) = 0.2 Δ1 = η(t(E)-o(E))x1 = 0.1 * (1 - (-1)) * (-1) = 0.1 * (-2) = -0.2 Δ2 = η(t(E)-o(E))x2 = 0.1 * (1 - (-1)) * (1) = 0.1 * (2) = 0.2 Δ3 = η(t(E)-o(E))x3 = 0.1 * (1 - (-1)) * (1) = 0.1 * (2) = 0.2 Δ4 = η(t(E)-o(E))x4 = 0.1 * (1 - (-1)) * (-1) = 0.1 * (-2) = -0.2
  • 25. Calculating the New Weights w’0 = -0.5 + Δ0 = -0.5 + 0.2 = -0.3 w’1 = 0.7 + Δ1 = 0.7 + -0.2 = 0.5 w’2 = -0.2 + Δ2 = -0.2 + 0.2 = 0 w’3= 0.1 + Δ3 = 0.1 + 0.2 = 0.3 w’4 = 0.9 + Δ4 = 0.9 - 0.2 = 0.7
  • 26. New Look Perceptron Calculate for the example, E, again:  S = (-0.3 * 1) + (0.5 * -1) + (0 * +1) + (0.3 * +1) + (0.7 * -1) = -1.2 Still gets the wrong categorisation But the value is closer to zero (from -2.2 to -1.2) In a few epochs time, this example will be correctly categorised
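  The whole worked example can be sketched as one training step (the weights, inputs and η = 0.1 are the slides' values; the function names are mine):

```python
ETA = 0.1  # learning rate from the worked example

def output(weights, x):
    # Weighted sum over the inputs (x[0] is the constant bias input 1)
    s = sum(w * xi for w, xi in zip(weights, x))
    return 1 if s > 0 else -1

def train_step(weights, x, target):
    # Perceptron rule: w_i <- w_i + eta * (t(E) - o(E)) * x_i
    o = output(weights, x)
    return [w + ETA * (target - o) * xi for w, xi in zip(weights, x)]

w = [-0.5, 0.7, -0.2, 0.1, 0.9]   # initial random weights from the slides
x = [1, -1, 1, 1, -1]             # bias input 1, then x1..x4
w = train_step(w, x, target=1)    # t(E) = +1 ("bright"), o(E) was -1
print([round(wi, 10) for wi in w])  # [-0.3, 0.5, 0.0, 0.3, 0.7]
```

Running the updated weights on the same example gives S = -1.2: still the wrong category, but closer to zero than the original -2.2, exactly as the slide says.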
  • 33. Time delay neural network (TDNN):- an alternative neural network architecture whose primary purpose is to work on continuous data. The advantage of this architecture is that it can adapt the network online, which is helpful in many real-time applications, like time-series prediction, online spell checking, continuous speech recognition, etc. The architecture has a continuous input that is delayed and sent as an input to the neural network. As an example, consider a feed-forward neural network trained for time-series prediction: the desired output of the network is the present value of the time series, and the inputs are the delayed (past) values. Hence, the output of the neural network is the predicted next value in the time series, computed as a function of the past values of the time series.
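  A sketch of how such delayed inputs could be built into training pairs (the window size and series values are made up for illustration):

```python
def make_training_pairs(series, delay):
    # Each pair: (the `delay` most recent past values, the present value to predict)
    return [(series[t - delay:t], series[t]) for t in range(delay, len(series))]

series = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5]
pairs = make_training_pairs(series, delay=3)
print(pairs[0])  # ([0.0, 0.1, 0.2], 0.3)
```

Each pair would then be fed to a feed-forward network: the delayed values as inputs, the present value as the target.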
  • 35. TYPES OF ANN:- feed-forward and feedback. 4.1 Feed-forward networks: feed-forward ANNs allow signals to travel one way only, from input to output. There is no feedback (no loops), i.e. the output of any layer does not affect that same layer. Feed-forward ANNs tend to be straightforward networks that associate inputs with outputs. They are extensively used in pattern recognition. 4.2 Feedback networks: feedback networks can have signals travelling in both directions by introducing loops in the network. Feedback networks are very powerful and can get extremely complicated. Feedback networks are dynamic; their state keeps changing until an equilibrium is reached.
  • 36. Some Topologies of ANN:- Fully-connected feed-forward Partially recurrent network Fully recurrent network
  • 37. Recurrent Neural Networks:- a recurrent neural network is a class of neural network where connections between units form a directed cycle. This creates an internal state in the network which allows it to exhibit dynamic temporal behavior. Two topologies: partially recurrent network and fully recurrent network.
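  A minimal single-unit sketch of that internal state (the weights 0.8 and 0.5 and the tanh activation are illustrative choices, not from the slides):

```python
import math

def rnn_step(x, h_prev, w_in=0.8, w_rec=0.5):
    # The new state depends on the current input AND the previous state;
    # this feedback is the "directed cycle" that gives the network memory
    return math.tanh(w_in * x + w_rec * h_prev)

h = 0.0
for x in [1.0, 0.0, 0.0]:   # only the first input is non-zero...
    h = rnn_step(x, h)
print(h)                    # ...yet the state is still non-zero two steps later
```

A feed-forward network, by contrast, would output exactly zero for the last two inputs, since it has no state carrying the first input forward.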
  • 38. References:- [1] Simon Colton - www.doc.ic.ac.uk/~sgc/teaching/v231/ [2] http://www.doc.ic.ac.uk/~nd/surprise_96/journal/vol4/cs11/report.html [3] http://www.willamette.edu/~gorr/classes/cs449/intro.html [4] http://www.scribd.com/doc/12774663/Neural-Network-Presentation [5] http://www.speech.sri.com/people/anand/771/html/node32.html [6] http://en.wikipedia.org/wiki/Recurrent_neural_network