ARTIFICIAL NEURAL NETWORKS
An information-processing system that has certain performance
characteristics in common with biological neural networks.
Information processing occurs at many simple elements called
neurons.
Each connection link has an associated weight which, in a
typical ANN, multiplies the signal transmitted.
Each neuron applies an activation function (usually nonlinear)
to its net input (sum of weighted signals) to determine its output
a = Σᵢ wᵢ xᵢ   (net input: weighted sum of the input signals)
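As a minimal sketch of the idea above (the function names and the bipolar threshold activation are illustrative assumptions, not from the slides), a single neuron computes its net input as a weighted sum and passes it through an activation function:

```python
# Minimal sketch of a single artificial neuron: net input is the weighted
# sum of the input signals; a (usually nonlinear) activation function maps
# that net input to the neuron's output. Names here are illustrative.

def neuron_output(inputs, weights, bias=0.0,
                  activation=lambda a: 1 if a >= 0 else -1):
    # net input: a = b + sum_i x_i * w_i
    a = bias + sum(x * w for x, w in zip(inputs, weights))
    return activation(a)  # bipolar threshold activation in this sketch

print(neuron_output([1, -1, 1], [0.5, 0.2, -0.1]))  # net input 0.2 -> 1
```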
SOME COMMON ANN MODELS
ADALINE (Adaptive Linear Neuron)
MADALINE (Many ADALINE)
The ADALINE (Adaptive Linear Neuron) [ Widrow & Hoff,
1960] typically uses bipolar (1 or -1) activations for its input
signals and its target output. The weights on the
connections from the input unit to the ADALINE are
adjusted. The ADALINE also has a bias, which acts like an
adjustable weight on a connection from a unit whose
activation is always 1.
In general, an ADALINE can be trained using the delta rule,
also known as the Least Mean Squares (LMS) or Widrow-Hoff rule.
THE ADALINE - Architecture
Architecture of an ADALINE
ADALINE - Algorithm
Step 0: Initialize weights
Set Learning rate
Step 1: While Stopping condition is false,
do steps 2-6
Step 2: For each bipolar training pair s:t,
do steps 3-5
Step 3: Set activations of input units: xᵢ = sᵢ, i=1,…,n;
Step 4: Compute net input to output unit;
y_in = b + Σᵢ xᵢ wᵢ
Step 5: Update bias and weights, i=1,…,n;
b(new) = b(old) + α (t − y_in)
wᵢ(new) = wᵢ(old) + α (t − y_in) xᵢ
Step 6: Test for stopping Condition;
If the largest weight change that occurred in Step 2 is smaller than
a specified tolerance, then stop; otherwise continue.
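Steps 0–6 above can be sketched in Python. This is a hedged illustration: the learning rate, tolerance, epoch cap, and the bipolar-AND toy data set are assumptions for the demo, not part of the original slides.

```python
import random

def train_adaline(pairs, alpha=0.02, tol=1e-3, max_epochs=1000):
    """Delta-rule (LMS / Widrow-Hoff) training, following Steps 0-6 above."""
    random.seed(0)                                        # reproducible demo
    n = len(pairs[0][0])
    w = [random.uniform(-0.5, 0.5) for _ in range(n)]     # Step 0: init weights
    b = random.uniform(-0.5, 0.5)                         # bias acts like a weight
    for _ in range(max_epochs):                           # Step 1
        max_change = 0.0
        for s, t in pairs:                                # Steps 2-3: x_i = s_i
            y_in = b + sum(x * wi for x, wi in zip(s, w))  # Step 4
            delta = alpha * (t - y_in)
            for i in range(n):                            # Step 5: update weights
                w[i] += delta * s[i]
            b += delta                                    # ...and the bias
            max_change = max(max_change, abs(delta))
        if max_change < tol:                              # Step 6: tolerance test
            break
    return w, b

# Demo: learn the bipolar AND function (an assumed toy training set).
data = [([1, 1], 1), ([1, -1], -1), ([-1, 1], -1), ([-1, -1], -1)]
w, b = train_adaline(data)
preds = [1 if b + sum(x * wi for x, wi in zip(s, w)) >= 0 else -1
         for s, _ in data]
print(preds)  # classification: [1, -1, -1, -1]
```

Because bipolar AND targets cannot be matched exactly by a linear unit, the per-pattern weight changes level off instead of shrinking below the tolerance, so `max_epochs` caps the loop; the learned weights still classify all four patterns correctly.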
A basic Operating System (MS DOS or
DESIGN AND IMPLEMENTATION
The design of the neural network we call
“Neurotron v1.0” involves five stages.
Implementing the structure
Training the Artificial Neural Network
Getting the input to the network
Processing the data using the ADALINE
Displaying the output.
Implementing the structure
– A single-layer, feed-forward, fully connected
network is designed and implemented using
neuron and network objects.
– It contains 72 (9x8) input neurons and a bias term
– It contains 8 output neurons to represent the
ASCII code of the recognized alphabet in binary.
It contains a total of 73x8 = 584 weighted connections.
Training the ANN
The ANN is trained using the Delta Rule
The initial weights are random numbers
between -0.5 and +0.5
It is currently trained for 70 characters
including 58 ’A’s and one set of ‘B’ to ‘L’.
The input is given as a 9x8 matrix of 1's and -1's (bipolar encoding).
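As an illustration of this input encoding (the glyph below and the '#'/'.' convention are assumptions for the sketch, not the project's actual data), a 9x8 character bitmap can be flattened into the 72-element bipolar input vector the network expects:

```python
# Hypothetical 9x8 glyph for 'A'; '#' marks an "on" cell.
GLYPH_A = [
    "..####..",
    ".#....#.",
    "#......#",
    "#......#",
    "########",
    "#......#",
    "#......#",
    "#......#",
    "#......#",
]

def glyph_to_input(glyph):
    # Flatten row by row: '#' -> 1, anything else -> -1 (bipolar signals).
    return [1 if c == "#" else -1 for row in glyph for c in row]

x = glyph_to_input(GLYPH_A)
print(len(x))  # 9 rows x 8 columns = 72, matching the 72 input neurons
```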
RESULTS AND FUTURE SCOPE
The Neurotron v1.0 is currently trained to identify 70
characters consisting of 58 ‘A’s and one set of
characters ‘B’ to ‘L’.
The network may be further trained to recognize any other
characters by training it with a suitable character set.
Learning capability is limited by the number of
neurons and connections in the system. Training
with very large character sets may result in the
weights not converging, i.e., the net may be unable to
learn the entire set.
The network can be trained for a wide range of other
characters, using an optimal training set.
The number of input and output neurons may be increased since, in
the current system, the weights may not converge for large
training sets. One way to do this is to change how the output is
encoded: instead of producing the ASCII code of the character, the
network may use one output neuron per character, with that neuron
outputting '1' when its character is recognized.
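To make the contrast concrete, here is a small sketch (an illustration of the idea, not code from the project; the 12-character alphabet is an assumed example) of the two output encodings, the current 8-neuron ASCII code versus one neuron per character:

```python
def ascii_target(ch):
    # Current design: 8 bipolar outputs holding the ASCII code, MSB first.
    return [1 if (ord(ch) >> (7 - i)) & 1 else -1 for i in range(8)]

def one_hot_target(ch, alphabet="ABCDEFGHIJKL"):
    # Alternative: one output neuron per character; only the neuron
    # matching the character outputs 1, all others output -1.
    return [1 if c == ch else -1 for c in alphabet]

print(ascii_target("A"))    # ord('A') = 65 = 0b01000001
print(one_hot_target("B"))
```

With one-hot outputs the number of output neurons grows with the character set, but each neuron answers a simpler yes/no question, which can help the weights converge.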
Another way of increasing the power of the neural network is to
add one or more hidden layers and train the resulting network
using the backpropagation algorithm.
The application and the trainer can be integrated to form a
complete, flexible software package.
Image and Audio Processing
Finance and Marketing