3. What is an Artificial Neural Network?
An Artificial Neural Network (ANN) is an information processing
paradigm inspired by the way biological nervous systems, such as
the brain, process information.
It is composed of a large number of highly interconnected processing
elements (neurons) working together to solve specific problems.
It is an attempt to simulate, within specialized hardware or
sophisticated software, the multiple layers of simple processing
elements called neurons.
An ANN is configured for a specific application, such as pattern
recognition or data classification, through a learning process.
4. Research History
• McCulloch and Pitts (1943) are generally recognized as the
designers of the first neural network.
• They combined many simple processing units together that could
lead to an overall increase in computational power.
• They suggested many ideas, such as: a neuron has a threshold
level, and once that level is reached, the neuron fires.
• The McCulloch-Pitts network had a fixed set of weights.
• Hebb (1949) developed the first learning rule: if two neurons are
active at the same time, then the strength of the connection
between them should be increased.
• Minsky & Papert (1969) showed that the perceptron could not learn
functions which are not linearly separable. Later, researchers
such as Parker and LeCun discovered a learning algorithm for
multi-layer networks, called back-propagation, that could solve
problems that were not linearly separable.
5. Biological Neurons
1. Soma or cell body - a large, round central body in which almost
all the logical functions of the neuron are realized.
2. The axon (output) - a nerve fibre attached to the soma which can
serve as a final output channel of the neuron. An axon is usually
highly branched.
3. The dendrites (inputs) - a highly branching tree of fibres.
These long, irregularly shaped nerve fibres (processes) are
attached to the soma.
4. Synapses - specialized contacts on a neuron which are the
termination points for the axons from other neurons.
[Figure: the schematic model of a biological neuron, showing the
soma, axon, dendrites, synapses, and axons from other neurons.]
6. Why neural network?
f(x1, ..., xn) - the unknown multi-factor decision rule.
A learning process using a representative learning set produces
a set of weighting vectors (w0, w1, ..., wn) as its result.
f̂(x1, ..., xn) = P(w0 + w1*x1 + ... + wn*xn) - a partially defined
function, which is an approximation of the decision rule function.
7. A Neuron
f(x1, ..., xn) = φ(w0 + w1*x1 + ... + wn*xn)
f is the function to be learned
x1, ..., xn are the inputs
φ is the activation function
z = w0 + w1*x1 + ... + wn*xn is the weighted sum
[Figure: a neuron taking inputs x1, ..., xn, forming the weighted
sum z, and passing it through the activation φ(z) to produce
f(x1, ..., xn).]
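The formulas above can be sketched in Python; the sigmoid activation and the particular weights and bias below are illustrative assumptions, not values from the slides:

```python
import math

def neuron(inputs, weights, bias):
    """Compute a neuron's output: the activation of the weighted sum z."""
    # z = w0 + w1*x1 + ... + wn*xn
    z = bias + sum(w * x for w, x in zip(weights, inputs))
    # sigmoid used here as an example activation function phi(z)
    return 1.0 / (1.0 + math.exp(-z))

# Example call with illustrative weights
print(neuron([1.0, 0.0], weights=[0.5, -0.3], bias=0.1))
```

With the weighted sum factored out this way, swapping in a different activation function changes the neuron's behavior without touching the summation.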
8. A Neuron
• A neuron's functionality is determined by the nature of its
activation function, its main properties, its plasticity and
flexibility, and its ability to approximate the function to be
learned.
9. When we need a network
• The functionality of a single neuron is limited. For example, the
threshold neuron cannot learn non-linearly separable functions.
• To learn those functions that cannot be learned by a single
neuron, a neural network should be used.
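As an illustration that is not on the slides, a brute-force search over a grid of weights and thresholds for a single threshold neuron finds a linear separation for AND but none for XOR (for XOR, none exists for any weights):

```python
import itertools

def fires(x1, x2, w1, w2, t):
    """Threshold neuron: fires when the weighted input sum exceeds t."""
    return x1 * w1 + x2 * w2 > t

def separable(truth_table, grid):
    """True if some (w1, w2, t) from the grid reproduces the truth table."""
    return any(
        all(fires(x1, x2, w1, w2, t) == out
            for (x1, x2), out in truth_table.items())
        for w1, w2, t in itertools.product(grid, repeat=3)
    )

grid = [i / 2 for i in range(-8, 9)]  # values -4.0 to 4.0 in steps of 0.5
AND = {(0, 0): False, (0, 1): False, (1, 0): False, (1, 1): True}
XOR = {(0, 0): False, (0, 1): True, (1, 0): True, (1, 1): False}
print(separable(AND, grid))  # → True
print(separable(XOR, grid))  # → False
```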
11. Similarities- Artificial Neuron &
Brain Neuron
In the human brain, a typical neuron collects signals from
others through a host of fine structures called dendrites.
The neuron sends out spikes of electrical activity through a long,
thin strand known as an axon, which splits into thousands of
branches.
While in Artificial Neuron……..
12. Similarities- Artificial Neuron &
Brain Neuron
We construct artificial neural networks by first trying to deduce
the essential features of neurons and their interconnections.
We then typically program a computer to simulate these
features.
However because our knowledge of neurons is incomplete and
our computing power is limited, our models are necessarily
gross idealizations of real networks of neurons.
13. Firing Rule
The firing rule is an important concept in neural networks and
accounts for their high flexibility. A firing rule determines how
one calculates whether a neuron should fire for any input
pattern. It relates to all the input patterns, not only the ones
on which the node was trained.
A simple firing rule can be implemented using the Hamming distance
technique. The rule goes as follows:
• Take a collection of training patterns for a node, some
of which cause it to fire (the 1-taught set of patterns) and
others which prevent it from doing so (the 0-taught set).
• Then the patterns not in the collection cause the node to fire
if, on comparison, they have more input elements in common with the
'nearest' pattern in the 1-taught set than with the 'nearest'
pattern in the 0-taught set. If there is a tie, the pattern remains
in the undefined state.
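A minimal sketch of this rule in Python, assuming patterns are encoded as equal-length 0/1 tuples (the training sets below are made up for illustration):

```python
def hamming(a, b):
    """Number of positions at which two equal-length 0/1 patterns differ."""
    return sum(x != y for x, y in zip(a, b))

def firing_rule(pattern, taught_1, taught_0):
    """Return 1 (fire), 0 (don't fire), or None (undefined) for a pattern.

    Taught patterns keep their taught output; untaught patterns are
    compared by Hamming distance to the nearest pattern in each set.
    """
    if pattern in taught_1:
        return 1
    if pattern in taught_0:
        return 0
    d1 = min(hamming(pattern, t) for t in taught_1)
    d0 = min(hamming(pattern, t) for t in taught_0)
    if d1 < d0:
        return 1
    if d0 < d1:
        return 0
    return None  # tie: the pattern remains in the undefined state

# (1, 1, 0) is closer to the 1-taught pattern, so the node fires
print(firing_rule((1, 1, 0), taught_1=[(1, 1, 1)], taught_0=[(0, 0, 0)]))  # → 1
```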
14. Simple Neuron
An artificial neuron is a device with many inputs and one
output.
If the input pattern does not belong in the taught list of
input patterns, the firing rule is used to determine whether
to fire or not.
The neuron has two modes of operation: the training mode and the
using mode. In the training mode, the neuron can be trained to fire
(or not) for particular input patterns. In the using mode, when a
taught input pattern is detected at the input, its associated
output becomes the current output.
15. More Complicated Neuron
A more sophisticated neuron (figure) is the McCulloch and
Pitts model (MCP).
The inputs are 'weighted'; the effect that each input has on
decision making depends on the weight of the particular input.
These weighted inputs are then added together and if they
exceed a pre-set threshold value, the neuron fires. In any
other case the neuron does not fire.
In mathematical terms, the neuron fires if and only if:
X1W1 + X2W2 + X3W3 + ... > T (threshold value)
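A sketch of the MCP rule above in Python; the weights and threshold are illustrative choices (with weights (1, 1) and threshold 1.5 the neuron happens to compute logical AND):

```python
def mcp_fires(inputs, weights, threshold):
    """McCulloch-Pitts neuron: fire (1) iff the weighted sum exceeds the threshold."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return 1 if weighted_sum > threshold else 0

# Truth table for weights (1, 1) and threshold 1.5: logical AND
for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, mcp_fires((x1, x2), (1, 1), 1.5))
```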
16. Weighted:
The weight of an input is a number which when
multiplied with the input gives the weighted input.
17. Architecture
There are two main types of neural network architecture:
• Feed-forward Networks
• Feed-back Networks
18. Feed-forward Networks
Feed-forward ANN’s (figure 1) allow signals to travel one way
only; from input to output.
There is no feedback (loops) i.e. the output of any layer does
not affect that same layer.
Feed-forward ANNs tend to be straightforward networks that
associate inputs with outputs.
19. Feed-back Networks
Feedback networks (figure) can have signals traveling in both
directions by introducing loops in the network.
Feedback networks are very powerful and can get extremely
complicated.
They remain at the equilibrium point until the input changes
and a new equilibrium needs to be found.
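As an illustration that goes beyond the slides, a tiny feedback network of +1/-1 units with symmetric weights can be updated repeatedly until no unit changes, i.e. it settles at an equilibrium point:

```python
def settle(state, weights, max_steps=100):
    """Update each unit from the others until the state stops changing.

    Units take values +1/-1; weights is assumed symmetric with a zero
    diagonal, so sequential updates reach a fixed point (equilibrium).
    """
    state = list(state)
    for _ in range(max_steps):
        changed = False
        for i in range(len(state)):
            total = sum(weights[i][j] * state[j]
                        for j in range(len(state)) if j != i)
            new = 1 if total >= 0 else -1
            if new != state[i]:
                state[i] = new
                changed = True
        if not changed:
            break  # equilibrium reached: output stays fixed until input changes
    return state

# Three mutually reinforcing units: the dissenting unit is pulled into agreement
w = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
print(settle([1, -1, 1], w))  # → [1, 1, 1]
```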
20. Network Layers
The most common type of artificial neural network consists of
three groups, or layers, of units:
• A layer of "input" units is connected to a layer of "hidden"
units, which is connected to a layer of "output" units.
The activity of the input units represents the raw information
that is fed into the network
The activity of each hidden unit is determined by the
activities of the input units and the weights on the
connections between the input and the hidden units.
The behavior of the output units depends on the activity of
the hidden units and the weights between the hidden and
output units.
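The layer-by-layer computation described above can be sketched as follows; the layer sizes, sigmoid activation, and weight values are illustrative assumptions:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weights, biases):
    """Each unit's activity = activation of its weighted sum over the previous layer."""
    return [sigmoid(b + sum(w * x for w, x in zip(ws, inputs)))
            for ws, b in zip(weights, biases)]

def forward(inputs, hidden_w, hidden_b, output_w, output_b):
    """Input units -> hidden units -> output units, as described above."""
    hidden = layer(inputs, hidden_w, hidden_b)
    return layer(hidden, output_w, output_b)

# 2 inputs -> 2 hidden units -> 1 output unit, with made-up weights
out = forward([1.0, 0.0],
              hidden_w=[[0.5, -0.2], [0.3, 0.8]], hidden_b=[0.0, -0.1],
              output_w=[[1.0, -1.0]], output_b=[0.2])
print(out)
```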
21. Threshold Neuron (Perceptrons)
The most influential work on neural nets in the 60's went
under the heading of 'perceptrons' a term coined by Frank
Rosenblatt.
The perceptron (figure 4.4) turns out to be an MCP model (a neuron
with weighted inputs) with some additional, fixed, pre-processing.
Units labeled A1, A2, Aj, Ap are called association units and their
task is to extract specific, localized features from the input
images.
22. Perceptrons
Perceptrons mimic the basic idea behind the mammalian
visual system.
They were mainly used in pattern recognition even though
their capabilities extended a lot more.
In 1969 Minsky and Papert wrote a book in which they
described the limitations of single layer Perceptrons.
The book was very well written and showed mathematically
that single layer perceptrons could not do some basic pattern
recognition operations like determining the parity of a shape
or determining whether a shape is connected or not.
What they did not realize until the 80's is that, given
appropriate training, multilevel perceptrons can do these
operations.
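The slides do not spell out the training procedure, but as a hedged sketch, the classic perceptron error-correction rule (adjust each weight in proportion to the prediction error) lets a single-layer perceptron learn a linearly separable function such as AND:

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Classic perceptron rule: nudge weights and bias by the error on each sample."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred          # -1, 0, or +1
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
for (x1, x2), target in AND:
    print(x1, x2, 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0)
```

Run the same procedure on the XOR truth table and the weights never settle on a correct separation, which is exactly the single-layer limitation Minsky and Papert described.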
23. ADVANTAGE OF ANN
• A neural network can perform tasks that a linear program cannot.
• When an element of the neural network fails, the network can
continue without any problem thanks to its parallel nature.
• A neural network learns and does not need to be reprogrammed.
• It can be applied to a wide range of applications without major
implementation obstacles.
24. DISADVANTAGE OF ANN
• The neural network needs training to
operate.
• The architecture of a neural network is different from the
architecture of microprocessors and therefore needs to be emulated.
• Large neural networks require long processing times.
25. Applications of Artificial Neural
Networks
Artificial intellect with neural networks is applied in areas
such as:
• Intelligent advanced control
• Robotics
• Technical diagnostics
• Machine vision
• Intelligent data analysis and signal processing
• Image & pattern recognition
• Intelligent expert systems
• Intelligent security systems
• Intelligent medicine devices