Neural Networks And Fuzzy Control

ABSTRACT
A neural network is essentially a self-adjusting network whose output converges to
the desired output; once the network is ‘trained’, only the input data are provided
to the network, which then ‘recalls’ the response it ‘learned’ during training. To add
flexibility to the definition of the system, to incorporate vague inputs, to describe general
boundaries for the system, and hence to provide better control of the system, fuzzy logic
is implemented alongside the neural networks.
The various methodologies that are required to adapt the synaptic weights, the different
learning algorithms used to implement parallel distributed processing and robust
computation and the major applications of neuro-fuzzy systems have been discussed in
brief in the paper. The paper also brings forth the shortcomings of the various algorithms
and techniques that are currently used in numerous applications, simultaneously
proposing other methods for more efficient control. In addition, the paper demonstrates
some fuzzy parameters and principles in a neural network which adds user flexibility and
robustness to the system.

KEYWORDS
Neural network, modeling, learning, learning algorithm, training, fuzzy logic.

1. INTRODUCTION
An ANN is an information processing paradigm that is inspired by the way biological
nervous systems, such as the brain, process information.
It works along two directions. Firstly, it deals with understanding the anatomy and
functioning of the human brain in order to define models that adhere to its physical
behavior. Secondly, it aims at computing on the basis of the parallel and distributed
structure of the brain. [7.] This is implemented by a process of self-adjusting the weights,
or inter-connection strengths, known as learning. There exist robust learning algorithms
that guarantee convergence in the presence of uncertainties in the inputs. This
imprecision in the input values is handled with the help of fuzzy logic, which usually
uses IF-THEN rules or equivalent constructs. The neural network automatically generates
and updates the fuzzy logic governing rules and membership functions, while the fuzzy
logic infers and provides a crisp, or defuzzified, output when fuzzy parameters exist.
Thus neural networks and fuzzy control work hand in hand.

2. BIOLOGICAL NEURAL NETWORKS
The biological neural network is an interlinked structure of billions of neurons, each of
which is linked to thousands of other neurons, either fully or partially connected. Neurons
are complex cells that respond to electrochemical signals. They consist of a nucleus, a
cell body, numerous dendritic links which provide “interconnection strengths” from other
neurons through synapses, and an axon trunk that carries an action potential output to
other neurons through synapses. There are two types of connections: excitatory and
inhibitory. If the action potential value exceeds its threshold value, the neuron fires and
thus either excites or inhibits other neurons. [1.]
3. NEED OF NEURAL NETWORKS
Conventional computers use an algorithmic approach to solve a problem. The
problem-solving capability of these computers is thus restricted to only what they
already know. Therefore, in the development of intelligent machines for application
areas such as information processing, control applications and communication networks,
this approach proves to be a major hurdle. [7.] Moreover, computer power, in terms of
storage capacity and speed, is continuously increasing, for which microprocessors that
incorporate reduced instruction set computer architectures are used. In order to meet
the growing demand for this exponential increase in performance/cost, more potent
machines are required that can "think". And to create this "brain-like machine", theories
need to be devised that can explain "intelligence". [6.]

4. NETWORK ARCHITECTURES
Architecture defines the network structure, that is, the number of artificial neurons in the
network and their inter-connectivity. Network architectures are categorized into:
(i) Single layer and multilayer network
A network with a single output layer and no intermediate hidden layers is known as
single layer network while a network with one or more intermediate hidden layers in
between the input and output layers is termed as multi layer network.

(ii) Feed forward and Feed back network.
A feed-forward network allows signals to travel one way only, from input to output. Such
networks tend to be straightforward; there is no feedback (loops), and hence the output of
any layer does not affect that same layer.
A feed-back network consists of a set of processing units, the output of each unit being
fed as input to all other units, including the same unit.
(iii) Fully connected and Partially connected network.
The neural network is said to be fully connected if every node in each layer of the
network is connected to every other node in the adjacent forward layer. If, however, some
of the communication links are missing from the network, the network is said to be
partially connected.
5. MODELING AND LEARNING
Neural networks process information by parallel decomposition of complex information
into basic elements. Relationships can thus be established and stored and, as in the brain,
the network can use them later for updating or to achieve a certain desired response.
Modeling of the network is done to match its problem-solving ways with those of the
brain, and can be viewed as our attempt to approximate nature's creation.
CONCEPT OF MODELING
In artificial neural networks, modeling is achieved fundamentally by artificial neurons.
A neuron has a set of ‘n’ inputs, each weighted by a connection strength or weight
factor; a ‘bias’ term, i.e. a threshold value (which, when exceeded, makes the neuron
fire); a non-linearity that acts on the activation signal produced; and an output response.
The purpose of the non-linear function is to ensure that the neuron’s response is bounded,
i.e., the neuron’s actual response is conditioned or damped. The non-linearity functions
commonly used in networks are:
(i) Linear or ramp function
It is linear between its lower and upper limits and saturates at those limits.
(ii) Threshold function
Also termed the hard-limiter, it outputs one of two values depending on whether the
activation exceeds the threshold.
(iii) Sigmoid function
It is the most popular function used and is monotonic and differentiable.
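As a minimal sketch (the function names and limit values below are our own, chosen for illustration), the three non-linearities can be written as:

```python
import math

def ramp(x, lower=-1.0, upper=1.0):
    """Linear between the limits, saturating outside them."""
    return max(lower, min(upper, x))

def threshold(x, theta=0.0):
    """Hard-limiter: fires (1) once the activation reaches the threshold."""
    return 1.0 if x >= theta else 0.0

def sigmoid(x):
    """Monotonic and differentiable, squashing any activation into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))
```

The sigmoid's differentiability is what later makes gradient-based rules such as back-propagation applicable.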

5.1 LEARNING
The neural network resembles the human brain in that it acquires knowledge through
learning and stores this knowledge within the synaptic weights. Learning is the process
of determining the weights by adapting to an input stimulus and producing a desired
response. When the actual output response is the same as the desired one, the network is
said to have “acquired knowledge”.
On the basis of learning method used,
neural networks are classified as [9.]:
(i.) Fixed Networks
In these networks, the weights are fixed a priori according to the problem to be solved
and so cannot be changed, i.e. dW / dt = 0.
(ii) Adaptive Networks
In these networks, learning methods can be applied to adjust the weights according to the
problem, i.e. dW / dt ≠ 0.
The various learning methods are classified into:
(i) Supervised Learning
(ii) Unsupervised Learning
(iii) Reinforcement Learning
(iv) Stochastic Learning
(i) SUPERVISED LEARNING
The method of learning used to solve the problem of error convergence, i.e., the
minimization of the error between the desired output and the actual computed output, is
known as Supervised Learning. It is also called “learning with a teacher”. The various
learning algorithms that utilize the approach of Supervised Learning are:
(a) LEARNING THROUGH ERROR CORRECTION
It depends upon the availability of desired output for a given input. The minimization of
error is resolved by various learning rules:
1. DELTA RULE: It is based upon the idea of continuous adjustment of the weight
values such that the squared error between the target output value and the actual output
is minimized. It is also known as the Widrow-Hoff learning rule or the least-mean-square
(LMS) rule. [5.]
2. GRADIENT DESCENT RULE: The values of the weights are adjusted by an amount
proportional to the first derivative of the error between the desired output and the actual
output with respect to the value of the weight. [5.]
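The Widrow-Hoff update can be sketched in a few lines (a hedged illustration; the function name and learning rate are our own):

```python
def lms_step(w, x, target, eta=0.1):
    """One Widrow-Hoff (LMS) update: move the weights along the negative
    gradient of the squared error between target and actual output,
    i.e. delta_w_i = eta * (target - output) * x_i."""
    output = sum(wi * xi for wi, xi in zip(w, x))
    error = target - output
    return [wi + eta * error * xi for wi, xi in zip(w, x)]
```

Repeated applications shrink the error geometrically on a fixed input, which is the "continuous weight adjustment" the rule describes.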
(ii) UNSUPERVISED LEARNING
This method uses no teacher and is based upon local information as no target o/p exists.
This learning self-organizes the data presented to the network. The paradigms of
unsupervised learning are:
(a) HEBBIAN LEARNING:
Donald Hebb formulated that the changes in the synaptic strengths is proportional to the
correlation between the firing of the post and pre-synaptic neurons. [2.]
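Hebb's formulation can be expressed as a one-line weight update (a sketch; the name and learning rate are illustrative):

```python
def hebbian_step(w, pre, post, eta=0.01):
    """Hebb's rule: the change in each synaptic strength is proportional
    to the correlation of pre- and post-synaptic activity,
    delta_w_i = eta * post * pre_i."""
    return [wi + eta * post * xi for wi, xi in zip(w, pre)]
```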
370 International Conference on Systemics, Cybernetics and Informatics
(b) COMPETITIVE LEARNING:
When an input stimulus is applied, each output neuron competes with the others to
produce the output signal closest to the target. This output then becomes dominant, and
the others cease producing an output signal for that stimulus. [2.]
(iii) REINFORCEMENT LEARNING:
This method is also termed “learning with a critic”. It is used to handle situations where
the desired output for a given input is not known, and only a binary signal indicating
whether the result is right or wrong is available.
(iv) STOCHASTIC LEARNING:
This method involves the adjustment of weights of a neural network in a probabilistic
manner. It is used to determine the optimum weights of a multilayer feed-forward
network by overcoming the local minima problem. [6.]

5.2 NETWORK MODELS AND THE LEARNING ALGORITHMS
USED
(i) McCULLOCH–PITTS MODEL (MP Model)
This model is quite simple, with no learning or adaptation. The activation is given by a
weighted sum of the N input values (xj) minus a threshold value (Θ). The output signal
(Oi) is typically a nonlinear function f of the activation value:
Oi = f ( Σj=1..N wij xj – Θ )
In the original MP model [7.], a binary step function was used as the nonlinear transfer
function. The major drawback of this model was that the weights were fixed and no
learning could be incorporated.
(ii) PERCEPTRON MODEL
This model was given by Frank Rosenblatt and consists of outputs from sensory units to a
fixed set of association units, the outputs of which are fed to an MP neuron. The
association units perform predetermined manipulations on their inputs. This model
incorporates learning and the target output (T) is compared with the actual output (O) and
the error (E) is used to adjust the weights.
E=T–O
Change in synaptic weights is calculated as:
Δw = μ [ T – f ( w(k)x ) ] x
Sigmoidal non-linearity is used in the multi-layer perceptron model. [6.] The following
deficiencies are encountered in this model:
(a) Different perceptrons need to be trained for different sets of input patterns.
(b) A single-layer perceptron cannot classify sets of patterns that are not linearly
separable (e.g. the XOR problem).
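The update Δw = μ [T − f(w(k)x)] x can be sketched as a training loop (a hedged illustration; the function name, learning rate and epoch count are our own):

```python
def train_perceptron(samples, eta=0.1, epochs=20):
    """Rosenblatt perceptron learning: adjust weights and bias by the
    error between target T and thresholded output O, delta_w = eta*(T-O)*x."""
    n = len(samples[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else 0
            err = target - out            # E = T - O
            w = [wi + eta * err * xi for wi, xi in zip(w, x)]
            b += eta * err
    return w, b
```

On a linearly separable problem such as logical OR this converges; on XOR, per deficiency (b), it never would.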

(iii) DELTA LEARNING ALGORITHM
This algorithm is based on the least-square-error minimization method and its objective is
to express the difference between the actual and target outputs in terms of the inputs and
weights. The least-squared error is defined by:
E = ½ (T – O)² = ½ [ T – f ( Σi wi xi ) ]²
(iv) ADALINE
This model [6.] consists of trainable signed weights and inputs of +1 or -1. The weighted
sum is applied to a quantizer transfer function that restores the output to +1 or -1. Based
on the mean-square learning algorithm, the weights are adjusted according to the error
between the target T and the response R:
E = T – R
(v) WINNER-TAKES-ALL ALGORITHM
This algorithm is best suited to competitive unsupervised learning, where there is a
single layer of N nodes and each node n has its own set of weights wn. An input vector x
is applied to all nodes, and each node provides an output On = Σj wnj xj. The node with
the best response to the applied input vector x is declared the winner according to the
winner-selection criterion:
On = max n=1,2,…,N ( wn · x )
The change in the winner's weights is then calculated as:
Δwn = α(k) ( x – wn )
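One competitive step can be sketched as follows (an illustration; the function name and the fixed α are our own):

```python
def winner_takes_all_step(weights, x, alpha=0.5):
    """Competitive update: only the node with the strongest response
    O_n = sum_j w_nj * x_j moves its weight vector toward the input,
    delta_w_n = alpha * (x - w_n). Returns the winning node's index."""
    responses = [sum(wi * xi for wi, xi in zip(w, x)) for w in weights]
    winner = max(range(len(weights)), key=lambda n: responses[n])
    weights[winner] = [wi + alpha * (xi - wi)
                       for wi, xi in zip(weights[winner], x)]
    return winner
```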
(vi) BACK-PROPAGATION ALGORITHM
This algorithm is applied to multilayer feed-forward ANNs. During the training session
of the network, a pair of patterns (Xk, Tk) is presented, where Xk is the input pattern and
Tk is the target pattern. The pattern Xk causes output responses at each neuron in each
layer and, hence, an actual output Ok at the output layer. At the output layer, the
difference between the actual and target outputs yields an error signal depending upon
the weights of the neurons in each layer. This error is minimized, and new weight values
are obtained.
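A minimal sketch of one training epoch for a tiny 2-input / 2-hidden / 1-output network follows (the layout, learning rate and bias convention are our own assumptions, not the paper's):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def backprop_epoch(samples, w1, w2, eta=0.5):
    """One epoch of back-propagation. Each row of w1 holds a hidden
    unit's input weights plus a trailing bias; w2 holds the output
    unit's weights plus a trailing bias. The output error is
    propagated backwards to adjust both layers. Returns total error."""
    total_err = 0.0
    for x, t in samples:
        # forward pass
        h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + row[-1])
             for row in w1]
        o = sigmoid(sum(w * hi for w, hi in zip(w2[:-1], h)) + w2[-1])
        total_err += 0.5 * (t - o) ** 2
        # backward pass: error signal scaled by sigmoid derivatives
        delta_o = (t - o) * o * (1 - o)
        delta_h = [delta_o * w2[j] * h[j] * (1 - h[j]) for j in range(len(h))]
        for j in range(len(h)):
            w2[j] += eta * delta_o * h[j]
            for i in range(len(x)):
                w1[j][i] += eta * delta_h[j] * x[i]
            w1[j][-1] += eta * delta_h[j]
        w2[-1] += eta * delta_o
    return total_err
```

Run over many epochs on a training set, the returned error shrinks as the weights converge.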

(vii) COGNITRON AND NEOCOGNITRON MODELS
A neocognitron model is a hybrid hierarchical multilayer feed forward (and feedback)
network that is an outgrowth of an earlier multilayer self-adapting neural model,
proposed as a model of visual pattern recognition in the brain, called the cognitron model.
The network consists of several stages of simple-cell (S) and complex-cell (C) layer
pairs arranged in rectangular planes of cells. The S layers act as feature detectors, while
the C layers perform a type of feature blurring on the S-cell outputs to make the network
less sensitive to shift and deformation in image patterns. The neocognitron can learn in
either a supervised or an unsupervised mode using competitive learning. Only the
weights on the S layers are adaptable, while those on the C layers remain fixed.
(viii) ADAPTIVE RESONANCE THEORY PARADIGM
This is an unsupervised paradigm based on competitive learning [see 5.i.b] and is
consistent with cognitive and behavioral models. It has two main layers: first is the
input/comparison layer, and the second is the output/recognition
layer; both of which interact extensively through feed forward and feedback connectivity.
[5.]
(ix) HOPFIELD MODEL
This model conforms to the asynchronous nature of the biological neurons. It is a more
abstract, fully connected, random, asynchronous and a symmetrically weighted network
which accepts either bipolar (+1; -1) or binary (0; 1) inputs.[8.] The outputs of each
processing element can be coupled back to the inputs of any other processing element
except itself. It uses a sigmoid function as the nonlinearity. Based on this model, an
analog-to-digital converter was demonstrated.
(x) SELF-ORGANIZING MAP (SOM)
Developed by Teuvo Kohonen, SOM is a clustering algorithm which creates a map of
relationships among input patterns. During training, it finds the output node that has the
least distance from the training pattern and then changes the node's weights to increase
the similarity to the training pattern. The overall effect is to move the output nodes to
"positions" that map the distribution of the training patterns. It has a single layer of nodes
and the output nodes do not correspond to known classes but to unknown clusters that the
SOM finds in the data autonomously.
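One Kohonen update step can be sketched as below. This is a simplification under our own naming: a full SOM also pulls the winner's map neighbors toward the pattern, which this sketch omits.

```python
import math

def som_step(nodes, x, eta=0.3):
    """One simplified SOM update: find the output node whose weight
    vector has the least distance from the training pattern, then move
    that node's weights toward the pattern to increase similarity."""
    def dist(w):
        return math.sqrt(sum((wi - xi) ** 2 for wi, xi in zip(w, x)))
    best = min(range(len(nodes)), key=lambda n: dist(nodes[n]))
    nodes[best] = [wi + eta * (xi - wi) for wi, xi in zip(nodes[best], x)]
    return best
```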
(xi) CONTENT ADDRESSABLE MEMORY (CAM)
It is a matrix memory, in which the patterns are written during the learning phase. While
recalling, the input data pattern is presented at the data bus at all locations
simultaneously. If matched, the CAM provides a confirmation signal and the address
where it is stored, hence providing match and no-match signals in a single operation. The
CAM may be viewed as associating (mapping) data to addresses, i.e. for every datum in
memory [7.] there corresponds some unique address, thus avoiding ambiguity. It may
also be viewed as a data correlator: input data are correlated with the stored data in the
CAM. It can be implemented with the help of RAM by using an iterative algorithm.
(xii) REGRESSION ANALYSIS
Regression analysis is used to fit a smooth curve to a number of sample data points which
represent some continuously varying phenomena. The fitting technique can be used to
predict the values of one or more variables on the basis of the information provided by
the measurements on the other independent variables. In regression analysis parameters
defining the functional relationship are estimated using the statistical criteria.

6. TYPES OF ANN
The artificial neural networks are broadly categorized into:
(A) PROBABILISTIC NEURAL NETWORK (PNN)
PNN stores the training patterns to avoid the iterative process. It is a classifier paradigm
that instantly approximates the optimum boundaries between categories. It has two
hidden layers: the first containing a dedicated node for each training pattern and the
second containing a dedicated node for each class; both connected, on a class-by-class
basis. Each new input is classified according to the weighted average of the closest
training examples.
(B) TIME DELAY NEURAL NETWORK (TDNN)
A tapped delay line, or a shift register, and a multilayer perceptron, with the tapped
outputs of the delay line as inputs, constitute the time delay neural network. The output
has a finite temporal dependence on the input
u(k) = F[ x(k), x(k-1), …, x(k-n) ]
F being a nonlinear function. When this function is a weighted sum, the TDNN is
equivalent to a finite impulse response (FIR) filter, and when the output is fed back via a
unit delay to the input, it is equivalent to an infinite impulse response (IIR) filter.
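The weighted-sum case can be sketched directly (class name and example weights are our own):

```python
from collections import deque

class TappedDelayLine:
    """A shift register feeding a weighted sum: with a linear F, the
    TDNN reduces to a finite impulse response (FIR) filter over the
    last n+1 inputs x(k), x(k-1), ..., x(k-n)."""
    def __init__(self, weights):
        self.weights = weights
        self.taps = deque([0.0] * len(weights), maxlen=len(weights))

    def step(self, x):
        self.taps.appendleft(x)          # shift in the newest sample
        return sum(w * t for w, t in zip(self.weights, self.taps))
```

With weights [0.5, 0.5] this behaves as a two-tap moving average.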
7. FUZZY LOGIC
It was proposed by Lotfi Zadeh, to generalize classical set theory and to deal with subsets
of the universe which have no well-defined boundaries. Fuzzy logic links language with
computing (reasoning) through linguistic variables and quantifiers. The variables and
quantifiers are mapped to fuzzy membership functions (possibility distributions) which
assume values in the range [0,1] (‘0’ corresponds to a member not included, ‘1’ to one
fully included, and values between 0 and 1 define fuzzy members). This process of
changing an input value to a fuzzy value is called fuzzification. [9.]
Fuzzy logic replaces Boolean truth values with degrees of truth. Fuzzy truth represents
membership in vaguely defined sets, not likelihood of some event/condition. Thus, fuzzy
logics are conceptually distinct from probabilities.[10.]
Fuzzy logic is well suited to low-cost implementations based on cheap sensors and
low-resolution A/D converters in 4-bit or 8-bit microprocessors, and such systems can be
easily upgraded.
7.1 FUZZY CONTROL
A fuzzy controller consists of an input stage, a processing stage and an output stage.
Input stage maps inputs via appropriate membership sets. Processing stage invokes each
appropriate rule and generates results for each i/p and combines them. The output stage
converts the combined result back into an output value. The most common shapes of the
membership function are triangular, trapezoidal, etc. Logic rules are in the form of
IF-THEN statements (the IF part is called the antecedent and the THEN part the
consequent).
The antecedents are combined using fuzzy operators such as AND, OR and NOT. AND
uses the minimum weight of the antecedents, OR uses the maximum value, and NOT
takes the complementary value of an antecedent. [3.]
To determine the result of a rule, the “MAX-MIN” inference method is used, in which
the output membership function is given the truth value generated by the premise. Rules
can be solved in parallel in hardware, or sequentially in software. The results of the rules
are ‘defuzzified’ to a crisp value either by the centroid method (the most popular), in
which the center of mass of the result provides the crisp value, or by the height method,
which takes the value of the biggest contributor. In the centroid method, the values are
OR’d, not added, and the results are combined using a centroid calculation. [11.]
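The pipeline above can be sketched as a toy single-input controller. The membership sets, rule table and set names here are illustrative inventions of ours; a real controller would combine error and error-delta antecedents as described below.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b,
    falling to zero at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_control(error):
    """Fuzzify the error, apply MIN-style rule firing with singleton
    output sets, then defuzzify with a discrete centroid."""
    # fuzzification: degree of membership in each (hypothetical) input set
    neg = tri(error, -2.0, -1.0, 0.0)
    zero = tri(error, -1.0, 0.0, 1.0)
    pos = tri(error, 0.0, 1.0, 2.0)
    # each rule clips its output singleton by its antecedent's truth
    rules = [(neg, -1.0), (zero, 0.0), (pos, 1.0)]
    # discrete centroid: weighted average of the rule outputs
    num = sum(mu * out for mu, out in rules)
    den = sum(mu for mu, _ in rules)
    return num / den if den else 0.0
```

The output varies smoothly with the input even though only three coarse rules are defined, which is the practical appeal of fuzzy control.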
7.2 BUILDING OF FUZZY CONTROLLER
Antecedents consist of logical combination of the error and error delta signals while the
consequent is a control command output. The rule outputs can be defuzzified using a
discrete centroid computation[9.]

8. NEURO-FUZZY COMPUTING SYSTEM
While fuzzy logic provides a close link between natural language and “approximate
computational reasoning”, fuzzy computing methods do not include the ability to learn
adaptively, to perform associative memory feats, or to tolerate high levels of noise and
pattern deformation, capabilities that are needed for tasks like perception, learning, and
predictive behavioral response. Thus, neural networks are merged conceptually with, and
implemented alongside, fuzzy logic systems. This is known as SOFT COMPUTING. [9.]
The neuro-fuzzy system has five layers of neurons with selected feed-forward
interconnections. [8. ]
9. APPLICATIONS OF NEURO-FUZZY SYSTEMS AND THEIR LIMITATIONS
(a) Sensors in Chemical Engineering
Here the problem was to relate values produced by ultrasound sensors to the actual
physical characteristics of air bubbles in a fermenter. Since this was a mapping problem,
a multilayer perceptron (MLP) was used. As little data was available for training, a
simulation of the physical system was developed.
Suggestion: MLP networks are not robust, since the loss of neurons degrades the whole
learned mapping; a more fault-tolerant method is therefore desirable.
(b) Financial Data Modeling and Prediction
The problem here was to predict whether a company would raise funds by issuing shares
or by taking on debt. Such techniques are also used in stock-market prediction. The data
comprised many parameters describing the financial profiles and decisions of hundreds
of companies, and a number of techniques were tried, such as MLP, linear regression and
NRBF. But the setup and training of the network require skill, experience and patience.
Suggestion: The data proved inconsistent with accepted economic models, so the quality
of the data must be assessed. There is no established track record for the reliability and
robustness of such techniques. So, a back-propagated neural network with two or more
hidden layers and more variables should be used.
(c) Forecasting
Forecasting requirements call for a differentiation between predictive classification
tasks, where forecast values are class memberships or probabilities of belonging to a
certain class (i.e., binary predictors), and regression tasks, i.e., point predictions on a
single numerical scale.
Suggestion: Distinct modeling approaches and preprocessing are thus required in
financial modeling, as neural networks have not yet been established as a valid and
reliable method in the business forecasting field at either the strategic, tactical or
operational level.
(d) Image Compression
Neural networks can accept a vast array of input at once and process it quickly, and so
they are useful in image compression. A bottleneck-type network comprises input and
output layers of equal size and an intermediate layer of smaller size in between. The ratio
of the size of the input layer to the size of the intermediate layer is the compression ratio.
Pixels fed into an input node must be output after compression. The outputs of the
hidden layer are, however, decimal values between -1 and 1 and so would require a
possibly infinite number of bits. Therefore, the image is quantized and encoded
(compressed to about 1/10th of the original).
Suggestion: The encoding scheme used is not lossless. The original image cannot be
retrieved, because information is lost in the process of quantizing. Again, the actual
results of the original compression cannot be seen. Also, the network must be trained
continuously if the output is not of high quality.
(e) Intelligent Control
Neuro-fuzzy systems are used in many vehicular applications, including trains, smart
automobiles and intelligent robots. The controller has to account for several variables,
many of them non-linear.
Suggestion: Fuzzy logic control systems are therefore required, for example for
controlling the idle speed of automotive engines. These can be improved by using a
radial basis function neural network with a Gaussian function.

9.1 LATEST APPLICATIONS
(a) Framsticks
Framsticks is a 3D life-simulation project in which both the physical structure of the
creatures and their control systems are evolved. Evolutionary algorithms are used, with
selection, crossover and mutation. These features enable us to study the evolution of
social behavior through synthetic modeling of the evolutionary forces that may have led
to (cooperative or competitive) social behavior.
(b) In VLSI
VLSI provides a means of capturing truly complex behavior in a highly hierarchical
fashion. Adaptation allows us to compensate for inaccuracies in the physical analog
VLSI implementation, as well as for uncertainties and fluctuations in the system under
optimization. Adaptive algorithms based on physical observation of the "performance"
gradient in the parameter space are better suited to robust analog VLSI implementation
than are algorithms based on a calculated gradient.
(c) Creatures – The World's Most Advanced Artificial Life!
Creatures features some of the most advanced, genuine artificial-life software ever
developed in a commercial product, technology that has captured the imaginations of
scientists worldwide.

10. LIMITATIONS OF NEURO-FUZZY SYSTEMS
(i) (a) Neural techniques, when simulated on conventional computers, are executed
sequentially and are difficult to parallelize.
(b) When the quantity of data increases, the methods may suffer a combinatorial
explosion.
(c) The learning process seems difficult to simulate in a symbolic system.
(ii) In a perceptron network,
(a) the output values of a perceptron can take only one of two values (0, 1) due to the
hard-limiter transfer function;
(b) perceptrons can only classify linearly separable sets of vectors [5.]; otherwise
learning will never reach a point where all vectors are classified properly.
(iii) In computational approach of neural networks,
(a) When we try to solve a stochastic optimization problem, [6.] we need to
approximate, and hence one must decide how accurately to estimate a quantity before
using it for updating. For a finite computing budget, one can either spend most of the
budget estimating each iterative step very accurately and settle for fewer steps, or
estimate each step poorly but use the budget to compute many iterative steps.
(b) In many problems and techniques, the computations involved grow exponentially
with the size of the problem, which renders such techniques impractical.
(c) Learning under dynamic constraints imposes difficulty in the identification of cause
and effect, in the sense that the future desired output depends on all past inputs, and it is
not possible to know which past input deserves credit for the success of the current
output. So, dynamic programming must be invoked to convert the problem into a series
of static learning problems.
(d) NP-hardness is a fundamental limitation [11.] on what computation can do.
Quantifying heuristics and acquiring structural knowledge seem to be the only salvation
for the effective solution of complex real problems, so human expertise proves to be
better.
(iv) A neural network is used to extract information from given data where other
methods are not available. But sometimes general mathematical models [11.] can be
simulated faster and more effectively, e.g. the drawing of a lottery does not need any
past inputs; similarly, weather forecasting.

11. CONCLUSIONS
This paper has presented an overview of neuro-fuzzy systems, their uses, the learning
methods used to train them, their limitations in various applications, and suggestions to
rectify these limitations in order to make the systems more efficient from the
implementation point of view. The major issues this paper has addressed are the
scalability problem, testing, verification, and integration of neural network systems into
the modern environment. It also notes that neuro-fuzzy programs sometimes become
unstable when applied to larger problems. The defence, nuclear and space industries are
concerned about the issue of testing and verification. The mathematical theories used to
guarantee the performance of an applied neural network are still under development.
As suggested, the solution for the time being may be to train and test these intelligent
systems much as we do humans. In addition, the paper proposes to solve the problem of
parallelism and sequential execution by implementing neural networks directly in
hardware, which still needs much development.
This "programming" will require feedback from the user in order to be effective, but
simple and "passive" sensors (e.g. fingertip sensors, gloves, or wristbands to sense pulse,
blood pressure, skin ionization, and so on) can provide effective feedback into a neural
control system, along with other variables which the system can learn to correlate with a
person's response state.
Again, the paper puts forward a number of possible alternatives that need to be applied
to modern applications of neuro-fuzzy systems so that they may serve the purpose they
are designed for. It also conveys that genetic algorithms and artificial intelligence must
be blended in, side by side, to make such systems faster so that they can be implemented
in VLSI designs. The neuro-fuzzy system, which is indeed a powerful tool for realizing
our daily needs and not just a far-out research trend, must be supported by efficient
algorithms, because much remains to be done in this regard.
REFERENCES
[1.] Bose, N. K. & Liang, P., Neural Network Fundamentals with Algorithms and
Applications.
[2.] Anderson, James A., Introduction to Neural Networks.
[3.] Driankov, D. & Hellendoorn, H., Introduction to Fuzzy Control.
[4.] Hassoun, Mohamad H., Fundamentals of Artificial Neural Networks.
[5.] Hagan & Beale, Neural Network Design.
[6.] Haykin, Simon, Neural Networks.
[7.] Kartalopoulos, Stamatios V., Understanding Neural Networks and Fuzzy Logic.
[8.] Fu, LiMin (TMH), Neural Networks in Computer Intelligence.
[9.] Patterson, Dan W., Artificial Neural Networks: Theory and Applications.
[10.] Kosko, Bart, Neural Networks and Fuzzy Systems.
[11.] www.wikipedia.org
[12.] IEEE Special Issue 2002, "A self-growing network that grows when required".
Ch 1 introduction to Embedded Systems (AY:2018-2019--> First Semester)Ch 1 introduction to Embedded Systems (AY:2018-2019--> First Semester)
Ch 1 introduction to Embedded Systems (AY:2018-2019--> First Semester)
 
Levels of Virtualization.docx
Levels of Virtualization.docxLevels of Virtualization.docx
Levels of Virtualization.docx
 
Fuzzy logic ppt
Fuzzy logic pptFuzzy logic ppt
Fuzzy logic ppt
 
Learning Methods in a Neural Network
Learning Methods in a Neural NetworkLearning Methods in a Neural Network
Learning Methods in a Neural Network
 
Stored program concept
Stored program conceptStored program concept
Stored program concept
 

Similaire à Neural network and fuzzy logic

Neural networks are parallel computing devices.docx.pdf
Neural networks are parallel computing devices.docx.pdfNeural networks are parallel computing devices.docx.pdf
Neural networks are parallel computing devices.docx.pdfneelamsanjeevkumar
 
Artificial neural networks seminar presentation using MSWord.
Artificial neural networks seminar presentation using MSWord.Artificial neural networks seminar presentation using MSWord.
Artificial neural networks seminar presentation using MSWord.Mohd Faiz
 
CCS355 Neural Networks & Deep Learning Unit 1 PDF notes with Question bank .pdf
CCS355 Neural Networks & Deep Learning Unit 1 PDF notes with Question bank .pdfCCS355 Neural Networks & Deep Learning Unit 1 PDF notes with Question bank .pdf
CCS355 Neural Networks & Deep Learning Unit 1 PDF notes with Question bank .pdfAsst.prof M.Gokilavani
 
Artificial Neural Network report
Artificial Neural Network reportArtificial Neural Network report
Artificial Neural Network reportAnjali Agrawal
 
Artificial neural networks and its application
Artificial neural networks and its applicationArtificial neural networks and its application
Artificial neural networks and its applicationHưng Đặng
 
Artificial neural networks and its application
Artificial neural networks and its applicationArtificial neural networks and its application
Artificial neural networks and its applicationHưng Đặng
 
Artificial Neural Networks.pdf
Artificial Neural Networks.pdfArtificial Neural Networks.pdf
Artificial Neural Networks.pdfBria Davis
 
Artificial Neural Networks ppt.pptx for final sem cse
Artificial Neural Networks  ppt.pptx for final sem cseArtificial Neural Networks  ppt.pptx for final sem cse
Artificial Neural Networks ppt.pptx for final sem cseNaveenBhajantri1
 
Artificial neural networks
Artificial neural networks Artificial neural networks
Artificial neural networks ShwethaShreeS
 
Neuralnetwork 101222074552-phpapp02
Neuralnetwork 101222074552-phpapp02Neuralnetwork 101222074552-phpapp02
Neuralnetwork 101222074552-phpapp02Deepu Gupta
 
Neural Network
Neural NetworkNeural Network
Neural NetworkSayyed Z
 

Similaire à Neural network and fuzzy logic (20)

Neural networks are parallel computing devices.docx.pdf
Neural networks are parallel computing devices.docx.pdfNeural networks are parallel computing devices.docx.pdf
Neural networks are parallel computing devices.docx.pdf
 
02 Fundamental Concepts of ANN
02 Fundamental Concepts of ANN02 Fundamental Concepts of ANN
02 Fundamental Concepts of ANN
 
Artificial neural networks seminar presentation using MSWord.
Artificial neural networks seminar presentation using MSWord.Artificial neural networks seminar presentation using MSWord.
Artificial neural networks seminar presentation using MSWord.
 
CCS355 Neural Networks & Deep Learning Unit 1 PDF notes with Question bank .pdf
CCS355 Neural Networks & Deep Learning Unit 1 PDF notes with Question bank .pdfCCS355 Neural Networks & Deep Learning Unit 1 PDF notes with Question bank .pdf
CCS355 Neural Networks & Deep Learning Unit 1 PDF notes with Question bank .pdf
 
Jack
JackJack
Jack
 
Neural Networks
Neural NetworksNeural Networks
Neural Networks
 
Artificial Neural Network report
Artificial Neural Network reportArtificial Neural Network report
Artificial Neural Network report
 
Artificial neural networks and its application
Artificial neural networks and its applicationArtificial neural networks and its application
Artificial neural networks and its application
 
Artificial neural networks and its application
Artificial neural networks and its applicationArtificial neural networks and its application
Artificial neural networks and its application
 
Artificial Neural Networks.pdf
Artificial Neural Networks.pdfArtificial Neural Networks.pdf
Artificial Neural Networks.pdf
 
Neural network
Neural networkNeural network
Neural network
 
Artificial Neural Networks ppt.pptx for final sem cse
Artificial Neural Networks  ppt.pptx for final sem cseArtificial Neural Networks  ppt.pptx for final sem cse
Artificial Neural Networks ppt.pptx for final sem cse
 
B42010712
B42010712B42010712
B42010712
 
Artificial neural networks
Artificial neural networks Artificial neural networks
Artificial neural networks
 
Project Report -Vaibhav
Project Report -VaibhavProject Report -Vaibhav
Project Report -Vaibhav
 
Unit+i
Unit+iUnit+i
Unit+i
 
A04401001013
A04401001013A04401001013
A04401001013
 
Neuralnetwork 101222074552-phpapp02
Neuralnetwork 101222074552-phpapp02Neuralnetwork 101222074552-phpapp02
Neuralnetwork 101222074552-phpapp02
 
Neural Network
Neural NetworkNeural Network
Neural Network
 
Neural network
Neural networkNeural network
Neural network
 

Neural network and fuzzy logic

fuzzy logic, which typically uses IF-THEN rules or equivalent constructs. The neural network automatically generates and updates the fuzzy rules and membership functions, while the fuzzy logic infers and provides a crisp, defuzzified output when fuzzy parameters exist. Thus neural networks and fuzzy control work hand in hand.

2. BIOLOGICAL NEURAL NETWORKS
The biological neural network is an interlinked structure of billions of neurons, each of which is linked to thousands of other neurons, either fully or partially connected. Neurons are complex cells that respond to electrochemical signals. They consist of a nucleus, a cell body, numerous dendritic links that provide "interconnection strengths" from other neurons through synapses, and an axon trunk that carries an action potential output to other neurons through synapses. There are two types of connections: excitatory and inhibitory. If the action potential exceeds its threshold value, the neuron fires and thus either excites or inhibits other neurons. [1.]
3. NEED OF NEURAL NETWORKS
Conventional computers use an algorithmic approach to solve a problem, so their problem-solving capability is restricted to what they already know. In the development of intelligent machines for application areas such as information processing, control applications and communication networks, this approach therefore proves to be a major hurdle. [7.] Moreover, computer power, in terms of storage capacity and speed, is continuously increasing through microprocessors that incorporate reduced-instruction-set computer architectures. To meet the ever-growing demand for performance/cost, more potent machines are required that can "think", and to create such a "brain-like machine", theories need to be devised that can explain "intelligence". [6.]

4. NETWORK ARCHITECTURES
Architecture defines the network structure, that is, the number of artificial neurons in the network and their interconnectivity. Network architectures are categorized into:
(i) Single-layer and multilayer networks. A network with a single output layer and no intermediate hidden layers is known as a single-layer network, while a network with one or more hidden layers between the input and output layers is termed a multilayer network.
(ii) Feed-forward and feedback networks. A feed-forward network allows signals to travel one way only, from input to output. Such networks tend to be straightforward; there are no feedback loops, so the output of any layer does not affect that same layer. A feedback network consists of a set of processing units in which the output of each unit is fed as input to all other units, including the same unit.
(iii) Fully connected and partially connected networks. A neural network is said to be fully connected if every node in each layer is connected to every node in the adjacent forward layer. If, however, some of the communication links are missing, the network is said to be partially connected.
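A fully connected feed-forward pass of the kind described above can be sketched in a few lines of Python. The layer sizes, weight values and the sigmoid non-linearity used here are illustrative choices, not values specified by the paper.

```python
import math

def sigmoid(x):
    # Monotonic, differentiable non-linearity that bounds the response to (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    # Each node in a fully connected layer takes a weighted sum of all inputs,
    # subtracts its threshold (bias), and passes the result through the non-linearity.
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)) - b)
            for row, b in zip(weights, biases)]

# A 2-input, 2-hidden-node, 1-output multilayer feed-forward network
# (illustrative weights and biases).
hidden = layer([0.5, -1.0], [[0.8, 0.2], [-0.4, 0.9]], [0.1, -0.1])
output = layer(hidden, [[1.0, -1.0]], [0.0])
print(output)
```

Because the signal flows only from input to output, no layer's output ever feeds back into itself, which is exactly the feed-forward property described in (ii).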
5. MODELING AND LEARNING
Neural networks process information by decomposing complex information into basic elements in parallel. Relationships can thus be established and stored and, as in the brain, used later for updating or to achieve a desired response. Modeling of the network is done to match its problem-solving behavior with that of the brain, and can be viewed as our attempt to approximate nature's creation.

CONCEPT OF MODELING
In artificial neural networks, modeling is achieved fundamentally by artificial neurons. A neuron has a set of 'n' inputs, each weighted by a connection strength or weight factor; a 'bias' or threshold term (which, when exceeded, makes the neuron fire); a non-linearity that acts on the activation signal produced; and an output response. The purpose of the non-linear function is to ensure that the neuron's response is bounded, i.e., that its actual response is conditioned or damped. The non-linearity functions commonly used in the network are:
(i) Linear or ramp function: also termed a hard-limiter; it is linear within its upper and lower limits.
(ii) Threshold function.
(iii) Sigmoid function: the most popular function used; it is monotonic and differentiable.

5.1 LEARNING
The neural network resembles the human brain in that it acquires knowledge through learning and stores this knowledge within the synaptic weights. Learning is the process of determining the weights by adapting to an input stimulus and producing a desired response. When the actual output response is the same as the desired one, the network is said to have "acquired knowledge". On the basis of the learning method used, neural networks are classified as [9.]:
(i) Fixed networks, in which the weights are fixed a priori according to the problem to be solved and cannot be changed, i.e., dW/dt = 0.
(ii) Adaptive networks, in which learning methods are applied to adjust the weights according to the problem, i.e., dW/dt ≠ 0.
The various learning methods are classified into:
(i) Supervised Learning
(ii) Unsupervised Learning
(iii) Reinforcement Learning
(iv) Stochastic Learning

(i) SUPERVISED LEARNING
Supervised learning, also called "learning with a teacher", is used to solve the problem of error convergence, i.e., the minimization of the error between the desired output and the actual computed output. The learning algorithms that take this approach include:
(a) LEARNING THROUGH ERROR CORRECTION
This depends upon the availability of the desired output for a given input. The minimization of error is resolved by various learning rules:
1. DELTA RULE: based upon the idea of continuous adjustment of the weights such that the squared difference between the target output value and the actual output is minimized. This is also known as the Widrow-Hoff learning rule or LMS rule. [5.]
2. GRADIENT DESCENT RULE: the weights are adjusted by an amount proportional to the first derivative of the error between the desired output and the actual output with respect to the value of the weight. [5.]

(ii) UNSUPERVISED LEARNING
This method uses no teacher and is based upon local information only, as no target output exists. It self-organizes the data presented to the network. The paradigms of unsupervised learning are:
(a) HEBBIAN LEARNING: Donald Hebb formulated that the change in a synaptic strength is proportional to the correlation between the firing of the post- and pre-synaptic neurons. [2.]

International Conference on Systemics, Cybernetics and Informatics
(b) COMPETITIVE LEARNING: when an input stimulus is applied, each output neuron competes with the others to produce the output signal closest to the target. That output then becomes dominant, and the others cease producing an output signal for that stimulus. [2.]
(iii) REINFORCEMENT LEARNING: also termed "learning with a critic", this method is used to handle situations where the desired output for a given input is not known, and only a binary indication that the result is right or wrong is available.
(iv) STOCHASTIC LEARNING: this method adjusts the weights of a neural network in a probabilistic manner. It is used to determine the optimum weights of a multilayer feed-forward network by overcoming the local-minima problem. [6.]

5.2 NETWORK MODELS AND THE LEARNING ALGORITHMS USED
(i) McCULLOCH-PITTS MODEL (MP MODEL)
This model is quite simple, with no learning or adaptation. The activation x is given by a weighted sum of the M input values and a threshold value Θ; the output signal s is typically a nonlinear function f(x) of the activation value x. The output is represented by:
Oi = f [ Σj=1..N (xij wij) − Θ ]
In the original MP model [7.], a binary function was used as the nonlinear transfer function. The major drawback of this model was that the weights were fixed, so no learning could be incorporated.
(ii) PERCEPTRON MODEL
This model was given by Frank Rosenblatt and consists of outputs from sensory units fed to a fixed set of association units, whose outputs are in turn fed to an MP neuron. The association units perform predetermined manipulations on their inputs. The model incorporates learning: the target output T is compared with the actual output O, and the error
E = T − O
is used to adjust the weights. The change in synaptic weights is calculated as:
Δw = μ [ T − f( w(k)·x ) ] x
A sigmoidal non-linearity is used in the multilayer perceptron model. [6.] The following deficiencies are encountered in this model:
(a) Different perceptrons need to be trained for different sets of input patterns.
(b) It cannot classify sets of patterns that are not linearly separable.
(iii) DELTA LEARNING ALGORITHM
This algorithm is based on the least-square-error minimization method; its objective is to express the difference between the actual and target outputs in terms of the inputs and weights. The least-squared error is defined by:
E = ½ (Ti − Oi)² = ½ [ Ti − f( Σ wi xi ) ]²
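The delta (Widrow-Hoff/LMS) update described above can be sketched as follows. A linear output unit is used here (as in the classic LMS filter), and the learning rate, training data and epoch count are illustrative assumptions.

```python
# Delta rule: w <- w + mu * (T - O) * x, which descends the squared error 1/2 (T - O)^2.
def lms_train(samples, n_inputs, mu=0.1, epochs=200):
    w = [0.0] * n_inputs
    for _ in range(epochs):
        for x, target in samples:
            o = sum(wi * xi for wi, xi in zip(w, x))   # actual output O
            err = target - o                           # E = T - O
            w = [wi + mu * err * xi for wi, xi in zip(w, x)]
    return w

# Learn the linearly realizable target O = 2*x1 - x2 from four training pairs
# (illustrative data).
data = [([1, 0], 2), ([0, 1], -1), ([1, 1], 1), ([2, 1], 3)]
w = lms_train(data, 2)
print(w)  # converges near [2, -1]
```

With a small enough learning rate the repeated corrections shrink the error geometrically, which is the error-convergence behavior supervised learning aims for.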
(iv) ADALINE
This model [6.] consists of trainable signed weights and inputs of +1 or −1. The weighted sum is applied to a quantizer transfer function that restores the output to +1 or −1. Based on the mean-square learning algorithm, the weights are adjusted using the error
E = T − R
where T is the target and R the response.
(v) WINNER-TAKES-ALL ALGORITHM
This algorithm is best suited to competitive unsupervised learning where there is a single layer of N nodes and each node n has its own set of weights wn. An input vector x is applied to all nodes, and each node provides an output On = Σj wnj xj. The node with the best response to the applied input vector x is declared the winner according to the selection criterion:
On = max over n = 1, 2, …, N of (wn · x)
The change in the winner's weights is then calculated as:
Δwn = α(k) (x − wn)
(vi) BACK-PROPAGATION ALGORITHM
This algorithm is applied to multilayer feed-forward ANNs. During the training session of the network, a pair of patterns (Xk, Tk) is presented, where Xk is the input pattern and Tk is the target pattern. The pattern Xk causes output responses at each neuron in each layer and, hence, an actual output Ok at the output layer. At the output layer, the difference between the actual and target outputs yields an error signal that depends upon the weights of the neurons in each layer. This error is minimized, and new weight values are obtained.
(vii) COGNITRON AND NEOCOGNITRON MODELS
The neocognitron is a hybrid hierarchical multilayer feed-forward (and feedback) network that grew out of an earlier multilayer self-adapting neural model, proposed as a model of visual pattern recognition in the brain, called the cognitron. The network consists of several stages of simple-cell (S) and complex-cell (C) layer pairs arranged in rectangular planes of cells. The S layers act as feature detectors, while the C layers perform a type of feature blurring on the S-cell outputs to make the network less sensitive to shifts and deformations in image patterns. The neocognitron can learn in either a supervised or an unsupervised mode using competitive learning. Only the weights on the S layers are adaptable, while those on the C layers remain fixed.
(viii) ADAPTIVE RESONANCE THEORY PARADIGM
This is an unsupervised paradigm based on competitive learning [see 5.1(ii)(b)] and is consistent with cognitive and behavioral models. It has two main layers: the input/comparison layer and the output/recognition layer, which interact extensively through feed-forward and feedback connectivity. [5.]
(ix) HOPFIELD MODEL
This model conforms to the asynchronous nature of biological neurons. It is a more abstract, fully connected, random, asynchronous and symmetrically weighted network that accepts either bipolar (+1, −1) or binary (0, 1) inputs. [8.] The output of each processing element can be coupled back to the inputs of every other processing element except itself. It uses a sigmoid function as the non-linearity. Based on this model, an analog-to-digital converter has been demonstrated.
(x) SELF-ORGANIZING MAP (SOM)
Developed by Teuvo Kohonen, the SOM is a clustering algorithm that creates a map of relationships among input patterns. During training, it finds the output node that has the least distance from the training pattern and then changes that node's weights to increase its similarity to the training pattern. The overall effect is to move the output nodes to "positions" that map the distribution of the training patterns. It has a single layer of nodes, and the output nodes do not correspond to known classes but to unknown clusters that the SOM finds in the data autonomously.
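The competitive update shared by the winner-takes-all algorithm (v) and, in spirit, by the SOM (x), in which only the best-matching node's weights move toward the input, can be sketched as follows. The node count, initial weights and learning rate are illustrative.

```python
# Winner-takes-all step: the node whose weight vector best matches the input
# (largest dot product O_n = w_n . x) wins, and only the winner's weights move
# toward the input: dw = alpha * (x - w).
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def wta_step(weights, x, alpha=0.5):
    winner = max(range(len(weights)), key=lambda n: dot(weights[n], x))
    weights[winner] = [w + alpha * (xi - w) for w, xi in zip(weights[winner], x)]
    return winner

# Two competing nodes with illustrative initial weight vectors.
W = [[1.0, 0.0], [0.0, 1.0]]
idx = wta_step(W, [0.9, 0.1])
print(idx, W)
```

Repeating this step over many inputs pulls each node's weight vector toward the cluster of inputs it keeps winning, which is the clustering effect the SOM description above relies on.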
(xi) CONTENT ADDRESSABLE MEMORY (CAM)
This is a matrix memory into which the patterns are written during the learning phase. During recall, the input data pattern is presented on the data bus to all locations simultaneously. If it matches, the CAM provides a confirmation signal and the address where the pattern is stored, hence providing match and no-match signals in a single operation. The CAM may be viewed as associating (mapping) data to addresses, i.e., for every datum in memory [7.] there corresponds some unique address, thus avoiding ambiguity. It may also be viewed as a data correlator: input data are correlated with the stored data in the CAM. It can be implemented with the help of RAM by using an iterative algorithm.
(xii) REGRESSION ANALYSIS
Regression analysis is used to fit a smooth curve to a number of sample data points which represent some continuously varying phenomenon. The fitting technique can be used to predict the values of one or more variables on the basis of the information provided by measurements of the other, independent variables. In regression analysis, the parameters defining the functional relationship are estimated using statistical criteria.

6. TYPES OF ANN
Artificial neural networks are broadly categorized into:
(A) PROBABILISTIC NEURAL NETWORK (PNN)
The PNN stores the training patterns to avoid the iterative learning process. It is a classifier paradigm that instantly approximates the optimum boundaries between categories. It has two hidden layers: the first contains a dedicated node for each training pattern and the second a dedicated node for each class, connected on a class-by-class basis. Each new input is classified according to the weighted average of the closest training examples.
(B) TIME DELAY NEURAL NETWORK (TDNN)
A tapped delay line (shift register) and a multilayer perceptron, with the tapped outputs of the delay line as inputs, constitute the time delay neural network. The output has a finite temporal dependence on the input:
u(k) = F[ x(k), x(k−1), …, x(k−n) ]
where F is a nonlinear function. When this function is a weighted sum, the TDNN is equivalent to a finite impulse response (FIR) filter, and when the output is fed back via a unit delay to the input, it is equivalent to an infinite impulse response (IIR) filter.

7. FUZZY LOGIC
Fuzzy logic was proposed by Lotfi Zadeh to generalize classical set theory and to deal with subsets of the universe that have no well-defined boundaries. It links language with computing (reasoning) through linguistic variables and quantifiers, which are mapped to fuzzy membership functions (possibility distributions) that assume values in the range [0, 1]: '0' corresponds to a member not included, '1' to one fully included, and values between 0 and 1 define fuzzy members. This process of changing an input value to a fuzzy value is called fuzzification. [9.] Fuzzy logic replaces Boolean truth values with degrees of truth. Fuzzy truth represents membership in vaguely defined sets, not the likelihood of some event or condition; thus, fuzzy logic is conceptually distinct from probability. [10.]
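Fuzzification of a crisp input through membership functions can be sketched as follows. The triangular shape, the linguistic terms and their breakpoints are illustrative assumptions, not values from the paper.

```python
def triangular(a, b, c):
    # Membership rises linearly from a to a peak of 1 at b, then falls to zero at c.
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)
    return mu

# Illustrative linguistic terms for a temperature variable.
terms = {"cold": triangular(-10, 0, 15),
         "warm": triangular(5, 20, 30),
         "hot":  triangular(25, 35, 50)}

def fuzzify(x):
    # Map the crisp value to a degree of membership in [0, 1] for every term.
    return {name: mu(x) for name, mu in terms.items()}

print(fuzzify(12.0))
```

Note that a single crisp value can belong partially to several overlapping sets at once (here 12° is somewhat "cold" and somewhat "warm"), which is exactly what distinguishes fuzzy membership from Boolean set membership.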
Fuzzy logic is well suited to low-cost implementations based on cheap sensors and low-resolution A/D converters on 4-bit or 8-bit microprocessors, and such systems can be easily upgraded.

7.1 FUZZY CONTROL
A fuzzy controller consists of an input stage, a processing stage and an output stage. The input stage maps inputs through appropriate membership sets. The processing stage invokes each applicable rule, generates a result for each, and combines them. The output stage converts the combined result back into a crisp output value. The most common shapes for membership functions are triangular, trapezoidal, etc. Logic rules take the form of IF-THEN statements (IF is called the antecedent; THEN, the consequent). Antecedents are combined using the fuzzy operators AND, OR and NOT: AND takes the minimum membership value of the antecedents, OR the maximum value, and NOT the complementary value. [3.] To define the result of a rule, the "MAX-MIN" inference method is used, in which the output membership function is given the truth value generated by the premise. Rules can be solved in parallel in hardware or sequentially in software. The results of the rules are 'defuzzified' to a crisp value either by the centroid method (the most popular), in which the center of mass of the result provides the crisp value, or by the height method, which takes the value of the biggest contributor. In the centroid method, the values are OR'd rather than added, and the results are combined using a centroid calculation. [11.]

7.2 BUILDING A FUZZY CONTROLLER
The antecedents consist of logical combinations of the error and error-delta signals, while the consequent is a control command output. The rule outputs can be defuzzified using a discrete centroid computation. [9.]

8. NEURO-FUZZY COMPUTING SYSTEM
While fuzzy logic provides a close link between natural language and "approximate computational reasoning", fuzzy computing methods by themselves do not include the ability to learn adaptively, to perform associative-memory feats, or to tolerate high levels of noise and pattern deformation; these capabilities are needed for tasks like perception, learning and predictive behavioral response.
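The MAX-MIN inference and discrete centroid defuzzification described in 7.1 can be sketched as below. The output universe, rule firing strengths and the triangular output sets are illustrative assumptions.

```python
# Each rule clips (MIN) its output membership function at the truth value of its
# antecedent; the clipped sets are combined with MAX (fuzzy OR), and the centroid
# of the combined shape gives the crisp output.
def defuzzify_centroid(universe, rules):
    # rules: list of (firing_strength, output_membership_function) pairs.
    combined = [max(min(strength, mu(x)) for strength, mu in rules)
                for x in universe]
    total = sum(combined)
    return sum(x * m for x, m in zip(universe, combined)) / total

def tri(a, b, c):
    # Triangular output membership function peaking at b.
    return lambda x: max(0.0, min((x - a) / (b - a), (c - x) / (c - b)))

universe = [i / 10 for i in range(0, 101)]   # sampled output range 0..10
rules = [(0.7, tri(0, 2, 4)),                # e.g. "output LOW", fired at 0.7
         (0.3, tri(6, 8, 10))]               # e.g. "output HIGH", fired at 0.3
print(defuzzify_centroid(universe, rules))
```

Because the strongly fired LOW set contributes most of the mass near 2 while the weakly fired HIGH set pulls toward 8, the centroid lands between them, weighted toward the stronger rule.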
Thus, neural networks are merged conceptually with, and implemented alongside, fuzzy logic systems. This combination is known as SOFT COMPUTING. [9.] The neuro-fuzzy system has five layers of neurons with selected feed-forward interconnections. [8.]

9. APPLICATIONS OF NEURO-FUZZY SYSTEMS AND THEIR LIMITATIONS
(a) Sensors in Chemical Engineering
Here the problem was to relate the values produced by ultrasound sensors to the actual physical characteristics of air bubbles in a fermenter. Since this was a mapping problem, a multilayer perceptron (MLP) was used. As little data was available for training, a simulation of the physical system was developed.
Suggestion: MLP nets are not robust, since the loss of neurons degrades the MLP; hence a new method is needed.
(b) Financial Data Modeling and Prediction
The problem here was to predict whether a company would raise funds by issuing shares or by taking on debt; such systems are also used in stock-market prediction. The data comprised many parameters describing the financial profiles and decisions of hundreds of companies, and a number of techniques were tried, such as MLP, linear regression and NRBF. However, the setup and training of such networks require skill, experience and patience.
Suggestion: The data proved inconsistent with accepted economic models, so the quality of the data must be assessed. There is no established track record for the reliability and robustness of such techniques, so a back-propagated neural network with two or more hidden layers and more variables should be used.
(c) Forecasting
Considering forecasting requirements, a differentiation must be made between predictive classification tasks, where the forecasted values are class memberships or probabilities of belonging to a certain class (i.e., binary predictors), and regression tasks, i.e., point predictions on a single numeric scale.
Suggestion: Distinct modeling approaches and preprocessing are therefore required in financial modeling, as neural networks have not yet been established as a valid and reliable method in the business forecasting field at the strategic, tactical or operational level.
(d) Image Compression
Neural networks can accept a vast array of input at once and process it quickly, so they are useful in image compression. A bottleneck-type network comprises input and output layers of equal size and a smaller intermediate layer in between. The ratio of the size of the input layer to the size of the intermediate layer is the compression ratio. The pixels fed into the input nodes must be reproduced at the output after compression. The outputs of the hidden layer are, however, decimal values between −1 and 1 and so would require a possibly infinite number of bits; therefore the image is quantized and encoded (compressed to about 1/10th of the original).
Suggestion: The encoding scheme used is not lossless. The original image cannot be retrieved exactly, because information is lost in the process of quantizing, and the actual results of the original compression cannot be seen. Also, the network must be trained continuously if the output is not of high quality.
(e) Intelligent Control
Neuro-fuzzy systems are used in many vehicular applications, including trains, smart automobiles and intelligent robots. The controller has to account for several variables, many of them non-linear.
Suggestion: Fuzzy logic control systems are therefore required for controlling the idle speed of automotive engines. This can be improved by using a radial-basis-function neural network with a Gaussian function.

9.1 LATEST APPLICATIONS
(a) Framsticks
Framsticks is a 3D life-simulation project in which both the physical structure of the creatures and their control systems are evolved. Evolutionary algorithms are used, with selection, crossover and mutation.
These features enable the study of the evolution of social behavior through synthetic modeling of the evolutionary forces that may have led to cooperative or competitive social behavior.
(b) In VLSI
VLSI provides a means of capturing truly complex behavior in a highly hierarchical fashion. Adaptation allows us to compensate for inaccuracies in the physical analog VLSI implementation, as well as for uncertainties and fluctuations in the system under optimization. Adaptive algorithms based on physical observation of the "performance" gradient in the parameter space are better suited to robust analog VLSI implementation than algorithms based on a calculated gradient.
(c) Creatures – The World's Most Advanced Artificial Life!
Creatures features some of the most advanced, genuine artificial-life software developed in a commercial product, technology that has captured the imaginations of scientists worldwide.

10. LIMITATIONS OF NEURO-FUZZY SYSTEMS
(i) (a) Neural techniques are executed sequentially and are difficult to parallelize.
(b) When the quantity of data increases, these methods may suffer a combinatorial explosion.
  • 16. (c) The learning process seems difficult to simulate in a symbolic system.
(ii) In a perceptron network, (a) the output of a perceptron can take only one of two values (0 or 1), owing to the hard-limiter transfer function; (b) perceptrons can only classify linearly separable sets of vectors [5.]; otherwise, learning will never reach a point where all vectors are classified properly.
(iii) In the computational approach to neural networks, (a) when we try to solve a stochastic optimization problem [6.], we must approximate, and hence must decide how accurately to estimate a quantity before using it for updating. For a finite computing budget, one can either spend most of the budget estimating each iterative step very accurately and settle for fewer steps, or estimate each step poorly and use the budget to compute many iterative steps. (b) In many problems and techniques, the computations involved grow exponentially with the size of the problem, which renders such techniques impractical.
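The linear-separability limitation noted in (ii)(b) above can be demonstrated with a short sketch (illustrative Python, assuming a classical Rosenblatt perceptron with a 0/1 hard-limiter, not code from the paper): trained on the linearly separable AND function, the learning rule converges; trained on XOR, it never reaches zero classification errors.

```python
import numpy as np

def train_perceptron(X, y, epochs=100, lr=0.1):
    """Rosenblatt perceptron learning rule with a 0/1 hard-limiter output."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        errors = 0
        for xi, t in zip(X, y):
            out = 1 if (w @ xi + b) > 0 else 0   # hard-limiter transfer function
            if out != t:
                w += lr * (t - out) * xi
                b += lr * (t - out)
                errors += 1
        if errors == 0:
            return w, b, True    # every vector classified properly
    return w, b, False           # never converged within the budget

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
and_y = np.array([0, 0, 0, 1])   # linearly separable
xor_y = np.array([0, 1, 1, 0])   # not linearly separable

_, _, ok_and = train_perceptron(X, and_y)
_, _, ok_xor = train_perceptron(X, xor_y)
print(ok_and, ok_xor)   # True False
```

Because no single line in the plane separates the XOR targets, at least one vector is misclassified by any weight setting, so the weights cycle indefinitely rather than converge.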
  • 17. (c) Learning under dynamic constraints makes the identification of cause and effect difficult, in the sense that the desired future output now depends on all past inputs, and it is not possible to know which past input deserves credit for the success of the current output. Dynamic programming must therefore be invoked to convert the problem into a series of static learning problems. (d) NP-hardness is a fundamental limitation [11.] on what computation can do. Quantifying heuristics and acquiring structural knowledge seem to be the only salvation for the effective solution of complex real problems, so human expertise proves to be better.
(iv) A neural network is used to extract information from given data where other methods are not available. Sometimes, however, general mathematical models [11.] can be simulated faster and more effectively: the drawing of a lottery, for example, does not need any past inputs; similarly, weather forecasting.
11. CONCLUSIONS
This paper has presented an overview of neuro-fuzzy systems: their uses, the learning methods used to train them, their limitations in various applications, and suggestions to rectify those limitations in order to make such systems more efficient from the implementation point of view. The major issues addressed are the scalability problem and the testing, verification and integration of neural network systems into the modern environment. The paper also notes that neuro-fuzzy programs sometimes become unstable when applied to larger problems. The defence, nuclear and space industries are concerned about the issue of testing and verification, and the mathematical theories used to guarantee the performance of an applied neural network are still under development.
As suggested, the solution for the time being may be to train and test these intelligent systems much as we do humans. In addition, the paper proposes to solve the problem of parallelism and sequential execution by implementing neural networks directly in hardware, which still needs a great deal of development. This "programming" will require feedback from the user in order to be effective, but simple, "passive" sensors (e.g. fingertip sensors, gloves, or wristbands to sense pulse, blood pressure, skin ionization, and so on) can provide effective feedback into a neural control system, along with other variables that the system can learn to correlate with a person's response state. The paper thus puts forward a number of possible alternatives that need to be applied to modern-day applications of neuro-fuzzy systems so that they may serve the purpose for which they are designed. It also conveys that genetic algorithms and artificial intelligence must be blended side by side to make the system faster, so that it can be implemented in VLSI design. The neuro-fuzzy system, which is indeed a powerful tool for realizing our daily needs and not just a far-out research trend, must be supported by efficient algorithms, because much remains to be done in this regard.
REFERENCES
[1.] Bose, N.K. & Liang, P. Neural Network Fundamentals with Algorithms and Applications
[2.] Anderson, James A. Introduction to Neural Networks
[3.] Driankov, D. & Hellendoorn, H. Introduction to Fuzzy Control
  • 18. [4.] Hassoun, Mohamad H. Fundamentals of Artificial Neural Networks
[5.] Hagan, M. & Beale, M. Neural Network Design
[6.] Haykin, Simon Neural Networks
[7.] Kartalopoulos, Stamatios V. Understanding Neural Networks and Fuzzy Logic
[8.] Fu, LiMin Neural Networks in Computer Intelligence (TMH)
[9.] Patterson, Dan W. Artificial Neural Networks: Theory and Applications
[10.] Kosko, Bart Neural Networks and Fuzzy Systems
[11.] www.wikipedia.org
[12.] IEEE Special Issue 2002: A self-growing network that grows when required