ADAPTIVE CHANNEL EQUALIZATION

College of Technology, Pantnagar
G.B. Pant University of Agriculture and Technology, Pantnagar

Kamal Bhatt
M.Tech-Electronics & Communication Engg.
ID-44036
NEURAL NETWORK
Neural networks are simplified models of biological neuron systems.

Neural networks are typically organized in layers. Layers are made up of a
number of interconnected 'nodes', each of which contains an 'activation
function'.

Patterns are presented to the network via the 'input layer', which
communicates with one or more 'hidden layers', where the actual processing
is done via a system of weighted 'connections'.

The hidden layers then link to an 'output layer', where the answer is
output.
MODEL OF ARTIFICIAL NEURON
• An appropriate model/simulation of the nervous system should be able to
  produce similar responses and behaviours in artificial systems.
• The nervous system is built from relatively simple units, the neurons,
  so copying their behaviour and functionality should be the solution.
LEARNING IN A SIMPLE NEURON

Perceptron Learning Algorithm:

1. Initialize weights
2. Present a pattern and target output
3. Compute output:   y = f( Σ_{i=0}^{2} w_i x_i )


4. Update weights:   w_i(t + 1) = w_i(t) + Δw_i

Repeat from step 2 until the error reaches an acceptable level.
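
As a concrete illustration, here is a minimal Python sketch of the four
steps above; the bipolar sign activation, the learning rate eta, and the
AND-function training set are illustrative assumptions, not part of the
original slides:

    import numpy as np

    def perceptron_train(patterns, targets, eta=0.1, epochs=100):
        """Single-neuron perceptron learning (steps 1-4 above)."""
        # 1. Initialize weights; w[0] acts as the bias with fixed input x0 = 1.
        w = np.zeros(patterns.shape[1] + 1)
        for _ in range(epochs):
            errors = 0
            # 2. Present a pattern and its target output.
            for x, d in zip(patterns, targets):
                x = np.insert(x, 0, 1.0)                  # prepend bias input x0 = 1
                # 3. Compute output y = f(sum_i w_i x_i), with f = sign.
                y = 1.0 if np.dot(w, x) >= 0.0 else -1.0
                # 4. Update weights: w_i(t+1) = w_i(t) + delta_w_i.
                w += eta * (d - y) * x
                errors += int(y != d)
            if errors == 0:                               # acceptable level of error
                break
        return w

    # Example: learn the linearly separable AND function on bipolar inputs.
    X = np.array([[-1., -1.], [-1., 1.], [1., -1.], [1., 1.]])
    t = np.array([-1., -1., -1., 1.])
    print(perceptron_train(X, t))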
NEURAL NETWORK ARCHITECTURE
An artificial neural network is defined as a data-processing system
consisting of a large number of interconnected processing elements, or
artificial neurons. There are three fundamentally different classes of
neural networks:

• Single-layer feedforward networks
• Multilayer feedforward networks
• Recurrent networks
Application
The tasks to which artificial neural networks are applied
tend to fall within the following broad categories:

•Function approximation, or regression analysis,
including time series prediction and modeling.

•Classification, including pattern and sequence
recognition, novelty detection and sequential
decision making.

•Data processing, including filtering, clustering,
blind signal separation and compression.
Equalization History
The LMS algorithm by Widrow and Hoff in 1960 paved the way for the
development of adaptive filters used for equalisation.

Lucky used this algorithm in 1965 to design adaptive channel equalisers.
The Maximum Likelihood Sequence Estimator (MLSE) equaliser and its Viterbi
implementation followed in the 1970s.

Multilayer perceptron (MLP) based symbol-by-symbol equalisers were
developed in 1990.

During 1989 to 1995, several efficient nonlinear artificial neural network
equalizer structures for channel equalization were proposed, including the
Chebyshev Neural Network and the Functional Link ANN.

In 2002, Kevin M. Passino described optimization based on foraging theory
in the article "Biomimicry of Bacterial Foraging".

More recently, in 2008, a rank-based statistics approach known as the
Wilcoxon learning method was proposed for signal processing applications
to mitigate linear and nonlinear learning problems.
Digital Communication Systems
Equalizers

Adaptive channel equalizers have played an important role in
digital communication systems.

An equalizer works like an inverse filter placed at the front end of the
receiver. Its transfer function is the inverse of the transfer function of
the associated channel, which enables it to reduce the error between the
desired and estimated signals.

This is achieved through a process of training. During this
period the transmitter transmits a fixed data sequence and the
receiver has a copy of the same.
Equalizers are used to compensate received signals that are corrupted by
the noise, interference and signal power attenuation introduced by
communication channels during transmission.

Linear transversal filters (LTF) are commonly used in the
design of channel equalizers. The linear equalizers fail to work
well when transmitted signals have encountered severe
nonlinear distortion.

A neural network (NN) can realize complex mappings between its input and
output signals, which makes NN-based equalizers a potentially suitable
solution for dealing with nonlinear channel distortion.
The problem of equalization may be treated as a problem of signal
classification, so neural networks (NNs) are quite promising candidates
because they can produce arbitrarily complex decision regions.

Studies performed during the last decade have established the superiority
of neural equalizers over traditional equalizers under conditions of high
nonlinear distortion and rapidly varying signals.

Several different neural equalizer architectures have been developed,
mostly combinations of a conventional linear transversal equalizer (LTE)
and a neural network.

The LTE eliminates the linear distortions, such as ISI, so the NN can
focus on compensating the nonlinearities. The following structures have
been studied: an LTE and a multilayer perceptron (MLP); an LTE and a
radial basis function (RBF) network; an LTE and a recurrent neural
network.
MLP networks are sometimes plagued by long training times and may be
trapped at bad local minima.

RBF networks often provide a faster and more robust solution to the
equalization problem. In addition, the RBF neural network has a structure
similar to the optimal Bayesian symbol decision; therefore, the RBF is an
ideal processing structure with which to implement the optimal Bayesian
equalizer.

The RBF's performance is better than that of the LTE and MLP equalizers.
Several learning algorithms have been proposed to update the RBF
parameters. The most popular consists of an unsupervised learning rule for
the centers of the hidden neurons and a supervised learning rule for the
weights of the output neurons.
The centers are generally updated using the k-means clustering algorithm,
which consists of computing the squared distance between the input vector
and the centers, choosing the minimum squared distance, and moving the
corresponding center closer to the input vector.
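
A minimal Python sketch of this sequential center update (the learning
rate alpha is an illustrative assumption):

    import numpy as np

    def kmeans_center_update(centers, x, alpha=0.05):
        """Move the RBF center nearest to the input vector x closer to it."""
        d2 = np.sum((centers - x) ** 2, axis=1)    # squared distances to all centers
        k = np.argmin(d2)                          # center with minimum squared distance
        centers[k] += alpha * (x - centers[k])     # move the winning center toward x
        return centers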

The k-means algorithm has some potential problems: the classification
depends on the initial values of the centers, on the type of distance
chosen, and on the number of classes. If a center is chosen
inappropriately it may never be updated, so it may never represent a
class.

A new competitive method to update the RBF centers is proposed here,
which rewards the winning neuron and penalizes the second winner, named
the rival.
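
One plausible reading of that competitive rule, sketched in the spirit of
rival-penalized competitive learning; the separate attraction rate alpha
and smaller penalization rate beta are assumptions, not values from the
slides:

    import numpy as np

    def rival_penalized_update(centers, x, alpha=0.05, beta=0.005):
        """Reward the winning center, penalize the second winner (the rival)."""
        d2 = np.sum((centers - x) ** 2, axis=1)            # squared distances
        winner, rival = np.argsort(d2)[:2]                 # two closest centers
        centers[winner] += alpha * (x - centers[winner])   # attract the winner
        centers[rival] -= beta * (x - centers[rival])      # repel the rival
        return centers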
Gradient Based Adaptive Algorithm
An adaptive algorithm is a procedure for adjusting the
parameters of an adaptive filter to minimize a cost function
chosen for the task at hand.
In this case, the parameters in ω(t) correspond to the impulse response
values of the filter at time t. We can write the output signal y(t) as

    y(t) = ω^T(t) s(t) = Σ_{i=0}^{M−1} ω_i(t) s(t − i)
The general form of an adaptive FIR filtering algorithm is

    ω(t + 1) = ω(t) + μ(t) G(e(t), s(t), Φ(t))

where G(·) is a particular vector-valued nonlinear function (which depends
on the cost function chosen), μ(t) is a step-size parameter, e(t) and s(t)
are the error signal and input signal vector, respectively, and Φ(t) is a
vector of states that stores pertinent information about the
characteristics of the input and error signals.
The Mean-Squared Error (MSE) cost function can be defined as

    J_MSE(t) = (1/2) E[ e²(t) ]

The minimizing weight vector W_MSE(t) can be found from the solution to
the system of equations

    ∂J_MSE(t) / ∂W_i(t) = 0,   0 ≤ i ≤ M − 1
The method of steepest descent is an optimization procedure for minimizing
the cost function J(t) with respect to a set of adjustable parameters
W(t). This procedure adjusts each parameter of the system according to the
relationship

    W_i(t + 1) = W_i(t) − μ(t) ∂J(t)/∂W_i(t)
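
For the MSE cost, the gradient can be written in terms of the input
autocorrelation matrix R and the input-desired cross-correlation vector p,
giving the classical update sketched below; this closed-form gradient is
standard Wiener-filter theory rather than something stated explicitly on
the slide:

    import numpy as np

    def steepest_descent_step(w, R, p, mu=0.01):
        """One steepest-descent step on J(w) = E[e^2]/2.
        For this cost, grad J = R w - p, with R = E[s s^T] and p = E[d s]."""
        return w - mu * (R @ w - p)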
Linear Equalization Algorithms
LMS ALGORITHM

• In the family of stochastic gradient algorithms
• An approximation of the steepest-descent method
• Based on the MMSE (Minimum Mean Square Error) criterion
• Adaptive process containing two input signals:
      1.) Filtering process, producing the output signal
      2.) Desired signal (training sequence)
• Adaptive process: recursive adjustment of filter tap weights
LMS ALGORITHM STEPS

Filter output:           y(n) = Σ_{k=0}^{M−1} w_k*(n) u(n − k)

Estimation error:        e(n) = d(n) − y(n)

Tap-weight adaptation:   w_k(n + 1) = w_k(n) + μ u(n − k) e*(n)

In words: the updated value of the tap-weight vector equals the old value
of the tap-weight vector plus the learning-rate parameter times the
tap-input vector times the error signal.
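
The three steps translate directly into code. The sketch below assumes
real-valued signals (so the conjugates drop out) and a training sequence d
aligned sample-by-sample with the received signal u:

    import numpy as np

    def lms_equalizer(u, d, M=11, mu=0.01):
        """LMS adaptive equalizer for real-valued signals.
        u: received (channel-distorted) signal, d: training sequence,
        M: number of tap weights, mu: learning-rate parameter."""
        w = np.zeros(M)
        e = np.zeros(len(u))
        for n in range(M - 1, len(u)):
            x = u[n - M + 1:n + 1][::-1]   # tap-input vector [u(n), ..., u(n-M+1)]
            y = np.dot(w, x)               # filter output
            e[n] = d[n] - y                # estimation error
            w += mu * e[n] * x             # tap-weight adaptation
        return w, e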
Recursive Least Square Algorithm

The recursive least squares (RLS) algorithm is another
algorithm for determining the coefficients of an adaptive filter.
In contrast to the LMS algorithm, the RLS algorithm uses
information from all past input samples (and not only from the
current tap-input samples) to estimate the (inverse of the)
autocorrelation matrix of the input vector.

To decrease the influence of input samples from the far past, a weighting
factor for the influence of each sample is used. This cost function can be
represented as the exponentially weighted sum of squared errors

    C(n) = Σ_{i=1}^{n} λ^{n−i} e²(i)

where 0 < λ ≤ 1 is the forgetting factor.
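
A compact sketch of the standard RLS recursion; the initialization
constant delta and the forgetting factor lam are conventional choices, not
values taken from the slides:

    import numpy as np

    def rls_equalizer(u, d, M=11, lam=0.99, delta=0.01):
        """RLS adaptive equalizer; P tracks the inverse of the
        exponentially weighted input autocorrelation matrix."""
        w = np.zeros(M)
        P = np.eye(M) / delta                     # initial inverse-correlation estimate
        for n in range(M - 1, len(u)):
            x = u[n - M + 1:n + 1][::-1]          # tap-input vector
            k = P @ x / (lam + x @ P @ x)         # gain vector
            e = d[n] - w @ x                      # a priori estimation error
            w = w + k * e                         # coefficient update
            P = (P - np.outer(k, x @ P)) / lam    # inverse-correlation update
        return w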
Nonlinear Equalizers
Multilayer Perceptron Network

In 1958, Rosenblatt demonstrated some practical applications of the
perceptron. The perceptron, a single-level connection of McCulloch-Pitts
neurons, is called a single-layer feedforward network.

The network is capable of linearly separating the input vectors into
pattern classes by a hyperplane. Similarly, many perceptrons can be
connected in layers to form an MLP network, in which the input signal
propagates through the network in a forward direction, on a
layer-by-layer basis. This network has been applied successfully to solve
diverse problems.
MLP Neural Network Using BP Algorithm
Generally the MLP is trained using the popular error back-propagation (BP)
algorithm. Let s_1, s_2, …, s_n represent the inputs to the network and
y_k the output of the final layer of the neural network. The connecting
weights between the input and the first hidden layer, between the first
and second hidden layers, and between the second hidden layer and the
output layer are represented by their respective weight matrices.

The final output of the MLP is then the composition of the layer
nonlinearities applied to these weighted sums, evaluated layer by layer
from the input to the output.
The final output y_k(t) at the output of neuron k is compared with the
desired output d_k(t), and the resulting error signal e_k(t) is obtained
as

    e_k(t) = d_k(t) − y_k(t)

The instantaneous value of the total error energy is obtained by summing
all error signals over all neurons in the output layer, that is,

    ξ(t) = (1/2) Σ_k e_k²(t)
This error signal is used to update the weights and thresholds of the
hidden layers as well as the output layer; each weight is moved against
the gradient of the error energy, w ← w − η ∂ξ(t)/∂w.
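
A minimal sketch of one BP training step. To keep it short it uses a
single hidden layer and tanh activations; the slides' network has two
hidden layers, but the per-layer update rule is identical:

    import numpy as np

    def bp_step(x, d, W1, W2, eta=0.05):
        """One error back-propagation step for a 1-hidden-layer MLP."""
        h = np.tanh(W1 @ x)                       # hidden-layer outputs
        y = np.tanh(W2 @ h)                       # final outputs y_k(t)
        e = d - y                                 # error signals e_k(t)
        delta2 = e * (1.0 - y ** 2)               # output deltas (tanh' = 1 - y^2)
        delta1 = (W2.T @ delta2) * (1.0 - h ** 2) # back-propagated hidden deltas
        W2 += eta * np.outer(delta2, h)           # output-layer weight update
        W1 += eta * np.outer(delta1, x)           # hidden-layer weight update
        return W1, W2, 0.5 * np.sum(e ** 2)       # instantaneous error energy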
Functional Link Artificial Neural Network
The FLANN is a single-layer ANN in which the original input pattern is
expanded to a higher-dimensional space using nonlinear functions, which
provides arbitrarily complex decision regions by generating nonlinear
decision boundaries.

The functional expansion block is the main enhancement used for the
channel equalization process.

Each element undergoes a nonlinear expansion to form M elements, so that
the resultant matrix has dimension N×M. The functional expansion of the
element x_k by power series expansion produces the terms
x_k, x_k², …, x_k^M.
At the t-th iteration, the error signal e(t) can be computed as

    e(t) = d(t) − y(t)

and the weight vector can be updated by the least mean square (LMS)
algorithm as

    W(t + 1) = W(t) + μ e(t) X(t)

where X(t) is the functionally expanded input vector and μ the learning
rate.
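
A sketch combining the power-series expansion with the LMS update; the
tanh output nonlinearity is a common FLANN choice and an assumption here:

    import numpy as np

    def power_expand(x, M=5):
        """Expand each input element x_k into x_k, x_k^2, ..., x_k^M."""
        return np.concatenate([x ** p for p in range(1, M + 1)])

    def flann_step(w, x, d, mu=0.01, M=5):
        """One FLANN training step on the functionally expanded pattern."""
        phi = power_expand(x, M)       # expanded input of dimension N*M
        y = np.tanh(w @ phi)           # network output
        e = d - y                      # error signal e(t)
        w += mu * e * phi              # LMS weight update
        return w, e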
BER performance of the FLANN equalizer compared with LMS and RLS based
equalizers
Chebyshev Artificial Neural Network
The Chebyshev artificial neural network (ChNN) is similar to the FLANN;
the difference is that where the FLANN expands the input signal to a
higher dimension using a functional expansion, the ChNN expands the input
using Chebyshev polynomials. As in the FLANN, the ChNN weights are updated
by the LMS algorithm. The Chebyshev polynomials are generated using the
recursive formula

    T_{n+1}(x) = 2x T_n(x) − T_{n−1}(x),   with T_0(x) = 1 and T_1(x) = x
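
The recursion maps directly to code; a small sketch of the ChNN expansion
block:

    import numpy as np

    def chebyshev_expand(x, M=5):
        """Expand input elements with Chebyshev polynomials T_0 .. T_M,
        using T_{n+1}(x) = 2x T_n(x) - T_{n-1}(x)."""
        T = [np.ones_like(x), x]           # T_0(x) = 1, T_1(x) = x
        for _ in range(2, M + 1):
            T.append(2.0 * x * T[-1] - T[-2])
        return np.concatenate(T)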
BER performance of the ChNN equalizer compared with FLANN, LMS and RLS
based equalizers
Radial Basis Function Equalizer
The centres of the RBF network are updated using the k-means clustering
algorithm. The RBF structure can be extended to multidimensional outputs
as well. The Gaussian kernel is the most popular kernel function for
equalization applications; it can be represented as

    φ_i(x) = exp( −‖x − c_i‖² / (2σ_r²) )

This network can implement a mapping F_rbf : R^m → R by the function

    F_rbf(x) = Σ_{i=1}^{N} ω_i φ_i(x)

Training of the RBF network involves setting the parameters for the
centres c_i, the spread σ_r and the linear weights ω_i. The RBF spread
parameter σ_r² is set to the channel noise variance σ_n².
This provides the optimum RBF network as an equaliser.
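
A sketch of the resulting forward mapping, with the centres assumed
already placed by k-means and sigma_r set from the noise variance as
described above:

    import numpy as np

    def rbf_output(x, centers, weights, sigma_r):
        """RBF mapping F_rbf(x) = sum_i w_i exp(-||x - c_i||^2 / (2 sigma_r^2))."""
        d2 = np.sum((centers - x) ** 2, axis=1)      # squared distances to centres
        phi = np.exp(-d2 / (2.0 * sigma_r ** 2))     # Gaussian kernel outputs
        return weights @ phi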
BER performance of the RBF equalizer compared with the ChNN, FLANN, LMS
and RLS equalizers
Conclusion
We observed that RLS provides a faster convergence rate than the LMS
equalizer.

We observed that the MLP equalizer, a feed-forward network trained using
the BP algorithm, performed better than the linear equalizers, but it has
the drawback of a slow convergence rate, depending upon the number of
nodes and layers.

An optimal equalizer based on the maximum a-posteriori probability (MAP)
criterion can be implemented using a radial basis function (RBF) network.

The RBF equalizer mitigates all the ISI, CCI and BN interference and
provides the minimum BER plot. But it has one drawback: as the input order
is increased, the number of centres of the network increases, making the
network more complicated.
REFERENCES
• Haykin, S., "Adaptive Filter Theory", Prentice Hall, 2005.
• Haykin, S., "Neural Networks", PHI, 2003.
• Kavita Burse, R. N. Yadav, and S. C. Shrivastava, "Channel Equalization
  Using Neural Networks: A Review", IEEE Transactions on Systems, Man, and
  Cybernetics, Part B: Cybernetics, Vol. 40, No. 3, May 2010.
• Jagdish C. Patra, Ranendra N. Pal, Rameswar Baliarsingh, and Ganapati
  Panda, "Nonlinear Channel Equalization for QAM Constellation Using
  Artificial Neural Network", IEEE Transactions on Systems, Man, and
  Cybernetics, Part B: Cybernetics, Vol. 29, No. 2, April 1999.
• Amalendu Patnaik, Dimitrios E. Anagnostou, Rabindra K. Mishra, Christos
  G. Christodoulou, and J. C. Lyke, "Applications of Neural Networks in
  Wireless Communications", IEEE Antennas and Propagation Magazine,
  Vol. 46, No. 3, June 2004.
• R. Rojas, "Neural Networks", Springer-Verlag, Berlin, 1996.
• http://www.geocities.com/SiliconValley/Lakes/6007/Neural.htm
