International Journal on Computational Sciences & Applications (IJCSA) Vol.4, No.1, February 2014
DOI:10.5121/ijcsa.2014.4102
Hybrid HMM/DTW based Speech Recognition
with Kernel Adaptive Filtering Method
Siva Prasad Nandyala1 and T. Kishore Kumar2
1Research Scholar, 2Associate Professor, Department of E.C.E., NIT Warangal, Andhra Pradesh, India
ABSTRACT
In this paper we propose a new approach to speech recognition that applies a kernel adaptive filter for speech enhancement and a hybrid HMM/DTW method for recognition. Noise removal is important in many applications, such as telephone conversation and speech recognition. In the recent past, kernel methods have shown good results in speech processing applications. MFCC features are used in the recognition process: an HMM system is trained on the speech features, and the DTW method is used for classification. Experimental results show a relative improvement in recognition rate compared to the traditional methods.
KEYWORDS
Speech Enhancement, Hidden Markov Model, Dynamic Time Warping, Kernel Least Mean Square, Least
Mean Square, Signal to Noise Ratio
1. INTRODUCTION
In many high-performance speech recognition systems, Hidden Markov Models (HMMs) are used to model the acoustic behavior of basic speech units such as words or phones. The presence of background noise can degrade the quality of the speech signal in many applications, for example speech recognition systems, conference systems, mobile communications, and hearing aids [1][12]. Speech enhancement in these applications is a challenging task, mainly because the nature and characteristics of the noise may change dramatically over time and from application to application; an effective speech enhancement technique that recovers the clean speech from its corrupted observation is therefore necessary [11][13]. In these applications, the speech signal must be estimated from the observed noisy one.
Many different noise reduction algorithms have been proposed in the literature for speech enhancement, for example spectral subtraction, Wiener filtering, minimum mean square error (MMSE) based speech estimation, the Least Mean Square algorithm, Bayesian estimation, and Kalman filtering.
2. ADAPTIVE FILTERS
Many approaches to speech enhancement are present in the literature. For the last 20 years, adaptive filters have been effective and popular approaches for speech enhancement. Their advantage is that they detect time-varying potentials and track the dynamic variations of the signals.
Before proceeding to LMS and KLMS, an introduction to adaptive filters is given. Since the name is "adaptive filter", it is important to understand the meaning of the terms "adaptive" and "filter" in a general sense. The adjective "adaptive" can be understood by considering a system that attempts to adjust itself so as to respond to some phenomenon taking place in its surroundings. In other words, the system adjusts its parameters with the aim of meeting some well-defined goal or target that depends upon the state of the system as well as its surroundings; this is what "adaptive" means. Furthermore, there must be a set of steps or a defined procedure by which this process of adaptation is carried out. Finally, the system that carries out or undergoes the process of adaptation is called, by the more technical name, a "filter".
The adaptive filter has several benefits, including lower processing delay and better tracking of the trajectory of nonstationary signals. These are important characteristics in applications such as noise estimation, echo cancellation, echo-cancellation delay estimation, and channel equalization in mobile telecommunications, where low delay and fast tracking of time-varying processes and time-varying environments are important objectives [2].
The mathematical definition of the adaptive filter output is given as follows:

y(k) = ∑_{n=0}^{M−1} W_n x(k−n) = W^T x(k) eq. (1)

in which k is the time index, y is the filter output, x is the filter input, and W_n are the filter coefficients.
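To make eq. (1) concrete, here is a minimal sketch (a hypothetical Python illustration, not part of the paper) that computes the filter output for one time index:

```python
def fir_output(w, x, k):
    """Compute y(k) = sum_{n=0}^{M-1} W_n * x(k - n), as in eq. (1).

    w : list of M filter coefficients W_n
    x : list of input samples
    k : time index (assumed k >= len(w) - 1 so all taps are available)
    """
    return sum(w[n] * x[k - n] for n in range(len(w)))

# Usage: a 3-tap moving-average filter (made-up coefficients and signal)
w = [1 / 3, 1 / 3, 1 / 3]
x = [3.0, 6.0, 9.0, 12.0]
y = fir_output(w, x, 2)   # average of x[0..2]
```

Adaptive algorithms such as LMS adjust the coefficients w at every time step; the fixed-coefficient computation above is only the inner filtering operation.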
Figure 1 below shows the block diagram that forms the foundation for a particular adaptive filter realisation [3], known as Least Mean Squares (LMS). The main concept of the block diagram is that a variable filter extracts an estimate of the desired signal.
Figure 1: Adaptive Filter
From the block diagram, the following assumptions are made:
1) The input signal x(n) is the sum of a desired signal d(n) and interfering noise v(n):

x(n) = d(n) + v(n) eq. (2)
The variable filter has a Finite Impulse Response (FIR) structure. For such structures the impulse response is equal to the filter coefficients. For a filter of order p, the coefficients are given as

w_n = [w_n(0), w_n(1), ..., w_n(p)] eq. (3)
2) The second assumption concerns the error signal (the cost function), which is the difference between the desired signal and the estimated signal:

e(n) = d(n) − d̂(n) eq. (4)
The variable filter estimates the desired signal by convolving the input signal with the impulse response. In vector notation this is expressed as

d̂(n) = w_n * x(n) eq. (5)

where

x(n) = [x(n), x(n−1), ..., x(n−p)] eq. (6)
in which x(n) is an input signal vector. The variable filter updates the filter coefficients at every time instant:

w_{n+1} = w_n + ∇w_n eq. (7)

where ∇w_n is a correction factor for the filter coefficients.
This correction factor is generated by the adaptive algorithm based on the error and input signals.
The Least Mean Square (LMS) algorithm was initially developed by Widrow and Hoff in 1959 through their studies of pattern recognition. It has since become one of the most widely used adaptive filtering algorithms [4]. LMS is a stochastic gradient-based algorithm that uses the gradient vector of the filter tap weights to converge on the optimal Wiener solution. It is well known and widely used owing to its computational simplicity; it is this simplicity that has made it the benchmark against which all other adaptive filtering algorithms are judged.
The LMS algorithm is a linear adaptive filter algorithm that consists of two basic processes:
1. A filtering process, which involves:
a. calculating the linear filter output in response to an input signal;
b. generating an estimation error by comparing this output with the desired response.
2. An adaptive process, which involves the automatic adjustment of the filter parameters in accordance with the estimation error.
The combination of these two processes constitutes a feedback loop. First, there is a transversal filter, around which the LMS algorithm is built; this component performs the filtering process. Second, there is a mechanism for adaptively controlling the tap weights of the transversal filter. With each iteration of the LMS algorithm, the tap weights of the adaptive filter are updated according to the following formula:
w(n+1) = w(n) + 2μ e(n) x(n) eq. (8)
Here x(n) is the input vector of time-delayed input values,

x(n) = [x(n), x(n−1), x(n−2), ..., x(n−N+1)] eq. (9)
The vector w(n) = [w0(n), w1(n), w2(n), ..., wN−1(n)] represents the coefficients of the adaptive FIR filter tap-weight vector at time n. The parameter μ, known as the step size, is a small positive constant that controls the influence of the update term. Selecting an appropriate value for μ is imperative to the performance of the LMS algorithm: if it is too small, the adaptive filter takes too long to converge to the optimal solution; if it is too large, the adaptive filter becomes unstable and its output diverges. LMS is the simplest algorithm to implement and is stable when the step size is chosen suitably.
The main steps of the LMS algorithm are as follows. The LMS equations are developed based upon the theory of the Wiener solution for the optimal filter tap weights, W_o, together with the steepest-descent method. This is a recursive formula that updates the filter coefficients using the current tap-weight vector and the current gradient of the cost function with respect to the filter tap-weight coefficient vector, ∇ξ:
W(n+1) = W(n) − μ∇ξ(n) eq. (10)

where

ξ(n) = E[e²(n)] eq. (11)
As the negative gradient vector points in the direction of steepest descent for the N-dimensional quadratic cost function, each recursion shifts the filter coefficients closer to their optimal values, which correspond to the minimum possible value of the cost function ξ(n).
The LMS algorithm is derived as a stochastic implementation of the steepest-descent algorithm. Here the expectation of the squared error is not known, so the instantaneous value is used as an estimate. The steepest-descent algorithm then becomes

W(n+1) = W(n) − μ∇ξ(n) eq. (12)

where

ξ(n) = e²(n) eq. (13)
The gradient of the cost function, ∇ξ(n), can be expressed in an alternative form through the following steps:

∇ξ(n) = ∇{e²(n)} eq. (14)
= ∂e²(n)/∂W
= 2e(n) ∂e(n)/∂W
= 2e(n) ∂(d(n) − y(n))/∂W
= −2e(n) x(n)
Substituting this into the steepest-descent algorithm of eq. (12), we arrive at the recursion for the LMS adaptive algorithm:

W(n+1) = W(n) + 2μ e(n) x(n) eq. (15)

The popularity of the LMS algorithm is mainly due to its computational simplicity, which makes it easier to implement than other commonly used adaptive algorithms.
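The complete LMS loop described above (filtering, error computation, and the tap-weight update of eq. (15)) can be sketched as follows. This is a hypothetical illustration: the 2-tap system, signal length, and step size are made-up choices for demonstration, not taken from the paper:

```python
import random

def lms(x, d, num_taps, mu):
    """LMS adaptive filter using the update w(n+1) = w(n) + 2*mu*e(n)*x(n)."""
    w = [0.0] * num_taps
    for n in range(num_taps - 1, len(x)):
        xn = [x[n - i] for i in range(num_taps)]              # x(n) = [x(n), ..., x(n-N+1)]
        y = sum(wi * xi for wi, xi in zip(w, xn))             # filter output
        e = d[n] - y                                          # estimation error
        w = [wi + 2 * mu * e * xi for wi, xi in zip(w, xn)]   # tap-weight update, eq. (15)
    return w

# Identify a made-up 2-tap system h = [0.5, -0.3] from clean input/output pairs
random.seed(0)
x = [random.uniform(-1.0, 1.0) for _ in range(2000)]
d = [0.5 * x[n] - (0.3 * x[n - 1] if n > 0 else 0.0) for n in range(len(x))]
w = lms(x, d, num_taps=2, mu=0.05)   # w approaches [0.5, -0.3]
```

Because the desired signal here is noiseless and the input is persistently exciting, the tap weights converge essentially to the true system; with noisy data the step size μ trades convergence speed against steady-state misadjustment, as discussed above.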
3. KERNEL ADAPTIVE FILTERS
In the proposed approach we use the kernel methodology, a nonparametric modeling technique in which the input data are transformed into a high-dimensional feature space via a reproducing kernel, such that the inner-product operation in the feature space is computed efficiently through kernel evaluations. Appropriate linear methods are subsequently applied to the transformed data. As long as an algorithm can be formulated in terms of inner products (or equivalent kernel evaluations), there is no need to perform computations in the high-dimensional feature space [5].
The kernel adaptive filtering technique employed in the proposed approach extends adaptive filtering to general nonlinear problems; it is a natural generalization of linear adaptive filtering in the reproducing kernel Hilbert space (RKHS) domain. Kernel adaptive filters, like other kernel methods, are closely related to some artificial neural networks such as radial basis function networks and regularization networks [6]. The KLMS algorithm is a stochastic gradient method for solving statistical estimation problems in an RKHS. Because the update equation can be written in terms of inner products, KLMS can be computed efficiently in the input space. The good approximation ability of KLMS stems from the fact that the transformed data encompass potentially infinitely many different features of the original data.
A linear finite impulse response filter is assumed in the LMS algorithm. If the mapping between the input and the desired signal is highly nonlinear, poor performance is expected from LMS [7]. Therefore the kernel-induced mapping is used to transform the input u(i) into a high-dimensional feature space F as ϕ(u(i)). ω^T ϕ(u) is a far more powerful model than w^T u, owing to the difference in dimensionality (more importantly, the richness of representation) of u and ϕ(u). Hence, finding ω through stochastic gradient descent may prove as effective a method for nonlinear filtering as LMS is for linear problems. Denote ϕ(i) = ϕ(u(i)) for simplicity.

Applying the LMS recursion to the transformed example sequence {ϕ(i), d(i)} yields

w(0) = 0 eq. (16)

e(i) = d(i) − w(i−1)^T ϕ(i) eq. (17)

w(i) = w(i−1) + η e(i) ϕ(i) eq. (18)

where w(i) denotes the estimate (at iteration i) of the weight vector in F. As the dimensionality of ϕ is high, repeated application of the weight-update equation (18) through the iterations yields

w(i) = w(i−1) + η e(i) ϕ(i)
= [w(i−2) + η e(i−1) ϕ(i−1)] + η e(i) ϕ(i)
= ...
= w(0) + η ∑_{j=1}^{i} e(j) ϕ(j)
= η ∑_{j=1}^{i} e(j) ϕ(j), by the assumption w(0) = 0 eq. (19)

That is, after i steps of training, the weight estimate is expressed as a linear combination of all the previous and present (transformed) inputs, weighted by the prediction errors (and scaled by η). More importantly, the output of the system for a new input u′ can be expressed entirely in terms of inner products between transformed inputs:

w(i)^T ϕ(u′) = η ∑_{j=1}^{i} e(j) [ϕ(u(j))^T ϕ(u′)] eq. (20)

By the kernel trick, the filter output can be computed efficiently in the input space by kernel evaluations:

w(i)^T ϕ(u′) = η ∑_{j=1}^{i} e(j) κ(u(j), u′) eq. (21)

Compared with the LMS weight-update equation (15), the update here is carried out without explicit reference to the weights.
If f_i denotes the estimate of the input–output nonlinear mapping at time i, we have the following sequential learning rule for the new algorithm:

f_{i−1} = η ∑_{j=1}^{i−1} e(j) κ(u(j), ·) eq. (22)

f_{i−1}(u(i)) = η ∑_{j=1}^{i−1} e(j) κ(u(j), u(i)) eq. (23)

e(i) = d(i) − f_{i−1}(u(i)) eq. (24)

f_i = f_{i−1} + η e(i) κ(u(i), ·) eq. (25)
The equations above constitute the KLMS method: it is LMS performed in the RKHS, with filtering done by kernel evaluation. KLMS allocates a new kernel unit for each new training datum, with the input u(i) as the center and η e(i) as the coefficient. These coefficients and centers are stored in memory throughout training [8].
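The KLMS procedure of eqs. (22)–(25) can be sketched as follows. This is a hypothetical illustration: the Gaussian kernel, its width, the step size η, and the sin(u) target are demonstration choices, not taken from the paper:

```python
import math

def gauss_kernel(a, b, sigma):
    return math.exp(-((a - b) ** 2) / (2.0 * sigma ** 2))

class KLMS:
    """Kernel LMS: f_i = f_{i-1} + eta * e(i) * kappa(u(i), .), as in eq. (25)."""
    def __init__(self, eta, sigma):
        self.eta, self.sigma = eta, sigma
        self.centers = []   # stored inputs u(j)
        self.coeffs = []    # stored coefficients eta * e(j)

    def predict(self, u):
        # f_{i-1}(u) = sum_j eta * e(j) * kappa(u(j), u), as in eq. (23)
        return sum(c * gauss_kernel(uj, u, self.sigma)
                   for uj, c in zip(self.centers, self.coeffs))

    def update(self, u, d):
        e = d - self.predict(u)            # e(i) = d(i) - f_{i-1}(u(i)), eq. (24)
        self.centers.append(u)             # new kernel unit centered at u(i)
        self.coeffs.append(self.eta * e)   # with coefficient eta * e(i)
        return e

# Learn the nonlinear mapping d = sin(u) online from cycling inputs
f = KLMS(eta=0.2, sigma=0.5)
for i in range(400):
    u = (i % 63) / 10.0                    # inputs sweep [0, 6.2] repeatedly
    f.update(u, math.sin(u))
residual = abs(f.predict(3.0) - math.sin(3.0))   # residual at a training point
```

Note how the network grows by one center per sample, exactly as described above; practical KLMS variants limit this growth with sparsification criteria.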
4. SPEECH RECOGNITION METHODS
4.1 Dynamic time warping (DTW)
Speech is a time-dependent process. Hence, utterances of the same word will have different durations, and utterances of the same word with the same duration will differ in the middle, because different parts of the word are spoken at different rates. To obtain a global distance between two speech patterns (represented as sequences of vectors), a time alignment must be performed.
Dynamic Time Warping (DTW) is a technique that finds the optimal alignment between two time series when one series may be "warped" non-linearly by stretching or shrinking it along its time axis. This warping between two words can then be used to find corresponding regions between them or to determine their similarity. Dynamic time warping is commonly used in speech recognition to determine whether two waveforms represent the same spoken phrase [10]. In addition to speech recognition, dynamic time warping has been found useful in many other disciplines, including data mining, gesture recognition, robotics, manufacturing, and medicine.
This problem is illustrated in figure 2 below, in which a "time-time" matrix is used to visualize the alignment. As in all the time-alignment examples, the reference pattern (template) goes up the side and the input pattern goes along the bottom. In this illustration the input "SsPEEhH" is a noisy version of the template "SPEECH". The idea is that 'h' is a closer match to 'H' than anything else in the template. The input "SsPEEhH" is matched against all templates in the system's repository. The best matching template is the one with the lowest-distance path aligning the input pattern to the template. A simple global distance score for a path is simply the sum of the local distances that make up the path.
Figure 2: Dynamic Time Warping Algorithm for Search Path
To avoid excessive computation, certain restrictions are applied to the direction of propagation. The constraints are as follows:
• Matching paths cannot go backwards in time.
• Every frame in the input must be used in a matching path.
• Local distance scores are combined by adding to give a global distance.
This method is known as Dynamic Programming (DP). When applied to template-based speech recognition, it is usually referred to as Dynamic Time Warping (DTW). DP is guaranteed to find the lowest-distance path through the matrix while minimizing the amount of computation. The DP method operates in a time-synchronous manner: each column of the time-time matrix is considered in succession (equivalent to processing the input frame by frame) so that, for a template of length N, the maximum number of paths being considered at any time is N.

If D(i,j) is the global distance up to (i,j) and d(i,j) is the local distance at (i,j), then

D(i,j) = min[D(i−1,j−1), D(i−1,j), D(i,j−1)] + d(i,j) eq. (26)

Given the initial condition D(1,1) = d(1,1), we have the basis for a recursive algorithm for computing D(i,j). The final global distance D(n,N) gives the overall matching score of the template against the input. The input word is then recognized as the word corresponding to the template with the lowest matching score.
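The DP recursion of eq. (26) translates directly into code. The sketch below is a hypothetical minimal example in which scalar frames and an absolute-difference local distance stand in for MFCC feature vectors:

```python
def dtw_distance(template, inp):
    """Global distance via D(i,j) = min[D(i-1,j-1), D(i-1,j), D(i,j-1)] + d(i,j), eq. (26)."""
    INF = float("inf")
    n, m = len(template), len(inp)
    # D[0][0] = 0 encodes the initial condition D(1,1) = d(1,1)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            local = abs(template[i - 1] - inp[j - 1])    # local distance d(i,j)
            D[i][j] = min(D[i - 1][j - 1], D[i - 1][j], D[i][j - 1]) + local
    return D[n][m]

# A time-stretched version of the template vs. an unrelated pattern
template = [1.0, 2.0, 3.0, 2.0, 1.0]
stretched = [1.0, 1.0, 2.0, 3.0, 3.0, 2.0, 1.0]
other = [3.0, 3.0, 1.0, 1.0, 2.0, 2.0, 3.0]
d_match = dtw_distance(template, stretched)   # 0.0: perfect warped alignment
d_other = dtw_distance(template, other)
```

Because DTW allows non-linear stretching along the time axis, the stretched version of the template matches with a lower global distance than an unrelated pattern of the same length.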
4.2 Hidden Markov Model (HMM)
The Hidden Markov Model (HMM) is a powerful statistical tool for modeling generative sequences that can be characterised by an underlying process generating an observable sequence [9]. HMMs have found application in many areas of signal processing, and in particular speech processing, but have also been applied successfully to low-level NLP (Natural Language Processing) tasks such as phrase chunking, extracting target information from documents, and part-of-speech tagging.
The technique is employed to train a model that, in our case, represents an utterance of a word. This model is later used in testing an utterance by computing the probability that the model generated the given sequence of vectors. The essential difference between an Observable Markov Model (OMM) and a Hidden Markov Model (HMM) is that in the observable model the output state is completely determined at each time instant t, whereas in the Hidden Markov Model the state at each time t must be inferred from observations: an observation is a probabilistic function of a state.
The basic Hidden Markov Model is represented by λ = (π, A, B).
π = initial state distribution vector.
A = State transition probability matrix.
B = continuous observation probability density function matrix.
The three fundamental problems in Hidden Markov Model design are as follows:
Problem one – Recognition:
Given the observation sequence O = (o1, o2, ..., oT) and the model λ = (π, A, B), how do we compute the probability of the observation sequence given the model? That is, how is P(O|λ) computed efficiently?
Problem two – Optimal state sequence:
Given the observation sequence O = (o1, o2, ..., oT) and the model λ = (π, A, B), how is a corresponding state sequence q = (q1, q2, ..., qT) chosen to be optimal in some sense?
Problem three – Adjustment:
How are the probability measures λ = (π, A, B) adjusted to maximize P(O|λ)?
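Problem one is typically solved with the forward algorithm. Below is a minimal discrete-observation sketch; the two-state model and its probabilities are made-up values for illustration, not from the paper:

```python
def forward(pi, A, B, obs):
    """Compute P(O|lambda) by the forward algorithm for a discrete-observation HMM.

    pi  : initial state distribution, pi[s]
    A   : state transition probabilities, A[s][t]
    B   : observation probabilities, B[s][o]
    obs : observation sequence as symbol indices
    """
    n = len(pi)
    alpha = [pi[s] * B[s][obs[0]] for s in range(n)]              # initialisation
    for o in obs[1:]:                                             # induction
        alpha = [sum(alpha[s] * A[s][t] for s in range(n)) * B[t][o]
                 for t in range(n)]
    return sum(alpha)                                             # termination

# A made-up two-state, two-symbol model
pi = [0.6, 0.4]
A = [[0.7, 0.3], [0.4, 0.6]]
B = [[0.9, 0.1], [0.2, 0.8]]
p = forward(pi, A, B, [0, 1, 0])   # P(O|lambda)
```

In a real recogniser the discrete observation probabilities B[s][o] would be replaced by continuous densities evaluated on MFCC vectors, and probabilities would be kept in the log domain for numerical stability.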
Given a sequence of acoustic vectors y = {y1, y2, ..., yT}, the speech recognition task is to find the optimal word sequence W = {w1, w2, ..., wL} that maximizes the probability P(W|y), i.e. the probability of the word sequence W given the acoustic feature vectors y. This probability is normally computed with the help of Bayes' theorem:

P(W|y) = P(y|W) P(W) / P(y) eq. (27)

In equation 27, P(W) is the probability of the word sequence W, which is named the language model probability and is computed using a language model. The probability P(y|W), referred to as the acoustic model probability, is the probability that the acoustic feature vectors y are produced by the word sequence W; it is calculated using the acoustic model [14].
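Since P(y) in eq. (27) does not depend on W, recognition amounts to maximizing P(y|W)·P(W) over the candidate word sequences. A toy illustration (all probabilities below are invented for demonstration):

```python
# Made-up scores for three candidate words given one utterance
acoustic = {"siva": 0.02, "sunny": 0.05, "venu": 0.01}   # P(y|W), from the acoustic model
language = {"siva": 0.5, "sunny": 0.3, "venu": 0.2}      # P(W), from the language model

# argmax over W of P(y|W) * P(W); the common factor P(y) can be dropped
best = max(acoustic, key=lambda w: acoustic[w] * language[w])
```

Here the language model re-ranks the acoustic scores: the recognised word is the one whose product of acoustic and language probabilities is largest, not necessarily the one with the best acoustic score alone.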
4.3 Hybrid HMM/DTW Approach
We followed a different approach, namely the extension of a state-of-the-art system that achieves very good recognition rates: an HMM is trained on the speech features, and the DTW method is used for recognition.
5. EXPERIMENTS AND RESULTS
In this section, the speech recognition performance of the hybrid model on ten words is described. The spoken words are trained with the HMM model, and the DTW method is used to classify the data. The noise present in the speech signal is removed using the KLMS algorithm, which gives a 2.1 dB improvement in SNR over the LMS algorithm. Under noisy conditions, the recognition rate obtained with the hybrid model is 94%, compared to 90% using the individual method.
Table 1: Comparison of speech recognition rate for isolated words
Word    Recognition rate without hybrid model (no noise removal)    Recognition rate (hybrid HMM/DTW)
Siva 100 100
Sunny 90 90
Venkat 90 90
Santhosh 80 80
Sanjeev 90 90
Venu 90 100
Sai 90 100
Raju 90 100
Keerthi 90 100
Karthik 90 90
Figure 3 shows the simulation results of speech recognition for the given words.
Figure 3: Speech Recognition results using the hybrid methods with kernel adaptive filtering method.
6. CONCLUSIONS AND FUTURE SCOPE
In this work we applied a new approach to speech recognition using a hybrid HMM/DTW method, with a kernel adaptive filtering method for speech enhancement. The experimental results show that the proposed approach gives a better recognition rate than the traditional methods. In future work, Artificial Neural Networks and Support Vector Machines can be used for classification to improve the recognition rate.
REFERENCES
[1] Oliver Gauci, Carl J. Debono, Paul Micallef, "A Reproducing Kernel Hilbert Space Approach for
Speech Enhancement" ISCCSP 2008, Malta, 12-14 March 2008
[2] Saeed V. Vaseghi, "Advanced Digital Signal Processing and Noise Reduction", John Wiley and Sons.
[3] http://en.wikipedia.org/wiki/Adaptive_filter
[4] Simon Haykin ,"Adaptive Filter Theory", Prentice Hall of India, 4th Edition.
[5] http://en.wikipedia.org/wiki/Kernel_adaptive_filter
[6] Weifeng Liu, Puskal P. Pokharel, and Jose C. Principe,"The Kernel Least-Mean-Square Algorithm"
IEEE Transactions on signal processing, vol 56, no 2, February 2008.
[7] Jose C. Principe, Weifeng Liu, Simon Haykin, "Kernel Adaptive Filtering: A Comprehensive
Introduction", Wiley, March 2010.
[8] Dr. T. Kishore Kumar, N. Siva Prasad, “Speech Enhancement using Kernel Adaptive Filtering
Method”, IEEE COMCAS November 2011 in Tel Aviv, Israel.
[9] Justin, J. and I. Vennila, "A Hybrid Speech Recognition System with Hidden Markov Model and Radial Basis Function Neural Network", American Journal of Applied Sciences, Volume 10, Issue 10, Pages 1148-1153.
[10] R. Shashikant and Daulappa G.Bhalke,” Speech Recognition using Dynamic Time Warping”, CiiT
International Journal of Digital Signal Processing Print: ISSN 0974 – 9705 & Online: ISSN 0974 –
9594
[11] J. Baker, L. Deng, J. Glass, S. Khudanpur, C. Lee, N. Morgan, and D. O'Shaughnessy, "Research
Developments and Directions in Speech Recognition and Understanding, Part 1," IEEE Signal
Processing Magazine, 75-80, May 2009.
[12] Lawrence R. Rabiner, "A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition", Proceedings of the IEEE, 77 (2), 1989, pp. 257–286.
[13] C. Lee, J. Glass, and O. Ghitza, "An Efferent-Inspired Auditory Model Front-End for Speech
Recognition," Proc. Interspeech, Florence, 2011.
Security & privacy issues of cloud & grid computing networksijcsa
 
Augmented split –protocol; an ultimate d do s defender
Augmented split –protocol; an ultimate d do s defenderAugmented split –protocol; an ultimate d do s defender
Augmented split –protocol; an ultimate d do s defenderijcsa
 
Text independent speaker recognition using combined lpc and mfc coefficients
Text independent speaker recognition using combined lpc and mfc coefficientsText independent speaker recognition using combined lpc and mfc coefficients
Text independent speaker recognition using combined lpc and mfc coefficientseSAT Publishing House
 
AN ENHANCED EDGE ADAPTIVE STEGANOGRAPHY APPROACH USING THRESHOLD VALUE FOR RE...
AN ENHANCED EDGE ADAPTIVE STEGANOGRAPHY APPROACH USING THRESHOLD VALUE FOR RE...AN ENHANCED EDGE ADAPTIVE STEGANOGRAPHY APPROACH USING THRESHOLD VALUE FOR RE...
AN ENHANCED EDGE ADAPTIVE STEGANOGRAPHY APPROACH USING THRESHOLD VALUE FOR RE...ijcsa
 
Mining sequential patterns for interval based
Mining sequential patterns for interval basedMining sequential patterns for interval based
Mining sequential patterns for interval basedijcsa
 
Probabilistic models (part 1)
Probabilistic models (part 1)Probabilistic models (part 1)
Probabilistic models (part 1)KU Leuven
 
B@ S Sports Bar
B@ S  Sports  BarB@ S  Sports  Bar
B@ S Sports BarTom Aikins
 

En vedette (20)

MOVEMENT ASSISTED COMPONENT BASED SCALABLE FRAMEWORK FOR DISTRIBUTED WIRELESS...
MOVEMENT ASSISTED COMPONENT BASED SCALABLE FRAMEWORK FOR DISTRIBUTED WIRELESS...MOVEMENT ASSISTED COMPONENT BASED SCALABLE FRAMEWORK FOR DISTRIBUTED WIRELESS...
MOVEMENT ASSISTED COMPONENT BASED SCALABLE FRAMEWORK FOR DISTRIBUTED WIRELESS...
 
Computation grid as a connected world
Computation grid as a connected worldComputation grid as a connected world
Computation grid as a connected world
 
Assessment of wind resource and
Assessment of wind resource andAssessment of wind resource and
Assessment of wind resource and
 
Efficient lung air volume estimation using
Efficient lung air volume estimation usingEfficient lung air volume estimation using
Efficient lung air volume estimation using
 
Quality estimation of machine translation outputs through stemming
Quality estimation of machine translation outputs through stemmingQuality estimation of machine translation outputs through stemming
Quality estimation of machine translation outputs through stemming
 
Performance analysis of logic
Performance analysis of logicPerformance analysis of logic
Performance analysis of logic
 
Effect of mesh grid structure in reducing hot carrier effect of nmos device s...
Effect of mesh grid structure in reducing hot carrier effect of nmos device s...Effect of mesh grid structure in reducing hot carrier effect of nmos device s...
Effect of mesh grid structure in reducing hot carrier effect of nmos device s...
 
REAL TIME SPECIAL EFFECTS GENERATION AND NOISE FILTRATION OF AUDIO SIGNAL USI...
REAL TIME SPECIAL EFFECTS GENERATION AND NOISE FILTRATION OF AUDIO SIGNAL USI...REAL TIME SPECIAL EFFECTS GENERATION AND NOISE FILTRATION OF AUDIO SIGNAL USI...
REAL TIME SPECIAL EFFECTS GENERATION AND NOISE FILTRATION OF AUDIO SIGNAL USI...
 
Extended online graph edge coloring
Extended online graph edge coloringExtended online graph edge coloring
Extended online graph edge coloring
 
Evaluation the efficiency of cuckoo
Evaluation the efficiency of cuckooEvaluation the efficiency of cuckoo
Evaluation the efficiency of cuckoo
 
Security & privacy issues of cloud & grid computing networks
Security & privacy issues of cloud & grid computing networksSecurity & privacy issues of cloud & grid computing networks
Security & privacy issues of cloud & grid computing networks
 
Augmented split –protocol; an ultimate d do s defender
Augmented split –protocol; an ultimate d do s defenderAugmented split –protocol; an ultimate d do s defender
Augmented split –protocol; an ultimate d do s defender
 
Text independent speaker recognition using combined lpc and mfc coefficients
Text independent speaker recognition using combined lpc and mfc coefficientsText independent speaker recognition using combined lpc and mfc coefficients
Text independent speaker recognition using combined lpc and mfc coefficients
 
AN ENHANCED EDGE ADAPTIVE STEGANOGRAPHY APPROACH USING THRESHOLD VALUE FOR RE...
AN ENHANCED EDGE ADAPTIVE STEGANOGRAPHY APPROACH USING THRESHOLD VALUE FOR RE...AN ENHANCED EDGE ADAPTIVE STEGANOGRAPHY APPROACH USING THRESHOLD VALUE FOR RE...
AN ENHANCED EDGE ADAPTIVE STEGANOGRAPHY APPROACH USING THRESHOLD VALUE FOR RE...
 
Mining sequential patterns for interval based
Mining sequential patterns for interval basedMining sequential patterns for interval based
Mining sequential patterns for interval based
 
Speaker recognition.
Speaker recognition.Speaker recognition.
Speaker recognition.
 
Probabilistic models (part 1)
Probabilistic models (part 1)Probabilistic models (part 1)
Probabilistic models (part 1)
 
B@ S Sports Bar
B@ S  Sports  BarB@ S  Sports  Bar
B@ S Sports Bar
 
A S T A S394 E S P
A S T  A S394 E S PA S T  A S394 E S P
A S T A S394 E S P
 
Introduccion
IntroduccionIntroduccion
Introduccion
 

Similaire à IJCSA Speech Recognition Hybrid HMM DTW Kernel Filtering

journal paper publication
journal paper publicationjournal paper publication
journal paper publicationchaitanya451336
 
Performance Analysis of Acoustic Echo Cancellation Techniques
Performance Analysis of Acoustic Echo Cancellation TechniquesPerformance Analysis of Acoustic Echo Cancellation Techniques
Performance Analysis of Acoustic Echo Cancellation TechniquesIJERA Editor
 
Paper id 26201481
Paper id 26201481Paper id 26201481
Paper id 26201481IJRAT
 
IJCER (www.ijceronline.com) International Journal of computational Engineerin...
IJCER (www.ijceronline.com) International Journal of computational Engineerin...IJCER (www.ijceronline.com) International Journal of computational Engineerin...
IJCER (www.ijceronline.com) International Journal of computational Engineerin...ijceronline
 
Comparison of fx lms and n fxlms algorithms in matlab using active vibration ...
Comparison of fx lms and n fxlms algorithms in matlab using active vibration ...Comparison of fx lms and n fxlms algorithms in matlab using active vibration ...
Comparison of fx lms and n fxlms algorithms in matlab using active vibration ...IJARIIT
 
P ERFORMANCE A NALYSIS O F A DAPTIVE N OISE C ANCELLER E MPLOYING N LMS A LG...
P ERFORMANCE A NALYSIS  O F A DAPTIVE N OISE C ANCELLER E MPLOYING N LMS A LG...P ERFORMANCE A NALYSIS  O F A DAPTIVE N OISE C ANCELLER E MPLOYING N LMS A LG...
P ERFORMANCE A NALYSIS O F A DAPTIVE N OISE C ANCELLER E MPLOYING N LMS A LG...ijwmn
 
A Decisive Filtering Selection Approach For Improved Performance Active Noise...
A Decisive Filtering Selection Approach For Improved Performance Active Noise...A Decisive Filtering Selection Approach For Improved Performance Active Noise...
A Decisive Filtering Selection Approach For Improved Performance Active Noise...IOSR Journals
 
Vlsi implementation of adaptive kalman filter for
Vlsi implementation of adaptive kalman filter forVlsi implementation of adaptive kalman filter for
Vlsi implementation of adaptive kalman filter foreSAT Publishing House
 
Echo Cancellation Algorithms using Adaptive Filters: A Comparative Study
Echo Cancellation Algorithms using Adaptive Filters: A Comparative StudyEcho Cancellation Algorithms using Adaptive Filters: A Comparative Study
Echo Cancellation Algorithms using Adaptive Filters: A Comparative Studyidescitation
 
A Text-Independent Speaker Identification System based on The Zak Transform
A Text-Independent Speaker Identification System based on The Zak TransformA Text-Independent Speaker Identification System based on The Zak Transform
A Text-Independent Speaker Identification System based on The Zak TransformCSCJournals
 
Channel Equalisation
Channel EqualisationChannel Equalisation
Channel EqualisationPoonan Sahoo
 
Comparison of different Sub-Band Adaptive Noise Canceller with LMS and RLS
Comparison of different Sub-Band Adaptive Noise Canceller with LMS and RLSComparison of different Sub-Band Adaptive Noise Canceller with LMS and RLS
Comparison of different Sub-Band Adaptive Noise Canceller with LMS and RLSijsrd.com
 
F EATURE S ELECTION USING F ISHER ’ S R ATIO T ECHNIQUE FOR A UTOMATIC ...
F EATURE  S ELECTION USING  F ISHER ’ S  R ATIO  T ECHNIQUE FOR  A UTOMATIC  ...F EATURE  S ELECTION USING  F ISHER ’ S  R ATIO  T ECHNIQUE FOR  A UTOMATIC  ...
F EATURE S ELECTION USING F ISHER ’ S R ATIO T ECHNIQUE FOR A UTOMATIC ...IJCI JOURNAL
 
1-A Fault Tolerant Integrated Navigation Scheme Realized through Online Tunin...
1-A Fault Tolerant Integrated Navigation Scheme Realized through Online Tunin...1-A Fault Tolerant Integrated Navigation Scheme Realized through Online Tunin...
1-A Fault Tolerant Integrated Navigation Scheme Realized through Online Tunin...Muhammad Ushaq
 
Performance Variation of LMS And Its Different Variants
Performance Variation of LMS And Its Different VariantsPerformance Variation of LMS And Its Different Variants
Performance Variation of LMS And Its Different VariantsCSCJournals
 
DSP_2018_FOEHU - Lec 05 - Digital Filters
DSP_2018_FOEHU - Lec 05 - Digital FiltersDSP_2018_FOEHU - Lec 05 - Digital Filters
DSP_2018_FOEHU - Lec 05 - Digital FiltersAmr E. Mohamed
 
Adaptive Noise Cancellation using Multirate Techniques
Adaptive Noise Cancellation using Multirate TechniquesAdaptive Noise Cancellation using Multirate Techniques
Adaptive Noise Cancellation using Multirate TechniquesIJERD Editor
 

Similaire à IJCSA Speech Recognition Hybrid HMM DTW Kernel Filtering (20)

journal paper publication
journal paper publicationjournal paper publication
journal paper publication
 
Performance Analysis of Acoustic Echo Cancellation Techniques
Performance Analysis of Acoustic Echo Cancellation TechniquesPerformance Analysis of Acoustic Echo Cancellation Techniques
Performance Analysis of Acoustic Echo Cancellation Techniques
 
Paper id 26201481
Paper id 26201481Paper id 26201481
Paper id 26201481
 
File 2
File 2File 2
File 2
 
IJCER (www.ijceronline.com) International Journal of computational Engineerin...
IJCER (www.ijceronline.com) International Journal of computational Engineerin...IJCER (www.ijceronline.com) International Journal of computational Engineerin...
IJCER (www.ijceronline.com) International Journal of computational Engineerin...
 
Comparison of fx lms and n fxlms algorithms in matlab using active vibration ...
Comparison of fx lms and n fxlms algorithms in matlab using active vibration ...Comparison of fx lms and n fxlms algorithms in matlab using active vibration ...
Comparison of fx lms and n fxlms algorithms in matlab using active vibration ...
 
P ERFORMANCE A NALYSIS O F A DAPTIVE N OISE C ANCELLER E MPLOYING N LMS A LG...
P ERFORMANCE A NALYSIS  O F A DAPTIVE N OISE C ANCELLER E MPLOYING N LMS A LG...P ERFORMANCE A NALYSIS  O F A DAPTIVE N OISE C ANCELLER E MPLOYING N LMS A LG...
P ERFORMANCE A NALYSIS O F A DAPTIVE N OISE C ANCELLER E MPLOYING N LMS A LG...
 
D111823
D111823D111823
D111823
 
A Decisive Filtering Selection Approach For Improved Performance Active Noise...
A Decisive Filtering Selection Approach For Improved Performance Active Noise...A Decisive Filtering Selection Approach For Improved Performance Active Noise...
A Decisive Filtering Selection Approach For Improved Performance Active Noise...
 
Ijetcas14 555
Ijetcas14 555Ijetcas14 555
Ijetcas14 555
 
Vlsi implementation of adaptive kalman filter for
Vlsi implementation of adaptive kalman filter forVlsi implementation of adaptive kalman filter for
Vlsi implementation of adaptive kalman filter for
 
Echo Cancellation Algorithms using Adaptive Filters: A Comparative Study
Echo Cancellation Algorithms using Adaptive Filters: A Comparative StudyEcho Cancellation Algorithms using Adaptive Filters: A Comparative Study
Echo Cancellation Algorithms using Adaptive Filters: A Comparative Study
 
A Text-Independent Speaker Identification System based on The Zak Transform
A Text-Independent Speaker Identification System based on The Zak TransformA Text-Independent Speaker Identification System based on The Zak Transform
A Text-Independent Speaker Identification System based on The Zak Transform
 
Channel Equalisation
Channel EqualisationChannel Equalisation
Channel Equalisation
 
Comparison of different Sub-Band Adaptive Noise Canceller with LMS and RLS
Comparison of different Sub-Band Adaptive Noise Canceller with LMS and RLSComparison of different Sub-Band Adaptive Noise Canceller with LMS and RLS
Comparison of different Sub-Band Adaptive Noise Canceller with LMS and RLS
 
F EATURE S ELECTION USING F ISHER ’ S R ATIO T ECHNIQUE FOR A UTOMATIC ...
F EATURE  S ELECTION USING  F ISHER ’ S  R ATIO  T ECHNIQUE FOR  A UTOMATIC  ...F EATURE  S ELECTION USING  F ISHER ’ S  R ATIO  T ECHNIQUE FOR  A UTOMATIC  ...
F EATURE S ELECTION USING F ISHER ’ S R ATIO T ECHNIQUE FOR A UTOMATIC ...
 
1-A Fault Tolerant Integrated Navigation Scheme Realized through Online Tunin...
1-A Fault Tolerant Integrated Navigation Scheme Realized through Online Tunin...1-A Fault Tolerant Integrated Navigation Scheme Realized through Online Tunin...
1-A Fault Tolerant Integrated Navigation Scheme Realized through Online Tunin...
 
Performance Variation of LMS And Its Different Variants
Performance Variation of LMS And Its Different VariantsPerformance Variation of LMS And Its Different Variants
Performance Variation of LMS And Its Different Variants
 
DSP_2018_FOEHU - Lec 05 - Digital Filters
DSP_2018_FOEHU - Lec 05 - Digital FiltersDSP_2018_FOEHU - Lec 05 - Digital Filters
DSP_2018_FOEHU - Lec 05 - Digital Filters
 
Adaptive Noise Cancellation using Multirate Techniques
Adaptive Noise Cancellation using Multirate TechniquesAdaptive Noise Cancellation using Multirate Techniques
Adaptive Noise Cancellation using Multirate Techniques
 

Dernier

Breaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountBreaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountPuma Security, LLC
 
Boost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivityBoost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivityPrincipled Technologies
 
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...Igalia
 
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...Neo4j
 
IAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI SolutionsIAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI SolutionsEnterprise Knowledge
 
The Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptxThe Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptxMalak Abu Hammad
 
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdfThe Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdfEnterprise Knowledge
 
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking MenDelhi Call girls
 
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024BookNet Canada
 
Kalyanpur ) Call Girls in Lucknow Finest Escorts Service 🍸 8923113531 🎰 Avail...
Kalyanpur ) Call Girls in Lucknow Finest Escorts Service 🍸 8923113531 🎰 Avail...Kalyanpur ) Call Girls in Lucknow Finest Escorts Service 🍸 8923113531 🎰 Avail...
Kalyanpur ) Call Girls in Lucknow Finest Escorts Service 🍸 8923113531 🎰 Avail...gurkirankumar98700
 
Data Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt RobisonData Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt RobisonAnna Loughnan Colquhoun
 
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking MenDelhi Call girls
 
08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking MenDelhi Call girls
 
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...HostedbyConfluent
 
🐬 The future of MySQL is Postgres 🐘
🐬  The future of MySQL is Postgres   🐘🐬  The future of MySQL is Postgres   🐘
🐬 The future of MySQL is Postgres 🐘RTylerCroy
 
SQL Database Design For Developers at php[tek] 2024
SQL Database Design For Developers at php[tek] 2024SQL Database Design For Developers at php[tek] 2024
SQL Database Design For Developers at php[tek] 2024Scott Keck-Warren
 
GenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day PresentationGenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day PresentationMichael W. Hawkins
 
Unblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen FramesUnblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen FramesSinan KOZAK
 
Histor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slideHistor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slidevu2urc
 
Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...Alan Dix
 

Dernier (20)

Breaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountBreaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path Mount
 
Boost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivityBoost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivity
 
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...
 
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...
 
IAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI SolutionsIAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI Solutions
 
The Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptxThe Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptx
 
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdfThe Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
 
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
 
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
 
Kalyanpur ) Call Girls in Lucknow Finest Escorts Service 🍸 8923113531 🎰 Avail...
Kalyanpur ) Call Girls in Lucknow Finest Escorts Service 🍸 8923113531 🎰 Avail...Kalyanpur ) Call Girls in Lucknow Finest Escorts Service 🍸 8923113531 🎰 Avail...
Kalyanpur ) Call Girls in Lucknow Finest Escorts Service 🍸 8923113531 🎰 Avail...
 
Data Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt RobisonData Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt Robison
 
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
 
08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men
 
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
 
🐬 The future of MySQL is Postgres 🐘
🐬  The future of MySQL is Postgres   🐘🐬  The future of MySQL is Postgres   🐘
🐬 The future of MySQL is Postgres 🐘
 
SQL Database Design For Developers at php[tek] 2024
SQL Database Design For Developers at php[tek] 2024SQL Database Design For Developers at php[tek] 2024
SQL Database Design For Developers at php[tek] 2024
 
GenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day PresentationGenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day Presentation
 
Unblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen FramesUnblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen Frames
 
Histor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slideHistor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slide
 
Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...
 

IJCSA Speech Recognition Hybrid HMM DTW Kernel Filtering

[11] [13].
It is required to estimate the speech signal from the observed noisy one in the above applications. Many different noise reduction algorithms have been proposed in the literature to reduce the effect of noise for speech enhancement, for example, spectral subtraction, Wiener filtering, minimum mean square error (MMSE) based speech estimation, the Least Mean Square algorithm, Bayesian estimation and Kalman filtering.

2. ADAPTIVE FILTERS

There are many approaches in the literature related to speech enhancement. For the last 20 years, adaptive filters have been effective and popular approaches for speech enhancement. The advantage of adaptive filters is that they can identify time-varying systems and track the dynamic variations of the signals. Before proceeding to the LMS and KLMS algorithms, an introduction to adaptive filters is given as follows.

As the name "adaptive filter" suggests, it is important to understand the meaning of the terms "adaptive" and "filter" in a very general sense. The adjective "adaptive" can be understood by considering a system that is trying to adjust itself so as to respond to some phenomenon taking place in its surroundings. In other words, the system tries to adjust its parameters with the aim of meeting some well-defined goal or target that depends upon the state of the system as well as its surroundings; this is what adaptation means. Furthermore, there is a need for a set of steps, or a certain procedure, by which this process of adaptation is carried out. Finally, the system that carries out or undergoes the process of adaptation is called by the more technical name "filter".

The adaptive filter has several benefits, which include lower processing delay and better tracking of the trajectory of nonstationary signals. These are important characteristics in applications like noise estimation, echo cancellation, echo cancellation delay estimation and channel equalization in mobile telecommunications, where low delay and fast tracking of time-varying processes and time-varying environments are important objectives [2].

The mathematical definition of the adaptive filter output is given as follows:

y(k) = Σ_{n=0}^{M-1} w_n x(k-n) = w^T x(k)    eq. (1)

in which k is the time index, y is the filter output, x is the filter input, and w_n are the filter coefficients.
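As a concrete illustration of eq. (1), the following is a minimal sketch in Python (the function and variable names are illustrative, not from the paper; samples before the start of the signal are treated as zero):

```python
import numpy as np

def filter_output(w, x, k):
    """Compute y(k) = sum_{n=0}^{M-1} w[n] * x[k-n], the FIR filter
    output of eq. (1), for a weight vector w of length M."""
    M = len(w)
    # Build the delayed input vector [x(k), x(k-1), ..., x(k-M+1)].
    xk = np.array([x[k - n] if k - n >= 0 else 0.0 for n in range(M)])
    return float(np.dot(w, xk))  # inner product w^T x(k)

w = np.array([0.5, 0.25, 0.125])    # example filter coefficients
x = np.array([1.0, 2.0, 3.0, 4.0])  # example input signal
y2 = filter_output(w, x, 2)         # 0.5*3 + 0.25*2 + 0.125*1 = 2.125
```

The delayed input vector here is exactly the vector x(k) that the weight vector multiplies in the inner-product form of eq. (1).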
The block diagram shown in Figure 1 below gives the foundation for a particular adaptive filter realisation [3], which is named Least Mean Squares (LMS). The main concept in the block diagram is that a variable filter extracts an estimate of the desired signal.

Figure 1: Adaptive Filter

From the block diagram, the following assumptions are made:

1) The input signal x(n) is the sum of a desired signal d(n) and interfering noise v(n):

x(n) = d(n) + v(n)    eq. (2)
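To make the signal model of eq. (2) concrete, here is a minimal Python sketch (the sinusoid stands in for clean speech and is purely illustrative, as are the filter weights):

```python
import numpy as np

rng = np.random.default_rng(0)

# Signal model of eq. (2): the observed input is clean speech plus noise.
d = np.sin(2 * np.pi * 0.05 * np.arange(200))  # stand-in for desired signal d(n)
v = 0.3 * rng.standard_normal(200)             # interfering noise v(n)
x = d + v                                      # x(n) = d(n) + v(n)

# An illustrative fixed variable filter of order p = 3: the estimate of the
# desired signal is the convolution of the weights with the input.
w = np.array([0.25, 0.25, 0.25, 0.25])
d_hat = np.convolve(x, w)[:200]

# Error signal: difference between the desired signal and its estimate.
e = d - d_hat
```

In the full adaptive filter the weights w would not stay fixed; the adaptive algorithm updates them at every time instant based on this error signal.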
  • 3. International Journal on Computational Sciences & Applications (IJCSA) Vol.1, No.4, February 2014 13 The variable filter has the structure of Finite Impulse Response (FIR). The impulse response is adequate for such structures the with the filter coefficients. For the filter of order p with the coefficients for square measure given as follows )](),.....1(),0([ pwwww nnnn = eq. (3) 2) the second one is the cost function or the error signal or which is the difference between the estimated signal and desired signal ∧ −= )()()( ndndne eq. (4) The desired signal is estimated by the variable filter which convolves the input signal with the impulse response. This is expressed in vector notation as follows )(*)( nxWnd n= ∧ eq. (5) Where )](),...1(),([)( pnxnxnxnx −−= eq. (6) In which x(n) is an input signal vector and the variable filter updates the filter coefficients at every time instant nNn WWW ∇+=+1 eq. (7) Where, nW∇ is a correction factor for the filter coefficients. This correction factor is generated by the adaptive algorithm based on the error and input signals. The Least Mean sq. (LMS) algorithmic program was initially developed by Widrow and Hoff in1959 through their studies of pattern recognition. From there it's become one among the foremost wide used algorithms in adaptative filtering algorithms [4]. The LMS algorithmic is a stochastic gradient-based algorithm in which it utilizes the gradient vector of the filter tap weights to converge on the optimum wiener solution. It is renowned and wide used owing to its procedure simplicity. It is this simplicity that has created it the benchmark against that all alternative adaptation filtering algorithms which are judged. The LMS algorithmic program could be a linear adaptive filter algorithmic program that normally consists of two basic processes. 1. A filter process: In this the following steps are involved a. Calculating the linear filter output in response to an input signal. b. 
The estimation error is generated by comparing this output with a desired response.
2. An adaptive process, which involves the automatic adjustment of the filter parameters in accordance with the estimation error.

The combination of these two processes constitutes a feedback loop. First, there is a transversal filter, around which the LMS algorithm is built. This component is
responsible for performing the filtering process. Second, there is a mechanism for performing the adaptive control process on the tap weights of the transversal filter. With each iteration of the LMS algorithm, the tap weights of the adaptive filter are updated according to the following formula:

w(n+1) = w(n) + 2μe(n)x(n) eq. (8)

Here x(n) is the input vector of time-delayed input values,

x(n) = [x(n), x(n−1), x(n−2), ..., x(n−N+1)] eq. (9)

and the vector w(n) = [w0(n), w1(n), w2(n), w3(n), ..., wN−1(n)] represents the coefficients of the adaptive FIR filter tap weight vector at time n. The parameter μ, known as the step size, is a small positive constant that controls the influence of the update term. Selection of an appropriate value for μ is critical to the performance of the LMS algorithm: if it is too small, the time the adaptive filter takes to converge on the optimum solution is too long; if μ is too large, the adaptive filter becomes unstable and its output diverges. LMS is the simplest algorithm to implement and is stable when the step size parameter is chosen suitably.

The LMS update equations are developed from the theory of the Wiener solution for the optimal filter tap weights, W_o, and from the method of steepest descent. Steepest descent updates the filter coefficients using the current tap weight vector and the current gradient of the cost function ξ(n) with respect to the filter tap weight vector:

W(n+1) = W(n) − μ∇ξ(n) eq. (10)

where

ξ(n) = E[e²(n)] eq. (11)

As the negative gradient vector points in the direction of steepest descent for the N-dimensional quadratic cost function, every recursion shifts the filter coefficients closer to their optimum value, which corresponds to the minimum possible value of the cost function ξ(n). The LMS algorithm is derived as a stochastic implementation of the steepest descent algorithm: since the expectation of the squared error is not known, the instantaneous value is used as an estimate. The steepest descent recursion then becomes

W(n+1) = W(n) − μ∇ξ̂(n) eq. (12)

where
ξ̂(n) = e²(n) eq. (13)

The gradient of the cost function, ∇ξ(n), can be expressed in an alternative form through the following steps:

∇ξ(n) = ∇{e²(n)} eq. (14)
= 2e(n) ∂e(n)/∂W
= 2e(n) ∂(d(n) − y(n))/∂W
= −2e(n)x(n)

Substituting this into the steepest descent algorithm of eq. (12), we arrive at the recursion for the LMS adaptive algorithm:

W(n+1) = W(n) + 2μe(n)x(n) eq. (15)

The main reason behind the popularity of the LMS algorithm is its computational simplicity, which makes it easier to implement than other commonly used adaptive algorithms.

3. KERNEL ADAPTIVE FILTERS

In the proposed approach we use the kernel methodology, a nonparametric modeling technique that transforms the input data into a high dimensional feature space via a reproducing kernel, such that the inner product operation in the feature space is computed efficiently through kernel evaluations. Appropriate linear methods are subsequently applied on the transformed data. As long as an algorithm can be formulated in terms of inner products (or equivalent kernel evaluations), there is no need to perform computations in the high dimensional feature space [5]. The kernel adaptive filtering technique employed in this approach extends adaptive filtering to general nonlinear problems; it is a natural generalization of linear adaptive filtering to the reproducing kernel Hilbert space (RKHS) domain. Kernel adaptive filters, like other kernel methods, are closely related to some artificial neural networks such as radial basis function networks and regularization networks [6]. The KLMS algorithm is a stochastic gradient method for solving least-squares problems in RKHS.
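Before moving to the kernel case, the LMS recursion of eq. (15) can be sketched in a few lines. This is an illustrative system-identification example under assumed settings (a hypothetical 4-tap unknown system, step size μ = 0.01), not the paper's speech-enhancement configuration:

```python
import numpy as np

def lms_filter(x, d, p=8, mu=0.01):
    """Illustrative LMS sketch: adapt a length-p FIR filter so its output
    tracks the desired signal d(n), using the weight update of eq. (15),
    w(n+1) = w(n) + 2*mu*e(n)*x(n)."""
    w = np.zeros(p)
    y = np.zeros(len(x))
    e = np.zeros(len(x))
    for n in range(p - 1, len(x)):
        x_vec = x[n - p + 1:n + 1][::-1]   # [x(n), x(n-1), ..., x(n-p+1)], eq. (9)
        y[n] = w @ x_vec                   # filter output
        e[n] = d[n] - y[n]                 # estimation error, eq. (4)
        w = w + 2 * mu * e[n] * x_vec      # weight update, eq. (15)
    return y, e, w

# Usage: identify a hypothetical unknown 4-tap FIR system.
rng = np.random.default_rng(1)
x = rng.standard_normal(2000)
h = np.array([0.5, -0.3, 0.2, 0.1])        # assumed "unknown" system
d = np.convolve(x, h)[:x.size]             # desired response of that system
y, e, w = lms_filter(x, d, p=4, mu=0.01)
```

After a few hundred iterations the tap weights converge toward h and the error shrinks, illustrating the step-size trade-off discussed above: a larger μ converges faster but risks divergence.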
Because the update equation is written in terms of inner products, KLMS can be computed efficiently in the input space. The good approximation ability of KLMS stems from the fact that the transformed data embody potentially infinitely many different features of the original data. A linear finite impulse response filter is assumed in the LMS algorithm; if the mapping between the desired signal and the input is highly nonlinear, poor performance is expected from LMS [7]. Therefore the kernel-induced mapping is used to transform the input u(i) into a high-dimensional feature space F as ϕ(u(i)). ω^T ϕ(u) is a much more powerful model than w^T u, thanks to the difference in dimensionality (more significantly, the richness of representation) of u and ϕ(u). So finding ω through stochastic gradient descent may prove as efficient a method of nonlinear filtering as LMS is for linear problems. Denote ϕ(i) = ϕ(u(i)) for simplicity. On the new example sequence, the LMS recursion becomes

w(0) = 0 eq. (16)

e(i) = d(i) − w(i−1)^T ϕ(i) eq. (17)

w(i) = w(i−1) + η e(i) ϕ(i) eq. (18)

where w(i) denotes the estimate (at iteration i) of the weight vector in F. As the dimensionality of ϕ is high, the repeated application of the weight update equation (18) through the iterations yields

w(i) = w(i−1) + η e(i) ϕ(i)
= [w(i−2) + η e(i−1) ϕ(i−1)] + η e(i) ϕ(i)
= ...
= η Σ_{j=1}^{i} e(j) ϕ(j) (by the assumption w(0) = 0) eq. (19)

That is, after i steps of training, the weight estimate is expressed as a linear combination of all the previous and present (transformed) inputs, weighted by the prediction errors and scaled by η. More importantly, the output of the system to a new input u′ can be expressed purely in terms of inner products between transformed inputs:

w(i)^T ϕ(u′) = η Σ_{j=1}^{i} e(j) ϕ(u(j))^T ϕ(u′) eq. (20)

By the kernel trick we can efficiently compute the filter output in the input space by kernel evaluations:

w(i)^T ϕ(u′) = η Σ_{j=1}^{i} e(j) κ(u(j), u′) eq. (21)

When the above equation is compared with the LMS weight update equation (15), the filter is updated without ever forming the weights explicitly.
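The expansion of eqs. (19)-(21) can be sketched directly: instead of forming weights in the feature space, each input u(j) is stored as a center with coefficient ηe(j), and the filter output is a sum of kernel evaluations. The Gaussian (RBF) kernel, the step size η = 0.5, and the toy target d = cos(u) are all assumed choices for this illustration, not the paper's configuration:

```python
import numpy as np

def kernel(a, b, sigma=0.5):
    # Gaussian (RBF) kernel kappa(u, u'); an assumed choice for this sketch.
    return np.exp(-((a - b) ** 2) / (2 * sigma ** 2))

def klms(U, d, eta=0.5):
    """KLMS sketch per eqs. (19)-(21): store each input u(j) as a center
    with coefficient eta*e(j); the filter output at any point is the sum
    of kernel evaluations of eq. (21)."""
    centers, coeffs, errors = [], [], []
    for u_i, d_i in zip(U, d):
        # filter output via kernel evaluations only, eq. (21)
        y_i = sum(a * kernel(c, u_i) for c, a in zip(centers, coeffs))
        e_i = d_i - y_i                 # prediction error
        centers.append(u_i)             # new kernel unit centered at u(i)
        coeffs.append(eta * e_i)        # with coefficient eta*e(i)
        errors.append(e_i)
    return centers, coeffs, errors

# Usage: learn a nonlinear mapping d = cos(u) online, one sample at a time.
U = np.linspace(0.0, 3.0, 200)
d = np.cos(U)
centers, coeffs, errors = klms(U, d)
```

The prediction errors shrink as training proceeds, even though the target is nonlinear, which a plain LMS filter on the scalar input could not capture.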
If f_i denotes the estimate of the input-output nonlinear mapping at time i, we have the following sequential learning rule for the new algorithm:

f_{i−1} = η Σ_{j=1}^{i−1} e(j) κ(u(j), ·) eq. (22)

f_{i−1}(u(i)) = η Σ_{j=1}^{i−1} e(j) κ(u(j), u(i)) eq. (23)

e(i) = d(i) − f_{i−1}(u(i)) eq. (24)

f_i = f_{i−1} + η e(i) κ(u(i), ·) eq. (25)

Together, the above equations give the KLMS method. It is LMS in RKHS, and filtering is carried out by kernel evaluation. KLMS allocates a new kernel unit for each new training datum, with the input u(i) as the center and ηe(i) as the coefficient. These coefficients and centers are stored in memory throughout training [8].

4. SPEECH RECOGNITION METHODS

4.1 Dynamic time warping (DTW)

Speech is a time-dependent process. Hence, utterances of the same word will have different durations, and utterances of the same word with the same duration will differ in the middle, as a result of different parts of the word being spoken at different rates. To obtain a global distance between two speech patterns (each represented as a sequence of vectors), a time alignment must be performed. Dynamic Time Warping (DTW) is a technique that finds the optimal alignment between two time series, where one series may be "warped" non-linearly by stretching or shrinking it along its time axis. This warping between two words can then be used to find corresponding regions between the two words or to determine their similarity. Dynamic time warping is typically used in speech recognition to determine whether two waveforms represent the same spoken phrase [10].
In addition to speech recognition, dynamic time warping has also been found useful in many other disciplines, including data mining, gesture recognition, robotics, manufacturing, and medicine.

The alignment problem is illustrated in figure 2, in which a "time-time" matrix is used to visualize the alignment. As in all the time alignment examples, the reference pattern (template) goes up the side and the input pattern goes along the bottom. In this illustration the input "SsPEEhH" is a noisy version of the template "SPEECH". The idea is that 'h' is a better match to 'H' than anything else in the template. The input "SsPEEhH" is matched against all templates in the system's repository. The best matching template is the one for which there is the lowest-distance path aligning the input pattern to the template. A simple global distance score for a path is the sum of the local distances that make up the path.

Figure 2: Dynamic Time Warping Algorithm for Search Path

To keep the algorithm from excessive computation, certain restrictions are applied on the direction of propagation. The constraints are as follows:

• Matching paths cannot go backwards in time.
• Every frame in the input must be used in a matching path.
• Local distance scores are combined by adding to give a global distance.

This method is known as Dynamic Programming (DP). When applied to template-based speech recognition, it is usually referred to as Dynamic Time Warping (DTW). DP is guaranteed to find the lowest-distance path through the matrix while minimizing the amount of computation. The DP method operates in a time-synchronous manner: each column of the time-time matrix is considered in succession (equivalent to processing the input frame by frame), so that, for a template of length N, the maximum number of paths being considered at any time is N. If D(i,j) is the global distance up to (i,j) and the local distance at (i,j) is d(i,j), then

D(i,j) = min[D(i−1,j−1), D(i−1,j), D(i,j−1)] + d(i,j) eq. (26)

Given that D(1,1) = d(1,1) (the initial condition), we have the basis for a recursive algorithm for computing D(i,j). The final global distance D(n,N) gives the overall matching score of the template with the input. The input word is then recognized as the word corresponding to the template with the lowest matching score.
4.2 Hidden Markov Model (HMM)

The Hidden Markov Model (HMM) is a powerful statistical tool for modeling generative sequences that can be characterised by an underlying process generating an observable
sequence [9]. HMMs have found application in many areas of signal processing, in particular speech processing, but have also been applied successfully to low-level NLP (Natural Language Processing) tasks such as phrase chunking, extracting target information from documents, and part-of-speech tagging. The technique is employed to train a model which, in our case, should represent an utterance of a word. This model is then used in the testing of an utterance, computing the probability that the model has generated the observed sequence of vectors. The important distinction between an Observable Markov Model (OMM) and a Hidden Markov Model (HMM) is that in the observable model the output state is completely determined at each time instant t, whereas in the Hidden Markov Model the state at each time t must be inferred from observations: an observation is a probabilistic function of a state. The basic Hidden Markov Model is represented by λ = (π, A, B), where:

π = initial state distribution vector.
A = state transition probability matrix.
B = continuous observation probability density function matrix.

Three fundamental problems arise in Hidden Markov Model design:

Problem one – Recognition: Given the observation sequence O = (o1, o2,...,oT) and the model λ = (π, A, B), how do we compute the probability of the observation sequence given the model? That is, how is P(O|λ) computed efficiently?
Problem two – Optimal state sequence: Given the observation sequence O = (o1, o2,...,oT) and the model λ = (π, A, B), how should a corresponding state sequence q = (q1, q2,...,qT) be chosen so that it is optimal in some sense?

Problem three – Adjustment: How should the model parameters λ = (π, A, B) be adjusted to maximize P(O|λ)?

Given a sequence of acoustic vectors y = {y1, y2, ..., yT}, the speech recognition task is to find the optimal word sequence W = {w1, w2, ..., wL}, i.e. the one that maximizes the probability P(W|y), the probability of the word sequence W given the acoustic feature vectors y. This probability is normally computed with the help of Bayes' theorem:

P(W|y) = P(y|W) P(W) / P(y) eq. (27)

In equation 27, P(W) is the probability of the word sequence W, named the language model probability, and it is computed using a language model. The probability P(y|W) is referred to as
the acoustic model probability, which is the probability that the acoustic feature vectors y are produced by the word sequence W. P(y|W) is calculated using an acoustic model [14].

4.3 Hybrid HMM/DTW Approach

We therefore followed a different approach, namely the extension of a state-of-the-art system that achieves very good recognition rates: the speech features are trained with an HMM, and the DTW method is used for the recognition stage.

5. EXPERIMENTS AND RESULTS

In this section, the performance of the hybrid model on the recognition of ten spoken words is presented. The training of the spoken words is done with the HMM model, and the DTW method is used to classify the data. The noise present in the speech signal is removed using the KLMS algorithm, and the improvement in SNR is 2.1 dB compared to the LMS algorithm. The recognition rate obtained is 94% with the hybrid model, compared to 90% using the individual method, under noisy conditions.

Table 1: Comparison of speech recognition rate for isolated words

Word     | Recognition rate without hybrid model, without noise removal | Recognition rate (hybrid HMM/DTW)
Siva     | 100 | 100
Sunny    | 90  | 90
Venkat   | 90  | 90
Santhosh | 80  | 80
Sanjeev  | 90  | 90
Venu     | 90  | 100
Sai      | 90  | 100
Raju     | 90  | 100
Keerthi  | 90  | 100
Karthik  | 90  | 90

Figure 3 gives the simulation results of speech recognition for the given words.
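The overall figures of 90% and 94% quoted above are the per-word averages of the two columns of Table 1, which can be checked directly:

```python
# Per-word recognition rates from Table 1: baseline (no hybrid model,
# no noise removal) versus the hybrid HMM/DTW system, in table order.
baseline = [100, 90, 90, 80, 90, 90, 90, 90, 90, 90]
hybrid   = [100, 90, 90, 80, 90, 100, 100, 100, 100, 90]

avg_baseline = sum(baseline) / len(baseline)
avg_hybrid = sum(hybrid) / len(hybrid)
print(avg_baseline, avg_hybrid)   # 90.0 94.0
```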
Figure 3: Speech recognition results using the hybrid methods with the kernel adaptive filtering method.

6. CONCLUSIONS AND FUTURE SCOPE

In this work, a new approach to speech recognition was applied, using a hybrid HMM/DTW method together with the kernel adaptive filtering method for speech enhancement. From the experimental results, it is observed that the proposed approach gave a better recognition rate than the traditional methods. In future work, Artificial Neural Networks and Support Vector Machines can be used for classification to improve the recognition rate.

REFERENCES

[1] Oliver Gauci, Carl J. Debono, Paul Micallef, "A Reproducing Kernel Hilbert Space Approach for Speech Enhancement", ISCCSP 2008, Malta, 12-14 March 2008.
[2] Saeed V. Vaseghi, "Advanced Digital Signal Processing and Noise Reduction", John Wiley and Sons.
[3] http://en.wikipedia.org/wiki/Adaptive_filter
[4] Simon Haykin, "Adaptive Filter Theory", Prentice Hall of India, 4th Edition.
[5] http://en.wikipedia.org/wiki/Kernel_adaptive_filter
[6] Weifeng Liu, Puskal P. Pokharel, and Jose C. Principe, "The Kernel Least-Mean-Square Algorithm", IEEE Transactions on Signal Processing, vol. 56, no. 2, February 2008.
[7] Jose C. Principe, Weifeng Liu, Simon Haykin, "Kernel Adaptive Filtering: A Comprehensive Introduction", Wiley, March 2010.
[8] Dr. T. Kishore Kumar, N. Siva Prasad, "Speech Enhancement using Kernel Adaptive Filtering Method", IEEE COMCAS, November 2011, Tel Aviv, Israel.
[9] Justin, J. and I. Vennila, "A Hybrid Speech Recognition System with Hidden Markov Model and Radial Basis Function Neural Network", American Journal of Applied Sciences, Volume 10, Issue 10, Pages 1148-1153.
[10] R.
Shashikant and Daulappa G. Bhalke, "Speech Recognition using Dynamic Time Warping", CiiT International Journal of Digital Signal Processing, Print ISSN 0974-9705, Online ISSN 0974-9594.
[11] J. Baker, L. Deng, J. Glass, S. Khudanpur, C. Lee, N. Morgan, and D. O'Shaughnessy, "Research Developments and Directions in Speech Recognition and Understanding, Part 1", IEEE Signal Processing Magazine, 75-80, May 2009.
[12] Lawrence R. Rabiner, "A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition", Proceedings of the IEEE, 77(2), 1989, pp. 257-286.
[13] C. Lee, J. Glass, and O. Ghitza, "An Efferent-Inspired Auditory Model Front-End for Speech Recognition", Proc. Interspeech, Florence, 2011.