Course Calendar

Class | Date          | Contents
1     | Sep. 26       | Course information & course overview
2     | Oct. 4        | Bayes Estimation
3     | Oct. 11       | Classical Bayes Estimation - Kalman Filter -
4     | Oct. 18       | Simulation-based Bayesian Methods
5     | Oct. 25       | Modern Bayesian Estimation: Particle Filter
6     | Nov. 1        | HMM (Hidden Markov Model)
-     | Nov. 8        | No class
7     | Nov. 15       | Supervised Learning
8     | Nov. 29       | Bayesian Decision
9     | Dec. 6        | PCA (Principal Component Analysis)
10    | Dec. 13       | ICA (Independent Component Analysis)
11    | Dec. 20       | Applications of PCA and ICA
12    | Dec. 27       | Clustering, k-means et al.
13    | Jan. 17       | Other Topics 1: Kernel machines
14    | Jan. 22 (Tue) | Other Topics 2
Lecture Plan
Hidden Markov Model
1. Introduction
2. Hidden Markov Model (HMM)
Discrete-time Markov Chain & HMM
3. Evaluation Problem
4. Decoding Problem
5. Learning Problem
6. HMM for speech recognition
1. Introduction

1.1 Discrete-time hidden Markov model (HMM)
The HMM is a stochastic model of an ordered or sequential process that can be used for modeling and estimation. Its distinguishing feature is that both the state transitions and the measurement process are modeled probabilistically. The internal states are usually not observed directly and are therefore hidden.
1.2 Applications
All fields treating discrete representations of stochastic processes:
speech recognition, natural language, communications, economics, biomedical (DNA analysis), computer vision (gesture recognition).
2. HMM

2.1 Discrete-time Markov chains (finite state machines)

At each time step t, the state variable x(t) takes a value in the state space

\[ \mathbf{X} = \{x_0, x_1, \ldots, x_N\} \]

Pr(x(t) = x_i) is the probability that at time t the state x_i is occupied.

First-order Markov property (the next state depends only on the current state):

\[ \Pr\!\big(x(t+1)=x_n \mid x(t)=x_m,\, x(t-1)=x_l,\, \ldots,\, x(0)\big) = \Pr\!\big(x(t+1)=x_n \mid x(t)=x_m\big) \tag{1} \]

Define the state transition probability (time-stationary):

\[ a_{mn} := \Pr\!\big(x(t+1)=x_n \mid x(t)=x_m\big) \tag{2} \]

\[ a_{mn} \ge 0, \qquad \sum_{n} a_{mn} = 1 \tag{3} \]

Initial probability:

\[ \pi_i := \Pr\!\big(x(0)=x_i\big), \qquad i = 1, \ldots, N \tag{4} \]
2.2 Hidden Markov Model (HMM)

Introduce a measurement or output process into the Markov chain:

Markov chain (hidden / latent states) + measurements (visible variables) → HMM

The outputs are discrete and take values in the observation space

\[ \mathbf{Y} = \{y_1, y_2, \ldots, y_{N_y}\} \tag{5} \]

The observation or emission probability (likelihood), also time-stationary:

\[ c_{kl} := \Pr\!\big(y(t)=y_l \mid x(t)=x_k\big) \tag{6} \]

Define the state and output sequences

\[ \mathbf{X}^T := \{x(1), x(2), \ldots, x(T)\}, \qquad \mathbf{Y}^T := \{y(1), y(2), \ldots, y(T)\} \tag{7} \]

Initial / final (absorber) states: x_0 and y_0; the chain ends in the absorbing state x_0, which gives the final output y_0 uniquely.
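To make this generative mechanism concrete, here is a minimal Python sketch (an illustration added to these notes, not code from the lecture; it assumes NumPy, with states and outputs coded as integer indices):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_hmm(a, c, x0, T):
    """Generate (X^T, Y^T) from an HMM.

    a  : (N, N) transition matrix, a[m, n] = Pr(x(t+1)=x_n | x(t)=x_m)  -- eq. (2)
    c  : (N, M) emission matrix,  c[k, l] = Pr(y(t)=y_l | x(t)=x_k)     -- eq. (6)
    x0 : index of the initial hidden state x(0)
    """
    xs, ys = [], []
    x = x0
    for _ in range(T):
        x = rng.choice(a.shape[1], p=a[x])           # hidden transition
        xs.append(x)
        ys.append(rng.choice(c.shape[1], p=c[x]))    # visible emission
    return xs, ys
```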
Structure of HMM

x(t): state sequence (hidden)
y(t): measurement sequence

[Figure: a trellis segment of hidden states x_i(t-1) → x_j(t) → x_k(t+1), linked by the transition probabilities a_ij and a_jk; each hidden state emits a visible output y_l(t-1), y_m(t), y_n(t+1) with emission probabilities c_il, c_jm, c_kn.]

Parameters: θ = {a_ij, c_jk, π_i}
Three Central Issues in HMM

Evaluation problem: determine the probability Pr(Y^T) that the model produces a particular output sequence Y^T.

Decoding problem: for a given output sequence Y^T, determine the most probable sequence X^T of hidden states that leads to the observation Y^T.

Learning problem: given a set of output sequences {Y^T_(i)}, determine the model parameters a_ij and c_jk.
3. Evaluation Problem

The probability that the model produces an output sequence Y^T is

\[ \Pr(\mathbf{Y}^T) = \sum_{r=1}^{r_{\max}} \Pr\!\big(\mathbf{Y}^T \mid \mathbf{X}_r^T\big)\, \Pr\!\big(\mathbf{X}_r^T\big) \tag{8} \]

where r indexes a possible sequence of T hidden states, X_r^T, and r_max = N_x^T (the number of terms in the sum).

From the first-order Markov property, the second factor in (8) is

\[ \Pr\!\big(\mathbf{X}_r^T\big) = \Pr\!\big(x(1), x(2), \ldots, x(T)\big) = \prod_{t=1}^{T} \Pr\!\big(x(t) \mid x(t-1)\big) \tag{9} \]

i.e. a product of the a_ij's. At t = T, x(T) = x_0 is the final absorbing state, which gives the output y_0 uniquely.
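Equation (8) can be evaluated literally by enumerating every hidden-state path; the short sketch below (assuming NumPy and the integer coding used earlier; the function name is ours) makes the N_x^T cost explicit and motivates the forward algorithm that follows:

```python
import itertools
import numpy as np

def prob_output_bruteforce(a, c, x0, y):
    """Pr(Y^T) via eq. (8): sum over every possible hidden path X_r^T.

    a, c : transition / emission matrices; x0 : initial state index;
    y    : observed output indices y(1), ..., y(T).
    Cost is O(N^T), so this is usable only for toy problems.
    """
    N, T = a.shape[0], len(y)
    total = 0.0
    for path in itertools.product(range(N), repeat=T):
        p, prev = 1.0, x0
        for t in range(T):
            # transition factor from eq. (9) times emission factor Pr(y(t)|x(t))
            p *= a[prev, path[t]] * c[path[t], y[t]]
            prev = path[t]
        total += p
    return total
```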
From the measurement mechanism, the conditional probability of the outputs given a state sequence factorizes:

\[ \Pr\!\big(\mathbf{Y}^T \mid \mathbf{X}_r^T\big) = \prod_{t=1}^{T} \Pr\!\big(y(t) \mid x(t)\big) \tag{10} \]

Finally, (9) and (10) give

\[ \Pr(\mathbf{Y}^T) = \sum_{r=1}^{r_{\max}} \prod_{t=1}^{T} \Pr\!\big(y(t) \mid x(t)\big)\, \Pr\!\big(x(t) \mid x(t-1)\big) \tag{11} \]

This is a sum over all possible sequences of hidden states of the conditional probabilities.
Forward Algorithm

Basic elements in the Pr(Y^T) representation (11) are the conditional probabilities

\[ \Pr\!\big(y(t) \mid x(t)\big), \qquad \Pr\!\big(x(t) \mid x(t-1)\big) \tag{12} \]

Evaluation of Pr(Y^T) by the forward algorithm uses the following forward variables:

\[ \alpha_j(t) := \Pr\!\big(\mathbf{Y}^t,\, x(t) = x_j\big) \tag{13} \]

The α_j(t) represents the probability that the HMM is in the hidden state x_j at step t (i.e. x(t) = x_j), having generated the first t (0 ≤ t ≤ T) elements of Y^T (i.e. Y^t).

Forward variable computation algorithm:

\[ \alpha_j(t) = \begin{cases} 1 & t = 0 \text{ and } x_j \text{ is the initial state} \\ 0 & t = 0 \text{ and } x_j \text{ is not the initial state} \\ \Big[\sum_i \alpha_i(t-1)\, a_{ij}\Big]\, c_{jk} & \text{otherwise} \end{cases} \tag{14} \]

where k is the index of the output y(t) = y_k.
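Recursion (14) translates directly into code. A sketch (the function name forward is ours; NumPy assumed); each step costs O(N^2), so the whole evaluation is O(N^2 T) instead of O(N^T):

```python
import numpy as np

def forward(a, c, x0, y):
    """Forward variables alpha[t, j] = Pr(Y^t, x(t)=x_j), per eqs. (13)-(14).

    a  : (N, N) transition matrix; c : (N, M) emission matrix
    x0 : index of the known initial state at t = 0
    y  : output indices y(1), ..., y(T)
    """
    N, T = a.shape[0], len(y)
    alpha = np.zeros((T + 1, N))
    alpha[0, x0] = 1.0                        # t = 0 cases of eq. (14)
    for t in range(1, T + 1):
        k = y[t - 1]                          # index of the output y(t) = y_k
        alpha[t] = (alpha[t - 1] @ a) * c[:, k]
    return alpha
```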
Example: Forward algorithm

Three hidden states x_1, x_2, x_3, an explicit absorber state x_0, and a unique final output y_0:

\[ \mathbf{X} = \{x_0, x_1, x_2, x_3\}, \qquad \mathbf{Y} = \{y_0, y_1, y_2, y_3, y_4\} \]

\[ a_{ij} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0.2 & 0.3 & 0.1 & 0.4 \\ 0.2 & 0.5 & 0.2 & 0.1 \\ 0.8 & 0.1 & 0.0 & 0.1 \end{pmatrix}, \qquad c_{jk} = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 0.3 & 0.4 & 0.1 & 0.2 \\ 0 & 0.1 & 0.1 & 0.7 & 0.1 \\ 0 & 0.5 & 0.2 & 0.1 & 0.2 \end{pmatrix} \]

Suppose the initial hidden state at t = 0 is x_1. Compute the probability that the model generates the particular output sequence

\[ \mathbf{Y}^4: \quad y(1) = y_1,\; y(2) = y_3,\; y(3) = y_2,\; y(4) = y_0. \]
Trellis diagram of forward variables (observations y(1) = y_1, y(2) = y_3, y(3) = y_2, y(4) = y_0):

t       |  0  |  1    |  2      |  3      |  4
α_0(t)  |  0  |  0    |  0      |  0      |  0.0011
α_1(t)  |  1  |  0.09 |  0.0052 |  0.0024 |  0
α_2(t)  |  0  |  0.01 |  0.0077 |  0.0002 |  0
α_3(t)  |  0  |  0.2  |  0.0057 |  0.0007 |  0

The forward variable of the absorbing state at the final step gives the answer: Pr(Y^4) = α_0(4) ≈ 0.0011.

* Numerical values in the above table are rounded to four decimal places.
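Running the forward() sketch from above on this example (with the matrices as reconstructed here) reproduces the trellis values and the final probability:

```python
import numpy as np

a = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.2, 0.3, 0.1, 0.4],
              [0.2, 0.5, 0.2, 0.1],
              [0.8, 0.1, 0.0, 0.1]])
c = np.array([[1.0, 0.0, 0.0, 0.0, 0.0],
              [0.0, 0.3, 0.4, 0.1, 0.2],
              [0.0, 0.1, 0.1, 0.7, 0.1],
              [0.0, 0.5, 0.2, 0.1, 0.2]])

alpha = forward(a, c, x0=1, y=[1, 3, 2, 0])   # observations y_1, y_3, y_2, y_0
print(alpha[4, 0])                            # ~0.0011 = alpha_0(4) = Pr(Y^4)
```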
4. Decoding Problem

Problem:
Suppose we have an HMM as well as an observation sequence Y^T. Determine the most likely sequence of hidden states {x(0), ..., x(T)} that leads to the observation.

Solution:
A convenient method is to connect, at each step t in the trellis diagram, the hidden state with the highest value of α_j(t); see the sketch below. This does not always give the optimal solution, because the resulting path may contain a forbidden connection (a transition with a_ij = 0).
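The heuristic just described is a one-liner on top of the forward variables (a sketch reusing forward() from above; a dynamic-programming decoder such as Viterbi is needed when the greedy path hits a forbidden transition):

```python
import numpy as np

def greedy_decode(alpha):
    """Pick, at each step t, the hidden state with the largest alpha_j(t).

    Fast, but the chosen path may use a forbidden connection (a_ij = 0),
    so it is not guaranteed to be the jointly most probable sequence.
    """
    return np.argmax(alpha, axis=1)   # one state index per time step
```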
5. Learning Problem

Problem:
Given a training set of observation sequences {Y^T_(j)}, j = 1,...,J, find the best (MAP) estimates of a_ij and c_jk, assuming(*) that the hidden state sequences for these training data are known a priori.
(* A more sophisticated algorithm that drops this assumption is what is actually applied in practice.)

A useful approach is to iteratively update the parameters so as to better explain the observed training sequences.

Forward-backward algorithm
The forward variables α_i(t) as well as the following backward variables β_i(t) are employed in this algorithm.
Backward Algorithm

The backward variable is defined by

\[ \beta_i(t) := \Pr\!\big(\mathbf{Y}_{t+1:T} \mid x(t) = x_i\big), \qquad \mathbf{Y}_{t+1:T} := \{y(t+1), \ldots, y(T)\} \tag{15} \]

β_i(t) represents the probability that the model, being in x(t) = x_i, generates the given target sequence from y(t+1) to y(T).

Backward variable computation:

\[ \beta_i(t) = \begin{cases} 0 & t = T \text{ and } x(T) \ne x_0 \\ 1 & t = T \text{ and } x(T) = x_0 \\ \sum_j a_{ij}\, c_{jk}\, \beta_j(t+1) & \text{otherwise} \end{cases} \tag{16} \]

where k is the index of the output y(t+1) = y_k.
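A sketch of recursion (16), mirroring the forward() function above (it assumes, as in the lecture's setup, that the chain must end in the absorber x_0):

```python
import numpy as np

def backward(a, c, y, x_final=0):
    """Backward variables beta[t, i] = Pr(Y_{t+1:T} | x(t)=x_i), eqs. (15)-(16)."""
    N, T = a.shape[0], len(y)
    beta = np.zeros((T + 1, N))
    beta[T, x_final] = 1.0                    # t = T cases of eq. (16)
    for t in range(T - 1, -1, -1):
        k = y[t]                              # y holds y(1..T), so y[t] is y(t+1)
        beta[t] = a @ (c[:, k] * beta[t + 1])
    return beta
```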
Forward-backward algorithm

Define the following posterior probability that a state sequence passes through state x_i at t-1 and state x_j at t:

\[ \gamma_{ij}(t) := \Pr\!\big(x_i(t-1),\, x_j(t) \mid \mathbf{Y}^T,\, \theta\big) \tag{17} \]

where θ = {a_ij, c_jk} denotes the HMM parameters.

γ_ij(t) is given as follows:

\[ \gamma_{ij}(t) = \frac{\Pr\!\big(x_i(t-1),\, x_j(t),\, \mathbf{Y}^T \mid \theta\big)}{\Pr\!\big(\mathbf{Y}^T \mid \theta\big)} \tag{18} \]

\[ \phantom{\gamma_{ij}(t)} = \frac{\alpha_i(t-1)\, a_{ij}\, c_{jk}\, \beta_j(t)}{\Pr(\mathbf{Y}^T)} \tag{19} \]

In the numerator of (19), α_i(t-1) is the path's past history up to t-1; a_ij c_jk is the current activity at t, with observation y(t) = y_k; β_j(t) is the path's future history after t. The denominator Pr(Y^T) is obtained from the evaluation problem.
Baum-Welch estimate

\[ \hat{a}_{ij} = \frac{\sum_{t=1}^{T} \gamma_{ij}(t)}{\sum_{t=1}^{T} \sum_{j'=1}^{N_x} \gamma_{ij'}(t)}, \qquad \hat{c}_{jk} = \frac{\sum_{t \,:\, y(t) = y_k} \sum_{i} \gamma_{ij}(t)}{\sum_{t=1}^{T} \sum_{i} \gamma_{ij}(t)} \tag{20} \]

where the numerator of ĉ_jk sums only over those t such that y(t) = y_k.
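In code, one re-estimation step (20) is a pair of normalized sums over the γ array (a sketch under the same conventions as above; the Σ_i occupancy sum in the ĉ update follows the reconstruction of (20) given here):

```python
import numpy as np

def baum_welch_update(g, y, M):
    """One re-estimation step per eq. (20).

    g : (T+1, N, N) gamma array from gamma(); y : outputs y(1..T);
    M : number of output symbols.
    """
    num_a = g[1:].sum(axis=0)                      # sum_t gamma_ij(t)
    denom = num_a.sum(axis=1, keepdims=True)       # sum_t sum_j gamma_ij(t)
    a_hat = num_a / np.maximum(denom, 1e-300)
    occ = g[1:].sum(axis=1)                        # occ[t-1, j] = sum_i gamma_ij(t)
    c_hat = np.zeros((g.shape[1], M))
    for t, k in enumerate(y):                      # accumulate steps with y(t) = y_k
        c_hat[:, k] += occ[t]
    c_hat /= np.maximum(c_hat.sum(axis=1, keepdims=True), 1e-300)
    return a_hat, c_hat
```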
In our case, since all of the paths are known, it is possible to count the number of times each particular transition or output observation occurs in the set of training data. The HMM parameters are then estimated empirically by the following frequency (repetition) ratios:

\[ \hat{a}_{ij} = \frac{N_{ij}}{\sum_{j} N_{ij}}, \qquad \hat{c}_{jk} = \frac{N_{jk}}{\sum_{k} N_{jk}} \tag{21} \]

where
N_ij: counted number of state transitions from x_i to x_j,
N_jk: counted number of observations of output y_k emitted from state x_j.

Unknown state sequence case:
Some type of iterative approach is applied.
Expectation-Maximization (EM) / Baum-Welch approach: start from an initial guess of a_ij and c_jk, which gives initial estimates of α_i(t-1) and β_j(t); then repeat the Baum-Welch update (20) with the newly estimated parameters.
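With fully labeled paths, the counting estimates (21) take only a few lines (a sketch; the names are ours):

```python
import numpy as np

def count_estimate(paths, outputs, N, M):
    """Empirical estimates (21) from fully labeled training sequences.

    paths   : list of state-index sequences x(0), ..., x(T)
    outputs : matching list of output-index sequences y(1), ..., y(T)
    """
    n_ij = np.zeros((N, N))                   # transition counts x_i -> x_j
    n_jk = np.zeros((N, M))                   # counts of y_k emitted from x_j
    for xs, ys in zip(paths, outputs):
        for t in range(1, len(xs)):
            n_ij[xs[t - 1], xs[t]] += 1
            n_jk[xs[t], ys[t - 1]] += 1       # ys[t-1] is y(t)
    a_hat = n_ij / np.maximum(n_ij.sum(axis=1, keepdims=True), 1)
    c_hat = n_jk / np.maximum(n_jk.sum(axis=1, keepdims=True), 1)
    return a_hat, c_hat
```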
6. Application of HMM - speech recognition -

Phoneme(*)-unit HMM (* the smallest segmental unit of speech sound)
Feature data: short-time frequency spectrum (Mel-cepstrum 12-dimensional vector sequence) → y(t), t = 1,...,T
Left-to-right HMM (no reverse-time transitions in the model)
Learning problem → HMM of /a/
Speech signal
[Figure: speech waveform, amplitude vs. time, segmented into the phonemes /g/ and /a/.]
Time-frequency domain representation - SPECTROGRAM -
[Figure: spectrogram; horizontal axis: time (frame number), vertical axis: frequency (bin).]
Word-level HMM (linked phoneme HMMs):
HMM of /k/ → HMM of /e/ → HMM of /i/ → HMM of /o/
Learning problem: W_i (the i-th word) ↔ θ_i (its HMM parameters)

Recognition:
Given a sequence of speech feature vectors y(t), t = 1,...,T, for an uttered word, find the most probable word W_max in the following sense (an evaluation problem):

\[ W_{\max} = \arg\max_{W_i} \Pr\!\big(W_i \mid \mathbf{Y}^T\big) = \arg\max_{W_i} \frac{\Pr\!\big(\mathbf{Y}^T \mid W_i\big)\, \Pr(W_i)}{\Pr(\mathbf{Y}^T)} \quad \text{(Bayes)} \]

where Pr(W_i) is the language model and Pr(Y^T | W_i) = Pr(Y^T | θ_i), i = 1,...,N_W.
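Recognition thus reduces to running the evaluation problem once per word model. A sketch (word_models and priors are hypothetical containers; forward() is the function from the evaluation section):

```python
import numpy as np

def recognize(word_models, priors, y):
    """MAP word choice: argmax_i Pr(Y^T | theta_i) Pr(W_i)  (Bayes).

    word_models : list of per-word HMM parameter tuples (a, c, x0)
    priors      : language-model probabilities Pr(W_i)
    """
    scores = [forward(a, c, x0, y)[-1].sum() * p     # Pr(Y^T | theta_i) Pr(W_i)
              for (a, c, x0), p in zip(word_models, priors)]
    return int(np.argmax(scores))
```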
Other application fields of HMM
・Time sequences: music, economics
・Symbol sequences: natural language
・Spatial sequences: image processing (gesture recognition)
・Structural order: the sequence of a gene's DNA
References:
The main reference materials in this lecture are
[1] R. O. Duda, P. E. Hart, and D. G. Stork, "Pattern Classification", 2nd edition, John Wiley & Sons, 2004.
[2] J. Candy, "Bayesian Signal Processing: Classical, Modern, and Particle Filtering Methods", John Wiley / IEEE Press, 2009.