Hidden Markov Models
Outline
1. CG-Islands
2. The “Fair Bet Casino”
3. Hidden Markov Model
4. Decoding Algorithm
5. Forward-Backward Algorithm
6. Profile HMMs
7. HMM Parameter Estimation
8. Viterbi Training
9. Baum-Welch Algorithm
Outline - CHANGE
• The “Fair Bet Casino” – improve graphics in “HMM for Fair Bet Casino (cont'd)”
• Decoding Algorithm – SHOW the two-row graph for casino
problem
• Forward-Backward Algorithm – SHOW the similarity in
dynamic programming equation between Viterbi and forward-
backward algorithm
• HMM Parameter Estimation – explain the idea of Baum-
Welch
• Profile HMM Alignment SHOW “Profile HMM” slide more
slowly – show M states first and add I and D slides later on –
SHOW an alignment in terms of M, I, D states
• it is not clear how p(xi) term appears after / in “Profile HMM
alignment: Dynamic Programming” – MAKE a dynamic
Section 1:
CG-Islands
CG-Islands
• Given 4 nucleotides: the probability of any one's occurrence is ~ 1/4.
• Thus, the probability of occurrence of a given dinucleotide (pair of successive nucleotides) is ~ 1/16.
• However, the frequencies of dinucleotides in DNA sequences vary widely.
• In particular, CG is typically underrepresented (the frequency of CG is typically < 1/16).
CG-Islands
• CG is the least frequent dinucleotide because the C in CG is
easily methylated and has the tendency to mutate into T
afterwards.
• However, methylation is suppressed around genes in a
genome. So, CG appears at relatively high frequency within
these CG-islands.
• So, finding the CG-islands in a genome is an important
biological problem.
Section 2:
The Fair Bet Casino
The “Fair Bet Casino”
• The CG-islands problem can be modeled after a problem
named The Fair Bet Casino.
• In this game the dealer flips one of two coins; each flip has only two possible outcomes: Head (H) or Tail (T).
• The Fair coin (F) will give H and T each with probability
½, which is written P(H | F) = P(T | F) = ½.
• The Biased coin (B) will give H with probability ¾, which
we write as P(H | B) = ¾, P(T | B) = ¼.
• The crooked dealer changes between F and B coins with
probability 10%.
• How can we tell when the dealer is using F and when he is
using B?
The Fair Bet Casino Problem
• Input: A sequence x = x1x2x3…xn of coin tosses made by two
possible coins (F or B).
• Output: A sequence π = π1 π2 π3… πn, with each πi being either
F or B and indicating that xi is the result of tossing the Fair or
Biased coin respectively.
Problem…
• Any observed outcome of coin tosses could have been
generated by any sequence of states!
• Example: HHHHHHHHHH could be generated by
BBBBBBBBBB, FFFFFFFFFF, FBFBFBFBFB, etc.
• We need to incorporate a way to grade different sequences
differently.
• This provides us with the decoding problem.
Simple Case: The Dealer Never Switches Coins
• We assume first that the dealer never changes coins:
• P(x | F): probability of the dealer using F and generating the
outcome x.
• P(x | B): probability of the dealer using the B coin and
generating outcome x.
• Example: Say that in x we observe k heads and n – k tails:
P(x \mid F) = \prod_{i=1}^{n} \frac{1}{2} = \left(\frac{1}{2}\right)^{n}

P(x \mid B) = \left(\frac{3}{4}\right)^{k} \left(\frac{1}{4}\right)^{n-k} = \frac{3^{k}}{4^{n}}
When Does P(x | F) = P(x | B)?
P(x \mid F) = P(x \mid B)
\Rightarrow \left(\frac{1}{2}\right)^{n} = \frac{3^{k}}{4^{n}}
\Rightarrow 2^{n} = 3^{k}
\Rightarrow n = k \log_2 3
\Rightarrow k = \frac{n}{\log_2 3}
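To make the crossover concrete, here is a small Python check (a sketch; the function name is my own) that computes the number of heads k at which the two models become equally likely for a given n:

```python
import math

def crossover_heads(n):
    """Number of heads k at which P(x|F) = P(x|B) for n tosses: k = n / log2(3)."""
    return n / math.log2(3)

for n in (10, 100, 1000):
    k = crossover_heads(n)
    print(f"n = {n}: models equally likely at k ≈ {k:.1f} heads")
    # More heads than this favours the biased coin, fewer favours the fair coin.
```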
Log-odds Ratio
• We define the log-odds ratio (L) as follows:
• From the previous slide, if L > 0 we have reason to believe that
the coin is fair, and if L < 0 we think the coin is biased.
L = \log_2 \frac{P(x \mid F)}{P(x \mid B)}
  = \log_2 \left(\frac{1}{2}\right)^{n} - \log_2 \frac{3^{k}}{4^{n}}
  = -n - \left(k \log_2 3 - 2n\right)
  = n - k \log_2 3
Computing Log-odds Ratio in Sliding Windows
• Consider a sliding window over the outcome sequence x1x2x3x4x5x6x7x8…xn and compute the log-odds ratio for this short window.
• [Plot: log-odds value along the sequence, with 0 as the threshold separating regions where the fair coin was most likely used from regions where the biased coin was most likely used.]
• Key Disadvantages:
  • The length of the CG-island is not known in advance.
  • Different windows may classify the same position differently.
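As an illustration of the sliding-window idea, here is a short Python sketch (the function name and window size are my own choices, not from the slides) that scores each window of a 0/1 toss sequence with L = n − k·log2(3):

```python
import math

def window_log_odds(x, window=20):
    """Log-odds L = n - k*log2(3) for each window of the 0/1 sequence x.

    Positive values suggest the fair coin, negative values the biased coin.
    """
    scores = []
    for start in range(len(x) - window + 1):
        w = x[start:start + window]
        k = sum(w)                                   # number of heads (1s) in the window
        scores.append(window - k * math.log2(3))
    return scores

# Toy example: a roughly fair stretch followed by a head-heavy (biased-looking) stretch.
x = [0, 1, 0, 1, 1, 0, 0, 1, 0, 1] * 3 + [1, 1, 1, 0, 1, 1, 1, 1, 0, 1] * 3
print(window_log_odds(x, window=10)[:5])
```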
Section 3:
Hidden Markov Models
Hidden Markov Model (HMM)
• Can be viewed as an abstract machine with k hidden states that
emits symbols from an alphabet Σ.
• Each state has its own probability distribution, and the
machine switches between states and chooses characters
according to this probability distribution.
• While in a certain state, the machine makes two decisions:
1. What state should I move to next?
2. What symbol - from the alphabet Σ - should I emit?
Why “Hidden”?
• Observers can see the emitted symbols of an HMM but have
no ability to know which state the HMM is currently in.
• The goal is to infer the most likely hidden states of an HMM
based on the given sequence of emitted symbols.
HMM Parameters
• Σ: set of emission characters.
• Q: set of hidden states, each emitting symbols from Σ.
• A = (akl): a |Q| x |Q| matrix containing the probabilities of
changing from state k to state l.
• E = (ek(b)): a |Q| x |Σ| matrix of the probabilities of emitting symbol b while in state k.
HMM Parameters
• A = (akl): a |Q| x |Q| matrix containing the probabilities of
changing from state k to state l.
• aFF = 0.9 aFB = 0.1
• aBF = 0.1 aBB = 0.9
• E = (ek(b)): a |Q| x |Σ| matrix of the probabilities of emitting symbol b while in state k.
• eF(0) = ½ eF(1) = ½
• eB(0) = ¼ eB(1) = ¾
HMM for the Fair Bet Casino
• The Fair Bet Casino in HMM terms:
  • Σ = {0, 1} (0 for T and 1 for H)
  • Q = {F, B}

Transition Probabilities (A)
           Fair         Biased
  Fair     aFF = 0.9    aFB = 0.1
  Biased   aBF = 0.1    aBB = 0.9

Emission Probabilities (E)
           Tails (0)    Heads (1)
  Fair     eF(0) = ½    eF(1) = ½
  Biased   eB(0) = ¼    eB(1) = ¾
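These parameters are small enough to write down directly. A minimal Python sketch (the variable names are mine; the uniform start distribution is an assumption, since the slides do not state one) that is reused in later examples:

```python
# Fair Bet Casino HMM parameters (from the tables above).
STATES = ("F", "B")                     # Fair, Biased
A = {("F", "F"): 0.9, ("F", "B"): 0.1,  # transition probabilities a_kl
     ("B", "F"): 0.1, ("B", "B"): 0.9}
E = {"F": {0: 0.5,  1: 0.5},            # emission probabilities e_k(b); 0 = tails, 1 = heads
     "B": {0: 0.25, 1: 0.75}}
START = {"F": 0.5, "B": 0.5}            # assumed uniform start distribution
```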
HMM for the Fair Bet Casino
• HMM model for the Fair Bet Casino Problem:
Hidden Paths
• A path π = π1…πn in the HMM is defined as a sequence of states.
• Consider path π = FFFBBBBBFFF and sequence x = 01011101001

  x             0     1     0     1     1     1     0     1     0     0     1
  π             F     F     F     B     B     B     B     B     F     F     F
  P(xi | πi)    ½     ½     ½     ¾     ¾     ¾     ¼     ¾     ½     ½     ½
  P(πi-1 → πi)  ½     9/10  9/10  1/10  9/10  9/10  9/10  9/10  1/10  9/10  9/10

• P(xi | πi): probability that xi was emitted from state πi
• P(πi-1 → πi): transition probability from state πi-1 to state πi
P(x | π) Calculation
• P(x | π): probability that sequence x was generated given that we follow the path π.

P(x \mid \pi) = P(\pi_0 \to \pi_1) \cdot \prod_{i=1}^{n} P(x_i \mid \pi_i) \cdot P(\pi_i \to \pi_{i+1})
             = a_{\pi_0, \pi_1} \cdot \prod_{i=1}^{n} e_{\pi_i}(x_i) \cdot a_{\pi_i, \pi_{i+1}}
             = \prod_{i=0}^{n} e_{\pi_i}(x_i) \cdot a_{\pi_i, \pi_{i+1}}

(Here π0 and πn+1 denote the begin and end states, with the convention e_{π0}(x0) = 1.)
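Using the parameters sketched earlier (A, E, START), a small Python check of P(x | π) for the example path; treating the ½ start value as the begin-state transition is my reading of the table above:

```python
def path_probability(x, path, A, E, start):
    """P(x | path) = start[path0] * prod_i e_{pi_i}(x_i) * a_{pi_i, pi_{i+1}}."""
    p = start[path[0]]
    for i, (symbol, state) in enumerate(zip(x, path)):
        p *= E[state][symbol]                      # emission e_{pi_i}(x_i)
        if i + 1 < len(path):
            p *= A[(state, path[i + 1])]           # transition a_{pi_i, pi_{i+1}}
    return p

x = [0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1]
path = list("FFFBBBBBFFF")
print(path_probability(x, path, A, E, START))      # product of the two table rows above
```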
Section 4:
Decoding Algorithm
Decoding Problem
• Goal: Find an optimal hidden path of states given observations.
• Input: Sequence of observations x = x1…xn generated by an
HMM M(Σ, Q, A, E).
• Output: A path that maximizes P(x | π) over all possible paths π.
Building Manhattan for Decoding Problem
• Andrew Viterbi used the Manhattan edit graph model to solve
the Decoding Problem.
• The vertices are arranged in n “levels”, with |Q| vertices in each level; each vertex represents a different state.
• We connect each vertex in level i to each vertex in level i + 1 via a directed edge, giving |Q|²(n – 1) edges.
• Therefore every choice of π = π1… πn corresponds to a path in
the graph.
Edit Graph for Decoding Problem: Example
Decoding Problem vs. Alignment Problem
[Figure: valid edge directions in the alignment graph vs. valid edge directions in the decoding graph.]
Decoding Problem
• Every path in the graph has the probability P(x | π).
• The Viterbi algorithm finds the path that maximizes P(x | π)
among all possible paths.
• The Viterbi algorithm runs in O(n·|Q|²) time.
Decoding Problem: Weights of Edges

  (k, i) --w--> (l, i+1)

P(x \mid \pi) = \prod_{i=0}^{n-1} e_{\pi_{i+1}}(x_{i+1}) \cdot a_{\pi_i, \pi_{i+1}}

• The i-th term of this product is e_{\pi_{i+1}}(x_{i+1}) \cdot a_{\pi_i, \pi_{i+1}}.
• The weight of the edge from (k, i) to (l, i+1) is therefore given by: w = el(xi+1) · ak,l.
Decoding Problem and Dynamic Programming
• sl, i+1 = max probability of all paths of length i + 1 ending in
state l (for the first i + 1 observations).
• Recursion:
sl, i1  max
kQ
sk, i  weight of edge between k, i  and l, i  1  
 max
kQ
sk, i  ak, l  el xi1  
 el xi1  max
kQ
sk, i  ak, l 
Decoding Problem and Dynamic Programming
• The value of the product can become extremely small, which leads to underflow.
  • A computer has only finite precision for storing any given number, and once a number becomes too small it can no longer be represented accurately.
• To avoid underflow, take the logarithm of the right-hand side instead:

s_{l,\,i+1} = \log e_l(x_{i+1}) + \max_{k \in Q} \left\{ \log s_{k,i} + \log a_{k,l} \right\}
Decoding Problem and Dynamic Programming
• Initialization:

s_{k,\,0} = \begin{cases} 1 & \text{if } k = \mathrm{begin} \\ 0 & \text{otherwise} \end{cases}

• Let π* be the optimal path. Then

P(x \mid \pi^{*}) = \max_{k \in Q} \left\{ s_{k,\,n} \cdot a_{k,\,\mathrm{end}} \right\}
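Putting the recursion, the log transform, and the initialization together, here is a minimal Viterbi sketch in Python, reusing A, E, START from the earlier parameter sketch (the traceback bookkeeping and the uniform start probabilities are my additions; they are not spelled out on the slides):

```python
import math

def viterbi(x, states, A, E, start):
    """Most probable state path for observations x, computed in log space."""
    V = [{k: math.log(start[k]) + math.log(E[k][x[0]]) for k in states}]
    back = [{}]
    for i in range(1, len(x)):
        V.append({})
        back.append({})
        for l in states:
            # s_{l,i} = log e_l(x_i) + max_k ( s_{k,i-1} + log a_{k,l} )
            best_k = max(states, key=lambda k: V[i - 1][k] + math.log(A[(k, l)]))
            V[i][l] = math.log(E[l][x[i]]) + V[i - 1][best_k] + math.log(A[(best_k, l)])
            back[i][l] = best_k
    # Traceback from the best final state.
    last = max(states, key=lambda k: V[-1][k])
    path = [last]
    for i in range(len(x) - 1, 0, -1):
        path.append(back[i][path[-1]])
    return "".join(reversed(path))

x = [0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1]
print(viterbi(x, ("F", "B"), A, E, START))   # most probable F/B state path for the tosses
```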
Section 5:
Forward-Backward
Algorithm
Forward-Backward Problem
• Given: a sequence of coin tosses generated by an HMM.
• Goal: Find the probability that the dealer was using a biased
coin at a particular time.
Forward Probability
• Define fk,i (the forward probability) as the probability of emitting the prefix x1…xi and reaching the state πi = k.
• The recurrence for the forward algorithm:

f_{k,i} = e_k(x_i) \cdot \sum_{l \in Q} f_{l,\,i-1} \cdot a_{l,k}
Backward Probability
• However, forward probability is not the only factor affecting
P(πi = k | x).
• The sequence of transitions and emissions that the HMM
undergoes between πi+1 and πn also affect P(πi = k | x).
• Define the backward probability bk,i as the probability of emitting the suffix xi+1…xn given that we are in state πi = k. Recurrence:

b_{k,i} = \sum_{l \in Q} e_l(x_{i+1}) \cdot b_{l,\,i+1} \cdot a_{k,l}
Backward-Forward Probability
• The probability that HMM is in a certain state k at any
moment i, given that we observe the output x, is therefore
influenced by both the forward and backward probabilities.
• We use the mathematical definition of conditional
probability to calculate P(πi = k | x):
P(\pi_i = k \mid x) = \frac{P(x,\ \pi_i = k)}{P(x)} = \frac{f_{k,i} \cdot b_{k,i}}{P(x)}
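A minimal forward-backward sketch in Python, again reusing A, E, START from the earlier parameter sketch (the uniform start distribution and the variable names are my own assumptions):

```python
def forward_backward(x, states, A, E, start):
    """Posterior P(pi_i = k | x) = f_{k,i} * b_{k,i} / P(x) for every position i."""
    n = len(x)
    # Forward: f_{k,i} = e_k(x_i) * sum_l f_{l,i-1} * a_{l,k}
    f = [{k: start[k] * E[k][x[0]] for k in states}]
    for i in range(1, n):
        f.append({k: E[k][x[i]] * sum(f[i - 1][l] * A[(l, k)] for l in states)
                  for k in states})
    # Backward: b_{k,i} = sum_l e_l(x_{i+1}) * b_{l,i+1} * a_{k,l}
    b = [{k: 1.0 for k in states}]
    for i in range(n - 2, -1, -1):
        b.insert(0, {k: sum(E[l][x[i + 1]] * b[0][l] * A[(k, l)] for l in states)
                     for k in states})
    px = sum(f[n - 1][k] for k in states)          # P(x) from the forward values
    return [{k: f[i][k] * b[i][k] / px for k in states} for i in range(n)]

x = [0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1]
posterior = forward_backward(x, ("F", "B"), A, E, START)
print([round(p["B"], 2) for p in posterior])       # P(biased coin) at each position
```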
Section 6:
Profile HMMs
Finding Distant Members of a Protein Family
• A distant cousin of functionally related sequences in a protein
family may have weak pairwise similarities with each member
of the family and thus fail a significance test.
• However, they may have these weak similarities with many
members of the family, indicating a correlation.
• The goal is to align a sequence to all members of the family at
once.
• A family of related proteins can be represented by their
multiple alignment and the corresponding profile.
Profile Representation of Protein Families
• Aligned DNA sequences can be represented by a 4 x n profile
matrix reflecting the frequencies of nucleotides in every aligned
position.
• Example:
• Similarly, a protein family can be represented by a 20 x n
profile representing frequencies of amino acids.
• Multiple alignment of a protein family shows variations in
conservation along the length of a protein.
• Example: After aligning many globin proteins, biologists recognized that the helix regions in globins are more conserved than other regions.
Protein Family Classification
• A profile HMM is a probabilistic representation of a multiple
alignment.
• A given multiple alignment (of a protein family) is used to
build a profile HMM.
• This model then may be used to find and score less obvious
potential matches of new protein sequences.
What Is a Profile HMM?
Profile HMM
• A profile HMM has three sets of states:
• Match states: M1 ,…, Mn (plus begin/end states)
• Insertion states: I0 , I1 ,…, In
• Deletion states: D1 ,…, Dn
Building a Profile HMM
1. Use the multiple alignment to construct the HMM model.
2. Assign each column to a Match state in the HMM; add Insertion and Deletion states.
3. Estimate the emission probabilities according to the amino acid counts in each column. Different positions in the protein will have different emission probabilities.
4. Estimate the transition probabilities between Match, Insertion, and Deletion states.
Transition Probabilities in a Profile HMM
• Gap Initiation Penalty: the cost of beginning a gap, which requires transitions from the match state to the insertion state and back.
  • Penalty: log(aMI) + log(aIM)
• Gap Extension Penalty: the cost of extending a gap, which corresponds to maintaining the insertion state for one more step.
  • Penalty: log(aII)
Emission Probabilities in a Profile HMM
• Probability of emitting a symbol a at an insertion state Ij:

e_{I_j}(a) = p(a)

• Here p(a) is the frequency of the occurrence of the symbol a in all the sequences.
Profile HMM Alignment
• Define v^M_j(i) as the logarithmic likelihood score of the best path for matching x1…xi to the profile HMM, ending with xi emitted by the state Mj.
• v^I_j(i) and v^D_j(i) are defined similarly.
Profile HMM Alignment: Dynamic Programming
v^M_j(i) = \log \frac{e_{M_j}(x_i)}{p(x_i)} + \max \begin{cases} v^M_{j-1}(i-1) + \log a_{M_{j-1},\,M_j} \\ v^I_{j-1}(i-1) + \log a_{I_{j-1},\,M_j} \\ v^D_{j-1}(i-1) + \log a_{D_{j-1},\,M_j} \end{cases}
Profile HMM Alignment: Dynamic Programming
v^I_j(i) = \log \frac{e_{I_j}(x_i)}{p(x_i)} + \max \begin{cases} v^M_j(i-1) + \log a_{M_j,\,I_j} \\ v^I_j(i-1) + \log a_{I_j,\,I_j} \\ v^D_j(i-1) + \log a_{D_j,\,I_j} \end{cases}
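To show how these recurrences fit together, here is a compact Viterbi sketch over a profile HMM. It is a sketch only: the position-independent transition scores, the single background distribution p used for insert emissions, and the delete-state recurrence (which the slides do not show and which follows the standard textbook form) are my assumptions:

```python
import math

NEG_INF = float("-inf")

def profile_viterbi(x, e_match, p, log_a):
    """Best log-odds score for aligning sequence x to a profile HMM with m match states.

    e_match[j] : emission distribution of match state M_{j+1}
    p          : background distribution, also used as the insert-state emissions
    log_a      : log transition scores keyed by (from_type, to_type), e.g. ('M', 'I');
                 assumed position-independent for simplicity.
    """
    m, n = len(e_match), len(x)
    vM = [[NEG_INF] * (m + 1) for _ in range(n + 1)]
    vI = [[NEG_INF] * (m + 1) for _ in range(n + 1)]
    vD = [[NEG_INF] * (m + 1) for _ in range(n + 1)]
    vM[0][0] = 0.0                                   # M_0 plays the role of the begin state
    for i in range(n + 1):
        for j in range(m + 1):
            if i >= 1 and j >= 1:                    # match: consume x_i, advance to M_j
                vM[i][j] = math.log(e_match[j - 1][x[i - 1]] / p[x[i - 1]]) + max(
                    vM[i - 1][j - 1] + log_a[("M", "M")],
                    vI[i - 1][j - 1] + log_a[("I", "M")],
                    vD[i - 1][j - 1] + log_a[("D", "M")])
            if i >= 1:                               # insert: consume x_i, stay at I_j
                vI[i][j] = 0.0 + max(                # log(e_I(x_i)/p(x_i)) = 0 since e_I = p
                    vM[i - 1][j] + log_a[("M", "I")],
                    vI[i - 1][j] + log_a[("I", "I")],
                    vD[i - 1][j] + log_a[("D", "I")])
            if j >= 1:                               # delete: advance to D_j, no symbol used
                vD[i][j] = max(
                    vM[i][j - 1] + log_a[("M", "D")],
                    vI[i][j - 1] + log_a[("I", "D")],
                    vD[i][j - 1] + log_a[("D", "D")])
    return max(vM[n][m], vI[n][m], vD[n][m])         # transition into the end state omitted
```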
Paths in Edit Graph and Profile HMM
• At right is a path
through an edit graph
and the corresponding
path through a profile
HMM.
• Observe:
  • Diagonal → match
  • Vertical → insertion
  • Horizontal → deletion
Making a Collection of HMMs for Protein Families
1. Use BLAST to separate a protein database into families of related proteins.
2. Construct a multiple alignment for each protein family.
3. Construct a profile HMM model and optimize the parameters of the model (transition and emission probabilities).
4. Align the target sequence against each HMM to find the best fit between a target sequence and an HMM.
Profile HMMs and Modeling Globin Proteins
• Globins represent a large collection of protein sequences.
• 400 globin sequences were randomly selected from all globins
and used to construct a multiple alignment.
• Multiple alignment was used to assign an HMM.
• The 625 remaining globin sequences were aligned to the HMM, resulting in a multiple alignment. This multiple alignment was in good agreement with the structurally derived alignment.
• Other proteins were randomly chosen from the database and compared against the globin HMM.
• This experiment resulted in an excellent separation between
globin and non-globin families.
PFAM
• Pfam describes protein domains.
• Each protein domain family in Pfam has:
  • Seed alignment: a manually verified multiple alignment of a representative set of sequences.
  • HMM: built from the seed alignment for further searches.
  • Full alignment: generated automatically from the HMM.
• The distinction between seed and full alignments facilitates Pfam updates.
  • Seed alignments are stable resources.
  • HMM profiles and full alignments can be updated with newly found amino acid sequences.
PFAM Uses
• Pfam HMMs span entire domains, including both well-conserved motifs and less-conserved regions with insertions and deletions.
• Modeling complete domains facilitates better sequence annotation and leads to more sensitive detection.
Section 7:
HMM Parameter
Estimation
HMM Parameter Estimation
• So far, we have assumed that the transition and emission
probabilities are known.
• However, in most HMM applications, the probabilities are not
known. It is very difficult to estimate the probabilities.
HMM Parameter Estimation Problem
• Given: An HMM with states and alphabet (emission characters), as well as independent training sequences x1, …, xm.
• Goal: Find HMM parameters Θ (that is, ak,l and ek(b)) that maximize the joint probability of the training sequences:

P(x^{1}, \ldots, x^{m} \mid \Theta)
Maximize the Likelihood
• P(x1, …, xm | Θ), as a function of Θ, is called the likelihood of the model.
• The training sequences are assumed independent; therefore

P(x^{1}, \ldots, x^{m} \mid \Theta) = \prod_{i=1}^{m} P(x^{i} \mid \Theta)

• The parameter estimation problem seeks the Θ that realizes

\max_{\Theta} \prod_{i} P(x^{i} \mid \Theta)

• In practice, the log likelihood is computed to avoid underflow errors.
Two Situations
1. Known paths for training sequences:
  • CG-islands are marked on the training sequences.
  • Casino analogue: one evening the dealer lets us see when he changes coins.
2. Unknown paths for training sequences:
  • CG-islands are not marked.
  • We do not see when the casino dealer changes coins.
Known Paths
• Ak,l = number of times the transition k → l is taken in the training sequences.
• Ek(b) = number of times b is emitted from state k in the training sequences.
• Compute ak,l and ek(b) as maximum likelihood estimators:

a_{k,l} = \frac{A_{k,l}}{\sum_{l'} A_{k,l'}} \qquad e_k(b) = \frac{E_k(b)}{\sum_{b'} E_k(b')}
Pseudocounts
• Some state k may not appear in any of the training sequences. This means Ak,l = 0 for every state l, and ak,l cannot be computed with the given equation.
• To avoid this overfitting, use predetermined pseudocounts rk,l and rk(b) which reflect prior biases about the probability values:
  • Ak,l = (number of transitions k → l) + rk,l
  • Ek(b) = (number of emissions of b from k) + rk(b)
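When the paths are known, estimation is just counting. A small Python sketch of the maximum likelihood estimators with pseudocounts (the function name and the pseudocount value of 1 are my own illustrative choices):

```python
from collections import defaultdict

def estimate_known_paths(sequences, paths, states, symbols, pseudocount=1.0):
    """Estimate a_{k,l} and e_k(b) from (sequence, path) pairs with pseudocounts."""
    A = defaultdict(lambda: pseudocount)           # transition counts A_{k,l}
    Ecount = defaultdict(lambda: pseudocount)      # emission counts E_k(b)
    for x, pi in zip(sequences, paths):
        for i, (symbol, state) in enumerate(zip(x, pi)):
            Ecount[(state, symbol)] += 1
            if i + 1 < len(pi):
                A[(state, pi[i + 1])] += 1
    a = {(k, l): A[(k, l)] / sum(A[(k, l2)] for l2 in states)
         for k in states for l in states}
    e = {k: {b: Ecount[(k, b)] / sum(Ecount[(k, b2)] for b2 in symbols) for b in symbols}
         for k in states}
    return a, e

a_hat, e_hat = estimate_known_paths([[0, 1, 1, 0, 1]], ["FFBBB"], ("F", "B"), (0, 1))
print(a_hat, e_hat)
```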
Section 8:
Viterbi Training
Unknown Paths Method 1: Viterbi Training
• Idea: Use Viterbi decoding to compute the most probable path
for training sequence x.
• Method:
  1. Start with some guess for the initial parameters and compute π* = the most probable path for x using these initial parameters.
  2. Iterate until there is no change in π*:
     a. Determine Ak,l and Ek(b) as before.
     b. Compute new parameters ak,l and ek(b) using the same formulas as before.
     c. Compute a new π* for x using the current parameters.
Viterbi Training Analysis
• The algorithm converges precisely (in a finite number of steps):
  • There are finitely many possible paths.
  • New parameters are uniquely determined by the current π*.
  • There may be several paths for x with the same probability, hence we must compare the new π* with all previous paths having highest probability.
• Does not maximize the likelihood Πx P(x | Θ) but rather the contribution to the likelihood of the most probable path, Πx P(x | Θ, π*).
• In general, performs less well than Baum-Welch (below).
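A schematic of the Viterbi training loop, reusing the viterbi and estimate_known_paths sketches above (the convergence test on π* follows the method described on the previous slide; the iteration cap and function signature are my own framing):

```python
def viterbi_training(sequences, states, symbols, a, e, start, max_iters=50):
    """Iteratively re-estimate (a, e) from the current most probable paths."""
    previous_paths = None
    for _ in range(max_iters):
        paths = [viterbi(x, states, a, e, start) for x in sequences]
        if paths == previous_paths:        # stop when pi* no longer changes
            break
        previous_paths = paths
        a, e = estimate_known_paths(sequences, paths, states, symbols)
    return a, e
```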
Section 9:
Baum-Welch Algorithm
Unknown Paths Method 2: Baum-Welch
• Idea: Guess initial values for the parameters.
  • This is art and experience, not science.
• We then estimate new (better) values for the parameters.
  • How?
• We repeat until a stopping criterion is met.
  • What criterion?
Improved Parameters
• We would need the Ak,l and Ek(b) values, but the path is unknown, and we do not want to use a most probable path.
• Therefore, for all states k, l, symbols b, and training sequences x:
  • Compute Ak,l and Ek(b) as expected values, given the current parameters.
Probabilistic Setting for Ak,l
• Given our training sequences x1, …, xm, consider a discrete probability space with elementary events εk,l = “k → l is taken in x1, …, xm.”
• For each x in {x1, …, xm} and each position i in x, let Yx,i be a random variable defined by

Y_{x,i}(k, l) = \begin{cases} 1 & \text{if } \pi_i = k \text{ and } \pi_{i+1} = l \\ 0 & \text{otherwise} \end{cases}

• Define Y = Σx Σi Yx,i as the random variable which counts the number of times the event εk,l happens in x1, …, xm.
The Meaning of Ak,l
• Let Ak,l be the expectation of Y:

A_{k,l} = E(Y) = \sum_{x} \sum_{i} E(Y_{x,i}) = \sum_{x} \sum_{i} P(Y_{x,i} = 1) = \sum_{x} \sum_{i} P(\pi_i = k \text{ and } \pi_{i+1} = l \mid x)

• We therefore need to compute P(πi = k, πi+1 = l | x).
Probabilistic Setting for Ek(b)
• Given x1, …, xm, consider a discrete probability space with elementary events εk,b = “b is emitted in state k in x1, …, xm.”
• For each x in {x1, …, xm} and each position i in x, let Yx,i be a random variable defined by

Y_{x,i}(k, b) = \begin{cases} 1 & \text{if } x_i = b \text{ and } \pi_i = k \\ 0 & \text{otherwise} \end{cases}

• Define Y = Σx Σi Yx,i as the random variable which counts the number of times the event εk,b happens in x1, …, xm.
Computing New Parameters
• Consider a training sequence x = x1…xn.
• Concentrate on positions i and i + 1.
• Use the forward-backward values:

f_{k,i} = P(x_1 \cdots x_i,\ \pi_i = k) \qquad b_{k,i} = P(x_{i+1} \cdots x_n \mid \pi_i = k)
Compute Ak,l (1)
• The probability that the transition k → l is taken at position i of x:

P(\pi_i = k,\ \pi_{i+1} = l \mid x_1 \cdots x_n) = \frac{P(x,\ \pi_i = k,\ \pi_{i+1} = l)}{P(x)}, \qquad P(x,\ \pi_i = k,\ \pi_{i+1} = l) = b_{l,\,i+1} \cdot e_l(x_{i+1}) \cdot a_{k,l} \cdot f_{k,i}

• Compute P(x) using either the forward or the backward values.
• Expected number of times k → l is used in the training sequences:

A_{k,l} = \sum_{x} \sum_{i} \frac{b_{l,\,i+1} \cdot e_l(x_{i+1}) \cdot a_{k,l} \cdot f_{k,i}}{P(x)}
Compute Ak,l (2)

P(x,\ \pi_i = k,\ \pi_{i+1} = l)
  = P(x_1 \cdots x_i,\ \pi_i = k,\ \pi_{i+1} = l,\ x_{i+1} \cdots x_n)
  = P(\pi_{i+1} = l,\ x_{i+1} \cdots x_n \mid x_1 \cdots x_i,\ \pi_i = k) \cdot P(x_1 \cdots x_i,\ \pi_i = k)
  = P(\pi_{i+1} = l,\ x_{i+1} \cdots x_n \mid \pi_i = k) \cdot f_{k,i}
  = P(x_{i+1} \cdots x_n \mid \pi_i = k,\ \pi_{i+1} = l) \cdot P(\pi_{i+1} = l \mid \pi_i = k) \cdot f_{k,i}
  = P(x_{i+1} \cdots x_n \mid \pi_{i+1} = l) \cdot a_{k,l} \cdot f_{k,i}
  = P(x_{i+2} \cdots x_n \mid x_{i+1},\ \pi_{i+1} = l) \cdot P(x_{i+1} \mid \pi_{i+1} = l) \cdot a_{k,l} \cdot f_{k,i}
  = P(x_{i+2} \cdots x_n \mid \pi_{i+1} = l) \cdot e_l(x_{i+1}) \cdot a_{k,l} \cdot f_{k,i}
  = b_{l,\,i+1} \cdot e_l(x_{i+1}) \cdot a_{k,l} \cdot f_{k,i}
Compute Ek(b)
• Probability that xi of x is emitted in state k:

P(\pi_i = k \mid x_1 \cdots x_n) = \frac{P(\pi_i = k,\ x_1 \cdots x_n)}{P(x)}

P(\pi_i = k,\ x_1 \cdots x_n) = P(x_1 \cdots x_i,\ \pi_i = k,\ x_{i+1} \cdots x_n)
  = P(x_{i+1} \cdots x_n \mid x_1 \cdots x_i,\ \pi_i = k) \cdot P(x_1 \cdots x_i,\ \pi_i = k)
  = P(x_{i+1} \cdots x_n \mid \pi_i = k) \cdot f_{k,i}
  = b_{k,i} \cdot f_{k,i}

• Expected number of times b is emitted in state k:

E_k(b) = \sum_{x} \sum_{i\,:\, x_i = b} \frac{f_{k,i} \cdot b_{k,i}}{P(x)}
Finally, New Parameters
• These methods allow us to calculate our new parameters ak,l and ek(b):

a_{k,l} = \frac{A_{k,l}}{\sum_{l'} A_{k,l'}} \qquad e_k(b) = \frac{E_k(b)}{\sum_{b'} E_k(b')}

• We can then add pseudocounts as before.
Stopping Criteria
• We cannot actually reach the maximum (a property of the optimization of continuous functions).
• Therefore we need stopping criteria:
  • Compute the log likelihood of the model for the current Θ:

\sum_{x} \log P(x \mid \Theta)

  • Compare with the previous log likelihood; stop if the difference is small.
  • Stop after a certain number of iterations to avoid an infinite loop.
The Baum-Welch Algorithm Summarized
• Initialization: Pick the best-guess model parameters (or arbitrary ones).
• Iteration:
  1. Forward for each x
  2. Backward for each x
  3. Calculate Ak,l, Ek(b)
  4. Calculate new ak,l, ek(b)
  5. Calculate the new log-likelihood
• Repeat until the log-likelihood does not change much.
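For concreteness, here is a minimal single-sequence Baum-Welch sketch in Python following the expected-count formulas above (pseudocounts, start-probability re-estimation, and numerical safeguards are omitted; the helper name is mine, and the usage line reuses A, E, START from the earlier parameter sketch):

```python
def baum_welch_iteration(x, states, symbols, a, e, start):
    """One Baum-Welch update on a single training sequence x: expected counts, then normalize."""
    n = len(x)
    # Forward and backward values (same recurrences as in the earlier sketch).
    f = [{k: start[k] * e[k][x[0]] for k in states}]
    for i in range(1, n):
        f.append({k: e[k][x[i]] * sum(f[i - 1][l] * a[(l, k)] for l in states)
                  for k in states})
    b = [{k: 1.0 for k in states} for _ in range(n)]
    for i in range(n - 2, -1, -1):
        b[i] = {k: sum(e[l][x[i + 1]] * b[i + 1][l] * a[(k, l)] for l in states)
                for k in states}
    px = sum(f[n - 1][k] for k in states)
    # Expected counts A_{k,l} and E_k(b).
    A_exp = {(k, l): sum(f[i][k] * a[(k, l)] * e[l][x[i + 1]] * b[i + 1][l]
                         for i in range(n - 1)) / px
             for k in states for l in states}
    E_exp = {k: {s: sum(f[i][k] * b[i][k] for i in range(n) if x[i] == s) / px
                 for s in symbols} for k in states}
    # Normalize to obtain the new parameters.
    new_a = {(k, l): A_exp[(k, l)] / sum(A_exp[(k, l2)] for l2 in states)
             for k in states for l in states}
    new_e = {k: {s: E_exp[k][s] / sum(E_exp[k].values()) for s in symbols} for k in states}
    return new_a, new_e

new_a, new_e = baum_welch_iteration([0, 1, 1, 1, 0, 1, 1], ("F", "B"), (0, 1), A, E, START)
```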
Baum-Welch Analysis
• The log-likelihood is increased by each iteration.
• Baum-Welch is a particular case of the expectation-maximization (EM) algorithm.
• Convergence is to a local maximum. The choice of initial parameters determines which local maximum the algorithm converges to.
Additional Application: Speech Recognition
• Create an HMM of the words in a language.
• Each word is a hidden state in Q.
• Each of the basic sounds in the language is a symbol in Σ.
• Input: Fragment of speech.
• Goal: Find the most probable sequence of states.
Speech Recognition: Building the Model
• Analyze some large source of English sentences, such as a
database of newspaper articles, to form probability matrices.
• A0i: The chance that word i begins a sentence.
• Aij: The chance that word j follows word i.
• Analyze English speakers to determine what sounds are
emitted with what words.
• Ek(b): the chance that sound b is spoken in word k. Allows for
alternate pronunciation of words.
Speech Recognition: Using the Model
• Use the same dynamic programming algorithm as before.
• Weave the spoken sounds through the model the same way we wove the coin tosses through the casino model.
• π will therefore represent the most likely set of words.
Using the Model
• How well does the model work?
• Common words, such as 'the', 'a', and 'of', make prediction less accurate, since there are so many words that can normally follow them.
Improving Speech Recognition
• Initially, we were using a bigram, or a graph connecting every
two words.
• Expand that to a trigram.
• Each state represents two words spoken in succession.
• Each edge joins those two words (A B) to another state
representing (B C).
• Requires n³ vertices and edges, where n is the number of words in the language.
• Much better, but still limited context.
References
• CS 262 course at Stanford given by Serafim Batzoglou

Contenu connexe

Tendances

Normal forms cfg
Normal forms   cfgNormal forms   cfg
Normal forms cfgRajendran
 
Hidden Markov Models
Hidden Markov ModelsHidden Markov Models
Hidden Markov ModelsVu Pham
 
Speech to text conversion
Speech to text conversionSpeech to text conversion
Speech to text conversionankit_saluja
 
Automatic speech recognition system
Automatic speech recognition systemAutomatic speech recognition system
Automatic speech recognition systemAlok Tiwari
 
Hidden Markov Model & It's Application in Python
Hidden Markov Model & It's Application in PythonHidden Markov Model & It's Application in Python
Hidden Markov Model & It's Application in PythonAbhay Dodiya
 
Regular expression with DFA
Regular expression with DFARegular expression with DFA
Regular expression with DFAMaulik Togadiya
 
Deep Learning For Speech Recognition
Deep Learning For Speech RecognitionDeep Learning For Speech Recognition
Deep Learning For Speech Recognitionananth
 
Feed forward ,back propagation,gradient descent
Feed forward ,back propagation,gradient descentFeed forward ,back propagation,gradient descent
Feed forward ,back propagation,gradient descentMuhammad Rasel
 
Finite automata-for-lexical-analysis
Finite automata-for-lexical-analysisFinite automata-for-lexical-analysis
Finite automata-for-lexical-analysisDattatray Gandhmal
 
Speaker Diarization
Speaker DiarizationSpeaker Diarization
Speaker DiarizationHONGJOO LEE
 
Mealy moore machine model
Mealy moore machine modelMealy moore machine model
Mealy moore machine modeldeepinderbedi
 
Finite Automata in compiler design
Finite Automata in compiler designFinite Automata in compiler design
Finite Automata in compiler designRiazul Islam
 
Identification d'une empreinte vocale pour les Nuls
Identification d'une empreinte vocale pour les NulsIdentification d'une empreinte vocale pour les Nuls
Identification d'une empreinte vocale pour les NulsAmaury Crickx
 
ARTIFICIAL NEURAL NETWORKS
ARTIFICIAL NEURAL NETWORKSARTIFICIAL NEURAL NETWORKS
ARTIFICIAL NEURAL NETWORKSAIMS Education
 
Hidden Markov Model paper presentation
Hidden Markov Model paper presentationHidden Markov Model paper presentation
Hidden Markov Model paper presentationShiraz316
 

Tendances (20)

Introduction of slam
Introduction of slamIntroduction of slam
Introduction of slam
 
Normal forms cfg
Normal forms   cfgNormal forms   cfg
Normal forms cfg
 
Theory of computation Lec3 dfa
Theory of computation Lec3 dfaTheory of computation Lec3 dfa
Theory of computation Lec3 dfa
 
Hidden Markov Models
Hidden Markov ModelsHidden Markov Models
Hidden Markov Models
 
Speech to text conversion
Speech to text conversionSpeech to text conversion
Speech to text conversion
 
Automatic speech recognition system
Automatic speech recognition systemAutomatic speech recognition system
Automatic speech recognition system
 
Hidden Markov Model & It's Application in Python
Hidden Markov Model & It's Application in PythonHidden Markov Model & It's Application in Python
Hidden Markov Model & It's Application in Python
 
Recognition-of-tokens
Recognition-of-tokensRecognition-of-tokens
Recognition-of-tokens
 
Regular expression with DFA
Regular expression with DFARegular expression with DFA
Regular expression with DFA
 
Deep Learning For Speech Recognition
Deep Learning For Speech RecognitionDeep Learning For Speech Recognition
Deep Learning For Speech Recognition
 
Feed forward ,back propagation,gradient descent
Feed forward ,back propagation,gradient descentFeed forward ,back propagation,gradient descent
Feed forward ,back propagation,gradient descent
 
Finite automata-for-lexical-analysis
Finite automata-for-lexical-analysisFinite automata-for-lexical-analysis
Finite automata-for-lexical-analysis
 
Speaker Diarization
Speaker DiarizationSpeaker Diarization
Speaker Diarization
 
Mealy moore machine model
Mealy moore machine modelMealy moore machine model
Mealy moore machine model
 
Finite Automata in compiler design
Finite Automata in compiler designFinite Automata in compiler design
Finite Automata in compiler design
 
Identification d'une empreinte vocale pour les Nuls
Identification d'une empreinte vocale pour les NulsIdentification d'une empreinte vocale pour les Nuls
Identification d'une empreinte vocale pour les Nuls
 
ARTIFICIAL NEURAL NETWORKS
ARTIFICIAL NEURAL NETWORKSARTIFICIAL NEURAL NETWORKS
ARTIFICIAL NEURAL NETWORKS
 
Finite Automata
Finite AutomataFinite Automata
Finite Automata
 
Hidden Markov Model paper presentation
Hidden Markov Model paper presentationHidden Markov Model paper presentation
Hidden Markov Model paper presentation
 
GMM
GMMGMM
GMM
 

Similaire à Ch11 hmm

Similaire à Ch11 hmm (20)

Polygon Fill
Polygon FillPolygon Fill
Polygon Fill
 
Week6.ppt
Week6.pptWeek6.ppt
Week6.ppt
 
LFSR Lecture
LFSR LectureLFSR Lecture
LFSR Lecture
 
Clipping & Rasterization
Clipping & RasterizationClipping & Rasterization
Clipping & Rasterization
 
Probabilistic systems assignment help
Probabilistic systems assignment helpProbabilistic systems assignment help
Probabilistic systems assignment help
 
ECC_basics.ppt
ECC_basics.pptECC_basics.ppt
ECC_basics.ppt
 
Lecture filling algorithms
Lecture  filling algorithmsLecture  filling algorithms
Lecture filling algorithms
 
ECC_basics.ppt
ECC_basics.pptECC_basics.ppt
ECC_basics.ppt
 
Probabilistic systems exam help
Probabilistic systems exam helpProbabilistic systems exam help
Probabilistic systems exam help
 
Mit6 094 iap10_lec03
Mit6 094 iap10_lec03Mit6 094 iap10_lec03
Mit6 094 iap10_lec03
 
1542 inner products
1542 inner products1542 inner products
1542 inner products
 
Elliptical curve cryptography
Elliptical curve cryptographyElliptical curve cryptography
Elliptical curve cryptography
 
streamingalgo88585858585858585pppppp.pptx
streamingalgo88585858585858585pppppp.pptxstreamingalgo88585858585858585pppppp.pptx
streamingalgo88585858585858585pppppp.pptx
 
Boundary fill algm
Boundary fill algmBoundary fill algm
Boundary fill algm
 
QC-UNIT 2.ppt
QC-UNIT 2.pptQC-UNIT 2.ppt
QC-UNIT 2.ppt
 
Prob-Dist-Toll-Forecast-Uncertainty
Prob-Dist-Toll-Forecast-UncertaintyProb-Dist-Toll-Forecast-Uncertainty
Prob-Dist-Toll-Forecast-Uncertainty
 
Mit6 094 iap10_lec02
Mit6 094 iap10_lec02Mit6 094 iap10_lec02
Mit6 094 iap10_lec02
 
HMM DAY-3.ppt
HMM DAY-3.pptHMM DAY-3.ppt
HMM DAY-3.ppt
 
UNIT2.pptx
UNIT2.pptxUNIT2.pptx
UNIT2.pptx
 
Module 1 ppt class.pptx
Module 1 ppt class.pptxModule 1 ppt class.pptx
Module 1 ppt class.pptx
 

Plus de BioinformaticsInstitute

Comparative Genomics and de Bruijn graphs
Comparative Genomics and de Bruijn graphsComparative Genomics and de Bruijn graphs
Comparative Genomics and de Bruijn graphsBioinformaticsInstitute
 
Биоинформатический анализ данных полноэкзомного секвенирования: анализ качес...
 Биоинформатический анализ данных полноэкзомного секвенирования: анализ качес... Биоинформатический анализ данных полноэкзомного секвенирования: анализ качес...
Биоинформатический анализ данных полноэкзомного секвенирования: анализ качес...BioinformaticsInstitute
 
Вперед в прошлое. Методы генетической диагностики древней днк
Вперед в прошлое. Методы генетической диагностики древней днкВперед в прошлое. Методы генетической диагностики древней днк
Вперед в прошлое. Методы генетической диагностики древней днкBioinformaticsInstitute
 
"Зачем биологам суперкомпьютеры", Александр Предеус
"Зачем биологам суперкомпьютеры", Александр Предеус"Зачем биологам суперкомпьютеры", Александр Предеус
"Зачем биологам суперкомпьютеры", Александр ПредеусBioinformaticsInstitute
 
Иммунотерапия раковых опухолей: взгляд со стороны системной биологии. Максим ...
Иммунотерапия раковых опухолей: взгляд со стороны системной биологии. Максим ...Иммунотерапия раковых опухолей: взгляд со стороны системной биологии. Максим ...
Иммунотерапия раковых опухолей: взгляд со стороны системной биологии. Максим ...BioinformaticsInstitute
 
Рак 101 (Мария Шутова, ИоГЕН РАН)
Рак 101 (Мария Шутова, ИоГЕН РАН)Рак 101 (Мария Шутова, ИоГЕН РАН)
Рак 101 (Мария Шутова, ИоГЕН РАН)BioinformaticsInstitute
 
Секвенирование как инструмент исследования сложных фенотипов человека: от ген...
Секвенирование как инструмент исследования сложных фенотипов человека: от ген...Секвенирование как инструмент исследования сложных фенотипов человека: от ген...
Секвенирование как инструмент исследования сложных фенотипов человека: от ген...BioinformaticsInstitute
 
Инвестиции в биоинформатику и биотех (Андрей Афанасьев)
Инвестиции в биоинформатику и биотех (Андрей Афанасьев)Инвестиции в биоинформатику и биотех (Андрей Афанасьев)
Инвестиции в биоинформатику и биотех (Андрей Афанасьев)BioinformaticsInstitute
 

Plus de BioinformaticsInstitute (20)

Graph genome
Graph genome Graph genome
Graph genome
 
Nanopores sequencing
Nanopores sequencingNanopores sequencing
Nanopores sequencing
 
A superglue for string comparison
A superglue for string comparisonA superglue for string comparison
A superglue for string comparison
 
Comparative Genomics and de Bruijn graphs
Comparative Genomics and de Bruijn graphsComparative Genomics and de Bruijn graphs
Comparative Genomics and de Bruijn graphs
 
Биоинформатический анализ данных полноэкзомного секвенирования: анализ качес...
 Биоинформатический анализ данных полноэкзомного секвенирования: анализ качес... Биоинформатический анализ данных полноэкзомного секвенирования: анализ качес...
Биоинформатический анализ данных полноэкзомного секвенирования: анализ качес...
 
Вперед в прошлое. Методы генетической диагностики древней днк
Вперед в прошлое. Методы генетической диагностики древней днкВперед в прошлое. Методы генетической диагностики древней днк
Вперед в прошлое. Методы генетической диагностики древней днк
 
Knime &amp; bioinformatics
Knime &amp; bioinformaticsKnime &amp; bioinformatics
Knime &amp; bioinformatics
 
"Зачем биологам суперкомпьютеры", Александр Предеус
"Зачем биологам суперкомпьютеры", Александр Предеус"Зачем биологам суперкомпьютеры", Александр Предеус
"Зачем биологам суперкомпьютеры", Александр Предеус
 
Иммунотерапия раковых опухолей: взгляд со стороны системной биологии. Максим ...
Иммунотерапия раковых опухолей: взгляд со стороны системной биологии. Максим ...Иммунотерапия раковых опухолей: взгляд со стороны системной биологии. Максим ...
Иммунотерапия раковых опухолей: взгляд со стороны системной биологии. Максим ...
 
Рак 101 (Мария Шутова, ИоГЕН РАН)
Рак 101 (Мария Шутова, ИоГЕН РАН)Рак 101 (Мария Шутова, ИоГЕН РАН)
Рак 101 (Мария Шутова, ИоГЕН РАН)
 
Плюрипотентность 101
Плюрипотентность 101Плюрипотентность 101
Плюрипотентность 101
 
Секвенирование как инструмент исследования сложных фенотипов человека: от ген...
Секвенирование как инструмент исследования сложных фенотипов человека: от ген...Секвенирование как инструмент исследования сложных фенотипов человека: от ген...
Секвенирование как инструмент исследования сложных фенотипов человека: от ген...
 
Инвестиции в биоинформатику и биотех (Андрей Афанасьев)
Инвестиции в биоинформатику и биотех (Андрей Афанасьев)Инвестиции в биоинформатику и биотех (Андрей Афанасьев)
Инвестиции в биоинформатику и биотех (Андрей Афанасьев)
 
Biodb 2011-everything
Biodb 2011-everythingBiodb 2011-everything
Biodb 2011-everything
 
Biodb 2011-05
Biodb 2011-05Biodb 2011-05
Biodb 2011-05
 
Biodb 2011-04
Biodb 2011-04Biodb 2011-04
Biodb 2011-04
 
Biodb 2011-03
Biodb 2011-03Biodb 2011-03
Biodb 2011-03
 
Biodb 2011-01
Biodb 2011-01Biodb 2011-01
Biodb 2011-01
 
Biodb 2011-02
Biodb 2011-02Biodb 2011-02
Biodb 2011-02
 
Ngs 3 1
Ngs 3 1Ngs 3 1
Ngs 3 1
 

Dernier

How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.Curtis Poe
 
Time Series Foundation Models - current state and future directions
Time Series Foundation Models - current state and future directionsTime Series Foundation Models - current state and future directions
Time Series Foundation Models - current state and future directionsNathaniel Shimoni
 
Enhancing User Experience - Exploring the Latest Features of Tallyman Axis Lo...
Enhancing User Experience - Exploring the Latest Features of Tallyman Axis Lo...Enhancing User Experience - Exploring the Latest Features of Tallyman Axis Lo...
Enhancing User Experience - Exploring the Latest Features of Tallyman Axis Lo...Scott Andery
 
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024BookNet Canada
 
UiPath Community: Communication Mining from Zero to Hero
UiPath Community: Communication Mining from Zero to HeroUiPath Community: Communication Mining from Zero to Hero
UiPath Community: Communication Mining from Zero to HeroUiPathCommunity
 
DevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsDevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsSergiu Bodiu
 
Digital Identity is Under Attack: FIDO Paris Seminar.pptx
Digital Identity is Under Attack: FIDO Paris Seminar.pptxDigital Identity is Under Attack: FIDO Paris Seminar.pptx
Digital Identity is Under Attack: FIDO Paris Seminar.pptxLoriGlavin3
 
A Framework for Development in the AI Age
A Framework for Development in the AI AgeA Framework for Development in the AI Age
A Framework for Development in the AI AgeCprime
 
Manual 508 Accessibility Compliance Audit
Manual 508 Accessibility Compliance AuditManual 508 Accessibility Compliance Audit
Manual 508 Accessibility Compliance AuditSkynet Technologies
 
Sample pptx for embedding into website for demo
Sample pptx for embedding into website for demoSample pptx for embedding into website for demo
Sample pptx for embedding into website for demoHarshalMandlekar2
 
Decarbonising Buildings: Making a net-zero built environment a reality
Decarbonising Buildings: Making a net-zero built environment a realityDecarbonising Buildings: Making a net-zero built environment a reality
Decarbonising Buildings: Making a net-zero built environment a realityIES VE
 
How to Effectively Monitor SD-WAN and SASE Environments with ThousandEyes
How to Effectively Monitor SD-WAN and SASE Environments with ThousandEyesHow to Effectively Monitor SD-WAN and SASE Environments with ThousandEyes
How to Effectively Monitor SD-WAN and SASE Environments with ThousandEyesThousandEyes
 
How to write a Business Continuity Plan
How to write a Business Continuity PlanHow to write a Business Continuity Plan
How to write a Business Continuity PlanDatabarracks
 
Potential of AI (Generative AI) in Business: Learnings and Insights
Potential of AI (Generative AI) in Business: Learnings and InsightsPotential of AI (Generative AI) in Business: Learnings and Insights
Potential of AI (Generative AI) in Business: Learnings and InsightsRavi Sanghani
 
Generative Artificial Intelligence: How generative AI works.pdf
Generative Artificial Intelligence: How generative AI works.pdfGenerative Artificial Intelligence: How generative AI works.pdf
Generative Artificial Intelligence: How generative AI works.pdfIngrid Airi González
 
Arizona Broadband Policy Past, Present, and Future Presentation 3/25/24
Arizona Broadband Policy Past, Present, and Future Presentation 3/25/24Arizona Broadband Policy Past, Present, and Future Presentation 3/25/24
Arizona Broadband Policy Past, Present, and Future Presentation 3/25/24Mark Goldstein
 
Rise of the Machines: Known As Drones...
Rise of the Machines: Known As Drones...Rise of the Machines: Known As Drones...
Rise of the Machines: Known As Drones...Rick Flair
 
The Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and ConsThe Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and ConsPixlogix Infotech
 
What is DBT - The Ultimate Data Build Tool.pdf
What is DBT - The Ultimate Data Build Tool.pdfWhat is DBT - The Ultimate Data Build Tool.pdf
What is DBT - The Ultimate Data Build Tool.pdfMounikaPolabathina
 
Take control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test SuiteTake control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test SuiteDianaGray10
 

Dernier (20)

How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.
 
Time Series Foundation Models - current state and future directions
Time Series Foundation Models - current state and future directionsTime Series Foundation Models - current state and future directions
Time Series Foundation Models - current state and future directions
 
Enhancing User Experience - Exploring the Latest Features of Tallyman Axis Lo...
Enhancing User Experience - Exploring the Latest Features of Tallyman Axis Lo...Enhancing User Experience - Exploring the Latest Features of Tallyman Axis Lo...
Enhancing User Experience - Exploring the Latest Features of Tallyman Axis Lo...
 
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
 
UiPath Community: Communication Mining from Zero to Hero
UiPath Community: Communication Mining from Zero to HeroUiPath Community: Communication Mining from Zero to Hero
UiPath Community: Communication Mining from Zero to Hero
 
DevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsDevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platforms
 
Digital Identity is Under Attack: FIDO Paris Seminar.pptx
Digital Identity is Under Attack: FIDO Paris Seminar.pptxDigital Identity is Under Attack: FIDO Paris Seminar.pptx
Digital Identity is Under Attack: FIDO Paris Seminar.pptx
 
A Framework for Development in the AI Age
A Framework for Development in the AI AgeA Framework for Development in the AI Age
A Framework for Development in the AI Age
 
Manual 508 Accessibility Compliance Audit
Manual 508 Accessibility Compliance AuditManual 508 Accessibility Compliance Audit
Manual 508 Accessibility Compliance Audit
 
Sample pptx for embedding into website for demo
Sample pptx for embedding into website for demoSample pptx for embedding into website for demo
Sample pptx for embedding into website for demo
 
Decarbonising Buildings: Making a net-zero built environment a reality
Decarbonising Buildings: Making a net-zero built environment a realityDecarbonising Buildings: Making a net-zero built environment a reality
Decarbonising Buildings: Making a net-zero built environment a reality
 
How to Effectively Monitor SD-WAN and SASE Environments with ThousandEyes
How to Effectively Monitor SD-WAN and SASE Environments with ThousandEyesHow to Effectively Monitor SD-WAN and SASE Environments with ThousandEyes
How to Effectively Monitor SD-WAN and SASE Environments with ThousandEyes
 
How to write a Business Continuity Plan
How to write a Business Continuity PlanHow to write a Business Continuity Plan
How to write a Business Continuity Plan
 
Potential of AI (Generative AI) in Business: Learnings and Insights
Potential of AI (Generative AI) in Business: Learnings and InsightsPotential of AI (Generative AI) in Business: Learnings and Insights
Potential of AI (Generative AI) in Business: Learnings and Insights
 
Generative Artificial Intelligence: How generative AI works.pdf
Generative Artificial Intelligence: How generative AI works.pdfGenerative Artificial Intelligence: How generative AI works.pdf
Generative Artificial Intelligence: How generative AI works.pdf
 
Arizona Broadband Policy Past, Present, and Future Presentation 3/25/24
Arizona Broadband Policy Past, Present, and Future Presentation 3/25/24Arizona Broadband Policy Past, Present, and Future Presentation 3/25/24
Arizona Broadband Policy Past, Present, and Future Presentation 3/25/24
 
Rise of the Machines: Known As Drones...
Rise of the Machines: Known As Drones...Rise of the Machines: Known As Drones...
Rise of the Machines: Known As Drones...
 
The Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and ConsThe Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and Cons
 
What is DBT - The Ultimate Data Build Tool.pdf
What is DBT - The Ultimate Data Build Tool.pdfWhat is DBT - The Ultimate Data Build Tool.pdf
What is DBT - The Ultimate Data Build Tool.pdf
 
Take control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test SuiteTake control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test Suite
 

Ch11 hmm

  • 2. Outline 1. CG-Islands 2. The “Fair Bet Casino” 3. Hidden Markov Model 4. Decoding Algorithm 5. Forward-Backward Algorithm 6. Profile HMMs 7. HMM Parameter Estimation 8. Viterbi Training 9. Baum-Welch Algorithm
  • 3. Outline - CHANGE • The “Fair Bet Casino” – improve graphics in “HMM for Fair Bet Casino (con‟d)” • Decoding Algorithm – SHOW the two-row graph for casino problem • Forward-Backward Algorithm – SHOW the similarity in dynamic programming equation between Viterbi and forward- backward algorithm • HMM Parameter Estimation – explain the idea of Baum- Welch • Profile HMM Alignment SHOW “Profile HMM” slide more slowly – show M states first and add I and D slides later on – SHOW an alignment in terms of M, I, D states • it is not clear how p(xi) term appears after / in “Profile HMM alignment: Dynamic Programming” – MAKE a dynamic
  • 5. CG-Islands • Given 4 nucleotides: probability of any one‟s occurrence is ~ 1/4. • Thus, probability of occurrence of a given dinucleotide (pair of successive nucleotides is ~ 1/16. • However, the frequencies of dinucleotides in DNA sequences vary widely. • In particular, CG is typically underepresented (frequency of CG is typically < 1/16)
  • 6. CG-Islands • CG is the least frequent dinucleotide because the C in CG is easily methylated and has the tendency to mutate into T afterwards. • However, methylation is suppressed around genes in a genome. So, CG appears at relatively high frequency within these CG-islands. • So, finding the CG-islands in a genome is an important biological problem.
  • 7. Section 2: The Fair Bet Casino
  • 8. The “Fair Bet Casino” • The CG-islands problem can be modeled after a problem named The Fair Bet Casino. • The game is to flip two coins, which results in only two possible outcomes: Head (H) or Tail(T). • The Fair coin (F) will give H and T each with probability ½, which is written P(H | F) = P(T | F) = ½. • The Biased coin (B) will give H with probability ¾, which we write as P(H | B) = ¾, P(T | B) = ¼. • The crooked dealer changes between F and B coins with probability 10%. • How can we tell when the dealer is using F and when he is using B?
  • 9. The Fair Bet Casino Problem • Input: A sequence x = x1x2x3…xn of coin tosses made by two possible coins (F or B). • Output: A sequence π = π1 π2 π3… πn, with each πi being either F or B and indicating that xi is the result of tossing the Fair or Biased coin respectively.
  • 10. Problem… • Any observed outcome of coin tosses could have been generated by any sequence of states! • Example: HHHHHHHHHH could be generated by BBBBBBBBBB, FFFFFFFFFF, FBFBFBFBFB, etc. • We need to incorporate a way to grade different sequences differently. • This provides us with the decoding problem.
  • 11. Simple Case: The Dealer Never Switches Coins • We assume first that the dealer never changes coins: • P(x | F): probability of the dealer using F and generating the outcome x. • P(x | B): probability of the dealer using the B coin and generating outcome x. • Example: Say that in x we observe k heads and n – k tails: P x F  1 2       1 2       n i1 n  P ( x B)  3 4       k 1 4       n  k  3 k 4 n
  • 12. When Does P(x | F) = P(x | B)? P(x F)  P(x B) 1 2 n  3 k 4 n 2 n  3 k n  log 2 3 k n  k log 2 3  k  n log 2 3
  • 13. Log-odds Ratio • We define the log-odds ratio (L) as follows: • From the previous slide, if L > 0 we have reason to believe that the coin is fair, and if L < 0 we think the coin is biased.   L  log 2 P x F  P x B         log 2 1 2 n       log 2 3 k 4 n        n  log 2 3 k   log 2 4 n    n  k log 2 3  2n  n  k log 2 3 
  • 14. • Consider a sliding window of the outcome sequence and find the log-odds ratio for this short window. x1x2x3x4x5x6x7x8…xn Computing Log-odds Ratio in Sliding Windows Log-odds value 0 Fair coin most likely used Biased coin most likely used • Key Disadvantages: • The length of the CG-island is not known in advance. • Different windows may classify the same position differently.
  • 15. • Consider a sliding window of the outcome sequence and find the log-odds ratio for this short window. x1x2x3x4x5x6x7x8…xn Computing Log-odds Ratio in Sliding Windows Log-odds value 0 Fair coin most likely used Biased coin most likely used • Key Disadvantages: • The length of the CG-island is not known in advance. • Different windows may classify the same position differently.
  • 16. • Consider a sliding window of the outcome sequence and find the log-odds ratio for this short window. x1x2x3x4x5x6x7x8…xn Computing Log-odds Ratio in Sliding Windows Log-odds value 0 Fair coin most likely used Biased coin most likely used • Key Disadvantages: • The length of the CG-island is not known in advance. • Different windows may classify the same position differently.
  • 17. • Consider a sliding window of the outcome sequence and find the log-odds ratio for this short window. x1x2x3x4x5x6x7x8…xn Computing Log-odds Ratio in Sliding Windows Log-odds value 0 Fair coin most likely used Biased coin most likely used • Key Disadvantages: • The length of the CG-island is not known in advance. • Different windows may classify the same position differently.
  • 18. • Consider a sliding window of the outcome sequence and find the log-odds ratio for this short window. x1x2x3x4x5x6x7x8…xn Computing Log-odds Ratio in Sliding Windows Log-odds value 0 Fair coin most likely used Biased coin most likely used • Key Disadvantages: • The length of the CG-island is not known in advance. • Different windows may classify the same position differently.
  • 20. Hidden Markov Model (HMM) • Can be viewed as an abstract machine with k hidden states that emits symbols from an alphabet Σ. • Each state has its own probability distribution, and the machine switches between states and chooses characters according to this probability distribution. • While in a certain state, the machine makes two decisions: 1. What state should I move to next? 2. What symbol - from the alphabet Σ - should I emit?
  • 21. Why “Hidden”? • Observers can see the emitted symbols of an HMM but have no ability to know which state the HMM is currently in. • The goal is to infer the most likely hidden states of an HMM based on the given sequence of emitted symbols.
  • 22. HMM Parameters • Σ: set of emission characters. • Q: set of hidden states, each emitting symbols from Σ. • A = (akl): a |Q| x |Q| matrix containing the probabilities of changing from state k to state l. • E = (ek(b)): a |Q| x |Σ| matrix of probability of emitting symbol b while being in state k.
  • 23. HMM Parameters • A = (akl): a |Q| x |Q| matrix containing the probabilities of changing from state k to state l. • aFF = 0.9 aFB = 0.1 • aBF = 0.1 aBB = 0.9 • E = (ek(b)): a |Q| x |Σ| matrix of probability of emitting symbol b while being in state k. • eF(0) = ½ eF(1) = ½ • eB(0) = ¼ eB(1) = ¾
  • 24. HMM for the Fair Bet Casino Fair Biased Fair aFF = 0.9 aFB = 0.1 Biased aBF = 0.1 aBB = 0.9 Tails(0) Heads(1) Fair eF(0) = ½ eF(1) = ½ Biased eB(0) = ¼ eB(1) = ¾ • The Fair Bet Casino in HMM terms: • Σ = {0, 1} (0 for T and 1 for H) • Q = {F, B} Transition Probabilities (A) Emission Probabilities (E)
  • 25. HMM for the Fair Bet Casino • HMM model for the Fair Bet Casino Problem:
  • 26. Hidden Paths x 0 1 0 1 1 1 0 1 0 0 1 π = F F F B B B B B F F F P(xi|πi) P(πi-1  πi) Transition probability from state πi-1 to state πi Probability that xi was emitted from state πi • A path π = π1… πn in the HMM is defined as a sequence of states. • Consider path π = FFFBBBBBFFF and sequence x = 01011101001
  • 27. Hidden Paths x 0 1 0 1 1 1 0 1 0 0 1 π = F F F B B B B B F F F P(xi|πi) ½ ½ ½ ¾ ¾ ¾ ¼ ¾ ½ ½ ½ P(πi-1  πi) Transition probability from state πi-1 to state πi Probability that xi was emitted from state πi • A path π = π1… πn in the HMM is defined as a sequence of states. • Consider path π = FFFBBBBBFFF and sequence x = 01011101001
  • 28. Hidden Paths x 0 1 0 1 1 1 0 1 0 0 1 π = F F F B B B B B F F F P(xi|πi) ½ ½ ½ ¾ ¾ ¾ ¼ ¾ ½ ½ ½ P(πi-1  πi) ½ 9/10 9/10 1/10 9/10 9/10 9/10 9/10 1/10 9/10 9/10 Transition probability from state πi-1 to state πi Probability that xi was emitted from state πi • A path π = π1… πn in the HMM is defined as a sequence of states. • Consider path π = FFFBBBBBFFF and sequence x = 01011101001
  • 29. P(x | π) Calculation • P(x | π): Probability that sequence x was generated if we know that we have the path π. P x   P 0  1   P xi   i1 n   P  i  i1   a 0 ,  1 e i x   a i ,  i 1 i1 n 
  • 30. P(x | π) Calculation • P(x | π): Probability that sequence x was generated if we know that we have the path π. P x   P 0  1   P xi   i1 n   P  i  i1   a 0 ,  1 e i x   a i ,  i 1 i1 n   e i x   a i ,  i 1 i 0 n 
  • 32. Decoding Problem • Goal: Find an optimal hidden path of states given observations. • Input: Sequence of observations x = x1…xn generated by an HMM M(Σ, Q, A, E). • Output: A path that maximizes P(x | π) over all possible paths π.
  • 33. Building Manhattan for Decoding Problem • Andrew Viterbi used the Manhattan edit graph model to solve the Decoding Problem. • Vertices are composed of n “levels” with |Q| vertices in each level; each vertex represents a different state. • We connect each vertex in level i to each vertex in level i + 1 via a directed edge, giving |Q|²(n – 1) edges. • Therefore every choice of π = π1… πn corresponds to a path in the graph.
  • 34. Edit Graph for Decoding Problem: Example
  • 35. Decoding Problem vs. Alignment Problem Valid Directions in Alignment Valid Directions in Decoding
  • 36. Decoding Problem • Every path in the graph has the probability P(x | π). • The Viterbi algorithm finds the path that maximizes P(x | π) among all possible paths. • The Viterbi algorithm runs in O(n|Q|²) time.
  • 37. Decoding Problem: Weights of Edges • Every edge in the graph connects some vertex (k, i) to a vertex (l, i + 1). What weight w should be assigned to this edge?
  • 38. Decoding Problem: Weights of Edges • Recall that P(x | π) = Π(i = 0 to n – 1) [ eπi+1(xi+1) · aπi, πi+1 ]
  • 39. Decoding Problem: Weights of Edges • The i-th term of this product is eπi+1(xi+1) · aπi, πi+1, which depends only on the states πi and πi+1.
  • 40. Decoding Problem: Weights of Edges • Therefore the weight of the edge from (k, i) to (l, i + 1) is w = el(xi+1) · ak, l
  • 41. Decoding Problem and Dynamic Programming • sl, i+1 = max probability of all paths of length i + 1 ending in state l (for the first i + 1 observations). • Recursion: sl, i+1 = max over k in Q { sk, i · (weight of edge between (k, i) and (l, i + 1)) } = max over k in Q { sk, i · ak, l · el(xi+1) } = el(xi+1) · max over k in Q { sk, i · ak, l }
  • 42. Decoding Problem and Dynamic Programming • The value of the product can become extremely small, which leads to underflow: a computer has only finite precision, and if a number is too small it can no longer be represented. • To avoid underflow, take logarithms and work with sums instead of products: sl, i+1 = log el(xi+1) + max over k in Q { sk, i + log ak, l } (with sk, i now storing log-probabilities)
  • 43. Decoding Problem and Dynamic Programming • Initialization: sbegin, 0 = 1 and sk, 0 = 0 for k ≠ begin. • Let π* be the optimal path. Then P(x | π*) = max over k in Q { sk, n · ak, end }
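A compact sketch of the resulting algorithm in log space (not the slides' exact pseudocode): the begin state is replaced here by a ½/½ initial distribution and no explicit end state is used, so the termination step simply takes the best score in the last column. All names are illustrative.

```python
import math

def viterbi(x, states, A, E, initial):
    """Log-space Viterbi: returns the most probable state path for x."""
    n = len(x)
    score = [{} for _ in range(n)]   # score[i][k] = best log-prob of a path ending in k at i
    back  = [{} for _ in range(n)]   # backpointers
    for k in states:                                   # initialization (i = 0)
        score[0][k] = math.log(initial[k]) + math.log(E[k][x[0]])
    for i in range(1, n):                              # recursion
        for l in states:
            best_k = max(states, key=lambda k: score[i - 1][k] + math.log(A[k][l]))
            score[i][l] = math.log(E[l][x[i]]) + score[i - 1][best_k] + math.log(A[best_k][l])
            back[i][l] = best_k
    last = max(states, key=lambda k: score[n - 1][k])  # termination (no explicit end state)
    path = [last]
    for i in range(n - 1, 0, -1):                      # traceback
        path.append(back[i][path[-1]])
    return list(reversed(path))

A = {"F": {"F": 0.9, "B": 0.1}, "B": {"F": 0.1, "B": 0.9}}
E = {"F": {0: 0.5, 1: 0.5}, "B": {0: 0.25, 1: 0.75}}
x = [0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1]
print("".join(viterbi(x, ["F", "B"], A, E, initial={"F": 0.5, "B": 0.5})))
```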
  • 45. Forward-Backward Problem • Given: a sequence of coin tosses generated by an HMM. • Goal: Find the probability that the dealer was using a biased coin at a particular time.
  • 46. Forward Probability • Define fk, i (forward probability) as the probability of emitting the prefix x1…xi and reaching the state πi = k. • The recurrence for the forward algorithm: fk, i = ek(xi) · Σ over l in Q { fl, i-1 · al, k }
  • 47. Backward Probability • However, the forward probability is not the only factor affecting P(πi = k | x). • The sequence of transitions and emissions that the HMM undergoes between πi+1 and πn also affects P(πi = k | x). • Define the backward probability bk, i as the probability of emitting the suffix xi+1…xn given that πi = k. • Recurrence: bk, i = Σ over l in Q { el(xi+1) · bl, i+1 · ak, l }
  • 48. Backward-Forward Probability • The probability that the HMM is in state k at moment i, given that we observe the output x, is therefore influenced by both the forward and backward probabilities. • Using the definition of conditional probability: P(πi = k | x) = P(x, πi = k) / P(x) = fk, i · bk, i / P(x)
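A sketch combining the two recurrences to answer the Fair Bet Casino question (the probability that the biased coin was used at each position). For brevity it works with raw probabilities rather than logarithms or scaling, which is fine for short sequences but would underflow on long ones; the uniform initial distribution and the absence of an end state are assumptions of the sketch.

```python
def forward_backward_posterior(x, states, A, E, initial):
    """Posterior P(pi_i = k | x) = f_{k,i} * b_{k,i} / P(x) for every position i."""
    n = len(x)
    f = [{} for _ in range(n)]
    b = [{} for _ in range(n)]
    for k in states:                       # forward initialization
        f[0][k] = initial[k] * E[k][x[0]]
    for i in range(1, n):                  # f_{k,i} = e_k(x_i) * sum_l f_{l,i-1} * a_{l,k}
        for k in states:
            f[i][k] = E[k][x[i]] * sum(f[i - 1][l] * A[l][k] for l in states)
    for k in states:                       # backward initialization (no end state)
        b[n - 1][k] = 1.0
    for i in range(n - 2, -1, -1):         # b_{k,i} = sum_l e_l(x_{i+1}) * b_{l,i+1} * a_{k,l}
        for k in states:
            b[i][k] = sum(E[l][x[i + 1]] * b[i + 1][l] * A[k][l] for l in states)
    px = sum(f[n - 1][k] for k in states)  # P(x)
    return [{k: f[i][k] * b[i][k] / px for k in states} for i in range(n)]

A = {"F": {"F": 0.9, "B": 0.1}, "B": {"F": 0.1, "B": 0.9}}
E = {"F": {0: 0.5, 1: 0.5}, "B": {0: 0.25, 1: 0.75}}
x = [0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1]
for i, post in enumerate(forward_backward_posterior(x, ["F", "B"], A, E, {"F": 0.5, "B": 0.5})):
    print(i, round(post["B"], 3))          # probability the biased coin was used at position i
```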
  • 50. Finding Distant Members of a Protein Family • A distant cousin of functionally related sequences in a protein family may have weak pairwise similarities with each member of the family and thus fail a significance test. • However, they may have these weak similarities with many members of the family, indicating a correlation. • The goal is to align a sequence to all members of the family at once. • A family of related proteins can be represented by their multiple alignment and the corresponding profile.
  • 51. Profile Representation of Protein Families • Aligned DNA sequences can be represented by a 4 x n profile matrix reflecting the frequencies of nucleotides in every aligned position. • Example: • Similarly, a protein family can be represented by a 20 x n profile representing frequencies of amino acids.
  • 52. Protein Family Classification • Multiple alignment of a protein family shows variations in conservation along the length of a protein. • Example: After aligning many globin proteins, biologists recognized that the helix regions in globins are more conserved than other regions.
  • 53. • A profile HMM is a probabilistic representation of a multiple alignment. • A given multiple alignment (of a protein family) is used to build a profile HMM. • This model then may be used to find and score less obvious potential matches of new protein sequences. What Is a Profile HMM?
  • 54. Profile HMM • A profile HMM has three sets of states: • Match states: M1 ,…, Mn (plus begin/end states) • Insertion states: I0 , I1 ,…, In • Deletion states: D1 ,…, Dn
  • 55. Building a Profile HMM 1. A multiple alignment is used to construct the HMM model. 2. Assign each column to a Match state in the HMM; add Insertion and Deletion states. 3. Estimate the emission probabilities according to the amino acid counts in each column. Different positions in the protein will have different emission probabilities. 4. Estimate the transition probabilities between Match, Deletion and Insertion states.
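A minimal sketch of steps 2–3 for a toy alignment: columns with many gaps are treated as insert columns and skipped, the remaining columns become match states, and their emission probabilities are estimated from residue counts with a uniform pseudocount. The 50% gap threshold, the pseudocount value, and the toy alignment are illustrative choices, and transition estimation (step 4) is omitted.

```python
from collections import Counter

def match_emissions(alignment, gap_fraction=0.5, pseudocount=1.0):
    """Pick match columns (at most gap_fraction gaps) and estimate e_Mj(a)
    from the residue counts in each such column, with a uniform pseudocount."""
    alphabet = sorted({c for row in alignment for c in row if c != "-"})
    n_rows, n_cols = len(alignment), len(alignment[0])
    emissions = []
    for j in range(n_cols):
        column = [row[j] for row in alignment]
        if column.count("-") / n_rows > gap_fraction:
            continue                               # mostly gaps: treat as an insert column
        counts = Counter(c for c in column if c != "-")
        total = sum(counts.values()) + pseudocount * len(alphabet)
        emissions.append({a: (counts[a] + pseudocount) / total for a in alphabet})
    return emissions

toy_alignment = ["VGA-HAGEY",
                 "V----NVDE",
                 "VEA-DVAGH",
                 "VKG------",
                 "VYS-TYETS"]
for j, e in enumerate(match_emissions(toy_alignment), start=1):
    best = max(e, key=e.get)
    print("M%d: most likely residue %s (%.2f)" % (j, best, e[best]))
```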
  • 56. Transition Probabilities in a Profile HMM • Gap Initiation Penalty: the cost of beginning a gap, which requires transitions from a match state to an insertion state and back. • Penalty: log(aMI) + log(aIM) • Gap Extension Penalty: the cost of extending a gap, which corresponds to staying in the insertion state for one more step. • Penalty: log(aII)
  • 57. Emission Probabilities in a Profile HMM • Probability of emitting a symbol a at an insertion state Ij: eIj(a) = p(a) • Here p(a) is the frequency of occurrence of the symbol a in all the sequences.
  • 58. Profile HMM Alignment • Define vM j (i) as the logarithmic likelihood score of the best path for matching x1..xi to the profile HMM ending with xi emitted by the state Mj. • vI j (i) and vD j (i) are defined similarly.
  • 59. Profile HMM Alignment: Dynamic Programming • vMj(i) = log( eMj(xi) / p(xi) ) + max of { vMj-1(i – 1) + log(aMj-1, Mj) ; vIj-1(i – 1) + log(aIj-1, Mj) ; vDj-1(i – 1) + log(aDj-1, Mj) }
  • 60. Profile HMM Alignment: Dynamic Programming • vIj(i) = log( eIj(xi) / p(xi) ) + max of { vMj(i – 1) + log(aMj, Ij) ; vIj(i – 1) + log(aIj, Ij) ; vDj(i – 1) + log(aDj, Ij) }
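The slides give the recurrences for the match and insertion states; by the same pattern, the deletion-state recurrence would look as follows (a sketch, assuming the usual convention that deletion states are silent and therefore contribute no emission term): vDj(i) = max of { vMj-1(i) + log(aMj-1, Dj) ; vIj-1(i) + log(aIj-1, Dj) ; vDj-1(i) + log(aDj-1, Dj) }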
  • 61. Paths in Edit Graph and Profile HMM • At right is a path through an edit graph and the corresponding path through a profile HMM. • Observe: • Diagonal → match • Vertical → insertion • Horizontal → deletion
  • 62. 1. Use BLAST to separate a protein database into families of related proteins. 2. Construct a multiple alignment for each protein family. 3. Construct a profile HMM model and optimize the parameters of the model (transition and emission probabilities). 4. Align the target sequence against each HMM to find the best fit between a target sequence and an HMM. Making a Collection of HMM for Protein Families
  • 63. Profile HMMs and Modeling Globin Proteins • Globins represent a large collection of protein sequences. • 400 globin sequences were randomly selected from all globins and used to construct a multiple alignment. • This multiple alignment was used to build an HMM. • The 625 remaining globin sequences were aligned to the HMM, resulting in a multiple alignment that was in good agreement with the structurally derived alignment. • Other proteins were randomly chosen from the database and compared against the globin HMM. • This experiment resulted in an excellent separation between globin and non-globin families.
  • 64. PFAM • Pfam describes protein domains. • Each protein domain family in Pfam has: • Seed alignment: a manually verified multiple alignment of a representative set of sequences. • HMM: built from the seed alignment for further searches. • Full alignment: generated automatically from the HMM. • The distinction between seed and full alignments facilitates Pfam updates: • Seed alignments are stable resources. • HMM profiles and full alignments can be updated with newly found amino acid sequences.
  • 65. PFAM Uses • Pfam HMMs span entire domains that include both well-conserved motifs and less-conserved regions with insertions and deletions. • Modeling complete domains facilitates better sequence annotation and leads to more sensitive detection.
  • 67. HMM Parameter Estimation • So far, we have assumed that the transition and emission probabilities are known. • However, in most HMM applications the probabilities are not known and must be estimated from data, which is difficult.
  • 68. HMM Parameter Estimation Problem • Given: an HMM with states and alphabet (emission characters), as well as independent training sequences x1, …, xm. • Goal: Find HMM parameters Θ (that is, ak, l and ek(b)) that maximize the joint probability of the training sequences, P(x1, …, xm | Θ).
  • 69. Maximize the Likelihood • P(x1, …, xm | Θ) as a function of Θ is called the likelihood of the model. • The training sequences are assumed independent; therefore P(x1, …, xm | Θ) = Π(i = 1 to m) P(xi | Θ) • The parameter estimation problem seeks the Θ that realizes max over Θ of Π(i) P(xi | Θ) • In practice the log likelihood is computed to avoid underflow errors.
  • 70. Two Situations 1. Known paths for training sequences: • CG-islands are marked on the training sequences. • Casino analogue: one evening the dealer allows us to see when he changes coins. 2. Unknown paths for training sequences: • CG-islands are not marked. • We do not see when the casino dealer changes coins.
  • 71. Known Paths • Ak, l = # of times the transition k → l is taken in the training sequences. • Ek(b) = # of times b is emitted from state k in the training sequences. • Compute ak, l and ek(b) as maximum likelihood estimators: ak, l = Ak, l / Σ over l' { Ak, l' } and ek(b) = Ek(b) / Σ over b' { Ek(b') }
  • 72. Pseudocounts • Some state k may not appear in any of the training sequences. This means Ak, l = 0 for every state l, and ak, l cannot be computed with the given equation. • To avoid this overfitting, use predetermined pseudocounts rk, l and rk(b) which reflect prior biases about the probability values: • Ak, l = number of transitions k → l + rk, l • Ek(b) = number of emissions of b from k + rk(b)
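A small sketch of the known-path case, combining the counts with uniform pseudocounts as described above; the function and variable names are illustrative, not from the slides.

```python
def estimate_parameters(sequences, paths, states, alphabet, pseudo=1.0):
    """Maximum-likelihood estimates a_{k,l} and e_k(b) from training sequences
    with known state paths, using a uniform pseudocount to avoid zero counts."""
    A_counts = {k: {l: pseudo for l in states} for k in states}
    E_counts = {k: {b: pseudo for b in alphabet} for k in states}
    for x, pi in zip(sequences, paths):
        for i in range(len(x)):
            E_counts[pi[i]][x[i]] += 1                   # emission of x_i in state pi_i
            if i + 1 < len(pi):
                A_counts[pi[i]][pi[i + 1]] += 1          # transition pi_i -> pi_{i+1}
    A = {k: {l: A_counts[k][l] / sum(A_counts[k].values()) for l in states} for k in states}
    E = {k: {b: E_counts[k][b] / sum(E_counts[k].values()) for b in alphabet} for k in states}
    return A, E

seqs  = [[0, 1, 1, 1, 0, 0], [1, 1, 0, 1, 1, 1]]
paths = [list("FFBBFF"), list("BBBBBB")]
A, E = estimate_parameters(seqs, paths, ["F", "B"], [0, 1])
print(A["F"], E["B"])
```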
  • 74. Unknown Paths Method 1: Viterbi Training • Idea: use Viterbi decoding to compute the most probable path for each training sequence x. • Method: 1. Start with some guess for the initial parameters and compute π* = the most probable path for x under these parameters. 2. Iterate steps 3–5 until there is no change in π*: 3. Determine Ak, l and Ek(b) as before, using π* as if it were the true path. 4. Compute new parameters ak, l and ek(b) using the same formulas as before. 5. Compute a new π* for x under the current parameters.
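A sketch of this loop, reusing the `viterbi` and `estimate_parameters` functions from the earlier sketches (those helpers are from this document's illustrative examples, not from the slides); the stopping test on unchanged paths follows the method above.

```python
def viterbi_training(sequences, states, alphabet, A, E, initial, max_iter=50):
    """Alternate between decoding (most probable paths under current parameters)
    and re-estimating parameters from those paths, until the paths stop changing."""
    previous_paths = None
    for _ in range(max_iter):
        paths = [viterbi(x, states, A, E, initial) for x in sequences]   # steps 1 / 5
        if paths == previous_paths:                                      # step 2: converged
            break
        A, E = estimate_parameters(sequences, paths, states, alphabet)   # steps 3-4
        previous_paths = paths
    return A, E
```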
  • 75. Viterbi Training Analysis • The algorithm converges in a finite number of iterations: • There are finitely many possible paths. • New parameters are uniquely determined by the current π*. • There may be several paths for x with the same probability, so we must compare the new π* with all previous highest-probability paths. • Viterbi training does not maximize the likelihood Πx P(x | Θ) but rather the contribution to the likelihood of the most probable paths, Πx P(x | Θ, π*). • In general, it performs less well than Baum-Welch (below).
  • 77. • Idea: Guess initial values for parameters. • This is art and experience, not science. • We then estimate new (better) values for parameters. • How? • We repeat until stopping criterion is met. • What criterion? Unknown Paths Method 2: Baum-Welch
  • 78. • We would need the Ak,l and Ek(b) values, but the path is unknown, and we do not want to use a most probable path. • Therefore for all states k, l, symbols b, and training sequences x: • Compute Ak,l and Ek(b) as expected values, given the current parameters. Improved Parameters
  • 79. Probabilistic Setting for Ak, l • Given our training sequences x1, …, xm, consider a discrete probability space with elementary events εk, l = “k → l is taken in x1, …, xm.” • For each x in {x1, …, xm} and each position i in x, let Yx, i be a random variable defined by Yx, i(k, l) = 1 if πi = k and πi+1 = l, and 0 otherwise. • Define Y = Σx Σi Yx, i as the random variable which counts the number of times the event εk, l happens in x1, …, xm.
  • 80. The Meaning of Ak, l • Let Ak, l be the expectation of Y: Ak, l = E(Y) = Σx Σi E(Yx, i) = Σx Σi P(Yx, i = 1) = Σx Σi P(πi = k and πi+1 = l | x) • We therefore need to compute P(πi = k, πi+1 = l | x).
  • 81. Probabilistic Setting for Ek(b) • Given x1, …, xm, consider a discrete probability space with elementary events εk, b = “b is emitted in state k in x1, …, xm.” • For each x in {x1, …, xm} and each position i in x, let Yx, i be a random variable defined by Yx, i(k, b) = 1 if xi = b and πi = k, and 0 otherwise. • Define Y = Σx Σi Yx, i as the random variable which counts the number of times the event εk, b happens in x1, …, xm.
  • 82. Computing New Parameters • Consider a training sequence x = x1 … xn. • Concentrate on positions i and i + 1. • Use the forward and backward values: fk, i = P(x1 … xi, πi = k) and bk, i = P(xi+1 … xn | πi = k)
  • 83. Compute Ak, l (1) • The probability that the transition k → l is taken at position i of x: P(πi = k, πi+1 = l | x1 … xn) = P(x, πi = k, πi+1 = l) / P(x), where P(x, πi = k, πi+1 = l) = bl, i+1 · el(xi+1) · ak, l · fk, i • Compute P(x) using either forward or backward values. • Expected number of times the transition k → l is used in the training sequences: Ak, l = Σx Σi [ bl, i+1 · el(xi+1) · ak, l · fk, i / P(x) ]
  • 84. Compute Ak, l (2) • Derivation of the joint probability used above: P(x, πi = k, πi+1 = l) = P(x1 … xi, πi = k, πi+1 = l, xi+1 … xn) = P(πi+1 = l, xi+1 … xn | x1 … xi, πi = k) · P(x1 … xi, πi = k) = P(πi+1 = l, xi+1 … xn | πi = k) · fk, i = P(xi+1 … xn | πi = k, πi+1 = l) · P(πi+1 = l | πi = k) · fk, i = P(xi+1 … xn | πi+1 = l) · ak, l · fk, i = P(xi+2 … xn | xi+1, πi+1 = l) · P(xi+1 | πi+1 = l) · ak, l · fk, i = P(xi+2 … xn | πi+1 = l) · el(xi+1) · ak, l · fk, i = bl, i+1 · el(xi+1) · ak, l · fk, i
  • 85. Compute Ek(b) • Probability that position i of x is emitted in state k: P(πi = k | x1 … xn) = P(πi = k, x1 … xn) / P(x), where P(πi = k, x1 … xn) = P(x1 … xi, πi = k, xi+1 … xn) = P(xi+1 … xn | x1 … xi, πi = k) · P(x1 … xi, πi = k) = P(xi+1 … xn | πi = k) · fk, i = bk, i · fk, i • Expected number of times b is emitted in state k: Ek(b) = Σx Σ(i : xi = b) [ fk, i · bk, i / P(x) ]
  • 86. Finally, New Parameters • These expected counts allow us to calculate the new parameters ak, l and ek(b): ak, l = Ak, l / Σ over l' { Ak, l' } and ek(b) = Ek(b) / Σ over b' { Ek(b') } • We can then add pseudocounts as before.
  • 87. Stopping Criteria • We cannot actually reach the maximum (it is only approached in the limit, a general property of optimizing continuous functions), so we need stopping criteria. • Compute the log likelihood of the model for the current Θ: Σx log P(x | Θ) • Compare with the previous log likelihood and stop if the difference is small. • Also stop after a fixed maximum number of iterations to avoid an infinite loop.
  • 88. • Initialization: Pick the best-guess for model parameters (or arbitrary). • Iteration: 1. Forward for each x 2. Backward for each x 3. Calculate Ak, l , Ek(b) 4. Calculate new ak, l , ek(b) 5. Calculate new log-likelihood • Repeat until log-likelihood does not change much. The Baum-Welch Algorithm Summarized
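One possible way to put the whole iteration together, following the expected-count formulas from the previous slides. This sketch works with raw probabilities (no scaling or pseudocounts) and does not re-estimate the initial distribution, so it is only suitable for short sequences; all names are illustrative.

```python
import math

def baum_welch(sequences, states, alphabet, A, E, initial, max_iter=100, tol=1e-6):
    """Iterate: forward and backward passes per sequence, expected counts A_{k,l}
    and E_k(b), new parameters, and stop when the log-likelihood barely changes."""
    prev_ll = None
    for _ in range(max_iter):
        A_exp = {k: {l: 0.0 for l in states} for k in states}
        E_exp = {k: {b: 0.0 for b in alphabet} for k in states}
        log_likelihood = 0.0
        for x in sequences:
            n = len(x)
            f = [{} for _ in range(n)]            # forward values
            b = [{} for _ in range(n)]            # backward values
            for k in states:
                f[0][k] = initial[k] * E[k][x[0]]
            for i in range(1, n):
                for k in states:
                    f[i][k] = E[k][x[i]] * sum(f[i - 1][l] * A[l][k] for l in states)
            for k in states:
                b[n - 1][k] = 1.0
            for i in range(n - 2, -1, -1):
                for k in states:
                    b[i][k] = sum(E[l][x[i + 1]] * b[i + 1][l] * A[k][l] for l in states)
            px = sum(f[n - 1][k] for k in states)   # P(x)
            log_likelihood += math.log(px)
            for i in range(n):                      # expected emission counts
                for k in states:
                    E_exp[k][x[i]] += f[i][k] * b[i][k] / px
            for i in range(n - 1):                  # expected transition counts
                for k in states:
                    for l in states:
                        A_exp[k][l] += f[i][k] * A[k][l] * E[l][x[i + 1]] * b[i + 1][l] / px
        A = {k: {l: A_exp[k][l] / sum(A_exp[k].values()) for l in states} for k in states}
        E = {k: {s: E_exp[k][s] / sum(E_exp[k].values()) for s in alphabet} for k in states}
        if prev_ll is not None and abs(log_likelihood - prev_ll) < tol:
            break
        prev_ll = log_likelihood
    return A, E

x = [1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1]
A0 = {"F": {"F": 0.8, "B": 0.2}, "B": {"F": 0.2, "B": 0.8}}
E0 = {"F": {0: 0.5, 1: 0.5}, "B": {0: 0.3, 1: 0.7}}
print(baum_welch([x], ["F", "B"], [0, 1], A0, E0, {"F": 0.5, "B": 0.5}))
```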
  • 89. Baum-Welch Analysis • The log-likelihood increases with each iteration. • Baum-Welch is a particular case of the expectation maximization (EM) algorithm. • Convergence is to a local maximum; the choice of initial parameters determines which local maximum the algorithm converges to.
  • 90. Additional Application: Speech Recognition • Create an HMM of the words in a language. • Each word is a hidden state in Q. • Each of the basic sounds in the language is a symbol in Σ. • Input: Fragment of speech. • Goal: Find the most probable sequence of states.
  • 91. Speech Recognition: Building the Model • Analyze some large source of English sentences, such as a database of newspaper articles, to form probability matrices. • A0i: The chance that word i begins a sentence. • Aij: The chance that word j follows word i. • Analyze English speakers to determine what sounds are emitted with what words. • Ek(b): the chance that sound b is spoken in word k. Allows for alternate pronunciation of words.
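A toy sketch of how the A0i and Aij matrices could be estimated by counting sentence-initial words and word bigrams in a (tiny, illustrative) corpus; a real system would use smoothing and a much larger corpus, and the emission matrix Ek(b) would come from acoustic data rather than text.

```python
from collections import Counter, defaultdict

def bigram_model(sentences):
    """Estimate A_0i (sentence-initial word probabilities) and A_ij
    (probability that word j follows word i) from tokenized sentences."""
    start_counts = Counter()
    follow_counts = defaultdict(Counter)
    for sentence in sentences:
        words = sentence.lower().split()
        start_counts[words[0]] += 1
        for w1, w2 in zip(words, words[1:]):
            follow_counts[w1][w2] += 1
    total_starts = sum(start_counts.values())
    A0 = {w: c / total_starts for w, c in start_counts.items()}
    A = {w1: {w2: c / sum(nexts.values()) for w2, c in nexts.items()}
         for w1, nexts in follow_counts.items()}
    return A0, A

A0, A = bigram_model(["the coin is fair", "the dealer uses the biased coin"])
print(A0["the"], A["the"])
```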
  • 92. Speech Recognition: Using the Model • Use the same dynamic programming algorithm as before. • Weave the spoken sounds through the model the same way we wove the coin tosses through the casino model. • π will therefore represent the most likely sequence of words.
  • 93. Using the Model • How well does the model work? • Common words such as „the‟, „a‟, and „of‟ make prediction less accurate, since so many different words can follow them.
  • 94. Improving Speech Recognition • Initially we used a bigram model: a graph connecting every pair of words. • Expand that to a trigram: • Each state represents two words spoken in succession. • Each edge joins a state (A, B) to another state (B, C). • This requires roughly n² vertices and n³ edges, where n is the number of words in the language. • Much better, but still limited context.
  • 95. References • CS 262 course at Stanford given by Serafim Batzoglou