Markov Analysis
by
Dr.V.V.HaraGopal
Professor, Dept. of Statistics,
Osmania University,
Hyderabad-7
STOCHASTIC PROCESS:
Stochastic “denotes the process of selecting from
among a group of theoretically possible alternatives
those elements or factors whose combination will
most closely approximate a desired result”
Stochastic models are not always exact, but
they are useful shorthand representations of
complicated processes.
Markov property
Only the current value of the variable is relevant for
future predictions; no information is carried by past
values or by the path taken.
What's a Markov Process?
• A Markov analysis looks at a
sequence of events, and analyzes
the tendency of one event to be
followed by another. Using this
analysis, you can generate a new
sequence of random but related
events, which will look similar to the
original.
• A Markov process is useful for analyzing
dependent random events - that is, events
whose likelihood depends on what
happened last. It would NOT be a good
way to model a coin flip, for example, since
every time you toss the coin, it has no
memory of what happened before. The sequence
of heads and tails is not interrelated; the tosses
are independent events.
• But many random events are affected by
what happened before. For example,
yesterday's weather does have an
influence on what today's weather is. They
are not independent events.
• A Markov model could look at a long sequence of
rainy and sunny days, and analyze the likelihood that
one kind of weather gets followed by another kind.
Let's say it was found that 25% of the time, a rainy
day was followed by a sunny day, and 75% of the
time, rain was followed by more rain. Let's say we
found out additionally, that sunny days were followed
50% of the time by rain, and 50% by sun. Given this
analysis, we could generate a new sequence of
statistically similar weather by following these steps:
1) Start with today's weather.
2) Given today's weather, choose a random number
to pick tomorrow's weather.
3) Make tomorrow's weather "today's weather" and
go back to step 2.
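The three steps above can be sketched directly in Python. The transition probabilities are the ones from the example; the function and state names (`next_day`, `generate`, "Rainy"/"Sunny") are illustrative, not from any particular library.

```python
import random

# Transition probabilities from the example:
# rainy -> sunny 25%, rainy -> rainy 75%; sunny -> rainy 50%, sunny -> sunny 50%
TRANSITIONS = {
    "Rainy": [("Sunny", 0.25), ("Rainy", 0.75)],
    "Sunny": [("Rainy", 0.50), ("Sunny", 0.50)],
}

def next_day(today, rng=random.random):
    """Step 2: given today's weather, pick tomorrow's with a random number."""
    r = rng()
    cumulative = 0.0
    for state, p in TRANSITIONS[today]:
        cumulative += p
        if r < cumulative:
            return state
    return TRANSITIONS[today][-1][0]

def generate(start, days):
    """Steps 1 and 3: start with today's weather and keep iterating."""
    chain = [start]
    for _ in range(days - 1):
        chain.append(next_day(chain[-1]))
    return chain

print(generate("Sunny", 10))
```

Each run produces a different, statistically similar sequence, which is exactly the "output chain" described next.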
• What we'd get is a sequence of days like:
Sunny Sunny Rainy Rainy Rainy Rainy Sunny Rainy
Rainy Sunny Sunny...
• In other words, the "output chain" would reflect
statistically the transition probabilities derived from
weather we observed.
• This stream of events is called a Markov Chain. A
Markov Chain, while similar to the source in the
small, is often nonsensical in the large. (Which is why
it's a lousy way to predict weather.) That is, the
overall shape of the generated material will bear little
formal resemblance to the overall shape of the
source. But taken a few events at a time, things feel
familiar.
Markov Analysis
In an industry with three firms we could look at
the market share of each firm at any time, and
the shares have to add up to 100%. If we had
information about how customers might change
from one firm to the next, then we could predict
future market shares. This is just one example of
Markov analysis. In general, we use current
probabilities and transition information to figure
future probabilities. Here we study an accounts
receivable example.
Say in the accounts receivable department, accounts are in one of
4 states, or categories:
state 1 – s1, paid;
state 2 – s2, bad debt, defined here as overdue more than three
months, at which point the company writes off the debt;
state 3 – s3, overdue less than one month;
state 4 – s4, overdue between one and three months.
Note the states are mutually exclusive and collectively
exhaustive.
At any given time there will be a certain fraction of accounts in
each state. Say in the current period we have the % of accounts
receivable in each state. In general we have a row vector of
probabilities (s1, s2, s3, s4).
Say now there are 25% of the accounts in each state. We
would have (.25, .25, .25, .25). This set of numbers is called
the vector of state probabilities.
Next the matrix of transition probabilities:
1 0 0 0
0 1 0 0
.6 0 .2 .2
.4 .1 .3 .2
The first row is being in the first state in the current period,
the second row is being in the second state in the current
period, and so on down the rows.
Now, in the matrix of transition probabilities let’s think about
each column. The first column says an account is in state 1 in
the next period. The second column says an account is in
state 2 in the next period, and so on.
Note the first row has values 1, 0, 0, 0. The values add to one.
If an account is all paid this period then it must be all paid
next period. So the 1 means there is a 100% chance of being
all paid next period and 0 % chance in being in any other
category.
In the second row we have 0, 1, 0, 0. If an account starts as
bad it will always be bad. So it has a zero chance of being
paid, of being less than one month overdue, or of being
between one and three months overdue.
In row three we have .6, 0, .2, .2. If an account is less than one
month overdue now, then next period there is a 60% chance it will
be all paid, a 0% chance it will be bad (it cannot yet be more than
three months overdue), a 20% chance it will again be less than one
month overdue, and a 20% chance it will be one to three months
overdue. But wait: how can an account be less than one month
overdue now and still less than one month overdue next period?
An account can have more than one unpaid bill, and we keep track
of the oldest unpaid bill for the category.
Note that each row has to add up to 1.
Now we are ready to ask a question. If each state has 25% of the
accounts this period, what percent will be in each state next
period?
We take the row vector and multiply by the matrix of transition
probabilities, as seen on the next screen.
Matrix multiplication:
(t, u, v, w) ·
| d e f g |
| h i j k |
| l m n o |
| p q r s |
We will end up with (a1, a2, a3, a4), where
a1 = t(d) + u(h) + v(l) + w(p)
a2 = t(e) + u(i) + v(m) + w(q)
a3 = t(f) + u(j) + v(n) + w(r)
a4 = t(g) + u(k) + v(o) + w(s)
(.25, .25, .25, .25) ·
| 1   0   0   0  |
| 0   1   0   0  |
| .6  0   .2  .2 |
| .4  .1  .3  .2 |
We will end up with (a1, a2, a3, a4), where
a1 = .25(1) + .25(0) + .25(.6) + .25(.4) = .5
a2 = .25(0) + .25(1) + .25(0) + .25(.1) = .275
a3 = .25(0) + .25(0) + .25(.2) + .25(.3) = .125
a4 = .25(0) + .25(0) + .25(.2) + .25(.2) = .1
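The same arithmetic can be checked with a few lines of Python; the `step` helper name is illustrative, not part of any package.

```python
def step(v, P):
    """One period: multiply the row vector of state probabilities by P."""
    return [sum(v[i] * P[i][j] for i in range(len(v)))
            for j in range(len(P[0]))]

P = [
    [1.0, 0.0, 0.0, 0.0],   # paid stays paid
    [0.0, 1.0, 0.0, 0.0],   # bad debt stays bad
    [0.6, 0.0, 0.2, 0.2],   # less than 1 month overdue
    [0.4, 0.1, 0.3, 0.2],   # 1 to 3 months overdue
]

v0 = [0.25, 0.25, 0.25, 0.25]   # 25% of accounts in each state
v1 = step(v0, P)                # one period later: (.5, .275, .125, .1)
v2 = step(v1, P)                # two periods later: (.615, .285, .055, .045)
print(v1)
print(v2)
```

Running `step` once reproduces the hand calculation above; running it again gives the two-period vector quoted on the next slide.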
So, if we start with 25% of the accounts in each state, then next
period we have 50% of the accounts in state 1, and so on.
If you wanted to see what the %’s in each state would be two
periods from the start we would do the same calculation, but use
the row vector that we ended with in the first period
(.5, .275, .125, .1)
If I wanted to see the probabilities of being in each state at the
end of two months, I would repeat the multiplication for a second
transition and would get (.615, .285, .055, .045).
Now, in this particular problem we have what are called
absorbing states. Not all problems have absorbing states and if
not just do what we have done up to now.
An absorbing state is one such that once in it one stays in that
state. For instance, once debt is bad it is always bad. Now, in
the long run all debt will either be bad or paid.
When a Markov analysis problem has absorbing states, no matter
how many transitions you run, the output also includes the FA
matrix: F = (I − B)⁻¹ is the fundamental matrix, where B holds the
transitions among the non-absorbing states and A holds the one-step
transitions from the non-absorbing states into the absorbing states.
In our problem we have
.9655 .0345
.8621 .1379
The rows represent the non-absorbing
states and the columns represent the
absorbing states.
The first row is state 3, debt of less than one month, and row 2 is
state 4, debt of 1 to 3 months. Column 1 is paid debt and column
2 is bad debt.
So, the first row says 96.55% of less than one month debt will be
paid over the long term and only 3.45% of this debt will not be
paid.
The second row means that 86.21% of 1 to 3 month debt will be
paid over the long term and 13.79% of this debt will go bad.
Say that there is $2000 in the less than one month overdue
category and $5000 in the 1 to 3 month overdue category. How
much can the company expect to collect of this $7000 and how
much will it not collect?
We have to do matrix multiplication here:
(2000, 5000) ·
| .9655  .0345 |
| .8621  .1379 |
= ([2000(.9655) + 5000(.8621)], [2000(.0345) + 5000(.1379)])
= (6241.50, 758.50).
So of the $7000 in states 3 and 4, $6241.50 can be expected to be
collected and $758.50 would not be collected.
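As a sketch in plain Python (with the 2 × 2 inverse written out by hand), the FA matrix and the expected collections can be computed from B and A. Note the exact totals differ from 6241.50/758.50 by a few cents, because the slide rounds the FA entries to four decimals before multiplying.

```python
# Partition the transition matrix: non-absorbing states are 3 and 4.
# B = transitions among non-absorbing states, A = transitions into
# the absorbing states (paid, bad).
B = [[0.2, 0.2],
     [0.3, 0.2]]
A = [[0.6, 0.0],
     [0.4, 0.1]]

# F = (I - B)^-1, inverted with the 2x2 cofactor formula.
a, b = 1 - B[0][0], -B[0][1]
c, d = -B[1][0], 1 - B[1][1]
det = a * d - b * c          # 0.8*0.8 - (-0.2)*(-0.3) = 0.58
F = [[d / det, -b / det],
     [-c / det, a / det]]

# FA = long-run probability of ending in each absorbing state.
FA = [[sum(F[i][k] * A[k][j] for k in range(2)) for j in range(2)]
      for i in range(2)]
# FA is approximately [[.9655, .0345], [.8621, .1379]]

# Expected split of $2000 (<1 month) and $5000 (1-3 months):
dollars = [2000, 5000]
collected = sum(dollars[i] * FA[i][0] for i in range(2))   # about 6241.38
bad = sum(dollars[i] * FA[i][1] for i in range(2))         # about 758.62
print(FA, collected, bad)
```

The two dollar figures always sum to the full $7000, since each row of FA sums to 1.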
Markov Processes
• Markov process models are useful in
studying the evolution of systems over
repeated trials or sequential time periods
or stages.
• Examples:
– Brand Loyalty
– Equipment performance
– Stock performance
Markov Processes
• When utilized, they give the probability of
switching from one state to another in a
given period of time
• Examples:
– The probability that a person buying Colgate
this period will purchase Crest next period
– The probability that a machine that is working
properly this period will break down the next
period
Markov Processes
• A Markov system (or Markov process or
Markov chain) is a system that can be in
one of several (numbered) states, and can
pass from one state to another each time
step according to fixed probabilities.
• If a Markov system is in state i, there is a
fixed probability, pij, of it going into state j
the next time step, and pij is called a
transition probability.
Markov Processes
• A Markov system can be illustrated by
means of a state transition diagram, which
is a diagram showing all the states and
transition probabilities– probabilities of
switching from one state to another.
Transition Diagram
[Figure: state transition diagram for three states (1, 2, 3), with
each arrow labeled by its probability of switching from one state
to another.]
What does the diagram mean?
Transition Matrix
• The matrix P whose ijth entry is pij is called the transition
matrix associated with the system.
• The entries in each row add up to 1.
• Thus, for instance, a 2 × 2 transition matrix P would be set up
as shown below.

From \ To     1      2
    1        p11    p12
    2        p21    p22
Diagram & Matrix
[Figure: the three-state transition diagram, with arrow labels
matching the matrix below.]

From \ To     1      2      3
    1        .2     .8     0
    2        .4     0      .6
    3        .5     .35    .15
Vectors & Transition Matrix
• A probability vector is a row vector in
which the entries are nonnegative and add
up to 1.
• The entries in a probability vector can
represent the probabilities of finding a
system in each of the states.
Probability Vector
• Let P =
.2 .8 0
.4 0 .6
.5 .35 .15
State Probabilities
• The state probabilities at any stage of the
process can be calculated recursively:
multiply the state probabilities at stage n by
the transition matrix to get the state
probabilities at stage n + 1.
State Probabilities
πi(n) = probability that the system is in state i in period n
Π(n) = [π1(n) π2(n)] = the vector of state probabilities for the
system in period n
Π(n+1) = Π(n) P : state probabilities for period n+1 are found by
multiplying the known state probabilities for period n by the
transition matrix
State Probabilities
• Example:
• Π(n) = [π1(n) π2(n)]
• Π(1) = Π(0) P
• Π(2) = Π(1) P
• Π(3) = Π(2) P
• Π(n+1) = Π(n) P
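The recursion Π(n+1) = Π(n)P can be run directly. The sketch below uses the rain/sun transition probabilities from the weather example earlier; the `step` name is illustrative.

```python
def step(pi, P):
    """Pi(n+1) = Pi(n) P: one application of the transition matrix."""
    return [sum(pi[i] * P[i][j] for i in range(len(pi)))
            for j in range(len(P[0]))]

# States (rainy, sunny): rain -> rain .75, rain -> sun .25;
# sun -> rain .50, sun -> sun .50 (each row sums to 1).
P = [[0.75, 0.25],
     [0.50, 0.50]]

pi = [1.0, 0.0]            # Pi(0): certainly rainy today
for n in range(50):
    pi = step(pi, P)       # Pi(1), Pi(2), ...

print(pi)                  # settles toward (2/3, 1/3)
```

After a few dozen iterations the vector stops changing, which previews the steady-state idea introduced next.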
Steady State Probabilities
• The probabilities that we approach after a
large number of transitions are referred to
as steady state probabilities.
• As n gets large, the state probabilities at
the (n+1)th period are very close to those
at the nth period.
Steady State Probabilities
• Knowing this, we can compute steady
state probabilities without having to carry
out a large # of calculations
Π(n) = [π1(n) π2(n)]
[π1(n+1) π2(n+1)] = [π1(n) π2(n)] ·
| p11  p12 |
| p21  p22 |
Example
• Hari, a persistent salesman, calls ABC
Hardware Store once a week hoping to
speak with the store's buying agent,
Shyam. If Shyam does not accept Hari's
call this week, the probability he will do the
same next week (and not accept his call)
is .35. On the other hand, if he accepts
Hari's call this week, the probability he will
not accept his call next week is .20.
Example: Transition Matrix
This Week's Call \ Next Week's Call    Refuses   Accepts
Refuses                                  .35       .65
Accepts                                  .20       .80
Example
• How many times per year can Hari expect
to talk to Shyam?
• Answer: To find the expected number of
accepted calls per year, find the long-run
proportion (probability) of a call being
accepted and multiply it by 52 weeks.
Example
Let π1 = long-run proportion of refused calls
    π2 = long-run proportion of accepted calls
Then,
[π1 π2] ·
| .35  .65 |
| .20  .80 |
= [π1 π2]
Example
.35π1 + .20π2 = π1 (1)
.65π1 + .80π2 = π2 (2)
π1 + π2 = 1 (3)
Solve for π1 and π2
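Solving equations (1)-(3) by hand gives a two-state closed form, π1 = p21/(p12 + p21), which the short Python check below applies to Hari's matrix.

```python
# Transition probabilities for Hari's calls (refuse = state 1, accept = state 2)
p12 = 0.65   # refuses this week -> accepts next week
p21 = 0.20   # accepts this week -> refuses next week

# Two-state closed form that follows from pi P = pi and pi1 + pi2 = 1
pi1 = p21 / (p12 + p21)    # long-run proportion of refused calls
pi2 = p12 / (p12 + p21)    # long-run proportion of accepted calls

expected_calls = pi2 * 52  # accepted calls per 52-week year
print(pi1, pi2, expected_calls)
```

This yields π2 ≈ .7647, so Hari can expect about 40 accepted calls per year.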
• The probability of the system being in a
particular state after a large number of
stages is called a steady-state probability.
Example: Machine Adjustment
From \ To                      In adj. (1)   Out of adj. (2)
In adjustment (state 1)            0.7             0.3
Out of adjustment (state 2)        0.6             0.4
Example: Machine Adjustment
[Figure: two-day tree. Starting in state 1 on day 1, the machine is
in state 1 on day 2 with probability .7 and in state 2 with
probability .3.]
If the machine is found to be in adjustment on day 1,
what is the likelihood it will be in adjustment on day 3?
Not in adjustment?
Example: Machine Adjustment
[Figure: three-day tree starting in state 1. Day-3 path
probabilities: 1→1→1 = .7(.7) = .49; 1→1→2 = .7(.3) = .21;
1→2→1 = .3(.6) = .18; 1→2→2 = .3(.4) = .12. So
P(state 1 on day 3) = .49 + .18 = .67 and
P(state 2 on day 3) = .21 + .12 = .33.]
Day 4
[Figure: four-day tree starting in state 1, branching with
probabilities .7/.3 out of state 1 and .6/.4 out of state 2 at
each step.]
Example: Machine Adjustment
• Day 4:
P(S1|S1) = .7(.7)(.7) + .7(.3)(.6) + .3(.6)(.7) + .3(.4)(.6) = .667
P(S2|S1) = .7(.7)(.3) + .7(.3)(.4) + .3(.6)(.3) + .3(.4)(.4) = .333
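The day-4 numbers are sums over all length-3 paths out of state 1, and the enumeration can be automated. This is a sketch with illustrative names (`prob_after`), not any standard library routine.

```python
from itertools import product

# Machine-adjustment transition probabilities
# (state 1 = in adjustment, state 2 = out of adjustment)
p = {(1, 1): 0.7, (1, 2): 0.3,
     (2, 1): 0.6, (2, 2): 0.4}

def prob_after(days, start=1, end=1):
    """Sum path probabilities over every sequence of intermediate states."""
    steps = days - 1                       # transitions between day 1 and `days`
    total = 0.0
    for mids in product([1, 2], repeat=steps - 1):
        path = (start, *mids, end)
        pr = 1.0
        for a, b in zip(path, path[1:]):   # multiply along the path
            pr *= p[(a, b)]
        total += pr
    return total

print(prob_after(4, end=1))   # day 4: about .667
print(prob_after(4, end=2))   # day 4: about .333
```

Calling `prob_after(3, end=1)` reproduces the day-3 figure of .67, and larger `days` values show the probabilities settling down.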
Day 5
[Figure: five-day tree starting in state 1, again branching .7/.3
from state 1 and .6/.4 from state 2 at each step.]
Example: Machine Adjustment
• Day 5:
P(S1|S1) = .7(.7)(.7)(.7) + .7(.7)(.3)(.6) + .7(.3)(.6)(.7) + .7(.3)(.4)(.6)
+ .3(.6)(.7)(.7) + .3(.6)(.3)(.6) + .3(.4)(.6)(.7) + .3(.4)(.4)(.6)
= .6667
P(S2|S1) = .7(.7)(.7)(.3) + .7(.7)(.3)(.4) + .7(.3)(.6)(.3) + .7(.3)(.4)(.4)
+ .3(.6)(.7)(.3) + .3(.6)(.3)(.4) + .3(.4)(.6)(.3) + .3(.4)(.4)(.4)
= .3333
Notice anything interesting?
Steady State Probabilities
• These probabilities are called steady state
probabilities: the long-run probability of being in a
particular state, no matter which state you begin in.
– Steady state prob. (state 1)= .667
– Steady state prob. (state 2) = .333
Example: Machine Adjustment
From \ To                      In adj. (1)   Out of adj. (2)
In adjustment (state 1)            0.7             0.3
Out of adjustment (state 2)        0.6             0.4
Example: Machine Adjustment
From \ To                      In adj. (1)   Out of adj. (2)
In adjustment (state 1)            p11             p12
Out of adjustment (state 2)        p21             p22
Steady State Probabilities
P(S1 Day n+1|S1) = .7 P(S1 Day n|S1) + .6 P(S2 Day n|S1)
P(S2 Day n+1|S1) = .3 P(S1 Day n|S1) + .4 P(S2 Day n|S1)
In the steady state, write P1 for P(S1 Day n|S1) and P2 for
P(S2 Day n|S1), so these become P1 = .7 P1 + .6 P2 and
P2 = .3 P1 + .4 P2.
Steady State Probabilities
P1 = p11 P1 + p21 P2
P1 = (1 − p12) P1 + p21 (1 − P1)
P1 = P1 − p12 P1 + p21 − p21 P1
p12 P1 + p21 P1 = p21
P1 = p21 / (p12 + p21)
Steady State Probabilities
P1 = p21 / (p12 + p21) = .6 / (.3 + .6) = .6 / .9 = 2/3
P2 = p12 / (p12 + p21) = .3 / (.3 + .6) = .3 / .9 = 1/3
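As a quick sanity check, the sketch below plugs the closed-form values for the machine-adjustment matrix back into the steady-state equations (the variable names are illustrative).

```python
p12, p21 = 0.3, 0.6        # machine: in -> out .3, out -> in .6

P1 = p21 / (p12 + p21)     # 2/3, long-run proportion in adjustment
P2 = p12 / (p12 + p21)     # 1/3, long-run proportion out of adjustment

# Fixed-point check: the steady-state vector is unchanged by one more
# transition through the matrix [[.7, .3], [.6, .4]].
assert abs((1 - p12) * P1 + p21 * P2 - P1) < 1e-12
assert abs(p12 * P1 + (1 - p21) * P2 - P2) < 1e-12
print(P1, P2)
```

Both checks pass, confirming the 2/3 and 1/3 figures derived above.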
Example – Steady State
The same procedure works on the machine-adjustment matrix. Let
p1 = long-run proportion of periods in adjustment
p2 = long-run proportion of periods out of adjustment
Then,
[p1 p2] ·
| .70  .30 |
| .60  .40 |
= [p1 p2]
which gives
.70p1 + .60p2 = p1 (1)
.30p1 + .40p2 = p2 (2)
p1 + p2 = 1 (3)
Solve for p1 and p2. Equation (3) can be restated as p1 = 1 – p2
(or p2 = 1 – p1).
Using equations (2) and (3), substitute p1 = 1 – p2 into (2):
.30(1 – p2) + .40p2 = p2
This gives p2 = .3333, and substituting back into equation (3)
gives p1 = .6667, matching the steady state found earlier.
Returning to Hari's calls, the same approach gives
π2 = .65/(.65 + .20) = .76471, so the expected number of accepted
calls per year is (.76471)(52) = 39.76, or about 40.

(How to Program) Paul Deitel, Harvey Deitel-Java How to Program, Early Object...(How to Program) Paul Deitel, Harvey Deitel-Java How to Program, Early Object...
(How to Program) Paul Deitel, Harvey Deitel-Java How to Program, Early Object...
 
Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
 
A Deep Dive on Passkeys: FIDO Paris Seminar.pptx
A Deep Dive on Passkeys: FIDO Paris Seminar.pptxA Deep Dive on Passkeys: FIDO Paris Seminar.pptx
A Deep Dive on Passkeys: FIDO Paris Seminar.pptx
 
Generative Artificial Intelligence: How generative AI works.pdf
Generative Artificial Intelligence: How generative AI works.pdfGenerative Artificial Intelligence: How generative AI works.pdf
Generative Artificial Intelligence: How generative AI works.pdf
 
Modern Roaming for Notes and Nomad – Cheaper Faster Better Stronger
Modern Roaming for Notes and Nomad – Cheaper Faster Better StrongerModern Roaming for Notes and Nomad – Cheaper Faster Better Stronger
Modern Roaming for Notes and Nomad – Cheaper Faster Better Stronger
 
DevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsDevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platforms
 
A Framework for Development in the AI Age
A Framework for Development in the AI AgeA Framework for Development in the AI Age
A Framework for Development in the AI Age
 
Sample pptx for embedding into website for demo
Sample pptx for embedding into website for demoSample pptx for embedding into website for demo
Sample pptx for embedding into website for demo
 

Markov Analysis Guide

  • 1. Markov Analysis by Dr. V. V. HaraGopal, Professor, Dept. of Statistics, Osmania University, Hyderabad-7
  • 2. STOCHASTIC PROCESS: Stochastic “denotes the process of selecting from among a group of theoretically possible alternatives those elements or factors whose combination will most closely approximate a desired result.” Stochastic models are not always exact, but they are useful shorthand representations of complicated processes. Markov property: only the current value of the variable is relevant for future predictions; no information is carried by past prices or the path taken.
  • 3. What's a Markov Process? 3
  • 4. • A Markov analysis looks at a sequence of events, and analyzes the tendency of one event to be followed by another. Using this analysis, you can generate a new sequence of random but related events, which will look similar to the original. 4
  • 5. • A Markov process is useful for analyzing dependent random events - that is, events whose likelihood depends on what happened last. It would NOT be a good way to model a coin flip, for example, since every time you toss the coin, it has no memory of what happened before. The sequence of heads and tails is not interrelated; the tosses are independent events. 5
  • 6. • But many random events are affected by what happened before. For example, yesterday's weather does have an influence on what today's weather is. They are not independent events. 6
  • 7. • A Markov model could look at a long sequence of rainy and sunny days, and analyze the likelihood that one kind of weather gets followed by another kind. Let's say it was found that 25% of the time, a rainy day was followed by a sunny day, and 75% of the time, rain was followed by more rain. Let's say we found out additionally, that sunny days were followed 50% of the time by rain, and 50% by sun. Given this analysis, we could generate a new sequence of statistically similar weather by following these steps: 1) Start with today's weather. 2) Given today's weather, choose a random number to pick tomorrow's weather. 3) Make tomorrow's weather "today's weather" and go back to step 2. 7
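The three steps above can be sketched in a few lines of Python. This is a minimal illustration using the transition probabilities from the slide (rain→sun 25%, rain→rain 75%, sun→rain 50%, sun→sun 50%); the function and variable names are illustrative, not from the original deck.

```python
import random

# Transition probabilities from the slide: each state maps to a list of
# (next_state, probability) pairs.
TRANSITIONS = {
    "Rainy": [("Sunny", 0.25), ("Rainy", 0.75)],
    "Sunny": [("Rainy", 0.50), ("Sunny", 0.50)],
}

def next_day(today, rng):
    """Step 2: given today's weather, choose a random number to pick tomorrow's."""
    r = rng.random()
    cumulative = 0.0
    for state, p in TRANSITIONS[today]:
        cumulative += p
        if r < cumulative:
            return state
    return TRANSITIONS[today][-1][0]  # guard against floating-point rounding

def simulate(start, days, seed=0):
    """Step 1: start with today's weather; step 3: repeat day by day."""
    rng = random.Random(seed)
    chain = [start]
    for _ in range(days - 1):
        chain.append(next_day(chain[-1], rng))
    return chain

print(simulate("Sunny", 10))
```

The generated chain matches the source statistically in the small but, as the next slide notes, has no meaningful large-scale shape.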
  • 8. • What we'd get is a sequence of days like: Sunny Sunny Rainy Rainy Rainy Rainy Sunny Rainy Rainy Sunny Sunny... • In other words, the "output chain" would reflect statistically the transition probabilities derived from weather we observed. • This stream of events is called a Markov Chain. A Markov Chain, while similar to the source in the small, is often nonsensical in the large. (Which is why it's a lousy way to predict weather.) That is, the overall shape of the generated material will bear little formal resemblance to the overall shape of the source. But taken a few events at a time, things feel familiar. 8
  • 9. 9 Markov Analysis In an industry with 3 firms we could look at the market share of each firm at any time, and the shares have to add up to 100%. If we had information about how customers might change from one firm to the next, then we could predict future market shares. This is just one example of Markov Analysis. In general we use current probabilities and transitional information to compute future probabilities. Here we study an accounts receivable example.
  • 10. 10 Say in the accounts receivable department, accounts are in one of 4 states, or categories: state 1 - s1, paid, state 2 – s2, bad debt, here defined as overdue more than three months and company writes off the debt, state 3 – s3, overdue less than one month, state 4 – s4, overdue between one and three months. Note the states are mutually exclusive and collectively exhaustive. At any given time there will be a certain fraction of accounts in each state. Say in the current period we have the % of accounts receivable in each state. In general we have a row vector of probabilities (s1, s2, s3, s4).
  • 11. 11 Say now there are 25% of the accounts in each state. We would have (.25, .25, .25, .25). This set of numbers is called the vector of state probabilities. Next, the matrix of transition probabilities:
1 0 0 0
0 1 0 0
.6 0 .2 .2
.4 .1 .3 .2
The first row is being in the first state in the current period, the second row is being in the second state in the current period, and so on down the rows.
  • 12. 12 Now, in the matrix of transition probabilities let’s think about each column. The first column says an account is in state 1 in the next period. The second column says an account is in state 2 in the next period, and so on. Note the first row has values 1, 0, 0, 0. The values add to one. If an account is all paid this period then it must be all paid next period. So the 1 means there is a 100% chance of being all paid next period and a 0% chance of being in any other category. In the second row we have 0, 1, 0, 0. If an account starts as bad it will always be bad. So it has a zero chance of being paid, being less than one period overdue, or being between 1 and 3 periods overdue.
  • 13. 13 In row three we have .6, 0, .2, .2. If an account is less than 1 month overdue now, next period there is a 60% chance it will be all paid, a 0% chance it will be bad (because it cannot yet be over 3 months overdue), and a 20% chance it will be less than a month overdue - wait: how can an account be less than one month overdue now and still less than one month overdue next period? Any account can have more than one unpaid bill, and we keep track of the oldest unpaid bill for the category. Note that each row has to add up to 1. Now we are ready to ask a question: if each state has 25% of the accounts this period, what percent will be in each state next period? We take the row vector and multiply by the matrix of transition probabilities, as seen on the next screen.
  • 14. 14 Matrix multiplication: (t, u, v, w) multiplied by the matrix
d e f g
h i j k
l m n o
p q r s
gives (a1, a2, a3, a4), where
a1 = t(d) + u(h) + v(l) + w(p)
a2 = t(e) + u(i) + v(m) + w(q)
a3 = t(f) + u(j) + v(n) + w(r)
a4 = t(g) + u(k) + v(o) + w(s)
  • 15. 15 (.25, .25, .25, .25) multiplied by
1 0 0 0
0 1 0 0
.6 0 .2 .2
.4 .1 .3 .2
gives (a1, a2, a3, a4), where
a1 = .25(1) + .25(0) + .25(.6) + .25(.4) = .5
a2 = .25(0) + .25(1) + .25(0) + .25(.1) = .275
a3 = .25(0) + .25(0) + .25(.2) + .25(.3) = .125
a4 = .25(0) + .25(0) + .25(.2) + .25(.2) = .1
  • 16. 16 So, if we start with 25% of accounts in state 1, then next period we have 50% of accounts in state 1, and so on. To see what the percentages in each state would be two periods from the start, we do the same calculation, but use the row vector that we ended with in the first period, (.5, .275, .125, .1). Setting the number of transitions to 2 gives the probabilities of being in each state at the end of two months: (.615, .285, .055, .045).
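The period-by-period calculation above can be sketched in plain Python (no libraries). The matrix and starting vector are the ones from the slides; the function name `step` is illustrative.

```python
# Transition matrix P; rows/columns are states: paid, bad, <1 month, 1-3 months.
P = [
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.6, 0.0, 0.2, 0.2],
    [0.4, 0.1, 0.3, 0.2],
]

def step(state, matrix):
    """One period: new_j = sum over i of state_i * matrix[i][j]."""
    n = len(state)
    return [sum(state[i] * matrix[i][j] for i in range(n)) for j in range(n)]

pi0 = [0.25, 0.25, 0.25, 0.25]
pi1 = step(pi0, P)    # after one period: (.5, .275, .125, .1)
pi2 = step(pi1, P)    # after two periods: (.615, .285, .055, .045)
print(pi1)
print(pi2)
```

Each call to `step` is exactly the row-vector-times-matrix multiplication shown on slide 15.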
  • 17. 17 Now, in this particular problem we have what are called absorbing states. Not all problems have absorbing states; if not, just do what we have done up to now. An absorbing state is one such that once in it, one stays in that state. For instance, once debt is bad it is always bad, so in the long run all debt will either be bad or paid. For a Markov Analysis problem with absorbing states, no matter how many transitions you specify, the output always includes a matrices section containing the FA matrix. In our problem it is
.9655 .0345
.8621 .1379
The rows represent the non-absorbing states and the columns represent the absorbing states.
  • 18. 18 The first row is state 3, debt of less than one month, and row 2 is state 4, debt of 1 to 3 months. Column 1 is paid debt and column 2 is bad debt. So, the first row says 96.55% of less than one month debt will be paid over the long term and only 3.45% of this debt will not be paid. The second row means that 86.21% of 1 to 3 month debt will be paid over the long term and 13.79% of this debt will go bad. Say that there is $2000 in the less than one month overdue category and $5000 in the 1 to 3 month overdue category. How much can the company expect to collect of this $7000 and how much will it not collect?
  • 19. 19 We have to do matrix multiplication, here (2000, 5000) multiplied by
.9655 .0345
.8621 .1379
which is ([2000(.9655) + 5000(.8621)], [2000(.0345) + 5000(.1379)]), or (6241.50, 758.50). So of the $7000 in states 3 and 4, $6241.50 can be expected to be collected and $758.50 would not be collected.
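Where the FA matrix comes from can be sketched with the standard absorbing-chain construction: partition the transition matrix into Q (transitions among the non-absorbing states 3 and 4) and R (transitions into the absorbing states paid/bad), then FA = (I − Q)⁻¹R. The code below uses exact fractions and a hand-coded 2×2 inverse; variable names are illustrative.

```python
from fractions import Fraction as F

# Q: states 3 and 4 among themselves; R: states 3 and 4 into paid/bad.
Q = [[F(2, 10), F(2, 10)],
     [F(3, 10), F(2, 10)]]
R = [[F(6, 10), F(0)],
     [F(4, 10), F(1, 10)]]

# (I - Q) and its 2x2 inverse via the adjugate formula.
a, b = 1 - Q[0][0], -Q[0][1]
c, d = -Q[1][0], 1 - Q[1][1]
det = a * d - b * c
Finv = [[d / det, -b / det],
        [-c / det, a / det]]        # this is (I - Q)^(-1)

FA = [[sum(Finv[i][k] * R[k][j] for k in range(2)) for j in range(2)]
      for i in range(2)]
print([[float(x) for x in row] for row in FA])  # ~[[.9655, .0345], [.8621, .1379]]

# Expected collections on ($2000, $5000) in states 3 and 4:
dollars = [2000, 5000]
collected = sum(dollars[i] * FA[i][0] for i in range(2))
bad = sum(dollars[i] * FA[i][1] for i in range(2))
print(float(collected), float(bad))
```

Note the exact answer is about $6241.38 collected and $758.62 uncollected; the slide's $6241.50/$758.50 comes from first rounding the FA entries to four decimals.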
  • 20. Markov Processes • Markov process models are useful in studying the evolution of systems over repeated trials or sequential time periods or stages. • Examples: – Brand Loyalty – Equipment performance – Stock performance
  • 21. Markov Processes • When utilized, they can state the probability of switching from one state to another at a given period of time • Examples: – The probability that a person buying Colgate this period will purchase Crest next period – The probability that a machine that is working properly this period will break down the next period
  • 22. Markov Processes • A Markov system (or Markov process or Markov chain) is a system that can be in one of several (numbered) states, and can pass from one state to another each time step according to fixed probabilities. • If a Markov system is in state i, there is a fixed probability, pij, of it going into state j the next time step, and pij is called a transition probability.
  • 23. Markov Processes • A Markov system can be illustrated by means of a state transition diagram, which is a diagram showing all the states and transition probabilities– probabilities of switching from one state to another.
  • 25. Transition Matrix • The matrix P whose ijth entry is pij is called the transition matrix associated with the system. • The entries in each row add up to 1. • Thus, for instance, a 2 × 2 transition matrix P would be set up as shown (rows = From, columns = To):
P11 P12
P21 P22
  • 26. Diagram & Matrix A state transition diagram for states 1, 2, 3 (arrows labeled with the probabilities .2, .8, .4, .6, .5, .35, .15) corresponds to the transition matrix (rows = From, columns = To):
1: .2 .8 0
2: .4 0 .6
3: .5 .35 .15
  • 27. Vectors & Transition Matrix • A probability vector is a row vector in which the entries are nonnegative and add up to 1. • The entries in a probability vector can represent the probabilities of finding a system in each of the states.
  • 28. Probability Vector • Let P =
.2 .8 0
.4 0 .6
.5 .35 .15
  • 29. State Probabilities • The state probabilities at any stage of the process can be recursively calculated by multiplying the state probabilities at stage n by the transition matrix.
  • 30. State Probabilities Πi(n) = probability that the system is in state i in period n. Π(n) = [Π1(n) Π2(n)] denotes the vector of state probabilities for the system in period n. Π(n+1) = Π(n)P: state probabilities for period n+1 can be found by multiplying the known state probabilities for period n by the transition matrix.
  • 31. State Probabilities • Example: Π(n) = [π1(n) π2(n)] Π(1) = Π(0)P Π(2) = Π(1)P Π(3) = Π(2)P Π(n+1) = Π(n)P
  • 32. Steady State Probabilities • The probabilities that we approach after a large number of transitions are referred to as steady state probabilities. • As n gets large, the state probabilities at the (n+1)th period are very close to those at the nth period.
  • 33. Steady State Probabilities • Knowing this, we can compute steady state probabilities without having to carry out a large number of calculations: [π1(n+1) π2(n+1)] = [π1(n) π2(n)] × P, where P =
p11 p12
p21 p22
  • 34. Example • Hari, a persistent salesman, calls ABC Hardware Store once a week hoping to speak with the store's buying agent, Shyam. If Shyam does not accept Hari's call this week, the probability he will do the same next week (and not accept his call) is .35. On the other hand, if he accepts Hari's call this week, the probability he will not accept his call next week is .20.
  • 35. Example: Transition Matrix (rows = This Week’s Call, columns = Next Week’s Call)
        Refuses Accepts
Refuses   .35     .65
Accepts   .20     .80
  • 36. Example • How many times per year can Hari expect to talk to Shyam? • Answer: To find the expected number of accepted calls per year, find the long-run proportion (probability) of a call being accepted and multiply it by 52 weeks.
  • 37. Example Let π1 = long run proportion of refused calls, π2 = long run proportion of accepted calls. Then
[π1 π2] .35 .65 = [π1 π2]
        .20 .80
  • 38. Example .35π1 + .20π2 = π1 (1) .65π1 + .80π2 = π2 (2) π1 + π2 = 1 (3) Solve for π1 and π2
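The system above can be solved by the substitution the slides suggest: put π2 = 1 − π1 into equation (1). A minimal sketch (variable names are illustrative):

```python
# Transition probabilities from the slide.
p_rr, p_ra = 0.35, 0.65   # refuse -> refuse, refuse -> accept
p_ar, p_aa = 0.20, 0.80   # accept -> refuse, accept -> accept

# Equation (1) with pi2 = 1 - pi1:
#   .35*pi1 + .20*(1 - pi1) = pi1  =>  .20 = .85*pi1
pi1 = p_ar / (1 - p_rr + p_ar)
pi2 = 1 - pi1
print(round(pi1, 5), round(pi2, 5))   # 0.23529 0.76471
print(round(pi2 * 52, 2))             # 39.76 accepted calls per year
```

So in the long run about 76.5% of Hari's weekly calls are accepted, roughly 40 per year.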
  • 39. • The probability of the system being in a particular state after a large number of stages is called a steady-state probability.
  • 40. Example: Machine Adjustment (rows = To, columns = From)
                            In adj. (1)  Out of adj. (2)
In adjustment (state 1)         0.7           0.6
Out of adjustment (state 2)     0.3           0.4
  • 41. Example: Machine Adjustment Day 1 to Day 2 tree: from state 1, branch to state 1 with probability .7 and to state 2 with probability .3. If the machine is found to be in adjustment on day 1, what is the likelihood it will be in adjustment on day 3? Not in adjustment?
  • 42. Example: Machine Adjustment Extending the tree to Day 3: the four paths from state 1 have probabilities .7(.7) = .49, .7(.3) = .21, .3(.6) = .18, and .3(.4) = .12, so P(in adjustment on day 3) = .49 + .18 = .67 and P(out of adjustment) = .21 + .12 = .33.
  • 45. Example: Machine Adjustment • Day 4: P(S1|S1) = .7(.7)(.7) + .7(.3)(.6) + .3(.6)(.7) + .3(.4)(.6) = .667 P(S2|S1) = .7(.7)(.3) + .7(.3)(.4) + .3(.6)(.3) + .3(.4)(.4) = .333
  • 46. Day 5: the probability tree now has sixteen paths, each branch labeled .7/.3 (from state 1) or .6/.4 (from state 2).
  • 47. Example: Machine Adjustment • Day 5: P(S1|S1) = .7(.7)(.7)(.7) + .7(.7)(.3)(.6) + .7(.3)(.6)(.7) + .7(.3)(.4)(.6) + .3(.6)(.7)(.7) + .3(.6)(.3)(.6) + .3(.4)(.6)(.7) + .3(.4)(.4)(.6) = .667 P(S2|S1) = .7(.7)(.7)(.3) + .7(.7)(.3)(.4) + .7(.3)(.6)(.3) + .7(.3)(.4)(.4) + .3(.6)(.7)(.3) + .3(.6)(.3)(.4) + .3(.4)(.6)(.3) + .3(.4)(.4)(.4) = .333 Notice anything interesting?
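The pattern the day-by-day trees are showing can be reproduced by repeatedly multiplying the state vector by the transition matrix; the vector settles down after a few days. A minimal sketch with the slide's probabilities:

```python
# Rows are "from" states: state 1 = in adjustment, state 2 = out of adjustment.
P = [[0.7, 0.3],
     [0.6, 0.4]]

def step(pi):
    """One day: multiply the row vector pi by the transition matrix P."""
    return [pi[0] * P[0][0] + pi[1] * P[1][0],
            pi[0] * P[0][1] + pi[1] * P[1][1]]

pi = [1.0, 0.0]               # day 1: known to be in adjustment
for day in range(2, 7):
    pi = step(pi)
    print(day, [round(x, 4) for x in pi])
# day 3 gives [0.67, 0.33]; by day 6 the vector has settled near [2/3, 1/3]
```

Starting from [0.0, 1.0] instead converges to the same limit, which is the point of the next slide.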
  • 48. Steady State Probabilities • These probabilities are called steady state probabilities • The long term probability of being in a particular state no matter which state you begin in – Steady state prob. (state 1)= .667 – Steady state prob. (state 2) = .333
  • 49. Example: Machine Adjustment (rows = To, columns = From)
                            In adj. (1)  Out of adj. (2)
In adjustment (state 1)         0.7           0.6
Out of adjustment (state 2)     0.3           0.4
  • 50. Example: Machine Adjustment In symbols (rows = To, columns = From):
                            In adj. (1)  Out of adj. (2)
In adjustment (state 1)         p11           p21
Out of adjustment (state 2)     p12           p22
  • 51. Steady State Probabilities P(S1 Day n+1|S1) = .7 P(S1 Day n|S1) + .6 P(S2 Day n|S1), i.e. P1 = p11 P1 + p21 P2. P(S2 Day n+1|S1) = .3 P(S1 Day n|S1) + .4 P(S2 Day n|S1), i.e. P2 = p12 P1 + p22 P2.
  • 52. Steady State Probabilities
P1 = p11 P1 + p21 P2
P1 = (1 - p12) P1 + p21 (1 - P1)
P1 = P1 - p12 P1 + p21 - p21 P1
p12 P1 + p21 P1 = p21
P1 = p21 / (p12 + p21)
  • 53. Steady State Probabilities P1 = p21 / (p12 + p21) = .6 / (.3 + .6) = .6 / .9 = 2/3 P2 = p12 / (p12 + p21) = .3 / (.3 + .6) = .3 / .9 = 1/3
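The closed form just derived can be written as a tiny function and checked by confirming the result is a fixed point of the transition matrix (one more step leaves it unchanged). A sketch; the function name is illustrative:

```python
def steady_state(p12, p21):
    """Two-state chain: P1 = p21/(p12+p21), P2 = p12/(p12+p21)."""
    total = p12 + p21
    return p21 / total, p12 / total

P1, P2 = steady_state(0.3, 0.6)   # machine-adjustment probabilities
print(P1, P2)                      # 2/3 and 1/3

# Fixed-point check: applying the transition matrix once more changes nothing.
assert abs((P1 * 0.7 + P2 * 0.6) - P1) < 1e-12
```

This matches the .667/.333 values the day-by-day trees converged to.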
  • 54. Example – Steady State Let p1 = long run proportion of refused calls, p2 = long run proportion of accepted calls. Then, using the call transition matrix from the Hari example,
[p1 p2] .35 .65 = [p1 p2]
        .20 .80
  • 55. Example – Steady State .35p1 + .20p2 = p1 (1) .65p1 + .80p2 = p2 (2) p1 + p2 = 1 (3) Solve for p1 and p2. Equation (3) can be restated as: p1 = 1 – p2 and p2 = 1 – p1.
  • 56. Example – Steady State Using equations (2) and (3), substitute p1 = 1 – p2 into (2): .65(1 – p2) + .80p2 = p2 This gives p2 = .76471. Substituting back into equation (3) gives p1 = .23529. Thus the expected number of accepted calls per year is: (.76471)(52) = 39.76, or about 40.