2. STOCHASTIC PROCESS:
Stochastic “denotes the process of selecting from
among a group of theoretically possible alternatives
those elements or factors whose combination will
most closely approximate a desired result”
Stochastic models are not always exact, but they are useful shorthand representations of complicated processes.
Markov property:
Only the current value of the variable is relevant for future predictions; no information is carried by past prices or the path taken.
4. • A Markov analysis looks at a
sequence of events, and analyzes
the tendency of one event to be
followed by another. Using this
analysis, you can generate a new
sequence of random but related
events, which will look similar to the
original.
5. • A Markov process is useful for analyzing
dependent random events - that is, events
whose likelihood depends on what
happened last. It would NOT be a good
way to model a coin flip, for example, since
every time you toss the coin, it has no
memory of what happened before. The
sequence of heads and tails is not
interrelated; the tosses are independent events.
6. • But many random events are affected by
what happened before. For example,
yesterday's weather does have an
influence on what today's weather is. They
are not independent events.
7. • A Markov model could look at a long sequence of
rainy and sunny days, and analyze the likelihood that
one kind of weather gets followed by another kind.
Let's say it was found that 25% of the time, a rainy
day was followed by a sunny day, and 75% of the
time, rain was followed by more rain. Let's say we
found out additionally, that sunny days were followed
50% of the time by rain, and 50% by sun. Given this
analysis, we could generate a new sequence of
statistically similar weather by following these steps:
1) Start with today's weather.
2) Given today's weather, choose a random number
to pick tomorrow's weather.
3) Make tomorrow's weather "today's weather" and
go back to step 2.
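The three steps above can be sketched in a short script (a minimal sketch; the transition probabilities are the ones given in the text):

```python
import random

# Transition probabilities from the text:
# rain -> sun 25%, rain -> rain 75%; sun -> rain 50%, sun -> sun 50%
TRANSITIONS = {
    "Rainy": {"Sunny": 0.25, "Rainy": 0.75},
    "Sunny": {"Rainy": 0.50, "Sunny": 0.50},
}

def next_day(today):
    """Step 2: given today's weather, pick tomorrow's at random."""
    outcomes = TRANSITIONS[today]
    return random.choices(list(outcomes), weights=outcomes.values(), k=1)[0]

def markov_chain(start, days):
    """Steps 1 and 3: start with today's weather and keep iterating."""
    sequence = [start]
    for _ in range(days - 1):
        sequence.append(next_day(sequence[-1]))
    return sequence

print(" ".join(markov_chain("Sunny", 11)))
```

Each run produces a different chain, but over a long run about 25% of rainy days are followed by sun, matching the observed statistics.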
8. • What we'd get is a sequence of days like:
Sunny Sunny Rainy Rainy Rainy Rainy Sunny Rainy
Rainy Sunny Sunny...
• In other words, the "output chain" would reflect
statistically the transition probabilities derived from
weather we observed.
• This stream of events is called a Markov Chain. A
Markov Chain, while similar to the source in the
small, is often nonsensical in the large. (Which is why
it's a lousy way to predict weather.) That is, the
overall shape of the generated material will bear little
formal resemblance to the overall shape of the
source. But taken a few events at a time, things feel
familiar.
9. Markov Analysis
In an industry with three firms, we could look
at the market share of each firm at any time;
the shares must add up to 100%. If we had
information about how customers might
switch from one firm to the next, then we
could predict future market shares. This is
just one example of Markov Analysis. In
general we use current probabilities and
transitional information to figure future
probabilities. Here we study an accounts
receivable example.
10.
Say in the accounts receivable department, accounts are in one of
4 states, or categories:
state 1 – s1, paid,
state 2 – s2, bad debt, here defined as overdue more than three
months, at which point the company writes off the debt,
state 3 – s3, overdue less than one month,
state 4 – s4, overdue between one and three months.
Note the states are mutually exclusive and collectively
exhaustive.
At any given time there will be a certain fraction of accounts in
each state. Say in the current period we have the % of accounts
receivable in each state. In general we have a row vector of
probabilities (s1, s2, s3, s4).
11.
Say now there are 25% of the accounts in each state. We
would have (.25, .25, .25, .25). This set of numbers is called
the vector of state probabilities.
Next the matrix of transition probabilities:
1 0 0 0
0 1 0 0
.6 0 .2 .2
.4 .1 .3 .2
The first row is being in the first state in the current period,
the second row is being in the second state in the current
period, and so on down the rows.
12.
Now, in the matrix of transition probabilities, let’s think about
each column. The first column gives the probability of being in
state 1 in the next period, the second column the probability of
being in state 2 in the next period, and so on.
Note the first row has values 1, 0, 0, 0, which add to one.
If an account is all paid this period then it must be all paid
next period. So the 1 means there is a 100% chance of being
all paid next period and a 0% chance of being in any other
category.
In the second row we have 0, 1, 0, 0. If an account starts as
bad debt it will always be bad debt. So it has zero chance of
being paid, of being less than one month overdue, or of being
between one and three months overdue.
13.
In row three we have .6, 0, .2, .2. If an account is less than one
month overdue now, then next period there is a 60% chance it will
be all paid, a 0% chance it will be bad (it cannot yet be more than
three months overdue), a 20% chance it will be less than one month
overdue, and a 20% chance it will be one to three months overdue.
Wait, wait, wait. How can an account be less than one month
overdue now and still less than one month overdue next period?
An account can have more than one unpaid bill, and we keep track
of the oldest unpaid bill for the category.
Note that each row has to add up to 1.
Now we are ready to ask a question. If each state has 25% of the
accounts this period, what percent will be in each state next
period?
We take the row vector and multiply by the matrix of transition
probabilities, as seen on the next screen.
14.
Matrix multiplication: we multiply the row vector (t, u, v, w) by
the transition matrix

(t, u, v, w)  d e f g
              h i j k
              l m n o
              p q r s

We will end up with (a1, a2, a3, a4), where
a1 = t(d) + u(h) + v(l) + w(p)
a2 = t(e) + u(i) + v(m) + w(q)
a3 = t(f) + u(j) + v(n) + w(r)
a4 = t(g) + u(k) + v(o) + w(s)
16.
So, if we start with 25% of accounts in each state, then next period
we have 50% of accounts in state 1, and so on.
If you wanted to see what the percentages in each state would be two
periods from the start, you would do the same calculation, but using
the row vector that you ended with in the first period:
(.5, .275, .125, .1)
If I wanted the probabilities of being in each state at the end of
two months, I would enter 2 for the number of transitions and
would get (.615, .285, .055, .045).
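The calculation described above can be checked with a short script (a sketch using plain Python lists; the vector and matrix are the ones from the slides):

```python
def times_matrix(vector, matrix):
    """Multiply a row vector by a transition matrix (new state probabilities)."""
    return [sum(vector[i] * matrix[i][j] for i in range(len(vector)))
            for j in range(len(matrix[0]))]

P = [
    [1.0, 0.0, 0.0, 0.0],   # paid stays paid
    [0.0, 1.0, 0.0, 0.0],   # bad debt stays bad
    [0.6, 0.0, 0.2, 0.2],   # less than 1 month overdue
    [0.4, 0.1, 0.3, 0.2],   # 1 to 3 months overdue
]

state = [0.25, 0.25, 0.25, 0.25]  # 25% of accounts in each state

one_period = [round(x, 3) for x in times_matrix(state, P)]       # [0.5, 0.275, 0.125, 0.1]
two_periods = [round(x, 3) for x in times_matrix(one_period, P)]  # [0.615, 0.285, 0.055, 0.045]
print(one_period, two_periods)
```

The second call simply reuses the first period's result as the new row vector, exactly as the slide describes.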
17.
Now, in this particular problem we have what are called
absorbing states. Not all problems have absorbing states and if
not just do what we have done up to now.
An absorbing state is one such that once in it one stays in that
state. For instance, once debt is bad it is always bad. Now, in
the long run all debt will either be bad or paid.
When a Markov analysis problem has absorbing states, the output
always includes (no matter how many transitions you specify) a
section called Matrices, which contains the FA matrix. In our
problem we have
.9655 .0345
.8621 .1379
The rows represent the non-absorbing
states and the columns represent the
absorbing states.
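The FA matrix can be reproduced by hand: partition the transition matrix so B holds the transitions among the non-absorbing states (3 and 4) and A holds the transitions from them into the absorbing states (1 and 2); then FA = (I − B)⁻¹A, where F is the fundamental matrix. A sketch for this 2 × 2 case:

```python
# B: transitions among non-absorbing states 3 and 4 (from the slide's matrix)
B = [[0.2, 0.2],
     [0.3, 0.2]]
# A: transitions from states 3 and 4 into absorbing states 1 (paid) and 2 (bad)
A = [[0.6, 0.0],
     [0.4, 0.1]]

# Invert (I - B) with the 2x2 cofactor formula
a, b = 1 - B[0][0], -B[0][1]
c, d = -B[1][0], 1 - B[1][1]
det = a * d - b * c
F = [[d / det, -b / det],
     [-c / det, a / det]]          # F = (I - B)^-1, the fundamental matrix

FA = [[sum(F[i][k] * A[k][j] for k in range(2)) for j in range(2)]
      for i in range(2)]
print([[round(x, 4) for x in row] for row in FA])
# -> [[0.9655, 0.0345], [0.8621, 0.1379]]
```

This matches the FA matrix reported in the output.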
18.
The first row is state 3, debt of less than one month, and row 2 is
state 4, debt of 1 to 3 months. Column 1 is paid debt and column
2 is bad debt.
So, the first row says 96.55% of less than one month debt will be
paid over the long term and only 3.45% of this debt will not be
paid.
The second row means that 86.21% of 1 to 3 month debt will be
paid over the long term and 13.79% of this debt will go bad.
Say that there is $2000 in the less than one month overdue
category and $5000 in the 1 to 3 month overdue category. How
much can the company expect to collect of this $7000 and how
much will it not collect?
19.
We have to do matrix multiplication, here
(2000, 5000) .9655 .0345
.8621 .1379
([2000 × .9655] + [5000 × .8621], [2000 × .0345] + [5000 × .1379])
or (6241.50, 758.50).
So of the $7000 in states 3 and 4, $6241.50 can be expected to be
collected and $758.50 would not be collected.
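The same multiplication takes only a few lines (the FA values are taken from the earlier slide):

```python
FA = [[0.9655, 0.0345],
      [0.8621, 0.1379]]
dollars = [2000, 5000]   # less than 1 month overdue, 1 to 3 months overdue

# Row vector of dollars times FA: (collected, not collected)
collected = dollars[0] * FA[0][0] + dollars[1] * FA[1][0]
uncollected = dollars[0] * FA[0][1] + dollars[1] * FA[1][1]
print(round(collected, 2), round(uncollected, 2))  # -> 6241.5 758.5
```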
20. Markov Processes
• Markov process models are useful in
studying the evolution of systems over
repeated trials or sequential time periods
or stages.
• Examples:
– Brand Loyalty
– Equipment performance
– Stock performance
21. Markov Processes
• They give the probability of switching from
one state to another in a given period of time
• Examples:
– The probability that a person buying Colgate
this period will purchase Crest next period
– The probability that a machine that is working
properly this period will break down the next
period
22. Markov Processes
• A Markov system (or Markov process or
Markov chain) is a system that can be in
one of several (numbered) states, and can
pass from one state to another each time
step according to fixed probabilities.
• If a Markov system is in state i, there is a
fixed probability, pij, of it going into state j
the next time step, and pij is called a
transition probability.
23. Markov Processes
• A Markov system can be illustrated by
means of a state transition diagram, which
is a diagram showing all the states and
transition probabilities, i.e., the probabilities
of switching from one state to another.
25. Transition Matrix
• The matrix P whose ijth
entry is pij is called the
transition matrix
associated with the
system.
• The entries in each row
add up to 1.
• Thus, for instance, a 2 × 2 transition
matrix P would be set up as shown below.

              To
From       1      2
  1       P11    P12
  2       P21    P22
27. Vectors & Transition Matrix
• A probability vector is a row vector in
which the entries are nonnegative and add
up to 1.
• The entries in a probability vector can
represent the probabilities of finding a
system in each of the states.
29. State Probabilities
• The state probabilities at any stage of the
process can be calculated recursively:
multiply the state probabilities at stage n by
the transition matrix to get the state
probabilities at stage n+1.
30. State Probabilities
Πi(n) = probability that the system is in state i in period n
Π(n) = [Π1(n) Π2(n)] denotes the vector of state probabilities
for the system in period n
Π(n+1) = Π(n) P : state probabilities for period n+1 can be
found by multiplying the known state probabilities for
period n by the transition matrix
32. Steady State Probabilities
• The probabilities that we approach after a
large number of transitions are referred to
as steady state probabilities.
• As n gets large, the state probabilities at
the (n+1)th period are very close to those
at the nth period.
33. Steady State Probabilities
• Knowing this, we can compute steady
state probabilities without having to carry
out a large number of calculations.

Π(n) = [π1(n) π2(n)]

[π1(n+1) π2(n+1)] = [π1(n) π2(n)]  p11 p12
                                   p21 p22

At steady state Π(n+1) = Π(n), so we can set [π1 π2] P = [π1 π2]
and solve for the π's directly.
34. Example
• Hari, a persistent salesman, calls ABC
Hardware Store once a week hoping to
speak with the store's buying agent,
Shyam. If Shyam does not accept Hari's
call this week, the probability he will do the
same next week (and not accept his call)
is .35. On the other hand, if he accepts
Hari's call this week, the probability he will
not accept his call next week is .20.
36. Example
• How many times per year can Hari expect
to talk to Shyam?
• Answer: To find the expected number of
accepted calls per year, find the long-run
proportion (probability) of a call being
accepted and multiply it by 52 weeks.
37. Example
Let π1 = long run proportion of refused calls
π2 = long run proportion of accepted
calls
Then,
[π1 π2]  .35 .65
         .20 .80  = [π1 π2]
38. Example
.35π1 + .20π2 = π1 (1)
.65π1 + .80π2 = π2 (2)
π1 + π2 = 1 (3)
Solve for π1 and π2
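Solving by substitution (put π1 = 1 − π2 from equation (3) into equation (1)) gives π1 ≈ .2353 and π2 ≈ .7647; a quick check:

```python
# From equation (1): .35*pi1 + .20*pi2 = pi1  =>  .20*pi2 = .65*pi1
# Substituting pi1 = 1 - pi2 (equation (3)):
#   .20*pi2 = .65*(1 - pi2)  =>  .85*pi2 = .65
pi2 = 0.65 / 0.85            # long run proportion of accepted calls
pi1 = 1 - pi2                # long run proportion of refused calls
print(round(pi1, 5), round(pi2, 5))  # -> 0.23529 0.76471

# Sanity check: [pi1 pi2] P = [pi1 pi2]
assert abs(0.35 * pi1 + 0.20 * pi2 - pi1) < 1e-9
assert abs(0.65 * pi1 + 0.80 * pi2 - pi2) < 1e-9
```

The .76471 figure is the long-run proportion of accepted calls used later to compute the yearly total.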
39. • The probability of the system being in a
particular state after a large number of
stages is called a steady-state probability.
40. Example: Machine Adjustment

From \ To                     In adj. (1)   Out of adj. (2)
In adjustment (state 1)           0.7             0.3
Out of adjustment (state 2)       0.6             0.4
41. Example: Machine Adjustment
Day 1: the machine is in adjustment (state 1).
Day 2: state 1 with probability .7, state 2 with probability .3.
If the machine is found to be in adjustment on day 1,
what is the likelihood it will be in adjustment on day 3?
Not in adjustment?
42. Example: Machine Adjustment
Day 1: state 1.
Day 2: state 1 with probability .7, state 2 with probability .3.
Day 3, through state 1: .7 × .7 = .49 (to state 1) and .7 × .3 = .21 (to state 2);
through state 2: .3 × .6 = .18 (to state 1) and .3 × .4 = .12 (to state 2).
So P(state 1 on day 3) = .49 + .18 = .67 and P(state 2 on day 3) = .21 + .12 = .33.
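The day-3 figures in the tree are just the first row of P² (two applications of the transition matrix); a quick check using the machine-adjustment matrix from the slides:

```python
P = [[0.7, 0.3],   # from in adjustment (state 1)
     [0.6, 0.4]]   # from out of adjustment (state 2)

# Day 3 probabilities given "in adjustment" on day 1: first row of P * P
day3_in = P[0][0] * P[0][0] + P[0][1] * P[1][0]    # .49 + .18
day3_out = P[0][0] * P[0][1] + P[0][1] * P[1][1]   # .21 + .12
print(round(day3_in, 2), round(day3_out, 2))  # -> 0.67 0.33
```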
48. Steady State Probabilities
• These probabilities are called steady state
probabilities: the long-term probability of
being in a particular state, no matter which
state you begin in.
– Steady state prob. (state 1) = .667
– Steady state prob. (state 2) = .333
49. Example: Machine Adjustment
From \ To                     In adj. (1)   Out of adj. (2)
In adjustment (state 1)           0.7             0.3
Out of adjustment (state 2)       0.6             0.4
50. Example: Machine Adjustment
From \ To                     In adj. (1)   Out of adj. (2)
In adjustment (state 1)           p11             p12
Out of adjustment (state 2)       p21             p22
51. Steady State Probabilities
P(S1 day n+1 | S1) = .7 P(S1 day n | S1) + .6 P(S2 day n | S1)
P(S2 day n+1 | S1) = .3 P(S1 day n | S1) + .4 P(S2 day n | S1)
54. Example – Steady State
Let p1 = long run proportion of time in adjustment
p2 = long run proportion of time out of adjustment
Then,
[p1 p2]  .70 .30
         .60 .40  = [p1 p2]
55. Example – Steady State
.70p1 + .60p2 = p1   (1)
.30p1 + .40p2 = p2   (2)
p1 + p2 = 1          (3)
Solve for p1 and p2. Equation (3) can be restated as:
p1 = 1 – p2
p2 = 1 – p1
56. Example – Steady State
Using equations (2) and (3), substitute p1 = 1 – p2 into (2):
.30(1 – p2) + .40p2 = p2
This gives p2 = .3333. Substituting back into equation (3) gives
p1 = .6667.
Applying the same procedure to the earlier sales-call example gives
π2 = .76471, so the expected number of accepted calls per year is
(.76471)(52) = 39.76, or about 40.
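Both results can be checked in a few lines (a sketch; the 52 weeks per year figure comes from the earlier call-example slide):

```python
# Machine example: .30*(1 - p2) + .40*p2 = p2  =>  .30 = .90*p2
p2 = 0.30 / 0.90     # long run proportion out of adjustment
p1 = 1 - p2          # long run proportion in adjustment
print(round(p1, 4), round(p2, 4))  # -> 0.6667 0.3333

# Call example: accepted-call proportion .65/.85 = .76471, times 52 weeks
accepted_per_year = (0.65 / 0.85) * 52
print(round(accepted_per_year, 2))  # -> 39.76
```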