This is a discussion of "Normalizing the Risk-Free Rate", a post on LinkedIn Pulse by Nikolay Markov: https://www.linkedin.com/pulse/risk-free-rates-nikolay-markov?trk=prof-post
Sukarnen Suwanto: Will the decision to normalize the RFR imply that we should normalize the MRP as well? If yes, will it even be possible to have a normalized MRP? The idea of a normalized RFR is appealing in view of the currently depressed market and the long-term view of a business, yet what we mean by "normalized" could mean different things to different people. And suddenly we will have many methods and approaches (some of them quite elegant) to arrive at something we call a "normalized RFR". I am afraid that, in the end, this is just a consensus among practitioners.
Malcolm McLelland, Ph.D.: Hi Sukarnen, I agree with all you say. Regarding the MRP, in my view the profession has always used a normalized MRP to one extent or another, because the normalization comes through the historical aggregating-and-averaging process. I see the issue like this: what sample period is used to estimate the MRP? We could estimate it using a one-day sample period with intraday transaction data; for some assets we could use a 40-year sample period with annual averages of transaction data; and so on, ad infinitum. So MRPs are essentially averages, and the averaging normalizes the MRPs for non-persistent factors. Following from "ad infinitum", we have exactly, as you say, so many methods and approaches that we (I think appropriately) fall back on a consensus among practitioners. Cheers, MMc
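Malcolm's point that MRPs are essentially averages can be made concrete: the estimate moves with the sample window chosen. A minimal Python sketch, using simulated annual excess returns rather than any published series:

```python
import numpy as np

# Simulated annual equity excess returns (market return minus risk-free rate);
# a purely hypothetical stand-in for a historical series.
rng = np.random.default_rng(42)
excess_returns = rng.normal(loc=0.06, scale=0.17, size=40)  # mean 6%, stdev 17%

# The "MRP" is just an average of these excess returns; the choice of averaging
# window is what does the normalizing for non-persistent shocks.
for window in (5, 10, 20, 40):
    mrp = excess_returns[-window:].mean()
    print(f"MRP estimated over the last {window:2d} years: {mrp:6.2%}")
```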
Rod Burkert: Note that D&P advocates using a normalized RFR with their "conditional" MRP. The result is that the normalized RFR is higher than the spot RFR ... the conditional MRP is lower than the unconditional MRP ... and so their sum is not that much different from the spot RFR + the unconditional MRP. In Damodaran's blog post (noted above), Damodaran has a graph showing that the combined spot RFR and his unconditional version of the MRP is fairly stable over time. All of this leads me to believe that this is much ado about nothing, especially for SMEs. Hitchner calculates a COE 68 different ways using the available permutations and combinations of the "old" Ibbotson/Morningstar data and the "new" D&P data, and has found the variance between the results to be 100-200 basis points.
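The offsetting effect Rod describes can be illustrated with purely hypothetical inputs (these are not D&P's or Damodaran's published figures):

```python
# Hypothetical illustration of the offset between a normalized RFR with a
# conditional MRP and a spot RFR with an unconditional MRP.
spot_rfr, unconditional_mrp = 0.025, 0.060      # e.g., spot 20-year yield + long-run MRP
normalized_rfr, conditional_mrp = 0.040, 0.050  # higher normalized RFR + lower conditional MRP

print(f"Spot RFR + unconditional MRP:     {spot_rfr + unconditional_mrp:.1%}")
print(f"Normalized RFR + conditional MRP: {normalized_rfr + conditional_mrp:.1%}")
# The two base costs of equity differ by only about 50 bps in this example.
```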
Malcolm McLelland, Ph.D.: Hi Rod, I agree. But to me the issue is always that I'm required to *demonstrate*, using directly relevant theory and empirical data, how I estimated the discount rate. At least in my world, it doesn't help much to use estimates published by D&P, and *conditional expectations* are considered very important. So, as is almost always the case, I agree with you in principle ... but differ on the details of application. :) Cheers, MMc
Sukarnen Suwanto: I'm not really a big fan of normalizing the parameters in the CAPM formula. I am not sure that doing so (i.e., normalizing the RFR, etc.) will really give us a better "discount rate". Given that CAPM is an ex-ante concept, then no matter how elegant the model used to normalize it, at the end of the day nobody will really know what is truly "normalized", and looking to the past does not make us good predictors of the discount rate. At the same time, changes in business risk nowadays are not merely attributable to the business cycle (for which we can form expectations about when everything will return to its "normal" condition); beyond that, there is a fundamental shift in the business and financial world.
Sukarnen Suwanto: Again, instead of making the discount-rate build-up more complicated, I suggest it is better to spend more time taking a closer look at how we build the cash flow projections. Many valuation results are wide of the mark not because of an incorrect discount rate, but because the cash flow forecasts turn out to be transitory rather than sustainable.
Using sensitivity analysis to deal with the discount rate is, I guess, the best way to get a better idea of whether the results make financial and business sense.
As a joke to myself while reading the many papers about normalizing the RFR, or any other effort to make CAPM sound more complicated in its application: probably, yes, the finance scholars are figuring out ways to come up with a better idea of the discount rate to be used, yet, more importantly for them, they are figuring out how to get paid for doing so.
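One simple way to act on the sensitivity suggestion is to hold the cash flow forecast fixed and sweep the discount rate. A minimal sketch, with hypothetical cash flows and a Gordon-growth terminal value:

```python
# Discount-rate sensitivity on a simple DCF (all inputs hypothetical).
cash_flows = [100, 110, 120, 125, 130]  # explicit-period free cash flows
terminal_growth = 0.02

def dcf_value(flows, rate, g):
    pv_explicit = sum(cf / (1 + rate) ** t for t, cf in enumerate(flows, start=1))
    terminal = flows[-1] * (1 + g) / (rate - g)        # Gordon-growth terminal value
    pv_terminal = terminal / (1 + rate) ** len(flows)
    return pv_explicit + pv_terminal

for rate in (0.08, 0.09, 0.10, 0.11, 0.12):
    print(f"Discount rate {rate:.0%}: value = {dcf_value(cash_flows, rate, terminal_growth):,.0f}")
```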
Sukarnen Suwanto: I read Pablo Fernandez's paper posted on SSRN, "CAPM: An Absurd Model", and found it quite thought-provoking. Though he doesn't point to another model as an alternative, at least that paper makes us think that, in many cases, using common sense is probably a better way to deal with this discount rate.
Sukarnen Suwanto: I guess we need to be careful: even though it "logically" sounds sensible that we could normalize the RFR (and the other parameters in CAPM as well), the basic question is whether such a "normalized RFR" really exists in the market. On so many occasions, valuation analysts ask the readers or users of their reports "just to believe".
Scott Hakala: I don't normalize the RFR, simply because I vary the ERP to reflect the total base discount rate.
Fernandez's paper on CAPM overstates the problem. Pure CAPM is, like most models, an incomplete and oversimplified model of reality, but what he wrote is really a disservice to the valuation community because it misstates both the evidence and the theory that investors should care much more about systematic risk and less about unsystematic risk in the discount rate. If investors seek diversification, then beta matters and should matter, but the slope will be less than 1.0, both because of estimation errors in beta (most appraisers, and frankly even Duff & Phelps, do a horrible job of estimating the true beta relative to modern time-series statistical techniques, and don't properly use the average debt-to-equity ratio over the estimation period to "normalize" beta) and because CAPM is an incomplete model of reality.
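One common way to handle the leverage point Scott raises is a Hamada-style unlever/relever that uses the average debt-to-equity ratio over the beta estimation period rather than the spot ratio. A sketch under that assumption (hypothetical inputs; not Scott's or D&P's exact procedure):

```python
# Hamada-style unlever/relever using the AVERAGE D/E over the beta estimation
# period instead of the spot D/E (all inputs hypothetical).
def unlever(beta_levered, d_to_e, tax_rate):
    return beta_levered / (1 + (1 - tax_rate) * d_to_e)

def relever(beta_unlevered, d_to_e, tax_rate):
    return beta_unlevered * (1 + (1 - tax_rate) * d_to_e)

tax_rate = 0.25
observed_beta = 1.20                  # regression beta of a comparable company
avg_d_to_e_estimation_period = 0.60   # average D/E over the 5-year estimation window
subject_d_to_e = 0.40                 # subject company's target capital structure

beta_unlevered = unlever(observed_beta, avg_d_to_e_estimation_period, tax_rate)
beta_subject = relever(beta_unlevered, subject_d_to_e, tax_rate)
print(f"Unlevered beta: {beta_unlevered:.2f}; relevered for the subject: {beta_subject:.2f}")
```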
Malcolm McLelland, Ph.D.: I agree with Scott, as usual. I also think it's important to keep in mind that, at a practical level, "normalizing" is just "averaging", and there is a roughly infinite number of ways to average things like risk-free rates, equity risk premiums, etc. We can average across time, across markets, across economic conditions, and across the exogenous risk factors that drive variation in these things. In short, whether we normally think of it this way or not, almost all components of the risk-adjusted discount rates we use in valuation represent *conditional expectations*. And, like *unconditional expectations*, there is a roughly infinite number of ways to develop or estimate them. So, how do we optimally choose among an infinite number of estimation methods? I'm not sure there's a good answer to that question. :) Cheers, MMc
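Malcolm's distinction between conditional and unconditional expectations can be shown with a toy example: the same excess-return history yields different "normalized" premiums depending on what we condition on (all data simulated):

```python
import numpy as np

# Toy illustration of unconditional vs conditional averaging of excess returns.
rng = np.random.default_rng(0)
n = 60
recession = rng.random(n) < 0.25               # hypothetical recession indicator
excess = np.where(recession,
                  rng.normal(-0.02, 0.20, n),  # lower mean in recession years
                  rng.normal(0.08, 0.15, n))   # higher mean in expansion years

print(f"Unconditional mean premium: {excess.mean():6.2%}")
print(f"Conditional on expansion:   {excess[~recession].mean():6.2%}")
print(f"Conditional on recession:   {excess[recession].mean():6.2%}")
```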
Sukarnen Suwanto: Nice comments from Scott and Malcolm. Just to add quickly.
First, the term "normalized" itself is, I believe, ambiguous. It is not quite clear whether it means "best estimate", "[single] most likely", "expected", "probability-weighted mean", etc.
Second, breaking the RFR down gives us (i) the real RFR and (ii) inflation; the question then is which one to normalize. Note: the discount rate itself has two components, time and risk (Robichek & Myers, 1966).
In a corporate valuation context, the cash flow stream generally has a fairly long duration; or we could say, in the extreme, that common stock equity has no finite duration. If this is the case, we should begin with a [very] long-term RFR (e.g., 10-year or 30-year Treasury bonds, or TIPS). Normalizing a long-term RFR will be a daunting task, even for somebody with a background in macroeconomics.
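The decomposition Sukarnen refers to is the Fisher relation; writing it out makes clear that "normalizing" the nominal RFR implicitly requires a view on both the real rate and expected inflation:

```latex
% Fisher relation between the nominal risk-free rate, the real rate, and expected inflation
(1 + r_{\text{nominal}}) = (1 + r_{\text{real}})\,(1 + \pi^{e})
\quad\Longrightarrow\quad
r_{\text{nominal}} \approx r_{\text{real}} + \pi^{e}
```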
Kashyap Shah: It's a debatable topic. I think the most important things are the future industry trend and the company's business; those matter the most.
Sukarnen Suwanto: Scott's comment reminded me of an email exchange with Prof. Peter DeMarzo (Stanford University) regarding Fernandez's paper "CAPM: An Absurd Model".
He said:
"This article is absurd :)
CAPM is just a tool, and can be easily misused/abused, just like any other tool (including NPV).
That is why it is important to understand it conceptually, not simply mechanically, so one can make informed judgments."
Sukarnen Suwanto: Scott's comment also reminded me not to mechanically average "betas" from comparables. Some extra effort is needed even if we really do want to average several betas. Don't forget that the estimation error of a regression-obtained beta is not negligible.
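To see why that estimation error matters, here is a minimal OLS sketch on simulated monthly returns that reports the beta estimate together with its standard error; with typical volatilities the resulting confidence interval is wide:

```python
import numpy as np

# Simulated 60 months of market and stock excess returns (hypothetical).
rng = np.random.default_rng(1)
market = rng.normal(0.006, 0.045, 60)
stock = 1.1 * market + rng.normal(0.0, 0.06, 60)  # "true" beta of 1.1 plus noise

# OLS slope (beta) and its standard error.
x = market - market.mean()
y = stock - stock.mean()
beta_hat = (x @ y) / (x @ x)
residuals = y - beta_hat * x
se_beta = np.sqrt((residuals @ residuals) / (len(x) - 2) / (x @ x))

print(f"beta = {beta_hat:.2f} +/- {2 * se_beta:.2f} (approx. 95% confidence interval)")
```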