University of Connecticut
School of Business
Risk Modeling Project:
“Dot-com” Bubble: 1996 - 2002
Group 8
Group Members:
Tianyan Li
Wang Dai
Jianguo Li
Meng Xu
John Almeida
I Overview of the Dot-Com Crisis
1 Causes
The first US web sites on the world wide web were established in 1992, but "the web" grew
at an exponential rate (Adamic & Huberman, 1999). By the mid-90's, software such as
AOL, Internet Explorer, and Netscape gave users the ability to "surf" the web via sites such
as Lycos, Altavista, and Yahoo (Week, 1996). Schools were increasingly interested in providing
internet capabilities to students, and enrollment in US colleges and universities was at an all-
time high: close to 40% of the population of 22-24 year-olds was in school in the mid-1990's
(US Census Bureau, 2013). Computers and internet access became increasingly available at
home and work: 18% of households had internet access by 1997 (US Census Bureau, 2013).
At the same time, the US government initiated capital gains tax cuts that created available capital
among the wealthiest income groups, which is considered to have increased volatility in the
market by encouraging more speculative investments (Dai, Shackelford, & Zhang, 2013). Thus,
when the supply of internet capability and the "free" money from recent tax cuts met the
demand for experimentation in the new medium, the "dot-com" boom was created. Named after
the ubiquitous domain suffix indicating a commercial (as opposed to military, government, or
educational) website, this phenomenon is alternately called the "dot-com bubble," the "tech
bubble," or something similar.
Much of this was enabled by the NASDAQ. Although the NASDAQ was founded in the 1970's
and became popular for over-the-counter trades, it found it could expand by
offering an electronic interface for traders (Simms, 1999). NASDAQ's emphasis on technology
allowed it to join with the London Stock Exchange to form an intercontinental securities market, and
its competitive advantage in technology over the NYSE made it the perfect market for
the new technology companies and their stocks. From 1995 to 2000, NASDAQ's volume grew
from 300 million shares traded per day to 2 billion shares traded per day (Yahoo! Finance).
2 Starting and Growing the Bubble
Many consider the start of the “tech” bubble to be Netscape’s initial public offering (IPO).
Netscape opened at $71 (U.S.), more than double the $28 public-
offering price set late Tuesday. But after an intra-day high of $75, the
stock drifted to close up $30.25 at $58.25. Volume soared to 13.8
million shares, making Netscape the second most-active Nasdaq
issue and the third-best performing initial public offering in history
(Toronto Star, 1995).
The dot-com boom proceeded as initial public offerings (IPOs) and venture capital funding for
rapidly expanding technology companies became increasingly popular. At the same time, both the
desire to trade stocks and the technology to become one's own stockbroker became widely
available through increasingly popular companies such as Charles Schwab, Datek, and E*Trade
(Barboza, 1999). Companies such as broadcast.com (sold for $5.7B), Geocities.com (sold for
$3.5B), and theglobe.com ($28M) all successfully raised substantial amounts of money from
investors who were hoping to find a quick return on their investments (Honan & Leckart, 2010).
Other companies such as Microsoft, Dell, and Intel enjoyed solid business before the internet
craze, but certainly enjoyed a surge in business from new customers (Picker & Levy, 2014).
While the trading happened at internet terminals and "on" NASDAQ (physically located in New
York), most of the new technology companies were in Silicon Valley (Picker & Levy, 2014).
3 Height of the Bubble and Thereafter
However, a problem emerged: many analysts started to doubt that these new, unproven
technology companies could earn the profits they claimed they could. The technology
companies typically had almost no property, plant, or equipment, which gave them very low
startup costs and required no debt. With hundreds of millions of potential customers in a
business-to-consumer model, or millions of potential customers in a business-to-business model
(with other businesses as customers), there was no way to provide an accurate assessment of
future revenue. It became clear that the projections of future profits were based
on questionable assumptions at best and reckless speculation at worst (Cassidy, 2003).
Startup technology companies that had been flooded with cash began to exit the market at an
increasing rate. By March 2001, such (formerly) notable companies as boo.com, pets.com, and
etoys.com had failed. The NASDAQ peaked on March 10, 2000 (Yahoo! Finance). Although it is
impossible to identify a single cause of the inflection point, layoffs peaked in April 2000 and
technology company closures peaked in May 2000 (BBC, 2002).
The impact of the dot-com bubble's deflation varied across groups. The "average
American citizen" was hardly any worse off as the dot-com businesses declared bankruptcy
and ceased operations. Arguably, the "average investor," assuming a well-balanced portfolio
and a long-term investment strategy, survived reasonably well. The group hit hardest was
technology workers, who had to contend with unpredictable employment conditions (Cassidy,
2003). The dot-com crisis had no clearly defined ending. Many of the unproven startups were
bought by larger existing businesses, which only delayed realizing their losses. Many companies
whose wealth had rapidly accumulated lost it equally quickly. Several of the
companies that grew substantially while maintaining reasonable expenses actually survived the
crisis and the craze to provide valuable services to their customers: eBay, Amazon, and even
Priceline (Cassidy, 2003).
II. Data Analysis
1 Data Selection
The NASDAQ index is a stock market index of the common stocks and similar securities listed on
the NASDAQ stock market, comprising over 3,000 components. It is closely followed in the
U.S. as an indicator of the performance of technology and growth companies, and at the time most
dot-com companies were listed on the NASDAQ, so the index clearly illustrates the dot-com bubble.
The data sample ranges from 1/1/1990 to 12/31/2004; the data from 1/1/1990 to 1/1/1992 is used for
model building and the rest is used to analyze the dot-com crisis. We collected the data from
www.finance.yahoo.com.
2 Risk Models
2.1 Historical Simulation
To start the analysis of the financial crisis, we first build a historical simulation model, in which
today's VaR is simply a chosen percentile of the historical returns. (Please see Appendix 1 and the
Excel file for procedure details.) The results are shown in the figure below.
The reaction of this model is relatively slow compared with the other models we develop later.
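A minimal Python sketch of this calculation, as a stand-in for the project's Excel workbook (the 500-day window and the simulated returns are assumptions for illustration only):

```python
import numpy as np

def historical_var(returns, p=0.01, window=500):
    """1-day historical-simulation VaR: the (100*p)th percentile of the
    last `window` returns, reported as a positive loss number."""
    recent = np.asarray(returns)[-window:]
    return -np.percentile(recent, 100 * p)

def historical_es(returns, p=0.01, window=500):
    """Expected shortfall: average loss on days when the loss exceeds the VaR."""
    recent = np.asarray(returns)[-window:]
    var = -np.percentile(recent, 100 * p)
    tail = recent[recent < -var]
    return -tail.mean() if tail.size else var

# Example with simulated daily returns standing in for the NASDAQ series.
rng = np.random.default_rng(0)
r = rng.normal(0, 0.015, 2000)
print(historical_var(r), historical_es(r))
```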
2.2 RiskMetrics Model
We develop the RiskMetrics model by first estimating its parameter, using the historical data from
1990 to 1992. The parameter estimate is shown below.
RiskMetrics: Maximum Likelihood Estimation
Initial value: λ = 0.94
Estimate: λ = 0.940090181
Log likelihood: 9611.838209
[Figure: VaR 1%]
The estimated value of the parameter is very close to 0.94, which is also the value applied by
J.P. Morgan. After estimating the parameter, we must choose a distribution for the returns. We start
with the normal distribution; because the t distribution and the filtered historical simulation are
sometimes better choices, we develop all three. (Please see Appendix 1 and the Excel file for
procedure details.)
The results of the model are shown below; to illustrate, we show only the normal-distribution case.
We can also use the RiskMetrics model to compute the daily Expected Shortfall. The result for the
normal distribution follows. (For the t-distribution, please see the Excel file.)
[Figure: daily VaR under the RiskMetrics model (normal distribution)]
[Figure: daily ES under the RiskMetrics model (normal distribution)]
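A minimal Python sketch of the RiskMetrics VaR and ES calculation under the normal distribution (a stand-in for the Excel workbook; initializing the variance with the sample variance is our assumption):

```python
import numpy as np
from scipy.stats import norm

def riskmetrics_var_es(returns, lam=0.94, p=0.01):
    """EWMA variance sigma2_{t+1} = lam*sigma2_t + (1-lam)*R_t^2,
    then VaR and ES assuming normally distributed returns."""
    r = np.asarray(returns)
    sigma2 = np.empty(len(r) + 1)
    sigma2[0] = r.var()                      # initialize with the sample variance
    for t in range(len(r)):
        sigma2[t + 1] = lam * sigma2[t] + (1 - lam) * r[t] ** 2
    sigma = np.sqrt(sigma2[1:])              # one-day-ahead volatility forecasts
    var = -sigma * norm.ppf(p)               # about 2.33 * sigma at p = 1%
    es = sigma * norm.pdf(norm.ppf(p)) / p   # normal expected shortfall
    return var, es
```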
2.3 GARCH Model
Similar to the RiskMetrics model, we can use the GARCH model to obtain the VaR and Expected
Shortfall. The parameters are estimated by maximum likelihood. The results are shown below.
GARCH: Maximum Likelihood Estimation
Starting values: α = 0.0500000, β = 0.8000000, ω = 1.500E-06
Estimates: α = 0.1171359, β = 0.8828641, ω = 1.421E-06
Log likelihood: 9624.001
Persistence (α + β): 1.0000000
With these parameters, we can build the GARCH model to compute the VaR and ES values. Again,
we can apply the normal distribution, the t distribution, or the Filtered Historical Simulation as the
distribution of the model. The result for the VaR under the normal distribution is shown below. (For the
t-distribution and Filtered Historical Simulation, please see the Excel file.)
[Figure: daily VaR under the GARCH model (normal distribution)]
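A sketch of how the maximum likelihood estimation could be carried out outside Excel; the starting values mirror those in the table above, but the use of `scipy.optimize.minimize` and the parameter bounds are our choices, not the project's:

```python
import numpy as np
from scipy.optimize import minimize

def garch_neg_loglik(params, r):
    """Negative Gaussian log-likelihood of a GARCH(1,1):
    sigma2_{t+1} = omega + alpha*R_t^2 + beta*sigma2_t."""
    omega, alpha, beta = params
    sigma2 = np.empty(len(r))
    sigma2[0] = r.var()
    for t in range(1, len(r)):
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    return 0.5 * np.sum(np.log(2 * np.pi * sigma2) + r ** 2 / sigma2)

def fit_garch(r, start=(1.5e-6, 0.05, 0.80)):
    # start values follow the report's table, in the order (omega, alpha, beta)
    bounds = [(1e-12, None), (0.0, 1.0), (0.0, 1.0)]
    res = minimize(garch_neg_loglik, start, args=(np.asarray(r),),
                   bounds=bounds, method="L-BFGS-B")
    return res.x, -res.fun   # parameter estimates and maximized log likelihood
```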
2.4 NGARCH Model
Similar to the GARCH model, we can use the NGARCH model to obtain the VaR and Expected
Shortfall. The parameters are estimated by maximum likelihood. The results are shown below.
NGARCH: Maximum Likelihood Estimation
Starting values: α = 0.0500000, β = 0.8000000, ω = 1.500E-06, θ = 1.2500000
Estimates: α = 0.1041919, β = 0.7916445, ω = 4.359E-06, θ = 0.9998649
Log likelihood: 9613.38
Persistence: 1.000000
With these parameters, we can build the NGARCH model to compute the VaR and ES values. Again,
we can apply the normal distribution, the t distribution, or the Filtered Historical Simulation as the
distribution of the model. The VaR comparison under the normal distribution is shown below. (For the
t-distribution and Filtered Historical Simulation, please see the Excel file.)
[Figure: VaR comparison under the NGARCH model (VaR and FHS VaR)]
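A sketch of the NGARCH variance filter (our variable names); estimation would proceed exactly as in the GARCH sketch above, with θ added to the parameter vector:

```python
import numpy as np

def ngarch_variance(r, omega, alpha, beta, theta):
    """NGARCH(1,1) filter: sigma2_{t+1} = omega + alpha*(R_t - theta*sigma_t)**2
    + beta*sigma2_t, so negative returns raise variance more than positive
    returns of the same size when theta > 0."""
    r = np.asarray(r)
    sigma2 = np.empty(len(r) + 1)
    sigma2[0] = r.var()
    for t in range(len(r)):
        sigma2[t + 1] = (omega
                         + alpha * (r[t] - theta * np.sqrt(sigma2[t])) ** 2
                         + beta * sigma2[t])
    return sigma2

# Consistency check with the fitted values reported above:
# alpha*(1 + theta**2) + beta = 0.1041919*(1 + 0.9998649**2) + 0.7916445 ~ 1.0
```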
3. Back Testing
To validate these models, we apply back testing techniques. First, we test the independence of the
hit sequence. Then we perform a coverage test, which compares the realized fraction of hits with the
theoretical value indicated by the VaR model. Finally, we test independence and coverage jointly by
combining both tests. The results for the RiskMetrics and GARCH models are listed below. (For
procedure details, please see Appendix 1 and the Excel file.)
Hypothesis Testing (Chi-Square Test), significance level = 10%

LRuc:  RiskMetrics: Reject VaR model; Don't reject VaR model.  GARCH: Reject VaR model; Reject VaR model.
LRind: RiskMetrics: Don't reject VaR model; Reject VaR model.  GARCH: Reject VaR model; Don't reject VaR model.
LRcc:  RiskMetrics: Reject VaR model; Don't reject VaR model.  GARCH: Don't reject VaR model; Don't reject VaR model.
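A sketch of the unconditional coverage (LRuc) and independence (LRind) likelihood-ratio statistics in the style described above; the hit series is assumed to equal 1 on days when the loss exceeds the VaR, and the implementation details (including the assumption that all hit/no-hit transition counts are positive) are ours:

```python
import numpy as np
from scipy.stats import chi2

def lr_uc(hits, p):
    """Unconditional coverage: observed hit rate vs. the promised coverage p.
    Assumes at least one hit and at least one non-hit."""
    hits = np.asarray(hits)
    T, T1 = len(hits), hits.sum()
    pi = T1 / T
    log_l_p = (T - T1) * np.log(1 - p) + T1 * np.log(p)
    log_l_pi = (T - T1) * np.log(1 - pi) + T1 * np.log(pi)
    return -2 * (log_l_p - log_l_pi)            # ~ chi2(1) under the null

def lr_ind(hits):
    """Independence: does a hit today change the chance of a hit tomorrow?
    Assumes all four transition counts are positive."""
    h = np.asarray(hits)
    h0, h1 = h[:-1], h[1:]
    t00 = np.sum((h0 == 0) & (h1 == 0)); t01 = np.sum((h0 == 0) & (h1 == 1))
    t10 = np.sum((h0 == 1) & (h1 == 0)); t11 = np.sum((h0 == 1) & (h1 == 1))
    p01, p11 = t01 / (t00 + t01), t11 / (t10 + t11)
    p1 = (t01 + t11) / (t00 + t01 + t10 + t11)
    log_l_ind = (t00 * np.log(1 - p01) + t01 * np.log(p01)
                 + t10 * np.log(1 - p11) + t11 * np.log(p11))
    log_l_1 = (t00 + t10) * np.log(1 - p1) + (t01 + t11) * np.log(p1)
    return -2 * (log_l_1 - log_l_ind)           # ~ chi2(1) under the null

# Reject at the 10% level when LRuc or LRind exceeds chi2.ppf(0.90, 1);
# the joint test LRcc = LRuc + LRind is compared with chi2.ppf(0.90, 2).
```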
For the other models, under the t distribution and the filtered historical simulation, we can use
similar tests.
4. Analysis of the Crisis with Risk Models
4.1 Model Selection
To select the best model for this analysis, we use an exceedance analysis for each model. To keep
the illustration simple, we compare the RiskMetrics, GARCH, and NGARCH models under the
normal distribution; we ignore Historical Simulation here because the other models have much
better characteristics. The exceedance is the number of standard deviations by which a negative
return (loss) exceeds the corresponding VaR indicated by the model; if the loss does not exceed the
given VaR, the exceedance is zero. The results are shown below. (For procedure details, please see
Appendix 1 and the Excel file.)
[Figure: exceedances, RiskMetrics model (RM 5%)]
[Figure: exceedances, GARCH model]
[Figure: exceedances, NGARCH model]
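A sketch of the exceedance measure just described (hypothetical function and variable names):

```python
import numpy as np

def exceedances(returns, var, sigma):
    """For each day: how many standard deviations the loss exceeds that day's
    VaR by, or zero if the loss stays within the VaR."""
    returns, var, sigma = map(np.asarray, (returns, var, sigma))
    excess = -returns - var          # positive when the loss is larger than the VaR
    return np.where(excess > 0, excess / sigma, 0.0)
```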
The exceedance analysis shows that the NGARCH model is the best of the three: its exceedance
frequency is closest to the 5% indicated by the model and the distribution of hits is relatively
random. Therefore, we use this model to analyze what can be learned from the crisis.
4.2 Crisis Analysis
First, the return and price figures for the crisis period give basic information about the crisis. From
the historical data, we have the figures below.
[Figure: NASDAQ adjusted close prices]
From these figures, we can see that the market showed large volatility during 1999-2001 and that
the price peak appears in 2000. We then look at the VaR indicated by the NGARCH model.
4.2.1 Information Before the Crisis
We find large VaRs from 1998 to 1999. This is a signal of large market volatility, which we can read
as a signal of irrational market behavior. As risk managers, we can therefore detect this signal
before the crisis, before the large losses arrive.
[Figure: NASDAQ daily returns]
[Figure: daily VaR under the NGARCH model]
4.2.2 Information During the Crisis
During the crisis of 1999-2001, the 1% VaR gets as high as 16%. Even worse, the largest
exceedance of the VaR is about 5 standard deviations under the NGARCH model and about 4
standard deviations under the GARCH model.
The technical adjustment we can apply here is to re-estimate the model, or its parameters, more
frequently; otherwise the risk management system cannot work well during this period.
4.2.3 Information After the Crisis
After 2001, market volatility decreases slowly and finally becomes stable after 2002. This can be
interpreted as a signal that the market has returned to a normal state. At this point, risk managers
can adjust their models so that they no longer produce overly conservative strategies.
III Stress Testing
The reason we perform stress testing is that most risk management work is short of data: available
historical data may not fully reflect the potential risks going forward. For example, the available data
may lack extreme events, such as an equity market crash, which occur very infrequently.
Since the portfolio consists of eBay and Amazon, the first step of the stress test is to use Excel's
Solver to find the weight of each asset that minimizes the unconditional portfolio VaR. As the Excel
results show, the weight of eBay is 0.53 and the weight of Amazon is 0.47. The resulting portfolio is
shown below (from the Excel file).
Minimum unconditional VaR portfolio
eBay: variance 0.0024956, weight 0.5302198, VaR 0.0846235
Amazon: variance 0.0028244, weight 0.4697802, VaR 0.0847399
Portfolio (unconditional): covariance -6.024E-05, variance 0.0012949, VaR 0.0837137, correlation -0.0226896
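For two assets, the Solver step has a closed-form counterpart: minimizing the unconditional normal VaR at a fixed coverage level is equivalent to minimizing the portfolio variance. The sketch below reproduces the weights in the table from the sample variances and covariance shown above (up to the rounding of the displayed inputs):

```python
# Closed-form minimum-variance weights for a two-asset portfolio.
var_ebay, var_amzn = 0.0024956, 0.0028244
cov = -6.024e-05

w_ebay = (var_amzn - cov) / (var_ebay + var_amzn - 2 * cov)
w_amzn = 1 - w_ebay
port_var = (w_ebay**2 * var_ebay + w_amzn**2 * var_amzn
            + 2 * w_ebay * w_amzn * cov)

print(w_ebay, w_amzn, port_var)  # ~ 0.530, 0.470, 0.00129 (matches the Solver output)
```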
Next, under the RiskMetrics model, we can calculate the portfolio returns and variance over the
period for which we collected data (see Appendix). We select January 2002 to June 2004 as the
normal period and take a 60-day window; we select January 1999 to December 2000 as the crisis
period and take another 60-day window.
The first chart below shows the daily VaR during the testing period using the historical data
collected in the normal period. As we can see, the daily VaR (the blue line) remains stable at a level
of around 8%, with a maximum of roughly 10% and a minimum of roughly 6%; there is not much
difference between the points in the series.
The second chart shows the crisis daily VaR together with the correlated daily VaR.
[Figure: daily portfolio VaR, normal period ("Normal Time")]
[Figure: daily portfolio VaR, crisis period (VaR-Crisis and VaR-Crisis-Cor=1)]
The VaR with the higher correlation has higher values, although the shapes of the two series are
very similar. Compared with the daily VaR under the first scenario, the crisis daily VaR is somewhat
higher. As we can see, before September 2004 the red line decreases rapidly and sits well above
the blue line: without the diversification benefit, the unconditional VaR of a portfolio holding eBay
alone (weight equal to one) becomes much larger. The red line's rate of decrease slows over time,
however, and after September 2004 it falls below the blue line; we believe this is because the other
asset in the portfolio, Amazon, is riskier than eBay.
Finally, we can use filtered historical simulation over different time periods for the different scenarios
to calculate the daily VaR, assuming the normal distribution for both stocks.
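One way the correlated series ("VaR-Crisis-Cor=1") can be produced is to recompute the portfolio variance with the covariance forced to correlation one while keeping each asset's own variance; the sketch below (our function and parameter names) illustrates the idea:

```python
import numpy as np
from scipy.stats import norm

def portfolio_var(sigma2_a, sigma2_b, cov_ab, w_a, w_b, p=0.01, rho_stress=None):
    """1-day portfolio VaR from the two assets' variances and covariance.
    If rho_stress is given, the covariance is overridden with
    rho_stress * sigma_a * sigma_b, e.g. rho_stress=1 for the stressed series."""
    sigma_a, sigma_b = np.sqrt(sigma2_a), np.sqrt(sigma2_b)
    if rho_stress is not None:
        cov_ab = rho_stress * sigma_a * sigma_b
    total_var = w_a**2 * sigma2_a + w_b**2 * sigma2_b + 2 * w_a * w_b * cov_ab
    return -np.sqrt(total_var) * norm.ppf(p)
```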
IV Conclusions and Recommendations
A substantial amount of benefit occurred as the internet grew in the late
1990's. The internet did create substantial benefits for its users: time savings, increased
communications, and new learning opportunities. However, during this time there was also a
substantial amount of volatility, confusion, and waste. To support growth, even "excited" growth,
while reducing "irrational exuberance," the government could enact a tiered capital gains tax, for
example taxing capital gains at 20% and taxing capital gains above
$1,000,000 at 50%. This would reduce the incentive for individual investors to seek large gains
(Pascal, 2000).
Alternatively, the government could create a tax policy that triggers a transaction tax if the
market grows too quickly or if volatility reaches a certain level. Both of these measures would create
a natural "brake" for the economy should stocks start to trade too quickly. Additionally, the
government could create a system in which new (especially unproven) companies that experienced
substantial capital gains would also face substantial taxes not applicable to larger companies.
Such a mechanism would encourage companies to carefully consider the demand for their stock and
price the IPO appropriately: an IPO that is priced according to its value should maintain a constant
price rather than experience wild price swings (Rydqvist, 1997).
The dot-com boom showed that new technologies, while exciting, can have both
costs and benefits. The benefits of technology include time savings, money savings, increased
interaction, and new opportunities. Time and money savings can sometimes be difficult to quantify,
and the benefit of a better-quality interaction or a new opportunity is usually very difficult to
quantify. This can add substantial volatility to the market, creating risk for investors and calling
underlying assumptions into question. A general rule is to look for logical errors in underlying
assumptions (for example, that demand for the internet would necessarily translate into revenue
for new internet companies) and to create stress tests for different circumstances. Although
investing in any market necessarily involves risk, a well-managed portfolio works hard to find
unnecessary risk and adjust itself accordingly.
References
Adamic, L. A., & Huberman, B. A. (1999, September). Internet Growth Dynamics of the World-Wide Web. Nature, 401.
Barboza, D. (1999, February 1). Small On-Line Brokers Raise Share of Trades. New York Times.
BBC. (2002, March 12). Dot-Com Timeline. London, England. Retrieved from http://news.bbc.co.uk/2/hi/business/1869544.stm
Cassidy, J. (2003). Dot.con: How America Lost Its Mind and Its Money in the Internet Era. New York: Harper Perennial.
Dai, Z., Shackelford, D. A., & Zhang, H. H. (2013, September). Does Financial Constraint Affect the Relation between Shareholder Taxes and the Cost of Equity Capital? The Accounting Review, 88(5).
Honan, M., & Leckart, S. (2010, February 10). 10 Years After: A Look Back at the Dotcom Boom and Bust. Wired.
Picker, L., & Levy, A. (2014, March 6). IPO Dot-Com Bubble Echo Seen Muted as Older Companies Debut. Bloomberg.com.
Simms, M. (1999, February). The History: How Nasdaq Was Born. Traders Magazine.
Toronto Star. (1995, August 10). Netscape skyrockets at launch: Huge demand for shares of Internet software firm. Toronto, Canada.
US Census Bureau. (2013). Computer and Internet Use in the United States. Washington, DC: US Department of Commerce.
Week, P. B. (1996, April 15). On Wall Street: Net Directories Get Boost from Stock Offerings Lycos, Excite, and Yahoo!
Yahoo! Finance. (n.d.). NASDAQ Composite (^IXIC).
Appendix 1: Risk Modeling Techniques
Historical Simulation
The HS technique is the simplest way to calculate the VaR and ES (Expected Shortfall). It assumes
that the distribution of tomorrow's portfolio return, RPF,t+1, is well approximated by the empirical
distribution of the past m observations, i.e., by a past sequence of m daily hypothetical (pseudo)
portfolio log returns calculated from past prices of the underlying assets but using today's portfolio
weights.
The VaR with coverage rate p is then simply the 100·p-th percentile of the sequence of past portfolio
returns. In Excel, we sort the returns in ascending order and choose the VaR to be (minus) the return
such that only 100·p% of the observations are smaller.
Expected shortfall is an alternative risk measure: the expected return given that the return falls below
(minus) the VaR. The standard 1-day formulas are given below.
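In the textbook notation used here (our reconstruction of the standard forms):

\mathrm{VaR}^{p}_{t+1} = -\,\mathrm{Percentile}\big(\{R_{PF,t+1-\tau}\}_{\tau=1}^{m},\,100p\big),
\qquad
\mathrm{ES}^{p}_{t+1} = -\,E\big[R_{PF,t+1} \,\big|\, R_{PF,t+1} < -\mathrm{VaR}^{p}_{t+1}\big].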
Weighted Historical Simulation
The WHS technique improves on HS by relieving the tension in the choice of sample size m: we
assign relatively more weight to the most recent observations and relatively less weight to returns
further in the past.
In WHS, the sample of m past hypothetical returns is assigned probability weights that decline
exponentially through the past (the weight formula is given below). After weighting the returns in this
way, we repeat the HS procedure to get the VaR.
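One common form of the exponentially declining weights (with decay factor \eta, our notation; the project workbook may use a different symbol) is

w_{\tau} = \eta^{\tau-1}\,\frac{1-\eta}{1-\eta^{m}}, \qquad \tau = 1,\dots,m,

which sums to one and collapses to the equal weights 1/m of simple HS as \eta \to 1.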
RiskMetrics Model
The RiskMetrics system considers a variance model in which the weights on past squared returns
decline exponentially as we move backward in time (the recursion is given below). The RiskMetrics
forecast of tomorrow's variance is thus a weighted average of today's variance and today's squared
return.
RiskMetrics tracks variance changes in a way that is broadly consistent with observed returns.
RiskMetrics found that the estimates of the decay parameter were quite similar across assets, and
therefore simply set λ = 0.94 for every asset for daily variance forecasting.
In Excel, once we have today's variance and today's return, we apply the recursion to calculate
tomorrow's variance.
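The recursion, in its standard RiskMetrics form, is

\sigma^{2}_{t+1} = \lambda\,\sigma^{2}_{t} + (1-\lambda)\,R^{2}_{t}, \qquad \lambda = 0.94.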
GARCH Model
The GARCH model is the simplest model that captures the important features of return data while
remaining flexible enough to accommodate specific aspects of individual assets (the GARCH(1,1)
recursion is given below).
Note that the RiskMetrics model can be viewed as a special case of the simple GARCH model with
α = 1 − λ, β = λ, and ω = 0. The important advantage of GARCH is that it incorporates the fact that
the long-run average variance tends to be relatively stable over time.
In Excel, we use maximum likelihood estimation to estimate the three parameters ω, α, and β. Then,
with today's return and today's variance, we can compute tomorrow's variance.
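For reference, the standard GARCH(1,1) recursion is

\sigma^{2}_{t+1} = \omega + \alpha R^{2}_{t} + \beta\,\sigma^{2}_{t},

with the long-run variance equal to \omega/(1-\alpha-\beta) when \alpha+\beta < 1.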
NGARCH Model
The NGARCH (nonlinear GARCH) model is an extension of the GARCH model. It captures the
leverage effect: a negative return increases variance by more than a positive return of the same
magnitude. The GARCH model is modified so that the weight given to the return depends on whether
the return is positive or negative (the recursion is given below), which is sometimes referred to as the
NGARCH model.
Strictly speaking, it is a positive piece of news, zt > 0, rather than the raw return Rt, that has less
impact on variance than a negative piece of news when θ > 0.
In Excel, we use maximum likelihood estimation to estimate the parameters ω, α, β, and θ. Then,
with today's return and today's variance, we can compute tomorrow's variance.
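The standard NGARCH(1,1) recursion is

\sigma^{2}_{t+1} = \omega + \alpha\,(R_{t} - \theta\,\sigma_{t})^{2} + \beta\,\sigma^{2}_{t},

with variance persistence \alpha(1+\theta^{2}) + \beta, which is consistent with the persistence value reported in Section 2.4.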
Filtered Historical Simulation
The FHS approach attempts to combine the best of the model-based approaches with the best of the
model-free approaches in a very intuitive fashion.
Using a GARCH model for the variance, we estimate the model on the sequence of past returns and
calculate past standardized returns from the observed returns and the estimated standard
deviations. The VaR and ES are then computed from the percentile and tail average of these
standardized returns, scaled by tomorrow's volatility forecast (see below).
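In this notation the FHS quantities are, in their standard form,

\hat z_{\tau} = R_{\tau}/\sigma_{\tau}, \qquad
\mathrm{VaR}^{p}_{t+1} = -\,\sigma_{t+1}\,\mathrm{Percentile}\big(\{\hat z_{\tau}\}_{\tau=1}^{m},\,100p\big),

with the ES obtained by averaging the standardized returns below that percentile and scaling by -\sigma_{t+1}.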
t-Distribution
The t-distribution can capture the most important deviations from normality. It is defined by a
standardized Student-t density whose degrees-of-freedom parameter, d, is the only parameter that
we need to estimate by maximum likelihood.
The VaR and ES are then obtained from the t quantile scaled by tomorrow's volatility (see below).
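A standard form of the t-based VaR, with d the estimated degrees of freedom and t^{-1}_{d}(p) the p-quantile of the Student-t distribution, is

\mathrm{VaR}^{p}_{t+1} = -\,\sigma_{t+1}\,\sqrt{\tfrac{d-2}{d}}\;t^{-1}_{d}(p),

and the ES is the corresponding expected value of the return below -\mathrm{VaR}^{p}_{t+1}.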
Monte Carlo Simulation
Monte Carlo simulation relies on artificial random numbers to simulate hypothetical daily returns from
day t+1 to day t+K.
In MCS, we first use the GARCH model to obtain tomorrow's variance. Then, using a random number
generator, we generate a set of artificial standard normal random numbers, from which we calculate a
set of hypothetical returns for tomorrow. Given these hypothetical returns, we update the variance to
get a set of hypothetical variances for the day after tomorrow, t+2. Then, given a new set of random
numbers drawn from the N(0,1) distribution, we calculate a set of hypothetical returns for day t+2 and
the corresponding variances. After repeating these steps K times, we have simulated hypothetical
daily return paths from day t+1 to day t+K (see the recursion below).
The VaR and ES of the Monte Carlo simulation are then calculated from the percentile and tail
average of the simulated K-day returns.
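The simulation step can be summarized (standard form; MC denotes the number of simulated paths) as

\check R_{t+k,i} = \sigma_{t+k,i}\,z_{k,i}, \qquad
\check\sigma^{2}_{t+k+1,i} = \omega + \alpha\,\check R^{2}_{t+k,i} + \beta\,\check\sigma^{2}_{t+k,i},
\qquad z_{k,i} \sim N(0,1),

for k = 1,\dots,K and i = 1,\dots,MC, with the K-day VaR and ES taken from the percentile and tail mean of the simulated cumulative returns \sum_{k}\check R_{t+k,i}.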
Appendix 2—Figures from the risk models
(1) RiskMetrics
Normal Distribution
[Figure: 1% and 5% VaR]
[Figure: 1% and 5% ES]
t-Distribution
[Figure: 1% VaR and ES]
[Figure: 5% VaR and ES]
(2) GARCH
Normal Distribution
[Figure: 1% and 5% VaR (two charts)]
[Figure: 1% and 5% ES]
t-Distribution
[Figure: 1% VaR and ES]
[Figure: 5% VaR and ES]
(3) NGARCH
Normal Distribution
[Figure: 1% and 5% VaR (two charts)]
[Figure: 1% and 5% ES]
t-Distribution
[Figure: 1% VaR and ES]
[Figure: 5% VaR and ES]
[Figure: 1% and 5% VaR]