TEACHING SUGGESTIONS
Teaching Suggestion 4.1: Which Is the Independent Variable?
We find that students are often confused about which variable is
independent and which is dependent in a regression model. For
example, in Triple A’s problem, clarify which variable is X and
which is Y. Emphasize that the dependent variable (Y) is what we
are trying to predict based on the value of the independent (X)
variable. Use examples such as the time required to drive to a store
and the distance traveled, the total number of units sold and the
selling price of a product, and the cost of a computer and the
processor speed.
Teaching Suggestion 4.2: Statistical Correlation Does Not
Always Mean Causality.
Students should understand that a high R² doesn’t always mean
one variable will be a good predictor of the other. Explain that
skirt lengths and stock market prices may be correlated, but raising
one doesn’t necessarily mean the other will go up or down. An
interesting study indicated that, over a 10-year period, the salaries
of college professors were highly correlated to the dollar sales volume
of alcoholic beverages (both were actually correlated with
inflation).
Teaching Suggestion 4.3: Give students a set of data and have
them plot the data and manually draw a line through the data. A
discussion of which line is “best” can help them appreciate the
least squares criterion.
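If the class has access to Python, the comparison can be made concrete: compute the SSE of a plausible hand-drawn line and of the least-squares line. The data points below are made up for illustration.

```python
# Compare the sum of squared errors (SSE) of a hand-drawn line
# against the least-squares line. The data points are hypothetical.
xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]

def sse(b0, b1):
    """Sum of squared vertical errors for the line Y-hat = b0 + b1*X."""
    return sum((y - (b0 + b1 * x)) ** 2 for x, y in zip(xs, ys))

# Least-squares slope and intercept from the standard formulas.
n = len(xs)
x_bar, y_bar = sum(xs) / n, sum(ys) / n
b1 = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / \
     sum((x - x_bar) ** 2 for x in xs)
b0 = y_bar - b1 * x_bar

print(f"least-squares line: Y-hat = {b0:.3f} + {b1:.3f}X, SSE = {sse(b0, b1):.4f}")
print(f"eyeballed line:     Y-hat = 1 + 1.8X, SSE = {sse(1, 1.8):.4f}")
```

No matter which candidate line students propose, its SSE will be at least as large as that of the least-squares line.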
Teaching Suggestion 4.4: Select some randomly generated values
for X and Y (you can use random numbers from the random
number table in Chapter 15 or use the RAND function in Excel).
Develop a regression line using Excel and discuss the coefficient
of determination and the F-test. Students will see that a regression
line can always be developed, but it may not necessarily be useful.
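A quick sketch of this demonstration, using Python's random module in place of Excel's RAND (an assumption about the tools available):

```python
# Regress one column of random numbers on another: a fitted line always
# exists, but r^2 is typically near zero, so the line is not useful.
import random

random.seed(42)  # fixed seed for a reproducible classroom demo
xs = [random.random() for _ in range(30)]
ys = [random.random() for _ in range(30)]

n = len(xs)
x_bar, y_bar = sum(xs) / n, sum(ys) / n
sxx = sum((x - x_bar) ** 2 for x in xs)
sxy = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
syy = sum((y - y_bar) ** 2 for y in ys)

b1 = sxy / sxx
b0 = y_bar - b1 * x_bar
r_squared = sxy ** 2 / (sxx * syy)  # equivalent to SSR/SST for one X

print(f"fitted line: Y-hat = {b0:.3f} + {b1:.3f}X")
print(f"r^2 = {r_squared:.3f}")
```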
Teaching Suggestion 4.5: A discussion of the long formulas and
short-cut formulas that are provided in the appendix is helpful.
The long formulas provide students with a better understanding
of the meaning of the SSE and SST. Since many people use
computers for regression problems, it helps to see the original
formulas. The short-cut formulas are helpful if students are
performing the computations on a calculator.
ALTERNATIVE EXAMPLES
Alternative Example 4.1: The sales manager of a large apartment
rental complex feels the demand for apartments may be related
to the number of newspaper ads placed during the previous month.
She has collected the data shown in the accompanying table.
Ads purchased, (X) Apartments leased, (Y)
15 6
9 4
40 16
20 6
25 13
25 9
15 10
35 16
We can find a mathematical equation by using the least squares
regression approach.
Leases, Y   Ads, X   (X − X̄)²   (X − X̄)(Y − Ȳ)
 6          15        64          32
 4           9       196          84
16          40       289         102
 6          20         9          12
13          25         4           6
 9          25         4          −2
10          15        64           0
16          35       144          72
ΣY = 80   ΣX = 184   Σ(X − X̄)² = 774   Σ(X − X̄)(Y − Ȳ) = 306

b1 = 306/774 = 0.395
b0 = 10 − 0.395(23) = 0.915
The estimated regression equation is
Ŷ = 0.915 + 0.395X
or
Apartments leased = 0.915 + 0.395 × (ads placed)
If the number of ads is 30, we can estimate the number of apartments
leased with the regression equation:
0.915 + 0.395(30) = 12.765, or about 13 apartments
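The least-squares computation above can be checked with a short script; note that carrying the slope at full precision gives b0 ≈ 0.907 and a prediction of about 12.77, slightly different from the hand-rounded values.

```python
# Least-squares fit for the apartment-leasing data in Alternative Example 4.1.
ads    = [15, 9, 40, 20, 25, 25, 15, 35]   # X: ads purchased
leases = [ 6, 4, 16,  6, 13,  9, 10, 16]   # Y: apartments leased

n = len(ads)
x_bar = sum(ads) / n      # 23.0
y_bar = sum(leases) / n   # 10.0

sxy = sum((x - x_bar) * (y - y_bar) for x, y in zip(ads, leases))  # 306.0
sxx = sum((x - x_bar) ** 2 for x in ads)                           # 774.0

b1 = sxy / sxx            # 0.3953...
b0 = y_bar - b1 * x_bar   # 0.907 at full precision (0.915 with the rounded slope)

print(f"Y-hat = {b0:.3f} + {b1:.3f}X")
print(f"predicted leases for 30 ads: {b0 + b1 * 30:.2f}")
```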
Alternative Example 4.2: Given the data on ads and apartment
rentals in Alternative Example 4.1, find the coefficient of determination.
With Ȳ = 80/8 = 10 and X̄ = 184/8 = 23, the following have been
computed in the table that follows:
SST = 150; SSE = 29.02; SSR = 120.76
(Note: Round-off error may cause this to be slightly different from
a computer solution.)
CHAPTER 4: Regression Models
M04_REND6289_10_IM_C04.QXD 5/7/08 2:49 PM Page 46
REVISED
From this the coefficient of determination is
r² = SSR/SST = 120.76/150 = 0.81
Alternative Example 4.3: For Alternative Examples 4.1 and 4.2,
dealing with ads, X, and apartments leased, Y, compute the correlation
coefficient.
Since r² = 0.81 and the slope is positive (+0.395), the positive
square root of 0.81 is the correlation coefficient: r = 0.90.
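The SST, SSE, SSR decomposition and the correlation coefficient can likewise be checked at full precision:

```python
# Coefficient of determination and correlation for the apartment data
# (Alternative Examples 4.2 and 4.3), carried at full precision.
ads    = [15, 9, 40, 20, 25, 25, 15, 35]
leases = [ 6, 4, 16,  6, 13,  9, 10, 16]

n = len(ads)
x_bar, y_bar = sum(ads) / n, sum(leases) / n
b1 = sum((x - x_bar) * (y - y_bar) for x, y in zip(ads, leases)) / \
     sum((x - x_bar) ** 2 for x in ads)
b0 = y_bar - b1 * x_bar

sst = sum((y - y_bar) ** 2 for y in leases)                      # 150.0
sse = sum((y - (b0 + b1 * x)) ** 2 for x, y in zip(ads, leases))
ssr = sst - sse

r_squared = ssr / sst
r = r_squared ** 0.5      # positive root, since the slope is positive

print(f"SST = {sst:.2f}, SSE = {sse:.2f}, SSR = {ssr:.2f}")
print(f"r^2 = {r_squared:.2f}, r = {r:.2f}")
```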
SOLUTIONS TO DISCUSSION QUESTIONS
AND PROBLEMS
4-1. The term least-squares means that the regression line will
minimize the sum of the squared errors (SSE). No other line will
give a lower SSE.
4-2. Dummy variables are used when a qualitative factor such
as the gender of an individual (male or female) is to be included in
the model. Usually this is given a value of 1 when the condition is
met (e.g., the person is male) and 0 otherwise. When there are more
than two levels or values for the qualitative factor, more than one
dummy variable must be used. The number of dummy variables is
one less than the number of possible values or categories. For example,
if students are classified as freshmen, sophomores, juniors,
and seniors, three dummy variables would be necessary.
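A minimal sketch of this coding scheme for the class-standing example (the category names and the choice of freshman as the baseline are illustrative):

```python
# One-less-than-the-number-of-categories dummy coding for class standing.
# With four categories, three dummies are needed; the baseline category
# ("freshman" here) is represented by all zeros.
LEVELS = ["sophomore", "junior", "senior"]  # baseline: "freshman"

def dummies(standing):
    """Return the three 0/1 dummy variables for a student's class standing."""
    return [1 if standing == level else 0 for level in LEVELS]

print(dummies("freshman"))   # [0, 0, 0]  -- baseline category
print(dummies("junior"))     # [0, 1, 0]
```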
4-3. The coefficient of determination (r²) is the square of the
coefficient of correlation (r). Both of these give an indication of
how well a regression model fits a particular set of data. An r²
value of 1 would indicate a perfect fit of the regression model to
the points. This would also mean that r would equal −1 or +1.
4-4. A scatter diagram is a plot of the data. This graphical
image helps to determine if a linear relationship is present, or if
another type of relationship would be more appropriate.
4-5. The adjusted r² value is used to help determine if a new
variable should be added to a regression model. Generally, if the
adjusted r² value increases when a new variable is added to a
model, this new variable should be included in the model. If the
adjusted r² value declines or does not increase when a new variable
is added, then the variable should not be added to the model.

[Figure for Problem 4-10: scatter plot of Demand (Y) versus TV Appearances (X).]
4-6. The F-test is used to determine if the overall regression
model is helpful in predicting the value of the dependent variable
(Y). If the F-value is large and the p-value or significance level is
low, then we can conclude that there is a linear relationship and the
model is useful, as these results would probably not occur by
chance. If the significance level is high, then the model is not useful
and the results in the sample could be due to random variations.
4-7. The SSE is the sum of the squared errors in a regression
model. SST = SSE + SSR.
4-8. When the residuals (errors) are plotted after a regression
line is found, the errors should be random and should not show
any significant pattern. If a pattern does exist, then the assumptions
may not be met or another model (perhaps nonlinear) would
be more appropriate.
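A short script illustrates the point: after any least-squares fit the residuals sum to essentially zero, so a residual plot is examined for patterns rather than for a nonzero total. The data here are hypothetical.

```python
# Residuals from a least-squares fit always sum to ~0 by construction;
# a residual plot looks for patterns, not for a nonzero total.
xs = [1, 2, 3, 4, 5, 6]
ys = [1.2, 2.1, 2.8, 4.2, 4.9, 6.1]   # hypothetical data

n = len(xs)
x_bar, y_bar = sum(xs) / n, sum(ys) / n
b1 = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / \
     sum((x - x_bar) ** 2 for x in xs)
b0 = y_bar - b1 * x_bar

residuals = [y - (b0 + b1 * x) for x, y in zip(xs, ys)]
print("residuals:", [round(e, 3) for e in residuals])
print("sum of residuals:", round(sum(residuals), 10))  # ~0
```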
4-9. a. Ŷ = 36 + 4.3(70) = 337
b. Ŷ = 36 + 4.3(80) = 380
c. Ŷ = 36 + 4.3(90) = 423
4-10. a. See the scatter diagram of demand (Y) versus TV
appearances (X) above. (The table that follows completes the
computations for Alternative Example 4.2.)

Y      X       (Y − Ȳ)²   Ŷ = 0.915 + 0.395X   (Y − Ŷ)²   (Ŷ − Ȳ)²
 6.00   15.00   16           6.84                 0.706       9.986
 4.00    9.00   36           4.47                 0.221      30.581
16.00   40.00   36          16.715                0.511      45.091
 6.00   20.00   16           8.815                7.924       1.404
13.00   25.00    9          10.79                 4.884       0.624
 9.00   25.00    1          10.79                 3.204       0.624
10.00   15.00    0           6.84                 9.986       9.986
16.00   35.00   36          14.74                 1.588      22.468
80.00  184.00  SST = 150.00  80.00              SSE = 29.02  SSR = 120.76
M04_REND6289_10_IM_C04.QXD 5/7/08 2:49 PM Page 47
REVISED
3. 48 CHAPTER 4 REGRESSION MODELS
b.
Demand   TV Appearances
Y     X     (X − X̄)²   (Y − Ȳ)²   (X − X̄)(Y − Ȳ)   Ŷ   (Y − Ŷ)²   (Ŷ − Ȳ)²
 3     3     6.25       12.25        8.75            4    1          6.25
 6     4     2.25        0.25        0.75            5    1          2.25
 7     7     2.25        0.25        0.75            8    1          2.25
 5     6     0.25        2.25       −0.75            7    4          0.25
10     8     6.25       12.25        8.75            9    1          6.25
 8     5     0.25        2.25       −0.75            6    4          0.25
ΣY = 39.0   ΣX = 33   17.5   29.5 (SST)   17.5   —   12 (SSE)   17.5 (SSR)
Ȳ = 6.5   X̄ = 5.5

SST = 29.5; SSE = 12; SSR = 17.5
b1 = 17.5/17.5 = 1
b0 = 6.5 − 1(5.5) = 1
The regression equation is Ŷ = 1 + 1X.
c. Ŷ = 1 + 1(6) = 7.
4-11. See the table for the solution to problem 4-10 to obtain
some of these numbers.
MSE = SSE/(n − k − 1) = 12/(6 − 1 − 1) = 3
MSR = SSR/k = 17.5/1 = 17.5
F = MSR/MSE = 17.5/3 = 5.83
df1 = k = 1
df2 = n − k − 1 = 6 − 1 − 1 = 4
F(0.05, 1, 4) = 7.71
Do not reject H0 since 5.83 < 7.71. Therefore, we cannot conclude
there is a statistically significant relationship at the 0.05 level.
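The same test can be scripted; the critical value 7.71 is taken from an F table rather than computed:

```python
# F test for the overall significance of the regression in Problem 4-11.
n, k = 6, 1
sse, ssr = 12.0, 17.5

mse = sse / (n - k - 1)   # = 12/4 = 3.0
msr = ssr / k             # = 17.5
f_stat = msr / mse        # = 5.83

F_CRIT_05_1_4 = 7.71      # from an F table: alpha = 0.05, df = (1, 4)

print(f"F = {f_stat:.2f} vs. critical value {F_CRIT_05_1_4}")
print("reject H0" if f_stat > F_CRIT_05_1_4 else "do not reject H0")
```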
Table for Problem 4-13:

Fin. Ave. (Y)   Test 1 (X)   (X − X̄)²   (Y − Ȳ)²   (X − X̄)(Y − Ȳ)   Ŷ     (Y − Ŷ)²   (Ŷ − Ȳ)²
93              98            285.235    196         236.444          91.5    2.264     156.135
78              77             16.901      1           4.111          76      4.168       9.252
84              88             47.457     25          34.444          84.1    0.009      25.977
73              80              1.235     36           6.667          78.2   26.811       0.676
84              96            221.679     25          74.444          90     36.188     121.345
64              61            404.457    225         301.667          64.1    0.015     221.396
64              66            228.346    225         226.667          67.8   14.592     124.994
95              95            192.901    256         222.222          89.3   32.766     105.592
76              69            146.679      9          36.333          70     35.528      80.291
711             730          1544.9      998        1143                    152.341    845.659

b1 = 1143/1544.9 = 0.740
b0 = (711/9) − 0.740(730/9) = 18.99
4-12. Using Excel, the regression equation is Ŷ = 1 + 1X.
F = 5.83, and the significance level is 0.073. This is significant at the
0.10 level (0.073 < 0.10), but it is not significant at the 0.05 level.
There is marginal evidence of a relationship between
demand for drums and TV appearances.
4-13.
a. Ŷ = 18.99 + 0.74X
b. Ŷ = 18.99 + 0.74(83) = 80.41
c. r² = SSR/SST = 845.659/998 = 0.85; r = 0.92; this
means that 85% of the variability in the final average can
be explained by the variability in the first test score.
[Figure for Problem 4-19: scatter plot of Ridership (100,000s) versus Tourists (Millions).]
4-14. See the table for the solution to problem 4-13 to obtain
some of these numbers.
MSE = SSE/(n − k − 1) = 152.341/(9 − 1 − 1) = 21.76
MSR = SSR/k = 845.659/1 = 845.659
F = MSR/MSE = 845.659/21.76 = 38.9
df1 = k = 1
df2 = n − k − 1 = 9 − 1 − 1 = 7
F(0.05, 1, 7) = 5.59
Because 38.9 > 5.59, we can conclude (at the 0.05 level) that
there is a statistically significant relationship between the first test
grade and the final average.
4-15. F = 38.86; the significance level = 0.0004 (which is extremely
small), so there is definitely a statistically significant
relationship.
4-16. a. Ŷ = 13,473 + 37.65(1,860) = $83,502.
b. The predicted average selling price for a house this
size would be $83,502. Some will sell for more and some
will sell for less. There are other factors besides size that
influence the price of the house.
c. Some other variables that might be included are age of
the house, number of bedrooms, and size of the lot. There
are other factors in addition to these that one can identify.
d. The coefficient of determination (r²) = (0.63)² = 0.3969.
4-17. The multiple regression equation is Ŷ = $90.00 +
$48.50X1 + $0.40X2.
a. Number of days on the road: X1 = 5; distance traveled:
X2 = 300 miles.
The amount he may be expected to claim is
Ŷ = 90.00 + 48.50(5) + 0.40(300) = $452.50
b. The reimbursement request, according to the model,
appears to be too high. However, this does not mean that it is
not justified. The accountants should question Thomas
Williams about his expenses to see if there are other explanations
for the high cost.
c. A number of other variables should be included, such as
the type of travel (air or car), conference fees if any, expenses
for entertainment of customers, and other transportation
(cab and limousine) expenses. In addition, the coefficient of
correlation is only 0.68 and r² = (0.68)² = 0.46. Thus, about
46% of the variability in the cost of the trip is explained by this
model; the other 54% is due to other factors.
4-18. Using computer software to get the regression equation,
we get
Ŷ = 1.03 + 0.0034X
where Ŷ = predicted GPA and X = SAT score.
If a student scores 450 on the SAT, we get
Ŷ = 1.03 + 0.0034(450) = 2.56.
If a student scores 800 on the SAT, we get
Ŷ = 1.03 + 0.0034(800) = 3.75.
4-19. a. A linear model is reasonable from a scatter plot of the data.
b. Ŷ = 5.060 + 1.593X
c. Ŷ = 5.060 + 1.593(10) = 20.99, or 2,099,000 people.
d. If there are no tourists, the predicted ridership would be
5.06 (100,000s), or 506,000. Because X = 0 is outside the
range of values that were used to construct the regression
model, this number may be questionable.
4-20. The F-value for the F-test is 52.6, and the significance
level is extremely small (0.00002), which indicates that there is a
statistically significant relationship between the number of tourists
and ridership. The coefficient of determination is 0.84, indicating
that 84% of the variability in ridership from one year to the next
could be explained by the variations in the number of tourists.
4-21. a. Ŷ = 24,328 + 3026.67X1 + 6684X2
where Ŷ = predicted starting salary; X1 = GPA; X2 = 1 if business
major, 0 otherwise.
b. Ŷ = 24,328 + 3026.67(3.0) + 6684(1) = $40,092.01.
c. The starting salary for business majors tends to be
about $6,684 higher than for non-business majors in this
sample, even after adjusting for variations in GPA.
d. The overall significance level is 0.099 and r² = 0.69.
Thus, the model is significant at the 0.10 level, and 69% of
the variability in starting salary is explained by GPA and
major. The model is useful in predicting starting salary.
4-22. a. Let
Ŷ = predicted selling price
X1 = square footage
X2 = number of bedrooms
X3 = age
The model with square footage: Ŷ = 2367.26 + 46.60X1; r² = 0.65
The model with number of bedrooms: Ŷ = 1923.5 + 36137.76X2; r² = 0.36
The model with age: Ŷ = 147670.9 − 2424.16X3; r² = 0.78
All of these models are significant at the 0.01 level or less. The
best model uses age as the independent variable. The coefficient of
determination is highest for this, and it is significant.
4-23. Ŷ = 5701.45 + 48.51X1 − 2540.39X2 and r² = 0.65.
Ŷ = 5701.45 + 48.51(2000) − 2540.39(3) = 95,100.28.
Notice the r² value is the same as it was in the previous problem
with just square footage as the independent variable. Adding the
number of bedrooms did not add any significant information that
was not already captured by the square footage. It should not be
included in the model. The r² for this is lower than for age alone in
the previous problem.
4-24. Ŷ = 82185.5 + 25.94X1 − 2151.7X2 − 1711.5X3 and
r² = 0.89.
Ŷ = 82185.5 + 25.94(2000) − 2151.7(3) − 1711.5(10)
= $110,495.40.
4-25. Ŷ = 3071.885 + 6.5326X, where
Y = DJIA and X = S&P.
r = 0.84 and r² = 0.70.
Ŷ = 3071.885 + 6.5326(1100) = 10,257.7 (rounded)
4-26. With one independent variable, beds, in the model, r² =
0.88. With just admissions in the model, r² = 0.974. When both
variables are in the model, r² = 0.975. Thus, the model with only
admissions as the independent variable is the best. Adding the
number of beds had virtually no impact on r², and the adjusted r²
decreased slightly. Thus, the best model is Ŷ = 1.518 + 0.6686X,
where Y = expense and X = admissions.
4-27. Using Excel with Y = MPG, X1 = horsepower, and X2 =
weight, the models are:
Ŷ = 53.87 − 0.269X1; r² = 0.77
Ŷ = 57.53 − 0.01X2; r² = 0.73
Thus, the model with horsepower as the independent variable is
better since r² is higher.
4-28. Ŷ = 57.69 − 0.17X1 − 0.005X2, where
Y = MPG
X1 = horsepower
X2 = weight
r² = 0.82.
This model is better because the coefficient of determination is much
higher with both variables than it is with either one individually.
4-29. Let Y = MPG; X1 = horsepower; X2 = weight.
The model Ŷ = b0 + b1X1 + b2X1² is Ŷ = 69.93 − 0.620X1 +
0.001747X1² and has r² = 0.798.
The model Ŷ = b0 + b3X2 + b4X2² is Ŷ = 89.09 − 0.0337X2 +
0.0000039X2² and has r² = 0.800.
The model Ŷ = b0 + b1X1 + b2X1² + b3X2 + b4X2² is Ŷ = 89.2 −
0.51X1 + 0.001889X1² − 0.01615X2 + 0.00000162X2² and has r² =
0.883. This model has a higher r² value than the model in 4-28. A
graph of the data would show a nonlinear relationship.
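Because these quadratic models are still linear in their coefficients, they can be fitted by ordinary least squares on an expanded design matrix. The MPG data are not reproduced in this solution, so the sketch below uses synthetic data purely to show the mechanics:

```python
# Fitting Y = b0 + b1*X1 + b2*X1^2 + b3*X2 + b4*X2^2 by least squares:
# the model is nonlinear in X1 and X2 but linear in the coefficients.
# The data below are synthetic, not the MPG data from the problem.
import numpy as np

rng = np.random.default_rng(0)
x1 = rng.uniform(50, 200, size=40)      # horsepower-like values (made up)
x2 = rng.uniform(1500, 4500, size=40)   # weight-like values (made up)
y = 70 - 0.5 * x1 + 0.002 * x1**2 - 0.01 * x2 + 1e-6 * x2**2  # exact quadratic

# Design matrix: intercept column, both predictors, and their squares.
X = np.column_stack([np.ones_like(x1), x1, x1**2, x2, x2**2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

print("recovered coefficients:", np.round(coef, 6))
```

The same design-matrix construction, applied to the actual MPG data, reproduces the three quadratic models above.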
4-30. If SAT median score alone is used to predict the cost,
we get
Ŷ = −7793.1 + 21.8X1 with r² = 0.22.
If both SAT and a dummy variable (X2 = 1 for private, 0 otherwise)
are used to predict the cost, we get r² = 0.79. The model is
Ŷ = 7121.8 + 5.16X1 + 9354.99X2.
This says that a private school tends to be about $9,355 more expensive
than a public school when the median SAT score is used to
adjust for the quality of the school. The coefficient of determination
indicates that about 79% of the variability in cost can be explained
by these factors. The model is significant at the 0.001 level.
4-31.
There is a significant relationship between the number of victories
(Y) and the payroll (X) at the 0.054 level, which is marginally significant.
However, r² = 0.24, so the relationship is not very strong.
Only about 24% of the variability in victories is explained by this
model.
4-32. a.
b.
c. The correlation coefficient for the first stock is only
0.19, while the correlation coefficient for the second is
0.96. Thus, there is a much stronger correlation between
stock 2 and the DJI than there is for stock 1 and the DJI.
CASE STUDIES
SOLUTION TO NORTH–SOUTH AIRLINE CASE
Northern Airline Data
Airframe Cost Engine Cost Average Age
Year per Aircraft per Aircraft (Hours)
2001 51.80 43.49 6,512
2002 54.92 38.58 8,404
2003 69.70 51.48 11,077
2004 68.90 58.72 11,717
2005 63.72 45.47 13,275
2006 84.73 50.26 15,215
2007 78.74 79.60 18,390
Southeast Airline Data
Airframe Cost Engine Cost Average Age
Year per Aircraft per Aircraft (Hours)
2001 13.29 18.86 5,107
2002 25.15 31.55 8,145
2003 32.18 40.43 7,360
2004 31.78 22.10 5,773
2005 25.34 19.69 7,150
2006 32.78 32.58 9,364
2007 35.56 38.07 8,259
Utilizing QM for Windows, we can develop the following regression
equations for the variables of interest.
Northern Airline—airframe maintenance cost:
Cost = 36.10 + 0.0025 (airframe age)
Coefficient of determination = 0.7694
Coefficient of correlation = 0.8771
[Regression equations for Problems 4-31 and 4-32:]
Ŷ = −31.54 + 0.0058X
Ŷ = 42.43 + 0.0004X
Ŷ = 67.8 + 0.0145X
Northern Airline—engine maintenance cost:
Cost = 20.57 + 0.0026 (airframe age)
Coefficient of determination = 0.6124
Coefficient of correlation = 0.7825
Southeast Airline—airframe maintenance cost:
Cost = 4.60 + 0.0032 (airframe age)
Coefficient of determination = 0.3904
Coefficient of correlation = 0.6248
Southeast Airline—engine maintenance cost:
Cost = −0.671 + 0.0041 (airframe age)
Coefficient of determination = 0.4599
Coefficient of correlation = 0.6782
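These QM for Windows results can be reproduced directly from the Northern Airline data table; at full precision the airframe slope comes out closer to 0.0026 than 0.0025, which is consistent with the reported intercept of 36.10.

```python
# Reproduce the Northern Airline airframe regression from the case data.
age  = [6512, 8404, 11077, 11717, 13275, 15215, 18390]    # average age (hours)
cost = [51.80, 54.92, 69.70, 68.90, 63.72, 84.73, 78.74]  # airframe cost/aircraft

n = len(age)
x_bar, y_bar = sum(age) / n, sum(cost) / n
sxy = sum((x - x_bar) * (y - y_bar) for x, y in zip(age, cost))
sxx = sum((x - x_bar) ** 2 for x in age)
syy = sum((y - y_bar) ** 2 for y in cost)

b1 = sxy / sxx                        # ~0.0026
b0 = y_bar - b1 * x_bar               # ~36.1
r_squared = sxy ** 2 / (sxx * syy)    # ~0.7694

print(f"Cost = {b0:.2f} + {b1:.4f} * (airframe age)")
print(f"r^2 = {r_squared:.4f}, r = {r_squared ** 0.5:.4f}")
```

Substituting the engine-cost column, or the Southeast Airline table, reproduces the other three equations the same way.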
The graphs below portray both the actual data and the regression
lines for airframe and engine maintenance costs for both airlines.
Note that the two graphs have been drawn to the same scale
to facilitate comparisons between the two airlines.
Northern Airline: There seem to be modest correlations between
maintenance costs and airframe age for Northern Airline.
There is certainly reason to conclude, however, that airframe age
is not the only important factor.
Southeast Airline: The relationships between maintenance
costs and airframe age for Southeast Airline are much less well
defined. It is even more obvious that airframe age is not the only
important factor—perhaps not even the most important factor.
Overall, it would seem that:
1. Northern Airline has the smallest variance in maintenance
costs, indicating that the day-to-day management of
maintenance is working pretty well.
2. Maintenance costs seem to be more a function of airline
than of airframe age.
3. The airframe and engine maintenance costs for Southeast
Airline are not only lower but more nearly similar than those
for Northern Airline, but, from the graphs at least, appear to
be rising more sharply with age.
4. From an overall perspective, it appears that Southeast Airline
may perform more efficiently on sporadic or emergency
repairs, and Northern Airline may place more emphasis on
preventive maintenance.
Ms. Young’s report should conclude that:
1. There is evidence to suggest that maintenance costs
could be made to be a function of airframe age by implementing
more effective management practices.
2. The difference between maintenance procedures of the
two airlines should be investigated.
3. The data with which she is presently working do not provide
conclusive results.
[Figure: Northern Airline airframe and engine maintenance cost ($) versus average airframe age (thousands of hours), with fitted regression lines.]
[Figure: Southeast Airline airframe and engine maintenance cost ($) versus average airframe age (thousands of hours), with fitted regression lines.]