2. Objectives
To determine the relationship between the response variable and the independent variables for prediction purposes
3. • Compute a simple linear regression model
• Interpret the slope and intercept in a linear regression model
• Check model adequacy
• Use the model for prediction purposes
4. Contents
1. Introduction – regression and correlation
2. Simple Linear Regression
– Simple linear regression model (one independent variable)
– Least-squares estimation of parameters
– Hypothesis testing on the parameters
– Interpretation
6. Learning Outcomes
• Students will be able to identify the nature of the association between a given pair of variables
• Fit a suitable regression model to a given set of data on two variables
• Check the model assumptions
• Interpret the parameters of the fitted model
• Predict or estimate Y values for given X values
8. Introduction
Regression and correlation are very important statistical tools used to identify and quantify the relationship between two or more variables.
Applications of regression occur in almost every field, including engineering, the physical and chemical sciences, economics, the life and biological sciences, and the social sciences.
9. Regression analysis was first developed by Sir Francis Galton (1822-1911).
Regression and correlation are two different but closely related concepts.
Regression is a quantitative expression of the basic nature of the relationship between the dependent and independent variables.
Correlation measures the strength of that relationship, i.e. how strong the linear relationship between the two variables is.
10. Dependent variable
• In a research study, the dependent variable is the variable that you believe may be influenced or modified by some treatment or exposure. It may also be the variable you are trying to predict. The dependent variable is sometimes called the outcome variable; the exact definition depends on the context of the study.
11. If one variable depends on another, we can say that one variable is a function of the other:
Y = ƒ(X)
Here Y depends on X in some manner.
Because Y depends on X, Y is called the dependent variable, criterion variable or response variable.
12. Independent variable
In a research study, an independent variable is a variable that you believe might influence your outcome measure.
X is called the independent variable, predictor variable, regressor or explanatory variable.
13. This might be a variable that you control, like a treatment, or a variable not under your control, like an exposure.
It might also represent a demographic factor such as age or gender.
14. Regression
Simple regression: Y = ƒ(X), either linear or non-linear
Multiple regression: Y = ƒ(X1, X2, …, Xk), either linear or non-linear
15. CONTENTS
• Coefficients of correlation
– meaning
– values
– role
– significance
• Regression
– line of best fit
– prediction
– significance
16. • Correlation
– the strength of the linear relationship between two variables
• Regression analysis
– determines the nature of the relationship
Example: Is there a relationship between the number of units of alcohol consumed and the likelihood of developing cirrhosis of the liver?
18. Measures the relative strength of the linear relationship between two variables.
The correlation is scale invariant; the units of measurement do not matter (it is unit-less).
It gives the direction (− or +) and strength (0 to 1) of the linear relationship between X and Y.
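For reference, the sample (Pearson) correlation coefficient can be written as follows (a standard formula, not shown on the original slide):
$r = \dfrac{\sum_{i=1}^{n}(x_i - \bar x)(y_i - \bar y)}{\sqrt{\sum_{i=1}^{n}(x_i - \bar x)^2 \, \sum_{i=1}^{n}(y_i - \bar y)^2}}$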
19. • It is always true that −1 ≤ corr(X, Y) ≤ 1; that is, r ranges between −1 and 1.
• The closer to −1, the stronger the negative linear relationship.
• The closer to 1, the stronger the positive linear relationship.
• The closer to 0, the weaker any linear relationship.
Though a value close to zero indicates almost no linear association, it does not mean there is no relationship.
20. Scatter Plots of Data with Various Correlation Coefficients
[Six scatter plots of Y against X illustrating r = −1, r = −0.6, r = 0, r = +0.3, r = +1, and a curved pattern with r = 0]
24. Interpreting the Pearson correlation coefficient
• The value of r for these data is 0.39, indicating a weak positive linear association.
• Omitting the last observation, r is 0.96.
• Thus, r is sensitive to extreme observations.
[Scatterplot of Weight (lbs) vs Height (inches), with one extreme observation marked]
25. • The value of r here is 0.94.
• However, a straight-line model may not be suitable.
• The relationship appears curvilinear.
[Scatterplot of Response vs Predictor showing a curved pattern]
26. continued…
• The value of r is −0.07.
• But the plot indicates a positive linear association.
• Again, this anomaly is due to extreme data values.
[Scatterplot of Final marks vs OBT marks, with an extreme observation marked]
27. • The value of r is around 0.006, indicating almost no linear association.
• However, the plot shows a strong relationship between the two variables.
• This illustrates that r does not provide evidence of all types of relationship.
• These examples highlight the importance of looking at scatter plots of the data before deciding on a model function.
[Scatterplot of Reaction time in seconds vs Age in years showing a strong non-linear pattern]
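As a sketch of how these preliminary checks might be carried out in SAS (the package used later in these slides), one could request the correlation coefficient and a scatter plot before fitting any model; the dataset name mydata and the variables x and y are illustrative, not taken from the slides:
PROC CORR DATA = mydata;   /* sample Pearson correlation r between x and y */
   VAR x y;
RUN;
PROC GPLOT DATA = mydata;  /* scatter plot of y against x */
   PLOT y * x;
RUN;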
28. Coefficient of Determination
R² has a value of 0.6483. This means 64.83% of the variation in the auction selling prices (y) is explained by the regression model. The remaining 35.17% is unexplained, i.e. due to error.
29. Unlike the value of a test statistic, the coefficient of determination does not have a critical value that enables us to draw conclusions.
In general, the higher the value of R², the better the model fits the data.
R² = 1: perfect match between the line and the data points.
R² = 0: there is no linear relationship between x and y.
30. Coefficient of determination
[Figure: two data points (x1, y1) and (x2, y2) of a certain sample, shown with the fitted line and the mean of y]
$(y_1 - \bar y)^2 + (y_2 - \bar y)^2 = (\hat y_1 - \bar y)^2 + (\hat y_2 - \bar y)^2 + (y_1 - \hat y_1)^2 + (y_2 - \hat y_2)^2$
Total variation in y = variation explained by the regression line + unexplained variation (error)
Variation in y = SSR + SSE
31. Coefficient of Determination
• How "strong" is the relationship between predictor and outcome? (The fraction of the observed variance of the outcome variable explained by the predictor variables.)
• Relationship among SST, SSR and SSE:
SST = SSR + SSE
$\sum_i (y_i - \bar y)^2 = \sum_i (\hat y_i - \bar y)^2 + \sum_i (y_i - \hat y_i)^2$
where:
SST = total sum of squares
SSR = sum of squares due to regression
SSE = sum of squares due to error
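The coefficient of determination follows directly from this decomposition (a connecting step not written out on the slides):
$R^2 = \dfrac{SSR}{SST} = 1 - \dfrac{SSE}{SST}$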
33. Estimation Process
Regression model: y = β0 + β1x + ε
Regression equation: E(y) = β0 + β1x
Unknown parameters: β0, β1
Sample data: (x1, y1), …, (xn, yn)
Sample statistics b0 and b1 provide estimates of β0 and β1.
Estimated regression equation: ŷ = b0 + b1x
34. Introduction
• We will examine the relationship between quantitative variables x and y via a mathematical equation.
• The motivation for using the technique:
– Forecast the value of a dependent variable (y) from the values of independent variables (x1, x2, …, xk).
– Analyze the specific relationships between the independent variables and the dependent variable.
35. For a continuous variable X, the easiest way of checking for a linear relationship with Y is a scatter plot of Y against X. Hence, regression analysis should start with a scatter plot.
36. Least Squares
• 1. 'Best fit' means the differences between the actual Y values and the predicted Y values are a minimum. But positive differences offset negative ones, so we square the errors.
• 2. Least squares minimizes the sum of the squared differences (errors), SSE:
$\sum_{i=1}^{n} (Y_i - \hat Y_i)^2 = \sum_{i=1}^{n} \hat\varepsilon_i^2$
37. Coefficient Equations
• Prediction equation: $\hat y_i = \hat\beta_0 + \hat\beta_1 x_i$
• Sample slope: $\hat\beta_1 = \dfrac{SS_{xy}}{SS_{xx}} = \dfrac{\sum (x_i - \bar x)(y_i - \bar y)}{\sum (x_i - \bar x)^2}$
• Sample Y-intercept: $\hat\beta_0 = \bar y - \hat\beta_1 \bar x$
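For completeness, these estimators come from minimizing SSE with respect to β0 and β1; setting the partial derivatives to zero gives the normal equations (a derivation step not written out on the slide):
$\sum_{i=1}^{n}(y_i - \hat\beta_0 - \hat\beta_1 x_i) = 0, \qquad \sum_{i=1}^{n} x_i (y_i - \hat\beta_0 - \hat\beta_1 x_i) = 0$
Solving these two equations simultaneously yields the slope and intercept formulas above; in particular, the first equation shows that the fitted line passes through the point of means $(\bar x, \bar y)$.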
38. Interpreting regression coefficients
You should interpret the slope and the intercept of this line as follows:
– The slope represents the estimated average change in Y when X increases by one unit.
– The intercept represents the estimated average value of Y when X equals zero.
39. Interpretation of Coefficients
• 1. Slope ($\hat\beta_1$)
– The estimated Y changes by $\hat\beta_1$ for each 1-unit increase in X
• If $\hat\beta_1$ = 2, then Y is expected to increase by 2 for each 1-unit increase in X
• 2. Y-Intercept ($\hat\beta_0$)
– The average value of Y when X = 0
• If $\hat\beta_0$ = 4, then the average of Y is expected to be 4 when X is 0
40. The Model
• The first-order linear model: $y = \beta_0 + \beta_1 x + \varepsilon$
y = dependent variable
x = independent variable
β0 = y-intercept
β1 = slope of the line
ε = error variable
[Figure: a straight line in the (x, y) plane with intercept β0 and slope β1 = Rise/Run]
β0 and β1 are unknown population parameters and are therefore estimated from the data.
41. The Least Squares (Regression) Line
A good line is one that minimizes the sum of squared differences between the points and the line.
42. Model adequacy checking
When conducting linear regression, it is important to make sure the assumptions behind the model are met. It is also important to verify that the estimated linear regression model is a good fit for the data (a linear regression line can often be estimated by SAS, SPSS, MINITAB, etc. even when it is not appropriate; in that case it is up to you to judge whether the model is a good one).
43. Assumptions
• The relationship between the explanatory variable and the outcome variable is linear. In other words, each increase of one unit in the explanatory variable is associated with a fixed increase in the outcome variable.
• The regression equation describes the mean value of the dependent variable for given values of the independent variable.
44. • The individual data points of Y (the response variable) for each value of the explanatory variable are normally distributed about the line of means (regression line).
• The variance of the data points about the line of means is the same for each value of the explanatory variable.
45. Assumptions About the Error Term ε
1. The error ε is a random variable with mean zero.
2. The variance of ε, denoted by σ², is the same for all values of the independent variable.
3. The values of ε are independent (randomly distributed).
4. The error ε is a normally distributed random variable with mean zero and variance σ².
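In compact notation, these four assumptions are often summarized as follows (an equivalent restatement, not shown on the original slide):
$\varepsilon_i \overset{\text{iid}}{\sim} N(0, \sigma^2), \quad i = 1, \dots, n$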
46. Testing the assumptions for regression - 2
• Normality (interval-level variables)
– Skewness and kurtosis must lie within acceptable limits (−1 to +1)
• How to test?
– You can examine a histogram. Normality of the distribution of the Y data points can be checked by plotting a histogram of the residuals.
47. • If this condition is violated:
– The regression procedure can overestimate significance, so a note of caution should be added to the interpretation of the results (the type I error rate increases).
48. Testing the assumptions - normality
To compute skewness and kurtosis for the included cases, select Descriptive Statistics | Descriptives… from the Analyze menu.
49. Testing the assumptions - normality
First, mark the checkboxes for Kurtosis and Skewness.
Second, click on the Continue button to complete the options.
50. Analysis of Residuals
• To examine whether the regression model is appropriate for the data being analyzed, we can check the residual plots.
• Residual plots are:
– A histogram of the residuals
– Residuals against the fitted values
– Residuals against the independent variable
– Residuals over time, if the data are chronological
51. Analysis of Residuals
• A histogram of the residuals provides a check on the normality assumption. A normal quantile plot of the residuals can also be used to check normality.
• Regression inference is robust against moderate lack of normality. On the other hand, outliers and influential observations can invalidate the results of inference for regression.
• A plot of residuals against the fitted values or the independent variable can be used to check the assumption of constant variance and the aptness of the model.
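A minimal sketch of how these residual plots might be produced in SAS, by saving the residuals and fitted values to a new dataset (the dataset name mydata and the variables y and x are assumptions, not from the slides):
PROC REG DATA = mydata;
   MODEL y = x;
   OUTPUT OUT = regout R = resid P = fitted;  /* save residuals and fitted values */
RUN;
PROC UNIVARIATE DATA = regout;
   VAR resid;
   HISTOGRAM resid / NORMAL;   /* histogram of residuals with a normal curve overlaid */
RUN;
PROC GPLOT DATA = regout;
   PLOT resid * fitted;        /* residuals against fitted values */
RUN;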
52. Analysis of Residuals
• A plot of residuals against time provides a check on the assumption of independence of the error terms.
• The assumption of independence is the most critical one.
53. Residual plots
• The residuals should have no systematic pattern.
• The residual plot to the right shows a scatter of the points with no individual observations standing out and no systematic change as x increases.
[Degree Days residual plot: residuals plotted against degree days, scattered evenly between −1 and 1]
54. Residual plots
• The points in this residual plot have a curved pattern, so a straight line fits poorly.
55. Residual plots
• The points in this plot show more spread for larger values of the explanatory variable x, so prediction will be less accurate when x is large.
56. Heteroscedasticity
• When the requirement of constant variance is violated, we have a condition of heteroscedasticity.
• Diagnose heteroscedasticity by plotting the residuals against the predicted y.
[Plot of residuals against the predicted values ŷ: the spread of the residuals increases with ŷ]
57. Non-Independence of Error Variables
Patterns in the appearance of the residuals over time indicate that autocorrelation exists.
[Two plots of residuals against time: the first shows runs of positive residuals followed by runs of negative residuals; the second shows oscillating behavior of the residuals around zero]
58. Outliers
• An outlier is an observation that is unusually small or large.
• Several possibilities need to be investigated when an outlier is observed:
– There was an error in recording the value.
– The point does not belong in the sample.
– The observation is valid.
• Identify outliers from the scatter diagram.
• It is customary to suspect that an observation is an outlier if its |standardized residual| > 2.
60. Variable transformations
• If the residual plot suggests that the variance is not constant, a transformation can be used to stabilize the variance.
• If the residual plot suggests a non-linear relationship between x and y, a transformation may reduce it to one that is approximately linear.
• Common linearizing transformations are $\frac{1}{x}$ and $\log(x)$.
• Common variance-stabilizing transformations are $\sqrt{y}$, $\log(y)$, $\frac{1}{y}$ and $y^2$.
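As an illustration of how such a transformation might be applied in SAS before refitting the model (the dataset name mydata and the variables y and x are assumptions for this sketch):
DATA trans;
   SET mydata;
   log_y = LOG(y);   /* natural-log transformation of the response */
RUN;
PROC REG DATA = trans;
   MODEL log_y = x;   /* refit the model on the transformed scale */
RUN;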
63. Example
• The following observations were made in an experiment carried out to measure the relationship between a mathematics placement test conducted at a faculty and the final grades of 20 students; the faculty decided not to admit students who scored below 35 on the placement test.
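A minimal sketch of how this model might be fitted in SAS, in the style of the PROC REG example that follows (the dataset name grades and the variable names placement and final are assumptions, since the slides do not give them):
PROC REG DATA = grades;
   MODEL final = placement;   /* simple linear regression of final grade on placement mark */
   PLOT final * placement;    /* data with the fitted regression line */
RUN;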
68. PROC REG
Submit the following program in SAS. In addition to the first two statements, with which you are familiar, the third statement requests a plot of the data with the fitted regression line, the fourth statement requests a plot of the residuals by weight, and the fifth statement requests a plot of the studentized (standardized) residuals by weight:
PROC REG DATA = blood;
MODEL level = weight;
PLOT level * weight;
PLOT residual. * weight;
PLOT student. * weight;
RUN;
69. Interpreting Output
Notice that the overall F-test has a p-value of 0.2160, which is greater than 0.05. Therefore, we fail to reject H0: β1 = 0 and conclude that there is no evidence of a linear relationship between blood level and weight.
Now look at the following plots:
70. Plot of Regression Line: Notice it is the same plot as the one
you created from PROC GPLOT, except the fitted regression line
has been added to it.
71. Plot of residuals * weight: you want an even spread of
points above and below the dashed line. This is a good way
to eyeball the data for potential outliers.
72. Plot of studentized residuals * weight: look for
values with an absolute value larger than 2.6 to
determine if there are any outliers.
73. You can see from the plot that the observation
with weight = 128 (observation #4) is an
outlier.
The residual plots also help you determine
whether the assumption of constant variance is
met. Because the residuals appear to be
randomly scattered without any definite
pattern, this suggests that the data are
independent with constant variance.
74. The Normality Assumption
A convenient way to test for normality is by constructing a normal quantile-quantile (Q-Q) plot. This plots the residuals you would see under normality against the residuals that are actually observed. If the data are completely normal, the residuals will follow a 45° line.
Use the following code in SAS to make the Q-Q plot:
PLOT residual. * nqq.;
RUN;
76. Interpreting the NQQ Plot
The residuals do not clearly follow a 45° line.
Because the tails of this line seem curved,
this suggests that the data may be skewed,
not normally distributed.
77. Recommendations
• It is extremely important to look at plots of the raw data prior to selecting a tentative model.
• Be cautious in interpreting the correlation coefficient r.
• Proper model assessment should be done prior to using the fitted model for predictions.
• Focus on the range of x values used to build the model before making predictions at a desired x value.