1. Lecture 2: Basics of Bayesian modeling
Shravan Vasishth
Department of Linguistics
University of Potsdam, Germany
October 9, 2014
2. Lecture 2: Basics of Bayesian modeling
Some important concepts from lecture 1
The bivariate distribution
Independent (normal) random variables
If we have two random variables U0, U1, and we examine their
joint distribution, we can draw a 3-d plot showing u0, u1, and
f(u0,u1). E.g., u0 ~ N(0,1) and u1 ~ N(0,1), with the two random
variables independent:
[3-d plot of the joint density: u0 and u1 on the horizontal axes (-3 to 3), density z on the vertical axis.]
3. Lecture 2: Basics of Bayesian modeling
Some important concepts from lecture 1
The bivariate distribution
Correlated random variables (r = 0.6)
The random variables U0, U1 could be positively correlated. . .
[3-d plot of the positively correlated bivariate density; axes as before.]
4. Lecture 2: Basics of Bayesian modeling
Some important concepts from lecture 1
The bivariate distribution
Correlated random variables (r = 0.6)
. . . or negatively:
[3-d plot of the negatively correlated bivariate density; axes as before.]
5. Lecture 2: Basics of Bayesian modeling
Some important concepts from lecture 1
Bivariate distributions
This is why, when talking about two normal random variables U0
and U1, we have to talk about
1 U0's mean and variance
2 U1's mean and variance
3 the correlation r between them
A mathematically convenient way to talk about this is in terms of the
variance-covariance matrix we saw in lecture 1:

Σu =
[ σ²u0        ρu σu0 σu1 ]
[ ρu σu0 σu1  σ²u1       ]
=
[ 0.61²              .51 × 0.61 × 0.23 ]
[ .51 × 0.61 × 0.23  0.23²             ]
(1)
6. Lecture 2: Basics of Bayesian modeling
Introduction
Todayās goals
In this second lecture, my goals are to
1 Give you a feeling for how Bayesian analysis works using two
simple examples that need only basic mathematical operations
(addition, subtraction, division, multiplication).
2 Start thinking about priors for parameters in preparation for
fitting linear mixed models.
3 Start fitting linear regression models in JAGS.
I will assign some homework which is designed to help you
understand these concepts. Solutions will be discussed next week.
7. Lecture 2: Basics of Bayesian modeling
Some preliminaries: Probability distributions
Random variables
A random variable X is a function X : S → R that associates to
each outcome ω ∈ S exactly one number X(ω) = x.
SX is all the x's (all the possible values of X, the support of X).
I.e., x ∈ SX.
Good example: number of coin tosses till H
X : ω → x
ω: H, TH, TTH, . . . (infinite)
x = 0, 1, 2, . . . ; x ∈ SX
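This example is the geometric distribution (counting failures before the first success). A minimal simulation sketch, assuming a fair coin (the probability 0.5 is my choice):

```r
## Number of Tails before the first Head, for a fair coin:
## X ~ Geometric(0.5), with support x = 0, 1, 2, ...
set.seed(1)
x <- rgeom(10000, prob = 0.5)
## estimated P(X = 0), i.e., Heads on the very first toss:
mean(x == 0)
## theoretical value:
dgeom(0, prob = 0.5)
```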
8. Lecture 2: Basics of Bayesian modeling
Some preliminaries: Probability distributions
Random variables
Every discrete (continuous) random variable X has associated with
it a probability mass (distribution) function (pmf, pdf).
PMF is used for discrete distributions and PDF for continuous.
pX : SX → [0, 1] (2)
defined by
pX(x) = P(X(ω) = x), x ∈ SX (3)
[Note: Books sometimes abuse notation by overloading the
meaning of X. They usually have: pX(x) = P(X = x), x ∈ SX]
9. Lecture 2: Basics of Bayesian modeling
Some preliminaries: Probability distributions
Random variables
Probability density functions (continuous case) or probability mass
functions (discrete case) are functions that assign probabilities or
relative frequencies to all events in a sample space.
The expression

X ~ f(·) (4)

means that the random variable X has pdf/pmf f(·). For example,
if we say that X ~ Normal(μ, σ²), we are assuming that the pdf is

f(x) = (1/√(2πσ²)) exp[−(x − μ)²/(2σ²)] (5)
10. Lecture 2: Basics of Bayesian modeling
Some preliminaries: Probability distributions
Random variables
We also need a cumulative distribution function or cdf because,
in the continuous case, P(X=some point value) is zero and we
therefore need a way to talk about P(X in a speciļ¬c range). cdfs
serve that purpose.
In the continuous case, the cdf or distribution function is deļ¬ned
as:
P(X < k) = F(k) = the area under the curve to the left of k (6)
For example, suppose X ā Normal(600,50).
11. Lecture 2: Basics of Bayesian modeling
Some preliminaries: Probability distributions
Random variables
We can ask for Prob(X < 600):
> pnorm(600,mean=600,sd=sqrt(50))
[1] 0.5
We can ask for the quantile that has 50% of the probability to the left of
it:
> qnorm(0.5,mean=600,sd=sqrt(50))
[1] 600
. . . or to the right of it:
> qnorm(0.5,mean=600,sd=sqrt(50),lower.tail=FALSE)
[1] 600
12. Lecture 2: Basics of Bayesian modeling
Some preliminaries: Probability distributions
Random variables
We can also calculate the probability that X lies between 590 and 610:
Prob(590 < X < 610):
> pnorm(610,mean=600,sd=sqrt(50))-pnorm(590,mean=600,sd=sqrt(50))
[1] 0.8427008
13. Lecture 2: Basics of Bayesian modeling
Some preliminaries: Probability distributions
Random variables
Another way to compute area under the curve is by simulation:
> x<-rnorm(10000,mean=600,sd=sqrt(50))
> ## proportion of cases where
> ## x is less than 590:
> mean(x<590)
[1] 0.0776
> ## theoretical value:
> pnorm(590,mean=600,sd=sqrt(50))
[1] 0.0786496
We will be doing this a lot.
14. Lecture 2: Basics of Bayesian modeling
Some preliminaries: Probability distributions
Random variables
E.g., in linguistics we take as continuous random variables:
1 reading time: Here the random variable (RV) X has possible
values w ranging from 0 ms to some upper bound b ms (or
maybe unbounded?), and the RV X maps each possible value
w to the corresponding number (0 to 0 ms, 1 to 1 ms, etc.).
2 acceptability ratings (technically not correct; but people
generally treat ratings as continuous, at least in
psycholinguistics)
3 EEG signals
In this course, due to time constraints, we will focus almost
exclusively on reading time data (eye-tracking and self-paced
reading).
15. Lecture 2: Basics of Bayesian modeling
Some preliminaries: Probability distributions
Normal random variables
We will also focus mostly on the normal distribution.
fX(x) = (1/(σ√(2π))) e^(−(x−μ)²/(2σ²)),  −∞ < x < ∞. (7)
It is conventional to write X ā N(Āµ,s2).
Important note: The normal distribution is parameterized differently
in different programming languages:
1 R: dnorm(mean, sd)
2 JAGS: dnorm(mean,precision) where
precision = 1/variance
3 Stan: normal(mean,sigma)
I apologize in advance for all the confusion this will cause.
16. Lecture 2: Basics of Bayesian modeling
Some preliminaries: Probability distributions
Normal random variables
[Plot: the Normal density N(0,1), x from -6 to 6.]
17. Lecture 2: Basics of Bayesian modeling
Some preliminaries: Probability distributions
Normal random variables
Standard or unit normal random variable:
If X is normally distributed with parameters μ and σ², then
Z = (X − μ)/σ is normally distributed with parameters 0, 1.
We conventionally write Φ(x) for the CDF:

Φ(x) = (1/√(2π)) ∫_{−∞}^{x} e^(−y²/2) dy,  where y = (x − μ)/σ (8)

In R, we can type pnorm(x) to find out Φ(x). Suppose x = −2:
> pnorm(-2)
[1] 0.02275013
18. Lecture 2: Basics of Bayesian modeling
Some preliminaries: Probability distributions
Normal random variables
If Z is a standard normal random variable (SNRV) then

P{Z ≤ −x} = P{Z > x},  −∞ < x < ∞ (9)
We can check this with R:
> ##P(Z < -x):
> pnorm(-2)
[1] 0.02275013
> ##P(Z > x):
> pnorm(2,lower.tail=FALSE)
[1] 0.02275013
19. Lecture 2: Basics of Bayesian modeling
Some preliminaries: Probability distributions
Normal random variables
Although the following expression looks scary:
Φ(x) = (1/√(2π)) ∫_{−∞}^{x} e^(−y²/2) dy,  where y = (x − μ)/σ (10)

all it is saying is: find the area under the normal curve, ranging from
−Infinity to x. Always visualize! Here, Φ(−2):
[Plot: the standard normal density with the area Φ(−2) shaded, x from -3 to 3.]
20. Lecture 2: Basics of Bayesian modeling
Some preliminaries: Probability distributions
Normal random variables
Since Z = (X − μ)/σ is an SNRV whenever X is normally
distributed with parameters μ and σ², the CDF of X can be
expressed as:

FX(a) = P{X ≤ a} = P((X − μ)/σ ≤ (a − μ)/σ) = Φ((a − μ)/σ) (11)
Practical application: Suppose you know that X ~ N(μ, σ²), and you
know μ but not σ. If you know that a 95% confidence interval is [−q, +q],
then you can work out σ by
a. computing the Z ~ N(0,1) that has 2.5% of the area to its right:
> round(qnorm(0.025,lower.tail=FALSE),digits=2)
[1] 1.96
b. solving for σ in Z = (q − μ)/σ.
We will use this fact in Homework 0.
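As a concrete sketch (all numbers are invented for illustration): suppose μ = 600 and the reported 95% interval is [586.1, 613.9], so q = 613.9.

```r
## Solve for sigma given a known mean and a 95% interval:
mu <- 600
q  <- 613.9                    # upper bound of the 95% interval
z  <- round(qnorm(0.025, lower.tail = FALSE), digits = 2)  # 1.96
sigma <- (q - mu)/z            # from Z = (q - mu)/sigma
sigma                          # about 7.09
```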
21. Lecture 2: Basics of Bayesian modeling
Some preliminaries: Probability distributions
Normal random variables
Summary of useful commands:
## pdf of normal:
dnorm(x, mean = 0, sd = 1)
## compute area under the curve:
pnorm(q, mean = 0, sd = 1)
## find out the quantile that has
## area (probability) p under the curve:
qnorm(p, mean = 0, sd = 1)
## generate normally distributed data of size n:
rnorm(n, mean = 0, sd = 1)
22. Lecture 2: Basics of Bayesian modeling
Some preliminaries: Probability distributions
Likelihood function (Normal distribution)
Let's assume that we have generated a data point from a particular
normal distribution:
x ~ N(μ = 600, σ² = 50).
> x<-rnorm(1,mean=600,sd=sqrt(50))
Given x, and different values of μ, we can determine which μ is
most likely to have generated x. You can eyeball the result:
[Plot: dnorm(x, mean = mu, sd = sqrt(50)) as a function of mu, for mu from 400 to 800.]
23. Lecture 2: Basics of Bayesian modeling
Some preliminaries: Probability distributions
Likelihood function
Suppose that we had generated 10 independent values of x:
> x<-rnorm(10,mean=600,sd=sqrt(50))
For each of the x's, we can compute the likelihood that the mean of
the Normal distribution that generated the data is μ, for different
values of μ:
> ## mu = 500
> dnorm(x,mean=500,sd=sqrt(50))
[1] 2.000887e-48 6.542576e-43 3.383346e-31 1.423345e-46 7.
[6] 1.686776e-41 3.214674e-39 2.604492e-40 6.892547e-50 1.
Since each of the x's is independently generated, the total
likelihood of the 10 x's is:

f(x1) × f(x2) × ··· × f(x10) (12)

for some μ in f(·) = Normal(μ, 50).
24. Lecture 2: Basics of Bayesian modeling
Some preliminaries: Probability distributions
Likelihood function
It's computationally easier to just take logs and sum them up (the log
likelihood):

log f(x1) + log f(x2) + ··· + log f(x10) (13)
> ## mu = 500
> sum(dnorm(x,mean=500,sd=sqrt(50),log=TRUE))
[1] -986.1017
25. Lecture 2: Basics of Bayesian modeling
Some preliminaries: Probability distributions
(Log) likelihood function
We can now plot, for different values of μ, the log likelihood of
the 10 data points under each value of μ:
> mu<-seq(400,800,by=0.1)
> liks<-rep(NA,length(mu))
> for(i in 1:length(mu)){
+ liks[i]<-sum(dnorm(x,mean=mu[i],sd=sqrt(50),log=TRUE))
+ }
> plot(mu,liks,type="l")
[Plot: the log likelihood (liks) as a function of mu, from 400 to 800, peaking near 600.]
26. Lecture 2: Basics of Bayesian modeling
Some preliminaries: Probability distributions
(Log) likelihood function
1 It's intuitively clear that we'd want to declare as our estimate
the value of μ that brings us to the highest point in this figure.
2 This is the maximum likelihood estimate, MLE.
3 Practical implication: In frequentist statistics, our data
vector x is assumed to be generated by X ~ N(μ, σ²), and we
attempt to find the MLE, i.e., the estimates of μ and σ² that
maximize the likelihood.
4 In Bayesian models, when we assume a uniform prior, we will
get an estimate of the parameters which coincides with the
MLE (examples coming soon).
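For the normal with known variance, the MLE of μ is simply the sample mean, and the grid search from the previous slide finds (approximately) the same value. A sketch, regenerating the data with a seed of my choosing so the result is reproducible:

```r
## Grid-based MLE vs. the analytical MLE (the sample mean):
set.seed(42)
x  <- rnorm(10, mean = 600, sd = sqrt(50))
mu <- seq(400, 800, by = 0.1)
liks <- sapply(mu, function(m) sum(dnorm(x, mean = m, sd = sqrt(50), log = TRUE)))
mu[which.max(liks)]   # value of mu maximizing the log likelihood
mean(x)               # analytical MLE; agrees up to the grid resolution
```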
27. Lecture 2: Basics of Bayesian modeling
Some preliminaries: Probability distributions
Bayesian modeling examples
Next, I will work through five relatively simple examples that use
Bayes' Theorem.
1 Example 1: Proportions
2 Example 2: Normal distribution
3 Example 3: Linear regression with one predictor
4 Example 4: Linear regression with multiple predictors
5 Example 5: Generalized linear models example (binomial link).
28. Lecture 2: Basics of Bayesian modeling
Example 1: Proportions
Proportions
1 Recall the binomial distribution: Let X be the number of successes in n
trials. We generally assume that
X ~ Binomial(n, θ), θ unknown.
2 Suppose we have 46 successes out of 100. We generally use
the empirically observed proportion 46/100 as our estimate of
θ. I.e., we assume that the generating distribution is
X ~ Binomial(n = 100, θ = .46).
3 This is because, of all possible values of θ, going from 0 to 1,
0.46 has the highest likelihood.
29. Lecture 2: Basics of Bayesian modeling
Example 1: Proportions
Proportions
> dbinom(x=46,size=100,0.4)
[1] 0.03811036
> dbinom(x=46,size=100,0.46)
[1] 0.07984344
> dbinom(x=46,size=100,0.5)
[1] 0.0579584
> dbinom(x=46,size=100,0.6)
[1] 0.001487007
Actually, we could take a whole range of values of θ from 0 to 1
and plot the resulting probabilities.
30. Lecture 2: Basics of Bayesian modeling
Example 1: Proportions
Proportions
> theta<-seq(0,1,by=0.01)
> plot(theta,dbinom(x=46,size=100,theta),
+ xlab=expression(theta),type="l")
[Plot: dbinom(x=46, size=100, theta) as a function of θ from 0 to 1, peaking at 0.46.]
This is the likelihood function for the binomial distribution, and
we will write it as f(data | θ). It is a function of θ.
31. Lecture 2: Basics of Bayesian modeling
Example 1: Proportions
Proportions
Since Binomial(x; n, θ) = (n choose x) θ^x (1 − θ)^(n−x), we can see that:

f(data | θ) ∝ θ^46 (1 − θ)^54 (14)

We are now going to use Bayes' theorem to work out the posterior
distribution of θ given the data:

f(θ | data) ∝ f(data | θ) × f(θ) (15)
 posterior     likelihood    prior

All that's missing here is the prior distribution f(θ). So let's try to
define a prior for θ.
32. Lecture 2: Basics of Bayesian modeling
Example 1: Proportions
Proportions
To define a prior for θ, we will use a distribution called the Beta
distribution, which takes two parameters, a and b.
We can plot some Beta distributions to get a feel for what these
parameters do.
We are going to plot
1 Beta(a=2,b=2)
2 Beta(a=3,b=3)
3 Beta(a=6,b=6)
4 Beta(a=10,b=10)
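A sketch of how such a plot can be drawn (the line types and legend placement are my choices):

```r
## Overlay the four Beta densities listed above:
theta <- seq(0, 1, by = 0.01)
plot(theta, dbeta(theta, 2, 2), type = "l", ylim = c(0, 4),
     xlab = expression(theta), ylab = "density", main = "Beta density")
lines(theta, dbeta(theta, 3, 3), lty = 2)
lines(theta, dbeta(theta, 6, 6), lty = 3)
lines(theta, dbeta(theta, 10, 10), lty = 4)
legend("topright", lty = 1:4,
       legend = c("a=2,b=2", "a=3,b=3", "a=6,b=6", "a=10,b=10"))
```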
33. Lecture 2: Basics of Bayesian modeling
Example 1: Proportions
Proportions
[Plot: Beta densities for a=2,b=2; a=3,b=3; a=6,b=6; a=60,b=60, on θ ∈ [0,1], density 0 to 4.]
34. Lecture 2: Basics of Bayesian modeling
Example 1: Proportions
Proportions
Each successive density expresses increasing certainty about θ
being centered around 0.5; notice that the spread around 0.5
decreases as a and b increase.
1 Beta(a=2,b=2)
2 Beta(a=3,b=3)
3 Beta(a=6,b=6)
4 Beta(a=10,b=10)
35. Lecture 2: Basics of Bayesian modeling
Example 1: Proportions
Proportions
1 If we don't have much prior information, we could use a small
value such as a=b=2 (and a=b=1 gives the completely flat, uniform
prior); such priors are called uninformative or non-informative
priors.
2 If we have a lot of prior knowledge and/or a strong belief that
θ has a particular value, we can use larger a, b to reflect our
greater certainty about the parameter.
3 You can think of the parameter a as referring to the number of
successes, and the parameter b to the number of failures.
So the beta distribution can be used to define the prior distribution
of θ.
36. Lecture 2: Basics of Bayesian modeling
Example 1: Proportions
Proportions
Just for the sake of illustration, let's take four different beta priors,
each reflecting increasing certainty.
1 Beta(a=2,b=2)
2 Beta(a=3,b=3)
3 Beta(a=6,b=6)
4 Beta(a=21,b=21)
Each (except perhaps the first) reflects a belief that θ = 0.5, with
varying degrees of uncertainty.
Note an important fact: Beta(θ | a, b) ∝ θ^(a−1) (1 − θ)^(b−1).
This is because the Beta distribution is:

f(θ | a, b) = (Γ(a + b)/(Γ(a) Γ(b))) θ^(a−1) (1 − θ)^(b−1)
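Because the Beta prior and the binomial likelihood multiply to give another Beta, the posterior has a closed form: for x successes out of n, Beta(a, b) updates to Beta(a + x, b + n − x). A minimal sketch for the 46/100 example (the choice a = b = 6 is one of the priors listed above):

```r
## Conjugate Beta-Binomial update:
a <- 6; b <- 6        # prior Beta(6,6)
x <- 46; n <- 100     # observed successes and trials
a.post <- a + x       # 52
b.post <- b + n - x   # 60
x/n                         # sample proportion: 0.46
a.post/(a.post + b.post)    # posterior mean: about 0.464, pulled toward 0.5
```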
37. Lecture 2: Basics of Bayesian modeling
Example 1: Proportions
Proportions
Now we just need to plug in the likelihood and the prior to get the
posterior:
f(θ | data) ∝ f(data | θ) f(θ) (16)
The four corresponding posterior distributions would be as follows
(I hope I got the sums right!).
39. Lecture 2: Basics of Bayesian modeling
Example 1: Proportions
Proportions
1 We can now visualize each of these triplets of priors,
likelihoods and posteriors.
2 Note that I use the beta to model the likelihood because this
allows me to visualize all three (prior, lik., posterior) in the
same plot.
3 I first show the plot just for the prior
θ ~ Beta(a = 2, b = 2)
40. Lecture 2: Basics of Bayesian modeling
Example 1: Proportions
Proportions
Beta(2,2) prior: posterior is shifted just a bit to the right compared to the likelihood
[Plot: prior, likelihood, and posterior densities on θ ∈ [0,1], for the Beta(2,2) prior.]
41. Lecture 2: Basics of Bayesian modeling
Example 1: Proportions
Proportions
Beta(6,6) prior: Posterior shifts even more towards the prior
[Plot: prior, likelihood, and posterior densities for the Beta(6,6) prior.]
42. Lecture 2: Basics of Bayesian modeling
Example 1: Proportions
Proportions
Beta(21,21) prior: Posterior shifts even more towards the prior
[Plot: prior, likelihood, and posterior densities for the Beta(21,21) prior.]
43. Lecture 2: Basics of Bayesian modeling
Example 1: Proportions
Proportions
Basically, the posterior is a compromise between the prior and the
likelihood.
1 When the prior has high uncertainty or we have a lot of data,
the likelihood will dominate.
2 When the prior has high certainty (like the Beta(21,21) case),
and we have very little data, then the prior will dominate.
So, Bayesian methods are particularly important when you have
little data but a whole lot of expert knowledge.
But they are also useful for standard psycholinguistic research, as I
hope to demonstrate.
44. Lecture 2: Basics of Bayesian modeling
Example 1: Proportions
An example application
(This exercise is from a Copenhagen course I took in 2012. I quote it
almost verbatim.)
"The French mathematician Pierre-Simon Laplace (1749-1827) was the
first person to show definitively that the proportion of female births in
the French population was less than 0.5, in the late 18th century, using a
Bayesian analysis based on a uniform prior distribution. Suppose you
were doing a similar analysis but you had more definite prior beliefs about
the ratio of male to female births. In particular, if θ represents the
proportion of female births in a given population, you are willing to place
a Beta(100,100) prior distribution on θ.
1 Show that this means you are more than 95% sure that θ is
between 0.4 and 0.6, although you are ambivalent as to whether it
is greater or less than 0.5.
2 Now you observe that out of a random sample of 1,000 births, 511
are boys. What is your posterior probability that θ > 0.5?"
45. Lecture 2: Basics of Bayesian modeling
Example 1: Proportions
An example application
Show that this means you are more than 95% sure that θ is
between 0.4 and 0.6, although you are ambivalent as to whether it
is greater or less than 0.5.
Prior: Beta(a=100,b=100)
> round(qbeta(0.025,shape1=100,shape2=100),digits=1)
[1] 0.4
> round(qbeta(0.975,shape1=100,shape2=100),digits=1)
[1] 0.6
> ## ambivalent as to whether theta <0.5 or not:
> round(pbeta(0.5,shape1=100,shape2=100),digits=1)
[1] 0.5
[Exercise: Draw the prior distribution]
46. Lecture 2: Basics of Bayesian modeling
Example 1: Proportions
An example application
Now you observe that out of a random sample of 1,000 births, 511
are boys. What is your posterior probability that θ > 0.5?
Prior: Beta(a=100,b=100)
Data: 489 girls out of 1000.
Posterior:

f(θ | data) ∝ [θ^489 (1 − θ)^511] × [θ^(100−1) (1 − θ)^(100−1)] = θ^588 (1 − θ)^610 (21)

Since Beta(θ | a, b) ∝ θ^(a−1) (1 − θ)^(b−1), the posterior is
Beta(a=589, b=611).
Therefore the posterior probability that θ > 0.5 is:
> pbeta(0.5,shape1=589,shape2=611,lower.tail=FALSE)
which comes to approximately 0.26.
47. Lecture 2: Basics of Bayesian modeling
Example 2: Normal distribution
Normal distribution
The normal distribution is the most frequently used probability
model in psycholinguistics.
If x̄ is the sample mean, the sample size is n, and the sample variance
is known to be s², and if the prior on the mean μ is Normal(m, v), then:
It is pretty easy to derive (see the Lynch textbook) the posterior mean
m* and variance v* analytically:

v* = 1/(1/v + n/s²)    m* = v* (m/v + n x̄/s²) (22)

E[θ | x] = m × w1/(w1+w2) + x̄ × w2/(w1+w2),  w1 = v⁻¹, w2 = (s²/n)⁻¹ (23)

So: the posterior mean is a weighted mean of the prior mean and the
sample mean.
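Equation (22) is easy to compute directly. A sketch with invented numbers (the prior Normal(m = 60, v = 100) and the data x̄ = 70, s² = 50, n = 10 are my assumptions, chosen only for illustration):

```r
## Posterior mean and variance for a normal mean with known variance:
post_normal <- function(m, v, xbar, s2, n){
  v.star <- 1/(1/v + n/s2)               # posterior variance
  m.star <- v.star * (m/v + n*xbar/s2)   # posterior mean
  c(mean = m.star, var = v.star)
}
post_normal(m = 60, v = 100, xbar = 70, s2 = 50, n = 10)
## the posterior mean is about 69.5: the data dominate here
## because n/s2 is much larger than 1/v
```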
48. Lecture 2: Basics of Bayesian modeling
Example 2: Normal distribution
Normal distribution
1 The weight w1 is determined by the inverse of the prior
variance.
2 The weight w2 is determined by the inverse of the squared sample
standard error, (s²/n)⁻¹.
3 It is common in Bayesian statistics to talk about
precision = 1/variance, so that
1 w1 = v⁻¹ = the prior precision
2 w2 = (s²/n)⁻¹ = the data precision
49. Lecture 2: Basics of Bayesian modeling
Example 2: Normal distribution
Normal distribution
If w1 is very large compared to w2, then the posterior mean will be
determined mainly by the prior mean m:
E[θ | x] = m × w1/(w1+w2) + x̄ × w2/(w1+w2),  w1 = v⁻¹, w2 = (s²/n)⁻¹ (24)
If w2 is very large compared to w1, then the posterior mean will be
determined mainly by the sample mean ĀÆx:
E[θ | x] = m × w1/(w1+w2) + x̄ × w2/(w1+w2),  w1 = v⁻¹, w2 = (s²/n)⁻¹ (25)
50. Lecture 2: Basics of Bayesian modeling
Example 2: Normal distribution
An example application
[This is an old exam problem from Sheffield University]
Letās say there is a hormone measurement test that yields a
numerical value that can be positive or negative. We know the
following:
The doctor's prior: the 75% interval ("patient healthy") is
[−0.3, 0.3].
Data from the patient: x = 0.2, known s = 0.15.
Compute the posterior N(m*, v*).
I'll leave this as Homework 0 (solution next week). Hint: see
slide 20.
51. Lecture 2: Basics of Bayesian modeling
Example 3: Linear regression
Simple linear regression
We begin with a simple example. Let the response variable be
yi, i = 1, . . . , n, and let there be p predictors, x1i, . . . , xpi. Also, let

yi ~ N(μi, σ²),  μi = β0 + Σ βk xki (26)

(the summation is over the p predictors, i.e., k = 1, . . . , p).
We need to specify a prior distribution for the parameters:

βk ~ Normal(0, 100²)   log σ ~ Unif(−100, 100) (27)
52. Lecture 2: Basics of Bayesian modeling
Example 3: Linear regression
Better looking professors get better teaching evaluations
Source: Gelman and Hill 2007
> beautydata<-read.table("data/beauty.txt",header=T)
> ## Note: beauty level is centered.
> head(beautydata)
beauty evaluation
1 0.2015666 4.3
2 -0.8260813 4.5
3 -0.6603327 3.7
4 -0.7663125 4.3
5 1.4214450 4.4
6 0.5002196 4.2
> ## restate the data as a list for JAGS:
> data<-list(x=beautydata$beauty,
+ y=beautydata$evaluation)
53. Lecture 2: Basics of Bayesian modeling
Example 3: Linear regression
Better looking professors get better teaching evaluations
We literally follow the specification of the linear model given
above. We specify the model for the data frame row by row, using
a for loop, so that for each dependent variable value yi (the
evaluation score) we specify how we believe it was generated.

yi ~ Normal(μ[i], σ²),  i = 1, . . . , 463 (28)
μ[i] = β0 + β1 xi   Note: the predictor is centered (29)

Define priors on the βs and on σ:

β0 ~ Uniform(−10, 10)   β1 ~ Uniform(−10, 10) (30)
σ ~ Uniform(0, 100) (31)
54. Lecture 2: Basics of Bayesian modeling
Example 3: Linear regression
Better looking professors get better teaching evaluations
> library(rjags)
> cat("
+ model
+ {
+ ## specify model for data:
+ for(i in 1:463){
+ y[i] ~ dnorm(mu[i],tau)
+ mu[i] <- beta0 + beta1 * (x[i])
+ }
+ # priors:
+ beta0 ~ dunif(-10,10)
+ beta1 ~ dunif(-10,10)
+ sigma ~ dunif(0,100)
+ sigma2 <- pow(sigma,2)
+ tau <- 1/sigma2
+ }",
+ file="JAGSmodels/beautyexample1.jag" )
55. Lecture 2: Basics of Bayesian modeling
Example 3: Linear regression
Better looking professors get better teaching evaluations
Data from Gelman and Hill, 2007
Some things to note in JAGS syntax:
1 The normal distribution is defined in terms of precision, not
variance.
2 ~ means "is generated by".
3 <- is a deterministic assignment, like = in mathematics.
4 The model specification is declarative; order does not matter.
For example, we could have written the following lines in any order:
sigma ~ dunif(0,100)
sigma2 <- pow(sigma,2)
tau <- 1/sigma2
56. Lecture 2: Basics of Bayesian modeling
Example 3: Linear regression
Better looking professors get better teaching evaluations
> ## specify which variables you want to examine
> ## the posterior distribution of:
> track.variables<-c("beta0","beta1","sigma")
> ## define model:
> beauty.mod <- jags.model(
+ file = "JAGSmodels/beautyexample1.jag",
+ data=data,
+ n.chains = 1,
+ n.adapt =2000,
+ quiet=T)
> ## sample from posterior:
> beauty.res <- coda.samples(beauty.mod,
+ var = track.variables,
+ n.iter = 2000,
+ thin = 1 )
58. Lecture 2: Basics of Bayesian modeling
Example 3: Linear regression
Better looking professors get better teaching evaluations
Compare with the standard lm fit:
> lm_summary<-summary(lm(evaluation~beauty,
+ beautydata))
> round(lm_summary$coef,digits=2)
Estimate Std. Error t value Pr(>|t|)
(Intercept) 4.01 0.03 157.21 0
beauty 0.13 0.03 4.13 0
> round(lm_summary$sigma,digits=2)
[1] 0.55
Note that: (a) with uniform priors, we get Bayesian estimates
equal to the MLEs; (b) we also get uncertainty estimates for σ in
the Bayesian model.
59. Lecture 2: Basics of Bayesian modeling
Example 3: Linear regression
Better looking professors get better teaching evaluations
Posterior distributions of parameters
> densityplot(beauty.res)
[Plot: posterior density plots of beta0 (around 4.0), beta1 (around 0.13), and sigma (around 0.55).]
60. Lecture 2: Basics of Bayesian modeling
Example 3: Linear regression
Better looking professors get better teaching evaluations
Posterior distributions of parameters
> op<-par(mfrow=c(1,3),pty="s")
> traceplot(beauty.res)
[Plot: trace plots of beta0, beta1, and sigma over iterations 2000-4000.]
61. Lecture 2: Basics of Bayesian modeling
Example 3: Linear regression
Better looking professors get better teaching evaluations
Posterior distributions of parameters
> MCMCsamp<-as.matrix(beauty.res)
> op<-par(mfrow=c(1,3),pty="s")
> hist(MCMCsamp[,1],main=expression(beta[0]),
+ xlab="",freq=FALSE)
> hist(MCMCsamp[,2],main=expression(beta[1]),
+ xlab="",freq=FALSE)
> hist(MCMCsamp[,3],main=expression(sigma),
+ xlab="",freq=FALSE)
[Plot: histograms of the posterior samples of β0, β1, and σ.]
62. Lecture 2: Basics of Bayesian modeling
Example 3: Linear regression
Homework 1: Rats' weights
Source: Lunn et al 2012
Five measurements of a rat's weight, in grams, as a function of
some x (say some nutrition-based variable). Note that here we will
center the predictor in the model code.
First we load/enter the data:
> data<-list(x=c(8,15,22,29,36),
+ y=c(177,236,285,350,376))
Then we fit the linear model using lm, for comparison with the
Bayesian model:
> lm_summary_rats<-summary(fm<-lm(y~x,data))
> round(lm_summary_rats$coef,digits=3)
Estimate Std. Error t value Pr(>|t|)
(Intercept) 123.886 11.879 10.429 0.002
x 7.314 0.492 14.855 0.001
63. Lecture 2: Basics of Bayesian modeling
Example 3: Linear regression
Homework 2: Rats' weights (solution in next class)
Fit the following model in JAGS:
> cat("
+ model
+ {
+ ## specify model for data:
+ for(i in 1:5){
+ y[i] ~ dnorm(mu[i],tau)
+ mu[i] <- beta0 + beta1 * (x[i]-mean(x[]))
+ }
+ # priors:
+ beta0 ~ dunif(-500,500)
+ beta1 ~ dunif(-500,500)
+ tau <- 1/sigma2
+ sigma2 <-pow(sigma,2)
+ sigma ~ dunif(0,200)
+ }",
+ file="JAGSmodels/ratsexample2.jag" )
64. Lecture 2: Basics of Bayesian modeling
Example 4: Multiple regression
Multiple predictors
Source: Baayenās book
We fit log reading time to Trial id (centered), Native Language, and Sex.
The categorical variables are centered as well.
> lexdec<-read.table("data/lexdec.txt",header=TRUE)
> data<-lexdec[,c(1,2,3,4,5)]
> contrasts(data$NativeLanguage)<-contr.sum(2)
> contrasts(data$Sex)<-contr.sum(2)
> lm_summary_lexdec<-summary(fm<-lm(RT~scale(Trial,scale=F)+
+ NativeLanguage+Sex,data))
> round(lm_summary_lexdec$coef,digits=2)
Estimate Std. Error t value Pr(>|t|)
(Intercept) 6.41 0.01 1061.53 0.00
scale(Trial, scale = F) 0.00 0.00 -2.39 0.02
NativeLanguage1 -0.08 0.01 -14.56 0.00
Sex1 -0.03 0.01 -5.09 0.00
65. Lecture 2: Basics of Bayesian modeling
Example 4: Multiple regression
Multiple predictors
Preparing the data for JAGS:
> dat<-list(y=data$RT,
+ Trial=(data$Trial-mean(data$Trial)),
+ Lang=data$NativeLanguage,
+ Sex=data$Sex)
69. Lecture 2: Basics of Bayesian modeling
Example 4: Multiple regression
Multiple predictors
Here is the lm fit for comparison (here only Trial is centered):
> lm_summary_lexdec<-summary(
+ fm<-lm(RT~scale(Trial,scale=F)
+ +NativeLanguage+Sex,
+ lexdec))
> round(lm_summary_lexdec$coef,digits=2)[,1:2]
Estimate Std. Error
(Intercept) 6.29 0.01
scale(Trial, scale = F) 0.00 0.00
NativeLanguageOther 0.17 0.01
SexM 0.06 0.01
Note: We should have fit a linear mixed model here; I will
return to this later.
70. Lecture 2: Basics of Bayesian modeling
Example 5: Generalized linear models
GLMs
We consider the model

yi ~ Binomial(ni, pi),  logit(pi) = β0 + β1 (xi − x̄) (32)
A simple example is beetle data from Dobson et al 2010:
> beetledata<-read.table("data/beetle.txt",header=T)
> head(beetledata)
dose number killed
1 1.6907 59 6
2 1.7242 60 13
3 1.7552 62 18
4 1.7842 56 28
5 1.8113 63 52
6 1.8369 59 53
71. Lecture 2: Basics of Bayesian modeling
Example 5: Generalized linear models
GLMs
Prepare data for JAGS:
> dat<-list(x=beetledata$dose-mean(beetledata$dose),
+ n=beetledata$number,
+ y=beetledata$killed)
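The transcript skips the model itself (slides 72-74). A sketch of what it could look like, mirroring the beauty example; the uniform priors are the ones quoted on the homework slide, while the loop bound 8 (the number of dose groups in the beetle data) and the file name are my assumptions:

```r
## Sketch of a JAGS model for equation (32):
dir.create("JAGSmodels", showWarnings = FALSE)
cat("
model
{
  for(i in 1:8){
    ## number killed out of n[i] at (centered) dose x[i]:
    y[i] ~ dbin(p[i], n[i])
    logit(p[i]) <- beta0 + beta1 * x[i]
  }
  # priors:
  beta0 ~ dunif(0,pow(100,2))
  beta1 ~ dunif(0,pow(100,2))
}",
file = "JAGSmodels/beetleexample1.jag")
```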
75. Lecture 2: Basics of Bayesian modeling
Example 5: Generalized linear models
GLMs
The values match up with glm output:
> round(coef(glm(killed/number~scale(dose,scale=F),
+ weights=number,
+ family=binomial(),beetledata)),
+ digits=2)
(Intercept) scale(dose, scale = F)
0.74 34.27
76. Lecture 2: Basics of Bayesian modeling
Example 5: Generalized linear models
GLMs
> plot(glm.res)
[Plot: trace plots and marginal posterior densities of beta0 (centered near 0.74) and beta1 (centered near 34), N = 2000 samples each.]
77. Lecture 2: Basics of Bayesian modeling
Example 5: Generalized linear models
Homework 3: GLMs
We fit uniform priors to the coefficients β:
# priors:
beta0 ~ dunif(0,pow(100,2))
beta1 ~ dunif(0,pow(100,2))
Fit the beetle data again, using suitable normal distribution priors
for the coefficients beta0 and beta1. Does the posterior
distribution depend on the prior?
78. Lecture 2: Basics of Bayesian modeling
Prediction
GLMs: Predicting future/missing data
One important thing we can do is to predict the posterior
distribution of future or missing data.
One easy way to do this is to define how we expect the
predicted data to be generated.
This example revisits the earlier toy example from Lunn et al. on
rat data (slide 62).
> data<-list(x=c(8,15,22,29,36),
+ y=c(177,236,285,350,376))
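A sketch of the standard trick (following Lunn et al.): add a stochastic node for the unobserved y at a new x value to the rats model, and JAGS will sample its posterior predictive distribution along with the other unknowns. The new x value 45 and the file name are my choices:

```r
## Rats model with an added posterior predictive node at x = 45:
dir.create("JAGSmodels", showWarnings = FALSE)
cat("
model
{
  for(i in 1:5){
    y[i] ~ dnorm(mu[i],tau)
    mu[i] <- beta0 + beta1 * (x[i]-mean(x[]))
  }
  ## predicted weight at a new x value:
  mu.pred <- beta0 + beta1 * (45-mean(x[]))
  y.pred ~ dnorm(mu.pred,tau)
  # priors:
  beta0 ~ dunif(-500,500)
  beta1 ~ dunif(-500,500)
  tau <- 1/sigma2
  sigma2 <- pow(sigma,2)
  sigma ~ dunif(0,200)
}",
file = "JAGSmodels/ratspred.jag")
```

Tracking y.pred in track.variables then yields samples from its posterior predictive distribution.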
83. Lecture 2: Basics of Bayesian modeling
Concluding remarks
Summing up
1 In some cases Bayes' Theorem can be used analytically
(Examples 1, 2).
2 It is relatively easy to define different kinds of Bayesian
models using programming languages like JAGS.
3 Today we saw some examples from linear models (Examples
3-5).
4 Next week: Linear mixed models.