Lecture 2: Basics of Bayesian modeling
Shravan Vasishth
Department of Linguistics
University of Potsdam, Germany
October 9, 2014
1 / 83
Lecture 2: Basics of Bayesian modeling
Some important concepts from lecture 1
The bivariate distribution
Independent (normal) random variables
If we have two random variables U0, U1 and we examine their
joint distribution, we can draw a 3-d plot showing u0, u1, and
f(u0, u1). E.g., with two independent random variables
U0, U1 ∼ N(0,1):
[Figure: 3-d plot of the joint density f(u0, u1) of two independent N(0,1) random variables, with u0 and u1 each ranging from āˆ’3 to 3.]
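For readers who want to reproduce such a surface, here is a minimal R sketch (under the independence assumption the joint density is just the product of the two marginal densities; the grid size and viewing angles are arbitrary choices):

## grid of (u0, u1) values
u0 <- seq(-3, 3, length.out = 50)
u1 <- seq(-3, 3, length.out = 50)
## joint density of two independent N(0,1) variables: f(u0,u1) = dnorm(u0)*dnorm(u1)
z <- outer(u0, u1, function(a, b) dnorm(a) * dnorm(b))
persp(u0, u1, z, theta = 30, phi = 30, zlab = "f(u0,u1)")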
2 / 83
Lecture 2: Basics of Bayesian modeling
Some important concepts from lecture 1
The bivariate distribution
Correlated random variables (r = 0.6)
The random variables U, W could be correlated positively. . .
[Figure: 3-d plot of the joint density of two positively correlated (r = 0.6) normal random variables.]
3 / 83
Lecture 2: Basics of Bayesian modeling
Some important concepts from lecture 1
The bivariate distribution
Correlated random variables (r = 0.6)
. . . or negatively:
[Figure: 3-d plot of the joint density of two negatively correlated normal random variables.]
4 / 83
Lecture 2: Basics of Bayesian modeling
Some important concepts from lecture 1
Bivariate distributions
This is why, when talking about two normal random variables U0
and U1, we have to talk about
1 U0ā€™s mean and variance
2 U1ā€™s mean and variance
3 the correlation r between them
A mathematically convenient way to talk about it is in terms of the
variance-covariance matrix we saw in lecture 1:
\Sigma_u = \begin{pmatrix} \sigma_{u0}^2 & \rho_u \sigma_{u0}\sigma_{u1} \\ \rho_u \sigma_{u0}\sigma_{u1} & \sigma_{u1}^2 \end{pmatrix} = \begin{pmatrix} 0.61^2 & 0.51 \times 0.61 \times 0.23 \\ 0.51 \times 0.61 \times 0.23 & 0.23^2 \end{pmatrix}   (1)
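For concreteness, here is a small R sketch that builds this matrix from the standard deviations and the correlation used in (1):

sd_u0 <- 0.61; sd_u1 <- 0.23; rho <- 0.51
Sigma_u <- matrix(c(sd_u0^2,             rho * sd_u0 * sd_u1,
                    rho * sd_u0 * sd_u1, sd_u1^2),
                  nrow = 2, byrow = TRUE)
Sigma_u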
5 / 83
Lecture 2: Basics of Bayesian modeling
Introduction
Todayā€™s goals
In this second lecture, my goals are to
1 Give you a feeling for how Bayesian analysis works using two
simple examples that need only basic mathematical operations
(addition, subtraction, division, multiplication).
2 Start thinking about priors for parameters in preparation for
ļ¬tting linear mixed models.
3 Start ļ¬tting linear regression models in JAGS.
I will assign some homework which is designed to help you
understand these concepts. Solutions will be discussed next week.
6 / 83
Lecture 2: Basics of Bayesian modeling
Some preliminaries: Probability distributions
Random variables
A random variable X is a function X : S → R that associates to
each outcome w ∈ S exactly one number X(w) = x.
SX is the set of all the xā€™s (all the possible values of X, the support of X).
I.e., x ∈ SX.
Good example: number of coin tosses till H
X : w ↦ x
w: H, TH, TTH, . . . (infinite)
x = 0, 1, 2, . . . ; x ∈ SX
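A quick simulation sketch of this random variable (rgeom counts the number of tails, i.e. failures, before the first heads, which matches the support 0, 1, 2, ... above; the sample size is arbitrary):

set.seed(1)
x <- rgeom(10000, prob = 0.5)  ## number of tails before the first heads
table(x)[1:5]                  ## most of the mass is on small values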
7 / 83
Lecture 2: Basics of Bayesian modeling
Some preliminaries: Probability distributions
Random variables
Every discrete (continuous) random variable X has associated with
it a probability mass (distribution) function (pmf, pdf).
PMF is used for discrete distributions and PDF for continuous.
pX : SX → [0,1]   (2)
defined by
pX(x) = P(X(w) = x), x ∈ SX   (3)
[Note: Books sometimes abuse notation by overloading the
meaning of X. They usually have: pX(x) = P(X = x), x ∈ SX]
8 / 83
Lecture 2: Basics of Bayesian modeling
Some preliminaries: Probability distributions
Random variables
Probability density functions (continuous case) or probability mass
functions (discrete case) are functions that assign probabilities or
relative frequencies to all events in a sample space.
The expression

X ∼ f(Ā·)   (4)

means that the random variable X has pdf/pmf f(Ā·). For example,
if we say that X ∼ Normal(µ, σ²), we are assuming that the pdf is

f(x) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left[ -\frac{(x-\mu)^2}{2\sigma^2} \right]   (5)
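As a quick sanity check, the formula above and R's built-in dnorm give the same value (the numbers here are arbitrary):

x <- 1.3; mu <- 0; sigma2 <- 1
(1 / sqrt(2 * pi * sigma2)) * exp(-(x - mu)^2 / (2 * sigma2))
dnorm(x, mean = mu, sd = sqrt(sigma2))  ## same value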
9 / 83
Lecture 2: Basics of Bayesian modeling
Some preliminaries: Probability distributions
Random variables
We also need a cumulative distribution function or cdf because,
in the continuous case, P(X=some point value) is zero and we
therefore need a way to talk about P(X in a speciļ¬c range). cdfs
serve that purpose.
In the continuous case, the cdf or distribution function is deļ¬ned
as:
P(X < k) = F(k) = the area under the curve to the left of k   (6)
For example, suppose X ∼ Normal(600, 50).
10 / 83
Lecture 2: Basics of Bayesian modeling
Some preliminaries: Probability distributions
Random variables
We can ask for Prob(X < 600):
> pnorm(600,mean=600,sd=sqrt(50))
[1] 0.5
We can ask for the quantile that has 50% of the probability to the left of
it:
> qnorm(0.5,mean=600,sd=sqrt(50))
[1] 600
. . . or to the right of it:
> qnorm(0.5,mean=600,sd=sqrt(50),lower.tail=FALSE)
[1] 600
11 / 83
Lecture 2: Basics of Bayesian modeling
Some preliminaries: Probability distributions
Random variables
We can also calculate the probability that X lies between 590 and 610:
Prob(590 < X < 610):
> pnorm(610,mean=600,sd=sqrt(50))-pnorm(590,mean=600,sd=sqrt(50))
[1] 0.8427008
12 / 83
Lecture 2: Basics of Bayesian modeling
Some preliminaries: Probability distributions
Random variables
Another way to compute area under the curve is by simulation:
> x<-rnorm(10000,mean=600,sd=sqrt(50))
> ## proportion of cases where
> ## x is less than 590:
> mean(x<590)
[1] 0.0776
> ## theoretical value:
> pnorm(590,mean=600,sd=sqrt(50))
[1] 0.0786496
We will be doing this a lot.
13 / 83
Lecture 2: Basics of Bayesian modeling
Some preliminaries: Probability distributions
Random variables
E.g., in linguistics we treat the following as continuous random variables:
1 reading time: here the random variable (RV) X has possible
values w ranging from 0 ms to some upper bound b ms (or
maybe unbounded?), and the RV X maps each possible value
w to the corresponding number (0 ms to 0, 1 ms to 1, etc.).
2 acceptability ratings (technically not correct; but people
generally treat ratings as continuous, at least in
psycholinguistics)
3 EEG signals
In this course, due to time constraints, we will focus almost
exclusively on reading time data (eye-tracking and self-paced
reading).
14 / 83
Lecture 2: Basics of Bayesian modeling
Some preliminaries: Probability distributions
Normal random variables
We will also focus mostly on the normal distribution.
f_X(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}, \qquad -\infty < x < \infty.   (7)
It is conventional to write X ∼ N(µ, σ²).
Important note: The normal distribution is represented differently
in different programming languages:
1 R: dnorm(mean, sd)
2 JAGS: dnorm(mean, precision), where precision = 1/variance
3 Stan: normal(mean, sigma)
I apologize in advance for all the confusion this will cause.
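For example, the same Normal(600, 50) density would be written roughly as follows (the JAGS line is model code, not R):

## R: second and third arguments are the mean and the sd
dnorm(600, mean = 600, sd = sqrt(50))
## JAGS (model code): second argument is the precision 1/variance
##   y ~ dnorm(600, 1/50)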
15 / 83
Lecture 2: Basics of Bayesian modeling
Some preliminaries: Probability distributions
Normal random variables
[Figure: density of the standard normal N(0,1), plotted from āˆ’6 to 6.]
16 / 83
Lecture 2: Basics of Bayesian modeling
Some preliminaries: Probability distributions
Normal random variables
Standard or unit normal random variable:
If X is normally distributed with parameters µ and σ², then
Z = (X āˆ’ µ)/σ is normally distributed with parameters 0, 1.
We conventionally write Φ(x) for the CDF:

\Phi(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{x} e^{-y^2/2}\, dy \qquad \text{where } y = (x-\mu)/\sigma   (8)

In R, we can type pnorm(x) to find out Φ(x). Suppose x = āˆ’2:
> pnorm(-2)
[1] 0.02275013
17 / 83
Lecture 2: Basics of Bayesian modeling
Some preliminaries: Probability distributions
Normal random variables
If Z is a standard normal random variable (SNRV) then
P(Z \leq -x) = P(Z > x), \qquad -\infty < x < \infty   (9)
We can check this with R:
> ##P(Z < -x):
> pnorm(-2)
[1] 0.02275013
> ##P(Z > x):
> pnorm(2,lower.tail=FALSE)
[1] 0.02275013
18 / 83
Lecture 2: Basics of Bayesian modeling
Some preliminaries: Probability distributions
Normal random variables
Although the following expression looks scary:

\Phi(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{x} e^{-y^2/2}\, dy \qquad \text{where } y = (x-\mu)/\sigma   (10)

all it is saying is ā€œfind the area under the normal curve, ranging
from āˆ’āˆž to x.ā€ Always visualize! (Here: Φ(āˆ’2).)
[Figure: the standard normal density with the area Φ(āˆ’2), i.e. the region to the left of āˆ’2, shaded.]
19 / 83
Lecture 2: Basics of Bayesian modeling
Some preliminaries: Probability distributions
Normal random variables
Since Z = (X āˆ’ µ)/σ is an SNRV whenever X is normally
distributed with parameters µ and σ², the CDF of X can be
expressed as:

F_X(a) = P\{X \leq a\} = P\left(\frac{X-\mu}{\sigma} \leq \frac{a-\mu}{\sigma}\right) = \Phi\left(\frac{a-\mu}{\sigma}\right)   (11)

Practical application: Suppose you know that X ∼ N(µ, σ²), and you
know µ but not σ. If you know that a 95% confidence interval is [āˆ’q, +q],
then you can work out σ by
a. computing the z value of a Z ∼ N(0,1) that has 2.5% of the area to its right:
> round(qnorm(0.025,lower.tail=FALSE),digits=2)
[1] 1.96
b. solving for σ in z = (q āˆ’ µ)/σ.
We will use this fact in Homework 0.
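A minimal sketch of steps (a) and (b) with made-up numbers (q = 3.92 and mu = 0 are purely illustrative, not the homework values):

q  <- 3.92  ## hypothetical upper limit of a 95% interval
mu <- 0     ## hypothetical (known) mean
z  <- qnorm(0.025, lower.tail = FALSE)  ## 1.96
(sigma <- (q - mu) / z)                 ## about 2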
20 / 83
Lecture 2: Basics of Bayesian modeling
Some preliminaries: Probability distributions
Normal random variables
Summary of useful commands:
## pdf of normal:
dnorm(x, mean = 0, sd = 1)
## compute area under the curve:
pnorm(q, mean = 0, sd = 1)
## find out the quantile that has
## area (probability) p under the curve:
qnorm(p, mean = 0, sd = 1)
## generate normally distributed data of size n:
rnorm(n, mean = 0, sd = 1)
21 / 83
Lecture 2: Basics of Bayesian modeling
Some preliminaries: Probability distributions
Likelihood function (Normal distribution)
Letā€™s assume that we have generated a data point from a particular
normal distribution:
x ∼ N(µ = 600, σ² = 50).
> x<-rnorm(1,mean=600,sd=sqrt(50))
Given x, and different values of µ, we can determine which µ is
most likely to have generated x. You can eyeball the result:
[Figure: dnorm(x, mean = mu, sd = sqrt(50)) plotted against mu, for mu from 400 to 800.]
22 / 83
Lecture 2: Basics of Bayesian modeling
Some preliminaries: Probability distributions
Likelihood function
Suppose that we had generated 10 independent values of x:
> x<-rnorm(10,mean=600,sd=sqrt(50))
For each of the xā€™s, we can compute the likelihood that the mean of the
Normal distribution that generated the data is µ, for different
values of µ:
> ## mu = 500
> dnorm(x,mean=500,sd=sqrt(50))
[1] 2.000887e-48 6.542576e-43 3.383346e-31 1.423345e-46 7.
[6] 1.686776e-41 3.214674e-39 2.604492e-40 6.892547e-50 1.
Since each of the xā€™s is independently generated, the total
likelihood of the 10 xā€™s is:

f(x_1) \times f(x_2) \times \cdots \times f(x_{10})   (12)

for some µ in f(Ā·) = Normal(µ, 50).
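As an aside, directly multiplying many small densities quickly underflows in floating point, which is one reason to work with log likelihoods (next slide). A small sketch, assuming the vector x generated above:

prod(dnorm(x, mean = 500, sd = sqrt(50)))            ## typically underflows to 0
sum(dnorm(x, mean = 500, sd = sqrt(50), log = TRUE)) ## the log likelihood stays finite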
23 / 83
Lecture 2: Basics of Bayesian modeling
Some preliminaries: Probability distributions
Likelihood function
Itā€™s computationally easier to just take logs and sum them up (log
likelihood):
\log f(x_1) + \log f(x_2) + \cdots + \log f(x_{10})   (13)
> ## mu = 500
> sum(dnorm(x,mean=500,sd=sqrt(50),log=TRUE))
[1] -986.1017
24 / 83
Lecture 2: Basics of Bayesian modeling
Some preliminaries: Probability distributions
(Log) likelihood function
We can now plot, for different values of µ, the likelihood that each
µ generated the 10 data points:
> mu<-seq(400,800,by=0.1)
> liks<-rep(NA,length(mu))
> for(i in 1:length(mu)){
+ liks[i]<-sum(dnorm(x,mean=mu[i],sd=sqrt(50),log=TRUE))
+ }
> plot(mu,liks,type="l")
[Figure: the log likelihood liks plotted against mu, for mu from 400 to 800.]
25 / 83
Lecture 2: Basics of Bayesian modeling
Some preliminaries: Probability distributions
(Log) likelihood function
1 Itā€™s intuitively clear that weā€™d probably want to declare the
value of µ that brings us to the ā€œhighestā€ point in this figure.
2 This is the maximum likelihood estimate, MLE.
3 Practical implication: In frequentist statistics, our data
vector x is assumed to be X ∼ N(µ, σ²), and we attempt to
figure out the MLE, i.e., the estimates of µ and σ² that
would maximize the likelihood.
4 In Bayesian models, when we assume a uniform prior, we will
get an estimate of the parameters which coincides with the
MLE (examples coming soon).
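A small sketch of reading the (approximate) MLE off the grid computed on the previous slide (assuming mu and liks from that code); for this normal model it should essentially coincide with the sample mean:

mu[which.max(liks)]  ## grid value of mu with the highest log likelihood
mean(x)              ## sample mean, the MLE of mu for a normal model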
26 / 83
Lecture 2: Basics of Bayesian modeling
Some preliminaries: Probability distributions
Bayesian modeling examples
Next, I will work through ļ¬ve relatively simple examples that use
Bayesā€™ Theorem.
1 Example 1: Proportions
2 Example 2: Normal distribution
3 Example 3: Linear regression with one predictor
4 Example 4: Linear regression with multiple predictors
5 Example 5: Generalized linear models example (binomial link).
27 / 83
Lecture 2: Basics of Bayesian modeling
Example 1: Proportions
Proportions
1 Recall the binomial distribution: Let X be the number of successes in n
trials. We generally assume that
X ∼ Binomial(n, θ), θ unknown.
2 Suppose we have 46 successes out of 100. We generally use
the empirically observed proportion 46/100 as our estimate of
θ. I.e., we assume that the generating distribution is
X ∼ Binomial(n = 100, θ = 0.46).
3 This is because, for all possible values of θ, going from 0 to 1,
0.46 has the highest likelihood.
28 / 83
Lecture 2: Basics of Bayesian modeling
Example 1: Proportions
Proportions
> dbinom(x=46,size=100,0.4)
[1] 0.03811036
> dbinom(x=46,size=100,0.46)
[1] 0.07984344
> dbinom(x=46,size=100,0.5)
[1] 0.0579584
> dbinom(x=46,size=100,0.6)
[1] 0.001487007
Actually, we could take a whole range of values of θ from 0 to 1
and plot the resulting probabilities.
29 / 83
Lecture 2: Basics of Bayesian modeling
Example 1: Proportions
Proportions
> theta<-seq(0,1,by=0.01)
> plot(theta,dbinom(x=46,size=100,theta),
+ xlab=expression(theta),type="l")
[Figure: the binomial likelihood dbinom(x=46, size=100, theta) plotted as a function of θ.]
This is the likelihood function for the binomial distribution, and
we will write it as f(data | θ). It is a function of θ.
30 / 83
Lecture 2: Basics of Bayesian modeling
Example 1: Proportions
Proportions
Since Binomial(x, n, θ) = \binom{n}{x} \theta^x (1-\theta)^{n-x}, we can see that:

f(data \mid \theta) \propto \theta^{46} (1-\theta)^{54}   (14)

We are now going to use Bayesā€™ theorem to work out the posterior
distribution of θ given the data:

\underbrace{f(\theta \mid data)}_{\text{posterior}} \propto \underbrace{f(data \mid \theta)}_{\text{likelihood}} \; \underbrace{f(\theta)}_{\text{prior}}   (15)

All thatā€™s missing here is the prior distribution f(θ). So letā€™s try to
define a prior for θ.
31 / 83
Lecture 2: Basics of Bayesian modeling
Example 1: Proportions
Proportions
To deļ¬ne a prior for q, we will use a distribution called the Beta
distribution, which takes two parameters, a and b.
We can plot some Beta distributions to get a feel for what these
parameters do.
We are going to plot
1 Beta(a=2,b=2)
2 Beta(a=3,b=3)
3 Beta(a=6,b=6)
4 Beta(a=10,b=10)
32 / 83
Lecture 2: Basics of Bayesian modeling
Example 1: Proportions
Proportions
[Figure: Beta densities for a=2,b=2; a=3,b=3; a=6,b=6; and a=60,b=60.]
33 / 83
Lecture 2: Basics of Bayesian modeling
Example 1: Proportions
Proportions
Each successive density expresses increasing certainty about θ
being centered around 0.5; notice that the spread around 0.5
decreases as a and b increase.
1 Beta(a=2,b=2)
2 Beta(a=3,b=3)
3 Beta(a=6,b=6)
4 Beta(a=10,b=10)
34 / 83
Lecture 2: Basics of Bayesian modeling
Example 1: Proportions
Proportions
1 If we donā€™t have much prior information, we could use a small
a and b (a=b=1 gives the uniform prior, and a=b=2 is nearly
flat); such a prior is called an uninformative or non-informative prior.
2 If we have a lot of prior knowledge and/or a strong belief that
θ has a particular value, we can use larger a, b to reflect our
greater certainty about the parameter.
3 You can think of the parameter a as referring to the number of
successes, and the parameter b to the number of failures.
So the beta distribution can be used to define the prior distribution
of θ.
35 / 83
Lecture 2: Basics of Bayesian modeling
Example 1: Proportions
Proportions
Just for the sake of illustration, letā€™s take four different beta priors,
each reflecting increasing certainty.
1 Beta(a=2,b=2)
2 Beta(a=3,b=3)
3 Beta(a=6,b=6)
4 Beta(a=21,b=21)
Each (except perhaps the first) reflects a belief that θ = 0.5, with
varying degrees of uncertainty.
Note an important fact: Beta(θ | a, b) ∝ Īø^{a-1} (1āˆ’Īø)^{b-1}.
This is because the Beta distribution is:

f(\theta \mid a, b) = \frac{\Gamma(a+b)}{\Gamma(a)\,\Gamma(b)} \theta^{a-1}(1-\theta)^{b-1}
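A minimal R sketch for drawing a few such Beta priors on one plot (dbeta is R's built-in Beta density; the line types are arbitrary):

theta <- seq(0, 1, by = 0.01)
plot(theta, dbeta(theta, shape1 = 2,  shape2 = 2),  type = "l",
     ylim = c(0, 6), ylab = "density")
lines(theta, dbeta(theta, shape1 = 6,  shape2 = 6),  lty = 2)
lines(theta, dbeta(theta, shape1 = 21, shape2 = 21), lty = 3)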
36 / 83
Lecture 2: Basics of Bayesian modeling
Example 1: Proportions
Proportions
Now we just need to plug in the likelihood and the prior to get the
posterior:
f(\theta \mid data) \propto f(data \mid \theta)\, f(\theta)   (16)
The four corresponding posterior distributions would be as follows
(I hope I got the sums right!).
37 / 83
Lecture 2: Basics of Bayesian modeling
Example 1: Proportions
Proportions
f(\theta \mid data) \propto [\theta^{46}(1-\theta)^{54}][\theta^{2-1}(1-\theta)^{2-1}] = \theta^{47}(1-\theta)^{55}   (17)

f(\theta \mid data) \propto [\theta^{46}(1-\theta)^{54}][\theta^{3-1}(1-\theta)^{3-1}] = \theta^{48}(1-\theta)^{56}   (18)

f(\theta \mid data) \propto [\theta^{46}(1-\theta)^{54}][\theta^{6-1}(1-\theta)^{6-1}] = \theta^{51}(1-\theta)^{59}   (19)

f(\theta \mid data) \propto [\theta^{46}(1-\theta)^{54}][\theta^{21-1}(1-\theta)^{21-1}] = \theta^{66}(1-\theta)^{74}   (20)
38 / 83
Lecture 2: Basics of Bayesian modeling
Example 1: Proportions
Proportions
1 We can now visualize each of these triplets of priors,
likelihoods and posteriors.
2 Note that I use the beta to model the likelihood because this
allows me to visualize all three (prior, lik., posterior) in the
same plot.
3 I ļ¬rst show the plot just for the prior
q ā‡  Beta(a = 2,a = 2)
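A sketch of how such a plot can be drawn: the likelihood θ^46(1āˆ’Īø)^54 is rescaled as a Beta(47, 55) density so that prior, likelihood, and posterior can be drawn on the same scale (here for the Beta(2,2) prior, whose posterior is Beta(48, 56)):

theta <- seq(0, 1, by = 0.001)
plot(theta, dbeta(theta, 48, 56), type = "l", ylab = "density")  ## posterior
lines(theta, dbeta(theta, 47, 55), lty = 2)                      ## likelihood (rescaled)
lines(theta, dbeta(theta, 2, 2),   lty = 3)                      ## prior
legend("topright", lty = 1:3, legend = c("post", "lik", "prior"))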
39 / 83
Lecture 2: Basics of Bayesian modeling
Example 1: Proportions
Proportions
Beta(2,2) prior: posterior is shifted just a bit to the right compared to the likelihood
[Figure: prior Beta(2,2), (rescaled) likelihood, and posterior plotted over θ.]
40 / 83
Lecture 2: Basics of Bayesian modeling
Example 1: Proportions
Proportions
Beta(6,6) prior: Posterior shifts even more towards the prior
[Figure: prior Beta(6,6), (rescaled) likelihood, and posterior plotted over θ.]
41 / 83
Lecture 2: Basics of Bayesian modeling
Example 1: Proportions
Proportions
Beta(21,21) prior: Posterior shifts even more towards the prior
[Figure: prior Beta(21,21), (rescaled) likelihood, and posterior plotted over θ.]
42 / 83
Lecture 2: Basics of Bayesian modeling
Example 1: Proportions
Proportions
Basically, the posterior is a compromise between the prior and the
likelihood.
1 When the prior has high uncertainty or we have a lot of data,
the likelihood will dominate.
2 When the prior has high certainty (like the Beta(21,21) case),
and we have very little data, then the prior will dominate.
So, Bayesian methods are particularly important when you have
little data but a whole lot of expert knowledge.
But they are also useful for standard psycholinguistic research, as I
hope to demonstrate.
43 / 83
Lecture 2: Basics of Bayesian modeling
Example 1: Proportions
An example application
(This exercise is from a Copenhagen course I took in 2012. I quote it
almost verbatim).
ā€œThe French mathematician Pierre-Simon Laplace (1749-1827) was the
first person to show definitively that the proportion of female births in
the French population was less than 0.5, in the late 18th century, using a
Bayesian analysis based on a uniform prior distribution. Suppose you
were doing a similar analysis but you had more definite prior beliefs about
the ratio of male to female births. In particular, if θ represents the
proportion of female births in a given population, you are willing to place
a Beta(100,100) prior distribution on θ.
1 Show that this means you are more than 95% sure that θ is
between 0.4 and 0.6, although you are ambivalent as to whether it
is greater or less than 0.5.
2 Now you observe that out of a random sample of 1,000 births, 511
are boys. What is your posterior probability that θ > 0.5?ā€
44 / 83
Lecture 2: Basics of Bayesian modeling
Example 1: Proportions
An example application
Show that this means you are more than 95% sure that θ is
between 0.4 and 0.6, although you are ambivalent as to whether it
is greater or less than 0.5.
Prior: Beta(a=100,b=100)
> round(qbeta(0.025,shape1=100,shape2=100),digits=1)
[1] 0.4
> round(qbeta(0.975,shape1=100,shape2=100),digits=1)
[1] 0.6
> ## ambivalent as to whether theta <0.5 or not:
> round(pbeta(0.5,shape1=100,shape2=100),digits=1)
[1] 0.5
[Exercise: Draw the prior distribution]
45 / 83
Lecture 2: Basics of Bayesian modeling
Example 1: Proportions
An example application
Now you observe that out of a random sample of 1,000 births, 511
are boys. What is your posterior probability that θ > 0.5?
Prior: Beta(a=100,b=100)
Data: 489 girls out of 1000.
Posterior:
f(\theta \mid data) \propto [\theta^{489}(1-\theta)^{511}][\theta^{100-1}(1-\theta)^{100-1}] = \theta^{588}(1-\theta)^{610}   (21)

Since Beta(θ | a, b) ∝ Īø^{aāˆ’1}(1āˆ’Īø)^{bāˆ’1}, the posterior is
Beta(a=589, b=611).
Therefore the posterior probability that θ > 0.5 is the area under this Beta density to the right of 0.5:
> pbeta(0.5,shape1=589,shape2=611,lower.tail=FALSE)
This comes to about 0.26. (The posterior median, qbeta(0.5,shape1=589,shape2=611), is 0.4908282, so most of the posterior mass lies below 0.5.)
46 / 83
Lecture 2: Basics of Bayesian modeling
Example 2: Normal distribution
Normal distribution
The normal distribution is the most frequently used probability
model in psycholinguistics.
If xĢ„ is the sample mean, the sample size is n, the sample variance s² is known,
and the prior on the mean µ is Normal(m, v), then
it is pretty easy to derive (see the Lynch textbook) the posterior mean
m* and variance v* analytically:

v^* = \frac{1}{\frac{1}{v} + \frac{n}{s^2}}, \qquad m^* = v^*\left(\frac{m}{v} + \frac{n\bar{x}}{s^2}\right)   (22)

E[\theta \mid x] = m \cdot \frac{w_1}{w_1+w_2} + \bar{x} \cdot \frac{w_2}{w_1+w_2}, \qquad w_1 = v^{-1}, \; w_2 = (s^2/n)^{-1}   (23)

So: the posterior mean is a weighted mean of the prior mean and
the sample mean.
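A direct translation of these formulas into a small R function (a sketch; m and v are the prior mean and variance, and xbar, s2, n summarize the data):

post_normal <- function(m, v, xbar, s2, n) {
  v_star <- 1 / (1 / v + n / s2)              ## posterior variance
  m_star <- v_star * (m / v + n * xbar / s2)  ## posterior mean
  c(mean = m_star, var = v_star)
}
## with a very flat prior, the posterior mean is essentially the sample mean:
post_normal(m = 0, v = 10000, xbar = 600, s2 = 50, n = 10)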
47 / 83
Lecture 2: Basics of Bayesian modeling
Example 2: Normal distribution
Normal distribution
1 The weight w1 is the inverse of the prior variance.
2 The weight w2 is the inverse of the squared standard error
(the sampling variance of the mean, s²/n).
3 It is common in Bayesian statistics to talk about
precision = 1/variance, so that
1 w1 = v⁻¹ = precision of the prior
2 w2 = (s²/n)⁻¹ = precision of the data
48 / 83
Lecture 2: Basics of Bayesian modeling
Example 2: Normal distribution
Normal distribution
If w1 is very large compared to w2, then the posterior mean will be
determined mainly by the prior mean m:

E[\theta \mid x] = m \cdot \frac{w_1}{w_1+w_2} + \bar{x} \cdot \frac{w_2}{w_1+w_2}, \qquad w_1 = v^{-1}, \; w_2 = (s^2/n)^{-1}   (24)

If w2 is very large compared to w1, then the posterior mean will be
determined mainly by the sample mean xĢ„:

E[\theta \mid x] = m \cdot \frac{w_1}{w_1+w_2} + \bar{x} \cdot \frac{w_2}{w_1+w_2}, \qquad w_1 = v^{-1}, \; w_2 = (s^2/n)^{-1}   (25)
49 / 83
Lecture 2: Basics of Bayesian modeling
Example 2: Normal distribution
An example application
[This is an old exam problem from Sheffield University]
Letā€™s say there is a hormone measurement test that yields a
numerical value that can be positive or negative. We know the
following:
The doctorā€™s prior: 75% interval (ā€œpatient healthyā€) is
[-0.3,0.3].
Data from patient: x = 0.2, known s = 0.15.
Compute posterior N(m*,v*).
Iā€™ll leave this as Homework 0 (solution next week). Hint: see
slide 20.
50 / 83
Lecture 2: Basics of Bayesian modeling
Example 3: Linear regression
Simple linear regression
We begin with a simple example. Let the response variable be
y_i, i = 1,...,n, and let there be p predictors, x_{1i},...,x_{pi}. Also, let

y_i \sim N(\mu_i, \sigma^2), \qquad \mu_i = \beta_0 + \sum_k \beta_k x_{ki}   (26)

(the summation is over the p predictors, i.e., k = 1,...,p).
We need to specify a prior distribution for the parameters:

\beta_k \sim Normal(0, 100^2), \qquad \log\sigma \sim Unif(-100, 100)   (27)
51 / 83
Lecture 2: Basics of Bayesian modeling
Example 3: Linear regression
Better looking professors get better teaching evaluations
Source: Gelman and Hill 2007
> beautydata<-read.table("data/beauty.txt",header=T)
> ## Note: beauty level is centered.
> head(beautydata)
beauty evaluation
1 0.2015666 4.3
2 -0.8260813 4.5
3 -0.6603327 3.7
4 -0.7663125 4.3
5 1.4214450 4.4
6 0.5002196 4.2
> ## restate the data as a list for JAGS:
> data<-list(x=beautydata$beauty,
+ y=beautydata$evaluation)
52 / 83
Lecture 2: Basics of Bayesian modeling
Example 3: Linear regression
Better looking professors get better teaching evaluations
We literally follow the specification of the linear model given
above. We specify the model for the data frame row by row, using
a for loop, so that for each dependent variable value y_i (the
evaluation score) we specify how we believe it was generated.

y_i \sim Normal(\mu[i], \sigma^2), \qquad i = 1,\dots,463   (28)

\mu[i] = \beta_0 + \beta_1 x_i \qquad \text{(note: the predictor is centered)}   (29)

Define priors on the β's and on σ:

\beta_0 \sim Uniform(-10, 10), \qquad \beta_1 \sim Uniform(-10, 10)   (30)

\sigma \sim Uniform(0, 100)   (31)
53 / 83
Lecture 2: Basics of Bayesian modeling
Example 3: Linear regression
Better looking professors get better teaching evaluations
> library(rjags)
> cat("
+ model
+ {
+ ## specify model for data:
+ for(i in 1:463){
+ y[i] ~ dnorm(mu[i],tau)
+ mu[i] <- beta0 + beta1 * (x[i])
+ }
+ # priors:
+ beta0 ~ dunif(-10,10)
+ beta1 ~ dunif(-10,10)
+ sigma ~ dunif(0,100)
+ sigma2 <- pow(sigma,2)
+ tau <- 1/sigma2
+ }",
+ file="JAGSmodels/beautyexample1.jag" )
54 / 83
Lecture 2: Basics of Bayesian modeling
Example 3: Linear regression
Better looking professors get better teaching evaluations
Data from Gelman and Hill, 2007
Some things to note in JAGS syntax:
1 The normal distribution is defined in terms of precision, not
variance.
2 ~ means ā€œis generated byā€.
3 <- is a deterministic assignment, like = in mathematics.
4 The model specification is declarative: order does not matter.
For example, we could have written the following lines in any order:
sigma ~ dunif(0,100)
sigma2 <- pow(sigma,2)
tau <- 1/sigma2
55 / 83
Lecture 2: Basics of Bayesian modeling
Example 3: Linear regression
Better looking professors get better teaching evaluations
> ## specify which variables you want to examine
> ## the posterior distribution of:
> track.variables<-c("beta0","beta1","sigma")
> ## define model:
> beauty.mod <- jags.model(
+ file = "JAGSmodels/beautyexample1.jag",
+ data=data,
+ n.chains = 1,
+ n.adapt =2000,
+ quiet=T)
> ## sample from posterior:
> beauty.res <- coda.samples(beauty.mod,
+ var = track.variables,
+ n.iter = 2000,
+ thin = 1 )
56 / 83
Lecture 2: Basics of Bayesian modeling
Example 3: Linear regression
Better looking professors get better teaching evaluations
> round(summary(beauty.res)$statistics[,1:2],digits=2)
Mean SD
beta0 4.01 0.03
beta1 0.13 0.03
sigma 0.55 0.02
> round(summary(beauty.res)$quantiles[,c(1,3,5)],digits=2)
2.5% 50% 97.5%
beta0 3.96 4.01 4.06
beta1 0.07 0.13 0.19
sigma 0.51 0.55 0.58
57 / 83
Lecture 2: Basics of Bayesian modeling
Example 3: Linear regression
Better looking professors get better teaching evaluations
Compare with standard lm ļ¬t:
> lm_summary<-summary(lm(evaluation~beauty,
+ beautydata))
> round(lm_summary$coef,digits=2)
Estimate Std. Error t value Pr(>|t|)
(Intercept) 4.01 0.03 157.21 0
beauty 0.13 0.03 4.13 0
> round(lm_summary$sigma,digits=2)
[1] 0.55
Note that: (a) with uniform priors, we get a Bayesian estimate
equal to the MLE, and (b) we get uncertainty estimates for σ in the
Bayesian model.
58 / 83
Lecture 2: Basics of Bayesian modeling
Example 3: Linear regression
Better looking professors get better teaching evaluations
Posterior distributions of parameters
> densityplot(beauty.res)
[Figure: posterior density plots of beta0, beta1, and sigma.]
59 / 83
Lecture 2: Basics of Bayesian modeling
Example 3: Linear regression
Better looking professors get better teaching evaluations
Posterior distributions of parameters
> op<-par(mfrow=c(1,3),pty="s")
> traceplot(beauty.res)
[Figure: trace plots of beta0, beta1, and sigma over the sampled iterations.]
60 / 83
Lecture 2: Basics of Bayesian modeling
Example 3: Linear regression
Better looking professors get better teaching evaluations
Posterior distributions of parameters
> MCMCsamp<-as.matrix(beauty.res)
> op<-par(mfrow=c(1,3),pty="s")
> hist(MCMCsamp[,1],main=expression(beta[0]),
+ xlab="",freq=FALSE)
> hist(MCMCsamp[,2],main=expression(beta[1]),
+ xlab="",freq=FALSE)
> hist(MCMCsamp[,3],main=expression(sigma),
+ xlab="",freq=FALSE)
[Figure: histograms of the posterior samples of β0, β1, and σ.]
61 / 83
Lecture 2: Basics of Bayesian modeling
Example 3: Linear regression
Homework 1: Ratsā€™ weights
Source: Lunn et al 2012
Five measurements of a ratā€™s weight, in grams, as a function of
some x (say some nutrition-based variable). Note that here we will
center the predictor in the model code.
First we load/enter the data:
> data<-list(x=c(8,15,22,29,36),
+ y=c(177,236,285,350,376))
Then we ļ¬t the linear model using lm, for comparison with the
Bayesian model:
> lm_summary_rats<-summary(fm<-lm(y~x,data))
> round(lm_summary_rats$coef,digits=3)
Estimate Std. Error t value Pr(>|t|)
(Intercept) 123.886 11.879 10.429 0.002
x 7.314 0.492 14.855 0.001
62 / 83
Lecture 2: Basics of Bayesian modeling
Example 3: Linear regression
Homework 2: Ratsā€™ weights (solution in next class)
Fit the following model in JAGS:
> cat("
+ model
+ {
+ ## specify model for data:
+ for(i in 1:5){
+ y[i] ~ dnorm(mu[i],tau)
+ mu[i] <- beta0 + beta1 * (x[i]-mean(x[]))
+ }
+ # priors:
+ beta0 ~ dunif(-500,500)
+ beta1 ~ dunif(-500,500)
+ tau <- 1/sigma2
+ sigma2 <-pow(sigma,2)
+ sigma ~ dunif(0,200)
+ }",
+ file="JAGSmodels/ratsexample2.jag" )
63 / 83
Lecture 2: Basics of Bayesian modeling
Example 4: Multiple regression
Multiple predictors
Source: Baayenā€™s book
We ļ¬t log reading time to Trial id (centered), Native Language, and Sex.
The categorical variables are centered as well.
> lexdec<-read.table("data/lexdec.txt",header=TRUE)
> data<-lexdec[,c(1,2,3,4,5)]
> contrasts(data$NativeLanguage)<-contr.sum(2)
> contrasts(data$Sex)<-contr.sum(2)
> lm_summary_lexdec<-summary(fm<-lm(RT~scale(Trial,scale=F)+
+ NativeLanguage+Sex,data))
> round(lm_summary_lexdec$coef,digits=2)
Estimate Std. Error t value Pr(>|t|)
(Intercept) 6.41 0.01 1061.53 0.00
scale(Trial, scale = F) 0.00 0.00 -2.39 0.02
NativeLanguage1 -0.08 0.01 -14.56 0.00
Sex1 -0.03 0.01 -5.09 0.00
64 / 83
Lecture 2: Basics of Bayesian modeling
Example 4: Multiple regression
Multiple predictors
Preparing the data for JAGS:
> dat<-list(y=data$RT,
+ Trial=(data$Trial-mean(data$Trial)),
+ Lang=data$NativeLanguage,
+ Sex=data$Sex)
65 / 83
Lecture 2: Basics of Bayesian modeling
Example 4: Multiple regression
Multiple predictors
The JAGS model:
> cat("
+ model
+ {
+ ## specify model for data:
+ for(i in 1:1659){
+ y[i] ~ dnorm(mu[i],tau)
+ mu[i] <- beta0 +
+ beta1 * Trial[i]+
+ beta2 * Lang[i] + beta3 * Sex[i]
+ }
+ # priors:
+ beta0 ~ dunif(-10,10)
+ beta1 ~ dunif(-5,5)
+ beta2 ~ dunif(-5,5)
+ beta3 ~ dunif(-5,5)
+ tau <- 1/sigma2
+ sigma2 <-pow(sigma,2)
+ sigma ~ dunif(0,200)
+ }",
+ file="JAGSmodels/multregexample1.jag" )
66 / 83
Lecture 2: Basics of Bayesian modeling
Example 4: Multiple regression
Multiple predictors
> track.variables<-c("beta0","beta1",
+ "beta2","beta3","sigma")
> library(rjags)
> lexdec.mod <- jags.model(
+ file = "JAGSmodels/multregexample1.jag",
+ data=dat,
+ n.chains = 1,
+ n.adapt =2000,
+ quiet=T)
> lexdec.res <- coda.samples( lexdec.mod,
+ var = track.variables,
+ n.iter = 3000)
67 / 83
Lecture 2: Basics of Bayesian modeling
Example 4: Multiple regression
Multiple predictors
> round(summary(lexdec.res)$statistics[,1:2],
+ digits=2)
Mean SD
beta0 6.06 0.03
beta1 0.00 0.00
beta2 0.17 0.01
beta3 0.06 0.01
sigma 0.23 0.00
> round(summary(lexdec.res)$quantiles[,c(1,3,5)],
+ digits=2)
2.5% 50% 97.5%
beta0 6.01 6.06 6.12
beta1 0.00 0.00 0.00
beta2 0.14 0.17 0.19
beta3 0.04 0.06 0.09
sigma 0.22 0.23 0.23
68 / 83
Lecture 2: Basics of Bayesian modeling
Example 4: Multiple regression
Multiple predictors
Here is the lm ļ¬t for comparison (I center all predictors):
> lm_summary_lexdec<-summary(
+ fm<-lm(RT~scale(Trial,scale=F)
+ +NativeLanguage+Sex,
+ lexdec))
> round(lm_summary_lexdec$coef,digits=2)[,1:2]
Estimate Std. Error
(Intercept) 6.29 0.01
scale(Trial, scale = F) 0.00 0.00
NativeLanguageOther 0.17 0.01
SexM 0.06 0.01
Note: We should have ļ¬t a linear mixed model here; I will
return to this later.
69 / 83
Lecture 2: Basics of Bayesian modeling
Example 5: Generalized linear models
GLMs
We consider the model
y_i \sim Binomial(p_i, n_i), \qquad logit(p_i) = \beta_0 + \beta_1 (x_i - \bar{x})   (32)
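Here logit(p) = log(p/(1āˆ’p)); in R the logit and its inverse are available as qlogis and plogis, e.g.:

qlogis(0.25)          ## logit(0.25) = log(0.25/0.75), about -1.1
plogis(qlogis(0.25))  ## the inverse logit maps it back to 0.25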
A simple example is beetle data from Dobson et al 2010:
> beetledata<-read.table("data/beetle.txt",header=T)
> head(beetledata)
dose number killed
1 1.6907 59 6
2 1.7242 60 13
3 1.7552 62 18
4 1.7842 56 28
5 1.8113 63 52
6 1.8369 59 53
70 / 83
Lecture 2: Basics of Bayesian modeling
Example 5: Generalized linear models
GLMs
Prepare data for JAGS:
> dat<-list(x=beetledata$dose-mean(beetledata$dose),
+ n=beetledata$number,
+ y=beetledata$killed)
71 / 83
Lecture 2: Basics of Bayesian modeling
Example 5: Generalized linear models
GLMs
> cat("
+ model
+ {
+ for(i in 1:8){
+ y[i] ~ dbin(p[i],n[i])
+ logit(p[i]) <- beta0 + beta1 * x[i]
+ }
+ # priors:
+ beta0 ~ dunif(0,pow(100,2))
+ beta1 ~ dunif(0,pow(100,2))
+ }",
+ file="JAGSmodels/glmexample1.jag" )
72 / 83
Lecture 2: Basics of Bayesian modeling
Example 5: Generalized linear models
GLMs
Notice use of initial values:
> track.variables<-c("beta0","beta1")
> ## new:
> inits <- list (list(beta0=0,
+ beta1=0))
> glm.mod <- jags.model(
+ file = "JAGSmodels/glmexample1.jag",
+ data=dat,
+ ## new:
+ inits=inits,
+ n.chains = 1,
+ n.adapt =2000, quiet=T)
> glm.res <- coda.samples( glm.mod,
+ var = track.variables,
+ n.iter = 2000)
73 / 83
Lecture 2: Basics of Bayesian modeling
Example 5: Generalized linear models
GLMs
> round(summary(glm.res)$statistics[,1:2],
+ digits=2)
Mean SD
beta0 0.75 0.13
beta1 34.53 2.84
> round(summary(glm.res)$quantiles[,c(1,3,5)],
+ digits=2)
2.5% 50% 97.5%
beta0 0.49 0.75 1.03
beta1 29.12 34.51 40.12
74 / 83
Lecture 2: Basics of Bayesian modeling
Example 5: Generalized linear models
GLMs
The values match up with glm output:
> round(coef(glm(killed/number~scale(dose,scale=F),
+ weights=number,
+ family=binomial(),beetledata)),
+ digits=2)
(Intercept) scale(dose, scale = F)
0.74 34.27
75 / 83
Lecture 2: Basics of Bayesian modeling
Example 5: Generalized linear models
GLMs
> plot(glm.res)
[Figure: trace plots and posterior density plots of beta0 and beta1.]
76 / 83
Lecture 2: Basics of Bayesian modeling
Example 5: Generalized linear models
Homework 3: GLMs
We ļ¬t uniform priors to the coe cients b:
# priors:
beta0 ~ dunif(0,pow(100,2))
beta1 ~ dunif(0,pow(100,2))
Fit the beetle data again, using suitable normal distribution priors
for the coefficients beta0 and beta1. Does the posterior
distribution depend on the prior?
77 / 83
Lecture 2: Basics of Bayesian modeling
Prediction
GLMs: Predicting future/missing data
One important thing we can do is to predict the posterior
distribution of future or missing data.
One easy way to do this is to define how we expect the
predicted data to be generated.
This example revisits the earlier toy example from Lunn et al. on
rat data (slide 62).
> data<-list(x=c(8,15,22,29,36),
+ y=c(177,236,285,350,376))
78 / 83
Lecture 2: Basics of Bayesian modeling
Prediction
GLMs: Predicting future/missing data
> cat("
+ model
+ {
+ ## specify model for data:
+ for(i in 1:5){
+ y[i] ~ dnorm(mu[i],tau)
+ mu[i] <- beta0 + beta1 * (x[i]-mean(x[]))
+ }
+ ## prediction
+ mu45 <- beta0+beta1 * (45-mean(x[]))
+ y45 ~ dnorm(mu45,tau)
+ # priors:
+ beta0 ~ dunif(-500,500)
+ beta1 ~ dunif(-500,500)
+ tau <- 1/sigma2
+ sigma2 <-pow(sigma,2)
+ sigma ~ dunif(0,200)
+ }",
+ file="JAGSmodels/ratsexample2pred.jag" )
79 / 83
Lecture 2: Basics of Bayesian modeling
Prediction
GLMs: Predicting future/missing data
> track.variables<-c("beta0","beta1","sigma","y45")
> rats.mod <- jags.model(
+ file = "JAGSmodels/ratsexample2pred.jag",
+ data=data,
+ n.chains = 1,
+ n.adapt =2000, quiet=T)
> rats.res <- coda.samples( rats.mod,
+ var = track.variables,
+ n.iter = 2000,
+ thin = 1)
80 / 83
Lecture 2: Basics of Bayesian modeling
Prediction
GLMs: Predicting future/missing data
> round(summary(rats.res)$statistics[,1:2],
+ digits=2)
Mean SD
beta0 284.99 11.72
beta1 7.34 1.26
sigma 20.92 16.46
y45 453.87 37.37
81 / 83
Lecture 2: Basics of Bayesian modeling
Prediction
GLMs: Predicting future/missing data
[Figure: posterior densities of beta0, beta1, sigma, and the predicted value y45.]
82 / 83
Lecture 2: Basics of Bayesian modeling
Concluding remarks
Summing up
1 In some cases Bayesā€™ Theorem can be used analytically
(Examples 1, 2)
2 It is relatively easy to define different kinds of Bayesian
models using programming languages like JAGS.
3 Today we saw some examples from linear models (Examples
3-5).
4 Next week: Linear mixed models.
83 / 83
Ā 
Call Girls In Bellandur ā˜Ž 7737669865 šŸ„µ Book Your One night Stand
Call Girls In Bellandur ā˜Ž 7737669865 šŸ„µ Book Your One night StandCall Girls In Bellandur ā˜Ž 7737669865 šŸ„µ Book Your One night Stand
Call Girls In Bellandur ā˜Ž 7737669865 šŸ„µ Book Your One night Stand
Ā 
Call Girls Jalahalli Just Call šŸ‘— 7737669865 šŸ‘— Top Class Call Girl Service Ban...
Call Girls Jalahalli Just Call šŸ‘— 7737669865 šŸ‘— Top Class Call Girl Service Ban...Call Girls Jalahalli Just Call šŸ‘— 7737669865 šŸ‘— Top Class Call Girl Service Ban...
Call Girls Jalahalli Just Call šŸ‘— 7737669865 šŸ‘— Top Class Call Girl Service Ban...
Ā 
Digital Advertising Lecture for Advanced Digital & Social Media Strategy at U...
Digital Advertising Lecture for Advanced Digital & Social Media Strategy at U...Digital Advertising Lecture for Advanced Digital & Social Media Strategy at U...
Digital Advertising Lecture for Advanced Digital & Social Media Strategy at U...
Ā 
Thane Call Girls 7091864438 Call Girls in Thane Escort service book now -
Thane Call Girls 7091864438 Call Girls in Thane Escort service book now -Thane Call Girls 7091864438 Call Girls in Thane Escort service book now -
Thane Call Girls 7091864438 Call Girls in Thane Escort service book now -
Ā 
Abortion pills in Doha Qatar (+966572737505 ! Get Cytotec
Abortion pills in Doha Qatar (+966572737505 ! Get CytotecAbortion pills in Doha Qatar (+966572737505 ! Get Cytotec
Abortion pills in Doha Qatar (+966572737505 ! Get Cytotec
Ā 
Probability Grade 10 Third Quarter Lessons
Probability Grade 10 Third Quarter LessonsProbability Grade 10 Third Quarter Lessons
Probability Grade 10 Third Quarter Lessons
Ā 
Call Girls Bannerghatta Road Just Call šŸ‘— 7737669865 šŸ‘— Top Class Call Girl Ser...
Call Girls Bannerghatta Road Just Call šŸ‘— 7737669865 šŸ‘— Top Class Call Girl Ser...Call Girls Bannerghatta Road Just Call šŸ‘— 7737669865 šŸ‘— Top Class Call Girl Ser...
Call Girls Bannerghatta Road Just Call šŸ‘— 7737669865 šŸ‘— Top Class Call Girl Ser...
Ā 
Vip Mumbai Call Girls Marol Naka Call On 9920725232 With Body to body massage...
Vip Mumbai Call Girls Marol Naka Call On 9920725232 With Body to body massage...Vip Mumbai Call Girls Marol Naka Call On 9920725232 With Body to body massage...
Vip Mumbai Call Girls Marol Naka Call On 9920725232 With Body to body massage...
Ā 
Jual Obat Aborsi Surabaya ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...
Jual Obat Aborsi Surabaya ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...Jual Obat Aborsi Surabaya ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...
Jual Obat Aborsi Surabaya ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...
Ā 
Call Girls In Hsr Layout ā˜Ž 7737669865 šŸ„µ Book Your One night Stand
Call Girls In Hsr Layout ā˜Ž 7737669865 šŸ„µ Book Your One night StandCall Girls In Hsr Layout ā˜Ž 7737669865 šŸ„µ Book Your One night Stand
Call Girls In Hsr Layout ā˜Ž 7737669865 šŸ„µ Book Your One night Stand
Ā 

Lecture 2

  • 9. Lecture 2: Basics of Bayesian modeling Some preliminaries: Probability distributions Random variables Probability density functions (continuous case) or probability mass functions (discrete case) are functions that assign probabilities or relative frequencies to all events in a sample space. The expression X ∼ f(Ā·) (4) means that the random variable X has pdf/pmf f(Ā·). For example, if we say that X ∼ Normal(μ, σ^2), we are assuming that the pdf is f(x) = (1/√(2πσ^2)) exp[āˆ’(x āˆ’ μ)^2 / (2σ^2)] (5) 9 / 83
  • 10. Lecture 2: Basics of Bayesian modeling Some preliminaries: Probability distributions Random variables We also need a cumulative distribution function or cdf because, in the continuous case, P(X = some point value) is zero and we therefore need a way to talk about P(X in a specific range). cdfs serve that purpose. In the continuous case, the cdf or distribution function is defined as: P(x < k) = F(x < k) = the area under the curve to the left of k (6) For example, suppose X ∼ Normal(600, 50). 10 / 83
  • 11. Lecture 2: Basics of Bayesian modeling Some preliminaries: Probability distributions Random variables We can ask for Prob(X < 600): > pnorm(600,mean=600,sd=sqrt(50)) [1] 0.5 We can ask for the quantile that has 50% of the probability to the left of it: > qnorm(0.5,mean=600,sd=sqrt(50)) [1] 600 . . . or to the right of it: > qnorm(0.5,mean=600,sd=sqrt(50),lower.tail=FALSE) [1] 600 11 / 83
  • 12. Lecture 2: Basics of Bayesian modeling Some preliminaries: Probability distributions Random variables We can also calculate the probability that X lies between 590 and 610: Prob(590 < X < 610): > pnorm(610,mean=600,sd=sqrt(50))-pnorm(590,mean=600,sd=sqrt(50)) [1] 0.8427008 12 / 83
  • 13. Lecture 2: Basics of Bayesian modeling Some preliminaries: Probability distributions Random variables Another way to compute area under the curve is by simulation: > x<-rnorm(10000,mean=600,sd=sqrt(50)) > ## proportion of cases where > ## x is less than 590: > mean(x<590) [1] 0.0776 > ## theoretical value: > pnorm(590,mean=600,sd=sqrt(50)) [1] 0.0786496 We will be doing this a lot. 13 / 83
  • 14. Lecture 2: Basics of Bayesian modeling Some preliminaries: Probability distributions Random variables E.g., in linguistics we take as continuous random variables: 1 reading time: Here the random variable (RV) X has possible values w ranging from 0 ms to some upper bound b ms (or maybe unbounded?), and the RV X maps each possible value w to the corresponding number (0 to 0 ms, 1 to 1 ms, etc.). 2 acceptability ratings (technically not correct; but people generally treat ratings as continuous, at least in psycholinguistics) 3 EEG signals In this course, due to time constraints, we will focus almost exclusively on reading time data (eye-tracking and self-paced reading). 14 / 83
  • 15. Lecture 2: Basics of Bayesian modeling Some preliminaries: Probability distributions Normal random variables We will also focus mostly on the normal distribution. f_X(x) = (1/(σ√(2π))) exp[āˆ’(x āˆ’ μ)^2 / (2σ^2)], āˆ’āˆž < x < ∞. (7) It is conventional to write X ∼ N(μ, σ^2). Important note: The normal distribution is represented differently in different programming languages: 1 R: dnorm(mean,sigma) 2 JAGS: dnorm(mean,precision) where precision = 1/variance 3 Stan: normal(mean,sigma) I apologize in advance for all the confusion this will cause. 15 / 83
  • 16. Lecture 2: Basics of Bayesian modeling Some preliminaries: Probability distributions Normal random variables [Figure: the normal density N(0,1).] 16 / 83
  • 17. Lecture 2: Basics of Bayesian modeling Some preliminaries: Probability distributions Normal random variables Standard or unit normal random variable: If X is normally distributed with parameters μ and σ^2, then Z = (X āˆ’ μ)/σ is normally distributed with parameters 0, 1. We conventionally write Φ(x) for the CDF: Φ(x) = (1/√(2π)) āˆ«_{āˆ’āˆž}^{x} exp(āˆ’y^2/2) dy, where y = (x āˆ’ μ)/σ (8) In R, we can type pnorm(x) to find out Φ(x). Suppose x = āˆ’2: > pnorm(-2) [1] 0.02275013 17 / 83
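A quick check of the standardization result in R (a minimal sketch; the numbers are only illustrative):

mu <- 600; sigma <- sqrt(50); x <- 590
pnorm(x, mean = mu, sd = sigma)  ## area to the left of x under N(mu, sigma^2)
pnorm((x - mu) / sigma)          ## the same area under the standard normal N(0,1)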
  • 18. Lecture 2: Basics of Bayesian modeling Some preliminaries: Probability distributions Normal random variables If Z is a standard normal random variable (SNRV) then P{Z ≤ āˆ’x} = P{Z > x}, āˆ’āˆž < x < ∞ (9) We can check this with R: > ##P(Z < -x): > pnorm(-2) [1] 0.02275013 > ##P(Z > x): > pnorm(2,lower.tail=FALSE) [1] 0.02275013 18 / 83
  • 19. Lecture 2: Basics of Bayesian modeling Some preliminaries: Probability distributions Normal random variables Although the following expression looks scary: Φ(x) = (1/√(2π)) āˆ«_{āˆ’āˆž}^{x} exp(āˆ’y^2/2) dy, where y = (x āˆ’ μ)/σ (10) all it is saying is ā€œfind the area under the normal curve, ranging from āˆ’āˆž to xā€. Always visualize! Φ(āˆ’2): [Figure: the standard normal density with the area Φ(āˆ’2) shaded.] 19 / 83
  • 20. Lecture 2: Basics of Bayesian modeling Some preliminaries: Probability distributions Normal random variables Since Z = (X āˆ’ μ)/σ is an SNRV whenever X is normally distributed with parameters μ and σ^2, the CDF of X can be expressed as: F_X(a) = P{X ≤ a} = P((X āˆ’ μ)/σ ≤ (a āˆ’ μ)/σ) = Φ((a āˆ’ μ)/σ) (11) Practical application: Suppose you know that X ∼ N(μ, σ^2), and you know μ but not σ. If you know that a 95% confidence interval is [āˆ’q, +q], then you can work out σ by a. computing the Z ∼ N(0,1) that has 2.5% of the area to its right: > round(qnorm(0.025,lower.tail=FALSE),digits=2) [1] 1.96 b. solving for σ in Z = (q āˆ’ μ)/σ. We will use this fact in Homework 0. 20 / 83
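A minimal R sketch of steps (a) and (b), with made-up numbers (deliberately not the Homework 0 values): suppose mu = 0 and the 95% interval is [-10, 10], so q = 10.

mu <- 0
q <- 10                                ## hypothetical upper limit of the 95% interval
z <- qnorm(0.025, lower.tail = FALSE)  ## 1.96
sigma <- (q - mu) / z                  ## solve Z = (q - mu)/sigma for sigma
sigma
mu + c(-1, 1) * z * sigma              ## recovers the interval [-10, 10]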
  • 21. Lecture 2: Basics of Bayesian modeling Some preliminaries: Probability distributions Normal random variables Summary of useful commands: ## pdf of normal: dnorm(x, mean = 0, sd = 1) ## compute area under the curve: pnorm(q, mean = 0, sd = 1) ## find out the quantile that has ## area (probability) p under the curve: qnorm(p, mean = 0, sd = 1) ## generate normally distributed data of size n: rnorm(n, mean = 0, sd = 1) 21 / 83
  • 22. Lecture 2: Basics of Bayesian modeling Some preliminaries: Probability distributions Likelihood function (Normal distribution) Letā€™s assume that we have generated a data point from a particular normal distribution: x ∼ N(μ = 600, σ^2 = 50). > x<-rnorm(1,mean=600,sd=sqrt(50)) Given x, and different values of μ, we can determine which μ is most likely to have generated x. You can eyeball the result: [Figure: dnorm(x, mean=mu, sd=sqrt(50)) plotted against mu.] 22 / 83
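The code for this likelihood plot is not shown on the slide; a minimal sketch of one way to produce it:

x <- rnorm(1, mean = 600, sd = sqrt(50))   ## one simulated data point
mu <- seq(400, 800, by = 0.1)              ## candidate values of mu
plot(mu, dnorm(x, mean = mu, sd = sqrt(50)),
     type = "l", ylab = "likelihood")      ## the curve peaks near the observed x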
  • 23. Lecture 2: Basics of Bayesian modeling Some preliminaries: Probability distributions Likelihood function Suppose that we had generated 10 independent values of x: > x<-rnorm(10,mean=600,sd=sqrt(50)) We can plot the likelihood of each of the xā€™s that the mean of the Normal distribution that generated the data is μ, for different values of μ: > ## mu = 500 > dnorm(x,mean=500,sd=sqrt(50)) [1] 2.000887e-48 6.542576e-43 3.383346e-31 1.423345e-46 7. [6] 1.686776e-41 3.214674e-39 2.604492e-40 6.892547e-50 1. Since each of the xā€™s are independently generated, the total likelihood of the 10 xā€™s is: f(x1) Ɨ f(x2) Ɨ Ā·Ā·Ā· Ɨ f(x10) (12) for some μ in f(Ā·) = Normal(μ, 50). 23 / 83
  • 24. Lecture 2: Basics of Bayesian modeling Some preliminaries: Probability distributions Likelihood function Itā€™s computationally easier to just take logs and sum them up (log likelihood): log f(x1) + log f(x2) + Ā·Ā·Ā· + log f(x10) (13) > ## mu = 500 > sum(dnorm(x,mean=500,sd=sqrt(50),log=TRUE)) [1] -986.1017 24 / 83
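Why we work on the log scale rather than multiplying raw densities: the product of many small densities underflows very quickly. A small sketch:

x <- rnorm(10, mean = 600, sd = sqrt(50))
prod(dnorm(x, mean = 500, sd = sqrt(50)))             ## so small it underflows to 0
sum(dnorm(x, mean = 500, sd = sqrt(50), log = TRUE))  ## the same information, on a stable log scale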
  • 25. Lecture 2: Basics of Bayesian modeling Some preliminaries: Probability distributions (Log) likelihood function We can now plot, for different values of μ, the likelihood that each of the μ generated the 10 data points: > mu<-seq(400,800,by=0.1) > liks<-rep(NA,length(mu)) > for(i in 1:length(mu)){ + liks[i]<-sum(dnorm(x,mean=mu[i],sd=sqrt(50),log=TRUE)) + } > plot(mu,liks,type="l") [Figure: the log likelihood liks plotted against mu.] 25 / 83
  • 26. Lecture 2: Basics of Bayesian modeling Some preliminaries: Probability distributions (Log) likelihood function 1 Itā€™s intuitively clear that weā€™d probably want to declare the value of μ that brings us to the ā€œhighestā€ point in this figure. 2 This is the maximum likelihood estimate, MLE. 3 Practical implication: In frequentist statistics, our data vector x is assumed to be X ∼ N(μ, σ^2), and we attempt to figure out the MLE, i.e., the estimates of μ and σ^2 that would maximize the likelihood. 4 In Bayesian models, when we assume a uniform prior, we will get an estimate of the parameters which coincides with the MLE (examples coming soon). 26 / 83
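As a sanity check of point 3, the MLE of mu can be found numerically; a sketch (σ^2 = 50 treated as known; negll is just a name used here):

x <- rnorm(10, mean = 600, sd = sqrt(50))
negll <- function(mu) -sum(dnorm(x, mean = mu, sd = sqrt(50), log = TRUE))
optimize(negll, interval = c(400, 800))$minimum  ## numerical MLE of mu
mean(x)                                          ## the analytical MLE: the sample mean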
  • 27. Lecture 2: Basics of Bayesian modeling Some preliminaries: Probability distributions Bayesian modeling examples Next, I will work through five relatively simple examples that use Bayesā€™ Theorem. 1 Example 1: Proportions 2 Example 2: Normal distribution 3 Example 3: Linear regression with one predictor 4 Example 4: Linear regression with multiple predictors 5 Example 5: Generalized linear models example (binomial link). 27 / 83
  • 28. Lecture 2: Basics of Bayesian modeling Example 1: Proportions Proportions 1 Recall the binomial distribution: Let X: no. successes in n trials. We generally assume that X ∼ Binomial(n, θ), θ unknown. 2 Suppose we have 46 successes out of 100. We generally use the empirically observed proportion 46/100 as our estimate of θ. I.e., we assume that the generating distribution is X ∼ Binomial(n = 100, θ = .46). 3 This is because, for all possible values of θ, going from 0 to 1, 0.46 has the highest likelihood. 28 / 83
  • 29. Lecture 2: Basics of Bayesian modeling Example 1: Proportions Proportions > dbinom(x=46,size=100,0.4) [1] 0.03811036 > dbinom(x=46,size=100,0.46) [1] 0.07984344 > dbinom(x=46,size=100,0.5) [1] 0.0579584 > dbinom(x=46,size=100,0.6) [1] 0.001487007 Actually, we could take a whole range of values of θ from 0 to 1 and plot the resulting probabilities. 29 / 83
  • 30. Lecture 2: Basics of Bayesian modeling Example 1: Proportions Proportions > theta<-seq(0,1,by=0.01) > plot(theta,dbinom(x=46,size=100,theta), + xlab=expression(theta),type="l") [Figure: dbinom(x=46, size=100, theta) plotted against theta.] This is the likelihood function for the binomial distribution, and we will write it as f(data | θ). It is a function of θ. 30 / 83
  • 31. Lecture 2: Basics of Bayesian modeling Example 1: Proportions Proportions Since Binomial(x; n, θ) = (n choose x) θ^x (1 āˆ’ θ)^(n āˆ’ x), we can see that: f(data | θ) ∝ θ^46 (1 āˆ’ θ)^54 (14) We are now going to use Bayesā€™ theorem to work out the posterior distribution of θ given the data: f(θ | data) [posterior] ∝ f(data | θ) [likelihood] Ɨ f(θ) [prior] (15) All thatā€™s missing here is the prior distribution f(θ). So letā€™s try to define a prior for θ. 31 / 83
  • 32. Lecture 2: Basics of Bayesian modeling Example 1: Proportions Proportions To define a prior for θ, we will use a distribution called the Beta distribution, which takes two parameters, a and b. We can plot some Beta distributions to get a feel for what these parameters do. We are going to plot 1 Beta(a=2,b=2) 2 Beta(a=3,b=3) 3 Beta(a=6,b=6) 4 Beta(a=10,b=10) 32 / 83
  • 33. Lecture 2: Basics of Bayesian modeling Example 1: Proportions Proportions [Figure: Beta densities for a=2,b=2; a=3,b=3; a=6,b=6; a=60,b=60.] 33 / 83
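The plotting code for this figure is not shown; a sketch that produces a similar plot:

theta <- seq(0, 1, by = 0.01)
plot(theta, dbeta(theta, 2, 2), type = "l", ylim = c(0, 9),
     xlab = expression(theta), ylab = "density")
lines(theta, dbeta(theta, 3, 3), lty = 2)
lines(theta, dbeta(theta, 6, 6), lty = 3)
lines(theta, dbeta(theta, 60, 60), lty = 4)
legend("topright", lty = 1:4,
       legend = c("a=2,b=2", "a=3,b=3", "a=6,b=6", "a=60,b=60"))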
  • 34. Lecture 2: Basics of Bayesian modeling Example 1: Proportions Proportions Each successive density expresses increasing certainty about θ being centered around 0.5; notice that the spread about 0.5 is decreasing as a and b increase. 1 Beta(a=2,b=2) 2 Beta(a=3,b=3) 3 Beta(a=6,b=6) 4 Beta(a=10,b=10) 34 / 83
  • 35. Lecture 2: Basics of Bayesian modeling Example 1: Proportions Proportions 1 If we donā€™t have much prior information, we could use a = b = 1; this gives us a uniform prior; this is called an uninformative prior or non-informative prior. 2 If we have a lot of prior knowledge and/or a strong belief that θ has a particular value, we can use larger a, b to reflect our greater certainty about the parameter. 3 You can think of the parameter a as referring to the number of successes, and the parameter b to the number of failures. So the beta distribution can be used to define the prior distribution of θ. 35 / 83
  • 36. Lecture 2: Basics of Bayesian modeling Example 1: Proportions Proportions Just for the sake of illustration, letā€™s take four different beta priors, each reflecting increasing certainty. 1 Beta(a=2,b=2) 2 Beta(a=3,b=3) 3 Beta(a=6,b=6) 4 Beta(a=21,b=21) Each (except perhaps the first) reflects a belief that θ = 0.5, with varying degrees of uncertainty. Note an important fact: Beta(θ | a, b) ∝ θ^(a āˆ’ 1) (1 āˆ’ θ)^(b āˆ’ 1). This is because the Beta distribution is: f(θ | a, b) = [Γ(a + b) / (Γ(a) Γ(b))] θ^(a āˆ’ 1) (1 āˆ’ θ)^(b āˆ’ 1) 36 / 83
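The normalizing constant is the ratio of gamma functions, and this can be checked numerically in R (the values of a, b, and theta below are arbitrary):

a <- 6; b <- 6; theta <- 0.3
dbeta(theta, a, b)                          ## built-in Beta density
gamma(a + b) / (gamma(a) * gamma(b)) *
  theta^(a - 1) * (1 - theta)^(b - 1)       ## same value from the formula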
  • 37. Lecture 2: Basics of Bayesian modeling Example 1: Proportions Proportions Now we just need to plug in the likelihood and the prior to get the posterior: f(θ | data) ∝ f(data | θ) f(θ) (16) The four corresponding posterior distributions would be as follows (I hope I got the sums right!). 37 / 83
  • 38. Lecture 2: Basics of Bayesian modeling Example 1: Proportions Proportions f(θ | data) ∝ [θ^46 (1 āˆ’ θ)^54][θ^(2āˆ’1) (1 āˆ’ θ)^(2āˆ’1)] = θ^47 (1 āˆ’ θ)^55 (17) f(θ | data) ∝ [θ^46 (1 āˆ’ θ)^54][θ^(3āˆ’1) (1 āˆ’ θ)^(3āˆ’1)] = θ^48 (1 āˆ’ θ)^56 (18) f(θ | data) ∝ [θ^46 (1 āˆ’ θ)^54][θ^(6āˆ’1) (1 āˆ’ θ)^(6āˆ’1)] = θ^51 (1 āˆ’ θ)^59 (19) f(θ | data) ∝ [θ^46 (1 āˆ’ θ)^54][θ^(21āˆ’1) (1 āˆ’ θ)^(21āˆ’1)] = θ^66 (1 āˆ’ θ)^74 (20) 38 / 83
  • 39. Lecture 2: Basics of Bayesian modeling Example 1: Proportions Proportions 1 We can now visualize each of these triplets of priors, likelihoods and posteriors. 2 Note that I use the beta to model the likelihood because this allows me to visualize all three (prior, lik., posterior) in the same plot. 3 I first show the plot just for the prior θ ∼ Beta(a = 2, b = 2). 39 / 83
  • 40. Lecture 2: Basics of Bayesian modeling Example 1: Proportions Proportions Beta(2,2) prior: posterior is shifted just a bit to the right compared to the likelihood [Figure: prior, likelihood, and posterior densities.] 40 / 83
  • 41. Lecture 2: Basics of Bayesian modeling Example 1: Proportions Proportions Beta(6,6) prior: Posterior shifts even more towards the prior [Figure: prior, likelihood, and posterior densities.] 41 / 83
  • 42. Lecture 2: Basics of Bayesian modeling Example 1: Proportions Proportions Beta(21,21) prior: Posterior shifts even more towards the prior [Figure: prior, likelihood, and posterior densities.] 42 / 83
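The code behind these triplet plots is not shown; a sketch of one way to draw the Beta(2,2) case, representing the likelihood (up to a constant) as a Beta(47,55) density and the posterior as Beta(48,56):

theta <- seq(0, 1, by = 0.001)
prior     <- dbeta(theta, 2, 2)
lik       <- dbeta(theta, 47, 55)  ## proportional to theta^46 (1 - theta)^54
posterior <- dbeta(theta, 48, 56)  ## Beta(2,2) prior combined with 46 successes, 54 failures
plot(theta, posterior, type = "l", xlab = "X", ylab = "density")
lines(theta, lik, lty = 2)
lines(theta, prior, lty = 3)
legend("topright", lty = 1:3, legend = c("post", "lik", "prior"))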
  • 43. Lecture 2: Basics of Bayesian modeling Example 1: Proportions Proportions Basically, the posterior is a compromise between the prior and the likelihood. 1 When the prior has high uncertainty or we have a lot of data, the likelihood will dominate. 2 When the prior has high certainty (like the Beta(21,21) case), and we have very little data, then the prior will dominate. So, Bayesian methods are particularly important when you have little data but a whole lot of expert knowledge. But they are also useful for standard psycholinguistic research, as I hope to demonstrate. 43 / 83
  • 44. Lecture 2: Basics of Bayesian modeling Example 1: Proportions An example application (This exercise is from a Copenhagen course I took in 2012. I quote it almost verbatim). ā€œThe French mathematician Pierre-Simon Laplace (1749-1827) was the first person to show definitively that the proportion of female births in the French population was less than 0.5, in the late 18th century, using a Bayesian analysis based on a uniform prior distribution. Suppose you were doing a similar analysis but you had more definite prior beliefs about the ratio of male to female births. In particular, if θ represents the proportion of female births in a given population, you are willing to place a Beta(100,100) prior distribution on θ. 1 Show that this means you are more than 95% sure that θ is between 0.4 and 0.6, although you are ambivalent as to whether it is greater or less than 0.5. 2 Now you observe that out of a random sample of 1,000 births, 511 are boys. What is your posterior probability that θ > 0.5?ā€ 44 / 83
  • 45. Lecture 2: Basics of Bayesian modeling Example 1: Proportions An example application Show that this means you are more than 95% sure that θ is between 0.4 and 0.6, although you are ambivalent as to whether it is greater or less than 0.5. Prior: Beta(a=100,b=100) > round(qbeta(0.025,shape1=100,shape2=100),digits=1) [1] 0.4 > round(qbeta(0.975,shape1=100,shape2=100),digits=1) [1] 0.6 > ## ambivalent as to whether theta <0.5 or not: > round(pbeta(0.5,shape1=100,shape2=100),digits=1) [1] 0.5 [Exercise: Draw the prior distribution] 45 / 83
  • 46. Lecture 2: Basics of Bayesian modeling Example 1: Proportions An example application Now you observe that out of a random sample of 1,000 births, 511 are boys. What is your posterior probability that θ > 0.5? Prior: Beta(a=100,b=100) Data: 489 girls out of 1000. Posterior: f(θ | data) ∝ [θ^489 (1 āˆ’ θ)^511][θ^(100āˆ’1) (1 āˆ’ θ)^(100āˆ’1)] = θ^588 (1 āˆ’ θ)^610 (21) Since Beta(θ | a, b) ∝ θ^(aāˆ’1) (1 āˆ’ θ)^(bāˆ’1), the posterior is Beta(a=589,b=611). Therefore the posterior probability that θ > 0.5 is: > pbeta(0.5,shape1=589,shape2=611,lower.tail=FALSE) which is roughly 0.26. (Note that qbeta(0.5,shape1=589,shape2=611) gives the posterior median of θ, about 0.49, not this probability.) 46 / 83
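The same posterior probability can be checked by simulating from the Beta(589, 611) posterior; a small sketch:

post <- rbeta(100000, shape1 = 589, shape2 = 611)
mean(post > 0.5)                  ## close to pbeta(0.5, 589, 611, lower.tail = FALSE)
quantile(post, c(0.025, 0.975))   ## a 95% credible interval for theta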
  • 47. Lecture 2: Basics of Bayesian modeling Example 2: Normal distribution Normal distribution The normal distribution is the most frequently used probability model in psycholinguistics. If xĢ„ is the sample mean, the sample size is n, and the sample variance is known to be σ^2, and if the prior on the mean μ is Normal(m, v), then it is pretty easy to derive (see the Lynch textbook) the posterior mean m* and variance v* analytically: v* = 1 / (1/v + n/σ^2), m* = v* (m/v + n xĢ„/σ^2) (22) E[θ | x] = m Ɨ w1/(w1 + w2) + xĢ„ Ɨ w2/(w1 + w2), where w1 = v^(āˆ’1), w2 = (σ^2/n)^(āˆ’1) (23) So: the posterior mean is a weighted mean of the prior mean and the sample mean. 47 / 83
  • 48. Lecture 2: Basics of Bayesian modeling Example 2: Normal distribution Normal distribution 1 The weight w1 is determined by the inverse of the prior variance. 2 The weight w2 is determined by the inverse of the squared standard error of the sample mean. 3 It is common in Bayesian statistics to talk about precision = 1/variance, so that 1 w1 = v^(āˆ’1) = precision_prior 2 w2 = (σ^2/n)^(āˆ’1) = precision_data 48 / 83
  • 49. Lecture 2: Basics of Bayesian modeling Example 2: Normal distribution Normal distribution If w1 is very large compared to w2, then the posterior mean will be determined mainly by the prior mean m: E[θ | x] = m Ɨ w1/(w1 + w2) + xĢ„ Ɨ w2/(w1 + w2), where w1 = v^(āˆ’1), w2 = (σ^2/n)^(āˆ’1) (24) If w2 is very large compared to w1, then the posterior mean will be determined mainly by the sample mean xĢ„: E[θ | x] = m Ɨ w1/(w1 + w2) + xĢ„ Ɨ w2/(w1 + w2), where w1 = v^(āˆ’1), w2 = (σ^2/n)^(āˆ’1) (25) 49 / 83
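A small R sketch of equations (22)-(25); post_normal is just a name used here, and the numbers passed to it are arbitrary illustrations (not the Homework 0 values):

post_normal <- function(m, v, xbar, s2, n) {
  ## m, v: prior mean and variance; xbar: sample mean; s2: known variance; n: sample size
  w1 <- 1 / v              ## prior precision
  w2 <- n / s2             ## data precision
  v_star <- 1 / (w1 + w2)
  m_star <- v_star * (m / v + n * xbar / s2)
  c(mean = m_star, variance = v_star)
}
post_normal(m = 0, v = 1, xbar = 0.5, s2 = 0.25, n = 10)  ## posterior pulled towards xbar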
  • 50. Lecture 2: Basics of Bayesian modeling Example 2: Normal distribution An example application [This is an old exam problem from Sheffield University] Letā€™s say there is a hormone measurement test that yields a numerical value that can be positive or negative. We know the following: The doctorā€™s prior: 75% interval (ā€œpatient healthyā€) is [-0.3,0.3]. Data from patient: x = 0.2, known σ = 0.15. Compute the posterior N(m*, v*). Iā€™ll leave this as Homework 0 (solution next week). Hint: see slide 20. 50 / 83
  • 51. Lecture 2: Basics of Bayesian modeling Example 3: Linear regression Simple linear regression We begin with a simple example. Let the response variable be y_i, i = 1,...,n, and let there be p predictors, x_1i,...,x_pi. Also, let y_i ∼ N(μ_i, σ^2), μ_i = β_0 + Ī£ β_k x_ki (26) (the summation is over the p predictors, i.e., k = 1,...,p). We need to specify a prior distribution for the parameters: β_k ∼ Normal(0, 100^2), log σ ∼ Unif(āˆ’100, 100) (27) 51 / 83
  • 52. Lecture 2: Basics of Bayesian modeling Example 3: Linear regression Better looking professors get better teaching evaluations Source: Gelman and Hill 2007 > beautydata<-read.table("data/beauty.txt",header=T) > ## Note: beauty level is centered. > head(beautydata) beauty evaluation 1 0.2015666 4.3 2 -0.8260813 4.5 3 -0.6603327 3.7 4 -0.7663125 4.3 5 1.4214450 4.4 6 0.5002196 4.2 > ## restate the data as a list for JAGS: > data<-list(x=beautydata$beauty, + y=beautydata$evaluation) 52 / 83
  • 53. Lecture 2: Basics of Bayesian modeling Example 3: Linear regression Better looking professors get better teaching evaluations We literally follow the specification of the linear model given above. We specify the model for the data frame row by row, using a for loop, so that for each dependent variable value y_i (the evaluation score) we specify how we believe it was generated. y_i ∼ Normal(μ[i], σ^2), i = 1,...,463 (28) μ[i] = β_0 + β_1 x_i (Note: the predictor is centered) (29) Define priors on the β's and on σ: β_0 ∼ Uniform(āˆ’10, 10), β_1 ∼ Uniform(āˆ’10, 10) (30) σ ∼ Uniform(0, 100) (31) 53 / 83
  • 54. Lecture 2: Basics of Bayesian modeling Example 3: Linear regression Better looking professors get better teaching evaluations > library(rjags) > cat(" + model + { + ## specify model for data: + for(i in 1:463){ + y[i] ~ dnorm(mu[i],tau) + mu[i] <- beta0 + beta1 * (x[i]) + } + # priors: + beta0 ~ dunif(-10,10) + beta1 ~ dunif(-10,10) + sigma ~ dunif(0,100) + sigma2 <- pow(sigma,2) + tau <- 1/sigma2 + }", + file="JAGSmodels/beautyexample1.jag" ) 54 / 83
  • 55. Lecture 2: Basics of Bayesian modeling Example 3: Linear regression Better looking professors get better teaching evaluations Data from Gelman and Hill, 2007 Some things to note in JAGS syntax: 1 The normal distribution is defined in terms of precision, not variance. 2 ~ means ā€œis generated byā€ 3 <- is a deterministic assignment, like = in mathematics. 4 The model specification is declarative; order does not matter. For example, we could have written the following in any order: sigma ~ dunif(0,100) sigma2 <- pow(sigma,2) tau <- 1/sigma2 55 / 83
  • 56. Lecture 2: Basics of Bayesian modeling Example 3: Linear regression Better looking professors get better teaching evaluations > ## specify which variables you want to examine > ## the posterior distribution of: > track.variables<-c("beta0","beta1","sigma") > ## define model: > beauty.mod <- jags.model( + file = "JAGSmodels/beautyexample1.jag", + data=data, + n.chains = 1, + n.adapt =2000, + quiet=T) > ## sample from posterior: > beauty.res <- coda.samples(beauty.mod, + var = track.variables, + n.iter = 2000, + thin = 1 ) 56 / 83
  • 57. Lecture 2: Basics of Bayesian modeling Example 3: Linear regression Better looking professors get better teaching evaluations > round(summary(beauty.res)$statistics[,1:2],digits=2) Mean SD beta0 4.01 0.03 beta1 0.13 0.03 sigma 0.55 0.02 > round(summary(beauty.res)$quantiles[,c(1,3,5)],digits=2) 2.5% 50% 97.5% beta0 3.96 4.01 4.06 beta1 0.07 0.13 0.19 sigma 0.51 0.55 0.58 57 / 83
  • 58. Lecture 2: Basics of Bayesian modeling Example 3: Linear regression Better looking professors get better teaching evaluations Compare with the standard lm fit: > lm_summary<-summary(lm(evaluation~beauty, + beautydata)) > round(lm_summary$coef,digits=2) Estimate Std. Error t value Pr(>|t|) (Intercept) 4.01 0.03 157.21 0 beauty 0.13 0.03 4.13 0 > round(lm_summary$sigma,digits=2) [1] 0.55 Note that: (a) with uniform priors, we get a Bayesian estimate equal to the MLE, (b) we get uncertainty estimates for σ in the Bayesian model. 58 / 83
  • 59. Lecture 2: Basics of Bayesian modeling Example 3: Linear regression Better looking professors get better teaching evaluations Posterior distributions of parameters > densityplot(beauty.res) [Figure: posterior density plots for beta0, beta1, and sigma.] 59 / 83
  • 60. Lecture 2: Basics of Bayesian modeling Example 3: Linear regression Better looking professors get better teaching evaluations Posterior distributions of parameters > op<-par(mfrow=c(1,3),pty="s") > traceplot(beauty.res) [Figure: trace plots of beta0, beta1, and sigma across iterations.] 60 / 83
  • 61. Lecture 2: Basics of Bayesian modeling Example 3: Linear regression Better looking professors get better teaching evaluations Posterior distributions of parameters > MCMCsamp<-as.matrix(beauty.res) > op<-par(mfrow=c(1,3),pty="s") > hist(MCMCsamp[,1],main=expression(beta[0]), + xlab="",freq=FALSE) > hist(MCMCsamp[,2],main=expression(beta[1]), + xlab="",freq=FALSE) > hist(MCMCsamp[,3],main=expression(sigma), + xlab="",freq=FALSE) [Figure: histograms of the posterior samples of β0, β1, and σ.] 61 / 83
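Once the samples are in a matrix, posterior summaries can also be computed directly; a sketch, assuming the beauty.res object from the preceding slides is available:

MCMCsamp <- as.matrix(beauty.res)   ## rows are posterior draws, columns are parameters
round(colMeans(MCMCsamp), 2)        ## posterior means
round(apply(MCMCsamp, 2, quantile, probs = c(0.025, 0.5, 0.975)), 2)  ## credible intervals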
  • 62. Lecture 2: Basics of Bayesian modeling Example 3: Linear regression Homework 1: Ratsā€™ weights Source: Lunn et al 2012 Five measurements of a ratā€™s weight, in grams, as a function of some x (say some nutrition-based variable). Note that here we will center the predictor in the model code. First we load/enter the data: > data<-list(x=c(8,15,22,29,36), + y=c(177,236,285,350,376)) Then we fit the linear model using lm, for comparison with the Bayesian model: > lm_summary_rats<-summary(fm<-lm(y~x,data)) > round(lm_summary_rats$coef,digits=3) Estimate Std. Error t value Pr(>|t|) (Intercept) 123.886 11.879 10.429 0.002 x 7.314 0.492 14.855 0.001 62 / 83
  • 63. Lecture 2: Basics of Bayesian modeling Example 3: Linear regression Homework 2: Ratsā€™ weights (solution in next class) Fit the following model in JAGS: > cat(" + model + { + ## specify model for data: + for(i in 1:5){ + y[i] ~ dnorm(mu[i],tau) + mu[i] <- beta0 + beta1 * (x[i]-mean(x[])) + } + # priors: + beta0 ~ dunif(-500,500) + beta1 ~ dunif(-500,500) + tau <- 1/sigma2 + sigma2 <-pow(sigma,2) + sigma ~ dunif(0,200) + }", + file="JAGSmodels/ratsexample2.jag" ) 63 / 83
  • 64. Lecture 2: Basics of Bayesian modeling Example 4: Multiple regression Multiple predictors Source: Baayenā€™s book We fit log reading time to Trial id (centered), Native Language, and Sex. The categorical variables are centered as well. > lexdec<-read.table("data/lexdec.txt",header=TRUE) > data<-lexdec[,c(1,2,3,4,5)] > contrasts(data$NativeLanguage)<-contr.sum(2) > contrasts(data$Sex)<-contr.sum(2) > lm_summary_lexdec<-summary(fm<-lm(RT~scale(Trial,scale=F)+ + NativeLanguage+Sex,data)) > round(lm_summary_lexdec$coef,digits=2) Estimate Std. Error t value Pr(>|t|) (Intercept) 6.41 0.01 1061.53 0.00 scale(Trial, scale = F) 0.00 0.00 -2.39 0.02 NativeLanguage1 -0.08 0.01 -14.56 0.00 Sex1 -0.03 0.01 -5.09 0.00 64 / 83
  • 65. Lecture 2: Basics of Bayesian modeling Example 4: Multiple regression Multiple predictors Preparing the data for JAGS: > dat<-list(y=data$RT, + Trial=(data$Trial-mean(data$Trial)), + Lang=data$NativeLanguage, + Sex=data$Sex) 65 / 83
  • 66. Lecture 2: Basics of Bayesian modeling Example 4: Multiple regression Multiple predictors The JAGS model: > cat(" + model + { + ## specify model for data: + for(i in 1:1659){ + y[i] ~ dnorm(mu[i],tau) + mu[i] <- beta0 + + beta1 * Trial[i]+ + beta2 * Lang[i] + beta3 * Sex[i] + } + # priors: + beta0 ~ dunif(-10,10) + beta1 ~ dunif(-5,5) + beta2 ~ dunif(-5,5) + beta3 ~ dunif(-5,5) + tau <- 1/sigma2 + sigma2 <-pow(sigma,2) + sigma ~ dunif(0,200) + }", + file="JAGSmodels/multregexample1.jag" ) 66 / 83
  • 67. Lecture 2: Basics of Bayesian modeling Example 4: Multiple regression Multiple predictors > track.variables<-c("beta0","beta1", + "beta2","beta3","sigma") > library(rjags) > lexdec.mod <- jags.model( + file = "JAGSmodels/multregexample1.jag", + data=dat, + n.chains = 1, + n.adapt =2000, + quiet=T) > lexdec.res <- coda.samples( lexdec.mod, + var = track.variables, + n.iter = 3000) 67 / 83
  • 68. Lecture 2: Basics of Bayesian modeling Example 4: Multiple regression Multiple predictors > round(summary(lexdec.res)$statistics[,1:2], + digits=2) Mean SD beta0 6.06 0.03 beta1 0.00 0.00 beta2 0.17 0.01 beta3 0.06 0.01 sigma 0.23 0.00 > round(summary(lexdec.res)$quantiles[,c(1,3,5)], + digits=2) 2.5% 50% 97.5% beta0 6.01 6.06 6.12 beta1 0.00 0.00 0.00 beta2 0.14 0.17 0.19 beta3 0.04 0.06 0.09 sigma 0.22 0.23 0.23 68 / 83
  • 69. Lecture 2: Basics of Bayesian modeling Example 4: Multiple regression Multiple predictors Here is the lm fit for comparison (I center all predictors): > lm_summary_lexdec<-summary( + fm<-lm(RT~scale(Trial,scale=F) + +NativeLanguage+Sex, + lexdec)) > round(lm_summary_lexdec$coef,digits=2)[,1:2] Estimate Std. Error (Intercept) 6.29 0.01 scale(Trial, scale = F) 0.00 0.00 NativeLanguageOther 0.17 0.01 SexM 0.06 0.01 Note: We should have fit a linear mixed model here; I will return to this later. 69 / 83
  • 70. Lecture 2: Basics of Bayesian modeling Example 5: Generalized linear models GLMs We consider the model y_i ∼ Binomial(p_i, n_i), logit(p_i) = β_0 + β_1 (x_i āˆ’ xĢ„) (32) A simple example is beetle data from Dobson et al 2010: > beetledata<-read.table("data/beetle.txt",header=T) > head(beetledata) dose number killed 1 1.6907 59 6 2 1.7242 60 13 3 1.7552 62 18 4 1.7842 56 28 5 1.8113 63 52 6 1.8369 59 53 70 / 83
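Equation (32) uses the logit link; a minimal sketch of the logit and its inverse (the function names are just labels used here):

logit <- function(p) log(p / (1 - p))        ## maps probabilities in (0,1) to the real line
inv_logit <- function(x) 1 / (1 + exp(-x))   ## maps back to (0,1)
p <- 0.9
logit(p)             ## about 2.2
inv_logit(logit(p))  ## recovers 0.9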
  • 71. Lecture 2: Basics of Bayesian modeling Example 5: Generalized linear models GLMs Prepare data for JAGS: > dat<-list(x=beetledata$dose-mean(beetledata$dose), + n=beetledata$number, + y=beetledata$killed) 71 / 83
  • 72. Lecture 2: Basics of Bayesian modeling Example 5: Generalized linear models GLMs > cat(" + model + { + for(i in 1:8){ + y[i] ~ dbin(p[i],n[i]) + logit(p[i]) <- beta0 + beta1 * x[i] + } + # priors: + beta0 ~ dunif(0,pow(100,2)) + beta1 ~ dunif(0,pow(100,2)) + }", + file="JAGSmodels/glmexample1.jag" ) 72 / 83
  • 73. Lecture 2: Basics of Bayesian modeling Example 5: Generalized linear models GLMs Notice use of initial values: > track.variables<-c("beta0","beta1") > ## new: > inits <- list (list(beta0=0, + beta1=0)) > glm.mod <- jags.model( + file = "JAGSmodels/glmexample1.jag", + data=dat, + ## new: + inits=inits, + n.chains = 1, + n.adapt =2000, quiet=T) > glm.res <- coda.samples( glm.mod, + var = track.variables, + n.iter = 2000) 73 / 83
  • 74. Lecture 2: Basics of Bayesian modeling Example 5: Generalized linear models GLMs > round(summary(glm.res)$statistics[,1:2], + digits=2) Mean SD beta0 0.75 0.13 beta1 34.53 2.84 > round(summary(glm.res)$quantiles[,c(1,3,5)], + digits=2) 2.5% 50% 97.5% beta0 0.49 0.75 1.03 beta1 29.12 34.51 40.12 74 / 83
  • 75. Lecture 2: Basics of Bayesian modeling Example 5: Generalized linear models GLMs The values match up with glm output: > round(coef(glm(killed/number~scale(dose,scale=F), + weights=number, + family=binomial(),beetledata)), + digits=2) (Intercept) scale(dose, scale = F) 0.74 34.27 75 / 83
  • 76. Lecture 2: Basics of Bayesian modeling Example 5: Generalized linear models GLMs > plot(glm.res) [Figure: trace plots and posterior densities of beta0 and beta1.] 76 / 83
  • 77. Lecture 2: Basics of Bayesian modeling Example 5: Generalized linear models Homework 3: GLMs We fit uniform priors to the coefficients β: # priors: beta0 ~ dunif(0,pow(100,2)) beta1 ~ dunif(0,pow(100,2)) Fit the beetle data again, using suitable normal distribution priors for the coefficients beta0 and beta1. Does the posterior distribution depend on the prior? 77 / 83
  • 78. Lecture 2: Basics of Bayesian modeling Prediction GLMs: Predicting future/missing data One important thing we can do is to predict the posterior distribution of future or missing data. One easy way to do this is to define how we expect the predicted data to be generated. This example revisits the earlier toy example from Lunn et al. on rat data (slide 62). > data<-list(x=c(8,15,22,29,36), + y=c(177,236,285,350,376)) 78 / 83
  • 79. Lecture 2: Basics of Bayesian modeling Prediction GLMs: Predicting future/missing data > cat(" + model + { + ## specify model for data: + for(i in 1:5){ + y[i] ~ dnorm(mu[i],tau) + mu[i] <- beta0 + beta1 * (x[i]-mean(x[])) + } + ## prediction + mu45 <- beta0+beta1 * (45-mean(x[])) + y45 ~ dnorm(mu45,tau) + # priors: + beta0 ~ dunif(-500,500) + beta1 ~ dunif(-500,500) + tau <- 1/sigma2 + sigma2 <-pow(sigma,2) + sigma ~ dunif(0,200) + }", + file="JAGSmodels/ratsexample2pred.jag" ) 79 / 83
  • 80. Lecture 2: Basics of Bayesian modeling Prediction GLMs: Predicting future/missing data > track.variables<-c("beta0","beta1","sigma","y45") > rats.mod <- jags.model( + file = "JAGSmodels/ratsexample2pred.jag", + data=data, + n.chains = 1, + n.adapt =2000, quiet=T) > rats.res <- coda.samples( rats.mod, + var = track.variables, + n.iter = 2000, + thin = 1) 80 / 83
  • 81. Lecture 2: Basics of Bayesian modeling Prediction GLMs: Predicting future/missing data > round(summary(rats.res)$statistics[,1:2], + digits=2) Mean SD beta0 284.99 11.72 beta1 7.34 1.26 sigma 20.92 16.46 y45 453.87 37.37 81 / 83
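The posterior samples of y45 can be summarized directly to obtain a prediction interval; a sketch, assuming the rats.res object from the preceding slides is available:

pred <- as.matrix(rats.res)[, "y45"]                 ## posterior predictive draws at x = 45
round(quantile(pred, probs = c(0.025, 0.5, 0.975)))  ## 95% interval and median for the predicted weight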
  • 82. Lecture 2: Basics of Bayesian modeling Prediction GLMs: Predicting future/missing data [Figure: posterior density plots for beta0, beta1, sigma, and the predicted value y45.] 82 / 83
  • 83. Lecture 2: Basics of Bayesian modeling Concluding remarks Summing up 1 In some cases Bayesā€™ Theorem can be used analytically (Examples 1, 2) 2 It is relatively easy to define different kinds of Bayesian models using programming languages like JAGS. 3 Today we saw some examples from linear models (Examples 3-5). 4 Next week: Linear mixed models. 83 / 83